Azure private DNS is a great solution to simplify DNS resolution for cloud resources in Azure. However, chances are you have components in your infrastructure that do not natively integrate with Azure DNS zones. In this post, I will show you how to enable your own DNS solution to resolve names from Azure private DNS zones with CoreDNS on Azure Kubernetes Service.

Table of Contents

  1. Introduction
  2. DNS Resolution in Azure
    1. Azure provided DNS
    2. Azure Private DNS Zones
    3. Custom DNS
  3. Scenario
  4. Solution / Architecture
  5. Setup
    1. Resource Group / VNET
    2. Azure Firewall
    3. Virtual Machines / NAT
    4. Configure Windows DNS Server
    5. Create AKS Cluster
    6. Setup CoreDNS
    7. Final Steps
  6. Conclusion
    1. When to use this?
    2. Make it production ready
    3. Zone Requirements
    4. AKS / CoreDNS

This is a multi-part article with the following parts:

Part 1 - Azure Hybrid DNS Architecture
Part 2 - Azure Hybrid DNS Part 2

Introduction

I will showcase how you can simplify the DNS resolution of Azure Private DNS zones from anywhere in your network using CoreDNS on a central, private AKS cluster.
Adoption of Azure Kubernetes Service (AKS) has increased over the last few months, and since AKS private clusters are globally available, the demand has grown even more.
Private clusters introduce new challenges as well - each private AKS cluster gets deployed with a private Azure DNS zone in which the Kubernetes master API receives its DNS name. This also opens the door to expanding the capabilities of Azure Private DNS zones for other purposes.

DNS Resolution in Azure

In Azure, you can choose from three options for DNS resolution: Azure provided DNS, Azure Private DNS Zones, and custom DNS.
Each of these options has its benefits depending on the goal you want to achieve.

Azure provided DNS

This is the default when you deploy a new VNET. Resolution is done in the Azure backend. It is often not sufficient because you can neither alter the DNS zone names nor manage their lifecycle.

Azure Private DNS Zones

Azure Private DNS zones can be used to resolve names of a specific domain in your VNET. They support features like auto registration for VMs and are a great way to create split-horizon resolution. However, for your VMs to be able to resolve the names of these zones, the zone must be linked to each VNET. This can conflict with auto registration in certain scenarios.
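For reference, creating such a zone and linking it with auto registration enabled takes only a couple of Az.PrivateDns cmdlets - the resource group, zone name, and VNET below are placeholders:

# create a private DNS zone and link it to a VNET (illustrative names)
New-AzPrivateDnsZone -ResourceGroupName 'myRg' -Name 'corp.internal'

# -EnableRegistration turns on auto registration of VM records for this link
$vnet = Get-AzVirtualNetwork -Name 'SERVER' -ResourceGroupName 'myRg'
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName 'myRg' -ZoneName 'corp.internal' -Name 'server-link' -VirtualNetworkId $vnet.Id -EnableRegistration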

Custom DNS

You can use your own DNS servers in Azure. This option offers more flexibility. For this, you need to configure the DNS servers on each VNET.
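Setting the DNS servers is done per VNET - a minimal sketch with a placeholder IP (the same pattern is used again in the final steps of this post):

# point a VNET at your own DNS server (IP is a placeholder)
$vnet = Get-AzVirtualNetwork -Name 'SERVER' -ResourceGroupName 'myRg'
$vnet.DhcpOptions.DnsServers = @('172.20.2.4')
Set-AzVirtualNetwork -VirtualNetwork $vnet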

Scenario

When companies start using cloud services, it is common practice to migrate virtual machines to Azure first and shift workloads to cloud-native services afterwards.
Therefore, some basic infrastructure gets set up in Azure as well - like domain controllers, DNS, VPN, networking and so forth.
I want to take a look at the Domain Controllers and DNS servers and how they are being integrated into Azure.
Most likely, a new AD site has been created and at least two domain controllers have been set up that also serve as DNS servers.
Those DNS servers are configured on all VNETs to resolve names in the entire network.

The following image shows this a little better:

Architecture

This is a reference architecture with a simplified view that does not include all components involved nor is it accurate in terms of how the components are connected - it is supposed to give an overview of the data flow.

When you deploy a private AKS cluster into this environment, a private DNS zone gets created as well and linked to the VNET the cluster was created in.

Architecture with AKS

At this point, the master API of the private AKS cluster akscluster-9123as9.15bf9f2f-f5df-492a-a83b-7e64b597e265.privatelink.westeurope.azmk8s.io can only be resolved (and therefore securely reached) from machines located in the VNET the AKS cluster was deployed to.
Further along, you might want to deploy several AKS clusters, or private DNS zones on their own, and all of them need to be resolvable from anywhere in your network.

Solution / Architecture

In general, to resolve addresses from Azure with your own DNS Server, the DNS request for the particular zone must be forwarded to the virtual IP 168.63.129.16.
For private DNS zones, an additional zone link must be created:

Azure Private DNS Zone Link

The approach I'm proposing uses the DNS server solution that is already in place - in my case a Microsoft DNS server that also acts as a Domain Controller.
This is very common, but any regular DNS solution will work. The DNS server gets a conditional forwarder for the zone that points to CoreDNS running on AKS, which in turn forwards the requests to the virtual IP 168.63.129.16:

Custom DNS Forwarder Architecture

CoreDNS is a very lightweight, plugin based DNS solution that is perfect for this scenario.
The AKS cluster will be placed in the shared network and all private DNS zones will be linked to the shared VNET.
The Windows DNS server will be configured to forward requests for the zone privatelink.westeurope.azmk8s.io.

This architecture is based on the requirement to deploy several private AKS clusters. Each cluster comes with a private DNS zone in which the master API gets exposed. In order to be more dynamic and to eliminate the task of creating or deleting a conditional forwarder whenever a cluster gets provisioned or decommissioned, a single conditional forwarder gets created that points to a central AKS cluster hosting CoreDNS. CoreDNS forwards all DNS requests to the virtual IP 168.63.129.16. When a new private AKS cluster is provisioned, its private DNS zone only needs to be linked to the VNET of the shared AKS cluster and the resolution is established.

Setup

The following steps describe the entire setup of a testing environment. In order to deploy it entirely on Azure, the on-premises part is excluded.

Architecture

Resource Group / VNET

You might want to change some values so they fit into your environment.
The entire code is written to be executed consecutively.

This part creates the following resources:

  • Resource Group
  • UDRs
  • VNETs
  • VNET peerings
$rg = 'aksHybridDnsResolver'
$location = 'westeurope'

# create vnets
$vnets = @(@{'IDM' = '172.20.2.0/24'}, @{'HUB' = '172.20.1.0/24'}, @{'SHARED' = '172.20.3.0/24'}, @{'SERVER' = '172.20.4.0/24'})

$rgs = Get-AzResourceGroup
if($rg -notin $($rgs.ResourceGroupName)){
New-AzResourceGroup -Name $rg -Location $location
}

# create default udr for subnets with route to firewall
$routeTablePublic = New-AzRouteTable -Name 'default-udr' -ResourceGroupName $rg -location $location
$routeTableFirewall = New-AzRouteTable -Name 'firewall-udr' -ResourceGroupName $rg -location $location

foreach ($vnet in $vnets) {
    $subnetName = 'subnet01'
    if ($vnet.Keys -eq 'HUB') {
        $subnetName = 'AzureFirewallSubnet'
    }

    if ($vnet.Keys -in @('HUB','SHARED')) {
        $subnet = New-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $vnet[$vnet.Keys]
    } else {
        $subnet = New-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $vnet[$vnet.Keys] -RouteTableId $routeTablePublic.Id
    }

    New-AzVirtualNetwork -Name $vnet.Keys -ResourceGroupName $rg -Location $location -AddressPrefix $vnet[$vnet.Keys] -Subnet $subnet | Out-Null
}

$hubVnet = Get-AzVirtualNetwork -Name 'HUB' -ResourceGroupName $rg
$idmVnet = Get-AzVirtualNetwork -Name 'IDM' -ResourceGroupName $rg
$sharedVnet = Get-AzVirtualNetwork -Name 'SHARED' -ResourceGroupName $rg
$serverVnet = Get-AzVirtualNetwork -Name 'SERVER' -ResourceGroupName $rg

# create peerings
Add-AzVirtualNetworkPeering -Name "HUB-IDM" -VirtualNetwork $hubVnet -RemoteVirtualNetworkId $idmVnet.Id -BlockVirtualNetworkAccess:$false -AllowForwardedTraffic:$true | Out-Null
Add-AzVirtualNetworkPeering -Name "IDM-HUB" -VirtualNetwork $idmVnet -RemoteVirtualNetworkId $hubVnet.Id -BlockVirtualNetworkAccess:$false -AllowForwardedTraffic:$true | Out-Null

Add-AzVirtualNetworkPeering -Name "HUB-SHARED" -VirtualNetwork $hubVnet -RemoteVirtualNetworkId $sharedVnet.Id -BlockVirtualNetworkAccess:$false -AllowForwardedTraffic:$true | Out-Null
Add-AzVirtualNetworkPeering -Name "SHARED-HUB" -VirtualNetwork $sharedVnet -RemoteVirtualNetworkId $hubVnet.Id -BlockVirtualNetworkAccess:$false -AllowForwardedTraffic:$true | Out-Null

Add-AzVirtualNetworkPeering -Name "HUB-SERVER" -VirtualNetwork $hubVnet -RemoteVirtualNetworkId $serverVnet.Id -BlockVirtualNetworkAccess:$false -AllowForwardedTraffic:$true | Out-Null
Add-AzVirtualNetworkPeering -Name "SERVER-HUB" -VirtualNetwork $serverVnet -RemoteVirtualNetworkId $hubVnet.Id -BlockVirtualNetworkAccess:$false -AllowForwardedTraffic:$true | Out-Null
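As an optional sanity check, you can verify that the peerings ended up in the Connected state before continuing:

# optional check: all peerings should report PeeringState 'Connected'
Get-AzVirtualNetworkPeering -VirtualNetworkName 'HUB' -ResourceGroupName $rg | Select-Object Name, PeeringState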

Azure Firewall

This part creates the following resources:

  • Azure Firewall
  • Network Rule on Firewall
  • Routes on UDR for Firewall, IDM- and Server VNET

For the sake of easy testing, I added only one rule to the firewall that allows absolutely everything - feel free to change it to the values you like 😉

# create Firewall
$fwPip = New-AzPublicIpAddress -Name 'fw-pip' -ResourceGroupName $rg -Location $location -AllocationMethod Static -Sku Standard
$azureFirewall = New-AzFirewall -Name 'FW01' -ResourceGroupName $rg -Location $location -VirtualNetworkName 'HUB' -PublicIpName 'fw-pip'

# create network rule that allows all traffic
$NetRule1 = New-AzFirewallNetworkRule -Name "Allow-everything" -Protocol TCP,ICMP,UDP,Any -SourceAddress '*' -DestinationAddress '*' -DestinationPort '*'
$NetRuleCollection = New-AzFirewallNetworkRuleCollection -Name 'Allow-everything' -Priority 500 -Rule $NetRule1 -ActionType "Allow"

$azureFirewall.NetworkRuleCollections = $NetRuleCollection
Set-AzFirewall -AzureFirewall $azureFirewall

# create route to firewall
Get-AzRouteTable -ResourceGroupName $rg -Name 'default-udr' | Add-AzRouteConfig -Name 'firewall' -AddressPrefix '0.0.0.0/0' -NextHopType "VirtualAppliance" -NextHopIpAddress $($azureFirewall.IpConfigurations.PrivateIpAddress) | Set-AzRouteTable

# create route to Internet
Get-AzRouteTable -ResourceGroupName $rg -Name 'firewall-udr' | Add-AzRouteConfig -Name 'internet' -AddressPrefix '0.0.0.0/0' -NextHopType "Internet" | Set-AzRouteTable
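If you want to double-check the routing, the firewall's private IP and the default route can be inspected like this:

# optional check: the default route should point at the firewall's private IP
$azureFirewall.IpConfigurations.PrivateIpAddress
(Get-AzRouteTable -ResourceGroupName $rg -Name 'default-udr').Routes | Select-Object Name, AddressPrefix, NextHopType, NextHopIpAddress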

Virtual Machines / NAT

This part creates the following resources:

  • Domain Controller
  • additional Server
  • NAT Rules on Firewall

Since no server is publicly accessible, two NAT rules are created on the Azure Firewall to access them using the public IP of the firewall.

# create DC
$VMLocalAdminUser = "lcoadm"
$VMLocalAdminSecurePassword = ConvertTo-SecureString 's3cuRepa$$w0rd123' -AsPlainText -Force

$dcNic = New-AzNetworkInterface -Name 'DC-nic' -ResourceGroupName $rg -Location $location -SubnetId $idmVnet.Subnets.id

$Credential = New-Object System.Management.Automation.PSCredential ($VMLocalAdminUser, $VMLocalAdminSecurePassword)

$VirtualMachine = New-AzVMConfig -VMName 'DC' -VMSize 'Standard_D2s_v3'
$VirtualMachine = Set-AzVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName 'DC' -Credential $Credential -ProvisionVMAgent -EnableAutoUpdate
$VirtualMachine = Add-AzVMNetworkInterface -VM $VirtualMachine -Id $dcNic.Id
$VirtualMachine = Set-AzVMSourceImage -VM $VirtualMachine -PublisherName 'MicrosoftWindowsServer' -Offer 'WindowsServer' -Skus '2019-Datacenter' -Version latest

New-AzVM -ResourceGroupName $rg -Location $location -VM $VirtualMachine -Verbose

# create Windows vm Terminal
$termNic = New-AzNetworkInterface -Name 'TERM-nic' -ResourceGroupName $rg -Location $location -SubnetId $serverVnet.Subnets.id

$VirtualMachine = New-AzVMConfig -VMName 'TERM' -VMSize 'Standard_D2s_v3'
$VirtualMachine = Set-AzVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName 'TERM' -Credential $Credential -ProvisionVMAgent -EnableAutoUpdate
$VirtualMachine = Add-AzVMNetworkInterface -VM $VirtualMachine -Id $termNic.Id
$VirtualMachine = Set-AzVMSourceImage -VM $VirtualMachine -PublisherName 'MicrosoftWindowsServer' -Offer 'WindowsServer' -Skus '2019-Datacenter' -Version latest

New-AzVM -ResourceGroupName $rg -Location $location -VM $VirtualMachine -Verbose

# add DNAT to Firewall to access DC and Term using RDP
$dnatRuleDC = New-AzFirewallNatRule -Name "dc-rdp" -Protocol "TCP" -SourceAddress "*" -DestinationAddress $fwPip.IpAddress -DestinationPort "3389" -TranslatedAddress $dcNic.IpConfigurations.PrivateIpAddress -TranslatedPort "3389"
$dnatRuleTerm = New-AzFirewallNatRule -Name "term-rdp" -Protocol "TCP" -SourceAddress "*" -DestinationAddress $fwPip.IpAddress -DestinationPort "3390" -TranslatedAddress $termNic.IpConfigurations.PrivateIpAddress -TranslatedPort "3389"
$natRuleCollection = New-AzFirewallNatRuleCollection -Name "rdp-in" -Priority 500 -Rule $dnatRuleDC
$natRuleCollection.AddRule($dnatRuleTerm)
$azureFirewall.NatRuleCollections = $natRuleCollection
Set-AzFirewall -AzureFirewall $azureFirewall

Configure Windows DNS Server

At this point, it is time to configure the Windows DNS Server.

  • install DNS role
  • install scoop
  • install azure cli and kubectl (used later)
  • setup DNS Zone
# connect to the server
mstsc /v $fwPip.IpAddress

I like to use scoop to install tools - you can also install the tools manually.
Start a PowerShell as an Administrator and run the following commands:

Install-WindowsFeature DNS -IncludeManagementTools
iwr -useb get.scoop.sh | iex
scoop install azure-cli
scoop install kubectl

Set up the DNS server:

Add-DnsServerForwarder -IPAddress 1.1.1.1 -PassThru
Add-DnsServerPrimaryZone -Name "myDomain.com" -ZoneFile "myDomain.com.dns" -PassThru
Clear-DnsServerCache
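A quick way to confirm the configuration is to list the forwarder and the zone on the server:

# optional check: forwarder and zone should show up
Get-DnsServerForwarder
Get-DnsServerZone -Name 'myDomain.com'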

Create AKS Cluster

Sadly, there is no PowerShell cmdlet to set up a private AKS cluster in an existing VNET.
Therefore, create it using the Azure CLI.

# create service principal
az ad sp create-for-rbac --skip-assignment --name aksSp

# assign owner permission to subscription
az role assignment create --role "Owner" --assignee-object-id '<sp id>' --scope /subscriptions/<subscriptionId> --assignee-principal-type ServicePrincipal

# create private aks cluster
az aks create --name 'sharedaks' --resource-group 'aksHybridDnsResolver' --enable-private-cluster --node-count 2 --max-pods 110 --kubernetes-version '1.16.7' --enable-vmss --enable-cluster-autoscaler --min-count 2 --max-count 3 --service-principal '<sp>' --client-secret '<secret>' --location westeurope --network-plugin 'kubenet' --node-vm-size 'Standard_D3_v2' --nodepool-name 'default' --pod-cidr '172.40.0.0/20' --vnet-subnet-id $sharedVnet.Subnets.id --node-osdisk-size 100 --service-cidr '172.41.0.0/16' --dns-service-ip '172.41.0.10'
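Once the cluster is deployed, you can read its private FQDN - this is the name that later has to be resolvable through the conditional forwarder:

# show the private FQDN of the new cluster
az aks show --name 'sharedaks' --resource-group 'aksHybridDnsResolver' --query 'privateFqdn' --output tsv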

Setup CoreDNS

To set up CoreDNS, we need to establish a connection from the DNS Server to the AKS cluster. Therefore, a temporary VNET link from the private DNS zone of the cluster to the IDM VNET needs to be created:

Install-Module -Name Az.PrivateDns -Force
$privDnsZone = Get-AzPrivateDnsZone -ResourceGroupName "MC_$($rg)_sharedaks_$($location)"
$link = New-AzPrivateDnsVirtualNetworkLink -ZoneName $privDnsZone.name -ResourceGroupName "MC_$($rg)_sharedaks_$($location)" -Name "IDM" -VirtualNetworkId $idmVnet.id

Connect to the DNS server again:

# connect to the server
mstsc /v $fwPip.IpAddress

Open a PowerShell as an administrator and execute the following commands:

# create conditional forwarder
Add-DnsServerConditionalForwarderZone -Name "privatelink.westeurope.azmk8s.io" -MasterServers 172.20.3.100 -PassThru

New-Item -Path 'C:\' -Name temp -ItemType Directory
New-Item -Path 'C:\temp' -Name 'coreDNS.yaml' -ItemType File
notepad 'C:\temp\coreDNS.yaml'

Copy the following content into notepad and save the file afterwards:

---
apiVersion: v1
kind: Service
metadata:
  name: dnslb
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 172.20.3.100
  ports:
    - port: 53
      protocol: UDP
  selector:
    app: dnsforwarder
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: corednsconf
data:
  corefile: |
    .:53 {
        forward . 168.63.129.16
        log
        cache 30
        errors
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dnsforwarder
  labels:
    app: dnsforwarder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dnsforwarder
  template:
    metadata:
      labels:
        app: dnsforwarder
    spec:
      containers:
        - name: coredns
          image: coredns/coredns:1.6.9
          command: ["/coredns"]
          args: ["-conf", "/root/corefile"]
          resources:
            limits:
              memory: 170Mi
              cpu: "100m"
            requests:
              memory: 100Mi
              cpu: "70m"
          volumeMounts:
            - name: corefile
              mountPath: /root/
          ports:
            - containerPort: 53
      volumes:
        - name: corefile
          configMap:
            name: corednsconf

The .yaml file represents the configuration of CoreDNS - it consists of three Kubernetes objects:

  • Service - used to expose the CoreDNS application with an Azure Load Balancer
  • ConfigMap - the configuration of CoreDNS
  • Deployment - deployment of CoreDNS Pods

Now we need to log in to Azure using the Azure CLI and get the kubeconfig to connect to the AKS cluster:

az login
az account set -s <subscriptionId>
az aks get-credentials -n sharedaks -g <resource group name> --admin

Afterwards, we can use kubectl to manage the cluster:

kubectl create namespace dns
kubectl apply -f 'C:\temp\coreDNS.yaml' -n dns

You can check if the deployment was successful using the following commands:

kubectl get pods -n dns
kubectl get svc -n dns

The response should look like this:

kubectl output

It can take a while until the external IP shows up - a new internal Azure Load Balancer gets created in the background.
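If the pods are running but resolution later fails, the CoreDNS logs are the first place to look - the log plugin is enabled in the corefile, so every query shows up there:

kubectl logs -n dns -l app=dnsforwarder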

Final Steps

This part performs the following actions:

  • remove the temporary VNET link
  • add a route to the AKS cluster UDR
  • set the DNS servers on the VNETs
  • restart all VMs
# remove temp VNET link
Remove-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "MC_$($rg)_sharedaks_$($location)" -ZoneName $privDnsZone.name -Name 'IDM'

# add route to cluster udr
Get-AzRouteTable -ResourceGroupName "MC_$($rg)_sharedaks_$($location)" | Add-AzRouteConfig -Name 'firewall' -AddressPrefix '0.0.0.0/0' -NextHopType "VirtualAppliance" -NextHopIpAddress $($azureFirewall.IpConfigurations.PrivateIpAddress) | Set-AzRouteTable

# set dns server to vnet
$dcIp = (Get-AzNetworkInterface -Name 'DC-nic' -ResourceGroupName $rg).IpConfigurations.PrivateIpAddress

$sharedVnet.DhcpOptions.DnsServers = @($dcIp, '168.63.129.16')
Set-AzVirtualNetwork -VirtualNetwork $sharedVnet

$idmVnet.DhcpOptions.DnsServers = @($dcIp)
Set-AzVirtualNetwork -VirtualNetwork $idmVnet

$serverVnet.DhcpOptions.DnsServers = @($dcIp)
Set-AzVirtualNetwork -VirtualNetwork $serverVnet

# restart aks vmss
Get-AzVmss -ResourceGroupName "MC_$($rg)_sharedaks_$($location)" | Restart-AzVmss

# restart all vms
Get-AzVM -ResourceGroupName $rg | Restart-AzVM

Conclusion

Now it's possible to use any client in the network (that has the DNS server configured) to resolve DNS names from the cluster's Azure Private DNS zone.

Custom DNS Forwarder Architecture
# connect to the second server and test the DNS resolution there
mstsc /v "$($fwPip.IpAddress):3390"
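A simple lookup on this server should now be answered by the Windows DNS server via CoreDNS. The FQDN below is a placeholder for the value returned by az aks show --query privateFqdn:

# resolve through the DC (conditional forwarder) and directly against CoreDNS
Resolve-DnsName -Name '<private fqdn of the cluster>'
Resolve-DnsName -Name '<private fqdn of the cluster>' -Server 172.20.3.100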

When a new private AKS cluster is provisioned, the private DNS zone just needs to be linked to the shared VNET and the DNS resolution is established.

If you want to use Azure Private DNS zones for other zones too, you need to create a new conditional forwarder on your DNS server for each zone that points to the CoreDNS load balancer.
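For example, to forward an additional zone through the same CoreDNS instance (the zone name here is just an illustration - remember that it also needs to be linked to the SHARED VNET):

# run on the DNS server: forward another private zone to CoreDNS
Add-DnsServerConditionalForwarderZone -Name 'privatelink.database.windows.net' -MasterServers 172.20.3.100 -PassThru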

When to use this?

Of course, just to forward some DNS requests, this is way too big in terms of administration, know-how, and cost. However, if your organization uses container-based applications, this solution fits perfectly into DevOps processes.
With a central AKS cluster, you are also able to consolidate several applications running on VMs into containers - for instance Elastic, Splunk, HashiCorp Vault, and much more.

Make it production ready

To use this in a production environment, a few additional steps should be taken care of.

Zone Requirements

Based on how your network is set up, you might want to change the conditional forwarder to something that matches the layout. You don’t want your clients in America to resolve names in Europe 😉
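One possible approach, sketched here with placeholder IPs, is to keep a region-specific forwarder per zone so that each region resolves against a CoreDNS instance close to it:

# illustrative only: per-region forwarders pointing to region-local CoreDNS load balancers
Add-DnsServerConditionalForwarderZone -Name 'privatelink.westeurope.azmk8s.io' -MasterServers 172.20.3.100
Add-DnsServerConditionalForwarderZone -Name 'privatelink.eastus.azmk8s.io' -MasterServers 10.50.3.100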

AKS / CoreDNS

For production workloads, Microsoft recommends an AKS cluster with at least three nodes.
The CoreDNS deployment should get more than one replica and affinity/anti-affinity rules so the pods are distributed more evenly across nodes. If you use Helm and CI/CD, you might want to put the CoreDNS deployment into a Helm chart and deploy it with your pipeline of choice.