Integrating AKS with Azure Firewall

Tufin
10 min read · Apr 9, 2019


By Zvika Gazit

Introduction

Microsoft recently released a firewall service as part of the Azure cloud platform. For a company that targets the enterprise market, this makes a lot of sense. Enterprise security teams often treat firewalls as synonymous with security, because they know them well and rely on them.

At the same time, enterprises are also shifting their applications to a micro-service architecture with Kubernetes.

In this article we will see how to set up an AKS cluster behind an Azure Firewall to control egress and ingress traffic between the Kubernetes cluster and the internet.

But before diving in, let’s understand why Microsoft created this firewall.

Why Firewalls? What’s wrong with security groups?

Cloud developers rely on security groups and IAM policies to secure their apps. While these mechanisms work, they are limited compared to a traditional firewall. For example, they lack application-level traffic inspection capabilities, integrated logging and threat detection.

But the main reason security teams prefer firewalls is separation of duties. A firewall gives the security team a checkpoint that is outside the developers’ reach and allows the security team to maintain control. This may seem to contradict the idea of the cloud, which is all about automation and developer-driven agility. We will later see how to overcome this with security policy automation.

Solution Architecture

There are several alternative architectures for this integration. The main differences are which network entity you expose to the internet — the load balancer, the Azure firewall or both.

The approach I chose is a simple and secure architecture in which the firewall is the only endpoint exposed to the internet, and the load balancer is internal, behind the firewall.

Traffic flow

  1. The firewall performs a destination NAT and forwards the ingress traffic to the internal load balancer’s private IP
  2. The load balancer routes the traffic according to the configured ingress routes defined by the Kubernetes ingress resource

Two “logistical” notes before we begin

  • We’ll use Azure CLI and kubectl for the configuration. This will help streamline the process and ease script creation
  • We’ll prefix every created entity with ${clusterName} to easily identify it as part of this setup. For example: ${clusterName}-firewall-vnet.

Integrating Azure Kubernetes and Firewall

Azure CLI installation

To run the commands in this article, install the Azure CLI. The installation script below is for Ubuntu; see instructions for other platforms here.

sudo apt-get install apt-transport-https lsb-release software-properties-common dirmngr -y
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
sudo tee /etc/apt/sources.list.d/azure-cli.list
sudo apt-key --keyring /etc/apt/trusted.gpg.d/Microsoft.gpg adv \
--keyserver packages.microsoft.com \
--recv-keys BC528686B50D79E339D3721CEB3E94ADBE1229CF
sudo apt-get update && sudo apt-get install azure-cli
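
You can verify the installation with:

az --version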

Step 1 — AKS creation

  1. Sign in to Azure using an account with admin rights on the Azure subscription.
az login

When the sign-in page opens, choose your account and return to the terminal.

2. Create service-principal credentials to be used in your script. For the service-principal name, use the “http://<servicePrincipalName>” format.

az ad sp create-for-rbac --name http://testDeployer --skip-assignment

The command will print output containing the service-principal’s details. Use the output as variables for the next commands.
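
For reference, the output looks roughly like this (illustrative values only; your IDs and password will differ):

{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "testDeployer",
  "name": "http://testDeployer",
  "password": "00000000-0000-0000-0000-000000000000",
  "tenant": "00000000-0000-0000-0000-000000000000"
}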

appId=<appId>
servicePrincipalName="http://testDeployer"
servicePrincipalPassword=<password>
tenantId=<tenantId>

3. Assign a contributor role to the new service-principal.

az role assignment create --assignee ${appId} --role Contributor

4. Login to Azure cloud using the newly created service-principal.

az login --service-principal -u ${servicePrincipalName} \
-p ${servicePrincipalPassword} --tenant ${tenantId}

5. Define variables to be used by the script.

clusterName="az-aks-firewall"
subscriptionId="Firewall"
location="uksouth"
resourceGroup="${clusterName}-group"
numNodes="1" # Number of nodes in AKS cluster
ingressDnsName="gbank" # DNS name attached to the FW's public IP
firewallPrivateIp="172.16.0.4"
ingressPrivateIp="10.240.0.42" # nginx ingress controller "LoadBalancer" IP

6. Create a resource group to contain all Azure resources that are going to be created using this procedure.

az group create --name ${resourceGroup} --location ${location}

7. Create the AKS cluster.

az aks create \
--resource-group "${resourceGroup}" \
--name "${clusterName}" \
--node-count "${numNodes}" \
--service-principal "${servicePrincipalName}" \
--client-secret "${servicePrincipalPassword}" \
--generate-ssh-keys
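
Cluster creation takes a few minutes. Once the command returns, you can confirm the cluster is ready by checking its provisioning state:

az aks show --resource-group "${resourceGroup}" --name "${clusterName}" \
--query provisioningState --output tsv
# Expected output: Succeeded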

Step 2 — Azure firewall creation and configuration

  1. Create a Vnet to be used by the Azure firewall. For the “subnet-name” you must use “AzureFirewallSubnet”.
az network vnet create \
--name "${clusterName}-firewall-vnet" \
--resource-group "${resourceGroup}" \
--address-prefixes "172.16.0.0/22" \
--subnet-name "AzureFirewallSubnet" \
--subnet-prefixes "172.16.0.0/24"

2. Create a public IP to be used by the firewall.
The DNS name we are attaching to the firewall’s public IP will be used by our demo application’s (Gbank) internet clients.
So if you choose “gbank” as the “dns-name”, Azure will treat it as the DNS domain prefix and automatically add “uksouth.cloudapp.azure.com” as the domain suffix, according to the Azure location.

az network public-ip create \
--name "${clusterName}-firewall-publicip" \
--resource-group "${resourceGroup}" \
--allocation-method static \
--dns-name "${ingressDnsName}" \
--sku standard

3. Create the Azure firewall.

az network firewall create \
--name "${clusterName}-firewall" \
--resource-group "${resourceGroup}"
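
Note: depending on your CLI version, the “az network firewall” commands may be delivered as a separate extension. If the command is not recognized, add the extension first:

az extension add --name azure-firewall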

4. The newly created firewall does not have any network configuration yet. To configure networking, attach the firewall to the dedicated Vnet and public IP we’ve just created.

az network firewall ip-config create \
--name "${clusterName}-firewall-ipconfig" \
--resource-group "${resourceGroup}" \
--firewall-name "${clusterName}-firewall" \
--public-ip-address "${clusterName}-firewall-publicip" \
--vnet-name "${clusterName}-firewall-vnet" \
--private-ip-address "${firewallPrivateIp}"

5. Create basic destination NAT (DNAT), network (L4) and application (L7) rules required by the Gbank application.

DNAT rule — Traffic that arrives at the firewall’s public IP will be destination-translated to the NGINX internal load balancer’s IP address, which in our setup is private (10.240.0.42).
First, we’ll get the previously created public IP, because the DNAT rule needs it.

firewallPublicIp=$(az network public-ip list --resource-group "${resourceGroup}" --query "[].[ipAddress]" --output tsv)
az network firewall nat-rule create \
--resource-group "${resourceGroup}" \
--firewall-name "${clusterName}-firewall" \
--collection-name "aks-ingress-dnat-rules" \
--priority "100" \
--action "dnat" \
--name "dnat-to-lb" \
--protocols "TCP" \
--source-addresses "*" \
--destination-addresses "${firewallPublicIp}" \
--destination-ports "80" \
--translated-address "${ingressPrivateIp}" \
--translated-port "80"

Network rule — Egress traffic from the AKS cluster will be allowed to ANY

az network firewall network-rule create \
--resource-group "${resourceGroup}" \
--firewall-name "${clusterName}-firewall" \
--collection-name "aks-l4-allow-egress-rules" \
--priority "100" \
--action "allow" \
--name allow-all \
--protocols "TCP" \
--source-addresses "10.0.0.0/8" \
--destination-addresses "*" \
--destination-ports "*"

Application rule — Internet clients will be allowed to reach the DNS name attached to the firewall’s public IP.
First, we’ll get the FQDN attached to that public IP.

ingressFqdn=$(az network public-ip show --name ${clusterName}-firewall-publicip \
--resource-group "${resourceGroup}" \
--query "dnsSettings.fqdn" --output tsv)
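
For completeness, an application rule referencing this FQDN might look like the following sketch (the collection and rule names here are illustrative placeholders, not part of the original setup):

az network firewall application-rule create \
--resource-group "${resourceGroup}" \
--firewall-name "${clusterName}-firewall" \
--collection-name "aks-l7-allow-rules" \
--priority "100" \
--action "allow" \
--name "allow-ingress-fqdn" \
--protocols "Http=80" \
--source-addresses "*" \
--target-fqdns "${ingressFqdn}"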

Step 3 — Firewall Vnet — AKS Vnet peering

Now that we’ve completed the firewall deployment and configuration, we’ll connect its Vnet to the AKS Vnet (Vnet peering), in order to allow direct routing between them.

Because Azure automatically creates all “compute” entities in a special resource-group (not the one we’ve created), we should get the name of that resource-group (aksResourceGroup).
Using the group’s name we’ll get the AKS Vnet id.
Both AKS Vnet and Firewall Vnet IDs will be used by the Vnet peering.

Note: The local and remote Vnets are in different resource-groups. In that case, we have to use the “id” for “remote-vnet” rather than the “name”.

The first step initiates the peering on the first Vnet, and the second step completes it on the second Vnet.

aksResourceGroup=$(az group list --query "[?contains(id, '${clusterName}_${location}')].[id]" \
--output tsv | awk -F 'resourceGroups/' '{print $2}')
aksVnetName=$(az network vnet list \
--resource-group "${aksResourceGroup}" \
--query "[?contains(name, 'aks-vnet')].name" --output tsv)
aksVnetId=$(az network vnet list \
--resource-group "${aksResourceGroup}" \
--query "[?contains(name, 'aks-vnet')].id" --output tsv)
firewallVnetId=$(az network vnet show \
--resource-group "${resourceGroup}" \
--name "${clusterName}-firewall-vnet" \
--query id --out tsv)
# initiate
az network vnet peering create \
--name "aks-peer-firewall" \
--resource-group "${aksResourceGroup}" \
--vnet-name "${aksVnetName}" \
--remote-vnet "${firewallVnetId}" \
--allow-vnet-access
# Complete
az network vnet peering create \
--name "firewall-peer-aks" \
--resource-group "${resourceGroup}" \
--vnet-name "${clusterName}-firewall-vnet" \
--remote-vnet "${aksVnetId}" \
--allow-vnet-access
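
To confirm the peering is healthy, check its state on either side; the expected value is "Connected":

az network vnet peering show \
--name "aks-peer-firewall" \
--resource-group "${aksResourceGroup}" \
--vnet-name "${aksVnetName}" \
--query peeringState --output tsv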

Step 4 — Configuring default route on the AKS routing table

Now that the firewall Vnet and the AKS Vnet are peered, we can use the firewall’s private IP as the AKS cluster’s next hop toward the internet. To route all AKS egress traffic via the firewall, we’ll define a default route on the AKS routing table.

aksRouteTable=$(az network route-table list \
--resource-group "${aksResourceGroup}" \
--query "[].name" --output tsv)
az network route-table route create \
--name defaultRoute \
--next-hop-type VirtualAppliance \
--resource-group "${aksResourceGroup}" \
--route-table-name "${aksRouteTable}" \
--next-hop-ip-address "${firewallPrivateIp}" \
--address-prefix "0.0.0.0/0"
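
As a quick sanity check that egress now flows through the firewall, you can run a throwaway pod and ask an external service for its source IP; since the firewall SNATs outbound traffic, it should report the firewall's public IP. A sketch, using a public curl image (any image containing curl will do):

kubectl run egress-test --rm -it --restart=Never \
--image=curlimages/curl --command -- curl -s https://ifconfig.me
# Should print the value of ${firewallPublicIp}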

Step 5 — Deploying the demo application on the AKS cluster

For this article we’ll use the Gbank demo application. The following briefly outlines what this application does:

  • Using the Customer portal, a new user signs up to Generic Bank and creates an account in an in-memory Redis DB
  • The Indexer fetches the newly created account from the Redis DB, writes it to a Postgres DB and deletes it from the Redis DB
  • Using the Admin portal, the admin user can get the list of newly created accounts
  • In addition, the Admin portal provides timezone information for three pre-defined locations. To do this, the Admin portal sends a request to the time service, which in turn sends a timezone request to api.timezonedb.com.
Demo application
  1. Clone the Generic bank repo
git clone https://github.com/Tufin/generic-bank.git && cd generic-bank

2. Define the kubectl context and authentication for the new AKS cluster

az aks get-credentials --resource-group ${resourceGroup} --name ${clusterName} --overwrite-existing

3. Create a cluster-admin role binding to allow the currently logged-in user to perform actions on the AKS cluster

kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user=$(az account show --query user.name -o tsv)

4. The only secret required by the Gbank application is an API key for api.timezonedb.com. To get the API key, do the following:

Go to https://timezonedb.com/api, sign up, get the API key and define it as a variable in your terminal.

export TIMEZONEDB_API_KEY=<TIMEZONEDB_API_KEY>

5. Deploy the Gbank demo application using the bash script located in the root of the generic-bank repo.

./deploy.sh
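
Once the script completes, you can check that the application pods came up (listing all namespaces, since the target namespace depends on the deploy script):

kubectl get pods --all-namespaces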

Step 6 — Expose services to the internet using Kubernetes ingress

There is more than one option for exposing Kubernetes services to the internet. In this article we use the most recommended and advanced one: an ingress controller. Two elements provide this functionality:
Ingress controller pod — acts as a reverse proxy to the services that should be reachable by the application’s internet clients.
Ingress resource — used to configure the ingress controller. It may include TLS configuration, prefix-based routing and other L7 settings.

We’ll use nginx as the ingress controller for the Gbank application, and deploy it using helm.

  1. Install the helm client and server
# Helm client installation on Ubuntu; for other platforms see instructions here
sudo curl -sLO https://storage.googleapis.com/kubernetes-helm/helm-v2.13.0-linux-amd64.tar.gz
sudo tar -zxf helm-v2.13.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
# Server (Tiller)
cat << EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF
helm init --service-account tiller
# Check that Tiller is running; it should take a few seconds
kubectl -n kube-system get pods -l name=tiller -o jsonpath='{..status.containerStatuses[*].ready}'

2. Deploy the internal Nginx ingress controller with the pre-defined private IP.

cat <<EOT > internal_nginx_controller.yaml
controller:
  service:
    loadBalancerIP: ${ingressPrivateIp}
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
EOT
helm install stable/nginx-ingress \
--namespace kube-system \
-f internal_nginx_controller.yaml \
--set controller.replicaCount=1
# Check the LoadBalancer IP assignment.
ingressName=$(kubectl -n kube-system get svc `helm list --short`-nginx-ingress-controller -o jsonpath='{..metadata.name}')
ingressIp=$(kubectl -n kube-system get svc ${ingressName} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
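
The internal load balancer IP can take a minute or two to be assigned. A small sanity check once it appears (assuming the variables defined in step 1 are still set in the same shell):

if [ "${ingressIp}" != "${ingressPrivateIp}" ]; then
  echo "Load balancer got ${ingressIp}, expected ${ingressPrivateIp}"
fi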

3. Deploy the ingress resource

# Get the DNS name resolved to the firewall's public IP 
ingressFqdn=$(az network public-ip show --name "${clusterName}-firewall-publicip" --resource-group ${resourceGroup} --query "dnsSettings.fqdn" --output tsv)
# Create the ingress resource yaml
cat <<EOT > ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: ${ingressFqdn}
    http:
      paths:
      - path: /kite/.*
        backend:
          serviceName: kite
          servicePort: 80
      - path: /boa/admin/accounts
        backend:
          serviceName: admin
          servicePort: 80
      - path: /admin/.*
        backend:
          serviceName: admin
          servicePort: 80
      - path: /admin/
        backend:
          serviceName: admin
          servicePort: 80
      - path: /time
        backend:
          serviceName: admin
          servicePort: 80
      - path: /customer/.*
        backend:
          serviceName: customer
          servicePort: 80
      - path: /customer/
        backend:
          serviceName: customer
          servicePort: 80
      - path: /accounts/.*
        backend:
          serviceName: customer
          servicePort: 80
      - path: /balance
        backend:
          serviceName: customer
          servicePort: 80
EOT
# Deploy the ingress resource
kubectl create -f ingress.yaml
# Verify that the DNS name you've defined was attached to the ingress resource.
kubectl get ingress

We’re done!!!

Step 7 — Check the application

  1. Open the customer portal and create a new account.
    Go to http://gbank.uksouth.cloudapp.azure.com/customer/ and sign up.
Gbank Customers portal
  2. Go to http://gbank.uksouth.cloudapp.azure.com/admin/ and open the admin portal, where you can list the created accounts and the timezone info.
Gbank admin portal — Timezones
Gbank admin portal — accounts list
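
If you prefer the terminal, a quick smoke test against the same FQDN confirms that the DNAT rule, the ingress controller and the ingress resource are all in place (assuming ${ingressFqdn} is still set from step 6):

curl -s -o /dev/null -w "%{http_code}\n" "http://${ingressFqdn}/customer/"
curl -s -o /dev/null -w "%{http_code}\n" "http://${ingressFqdn}/admin/"
# Both should print 200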

Security Policy Automation

The steps above explain how to secure an AKS cluster with an Azure firewall. I hope you find it useful and I look forward to your comments.

As mentioned in the introduction, firewalls provide the separation of duties that allows security teams to maintain control. But doesn’t this conflict with the DevOps need for automation and agility?

Here at Tufin, we developed a unique solution to this problem that allows you to benefit from both worlds simultaneously: security and automation. Tufin Orca discovers the application connectivity in the Kubernetes cluster and automatically provisions it to the Azure firewall.
