Govern & configure multi-cloud Kubernetes (AKS, EKS, GKE) from a Single Point of Control in Azure with Azure Arc-enabled Kubernetes - Part 1

Most organizations adopting Kubernetes have more than one Kubernetes cluster running in production. Often, these clusters run on the infrastructure of multiple public cloud providers (such as Azure, AWS or GCP) as well as in on-premises datacenters.

This raises a critical question for DevOps engineers:

"How do we govern, configure and manage multiple Kubernetes Clusters running in multiple Clouds, from a Single Point Of Control?"

Microsoft helps us solve this problem with Azure Arc-enabled Kubernetes.

In this two-part series, I show you how to configure Azure Arc as a control plane and import Kubernetes clusters running on other public cloud providers (such as GCP or AWS), or on your on-premises data center (such as VMware vSphere or Azure Stack HCI), into Azure Arc.

Let's get started!

Table of Contents

Part 1

Part 2

What is Azure Arc-enabled Kubernetes?

Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers (such as GCP or AWS) or clusters running on your on-premise data center (such as VMware vSphere or Azure Stack HCI) to Azure Arc.

How does it work?

It works by projecting your non-Azure and on-premise Kubernetes clusters into the Azure Resource Manager. When you import a Kubernetes cluster that you want to manage, Azure Arc uses Helm to deploy an agent chart on the cluster. The cluster nodes then initiate an outbound communication to the Microsoft Container Registry and pull the images needed to create the Azure Arc agents in the azure-arc namespace.
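Once a cluster has been connected, you can see the result of that Helm-driven agent rollout for yourself. A quick sketch (assumes `kubectl` is installed and your kubeconfig points at the connected cluster):

```shell
# List the Azure Arc agent workloads that the Helm chart deploys
# into the azure-arc namespace on the connected cluster.
kubectl get deployments --namespace azure-arc

# The agent pods should all reach the Running state once the images
# have been pulled from the Microsoft Container Registry.
kubectl get pods --namespace azure-arc
```

If the pods stay in `ImagePullBackOff`, that usually points at the outbound connectivity requirements covered later in this article.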

Azure tracks the cluster as a projection of the customer-managed Kubernetes cluster, not the actual cluster itself. Cluster metadata (such as Kubernetes version, agent version and number of nodes) is surfaced on the Azure Arc-enabled Kubernetes resource.

What are the Key Benefits?

  • Import your resources into a single system so you can organize and inventory them through a variety of Azure scopes, e.g. management groups and resource groups.
  • Set guardrails across all your resources with Azure Policy.
  • Standardize role-based access control (RBAC) across systems.
  • Automate and delegate remediation of incidents and problems.
  • Enforce run-time conformance and audit resources with Azure Policy.
  • Integrate with GitHub, Azure Monitor, Security Center, Update Management, and more.
  • Use common templating for automating configuration and infrastructure as code.
  • Get end-to-end identity for users and resources with Azure Active Directory (Azure AD) and Azure Resource Manager.
  • Implement GitOps for modern application delivery.

What are the firewall requirements?

Azure Arc-enabled Kubernetes does not require any inbound ports on the firewall, but the Azure Arc agents do require outbound communication. Ensure that you allow TCP on ports 443 and 9418 with destination targets from the prerequisite list of network endpoints below:
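As a quick sanity check before onboarding, you can probe outbound HTTPS reachability from a node. The hosts below are a few of the well-known Azure endpoints, shown purely as examples; always check Microsoft's full, region-specific prerequisite list:

```shell
# Probe outbound TCP 443 reachability to a few illustrative Azure
# endpoints using bash's /dev/tcp. The endpoint list here is an
# example only, not the complete prerequisite set.
for host in management.azure.com login.microsoftonline.com mcr.microsoft.com; do
  if timeout 5 bash -c "echo > /dev/tcp/${host}/443" 2>/dev/null; then
    echo "OK   ${host}:443"
  else
    echo "FAIL ${host}:443"
  fi
done
```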


How do you connect a Kubernetes Cluster to Azure Arc?

At a high level, the following steps are involved in connecting a Kubernetes cluster to Azure Arc:

  • Create a Kubernetes cluster on your choice of infrastructure
  • Install the Azure CLI on your workstation
  • Install Helm on your workstation
  • Install the connectedk8s extension
  • Start the Azure Arc registration for your cluster
  • Verify that your cluster connected successfully to Azure Arc
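The steps above map to a handful of Azure CLI commands. A minimal sketch, assuming the Azure CLI, Helm and kubectl are installed and your kubeconfig points at the target cluster; the resource group, cluster name and location are placeholders:

```shell
# Install the connectedk8s CLI extension
az extension add --name connectedk8s

# Register the required resource provider (one-time per subscription)
az provider register --namespace Microsoft.Kubernetes

# Create a resource group to hold the projected cluster resource
# (names and location below are illustrative placeholders)
az group create --name rg-arc-demo --location westeurope

# Start the Azure Arc registration for the current kubeconfig context
az connectedk8s connect --name my-cluster --resource-group rg-arc-demo

# Verify the connection from the Azure side...
az connectedk8s show --name my-cluster --resource-group rg-arc-demo --output table

# ...and from the cluster side
kubectl get pods --namespace azure-arc
```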

Create sample Kubernetes Clusters

For this demonstration, I want to import Kubernetes clusters within Azure as well as external clusters running in AWS, GCP and on my local workstation (on-premises). Since I don't have any existing clusters, I will be creating some from scratch, using:

  • GitHub for the code repository
  • Terraform to define everything as Infrastructure as Code
  • Azure Pipelines to build and release to the target environments in Azure, GCP and AWS

All the code used in this demonstration will be made publicly available on GitHub when PART 2 of this series is published. If you want to be notified, please subscribe.

Cluster Topologies

When designing a Kubernetes Cluster - the key things I take into consideration based on the business requirements are networking, security, identity, management, and monitoring. The end goal is maximum security, high availability and scalability at the lowest cost possible without compromising performance.

For this demonstration, I've designed simple reference architectures for AKS, EKS and GKE incorporating common best practices. These are for demonstration purposes only.

  • AKS
Design Highlights
  - Fully Private cluster (Private Link)
  - Custom Virtual Network
  - Azure Firewall for network protection
  - Nodepool Egress through Azure Firewall
  - Role Based Access Control (RBAC) enabled
  - Azure AD Integration 
  - Network Policy for secure pod traffic
  - Azure CNI for Cluster Networking
  - Container Insights for performance visibility
  - Microsoft Defender for real-time threat protection 
  - ArgoCD for GitOps 
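Several of the AKS highlights translate directly into `az aks create` flags. A hedged sketch (all names, IDs and add-on choices below are illustrative placeholders; the actual Terraform for this series ships with PART 2):

```shell
# Create a private AKS cluster into an existing custom subnet, with
# Azure CNI, network policy, Azure AD + RBAC, egress forced through a
# user-defined route (e.g. Azure Firewall) and Container Insights.
# All names and IDs are illustrative placeholders.
az aks create \
  --name aks-demo \
  --resource-group rg-aks-demo \
  --enable-private-cluster \
  --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/rg-aks-demo/providers/Microsoft.Network/virtualNetworks/vnet-aks/subnets/snet-nodes" \
  --network-plugin azure \
  --network-policy azure \
  --outbound-type userDefinedRouting \
  --enable-aad \
  --enable-azure-rbac \
  --enable-addons monitoring
```

Note that `--outbound-type userDefinedRouting` assumes the subnet's route table already sends 0.0.0.0/0 to the Azure Firewall.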
  • EKS
Design Highlights
  - Highly available spanning two Availability Zones (AZ)
  - Transit Gateway for VPC interconnection
  - Security Groups for EKS node group instance protection
  - Custom VPC configured with private only subnets
  - Ingress via public ALB with WAF in Management VPC
  - Outbound internet access via Managed NAT gateway in management VPC
  - Network Firewall for network protection
  - VPC Endpoints for private access to AWS PaaS
  - Cross-zone load balancing for Ingress
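The private-subnet, multi-AZ portion of the EKS design can be sketched with an eksctl config. This is a rough, hedged sketch only (names, region and zones are placeholders) and does not cover the Transit Gateway, ALB/WAF or Network Firewall pieces, which live outside the cluster itself:

```shell
# Minimal eksctl ClusterConfig sketch: private-only networking with a
# managed node group spanning two AZs. Names/region are illustrative.
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-demo
  region: eu-west-1
privateCluster:
  enabled: true               # fully private cluster endpoint
managedNodeGroups:
  - name: ng-private
    desiredCapacity: 2
    privateNetworking: true   # nodes land in private subnets only
    availabilityZones: ["eu-west-1a", "eu-west-1b"]
EOF

eksctl create cluster -f cluster.yaml
```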
  • GKE
Design Highlights
  - Highly available spanning two zones
  - Private Google Access
  - Firewall rules for GKE node protection
  - Custom VPC configured with private-only subnets
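The GKE highlights map roughly onto a single `gcloud` call. A sketch under the assumption that the custom VPC and subnet already exist; project, names, zones and CIDRs are placeholders:

```shell
# Create a private, regional GKE cluster in a custom VPC, with nodes
# spread across two zones. All names and CIDRs are illustrative.
gcloud container clusters create gke-demo \
  --project my-demo-project \
  --region europe-west1 \
  --node-locations europe-west1-b,europe-west1-c \
  --network vpc-gke \
  --subnetwork snet-gke-private \
  --enable-private-nodes \
  --enable-ip-alias \
  --master-ipv4-cidr 172.16.0.0/28
```

With `--enable-private-nodes`, the nodes get no public IPs and reach Google APIs via Private Google Access on the subnet.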


With all that theory covered, PART 1 ends here. We will continue by wiring it all together in PART 2, where we will:

  • Set up build and release pipelines for the sample Kubernetes clusters
  • Set up a release pipeline stage to start the Azure Arc registration for the clusters
  • Verify that the clusters connected successfully to Azure Arc
  • Set up a release pipeline to apply governance and security guardrails across all clusters
  • Demonstrate GitOps by installing a sample application on the three clusters using ArgoCD
  • and MORE!

Thanks for reading! Subscribe to get notified when PART 2 is published soon!
Jim Musana