

My deployment to Azure using Terraform and Flux

To deploy the infrastructure and applications I need to Azure, I use a combination of Terraform and Flux, all running from GitHub Actions workflows. This is only a high-level overview and some details have been excluded or skimmed over for brevity.

Terraform

For me, one of the biggest limitations of Terraform is how bad it is at DRY (i.e. Don’t Repeat Yourself). I wanted to maintain a pure Terraform solution whilst trying to minimise the repetition but also allow the easy spinning up of new clusters as needed. I also knew I needed a “global” environment as well as various deployment environments but, for now, development and production will suffice.

Modules

Each Terraform module is in its own repo with a gitinfo.txt file specifying the version of the module. On merge to main, a pipeline runs which tags the commit with Major, Major.Minor and Major.Minor.Patch tags so that modules can be pinned to an exact version or, if desired, a broader level of pinning can be used.
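As a minimal sketch of consuming such a module (the repo URL and module name here are illustrative, not the actual repos), a scope can pin at whichever level suits:

# Pin to an exact release of the module:
module "dns_zone" {
  source = "git::https://github.com/your-org/terraform-module-dns-zone.git?ref=1.2.3"
}

# Or pin to a broader tag to pick up new minor/patch releases automatically:
# source = "git::https://github.com/your-org/terraform-module-dns-zone.git?ref=1"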

Folder Structure

Each module contains a src folder which contains an examples folder, holding Terraform examples of the module being used, and a module folder which contains the actual module. This will then be referenced by the scope repos.

Scopes

Three scope repos are in use – Environment, Global and GitHub – which make use of the above modules and standard Terraform resources.

Global and GitHub are single-use scopes: each is applied once, configuring all global resources (e.g. DNS zones) and all GitHub repositories respectively. The latter holds configuration for all repositories in GitHub, including its own. The initial application of this, to get round the chicken-and-egg problem, will be covered in another article.

The environment scope is used to configure all environments. Within each environment there are regions, within each region farms, and within each farm clusters. This allows common resources to be placed at the appropriate level.

Folder Structure

Each scope contains a src folder which contains three sub folders:

  • collection – This contains the resources to be applied to the scope
  • config – This contains one or more tfvars files which, especially in the case of environments, contain the different configuration to be applied to each environment
  • scope – This is the starting point of a scope and is what is referenced by the plan and apply pipelines – it should only contain a single reference to the module in ../collection (see the sketch after this list)
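A minimal sketch of that scope entry point (the variable names are illustrative, not the repo's actual contents):

# src/scope/main.tf
module "collection" {
  source = "../collection"

  environment = var.environment
  region      = var.region
}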

Data Between Scopes

To pass data between scopes, the pipelines create secrets within GitHub which can then be referenced by other pipelines and passed to the target scope. The main example of this is the global scope needing to pass data, e.g. DNS details, to the environments.
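As a hedged illustration of how a plan/apply workflow might publish a Terraform output as a repository secret (the step, secret name and output name are assumptions, and the token used must be able to manage secrets):

- name: Publish DNS zone name for environment pipelines
  env:
    GH_TOKEN: ${{ secrets.SECRETS_ADMIN_PAT }} # a PAT able to manage secrets; the default GITHUB_TOKEN cannot
  run: |
    gh secret set GLOBAL_DNS_ZONE_NAME --body "$(terraform output -raw dns_zone_name)"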

Variables in Environment Scope

To aid with data passing between layers within the environment, a few variables have been set up which get passed through the layers and can have data added as they pass down the chain. These variables are defaults, created_infrastructure and credentials. As an example, when an environment-level key vault is created, its ID and name are added to created_infrastructure so that access can be set up for the AKS cluster using a managed identity.
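A minimal sketch of the pattern (the resource and attribute names are illustrative):

variable "created_infrastructure" {
  description = "Accumulates details of resources created higher up the chain"
  type        = any
  default     = {}
}

# When the environment-level key vault is created, its details are merged in
# before the variable is passed down to the next layer:
locals {
  created_infrastructure = merge(var.created_infrastructure, {
    environment_key_vault = {
      id   = azurerm_key_vault.environment.id
      name = azurerm_key_vault.environment.name
    }
  })
}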

Flux

Bootstrapping

When a cluster is provisioned, it is automatically given a cluster folder in the Flux repo and has a cluster_variables secret created to store values that either Flux or the Helm charts being applied by Flux may need, including things like the region or the global key vault's name.

Folder Structure

The top level folder is the clusters folder and it contains a _template folder along with a folder per environment. The individual environment folders then contain a folder for the region, the farm, and finally the cluster (e.g. development/uksouth/f01/c01). The c01 folder is based on the _template folder.

The remaining folders are applications, clients, infrastructure, redirects and services with each of these being referenced from the c01 folder.
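Putting that together, the repo layout looks roughly like this:

clusters/
  _template/
  development/
    uksouth/
      f01/
        c01/
applications/
clients/
infrastructure/
redirects/
services/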

The infrastructure folder contains setup manifests and Helm releases for things like Istio, External DNS or External Secrets Operator.

The redirects folder is split up by environment and defines any redirects which should be added for that environment. These are managed using a redirects Helm chart and a series of values passed to its HelmRelease object.
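As an illustration only (the chart's actual values schema isn't shown in this article, so these keys are assumptions), the values passed to the HelmRelease might look something like:

values:
  redirects:
    - from: old.yourdomain.com
      to: https://www.yourdomain.com
      permanent: true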

The services folder allows you to define services which should be deployed at global, environment, region, farm or cluster level. There is a definitions folder containing the base definition for each service.

The applications folder defines applications which should be deployed to specific clusters and, as with services, there is a definitions folder which contains the default configuration. These are generally non-targeted applications such as a commercial landing page.

The final folder is clients which contains a definition for any client applications. It’s quite likely this folder may only contain a single definition if only a single SaaS application (note this is single application, not single microservice) exists. There are then the usual nested environment, region, farm and cluster folders with each cluster defining the clients that are deployed to that specific instance.

Changing branch used by Flux deployment

If the need arises to change the branch used by a Flux installation, this can be done without bootstrapping the cluster again. Note that this is recommended only on similar setups (e.g. you're trying out a new change on a dev cluster and want to point to your dev branch which is based on the default branch).

Components

This article assumes the following components are in play:

  • Kubernetes cluster called cluster01
  • Flux monorepo with the cluster definition in the location clusters/development (this folder contains the flux-system folder)
  • A default branch called main
  • A working branch called 12345-something-to-test that is based on main with a new change in it

Process

  1. Switch to your working branch
  2. Run the following command at a shell prompt (Bash, PowerShell, etc…) when the current context is the target cluster: flux suspend source git flux-system
  3. Update clusters/development/flux-system/gotk-sync.yaml to set the value of branch to 12345-something-to-test and commit and push it
  4. Within your Kubernetes cluster, update the resource of type GitRepository named flux-system in the flux-system namespace so the branch field is also 12345-something-to-test (one way to do this is shown after this list)
  5. Run the following command at a shell prompt (Bash, PowerShell, etc…) when the current context is the target cluster: flux resume source git flux-system
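For step 4, one way to make that change (assuming the default Flux resource names) is with kubectl patch:

kubectl patch gitrepository flux-system -n flux-system --type merge -p '{"spec":{"ref":{"branch":"12345-something-to-test"}}}'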

Once testing is complete, repeat the above process but setting the branch back to main.

Azure Kubernetes Service (AKS) and Flux – i – Introduction and AKS cluster setup

Background

I wanted an Azure Kubernetes Service (AKS) cluster to run some tests against but also, ultimately, to host some sites. I wanted an easy way to manage the contents of the cluster so decided to go with a GitOps workflow using a mix of Helm and Flux. For the purposes of this walkthrough, make sure Helm and the Flux CLI are pre-installed along with kubectl. You may also find a tool called K9s useful. If you're using Windows, Chocolatey should make installing these packages easier.

The reason behind this choice was a combination of research done both in and out of my day job.

Demo Applications

For the purposes of this article, any applications installed that aren’t generally available ones (e.g. NGINX Ingress Controller or Cert-Manager) will be my demo container and its associated Helm chart. This demo package offers multiple versions (all actually being the same image except for the version number) to test things like version ranges and, once up and running, the application can make use of other Azure services such as Azure Service Bus, Azure Key Vault and Azure App Configuration, all using managed identity.

Goal

The goal is simple – get an AKS cluster up and running, set up reserved hosting (this will save about 64% a month on hosting costs) and get a basic demo up and running.

Be aware that following these instructions will likely cost you money unless you have free Azure credit.

Walkthrough

This section is split into several subsections covering the various steps to set up an AKS cluster and some other Azure services and control access to them using a managed identity.

For the purposes of this demo, it will describe creating a cluster using the Azure Portal. Also, it’s not intended as a complete guide to using Azure so some steps will not be covered in detail.

This guide assumes you have an Azure Subscription. As I’m in the UK, all references to region will be UK South.

Azure Cluster

Go to the Azure Portal then “Kubernetes services” and select the “Create a Kubernetes cluster” option.

Choose the subscription and resource group you wish to use. For the “Cluster preset configuration” choose “Cost-optimised ($)” (this will give you a Standard_B4ms node with 4 vCPUs and 16GB of memory). You can change this but a minimum of 8GB is recommended for this demo.

Next, enter a suitable cluster name and choose the region you want. For the Kubernetes version, choose the latest available version. For the scale method, set this to “Manual” and set the number of nodes to 1.

In a normal cluster, a minimum of two nodes is recommended for resilience reasons; however, for the purposes of this demo, and to keep costs down, a single node will suffice.

Next, navigate to node pools, choose the "userpool" node pool and change the scale method to "Manual" and the number of nodes to 0.

Access can be left on the default settings for now. On the network tab, change the "Network configuration" to "Azure CNI" and then choose a virtual network or create a new one. For the "DNS name prefix", use the same name as the cluster, using only lowercase letters and numbers, with hyphens replacing spaces, dots, etc…

On the "Advanced" config page, for the "Infrastructure resource group", enter the same name used for the "DNS name prefix".

Once you get to the “Review + create” tab, check everything looks OK and click “Create”. After a short while, the cluster will be provisioned and you’ll receive a notification of this.
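If you prefer the CLI, a roughly equivalent single-node cluster can be created with the following sketch (not an exact match for every portal setting above; the resource group and cluster names are placeholders):

az aks create `
  --resource-group my-resource-group `
  --name my-cluster `
  --node-count 1 `
  --node-vm-size Standard_B4ms `
  --network-plugin azure `
  --generate-ssh-keys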

If you wish to create a more realistic cluster but still keep costs down, I'd recommend two Standard_B2s nodes for the agent pool and two Standard_B4ms nodes for the user pool. Remember that Kubernetes is designed to scale horizontally so, in many situations, it's better to add nodes rather than more CPU and memory per node. For an "early days" cluster, a series of Standard_B8ms up to Standard_B16ms nodes would be a reasonable choice.

Reserved Pricing

Only do this if you wish to keep a machine running with the same machine SKU (i.e. spec) chosen above. It doesn’t have to be the same machine you keep but the reservation will be linked to the machine type.

Navigate to the "Purchase reservations" section of Azure and choose the "Virtual machine" option. Under recommended, you should see the virtual machine you set up above. Select it along with the term you want. Three years offers the best discounts.

Azure Kubernetes Service (AKS) and Flux – ii – Flux with Empty Repository

Flux

Flux requires a hosted Git repository to function. This can be in Azure DevOps, GitHub, Bitbucket, etc…

The instructions below show how to set up an empty repository so you can fully configure your cluster from scratch. There are also instructions for setting up a basic cluster featuring NGINX Ingress Controller, Cert-Manager, Seq and a demo application by copying an existing Git repo. Be aware that, out of the box, Seq is open access.

Repository Setup

This article will set up things using an empty repository. A future article (will be linked when available) will guide you through doing this with a pre-populated repo. This example uses Azure DevOps for a Git repository with a personal access token (PAT).

Create a PAT now with "Full" code permissions and, for ease, set the expiry as long as possible. Longer term, another solution should be used rather than a PAT linked to a personal account. The default branch is called main.

Empty Repository

In Azure DevOps, create a new repository with the README enabled. For this demo, we'll assume it's called "Flux.Demo". Clone this repo into Visual Studio Code or another tool that's good for editing YAML files.

You can maintain multiple clusters in one repo (AKA a monorepo) so the advice is to think about this when setting up your repository. The pattern I’ll be using is region\environment\cluster e.g. uksouth\production\01 but any logical pattern makes sense.

In VS Code, create a "clusters" folder, within it a "uksouth" folder, within that a "development" folder and within that an "01" folder. In the root, create a folder called "infrastructure" and within it two folders: "flux-support" and "common".
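The resulting layout should look like this:

clusters/
  uksouth/
    development/
      01/
infrastructure/
  common/
  flux-support/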

In the “flux-support” directory, create a file called “cluster-variables-configmap.yaml” and populate with the following:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-variables
  namespace: flux-system
data:
  cluster_region: '${cluster_region}'
  cluster_env: '${cluster_env}'
  cluster_number: '${cluster_number}'
  cluster_managedidentity_id: '${cluster_managedidentity_id}'

We’ll use this config map in other definitions when we want to know what region, environment, etc… we’re in. You can use tags on your infrastructure for some information but this will allow any value to be defined and, importantly, accessed within your Flux definitions, not just your running workloads.
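For example, once a Kustomization applies manifests with substituteFrom pointing at this config map (as the demo application's Kustomization does later in this article), any manifest can reference the values directly; this illustrative snippet labels a namespace with the environment:

apiVersion: v1
kind: Namespace
metadata:
  name: example
  labels:
    environment: '${cluster_env}'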

In the "flux-support" folder, also create a file called "kustomization.yaml":

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cluster-variables-configmap.yaml

In the "common" folder, we'll add four files which will create an NGINX Ingress Controller, set up Cert-Manager and allow us to deploy the demo app.

cert-manager.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
  labels:
    toolkit.fluxcd.io/tenant: sre-team
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  interval: 24h
  url: https://charts.jetstack.io
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  interval: 30m
  chart:
    spec:
      chart: cert-manager
      version: "1.x"
      sourceRef:
        kind: HelmRepository
        name: cert-manager
        namespace: cert-manager
      interval: 12h
  values:
    installCRDs: true

helmrepository-jabbermouth.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: jabbermouth
  namespace: flux-system
spec:
  interval: 5m0s
  url: oci://registry-1.docker.io/jabbermouth
  type: 'oci'

ingress-nginx.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    toolkit.fluxcd.io/tenant: sre-team
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  interval: 24h
  url: https://kubernetes.github.io/ingress-nginx
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  interval: 30m
  chart:
    spec:
      chart: ingress-nginx
      version: '*'
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx
        namespace: ingress-nginx
      interval: 12h
  values:
    controller:
      service:
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz

kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - helmrepository-jabbermouth.yaml
  - cert-manager.yaml
  - ingress-nginx.yaml

Going back to the clusters\uksouth\development\01 folder, create a folder called “flux-system” and in it create two empty files called “gotk-components.yaml” and “gotk-sync.yaml”.

Back in the clusters\uksouth\development\01 folder, create a new file called “flux-support.yaml” and populate it with the following content:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-fluxsupport
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/flux-support
  prune: true
  wait: true
  timeout: 5m0s
  postBuild:
    substitute:
      cluster_region: 'uksouth'
      cluster_env: 'development'
      cluster_number: 'c01'
      cluster_managedidentity_id: ''

We'll populate the cluster_managedidentity_id value later. Also note that cluster_number prefixes the number with a "c" due to a quirk with how numeric values are handled.

Next, create a file called “common.yaml” and populate it with the following content:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-common
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: flux-fluxsupport
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/common
  prune: true
  wait: true
  timeout: 5m0s

Note the dependsOn value, which says this cannot run until the flux-support kustomization has completed.

First Deployment – Bootstrapping the Cluster

You should now commit your changes to Git and push them to the remote server.

The next step is to make Flux aware of the repo. The following PowerShell script will do this. Note you will need to update the three “your-” values (your-git-pat, your-organisation and your-project) with your specific values.

$REPO_TOKEN="your-git-pat"
$REPO_URL="https://dev.azure.com/your-organisation/your-project/_git/Flux.Demo"
$REGION = "uksouth"
$ENVIRONMENT = "development"
$CLUSTER = "01"
$BRANCH = "main"

flux bootstrap git `
  --token-auth=true `
  --password=$REPO_TOKEN `
  --url=$REPO_URL `
  --branch=$BRANCH `
  --path="clusters/$REGION/$ENVIRONMENT/$CLUSTER"

To view the state of your Flux deployment, you can run the following command:

flux get kustomizations

To view this in watch mode, append --watch to the end of the command. Once all the rows show as ready and have an “Applied revision: main@sha1:…” then your cluster should be ready.

Next, run the following command to get the external IP of your ingress controller:

kubectl get svc -n ingress-nginx

Note the external IP for later use. You can also do a simple test by going to http://<external IP>/ where you should see a "404 Not Found" error page from NGINX.

Demo Application

In the cluster definition folder (clusters\uksouth\development\01), create a file called “applications.yaml” and populate it with the following contents:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-applications
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: flux-common
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./applications/demo
  prune: true
  wait: true
  timeout: 5m0s
  postBuild:
    substituteFrom:
      - kind: ConfigMap
        name: cluster-variables

Now, in the root, create a folder called “applications” and, in that folder, create a folder called “demo”. We’ll create two files in that folder:

demo.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: applications
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: demoapp
  namespace: applications
spec:
  interval: 5m
  chart:
    spec:
      chart: helm-demo
      version: '^1.0.0'
      sourceRef:
        kind: HelmRepository
        name: jabbermouth
        namespace: flux-system
      interval: 1m
  values:
    ingress:
      domain: demo.yourdomain.com
      path: /simple
      tls:
        enabled: true
        letsEncryptEmail: 'youremail@yourdomain.com'
    config:
      environment:
        overridden:
          createAs: 'inline'
          value: 'Common to all environments'
        onlyFromEnvVar:
          value: 'Demo application'

You will need to update the ingress section with your domain and email address.

Next, create the kustomization file in the applications\demo folder:

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - demo.yaml

Before pushing your changes, create a DNS record that matches the domain entered above, using the external IP address you retrieved in the previous section.
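If your domain is hosted in an Azure DNS zone, the record can also be created from the CLI (a sketch; the resource group and zone names are placeholders):

az network dns record-set a add-record `
  --resource-group my-dns-rg `
  --zone-name yourdomain.com `
  --record-set-name demo `
  --ipv4-address <external IP>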

Once the DNS record is configured, commit and push your changes to the repo and monitor Flux as before:

flux get kustomizations

To force a reconciliation, you can run the following command:

flux reconcile kustomization flux-system --with-source

If all worked as expected, going to the equivalent of https://demo.yourdomain.com/simple should show a simple demo site with a valid “SSL” certificate.