
Kubernetes

Delete a Kubernetes resource that is stuck "Terminating"

General Resource

Run the following command to force-delete a resource. It works by removing all finalizers. Only use this if a resource has become “stuck”.

kubectl patch resourcetype resource-name -n resource-namespace -p '{"metadata":{"finalizers":[]}}' --type=merge

Replace resourcetype with the type of resource (e.g. pod, helmrelease, etc…), resource-name with the name of your resource and resource-namespace with the name of the namespace the resource belongs to.
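For example, a minimal sketch clearing the finalizers on a HelmRelease called my-release in the apps namespace (names are illustrative):

kubectl patch helmrelease my-release -n apps -p '{"metadata":{"finalizers":[]}}' --type=merge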

If the resource has become orphaned (i.e. the namespace a resource belongs to has been deleted), recreate the namespace and then run the above command for each resource.

Namespace

If a namespace is stuck and the above method doesn’t work, run the following command, replacing my-namespace with the name of your namespace to delete.

kubectl get ns my-namespace -o json | jq '.spec.finalizers = []' | kubectl replace --raw "/api/v1/namespaces/my-namespace/finalize" -f -

Solution found on Stack Overflow.

Creating a simple Kubernetes operator in .NET (C#)

Note: I'd recommend building a Kubernetes operator using Go as it is the language of choice for Kubernetes. This article is more of a proof of concept.

Creating a Kubernetes operator can seem a bit overwhelming at first. To help, there’s a simple NuGet package called CanSupportMe.Operator which can be as simple as watching for a secret or config map, or creating a custom resource definition (CRD) and watching that. You then get callbacks for new items, modified items, reconciling items, deleting items and deleted items. (* CRDs only)

The callbacks also expose a data object manager which lets you do things like create secrets and config maps, force the reconciliation of an object, clear all finalizers and check if a resource exists.

Example

The following is a console application with these packages installed:

dotnet add package Microsoft.Extensions.Configuration.Abstractions
dotnet add package Microsoft.Extensions.Hosting
dotnet add package Microsoft.Extensions.Hosting.Abstractions
dotnet add package CanSupportMe.Operator

If you’re not using a context with a token in your KUBECONFIG, use the commented-out failover token lines to specify one (how to generate one is described on the NuGet package page).

using CanSupportMe.Operator.Extensions;
using CanSupportMe.Operator.Models;
using CanSupportMe.Operator.Options;
using Microsoft.Extensions.Hosting;

// const string FAILOVER_TOKEN = "<YOUR_TOKEN_GOES_HERE>";

try
{  
  Console.WriteLine("Application starting up");

  IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices((context, services) =>
    {
      services.AddOperator(options =>
      {
        options.Group = "";
        options.Kind = "Secret";
        options.Version = "v1";
        options.Plural = "secrets";
        options.Scope = ResourceScope.Namespaced;
        options.LabelFilters.Add("app.kubernetes.io/managed-by", "DemoOperator");

        options.OnAdded = (kind, name, @namespace, item, dataObjectManager) =>
        {
          var typedItem = (KubernetesSecret)item;

          Console.WriteLine($"On {kind} Add: {name} to {@namespace} which is of type {typedItem.Type} with {typedItem.Data?.Count} item(s)");
        };

        // options.FailoverToken = FAILOVER_TOKEN;
      });
    })
    .Build();

  host.Run();
}
catch (Exception ex)
{
  Console.WriteLine($"Application start failed because {ex.Message}");
}

To create a secret that will trigger the OnAdded callback, create a file called secret.yaml with the following contents:

apiVersion: v1
kind: Secret
metadata:
  name: test-secret-with-label
  namespace: default
  labels:
    app.kubernetes.io/managed-by: DemoOperator
stringData:
  notImportant: SomeValue
type: Opaque

Then apply it to your Kubernetes cluster using the following command:

kubectl apply -f secret.yaml

This should result in the following being output to the console:

On Secret Add: test-secret-with-label to default which is of type Opaque with 1 item(s)
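To tidy up afterwards, the test secret can be removed with:

kubectl delete -f secret.yaml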

Changing branch used by Flux deployment

If the need arises to change the branch used by a Flux installation, this can be done without bootstrapping the cluster again. Note that this is recommended only on similar setups (e.g. you’re trying out a new change on a dev cluster and want to point to your dev branch which is based on the default branch).

Components

This article assumes the following components are in play:

  • Kubernetes cluster called cluster01
  • Flux mono repo with the cluster definition in the location clusters/development (this folder contains the flux-system folder)
  • A default branch called main
  • A working branch called 12345-something-to-test that is based on main with a new change in it

Process

  1. Switch to your working branch
  2. Run the following command at a shell prompt (Bash, PowerShell, etc…) when the current context is your target cluster: flux suspend source git flux-system
  3. Update clusters/development/flux-system/gotk-sync.yaml to set the value of branch to 12345-something-to-test and commit and push it
  4. Within your Kubernetes cluster, update the resource of type GitRepository named flux-system in the flux-system namespace so its branch field is also 12345-something-to-test (a patch sketch is shown after this list)
  5. Run the following command at a shell prompt (Bash, PowerShell, etc…) when the current context is your target cluster: flux resume source git flux-system
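As a sketch, step 4 can be done with a merge patch against the GitRepository resource (assuming the current context is the target cluster):

kubectl patch gitrepository flux-system -n flux-system --type=merge -p '{"spec":{"ref":{"branch":"12345-something-to-test"}}}'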

Once testing is complete, repeat the above process but setting the branch back to main.

.NET Container Running as Non-Root in Kubernetes

This is a quick guide on how to get a standard .NET container running as a non-root, non-privileged user in Kubernetes. It is not a complete security guide but rather just enough if you require your pods not to run as root.

Update Dockerfile

The first step is to update the Dockerfile. Two changes are required here: one to change the port and one to specify the user to use.

Exposed Port

At the start of the Dockerfile, replace any EXPOSE statements with the following:

ENV ASPNETCORE_URLS http://+:8000
EXPOSE 8000

This will expose your application to the cluster on port 8000 rather than port 80.

User

Next, just before the ENTRYPOINT instruction, add the following line:

USER $APP_UID

Now build and push the container to your container registry of choice either manually or via a CI/CD pipeline.
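For example, a manual build and push might look like the following (registry and tag are illustrative):

docker build -t myregistry.azurecr.io/my-app:1.0.0 .
docker push myregistry.azurecr.io/my-app:1.0.0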

Kubernetes Manifests

Deployment Manifest

Add the following snippet to the deployment manifest under the container entry that is to be locked down:

securityContext:
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  runAsUser: 1654

Service Manifest

As the exposed container port has now changed, update any service you may have defined so it looks similar to the following service manifest:

apiVersion: v1
kind: Service
metadata:
  name: your-service
  namespace: service-namespace
spec:
  selector:
     app: your-app
  ports:
    - name: http
      port: 80
      targetPort: 8000
  type: ClusterIP

The pod will still be accessible via its service on port 80 so things like ingress or gateway definitions or references from other apps do not need to be updated.
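To confirm the pod really is running as the non-root user, a quick check (resource names are illustrative) is:

kubectl exec -n service-namespace deploy/your-app -- id

The output should show a non-zero UID such as uid=1654 rather than uid=0(root).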

Azure Kubernetes Service (AKS) and Flux – i – Introduction and AKS cluster setup

Background

I wanted an Azure Kubernetes Service (AKS) cluster to run some tests against but also, ultimately, host some sites. I wanted an easy way to manage the contents of the cluster so decided to go with a GitOps workflow using a mix of Helm and Flux. For the purposes of this walkthrough, make sure Helm and Flux are pre-installed along with kubectl. You may also find a tool called K9s useful. If you’re using Windows, Chocolatey should make installing these packages easier.

The reason behind this choice was a combination of research done both in and out of my day job.

Demo Applications

For the purposes of this article, any applications installed that aren’t generally available ones (e.g. NGINX Ingress Controller or Cert-Manager) will be my demo container and its associated Helm chart. This demo package offers multiple versions (all actually being the same image except for the version number) to test things like version ranges and, once up and running, the application can make use of other Azure services such as Azure Service Bus, Azure Key Vault and Azure App Configuration, all using managed identity.

Goal

The goal is simple – get an AKS cluster up and running, set up reserved hosting (this will save about 64% on hosting costs each month) and get a basic demo deployed.

Be aware that following these instructions will likely cost you money unless you have free Azure credit.

Walkthrough

This section is split into several subsections covering the various steps to set up an AKS cluster and some other Azure services and control access to them using a managed identity.

For the purposes of this demo, it will describe creating a cluster using the Azure Portal. Also, it’s not intended as a complete guide to using Azure so some steps will not be covered in detail.

This guide assumes you have an Azure Subscription. As I’m in the UK, all references to region will be UK South.

Azure Cluster

Go to the Azure Portal then “Kubernetes services” and select the “Create a Kubernetes cluster” option.

Choose the subscription and resource group you wish to use. For the “Cluster preset configuration” choose “Cost-optimised ($)” (this will give you a Standard_B4ms node with 4 vCPUs and 16GB of memory). You can change this but a minimum of 8GB is recommended for this demo.

Next, enter a suitable cluster name and choose the region you want. For the Kubernetes version, choose the latest available version. For the scale method, set this to “Manual” and set the number of nodes to 1.

In a normal cluster, a minimum of two nodes is recommended for resilience; however, for the purposes of this demo and keeping costs down, a single node will suffice.

Next, navigate to node pools, choose the “userpool” node pool, change the scale method to “Manual” and set the number of nodes to 0.

Access can be left on the default settings for now. On the networking tab, change the “Network configuration” to “Azure CNI” and then choose a virtual network or create a new one. For the “DNS name prefix”, use the same name as the cluster, using only lowercase letters, numbers and hyphens (in place of spaces, dots, etc…).

On the “Advanced” config page, for the “Infrastructure resource group”, enter the same name used for the “DNS name prefix”.

Once you get to the “Review + create” tab, check everything looks OK and click “Create”. After a short while, the cluster will be provisioned and you’ll receive a notification of this.
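For reference, a broadly equivalent cluster can be created with the Azure CLI instead of the portal; the following is a sketch only, with the resource group, names, size and count being illustrative:

az aks create \
  --resource-group my-resource-group \
  --name my-aks-cluster \
  --node-count 1 \
  --node-vm-size Standard_B4ms \
  --network-plugin azure \
  --node-resource-group my-aks-cluster \
  --generate-ssh-keys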

If you wish to create a more realistic cluster but still keep costs down, I’d recommend two Standard_B2s nodes for agentpool and two Standard_B4s nodes for userpool. Remember that Kubernetes is designed to scale horizontally so, in many situations, it’s better to add nodes rather than more CPU and memory to each node. For an “early days” cluster, a series of Standard_B8ms up to Standard_B16ms would be a reasonable choice.

Reserved Pricing

Only do this if you wish to keep a machine running with the same machine SKU (i.e. spec) chosen above. It doesn’t have to be the same machine you keep but the reservation will be linked to the machine type.

Navigate to the “Purchase reservations” section of Azure and choose the “Virtual machine” option. Under recommended, you should see the virtual machine you set up above. Select it along with the term you want. Three years offers the best discounts.

Azure Kubernetes Service (AKS) and Flux – ii – Flux with Empty Repository

Flux

Flux requires a hosted Git repository to function. This can be in Azure DevOps, GitHub, Bitbucket, etc…

The instructions below show how to set up an empty repository so you can fully configure your cluster from scratch. There are also instructions for setting up a basic cluster featuring NGINX Ingress Controller, Cert-Manager, Seq and a demo application by copying an existing Git repo. Be aware that, out of the box, Seq is open access.

Repository Setup

This article will set up things using an empty repository. A future article (will be linked when available) will guide you through doing this with a pre-populated repo. This example uses Azure DevOps for a Git repository with a personal access token (PAT).

You’ll need to create a PAT, so do this now. The PAT needs “Full” code permissions and, for ease, set the expiry to be as long as possible. Longer term, a solution other than a PAT linked to a personal account should be used. The default branch is called main.

Empty Repository

In Azure DevOps, create a new repository with the README enabled. For this demo, we’ll assume it’s called “Flux.Demo”. Clone this repo into Visual Studio Code or another tool that’s good for editing YAML files.

You can maintain multiple clusters in one repo (AKA a monorepo) so the advice is to think about this when setting up your repository. The pattern I’ll be using is region\environment\cluster e.g. uksouth\production\01 but any logical pattern makes sense.

In VS Code, create a “clusters” folder, within it a “uksouth” folder, within that a “development” folder and within that an “01” folder. In the root, create a folder called “infrastructure” and within that create two folders: “flux-support” and “common”.
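If you prefer the command line, the same structure can be created from the repository root with (a sketch; VS Code’s explorer works just as well):

mkdir -p clusters/uksouth/development/01
mkdir -p infrastructure/flux-support infrastructure/common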

In the “flux-support” directory, create a file called “cluster-variables-configmap.yaml” and populate with the following:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-variables
  namespace: flux-system
data:
  cluster_region: '${cluster_region}'
  cluster_env: '${cluster_env}'
  cluster_number: '${cluster_number}'
  cluster_managedidentity_id: '${cluster_managedidentity_id}'

We’ll use this config map in other definitions when we want to know what region, environment, etc… we’re in. You can use tags on your infrastructure for some information but this will allow any value to be defined and, importantly, accessed within your Flux definitions, not just your running workloads.

In the “flux-support” folder, also create a file called “kustomization.yaml”:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cluster-variables-configmap.yaml

In the “common” folder, we’ll add four files which will create an NGINX Ingress Controller along with setting up Cert-Manager and allow us to deploy the demo app.

cert-manager.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
  labels:
    toolkit.fluxcd.io/tenant: sre-team
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  interval: 24h
  url: https://charts.jetstack.io
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  interval: 30m
  chart:
    spec:
      chart: cert-manager
      version: "1.x"
      sourceRef:
        kind: HelmRepository
        name: cert-manager
        namespace: cert-manager
      interval: 12h
  values:
    installCRDs: true

helmrepository-jabbermouth.yaml

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: jabbermouth
  namespace: flux-system
spec:
  interval: 5m0s
  url: oci://registry-1.docker.io/jabbermouth
  type: 'oci'

ingress-nginx.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    toolkit.fluxcd.io/tenant: sre-team
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  interval: 24h
  url: https://kubernetes.github.io/ingress-nginx
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  interval: 30m
  chart:
    spec:
      chart: ingress-nginx
      version: '*'
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx
        namespace: ingress-nginx
      interval: 12h
  values:
    controller:
      service:
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - helmrepository-jabbermouth.yaml
  - cert-manager.yaml
  - ingress-nginx.yaml

Going back to the clusters\uksouth\development\01 folder, create a folder called “flux-system” and in it create two empty files called “gotk-components.yaml” and “gotk-sync.yaml”.

Back in the clusters\uksouth\development\01 folder, create a new file called “flux-support.yaml” and populate it with the following content:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-fluxsupport
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/flux-support
  prune: true
  wait: true
  timeout: 5m0s
  postBuild:
    substitute:
      cluster_region: 'uksouth'
      cluster_env: 'development'
      cluster_number: 'c01'
      cluster_managedidentity_id: ''

We’ll populate the cluster_managedidentity_id value later. Also note that cluster_number is prefixing the number with a “c” due to a quirk with how numbers are handled.

Next, create a file called “common.yaml” and populate it with the following content:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-common
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: flux-fluxsupport
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/common
  prune: true
  wait: true
  timeout: 5m0s

Note the dependsOn value, which means this Kustomization will not be applied until the flux-fluxsupport Kustomization has completed.

First Deployment – Bootstrapping the Cluster

You should now commit your changes to Git and push them to the remote server.

The next step is to make Flux aware of the repo. The following PowerShell script will do this. Note you will need to update the three “your-” values (your-git-pat, your-organisation and your-project) with your specific values.

$REPO_TOKEN="your-git-pat"
$REPO_URL="https://dev.azure.com/your-organisation/your-project/_git/Flux.Demo"
$REGION = "uksouth"
$ENVIRONMENT = "development"
$CLUSTER = "01"
$BRANCH = "main"

flux bootstrap git `
  --token-auth=true `
  --password=$REPO_TOKEN `
  --url=$REPO_URL `
  --branch=$BRANCH `
  --path="clusters/$REGION/$ENVIRONMENT/$CLUSTER"

To view the state of your Flux deployment, you can run the following command:

flux get kustomizations

To view this in watch mode, append --watch to the end of the command. Once all the rows show as ready and display an “Applied revision: main@sha1:…” value, your cluster should be ready.

Next, run the following command to get the external IP of your ingress controller:

kubectl get svc -n ingress-nginx

Note the external IP for later use. You can also do a simple test by going to http://<external-ip>/ where you should see a “404 Not Found” error page from NGINX.

Demo Application

In the cluster definition folder (clusters\uksouth\development\01), create a file called “applications.yaml” and populate it with the following contents:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-applications
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: flux-common
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./applications/demo
  prune: true
  wait: true
  timeout: 5m0s
  postBuild:
    substituteFrom:
      - kind: ConfigMap
        name: cluster-variables

Now, in the root, create a folder called “applications” and, in that folder, create a folder called “demo”. We’ll create two files in that folder:

demo.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: applications
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: demoapp
  namespace: applications
spec:
  interval: 5m
  chart:
    spec:
      chart: helm-demo
      version: '^1.0.0'
      sourceRef:
        kind: HelmRepository
        name: jabbermouth
        namespace: flux-system
      interval: 1m
  values:
    ingress:
      domain: demo.yourdomain.com
      path: /simple
      tls:
        enabled: true
        letsEncryptEmail: 'youremail@yourdomain.com'
    config:
      environment:
        overridden:
          createAs: 'inline'
          value: 'Common to all environments'
        onlyFromEnvVar:
          value: 'Demo application'

You will need to update the ingress section with your domain and email address.

Next, create the kustomization file in the applications\demo folder:

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - demo.yaml

Before pushing your changes, create a DNS record that matches the domain entered above, using the external IP address you retrieved in the previous section.
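Once created, the record can be checked with a quick lookup (domain is illustrative); it should resolve to the ingress controller’s external IP:

nslookup demo.yourdomain.com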

Once the DNS record is configured, commit and push your changes to the repo and monitor Flux as before:

flux get kustomizations

To force a reconciliation, you can run the following command:

flux reconcile kustomization flux-system --with-source

If all worked as expected, going to the equivalent of https://demo.yourdomain.com/simple should show a simple demo site with a valid “SSL” certificate.

Install Metrics Server into Kubernetes running in Docker Desktop using Helm

If using the Helm chart to install Metrics Server in a Docker Desktop instance of Kubernetes, it may not start because the kubelet’s certificates cannot be verified. To resolve this, run the following command when doing the install (it can also be applied to an existing installation):

helm upgrade --install --set args[0]='--kubelet-insecure-tls' metrics-server metrics-server/metrics-server

If the repo hasn’t been added already, run the following first:

helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
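Once the release is running, confirm metrics are being collected (it can take a minute or so after install) with:

kubectl top nodes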

Running Octopus Deploy Container With Kubernetes and Helm

When trying to run Kubernetes or Helm deployments from a local Octopus Deploy container, an error will be encountered because these tools aren’t available by default.

One solution to this problem is to create a custom container that includes them. Below is an example of this.

FROM octopusdeploy/octopusdeploy:latest

RUN apt update
RUN apt install -y ca-certificates curl apt-transport-https
RUN curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
RUN curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | tee /usr/share/keyrings/helm.gpg > /dev/null
RUN echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
RUN echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | tee /etc/apt/sources.list.d/helm-stable-debian.list
RUN apt update
RUN apt install -y kubectl helm

To build the container, run the following command:

docker build --pull -t octopusdeploy-withkubectlandhelm .
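To sanity-check that the tools made it into the image, the entrypoint can be overridden (a sketch, assuming the image built above):

docker run --rm --entrypoint kubectl octopusdeploy-withkubectlandhelm version --client
docker run --rm --entrypoint helm octopusdeploy-withkubectlandhelm version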

Once created, follow the standard instructions on Octopus’ site, replacing the image name in the Docker Compose file with the custom container name. This:

image: octopusdeploy/octopusdeploy:${OCTOPUS_SERVER_TAG}

Becomes:

image: octopusdeploy-withkubectlandhelm

Create a Service Account and Get Token In Kubernetes Running In Docker Desktop

When running Kubernetes in Docker Desktop 4.8 and later, creating a service account no longer automatically creates a token secret. The following script will create a service account and retrieve a token. Note that it creates a cluster admin service account for the purposes of this demonstration.

Create a file called create-service-account.sh or similar and populate as follows:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: $1
  namespace: $2

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: $1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: $1
  namespace: $2

---

apiVersion: v1
kind: Secret
metadata:
  name: $1
  namespace: $2
  annotations:
    kubernetes.io/service-account.name: $1
type: kubernetes.io/service-account-token
EOF

TOKEN=$(kubectl get secret $1 -n $2 --template='{{.data.token}}' | base64 --decode)

echo
echo $TOKEN
echo

To create a service account called my-service-account in the namespace development, run the following command:

bash create-service-account.sh my-service-account development
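The printed token can then be used to add a user and context to your kubeconfig, for example (the context name is illustrative; docker-desktop is the default cluster name in Docker Desktop):

kubectl config set-credentials my-service-account --token=<token-from-output>
kubectl config set-context docker-desktop-sa --cluster=docker-desktop --user=my-service-account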

Running an Azure DevOps Build Agent in Kubernetes

The basics…

Firstly, create a new namespace called build-agents to hold the new agents and also create a PAT in Azure DevOps, ideally associated with a service principal.

kubectl create ns build-agents

If you wish to use your Docker account for image pulls (avoiding anonymous pull limits), add its details as a secret:

kubectl create secret docker-registry dockerhub-account --docker-username=dockerhub-account-name --docker-password=dockerhub-account-password --docker-email= --namespace build-agents

In the above script, dockerhub-account will be the name of the secret in Kubernetes, dockerhub-account-name should be changed to your Docker Hub account name and dockerhub-account-password should be changed to your Docker Hub password or token.

Next create a simple YAML file with the following values:

buildagent-settings.yaml

image:
  pullPolicy: Always
  pullSecrets:
    - dockerhub-account

personalAccessToken: "your-ads-pat"

agent:
  organisationUrl: "https://dev.azure.com/your-org-name/"
  pool: "Default"
  dockerNodes: true

In the above file, you should change your-ads-pat to a Personal Access Token (PAT) you’ve generated which has “Read & manage” agent pool permissions. You should also change your-org-name to the name of your Azure DevOps organisation. If you wish to deploy the agent to a pool other than the “Default” one, you should also update pool. The image section, or either of its sub-values, can be removed completely if desired. If you’re not running Docker for your container hosting (Kubernetes 1.24+ no longer uses Docker as its container runtime, using containerd instead), set dockerNodes to false.

Once you are done, it’s time to install the first agent:

helm repo add jabbermouth https://helm.jabbermouth.co.uk
helm install -n build-agents -f buildagent-settings.yaml linux-agent-1 jabbermouth/AzureDevOpsBuildAgent

To add an additional build agent, simply run the following:

helm install -n build-agents -f buildagent-settings.yaml linux-agent-2 jabbermouth/AzureDevOpsBuildAgent
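To confirm the agents are running, check the pods; they should also appear under the chosen pool in Azure DevOps:

kubectl get pods -n build-agents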

The container can be found on Docker Hub, the files used to build the container can be found on GitHub and the Helm chart code can also be found on GitHub.