Output variables in PowerShell and GitHub Actions

When running a PowerShell script within a GitHub Actions workflow, you may wish to output variables from one step and access them in another.

Below is a simple example of outputting a variable in one step and accessing it in another:

name: Pass value from one step to another
on: 
  workflow_dispatch:
  push:
    branches:
      - main

jobs:
  PassTheParcel:
    runs-on: ubuntu-latest
    steps:
    - name: Determine some values
      id: calculate_values
      shell: pwsh
      run: |
        $ErrorActionPreference = "Stop" # not required but I always include it so errors don't get missed

        # Normally these would be calculated rather than just hard-coded strings
        $someValue = "Hello, World!"
        $anotherValue = "Goodbye, World!"

        # Outputting for diagnostic purposes - don't do this with sensitive data!
        Write-Host "Some value: $someValue"
        Write-Host "Another value: $anotherValue"

        "SOME_VALUE=$someValue" | Out-File -FilePath $env:GITHUB_OUTPUT -Append
        "ANOTHER_VALUE=$anotherValue" | Out-File -FilePath $env:GITHUB_OUTPUT -Append

    - name: Use the values
      shell: pwsh
      env: 
        VALUE_TO_USE_1: ${{ steps.calculate_values.outputs.SOME_VALUE }}
        VALUE_TO_USE_2: ${{ steps.calculate_values.outputs.ANOTHER_VALUE }}
      run: |
        $ErrorActionPreference = "Stop" # not required but I always include it so errors don't get missed

        Write-Host "Values received were `"$($env:VALUE_TO_USE_1)`" and `"$($env:VALUE_TO_USE_2)`""

Whilst both steps above use PowerShell, there is no requirement that they use the same shell.
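
For example, a follow-up step written in bash could read the same outputs. Below is a minimal sketch assuming it is added to the PassTheParcel job after the calculate_values step above:

    - name: Use the values from bash
      shell: bash
      env:
        VALUE_TO_USE_1: ${{ steps.calculate_values.outputs.SOME_VALUE }}
        VALUE_TO_USE_2: ${{ steps.calculate_values.outputs.ANOTHER_VALUE }}
      run: |
        echo "Values received were \"$VALUE_TO_USE_1\" and \"$VALUE_TO_USE_2\""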

Adding an additional route to a VPN connection in Windows 11

If you're running a split VPN in Windows 11 (i.e. one which sends only certain traffic over the VPN and does not use the VPN as the default gateway), you may wish to add specific IP addresses to also be routed over the VPN. An example of this might be a service which is only accessible via a certain IP address.

Assuming you've already set up your VPN connection in Windows, you can add a route using the following PowerShell command:

Add-VpnConnectionRoute -Name '<Split VPN Name>' -DestinationPrefix <123.45.67.89>/32

Where:

  • <Split VPN Name> is the name of the VPN connection you wish to add the route to
  • <123.45.67.89> is the IP address you wish to route over the VPN rather than via your default gateway/normal internet connection

For example, if you have a VPN connection named My VPN and you wish to route traffic to 216.58.204.68 over the VPN, you would use the following command:

Add-VpnConnectionRoute -Name 'My VPN' -DestinationPrefix 216.58.204.68/32

The Journey That Is My Career

I’ve worked as an IT professional for over 20 years, having been fortunate enough to have been around computers from a young age and then going on to study Software Engineering at university. In those 20+ years, I’ve had several different roles in a few areas of IT at several companies. This article is about the journey. If you want to see “the facts”, that’s what my LinkedIn profile is for!

What’s This About?

This article is a summary of my career and why I’ve made the choices I’ve made. A lot of the choices made along the way weren’t right or wrong necessarily but, with hindsight, I’m not sure I’d make the same choice if I knew then what I know now. Having said that, I’m happy with where my life is right now but the “best advice” to my past self may not match the decision I (didn’t) make at the time.

Take what you will from this article, whether that be a way to waste a few minutes or something to learn from or, dare I say it, inspire you to take the next step in your career path.

First “Proper” Job

My first “proper” job was actually my placement year at university. I count this as a proper job as it was one year, full time and doing similar jobs to the rest of the IT staff at the company.

The year was split into three parts – six months in the support department, five months in one of the dev teams and a further six weeks in the support department, partly covering for the manager who was on leave. Despite really wanting to be a software engineer, developer or whatever term you want to use, it turned out the part of the year I’d really enjoyed was working in the support team. The year also gave me a lust to get into the working world after completing my degree.

Post-Graduation

After my experience during my placement year, as I approached the end of my final year, I started applying for several IT support positions around the country and ended up being offered two positions – one in Leeds and one in Abingdon, Oxfordshire. In the end, I opted for the latter as it was slightly more money, in the Thames Valley corridor and, as I knew people in the area already from my placement year, came with a pre-established social life and more living options via house shares. The actual jobs were similar enough that, from my perspective at the time, it didn’t make too much difference at such an early stage in my career.

I started my new role in early September 2001, a very memorable time in most people’s lives, regardless of how close you were to the events of the 9/11 attacks. Like many, I remember where I was when I first heard about it and I remember struggling to keep up to date via the still relatively new internet news sites, at a time when adding servers took weeks or months, not minutes.

After being in my new role for a few months, I started doing some coding again, working on a support system for internal use and also managing things like annual leave between performing my other duties.

All was going well until July 2002, when we were all told the company was going into administration and, effective immediately, we all had no jobs, and my short tenure meant no redundancy pay either. Needless to say, I began job hunting immediately. This wasn’t the easiest time as I was competing with a whole year’s worth of graduating IT students.

Take 2…

Over the following couple of months, the company was bought out and downsized, but the new owner took on some of the old staff. I was then asked to come in and do some contracting work which, after a little while, was converted into a full-time role, effectively just picking up where I left off.

Once again, all was going well until February 2003 when the new company went into administration but this time it wasn’t coming back.

Take 3…

As the old company was wrapping up, I was contacted by the old managing director of the initial version of the company, stating he was putting together a bid to buy the assets and clients and would like me to join the new team. I liked the work, I liked the people, the salary was acceptable and, honestly, it meant no need to job hunt, so I accepted the offer.

All was going well for a few years, I got a nice (nothing crazy) pay rise and we even started working on a new version of the product using C# and the .NET Framework. It was then that the problems started…

Warning Signs!

Salary payments weren’t always being made on time and, even if it was just a few days, that makes you worry when you have mortgage payments to make (yes, I was able to buy when I was young thanks to buying with a friend but that’s another story!) and I didn’t have a pile of cash lying around to cover the shortfall.

Over the next couple of years – yes, years! – things got worse and worse, with salary payments now being several months behind. I “stayed loyal” to this company for a long time as it continued to owe me more and more money and, whilst I did look at and even interview at a couple of places, ultimately I lacked the C# experience many places were looking for as I think we were only in the early months of development at this point and all the learning was self-taught. This was in a time before excellent tutorial sites like Pluralsight were mainstream and YouTube offerings were limited to mainly cat videos.

I did realise enough was enough in early 2008 and began looking for a new opportunity, eventually finding a role at a company I’d interviewed at a year previously but, at the time, hadn’t had the necessary C# experience for.

By the time I left “take 3”, they owed me four months’ salary! Fortunately for me, the company went into administration shortly after I said I was leaving, so the government compensation scheme kicked in and covered about half of the amount owed.

The Long Journey Begins

My next position ended up being a nearly fourteen year tenure in various roles within the IT department.

I started out as a developer, working on a mix of classic ASP and .NET products and was doing this for about six months when the development manager changed (I won’t go into the details about why) resulting in me getting a new line manager. For various reasons, this new setup wasn’t working for me so I began looking for new roles, along with other colleagues in the department.

Two weeks later, the new development manager resigned unexpectedly and I was approached by the dev manager’s manager about taking on some of the technical parts of their role, specifically around the database and server management. I agreed to this as I like to have a broad understanding of how the systems the software runs on are set up.

Over the course of the next couple of years, I got on with my work, helping out the team, working on planning, supporting recruitment, etc… Then, in 2011, I took on the role of team leader. Management wasn’t my goal at the time but I was effectively doing several parts of the role anyway and, as you’d expect, it included a pay rise, so I accepted the offer.

More Management

In the latter half of 2016, the decision was made by senior management to offshore a lot of the development team to India, mainly for cost reasons, resulting in several redundancies in my team. I was offered a newly created role of Software Engineering Manager with day-to-day management of the entire UK part of the development team that was remaining as well as general management of the whole development team.

This role would see me pushed towards a more managerial and less hands-on position which, even at the time, I wasn’t massively keen on or looking for. I think that, partly due to the safety blanket my current company offered and partly as it was a promotion (which is a good thing, right?!), I accepted once again.

I then spent two years in this position, dealing with the running of the development team and managing the processes we followed, as well as providing hands-on support to the team when needed. Due to another reshuffle, I was given direct management of the entire development team and another new line manager; my seventh since joining the company.

Is this the right path?

By this point, in early 2019, I was becoming concerned about where my career was heading as I still wanted to be doing a significant amount of hands-on work but I was spending an increasing amount of time in meetings, etc… doing things that I didn’t really feel were a good use of my time or skills. There were definitely a good few of these meetings that I felt fell into the category of “this could have been an email” as they were largely conveying decisions, regardless of whether they were presented as discussions or not.

One thing the department, and specifically my manager, had been discussing was the use of cloud hosting instead of the current bare-metal setup, and the technologies and flexibility that would open up. This discussion, especially combined with the enthusiasm from my manager at the time, pushed me to spend a lot of my own time looking into containerisation, including Kubernetes, as well as brushing up my development skills to help take the development team into the “brave new world” that the cloud offers, as we were still almost exclusively developing in .NET Framework. I was also thinking, at the same time, that this would be good for my CV, although this wasn’t my only motivation and I certainly didn’t have “one foot out of the door”.

Is This What I Want?

By late 2019/early 2020, I was giving serious thought to moving on to another company but I knew I needed to get my skills up to scratch and, at this point, I was thinking of going into a dual management/hands-on role. What I was sure of at this point was that my current role was useful to my current employer but wasn’t making me a good candidate for the wider job market and was therefore stunting my career. For this reason, I set myself a few goals:

  • Get Kubernetes up and running with a real production workload
  • Improve the logging being used within the team
  • Get the dev team practices updated to the standards common in the industry
  • Get application deployments, especially in Kubernetes, as smooth as possible to help “sell” the new way of working
  • Give the development team members a sense of ownership of a product from design and coding to deployment to production

Whilst there was some resistance along the way, particularly regarding unit testing and pull requests (yes, these weren’t really being done, which now scares me with hindsight!), standards were improved and Kubernetes progressed, with some applications migrating over to the new platform in 2021.

Time To Go!

Over the summer of 2021, my manager, who had also been a big supporter of the work I’d been doing with Kubernetes, decided to leave. This left a void in the company structure and also meant losing a vital ally for the “new way”. This had me asking questions like could I do their job, could I get their job or do I even want their job? After a few weeks without an update, I pushed for one from management and was told they were rejigging things again and not planning on having a direct replacement.

These changes resulted in a few conclusions/realisations for me:

  • My (management) career within the company had stalled with no realistic path forward without others leaving
  • My salary was rapidly approaching the ceiling for my grade and, having seen below-inflation rises on average over the past few years, was falling in real terms
  • Progression in a technical role wasn’t really possible after a certain level – a level I’d already passed
  • My role, due to team structure changes, had been devalued in my opinion with a key part of the development process moved into another team

It was at this point I decided I had to go – I was “piggy in the middle” with reduced autonomy, and job satisfaction had dwindled significantly, some of it overnight but some steadily over the previous year or two. A subsequent change a few months later further devalued my role although, in that case, I do believe the change was a good choice for the affected sub-team’s efficiency; it just didn’t help me.

I actually wondered, given how my role was changed, if management were trying to phase out my role (and therefore me!) once a long-running project completed; that was scheduled for mid-2022, although realistically it would be sometime in 2023. Whilst I never had any confirmation of this, it does illustrate how I felt about my position at the company.

I began applying for roles that were either hands-on or, at least, mainly hands-on. I was looking at both software engineer roles and DevOps engineer roles. After several months of searching, I found a role I liked so I handed in my three months’ notice and began doing a lot of handover meetings and padding out the detailed documentation I’d already written.

Three months later and my (almost) fourteen year tenure was over. Whilst I enjoyed the hands-on part of my role, the other parts were increasingly demotivating and so I was pleased to be going.

A (Brief) New Beginning

The role I’d accepted was doing software development, mainly using TypeScript/JavaScript centred around React and Express, both of which were new to me. There was some .NET Framework-based C# in the mix with plans for a key admin system being built in .NET 6/7 as well as moving their microservice hosting over to Kubernetes.

All was going well: I loved the team I was working with and was enjoying the work well enough while waiting for the main C# and/or Kubernetes work to start, but then opportunity came knocking, and the new employer was no longer certain of going down the Kubernetes path anyway.

In mid-September 2022, I was contacted by an agency with a few roles that were DevOps engineer roles in my preferred technologies of C#, Azure and Kubernetes. I was in my probationary period still (i.e. short notice period) and decided it was an opportunity I couldn’t pass up so I accepted one of the new roles and two weeks later I was gone.

And Here We Are…

I started my current role in early October 2022 and I’ve been loving it. The tech stack is the perfect fit, the team I’m working with have been very helpful and the big new Kubernetes project that’s coming up should be very interesting and challenging and I can’t wait to get my hands on it!

2024 Update

It’s now December 2024 and several things have changed at my current employer. I’ve had two changes of role following the departure of my team lead and other department reorganisations. My main project also started – R&D in December 2022 and proper in May 2023. This saw the first client-facing infrastructure going out in Q2 of 2024.

The size of the technology team at the company is much larger than at previous employers I’ve worked for (approximately 100 people), so several key functions such as "DevOps" (pipelines, etc…) and infrastructure are covered by different teams. This does lead to a narrower area of work than is often the case with a DevOps role. Fortunately, the project did allow me to get involved in both of these areas as the company moves to greater use of IaC and Terraform.

Whilst I’ve gained the exposure to Azure and Terraform that I’d been looking for in switching roles (twice!), I've not always had the same sense of ownership I’ve been used to - bigger pond and all that - but the state of play is an evolving one and hopefully 2025 will see greater ownership. Something I didn't think I’d be saying is that, at times, I am missing some of the physical interactions of being in the office. Unfortunately my current office is 1½ - 2 hours away so it’s not time or cost efficient to go in more regularly. I do try to make sure that some of those casual chats are still happening, albeit via Teams during catch ups with my direct reports.

Final Thoughts

Your career is just that, your career. Whilst people may help you out along the way and see your potential, they will likely only see things from the perspective of what you can offer them or the company. This may not be the best thing for you though so, for example, if you’re being pushed up the management chain, make sure that’s what you want.

Is there a perfect job? I’m not sure, but I think there are a few things that you ideally want:

  • Acceptable compensation (salary, leave, pension, etc…)
  • Working conditions (remote/hybrid/office working, flexible hours, part time, etc…)
  • A supportive team that you can rely on – this applies regardless of whether you are the junior or the manager
  • Work that keeps you challenged and interested

I think that if you look at what you do, it’s worth asking “would this look good on my CV?” and that’s not because it’s “all about you” but because it can be a good indication of whether you should be doing it. If your company is still coding exclusively in .NET Framework, you aren’t going to have as much choice compared to .NET 6/7/8 coders, especially as time goes on. There’s legacy support of course, but these legacy systems will eventually go.

I integrated the technologies I wanted on my CV into my role as I had the ability to do so and I had the support of my manager. Why did I have that support? It was a good set of skills for the company to have, as well as for me, and that is a good sign you’re on the right path.

I’d also recommend going to things like Meet Ups and seeing what the “real world” uses and even how many jokes are being made at the expense of the technologies you and your company are still using.

Whatever your motivation for looking for a new job – money, flexibility, technology, people or even just being bored and demotivated – look at what you have, look at what you need and what the market wants, and don’t be afraid of the challenge of building up your skills to get the career you want. Leaving a company, especially after a long time, can be a big and scary thing but once you do, taking that next step is much easier.

Delete a Kubernetes resource that is stuck "Terminating"

General Resource

Run the following command to force a resource to delete. It works by removing all finalizers. Only use this if a resource has become “stuck”.

kubectl patch resourcetype resource-name -n resource-namespace -p '{"metadata":{"finalizers":[]}}' --type=merge

Replace resourcetype with the type of resource (e.g. pod, helmrelease, etc…), replace resource-name with the name of your resource and resource-namespace with the namespace the resource belongs to.

If the resource has become orphaned (i.e. the namespace a resource belongs to has been deleted), recreate the namespace and then run the above command for each resource.

Namespace

If a namespace is stuck and the above method doesn’t work, run the following command, replacing my-namespace with the name of your namespace to delete.

kubectl get ns my-namespace -o json | jq '.spec.finalizers = []' | kubectl replace --raw "/api/v1/namespaces/my-namespace/finalize" -f -

Solution found on Stack Overflow.

Engineering approach

When I work on a project, small or large, this is the approach I like to take to designing and building a solution. This is written from the perspective of a new solution but could be adapted for existing solutions too.

High Level Design

The first place I start is a high level design. I don’t go into great detail or try to design a final system – I simply include the basic components I’ll need to get to something deliverable. This usually includes software components such as APIs (e.g. customers, orders, stock, etc…) and infrastructure (e.g. Kubernetes cluster, identity provider, secret storage, etc…). Sometimes specific technologies will be listed if these are known (e.g. if it’s an Azure house, then Azure Kubernetes Service (AKS), Entra ID and Azure Key Vault). This may be done as one diagram or two.

Breaking Up Work

I then take this high level design and think about the features and stories it will produce. Again, at this stage, nothing too detailed but thinking more about delivery of “something” even if it’s of no use.

As an example, let’s say we know we will be using AKS to host our application and have decided we will do this setup rather than look at container apps as a first stage. We’ve also decided we’ll be using a GitOps workflow (e.g. Flux) for deployments and, to start with, we want a simple API which has some kind of API key based access management. The initial requirements to get this API hosted and accessible might be:

  1. Set up repos for infra and API
  2. Create infrastructure*
  3. Create simple API
  4. Configure Flux
  5. Configure pipelines

* This first iteration may not be as secure or hardened as would be preferred but it’s nothing more than a first step with no data at risk.

Set up repos for infra and API

If Infrastructure-as-Code (IaC) is being used to manage your repos, simply add the two new repo names and you’re done. If you’re still doing this manually, create the appropriate repos after agreeing a naming convention. This is what I class as a sprint 0 story as it needs doing to unblock work on one or more other stories without creating (too many…) dependencies in a sprint. If running with a Kanban approach, this would be the first story to tackle.

Create infrastructure

Again, if an existing IaC setup is being used, this may be a very quick process. If not, infrastructure could be created manually or using something like Terraform. If time and skills permit, I’d opt for Terraform to ensure long-term consistency and it also makes it easier to iterate through changes as work progresses.

Remember that in this initial phase, only a single, small environment is required.

Create simple API

This is referring to nothing more than an API with a suitable set of stub endpoints. Nothing needs to be real in terms of CRUD activities, etc… If helper libraries exist, it’s recommended these be used to simplify setup and ensure consistency. For example, such a helper library could set up things like logging, basic health checks or config file loading.

As a strong advocate of Test Driven Development (TDD), I recommend starting with a TDD approach from the beginning, even for this “hello world” API.

Services should be built as independently as possible and as loosely coupled as possible, so it’s recommended that any config, even that which will eventually come from things like Azure Key Vault, references local files, letting Kubernetes manage this dependency. For databases such as SQL Server or Azure Storage, local Docker versions can be used (see the sketch below), not that any of these should be needed for this first cut of the app.
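
As a rough sketch of that local setup (the image tags, ports and password below are illustrative only), a docker-compose.yaml along these lines can stand in for SQL Server and Azure Storage (via Azurite) during development:

services:
  sql-server:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"                          # required by the SQL Server image
      MSSQL_SA_PASSWORD: "Local-0nly-Passw0rd"  # local development only
    ports:
      - "1433:1433"
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite
    ports:
      - "10000:10000"   # blob
      - "10001:10001"   # queue
      - "10002:10002"   # table

Run docker compose up -d and point the local config files at localhost.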

Configure Flux

Flux is a GitOps tool and, whilst it isn’t strictly necessary for the first cut, it is a reasonable first step that avoids resorting to more complex pipelines and manual deployments.

If you’re using a common, shared Helm chart for your services, that should significantly speed up the release of services as it will handle a lot of the boilerplate configuration and ensure consistency.

This initial Flux setup may include nothing but the ability to deploy an application using a Helm chart, or it could include broader requirements such as ingress, although as port forwarding can be used at these early stages, ingress may not be required yet.
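
As an illustration, a HelmRelease using such a shared chart might look like the sketch below (the chart, repository, namespace and image names are assumptions rather than a real setup):

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: customers-api
  namespace: apps
spec:
  interval: 5m
  chart:
    spec:
      chart: shared-service          # the common, shared chart for all services
      version: "1.x"
      sourceRef:
        kind: HelmRepository
        name: internal-charts
        namespace: flux-system
  values:
    image:
      repository: myregistry.azurecr.io/customers-api
      tag: "0.1.0"
    replicas: 1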

Pipelines

Hopefully a set of pipelines already exists in the form of templates that can be called with minimal parameters but, if not, this first pipeline should only do the basics of building and pushing an image to a container registry. If using Azure, Azure Container Registry (ACR) would be recommended for easier authentication.

It’s also highly recommended that a pull request pipeline be created at this time. Even when using trunk-based development, the process of going through a PR pipeline can ensure all tests are passing and security scans are successfully completed.
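
If using GitHub Actions, that first pipeline might look something like the sketch below. The registry and image names are placeholders and it assumes the job has already authenticated to Azure (e.g. via azure/login and a service principal):

name: Build and push API image
on:
  push:
    branches:
      - main

jobs:
  BuildAndPush:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4

    - name: Log in to ACR
      run: az acr login --name myregistry

    - name: Build and push image
      run: |
        docker build -t myregistry.azurecr.io/simple-api:${{ github.sha }} .
        docker push myregistry.azurecr.io/simple-api:${{ github.sha }}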

Potential Stories

The requirements above should lead to specific work items. These should be as small as possible (i.e. completable in no more than one day) whilst still delivering “something of value”. For example, if setting up the Terraform for new infrastructure will take 2-3 days based on previous experience, a single story for all of this work would be undesirable. A better breakdown would be:

  1. Provision global resource group with ACR instance – this could include setting up the repo and storage account to hold the state
  2. Provision suitable Entra groups and grant appropriate access to ACR
  3. Provision AKS cluster and needed VNET
  4. Provision a service principal for use with pipelines
  5. Create pull request pipeline which runs a terraform plan and is called from the infrastructure repo
  6. Create a deploy pipeline which runs a terraform apply and is called on any merging to main
  7. Set up a repo for the Flux configuration and bootstrap the cluster using Terraform

Notice that this list encompasses parts of several of the initial high level tasks identified. At this very early stage, some dependencies on previous stories are to be expected.

It’s likely that existing resources (e.g. pipelines, code templates or libraries, etc…) exist which will make some of these tasks quite small.

Methodology

Before you start “doing stuff”, the work management approach needs to be considered. An agile approach would be an obvious choice in today’s world, with Kanban and Scrum being the two more common approaches taken.

Scrum usually consists of work blocks (known as sprints) of 1 to 4 weeks, with 2 weeks being the most common. Each sprint has a goal and a collection of work to achieve that goal, with some items that “must” be completed, some that “should” be completed, some that “could” be completed if all goes to plan and some that “won’t” be completed unless work is finished much quicker than expected. This is called MoSCoW prioritisation and is a typical method used. You’ll often see priorities 1 to 4 used instead of MoSCoW but they usually translate to the same basic meanings.

Kanban is more like a continuous list of work with the queue regularly reviewed, resulting in work being added, reprioritised and potentially removed.

Either approach supports trunk-based development with regular releases i.e. there’s no need to wait until the end of a sprint to do a release.

Story Writing

Once the approach has been agreed, the team (engineers, product owner, etc…) should agree on what stories are needed, write them, refine them and agree on a priority. Once enough work has been prepared for the first sprint or first week or two for Kanban, work can commence.

As time goes on, it’s good to have one or two sprints ahead (1-2 months of work) planned out. Things can always be reprioritised but this helps begin to see a bigger picture and set time expectations.

The Components

Once the first batch of stories have been written, work can commence. For the various areas of engineering, I endeavour to adhere to the following:

Infrastructure-as-Code (IaC)

All infrastructure should be done as IaC to ensure consistency and repeatability. My tool of choice is Terraform and whilst this tool isn’t perfect, it’s the most popular tool available and, as such, has a large amount of support available and works with many providers including Azure, AWS, GitHub, SonarCloud, etc…

If something is to be tested by manual changes, it is recommended that these changes are made in a separate area (e.g. subscription) to the infrastructure managed by IaC.

Code

All code should be written using a Test Driven Development (TDD) approach with tests testing requirements (i.e. inputs and outputs) rather than the inner workings of a method. Ultimately a series of unit, integration and end-to-end tests should be developed and executed as part of release pipelines.

Any code committed to main must be releasable to production. This doesn’t necessarily mean it can be used on production – a feature gate could be keeping it hidden – but the code should be safe to release to production.

The use of static code analysis tools and tools such as Snyk can help scan code and containers for code or security issues ahead of any code being released to a server.

Logging, observability and monitoring are essential to keeping a system healthy and diagnosing problems when they arise. Suitable tooling, such as Prometheus and Grafana or Datadog, should be in place as early as possible in the development stage. The use of OpenTelemetry is strongly encouraged to enable easier migration to different tooling.

The observance of things like SOLID, DRY, KISS, etc… is always encouraged.

APIs and Microservices

When building microservices and APIs, I align to the following rules:

  1. Communication through interfaces – any communication, whether REST, gRPC or class, will communicate through interfaces and, for message-based/event-driven applications, an agreed schema
  2. Any service must only communicate with another service or the data belonging to another service through the above interfaces – no service A looking at service B’s data directly
  3. Any service must be independently deployable
  4. Any service should gracefully handle any unavailability of an external service
  5. Any API must be built to be externalisable (i.e. can be exposed to the public internet)
  6. Within a major version of an API, any changes should be non-breaking – when breaking changes are needed, a new major version should be created and the previous version(s) maintained

Separation of Concerns

It is strongly recommended that, where possible, things such as authentication, configuration, credentials, etc… are managed independently of the code. In other words, an application running locally should use local configuration files to store non-sensitive configuration (e.g. a connection to a local SQL Server instance is OK but nothing that is hosted). If sensitive values must be stored, use things like User Secrets as these aren’t part of the repo and therefore not stored in source control.

For loading sensitive values, where possible use managed identities (or equivalents) and, where these aren’t possible, try to use (service) accounts that are machine created (likely with IaC) and their credentials stored in things like Key Vault and never actually accessed/known by humans. These credentials along with other configuration values can then be made available to applications (i.e. pods) in Kubernetes using tools like External Secrets Operator and Azure App Configuration Provider and mounting them as files into the pod. Equally, storage can be mounted in a similar manner.
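
As a sketch of that pattern (the store, key and secret names here are made up), an External Secrets Operator ExternalSecret pulling a connection string from Key Vault into a Kubernetes secret might look like this:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: customers-api-secrets
  namespace: apps
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: azure-key-vault             # a store configured with a managed identity
  target:
    name: customers-api-secrets       # the Kubernetes secret to create and maintain
  data:
  - secretKey: sql-connection-string
    remoteRef:
      key: customers-api-sql-connection-string

The resulting Kubernetes secret can then be mounted into the pod as files (or exposed as environment variables) without the application ever talking to Key Vault directly.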

Putting It All Together

Work should be selected from the top of the queue with a goal to complete active stories (or bugs once they come in) before picking up new ones. In other words, pull requests should be given priority and then, before picking up a new story, see if you can potentially assist on an already active story.

When is it done?

It’s important to have a clear definition of done so that all parties agree what this means. As a minimum, it should mean a story has successfully completed a pull request to main but arguably it should mean, as a minimum, QA (either automated or manual) has been completed. I would suggest something is only “done” once it has been successfully deployed to production and confirmed to be working.

Istio and JWT authentication

I’ve been looking at a workflow which allows the JWT authentication in Istio to be used for both UI and API requests. The basic idea is to make use of Istio’s JWT authentication mechanism to control initial access and then use an authorization policy to control access to specific services.

The diagram below shows the basic workflow of how this will work. It has been simplified by removing things like refresh tokens or authorization policies.

Flow chart showing JWT flow

The basic idea is that once the initial auth is done against a 3rd party service such as Azure AD/Entra ID or Google Workspace, a signed token is generated and stored in a cookie. This cookie value is then used for further auth requests. For APIs, the cookie value can be used directly and passed to any API calls. For UIs, because Istio can’t currently extract tokens from cookies, an EnvoyFilter is used. This looks for a specific cookie and copies the value to the authorization header before forwarding the request to Istio’s authentication handler.
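
As a rough, untested sketch of that EnvoyFilter (the cookie name, filter name and selector are assumptions), a Lua filter on the ingress gateway could copy the cookie value into the authorization header before JWT validation runs:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: cookie-to-authorization-header
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: INSERT_FIRST
      value:
        name: cookie-to-authorization.lua
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inline_code: |
            function envoy_on_request(request_handle)
              local cookie = request_handle:headers():get("cookie")
              if cookie ~= nil and request_handle:headers():get("authorization") == nil then
                -- "auth_token" is the assumed name of the cookie holding the signed JWT
                local token = string.match(cookie, "auth_token=([^;]+)")
                if token ~= nil then
                  request_handle:headers():add("authorization", "Bearer " .. token)
                end
              end
            end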

If the token is valid (signed by a known private key and hasn’t expired), traffic is allowed to pass (subject to authorization policies); otherwise it is rejected. On unauthorised requests, APIs simply respond as unauthorised, whereas the UI redirects to the /auth page.

Creating a simple Kubernetes operator in .NET (C#)

Note: I'd recommend building a Kubernetes operator using Go as it is the language of choice for Kubernetes. This article is more of a proof of concept.

Creating a Kubernetes operator can seem a bit overwhelming at first. To help, there’s a simple NuGet package called CanSupportMe.Operator which can be as simple as watching for a secret or config map, or creating a custom resource definition (CRD) and watching that. You then get callbacks for new items, modified items, reconciling items, deleting items and deleted items. (* CRDs only)

The callbacks also expose a data object manager which lets you do things like create secrets and config maps, force the reconciliation of an object, clear all finalizers and check if a resource exists.

Example

The following is a console application with these packages installed:

dotnet add package Microsoft.Extensions.Configuration.Abstractions
dotnet add package Microsoft.Extensions.Hosting
dotnet add package Microsoft.Extensions.Hosting.Abstractions
dotnet add package CanSupportMe.Operator

If you’re not using a context with a token in your KUBECONFIG, use the commented-out failover token lines to specify one (how to generate one is covered on the NuGet package page).

using CanSupportMe.Operator.Extensions;
using CanSupportMe.Operator.Models;
using CanSupportMe.Operator.Options;
using Microsoft.Extensions.Hosting;

// const string FAILOVER_TOKEN = "<YOUR_TOKEN_GOES_HERE>";

try
{  
  Console.WriteLine("Application starting up");

  IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices((context, services) =>
    {
      services.AddOperator(options =>
      {
        options.Group = "";
        options.Kind = "Secret";
        options.Version = "v1";
        options.Plural = "secrets";
        options.Scope = ResourceScope.Namespaced;
        options.LabelFilters.Add("app.kubernetes.io/managed-by", "DemoOperator");

        options.OnAdded = (kind, name, @namespace, item, dataObjectManager) =>
        {
          var typedItem = (KubernetesSecret)item;

          Console.WriteLine($"On {kind} Add: {name} to {@namespace} which is of type {typedItem.Type} with {typedItem.Data?.Count} item(s)");
        };

        // options.FailoverToken = FAILOVER_TOKEN;
      });
    })
    .Build();

  host.Run();
}
catch (Exception ex)
{
  Console.WriteLine($"Application start failed because {ex.Message}");
}

To create a secret that will trigger the OnAdded call back, create a file called secret.yaml with the following contents:

apiVersion: v1
kind: Secret
metadata:
  name: test-secret-with-label
  namespace: default
  labels:
    app.kubernetes.io/managed-by: DemoOperator
stringData:
  notImportant: SomeValue
type: Opaque

Then apply it to your Kubernetes cluster using the following command:

kubectl apply -f secret.yaml

This should result in the following being output to the console:

On Secret Add: test-secret-with-label to default which is of type Opaque with 1 item(s)

My deployment to Azure using Terraform and Flux

To deploy my needed infrastructure and applications to Azure, I use a combination of Terraform and Flux, all running from GitHub Actions workflows. This is only a high level overview and some details have been excluded or skimmed over for brevity.

Terraform

For me, one of the biggest limitations of Terraform is how bad it is at DRY (i.e. Don’t Repeat Yourself). I wanted to maintain a pure Terraform solution whilst trying to minimise the repetition but also allow the easy spinning up of new clusters as needed. I also knew I needed a “global” environment as well as various deployment environments but, for now, development and production will suffice.

Modules

Each Terraform module is in its own repo with a gitinfo.txt file specifying the version of the module. On merge to main, a pipeline runs which tags the commit with Major, Major.Minor and Major.Minor.Patch tags so that the module can be pinned to a specific version or, if desired, a broader level of pinning can be used.
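
As a sketch of that tagging pipeline (it assumes gitinfo.txt contains just a Major.Minor.Patch version and that the workflow token is allowed to push tags), a GitHub Actions workflow could look like this:

name: Tag module version
on:
  push:
    branches:
      - main

jobs:
  TagModuleVersion:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4

    - name: Create and push tags
      shell: bash
      run: |
        version=$(grep -oE '[0-9]+\.[0-9]+\.[0-9]+' gitinfo.txt | head -n 1)
        major=$(echo "$version" | cut -d. -f1)
        minor=$(echo "$version" | cut -d. -f2)
        # Major and Major.Minor are moving tags; the full version tag is new each release
        git tag -f "$major"
        git tag -f "$major.$minor"
        git tag "$version"
        git push origin --tags --force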

Folder Structure

Each module contains a src folder, which contains an examples folder, for Terraform examples of the module being used, and a module folder, which contains the actual module. This is then referenced by the scope repos.

Scopes

Three scope repos are in use – Environment, Global and GitHub – which make use of the above modules and standard Terraform resources.

Global and GitHub are single-use scopes: they are only applied once and configure, respectively, all global resources (e.g. DNS zones) and all GitHub repositories. The latter holds configuration for all repositories in GitHub, including its own. The initial application of this, to get round the chicken and egg problem, will be covered in another article.

The environment scope is used to configure all environments. Within each environment there are regions, within those are farms and within those are clusters. This allows common resources to be placed at the appropriate level.

Folder Structure

Each scope contains a src folder which contains three sub folders:

  • collection – This contains the resources to be applied to the scope
  • config – This contains one or more tfvars files which, especially in the case of environments, contain the different configuration to be applied to each environment
  • scope – This is the starting point of a scope and is what is referenced by the plan and apply pipelines – it should only contain a single reference to the module in ../collection

Data Between Scopes

To pass data between scopes, the pipelines create secrets within GitHub which can then be referenced by other pipelines and passed to the target scope. The main example of this is the global scope needing to pass data, e.g. DNS details, to the environments.
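
As an example of that hand-off (the repo, secret and output names are illustrative, and it assumes a token with permission to write secrets on the target repo), a step in the global scope’s apply workflow might do something like this:

    - name: Publish DNS zone name for the environment scope
      shell: bash
      env:
        GH_TOKEN: ${{ secrets.SCOPE_SECRETS_TOKEN }}
      run: |
        dns_zone_name=$(terraform -chdir=src/scope output -raw dns_zone_name)
        gh secret set GLOBAL_DNS_ZONE_NAME --repo my-org/terraform-scope-environment --body "$dns_zone_name"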

Variables in Environment Scope

To aid with data passing between layers within the environment, a few variables have been set up which get passed through the layers and can have data added as they pass down the chain. These variables are defaults, created_infrastructure and credentials. As an example, when an environment-level key vault is created, its ID and name are added to created_infrastructure so that access can be set up for the AKS cluster using a managed identity.

Flux

Bootstrapping

When a cluster is provisioned, it is automatically given a cluster folder in the Flux repo and has a cluster_variables secret created to store values that either Flux or the Helm charts being applied by Flux may need, including things like the region or the global key vault’s name.
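
One way such a secret can be consumed is via Flux’s post-build variable substitution on a Kustomization. The sketch below shows the general mechanism rather than this exact setup (names and paths are illustrative, and the secret is shown as cluster-variables to keep the name valid for Kubernetes):

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/development/uksouth/f01/c01/infrastructure
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  postBuild:
    substituteFrom:
    - kind: Secret
      name: cluster-variables   # e.g. region, global key vault name

Manifests under that path can then reference values such as ${region} and have them filled in at apply time.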

Folder Structure

The top level folder is the clusters folder and it contains a _template folder along with a folder per environment. The individual environment folders then contain a folder for the region, the farm, and finally the cluster (e.g. development/uksouth/f01/c01). The c01 folder is based on the _template folder.

The remaining folders are applications, clients, infrastructure, redirects and services with each of these being referenced from the c01 folder.

The infrastructure folder contains setup manifests and Helm releases for things like Istio, External DNS or External Secrets Operator.

The redirects folder is split up by environment and defines any redirects which should be added for that environment. These are managed using a redirects Helm chart and a series of values passed to its HelmRelease object.

The services folder allows you to define services which should be deployed at the various levels of global, environmental, regional, farm or cluster level. There is a definitions folder containing the base definition for each service.

The applications folder defines applications which should be deployed to specific clusters and, as with services, there is a definitions folder which contains the default configuration. These are generally non-targeted applications such as a commercial landing page.

The final folder is clients which contains a definition for any client applications. It’s quite likely this folder may only contain a single definition if only a single SaaS application (note this is single application, not single microservice) exists. There are then the usual nested environment, region, farm and cluster folders with each cluster defining the clients that are deployed to that specific instance.

The NuGet Packages I Use

This article is simply a list of the common NuGet packages I use. This list should be viewed as a “this is what I do” list rather than a recommendation. This isn’t an exhaustive list and will be updated over time. For example, a few months ago I switched from Moq to NSubstitute. Please note that some of the packages are mine.

Development

  • DateTimeProvider
  • FluentResults
  • Serilog.AspNetCore
  • Serilog.Enrichers.Environment
  • Serilog.Enrichers.Thread
  • Serilog.Extras
  • Serilog.Settings.Configuration
  • Serilog.Sinks.Seq
  • System.IO.Abstractions
  • Throw

Testing

  • bUnit
  • FluentAssertions
  • NSubstitute

Create a Signed JWT and validate it in Istio using a JWK

This guide shows how to create a public/private key pair and how to use these to create a JWK and a signed JWT and then validate a request to a service running in Kubernetes where Istio is running.

Generate a Public/Private Key Pair

For this you’ll need openssl installed. If you have Chocolatey installed, choco install openssl -y will do the job.

Run the following commands:

ssh-keygen -t rsa -b 4096 -m PEM -f jwtRS256.key
# Don't add passphrase
openssl rsa -in jwtRS256.key -pubout -outform PEM -out jwtRS256.key.pub

This produces a .key file containing the private key and a .key.pub file containing the public key.

Create JWK from Public Key

Go to the JWK Creator site and paste in the contents of your public key. For the purpose, choose Signing and, for the algorithm, choose RS256.

Manifests

The following will secure all workloads that have a label of type with a value of api so that they must present a valid JWT.

apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: "validate-jwt"
  namespace: istio-system
spec:
  selector:
    matchLabels:
      type: api
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwks: |
      {
        "keys": [
          <YOUR_JWK_GOES_HERE>
        ]
      }

---

apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: "deny-requests-without-jwt"
  namespace: istio-system
spec:
  selector:
    matchLabels:
      type: api
  action: DENY
  rules:
  - from:
    - source:
        notRequestPrincipals: ["*"]

Replace <YOUR_JWK_GOES_HERE> with your JWK created in the previous step. Make sure your indentation doesn’t go to less than the existing { below the jwks: | line.

Create a Test JWT

The following PowerShell script will create a JWT that will last 3 hours and print it to the screen.

Clear-Host

# Install-Module jwtPS -force

Import-Module -Name jwtPS

$encryption = [jwtTypes+encryption]::SHA256
$algorithm = [jwtTypes+algorithm]::HMAC
$alg = [jwtTypes+cryptographyType]::new($algorithm, $encryption)

$key = "jwtRS256.key"
# The content must be joined otherwise you would have a string array.
$keyContent = (Get-Content -Path $key) -join ""
$payload = @{
    aud = "jwtPS"        
    iss = "testing@secure.istio.io"
    sub = "testing@secure.istio.io"
    nbf = 0
    groups = @(
      "group1",
      "group2"    
    )
    exp = ([System.DateTimeOffset]::Now.AddHours(3)).ToUnixTimeSeconds()
    iat = ([System.DateTimeOffset]::Now).ToUnixTimeSeconds()
    jti = [guid]::NewGuid()
}
$encryption = [jwtTypes+encryption]::SHA256
$algorithm = [jwtTypes+algorithm]::RSA
$alg = [jwtTypes+cryptographyType]::new($algorithm, $encryption)
$jwt = New-JWT -Payload $payload -Algorithm $alg -Secret $keyContent

Write-Host $jwt

Note that you will likely need to run the line Install-Module jwtPS -force in an elevated prompt.

Testing

To test this, you will need a service you can point at in your cluster with a label of type set to api. Don’t forget to update URLs as needed.

$TOKEN = jwt_from_previous_script
curl https://example.com/api/v1/test
curl --header "Authorization: Bearer $TOKEN" https://example.com/api/v1/test

In the above examples, the first request should return 401 and the second request should return 200.