
2022

Running an Azure DevOps Build Agent in Kubernetes

The basics…

Firstly, create a new namespace called build-agents to hold the new agents and also create a PAT in Azure DevOps, ideally associated with a service principal.

kubectl create ns build-agents

If desired, add your Docker Hub account details so that image pulls are authenticated against your account rather than being subject to the anonymous pull rate limits.

kubectl create secret docker-registry dockerhub-account --docker-username=dockerhub-account-name --docker-password=dockerhub-account-password --docker-email= --namespace build-agents

In the above command, dockerhub-account is the name the secret will have in Kubernetes; change dockerhub-account-name to your Docker Hub account name and dockerhub-account-password to your Docker Hub password or an access token.

Next create a simple YAML file with the following values:

buildagent-settings.yaml

image:
  pullPolicy: Always
  pullSecrets:
    - dockerhub-account

personalAccessToken: "your-ads-pat"

agent:
  organisationUrl: "https://dev.azure.com/your-org-name/"
  pool: "Default"
  dockerNodes: true

In the above file, change your-ads-pat to a Personal Access Token (PAT) you’ve generated with “Read & manage” agent pool permissions, and change your-org-name to the name of your Azure DevOps organisation. If you wish to deploy the agent to a pool other than “Default”, update pool as well. The image section, or either of its sub-values, can be removed entirely if desired. If you’re not running Docker for your container hosting (from Kubernetes 1.24, Docker is no longer used as the container runtime, with containerd typically used instead), set dockerNodes to false.

Once you are done, it’s time to install the first agent:

helm repo add jabbermouth https://helm.jabbermouth.co.uk
helm install -n build-agents -f buildagent-settings.yaml linux-agent-1 jabbermouth/AzureDevOpsBuildAgent

To add an additional build agent, simply run the following:

helm install -n build-agents -f buildagent-settings.yaml linux-agent-2 jabbermouth/AzureDevOpsBuildAgent
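
Agents installed this way are managed like any other Helm release, so updating an agent after changing the settings file, or removing one, is just the usual Helm workflow:

helm upgrade -n build-agents -f buildagent-settings.yaml linux-agent-1 jabbermouth/AzureDevOpsBuildAgent
helm uninstall -n build-agents linux-agent-2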

The container image is available on Docker Hub, while the files used to build the container and the Helm chart code can both be found on GitHub.

Trunk-based development with release branches

I would no longer advocate this approach, in favour of releasing from trunk directly. For my latest thinking, please see my engineering approach article.

There are many ways to develop software and different ways to use branches within that workflow. This article outlines one such approach which is centred around trunk-based development with release branches to get code out to production.

This is not true continuous delivery but does allow a good balance to be achieved between continuous integration and planned deliveries, particularly for teams or projects without the automated testing in place to support a true CI/CD pipeline.

For the purposes of this article, Azure DevOps will be used when a specific platform is referenced regarding version control and deployment pipelines.

This article will be updated (and republished if appropriate) as the process evolves. It is being published now as a starting point and to share ideas.

Environments

For this article, the assumption is that four environments are in play: local development, an integration environment, and a blue/green staging and production environment. For the most part, and for the purposes of this article, a setup with separate staging and production environments will work just the same.

Branches

There will be four types of branches used in this model:

  • main
  • story/*
  • release/*
  • *-on-release-*

Branch Protection

It is recommended that very few people can directly push to main or a release/* branch (anyone should be able to create the latter, though). It’s also recommended that self-approving PRs are disabled.

Branch Overviews

This section provides a summary of the different branches that are expected within this workflow.

main

The main branch is deployed to the development environment and should be regularly* getting updated with the latest code changes. Any code committed to this branch should be releasable with new features that shouldn’t be available on production protected using feature flagging. This branch should be protected from direct changes.

  • No code should be in a branch outside main that is older than a day – stories or tasks should be small enough to support this

story/* (or task/*, hotfix/* or bug/*)

These branches should be short-lived and regularly merged into main (I recommend using pull requests). If working on a story that will take longer than a day, it makes sense for the branch to last longer than a day, but regular merges to main should still be happening.

release/*

These branches are created from the main branch and will trigger a push to a staging environment. If a problem is found, changes shouldn’t be made directly to the release branch; instead they should be made against main (via a story branch and a pull request) and then a cherry pick pull request made to the release branch. These branches should be protected from direct changes.

Once a release to production has completed successfully, these branches could be deleted, but it may be preferable to keep at least some of them for reference (perhaps the latest three) in case an issue occurs.

The naming format of the part after release/ can be anything as long as it’s unique per release. If daily or less frequent releases are being done, release/yyyyMMdd could be used (e.g. release/20221017). Alternatively, a major.minor.revision approach can be taken.
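
As a concrete example of cutting a release using the date-based naming above (the branch name is illustrative), the branch is created from an up-to-date main and pushed so that the staging deployment is triggered:

git checkout main
git pull
git checkout -b release/20221017
git push -u origin release/20221017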

*-on-release-*

These branches only come into play when a cherry pick is needed from main to a release branch. The naming format used here is the one that Azure DevOps uses when using the cherry pick functionality (accessible by the three-dot/kebab menu) on a commit’s details page. Once the target branch (i.e. a release/* branch) is selected, the default topic branch name is populated and will look something like 79744d39-on-release-20221017.
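
If you prefer the command line to the Azure DevOps cherry pick UI, the equivalent manual flow looks roughly like this (the commit hash and branch names are illustrative); the resulting branch is then merged into the release branch via a pull request:

git fetch origin
git checkout -b 79744d39-on-release-20221017 origin/release/20221017
git cherry-pick 79744d39
git push -u origin 79744d39-on-release-20221017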

Branching Workflow Example

The following example doesn’t distinguish between origin and local versions of branches, the assumption being that any local branches are kept up to date. A good approach for this is rebasing the child branch onto the parent branch and then, for story branches, force pushing from local to origin using git push -f. If more than one engineer is working on a story branch, this approach will likely cause issues and isn’t recommended. Whilst multiple engineers working on a story is fine, they probably shouldn’t be working on the same story branch (e.g. one does the API, one does the UI).
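
For a single-engineer story branch, the keep-up-to-date approach described above boils down to something like the following (the story branch name is illustrative):

git checkout story/add-login
git fetch origin
git rebase origin/main
git push -f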

[Figure: Git branching diagram showing the four branch types]

Code Quality and Testing

For trunk-based development to work well, it’s essential that there is confidence in the quality of the code being merged into the main branch and potentially released at any time.

There are several tools and processes that can be used to improve code quality and increase release confidence, including:

  • Pull requests
  • Comprehensive unit testing
  • Test Driven Development
  • Behaviour Driven Development
  • Regression Testing

In the following sections, each of these topics will be covered.

Pull Requests

When code is being merged into the main branch, or when a cherry pick is needed into a release branch via a *-on-release-* branch, a pull request should be used. This gets a second pair of eyes on the code, and branch policies should also be in place that trigger a build and run any unit tests before allowing the merge to complete.

Things to look for in a pull request include but are not limited to:

  • Missing or incomplete unit testing
  • Where required, feature flag not in place
  • "Code smells" – code written in a ways that is not good practice and/or may be detrimental to performance, security, etc…
  • Hard coded values
  • Redundant code
  • Non-compliance with SOLID, DRY, etc…
  • Not meeting the requirements of the story, task, etc…
  • Microservice code written in a tightly coupled way
  • Circular references
  • Legal requirements not met e.g. accessibility, cookie compliance, etc…

Some of these may be validly missing from the code (e.g. the requirement is covered by another task, the cookie banner is another story, etc…) but remember that the code being approved should be releasable, so unit testing shouldn’t be missing in most cases and, for incomplete features, a feature flag should likely be in place.

Unit Tests

Having unit tests for your code is a good idea for (at least) two reasons:

  1. You can have confidence when making changes to code that existing functionality hasn’t been broken
  2. Your test confirms your code does what you think it does

Code coverage is used to measure how much of your code is covered by unit testing. A good minimum level is 80%, although in some situations a higher level may be advised.
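
As one way of measuring this for a .NET solution (assuming the test projects reference the coverlet.collector package, which the default test templates include), coverage can be collected as part of a test run:

dotnet test --collect:"XPlat Code Coverage"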

Test Driven Development (TDD)

The TL;DR version of this is: write the tests before the code, the idea being that the requirement is captured before the implementation. TDD uses a red, green, refactor approach, which means a test is written that fails (because there is no implementation), then the implementation is written to make the test pass, and then the code is refactored to get rid of “code smells”, etc… If gaps in the tests are noted, they should be filled, once again using the red, green, refactor approach.

Some YouTube videos on the subject are available from Continuous Delivery and JetBrains.

Behaviour Driven Development (BDD)

BDD is intended to provide clear, human-readable requirements to the engineers that can be translated directly into runnable tests. Doing this using TDD is recommended. If using Visual Studio and .NET, SpecFlow is one of the most popular choices.

Requirements are written in the form of Given… When… Then… statements to define a requirement. For example:

Given a user is authenticated and authorised
When they visit the landing page
Then they are automatically redirected to the main dashboard

Regression Testing

This kind of testing is about making sure a site as a whole works and is often done with manual testing or automated using tools/frameworks like Selenium or Playwright.

Ideally, especially when running automated testing, these should be run against a clean environment that is spun up for the purposes of testing, to guarantee a fixed starting point and to help reduce the brittleness often inherent in automated regression testing. This would include all code, databases and other services. A full dataset may not be required and, if testing against a large dataset, may not be cost or time efficient.
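
As a rough sketch of what this can look like with containers (the compose file name and the Playwright test command are illustrative and assume the project defines them), a disposable environment is brought up, the regression suite runs against it, and everything is torn down afterwards:

docker compose -f docker-compose.regression.yml up -d --wait
npx playwright test
docker compose -f docker-compose.regression.yml down -v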

Pipelines

A key part of any CI/CD setup is the pipeline(s) used to build and deploy the application. My preferred approach is a templated one as it helps:

  • Minimise repetition
  • Allow for a single application configuration file
  • Keep scripting in languages (e.g. PowerShell Core and Bash) that minimise tying to a specific platform (i.e. Windows or Linux) or pipeline technology (i.e. Azure Pipelines, Octopus, etc…)
  • Standardise on Helm for Kubernetes deployments

.NET 5/6, Docker and custom NuGet server

When using a custom NuGet server, and you've added a nuget.config file to the solution, you'll need to add the following line to the default Dockerfile generated by Visual Studio to allow the container to be built.

COPY ["nuget.config", "/src/"]

This should be placed before the RUN dotnet restore … line.

The filename is case sensitive within the container, so using all lowercase for the file name is recommended. If you need to change the case, you may need to do two commits of the file (e.g. rename NuGet.config to nuget1.config, commit, then rename it to nuget.config and commit again).
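
Using Git directly, the two-step rename suggested above might look like this:

git mv NuGet.config nuget1.config
git commit -m "Rename NuGet.config (step 1 of 2)"
git mv nuget1.config nuget.config
git commit -m "Rename nuget.config (step 2 of 2)"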

If running in a CI/CD pipeline and you have fixed custom NuGet servers, you can inject a nuget.config file in the CI pipeline; however, the file will still need referencing in the Dockerfile as above for it to be used by the container build process.
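
As a sketch of how that injection might look in a Bash step before the container build (NUGET_FEED_URL, NUGET_USER and NUGET_PAT are illustrative secrets exposed as environment variables, and the my-app tag is also just an example):

# write a nuget.config pointing at the custom feed, then build as normal
cat > nuget.config <<EOF
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="custom" value="$NUGET_FEED_URL" />
  </packageSources>
  <packageSourceCredentials>
    <custom>
      <add key="Username" value="$NUGET_USER" />
      <add key="ClearTextPassword" value="$NUGET_PAT" />
    </custom>
  </packageSourceCredentials>
</configuration>
EOF
docker build -t my-app .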

Update a Blazor UI when a message is pushed to a RabbitMQ queue

Scenario: I've pushed a message to RabbitMQ for a long running service and I want to be notified when it's finished. There are several ways this could be done but the way I wanted to try, given RabbitMQ was already part of the setup, was to have a message appearing on a queue trigger a UI update whilst making use of Blazored.Toast.

It's quite likely what's discussed here will work in Blazor WebAssembly but I've only tried it in Blazor Server.

The code for this project is available in GitHub.

Prerequisites and Setup

For the purposes of this documentation, it is assumed an instance of RabbitMQ is running locally. If you need to set one up, install Docker (if required) then run the following command:

docker run --name rabbitmq -d --restart always -p 5672:5672 -p 15672:15672 rabbitmq:management

The admin UI will be accessible on http://localhost:15672/ with a username and password of guest.

To begin, create a standard Blazor Server app (this can be done in the Visual Studio UI or from the command line):

dotnet new blazorserver -n Demo.RabbitMQ.Notifications

Get RabbitMQ Client working

Firstly, install the RabbitMQ.Client and Blazored.Toast NuGet packages into the Blazor Server project.

Next, create a new class file (e.g. RabbitMQ.cs) and, if desired, place it in a suitable folder (e.g. Messaging) and then populate it with the following, updating the namespace as needed:

using RabbitMQ.Client.Events;
using RabbitMQ.Client;
using System.Text;

namespace Demo.RabbitMQ.Notifications.Messaging;

public class RabbitMQ
{
    public static event Func<string, Task> MessageReceived;

    public async void Setup(CancellationToken cancellationToken)
    {
        var factory = new ConnectionFactory() { HostName = "host.docker.internal" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "user-queue",
                                 durable: true,
                                 exclusive: false,
                                 autoDelete: false,
                                 arguments: null);

            // Consume messages from the queue and forward them to any subscribers
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (model, ea) =>
            {
                var body = ea.Body.ToArray();
                var message = Encoding.UTF8.GetString(body);
                MessageReceived?.Invoke(message);
            };
            channel.BasicConsume(queue: "user-queue",
                                 autoAck: true,
                                 consumer: consumer);

            while (!cancellationToken.IsCancellationRequested)
            {
                await Task.Delay(1000);
            }
        }
    }
}

Note that in the above snippet the hostname and queue name are hard-coded; these should normally come from configuration.

In _Imports.razor, add the following two using statements:

@using Blazored.Toast
@using Blazored.Toast.Services

Within Program.cs, add a using statement for Blazored.Toast:

using Blazored.Toast;

Then, at the end of the other service registrations, add the following:

builder.Services.AddBlazoredToast();

Next, within Shared/MainLayout.razor, add the following three lines at the top of the page:

@implements IDisposable
@inject IToastService toastService

<BlazoredToasts />

Next, add or update the @code section with the following content:

@code {
    CancellationTokenSource messageConsumerCancellationToken = new();

    protected override void OnAfterRender(bool firstRender)
    {
        if (firstRender)
        {
            Messaging.RabbitMQ.MessageReceived += async (receivedMessage) => toastService.ShowInfo(receivedMessage);

            new Messaging.RabbitMQ().Setup(messageConsumerCancellationToken.Token);
        }
    }

    public void Dispose()
    {
        messageConsumerCancellationToken.Cancel();
    }
}

This is set to run in OnAfterRender rather than OnInitialized or OnInitializedAsync so that any messages currently in the queue are displayed without needing to use any kind of caching.

Finally, in Pages/_Layout.cshtml, add a reference to the Blazored.Toast CSS above the css/site.css reference:

<link href="_content/Blazored.Toast/blazored-toast.min.css" rel="stylesheet" />

Try it out!

Now go to http://localhost:15672/#/queues (log in if prompted) and notice there’s no “user-queue” queue yet. Now run the application and the new queue will appear in the admin UI. Click on it and scroll down to the “Publish message” section. Type in some text, click the “Publish message” button and quickly switch back to your app; a toast notification should be visible in the top right.
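
If you'd rather publish the test message from a terminal than the admin UI, the RabbitMQ management API can be used; a minimal sketch with the default guest credentials and the queue name used above (the message body is arbitrary):

curl -u guest:guest \
  -H "Content-Type: application/json" \
  -d '{"properties":{},"routing_key":"user-queue","payload":"Hello from curl","payload_encoding":"string"}' \
  http://localhost:15672/api/exchanges/%2F/amq.default/publish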

If the menu is obscuring the toast, add the following to the bottom of wwwroot/css/site.css:

.blazored-toast-container {
    z-index: 999;
}

Create TypeScript Next.js app that runs in a container

Note: This guide uses Node 18 and Yarn and is based on the documentation on the Next.js site.

Go to the location where you wish to create the new project, create a new directory and then change into that directory. Next, run the following command to initialise the project:

yarn create next-app --typescript .

To run the app locally on port 3000 in development mode and have the site auto refresh on changes, run the following command:

yarn dev

Next, modify next.config.js so that module.exports looks as below:

module.exports = {
  ...nextConfig,
  output: 'standalone',
};

Now, in the root of the project, create a new file called Dockerfile and populate it with the following:

# Install dependencies only when needed
FROM node:18-alpine AS build
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app

# Install dependencies based on the preferred package manager
COPY package.json yarn.lock ./

RUN yarn

COPY . .

# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1

RUN yarn build

# Production image, copy all the files and run next
FROM node:18-alpine AS runner
WORKDIR /app

ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=build /app/next.config.js ./
COPY --from=build /app/public ./public
COPY --from=build /app/package.json ./package.json

# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=build --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=build --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT 3000

CMD ["node", "server.js"]

To build the container, run the following command, adjusting the tag (-t) parameter as required:

docker build -t jabbermouth/demo-nextjs .

To run this locally on port 3100, run the following command, adjusting the tag as needed:

docker run -it --rm -p 3100:3000 jabbermouth/demo-nextjs

Setup local Redis container

To set up a local Redis container that automatically starts up on reboot, run the following command:

docker run --name redis -d --restart always -p 6379:6379 redis:latest

To include a UI, run the following command:

docker run --name redis-ui -d --restart always -p 8001:8001 redislabs/redisinsight
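
To quickly check the Redis container is responding, a ping via the bundled CLI should return PONG:

docker exec -it redis redis-cli ping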

Yarn setup for Express API with TypeScript

This article talks through the process of setting up a basic Express API using TypeScript and Yarn for package management instead of NPM.

In the folder you'd like to create your project in, type the following commands. For the purpose of this project, we'll assume the folder being used is called “api”.

mkdir api
cd api
yarn init --yes

This sets up the initial project so that additional packages including Express can be installed.

Next, we'll add the Express and DotEnv packages:

yarn add express dotenv

At this point we could test things, but we want a TypeScript app, so next we'll add the required TypeScript packages as dev dependencies using -D:

yarn add -D typescript @types/express @types/node

The next step is to generate a tsconfig.json file which is accomplished by running the following command:

yarn tsc --init

In tsconfig.json, locate the commented-out // "outDir": "./" line, uncomment it and set the path to ./dist so the line looks like the following (the comment at the end of the line can be left in if desired):

"outDir": "./dist",

Next create a file called index.ts in the root of the project and populate it with the following example code:

import express, { Express, Request, Response } from 'express';
import dotenv from 'dotenv';

dotenv.config();

const app: Express = express();
const port = process.env.PORT;

app.get('/', (req: Request, res: Response) => {
  res.send('Express + TypeScript Server');
});

app.listen(port, () => {
  console.log(`⚡️[server]: Server is running at http://localhost:${port}`);
});

To make development easier, we'll add a few tools as dev dependencies:

yarn add -D concurrently nodemon

Next, add or replace the scripts section of package.json with the following (I suggest before the dependencies section):

  "scripts": {
    "build": "npx tsc",
    "start": "node dist/index.js",
    "dev": "concurrently \"npx tsc --watch\" \"nodemon -q dist/index.js\""
  },

Also update the value of main in package.json from index.js to dist/index.js so it points at the compiled output.

Next, create a .env file in the root of the project and populate it with the following (updating the port if required):

PORT=3100

Finally, start the server in dev mode by typing the following:

yarn dev

A link to the site will be shown. Using the port suggested above, this will be http://localhost:3100/, which when accessed from a browser will show a page containing just the message “Express + TypeScript Server”.
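
When you want to run the compiled output rather than the watch-based dev mode, the build and start scripts defined earlier can be used:

yarn build
yarn start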

Bitbucket, SSH auth, Visual Studio and VS Code

Getting SSH key authentication working with Bitbucket from within both Visual Studio 2019 (or potentially earlier) and VS Code can be challenging. Some commands I've used are below.

VS Code

git config --global core.sshCommand C:/Windows/System32/OpenSSH/ssh.exe

Visual Studio

Visual Studio 2022 doesn't require additional changes, but when running 2019 (i.e. the 32-bit version) or earlier, I had to run the following to get it to work; this then caused VS Code to stop authenticating properly.

git config --global core.sshCommand "\"C:\Program Files\Git\usr\bin\ssh.exe\""

This requires Git to be installed from https://git-scm.com/.
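
To check which SSH binary Git is currently configured to use, and to confirm that Bitbucket accepts your key (the second command should report successful authentication), the following can help:

git config --global core.sshCommand
ssh -T git@bitbucket.org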

Kaniko Setup for an Azure DevOps Linux build agent

NOTE: This document is still being tested so some parts may not quite work yet.

This document takes you through the process of setting up and using Kaniko for building containers on a Kubernetes-hosted Linux build agent without Docker being installed. This then allows for the complete removal of Docker from your worker nodes when switching over to containerd, etc…

For the purposes of this demo, the assumption is that a namespace called build-agents will be used to host Kaniko jobs and the Azure DevOps build agents. There is also a Docker secret required to push the container to Docker Hub.

Prerequisites

This process makes use of a ReadWriteMany (RWX) persistent storage volume and is assumed to be running using a build agent in the cluster as outlined in this Kubernetes Build Agent repo. The only change required is adding the following under the agent section (storageClass and size are optional):

  kaniko:
    enabled: true
    storageClass: longhorn
    size: 5Gi
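
With that added to the buildagent-settings.yaml used earlier, the existing agent release can be updated in place, for example:

helm upgrade -n build-agents -f buildagent-settings.yaml linux-agent-1 jabbermouth/AzureDevOpsBuildAgent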

Setup

Namespace

To set up your namespace for Kaniko (i.e. build-agents) run the following command:

kubectl create ns build-agents

Service Account

Next, create a file called kaniko-setup.sh and copy in the following script:

#!/bin/bash

namespace=build-agents
dockersecret=dockerhub-jabbermouth

while getopts ":n:d:?:" opt; do
  case $opt in
    n) namespace="$OPTARG"
    ;;
    d) dockersecret="$OPTARG"
    ;;
    ?) 
    echo "Usage: helpers/kaniko-setup.sh [OPTIONS]"
    echo
    echo "Options"
    echo "  n = namespace to create kaniko account in (default: $namespace)"
    echo "  d = name of Docker Hub secret to use (default: $dockersecret)"
    exit 0
    ;;
    \?) echo "Invalid option -$OPTARG" >&2
    ;;
  esac
done

echo
echo Removing existing file if present
rm -f kaniko-user.yaml

echo
echo Generating new user creating manifests
cat <<EOM >kaniko-user.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-runner
rules:
  -
    apiGroups:
      - ""
      - apps
    resources:
      - pods
      - pods/log
    verbs: ["get", "watch", "list", "create", "delete", "update", "patch"]

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kaniko
  namespace: $namespace
imagePullSecrets:
- name: $dockersecret

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kaniko-pod-runner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-runner
subjects:
- kind: ServiceAccount
  name: kaniko
  namespace: $namespace
EOM

echo
echo Applying user manifests
kubectl apply -f kaniko-user.yaml

echo
echo Tidying up manifests
rm kaniko-user.yaml

echo
echo Getting secret
echo
secret=$(kubectl get serviceAccounts kaniko -n $namespace -o=jsonpath={.secrets[*].name})

token=$(kubectl get secret $secret -n $namespace -o=jsonpath={.data.token})

echo Paste the following token where needed:
echo
echo $token | base64 --decode
echo

This can then be executed using the following command:

bash kaniko-setup.sh 
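
If your namespace or Docker Hub secret name differ from the script defaults, they can be passed using the -n and -d options handled by the getopts block above, for example:

bash kaniko-setup.sh -n build-agents -d dockerhub-account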

Token and Secure File

Create a new Azure DevOps library group called KanikoUserToken and add an entry to it. Name the variable ServiceAccount_Kaniko, copy the token output above into the value, and mark it as a secret.

Under “Pipeline permissions” for the group, click the three vertical dots next to the + and choose “Open access”. This allows the variable group to be used by the pipeline below without needing to authorise each pipeline individually.

Azure Pipeline

The example below assumes the build is done using template files and this particular one would be for the container build and push. Update the parameter defaults as required. This will deploy to a repository on Docker Hub in a lowercase form of REPOSITORY_NAME with . replaced with - to make it compliant. This can be modified as required.

parameters:
  REPOSITORY_NAME: ''
  TAG: ''
  BRANCH_PREFIX: ''
  REGISTRY_SECRET: 'dockerhub-jabbermouth'
  DOCKER_HUB_IDENTIFIER: 'jabbermouth'

jobs:  
- job: Build${{ parameters.TAG }}
  displayName: 'Build ${{ parameters.TAG }}'
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/heads/${{ parameters.BRANCH_PREFIX }}'))
  pool: 'Docker (Linux)'
  variables:
  - group: KanikoUserToken
  steps:
  - task: KubectlInstaller@0
    inputs:
      kubectlVersion: 'latest'

# copy code to shared folder: kaniko/buildId
  - task: CopyFiles@2
    displayName: Copy files to shared Kaniko folder
    inputs:
      SourceFolder: ''
      Contents: '**'
      TargetFolder: '/kaniko/$(Build.BuildId)/'
      CleanTargetFolder: true

# download K8s config file
  - task: DownloadSecureFile@1
    name: fetchK8sConfig
    displayName: Download Kaniko config
    inputs:
      secureFile: 'build-agent-kaniko.config'

# create pod script with folder mapped to kaniko/buildId
  - task: Bash@3
    displayName: Execute pod and wait for result
    inputs:
      targetType: 'inline'
      script: |
        #Create a deployment yaml to create the Kaniko Pod
        cat > deploy.yaml <<EOF
        apiVersion: v1
        kind: Pod
        metadata:
          name: kaniko-$(Build.BuildId)
          namespace: build-agents
        spec:
          imagePullSecrets:
          - name: ${{ parameters.REGISTRY_SECRET }}
          containers:
          - name: kaniko
            image: gcr.io/kaniko-project/executor:latest
            args:
            - "--dockerfile=Dockerfile"
            - "--context=/src/$(Build.BuildId)"
            - "--destination=${{ parameters.DOCKER_HUB_IDENTIFIER }}/${{ replace(lower(parameters.REPOSITORY_NAME),'.','-') }}:${{ parameters.TAG }}"
            volumeMounts:
            - name: kaniko-secret
              mountPath: /kaniko/.docker
            - name: source-code
              mountPath: /src
          restartPolicy: Never
          volumes:
          - name: kaniko-secret
            secret:
              secretName: ${{ parameters.REGISTRY_SECRET }}
              items:
              - key: .dockerconfigjson
                path: config.json
          - name: source-code
            persistentVolumeClaim:
              claimName: $DEPLOYMENT_NAME-kaniko
        EOF

        echo Applying pod definition to server
        kubectl apply -f deploy.yaml -n build-agents --token=$(ServiceAccount_Kaniko)

        # await pod completing
        # Monitor for success or failure
        while [[ $(kubectl get pods kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents -o jsonpath='{..status.phase}') != "Succeeded" && $(kubectl get pods kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents -o jsonpath='{..status.phase}') != "Failed" ]]; do echo "waiting for pod kaniko-$(Build.BuildId): $(kubectl logs kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents | tail -1)" && sleep 10; done

        # Exit the script with error if build failed        
        if [ $(kubectl get pods kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents -o jsonpath='{..status.phase}') == "Failed" ]; then 
            echo Build or push failed - outputting log
            echo
            kubectl logs kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents
            echo 
            echo Now deleting pod...
            kubectl delete -f deploy.yaml -n build-agents --token=$(ServiceAccount_Kaniko)

            echo Removing build source files
            rm -R -f /kaniko/$(Build.BuildId)

            exit 1;
        fi

        # if pod succeeded, delete the pod
        echo Build and push succeeded and now deleting pod
        kubectl delete -f deploy.yaml -n build-agents --token=$(ServiceAccount_Kaniko)

        echo Removing build source files
        rm -R -f /kaniko/$(Build.BuildId)

This template is called using something like:

  - template: templates/job-build-container.yaml
    parameters:
      REPOSITORY_NAME: 'Your.Repository.Name'
      TAG: 'latest'

Running H2R Graphics V2 output on a Raspberry Pi in kiosk mode

Overview

This article gives a step-by-step guide to getting an H2R Graphics V2 output display appearing on a Raspberry Pi automatically at boot. Some of the steps outlined here are optional and it's assumed you are starting from a clean Pi using the latest image.

Whilst these instructions have been developed for H2R Graphics specifically, they will work for any URL you want to open in kiosk (i.e. fullscreen) mode when a Raspberry Pi boots up.

Step-by-Step

The left-most HDMI port on the Pi should be used for your output.

Download the latest Raspberry Pi OS and write it to an SD card.

On the Pi, change the default password and configure some options:

  1. Raspberry Pi logo | Preferences | Raspberry Pi Configuration
  2. Change the password
  3. Change hostname as required
  4. Set “Network at Boot” to “Wait for network”
  5. Switch to “Display” tab
  6. Set “Screen Blanking” to “Disabled”
  7. Set “Headless Resolution” to “1920×1080” (this is a “just in case” thing)
  8. Switch to “Interfaces” and enable SSH and VNC as required
  9. Click “OK”

If prompted to reboot, do this.

If a wireless network is required, set this up.

Download the latest updates and install them, rebooting when finished (an icon in the top right indicates updates are available); repeat if required.

To set a static IP (optional), enter the following in a terminal:

sudo nano /etc/dhcpcd.conf

Add the following lines with your appropriate values:

interface NETWORK
static ip_address=STATIC_IP/24
static routers=ROUTER_IP
static domain_name_servers=DNS_IP

Where:

NETWORK = your network connection type: eth0 (Ethernet) or wlan0 (wireless)
STATIC_IP = the static IP address you want to set for the Raspberry Pi
ROUTER_IP = the gateway IP address for your router on the local network
DNS_IP = the DNS IP address (typically the same as your router's gateway address)

Reboot the Pi and don't forget to add a DNS entry if you wish to refer to your Pi by name rather than IP.

Run the following command:

sudo nano /etc/xdg/lxsession/LXDE-pi/autostart

Add the following line:

@/usr/bin/chromium-browser --kiosk  --disable-restore-session-state http://h2r.machine.name:4001/output/ABCD

Replace h2r.machine.name with the name of the machine running H2R Graphics V2. Note that the host machine may need firewall changes to allow remote access.

Reboot and confirm the output appears as expected.

Note, to quit kiosk mode, use Ctrl+F4.

Dual Screens

If you have the inputs to spare on your switcher, you can output in dual screen mode. The three most obvious options are key & fill, preview and output, or output 1 & output 2. Below are the lines to add to the autostart file instead of the single line above.

Preview & Output

@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/0" --window-position=0,0 http://h2r.machine.name:4001/preview/ABCD
@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/1" --window-position=1920,0 http://h2r.machine.name:4001/output/ABCD

Key & Fill

@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/0" --window-position=0,0 http://h2r.machine.name:4001/output/ABCD/?bg=%23000&key=true
@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/1" --window-position=1920,0 http://h2r.machine.name:4001/output/ABCD/?bg=%23000

Output 1 and Output 2

@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/0" --window-position=0,0 http://h2r.machine.name:4001/output/ABCD
@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/1" --window-position=1920,0 http://h2r.machine.name:4001/output/ABCD/2

Having tried this on a Raspberry Pi 4 Model B+ (8GB) running in Key & Fill mode, I'd say it works but it's on the edge of being smooth, especially when you add a few graphics.