

Bitbucket, SSH auth, Visual Studio and VS Code

Getting SSH key authentication working with Bitbucket from both Visual Studio 2019 (and potentially earlier versions) and VS Code can be challenging. The commands I've used are below.

VS Code

git config --global core.sshCommand C:/Windows/System32/OpenSSH/ssh.exe

Visual Studio

Visual Studio 2022 doesn't require additional changes, but when running 2019 (which is 32-bit) or earlier I had to run the following to get it to work. Note that this then caused VS Code to stop authenticating properly.

git config --global core.sshCommand "\"C:\Program Files\Git\usr\bin\ssh.exe\""

This requires Git to be installed from https://git-scm.com/.
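To check which SSH binary Git is picking up, and that Bitbucket accepts your key, the following standard commands should help (the ssh -T call prints a greeting rather than opening a shell):

git config --global --get core.sshCommand
ssh -T git@bitbucket.org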

Kaniko Setup for an Azure DevOps Linux build agent

NOTE: This document is still being tested so some parts may not quite work yet.

This document takes you through the process of setting up and using Kaniko for building containers on a Kubernetes-hosted Linux build agent without Docker being installed. This then allows for the complete removal of Docker from your worker nodes when switching over to containerd, etc…

For the purposes of this demo, the assumption is that a namespace called build-agents will be used to host Kaniko jobs and the Azure DevOps build agents. There is also a Docker secret required to push the container to Docker Hub.

Prerequisites

This process makes use of a ReadWriteMany (RWX) persistent storage volume and is assumed to be running using a build agent in the cluster as outlined in this Kubernetes Build Agent repo. The only change required is adding the following under the agent section (storageClass and size are optional):

  kaniko:
    enabled: true
    storageClass: longhorn
    size: 5Gi

Setup

Namespace

To set up your namespace for Kaniko (i.e. build-agents) run the following command:

kubectl create ns build-agents
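If you don't already have the Docker Hub secret mentioned above, it can be created with something like the following. The username, password/token and email are placeholders to replace with your own values, and the secret name matches the default used by the script below:

kubectl create secret docker-registry dockerhub-jabbermouth \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=YOUR_USERNAME \
  --docker-password=YOUR_PASSWORD_OR_TOKEN \
  -n build-agents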

Service Account

Next, create a file called kaniko-setup.sh and copy in the following script:

#!/bin/bash

namespace=build-agents
dockersecret=dockerhub-jabbermouth

while getopts ":n:d:?:" opt; do
  case $opt in
    n) namespace="$OPTARG"
    ;;
    d) dockersecret="$OPTARG"
    ;;
    ?) 
    echo "Usage: helpers/kaniko-setup.sh [OPTIONS]"
    echo
    echo "Options"
    echo "  n = namespace to create kaniko account in (default: $namespace)"
    echo "  d = name of Docker Hub secret to use (default: $dockersecret)"
    exit 0
    ;;
    \?) echo "Invalid option -$OPTARG" >&2
    ;;
  esac
done

echo
echo Removing existing file if present
rm -f kaniko-user.yaml

echo
echo Generating new user creation manifests
cat <<EOM >kaniko-user.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-runner
rules:
  -
    apiGroups:
      - ""
      - apps
    resources:
      - pods
      - pods/log
    verbs: ["get", "watch", "list", "create", "delete", "update", "patch"]

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kaniko
  namespace: $namespace
imagePullSecrets:
- name: $dockersecret

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kaniko-pod-runner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-runner
subjects:
- kind: ServiceAccount
  name: kaniko
  namespace: $namespace
EOM

echo
echo Applying user manifests
kubectl apply -f kaniko-user.yaml

echo
echo Tidying up manifests
rm kaniko-user.yaml

echo
echo Getting secret
echo
secret=$(kubectl get serviceaccounts kaniko -n $namespace -o jsonpath='{.secrets[*].name}')

token=$(kubectl get secret $secret -n $namespace -o jsonpath='{.data.token}')

echo Paste the following token where needed:
echo
echo $token | base64 --decode
echo

This can then be executed using the following command:

bash kaniko-setup.sh 
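To override the defaults, pass the flags defined in the script, for example (names illustrative):

bash kaniko-setup.sh -n my-build-agents -d my-dockerhub-secret

Note that on Kubernetes 1.24 and later, token secrets are no longer generated automatically for service accounts, so the final step may print nothing; in that case you can create a token manually with kubectl create token kaniko -n build-agents.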

Token and Secure File

Create a new Azure DevOps library group called KanikoUserToken and add an entry to it. Name the variable ServiceAccount_Kaniko, paste the token from above into its value, and make it a secret.

Under “Pipeline permissions” for the group, click the three vertical dots next to the + and choose “Open access”. This allows the pipeline below to use the group without needing explicit authorization. The pipeline also expects your cluster's kubeconfig to be uploaded as a secure file named build-agent-kaniko.config (Pipelines | Library | Secure files).

Azure Pipeline

The example below assumes the build is done using template files and this particular one would be for the container build and push. Update the parameter defaults as required. This will deploy to a repository on Docker Hub in a lowercase form of REPOSITORY_NAME with . replaced with - to make it compliant. This can be modified as required.

parameters:
  REPOSITORY_NAME: ''
  TAG: ''
  BRANCH_PREFIX: ''
  REGISTRY_SECRET: 'dockerhub-jabbermouth'
  DOCKER_HUB_IDENTIFIER: 'jabbermouth'

jobs:  
- job: Build${{ parameters.TAG }}
  displayName: 'Build ${{ parameters.TAG }}'
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/heads/${{ parameters.BRANCH_PREFIX }}'))
  pool: 'Docker (Linux)'
  variables:
  - group: KanikoUserToken
  steps:
  - task: KubectlInstaller@0
    inputs:
      kubectlVersion: 'latest'

# copy code to shared folder: kaniko/buildId
  - task: CopyFiles@2
    displayName: Copy files to shared Kaniko folder
    inputs:
      SourceFolder: ''
      Contents: '**'
      TargetFolder: '/kaniko/$(Build.BuildId)/'
      CleanTargetFolder: true

# download K8s config file
  - task: DownloadSecureFile@1
    name: fetchK8sConfig
    displayName: Download Kaniko config
    inputs:
      secureFile: 'build-agent-kaniko.config'

# create pod script with folder mapped to kaniko/buildId
  - task: Bash@3
    displayName: Execute pod and wait for result
    inputs:
      targetType: 'inline'
      script: |
        #Create a deployment yaml to create the Kaniko Pod
        cat > deploy.yaml <<EOF
        apiVersion: v1
        kind: Pod
        metadata:
          name: kaniko-$(Build.BuildId)
          namespace: build-agents
        spec:
          imagePullSecrets:
          - name: ${{ parameters.REGISTRY_SECRET }}
          containers:
          - name: kaniko
            image: gcr.io/kaniko-project/executor:latest
            args:
            - "--dockerfile=Dockerfile"
            - "--context=/src/$(Build.BuildId)"
            - "--destination=${{ parameters.DOCKER_HUB_IDENTIFIER }}/${{ replace(lower(parameters.REPOSITORY_NAME),'.','-') }}:${{ parameters.TAG }}"
            volumeMounts:
            - name: kaniko-secret
              mountPath: /kaniko/.docker
            - name: source-code
              mountPath: /src
          restartPolicy: Never
          volumes:
          - name: kaniko-secret
            secret:
              secretName: ${{ parameters.REGISTRY_SECRET }}
              items:
              - key: .dockerconfigjson
                path: config.json
          - name: source-code
            persistentVolumeClaim:
              claimName: $DEPLOYMENT_NAME-kaniko
        EOF

        echo Applying pod definition to server
        kubectl apply -f deploy.yaml -n build-agents --token=$(ServiceAccount_Kaniko)

        # await pod completing
        # Monitor for Success or failure        
        while [[ $(kubectl get pods kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents -o jsonpath='{..status.phase}') != "Succeeded" && $(kubectl get pods kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents -o jsonpath='{..status.phase}') != "Failed" ]]; do echo "waiting for pod kaniko-$(Build.BuildId): $(kubectl logs kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents | tail -1)" && sleep 10; done

        # Exit the script with error if build failed        
        if [ $(kubectl get pods kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents -o jsonpath='{..status.phase}') == "Failed" ]; then 
            echo Build or push failed - outputting log
            echo
            kubectl logs kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents
            echo 
            echo Now deleting pod...
            kubectl delete -f deploy.yaml -n build-agents --token=$(ServiceAccount_Kaniko)

            echo Removing build source files
            rm -R -f /kaniko/$(Build.BuildId)

            exit 1;
        fi

        # if pod succeeded, delete the pod
        echo Build and push succeeded and now deleting pod
        kubectl delete -f deploy.yaml -n build-agents --token=$(ServiceAccount_Kaniko)

        echo Removing build source files
        rm -R -f /kaniko/$(Build.BuildId)

This template is called using something like:

  - template: templates/job-build-container.yaml
    parameters:
      REPOSITORY_NAME: 'Your.Repository.Name'
      TAG: 'latest'

Running H2R Graphics V2 output on a Raspberry Pi in kiosk mode

Overview

This article gives a step-by-step guide to getting an H2R Graphics V2 output display appearing on a Raspberry Pi automatically at boot. Some of the steps outlined here are optional and it's assumed you are starting from a clean Pi using the latest image.

Whilst these instructions have been developed for H2R Graphics specifically, they will work for any URL you want to open in kiosk (i.e. fullscreen) mode when a Raspberry Pi boots up.

Step-by-Step

The left-most HDMI port on the Pi should be used for your output.

Download the latest Raspberry Pi OS and write it to an SD Card

On the Pi, change the default password and configure some options:

  1. Raspberry Pi logo | Preferences | Raspberry Pi Configuration
  2. Change the password
  3. Change hostname as required
  4. Set “Network at Boot” to “Wait for network”
  5. Switch to “Display” tab
  6. Set “Screen Blanking” to “Disabled”
  7. Set “Headless Resolution” to “1920×1080” (this is a “just in case” thing)
  8. Switch to “Interfaces” and enable SSH and VNC as required
  9. Click “OK”

If prompted to reboot, do this.

If a wireless network is required, set this up

Download the latest updates and install them – reboot when finished (icon indicates updates in the top right) – repeat if required

To set a static IP (optional), enter the following in a terminal:

sudo nano /etc/dhcpcd.conf

Add the following lines with your appropriate values:

interface NETWORK
static ip_address=STATIC_IP/24
static routers=ROUTER_IP
static domain_name_servers=DNS_IP

Where:

NETWORK = your network connection type: eth0 (Ethernet) or wlan0 (wireless)
STATIC_IP = the static IP address you want to set for the Raspberry Pi
ROUTER_IP = the gateway IP address for your router on the local network
DNS_IP = the DNS IP address (typically the same as your router's gateway address)
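For example, a Pi on wireless given the address 192.168.1.50 on a typical home network might use the following (all values illustrative):

interface wlan0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1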

Reboot the Pi and don't forget to add a DNS entry if you wish to identify your Pi by name rather than IP.

Run the following command:

sudo nano /etc/xdg/lxsession/LXDE-pi/autostart

Add the following line:

@/usr/bin/chromium-browser --kiosk  --disable-restore-session-state http://h2r.machine.name:4001/output/ABCD

Replace h2r.machine.name with the name of the machine running H2R Graphics V2. Note that the host machine may need firewall changes to allow remote access.

Reboot and confirm

Note, to quit kiosk mode, use Ctrl+F4.

Dual Screens

If you have the inputs to spare on your switcher, you can output in dual-screen mode. The three most obvious options are key & fill, preview and output, or output 1 & output 2. Below are the lines to add to the autostart file instead of the one above.

Preview & Output

@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/0" --window-position=0,0 http://h2r.machine.name:4001/preview/ABCD
@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/1" --window-position=1920,0 http://h2r.machine.name:4001/output/ABCD

Key & Fill

@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/0" --window-position=0,0 http://h2r.machine.name:4001/output/ABCD/?bg=%23000&key=true
@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/1" --window-position=1920,0 http://h2r.machine.name:4001/output/ABCD/?bg=%23000

Output 1 and Output 2

@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/0" --window-position=0,0 http://h2r.machine.name:4001/output/ABCD
@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/1" --window-position=1920,0 http://h2r.machine.name:4001/output/ABCD/2

Having tried this on a Raspberry Pi 4 Model B (8GB) running in Key & Fill mode, I'd say it works but it's on the edge of being smooth, especially once you add a few graphics.

Build container from Visual Studio built Dockerfile

If you need to build a project using the Visual Studio-generated Dockerfile, run docker build from the solution folder and specify the file.

Assuming the project you are working on is called Your.Project.Api and this is also the name of the project folder, run the following command from the root folder where the solution file and project folder(s) are located.

docker build -f Your.Project.Api/Dockerfile .

On Windows, you can use a forward or back slash in the file path, but on Linux it must be a forward slash.
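You'll typically also want to tag the image so you can refer to it afterwards; the tag below is an arbitrary example:

docker build -t your-project-api:latest -f Your.Project.Api/Dockerfile .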

Longhorn Restart

If your Longhorn setup gets ‘stuck’, run this script to trigger a restart of all the Longhorn pods.

kubectl rollout restart daemonset engine-image-ei-d4c780c6 -n longhorn-system
kubectl rollout restart daemonset longhorn-csi-plugin -n longhorn-system
kubectl rollout restart daemonset longhorn-manager -n longhorn-system

kubectl rollout restart deploy csi-attacher -n longhorn-system
kubectl rollout restart deploy csi-provisioner -n longhorn-system
kubectl rollout restart deploy csi-resizer -n longhorn-system
kubectl rollout restart deploy csi-snapshotter -n longhorn-system
kubectl rollout restart deploy longhorn-driver-deployer -n longhorn-system
kubectl rollout restart deploy longhorn-ui -n longhorn-system
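Note that the engine-image daemonset name (engine-image-ei-d4c780c6 above) is unique to each installation, so check yours first with:

kubectl get daemonsets -n longhorn-system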

If you wish to make this script executable, save it to a file (e.g. restart-longhorn.sh) and put the following as the first line of the script file:

#!/bin/bash

And then run the following command:

chmod +x restart-longhorn.sh
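You can then run the script directly:

./restart-longhorn.sh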

Run an NGINX server for a folder

If you're testing a simple frontend-only web site and want to access it in a browser via a web server, run the following command from PowerShell in the folder containing your HTML, etc… to set up an NGINX Docker container pointing at that folder:

docker run -it -d -p 8100:80 -v ${PWD}:/usr/share/nginx/html nginx

If you're using command prompt instead of PowerShell, use:

docker run -it -d -p 8100:80 -v %cd%:/usr/share/nginx/html nginx

You'll then be able to access your site at http://localhost:8100/. You can set the port to any unused value between 1 and 65535 by changing the first part of the -p argument.
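If you plan to stop the container later, naming it makes that easier; local-site below is an arbitrary name:

docker run -it -d -p 8100:80 --name local-site -v ${PWD}:/usr/share/nginx/html nginx
docker stop local-site
docker rm local-site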

Build a Docker container from your SQL backup

The following Dockerfile and restore-backup.sql file combine with a SQL backup file (my-database.bak in this example) to produce a Docker container. This will allow you to spin up a SQL Server container with your backup available via the SA account. Needless to say, this should not be used in a production environment without adding users, etc… as part of the process. Also, by default, the Developer edition is used and this is not licensed for production use.

The first step is to create a folder and in it put your my-database.bak file and two new files as below:

Dockerfile

FROM mcr.microsoft.com/mssql/server:2019-latest AS build
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD="ARandomPassword123!"

WORKDIR /tmp
COPY *.bak .
COPY restore-backup.sql .

RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
    && sleep 5 \
    && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "ARandomPassword123!" -i /tmp/restore-backup.sql \
    && pkill sqlservr

FROM mcr.microsoft.com/mssql/server:2019-latest AS release

ENV ACCEPT_EULA=Y

COPY --from=build /var/opt/mssql/data /var/opt/mssql/data

restore-backup.sql

RESTORE DATABASE [MyDatabase] 
FROM DISK = '/tmp/my-database.bak'
WITH FILE = 1,
MOVE 'MyDatabase_Data' TO '/var/opt/mssql/data/MyDatabase.mdf',
MOVE 'MyDatabase_Log' TO '/var/opt/mssql/data/MyDatabase.ldf',
NOUNLOAD, REPLACE, STATS = 5
GO

USE MyDatabase
GO

DBCC SHRINKFILE (MyDatabase_Data, 1)
GO

ALTER DATABASE MyDatabase
SET RECOVERY SIMPLE;  
GO  

DBCC SHRINKFILE (MyDatabase_Log,1)
GO

ALTER DATABASE MyDatabase
SET RECOVERY FULL;  
GO  

DBCC SHRINKDATABASE ([MyDatabase])
GO

Note that your database's logical file names are unlikely to be MyDatabase_Data and MyDatabase_Log, so you will need to replace these with your actual logical file names. Also note that the data file's name sometimes doesn't have _Data on the end.
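If you're unsure of the logical file names, you can list them straight from the backup with the following T-SQL (run against any SQL Server instance that can see the file):

RESTORE FILELISTONLY FROM DISK = '/tmp/my-database.bak'
GO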

To build the container, you then run the following command from within your folder:

docker build -t database-backup:latest .

You can then run this container using the following command:

docker run -d -p 11433:1433 --memory=6g --cpus=2 database-backup:latest

This will also limit your container to 6GB of RAM and 2 CPU cores.

After 5 or 10 seconds, you can connect to your database from SQL Server Management Studio. The server name is “MachineName,11433” where MachineName is the name of your computer. For Authentication, choose “SQL Server Authentication”, enter “sa” as the username and “ARandomPassword123!” as the password.

Removing a failed/no longer available control plane node from the etcd cluster

If you have a control plane node fail in a cluster to the point where you can no longer connect to it, this is how to remove it, including its etcd membership, from your cluster.

The first step is to delete the node itself. For these examples, we'll assume the node is called kubernetes-cp-gone. Run the following command from a working control plane:

kubectl delete node kubernetes-cp-gone

Next we need to tidy up etcd. Firstly, we'll tidy up the Kubernetes-level configuration by running the following command:

kubectl -n kube-system edit cm kubeadm-config

Once in vi, delete the three lines underneath apiEndpoints that correspond to the deleted server (press the Insert key to enter editing mode). Once done, save your changes by pressing Escape, then typing :wq followed by Enter.

Next, you need to get the name of a working and available etcd pod. You can do this by typing the following:

kubectl get pods -n kube-system | grep etcd-

Next, enter the following command, replacing etcd-pod with the name of one of your working etcd pods:

kubectl exec -n kube-system etcd-pod -it -- etcdctl --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key member list -w table

Take note of the ID of the failed node, then run the following command, again replacing etcd-pod with the name of your working etcd pod and failednodeid with the ID you noted:

kubectl exec -n kube-system etcd-pod -it -- etcdctl --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key member remove failednodeid

Getting Ansible running from a Windows 10 machine

From the Microsoft Store, install the latest version of Ubuntu (just follow the instructions on screen) and then apply any updates using the following:

sudo apt update
sudo apt upgrade -y

And then, to install Ansible itself, run the following commands:

sudo apt install software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install ansible -y

You can then confirm it's installed by running the following command:

ansible --version
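As a quick smoke test, you can also run Ansible's ping module against the local machine, which should return “pong”:

ansible localhost -m ping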