
Yarn setup for Express API with TypeScript

This article talks through the process of setting up a basic Express API using TypeScript and Yarn for package management instead of NPM.

In the folder you'd like to create your project in, type the following commands. For the purpose of this project, we'll assume the folder being used is called “api”.

mkdir api
cd api
yarn init --yes

This sets up the initial project so that additional packages including Express can be installed.

Next, we'll add the Express and DotEnv packages:

yarn add express dotenv

At this point, we could test things, but we want a TypeScript app, so next we'll add the required TypeScript packages as dev dependencies using -D:

yarn add -D typescript @types/express @types/node

The next step is to generate a tsconfig.json file which is accomplished by running the following command:

yarn tsc --init

In tsconfig.json, locate the commented-out line // "outDir": "./", uncomment it, and set the path to ./dist so the line looks like the following (the comment at the end of the line can be left in if desired):

"outDir": "./dist",

Next create a file called index.ts in the root of the project and populate it with the following example code:

import express, { Express, Request, Response } from 'express';
import dotenv from 'dotenv';

dotenv.config();

const app: Express = express();
const port = process.env.PORT;

app.get('/', (req: Request, res: Response) => {
  res.send('Express + TypeScript Server');
});

app.listen(port, () => {
  console.log(`⚡️[server]: Server is running at http://localhost:${port}`);
});

To make development easier, we'll add a few tools as dev dependencies:

yarn add -D concurrently nodemon

Next, add or replace the scripts section of package.json with the following (I suggest before the dependencies section):

  "scripts": {
    "build": "npx tsc",
    "start": "node dist/index.js",
    "dev": "concurrently \"npx tsc --watch\" \"nodemon -q dist/index.js\""
  },

And update the value of main from index.js to dist/index.js so it points at the compiled output.

Finally, create a .env file in the root of the project and populate it with the following (updating the port if required):

PORT=3100

Now start the server in dev mode by typing the following:

yarn dev

The console will show the address the server is running at. Using the port suggested above, this will be http://localhost:3100/, which, when opened in a browser, will show a plain page displaying the message “Express + TypeScript Server”.
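
If you'd rather check from the terminal, a quick request with curl (assuming the port above) should return the same message:

curl http://localhost:3100/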

Kaniko Setup for an Azure DevOps Linux build agent

NOTE: This document is still being tested so some parts may not quite work yet.

This document takes you through the process of setting up and using Kaniko for building containers on a Kubernetes-hosted Linux build agent without Docker being installed. This then allows for the complete removal of Docker from your worker nodes when switching over to containerd, etc…

For the purposes of this demo, the assumption is that a namespace called build-agents will be used to host Kaniko jobs and the Azure DevOps build agents. There is also a Docker secret required to push the container to Docker Hub.

Prerequisites

This process makes use of a ReadWriteMany (RWX) persistent storage volume and is assumed to be running using a build agent in the cluster as outlined in this Kubernetes Build Agent repo. The only change required is adding the following under the agent section (storageClass and size are optional):

  kaniko:
    enabled: true
    storageClass: longhorn
    size: 5Gi

Setup

Namespace

To set up your namespace for Kaniko (i.e. build-agents) run the following command:

kubectl create ns build-agents
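
The Docker Hub secret mentioned earlier also needs to exist in this namespace. If it hasn't been created yet, something like the following should work (the secret name matches the script default below; the username and password are placeholders for your own Docker Hub credentials):

kubectl create secret docker-registry dockerhub-jabbermouth --docker-server=https://index.docker.io/v1/ --docker-username=YOUR_DOCKER_USERNAME --docker-password=YOUR_DOCKER_PASSWORD -n build-agents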

Service Account

Next, create a file called kaniko-setup.sh and copy in the following script:

#!/bin/bash

namespace=build-agents
dockersecret=dockerhub-jabbermouth

while getopts ":n:d:?:" opt; do
  case $opt in
    n) namespace="$OPTARG"
    ;;
    d) dockersecret="$OPTARG"
    ;;
    ?) 
    echo "Usage: helpers/kaniko-setup.sh [OPTIONS]"
    echo
    echo "Options"
    echo "  n = namespace to create kaniko account in (default: $namespace)"
    echo "  d = name of Docker Hub secret to use (default: $dockersecret)"
    exit 0
    ;;
    \?) echo "Invalid option -$OPTARG" >&2
    ;;
  esac
done

echo
echo Removing existing file if present
rm -f kaniko-user.yaml

echo
echo Generating new user creation manifests
cat <<EOM >kaniko-user.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-runner
rules:
  -
    apiGroups:
      - ""
      - apps
    resources:
      - pods
      - pods/log
    verbs: ["get", "watch", "list", "create", "delete", "update", "patch"]

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kaniko
  namespace: $namespace
imagePullSecrets:
- name: $dockersecret

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kaniko-pod-runner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-runner
subjects:
- kind: ServiceAccount
  name: kaniko
  namespace: $namespace
EOM

echo
echo Applying user manifests
kubectl apply -f kaniko-user.yaml

echo
echo Tidying up manifests
rm kaniko-user.yaml

echo
echo Getting secret
echo
secret=$(kubectl get serviceAccounts kaniko -n $namespace -o=jsonpath={.secrets[*].name})

token=$(kubectl get secret $secret -n $namespace -o=jsonpath={.data.token})

echo Paste the following token where needed:
echo
echo $token | base64 --decode
echo

This can then be executed using the following command:

bash kaniko-setup.sh 
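
To target a different namespace or Docker Hub secret, pass the flags handled by the script's getopts block, for example:

bash kaniko-setup.sh -n build-agents -d dockerhub-jabbermouth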

Token and Secure File

Create a new Azure DevOps library group called KanikoUserToken and add an entry to it. Name the variable ServiceAccount_Kaniko, paste the token output above into its value, and mark it as a secret.

Under “Pipeline permissions” for the group, click the three vertical dots next to the + and choose “Open access”. This allows the pipeline below to read the token without needing explicit authorization.

Azure Pipeline

The example below assumes the build is done using template files; this particular template handles the container build and push. Update the parameter defaults as required. It will deploy to a repository on Docker Hub using a lowercase form of REPOSITORY_NAME with . replaced by - to make the name compliant. This can be modified as required.

parameters:
  REPOSITORY_NAME: ''
  TAG: ''
  BRANCH_PREFIX: ''
  REGISTRY_SECRET: 'dockerhub-jabbermouth'
  DOCKER_HUB_IDENTIFIER: 'jabbermouth'

jobs:  
- job: Build${{ parameters.TAG }}
  displayName: 'Build ${{ parameters.TAG }}'
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/heads/${{ parameters.BRANCH_PREFIX }}'))
  pool: 'Docker (Linux)'
  variables:
  - group: KanikoUserToken
  steps:
  - task: KubectlInstaller@0
    inputs:
      kubectlVersion: 'latest'

# copy code to shared folder: kaniko/buildId
  - task: CopyFiles@2
    displayName: Copy files to shared Kaniko folder
    inputs:
      SourceFolder: ''
      Contents: '**'
      TargetFolder: '/kaniko/$(Build.BuildId)/'
      CleanTargetFolder: true

# download K8s config file
  - task: DownloadSecureFile@1
    name: fetchK8sConfig
    displayName: Download Kaniko config
    inputs:
      secureFile: 'build-agent-kaniko.config'

# create pod script with folder mapped to kaniko/buildId
  - task: Bash@3
    displayName: Execute pod and wait for result
    inputs:
      targetType: 'inline'
      script: |
        #Create a deployment yaml to create the Kaniko Pod
        cat > deploy.yaml <<EOF
        apiVersion: v1
        kind: Pod
        metadata:
          name: kaniko-$(Build.BuildId)
          namespace: build-agents
        spec:
          imagePullSecrets:
          - name: ${{ parameters.REGISTRY_SECRET }}
          containers:
          - name: kaniko
            image: gcr.io/kaniko-project/executor:latest
            args:
            - "--dockerfile=Dockerfile"
            - "--context=/src/$(Build.BuildId)"
            - "--destination=${{ parameters.DOCKER_HUB_IDENTIFIER }}/${{ replace(lower(parameters.REPOSITORY_NAME),'.','-') }}:${{ parameters.TAG }}"
            volumeMounts:
            - name: kaniko-secret
              mountPath: /kaniko/.docker
            - name: source-code
              mountPath: /src
          restartPolicy: Never
          volumes:
          - name: kaniko-secret
            secret:
              secretName: ${{ parameters.REGISTRY_SECRET }}
              items:
              - key: .dockerconfigjson
                path: config.json
          - name: source-code
            persistentVolumeClaim:
              claimName: $DEPLOYMENT_NAME-kaniko
        EOF

        echo Applying pod definition to server
        kubectl apply -f deploy.yaml -n build-agents --token=$(ServiceAccount_Kaniko)

        # Await pod completion, monitoring for success or failure
        while [[ $(kubectl get pods kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents -o jsonpath='{..status.phase}') != "Succeeded" && $(kubectl get pods kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents -o jsonpath='{..status.phase}') != "Failed" ]]; do echo "waiting for pod kaniko-$(Build.BuildId): $(kubectl logs kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents | tail -1)" && sleep 10; done

        # Exit the script with error if build failed        
        if [ $(kubectl get pods kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents -o jsonpath='{..status.phase}') == "Failed" ]; then 
            echo Build or push failed - outputting log
            echo
            kubectl logs kaniko-$(Build.BuildId) --token=$(ServiceAccount_Kaniko) -n build-agents
            echo 
            echo Now deleting pod...
            kubectl delete -f deploy.yaml -n build-agents --token=$(ServiceAccount_Kaniko)

            echo Removing build source files
            rm -R -f /kaniko/$(Build.BuildId)

            exit 1;
        fi

        # if pod succeeded, delete the pod
        echo Build and push succeeded and now deleting pod
        kubectl delete -f deploy.yaml -n build-agents --token=$(ServiceAccount_Kaniko)

        echo Removing build source files
        rm -R -f /kaniko/$(Build.BuildId)

This template is called using something like:

  - template: templates/job-build-container.yaml
    parameters:
      REPOSITORY_NAME: 'Your.Repository.Name'
      TAG: 'latest'

Running H2R Graphics V2 output on a Raspberry Pi in kiosk mode

Overview

This article gives a step-by-step guide to getting an H2R Graphics V2 output display appearing on a Raspberry Pi automatically at boot. Some of the steps outlined here are optional and it's assumed you are starting from a clean Pi using the latest image.

Whilst these instructions have been developed for H2R Graphics specifically, they will work for any URL you want to open in kiosk (i.e. fullscreen) mode when a Raspberry Pi boots up.

Step-by-Step

The left-most HDMI port on the Pi should be used for your output.

Download the latest Raspberry Pi OS and write it to an SD Card

On the Pi, change the default password and configure some options:

  1. Raspberry Pi logo | Preferences | Raspberry Pi Configuration
  2. Change the password
  3. Change hostname as required
  4. Set “Network at Boot” to “Wait for network”
  5. Switch to “Display” tab
  6. Set “Screen Blanking” to “Disabled”
  7. Set “Headless Resolution” to “1920×1080” (this is a “just in case” thing)
  8. Switch to “Interfaces” and enable SSH and VNC as required
  9. Click “OK”

If prompted to reboot, do this.

If a wireless network is required, set this up

Download the latest updates and install them – reboot when finished (icon indicates updates in the top right) – repeat if required

To set a static IP (optional), enter the following in a terminal:

sudo nano /etc/dhcpcd.conf

Add the following lines with your appropriate values:

interface NETWORK
static ip_address=STATIC_IP/24
static routers=ROUTER_IP
static domain_name_servers=DNS_IP

Where:

NETWORK = your network connection type: eth0 (Ethernet) or wlan0 (wireless)
STATIC_IP = the static IP address you want to set for the Raspberry Pi
ROUTER_IP = the gateway IP address for your router on the local network
DNS_IP = the DNS IP address (typically the same as your router's gateway address)

Reboot the Pi and don't forget to add a DNS entry if you wish to refer to your Pi by name rather than IP.
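
After the reboot, you can confirm the address has been applied by running the following in a terminal:

hostname -I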

Run the following command:

sudo nano /etc/xdg/lxsession/LXDE-pi/autostart

Add the following line:

@/usr/bin/chromium-browser --kiosk --disable-restore-session-state http://h2r.machine.name:4001/output/ABCD

Replace h2r.machine.name with the name of the machine running H2R Graphics V2. Note that the host machine may need firewall changes to allow remote access.

Reboot and confirm

Note, to quit kiosk mode, use Ctrl+F4.

Dual Screens

If you have the inputs to spare on your switcher, you can output in dual-screen mode. The three most obvious options are key & fill, preview and output, or output 1 & output 2. Below are the lines to add to the autostart file instead of the one above.

Preview & Output

@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/0" --window-position=0,0 http://h2r.machine.name:4001/preview/ABCD
@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/1" --window-position=1920,0 http://h2r.machine.name:4001/output/ABCD

Key & Fill

@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/0" --window-position=0,0 http://h2r.machine.name:4001/output/ABCD/?bg=%23000&key=true
@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/1" --window-position=1920,0 http://h2r.machine.name:4001/output/ABCD/?bg=%23000

Output 1 and Output 2

@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/0" --window-position=0,0 http://h2r.machine.name:4001/output/ABCD
@/usr/bin/chromium-browser --kiosk --disable-restore-session-state --user-data-dir="/home/pi/Documents/Profiles/1" --window-position=1920,0 http://h2r.machine.name:4001/output/ABCD/2

Having tried this on a Raspberry Pi 4 Model B+ (8GB) running in Key & Fill mode, I'd say it works but it's on the edge of being smooth, especially when you add a few graphics.

Run an NGINX server for a folder

If you're testing a simple frontend-only web site and want to access it in a browser via a web server, run the following command from PowerShell in the folder containing your HTML to spin up an NGINX Docker container pointing at that folder:

docker run -it -d -p 8100:80 -v ${PWD}:/usr/share/nginx/html nginx

If you're using command prompt instead of PowerShell, use:

docker run -it -d -p 8100:80 -v %cd%:/usr/share/nginx/html nginx

You'll then be able to access your site on http://localhost:8100/. You can set the port to any value that's not in use between 1 and 65535 by updating the first part of the -p attribute.
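
If you expect to stop and recreate the container regularly, giving it a name makes that easier (local-nginx below is just an example):

docker run -it -d -p 8100:80 --name local-nginx -v ${PWD}:/usr/share/nginx/html nginx
docker stop local-nginx
docker rm local-nginx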

Build a Docker container from your SQL backup

The following Dockerfile and restore-backup.sql file combine with a SQL backup file (my-database.bak in this example) to produce a Docker container. This will allow you to spin up a SQL Server container with your backup available via the SA account. Needless to say, this should not be used in a production environment without adding users, etc… as part of the process. Also, by default, the Developer edition is used and this is not licenced for production use.

The first step is to create a folder and in it put your my-database.bak file and two new files as below:

Dockerfile

FROM mcr.microsoft.com/mssql/server:2019-latest AS build
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD="ARandomPassword123!"

WORKDIR /tmp
COPY *.bak .
COPY restore-backup.sql .

RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
    && sleep 5 \
    && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "ARandomPassword123!" -i /tmp/restore-backup.sql \
    && pkill sqlservr

FROM mcr.microsoft.com/mssql/server:2019-latest AS release

ENV ACCEPT_EULA=Y

COPY --from=build /var/opt/mssql/data /var/opt/mssql/data

restore-backup.sql

RESTORE DATABASE [MyDatabase] 
FROM DISK = '/tmp/my-database.bak'
WITH FILE = 1,
MOVE 'MyDatabase_Data' TO '/var/opt/mssql/data/MyDatabase.mdf',
MOVE 'MyDatabase_Log' TO '/var/opt/mssql/data/MyDatabase.ldf',
NOUNLOAD, REPLACE, STATS = 5
GO

USE MyDatabase
GO

DBCC SHRINKFILE (MyDatabase_Data, 1)
GO

ALTER DATABASE MyDatabase
SET RECOVERY SIMPLE;  
GO  

DBCC SHRINKFILE (MyDatabase_Log,1)
GO

ALTER DATABASE MyDatabase
SET RECOVERY FULL;  
GO  

DBCC SHRINKDATABASE ([MyDatabase])
GO

Note that your database's logical file names will not be MyDatabase_Data and MyDatabase_Log, so you will need to replace these values with your actual logical file names (sometimes the data file's logical name doesn't have _Data on the end).
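
If you're not sure what the logical names are, you can list them from the backup itself with a query along these lines (run against any SQL Server instance that can read the .bak file):

RESTORE FILELISTONLY
FROM DISK = '/tmp/my-database.bak'
GO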

To build the container, you then run the following command from within your folder:

docker build -t database-backup:latest .

You can then run this container using the following command:

docker run -d -p 11433:1433 --memory=6g --cpus=2 database-backup:latest

This will also limit your container to 6GB of RAM use and 2 CPU cores.

After 5 or 10 seconds, you can connect to your database from SQL Server Management Studio. The server name is “MachineName,11433” where MachineName is the name of your computer. For Authentication, choose “SQL Server Authentication”, enter “sa” as the username and “ARandomPassword123!” as the password.
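
Alternatively, if you have the sqlcmd tools installed locally, a quick check from the command line looks something like this:

sqlcmd -S localhost,11433 -U sa -P "ARandomPassword123!" -Q "SELECT name FROM sys.databases"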

Getting Ansible running from a Windows 10 machine

From the Microsoft Store, install the latest version of Ubuntu (just follow the instructions on screen) and then apply any updates using the following:

sudo apt update
sudo apt upgrade -y

And then, to install Ansible itself, run the following commands:

sudo apt install software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install ansible -y

You can then confirm it's installed by running the following command:

ansible --version
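
As a quick smoke test, an ad-hoc ping against the local machine should come back with “pong”:

ansible localhost -m ping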

Migrating a Hyper-V VM to Proxmox

I'm sure there are many ways out there to do this but this is how I did it for about 20 VMs.

Firstly, in Hyper-V, I suggest compacting any disks you plan on migrating, especially if you know they contain less data than their reported size.

Once that's done, note down the CPU, memory and HD sizes as we'll need to manually re-create the VMs in Proxmox.

Choose the Export option and write the exported data to somewhere that will be accessible after installing Proxmox. If you're replacing an existing machine, use either external storage or a NAS/network share. I used the latter.

In the shell of Proxmox, run the following command to get access to the share:

mkdir /mnt/exports
mount -t cifs -o username=you@yourdomain.com,password=YourPassword123 //yournas/exports /mnt/exports

Now you need to set up a VM using the values you noted earlier. If migrating a Windows machine, I've noticed it defaults to IDE for the hard drive but I've changed it to SATA in each case. If you have multiple drives, add the extra ones once you've set up the VM. For the purposes of this demo, use VM-Local (if you followed my previous setup guide). Take a note of the VM's ID.

Next change to the VM's directory which will be something like:

cd /mnt/vm-local/images/1xx

We'll now copy over and convert the VM's disk from a VHD/VHDX image to a QCOW2 image using the following command (your export subfolders will differ); replace 1xx with your VM's ID:

qemu-img convert -O qcow2 /mnt/exports/to-import.vhdx vm-1xx-disk-0.qcow2

For subsequent disks, repeat the process but increase the number after “disk-” by 1 each time.
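
For example, a second disk for the same VM might be converted with something like this (the source filename is illustrative):

qemu-img convert -O qcow2 /mnt/exports/to-import-data.vhdx vm-1xx-disk-1.qcow2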

Once all the disks are imported, start the VM. Note that things like network cards will have changed so you'll likely need to re-setup your static IPs if you have them.

Adding a previously used storage (i.e. physical) drive to Proxmox

If you want to use an old hard drive (or SSD, etc…) then it needs to be blank for Proxmox to accept it and offer it as an option for adding as an LVM or LVM-thin volume.

To make it visible and valid, you need to wipe all the partitions. You can do this from the shell within Proxmox but there is a subtle difference between NVME devices and SATA devices.

To start, you can list all devices by typing the following in the shell:

fdisk -l

For SATA drives, you would use /dev/sda, /dev/sdb, etc… but for NVME drives, you use the namespace, so /dev/nvme0n1. For this example, I'll be using an NVME drive but replace the value as needed for your situation.

Type the following command into the shell:

cfdisk /dev/nvme0n1

Press a key to acknowledge the warning. You may be prompted to choose a label type. For ease, pick gpt.

Next highlight any existing partitions and delete them. Once they're all deleted, choose the Write option and then Quit. The drive should now be available to add in Proxmox under the Create: Volume Group and Create: Thinpool options.
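
To double-check the partitions have gone before heading back to the Proxmox UI, list the device again:

fdisk -l /dev/nvme0n1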

Getting Mongo DB running locally in a container

The following is a quick guide to getting Mongo DB up and running in a container for you to connect to. I've included authentication, even though it's not essential for working locally, just for completeness.

We'll assume you have a D drive for this example and that you want to persist your database in a folder on this drive.

d:
cd \
mkdir Mongo
docker run --name mongo -v d:/Mongo:/data/db -d -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=OnlyForLocal123 -p 27017:27017 --restart always mongo

There you go – Mongo is now running on your local instance of Docker with a simple superuser username and password.
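
Before wiring up an application, you can check the instance accepts authenticated connections by opening a shell inside the container (recent mongo images ship mongosh; older ones use the legacy mongo shell):

docker exec -it mongo mongosh -u mongoadmin -p OnlyForLocal123 --authenticationDatabase admin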

To connect to this database from C#, use the following connection string:

mongodb://mongoadmin:OnlyForLocal123@host.docker.internal:27017/?authSource=admin

To get a UI up and running, you can also instantiate the following container:

docker run --name mongo-ui -d -e ME_CONFIG_MONGODB_ADMINUSERNAME=mongoadmin -e ME_CONFIG_MONGODB_ADMINPASSWORD=OnlyForLocal123 -e ME_CONFIG_MONGODB_SERVER=host.docker.internal -p 8081:8081 --restart always mongo-express

This will then be accessible via http://localhost:8081 once it's started.

Scripts for creating a kubeconfig for a new user in Kubernetes

This script is intended to create a kubeconfig file for a user and then give that user permissions to read pod data as well as exec to those pods in a particular namespace.

Once you've created the file, copy it to the machine you wish to connect to the cluster from, place it in a folder called .kube and rename it to config (no extension).

Create a file using Nano (or your preferred editor) named user-create.sh with the following content:

#!/bin/bash

if [ "$2" != "" ]; then
company=$2
else
company=your-company-name
fi

if [ "$3" != "" ]; then
namespace=$3
else
namespace=default
fi

openssl genrsa -out user-$1.key 2048
openssl req -new -key user-$1.key -out user-$1.csr -subj "/CN=$1/O=$company"
openssl x509 -req -in user-$1.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out user-$1.crt -days 500

rm user-$1.csr

cacrt=$(cat /etc/kubernetes/pki/ca.crt | base64 | tr -d '\n')
crt=$(cat user-$1.crt | base64 | tr -d '\n')
key=$(cat user-$1.key | base64 | tr -d '\n')

cat <<EOM >user-$1.config
apiVersion: v1
kind: Config
users:
- name: $1
  user:
    client-certificate-data: $crt
    client-key-data: $key
clusters:
- cluster:
    certificate-authority-data: $cacrt
    server: https://10.10.4.20:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: $namespace
    user: $1
  name: $1-context@kubernetes
current-context: $1-context@kubernetes
EOM

if [ "$4" != "" ]; then
kubectl config set-credentials $1 --client-certificate=user-$1.crt  --client-key=user-$1.key
kubectl config set-context $1-context --cluster=kubernetes --namespace=$namespace --user=$1
else
rm user-$1.crt
rm user-$1.key
fi

cat <<EOM >user-$1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: $namespace
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods","pods/log","services","ingress","configmaps"]
  verbs: ["get", "watch", "list", "exec"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-$1
  namespace: $namespace
subjects:
- kind: User
  name: $1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOM

kubectl apply -f user-$1.yaml

rm user-$1.yaml

Save it and then you can use between 1 and 4 parameters to configure the user created:

1: user-name (should be in lowercase and parts separated by hyphens e.g. john-smith)

2: company-name (any text name you like for your company – update line 6 to set the default)

3: namespace (Kubernetes namespace to grant access to – update line 12 to set the default you want to use)

4: any-value (if a value is present, a context is created in the kubeconfig of the machine the script is executed on, so you can use the following command to get a list of all pods: kubectl --context=john-smith-context get pods)

Update the role definition to determine what the user can access.

To create the user with the default company name and namespace (which must exist in the cluster), run the following:

sudo bash user-create.sh john-smith
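
To specify all four parameters explicitly (the values here are purely illustrative), it would look like this:

sudo bash user-create.sh john-smith "Acme Ltd" build-agents add-context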

To delete a user, create a new file called delete-user.sh and add the following content:

#!/bin/bash

if [ "$2" != "" ]; then
namespace=$2
else
namespace=default
fi

kubectl config delete-context $1-context
kubectl config delete-user $1

kubectl delete rolebinding -n $namespace pod-reader-$1

You then execute this, assuming you want the user deleted from the default namespace, with the following command:

bash delete-user.sh john-smith