
Build a container from a Visual Studio-generated Dockerfile

If you need to build a project using the Dockerfile that Visual Studio generates, you need to run docker build from the solution folder and tell it where the Dockerfile is.

Assuming the project you're working on is called Your.Project.Api and this is also the name of the project folder, run the following command from the root folder where the solution file and project folder(s) are located.

docker build -f Your.Project.Api/Dockerfile .

On Windows, you can use either a forward or a back slash in the path, but on Linux it must be a forward slash.
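If you also want to tag the image so it's easier to run or push later, add the -t option (your-project-api here is just an example name – use whatever suits your registry):

docker build -t your-project-api:latest -f Your.Project.Api/Dockerfile .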

Longhorn Restart

If your Longhorn setup gets 'stuck', run this script to trigger a restart of all the Longhorn pods. Note that the engine-image daemonset name (engine-image-ei-d4c780c6 below) is unique to each installation, so substitute the one from your own cluster (kubectl get daemonset -n longhorn-system will show it).

kubectl rollout restart daemonset engine-image-ei-d4c780c6 -n longhorn-system
kubectl rollout restart daemonset longhorn-csi-plugin -n longhorn-system
kubectl rollout restart daemonset longhorn-manager -n longhorn-system

kubectl rollout restart deploy csi-attacher -n longhorn-system
kubectl rollout restart deploy csi-provisioner -n longhorn-system
kubectl rollout restart deploy csi-resizer -n longhorn-system
kubectl rollout restart deploy csi-snapshotter -n longhorn-system
kubectl rollout restart deploy longhorn-driver-deployer -n longhorn-system
kubectl rollout restart deploy longhorn-ui -n longhorn-system

If you wish to make this script executable, save it to a file (e.g. restart-longhorn.sh) and put the following as the first line of the script file:

#!/bin/bash

And then run the following command:

chmod +x restart-longhorn.sh
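You can then run the script and watch the pods come back up (the -w flag streams status changes until you press Ctrl+C):

./restart-longhorn.sh
kubectl get pods -n longhorn-system -w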

Run an NGINX server for a folder

If you're testing a simple frontend-only web site and want to view it in a browser via a proper web server, run the following command from PowerShell within the folder that contains the HTML etc. to set up an NGINX Docker container pointing at that folder:

docker run -it -d -p 8100:80 -v ${PWD}:/usr/share/nginx/html nginx

If you're using command prompt instead of PowerShell, use:

docker run -it -d -p 8100:80 -v %cd%:/usr/share/nginx/html nginx

You'll then be able to access your site at http://localhost:8100/. You can set the port to any unused value between 1 and 65535 by changing the first half of the -p option.
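When you're finished, it's easier to stop and remove the container if you gave it a name when starting it (nginx-test below is just an example name):

docker run -d --name nginx-test -p 8100:80 -v ${PWD}:/usr/share/nginx/html nginx
docker stop nginx-test
docker rm nginx-test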

Build a Docker container from your SQL backup

The following Dockerfile and restore-backup.sql file combine with a SQL backup file (my-database.bak in this example) to produce a Docker image. This will allow you to spin up a SQL Server container with your backup available via the SA account. Needless to say, this should not be used in a production environment without adding users etc. as part of the process. Also, by default the Developer edition is used, and this is not licensed for production use.

The first step is to create a folder and put your my-database.bak file in it, along with the two new files below:

Dockerfile

FROM mcr.microsoft.com/mssql/server:2019-latest AS build
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD="ARandomPassword123!"

WORKDIR /tmp
COPY *.bak .
COPY restore-backup.sql .

# Start SQL Server in the background, wait for it to report ready,
# run the restore script against it, then shut it down cleanly
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
    && sleep 5 \
    && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "ARandomPassword123!" -i /tmp/restore-backup.sql \
    && pkill sqlservr

FROM mcr.microsoft.com/mssql/server:2019-latest AS release

ENV ACCEPT_EULA=Y

# Copy the restored database files from the build stage into a clean image
COPY --from=build /var/opt/mssql/data /var/opt/mssql/data

restore-backup.sql

RESTORE DATABASE [MyDatabase] 
FROM DISK = '/tmp/my-database.bak'
WITH FILE = 1,
MOVE 'MyDatabase_Data' TO '/var/opt/mssql/data/MyDatabase.mdf',
MOVE 'MyDatabase_Log' TO '/var/opt/mssql/data/MyDatabase.ldf',
NOUNLOAD, REPLACE, STATS = 5
GO

USE MyDatabase
GO

DBCC SHRINKFILE (MyDatabase_Data, 1)
GO

ALTER DATABASE MyDatabase
SET RECOVERY SIMPLE;  
GO  

DBCC SHRINKFILE (MyDatabase_Log,1)
GO

ALTER DATABASE MyDatabase
SET RECOVERY FULL;  
GO  

DBCC SHRINKDATABASE ([MyDatabase])
GO

Note that your database's logical file names will not be MyDatabase_Data and MyDatabase_Log, so you will need to replace these values with your actual logical file names. Sometimes the equivalent of MyDatabase_Data doesn't have _Data on the end.
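If you're not sure what the logical names are, you can list them by running the following against any SQL Server instance that can read the backup file (adjust the path as needed) – the LogicalName column gives you the values for the MOVE clauses:

RESTORE FILELISTONLY
FROM DISK = '/tmp/my-database.bak'
GO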

To build the container, you then run the following command from within your folder:

docker build -t database-backup:latest .

You can then run this container using the following command:

docker run -d -p 11433:1433 --memory=6g --cpus=2 database-backup:latest

This will also limit your container to 6GB of RAM use and 2 CPU cores.

After 5 or 10 seconds, you can connect to your database from SQL Server Management Studio. The server name is “MachineName,11433” where MachineName is the name of your computer. For Authentication, choose “SQL Server Authentication”, then enter “sa” as the username and “ARandomPassword123!” as the password.
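If you'd rather test the connection from the command line, and assuming you have the sqlcmd command-line tools installed locally, something like the following should list your restored database:

sqlcmd -S localhost,11433 -U sa -P "ARandomPassword123!" -Q "SELECT name FROM sys.databases"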

Removing a failed/no longer available control plane node from the etcd cluster

If a control plane node in your cluster fails to the point you can no longer connect to it, this is how to remove it, including its etcd membership, from your cluster.

The first step is to delete the node itself. For these examples, we'll assume the node is called kubernetes-cp-gone. Run the following command from a working control plane:

kubectl delete node kubernetes-cp-gone

Next, we need to tidy up etcd. Firstly, we'll tidy up the Kubernetes-level configuration by running the following command:

kubectl -n kube-system edit cm kubeadm-config

Once in vi, delete the three lines underneath apiEndpoints that correspond to the deleted server (press the Insert key to go into the correct mode). Once done, save your changes by pressing Escape, then typing :wq followed by Enter.

Next, you need to get the name of a working and available etcd pod. You can do this by typing the following:

kubectl get pods -n kube-system | grep etcd-

Next, enter the following command, replacing etcd-pod with the name of one of your working etcd pods:

kubectl exec -n kube-system etcd-pod -it -- etcdctl --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key member list -w table

Take note of the ID of the failed node and then run the following command, again replacing etcd-pod with the name of your working etcd pod and failednodeid with the ID you just noted:

kubectl exec -n kube-system etcd-pod -it -- etcdctl --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key member remove failednodeid
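To confirm the member was removed, you can re-run the member list command from above – the failed node should no longer appear in the table:

kubectl exec -n kube-system etcd-pod -it -- etcdctl --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key member list -w table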

Getting Ansible running from a Windows 10 machine

From the Microsoft Store, install the latest version of Ubuntu (this runs under the Windows Subsystem for Linux; just follow the instructions on screen) and then apply any updates using the following:

sudo apt update
sudo apt upgrade -y

And then, to install Ansible itself, run the following commands:

sudo apt install software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install ansible -y

You can then confirm it's installed by running the following command:

ansible --version
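As a quick smoke test, you can also ask Ansible to ping the local machine – a successful install will come back with a green “pong”:

ansible localhost -m ping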

Migrating a Hyper-V VM to Proxmox

I'm sure there are many ways out there to do this but this is how I did it for about 20 VMs.

Firstly, in Hyper-V, I suggest compacting any disks you plan on migrating, especially if you know they hold less data than their reported size.

Once that's done, note down the CPU, memory and hard drive sizes, as we'll need to manually re-create the VMs in Proxmox.

Choose the Export option and write the exported data to somewhere that will be accessible after installing Proxmox. If you're replacing an existing machine, use either external storage or a NAS/network share. I used the latter.

In the shell of Proxmox, run the following commands to get access to the share:

mkdir /mnt/exports
mount -t cifs -o username=you@yourdomain.com,password=YourPassword123 //yournas/exports /mnt/exports

Now you need to set up a VM using the values you noted earlier. If migrating a Windows machine, I've noticed it defaults to IDE for the hard drive, but I've changed it to SATA in each case. If you have multiple drives, add the extra ones once you've set up the VM. For the purposes of this demo, use VM-Local (if you followed my previous setup guide). Take a note of the VM's ID.

Next, change to the VM's directory, which will be something like:

cd /mnt/vm-local/images/1xx

We'll now copy over and convert the VM from a VHD/VHDX image to a QCOW2 image using the following command (note that your export subfolders will differ), replacing 1xx with your VM's ID:

qemu-img convert -O qcow2 /mnt/exports/to-import.vhdx vm-1xx-disk-0.qcow2

For subsequent disks, repeat the process but increase the number after “disk-” by 1 each time.

Once all the disks are imported, start the VM. Note that things like network cards will have changed, so you'll likely need to reconfigure any static IPs.
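As an alternative to converting over the disk file directly, Proxmox can import the image for you with qm importdisk – a sketch, assuming the same paths as above (replace 1xx with your VM's ID and vm-local with your storage name). The imported disk appears as an unused disk on the VM, which you can then attach as SATA from the Hardware tab:

qm importdisk 1xx /mnt/exports/to-import.vhdx vm-local --format qcow2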

Adding a previously used storage (i.e. physical) drive to Proxmox

If you want to use an old hard drive (or SSD, etc.), it needs to be blank for Proxmox to accept it and offer it up as an option for adding as an LVM or LVM-thin volume.

To make it visible and valid, you need to wipe all the partitions. You can do this from the shell within Proxmox, but there is a subtle difference between NVMe devices and SATA devices.

To start, you can list all devices by typing the following in the shell:

fdisk -l

For SATA drives, you would use /dev/sda, /dev/sdb, etc., but for NVMe drives you use the namespace, so /dev/nvme0n1. For this example, I'll be using an NVMe drive, but replace the value as needed for your situation.

Type the following command into the shell:

cfdisk /dev/nvme0n1

Press a key to acknowledge the warning. You may be prompted to choose a label type. For ease, pick gpt.

Next highlight any existing partitions and delete them. Once they're all deleted, choose the Write option and then Quit. The drive should now be available to add in Proxmox under the Create: Volume Group and Create: Thinpool options.
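As an alternative to deleting the partitions one by one, wipefs can clear all filesystem and partition-table signatures in a single step – double-check the device name first, as this is destructive:

wipefs -a /dev/nvme0n1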

Getting MongoDB running locally in a container

The following is a quick guide to getting MongoDB up and running in a container for you to connect to. I've included authentication, even though it's not essential when working locally, just for completeness.

We'll assume you have a D drive for this example and that you want to persist your database in a folder on this drive.

d:
cd \
mkdir Mongo
docker run --name mongo -v d:/Mongo:/data/db -d -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=OnlyForLocal123 -p 27017:27017 --restart always mongo

There you go – MongoDB is now running on your local instance of Docker with a simple superuser username and password.

To connect to this database from C#, use the following connection string:

mongodb://mongoadmin:OnlyForLocal123@host.docker.internal:27017/?authSource=admin
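To check everything is working, you can open a shell inside the container using the client it ships with (recent mongo images bundle mongosh; older ones use the legacy mongo shell instead):

docker exec -it mongo mongosh -u mongoadmin -p OnlyForLocal123 --authenticationDatabase admin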

To get a UI up and running, you can also instantiate the following container:

docker run --name mongo-ui -d -e ME_CONFIG_MONGODB_ADMINUSERNAME=mongoadmin -e ME_CONFIG_MONGODB_ADMINPASSWORD=OnlyForLocal123 -e ME_CONFIG_MONGODB_SERVER=host.docker.internal -p 8081:8081 --restart always mongo-express

This will then be accessible via http://localhost:8081 once it's started.