Docker

.NET Container Running as Non-Root in Kubernetes

This is a quick guide on how to get a standard .NET container running as a non-root, non-privileged user in Kubernetes. It is not a complete security guide, just enough if you require your pods not to run as root.

Update Dockerfile

The first step is to update the Dockerfile. Two changes are required: one to change the port and one to specify the user.

Exposed Port

At the start of the Dockerfile, replace any EXPOSE instructions with the following:

ENV ASPNETCORE_URLS=http://+:8000
EXPOSE 8000

This will expose your application to the cluster on port 8000 rather than port 80; in Kubernetes a non-root user typically cannot bind to ports below 1024, so a high port is needed.

User

Next, just before the ENTRYPOINT instruction, add the following line:

USER $APP_UID

The APP_UID environment variable is defined in the official .NET 8 and later base images and resolves to user ID 1654, which matches the runAsUser value used in the Kubernetes manifest below. Now build and push the container to your container registry of choice, either manually or via a CI/CD pipeline.
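
For example, building and pushing by hand might look like the following (the registry host and image name here are illustrative):

docker build -t registry.example.com/my-app:1.0.0 .
docker push registry.example.com/my-app:1.0.0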

Kubernetes Manifests

Deployment Manifest

Add the following snippet to the deployment manifest under the container entry that is to be locked down:

securityContext:
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  runAsUser: 1654
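
Once deployed, the effective user can be sanity-checked. This is a quick sketch that assumes the deployment is named your-app and that the image includes the id utility (the standard ASP.NET images do):

kubectl exec deploy/your-app -- id -u

This should print 1654.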

Service Manifest

As the exposed container port has now changed, update any service you may have defined so it looks similar to the following service manifest:

apiVersion: v1
kind: Service
metadata:
  name: your-service
  namespace: service-namespace
spec:
  selector:
    app: your-app
  ports:
    - name: http
      port: 80
      targetPort: 8000
  type: ClusterIP

The pod will still be accessible via its service on port 80 so things like ingress or gateway definitions or references from other apps do not need to be updated.
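
To confirm traffic still flows end to end, the service can be tested locally with a port-forward using the names from the manifest above:

kubectl port-forward -n service-namespace svc/your-service 8080:80

The application should then respond at http://localhost:8080/.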

Install Metrics Server into Kubernetes running in Docker Desktop using Helm

If using the Helm chart to install Metrics Server in a Docker Desktop instance of Kubernetes, it may fail to start because the kubelet's self-signed certificate cannot be verified. To resolve this, run the following command when doing the install (it can also be applied to an existing installation):

helm upgrade --install --set args[0]='--kubelet-insecure-tls' metrics-server metrics-server/metrics-server

If the repo hasn’t been added already, run the following first:

helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
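
Once the metrics-server pod is ready (this can take a minute or so), the following should return node statistics rather than an error:

kubectl top nodes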

Create a Service Account and Get Token In Kubernetes Running In Docker Desktop

When running Kubernetes in Docker Desktop 4.8 and later (which moved to Kubernetes 1.24), creating a service account no longer automatically creates a long-lived token secret. The following script will create a service account and retrieve a token. Note that it creates a cluster-admin service account for the purposes of this demonstration.

Create a file called create-service-account.sh or similar and populate as follows:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: $1
  namespace: $2

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: $1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: $1
  namespace: $2

---

apiVersion: v1
kind: Secret
metadata:
  name: $1
  namespace: $2
  annotations:
    kubernetes.io/service-account.name: $1
type: kubernetes.io/service-account-token
EOF

TOKEN=$(kubectl get secret $1 -n $2 --template='{{.data.token}}' | base64 --decode)

echo
echo $TOKEN
echo

To create a service account called my-service-account in the namespace development, run the following command:

bash create-service-account.sh my-service-account development
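
As a quick check, the token echoed by the script can be used directly against the cluster (set TOKEN to that value first). The server address below is Docker Desktop's default and is an assumption for other setups:

kubectl --server=https://kubernetes.docker.internal:6443 --insecure-skip-tls-verify --token="$TOKEN" get pods --all-namespaces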

Pushing Helm Charts to a Container Registry

This article walks through building a Helm chart and then pushing it to a local container registry, Azure Container Registry (ACR) and Docker Hub.

Getting Local Container Registry Running

The easiest way to achieve this is using Docker. Once Docker is installed, the following command will set up a registry container that automatically restarts on boot:

docker run --name registry --volume=/var/lib/registry -p 8880:5000 --restart=always -d registry:2

Once up and running, the list of charts can be accessed at http://localhost:8880/v2/_catalog, although the list will initially be empty.

Building A Package

Building a package is the same as building for any chart registry, such as ChartMuseum. The important thing is that the chart name is all lowercase. The recommended convention is lowercase letters, numbers and dashes, replacing other characters and spaces with hyphens, e.g. MyShop.Checkout would become myshop-checkout. This value should be used in the name property of the Chart.yaml file, resulting in something similar to the following first few lines:

apiVersion: v2
name: myshop-checkout
version: 0.0.1-dev

Versioning

Most systems will only pull a new version of a chart if the version number has increased. A script to automate version numbering is advised but, for the purposes of this tutorial, a fixed version will be used.
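
As a sketch, a CI step could stamp the version at packaging time; BUILD_NUMBER here is a hypothetical pipeline variable:

helm package ./MyShop.Checkout --version "0.0.$BUILD_NUMBER"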

Building Package

Assuming the above example chart is in a subfolder called MyShop.Checkout, run the following command to build a chart package called myshop-checkout-0.0.1-dev.tgz in the current folder:

helm package ./MyShop.Checkout

Pushing Package to Local Registry

Pushing to the local registry is straightforward as authentication is not required by default. Run the following command to add the above chart to the local registry:

helm push myshop-checkout-0.0.1-dev.tgz oci://localhost:8880/helm

This can then be checked by going to http://localhost:8880/v2/_catalog.
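
Alternatively, the push can be verified from the command line by pulling the chart back down:

helm pull oci://localhost:8880/helm/myshop-checkout --version 0.0.1-dev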

Pushing To Azure Container Registry (ACR)

The following PowerShell script will push the above example chart to an ACR. The subscription name and target registry will need updating to match your environment.

$subscription = "Your Subscription Name"
$containerRegistry = "yourcontainerregistry"
$ociUrl = "$containerRegistry.azurecr.io/helm"

$currentAccount = (az account show | ConvertFrom-Json)
if (-not $currentAccount) {
    Write-Host "Attemping login..."

    az login
    az account set --subscription $subscription
} else {
    Write-Host "Logged in already as $($currentAccount.user.name)"
}

$helmUsername = "00000000-0000-0000-0000-000000000000"
$helmPassword = $(az acr login --name $containerRegistry --expose-token --output tsv --query accessToken)

Write-Output $helmPassword | helm registry login "$containerRegistry.azurecr.io" --username $helmUsername --password-stdin

helm push myshop-checkout-0.0.1-dev.tgz oci://$ociUrl

Pushing To Docker Hub

Firstly, if one doesn’t exist already, create a PAT with read/write permissions. If the PAT doesn’t have admin permissions, also create any repositories that will be pushed to up front; for the example chart above, that means a repository called myshop-checkout.

# login
$helmUsername = "docker-name"
$helmPassword = "your_docker_pat"

Write-Output $helmPassword | helm registry login "registry-1.docker.io" --username $helmUsername --password-stdin

$ociUrl = "registry-1.docker.io/$helmUsername"

helm push myshop-checkout-0.0.1-dev.tgz oci://$ociUrl

.NET 5/6, Docker and custom NuGet server

If you're using a custom NuGet server and have added a nuget.config file to the solution, you'll need to add the following line to the default Dockerfile generated by Visual Studio to allow the container to be built.

COPY ["nuget.config", "/src/"]

This should be placed before the RUN dotnet restore … line.
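
For context, the relevant section of a Visual Studio generated Dockerfile would then look something like this (the project name is illustrative):

WORKDIR /src
COPY ["nuget.config", "/src/"]
COPY ["Your.Project.Api/Your.Project.Api.csproj", "Your.Project.Api/"]
RUN dotnet restore "Your.Project.Api/Your.Project.Api.csproj"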

The filename is case sensitive within the container, so using all lowercase for the file name is recommended. If you need to change the case, you may need to do two commits of the file (e.g. rename NuGet.config –commit–> nuget1.config –commit–> nuget.config) because Git ignores case-only renames on case-insensitive file systems.

If running in a CI/CD pipeline with fixed custom NuGet servers, you can inject a nuget.config file into the pipeline; however, the file will still need referencing in the Dockerfile as above to be correctly used by the container build process.

Setup local Redis container

To set up a local Redis container that automatically starts up on reboot, run the following command:

docker run --name redis -d --restart always -p 6379:6379 redis:latest

To include a UI, run the following command:

docker run --name redis-ui -d --restart always -p 8001:8001 redislabs/redisinsight
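
The RedisInsight UI will then be available at http://localhost:8001/. A quick way to confirm the server itself is up is to ping it via the CLI bundled in the image; it should reply with PONG:

docker exec -it redis redis-cli ping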

Build container from Visual Studio built Dockerfile

If you need to build a project using the Visual Studio generated Dockerfile, you need to run docker build from the solution folder and specify the file.

Assuming the project you are working on is called Your.Project.Api and this is also the name of the project folder, run the following command from the root folder where the solution file and project folder(s) are located.

docker build -f Your.Project.Api/Dockerfile .

On Windows, you can use a forward or back slash in the path; on Linux, it must be a forward slash.

Run an NGINX server for a folder

If you're testing a simple frontend-only web site and want an NGINX web server pointing at it so you can access the pages in a browser, run the following command from PowerShell in the folder that contains the HTML to set up an NGINX Docker container serving that folder:

docker run -it -d -p 8100:80 -v ${PWD}:/usr/share/nginx/html nginx

If you're using command prompt instead of PowerShell, use:

docker run -it -d -p 8100:80 -v %cd%:/usr/share/nginx/html nginx

You'll then be able to access your site at http://localhost:8100/. You can set the port to any unused value between 1 and 65535 by changing the first number in the -p option.

Build a Docker container from your SQL backup

The following Dockerfile and restore-backup.sql file combine with a SQL backup file (my-database.bak in this example) to produce a Docker container. This will allow you to spin up a SQL Server container with your backup available via the SA account. Needless to say, this should not be used in a production environment without adding users, etc… as part of the process. Also, by default, the Developer edition is used, and this is not licensed for production use.

The first step is to create a folder and in it put your my-database.bak file and two new files as below:

Dockerfile

FROM mcr.microsoft.com/mssql/server:2019-latest AS build
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD="ARandomPassword123!"

WORKDIR /tmp
COPY *.bak .
COPY restore-backup.sql .

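# Start SQL Server in the background, wait for it to come up, run the restore
# script against it, then stop it so the restored files are flushed to disk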
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
    && sleep 5 \
    && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "ARandomPassword123!" -i /tmp/restore-backup.sql \
    && pkill sqlservr

FROM mcr.microsoft.com/mssql/server:2019-latest AS release

ENV ACCEPT_EULA=Y

COPY --from=build /var/opt/mssql/data /var/opt/mssql/data

restore-backup.sql

RESTORE DATABASE [MyDatabase] 
FROM DISK = '/tmp/my-database.bak'
WITH FILE = 1,
MOVE 'MyDatabase_Data' TO '/var/opt/mssql/data/MyDatabase.mdf',
MOVE 'MyDatabase_Log' TO '/var/opt/mssql/data/MyDatabase.ldf',
NOUNLOAD, REPLACE, STATS = 5
GO

USE MyDatabase
GO

DBCC SHRINKFILE (MyDatabase_Data, 1)
GO

ALTER DATABASE MyDatabase
SET RECOVERY SIMPLE;  
GO  

DBCC SHRINKFILE (MyDatabase_Log,1)
GO

ALTER DATABASE MyDatabase
SET RECOVERY FULL;  
GO  

DBCC SHRINKDATABASE ([MyDatabase])
GO

Note that your database's logical file names will probably not be MyDatabase_Data and MyDatabase_Log, so you will need to replace these values with the actual names. Note that sometimes the equivalent of MyDatabase_Data doesn't have _Data on the end.
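
If you're unsure of the logical file names, they can be listed from the backup itself by running the following against any SQL Server instance that can read the .bak file:

RESTORE FILELISTONLY
FROM DISK = '/tmp/my-database.bak'
GO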

To build the container, you then run the following command from within your folder:

docker build -t database-backup:latest .

You can then run this container using the following command:

docker run -d -p 11433:1433 --memory=6g --cpus=2 database-backup:latest

This will also limit your container to 6GB of RAM use and 2 CPU cores.

After 5 or 10 seconds, you can connect to your database from SQL Server Management Studio. The server name is “MachineName,11433” where MachineName is the name of your computer. For Authentication, choose “SQL Server Authentication”, then enter “sa” for the username and “ARandomPassword123!” for the password.
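
Alternatively, the restore can be checked from the command line; replace CONTAINER_ID with the ID shown by docker ps:

docker exec -it CONTAINER_ID /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "ARandomPassword123!" -Q "SELECT name FROM sys.databases"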

A simple testing mail server

If you need a quick mail server to test your mail logic without actually sending the test emails, I suggest MailHog, as it not only swallows your test emails but also provides a simple UI to see the messages you've sent.

To get it up and running requires one simple Docker command:

docker run -p 25:1025 -p 8025:8025 -d --restart always --name mailhog mailhog/mailhog

With this, you can send mail via SMTP on port 25 and view any mail you've received in the dashboard at http://localhost:8025/.
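
For a quick end-to-end test from PowerShell (Send-MailMessage is marked obsolete but works fine against a local test server), something like the following can be used; the addresses are illustrative:

Send-MailMessage -SmtpServer "localhost" -Port 25 -From "from@example.com" -To "to@example.com" -Subject "Test" -Body "Hello from MailHog"

The message should then appear in the MailHog dashboard.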