First Kubernetes deployment with microk8s and cert-manager

Docker is an amazing container platform: it makes deploying and scaling services across multiple servers (a so-called cluster) fast and easy. But Docker alone is not good at managing instances spread over different servers, so DevOps teams need another piece of software. Thanks to Google, Kubernetes (K8s) is well suited for

automating deployment, scaling, and management of containerized applications.

In this post I will share my first impressions of Kubernetes, show how I set up a deployment of MySQL + PhpMyAdmin + Nginx with an SSL certificate assigned automatically by cert-manager, and walk through the steps of my troubleshooting.

Kubernetes and MicroK8s

The official Kubernetes distribution is mainly aimed at cloud platforms that need to manage many clusters, and the control-plane node (also called the master node) is by default neither recommended nor allowed to run ordinary containers. For a bare-metal server (such as a VPS without upstream K8s support) it is too heavy. A lightweight K8s distribution like MicroK8s is the better choice.

MicroK8s is developed by Canonical, the company behind Ubuntu, and is described as

The smallest, simplest, pure production K8s.
For clusters, laptops, IoT and Edge, on Intel and ARM.

Installing it on an Ubuntu system is very simple as long as snapd is installed:

sudo snap install microk8s --classic

and it's done. You can watch the installation status with microk8s status --wait-ready if you want. For more detailed information about the installation, see the official docs.

For convenience, I recommend running the following commands:

alias kubectl='microk8s kubectl'
microk8s enable dns helm # helm3 is also available
alias helm='microk8s helm' # replace helm with helm3 if you use helm3 in command above

Secret? Volume? Service? Pod? Ingress?

Unlike Docker, you'll face lots of new concepts just to start a single service:

  • ConfigMap for providing configuration
  • Secret for storing private or sensitive information
  • Volume for providing storage space
  • Deployment for deploying and scaling services
  • Pod for running a service instance in a container
  • Ingress for exposing a service to the public network

These descriptions are based on my own understanding and may not be entirely accurate. It's really hard to understand how all of these work at the beginning, but they genuinely help to separate configuration from instances and let you generate different configs from one template for deployment on different nodes of a cluster. But talk is cheap, so let me show you how I set up a cluster with MySQL, PhpMyAdmin and Nginx.

Deploy first service

PersistentVolume and PersistentVolumeClaim

Let's set up a MySQL service as a first try. Since a container loses its data after shutdown, I need a PersistentVolume to persist the database. A PersistentVolume is like a disk for containers: every container can claim some space from it, so an additional PersistentVolumeClaim is needed.

All configuration files are written in YAML format, and one YAML file can contain multiple configs. The following code shows a PersistentVolume and a PersistentVolumeClaim for MySQL database storage:
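The original manifest is not reproduced here, so below is a minimal sketch of what mysql-pv.yaml could look like. The capacity, access mode, storage class and resource names match the kubectl output further down; the hostPath location is an assumption and should be adjusted to a directory on your node.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/mysql   # assumed directory on the node, change as needed
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi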

Assuming the above code is saved as mysql-pv.yaml, run the following command to create the actual resources:

kubectl apply -f mysql-pv.yaml

and check whether the resources were successfully created:

$ kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
mysql-pv-volume   2Gi        RWO            Retain           Bound    default/mysql-pv-claim   manual                  1m

$ kubectl get pvc
NAME             STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    mysql-pv-volume   2Gi        RWO            manual         1m

Deployment and Service

The next step is to create a Deployment configuration and a Service configuration. The PersistentVolumeClaim created above will be mounted into the Deployment. Since a database is a stateful application, we don't need to worry about scaling here; a single replica is enough.
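A sketch of mysql-deployment.yaml under these assumptions (MySQL 5.6 image, data directory mounted from the claim created above, root password taken from the mysql-secret Secret described in the next section) could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate        # a single stateful instance should not be rolling-updated
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: root_password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql    # MySQL data directory
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql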

Secret

To keep the root password safe, it is not written directly in the env section but retrieved from mysql-secret, a Secret resource created from the following code:
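A minimal sketch of mysql-secret.yaml; the value below is only a placeholder (the base64 encoding of the 16-byte string your-password-16), so replace it with your own encoded password:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  root_password: eW91ci1wYXNzd29yZC0xNg==   # placeholder, generate yours with: echo -n 'yourpassword' | base64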

Note that the value of root_password must be base64-encoded, for example with echo -n 'yourpassword' | base64.

Deploy MySQL service

Assuming the two files above are saved as mysql-deployment.yaml and mysql-secret.yaml, apply them using:

kubectl apply -f mysql-secret.yaml
kubectl apply -f mysql-deployment.yaml

When the Deployment resource is created, a Pod is also created to run the instance. The Pod wraps the actual container, which is backed by containerd by default.

$ kubectl describe secret mysql-secret
Name:         mysql-secret
Namespace:    default
Labels:       <none>
Annotations:
Type:         Opaque

Data
====
root_password:  16 bytes

$ kubectl get deploy -l app=mysql
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
mysql   1/1     1            1           3m

$ kubectl get pod -l app=mysql
NAME                    READY   STATUS    RESTARTS   AGE
mysql-75b7c7dcb-qmxqg   1/1     Running   1          3m

$ kubectl get svc -l app=mysql
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
mysql   ClusterIP   10.152.***.***   <none>        3306/TCP   3m

Now try connecting to the MySQL instance to see whether the deployment succeeded.

kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -p[your password]

This runs a temporary Pod with the MySQL 5.6 image and calls the mysql client to connect to the local mysql service. If you see an error message like Unknown MySQL server host 'mysql', it means that your deployment is not correct or that you didn't enable the dns addon. You can follow the steps in the official guide to check whether the dns addon is working. When everything works, you will see the following prompt:

If you don't see a command prompt, try pressing enter.

mysql>

You can try executing some MySQL commands here to check whether the MySQL server really works.

Deploy PhpMyAdmin

It's time to deploy more services. The next one is PhpMyAdmin, a popular database management web application written in PHP. I recommend using the default Docker image variant instead of the fpm variant; at least I couldn't get the fpm image to work properly.

With the following code you can deploy a pma service in a single configuration file. Remember to set PMA_ABSOLUTE_URI to the real public URI you want to use in development or production.
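A sketch of pma-deployment.yaml, assuming the official phpmyadmin/phpmyadmin image, the mysql Service created above as the database host, and a placeholder public URI:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pma
  labels:
    app: pma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pma
  template:
    metadata:
      labels:
        app: pma
    spec:
      containers:
        - name: pma
          image: phpmyadmin/phpmyadmin
          env:
            - name: PMA_HOST
              value: mysql                      # the MySQL Service name from above
            - name: PMA_ABSOLUTE_URI
              value: https://pma.example.com/   # placeholder, set your real public URI
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: pma-service
  labels:
    app: pma
spec:
  selector:
    app: pma
  ports:
    - port: 80
      targetPort: 80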

Then run the following command to apply the configuration:

kubectl apply -f pma-deployment.yaml

and check the status of this deployment:

$ kubectl get deploy -l app=pma
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
pma    1/1     1            1           1m

$ kubectl get svc pma-service
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
pma-service   ClusterIP   10.152.***.***   <none>        80/TCP    1m

Deploy Nginx with Nginx-Ingress and secure with cert-manager

So far the deployed services cannot be reached from an external network, not even from localhost. To allow external access, an Ingress resource will be created. Furthermore, the web service will be secured with an SSL certificate.

Use Nginx-Ingress to deploy nginx service

Nginx-Ingress lets you deploy an nginx service with a few simple commands: it builds on Kubernetes Ingress and uses a ConfigMap to configure nginx automatically. All you need to do is install it and write an Ingress config file, and you're done.

I recommend installing Nginx-Ingress with helm, enabled via microk8s enable helm (replace helm with helm3 if you want to use Helm 3). For Helm 2 you will also need a Tiller service account:

$ kubectl create serviceaccount tiller --namespace=kube-system
serviceaccount "tiller" created

$ kubectl create clusterrolebinding tiller-admin --serviceaccount=kube-system:tiller --clusterrole=cluster-admin
clusterrolebinding.rbac.authorization.k8s.io "tiller-admin" created

$ helm init --service-account=tiller
$HELM_HOME has been configured at /Users/myaccount/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Then run helm repo update to update the official repo. Assuming the release name is nginx, run helm install stable/nginx-ingress --name nginx to install the Nginx-Ingress controller. By default it requires a LoadBalancer to assign an external IP to the controller, but on a bare-metal server the provider will not offer upstream LoadBalancer support. If you really want to use a LoadBalancer you can install MetalLB, which is still in beta and needs some spare IPs. I recommend using NodePort mode instead of LoadBalancer for convenience.

Helm supports overriding configuration with values. The configurable values are listed on the Helm Hub page. Values can be set on the command line, like --set controller.service.type=NodePort, or in a file:
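The original values file isn't shown; assuming the stable/nginx-ingress chart layout, a sketch saved as a (hypothetical) nginx-values.yaml could be:

controller:
  service:
    type: NodePort
    externalIPs:
      - [your server ip]   # assign your server's public IP here

and then installed with:

helm install stable/nginx-ingress --name nginx -f nginx-values.yaml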

Remember to set the external IP to your server's IP. Check the controller service status:

$ kubectl get svc -l app=nginx-ingress
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP        PORT(S)                      AGE
nginx-nginx-ingress-controller        NodePort    10.152.***.***   [your server ip]   80:30123/TCP,443:30456/TCP   1m
nginx-nginx-ingress-default-backend   ClusterIP   10.152.***.***   <none>             80/TCP                       1m

You can now access http://[your server ip]:30123 and should get a 404 response from the default backend.

Expose PhpMyAdmin using Ingress

The Nginx-Ingress controller can expose a service through an Ingress configuration, where I just need to specify the desired host, path and backend to reverse-proxy to. A sample yaml is shown below:
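The line numbers mentioned below refer to the author's original pma-ingress.yaml, which isn't reproduced here; in the following sketch the corresponding pieces are marked with comments (ingress class annotation, cert-manager issuer annotation, and the tls block). It assumes the networking.k8s.io/v1beta1 Ingress API that was current for this setup:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: pma
  annotations:
    kubernetes.io/ingress.class: nginx        # the ingress class ("line 5")
    cert-manager.io/issuer: letsencrypt-prod  # the issuer annotation ("line 6")
spec:
  rules:
    - host: [your hostname]
      http:
        paths:
          - path: /
            backend:
              serviceName: pma-service
              servicePort: 80
  tls:                                        # the tls block ("lines 18-21")
    - hosts:
        - [your hostname]
      secretName: pma-tls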

Comment out line 6 and lines 18-21 (the cert-manager annotation and the tls block) to disable TLS for now. This config exposes the backend service pma-service on the given host, reverse-proxying any request sent to that host (domain) to port 80 of pma-service. Apply this yaml using:

kubectl apply -f pma-ingress.yaml

and check the ingress status:

$ kubectl get ingress
NAME   CLASS    HOSTS             ADDRESS            PORTS     AGE
pma    <none>   [your hostname]   [your server ip]   80, 443   24h

The address may stay pending for a while because the Ingress sends its config to the Nginx-Ingress controller and waits for it to become active. Once your server IP shows up in the ADDRESS field, you can access the host you set in pma-ingress.yaml to test whether the ingress works. Remember to point the host to your server IP at your DNS provider.

Secure Web Application with SSL

Normally I use Let's Encrypt to secure connections with the acme.sh script and import the private key and public certificate into the Nginx virtual host config. With Kubernetes I can use cert-manager to automate this process.

First, run the following commands to install cert-manager with helm:

kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update

# With Helm 3, drop the --name flag and pass the release name as the first
# argument instead: helm install cert-manager jetstack/cert-manager ...
helm install \
  --name cert-manager \
  --namespace cert-manager \
  --version v0.16.0 \
  jetstack/cert-manager \
  --set installCRDs=true

Verify the installation with:

$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-c456f8b56-4wkq7               1/1     Running   0          1m
cert-manager-cainjector-6b4f5b9c99-tqp5c   1/1     Running   0          1m
cert-manager-webhook-5cfd5478b-kd69h       1/1     Running   0          1m

Then create two Issuer resources; Issuer is a new resource type introduced by cert-manager. One is for testing against the staging ACME server, the other is for production using the real ACME server.
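The original issuer manifests aren't reproduced here; a sketch of le-staging.yaml, assuming an HTTP-01 solver through the nginx ingress class, could look like the following. le-prod.yaml is identical except for the resource name (letsencrypt-prod), the server URL (https://acme-v02.api.letsencrypt.org/directory) and the private key secret name. The line numbers mentioned below refer to the author's original file.

apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com               # placeholder, use your own address
    privateKeySecretRef:
      name: letsencrypt-staging          # stores the ACME account key ("line 13")
    solvers:
      - http01:
          ingress:
            class: nginx                 # must match the ingress class ("line 18")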

The solver class name defined at line 18 must match the ingress class set at line 5 of pma-ingress.yaml. The Secret named at line 13 (privateKeySecretRef.name) stores the ACME account key, while the issued certificate itself will end up in the Secret named in the Ingress tls section (pma-tls). Apply both files using:

kubectl apply -f le-staging.yaml
kubectl apply -f le-prod.yaml

You can check the status of an issuer with kubectl describe issuer letsencrypt-staging or kubectl describe issuer letsencrypt-prod.

Sign SSL Certificate and Deploy

Uncomment line 6 and lines 18-21 in pma-ingress.yaml and set the issuer at line 6 to letsencrypt-staging for testing. Then apply this ingress config again and check the status of the certificate:

$ kubectl get certificate
NAME      READY   SECRET    AGE
pma-tls   True    pma-tls   3m

until the READY field becomes True. If it stays False, you can check detailed information about the certificate using kubectl describe certificate pma-tls.

The certificate is expected to be stored in pma-tls:

$ kubectl describe secret pma-tls
Name:         pma-tls
Namespace:    default
Labels:       <none>
Annotations:  cert-manager.io/alt-names: [your hostname]
              cert-manager.io/certificate-name: pma-tls
              cert-manager.io/common-name: [your hostname]
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-kind: Issuer
              cert-manager.io/issuer-name: letsencrypt-staging
              cert-manager.io/uri-sans:

Type:  kubernetes.io/tls

Data
====
tls.crt:  3558 bytes
tls.key:  1679 bytes

Try accessing https://[your hostname]/. You will get a certificate warning; that's normal because the certificate is signed by the staging ACME server, and it means the certificate issuer is working.

Now replace letsencrypt-staging with letsencrypt-prod at line 6 in pma-ingress.yaml and delete the pma-tls secret using

kubectl delete secret pma-tls

and apply pma-ingress.yaml again. Then wait a few minutes until the new certificate is ready.

Now you should be able to access https://[your hostname]/ without any certificate warning. If not, check whether you forgot to delete the old pma-tls secret or whether the certificate issuing process failed (run kubectl describe certificate pma-tls to check the status).

Afterword

At the very beginning, Kubernetes seems a bit scary and complicated: I had to write several YAML configuration files just to set up one service. But it pays off: I don't need to set every config by hand, write nginx configs, run acme.sh commands and so on. And I can deploy another cluster from the same configuration files in just a few minutes. With kustomize it's quite easy to generate and reuse configurations among clusters (see the GitHub repo and this blog post).

An obvious disadvantage is the relatively high memory usage; my Kubernetes setup, for example, eats up to 1.5 GiB of memory, and the recommended memory according to the MicroK8s official docs is 4 GiB. But anyway, Kubernetes is worth a try.