First Kubernetes deployment with microk8s and cert-manager
Docker is an amazing container platform: it makes deploying and scaling services across multiple servers (a cluster) fast and easy. But Docker alone is not good at managing instances on different servers, so DevOps needs another tool. Thanks to Google, Kubernetes (K8s) is well suited to automating deployment, scaling, and management of containerized applications.
In this post I will share my first impressions of Kubernetes, show how I set up a deployment of MySQL + PhpMyAdmin + Nginx with an SSL certificate assigned automatically by cert-manager, and walk through the steps of my troubleshooting.
Kubernetes and MicroK8s
The official Kubernetes distribution is mainly for cloud platforms that need to manage many clusters, and by default the control-plane node (which may also be called the master node) is neither recommended nor allowed to run any container. For a bare-metal server (like a VPS without upstream K8s support) it's too heavy. A lightweight K8s distribution like MicroK8s is the better choice.
MicroK8s is developed by Canonical, the company behind Ubuntu, and is advertised as:

The smallest, simplest, pure production K8s. For clusters, laptops, IoT and Edge, on Intel and ARM.

Installing it on an Ubuntu system with snapd installed is very simple:
```bash
sudo snap install microk8s --classic
```
and it's done. You can watch the installation status with microk8s status --wait-ready if you want. For more details about the installation, see the Official Docs.
For convenience, I recommend setting the following alias:
```bash
alias kubectl='microk8s kubectl'
```
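One more preparation: the dns addon is needed later for in-cluster service discovery (the MySQL connectivity check below depends on it), so enable it now:

```bash
microk8s enable dns
```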
Secret? Volume? Service? Pod? Ingress?
Unlike with Docker, you'll face lots of new concepts just to start a single service:

- ConfigMap for providing configuration
- Secret for storing private or secret information
- Volume for providing storage space
- Deployment for deploying and scaling a service
- Pod for running the service instance in a container
- Ingress for exposing the service to the public network
These descriptions are based on my own understanding and may not be accurate enough. It's really hard to understand how they work at the beginning, but they really help to separate configs from instances, allowing you to generate different configs from one template for deployment on different nodes in a cluster. But talk is cheap; let me show you how I set up a cluster with MySQL, PhpMyAdmin, and Nginx.
Deploy the first service
PersistentVolume and PersistentVolumeClaim
Let's set up a MySQL service as a first try. Since a container loses its data after shutdown, I need a PersistentVolume to persist the database. A PersistentVolume is like a disk for containers: every container can claim some space from it, so an additional PersistentVolumeClaim is needed.
All configuration files are written in YAML format, and one YAML file can contain multiple configs. The following code shows a PersistentVolume and a PersistentVolumeClaim for MySQL database storage.
Assuming the above is saved as mysql-pv.yaml, run the following to create the actual resources:
```bash
kubectl apply -f mysql-pv.yaml
```
and check whether the resources were successfully created:
```bash
$ kubectl get pv
```
Deployment and Service
The next step is to create a Deployment configuration and a Service configuration. The PersistentVolumeClaim created above will be mounted into the Deployment. Since a database is a stateful application, it runs as a single replica and we don't need to worry about scaling.
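A sketch of mysql-deployment.yaml, closely following the single-instance stateful application example from the official Kubernetes docs; the mysql-secret reference is explained in the next section:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  # headless service: pods reach the database simply as "mysql"
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    # stateful app: never run old and new instances side by side
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: root_password
          ports:
            - containerPort: 3306
          volumeMounts:
            # mount the claimed space at MySQL's data directory
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
```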
Secret
To keep the root password safe, it is not written directly in the env section but retrieved from mysql-secret, a Secret resource created from the following config.
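A sketch, using the mysql-secret name and root_password key that the Deployment above references:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  # base64 of the real root password ("password" here, as an example)
  root_password: cGFzc3dvcmQ=
```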
Note that the value of root_password must be base64 encoded.
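You can produce the encoded value like this:

```bash
# -n keeps a trailing newline out of the encoded value
echo -n 'my-root-password' | base64
```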
Deploy MySQL service
Assuming the two configs above are saved as mysql-deployment.yaml and mysql-secret.yaml, apply them using:
```bash
kubectl apply -f mysql-secret.yaml
kubectl apply -f mysql-deployment.yaml
```
By creating a Deployment resource, a Pod will also be created to run the instance. The Pod is the real container, backed by containerd by default. You can inspect the created Secret with:

```bash
$ kubectl describe secret mysql-secret
```
Now try connecting to the MySQL instance to see whether the deployment succeeded:
```bash
kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -p[your password]
```
This runs a one-off Pod with MySQL 5.6 and invokes the mysql client to connect to the in-cluster mysql service. If you see an error message like Unknown MySQL server host 'mysql', either your deployment is incorrect or you didn't enable the dns addon (microk8s enable dns); you can follow the steps in the official guide to check the dns addon's state. When everything works, you will see the following prompt:
```
If you don't see a command prompt, try pressing enter.
```
You can try executing some MySQL commands here, such as SHOW DATABASES;, to check that the server really works.
Deploy PhpMyAdmin
It's time to deploy more services. The next one is PhpMyAdmin, a popular database management web application written in PHP. I recommend using the default Docker image variant rather than the fpm one; at least I didn't manage to get the fpm image working properly.
With the following config you can deploy a pma service from one file. Remember to set PMA_ABSOLUTE_URI to the real public URI you want to use in development or production.
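A sketch of pma-deployment.yaml, assuming the official phpmyadmin/phpmyadmin image; PMA_HOST points at the mysql service created earlier, and the service name pma-service with port 80 is what the Ingress below will reference:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pma
  labels:
    app: pma
spec:
  selector:
    matchLabels:
      app: pma
  template:
    metadata:
      labels:
        app: pma
    spec:
      containers:
        - name: pma
          image: phpmyadmin/phpmyadmin
          env:
            # hostname of the MySQL service inside the cluster
            - name: PMA_HOST
              value: mysql
            # the public URI under which PhpMyAdmin will be reachable
            - name: PMA_ABSOLUTE_URI
              value: https://pma.example.com/
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: pma-service
spec:
  selector:
    app: pma
  ports:
    - port: 80
      targetPort: 80
```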
Then apply the configuration:
```bash
kubectl apply -f pma-deployment.yaml
```
and check the status of the deployment:
```bash
$ kubectl get deploy -l app=pma
```
Deploy Nginx with Nginx-Ingress and secure with cert-manager
Up to now the deployed services cannot be reached from an external network, not even from localhost. To allow external access, an Ingress resource will be created. Furthermore, the web service will be secured with an SSL certificate.
Use Nginx-Ingress to deploy the nginx service
Nginx-Ingress lets you deploy an nginx service with a few simple commands: it builds on the Kubernetes Ingress resource and uses a ConfigMap to configure nginx automatically. All you need to do is install it and write an Ingress config file, and you're done.
I recommend installing Nginx-Ingress with helm, enabled via microk8s enable helm (replace helm with helm3 if you want to use helm3). You will also need a tiller service account for helm:
```bash
$ kubectl create serviceaccount tiller --namespace=kube-system
```
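Helm 2 also needs the service account bound to a role and tiller initialized; the usual (rather permissive) setup is:

```bash
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
```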
Then run helm repo update to refresh the official repo. Assuming the release name is nginx, run helm install stable/nginx-ingress --name nginx to install the Nginx-Ingress controller. By default it expects a LoadBalancer to assign an external IP to the controller, but on a bare-metal server the provider won't give you upstream LoadBalancer support. If you really want to use a LoadBalancer you can install MetalLB, which is still in beta and needs some spare IPs. For convenience, I recommend the NodePort mode instead of LoadBalancer.
Helm supports configuration overrides via values; the configurable values are listed on the Helm Hub page. Values can be set on the command line, like --set controller.service.type=NodePort, or in a file.
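For example, a values file sketch (the key names follow the stable/nginx-ingress chart; the IP and node ports are placeholders):

```yaml
controller:
  service:
    type: NodePort
    nodePorts:
      http: 30123
      https: 30124
    externalIPs:
      # your server's public IP
      - 203.0.113.10
```

Pass the file to the install command with helm install stable/nginx-ingress --name nginx -f values.yaml.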
Remember to set the external IP to your server's IP. Check the controller service status:
```bash
$ kubectl get svc -l app=nginx-ingress
```
You can now access http://[your server ip]:30123 and should get a default backend response with a 404.
Expose PhpMyAdmin using Ingress
The Nginx-Ingress controller supports exposing a service with an Ingress configuration, where I just need to specify the desired host, path, and reverse-proxy backend. A sample yaml is shown below.
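Since the line numbers of this file are referred to later, here is a sketch with them spelled out; the host pma.example.com is a placeholder, and the extensions/v1beta1 API version is an assumption for the Kubernetes versions of that era:

```yaml
 1  apiVersion: extensions/v1beta1
 2  kind: Ingress
 3  metadata:
 4    annotations:
 5      kubernetes.io/ingress.class: nginx
 6      cert-manager.io/issuer: letsencrypt-staging
 7    name: pma-ingress
 8    namespace: default
 9  spec:
10    rules:
11      - host: pma.example.com
12        http:
13          paths:
14            - path: /
15              backend:
16                serviceName: pma-service
17                servicePort: 80
18    tls:
19      - hosts:
20          - pma.example.com
21        secretName: pma-tls
```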
Comment out line 6 and lines 18-21 to disable TLS for now. This config exposes the backend service pma-service on the given host, reverse-proxying any request sent to that host (domain) to port 80 of pma-service. Apply the yaml using:
```bash
kubectl apply -f pma-ingress.yaml
```
and check the ingress status:
```bash
kubectl get ingress
```
The address may stay pending for a while, because the Ingress passes its config to the Nginx-Ingress controller and waits for it to become active. Once your server IP shows up in the ADDRESS field, you can access the host you set in pma-ingress.yaml to test whether the ingress works. Remember to point the host to your server IP at your DNS provider.
Secure Web Application with SSL
Normally I use Let's Encrypt to secure the connection, running the acme.sh script and importing the private key and certificate into the Nginx virtual host config. But with Kubernetes I can use cert-manager to automate this process.
First, run the following commands to install cert-manager with helm:
```bash
kubectl create namespace cert-manager
```
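The rest follows the official cert-manager Helm installation; the pinned version v0.13.1 is an assumption here, so check the cert-manager docs for the current one:

```bash
# install the CustomResourceDefinitions first
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.13.1/cert-manager.crds.yaml
# add the Jetstack chart repo and install the chart
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install --name cert-manager --namespace cert-manager --version v0.13.1 jetstack/cert-manager
```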
Verify the installation with:
```bash
$ kubectl get pods --namespace cert-manager
```
Then create two Issuer resources; Issuer is a new type introduced by cert-manager. One is for testing, using the staging ACME server; the other is for production, using the real ACME server.
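A sketch of le-staging.yaml with line numbers spelled out, since they are referred to below (the email and namespace are placeholders); the production issuer, saved as le-prod.yaml, would be identical except that the names become letsencrypt-prod and the server becomes https://acme-v02.api.letsencrypt.org/directory:

```yaml
 1  apiVersion: cert-manager.io/v1alpha2
 2  kind: Issuer
 3  metadata:
 4    name: letsencrypt-staging
 5    namespace: default
 6  spec:
 7    acme:
 8      # Let's Encrypt staging endpoint, for testing only
 9      server: https://acme-staging-v02.api.letsencrypt.org/directory
10      email: you@example.com
11      # the ACME account key is stored in this Secret
12      privateKeySecretRef:
13        name: letsencrypt-staging
14      solvers:
15        - http01:
16            ingress:
17              # must match line 5 of pma-ingress.yaml
18              class: nginx
```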
The class name defined at line 18 must match the class name set at line 5 of pma-ingress.yaml. The ACME account key will be stored in a Secret whose name is defined at line 13 (privateKeySecretRef.name). Apply them using:
```bash
kubectl apply -f le-staging.yaml
kubectl apply -f le-prod.yaml
```
You can check the status of the issuers with kubectl describe issuer letsencrypt-staging or kubectl describe issuer letsencrypt-prod.
Sign SSL Certificate and Deploy
Uncomment line 6 and lines 18-21 in pma-ingress.yaml, and replace the value at line 6 with letsencrypt-staging for testing purposes. Then apply the ingress config again and check the status of the certificate:
```bash
$ kubectl get certificate
```
until the READY field becomes True. If it stays False, you can check detailed information about the certificate using kubectl describe certificate pma-tls.
The certificate is expected to be stored in pma-tls:
```bash
$ kubectl describe secret pma-tls
```
Try accessing https://[your hostname]/. You will get a certificate warning; that's normal, because the certificate is signed by the staging ACME server, and it means the certificate issuer is working.
Now replace letsencrypt-staging with letsencrypt-prod at line 6 of pma-ingress.yaml, delete the secret pma-tls using
```bash
kubectl delete secret pma-tls
```
and apply pma-ingress.yaml again. Then wait a few minutes until the new certificate is ready.
Now you should be able to access https://[your hostname]/ without any certificate warning. Otherwise, check whether you forgot to delete the old pma-tls secret, or whether the certificate issuing process failed (run kubectl describe certificate pma-tls to check its status).
Afterword
At the very beginning, Kubernetes seemed a little scary and complicated: I had to write a pile of YAML configuration files to set up just one service. But it pays off. I don't have to set every config by hand, write nginx configs, or run acme.sh commands anymore, and I can deploy another cluster from the same configuration files in just a few minutes. With kustomize it's quite easy to generate and reuse configurations across clusters (see the GitHub repo and this blog post).
An obvious disadvantage is the relatively high memory usage: my Kubernetes setup, for example, eats up to 1.5 GiB of memory. The recommended memory, according to the official microk8s docs, is 4 GiB. But anyway, Kubernetes is worth a try.