Setting up a single node Kubernetes Cluster
Note: This guide was written for Kubernetes 1.9 for Docker on Ubuntu 16.04
This guide is written by a beginner in Linux, Docker, and Kubernetes, and is aimed at assisting others who are interested in trying out Kubernetes without using VMs or Minikube.
This is a basic guide to installing Kubernetes on a clean Docker on Ubuntu 16.04 system.
The guide is geared towards setting up a single node Kubernetes cluster with Traefik as the ingress controller. It will serve both the Traefik and Kubernetes dashboards on sub-domains reachable from the internet with both protected by basic auth.
Prerequisites
This guide assumes that you have a bare metal or VPS server running somewhere, as well as a domain name pointing to the machine's IP. It also assumes that you have a fresh install of Docker on Ubuntu 16.04 (the guide was written on a VPS hosted by OVH, which provides a Docker on Ubuntu image) and have already set up a user with sudo access. All commands in this guide are executed as a regular user unless otherwise noted.
Getting Started
First update your Ubuntu install by SSH-ing into your machine as a regular user and then running the following command
sudo apt-get update
followed by
sudo apt-get upgrade
This will update the package definitions and then upgrade the packages on your machine.
Getting the Kubernetes bits
The first step is to grab the signing key for the Kubernetes packages. Do so by issuing the following command (note that sudo belongs on apt-key, which needs root privileges, not on curl)
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Next we need to add the Kubernetes repository. The file lives under /etc/, so execute the following commands as root (you can switch to a root shell with sudo -i).
First create the file /etc/apt/sources.list.d/kubernetes.list by issuing
touch /etc/apt/sources.list.d/kubernetes.list
To edit the file enter
vi /etc/apt/sources.list.d/kubernetes.list
and then enter the line below into the file
deb http://apt.kubernetes.io/ kubernetes-xenial main
Once the line has been added, press Esc, type :x! and press Enter. This will save and close the file.
We are now ready to install the parts we need for Kubernetes. Issue the following commands as your regular user
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
Once it is done, the parts for Kubernetes have been installed.
Initialize the Node
Kubernetes requires a Pod network for the Pods to communicate. For this guide we will use Flannel, although several other Pod networks are available. You can take a look at the other networks here, but installing them is outside the scope of this guide.
First we need to set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 so that bridged IPv4 traffic is passed to iptables chains, which is required by certain CNI plugins (in this case Flannel). Do this by issuing
sudo sysctl net.bridge.bridge-nf-call-iptables=1
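Note that a setting applied with sysctl this way does not survive a reboot. If you want it to persist, you could also place it in a file under /etc/sysctl.d/ (the filename below is just an example):

```
# /etc/sysctl.d/99-kubernetes.conf (example filename)
net.bridge.bridge-nf-call-iptables = 1
```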
We can now initialize Kubernetes by running the initialization command as root and passing --pod-network-cidr, which is required for Flannel to work correctly
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
When the initialization finishes, kubeadm prints instructions for pointing kubectl at the new cluster. Run them as your regular user before continuing
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Once Kubernetes has been initialized we then install the Flannel Pod Network by running
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
We can check that the pod is up by running
kubectl get pods --all-namespaces
which will display all the Pods.
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-vps520050 1/1 Running 0 1d
kube-system kube-apiserver-vps520050 1/1 Running 0 1d
kube-system kube-controller-manager-vps520050 1/1 Running 0 1d
kube-system kube-dns-6f4fd4bdf-zpwjh 3/3 Running 0 1d
kube-system kube-flannel-ds-9szb9 1/1 Running 0 1d
kube-system kube-proxy-mgvg4 1/1 Running 0 1d
kube-system kube-scheduler-vps520050 1/1 Running 0 1d
If there are pods that are not running, take a look at the Kubernetes troubleshooting guide
Because we are running only a single Kubernetes node we want to be able to run Pods on the master node. To do this we need to untaint the master node so it can run regular pods. To do so run
kubectl taint nodes --all node-role.kubernetes.io/master-
Installing Traefik
We will use Traefik as an Ingress Controller.
Traefik will be installed as a Pod on Kubernetes.
Our first step is to set up the Role Based Access Control(RBAC) configuration.
The provided configuration is not overly fine-grained, but will serve our purposes for now.
The configuration looks as follows
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
Save this as traefik-rbac.yml
Our next step is to set up the configuration for Traefik. The configuration below will redirect all http traffic to https and also ensure that Let's Encrypt autoconfiguration will work correctly.
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
data:
  traefik.toml: |
    # traefik.toml
    defaultEntryPoints = ["http","https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.redirect]
        regex = "^http://(.*)"
        replacement = "https://$1"
      [entryPoints.https]
      address = ":443"
        [entryPoints.https.tls]
    [acme]
    email = "<email address>"
    storage = "/acme/acme.json"
    entryPoint = "https"
      [acme.httpChallenge]
      entryPoint = "http"
    [[acme.domains]]
      main = "<domain for https>"
Save this file as traefik-configmap.yml
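To see what the redirect rule under [entryPoints.http.redirect] does, you can emulate the regex/replacement pair with sed. This is only an illustration; Traefik applies the rule internally:

```shell
# Same regex and replacement as in traefik.toml: capture everything
# after http:// and reassemble the URL with the https scheme
echo "http://example.com/some/path" | sed -E 's|^http://(.*)|https://\1|'
# prints https://example.com/some/path
```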
Next we create our Traefik deployment. Because Let's Encrypt limits the number of certificates it will issue per domain, we make use of local storage to ensure we do not have to recreate the certificates when the Pod gets restarted. The deployment creates the required Service Account for Traefik to run under. It then sets up our Pod on the appropriate ports and configures the volumes where we will store our certificates on the local disk (under /srv/configs/).
# Service account for traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      volumes:
        - name: config
          configMap:
            name: traefik-conf
        - name: acme
          hostPath:
            path: /srv/configs/
      containers:
        - image: traefik
          name: traefik-ingress-lb
          volumeMounts:
            - mountPath: "/config"
              name: "config"
            - mountPath: "/acme"
              name: "acme"
          ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
            - containerPort: 8080
          args:
            - --configfile=/config/traefik.toml
            - --web
            - --kubernetes
            - --logLevel=INFO
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress-service
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      name: http
    - port: 443
      name: https
  externalIPs:
    - <Node's IP address>
Make sure to replace <Node's IP address>
with your own machine's IP address, then save the file as traefik-deploy.yml
Finally we configure our Ingress, the deployment provides a service for the Traefik Web UI as well as name-based routing for our domain to the Traefik Dashboard.
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: <domain to route to dashboard>
      http:
        paths:
          - backend:
              serviceName: traefik-web-ui
              servicePort: 80
Simply replace <domain to route to dashboard>
with your own domain that points to your server and save the file as traefik-ingress.yml
Now it is time to actually run the deployment. Do so by executing the following commands
kubectl create -f traefik-rbac.yml
kubectl create -f traefik-configmap.yml
kubectl create -f traefik-deploy.yml
kubectl create -f traefik-ingress.yml
You can view the deployed items as follows
ClusterRole
kubectl get clusterrole --all-namespaces
which will produce output similar to
NAME AGE
admin 5d
cluster-admin 5d
flannel 5d
system:aws-cloud-provider 5d
system:basic-user 5d
traefik-ingress-controller 3d
ConfigMap
kubectl get configmap --all-namespaces
which will produce output similar to
NAMESPACE NAME DATA AGE
kube-public cluster-info 1 5d
kube-system extension-apiserver-authentication 6 5d
kube-system kube-flannel-cfg 2 5d
kube-system kube-proxy 2 5d
kube-system kubeadm-config 1 5d
kube-system traefik-conf 1 3d
Service Accounts
kubectl get serviceaccounts --all-namespaces
which will produce output similar to
NAMESPACE NAME SECRETS AGE
default default 1 5d
kube-public default 1 5d
kube-system admin-user 1 3d
kube-system attachdetach-controller 1 5d
kube-system bootstrap-signer 1 5d
kube-system certificate-controller 1 5d
kube-system clusterrole-aggregation-controller 1 5d
kube-system cronjob-controller 1 5d
kube-system daemon-set-controller 1 5d
kube-system default 1 5d
kube-system deployment-controller 1 5d
kube-system disruption-controller 1 5d
kube-system endpoint-controller 1 5d
kube-system flannel 1 5d
kube-system generic-garbage-collector 1 5d
kube-system heapster 1 3d
kube-system horizontal-pod-autoscaler 1 5d
kube-system job-controller 1 5d
kube-system kube-dns 1 5d
kube-system kube-proxy 1 5d
kube-system namespace-controller 1 5d
kube-system node-controller 1 5d
kube-system persistent-volume-binder 1 5d
kube-system pod-garbage-collector 1 5d
kube-system replicaset-controller 1 5d
kube-system replication-controller 1 5d
kube-system resourcequota-controller 1 5d
kube-system service-account-controller 1 5d
kube-system service-controller 1 5d
kube-system statefulset-controller 1 5d
kube-system token-cleaner 1 5d
kube-system traefik-ingress-controller 1 3d
kube-system ttl-controller 1 5d
Deployments
kubectl get deployments --all-namespaces
which will produce output similar to
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system heapster 1 1 1 1 3d
kube-system kube-dns 1 1 1 1 5d
kube-system monitoring-influxdb 1 1 1 1 3d
kube-system traefik-ingress-controller 1 1 1 1 3d
Services
kubectl get services --all-namespaces
which produces output similar to
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP [redacted] <none> 443/TCP 5d
kube-system heapster ClusterIP [redacted] <none> 80/TCP 3d
kube-system kube-dns ClusterIP [redacted] <none> 53/UDP,53/TCP 5d
kube-system monitoring-influxdb ClusterIP [redacted] <none> 8086/TCP 3d
kube-system traefik-ingress-service ClusterIP [redacted] [your IP] 80/TCP,443/TCP 3d
kube-system traefik-web-ui ClusterIP [redacted] <none> 80/TCP 3d
Ingress
kubectl get ingress --all-namespaces
which will produce output similar to
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
kube-system traefik-web-ui [your domain] 80 3d
Pods
kubectl get pods --all-namespaces
which will produce output similar to
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-[redacted] 1/1 Running 0 5d
kube-system kube-apiserver-[redacted] 1/1 Running 0 5d
kube-system kube-controller-manager-[redacted] 1/1 Running 0 5d
kube-system kube-dns-6f4fd4bdf-zpwjh 3/3 Running 0 5d
kube-system kube-flannel-ds-9szb9 1/1 Running 0 5d
kube-system kube-proxy-mgvg4 1/1 Running 0 5d
kube-system kube-scheduler-[redacted] 1/1 Running 0 5d
kube-system traefik-ingress-controller-7b7866b8fc-jpw94 1/1 Running 0 3d
Traefik Logs
You can view the startup logs for the Traefik Pod by running
kubectl logs <Pod name> -n kube-system
This should show something like
time="2018-03-05T05:16:27Z" level=info msg="Using TOML configuration file /config/traefik.toml"
time="2018-03-05T05:16:27Z" level=warning msg="web provider configuration is deprecated, you should use these options : api, rest provider, ping and metrics"
time="2018-03-05T05:16:27Z" level=info msg="Traefik version v1.5.3 built on 2018-02-27_02:47:04PM"
time="2018-03-05T05:16:27Z" level=info msg="
Stats collection is disabled.
Help us improve Traefik by turning this feature on :)
More details on: https://docs.traefik.io/basics/#collected-data
"
time="2018-03-05T05:16:27Z" level=info msg="Preparing server traefik &{Network: Address::8080 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc420631d80} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2018-03-05T05:16:27Z" level=info msg="Preparing server http &{Network: Address::80 TLS:<nil> Redirect:0xc4201cb290 Auth:<nil> WhitelistSourceRange:[] Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc420631d40} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2018-03-05T05:16:27Z" level=info msg="Preparing server https &{Network: Address::443 TLS:0xc4206b4e00 Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc420631d60} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2018-03-05T05:16:27Z" level=info msg="Starting server on :80"
time="2018-03-05T05:16:27Z" level=info msg="Starting server on :8080"
You should now be able to access the Traefik dashboard by navigating to your domain.
Note that the dashboard is currently not protected and sits open on the internet; basic auth will be covered in the next section.
Basic auth for Dashboard
Install the apache2-utils package in order to make use of the htpasswd command. You can install it by running
sudo apt install apache2-utils
Once complete, set up a new password file by running
htpasswd -c ./auth <username>
and entering the desired password.
We will now create a secret in Kubernetes by issuing
kubectl create secret generic mysecret --from-file auth --namespace=kube-system
Note: The secret must be in the same namespace as the Ingress object.
Next we can attach Basic Authentication to our Ingress by editing the Ingress configuration and adding
ingress.kubernetes.io/auth-type: "basic"
ingress.kubernetes.io/auth-secret: "mysecret"
The new ingress file will look as follows
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
    - host: <domain to route to dashboard>
      http:
        paths:
          - backend:
              serviceName: traefik-web-ui
              servicePort: 80
Update the current Ingress by first deleting the old ingress
kubectl delete ingress traefik-web-ui -n kube-system
and then creating it again from the updated file
kubectl create -f traefik-ingress.yml
The dashboard should now prompt for Basic auth when you try to access it.
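If you would rather verify from the command line, a quick check with curl (replace the placeholders with your own domain and credentials) should report a 401 status until credentials are supplied:

```shell
# Without credentials: the first response line should contain 401
curl -sI https://<domain to route to dashboard> | head -n 1
# With valid credentials: the request should succeed
curl -sI -u <username>:<password> https://<domain to route to dashboard> | head -n 1
```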
Kubernetes Dashboard
Install Heapster
For the dashboard to be able to display all of its data, it requires Heapster running in the cluster.
The Heapster setup is fairly straightforward and won't be covered in much detail. For this guide we will be using InfluxDB to store the data. While the provided configurations also cater for setting up Grafana, we will not be using it.
First grab the Heapster RBAC file, then execute
kubectl create -f heapster-rbac.yaml
This will set up the required RBAC for Heapster to run.
Next we need to deploy the InfluxDB and Heapster pods. Start off by grabbing the InfluxDB deployment and
Heapster deployment files.
Next, execute the deployment by using
kubectl create -f influxdb.yaml
followed by
kubectl create -f heapster.yaml
Verify that both pods have been deployed and are running by executing
kubectl get pods --all-namespaces
Installing the Kubernetes Dashboard
For the dashboard we will use the Deployment file provided by Kubernetes with some modifications to fit into our use case. In this section we will explore the differences and deploy the dashboard to our cluster.
First, we will skip the Dashboard secret, as our dashboard will be hosted inside the cluster and exposed securely via the Traefik Ingress Controller. We will set up the RBAC exactly as it appears in the file, as it fulfils our requirements as is.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
Save the file as kubedashboard-rbac.yaml
Create the role by running
kubectl create -f kubedashboard-rbac.yaml
We need to make some changes to the Dashboard deployment, as we do not need all of its parts.
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
          ports:
            - containerPort: 9090
              protocol: TCP
          args:
            - --insecure-bind-address=0.0.0.0
            - --insecure-port=9090
            - --enable-insecure-login
          volumeMounts:
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment out the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
Save the file as kubedashboard-deploy.yaml
First, we expose port 9090 on the container, as we will be hosting the dashboard without SSL on the internal cluster network; the reason being that there is no way to deploy Let's Encrypt certificates inside the cluster network. The next change is a modification to the arguments
Bind to all available addresses
--insecure-bind-address=0.0.0.0
Bind to port 9090
--insecure-port=9090
Enable the sign in page on HTTP, not just HTTPS
--enable-insecure-login
Finally, we adjust the livenessProbe to run over HTTP on port 9090, and we remove the certificate volume information, as we will not be storing any certificates.
Deploy the dashboard by executing
kubectl create -f kubedashboard-deploy.yaml
Finally we need to set up the service and ingress to allow us to access the Dashboard
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 80
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: <dashboard url>
      http:
        paths:
          - backend:
              serviceName: kubernetes-dashboard
              servicePort: 80
Save the file as kubedashboard-ingress.yaml
This simply routes traffic arriving at our dashboard URL to the dashboard Pod on port 9090.
Deploy the ingress by running
kubectl create -f kubedashboard-ingress.yaml
Finally, we need a service account token to be able to access the dashboard's functionality. The deployment file below creates a service account named admin-user and binds it to the cluster-admin role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system
Save the file as kubedashboard-serviceaccount.yaml
Create the account by running
kubectl create -f kubedashboard-serviceaccount.yaml
Now we can grab the token by running
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
and then copying the token
Name: admin-user-token-wfm5v
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=admin-user
kubernetes.io/service-account.uid=a7d8dcd6-2092-11e8-a78e-fa163e7f0433
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: [redacted]
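The command substitution above simply looks up the name of the generated secret. The grep/awk part can be illustrated on its own with some canned kubectl get secret-style output (simulated here; your secret name suffix will differ):

```shell
# Simulated `kubectl -n kube-system get secret` output (illustration only)
secrets="default-token-abcde      kubernetes.io/service-account-token   3   5d
admin-user-token-wfm5v   kubernetes.io/service-account-token   3   3d"
# grep selects the admin-user line; awk prints the first column (the secret name)
echo "$secrets" | grep admin-user | awk '{print $1}'
# prints admin-user-token-wfm5v
```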
Access the dashboard at your dashboard URL, choose Token, paste in the token, and log in to see the state of your cluster.