Deploy Raspberry Pi Kubernetes cluster with Kubespray

Viet Dinh
Jul 5, 2021

In this post we will be deploying a Kubernetes cluster (2 master nodes and 3 worker nodes, fronted by a dedicated load balancer) as described in the following picture.

Requirements

  • Raspberry Pi 4 Model B (4GB or 8GB of RAM) x 6
  • Install Ubuntu 20 64-bit OS on each Raspberry Pi. Make sure all Raspberry Pis share the same credentials (username and password). The OS image can be downloaded from Ubuntu's website.

Procedure

The first thing we need to do is assign each Raspberry Pi (RP) a static IP address and connect them all to the same LAN.
In this post, we assume that static IP addresses have been assigned as follows (an example netplan configuration is shown after the list):

Load balancer: 192.168.0.9
Master 1: 192.168.0.10
Master 2: 192.168.0.11
Worker 1: 192.168.0.12
Worker 2: 192.168.0.13
Worker 3: 192.168.0.14
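
For reference, a static address can be configured on Ubuntu with netplan. Below is a minimal sketch for master 1, assuming a wired interface named eth0, a /24 network, a gateway at 192.168.0.1, and a file name of my own choosing under /etc/netplan; adapt these to your network (on Ubuntu server images, cloud-init may also manage the network configuration).

# Example netplan config for master 1 (assumptions: interface eth0, gateway 192.168.0.1)
sudo tee /etc/netplan/99-static.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      addresses: [192.168.0.10/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.1]
EOF
# Apply the new network configuration
sudo netplan apply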

We need to be able to open an SSH connection from our deploying machine (the machine we use to orchestrate the deployment) to each RP.

sshpass must be installed on the deploying machine so that Ansible can authenticate over SSH with a password.

sudo apt install -y sshpass
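
A quick way to confirm that password-based SSH works from the deploying machine to every node is a loop like the one below. The PI_USER and PI_PASSWORD variables are placeholders for the common credentials set up on the RPs.

# Verify SSH access to every node from the deploying machine
PI_USER='{common user setup on each RP}'
PI_PASSWORD='{common password for the above common user}'
for ip in 192.168.0.9 192.168.0.10 192.168.0.11 192.168.0.12 192.168.0.13 192.168.0.14; do
  sshpass -p "$PI_PASSWORD" ssh -o StrictHostKeyChecking=no "$PI_USER@$ip" hostname
done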

The load balancer is responsible for distributing traffic among the Kubernetes master nodes. We install HAProxy on the load balancer node:

sudo apt install -y haproxy

Open the HAProxy configuration file on the load balancer node:

sudo nano /etc/haproxy/haproxy.cfg

Replace the content of haproxy.cfg with the following content

global
    user haproxy
    group haproxy
    daemon
    maxconn 4096
defaults
    timeout connect 10s
    timeout client 8640000s
    timeout server 8640000s
    balance roundrobin
    log global
    mode tcp
    maxconn 2000
frontend k8s-api-server
    bind 192.168.0.9:9000
    mode tcp
    option tcplog
    default_backend k8s-api-server
backend k8s-api-server
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 192.168.0.10:6443 check fall 3 rise 2
    server master2 192.168.0.11:6443 check fall 3 rise 2

Restart the HAProxy service on the load balancer:

sudo systemctl restart haproxy
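
To sanity-check the load balancer, you can validate the configuration file and confirm that HAProxy is listening on the frontend port. Both commands below are optional checks.

# Validate the configuration file syntax
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Confirm HAProxy is listening on 192.168.0.9:9000
sudo ss -tlnp | grep 9000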

Enable the memory and cpu cgroup controllers on each RP node except the load balancer node. On each node, open the file /boot/firmware/cmdline.txt and append the text cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 to the end of the existing line if it is not already present (cmdline.txt must remain a single line), then reboot.

sudo nano /boot/firmware/cmdline.txt
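
As an alternative to editing by hand, a sketch like the following appends the flags only if they are missing, then reboots. It assumes the stock Ubuntu path /boot/firmware/cmdline.txt shown above.

# Append the cgroup flags to the single kernel command line if not already present
grep -q 'cgroup_enable=memory' /boot/firmware/cmdline.txt || \
  sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt
# Reboot so the new kernel command line takes effect
sudo reboot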

Clone the Kubespray repository to our deploying machine and navigate to the Kubespray directory:

# Clone Kubespray repository to your machine
git clone https://github.com/kubernetes-sigs/kubespray.git
# Navigate to Kubespray directory
cd kubespray

Install Kubespray dependencies on the deploying machine

# Install dependencies
sudo apt install -y python3-pip
sudo pip3 install -r requirements.txt
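
requirements.txt pins the Ansible version Kubespray expects, so it is worth confirming the tooling is in place before going further.

# Confirm Ansible was installed by requirements.txt
ansible --version
ansible-playbook --version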

Create the Kubespray settings for our cluster on the deploying machine.

# Create a copy of sample cluster settings
cp -rfp inventory/sample inventory/mycluster

Update the file inventory.ini found at inventory/mycluster/inventory.ini on our deploying machine with the following content. Remember to replace the text in curly brackets, and the brackets themselves, with your own cluster information.

[all]
k8s-node-1 ansible_host=192.168.0.10 ip=192.168.0.10 etcd_member_name=etcd1
k8s-node-2 ansible_host=192.168.0.11
k8s-node-3 ansible_host=192.168.0.12
k8s-node-4 ansible_host=192.168.0.13
k8s-node-5 ansible_host=192.168.0.14
[kube_control_plane]
k8s-node-1
k8s-node-2
[etcd]
k8s-node-1
[kube_node]
k8s-node-3
k8s-node-4
k8s-node-5
[k8s_cluster:children]
kube_control_plane
kube_node
[remote_nodes:children]
kube_control_plane
kube_node
[remote_nodes:vars]
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
ansible_become_pass='{common password for the above common user, used for sudo on each RP}'
ansible_user='{common user setup on each RP}'
ansible_password='{common password for the above common user}'
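
Before deploying, it is worth checking that Ansible can reach every node with the inventory and credentials above. A simple connectivity test:

# Ping every node declared in the inventory
ansible -i inventory/mycluster/inventory.ini all -m ping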

Update the file k8s-cluster.yml found at inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml on the deploying machine. Find the line responsible for enabling the persistent volumes feature and make sure it is set to true.

persistent_volumes_enabled: true
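
If the variable already exists in the file (it may be commented out or set to false depending on the Kubespray version), a rough one-liner like the one below can flip it; otherwise simply edit the file by hand.

# Set persistent_volumes_enabled to true (check the result, as the exact line varies between versions)
sed -i 's/.*persistent_volumes_enabled:.*/persistent_volumes_enabled: true/' \
  inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml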

Update the file addons.yml found at inventory/mycluster/group_vars/k8s_cluster/addons.yml on the deploying machine. Make sure the following line exists so that Helm can be used.

helm_enabled: true

Update the file all.yml found at inventory/mycluster/group_vars/all/all.yml on the deploying machine and replace its content with the following. Make sure you replace the text in curly brackets, and the brackets themselves, with your own system information.

## Directory where etcd data stored
etcd_data_dir: {/path/to/an/empty/directory/on/192.168.0.10}
## Experimental kubeadm etcd deployment mode. Available only for new deployment
etcd_kubeadm_enabled: false
## Directory where the binaries will be installed
bin_dir: /usr/local/bin
## External LB example config
apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local"
loadbalancer_apiserver:
  address: 192.168.0.9
  port: 9000
## Internal loadbalancers for apiservers
loadbalancer_apiserver_localhost: false
## If loadbalancer_apiserver_healthcheck_port variable defined, enables proxy liveness check for nginx.
loadbalancer_apiserver_healthcheck_port: 8081
## Since workers are included in the no_proxy variable by default, docker engine will be restarted on all nodes (all
## pods will restart) when adding or removing workers. To override this behaviour by only including master nodes in the
## no_proxy variable, set below to true:
no_proxy_exclude_workers: false
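
Note that the indentation matters here: address and port must be nested under loadbalancer_apiserver. A quick syntax check after editing can save a failed run; PyYAML is already present as an Ansible dependency.

# Quick YAML syntax check of the edited file
python3 -c "import yaml; yaml.safe_load(open('inventory/mycluster/group_vars/all/all.yml')); print('all.yml parses OK')"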

On the deploying machine, run the following command to deploy the cluster

ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml
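
The deployment can take a long time on Raspberry Pi hardware. If a run fails partway through, Kubespray ships a reset.yml playbook that tears the nodes back down so cluster.yml can be re-run cleanly.

# Tear down a partially deployed cluster before retrying
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root reset.yml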

After the cluster deployment completes, run the following commands on the first master node (192.168.0.10) to allow the current user to use the kubectl command without providing the cluster config file explicitly:

mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown -R $(id -u):$(id -g) ~/.kube

On the first master node (192.168.0.10), run the following command to confirm the cluster was created:

kubectl get node
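
A couple of further checks that the cluster is healthy, run on the same master node:

# Show node roles, versions and internal IPs
kubectl get nodes -o wide
# All kube-system pods should eventually reach the Running state
kubectl get pods -n kube-system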

We now have a fully functioning Kubernetes cluster with 2 master nodes and 3 worker nodes.

Originally published at http://vietraspi.ddns.net.
