> [!SUMMARY] Table Of Contents
> - [[#Introduction]]
> - [[#Prerequisites]]
> - [[#Installing Kubernetes components and Dependencies]]
> - [[#Swap and kernel modules]]
> - [[#Container Runtime]]
> - [[#DNS configuration]]
> - [[#Kubernetes Components]]
> - [[#Setting up the cluster]]
> - [[#Setting up Cilium]]
## Introduction
Today, we start our journey into Kubernetes, what I consider to be the final stage of a “conventional” DevOps journey. I’ll try my best to take you through the entire process of standing up a production-grade Kubernetes setup: from setting up the cluster, to deploying our first service, to delving into GitOps concepts.
Before we start, let’s summarize how this series of articles will be divided:
- Setting up the cluster with Kubeadm.
- Understanding Kubernetes’ basic components: pods, deployments, services.
- Understanding Kubernetes Networking.
- Persistence in Kubernetes.
- Helm.
- Introduction to GitOps.
Of course, there’s a lot more to Kubernetes than I can dive into here; this series of articles only aims to get your foot in the door.
In this article, we will:
- Install all the dependencies on the cluster’s machines.
- Initialize the cluster’s master node.
- Join the worker nodes to the cluster.
- Set up basic networking with Cilium.
Without further ado, let’s get started.
## Prerequisites
First off, you will need at least three Linux machines to set up a Kubernetes cluster. If you’re doing this solely to learn, three VMs with 4 CPU threads and 2 GB of RAM each are more than enough. Of course, you can use three physical computers with Linux installed if you have the resources.
You will want each Linux machine to have a static IPv4 address, and you should ensure connectivity between all of the machines.
One other thing to take into account: you’ll need to set aside an adequate CIDR range for all the pods you’ll deploy, not to be confused with the reserved IP addresses of the cluster machines themselves.
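For example, an address plan might look like this (all addresses here are illustrative; adjust them to your own network, and make sure the pod range does not overlap your LAN):

```
Node IPs (static, on the LAN):
  master   192.168.1.10
  worker1  192.168.1.11
  worker2  192.168.1.12
Pod CIDR (passed to kubeadm later):
  10.244.0.0/16
```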
Note: I’ll be installing Kubernetes on three Ubuntu machines, but the process works on any distro, provided you adjust the commands accordingly.
## Installing Kubernetes components and Dependencies
Right, let’s get each Linux machine prepped now.
### Swap and kernel modules
```bash
# Disable Swap and enable modules
sudo swapoff -a
sudo modprobe overlay
sudo modprobe br_netfilter

# Edit config files to permanently disable swap and enable the modules
sudo sed -i '/ swap / s/^/#/' /etc/fstab
echo "
overlay
br_netfilter
" | sudo tee /etc/modules-load.d/containerd.conf
echo "
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
" | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
sudo sysctl --system
```
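Before moving on, it’s worth a quick sanity check that the settings actually took effect; each of these should print `1` once the config above is applied (the bridge entries only exist after `br_netfilter` is loaded):

```shell
# Should print "1" after the sysctl config above is applied
cat /proc/sys/net/ipv4/ip_forward
# Only present once br_netfilter is loaded:
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "br_netfilter is not loaded"
```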
### Container Runtime
```bash
# Install All Software Dependencies
sudo apt update && sudo apt dist-upgrade -y
sudo apt install -y apt-transport-https gnupg ca-certificates curl software-properties-common inetutils-traceroute gpg containerd

# Setup Containerd Config
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
```
### DNS configuration
> [!WARNING] Important
> Pay attention here! Make sure to replace the IPs and hostnames to fit your setup!
```bash
sudo hostnamectl set-hostname <HOSTNAME>
# Note the "-a": we append so we don't overwrite the existing /etc/hosts entries
echo "
<IP> <MASTER HOSTNAME>
<IP> <WORKER1 HOSTNAME>
<IP> <WORKER2 HOSTNAME>
" | sudo tee -a /etc/hosts

# Test
ping -c 3 <MASTER HOSTNAME>
ping -c 3 <WORKER1 HOSTNAME>
ping -c 3 <WORKER2 HOSTNAME>
```
### Kubernetes Components
```bash
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
```
With that, all the components you need to set up the cluster should be installed. To test, run these commands:
```bash
sudo containerd -v
sudo kubectl version
sudo kubeadm version
```
If `kubectl version` prints the following error, that’s OK; it’s because we haven’t set up the cluster yet.

```
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
## Setting up the cluster
Now that everything is ready, setting up the actual cluster is relatively simple.
> [!WARNING]
> You can skip `--control-plane-endpoint` if you only plan on having one control node; otherwise, consider setting up a DNS rewrite or a load balancer.
```bash
# Initialize the cluster
sudo kubeadm init --pod-network-cidr <RANGE> --control-plane-endpoint "<ENDPOINT>" --upload-certs --v=5

# Configure Kubectl's Kubeconfig
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
This should initialize the cluster and the control node, as well as print out the join command for the other nodes. If you need that command again, simply run:
```bash
kubeadm token create --print-join-command
```
Now run that join command on each of the worker nodes.
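The join command printed by `kubeadm init` looks roughly like this (the endpoint, token, and hash below are placeholders; yours will differ):

```bash
sudo kubeadm join <ENDPOINT>:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH>
```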
## Setting up Cilium
Now the cluster is set up, but you’ll notice that when you run `kubectl get nodes`, the node statuses show as `NotReady`. That’s because, technically speaking, networking hasn’t been configured for the cluster yet; you need a CNI (Container Network Interface) plugin.
A CNI plugin is what configures networking between the nodes as well as between the containers. One of the most robust and widely used CNIs is Cilium, so that’s what we’ll be installing, even though some would argue it’s a bit overkill.
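Before a CNI is installed, that `kubectl get nodes` output will look roughly like this (node names, ages, and versions will differ on your cluster):

```
NAME      STATUS     ROLES           AGE   VERSION
master    NotReady   control-plane   5m    v1.35.0
worker1   NotReady   <none>          2m    v1.35.0
worker2   NotReady   <none>          2m    v1.35.0
```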
```bash
# Install The Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

# Install Cilium On The Cluster
cilium install --version 1.18.5

# Validate Installation
cilium status --wait

# Test Connectivity
cilium connectivity test
```
This process does take a bit of time, so be patient; once it’s finished, run `kubectl get nodes` again and you should see that all the nodes are ready!
One final thing I like to do is label the worker nodes as such:
```bash
kubectl label node <NODE_NAME> node-role.kubernetes.io/worker=worker
```
And with that, your cluster is all set up! In my next article, we will deploy our first pods and services, so make sure to check it out!