Kubernetes is one of the most popular tools for managing containerized applications in modern development environments. This article provides a step-by-step guide to installing and configuring a Kubernetes cluster with kubeadm.
The main components of Kubernetes are:
- A master node. This node manages the entire cluster: it monitors the other nodes and distributes the load using the controller manager and scheduler. To increase fault tolerance, we recommend running several master nodes.
- Worker nodes. These nodes run your containers. The more worker nodes you have, the more applications you can run. The number of nodes also affects the fault tolerance of the cluster: when one node fails, its load is redistributed to the others.
The sections below show how to deploy a standard Kubernetes cluster consisting of one master node and one worker node.
Note
This article helps you set up a standard Kubernetes cluster for development or testing environments. For the production environment, we recommend installing Deckhouse.
The installation of a Kubernetes cluster consists of four steps:
- Prepare the infrastructure.
- Prepare the master node.
- Prepare the worker node.
- Install additional components.
Step 1: Prepare the infrastructure
To deploy a Kubernetes cluster, you need:
- A physical server or VM (virtual machine). It is the control node (master node) of the future cluster.
Minimum system requirements:
- Virtual processor (CPU) — 2
- Virtual memory (RAM) — 4 GB
- Virtual hard disk (Storage) — 36 GB
- A physical server or VM (virtual machine). It is a worker node of the future cluster.
Minimum system requirements:
- Virtual processor (CPU) — 2
- Virtual memory (RAM) — 4 GB
- Virtual hard disk (Storage) — 36 GB
Execute the following commands on all nodes as the root user:
- kubelet does not run with swap enabled, so disable it:
systemctl stop dev-zram0.swap
systemctl mask dev-zram0.swap
swapoff -a
Disable swap loading after reboot by commenting out the swap entry in /etc/fstab:
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
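To see what this sed pattern does without touching the real /etc/fstab, you can try it on a sample file first; the path /tmp/fstab.example and the swap device name below are illustrative only:

```shell
# Sketch: apply the same sed pattern to a sample file, not the real /etc/fstab
printf '/dev/mapper/cs-swap none swap defaults 0 0\n' > /tmp/fstab.example
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.example
# The swap line is now commented out
cat /tmp/fstab.example
```

Any line containing " swap " is rewritten with a leading #, so it is ignored on the next boot.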
- Disable SELinux:
setenforce 0 && sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
- Create a file to load the kernel modules required for containerd at boot:
cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Load the modules into the kernel:
modprobe overlay
modprobe br_netfilter
- Create a configuration file for networking inside the Kubernetes cluster:
cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
Apply the parameters:
sysctl --system
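To confirm that the parameters took effect, you can read a value back directly from /proc/sys; for example, IPv4 forwarding (a value of 1 means forwarding is enabled):

```shell
# Read back the applied kernel parameter; 1 means IPv4 forwarding is enabled
cat /proc/sys/net/ipv4/ip_forward
```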
- Install the required packages:
- Add the Kubernetes repository:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
- Install kubelet, kubeadm, and kubectl:
sudo yum install -y cri-tools containerd
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
- Configure port forwarding in iptables:
iptables -P FORWARD ACCEPT
- Generate the default containerd configuration file:
containerd config default | sudo tee /etc/containerd/config.toml
- To enable the systemd cgroup driver, set the SystemdCgroup parameter to true in the /etc/containerd/config.toml configuration file:
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
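You can preview this substitution on a sample fragment before editing the real configuration; the path /tmp/containerd-config.example is illustrative only:

```shell
# Sketch: run the same substitution on a sample fragment, not the real config.toml
printf 'SystemdCgroup = false\n' > /tmp/containerd-config.example
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /tmp/containerd-config.example
# The parameter is now set to true
cat /tmp/containerd-config.example
```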
- Start the containerd service and enable it at boot:
systemctl enable --now containerd
- Enable the kubelet service at boot:
systemctl enable kubelet.service
Step 2: Prepare the master node
Execute the following commands on the master node as the root user:
- Start the initialization of the master node. This command performs the initial configuration and preparation of the main cluster node. The --pod-network-cidr flag specifies the internal subnet address for the future cluster:
kubeadm init --pod-network-cidr=10.244.0.0/16
- After successful initialization, configure the cluster management parameters:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
- Set the internal network configuration in the cluster:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
- To print the command for joining worker nodes to the cluster, run:
kubeadm token create --print-join-command
As a result, you get the following command:
kubeadm join 192.168.1.10:6443 --token c7nysv.vyt6wwx3df459ha5a --discovery-token-ca-cert-hash sha256:7cab98d22238e6240add52db9a25465d4668ac588fd70e8e3d9d3cbd4688c7
Step 3: Prepare the worker node
To join the worker node to the cluster, execute the command obtained in the previous step on the worker node as the root user:
kubeadm join 192.168.1.10:6443 --token c7nysv.vyt6wwx3df459ha5a --discovery-token-ca-cert-hash sha256:7cab98d22238e6240add52db9a25465d4668ac588fd70e8e3d9d3cbd4688c7
Step 4: Install additional components
Execute the following commands on the master node as the root user:
- To install Helm, go to the Helm releases page and download the helm-vX.Y.Z-linux-amd64.tar.gz archive of the required version, or download it directly with the following command (replace vX.Y.Z with the required version):
wget https://get.helm.sh/helm-vX.Y.Z-linux-amd64.tar.gz
- Unpack the archive and move the helm binary file:
tar -zxvf helm-vX.Y.Z-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin/helm
- Install the Ingress NGINX controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.5/deploy/static/provider/cloud/deploy.yaml
- Change the Ingress NGINX controller service type to LoadBalancer and add an external IP address to the list of addresses that can be used to access the service. In the command below, <host_ip> is the IP address of the worker node:
kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec": {"type": "LoadBalancer", "externalIPs":["<host_ip>"]}}'
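Once the controller is reachable, HTTP traffic can be routed to an application with an Ingress resource. A minimal sketch, assuming a Service named my-app listening on port 80 (the service name and host below are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  # Route this Ingress through the installed NGINX controller
  ingressClassName: nginx
  rules:
    - host: my-app.example.com   # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # hypothetical Service
                port:
                  number: 80
```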
- Install the Kubernetes HostPath provisioner for dynamic provisioning of volumes:
helm repo add rimusz https://charts.rimusz.net
helm repo update
helm upgrade --install hostpath-provisioner --namespace kube-system rimusz/hostpath-provisioner
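With the provisioner installed, volumes can then be requested through a PersistentVolumeClaim. A minimal sketch; the claim name is hypothetical, and the storage class name should be verified with kubectl get storageclass, since it depends on the chart's defaults:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim               # hypothetical name
spec:
  # Confirm the class registered by the chart with: kubectl get storageclass
  storageClassName: hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```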