BRIX On-Premises > Prepare infrastructure > Kubernetes / Kubernetes cluster

Kubernetes cluster

To operate BRIX, a functional Kubernetes cluster of version 1.21 or later is required. We recommend using the Deckhouse platform, built on Open Source components. Besides Kubernetes, this platform includes additional modules for monitoring, load balancing, autoscaling, secure access, and more. The modules are pre-configured, integrated with each other, and ready to use. Management and updates of all cluster and platform components are fully automated.

Deckhouse is certified by CNCF.

This article outlines the deployment of a Kubernetes cluster consisting of a single master node.

The installation consists of five steps:

  1. Prepare infrastructure.
  2. Prepare configuration file.
  3. Install the Kubernetes cluster based on Deckhouse.
  4. Set up Deckhouse.
  5. Install Helm.

Step 1: Prepare infrastructure

To deploy a Kubernetes cluster based on the Deckhouse platform, you will need:

  1. A personal computer.

A computer from which the installation will be carried out. It is only needed to launch the Deckhouse installer and will not be part of the cluster.

System requirements:

  • OS: Windows 10+, macOS 10.15+, Linux (Ubuntu 18.04+, Fedora 35+).
  • Docker installed to run the Deckhouse installer.
  • Access to a proxying registry or to a private container image storage with Deckhouse container images.
  • SSH access by key to the node that will be the master node of the future cluster.
  2. Master node.

A server (a physical server or a virtual machine) that will become the master node of the future cluster.

During the installation process, the Deckhouse installer, launched on a personal computer, will connect to the master node via SSH, install the necessary packages, configure the Kubernetes control plane, and deploy Deckhouse.

System requirements:

  • At least 12 CPU cores.
  • At least 16 GB of RAM.
  • At least 200 GB of disk space.
  • A supported OS.
  • Access to a proxying registry or to a private container image storage with Deckhouse container images.

Attention: Deckhouse supports only the Bearer token authentication scheme in the registry.

  • Access to a proxy server for downloading OS deb/rpm packages (if necessary).
  • SSH access from the personal computer using a key.
  • The node should not have container runtime packages installed, such as containerd or Docker.

Attention: Installation directly from the master node is currently not supported. The installer, distributed as a Docker image, cannot be run on the node where the master node is planned to be deployed, because that node must not have container runtime packages such as containerd or Docker installed. If no separate management machine is available, install Docker on any other node of the future cluster, run the installer image there, install Deckhouse, and then remove the installer image and Docker from that node.

  3. Downloading Deckhouse images to the local image registry.

You can deploy a Kubernetes cluster using Deckhouse in a closed environment without internet access. For this, first download the Deckhouse platform images on a computer with internet access and upload them to the local image registry. Read more in Download Deckhouse Images.
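
Before moving on to the configuration file, you can check from the personal computer that Docker works and that key-based SSH access to the future master node is in place. A minimal sketch; the user name and address are placeholders, and the key path assumes ~/.ssh/id_rsa, as used later in this article:

# check that Docker is installed and the daemon is reachable
docker info > /dev/null && echo "Docker is ready"

# check key-based SSH access to the future master node (replace the placeholders with your values)
ssh -i ~/.ssh/id_rsa -o BatchMode=yes <username>@<master_ip> 'echo "SSH access OK"'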

Step 2: Prepare configuration file

To install Deckhouse, prepare a YAML configuration file. To obtain it, use the Getting started service on the Deckhouse website: the service generates an up-to-date YAML file for the current version of the platform.

  1. Generate the YAML file using the Getting started service by following these steps:
  1. Choose the infrastructure: Bare Metal.
  2. Read the installation information.
  3. Specify the template for the cluster's DNS names. In our case it is %s.example.com.
  4. Save config.yml.
  2. Make the necessary changes in config.yml by doing the following:
  1. Set the cluster's Pod address space in podSubnetCIDR.
  2. Set the cluster's Service address space in serviceSubnetCIDR.
  3. Specify the desired Kubernetes version in kubernetesVersion.
  4. Check the update channel in releaseChannel (Stable).
  5. Verify the domain name template in publicDomainTemplate (%s.example.com).
    Used to form domains for system applications in the cluster. For example, Grafana for the template %s.example.com will be accessible as grafana.example.com.
  6. Check the operation mode of the cni-flannel module in podNetworkMode.
    The flannel operation mode; permissible values are VXLAN (if your servers have L3 connectivity) or HostGW (for L2 networks).
  7. Specify the local network to be used by cluster nodes in internalNetworkCIDRs.
    List of internal networks of cluster nodes, for example, '192.168.1.0/24', used for communication between Kubernetes components (kube-apiserver, kubelet, etc.).

Example of the initial cluster configuration file: config.yml.

For installation via the internet:

apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Static
podSubnetCIDR: 10.111.0.0/16
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "1.23"
clusterDomain: "cluster.local"
---
apiVersion: deckhouse.io/v1
kind: InitConfiguration
deckhouse:
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        publicDomainTemplate: "%s.example.com"
    cniFlannelEnabled: true
    cniFlannel:
      podNetworkMode: VXLAN
---
apiVersion: deckhouse.io/v1
kind: StaticClusterConfiguration
internalNetworkCIDRs:
  - 192.168.1.0/24

For offline installation without internet access:

Step 3: Install the Kubernetes cluster based on Deckhouse

Installing Deckhouse Platform Community Edition sets up a cluster consisting of a single master node using a Docker-image-based installer. The installer is distributed as a container image and requires the configuration file and the SSH key for accessing the master node. It is assumed that the SSH key used is ~/.ssh/id_rsa. The installer is based on the dhctl utility.

  1. Start the installer.

Attention: Installation directly from the master node is currently not supported. The installer, distributed as a Docker image, cannot be run on the node where the master node is planned to be deployed, because that node must not have container runtime packages such as containerd or Docker installed.

Run the installer on the personal computer prepared at the infrastructure preparation step. On that computer, navigate to the directory containing the config.yml file prepared at the configuration file preparation step.

To launch the installer via the internet:

sudo docker run --pull=always -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" registry.deckhouse.io/deckhouse/ce/install:stable bash

For offline installation without internet access:

  2. Install Deckhouse. To do this, execute the command inside the installer container:

dhctl bootstrap --ssh-user=<username> --ssh-host=<master_ip> --ssh-agent-private-keys=/tmp/.ssh/id_rsa \
--config=/config.yml \
--ask-become-pass

where:

  • <username> is the name of the user who generated the SSH key; it is passed in the --ssh-user parameter.
  • <master_ip> is the IP address of the master node prepared at the infrastructure preparation step.

The installation process may take 15-30 minutes with a good connection.
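
Once the installer finishes, you can check that the platform has started. A minimal sketch, assuming the standard Deckhouse layout in which the platform runs in the d8-system namespace; execute it on the master node:

# the deckhouse pod should reach the Running state
kubectl -n d8-system get pods -l app=deckhouse

# if something goes wrong, inspect the Deckhouse logs
kubectl -n d8-system logs -l app=deckhouse --tail=50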

Step 4: Set up Deckhouse

Connect via SSH to the master node prepared during the infrastructure preparation step. Perform the following steps:

  1. Remove taint restrictions from the master node.

In this article, the Deckhouse-based Kubernetes cluster consists of a single node. Allow Deckhouse components to run on the master node by executing the following command:

kubectl patch nodegroup master --type json -p '[{"op": "remove", "path": "/spec/nodeTemplate/taints"}]'
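
To confirm that the taint has been removed (the change may take a short while to propagate), you can inspect the node taints; a quick check:

# the TAINTS column for the master node should show <none>
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints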

  2. Increase the number of Pods on the master node.

Increase the maximum number of Pods on the NodeGroup master nodes by executing the following command:

kubectl patch nodegroup master --type=merge -p '{"spec":{"kubelet":{"maxPods":200}}}'
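
To confirm that the new limit has been applied to the NodeGroup, you can read it back; a quick check:

# should print 200
kubectl get nodegroup master -o jsonpath='{.spec.kubelet.maxPods}'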

  3. Add Local Path Provisioner.

By default, Deckhouse has no StorageClass. Create a LocalPathProvisioner custom resource, which allows Kubernetes users to use local storage on the nodes. Perform the following actions:

  1. Create a local-path-provisioner.yaml file on the master node containing the configuration for LocalPathProvisioner.
  2. Set the desired reclaim policy (Retain by default). In this article, the reclaimPolicy parameter is set to "Delete" (PVs are deleted after their PVCs are deleted).

local-path-provisioner.yaml file example:

apiVersion: deckhouse.io/v1alpha1
kind: LocalPathProvisioner
metadata:
  name: localpath-deckhouse-system
spec:
  nodeGroups:
  - master
  path: "/opt/local-path-provisioner"
  reclaimPolicy: "Delete"

  3. Apply the local-path-provisioner.yaml file in Kubernetes by executing the command:

kubectl apply -f local-path-provisioner.yaml

  4. Set the storage class created by LocalPathProvisioner as the default one by executing the command:

kubectl patch storageclass localpath-deckhouse-system -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
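
To make sure the storage class exists and is marked as the default one, list the storage classes; a quick check:

# the localpath-deckhouse-system class should be marked as (default)
kubectl get storageclass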

  4. Add Ingress Nginx Controller.

Deckhouse installs and manages the NGINX Ingress Controller using custom resources. If more than one node is available for the Ingress controller, it is deployed in a fault-tolerant mode, taking into account the specifics of cloud and bare-metal infrastructures and different types of Kubernetes clusters.

  1. Create an ingress-nginx-controller.yml file on the master node containing the Ingress controller configuration:

# section describing the parameters of the nginx ingress controller
# used version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  # name of the Ingress class for serving Ingress NGINX
  ingressClass: nginx
  # version of the Ingress controller; must be compatible with the Kubernetes version used in the cluster
  controllerVersion: "1.9"
  # how traffic enters from the external world
  inlet: HostPort
  hostPort:
    httpPort: 80
    httpsPort: 443
  # describes which nodes the component will be located on
  # you might want to change
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  tolerations:
  - operator: Exists

  2. Apply the ingress-nginx-controller.yml file in Kubernetes by executing the command:

kubectl create -f ingress-nginx-controller.yml
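
You can check that the Ingress controller pods have started; a quick check, assuming the standard Deckhouse namespace d8-ingress-nginx:

# the controller pods should reach the Running state
kubectl -n d8-ingress-nginx get pods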

  5. Add a user for access to the cluster web interface.
  1. Create a user.yml file on the master node containing the description of the user account and access rights:

apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # list of Kubernetes RBAC accounts
  subjects:
  - kind: User
    name: admin@deckhouse.io
  # predefined access level template
  accessLevel: SuperAdmin
  # allow the user to do kubectl port-forward
  portForwarding: true
---
# section describing the parameters of the static user
# used version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # user's email
  email: admin@deckhouse.io
  # this is the hash of the password xgnv5gkggd
  # generate your own or use this one, but only for testing:
  # echo "xgnv5gkggd" | htpasswd -BinC 10 "" | cut -d: -f2
  # you might want to change it
  password: '$2a$10$4j4cUeyonCfX7aDJyqSHXuAxycsf/sDK0T4n9ySQ7.owE34L1uXTm'
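
To use your own password instead of the example one, generate a bcrypt hash and substitute it into the password field of user.yml. A sketch based on the command from the comment above; it assumes the htpasswd utility (the apache2-utils package) is installed:

# generate a bcrypt hash for your own password and paste it into user.yml
echo "<your_password>" | htpasswd -BinC 10 "" | cut -d: -f2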

  2. Apply the user.yml file by executing the following command on the master node:

kubectl create -f user.yml

  6. Allow pods to run under the privileged security policy by labeling the elma365 namespace:

kubectl label namespace elma365 security.deckhouse.io/pod-policy=privileged --overwrite
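
If the elma365 namespace does not exist yet, the label command above will fail; in that case, create the namespace first. A minimal sketch:

# create the namespace before labeling it (skip if it already exists)
kubectl create namespace elma365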

Step 5: Install Helm

  1. To install Helm, visit the Helm releases page and download the helm-vX.Y.Z-linux-amd64.tar.gz archive of the required version.

For installation via the internet:

wget https://get.helm.sh/helm-vX.Y.Z-linux-amd64.tar.gz

For offline installation without internet access:

  2. Unpack the archive and move the helm binary:

tar -zxvf helm-vX.Y.Z-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
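
To confirm the installation, check the Helm client version:

# should print the installed Helm version
helm version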