Attention: This article contains instructions for the deprecated BRIX On-Premises edition running in MicroK8s with a PostgreSQL 10 database. It is supported up to system version 2024.4. Upgrading the edition and the database is not available. Please update to the current BRIX On-Premises edition.
Basic information
A cluster of servers helps to balance the application load and allows duplicating and scaling backend services. This creates a high availability system that can continue to operate even if one of the servers fails.
Transparent communication between servers is achieved by using the production-ready MicroK8s distribution: https://microk8s.io.
Attention: You need at least three servers to set up a cluster.
In this example, we are using three nodes with the following hostnames and IP addresses:
- elma365-1, 192.168.1.51
- elma365-2, 192.168.1.52
- elma365-3, 192.168.1.53
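If the hostnames are not resolvable via DNS, one option is to map them in /etc/hosts on every node. The entries below match the example names and addresses above; adjust them to your environment:

```
# /etc/hosts entries for the example cluster (placeholders, adjust to your setup)
192.168.1.51  elma365-1
192.168.1.52  elma365-2
192.168.1.53  elma365-3
```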
Step 1: Install BRIX
To install BRIX, use the following command (take note of the -e enterprise parameter):
curl -fsSL -o elma365-microk8s.sh https://dl.elma365.com/onPremise/latest/elma365-microk8s-latest && \
chmod +x elma365-microk8s.sh && \
./elma365-microk8s.sh -e enterprise
During installation, you will be prompted to answer several questions. At the Use custom connection string? step, you can connect to distributed databases. Select external services (PostgreSQL, MongoDB, S3, Redis, or RabbitMQ) and specify the connection strings: hostname, port, username, and password.
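As an illustration, connection strings for external services typically follow the standard URI formats shown below. The hosts, database names, users, and passwords here are placeholders, not values from your installation:

```
postgresql://pg_user:pg_password@192.168.1.100:5432/elma365
mongodb://mongo_user:mongo_password@192.168.1.101:27017/elma365
amqp://rabbit_user:rabbit_password@192.168.1.102:5672/
redis://:redis_password@192.168.1.103:6379/0
```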
Step 2: Scaling nodes
To create a cluster of servers, you need to install the system to one server and then add the other servers to the cluster.
On the main node (server where the system is installed), run the following command:
sudo elma365ctl add-node
A list of commands will be displayed in the following format:
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.230:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05
Execute this command on the server you want to add to the cluster.
For each new server, you need to call add-node again, as this command generates a unique one-time identifier to connect the server.
You need to add servers to the cluster one by one, working with one server at a time.
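After joining each server, you can verify that the node has registered before adding the next one. One possible check on the main node uses standard MicroK8s commands (these are not part of the installer):

```shell
# List cluster nodes and their readiness (run on the main node)
sudo microk8s kubectl get nodes

# Show the cluster's high-availability and datastore status
sudo microk8s status
```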
Step 3: Autoscaling of services and Linkerd
Read more in the Autoscaling and Linkerd article.
You can enable or disable autoscaling during installation, updating, or reconfiguration of the BRIX application. When you enable autoscaling, you have to specify the minimum and maximum number of the application’s replicas.
As you enable autoscaling, Linkerd is automatically connected.
Manual scaling with the elma365ctl scale command is not available if autoscaling is enabled.
If you use autoscaling, note that the minimum hardware requirements are specified for a single copy of a microservice. When you increase the number of replicas, multiply all requirements by that number.
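As a sketch of that arithmetic, assume a single replica of some microservice needs 2 vCPUs and 4 GiB of RAM. These figures are made-up placeholders, not real BRIX requirements; check the official sizing guide for actual values:

```shell
# Hypothetical per-replica requirements (placeholder values)
replicas=3
cpu_per_replica=2   # vCPU per replica
ram_per_replica=4   # GiB per replica

# Multiply each requirement by the replica count
echo "Total: $((replicas * cpu_per_replica)) vCPU, $((replicas * ram_per_replica)) GiB RAM"
# → Total: 6 vCPU, 12 GiB RAM
```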
Step 4: Manual scaling of services
To scale all services simultaneously among three nodes, use the following command:
sudo elma365ctl scale all 3
You can also scale services separately:
sudo elma365ctl scale service_name replica_count
Where:
- service_name is the name of the service.
- replica_count is the number of replicas that the service needs to be scaled to.
For example, elma365ctl scale main 2 will run the main service in two replicas (copies).
If you do not need autoscaling and the number of service replicas does not change, you will be prompted to enable the Linkerd service when you disable autoscaling. Linkerd has to be enabled for correct operation and fault tolerance of the application.
Step 5: High availability mode
For this mode, you need at least three nodes in your cluster. After adding all the nodes to the cluster, run application reconfiguration on any of the nodes. At the Enable high availability mode? step, enable the mode.
elma365ctl reconfigure
Specify the number of replicas for scaling the system services of the cluster (the minimum number is three). Then enter the time, in seconds, for moving application services from failed nodes to working ones. The value has to be between 10 and 300.
After that, you need to update the settings on each node of the cluster, including the one you reconfigured the application on. To do that, use the following command:
sudo elma365ctl update-hamode
You need to update the settings on the cluster's nodes one by one, working with one server at a time. The nodes will be restarted.
When you finish, restart all the application’s services by running the following command once on any node of the cluster:
sudo elma365ctl restart
Step 6: HAProxy configuration (web)
Here is an example of a configuration for load balancing using HAProxy:
listen elma365_web
    bind haproxy-server.your_domain:80
    mode http
    balance leastconn
    no option http-use-htx
    option forwardfor
    option httpclose
    option httpchk HEAD /
    server elma365-1 elma365-1.your_domain:80 check
    server elma365-2 elma365-2.your_domain:80 check
    server elma365-3 elma365-3.your_domain:80 check
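Before applying the configuration, it is usually worth validating it and then reloading HAProxy. This sketch assumes the config lives at the default path /etc/haproxy/haproxy.cfg and that the host uses systemd:

```shell
# Check the configuration file for syntax errors
sudo haproxy -c -f /etc/haproxy/haproxy.cfg

# Reload the service so the new backends take effect
sudo systemctl reload haproxy
```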