This article describes how to install PostgreSQL 13 on Ubuntu Linux 20.04 and 22.04. For the PostgreSQL versions supported for correct system operation, see the System requirements for BRIX On-Premises article. You can also refer to the official PostgreSQL documentation.
The installation consists of 10 steps:
- Prepare nodes (servers).
- Prepare the etcd cluster.
- Install PostgreSQL.
- Configure PostgreSQL.
- Install Patroni.
- Configure Patroni.
- Prepare the PostgreSQL+Patroni cluster.
- Prepare PGBouncer (optional).
- Configure HAProxy (postgres block).
- Connect to PostgreSQL.
Step 1: Prepare nodes (servers)
Create three nodes (servers) with sequentially numbered host names.
Attention
The minimum number of servers for organizing a cluster is three.
In this example, three nodes with the following hostnames and IP addresses are used:
- postgres-server1.your_domain, 192.168.1.1
- postgres-server2.your_domain, 192.168.1.2
- postgres-server3.your_domain, 192.168.1.3
Create the necessary host name mappings in DNS. If not possible, add the required entries in /etc/hosts.
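For example, with the addresses used in this article, the /etc/hosts file on each node could contain the following entries:
192.168.1.1 postgres-server1.your_domain postgres-server1
192.168.1.2 postgres-server2.your_domain postgres-server2
192.168.1.3 postgres-server3.your_domain postgres-server3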
Step 2: Prepare the etcd cluster
- Install etcd on all nodes:
sudo apt-get install etcd -y
- Stop etcd on all nodes:
sudo systemctl stop etcd
- Remove the data directory:
sudo rm -rf /var/lib/etcd/*
- Move the default configuration file:
sudo mv /etc/default/etcd{,.original}
- Create and open a new configuration file for editing:
sudo nano /etc/default/etcd
- Add the example configuration to the file for the postgres-server1.your_domain node:
ETCD_NAME="postgres-server1"
ETCD_DATA_DIR="/var/lib/etcd/default"
ETCD_HEARTBEAT_INTERVAL="1000"
ETCD_ELECTION_TIMEOUT="5000"
ETCD_LISTEN_PEER_URLS="http://192.168.1.1:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.1:2379,http://localhost:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.1:2380"
ETCD_INITIAL_CLUSTER="postgres-server1=http://192.168.1.1:2380,postgres-server2=http://192.168.1.2:2380,postgres-server3=http://192.168.1.3:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-postgres-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.1:2379"
ETCD_ENABLE_V2="true"
ETCD_INITIAL_ELECTION_TICK_ADVANCE="false"
Example configuration with TLS/SSL enabled for the postgres-server1.your_domain node:
ETCD_NAME="postgres-server1"
ETCD_DATA_DIR="/var/lib/etcd/default"
ETCD_HEARTBEAT_INTERVAL="1000"
ETCD_ELECTION_TIMEOUT="5000"
ETCD_LISTEN_PEER_URLS="https://192.168.1.1:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.1:2379,https://localhost:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://postgres-server1.your_domain:2380"
ETCD_INITIAL_CLUSTER="postgres-server1=https://postgres-server1.your_domain:2380,postgres-server2=https://postgres-server2.your_domain:2380,postgres-server3=https://postgres-server3.your_domain:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-postgres-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://postgres-server1.your_domain:2379"
ETCD_ENABLE_V2="true"
ETCD_CERT_FILE="/path/to/public.crt"
ETCD_KEY_FILE="/path/to/private.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/path/to/certCA.pem"
ETCD_PEER_CERT_FILE="/path/to/public.crt"
ETCD_PEER_KEY_FILE="/path/to/private.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/path/to/certCA.pem"
ETCD_INITIAL_ELECTION_TICK_ADVANCE="false"
- Add the example configuration to the file for the postgres-server2.your_domain node:
ETCD_NAME="postgres-server2"
ETCD_DATA_DIR="/var/lib/etcd/default"
ETCD_HEARTBEAT_INTERVAL="1000"
ETCD_ELECTION_TIMEOUT="5000"
ETCD_LISTEN_PEER_URLS="http://192.168.1.2:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.2:2379,http://localhost:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.2:2380"
ETCD_INITIAL_CLUSTER="postgres-server1=http://192.168.1.1:2380,postgres-server2=http://192.168.1.2:2380,postgres-server3=http://192.168.1.3:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-postgres-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.2:2379"
ETCD_ENABLE_V2="true"
ETCD_INITIAL_ELECTION_TICK_ADVANCE="false"
Example configuration with TLS/SSL enabled for the postgres-server2.your_domain node:
ETCD_NAME="postgres-server2"
ETCD_DATA_DIR="/var/lib/etcd/default"
ETCD_HEARTBEAT_INTERVAL="1000"
ETCD_ELECTION_TIMEOUT="5000"
ETCD_LISTEN_PEER_URLS="https://192.168.1.2:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.2:2379,https://localhost:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://postgres-server2.your_domain:2380"
ETCD_INITIAL_CLUSTER="postgres-server1=https://postgres-server1.your_domain:2380,postgres-server2=https://postgres-server2.your_domain:2380,postgres-server3=https://postgres-server3.your_domain:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-postgres-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://postgres-server2.your_domain:2379"
ETCD_ENABLE_V2="true"
ETCD_CERT_FILE="/path/to/public.crt"
ETCD_KEY_FILE="/path/to/private.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/path/to/certCA.pem"
ETCD_PEER_CERT_FILE="/path/to/public.crt"
ETCD_PEER_KEY_FILE="/path/to/private.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/path/to/certCA.pem"
ETCD_INITIAL_ELECTION_TICK_ADVANCE="false"
- Add the example configuration to the file for the postgres-server3.your_domain node:
ETCD_NAME="postgres-server3"
ETCD_DATA_DIR="/var/lib/etcd/default"
ETCD_HEARTBEAT_INTERVAL="1000"
ETCD_ELECTION_TIMEOUT="5000"
ETCD_LISTEN_PEER_URLS="http://192.168.1.3:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.3:2379,http://localhost:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.3:2380"
ETCD_INITIAL_CLUSTER="postgres-server1=http://192.168.1.1:2380,postgres-server2=http://192.168.1.2:2380,postgres-server3=http://192.168.1.3:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-postgres-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.3:2379"
ETCD_ENABLE_V2="true"
ETCD_INITIAL_ELECTION_TICK_ADVANCE="false"
Example configuration with TLS/SSL enabled for the postgres-server3.your_domain node:
ETCD_NAME="postgres-server3"
ETCD_DATA_DIR="/var/lib/etcd/default"
ETCD_HEARTBEAT_INTERVAL="1000"
ETCD_ELECTION_TIMEOUT="5000"
ETCD_LISTEN_PEER_URLS="https://192.168.1.3:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.3:2379,https://localhost:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://postgres-server3.your_domain:2380"
ETCD_INITIAL_CLUSTER="postgres-server1=https://postgres-server1.your_domain:2380,postgres-server2=https://postgres-server2.your_domain:2380,postgres-server3=https://postgres-server3.your_domain:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-postgres-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://postgres-server3.your_domain:2379"
ETCD_ENABLE_V2="true"
ETCD_CERT_FILE="/path/to/public.crt"
ETCD_KEY_FILE="/path/to/private.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/path/to/certCA.pem"
ETCD_PEER_CERT_FILE="/path/to/public.crt"
ETCD_PEER_KEY_FILE="/path/to/private.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/path/to/certCA.pem"
ETCD_INITIAL_ELECTION_TICK_ADVANCE="false"
Let's review the provided parameters:
- ETCD_NAME: unique name of the node within the cluster.
- ETCD_LISTEN_CLIENT_URLS: connection point for clients in the cluster.
- ETCD_ADVERTISE_CLIENT_URLS: URLs through which other cluster nodes can find this node's client endpoint.
- ETCD_LISTEN_PEER_URLS: connection point for other nodes in the cluster.
- ETCD_INITIAL_ADVERTISE_PEER_URLS: initial URLs through which other nodes can find this node.
- ETCD_INITIAL_CLUSTER_TOKEN: cluster token; must be the same on all nodes.
- ETCD_INITIAL_CLUSTER: list of nodes in the cluster at startup.
- ETCD_INITIAL_CLUSTER_STATE: can take two values: new and existing.
- ETCD_DATA_DIR: location of the cluster's data directory.
- ETCD_ELECTION_TIMEOUT: time in milliseconds between the last notification received from the cluster leader and an attempt by a follower node to take over the leader role.
- ETCD_HEARTBEAT_INTERVAL: time in milliseconds between notifications sent by the leader node to inform the others that it is still the leader.
- ETCD_CERT_FILE: path to the server certificate file.
- ETCD_KEY_FILE: path to the private key file.
- ETCD_TRUSTED_CA_FILE: path to the root CA file.
- ETCD_CLIENT_CERT_AUTH: can take two values: true and false.
- ETCD_PEER_CERT_FILE: path to the server certificate file used for peer connections.
- ETCD_PEER_KEY_FILE: path to the private key file used for peer connections.
- ETCD_PEER_TRUSTED_CA_FILE: path to the root CA file used for peer connections.
- ETCD_PEER_CLIENT_CERT_AUTH: can take two values: true and false.
- Restart etcd on all nodes:
sudo systemctl restart etcd
- Check the cluster status.
For a cluster without TLS:
sudo etcdctl cluster-health
For a cluster with TLS:
sudo etcdctl -C https://postgres-server1.your_domain:2379 --key-file /path/to/private.key --cert-file /path/to/public.crt --ca-file /path/to/certCA.pem cluster-health
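To additionally confirm that the cluster accepts writes, you can store and read back a test key from any node, using the same v2 API as the health check above (the key name here is arbitrary):
etcdctl set /test-key test-value
etcdctl get /test-key
etcdctl rm /test-key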
Step 3: Install PostgreSQL
Attention
Hardware requirements for BRIX on PostgreSQL can be found in System requirements for BRIX On-Premises.
- To install PostgreSQL, add the official postgresql repository:
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
- Update package cache:
sudo apt update
- Install PostgreSQL on all nodes:
sudo apt install postgresql-13 -y
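To verify the installation, you can check the PostgreSQL version and the state of the default cluster (the pg_lsclusters utility is provided by the postgresql-common package on Ubuntu):
psql --version
pg_lsclusters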
Step 4: Configure PostgreSQL
Note
The following characters are allowed in the password:
- Uppercase Latin letters: A to Z
- Lowercase Latin letters: a to z
- Digits: 0 to 9
- Symbols: - and _
Reserved (invalid) symbols:
! * ' ( ) ; : @ & = + $ , / ? % # [ ]
Actions for the node postgres-server1.your_domain:
- Create a new role elma365 with the password SecretPassword:
sudo -u postgres psql -c \
"CREATE ROLE elma365 WITH login password 'SecretPassword';"
- Create a database elma365 with the owner elma365:
sudo -u postgres psql -c \
"CREATE DATABASE elma365 WITH OWNER elma365;"
- Add the necessary extensions for the elma365 database:
sudo -u postgres psql -d elma365 -c \
"CREATE EXTENSION \"uuid-ossp\"; CREATE EXTENSION pg_trgm;"
- Create a new role replicator with the password ReplicatorPassword for working with replicas. It must match the Patroni settings in the postgresql - authentication - replication block and the list of allowed PostgreSQL hosts in the pg_hba.conf file:
sudo -u postgres psql -c \
"CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'ReplicatorPassword';"
- Set a password for the postgres user:
sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'PostgresPassword';"
- Stop PostgreSQL:
sudo systemctl stop postgresql
Actions for the nodes postgres-server2.your_domain and postgres-server3.your_domain:
- Stop PostgreSQL:
sudo systemctl stop postgresql
- Remove the data directory on the nodes postgres-server2.your_domain and postgres-server3.your_domain:
sudo rm -rf /var/lib/postgresql/13/main
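If you want to double-check the roles and extensions created earlier, queries like the following can be run on postgres-server1.your_domain before stopping PostgreSQL there (an optional verification step):
sudo -u postgres psql -c "\du"
sudo -u postgres psql -d elma365 -c "\dx"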
Step 5: Install Patroni
- Install Patroni and pip on all nodes:
sudo apt-get install python3-pip python3-dev libpq-dev -y
sudo apt-get install patroni -y
- Install the dependencies required for Patroni on all nodes:
pip3 install psycopg2-binary
pip3 install wheel
pip3 install python-etcd
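To confirm that Patroni and its Python dependencies installed correctly, you can check their versions on each node (the --version flag is available in current Patroni releases):
patroni --version
pip3 show psycopg2-binary python-etcd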
Step 6: Configure Patroni
- Create a configuration file:
sudo nano /etc/patroni/config.yml
- Place the example initial configuration in the created /etc/patroni/config.yml file, changing the IP addresses for each node in the cluster. Pay attention to the comments in this file.
Example initial configuration
scope: postgres-cluster # the same value on all nodes
name: postgres-server1 # different values on all nodes
namespace: /service/ # the same value on all nodes
restapi:
listen: postgres-server1.your_domain:8008 # address of the node where the file is
connect_address: postgres-server1.your_domain:8008 # address of the node where the file is
etcd:
hosts: postgres-server1.your_domain:2379,postgres-server2.your_domain:2379,postgres-server3.your_domain:2379 # list of all the nodes with installed etcd
bootstrap:
method: initdb
dcs:
ttl: 30
loop_wait: 10
retry_timeout: 10
maximum_lag_on_failover: 1048576
master_start_timeout: 300
synchronous_mode: false
synchronous_mode_strict: false
synchronous_node_count: 1
postgresql:
use_pg_rewind: true
use_slots: true
parameters:
max_connections: 2000
superuser_reserved_connections: 5
max_locks_per_transaction: 64
max_prepared_transactions: 0
huge_pages: try
shared_buffers: 512MB
work_mem: 128MB
maintenance_work_mem: 256MB
effective_cache_size: 4GB
checkpoint_timeout: 15min
checkpoint_completion_target: 0.9
wal_compression: on
min_wal_size: 2GB
max_wal_size: 4GB
wal_buffers: 32MB
default_statistics_target: 1000
seq_page_cost: 1
random_page_cost: 4
effective_io_concurrency: 2
synchronous_commit: on
autovacuum: on
autovacuum_max_workers: 5
autovacuum_vacuum_scale_factor: 0.01
autovacuum_analyze_scale_factor: 0.02
autovacuum_vacuum_cost_limit: 200
autovacuum_vacuum_cost_delay: 20
autovacuum_naptime: 1s
max_files_per_process: 4096
archive_mode: on
archive_timeout: 1800s
archive_command: cd .
wal_level: replica
wal_keep_segments: 130
max_wal_senders: 10
max_replication_slots: 10
hot_standby: on
hot_standby_feedback: True
wal_log_hints: on
shared_preload_libraries: pg_stat_statements,auto_explain
pg_stat_statements.max: 10000
pg_stat_statements.track: all
pg_stat_statements.save: off
auto_explain.log_min_duration: 10s
auto_explain.log_analyze: true
auto_explain.log_buffers: true
auto_explain.log_timing: false
auto_explain.log_triggers: true
auto_explain.log_verbose: true
auto_explain.log_nested_statements: true
standard_conforming_strings: true
track_io_timing: on
log_lock_waits: on
log_temp_files: 3
track_activities: on
track_counts: on
track_functions: all
log_checkpoints: on
logging_collector: on
log_truncate_on_rotation: on
log_rotation_age: 1d
log_rotation_size: 0
log_line_prefix: '%t [%p-%l] %r %q%u@%d '
log_filename: 'postgresql-%a.log'
log_directory: /var/log/postgresql
initdb: # List options to be passed on to initdb
- encoding: UTF8
- locale: en_US.UTF-8
- data-checksums
pg_hba: # must contain addresses of ALL the machines used in the cluster
- local all postgres peer
- local all all peer
- host all all 0.0.0.0/0 md5
- host replication replicator localhost trust
- host replication replicator 192.168.1.1/32 md5
- host replication replicator 192.168.1.2/32 md5
- host replication replicator 192.168.1.3/32 md5
postgresql:
listen: 192.168.1.1,127.0.0.1:5432 # address of the node where the file is
connect_address: 192.168.1.1:5432 # address of the node where the file is
use_unix_socket: true
data_dir: /var/lib/postgresql/13/main # data directory
bin_dir: /usr/lib/postgresql/13/bin
config_dir: /etc/postgresql/13/main
pgpass: /var/lib/postgresql/.pgpass_patroni
authentication:
replication:
username: replicator
password: ReplicatorPassword
superuser:
username: postgres
password: PostgresPassword
parameters:
unix_socket_directories: /var/run/postgresql
pg_hba: # must contain addresses of ALL the machines used in the cluster
- local all postgres peer
- local all all peer
- host all all 0.0.0.0/0 md5
- host replication replicator localhost trust
- host replication replicator 192.168.1.1/32 md5
- host replication replicator 192.168.1.2/32 md5
- host replication replicator 192.168.1.3/32 md5
remove_data_directory_on_rewind_failure: false
remove_data_directory_on_diverged_timelines: false
create_replica_methods:
- basebackup
basebackup:
max-rate: '100M'
checkpoint: 'fast'
watchdog:
mode: off # Allowed values: off, automatic, required
device: /dev/watchdog
safety_margin: 5
tags:
nofailover: false
noloadbalance: false
clonefrom: false
nosync: false
Example initial configuration to enable TLS/SSL support in Patroni
Place the example initial configuration with TLS/SSL in the file /etc/patroni/config.yml. Pay attention to the comments in this file:
scope: postgres-cluster # the same value on all nodes
name: postgres-server1 # different values on all nodes
namespace: /service/ # the same value on all nodes
restapi:
listen: postgres-server1.your_domain:8008 # address of the node where the file is
connect_address: postgres-server1.your_domain:8008 # address of the node where the file is
cafile: /path/to/pgCA.pem # path to the root CA file
certfile: /path/to/pg.crt # path to the server certificate file
keyfile: /path/to/pg.key # path to the private key file
verify_client: required # require clients to present a valid certificate
etcd:
protocol: https
cert: /path/to/public.crt # path to the server certificate file
key: /path/to/private.key # path to the private key file
cacert: /path/to/certCA.pem # path to the root CA file
hosts: postgres-server1.your_domain:2379,postgres-server2.your_domain:2379,postgres-server3.your_domain:2379 # list of all the nodes with installed etcd
ctl:
insecure: false # Allow connections to SSL sites without certs
certfile: /path/to/pg.crt # path to the server certificate file
keyfile: /path/to/pg.key # path to the private key file
cacert: /path/to/pgCA.pem # path to the root CA file
bootstrap:
method: initdb
dcs:
ttl: 30
loop_wait: 10
retry_timeout: 10
maximum_lag_on_failover: 1048576
master_start_timeout: 300
synchronous_mode: false
synchronous_mode_strict: false
synchronous_node_count: 1
postgresql:
use_pg_rewind: true
use_slots: true
parameters:
max_connections: 2000
superuser_reserved_connections: 5
max_locks_per_transaction: 64
max_prepared_transactions: 0
huge_pages: try
shared_buffers: 512MB
work_mem: 128MB
maintenance_work_mem: 256MB
effective_cache_size: 4GB
checkpoint_timeout: 15min
checkpoint_completion_target: 0.9
wal_compression: on
min_wal_size: 2GB
max_wal_size: 4GB
wal_buffers: 32MB
default_statistics_target: 1000
seq_page_cost: 1
random_page_cost: 4
effective_io_concurrency: 2
synchronous_commit: on
autovacuum: on
autovacuum_max_workers: 5
autovacuum_vacuum_scale_factor: 0.01
autovacuum_analyze_scale_factor: 0.02
autovacuum_vacuum_cost_limit: 200
autovacuum_vacuum_cost_delay: 20
autovacuum_naptime: 1s
max_files_per_process: 4096
archive_mode: on
archive_timeout: 1800s
archive_command: cd .
wal_level: replica
wal_keep_segments: 130
max_wal_senders: 10
max_replication_slots: 10
hot_standby: on
hot_standby_feedback: True
wal_log_hints: on
shared_preload_libraries: pg_stat_statements,auto_explain
pg_stat_statements.max: 10000
pg_stat_statements.track: all
pg_stat_statements.save: off
auto_explain.log_min_duration: 10s
auto_explain.log_analyze: true
auto_explain.log_buffers: true
auto_explain.log_timing: false
auto_explain.log_triggers: true
auto_explain.log_verbose: true
auto_explain.log_nested_statements: true
standard_conforming_strings: true
track_io_timing: on
log_lock_waits: on
log_temp_files: 3
track_activities: on
track_counts: on
track_functions: all
log_checkpoints: on
logging_collector: on
log_truncate_on_rotation: on
log_rotation_age: 1d
log_rotation_size: 0
log_line_prefix: '%t [%p-%l] %r %q%u@%d '
log_filename: 'postgresql-%a.log'
log_directory: /var/log/postgresql
ssl: on
ssl_ca_file: '/path/to/pgCA.pem'
ssl_cert_file: '/path/to/pg.crt'
ssl_key_file: '/path/to/pg.key'
initdb: # List options to be passed on to initdb
- encoding: UTF8
- locale: en_US.UTF-8
- data-checksums
pg_hba: # must contain addresses of ALL the machines used in the cluster
- local all postgres peer
- local all all peer
- hostssl all all 0.0.0.0/0 md5
- hostssl replication replicator localhost trust
- hostssl replication replicator 192.168.1.1/32 md5
- hostssl replication replicator 192.168.1.2/32 md5
- hostssl replication replicator 192.168.1.3/32 md5
postgresql:
listen: 192.168.1.1,127.0.0.1:5432 # address of the node where the file is
connect_address: 192.168.1.1:5432 # address of the node where the file is
use_unix_socket: true
data_dir: /var/lib/postgresql/13/main # data directory
bin_dir: /usr/lib/postgresql/13/bin
config_dir: /etc/postgresql/13/main
pgpass: /var/lib/postgresql/.pgpass_patroni
authentication:
replication:
username: replicator
password: ReplicatorPassword
sslcert: /path/to/pg.crt # path to the server certificate file
sslkey: /path/to/pg.key # path to the private key file
sslrootcert: /path/to/pgCA.pem # path to the root CA file
superuser:
username: postgres
password: PostgresPassword
sslcert: /path/to/pg.crt # path to the server certificate file
sslkey: /path/to/pg.key # path to the private key file
sslrootcert: /path/to/pgCA.pem # path to the root CA file
parameters:
unix_socket_directories: /var/run/postgresql
pg_hba: # must contain addresses of ALL the machines used in the cluster
- local all postgres peer
- local all all peer
- hostssl all all 0.0.0.0/0 md5
- hostssl replication replicator localhost trust
- hostssl replication replicator 192.168.1.1/32 md5
- hostssl replication replicator 192.168.1.2/32 md5
- hostssl replication replicator 192.168.1.3/32 md5
remove_data_directory_on_rewind_failure: false
remove_data_directory_on_diverged_timelines: false
create_replica_methods:
- basebackup
basebackup:
max-rate: '100M'
checkpoint: 'fast'
watchdog:
mode: off # Allowed values: off, automatic, required
device: /dev/watchdog
safety_margin: 5
tags:
nofailover: false
noloadbalance: false
clonefrom: false
nosync: false
Make the postgres user the owner of the private key file pg.key:
Attention
The private key file pg.key specified in the postgresql section must have permissions of u=rw (0600) or less if it is owned by the PostgreSQL database user, or u=rw,g=r (0640) or less if it is owned by root.
sudo chown postgres:postgres /path/to/pg.key
sudo chmod 600 /path/to/pg.key
For more details on TLS/SSL configuration in Patroni, refer to the official Patroni documentation.
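If you do not have certificates yet and only want to test the TLS/SSL configuration, a self-signed certificate and key can be generated with openssl; the subject name below is an example and should match your node's hostname:
sudo openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=postgres-server1.your_domain" -keyout /path/to/pg.key -out /path/to/pg.crt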
- Make the postgres user the owner of the configuration directory:
sudo chown postgres:postgres -R /etc/patroni
sudo chmod 700 /etc/patroni
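Before starting the service, the configuration file can be checked for errors. Patroni 2.0 and later include a built-in validator; skip this check if your installed version does not support the flag:
sudo patroni --validate-config /etc/patroni/config.yml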
Step 7: Prepare the PostgreSQL+Patroni cluster
- Start the Patroni service on the postgres-server1.your_domain node, and then on the nodes postgres-server2.your_domain and postgres-server3.your_domain:
sudo systemctl enable --now patroni.service
- Check the cluster status:
patronictl -c /etc/patroni/config.yml list
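Each node's Patroni REST API can also be queried to check its role: the /master endpoint returns HTTP 200 only on the current leader, and /replica only on a healthy replica (endpoint names can vary slightly between Patroni versions; use https and your certificates if TLS is enabled):
curl -s -o /dev/null -w '%{http_code}\n' http://postgres-server1.your_domain:8008/master
curl -s -o /dev/null -w '%{http_code}\n' http://postgres-server1.your_domain:8008/replica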
Step 8: Prepare PGBouncer (optional)
PGBouncer manages a connection pool for PostgreSQL, minimizing the overhead of establishing new connections. For information on installing and configuring PGBouncer, refer to Install PGBouncer.
Step 9: Configure HAProxy (postgres block)
Configure a high-availability HAProxy to handle requests to PostgreSQL and balance the load between the nodes of the PostgreSQL cluster. For more details, refer to Configure HAProxy for PostgreSQL.
Step 10: Connect to PostgreSQL
Connection string for connecting to the PostgreSQL cluster via HAProxy:
psqlUrl: postgresql://elma365:SecretPassword@haproxy-server.your_domain:5000/elma365?sslmode=prefer
Connection string for read-only access to the PostgreSQL cluster via HAProxy:
psqlUrl: postgresql://elma365:SecretPassword@haproxy-server.your_domain:5001/elma365?sslmode=prefer
Connection string to connect to the PostgreSQL cluster with replica support:
psqlUrl: postgresql://elma365:SecretPassword@postgres-server1.your_domain:5432,postgres-server2.your_domain:5432,postgres-server3.your_domain:5432/elma365?sslmode=prefer&target_session_attrs=primary
Connection string for read-only PostgreSQL cluster access with replica support:
roPsqlUrl: postgresql://elma365:SecretPassword@postgres-server1.your_domain:5432,postgres-server2.your_domain:5432,postgres-server3.your_domain:5432/elma365?sslmode=prefer&target_session_attrs=prefer-standby
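To verify a connection string, you can test it with psql from any machine that can reach the cluster, for example via HAProxy (substitute your actual password and host names):
psql "postgresql://elma365:SecretPassword@haproxy-server.your_domain:5000/elma365?sslmode=prefer" -c "SELECT version();"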
If external secrets storage, such as Vault, is configured, set the RO_POSTGRES_URL and ELMA365_POOL_POSTGRES_URL environment variables to connect to the read-only PostgreSQL replica.
Prepare a PostgreSQL database for restoration
If you need to restore a database from backup, first prepare the database for restoration. For more details, refer to the article PostgreSQL.