
Set Up a Highly Available PostgreSQL 13 Cluster on Ubuntu, Debian

This step-by-step guide will show you how to set up a highly available PostgreSQL cluster using Patroni, etcd, and HAProxy on Ubuntu 20.04. These instructions also apply to earlier Ubuntu releases such as 16.04 and 18.04.

 

Prerequisites

To follow along with this tutorial, you will need five (physical or virtual) machines running Ubuntu 20.04, each with a non-root user that has sudo privileges.
 
For this guide, we will use five virtual machines, described below, to set up the Patroni cluster.
 
HOSTNAME    IP ADDRESS      PURPOSE
patroni1    192.168.10.1    postgresql-13, patroni
patroni2    192.168.10.2    postgresql-13, patroni
patroni3    192.168.10.3    postgresql-13, patroni
etcd1       192.168.10.4    etcd
haproxy1    192.168.10.5    haproxy
 
 

 
When you have all the prerequisites in place, proceed with the steps below:
 

Install PostgreSQL

In this step, we will install PostgreSQL version 13 on three of the nodes (patroni1, patroni2, patroni3) one by one using the commands below:
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main"> /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt update
sudo apt -y install postgresql-13 postgresql-server-dev-13
PostgreSQL automatically runs as a service once the installation finishes. We need to stop the PostgreSQL service at this point so that we can continue with the Patroni installation and configuration.
sudo systemctl stop postgresql
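To confirm that the packages installed correctly and the service is indeed stopped, you can run a quick check (pg_lsclusters comes from the postgresql-common package that is installed alongside PostgreSQL):
psql --version
pg_lsclusters
The 13/main cluster should now show a status of down.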

Install Patroni

Patroni is an open-source Python package that manages PostgreSQL configuration. It can be configured to handle tasks like replication, backups, and restorations.
 
Patroni uses utilities that come installed with PostgreSQL, located in the /usr/lib/postgresql/<version>/bin directory by default on Ubuntu. 
 
To ensure that Patroni can find the PostgreSQL utilities, we will create symbolic links as shown below on all three nodes (patroni1, patroni2, patroni3).
sudo ln -s /usr/lib/postgresql/13/bin/* /usr/sbin/
Next, run the commands below to install Patroni's dependencies on all three nodes (patroni1, patroni2, patroni3).
sudo apt -y install python3 python3-pip
sudo -H pip install --upgrade testresources
sudo -H pip install --upgrade setuptools
sudo -H pip install psycopg2
Now you can install Patroni and its etcd client library with the commands below on all three nodes (patroni1, patroni2, patroni3).
sudo -H pip install patroni
sudo -H pip install python-etcd
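You can quickly verify that the installation worked before moving on:
patroni --version
This should print the installed Patroni version.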

Configure Patroni

Patroni can be configured using a YAML file which can be placed anywhere. For the sake of this guide, we will place this file under /etc/.

We will create the patroni.yml file on all three nodes (patroni1, patroni2, patroni3). On the first node (patroni1):
sudo nano /etc/patroni.yml
Add the following parameters to it:
scope: postgres
namespace: /db/
name: patroni1

restapi:
  listen: 192.168.10.1:8008
  connect_address: 192.168.10.1:8008

etcd:
  host: 192.168.10.4:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true

  initdb:
  - encoding: UTF8
  - data-checksums

  pg_hba:
  - host replication replicator 127.0.0.1/32 md5
  - host replication replicator 192.168.10.1/32 md5
  - host replication replicator 192.168.10.2/32 md5
  - host replication replicator 192.168.10.3/32 md5
  - host all all 0.0.0.0/0 md5

  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

postgresql:
  listen: 192.168.10.1:5432
  connect_address: 192.168.10.1:5432
  data_dir: /data/patroni
  pgpass: /tmp/pgpass
  authentication:
    replication:
      username: replicator
      password: reppass123
    superuser:
      username: postgres
      password: secret123
  parameters:
    unix_socket_directories: '.'

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
Make sure you replace the IP addresses, node name, and passwords with your own values.
 
Save and close the editor when you are finished.
 
Next, create the /etc/patroni.yml file on the second node (patroni2) with the following contents:
scope: postgres
namespace: /db/
name: patroni2

restapi:
  listen: 192.168.10.2:8008
  connect_address: 192.168.10.2:8008

etcd:
  host: 192.168.10.4:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true

  initdb:
  - encoding: UTF8
  - data-checksums

  pg_hba:
  - host replication replicator 127.0.0.1/32 md5
  - host replication replicator 192.168.10.1/32 md5
  - host replication replicator 192.168.10.2/32 md5
  - host replication replicator 192.168.10.3/32 md5
  - host all all 0.0.0.0/0 md5

  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

postgresql:
  listen: 192.168.10.2:5432
  connect_address: 192.168.10.2:5432
  data_dir: /data/patroni
  pgpass: /tmp/pgpass
  authentication:
    replication:
      username: replicator
      password: reppass123
    superuser:
      username: postgres
      password: secret123
  parameters:
    unix_socket_directories: '.'

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
Save and close the editor when you are finished.
 
Next, create the /etc/patroni.yml file on the third node (patroni3) with the following contents:
scope: postgres
namespace: /db/
name: patroni3

restapi:
  listen: 192.168.10.3:8008
  connect_address: 192.168.10.3:8008

etcd:
  host: 192.168.10.4:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true

  initdb:
  - encoding: UTF8
  - data-checksums

  pg_hba:
  - host replication replicator 127.0.0.1/32 md5
  - host replication replicator 192.168.10.1/32 md5
  - host replication replicator 192.168.10.2/32 md5
  - host replication replicator 192.168.10.3/32 md5
  - host all all 0.0.0.0/0 md5

  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

postgresql:
  listen: 192.168.10.3:5432
  connect_address: 192.168.10.3:5432
  data_dir: /data/patroni
  pgpass: /tmp/pgpass
  authentication:
    replication:
      username: replicator
      password: reppass123
    superuser:
      username: postgres
      password: secret123
  parameters:
    unix_socket_directories: '.'

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
Save and close the editor when you are finished.
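If you are running Patroni 2.0 or later, you can optionally sanity-check each file before going any further (older releases do not have the --validate-config flag):
patroni --validate-config /etc/patroni.yml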
 
Make note of the data_dir value in the patroni.yml file; the postgres user needs write access to this directory.

We will create the /data/patroni directory with the appropriate ownership and permissions on all three nodes (patroni1, patroni2, patroni3) using the commands below:
sudo mkdir -p /data/patroni
sudo chown postgres:postgres /data/patroni
sudo chmod 700 /data/patroni
Next, we will create a systemd unit file that will allow us to start, stop, and monitor Patroni.

We will create the file /etc/systemd/system/patroni.service on all three nodes (patroni1, patroni2, patroni3):
sudo nano /etc/systemd/system/patroni.service
Add the following to it:
[Unit]
Description=High availability PostgreSQL Cluster
After=syslog.target network.target

[Service]
Type=simple
User=postgres
Group=postgres
ExecStart=/usr/local/bin/patroni /etc/patroni.yml
KillMode=process
TimeoutSec=30
Restart=no

[Install]
WantedBy=multi-user.target
Save and close the editor when you are finished.
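Since we just created a new unit file, reload systemd so it picks the file up, and optionally enable Patroni so it starts automatically at boot:
sudo systemctl daemon-reload
sudo systemctl enable patroni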


Install etcd

Etcd is a fault-tolerant, distributed key-value store that is used to store the state of the PostgreSQL cluster. All of the Patroni-managed database nodes use etcd to keep the cluster state consistent and to elect a leader.

For the sake of this guide, we will use a single-server etcd cluster. However, in production, it may be best to use a larger etcd cluster so that if one etcd node fails, it doesn’t affect your PostgreSQL cluster.

Run the command below to install etcd on the etcd node (etcd1 in our case):
sudo apt -y install etcd

Configure etcd

We will edit the /etc/default/etcd file to make the required changes, keeping a backup copy of the original first:
sudo cp -p /etc/default/etcd /etc/default/etcd.orig
sudo nano /etc/default/etcd
Remove everything from the file, and add the following parameters to it:
ETCD_LISTEN_PEER_URLS="http://192.168.10.4:2380"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.10.4:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.4:2380"
ETCD_INITIAL_CLUSTER="default=http://192.168.10.4:2380,"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.4:2379"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Make sure you replace the IP addresses with your own values.

Save and close the editor when you are finished.

Restart the etcd service for the changes to take effect:
sudo systemctl restart etcd
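Once restarted, you can check etcd's health from the etcd node itself (a quick check; Ubuntu's etcd package defaults to the older v2 etcdctl API, so both variants are shown below, using the endpoint from our setup):
etcdctl cluster-health
ETCDCTL_API=3 etcdctl --endpoints=http://192.168.10.4:2379 endpoint health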
Both commands should report the cluster member as healthy, confirming that etcd is up and running.

Start Patroni

Log in to your first Patroni node (patroni1 in our case), and execute the commands below to activate watchdog support:
sudo modprobe softdog
sudo chown postgres /dev/watchdog
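Note that neither the module load nor the ownership of /dev/watchdog survives a reboot. If you want watchdog support to persist, one common approach (a sketch using systemd's modules-load.d and a udev rule) is:
echo softdog | sudo tee /etc/modules-load.d/softdog.conf
echo 'KERNEL=="watchdog*", OWNER="postgres"' | sudo tee /etc/udev/rules.d/99-watchdog.rules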
Next, start Patroni with the command below:
sudo systemctl start patroni
Check the Patroni status with the command below:
sudo systemctl status patroni
The status output should show that Patroni is active and running on our first node (patroni1).
Next, log in to your other two nodes (patroni2 and patroni3 in our case), and start the Patroni service with the commands below:
sudo modprobe softdog
sudo chown postgres /dev/watchdog
sudo systemctl start patroni
Wait 10-20 seconds, then check the Patroni status on the second node (patroni2) and on the third node (patroni3); both should report the service as active and running.
If you look carefully at the Patroni status on all three nodes, you will see that patroni1 is acting as the leader of the cluster, while the other two nodes (patroni2 and patroni3 in our case) are acting as replicas.
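You can also inspect the cluster state from the command line at any time with patronictl, which ships with Patroni (the cluster name postgres below comes from the scope setting in patroni.yml):
patronictl -c /etc/patroni.yml list
The output lists each member along with its host, role (Leader or Replica), and state.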

Install HAProxy

When developing an application that uses a database, it can be cumbersome to keep track of the database endpoints if they keep changing. HAProxy simplifies this by giving a single endpoint to which you can connect the application.

HAProxy forwards the connection to whichever node is currently the master. It does this using a REST endpoint that Patroni provides. Patroni ensures that, at any given time, only the leader node will appear as online, forcing HAProxy to connect to the correct node.
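You can observe this behavior with curl against each node's REST endpoint (a quick check using the IP addresses from our table; Patroni answers GET / with HTTP 200 only on the leader, while replicas answer 503):
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.10.1:8008
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.10.2:8008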

Log in to your haproxy node (haproxy1 in our case), and execute the command below to install HAProxy:
sudo apt -y install haproxy

Configure HAProxy

With the Patroni cluster set up, you need a method to connect to the leader node regardless of which node in the cluster is the leader. This is where HAProxy steps in. All PostgreSQL clients (your applications, psql, etc.) will connect to HAProxy, which will make sure you connect to the leader node in the cluster.

First, make a copy of the original haproxy.cfg configuration file, so that if anything goes wrong you can revert to the original at any time:
sudo cp -p /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg-orig
Next, edit the haproxy.cfg file like below:
sudo nano /etc/haproxy/haproxy.cfg
Remove everything from the file, and add the following parameters to it:
global
    maxconn 100

defaults
    log global
    mode tcp
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /

listen postgres
    bind *:5000
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server patroni1 192.168.10.1:5432 maxconn 100 check port 8008
    server patroni2 192.168.10.2:5432 maxconn 100 check port 8008
    server patroni3 192.168.10.3:5432 maxconn 100 check port 8008
Make sure you replace the server names and IP addresses with your own values.

Save and close the editor when you are finished.
 
Check for syntax errors in the haproxy.cfg file with the command below:
sudo /usr/sbin/haproxy -c -V -f /etc/haproxy/haproxy.cfg 
Next, restart HAProxy for the changes to take effect:
sudo systemctl restart haproxy
Check the HAProxy status with the command below:
sudo systemctl status haproxy
The status output should show that HAProxy is up and running.
You can also access the haproxy node on port 7000 (http://192.168.10.5:7000 in our case) in your favorite web browser to monitor the cluster status on the HAProxy stats dashboard.
If you kill the leader node with sudo systemctl stop patroni, the dashboard will update to reflect the failover.
The patroni1 row turns red and the patroni2 row is highlighted in green. This indicates that patroni2 is now acting as the leader node in the cluster.

Note: In this case, it just so happens that the second node in the cluster was promoted to leader. This will not always be the case; it is equally likely that the third node may be promoted.
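If you would rather control which node takes over, Patroni also supports a controlled handover (patronictl prompts interactively for the current leader and the candidate):
patronictl -c /etc/patroni.yml switchover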

Now let's start Patroni on our first node again with sudo systemctl start patroni; it will rejoin the cluster as a replica and sync up with the leader, which you can watch happen on the dashboard.

Test PostgreSQL Cluster

We will make a connection request from a client machine using psql to the PostgreSQL database through the haproxy node (192.168.10.5) on port 5000.
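For example, using the superuser credentials defined in patroni.yml (enter secret123, our superuser password, when prompted):
psql -h 192.168.10.5 -p 5000 -U postgres
Once connected, running SELECT inet_server_addr(); shows which backend node actually served the connection.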
 

 
A successful connection confirms that the client reached the database leader through haproxy.
 

Wrapping up

You now have a robust, highly available PostgreSQL cluster ready to use. While the setup in this tutorial should go far in making your deployment highly available, here are a few more steps you can take to improve it further:

  • Add more nodes to the etcd cluster to improve availability.
  • Use PgBouncer to pool connections.
  • Add more nodes to haproxy and configure IP failover to create a highly available haproxy cluster.
