Build a 3-node Pgpool-II + PostgreSQL high-availability cluster (one primary, two standbys) in half an hour. The cluster uses the most popular and officially recommended replication mode, streaming replication: backend_clustering_mode = 'streaming_replication'.
1. Software Preparation

Software   | Version                              | Checked with
-----------|--------------------------------------|--------------------------
Pgpool-II  | 4.2.7 (chichiriboshi)                | $ pgpool -v
PostgreSQL | 13.3                                 | $ psql -V
OS         | CentOS Linux release 7.2.1511 (Core) | $ cat /etc/redhat-release
1.1 Hosts and IP Addresses

Host       | IP
-----------|--------------------------
pg13-node1 | 192.168.40.174 (primary)
pg13-node2 | 192.168.40.175 (standby)
pg13-node3 | 192.168.40.176 (standby)
-          | 192.168.40.180 (VIP)
1.2 PostgreSQL Version and Configuration

Item                | Value             | Detail
--------------------|-------------------|---------------------------
PostgreSQL version  | 13.3              | -
port                | 5432              | -
$PGDATA             | /home/pg13/pgdata | -
Archive mode        | on                | /var/lib/pgsql/archivedir
Replication slots   | Enabled           | -
Start automatically | Enabled           | -
2. Install PostgreSQL and Pgpool-II

Install PostgreSQL ([all servers] means the step is run on every host):

[all servers]# rpm -ivh postgresql13-libs-13.3-1PGDG.rhel7.x86_64.rpm postgresql13-13.3-1PGDG.rhel7.x86_64.rpm postgresql13-server-13.3-1PGDG.rhel7.x86_64.rpm postgresql13-contrib-13.3-1PGDG.rhel7.x86_64.rpm

Install Pgpool-II:

[all servers]# rpm -ivh pgpool-II-pg13-4.2.7-1pgdg.rhel7.x86_64.rpm libmemcached-1.0.18-1.el7.art.x86_64.rpm && rpm -ivh pgpool-II-pg13-extensions-4.2.7-1pgdg.rhel7.x86_64.rpm

Set the password of the OS user postgres (set to postgres on every host):

[all servers]# passwd postgres

Create the data directory:

[all servers]# mkdir -p /home/pg13/pgdata && chown -R postgres:postgres /home/pg13 && chmod 700 /home/pg13/pgdata
3. Initialize and Configure the Primary

Initialize the primary:

[node1]# su - postgres
[node1]$ /usr/pgsql-13/bin/initdb -D /home/pg13/pgdata

Start the primary:

[node1]$ /usr/pgsql-13/bin/pg_ctl -D /home/pg13/pgdata -l /tmp/logfile start

For security reasons, create a repl user dedicated to replication and a pgpool user for streaming-replication delay checks and Pgpool-II health checks:

[node1]# psql -U postgres -p 5432
postgres=# SET password_encryption = 'scram-sha-256';
postgres=# CREATE ROLE pgpool WITH LOGIN;
postgres=# CREATE ROLE repl WITH REPLICATION LOGIN;
postgres=# \password pgpool
postgres=# \password repl
postgres=# \password postgres

Set the password of each of the three users (pgpool, repl, postgres) to the same string as its user name.

Create the extension:

[node1]# su - postgres
[node1]$ psql template1 -c "CREATE EXTENSION pgpool_recovery"

Grant pgpool membership in the pg_monitor role:

[node1]# su - postgres
[node1]$ psql -c "GRANT pg_monitor TO pgpool;"
4. Primary Configuration

Assuming that all Pgpool-II servers and PostgreSQL servers are in the same subnet, edit pg_hba.conf to enable the scram-sha-256 authentication method.

[node1]$ vi /home/pg13/pgdata/pg_hba.conf

Add these two lines at the end of the file:

host    all            all    samenet    scram-sha-256
host    replication    all    samenet    scram-sha-256
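Optionally, the rules can be checked from SQL right away: the pg_hba_file_rules view (available since PostgreSQL 10) re-reads pg_hba.conf on each access, so it also reports syntax errors before any reload or restart.

[node1]$ psql -U postgres -c "SELECT line_number, type, database, user_name, address, auth_method, error FROM pg_hba_file_rules;"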
Create the directory /var/lib/pgsql/archivedir to store WAL segments:

[all servers]# su - postgres
[all servers]$ mkdir /var/lib/pgsql/archivedir

Set up streaming replication on the primary: empty the primary's configuration file and write the following into it.

[node1]$ cat /dev/null > /home/pg13/pgdata/postgresql.conf
[node1]$ vi /home/pg13/pgdata/postgresql.conf

max_wal_size = 1GB
min_wal_size = 80MB
log_timezone = 'Asia/Shanghai'
datestyle = 'iso, mdy'
timezone = 'Asia/Shanghai'
default_text_search_config = 'pg_catalog.english'
listen_addresses = '*'
archive_mode = on
archive_command = 'cp "%p" "/var/lib/pgsql/archivedir/%f"'
max_wal_senders = 10
max_replication_slots = 10
wal_level = replica
hot_standby = on
wal_log_hints = on

Restart the PostgreSQL instance:

[node1]$ /usr/pgsql-13/bin/pg_ctl -D /home/pg13/pgdata -l /tmp/logfile restart
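Optionally, a quick sanity check after the restart confirms that the replication-related settings took effect:

[node1]$ psql -U postgres -c "SELECT name, setting FROM pg_settings WHERE name IN ('wal_level', 'archive_mode', 'max_wal_senders', 'hot_standby', 'wal_log_hints');"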
5. Pgpool-II Configuration

Set up passwordless SSH:

[all servers]# su - root
[all servers]# cd ~/.ssh      (create the directory first with mkdir ~/.ssh if it does not exist)
[all servers]# ssh-keygen -t rsa -f id_rsa_pgpool      (press Enter at the prompts)
[all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@pg13-node1
[all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@pg13-node2
[all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@pg13-node3

[all servers]# su - postgres
[all servers]$ cd ~/.ssh
[all servers]$ ssh-keygen -t rsa -f id_rsa_pgpool      (press Enter at the prompts)
[all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@pg13-node1
[all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@pg13-node2
[all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@pg13-node3
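After the keys have been copied, confirm that every node can log in to the others as postgres without a password (if any of these still prompts, fix the SSH setup before continuing):

[all servers]$ ssh -i ~/.ssh/id_rsa_pgpool postgres@pg13-node1 hostname
[all servers]$ ssh -i ~/.ssh/id_rsa_pgpool postgres@pg13-node2 hostname
[all servers]$ ssh -i ~/.ssh/id_rsa_pgpool postgres@pg13-node3 hostname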
To let the repl user perform streaming replication and online recovery without supplying a password, and to let postgres run pg_rewind, create a .pgpass file in the postgres user's home directory on each PostgreSQL server and change its permissions to 600.

[all servers]# su - postgres
[all servers]$ vi /var/lib/pgsql/.pgpass
pg13-node1:5432:replication:repl:repl
pg13-node2:5432:replication:repl:repl
pg13-node3:5432:replication:repl:repl
pg13-node1:5432:postgres:postgres:postgres
pg13-node2:5432:postgres:postgres:postgres
pg13-node3:5432:postgres:postgres:postgres
[all servers]$ chmod 600 /var/lib/pgsql/.pgpass
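Optionally verify that .pgpass is picked up: the following should connect to the primary without prompting for a password.

[all servers]$ psql -h pg13-node1 -p 5432 -U postgres -c "SELECT 1;"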
Stop the firewall:

[all servers]# systemctl disable firewalld && systemctl stop firewalld

Enable Pgpool-II at boot:

[all servers]# systemctl enable pgpool.service

Create pgpool_node_id.

Node 1:
[pg13-node1]# vi /etc/pgpool-II/pgpool_node_id
0

Node 2:
[pg13-node2]# vi /etc/pgpool-II/pgpool_node_id
1

Node 3:
[pg13-node3]# vi /etc/pgpool-II/pgpool_node_id
2
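Each node's id must match its index in the backend*/hostname*/heartbeat_hostname* lists configured in the next section; a quick check on every host:

[all servers]# cat /etc/pgpool-II/pgpool_node_id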
6. Update pgpool.conf

Configuration covered: listen address/port, streaming-replication check, health check, backend settings, failover, Pgpool-II online recovery, client authentication, and logging. Run on node1:

[root@pg13-node1 .ssh]# cat /dev/null > /etc/pgpool-II/pgpool.conf
[root@pg13-node1 .ssh]# vi /etc/pgpool-II/pgpool.conf
backend_clustering_mode = 'streaming_replication'
listen_addresses = '*'
port = 9999
socket_dir = '/var/run/postgresql'
reserved_connections = 0
pcp_listen_addresses = '*'
pcp_port = 9898
pcp_socket_dir = '/var/run/postgresql'
listen_backlog_multiplier = 2
serialize_accept = off
backend_hostname0 = 'pg13-node1'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/home/pg13/pgdata'    # data directory of backend 0
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_application_name0 = 'pg13-node1'
backend_hostname1 = 'pg13-node2'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/home/pg13/pgdata'    # data directory of backend 1
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_application_name1 = 'pg13-node2'
backend_hostname2 = 'pg13-node3'
backend_port2 = 5432
backend_weight2 = 1
backend_data_directory2 = '/home/pg13/pgdata'    # data directory of backend 2
backend_flag2 = 'ALLOW_TO_FAILOVER'
backend_application_name2 = 'pg13-node3'
enable_pool_hba = on
pool_passwd = 'pool_passwd'
authentication_timeout = 1min
allow_clear_text_frontend_auth = off
ssl = off
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
ssl_prefer_server_ciphers = off
ssl_ecdh_curve = 'prime256v1'
ssl_dh_params_file = ''
num_init_children = 32
max_pool = 4
child_life_time = 5min
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
log_destination = 'stderr'
logging_collector = on
log_directory = '/var/log/pgpool_log'
log_filename = 'pgpool-%Y-%m-%d_%H%M%S.log'
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 10MB
log_connections = off
log_disconnections = off
log_hostname = off
log_statement = off
log_per_node_statement = off
log_client_messages = off
log_standby_delay = 'if_over_threshold'
syslog_facility = 'LOCAL0'
syslog_ident = 'pgpool'
pid_file_name = '/var/run/pgpool/pgpool.pid'
logdir = '/tmp'
connection_cache = on
reset_query_list = 'ABORT; DISCARD ALL'
replicate_select = off
insert_lock = off
lobj_lock_table = ''
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off
load_balance_mode = on
ignore_leading_white_space = on
read_only_function_list = ''
write_function_list = ''
primary_routing_query_pattern_list = ''
database_redirect_preference_list = ''
app_name_redirect_preference_list = ''
allow_sql_comments = off
disable_load_balance_on_write = 'transaction'
dml_adaptive_object_relationship_list = ''
statement_level_load_balance = off
sr_check_period = 10
sr_check_user = 'pgpool'
sr_check_password = ''
sr_check_database = 'postgres'
delay_threshold = 10000000
follow_primary_command = '/etc/pgpool-II/follow_primary.sh %d %h %p %D %m %H %M %P %r %R'
health_check_period = 5
health_check_timeout = 30
health_check_user = 'pgpool'
health_check_password = ''
health_check_database = ''
health_check_max_retries = 3
health_check_retry_delay = 1
connect_timeout = 10000
failover_command = '/etc/pgpool-II/failover.sh %d %h %p %D %m %H %M %P %r %R %N %S'
failback_command = ''
failover_on_backend_error = on
detach_false_primary = off
search_primary_node_timeout = 5min
recovery_user = 'postgres'
recovery_password = ''
recovery_1st_stage_command = 'recovery_1st_stage'
recovery_2nd_stage_command = ''
recovery_timeout = 90
client_idle_limit_in_recovery = 0
auto_failback = off
auto_failback_interval = 1min
use_watchdog = on
trusted_servers = ''
ping_path = '/bin'
hostname0 = 'pg13-node1'
wd_port0 = 9000
pgpool_port0 = 9999
hostname1 = 'pg13-node2'
wd_port1 = 9000
pgpool_port1 = 9999
hostname2 = 'pg13-node3'
wd_port2 = 9000
pgpool_port2 = 9999
wd_priority = 1
wd_authkey = ''
wd_ipc_socket_dir = '/var/run/postgresql'
delegate_IP = '192.168.40.180'
if_cmd_path = '/sbin'
if_up_cmd = '/usr/bin/sudo /sbin/ip addr add $_IP_$/24 dev eth0 label eth0:0'
if_down_cmd = '/usr/bin/sudo /sbin/ip addr del $_IP_$/24 dev eth0'
arping_path = '/usr/sbin'
arping_cmd = '/usr/bin/sudo /usr/sbin/arping -U $_IP_$ -w 1 -I eth0'
clear_memqcache_on_escalation = on
wd_escalation_command = '/etc/pgpool-II/escalation.sh'
wd_de_escalation_command = ''
failover_when_quorum_exists = on
failover_require_consensus = on
allow_multiple_failover_requests_from_node = off
enable_consensus_with_half_votes = off
wd_monitoring_interfaces_list = ''
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
heartbeat_hostname0 = 'pg13-node1'
heartbeat_port0 = 9694
heartbeat_device0 = 'eth0'    # NIC device name (e.g. 'eth0')
heartbeat_hostname1 = 'pg13-node2'
heartbeat_port1 = 9694
heartbeat_device1 = 'eth0'
heartbeat_hostname2 = 'pg13-node3'
heartbeat_port2 = 9694
heartbeat_device2 = 'eth0'
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
wd_life_point = 3
wd_lifecheck_query = 'SELECT 1'
wd_lifecheck_dbname = 'template1'
wd_lifecheck_user = 'nobody'
wd_lifecheck_password = ''
relcache_expire = 0
relcache_size = 256
check_temp_table = catalog
check_unlogged_table = on
enable_shared_relcache = on
relcache_query_target = primary
memory_cache_enabled = off
memqcache_method = 'shmem'
memqcache_memcached_host = 'localhost'
memqcache_memcached_port = 11211
memqcache_total_size = 64MB
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 400kB
memqcache_cache_block_size = 1MB
memqcache_oiddir = '/var/log/pgpool/oiddir'
cache_safe_memqcache_table_list = ''
cache_unsafe_memqcache_table_list = ''
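Before moving on, it is worth double-checking that the per-node entries are consistent (the numeric suffixes 0/1/2 must correspond to the pgpool_node_id created earlier on each host):

[pg13-node1]# grep -E '^(backend_hostname|backend_application_name|hostname|heartbeat_hostname)[0-9]' /etc/pgpool-II/pgpool.conf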
7. Pgpool-II Script Configuration

[all servers]# cp -p /etc/pgpool-II/failover.sh{.sample,} && cp -p /etc/pgpool-II/follow_primary.sh{.sample,} && chown postgres:postgres /etc/pgpool-II/{failover.sh,follow_primary.sh}

The PCP commands invoked by follow_primary_command require authentication, so a user name and md5-encrypted password must be registered in pcp.conf in the form "username:encrypted password". Because follow_primary.sh has to run PCP commands without entering a password, we also create .pcppass:

[all servers]# echo 'pgpool:'`pg_md5 pgpool` >> /etc/pgpool-II/pcp.conf
[all servers]# su - postgres
[all servers]$ echo 'localhost:9898:pgpool:pgpool' > ~/.pcppass
[all servers]$ chmod 600 ~/.pcppass
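Once Pgpool-II has been started (later in this walkthrough), the .pcppass entry can be verified by running any PCP command with -w, which forbids a password prompt, for example:

[all servers]$ pcp_node_count -h localhost -p 9898 -U pgpool -w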
Copy recovery_1st_stage and pgpool_remote_start to the data directory of the primary server (pg13-node1):

[node1]# cp -p /etc/pgpool-II/recovery_1st_stage.sample /home/pg13/pgdata/recovery_1st_stage && cp -p /etc/pgpool-II/pgpool_remote_start.sample /home/pg13/pgdata/pgpool_remote_start && chown postgres:postgres /home/pg13/pgdata/{recovery_1st_stage,pgpool_remote_start}

Client authentication configuration:

[all servers]# su - postgres
[all servers]$ vi /etc/pgpool-II/pool_hba.conf      (add these two lines at the end of the file)
host    all    pgpool      0.0.0.0/0    scram-sha-256
host    all    postgres    0.0.0.0/0    scram-sha-256
The default password file used for authentication is pool_passwd.

[all servers]# su - postgres
[all servers]$ echo 'some string' > ~/.pgpoolkey
[all servers]$ chmod 600 ~/.pgpoolkey

Run pg_enc -m -k /path/to/.pgpoolkey -u username -p to register a user name and its AES-encrypted password in pool_passwd. If pool_passwd does not exist yet, it is created in the same directory as pgpool.conf.

[all servers]# su - postgres
[all servers]$ pg_enc -m -k ~/.pgpoolkey -u pgpool -p
db password: [pgpool user's password]
[all servers]$ pg_enc -m -k ~/.pgpoolkey -u postgres -p
db password: [postgres user's password]
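Optionally inspect the result; each registered entry in pool_passwd should show an AES-encrypted password, never the clear text:

[all servers]$ cat /etc/pgpool-II/pool_passwd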
Watchdog configuration: check on all three nodes that the ip and arping commands can be run normally via sudo. For example:

[all servers]$ /usr/bin/sudo /sbin/ip
[all servers]$ /usr/bin/sudo /usr/sbin/arping

escalation.sh brings down the virtual IP on the other Pgpool-II nodes:

[all servers]# cp -p /etc/pgpool-II/escalation.sh{.sample,} && chown postgres:postgres /etc/pgpool-II/escalation.sh
[all servers]# vi /etc/pgpool-II/escalation.sh
...
PGPOOLS=(pg13-node1 pg13-node2 pg13-node3)
VIP=192.168.40.180
DEVICE=eth0
...
Adjust sudo on all three machines:

[all servers]# chmod +w /etc/sudoers && vi /etc/sudoers

Comment out:
Defaults    requiretty
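The watchdog runs if_up_cmd, if_down_cmd and arping_cmd as the postgres user through sudo, so it is worth confirming they work non-interactively. If the check below still asks for a password, a NOPASSWD sudoers rule for the postgres user would be needed; that rule is an assumption about the local sudo policy and is not shown in this walkthrough.

[all servers]# su - postgres -c '/usr/bin/sudo -n /sbin/ip addr show dev eth0'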
Configure Pgpool-II logging: create the log directory on all servers.

[all servers]# mkdir /var/log/pgpool_log/ && chown postgres:postgres /var/log/pgpool_log/

Copy pgpool.conf: the configuration of pgpool.conf on node1 is now complete, so copy it to the other Pgpool-II nodes (node2 and node3).

[pg13-node1]# scp -p /etc/pgpool-II/pgpool.conf root@pg13-node2:/etc/pgpool-II/pgpool.conf
[pg13-node1]# scp -p /etc/pgpool-II/pgpool.conf root@pg13-node3:/etc/pgpool-II/pgpool.conf
/etc/sysconfig/pgpool configuration: to make Pgpool-II ignore the pgpool_status file at startup, add "-D" to the OPTS startup options in /etc/sysconfig/pgpool.

[all servers]# vi /etc/sysconfig/pgpool
...
OPTS=" -D -n"
8. Configure the PostgreSQL Standbys

We set up the PostgreSQL standby servers using the Pgpool-II online recovery feature. Make sure the recovery_1st_stage and pgpool_remote_start scripts used by the pcp_recovery_node command are present in the database cluster directory of the PostgreSQL primary (node1).

[root@pg13-node1 home]# pcp_recovery_node -h 192.168.40.180 -p 9898 -U pgpool -n 1 -d
Password:
DEBUG: recv: tos="m", len=8
DEBUG: recv: tos="r", len=21
DEBUG: send: tos="D", len=6
DEBUG: recv: tos="c", len=20
pcp_recovery_node -- Command Successful
DEBUG: send: tos="X", len=4
[root@pg13-node1 home]#

[root@pg13-node1 home]# pcp_recovery_node -h 192.168.40.180 -p 9898 -U pgpool -n 2 -d
Password:
DEBUG: recv: tos="m", len=8
DEBUG: recv: tos="r", len=21
DEBUG: send: tos="D", len=6
DEBUG: recv: tos="c", len=20
pcp_recovery_node -- Command Successful
DEBUG: send: tos="X", len=4
[root@pg13-node1 home]#
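Optionally, pcp_node_info can confirm that the two recovered backends are attached and reported as up:

[root@pg13-node1 home]# pcp_node_info -h 192.168.40.180 -p 9898 -U pgpool -n 1
[root@pg13-node1 home]# pcp_node_info -h 192.168.40.180 -p 9898 -U pgpool -n 2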
9. Verify Streaming Replication

Run on the primary:

postgres=# create table tbl1(id int, text varchar(10));
postgres=# insert into tbl1 values (1, 'helloworld');
postgres=# select * from tbl1;
 id |    text
----+------------
  1 | helloworld
(1 row)

Now check on a standby:

postgres=# select * from tbl1;
 id |    text
----+------------
  1 | helloworld
(1 row)
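On the primary, pg_stat_replication gives an additional server-side view of both standbys and their sync state:

postgres=# SELECT application_name, client_addr, state, sync_state FROM pg_stat_replication;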
10. Watchdog Active/Standby Switchover

Check the watchdog status with pcp_watchdog_info. The Pgpool-II server that is started first runs as LEADER.

[root@pg13-node1 home]# pcp_watchdog_info -h 192.168.40.180 -p 9898 -U pgpool
Password:
3 YES pg13-node1:9999 Linux pg13-node1 pg13-node1

pg13-node1:9999 Linux pg13-node1 pg13-node1 9999 9000 4 LEADER
pg13-node2:9999 Linux pg13-node2 pg13-node2 9999 9000 7 STANDBY
pg13-node3:9999 Linux pg13-node3 pg13-node3 9999 9000 7 STANDBY

Stop the active server pg13-node1, and node2 or node3 will be promoted to the active server. To stop pg13-node1 we can either stop the Pgpool-II service or shut down the whole machine; here we stop the Pgpool-II service.

[pg13-node1]# systemctl stop pgpool.service
[root@pg13-node1 home]# pcp_watchdog_info -p 9898 -h 192.168.40.180 -U pgpool
Password:
3 YES pg13-node3:9999 Linux pg13-node3 pg13-node3

pg13-node3:9999 Linux pg13-node3 pg13-node3 9999 9000 4 LEADER      # node3 promoted to LEADER
pg13-node1:9999 Linux pg13-node1 pg13-node1 9999 9000 10 SHUTDOWN   # node1 is stopped
pg13-node2:9999 Linux pg13-node2 pg13-node2 9999 9000 7 STANDBY     # node2 runs as STANDBY

Start the Pgpool-II we stopped (pg13-node1) and verify that pg13-node1 now runs as a standby.

[root@pg13-node1 home]# systemctl start pgpool.service
[root@pg13-node1 home]# pcp_watchdog_info -p 9898 -h 192.168.40.180 -U pgpool
Password:
3 YES pg13-node3:9999 Linux pg13-node3 pg13-node3

pg13-node3:9999 Linux pg13-node3 pg13-node3 9999 9000 4 LEADER
pg13-node1:9999 Linux pg13-node1 pg13-node1 9999 9000 7 STANDBY
pg13-node2:9999 Linux pg13-node2 pg13-node2 9999 9000 7 STANDBY
[root@pg13-node1 home]#
11. Verify Failover

First, use psql to connect to PostgreSQL through the virtual IP and check the backend information.

[root@pg13-node1 home]# PGPASSWORD=pgpool psql -h 192.168.40.180 -p 9999 -U pgpool postgres -c "show pool_nodes"
 node_id |  hostname  | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | pg13-node1 | 5432 | up     | 0.333333  | primary | 0          | false             | 0                 |                   |                        | 2022-02-16 21:55:13
 1       | pg13-node2 | 5432 | up     | 0.333333  | standby | 0          | true              | 0                 | streaming         | sync                   | 2022-02-16 21:55:13
 2       | pg13-node3 | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 | streaming         | potential              | 2022-02-16 21:55:13
(3 rows)

Next, stop the primary PostgreSQL server on node1 and verify that automatic failover occurs.

[postgres@pg13-node1 ~]$ /usr/pgsql-13/bin/pg_ctl -D /home/pg13/pgdata -m immediate stop

After PostgreSQL is stopped on node1, failover takes place and PostgreSQL on node2 becomes the new primary.

[postgres@pg13-node3 ~]$ PGPASSWORD=pgpool psql -h 192.168.40.180 -p 9999 -U pgpool postgres -c "show pool_nodes"
 node_id |  hostname  | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | pg13-node1 | 5432 | down   | 0.333333  | standby | 0          | false             | 0                 |                   |                        | 2022-02-17 17:39:15
 1       | pg13-node2 | 5432 | up     | 0.333333  | primary | 0          | true              | 0                 |                   |                        | 2022-02-17 17:39:15
 2       | pg13-node3 | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 | streaming         | async                  | 2022-02-17 17:39:30
(3 rows)

node3 is now running as a standby of the new primary, node2.
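As an extra check directly against the backends, pg_is_in_recovery() should return false on the new primary (node2) and true on the standby (node3):

[postgres@pg13-node3 ~]$ psql -h pg13-node2 -U postgres -c "SELECT pg_is_in_recovery();"
[postgres@pg13-node3 ~]$ psql -h pg13-node3 -U postgres -c "SELECT pg_is_in_recovery();"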
12. Online Recovery

Here we use the Pgpool-II online recovery feature to bring node1 (the old primary) back as a standby. First check that the recovery scripts are in place on the new primary:

[root@pg13-node2 ~]# ls -ltr /home/pg13/pgdata/recovery_1st_stage /home/pg13/pgdata/pgpool_remote_start
-rwxr-xr-x. 1 postgres postgres 3178 Feb 16 21:17 /home/pg13/pgdata/recovery_1st_stage
-rwxr-xr-x. 1 postgres postgres  979 Feb 16 21:17 /home/pg13/pgdata/pgpool_remote_start

Run on any node:

[root@pg13-node2 pgdata]# pcp_recovery_node -h 192.168.40.180 -p 9898 -U pgpool -n 0
Password:
pcp_recovery_node -- Command Successful
[root@pg13-node2 pgdata]#

Then verify that node1 has started as a standby.

[postgres@pg13-node1 log]$ PGPASSWORD=pgpool psql -h 192.168.40.180 -p 9999 -U pgpool postgres -c "show pool_nodes"
 node_id |  hostname  | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | pg13-node1 | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 | streaming         | async                  | 2022-02-17 00:03:31
 1       | pg13-node2 | 5432 | up     | 0.333333  | primary | 0          | false             | 0                 |                   |                        | 2022-02-16 23:38:50
 2       | pg13-node3 | 5432 | up     | 0.333333  | standby | 0          | true              | 0                 | streaming         | async                  | 2022-02-17 00:03:31
(3 rows)
Reference
https://www.pgpool.net/docs/pgpool-II-4.2.7/en/html/example-cluster.html