
Failing over the master node to the standby master

Original post by murkey · 2020-05-09


The following simulates a failure of the master node and a failover to the standby. To simulate the failure, simply power off the master host. The gpactivatestandby utility promotes the standby to master; it depends on a few environment variables, which the tests below introduce one by one.

1. Set the required environment variables on both the master and the standby

[root@mdw ~]# su - gpadmin
Last login: Mon Apr 20 23:29:40 CST 2020 on pts/0
[gpadmin@mdw ~]$ gpstate -f
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -f
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Aug 10 2018 07:30:24'
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:-Standby master details
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:-----------------------
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:-   Standby address          = smdw
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:-   Standby data directory   = /greenplum/gpdata/master/gpseg-1
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:-   Standby port             = 5432
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:-   Standby PID              = 31874
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:-   Standby status           = Standby host passive
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:--pg_stat_replication
20200421:00:01:47:019456 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
20200421:00:01:48:019456 gpstate:mdw:gpadmin-[INFO]:--WAL Sender State: streaming
20200421:00:01:48:019456 gpstate:mdw:gpadmin-[INFO]:--Sync state: sync
20200421:00:01:48:019456 gpstate:mdw:gpadmin-[INFO]:--Sent Location: 0/CB222C8
20200421:00:01:48:019456 gpstate:mdw:gpadmin-[INFO]:--Flush Location: 0/CB222C8
20200421:00:01:48:019456 gpstate:mdw:gpadmin-[INFO]:--Replay Location: 0/CB222C8
20200421:00:01:48:019456 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
[gpadmin@mdw ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH
export MASTER_DATA_DIRECTORY=/greenplum/gpdata/master/gpseg-1
source /usr/local/greenplum-db/greenplum_path.sh
export PGPORT=5432
export PGDATABASE=archdata
[gpadmin@mdw ~]$
[gpadmin@mdw ~]$ ls
gpAdminLogs  gpconfig
[gpadmin@mdw ~]$
[gpadmin@sdw3 ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH
export MASTER_DATA_DIRECTORY=/greenplum/gpdata/master/gpseg-1
source /usr/local/greenplum-db/greenplum_path.sh
export PGPORT=5432
export PGDATABASE=archdata
[gpadmin@sdw3 ~]$
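As a pre-flight sketch (not from the original session; `check_gp_env` is a hypothetical helper name), the variables that gpactivatestandby depends on can be verified before a promotion is attempted:

```shell
# Hypothetical pre-flight helper: gpactivatestandby relies on these
# variables being set in gpadmin's .bash_profile, so verify them before
# attempting a promotion. Returns non-zero if anything is missing.
check_gp_env() {
    local missing=0
    if [ -z "$MASTER_DATA_DIRECTORY" ]; then
        echo "missing environment variable: MASTER_DATA_DIRECTORY" >&2
        missing=1
    fi
    if [ -z "$PGPORT" ]; then
        echo "missing environment variable: PGPORT" >&2
        missing=1
    fi
    if [ -z "$PGDATABASE" ]; then
        echo "missing environment variable: PGDATABASE" >&2
        missing=1
    fi
    return "$missing"
}
```

Run it as gpadmin on the standby host; a non-zero exit usually means .bash_profile was not sourced or is incomplete.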

2. Simulate a master failure

[root@mdw ~]# su - gpadmin
Last login: Tue Apr 21 00:01:42 CST 2020 on pts/0
[gpadmin@mdw ~]$ pg_ctl stop -D $MASTER_DATA_DIRECTORY
waiting for server to shut down.... done
server stopped
[gpadmin@mdw ~]$ gpstate -f
20200421:00:12:11:020067 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -f
20200421:00:12:11:020067 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
20200421:00:12:11:020067 gpstate:mdw:gpadmin-[CRITICAL]:-gpstate failed. (Reason='could not connect to server: Connection refused
        Is the server running on host "localhost" (::1) and accepting
        TCP/IP connections on port 5432?
could not connect to server: Connection refused
        Is the server running on host "localhost" (127.0.0.1) and accepting
        TCP/IP connections on port 5432?
') exiting...
[gpadmin@mdw ~]$



3. Switch the standby to master

[root@sdw3 ~]# su - gpadmin
Last login: Tue Apr 21 00:05:11 CST 2020 on pts/1
[gpadmin@sdw3 ~]$ gpactivatestandby -d $MASTER_DATA_DIRECTORY
20200421:00:13:16:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:------------------------------------------------------
20200421:00:13:16:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Standby data directory    = /greenplum/gpdata/master/gpseg-1
20200421:00:13:16:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Standby port              = 5432
20200421:00:13:16:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Standby running           = yes
20200421:00:13:16:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Force standby activation  = no
20200421:00:13:16:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:------------------------------------------------------
Do you want to continue with standby master activation? Yy|Nn (default=N):
> y
20200421:00:13:27:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-found standby postmaster process
20200421:00:13:27:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Updating transaction files filespace flat files...
20200421:00:13:27:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Updating temporary files filespace flat files...
20200421:00:13:27:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Promoting standby...
20200421:00:13:27:019917 gpactivatestandby:sdw3:gpadmin-[DEBUG]:-Waiting for connection...
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Standby master is promoted
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Reading current configuration...
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[DEBUG]:-Connecting to dbname='archdata'
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Writing the gp_dbid file - /greenplum/gpdata/master/gpseg-1/gp_dbid...
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-But found an already existing file.
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Hence removed that existing file.
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Creating a new file...
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Wrote dbid: 1 to the file.
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Now marking it as read only...
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Verifying the file...
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:------------------------------------------------------
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-The activation of the standby master has completed successfully.
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-sdw3 is now the new primary master.
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-You will need to update your user access mechanism to reflect
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-the change of master hostname.
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Do not re-start the failed master while the fail-over master is
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-operational, this could result in database corruption!
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-MASTER_DATA_DIRECTORY is now /greenplum/gpdata/master/gpseg-1 if
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-this has changed as a result of the standby master activation, remember
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-to change this in any startup scripts etc, that may be configured
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-to set this value.
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-MASTER_PORT is now 5432, if this has changed, you
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-may need to make additional configuration changes to allow access
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-to the Greenplum instance.
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Refer to the Administrator Guide for instructions on how to re-activate
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-the master to its previous state once it becomes available.
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-Query planner statistics must be updated on all databases
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-following standby master activation.
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:-When convenient, run ANALYZE against all user databases.
20200421:00:13:28:019917 gpactivatestandby:sdw3:gpadmin-[INFO]:------------------------------------------------------
[gpadmin@sdw3 ~]$

4. Check the status

The output shows that no standby is configured: the standby has now become the master, and the original master has been evicted from the cluster. After the switchover the database reports "No master standby configured", and gpstate -f likewise shows that the new cluster has no standby ("Standby master instance not configured").

[gpadmin@sdw3 ~]$ gpstate -f
20200421:00:14:14:020036 gpstate:sdw3:gpadmin-[INFO]:-Starting gpstate with args: -f
20200421:00:14:14:020036 gpstate:sdw3:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
20200421:00:14:14:020036 gpstate:sdw3:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Aug 10 2018 07:30:24'
20200421:00:14:14:020036 gpstate:sdw3:gpadmin-[INFO]:-Obtaining Segment details from master...
20200421:00:14:14:020036 gpstate:sdw3:gpadmin-[INFO]:-Standby master instance not configured
20200421:00:14:14:020036 gpstate:sdw3:gpadmin-[INFO]:--------------------------------------------------------------
20200421:00:14:14:020036 gpstate:sdw3:gpadmin-[INFO]:--pg_stat_replication
20200421:00:14:14:020036 gpstate:sdw3:gpadmin-[INFO]:--------------------------------------------------------------
20200421:00:14:14:020036 gpstate:sdw3:gpadmin-[INFO]:-No entries found.
20200421:00:14:14:020036 gpstate:sdw3:gpadmin-[INFO]:--------------------------------------------------------------
[gpadmin@sdw3 ~]$ gpstate -f
20200421:00:15:51:020128 gpstate:sdw3:gpadmin-[INFO]:-Starting gpstate with args: -f
20200421:00:15:51:020128 gpstate:sdw3:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
20200421:00:15:51:020128 gpstate:sdw3:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Aug 10 2018 07:30:24'
20200421:00:15:51:020128 gpstate:sdw3:gpadmin-[INFO]:-Obtaining Segment details from master...
20200421:00:15:51:020128 gpstate:sdw3:gpadmin-[INFO]:-Standby master instance not configured
20200421:00:15:51:020128 gpstate:sdw3:gpadmin-[INFO]:--------------------------------------------------------------
20200421:00:15:51:020128 gpstate:sdw3:gpadmin-[INFO]:--pg_stat_replication
20200421:00:15:51:020128 gpstate:sdw3:gpadmin-[INFO]:--------------------------------------------------------------
20200421:00:15:51:020128 gpstate:sdw3:gpadmin-[INFO]:-No entries found.
20200421:00:15:51:020128 gpstate:sdw3:gpadmin-[INFO]:--------------------------------------------------------------
[gpadmin@sdw3 ~]$
[gpadmin@sdw3 ~]$ gpstate -b
20200421:00:17:36:020251 gpstate:sdw3:gpadmin-[INFO]:-Starting gpstate with args: -b
20200421:00:17:36:020251 gpstate:sdw3:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
20200421:00:17:36:020251 gpstate:sdw3:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Aug 10 2018 07:30:24'
20200421:00:17:36:020251 gpstate:sdw3:gpadmin-[INFO]:-Obtaining Segment details from master...
20200421:00:17:36:020251 gpstate:sdw3:gpadmin-[INFO]:-Gathering data from segments...
. 
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-Greenplum instance status summary
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-----------------------------------------------------
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Master instance                                           = Active
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Master standby                                            = No master standby configured
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total segment instance count from metadata                = 12
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-----------------------------------------------------
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Primary Segment Status
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-----------------------------------------------------
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total primary segments                                    = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total primary segment valid (at master)                   = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total primary segment failures (at master)                = 0
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number postmaster processes found                   = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-----------------------------------------------------
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Mirror Segment Status
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-----------------------------------------------------
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total mirror segments                                     = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total mirror segment valid (at master)                    = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total mirror segment failures (at master)                 = 0
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number postmaster processes found                   = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number mirror segments acting as primary segments   = 0
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-   Total number mirror segments acting as mirror segments    = 6
20200421:00:17:37:020251 gpstate:sdw3:gpadmin-[INFO]:-----------------------------------------------------
[gpadmin@sdw3 ~]$
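The manual check above can also be scripted. As a sketch (the matched string is taken from the gpstate output in this post; `standby_configured` is a hypothetical helper), a function can read `gpstate -f` output on stdin and report whether a standby master exists:

```shell
# Hypothetical helper: read `gpstate -f` output on stdin; return 0 if a
# standby master is configured, 1 if gpstate reported none.
standby_configured() {
    if grep -q "Standby master instance not configured"; then
        return 1
    fi
    return 0
}
```

Usage would be something like `gpstate -f | standby_configured || echo "no standby!"`, e.g. as part of a monitoring cron job.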

5. Show the cluster metadata

archdata=# \d
            List of relations
 Schema | Name | Type  |  Owner  | Storage
--------+------+-------+---------+---------
 public | test | table | gpadmin | heap
(1 row)

archdata=# select * from gp_segment_configuration;
 dbid | content | role | preferred_role | mode | status | port  | hostname | address | replication_port
------+---------+------+----------------+------+--------+-------+----------+---------+------------------
    2 |       0 | p    | p              | s    | u      | 40000 | sdw1     | sdw1    |            41000
    4 |       2 | p    | p              | s    | u      | 40000 | sdw2     | sdw2    |            41000
    6 |       4 | p    | p              | s    | u      | 40000 | sdw3     | sdw3    |            41000
    3 |       1 | p    | p              | s    | u      | 40001 | sdw1     | sdw1    |            41001
    5 |       3 | p    | p              | s    | u      | 40001 | sdw2     | sdw2    |            41001
    7 |       5 | p    | p              | s    | u      | 40001 | sdw3     | sdw3    |            41001
    8 |       0 | m    | m              | s    | u      | 50000 | sdw2     | sdw2    |            51000
    9 |       1 | m    | m              | s    | u      | 50001 | sdw2     | sdw2    |            51001
   10 |       2 | m    | m              | s    | u      | 50000 | sdw3     | sdw3    |            51000
   11 |       3 | m    | m              | s    | u      | 50001 | sdw3     | sdw3    |            51001
   12 |       4 | m    | m              | s    | u      | 50000 | sdw1     | sdw1    |            51000
   13 |       5 | m    | m              | s    | u      | 50001 | sdw1     | sdw1    |            51001
    1 |      -1 | p    | p              | s    | u      |  5432 | smdw     | smdw    |
(13 rows)

6. Re-create the original master as the new standby master

Note: when adding the node, gpinitstandby checks the master data directory on the target host, so first delete or rename that directory on the original master.

[gpadmin@mdw master]$ mv /greenplum/gpdata/master/gpseg-1 /greenplum/gpdata/master/gpseg-1bak      
[gpadmin@mdw master]$ 

Add the original master host back as the standby node
(run as gpadmin on the acting master, sdw3):

 gpinitstandby -s mdw 
 
 [gpadmin@sdw3 ~]$  gpinitstandby -s mdw 
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Checking for filespace directory /greenplum/gpdata/master/gpseg-1 on mdw
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:------------------------------------------------------
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:------------------------------------------------------
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Greenplum master hostname               = smdw
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Greenplum master data directory         = /greenplum/gpdata/master/gpseg-1
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Greenplum master port                   = 5432
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Greenplum standby master hostname       = mdw
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Greenplum standby master data directory = /greenplum/gpdata/master/gpseg-1
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Greenplum update system catalog         = On
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:------------------------------------------------------
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:- Filespace locations
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:------------------------------------------------------
20200421:00:30:15:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-pg_system -> /greenplum/gpdata/master/gpseg-1
Do you want to continue with standby master initialization? Yy|Nn (default=N):
> y
20200421:00:30:17:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20200421:00:30:17:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-The packages on mdw are consistent.
20200421:00:30:18:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Adding standby master to catalog...
20200421:00:30:18:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Database catalog updated successfully.
20200421:00:30:18:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Updating pg_hba.conf file...
20200421:00:30:19:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20200421:00:30:20:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Updating filespace flat files...
20200421:00:30:20:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Filespace flat file updated successfully.
20200421:00:30:20:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Starting standby master
20200421:00:30:20:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Checking if standby master is running on host: mdw  in directory: /greenplum/gpdata/master/gpseg-1
20200421:00:30:21:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20200421:00:30:22:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20200421:00:30:22:021050 gpinitstandby:sdw3:gpadmin-[INFO]:-Successfully created standby master on mdw

At this point the original master and standby have completely swapped roles.

7. Restore the master and standby to their original roles

Simply perform the switchover procedure above in the reverse direction:
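The reverse procedure can be summarized as a dry-run shell sketch (an illustrative runbook, not from the original post; with `RUN=echo`, the default here, the commands are only printed):

```shell
# Dry-run runbook for the switchback. With RUN=echo (the default) each
# command is printed instead of executed; set RUN= to run it for real.
RUN=${RUN:-echo}
MASTER_DATA_DIRECTORY=${MASTER_DATA_DIRECTORY:-/greenplum/gpdata/master/gpseg-1}

# 1. On smdw: stop the acting master to simulate its failure.
$RUN pg_ctl stop -D "$MASTER_DATA_DIRECTORY"

# 2. On mdw: promote the standby back to master.
$RUN gpactivatestandby -d "$MASTER_DATA_DIRECTORY"

# 3. On smdw: move the stale master data directory aside, since
#    gpinitstandby refuses to reuse an existing directory.
$RUN mv /greenplum/gpdata/master/gpseg-1 /greenplum/gpdata/master/gpseg-1bak

# 4. On mdw: re-add smdw as the standby master.
$RUN gpinitstandby -s smdw
```

Note that the steps run on different hosts, so in practice each command is executed on the host named in its comment rather than as a single script.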

  1. Simulate a failure of the acting master on smdw

    Run on smdw:
    [gpadmin@sdw3 ~]$ pg_ctl stop -D $MASTER_DATA_DIRECTORY
    waiting for server to shut down.... done
    server stopped
    [gpadmin@sdw3 ~]$ 
    
    
  2. Check the status

  3. Switch the standby back to master (run on mdw)

    gpactivatestandby -d $MASTER_DATA_DIRECTORY
    [gpadmin@mdw ~]$ gpactivatestandby -d $MASTER_DATA_DIRECTORY
    20200421:00:37:43:021254 gpactivatestandby:mdw:gpadmin-[INFO]:------------------------------------------------------
    20200421:00:37:43:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Standby data directory    = /greenplum/gpdata/master/gpseg-1
    20200421:00:37:43:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Standby port              = 5432
    20200421:00:37:43:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Standby running           = yes
    20200421:00:37:43:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Force standby activation  = no
    20200421:00:37:43:021254 gpactivatestandby:mdw:gpadmin-[INFO]:------------------------------------------------------
    Do you want to continue with standby master activation? Yy|Nn (default=N):
    > y
    20200421:00:37:45:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-found standby postmaster process
    20200421:00:37:45:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Updating transaction files filespace flat files...
    20200421:00:37:45:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Updating temporary files filespace flat files...
    20200421:00:37:45:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Promoting standby...
    20200421:00:37:45:021254 gpactivatestandby:mdw:gpadmin-[DEBUG]:-Waiting for connection...
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Standby master is promoted
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Reading current configuration...
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[DEBUG]:-Connecting to dbname='archdata'
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Writing the gp_dbid file - /greenplum/gpdata/master/gpseg-1/gp_dbid...
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-But found an already existing file.
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Hence removed that existing file.
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Creating a new file...
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Wrote dbid: 1 to the file.
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Now marking it as read only...
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Verifying the file...
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:------------------------------------------------------
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-The activation of the standby master has completed successfully.
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-mdw is now the new primary master.
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-You will need to update your user access mechanism to reflect
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-the change of master hostname.
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Do not re-start the failed master while the fail-over master is
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-operational, this could result in database corruption!
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-MASTER_DATA_DIRECTORY is now /greenplum/gpdata/master/gpseg-1 if
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-this has changed as a result of the standby master activation, remember
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-to change this in any startup scripts etc, that may be configured
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-to set this value.
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-MASTER_PORT is now 5432, if this has changed, you
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-may need to make additional configuration changes to allow access
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-to the Greenplum instance.
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Refer to the Administrator Guide for instructions on how to re-activate
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-the master to its previous state once it becomes available.
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-Query planner statistics must be updated on all databases
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-following standby master activation.
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:-When convenient, run ANALYZE against all user databases.
    20200421:00:37:46:021254 gpactivatestandby:mdw:gpadmin-[INFO]:------------------------------------------------------
    
  4. Check the status again

    [gpadmin@mdw ~]$ gpstate -f
    20200421:00:38:35:021364 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -f
    20200421:00:38:35:021364 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
    20200421:00:38:35:021364 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Aug 10 2018 07:30:24'
    20200421:00:38:35:021364 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
    20200421:00:38:35:021364 gpstate:mdw:gpadmin-[INFO]:-Standby master instance not configured
    20200421:00:38:35:021364 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
    20200421:00:38:35:021364 gpstate:mdw:gpadmin-[INFO]:--pg_stat_replication
    20200421:00:38:35:021364 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
    20200421:00:38:35:021364 gpstate:mdw:gpadmin-[INFO]:-No entries found.
    20200421:00:38:35:021364 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
    [gpadmin@mdw ~]$ gpstate -d
    Usage: gpstate [--help] [options] 
    
    gpstate: error: -d option requires an argument
    [gpadmin@mdw ~]$ gpstate -b
    20200421:00:38:41:021409 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -b
    20200421:00:38:41:021409 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
    20200421:00:38:41:021409 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Aug 10 2018 07:30:24'
    20200421:00:38:41:021409 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
    20200421:00:38:41:021409 gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments...
    . 
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-Greenplum instance status summary
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Master instance                                           = Active
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Master standby                                            = No master standby configured
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total segment instance count from metadata                = 12
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Primary Segment Status
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total primary segments                                    = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total primary segment valid (at master)                   = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total primary segment failures (at master)                = 0
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes found                   = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Mirror Segment Status
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total mirror segments                                     = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total mirror segment valid (at master)                    = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total mirror segment failures (at master)                 = 0
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes found                   = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number mirror segments acting as primary segments   = 0
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-   Total number mirror segments acting as mirror segments    = 6
    20200421:00:38:42:021409 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
    [gpadmin@mdw ~]$ 
    
  5. Rebuild the master on smdw as the standby

 On smdw, move the old master data directory out of the way:
 mv /greenplum/gpdata/master/gpseg-1 /greenplum/gpdata/master/gpseg-1bak
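If you may need to repeat the recovery, a timestamped rename (instead of the fixed `gpseg-1bak` suffix above) avoids colliding with an earlier backup. A minimal sketch; `backup_master_dir` is a hypothetical helper and the path is the one used in this walkthrough:

```shell
# Rename a stale master data directory with a timestamp suffix so that
# repeated recoveries never overwrite an earlier backup copy.
backup_master_dir() {
  local datadir=$1
  mv "$datadir" "${datadir}.$(date +%Y%m%d%H%M%S)"
}

# On smdw this would be:
# backup_master_dir /greenplum/gpdata/master/gpseg-1
```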

Re-add smdw as the standby master node
(run on mdw as the gpadmin user):

 gpinitstandby -s smdw
 [gpadmin@mdw ~]$ gpinitstandby -s smdw 
20200421:00:40:23:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20200421:00:40:23:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Checking for filespace directory /greenplum/gpdata/master/gpseg-1 on smdw
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master hostname               = mdw
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master data directory         = /greenplum/gpdata/master/gpseg-1
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master port                   = 5432
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master hostname       = smdw
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master data directory = /greenplum/gpdata/master/gpseg-1
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum update system catalog         = On
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:- Filespace locations
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20200421:00:40:24:021547 gpinitstandby:mdw:gpadmin-[INFO]:-pg_system -> /greenplum/gpdata/master/gpseg-1
Do you want to continue with standby master initialization? Yy|Nn (default=N):
> y
20200421:00:40:27:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20200421:00:40:27:021547 gpinitstandby:mdw:gpadmin-[INFO]:-The packages on smdw are consistent.
20200421:00:40:27:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Adding standby master to catalog...
20200421:00:40:27:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Database catalog updated successfully.
20200421:00:40:27:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Updating pg_hba.conf file...
20200421:00:40:28:021547 gpinitstandby:mdw:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20200421:00:40:30:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Updating filespace flat files...
20200421:00:40:30:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Filespace flat file updated successfully.
20200421:00:40:30:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Starting standby master
20200421:00:40:30:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Checking if standby master is running on host: smdw  in directory: /greenplum/gpdata/master/gpseg-1
20200421:00:40:31:021547 gpinitstandby:mdw:gpadmin-[WARNING]:-Unable to cleanup previously started standby: 'Warning: the ECDSA host key for 'smdw' differs from the key for the IP address '10.102.254.26'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:1
Matching host key in /home/gpadmin/.ssh/known_hosts:9
'
20200421:00:40:32:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20200421:00:40:33:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20200421:00:40:33:021547 gpinitstandby:mdw:gpadmin-[INFO]:-Successfully created standby master on smdw
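The WARNING above is not from Greenplum itself: a stale ECDSA entry for smdw in gpadmin's `known_hosts` conflicts with the entry recorded for its IP, so one SSH call during cleanup failed. The stock fix is `ssh-keygen -R <host>`; the sketch below (`clean_known_hosts` is a hypothetical helper, and the hostname/IP are the ones from the log) does the equivalent with `grep`:

```shell
# Drop all known_hosts lines for the given host names / IPs, keeping a
# ".old" backup of the file (mirroring what ssh-keygen -R does).
# Dots in IPs are left unescaped in the pattern; good enough for cleanup.
clean_known_hosts() {
  local file=$1; shift
  cp "$file" "$file.old"
  local pat
  pat=$(printf '%s|' "$@"); pat=${pat%|}          # join names with |
  grep -Ev "^(${pat})([ ,])" "$file.old" > "$file" || true
}

# e.g.: clean_known_hosts ~/.ssh/known_hosts smdw 10.102.254.26
```

After the cleanup, the next SSH connection re-records the current host key and the warning disappears.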
           

6. Verify that the cluster has fully recovered

[gpadmin@mdw ~]$ gpstate -b            
20200421:00:40:49:021656 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -b
20200421:00:40:49:021656 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
20200421:00:40:49:021656 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Aug 10 2018 07:30:24'
20200421:00:40:49:021656 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20200421:00:40:49:021656 gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments...
. 
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-Greenplum instance status summary
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Master instance                                           = Active
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Master standby                                            = smdw
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Standby master state                                      = Standby host passive
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total segment instance count from metadata                = 12
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Primary Segment Status
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total primary segments                                    = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total primary segment valid (at master)                   = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total primary segment failures (at master)                = 0
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes found                   = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Mirror Segment Status
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total mirror segments                                     = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total mirror segment valid (at master)                    = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total mirror segment failures (at master)                 = 0
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes found                   = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number mirror segments acting as primary segments   = 0
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-   Total number mirror segments acting as mirror segments    = 6
20200421:00:40:50:021656 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
[gpadmin@mdw ~]$ gpstate -f
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -f
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Aug 10 2018 07:30:24'
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:-Standby master details
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:-----------------------
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:-   Standby address          = smdw
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:-   Standby data directory   = /greenplum/gpdata/master/gpseg-1
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:-   Standby port             = 5432
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:-   Standby PID              = 21942
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:-   Standby status           = Standby host passive
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:--pg_stat_replication
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:--WAL Sender State: streaming
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:--Sync state: sync
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:--Sent Location: 0/1C000000
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:--Flush Location: 0/1C000000
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:--Replay Location: 0/1C000000
20200421:00:40:56:021742 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
[gpadmin@mdw ~]$ 
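The Sent/Flush/Replay locations above are WAL positions in hex `xlogid/xrecoff` form; when all three match, the standby has replayed everything the master has sent. A hypothetical helper (`wal_lag_bytes` is not a Greenplum tool) that turns two such locations into an approximate byte lag, treating each xlogid as 4 GiB:

```shell
# Approximate replication lag in bytes between two WAL locations as
# printed by gpstate -f, e.g. "0/1C000000" (xlogid/xrecoff in hex).
wal_lag_bytes() {
  local sent=$1 replay=$2
  local s_hi=$((16#${sent%/*}))   s_lo=$((16#${sent#*/}))
  local r_hi=$((16#${replay%/*})) r_lo=$((16#${replay#*/}))
  echo $(( (s_hi - r_hi) * 4294967296 + s_lo - r_lo ))
}

wal_lag_bytes 0/1C000000 0/1C000000   # → 0: standby fully caught up
```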
The gp_segment_configuration catalog confirms the final layout: mdw is back as the master (content -1, role p) and smdw is the standby (content -1, role m):

archdata=# select * from gp_segment_configuration;
 dbid | content | role | preferred_role | mode | status | port  | hostname | address | replication_port 
------+---------+------+----------------+------+--------+-------+----------+---------+------------------
    2 |       0 | p    | p              | s    | u      | 40000 | sdw1     | sdw1    |            41000
    4 |       2 | p    | p              | s    | u      | 40000 | sdw2     | sdw2    |            41000
    6 |       4 | p    | p              | s    | u      | 40000 | sdw3     | sdw3    |            41000
    3 |       1 | p    | p              | s    | u      | 40001 | sdw1     | sdw1    |            41001
    5 |       3 | p    | p              | s    | u      | 40001 | sdw2     | sdw2    |            41001
    7 |       5 | p    | p              | s    | u      | 40001 | sdw3     | sdw3    |            41001
    8 |       0 | m    | m              | s    | u      | 50000 | sdw2     | sdw2    |            51000
    9 |       1 | m    | m              | s    | u      | 50001 | sdw2     | sdw2    |            51001
   10 |       2 | m    | m              | s    | u      | 50000 | sdw3     | sdw3    |            51000
   11 |       3 | m    | m              | s    | u      | 50001 | sdw3     | sdw3    |            51001
   12 |       4 | m    | m              | s    | u      | 50000 | sdw1     | sdw1    |            51000
   13 |       5 | m    | m              | s    | u      | 50001 | sdw1     | sdw1    |            51001
    1 |      -1 | p    | p              | s    | u      |  5432 | mdw      | mdw     |                 
   14 |      -1 | m    | m              | s    | u      |  5432 | smdw     | smdw    |                 
(14 rows)

archdata=# 
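In this catalog, `role` is a segment's current role and `preferred_role` the role it was initialized with; after a clean recovery the two should match on every row, as they do above. A small sketch (`check_roles` is a hypothetical helper) to automate that check on `psql -At` pipe-separated output, columns dbid, role, preferred_role:

```shell
# Reads "dbid|role|preferred_role" lines and fails if any segment is not
# running in its preferred role (i.e. the cluster is unbalanced).
check_roles() {
  awk -F'|' '$2 != $3 { bad++; print "dbid " $1 ": role=" $2 " preferred=" $3 }
             END { exit (bad > 0) }'
}

# e.g.:
# psql -d archdata -Atc "SELECT dbid, role, preferred_role
#                        FROM gp_segment_configuration;" | check_roles
```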