First of all, when managing and maintaining MGR with GreatSQL Shell, always use a dedicated MGR account rather than the root user. The dedicated MGR account I created is named GreatSQL.
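For reference, here is a minimal sketch of creating such a dedicated account. The password and the broad GRANT ALL privilege set are assumptions for illustration only; grant only what your environment actually requires.

-- Hypothetical example: create a dedicated MGR administration account on the current primary;
-- Group Replication propagates the account to the other members.
CREATE USER 'GreatSQL'@'%' IDENTIFIED BY 'YourStrongPassword';
GRANT ALL PRIVILEGES ON *.* TO 'GreatSQL'@'%' WITH GRANT OPTION;

With the account in place, logging in works as follows: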
[root@GreatSQL01 ~]# mysqlsh --uri GreatSQL@192.168.116.41:3306
MySQL Shell 8.0.32
Copyright (c) 2016, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
Type '\help' or '\?' for help; '\quit' to exit.
Creating a Classic session to 'GreatSQL@192.168.116.41:3306'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 115
Server version: 8.0.32-26 GreatSQL, Release 26, Revision a68b3034c3d
No default schema selected; type \use <schema> to set one.
MySQL 192.168.116.41:3306 ssl JS >
Switching the primary node
Switching the primary node with GreatSQL Shell
Note that GreatSQL Shell commands are case-sensitive.
MySQL 192.168.116.41:3306 ssl JS > c=dba.getCluster();
<Cluster:MGR1>
MySQL 192.168.116.41:3306 ssl JS > c.status();
{
"clusterName": "MGR1",
"defaultReplicaSet": {
"name": "default",
"primary": "192.168.116.41:3306",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"192.168.116.41:3306": {
"address": "192.168.116.41:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.42:3306": {
"address": "192.168.116.42:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.43:3306": {
"address": "192.168.116.43:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "192.168.116.41:3306"
}
MySQL 192.168.116.41:3306 ssl JS > c.setPrimaryInstance('192.168.116.42:3306')
Setting instance '192.168.116.42:3306' as the primary instance of cluster 'MGR1'...
Instance '192.168.116.41:3306' was switched from PRIMARY to SECONDARY.
Instance '192.168.116.42:3306' was switched from SECONDARY to PRIMARY.
Instance '192.168.116.43:3306' remains SECONDARY.
The instance '192.168.116.42:3306' was successfully elected as primary.
MySQL 192.168.116.41:3306 ssl JS > c.status();
{
"clusterName": "MGR1",
"defaultReplicaSet": {
"name": "default",
"primary": "192.168.116.42:3306",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"192.168.116.41:3306": {
"address": "192.168.116.41:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.42:3306": {
"address": "192.168.116.42:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.43:3306": {
"address": "192.168.116.43:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "192.168.116.42:3306"
}
MySQL 192.168.116.41:3306 ssl JS >
Manual switchover
(Tue Oct 8 13:55:57 2024)[root@GreatSQL][(none)]>SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 20121e3a-7e37-11ef-9e5a-000c296335ba | 192.168.116.41 | 3306 | ONLINE | SECONDARY | 8.0.32 | MySQL |
| group_replication_applier | 5fa2fefd-7e37-11ef-9e13-000c2927509d | 192.168.116.42 | 3306 | ONLINE | PRIMARY | 8.0.32 | MySQL |
| group_replication_applier | 6f8dd4a8-7e37-11ef-bf56-000c29807526 | 192.168.116.43 | 3306 | ONLINE | SECONDARY | 8.0.32 | MySQL |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.00 sec)
(Tue Oct 8 13:56:02 2024)[root@GreatSQL][(none)]>SELECT group_replication_set_as_primary('20121e3a-7e37-11ef-9e5a-000c296335ba');
+--------------------------------------------------------------------------+
| group_replication_set_as_primary('20121e3a-7e37-11ef-9e5a-000c296335ba') |
+--------------------------------------------------------------------------+
| Primary server switched to: 20121e3a-7e37-11ef-9e5a-000c296335ba |
+--------------------------------------------------------------------------+
1 row in set (0.31 sec)
(Tue Oct 8 13:58:28 2024)[root@GreatSQL][(none)]>SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 20121e3a-7e37-11ef-9e5a-000c296335ba | 192.168.116.41 | 3306 | ONLINE | PRIMARY | 8.0.32 | MySQL |
| group_replication_applier | 5fa2fefd-7e37-11ef-9e13-000c2927509d | 192.168.116.42 | 3306 | ONLINE | SECONDARY | 8.0.32 | MySQL |
| group_replication_applier | 6f8dd4a8-7e37-11ef-bf56-000c29807526 | 192.168.116.43 | 3306 | ONLINE | SECONDARY | 8.0.32 | MySQL |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.00 sec)
(Tue Oct 8 13:58:35 2024)[root@GreatSQL][(none)]>
Switching between single-primary and multi-primary mode
Switching modes with GreatSQL Shell
Note: a cluster running in fast single-primary mode cannot be switched to multi-primary mode.
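Before attempting the switch, you may want to confirm whether fast single-primary mode is enabled. This is only a sketch and assumes GreatSQL exposes the setting through the group_replication_single_primary_fast_mode system variable; verify the variable name against your GreatSQL version's documentation.

-- Assumption: GreatSQL controls fast single-primary mode via this variable;
-- a non-zero value means the switch to multi-primary will be rejected.
SHOW GLOBAL VARIABLES LIKE 'group_replication_single_primary_fast_mode';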
Single-primary to multi-primary
MySQL 192.168.116.41:3306 ssl JS > c.switchToMultiPrimaryMode()
Switching cluster 'MGR1' to Multi-Primary mode...
Instance '192.168.116.41:3306' was switched from SECONDARY to PRIMARY.
Instance '192.168.116.42:3306' remains PRIMARY.
Instance '192.168.116.43:3306' was switched from SECONDARY to PRIMARY.
The cluster successfully switched to Multi-Primary mode.
MySQL 192.168.116.41:3306 ssl JS > c.status();
{
"clusterName": "MGR1",
"defaultReplicaSet": {
"name": "default",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"192.168.116.41:3306": {
"address": "192.168.116.41:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.42:3306": {
"address": "192.168.116.42:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.43:3306": {
"address": "192.168.116.43:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
}
},
"topologyMode": "Multi-Primary"
},
"groupInformationSourceMember": "192.168.116.42:3306"
}
MySQL 192.168.116.41:3306 ssl JS >
Multi-primary to single-primary
MySQL 192.168.116.41:3306 ssl JS > c.switchToSinglePrimaryMode("192.168.116.41:3306")
Switching cluster 'MGR1' to Single-Primary mode...
Instance '192.168.116.41:3306' remains PRIMARY.
Instance '192.168.116.42:3306' was switched from PRIMARY to SECONDARY.
Instance '192.168.116.43:3306' was switched from PRIMARY to SECONDARY.
WARNING: Existing connections that expected a R/W connection must be disconnected, i.e. instances that became SECONDARY.
The cluster successfully switched to Single-Primary mode.
MySQL 192.168.116.41:3306 ssl JS > c.status()
{
"clusterName": "MGR1",
"defaultReplicaSet": {
"name": "default",
"primary": "192.168.116.41:3306",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"192.168.116.41:3306": {
"address": "192.168.116.41:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.42:3306": {
"address": "192.168.116.42:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.43:3306": {
"address": "192.168.116.43:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "192.168.116.41:3306"
}
Manual switchover
Single-primary to multi-primary
(Tue Oct 8 16:20:44 2024)[root@GreatSQL][(none)]>SELECT group_replication_switch_to_multi_primary_mode();
+--------------------------------------------------+
| group_replication_switch_to_multi_primary_mode() |
+--------------------------------------------------+
| Mode switched to multi-primary successfully. |
+--------------------------------------------------+
1 row in set (0.02 sec)
(Tue Oct 8 16:20:46 2024)[root@GreatSQL][(none)]>SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 20121e3a-7e37-11ef-9e5a-000c296335ba | 192.168.116.41 | 3306 | ONLINE | PRIMARY | 8.0.32 | MySQL |
| group_replication_applier | 5fa2fefd-7e37-11ef-9e13-000c2927509d | 192.168.116.42 | 3306 | ONLINE | PRIMARY | 8.0.32 | MySQL |
| group_replication_applier | 6f8dd4a8-7e37-11ef-bf56-000c29807526 | 192.168.116.43 | 3306 | ONLINE | PRIMARY | 8.0.32 | MySQL |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.00 sec)
Multi-primary to single-primary
(Tue Oct 8 16:21:03 2024)[root@GreatSQL][(none)]>SELECT group_replication_switch_to_single_primary_mode('20121e3a-7e37-11ef-9e5a-000c296335ba');
+-----------------------------------------------------------------------------------------+
| group_replication_switch_to_single_primary_mode('20121e3a-7e37-11ef-9e5a-000c296335ba') |
+-----------------------------------------------------------------------------------------+
| Mode switched to single-primary successfully. |
+-----------------------------------------------------------------------------------------+
1 row in set (0.21 sec)
(Tue Oct 8 16:54:39 2024)[root@GreatSQL][(none)]>SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 20121e3a-7e37-11ef-9e5a-000c296335ba | 192.168.116.41 | 3306 | ONLINE | PRIMARY | 8.0.32 | MySQL |
| group_replication_applier | 5fa2fefd-7e37-11ef-9e13-000c2927509d | 192.168.116.42 | 3306 | ONLINE | SECONDARY | 8.0.32 | MySQL |
| group_replication_applier | 6f8dd4a8-7e37-11ef-bf56-000c29807526 | 192.168.116.43 | 3306 | ONLINE | SECONDARY | 8.0.32 | MySQL |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.00 sec)
(Tue Oct 8 16:54:45 2024)[root@GreatSQL][(none)]>
Adding a new node
Adding a new node with GreatSQL Shell
First, start a brand-new, empty instance and make sure you can connect to it as root. Then use GreatSQL Shell to call the dba.configureInstance() function to complete the initialization checks.
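A minimal sketch of that initialization step, run from a GreatSQL Shell JS session connected to the new instance (the account and address below follow this article's environment and are assumptions as far as your own setup is concerned):

// Hypothetical example: validate and prepare the new, empty instance (192.168.116.44)
// so that it meets the MGR/InnoDB Cluster configuration requirements.
dba.configureInstance('GreatSQL@192.168.116.44:3306')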
Then switch to the GreatSQL Shell session that is connected to the primary node, get the cluster object, and add the new node:
MySQL 192.168.116.41:3306 ssl JS > c=dba.getCluster();
<Cluster:MGR1>
MySQL 192.168.116.41:3306 ssl JS > c.addInstance("GreatSQL@192.168.116.44:3306")
WARNING: A GTID set check of the MySQL instance at '192.168.116.44:3306' determined that it contains transactions that do not originate from the cluster, which must be discarded before it can join the cluster.
192.168.116.44:3306 has the following errant GTIDs that do not exist in the cluster:
977751df-860f-11ef-b897-000c29e2951d:1
WARNING: Discarding these extra GTID events can either be done manually or by completely overwriting the state of 192.168.116.44:3306 with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
Having extra GTID events is not expected, and it is recommended to investigate this further and ensure that the data can be removed prior to choosing the clone recovery method.
Please select a recovery method [C]lone/[A]bort (default Abort): clone
Validating instance configuration at 192.168.116.44:3306...
This instance reports its own address as 192.168.116.44:3306
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '192.168.116.44:3306'. Use the localAddress option to override.
A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster...
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 192.168.116.44:3306 is being cloned from 192.168.116.41:3306
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY ############################################################ 100% Completed
PAGE COPY ############################################################ 100% Completed
REDO COPY ############################################################ 100% Completed
NOTE: 192.168.116.44:3306 is shutting down...
* Waiting for server restart... ready
* 192.168.116.44:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 74.00 MB transferred in 2 sec (37.00 MB/s)
State recovery already finished for '192.168.116.44:3306'
The instance '192.168.116.44:3306' was successfully added to the cluster.
MySQL 192.168.116.41:3306 ssl JS > c.status()
{
"clusterName": "MGR1",
"defaultReplicaSet": {
"name": "default",
"primary": "192.168.116.42:3306",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"192.168.116.41:3306": {
"address": "192.168.116.41:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.42:3306": {
"address": "192.168.116.42:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.43:3306": {
"address": "192.168.116.43:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.44:3306": {
"address": "192.168.116.44:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "192.168.116.42:3306"
}
MySQL 192.168.116.41:3306 ssl JS >
Removing a node
Removing a node with GreatSQL Shell
MySQL 192.168.116.41:3306 ssl JS > c=dba.getCluster()
<Cluster:MGR1>
MySQL 192.168.116.41:3306 ssl JS > c.removeInstance("GreatSQL@192.168.116.44:3306")
The instance will be removed from the InnoDB Cluster.
* Waiting for instance '192.168.116.44:3306' to synchronize with the primary...
** Transactions replicated ############################################################ 100%
* Instance '192.168.116.44:3306' is attempting to leave the cluster...
The instance '192.168.116.44:3306' was successfully removed from the cluster.
MySQL 192.168.116.41:3306 ssl JS > c.status()
{
"clusterName": "MGR1",
"defaultReplicaSet": {
"name": "default",
"primary": "192.168.116.42:3306",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"192.168.116.41:3306": {
"address": "192.168.116.41:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.42:3306": {
"address": "192.168.116.42:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.43:3306": {
"address": "192.168.116.43:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "192.168.116.42:3306"
}
MySQL 192.168.116.41:3306 ssl JS >
Restarting the MGR cluster
Under normal circumstances, when the Primary node of an MGR cluster leaves, the remaining nodes automatically elect a new Primary. When the last node also leaves, the whole MGR cluster is effectively shut down. At that point, a node that starts the MGR service will not automatically become the Primary: before starting the MGR service you must first set group_replication_bootstrap_group=ON so that the node acts as the bootstrap node; once it has started the MGR service and become the Primary, the other nodes started afterwards can join the cluster normally.
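A minimal sketch of that manual bootstrap procedure (run only on the node chosen to bootstrap the group; on the remaining nodes, START GROUP_REPLICATION alone is enough):

-- On the bootstrap node only: make it the seed of the restarted group.
SET GLOBAL group_replication_bootstrap_group = ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group = OFF;
-- On each of the other nodes afterwards:
-- START GROUP_REPLICATION;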
If you restart the MGR cluster with GreatSQL Shell instead, simply call the rebootClusterFromCompleteOutage() function: it automatically determines the state of each node, selects one of them as the Primary, and then brings up the MGR service on each node, completing the MGR cluster restart.
MySQL 192.168.116.41:3306 ssl JS > c=dba.getCluster()
Dba.getCluster: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active) (MYSQLSH 51314)
MySQL 192.168.116.41:3306 ssl JS > dba.rebootClusterFromCompleteOutage("MGR1");
Restoring the Cluster 'MGR1' from complete outage...
Cluster instances: '192.168.116.41:3306' (OFFLINE), '192.168.116.42:3306' (OFFLINE), '192.168.116.43:3306' (OFFLINE), '192.168.116.44:3306' (OFFLINE)
Waiting for instances to apply pending received transactions...
NOTE: The instance '192.168.116.41:3306' is running auto-rejoin process, which will be cancelled.
Validating instance configuration at 192.168.116.41:3306...
This instance reports its own address as 192.168.116.41:3306
Instance configuration is suitable.
NOTE: Cancelling active GR auto-initialization at 192.168.116.41:3306
* Waiting for seed instance to become ONLINE...
192.168.116.41:3306 was restored.
Validating instance configuration at 192.168.116.42:3306...
This instance reports its own address as 192.168.116.42:3306
Instance configuration is suitable.
Rejoining instance '192.168.116.42:3306' to cluster 'MGR1'...
Re-creating recovery account...
NOTE: User 'mysql_innodb_cluster_2'@'%' already existed at instance '192.168.116.41:3306'. It will be deleted and created again with a new password.
* Waiting for the Cluster to synchronize with the PRIMARY Cluster...
** Transactions replicated ############################################################ 100%
The instance '192.168.116.42:3306' was successfully rejoined to the cluster.
Validating instance configuration at 192.168.116.43:3306...
This instance reports its own address as 192.168.116.43:3306
Instance configuration is suitable.
Rejoining instance '192.168.116.43:3306' to cluster 'MGR1'...
Re-creating recovery account...
NOTE: User 'mysql_innodb_cluster_3'@'%' already existed at instance '192.168.116.41:3306'. It will be deleted and created again with a new password.
* Waiting for the Cluster to synchronize with the PRIMARY Cluster...
** Transactions replicated ############################################################ 100%
The instance '192.168.116.43:3306' was successfully rejoined to the cluster.
Validating instance configuration at 192.168.116.44:3306...
This instance reports its own address as 192.168.116.44:3306
Instance configuration is suitable.
Rejoining instance '192.168.116.44:3306' to cluster 'MGR1'...
Re-creating recovery account...
NOTE: User 'mysql_innodb_cluster_4'@'%' already existed at instance '192.168.116.41:3306'. It will be deleted and created again with a new password.
* Waiting for the Cluster to synchronize with the PRIMARY Cluster...
** Transactions replicated ############################################################ 100%
The instance '192.168.116.44:3306' was successfully rejoined to the cluster.
The Cluster was successfully rebooted.
<Cluster:MGR1>
MySQL 192.168.116.41:3306 ssl JS > c=dba.getCluster()
<Cluster:MGR1>
MySQL 192.168.116.41:3306 ssl JS > c.status()
{
"clusterName": "MGR1",
"defaultReplicaSet": {
"name": "default",
"primary": "192.168.116.41:3306",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"192.168.116.41:3306": {
"address": "192.168.116.41:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.42:3306": {
"address": "192.168.116.42:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.43:3306": {
"address": "192.168.116.43:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
},
"192.168.116.44:3306": {
"address": "192.168.116.44:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": "applier_queue_applied",
"role": "HA",
"status": "ONLINE",
"version": "8.0.32"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "192.168.116.41:3306"
}
MySQL 192.168.116.41:3306 ssl JS >
Reference: https://greatsql.cn/docs/8.0.32-26/8-mgr/3-mgr-maintain-admin.html