
MGR: Adding and Removing Nodes

Original post by 在路上, 2022-08-30

MySQL Group Replication offers strong consistency, fault tolerance, scalability, and flexibility, providing a highly available, reliable, and stable MySQL service that strengthens MySQL's existing high-availability architectures. Where network latency can be ignored, it makes multi-active deployments and near-site writes for geo-redundancy straightforward. Group Replication is the direction MySQL is heading, and it will likely appear in more and more production environments as it continues to mature.



Adding a Node to MGR

Environment 1: complete binlogs still available

Join a new node directly.
1. Update IPs and hostnames (the existing nodes also need the new node's IP and hostname added):
vi /etc/hosts

10.123.1.21 cwdb1
10.123.1.22 cwdb2
10.123.1.23 cwdb3
10.123.1.24 cwdb4

2. Edit the configuration file on 10.123.1.24:
vi /data01/mysql/data/3306/my.cnf

#bind_address= 0.0.0.0
#enable binlog with row-based replication, and disable binlog checksum
log_bin=/data01/mysql/log/3306/binlog/mysql-binlog
log_bin_index=/data01/mysql/log/3306/binlog/mysql-binlog.index
binlog_format=row
binlog_rows_query_log_events=on
binlog_checksum=none
slave-parallel-type=LOGICAL_CLOCK
slave-parallel-workers=4
slave_preserve_commit_order=1
#enable global transaction IDs (GTID)
gtid_mode = on
enforce_gtid_consistency = 1
log-slave-updates = 1
binlog_gtid_simple_recovery=1

relay_log = /data01/mysql/log/3306/relaylog/mysql-relay.log
relay-log-index = /data01/mysql/log/3306/relaylog/mysql-relay.index
master_info_repository=table
relay_log_info_repository=table
#MGR parameters
plugin_load="group_replication=group_replication.so"
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=off
#this node's IP address and port; note this is the port group members use for internal communication, not the port MySQL serves clients on
loose-group_replication_local_address= "10.123.1.24:33006"
loose-group_replication_group_seeds= "10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006"
loose-group_replication_bootstrap_group= off
group_replication_single_primary_mode=off
group_replication_enforce_update_everywhere_checks=on

After editing the configuration, restart the MySQL service:
sh /data01/mysql/data/3306/3306mysql.sh restart
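After the restart it is worth confirming that the MGR-related settings were actually picked up. A minimal check, run in a mysql client on the new node (illustrative):

```sql
-- Verify GTID and Group Replication settings after the restart
select @@gtid_mode, @@enforce_gtid_consistency, @@binlog_format;
show variables like 'group_replication%';
```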

3. Join the cluster
3.1 #Create the replication user (do not record these statements in the binary log: disable binary logging first, create the user, then re-enable logging)
set sql_log_bin=0;
create user repmgr@'%' identified by 'repmgr';
grant replication slave,replication client on *.* to repmgr@'%';
create user repmgr@'127.0.0.1' identified by 'repmgr';
grant replication slave,replication client on *.* to repmgr@'127.0.0.1';
create user repmgr@'localhost' identified by 'repmgr';
grant replication slave,replication client on *.* to repmgr@'localhost';
flush privileges;
set sql_log_bin=1;
set global group_replication_recovery_get_public_key=on;
3.2 #Configure the account used by the recovery channel
change master to
master_user='repmgr',
master_password='repmgr'
for channel 'group_replication_recovery';
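On MySQL 8.0.23 and later, the same credentials can be set with the newer CHANGE REPLICATION SOURCE TO syntax, equivalent to the CHANGE MASTER TO statement above:

```sql
change replication source to
source_user='repmgr',
source_password='repmgr'
for channel 'group_replication_recovery';
```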
3.3 Install the MySQL Group Replication plugin (already handled by plugin_load in the configuration above, so it does not need to be executed again):
--install plugin group_replication soname 'group_replication.so';
#check whether the plugin is active
show plugins;
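The plugin state can also be queried directly instead of scanning the full SHOW PLUGINS output (illustrative):

```sql
select plugin_name, plugin_status
from information_schema.plugins
where plugin_name = 'group_replication';
```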
3.4 Update the value of group_replication_group_seeds
On 10.123.1.21 run:
set global group_replication_group_seeds="10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006";
On 10.123.1.22 run:
set global group_replication_group_seeds="10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006";
On 10.123.1.23 run:
set global group_replication_group_seeds="10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006";

Also persist the value in my.cnf so it takes effect automatically on the next MySQL restart.
10.123.1.21:
loose-group_replication_group_seeds= "10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006"
10.123.1.22:
loose-group_replication_group_seeds= "10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006"
10.123.1.23:
loose-group_replication_group_seeds= "10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006"


3.5 #Join the replication group created earlier
start group_replication;
#check the status of the group members
select * from performance_schema.replication_group_members;
select @@read_only, @@super_read_only;
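A quick way to confirm the join succeeded is to count ONLINE members; with the fourth node added, this should return 4 (illustrative):

```sql
select count(*) as online_members
from performance_schema.replication_group_members
where member_state = 'ONLINE';
```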




Environment 2: binlogs missing (purged)

Join a new node directly.
1. Update IPs and hostnames (the existing nodes also need the new node's IP and hostname added):
vi /etc/hosts

10.123.1.21 cwdb1
10.123.1.22 cwdb2
10.123.1.23 cwdb3
10.123.1.24 cwdb4

2. Edit the configuration file on 10.123.1.24:
vi /data01/mysql/data/3306/my.cnf

#bind_address= 0.0.0.0
#enable binlog with row-based replication, and disable binlog checksum
log_bin=/data01/mysql/log/3306/binlog/mysql-binlog
log_bin_index=/data01/mysql/log/3306/binlog/mysql-binlog.index
binlog_format=row
binlog_rows_query_log_events=on
binlog_checksum=none
slave-parallel-type=LOGICAL_CLOCK
slave-parallel-workers=4
slave_preserve_commit_order=1
#enable global transaction IDs (GTID)
gtid_mode = on
enforce_gtid_consistency = 1
log-slave-updates = 1
binlog_gtid_simple_recovery=1

relay_log = /data01/mysql/log/3306/relaylog/mysql-relay.log
relay-log-index = /data01/mysql/log/3306/relaylog/mysql-relay.index
master_info_repository=table
relay_log_info_repository=table
#MGR parameters
plugin_load="group_replication=group_replication.so"
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=off
#this node's IP address and port; note this is the port group members use for internal communication, not the port MySQL serves clients on
loose-group_replication_local_address= "10.123.1.24:33006"
loose-group_replication_group_seeds= "10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006"
loose-group_replication_bootstrap_group= off
group_replication_single_primary_mode=off
group_replication_enforce_update_everywhere_checks=on

After editing the configuration, restart the MySQL service:
sh /data01/mysql/data/3306/3306mysql.sh restart

3. Restore the data from a backup
Back up on the primary and copy the dump to the new node:
cd /data01/mysql/backup
mysqldump -uroot -prootroot --single-transaction --master-data=2 --flush-logs --flush-privileges --events --routines --all-databases >all_backup.sql
On the new node:
scp 10.123.1.21:/data01/mysql/backup/all_backup.sql /data01/mysql/backup/
mysql -uroot -prootroot -e 'reset master;'
mysql -uroot -prootroot
source /data01/mysql/backup/all_backup.sql
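Because GTIDs are enabled, mysqldump records a SET @@GLOBAL.GTID_PURGED statement in the dump header; that is why reset master is run first, since setting GTID_PURGED normally requires an empty gtid_executed on the target server. After the import, the restored GTID state can be checked (illustrative):

```sql
-- Should match the GTID_PURGED value recorded in the dump header
select @@global.gtid_executed;
```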

4. Join the cluster
Note: because the dump covered all databases (including the mysql system schema), step 4.1 can be skipped.
4.1 #Create the replication user (do not record these statements in the binary log: disable binary logging first, create the user, then re-enable logging)
set sql_log_bin=0;
create user repmgr@'%' identified by 'repmgr';
grant replication slave,replication client on *.* to repmgr@'%';
create user repmgr@'127.0.0.1' identified by 'repmgr';
grant replication slave,replication client on *.* to repmgr@'127.0.0.1';
create user repmgr@'localhost' identified by 'repmgr';
grant replication slave,replication client on *.* to repmgr@'localhost';
flush privileges;
set sql_log_bin=1;
set global group_replication_recovery_get_public_key=on;
4.2 #Configure the account used by the recovery channel
change master to
master_user='repmgr',
master_password='repmgr'
for channel 'group_replication_recovery';
4.3 Install the MySQL Group Replication plugin (already handled by plugin_load in the configuration above, so it does not need to be executed again):
--install plugin group_replication soname 'group_replication.so';
#check whether the plugin is active
show plugins;
4.4 Update the value of group_replication_group_seeds
On 10.123.1.21 run:
set global group_replication_group_seeds="10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006";
On 10.123.1.22 run:
set global group_replication_group_seeds="10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006";
On 10.123.1.23 run:
set global group_replication_group_seeds="10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006";

Also persist the value in my.cnf so it takes effect automatically on the next MySQL restart.
10.123.1.21:
loose-group_replication_group_seeds= "10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006"
10.123.1.22:
loose-group_replication_group_seeds= "10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006"
10.123.1.23:
loose-group_replication_group_seeds= "10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006,10.123.1.24:33006"


4.5 #Join the replication group created earlier
start group_replication;
#check the status of the group members
select * from performance_schema.replication_group_members;
select @@read_only, @@super_read_only;

Note: why is MEMBER_STATE first RECOVERING and only later ONLINE?
While the node was joining, the other members inserted or changed a few rows, and node 4 was still recovering that data from them, so its state was RECOVERING.
Once node 4's data is consistent with the other members, the state becomes ONLINE.
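While a member is RECOVERING, the progress of distributed recovery can be watched from performance_schema; the queue drains to zero as the joiner catches up (illustrative, MySQL 8.0 column names):

```sql
select member_id, count_transactions_in_queue, count_transactions_checked
from performance_schema.replication_group_member_stats;
```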

Run a query on node 4 to verify:
select * from ceshidb.ceshi1;

At this point all nodes hold identical data and the fourth node has been added successfully.


Removing a Node from MGR


1. Stop Group Replication on 10.123.1.24:
stop group_replication;  -- once replication is stopped, the node is removed from the group membership
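On any of the surviving members, the membership view can be checked to confirm the group is back to three nodes; 10.123.1.24 should no longer be listed (illustrative):

```sql
select member_host, member_state
from performance_schema.replication_group_members;
```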

2. On the remaining nodes, update the value of group_replication_group_seeds
On 10.123.1.21 run:
set global group_replication_group_seeds="10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006";
On 10.123.1.22 run:
set global group_replication_group_seeds="10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006";
On 10.123.1.23 run:
set global group_replication_group_seeds="10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006";

Also persist the value in my.cnf so it takes effect automatically on the next MySQL restart.
10.123.1.21:
loose-group_replication_group_seeds= "10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006"
10.123.1.22:
loose-group_replication_group_seeds= "10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006"
10.123.1.23:
loose-group_replication_group_seeds= "10.123.1.21:33006,10.123.1.22:33006,10.123.1.23:33006"

3. Fully clear the group information on node 10.123.1.24:
set global group_replication_group_seeds="";
set global group_replication_local_address="";
Write the corresponding values to my.cnf as well, or simply comment out these two parameters.

4. Drop the replication account and its privileges, then remove the plugin
set global read_only=0;
set sql_log_bin=0;
drop user repmgr@'%';
drop user repmgr@'127.0.0.1';
drop user repmgr@'localhost';
set sql_log_bin=1;

uninstall plugin group_replication;
show plugins;
exit;

reboot

Comments and suggestions are welcome. Given the limits of my own ability, the article may contain omissions; your understanding is appreciated. The approach and steps here can be applied to production, but please verify them in a test environment first. Wishing everyone a million-a-year salary!

Original article: https://juejin.cn/post/7113883150647820324
