
Optimizing a 100-Million-Row MySQL Table in Practice

码咖 2020-03-21


https://www.cnblogs.com/YangJiaXin/p/10828244.html

Contents

  • Background

  • Analysis

    • The select xxx_record statement

    • The delete xxx_record statement

  • Testing

  • Implementation

  • After index optimization

  • Turning the large delete into small-batch deletes

  • Summary


Background

Instance XX (one master, one slave) triggers an SLA alert in xxx every day in the small hours. The alert means there is noticeable master-slave replication lag (if a failover happened at that moment, the switch would take a long time to complete, because the lag has to be caught up first to keep master and slave data consistent).

Instance XX also logs the most slow queries (any SQL running longer than 1s is recorded), and the XX application runs a job every night that deletes data older than one month.

Analysis

Analyze the past week's mysql-slow.log with pt-query-digest:

  pt-query-digest --since=148h mysql-slow.log | less

Part 1 of the output

Over the past week the recorded slow queries spent a total of 25,403s executing; the single slowest SQL took 266s; the average slow query ran for 5s and scanned 17.66 million rows on average.

Part 2 of the output

select arrival_record accounts for by far the most slow-query entries, more than 40,000, with an average response time of 4s; delete arrival_record was recorded 6 times, with an average response time of 258s.

The select xxx_record statement

The slow select arrival_record statements all look like the one below; the columns in the where clause are the same, only the parameter values differ:

  select count(*) from arrival_record where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0\G

At worst this select scanned 56 million rows in MySQL, and 1.72 million rows on average; the long execution time is presumably caused by the sheer number of rows scanned.

Check the execution plan:

  explain select count(*) from arrival_record where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0\G
  *************************** 1. row ***************************
             id: 1
    select_type: SIMPLE
          table: arrival_record
     partitions: NULL
           type: ref
  possible_keys: IXFK_arrival_record
            key: IXFK_arrival_record
        key_len: 8
            ref: const
           rows: 32261320
       filtered: 3.70
          Extra: Using index condition; Using where
  1 row in set, 1 warning (0.00 sec)

The query does use the IXFK_arrival_record index, but the estimated row count is still huge: more than 30 million rows.

  show index from arrival_record;
  +----------------+------------+---------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
  | Table          | Non_unique | Key_name            | Seq_in_index | Column_name  | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
  +----------------+------------+---------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
  | arrival_record |          0 | PRIMARY             |            1 | id           | A         |   107990720 |     NULL | NULL   |      | BTREE      |         |               |
  | arrival_record |          1 | IXFK_arrival_record |            1 | product_id   | A         |        1344 |     NULL | NULL   |      | BTREE      |         |               |
  | arrival_record |          1 | IXFK_arrival_record |            2 | station_no   | A         |       22161 |     NULL | NULL   | YES  | BTREE      |         |               |
  | arrival_record |          1 | IXFK_arrival_record |            3 | sequence     | A         |    77233384 |     NULL | NULL   |      | BTREE      |         |               |
  | arrival_record |          1 | IXFK_arrival_record |            4 | receive_time | A         |    65854652 |     NULL | NULL   | YES  | BTREE      |         |               |
  | arrival_record |          1 | IXFK_arrival_record |            5 | arrival_time | A         |    73861904 |     NULL | NULL   | YES  | BTREE      |         |               |
  +----------------+------------+---------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+

  show create table arrival_record;
  ..........
    arrival_spend_ms bigint(20) DEFAULT NULL,
    total_spend_ms bigint(20) DEFAULT NULL,
    PRIMARY KEY (id),
    KEY IXFK_arrival_record (product_id,station_no,sequence,receive_time,arrival_time) USING BTREE,
    CONSTRAINT FK_arrival_record_product FOREIGN KEY (product_id) REFERENCES product (id) ON DELETE NO ACTION ON UPDATE NO ACTION
  ) ENGINE=InnoDB AUTO_INCREMENT=614538979 DEFAULT CHARSET=utf8 COLLATE=utf8_bin

  • The table holds roughly 110 million rows but carries only this one composite index. The product_id column has very low cardinality, so its selectivity is poor.

  • The incoming filter, where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0, contains no station_no, so it cannot make use of the product_id, station_no, sequence, receive_time columns of the composite index IXFK_arrival_record.

  • By the leftmost-prefix rule, select arrival_record uses only the first column of IXFK_arrival_record, product_id; its poor selectivity is what makes the scan huge and the execution slow.

  • The receive_time column has high cardinality and good selectivity. Building a standalone index on it would let the select arrival_record SQL use that index (see the sketch below).
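
A quick way to sanity-check that selectivity claim is to compare distinct counts directly. A minimal sketch, best run on the slave since it scans the whole table:

  -- Selectivity ≈ distinct values / total rows; near 0 is poor, near 1 is good.
  -- Expect product_id (cardinality ~1344) to come out near 0,
  -- and receive_time to come out near 1.
  SELECT COUNT(DISTINCT product_id)   / COUNT(*) AS product_id_selectivity,
         COUNT(DISTINCT receive_time) / COUNT(*) AS receive_time_selectivity
  FROM arrival_record;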


So far the slow log tells us that the select arrival_record statements filter on product_id, receive_time, and receive_spend_ms. Is the table also being queried through other columns?


Time for the trusty tcpdump to make its entrance.

Capture the select statements against this table with tcpdump for a while:

  tcpdump -i bond0 -s 0 -l -w - dst port 3316 | strings | grep select | egrep -i 'arrival_record' >/tmp/select_arri.log

Then pull out the where clause that follows from in each captured select:

  IFS_OLD=$IFS
  IFS=$'\n'
  for i in `cat /tmp/select_arri.log`; do echo ${i#*'from'}; done | less
  IFS=$IFS_OLD

  arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=17 and arrivalrec0_.station_no='56742'
  arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S7100'
  arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4631'
  arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S9466'
  arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4205'
  arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4105'
  arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4506'
  arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4617'
  arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S8356'
  arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S8356'

  • The where conditions of these selects use the product_id, station_no, and sequence columns, which can all be served by the first three columns of the composite index IXFK_arrival_record.

Putting it all together, the optimization is: drop the composite index IXFK_arrival_record, create a new composite index idx_sequence_station_no_product_id, and create a standalone index idx_receive_time.
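
Expressed as DDL, the change looks like the sketch below. The foreign key FK_arrival_record_product must be dropped in the same statement, since it depends on the leading column of the old index; the production run in the Implementation section does exactly this, though it names the new index idx_product_id_sequence_station_no with the column order (product_id, sequence, station_no). Because all three predicates are equality matches, either column order serves these queries:

  -- Sketch only; the statement actually executed appears in the Implementation section.
  ALTER TABLE arrival_record
    DROP FOREIGN KEY FK_arrival_record_product,
    DROP INDEX IXFK_arrival_record,
    ADD INDEX idx_sequence_station_no_product_id (sequence, station_no, product_id),
    ADD INDEX idx_receive_time (receive_time);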

The delete xxx_record statement

This delete scans 110 million rows on average and takes 262s on average.

The delete statement looks like this; each logged slow query passes in a different date value:

  delete from arrival_record where receive_time < STR_TO_DATE('2019-02-23', '%Y-%m-%d')\G

Execution plan:

  explain select * from arrival_record where receive_time < STR_TO_DATE('2019-02-23', '%Y-%m-%d')\G
  *************************** 1. row ***************************
             id: 1
    select_type: SIMPLE
          table: arrival_record
     partitions: NULL
           type: ALL
  possible_keys: NULL
            key: NULL
        key_len: NULL
            ref: NULL
           rows: 109501508
       filtered: 33.33
          Extra: Using where
  1 row in set, 1 warning (0.00 sec)

  • The delete uses no index (there is no suitable one available) and falls back to a full table scan, hence the long execution time.

  • The fix is the same: create the standalone index idx_receive_time (receive_time).

Testing

Copy the arrival_record table to a test instance and rehearse dropping and rebuilding the indexes there. The arrival_record table on instance XX:

  du -sh /datas/mysql/data/3316/cq_new_cimiss/arrival_record*
  12K  /datas/mysql/data/3316/cq_new_cimiss/arrival_record.frm
  48G  /datas/mysql/data/3316/cq_new_cimiss/arrival_record.ibd

  select count(*) from cq_new_cimiss.arrival_record;
  +-----------+
  | count(*)  |
  +-----------+
  | 112294946 |
  +-----------+

Over 100 million rows.

  SELECT
    table_name,
    CONCAT(FORMAT(SUM(data_length)/1024/1024,2),'M') AS dbdata_size,
    CONCAT(FORMAT(SUM(index_length)/1024/1024,2),'M') AS dbindex_size,
    CONCAT(FORMAT(SUM(data_length + index_length)/1024/1024/1024,2),'G') AS `table_size(G)`,
    AVG_ROW_LENGTH, table_rows, update_time
  FROM information_schema.tables
  WHERE table_schema = 'cq_new_cimiss' and table_name = 'arrival_record';

  +----------------+-------------+--------------+---------------+----------------+------------+---------------------+
  | table_name     | dbdata_size | dbindex_size | table_size(G) | AVG_ROW_LENGTH | table_rows | update_time         |
  +----------------+-------------+--------------+---------------+----------------+------------+---------------------+
  | arrival_record | 18,268.02M  | 13,868.05M   | 31.38G        |            175 |  109155053 | 2019-03-26 12:40:17 |
  +----------------+-------------+--------------+---------------+----------------+------------+---------------------+

The table occupies 48G on disk while MySQL reports its size as 31G, so there is roughly 17G of fragmentation, mostly left behind by deletes (the rows are gone but the space was never reclaimed).
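
A rough view of that unreclaimed space is also available straight from information_schema; a sketch (data_free counts free extents inside the tablespace, so it understates page-level fragmentation):

  -- Space allocated to the .ibd that rows no longer occupy.
  SELECT table_name,
         ROUND(data_free/1024/1024/1024, 2) AS data_free_gb
  FROM information_schema.tables
  WHERE table_schema = 'cq_new_cimiss' AND table_name = 'arrival_record';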

Back up the table, restore it into a new instance, drop the original composite index, and add the new indexes there for testing.

Back up with mydumper, in parallel and compressed:

  user=root
  passwd=xxxx
  socket=/datas/mysql/data/3316/mysqld.sock
  db=cq_new_cimiss
  table_name=arrival_record
  backupdir=/datas/dump_$table_name
  mkdir -p $backupdir

  nohup echo `date +%T` && mydumper -u $user -p $passwd -S $socket -B $db -c -T $table_name -o $backupdir -t 32 -r 2000000 && echo `date +%T` &

The parallel compressed backup took 52s and produced 1.2G of files (against 48G of actual disk usage for the table; mydumper's parallel compression ratio is remarkably high!):

  Started dump at: 2019-03-26 12:46:04
  ........
  Finished dump at: 2019-03-26 12:46:56

  du -sh /datas/dump_arrival_record/
  1.2G  /datas/dump_arrival_record/

Copy the dump to the test node:

  scp -rp /datas/dump_arrival_record root@10.230.124.19:/datas

Import the data with multiple threads:

  time myloader -u root -S /datas/mysql/data/3308/mysqld.sock -P 3308 -p root -B test -d /datas/dump_arrival_record -t 32

  real    126m42.885s
  user    1m4.543s
  sys     0m4.267s

Disk usage of the table after the logical import:

  du -h -d 1 /datas/mysql/data/3308/test/arrival_record*
  12K  /datas/mysql/data/3308/test/arrival_record.frm
  30G  /datas/mysql/data/3308/test/arrival_record.ibd

No fragmentation this time; the on-disk size matches what MySQL reports for the table.

Take a copy of the datadir so the same data can be restored before each of the two tests below:

  cp -rp /datas/mysql/data/3308 /datas


Run the index drop-and-rebuild with online DDL and with the pt-osc tool, respectively.

Drop the foreign key first: the composite index cannot be dropped while the foreign key exists, because the foreign key column is the first column of that index.

  nohup bash /tmp/ddl_index.sh &
  2019-04-04-10:41:39 begin stop mysqld_3308
  2019-04-04-10:41:41 begin rm -rf datadir and cp -rp datadir_bak
  2019-04-04-10:46:53 start mysqld_3308
  2019-04-04-10:46:59 online ddl begin
  2019-04-04-11:20:34 online ddl stop
  2019-04-04-11:20:34 begin stop mysqld_3308
  2019-04-04-11:20:36 begin rm -rf datadir and cp -rp datadir_bak
  2019-04-04-11:22:48 start mysqld_3308
  2019-04-04-11:22:53 pt-osc begin
  2019-04-04-12:19:15 pt-osc stop

Online DDL took 34 minutes and pt-osc took 57 minutes, so online DDL needs roughly half the time of the pt-osc tool.
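
For changes like this it is worth asking MySQL to fail fast if the work cannot actually be done in place. A sketch using the MySQL 5.6+ online DDL clauses (not a statement from the original run):

  -- If INPLACE/LOCK=NONE cannot be honored, the statement errors out
  -- instead of silently falling back to a blocking table copy.
  ALTER TABLE arrival_record
    ADD INDEX idx_receive_time (receive_time),
    ALGORITHM=INPLACE, LOCK=NONE;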

Implementation

Since the setup is one master and one slave and the application connects through a VIP, the index rebuild was done with online DDL. After stopping replication, run the DDL on the slave first (without writing binlog), switch master and slave, then run it again on the newly demoted slave (again without binlog).

  function red_echo () {
      local what="$*"
      echo -e "$(date +%F-%T) ${what}"
  }

  function check_las_comm(){
      if [ "$1" != "0" ]; then
          red_echo "$2"
          echo "exit 1"
          exit 1
      fi
  }

  red_echo "stop slave"
  mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e "stop slave"
  check_las_comm "$?" "stop slave failed"

  red_echo "online ddl begin"
  mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e "set sql_log_bin=0;select now() as ddl_start;ALTER TABLE $db_.\`${table_name}\` DROP FOREIGN KEY FK_arrival_record_product, drop index IXFK_arrival_record, add index idx_product_id_sequence_station_no(product_id,sequence,station_no), add index idx_receive_time(receive_time);select now() as ddl_stop" >> ${log_file} 2>&1
  red_echo "online ddl stop"

  red_echo "add foreign key"
  mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e "set sql_log_bin=0;ALTER TABLE $db_.${table_name} ADD CONSTRAINT _FK_${table_name}_product FOREIGN KEY (product_id) REFERENCES cq_new_cimiss.product (id) ON DELETE NO ACTION ON UPDATE NO ACTION;" >> ${log_file} 2>&1
  check_las_comm "$?" "add foreign key error"
  red_echo "add foreign key stop"

  red_echo "start slave"
  mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e "start slave"
  check_las_comm "$?" "start slave failed"

Execution timing:

  2019-04-08-11:17:36 stop slave
  mysql: [Warning] Using a password on the command line interface can be insecure.
  ddl_start
  2019-04-08 11:17:36
  ddl_stop
  2019-04-08 11:45:13
  2019-04-08-11:45:13 online ddl stop
  2019-04-08-11:45:13 add foreign key
  mysql: [Warning] Using a password on the command line interface can be insecure.
  2019-04-08-12:33:48 add foreign key stop
  2019-04-08-12:33:48 start slave

Dropping and rebuilding the indexes took 28 minutes; adding the foreign key constraint took another 48 minutes.

Check the execution plans of the delete and select statements again:

  explain select count(*) from arrival_record where receive_time < STR_TO_DATE('2019-03-10', '%Y-%m-%d')\G
  *************************** 1. row ***************************
             id: 1
    select_type: SIMPLE
          table: arrival_record
     partitions: NULL
           type: range
  possible_keys: idx_receive_time
            key: idx_receive_time
        key_len: 6
            ref: NULL
           rows: 7540948
       filtered: 100.00
          Extra: Using where; Using index

  explain select count(*) from arrival_record where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0\G
  *************************** 1. row ***************************
             id: 1
    select_type: SIMPLE
          table: arrival_record
     partitions: NULL
           type: range
  possible_keys: idx_product_id_sequence_station_no,idx_receive_time
            key: idx_receive_time
        key_len: 6
            ref: NULL
           rows: 291448
       filtered: 16.66
          Extra: Using index condition; Using where

Both statements now use the idx_receive_time index, and the number of rows scanned has dropped dramatically.

After index optimization

The delete still takes 77s:

  delete from arrival_record where receive_time < STR_TO_DATE('2019-03-10', '%Y-%m-%d')\G

Even going through the receive_time index, deleting 3+ million rows in one statement takes 77s.

Turning the large delete into small-batch deletes

The application side has since been changed to delete 10 minutes' worth of data per run (each run takes about 1s), and the SLA (replication-lag) alerts have stopped appearing in xxx.
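
Each application run then amounts to something like the following (a sketch; the window boundaries are illustrative, not taken from the application's code):

  -- Delete one 10-minute slice of expired data per run.
  DELETE FROM arrival_record
  WHERE receive_time >= '2019-02-23 00:00:00'
    AND receive_time <  '2019-02-23 00:10:00';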

Another approach is to delete 20,000 rows at a time, walking the table in primary-key order:

  # Get the maximum primary key id that satisfies the time condition,
  # then delete small batches in primary-key order.
  # Run the following once first:
  SELECT MAX(id) INTO @need_delete_max_id FROM `arrival_record` WHERE receive_time<'2019-03-01';
  DELETE FROM arrival_record WHERE id<@need_delete_max_id LIMIT 20000;
  select ROW_COUNT();  # returns 20000

  # Each small-batch delete is followed by ROW_COUNT(), the number of rows deleted.
  # The program checks the returned ROW_COUNT(): if it is non-zero, repeat the two
  # statements below; once it reaches 0, the deletion is complete.
  DELETE FROM arrival_record WHERE id<@need_delete_max_id LIMIT 20000;
  select ROW_COUNT();
  # The program sleeps 0.5s between batches.
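
Wrapped up as a loop, the whole routine could look like the sketch below. The article drives this loop from application code; the stored procedure and its name batch_delete_arrival_record are illustrative, not part of the original system:

  DELIMITER //
  CREATE PROCEDURE batch_delete_arrival_record()
  BEGIN
    DECLARE batch_rows INT DEFAULT 1;
    -- Anchor on the max id once, so every DELETE below is a short
    -- primary-key range scan instead of a timestamp scan.
    SELECT MAX(id) INTO @need_delete_max_id
    FROM arrival_record WHERE receive_time < '2019-03-01';
    WHILE batch_rows > 0 DO
      DELETE FROM arrival_record WHERE id < @need_delete_max_id LIMIT 20000;
      SET batch_rows = ROW_COUNT();  -- rows removed by the DELETE above
      DO SLEEP(0.5);                 -- the 0.5s pause described above
    END WHILE;
  END //
  DELIMITER ;

  CALL batch_delete_arrival_record();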

Summary

  • When a table grows this large, watch not only its query response times but also its maintenance cost (DDL changes take very long, and so does deleting historical data).

  • When running DDL against a big table, weigh the table's actual situation (such as how much concurrent traffic it handles and whether it has foreign keys) to pick the right change method.

  • When deleting large amounts of data from a big table, delete in small batches to reduce the pressure on the master instance and the replication lag.



This article is reposted from 码咖. If you suspect infringement, send an email with supporting evidence to contact@modb.pro; once verified, 墨天轮 will remove the content immediately.
