Updated: 2022-09-07 18:30:16
Split each day's slow query log by database
Analyze the slow query log of each database separately
| Tool | Basic statistics | Advanced statistics | Script language | Strengths |
| --- | --- | --- | --- | --- |
| mysqldumpslow | Supported | Not supported | perl | Ships with MySQL |
| mysqlsla | Supported | Supported | perl | Powerful, complete reporting, highly customizable |
| mysql-explain-slow-log | Supported | Not supported | perl | None |
| mysql-log-filter | Supported | Partially supported | python or php | Concise output without sacrificing functionality |
| myprofi | Supported | Not supported | php | Very lightweight |
Of the tools Google turned up, mysqlsla's help text suggests that a `-db` option can filter by database, but I could not get the result I wanted; maybe I was just holding it wrong. In the end I decided to roll my own, rough as it is.
Splitting tool: a Python script
Slow-query-log analysis tool: mysqlsla
Log format:
# Time: 150331 22:23:48
# User@Host: test[test] @ [192.168.1.200]
# Thread_id: 251049087 Schema: zhuima Last_errno: 0 Killed: 0
# Query_time: 4.581437 Lock_time: 0.000072 Rows_sent: 0 Rows_examined: 741 Rows_affected: 0 Rows_read: 740
# Bytes_sent: 89 Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
use zhuima;
SET timestamp=1427811828;
SELECT cid FROM user_point_list where uid = '473781' and (cid = 'ni' or cid = 'ac' or cid = 'qq' or cid = 'de' or cid = 'ec' ) limit 0,10;
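The `Schema:` field in each entry header is what the split keys on. A minimal sketch of pulling it out (the single capture-group regex here is my own shorthand; the script below does it in two steps):

```python
import re

# One header line from the sample entry above; "Schema: <db>" names the database.
header = "# Thread_id: 251049087 Schema: zhuima Last_errno: 0 Killed: 0"

match = re.search(r'Schema:\s+(\w+)', header)
if match:
    print(match.group(1))  # -> zhuima
```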
The Python script:
Usage: python script_name slow.log_name
[root@mysql01 test]# cat cut_slog.py
#!/usr/bin/env python
# coding: utf-8
# Author: zhuima
# QQ: 993182876
'''
Purpose: split the daily slow query log into per-database files.
Caveat:
1. The whole slow log is read into memory, so a very large slow.log
   may make the script appear to hang.
'''
import time
import re
import sys


def get_file_content(fname):
    '''Read the whole file into memory and return its contents.'''
    with open(fname) as fp:
        return fp.read()


def get_dbname():
    '''
    1. Collect every "Schema: <name>" string in the log.
    2. Extract the database name from each match.
    3. Deduplicate the list.
    4. Return the names of all databases present in the log.
    '''
    db_key = re.findall(r'Schema: \w+', content)
    dbname_list = []
    for db in db_key:
        match_db_key = re.search(r'\s\w+', db)
        if match_db_key:
            dbname_list.append(match_db_key.group().strip())
    return list(set(dbname_list))


def cut_slog_bydbname():
    '''Rewrite the log into one file per database name.'''
    dblist = get_dbname()
    content_list = content.split(';\n#')
    for i in range(len(dblist)):
        db_name = dblist[i]
        dblist[i] = [x for x in content_list if db_name in x]
        for n in range(len(dblist[i])):
            # Splitting on ';\n#' strips the leading '#' and trailing ';'
            # from the inner chunks; restore them so each entry stays intact.
            if not dblist[i][n].startswith('#'):
                dblist[i][n] = '#' + dblist[i][n]
            if not dblist[i][n].rstrip().endswith(';'):
                dblist[i][n] = dblist[i][n] + ';'
        with open(db_name, 'w') as new_fp:
            new_fp.write('\n'.join(dblist[i]))


def main():
    '''Report how long the script took to run.'''
    start_time = time.time()
    cut_slog_bydbname()
    take_time = time.time() - start_time
    print('Running This Script Take %s Time' % take_time)


if __name__ == '__main__':
    fname = sys.argv[1]
    content = get_file_content(fname)
    main()
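As the script's own caveat notes, reading the whole log into memory can stall on very large files. A streaming variant that appends each entry to its per-database file as it is read avoids that. This sketch is my own, not from the original post: it assumes every entry begins with a `# Time:` line (true of the sample above, but some slow logs omit repeated timestamps), and the `<schema>.slow.log` output naming is an arbitrary choice:

```python
import re

def split_slow_log(fname):
    '''Stream a slow log, appending each entry to a per-schema file,
    so only one entry is held in memory at a time.'''
    files = {}                 # schema name -> open file handle
    entry, schema = [], None

    def flush():
        # Write the buffered entry to its schema's file, if known.
        if schema and entry:
            if schema not in files:
                # Assumed naming convention: <schema>.slow.log
                files[schema] = open(schema + '.slow.log', 'w')
            files[schema].writelines(entry)

    with open(fname) as fp:
        for line in fp:
            # "# Time:" starts a new entry in this log format (assumption:
            # every entry carries its own timestamp line).
            if line.startswith('# Time:'):
                flush()
                entry, schema = [], None
            m = re.search(r'Schema:\s+(\w+)', line)
            if m:
                schema = m.group(1)
            entry.append(line)
        flush()
    for f in files.values():
        f.close()
```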
Splitting a 170 MB file, and the time it took (screenshot in the original post):
Inspecting the result with mysqlsla (screenshot in the original post):
That said, the newer pt-query-digest can actually handle part of this job on its own, for example analyzing per database.
Talk is cheap, so here is something concrete. Install the tool first:
yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
Use it:
[root@mysql01 test]# pt-query-digest --filter '$event->{db} =~ m/app_zhuima/ and $event->{arg} =~ m/^select/i' --order-by max --limit=10 s.log
s.log:  36% 00:52 remain
s.log:  75% 00:19 remain
--order-by attribute max doesn't exist, using Query_time:sum

# 78s user time, 540ms system time, 41.20M rss, 223.39M vsz
# Current date: Fri Apr  3 12:11:02 2015
# Hostname: mysql01.opsdev.cn
# Files: s.log
# Overall: 101.75k total, 99 unique, 0.16 QPS, 0.02x concurrency _________
# Time range: 2015-03-16 09:10:59 to 2015-03-23 18:03:22
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time         12085s   230us      5s   119ms   253ms   123ms    53ms
# Lock time            26s    52us      2s   255us   224us    11ms   119us
# Rows sent        477.36k       0   2.92k    4.80   17.65   71.55       0
# Rows examine       1.77G       0  36.01k  18.23k  33.17k  12.57k  19.40k
# Rows affecte           0       0       0       0       0       0       0
# Rows read        122.33M       0  18.75k   1.23k   7.31k   2.85k    1.96
# Bytes sent       404.76M      60  70.40k   4.07k  28.66k   8.92k  246.02
# Tmp tables        22.58k       0       1    0.23    0.99    0.42       0
# Tmp disk tbl       1.12k       0       1    0.01       0    0.10       0
# Tmp tbl size      25.78G       0  25.25M 265.70k       0   2.11M       0
# Query size        19.09M      46  20.21k  196.76  271.23  233.92  174.84

# Profile
# Rank Query ID           Response time   Calls R/Call V/M   Item
# ==== ================== =============== ===== ====== ===== =============
#    1 0xDF65C6303461AC09  551.9743  4.6% 10798 0.0511  0.01 SELECT apk_index apk_content?
#    2 0xB12492DDCD79AC3C    8.1966  0.1% 10612 0.0008  0.01 SELECT apk_index apk_content?
#    3 0x132FC5C1DD41AB40    8.5364  0.1% 10612 0.0008  0.10 SELECT apk_index apk_content?
#    4 0xE5745B6DB0863C99 2156.1381 17.8%  8986 0.2399  0.02 SELECT apk_index apk_content?
#    5 0xF4F4AC9E9B2C9707 1186.5799  9.8%  6982 0.1699  0.04 SELECT apk_index apk_content?
#    6 0xFA2AFA52C427310A 1572.7090 13.0%  6554 0.2400  0.01 SELECT apk_index apk_content?
#    7 0x46E8908EFD5250D1 1017.2819  8.4%  5370 0.1894  0.03 SELECT apk_index apk_content?
#    8 0x1B0924852232D745  196.3949  1.6%  4129 0.0476  0.01 SELECT apk_index apk_content?
#    9 0xF7A7B1118F17CC05  716.8696  5.9%  3014 0.2378  0.01 SELECT apk_index apk_content?
#   10 0xC4422E67F9F74B81  140.8687  1.2%  2958 0.0476  0.00 SELECT apk_index apk_content?
#   80 0xF1F228E747F8E739   62.4422  0.5%    19 3.2864  0.31 SELECT apk_tid
# MISC 0xMISC             4466.9640 37.0% 31721 0.1408   0.0 <88 ITEMS>

# Query 1: 0.02 QPS, 0.00x concurrency, ID 0xDF65C6303461AC09 at byte 4659
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.01
# Time range: 2015-03-16 09:11:01 to 2015-03-23 17:57:15
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         10   10798
# Exec time      4    552s     2ms   935ms    51ms    65ms    20ms    51ms
# Lock time      5      1s    75us     5ms   137us   194us   110us   113us
# Rows sent      2  11.72k       0      18    1.11   17.65    4.25       0
# Rows examine  11 202.50M      36  20.49k  19.20k  20.37k   4.85k  20.37k
# Rows affecte   0       0       0       0       0       0       0       0
# Rows read      0  17.73k       0   7.62k    1.68    0.99   72.02    0.99
# Bytes sent     9  37.17M   1.64k  54.05k   3.53k  30.09k   7.17k   1.61k
# Tmp tables     0       0       0       0       0       0       0       0
# Tmp disk tbl   0       0       0       0       0       0       0       0
# Tmp tbl size   0       0       0       0       0       0       0       0
# Query size     9   1.82M     172     178  176.50  174.84    2.02  174.84
# String:
# Databases    app_zhuima
# Hosts
# Last errno   0
# Users        app_zhuima
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms  ###
#  10ms  ################################################################
# 100ms  #
#    1s
#  10s+
# Tables
#    SHOW TABLE STATUS FROM `app_zhuima` LIKE 'apk_xindex'\G
#    SHOW CREATE TABLE `app_zhuima`.`apk_xindex`\G
#    SHOW TABLE STATUS FROM `app_zhuima` LIKE 'apk_xx'\G
#    SHOW CREATE TABLE `app_zhuima`.`apk_xx`\G
# EXPLAIN /*!50100 PARTITIONS*/
select a.title,a.postdate,a.url,b.* from apk_index a,apk_xx b where a.tid=b.tid and a.ifpub=1 and a.publisher != 'adminzhuima' order by postdate desc limit 1294510,18\G
This article is reposted from lovelace521's 51CTO blog. Original: http://blog.51cto.com/lovelace/1627961. For reprint permission, please contact the original author.