Analysis of a standby database crash caused by kernel parameters

A few days ago I finished setting up the standby database. Since syncing the data files took quite a while, I focused on getting the standby to receive archived logs properly, and the kernel parameters were basically left unset; I just used a script to work out a HugePages value and changed it directly. At the time the database log indeed showed no sign that HugePages had been enabled, but since it did not seem to hurt the standby's performance much, I did not take it seriously.
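A script of that kind is usually a variation on the well-known hugepages_settings.sh idea: add up the instance's shared memory segments and divide by the huge page size. A minimal sketch (it assumes the instance is already running, so its segments show up in ipcs -m, and that column 5 of ipcs -m is the segment size in bytes):

#!/bin/bash
# Estimate vm.nr_hugepages from the shared memory segments currently allocated
HPG_SZ=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')   # huge page size in KB, typically 2048
NUM_PG=0
for SEG_BYTES in $(ipcs -m | awk '{print $5}' | grep "[0-9][0-9]*"); do
    MIN_PG=$(echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q)         # pages needed for this segment
    [ "$MIN_PG" -gt 0 ] && NUM_PG=$(echo "$NUM_PG+$MIN_PG+1" | bc -q)
done
echo "Recommended setting: vm.nr_hugepages = $NUM_PG"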
Then, in the morning, the first thing I got was an alert email.

ZABBIX monitoring system:
------------------------------------
Alert: DG_issue
------------------------------------
Severity: PROBLEM
------------------------------------
Item: dg_issue
2015-11-19 10:28:05.0  Log Transport Services  Error  Error 1034 received logging on to the standby
2015-11-19 10:28:05.0  Log Transport Services  Error  PING[ARC0]: Heartbeat failed to connect to standby 'sstatdb2'. Error is 1034.
------------------------------------
Alert time: 2015.11.19-10:23:39
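The item text looks as if it comes straight from v$dataguard_status on the primary. The actual query behind the dg_issue item is not shown in this post, but a monitoring item of this kind typically runs something along these lines (a sketch; the one-hour window and severity filter are assumptions):

#!/bin/bash
# Sketch of the kind of check a Zabbix dg_issue item might run on the primary
sqlplus -s / as sysdba <<'EOF'
set heading off feedback off
select timestamp, facility, severity, message
  from v$dataguard_status
 where severity in ('Error', 'Fatal')
   and timestamp > sysdate - 1/24;
EOF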

At first I did not think much of it and did not pay attention, but after a while the alerts started coming in one after another. At that point it felt like something was genuinely wrong, and what really surprised me was that the standby database had gone down by itself.
That caught me off guard. High swap contention I could have lived with, since the performance impact on a standby is not that big, but why would it crash on its own?
I checked the alert log and found that reminders about the memlock limit being too small had already been showing up since the previous day.

Stopping background process VKTM
Wed Nov 18 11:38:00 2015
Instance shutdown complete
Wed Nov 18 11:38:03 2015
Starting ORACLE instance (normal)
Memlock limit too small: 65536 to accommodate segment size: 134217728
Wed Nov 18 11:42:17 2015
Starting ORACLE instance (normal)
Memlock limit too small: 65536 to accommodate segment size: 134217728
Wed Nov 18 11:50:11 2015
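The 65536 in that message matches the default 64 KB locked-memory limit, nowhere near the 128 MB segment Oracle wants to lock. A quick way to confirm what the oracle OS user is actually allowed to lock (the output shown is just the typical default, not taken from this host):

# Locked-memory limit (in KB) for the oracle OS user
$ su - oracle -c 'ulimit -l'
64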

Looking at the log around the time of the crash, the instance was terminated because of error 471.
Archived Log entry 344 added for thread 1 sequence 149889 ID 0x9bd5d62a dest 1:
Thu Nov 19 10:17:58 2015
PMON (ospid: 28529): terminating the instance due to error 471
Thu Nov 19 10:17:58 2015
System state dump requested by (instance=1, osid=28547 (DIAG)), summary=[abnormal instance termination].
System State dumped to trace file /U01/app/oracle/diag/rdbms/sstatdb2/statdb2/trace/statdb2_diag_28547.trc
Dumping diagnostic data in directory=[cdmp_20151119101758], requested by (instance=1, osid=28547 (DIAG)), summary=[abnormal instance termination].
Instance terminated by PMON, pid = 28529
Thu Nov 19 10:26:33 2015
Starting ORACLE instance (normal)
Memlock limit too small: 65536 to accommodate segment size: 134217728
So what kind of error is 00471? It turns out to be related to DBWR.
$ oerr ora 00471
00471, 00000, "DBWR process terminated with error"
// *Cause:  The database writer process died
// *Action: Warm start instance
This instance actually has only two DBWR processes configured.
SQL> show parameter db_writer_proce
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_writer_processes                  integer     2
Checking the database startup log, 28529 is indeed the OS process id of PMON, so if DBWR was the process that ran into trouble, it was most likely PMON that then pulled the trigger.
Wed Nov 18 11:53:11 2015
PMON started with pid=2, OS id=28529
Wed Nov 18 11:53:11 2015
PSP0 started with pid=3, OS id=28533
Wed Nov 18 11:53:12 2015
And once PMON stops, the whole instance goes down with it.
The operating system log adds more detail here. Right at the time of the problem the oom-killer was invoked; on Linux this mechanism kicks in when the system runs out of memory and uses its own scoring algorithm to selectively kill processes. That is how DBWR and PMON dragged each other down until both were gone.
Nov 19 10:14:12 tc_154_11 SAFE-BASEINFO[2169]: [INFO] 2015/11/19 10:14:12 processAbnormal: [N/A]
Nov 19 10:17:55 tc_154_11 kernel: oracle invoked oom-killer: gfp_mask=0x200da, order=0, oom_adj=0, oom_score_adj=0
Nov 19 10:17:55 tc_154_11 kernel: oracle cpuset=/ mems_allowed=0-1
Nov 19 10:17:55 tc_154_11 kernel: Pid: 28833, comm: oracle Not tainted 2.6.32-431.el6.x86_64 #1
Nov 19 10:17:55 tc_154_11 kernel: Call Trace:
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff810d05b1>] ? cpuset_print_task_mems_allowed+0x91/0xb0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81122960>] ? dump_header+0x90/0x1b0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8122798c>] ? security_real_capable_noaudit+0x3c/0x70
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81122de2>] ? oom_kill_process+0x82/0x2a0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81122d21>] ? select_bad_process+0xe1/0x120
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81123220>] ? out_of_memory+0x220/0x3c0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8112fb3c>] ? __alloc_pages_nodemask+0x8ac/0x8d0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811651c9>] ? alloc_page_interleave+0x39/0x90
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81167c46>] ? alloc_pages_vma+0x146/0x150
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8113d14f>] ? shmem_alloc_page+0x5f/0x70
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8117c805>] ? mem_cgroup_charge_common+0xa5/0xd0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8111f73e>] ? find_get_page+0x1e/0xa0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8112098a>] ? find_lock_page+0x2a/0x80
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8113ef67>] ? shmem_getpage_gfp+0x2a7/0x610
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8113f3ab>] ? shmem_fault+0x4b/0x70
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8121777e>] ? shm_fault+0x1e/0x20
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8114a084>] ? __do_fault+0x54/0x530
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8114a657>] ? handle_pte_fault+0xf7/0xb00
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8127fe09>] ? cfq_set_request+0x329/0x560
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811220e5>] ? mempool_alloc_slab+0x15/0x20
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81122283>] ? mempool_alloc+0x63/0x140
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8137ffd0>] ? scsi_done+0x0/0x60
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8114b28a>] ? handle_mm_fault+0x22a/0x300
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8114b48a>] ? __get_user_pages+0x12a/0x430
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8114b829>] ? get_user_pages+0x49/0x50
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8104f717>] ? get_user_pages_fast+0x157/0x1c0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811c78a5>] ? do_direct_IO+0x9f5/0xfa0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811c8196>] ? __blockdev_direct_IO_newtrunc+0x346/0x1270
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff810a70a1>] ? ktime_get_ts+0xb1/0xf0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffffa008e360>] ? ext4_get_block+0x0/0x120 [ext4]
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8115e91b>] ? swap_free+0x2b/0x40
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811c9137>] ? __blockdev_direct_IO+0x77/0xe0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffffa008e360>] ? ext4_get_block+0x0/0x120 [ext4]
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8111f73e>] ? find_get_page+0x1e/0xa0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffffa008d10a>] ? ext4_ind_direct_IO+0xba/0x250 [ext4]
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffffa008e360>] ? ext4_get_block+0x0/0x120 [ext4]
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffffa008d2f6>] ? ext4_direct_IO+0x56/0x270 [ext4]
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811a50f1>] ? touch_atime+0x71/0x1a0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811217cb>] ? generic_file_aio_read+0x6bb/0x700
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8108412c>] ? lock_timer_base+0x3c/0x70
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81121110>] ? generic_file_aio_read+0x0/0x700
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811d4f64>] ? aio_rw_vect_retry+0x84/0x200
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811d6924>] ? aio_run_iocb+0x64/0x170
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811d7d51>] ? do_io_submit+0x291/0x920
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811d83f0>] ? sys_io_submit+0x10/0x20
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b

The call stack shows that this was indeed a memory-pressure problem. But why was memory so tight? The kernel dump shows that swap was completely exhausted, without a single byte left.
Nov 19 10:17:55 tc_154_11 kernel: Node 1 Normal free:36600kB min:36636kB low:45792kB high:54952kB active_anon:3824492kB inactive_anon:637348kB active_file:1576kB inactive_file:4012kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:13445120kB mlocked:0kB dirty:0kB writeback:0kB mapped:1922560kB shmem:4162084kB slab_reclaimable:17076kB slab_unreclaimable:33408kB kernel_stack:848kB pagetables:68944kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:5460 all_unreclaimable? yes
Nov 19 10:17:55 tc_154_11 kernel: lowmem_reserve[]: 0 0 0 0
Nov 19 10:17:55 tc_154_11 kernel: Node 0 Normal: 409*4kB 346*8kB 248*16kB 202*32kB 141*64kB 64*128kB 19*256kB 1*512kB 7*1024kB 0*2048kB 0*4096kB = 44596kB
Nov 19 10:17:55 tc_154_11 kernel: Node 1 DMA: 2*4kB 2*8kB 1*16kB 1*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15624kB
Nov 19 10:17:55 tc_154_11 kernel: Node 1 DMA32: 486*4kB 375*8kB 338*16kB 304*32kB 203*64kB 116*128kB 46*256kB 2*512kB 0*1024kB 0*2048kB 0*4096kB = 60720kB
Nov 19 10:17:55 tc_154_11 kernel: Node 1 Normal: 172*4kB 252*8kB 224*16kB 189*32kB 97*64kB 48*128kB 45*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 36208kB
Nov 19 10:17:55 tc_154_11 kernel: 2697804 total pagecache pages
Nov 19 10:17:55 tc_154_11 kernel: 1620 pages in swap cache
Nov 19 10:17:55 tc_154_11 kernel: Swap cache stats: add 7640245, delete 7638625, find 2745828/3185908
Nov 19 10:17:55 tc_154_11 kernel: Free swap  = 0kB
Nov 19 10:17:55 tc_154_11 kernel: Total swap = 4194296kB
Nov 19 10:17:55 tc_154_11 kernel: 8388607 pages RAM
Nov 19 10:17:55 tc_154_11 kernel: 172490 pages reserved
Nov 19 10:17:55 tc_154_11 kernel: 4183499 pages shared
Nov 19 10:17:55 tc_154_11 kernel: 1183690 pages non-shared
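For this kind of post-mortem, the oom-killer activity and the host's memory/swap picture can be pulled together with a couple of commands; a rough sketch (the log path is the RHEL 6 default):

# Which processes the oom-killer went after, and when
grep -iE 'oom-killer|Out of memory' /var/log/messages

# Current memory, swap and HugePages state
free -m
grep -iE '^(Huge|Swap)' /proc/meminfo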
So this indirectly confirms that the HugePages configuration had gone wrong, and that the memlock parameter contributed as well: with the SGA not backed by huge pages it remained swappable, swap contention kept climbing until swap ran out, the oom-killer's cleanup algorithm kicked in, and the instance was taken down.
The fix is correspondingly simple: first set memlock to the recommended value, which the alert log itself suggests.
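On Linux that means raising the oracle user's memlock limit in /etc/security/limits.conf so it covers the whole HugePages-backed SGA, and making sure enough huge pages are reserved. A sketch with the numbers taken from the Large Pages output below (10245 pages of 2 MB, roughly 20 GB); the exact memlock value should follow the alert log's recommendation, and a fresh logon plus instance restart is needed for it to take effect:

# /etc/security/limits.conf -- memlock is in KB; 10245 * 2048 KB covers the 20 GB SGA
oracle   soft   memlock   20981760
oracle   hard   memlock   20981760

# /etc/sysctl.conf -- reserve the huge pages themselves (apply with sysctl -p)
vm.nr_hugepages = 10245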
After restarting the database, the following shows up in the alert log.
Instance shutdown complete
Thu Nov 19 10:30:40 2015
Starting ORACLE instance (normal)
****************** Large Pages Information *****************
 
Total Shared Global Region in Large Pages = 20 GB (100%)
 
Large Pages used by this instance: 10241 (20 GB)
Large Pages unused system wide = 4 (8192 KB) (alloc incr 64 MB)
Large Pages configured system wide = 10245 (20 GB)
Large Page size = 2048 KB
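This can also be cross-checked from the OS side; the HugePages counters in /proc/meminfo should line up with the figures in the alert log output above:

# HugePages_Total, HugePages_Free and HugePages_Rsvd should match the alert log figures
grep Huge /proc/meminfo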
So these seemingly small issues should not be taken lightly; they can end up having a very large impact.