I recently ran into a somewhat odd problem. I was pressed for time during the original triage and didn't dig into it, but today I found a spare moment to look again and finally spotted the clue.
It started with an alert email that arrived in the morning:
ZABBIX monitoring system:
------------------------------------
Alert: DG_issue
------------------------------------
Severity: PROBLEM
------------------------------------
Item: dg_issue:2015-10-29 00:34:55.0 Fetch Archive Log Error FAL[server, ARC1]: Error 270 creating remote archivelog file 'ssol1'
------------------------------------
Alert time: 2015.10.29-00:36:48
Seeing this error, my first guess was still a problem on the archive transport side, perhaps an unstable network, or a disk running out of space.
I ran a quick check with the DG broker on the primary, and the broker verification failed too, so the problem clearly needed fixing right away.
DGMGRL> show configuration;
Configuration - sol_dg
Protection Mode: MaxPerformance
Databases:
sol - Primary database
Error: ORA-16778: redo transport error for one or more databases
ssol1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
ERROR
DGMGRL> exit
Fortunately I handled it calmly at the time and didn't muddle through a "fix", which could easily have backfired, more on that later.
As for the error itself, the verbose view gives much more detail, and it shows the check failing on the primary side.
DGMGRL> show database verbose sol
Database - sol
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
sol
Error: ORA-16737: the redo transport service for standby database "ssol1" has an error
Properties:
DGConnectIdentifier = 'sol'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '2'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName = 'sol'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.2.65.111)(PORT=1535))(CONNECT_DATA=(SERVICE_NAME=sol_DGMGRL)(INSTANCE_NAME=sol)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
TopWaitEvents = '(monitor)'
Database Status:
ERROR
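Beyond the broker view, the same transport error can be cross-checked from SQL on the primary. A minimal sketch against the standard dictionary view, where dest_id = 2 is an assumption matching the LOG_ARCHIVE_DEST_2 entries that show up in the alert log:

```sql
-- Status and last error of the remote archive destination on the primary.
-- dest_id = 2 is an assumption, matching LOG_ARCHIVE_DEST_2 in the alert log.
SELECT dest_id, status, error
FROM   v$archive_dest_status
WHERE  dest_id = 2;
```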
The verbose output for the standby, by contrast, all looked normal.
Meanwhile, the primary's alert log showed errors at archiving time:
Thu Oct 29 00:11:26 2015
LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (270)
LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Error 270 for archive log file 18 to 'ssol1'
Errors in file /U01/app/oracle/diag/rdbms/sol/sol/trace/sol_nsa2_9780.trc:
ORA-00270: error creating archive log
ARC1: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (270)
ARC1: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
FAL[server, ARC1]: Error 270 creating remote archivelog file 'ssol1'
FAL[server, ARC1]: FAL archive failed, see trace file.
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance sol - Archival Error. Archiver continuing.
Thu Oct 29 00:16:54 2015
ARC1: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (270)
ARC1: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
FAL[server, ARC1]: Error 270 creating remote archivelog file 'ssol1'
FAL[server, ARC1]: FAL archive failed, see trace file.
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance sol - Archival Error. Archiver continuing.
Thu Oct 29 00:16:56 2015
I had run into archiving failures before that were caused by space problems on the standby, but this time was different: a plain disk-space check turned up nothing.
[oracle@statg2 yangjr]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 7.8G 2.3G 5.2G 31% /
/dev/sda6 7.8G 4.4G 3.0G 60% /var
/dev/sda5 7.8G 2.0G 5.5G 27% /usr
/dev/sda1 122M 12M 104M 10% /boot
tmpfs 90G 276M 90G 1% /dev/shm
/dev/shm 90G 276M 90G 1% /tmp
/dev/sda7 158G 85G 65G 57% /U01
/dev/sdd1 1.7T 1.4T 204G 88% /data
/dev/sde1 1.6T 1.3T 219G 86% /other
/dev/sdf1 296G 92G 189G 33% /U01/app/oracle/oradata/sol
On the standby, the verbose output all looked normal:
DGMGRL> show database verbose ssol1
Database - ssol1
Role: PHYSICAL STANDBY
Intended State: READ-ONLY
Transport Lag: 12 minutes 26 seconds
Apply Lag: 5 days 16 hours 52 minutes 38 seconds
Real Time Query: OFF
Instance(s):
sol
Properties:
DGConnectIdentifier = 'ssol1'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '2'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '+ARCH/sol/datafile, /U02/app/oracle/oradata/sol, +DATA/sol/datafile, /U02/app/oracle/oradata/sol, /data/oracle/oradata/sol, /U02/app/oracle/oradata/sol, /other/app/oracle/oradata/sol, /U02/app/oracle/oradata/sol, /U01/app/oracle/oradata/sol, /U02/app/oracle/oradata/sol'
LogFileNameConvert = '/U01/app/oracle/oradata/sol/, /U02/app/oracle/oradata/sol/'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName = 'sol'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.127.133.23)(PORT=1532))(CONNECT_DATA=(SERVICE_NAME=ssol1_DGMGRL)(INSTANCE_NAME=sol)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
TopWaitEvents = '(monitor)'
Database Status:
SUCCESS
But the standby's alert log revealed the real cause: the flash recovery area was full.
Creating archive destination file : /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_29/o1_mf_1_459848_%u_.arc (648904 blocks)
Thu Oct 29 00:16:54 2015
Errors in file /U02/app/oracle/diag/rdbms/ssol1/sol/trace/sol_rfs_5518.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 107374182400 bytes is 100.00% used, and has 0 remaining bytes available.
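The 100% figure in ORA-19815 refers to the logical quota db_recovery_file_dest_size, not filesystem space, which is why df looked fine earlier. The quota usage can be confirmed on the standby with the standard views, a sketch:

```sql
-- FRA quota vs. usage: independent of filesystem free space
SELECT name,
       space_limit / 1024 / 1024 / 1024       AS limit_gb,
       space_used / 1024 / 1024 / 1024        AS used_gb,
       space_reclaimable / 1024 / 1024 / 1024 AS reclaimable_gb
FROM   v$recovery_file_dest;

-- Per-file-type breakdown of FRA consumption
SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$recovery_area_usage;
```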
So the flash recovery area holding the archived logs had clearly filled up. But a crontab job was also configured at the OS level, shouldn't it have cleaned things up?
A look at the crontab job showed its standby archivelog deletion policy keeps roughly the last 4 days of archives:
crosscheck archivelog all;
delete expired archivelog all;
DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-4';
Fixing the problem came first, but to preserve the scene for follow-up I didn't rashly delete all the archived logs by hand. Instead I worked around it by resizing the flash recovery area: it had been 100G, and I grew it to 150G, enough headroom to hold out for a few days. After that simple bump, the DG broker check came back clean.
But two questions remained. Starting with 11g, archived logs in the standby's flash recovery area are supposed to be deleted automatically once usage crosses the 80% threshold, so why weren't they?
And the OS-level crontab calls RMAN to delete expired archived logs, so why didn't that clean them up either?
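On the first question, one detail worth checking is whether the shipped logs had actually been applied: under space pressure the flash recovery area only reclaims files Oracle no longer needs, and an archived log not yet applied on the standby is still needed. A sketch of that check, run on the standby:

```sql
-- Which shipped archivelogs has the standby actually applied?
-- Unapplied logs (APPLIED = 'NO') are still needed and the FRA will not reclaim them.
SELECT sequence#, applied, completion_time
FROM   v$archived_log
ORDER  BY sequence#;
```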
For these two questions I ruled out RMAN configuration and interference from other parameters, and finally checked the DG broker log, which was full of lines like this:
11/01/2015 21:52:36
Read-Only state no longer supported
Tracing back up through the log to where the error was first thrown, over a week had already gone by.
05/20/2015 09:43:44
version check on database ssol1 detected stale metadata,
requesting update from primary database
Creating process RSM0
10/23/2015 11:11:50
Command EDIT DATABASE ssol1 SET STATE = READ-ONLY completed
10/23/2015 11:11:52
Read-Only state no longer supported
10/23/2015 11:12:52
Seeing this, it all clicked: during an earlier recovery test the standby had been set to the READ-ONLY state, and I had simply forgotten to set it back to ONLINE.
I set it back to ONLINE immediately:
DGMGRL> edit database ssol1 set state='ONLINE';
Succeeded.
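Besides watching the alert log, the recovery processes can be confirmed from SQL on the standby; a standard sketch:

```sql
-- Confirm MRP0 is up and note the sequence it is applying
SELECT process, status, thread#, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';
```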
At this point the standby's alert log showed MRP starting and log apply resuming:
Sun Nov 01 22:00:16 2015
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE THROUGH ALL SWITCHOVER DISCONNECT USING CURRENT LOGFILE
Attempt to start background Managed Standby Recovery process (sol)
Sun Nov 01 22:00:16 2015
MRP0 started with pid=28, OS id=18517
MRP0: Background Managed Standby Recovery process started (sol)
started logmerger process
Sun Nov 01 22:00:21 2015
Managed Standby Recovery starting Real Time Apply
Sun Nov 01 22:00:35 2015
Parallel Media Recovery started with 16 slaves
Waiting for all non-current ORLs to be archived...
All non-current ORLs have been archived.
Sun Nov 01 22:00:36 2015
Media Recovery Log /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_23/o1_mf_1_459657_c2lwdgrl_.arc
Sun Nov 01 22:00:36 2015
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE THROUGH ALL SWITCHOVER DISCONNECT USING CURRENT LOGFILE
By now the archived logs had all been shipped from primary to standby, but never cleaned up.
A look at the recent archive directories shows that everything from the 23rd onward had been retained:
4.0K ./2015_10_22
3.2G ./2015_10_23
9.9G ./2015_10_24
20G ./2015_10_25
11G ./2015_10_26
17G ./2015_10_27
36G ./2015_10_28
9.7G ./2015_10_29
11G ./2015_10_30
9.8G ./2015_10_31
20G ./2015_11_01
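The same per-day picture can also be pulled from inside the database rather than the filesystem, a sketch (sizes are computed from blocks * block_size, so they only approximate the filesystem figures):

```sql
-- Daily archivelog volume on the standby, roughly matching the directory sizes above
SELECT TRUNC(completion_time)                                   AS day,
       COUNT(*)                                                 AS logs,
       ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 1)  AS gb
FROM   v$archived_log
GROUP  BY TRUNC(completion_time)
ORDER  BY 1;
```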
Now that apply had resumed, how could I watch the 80% threshold take effect? One trick: shrink the flash recovery area. Right after shrinking it, space pressure kicks in immediately, the archived logs that have already been applied become reclaimable, and the delete operations begin.
Sun Nov 01 22:26:03 2015
Media Recovery Log /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_25/o1_mf_1_459697_c2qnysqs_.arc
Sun Nov 01 22:26:06 2015
ALTER SYSTEM SET db_recovery_file_dest_size='120G' SCOPE=BOTH;
Sun Nov 01 22:26:06 2015
Errors in file /U02/app/oracle/diag/rdbms/ssol1/sol/trace/sol_m000_24860.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 128849018880 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
************************************************************************
Sun Nov 01 22:26:27 2015
Media Recovery Log /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_25/o1_mf_1_459698_c2qo28z1_.arc
Sun Nov 01 22:26:31 2015
ALTER SYSTEM SET db_recovery_file_dest_size='140G' SCOPE=BOTH;
Sun Nov 01 22:26:32 2015
Deleted Oracle managed file /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_23/o1_mf_1_459657_c2lwdgrl_.arc
Deleted Oracle managed file /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_23/o1_mf_1_459658_c2mfz45j_.arc
Deleted Oracle managed file /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_23/o1_mf_1_459659_c2mzkpgg_.arc
Deleted Oracle managed file /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_23/o1_mf_1_459660_c2nnnod5_.arc
Deleted Oracle managed file /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_23/o1_mf_1_459661_c2nnnod6_.arc
Deleted Oracle managed file /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_24/o1_mf_1_459662_c2npf92y_.arc
Deleted Oracle managed file /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_24/o1_mf_1_459663_c2npgdvq_.arc
Deleted Oracle managed file /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_24/o1_mf_1_459664_c2npgzr5_.arc
Sun Nov 01 22:26:44 2015
Media Recovery Log /U02/app/oracle/flash_recovery_area/SSOL1/archivelog/2015_10_25/o1_mf_1_459699_c2qo592h_.arc
Of course, so as not to disrupt incoming archived logs, I changed the size back after this quick verification, the plan being to finish receiving the archives first and then clean up at leisure.
Looking back, the lesson is that the causes behind seemingly strange symptoms are often simple and routine. The real question is whether we put the details in front of us to good use: in hindsight, the very first DG broker check had already hinted at the answer:
DGMGRL> show database verbose ssol1
Database - ssol1
Role: PHYSICAL STANDBY
Intended State: READ-ONLY
Transport Lag: 12 minutes 26 seconds
Apply Lag: 5 days 16 hours 52 minutes 38 seconds
Real Time Query: OFF
So these log details deserve careful reading. What remained was to wait for all the archived logs to be applied, and then check that the cleanup had completed.
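For that last check, the lag statistics are a convenient yardstick; once apply catches up, a query like this on the standby should show the apply lag back near zero:

```sql
-- Transport/apply lag as seen by the standby
SELECT name, value
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```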