Help needed!!! Oracle 19c RAC upgrade from 19.3.0 to 19.13.0 fails
解龙
2022-08-06

Upgrading from 19.3.0 to 19.12.0 completes without errors.

Upgrading from either 19.3.0 or 19.12.0 to 19.13.0 fails: ora.gipcd cannot communicate with the other node.

To rule out a problem with the RAC environment itself, I also did a fresh install applying the RU up front:

./gridSetup.sh -applyRU 33182768

Running root.sh on the second node to start the cluster fails with the same error: ora.gipcd cannot communicate with the other node, and the node cannot join the cluster.

The VMs use a bridged NIC for the public network and a host-only NIC for the private network.

The firewall is disabled, NOZEROCONF=yes is set, and communication over the private NICs works fine.

No NIC bonding is configured; multicast on the NICs was checked per the official documentation and is fine.


Exact steps:

Patch conflict checks and pre-checks succeeded on both nodes.

mkdir /patch

chmod 777 -R /patch 

chown -R grid.oinstall 33182768

chmod 777 -R 33182768

As root:

unzip 'p6880880_190000_Linux-x86-64(30).zip' -d /u01/app/19.3.0/grid/

chown -R grid.oinstall OPatch 

chmod 755 -R OPatch


As oracle:

chown -R oracle.oinstall 'p6880880_190000_Linux-x86-64(30).zip'

unzip 'p6880880_190000_Linux-x86-64(30).zip' -d /u01/app/oracle/product/19.3.0/db_1/

chmod 755 -R OPatch


./opatch lsinventory -detail -oh <ORACLE_HOME>

For the GI home:
cd $ORACLE_HOME/OPatch
./opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/33182768/33192793
./opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/33182768/33208123
./opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/33182768/33208107
./opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/33182768/33239955
./opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/33182768/32585572

For the database home:
cd $ORACLE_HOME/OPatch
./opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/33182768/33192793
./opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/33182768/33208123
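The repeated conflict checks above (five sub-patches for the GI home) can be driven by a small loop. A sketch, dry-run by default: `RUN=echo` just prints each command, and clearing it (and running from `$ORACLE_HOME/OPatch`) would execute the real checks. The sub-patch directories are the ones from this thread.

```shell
# Dry-run sketch of the per-sub-patch conflict checks above.
# RUN=echo prints each command instead of executing it; set RUN= (empty)
# and run from $ORACLE_HOME/OPatch to perform the real checks.
RUN=echo
for p in 33192793 33208123 33208107 33239955 32585572; do
  $RUN ./opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir "/u01/33182768/$p"
done
```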

cluvfy stage -pre patch
Permission status: pass on both nodes.

cat > /u01/app/grid/patch_list_gihome.txt <<EOF
/u01/33182768/32585572
/u01/33182768/33192793
/u01/33182768/33208107
/u01/33182768/33208123
/u01/33182768/33239955
EOF
cd $ORACLE_HOME/OPatch
./opatch prereq CheckSystemSpace -phBaseFile /u01/app/grid/patch_list_gihome.txt

cat > /u01/app/oracle/patch_list_dbhome.txt <<EOF
/u01/33182768/33192793
/u01/33182768/33208123
EOF
cd $ORACLE_HOME/OPatch
./opatch prereq CheckSystemSpace -phBaseFile /u01/app/oracle/patch_list_dbhome.txt

/u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/33182768 -analyze -oh /u01/app/19.3.0/grid
/u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/33182768/ -oh /u01/app/19.3.0/grid


Exact error:

[root@orcl19cn1 patch]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /patch/33182768/ -oh /u01/app/19.3.0/grid

OPatchauto session is initiated at Sat Aug 6 15:09:48 2022

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2022-08-06_03-10-03PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2022-08-06_03-12-01PM.log
The id for this session is 6A35

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid
Patch applicability verified successfully on home /u01/app/19.3.0/grid


Executing patch validation checks on home /u01/app/19.3.0/grid
Patch validation checks successfully completed on home /u01/app/19.3.0/grid


Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/19.3.0/grid
Prepatch operation log file location: /u01/app/grid/crsdata/orcl19cn1/crsconfig/crs_prepatch_apply_inplace_orcl19cn1_2022-08-06_03-15-40PM.log
CRS service brought down successfully on home /u01/app/19.3.0/grid


Start applying binary patch on home /u01/app/19.3.0/grid
Binary patch applied successfully on home /u01/app/19.3.0/grid


Performing postpatch operations on CRS - starting CRS service on home /u01/app/19.3.0/grid
Postpatch operation log file location: /u01/app/grid/crsdata/orcl19cn1/crsconfig/crs_postpatch_apply_inplace_orcl19cn1_2022-08-06_03-26-26PM.log
Failed to start CRS service on home /u01/app/19.3.0/grid

Execution of [GIStartupAction] patch action failed, check log for more details. Failures:
Patch Target : orcl19cn1->/u01/app/19.3.0/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: orcl19cn1.
Command failed: /u01/app/19.3.0/grid/perl/bin/perl -I/u01/app/19.3.0/grid/perl/lib -I/u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_orcl19cn1/patchwork/crs/install -I/u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_orcl19cn1/patchwork/xag /u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_orcl19cn1/patchwork/crs/install/rootcrs.pl -postpatch
Command failure output:
Using configuration parameter file: /u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_orcl19cn1/patchwork/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/orcl19cn1/crsconfig/crs_postpatch_apply_inplace_orcl19cn1_2022-08-06_03-26-26PM.log
2022/08/06 15:26:44 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'orcl19cn1'
CRS-2672: Attempting to start 'ora.evmd' on 'orcl19cn1'
CRS-2676: Start of 'ora.mdnsd' on 'orcl19cn1' succeeded
CRS-2676: Start of 'ora.evmd' on 'orcl19cn1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'orcl19cn1'
CRS-2676: Start of 'ora.gpnpd' on 'orcl19cn1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'orcl19cn1'
CRS-2676: Start of 'ora.gipcd' on 'orcl19cn1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'orcl19cn1'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orcl19cn1'
CRS-2676: Start of 'ora.cssdmonitor' on 'orcl19cn1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'orcl19cn1'
CRS-2672: Attempting to start 'ora.diskmon' on 'orcl19cn1'
CRS-2676: Start of 'ora.diskmon' on 'orcl19cn1' succeeded
CRS-2676: Start of 'ora.crf' on 'orcl19cn1' succeeded
CRS-2883: Resource 'ora.gipcd' failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-41053: checking Oracle Grid Infrastructure for file permission issues
CRS-4000: Command Start failed, or completed with errors.
2022/08/06 15:29:59 CLSRSC-117: Failed to start Oracle Clusterware stack from the Grid Infrastructure home /u01/app/19.3.0/grid

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Sat Aug 6 15:30:02 2022
Time taken to complete the session 20 minutes, 15 seconds

opatchauto failed with error code 42



GI alert log:

2022-08-06 15:27:18.712 [CLSECHO(69653)]ACFS-9294: updating file /etc/sysconfig/oracledrivers.conf
2022-08-06 15:27:18.749 [CLSECHO(69661)]ACFS-9308: Loading installed ADVM/ACFS drivers.
2022-08-06 15:27:18.788 [CLSECHO(69669)]ACFS-9321: Creating udev for ADVM/ACFS.
2022-08-06 15:27:18.826 [CLSECHO(69677)]ACFS-9323: Creating module dependencies - this may take some time.
2022-08-06 15:27:23.437 [CLSECHO(69753)]ACFS-9154: Loading 'oracleoks.ko' driver.
2022-08-06 15:27:24.151 [CLSECHO(69791)]ACFS-9154: Loading 'oracleadvm.ko' driver.
2022-08-06 15:27:24.975 [CLSECHO(69833)]ACFS-9154: Loading 'oracleacfs.ko' driver.
2022-08-06 15:27:25.653 [CLSECHO(69899)]ACFS-9327: Verifying ADVM/ACFS devices.
2022-08-06 15:27:25.786 [CLSECHO(69931)]ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
2022-08-06 15:27:25.866 [CLSECHO(69952)]ACFS-9156: Detecting control device '/dev/ofsctl'.
2022-08-06 15:27:30.221 [CLSECHO(70962)]ACFS-9309: ADVM/ACFS installation correctness verified.
2022-08-06 15:27:33.505 [CLSECHO(71245)]OKA-0620: OKA is not supported on this operating system version: '3.10.0-1160.el7.x86_64'
2022-08-06 15:27:33.627 [CLSECHO(71275)]OKA-9294: updating file /etc/sysconfig/oracledrivers.conf
2022-08-06 15:27:34.742 [CLSCFG(71352)]CRS-1810: Node-specific configuration for node orcl19cn1 in Oracle Local Registry was patched to patch level 2966572961.
2022-08-06 15:27:39.305 [OHASD(71384)]CRS-8500: Oracle Clusterware OHASD process is starting with operating system process ID 71384
2022-08-06 15:27:39.385 [OHASD(71384)]CRS-0714: Oracle Clusterware Release 19.0.0.0.0.
2022-08-06 15:27:39.407 [OHASD(71384)]CRS-2112: The OLR service started on node orcl19cn1.
2022-08-06 15:27:39.663 [OHASD(71384)]CRS-1301: Oracle High Availability Service started on node orcl19cn1.
2022-08-06 15:27:39.677 [OHASD(71384)]CRS-8017: location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
2022-08-06 15:27:40.587 [CSSDAGENT(71490)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 71490
2022-08-06 15:27:40.746 [CSSDMONITOR(71495)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 71495
2022-08-06 15:27:41.387 [ORAROOTAGENT(71477)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 71477
2022-08-06 15:27:41.414 [ORAAGENT(71484)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 71484
2022-08-06 15:27:42.682 [ORAAGENT(71593)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 71593
2022-08-06 15:27:43.213 [MDNSD(71617)]CRS-8500: Oracle Clusterware MDNSD process is starting with operating system process ID 71617
2022-08-06 15:27:43.227 [EVMD(71618)]CRS-8500: Oracle Clusterware EVMD process is starting with operating system process ID 71618
2022-08-06 15:27:44.261 [GPNPD(71650)]CRS-8500: Oracle Clusterware GPNPD process is starting with operating system process ID 71650
2022-08-06 15:27:45.178 [GPNPD(71650)]CRS-2328: GPNPD started on node orcl19cn1.
2022-08-06 15:27:45.419 [GIPCD(71721)]CRS-8500: Oracle Clusterware GIPCD process is starting with operating system process ID 71721
2022-08-06 15:27:46.093 [GIPCD(71721)]CRS-7517: The Oracle Grid Interprocess Communication (GIPC) failed to identify the Fast Node Death Detection (FNDD).
2022-08-06 15:27:49.246 [CSSDMONITOR(71755)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 71755
2022-08-06 15:27:49.926 [CSSDAGENT(71787)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 71787
2022-08-06 15:27:50.531 [OSYSMOND(71757)]CRS-8500: Oracle Clusterware OSYSMOND process is starting with operating system process ID 71757
2022-08-06 15:27:54.925 [OCSSD(71807)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 71807
2022-08-06 15:27:56.032 [OCSSD(71807)]CRS-1713: CSSD daemon is started in hub mode
2022-08-06 15:27:57.544 [OCSSD(71807)]CRS-1707: Lease acquisition for node orcl19cn1 number 1 completed
2022-08-06 15:27:58.649 [OCSSD(71807)]CRS-1621: The IPMI configuration data for this node stored in the Oracle registry is incomplete; details at (:CSSNK00002:) in /u01/app/grid/diag/crs/orcl19cn1/crs/trace/ocssd.trc
2022-08-06 15:27:58.650 [OCSSD(71807)]CRS-1617: The information required to do node kill for node orcl19cn1 is incomplete; details at (:CSSNM00004:) in /u01/app/grid/diag/crs/orcl19cn1/crs/trace/ocssd.trc
2022-08-06 15:27:58.653 [OCSSD(71807)]CRS-1605: CSSD voting file is online: /dev/sdc1; details in /u01/app/grid/diag/crs/orcl19cn1/crs/trace/ocssd.trc.
2022-08-06 15:27:58.664 [OCSSD(71807)]CRS-1605: CSSD voting file is online: /dev/sdd1; details in /u01/app/grid/diag/crs/orcl19cn1/crs/trace/ocssd.trc.
2022-08-06 15:27:58.674 [OCSSD(71807)]CRS-1605: CSSD voting file is online: /dev/sdb1; details in /u01/app/grid/diag/crs/orcl19cn1/crs/trace/ocssd.trc.
2022-08-06 15:27:58.792 [GIPCD(71721)]CRS-8503: Oracle Clusterware process GIPCD with operating system process ID 71721 experienced fatal signal or exception code 4.
2022-08-06T15:27:58.810687+08:00
Errors in file /u01/app/grid/diag/crs/orcl19cn1/crs/trace/gipcd.trc (incident=1):
CRS-8503 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/grid/diag/crs/orcl19cn1/crs/incident/incdir_1/gipcd_i1.trc

2022-08-06 15:27:59.983 [GIPCD(72038)]CRS-8500: Oracle Clusterware GIPCD process is starting with operating system process ID 72038
2022-08-06 15:28:00.449 [GIPCD(72038)]CRS-7517: The Oracle Grid Interprocess Communication (GIPC) failed to identify the Fast Node Death Detection (FNDD).
2022-08-06 15:28:03.255 [GIPCD(72038)]CRS-8503: Oracle Clusterware process GIPCD with operating system process ID 72038 experienced fatal signal or exception code 4.
2022-08-06T15:28:03.295184+08:00
Errors in file /u01/app/grid/diag/crs/orcl19cn1/crs/trace/gipcd.trc (incident=9):
CRS-8503 [] [] [] [] [] [] [] [] [] [] [] []
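For reference, the repeated GIPCD crash can be counted straight out of the alert log. A minimal sketch, fed here with two sample lines copied from the log above; point LOG at the real alert log instead (e.g. under /u01/app/grid/diag/crs/orcl19cn1/crs/trace/, an assumed path based on the trace locations in this thread).

```shell
# Count GIPCD fatal-signal crashes (CRS-8503) in a GI alert log.
# The sample lines below are copied from this thread; replace LOG with the
# real alert log path for your node.
LOG=/tmp/alert_sample.log
cat > "$LOG" <<'EOF'
2022-08-06 15:27:58.792 [GIPCD(71721)]CRS-8503: Oracle Clusterware process GIPCD with operating system process ID 71721 experienced fatal signal or exception code 4.
2022-08-06 15:28:03.255 [GIPCD(72038)]CRS-8503: Oracle Clusterware process GIPCD with operating system process ID 72038 experienced fatal signal or exception code 4.
EOF
crashes=$(grep -c 'CRS-8503.*GIPCD' "$LOG")
echo "GIPCD crashes: $crashes"
```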


ocssd log:

2022-08-06 15:34:07.212 : CSSD:3322619648: [ INFO] clssnmvDHBValidateNCopy: node 2, orcl19cn2, has a disk HB, but no network HB, DHB has rcfg 555292967, wrtcnt, 24374, LATS 2587304, lastSeqNo 24371, uniqueness 1659769645, timestamp 1659771245/2586274
2022-08-06 15:34:07.338 : CSSD:3844413184: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2022-08-06 15:34:07.895 : CSSD:3830224640: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-08-06 15:34:08.214 : CSSD:3317888768: [ INFO] clssnmvDHBValidateNCopy: node 2, orcl19cn2, has a disk HB, but no network HB, DHB has rcfg 555292967, wrtcnt, 24378, LATS 2588304, lastSeqNo 24369, uniqueness 1659769645, timestamp 1659771246/2587274
2022-08-06 15:34:08.338 : CSSD:3844413184: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2022-08-06 15:34:08.896 : CSSD:3830224640: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-08-06 15:34:09.216 : CSSD:3317888768: [ INFO] clssnmvDHBValidateNCopy: node 2, orcl19cn2, has a disk HB, but no network HB, DHB has rcfg 555292967, wrtcnt, 24381, LATS 2589304, lastSeqNo 24378, uniqueness 1659769645, timestamp 1659771247/2588274
2022-08-06 15:34:09.339 : CSSD:3844413184: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2022-08-06 15:34:09.397 : CSSD:3308427008: [ INFO] clssnmSendingThread: sending join msg to all nodes
2022-08-06 15:34:09.397 : CSSD:3308427008: [ INFO] clssnmSendingThread: sent 5 join msgs to all nodes
2022-08-06 15:34:09.896 : CSSD:3830224640: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-08-06 15:34:10.218 : CSSD:3317888768: [ INFO] clssnmvDHBValidateNCopy: node 2, orcl19cn2, has a disk HB, but no network HB, DHB has rcfg 555292967, wrtcnt, 24384, LATS 2590304, lastSeqNo 24381, uniqueness 1659769645, timestamp 1659771248/2589274
2022-08-06 15:34:10.339 : CSSD:3844413184: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2022-08-06 15:34:10.896 : CSSD:3830224640: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-08-06 15:34:11.220 : CSSD:3317888768: [ INFO] clssnmvDHBValidateNCopy: node 2, orcl19cn2, has a disk HB, but no network HB, DHB has rcfg 555292967, wrtcnt, 24387, LATS 2591304, lastSeqNo 24384, uniqueness 1659769645, timestamp 1659771249/2590274
2022-08-06 15:34:11.339 : CSSD:3844413184: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2022-08-06 15:34:11.797 : CSSD:3306850048: [ INFO] clssnmRcfgMgrThread: Local Join
2022-08-06 15:34:11.797 : CSSD:3306850048: [ INFO] clssnmLocalJoinEvent: begin on node(1), waittime 193000
2022-08-06 15:34:11.797 : CSSD:3306850048: [ INFO] clssnmLocalJoinEvent: set curtime (2591884) for my node
2022-08-06 15:34:11.797 : CSSD:3306850048: [ INFO] clssnmLocalJoinEvent: scanning 32 nodes
2022-08-06 15:34:11.797 : CSSD:3306850048: [ INFO] clssnmLocalJoinEvent: Node orcl19cn2, number 2, is in an existing cluster with disk state 3
2022-08-06 15:34:11.797 : CSSD:3306850048: [ WARNING] clssnmLocalJoinEvent: takeover aborted due to cluster member node found on disk
2022-08-06 15:34:11.897 : CSSD:3830224640: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-08-06 15:34:12.222 : CSSD:3317888768: [ INFO] clssnmvDHBValidateNCopy: node 2, orcl19cn2, has a disk HB, but no network HB, DHB has rcfg 555292967, wrtcnt, 24390, LATS 2592314, lastSeqNo 24387, uniqueness 1659769645, timestamp 1659771250/2591274
2022-08-06 15:34:12.340 : CSSD:3844413184: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2022-08-06 15:34:12.862 : CSSD:3303696128: [ INFO] clssscMonitorThreads clssnmClusterListener not scheduled for 11670 msecs, misscount 30000
2022-08-06 15:34:12.862 : CSSD:3303696128: [ INFO] clssscMonitorThreads: scheddelay observed
2022-08-06 15:34:12.897 : CSSD:3830224640: [ INFO] clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2022-08-06 15:34:13.224 : CSSD:3322619648: [ INFO] clssnmvDHBValidateNCopy: node 2, orcl19cn2, has a disk HB, but no network HB, DHB has rcfg 555292967, wrtcnt, 24392, LATS 2593314, lastSeqNo 24374, uniqueness 1659769645, timestamp 1659771251/2592284
2022-08-06 15:34:13.341 : CSSD:3844413184: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2022-08-06 15:34:13.399 : CSSD:3308427008: [ INFO] scheduling delay in ocssd
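The "disk HB, but no network HB" pattern above (disk heartbeats from node 2 arriving while network heartbeats are missing) is what points at the private interconnect. A small sketch that counts those events, fed here with two sample lines from the trace above; point TRC at the real ocssd.trc instead.

```shell
# Count "disk heartbeat but no network heartbeat" events in ocssd.trc.
# Sample lines are from this thread; replace TRC with the real trace, e.g.
# /u01/app/grid/diag/crs/orcl19cn1/crs/trace/ocssd.trc.
TRC=/tmp/ocssd_sample.trc
cat > "$TRC" <<'EOF'
2022-08-06 15:34:07.212 : CSSD:3322619648: [ INFO] clssnmvDHBValidateNCopy: node 2, orcl19cn2, has a disk HB, but no network HB, DHB has rcfg 555292967
2022-08-06 15:34:08.214 : CSSD:3317888768: [ INFO] clssnmvDHBValidateNCopy: node 2, orcl19cn2, has a disk HB, but no network HB, DHB has rcfg 555292967
EOF
hits=$(grep -c 'has a disk HB, but no network HB' "$TRC")
echo "no-network-HB events: $hits"
```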


gipcd.trc: uploaded as an attachment (see replies below).

6 replies
解龙
Attachment uploaded: gipcd_i65.trc
解龙
Attachment uploaded: ocssd.trc
解龙
Attachment uploaded: alert.log
解龙

File permissions on the second node were already restored via permission.pl; the problem persists.

解龙

SELinux is disabled.

神武天尊

When patching RAC with opatchauto, you shouldn't need to specify the home directory, should you? Did you back up the local homes before patching? If you have a backup, restore from it.

解龙
(OP)
2022-08-29
I tried both /u01/app/19.3.0/grid/OPatch/opatchauto apply /patch/33182768/ and /u01/app/19.3.0/grid/OPatch/opatchauto apply /patch/33182768/ -oh /u01/app/19.3.0/grid; the error is the same.