




A few days ago, a friend reached out to me: the Oracle 19c grid his company runs has compatibility problems with some legacy business systems, which had caused faults in production. To resolve this, they decided to downgrade from Oracle 19c grid to Oracle 12.1.0.2 grid. Why they chose this route, I honestly don't know. In any case, take a full backup before the operation — a DBA who doesn't back up is not a qualified DBA.


Environment details:
The two node names are dbhost1 and dbhost2.
The Grid user is oracle ## Don't be puzzled — I don't know who set it up this way either; it may have been a management decision!!!
19c ORACLE_HOME (current) -> /sharearea/crs/grid19c
12c ORACLE_HOME (old) -> /crs/app/oracle/product/grid12c
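The intro insists on a full backup before touching anything. As a minimal sketch (my own, not from the article — run as root, and treat it only as a starting point, since a proper backup strategy depends on your environment), the grid-level pieces can be captured like this. With `DRYRUN=1` (the default here) the script only prints the commands instead of invoking the Oracle binaries:

```shell
#!/bin/sh
# Sketch: back up the OCR and record voting disk locations before the downgrade.
# GRID_HOME matches this article's environment; DRYRUN=1 (default) only prints
# each command so the sketch can be exercised outside a live cluster.
GRID_HOME=${GRID_HOME:-/sharearea/crs/grid19c}
DRYRUN=${DRYRUN:-1}

run() {
  if [ "$DRYRUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run "$GRID_HOME/bin/ocrconfig" -manualbackup     # take a manual OCR backup
run "$GRID_HOME/bin/ocrconfig" -showbackup       # list existing OCR backups
run "$GRID_HOME/bin/crsctl" query css votedisk   # record voting disk locations
```

Database backups (RMAN full backup, spfile, password files) are a separate exercise and are not covered by this sketch.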

1. Check the current Clusterware versions:
oracle@dbhost1:/$ crsctl query crs softwareversion
Oracle Clusterware version on node [dbhost1] is [19.0.0.0.0]
oracle@dbhost1:$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
2. Delete the MGMTDB (GIMR) database:
## Check which node the mgmtdb database is running on
oracle@dbhost1:$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node dbhost2
oracle@dbhost2:~$ dbca -silent -deleteDatabase -sourceDB -MGMTDB
[WARNING] [DBT-19202] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be destroyed.
Prepare for db operation
32% complete
Connecting to database
35% complete
39% complete
42% complete
45% complete
48% complete
52% complete
65% complete
Updating network configuration files
68% complete
Deleting instance and datafiles
84% complete
100% complete
Database deletion completed.
Look at the log file "/sharearea/orabase/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log" for further details.
Note: if you instead try to delete MGMTDB from node 1, it throws the following error.

oracle@dbhost1:$ dbca -silent -deleteDatabase -sourceDB -MGMTDB
[FATAL] [DBT-10003] Delete operation for Oracle Grid Infrastructure Management Repository (GIMR) cannot be performed on the current node (dbhost1).

Cause: the Oracle GIMR is running on a remote node (dbhost2).
Action: invoke DBCA on the remote node (dbhost2) to delete the Oracle GIMR.
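The note above can be scripted. Here is a minimal sketch (the `mgmtdb_node` helper is my own, not part of any Oracle tool) that parses `srvctl status mgmtdb` output so DBCA is only invoked on the node actually hosting the GIMR, avoiding DBT-10003:

```shell
#!/bin/sh
# Hypothetical helper: extract the hosting node name from srvctl-style output.
mgmtdb_node() {
  awk '/is running on node/ { print $NF }'
}

# On a cluster node one might use it like this (commented out because it
# needs a live cluster):
#   NODE=$(srvctl status mgmtdb | mgmtdb_node)
#   if [ "$NODE" = "$(hostname)" ]; then
#     dbca -silent -deleteDatabase -sourceDB -MGMTDB
#   else
#     echo "GIMR runs on $NODE; run dbca there to avoid DBT-10003"
#   fi

# Demo against the output shown in this article:
printf 'Database is enabled\nInstance -MGMTDB is running on node dbhost2\n' | mgmtdb_node
# prints: dbhost2
```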
3. Run the downgrade script as root on each node:
root@dbhost1:/export/home/oracle# /sharearea/crs/grid19c/crs/install/rootcrs.sh -downgrade
Using configuration parameter file: /sharearea/crs/grid19c/crs/install/crsconfig_params
The log of current session can be found at:
/sharearea/orabase/crsdata/dbhost1/crsconfig/crsdowngrade_dbhost1_2019-09-16_10-01-30AM.log
2019/09/16 10:04:12 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2019/09/16 10:05:16 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2019/09/16 10:05:18 CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
root@dbhost1:/export/home/oracle# 2019/09/16 10:05:55 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
Then on node 2 (the last node, where the OCR itself is downgraded):
root@dbhost2:# /sharearea/crs/grid19c/crs/install/rootcrs.sh -downgrade
Using configuration parameter file: /sharearea/crs/grid19c/crs/install/crsconfig_params
The log of current session can be found at:
/sharearea/orabase/crsdata/dbhost2/crsconfig/crsdowngrade_dbhost2_2019-09-16_10-10-47AM.log
…(several lines of verbose gipcmodClsa endpoint trace output from the session omitted)…
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'dbhost2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'dbhost2'
CRS-2676: Start of 'ora.mdnsd' on 'dbhost2' succeeded
CRS-2676: Start of 'ora.evmd' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'dbhost2'
CRS-2676: Start of 'ora.gpnpd' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'dbhost2'
CRS-2672: Attempting to start 'ora.gipcd' on 'dbhost2'
CRS-2676: Start of 'ora.cssdmonitor' on 'dbhost2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'dbhost2'
CRS-2672: Attempting to start 'ora.diskmon' on 'dbhost2'
CRS-2676: Start of 'ora.diskmon' on 'dbhost2' succeeded
CRS-2676: Start of 'ora.cssd' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'dbhost2'
CRS-2672: Attempting to start 'ora.ctssd' on 'dbhost2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'dbhost2'
CRS-2676: Start of 'ora.crf' on 'dbhost2' succeeded
CRS-2676: Start of 'ora.ctssd' on 'dbhost2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'dbhost2'
CRS-2676: Start of 'ora.asm' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'dbhost2'
CRS-2676: Start of 'ora.storage' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'dbhost2'
CRS-2676: Start of 'ora.crsd' on 'dbhost2' succeeded
2019/09/16 10:26:12 CLSRSC-338: Successfully downgraded OCR to version 12.1.0.2.0
CRS-2672: Attempting to start 'ora.crf' on 'dbhost2'
CRS-2676: Start of 'ora.crf' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'dbhost2'
CRS-2676: Start of 'ora.crsd' on 'dbhost2' succeeded
2019/09/16 10:27:09 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2019/09/16 10:27:52 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2019/09/16 10:27:54 CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
2019/09/16 10:27:55 CLSRSC-640: To complete the downgrade operation, ensure that the node inventory on all nodes points to the configured Grid Infrastructure home '/crs/app/oracle/product/grid12c'.
2019/09/16 10:27:56 CLSRSC-592: Run 'crsctl start crs' from home crs/app/oracle/product/grid12c on each node to complete downgrade.
root@dbhost2:/export/home/oracle# 2019/09/16 10:28:38 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
4. Update the node list to detach the 19c home (CRS=false), running the installer from the 19c home:

oracle$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/sharearea/crs/grid19c "CLUSTER_NODES=dbhost1,dbhost2" -doNotUpdateNodeList
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 470964 MB Passed
The inventory pointer is located at /var/opt/oracle/oraInst.loc
You can find the log of this install session at:
/crs/app/oraInventory/logs/UpdateNodeList2019-09-16_10-33-44AM.log
'UpdateNodeList' was successful.
5. Update the node list to register the 12c home as the CRS home (CRS=true), running the installer from the 12c home:

oracle$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/crs/app/oracle/product/grid12c "CLUSTER_NODES=dbhost1,dbhost2"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 470673 MB Passed
The inventory pointer is located at /var/opt/oracle/oraInst.loc
'UpdateNodeList' was successful.
6. Start CRS from the 12c home on each node:
--- node 1 :
root@dbhost1:/crs/app/oracle/product/grid12c/bin# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
--- node 2 :
root@dbhost2:/crs/app/oracle/product/grid12c/bin# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
7. Remove the old MGMTDB service definition:
oracle@dbhost1:~$ srvctl remove mgmtdb
Remove the database _mgmtdb? (y/[n]) y
8. Verify the Clusterware versions:
oracle@dbhost1:~$ crsctl query crs softwareversion
Oracle Clusterware version on node [dbhost1] is [12.1.0.2.0]
oracle@dbhost1:~$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
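As a final check, the version strings can be extracted mechanically instead of eyeballed. A small sketch (the `crs_version` helper is my own, assuming the `[x.y.z]` bracket format that crsctl prints above):

```shell
#!/bin/sh
# Hypothetical helper: pull the last [bracketed] token from a crsctl
# "query crs ...version" message (the version is always the final bracket).
crs_version() {
  awk -F'[][]' '{ print $(NF-1) }'
}

# On each node one would run (needs a live cluster, hence commented out):
#   v=$(crsctl query crs activeversion | crs_version)
#   [ "$v" = "12.1.0.2.0" ] || echo "unexpected active version: $v"

# Demo against the output shown in this article:
echo 'Oracle Clusterware active version on the cluster is [12.1.0.2.0]' | crs_version
# prints: 12.1.0.2.0
```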
9. Recreate the MGMTDB container database from the 12c home:

/crs/app/oracle/product/grid12c/bin/dbca -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName +MGMT -datafileJarLocation /crs/app/oracle/product/grid12c/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
Registering database with Oracle Grid Infrastructure
5% complete
Copying database files
7% complete
9% complete
16% complete
23% complete
30% complete
41% complete
Creating and starting Oracle instance
43% complete
48% complete
49% complete
50% complete
55% complete
60% complete
61% complete
64% complete
Completing Database Creation
68% complete
79% complete
89% complete
100% complete
Look at the log file "/crs/app/grid/cfgtoollogs/dbca/_mgmtdb/_mgmtdb1.log" for further details.

10. Create the MGMTDB PDB:

/crs/app/oracle/product/grid12c/bin/dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName cluster_name -createPDBFromRMANBACKUP -PDBBackUpfile /crs/app/oracle/product/grid12c/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile /crs/app/oracle/product/grid12c/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck
Creating Pluggable Database
4% complete
12% complete
21% complete
38% complete
55% complete
85% complete
Completing Pluggable Database Creation
100% complete
Look at the log file "/crs/app/grid/cfgtoollogs/dbca/_mgmtdb/cluster_name/_mgmtdb.log" for further details.

oracle@dbhost1:...app/oracle/product/grid12c/bin$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node dbhost1
At this point the 19c GRID has been successfully downgraded to 12.1.0.2 GRID.
This article is reposted from DBA巫师. If you suspect infringement, email contact@modb.pro with supporting evidence; once verified, 墨天轮 (Modb) will remove the content immediately.