
A Brief Look: Downgrading Oracle 19c Grid to 12c Grid

DBA巫师 2024-03-27




A few days ago a colleague reached out to me. The Oracle 19c Grid Infrastructure his company was running had compatibility problems with some legacy business systems, which caused failures in those applications. To resolve this, they decided to downgrade from Oracle 19c Grid to Oracle 12.1.0.2 Grid. Why they chose that route I honestly don't know, but in any case a full backup must be taken before the operation: a DBA who doesn't take backups is not a qualified DBA.
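
As a minimal sketch of such a pre-downgrade backup (the RMAN call is generic and the ocrconfig path assumes the 19c grid home used in this environment; adjust both to your own setup):

    ## Full RMAN backup of each database served by the cluster (run as the database owner)
    rman target / <<EOF
    BACKUP DATABASE PLUS ARCHIVELOG;
    EOF
    ## Manual OCR backup, run as root from the current 19c grid home
    /sharearea/crs/grid19c/bin/ocrconfig -manualbackup
    /sharearea/crs/grid19c/bin/ocrconfig -showbackup manual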





Environment details:

The two nodes are named dbhost1 and dbhost2.

The grid user is oracle  ## Don't be puzzled; I don't know who set it up this way either. It may well have been a management decision!

19c ORACLE_HOME (current) -> /sharearea/crs/grid19c

12c ORACLE_HOME (old) -> /crs/app/oracle/product/grid12c
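
A hypothetical profile snippet for the grid user while the 19c home is still active (values taken from the paths above, not from the original article; point them at the 12c home once the downgrade is complete):

    ## Grid user environment for the 19c home (switch to /crs/app/oracle/product/grid12c after step 6)
    export ORACLE_HOME=/sharearea/crs/grid19c
    export PATH=$ORACLE_HOME/bin:$PATH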



1. Check the current grid version (run the check on both nodes):
    oracle@dbhost1:/$ crsctl query crs softwareversion
    Oracle Clusterware version on node [dbhost1] is [19.0.0.0.0]
    oracle@dbhost1:$ crsctl query crs activeversion
    Oracle Clusterware active version on the cluster is [19.0.0.0.0]

2. Delete the mgmtdb database:
    ## First, find out which node the mgmtdb database is running on
      oracle@dbhost1:$ srvctl status mgmtdb
      Database is enabled
      Instance -MGMTDB is running on node dbhost2
      The mgmtdb is currently running on dbhost2 (node 2), so the delete command must be run on node 2.
        oracle@dbhost2:~$ dbca -silent -deleteDatabase -sourceDB -MGMTDB
        [WARNING] [DBT-19202] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be destroyed.
        Prepare for db operation
        32% complete
        Connecting to database
        35% complete
        39% complete
        42% complete
        45% complete
        48% complete
        52% complete
        65% complete
        Updating network configuration files
        68% complete
        Deleting instance and datafiles
        84% complete
        100% complete
        Database deletion completed.
        Look at the log file "/sharearea/orabase/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log" for further details.


Note:
If we try to delete MGMTDB from node 1 instead, it throws the following error:
        oracle@dbhost1:$ dbca -silent -deleteDatabase -sourceDB -MGMTDB
        [FATAL] [DBT-10003] Delete operation for Oracle Grid Infrastructure Management Repository (GIMR) cannot be performed on the current node (dbhost1).
Cause: The Oracle GIMR is running on the remote node (dbhost2).
Action: Invoke DBCA on the remote node (dbhost2) to delete the Oracle GIMR.


3. Run the downgrade script:

We run the downgrade script on the local node first and then on the remote node. It must be run as root.
Log in as root and change to a path where the grid owner has write permission. In our case, the grid owner is oracle.

Downgrade on node 1 (dbhost1):
Log in as root and switch to a path where oracle has write permission.
root$ cd /export/home/oracle
/sharearea/crs/grid19c/crs/install/rootcrs.sh -downgrade
    root@dbhost1:/export/home/oracle# /sharearea/crs/grid19c/crs/install/rootcrs.sh -downgrade


          Using configuration parameter file: /sharearea/crs/grid19c/crs/install/crsconfig_params
          The log of current session can be found at:
          /sharearea/orabase/crsdata/dbhost1/crsconfig/crsdowngrade_dbhost1_2019-09-16_10-01-30AM.log
          2019/09/16 10:04:12 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
          2019/09/16 10:05:16 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
          2019/09/16 10:05:18 CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
          root@dbhost1:/export/home/oracle# 2019/09/16 10:05:55 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.


Downgrade on node 2 (dbhost2):
Log in as root and switch to a path where oracle has write permission.
root$ cd /export/home/oracle
/sharearea/crs/grid19c/crs/install/rootcrs.sh -downgrade
    root@dbhost2:# /sharearea/crs/grid19c/crs/install/rootcrs.sh -downgrade


            Using configuration parameter file: /sharearea/crs/grid19c/crs/install/crsconfig_params
            The log of current session can be found at:
            /sharearea/orabase/crsdata/dbhost2/crsconfig/crsdowngrade_dbhost2_2019-09-16_10-10-47AM.log
            2019-09-16 10:11:04.631 [1] gipcmodClsaBind: Clsa bind
            2019-09-16 10:11:04.631 [1] gipcmodClsaBind: Clsa bind, endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 13flags 0x21080710, flags-2 0x0, usrFlags 0x0 }
            2019-09-16 10:11:04.631 [1] gipcmodClsaSetFast: IPC Clsa with fast clsa, endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 13flags 0xa1080710, flags-2 0x0, usrFlags 0x0 }
            2019-09-16 10:11:04.631 [1] gipcmodClsaCompleteRequest: [clsa] stared for req 103eea7b0 [00000000000001ee] { gipcConnectRequest : addr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=8a08364e-459c1999-52758))', parentEndp 103ee91e0, ret gipcretSuccess (0), objFlags 0x0, reqFlags 0x2 }
            2019-09-16 10:11:04.631 [1] gipcmodClsaCompleteConnect: [clsa] completed connect on endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=459c1999-8a08364e-31594))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=8a08364e-459c1999-52758))', numPend 4, numReady 1, numDone 2, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 52758, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 13flags 0xa1082712, flags-2 0x100, usrFlags 0x0 }
            2019-09-16 10:11:04.631 [1] gipcmodClsaCheckCompletion: username, state 4, endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=459c1999-8a08364e-31594))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=8a08364e-459c1999-52758))', numPend 4, numReady 0, numDone 3, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 52758, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 0flags 0xa1002716, flags-2 0x100, usrFlags 0x0 }
            2019-09-16 10:11:04.631 [1] gipcmodClsaCheckCompletion: username CLSA, modendp 4, userData 103f2afd0
            2019-09-16 10:11:04.632 [1] gipcmodClsaCompleteRequest: [clsa] stared for req 103d6f570 [00000000000001fe] { gipcSendRequest : addr '', data 103eea7b0, len 627, olen 627, parentEndp 103ee91e0, ret gipcretSuccess (0), objFlags 0x0, reqFlags 0x2 }
            2019-09-16 10:11:04.654 [1] gipcmodClsaCompleteRequest: [clsa] stared for req 103d6f570 [0000000000000200] { gipcReceiveRequest : peerName 'clsc_ipc', data 103eea8f8, len 502, olen 502, off 0, parentEndp 103ee91e0, ret gipcretSuccess (0), objFlags 0x0, reqFlags 0x2 }
            2019-09-16 10:11:04.655 [1] gipcmodClsaDisconnect: [clsa] disconnect issued on endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=459c1999-8a08364e-31594))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=8a08364e-459c1999-52758))', numPend 5, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 52758, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 0flags 0xa1002716, flags-2 0x100, usrFlags 0x0 }
            2019-09-16 10:11:04.655 [1] gipcmodClsaDisconnect: [clsa] disconnect issued on endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=459c1999-8a08364e-31594))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=8a08364e-459c1999-52758))', numPend 0, numReady 5, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 52758, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 0flags 0xa700271e, flags-2 0x100, usrFlags 0x0 }
            CRS-4123: Oracle High Availability Services has been started.
            CRS-2672: Attempting to start 'ora.evmd' on 'dbhost2'
            CRS-2672: Attempting to start 'ora.mdnsd' on 'dbhost2'
            CRS-2676: Start of 'ora.mdnsd' on 'dbhost2' succeeded
            CRS-2676: Start of 'ora.evmd' on 'dbhost2' succeeded
            CRS-2672: Attempting to start 'ora.gpnpd' on 'dbhost2'
            CRS-2676: Start of 'ora.gpnpd' on 'dbhost2' succeeded
            CRS-2672: Attempting to start 'ora.cssdmonitor' on 'dbhost2'
            CRS-2672: Attempting to start 'ora.gipcd' on 'dbhost2'
            CRS-2676: Start of 'ora.cssdmonitor' on 'dbhost2' succeeded
            CRS-2676: Start of 'ora.gipcd' on 'dbhost2' succeeded
            CRS-2672: Attempting to start 'ora.cssd' on 'dbhost2'
            CRS-2672: Attempting to start 'ora.diskmon' on 'dbhost2'
            CRS-2676: Start of 'ora.diskmon' on 'dbhost2' succeeded
            CRS-2676: Start of 'ora.cssd' on 'dbhost2' succeeded
            CRS-2672: Attempting to start 'ora.crf' on 'dbhost2'
            CRS-2672: Attempting to start 'ora.ctssd' on 'dbhost2'
            CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'dbhost2'
            CRS-2676: Start of 'ora.crf' on 'dbhost2' succeeded
            CRS-2676: Start of 'ora.ctssd' on 'dbhost2' succeeded
            CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'dbhost2' succeeded
            CRS-2672: Attempting to start 'ora.asm' on 'dbhost2'
            CRS-2676: Start of 'ora.asm' on 'dbhost2' succeeded
            CRS-2672: Attempting to start 'ora.storage' on 'dbhost2'
            CRS-2676: Start of 'ora.storage' on 'dbhost2' succeeded
            CRS-2672: Attempting to start 'ora.crsd' on 'dbhost2'
            CRS-2676: Start of 'ora.crsd' on 'dbhost2' succeeded
            2019/09/16 10:26:12 CLSRSC-338: Successfully downgraded OCR to version 12.1.0.2.0
            CRS-2672: Attempting to start 'ora.crf' on 'dbhost2'
            CRS-2676: Start of 'ora.crf' on 'dbhost2' succeeded
            CRS-2672: Attempting to start 'ora.crsd' on 'dbhost2'
            CRS-2676: Start of 'ora.crsd' on 'dbhost2' succeeded
            2019/09/16 10:27:09 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
            2019/09/16 10:27:52 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
            2019/09/16 10:27:54 CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
            2019/09/16 10:27:55 CLSRSC-640: To complete the downgrade operation, ensure that the node inventory on all nodes points to the configured Grid Infrastructure home '/crs/app/oracle/product/grid12c'.
            2019/09/16 10:27:56 CLSRSC-592: Run 'crsctl start crs' from home /crs/app/oracle/product/grid12c on each node to complete downgrade.
            root@dbhost2:/export/home/oracle# 2019/09/16 10:28:38 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.


4. Remove the 19c grid home from the active cluster inventory (from one node only):
Run this as the oracle user from the 19c GRID_HOME; it only needs to be run on one node.
cd /sharearea/crs/grid19c/oui/bin
              oracle$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/sharearea/crs/grid19c "CLUSTER_NODES=dbhost1,dbhost2" -doNotUpdateNodeList


              Starting Oracle Universal Installer...


              Checking swap space: must be greater than 500 MB. Actual 470964 MB Passed
              The inventory pointer is located at /var/opt/oracle/oraInst.loc
              You can find the log of this install session at:
              /crs/app/oraInventory/logs/UpdateNodeList2019-09-16_10-33-44AM.log
              'UpdateNodeList' was successful.

5. Update the active cluster inventory with the Oracle 12c grid home (from one node only):
Run this as the oracle user from the 12c GRID_HOME; it only needs to be run on one node.
cd /crs/app/oracle/product/grid12c/oui/bin
                oracle$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/crs/app/oracle/product/grid12c "CLUSTER_NODES=dbhost1,dbhost2"


                Starting Oracle Universal Installer...


                Checking swap space: must be greater than 500 MB. Actual 470673 MB Passed
                The inventory pointer is located at /var/opt/oracle/oraInst.loc
                'UpdateNodeList' was successful.
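
To confirm the inventory change took effect, one optional check (not part of the original steps; the inventory location is taken from the installer output above) is to verify that the 12c home now carries the CRS="true" flag:

    ## The 12c grid home entry should now be marked CRS="true"
    grep 'CRS="true"' /crs/app/oraInventory/ContentsXML/inventory.xml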


6. Start CRS from the Oracle 12c grid_home:

--- node 1 :


                  root@dbhost1:/crs/app/oracle/product/grid12c/bin# ./crsctl start crs
                  CRS-4123: Oracle High Availability Services has been started.


                  --- node 2 :


                  root@dbhost2:/crs/app/oracle/product/grid12c/bin# ./crsctl start crs
                  CRS-4123: Oracle High Availability Services has been started.
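
At this point it can be worth confirming that the stack is up on both nodes (an optional sanity check, not shown in the original article):

    ## Verify CRS is running on every node, using the 12c home
    /crs/app/oracle/product/grid12c/bin/crsctl check cluster -all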


7. Remove the MGMTDB service from the cluster:


                    oracle@dbhost1:~$ srvctl remove mgmtdb
                    Remove the database _mgmtdb? (y/[n]) y
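
Optionally (a suggestion, not in the original steps), confirm the GIMR registration is gone before recreating it later:

    ## After removal, this should report that the management database is no longer configured
    srvctl config mgmtdb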


8. Check the CRS software version and active version on both nodes:
                      oracle@dbhost1:~$ crsctl query crs softwareversion
                      Oracle Clusterware version on node [dbhost1] is [12.1.0.2.0]


                      oracle@dbhost1:~$ crsctl query crs activeversion


                      Oracle Clusterware active version on the cluster is [12.1.0.2.0]
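
As an additional sanity check (a suggestion, not in the original write-up), list the cluster resources to confirm everything came back online under the 12c stack:

    ## Show the state of all cluster resources
    /crs/app/oracle/product/grid12c/bin/crsctl stat res -t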


9. Create the MGMTDB container database:
The mgmtdb will be created in the +MGMT disk group, so make sure the +MGMT disk group is mounted first (see the quick check below).
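A quick way to verify the disk group state (a hedged suggestion; asmcmd is assumed to run as the grid owner with the ASM environment set):

    ## Confirm the +MGMT disk group is mounted before running dbca
    asmcmd lsdg MGMT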
/crs/app/oracle/product/grid12c/bin/dbca -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName +MGMT -datafileJarLocation /crs/app/oracle/product/grid12c/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
Registering database with Oracle Grid Infrastructure
5% complete
Copying database files
7% complete
9% complete
16% complete
23% complete
30% complete
41% complete
Creating and starting Oracle instance
43% complete
48% complete
49% complete
50% complete
55% complete
60% complete
61% complete
64% complete
Completing Database Creation
68% complete
79% complete
89% complete
100% complete
Look at the log file "/crs/app/grid/cfgtoollogs/dbca/_mgmtdb/_mgmtdb1.log" for further details.

10. Create the MGMTDB PDB:
/crs/app/oracle/product/grid12c/bin/dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName cluster_name -createPDBFromRMANBACKUP -PDBBackUpfile /crs/app/oracle/product/grid12c/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile /crs/app/oracle/product/grid12c/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck
Creating Pluggable Database
4% complete
12% complete
21% complete
38% complete
55% complete
85% complete
Completing Pluggable Database Creation
100% complete
Look at the log file "/crs/app/grid/cfgtoollogs/dbca/_mgmtdb/cluster_name/_mgmtdb.log" for further details.

oracle@dbhost1:...app/oracle/product/grid12c/bin$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node dbhost1
At this point the 19c GRID has been successfully downgraded to 12.1.0.2 GRID.
Overall, if circumstances allow, the better approach is to take a full backup, build a fresh environment, and import the data into it. That avoids the problems that can crop up during a downgrade and also protects data integrity and security.






