- Preface: A freshly installed 19c RAC on IPv6 hit a bug because the subnet prefix length was longer or shorter than /64. After discussion, the only option was to switch the cluster to IPv4 networking, configure an IPv6 listener, and switch back once the bug is fixed.
The full change procedure is recorded below, for reference only:
一. Backups before the change
- 1. Back up profile.xml
This file cannot be edited directly with vi; it can only be modified through gpnptool and written back. Location: /u01/app/19.3.0/grid/gpnp/xydb6node1/profiles/peer/
su - grid
cd $ORACLE_HOME/gpnp/`hostname`/profiles/peer/
cp -r profile.xml profile.xml.bak
- 2. Back up the OCR
Export a backup of the OCR; this must be run as root.
[root@xydb6node1 ~]# /u01/app/19.3.0/grid/bin/ocrconfig -export /home/grid/ocr_bak_20200303.exp
- 3. Back up the OLR
Manually trigger an OLR backup and keep an extra copy as a temporary backup.
[root@xydb6node2 ~]# ocrconfig -local -manualbackup
xydb6node2 2020/03/05 21:50:53 /u01/app/grid/crsdata/xydb6node2/olr/backup_20200305_215053.olr 724960844
xydb6node2 2020/02/28 04:35:44 /u01/app/grid/crsdata/xydb6node2/olr/backup_20200228_043544.olr 724960844
[root@xydb6node2 ~]# cp -r /u01/app/grid/crsdata/xydb6node2/olr/backup_20200305_215053.olr /home/grid/olr.bak
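The three backups above can be wrapped in a small helper. This is a sketch, not the author's script: the backup directory, the dry-run switch, and the file list are assumptions, and the Oracle commands are only echoed so the script is safe to review before running for real.

```shell
#!/bin/sh
# Hypothetical pre-change backup helper; adapt paths to your environment.
BACKUP_DIR="${BACKUP_DIR:-./netchange_backup}"   # assumed target directory
DRY_RUN="${DRY_RUN:-1}"                          # 1 = only print Oracle commands

backup_file() {
    # copy a file into $BACKUP_DIR as <name>.bak
    mkdir -p "$BACKUP_DIR"
    cp -p "$1" "$BACKUP_DIR/$(basename "$1").bak"
}

run() {
    # echo instead of executing when DRY_RUN=1
    if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

backup_file /etc/hosts
run /u01/app/19.3.0/grid/bin/ocrconfig -export "$BACKUP_DIR/ocr_bak.exp"
run ocrconfig -local -manualbackup
```

Set DRY_RUN=0 only after reviewing the printed commands; the profile.xml copy from step 1 still has to be done as grid.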
二、IP mapping before and after the change
IPv6 (before) | IPv4 (after) | HOSTNAME |
---|---|---|
::1 | 127.0.0.1 | localhost localhost.localdomain |
2409:8760:1282:0001:0F11:0000:0000:0047 | 192.168.122.71 | xydb6node1 |
2409:8760:1282:0001:0F11:0000:0000:0048 | 192.168.122.72 | xydb6node2 |
2409:8760:1282:0001:0F11:0000:0000:0049 | 192.168.122.73 | xydb6node1-vip |
2409:8760:1282:0001:0F11:0000:0000:004A | 192.168.122.74 | xydb6node2-vip |
fd17:625c:f037:a801:51f6:635a:fa15:5871 | 1.1.4.73 | xydb6node1-priv |
fd17:625c:f037:a801:51f6:635a:fa15:5872 | 1.1.4.74 | xydb6node2-priv |
2409:8760:1282:0001:0F11:0000:0000:004B | 192.168.122.75 | xydb6-scan |
三、Changing the Public IP
- 1. Check the cluster's interface configuration; it is currently IPv6.
[root@xydb6node1 ~]# oifcfg getif
bond0 2409:8760:1282:1:0:0:0:0 global public
bond1 fd17:625c:f037:a801:0:0:0:0 global cluster_interconnect,asm
- 2. Delete the public interface with oifcfg, then add the new IPv4 subnet.
[root@xydb6node1 ~]# oifcfg delif -global bond0/2409:8760:1282:1:0:0:0:0
[root@xydb6node1 ~]# oifcfg setif -global bond0/192.168.122.0:public
[root@xydb6node1 ~]# oifcfg getif
bond1 fd17:625c:f037:a801:0:0:0:0 global cluster_interconnect,asm
bond0 192.168.122.0 global public
As shown above, the public subnet has been updated.
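Note that oifcfg takes the subnet number, not a host address (hence 192.168.122.0 above). A small sketch, nothing Oracle-specific, to derive the network address from a host IP and prefix length; it also explains why the private subnet appears later as 1.1.4.72 for the /29 private range:

```shell
# derive the IPv4 network address (what oifcfg setif expects)
# from a host address and a prefix length
net_addr() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    host=$(( (a << 24) + (b << 16) + (c << 8) + d ))
    # clear the host bits below the prefix
    net=$(( host & ~((1 << (32 - $2)) - 1) ))
    echo "$(( (net >> 24) & 255 )).$(( (net >> 16) & 255 )).$(( (net >> 8) & 255 )).$(( net & 255 ))"
}

net_addr 192.168.122.71 24   # public host  -> subnet 192.168.122.0
net_addr 1.1.4.73 29         # private host -> subnet 1.1.4.72
```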
- 3. Update the public IPs in /etc/hosts on nodes 1 and 2; back the file up first to make switching back to IPv6 easier later.
[root@xydb6node1 ~]# cp -r /etc/hosts /etc/hosts.ipv6
[root@xydb6node1 ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain
#public IPs after the change
192.168.122.71 xydb6node1
192.168.122.72 xydb6node2
#Remember to change it on both node 1 and node 2.
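Before moving on, it is worth verifying that the edited file maps every name to its new IPv4 address. A sketch (the `check` helper is ours, and it runs against a sample copy here; in practice point it at /etc/hosts):

```shell
# check that a hosts file maps hostname $2 to IPv4 address $1
check() {
    grep -qE "^$1[[:space:]]+$2([[:space:]]|\$)" "$3"
}

# demonstrate on a sample copy of the entries above
cat > hosts.sample <<'EOF'
127.0.0.1 localhost localhost.localdomain
192.168.122.71 xydb6node1
192.168.122.72 xydb6node2
EOF

check 192.168.122.71 xydb6node1 hosts.sample && echo "xydb6node1 OK"
check 192.168.122.72 xydb6node2 hosts.sample && echo "xydb6node2 OK"
```

Run the same checks on every node after editing, since each node keeps its own copy of /etc/hosts.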
- 4. Update the public IP on the physical NIC. Location: /etc/sysconfig/network-scripts/ifcfg-bond0
Since we are reusing an old IPv4 address that is already configured on the interface, this step can be skipped; it is only needed when switching to a different IPv4 address.
After editing, restart the network service for the change to take effect:
[root@xydb6node1 ~]# service network restart
Restarting network (via systemctl): [ OK ]
#Here IPv4 and IPv6 are both configured on the same NIC; the two can coexist on one interface without conflict.
BOOTPROTO=static
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
IPADDR=192.168.122.72
PREFIX=24
GATEWAY=192.168.122.254
IPV6INIT=yes
IPV6_FAILURE_FATAL=no
IPV6ADDR=2409:8760:1282:0001:0F11:0000:0000:0048/120
IPV6_DEFAULTGW=2409:8760:1282:0001:0F11:0000:0000:00FF
BONDING_OPTS="miimon=100 mode=1"
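To confirm a dual-stack ifcfg file really defines both address families, a quick grep check helps. A sketch: it writes a trimmed sample mirroring the config above rather than touching /etc/sysconfig, and the `dual_stack` helper name is ours.

```shell
# write a sample ifcfg fragment and check both IPADDR and IPV6ADDR are set
cat > ifcfg-bond0.sample <<'EOF'
BOOTPROTO=static
IPADDR=192.168.122.72
IPV6INIT=yes
IPV6ADDR=2409:8760:1282:0001:0F11:0000:0000:0048/120
EOF

dual_stack() {
    grep -q '^IPADDR=' "$1" && grep -q '^IPV6ADDR=' "$1"
}

dual_stack ifcfg-bond0.sample && echo "dual-stack: yes" || echo "dual-stack: no"
```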
四、Changing the VIP addresses
- 1. Stop the instance and VIP resources on node 1. Since no DB instance has been created yet, only the VIP resource is stopped.
[root@xydb6node1 ~]# srvctl stop vip -n xydb6node1 -f
- 2. View the old VIP configuration
[grid@xydb6node1 ~]$ srvctl config nodeapps -a
Network 1 exists
Subnet IPv4:
Subnet IPv6: 2409:8760:1282:1:0:0:0:0/64/bond0, static
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node xydb6node1
VIP Name: xydb6node1-vip
VIP IPv4 Address:
VIP IPv6 Address: 2409:8760:1282:1:f11:0:0:49
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node xydb6node2
VIP Name: xydb6node2-vip
VIP IPv4 Address:
VIP IPv6 Address: 2409:8760:1282:1:f11:0:0:4a
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
- 3. Update the VIP addresses in /etc/hosts
192.168.122.73 xydb6node1-vip
192.168.122.74 xydb6node2-vip
- 4. Modify the VIP resource as root
[root@xydb6node1 ~]# srvctl modify nodeapps -n xydb6node1 -A 192.168.122.73/255.255.255.0/bond0
#Verify the change
[root@xydb6node1 ~]# srvctl config nodeapps -a
Network 1 exists
Subnet IPv4: 192.168.122.0/255.255.255.0/bond0, static (inactive)
Subnet IPv6: 2409:8760:1282:1:0:0:0:0/64/bond0, static ========> this entry is a problem, explained later
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node xydb6node1
VIP Name: xydb6node1-vip
VIP IPv4 Address: 192.168.122.73 (inactive)
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node xydb6node2
VIP Name: xydb6node2-vip
VIP IPv4 Address: 192.168.122.74 (inactive)
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
As shown above, the VIP addresses have been updated.
- 5. Repeat steps 1-4 on node 2; with more nodes, run them on every node.
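The per-node steps can be scripted. A dry-run sketch (node names and VIPs come from the mapping table; the function name is ours, and it only prints the srvctl commands so they can be reviewed before running):

```shell
# print the per-node VIP change commands for review (dry run)
# format of each entry: nodename:new_vip
vip_change_cmds() {
    for pair in xydb6node1:192.168.122.73 xydb6node2:192.168.122.74; do
        node=${pair%%:*}
        vip=${pair##*:}
        echo "srvctl stop vip -n $node -f"
        echo "srvctl modify nodeapps -n $node -A $vip/255.255.255.0/bond0"
    done
}

vip_change_cmds
```

Pipe the output into a shell (as root) only after checking every line, and remember /etc/hosts has to be edited on each node separately.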
五、Changing the SCAN IP
- 1. Stop the SCAN resources as root
[root@xydb6node1 ~]# srvctl stop scan_listener
[root@xydb6node1 ~]# srvctl stop scan
- 2. Update the SCAN IP in /etc/hosts
192.168.122.75 xydb6-scan
- 3. Modify the SCAN IP resource as root
[root@xydb6node1 ~]# srvctl modify scan -n 192.168.122.75
#Verify the change
[root@xydb6node1 ~]# srvctl config scan
SCAN name: 192.168.122.75, Network: 1
Subnet IPv4: 192.168.122.0/255.255.255.0/bond0, static (inactive)
Subnet IPv6: 2409:8760:1282:1:0:0:0:0/64/bond0, static ======> same problem here
SCAN 1 IPv4 VIP: 192.168.122.75 (inactive)
SCAN VIP is enabled.
- 4. Update the scan_listener
#Update SCAN listeners to match the number of SCAN VIPs
[root@xydb6node1 ~]# srvctl modify scan_listener -update
- 5. Update the SCAN IP entry in /etc/hosts on node 2; that is the only step needed there.
六、Changing the Private IP
- 1. Check the old private subnet information
[root@xydb6node1 ~]# oifcfg getif
bond1 fd17:625c:f037:a801:0:0:0:0 global cluster_interconnect,asm
bond0 192.168.122.0 global public
- 2. To avoid split-brain while the private IP is being changed, stop node 2 first.
[root@xydb6node2 ~]# crsctl stop has -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'xydb6node2'
CRS-2673: Attempting to stop 'ora.crsd' on 'xydb6node2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'xydb6node2'
CRS-2673: Attempting to stop 'ora.cvu' on 'xydb6node2'
......
CRS-2673: Attempting to stop 'ora.cssd' on 'xydb6node2'
CRS-2677: Stop of 'ora.cssd' on 'xydb6node2' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'xydb6node2'
CRS-2673: Attempting to stop 'ora.gipcd' on 'xydb6node2'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'xydb6node2'
CRS-2677: Stop of 'ora.driver.afd' on 'xydb6node2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'xydb6node2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'xydb6node2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'xydb6node2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
- 3. Try adding an IPv4 private subnet first and then removing the IPv6 one; deleting the private interface outright fails, so a new one has to be added first.
[grid@xydb6node1 ~]$ oifcfg setif -global bond1/1.1.4.64:cluster_interconnect,asm
#The command fails with:
PRIF-38: Both IPv4 and IPv6 address types are not supported for cluster interconnect
The error means IPv4 and IPv6 cannot be mixed on the cluster interconnect. A colleague mentioned that when going from IPv4 to IPv6 the new subnet can be added directly; that still needs verifying. Here we solve it another way.
#Try deleting the private interface directly
[root@xydb6node1 ~]# oifcfg delif -global bond1/fd17:625c:f037:a801:0:0:0:0
#Still rejected:
PRIF-31: Failed to delete the specified network interface because it is the last private interface
- 4. The network interface configuration lives in the GPnP profile.xml, which can be modified with gpnptool. A quick recap of the GPnP profile:
The GPnP profile is an XML file located at
GRID_HOME/gpnp/`hostname`/profiles/peer/. Each cluster node maintains a local copy, kept up to date by the GPnP and mDNS daemons. GPnP records a node's metadata about the public and private interconnect interfaces, the ASM parameter file, and the CSS voting disks. The profile is protected against modification by a wallet: if it has to be edited by hand, it must be modified with GRID_HOME/bin/gpnptool and then signed again with the wallet before it can be used.
To make this concrete, here is the formatted profile.xml; only the important parts are shown:
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd"
ProfileSequence="1" ClusterUId="79249565213acfd3bf5f3725fbca8f92"
ClusterName="xydb6-cluster" PALocation="">
<gpnp:Network-Profile>
  <gpnp:HostNetwork id="gen" HostName="*">
    <gpnp:Network id="net1" IP="2409:8760:1282:1:0:0:0:0" Adapter="bond0" Use="public"/>
    <gpnp:Network id="net2" IP="fd17:625c:f037:a801:0:0:0:0" Adapter="bond1" Use="asm,cluster_interconnect"/>
  </gpnp:HostNetwork>
</gpnp:Network-Profile>
As shown, the profile contains the cluster UId, the cluster name, the public and private interfaces, the profile signature, and so on.
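A quick way to pull the interface entries out of profile.xml without a full XML parser is a grep/sed one-liner. A sketch over a trimmed sample of the elements shown above (in production, run it against the real profile.xml):

```shell
# extract id/IP/Adapter/Use from each <gpnp:Network .../> element
cat > profile.sample <<'EOF'
<gpnp:Network id="net1" IP="2409:8760:1282:1:0:0:0:0" Adapter="bond0" Use="public"/>
<gpnp:Network id="net2" IP="fd17:625c:f037:a801:0:0:0:0" Adapter="bond1" Use="asm,cluster_interconnect"/>
EOF

# keep everything between the tag name and the closing "/>"
grep -o '<gpnp:Network [^/]*' profile.sample |
  sed 's/<gpnp:Network //'
```

This makes it easy to confirm which id (-net1, -net2) belongs to which adapter before running gpnptool edit.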
- 5. Modify the private interface with gpnptool, as follows:
#1. Stop node 1 first
[root@xydb6node1 peer]# crsctl stop has -f
#2. Go to the profile directory
[root@xydb6node1 peer]# cd /u01/app/19.3.0/grid/gpnp/xydb6node1/profiles/peer/
#3. A backup was taken at the start, but the public network has been changed since, so back it up again
[root@xydb6node1 peer]# cp -r profile.xml profile.xml.bak_20200306
#Rename it
[root@xydb6node1 peer]# mv profile.xml profile_tmp.xml
#4. Strip the signature
[root@xydb6node1 peer]# gpnptool unsign -p=profile_tmp.xml
#5. Change the private network; -net2 comes from Network id="net2" in the profile
[root@xydb6node1 peer]# gpnptool edit -net2:net_ip='1.1.4.72' -p=profile_tmp.xml -o=profile_tmp.xml -ovr
#5.1. Change the public network; it was already changed with oifcfg, so this is optional and harmless. -net1 comes from Network id="net1" in the profile
[root@xydb6node1 peer]# gpnptool edit -net1:net_ip='192.168.122.0' -p=profile_tmp.xml -o=profile_tmp.xml -ovr
#6. Sign the file with the wallet's private key
[root@xydb6node1 peer]# gpnptool sign -p=profile_tmp.xml -w=file:/u01/app/19.3.0/grid/gpnp/xydb6node1/wallets/peer/ -o=profile.xml
[root@xydb6node1 peer]# gpnptool sign -p=profile_tmp.xml -w=file:/u01/app/19.3.0/grid/gpnp/xydb6node2/wallets/peer/ -o=profile.xml
#7. Fix the owner, group, and permissions
[root@xydb6node1 peer]# chown grid:oinstall profile.xml
[root@xydb6node1 peer]# chmod 644 profile.xml
- 6. After the change, start CRS on node 1
[root@xydb6node1 ~]# crsctl start has
CRS-4123: Oracle High Availability Services has been started.
#Check the private subnet; it has been updated
[root@xydb6node1 peer]# oifcfg getif
bond1 1.1.4.72 global cluster_interconnect,asm
bond0 192.168.122.0 global public
#Do not rush to change node 2; leave it down until node 1 is confirmed healthy.
- 7. Check the node's resource status: the vip, scan, and network resources have not come up, as shown below
[root@xydb6node1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE OFFLINE xydb6node1 STABLE
ora.chad
ONLINE ONLINE xydb6node1 STABLE
ora.net1.network
ONLINE ONLINE xydb6node1 STABLE
ora.on
ONLINE ONLINE xydb6node1 STABLE
ora.proxy_advm
OFFLINE OFFLINE xydb6node1 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 STABLE
2 ONLINE OFFLINE STABLE
3 ONLINE OFFLINE STABLE
ora.CRSDG.dg(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 STABLE
2 OFFLINE OFFLINE STABLE
3 OFFLINE OFFLINE STABLE
ora.DATADG1.dg(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 STABLE
2 OFFLINE OFFLINE STABLE
3 OFFLINE OFFLINE STABLE
ora.FRADG.dg(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 STABLE
2 OFFLINE OFFLINE STABLE
3 OFFLINE OFFLINE STABLE
ora.LISTENER_SCAN1.lsnr
1 OFFLINE OFFLINE STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 Started,STABLE
2 ONLINE OFFLINE STABLE
3 ONLINE OFFLINE STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 STABLE
2 ONLINE OFFLINE STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE xydb6node1 STABLE
ora.qosmserver
1 ONLINE ONLINE xydb6node1 STABLE
ora.scan1.vip
1 ONLINE OFFLINE STABLE
ora.xydb6node1.vip
1 ONLINE OFFLINE STABLE
ora.xydb6node2.vip
1 ONLINE OFFLINE STABLE
#1. Check the alert log; it only reports the failures, not the cause
2020-03-05 19:17:52.916 [ORAAGENT(347421)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 347421
2020-03-05 19:17:52.991 [ORAROOTAGENT(347434)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 347434
2020-03-05 19:17:58.097 [CRSD(347258)]CRS-2772: Server 'xydb6node1' has been assigned to pool 'Free'.
2020-03-05 19:18:02.955 [CRSD(347258)]CRS-2807: Resource 'ora.LISTENER.lsnr' failed to start automatically.
2020-03-05 19:18:02.955 [CRSD(347258)]CRS-2807: Resource 'ora.scan1.vip' failed to start automatically.
2020-03-05 19:18:02.956 [CRSD(347258)]CRS-2807: Resource 'ora.xydb6node1.vip' failed to start automatically.
2020-03-05 19:18:02.956 [CRSD(347258)]CRS-2807: Resource 'ora.xydb6node2.vip' failed to start automatically.
#2. Try starting the VIP resource with srvctl; it still fails:
[root@xydb6node1 ~]# srvctl start vip -n xydb6node1
PRCR-1079 : Failed to start resource ora.xydb6node1.vip
CRS-5052: invalid host name or IP address 'xydb6node1-vip'
CRS-2674: Start of 'ora.xydb6node1.vip' on 'xydb6node1' failed
CRS-2632: There are no more servers to try to place resource 'ora.xydb6node1.vip' on that would satisfy its placement policy
#3. Check the node configuration again: the VIPs and netmask were changed correctly, only the IPv6 subnet remains. On a known-good IPv4 cluster the Subnet IPv6 field is empty, so that is the likely cause; let's try to remove it.
[root@xydb6node1 ~]# srvctl config nodeapps
Network 1 exists
Subnet IPv4: 192.168.122.0/255.255.255.0/bond0, static (inactive)
Subnet IPv6: 2409:8760:1282:1:0:0:0:0/64/bond0, static ===> the problem point mentioned earlier
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node xydb6node1
VIP Name: xydb6node1-vip
VIP IPv4 Address: 192.168.122.73 (inactive)
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node xydb6node2
VIP Name: xydb6node2-vip
VIP IPv4 Address: 192.168.122.74 (inactive)
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL true
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:
#4. Check the network resource and find a way to drop the IPv6 subnet from it; per the documentation, the network can be removed and re-added.
[root@xydb6node1 ~]# srvctl config network
Network 1 exists
Subnet IPv4: 192.168.122.0/255.255.255.0/bond0, static (inactive)
Subnet IPv6: 2409:8760:1282:1:0:0:0:0/64/bond0, static
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
#5.remove network
[root@xydb6node1 ~]# srvctl remove network -all
PRCR-1025 : Resource ora.net1.network is still running
#It cannot be removed while running; retry with -force
[root@xydb6node1 ~]# srvctl remove network -all -force
#6. add network; the command prints nothing on success
[root@xydb6node1 ~]# srvctl add network -subnet 192.168.122.0/255.255.255.0/bond0
#7. Check the network again; it is clean now, which finally looked promising.
[root@xydb6node1 ~]# srvctl config network
Network 1 exists
Subnet IPv4: 192.168.122.0/255.255.255.0/bond0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
#8. nodeapps is now clean as well
[root@xydb6node1 ~]# srvctl config nodeapps
Network 1 exists
Subnet IPv4: 192.168.122.0/255.255.255.0/bond0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node xydb6node1
VIP Name: xydb6node1-vip
VIP IPv4 Address: 192.168.122.73
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node xydb6node2
VIP Name: xydb6node2-vip
VIP IPv4 Address: 192.168.122.74
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL true
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:
#9. Check the cluster resource status; the VIP resources have come back up automatically:
[root@xydb6node1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
。。。。。。
ora.scan1.vip
1 ONLINE ONLINE xydb6node1 STABLE
ora.xydb6node1.vip
1 ONLINE ONLINE xydb6node1 STABLE
ora.xydb6node2.vip
1 ONLINE INTERMEDIATE xydb6node1 FAILED OVER,STABLE
#10. That seemed to be everything, but a further check showed the asmnetwork had not been changed. It cannot be removed directly; a new one must be added first and the old one removed afterwards:
Note: this must be done while the cluster is up and running.
#netnum 1 already exists, so add netnum 2.
[root@xydb6node1 ~]# srvctl add asmnetwork -netnum 2 -subnet 1.1.4.72/255.255.255.248
--Verify it was added
[root@xydb6node1 network-scripts]# srvctl config asmnetwork
ASM network 1 exists
Subnet IPv4:
Subnet IPv6: fd17:625c:f037:a801:0:0:0:0//
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
ASM network 2 exists
Subnet IPv4: 1.1.4.72/255.255.255.248/
Subnet IPv6:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
--It is not started automatically after being added:
[root@xydb6node1 network-scripts]# srvctl status asmnetwork
ASM network is running on xydb6node2,xydb6node1
ASM network 2 is not running.
--Start ASM network 2
[root@xydb6node1 network-scripts]# srvctl start asmnetwork -netnum 2
--Check the status; it is running now
[root@xydb6node1 network-scripts]# srvctl status asmnetwork
ASM network is running on xydb6node2,xydb6node1
ASM network is running on xydb6node2,xydb6node1
--Add the ASM listener
[root@xydb6node1 network-scripts]# srvctl add listener -asmlistener -netnum 2 -listener listener2
--View the new listener configuration; IPv6 and IPv4 coexist here.
[root@xydb6node1 network-scripts]# srvctl config listener -asmlistener
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: grid
Subnet: fd17:625c:f037:a801:0:0:0:0
Home: <CRS home>
End points: TCP:1525
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
Name: LISTENER2_ASM
Type: ASM Listener
Owner: grid
Subnet: 1.1.4.72
Home: <CRS home>
End points: TCP:1526
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
--Start the new ASM listener resource
[root@xydb6node1 network-scripts]# crsctl start res -w "NAME = ora.LISTENER2_ASM.lsnr"
CRS-2672: Attempting to start 'ora.LISTENER2_ASM.lsnr' on 'xydb6node2'
CRS-2672: Attempting to start 'ora.LISTENER2_ASM.lsnr' on 'xydb6node1'
CRS-2676: Start of 'ora.LISTENER2_ASM.lsnr' on 'xydb6node2' succeeded
CRS-2676: Start of 'ora.LISTENER2_ASM.lsnr' on 'xydb6node1' succeeded
--Stop the old ASM listener resource
[root@xydb6node1 network-scripts]# crsctl stop res -w "NAME = ora.ASMNET1LSNR_ASM.lsnr"
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'xydb6node1'
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'xydb6node2'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'xydb6node2' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'xydb6node1' succeeded
--Stop ASM network 1
[root@xydb6node1 network-scripts]# srvctl stop asmnetwork -netnum 1
--Remove the IPv6 ASM listener configuration
[root@xydb6node1 network-scripts]# srvctl remove listener -listener ASMNET1LSNR_ASM
--Remove ASM network 1
[root@xydb6node1 network-scripts]# srvctl remove asmnetwork -netnum 1
--Stop the cluster on nodes 1 and 2
[root@xydb6node1 network-scripts]# crsctl stop has -f
--Change the private NIC IPs on nodes 1 and 2, then restart the network service
[root@xydb6node1 network-scripts]# service network restart
--Start the cluster on nodes 1 and 2
[root@xydb6node1 network-scripts]# crsctl start has
--Check the cluster resource status; everything is online.
[root@xydb6node1 network-scripts]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE xydb6node1 STABLE
ONLINE ONLINE xydb6node2 STABLE
ora.chad
ONLINE ONLINE xydb6node1 STABLE
ONLINE ONLINE xydb6node2 STABLE
ora.net1.network
ONLINE ONLINE xydb6node1 STABLE
ONLINE ONLINE xydb6node2 STABLE
ora.on
ONLINE ONLINE xydb6node1 STABLE
ONLINE ONLINE xydb6node2 STABLE
ora.proxy_advm
OFFLINE OFFLINE xydb6node1 STABLE
OFFLINE OFFLINE xydb6node2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 STABLE
2 ONLINE ONLINE xydb6node2 STABLE
3 OFFLINE OFFLINE STABLE
ora.DATADG1.dg(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 STABLE
2 ONLINE ONLINE xydb6node2 STABLE
3 OFFLINE OFFLINE STABLE
ora.FRADG.dg(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 STABLE
2 ONLINE ONLINE xydb6node2 STABLE
3 OFFLINE OFFLINE STABLE
ora.LISTENER2_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 STABLE
2 ONLINE ONLINE xydb6node2 STABLE
3 ONLINE OFFLINE STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE xydb6node1 STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 Started,STABLE
2 ONLINE ONLINE xydb6node2 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.asmnet2.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE xydb6node1 STABLE
2 ONLINE ONLINE xydb6node2 STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE xydb6node1 STABLE
ora.qosmserver
1 ONLINE ONLINE xydb6node1 STABLE
ora.scan1.vip
1 ONLINE ONLINE xydb6node1 STABLE
ora.xydb6node1.vip
1 ONLINE ONLINE xydb6node1 STABLE
ora.xydb6node2.vip
1 ONLINE ONLINE xydb6node2 STABLE
--------------------------------------------------------------------------------
--Check listener_networks; it now listens on IPv4. With that, the private IP change is complete.
[grid@xydb6node1 ~]$ sqlplus / as sysasm
SQL> show parameter listener
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
forward_listener                     string
listener_networks                    string      ((NAME=ora.LISTENER2_ASM.lsnr)(LOCAL_LISTENER="(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=1.1.4.73)(PORT=1526)))")),
                                                 ((NAME=ora.LISTENER2_ASM.lsnr)(REMOTE_LISTENER="(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=1.1.4.73)(PORT=1526)))")),
                                                 ((NAME=ora.LISTENER2_ASM.lsnr)(REMOTE_LISTENER="(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=1.1.4.74)(PORT=1526)))"))
local_listener                       string      (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.122.73)(PORT=1521))
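The ASM network migration in step 10 boils down to an add-new / start / switch-listener / retire-old sequence. A dry-run sketch that just prints the commands in order for review (the function name is ours; names and subnets are the ones used above):

```shell
# print the ASM network migration sequence for review (dry run)
asm_net_migration_cmds() {
    # add the IPv4 ASM network and its listener first
    echo "srvctl add asmnetwork -netnum 2 -subnet 1.1.4.72/255.255.255.248"
    echo "srvctl start asmnetwork -netnum 2"
    echo "srvctl add listener -asmlistener -netnum 2 -listener listener2"
    echo "crsctl start res -w \"NAME = ora.LISTENER2_ASM.lsnr\""
    # then retire the IPv6 listener and network
    echo "crsctl stop res -w \"NAME = ora.ASMNET1LSNR_ASM.lsnr\""
    echo "srvctl stop asmnetwork -netnum 1"
    echo "srvctl remove listener -listener ASMNET1LSNR_ASM"
    echo "srvctl remove asmnetwork -netnum 1"
}

asm_net_migration_cmds
```

Order matters: the new listener must be up before the old one is stopped, and the cluster has to be running throughout.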
Summary
1. Changing IPs in 19c works much the same as in 11g; the only real difference is the private network, where the ASM-related configuration also has to be changed.
2. The whole change took a while, mainly because some of the topics were unfamiliar; checking the documentation along the way showed that most of them are covered in the official 19c docs.
Last modified: 2020-03-11 04:07:56
[Copyright notice] This is original content by a Modb (墨天轮) user; reproduction must credit the source (Modb), the article link, and the author.