一.1.1 Node layer
olsnodes
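A typical usage sketch (standard 11.2 olsnodes flags: -n node number, -i VIP, -s status):
$ olsnodes -n -i -s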
一.1.2 Network layer
oifcfg
The four subcommands can be listed with oifcfg -help:
iflist    show the list of network interfaces
getif     show the configuration of a given interface
setif     configure an interface
delif     delete an interface
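A minimal getif/setif sketch; the 192.168.3.0 public subnet matches this environment, while eth1/10.0.0.0 as the interconnect is only an assumed example:
$ oifcfg getif
eth0  192.168.3.0  global  public
eth1  10.0.0.0  global  cluster_interconnect
$ oifcfg setif -global eth1/10.0.0.0:cluster_interconnect    (register eth1 as the cluster interconnect)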
一.1.3 Cluster layer
crsctl, ocrcheck, ocrdump, ocrconfig
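For reference, typical invocations of the OCR tools (standard 11.2 commands, run as root; the dump path is just an example):
# ocrcheck                  (verify OCR integrity and show its location)
# ocrdump /tmp/ocr.txt      (dump the OCR contents to a text file)
# ocrconfig -showbackup     (list the automatic OCR backups)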
一.1.4 Application layer (resources)
srvctl, onsctl, crs_stat
一.1 RAC database and instances
Shutdown order: stop the database first → then stop the cluster stack → then shut down the machine
Startup order: automatic
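A minimal sequence illustrating this order, assuming the database name rac and the Grid home used elsewhere in this document:
$ srvctl stop database -d rac                          (as oracle: stop all instances of the database)
# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all    (as root: stop the cluster stack on all nodes)
# shutdown -h now                                      (shut down the machine)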
Status check command (run as the grid user):
crs_stat -t -v
Stopping the Oracle RAC environment
After the instance (and its related services) is shut down, shut down the ASM instance. Finally, shut down the node applications (virtual IP, GSD, TNS listener, and ONS).
$ export ORACLE_SID=orcl1
$ emctl stop dbconsole
$ srvctl stop instance -d rac -i rac1    stop one instance (i.e. the instance on this node)
$ srvctl start instance -d rac -i rac1    start one instance (i.e. the instance on this node)
$ srvctl stop database -d rac    stop the database (i.e. all instances of this database are stopped)
$ srvctl start database -d rac    start the database (i.e. all instances of this database are started)
srvctl stop/start/status instance -d rac -i rac1/rac2/rac1,rac2
srvctl stop/start/status database -d rac    (you can append the -o option to specify the startup or shutdown mode, e.g. -o mount, -o abort; see the examples below)
$ srvctl stop asm -n node1
$ srvctl stop nodeapps -n node1    (stop the node applications; note this returned an error during testing)
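Examples of the -o option mentioned above (standard srvctl startup/shutdown modes):
$ srvctl start database -d rac -o mount    (start all instances but only mount them)
$ srvctl stop database -d rac -o abort     (abort all instances)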
一.2 Oracle listener
1. Check the status
$ srvctl status listener    (shows the listener status on all nodes; on each node you can also check the conventional way: lsnrctl status)
[grid@rac1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac2,rac1
2. Start and stop
$ srvctl start listener -n rac2    (equivalent to running lsnrctl start as grid on node rac2)
$ srvctl stop listener -n rac1,rac2    (equivalent to running lsnrctl stop as grid on each of the two nodes)
Starting the Oracle RAC environment
The first step is to start the node applications (virtual IP, GSD, TNS listener, and ONS). Once the node applications have started successfully, start the ASM instance. Finally, start the Oracle instance (and related services) and the Enterprise Manager Database Console.
$ export ORACLE_SID=orcl1
$ srvctl start nodeapps -n rac1
$ srvctl start asm -n rac1
$ srvctl start instance -d orcl -i orcl1
$ emctl start dbconsole
一.3 ASM
[oracle@node1 ~]$ srvctl status asm
ASM is running on orcl1,orcl2
[grid@rac1 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
一.4 Enterprise Manager (EM)
$emctl start dbconsole
$emctl stop dbconsole
一.5 RAC cluster
1. Check
[grid@rac1 bin]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
2. Oracle RAC starts automatically on boot by default; use the following commands when maintenance is needed:
Stop:
crsctl stop cluster        stop the cluster services on this node (the database and instances stop automatically; equivalent to running crsctl stop crs on this node)
crsctl stop cluster -all   stop the cluster services on all nodes
Start:
crsctl start cluster        start the cluster services on this node (equivalent to running crsctl start crs on this node)
crsctl start cluster -all   start the cluster services on all nodes
Note: the commands above must be run as root, e.g. /u01/app/11.2.0/grid/bin/crsctl start cluster
That is, the brute-force shutdown (as root, stop the cluster directly):
/u01/app/11.2.0/grid/bin/crsctl stop cluster -all
Start:
/u01/app/11.2.0/grid/bin/crsctl start cluster -n rac1 rac2
3. srvctl start instance -d rac -i rac1    start one node (i.e. start the instance on that node)
一.6 RAC network
一.6.1 Changing the VIP
1. Check that the VIP exists
[grid@rac2 ~]$ srvctl config vip -n rac1
VIP exists: /rac1-vip/192.168.3.209/192.168.3.0/255.255.255.0/eth0, hosting node rac1
2. Stop the services running on this node
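For example (assuming database name rac; <service_name> is a placeholder for your own service):
$ srvctl stop service -d rac -s <service_name> -n rac1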
3. Stop the VIP resource on this node
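For example:
$ srvctl stop vip -n rac1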
4. Modify the VIP
Update /etc/hosts first
Then modify the VIP on the specified node with the following command:
# srvctl modify nodeapps -n rac1 -A IP/255.255.255.0/em1    (IP is a placeholder for the new VIP address)
5. Start the resource
$ srvctl start vip -n rac1
一.6.2 Changing the SCAN
1. Check the SCAN IP:
[grid@rac2 ~]$ srvctl config scan
SCAN name: rac-scan, Network: 1/192.168.3.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac-scan/192.168.3.212
Check which node the SCAN is running on:
[grid@rac2 ~]$ srvctl status scan -i 1
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac2
2. Stop the SCAN
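A sketch (stop the SCAN listener first, then the SCAN VIP):
$ srvctl stop scan_listener
$ srvctl stop scan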
3. Remove the SCAN and add it back
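A sketch, assuming the SCAN name rac-scan from above and that DNS or /etc/hosts already resolves the new address; remove/add must be run as root:
# srvctl remove scan -f    (-f forces removal even though a SCAN listener depends on it)
# srvctl add scan -n rac-scan
$ srvctl start scan
$ srvctl start scan_listener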
一.6.3 Changing the private and public IPs
1. Check
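For example, using the oifcfg tool from section 一.1.2:
$ oifcfg getif    (lists each interface with its subnet and whether it is public or the cluster_interconnect)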
一.7 RAC health checks
一.7.1 Run as the grid user
[grid@node1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
一.7.2 Check the database instance status
[oracle@node1 ~]$ srvctl status database -d rac
Instance rac1 is running on node node1
Instance rac2 is running on node node2
一.7.3 Check node application status and configuration
[oracle@node1 ~]$ srvctl status nodeapps
VIP node1vip is enabled
VIP node1vip is running on node: node1
VIP node2vip is enabled
VIP node2vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2
eONS is enabled
eONS daemon is running on node: node1
eONS daemon is running on node: node2
[oracle@node1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.: node1
VIP exists.: /node1vip/10.45.61.150/255.255.255.224/eth0
VIP exists.: node2
VIP exists.: /node2vip/10.45.61.151/255.255.255.224/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home:
/oracle/11.2.0/grid on node(s) node2,node1
End points: TCP:1521
一.7.4 Check the database configuration
[grid@rac1 ~]$ srvctl config database -d rac -a
Database unique name: rac
Database name: rac
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/rac/spfilerac.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: rac
Database instances: rac1,rac2
Disk Groups: DATA,FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed
一.7.5 Check ASM status and configuration
[oracle@node1 ~]$ srvctl status asm
ASM is running on orcl1,orcl2
[grid@rac1 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
一.7.6 Check the TNS listener status and configuration
[grid@rac1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac2,rac1
[grid@rac1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/11.2.0/grid on node(s) rac1,rac2
End points: TCP:1521
[grid@rac1 ~]$
一.7.7 Check the SCAN status and configuration
[oracle@node1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node1
[oracle@node1 ~]$ srvctl config scan
SCAN name: rac-scan, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac-scan
一.7.8 Check the VIP status and configuration
[oracle@node1 ~]$ srvctl status vip -n node1
VIP node1vip is enabled
VIP node1vip is running on node: node1
[oracle@node1 ~]$ srvctl status vip -n node2
VIP node2vip is enabled
VIP node2vip is running on node: node2
[oracle@rac1 ~]$ srvctl config vip -n node1
VIP exists.:orcl1
VIP exists.: /node1vip/10.45.61.129/255.255.255.224/eth0
[oracle@rac1 ~]$ srvctl config vip -n node2
VIP exists.:orcl2
VIP exists.: /node2vip/10.45.61.130/255.255.255.224/eth0