Software environment
Database:
p13390677_112040_AIX64-5L_1of7.zip
p13390677_112040_AIX64-5L_2of7.zip
Clusterware:
p13390677_112040_AIX64-5L_3of7.zip
These three packages are required.
Operating system: AIX 7.1
Database version: Oracle 11gR2
1. Base environment preparation (perform on both nodes)
==================================================
--------------------------------------------------
1.1.Operating system checks (session log: <nodename>_os_check.log)
--------------------------------------------------
1).OS version and kernel
====================
# bootinfo -K
# uname -s
# oslevel -s
====================
2).System software package checks
====================
a).Required filesets
--------------------
# lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat \
bos.perf.perfstat bos.perf.proctools xlC.rte
Note: xlC.rte 11.1.0.2 or later
--------------------
b).Java, C++, X Window System, ssh
--------------------
# lslpp -l | grep -i ssh
# lslpp -l | grep -i java
Note: Java 6 64-bit (java6_64bit) is recommended.
# lslpp -l | grep -i C++
Note: a C/C++ compiler at version 9.0 or later is recommended.
# lslpp -l | grep -i x11|grep -i dt
Note: X11 must include the following filesets:
X11.Dt.ToolTalk
X11.Dt.bitmaps
X11.Dt.helpmin
X11.Dt.helprun
X11.Dt.lib
X11.Dt.rte
====================
3).System patch checks
====================
--------------------
a).APARs
--------------------
IZ87216
IZ87564
IZ89165
IZ97035
# instfix -i -k "IZ87216 IZ87564 IZ89165 IZ97035"
Note: to install an interim fix (efix), use a command of the form (IZ89302.101121.epkg.Z is an example efix file):
# emgr -e IZ89302.101121.epkg.Z
--------------------
b).PTFs
--------------------
none
====================
4).Kernel parameter checks
====================
a).ncargs>=256
--------------------
# lsattr -El sys0 -a ncargs
ncargs 256 ARG/ENV list size in 4K byte blocks True
Note: to change it:
# chdev -l sys0 -a ncargs='256'
--------------------
b).maxuproc>=16384
--------------------
# lsattr -E -l sys0 -a maxuproc
maxuproc 16384 Maximum number of PROCESSES allowed per user True
Note: to change it:
# chdev -l sys0 -a maxuproc=16384
--------------------
c).aio_maxreqs>=65536
--------------------
# ioo -o aio_maxreqs
aio_maxreqs = 131072
Note: to change it:
# ioo -p -o aio_maxreqs=65536
====================
5).System resource limits
====================
Confirm that /etc/security/limits contains:
fsize = -1
core = -1
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
# more /etc/security/limits
Note: to change it:
# vi /etc/security/limits
====================
6).Network parameters and ports
====================
--------------------
a).Network parameters
--------------------
Network Preparation
=======================================
PARAMETER RECOMMENDED VALUE
ipqmaxlen 512
rfc1323 1
sb_max 41943040
tcp_recvspace 1048576
tcp_sendspace 1048576
udp_recvspace 20971520
udp_sendspace 2097152
Notes:
udp_recvspace: should be about 10x udp_sendspace, but must remain smaller than sb_max.
udp_sendspace: must be at least 4 KB + (db_block_size * db_multiblock_read_count).
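The sizing rule above can be checked with simple shell arithmetic; the db_block_size and db_multiblock_read_count values below are illustrative assumptions, not values read from any database:

```shell
# Minimum udp_sendspace per the rule above: 4 KB + db_block_size * db_multiblock_read_count.
# These two database parameters are assumed values for illustration.
db_block_size=8192
db_multiblock_read_count=128

udp_send_min=$(( 4096 + db_block_size * db_multiblock_read_count ))
udp_recv_min=$(( udp_send_min * 10 ))   # udp_recvspace ~ 10x udp_sendspace
sb_max=41943040

echo "udp_sendspace >= $udp_send_min"
echo "udp_recvspace >= $udp_recv_min (must stay below sb_max=$sb_max)"
```

With an 8 KB block size and a multiblock read count of 128, the computed minimums sit well below the recommended table values, so the recommendations above have headroom.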
--
List all parameters:
# no -a | more
Check individual parameters:
# no -a | fgrep ipqmaxlen
# no -a | fgrep rfc1323
# no -a | fgrep sb_max
# no -a | fgrep tcp_recvspace
# no -a | fgrep tcp_sendspace
# no -a | fgrep udp_recvspace
# no -a | fgrep udp_sendspace
If any value does not match the recommendation, change it:
no -r -o ipqmaxlen=512
no -p -o rfc1323=1
no -p -o sb_max=41943040
no -p -o tcp_recvspace=1048576
no -p -o tcp_sendspace=1048576
no -p -o udp_recvspace=20971520
no -p -o udp_sendspace=2097152
Alternatively, add the following to /etc/rc.net:
if [ -f /usr/sbin/no ] ; then
/usr/sbin/no -o udp_sendspace=2097152
/usr/sbin/no -o udp_recvspace=20971520
/usr/sbin/no -o tcp_sendspace=1048576
/usr/sbin/no -o tcp_recvspace=1048576
/usr/sbin/no -o rfc1323=1
/usr/sbin/no -o sb_max=41943040
/usr/sbin/no -o ipqmaxlen=512
fi
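The per-parameter checks above can be collapsed into one comparison. check_no_params is a hypothetical helper that reads the `name = value` lines printed by `no -a` on stdin, so the logic can be exercised off-box:

```shell
# Compare current network tunables (as printed by `no -a`) against the
# recommended values from the table above, printing only the mismatches.
check_no_params() {
    awk '
        BEGIN {
            rec["ipqmaxlen"]     = 512
            rec["rfc1323"]       = 1
            rec["sb_max"]        = 41943040
            rec["tcp_recvspace"] = 1048576
            rec["tcp_sendspace"] = 1048576
            rec["udp_recvspace"] = 20971520
            rec["udp_sendspace"] = 2097152
        }
        # lines look like: "name = value"; flag any value that differs
        $1 in rec && $3+0 != rec[$1] {
            printf "%s: current=%s recommended=%d\n", $1, $3, rec[$1]
        }
    '
}

# On AIX:  no -a | check_no_params
```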
--------------------
b).Ephemeral port range
--------------------
# no -a | fgrep ephemeral
tcp_ephemeral_high = 65500
tcp_ephemeral_low = 9000
udp_ephemeral_high = 65500
udp_ephemeral_low = 9000
To adjust:
# no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500
# no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500
====================
7).Virtual memory tuning
====================
Check:
# vmo -L minperm%
# vmo -L maxperm%
# vmo -L maxclient%
# vmo -L lru_file_repage    # no longer a tunable on AIX 7.1
# vmo -L strict_maxclient
# vmo -L strict_maxperm
Adjust (skip lru_file_repage on AIX 7.1, where it has been removed):
# vmo -p -o minperm%=3
# vmo -p -o maxperm%=90
# vmo -p -o maxclient%=90
# vmo -p -o lru_file_repage=0
# vmo -p -o strict_maxclient=1
# vmo -p -o strict_maxperm=0
====================
8).Memory and paging space
====================
--------------------
a).Check memory (at least 2.5 GB):
--------------------
# lsattr -E -l sys0 -a realmem
--------------------
b).Check paging space:
--------------------
# lsps -a
Note: with less than 16 GB of RAM, size the paging space equal to RAM; with 16 GB or more, use 16 GB.
# chps -s 10 hd6    (check the PP SIZE with `lsvg rootvg`; this grows hd6 by 10 PPs)
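The sizing note above can be expressed as a small helper; recommended_paging_mb is a hypothetical name, and working in MB is an assumption:

```shell
# Recommended paging space per the note above: equal to RAM when RAM is
# under 16 GB, capped at 16 GB otherwise. Input and output are in MB.
recommended_paging_mb() {
    ram_mb=$1
    cap_mb=$(( 16 * 1024 ))
    if [ "$ram_mb" -lt "$cap_mb" ]; then
        echo "$ram_mb"
    else
        echo "$cap_mb"
    fi
}

# On AIX, realmem from `lsattr -E -l sys0 -a realmem` is reported in KB:
#   recommended_paging_mb $(( realmem_kb / 1024 ))
```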
====================
9).File system space checks
====================
# df -g
The temporary file system needs at least 1 GB free;
the file system holding the installation software needs at least 50 GB.
====================
10).Reboot
====================
If any of the above parameters were changed during the checks, reboot before continuing:
# shutdown -Fr
--------------------------------------------------
1.2.Network planning
--------------------------------------------------
RAC nodes:
aix7tdb0: public 192.168.123.202, VIP 192.168.123.208, private 172.17.8.12
aix7tdb1: public 192.168.123.203, VIP 192.168.123.209, private 172.17.8.13
SCAN: 192.168.123.210
Edit /etc/hosts:
---------------------------------
192.168.123.202 aix7tdb0
192.168.123.203 aix7tdb1
172.17.8.12 aix7tdb0-priv
172.17.8.13 aix7tdb1-priv
192.168.123.208 aix7tdb0-vip
192.168.123.209 aix7tdb1-vip
192.168.123.210 rac-scan
---------------------------------
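As a sketch, the finished hosts file can be sanity-checked for missing or duplicated names; check_rac_hosts is a hypothetical helper hard-coded to the hostnames used above:

```shell
# Verify that every RAC hostname appears exactly once in a hosts-format
# file (passed as $1). Matches on the hostname field so that "aix7tdb0"
# does not also count the "aix7tdb0-priv" and "aix7tdb0-vip" lines.
check_rac_hosts() {
    hosts_file=$1
    rc=0
    for name in aix7tdb0 aix7tdb1 aix7tdb0-priv aix7tdb1-priv \
                aix7tdb0-vip aix7tdb1-vip rac-scan; do
        n=$(awk -v h="$name" '$2 == h { c++ } END { print c+0 }' "$hosts_file")
        if [ "$n" -ne 1 ]; then
            echo "BAD: $name appears $n times"
            rc=1
        fi
    done
    [ "$rc" -eq 0 ] && echo "hosts file OK"
    return $rc
}

# Usage: check_rac_hosts /etc/hosts
```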
--------------------------------------------------
1.3.Time synchronization
--------------------------------------------------
1).Confirm the time zone and NTP status
====================
# echo $TZ           # confirm the time zone matches the original production system
# lssrc -s xntpd     # check the NTP service status
# stopsrc -s xntpd   # stop the NTP service
====================
2).Time synchronization via ctssd
====================
## To use ctssd for time synchronization:
# mv /etc/ntp.conf /etc/ntp.conf.bak   # rename the NTP config so ctssd is not installed in observer mode
After Grid Infrastructure is installed, check as the grid user that the time synchronization service is active:
# su - grid
$ crsctl stat resource ora.ctssd -t -init
====================
3).Time synchronization via NTP
====================
To prevent NTP from stepping the clock backwards, edit the xntpd start line:
# vi /etc/rc.tcpip
start /usr/sbin/xntpd "$src_running" "-x"
## Start the xntpd service:
# startsrc -s xntpd -a "-x"
--------------------------------------------------
1.4.Create system groups and users
--------------------------------------------------
====================
1).Existence check
====================
--------------------
a).Check
--------------------
# id oracle
# id grid
# more /etc/passwd
# more /etc/group
# If the users already exist, verify these attributes; deleting and recreating the users and groups is the safest way to guarantee correctness.
--------------------
b).Option 1: delete the users
--------------------
# rmuser -p oracle
# rmuser -p grid
# rm -rf /home/oracle
# rm -rf /home/grid
Note: then skip step c) and go straight to user creation.
--------------------
c).Option 2: keep the users
--------------------
# lsuser -a capabilities grid
# lsuser -a capabilities oracle
# chuser -a capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
# chuser -a capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
# grep oinstall /etc/group
# more /etc/oraInst.loc
Note: check whether the users, groups, and any previously installed Oracle products exist; if the checks pass, skip the user-creation step.
====================
2).Create system groups and users
====================
a).Create groups
--------------------
# mkgroup -'A' id='501' adms='root' oinstall
# mkgroup -'A' id='502' adms='root' asmadmin
# mkgroup -'A' id='503' adms='root' asmdba
# mkgroup -'A' id='504' adms='root' asmoper
# mkgroup -'A' id='505' adms='root' dba
# mkgroup -'A' id='506' adms='root' oper
--------------------
b).Create users
--------------------
# mkuser id='501' pgrp='oinstall' groups='dba,asmadmin,asmdba,asmoper' home='/home/grid' fsize=-1 cpu=-1 data=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
# mkuser id='502' pgrp='oinstall' groups='dba,asmdba,oper' home='/home/oracle' fsize=-1 cpu=-1 data=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
--------------------
c).Verify the users
--------------------
# id oracle
# id grid
# lsuser -a capabilities grid
# lsuser -a capabilities oracle
--------------------
d).Set passwords
--------------------
# passwd grid
# passwd oracle
# su - grid
# su - oracle
Note: it is recommended to log in once through the graphical interface.
Directory layout:
grid base directory: /opt/app/grid           # grid user's ORACLE_BASE
grid ASM home:       /opt/app/11.2.0/grid    # grid user's ORACLE_HOME (the "software location" during installation)
Oracle base:         /opt/app/oracle         # oracle user's ORACLE_BASE
# mkdir -p /opt/app/grid
# mkdir -p /opt/app/11.2.0/grid
# mkdir -p /opt/app/oracle
# chown -R grid:oinstall /opt
# chown oracle:oinstall /opt/app/oracle
# chmod -R 775 /opt/
Create the oraInventory directory:
====================
# mkdir -p /opt/app/oraInventory
# chown -R grid:oinstall /opt/app/oraInventory
# chmod -R 775 /opt/app/oraInventory
grid and oracle user environment variables
grid user environment variables
====================
# su - grid
$ vi /home/grid/.profile
-- Add the following on node 1:
export ORACLE_BASE=/opt/app/grid
export ORACLE_HOME=/opt/app/11.2.0/grid
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH
umask 022
-- On node 2, use the same content but with:
export ORACLE_SID=+ASM2
oracle user environment variables:
$ vi /home/oracle/.profile
-- Add the following on node 1:
export ORACLE_BASE=/opt/app/oracle
export ORACLE_HOME=/opt/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=RACDB1
export PATH=$ORACLE_HOME/bin:$PATH
-- On node 2, use the same content but with:
export ORACLE_SID=RACDB2
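Rather than maintaining two slightly different .profile files, the instance suffix can be derived from the hostname. This is a sketch assuming the node names end in 0 and 1 as above; instance_number is a hypothetical helper:

```shell
# Map a node name to its RAC instance number:
# aix7tdb0 -> 1, aix7tdb1 -> 2. Adjust the patterns to your naming scheme.
instance_number() {
    case $1 in
        *0) echo 1 ;;
        *1) echo 2 ;;
        *)  echo "unknown node $1" >&2; return 1 ;;
    esac
}

# In .profile:
#   export ORACLE_SID=RACDB$(instance_number "$(hostname)")
#   export ORACLE_SID=+ASM$(instance_number "$(hostname)")
```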
*****************## NFS server configuration ##*************************************
The oracle and grid user IDs on the NFS server must match the corresponding user IDs on the AIX database servers,
and the dba group ID must match the dba group ID on the AIX database servers.
On the NFS server:
groupadd -g 300 dba
useradd -m -u 311 -g dba -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid
useradd -m -u 301 -g dba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
On the AIX server nodes:
## Create the dba group
mkgroup -'a' id='300' admin=false projects='System' dba
## grid user and attributes
mkuser id='311' admin=true pgrp='dba' groups='dba' admgroups='dba' home='/home/grid' grid
chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE,CAP_NUMA_ATTACH grid
chown -R grid:dba /home/grid
## oracle user and attributes
mkdir /home/oracle
mkuser id=301 admin=true pgrp=dba groups=dba admgroups=dba home=/home/oracle shell=/usr/bin/ksh oracle
chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE,CAP_NUMA_ATTACH oracle
chown -R oracle:dba /home/oracle
Add disks
Add the second disk:
parted /dev/sdb
View: (parted) p
Label the disk as GPT (parted handles GPT natively, and GPT is required for disks larger than 2 TB):
mklabel gpt
mkpart primary 0 nG
View: (parted) p
Quit: (parted) quit   (parted writes changes immediately; there is no separate save step)
LVM management
Creating PV/VG/LV on Linux
Overall LVM workflow:
create PVs --> create a VG and add the PVs to it --> create LVs --> make file systems --> mount
Create physical volumes (PV)
-- PV on a whole disk:
pvcreate /dev/sdb
-- PV on a partition:
pvcreate /dev/sdb1
Verify:
pvs
pvdisplay /dev/sdb1
Create a volume group (VG)
Use vgcreate; the -s option sets the PE (LE) size (default 4 MB):
vgcreate vg1 /dev/sdb1
vgcreate -s 16M vg2 /dev/sdb2
-- Create a VG from several PVs at once:
vgcreate vg_test /dev/sdb /dev/sdc
Verify:
vgs
vgdisplay vg1
Note: a larger PE wastes more space at the tail of each LV but keeps extent metadata small; a smaller PE is the reverse.
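The -L/-l trade-off comes down to extent arithmetic: an LV of a given size consumes size/PE extents, rounded up. le_count is a hypothetical helper with both sizes in MB:

```shell
# Number of extents (LEs) a logical volume consumes: LV size divided by
# PE size, rounded up. Both arguments are in MB.
le_count() {
    lv_mb=$1
    pe_mb=$2
    echo $(( (lv_mb + pe_mb - 1) / pe_mb ))
}

# Equivalent to `-L 10G` on 16 MB PEs:
#   lvcreate -l "$(le_count 10240 16)" -n lv2 vg2
```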
Create logical volumes (LV)
Use lvcreate: lvcreate -n lvname -L lvsize(M,G) | -l LEnumber vgname
lvcreate -n lv1 -L 64M vg1
lvcreate -n lv2 -L 10G vg1
lvs
Make file systems and mount
mkfs.ext4 /dev/vg1/lv1
mkfs.ext4 /dev/vg1/lv2
Mount the partition:
mkdir /data
echo "/dev/vg_test/lv_test /data ext4 defaults 0 0" >> /etc/fstab
Create the mount points (to be mounted automatically at boot):
mkdir /ocr1
mkdir /ocr2
mkdir /ocr3
mkdir /vot1
mkdir /vot2
mkdir /vot3
mkdir /data1
mkdir /data2
mkdir /data3
mkdir /data4
mkdir /data5
Set the ownership and permissions of the directories:
chown -R grid:dba /ocr1
chown -R grid:dba /ocr2
chown -R grid:dba /ocr3
chown -R grid:dba /vot1
chown -R grid:dba /vot2
chown -R grid:dba /vot3
chown -R oracle:dba /data1
chown -R oracle:dba /data2
chown -R oracle:dba /data3
chown -R oracle:dba /data4
chown -R oracle:dba /data5
chmod -R 775 /ocr*
chmod -R 775 /vot*
chmod -R 775 /data*
Add the entries to /etc/fstab:
vi /etc/fstab
/dev/vg1/ocr1 /ocr1 ext4 defaults 0 0
/dev/vg1/ocr2 /ocr2 ext4 defaults 0 0
/dev/vg1/ocr3 /ocr3 ext4 defaults 0 0
/dev/vg1/vot1 /vot1 ext4 defaults 0 0
/dev/vg1/vot2 /vot2 ext4 defaults 0 0
/dev/vg1/vot3 /vot3 ext4 defaults 0 0
/dev/vg1/data1 /data1 ext4 defaults 0 0
/dev/vg1/data2 /data2 ext4 defaults 0 0
/dev/vg1/data3 /data3 ext4 defaults 0 0
/dev/vg1/data4 /data4 ext4 defaults 0 0
/dev/vg1/data5 /data5 ext4 defaults 0 0
Deploy NFS
Packages required on the NFS server:
nfs-utils: the main NFS service (includes rpc.nfsd, rpc.mountd, and related daemons)
rpcbind: the RPC service on CentOS 6.x (portmap on CentOS 5.x)
Check whether the packages are installed
(check on both client and server):
rpm -qa nfs-utils rpcbind
If they are not installed, install both with yum:
# yum install -y nfs-utils rpcbind
[root@shareddisk19 ~]# rpm -qa | grep nfs
nfs-utils-1.2.3-54.el6.x86_64
nfs4-acl-tools-0.3.3-6.el6.x86_64
nfs-utils-lib-1.1.5-9.el6.x86_64
Start the NFS server services
On the server:
# service rpcbind status    # check status
# service rpcbind start     # start
# service rpcbind stop      # stop
# service rpcbind restart   # restart
Start the NFS service:
# service nfs start         # start
# service nfs status        # check status
# service nfs stop          # stop
# service nfs restart       # restart
Enable the services at boot:
[root@h1 ~]# chkconfig nfs on
[root@h1 ~]# chkconfig rpcbind on
Add the following lines to the "/etc/exports" file.
vi /etc/exports
/ocr1 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/ocr2 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/ocr3 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/vot1 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/vot2 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/vot3 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/data1 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/data2 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/data3 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/data4 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/data5 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
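Since the eleven export lines above differ only in the path, they can be generated rather than typed; gen_exports is a hypothetical helper that prints to stdout so the result can be reviewed before appending to /etc/exports:

```shell
# Print one /etc/exports line per shared directory, using the same export
# options as the block above.
gen_exports() {
    opts='*(rw,sync,no_wdelay,insecure_locks,no_root_squash)'
    for d in /ocr1 /ocr2 /ocr3 /vot1 /vot2 /vot3 \
             /data1 /data2 /data3 /data4 /data5; do
        printf '%s %s\n' "$d" "$opts"
    done
}

# Review, then:  gen_exports >> /etc/exports
```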
*****************************************************************
Restart NFS:
chkconfig nfs on
service nfs restart
List the exported mount points:
[root@h1 ~]# showmount -e localhost
[root@h1 ~]# showmount -e 192.168.123.211
Mounting the NFS directories on AIX
Configure /etc/hosts on both the server and the clients.
Server /etc/hosts:
192.168.123.211 shareddisk19
192.168.123.202 aixnode1
192.168.123.203 aixnode2
Client /etc/hosts:
192.168.123.211 shareddisk19
192.168.123.202 aixnode1
192.168.123.203 aixnode2
Mount the NFS shares automatically at boot on AIX
Set the NFS network parameter:
nfso -p -o nfs_use_reserved_ports=1
The command reports that the value was written to the nextboot file;
confirm the new setting in /etc/tunables/nextboot.
Configure through smit nfs:
[Network File System (NFS)] --> [Add a File System for Mounting]
Pathname of mount point                            [/opt/app/ocr1]
Pathname of remote directory                       [/ocr1]
Host where remote directory resides                [shareddisk19]
Security method                                    [sys]
Mount now, add entry to /etc/filesystems or both?  [both]
/etc/filesystems entry will mount the directory    [yes]
vi /etc/filesystems
/opt/app/ocr1:
dev = "/ocr1"
vfs = nfs
nodename = shareddisk19
mount = true
options = cio,rw,bg,hard,intr,rsize=65536,wsize=65536,timeo=600,proto=tcp,noac,vers=3,sec=sys
account = false
/opt/app/ocr2:
dev = "/ocr2"
vfs = nfs
nodename = shareddisk19
mount = true
options = cio,rw,bg,hard,intr,rsize=65536,wsize=65536,timeo=600,proto=tcp,noac,vers=3,sec=sys
account = false
/opt/app/ocr3:
dev = "/ocr3"
vfs = nfs
nodename = shareddisk19
mount = true
options = cio,rw,bg,hard,intr,rsize=65536,wsize=65536,timeo=600,proto=tcp,noac,vers=3,sec=sys
account = false
/opt/app/data1:
dev = "/data1"
vfs = nfs
nodename = shareddisk19
mount = true
options = bg,hard,nointr,noac,llock,rsize=32768,wsize=32768,sec=sys,nosuid
account = false
/opt/app/data2:
dev = "/data2"
vfs = nfs
nodename = shareddisk19
mount = true
options = bg,hard,nointr,noac,llock,rsize=32768,wsize=32768,sec=sys,nosuid
account = false
/opt/app/data3:
dev = "/data3"
vfs = nfs
nodename = shareddisk19
mount = true
options = bg,hard,nointr,noac,llock,rsize=32768,wsize=32768,sec=sys,nosuid
account = false
/opt/app/data4:
dev = "/data4"
vfs = nfs
nodename = shareddisk19
mount = true
options = bg,hard,nointr,noac,llock,rsize=32768,wsize=32768,sec=sys,nosuid
account = false
/opt/app/data5:
dev = "/data5"
vfs = nfs
nodename = shareddisk19
mount = true
options = bg,hard,nointr,noac,llock,rsize=32768,wsize=32768,sec=sys,nosuid
account = false
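The stanzas above all share one shape, so a small generator avoids copy-paste errors; fs_stanza is a hypothetical helper that prints a single /etc/filesystems entry for review before it is appended:

```shell
# Print one /etc/filesystems stanza for an NFS mount. Arguments:
#   $1 local mount point, $2 remote directory, $3 mount options string,
#   $4 NFS server name (defaults to shareddisk19, as above).
fs_stanza() {
    mnt=$1; remote=$2; opts=$3; server=${4:-shareddisk19}
    printf '%s:\n' "$mnt"
    printf '    dev = "%s"\n' "$remote"
    printf '    vfs = nfs\n'
    printf '    nodename = %s\n' "$server"
    printf '    mount = true\n'
    printf '    options = %s\n' "$opts"
    printf '    account = false\n'
}

# Example, matching the data1 entry above:
#   fs_stanza /opt/app/data1 /data1 \
#     'bg,hard,nointr,noac,llock,rsize=32768,wsize=32768,sec=sys,nosuid'
```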
grid user
GI installation
$ xhost + 192.168.88.8
$ xhost + 192.168.88.9
## On both nodes, as root:
sh rootpre.sh
Then, as grid:
./runInstaller
OCR and voting disk locations:
/opt/app/ocr1/ocr
/opt/app/ocr1/vdsk
"ohasd failed to start" when running root.sh on AIX 7.1
Checking /etc/inittab on node 1 and node 2 showed the following entry on both:
[root@node1 bin]# grep install /etc/inittab
install_assist:2:wait:/usr/sbin/install_assist </dev/console >/dev/console 2>&1
install_assist is the interactive system installation assistant. If it receives no input it waits indefinitely, so none of the entries after it in /etc/inittab run; the rc2.d services (the default run level is 2) are never started, which is exactly why the ohasd service cannot start.
Resolution
Comment out or remove the install_assist line in /etc/inittab, reboot the system, and rerun root.sh; the installation then completes normally.
[root@node1 bin]# grep install /etc/inittab
#install_assist:2:wait:/usr/sbin/install_assist </dev/console >/dev/console 2>&1
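The fix above can be scripted as a non-destructive transform; disable_install_assist is a hypothetical helper that prints the edited file to stdout so it can be reviewed before replacing /etc/inittab:

```shell
# Comment out an active install_assist entry in an inittab-format file.
# All other lines pass through unchanged; the original file is not modified.
disable_install_assist() {
    f=$1
    sed 's/^install_assist:/#install_assist:/' "$f"
}

# Review first, then swap in:
#   disable_install_assist /etc/inittab > /etc/inittab.new
```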
oracle user
Database software installation
## On both nodes, as root:
sh rootpre.sh
$ xhost + 192.168.88.8
./runInstaller
Create the database with dbca
Select the "Use Oracle-Managed Files" option and enter "/u01/oradata/" as the database location, then click the "Next" button
Database file location used here:
/opt/app/data5/
Common commands
bash-5.0$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE aix7tdb0
ONLINE ONLINE aix7tdb1
ora.asm
OFFLINE OFFLINE aix7tdb0 Instance Shutdown
OFFLINE OFFLINE aix7tdb1
ora.gsd
OFFLINE OFFLINE aix7tdb0
OFFLINE OFFLINE aix7tdb1
ora.net1.network
ONLINE ONLINE aix7tdb0
ONLINE ONLINE aix7tdb1
ora.ons
ONLINE ONLINE aix7tdb0
ONLINE ONLINE aix7tdb1
ora.registry.acfs
OFFLINE OFFLINE aix7tdb0
OFFLINE OFFLINE aix7tdb1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE aix7tdb1
ora.aix7tdb0.vip
1 ONLINE ONLINE aix7tdb0
ora.aix7tdb1.vip
1 ONLINE ONLINE aix7tdb1
ora.cvu
1 ONLINE ONLINE aix7tdb1
ora.dbsec.db
1 ONLINE ONLINE aix7tdb0 Open
2 ONLINE ONLINE aix7tdb1 Open
ora.oc4j
1 ONLINE ONLINE aix7tdb1
ora.scan1.vip
1 ONLINE ONLINE aix7tdb1
bash-5.0$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE ONLINE aix7tdb0
ora....N1.lsnr ora....er.type ONLINE ONLINE aix7tdb1
ora....SM1.asm application OFFLINE OFFLINE
ora....B0.lsnr application ONLINE ONLINE aix7tdb0
ora....db0.gsd application OFFLINE OFFLINE
ora....db0.ons application ONLINE ONLINE aix7tdb0
ora....db0.vip ora....t1.type ONLINE ONLINE aix7tdb0
ora....SM2.asm application OFFLINE OFFLINE
ora....B1.lsnr application ONLINE ONLINE aix7tdb1
ora....db1.gsd application OFFLINE OFFLINE
ora....db1.ons application ONLINE ONLINE aix7tdb1
ora....db1.vip ora....t1.type ONLINE ONLINE aix7tdb1
ora.asm ora.asm.type OFFLINE OFFLINE
ora.cvu ora.cvu.type ONLINE ONLINE aix7tdb1
ora.dbsec.db ora....se.type ONLINE ONLINE aix7tdb0
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE aix7tdb0
ora.oc4j ora.oc4j.type ONLINE ONLINE aix7tdb1
ora.ons ora.ons.type ONLINE ONLINE aix7tdb0
ora....ry.acfs ora....fs.type OFFLINE OFFLINE
ora.scan1.vip ora....ip.type ONLINE ONLINE aix7tdb1
bash-5.0$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3304
Available space (kbytes) : 258816
ID : 1423921486
Device/File Name : /opt/app/ocr1/ocr
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
bash-5.0$ su root
root's Password:
# bash
bash-5.0# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 9ecc860847354fb8bfb35486e77359e5 (/opt/app/ocr1/vdsk) []
Located 1 voting disk(s).