RAC Silent Installation (11g): Grid Silent Install

枫狐狸, 2021-02-22 (reposted from snowofsummer, originally published 2018-11-28)
Software version: 11.2.0.4
p13390677_112040_Linux-x86-64_3of7.zip

OS version
Red Hat Enterprise Linux Server release 6.8 (Santiago)

Base environment
192.168.0.230 prod01
192.168.0.232 prod01-vip

192.168.0.231 prod02
192.168.0.233 prod02-vip
192.168.0.234 scan
SSH configuration
https://yq.aliyun.com/articles/673572
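The linked article covers SSH equivalence in detail; as a minimal sketch (host names from the table above; run as grid on each node; the use of RSA keys is an assumption):

```shell
# Generate a passwordless key pair for the grid user (skip if one exists).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Push the public key to every node, including the local one, so that
# cluvfy's user-equivalence check can log in without a password.
for host in prod01 prod02; do
  ssh-copy-id "grid@${host}"
done

# Verify: both commands must print a hostname without prompting.
ssh prod01 hostname
ssh prod02 hostname
```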

Prerequisite environment checks
./runcluvfy.sh stage -pre crsinst -n prod01,prod02 -fixup

[grid@prod01 grid]$ ./runcluvfy.sh stage -pre crsinst -n prod01,prod02 -fixup

Performing pre-checks for cluster services setup

Checking node reachability…
Node reachability check passed from node “prod01”

Checking user equivalence…
User equivalence check passed for user “grid”

Checking node connectivity…

Checking hosts config file…

Verification of the hosts config file successful

Node connectivity passed for subnet “192.168.0.0” with node(s) prod02,prod01
TCP connectivity check passed for subnet “192.168.0.0”

Node connectivity passed for subnet “17.17.0.0” with node(s) prod02,prod01
TCP connectivity check passed for subnet “17.17.0.0”

Interfaces found on subnet “192.168.0.0” that are likely candidates for VIP are:
prod02 eth0:192.168.0.231
prod01 eth0:192.168.0.230

Interfaces found on subnet “17.17.0.0” that are likely candidates for VIP are:
prod02 eth1:17.17.0.2
prod01 eth1:17.17.0.1

WARNING:
Could not find a suitable set of interfaces for the private interconnect
Checking subnet mask consistency…
Subnet mask consistency check passed for subnet “192.168.0.0”.
Subnet mask consistency check passed for subnet “17.17.0.0”.
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication…

Checking subnet “192.168.0.0” for multicast communication with multicast group “230.0.1.0”…
Check of subnet “192.168.0.0” for multicast communication with multicast group “230.0.1.0” passed.

Checking subnet “17.17.0.0” for multicast communication with multicast group “230.0.1.0”…
Check of subnet “17.17.0.0” for multicast communication with multicast group “230.0.1.0” passed.

Check of multicast communication passed.

Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check failed
Check failed on nodes:
prod02,prod01
Free disk space check passed for “prod02:/tmp”
Free disk space check passed for “prod01:/tmp”
Check for multiple users with UID value 1100 passed
User existence check passed for “grid”
Group existence check passed for “oinstall”
Group existence check passed for “dba”
Membership check for user “grid” in group “oinstall” [as Primary] passed
Membership check for user “grid” in group “dba” failed
Check failed on nodes:
prod02,prod01
Run level check passed
Hard limits check passed for “maximum open file descriptors”
Soft limits check passed for “maximum open file descriptors”
Hard limits check passed for “maximum user processes”
Soft limits check passed for “maximum user processes”
System architecture check passed
Kernel version check passed
Kernel parameter check passed for “semmsl”
Kernel parameter check passed for “semmns”
Kernel parameter check passed for “semopm”
Kernel parameter check passed for “semmni”
Kernel parameter check passed for “shmmax”
Kernel parameter check passed for “shmmni”
Kernel parameter check passed for “shmall”
Kernel parameter check passed for “file-max”
Kernel parameter check passed for “ip_local_port_range”
Kernel parameter check passed for “rmem_default”
Kernel parameter check passed for “rmem_max”
Kernel parameter check passed for “wmem_default”
Kernel parameter check passed for “wmem_max”
Kernel parameter check passed for “aio-max-nr”
Package existence check passed for “make”
Package existence check passed for “binutils”
Package existence check passed for “gcc(x86_64)”
Package existence check passed for “libaio(x86_64)”
Package existence check passed for “glibc(x86_64)”
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for “elfutils-libelf(x86_64)”
Package existence check passed for “elfutils-libelf-devel”
Package existence check passed for “glibc-common”
Package existence check passed for “glibc-devel(x86_64)”
Package existence check passed for “glibc-headers”
Package existence check passed for “gcc-c++(x86_64)”
Package existence check passed for “libaio-devel(x86_64)”
Package existence check passed for “libgcc(x86_64)”
Package existence check passed for “libstdc++(x86_64)”
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for “sysstat”
Package existence check failed for “pdksh”
Check failed on nodes:
prod02,prod01
Package existence check passed for “expat(x86_64)”
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user’s primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)…

NTP Configuration file check started…
NTP Configuration file check passed
No NTP Daemons or Services were found to be running
PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
prod02,prod01
Clock synchronization check using Network Time Protocol(NTP) failed

Core file name pattern consistency check passed.

User “grid” is not part of “root” group. Check passed
Default user file creation mask check passed
Checking consistency of file “/etc/resolv.conf” across nodes

File “/etc/resolv.conf” does not have both domain and search entries defined
domain entry in file “/etc/resolv.conf” is consistent across nodes
search entry in file “/etc/resolv.conf” is consistent across nodes
PRVF-5636 : The DNS response time for an unreachable node exceeded “15000” ms on following nodes: prod01

File “/etc/resolv.conf” is not consistent across nodes

Time zone consistency check passed
Fixup information has been generated for following node(s):
prod02,prod01
Please run the following script on each node as “root” user to execute the fixups:
‘/tmp/CVU_11.2.0.4.0_grid/runfixup.sh’

Pre-check for cluster services setup was unsuccessful on all the nodes.

Fix the reported errors by running:
/tmp/CVU_11.2.0.4.0_grid/runfixup.sh
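runfixup.sh does not address every failure above (swap space, grid's dba membership, pdksh). Hedged commands, as root on both nodes; the swap size and the ksh substitution are assumptions, not values from the original post:

```shell
# Swap-space check: add a 4 GB swap file (size is an assumption;
# Oracle's requirement scales with installed RAM).
dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab

# "grid in dba" membership check, if you choose to satisfy it:
usermod -a -G dba grid

# pdksh is not shipped in the RHEL 6 repositories; ksh is the usual
# stand-in, and the check can also be waived later with -ignorePrereq.
yum -y install ksh
```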

ASM disk preparation
[root@prod01 ~]# cat /etc/udev/rules.d/99-asm.rules

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29834a76f994d32c360690ff23d", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c299cf17f09b48e2fdd05d3baad3", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c291ebbc447cf66ff6b6a95fcba5", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c293a9f6f98b804798c3a7779baf", NAME="asm-test01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29b6b59bc1c7a06829306735d12", NAME="asm-test02", OWNER="grid", GROUP="asmadmin", MODE="0660"

[root@prod01 ~]# ll /dev/asm-*
brw-rw---- 1 grid asmadmin 8, 16 Nov 28 12:51 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Nov 28 12:51 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Nov 28 12:51 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Nov 28 11:46 /dev/asm-test01
brw-rw---- 1 grid asmadmin 8, 80 Nov 28 11:46 /dev/asm-test02
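The RESULT value in each rule is the device WWID reported by scsi_id. A small sketch of how such a rule line is assembled (the WWID below is the asm-diskb one from the rules file; the helper function name is mine):

```shell
#!/bin/sh
# Print one udev rule line for an ASM disk, given its WWID and alias.
# On a live system the WWID would come from, e.g.:
#   /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
make_asm_rule() {
  wwid="$1"
  alias="$2"
  printf 'KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", NAME="%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$wwid" "$alias"
}

# Regenerate the rule for asm-diskb from the WWID above.
make_asm_rule 36000c29834a76f994d32c360690ff23d asm-diskb
```

After editing the rules file, re-trigger udev (on RHEL 6: `start_udev` or `udevadm trigger`) and re-check `ls -l /dev/asm-*` on both nodes.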
Prepare the response file
[grid@prod01 ~]$ more 11204-grid.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v11_2_0
ORACLE_HOSTNAME=
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en,zh_CN
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.gpnp.scanName=scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.clusterName=prod-cluster01
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.autoConfigureClusterNodeVIP=
oracle.install.crs.config.clusterNodes=prod01:prod01-vip,prod02:prod02-vip
oracle.install.crs.config.networkInterfaceList=eth0:192.168.0.0:1,eth1:17.17.0.0:2
oracle.install.crs.config.storageOption=ASM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.diskDriveMapping=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=NORMAL
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=NORMAL
oracle.install.crs.config.useIPMI=false
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
oracle.install.asm.SYSASMPassword=Grid1234
oracle.install.asm.diskGroup.name=crs
oracle.install.asm.diskGroup.redundancy=NORMAL
oracle.install.asm.diskGroup.AUSize=1
oracle.install.asm.diskGroup.disks=/dev/asm-diskb,/dev/asm-diskc,/dev/asm-diskd
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/
oracle.install.asm.monitorPassword=Grid1234
oracle.install.crs.upgrade.clusterNodes=
oracle.install.asm.upgradeASM=false
oracle.installer.autoupdates.option=
oracle.installer.autoupdates.downloadUpdatesLoc=
AUTOUPDATES_MYORACLESUPPORT_USERNAME=
AUTOUPDATES_MYORACLESUPPORT_PASSWORD=
PROXY_HOST=
PROXY_PORT=
PROXY_USER=
PROXY_PWD=
PROXY_REALM=
Create directories (on both nodes)
mkdir /u01
chown grid:oinstall /u01/
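These paths must line up with ORACLE_BASE, ORACLE_HOME, and INVENTORY_LOCATION in the response file; a slightly expanded sketch (as root, on both nodes; the 775 mode is an assumption):

```shell
# Matches ORACLE_BASE, ORACLE_HOME, and INVENTORY_LOCATION above.
mkdir -p /u01/app/grid /u01/app/11.2.0/grid /u01/app/oraInventory
chown -R grid:oinstall /u01
chmod -R 775 /u01
```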
Run the silent installation (as grid)
[grid@prod01 grid]$ ./runInstaller -ignorePrereq -silent -force -responseFile /home/grid/grid.rsp -showProgress
You can find the log of this install session at:
/u01/app/oraInventory/logs/installActions2018-11-28_12-57-45PM.log

Prepare in progress.
… 9% Done.

Prepare successful.

Copy files in progress.
… 15% Done.
… 20% Done.
… 25% Done.
… 30% Done.
… 35% Done.
… 40% Done.
… 45% Done.

Copy files successful.

Link binaries in progress.

Link binaries successful.
… 62% Done.

Setup files in progress.

Setup files successful.
… 76% Done.

Perform remote operations in progress.
… 89% Done.

Perform remote operations successful.
The installation of Oracle Grid Infrastructure 11g was successful.
Please check ‘/u01/app/oraInventory/logs/silentInstall2018-11-28_12-57-45PM.log’ for more details.
… 94% Done.

Execute Root Scripts in progress.

As a root user, execute the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/11.2.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[prod01, prod02]
Execute /u01/app/11.2.0/grid/root.sh on the following nodes:
[prod01, prod02]

… 100% Done.

Execute Root Scripts successful.
As install user, execute the following script to complete the configuration.
1. /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=<response_file>

 Note:
1. This script must be run on the same host from where installer was run. 
2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

Successfully Setup Software.

[grid@prod01 grid]$
Run the root scripts (as root)
prod01:

[root@prod01 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@prod01 ~]# /u01/app/11.2.0/grid/root.sh
Check /u01/app/11.2.0/grid/install/root_prod01_2018-11-28_13-09-14.log for the output of root script
[root@prod01 ~]#
prod02:

[root@prod02 u01]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@prod02 u01]# /u01/app/11.2.0/grid/root.sh
Check /u01/app/11.2.0/grid/install/root_prod02_2018-11-28_13-22-05.log for the output of root script
[root@prod02 u01]#
Run configToolAllCommands (as grid, on the install node)
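The installer note above mentions a small password properties file for the configuration assistants. In this session configToolAllCommands was run without one; if it prompts, a hypothetical cfgrsp.properties matching the passwords set in the response file would look like this (the file name and key format follow the 11.2 convention; treat both as assumptions to verify against the install guide):

```
# /home/grid/cfgrsp.properties -- chmod 600, delete after use
oracle.assistants.asm|S_ASMPASSWORD=Grid1234
oracle.assistants.asm|S_ASMMONITORPASSWORD=Grid1234
```

It would then be passed as `configToolAllCommands RESPONSE_FILE=/home/grid/cfgrsp.properties`.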
[grid@prod01 grid]$ /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands
Setting the invPtrLoc to /u01/app/11.2.0/grid/oraInst.loc

perform - mode is starting for action: configure

perform - mode finished for action: configure

You can see the log file: /u01/app/11.2.0/grid/cfgtoollogs/oui/configActions2018-11-28_01-34-29-PM.log
[grid@prod01 grid]$
Status verification
[grid@prod01 grid]$ export ORACLE_HOME=/u01/app/11.2.0/grid
[grid@prod01 grid]$ /u01/app/11.2.0/grid/bin/crsctl check cluster -all


prod01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online


prod02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online


[grid@prod01 grid]$ /u01/app/11.2.0/grid/bin/crsctl stat res -t

NAME TARGET STATE SERVER STATE_DETAILS

Local Resources

ora.CRS.dg
ONLINE ONLINE prod01
ONLINE ONLINE prod02
ora.LISTENER.lsnr
ONLINE ONLINE prod01
ONLINE ONLINE prod02
ora.asm
ONLINE ONLINE prod01 Started
ONLINE ONLINE prod02 Started
ora.gsd
OFFLINE OFFLINE prod01
OFFLINE OFFLINE prod02
ora.net1.network
ONLINE ONLINE prod01
ONLINE ONLINE prod02
ora.ons
ONLINE ONLINE prod01
ONLINE ONLINE prod02
ora.registry.acfs
ONLINE ONLINE prod01
ONLINE ONLINE prod02

Cluster Resources

ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE prod01
ora.cvu
1 ONLINE ONLINE prod01
ora.oc4j
1 ONLINE ONLINE prod01
ora.prod01.vip
1 ONLINE ONLINE prod01
ora.prod02.vip
1 ONLINE ONLINE prod02
ora.scan1.vip
1 ONLINE ONLINE prod01
[grid@prod01 grid]$
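Beyond `crsctl check cluster`, a few additional standard 11.2 checks are worth running at this point (as grid, with ORACLE_HOME set as above; ocrcheck needs root for the full logical-corruption check):

```shell
export PATH=/u01/app/11.2.0/grid/bin:$PATH
crsctl query css votedisk      # voting disks should sit in the CRS disk group
ocrcheck                       # OCR location and integrity
olsnodes -n -s                 # node list with numbers and status
srvctl status scan_listener    # where the SCAN listener is running
srvctl status asm              # ASM instances on both nodes
```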