
Oracle ASM Storage: A Battle of Annihilation

Original · ByteHouse · 2024-05-12

1. Adding ASM Disks in Oracle 10g RAC

1. Storage partitioning was messy; the recommendation is 1 TB per volume
2. Configure disk multipathing
3. Create RAW devices
4. Create the ASM disk groups
5. Enable archiving in the database
6. Change the archived-log destination parameter

## 1.1. Raw Devices

  • Raw device:
    Also called a raw partition: a special character device that has not been formatted and is not accessed by Unix/Linux through a file system. A raw device can be bound to a partition or to a whole disk.
  • Character device:
    Reads and writes to a character device bypass the OS buffer cache. It cannot be mounted by a file system.
  • Block device:
    Reads and writes to a block device go through the OS buffer cache. It can be mounted into a file system.

In older releases there could be at most 256 raw devices; under Linux 4, up to 8192 raw devices can be bound.
Under Linux a disk can have at most 255 partitions, so binding raw devices to partitions caps you at 255 raw devices; with LVM there is no such limit.

Under Linux a single disk can hold at most 15 partitions: 3 primary partitions + 1 extended partition + 11 logical partitions.
The recommended layout is to create 3 primary partitions, make the fourth an extended partition, and carve 11 logical partitions out of the extended partition (a sketch follows the note below).

Note:
Do not bind a raw device to an extended partition.
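
As a rough illustration of that layout, here is a minimal sketch using parted on a hypothetical /dev/sdb (the boundaries are placeholders; adjust to your disk):

# 3 primary partitions, a 4th extended partition, then logical partitions inside it
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 1MiB 25%
parted -s /dev/sdb mkpart primary 25% 50%
parted -s /dev/sdb mkpart primary 50% 75%
parted -s /dev/sdb mkpart extended 75% 100%
parted -s /dev/sdb mkpart logical 75% 80%    # first of up to 11 logical partitions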

1.2. Binding Raw Devices

  • Under Linux, raw devices must be bound manually.
    Linux's rawio implements a set of unbound raw devices, /dev/rawN (or /dev/raw/rawN), plus a control device, /dev/rawctl, used to bind them to block devices. To use a raw device, you must therefore first associate it with a real, existing block device; that binding step does by hand what Unix does automatically when it pairs each block device with an uncached character device.
  • Under Unix, no manual binding is needed.
    Every Unix block device has a corresponding character device for unbuffered I/O, and that character device is its raw device.

1.3. Major and Minor Device Numbers

In Unix/Linux, everything is a file. Hard disks, floppy drives, keyboards, and other devices are all represented by files under /dev. To an application, these device files can be opened, closed, read, and written like ordinary files. But names such as /dev/sda or /dev/raw/raw1 are user-space names; the OS kernel knows nothing about them. In kernel space, devices are identified by major and minor device numbers.

The major device number identifies the device driver: all devices managed by the same driver share one major number, which is really an index into the kernel's device driver table holding the different drivers. The minor device number identifies the specific device being accessed. In other words, the kernel finds the driver from the major number and then derives the device's location and other attributes from the minor number. All major device numbers are pre-allocated.
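
You can read those numbers straight off the device node; a quick sketch (the device name is an example):

ls -l /dev/sdb1    # "brw-rw---- 1 root disk 8, 17 ..." -> major 8 (sd driver), minor 17
stat -c 'major(hex)=%t minor(hex)=%T' /dev/sdb1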

1.4. Configuring Raw Devices

On OEL 4.8 the file to edit is /etc/sysconfig/rawdevices; the raw-device configuration is applied with the service rawdevices restart command.
1. The edited /etc/sysconfig/rawdevices:

[root@RAC1 ~]# cat /etc/sysconfig/rawdevices
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# raw device bindings
# format:  <rawdev> <major> <minor>
#          <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5

/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1
/dev/raw/raw3 /dev/sdd1
/dev/raw/raw4 /dev/sde1

2. Start the rawdevices service:

[root@RAC1 ~]# service rawdevices restart
Assigning devices:
           /dev/raw/raw1  -->   /dev/sdb1
/dev/raw/raw1:  bound to major 8, minor 17
           /dev/raw/raw2  -->   /dev/sdc1
/dev/raw/raw2:  bound to major 8, minor 33
           /dev/raw/raw3  -->   /dev/sdd1
/dev/raw/raw3:  bound to major 8, minor 49
           /dev/raw/raw4  -->   /dev/sde1
/dev/raw/raw4:  bound to major 8, minor 65
done

3. Final check:

[root@RAC1 ~]# ls -l /dev/raw
total 0
crw-rw----  1 root disk 162, 1 Jul  2 17:26 raw1
crw-rw----  1 root disk 162, 2 Jul  2 17:26 raw2
crw-rw----  1 root disk 162, 3 Jul  2 17:26 raw3
crw-rw----  1 root disk 162, 4 Jul  2 17:26 raw4

At this point the raw device information is visible in the /dev/raw directory.

Starting with Red Hat 5, the old raw-device interface was removed; in Red Hat 5 the configuration is done through udev rules.
Edit /etc/udev/rules.d/60-raw.rules:

[oracle@itms-base ~]$ cat /etc/udev/rules.d/60-raw.rules 
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
[oracle@itms-base ~]$

Rule syntax:

ACTION=="add", KERNEL=="<device name>", RUN+="raw /dev/raw/rawX %N"
  • device name: the kernel name of the device to bind (e.g. sda1)
  • X: the raw device number
To bind by major/minor numbers instead:
ACTION=="add", ENV{MAJOR}=="A", ENV{MINOR}=="B", RUN+="raw /dev/raw/rawX %M %m"
  • "A" and "B" are the device's major and minor numbers
  • X is the raw device number used by the system

In Red Hat 5, raw devices are managed by udev, and udev identifies a raw device by its MAJOR and MINOR numbers.
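
To find the MAJOR/MINOR values for the ENV{MAJOR}/ENV{MINOR} style of rule, either of the following works (sdb1 is an example):

grep sdb /proc/partitions     # columns: major minor #blocks name
cat /sys/block/sdb/sdb1/dev   # prints "8:17"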

1. Check the disk partition layout:

# fdisk  -l /dev/sdb

Disk /dev/sdb: 4880 MB, 4880072704 bytes

255 heads, 63 sectors/track, 593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          25      200781   83  Linux
/dev/sdb2              26          50      200812+  83  Linux

2. Configure /etc/udev/rules.d/60-raw.rules:

# grep -v ^# /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", ENV{MAJOR}=="3", ENV{MINOR}=="2", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", ENV{MAJOR}=="7", ENV{MINOR}=="2", RUN+="/bin/raw /dev/raw/raw2 %M %m"

3. Start the raw devices:

# start_udev
Starting udev:                                             [  OK  ]

4. Verify the configuration:

# raw -qa
/dev/raw/raw1:  bound to major 8, minor 17
/dev/raw/raw2:  bound to major 8, minor 18

5. The major and minor device numbers can also be specified directly:

# raw /dev/raw/raw1 1 1
/dev/raw/raw1:  bound to major 1, minor 1

raw /dev/raw/raw[n] /dev/xxx
where n ranges from 0 to 8191. The /dev/raw directory is created automatically if it does not exist, and running the command creates the matching raw[n] file under /dev/raw. Bindings made this way with the raw command do not survive a system reboot.

6. Remove a raw device binding:

# raw /dev/raw/raw2 0 0
/dev/raw/raw2:  bound to major 0, minor 0

# raw -qa
/dev/raw/raw1:  bound to major 1, minor 1

For any of these settings to survive a reboot, /etc/udev/rules.d/60-raw.rules must be updated to match; otherwise the system will re-read /etc/udev/rules.d/60-raw.rules at boot and the manual bindings will be lost.

7. Determine the size of a raw device
Calculate it with the blockdev command, for example:

# blockdev --getsize /dev/raw/raw1
11718750

11718750 is the number of OS blocks.
An OS block is normally 512 bytes, so 11718750 * 512 / 1024 / 1024 = 5722 MB is the size of the raw device.
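
The same arithmetic as a one-liner (assuming 512-byte OS blocks, as above):

echo "$(( $(blockdev --getsize /dev/raw/raw1) * 512 / 1024 / 1024 )) MB"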

To set the owner and permissions of a raw device, add entries like the following to /etc/udev/rules.d/60-raw.rules:

ACTION=="add", KERNEL=="raw1", OWNER="dave", GROUP="tianlesoftware", MODE="660"

With several raw devices this can be written as:

ACTION=="add", KERNEL=="raw[1-4]", OWNER="dave", GROUP="tianlesoftware", MODE="660"
#chown oracle:oinstall /dev/raw/raw[1-4]
#chmod 775 /dev/raw/raw[1-4]

Note: before kernel 2.6.9-89.5AXS2, raw devices were configured and their permissions managed through /etc/sysconfig/rawdevices and /etc/udev/permissions.d/50-udev.permissions. From kernel 2.6.18-128.7AXS3 onward, raw devices are managed through /etc/udev/rules.d/60-raw.rules as described here.

Notes on using raw devices for Oracle data files:

  1. A raw device can hold only one data file.
  2. A data file must not be larger than its raw device:
    for redo log files, maximum usable size = size of the underlying partition - 1 * 512 bytes (one redo block reserved);
    for data files, maximum usable size = size of the underlying partition - 2 * db_block_size (two blocks reserved).
    For simplicity, just size every file 1 MB smaller than its raw device (see the sketch below).
  3. It is best not to set data files to autoextend; if autoextend is enabled, be sure to set maxsize smaller than the raw device.
  4. Under Linux, Oracle cannot use a logical volume directly as a raw device; it must be bound as well. Under Unix this is not needed.
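
As a minimal sketch of rules 2 and 3, assuming a 1 GB partition bound to /dev/raw/raw5 (the device and tablespace names are hypothetical):

sqlplus -s / as sysdba <<'EOF'
-- size the file 1 MB smaller than the 1024 MB raw device;
-- REUSE because the device node already exists
CREATE TABLESPACE raw_demo
  DATAFILE '/dev/raw/raw5' SIZE 1023M REUSE
  AUTOEXTEND OFF;
EOF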

1.5. Implementation Steps

step 1. Configure disk multipathing (all nodes)

[root@jczhdb1 mapper]# cat /etc/multipath.conf 
# This is a basic configuration file with some examples, for device mapper
# multipath.
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.annotated

defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(hd|xvd|vd)[a-z]*"
        wwid "3600605b00bf61c801ff42c1b222b02ab"
}

# Make sure our multipath devices are enabled.

#blacklist_exceptions {
#        wwid "3600000e00d100000001032ce000c0000"
#}
#}

multipaths
{
multipath
{
wwid            3600000e00d2a0000002a084d00010000
alias           ocr1
}

multipath
{
wwid            3600000e00d2a0000002a084d00020000
alias           ocr2
}
multipath
{
wwid            3600000e00d2a0000002a084d00040000
alias           vote1
}
multipath
{
wwid            3600000e00d2a0000002a084d00030000
alias           vote2
}
multipath
{
wwid            3600000e00d2a0000002a084d00000000
alias           vote3
}
multipath
{
wwid            3600000e00d2a0000002a084d00050000
alias           data1
}
multipath
{
wwid            3600000e00d2a0000002a084d00060000
alias           data2
}
multipath
{
wwid            3600000e00d2a0000002a084d00070000
alias           arch1
}
multipath
{
wwid            3600000e00d2a0000002a084d00090000
alias           FRA01
}
multipath
{
wwid            3600000e00d2a0000002a084d000a0000
alias           FRA02
}
multipath
{
wwid            3600000e00d2a0000002a084d000b0000
alias           FRA03
}
multipath
{
wwid            3600000e00d2a0000002a084d000c0000
alias           FRA04
}
multipath
{
wwid            3600000e00d2a0000002a084d000d0000
alias           FRA05
}
multipath
{
wwid            3600000e00d2a0000002a084d000e0000
alias           FRA06
}
multipath
{
wwid            3600000e00d2a0000002a084d000f0000
alias           FRA07
}  
multipath
{
wwid            3600000e00d2a0000002a084d00100000
alias           FRA08
}
multipath
{
wwid            3600000e00d2a0000002a084d00110000
alias           FRA09
}
multipath
{
wwid            3600000e00d2a0000002a084d00120000
alias           FRA10
}
multipath
{
wwid            3600000e00d2a0000002a084d00130000
alias           data3
}
multipath
{
wwid            3600000e00d2a0000002a084d00140000
alias           data4
}
}
[root@jczhdb1 mapper]#

step 2. Create the LVM volumes

Create the physical volumes:
[root@jczhdb1 mapper]# pvcreate /dev/mapper/data3
  Physical volume "/dev/mapper/data3" successfully created
[root@jczhdb1 mapper]# pvcreate /dev/mapper/data4
  Physical volume "/dev/mapper/data4" successfully created
[root@jczhdb1 mapper]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/dm-1
  VG Name               vgdate2
  PV Size               4.00 TB / not usable 4.00 MB
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              1048575
  Free PE               196607
  Allocated PE          851968
  PV UUID               eZcWjA-MHlC-9trU-cXZa-6Akv-OyKr-kLeIou

  --- Physical volume ---
  PV Name               /dev/dm-0
  VG Name               vgdate1
  PV Size               4.00 TB / not usable 4.00 MB
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              1048575
  Free PE               196607
  Allocated PE          851968
  PV UUID               1hDRBE-ve0S-92di-h81V-g0qe-6eaI-SCWqKc

  "/dev/dm-14" is a new physical volume of "4.00 TB"
  --- NEW Physical volume ---
  PV Name               /dev/dm-14
  VG Name
  PV Size               4.00 TB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               ptLQ1i-e8Ev-E9vs-T9uA-AMq0-Lwg8-UTc2Kp

  "/dev/sda2" is a new physical volume of "2.73 TB"
  --- NEW Physical volume ---
  PV Name               /dev/sda2
  VG Name
  PV Size               2.73 TB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               bOWV6x-Bh3T-C3jN-OX0C-DBTf-7132-C1quVl

  "/dev/dm-13" is a new physical volume of "4.00 TB"
  --- NEW Physical volume ---
  PV Name               /dev/dm-13
  VG Name
  PV Size               4.00 TB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               dfWkgY-TT2y-wROs-NxYn-XJ3k-jhZx-eMLXel

[root@jczhdb1 mapper]#

Create the LVM volume groups:
[root@jczhdb1 mapper]# vgcreate vgdata3 /dev/mapper/data3
  Volume group "vgdata3" successfully created
[root@jczhdb1 mapper]# vgcreate vgdata4 /dev/mapper/data4
  Volume group "vgdata4" successfully created
[root@jczhdb1 mapper]# vgdisplay
  --- Volume group ---
  VG Name               vgdata4
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.00 TB
  PE Size               4.00 MB
  Total PE              1048575
  Alloc PE / Size       0 / 0
  Free  PE / Size       1048575 / 4.00 TB
  VG UUID               9QaoZs-7eVM-cteQ-Z3HE-pCJ7-1Uvm-eOwMcG

  --- Volume group ---
  VG Name               vgdata3
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.00 TB
  PE Size               4.00 MB
  Total PE              1048575
  Alloc PE / Size       0 / 0
  Free  PE / Size       1048575 / 4.00 TB
  VG UUID               FP40ex-h8aA-IAWc-rYH4-lAzh-8HGY-KxDk7d

  --- Volume group ---
  VG Name               vgdate2
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               4
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.00 TB
  PE Size               4.00 MB
  Total PE              1048575
  Alloc PE / Size       851968 / 3.25 TB
  Free  PE / Size       196607 / 768.00 GB
  VG UUID               F3Tiy0-zMkn-i2mp-A2l2-jz30-5rI9-NM2fqD

  --- Volume group ---
  VG Name               vgdate1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               4
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.00 TB
  PE Size               4.00 MB
  Total PE              1048575
  Alloc PE / Size       851968 / 3.25 TB
  Free  PE / Size       196607 / 768.00 GB
  VG UUID               RCHGjL-1TMA-zbr1-0Wzi-zomY-W2x8-aahM9s

[root@jczhdb1 mapper]#

Create the LVM logical volumes:
[root@jczhdb1 mapper]# lvcreate -n data9 -L 1024G vgdata3
  Logical volume "data9" created
[root@jczhdb1 mapper]# lvcreate -n data10 -L 1024G vgdata3
  Logical volume "data10" created
[root@jczhdb1 mapper]# lvcreate -n data11 -L 1024G vgdata3
  Logical volume "data11" created
[root@jczhdb1 mapper]# lvcreate -n data12 -L 1024G vgdata3
  Insufficient free extents (262143) in volume group vgdata3: 262144 required
[root@jczhdb1 mapper]# 
[root@jczhdb1 mapper]# lvcreate -n data13 -L 1024G vgdata4
  Logical volume "data13" created
[root@jczhdb1 mapper]# lvcreate -n data14 -L 1024G vgdata4
  Logical volume "data14" created
[root@jczhdb1 mapper]# lvcreate -n data15 -L 1024G vgdata4
  Logical volume "data15" created
[root@jczhdb1 mapper]# lvcreate -n data16 -L 1024G vgdata4
  Insufficient free extents (262143) in volume group vgdata4: 262144 required
[root@jczhdb1 mapper]# 

The fourth 1 TB volume fails in each group because a 4 TB VG holds 1,048,575 extents of 4 MB; after three 1 TB volumes (262,144 extents each), only 262,143 extents remain, one short of a fourth.

step 3. Create the raw devices (all nodes)

[root@jczhdb1 mapper]# cat /etc/rc.local

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local
/sbin/modprobe hangcheck-timer

/bin/raw /dev/raw/raw1 /dev/mapper/ocr1
/bin/raw /dev/raw/raw2 /dev/mapper/ocr2
/bin/raw /dev/raw/raw3 /dev/mapper/vote1
/bin/raw /dev/raw/raw4 /dev/mapper/vote2
/bin/raw /dev/raw/raw5 /dev/mapper/vote3
/bin/raw /dev/raw/raw6 /dev/mapper/vgdate1-lvdate1
/bin/raw /dev/raw/raw7 /dev/mapper/vgdate1-lvdate2
/bin/raw /dev/raw/raw8 /dev/mapper/vgdate1-lvdate3
/bin/raw /dev/raw/raw9 /dev/mapper/vgdate1-lvdate4
/bin/raw /dev/raw/raw10 /dev/mapper/vgdate2-lvdate5
/bin/raw /dev/raw/raw11 /dev/mapper/vgdate2-lvdate6
/bin/raw /dev/raw/raw12 /dev/mapper/vgdate2-lvdate7
/bin/raw /dev/raw/raw13 /dev/mapper/vgdate2-lvdate8
/bin/raw /dev/raw/raw14 /dev/mapper/vgdata3-data9
/bin/raw /dev/raw/raw15 /dev/mapper/vgdata3-data10
/bin/raw /dev/raw/raw16 /dev/mapper/vgdata3-data11
/bin/raw /dev/raw/raw17 /dev/mapper/vgdata4-data13
/bin/raw /dev/raw/raw18 /dev/mapper/vgdata4-data14
/bin/raw /dev/raw/raw19 /dev/mapper/vgdata4-data15
/bin/raw /dev/raw/raw20 /dev/mapper/FRA01
/bin/raw /dev/raw/raw21 /dev/mapper/FRA02
/bin/raw /dev/raw/raw22 /dev/mapper/FRA03
/bin/raw /dev/raw/raw23 /dev/mapper/FRA04
/bin/raw /dev/raw/raw24 /dev/mapper/FRA05
/bin/raw /dev/raw/raw25 /dev/mapper/FRA06
/bin/raw /dev/raw/raw26 /dev/mapper/FRA07
/bin/raw /dev/raw/raw27 /dev/mapper/FRA08
/bin/raw /dev/raw/raw28 /dev/mapper/FRA09
/bin/raw /dev/raw/raw29 /dev/mapper/FRA10

sleep 2
chmod 660 /dev/raw/raw*
chown root:oinstall /dev/raw/raw{1,2}
chown oracle:oinstall /dev/raw/raw{3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29}
[root@jczhdb1 mapper]#

step 4. Scan for the LVM volumes on the other nodes

[root@jczhdb2 mapper]# lvscan
  inactive          '/dev/vgdata4/data13' [1.00 TB] inherit
  inactive          '/dev/vgdata4/data14' [1.00 TB] inherit
  inactive          '/dev/vgdata4/data15' [1.00 TB] inherit
  inactive          '/dev/vgdata3/data9' [1.00 TB] inherit
  inactive          '/dev/vgdata3/data10' [1.00 TB] inherit
  inactive          '/dev/vgdata3/data11' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate2/lvdate5' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate2/lvdate6' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate2/lvdate7' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate2/lvdate8' [256.00 GB] inherit
  ACTIVE            '/dev/vgdate1/lvdate1' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate1/lvdate2' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate1/lvdate3' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate1/lvdate4' [256.00 GB] inherit
[root@jczhdb2 mapper]#
The newly created LVs are visible, but in the "inactive" state.
The vgchange command changes volume group attributes and is most often used to switch a volume group between the active and inactive states. An active volume group cannot be removed; it must first be deactivated with vgchange.

Activate the LVM volumes:
[root@jczhdb2 mapper]# vgchange -ay vgdata3
  3 logical volume(s) in volume group "vgdata3" now active
[root@jczhdb2 mapper]# vgchange -ay vgdata4
  3 logical volume(s) in volume group "vgdata4" now active
[root@jczhdb2 mapper]# lvscan 
  ACTIVE            '/dev/vgdata4/data13' [1.00 TB] inherit
  ACTIVE            '/dev/vgdata4/data14' [1.00 TB] inherit
  ACTIVE            '/dev/vgdata4/data15' [1.00 TB] inherit
  ACTIVE            '/dev/vgdata3/data9' [1.00 TB] inherit
  ACTIVE            '/dev/vgdata3/data10' [1.00 TB] inherit
  ACTIVE            '/dev/vgdata3/data11' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate2/lvdate5' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate2/lvdate6' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate2/lvdate7' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate2/lvdate8' [256.00 GB] inherit
  ACTIVE            '/dev/vgdate1/lvdate1' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate1/lvdate2' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate1/lvdate3' [1.00 TB] inherit
  ACTIVE            '/dev/vgdate1/lvdate4' [256.00 GB] inherit
[root@jczhdb2 mapper]#

step 5. View the disks

[root@jczhdb2 mapper]# ls -la
total 0
drwxr-xr-x  2 root root     760 Nov 10 17:49 .
drwxr-xr-x 17 root root    8740 Nov 10 17:49 ..
brw-rw----  1 root disk 253,  2 Nov 10 16:22 arch2
brw-rw----  1 root disk 253, 20 Nov 10 16:22 arch2p1
crw-------  1 root root  10, 59 Nov 10 16:22 control
brw-rw----  1 root disk 253,  0 Nov 10 16:22 data1
brw-rw----  1 root disk 253,  1 Nov 10 16:22 data2
brw-rw----  1 root disk 253, 13 Nov 10 16:22 data3
brw-rw----  1 root disk 253, 14 Nov 10 16:22 data4
brw-rw----  1 root disk 253,  3 Nov 10 16:22 FRA01
brw-rw----  1 root disk 253,  4 Nov 10 16:22 FRA02
brw-rw----  1 root disk 253,  5 Nov 10 16:22 FRA03
brw-rw----  1 root disk 253,  6 Nov 10 16:22 FRA04
brw-rw----  1 root disk 253,  7 Nov 10 16:22 FRA05
brw-rw----  1 root disk 253,  8 Nov 10 16:22 FRA06
brw-rw----  1 root disk 253,  9 Nov 10 16:22 FRA07
brw-rw----  1 root disk 253, 10 Nov 10 16:22 FRA08
brw-rw----  1 root disk 253, 11 Nov 10 16:22 FRA09
brw-rw----  1 root disk 253, 12 Nov 10 16:22 FRA10
brw-rw----  1 root disk 253, 16 Nov 10 16:22 ocr1
brw-rw----  1 root disk 253, 17 Nov 10 16:22 ocr2
brw-rw----  1 root disk 253, 30 Nov 10 17:49 vgdata3-data10
brw-rw----  1 root disk 253, 31 Nov 10 17:49 vgdata3-data11
brw-rw----  1 root disk 253, 29 Nov 10 17:49 vgdata3-data9
brw-rw----  1 root disk 253, 32 Nov 10 17:49 vgdata4-data13
brw-rw----  1 root disk 253, 33 Nov 10 17:49 vgdata4-data14
brw-rw----  1 root disk 253, 34 Nov 10 17:49 vgdata4-data15
brw-rw----  1 root disk 253, 25 Nov 10 16:22 vgdate1-lvdate1
brw-rw----  1 root disk 253, 26 Nov 10 16:22 vgdate1-lvdate2
brw-rw----  1 root disk 253, 27 Nov 10 16:22 vgdate1-lvdate3
brw-rw----  1 root disk 253, 28 Nov 10 16:22 vgdate1-lvdate4
brw-rw----  1 root disk 253, 21 Nov 10 16:22 vgdate2-lvdate5
brw-rw----  1 root disk 253, 22 Nov 10 16:22 vgdate2-lvdate6
brw-rw----  1 root disk 253, 23 Nov 10 16:22 vgdate2-lvdate7
brw-rw----  1 root disk 253, 24 Nov 10 16:22 vgdate2-lvdate8
brw-rw----  1 root disk 253, 19 Nov 10 16:22 vote1
brw-rw----  1 root disk 253, 18 Nov 10 16:22 vote2
brw-rw----  1 root disk 253, 15 Nov 10 16:22 vote3
[root@jczhdb2 mapper]#

step 6. Bind the RAW devices

[root@jczhdb2 mapper]# source /etc/rc.local 
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
Error setting raw device (Device or resource busy)
/dev/raw/raw14:    bound to major 253, minor 29
/dev/raw/raw15:    bound to major 253, minor 30
/dev/raw/raw16:    bound to major 253, minor 31
/dev/raw/raw17:    bound to major 253, minor 32
/dev/raw/raw18:    bound to major 253, minor 33
/dev/raw/raw19:    bound to major 253, minor 34
[root@jczhdb2 mapper]#

The "Device or resource busy" errors come from raw1-raw13, which were already bound on this node; only the newly added raw14-raw19 are bound here.

step 7. Add disks to the ASM disk group

alter diskgroup DATA add disk '/dev/raw/raw14';
alter diskgroup DATA add disk '/dev/raw/raw15';
alter diskgroup DATA add disk '/dev/raw/raw16';
alter diskgroup DATA add disk '/dev/raw/raw17';
alter diskgroup DATA add disk '/dev/raw/raw18';
alter diskgroup DATA add disk '/dev/raw/raw19';

step 8. Create the ASM disk group

create diskgroup FRA external redundancy disk '/dev/raw/raw20';

alter diskgroup FRA add disk '/dev/raw/raw21';
alter diskgroup FRA add disk '/dev/raw/raw22';
alter diskgroup FRA add disk '/dev/raw/raw23';
alter diskgroup FRA add disk '/dev/raw/raw24';
alter diskgroup FRA add disk '/dev/raw/raw25';
alter diskgroup FRA add disk '/dev/raw/raw26';
alter diskgroup FRA add disk '/dev/raw/raw27';
alter diskgroup FRA add disk '/dev/raw/raw28';
alter diskgroup FRA add disk '/dev/raw/raw29';

step 9. Enable archive-log mode for the database

Shut down all instances first:
shut immediate;
startup mount;
alter database archivelog;
alter database open;

Configure the archived-log destination:
alter system set log_archive_dest_1='LOCATION=+FRA' scope=spfile sid='jczhdb1';
alter system set log_archive_dest_1='LOCATION=+FRA' scope=spfile sid='jczhdb2';

Switch the logs so they are archived:
alter system switch logfile;

Raw devices do not require any other special services to be configured in the Linux operating system.

2. Creating ASM Disks in Oracle 11g

2.1. Configuring Disk Multipathing

step 1. Install the multipath package
Check whether the package is already installed:

rpm -qa |grep device-mapper-multipath

If it is not installed, install it with yum:

yum install device-mapper-multipath -y

step 2. Scan the devices

[root@localhost Packages]# multipath -v2
Jul 29 01:27:09 | DM multipath kernel driver not loaded
Jul 29 01:27:09 | /etc/multipath.conf does not exist, blacklisting all devices.
Jul 29 01:27:09 | A sample multipath.conf file is located at
Jul 29 01:27:09 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
Jul 29 01:27:09 | You can run /sbin/mpathconf to create or modify /etc/multipath.conf
Jul 29 01:27:09 | DM multipath kernel driver not loaded
[root@localhost Packages]# 

step 3. Load the DM kernel modules:

modprobe dm-multipath
modprobe dm-round-robin

Confirm that the kernel modules loaded:

modprobe -l |grep multipath
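
On kernels where modprobe -l has been removed, lsmod gives the same confirmation:

lsmod | grep -E 'dm_multipath|dm_round_robin'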

step 4. Enable the service at boot
Check the current boot setting:

chkconfig --list|grep multipathd

Turn on start-at-boot and confirm:

chkconfig multipathd on
chkconfig --list|grep multipathd

step 5. Generate the multipath configuration file

  • Generate it with mpathconf:

    /sbin/mpathconf --enable

  • Or start from the sample configuration file, copying it from "/usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf":

    cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/
    

Confirm the service status:

service multipathd status

Start the DM multipath service:

service multipathd start

step 6. View the device WWIDs:

multipath -ll
multipath -v3

The device WWIDs:

cat /etc/multipath/wwids 

The WWIDs bound to devices:

cat /etc/multipath/bindings 

step 7. Configure /etc/multipath.conf: add each storage LUN's WWID and assign it an alias:

cat >> /etc/multipath.conf <<EOF
# Oracle Device Block
blacklist_exceptions {
    wwid "222020001551e58d8"
    wwid "2221a0001550395eb"
    wwid "222490001555d9e1d"
    wwid "22237000155e1d17a"
    wwid "222ba000155210541"
    wwid "222cf000155e429ea"
    wwid "222af0001556f1c17"
    wwid "2220e00015541c64d"
    wwid "222250001552dbf20"
    wwid "222960001558bb175"
    wwid "2225d0001552e0275"
    wwid "222d600015566bb0f"
    wwid "22204000155705524"
    wwid "22230000155ad7d67"
    wwid "222930001559793a7"
    wwid "222af000155d2e220"
    wwid "222030001556ea96b"
    wwid "222e9000155130f4e"
    wwid "2222d000155a55c5e"
    wwid "2223100015590f859"
    wwid "222dd000155d103d2"
    wwid "222ac0001558876a2"
    wwid "2223500015565cfd9"
    wwid "22284000155e63ad8"
    wwid "222fd000155de520e"
       }
blacklist {
       wwid 3600605b00eaacd802894354104b586a6
       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
       devnode "^hd[a-z]"
}
multipaths {
    multipath {
                wwid "222020001551e58d8"
                alias   DATA0
        }
    multipath {
                wwid "2221a0001550395eb"
                alias   DATA1
        }    
    multipath {
                wwid "222490001555d9e1d"
                alias   DATA2
        }
    multipath {
                wwid "22237000155e1d17a"
                alias   DATA3
        }
    multipath {
                wwid "222ba000155210541"
                alias   DATA4
        }
    multipath {
                wwid "222cf000155e429ea"
                alias   REDO0
        }
    multipath {
                wwid "222af0001556f1c17"
                alias   REDO1
        }
    multipath {
                wwid "2220e00015541c64d"
                alias   REDO2
        }
    multipath {
                wwid "222250001552dbf20"
                alias   OCR0
        }    
    multipath {
                wwid "222960001558bb175"
                alias   OCR1
        }
    multipath {
                wwid "2225d0001552e0275"
                alias   OCR2
        }
    multipath {
                wwid "222d600015566bb0f"
                alias   OCR3
        }
    multipath {
                wwid "22204000155705524"
                alias   OCR4
        }    
    multipath {
                wwid "22230000155ad7d67"
                alias   FRA0
        }
    multipath {
                wwid "222930001559793a7"
                alias   FRA1
        }
    multipath {
                wwid "222af000155d2e220"
                alias   FRA2
        }
    multipath {
                wwid "222030001556ea96b"
                alias   FRA3
        }    
    multipath {
                wwid "222e9000155130f4e"
                alias   FRA4
        }
    multipath {
                wwid "2222d000155a55c5e"
                alias   FRA5
        }
    multipath {
                wwid "2223100015590f859"
                alias   FRA6
        }
    multipath {
                wwid "222dd000155d103d2"
                alias   FRA7
        }    
    multipath {
                wwid "222ac0001558876a2"
                alias   IMAGE0
        }
    multipath {
                wwid "2223500015565cfd9"
                alias   IMAGE1
        }
    multipath {
                wwid "22284000155e63ad8"
                alias   IMAGE2
        }
    multipath {
                wwid "222fd000155de520e"
                alias   IMAGE3
        }    
    }
EOF
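
After appending the configuration, reload the multipath maps and verify that the aliases appear (a sketch; service names as on RHEL/OEL 6):

service multipathd reload
multipath -r                  # rebuild the maps
multipath -ll | grep -iE 'DATA|REDO|OCR|FRA|IMAGE'
ls -l /dev/mapper/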

2.2. Using OracleASM

Install the oracleasm support package:

[root@bigdata Packages]# yum install oracleasm-support-2.1.11-2.el7.x86_64.rpm 
[root@bigdata Packages]# 
[root@bigdata Packages]# oracleasm -h
Usage: oracleasm [--exec-path=<exec_path>] <command> [ <args> ]
       oracleasm --exec-path
       oracleasm -h
       oracleasm -V

The basic oracleasm commands are:
    configure       Configure the Oracle Linux ASMLib driver
    init            Load and initialize the ASMLib driver
    exit            Stop the ASMLib driver
    scandisks       Scan the system for Oracle ASMLib disks
    status          Display the status of the Oracle ASMLib driver
    listdisks       List known Oracle ASMLib disks
    listiids        List the iid files
    deleteiids      Delete the unused iid files
    querydisk       Determine if a disk belongs to Oracle ASMlib
    createdisk      Allocate a device for Oracle ASMLib use
    deletedisk      Return a device to the operating system
    renamedisk      Change the label of an Oracle ASMlib disk
    update-driver   Download the latest ASMLib driver
[root@bigdata Packages]#
Initialize:
[root@bigdata Packages]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@bigdata Packages]# 

[root@xxjjzd-jcpt-data Packages]# oracleasm createdisk DATA0 /dev/sda
oracleasm module not loaded or /dev/oracleasm not mounted.
[root@xxjjzd-jcpt-data Packages]# 

Load the ASM library driver:
[root@xxjjzd-jcpt-data Packages]# /etc/init.d/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
[root@xxjjzd-jcpt-data Packages]# /etc/init.d/oracleasm start
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@xxjjzd-jcpt-data Packages]#
[root@xxjjzd-jcpt-data Packages]# /etc/init.d/oracleasm enable
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

Create the ASM disk:
[root@xxjjzd-jcpt-data ~]# oracleasm createdisk DATA0 /dev/sda1 
Writing disk header: done
Instantiating disk: done
[root@xxjjzd-jcpt-data ~]#
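
On the remaining cluster nodes the disks are not created again; they are picked up with a scan (a sketch; the node name is an example):

[root@xxjjzd-jcpt-data2 ~]# oracleasm scandisks
[root@xxjjzd-jcpt-data2 ~]# oracleasm listdisks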

2.3. UDEV

step 1. Configure the dm rules:
Locate the sample rules file:

[root@jyrac1 ~]# find / -name 12-*

/usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules

Refer to this part of the file:

# MULTIPATH DEVICES
#
# Set permissions for all multipath devices
# ENV{DM_UUID}=="mpath-?*", OWNER:="root", GROUP:="root", MODE:="660"

# Set permissions for first two partitions created on a multipath device (and detected by kpartx)
# ENV{DM_UUID}=="part[1-2]-mpath-?*", OWNER:="root", GROUP:="root", MODE:="660"

step 2. Obtain the DM_UUIDs:

cd /dev/mapper
for i in `ls mpath*`; do printf "%s %s\n" "$i" "$(udevadm info --query=all --name=/dev/mapper/$i |grep -i dm_uuid)"; done

step 3. Generate the rules file

cat > /etc/udev/rules.d/99-oracle-asmdevices.rules <<EOF
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222020001551e58d8",SYMLINK+="/oracleasm/disks/DATA0",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-2221a0001550395eb",SYMLINK+="/oracleasm/disks/DATA1",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222490001555d9e1d",SYMLINK+="/oracleasm/disks/DATA2",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-22237000155e1d17a",SYMLINK+="/oracleasm/disks/DATA3",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222ba000155210541",SYMLINK+="/oracleasm/disks/DATA4",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-22230000155ad7d67",SYMLINK+="/oracleasm/disks/FRA0",OWNER="grid",GROUP="asmadmin",MODE="0660" 
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222930001559793a7",SYMLINK+="/oracleasm/disks/FRA1",OWNER="grid",GROUP="asmadmin",MODE="0660" 
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222af000155d2e220",SYMLINK+="/oracleasm/disks/FRA2",OWNER="grid",GROUP="asmadmin",MODE="0660" 
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222030001556ea96b",SYMLINK+="/oracleasm/disks/FRA3",OWNER="grid",GROUP="asmadmin",MODE="0660" 
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222e9000155130f4e",SYMLINK+="/oracleasm/disks/FRA4",OWNER="grid",GROUP="asmadmin",MODE="0660" 
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-2222d000155a55c5e",SYMLINK+="/oracleasm/disks/FRA5",OWNER="grid",GROUP="asmadmin",MODE="0660" 
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-2223100015590f859",SYMLINK+="/oracleasm/disks/FRA6",OWNER="grid",GROUP="asmadmin",MODE="0660" 
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222dd000155d103d2",SYMLINK+="/oracleasm/disks/FRA7",OWNER="grid",GROUP="asmadmin",MODE="0660" 
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222ac0001558876a2",SYMLINK+="/oracleasm/disks/IMAGE0",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-2223500015565cfd9",SYMLINK+="/oracleasm/disks/IMAGE1",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-22284000155e63ad8",SYMLINK+="/oracleasm/disks/IMAGE2",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222fd000155de520e",SYMLINK+="/oracleasm/disks/IMAGE3",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222250001552dbf20",SYMLINK+="/oracleasm/disks/OCR0",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222960001558bb175",SYMLINK+="/oracleasm/disks/OCR1",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-2225d0001552e0275",SYMLINK+="/oracleasm/disks/OCR2",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222d600015566bb0f",SYMLINK+="/oracleasm/disks/OCR3",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-22204000155705524",SYMLINK+="/oracleasm/disks/OCR4",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222cf000155e429ea",SYMLINK+="/oracleasm/disks/REDO0",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-222af0001556f1c17",SYMLINK+="/oracleasm/disks/REDO1",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="mpath-2220e00015541c64d",SYMLINK+="/oracleasm/disks/REDO2",OWNER="grid",GROUP="asmadmin",MODE="0660
EOF

step 4. Reload the rules file

udevadm control --reload-rules

# OEL6
/sbin/start_udev

# OEL7
udevadm trigger

step 5. Check the disk permissions

[root@sljj01 disks]# ll /dev/oracleasm/disks/*
lrwxrwxrwx. 1 root root 13 Jul 29 22:06 /dev/oracleasm/disks/DATA0 -> ../../../dm-6
lrwxrwxrwx. 1 root root 13 Jul 29 22:06 /dev/oracleasm/disks/DATA1 -> ../../../dm-4
lrwxrwxrwx. 1 root root 13 Jul 29 22:06 /dev/oracleasm/disks/DATA2 -> ../../../dm-2
lrwxrwxrwx. 1 root root 13 Jul 29 22:06 /dev/oracleasm/disks/DATA3 -> ../../../dm-5
lrwxrwxrwx. 1 root root 13 Jul 29 22:06 /dev/oracleasm/disks/DATA4 -> ../../../dm-3
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/FRA0 -> ../../../dm-13
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/FRA1 -> ../../../dm-15
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/FRA2 -> ../../../dm-14
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/FRA3 -> ../../../dm-16
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/FRA4 -> ../../../dm-17
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/FRA5 -> ../../../dm-18
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/FRA6 -> ../../../dm-20
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/FRA7 -> ../../../dm-19
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/IMAGE0 -> ../../../dm-24
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/IMAGE1 -> ../../../dm-22
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/IMAGE2 -> ../../../dm-21
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/IMAGE3 -> ../../../dm-23
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/OCR0 -> ../../../dm-10
lrwxrwxrwx. 1 root root 13 Jul 29 22:06 /dev/oracleasm/disks/OCR1 -> ../../../dm-9
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/OCR2 -> ../../../dm-11
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/OCR3 -> ../../../dm-26
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/OCR4 -> ../../../dm-12
lrwxrwxrwx. 1 root root 13 Jul 29 22:06 /dev/oracleasm/disks/REDO0 -> ../../../dm-7
lrwxrwxrwx. 1 root root 14 Jul 29 22:06 /dev/oracleasm/disks/REDO1 -> ../../../dm-25
lrwxrwxrwx. 1 root root 13 Jul 29 22:06 /dev/oracleasm/disks/REDO2 -> ../../../dm-8
[root@sljj01 disks]# 

Make sure all the linked disks are visible and have the correct permissions; otherwise, resolve the problem before going on to the next step.

[root@sljj01 disks]# ll /dev/dm-*
brw-rw----. 1 root disk     253,  0 Jul 29 22:06 /dev/dm-0
brw-rw----. 1 root disk     253,  1 Jul 29 22:06 /dev/dm-1
brw-rw----. 1 grid asmadmin 253, 10 Jul 29 22:06 /dev/dm-10
brw-rw----. 1 grid asmadmin 253, 11 Jul 29 22:06 /dev/dm-11
brw-rw----. 1 grid asmadmin 253, 12 Jul 29 22:06 /dev/dm-12
brw-rw----. 1 grid asmadmin 253, 13 Jul 29 22:06 /dev/dm-13
brw-rw----. 1 grid asmadmin 253, 14 Jul 29 22:06 /dev/dm-14
brw-rw----. 1 grid asmadmin 253, 15 Jul 29 22:06 /dev/dm-15
brw-rw----. 1 grid asmadmin 253, 16 Jul 29 22:06 /dev/dm-16
brw-rw----. 1 grid asmadmin 253, 17 Jul 29 22:06 /dev/dm-17
brw-rw----. 1 grid asmadmin 253, 18 Jul 29 22:06 /dev/dm-18
brw-rw----. 1 grid asmadmin 253, 19 Jul 29 22:06 /dev/dm-19
brw-rw----. 1 grid asmadmin 253,  2 Jul 29 22:06 /dev/dm-2
brw-rw----. 1 grid asmadmin 253, 20 Jul 29 22:06 /dev/dm-20
brw-rw----. 1 grid asmadmin 253, 21 Jul 29 22:06 /dev/dm-21
brw-rw----. 1 grid asmadmin 253, 22 Jul 29 22:06 /dev/dm-22
brw-rw----. 1 grid asmadmin 253, 23 Jul 29 22:06 /dev/dm-23
brw-rw----. 1 grid asmadmin 253, 24 Jul 29 22:06 /dev/dm-24
brw-rw----. 1 grid asmadmin 253, 25 Jul 29 22:06 /dev/dm-25
brw-rw----. 1 grid asmadmin 253, 26 Jul 29 22:06 /dev/dm-26
brw-rw----. 1 root disk     253, 27 Jul 29 22:06 /dev/dm-27
brw-rw----. 1 grid asmadmin 253,  3 Jul 29 22:06 /dev/dm-3
brw-rw----. 1 grid asmadmin 253,  4 Jul 29 22:06 /dev/dm-4
brw-rw----. 1 grid asmadmin 253,  5 Jul 29 22:06 /dev/dm-5
brw-rw----. 1 grid asmadmin 253,  6 Jul 29 22:06 /dev/dm-6
brw-rw----. 1 grid asmadmin 253,  7 Jul 29 22:06 /dev/dm-7
brw-rw----. 1 grid asmadmin 253,  8 Jul 29 22:06 /dev/dm-8
brw-rw----. 1 grid asmadmin 253,  9 Jul 29 22:06 /dev/dm-9
[root@sljj01 disks]# 

The symlinks are owned by root, but the disks they point to carry the correct permissions.

3. ASMLib and OCFS2 Software Support

Oracle will no longer provide the ASMLib and OCFS2 software, or support for them, for new Red Hat releases. These are two important notices concerning Linux and the database; the key points are summarized below:

3.1.ASMLib

  • ASMLib is the support library for Automatic Storage Management in Oracle Database 10g and 11g and an important part of managing the database. Oracle used to provide ASMLib to customers running Oracle Database on Red Hat Linux.
  • Starting with RHEL 6 (Red Hat's newest release at the time), Oracle ships ASMLib only through ULN (Unbreakable Linux Network, Oracle's own Linux service network) and no longer provides ASMLib to Red Hat. Only customers who have purchased Oracle Linux can use the ASMLib packages on ULN.

3.2.OCFS2

  • OCFS2 is a parallel file system that lets multiple servers access shared storage simultaneously and is used as part of Oracle RAC. Oracle used to provide the OCFS2 software and support to customers installing Oracle database clusters on Red Hat Linux.
  • Starting with RHEL 6, Oracle provides OCFS2 only through ULN to Oracle Linux customers; access to ULN requires purchasing Oracle Linux.

3.3.Installing and Configuring ASMLib:

To obtain, install, and configure ASMLib, do the following:

  1. Enable the Red Hat Enterprise Linux 6 Server repository in the Red Hat Network customer portal.

  2. Download the ASMLib utilities package (oracleasm-support) and the ASMLib library package (oracleasmlib) from:
    http://www.oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html

  3. As root, install the ASMLib kernel module package:

# yum install kmod-oracleasm

  4. As root, install the ASMLib library package obtained in step 2:

# yum localinstall oracleasmlib-<version>.x86_64.rpm

  5. As root, install the ASMLib utilities package obtained in step 2:

# yum localinstall oracleasm-support-<version>.x86_64.rpm

All three required ASMLib components should now be installed on the system.

  6. Configure ASMLib with:

# oracleasm init

For more detailed steps on configuring ASMLib, see the ASMLib documentation on the Oracle Help Center.
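
After initialization, a typical labeling flow looks like this (a sketch; the device path is an example):

# oracleasm createdisk DATA0 /dev/mapper/DATA0
# oracleasm listdisks
# oracleasm querydisk DATA0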

4. Configuring ASM Disks with AFD on CentOS

Environment:

OS: CentOS 7.9
Oracle GRID: 19.3

## 4.1. System Compatibility Check

[root@db01 ~]# export ORACLE_HOME=/oracle/app/19.3.0/grid_1
[root@db01 ~]# export ORACLE_BASE=/tmp
[root@db01 ~]# export PATH=$PATH:$ORACLE_HOME/bin
[root@db01 ~]#
[root@db01 ~]# afddriverstate supported
AFD-620: AFD is not supported on this operating system version: 'centos-release-7-9.2009.0.el7.centos.x86_64'
AFD-9201: Not Supported
AFD-9294: updating file /etc/sysconfig/oracledrivers.conf

AFD-9201: Not Supported means that CentOS 7.9 is not supported.

step 2. Modify the AFD deployment script
The AFD deployment script below shows that Oracle runs rpm -qf /etc/redhat-release to determine the OS vendor name and release.

[root@db01 ~]# vi $ORACLE_HOME/lib/osds_acfslib.pm

  # see - http://www.oracle.com/us/technologies/027626.pdf
  if (-e "/etc/oracle-release")
  {
    open (RPM_QF, "rpm -qf /etc/oracle-release 2>&1 |");
    $release = <RPM_QF>;
    close (RPM_QF);
  }
  elsif (-e "/etc/redhat-release")
  {
    open (RPM_QF, "rpm -qf /etc/redhat-release 2>&1 |");
    $release = <RPM_QF>;
    close (RPM_QF);
  }
  elsif (-e "/etc/os-release")
  {
    open (RPM_QF, "rpm -qf /etc/os-release 2>&1 |");
    $release = <RPM_QF>;
    close (RPM_QF);
  }
  elsif (-e "/etc/SuSE-release")
  {
    open (RPM_QF, "rpm -qf /etc/SuSE-release 2>&1 |");
    $release = <RPM_QF>;
    close (RPM_QF);
  }

Check the system's release package:

[root@db01 ~]# rpm -qf /etc/redhat-release
centos-release-7-9.2009.0.el7.centos.x86_64

Reading further down the script shows that Oracle checks only for Oracle Linux and RHEL, never for CentOS. Add a CentOS check manually with the following commands:

[root@db01 ~]# sed -i '/(\$release =~ \/\^enterprise-release\/) ||/a \         (\$release =~ \/^centos-release\/) ||        # Centos Enterprise Linux' $ORACLE_HOME/lib/osds_acfslib.pm
[root@db01 ~]# sed -i 's/Centos/Redhat/' /etc/redhat-release

The resulting code looks like this:

579   if ((defined($release)) &&                     # Redhat or OEL if defined
580       (($release =~ /^redhat-release/) ||        # straight RH
581        ($release =~ /^enterprise-release/) ||    # Oracle Enterprise Linux
582        ($release =~ /^centos-release/) ||        # Centos Enterprise Linux
583        ($release =~ /^oraclelinux-release/)))    # Oracle Linux

1168   elsif (($release =~ /^redhat-release/) ||        # straight RH
1169          ($release =~ /^enterprise-release/) ||    # Oracle Enterprise Linux
1170          ($release =~ /^centos-release/) ||        # Centos Enterprise Linux
1171          ($release =~ /^oraclelinux-release/))     # Oracle Linux
1172   {

With the code modified, running supported again reports that the platform is supported.

[root@db01 ~]# afddriverstate supported
AFD-9200: Supported

4.2. Labeling the Disks

In an environment where AFD is not yet installed, labeling disks requires the additional --init flag. This step only needs to be run in a RAC environment.

[root@db01 grid]# asmcmd afd_label OCR01 /dev/OCR01 --init
[root@db01 grid]# asmcmd afd_label OCR02 /dev/OCR02 --init
[root@db01 grid]# asmcmd afd_label OCR03 /dev/OCR03 --init
[root@db01 grid]# ls -la /dev/oracleafd/disks/*
-rw-rw-r-- 1 grid oinstall 34 Apr  8 09:00 /dev/oracleafd/disks/OCR01
-rw-rw-r-- 1 grid oinstall 34 Apr  8 09:02 /dev/oracleafd/disks/OCR02
-rw-rw-r-- 1 grid oinstall 34 Apr  8 09:03 /dev/oracleafd/disks/OCR03
[root@db01 grid]# 

[root@db01 grid]# asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
OCR01                                 /dev/sdd
OCR02                                 /dev/sdk
OCR03                                 /dev/sdj
[root@db01 grid]# 

In a cluster environment, the disk permissions must be changed temporarily as root, otherwise the disks cannot be discovered when configuring the OCR disk group. The command:

[root@db01 ~]# chown grid:asmadmin /dev/sdb

Configure a single-instance GRID environment:

$ORACLE_HOME/gridSetup.sh -silent -skipPrereqs -waitForCompletion  -printtime  \
INVENTORY_LOCATION=/u01/app/oraInventory \
SELECTED_LANGUAGES=en \
oracle.install.option=HA_CONFIG \
ORACLE_BASE=/oracle/app/grid \
ORACLE_HOME=/oracle/app/19.3.0/grid_1 \
oracle.install.asm.OSDBA=asmdba \
oracle.install.asm.OSOPER=oinstall \
oracle.install.asm.OSASM=asmadmin \
oracle.install.asm.configureAFD=true \
oracle.install.asm.SYSASMPassword=Htzoracle123 \
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/sd* \
oracle.install.asm.diskGroup.name=ha_ocr \
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/sdb, \
oracle.install.asm.diskGroup.disks=/dev/sdb \
oracle.install.asm.diskGroup.redundancy=EXTERNAL \
oracle.install.asm.diskGroup.AUSize=4 \
oracle.install.asm.monitorPassword=Htzoracle123

As a root user, execute the following script(s):
    1. /oracle/app/19.3.0/grid_1/root.sh

Execute /oracle/app/19.3.0/grid_1/root.sh on the following nodes:
[node1]

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
    /oracle/app/19.3.0/grid_1/gridSetup.sh -executeConfigTools -responseFile /oracle/app/19.3.0/grid_1/install/response/grid_2023-01-03_01-30-58AM.rsp [-silent]
Note: The required passwords need to be included in the response file.

[root@node1 ~]# /oracle/app/19.3.0/grid_1/root.sh
Check /oracle/app/19.3.0/grid_1/install/root_node1_2023-01-03_01-33-28-805656094.log for the output of root script

[grid@node1 ~]$ /oracle/app/19.3.0/grid_1/gridSetup.sh -executeConfigTools -responseFile /oracle/app/19.3.0/grid_1/install/response/grid_2023-01-03_01-30-58AM.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2023-01-03_01-38-08AM

You can find the log of this install session at:
 /u01/app/oraInventory/logs/UpdateNodeList2023-01-03_01-38-08AM.log
[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.

4.3. Confirming That AFD Is Installed

Check whether OKS is loaded.
If the oks module is loaded, AFD was installed successfully.

[root@node1 ~]# lsmod |grep oracle
oracleacfs           5238775  0
oracleadvm           1176594  0
oracleoks             781410  2 oracleacfs,oracleadvm
oracleafd             222652  0

View the AFD disk labels:

[grid@node1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
OCR01                       ENABLED   /dev/sdb

Create a disk group on the AFD disks:

[grid@node1 ~]$ srvctl add asm
[grid@node1 ~]$ srvctl start asm

[grid@node1 ~]$ asmcmd afd_dsget
AFD discovery string: /dev/sd*
[grid@node1 ~]$ asmcmd dsget
parameter:AFD:*
profile:++no-value-at-resource-creation--never-updated-through-ASM++
[grid@node1 ~]$ asmca -silent -createDiskGroup -diskString AFD:* -diskGroupName ha_ocr -redundancy EXTERNAL -diskList AFD:OCR01

[DBT-30001] Disk groups created successfully. Check /oracle/app/grid/cfgtoollogs/asmca/asmca-230103AM014513.log for details.

[grid@node1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  1048576      3072     3008                0            3008              0             N  HA_OCR/

[grid@node1 ~]$ srvctl start diskgroup -diskgroup ha_ocr
[grid@node1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  1048576      3072     3008                0            3008              0             N  HA_OCR/

What an AFD label writes to disk
Below, a new disk /dev/sdc is added; let us examine what the AFD label writes into its header.

[grid@node1 ~]$ ls -l /dev/sdb*
brw-rw---- 1 root disk 8, 16 Jan  3  2023 /dev/sdb
[grid@node1 ~]$ ls -l /dev/oracleafd/disks/*
-rw-r--r-- 1 grid asmadmin 9 Jan  3  2023 /dev/oracleafd/disks/OCR01

[root@node1 ~]# od -x /dev/sdc
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
^C

[grid@node1 ~]$ asmcmd afd_label data01 /dev/sdc
No devices to be labeled.
ASMCMD-9513: ASM disk label set operation failed.

Running afd_label as the grid user fails; run it as root with the GRID environment set:

[root@node1 ~]# export ORACLE_HOME=/oracle/app/19.3.0/grid_1
[root@node1 ~]# export ORACLE_BASE=/oracle/app/grid
[root@node1 ~]# /oracle/app/19.3.0/grid_1/bin/asmcmd afd_label data01 /dev/sdc
[root@node1 ~]# /oracle/app/19.3.0/grid_1/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DATA01                      ENABLED   /dev/sdc
OCR01                       ENABLED   /dev/sdb

[root@node1 ~]# od -cx /dev/sdc
0000000  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 334 261 375 221
           0000    0000    0000    0000    0000    0000    b1dc    91fd
0000020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
           0000    0000    0000    0000    0000    0000    0000    0000
0000040   O   R   C   L   D   I   S   K   D   A   T   A   0   1  \0  \0
           524f    4c43    4944    4b53    4144    4154    3130    0000
0000060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
           0000    0000    0000    0000    0000    0000    0000    0000
*
0000440  \0  \0  \0  \0  \0  \0  \0  \0  \n  \n  \n 264 251 320 263   c
           0000    0000    0000    0000    0a0a    b40a    d0a9    63b3
0000460  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
           0000    0000    0000    0000    0000    0000    0000    0000
*

ORCLDISKDATA01 is the content AFD writes into the disk header; if an AFD label is ever lost, the corresponding content can also be written back by hand.
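
If a label ever has to be restored by hand, the dump above shows the ORCLDISK<label> string at byte offset 32 (octal 0000040). A cautious sketch; back up the header first, and note that the bytes near offset 0000440 are also label metadata, so prefer asmcmd afd_label whenever possible:

dd if=/dev/sdc of=/tmp/sdc.hdr bs=1M count=1                         # back up the header area
printf 'ORCLDISKDATA01' | dd of=/dev/sdc bs=1 seek=32 conv=notrunc   # rewrite the label string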

5. Musings on ASM Device Usage

Since the birth of Oracle RAC, storage has moved from Linux RAW devices to raw disks on SAN LUNs; newer releases of the Oracle database software then introduced AFD, and Oracle finally dropped oracleasmlib support for other operating systems, forcing users onto AFD.
What changes is the technology; behind it is Oracle tightening its policy on how RAC is used.
Today, with domestic databases and multi-model databases fiercely grabbing market share and China's IT-innovation (xinchuang) push in full swing, I still believe Oracle Database will live well, and will keep the PostgreSQL-based OEM databases struggling forward.

The above represents only my personal opinion; no flames, please.
