GBase 8c Study Notes 003: Hands-On GBase 8c Installation
Configuration Checks
Hardware Configuration
- CPU
[root@192 ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    2
座:                    2
NUMA 节点:             1
厂商 ID:               GenuineIntel
CPU 系列:              6
型号:                  158
型号名称:              Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz
步进:                  13
CPU MHz:               3000.001
BogoMIPS:              6000.00
超管理器厂商:          VMware
虚拟化类型:            完全
L1d 缓存:              32K
L1i 缓存:              32K
L2 缓存:               256K
L3 缓存:               12288K
NUMA 节点0 CPU:        0-3
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 arat md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
- Check memory
[root@192 ~]# cat /proc/meminfo
MemTotal:       10054444 kB
MemFree:         8312168 kB
MemAvailable:    8731304 kB
Buffers:            1156 kB
Cached:           653504 kB
SwapCached:            0 kB
Active:           630376 kB
Inactive:         548004 kB
Active(anon):     525240 kB
Inactive(anon):    25136 kB
Active(file):     105136 kB
Inactive(file):   522868 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       8257532 kB
SwapFree:        8257532 kB
Dirty:              3712 kB
Writeback:             0 kB
AnonPages:        524036 kB
Mapped:           160076 kB
Shmem:             26340 kB
Slab:             110152 kB
SReclaimable:      41328 kB
SUnreclaim:        68824 kB
KernelStack:       10464 kB
PageTables:        36248 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    13284752 kB
Committed_AS:    4057108 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      243188 kB
VmallocChunk:   34359277564 kB
Percpu:            56320 kB
HardwareCorrupted:     0 kB
AnonHugePages:     90112 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      171840 kB
DirectMap2M:     5070848 kB
DirectMap1G:     7340032 kB
[root@192 ~]#
- Check disk information
[root@192 ~]# fdisk -l | grep 磁盘
磁盘 /dev/sda:161.1 GB, 161061273600 字节,314572800 个扇区
磁盘标签类型:dos
磁盘标识符:0x000a449d
磁盘 /dev/mapper/centos-root:125.4 GB, 125397106688 字节,244916224 个扇区
磁盘 /dev/mapper/centos-swap:8455 MB, 8455716864 字节,16515072 个扇区
磁盘 /dev/mapper/centos-home:25.1 GB, 25052577792 字节,48930816 个扇区
Software Configuration
- Check the operating system
[root@192 ~]# cat /etc/redhat-release CentOS Linux release 7.9.2009 (Core)
Tip: check the ports ahead of time to avoid conflicts during installation.
The commonly used default ports are 20001, 2379, 6666, 5432, 15432, 20010, etc.
[root@192 ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      783/rpcbind
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1574/dnsmasq
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1216/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1215/cupsd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1553/master
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2989/sshd: root@pts
tcp6       0      0 :::111                  :::*                    LISTEN      783/rpcbind
tcp6       0      0 :::22                   :::*                    LISTEN      1216/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      1215/cupsd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1553/master
tcp6       0      0 ::1:6010                :::*                    LISTEN      2989/sshd: root@pts
[root@192 ~]#
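If you only want to check the GBase 8c default ports listed above, a small filter like the following sketch (it uses ss and hard-codes this deployment's port list) prints any listener that already occupies one of them:
# Flag any listener already using one of the default ports; no output from grep means they are free
ss -ntlp | grep -E ':(20001|2379|6666|5432|15432|20010)\b' || echo "default ports are free"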
Cluster Planning
Lab environment: a GBase 8c distributed cluster on three virtual machines.
node1 | GHA Server, DCS, GTM primary, DN2 standby | 192.168.254.141
node2 | DCS, CN1, GTM standby, DN1 primary | 192.168.254.142
node3 | DCS, CN2, DN2 primary | 192.168.254.143
Installation Preparation
Perform these steps on ALL nodes! ALL nodes! ALL nodes!
Disable the Firewall
# First, check the firewall status.
# If it is already stopped, no action is needed.
# If it is running, stop it manually.
[root@192 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since 二 2023-03-21 20:26:05 CST; 10min ago
Docs: man:firewalld(1)
Main PID: 890 (firewalld)
Tasks: 2
CGroup: /system.slice/firewalld.service
└─890 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid
3月 21 20:26:04 localhost.localdomain systemd[1]: Starting firewalld - dynamic firewall daemon...
3月 21 20:26:05 localhost.localdomain systemd[1]: Started firewalld - dynamic firewall daemon.
3月 21 20:26:05 localhost.localdomain firewalld[890]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future release. Ple...bling it now.
Hint: Some lines were ellipsized, use -l to show in full.
# Stop the firewall
[root@192 ~]# systemctl stop firewalld
# Disable it from starting at boot
[root@192 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
# Check the firewall status again
[root@192 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
3月 21 20:26:04 localhost.localdomain systemd[1]: Starting firewalld - dynamic firewall daemon...
3月 21 20:26:05 localhost.localdomain systemd[1]: Started firewalld - dynamic firewall daemon.
3月 21 20:26:05 localhost.localdomain firewalld[890]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future release. Ple...bling it now.
3月 21 20:37:02 192.168.254.141 systemd[1]: Stopping firewalld - dynamic firewall daemon...
3月 21 20:37:03 192.168.254.141 systemd[1]: Stopped firewalld - dynamic firewall daemon.
Hint: Some lines were ellipsized, use -l to show in full.
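A quick way to confirm on every node that the firewall is both stopped now and disabled for the next boot (a small sketch using systemctl's query subcommands):
# Confirm firewalld is stopped and will not start at boot
systemctl is-active firewalld    # expected: inactive
systemctl is-enabled firewalld   # expected: disabled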
Disable SELinux
# Check the SELinux status
[root@192 ~]# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 31
[root@192 ~]#
# If it is already disabled, no action is needed.
# If the status is enabled, SELinux must be disabled (the change takes full effect only after a reboot).
# Edit the config file: vim /etc/selinux/config
[root@192 ~]# vim /etc/selinux/config
# Set the SELINUX parameter to disabled
SELINUX=disabled
# Save with :wq, then reboot for the change to take effect
reboot
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
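The same change can also be scripted instead of edited by hand. A sketch, assuming the stock CentOS config file shown above, that drops SELinux to permissive for the current boot and persists the disabled setting for the next one:
# Switch SELinux to permissive immediately (lasts until reboot)
setenforce 0
# Persist the change; 'disabled' fully applies only after a reboot
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
getenforce   # expected: Permissive now, Disabled after the next reboot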
Environment Checks
# Set the hostname (run the matching command on the corresponding node)
[root@192 ~]# hostnamectl set-hostname gbase8c_5_141
[root@192 ~]# hostnamectl set-hostname gbase8c_5_142
[root@192 ~]# hostnamectl set-hostname gbase8c_5_143
# Or edit /etc/hostname directly
vim /etc/hostname
[root@192 ~]# hostname
gbase8c_5_141
[root@192 ~]# hostname
gbase8c_5_142
[root@192 ~]# hostname
gbase8c_5_143
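The gbase.yml file used later references the nodes by IP address, so hostname resolution is not strictly required; still, adding the three mappings to /etc/hosts on every node makes the later ssh and monitoring output easier to read. A sketch with this lab's addresses (adjust them to your own):
# Optional: add hostname-to-IP mappings on every node (values from the planning table)
cat <<'EOF' >> /etc/hosts
192.168.254.141 gbase8c_5_141
192.168.254.142 gbase8c_5_142
192.168.254.143 gbase8c_5_143
EOF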
# Check dependencies
# If the versions shown satisfy the software dependency requirements, nothing needs to be done.
# libaio-devel, lsb_release, and ncurses-devel are bundled in the installation package and do not need to be checked.
[root@192 ~]# rpm -q bison flex patch bzip2
bison-3.0.4-2.el7.x86_64
flex-2.5.37-6.el7.x86_64
patch-2.7.1-12.el7_7.x86_64
bzip2-1.0.6-13.el7.x86_64
[root@192 ~]#
# If a package is missing, install it with: yum install -y <package name>
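A small sketch that does this in one pass: it checks all four packages and installs whichever one rpm reports as missing (it assumes a working yum repository).
# Install any required package that is not yet present
for pkg in bison flex patch bzip2; do
    rpm -q "$pkg" >/dev/null 2>&1 || yum install -y "$pkg"
done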
# Check whether the CPU supports the rdtscp instruction set
[root@192 ~]# cat /proc/cpuinfo | grep rdtscp
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 arat md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 arat md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 arat md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 arat md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
[root@192 ~]#
Configure the gbase User's Privileges
# Create the gbase group and user on all nodes. Remember the password!!
# Password: gbase!@#
[root@192 ~]# groupadd gbase
[root@192 ~]#
[root@192 ~]# useradd -m -d /home/gbase gbase -g gbase
[root@192 ~]#
# Set a password for the gbase user
[root@192 ~]# passwd gbase
更改用户 gbase 的密码 。
新的 密码:
无效的密码: 密码包含用户名在某些地方
重新输入新的 密码:
passwd:所有的身份验证令牌已经成功更新。
[root@192 ~]#
# Edit the sudo configuration file
# /etc/sudoers
[root@192 ~]# visudo
# Add the following line to put the gbase user on the sudoers list.
gbase ALL=(ALL) NOPASSWD:ALL
# After the configuration is done, switch to the gbase user. All subsequent steps are performed as gbase.
[root@192 ~]# su gbase
[gbase@gbase8c_5_141 root]$
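A one-line check (sketch) that the NOPASSWD rule is in effect: with -n, sudo fails instead of prompting, so the message below is only printed when passwordless sudo works.
# Run as gbase; prints the message only if sudo works without a password prompt
sudo -n true && echo "passwordless sudo OK"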
Configure NTP Synchronization
# Check the status of the ntp service
[gbase@gbase8c_5_141 root]$ sudo systemctl status ntpd.service
● ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
Active: inactive (dead)
# Start the ntp service
[gbase@gbase8c_5_141 root]$ sudo systemctl start ntpd.service
# Enable it at boot
[gbase@gbase8c_5_141 root]$ sudo systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
# Check again that the ntp service is running
[gbase@gbase8c_5_141 root]$ sudo systemctl status ntpd.service
● ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
Active: active (running) since 二 2023-03-21 21:12:22 CST; 14s ago
Main PID: 3374 (ntpd)
CGroup: /system.slice/ntpd.service
└─3374 /usr/sbin/ntpd -u ntp:ntp -g
3月 21 21:12:22 gbase8c_5_141 ntpd[3374]: Listen normally on 2 lo 127.0.0.1 UDP 123
3月 21 21:12:22 gbase8c_5_141 ntpd[3374]: Listen normally on 3 ens33 192.168.254.141 UDP 123
3月 21 21:12:22 gbase8c_5_141 ntpd[3374]: Listen normally on 4 virbr0 192.168.122.1 UDP 123
3月 21 21:12:22 gbase8c_5_141 ntpd[3374]: Listen normally on 5 lo ::1 UDP 123
3月 21 21:12:22 gbase8c_5_141 ntpd[3374]: Listen normally on 6 ens33 fe80::6eb3:53:eac0:83cd ...123
3月 21 21:12:22 gbase8c_5_141 ntpd[3374]: Listening on routing socket on fd #23 for interface...tes
3月 21 21:12:22 gbase8c_5_141 ntpd[3374]: 0.0.0.0 c016 06 restart
3月 21 21:12:22 gbase8c_5_141 ntpd[3374]: 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
3月 21 21:12:22 gbase8c_5_141 ntpd[3374]: 0.0.0.0 c011 01 freq_not_set
3月 21 21:12:29 gbase8c_5_141 ntpd[3374]: 0.0.0.0 c614 04 freq_mode
Hint: Some lines were ellipsized, use -l to show in full.
[gbase@gbase8c_5_141 root]$
# node1 (the GTM primary) is chosen as the NTP server node; any node type would do.
# Edit the configuration file on the NTP server node
[gbase@gbase8c_5_141 root]$ sudo vi /etc/ntp.conf
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
driftfile /var/lib/ntp/drift
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery
# Added: the local IP address
restrict 192.168.254.141 nomodify notrap nopeer noquery
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
# Added: the local subnet and netmask
restrict 192.168.254.255 mask 255.255.255.0 nomodify notrap
restrict 127.0.0.1
restrict ::1
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Added: 127.127.1.0 (the local clock)
server 127.127.1.0
fudge 127.127.1.0 stratum 10
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# Commented out: the default public pool servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
#broadcast 192.168.1.255 autokey # broadcast server
#broadcastclient # broadcast client
#broadcast 224.0.1.1 autokey # multicast server
#multicastclient 224.0.1.1 # multicast client
#manycastserver 239.255.254.254 # manycast server
#manycastclient 239.255.254.254 autokey # manycast client
# Edit the configuration files on the NTP client nodes
[gbase@gbase8c_5_142 root]$ sudo vi /etc/ntp.conf
[gbase@gbase8c_5_143 root]$ sudo vi /etc/ntp.conf
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
driftfile /var/lib/ntp/drift
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery
# Added: the local IP address
restrict 192.168.254.142 nomodify notrap nopeer noquery
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
# Added: the local subnet and netmask
restrict 192.168.254.255 mask 255.255.255.0 nomodify notrap
restrict 127.0.0.1
restrict ::1
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Added: the IP address of the NTP server node
server 192.168.254.141
fudge 192.168.254.141 stratum 10
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# Commented out: the default public pool servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
driftfile /var/lib/ntp/drift
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery
# Added: the local IP address
restrict 192.168.254.143 nomodify notrap nopeer noquery
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
# Added: the local subnet and netmask
restrict 192.168.254.255 mask 255.255.255.0 nomodify notrap
restrict 127.0.0.1
restrict ::1
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Added: the IP address of the NTP server node
server 192.168.254.141
fudge 192.168.254.141 stratum 10
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# Commented out: the default public pool servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
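After editing /etc/ntp.conf, restart the service on every node so the new configuration is loaded; the sketch below restarts it and checks that the clients are following node1 (the column layout of ntpq can vary slightly between versions):
# Reload the new NTP configuration, then inspect the peers
sudo systemctl restart ntpd
ntpq -p      # on node2/node3 the remote column should show 192.168.254.141
ntpstat      # if installed, reports whether the clock is synchronised (this can take a few minutes)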
Configure Passwordless SSH Trust for gbase
# On all nodes, create the gbase user's .ssh directory and set its permissions
[gbase@gbase8c_5_141 root]$ mkdir ~/.ssh
[gbase@gbase8c_5_141 root]$ chmod 700 ~/.ssh
# Generate the key pair
[gbase@gbase8c_5_141 root]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/gbase/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/gbase/.ssh/id_rsa.
Your public key has been saved in /home/gbase/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:xcduS8xSamyAfm+zWH+qk6mVffJB6V+hfv2971xdJqw gbase@gbase8c_5_141
The key's randomart image is:
+---[RSA 2048]----+
| |
| . . . |
| . . o + |
| . + B . . |
| . S * * =.o|
| . + * =.o+|
| BoE.+ =|
| =++.+.==|
| o.ooo+oo@|
+----[SHA256]-----+
# Copy the public key to every node, including the local one; each copy prompts for the gbase user's password
[gbase@gbase8c_5_141 root]$ ssh-copy-id gbase@192.168.254.141
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/gbase/.ssh/id_rsa.pub"
The authenticity of host '192.168.254.141 (192.168.254.141)' can't be established.
ECDSA key fingerprint is SHA256:XyUe4k3DBDE4uUCYlAbDGds+qdF6FIunxj8rwS5DEBc.
ECDSA key fingerprint is MD5:4c:7d:50:05:df:4b:32:6c:2d:f0:00:2d:dc:ab:7b:df.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
gbase@192.168.254.141's password:
Permission denied, please try again.
gbase@192.168.254.141's password:
Permission denied, please try again.
gbase@192.168.254.141's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'gbase@192.168.254.141'"
and check to make sure that only the key(s) you wanted were added.
[gbase@gbase8c_5_141 root]$ ssh-copy-id gbase@192.168.254.142
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/gbase/.ssh/id_rsa.pub"
The authenticity of host '192.168.254.142 (192.168.254.142)' can't be established.
ECDSA key fingerprint is SHA256:cHEe6Qbl1yGNVhIO5nefAslHB1Z9s7XrlfpWASGXq2s.
ECDSA key fingerprint is MD5:0b:d6:d3:77:5c:33:40:d5:8e:8b:90:30:fe:31:95:ba.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
gbase@192.168.254.142's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'gbase@192.168.254.142'"
and check to make sure that only the key(s) you wanted were added.
[gbase@gbase8c_5_141 root]$ ssh-copy-id gbase@192.168.254.143
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/gbase/.ssh/id_rsa.pub"
The authenticity of host '192.168.254.143 (192.168.254.143)' can't be established.
ECDSA key fingerprint is SHA256:hRXSZ/S3jUNMDdqNETwX9iucmVUhZJI36cdBbflblTs.
ECDSA key fingerprint is MD5:2f:08:ff:c8:c1:ed:13:45:f4:99:9d:4d:9c:2d:a2:64.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
gbase@192.168.254.143's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'gbase@192.168.254.143'"
and check to make sure that only the key(s) you wanted were added.
# Verify the passwordless trust (if no password prompt appears, the configuration succeeded)
# 192.168.254.141--->192.168.254.142
[gbase@gbase8c_5_141 root]$ ssh gbase@192.168.254.142
Last login: Tue Mar 21 21:09:16 2023
[gbase@gbase8c_5_142 ~]$
# 192.168.254.142--->192.168.254.143
[gbase@gbase8c_5_142 root]$ ssh gbase@192.168.254.143
Last login: Tue Mar 21 21:09:16 2023
[gbase@gbase8c_5_143 ~]$
# 192.168.254.143--->192.168.254.142
[gbase@gbase8c_5_143 root]$ ssh gbase@192.168.254.142
Last login: Tue Mar 21 22:07:37 2023 from 192.168.254.143
[gbase@gbase8c_5_142 ~]$
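A short loop (sketch) that exercises the trust from the current node to all three nodes; BatchMode makes ssh fail instead of prompting, so any missing key shows up as an error rather than a password prompt. Run it as gbase on each node in turn:
# Each line should print the remote hostname without asking for a password
for ip in 192.168.254.141 192.168.254.142 192.168.254.143; do
    ssh -o BatchMode=yes gbase@"$ip" hostname
done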
Installation (Performed on the Deployment Node)
For a distributed cluster, the GTM primary node is usually chosen as the deployment node.
# Upload the installation package and change its ownership to the gbase user
[gbase@gbase8c_5_141 ~]$ pwd
/home/gbase
[gbase@gbase8c_5_141 ~]$ cd gbase_package/
[gbase@gbase8c_5_141 gbase_package]$ ll
总用量 262300
-rw-r--r-- 1 root root 268594324 3月 21 22:18 GBase8cV5_S3.0.0B76_centos7.8_x86_64.tar.gz
[gbase@gbase8c_5_141 gbase_package]$ sudo chown -R gbase:gbase GBase8cV5_S3.0.0B76_centos7.8_x86_64.tar.gz
[gbase@gbase8c_5_141 gbase_package]$ ll
总用量 262300
-rw-r--r-- 1 gbase gbase 268594324 3月 21 22:18 GBase8cV5_S3.0.0B76_centos7.8_x86_64.tar.gz
[gbase@gbase8c_5_141 gbase_package]$
# Extract in two stages
[gbase@gbase8c_5_141 gbase_package]$ tar -xvf GBase8cV5_S3.0.0B76_centos7.8_x86_64.tar.gz
GBase8cV5_S3.0.0B76_CentOS_x86_64_om.sha256
GBase8cV5_S3.0.0B76_CentOS_x86_64_om.tar.gz
GBase8cV5_S3.0.0B76_CentOS_x86_64_pgpool.tar.gz
GBase8cV5_S3.0.0B76_CentOS_x86_64.sha256
GBase8cV5_S3.0.0B76_CentOS_x86_64.tar.bz2
[gbase@gbase8c_5_141 gbase_package]$ tar -xvf GBase8cV5_S3.0.0B76_CentOS_x86_64_om.tar.gz
./dependency/
./dependency/etcd-3.2.28-1.el7_8.x86_64.rpm
./dependency/etcd-server_3.2.17+dfsg-1_amd64.deb
./dependency/pipexec_2.5.5-1_amd64.deb
./dependency/lib64/
./dependency/lib64/libcrypto.so.1.1.1e
./dependency/lib64/libcrypto.so.1.1
./dependency/lib64/libssl.so.1.1.1e
./dependency/lib64/libssl.so.1.1
./dependency/lib64/libcrypto.so.10
./dependency/lib64/libcrypto.so.1.0.2k
./dependency/lib64/libffi.so.6
./dependency/lib64/libffi.so.6.0.1
./dependency/lib64/libncurses.so.5
./dependency/lib64/libncurses.so.5.9
./dependency/lib64/libreadline.so.6
./dependency/lib64/libreadline.so.6.2
./dependency/lib64/libssl.so.10
./dependency/lib64/libssl.so.1.0.2k
./dependency/lib64/libtinfo.so.5
./dependency/lib64/libtinfo.so.5.9
……
# List the directory
[gbase@gbase8c_5_141 gbase_package]$ ll
总用量 526104
drwxrwxr-x 5 gbase gbase 165 2月 27 16:48 dependency
-rw-r--r-- 1 gbase gbase 268594324 3月 21 22:18 GBase8cV5_S3.0.0B76_centos7.8_x86_64.tar.gz
-rw-rw-r-- 1 gbase gbase 65 2月 27 16:48 GBase8cV5_S3.0.0B76_CentOS_x86_64_om.sha256
-rw-rw-r-- 1 gbase gbase 103802128 2月 27 16:48 GBase8cV5_S3.0.0B76_CentOS_x86_64_om.tar.gz
-rw-rw-r-- 1 gbase gbase 1035780 2月 27 16:48 GBase8cV5_S3.0.0B76_CentOS_x86_64_pgpool.tar.gz
-rw-rw-r-- 1 gbase gbase 65 2月 27 16:48 GBase8cV5_S3.0.0B76_CentOS_x86_64.sha256
-rw-rw-r-- 1 gbase gbase 165255046 2月 27 16:48 GBase8cV5_S3.0.0B76_CentOS_x86_64.tar.bz2
-rw-rw-r-- 1 gbase gbase 2570 2月 27 16:48 gbase.yml
drwxrwxr-x 11 gbase gbase 4096 2月 27 16:48 gha
-rw-rw-r-- 1 gbase gbase 188 2月 27 16:48 gha_ctl.ini
drwxrwxr-x 2 gbase gbase 96 2月 27 16:48 lib
-rw-rw-r-- 1 gbase gbase 729 2月 27 16:48 package_info.json
drwxr-xr-x 4 gbase gbase 28 3月 16 2021 python3.8
drwxrwxr-x 10 gbase gbase 4096 2月 27 16:48 script
drwxrwxr-x 2 gbase gbase 330 2月 27 16:48 simpleInstall
-rw-rw-r-- 1 gbase gbase 118 2月 27 16:48 ubuntu_version.json
drwx------ 6 gbase gbase 87 7月 2 2022 venv
-rw-rw-r-- 1 gbase gbase 36 2月 27 16:48 version.cfg
[gbase@gbase8c_5_141 gbase_package]$
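Before editing the configuration it is worth confirming that the sub-packages extracted cleanly. The .sha256 files appear to hold just the bare hash (they are 65 bytes each), so a simple visual comparison is enough; a sketch for the om package:
# The two printed hashes should be identical
cd /home/gbase/gbase_package
sha256sum GBase8cV5_S3.0.0B76_CentOS_x86_64_om.tar.gz
cat GBase8cV5_S3.0.0B76_CentOS_x86_64_om.sha256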
# Edit the configuration file; it corresponds one-to-one to the planning table in the Cluster Planning section
[gbase@gbase8c_5_141 gbase_package]$ vim /home/gbase/gbase_package/gbase.yml
gha_server:
- gha_server1:
host: 192.168.254.141
port: 20001
dcs:
- host: 192.168.254.142
port: 2379
- host: 192.168.254.143
port: 2379
- host: 192.168.254.141
port: 2379
gtm:
- gtm1:
host: 192.168.254.141
agent_host: 192.168.254.141
role: primary
port: 6666
agent_port: 8001
work_dir: /home/gbase/data/gtm/gtm1
- gtm2:
host: 192.168.254.142
agent_host: 192.168.254.142
role: standby
port: 6666
agent_port: 8002
work_dir: /home/gbase/data/gtm/gtm2
coordinator:
- cn1:
host: 192.168.254.142
agent_host: 192.168.254.142
role: primary
port: 5432
agent_port: 8003
work_dir: /home/gbase/data/coord/cn1
- cn2:
host: 192.168.254.143
agent_host: 192.168.254.143
role: primary
port: 5432
agent_port: 8004
work_dir: /home/gbase/data/coord/cn2
datanode:
- dn1:
- dn1_1:
host: 192.168.254.142
agent_host: 192.168.254.142
role: primary
port: 15432
agent_port: 8005
work_dir: /home/gbase/data/dn1/dn1_1
- dn2:
- dn2_1:
host: 192.168.254.143
agent_host: 192.168.254.143
role: primary
port: 20010
agent_port: 8007
work_dir: /home/gbase/data/dn2/dn2_1
# numa:
# cpu_node_bind: 0,1
# mem_node_bind: 0,1
- dn2_2:
host: 192.168.254.141
agent_host: 192.168.254.141
role: standby
port: 20010
agent_port: 8008
work_dir: /home/gbase/data/dn2/dn2_2
# numa:
# cpu_node_bind: 2
# mem_node_bind: 2
env:
# cluster_type allowed values: multiple-nodes, single-inst, default is multiple-nodes
cluster_type: multiple-nodes
pkg_path: /home/gbase/gbase_package
prefix: /home/gbase/gbase_db
version: V5_S3.0.0B76
user: gbase
port: 22
# constant:
# virtual_ip: 100.0.1.254/24
Notes on the yml Configuration File
- host: the IP address that data-plane nodes (CN, DN) connect to
- port: the connection port of the cluster node
- agent_host: the IP address that the control plane connects to
- role: the node role; a required parameter for gtm, cn, and dn nodes
- agent_port: the high-availability agent port
- work_dir: the directory where the node stores its data
- cluster_type: the cluster type; for a distributed cluster the value is multiple-nodes
- pkg_path: the installation package directory; its owner must be gbase
- prefix: the runtime directory; its owner must be gbase
- version: the package version; only the trailing digits need to be adjusted to match your package
# Run the installation
[gbase@gbase8c_5_141 gbase_package]$ cd script/
[gbase@gbase8c_5_141 script]$ ./gha_ctl install -c gbase -p /home/gbase/gbase_package/
{
"ret":0,
"msg":"Success"
}
[gbase@gbase8c_5_141 script]$
# Check node status
/home/gbase/gbase_package/script/gha_ctl monitor -l http://192.168.254.141:2379
{
"cluster": "gbase",
"version": "V5_S3.0.0B76",
"server": [
{
"name": "gha_server1",
"host": "192.168.254.141",
"port": "20001",
"state": "running",
"isLeader": true
}
],
"gtm": [
{
"name": "gtm1",
"host": "192.168.254.141",
"port": "6666",
"workDir": "/home/gbase/data/gtm/gtm1",
"agentPort": "8001",
"state": "running",
"role": "primary",
"agentHost": "192.168.254.141"
},
{
"name": "gtm2",
"host": "192.168.254.142",
"port": "6666",
"workDir": "/home/gbase/data/gtm/gtm2",
"agentPort": "8002",
"state": "running",
"role": "standby",
"agentHost": "192.168.254.142"
}
],
"coordinator": [
{
"name": "cn1",
"host": "192.168.254.142",
"port": "5432",
"workDir": "/home/gbase/data/coord/cn1",
"agentPort": "8003",
"state": "running",
"role": "primary",
"agentHost": "192.168.254.142",
"central": true
},
{
"name": "cn2",
"host": "192.168.254.143",
"port": "5432",
"workDir": "/home/gbase/data/coord/cn2",
"agentPort": "8004",
"state": "running",
"role": "primary",
"agentHost": "192.168.254.143"
}
],
"datanode": {
"dn1": [
{
"name": "dn1_1",
"host": "192.168.254.142",
"port": "15432",
"workDir": "/home/gbase/data/dn1/dn1_1",
"agentPort": "8005",
"state": "running",
"role": "primary",
"agentHost": "192.168.254.142"
}
],
"dn2": [
{
"name": "dn2_1",
"host": "192.168.254.143",
"port": "20010",
"workDir": "/home/gbase/data/dn2/dn2_1",
"agentPort": "8007",
"state": "running",
"role": "primary",
"agentHost": "192.168.254.143"
},
{
"name": "dn2_2",
"host": "192.168.254.141",
"port": "20010",
"workDir": "/home/gbase/data/dn2/dn2_2",
"agentPort": "8008",
"state": "running",
"role": "standby",
"agentHost": "192.168.254.141"
}
]
},
"dcs": {
"clusterState": "healthy",
"members": [
{
"url": "http://192.168.254.143:2379",
"id": "60450489fa76e7d3",
"name": "node_1",
"isLeader": false,
"state": "healthy"
},
{
"url": "http://192.168.254.142:2379",
"id": "f223e8373317fc23",
"name": "node_0",
"isLeader": true,
"state": "healthy"
},
{
"url": "http://192.168.254.141:2379",
"id": "faded6717acf1144",
"name": "node_2",
"isLeader": false,
"state": "healthy"
}
]
}
}
# Manual monitoring, stop and start
# Monitor: run the command against the DCS endpoint. (If gha_ctl is reported as not found, re-enter the gbase user so the environment configured by the installer is loaded, as the transcript below shows.)
[gbase@gbase8c_5_141 script]$ gha_ctl monitor all -H -l http://192.168.254.141:2379
bash: gha_ctl: 未找到命令...
[gbase@gbase8c_5_141 script]$ su gbase
密码:
[gbase@gbase8c_5_141 script]$ gha_ctl monitor all -H -l http://192.168.254.141:2379
+----+-------------+-----------------+-------+---------+--------+
| No | name | host | port | state | leader |
+----+-------------+-----------------+-------+---------+--------+
| 0 | gha_server1 | 192.168.254.141 | 20001 | running | True |
+----+-------------+-----------------+-------+---------+--------+
+----+------+-----------------+------+---------------------------+---------+---------+
| No | name | host | port | work_dir | state | role |
+----+------+-----------------+------+---------------------------+---------+---------+
| 0 | gtm1 | 192.168.254.141 | 6666 | /home/gbase/data/gtm/gtm1 | running | primary |
| 1 | gtm2 | 192.168.254.142 | 6666 | /home/gbase/data/gtm/gtm2 | running | standby |
+----+------+-----------------+------+---------------------------+---------+---------+
+----+------+-----------------+------+----------------------------+---------+---------+
| No | name | host | port | work_dir | state | role |
+----+------+-----------------+------+----------------------------+---------+---------+
| 0 | cn1 | 192.168.254.142 | 5432 | /home/gbase/data/coord/cn1 | running | primary |
| 1 | cn2 | 192.168.254.143 | 5432 | /home/gbase/data/coord/cn2 | running | primary |
+----+------+-----------------+------+----------------------------+---------+---------+
+----+-------+-------+-----------------+-------+----------------------------+---------+---------+
| No | group | name | host | port | work_dir | state | role |
+----+-------+-------+-----------------+-------+----------------------------+---------+---------+
| 0 | dn1 | dn1_1 | 192.168.254.142 | 15432 | /home/gbase/data/dn1/dn1_1 | running | primary |
| 1 | dn2 | dn2_1 | 192.168.254.143 | 20010 | /home/gbase/data/dn2/dn2_1 | running | primary |
| 2 | dn2 | dn2_2 | 192.168.254.141 | 20010 | /home/gbase/data/dn2/dn2_2 | running | standby |
+----+-------+-------+-----------------+-------+----------------------------+---------+---------+
+----+-----------------------------+--------+---------+----------+
| No | url | name | state | isLeader |
+----+-----------------------------+--------+---------+----------+
| 0 | http://192.168.254.143:2379 | node_1 | healthy | False |
| 1 | http://192.168.254.142:2379 | node_0 | healthy | True |
| 2 | http://192.168.254.141:2379 | node_2 | healthy | False |
+----+-----------------------------+--------+---------+----------+
[gbase@gbase8c_5_141 script]$
# Stop: run the command against the DCS endpoint
[gbase@gbase8c_5_141 script]$ gha_ctl stop all -l http://192.168.254.141:2379
{
"ret":0,
"msg":"Success"
}
[gbase@gbase8c_5_141 script]$ gha_ctl monitor all -H -l http://192.168.254.141:2379
+----+-------------+-----------------+-------+---------+--------+
| No | name | host | port | state | leader |
+----+-------------+-----------------+-------+---------+--------+
| 0 | gha_server1 | 192.168.254.141 | 20001 | stopped | False |
+----+-------------+-----------------+-------+---------+--------+
+----+------+-----------------+------+---------------------------+---------+---------+
| No | name | host | port | work_dir | state | role |
+----+------+-----------------+------+---------------------------+---------+---------+
| 0 | gtm1 | 192.168.254.141 | 6666 | /home/gbase/data/gtm/gtm1 | stopped | primary |
| 1 | gtm2 | 192.168.254.142 | 6666 | /home/gbase/data/gtm/gtm2 | stopped | standby |
+----+------+-----------------+------+---------------------------+---------+---------+
+----+------+-----------------+------+----------------------------+---------+---------+
| No | name | host | port | work_dir | state | role |
+----+------+-----------------+------+----------------------------+---------+---------+
| 0 | cn1 | 192.168.254.142 | 5432 | /home/gbase/data/coord/cn1 | stopped | primary |
| 1 | cn2 | 192.168.254.143 | 5432 | /home/gbase/data/coord/cn2 | stopped | primary |
+----+------+-----------------+------+----------------------------+---------+---------+
+----+-------+-------+-----------------+-------+----------------------------+---------+---------+
| No | group | name | host | port | work_dir | state | role |
+----+-------+-------+-----------------+-------+----------------------------+---------+---------+
| 0 | dn1 | dn1_1 | 192.168.254.142 | 15432 | /home/gbase/data/dn1/dn1_1 | stopped | primary |
| 1 | dn2 | dn2_1 | 192.168.254.143 | 20010 | /home/gbase/data/dn2/dn2_1 | stopped | primary |
| 2 | dn2 | dn2_2 | 192.168.254.141 | 20010 | /home/gbase/data/dn2/dn2_2 | stopped | standby |
+----+-------+-------+-----------------+-------+----------------------------+---------+---------+
+----+-----------------------------+--------+---------+----------+
| No | url | name | state | isLeader |
+----+-----------------------------+--------+---------+----------+
| 0 | http://192.168.254.143:2379 | node_1 | healthy | False |
| 1 | http://192.168.254.142:2379 | node_0 | healthy | True |
| 2 | http://192.168.254.141:2379 | node_2 | healthy | False |
+----+-----------------------------+--------+---------+----------+
[gbase@gbase8c_5_141 script]$
# Start: run the command against the DCS endpoint
[gbase@gbase8c_5_141 script]$ gha_ctl start all -l http://192.168.254.141:2379
{
"ret":0,
"msg":"Success"
}
[gbase@gbase8c_5_141 script]$ gha_ctl monitor all -H -l http://192.168.254.141:2379
+----+-------------+-----------------+-------+---------+--------+
| No | name | host | port | state | leader |
+----+-------------+-----------------+-------+---------+--------+
| 0 | gha_server1 | 192.168.254.141 | 20001 | running | True |
+----+-------------+-----------------+-------+---------+--------+
+----+------+-----------------+------+---------------------------+---------+---------+
| No | name | host | port | work_dir | state | role |
+----+------+-----------------+------+---------------------------+---------+---------+
| 0 | gtm1 | 192.168.254.141 | 6666 | /home/gbase/data/gtm/gtm1 | running | primary |
| 1 | gtm2 | 192.168.254.142 | 6666 | /home/gbase/data/gtm/gtm2 | running | standby |
+----+------+-----------------+------+---------------------------+---------+---------+
+----+------+-----------------+------+----------------------------+---------+---------+
| No | name | host | port | work_dir | state | role |
+----+------+-----------------+------+----------------------------+---------+---------+
| 0 | cn1 | 192.168.254.142 | 5432 | /home/gbase/data/coord/cn1 | running | primary |
| 1 | cn2 | 192.168.254.143 | 5432 | /home/gbase/data/coord/cn2 | running | primary |
+----+------+-----------------+------+----------------------------+---------+---------+
+----+-------+-------+-----------------+-------+----------------------------+---------+---------+
| No | group | name | host | port | work_dir | state | role |
+----+-------+-------+-----------------+-------+----------------------------+---------+---------+
| 0 | dn1 | dn1_1 | 192.168.254.142 | 15432 | /home/gbase/data/dn1/dn1_1 | running | primary |
| 1 | dn2 | dn2_1 | 192.168.254.143 | 20010 | /home/gbase/data/dn2/dn2_1 | running | primary |
| 2 | dn2 | dn2_2 | 192.168.254.141 | 20010 | /home/gbase/data/dn2/dn2_2 | running | standby |
+----+-------+-------+-----------------+-------+----------------------------+---------+---------+
+----+-----------------------------+--------+---------+----------+
| No | url | name | state | isLeader |
+----+-----------------------------+--------+---------+----------+
| 0 | http://192.168.254.143:2379 | node_1 | healthy | False |
| 1 | http://192.168.254.142:2379 | node_0 | healthy | True |
| 2 | http://192.168.254.141:2379 | node_2 | healthy | False |
+----+-----------------------------+--------+---------+----------+
[gbase@gbase8c_5_141 script]$
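With every component back in the running state, a final smoke test is to open a SQL session against one of the coordinators. GBase 8c is openGauss-derived and ships a gsql command-line client; the sketch below assumes gsql is on the gbase user's PATH after installation and that local connections by the gbase OS user are trusted, so it is run on node2, where cn1 listens on port 5432:
# On node2, connect locally to coordinator cn1 and run a trivial query
gsql -d postgres -p 5432 -c "SELECT 1;"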