1. K8s High-Availability Architecture Analysis
2. Basic Environment Configuration
The kubeadm installation method has changed very little since version 1.14, so this document can also be used to install the latest k8s cluster; CentOS 7.x is used here.
K8s official documentation: https://kubernetes.io/docs/setup/
Latest high-availability installation guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
Table 1-1 High-availability Kubernetes cluster plan
Role | Hostname | Specs | IP address | Installed software |
---|---|---|---|---|
master1 | k8s-master01.example.local | 2C4G | 172.31.3.101 | chrony-client、docker、kubeadm 、kubelet、kubectl |
master2 | k8s-master02.example.local | 2C4G | 172.31.3.102 | chrony-client、docker、kubeadm 、kubelet、kubectl |
master3 | k8s-master03.example.local | 2C4G | 172.31.3.103 | chrony-client、docker、kubeadm 、kubelet、kubectl |
ha1 | k8s-ha01.example.local | 2C2G | 172.31.3.104 172.31.3.188(vip) | chrony-server、haproxy、keepalived |
ha2 | k8s-ha02.example.local | 2C2G | 172.31.3.105 | chrony-server、haproxy、keepalived |
harbor1 | k8s-harbor01.example.local | 2C2G | 172.31.3.106 | chrony-client、docker、docker-compose、harbor |
harbor2 | k8s-harbor02.example.local | 2C2G | 172.31.3.107 | chrony-client、docker、docker-compose、harbor |
node1 | k8s-node01.example.local | 2C4G | 172.31.3.108 | chrony-client、docker、kubeadm 、kubelet |
node2 | k8s-node02.example.local | 2C4G | 172.31.3.109 | chrony-client、docker、kubeadm 、kubelet |
node3 | k8s-node03.example.local | 2C4G | 172.31.3.110 | chrony-client、docker、kubeadm 、kubelet |
Software versions and Pod/Service CIDR plan:
Item | Value |
---|---|
Supported OS versions | CentOS 7.9/Stream 8, Rocky 8, Ubuntu 18.04/20.04 |
Docker version | 19.03.15 |
kubeadm version | 1.20.14 |
Pod CIDR | 192.168.0.0/12 |
Service CIDR | 10.96.0.0/12 |
Note:
Three network ranges are involved when installing the cluster:
Host network: the network of the servers where k8s is installed.
Pod CIDR: the network used by k8s Pods, i.e. the container IPs.
Service CIDR: the network used by k8s Services, which handle communication with containers inside the cluster.
The Service CIDR will be set to 10.96.0.0/12.
The Pod CIDR will be set to 192.168.0.0/12.
The host network might be something like 172.31.0.0/21.
These three ranges must not overlap in any way.
For example, if the host IPs are 10.105.0.x,
then the Service CIDR cannot be 10.96.0.0/12, because the usable IPs of 10.96.0.0/12 are:
10.96.0.1 ~ 10.111.255.255
10.105.x.x falls inside that range, so the networks overlap and the Service CIDR must be changed,
for example to 192.168.0.0/16 (note that if the Service CIDR starts with 192.168, the prefix length should be 16 rather than 12, because a 192.168.0.0/12 network actually starts at 192.160.0.1, not 192.168.0.1).
The same reasoning applies to the other ranges; none of them may overlap. Ranges can be calculated with http://tools.jb51.net/aideddesign/ip_net_calc/:
A simple rule of thumb is to make the first octets differ. If your hosts start with 192, your Service CIDR can be 10.96.0.0/12;
if your hosts start with 10, change the Service CIDR to 192.168.0.0/16;
if your hosts start with 172, change the Pod CIDR to 192.168.0.0/12.
Pairing one 10.x range, one 172.x range and one 192.x range means the first octets all differ, which rules out any overlap and saves you the calculation.
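If you prefer to check on the command line rather than with the online calculator, here is a small bash sketch (written for this document, not part of the original procedure) that tests whether a host IP falls inside a candidate Pod/Service CIDR:

#!/bin/bash
# ip_in_cidr HOST_IP CIDR -- IPv4 only, prints whether the IP is inside the CIDR
ip2int(){ local IFS=.; set -- $1; echo $(( ($1<<24)+($2<<16)+($3<<8)+$4 )); }
ip_in_cidr(){
    local ip net prefix mask
    ip=$(ip2int "$1")
    net=$(ip2int "${2%/*}"); prefix=${2#*/}
    mask=$(( (0xFFFFFFFF << (32-prefix)) & 0xFFFFFFFF ))
    [ $((ip & mask)) -eq $((net & mask)) ] && echo "$1 is inside $2 (overlap)" || echo "$1 is outside $2 (no overlap)"
}
ip_in_cidr 10.105.0.1 10.96.0.0/12      # overlap -> pick another Service CIDR
ip_in_cidr 172.31.3.101 192.168.0.0/12  # no overlap -> 172.31.x hosts are fine with this Pod CIDR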
The VIP (virtual IP) must not collide with any IP already used on the company network; ping it first, and it is usable only if it does NOT respond. The VIP must be in the same LAN as the hosts!
On a public cloud the VIP is the cloud load balancer's address, e.g. the address of an internal SLB on Alibaba Cloud or an internal ELB on Tencent Cloud.
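A quick pre-check before deploying keepalived (an illustrative example using the VIP planned above) is to confirm the address is currently unused:

ping -c 2 -W 1 172.31.3.188 && echo "172.31.3.188 is already in use, pick another VIP" || echo "172.31.3.188 appears to be free"
arping -c 2 -I eth0 172.31.3.188    # optional ARP-level check; the interface name may differ on your hosts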
Set the hostname on each node:
hostnamectl set-hostname k8s-master01.example.local
hostnamectl set-hostname k8s-master02.example.local
hostnamectl set-hostname k8s-master03.example.local
hostnamectl set-hostname k8s-ha01.example.local
hostnamectl set-hostname k8s-ha02.example.local
hostnamectl set-hostname k8s-harbor01.example.local
hostnamectl set-hostname k8s-harbor02.example.local
hostnamectl set-hostname k8s-node01.example.local
hostnamectl set-hostname k8s-node02.example.local
hostnamectl set-hostname k8s-node03.example.local
Configure the IP address on each node in the following format:
#CentOS
[root@k8s-master01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
NAME=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.31.3.101
PREFIX=21
GATEWAY=172.31.0.2
DNS1=223.5.5.5
DNS2=180.76.76.76
#Ubuntu
root@k8s-master01:~# cat /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [172.31.3.101/21]
      gateway4: 172.31.0.2
      nameservers:
        addresses: [223.5.5.5, 180.76.76.76]
Configure /etc/hosts on all nodes by appending the following entries:
cat >> /etc/hosts <<EOF
172.31.3.101 k8s-master01.example.local k8s-master01
172.31.3.102 k8s-master02.example.local k8s-master02
172.31.3.103 k8s-master03.example.local k8s-master03
172.31.3.104 k8s-ha01.example.local k8s-ha01
172.31.3.105 k8s-ha02.example.local k8s-ha02
172.31.3.106 k8s-harbor01.example.local k8s-harbor01
172.31.3.107 k8s-harbor02.example.local k8s-harbor02
172.31.3.108 k8s-node01.example.local k8s-node01
172.31.3.109 k8s-node02.example.local k8s-node02
172.31.3.110 k8s-node03.example.local k8s-node03
172.31.3.188 k8s-lb
172.31.3.188 harbor.raymonds.cc
EOF
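A quick sanity check (illustrative, not part of the original text): after updating /etc/hosts, make sure every short name resolves and answers:

for i in k8s-master0{1..3} k8s-ha0{1,2} k8s-harbor0{1,2} k8s-node0{1..3}; do
    ping -c 1 -W 1 $i &> /dev/null && echo "$i ok" || echo "$i unreachable"
done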
Configure the yum repos on all CentOS 7 nodes as follows:
[root@k8s-master01 ~]# rm -f /etc/yum.repos.d/*.repo
[root@k8s-master01 ~]# cat > /etc/yum.repos.d/base.repo <<'EOF'
[base]
name=base
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever
[extras]
name=extras
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever
[updates]
name=updates
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever
[centosplus]
name=centosplus
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever
[epel]
name=epel
baseurl=https://mirrors.cloud.tencent.com/epel/$releasever/$basearch/
gpgcheck=1
gpgkey=https://mirrors.cloud.tencent.com/epel/RPM-GPG-KEY-EPEL-$releasever
EOF
Configure the yum repos on all Rocky 8 nodes as follows:
[root@k8s-master01 ~]# cat /etc/yum.repos.d/base.repo
[BaseOS]
name=BaseOS
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/BaseOS/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
[AppStream]
name=AppStream
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/AppStream/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
[extras]
name=extras
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/extras/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
enabled=1
[plus]
name=plus
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/plus/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
[PowerTools]
name=PowerTools
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/PowerTools/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
Configure the yum repos on all CentOS Stream 8 nodes as follows:
[root@k8s-master01 ~]# cat /etc/yum.repos.d/base.repo
[BaseOS]
name=BaseOS
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/BaseOS/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
[AppStream]
name=AppStream
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/AppStream/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
[extras]
name=extras
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/extras/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
[centosplus]
name=centosplus
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/centosplus/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
[PowerTools]
name=PowerTools
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/PowerTools/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
Configure the apt sources on all Ubuntu nodes as follows:
root@k8s-master01:~# cat > /etc/apt/sources.list <<EOF
deb http://mirrors.cloud.tencent.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-backports main restricted universe multiverse
EOF
Install the required tools:
#CentOS
yum -y install vim tree lrzsz wget jq psmisc net-tools telnet yum-utils device-mapper-persistent-data lvm2 git
#Rocky additionally needs rsync on top of the tools above
yum -y install rsync
#Ubuntu
apt -y install tree lrzsz jq
Disable the firewall, SELinux and swap on all nodes. Configure the servers as follows:
#CentOS
systemctl disable --now firewalld
#CentOS 7
systemctl disable --now NetworkManager
#Ubuntu
systemctl disable --now ufw
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
Disable the swap partition:
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
#On Ubuntu 20.04, run the following commands instead
sed -ri 's/.*swap.*/#&/' /etc/fstab
SD_NAME=`lsblk|awk -F"[ └─]" '/SWAP/{printf $3}'`
systemctl mask dev-${SD_NAME}.swap
swapoff -a
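To confirm the changes took effect, a quick illustrative check (not in the original text):

getenforce                        # should print Permissive or Disabled
systemctl is-active firewalld     # should print inactive (CentOS)
swapon --show                     # should print nothing
free -h | grep -i swap            # swap total should be 0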
Install the chrony server on ha01 and ha02:
[root@k8s-ha01 ~]# cat install_chrony_server.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2021-11-22
#FileName: install_chrony_server.sh
#URL: raymond.blog.csdn.net
#Description: install_chrony_server for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
os(){
OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}
install_chrony(){
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
yum -y install chrony &> /dev/null
sed -i -e '/^pool.*/d' -e '/^server.*/d' -e '/^# Please consider .*/a\server ntp.aliyun.com iburst\nserver time1.cloud.tencent.com iburst\nserver ntp.tuna.tsinghua.edu.cn iburst' -e 's@^#allow.*@allow 0.0.0.0/0@' -e 's@^#local.*@local stratum 10@' /etc/chrony.conf
systemctl enable --now chronyd &> /dev/null
systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
${COLOR}"chrony安装完成"${END}
else
apt -y install chrony &> /dev/null
sed -i -e '/^pool.*/d' -e '/^# See http:.*/a\server ntp.aliyun.com iburst\nserver time1.cloud.tencent.com iburst\nserver ntp.tuna.tsinghua.edu.cn iburst' /etc/chrony/chrony.conf
echo "allow 0.0.0.0/0" >> /etc/chrony/chrony.conf
echo "local stratum 10" >> /etc/chrony/chrony.conf
systemctl enable --now chronyd &> /dev/null
systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
${COLOR}"chrony安装完成"${END}
fi
}
main(){
os
install_chrony
}
main
[root@k8s-ha01 ~]# bash install_chrony_server.sh
chrony安装完成
[root@k8s-ha02 ~]# bash install_chrony_server.sh
chrony安装完成
[root@k8s-ha01 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88 2 6 17 39 -1507us[-8009us] +/- 37ms
^- 139.199.215.251 2 6 17 39 +10ms[ +10ms] +/- 48ms
^? 101.6.6.172 0 7 0 - +0ns[ +0ns] +/- 0ns
[root@k8s-ha02 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88 2 6 17 40 +90us[-1017ms] +/- 32ms
^+ 139.199.215.251 2 6 33 37 +13ms[ +13ms] +/- 25ms
^? 101.6.6.172 0 7 0 - +0ns[ +0ns] +/- 0ns
Install the chrony client on the master, node and harbor machines:
[root@k8s-master01 ~]# cat install_chrony_client.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2021-11-22
#FileName: install_chrony_client.sh
#URL: raymond.blog.csdn.net
#Description: install_chrony_client for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
SERVER1=172.31.3.104
SERVER2=172.31.3.105
os(){
OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}
install_chrony(){
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
yum -y install chrony &> /dev/null
sed -i -e '/^pool.*/d' -e '/^server.*/d' -e '/^# Please consider .*/a\server '${SERVER1}' iburst\nserver '${SERVER2}' iburst' /etc/chrony.conf
systemctl enable --now chronyd &> /dev/null
systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
${COLOR}"chrony安装完成"${END}
else
apt -y install chrony &> /dev/null
sed -i -e '/^pool.*/d' -e '/^# See http:.*/a\server '${SERVER1}' iburst\nserver '${SERVER2}' iburst' /etc/chrony/chrony.conf
systemctl enable --now chronyd &> /dev/null
systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
systemctl restart chronyd
${COLOR}"chrony安装完成"${END}
fi
}
main(){
os
install_chrony
}
main
[root@k8s-master01 ~]# for i in k8s-master02 k8s-master03 k8s-harbor01 k8s-harbor02 k8s-node01 k8s-node02 k8s-node03;do scp install_chrony_client.sh $i:/root/ ; done
[root@k8s-master01 ~]# bash install_chrony_client.sh
chrony安装完成
[root@k8s-master01 ~]# chronyc sources -nv
210 Number of sources = 2
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^+ k8s-ha01 3 6 17 8 +84us[ +74us] +/- 55ms
^* k8s-ha02 3 6 17 8 -82us[ -91us] +/- 45ms
Set the time zone on all nodes and configure time synchronization as follows:
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
Configure resource limits on all nodes:
ulimit -SHn 65535
cat >>/etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
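The limits.conf entries only apply to new sessions; a quick illustrative check after logging in again:

ulimit -n    # soft nofile, expect 65536 in a new session (65535 in the current shell from ulimit -SHn)
ulimit -u    # soft nproc, expect 65535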
Master01 needs password-free SSH access to the other nodes; the configuration files and certificates generated during installation are all produced on Master01, and cluster administration is also done from Master01. On Alibaba Cloud or AWS a separate kubectl server is required. Configure the SSH keys as follows:
[root@k8s-master01 ~]# cat ssh_key_push.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2021-11-19
#FileName: ssh_key_push.sh
#URL: raymond.blog.csdn.net
#Description: ssh_key_push for CentOS 7/8 & Ubuntu 18.04/24.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
export SSHPASS=123456
HOSTS="
172.31.3.101
172.31.3.102
172.31.3.103
172.31.3.104
172.31.3.105
172.31.3.106
172.31.3.107
172.31.3.108
172.31.3.109
172.31.3.110"
os(){
OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}
ssh_key_push(){
rm -f ~/.ssh/id_rsa*
ssh-keygen -f /root/.ssh/id_rsa -P '' &> /dev/null
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
rpm -q sshpass &> /dev/null || { ${COLOR}"安装sshpass软件包"${END};yum -y install sshpass &> /dev/null; }
else
dpkg -S sshpass &> /dev/null || { ${COLOR}"安装sshpass软件包"${END};apt -y install sshpass &> /dev/null; }
fi
for i in $HOSTS;do
{
sshpass -e ssh-copy-id -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa.pub $i &> /dev/null
[ $? -eq 0 ] && echo $i is finished || echo $i is false
}&
done
wait
}
main(){
os
ssh_key_push
}
main
[root@k8s-master01 ~]# bash ssh_key_push.sh
安装sshpass软件包
172.31.3.105 is finished
172.31.3.108 is finished
172.31.3.109 is finished
172.31.3.106 is finished
172.31.3.101 is finished
172.31.3.110 is finished
172.31.3.104 is finished
172.31.3.107 is finished
172.31.3.102 is finished
172.31.3.103 is finished
Upgrade the system on all nodes and reboot. This upgrade does not touch the kernel; the kernel is upgraded separately in the next section:
yum update -y --exclude=kernel* && reboot #CentOS 7 needs this upgrade; on CentOS 8 upgrade as needed
[root@k8s-master01 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
3. Kernel Configuration
CentOS 7 needs its kernel upgraded to 4.18+; here it is upgraded to 4.19.
Download the kernel packages on the master01 node:
[root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
[root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
Copy the packages from master01 to the other nodes:
[root@k8s-master01 ~]# for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
Install the kernel on all nodes:
cd /root && yum localinstall -y kernel-ml*
Change the default boot kernel on all nodes:
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
Check that the default kernel is 4.19:
grubby --default-kernel
[root@k8s-master01 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
Reboot all nodes, then check that the running kernel is 4.19:
reboot
uname -a
[root@k8s-master01 ~]# uname -a
Linux k8s-master01 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Install ipvsadm on the master and node machines:
#CentOS
yum -y install ipvsadm ipset sysstat conntrack libseccomp
#Ubuntu
apt -y install ipvsadm ipset sysstat conntrack libseccomp-dev
Configure the ipvs modules on all nodes. On kernel 4.19+ the module nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.18 keep using nf_conntrack_ipv4:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack #on kernels below 4.18, change this line to nf_conntrack_ipv4
cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack #on kernels below 4.18, change this line to nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
Then run systemctl enable --now systemd-modules-load.service
Enable the kernel parameters required by a k8s cluster; configure them on all nodes:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
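A quick illustrative check that the parameters were loaded:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
# both should print "= 1"; if the bridge keys are reported as unknown, load br_netfilter first:
modprobe br_netfilter && sysctl --system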
Commonly tuned kernel parameters for Kubernetes, explained:
net.ipv4.ip_forward = 1 #0 means IP forwarding is disabled; 1 means it is enabled.
net.bridge.bridge-nf-call-iptables = 1 #Packets forwarded by a layer-2 bridge also pass through the iptables FORWARD rules; without this, L3 iptables rules sometimes cannot filter L2 frames.
net.bridge.bridge-nf-call-ip6tables = 1 #Whether IPv6 packets on a bridge are filtered by ip6tables.
fs.may_detach_mounts = 1 #Should be set to 1 when containers run on the system.
vm.overcommit_memory=1
#0: the kernel checks whether enough free memory is available; if so the allocation succeeds, otherwise it fails and an error is returned to the process.
#1: the kernel allows allocating all physical memory regardless of the current memory state.
#2: the kernel allows allocating more memory than the total of physical memory and swap.
vm.panic_on_oom=0
#OOM is short for out of memory, i.e. memory is exhausted and cannot be allocated. This parameter controls how the kernel reacts to OOM.
#0: on OOM, start the OOM killer.
#1: on OOM, the kernel may panic (reboot) or may start the OOM killer.
#2: on OOM, force a kernel panic (reboot).
fs.inotify.max_user_watches=89100 #Maximum number of inotify watches a single user can register (watches usually target directories, so this bounds how many directories one user can monitor at the same time).
fs.file-max=52706963 #Maximum number of file handles for all processes combined.
fs.nr_open=52706963 #Maximum number of file handles a single process can allocate.
net.netfilter.nf_conntrack_max=2310720 #Size of the connection-tracking table. It is recommended to derive it from RAM, CONNTRACK_MAX = RAMSIZE (in bytes) / 16384 / (ARCH / 32), and to keep nf_conntrack_max = 4 * nf_conntrack_buckets; the default is 262144.
net.ipv4.tcp_keepalive_time = 600 #Idle time before keepalive probes start, i.e. the normal keepalive period; the default is 7200s (2 hours).
net.ipv4.tcp_keepalive_probes = 3 #Number of keepalive probes sent without an acknowledgement after tcp_keepalive_time before the connection is dropped; the default is 9.
net.ipv4.tcp_keepalive_intvl =15 #Interval between keepalive probes; the default is 75s.
net.ipv4.tcp_max_tw_buckets = 36000 #Proxies such as Nginx should watch this value closely: it protects the system once ports are exhausted. Raising tcp_max_tw_buckets lowers the chance of that happening and buys time to react.
net.ipv4.tcp_tw_reuse = 1 #Only affects the client side; when enabled, the client can reuse TIME-WAIT sockets after 1s.
net.ipv4.tcp_max_orphans = 327680 #Maximum number of sockets not attached to any process that the system will handle; worth watching when large numbers of connections are created quickly.
net.ipv4.tcp_orphan_retries = 3
#Helps when many connections are stuck in FIN-WAIT-1:
#a FIN that has been sent may be lost and must be retransmitted; FIN retransmission uses exponential backoff (2s, 4s, ...), and tcp_orphan_retries limits the number of retries.
net.ipv4.tcp_syncookies = 1 #Switch for SYN cookies, which mitigate some SYN flood attacks; tcp_synack_retries and tcp_syn_retries define the SYN retry counts.
net.ipv4.tcp_max_syn_backlog = 16384 #Maximum queue length for incoming SYN packets; the default is 1024. Raising it clearly helps heavily loaded servers.
net.ipv4.ip_conntrack_max = 65536 #Upper limit on the number of tracked TCP connections; the default is 65536.
net.ipv4.tcp_max_syn_backlog = 16384 #Maximum number of clients whose SYN packets can be queued, i.e. the half-open connection limit.
net.ipv4.tcp_timestamps = 0 #With iptables NAT, machines behind the NAT could ping a domain but curl to it failed because net.ipv4.tcp_timestamps was set to 1 (timestamps enabled), so it is disabled here.
net.core.somaxconn = 16384 #Kernel parameter for the backlog limit of listening (listen) sockets. The backlog is the socket's pending-connection queue: requests that have not yet been handled or established sit in it, and the server drains the queue as it accepts them. If the server is slow and the queue fills up, new requests are rejected.
After configuring the kernel parameters on all nodes, reboot the servers and verify that the modules are still loaded after the reboot:
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
[root@k8s-master01 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp 16384 0
nf_nat 32768 1 ip_vs_ftp
ip_vs_sed 16384 0
ip_vs_nq 16384 0
ip_vs_fo 16384 0
ip_vs_sh 16384 0
ip_vs_dh 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs_wlc 16384 0
ip_vs_lc 16384 0
ip_vs 151552 24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack 143360 2 nf_nat,ip_vs
nf_defrag_ipv6 20480 1 nf_conntrack
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 4 nf_conntrack,nf_nat,xfs,ip_vs
4. Installing the High-Availability Components
(Note: if the cluster is not highly available, haproxy and keepalived do not need to be installed.)
On a public cloud, use the cloud provider's load balancer instead of haproxy and keepalived, for example Alibaba Cloud SLB or Tencent Cloud ELB, because most public clouds do not support keepalived. Also note that on Alibaba Cloud the kubectl client must not run on a master node: Alibaba Cloud's SLB has a loopback problem, i.e. servers behind the SLB cannot reach the SLB address themselves. Tencent Cloud has fixed this, so it is recommended here.
4.1 Installing haproxy
Install HAProxy on ha01 and ha02:
[root@k8s-ha01 ~]# cat install_haproxy.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2021-12-29
#FileName: install_haproxy.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
#lua下载地址:http://www.lua.org/ftp/lua-5.4.3.tar.gz
LUA_FILE=lua-5.4.3.tar.gz
#haproxy下载地址:https://www.haproxy.org/download/2.4/src/haproxy-2.4.10.tar.gz
HAPROXY_FILE=haproxy-2.4.10.tar.gz
HAPROXY_INSTALL_DIR=/apps/haproxy
STATS_AUTH_USER=admin
STATS_AUTH_PASSWORD=123456
VIP=172.31.3.188
MASTER01=172.31.3.101
MASTER02=172.31.3.102
MASTER03=172.31.3.103
HARBOR01=172.31.3.106
HARBOR02=172.31.3.107
os(){
OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}
check_file (){
cd ${SRC_DIR}
${COLOR}'检查Haproxy相关源码包'${END}
if [ ! -e ${LUA_FILE} ];then
${COLOR}"缺少${LUA_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
exit
elif [ ! -e ${HAPROXY_FILE} ];then
${COLOR}"缺少${HAPROXY_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
exit
else
${COLOR}"相关文件已准备好"${END}
fi
}
install_haproxy(){
[ -d ${HAPROXY_INSTALL_DIR} ] && { ${COLOR}"Haproxy已存在,安装失败"${END};exit; }
${COLOR}"开始安装Haproxy"${END}
${COLOR}"开始安装Haproxy依赖包"${END}
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
yum -y install gcc make gcc-c++ glibc glibc-devel pcre pcre-devel openssl openssl-devel systemd-devel libtermcap-devel ncurses-devel libevent-devel readline-devel &> /dev/null
else
apt update &> /dev/null;apt -y install gcc make openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev libreadline-dev libsystemd-dev &> /dev/null
fi
tar xf ${LUA_FILE}
LUA_DIR=`echo ${LUA_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
cd ${LUA_DIR}
make all test
cd ${SRC_DIR}
tar xf ${HAPROXY_FILE}
HAPROXY_DIR=`echo ${HAPROXY_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
cd ${HAPROXY_DIR}
make -j ${CPUS} ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 USE_LUA=1 LUA_INC=${SRC_DIR}/${LUA_DIR}/src/ LUA_LIB=${SRC_DIR}/${LUA_DIR}/src/ PREFIX=${HAPROXY_INSTALL_DIR}
make install PREFIX=${HAPROXY_INSTALL_DIR}
[ $? -eq 0 ] && $COLOR"Haproxy编译安装成功"$END || { $COLOR"Haproxy编译安装失败,退出!"$END;exit; }
cat > /lib/systemd/system/haproxy.service <<-EOF
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
[Install]
WantedBy=multi-user.target
EOF
[ -L /usr/sbin/haproxy ] || ln -s ../..${HAPROXY_INSTALL_DIR}/sbin/haproxy /usr/sbin/ &> /dev/null
[ -d /etc/haproxy ] || mkdir /etc/haproxy &> /dev/null
[ -d /var/lib/haproxy/ ] || mkdir -p /var/lib/haproxy/ &> /dev/null
cat > /etc/haproxy/haproxy.cfg <<-EOF
global
maxconn 100000
chroot ${HAPROXY_INSTALL_DIR}
stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
uid 99
gid 99
daemon
pidfile /var/lib/haproxy/haproxy.pid
log 127.0.0.1 local3 info
defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms
listen stats
mode http
bind 0.0.0.0:9999
stats enable
log global
stats uri /haproxy-status
stats auth ${STATS_AUTH_USER}:${STATS_AUTH_PASSWORD}
listen kubernetes-6443
bind ${VIP}:6443
mode tcp
log global
server ${MASTER01} ${MASTER01}:6443 check inter 3s fall 2 rise 5
server ${MASTER02} ${MASTER02}:6443 check inter 3s fall 2 rise 5
server ${MASTER03} ${MASTER03}:6443 check inter 3s fall 2 rise 5
listen harbor-80
bind ${VIP}:80
mode http
log global
balance source
server ${HARBOR01} ${HARBOR01}:80 check inter 3s fall 2 rise 5
server ${HARBOR02} ${HARBOR02}:80 check inter 3s fall 2 rise 5
EOF
cat >> /etc/sysctl.conf <<-EOF
net.ipv4.ip_nonlocal_bind = 1
EOF
sysctl -p &> /dev/null
echo "PATH=${HAPROXY_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/haproxy.sh
systemctl daemon-reload
systemctl enable --now haproxy &> /dev/null
systemctl is-active haproxy &> /dev/null && ${COLOR}"Haproxy 服务启动成功!"${END} || { ${COLOR}"Haproxy 启动失败,退出!"${END} ; exit; }
${COLOR}"Haproxy安装完成"${END}
}
main(){
os
check_file
install_haproxy
}
main
[root@k8s-ha01 ~]# bash install_haproxy.sh
[root@k8s-ha02 ~]# bash install_haproxy.sh
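After the script finishes, a quick illustrative check on each ha node that haproxy is running and listening on the expected ports (credentials and ports as set in the script above):

systemctl is-active haproxy
ss -ntl | grep -E ':6443|:9999|:80 '    # kubernetes-6443, stats page and harbor-80 listeners
# the stats page should answer at http://172.31.3.104:9999/haproxy-status (admin/123456)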
4.2 Installing keepalived
Prepare the keepalived health-check script (it is consumed by the install scripts below and ends up on the ha nodes as /etc/keepalived/check_haproxy.sh):
[root@k8s-ha02 ~]# cat check_haproxy.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-09
#FileName: check_haproxy.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
err=0
for k in $(seq 1 3);do
check_code=$(pgrep haproxy)
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
Install keepalived on ha01 and ha02. The configuration differs between the two nodes, so keep them apart ([root@k8s-master01 pki]# vim /etc/keepalived/keepalived.conf) and pay attention to each node's network interface (the interface parameter).
Install keepalived (MASTER) on the ha01 node:
[root@k8s-ha01 ~]# cat install_keepalived_master.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2021-12-29
#FileName: install_keepalived_master.sh
#URL: raymond.blog.csdn.net
#Description: install_keepalived for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KEEPALIVED_URL=https://keepalived.org/software/
KEEPALIVED_FILE=keepalived-2.2.4.tar.gz
KEEPALIVED_INSTALL_DIR=/apps/keepalived
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
STATE=MASTER
PRIORITY=100
VIP=172.31.3.188
os(){
OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
OS_RELEASE_VERSION=`sed -rn '/^VERSION_ID=/s@.*="?([0-9]+)\.?.*"?@\1@p' /etc/os-release`
}
check_file (){
cd ${SRC_DIR}
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
rpm -q wget &> /dev/null || yum -y install wget &> /dev/null
fi
if [ ! -e ${KEEPALIVED_FILE} ];then
${COLOR}"缺少${KEEPALIVED_FILE}文件,如果是离线包,请放到${SRC_DIR}目录下"${END}
${COLOR}'开始下载Keepalived源码包'${END}
wget ${KEEPALIVED_URL}${KEEPALIVED_FILE} || { ${COLOR}"Keepalived源码包下载失败"${END}; exit; }
elif [ ! -e check_haproxy.sh ];then
${COLOR}"缺少check_haproxy.sh文件,请把文件放到${SRC_DIR}目录下"${END}
exit
else
${COLOR}"相关文件已准备好"${END}
fi
}
install_keepalived(){
[ -d ${KEEPALIVED_INSTALL_DIR} ] && { ${COLOR}"Keepalived已存在,安装失败"${END};exit; }
${COLOR}"开始安装Keepalived"${END}
${COLOR}"开始安装Keepalived依赖包"${END}
if [ ${OS_ID} == "Rocky" -a ${OS_RELEASE_VERSION} == 8 ];then
URL=mirrors.sjtug.sjtu.edu.cn
if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/rocky/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
EOF
fi
fi
if [ ${OS_ID} == "CentOS" -a ${OS_RELEASE_VERSION} == 8 ];then
URL=mirrors.cloud.tencent.com
if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/centos/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF
fi
fi
if [[ ${OS_RELEASE_VERSION} == 8 ]] &> /dev/null;then
yum -y install make gcc ipvsadm autoconf automake openssl-devel libnl3-devel iptables-devel ipset-devel file-devel net-snmp-devel glib2-devel pcre2-devel libnftnl-devel libmnl-devel systemd-devel &> /dev/null
elif [[ ${OS_RELEASE_VERSION} == 7 ]] &> /dev/null;then
yum -y install make gcc libnfnetlink-devel libnfnetlink ipvsadm libnl libnl-devel libnl3 libnl3-devel lm_sensors-libs net-snmp-agent-libs net-snmp-libs openssh-server openssh-clients openssl openssl-devel automake iproute &> /dev/null
elif [[ ${OS_RELEASE_VERSION} == 20 ]] &> /dev/null;then
apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev
else
apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf iptables-dev libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev &> /dev/null
fi
tar xf ${KEEPALIVED_FILE}
KEEPALIVED_DIR=`echo ${KEEPALIVED_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
cd ${KEEPALIVED_DIR}
./configure --prefix=${KEEPALIVED_INSTALL_DIR} --disable-fwmark
make -j $CPUS && make install
[ $? -eq 0 ] && ${COLOR}"Keepalived编译安装成功"${END} || { ${COLOR}"Keepalived编译安装失败,退出!"${END};exit; }
[ -d /etc/keepalived ] || mkdir -p /etc/keepalived &> /dev/null
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script check_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state ${STATE}
interface ${NET_NAME}
virtual_router_id 51
priority ${PRIORITY}
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
${VIP} dev ${NET_NAME} label ${NET_NAME}:1
}
track_script {
check_haproxy
}
}
EOF
cp ./keepalived/keepalived.service /lib/systemd/system/
cd ${SRC_DIR}
mv check_haproxy.sh /etc/keepalived/check_haproxy.sh
chmod +x /etc/keepalived/check_haproxy.sh
echo "PATH=${KEEPALIVED_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/keepalived.sh
systemctl daemon-reload
systemctl enable --now keepalived &> /dev/null
systemctl is-active keepalived &> /dev/null && ${COLOR}"Keepalived 服务启动成功!"${END} || { ${COLOR}"Keepalived 启动失败,退出!"${END} ; exit; }
${COLOR}"Keepalived安装完成"${END}
}
main(){
os
check_file
install_keepalived
}
main
[root@k8s-ha01 ~]# bash install_keepalived_master.sh
[root@k8s-ha01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:05:9b:2a brd ff:ff:ff:ff:ff:ff
inet 172.31.3.104/21 brd 172.31.7.255 scope global eth0
valid_lft forever preferred_lft forever
inet 172.31.3.188/32 scope global eth0:1
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe05:9b2a/64 scope link
valid_lft forever preferred_lft forever
Install keepalived (BACKUP) on the ha02 node:
[root@k8s-ha02 ~]# cat install_keepalived_backup.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2021-12-29
#FileName: install_keepalived_backup.sh
#URL: raymond.blog.csdn.net
#Description: install_keepalived for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KEEPALIVED_URL=https://keepalived.org/software/
KEEPALIVED_FILE=keepalived-2.2.4.tar.gz
KEEPALIVED_INSTALL_DIR=/apps/keepalived
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
STATE=BACKUP
PRIORITY=90
VIP=172.31.3.188
os(){
OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
OS_RELEASE_VERSION=`sed -rn '/^VERSION_ID=/s@.*="?([0-9]+)\.?.*"?@\1@p' /etc/os-release`
}
check_file (){
cd ${SRC_DIR}
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
rpm -q wget &> /dev/null || yum -y install wget &> /dev/null
fi
if [ ! -e ${KEEPALIVED_FILE} ];then
${COLOR}"缺少${KEEPALIVED_FILE}文件,如果是离线包,请放到${SRC_DIR}目录下"${END}
${COLOR}'开始下载Keepalived源码包'${END}
wget ${KEEPALIVED_URL}${KEEPALIVED_FILE} || { ${COLOR}"Keepalived源码包下载失败"${END}; exit; }
elif [ ! -e check_haproxy.sh ];then
${COLOR}"缺少check_haproxy.sh文件,请把文件放到${SRC_DIR}目录下"${END}
exit
else
${COLOR}"相关文件已准备好"${END}
fi
}
install_keepalived(){
[ -d ${KEEPALIVED_INSTALL_DIR} ] && { ${COLOR}"Keepalived已存在,安装失败"${END};exit; }
${COLOR}"开始安装Keepalived"${END}
${COLOR}"开始安装Keepalived依赖包"${END}
if [ ${OS_ID} == "Rocky" -a ${OS_RELEASE_VERSION} == 8 ];then
URL=mirrors.sjtug.sjtu.edu.cn
if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/rocky/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
EOF
fi
fi
if [ ${OS_ID} == "CentOS" -a ${OS_RELEASE_VERSION} == 8 ];then
URL=mirrors.cloud.tencent.com
if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/centos/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF
fi
fi
if [[ ${OS_RELEASE_VERSION} == 8 ]] &> /dev/null;then
yum -y install make gcc ipvsadm autoconf automake openssl-devel libnl3-devel iptables-devel ipset-devel file-devel net-snmp-devel glib2-devel pcre2-devel libnftnl-devel libmnl-devel systemd-devel &> /dev/null
elif [[ ${OS_RELEASE_VERSION} == 7 ]] &> /dev/null;then
yum -y install make gcc libnfnetlink-devel libnfnetlink ipvsadm libnl libnl-devel libnl3 libnl3-devel lm_sensors-libs net-snmp-agent-libs net-snmp-libs openssh-server openssh-clients openssl openssl-devel automake iproute &> /dev/null
elif [[ ${OS_RELEASE_VERSION} == 20 ]] &> /dev/null;then
apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev
else
apt update &> /dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf iptables-dev libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev &> /dev/null
fi
tar xf ${KEEPALIVED_FILE}
KEEPALIVED_DIR=`echo ${KEEPALIVED_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
cd ${KEEPALIVED_DIR}
./configure --prefix=${KEEPALIVED_INSTALL_DIR} --disable-fwmark
make -j $CPUS && make install
[ $? -eq 0 ] && ${COLOR}"Keepalived编译安装成功"${END} || { ${COLOR}"Keepalived编译安装失败,退出!"${END};exit; }
[ -d /etc/keepalived ] || mkdir -p /etc/keepalived &> /dev/null
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script check_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state ${STATE}
interface ${NET_NAME}
virtual_router_id 51
priority ${PRIORITY}
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
${VIP} dev ${NET_NAME} label ${NET_NAME}:1
}
track_script {
check_haproxy
}
}
EOF
cp ./keepalived/keepalived.service /lib/systemd/system/
cd ${SRC_DIR}
mv check_haproxy.sh /etc/keepalived/check_haproxy.sh
chmod +x /etc/keepalived/check_haproxy.sh
echo "PATH=${KEEPALIVED_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/keepalived.sh
systemctl daemon-reload
systemctl enable --now keepalived &> /dev/null
systemctl is-active keepalived &> /dev/null && ${COLOR}"Keepalived 服务启动成功!"${END} || { ${COLOR}"Keepalived 启动失败,退出!"${END} ; exit; }
${COLOR}"Keepalived安装完成"${END}
}
main(){
os
check_file
install_keepalived
}
main
[root@k8s-ha02 ~]# bash install_keepalived_backup.sh
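With keepalived now running on both ha nodes, it is worth verifying that failover actually works (an illustrative test; remember that check_haproxy.sh stops keepalived when haproxy is down):

# on ha01: stop haproxy so the health check takes keepalived down
[root@k8s-ha01 ~]# systemctl stop haproxy
# within a few seconds the VIP should disappear from ha01 and appear on ha02
[root@k8s-ha02 ~]# ip a show eth0 | grep 172.31.3.188
# restore ha01; with its higher priority (100 > 90) it takes the VIP back
[root@k8s-ha01 ~]# systemctl start haproxy && systemctl start keepalived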
5. Installing Harbor
5.1 Installing harbor
Install harbor on harbor01 and harbor02:
[root@k8s-harbor01 ~]# cat install_docker_compose_harbor.sh
#!/bin/bash
#
#**************************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2021-12-16
#FileName: install_docke_compose_harbor.sh
#URL: raymond.blog.csdn.net
#Description: install_docker_compose_harbor for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#**************************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
DOCKER_VERSION=19.03.15
URL='mirrors.cloud.tencent.com'
#docker-compose下载地址:https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
DOCKER_COMPOSE_FILE=docker-compose-linux-x86_64
#harbor下载地址:https://github.com/goharbor/harbor/releases/download/v2.4.1/harbor-offline-installer-v2.4.1.tgz
HARBOR_FILE=harbor-offline-installer-v
HARBOR_VERSION=2.4.1
TAR=.tgz
HARBOR_INSTALL_DIR=/apps
HARBOR_DOMAIN=harbor.raymonds.cc
NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
IP=`ip addr show ${NET_NAME}| awk -F" +|/" '/global/{print $3}'`
HARBOR_ADMIN_PASSWORD=123456
os(){
OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
OS_RELEASE_VERSION=`sed -rn '/^VERSION_ID=/s@.*="?([0-9]+)\.?.*"?@\1@p' /etc/os-release`
}
check_file (){
cd ${SRC_DIR}
if [ ! -e ${DOCKER_COMPOSE_FILE} ];then
${COLOR}"缺少${DOCKER_COMPOSE_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
exit
elif [ ! -e ${HARBOR_FILE}${HARBOR_VERSION}${TAR} ];then
${COLOR}"缺少${HARBOR_FILE}${HARBOR_VERSION}${TAR}文件,请把文件放到${SRC_DIR}目录下"${END}
exit
else
${COLOR}"相关文件已准备好"${END}
fi
}
ubuntu_install_docker(){
${COLOR}"开始安装DOCKER依赖包"${END}
apt update &> dev/null
apt -y install apt-transport-https ca-certificates curl software-properties-common &> dev/null
curl -fsSL https://${URL}/docker-ce/linux/ubuntu/gpg | sudo apt-key add - &> dev/null
add-apt-repository "deb [arch=amd64] https://${URL}/docker-ce/linux/ubuntu $(lsb_release -cs) stable" &> dev/null
apt update &> dev/null
${COLOR}"Docker有以下版本"${END}
apt-cache madison docker-ce
${COLOR}"10秒后即将安装:Docker-"${DOCKER_VERSION}"版本......"${END}
${COLOR}"如果想安装其它Docker版本,请按Ctrl+c键退出,修改版本再执行"${END}
sleep 10
${COLOR}"开始安装DOCKER"${END}
apt -y install docker-ce=5:${DOCKER_VERSION}~3-0~ubuntu-$(lsb_release -cs) docker-ce-cli=5:${DOCKER_VERSION}~3-0~ubuntu-$(lsb_release -cs) &> dev/null || { ${COLOR}"apt源失败,请检查apt配置"${END};exit; }
}
centos_install_docker(){
${COLOR}"开始安装DOCKER依赖包"${END}
yum -y install yum-utils &> /dev/null
yum-config-manager --add-repo https://${URL}/docker-ce/linux/centos/docker-ce.repo &> /dev/null
sed -i 's+download.docker.com+'''${URL}'''/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum clean all &> /dev/null
yum makecache &> /dev/null
${COLOR}"Docker有以下版本"${END}
yum list docker-ce.x86_64 --showduplicates
${COLOR}"10秒后即将安装:Docker-"${DOCKER_VERSION}"版本......"${END}
${COLOR}"如果想安装其它Docker版本,请按Ctrl+c键退出,修改版本再执行"${END}
sleep 10
${COLOR}"开始安装DOCKER"${END}
yum -y install docker-ce-${DOCKER_VERSION} docker-ce-cli-${DOCKER_VERSION} &> /dev/null || { ${COLOR}"yum源失败,请检查yum配置"${END};exit; }
}
mirror_accelerator(){
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<-EOF
{
"registry-mirrors": [
"https://registry.docker-cn.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn"
],
"insecure-registries": ["${HARBOR_DOMAIN}"],
"exec-opts": ["native.cgroupdriver=systemd"],
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 5,
"log-opts": {
"max-size": "300m",
"max-file": "2"
},
"live-restore": true
}
EOF
systemctl daemon-reload
systemctl enable --now docker
systemctl is-active docker &> /dev/null && ${COLOR}"Docker 服务启动成功"${END} || { ${COLOR}"Docker 启动失败"${END};exit; }
docker version && ${COLOR}"Docker 安装成功"${END} || ${COLOR}"Docker 安装失败"${END}
}
set_alias(){
echo 'alias rmi="docker images -qa|xargs docker rmi -f"' >> ~/.bashrc
echo 'alias rmc="docker ps -qa|xargs docker rm -f"' >> ~/.bashrc
}
install_docker_compose(){
${COLOR}"开始安装 Docker compose....."${END}
sleep 1
mv ${SRC_DIR}/${DOCKER_COMPOSE_FILE} /usr/bin/docker-compose
chmod +x /usr/bin/docker-compose
docker-compose --version && ${COLOR}"Docker Compose 安装完成"${END} || ${COLOR}"Docker compose 安装失败"${END}
}
install_harbor(){
${COLOR}"开始安装 Harbor....."${END}
sleep 1
[ -d ${HARBOR_INSTALL_DIR} ] || mkdir ${HARBOR_INSTALL_DIR}
tar xf ${SRC_DIR}/${HARBOR_FILE}${HARBOR_VERSION}${TAR} -C ${HARBOR_INSTALL_DIR}/
mv ${HARBOR_INSTALL_DIR}/harbor/harbor.yml.tmpl ${HARBOR_INSTALL_DIR}/harbor/harbor.yml
sed -ri.bak -e 's/^(hostname:) .*/\1 '${IP}'/' -e 's/^(harbor_admin_password:) .*/\1 '${HARBOR_ADMIN_PASSWORD}'/' -e 's/^(https:)/#\1/' -e 's/ (port: 443)/# \1/' -e 's@ (certificate: .*)@# \1@' -e 's@ (private_key: .*)@# \1@' ${HARBOR_INSTALL_DIR}/harbor/harbor.yml
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
if [ ${OS_RELEASE_VERSION} == "8" ];then
yum -y install python3 &> /dev/null || { ${COLOR}"安装软件包失败,请检查网络配置"${END}; exit; }
else
yum -y install python &> /dev/null || { ${COLOR}"安装软件包失败,请检查网络配置"${END}; exit; }
fi
else
apt -y install python3 &> /dev/null || { ${COLOR}"安装软件包失败,请检查网络配置"${END}; exit; }
fi
${HARBOR_INSTALL_DIR}/harbor/install.sh && ${COLOR}"Harbor 安装完成"${END} || ${COLOR}"Harbor 安装失败"${END}
cat > /lib/systemd/system/harbor.service <<-EOF
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor
[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml down
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable harbor &>/dev/null && ${COLOR}"Harbor已配置为开机自动启动"${END}
}
set_swap_limit(){
if [ ${OS_ID} == "Ubuntu" ];then
${COLOR}'设置Docker的"WARNING: No swap limit support"警告'${END}
sed -ri '/^GRUB_CMDLINE_LINUX=/s@"$@ swapaccount=1"@' /etc/default/grub
update-grub &> /dev/null
${COLOR}"10秒后,机器会自动重启"${END}
sleep 10
reboot
fi
}
main(){
os
check_file
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
rpm -q docker-ce &> /dev/null && ${COLOR}"Docker已安装"${END} || centos_install_docker
else
dpkg -s docker-ce &>/dev/null && ${COLOR}"Docker已安装"${END} || ubuntu_install_docker
fi
[ -f /etc/docker/daemon.json ] &>/dev/null && ${COLOR}"Docker镜像加速器已设置"${END} || mirror_accelerator
grep -Eqoi "(.*rmi=|.*rmc=)" ~/.bashrc && ${COLOR}"Docker别名已设置"${END} || set_alias
docker-compose --version &> /dev/null && ${COLOR}"Docker Compose已安装"${END} || install_docker_compose
systemctl is-active harbor &> /dev/null && ${COLOR}"Harbor已安装"${END} || install_harbor
grep -q "swapaccount=1" /etc/default/grub && ${COLOR}'"WARNING: No swap limit support"警告,已设置'${END} || set_swap_limit
}
main
[root@k8s-harbor01 ~]# bash install_docker_compose_harbor.sh
[root@k8s-harbor02 ~]# bash install_docker_compose_harbor.sh
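Once both installs finish, a quick illustrative check that each Harbor instance is up (harbor.raymonds.cc resolves to the VIP 172.31.3.188 through the hosts entry added earlier, and the admin password was set to 123456 in the script):

systemctl is-active harbor
curl -s -o /dev/null -w "%{http_code}\n" http://172.31.3.106    # expect 200 from harbor01's web UI
docker login harbor.raymonds.cc -u admin -p 123456              # goes through the VIP and the haproxy harbor-80 listener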
5.2 Creating the harbor repositories
Create a project named google_containers on harbor01.
Create a project named google_containers on harbor02.
On harbor02, create a replication endpoint (registry target).
On harbor02, create a replication rule.
On harbor01, create a replication endpoint (registry target).
On harbor01, create a replication rule (a quick replication check follows below).
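To confirm that two-way replication works, push a small test image to one instance and check that it shows up on the other (an illustrative check; the busybox image is arbitrary):

docker pull busybox:latest
docker tag busybox:latest harbor.raymonds.cc/google_containers/busybox:latest
docker push harbor.raymonds.cc/google_containers/busybox:latest
# then browse http://172.31.3.106 and http://172.31.3.107 and verify that the
# google_containers project on both instances now contains the busybox image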
6. Installing the Basic Components
This section installs the components used inside the cluster, such as Docker-ce and the various Kubernetes components.
6.1 Installing docker
Install docker-ce on the master and node machines:
[root@k8s-master01 ~]# cat install_docker.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2021-12-07
#FileName: install_docker.sh
#URL: raymond.blog.csdn.net
#Description: install_docker for centos 7/8 & ubuntu 18.04/20.04 Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
DOCKER_VERSION=19.03.15
URL='mirrors.cloud.tencent.com'
HARBOR_DOMAIN=harbor.raymonds.cc
os(){
OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}
ubuntu_install_docker(){
dpkg -s docker-ce &>/dev/null && ${COLOR}"Docker已安装,退出"${END} && exit
${COLOR}"开始安装DOCKER依赖包"${END}
apt update &> dev/null
apt -y install apt-transport-https ca-certificates curl software-properties-common &> dev/null
curl -fsSL https://${URL}/docker-ce/linux/ubuntu/gpg | sudo apt-key add - &> dev/null
add-apt-repository "deb [arch=amd64] https://${URL}/docker-ce/linux/ubuntu $(lsb_release -cs) stable" &> dev/null
apt update &> dev/null
${COLOR}"Docker有以下版本"${END}
apt-cache madison docker-ce
${COLOR}"10秒后即将安装:Docker-"${DOCKER_VERSION}"版本......"${END}
${COLOR}"如果想安装其它Docker版本,请按Ctrl+c键退出,修改版本再执行"${END}
sleep 10
${COLOR}"开始安装DOCKER"${END}
apt -y install docker-ce=5:${DOCKER_VERSION}~3-0~ubuntu-$(lsb_release -cs) docker-ce-cli=5:${DOCKER_VERSION}~3-0~ubuntu-$(lsb_release -cs) &> dev/null || { ${COLOR}"apt源失败,请检查apt配置"${END};exit; }
}
centos_install_docker(){
rpm -q docker-ce &> /dev/null && ${COLOR}"Docker已安装,退出"${END} && exit
${COLOR}"开始安装DOCKER依赖包"${END}
yum -y install yum-utils &> /dev/null
yum-config-manager --add-repo https://${URL}/docker-ce/linux/centos/docker-ce.repo &> /dev/null
sed -i 's+download.docker.com+'''${URL}'''/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum clean all &> /dev/null
yum makecache &> /dev/null
${COLOR}"Docker有以下版本"${END}
yum list docker-ce.x86_64 --showduplicates
${COLOR}"10秒后即将安装:Docker-"${DOCKER_VERSION}"版本......"${END}
${COLOR}"如果想安装其它Docker版本,请按Ctrl+c键退出,修改版本再执行"${END}
sleep 10
${COLOR}"开始安装DOCKER"${END}
yum -y install docker-ce-${DOCKER_VERSION} docker-ce-cli-${DOCKER_VERSION} &> /dev/null || { ${COLOR}"yum源失败,请检查yum配置"${END};exit; }
}
mirror_accelerator(){
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<-EOF
{
"registry-mirrors": [
"https://registry.docker-cn.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn"
],
"insecure-registries": ["${HARBOR_DOMAIN}"],
"exec-opts": ["native.cgroupdriver=systemd"],
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 5,
"log-opts": {
"max-size": "300m",
"max-file": "2"
},
"live-restore": true
}
EOF
systemctl daemon-reload
systemctl enable --now docker
systemctl is-active docker &> /dev/null && ${COLOR}"Docker 服务启动成功"${END} || { ${COLOR}"Docker 启动失败"${END};exit; }
docker version && ${COLOR}"Docker 安装成功"${END} || ${COLOR}"Docker 安装失败"${END}
}
set_alias(){
echo 'alias rmi="docker images -qa|xargs docker rmi -f"' >> ~/.bashrc
echo 'alias rmc="docker ps -qa|xargs docker rm -f"' >> ~/.bashrc
}
set_swap_limit(){
if [ ${OS_ID} == "Ubuntu" ];then
${COLOR}'设置Docker的"WARNING: No swap limit support"警告'${END}
sed -ri '/^GRUB_CMDLINE_LINUX=/s@"$@ swapaccount=1"@' /etc/default/grub
update-grub &> /dev/null
${COLOR}"10秒后,机器会自动重启"${END}
sleep 10
reboot
fi
}
main(){
os
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
centos_install_docker
else
ubuntu_install_docker
fi
mirror_accelerator
set_alias
set_swap_limit
}
main
[root@k8s-master01 ~]# bash install_docker.sh
[root@k8s-master02 ~]# bash install_docker.sh
[root@k8s-master03 ~]# bash install_docker.sh
[root@k8s-node01 ~]# bash install_docker.sh
[root@k8s-node02 ~]# bash install_docker.sh
[root@k8s-node03 ~]# bash install_docker.sh
6.2 Installing kubeadm and Related Components
Configure the k8s package repo and install the k8s components on CentOS 7:
[root@k8s-master01 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<-EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master01 ~]# yum list kubeadm.x86_64 --showduplicates | sort -r |grep 1.20
Repository base is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
kubeadm.x86_64 1.20.9-0 kubernetes
kubeadm.x86_64 1.20.8-0 kubernetes
kubeadm.x86_64 1.20.7-0 kubernetes
kubeadm.x86_64 1.20.6-0 kubernetes
kubeadm.x86_64 1.20.5-0 kubernetes
kubeadm.x86_64 1.20.4-0 kubernetes
kubeadm.x86_64 1.20.2-0 kubernetes
kubeadm.x86_64 1.20.15-0 kubernetes
kubeadm.x86_64 1.20.14-0 kubernetes
kubeadm.x86_64 1.20.14-0 @kubernetes
kubeadm.x86_64 1.20.13-0 kubernetes
kubeadm.x86_64 1.20.12-0 kubernetes
kubeadm.x86_64 1.20.11-0 kubernetes
kubeadm.x86_64 1.20.1-0 kubernetes
kubeadm.x86_64 1.20.10-0 kubernetes
kubeadm.x86_64 1.20.0-0 kubernetes
[root@k8s-master01 ~]# yum -y install kubeadm-1.20.14 kubelet-1.20.14 kubectl-1.20.14
Ubuntu
root@k8s-master01:~# apt update
root@k8s-master01:~# apt install -y apt-transport-https
root@k8s-master01:~# curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
OK
root@k8s-master01:~# echo "deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
root@k8s-master01:~# apt update
root@k8s-master01:~# apt-cache madison kubeadm |grep 1.20
kubeadm | 1.20.15-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.14-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.13-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.12-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.11-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.10-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.9-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.8-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.7-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
root@k8s-master01:~# apt -y install kubelet=1.20.14-00 kubeadm=1.20.14-00 kubectl=1.20.14-00
List the required image versions:
[root@k8s-master01 ~]# kubeadm config images list --kubernetes-version v1.20.14
k8s.gcr.io/kube-apiserver:v1.20.14
k8s.gcr.io/kube-controller-manager:v1.20.14
k8s.gcr.io/kube-scheduler:v1.20.14
k8s.gcr.io/kube-proxy:v1.20.14
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
Pull the images and push them to harbor:
[root@k8s-master01 ~]# docker login harbor.raymonds.cc
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@k8s-master01 ~]# cat download_kubeadm_images_1.20.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-11
#FileName: download_kubeadm_images.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KUBEADM_VERSION=1.20.14
images=$(kubeadm config images list --kubernetes-version=v${KUBEADM_VERSION} | awk -F "/" '{print $NF}')
HARBOR_DOMAIN=harbor.raymonds.cc
images_download(){
${COLOR}"开始下载Kubeadm镜像"${END}
for i in ${images};do
docker pull registry.aliyuncs.com/google_containers/$i
docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
docker rmi registry.aliyuncs.com/google_containers/$i
docker push ${HARBOR_DOMAIN}/google_containers/$i
done
${COLOR}"Kubeadm镜像下载完成"${END}
}
images_download
[root@k8s-master01 ~]# bash download_kubeadm_images_1.20.sh
[root@k8s-master01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
harbor.raymonds.cc/google_containers/kube-proxy v1.20.14 ec690d6bc684 3 weeks ago 99.7MB
harbor.raymonds.cc/google_containers/kube-apiserver v1.20.14 a50752e7cbd3 3 weeks ago 122MB
harbor.raymonds.cc/google_containers/kube-controller-manager v1.20.14 aea9b0bc2c0c 3 weeks ago 116MB
harbor.raymonds.cc/google_containers/kube-scheduler v1.20.14 e419f64ebbc3 3 weeks ago 47.3MB
harbor.raymonds.cc/google_containers/etcd 3.4.13-0 0369cf4303ff 16 months ago 253MB
harbor.raymonds.cc/google_containers/coredns 1.7.0 bfe3a36ebd25 19 months ago 45.2MB
harbor.raymonds.cc/google_containers/pause 3.2 80d28bedfe5d 23 months ago 683kB
Enable kubelet to start at boot:
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Run the install script on master02 and master03:
[root@k8s-master02 ~]# cat install_kubeadm_for_master.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-11
#FileName: install_kubeadm_for_master.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KUBEADM_MIRRORS=mirrors.aliyun.com
KUBEADM_VERSION=1.20.14
HARBOR_DOMAIN=harbor.raymonds.cc
os(){
OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}
install_ubuntu_kubeadm(){
${COLOR}"开始安装Kubeadm依赖包"${END}
apt update &> dev/null && apt install -y apt-transport-https &> dev/null
curl -fsSL https://${KUBEADM_MIRRORS}/kubernetes/apt/doc/apt-key.gpg | apt-key add - &> dev/null
echo "deb https://"${KUBEADM_MIRRORS}"/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
apt update &> dev/null
${COLOR}"Kubeadm有以下版本"${END}
apt-cache madison kubeadm
${COLOR}"10秒后即将安装:Kubeadm-"${KUBEADM_VERSION}"版本......"${END}
${COLOR}"如果想安装其它Kubeadm版本,请按Ctrl+c键退出,修改版本再执行"${END}
sleep 10
${COLOR}"开始安装Kubeadm"${END}
apt -y install kubelet=${KUBEADM_VERSION}-00 kubeadm=${KUBEADM_VERSION}-00 kubectl=${KUBEADM_VERSION}-00 &> dev/null
${COLOR}"Kubeadm安装完成"${END}
}
install_centos_kubeadm(){
cat > /etc/yum.repos.d/kubernetes.repo <<-EOF
[kubernetes]
name=Kubernetes
baseurl=https://${KUBEADM_MIRRORS}/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://${KUBEADM_MIRRORS}/kubernetes/yum/doc/yum-key.gpg https://${KUBEADM_MIRRORS}/kubernetes/yum/doc/rpm-package-key.gpg
EOF
${COLOR}"Kubeadm有以下版本"${END}
yum list kubeadm.x86_64 --showduplicates | sort -r
${COLOR}"10秒后即将安装:Kubeadm-"${KUBEADM_VERSION}"版本......"${END}
${COLOR}"如果想安装其它Kubeadm版本,请按Ctrl+c键退出,修改版本再执行"${END}
sleep 10
${COLOR}"开始安装Kubeadm"${END}
yum -y install kubelet-${KUBEADM_VERSION} kubeadm-${KUBEADM_VERSION} kubectl-${KUBEADM_VERSION} &> /dev/null
${COLOR}"Kubeadm安装完成"${END}
}
start_service(){
systemctl daemon-reload
systemctl enable --now kubelet
systemctl is-active kubelet &> /dev/null && ${COLOR}"Kubelet 服务启动成功"${END} || { ${COLOR}"Kubelet 启动失败"${END};exit; }
kubelet --version && ${COLOR}"Kubelet 安装成功"${END} || ${COLOR}"Kubelet 安装失败"${END}
}
main(){
os
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
install_centos_kubeadm
else
install_ubuntu_kubeadm
fi
start_service
}
main
[root@k8s-master02 ~]# bash install_kubeadm_for_master.sh
[root@k8s-master03 ~]# bash install_kubeadm_for_master.sh
Install kubeadm on the node hosts:
[root@k8s-node01 ~]# cat install_kubeadm_for_node.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-11
#FileName: install_kubeadm_for_node.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KUBEADM_MIRRORS=mirrors.aliyun.com
KUBEADM_VERSION=1.20.14
HARBOR_DOMAIN=harbor.raymonds.cc
os(){
OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}
install_ubuntu_kubeadm(){
${COLOR}"开始安装Kubeadm依赖包"${END}
apt update &> /dev/null && apt install -y apt-transport-https &> /dev/null
curl -fsSL https://${KUBEADM_MIRRORS}/kubernetes/apt/doc/apt-key.gpg | apt-key add - &> /dev/null
echo "deb https://"${KUBEADM_MIRRORS}"/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
apt update &> /dev/null
${COLOR}"Kubeadm有以下版本"${END}
apt-cache madison kubeadm
${COLOR}"10秒后即将安装:Kubeadm-"${KUBEADM_VERSION}"版本......"${END}
${COLOR}"如果想安装其它Kubeadm版本,请按Ctrl+c键退出,修改版本再执行"${END}
sleep 10
${COLOR}"开始安装Kubeadm"${END}
apt -y install kubelet=${KUBEADM_VERSION}-00 kubeadm=${KUBEADM_VERSION}-00 &> /dev/null
${COLOR}"Kubeadm安装完成"${END}
}
install_centos_kubeadm(){
cat > /etc/yum.repos.d/kubernetes.repo <<-EOF
[kubernetes]
name=Kubernetes
baseurl=https://${KUBEADM_MIRRORS}/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://${KUBEADM_MIRRORS}/kubernetes/yum/doc/yum-key.gpg https://${KUBEADM_MIRRORS}/kubernetes/yum/doc/rpm-package-key.gpg
EOF
${COLOR}"Kubeadm有以下版本"${END}
yum list kubeadm.x86_64 --showduplicates | sort -r
${COLOR}"10秒后即将安装:Kubeadm-"${KUBEADM_VERSION}"版本......"${END}
${COLOR}"如果想安装其它Kubeadm版本,请按Ctrl+c键退出,修改版本再执行"${END}
sleep 10
${COLOR}"开始安装Kubeadm"${END}
yum -y install kubelet-${KUBEADM_VERSION} kubeadm-${KUBEADM_VERSION} &> /dev/null
${COLOR}"Kubeadm安装完成"${END}
}
start_service(){
systemctl daemon-reload
systemctl enable --now kubelet
systemctl is-active kubelet &> /dev/null && ${COLOR}"Kubelet 服务启动成功"${END} || { ${COLOR}"Kubelet 启动失败"${END};exit; }
kubelet --version && ${COLOR}"Kubelet 安装成功"${END} || ${COLOR}"Kubelet 安装失败"${END}
}
main(){
os
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
install_centos_kubeadm
else
install_ubuntu_kubeadm
fi
start_service
}
main
[root@k8s-node01 ~]# bash install_kubeadm_for_node.sh
[root@k8s-node02 ~]# bash install_kubeadm_for_node.sh
[root@k8s-node03 ~]# bash install_kubeadm_for_node.sh
Important: if keepalived and haproxy are installed, verify that keepalived is working properly before continuing.
#Test the VIP
[root@k8s-master01 ~]# ping 172.31.3.188
PING 172.31.3.188 (172.31.3.188) 56(84) bytes of data.
64 bytes from 172.31.3.188: icmp_seq=1 ttl=64 time=0.526 ms
64 bytes from 172.31.3.188: icmp_seq=2 ttl=64 time=0.375 ms
^C
--- 172.31.3.188 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1015ms
rtt min/avg/max/mdev = 0.375/0.450/0.526/0.078 ms
[root@k8s-ha01 ~]# systemctl stop keepalived
[root@k8s-ha01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:05:9b:2a brd ff:ff:ff:ff:ff:ff
inet 172.31.3.104/21 brd 172.31.7.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe05:9b2a/64 scope link
valid_lft forever preferred_lft forever
[root@k8s-ha02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:5e:d8:f8 brd ff:ff:ff:ff:ff:ff
inet 172.31.3.105/21 brd 172.31.7.255 scope global eth0
valid_lft forever preferred_lft forever
inet 172.31.3.188/32 scope global eth0:1
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe5e:d8f8/64 scope link
valid_lft forever preferred_lft forever
[root@k8s-ha01 ~]# systemctl start keepalived
[root@k8s-ha01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:05:9b:2a brd ff:ff:ff:ff:ff:ff
inet 172.31.3.104/21 brd 172.31.7.255 scope global eth0
valid_lft forever preferred_lft forever
inet 172.31.3.188/32 scope global eth0:1
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe05:9b2a/64 scope link
valid_lft forever preferred_lft forever
[root@k8s-master01 ~]# telnet 172.31.3.188 6443
Trying 172.31.3.188...
Connected to 172.31.3.188.
Escape character is '^]'.
Connection closed by foreign host.
If the VIP cannot be pinged, or telnet does not show the ] escape character, the VIP is not usable and you must not continue. Troubleshoot keepalived first: check the firewall and SELinux, the haproxy and keepalived service status, and the listening ports.
On all nodes the firewall must be disabled and inactive: systemctl status firewalld
On all nodes SELinux must be disabled: getenforce
On the HA nodes (k8s-ha01/k8s-ha02) check the haproxy and keepalived status: systemctl status keepalived haproxy
On the HA nodes check the listening ports: netstat -lntp
Check the haproxy status page:
http://172.31.3.188:9999/haproxy-status
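As an optional extra check (not part of the original procedure), the VIP ports can also be probed from any master with nc, assuming the nc/ncat package is installed:
#Optional: confirm haproxy answers on the VIP for the apiserver port and the status page port
nc -zv 172.31.3.188 6443
nc -zv 172.31.3.188 9999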
7. Cluster initialization
Official initialization documentation:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
7.1 Initializing highly available masters from the command line
Initialize the Master01 node. Initialization generates the corresponding certificates and configuration files under /etc/kubernetes; the other Master nodes can then simply join Master01:
[root@k8s-master01 ~]# kubeadm init --apiserver-advertise-address=172.31.3.101 --control-plane-endpoint=172.31.3.188 --apiserver-bind-port=6443 --kubernetes-version=v1.20.14 --pod-network-cidr=192.168.0.0/12 --service-cidr=10.96.0.0/12 --service-dns-domain=example.local --image-repository=harbor.raymonds.cc/google_containers --ignore-preflight-errors=swap
[init] Using Kubernetes version: v1.20.14
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.101 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01.example.local localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01.example.local localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.037361 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01.example.local as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master01.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1d8e8a.p35rsuat5a7hp577
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 172.31.3.188:6443 --token 1d8e8a.p35rsuat5a7hp577 \
--discovery-token-ca-cert-hash sha256:0fe81dccab5623a90b0b85dcae8cccc5085f8f3dc7c8cf169a29a84da0d8135e \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.31.3.188:6443 --token 1d8e8a.p35rsuat5a7hp577 \
--discovery-token-ca-cert-hash sha256:0fe81dccab5623a90b0b85dcae8cccc5085f8f3dc7c8cf169a29a84da0d8135e
If initialization fails, reset and initialize again with the following command:
kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
After successful initialization a token is generated, which the other nodes use when joining the cluster, so record the token value printed at the end of the init output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 172.31.3.188:6443 --token 1d8e8a.p35rsuat5a7hp577 \
--discovery-token-ca-cert-hash sha256:0fe81dccab5623a90b0b85dcae8cccc5085f8f3dc7c8cf169a29a84da0d8135e \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.31.3.188:6443 --token 1d8e8a.p35rsuat5a7hp577 \
--discovery-token-ca-cert-hash sha256:0fe81dccab5623a90b0b85dcae8cccc5085f8f3dc7c8cf169a29a84da0d8135e
Configure the environment variable on Master01 for accessing the Kubernetes cluster:
[root@k8s-master01 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-master01 ~]# source /root/.bashrc
#Alternatively, the kubeconfig can be set up with the following commands
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the node status:
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local NotReady control-plane,master 82s v1.20.14
With this kubeadm-based installation, all system components run as containers in the kube-system namespace; the Pod status can be checked now (the CoreDNS Pods stay Pending until a CNI plugin such as Calico is installed):
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-5ffd5c4586-rvsm4 0/1 Pending 0 94s <none> <none> <none> <none>
coredns-5ffd5c4586-xzrwx 0/1 Pending 0 94s <none> <none> <none> <none>
etcd-k8s-master01.example.local 1/1 Running 0 93s 172.31.3.101 k8s-master01.example.local <none> <none>
kube-apiserver-k8s-master01.example.local 1/1 Running 0 93s 172.31.3.101 k8s-master01.example.local <none> <none>
kube-controller-manager-k8s-master01.example.local 1/1 Running 0 93s 172.31.3.101 k8s-master01.example.local <none> <none>
kube-proxy-rqqq9 1/1 Running 0 94s 172.31.3.101 k8s-master01.example.local <none> <none>
kube-scheduler-k8s-master01.example.local 1/1 Running 0 93s 172.31.3.101 k8s-master01.example.local <none> <none>
7.2 Initializing highly available masters from a configuration file
Create the kubeadm-config.yaml configuration file on Master01 as follows:
Master01: (# Note: if this is not a highly available cluster, change 172.31.3.188:6443 to master01's address, and make sure kubernetesVersion matches the kubeadm version installed on your servers; check it with: kubeadm version)
Note:
In the file below, the host network, the podSubnet and the serviceSubnet must not overlap; an optional overlap check follows.
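As an optional sanity check (a minimal sketch assuming python3 is available on master01, using this document's example CIDRs), the three networks can be tested for overlap before initializing:
#Optional sketch: print OVERLAP for any pair of CIDRs that intersect
python3 - <<'PYEOF'
import ipaddress
nets = {
    "host":    ipaddress.ip_network("172.31.0.0/21"),   # host/node network
    "pod":     ipaddress.ip_network("192.168.0.0/12"),  # podSubnet
    "service": ipaddress.ip_network("10.96.0.0/12"),    # serviceSubnet
}
names = list(nets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        state = "OVERLAP" if nets[a].overlaps(nets[b]) else "ok"
        print(f"{state}: {a} {nets[a]} vs {b} {nets[b]}")
PYEOF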
[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.14", GitCommit:"57a3aa3f13699cf3db9c52d228c18db94fa81876", GitTreeState:"clean", BuildDate:"2021-12-15T14:51:22Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master01 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: 7t2weq.bjbawausm0jaxury
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.31.3.101 # master01's IP address
bindPort: 6443
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: k8s-master01.example.local # master01's hostname
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
---
apiServer:
certSANs:
- 172.31.3.188 # the VIP address
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.31.3.188:6443 # the apiserver address proxied by haproxy (VIP:port)
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: harbor.raymonds.cc/google_containers # harbor image repository
kind: ClusterConfiguration
kubernetesVersion: v1.20.14 # change to your kubeadm version
networking:
dnsDomain: example.local # DNS domain
podSubnet: 192.168.0.0/12 # Pod network
serviceSubnet: 10.96.0.0/12 # Service network
scheduler: {}
Migrate the kubeadm configuration file to the current schema:
root@k8s-master01:~# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
root@k8s-master01:~# cat new.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: 7t2weq.bjbawausm0jaxury
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.31.3.101
bindPort: 6443
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: k8s-master01
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
---
apiServer:
certSANs:
- 172.31.3.188
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.31.3.188:6443
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: harbor.raymonds.cc/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.14
networking:
dnsDomain: example.local
podSubnet: 192.168.0.0/12
serviceSubnet: 10.96.0.0/12
scheduler: {}
Initialize the Master01 node. Initialization generates the corresponding certificates and configuration files under /etc/kubernetes; the other Master nodes can then simply join Master01:
#If the node has already been initialized, reset the cluster with the command below before initializing again
root@k8s-master01:~# kubeadm reset
root@k8s-master01:~# kubeadm init --config /root/new.yaml --upload-certs
[init] Using Kubernetes version: v1.20.14
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.101 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.31.3.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.538896 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
3a3ac1850e538ed3979afb26ebab30cdbc232a4ac11c00a307df56f3350259ef
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7t2weq.bjbawausm0jaxury
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:7e7695a897d9a8afcc779f2f5c335502e091e85a806d6fbe865118abf17e39cd \
--control-plane --certificate-key 3a3ac1850e538ed3979afb26ebab30cdbc232a4ac11c00a307df56f3350259ef
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:7e7695a897d9a8afcc779f2f5c335502e091e85a806d6fbe865118abf17e39cd
If initialization fails, reset and initialize again with the following command:
kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
After successful initialization a token is generated, which the other nodes use when joining the cluster, so record the token value printed at the end of the init output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:7e7695a897d9a8afcc779f2f5c335502e091e85a806d6fbe865118abf17e39cd \
--control-plane --certificate-key 3a3ac1850e538ed3979afb26ebab30cdbc232a4ac11c00a307df56f3350259ef
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.31.3.188:6443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:7e7695a897d9a8afcc779f2f5c335502e091e85a806d6fbe865118abf17e39cd复制
Configure the environment variable on Master01 for accessing the Kubernetes cluster:
[root@k8s-master01 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-master01 ~]# source /root/.bashrc
Check the node status:
root@k8s-master01:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane,master 83s v1.20.14
With this kubeadm-based installation, all system components run as containers in the kube-system namespace; check the Pod status:
root@k8s-master01:~# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-5ffd5c4586-c75rs 0/1 Pending 0 99s <none> <none> <none> <none>
coredns-5ffd5c4586-sn8x9 0/1 Pending 0 99s <none> <none> <none> <none>
etcd-k8s-master01 1/1 Running 0 100s 172.31.3.101 k8s-master01 <none> <none>
kube-apiserver-k8s-master01 1/1 Running 0 100s 172.31.3.101 k8s-master01 <none> <none>
kube-controller-manager-k8s-master01 1/1 Running 0 100s 172.31.3.101 k8s-master01 <none> <none>
kube-proxy-f9ltq 1/1 Running 0 99s 172.31.3.101 k8s-master01 <none> <none>
kube-scheduler-k8s-master01 1/1 Running 0 100s 172.31.3.101 k8s-master01 <none> <none>
8. Highly available Masters
If the cluster was initialized from the configuration file (with --upload-certs), there is no need to upload certificates again; if it was initialized from the command line, run the command below on the current master to upload the certificates used when adding new control-plane nodes:
[root@k8s-master01 ~]# kubeadm init phase upload-certs --upload-certs
I0111 21:05:14.295131 4977 version.go:254] remote version is much newer: v1.23.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
5e4b05cc3ea9a54d172f4895e24caada1496a55061226f4c42cfebba9d50404b
Add master02:
[root@k8s-master02 ~]# kubeadm join 172.31.3.188:6443 --token 1d8e8a.p35rsuat5a7hp577 \
--discovery-token-ca-cert-hash sha256:0fe81dccab5623a90b0b85dcae8cccc5085f8f3dc7c8cf169a29a84da0d8135e \
--control-plane --certificate-key 5e4b05cc3ea9a54d172f4895e24caada1496a55061226f4c42cfebba9d50404b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master02.example.local localhost] and IPs [172.31.3.102 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master02.example.local localhost] and IPs [172.31.3.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master02.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.102 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master02.example.local as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master02.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local NotReady control-plane,master 4m12s v1.20.14
k8s-master02.example.local NotReady control-plane,master 45s v1.20.14
9. Node configuration
The worker Nodes are where the business applications run. In production it is not recommended to schedule any Pods other than the system components on the Master nodes; in a test environment the Masters can be allowed to run Pods to save resources, as shown in the optional snippet below.
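The restriction comes from the NoSchedule taint that kubeadm places on every control-plane node. The commands below show how to inspect it and, for test environments only, how to remove it so Pods can be scheduled on the masters (an optional step, not part of this installation):
#View the control-plane taint on master01
kubectl describe node k8s-master01.example.local | grep -i taint
#Test environments only: the trailing "-" removes the taint from all masters
kubectl taint nodes --all node-role.kubernetes.io/master-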
Add node01:
[root@k8s-node01 ~]# kubeadm join 172.31.3.188:6443 --token 1d8e8a.p35rsuat5a7hp577 \
--discovery-token-ca-cert-hash sha256:0fe81dccab5623a90b0b85dcae8cccc5085f8f3dc7c8cf169a29a84da0d8135e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local NotReady control-plane,master 7m57s v1.20.14
k8s-master02.example.local NotReady control-plane,master 4m30s v1.20.14
k8s-node01.example.local NotReady <none> 28s v1.20.14
Add node02:
[root@k8s-node02 ~]# kubeadm join 172.31.3.188:6443 --token 1d8e8a.p35rsuat5a7hp577 \
--discovery-token-ca-cert-hash sha256:0fe81dccab5623a90b0b85dcae8cccc5085f8f3dc7c8cf169a29a84da0d8135e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local NotReady control-plane,master 9m22s v1.20.14
k8s-master02.example.local NotReady control-plane,master 5m55s v1.20.14
k8s-node01.example.local NotReady <none> 113s v1.20.14
k8s-node02.example.local NotReady <none> 34s v1.20.14
10. Adding new masters and nodes after the token expires
Note: the following steps are needed only if the token generated by the init command above has expired; if it has not expired, skip them.
#Generate a new token after the old one has expired:
root@k8s-master01:~# kubeadm token create --print-join-command
kubeadm join 172.31.3.188:6443 --token 13pz5i.oc0ja481fj1svy1i --discovery-token-ca-cert-hash sha256:7e7695a897d9a8afcc779f2f5c335502e091e85a806d6fbe865118abf17e39cd
#Masters also need a new --certificate-key:
root@k8s-master01:~# kubeadm init phase upload-certs --upload-certs
I0112 21:30:51.234388 16826 version.go:254] remote version is much newer: v1.23.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
0a6e1d13be15039189717c81f25dbe369ea0bec0c22e6060a33cb5fedf25e530
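Before creating a new token you can optionally confirm that the old one has really expired; kubeadm lists the existing bootstrap tokens together with their expiration time:
#Optional: expired tokens are cleaned up automatically and no longer appear in this list
kubeadm token list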
Add master03:
root@k8s-master03:~# kubeadm join 172.31.3.188:6443 --token 13pz5i.oc0ja481fj1svy1i \
--discovery-token-ca-cert-hash sha256:7e7695a897d9a8afcc779f2f5c335502e091e85a806d6fbe865118abf17e39cd \
--control-plane --certificate-key 0a6e1d13be15039189717c81f25dbe369ea0bec0c22e6060a33cb5fedf25e530
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master03.example.local localhost] and IPs [172.31.3.103 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master03.example.local localhost] and IPs [172.31.3.103 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master03.example.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.example.local] and IPs [10.96.0.1 172.31.3.103 172.31.3.188]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master03.example.local as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master03.example.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
root@k8s-master01:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane,master 20m v1.20.14
k8s-master02.example.local NotReady control-plane,master 15m v1.20.14
k8s-master03.example.local NotReady control-plane,master 14m v1.20.14
k8s-node01.example.local NotReady <none> 13m v1.20.14
k8s-node02.example.local NotReady <none> 13m v1.20.14
Add node03:
[root@k8s-node03 ~]# kubeadm join 172.31.3.188:6443 --token 13pz5i.oc0ja481fj1svy1i \
--discovery-token-ca-cert-hash sha256:7e7695a897d9a8afcc779f2f5c335502e091e85a806d6fbe865118abf17e39cd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@k8s-master01:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane,master 22m v1.20.14
k8s-master02.example.local NotReady control-plane,master 17m v1.20.14
k8s-master03.example.local NotReady control-plane,master 16m v1.20.14
k8s-node01.example.local NotReady <none> 15m v1.20.14
k8s-node02.example.local NotReady <none> 15m v1.20.14
k8s-node03.example.local NotReady <none> 35s v1.20.14
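All nodes still report NotReady because no CNI plugin has been deployed yet; they will turn Ready after Calico is installed in the next section. An optional way to confirm the reason (the Ready condition message mentions the missing network plugin):
#Optional: print the Ready condition message for one node
kubectl get node k8s-node01.example.local -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}{"\n"}'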
11. Installing the Calico component
[root@k8s-master01 ~]# cat calico-etcd.yaml
---
# Source: calico/templates/calico-etcd-secrets.yaml
# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: calico-etcd-secrets
namespace: kube-system
data:
# Populate the following with etcd TLS configuration if desired, but leave blank if
# not using TLS for etcd.
# The keys below should be uncommented and the values populated with the base64
# encoded contents of each file that would be associated with the TLS data.
# Example command for encoding a file contents: cat <file> | base64 -w 0
# etcd-key: null
# etcd-cert: null
# etcd-ca: null
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Configure this with the location of your etcd cluster.
etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
# If you're using TLS enabled etcd uncomment the following.
# You must also populate the Secret below with these files.
etcd_ca: "" # "/calico-secrets/etcd-ca"
etcd_cert: "" # "/calico-secrets/etcd-cert"
etcd_key: "" # "/calico-secrets/etcd-key"
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "bird"
# Configure the MTU to use for workload interfaces and tunnels.
# - If Wireguard is enabled, set to your network MTU - 60
# - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50
# - Otherwise, if IPIP is enabled, set to your network MTU - 20
# - Otherwise, if not using any encapsulation, set to your network MTU.
veth_mtu: "1440"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"etcd_endpoints": "__ETCD_ENDPOINTS__",
"etcd_key_file": "__ETCD_KEY_FILE__",
"etcd_cert_file": "__ETCD_CERT_FILE__",
"etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
},
{
"type": "bandwidth",
"capabilities": {"bandwidth": true}
}
]
}
---
# Source: calico/templates/calico-kube-controllers-rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
# Pods are monitored for changing labels.
# The node controller monitors Kubernetes nodes.
# Namespace and serviceaccount labels are used for policy.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
- serviceaccounts
verbs:
- watch
- list
- get
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
---
# Source: calico/templates/calico-node-rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Pod CIDR auto-detection on kubeadm needs access to config maps.
- apiGroups: [""]
resources:
- configmaps
verbs:
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
spec:
nodeSelector:
kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: docker.io/calico/cni:v3.15.3
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# The location of the etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
- mountPath: /calico-secrets
name: etcd-certs
securityContext:
privileged: true
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: docker.io/calico/pod2daemon-flexvol:v3.15.3
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
securityContext:
privileged: true
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: docker.io/calico/node:v3.15.3
env:
# The location of the etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# Location of the CA certificate for etcd.
- name: ETCD_CA_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_ca
# Location of the client key for etcd.
- name: ETCD_KEY_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_key
# Location of the client certificate for etcd.
- name: ETCD_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_cert
# Set noderef for node controller.
- name: CALICO_K8S_NODE_REF
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
# Enable or Disable VXLAN on the default IP pool.
- name: CALICO_IPV4POOL_VXLAN
value: "Never"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Set MTU for the VXLAN tunnel device.
- name: FELIX_VXLANMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Set MTU for the Wireguard tunnel device.
- name: FELIX_WIREGUARDMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
exec:
command:
- /bin/calico-node
- -felix-live
- -bird-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
- -bird-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- mountPath: /calico-secrets
name: etcd-certs
- name: policysync
mountPath: /var/run/nodeagent
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Mount in the etcd TLS secrets with mode 400.
# See https://kubernetes.io/docs/concepts/configuration/secret/
- name: etcd-certs
secret:
secretName: calico-etcd-secrets
defaultMode: 0400
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
nodeSelector:
kubernetes.io/os: linux
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
# The controllers must run in the host network namespace so that
# it isn't governed by policy that would prevent it from working.
hostNetwork: true
containers:
- name: calico-kube-controllers
image: docker.io/calico/kube-controllers:v3.15.3
env:
# The location of the etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# Location of the CA certificate for etcd.
- name: ETCD_CA_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_ca
# Location of the client key for etcd.
- name: ETCD_KEY_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_key
# Location of the client certificate for etcd.
- name: ETCD_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_cert
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: policy,namespace,serviceaccount,workloadendpoint,node
volumeMounts:
# Mount in the etcd TLS secrets.
- mountPath: /calico-secrets
name: etcd-certs
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r
volumes:
# Mount in the etcd TLS secrets with mode 400.
# See https://kubernetes.io/docs/concepts/configuration/secret/
- name: etcd-certs
secret:
secretName: calico-etcd-secrets
defaultMode: 0400
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml
---
# Source: calico/templates/kdd-crds.yaml
Modify the following places in calico-etcd.yaml:
[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml
etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
[root@k8s-master01 ~]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"#g' calico-etcd.yaml
[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml
etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"
[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml
# etcd-key: null
# etcd-cert: null
# etcd-ca: null
[root@k8s-master01 ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
[root@k8s-master01 ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml
etcd-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdkpwdldKZjdpRzhpNFJGeVhXVXlFRFU0dlR5UHprV1IxVE9jaUVpQ0J0N1BsbUJWCmkyOEZVdmZTTFpBbDdZRmJsNTg0STJJV0Y5MU1FOElIRnp4clZyckZ5eVZKdEtwSFZiWEt1OHdUaW81S0FpSEMKUGFrTE43amtrTXRERmJjbDFyaEVVaUxURVRJVFNUYlBmMmE3NU4zc2krTFU5L1packdmQ3hQcTMvQ1NlQjVwdwpDRktzQzhaVi9rbTRoK21LU3pYcmVVbGgyT2tDa3RYWEtReE9Qa1ptTzZDUHB5WkVlV05TL1c2VlArZytZd1I0Ck5KTVE0eUhzODNYTXByMEl4UFk2RVNYNEdVQTQrdVk4VmR6d1Y5aHFKWlcrdy9yMzlSd1VEY2ZhSlA5eXRGZisKVlMwR0dyL053TUd3b0ZZQWJOeGRGUHZJL3prdFRERkJQREZ6cVFJREFRQUJBb0lCQUJ6QWttNzRKSUdGSjlVVgordEJnS0FTdWlHclkrN2RmaGI3eDhsQVlkYklrYjVNbU5vUmVOWHFUaXpnay9KTTdvRUg2Sk8zSCswUkNHV0g5CnQyVUVjZnl6MW9tRXNyclhKcTdiV3YvTU9jSnF0TCtrYzk5QWtSUTZuS1d5UnhUZGFlaFZDUjFZYjhMMFZscFgKLzhRVlhsbWl0M2dQNlpXdnViWDl6NFNHRUZ4ZzJYSmNydWF3algxU1ZGTnRPN2xrU2tqaWgzYjRTb2wvamNNZwo3UExvUUxaOGNvbm5XaUJtUEExRWYzc3N2enBtbkd3M09KRXNJdEhVMEFyQ3VPQ2RxYllHaFpHYWZjdmhmWU1PCnJhKzFIUTg4Tys5VS9ScGkrTVNvelRnUDRTOGtZL1pNbXhmUXAwT2k4d1FRZC9RbmRJMWRuYW5MQy80RlBoSzgKNkVTVFRVMENnWUVBNzdzR0FtZFd2RFhrY1ZVM3hKQk1yK3pyZWV3Nlhla0F0ZGVsU3pBMU81RENWYmQvcEgrNApmOXppd1o4K1dWRC8xWm5wV2NlZ0JBQ0lPVzNsUnUvelJva2NSeXFpRFVGcE5ET0xwWjFEYXJaNzVCekhwQ2QyCjQrNldUdkNDMEVnR0k5enkrYWpKOE5ESjhwcFRPUjR0NDhOR3FTaHorMUdONkRaNU15elpBUmNDZ1lFQXlXY2wKeC9kWkMrVmRCV01uOVpkU0pQcHF1RHQwbVFzS044NG9vSlBIRVhydWphWnd6M3pMcWp0NnJsU1M3SDk2THZWeApaYkxvY1UyQ1hLVVdRaU1vNmhYc1cwa2NmaEJPU0xFQmMrT3o2M0tLWW0zdzl6M3dkYlhGQ1ZPUCtzdlh5bE90CmNkRWNnK1Z2aGZQK0w2VTVZN0d6OW9IL1NnME93d1hPdXk2K3FUOENnWUVBcCsrbUdBejRUOFNaRVdPWE81V3kKZ3hNL0todjREMDFvZC9wbkNyTHN0NXVDNTdVeUw3UmhOUUV4d0YyanVjSHFWbUlKZkNGQjBVdm1JZ1VBTnA5bApGcVo2THNpSTJTeFhYSUEzZFg4amVSLzR6aVh6SE9XZ2ZhL25qOGtnZW5QYUNVbUExTEFQTnltc0xzMDVPNndPCmpaMkFaSU80Sy9oSHBzSnlTUTFEdjJVQ2dZRUFvMHNPUnVNMVAzL251OFo1VDVZdzgrcFZQS3A0RHQzMG11cDcKNWpYcTRURmEyVjVwZU5FbUVBL0ptQzdhTVFYcWVzaGwrSjdsOTNkd2lzMFBEdkNTNjdoNnVraTg0VGszUDVqRQpKTUlwem13LzV5NWNnUm1uTE1rRHlGd0lFTC9WWmlZU0tvWHhLTCtOZkg0blNWb2MvY2ZHc2NjVXhXVnc0bzZDCjN5RTNWT0VDZ1lBNXFHL0t2amxhV3liQndYY3pXVmZWeTJ4VTMwZVNQWVVqWTlUdUR0ZGJqbHFFeTlialZsZzUKWldRb0dKcTVFbjF1YXpUcnc3QlFja2VjaE1zRzBrZkRZSzhZbC9UMThGemxBWDh3TzJaZGlOQnJYVjhGMnRKaQpPYmJwZU45Y0l2ZkVpcjgwOHBVcC9ac05zQWpjMzBERU82THVPblA2VlpmQ1R2Wit4VVJodmc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURYVENDQWtXZ0F3SUJBZ0lJRDc0VzRkNnJ0MFl3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSFpYUmpaQzFqWVRBZUZ3MHlNakF4TVRVd05qTTJNVEJhRncweU16QXhNVFV3TmpNMk1UQmFNQ1V4SXpBaApCZ05WQkFNVEdtczRjeTF0WVhOMFpYSXdNUzVsZUdGdGNHeGxMbXh2WTJGc01JSUJJakFOQmdrcWhraUc5dzBCCkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXZKcHZXSmY3aUc4aTRSRnlYV1V5RURVNHZUeVB6a1dSMVRPY2lFaUMKQnQ3UGxtQlZpMjhGVXZmU0xaQWw3WUZibDU4NEkySVdGOTFNRThJSEZ6eHJWcnJGeXlWSnRLcEhWYlhLdTh3VAppbzVLQWlIQ1Bha0xON2pra010REZiY2wxcmhFVWlMVEVUSVRTVGJQZjJhNzVOM3NpK0xVOS9aWnJHZkN4UHEzCi9DU2VCNXB3Q0ZLc0M4WlYva200aCttS1N6WHJlVWxoMk9rQ2t0WFhLUXhPUGtabU82Q1BweVpFZVdOUy9XNlYKUCtnK1l3UjROSk1RNHlIczgzWE1wcjBJeFBZNkVTWDRHVUE0K3VZOFZkendWOWhxSlpXK3cvcjM5UndVRGNmYQpKUDl5dEZmK1ZTMEdHci9Od01Hd29GWUFiTnhkRlB2SS96a3RUREZCUERGenFRSURBUUFCbzRHak1JR2dNQTRHCkExVWREd0VCL3dRRUF3SUZvREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0h3WUQKVlIwakJCZ3dGb0FVQ2ZkNk5va2FXeFJOZlN2Umw4ajk5bU52aUhrd1RnWURWUjBSQkVjd1JZSWFhemh6TFcxaApjM1JsY2pBeExtVjRZVzF3YkdVdWJHOWpZV3lDQ1d4dlkyRnNhRzl6ZEljRXJCOERaWWNFZndBQUFZY1FBQUFBCkFBQUFBQUFBQUFBQUFBQUFBVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZFNRRnlNckFMQWJqcTRDc29QMUoKTThTTU1aRCthMGV6U29xM01EUmJCQWhqWlhEaU5uczMvMWo2aUhTcDUvaWJ6NGRjQnRsaW1HWHk0ek03MGtvcwo5R0JBZzVwaXJXYVFEcWtVSFkxYjdkWUlZSWN4YW9vQWtQeEVoSlZOYTBKYlFyb21qTnJiTVh4MlVsUjVtRGU2CnFMYUtsVDh4WC9zVStSelRxN1VBckxhOWIzWkZvN2V5UkhzZFBUODY3QnZCQnZkNEdMOElxWDdzbVd0VUhLVEkKQWZLMUQrQ3BEUGxNUWE3M1FOOGhvQVRPNTV2ckVjeEFIeDh2VDJ5VUYrYjZaVjJnQm43Z3hJSUNxVUF6OGhWagpTdzMxUVEvTHZ6ME1HZlQ3dFQ0NE52dyt3aHExZVJyNXJ2enRQcml5MUFvSDB1a2hiZ3VEbGExdElOa2FKc01XClRnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0VENDQWNtZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkbGRHTmsKTFdOaE1CNFhEVEl5TURFeE5UQTJNell4TUZvWERUTXlNREV4TXpBMk16WXhNRm93RWpFUU1BNEdBMVVFQXhNSApaWFJqWkMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxRSldKZmVNT3N3CmN4NDZmaUExaWREekU3c05DSk1JRnFXODRxWUdZSEF2ekxBZ2JSS0dXRHdnYi8vM01ZUUhuanhpb1lEN1BjVXUKYnZRRlN0cmtmYW1IWHpaMlAxd0dzRThrSkEvOVhaTStTNWttL0M0UHgxSjFoSHNyYUhjR21wUWYxM3ZCS2IrbgpvdDFHK2lERkZ2ZmdNWVd1U1FvL1M4WGFMZDZTcmZyeTFWOUQzek0zaUF4OGkrVzF3bE41b1hqc0RyRW5XcUFRCmxzVmVteWMxQkZRR0FjSTJLL0dzcXNlUmlUM1dCZ2RhV2JST1RMby83RWoycDdGNHdNcHRiT0kvam56UjM5WkUKSnZ6ZHpvUmJWQlh3NTFXY3cvNFZxOW5aaXJESEY3TWRZVEQ4RXIwRFovd2tYa1FsZ3VidTNBRjNDZEVsSS9wTAoxK1BhaFhvUFZpRUNBd0VBQWFOQ01FQXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdIUVlEVlIwT0JCWUVGQW4zZWphSkdsc1VUWDByMFpmSS9mWmpiNGg1TUEwR0NTcUdTSWIzRFFFQkN3VUEKQTRJQkFRQ0lCaGtiQzZ5OTFqNVAvbzB0NjJLeWlDMWdWelJCcHB0NWwvSXZabDRDRVJwVHVWSzJMTkxPZitGbwpMbGVUWlBZTmVWVFVkc2ZYdlVCekYvelpsSjJ6OVdBRUhTbk5Ba0haQVQ4N0tzSGZuRksyQi9NeFFuSEFkMWMzCkNHdzBxQ3RvUVBLdFI1U2UwUngrQUxQSE9iaUEwRG5uN3JESVhuTnBtdkx6VFliY1JTbnVhRTk1cFIwVVBPYzQKWTd5Ulg4MkttRWkxQVR6UEZBNXp2NFg4VnFMbVB2MFNnSjZiRVl1RnM3TUhScFErTkFRZlRBaktLQzg2d3J0QQpUbWlxeUVJU1RtQk03cVliOTl3OWRsWlBVcDIwNS9jZDBmY3ZudTNQQlJRRDhCVWdrOEhtRnhtNG1iZE9wdW9KCktzT05rbVBlNm5ZcDV2dGNiUndKWnlsSzJOdGkKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml
etcd_ca: "" # "/calico-secrets/etcd-ca"
etcd_cert: "" # "/calico-secrets/etcd-cert"
etcd_key: "" # "/calico-secrets/etcd-key"
[root@k8s-master01 ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml
etcd_ca: "/calico-secrets/etcd-ca" # "/calico-secrets/etcd-ca"
etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
etcd_key: "/calico-secrets/etcd-key" # "/calico-secrets/etcd-key"
[root@k8s-master01 ~]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
[root@k8s-master01 ~]# echo $POD_SUBNET
192.168.0.0/12
# Note: the following step changes CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your own Pod subnet, i.e. replaces 192.168.x.x/16 with the cluster's Pod CIDR, and uncomments the lines:
[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
[root@k8s-master01 ~]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml
- name: CALICO_IPV4POOL_CIDR
value: 192.168.0.0/12
[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml
image: docker.io/calico/cni:v3.15.3
image: docker.io/calico/pod2daemon-flexvol:v3.15.3
image: docker.io/calico/node:v3.15.3
image: docker.io/calico/kube-controllers:v3.15.3
Download the Calico images and push them to Harbor
[root@k8s-master01 ~]# vim download_calico_image.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-11
#FileName: download_calico_image.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
images=$(awk -F "/" '/image:/{print $NF}' calico-etcd.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc
images_download(){
${COLOR}"开始下载Calico镜像"${END}
for i in ${images};do
docker pull registry.cn-beijing.aliyuncs.com/raymond9/$i
docker tag registry.cn-beijing.aliyuncs.com/raymond9/$i ${HARBOR_DOMAIN}/google_containers/$i
docker rmi registry.cn-beijing.aliyuncs.com/raymond9/$i
docker push ${HARBOR_DOMAIN}/google_containers/$i
done
${COLOR}"Calico镜像下载完成"${END}
}
images_download
[root@k8s-master01 ~]# bash download_calico_image.sh
[root@k8s-master01 ~]# sed -ri 's@(.*image:) docker.io/calico(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' calico-etcd.yaml
[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml
image: harbor.raymonds.cc/google_containers/cni:v3.15.3
image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
image: harbor.raymonds.cc/google_containers/node:v3.15.3
image: harbor.raymonds.cc/google_containers/kube-controllers:v3.15.3
[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
#Check the Pod status
[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-6474888cfb-kgfx8 1/1 Running 0 31s
calico-node-2kpgb 1/1 Running 0 31s
calico-node-bpbbp 1/1 Running 0 31s
calico-node-cgxdk 1/1 Running 0 31s
calico-node-fr6vv 1/1 Running 0 31s
calico-node-h8q2d 1/1 Running 0 31s
calico-node-tc2nw 1/1 Running 0 31s
#Check the cluster status
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local Ready control-plane,master 133m v1.20.14
k8s-master02.example.local Ready control-plane,master 130m v1.20.14
k8s-master03.example.local Ready control-plane,master 128m v1.20.14
k8s-node01.example.local Ready <none> 126m v1.20.14
k8s-node02.example.local Ready <none> 124m v1.20.14
k8s-node03.example.local Ready <none> 123m v1.20.14
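With all nodes Ready, an optional way to confirm that the Calico Pod network and CoreDNS work end to end is to start a temporary test Pod and resolve a Service name through the cluster DNS. This is only a sketch; the busybox image below comes from docker.io, so substitute an image from your Harbor registry if external pulls are blocked:
kubectl run net-test --image=busybox:1.28 --restart=Never --command -- sleep 3600
kubectl exec net-test -- nslookup kubernetes.default
kubectl delete pod net-test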
12. Metrics Deployment
In recent Kubernetes versions, system resource metrics are collected by Metrics-server, which gathers the memory, disk, CPU and network usage of nodes and Pods.
[root@k8s-master01 ~]# cat components.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --metric-resolution=30s
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
image: k8s.gcr.io/metrics-server/metrics-server:v0.4.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
periodSeconds: 10
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100
Copy front-proxy-ca.crt from the master01 node to all worker nodes:
[root@k8s-master01 ~]# for i in k8s-node01 k8s-node02 k8s-node03;do scp /etc/kubernetes/pki/front-proxy-ca.crt $i:/etc/kubernetes/pki/front-proxy-ca.crt ; done
Modify the following:
[root@k8s-master01 ~]# vim components.yaml
...
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --metric-resolution=30s
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
#add the following lines
- --kubelet-insecure-tls
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt #note: with kubeadm the certificate file is front-proxy-ca.crt
- --requestheader-username-headers=X-Remote-User
- --requestheader-group-headers=X-Remote-Group
- --requestheader-extra-headers-prefix=X-Remote-Extra-
...
volumeMounts:
- mountPath: /tmp
name: tmp-dir
#add the following lines
- name: ca-ssl
mountPath: /etc/kubernetes/pki
...
volumes:
- emptyDir: {}
name: tmp-dir
#add the following lines
- name: ca-ssl
hostPath:
path: /etc/kubernetes/pki
Download the image and change the image address
[root@k8s-master01 ~]# grep "image:" components.yaml
image: k8s.gcr.io/metrics-server/metrics-server:v0.4.1
[root@k8s-master01 ~]# cat download_metrics_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-11
#FileName: download_metrics_images.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
images=$(awk -F "/" '/image:/{print $NF}' components.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc
images_download(){
${COLOR}"开始下载Metrics镜像"${END}
for i in ${images};do
docker pull registry.aliyuncs.com/google_containers/$i
docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
docker rmi registry.aliyuncs.com/google_containers/$i
docker push ${HARBOR_DOMAIN}/google_containers/$i
done
${COLOR}"Metrics镜像下载完成"${END}
}
images_download
[root@k8s-master01 ~]# bash download_metrics_images.sh
[root@k8s-master01 ~]# docker images|grep metrics
harbor.raymonds.cc/google_containers/metrics-server v0.4.1 9759a41ccdf0 14 months ago 60.5MB
[root@k8s-master01 ~]# sed -ri 's@(.*image:) k8s.gcr.io/metrics-server(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' components.yaml
[root@k8s-master01 ~]# grep "image:" components.yaml
image: harbor.raymonds.cc/google_containers/metrics-server:v0.4.1
Install metrics-server
[root@k8s-master01 ~]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
Check the status
[root@k8s-master01 ~]# kubectl get pod -n kube-system | grep metrics
metrics-server-9787b55bd-xhmbx 1/1 Running 0 50s
[root@k8s-master01 ~]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master01.example.local 175m 8% 1674Mi 43%
k8s-master02.example.local 187m 9% 1257Mi 32%
k8s-master03.example.local 164m 8% 1182Mi 30%
k8s-node01.example.local 98m 4% 634Mi 16%
k8s-node02.example.local 72m 3% 729Mi 19%
k8s-node03.example.local 104m 5% 651Mi 17%
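Pod metrics are also available once metrics-server is running, for example:
kubectl top pod -n kube-system
kubectl top pod --all-namespaces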
13. Dashboard Deployment
The Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and to execute commands inside containers.
13.1 Deploying Dashboard
[root@k8s-master01 ~]# cat recommended.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.4
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
[root@k8s-master01 ~]# vim recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort #add this line
ports:
- port: 443
targetPort: 8443
nodePort: 30005 #add this line
selector:
k8s-app: kubernetes-dashboard
...
[root@k8s-master01 ~]# grep "image:" recommended.yaml
image: kubernetesui/dashboard:v2.0.4
image: kubernetesui/metrics-scraper:v1.0.4
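If you prefer not to edit the Service definition in recommended.yaml by hand, the same NodePort change can also be applied with kubectl patch after the Dashboard Service has been created; a sketch using the nodePort 30005 chosen above:
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30005}]}}'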
Download the images and push them to Harbor
[root@k8s-master01 ~]# cat download_dashboard_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-11
#FileName: download_dashboard_images.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
images=$(awk -F "/" '/image:/{print $NF}' recommended.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc
images_download(){
${COLOR}"开始下载Dashboard镜像"${END}
for i in ${images};do
docker pull registry.aliyuncs.com/google_containers/$i
docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
docker rmi registry.aliyuncs.com/google_containers/$i
docker push ${HARBOR_DOMAIN}/google_containers/$i
done
${COLOR}"Dashboard镜像下载完成"${END}
}
images_download
[root@k8s-master01 ~]# bash download_dashboard_images.sh
[root@k8s-master01 ~]# sed -ri 's@(.*image:) kubernetesui(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' recommended.yaml
[root@k8s-master01 ~]# grep "image:" recommended.yaml
image: harbor.raymonds.cc/google_containers/dashboard:v2.0.4
image: harbor.raymonds.cc/google_containers/metrics-scraper:v1.0.4
[root@k8s-master01 ~]# kubectl create -f recommended.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
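Check that the Dashboard Pods are running in the kubernetes-dashboard namespace:
kubectl get pod -n kubernetes-dashboard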
Create an administrator user with admin.yaml
[root@k8s-master01 ~]# vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
[root@k8s-master01 ~]# kubectl apply -f admin.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
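To log in with a token, read the ServiceAccount token that Kubernetes 1.20 still stores automatically in a Secret for admin-user; a sketch:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')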
13.2 Logging in to the Dashboard
Add the following startup parameters to the Google Chrome shortcut to work around the certificate error that otherwise blocks access to the Dashboard; see Figure 1-1:
--test-type --ignore-certificate-errors
Figure 1-1 Google Chrome configuration
[root@k8s-master01 ~]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.106.189.113 <none> 443:30005/TCP 18s
Access the Dashboard at https://172.31.3.101:30005; see Figure 1-2.
Figure 1-2 Dashboard login options
13.2.2 Logging in to the Dashboard with a kubeconfig file
[root@k8s-master01 ~]# cp /etc/kubernetes/admin.conf kubeconfig
[root@k8s-master01 ~]# vim kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1ERXhNVEV6TURJMU1Gb1hEVE15TURFd09URXpNREkxTUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0JoCkQxV3h3Myt3bE9WNU02MEtPYjlEZmo1U09EREZBYjdMTGd3ZXgrK3d3eEVHcnFpaGUxVmVLZnlIMHJmTnEvakEKVHArekxyVXhRNHdzNEw3Z29Na2tJcDc3aXRqOHc1VWJXYUh0c3IwMkp1VVBQQzZiWktieG5hTmFXTldTNjRBegpORFhzeSszU3dxcTNyU3h4WkloTS9ubVZRTEZKL21OanU5MUNVWE03ak9jcXhaMUI2QitSbzhSdHFpRStZUlhFCm1JS1ZCeWhpUXhQWE53VEcwN0NKMnY5WnduNmlxK2VUMUdNbVFsZ0Z1M0pqQm9NUTFteWhYODM3QTNTdXVQNDkKYU1HKzd2YTh5TFFkMWltZEZjSVpDcmNHU2FMekR5SDFmMUQ3ZTM4Qm01MTd4S1ZZZkJQQkNBSjROb3VKQmVXSgpPN1lLK2RFb1liaURHWVBtdWxNQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZQVUlaLzVqSSs0WHQ3b1FROC9USU5RQ1gxbXNNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBRnVkbFNQUTNTM0VXdUZ0YnhiekVsc2IyR2F2NUQzd1VDWStBdlZWRWhzcmZvYzlqKwp5REEwdjZSamEvS3VRWUpjMG9vVkN5cTkveHVyenZyOU9DS3ZwejBDZDJHWkYyeFFFcDZ6QlMvM3A5VUh5YnU3Cm9Kb0E2S0h4OTd0KzVzaWQyamQ4U29qUGNwSGdzZloySmxJckc3ckJpMktuSTZFSlprdWxjMlVIN09kY2RJWmwKTXpkMWFlVG5xdHlsVkZYSDN6ZkNCTTJyZ045d0RqSHphNjUyMkFRZVQ2ODN0ZTZXRWIxeWwvVEdVUld0RFhmKwpQbXV6b3g5eGpwSFJoVDZlcVYwelVHVGZJUlI3WmRIb3p2TzNRVlhtYmNUdDQxVFFsaDRIMHBkQ2p6dmZLTDA0CnNHMmRIaFRBL0wzUlc0RXlDY2NPQ0o2bWNiT1hyZzNOUnhxWQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://172.31.3.188:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJUnQ3eHBrbVg3cjh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TWpBeE1URXhNekF5TlRCYUZ3MHlNekF4TVRFeE16QXlOVEphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQWxKeHBjcmlNTjh0aGg3ODEKT2FMckxmZmRtU3BRYUdSTmJQZjZUdTMwK1hCMEtUTXloR2EwRC83TWtaajZ5MjAzM0R5SEtpSUlhY2d3QXBnYQpjZE9TcHhwaitsd2pRSy9rN3M3QVJLcVExY2VueUtiaXp0RGMweCt2dGFXN0djcVlQSkpvU2dqWWxuZ0FWSmh4CnlWZDI3R3I2SEVWRFFMSVlra2tqWnFSTzI0U0ZoMDlUK2JCZlhSRGVZaHk1UW1qem5lc0VWbk1nUkdSVElnNTgKYjFBRHR1d1VTZ3BQNTFITTlKWHZtSTBqUytqSXBJNllYQUtodlpLbnhLRjh2d1lpZnhlZDV4ZjhNNVJHWnJEMQpGbFZ5NWQ5ZUNjV2dpQ0tNYVgvdzM4b2pTbE5OZGFwUzlzQXVObXNnbHlMT0MrWVh1TlBNRWZHbDdmeG5yUWl2ClV1dkFMUUlEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVU5UWhuL21NajdoZTN1aEJEejlNZzFBSmZXYXd3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFGU0tBZm9kYi9xSVJmdWIrcXlLdkhEeDFuNEtoNEVWQ2M5ZlRPZG1NdHBHU2tUbCtMbmptc0pNClpncWdTYUtMY0xYWS9JWTVsd3N3OXRsbzBwSElyMUNxYXBYa3M5WDZiSjJzc0pFdGN5ODFocXJSd2pqYzQzdEoKZUp0QkhsNWpvV2tkV0ZCMXpsRVhyWEYwdmU0ckRueVdWL04zSTV3bzVUYXpRMTRZRmZ0c2RVYlYwNXdXa0F6cgo5YWtLd25pWWRVZTRjdlpwNkFMb01uQVJXa29La1h0elI1SElJUFhaTGlHWnEwWGpHMWdpODBvR01ZZXlWb1ZCCnRUMmt1MElJNmhIbzh3VXNJdWlDT3EyQjRMWFpobW9DQU5kcnFDc0FUaXRjTll0bGdkM1RtQUx4ZmpMMkN1cWUKL1lieXZORWhndnh4dFlwN2lJWE9jZks1RDF3VSthdz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBbEp4cGNyaU1OOHRoaDc4MU9hTHJMZmZkbVNwUWFHUk5iUGY2VHUzMCtYQjBLVE15CmhHYTBELzdNa1pqNnkyMDMzRHlIS2lJSWFjZ3dBcGdhY2RPU3B4cGorbHdqUUsvazdzN0FSS3FRMWNlbnlLYmkKenREYzB4K3Z0YVc3R2NxWVBKSm9TZ2pZbG5nQVZKaHh5VmQyN0dyNkhFVkRRTElZa2tralpxUk8yNFNGaDA5VAorYkJmWFJEZVloeTVRbWp6bmVzRVZuTWdSR1JUSWc1OGIxQUR0dXdVU2dwUDUxSE05Slh2bUkwalMraklwSTZZClhBS2h2WktueEtGOHZ3WWlmeGVkNXhmOE01UkdackQxRmxWeTVkOWVDY1dnaUNLTWFYL3czOG9qU2xOTmRhcFMKOXNBdU5tc2dseUxPQytZWHVOUE1FZkdsN2Z4bnJRaXZVdXZBTFFJREFRQUJBb0lCQURrV0tHK1lNc3pRQktRWApzRU4yc083VWt6eGVBOHRHRkhQeWdpWEZ4Ti80OGJaTjQyNzI0TjV3RzNjbWs5aUhHUGt5Q3g0Rk9zUWYwVWw5CjBsSzlXazEwbHNrNmtaUXN2VDE3RUdLUVB0alFQRVNZenZGeFRCS1J6blp4dG9DKzBXSWJQNUtJK1dJN3NLek8KYm85UVdPK1NYSWQxbDlNSFZ1Y0N6MldEWW9OeU85bmFobWdzSWpIRnRqVEo5NWQ2cWRmWDNHZXBSRHA0em5EaQprTVFJMWRBdTg1TE9HMVZyd2lMRUxPa2JVOW5hNGdJS1VIVmY5RW90SndXVzI2K2kxS1JNYVJJVmlkbDVqTm1aCnZwM3JVOUM3L253c01pVktMMTF2MW8wdGptc2gzbkxnTVNEcEJtUE5pTGcxR3AxK0FPYVBXVFNDVEJZTDdOOG8KNGJxcEw0VUNnWUVBeEVpSWhKMzNMS0FTTHBGY3NtZ2RKUDBZOWRwZzZmcHlvOXA4NlpuejYxZXpUVkhyZ0p1SQptc09tTXQ0eHRINGVJbHhRYklWSGNJWC9iZis0aCtkUFJ0Q1ExRUdUTWRaaW9qSkJCd2JhRS9xd0YwMjZpRkRnCm9TZFpiemhFbk5BWmV5NjI1Skp2QXdRdldIanRPRHRNdDQ0dWZmYndGRDErZEtQc3JobkQzWThDZ1lFQXdkTHUKdGJTWDZYUFovTndHaXl6TnBrWHZST0hzNU1TaGFiVW9ibmxMbWxsL3gwUS9WQVkxdmhhakFiQ2t2WUk0T3VrUgowZWl2Wmx1bVNrazFJTlB5VXBNQ1dHR1lVTGJlWURidXhnZDlZd3Z1SWZQRmpwWU1RR0FRcE1SangzTCtMMzlQClplRW9lRmF3ZzdIVTgrYWVWWU9jTk5aaHYvbHhadUM5MzRkSW9JTUNnWUVBb3ZiRndiV1ZYb3VZRE9uTFdLUncKYmlGazg5cFgxR3VIZXRzUUVyTXJmUjNYVkQ3TGxIK05yMUQ1VUFxQ29pU0R5R3QwcW1VTnB6TFptKzVRdXlVbApBTnB4SklrOU9JZVNaSy9zcFhUZTR1K2orL1VoQmNTQWU4dzd5TWVpejc5SEtLcmtWbW50bVVlRU42Uk83L3pyCitRb25ONVlxUmVPNGRnY1Rub2p0d2FrQ2dZQTZYeVVHMGdtQ0JDTGROUUkvZmRHOVJvaUZqU2pEeUxmMzF0Z0QKVlVKQWpMMmZyRjBLR0FpdFk3SFp1M0lScEpyOG10NkVBZmg0OGhjRmZrQ2l6MUhHTG9IaFRoc0tDOWl5enpoZgpxVGZJMFhuNC9hbzhnOUhTdlZ1bDA0TmRPTE4yYUhmbjdjUTdZWmd0UVN3cC9BVXBLY2FzWHZmM1VjOG1OWDdaClI2dkdzd0tCZ1FDd2VBcmptSVV1ejV5cXhkREszbElQd0VsQTlGU3lMTjROU0owNTVrQ2tkTUZMS0xpcUZ0Y2UKSXBrWWhIbXNRc28yRTZwTStHQ0dIMU81YWVRSFNSZWh2SkRZeGdEMVhYaHA5UjRNdHpjTmw2U3cwcTQ4MVNZZQplNVp5Zk9CcWVDbzdOQmZ0dS9ua0tZTDFCTUNMS1hOM0JYNkVpQ0JPUjlSUDJHeEh6S3FBa2c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IldlTXE5S29BbW1YYVdJNWljRnBGamVEX1E0YV9xRVU5UWM1Ykh0dGQ0UkkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZidmhtIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjNmFlNzZhMi0zMjJjLTQ4M2EtOWRiMS1lYWYxMDI4NTkxNjUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Y_VSCPM9F3L00v5XBGcT6JwpnLoAfrKUw8ufOpOhH5AhuFcXctRwAWvZDzGPOH3mH0GaIiyi1G7GOlZRRnJJVy5C7I3VKnE4mZzMScBvKCEheU40Y6x28CvkZDTmuaaDgSWrm3cfjAvTEJIg45TrtaN25at79GB27_A1LJ3JQUHY59OpG6YUbnFWjW899bCUN99lmYTMGe9M5cjY2RCufuyEam296QEz6b23tyEHdMCcPDJJH6IEDf2I4XhA5e5GWqfdkX1qX5XZ21MRyXXXTSVYqeLvvdNvQS3MxLlNaB5my0WcruRihydkC_n1UamgzXBu-XWfM4QWwk3gzsQ9yg
After entering the token, click Sign in to access the Dashboard; see Figure 1-3:
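For reference, the token can also be extracted and appended to the kubeconfig file without editing it by hand; a sketch assuming the admin-user ServiceAccount created above (the four leading spaces keep the YAML indentation of the user entry):
TOKEN=$(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d)
echo "    token: ${TOKEN}" >> kubeconfig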
14. Some Required Configuration Changes
Change kube-proxy to ipvs mode. Because the ipvs configuration was commented out when the cluster was initialized, it has to be changed manually:
Run on the master01 node:
[root@k8s-master01 ~]# curl 127.0.0.1:10249/proxyMode
iptables
[root@k8s-master01 ~]# kubectl edit cm kube-proxy -n kube-system
mode: "ipvs"复制
Roll the kube-proxy Pods so that they pick up the new mode:
[root@k8s-master01 ~]# kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
daemonset.apps/kube-proxy patched
Verify the kube-proxy mode
[root@k8s-master01 ~]# curl 127.0.0.1:10249/proxyMode
ipvs
[root@k8s-master01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.17.0.1:30005 rr
-> 192.169.111.141:8443 Masq 1 0 0
TCP 172.31.3.101:30005 rr
-> 192.169.111.141:8443 Masq 1 0 0
TCP 192.162.55.64:30005 rr
-> 192.169.111.141:8443 Masq 1 0 0
TCP 10.96.0.1:443 rr
-> 172.31.3.101:6443 Masq 1 0 0
-> 172.31.3.102:6443 Masq 1 0 0
-> 172.31.3.103:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 192.170.21.193:53 Masq 1 0 0
-> 192.170.21.194:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 192.170.21.193:9153 Masq 1 0 0
-> 192.170.21.194:9153 Masq 1 0 0
TCP 10.99.167.144:443 rr
-> 192.167.195.132:4443 Masq 1 0 0
TCP 10.101.88.7:8000 rr
-> 192.169.111.140:8000 Masq 1 0 0
TCP 10.106.189.113:443 rr
-> 192.169.111.141:8443 Masq 1 0 0
TCP 127.0.0.1:30005 rr
-> 192.169.111.141:8443 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 192.170.21.193:53 Masq 1 0 0
-> 192.170.21.194:53 Masq 1 0 0
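kube-proxy runs on every node, so the mode can be checked on all of them as well; a sketch that assumes SSH key access from master01 to the other nodes:
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03;do ssh $i "curl -s 127.0.0.1:10249/proxyMode; echo"; done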
15. Notes
Note: in a kubeadm-installed cluster the certificates are valid for one year by default. On the master nodes, kube-apiserver, kube-scheduler, kube-controller-manager and etcd all run as containers, which can be seen with kubectl get po -n kube-system.
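The remaining validity of the kubeadm-managed certificates can be checked with kubeadm; on kubeadm 1.20 the subcommand is kubeadm certs (on older releases it may still live under kubeadm alpha certs):
kubeadm certs check-expiration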
Unlike a binary installation, the kubelet configuration files are /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml; after changing them, the kubelet process must be restarted.
[root@k8s-master01 ~]# ls /etc/sysconfig/kubelet
/etc/sysconfig/kubelet
[root@k8s-master01 ~]# ls /var/lib/kubelet/config.yaml
/var/lib/kubelet/config.yaml
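For example, to change a kubelet setting, edit /var/lib/kubelet/config.yaml and then restart the kubelet process; a sketch:
vim /var/lib/kubelet/config.yaml
systemctl daemon-reload && systemctl restart kubelet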
The configuration files of the other control-plane components are in the /etc/kubernetes/manifests directory, for example kube-apiserver.yaml. When such a YAML file is changed, the kubelet automatically reloads the configuration, i.e. it restarts the Pod. The same file must not be created again.
[root@k8s-master01 ~]# ls /etc/kubernetes/manifests
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
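For example, after editing kube-apiserver.yaml in this directory, the kubelet recreates the static Pod automatically; a sketch of how to watch it come back:
vim /etc/kubernetes/manifests/kube-apiserver.yaml
kubectl -n kube-system get pod -w | grep kube-apiserver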
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-55bfb655fc-bgdlr 1/1 Running 1 16h 192.167.195.130 k8s-node02.example.local <none> <none>
calico-node-2bggs 1/1 Running 1 16h 172.31.3.102 k8s-master02.example.local <none> <none>
calico-node-2rgfb 1/1 Running 1 16h 172.31.3.101 k8s-master01.example.local <none> <none>
calico-node-449ws 1/1 Running 1 16h 172.31.3.110 k8s-node03.example.local <none> <none>
calico-node-4p9t5 1/1 Running 1 16h 172.31.3.103 k8s-master03.example.local <none> <none>
calico-node-bljzq 1/1 Running 1 16h 172.31.3.108 k8s-node01.example.local <none> <none>
calico-node-cbv29 1/1 Running 1 16h 172.31.3.109 k8s-node02.example.local <none> <none>
coredns-5ffd5c4586-rvsm4 1/1 Running 1 18h 192.170.21.194 k8s-node03.example.local <none> <none>
coredns-5ffd5c4586-xzrwx 1/1 Running 1 18h 192.170.21.193 k8s-node03.example.local <none> <none>
etcd-k8s-master01.example.local 1/1 Running 1 18h 172.31.3.101 k8s-master01.example.local <none> <none>
etcd-k8s-master02.example.local 1/1 Running 1 18h 172.31.3.102 k8s-master02.example.local <none> <none>
etcd-k8s-master03.example.local 1/1 Running 1 18h 172.31.3.103 k8s-master03.example.local <none> <none>
kube-apiserver-k8s-master01.example.local 1/1 Running 1 18h 172.31.3.101 k8s-master01.example.local <none> <none>
kube-apiserver-k8s-master02.example.local 1/1 Running 1 18h 172.31.3.102 k8s-master02.example.local <none> <none>
kube-apiserver-k8s-master03.example.local 1/1 Running 1 18h 172.31.3.103 k8s-master03.example.local <none> <none>
kube-controller-manager-k8s-master01.example.local 1/1 Running 2 18h 172.31.3.101 k8s-master01.example.local <none> <none>
kube-controller-manager-k8s-master02.example.local 1/1 Running 1 18h 172.31.3.102 k8s-master02.example.local <none> <none>
kube-controller-manager-k8s-master03.example.local 1/1 Running 1 18h 172.31.3.103 k8s-master03.example.local <none> <none>
kube-proxy-6k8vv 1/1 Running 0 88s 172.31.3.108 k8s-node01.example.local <none> <none>
kube-proxy-flt2l 1/1 Running 0 75s 172.31.3.101 k8s-master01.example.local <none> <none>
kube-proxy-ftqqm 1/1 Running 0 42s 172.31.3.102 k8s-master02.example.local <none> <none>
kube-proxy-m9h72 1/1 Running 0 96s 172.31.3.110 k8s-node03.example.local <none> <none>
kube-proxy-mjssk 1/1 Running 0 54s 172.31.3.109 k8s-node02.example.local <none> <none>
kube-proxy-zz2sl 1/1 Running 0 61s 172.31.3.103 k8s-master03.example.local <none> <none>
kube-scheduler-k8s-master01.example.local 1/1 Running 2 18h 172.31.3.101 k8s-master01.example.local <none> <none>
kube-scheduler-k8s-master02.example.local 1/1 Running 1 18h 172.31.3.102 k8s-master02.example.local <none> <none>
kube-scheduler-k8s-master03.example.local 1/1 Running 1 18h 172.31.3.103 k8s-master03.example.local <none> <none>
metrics-server-5b7c76b46c-2tkz6 1/1 Running 0 92m 192.167.195.132 k8s-node02.example.local <none> <none>
[root@k8s-master01 ~]# kubectl get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-55bfb655fc-bgdlr 1/1 Running 1 16h 192.167.195.130 k8s-node02.example.local <none> <none>
kube-system calico-node-2bggs 1/1 Running 1 16h 172.31.3.102 k8s-master02.example.local <none> <none>
kube-system calico-node-2rgfb 1/1 Running 1 16h 172.31.3.101 k8s-master01.example.local <none> <none>
kube-system calico-node-449ws 1/1 Running 1 16h 172.31.3.110 k8s-node03.example.local <none> <none>
kube-system calico-node-4p9t5 1/1 Running 1 16h 172.31.3.103 k8s-master03.example.local <none> <none>
kube-system calico-node-bljzq 1/1 Running 1 16h 172.31.3.108 k8s-node01.example.local <none> <none>
kube-system calico-node-cbv29 1/1 Running 1 16h 172.31.3.109 k8s-node02.example.local <none> <none>
kube-system coredns-5ffd5c4586-rvsm4 1/1 Running 1 18h 192.170.21.194 k8s-node03.example.local <none> <none>
kube-system coredns-5ffd5c4586-xzrwx 1/1 Running 1 18h 192.170.21.193 k8s-node03.example.local <none> <none>
kube-system etcd-k8s-master01.example.local 1/1 Running 1 18h 172.31.3.101 k8s-master01.example.local <none> <none>
kube-system etcd-k8s-master02.example.local 1/1 Running 1 18h 172.31.3.102 k8s-master02.example.local <none> <none>
kube-system etcd-k8s-master03.example.local 1/1 Running 1 18h 172.31.3.103 k8s-master03.example.local <none> <none>
kube-system kube-apiserver-k8s-master01.example.local 1/1 Running 1 18h 172.31.3.101 k8s-master01.example.local <none> <none>
kube-system kube-apiserver-k8s-master02.example.local 1/1 Running 1 18h 172.31.3.102 k8s-master02.example.local <none> <none>
kube-system kube-apiserver-k8s-master03.example.local 1/1 Running 1 18h 172.31.3.103 k8s-master03.example.local <none> <none>
kube-system kube-controller-manager-k8s-master01.example.local 1/1 Running 2 18h 172.31.3.101 k8s-master01.example.local <none> <none>
kube-system kube-controller-manager-k8s-master02.example.local 1/1 Running 1 18h 172.31.3.102 k8s-master02.example.local <none> <none>
kube-system kube-controller-manager-k8s-master03.example.local 1/1 Running 1 18h 172.31.3.103 k8s-master03.example.local <none> <none>
kube-system kube-proxy-6k8vv 1/1 Running 0 2m12s 172.31.3.108 k8s-node01.example.local <none> <none>
kube-system kube-proxy-flt2l 1/1 Running 0 119s 172.31.3.101 k8s-master01.example.local <none> <none>
kube-system kube-proxy-ftqqm 1/1 Running 0 86s 172.31.3.102 k8s-master02.example.local <none> <none>
kube-system kube-proxy-m9h72 1/1 Running 0 2m20s 172.31.3.110 k8s-node03.example.local <none> <none>
kube-system kube-proxy-mjssk 1/1 Running 0 98s 172.31.3.109 k8s-node02.example.local <none> <none>
kube-system kube-proxy-zz2sl 1/1 Running 0 105s 172.31.3.103 k8s-master03.example.local <none> <none>
kube-system kube-scheduler-k8s-master01.example.local 1/1 Running 2 18h 172.31.3.101 k8s-master01.example.local <none> <none>
kube-system kube-scheduler-k8s-master02.example.local 1/1 Running 1 18h 172.31.3.102 k8s-master02.example.local <none> <none>
kube-system kube-scheduler-k8s-master03.example.local 1/1 Running 1 18h 172.31.3.103 k8s-master03.example.local <none> <none>
kube-system metrics-server-5b7c76b46c-2tkz6 1/1 Running 0 93m 192.167.195.132 k8s-node02.example.local <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-575d79bd97-l25f4 1/1 Running 0 21m 192.169.111.140 k8s-node01.example.local <none> <none>
kubernetes-dashboard kubernetes-dashboard-68965ddf9f-c5f7g 1/1 Running 0 21m 192.169.111.141 k8s-node01.example.local <none> <none>
After a kubeadm install, the master nodes do not allow Pods to be scheduled on them by default; this can be changed as follows:
Check the Taints:
[root@k8s-master01 ~]# kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
Taints: node-role.kubernetes.io/master:NoSchedule
Taints: node-role.kubernetes.io/master:NoSchedule
Remove the Taint:
[root@k8s-master01 ~]# kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted
[root@k8s-master01 ~]# kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
Taints: <none>
Taints: <none>
Taints: <none>
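To restore the default behaviour later, the taint can be added back; a sketch:
kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master=:NoSchedule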
The kube-proxy configuration is stored in a configmap in the kube-system namespace and can be changed with
kubectl edit cm kube-proxy -n kube-system
After the change, kube-proxy can be restarted with a patch:
kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system