Installing Kubernetes v1.20 from Binary Packages -- Cluster Deployment (Part 1)

1. Installation Notes

This article demonstrates a binary installation of a highly available Kubernetes (1.17+) cluster on CentOS 7. The binary installation procedure differs very little between these versions; you only need to match up the corresponding component versions.

For production, it is recommended to use a release whose patch version is greater than 5; for example, only 1.19.5 or later should be used in production.

2. Basic Environment Configuration

Table 1-1  High-availability Kubernetes cluster plan

Role     Hostname                     Specs  IP address                        Installed software
master1  k8s-master01.example.local   2C4G   172.31.3.101                      chrony-client, docker, kube-controller-manager, kube-scheduler, kube-apiserver, kubelet, kube-proxy, kubectl
master2  k8s-master02.example.local   2C4G   172.31.3.102                      chrony-client, docker, kube-controller-manager, kube-scheduler, kube-apiserver, kubelet, kube-proxy, kubectl
master3  k8s-master03.example.local   2C4G   172.31.3.103                      chrony-client, docker, kube-controller-manager, kube-scheduler, kube-apiserver, kubelet, kube-proxy, kubectl
ha1      k8s-ha01.example.local       2C2G   172.31.3.104, 172.31.3.188 (VIP)  chrony-server, haproxy, keepalived
ha2      k8s-ha02.example.local       2C2G   172.31.3.105                      chrony-server, haproxy, keepalived
harbor1  k8s-harbor01.example.local   2C2G   172.31.3.106                      chrony-client, docker, docker-compose, harbor
harbor2  k8s-harbor02.example.local   2C2G   172.31.3.107                      chrony-client, docker, docker-compose, harbor
etcd1    k8s-etcd01.example.local     2C2G   172.31.3.108                      chrony-client, docker, etcd
etcd2    k8s-etcd02.example.local     2C2G   172.31.3.109                      chrony-client, docker, etcd
etcd3    k8s-etcd03.example.local     2C2G   172.31.3.110                      chrony-client, docker, etcd
node1    k8s-node01.example.local     2C4G   172.31.3.111                      chrony-client, docker, kubelet, kube-proxy
node2    k8s-node02.example.local     2C4G   172.31.3.112                      chrony-client, docker, kubelet, kube-proxy
node3    k8s-node03.example.local     2C4G   172.31.3.113                      chrony-client, docker, kubelet, kube-proxy

Software versions and the Pod/Service CIDR plan:

Configuration           Notes
Supported OS versions   CentOS 7.9 / CentOS Stream 8 / Rocky 8 / Ubuntu 18.04/20.04
Docker version          19.03.15
Kubernetes version      1.20.14
Pod CIDR                192.168.0.0/12
Service CIDR            10.96.0.0/12

Note:

Three network ranges are involved in the cluster installation:

Host network: the network of the servers on which k8s is installed.

Pod network: the k8s Pod network, i.e. the IP addresses of the containers.

Service network: the k8s Service network, used for communication with containers inside the cluster.

The Service network will be set to 10.96.0.0/12.

The Pod network will be set to 192.168.0.0/12.

The host network might be something like 172.31.0.0/21.

These three ranges must not overlap in any way.

For example, if the host IPs are 10.105.0.x,

then the Service network cannot be 10.96.0.0/12, because the usable IPs of 10.96.0.0/12 are:

10.96.0.1 ~ 10.111.255.255

so 10.105.x.x falls inside that range, the networks overlap, and the Service network has to be changed.

It can be changed to 192.168.0.0/16 (note that if the Service network starts with 192.168, the prefix length should be /16 rather than /12, because a /12 prefix starts at 192.160.0.1, not 192.168.0.1).

By the same reasoning, none of the other ranges may overlap either. You can work the ranges out with http://tools.jb51.net/aideddesign/ip_net_calc/ .

The general recommendation is simply to keep the first octets different: for example, if your hosts start with 192, your Service network can be 10.96.0.0/12;

if your hosts start with 10, change the Service network to 192.168.0.0/16;

if your hosts start with 172, change the Pod network to 192.168.0.0/12.

In short, combine the 10.x, 172.x and 192.x ranges so that the first octets all differ; this rules out any possibility of overlap and saves you the calculation.
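
As a quick sanity check, a subnet overlap can also be verified on the command line. This is only a sketch and assumes python3 is available on the machine:

python3 -c 'import ipaddress; print(ipaddress.ip_network("10.96.0.0/12").overlaps(ipaddress.ip_network("10.105.0.0/16")))'
#Prints True: 10.105.0.0/16 falls inside 10.96.0.0/12, so in that case the Service network would have to be changed.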

Host information: the servers' IP addresses must not be assigned by DHCP; configure static IPs.

The VIP (virtual IP) must not clash with any IP already used on the company network; ping it first, and only use it if there is no reply. The VIP must be in the same LAN as the hosts! On a public cloud the VIP is the cloud load balancer's address, e.g. an SLB address on Alibaba Cloud or an ELB address on Tencent Cloud; note that the cloud load balancer should be an internal (private network) load balancer.
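
For example, a quick way to confirm that the planned VIP is not already in use (a sketch; run it from any machine in the same LAN before assigning the VIP):

ping -c 2 172.31.3.188
#If every probe fails (100% packet loss), the address is free and can be used as the VIP.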

Set the hostname on each node:

hostnamectl set-hostname k8s-master01.example.local
hostnamectl set-hostname k8s-master02.example.local
hostnamectl set-hostname k8s-master03.example.local
hostnamectl set-hostname k8s-ha01.example.local
hostnamectl set-hostname k8s-ha02.example.local
hostnamectl set-hostname k8s-harbor01.example.local
hostnamectl set-hostname k8s-harbor02.example.local
hostnamectl set-hostname k8s-etcd01.example.local
hostnamectl set-hostname k8s-etcd02.example.local
hostnamectl set-hostname k8s-etcd03.example.local
hostnamectl set-hostname k8s-node01.example.local
hostnamectl set-hostname k8s-node02.example.local
hostnamectl set-hostname k8s-node03.example.local

Configure each node's IP address in the following format:

#CentOS
[root@k8s-master01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
NAME=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.31.3.101
PREFIX=21
GATEWAY=172.31.0.2
DNS1=223.5.5.5
DNS2=180.76.76.76

#Ubuntu
root@k8s-master01:~# cat /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [172.31.3.101/21]
      gateway4: 172.31.0.2
      nameservers:
        addresses: [223.5.5.5, 180.76.76.76]

Configure hosts on all nodes; edit /etc/hosts as follows:

cat >> /etc/hosts <<EOF
172.31.3.101 k8s-master01.example.local k8s-master01
172.31.3.102 k8s-master02.example.local k8s-master02
172.31.3.103 k8s-master03.example.local k8s-master03
172.31.3.104 k8s-ha01.example.local k8s-ha01
172.31.3.105 k8s-ha02.example.local k8s-ha02
172.31.3.106 k8s-harbor01.example.local k8s-harbor01
172.31.3.107 k8s-harbor02.example.local k8s-harbor02
172.31.3.108 k8s-etcd01.example.local k8s-etcd01
172.31.3.109 k8s-etcd02.example.local k8s-etcd02
172.31.3.110 k8s-etcd03.example.local k8s-etcd03
172.31.3.111 k8s-node01.example.local k8s-node01
172.31.3.112 k8s-node02.example.local k8s-node02
172.31.3.113 k8s-node03.example.local k8s-node03
172.31.3.188 k8s-lb
172.31.3.188 harbor.raymonds.cc
EOF
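
Optionally, once /etc/hosts has been written on master01 it can be pushed to the remaining nodes instead of editing each one by hand. This is only a sketch using the hostnames from Table 1-1; until the SSH key distribution later in this section is done, you will be prompted for each node's password:

for i in k8s-master02 k8s-master03 k8s-ha01 k8s-ha02 k8s-harbor01 k8s-harbor02 k8s-etcd01 k8s-etcd02 k8s-etcd03 k8s-node01 k8s-node02 k8s-node03;do scp /etc/hosts $i:/etc/hosts; done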

Configure the yum repositories on all CentOS 7 nodes as follows:

[root@k8s-master01 ~]# rm -f /etc/yum.repos.d/*.repo
[root@k8s-master01 ~]# cat > /etc/yum.repos.d/base.repo <<'EOF'
[base]
name=base
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever

[extras]
name=extras
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever

[updates]
name=updates
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever

[centosplus]
name=centosplus
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever

[epel]
name=epel
baseurl=https://mirrors.cloud.tencent.com/epel/$releasever/$basearch/
gpgcheck=1
gpgkey=https://mirrors.cloud.tencent.com/epel/RPM-GPG-KEY-EPEL-$releasever
EOF
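
After writing the repo file it is worth rebuilding the metadata cache to confirm that the mirrors are reachable (a quick optional check, not part of the original procedure):

yum clean all && yum makecache
yum repolist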

Configure the yum repositories on all Rocky 8 nodes as follows:

[root@k8s-master01 ~]# cat /etc/yum.repos.d/base.repo
[BaseOS]
name=BaseOS
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/BaseOS/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

[AppStream]
name=AppStream
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/AppStream/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

[extras]
name=extras
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/extras/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
enabled=1

[plus]
name=plus
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/plus/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

[PowerTools]
name=PowerTools
baseurl=https://mirrors.sjtug.sjtu.edu.cn/rocky/$releasever/PowerTools/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

Configure the yum repositories on all CentOS Stream 8 nodes as follows:

[root@centos8-stream ~]# cat /etc/yum.repos.d/base.repo
[BaseOS]
name=BaseOS
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/BaseOS/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

[AppStream]
name=AppStream
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/AppStream/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

[extras]
name=extras
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/extras/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

[centosplus]
name=centosplus
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/centosplus/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

[PowerTools]
name=PowerTools
baseurl=https://mirrors.cloud.tencent.com/centos/$releasever-stream/PowerTools/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

Configure the apt sources on all Ubuntu nodes as follows:

root@k8s-master01:~# cat > /etc/apt/sources.list <<EOF
deb http://mirrors.cloud.tencent.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-security main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-updates main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-proposed main restricted universe multiverse

deb http://mirrors.cloud.tencent.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.cloud.tencent.com/ubuntu/ bionic-backports main restricted universe multiverse
EOF

Install the essential tools:

#Install on CentOS
yum -y install vim tree lrzsz wget jq psmisc net-tools telnet yum-utils device-mapper-persistent-data lvm2 git

#Install on Ubuntu
apt -y install tree lrzsz jq

Disable the firewall, SELinux and swap on all nodes. Configure the servers as follows:

#CentOS
systemctl disable --now firewalld

#CentOS 7
systemctl disable --now NetworkManager

#Ubuntu
systemctl disable --now ufw

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable the swap partition

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a

#On Ubuntu 20.04, run the following commands
sed -ri 's/.*swap.*/#&/' /etc/fstab
SD_NAME=`lsblk|awk -F"[ └─]" '/SWAP/{printf $3}'`
systemctl mask dev-${SD_NAME}.swap
swapoff -a
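
To confirm that swap is really off (optional check; the Swap line should show 0 and swapon should print nothing):

free -h
swapon --show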

Install the chrony server on ha01 and ha02:

[root@k8s-ha01 ~]# cat install_chrony_server.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:       Raymond
#QQ:           88563128
#Date:         2021-11-22
#FileName:     install_chrony_server.sh
#URL:           raymond.blog.csdn.net
#Description:   install_chrony_server for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

os(){
   OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

install_chrony(){
   if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
      yum -y install chrony &> /dev/null
       sed -i -e '/^pool.*/d' -e '/^server.*/d' -e '/^# Please consider .*/a\server ntp.aliyun.com iburst\nserver time1.cloud.tencent.com iburst\nserver ntp.tuna.tsinghua.edu.cn iburst' -e 's@^#allow.*@allow 0.0.0.0/0@' -e 's@^#local.*@local stratum 10@' /etc/chrony.conf
      systemctl enable --now chronyd &> /dev/null
      systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
       ${COLOR}"chrony安装完成"${END}
   else
      apt -y install chrony &> /dev/null
       sed -i -e '/^pool.*/d' -e '/^# See http:.*/a\server ntp.aliyun.com iburst\nserver time1.cloud.tencent.com iburst\nserver ntp.tuna.tsinghua.edu.cn iburst' /etc/chrony/chrony.conf
       echo "allow 0.0.0.0/0" >> /etc/chrony/chrony.conf
       echo "local stratum 10" >> /etc/chrony/chrony.conf
      systemctl enable --now chronyd &> /dev/null
      systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
       ${COLOR}"chrony安装完成"${END}
   fi
}

main(){
  os
  install_chrony
}

main

[root@k8s-ha01 ~]# bash install_chrony_server.sh
chrony安装完成

[root@k8s-ha02 ~]# bash install_chrony_server.sh
chrony安装完成

[root@k8s-ha01 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address         Stratum Poll Reach LastRx Last sample              
===============================================================================
^* 203.107.6.88                  2   6    17    39  -1507us[-8009us] +/-   37ms
^- 139.199.215.251               2   6    17    39    +10ms[  +10ms] +/-   48ms
^? 101.6.6.172                   0   7     0     -     +0ns[   +0ns] +/-   0ns

[root@k8s-ha02 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address         Stratum Poll Reach LastRx Last sample              
===============================================================================
^* 203.107.6.88                  2   6    17    40    +90us[-1017ms] +/-   32ms
^+ 139.199.215.251               2   6    33    37    +13ms[  +13ms] +/-   25ms
^? 101.6.6.172                   0   7     0     -     +0ns[   +0ns] +/-   0ns

Install the chrony client on the master, node and harbor machines:

[root@k8s-master01 ~]# cat install_chrony_client.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:       Raymond
#QQ:           88563128
#Date:         2021-11-22
#FileName:     install_chrony_client.sh
#URL:           raymond.blog.csdn.net
#Description:   install_chrony_client for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
SERVER1=172.31.3.104
SERVER2=172.31.3.105

os(){
   OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

install_chrony(){
   if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
      yum -y install chrony &> /dev/null
       sed -i -e '/^pool.*/d' -e '/^server.*/d' -e '/^# Please consider .*/a\server '${SERVER1}' iburst\nserver '${SERVER2}' iburst' /etc/chrony.conf
      systemctl enable --now chronyd &> /dev/null
      systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
       ${COLOR}"chrony安装完成"${END}
   else
      apt -y install chrony &> /dev/null
       sed -i -e '/^pool.*/d' -e '/^# See http:.*/a\server '${SERVER1}' iburst\nserver '${SERVER2}' iburst' /etc/chrony/chrony.conf
      systemctl enable --now chronyd &> /dev/null
      systemctl is-active chronyd &> /dev/null || { ${COLOR}"chrony 启动失败,退出!"${END} ; exit; }
      systemctl restart chronyd
       ${COLOR}"chrony安装完成"${END}
   fi
}

main(){
  os
  install_chrony
}

main

[root@k8s-master01 ~]# for i in k8s-master02 k8s-master03 k8s-harbor01 k8s-harbor02 k8s-node01 k8s-node02 k8s-node03;do scp install_chrony_client.sh $i:/root/ ; done

[root@k8s-master01 ~]# bash install_chrony_client.sh
chrony安装完成

[root@k8s-master01 ~]# chronyc sources -nv
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample              
===============================================================================
^+ k8s-ha01                      3   6    17     8    +84us[  +74us] +/-   55ms
^* k8s-ha02                      3   6    17     8    -82us[  -91us] +/-   45ms

Set the time zone on all nodes. Configure time synchronization as follows:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone

Configure limits on all nodes:

ulimit -SHn 65535

cat >>/etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
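
The new limits only apply to new login sessions; after logging in again they can be spot-checked like this (optional):

ulimit -n    #open files (nofile)
ulimit -u    #max user processes (nproc)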

Set up passwordless SSH from the Master01 node to the other nodes. All configuration files and certificates generated during the installation are created on Master01, and the cluster is managed from Master01 as well; on Alibaba Cloud or AWS a separate kubectl server is needed. Configure the keys as follows:

[root@k8s-master01 ~]# cat ssh_key.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:       Raymond
#QQ:           88563128
#Date:         2021-12-20
#FileName:     ssh_key.sh
#URL:           raymond.blog.csdn.net
#Description:   ssh_key for CentOS 7/8 & Ubuntu 18.04/24.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
IP=`ip addr show ${NET_NAME}| awk -F" +|/" '/global/{print $3}'`
export SSHPASS=123456
HOSTS="
172.31.3.102
172.31.3.103
172.31.3.104
172.31.3.105
172.31.3.106
172.31.3.107
172.31.3.108
172.31.3.109
172.31.3.110
172.31.3.111
172.31.3.112
172.31.3.113"

os(){
   OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

ssh_key_push(){
   rm -f ~/.ssh/id_rsa*
  ssh-keygen -f /root/.ssh/id_rsa -P '' &> /dev/null
   if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
      rpm -q sshpass &> /dev/null || { ${COLOR}"安装sshpass软件包"${END};yum -y install sshpass &> /dev/null; }
   else
      dpkg -S sshpass &> /dev/null || { ${COLOR}"安装sshpass软件包"${END};apt -y install sshpass &> /dev/null; }
   fi
  sshpass -e ssh-copy-id -o StrictHostKeyChecking=no ${IP} &> /dev/null
  [ $? -eq 0 ] && echo ${IP} is finished || echo ${IP} is false

   for i in ${HOSTS};do
      sshpass -e scp -o StrictHostKeyChecking=no -r /root/.ssh root@${i}: &> /dev/null
      [ $? -eq 0 ] && echo ${i} is finished || echo ${i} is false
   done

   for i in ${HOSTS};do
      scp /root/.ssh/known_hosts ${i}:.ssh/ &> /dev/null
      [ $? -eq 0 ] && echo ${i} is finished || echo ${i} is false
   done
}

main(){
  os
  ssh_key_push
}

main

[root@k8s-master01 ~]# bash ssh_key.sh
安装sshpass软件包
172.31.3.101 is finished
172.31.3.102 is finished
172.31.3.103 is finished
172.31.3.104 is finished
172.31.3.105 is finished
172.31.3.106 is finished
172.31.3.107 is finished
172.31.3.108 is finished
172.31.3.109 is finished
172.31.3.110 is finished
172.31.3.111 is finished
172.31.3.112 is finished
172.31.3.113 is finished
172.31.3.102 is finished
172.31.3.103 is finished
172.31.3.104 is finished
172.31.3.105 is finished
172.31.3.106 is finished
172.31.3.107 is finished
172.31.3.108 is finished
172.31.3.109 is finished
172.31.3.110 is finished
172.31.3.111 is finished
172.31.3.112 is finished
172.31.3.113 is finished

Update the system on all nodes and reboot. The kernel is not upgraded here; it is upgraded separately in the next section:

yum update -y --exclude=kernel* && reboot #CentOS 7 should be updated; on CentOS 8 update the system as needed

[root@k8s-master01 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

3. Kernel Configuration

CentOS 7 needs its kernel upgraded to 4.18+; here it is upgraded to 4.19.

Download the kernel packages on the master01 node:

[root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm

[root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

Copy them from master01 to the other nodes:

[root@k8s-master01 ~]# for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done

Install the kernel on all nodes

cd /root && yum localinstall -y kernel-ml*

Change the default kernel boot entry on all nodes

grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg

grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

Check that the default kernel is 4.19

grubby --default-kernel

[root@k8s-master01 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64

Reboot all nodes, then check that the running kernel is 4.19

reboot

uname -a

[root@k8s-master01 ~]# uname -a
Linux k8s-master01 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

Install ipvsadm on the master and node machines:

#CentOS
yum -y install ipvsadm ipset sysstat conntrack libseccomp

#Ubuntu
apt -y install ipvsadm ipset sysstat conntrack libseccomp-dev

Configure the ipvs modules on all nodes. In kernel 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.18 use nf_conntrack_ipv4:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack #on kernels below 4.18, change this line to nf_conntrack_ipv4

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack #on kernels below 4.18, change this line to nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

Then run systemctl enable --now systemd-modules-load.service to have the modules loaded at boot

Enable the kernel parameters required by a k8s cluster; configure them on all nodes:

cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

sysctl --system

Commonly tuned kernel parameters for Kubernetes, explained:

net.ipv4.ip_forward = 1 #0 means IP forwarding is disabled; 1 means IP forwarding is enabled.
net.bridge.bridge-nf-call-iptables = 1 #packets forwarded by a layer-2 bridge are also filtered by the iptables FORWARD rules; without this, L3 iptables rules sometimes fail to filter L2 frames.
net.bridge.bridge-nf-call-ip6tables = 1 #whether IPv6 packets crossing the bridge are filtered by the ip6tables chains.
fs.may_detach_mounts = 1 #should be set to 1 when containers run on the system.

vm.overcommit_memory=1
#0: the kernel checks whether enough free memory is available; if so the allocation succeeds, otherwise it fails and the error is returned to the process.
#1: the kernel allows allocating all physical memory regardless of the current memory state.
#2: the kernel allows allocations exceeding the sum of physical memory and swap.

vm.panic_on_oom=0
#OOM stands for "out of memory": memory is exhausted and cannot be allocated. This parameter controls how the kernel reacts to OOM.
#0: on OOM, start the OOM killer.
#1: on OOM, the kernel may panic (reboot) or may start the OOM killer.
#2: on OOM, force a kernel panic (the system reboots).

fs.inotify.max_user_watches=89100 #maximum number of inotify watches one user may register at the same time (watches usually target directories, so this bounds how many directories a single user can monitor).

fs.file-max=52706963 #maximum number of file handles across all processes.
fs.nr_open=52706963 #maximum number of file handles a single process may allocate.
net.netfilter.nf_conntrack_max=2310720 #size of the connection-tracking table; it is recommended to derive it from RAM as CONNTRACK_MAX = RAMSIZE (in bytes) / 16384 / (ARCH / 32), keeping nf_conntrack_max = 4 * nf_conntrack_buckets; the default is 262144.

net.ipv4.tcp_keepalive_time = 600  #idle time before TCP keepalive probes start, i.e. the normal keepalive period; default 7200s (2 hours).
net.ipv4.tcp_keepalive_probes = 3 #number of unanswered keepalive probes sent after tcp_keepalive_time before the connection is dropped; default 9.
net.ipv4.tcp_keepalive_intvl =15 #interval between keepalive probes; default 75s.
net.ipv4.tcp_max_tw_buckets = 36000 #middle proxies such as Nginx should watch this value: it protects the system, since the service misbehaves once all ports are consumed; a larger tcp_max_tw_buckets lowers the chance of that happening and buys time to react.
net.ipv4.tcp_tw_reuse = 1 #only affects the client side; when enabled, the client can reuse TIME-WAIT sockets after 1s.
net.ipv4.tcp_max_orphans = 327680 #number of sockets not attached to any process that the system can handle; worth watching when large numbers of connections must be established quickly.

net.ipv4.tcp_orphan_retries = 3
#useful when many connections are stuck in FIN-WAIT-1.
#After a FIN is sent it may be lost, so how many times is it retransmitted? FIN retransmission also backs off (exponentially since the 2.6.35 kernels: 2s, 4s, ...), and the total number of retries is bounded by tcp_orphan_retries.

net.ipv4.tcp_syncookies = 1 #switch for SYN cookies, which mitigate some SYN flood attacks; tcp_synack_retries and tcp_syn_retries define the SYN retry counts.
net.ipv4.tcp_max_syn_backlog = 16384 #maximum queue length for incoming SYN requests, default 1024; increasing it clearly helps heavily loaded servers.
net.ipv4.ip_conntrack_max = 65536 #limit on the maximum number of tracked TCP connections; the default is 65536.
net.ipv4.tcp_max_syn_backlog = 16384 #maximum number of clients whose SYN packets may be queued, i.e. the half-open connection limit.
net.ipv4.tcp_timestamps = 0 #when NATing with iptables, internal machines could ping a domain but curl to it failed; the cause was net.ipv4.tcp_timestamps set to 1, i.e. TCP timestamps enabled.
net.core.somaxconn = 16384 #Linux kernel parameter for the backlog limit of listening sockets. The backlog is the socket's listen queue: a request that has not yet been handled or established waits there, the server drains the queue as it processes requests, and once the queue fills up because the server is slow, new requests are rejected.
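
A few of the values can be spot-checked after running sysctl --system (optional):

sysctl net.ipv4.ip_forward net.core.somaxconn net.netfilter.nf_conntrack_max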

After configuring the kernel parameters on all nodes, reboot the servers and make sure the modules are still loaded after the reboot

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

[root@k8s-master01 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0
nf_nat                 32768  1 ip_vs_ftp
ip_vs_sed              16384  0
ip_vs_nq               16384  0
ip_vs_fo               16384  0
ip_vs_sh               16384  0
ip_vs_dh               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs_wlc              16384  0
ip_vs_lc               16384  0
ip_vs                 151552  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          143360  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs

4. Installing the High-Availability Components

(Note: if the cluster is not highly available, haproxy and keepalived do not need to be installed.)

On a public cloud, use the cloud's own load balancer, such as Alibaba Cloud SLB or Tencent Cloud ELB, instead of haproxy and keepalived, because most public clouds do not support keepalived. Also, if you use Alibaba Cloud, the kubectl client must not sit on a master node; Tencent Cloud is recommended, because Alibaba Cloud's SLB has a loopback problem (servers behind the SLB cannot reach the SLB address themselves), which Tencent Cloud has fixed.

On Alibaba Cloud the traffic path needs to be: SLB -> haproxy -> apiserver

4.1 Install HAProxy

Install HAProxy on ha01 and ha02:

[root@k8s-ha01 ~]# cat install_haproxy.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:       Raymond
#QQ:           88563128
#Date:         2021-12-29
#FileName:     install_haproxy.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`

#lua下载地址:http://www.lua.org/ftp/lua-5.4.3.tar.gz
LUA_FILE=lua-5.4.3.tar.gz

#haproxy下载地址:https://www.haproxy.org/download/2.4/src/haproxy-2.4.10.tar.gz
HAPROXY_FILE=haproxy-2.4.10.tar.gz
HAPROXY_INSTALL_DIR=/apps/haproxy

STATS_AUTH_USER=admin
STATS_AUTH_PASSWORD=123456

VIP=172.31.3.188
MASTER01=172.31.3.101
MASTER02=172.31.3.102
MASTER03=172.31.3.103
HARBOR01=172.31.3.106
HARBOR02=172.31.3.107

os(){
   OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' etc/os-release`
}

check_file (){
   cd ${SRC_DIR}
   ${COLOR}'检查Haproxy相关源码包'${END}
   if [ ! -e ${LUA_FILE} ];then
       ${COLOR}"缺少${LUA_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
       exit
   elif [ ! -e ${HAPROXY_FILE} ];then
       ${COLOR}"缺少${HAPROXY_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
       exit
   else
       ${COLOR}"相关文件已准备好"${END}
   fi
}

install_haproxy(){
  [ -d ${HAPROXY_INSTALL_DIR} ] && { ${COLOR}"Haproxy已存在,安装失败"${END};exit; }
   ${COLOR}"开始安装Haproxy"${END}
   ${COLOR}"开始安装Haproxy依赖包"${END}
   if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> dev/null;then
      yum -y install gcc make gcc-c++ glibc glibc-devel pcre pcre-devel openssl openssl-devel systemd-devel libtermcap-devel ncurses-devel libevent-devel readline-devel &> dev/null
   else
      apt update &> dev/null;apt -y install gcc make openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev libreadline-dev libsystemd-dev &> dev/null
   fi
  tar xf ${LUA_FILE}
   LUA_DIR=`echo ${LUA_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
   cd ${LUA_DIR}
   make all test
   cd ${SRC_DIR}
  tar xf ${HAPROXY_FILE}
   HAPROXY_DIR=`echo ${HAPROXY_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
   cd ${HAPROXY_DIR}
   make -j ${CPUS} ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 USE_LUA=1 LUA_INC=${SRC_DIR}/${LUA_DIR}/src/ LUA_LIB=${SRC_DIR}/${LUA_DIR}/src/ PREFIX=${HAPROXY_INSTALL_DIR}
   make install PREFIX=${HAPROXY_INSTALL_DIR}
  [ $? -eq 0 ] && $COLOR"Haproxy编译安装成功"$END || { $COLOR"Haproxy编译安装失败,退出!"$END;exit; }
   cat > /lib/systemd/system/haproxy.service <<-EOF
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target

[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy.pid
ExecReload=/bin/kill -USR2 \$MAINPID

[Install]
WantedBy=multi-user.target
EOF
  [ -L /usr/sbin/haproxy ] || ln -s ../..${HAPROXY_INSTALL_DIR}/sbin/haproxy /usr/sbin/ &> /dev/null
  [ -d /etc/haproxy ] || mkdir /etc/haproxy &> /dev/null
  [ -d /var/lib/haproxy/ ] || mkdir -p /var/lib/haproxy/ &> /dev/null
   cat > /etc/haproxy/haproxy.cfg <<-EOF
global
maxconn 100000
chroot ${HAPROXY_INSTALL_DIR}
stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
uid 99
gid 99
daemon
pidfile /var/lib/haproxy/haproxy.pid
log 127.0.0.1 local3 info

defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms

listen stats
  mode http
  bind 0.0.0.0:9999
  stats enable
  log global
  stats uri /haproxy-status
  stats auth ${STATS_AUTH_USER}:${STATS_AUTH_PASSWORD}

listen kubernetes-6443
  bind ${VIP}:6443
  mode tcp
  log global
  server ${MASTER01} ${MASTER01}:6443 check inter 3s fall 2 rise 5
  server ${MASTER02} ${MASTER02}:6443 check inter 3s fall 2 rise 5
  server ${MASTER03} ${MASTER03}:6443 check inter 3s fall 2 rise 5

listen harbor-80
  bind ${VIP}:80
  mode http
  log global
  balance source
  server ${HARBOR01} ${HARBOR01}:80 check inter 3s fall 2 rise 5
  server ${HARBOR02} ${HARBOR02}:80 check inter 3s fall 2 rise 5
EOF
   cat >> /etc/sysctl.conf <<-EOF
net.ipv4.ip_nonlocal_bind = 1
EOF
  sysctl -p &> /dev/null
   echo "PATH=${HAPROXY_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/haproxy.sh
  systemctl daemon-reload
  systemctl enable --now haproxy &> dev/null
  systemctl is-active haproxy &> dev/null && ${COLOR}"Haproxy 服务启动成功!"${END} || { ${COLOR}"Haproxy 启动失败,退出!"${END} ; exit; }
   ${COLOR}"Haproxy安装完成"${END}
}

main(){
  os
  check_file
  install_haproxy
}

main

[root@k8s-ha01 ~]# bash install_haproxy.sh

[root@k8s-ha02 ~]# bash install_haproxy.sh
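
A quick way to confirm that HAProxy came up and is bound to the VIP ports (a sketch; the net.ipv4.ip_nonlocal_bind=1 setting written by the script lets HAProxy bind the VIP even on the node that does not currently hold it):

[root@k8s-ha01 ~]# ss -ntl
#Expect listeners on 172.31.3.188:6443, 172.31.3.188:80 and 0.0.0.0:9999 on both ha01 and ha02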

4.2 Install KeepAlived

Configure the KeepAlived health-check script on the HA nodes (ha01 and ha02):

[root@k8s-ha02 ~]# cat check_haproxy.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:       Raymond
#QQ:           88563128
#Date:         2022-01-09
#FileName:     check_haproxy.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
err=0
for k in $(seq 1 3);do
   check_code=$(pgrep haproxy)
   if [[ $check_code == "" ]]; then
       err=$(expr $err + 1)
       sleep 1
      continue
   else
       err=0
      break
   fi
done

if [[ $err != "0" ]]; then
   echo "systemctl stop keepalived"
   /usr/bin/systemctl stop keepalived
   exit 1
else
   exit 0
fi

Install KeepAlived on ha01 and ha02. The two configurations differ, so keep them apart (/etc/keepalived/keepalived.conf) and mind each node's network interface (the interface parameter).

Install keepalived (MASTER) on the ha01 node:

[root@k8s-ha01 ~]# cat install_keepalived_master.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:       Raymond
#QQ:           88563128
#Date:         2021-12-29
#FileName:     install_keepalived_master.sh
#URL:           raymond.blog.csdn.net
#Description:   install_keepalived for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KEEPALIVED_URL=https://keepalived.org/software/
KEEPALIVED_FILE=keepalived-2.2.4.tar.gz
KEEPALIVED_INSTALL_DIR=/apps/keepalived
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
STATE=MASTER
PRIORITY=100
VIP=172.31.3.188


os(){
   OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' etc/os-release`
   OS_RELEASE_VERSION=`sed -rn '/^VERSION_ID=/s@.*="?([0-9]+)\.?.*"?@\1@p' etc/os-release`
}

check_file (){
   cd  ${SRC_DIR}
   if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> dev/null;then
      rpm -q wget &> dev/null || yum -y install wget &> dev/null
   fi
   if [ ! -e ${KEEPALIVED_FILE} ];then
       ${COLOR}"缺少${KEEPALIVED_FILE}文件,如果是离线包,请放到${SRC_DIR}目录下"${END}
       ${COLOR}'开始下载Keepalived源码包'${END}
       wget ${KEEPALIVED_URL}${KEEPALIVED_FILE} || { ${COLOR}"Keepalived源码包下载失败"${END}; exit; }
   elif [ ! -e check_haproxy.sh ];then
       ${COLOR}"缺少check_haproxy.sh文件,请把文件放到${SRC_DIR}目录下"${END}
       exit
   else
       ${COLOR}"相关文件已准备好"${END}
   fi
}

install_keepalived(){
  [ -d ${KEEPALIVED_INSTALL_DIR} ] && { ${COLOR}"Keepalived已存在,安装失败"${END};exit; }
   ${COLOR}"开始安装Keepalived"${END}
   ${COLOR}"开始安装Keepalived依赖包"${END}
   if [ ${OS_ID} == "Rocky" -a ${OS_RELEASE_VERSION} == 8 ];then
       URL=mirrors.sjtug.sjtu.edu.cn
if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
           cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/rocky/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
EOF
       fi
   fi
   if [ ${OS_ID} == "CentOS" -a ${OS_RELEASE_VERSION} == 8 ];then
       URL=mirrors.cloud.tencent.com
       if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
           cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/centos/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF
       fi
   fi
   if [[ ${OS_RELEASE_VERSION} == 8 ]] &> dev/null;then
      yum -y install make gcc ipvsadm autoconf automake openssl-devel libnl3-devel iptables-devel ipset-devel file-devel net-snmp-devel glib2-devel pcre2-devel libnftnl-devel libmnl-devel systemd-devel &> dev/null
   elif [[ ${OS_RELEASE_VERSION} == 7 ]] &> dev/null;then
      yum -y install make gcc libnfnetlink-devel libnfnetlink ipvsadm libnl libnl-devel libnl3 libnl3-devel lm_sensors-libs net-snmp-agent-libs net-snmp-libs openssh-server openssh-clients openssl openssl-devel automake iproute &> dev/null
   elif [[ ${OS_RELEASE_VERSION} == 20 ]] &> dev/null;then
      apt update &> dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev
   else
      apt update &> dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf iptables-dev libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev &> dev/null
   fi
  tar xf ${KEEPALIVED_FILE}
   KEEPALIVED_DIR=`echo ${KEEPALIVED_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
   cd ${KEEPALIVED_DIR}
  ./configure --prefix=${KEEPALIVED_INSTALL_DIR} --disable-fwmark
   make -j $CPUS && make install
  [ $? -eq 0 ] && ${COLOR}"Keepalived编译安装成功"${END} || { ${COLOR}"Keepalived编译安装失败,退出!"${END};exit; }
  [ -d /etc/keepalived ] || mkdir -p /etc/keepalived &> /dev/null
   cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
  router_id LVS_DEVEL
  script_user root
  enable_script_security
}

vrrp_script check_haproxy {
  script "/etc/keepalived/check_haproxy.sh"
  interval 5
  weight -5
  fall 2  
  rise 1
}

vrrp_instance VI_1 {
  state ${STATE}
  interface ${NET_NAME}
  virtual_router_id 51
  priority ${PRIORITY}
  advert_int 1
  authentication {
      auth_type PASS
      auth_pass 1111
  }
  virtual_ipaddress {
       ${VIP} dev ${NET_NAME} label ${NET_NAME}:1
  }
  track_script {
      check_haproxy
  }
}
EOF
   cp ./keepalived/keepalived.service /lib/systemd/system/
   cd  ${SRC_DIR}
   mv check_haproxy.sh /etc/keepalived/check_haproxy.sh
   chmod +x /etc/keepalived/check_haproxy.sh
   echo "PATH=${KEEPALIVED_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/keepalived.sh
  systemctl daemon-reload
  systemctl enable --now keepalived &> dev/null
  systemctl is-active keepalived &> dev/null && ${COLOR}"Keepalived 服务启动成功!"${END} || { ${COLOR}"Keepalived 启动失败,退出!"${END} ; exit; }
   ${COLOR}"Keepalived安装完成"${END}
}

main(){
  os
  check_file
  install_keepalived
}

main

[root@k8s-ha01 ~]# bash install_keepalived_master.sh

[root@k8s-ha01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether 00:0c:29:05:9b:2a brd ff:ff:ff:ff:ff:ff
  inet 172.31.3.104/21 brd 172.31.7.255 scope global eth0
      valid_lft forever preferred_lft forever
  inet 172.31.3.188/32 scope global eth0:1
      valid_lft forever preferred_lft forever
  inet6 fe80::20c:29ff:fe05:9b2a/64 scope link
      valid_lft forever preferred_lft forever

Install keepalived (BACKUP) on the ha02 node:

[root@k8s-ha02 ~]# cat install_keepalived_backup.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:       Raymond
#QQ:           88563128
#Date:         2021-12-29
#FileName:     install_keepalived_backup.sh
#URL:           raymond.blog.csdn.net
#Description:   install_keepalived for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
KEEPALIVED_URL=https://keepalived.org/software/
KEEPALIVED_FILE=keepalived-2.2.4.tar.gz
KEEPALIVED_INSTALL_DIR=/apps/keepalived
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
STATE=BACKUP
PRIORITY=90
VIP=172.31.3.188


os(){
   OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' etc/os-release`
   OS_RELEASE_VERSION=`sed -rn '/^VERSION_ID=/s@.*="?([0-9]+)\.?.*"?@\1@p' etc/os-release`
}

check_file (){
   cd  ${SRC_DIR}
   if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> dev/null;then
      rpm -q wget &> dev/null || yum -y install wget &> dev/null
   fi
   if [ ! -e ${KEEPALIVED_FILE} ];then
       ${COLOR}"缺少${KEEPALIVED_FILE}文件,如果是离线包,请放到${SRC_DIR}目录下"${END}
       ${COLOR}'开始下载Keepalived源码包'${END}
       wget ${KEEPALIVED_URL}${KEEPALIVED_FILE} || { ${COLOR}"Keepalived源码包下载失败"${END}; exit; }
   elif [ ! -e check_haproxy.sh ];then
       ${COLOR}"缺少check_haproxy.sh文件,请把文件放到${SRC_DIR}目录下"${END}
       exit
   else
       ${COLOR}"相关文件已准备好"${END}
   fi
}

install_keepalived(){
  [ -d ${KEEPALIVED_INSTALL_DIR} ] && { ${COLOR}"Keepalived已存在,安装失败"${END};exit; }
   ${COLOR}"开始安装Keepalived"${END}
   ${COLOR}"开始安装Keepalived依赖包"${END}
   if [ ${OS_ID} == "Rocky" -a ${OS_RELEASE_VERSION} == 8 ];then
       URL=mirrors.sjtug.sjtu.edu.cn
if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
           cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/rocky/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
EOF
       fi
   fi
   if [ ${OS_ID} == "CentOS" -a ${OS_RELEASE_VERSION} == 8 ];then
       URL=mirrors.cloud.tencent.com
       if [ ! `grep -R "\[PowerTools\]" /etc/yum.repos.d/` ];then
           cat > /etc/yum.repos.d/PowerTools.repo <<-EOF
[PowerTools]
name=PowerTools
baseurl=https://${URL}/centos/\$releasever/PowerTools/\$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF
       fi
   fi
   if [[ ${OS_RELEASE_VERSION} == 8 ]] &> dev/null;then
      yum -y install make gcc ipvsadm autoconf automake openssl-devel libnl3-devel iptables-devel ipset-devel file-devel net-snmp-devel glib2-devel pcre2-devel libnftnl-devel libmnl-devel systemd-devel &> dev/null
   elif [[ ${OS_RELEASE_VERSION} == 7 ]] &> dev/null;then
      yum -y install make gcc libnfnetlink-devel libnfnetlink ipvsadm libnl libnl-devel libnl3 libnl3-devel lm_sensors-libs net-snmp-agent-libs net-snmp-libs openssh-server openssh-clients openssl openssl-devel automake iproute &> dev/null
   elif [[ ${OS_RELEASE_VERSION} == 20 ]] &> dev/null;then
      apt update &> dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev
   else
      apt update &> dev/null;apt -y install make gcc ipvsadm build-essential pkg-config automake autoconf iptables-dev libipset-dev libnl-3-dev libnl-genl-3-dev libssl-dev libxtables-dev libip4tc-dev libip6tc-dev libipset-dev libmagic-dev libsnmp-dev libglib2.0-dev libpcre2-dev libnftnl-dev libmnl-dev libsystemd-dev &> dev/null
   fi
  tar xf ${KEEPALIVED_FILE}
   KEEPALIVED_DIR=`echo ${KEEPALIVED_FILE} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'`
   cd ${KEEPALIVED_DIR}
  ./configure --prefix=${KEEPALIVED_INSTALL_DIR} --disable-fwmark
   make -j $CPUS && make install
  [ $? -eq 0 ] && ${COLOR}"Keepalived编译安装成功"${END} || { ${COLOR}"Keepalived编译安装失败,退出!"${END};exit; }
  [ -d /etc/keepalived ] || mkdir -p /etc/keepalived &> /dev/null
   cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
  router_id LVS_DEVEL
  script_user root
  enable_script_security
}

vrrp_script check_haproxy {
  script "/etc/keepalived/check_haproxy.sh"
  interval 5
  weight -5
  fall 2  
  rise 1
}

vrrp_instance VI_1 {
  state ${STATE}
  interface ${NET_NAME}
  virtual_router_id 51
  priority ${PRIORITY}
  advert_int 1
  authentication {
      auth_type PASS
      auth_pass 1111
  }
  virtual_ipaddress {
       ${VIP} dev ${NET_NAME} label ${NET_NAME}:1
  }
  track_script {
      check_haproxy
  }
}
EOF
   cp ./keepalived/keepalived.service /lib/systemd/system/
   cd  ${SRC_DIR}
   mv check_haproxy.sh /etc/keepalived/check_haproxy.sh
   chmod +x /etc/keepalived/check_haproxy.sh
   echo "PATH=${KEEPALIVED_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/keepalived.sh
  systemctl daemon-reload
  systemctl enable --now keepalived &> dev/null
  systemctl is-active keepalived &> dev/null && ${COLOR}"Keepalived 服务启动成功!"${END} || { ${COLOR}"Keepalived 启动失败,退出!"${END} ; exit; }
   ${COLOR}"Keepalived安装完成"${END}
}

main(){
  os
  check_file
  install_keepalived
}

main

[root@k8s-ha02 ~]# bash install_keepalived_backup.sh
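
An optional failover test (sketch): stopping haproxy on ha01 makes check_haproxy.sh stop keepalived there, so the VIP should move to ha02; restarting both services on ha01 lets the higher-priority MASTER take the VIP back.

[root@k8s-ha01 ~]# systemctl stop haproxy
[root@k8s-ha02 ~]# ip a s eth0    #172.31.3.188 should now appear on ha02's eth0
[root@k8s-ha01 ~]# systemctl start haproxy && systemctl start keepalived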

5. Installing Harbor

5.1 Install Harbor

Install Harbor on harbor01 and harbor02:

[root@k8s-harbor01 ~]# cat install_docker_binary_compose_harbor.sh
#!/bin/bash
#
#**************************************************************************************************
#Author:       Raymond
#QQ:           88563128
#Date:         2021-12-15
#FileName:     install_docker_binary_compose_harbor.sh
#URL:           raymond.blog.csdn.net
#Description:   install_docker_binary_compose_harbor for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#**************************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'

URL='https://download.docker.com/linux/static/stable/x86_64/'
DOCKER_FILE=docker-19.03.9.tgz

#docker-compose下载地址:https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
DOCKER_COMPOSE_FILE=docker-compose-linux-x86_64

#harbor下载地址:https://github.com/goharbor/harbor/releases/download/v2.4.1/harbor-offline-installer-v2.4.1.tgz
HARBOR_FILE=harbor-offline-installer-v
HARBOR_VERSION=2.4.1
TAR=.tgz
HARBOR_INSTALL_DIR=/apps
HARBOR_DOMAIN=harbor.raymonds.cc
NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
IP=`ip addr show ${NET_NAME}| awk -F" +|/" '/global/{print $3}'`
HARBOR_ADMIN_PASSWORD=123456

os(){
   OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' etc/os-release`
   OS_RELEASE_VERSION=`sed -rn '/^VERSION_ID=/s@.*="?([0-9]+)\.?.*"?@\1@p' etc/os-release`
}

check_file (){
   cd ${SRC_DIR}
   if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> dev/null;then
      rpm -q wget &> dev/null || yum -y install wget &> dev/null
   fi
   if [ ! -e ${DOCKER_FILE} ];then
       ${COLOR}"缺少${DOCKER_FILE}文件,如果是离线包,请把文件放到${SRC_DIR}目录下"${END}
       ${COLOR}'开始下载DOCKER二进制源码包'${END}
       wget ${URL}${DOCKER_FILE} || { ${COLOR}"DOCKER二进制安装包下载失败"${END}; exit; }
   elif [ ! -e ${DOCKER_COMPOSE_FILE} ];then
       ${COLOR}"缺少${DOCKER_COMPOSE_FILE}文件,请把文件放到${SRC_DIR}目录下"${END}
       exit
   elif [ ! -e ${HARBOR_FILE}${HARBOR_VERSION}${TAR} ];then
       ${COLOR}"缺少${HARBOR_FILE}${HARBOR_VERSION}${TAR}文件,请把文件放到${SRC_DIR}目录下"${END}
       exit
   else
       ${COLOR}"相关文件已准备好"${END}
   fi
}

install_docker(){
  tar xf ${DOCKER_FILE}
   mv docker/* /usr/bin/
   cat > /lib/systemd/system/docker.service <<-EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
   mkdir -p /etc/docker
   cat > /etc/docker/daemon.json <<-EOF
{
   "registry-mirrors": [
       "https://registry.docker-cn.com",
       "http://hub-mirror.c.163.com",
       "https://docker.mirrors.ustc.edu.cn"
  ],
   "insecure-registries": ["${HARBOR_DOMAIN}"],
   "exec-opts": ["native.cgroupdriver=systemd"],
   "max-concurrent-downloads": 10,
   "max-concurrent-uploads": 5,
   "log-opts": {
       "max-size": "300m",
       "max-file": "2"  
  },
   "live-restore": true
}
EOF
   echo 'alias rmi="docker images -qa|xargs docker rmi -f"' >> ~/.bashrc
   echo 'alias rmc="docker ps -qa|xargs docker rm -f"' >> ~/.bashrc
  systemctl daemon-reload
  systemctl enable --now docker &> dev/null
  systemctl is-active docker &> dev/null && ${COLOR}"Docker 服务启动成功"${END} || { ${COLOR}"Docker 启动失败"${END};exit; }
  docker version && ${COLOR}"Docker 安装成功"${END} || ${COLOR}"Docker 安装失败"${END}
}

install_docker_compose(){
   ${COLOR}"开始安装 Docker compose....."${END}
   sleep 1
   mv ${SRC_DIR}/${DOCKER_COMPOSE_FILE} /usr/bin/docker-compose
   chmod +x /usr/bin/docker-compose
  docker-compose --version &&  ${COLOR}"Docker Compose 安装完成"${END} || ${COLOR}"Docker compose 安装失败"${END}
}

install_harbor(){
   ${COLOR}"开始安装 Harbor....."${END}
   sleep 1
  [ -d ${HARBOR_INSTALL_DIR} ] || mkdir ${HARBOR_INSTALL_DIR}
  tar xf ${SRC_DIR}/${HARBOR_FILE}${HARBOR_VERSION}${TAR} -C ${HARBOR_INSTALL_DIR}/
   mv ${HARBOR_INSTALL_DIR}/harbor/harbor.yml.tmpl ${HARBOR_INSTALL_DIR}/harbor/harbor.yml
   sed -ri.bak -e 's/^(hostname:) .*/\1 '${IP}'/' -e 's/^(harbor_admin_password:) .*/\1 '${HARBOR_ADMIN_PASSWORD}'/' -e 's/^(https:)/#\1/' -e 's/ (port: 443)/# \1/' -e 's@ (certificate: .*)@# \1@' -e 's@ (private_key: .*)@# \1@' ${HARBOR_INSTALL_DIR}/harbor/harbor.yml
   if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> dev/null;then
      yum -y install python3 &> dev/null || { ${COLOR}"安装软件包失败,请检查网络配置"${END}; exit; }
   else
      apt -y install python3 &> dev/null || { ${COLOR}"安装软件包失败,请检查网络配置"${END}; exit; }
   fi
   ${HARBOR_INSTALL_DIR}/harbor/install.sh && ${COLOR}"Harbor 安装完成"${END} ||  ${COLOR}"Harbor 安装失败"${END}
   cat > /lib/systemd/system/harbor.service <<-EOF
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target
EOF

  systemctl daemon-reload
  systemctl enable harbor &>/dev/null && ${COLOR}"Harbor已配置为开机自动启动"${END}
}

set_swap_limit(){
   if [ ${OS_ID} == "Ubuntu" ];then
       ${COLOR}'设置Docker的"WARNING: No swap limit support"警告'${END}
       sed -ri '/^GRUB_CMDLINE_LINUX=/s@"$@ swapaccount=1"@' /etc/default/grub
      update-grub &> /dev/null
       ${COLOR}"10秒后,机器会自动重启"${END}
       sleep 10
      reboot
   fi
}

main(){
  os
  check_file
  [ -f /usr/bin/docker ] && ${COLOR}"Docker已安装"${END} || install_docker
  docker-compose --version &> /dev/null && ${COLOR}"Docker Compose已安装"${END} || install_docker_compose
  systemctl is-active harbor &> /dev/null && ${COLOR}"Harbor已安装"${END} || install_harbor
   grep -q "swapaccount=1" /etc/default/grub && ${COLOR}'"WARNING: No swap limit support"警告,已设置'${END} || set_swap_limit
}

main

[root@k8s-harbor01 ~]# bash install_docker_binary_compose_harbor.sh

[root@k8s-harbor02 ~]# bash install_docker_binary_compose_harbor.sh
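
Once both installations have finished, Harbor can be checked quickly through the VIP-backed domain (a sketch: harbor.raymonds.cc points at the VIP in /etc/hosts, haproxy forwards port 80 to the harbor nodes, and the password is the HARBOR_ADMIN_PASSWORD set in the script; run it from a node whose docker already trusts the insecure registry, e.g. a harbor node):

curl -I http://harbor.raymonds.cc
docker login harbor.raymonds.cc -u admin -p 123456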

5.2 Create the Harbor Repositories

Create a project named google_containers on harbor01

Create a project named google_containers on harbor02

Create a replication endpoint on harbor02

Create a replication rule on harbor02

Create a replication endpoint on harbor01

Create a replication rule on harbor01

6. Deploying etcd

6.1 Install the etcd Components

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#server-binaries-1


Download the etcd package

[root@k8s-etcd01 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz

Extract the etcd archive

[root@k8s-etcd01 ~]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.13-linux-amd64/etcd{,ctl}

Check the version

[root@k8s-etcd01 ~]# etcdctl version
etcdctl version: 3.4.13
API version: 3.4

Copy the binaries to the other etcd nodes

[root@k8s-etcd01 ~]# for NODE in k8s-etcd02 k8s-etcd03; do echo $NODE; scp -o StrictHostKeyChecking=no /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

Create the /opt/cni/bin directory on the etcd nodes

[root@k8s-etcd01 ~]# mkdir -p /opt/cni/bin
[root@k8s-etcd02 ~]# mkdir -p /opt/cni/bin
[root@k8s-etcd03 ~]# mkdir -p /opt/cni/bin

6.2 Generate the etcd Certificates

This is the most critical part of a binary installation; a single mistake here ruins everything, so make sure every step is correct.

Download the certificate-generation tools on etcd01

[root@k8s-etcd01 ~]# wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl

[root@k8s-etcd01 ~]# wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson

[root@k8s-etcd01 ~]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

Create the etcd certificate directory on the etcd nodes

[root@k8s-etcd01 ~]# mkdir /etc/etcd/ssl -p
[root@k8s-etcd02 ~]# mkdir /etc/etcd/ssl -p
[root@k8s-etcd03 ~]# mkdir /etc/etcd/ssl -p

Generate the etcd certificates on the etcd01 node

Generate the CSR (certificate signing request) files for the certificates; they configure the domain names, organization and organizational unit

[root@k8s-etcd01 ~]# mkdir pki
[root@k8s-etcd01 ~]# cd pki/

[root@k8s-etcd01 pki]# cat etcd-ca-csr.json
{
 "CN": "etcd",
 "key": {
   "algo": "rsa",
   "size": 2048
},
 "names": [
  {
     "C": "CN",
     "ST": "Beijing",
     "L": "Beijing",
     "O": "etcd",
     "OU": "Etcd Security"
  }
],
 "ca": {
   "expiry": "876000h"
}
}

# Generate the etcd CA certificate and its key
[root@k8s-etcd01 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
#Output
2022/02/25 15:32:30 [INFO] generating a new CA key and certificate from CSR
2022/02/25 15:32:30 [INFO] generate received request
2022/02/25 15:32:30 [INFO] received CSR
2022/02/25 15:32:30 [INFO] generating key: rsa-2048
2022/02/25 15:32:30 [INFO] encoded CSR
2022/02/25 15:32:30 [INFO] signed certificate with serial number 283470375480918891598291779152743682937426861184

[root@k8s-etcd01 pki]# ll /etc/etcd/ssl/etcd-ca*
-rw-r--r-- 1 root root 1005 Feb 25 15:32 /etc/etcd/ssl/etcd-ca.csr
-rw------- 1 root root 1679 Feb 25 15:32 /etc/etcd/ssl/etcd-ca-key.pem
-rw-r--r-- 1 root root 1367 Feb 25 15:32 /etc/etcd/ssl/etcd-ca.pem

[root@k8s-etcd01 pki]# cat ca-config.json
{
 "signing": {
   "default": {
     "expiry": "876000h"
  },
   "profiles": {
     "kubernetes": {
       "usages": [
           "signing",
           "key encipherment",
           "server auth",
           "client auth"
      ],
       "expiry": "876000h"
    }
  }
}
}

[root@k8s-etcd01 pki]# cat etcd-csr.json
{
 "CN": "etcd",
 "key": {
   "algo": "rsa",
   "size": 2048
},
 "names": [
  {
     "C": "CN",
     "ST": "Beijing",
     "L": "Beijing",
     "O": "etcd",
     "OU": "Etcd Security"
  }
]
}

[root@k8s-etcd01 pki]# cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-etcd01,k8s-etcd02,k8s-etcd03,172.31.3.108,172.31.3.109,172.31.3.110 \
  -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
#Output
2022/02/25 15:34:05 [INFO] generate received request
2022/02/25 15:34:05 [INFO] received CSR
2022/02/25 15:34:05 [INFO] generating key: rsa-2048
2022/02/25 15:34:05 [INFO] encoded CSR
2022/02/25 15:34:05 [INFO] signed certificate with serial number 621125122845507641888880600687542216934848214818

[root@k8s-etcd01 pki]# ll /etc/etcd/ssl/etcd*
-rw-r--r-- 1 root root 1005 Feb 25 15:32 /etc/etcd/ssl/etcd-ca.csr
-rw------- 1 root root 1679 Feb 25 15:32 /etc/etcd/ssl/etcd-ca-key.pem
-rw-r--r-- 1 root root 1367 Feb 25 15:32 /etc/etcd/ssl/etcd-ca.pem
-rw-r--r-- 1 root root 1005 Feb 25 15:34 /etc/etcd/ssl/etcd.csr
-rw------- 1 root root 1679 Feb 25 15:34 /etc/etcd/ssl/etcd-key.pem
-rw-r--r-- 1 root root 1501 Feb 25 15:34 /etc/etcd/ssl/etcd.pem
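
It is worth confirming that all of the node names and IPs passed via -hostname ended up in the certificate's subject alternative names (optional check with openssl):

[root@k8s-etcd01 pki]# openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
#Expect DNS:k8s-etcd01, DNS:k8s-etcd02, DNS:k8s-etcd03 plus 127.0.0.1 and the three etcd IPs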

Copy the etcd certificates to the other etcd nodes

[root@k8s-etcd01 pki]# for NODE in k8s-etcd02 k8s-etcd03; do
     ssh -o StrictHostKeyChecking=no $NODE "mkdir -p /etc/etcd/ssl"
     for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
       scp -o StrictHostKeyChecking=no /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
     done
 done

6.3 etcd Configuration

The etcd configuration is essentially the same on every node; just change the host name and IP addresses in each etcd node's configuration

6.3.1 etcd01

[root@k8s-etcd01 pki]# vim /etc/etcd/etcd.config.yml
name: 'k8s-etcd01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.31.3.108:2380'
listen-client-urls: 'https://172.31.3.108:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.31.3.108:2380'
advertise-client-urls: 'https://172.31.3.108:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-etcd01=https://172.31.3.108:2380,k8s-etcd02=https://172.31.3.109:2380,k8s-etcd03=https://172.31.3.110:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

6.3.2 etcd02

[root@k8s-etcd02 ~]# vim /etc/etcd/etcd.config.yml
name: 'k8s-etcd02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.31.3.109:2380'
listen-client-urls: 'https://172.31.3.109:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.31.3.109:2380'
advertise-client-urls: 'https://172.31.3.109:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-etcd01=https://172.31.3.108:2380,k8s-etcd02=https://172.31.3.109:2380,k8s-etcd03=https://172.31.3.110:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

6.3.3 etcd03

[root@k8s-etcd03 ~]# vim /etc/etcd/etcd.config.yml
name: 'k8s-etcd03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.31.3.110:2380'
listen-client-urls: 'https://172.31.3.110:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.31.3.110:2380'
advertise-client-urls: 'https://172.31.3.110:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-etcd01=https://172.31.3.108:2380,k8s-etcd02=https://172.31.3.109:2380,k8s-etcd03=https://172.31.3.110:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

6.3.4 Create the etcd Service

Create and start the etcd service on all etcd nodes:

[root@k8s-etcd01 pki]# vim /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

[root@k8s-etcd01 pki]# for NODE in k8s-etcd02 k8s-etcd03; do scp /lib/systemd/system/etcd.service $NODE:/lib/systemd/system/;done

On all etcd nodes, create the certificate directory used by Kubernetes, link the etcd certificates into it, and start etcd:

[root@k8s-etcd01 pki]# mkdir /etc/kubernetes/pki/etcd -p
[root@k8s-etcd02 ~]# mkdir /etc/kubernetes/pki/etcd -p
[root@k8s-etcd03 ~]# mkdir /etc/kubernetes/pki/etcd -p

[root@k8s-etcd01 pki]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-etcd02 ~]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-etcd03 ~]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/

[root@k8s-etcd01 pki]# systemctl daemon-reload && systemctl enable --now etcd
[root@k8s-etcd02 ~]# systemctl daemon-reload && systemctl enable --now etcd
[root@k8s-etcd03 ~]# systemctl daemon-reload && systemctl enable --now etcd

[root@k8s-etcd01 pki]# systemctl status etcd
● etcd.service - Etcd Service
  Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
  Active: active (running) since Fri 2022-02-25 15:45:24 CST; 12s ago
    Docs: https://coreos.com/etcd/docs/latest/
Main PID: 12347 (etcd)
  CGroup: /system.slice/etcd.service
          └─12347 /usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml

Feb 25 15:45:26 k8s-etcd01.example.local etcd[12347]: health check for peer 209a1f57c506dba2 could not connect: dial tcp 172.31.3.110:...efused
Feb 25 15:45:26 k8s-etcd01.example.local etcd[12347]: health check for peer 209a1f57c506dba2 could not connect: dial tcp 172.31.3.110:...efused
Feb 25 15:45:26 k8s-etcd01.example.local etcd[12347]: peer 209a1f57c506dba2 became active
Feb 25 15:45:26 k8s-etcd01.example.local etcd[12347]: established a TCP streaming connection with peer 209a1f57c506dba2 (stream MsgApp...riter)
Feb 25 15:45:26 k8s-etcd01.example.local etcd[12347]: established a TCP streaming connection with peer 209a1f57c506dba2 (stream Message writer)
Feb 25 15:45:26 k8s-etcd01.example.local etcd[12347]: established a TCP streaming connection with peer 209a1f57c506dba2 (stream MsgApp...eader)
Feb 25 15:45:26 k8s-etcd01.example.local etcd[12347]: established a TCP streaming connection with peer 209a1f57c506dba2 (stream Message reader)
Feb 25 15:45:28 k8s-etcd01.example.local etcd[12347]: updating the cluster version from 3.0 to 3.4
Feb 25 15:45:28 k8s-etcd01.example.local etcd[12347]: updated the cluster version from 3.0 to 3.4
Feb 25 15:45:28 k8s-etcd01.example.local etcd[12347]: enabled capabilities for version 3.4
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s-etcd02 ~]# systemctl status etcd
[root@k8s-etcd03 ~]# systemctl status etcd

Check the etcd status:

[root@k8s-etcd01 pki]# export ETCDCTL_API=3

[root@k8s-etcd01 pki]# etcdctl --endpoints="172.31.3.108:2379,172.31.3.109:2379,172.31.3.110:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT     |       ID       | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.31.3.108:2379 | a9fef56ff96ed75c |  3.4.13 |   20 kB |      true |      false |         3 |          9 |                  9 |       |
| 172.31.3.109:2379 | 8319ef09e8b3d277 |  3.4.13 |   20 kB |     false |      false |         3 |          9 |                  9 |       |
| 172.31.3.110:2379 | 209a1f57c506dba2 |  3.4.13 |   20 kB |     false |      false |         3 |          9 |                  9 |       |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
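
As an optional extra check (a sketch, not part of the original procedure), the same etcdctl flags can be used to query endpoint health; every member should report "is healthy":

[root@k8s-etcd01 pki]# etcdctl --endpoints="172.31.3.108:2379,172.31.3.109:2379,172.31.3.110:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health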

Important: if keepalived and haproxy are installed, verify that keepalived works correctly before continuing.

#Test the VIP
[root@k8s-master01 pki]# ping 172.31.3.188
PING 172.31.3.188 (172.31.3.188) 56(84) bytes of data.
64 bytes from 172.31.3.188: icmp_seq=1 ttl=64 time=1.27 ms
64 bytes from 172.31.3.188: icmp_seq=2 ttl=64 time=0.585 ms
^C
--- 172.31.3.188 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.585/0.931/1.277/0.346 ms

[root@k8s-ha01 ~]# systemctl stop keepalived
[root@k8s-ha01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether 00:0c:29:05:9b:2a brd ff:ff:ff:ff:ff:ff
  inet 172.31.3.104/21 brd 172.31.7.255 scope global eth0
      valid_lft forever preferred_lft forever
  inet6 fe80::20c:29ff:fe05:9b2a/64 scope link
      valid_lft forever preferred_lft forever

[root@k8s-ha02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether 00:0c:29:5e:d8:f8 brd ff:ff:ff:ff:ff:ff
  inet 172.31.3.105/21 brd 172.31.7.255 scope global eth0
      valid_lft forever preferred_lft forever
  inet 172.31.3.188/32 scope global eth0:1
      valid_lft forever preferred_lft forever
  inet6 fe80::20c:29ff:fe5e:d8f8/64 scope link
      valid_lft forever preferred_lft forever

[root@k8s-ha01 ~]# systemctl start keepalived
[root@k8s-ha01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether 00:0c:29:05:9b:2a brd ff:ff:ff:ff:ff:ff
  inet 172.31.3.104/21 brd 172.31.7.255 scope global eth0
      valid_lft forever preferred_lft forever
  inet 172.31.3.188/32 scope global eth0:1
      valid_lft forever preferred_lft forever
  inet6 fe80::20c:29ff:fe05:9b2a/64 scope link
      valid_lft forever preferred_lft forever

[root@k8s-master01 pki]# telnet 172.31.3.188 6443
Trying 172.31.3.188...
Connected to 172.31.3.188.
Escape character is '^]'.
Connection closed by foreign host.

If the VIP cannot be pinged, or telnet does not show the ']' escape prompt, the VIP is not usable; do not continue. Troubleshoot keepalived first: check the firewall and SELinux, the haproxy and keepalived service status, the listening ports, and so on.

On all nodes the firewall must be disabled and inactive: systemctl status firewalld

On all nodes SELinux must be disabled: getenforce

On the HA nodes (which run haproxy and keepalived), check the service status: systemctl status keepalived haproxy

On the HA nodes, check the listening ports: netstat -lntp (a combined check is sketched below)
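
A minimal sketch that bundles these checks, assuming password-less SSH to the hosts (as configured earlier) and that haproxy listens on the 6443 frontend used by the VIP:

[root@k8s-master01 ~]# for NODE in k8s-master01 k8s-master02 k8s-master03 k8s-ha01 k8s-ha02; do
    echo "=== ${NODE} ==="
    ssh -o StrictHostKeyChecking=no $NODE "systemctl is-active firewalld; getenforce"
done
[root@k8s-master01 ~]# for NODE in k8s-ha01 k8s-ha02; do
    echo "=== ${NODE} ==="
    ssh -o StrictHostKeyChecking=no $NODE "systemctl is-active keepalived haproxy; netstat -lntp | grep 6443"
done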

7. Deploy Docker

Install docker-ce on the master and node machines:

[root@k8s-master01 ~]# cat install_docker_binary.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:       Raymond
#QQ:           88563128
#Date:         2021-12-07
#FileName:     install_docker_binary.sh
#URL:           raymond.blog.csdn.net
#Description:   install_docker_binary for centos 7/8 & ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
SRC_DIR=/usr/local/src
COLOR="echo -e \\033[01;31m"
END='\033[0m'
URL='https://mirrors.cloud.tencent.com/docker-ce/linux/static/stable/x86_64/'
DOCKER_FILE=docker-19.03.9.tgz
HARBOR_DOMAIN=harbor.raymonds.cc

os(){
   OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}

check_file (){
   cd ${SRC_DIR}
  rpm -q wget &> /dev/null || yum -y install wget &> /dev/null
   if [ ! -e ${DOCKER_FILE} ];then
       ${COLOR}"缺少${DOCKER_FILE}文件,如果是离线包,请把文件放到${SRC_DIR}目录下"${END}
       ${COLOR}'开始下载DOCKER二进制安装包'${END}
       wget ${URL}${DOCKER_FILE} || { ${COLOR}"DOCKER二进制安装包下载失败"${END}; exit; }
   else
       ${COLOR}"相关文件已准备好"${END}
   fi
}

install(){
  [ -f /usr/bin/docker ] && { ${COLOR}"DOCKER已存在,安装失败"${END};exit; }
   ${COLOR}"开始安装DOCKER..."${END}
  tar xf ${DOCKER_FILE}
   mv docker/* /usr/bin/
   cat > /lib/systemd/system/docker.service <<-EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
   mkdir -p /etc/docker
   cat > /etc/docker/daemon.json <<-EOF
{
   "registry-mirrors": [
       "https://registry.docker-cn.com",
       "http://hub-mirror.c.163.com",
       "https://docker.mirrors.ustc.edu.cn"
  ],
   "insecure-registries": ["${HARBOR_DOMAIN}"],
   "exec-opts": ["native.cgroupdriver=systemd"],
   "max-concurrent-downloads": 10,
   "max-concurrent-uploads": 5,
   "log-opts": {
       "max-size": "300m",
       "max-file": "2"  
  },
   "live-restore": true
}
EOF
   echo 'alias rmi="docker images -qa|xargs docker rmi -f"' >> ~/.bashrc
   echo 'alias rmc="docker ps -qa|xargs docker rm -f"' >> ~/.bashrc
  systemctl daemon-reload
  systemctl enable --now docker &> /dev/null
  systemctl is-active docker &> /dev/null && ${COLOR}"Docker 服务启动成功"${END} || { ${COLOR}"Docker 启动失败"${END};exit; }
  docker version && ${COLOR}"Docker 安装成功"${END} || ${COLOR}"Docker 安装失败"${END}
}

set_swap_limit(){
   if [ ${OS_ID} == "Ubuntu" ];then
       ${COLOR}'设置Docker的"WARNING: No swap limit support"警告'${END}
        sed -ri '/^GRUB_CMDLINE_LINUX=/s@"$@ swapaccount=1"@' /etc/default/grub
       update-grub &> /dev/null
       ${COLOR}"10秒后,机器会自动重启"${END}
       sleep 10
      reboot
   fi
}

main(){
  os
  check_file
  install
  set_swap_limit
}

main

[root@k8s-master01 ~]# bash install_docker_binary.sh

[root@k8s-master02 ~]# bash install_docker_binary.sh

[root@k8s-master03 ~]# bash install_docker_binary.sh

[root@k8s-node01 ~]# bash install_docker_binary.sh

[root@k8s-node02 ~]# bash install_docker_binary.sh

[root@k8s-node03 ~]# bash install_docker_binary.sh
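
As a quick sanity check on each node (a sketch, not part of the original script), confirm that Docker picked up the systemd cgroup driver and the registry mirrors from daemon.json:

[root@k8s-master01 ~]# docker info | grep -A3 -E 'Cgroup Driver|Registry Mirrors'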

8. Deploy the Master Nodes

8.1 Create the etcd directories and copy the etcd certificates

Create the etcd certificate directory on the master nodes:

[root@k8s-master01 ~]# mkdir /etc/etcd/ssl -p
[root@k8s-master02 ~]# mkdir /etc/etcd/ssl -p
[root@k8s-master03 ~]# mkdir /etc/etcd/ssl -p

Copy the etcd certificates to the master nodes:

[root@k8s-etcd01 pki]# for NODE in k8s-master01 k8s-master02 k8s-master03; do
    ssh -o StrictHostKeyChecking=no $NODE "mkdir -p /etc/etcd/ssl"
    for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
      scp -o StrictHostKeyChecking=no /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
    done
done

Create the etcd certificate directory on all master nodes and link the certificates into it:

[root@k8s-master01 ~]# mkdir /etc/kubernetes/pki/etcd -p
[root@k8s-master02 ~]# mkdir /etc/kubernetes/pki/etcd -p
[root@k8s-master03 ~]# mkdir /etc/kubernetes/pki/etcd -p

[root@k8s-master01 ~]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-master02 ~]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-master03 ~]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/

8.2 Install the Kubernetes Components

Download the Kubernetes server package:

[root@k8s-master01 ~]# wget https://dl.k8s.io/v1.20.14/kubernetes-server-linux-amd64.tar.gz

Download the latest 1.20.x release; the server binaries are listed at:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#server-binaries

Open the page and download the Server Binaries archive from there.

Unpack the Kubernetes binaries:

[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

Check the version:

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.20.14

Send the binaries to the other master nodes:

[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03; do echo $NODE; scp -o StrictHostKeyChecking=no /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; done

Create the /opt/cni/bin directory on the master nodes:

[root@k8s-master01 ~]# mkdir -p /opt/cni/bin
[root@k8s-master02 ~]# mkdir -p /opt/cni/bin
[root@k8s-master03 ~]# mkdir -p /opt/cni/bin

8.3 Generate the Kubernetes Component Certificates

This is the most critical part of a binary installation; a single mistake here breaks everything, so make sure every step is done correctly.

Create the Kubernetes directories on the master nodes:

[root@k8s-master01 ~]# mkdir -p /etc/kubernetes/pki
[root@k8s-master02 ~]# mkdir -p /etc/kubernetes/pki
[root@k8s-master03 ~]# mkdir -p /etc/kubernetes/pki

Download the certificate generation tools on master01:

[root@k8s-master01 ~]# wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl

[root@k8s-master01 ~]# wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson

[root@k8s-master01 ~]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

8.3.1 Generate the CA Certificate

[root@k8s-master01 ~]# mkdir pki
[root@k8s-master01 ~]# cd pki/

[root@k8s-master01 pki]# cat ca-csr.json
{
 "CN": "kubernetes",
 "key": {
   "algo": "rsa",
   "size": 2048
},
 "names": [
  {
     "C": "CN",
     "ST": "Beijing",
     "L": "Beijing",
     "O": "Kubernetes",
     "OU": "Kubernetes-manual"
  }
],
 "ca": {
   "expiry": "876000h"
}
}

[root@k8s-master01 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
#Result
2022/02/25 16:19:52 [INFO] generating a new CA key and certificate from CSR
2022/02/25 16:19:52 [INFO] generate received request
2022/02/25 16:19:52 [INFO] received CSR
2022/02/25 16:19:52 [INFO] generating key: rsa-2048
2022/02/25 16:19:52 [INFO] encoded CSR
2022/02/25 16:19:52 [INFO] signed certificate with serial number 50700045204155982111782984381054779655420622936

[root@k8s-master01 pki]# ll /etc/kubernetes/pki/ca*
-rw-r--r-- 1 root root 1025 Feb 25 16:19 /etc/kubernetes/pki/ca.csr
-rw------- 1 root root 1675 Feb 25 16:19 /etc/kubernetes/pki/ca-key.pem
-rw-r--r-- 1 root root 1411 Feb 25 16:19 /etc/kubernetes/pki/ca.pem

8.3.2 Generate the apiserver Certificate

# 10.96.0.1 is the first address of the k8s Service network (10.96.0.0/12); if you change the Service network, change 10.96.0.1 in the -hostname list below accordingly.

# If this is not a highly available cluster, 172.31.3.188 is the IP of master01.

[root@k8s-master01 pki]# cat ca-config.json 
{
 "signing": {
   "default": {
     "expiry": "876000h"
  },
   "profiles": {
     "kubernetes": {
       "usages": [
           "signing",
           "key encipherment",
           "server auth",
           "client auth"
      ],
       "expiry": "876000h"
    }
  }
}
}

[root@k8s-master01 pki]# cat apiserver-csr.json
{
 "CN": "kube-apiserver",
 "key": {
   "algo": "rsa",
   "size": 2048
},
 "names": [
  {
     "C": "CN",
     "ST": "Beijing",
     "L": "Beijing",
     "O": "Kubernetes",
     "OU": "Kubernetes-manual"
  }
]
}

[root@k8s-master01 pki]# cfssl gencert   -ca=/etc/kubernetes/pki/ca.pem   -ca-key=/etc/kubernetes/pki/ca-key.pem   -config=ca-config.json   -hostname=10.96.0.1,172.31.3.188,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,172.31.3.101,172.31.3.102,172.31.3.103 -profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
#Result
2022/02/25 16:21:42 [INFO] generate received request
2022/02/25 16:21:42 [INFO] received CSR
2022/02/25 16:21:42 [INFO] generating key: rsa-2048
2022/02/25 16:21:42 [INFO] encoded CSR
2022/02/25 16:21:42 [INFO] signed certificate with serial number 14326114816925312981811634565226925868722808544

[root@k8s-master01 pki]# ll /etc/kubernetes/pki/apiserver*
-rw-r--r-- 1 root root 1029 Feb 25 16:21 /etc/kubernetes/pki/apiserver.csr
-rw------- 1 root root 1675 Feb 25 16:21 /etc/kubernetes/pki/apiserver-key.pem
-rw-r--r-- 1 root root 1692 Feb 25 16:21 /etc/kubernetes/pki/apiserver.pem

8.3.3 Generate the apiserver Aggregation Certificate

Generate the aggregation (front-proxy) certificate for the apiserver, i.e. the requestheader-client-xxx / requestheader-allowed-xxx settings used by the aggregator.

[root@k8s-master01 pki]# cat front-proxy-ca-csr.json
{
 "CN": "kubernetes",
 "key": {
    "algo": "rsa",
    "size": 2048
}
}

[root@k8s-master01 pki]# cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
#Result
2022/02/25 16:23:07 [INFO] generating a new CA key and certificate from CSR
2022/02/25 16:23:07 [INFO] generate received request
2022/02/25 16:23:07 [INFO] received CSR
2022/02/25 16:23:07 [INFO] generating key: rsa-2048
2022/02/25 16:23:07 [INFO] encoded CSR
2022/02/25 16:23:07 [INFO] signed certificate with serial number 60929528331736839052879833998406013639330884564

[root@k8s-master01 pki]# ll /etc/kubernetes/pki/front-proxy-ca*
-rw-r--r-- 1 root root  891 Feb 25 16:23 /etc/kubernetes/pki/front-proxy-ca.csr
-rw------- 1 root root 1679 Feb 25 16:23 /etc/kubernetes/pki/front-proxy-ca-key.pem
-rw-r--r-- 1 root root 1143 Feb 25 16:23 /etc/kubernetes/pki/front-proxy-ca.pem

[root@k8s-master01 pki]# cat ca-config.json
{
 "signing": {
   "default": {
     "expiry": "876000h"
  },
   "profiles": {
     "kubernetes": {
       "usages": [
           "signing",
           "key encipherment",
           "server auth",
           "client auth"
      ],
       "expiry": "876000h"
    }
  }
}
}

[root@k8s-master01 pki]# cat front-proxy-client-csr.json
{
 "CN": "front-proxy-client",
 "key": {
    "algo": "rsa",
    "size": 2048
}
}

[root@k8s-master01 pki]# cfssl gencert   -ca=/etc/kubernetes/pki/front-proxy-ca.pem   -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   -config=ca-config.json   -profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
#Result (ignore the warning)
2022/02/25 16:24:29 [INFO] generate received request
2022/02/25 16:24:29 [INFO] received CSR
2022/02/25 16:24:29 [INFO] generating key: rsa-2048
2022/02/25 16:24:29 [INFO] encoded CSR
2022/02/25 16:24:29 [INFO] signed certificate with serial number 625247142890350892356758545319462713918431205897
2022/02/25 16:24:29 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master01 pki]# ll /etc/kubernetes/pki/front-proxy-client*
-rw-r--r-- 1 root root  903 Feb 25 16:24 /etc/kubernetes/pki/front-proxy-client.csr
-rw------- 1 root root 1675 Feb 25 16:24 /etc/kubernetes/pki/front-proxy-client-key.pem
-rw-r--r-- 1 root root 1188 Feb 25 16:24 /etc/kubernetes/pki/front-proxy-client.pem

8.3.4 Generate the controller-manager Certificate and kubeconfig

[root@k8s-master01 pki]# cat ca-config.json
{
 "signing": {
   "default": {
     "expiry": "876000h"
  },
   "profiles": {
     "kubernetes": {
       "usages": [
           "signing",
           "key encipherment",
           "server auth",
           "client auth"
      ],
       "expiry": "876000h"
    }
  }
}
}

[root@k8s-master01 pki]# cat manager-csr.json
{
 "CN": "system:kube-controller-manager",
 "key": {
   "algo": "rsa",
   "size": 2048
},
 "names": [
  {
     "C": "CN",
     "ST": "Beijing",
     "L": "Beijing",
     "O": "system:kube-controller-manager",
     "OU": "Kubernetes-manual"
  }
]
}

[root@k8s-master01 pki]# cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
#Result
2022/02/25 16:30:55 [INFO] generate received request
2022/02/25 16:30:55 [INFO] received CSR
2022/02/25 16:30:55 [INFO] generating key: rsa-2048
2022/02/25 16:30:56 [INFO] encoded CSR
2022/02/25 16:30:56 [INFO] signed certificate with serial number 603818792655902270143954916469072941384696164716
2022/02/25 16:30:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master01 pki]# ll /etc/kubernetes/pki/controller-manager*
-rw-r--r-- 1 root root 1082 Feb 25 16:30 /etc/kubernetes/pki/controller-manager.csr
-rw------- 1 root root 1675 Feb 25 16:30 /etc/kubernetes/pki/controller-manager-key.pem
-rw-r--r-- 1 root root 1501 Feb 25 16:30 /etc/kubernetes/pki/controller-manager.pem

# Note: if this is not a highly available cluster, change 172.31.3.188:6443 to the address of master01
# set-cluster: define a cluster entry
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://172.31.3.188:6443 \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
#Result
Cluster "kubernetes" set.

# set-credentials: define a user entry
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
    --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
#Result
User "system:kube-controller-manager" set.

# set-context: define a context entry
[root@k8s-master01 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
   --cluster=kubernetes \
   --user=system:kube-controller-manager \
   --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
#Result
Context "system:kube-controller-manager@kubernetes" created.

# use-context: make this context the default
[root@k8s-master01 pki]# kubectl config use-context system:kube-controller-manager@kubernetes \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
#Result
Switched to context "system:kube-controller-manager@kubernetes".
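
As an optional check (a sketch), confirm that the generated kubeconfig points at the VIP and has the certificates embedded; the client certificate data is shown redacted:

[root@k8s-master01 pki]# kubectl config view --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig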

8.3.5 Generate the scheduler Certificate and kubeconfig

[root@k8s-master01 pki]# cat scheduler-csr.json 
{
 "CN": "system:kube-scheduler",
 "key": {
   "algo": "rsa",
   "size": 2048
},
 "names": [
  {
     "C": "CN",
     "ST": "Beijing",
     "L": "Beijing",
     "O": "system:kube-scheduler",
     "OU": "Kubernetes-manual"
  }
]
}

[root@k8s-master01 pki]# cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
#Result
2022/02/25 16:34:13 [INFO] generate received request
2022/02/25 16:34:13 [INFO] received CSR
2022/02/25 16:34:13 [INFO] generating key: rsa-2048
2022/02/25 16:34:14 [INFO] encoded CSR
2022/02/25 16:34:14 [INFO] signed certificate with serial number 28588241162948355825534653175725392424189325408
2022/02/25 16:34:14 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master01 pki]# ll /etc/kubernetes/pki/scheduler*
-rw-r--r-- 1 root root 1058 Feb 25 16:34 /etc/kubernetes/pki/scheduler.csr
-rw------- 1 root root 1679 Feb 25 16:34 /etc/kubernetes/pki/scheduler-key.pem
-rw-r--r-- 1 root root 1476 Feb 25 16:34 /etc/kubernetes/pki/scheduler.pem

# Note: if this is not a highly available cluster, change 172.31.3.188:6443 to the address of master01
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://172.31.3.188:6443 \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
#Result
Cluster "kubernetes" set.

[root@k8s-master01 pki]# kubectl config set-credentials system:kube-scheduler \
    --client-certificate=/etc/kubernetes/pki/scheduler.pem \
    --client-key=/etc/kubernetes/pki/scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
#Result
User "system:kube-scheduler" set.

[root@k8s-master01 pki]# kubectl config set-context system:kube-scheduler@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-scheduler \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
#Result
Context "system:kube-scheduler@kubernetes" created.

[root@k8s-master01 pki]# kubectl config use-context system:kube-scheduler@kubernetes \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
#Result
Switched to context "system:kube-scheduler@kubernetes".

8.3.6 Generate the admin Certificate and kubeconfig

[root@k8s-master01 pki]# cat ca-config.json 
{
 "signing": {
   "default": {
     "expiry": "876000h"
  },
   "profiles": {
     "kubernetes": {
       "usages": [
           "signing",
           "key encipherment",
           "server auth",
           "client auth"
      ],
       "expiry": "876000h"
    }
  }
}
}

[root@k8s-master01 pki]# cat admin-csr.json
{
 "CN": "admin",
 "key": {
   "algo": "rsa",
   "size": 2048
},
 "names": [
  {
     "C": "CN",
     "ST": "Beijing",
     "L": "Beijing",
     "O": "system:masters",
     "OU": "Kubernetes-manual"
  }
]
}

[root@k8s-master01 pki]# cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
#Result
2022/02/25 16:37:03 [INFO] generate received request
2022/02/25 16:37:03 [INFO] received CSR
2022/02/25 16:37:03 [INFO] generating key: rsa-2048
2022/02/25 16:37:03 [INFO] encoded CSR
2022/02/25 16:37:03 [INFO] signed certificate with serial number 406554626408020347401920084654276875970659166990
2022/02/25 16:37:03 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master01 pki]# ll /etc/kubernetes/pki/admin*
-rw-r--r-- 1 root root 1025 Feb 25 16:37 /etc/kubernetes/pki/admin.csr
-rw------- 1 root root 1675 Feb 25 16:37 /etc/kubernetes/pki/admin-key.pem
-rw-r--r-- 1 root root 1444 Feb 25 16:37 /etc/kubernetes/pki/admin.pem

# Note: if this is not a highly available cluster, change 172.31.3.188:6443 to the address of master01
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes     --certificate-authority=/etc/kubernetes/pki/ca.pem     --embed-certs=true     --server=https://172.31.3.188:6443     --kubeconfig=/etc/kubernetes/admin.kubeconfig
#Result
Cluster "kubernetes" set.

[root@k8s-master01 pki]# kubectl config set-credentials kubernetes-admin     --client-certificate=/etc/kubernetes/pki/admin.pem     --client-key=/etc/kubernetes/pki/admin-key.pem     --embed-certs=true     --kubeconfig=/etc/kubernetes/admin.kubeconfig
#Result
User "kubernetes-admin" set.

[root@k8s-master01 pki]# kubectl config set-context kubernetes-admin@kubernetes     --cluster=kubernetes     --user=kubernetes-admin     --kubeconfig=/etc/kubernetes/admin.kubeconfig
#Result
Context "kubernetes-admin@kubernetes" created.

[root@k8s-master01 pki]# kubectl config use-context kubernetes-admin@kubernetes     --kubeconfig=/etc/kubernetes/admin.kubeconfig
#Result
Switched to context "kubernetes-admin@kubernetes".

8.3.7 Create the ServiceAccount Key

Create the ServiceAccount key pair used to sign and verify service account tokens:

[root@k8s-master01 pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
#Result
Generating RSA private key, 2048 bit long modulus
..............................+++
....................................+++
e is 65537 (0x10001)

[root@k8s-master01 pki]# openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
#Result
writing RSA key

[root@k8s-master01 pki]# ll /etc/kubernetes/pki/sa*
-rw-r--r-- 1 root root 1675 Feb 25 16:38 /etc/kubernetes/pki/sa.key
-rw-r--r-- 1 root root  451 Feb 25 16:39 /etc/kubernetes/pki/sa.pub
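
As an optional check (a sketch), verify that sa.pub really is the public half of sa.key by comparing the RSA moduli; the two hashes should match:

[root@k8s-master01 pki]# openssl rsa -noout -modulus -in /etc/kubernetes/pki/sa.key | md5sum
[root@k8s-master01 pki]# openssl rsa -noout -modulus -pubin -in /etc/kubernetes/pki/sa.pub | md5sum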

Send the certificates to the other master nodes:

[root@k8s-master01 pki]# for NODE in k8s-master02 k8s-master03; do 
   for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
      scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
   done;
   for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
      scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
   done;
done

Check the certificate files:

[root@k8s-master01 pki]# ll -R /etc/kubernetes/pki/
/etc/kubernetes/pki/:
total 92
-rw-r--r-- 1 root root 1025 Feb 25 16:37 admin.csr
-rw------- 1 root root 1675 Feb 25 16:37 admin-key.pem
-rw-r--r-- 1 root root 1444 Feb 25 16:37 admin.pem
-rw-r--r-- 1 root root 1029 Feb 25 16:21 apiserver.csr
-rw------- 1 root root 1675 Feb 25 16:21 apiserver-key.pem
-rw-r--r-- 1 root root 1692 Feb 25 16:21 apiserver.pem
-rw-r--r-- 1 root root 1025 Feb 25 16:19 ca.csr
-rw------- 1 root root 1675 Feb 25 16:19 ca-key.pem
-rw-r--r-- 1 root root 1411 Feb 25 16:19 ca.pem
-rw-r--r-- 1 root root 1082 Feb 25 16:30 controller-manager.csr
-rw------- 1 root root 1675 Feb 25 16:30 controller-manager-key.pem
-rw-r--r-- 1 root root 1501 Feb 25 16:30 controller-manager.pem
drwxr-xr-x 2 root root   84 Feb 25 15:56 etcd
-rw-r--r-- 1 root root  891 Feb 25 16:23 front-proxy-ca.csr
-rw------- 1 root root 1679 Feb 25 16:23 front-proxy-ca-key.pem
-rw-r--r-- 1 root root 1143 Feb 25 16:23 front-proxy-ca.pem
-rw-r--r-- 1 root root  903 Feb 25 16:24 front-proxy-client.csr
-rw------- 1 root root 1675 Feb 25 16:24 front-proxy-client-key.pem
-rw-r--r-- 1 root root 1188 Feb 25 16:24 front-proxy-client.pem
-rw-r--r-- 1 root root 1675 Feb 25 16:38 sa.key
-rw-r--r-- 1 root root  451 Feb 25 16:39 sa.pub
-rw-r--r-- 1 root root 1058 Feb 25 16:34 scheduler.csr
-rw------- 1 root root 1679 Feb 25 16:34 scheduler-key.pem
-rw-r--r-- 1 root root 1476 Feb 25 16:34 scheduler.pem

/etc/kubernetes/pki/etcd:
total 0
lrwxrwxrwx 1 root root 29 Feb 25 15:56 etcd-ca-key.pem -> /etc/etcd/ssl/etcd-ca-key.pem
lrwxrwxrwx 1 root root 25 Feb 25 15:56 etcd-ca.pem -> /etc/etcd/ssl/etcd-ca.pem
lrwxrwxrwx 1 root root 26 Feb 25 15:56 etcd-key.pem -> /etc/etcd/ssl/etcd-key.pem
lrwxrwxrwx 1 root root 22 Feb 25 15:56 etcd.pem -> /etc/etcd/ssl/etcd.pem


8.4 Kubernetes Component Configuration

Create the required directories on the master nodes:

[root@k8s-master01 pki]# cd
[root@k8s-master01 ~]# mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

[root@k8s-master02 ~]# mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

[root@k8s-master03 ~]# mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

8.4.1 Apiserver

Create the kube-apiserver service on all master nodes. Note: if this is not a highly available cluster, change 172.31.3.188 to the address of master01.

8.4.1.1 Master01 Configuration

Note that this document uses 10.96.0.0/12 as the k8s Service network; it must not overlap with the host network or the Pod network. Adjust as needed.

[root@k8s-master01 ~]# vim /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
     --v=2 \
     --logtostderr=true \
     --allow-privileged=true \
     --bind-address=0.0.0.0 \
     --secure-port=6443 \
     --insecure-port=0 \
     --advertise-address=172.31.3.101 \
     --service-cluster-ip-range=10.96.0.0/12 \
     --service-node-port-range=30000-32767 \
     --etcd-servers=https://172.31.3.108:2379,https://172.31.3.109:2379,https://172.31.3.110:2379 \
     --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
     --etcd-certfile=/etc/etcd/ssl/etcd.pem \
     --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
     --client-ca-file=/etc/kubernetes/pki/ca.pem \
     --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
     --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
     --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
     --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
     --service-account-key-file=/etc/kubernetes/pki/sa.pub \
     --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
     --service-account-issuer=https://kubernetes.default.svc.cluster.local \
     --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
     --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
     --authorization-mode=Node,RBAC \
     --enable-bootstrap-token-auth=true \
     --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
     --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
     --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
     --requestheader-allowed-names=aggregator \
     --requestheader-group-headers=X-Remote-Group \
     --requestheader-extra-headers-prefix=X-Remote-Extra- \
     --requestheader-username-headers=X-Remote-User
     # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

8.4.1.2 Master02 Configuration

Note that this document uses 10.96.0.0/12 as the k8s Service network; it must not overlap with the host network or the Pod network. Adjust as needed.

[root@k8s-master02 ~]# vim /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
     --v=2 \
     --logtostderr=true \
     --allow-privileged=true \
     --bind-address=0.0.0.0 \
     --secure-port=6443 \
     --insecure-port=0 \
     --advertise-address=172.31.3.102 \
     --service-cluster-ip-range=10.96.0.0/12 \
     --service-node-port-range=30000-32767 \
     --etcd-servers=https://172.31.3.108:2379,https://172.31.3.109:2379,https://172.31.3.110:2379 \
     --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
     --etcd-certfile=/etc/etcd/ssl/etcd.pem \
     --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
     --client-ca-file=/etc/kubernetes/pki/ca.pem \
     --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
     --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
     --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
     --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
     --service-account-key-file=/etc/kubernetes/pki/sa.pub \
     --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
     --service-account-issuer=https://kubernetes.default.svc.cluster.local \
     --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
     --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
     --authorization-mode=Node,RBAC \
     --enable-bootstrap-token-auth=true \
     --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
     --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
     --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
     --requestheader-allowed-names=aggregator \
     --requestheader-group-headers=X-Remote-Group \
     --requestheader-extra-headers-prefix=X-Remote-Extra- \
     --requestheader-username-headers=X-Remote-User
     # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

8.4.1.3 Master03 Configuration

Note that this document uses 10.96.0.0/12 as the k8s Service network; it must not overlap with the host network or the Pod network. Adjust as needed.

[root@k8s-master03 ~]# vim /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
     --v=2 \
     --logtostderr=true \
     --allow-privileged=true \
     --bind-address=0.0.0.0 \
     --secure-port=6443 \
     --insecure-port=0 \
     --advertise-address=172.31.3.103 \
     --service-cluster-ip-range=10.96.0.0/12 \
     --service-node-port-range=30000-32767 \
     --etcd-servers=https://172.31.3.108:2379,https://172.31.3.109:2379,https://172.31.3.110:2379 \
     --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
     --etcd-certfile=/etc/etcd/ssl/etcd.pem \
     --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
     --client-ca-file=/etc/kubernetes/pki/ca.pem \
     --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
     --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
     --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
     --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
     --service-account-key-file=/etc/kubernetes/pki/sa.pub \
     --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
     --service-account-issuer=https://kubernetes.default.svc.cluster.local \
     --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
     --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
     --authorization-mode=Node,RBAC \
     --enable-bootstrap-token-auth=true \
     --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
     --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
     --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
     --requestheader-allowed-names=aggregator \
     --requestheader-group-headers=X-Remote-Group \
     --requestheader-extra-headers-prefix=X-Remote-Extra- \
     --requestheader-username-headers=X-Remote-User
     # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

8.4.1.4 Start the apiserver

Enable and start kube-apiserver on all master nodes:

[root@k8s-master01 ~]# systemctl daemon-reload && systemctl enable --now kube-apiserver

[root@k8s-master02 ~]# systemctl daemon-reload && systemctl enable --now kube-apiserver

[root@k8s-master03 ~]# systemctl daemon-reload && systemctl enable --now kube-apiserver

Check the kube-apiserver status:

[root@k8s-master01 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
  Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
  Active: active (running) since Fri 2022-02-25 16:55:38 CST; 15s ago
    Docs: https://github.com/kubernetes/kubernetes
Main PID: 13671 (kube-apiserver)
  Tasks: 8
  Memory: 302.6M
  CGroup: /system.slice/kube-apiserver.service
           └─13671 /usr/local/bin/kube-apiserver --v=2 --logtostderr=true --allow-privileged=true --bind-address=0.0.0.0 --secure-port=6443 ...

Feb 25 16:55:43 k8s-master01.example.local kube-apiserver[13671]: I0225 16:55:43.090985   13671 storage_rbac.go:326] created rolebindin...ystem
Feb 25 16:55:43 k8s-master01.example.local kube-apiserver[13671]: I0225 16:55:43.100954   13671 storage_rbac.go:326] created rolebindin...ystem
Feb 25 16:55:43 k8s-master01.example.local kube-apiserver[13671]: I0225 16:55:43.102507   13671 healthz.go:244] poststarthook/rbac/boot...eadyz
Feb 25 16:55:43 k8s-master01.example.local kube-apiserver[13671]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Feb 25 16:55:43 k8s-master01.example.local kube-apiserver[13671]: I0225 16:55:43.110647   13671 storage_rbac.go:326] created rolebindin...ystem
Feb 25 16:55:43 k8s-master01.example.local kube-apiserver[13671]: I0225 16:55:43.120843   13671 storage_rbac.go:326] created rolebindin...ystem
Feb 25 16:55:43 k8s-master01.example.local kube-apiserver[13671]: I0225 16:55:43.130291   13671 storage_rbac.go:326] created rolebindin...ublic
Feb 25 16:55:43 k8s-master01.example.local kube-apiserver[13671]: W0225 16:55:43.237118   13671 lease.go:233] Resetting endpoints for m....101]
Feb 25 16:55:43 k8s-master01.example.local kube-apiserver[13671]: I0225 16:55:43.238022   13671 controller.go:609] quota admission adde...oints
Feb 25 16:55:43 k8s-master01.example.local kube-apiserver[13671]: I0225 16:55:43.245826   13671 controller.go:609] quota admission adde...8s.io
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s-master02 ~]# systemctl status kube-apiserver
[root@k8s-master03 ~]# systemctl status kube-apiserver
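
As a quick functional check (a sketch), query the API server health endpoint through the admin kubeconfig generated in 8.3.6; it should print "ok":

[root@k8s-master01 ~]# kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get --raw='/healthz'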

The following messages in the system log can be ignored:

[root@k8s-master01 ~]#  tail -f /var/log/messages
...
Feb 25 16:56:23 k8s-master01 kube-apiserver: I0225 16:56:23.136848   13671 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"

8.4.2 ControllerManager

Create the kube-controller-manager service on all master nodes.

Note that this document uses 192.168.0.0/12 as the k8s Pod network; it must not overlap with the host network or the k8s Service network. Adjust as needed.

[root@k8s-master01 ~]# vim /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
     --v=2 \
     --logtostderr=true \
     --address=127.0.0.1 \
     --root-ca-file=/etc/kubernetes/pki/ca.pem \
     --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
     --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
     --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
     --leader-elect=true \
     --use-service-account-credentials=true \
     --node-monitor-grace-period=40s \
     --node-monitor-period=5s \
     --pod-eviction-timeout=2m0s \
     --controllers=*,bootstrapsigner,tokencleaner \
     --allocate-node-cidrs=true \
     --cluster-cidr=192.168.0.0/12 \
     --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
     --node-cidr-mask-size=24
     
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03; do scp /lib/systemd/system/kube-controller-manager.service $NODE:/lib/systemd/system/; done

Start kube-controller-manager on all master nodes:

[root@k8s-master01 ~]# systemctl daemon-reload && systemctl enable --now kube-controller-manager
[root@k8s-master02 ~]# systemctl daemon-reload && systemctl enable --now kube-controller-manager
[root@k8s-master03 ~]# systemctl daemon-reload && systemctl enable --now kube-controller-manager

Check the status:

[root@k8s-master01 ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
  Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
  Active: active (running) since Fri 2022-02-25 16:58:09 CST; 16s ago
    Docs: https://github.com/kubernetes/kubernetes
Main PID: 13723 (kube-controller)
  Tasks: 6
  Memory: 38.7M
  CGroup: /system.slice/kube-controller-manager.service
           └─13723 /usr/local/bin/kube-controller-manager --v=2 --logtostderr=true --address=127.0.0.1 --root-ca-file=/etc/kubernetes/pki/ca...

Feb 25 16:58:24 k8s-master01.example.local kube-controller-manager[13723]: I0225 16:58:24.294372   13723 reflector.go:219] Starting refl...:134
Feb 25 16:58:24 k8s-master01.example.local kube-controller-manager[13723]: I0225 16:58:24.294470   13723 reflector.go:219] Starting refl...:134
Feb 25 16:58:24 k8s-master01.example.local kube-controller-manager[13723]: I0225 16:58:24.294698   13723 reflector.go:219] Starting refl...:134
Feb 25 16:58:24 k8s-master01.example.local kube-controller-manager[13723]: I0225 16:58:24.294813   13723 reflector.go:219] Starting refl...:134
Feb 25 16:58:24 k8s-master01.example.local kube-controller-manager[13723]: I0225 16:58:24.294941   13723 reflector.go:219] Starting refl...o:90
Feb 25 16:58:24 k8s-master01.example.local kube-controller-manager[13723]: I0225 16:58:24.300284   13723 reflector.go:219] Starting refl...o:90
Feb 25 16:58:24 k8s-master01.example.local kube-controller-manager[13723]: I0225 16:58:24.593499   13723 shared_informer.go:247] Caches ...ctor
Feb 25 16:58:24 k8s-master01.example.local kube-controller-manager[13723]: I0225 16:58:24.593529   13723 garbagecollector.go:254] synced...ctor
Feb 25 16:58:24 k8s-master01.example.local kube-controller-manager[13723]: I0225 16:58:24.630713   13723 shared_informer.go:247] Caches ...ctor
Feb 25 16:58:24 k8s-master01.example.local kube-controller-manager[13723]: I0225 16:58:24.630766   13723 garbagecollector.go:151] Garbag...bage
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s-master02 ~]# systemctl status kube-controller-manager
[root@k8s-master03 ~]# systemctl status kube-controller-manager

8.4.3 Scheduler

Create the kube-scheduler service on all master nodes:

[root@k8s-master01 ~]# vim /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
     --v=2 \
     --logtostderr=true \
     --address=127.0.0.1 \
     --leader-elect=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03; do scp /lib/systemd/system/kube-scheduler.service $NODE:/lib/systemd/system/; done

[root@k8s-master01 ~]# systemctl daemon-reload && systemctl enable --now kube-scheduler
[root@k8s-master02 ~]# systemctl daemon-reload && systemctl enable --now kube-scheduler
[root@k8s-master03 ~]# systemctl daemon-reload && systemctl enable --now kube-scheduler

[root@k8s-master01 ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
  Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
  Active: active (running) since Fri 2022-02-25 16:59:50 CST; 14s ago
    Docs: https://github.com/kubernetes/kubernetes
Main PID: 13770 (kube-scheduler)
  Tasks: 7
  Memory: 18.1M
  CGroup: /system.slice/kube-scheduler.service
           └─13770 /usr/local/bin/kube-scheduler --v=2 --logtostderr=true --address=127.0.0.1 --leader-elect=true --kubeconfig=/etc/kubernet...

Feb 25 16:59:52 k8s-master01.example.local kube-scheduler[13770]: I0225 16:59:52.030124   13770 reflector.go:219] Starting reflector *v...o:134
Feb 25 16:59:52 k8s-master01.example.local kube-scheduler[13770]: I0225 16:59:52.030359   13770 reflector.go:219] Starting reflector *v...o:134
Feb 25 16:59:52 k8s-master01.example.local kube-scheduler[13770]: I0225 16:59:52.030534   13770 reflector.go:219] Starting reflector *v...o:134
Feb 25 16:59:52 k8s-master01.example.local kube-scheduler[13770]: I0225 16:59:52.030779   13770 reflector.go:219] Starting reflector *v...o:134
Feb 25 16:59:52 k8s-master01.example.local kube-scheduler[13770]: I0225 16:59:52.031157   13770 reflector.go:219] Starting reflector *v...o:134
Feb 25 16:59:52 k8s-master01.example.local kube-scheduler[13770]: I0225 16:59:52.031425   13770 reflector.go:219] Starting reflector *v...o:134
Feb 25 16:59:52 k8s-master01.example.local kube-scheduler[13770]: I0225 16:59:52.031717   13770 reflector.go:219] Starting reflector *v...o:134
Feb 25 16:59:52 k8s-master01.example.local kube-scheduler[13770]: I0225 16:59:52.032718   13770 reflector.go:219] Starting reflector *v...o:134
Feb 25 16:59:52 k8s-master01.example.local kube-scheduler[13770]: I0225 16:59:52.128803   13770 leaderelection.go:243] attempting to ac...er...
Feb 25 16:59:52 k8s-master01.example.local kube-scheduler[13770]: I0225 16:59:52.153623   13770 leaderelection.go:253] successfully acq...duler
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s-master02 ~]# systemctl status kube-scheduler
[root@k8s-master03 ~]# systemctl status kube-scheduler
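
At this point all three control-plane components are running. As an optional sanity check (a sketch; kubectl get cs is deprecated in 1.20 but still usable), they should report as healthy:

[root@k8s-master01 ~]# kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get componentstatuses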

8.4.4 TLS Bootstrapping

Create the bootstrap resources on master01:

[root@k8s-master01 ~]# vim bootstrap.secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver

# Note: if this is not a highly available cluster, change 172.31.3.188:6443 to the address of master01
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes     --certificate-authority=/etc/kubernetes/pki/ca.pem     --embed-certs=true     --server=https://172.31.3.188:6443     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
#Result
Cluster "kubernetes" set.

[root@k8s-master01 ~]# kubectl config set-credentials tls-bootstrap-token-user     --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
#Result
User "tls-bootstrap-token-user" set.

[root@k8s-master01 ~]# kubectl config set-context tls-bootstrap-token-user@kubernetes     --cluster=kubernetes     --user=tls-bootstrap-token-user     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
#Result
Context "tls-bootstrap-token-user@kubernetes" modified.

[root@k8s-master01 ~]# kubectl config use-context tls-bootstrap-token-user@kubernetes     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
#Result
Switched to context "tls-bootstrap-token-user@kubernetes".

Note: if you change the token-id and token-secret in bootstrap.secret.yaml, the two strings must stay consistent everywhere they appear and keep the same length, and the token used in the set-credentials command above (c8ad9c.2e4d610cf3e7426e) must match the values you set. A generation sketch follows.
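
A minimal sketch for generating a correctly sized pair (token-id: 6 characters, token-secret: 16 characters); whatever you generate must be used identically in bootstrap.secret.yaml and in the --token argument above:

[root@k8s-master01 ~]# TOKEN_ID=$(openssl rand -hex 3)        # e.g. c8ad9c
[root@k8s-master01 ~]# TOKEN_SECRET=$(openssl rand -hex 8)    # e.g. 2e4d610cf3e7426e
[root@k8s-master01 ~]# echo "${TOKEN_ID}.${TOKEN_SECRET}"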


[root@k8s-master01 ~]# mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

[root@k8s-master01 ~]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created

[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03; do
    for FILE in bootstrap-kubelet.kubeconfig; do
      scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
    done
done

8.4.5 Kubelet

Create the kubelet service on the master nodes:

[root@k8s-master01 ~]# vim /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03; do scp /lib/systemd/system/kubelet.service $NODE:/lib/systemd/system/ ;done

Download the pause image and push it to harbor:

[root@k8s-master01 ~]# docker login harbor.raymonds.cc
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@k8s-master01 ~]# cat download_pause_images_3.2.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:       Raymond
#QQ:           88563128
#Date:         2022-01-11
#FileName:     download_pause_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

PAUSE_VERSION=3.2
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
   ${COLOR}"开始下载Pause镜像"${END}
      docker pull registry.aliyuncs.com/google_containers/pause:${PAUSE_VERSION}
      docker tag registry.aliyuncs.com/google_containers/pause:${PAUSE_VERSION} ${HARBOR_DOMAIN}/google_containers/pause:${PAUSE_VERSION}
      docker rmi registry.aliyuncs.com/google_containers/pause:${PAUSE_VERSION}
      docker push ${HARBOR_DOMAIN}/google_containers/pause:${PAUSE_VERSION}
   ${COLOR}"Pause镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_pause_images_3.2.sh

[root@k8s-master01 ~]# docker images |grep pause
harbor.raymonds.cc/google_containers/pause      3.2                 80d28bedfe5d        23 months ago       683kB

Create the kubelet service drop-in configuration on the master nodes:

[root@k8s-master01 ~]# vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=harbor.raymonds.cc/google_containers/pause:3.2" #change the harbor registry to your own private registry address
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03; do scp etc/systemd/system/kubelet.service.d/10-kubelet.conf $NODE:/etc/systemd/system/kubelet.service.d/ ;done
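
Optionally, verify that systemd will merge the drop-in with the main unit before starting kubelet; a quick sanity check is:

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl cat kubelet.service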

Create the kubelet configuration file on the master nodes.

Note: if you changed the Kubernetes Service CIDR, update the clusterDNS setting in kubelet-conf.yml to the tenth address of the Service CIDR, for example 10.96.0.10.
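
The "tenth address" is simply the network address of the Service CIDR plus 10. A minimal shell sketch for deriving it, assuming the Service CIDR starts on a .0 boundary as 10.96.0.0/12 does:

SERVICE_CIDR=10.96.0.0/12        # adjust to your Service CIDR
BASE=${SERVICE_CIDR%/*}          # -> 10.96.0.0
CLUSTER_DNS="${BASE%.*}.10"      # -> 10.96.0.10, use this for clusterDNS
echo ${CLUSTER_DNS}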

[root@k8s-master01 ~]# vim /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kubelet-conf.yml $NODE:/etc/kubernetes/ ;done

Start kubelet on the master nodes:

[root@k8s-master01 ~]# systemctl daemon-reload && systemctl enable --now kubelet
[root@k8s-master02 ~]# systemctl daemon-reload && systemctl enable --now kubelet
[root@k8s-master03 ~]# systemctl daemon-reload && systemctl enable --now kubelet

[root@k8s-master01 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
  Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
          └─10-kubelet.conf
  Active: active (running) since Fri 2022-02-25 17:14:21 CST; 19s ago
    Docs: https://github.com/kubernetes/kubernetes
Main PID: 13943 (kubelet)
  Tasks: 13
  Memory: 39.9M
  CGroup: system.slice/kubelet.service
          └─13943 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/k...

Feb 25 17:14:34 k8s-master01.example.local kubelet[13943]: E0225 17:14:34.089479   13943 kubelet.go:2263] node "k8s-master01.example.l... found
Feb 25 17:14:34 k8s-master01.example.local kubelet[13943]: E0225 17:14:34.190545   13943 kubelet.go:2263] node "k8s-master01.example.l... found
Feb 25 17:14:34 k8s-master01.example.local kubelet[13943]: E0225 17:14:34.290854   13943 kubelet.go:2263] node "k8s-master01.example.l... found
Feb 25 17:14:34 k8s-master01.example.local kubelet[13943]: I0225 17:14:34.367572   13943 kubelet_node_status.go:74] Successfully regis....local
Feb 25 17:14:34 k8s-master01.example.local kubelet[13943]: I0225 17:14:34.391818   13943 kuberuntime_manager.go:1006] updating runtime...0.0/24
Feb 25 17:14:34 k8s-master01.example.local kubelet[13943]: I0225 17:14:34.392004   13943 docker_service.go:362] docker cri received ru...24,},}
Feb 25 17:14:34 k8s-master01.example.local kubelet[13943]: I0225 17:14:34.392109   13943 kubelet_network.go:77] Setting Pod CIDR:  -> ...0.0/24
Feb 25 17:14:34 k8s-master01.example.local kubelet[13943]: E0225 17:14:34.400995   13943 kubelet.go:2183] Container runtime network no...alized
Feb 25 17:14:36 k8s-master01.example.local kubelet[13943]: W0225 17:14:36.348085   13943 cni.go:239] Unable to update cni config: no n.../net.d
Feb 25 17:14:37 k8s-master01.example.local kubelet[13943]: E0225 17:14:37.720283   13943 kubelet.go:2183] Container runtime network no...alized
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s-master02 ~]# systemctl status kubelet
[root@k8s-master03 ~]# systemctl status kubelet

At this point, check the system log /var/log/messages:

# It is normal if only the following message appears
Feb 25 17:16:16 k8s-master01 kubelet: W0225 17:16:16.358217   13943 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d

Check the cluster status:

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES   AGE   VERSION
k8s-master01.example.local   NotReady   <none>   13m   v1.20.14
k8s-master02.example.local   NotReady   <none>   73s   v1.20.14
k8s-master03.example.local   NotReady   <none>   70s   v1.20.14
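
The NotReady status is expected at this stage because no CNI plugin has been installed yet; the nodes become Ready once Calico is deployed later. If a master does not register at all, an optional place to look is the bootstrap CSRs and the kubelet client certificates it obtained:

[root@k8s-master01 ~]# kubectl get csr
[root@k8s-master01 ~]# ls /var/lib/kubelet/pki/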

8.4.6 kube-proxy Configuration

# Note: if this is not a highly available cluster, change 172.31.3.188:6443 to the address of master01

[root@k8s-master01 ~]# kubectl -n kube-system create serviceaccount kube-proxy
#Output
serviceaccount/kube-proxy created

[root@k8s-master01 ~]# kubectl create clusterrolebinding system:kube-proxy         --clusterrole system:node-proxier         --serviceaccount kube-system:kube-proxy
#Output
clusterrolebinding.rbac.authorization.k8s.io/system:kube-proxy created

[root@k8s-master01 ~]# SECRET=$(kubectl -n kube-system get sa/kube-proxy \
   --output=jsonpath='{.secrets[0].name}')

[root@k8s-master01 ~]# JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)

[root@k8s-master01 ~]# PKI_DIR=/etc/kubernetes/pki

[root@k8s-master01 ~]# K8S_DIR=/etc/kubernetes

[root@k8s-master01 ~]# kubectl config set-cluster kubernetes     --certificate-authority=/etc/kubernetes/pki/ca.pem     --embed-certs=true     --server=https://172.31.3.188:6443     --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
#Output
Cluster "kubernetes" set.

[root@k8s-master01 ~]# kubectl config set-credentials kubernetes     --token=${JWT_TOKEN}     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
#Output
User "kubernetes" set.

[root@k8s-master01 ~]# kubectl config set-context kubernetes     --cluster=kubernetes     --user=kubernetes     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
#Output
Context "kubernetes" created.

[root@k8s-master01 ~]# kubectl config use-context kubernetes     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
#Output
Switched to context "kubernetes".
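
Optionally verify that the generated kubeconfig points at the VIP and carries the embedded token:

[root@k8s-master01 ~]# kubectl config view --minify --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig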

On master01, send the kube-proxy systemd service file to the other nodes.

If you changed the cluster Pod CIDR, change the clusterCIDR: 192.168.0.0/12 parameter in kube-proxy.conf to your Pod CIDR.
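
If you do need to change it later, a one-line edit such as the following is enough (a sketch to run after the file below has been created; substitute your own Pod CIDR for the example value):

[root@k8s-master01 ~]# sed -ri 's@^(clusterCIDR:).*@\1 172.16.0.0/12@' /etc/kubernetes/kube-proxy.conf
[root@k8s-master01 ~]# grep clusterCIDR /etc/kubernetes/kube-proxy.conf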

[root@k8s-master01 ~]# vim /etc/kubernetes/kube-proxy.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 192.168.0.0/12 # change to your Pod CIDR
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms

[root@k8s-master01 ~]# vim /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
 --config=/etc/kubernetes/kube-proxy.conf \
 --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03; do
    scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
    scp /etc/kubernetes/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
    scp /lib/systemd/system/kube-proxy.service $NODE:/lib/systemd/system/kube-proxy.service
done

Start kube-proxy on the master nodes:

[root@k8s-master01 ~]# systemctl daemon-reload && systemctl enable --now kube-proxy
[root@k8s-master02 ~]# systemctl daemon-reload && systemctl enable --now kube-proxy
[root@k8s-master03 ~]# systemctl daemon-reload && systemctl enable --now kube-proxy

[root@k8s-master01 ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube Proxy
  Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
  Active: active (running) since Fri 2022-02-25 18:05:42 CST; 14s ago
    Docs: https://github.com/kubernetes/kubernetes
Main PID: 24062 (kube-proxy)
  Tasks: 6
  Memory: 16.3M
  CGroup: system.slice/kube-proxy.service
          └─24062 /usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.conf --v=2

Feb 25 18:05:42 k8s-master01.example.local kube-proxy[24062]: I0225 18:05:42.674150   24062 shared_informer.go:240] Waiting for caches...config
Feb 25 18:05:42 k8s-master01.example.local kube-proxy[24062]: I0225 18:05:42.674161   24062 config.go:224] Starting endpoint slice con...roller
Feb 25 18:05:42 k8s-master01.example.local kube-proxy[24062]: I0225 18:05:42.674163   24062 shared_informer.go:240] Waiting for caches...config
Feb 25 18:05:42 k8s-master01.example.local kube-proxy[24062]: I0225 18:05:42.674291   24062 reflector.go:219] Starting reflector *v1.S...go:134
Feb 25 18:05:42 k8s-master01.example.local kube-proxy[24062]: I0225 18:05:42.674706   24062 reflector.go:219] Starting reflector *v1be...go:134
Feb 25 18:05:42 k8s-master01.example.local kube-proxy[24062]: I0225 18:05:42.678410   24062 service.go:275] Service default/kubernetes... ports
Feb 25 18:05:42 k8s-master01.example.local kube-proxy[24062]: I0225 18:05:42.774399   24062 shared_informer.go:247] Caches are synced ...config
Feb 25 18:05:42 k8s-master01.example.local kube-proxy[24062]: I0225 18:05:42.774481   24062 proxier.go:1036] Not syncing ipvs rules un...master
Feb 25 18:05:42 k8s-master01.example.local kube-proxy[24062]: I0225 18:05:42.774491   24062 shared_informer.go:247] Caches are synced ...config
Feb 25 18:05:42 k8s-master01.example.local kube-proxy[24062]: I0225 18:05:42.774789   24062 service.go:390] Adding new service port "d...43/TCP
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s-master02 ~]# systemctl status kube-proxy
[root@k8s-master03 ~]# systemctl status kube-proxy

9. Deploy the Worker Nodes

9.1 Install the Node Components

Send the components to the worker nodes:

[root@k8s-master01 ~]# for NODE in k8s-node01 k8s-node02 k8s-node03; do echo $NODE; scp -o StrictHostKeyChecking=no /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
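
Optionally confirm that the binaries arrived and run on each worker node:

[root@k8s-master01 ~]# for NODE in k8s-node01 k8s-node02 k8s-node03; do
    ssh $NODE "kubelet --version; kube-proxy --version"
done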

Create the /opt/cni/bin directory on the worker nodes:

[root@k8s-node01 ~]# mkdir -p /opt/cni/bin
[root@k8s-node02 ~]# mkdir -p /opt/cni/bin
[root@k8s-node03 ~]# mkdir -p /opt/cni/bin

9.2 Copy the etcd Certificates

Create the etcd certificate directory on the worker nodes:

[root@k8s-node01 ~]# mkdir /etc/etcd/ssl -p
[root@k8s-node02 ~]# mkdir /etc/etcd/ssl -p
[root@k8s-node03 ~]# mkdir /etc/etcd/ssl -p

Copy the etcd certificates to the worker nodes:

[root@k8s-etcd01 pki]# for NODE in k8s-node01 k8s-node02 k8s-node03; do
    ssh -o StrictHostKeyChecking=no $NODE "mkdir -p /etc/etcd/ssl"
    for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
      scp -o StrictHostKeyChecking=no /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
    done
done

Create the etcd certificate directory on all worker nodes and link the certificates into it:

[root@k8s-node01 ~]# mkdir /etc/kubernetes/pki/etcd -p
[root@k8s-node02 ~]# mkdir /etc/kubernetes/pki/etcd -p
[root@k8s-node03 ~]# mkdir /etc/kubernetes/pki/etcd -p

[root@k8s-node01 ~]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-node02 ~]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-node03 ~]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/

9.3 Copy the Kubernetes Certificates and Configuration Files

Create the Kubernetes directories on the worker nodes:

[root@k8s-node01 ~]# mkdir -p /etc/kubernetes/pki
[root@k8s-node02 ~]# mkdir -p /etc/kubernetes/pki
[root@k8s-node03 ~]# mkdir -p /etc/kubernetes/pki

Copy the certificates from Master01 to the worker nodes:

[root@k8s-master01 ~]# for NODE in k8s-node01 k8s-node02 k8s-node03; do
    for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
      scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
    done
done

9.4 Configure kubelet

Create the required directories on the worker nodes:

[root@k8s-node01 ~]# mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

[root@k8s-node02 ~]# mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

[root@k8s-node03 ~]# mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

Copy the kubelet service file from Master01 to the worker nodes:

[root@k8s-master01 ~]# for NODE in k8s-node01 k8s-node02 k8s-node03; do scp /lib/systemd/system/kubelet.service $NODE:/lib/systemd/system/ ;done

Copy the kubelet service drop-in file from Master01 to the worker nodes:

[root@k8s-master01 ~]# for NODE in k8s-node01 k8s-node02 k8s-node03; do scp /etc/systemd/system/kubelet.service.d/10-kubelet.conf $NODE:/etc/systemd/system/kubelet.service.d/ ;done

Copy the kubelet configuration file from Master01 to the worker nodes:

[root@k8s-master01 ~]# for NODE in k8s-node01 k8s-node02 k8s-node03; do scp /etc/kubernetes/kubelet-conf.yml $NODE:/etc/kubernetes/ ;done

Start kubelet on the worker nodes:

[root@k8s-node01 ~]# systemctl daemon-reload && systemctl enable --now kubelet
[root@k8s-node02 ~]# systemctl daemon-reload && systemctl enable --now kubelet
[root@k8s-node03 ~]# systemctl daemon-reload && systemctl enable --now kubelet

[root@k8s-node01 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
  Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
          └─10-kubelet.conf
  Active: active (running) since Fri 2022-02-25 17:40:22 CST; 10s ago
    Docs: https://github.com/kubernetes/kubernetes
Main PID: 13450 (kubelet)
  Tasks: 12
  Memory: 38.2M
  CGroup: system.slice/kubelet.service
          └─13450 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/k...

Feb 25 17:40:30 k8s-node01.example.local kubelet[13450]: I0225 17:40:30.011461   13450 cpu_manager.go:194] [cpumanager] reconciling every 10s
Feb 25 17:40:30 k8s-node01.example.local kubelet[13450]: I0225 17:40:30.011478   13450 state_mem.go:36] [cpumanager] initializing new ... store
Feb 25 17:40:30 k8s-node01.example.local kubelet[13450]: I0225 17:40:30.014239   13450 policy_none.go:43] [cpumanager] none policy: Start
Feb 25 17:40:30 k8s-node01.example.local kubelet[13450]: W0225 17:40:30.052409   13450 manager.go:595] Failed to retrieve checkpoint f... found
Feb 25 17:40:30 k8s-node01.example.local kubelet[13450]: I0225 17:40:30.054679   13450 plugin_manager.go:114] Starting Kubelet Plugin Manager
Feb 25 17:40:30 k8s-node01.example.local kubelet[13450]: I0225 17:40:30.076391   13450 kuberuntime_manager.go:1006] updating runtime c...4.0/24
Feb 25 17:40:30 k8s-node01.example.local kubelet[13450]: I0225 17:40:30.076643   13450 docker_service.go:362] docker cri received runt...24,},}
Feb 25 17:40:30 k8s-node01.example.local kubelet[13450]: I0225 17:40:30.076779   13450 kubelet_network.go:77] Setting Pod CIDR:  -> 19...4.0/24
Feb 25 17:40:30 k8s-node01.example.local kubelet[13450]: E0225 17:40:30.083546   13450 kubelet.go:2183] Container runtime network not ...alized
Feb 25 17:40:30 k8s-node01.example.local kubelet[13450]: I0225 17:40:30.182896   13450 reconciler.go:157] Reconciler: start to sync state
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s-node02 ~]# systemctl status kubelet
[root@k8s-node03 ~]# systemctl status kubelet

Check the cluster status:

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES   AGE   VERSION
k8s-master01.example.local   NotReady   <none>   27m   v1.20.14
k8s-master02.example.local   NotReady   <none>   15m   v1.20.14
k8s-master03.example.local   NotReady   <none>   15m   v1.20.14
k8s-node01.example.local     NotReady   <none>   66s   v1.20.14
k8s-node02.example.local     NotReady   <none>   64s   v1.20.14
k8s-node03.example.local     NotReady   <none>   72s   v1.20.14

9.5 Configure kube-proxy

Copy the kube-proxy files from Master01 to the worker nodes:

[root@k8s-master01 ~]# for NODE in k8s-node01 k8s-node02 k8s-node03; do
    scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
    scp /etc/kubernetes/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
    scp /lib/systemd/system/kube-proxy.service $NODE:/lib/systemd/system/kube-proxy.service
done

Start kube-proxy on the worker nodes:

[root@k8s-node01 ~]# systemctl daemon-reload && systemctl enable --now kube-proxy
[root@k8s-node02 ~]# systemctl daemon-reload && systemctl enable --now kube-proxy
[root@k8s-node03 ~]# systemctl daemon-reload && systemctl enable --now kube-proxy

[root@k8s-node01 ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube Proxy
  Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
  Active: active (running) since Fri 2022-02-25 18:09:03 CST; 15s ago
    Docs: https://github.com/kubernetes/kubernetes
Main PID: 19089 (kube-proxy)
  Tasks: 6
  Memory: 18.0M
  CGroup: system.slice/kube-proxy.service
          └─19089 /usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.conf --v=2

Feb 25 18:09:03 k8s-node01.example.local kube-proxy[19089]: I0225 18:09:03.647706   19089 config.go:315] Starting service config controller
Feb 25 18:09:03 k8s-node01.example.local kube-proxy[19089]: I0225 18:09:03.647751   19089 shared_informer.go:240] Waiting for caches t...config
Feb 25 18:09:03 k8s-node01.example.local kube-proxy[19089]: I0225 18:09:03.647767   19089 config.go:224] Starting endpoint slice confi...roller
Feb 25 18:09:03 k8s-node01.example.local kube-proxy[19089]: I0225 18:09:03.647770   19089 shared_informer.go:240] Waiting for caches t...config
Feb 25 18:09:03 k8s-node01.example.local kube-proxy[19089]: I0225 18:09:03.647868   19089 reflector.go:219] Starting reflector *v1beta...go:134
Feb 25 18:09:03 k8s-node01.example.local kube-proxy[19089]: I0225 18:09:03.648339   19089 reflector.go:219] Starting reflector *v1.Ser...go:134
Feb 25 18:09:03 k8s-node01.example.local kube-proxy[19089]: I0225 18:09:03.651713   19089 service.go:275] Service default/kubernetes u... ports
Feb 25 18:09:03 k8s-node01.example.local kube-proxy[19089]: I0225 18:09:03.748659   19089 shared_informer.go:247] Caches are synced fo...config
Feb 25 18:09:03 k8s-node01.example.local kube-proxy[19089]: I0225 18:09:03.748673   19089 shared_informer.go:247] Caches are synced fo...config
Feb 25 18:09:03 k8s-node01.example.local kube-proxy[19089]: I0225 18:09:03.749014   19089 service.go:390] Adding new service port "def...43/TCP
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s-node02 ~]# systemctl status kube-proxy
[root@k8s-node03 ~]# systemctl status kube-proxy
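
Since mode is set to "ipvs" in kube-proxy.conf, you can optionally confirm that IPVS virtual servers are being created for the kubernetes Service (this assumes the ipvsadm tool is installed on the node):

[root@k8s-node01 ~]# ipvsadm -Ln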

Check the HAProxy status:

http://172.31.3.188:9999/haproxy-status
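
An optional command-line check of the same page (expect HTTP 200, or 401 if the stats page is password-protected):

[root@k8s-master01 ~]# curl -s -o /dev/null -w "%{http_code}\n" http://172.31.3.188:9999/haproxy-status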

10. Install Calico

[root@k8s-master01 ~]# cat calico-etcd.yaml
---
# Source: calico/templates/calico-etcd-secrets.yaml
# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: calico-etcd-secrets
namespace: kube-system
data:
 # Populate the following with etcd TLS configuration if desired, but leave blank if
 # not using TLS for etcd.
 # The keys below should be uncommented and the values populated with the base64
 # encoded contents of each file that would be associated with the TLS data.
 # Example command for encoding a file contents: cat <file> | base64 -w 0
 # etcd-key: null
 # etcd-cert: null
 # etcd-ca: null
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
 # Configure this with the location of your etcd cluster.
etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
 # If you're using TLS enabled etcd uncomment the following.
 # You must also populate the Secret below with these files.
etcd_ca: ""   # "/calico-secrets/etcd-ca"
etcd_cert: "" # "/calico-secrets/etcd-cert"
etcd_key: ""  # "/calico-secrets/etcd-key"
 # Typha is disabled.
typha_service_name: "none"
 # Configure the backend to use.
calico_backend: "bird"
 # Configure the MTU to use for workload interfaces and tunnels.
 # - If Wireguard is enabled, set to your network MTU - 60
 # - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50
 # - Otherwise, if IPIP is enabled, set to your network MTU - 20
 # - Otherwise, if not using any encapsulation, set to your network MTU.
veth_mtu: "1440"

 # The CNI network configuration to install on each node. The special
 # values in this config will be automatically populated.
cni_network_config: |-
  {
     "name": "k8s-pod-network",
     "cniVersion": "0.3.1",
     "plugins": [
      {
         "type": "calico",
         "log_level": "info",
         "etcd_endpoints": "__ETCD_ENDPOINTS__",
         "etcd_key_file": "__ETCD_KEY_FILE__",
         "etcd_cert_file": "__ETCD_CERT_FILE__",
         "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
         "mtu": __CNI_MTU__,
         "ipam": {
             "type": "calico-ipam"
        },
         "policy": {
             "type": "k8s"
        },
         "kubernetes": {
             "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
      },
      {
         "type": "portmap",
         "snat": true,
         "capabilities": {"portMappings": true}
      },
      {
         "type": "bandwidth",
         "capabilities": {"bandwidth": true}
      }
    ]
  }

---
# Source: calico/templates/calico-kube-controllers-rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
 # Pods are monitored for changing labels.
 # The node controller monitors Kubernetes nodes.
 # Namespace and serviceaccount labels are used for policy.
 - apiGroups: [""]
  resources:
     - pods
     - nodes
     - namespaces
     - serviceaccounts
  verbs:
     - watch
     - list
     - get
 # Watch for changes to Kubernetes NetworkPolicies.
 - apiGroups: ["networking.k8s.io"]
  resources:
     - networkpolicies
  verbs:
     - watch
     - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---

---
# Source: calico/templates/calico-node-rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
 # The CNI plugin needs to get pods, nodes, and namespaces.
 - apiGroups: [""]
  resources:
     - pods
     - nodes
     - namespaces
  verbs:
     - get
 - apiGroups: [""]
  resources:
     - endpoints
     - services
  verbs:
     # Used to discover service IPs for advertisement.
     - watch
     - list
 # Pod CIDR auto-detection on kubeadm needs access to config maps.
 - apiGroups: [""]
  resources:
     - configmaps
  verbs:
     - get
 - apiGroups: [""]
  resources:
     - nodes/status
  verbs:
     # Needed for clearing NodeNetworkUnavailable flag.
     - patch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
  k8s-app: calico-node
spec:
selector:
  matchLabels:
    k8s-app: calico-node
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
template:
  metadata:
    labels:
      k8s-app: calico-node
  spec:
    nodeSelector:
      kubernetes.io/os: linux
    hostNetwork: true
    tolerations:
       # Make sure calico-node gets scheduled on all nodes.
       - effect: NoSchedule
        operator: Exists
       # Mark the pod as a critical add-on for rescheduling.
       - key: CriticalAddonsOnly
        operator: Exists
       - effect: NoExecute
        operator: Exists
    serviceAccountName: calico-node
     # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
     # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
    terminationGracePeriodSeconds: 0
    priorityClassName: system-node-critical
    initContainers:
       # This container installs the CNI binaries
       # and CNI network config file on each node.
       - name: install-cni
        image: docker.io/calico/cni:v3.15.3
        command: ["/install-cni.sh"]
        env:
           # Name of the CNI config file to create.
           - name: CNI_CONF_NAME
            value: "10-calico.conflist"
           # The CNI network config to install on each node.
           - name: CNI_NETWORK_CONFIG
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: cni_network_config
           # The location of the etcd cluster.
           - name: ETCD_ENDPOINTS
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: etcd_endpoints
           # CNI MTU Config variable
           - name: CNI_MTU
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: veth_mtu
           # Prevents the container from sleeping forever.
           - name: SLEEP
            value: "false"
        volumeMounts:
           - mountPath: /host/opt/cni/bin
            name: cni-bin-dir
           - mountPath: /host/etc/cni/net.d
            name: cni-net-dir
           - mountPath: /calico-secrets
            name: etcd-certs
        securityContext:
          privileged: true
       # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
       # to communicate with Felix over the Policy Sync API.
       - name: flexvol-driver
        image: docker.io/calico/pod2daemon-flexvol:v3.15.3
        volumeMounts:
         - name: flexvol-driver-host
          mountPath: /host/driver
        securityContext:
          privileged: true
    containers:
       # Runs calico-node container on each Kubernetes node. This
       # container programs network policy and routes on each
       # host.
       - name: calico-node
        image: docker.io/calico/node:v3.15.3
        env:
           # The location of the etcd cluster.
           - name: ETCD_ENDPOINTS
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: etcd_endpoints
           # Location of the CA certificate for etcd.
           - name: ETCD_CA_CERT_FILE
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: etcd_ca
           # Location of the client key for etcd.
           - name: ETCD_KEY_FILE
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: etcd_key
           # Location of the client certificate for etcd.
           - name: ETCD_CERT_FILE
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: etcd_cert
           # Set noderef for node controller.
           - name: CALICO_K8S_NODE_REF
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
           # Choose the backend to use.
           - name: CALICO_NETWORKING_BACKEND
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: calico_backend
           # Cluster type to identify the deployment type
           - name: CLUSTER_TYPE
            value: "k8s,bgp"
           # Auto-detect the BGP IP address.
           - name: IP
            value: "autodetect"
           # Enable IPIP
           - name: CALICO_IPV4POOL_IPIP
            value: "Always"
           # Enable or Disable VXLAN on the default IP pool.
           - name: CALICO_IPV4POOL_VXLAN
            value: "Never"
           # Set MTU for tunnel device used if ipip is enabled
           - name: FELIX_IPINIPMTU
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: veth_mtu
           # Set MTU for the VXLAN tunnel device.
           - name: FELIX_VXLANMTU
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: veth_mtu
           # Set MTU for the Wireguard tunnel device.
           - name: FELIX_WIREGUARDMTU
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: veth_mtu
           # The default IPv4 pool to create on startup if none exists. Pod IPs will be
           # chosen from this range. Changing this value after installation will have
           # no effect. This should fall within `--cluster-cidr`.
           # - name: CALICO_IPV4POOL_CIDR
           #   value: "192.168.0.0/16"
           # Disable file logging so `kubectl logs` works.
           - name: CALICO_DISABLE_FILE_LOGGING
            value: "true"
           # Set Felix endpoint to host default action to ACCEPT.
           - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
            value: "ACCEPT"
           # Disable IPv6 on Kubernetes.
           - name: FELIX_IPV6SUPPORT
            value: "false"
           # Set Felix logging to "info"
           - name: FELIX_LOGSEVERITYSCREEN
            value: "info"
           - name: FELIX_HEALTHENABLED
            value: "true"
        securityContext:
          privileged: true
        resources:
          requests:
            cpu: 250m
        livenessProbe:
          exec:
            command:
             - /bin/calico-node
             - -felix-live
             - -bird-live
          periodSeconds: 10
          initialDelaySeconds: 10
          failureThreshold: 6
        readinessProbe:
          exec:
            command:
             - /bin/calico-node
             - -felix-ready
             - -bird-ready
          periodSeconds: 10
        volumeMounts:
           - mountPath: /lib/modules
            name: lib-modules
            readOnly: true
           - mountPath: /run/xtables.lock
            name: xtables-lock
            readOnly: false
           - mountPath: /var/run/calico
            name: var-run-calico
            readOnly: false
           - mountPath: /var/lib/calico
            name: var-lib-calico
            readOnly: false
           - mountPath: /calico-secrets
            name: etcd-certs
           - name: policysync
            mountPath: /var/run/nodeagent
    volumes:
       # Used by calico-node.
       - name: lib-modules
        hostPath:
          path: /lib/modules
       - name: var-run-calico
        hostPath:
          path: /var/run/calico
       - name: var-lib-calico
        hostPath:
          path: /var/lib/calico
       - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
       # Used to install CNI.
       - name: cni-bin-dir
        hostPath:
          path: /opt/cni/bin
       - name: cni-net-dir
        hostPath:
          path: /etc/cni/net.d
       # Mount in the etcd TLS secrets with mode 400.
       # See https://kubernetes.io/docs/concepts/configuration/secret/
       - name: etcd-certs
        secret:
          secretName: calico-etcd-secrets
          defaultMode: 0400
       # Used to create per-pod Unix Domain Sockets
       - name: policysync
        hostPath:
          type: DirectoryOrCreate
          path: /var/run/nodeagent
       # Used to install Flex Volume Driver
       - name: flexvol-driver-host
        hostPath:
          type: DirectoryOrCreate
          path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---

apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
  k8s-app: calico-kube-controllers
spec:
 # The controllers can only have a single active instance.
replicas: 1
selector:
  matchLabels:
    k8s-app: calico-kube-controllers
strategy:
  type: Recreate
template:
  metadata:
    name: calico-kube-controllers
    namespace: kube-system
    labels:
      k8s-app: calico-kube-controllers
  spec:
    nodeSelector:
      kubernetes.io/os: linux
    tolerations:
       # Mark the pod as a critical add-on for rescheduling.
       - key: CriticalAddonsOnly
        operator: Exists
       - key: node-role.kubernetes.io/master
        effect: NoSchedule
    serviceAccountName: calico-kube-controllers
    priorityClassName: system-cluster-critical
     # The controllers must run in the host network namespace so that
     # it isn't governed by policy that would prevent it from working.
    hostNetwork: true
    containers:
       - name: calico-kube-controllers
        image: docker.io/calico/kube-controllers:v3.15.3
        env:
           # The location of the etcd cluster.
           - name: ETCD_ENDPOINTS
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: etcd_endpoints
           # Location of the CA certificate for etcd.
           - name: ETCD_CA_CERT_FILE
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: etcd_ca
           # Location of the client key for etcd.
           - name: ETCD_KEY_FILE
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: etcd_key
           # Location of the client certificate for etcd.
           - name: ETCD_CERT_FILE
            valueFrom:
              configMapKeyRef:
                name: calico-config
                key: etcd_cert
           # Choose which controllers to run.
           - name: ENABLED_CONTROLLERS
            value: policy,namespace,serviceaccount,workloadendpoint,node
        volumeMounts:
           # Mount in the etcd TLS secrets.
           - mountPath: /calico-secrets
            name: etcd-certs
        readinessProbe:
          exec:
            command:
             - /usr/bin/check-status
             - -r
    volumes:
       # Mount in the etcd TLS secrets with mode 400.
       # See https://kubernetes.io/docs/concepts/configuration/secret/
       - name: etcd-certs
        secret:
          secretName: calico-etcd-secrets
          defaultMode: 0400

---

apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system

---
# Source: calico/templates/calico-typha.yaml

---
# Source: calico/templates/configure-canal.yaml

---
# Source: calico/templates/kdd-crds.yaml

Modify the following locations in calico-etcd.yaml:

[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml 
etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"

[root@k8s-master01 ~]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://172.31.3.108:2379,https://172.31.3.109:2379,https://172.31.3.110:2379"#g' calico-etcd.yaml

[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml
etcd_endpoints: "https://172.31.3.108:2379,https://172.31.3.109:2379,https://172.31.3.110:2379"

[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml
 # etcd-key: null
 # etcd-cert: null
 # etcd-ca: null

[root@k8s-master01 ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`

[root@k8s-master01 ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml
etcd-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBcGh0ejhrbzRURnV4T2VVTDBoSWpFdHBmcC9BRGYrcGR3SWNkeVA2QnV5dGxmSzJECjF4eEpRUGVhOFNwMGlFaVBxTEdNWkl5bjNjbHd4Mm9TYkpJd1ZzeEt6N2RybFErdUx2Qzl3Y3lPUktOZVpEd24KMTNDemk4eURENkZmL3NLcXhzNXVEMnNsNWNBMGdPK3orMkdOeUh5YkhOTytodG93bnh0MjhuNHFKWmRnK2l5VQp3R3psT0xQblY5UlJESWJLTW9YT3FLUUt1WWVhMm8rU2E4Rkp1anlvT2Uyc0t5UndTQk5xcjYyZnRTK0ZWSHFxCmVKalJYS245NFM0TDFwd2I5cUxnUDJmaU41bFRadk4va1dkZnMxd2RXVElWUVNaZE92TmhhZGp4b0Y5TWlsSGEKZ0l4NzZaNU1YL2lNZWpQb3Z4M2pDTXJzdWFUS0tnSGt6eTRLU3dJREFRQUJBb0lCQUFlVi8yQ1VWU2ZmbENOeAp1MjEzbUpSMjFxR0R5NVVlN2ZNcCtJbENYa2hlL2Y2SXFobTcxL2lZbGtIblQzVWQ0em13Q2hwWmRoMGg0djJvCmNYajE0REZHbVRBTlQyTjZXTmtaODRDVFIvZ0lnZm9QNlQza2pyNldzM0dXVEIwRlpPazVhanRZQ0Y0S3Zoc1oKVjEzbW9hUURWTTRuT1c5TkxhVkdpdE1lUWV4L2YzV1ZSc2M2TWdaUlVvRGU5THR4bk5nb1hWZmVYcVpZbElzVQplSFJQb1JGYnpXYi9UdEduTnFRMzJkemtyYTNNWnFzd1R4QjdMMGNWUW0xTGxMUXQ1KzkvWnRLd3Zwa0w0QTUvCldwUEYvWGhSSTBBQ0dhUEo3YWNlRUlwUlRSellzbnQ0dlZHNHNob3Y3MEQrYjdLT1lhN1FyU2FmNUlLRVlydFkKV3pjM0tQa0NnWUVBd1dwQk41enFxTWllVWpVODhLVVVDTkhNdUxMSHp5TTZQQ29OZXMrWGNIY1U1L1kxZUV0TwpMd3Z6djd3QVR5UW92RU8ycldtNEF2RXRSaG1QUFc2YU52ZUpPc2FZNnlXaVJ2R0RiN2dzb093eW9DYVlKd08vCnF5MEVLM29qTy9XRVZhNFpyTUlXOUxNWEkwajlKeldpUWI4NytNaENJcVpoZnYvUUhuWW5VU1VDZ1lFQTI5c2cKRzFJZ1hXamVyNHhiTWdMVkFnOXk1K3g1NlQ1RTZWNE5vdUJUZUlhUStob1cvU0w2UFMyS2ZjLzJweXVweFd3egp3aVRXdSt2L1NIUTVudlMrRHAzU0J5U0NqMEJJalg3N2VXS2g0SW1Hd2NoVzV5WnVBM3BVS3paSnV2VXpIdUFNCnFRc0NnR0ZnZGo4Zm1qYWV6ZENOVTI2TUhSZTRNaUJ2cHhSUHFxOENnWUFQamxNMmZObG12OVB6K3JJdkRLZmkKMmJUa2VnU1dCVmhPdEhjbkZJRXltM0ZFQXNwa0pYSmhXRTIvY3doM1ZRb3RzaWlFSkFlWHZQd09Na29SLzg1SgpjM2xIRCtnR3FaMDJwWUFUd1RWZHNBR1dYZVJJNXdWSWFETjRwN2Nqd0doblY3eGE1N1ZlOHZSK2N3VmhYTy95CjU4V1VDYzgvNkMvWlBndm9GMHFzUFFLQmdBaHNjZU42RnhGZEprTVZucHpnN09aaVR5WEJzcjRVQzdIaFQ2WncKNytITFRoeTNDVEJ6dWFERWNPejNIZDB6MkJKZlhmQlBWd2JtT09hK3hVSm80Q3RSTXEzaFlUczUzRTNIa3IwSQo0V2puL0FqS3MwR3lBRDhUM2N1MkRjY2pBKzFuNmpSRDNybXFnWGFtWG9DYkhTU0huQktaUnJjS3BKMFBEeGdZCnVDQ3pBb0dBSjh0SXk1UHRya3lUN3ExZURJNTF1Q2YwWDhrRWJoeFZ1RC9oVW82SkFURkRnRG0vN0Z5UFNvMnAKSFZVaEtpZmtQNUVoYTBYTDMrK3VxOWhITXJvNHVuaksrZSs2Y3VrZkhOWkk4MFVHazBOWUY3WGd1VTdETlJ1aApHQ1dJRkNhcjB0TE9lK1pBRzJQaHFQMno4cXlmNVNEckk0bmJtUHlabjZPMVFYZ0Q1REU9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVKakNDQXc2Z0F3SUJBZ0lVVHNTUDBUVlZqaE9UZEFUNnlncFpXcERRb0dJd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1p6RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeERUQUxCZ05WQkFvVEJHVjBZMlF4RmpBVUJnTlZCQXNURFVWMFkyUWdVMlZqZFhKcGRIa3hEVEFMCkJnTlZCQU1UQkdWMFkyUXdJQmNOTWpJd01USXlNRGd4TkRBd1doZ1BNakV5TVRFeU1qa3dPREUwTURCYU1HY3gKQ3pBSkJnTlZCQVlUQWtOT01SQXdEZ1lEVlFRSUV3ZENaV2xxYVc1bk1SQXdEZ1lEVlFRSEV3ZENaV2xxYVc1bgpNUTB3Q3dZRFZRUUtFd1JsZEdOa01SWXdGQVlEVlFRTEV3MUZkR05rSUZObFkzVnlhWFI1TVEwd0N3WURWUVFECkV3UmxkR05rTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUFwaHR6OGtvNFRGdXgKT2VVTDBoSWpFdHBmcC9BRGYrcGR3SWNkeVA2QnV5dGxmSzJEMXh4SlFQZWE4U3AwaUVpUHFMR01aSXluM2Nsdwp4Mm9TYkpJd1ZzeEt6N2RybFErdUx2Qzl3Y3lPUktOZVpEd24xM0N6aTh5REQ2RmYvc0txeHM1dUQyc2w1Y0EwCmdPK3orMkdOeUh5YkhOTytodG93bnh0MjhuNHFKWmRnK2l5VXdHemxPTFBuVjlSUkRJYktNb1hPcUtRS3VZZWEKMm8rU2E4Rkp1anlvT2Uyc0t5UndTQk5xcjYyZnRTK0ZWSHFxZUpqUlhLbjk0UzRMMXB3YjlxTGdQMmZpTjVsVApadk4va1dkZnMxd2RXVElWUVNaZE92TmhhZGp4b0Y5TWlsSGFnSXg3Nlo1TVgvaU1lalBvdngzakNNcnN1YVRLCktnSGt6eTRLU3dJREFRQUJvNEhITUlIRU1BNEdBMVVkRHdFQi93UUVBd0lGb0RBZEJnTlZIU1VFRmpBVUJnZ3IKQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RUZnUVU5cXI4N3RsZApyTGJPdGxMUEYvT0xBN1QvcEVFd0h3WURWUjBqQkJnd0ZvQVVpbkFQc1JrQ3pPenZ6N3ZwWmdQdUhUNGt3QTR3ClJRWURWUjBSQkQ0d1BJSUthemh6TFdWMFkyUXdNWUlLYXpoekxXVjBZMlF3TW9JS2F6aHpMV1YwWTJRd000Y0UKZndBQUFZY0VyQjhEYkljRXJCOERiWWNFckI4RGJqQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFlb28rL0NVYQpTa2hkVEY0ekJLa3ExREs0cFFaVEdhQUNHNEUvWUUwNXFNWS9QcTlpam5nNGtRdFB0d2lXaE5WN1JZWGl5QnhjCitIMTBDc3JVSTQrTFVjVjI0T1d5UFA2Q09yY2sycDBDZUhTL0E0ZEhYaEhReC8rZFRoUGxWcno1RzdlblhKRE0KaTlhZGxOR21BSWVlZEE4ekNENlVvbHFOOVdrZ29jTWw0ckdFZDJ3WFZMcFA5ZzhybGlyNVJrSy9seHFmQ1dBWgpBeDZPejJTYTNEbEVGdXpNdGxYejBobnRPdGpBdUJ6eEdIdlJVMllDdlcyL3pDUTJTQ0ZodkJXMGtPVCtiUVc1CkkrVTZGeVpCSU1XQlBPQmZsNm03M2pkNjdiSzRreVJXTEhQUnl0T2w1N3RMdlljOEgybFBQbS9VS3BWYkx5NjkKdXBuNHhOZUhaYXZ5ckE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR4RENDQXF5Z0F3SUJBZ0lVSW02eEIzNlN2dXE1TDhUaks5cHV5bjJHWEp3d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1p6RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeERUQUxCZ05WQkFvVEJHVjBZMlF4RmpBVUJnTlZCQXNURFVWMFkyUWdVMlZqZFhKcGRIa3hEVEFMCkJnTlZCQU1UQkdWMFkyUXdJQmNOTWpJd01USXlNRGd4TXpBd1doZ1BNakV5TVRFeU1qa3dPREV6TURCYU1HY3gKQ3pBSkJnTlZCQVlUQWtOT01SQXdEZ1lEVlFRSUV3ZENaV2xxYVc1bk1SQXdEZ1lEVlFRSEV3ZENaV2xxYVc1bgpNUTB3Q3dZRFZRUUtFd1JsZEdOa01SWXdGQVlEVlFRTEV3MUZkR05rSUZObFkzVnlhWFI1TVEwd0N3WURWUVFECkV3UmxkR05rTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF1cDRWVEQzS1JaRWgKcXA2TW0wTXF3amFrVkFKTFJ0YlFjd3FLNWsvQ2s4MEFjTDUyOGl6YldSdGRXcDVpNk9td241M3BGNGdpZG9EYQphOUpadEF4ZUl0RmNkbExxRzZrdjFCU3pyVVlMMXZyOFZNckRZd0VrYW9RdlZ3cHFrZDJiR3pUd21oVnJXZ3AxCmMrMjcwSWI1L2NVa25mWmtubEVTcWlyQzI5Z09oZnh0OFNrc1FTSUNtcXhuajFDVnltL3dML3AwMDUzNE5BNjAKeXk5aDdkZjU1R0ZFbjdLaytzOEdkbUVmL3ludXVsT1VUY25mTXppeWVoQW5uUStZMjZMWGJzSWw3eHg3YzRpZgpManFPN3d1Qm5WS3M2WllENzI0V1Z0QUY0VWllL1NqRXVabE5GWGNIdTg0Ly9jNHBLL1Avb0dxNklUaVZYWUJyClY1TW1jdTRPV3dJREFRQUJvMll3WkRBT0JnTlZIUThCQWY4RUJBTUNBUVl3RWdZRFZSMFRBUUgvQkFnd0JnRUIKL3dJQkFqQWRCZ05WSFE0RUZnUVVpbkFQc1JrQ3pPenZ6N3ZwWmdQdUhUNGt3QTR3SHdZRFZSMGpCQmd3Rm9BVQppbkFQc1JrQ3pPenZ6N3ZwWmdQdUhUNGt3QTR3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUpmNWJwd2FJYjFTCmtiRUcyVDlRb3d4WU52VGRYSGllbzkwazlPSEFqN3A3RGdzekk0alUwUnkxOHN4c1h0aW5TMCtNU3U5L2d1VHYKZEprK3c4TnhyNHNZZEt3N2VSVVpUbUREQ2l0VldkY0JHNk14Y1BTTDJaQnVJMi8wOTRnN0ZNd2ZIc09lVEdHZgpScVVrV1lTRjRRbU9iRTZwNTA3QWlxRlZqMEhzUHRmTTdpQjZ3ZXRyYzlTVzlZd3R5Tm9PVFhnZEdDdDc5akNBCllUTG9TaHFxcGRvUWEwd0hzYWZqSDd5N2VIZEdRRmZtSWo2RVFQU1ZRSFhQUmhFOXVadDgxbDByeENseUQxa3kKOEhVYTJpOFpHblF0cVJxd3JORHRHeEdlYUdMbCtNYkZVb1N4SW9nTTNaK2x0a2NNbUVZK3hxc3dBbVlMUTJnTwpNMUtoRVJxT1JsMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
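
To make sure the certificates were embedded intact, you can optionally decode one of them and compare it with the source file; the command prints nothing when they match:

[root@k8s-master01 ~]# grep 'etcd-ca:' calico-etcd.yaml | awk '{print $2}' | base64 -d | diff - /etc/kubernetes/pki/etcd/etcd-ca.pem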

[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml
etcd_ca: ""   # "/calico-secrets/etcd-ca"
etcd_cert: "" # "/calico-secrets/etcd-cert"
etcd_key: ""  # "/calico-secrets/etcd-key"

[root@k8s-master01 ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml
etcd_ca: "/calico-secrets/etcd-ca"   # "/calico-secrets/etcd-ca"
etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
etcd_key: "/calico-secrets/etcd-key"  # "/calico-secrets/etcd-key"

# Change this to your own Pod CIDR
[root@k8s-master01 ~]# POD_SUBNET="192.168.0.0/12"

# Note: the step below changes the CIDR under CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your own Pod CIDR, i.e. it replaces 192.168.x.x/16 with your cluster's Pod CIDR and uncomments the setting:

[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml 
           # - name: CALICO_IPV4POOL_CIDR
           #   value: "192.168.0.0/16"

[root@k8s-master01 ~]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml
           - name: CALICO_IPV4POOL_CIDR
            value: 192.168.0.0/12

[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml
        image: docker.io/calico/cni:v3.15.3
        image: docker.io/calico/pod2daemon-flexvol:v3.15.3
        image: docker.io/calico/node:v3.15.3
        image: docker.io/calico/kube-controllers:v3.15.3

Download the Calico images and push them to Harbor:

[root@k8s-master01 ~]# cat download_calico_images.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:       Raymond
#QQ:           88563128
#Date:         2022-01-11
#FileName:     download_calico_images.sh
#URL:           raymond.blog.csdn.net
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/" '/image:/{print $NF}' calico-etcd.yaml)
HARBOR_DOMAIN=harbor.raymonds.cc

images_download(){
   ${COLOR}"开始下载Calico镜像"${END}
   for i in ${images};do
      docker pull registry.cn-beijing.aliyuncs.com/raymond9/$i
      docker tag registry.cn-beijing.aliyuncs.com/raymond9/$i ${HARBOR_DOMAIN}/google_containers/$i
      docker rmi registry.cn-beijing.aliyuncs.com/raymond9/$i
      docker push ${HARBOR_DOMAIN}/google_containers/$i
   done
   ${COLOR}"Calico镜像下载完成"${END}
}

images_download

[root@k8s-master01 ~]# bash download_calico_images.sh

[root@k8s-master01 ~]# docker images|grep calico
harbor.raymonds.cc/calico/node                     v3.15.3             d45bf977dfbf        16 months ago       262MB
harbor.raymonds.cc/calico/pod2daemon-flexvol       v3.15.3             963564fb95ed        16 months ago       22.8MB
harbor.raymonds.cc/calico/cni                     v3.15.3             ca5564c06ea0        16 months ago       110MB
harbor.raymonds.cc/calico/kube-controllers         v3.15.3             0cb2976cbb7d        16 months ago       52.9MB

[root@k8s-master01 ~]# sed -ri 's@(.*image:) docker.io/calico(/.*)@\1 harbor.raymonds.cc/google_containers\2@g' calico-etcd.yaml

[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml
        image: harbor.raymonds.cc/google_containers/cni:v3.15.3
        image: harbor.raymonds.cc/google_containers/pod2daemon-flexvol:v3.15.3
        image: harbor.raymonds.cc/google_containers/node:v3.15.3
        image: harbor.raymonds.cc/google_containers/kube-controllers:v3.15.3

[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

#Check the container status
[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-6fdd497b59-mfc6t   1/1     Running   0         44s
calico-node-8scp5                          1/1     Running   0         44s
calico-node-cj25g                          1/1     Running   0         44s
calico-node-g9gtn                          1/1     Running   0         44s
calico-node-thsfj                          1/1     Running   0         44s
calico-node-wl4lt                          1/1     Running   0         44s
calico-node-xm2cx                          1/1     Running   0         44s

#Check the cluster status
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES   AGE   VERSION
k8s-master01.example.local   Ready   <none>   40m   v1.20.14
k8s-master02.example.local   Ready   <none>   40m   v1.20.14
k8s-master03.example.local   Ready   <none>   40m   v1.20.14
k8s-node01.example.local     Ready   <none>   40m   v1.20.14
k8s-node02.example.local     Ready   <none>   40m   v1.20.14
k8s-node03.example.local     Ready   <none>   40m   v1.20.14
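
With all nodes Ready, an optional smoke test is to start a throwaway Pod and check that it receives an address from the 192.168.0.0/12 Pod CIDR (the pause image from the local Harbor is used here only because it is already available; any small image reachable from the nodes will do):

[root@k8s-master01 ~]# kubectl run net-test --image=harbor.raymonds.cc/google_containers/pause:3.2 --restart=Never
[root@k8s-master01 ~]# kubectl get pod net-test -o wide
[root@k8s-master01 ~]# kubectl delete pod net-test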

