
Binary Installation of Kubernetes (k8s) v1.23.6


Background

A binary (non-kubeadm) installation of Kubernetes.

Documentation and installation packages have been generated for 1.23.3, 1.23.4, 1.23.5, and 1.23.6.

Documentation for new releases will be updated as promptly as possible.

https://github.com/cby-chen/Kubernetes/releases

Scripted installation project:

https://github.com/cby-chen/Binary_installation_of_Kubernetes

Manual installation project:

https://github.com/cby-chen/Kubernetes

1. Environment

Hostname    IP address      Role          Software
Master01    192.168.1.81    master node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master02    192.168.1.82    master node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Master03    192.168.1.83    master node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client
Node01      192.168.1.84    node          kubelet, kube-proxy, nfs-client
Node02      192.168.1.85    node          kubelet, kube-proxy, nfs-client
Node03      192.168.1.86    node          kubelet, kube-proxy, nfs-client
Node04      192.168.1.87    node          kubelet, kube-proxy, nfs-client
Node05      192.168.1.88    node          kubelet, kube-proxy, nfs-client
Lb01        192.168.1.80    Lb01 node     haproxy, keepalived
Lb02        192.168.1.90    Lb02 node     haproxy, keepalived
-           192.168.1.89    VIP           -
Software                                                                       Version
Kernel                                                                         4.18.0-373.el8.x86_64
CentOS 8                                                                       v8 or v7
kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy   v1.23.6
etcd                                                                           v3.5.3
docker-ce                                                                      v20.10.14
containerd                                                                     v1.5.11
cfssl                                                                          v1.6.1
cni                                                                            v1.1.1
crictl                                                                         v1.23.0
haproxy                                                                        v1.8.27
keepalived                                                                     v2.1.5

Network segments

Physical hosts: 192.168.1.0/24

Service: 10.96.0.0/12

Pod: 172.16.0.0/12

If resources allow, it is recommended to run the etcd cluster on hosts separate from the k8s cluster.

1.1. Basic system configuration for k8s

1.2. Configure IP addresses

ssh root@192.168.1.161 "nmcli con mod ens18 ipv4.addresses 192.168.1.81/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.167 "nmcli con mod ens18 ipv4.addresses 192.168.1.82/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.137 "nmcli con mod ens18 ipv4.addresses 192.168.1.83/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.152 "nmcli con mod ens18 ipv4.addresses 192.168.1.84/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.198 "nmcli con mod ens18 ipv4.addresses 192.168.1.85/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.166 "nmcli con mod ens18 ipv4.addresses 192.168.1.86/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.171 "nmcli con mod ens18 ipv4.addresses 192.168.1.87/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.159 "nmcli con mod ens18 ipv4.addresses 192.168.1.88/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.122 "nmcli con mod ens18 ipv4.addresses 192.168.1.80/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.125 "nmcli con mod ens18 ipv4.addresses 192.168.1.90/24; nmcli con mod ens18 ipv4.gateway 192.168.1.99; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
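To confirm the new addresses actually applied, a quick loop can be run from the operator host (a minimal check; it assumes the interface is named ens18 on every host, as above):

# Print the hostname and the active IPv4 address on each reconfigured host
for HOST in 192.168.1.81 192.168.1.82 192.168.1.83 192.168.1.84 192.168.1.85 192.168.1.86 192.168.1.87 192.168.1.88 192.168.1.80 192.168.1.90; do
    ssh root@$HOST "hostname; ip -4 addr show ens18 | grep inet"
done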

1.3. Set hostnames

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-node03
hostnamectl set-hostname k8s-node04
hostnamectl set-hostname k8s-node05
hostnamectl set-hostname lb01
hostnamectl set-hostname lb02

1.4. Configure yum repositories

# For CentOS 7
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For CentOS 8
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# Alternatively, point at an internal mirror
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak  /etc/yum.repos.d/CentOS-*.repo

1.5. Install essential tools

yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

1.6. Install Docker (skip on the lb nodes)

yum install -y yum-utils device-mapper-persistent-data lvm2
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache
yum -y install docker-ce
systemctl enable --now docker

1.7. Disable the firewall

systemctl disable --now firewalld

1.8. Disable SELinux

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.9. Disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0
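To verify swap is fully off (the Swap line should read 0B, and swapon should print nothing):

free -h
swapon --show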

1.10. Disable NetworkManager and enable network (skip on the lb nodes)

systemctl disable --now NetworkManager
systemctl start network && systemctl enable network

1.11. Set up time synchronization (skip on the lb nodes)

# Server side
yum install chrony -y
cat > /etc/chrony.conf << EOF 
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd
systemctl enable chronyd

# Client side
yum install chrony -y
vim /etc/chrony.conf
cat /etc/chrony.conf | grep -v "^#" | grep -v "^$"
pool 192.168.1.81 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony

systemctl restart chronyd ; systemctl enable chronyd

# One-liner for clients
yum install chrony -y ; sed -i "s#2.centos.pool.ntp.org#192.168.1.81#g" /etc/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v

1.12. Configure ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

1.13. Configure passwordless SSH login

yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.1.81 192.168.1.82 192.168.1.83 192.168.1.84 192.168.1.85 192.168.1.86 192.168.1.87 192.168.1.88 192.168.1.80 192.168.1.90"
export SSHPASS=123123
for HOST in $IP;do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done
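A quick way to confirm key-based login works everywhere (this reuses the $IP list exported above; BatchMode makes ssh fail instead of prompting for a password):

for HOST in $IP; do ssh -o BatchMode=yes root@$HOST hostname; done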

1.14. Add the ELRepo repository (skip on the lb nodes)

# Configure the repo for RHEL-8 or CentOS-8
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm

# Install ELRepo for RHEL-7, SL-7 or CentOS-7
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

# List the available packages
yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available

1.15. Upgrade the kernel to 4.18 or later (skip on the lb nodes)

# Install the latest kernel
# kernel-ml is the stable mainline branch; use kernel-lt if you want the long-term branch
yum  --enablerepo=elrepo-kernel  install  kernel-ml

# Check which kernels are installed
rpm -qa | grep kernel
kernel-core-4.18.0-358.el8.x86_64
kernel-tools-4.18.0-358.el8.x86_64
kernel-ml-core-5.16.7-1.el8.elrepo.x86_64
kernel-ml-5.16.7-1.el8.elrepo.x86_64
kernel-modules-4.18.0-358.el8.x86_64
kernel-4.18.0-358.el8.x86_64
kernel-tools-libs-4.18.0-358.el8.x86_64
kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64

# Check the default kernel
grubby --default-kernel
/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64

# If it is not the newest one, set it with:
grubby --set-default /boot/vmlinuz-<your-kernel-version>.x86_64

# Reboot to take effect
reboot

# Combined command for v8:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-ml -y ; grubby --default-kernel ; reboot

# Combined command for v7:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel

1.16. Install ipvsadm (skip on the lb nodes)

yum install ipvsadm ipset sysstat conntrack libseccomp -y
cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

1.17. Tune kernel parameters (skip on the lb nodes)

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

sysctl --system
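Spot-check that the values took effect (both should print 1; if the bridge key reports "No such file or directory", the br_netfilter module is not loaded yet, which happens in section 2.1.1):

sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables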

1.18. Configure local hosts resolution on all nodes

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.81 k8s-master01
192.168.1.82 k8s-master02
192.168.1.83 k8s-master03
192.168.1.84 k8s-node01
192.168.1.85 k8s-node02
192.168.1.86 k8s-node03
192.168.1.87 k8s-node04
192.168.1.88 k8s-node05
192.168.1.80 lb01
192.168.1.90 lb02
192.168.1.89 lb-vip
EOF
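To check the entries resolve and the hosts are reachable, each name can be pinged once (lb-vip will only answer after keepalived is configured in section 5):

for H in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05 lb01 lb02; do
    ping -c 1 -W 1 $H >/dev/null && echo "$H ok" || echo "$H unreachable"
done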

2. Installing the basic k8s components

2.1. Install Containerd as the runtime on all k8s nodes

yum install containerd -y

2.1.1 Configure the kernel modules required by Containerd

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

2.1.2 Load the modules

systemctl restart systemd-modules-load.service

2.1.3 Configure the kernel parameters required by Containerd

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the settings
sysctl --system

2.1.4 Create the Containerd configuration file

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Modify the Containerd configuration
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup

# Find containerd.runtimes.runc.options and set SystemdCgroup = true under it
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
              SystemdCgroup = true
    [plugins."io.containerd.grpc.v1.cri".cni]

# Change the default sandbox_image to an address matching this version
    sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"

2.1.5 Start Containerd and enable it at boot

systemctl daemon-reload
systemctl enable --now containerd

2.1.6 Point the crictl client at the runtime socket

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

systemctl restart containerd
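crictl should now be able to reach containerd through the socket configured above; the runtime name and version in the output confirm the connection:

crictl version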

2.2. Download and install k8s and etcd (master01 only)

2.2.1 Download the k8s packages (fetch only what you need)

1. Download the kubernetes 1.23.x binary package
GitHub binary download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md
wget https://dl.k8s.io/v1.23.6/kubernetes-server-linux-amd64.tar.gz

2. Download the etcdctl binary package
GitHub binary download page: https://github.com/etcd-io/etcd/releases
wget https://github.com/etcd-io/etcd/releases/download/v3.5.3/etcd-v3.5.3-linux-amd64.tar.gz

3. docker-ce binary download
Binary download page: https://download.docker.com/linux/static/stable/x86_64/
Download a 20.10.x version
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz

4. containerd binary download
GitHub download page: https://github.com/containerd/containerd/releases
For containerd, download the package that bundles the cni plugins.
wget https://github.com/containerd/containerd/releases/download/v1.6.2/cri-containerd-cni-1.6.2-linux-amd64.tar.gz

5. Download the cfssl binaries
GitHub binary download page: https://github.com/cloudflare/cfssl/releases
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64

6. Download the cni plugins
GitHub download page: https://github.com/containernetworking/plugins/releases
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

7. Download the crictl client binary
GitHub download page: https://github.com/kubernetes-sigs/cri-tools/releases
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz

# Extract the k8s binaries
tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Extract the etcd binaries
tar -xf etcd-v3.5.3-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.3-linux-amd64/etcd{,ctl}

# Check the contents of /usr/local/bin
ls /usr/local/bin/
etcd  etcdctl  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler

A pre-packaged bundle is also available:
wget https://github.com/cby-chen/Kubernetes/releases/download/v1.23.6/kubernetes-v1.23.6.tar

2.2.2 Check the versions

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.23.6
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.3
API version: 3.5
[root@k8s-master01 ~]#

2.2.3 Copy the components to the other k8s nodes

Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'

for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
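To confirm the copies landed, the kubelet version can be queried on every node (this assumes the Master and Work variables from above are still set in the shell):

for NODE in $Master $Work; do echo $NODE; ssh $NODE "kubelet --version"; done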

2.2.4 Clone the certificate-related files

git clone https://github.com/cby-chen/Kubernetes.git

2.2.5 Create the directory on all k8s nodes

mkdir -p /opt/cni/bin

3. Generating the certificates

# Download the certificate tools on master01
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

3.1. Generate the etcd certificates

Unless noted otherwise, perform the following steps on all master nodes.

3.1.1 Create the certificate directory on all master nodes

mkdir /etc/etcd/ssl -p

3.1.2 Generate the etcd certificates on master01

cd Kubernetes/pki/

# Generate the etcd CA, then the etcd certificate and key
# (if you expect to scale out later, reserve a few extra IPs in the hostname list)
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.81,192.168.1.82,192.168.1.83 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
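Optionally, openssl can be used to confirm that all the expected hosts ended up in the certificate's SAN list:

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"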

3.1.3 Copy the certificates to the other nodes

Master='k8s-master02 k8s-master03'
for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

3.2. Generate the k8s certificates

Unless noted otherwise, perform the following steps on all master nodes.

3.2.1 Create the certificate directory on all k8s nodes

mkdir -p /etc/kubernetes/pki

3.2.2 Generate the k8s certificates on master01

# Generate the root CA
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# 10.96.0.1 is the first address of the service CIDR (it must be computed from the CIDR);
# 192.168.1.89 is the high-availability VIP
cfssl gencert   \
-ca=/etc/kubernetes/pki/ca.pem   \
-ca-key=/etc/kubernetes/pki/ca-key.pem   \
-config=ca-config.json   \
-hostname=10.96.0.1,192.168.1.89,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.81,192.168.1.82,192.168.1.83,192.168.1.84,192.168.1.85,192.168.1.86,192.168.1.87,192.168.1.88,192.168.1.80,192.168.1.90,192.168.1.40,192.168.1.41   \
-profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

3.2.3 Generate the apiserver aggregation certificates

cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

# This prints a warning, which can be ignored
cfssl gencert  \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem   \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   \
-config=ca-config.json   \
-profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.2.4 Generate the controller-manager, scheduler, and admin certificates

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set a cluster entry
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.1.89:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a user entry
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.1.89:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

kubectl config set-cluster kubernetes     \
  --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  --embed-certs=true     \
  --server=https://192.168.1.89:8443     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin  \
  --client-certificate=/etc/kubernetes/pki/admin.pem     \
  --client-key=/etc/kubernetes/pki/admin-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes    \
  --cluster=kubernetes     \
  --user=kubernetes-admin     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/etc/kubernetes/admin.kubeconfig

3.2.5 Create the ServiceAccount key pair

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

3.2.6 Copy the certificates to the other master nodes

for NODE in k8s-master02 k8s-master03; do
  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
    scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
  done;
  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
  done;
done

3.2.7 Check the certificates

ls /etc/kubernetes/pki/
admin.csr      apiserver-key.pem  ca.pem                      front-proxy-ca.csr      front-proxy-client-key.pem  scheduler.csr
admin-key.pem  apiserver.pem      controller-manager.csr      front-proxy-ca-key.pem  front-proxy-client.pem      scheduler-key.pem
admin.pem      ca.csr             controller-manager-key.pem  front-proxy-ca.pem      sa.key                      scheduler.pem
apiserver.csr  ca-key.pem         controller-manager.pem      front-proxy-client.csr  sa.pub

# 23 files in total is correct
ls /etc/kubernetes/pki/ | wc -l
23

4. Configuring the k8s system components

4.1. etcd configuration

4.1.1 master01 configuration

cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.81:2380'
listen-client-urls: 'https://192.168.1.81:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.81:2380'
advertise-client-urls: 'https://192.168.1.81:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.81:2380,k8s-master02=https://192.168.1.82:2380,k8s-master03=https://192.168.1.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.2 master02 configuration

cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.82:2380'
listen-client-urls: 'https://192.168.1.82:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.82:2380'
advertise-client-urls: 'https://192.168.1.82:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.81:2380,k8s-master02=https://192.168.1.82:2380,k8s-master03=https://192.168.1.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.3 master03 configuration

cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.83:2380'
listen-client-urls: 'https://192.168.1.83:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.83:2380'
advertise-client-urls: 'https://192.168.1.83:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.1.81:2380,k8s-master02=https://192.168.1.82:2380,k8s-master03=https://192.168.1.83:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.2. Create the services (all master nodes)

4.2.1 Create etcd.service and start it

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

4.2.2 Create the etcd certificate directory

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

4.2.3 Check the etcd status

export ETCDCTL_API=3
etcdctl --endpoints="192.168.1.83:2379,192.168.1.82:2379,192.168.1.81:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.1.83:2379 | 7cb7be3df5c81965 |   3.5.2 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.1.82:2379 | c077939949ab3f8b |   3.5.2 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.1.81:2379 | 2ee388f67565dac9 |   3.5.2 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 pki]#
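The same flags also work with endpoint health for a simple pass/fail view of each member:

etcdctl --endpoints="192.168.1.83:2379,192.168.1.82:2379,192.168.1.81:2379" \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  endpoint health --write-out=table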

5. High-availability configuration

5.1 Perform on both lb01 and lb02

5.1.1 Install keepalived and haproxy

systemctl disable --now firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
yum -y install keepalived haproxy

5.1.2 Edit the haproxy configuration (identical on both machines)

# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s

frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:8443
 bind 127.0.0.1:8443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master

backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server  k8s-master01  192.168.1.81:6443 check
 server  k8s-master02  192.168.1.82:6443 check
 server  k8s-master03  192.168.1.83:6443 check
EOF

5.1.3 Configure keepalived on lb01 (MASTER)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens18
    mcast_src_ip 192.168.1.80
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.89
    }
    track_script {
      chk_apiserver
    }
}
EOF

5.1.4 Configure keepalived on lb02 (BACKUP)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens18
    mcast_src_ip 192.168.1.90
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.89
    }
    track_script {
      chk_apiserver
    }
}
EOF

5.1.5 Configure the health-check script (both lb hosts)

cat > /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash
err=0
for k in \$(seq 1 3)
do
    check_code=\$(pgrep haproxy)
    if [[ \$check_code == "" ]]; then
        err=\$(expr \$err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ \$err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh

5.1.6 Start the services

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
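Right after startup the VIP should be held by the MASTER node (lb01 under normal conditions); this can be confirmed with:

ip addr show ens18 | grep 192.168.1.89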

5.1.7 Test the high availability

# The VIP should answer pings
[root@k8s-node02 ~]# ping 192.168.1.89

# telnet should connect
[root@k8s-node02 ~]# telnet 192.168.1.89 8443

# Shut down the master lb node and check whether the VIP fails over to the backup node

6. k8s component configuration (distinct from section 4)

Create the following directories on all k8s nodes

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1. Create the apiserver service (all master nodes)

6.1.1 master01 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.1.81 \
      --service-cluster-ip-range=10.96.0.0/12  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User \
      --enable-aggregator-routing=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.2 master02 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.1.82 \
      --service-cluster-ip-range=10.96.0.0/12  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User \
      --enable-aggregator-routing=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.3 master03 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.1.83 \
      --service-cluster-ip-range=10.96.0.0/12  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.1.81:2379,https://192.168.1.82:2379,https://192.168.1.83:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User \
      --enable-aggregator-routing=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.4 Start the apiserver (all master nodes)

systemctl daemon-reload && systemctl enable --now kube-apiserver

# Verify that the service started correctly
systemctl status kube-apiserver
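As an additional check, /healthz can be probed both directly and through the load balancer (Kubernetes serves /healthz to unauthenticated clients by default, so -k is sufficient; both requests should return ok):

curl -k https://192.168.1.81:6443/healthz
curl -k https://192.168.1.89:8443/healthz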

6.2. Configure the kube-controller-manager service

Configure all master nodes with the identical configuration below. 172.16.0.0/12 is the pod CIDR; adjust it to your own network as needed.

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.2.1 Start kube-controller-manager and check its status

systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager

6.3. Configure the kube-scheduler service

6.3.1 Configure all master nodes (identical configuration)

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.3.2 Start the service and check its status

systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler

7. TLS Bootstrapping configuration

7.1 Configure on master01

cd /root/Kubernetes/bootstrap

kubectl config set-cluster kubernetes     \
--certificate-authority=/etc/kubernetes/pki/ca.pem     \
--embed-certs=true     --server=https://192.168.1.89:8443     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user     \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes     \
--cluster=kubernetes     \
--user=tls-bootstrap-token-user     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# The token is defined in bootstrap.secret.yaml; if you want a different one, change it in that file
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

7.2 Check the cluster status; continue only if everything is healthy

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}

kubectl create -f bootstrap.secret.yaml

8. Node configuration

8.1. Copy the certificates from master01 to the nodes

cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do
  ssh $NODE mkdir -p /etc/kubernetes/pki
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done

8.2. kubelet configuration

8.2.1 Create the required directories on all k8s nodes

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

# Configure the kubelet service on all k8s nodes
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
    --config=/etc/kubernetes/kubelet-conf.yml \
    --network-plugin=cni  \
    --cni-conf-dir=/etc/cni/net.d  \
    --cni-bin-dir=/opt/cni/bin  \
    --container-runtime=remote  \
    --runtime-request-timeout=15m  \
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \
    --cgroup-driver=systemd \
    --node-labels=node.kubernetes.io/node=''

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

8.2.2 Create the kubelet configuration file on all k8s nodes

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

8.2.3 Start kubelet

systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet

8.2.4 Check the cluster

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   <none>   14h   v1.23.5
k8s-master02   NotReady   <none>   14h   v1.23.5
k8s-master03   NotReady   <none>   14h   v1.23.5
k8s-node01     NotReady   <none>   14h   v1.23.5
k8s-node02     NotReady   <none>   14h   v1.23.5
k8s-node03     NotReady   <none>   14h   v1.23.5
k8s-node04     NotReady   <none>   14h   v1.23.5
k8s-node05     NotReady   <none>   14h   v1.23.5
[root@k8s-master01 ~]#

8.3. kube-proxy configuration

8.3.1 Perform this configuration on master01 only

cd /root/Kubernetes/
kubectl -n kube-system create serviceaccount kube-proxy

kubectl create clusterrolebinding system:kube-proxy \
--clusterrole system:node-proxier \
--serviceaccount kube-system:kube-proxy

SECRET=$(kubectl -n kube-system get sa/kube-proxy \
    --output=jsonpath='{.secrets[0].name}')

JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)

PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.89:8443 \
--kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig

kubectl config set-credentials kubernetes \
--token=${JWT_TOKEN} \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=kubernetes \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kubernetes \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

8.3.2 Send the kubeconfig to the other nodes

for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

for NODE in k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

8.3.3 Add the kube-proxy configuration and service file on all k8s nodes

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12 
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

8.3.4 Start kube-proxy

systemctl daemon-reload
systemctl enable --now kube-proxy
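Since the proxier runs in ipvs mode, the active mode can be confirmed via the kube-proxy metrics endpoint and the virtual-server table inspected with ipvsadm (installed in section 1.16); entries appear as services are created:

curl 127.0.0.1:10249/proxyMode
ipvsadm -Ln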

9. Installing Calico

9.1 The following steps are performed on master01 only

9.1.1 Change the Calico pod CIDR

cd /root/Kubernetes/calico/
sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
grep "IPV4POOL_CIDR" calico.yaml -A 1
            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/12"

# Create the resources
kubectl apply -f calico.yaml

9.1.2 Check container status

[root@k8s-master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6f6595874c-nb95g   1/1     Running   0          2m54s
kube-system   calico-node-67dn4                          1/1     Running   0          2m54s
kube-system   calico-node-79zxj                          1/1     Running   0          2m54s
kube-system   calico-node-85bsf                          1/1     Running   0          2m54s
kube-system   calico-node-8trsm                          1/1     Running   0          2m54s
kube-system   calico-node-dvz72                          1/1     Running   0          2m54s
kube-system   calico-node-qqzwx                          1/1     Running   0          2m54s
kube-system   calico-node-rngzq                          1/1     Running   0          2m55s
kube-system   calico-node-w8gqp                          1/1     Running   0          2m54s
kube-system   calico-typha-6b6cf8cbdf-2b454              1/1     Running   0          2m55s

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   14h   v1.23.5
k8s-master02   Ready    <none>   14h   v1.23.5
k8s-master03   Ready    <none>   14h   v1.23.5
k8s-node01     Ready    <none>   14h   v1.23.5
k8s-node02     Ready    <none>   14h   v1.23.5
k8s-node03     Ready    <none>   14h   v1.23.5
k8s-node04     Ready    <none>   14h   v1.23.5
k8s-node05     Ready    <none>   14h   v1.23.5
[root@k8s-master01 ~]#

10. Installing CoreDNS

10.1 The following steps are performed on master01 only

10.1.1 Modify the file

cd /root/Kubernetes/CoreDNS/
sed -i "s#KUBEDNS_SERVICE_IP#10.96.0.10#g" coredns.yaml

cat coredns.yaml | grep clusterIP:
  clusterIP: 10.96.0.10

10.1.2 Install

kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

11. Installing Metrics Server

11.1 The following steps are performed on master01 only

11.1.1 Install Metrics-server

In recent Kubernetes versions, system resource metrics are collected by Metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.

# Install metrics server
cd /root/Kubernetes/metrics-server/
kubectl create -f .

serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

11.1.2 Wait a moment, then check the status

kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   154m         1%     1715Mi          21%
k8s-master02   151m         1%     1274Mi          16%
k8s-master03   523m         6%     1345Mi          17%
k8s-node01     84m          1%     671Mi           8%
k8s-node02     73m          0%     727Mi           9%
k8s-node03     96m          1%     769Mi           9%
k8s-node04     68m          0%     673Mi           8%
k8s-node05     82m          1%     679Mi           8%

12. Cluster verification

12.1 Deploy a test pod

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# Check
kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

12.2 Use the pod to resolve the kubernetes service in the default namespace

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3 Test cross-namespace resolution

kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

12.4 Every node must be able to reach the kubernetes svc on 443 and the kube-dns service on 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

12.5 Pods must be able to reach each other

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.1.81     k8s-master01   <none>           <none>
calico-node-g8nqd                          1/1     Running   0             77m   192.168.1.84     k8s-node01     <none>           <none>
calico-node-mdps8                          1/1     Running   0             77m   192.168.1.85     k8s-node02     <none>           <none>
calico-node-nf4nt                          1/1     Running   0             77m   192.168.1.83     k8s-master03   <none>           <none>
calico-node-sq2ml                          1/1     Running   0             77m   192.168.1.82     k8s-master02   <none>           <none>
calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.1.85     k8s-node02     <none>           <none>
calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.1.81     k8s-master01   <none>           <none>
calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.1.84     k8s-node01     <none>           <none>
coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>

# Enter busybox and ping a pod on another node
kubectl exec -ti busybox -- sh
/ # ping 192.168.1.84
PING 192.168.1.84 (192.168.1.84): 56 data bytes
64 bytes from 192.168.1.84: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.1.84: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.1.84: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.1.84: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.1.84: seq=4 ttl=63 time=0.907 ms

# Successful replies prove the pod can communicate across namespaces and across hosts

12.6 Create three replicas and confirm they are spread across different nodes (delete when done)

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF

kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Delete the nginx deployment
[root@k8s-master01 ~]# kubectl delete -f deployments.yaml

13. Installing the dashboard

cd /root/Kubernetes/dashboard/
kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

13.1 Create an admin user

cat > admin.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

13.2 Apply the yaml file

kubectl apply -f admin.yaml -n kube-system
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

13.3 Change the dashboard svc to NodePort (skip if it already is)

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  type: NodePort
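For a non-interactive alternative to kubectl edit, the same change can be made with kubectl patch:

kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'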

13.4 Check the port number

kubectl get svc kubernetes-dashboard -n kubernetes-dashboardNAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGEkubernetes-dashboard   NodePort   10.98.201.22   <none>        443:31245/TCP   10m

13.5 Retrieve the token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-5vfk4
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: fc2535ae-8760-4037-9026-966f03ab9bf9

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1363 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InVOMnhMdHFTRWxweUlfUm93VmhMZTVXZW1FXzFrT01nQ0dTcE5uYjJlNWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTV2Zms0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmYzI1MzVhZS04NzYwLTQwMzctOTAyNi05NjZmMDNhYjliZjkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.HSU1FeqY6pDVoXVIv4Lu27TDhCYHM-FzGsGybYL5QPJ5-P0b3tQqUH9i3AQlisiGPB--jCFT5CUeOeXneOyfV7XkC7frbn6VaQoh51n6ztkIvjUm8Q4xj_LQ2OSFfWlFUnaZsaYTdD-RCldwh63pX362T_FjgDknO4q1wtKZH5qR0mpL1dOjas50gnOSyBY0j-nSPrifhnNq3_GcDLE4LxjuzO1DfGNTEHZ6TojPJ_5ZElMolaYJsVejn2slfeUQEWdiD5AHFZlRd4exODCHyvUhRpzb9jO2rovN2LMqdE_vxBtNgXp19evQB9AgZyMMSmu1Ch2C2UAi4NxjKw8HNA
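To print just the token (handy for scripting), a one-liner like the following should work on v1.23, which still auto-creates a secret for each ServiceAccount (this mechanism is removed in v1.24+); a sketch:

# Look up the ServiceAccount's token secret and base64-decode the token field
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d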

13.6 Log in to the dashboard: open the URL below and paste the token

https://192.168.1.81:31245/

eyJhbGciOiJSUzI1NiIsImtpZCI6InYzV2dzNnQzV3hHb2FQWnYzdnlOSmpudmtpVmNjQW5VM3daRi12SFM4dEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWs1NDVrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMzA4MDcxYy00Y2Y1LTQ1ODMtODNhMi1lYWY3ODEyNTEyYjQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.pshvZPi9ZJkXUWuWilcYs1wawTpzV-nMKesgF3d_l7qyTPaK2N5ofzIThd0SjzU7BFNb4_rOm1dw1Be5kLeHjY_YW5lDnM5TAxVPXmZQ0HJ2pAQ0pjQqCHFnPD0bZFIYkeyz8pZx0Hmwcd3ZdC1yztr0ADpTAmMgI9NC2ZFIeoFFo4Ue9ZM_ulhqJQjmgoAlI_qbyjuKCNsWeEQBwM6HHHAsH1gOQIdVxqQ83OQZUuynDQRpqlHHFIndbK2zVRYFA3GgUnTu2-VRQ-DXBFRjvZR5qArnC1f383jmIjGT6VO7l04QJteG_LFetRbXa-T4mcnbsd8XutSgO0INqwKpjw

14. Install ingress

14.1 Write the ingress-nginx configuration file (it is applied in 14.4)

[root@hello ~/yaml]# vim deploy.yaml
[root@hello ~/yaml]# cat deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/chenby/controller:v1.1.3
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
[root@hello ~/yaml]#

14.2 Enable the default backend: write the configuration file (it is applied in 14.4)

[root@hello ~/yaml]# vim backend.yaml
[root@hello ~/yaml]# cat backend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    app.kubernetes.io/name: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
[root@hello ~/yaml]#
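Once the backend has been applied (see 14.4), a quick health check like the following should confirm the pod and Service are up; a sketch using the labels defined above:

# Pod should be Running, Service should expose port 80
kubectl -n kube-system get pod,svc -l app.kubernetes.io/name=default-http-backend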

14.3 Install a test application

[root@hello ~/yaml]# vim ingress-demo-app.yaml
[root@hello ~/yaml]# cat ingress-demo-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

[root@hello ~/yaml]# kubectl get ingress
NAME               CLASS    HOSTS                            ADDRESS        PORTS   AGE
ingress-demo-app   <none>   app.demo.com                     192.168.1.11   80      20m
ingress-host-bar   nginx    hello.chenby.cn,demo.chenby.cn   192.168.1.11   80      2m17s
[root@hello ~/yaml]#

14.4 Apply the deployment

root@hello:~# kubectl apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

root@hello:~# kubectl apply -f backend.yaml
deployment.apps/default-http-backend created
service/default-http-backend created

root@hello:~# kubectl apply -f ingress-demo-app.yaml
deployment.apps/hello-server created
deployment.apps/nginx-demo created
service/nginx-demo created
service/hello-server created
ingress.networking.k8s.io/ingress-host-bar created

14.5 Filter for the ingress ports

[root@hello ~/yaml]# kubectl get svc -A | grep ingress
default         ingress-demo-app                     ClusterIP   10.68.231.41    <none>        80/TCP                       51m
ingress-nginx   ingress-nginx-controller             NodePort    10.68.93.71     <none>        80:32746/TCP,443:30538/TCP   32m
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.68.146.23    <none>        443/TCP                      32m
[root@hello ~/yaml]#
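With the HTTP NodePort known (80:32746 above), the two demo hosts can be smoke-tested from any machine that can reach a node; a sketch, assuming node 192.168.1.81 and the port shown above (both differ per install):

# Send the expected Host header so ingress-nginx routes to the right backend
curl -H "Host: hello.chenby.cn" http://192.168.1.81:32746/
curl -H "Host: demo.chenby.cn"  http://192.168.1.81:32746/nginx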

15. Install command-line auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Appendix:

Configure a 100-year certificate validity for kube-controller-manager (set it regardless of whether it takes effect on your version)

vim /usr/lib/systemd/system/kube-controller-manager.service
# Add the following somewhere under [Service]
--cluster-signing-duration=876000h0m0s \

# Restart
systemctl daemon-reload
systemctl restart kube-controller-manager
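To confirm the flag was picked up, a check like the following should do; a sketch (the kubelet client-cert path follows the usual kubelet default and may differ on your hosts):

# The running process should show the new flag
ps -ef | grep -v grep | grep kube-controller-manager | grep cluster-signing-duration
# Certs signed after the restart should then show a far-future notAfter date
openssl x509 -noout -dates -in /var/lib/kubelet/pki/kubelet-client-current.pem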

Harden against vulnerability scans (restrict the kubelet's TLS cipher suites)

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
Environment="KUBELET_EXTRA_ARGS=--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
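After reloading and restarting the kubelet, the offered cipher suites can be verified from another host; a sketch, assuming nmap is installed and 192.168.1.81 is one of this cluster's nodes:

# Only the two allowed ECDHE-RSA-GCM suites should be listed on the kubelet port
nmap --script ssl-enum-ciphers -p 10250 192.168.1.81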

Reserve resources for the system and kubelet; size the reservations to your needs

vim /etc/kubernetes/kubelet-conf.yml
rotateServerCertificates: true
allowedUnsafeSysctls:
 - "net.core*"
 - "net.ipv4.*"
kubeReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
systemReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
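The reservations take effect after the kubelet is restarted, and show up as the gap between Capacity and Allocatable on the node; a quick sketch:

# Allocatable should be lower than Capacity by the kubeReserved + systemReserved amounts
systemctl restart kubelet
kubectl describe node k8s-master01 | grep -A 8 -E "Capacity|Allocatable"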

Keep data disks separate from the system disk; use SSDs for etcd.

https://www.oiox.cn/

https://www.chenby.cn/

https://cby-chen.github.io/

https://blog.csdn.net/qq_33921750

https://my.oschina.net/u/3981543

https://www.zhihu.com/people/chen-bu-yun-2

https://segmentfault.com/u/hppyvyv6/articles

https://juejin.cn/user/3315782802482007

https://cloud.tencent.com/developer/column/93230

https://www.jianshu.com/u/0f894314ae2c

https://www.toutiao.com/c/user/token/MS4wLjABAAAAeqOrhjsoRZSj7iBJbjLJyMwYT5D0mLOgCoo4pEmpr4A/

CSDN, GitHub, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Tencent Cloud, Toutiao, personal blog — search the web for 《小陈运维》.

Articles are mainly published on the WeChat official account 《Linux运维交流社区》.

