Upgrading and Scaling the Compute Nodes of a Kubernetes Cluster
Upgrading a Kubernetes cluster compute node
First, check the node status of the cluster:
Last login: Thu Mar 14 09:39:26 2019 from 10.83.2.89
[root@kubemaster ~]#
[root@kubemaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 17d v1.13.3
kubenode1 Ready <none> 17d v1.13.3
kubenode2 Ready <none> 17d v1.13.3
[root@kubemaster ~]#
Next, check which Pods are running on kubenode1:
[root@kubemaster ~]# kubectl get pods -o wide|grep kubenode1
account-summary-689d96d949-49bjr 1/1 Running 0 7d15h 10.244.1.17 kubenode1 <none> <none>
compute-interest-api-5f54cc8dd9-44g9p 1/1 Running 0 7d15h 10.244.1.15 kubenode1 <none> <none>
send-notification-fc7c8ffc4-rk5wl 1/1 Running 0 7d15h 10.244.1.16 kubenode1 <none> <none>
transaction-generator-7cfccbbd57-8ts5s 1/1 Running 0 7d15h 10.244.1.18 kubenode1 <none> <none>
[root@kubemaster ~]#
# If other namespaces also contain Pods, add the namespace, e.g. kubectl get pods -n kube-system -o wide|grep kubenode1
Use the kubectl cordon command to mark kubenode1 as unschedulable:
[root@kubemaster ~]# kubectl cordon kubenode1
node/kubenode1 cordoned
[root@kubemaster ~]#
Check the running Pods again: they are still on kubenode1. kubectl cordon only prevents new Pods from being scheduled onto kubenode1; the Pods already running on the node are not evicted.
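For reference, cordon simply sets the unschedulable flag on the Node object. A roughly equivalent manual patch (a sketch only, not needed for this procedure):
kubectl patch node kubenode1 -p '{"spec":{"unschedulable":true}}'
# confirm the flag is set
kubectl get node kubenode1 -o jsonpath='{.spec.unschedulable}'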
[root@kubemaster ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 17d v1.13.3
kubenode1 Ready,SchedulingDisabled <none> 17d v1.13.3
kubenode2 Ready <none> 17d v1.13.3
[root@kubemaster ~]# kubectl get pods -n kube-system -o wide|grep kubenode1
kube-flannel-ds-amd64-7ghpg 1/1 Running 1 17d 10.83.32.138 kubenode1 <none> <none>
kube-proxy-2lfnm 1/1 Running 1 17d 10.83.32.138 kubenode1 <none> <none>
[root@kubemaster ~]#
Now the Pods need to be evicted, using kubectl drain. If DaemonSet-managed Pods are running on the node, the --ignore-daemonsets flag has to be added.
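Drain can also refuse to evict Pods that use emptyDir volumes or that are not managed by a controller. A hedged sketch of the extra flags for those cases (flag names as used by the v1.13-era kubectl shown here; check kubectl drain --help on your release):
kubectl drain kubenode1 --ignore-daemonsets --delete-local-data --force --grace-period=60
# --delete-local-data evicts Pods that use emptyDir volumes (their local data is lost)
# --force evicts Pods that are not managed by a controller
# --grace-period=60 overrides each Pod's own termination grace period (seconds)
In this cluster only DaemonSet Pods need to be skipped, so --ignore-daemonsets alone is enough: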
[root@kubemaster ~]# kubectl drain kubenode1 --ignore-daemonsets
node/kubenode1 already cordoned
WARNING: Ignoring DaemonSet-managed pods: node-exporter-s5vfc, kube-flannel-ds-amd64-7ghpg, kube-proxy-2lfnm
pod/traefik-ingress-controller-7899bfbd87-wsl64 evicted
pod/grafana-57f7d594d9-vw5mp evicted
pod/tomcat-deploy-5fd9ffbdc7-cdnj8 evicted
pod/myapp-deploy-6b56d98b6b-rrb5b evicted
pod/transaction-generator-7cfccbbd57-8ts5s evicted
pod/prometheus-848d44c7bc-rtq7t evicted
pod/send-notification-fc7c8ffc4-rk5wl evicted
pod/compute-interest-api-5f54cc8dd9-44g9p evicted
pod/account-summary-689d96d949-49bjr evicted
node/kubenode1 evicted
[root@kubemaster ~]#
Check again whether any Pods are still running on kubenode1. If none are left, shut the node down, upgrade its hardware configuration, and start the compute node again once the resources have been added.
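One convenient way to confirm that nothing except the DaemonSet Pods remains on the node is a field selector on spec.nodeName, which kubectl get pods supports in this release:
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=kubenode1
# should list only the DaemonSet Pods (flannel, kube-proxy, node-exporter) that drain skipped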
[root@kubemaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 17d v1.13.3
kubenode1 Ready,SchedulingDisabled <none> 17d v1.13.3
kubenode2 Ready <none> 17d v1.13.3
[root@kubemaster ~]#
#The node is still in the unschedulable (SchedulingDisabled) state
[root@kubemaster ~]# kubectl uncordon kubenode1
#Mark the compute node as schedulable again
node/kubenode1 uncordoned
[root@kubemaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 17d v1.13.3
kubenode1 Ready <none> 17d v1.13.3
kubenode2 Ready <none> 17d v1.13.3
[root@kubemaster ~]#
That completes the task of upgrading one compute node of the k8s cluster. Now let's add another compute node to the cluster.
Scaling out the Kubernetes cluster compute nodes
First, refer to my earlier blog post on installing a k8s cluster with kubeadm:
https://blog.51cto.com/zgui2000/2354852
Set up the yum repositories and install docker-ce, kubelet, and the other required packages.
[root@kubenode3 yum.repos.d]# cat /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[root@kubenode3 yum.repos.d]#
#Prepare the docker-ce yum repository file
[root@kubenode3 yum.repos.d]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
[root@kubenode3 yum.repos.d]#
#Prepare the kubernetes.repo yum repository file
[root@kubenode3 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.83.32.146 kubemaster
10.83.32.138 kubenode1
10.83.32.133 kubenode2
10.83.32.144 kubenode3
#Prepare the hosts file
[root@kubenode3 yum.repos.d]# getenforce
Disabled
#Disable SELinux; this can be made persistent via the /etc/selinux/config file
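On a node where SELinux is still enforcing, a minimal sketch of disabling it both immediately and persistently (the config change takes full effect after a reboot):
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config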
systemctl stop firewalld
systemctl disable firewalld
#Disable the firewall
yum install docker-ce kubelet kubeadm kubectl
#Install docker, kubelet, kubeadm, and kubectl
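Note that installing without a version pin pulls the newest packages from the mirror, which is why the new node later joins as v1.13.4 while the rest of the cluster runs v1.13.3. To keep the versions consistent, the packages can be pinned explicitly (a sketch; the exact release suffixes available depend on the mirror):
yum install docker-ce kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3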
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
#Configure a Docker registry mirror (image accelerator); the Docker service then needs to be restarted: systemctl restart docker
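The script above essentially writes a registry mirror into the Docker daemon configuration. If you prefer to configure it by hand, a rough sketch using the same mirror URL as above:
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
EOF
systemctl restart docker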
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.13.3
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull carlziess/coredns-1.2.6
docker pull quay.io/coreos/flannel:v0.11.0-amd64
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag carlziess/coredns-1.2.6 k8s.gcr.io/coredns:1.2.6
#Pull the required images to the local node in advance. In a kubeadm-installed cluster, components such as api-server, controller-manager, kube-scheduler, etcd, and flannel run as containers, so the images are downloaded ahead of time.
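The repetitive pull-and-retag steps for the mirrorgooglecontainers images can also be written as a short loop (a sketch covering the same images and tags as above; coredns and flannel are handled separately):
for img in kube-apiserver-amd64:v1.13.3 kube-controller-manager-amd64:v1.13.3 \
           kube-scheduler-amd64:v1.13.3 kube-proxy-amd64:v1.13.3 \
           pause-amd64:3.1 etcd-amd64:3.2.24; do
  docker pull mirrorgooglecontainers/${img}
  # retag without the -amd64 suffix so the node finds the expected k8s.gcr.io names
  docker tag mirrorgooglecontainers/${img} k8s.gcr.io/${img/-amd64/}
done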
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl -p
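If sysctl -p complains that the net.bridge.* keys do not exist, the br_netfilter kernel module usually has to be loaded first (a sketch):
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load the module on every boot
sysctl -p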
[root@kubenode3 yum.repos.d]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@kubenode3 yum.repos.d]#
Now we can add the new compute node to the cluster.
Each token is only valid for 24 hours. If no valid token exists, create one with the following command.
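Before creating a new token it is worth checking whether an unexpired one already exists; newer kubeadm releases can also print the complete join command in one step (confirm the --print-join-command flag with kubeadm token create --help on your version):
kubeadm token list
# shows the existing bootstrap tokens and their expiry times
kubeadm token create --print-join-command
# prints a ready-to-use "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line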
[root@kubemaster ~]# kubeadm token create
fv93ud.33j7oxtdmodwfn7f
[root@kubemaster ~]#
#Create a token
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
c414ceda959552049efccc2d9fd1fc1a2006689006a5f3b05e6ca05b3ff1a93e
#Compute the SHA256 hash of the Kubernetes CA certificate (used as the discovery-token-ca-cert-hash)
swapoff -a
#Turn off the swap partition
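swapoff -a only disables swap until the next reboot; to keep it off permanently, the swap entry in /etc/fstab should also be commented out (a sketch; adjust the pattern if your fstab uses tabs between fields):
sed -i '/ swap / s/^/#/' /etc/fstab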
kubeadm join 10.83.32.146:6443 --token fv93ud.33j7oxtdmodwfn7f --discovery-token-ca-cert-hash sha256:c414ceda959552049efccc2d9fd1fc1a2006689006a5f3b05e6ca05b3ff1a93e --ignore-preflight-errors=Swap
#Join the kubernetes cluster (run this on kubenode3)
[root@kubemaster ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 18d v1.13.3
kubenode1 Ready <none> 17d v1.13.3
kubenode2 Ready <none> 17d v1.13.3
kubenode3 Ready <none> 2m22s v1.13.4
#Check the node status: kubenode3 has successfully joined the cluster
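The ROLES column for the worker nodes shows <none>; if you want kubenode3 (or the others) to display a worker role, a purely cosmetic label can be added (a sketch, not required for the node to function):
kubectl label node kubenode3 node-role.kubernetes.io/worker=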