
Trying Out Kubernetes on Huawei's openEuler

Original by zayki, 2021-09-17

Download page: https://www.openeuler.org/zh/download/
Latest ISO downloads: https://repo.openeuler.org/openEuler-21.03/ISO/
Documentation: https://docs.openeuler.org/zh/

Key feature of release 21.03: integrated Kubernetes 1.20

Kubernetes is a cloud-native system for automatically deploying, scaling, and managing containerized applications. For its full feature set, see the official Kubernetes 1.20 release notes.

Automated rollouts and rollbacks: Kubernetes rolls out changes to an application or its configuration while monitoring application health, and rolls the change back if something goes wrong.
Service discovery and load balancing: built-in support for service discovery and load balancing based on container IPs and DNS names.
Storage orchestration: automatic mounting of a variety of storage backends, such as local storage and network storage systems like NFS, iSCSI, Gluster, and Ceph.
Horizontal scaling: applications can be scaled manually via the command line or UI, or automatically based on CPU usage.
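These features map directly onto kubectl commands. A hypothetical transcript against a running cluster (the deployment name, image tag, and thresholds are made up for illustration):

```
[kubeuser@openeuler ~]$ kubectl scale deployment nginx --replicas=3                          # manual horizontal scaling
[kubeuser@openeuler ~]$ kubectl autoscale deployment nginx --min=2 --max=5 --cpu-percent=80  # scale automatically on CPU usage
[kubeuser@openeuler ~]$ kubectl set image deployment/nginx nginx=nginx:1.21                  # triggers an automated rollout
[kubeuser@openeuler ~]$ kubectl rollout undo deployment/nginx                                # roll the change back
```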

Now let's walk through installing K8s on an openEuler system. The official documentation is too complex for a beginner like me, so I followed an article found online to complete the deployment below. There are bound to be some oversights; corrections are welcome.

Reference article: https://www.cnblogs.com/heian99/p/12173599.html

1. Installation and Configuration

The OS installation process itself is omitted…

The following steps are performed on the master node.

Install the packages:

[root@openeuler ~]# dnf install -y kubernetes-kubeadm kubernetes-kubelet kubernetes-master

Prepare the node and start the kubelet service:

[root@openeuler ~]# swapoff -a
[root@openeuler ~]# systemctl stop firewalld && systemctl disable firewalld
[root@openeuler ~]# systemctl enable docker && systemctl start docker
[root@openeuler ~]# systemctl enable kubelet.service && systemctl start kubelet.service
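One caveat: `swapoff -a` only disables swap until the next reboot, and kubelet refuses to run with swap enabled. To keep swap off persistently, also comment out the swap entry in /etc/fstab. A minimal sketch, demonstrated against a scratch copy so it is safe to try (the sample fstab lines are made up; on a real node point the sed at /etc/fstab itself):

```shell
# Build a sample fstab so the edit can be demonstrated safely.
printf '%s\n' \
  'UUID=1234-abcd /                          ext4 defaults 0 1' \
  '/dev/mapper/openeuler-swap swap           swap defaults 0 0' > /tmp/fstab.sample

# Comment out any uncommented line that mounts a swap filesystem.
sed -i '/\sswap\s/ s/^[^#]/#&/' /tmp/fstab.sample

cat /tmp/fstab.sample
```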

Initialize the master node:

[root@openeuler ~]# kubeadm init --apiserver-advertise-address=192.168.128.132 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.2 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Hostname]: hostname "openeuler" could not be reached
    [WARNING Hostname]: hostname "openeuler": lookup openeuler on 192.168.128.2:53: server misbehaving
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openeuler] and IPs [10.1.0.1 192.168.128.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost openeuler] and IPs [192.168.128.132 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost openeuler] and IPs [192.168.128.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.002950 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node openeuler as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node openeuler as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: gnvwps.ydjc5gxisxxvztua
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.128.132:6443 --token gnvwps.ydjc5gxisxxvztua \
    --discovery-token-ca-cert-hash sha256:adc61edc9452c6c44abaf9f62b976ec8c9109d2635c234c0a212e0f2f0381026
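The --discovery-token-ca-cert-hash value in the join command is just the SHA-256 digest of the cluster CA's DER-encoded public key, so it can be recomputed at any time from /etc/kubernetes/pki/ca.crt on the master. A sketch of the derivation, run here against a throwaway self-signed certificate since the real CA only exists on the master:

```shell
# Stand-in for /etc/kubernetes/pki/ca.crt: a throwaway self-signed cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -subj "/CN=kubernetes" -days 1 2>/dev/null

# Extract the public key, DER-encode it, and hash it -- the same recipe
# kubeadm uses for --discovery-token-ca-cert-hash.
hash=$(openssl x509 -pubkey -noout -in /tmp/ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:${hash}"
```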

Create a regular user, kubeuser:

[root@openeuler ~]# groupadd kubeuser
[root@openeuler ~]# useradd -d /home/kubeuser -m -g kubeuser kubeuser
[root@openeuler ~]# passwd kubeuser
Changing password for user kubeuser.  (password: kubepass)

Grant the user sudo privileges in /etc/sudoers:

%kubeuser ALL=(ALL) ALL

Then switch to the new user (su - kubeuser) and run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bashrc

Check the master node status:

export KUBECONFIG=/etc/kubernetes/admin.conf
[kubeuser@openeuler ~]$ kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
openeuler   NotReady   control-plane,master   4h34m   v1.20.2

The node is NotReady; check the kubelet service logs to see why:

[root@k8sslave1 ~]# journalctl -u kubelet -f
Sep 16 14:07:21 k8sslave1 kubelet[8548]: : [failed to find plugin "flannel" in path [/opt/cni/bin] failed to find plugin "portmap" in path [/opt/cni/bin]]
Sep 16 14:07:21 k8sslave1 kubelet[8548]: W0916 14:07:21.033280    8548 cni.go:239] Unable to update cni config: no valid networks found in /etc/cni/net.d
Sep 16 14:07:22 k8sslave1 kubelet[8548]: E0916 14:07:22.858257    8548 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Solution: install the CNI plugin binaries and copy them into /opt/cni/bin, where kubelet looks for them:

[root@k8sslave2 ~]# dnf install -y containernetworking-plugins containernetworking-plugins-devel
Last metadata expiration check: 0:06:18 ago on Thu 16 Sep 2021 01:49:56 PM CST.
Dependencies resolved.
==================================================================================================
 Package                            Architecture  Version                 Repository        Size
==================================================================================================
Installing:
 containernetworking-plugins        x86_64        0.8.2-4.git485be65.oe1  OS                17 M
 containernetworking-plugins-devel  noarch        0.8.2-4.git485be65.oe1  everything        84 k

Transaction Summary
==================================================================================================
Install  2 Packages

[root@openeuler ~]# mkdir -p /opt/cni/bin
[root@openeuler ~]# cp /usr/libexec/cni/* /opt/cni/bin/
[root@openeuler ~]# systemctl restart kubelet
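For reference, the "no valid networks found in /etc/cni/net.d" error also means kubelet expects a CNI config file there; the flannel DaemonSet installed later writes one itself. A typical /etc/cni/net.d/10-flannel.conflist looks roughly like the following (illustrative, based on the upstream kube-flannel manifest; not something to hand-edit):

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```

Note that the "flannel" and "portmap" plugin types it references are exactly the binaries the kubelet log complained about.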

The following steps are performed on the slave nodes.

Install the packages:

[root@k8sslave2 ~]# dnf install -y kubernetes-kubeadm kubernetes-kubelet kubernetes-node containernetworking-plugins containernetworking-plugins-devel

Start the kubelet service and join the cluster:

[root@k8sslave2 ~]# mkdir -p /opt/cni/bin
[root@k8sslave2 ~]# cp /usr/libexec/cni/* /opt/cni/bin/
[root@k8sslave2 ~]# swapoff -a
[root@k8sslave2 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8sslave2 ~]# systemctl enable docker && systemctl start docker
[root@k8sslave2 ~]# systemctl enable kubelet.service && systemctl start kubelet.service
[root@k8sslave2 ~]# kubeadm join 192.168.128.132:6443 --token gnvwps.ydjc5gxisxxvztua \
    --discovery-token-ca-cert-hash sha256:adc61edc9452c6c44abaf9f62b976ec8c9109d2635c234c0a212e0f2f0381026
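A note in case you add more nodes later: the bootstrap token printed by kubeadm init expires after 24 hours by default. A fresh join command can be generated on the master at any time (hypothetical transcript; the token shown is a placeholder and will differ on your cluster):

```
[root@openeuler ~]# kubeadm token create --print-join-command
kubeadm join 192.168.128.132:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<ca-hash>
```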

On the master node, check the status of all nodes (Ready for every node means the installation succeeded):

[kubeuser@openeuler ~]$ kubectl get nodes
NAME        STATUS   ROLES                  AGE     VERSION
k8sslave2   Ready    <none>                 137m    v1.20.2
openeuler   Ready    control-plane,master   4h51m   v1.20.2

2. Deploying Applications

The key to deploying applications is getting the pod network working.
Reference article: https://www.cnblogs.com/dribs/p/10318200.html

A brief introduction quoted from that article:

K8s relies on the CNI interface to plug in external add-ons for network communication. Popular plugins include flannel, calico, canal, and kube-router. The solutions these plugins use fall into the following categories:
1) Virtual bridge: virtual NICs, where multiple containers share a virtual NIC for communication;
2) Multiplexing: MACVLAN, where multiple containers share one physical NIC for communication;
3) Hardware switching: SR-IOV, where one physical NIC presents multiple virtual interfaces; this gives the best performance.

This example uses the flannel network plugin; other plugins are beyond the scope of this article.
Configure the Docker registry mirrors and pull the flannel image (on all nodes):

[root@openeuler ~]# vi /etc/docker/daemon.json
{
  "registry-mirrors": ["http://hub-mirror.c.163.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn"]
}
[root@openeuler ~]# docker pull lizhenliang/flannel:v0.11.0-amd64
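Docker only reads daemon.json at startup, so restart the daemon before pulling for the mirrors to take effect. A quick check (transcript assumes Docker is running; output abbreviated):

```
[root@openeuler ~]# systemctl restart docker
[root@openeuler ~]# docker info | grep -A 3 'Registry Mirrors'
 Registry Mirrors:
  http://hub-mirror.c.163.com/
  ...
```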

You can also complete the installation using a manifest file.
Manifest download link: https://www.modb.pro/download/166338

[kubeuser@openeuler ~]$ kubectl apply -f kube-flannel.yml

Deploy nginx on the master node:

[kubeuser@openeuler ~]$ kubectl create deployment nginx --image=nginx
[kubeuser@openeuler ~]$ kubectl expose deployment nginx --port=80 --type=NodePort
[kubeuser@openeuler ~]$ kubectl get pod,svc
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        5h4m
service/nginx        NodePort    10.1.106.205   <none>        80:32108/TCP   131m
[kubeuser@openeuler ~]$ kubectl scale deployment nginx --replicas=3
[kubeuser@openeuler ~]$ kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-q56d8   1/1     Running   0          119m
pod/nginx-6799fc88d8-sntzg   1/1     Running   0          106m
pod/nginx-6799fc88d8-zf4kr   1/1     Running   0          106m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        5h4m
service/nginx        NodePort    10.1.106.205   <none>        80:32108/TCP   131m
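The service can also be verified from the command line (assuming the pods are Running; 32108 is the NodePort assigned above, which will differ on your cluster):

```
[kubeuser@openeuler ~]$ curl -s http://192.168.128.132:32108/ | grep '<title>'
<title>Welcome to nginx!</title>
```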

Browse to port 32108 on either the master or a slave node to reach the nginx cluster.

Last modified: 2021-09-17 09:33:49
