
Installing a karmada Cluster from Binaries

CNCF 2022-03-17


Recently some readers have asked whether karmada can be installed from binaries. Like kubernetes itself, karmada certainly can. In fact, our production karmada cluster was installed exactly this way.

Using our production environment as an example, this article walks through the steps of a binary karmada installation. Readers partial to binary installs can use it as a reference.

01

Environment

Three servers:

Hostname    Internal IP     Public IP
karmada-01 172.31.209.245  47.242.88.82 
karmada-02 172.31.209.246  
karmada-03 172.31.209.247  

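The steps below reference the nodes by hostname (scp, etcd's --initial-cluster, the APIService externalName), so the three hostnames must resolve on every node. A sketch of the /etc/hosts entries, assuming no internal DNS is in place:

```
# append to /etc/hosts on all three nodes
172.31.209.245 karmada-01
172.31.209.246 karmada-02
172.31.209.247 karmada-03
```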


02

Installation

1. Prepare the binaries

  Download the kubernetes binary package

wget https://dl.k8s.io/v1.23.3/kubernetes-server-linux-amd64.tar.gz
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd /root/kubernetes/server/bin
mv kube-apiserver kube-controller-manager kubectl /usr/local/sbin/




  Download the etcd binary package

wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz
cd etcd-v3.5.1-linux-amd64/
cp etcdctl etcd /usr/local/sbin/



  Build the karmada binaries

git clone https://github.com/karmada-io/karmada
cd karmada
make karmada-aggregated-apiserver
make karmada-controller-manager
make karmada-scheduler
make karmada-webhook
mv karmada-aggregated-apiserver karmada-controller-manager karmada-scheduler karmada-webhook /usr/local/sbin/



  Copy the binaries to the other nodes

root@karmada-01:~#  ls -lrt /usr/local/sbin/
total 581480
-rwxr-xr-x 1 root root 131297280 Jan 26 05:36 kube-apiserver
-rwxr-xr-x 1 root root 121110528 Jan 26 05:36 kube-controller-manager
-rwxr-xr-x 1 root root  46587904 Jan 26 05:36 kubectl
-rwxr-xr-x 1 root root  17981440 Feb 13 13:45 etcdctl
-rwxr-xr-x 1 root root  23568384 Feb 13 13:45 etcd
-rwxr-xr-x 1 root root  69978022 Feb 13 14:00 karmada-aggregated-apiserver
-rwxr-xr-x 1 root root  67732857 Feb 13 14:00 karmada-controller-manager
-rwxr-xr-x 1 root root  60731283 Feb 13 14:00 karmada-scheduler
-rwxr-xr-x 1 root root  56431694 Feb 13 14:00 karmada-webhook

scp /usr/local/sbin/* karmada-02:/usr/local/sbin/
scp /usr/local/sbin/* karmada-03:/usr/local/sbin/



2. Build and install nginx


We use nginx for karmada high availability and load balancing. HAProxy would also work, and in a public cloud you can use a cloud load balancer instead.
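For readers who prefer HAProxy, a roughly equivalent TCP-mode listener might look like this. This is a sketch, not taken from our environment; `balance source` approximates nginx's consistent source-IP hashing:

```
# /etc/haproxy/haproxy.cfg fragment (hypothetical)
listen karmada-apiserver
    bind 172.31.209.245:5443
    mode tcp
    balance source
    server karmada-01 172.31.209.245:6443 check fall 3 rise 2
    server karmada-02 172.31.209.246:6443 check fall 3 rise 2
    server karmada-03 172.31.209.247:6443 check fall 3 rise 2
```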


  Build and install nginx

wget http://nginx.org/download/nginx-1.21.6.tar.gz
tar -zxvf nginx-1.21.6.tar.gz
cd nginx-1.21.6
./configure --with-stream --without-http --prefix=/usr/local/karmada-nginx --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install
mv /usr/local/karmada-nginx/sbin/nginx /usr/local/karmada-nginx/sbin/karmada-nginx



  First, configure load balancing for the karmada apiserver

cat > /usr/local/karmada-nginx/conf/nginx.conf <<'EOF'
worker_processes 2;

events {
    worker_connections  1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 172.31.209.245:6443        max_fails=3 fail_timeout=30s;
        server 172.31.209.246:6443        max_fails=3 fail_timeout=30s;
        server 172.31.209.247:6443        max_fails=3 fail_timeout=30s;
    }

    server {
        listen 172.31.209.245:5443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF

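One shell pitfall to watch here: the `hash $remote_addr consistent;` line contains `$remote_addr`, which an unquoted heredoc delimiter lets the shell expand (usually to an empty string) before nginx ever sees it. Quoting the delimiter writes the line literally:

```shell
# unquoted delimiter: the shell substitutes $remote_addr before writing the file
unset remote_addr
cat > /tmp/unquoted.conf <<EOF
hash $remote_addr consistent;
EOF
# quoted delimiter: the line is written verbatim, which is what nginx needs
cat > /tmp/quoted.conf <<'EOF'
hash $remote_addr consistent;
EOF
cat /tmp/unquoted.conf   # hash  consistent;
cat /tmp/quoted.conf     # hash $remote_addr consistent;
```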


  Create the karmada-nginx systemd unit file

vi /lib/systemd/system/karmada-nginx.service

[Unit]
Description=The karmada karmada-apiserver nginx proxy server
After=syslog.target network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/usr/local/karmada-nginx/sbin/karmada-nginx -t
ExecStart=/usr/local/karmada-nginx/sbin/karmada-nginx
ExecReload=/usr/local/karmada-nginx/sbin/karmada-nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target



  Start karmada-nginx

systemctl daemon-reload
systemctl enable karmada-nginx
systemctl start karmada-nginx
systemctl status karmada-nginx



3. Generate cluster certificates


  We generate everything directly with the Linux openssl command; no third-party tools needed. What needs attention when creating the certificates is the DNS names and IPs.

# Create a temporary directory for the certificates
mkdir certs
cd certs
# Create the karmada CA root certificate; all subsequent certificates are signed by it (valid for 10 years)
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=karmada" -days 3650 -out ca.crt

# Create the etcd server certificate
openssl genrsa -out etcd-server.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=karmada-etcd"  -key etcd-server.key -out etcd-server.csr
openssl x509 -req -days 3650  \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=serverAuth,clientAuth\nauthorityKeyIdentifier=keyid,issuer\nsubjectAltName=IP:172.31.209.245,IP:172.31.209.246,IP:172.31.209.247,IP:127.0.0.1,DNS:localhost") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in etcd-server.csr -out etcd-server.crt

# Create the etcd peer certificate
openssl genrsa -out etcd-peer.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=karmada-etcd-peer"  -key etcd-peer.key -out etcd-peer.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=serverAuth,clientAuth\nauthorityKeyIdentifier=keyid,issuer\nsubjectAltName=IP:172.31.209.245,IP:172.31.209.246,IP:172.31.209.247,IP:127.0.0.1,DNS:localhost") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in etcd-peer.csr -out etcd-peer.crt

# Create the client certificate karmada uses to connect to etcd
openssl genrsa -out karmada-etcd-client.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=karmada-etcd-client"  -key karmada-etcd-client.key -out karmada-etcd-client.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=clientAuth\nauthorityKeyIdentifier=keyid,issuer") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in karmada-etcd-client.csr -out karmada-etcd-client.crt

# Create the karmada-apiserver server certificate
openssl genrsa -out karmada-apiserver.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=karmada" -key karmada-apiserver.key -out karmada-apiserver.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=serverAuth,clientAuth\nauthorityKeyIdentifier=keyid,issuer\nsubjectAltName=DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster,DNS:kubernetes.default.svc.cluster.local,IP:172.31.209.245,IP:172.31.209.246,IP:172.31.209.247,IP:10.254.0.1,IP:47.242.88.82") \
  -sha256  -CA ca.crt -CAkey ca.key -set_serial 01  -in karmada-apiserver.csr -out karmada-apiserver.crt

# Create the admin client certificate
openssl genrsa -out admin.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=system:masters/OU=System/CN=admin"  -key admin.key -out admin.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=clientAuth\nauthorityKeyIdentifier=keyid,issuer") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in admin.csr   -out admin.crt

# Create the kube-controller-manager certificate
openssl genrsa -out kube-controller-manager.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=system:kube-controller-manager"  -key kube-controller-manager.key -out kube-controller-manager.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=clientAuth\nauthorityKeyIdentifier=keyid,issuer") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in kube-controller-manager.csr -out kube-controller-manager.crt


# Create the karmada component certificate
openssl genrsa -out karmada.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=system:karmada"  -key karmada.key -out karmada.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=serverAuth,clientAuth\nauthorityKeyIdentifier=keyid,issuer\nsubjectAltName=DNS:karmada-01,DNS:karmada-02,DNS:karmada-03,DNS:localhost,IP:127.0.0.1,IP:172.31.209.245,IP:172.31.209.246,IP:172.31.209.247,IP:10.254.0.1,IP:47.242.88.82") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in karmada.csr -out karmada.crt

# Create the front-proxy-client certificate
openssl genrsa -out front-proxy-client.key 2048
openssl req -new -nodes -sha256 -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=front-proxy-client"  -key front-proxy-client.key -out front-proxy-client.csr
openssl x509 -req -days 3650 \
  -extfile <(printf "keyUsage=critical,Digital Signature, Key Encipherment\nextendedKeyUsage=serverAuth,clientAuth\nauthorityKeyIdentifier=keyid,issuer") \
  -sha256 -CA ca.crt -CAkey ca.key -set_serial 01 -in front-proxy-client.csr -out front-proxy-client.crt

# You can inspect a certificate's configuration, using etcd-server as an example
openssl x509  -noout -text  -in  etcd-server.crt

# Create the karmada config directory and copy the certificates to /etc/karmada/pki
mkdir -p /etc/karmada/pki
cp karmada.key tls.key
cp karmada.crt tls.crt
cp *.key *.crt /etc/karmada/pki

# Create the karmada-apiserver service account key pair
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub

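With this many key files, it's easy to mix up a pair; one way to confirm a public key really belongs to its private key is to re-derive it and diff. A generic check, shown with throwaway paths rather than the real sa.key/sa.pub:

```shell
# generate a throwaway pair the same way sa.key/sa.pub are generated
openssl genrsa -out /tmp/check.key 2048
openssl rsa -in /tmp/check.key -pubout -out /tmp/check.pub
# re-deriving the public key from the private key must give an identical file
openssl rsa -in /tmp/check.key -pubout | diff - /tmp/check.pub && echo "key pair matches"
```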


4. Generate the karmada config files


  Create the kubectl kubeconfig file; it is saved to $HOME/.kube/config by default

# Set the karmada apiserver address to the load balancer address
export KARMADA_APISERVER="https://172.31.209.245:5443"
# Set cluster parameters
kubectl config set-cluster karmada \
  --certificate-authority=/etc/karmada/pki/ca.crt \
  --embed-certs=true \
  --server=${KARMADA_APISERVER}
# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/etc/karmada/pki/admin.crt \
  --embed-certs=true \
  --client-key=/etc/karmada/pki/admin.key
# Set context parameters
kubectl config set-context karmada \
  --cluster=karmada \
  --user=admin
# Use this context by default
kubectl config use-context karmada



  Create the kube-controller-manager kubeconfig file

kubectl config set-cluster karmada \
  --certificate-authority=/etc/karmada/pki/ca.crt \
  --embed-certs=true \
  --server=${KARMADA_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/karmada/pki/kube-controller-manager.crt \
  --client-key=/etc/karmada/pki/kube-controller-manager.key \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=karmada \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
  
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig



  Create the karmada kubeconfig file; the karmada components use it to connect to the karmada apiserver

kubectl config set-cluster karmada \
  --certificate-authority=/etc/karmada/pki/ca.crt \
  --embed-certs=true \
  --server=${KARMADA_APISERVER} \
  --kubeconfig=karmada.kubeconfig

kubectl config set-credentials system:karmada \
  --client-certificate=/etc/karmada/pki/karmada.crt \
  --client-key=/etc/karmada/pki/karmada.key \
  --embed-certs=true \
  --kubeconfig=karmada.kubeconfig

kubectl config set-context system:karmada \
  --cluster=karmada \
  --user=system:karmada \
  --kubeconfig=karmada.kubeconfig

kubectl config use-context system:karmada --kubeconfig=karmada.kubeconfig



  Create the etcd data encryption config file

export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > /etc/karmada/encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

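The aescbc provider requires a key of exactly 32 bytes; a quick sanity check on the generated value before writing the config:

```shell
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# decode the base64 value and count the raw bytes; aescbc needs exactly 32
echo -n "$ENCRYPTION_KEY" | base64 -d | wc -c   # 32
```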


  Pack up the karmada config files and copy them to the other nodes

cd /etc
tar -cvf karmada.tar karmada
scp karmada.tar  karmada-02:/etc/
scp karmada.tar  karmada-03:/etc/

# On the other nodes, unpack (in /etc)
tar -xvf karmada.tar



5. Deploy the etcd cluster


  Create the etcd systemd unit file, using karmada-01 as an example

vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/sbin/etcd \
  --name karmada-01 \
  --client-cert-auth=true \
  --cert-file=/etc/karmada/pki/etcd-server.crt \
  --key-file=/etc/karmada/pki/etcd-server.key \
  --peer-client-cert-auth=true \
  --peer-cert-file=/etc/karmada/pki/etcd-peer.crt \
  --peer-key-file=/etc/karmada/pki/etcd-peer.key \
  --peer-trusted-ca-file=/etc/karmada/pki/ca.crt \
  --trusted-ca-file=/etc/karmada/pki/ca.crt \
  --snapshot-count=10000 \
  --initial-advertise-peer-urls https://172.31.209.245:2380 \
  --listen-peer-urls https://172.31.209.245:2380 \
  --listen-client-urls https://172.31.209.245:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://172.31.209.245:2379 \
  --initial-cluster-token etcd-cluster \
  --initial-cluster karmada-01=https://172.31.209.245:2380,karmada-02=https://172.31.209.246:2380,karmada-03=https://172.31.209.247:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target



  Start the etcd cluster (run on all three nodes)

# Create the etcd data directory
mkdir /var/lib/etcd/
chmod 700 /var/lib/etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd



  Check the etcd cluster status

etcdctl --cacert=/etc/karmada/pki/ca.crt \
 --cert=/etc/karmada/pki/etcd-server.crt \
 --key=/etc/karmada/pki/etcd-server.key \
 --endpoints 172.31.209.245:2379,172.31.209.246:2379,172.31.209.247:2379 endpoint status --write-out="table"

+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.31.209.245:2379 | 689151f8cbf4ee95 |   3.5.1 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 172.31.209.246:2379 | 5db4dfb6ecc14de7 |   3.5.1 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
| 172.31.209.247:2379 | 7e59eef3c816aa57 |   3.5.1 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+



6. Deploy the karmada apiserver


  Create the karmada-apiserver systemd unit file, using karmada-01 as an example

vi /usr/lib/systemd/system/karmada-apiserver.service
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
ExecStart=/usr/local/sbin/kube-apiserver \
  --advertise-address=172.31.209.245 \
  --default-not-ready-toleration-seconds=360 \
  --default-unreachable-toleration-seconds=360 \
  --max-mutating-requests-inflight=2000 \
  --enable-admission-plugins=NodeRestriction \
  --disable-admission-plugins=StorageObjectInUseProtection,ServiceAccount \
  --max-requests-inflight=4000 \
  --default-watch-cache-size=200 \
  --delete-collection-workers=2 \
  --encryption-provider-config=/etc/karmada/encryption-config.yaml \
  --etcd-cafile=/etc/karmada/pki/ca.crt \
  --etcd-certfile=/etc/karmada/pki/karmada-etcd-client.crt \
  --etcd-keyfile=/etc/karmada/pki/karmada-etcd-client.key \
  --etcd-servers=https://172.31.209.245:2379,https://172.31.209.246:2379,https://172.31.209.247:2379 \
  --bind-address=172.31.209.245 \
  --secure-port=6443 \
  --tls-cert-file=/etc/karmada/pki/karmada-apiserver.crt \
  --tls-private-key-file=/etc/karmada/pki/karmada-apiserver.key \
  --insecure-port=0 \
  --audit-webhook-batch-buffer-size=30000 \
  --audit-webhook-batch-max-size=800 \
  --profiling \
  --anonymous-auth=false \
  --client-ca-file=/etc/karmada/pki/ca.crt \
  --enable-bootstrap-token-auth \
  --requestheader-allowed-names="front-proxy-client" \
  --requestheader-client-ca-file=/etc/karmada/pki/ca.crt \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/etc/karmada/pki/front-proxy-client.crt \
  --proxy-client-key-file=/etc/karmada/pki/front-proxy-client.key \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --event-ttl=168h \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=10-60060 \
  --enable-swagger-ui=true \
  --logtostderr=true \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --service-account-key-file=/etc/karmada/pki/sa.pub \
  --service-account-signing-key-file=/etc/karmada/pki/sa.key

Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


--allow-privileged: allow containers to run in privileged mode;

--apiserver-count=3: the number of apiserver instances;

--event-ttl: how long events are retained;

--encryption-provider-config: encrypt API data at rest in etcd


  Start karmada-apiserver (run on all 3 nodes)

systemctl daemon-reload
systemctl enable karmada-apiserver
systemctl start karmada-apiserver
systemctl status karmada-apiserver



  Verify

# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused   
controller-manager   Unhealthy   Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused   
etcd-0               Healthy     {"health":"true","reason":""}                                                                  
etcd-2               Healthy     {"health":"true","reason":""}                                                                  
etcd-1               Healthy     {"health":"true","reason":""}   



7. Deploy kube-controller-manager


  Create the kube-controller-manager systemd unit file

vi /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/sbin/kube-controller-manager \
  --profiling \
  --cluster-name=karmada \
  --controllers=namespace,garbagecollector,serviceaccount-token\
  --kube-api-qps=1000 \
  --kube-api-burst=2000 \
  --leader-elect \
  --use-service-account-credentials\
  --concurrent-service-syncs=1 \
  --bind-address=0.0.0.0 \
  --address=0.0.0.0 \
  --tls-cert-file=/etc/karmada/pki/kube-controller-manager.crt \
  --tls-private-key-file=/etc/karmada/pki/kube-controller-manager.key \
  --authentication-kubeconfig=/etc/karmada/kube-controller-manager.kubeconfig \
  --client-ca-file=/etc/karmada/pki/ca.crt \
  --requestheader-allowed-names="front-proxy-client" \
  --requestheader-client-ca-file=/etc/karmada/pki/ca.crt \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --authorization-kubeconfig=/etc/karmada/kube-controller-manager.kubeconfig \
  --cluster-signing-cert-file=/etc/karmada/pki/ca.crt \
  --cluster-signing-key-file=/etc/karmada/pki/ca.key \
  --experimental-cluster-signing-duration=876000h \
  --feature-gates=RotateKubeletServerCertificate=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --concurrent-deployment-syncs=10 \
  --concurrent-gc-syncs=30 \
  --node-cidr-mask-size=24 \
  --service-cluster-ip-range=10.254.0.0/16 \
  --pod-eviction-timeout=5m \
  --terminated-pod-gc-threshold=10000 \
  --root-ca-file=/etc/karmada/pki/ca.crt \
  --service-account-private-key-file=/etc/karmada/pki/sa.key \
  --kubeconfig=/etc/karmada/kube-controller-manager.kubeconfig \
  --logtostderr=true \
  --v=4 
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target 



  Start kube-controller-manager (run on all 3 nodes)

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager



8. Deploy karmada-controller-manager


  Create the namespace and bind the cluster-admin role

kubectl create ns karmada-system
kubectl create clusterrolebinding cluster-admin:karmada --clusterrole=cluster-admin --user system:karmada



  Create the karmada-controller-manager systemd unit file

vi /usr/lib/systemd/system/karmada-controller-manager.service

[Unit]
Description=Karmada Controller Manager
Documentation=https://github.com/karmada-io/karmada

[Service]
ExecStart=/usr/local/sbin/karmada-controller-manager \
  --kubeconfig=/etc/karmada/karmada.kubeconfig \
  --bind-address=0.0.0.0 \
  --cluster-status-update-frequency=10s \
  --secure-port=10357 \
  --v=4 
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target



  Start karmada-controller-manager (run on all 3 nodes)

systemctl daemon-reload
systemctl enable karmada-controller-manager
systemctl start karmada-controller-manager
systemctl status karmada-controller-manager



9. Deploy karmada-scheduler


  Create the karmada-scheduler systemd unit file

vi /usr/lib/systemd/system/karmada-scheduler.service

[Unit]
Description=Karmada Scheduler
Documentation=https://github.com/karmada-io/karmada

[Service]
ExecStart=/usr/local/sbin/karmada-scheduler \
  --kubeconfig=/etc/karmada/karmada.kubeconfig \
  --bind-address=0.0.0.0 \
  --feature-gates=Failover=true \
  --enable-scheduler-estimator=true \
  --secure-port=10351 \
  --v=4 
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target



  Start karmada-scheduler (run on all 3 nodes)

systemctl daemon-reload
systemctl enable karmada-scheduler
systemctl start karmada-scheduler
systemctl status karmada-scheduler



10. Deploy karmada-webhook


 Unlike the scheduler and controller-manager, karmada-webhook relies on nginx for its high availability.


  Update the nginx configuration

cat /usr/local/karmada-nginx/conf/nginx.conf
worker_processes 2;

events {
    worker_connections  1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 172.31.209.245:6443        max_fails=3 fail_timeout=30s;
        server 172.31.209.246:6443        max_fails=3 fail_timeout=30s;
        server 172.31.209.247:6443        max_fails=3 fail_timeout=30s;
    }

    upstream webhook {
        hash $remote_addr consistent;
        server 172.31.209.245:8443        max_fails=3 fail_timeout=30s;
        server 172.31.209.246:8443        max_fails=3 fail_timeout=30s;
        server 172.31.209.247:8443        max_fails=3 fail_timeout=30s;
    }

    server {
        listen 172.31.209.245:5443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }

    server {
        listen 172.31.209.245:4443;
        proxy_connect_timeout 1s;
        proxy_pass webhook;
    }
}



  Reload the nginx configuration

/usr/local/karmada-nginx/sbin/karmada-nginx -s reload



  Create the karmada-webhook systemd unit file

vi /usr/lib/systemd/system/karmada-webhook.service

[Unit]
Description=Karmada Webhook
Documentation=https://github.com/karmada-io/karmada

[Service]
ExecStart=/usr/local/sbin/karmada-webhook \
  --kubeconfig=/etc/karmada/karmada.kubeconfig \
  --bind-address=0.0.0.0 \
  --secure-port=8443 \
  --health-probe-bind-address=:8444 \
  --metrics-bind-address=:8445 \
  --cert-dir=/etc/karmada/pki \
  --v=4 
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target



  Start karmada-webhook (run on all 3 nodes)

systemctl daemon-reload
systemctl enable karmada-webhook
systemctl start karmada-webhook
systemctl status karmada-webhook



11. Initialize karmada resources


  Create the CRDs

wget https://github.com/karmada-io/karmada/releases/download/v1.0.0/crds.tar.gz
tar -zxvf crds.tar.gz
cd crds/bases 
kubectl apply -f .

cd ../patches/
ca_string=$(sudo cat /etc/karmada/pki/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i "s/{{caBundle}}/${ca_string}/g" webhook_in_resourcebindings.yaml
sed -i "s/{{caBundle}}/${ca_string}/g"  webhook_in_clusterresourcebindings.yaml
sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g' webhook_in_resourcebindings.yaml
sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g' webhook_in_clusterresourcebindings.yaml

kubectl patch CustomResourceDefinition resourcebindings.work.karmada.io  --patch "$(cat webhook_in_resourcebindings.yaml)"
kubectl patch CustomResourceDefinition clusterresourcebindings.work.karmada.io  --patch "$(cat webhook_in_clusterresourcebindings.yaml)"

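The `cat | base64 | tr | sed` pipeline used for `ca_string` just produces the CA bundle as a single unwrapped base64 line; on GNU coreutils the same result can be had with `base64 -w 0`, and it is easy to convince yourself the two are equivalent (throwaway file for illustration):

```shell
printf 'demo-ca-bytes' > /tmp/ca-demo.crt
# multi-step: wrap-stripped base64, as used for ca_string above
a=$(cat /tmp/ca-demo.crt | base64 | tr "\n" " " | sed 's/[[:space:]]//g')
# single step: -w 0 disables line wrapping (GNU coreutils)
b=$(base64 -w 0 < /tmp/ca-demo.crt)
[ "$a" = "$b" ] && echo "equivalent"
```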


  Create the webhook-configuration template file

vi webhook-configuration.yaml

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating-config
  labels:
    app: mutating-config
webhooks:
  - name: propagationpolicy.karmada.io
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["policy.karmada.io"]
        apiVersions: ["*"]
        resources: ["propagationpolicies"]
        scope: "Namespaced"
    clientConfig:
      url: https://karmada-webhook.karmada-system.svc:443/mutate-propagationpolicy
      caBundle: {{caBundle}}
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
    timeoutSeconds: 3
  - name: clusterpropagationpolicy.karmada.io
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["policy.karmada.io"]
        apiVersions: ["*"]
        resources: ["clusterpropagationpolicies"]
        scope: "Cluster"
    clientConfig:
      url: https://karmada-webhook.karmada-system.svc:443/mutate-clusterpropagationpolicy
      caBundle: {{caBundle}}
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
    timeoutSeconds: 3
  - name: overridepolicy.karmada.io
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["policy.karmada.io"]
        apiVersions: ["*"]
        resources: ["overridepolicies"]
        scope: "Namespaced"
    clientConfig:
      url: https://karmada-webhook.karmada-system.svc:443/mutate-overridepolicy
      caBundle: {{caBundle}}
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
    timeoutSeconds: 3
  - name: work.karmada.io
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["work.karmada.io"]
        apiVersions: ["*"]
        resources: ["works"]
        scope: "Namespaced"
    clientConfig:
      url: https://karmada-webhook.karmada-system.svc:443/mutate-work
      caBundle: {{caBundle}}
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
    timeoutSeconds: 3
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validating-config
  labels:
    app: validating-config
webhooks:
  - name: propagationpolicy.karmada.io
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["policy.karmada.io"]
        apiVersions: ["*"]
        resources: ["propagationpolicies"]
        scope: "Namespaced"
    clientConfig:
      url: https://karmada-webhook.karmada-system.svc:443/validate-propagationpolicy
      caBundle: {{caBundle}}
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
    timeoutSeconds: 3
  - name: clusterpropagationpolicy.karmada.io
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["policy.karmada.io"]
        apiVersions: ["*"]
        resources: ["clusterpropagationpolicies"]
        scope: "Cluster"
    clientConfig:
      url: https://karmada-webhook.karmada-system.svc:443/validate-clusterpropagationpolicy
      caBundle: {{caBundle}}
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
    timeoutSeconds: 3
  - name: overridepolicy.karmada.io
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["policy.karmada.io"]
        apiVersions: ["*"]
        resources: ["overridepolicies"]
        scope: "Namespaced"
    clientConfig:
      url: https://karmada-webhook.karmada-system.svc:443/validate-overridepolicy
      caBundle: {{caBundle}}
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
    timeoutSeconds: 3
  - name: clusteroverridepolicy.karmada.io
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["policy.karmada.io"]
        apiVersions: ["*"]
        resources: ["clusteroverridepolicies"]
        scope: "Cluster"
    clientConfig:
      url: https://karmada-webhook.karmada-system.svc:443/validate-clusteroverridepolicy
      caBundle: {{caBundle}}
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
    timeoutSeconds: 3
  - name: config.karmada.io
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["config.karmada.io"]
        apiVersions: ["*"]
        resources: ["resourceinterpreterwebhookconfigurations"]
        scope: "Cluster"
    clientConfig:
      url: https://karmada-webhook.karmada-system.svc:443/validate-resourceinterpreterwebhookconfiguration
      caBundle: {{caBundle}}
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
    timeoutSeconds: 3



  Create webhook-configuration.yaml

sed -i "s/{{caBundle}}/${ca_string}/g"  webhook-configuration.yaml
sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g'  webhook-configuration.yaml

kubectl create -f  webhook-configuration.yaml



12. Deploy karmada-aggregated-apiserver


 Like karmada-webhook, it relies on nginx for high availability.


  Update the nginx configuration

cat /usr/local/karmada-nginx/conf/nginx.conf
worker_processes 2;

events {
    worker_connections  1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 172.31.209.245:6443        max_fails=3 fail_timeout=30s;
        server 172.31.209.246:6443        max_fails=3 fail_timeout=30s;
        server 172.31.209.247:6443        max_fails=3 fail_timeout=30s;
    }

    upstream webhook {
        hash $remote_addr consistent;
        server 172.31.209.245:8443        max_fails=3 fail_timeout=30s;
        server 172.31.209.246:8443        max_fails=3 fail_timeout=30s;
        server 172.31.209.247:8443        max_fails=3 fail_timeout=30s;
    }

    upstream aa {
        hash $remote_addr consistent;
        server 172.31.209.245:7443        max_fails=3 fail_timeout=30s;
        server 172.31.209.246:7443        max_fails=3 fail_timeout=30s;
        server 172.31.209.247:7443        max_fails=3 fail_timeout=30s;
    }

    server {
        listen 172.31.209.245:5443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }

    server {
        listen 172.31.209.245:4443;
        proxy_connect_timeout 1s;
        proxy_pass webhook;
    }

    server {
        listen 172.31.209.245:443;
        proxy_connect_timeout 1s;
        proxy_pass aa;
    }
}
# Reload the nginx configuration
/usr/local/karmada-nginx/sbin/karmada-nginx -s reload



  Create the karmada-aggregated-apiserver systemd unit file

vi /usr/lib/systemd/system/karmada-aggregated-apiserver.service

[Unit]
Description=Karmada Aggregated ApiServer
Documentation=https://github.com/karmada-io/karmada

[Service]
ExecStart=/usr/local/sbin/karmada-aggregated-apiserver \
  --secure-port=7443 \
  --kubeconfig=/etc/karmada/karmada.kubeconfig \
  --authentication-kubeconfig=/etc/karmada/karmada.kubeconfig  \
  --authorization-kubeconfig=/etc/karmada/karmada.kubeconfig  \
  --karmada-config=/etc/karmada/karmada.kubeconfig  \
  --etcd-servers=https://172.31.209.245:2379,https://172.31.209.246:2379,https://172.31.209.247:2379 \
  --etcd-cafile=/etc/karmada/pki/ca.crt \
  --etcd-certfile=/etc/karmada/pki/karmada-etcd-client.crt \
  --etcd-keyfile=/etc/karmada/pki/karmada-etcd-client.key \
  --tls-cert-file=/etc/karmada/pki/karmada.crt \
  --tls-private-key-file=/etc/karmada/pki/karmada.key \
  --audit-log-path=-  \
  --feature-gates=APIPriorityAndFairness=false  \
  --audit-log-maxage=0  \
  --audit-log-maxbackup=0
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target



  Start karmada-aggregated-apiserver (run on all 3 nodes)

systemctl daemon-reload
systemctl enable karmada-aggregated-apiserver
systemctl start karmada-aggregated-apiserver
systemctl status karmada-aggregated-apiserver



  Create the APIService. The externalName is the hostname of the node where nginx runs; it only needs to be reachable from the karmada-apiserver.

vi  apiservice.yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.cluster.karmada.io
  labels:
    app: karmada-aggregated-apiserver
    apiserver: "true"
spec:
  insecureSkipTLSVerify: true
  group: cluster.karmada.io
  groupPriorityMinimum: 2000
  service:
    name: karmada-aggregated-apiserver
    namespace: karmada-system
  version: v1alpha1
  versionPriority: 10
---
apiVersion: v1
kind: Service
metadata:
  name: karmada-aggregated-apiserver
  namespace: karmada-system
spec:
  type: ExternalName
  externalName: karmada-01
  
# create the APIService
kubectl create -f  apiservice.yaml

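To see why a plain ExternalName Service works here, trace the request path: the karmada-apiserver forwards aggregated-API requests to the APIService's backing Service on port 443, the ExternalName entry resolves that to the nginx host, and nginx's `aa` upstream balances across the three karmada-aggregated-apiserver instances on port 7443. A minimal Python sketch of this resolution chain (the dictionaries and function are illustrative, not part of karmada):

```python
# Illustrative model of the aggregated-API request path in this setup.
# Names (karmada-01, upstream "aa") follow this article's environment.

# APIService -> Service (from apiservice.yaml)
services = {
    ("karmada-system", "karmada-aggregated-apiserver"): {
        "type": "ExternalName",
        "externalName": "karmada-01",   # the node running nginx
    },
}

# nginx stream upstream "aa" (from the config above)
upstream_aa = ["172.31.209.245:7443", "172.31.209.246:7443", "172.31.209.247:7443"]

def resolve_aggregated_endpoint(namespace, name, port=443):
    """Return the URL the karmada-apiserver actually dials for this APIService."""
    svc = services[(namespace, name)]
    if svc["type"] == "ExternalName":
        return f"https://{svc['externalName']}:{port}"
    raise NotImplementedError("only ExternalName is modeled in this sketch")

print(resolve_aggregated_endpoint("karmada-system", "karmada-aggregated-apiserver"))
# https://karmada-01:443  -> nginx -> one of upstream_aa
```

This is why any hostname reachable from the karmada-apiserver works as the externalName: nginx, not Kubernetes, does the final load balancing to port 7443.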


03

Verification


  Join Kubernetes clusters to karmada, in both Pull and Push modes.

root@karmada-01:~# kubectl get cluster
NAME      VERSION   MODE   READY   AGE
demo      v1.22.0   Pull   True    11s
member1   v1.22.3   Push   True    46s


  Note: the join process is omitted here; if it is unclear, refer to the community documentation or my previous articles.


  You can now access the API of a Push-mode member cluster:

kubectl   get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes

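The raw path above is just the member cluster's own API path appended to karmada's per-cluster proxy prefix. A small helper (hypothetical, not part of karmada or kubectl) that composes such paths:

```python
def member_proxy_path(cluster: str, api_path: str) -> str:
    """Build the karmada aggregated-API proxy path for a member cluster."""
    return f"/apis/cluster.karmada.io/v1alpha1/clusters/{cluster}/proxy{api_path}"

print(member_proxy_path("member1", "/api/v1/nodes"))
# /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes
```

Any member-cluster API path can be substituted, e.g. `/api/v1/namespaces` to list namespaces through the same proxy.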

  Note: Pull mode requires a network plugin; the community documentation covers this, so it is not expanded on here.


  Create the example manifest

# cat karmada-nginx-example.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: docker.io/library/nginx:1.21.1-alpine
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - demo
        - member1
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - demo
            weight: 1
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation-svc
spec:
  resourceSelectors:
    - apiVersion: v1
      kind: Service
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - demo
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - demo
            weight: 1
          - targetCluster:
              clusterNames:
                - member1
            weight: 1

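With `replicaDivisionPreference: Weighted`, `replicaSchedulingType: Divided`, and static weights of 1:1, the 2 nginx replicas are split one per cluster. The following sketch mimics that division (largest-remainder rounding is my assumption for illustration; the real algorithm lives in the karmada scheduler):

```python
def divide_replicas(total, weights):
    """Split `total` replicas across clusters by static weight.

    weights: dict of cluster name -> weight; returns cluster name -> replicas.
    """
    weight_sum = sum(weights.values())
    # integer share first
    result = {c: total * w // weight_sum for c, w in weights.items()}
    remainder = total - sum(result.values())
    # hand out the remainder, larger fractional parts first
    fracs = sorted(weights, key=lambda c: (total * weights[c]) % weight_sum, reverse=True)
    for c in fracs[:remainder]:
        result[c] += 1
    return result

print(divide_replicas(2, {"member1": 1, "demo": 1}))  # {'member1': 1, 'demo': 1}
```

This is why the verification below shows `1/1` deployments in each member cluster while the karmada control plane reports `2/2` in total.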


  Create the resource propagation example

#  kubectl create -f  karmada-nginx-example.yaml
deployment.apps/nginx created
service/nginx created
propagationpolicy.policy.karmada.io/nginx-propagation created
propagationpolicy.policy.karmada.io/nginx-propagation-svc created


# kubectl get deployments.apps 
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           7s

(member1) # kubectl get deployments.apps --kubeconfig ./member1-kubeconfig 
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           63s

(demo) # kubectl get deployments.apps 
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           110s



04

Closing Remarks


Barring surprises, this will be my last article on installing karmada. I will turn this post into a Markdown document and submit it to the community, so future iterations of the binary installation guide will live there. Of course, if binary installation feels cumbersome, you can install karmada with the script, Helm chart, or CLI tool instead; the community has done a solid job on installation and deployment, and you can file issues if you run into problems.


When time allows, I will share some karmada use cases with you, such as how to implement and extend karmada controllers. Since karmada is compatible with native Kubernetes, there is a lot you can do with it, and you are welcome to give karmada a try.


https://github.com/karmada-io/karmada


This article is reposted from ProdanLabs.

