Authors: 宇汇, 壮怀, 先河
01 Overview
02 Solution Architecture

03 Prerequisites
A multi-cluster management master instance has been created, and the three ACK clusters used in this example (cluster1-beijing, cluster2-beijing, and cluster1-hangzhou) have been added as associated clusters. A GTM instance is also required for the traffic management section. See Related Links [1], [2], and [3].
04 Application Deployment
1. Run the following command to create the namespace demo.
kubectl create namespace demo
2. Create a file named app-meta.yaml with the following content. It defines the application's Deployment, Service, Ingress, and ConfigMap.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web-demo
  name: web-demo
  namespace: demo
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - image: acr-multiple-clusters-registry.cn-hangzhou.cr.aliyuncs.com/ack-multiple-clusters/web-demo:0.4.0
        name: web-demo
        env:
        - name: ENV_NAME
          value: cluster1-beijing
        volumeMounts:
        - name: config-file
          mountPath: "/config-file"
          readOnly: true
      volumes:
      - name: config-file
        configMap:
          items:
          - key: config.json
            path: config.json
          name: web-demo
---
apiVersion: v1
kind: Service
metadata:
  name: web-demo
  namespace: demo
  labels:
    app: web-demo
spec:
  selector:
    app: web-demo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-demo
  namespace: demo
  labels:
    app: web-demo
spec:
  rules:
  - host: web-demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-demo
            port:
              number: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-demo
  namespace: demo
  labels:
    app: web-demo
data:
  config.json: |
    {
      database-host: "beijing-db.pg.aliyun.com"
    }
3. Run the following command to deploy the application web-demo on the master instance. Note: Kubernetes resources created on the master instance are not delivered to the sub-clusters; they serve as metadata that is referenced by the Application in step 4b.
kubectl apply -f app-meta.yaml
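At this point the resources should exist only on the master instance. If you want to double-check before continuing, a minimal sketch using the same kubectl amc plugin shown later in this walkthrough:

# On the master instance: the Deployment, Service, Ingress, and ConfigMap metadata exist in namespace demo
kubectl get deployment,service,ingress,configmap -n demo

# On the sub-clusters: nothing has been distributed yet
kubectl amc get deployment web-demo -n demo -m all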
4. Create the application distribution rules.
a. Run the following command to list the associated clusters managed by the master instance and determine the distribution targets for the application.
kubectl amc get managedcluster
Name                  Alias               HubAccepted
managedcluster-cxxx   cluster1-hangzhou   true
managedcluster-cxxx   cluster2-beijing    true
managedcluster-cxxx   cluster1-beijing    true
b. Create a file named app.yaml with the following content. It defines the distribution (topology) policies, the differentiated-configuration (override) policies, the deployment workflow, and the Application. Replace each <managedcluster-cxxx> placeholder with the name of the corresponding target cluster from the output of step 4a; the three topology policies target different clusters, so each placeholder must be replaced individually.
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: cluster1-beijing
  namespace: demo
type: topology
properties:
  clusters: ["<managedcluster-cxxx>"]  # distribution target cluster 1: cluster1-beijing
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: cluster2-beijing
  namespace: demo
type: topology
properties:
  clusters: ["<managedcluster-cxxx>"]  # distribution target cluster 2: cluster2-beijing
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: cluster1-hangzhou
  namespace: demo
type: topology
properties:
  clusters: ["<managedcluster-cxxx>"]  # distribution target cluster 3: cluster1-hangzhou
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: override-env-cluster2-beijing
  namespace: demo
type: override
properties:
  components:
  - name: "deployment"
    traits:
    - type: env
      properties:
        containerName: web-demo
        env:
          ENV_NAME: cluster2-beijing  # differentiated env var for the deployment on cluster2-beijing
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: override-env-cluster1-hangzhou
  namespace: demo
type: override
properties:
  components:
  - name: "deployment"
    traits:
    - type: env
      properties:
        containerName: web-demo
        env:
          ENV_NAME: cluster1-hangzhou  # differentiated env var for the deployment on cluster1-hangzhou
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: override-replic-cluster1-hangzhou
  namespace: demo
type: override
properties:
  components:
  - name: "deployment"
    traits:
    - type: scaler
      properties:
        replicas: 1  # differentiated replica count for the deployment on cluster1-hangzhou
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: override-configmap-cluster1-hangzhou
  namespace: demo
type: override
properties:
  components:
  - name: "configmap"
    traits:
    - type: json-merge-patch  # differentiated ConfigMap for cluster1-hangzhou
      properties:
        data:
          config.json: |
            {
              database-address: "hangzhou-db.pg.aliyun.com"
            }
---
apiVersion: core.oam.dev/v1alpha1
kind: Workflow
metadata:
  name: deploy-demo
  namespace: demo
steps:  # deploy to cluster1-beijing, cluster2-beijing, and cluster1-hangzhou in that order
- type: deploy
  name: deploy-cluster1-beijing
  properties:
    policies: ["cluster1-beijing"]
- type: deploy
  name: deploy-cluster2-beijing
  properties:
    auto: false  # manual approval is required before deploying to cluster2-beijing
    policies: ["override-env-cluster2-beijing", "cluster2-beijing"]  # differentiate the env var when deploying to cluster2-beijing
- type: deploy
  name: deploy-cluster1-hangzhou
  properties:
    policies: ["override-env-cluster1-hangzhou", "override-replic-cluster1-hangzhou", "override-configmap-cluster1-hangzhou", "cluster1-hangzhou"]
    # differentiate the env var, replica count, and ConfigMap when deploying to cluster1-hangzhou
---
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  annotations:
    app.oam.dev/publishVersion: version8
  name: web-demo
  namespace: demo
spec:
  components:
  - name: deployment  # reference the Deployment as a separate component so it can be configured per cluster
    type: ref-objects
    properties:
      objects:
      - apiVersion: apps/v1
        kind: Deployment
        name: web-demo
  - name: configmap  # reference the ConfigMap as a separate component so it can be configured per cluster
    type: ref-objects
    properties:
      objects:
      - apiVersion: v1
        kind: ConfigMap
        name: web-demo
  - name: same-resource  # resources that are not differentiated per cluster
    type: ref-objects
    properties:
      objects:
      - apiVersion: v1
        kind: Service
        name: web-demo
      - apiVersion: networking.k8s.io/v1
        kind: Ingress
        name: web-demo
  workflow:
    ref: deploy-demo
c. Run the following command on the master instance to deploy the application app.yaml.
kubectl apply -f app.yaml
5. Run the following command to check the application status. Because the deploy-cluster2-beijing step sets auto to false, the workflow suspends and waits for manual approval.
kubectl get app web-demo -n demo
NAME       COMPONENT    TYPE          PHASE                HEALTHY   STATUS   AGE
web-demo   deployment   ref-objects   workflowSuspending   true               47h
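If you want to see which workflow step is suspended, one option is to read the step status from the Application object. This is a sketch that assumes the KubeVela-style status layout under .status.workflow.steps:

kubectl get app web-demo -n demo -o jsonpath='{range .status.workflow.steps[*]}{.name}{"\t"}{.phase}{"\n"}{end}'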
6. Run the following command to check the Deployment status of the application in each cluster.
kubectl amc get deployment web-demo -n demo -m all
Run on ManagedCluster managedcluster-cxxx (cluster1-hangzhou)
No resources found in demo namespace    # fresh deployment: the workflow has not reached cluster1-hangzhou yet
Run on ManagedCluster managedcluster-cxxx (cluster2-beijing)
No resources found in demo namespace    # fresh deployment: the workflow is waiting for manual approval before deploying to cluster2-beijing
Run on ManagedCluster managedcluster-cxxx (cluster1-beijing)
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
web-demo   5/5     5            5           47h    # the Deployment is running normally on cluster1-beijing
7. After verifying that the application runs correctly on cluster1-beijing, run the following command to resume the workflow and continue the deployment to cluster2-beijing and cluster1-hangzhou.
kubectl amc workflow resume web-demo -n demo
Successfully resume workflow: web-demo
8. Run the following command to check the application status again. The workflow should now be running.
kubectl get app web-demo -n demo
NAME       COMPONENT    TYPE          PHASE     HEALTHY   STATUS   AGE
web-demo   deployment   ref-objects   running   true               47h
9. Run the following command to confirm that the Deployment is running in all three clusters. Note that cluster1-hangzhou runs a single replica because of the override-replic-cluster1-hangzhou policy.
kubectl amc get deployment web-demo -n demo -m all
Run on ManagedCluster managedcluster-cxxx (cluster1-hangzhou)
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
web-demo   1/1     1            1           47h
Run on ManagedCluster managedcluster-cxxx (cluster2-beijing)
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
web-demo   5/5     5            5           2d
Run on ManagedCluster managedcluster-cxxx (cluster1-beijing)
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
web-demo   5/5     5            5           47h
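To confirm that the per-cluster overrides were applied, you can inspect the resources on cluster1-hangzhou directly. A sketch, assuming <managedcluster-cxxx> is the ID of cluster1-hangzhou from step 4a and that kubectl amc passes standard kubectl flags such as -o through:

# The ENV_NAME variable should be cluster1-hangzhou
kubectl amc get deployment web-demo -n demo -m <managedcluster-cxxx> -o yaml | grep -A 1 'name: ENV_NAME'

# The ConfigMap should point to the Hangzhou database address
kubectl amc get configmap web-demo -n demo -m <managedcluster-cxxx> -o yaml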
10. Run the following command to view the Ingress of each cluster and record the Ingress IP addresses; they are needed for the traffic management configuration below.
kubectl amc get ingress -n demo -m all
Run on ManagedCluster managedcluster-cxxx (cluster1-hangzhou)
NAME       CLASS   HOSTS                  ADDRESS           PORTS   AGE
web-demo   nginx   web-demo.example.com   47.xxx.xxx.xxx    80      47h
Run on ManagedCluster managedcluster-cxxx (cluster2-beijing)
NAME       CLASS   HOSTS                  ADDRESS           PORTS   AGE
web-demo   nginx   web-demo.example.com   123.xxx.xxx.xxx   80      2d
Run on ManagedCluster managedcluster-cxxx (cluster1-beijing)
NAME       CLASS   HOSTS                  ADDRESS           PORTS   AGE
web-demo   nginx   web-demo.example.com   182.xxx.xxx.xxx   80      2d
05 Traffic Management
By configuring Global Traffic Manager (GTM), the running status of the application is probed automatically, and traffic is switched to a healthy cluster automatically when a failure occurs.
1. Configure a Global Traffic Manager instance. web-demo.example.com is the domain name of the example application; replace it with your application's actual domain name, and add a DNS record that resolves the domain to the CNAME access domain of the GTM instance.
2. In the GTM instance you created, create two address pools:
pool-beijing: contains the Ingress IP addresses of the two Beijing clusters, with the load-balancing policy set to return all addresses, so that traffic is balanced across the two Beijing clusters. The Ingress IP addresses can be obtained by running "kubectl amc get ingress -n demo -m all" on the master instance.
pool-hangzhou: contains the Ingress IP address of the Hangzhou cluster.
3. Create an access policy that uses pool-beijing as the primary address pool and pool-hangzhou as the fallback pool, so that traffic fails over to the Hangzhou cluster when both Beijing clusters are unavailable.
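Before moving on, you can verify the DNS side of the GTM setup from any client. A minimal sketch, assuming the CNAME record for web-demo.example.com has already propagated:

# Should resolve through the GTM CNAME access domain and ultimately return the Beijing Ingress IP addresses
dig +short web-demo.example.com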


06 Deployment Verification
1. Under normal conditions, all traffic is handled by the applications in the two Beijing clusters, with each cluster handling 50% of the traffic.
for i in {1..50}; do curl web-demo.example.com; sleep 3; done
This is env cluster1-beijing !
Config file is {
database-host: "beijing-db.pg.aliyun.com"
}
This is env cluster1-beijing !
Config file is {
database-host: "beijing-db.pg.aliyun.com"
}
This is env cluster2-beijing !
Config file is {
database-host: "beijing-db.pg.aliyun.com"
}
This is env cluster1-beijing !
Config file is {
database-host: "beijing-db.pg.aliyun.com"
}
This is env cluster2-beijing !
Config file is {
database-host: "beijing-db.pg.aliyun.com"
}
This is env cluster2-beijing !
Config file is {
database-host: "beijing-db.pg.aliyun.com"
}
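To quantify the split, you can count the responses per cluster; this is a small sketch rather than part of the original walkthrough:

# Tally which cluster answered each of 100 requests; both Beijing clusters should appear in roughly equal numbers
for i in {1..100}; do curl -s web-demo.example.com | head -1; done | sort | uniq -c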
2. When the application on cluster1-beijing fails, GTM routes all traffic to the cluster2-beijing cluster.
for i in {1..50}; do curl web-demo.example.com; sleep 3; done
...
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>
This is env cluster2-beijing !
Config file is {
database-host: "beijing-db.pg.aliyun.com"
}
This is env cluster2-beijing !
Config file is {
database-host: "beijing-db.pg.aliyun.com"
}
This is env cluster2-beijing !
Config file is {
database-host: "beijing-db.pg.aliyun.com"
}
This is env cluster2-beijing !
Config file is {
database-host: "beijing-db.pg.aliyun.com"
}
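The walkthrough does not show how the failure was triggered. One way to simulate it, assuming you have a kubeconfig with direct access to the cluster (the file name below is hypothetical), is to scale the Deployment in that cluster to zero so that its Ingress starts returning 503:

# Simulate a failure on cluster1-beijing by removing all replicas
kubectl --kubeconfig cluster1-beijing.kubeconfig scale deployment web-demo -n demo --replicas=0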
3. When the applications on both cluster1-beijing and cluster2-beijing fail at the same time, GTM routes traffic to the cluster1-hangzhou cluster.
for i in {1..50}; do curl web-demo.example.com; sleep 3; done
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>
This is env cluster1-hangzhou !
Config file is {
database-address: "hangzhou-db.pg.aliyun.com"
}
This is env cluster1-hangzhou !
Config file is {
database-address: "hangzhou-db.pg.aliyun.com"
}
This is env cluster1-hangzhou !
Config file is {
database-address: "hangzhou-db.pg.aliyun.com"
}
This is env cluster1-hangzhou !
Config file is {
database-address: "hangzhou-db.pg.aliyun.com"
}
07 Summary
Related Links
[1] Enable a multi-cluster management master instance:
https://help.aliyun.com/document_detail/384048.html
[2] Manage associated clusters:
https://help.aliyun.com/document_detail/415167.html
[3] Create a GTM instance:
https://dns.console.aliyun.com/#/gtm2/list