
Exploring Istio multi-cluster: conclusions from deploying multi-cluster 50 times




Theory

What is Istio multi-cluster?

Istio multi-cluster means federating multiple Istio clusters into a single mesh. Say you have two Kubernetes clusters, each running Istio; they may sit on the same network or on different networks. On the same network, each cluster can reach the other's service IPs, but it does not know what those IPs are: applications usually call services by name, and service IPs are assigned dynamically and resolved through DNS. Because every Kubernetes cluster runs its own DNS, a cluster can only resolve the IPs of its own services, so cross-cluster calls by name fail. With an Istio federation, istiod watches the API servers of every cluster in the federation, so cluster1 can see cluster2's endpoints and the two Istio clusters can talk to each other. If they share a network, pods communicate directly; if they span multiple networks, traffic additionally goes through a so-called east-west gateway.
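You can see this DNS isolation for yourself by resolving a service name from inside each cluster. This is only a rough sketch: the namespace and workload names are placeholders for whatever you have deployed, and the pod image is assumed to contain nslookup.

# from a pod in cluster1, its own services resolve
kubectl exec deploy/sleep -- nslookup productpage.istio.svc.cluster.local
# a service that only exists in cluster2 does not resolve here,
# because each cluster's DNS only knows its own Services
kubectl exec deploy/sleep -- nslookup some-cluster2-only-svc.istio.svc.cluster.local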

Why use multiple clusters?

Some companies run very large systems, perhaps with thousands of business lines. Kubernetes has an upper limit on the capacity it supports, and that many pods cannot always be deployed into a single cluster, so you end up with multiple Kubernetes clusters. Installing Istio on each one produces multiple Istio clusters. Then a requirement appears: two services deployed in different clusters need to talk to each other. What to do? One option is to go from cluster1's egress gateway to cluster2's ingress gateway and on to the target service (a rough sketch of that style follows below). The drawback is manageability: you have to configure every such requirement individually. With a federation, pods communicate directly when the clusters share a network, or through east-west gateways when they do not, and almost all of the configuration happens once, when the clusters are set up.
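For contrast, here is a rough sketch of what the non-federated, per-service approach can look like: a ServiceEntry in cluster1 pointing at cluster2's ingress gateway. The host name and address are made up for illustration, and you would need an entry like this (plus matching gateway and routing configuration) for every cross-cluster service, which is exactly the per-requirement configuration burden the federation removes.

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: productpage-in-cluster2        # hypothetical name
  namespace: istio
spec:
  hosts:
  - productpage.cluster2.example.com   # assumed host name, not a real DNS entry here
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 192.168.229.131           # assumed: an entry point for cluster2's ingress gateway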

Deployment models

I will skip the single-cluster deployment model and describe the two-cluster models here; three or more clusters work the same way.

An Istio deployment can be described along these configuration dimensions:

  1. Single cluster or multiple clusters
  2. Single network or multiple networks
  3. Single control plane or multiple control planes
  4. Single mesh or multiple meshes

A single mesh can contain multiple clusters. A multi-cluster deployment gives a mesh the following capabilities:

  • Fault isolation and failover: when cluster-1 goes down, fall back to cluster-2
  • Locality-aware routing and failover: send requests to the nearest service
  • A choice of control plane models: support different levels of availability
  • Team and project isolation: each team runs its own cluster(s)


Network models

The mesh can span a single network or multiple networks; with more clusters, some of them may share a network while others do not. A single network performs somewhat better because pods communicate directly. Multiple networks require an east-west gateway for cross-network traffic, but they let you isolate the applications at the network level.

Multiple networks:

(figure: multi-network deployment)

Single network:

(figure: single-network deployment)

Control plane models

The control plane can be a single control plane, multiple control planes, or a hybrid in which some clusters share one control plane while the rest run their own. Multiple control planes bring the following benefits:

  • Higher availability: if one control plane becomes unavailable, the outage is limited to that control plane
  • Configuration isolation: configuration changes in one cluster, zone or region do not affect another control plane

A cluster that runs its own local control plane is called a primary cluster; a cluster without a local control plane is called a remote cluster.

Single control plane:

(figure: single control plane)

Multiple control planes:

(figure: multiple control planes)

Hybrid control planes:

(figure: hybrid control planes)

Hands-on

Environment

The machines used for the two-cluster deployment:

cluster1

192.168.229.128 master

192.168.229.129 master

192.168.229.130 node

cluster2

192.168.229.131 master

192.168.229.132 master

192.168.229.133 node

The machines used for the three-cluster deployment:

cluster1

192.168.229.137 master

192.168.229.138 master

192.168.229.139 node

cluster2

192.168.229.140 master

192.168.229.141 master

192.168.229.142 node

cluster3

192.168.229.143 master

192.168.229.144 master

192.168.229.145 node

Kubernetes version

[root@node01 ~]# kubectl version --short
Client Version: v1.21.0
Server Version: v1.21.0

Istio version

[root@node01 ~]# istioctl version
client version: 1.11.2
control plane version: 1.11.2
data plane version: none

Two-cluster preparation

First create the root CA. All of the Istio clusters must share the same root CA:

cluster1:
mkdir -p certs
make -f ../tools/certs/Makefile.selfsigned.mk root-ca
make -f ../tools/certs/Makefile.selfsigned.mk cluster1-cacerts
make -f ../tools/certs/Makefile.selfsigned.mk cluster2-cacerts
scp -r cluster2 [email protected]:/root/cluster2

kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
--from-file=cluster1/ca-cert.pem \
--from-file=cluster1/ca-key.pem \
--from-file=cluster1/root-cert.pem \
--from-file=cluster1/cert-chain.pem


cluster2:
kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
--from-file=cluster2/ca-cert.pem \
--from-file=cluster2/ca-key.pem \
--from-file=cluster2/root-cert.pem \
--from-file=cluster2/cert-chain.pem
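Before going further it is worth confirming that both clusters really ended up with the same root certificate. A quick sketch: run this in each cluster and compare the fingerprints.

kubectl get secret cacerts -n istio-system \
  -o jsonpath='{.data.root-cert\.pem}' | base64 -d \
  | openssl x509 -noout -fingerprint -sha256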

Two clusters

Single control plane

In the same network

(figure: arch — single control plane, same network)

Description:

cluster1 and cluster2 are both on network1. cluster1 runs an istiod; cluster2 has no control plane. cluster1's istiod watches both the cluster1 and cluster2 API servers, so a secret that grants access to cluster2's API server has to be configured. cluster1's services register with istiod directly, while cluster2's services register with istiod through the east-west gateway. Services in cluster1 and cluster2 reach each other directly; cluster2 has no east-west gateway.

Deployment steps:

Connect the two cluster networks:
Cluster 1: 128, 129, 130
Cluster 2: 131, 132, 133

Connect the two networks.
On 128, 129, 130:
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.131
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.133
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.132
route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.131

On 131, 132, 133:
route add -net 172.20.0.0 netmask 255.255.255.0 gw 192.168.229.128
route add -net 172.20.1.0 netmask 255.255.255.0 gw 192.168.229.129
route add -net 172.20.2.0 netmask 255.255.255.0 gw 192.168.229.130
route add -net 10.68.0.0 netmask 255.255.0.0 gw 192.168.229.128
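With the routes in place, a quick sanity check from a cluster1 node is to ping a pod IP in cluster2's pod CIDR (172.21.0.0/16 here); the address below is only an example, substitute a real pod IP from cluster2.

ping -c 3 172.21.1.10   # example cluster2 pod IP
# service IPs (10.69.0.0/16) are virtual, so test those with curl/TCP against a known port rather than ping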

Generate the IstioOperator deployment files
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Here I set the IP of cluster1's east-west gateway to 192.168.229.100.
If you are using a LoadBalancer instead, you can get the address with:
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and use that value for remotePilotAddress.

cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network1
      remotePilotAddress: 192.168.229.100
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to the other cluster
scp cluster2.yaml [email protected]:/root

Install cluster1
istioctl install -f cluster1.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 | istioctl install -y -f -


Assign the east-west gateway IP
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.100"]}}'

Expose istiod
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-istiod.yaml
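Before moving on to cluster2, it is worth confirming that the east-west gateway Service really carries the external IP we just patched in:

kubectl get svc istio-eastwestgateway -n istio-system
# EXTERNAL-IP should now show 192.168.229.100; the remote cluster will reach istiod through this address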

cluster2:
Generate the secret for accessing the API server
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.131:6443 > remote-secret-cluster2.yaml

Copy the secret to cluster1
scp remote-secret-cluster2.yaml [email protected]:/root

cluster1:
Apply the secret
kubectl apply -f remote-secret-cluster2.yaml

cluster2:
Install cluster2
istioctl install -f cluster2.yaml


cluster1:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster2:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

Verify
kubectl exec -n istio "$(kubectl get pod -n istio -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -- curl productpage.istio:9080/productpage

Verification

cluster1:
[root@node01 primary-Remote]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-5b896666d5-s2j5j.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
details-v1-64fc58cb97-45fnm.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
istio-eastwestgateway-5755d646c9-xx5tj.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-7bcfdcdb74-zr8n7 1.11.2
istio-egressgateway-5978ff79c-24qhx.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-7bcfdcdb74-zr8n7 1.11.2
istio-egressgateway-79c59ffb9-8wxkv.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-7bcfdcdb74-zr8n7 1.11.2
istio-ingressgateway-5869645595-vsgjk.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-7bcfdcdb74-zr8n7 1.11.2
istio-ingressgateway-5c8b454445-hn542.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-7bcfdcdb74-zr8n7 1.11.2
nginx-55b686795f-42qp4.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
productpage-v1-868c49bfcf-wgj7v.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
productpage-v1-bb688796c-d555c.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
ratings-v1-7f895bb49-72s76.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
ratings-v1-7fb9d4888f-zbwdh.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
reviews-v1-54b55d79d-7wg8r.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
reviews-v1-57955978bf-lrctt.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
reviews-v2-65c5cc89b4-g9hdq.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
reviews-v2-6c64ccc649-gf2bw.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
reviews-v3-7d7f679bdd-5gvst.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
reviews-v3-8557cdd4b5-9rw2f.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2
sentinel-rls-server-fbc7556d6-fmnpw.istio SYNCED SYNCED SYNCED SYNCED istiod-7bcfdcdb74-zr8n7 1.11.2

[root@node01 primary-Remote]# istioctl pc endpoint -n istio-system istio-ingressgateway-5c8b454445-hn542|grep productpage
172.20.1.54:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.0.49:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-5869645595-vsgjk |grep productpage
172.20.1.54:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.0.49:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

From the output above, all proxies connect to cluster1's istiod, and both clusters see both productpage endpoints.

Create the Gateway and VirtualService in cluster1 and cluster2:

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

Replace the NodePort with the one from your own cluster; a lookup sketch follows.
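The demo profile exposes istio-ingressgateway as a NodePort Service, so you can look the port up instead of guessing; a sketch (the port name http2 is what the default ingress gateway uses for port 80, adjust if yours differs):

kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'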

Access:

http://192.168.229.128:32498/productpage

Then check the productpage logs in each cluster:

cluster1: kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8
cluster2: kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

Both show access log entries. Repeat with http://192.168.229.131:31614/productpage; again, both clusters log requests.

This shows the multi-cluster setup works: each Envoy cluster has two endpoints, and requests are balanced across them round-robin.
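If you want to watch the round-robin behaviour rather than read it off the endpoint table, fire a few requests at the ingress gateway and compare the productpage access logs in the two clusters; both should receive a share of the traffic. A rough sketch:

for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" http://192.168.229.128:32498/productpage
done
# then run the two kubectl logs commands above and compare how many requests each pod served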

Cleanup:

cluster1:

kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete vs istiod-vs -n istio-system
kubectl delete gw istiod-gateway -n istio-system
kubectl delete secret istio-remote-secret-cluster2 -n istio-system
istioctl x uninstall -f cluster1.yaml

reboot

cluster2:

kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
istioctl x uninstall -f cluster2.yaml

reboot

In different networks

(figure: single control plane, different networks)

Description:

cluster1 is on network1 and cluster2 is on network2. cluster1 runs an istiod and cluster2 uses it. cluster1's services register with istiod directly, while cluster2's services register with istiod through cluster1's east-west gateway. cluster1's services reach cluster2's services through cluster2's east-west gateway, and cluster2's services reach cluster1's services through cluster1's east-west gateway. cluster1's istiod watches both the cluster1 and cluster2 API servers.
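In the different-network layout the expose-services.yaml manifest is what actually opens the door between networks: it creates the cross-network-gateway (deleted again in the cleanup steps below) on the east-west gateway's port 15443. After applying it in each cluster you can confirm it is there:

kubectl get gateway cross-network-gateway -n istio-system
kubectl get svc istio-eastwestgateway -n istio-system   # the port list should include 15443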

Cluster 1: 128, 129, 130
Cluster 2: 131, 132, 133

Label the istio-system namespace
cluster1:
kubectl label namespace istio-system topology.istio.io/network=network1
cluster2:
kubectl label namespace istio-system topology.istio.io/network=network2

cluster1:
Generate the IstioOperator deployment file
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Here I set the IP of cluster1's east-west gateway to 192.168.229.100.
If you are using a LoadBalancer instead, you can get the address with:
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and use that value for remotePilotAddress.

cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
      remotePilotAddress: 192.168.229.100
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to the other cluster
scp cluster2.yaml [email protected]:/root

Install Istio
istioctl install -f cluster1.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 | istioctl install -y -f -

Assign the east-west gateway IP
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.100"]}}'

Expose istiod
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-istiod.yaml
Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml

cluster2:
Generate the secret for istiod to access the API server
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.131:6443 > remote-secret-cluster2.yaml

Copy the secret to cluster1
scp remote-secret-cluster2.yaml [email protected]:/root

cluster1:
Apply the secret
kubectl apply -f remote-secret-cluster2.yaml -n istio-system


cluster2:
Install cluster2
istioctl install -f cluster2.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster2 --network network2 | istioctl install -y -f -

Assign the east-west gateway IP
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.101"]}}'

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml

Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster1:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

Verification

cluster1:
[root@node01 primary-Remote-different-network]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-658958d644-ftkhr.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
details-v1-85948dd9cc-zgd54.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
istio-eastwestgateway-67cc5d4459-6bmhm.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-8956df5f8-46td5 1.11.2
istio-eastwestgateway-75b665f4b8-6hm67.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-8956df5f8-46td5 1.11.2
istio-egressgateway-5bcd6d77b7-fwgtn.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-8956df5f8-46td5 1.11.2
istio-egressgateway-85d87dc445-5v5gf.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-8956df5f8-46td5 1.11.2
istio-ingressgateway-564765cb6f-pqxjf.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-8956df5f8-46td5 1.11.2
istio-ingressgateway-cc7c87f6f-58wkc.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-8956df5f8-46td5 1.11.2
nginx-7c9d77cc4-8bwjf.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
productpage-v1-765559f7b4-vr8sh.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
productpage-v1-7b77d65b55-gjhmm.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
ratings-v1-6cc9878677-h8n4k.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
ratings-v1-867f46b89b-rl779.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
reviews-v1-54b55d79d-7wg8r.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
reviews-v1-5769f64c6-s5v55.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
reviews-v2-74466bb49b-ptdpc.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
reviews-v2-76747775f6-22fwz.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
reviews-v3-86587c968d-scc4b.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
reviews-v3-8688d55f6d-955ql.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2
sentinel-rls-server-68d988869d-2s8pf.istio SYNCED SYNCED SYNCED SYNCED istiod-8956df5f8-46td5 1.11.2

[root@node01 primary-Remote-different-network]# istioctl pc endpoint -n istio-system istio-ingressgateway-59cb545bd8-schzj |grep productpage
172.20.1.68:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.101:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-55f785c858-cs25s |grep productpage
172.21.1.73:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

Note that cluster1 has an endpoint at 192.168.229.101:15443, which is cluster2's east-west gateway: cluster1's services talk to cluster2's services through that address. Likewise, cluster2 has an endpoint at 192.168.229.100:15443, cluster1's east-west gateway, which cluster2's services use to reach cluster1's services. All proxies are connected to cluster1's istiod.
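You can also confirm that the 15443 path really terminates on the east-west gateway by dumping its listener for that port (a sketch; substitute your own istio-eastwestgateway pod name):

istioctl proxy-config listener istio-eastwestgateway-67cc5d4459-6bmhm -n istio-system --port 15443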

Create the Gateway and VirtualService in cluster1 and cluster2:

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

Replace the NodePort with the one from your own cluster (see the lookup sketch earlier).

Access:

http://192.168.229.128:32498/productpage

Then check the productpage logs in each cluster:

cluster1: kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8
cluster2: kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

Both show access log entries. Repeat with http://192.168.229.131:31614/productpage; again, both clusters log requests.

This shows the multi-cluster setup works: each Envoy cluster has two endpoints, and requests are balanced across them round-robin.

Cleanup:

cluster1:

kubectl label namespace istio-system topology.istio.io/network-
kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete vs istiod-vs -n istio-system
kubectl delete gw istiod-gateway -n istio-system
kubectl delete gw cross-network-gateway -n istio-system
kubectl delete secret istio-remote-secret-cluster2 -n istio-system
istioctl x uninstall -f cluster1.yaml

reboot

cluster2:

kubectl label namespace istio-system topology.istio.io/network-
kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete gw cross-network-gateway -n istio-system
istioctl x uninstall -f cluster2.yaml

reboot

Two control planes

In the same network

(figure: two control planes, same network)

Description:

cluster1 and cluster2 are both on network1 and each runs its own istiod. cluster1's istiod watches both the cluster1 and cluster2 API servers, and so does cluster2's istiod. cluster1's services connect to cluster1's istiod and cluster2's services connect to cluster2's istiod. Services in cluster1 and cluster2 talk to each other directly.

Cluster 1: 128, 129, 130
Cluster 2: 131, 132, 133

Connect the two networks.
On 128, 129, 130:
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.131
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.133
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.132
route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.131

On 131, 132, 133:
route add -net 172.20.0.0 netmask 255.255.255.0 gw 192.168.229.128
route add -net 172.20.1.0 netmask 255.255.255.0 gw 192.168.229.129
route add -net 172.20.2.0 netmask 255.255.255.0 gw 192.168.229.130
route add -net 10.68.0.0 netmask 255.255.0.0 gw 192.168.229.128


cluster1:
Generate the IstioOperator installation file
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF


Generate the IstioOperator installation file
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster2
scp cluster2.yaml [email protected]:/root


cluster1:
Generate the secret for accessing the API server
istioctl x create-remote-secret --name=cluster1 --server=https://192.168.229.128:6443 > remote-secret-cluster1.yaml
Copy the secret to cluster2
scp remote-secret-cluster1.yaml [email protected]:/root

cluster2:
Generate the secret for accessing the API server
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.131:6443 > remote-secret-cluster2.yaml
Copy the secret to cluster1
scp remote-secret-cluster2.yaml [email protected]:/root

cluster1:
Apply the secret
kubectl apply -f remote-secret-cluster2.yaml

Install the cluster
istioctl install -f cluster1.yaml


cluster2:
Apply the secret
kubectl apply -f remote-secret-cluster1.yaml
Install the cluster
istioctl install -f cluster2.yaml
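At this point each cluster should hold a remote secret for the other one. A quick way to list them (a sketch; istioctl labels the secrets it generates with istio/multiCluster=true and names them istio-remote-secret-<cluster>):

kubectl get secret -n istio-system -l istio/multiCluster=true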

cluster1:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster2:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

Verify
kubectl exec -n istio "$(kubectl get pod -n istio -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -- curl productpage.istio:9080/productpage

Verification:

cluster1:
[root@node01 multi-Primary]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-f894458fc-6qz8r.istio SYNCED SYNCED SYNCED SYNCED istiod-7ff8bfbf5b-x9jql 1.11.2
istio-egressgateway-668486bd4c-7mfpf.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-7ff8bfbf5b-x9jql 1.11.2
istio-ingressgateway-74c9989fff-zvcs9.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-7ff8bfbf5b-x9jql 1.11.2
productpage-v1-68684bdfc5-6mptj.istio SYNCED SYNCED SYNCED SYNCED istiod-7ff8bfbf5b-x9jql 1.11.2
ratings-v1-85cfdf69f7-4ndpl.istio SYNCED SYNCED SYNCED SYNCED istiod-7ff8bfbf5b-x9jql 1.11.2
reviews-v1-54b55d79d-7wg8r.istio SYNCED SYNCED SYNCED SYNCED istiod-7ff8bfbf5b-x9jql 1.11.2
reviews-v2-b5d49c57b-mw56g.istio SYNCED SYNCED SYNCED SYNCED istiod-7ff8bfbf5b-x9jql 1.11.2
reviews-v3-679bd5fb56-phrt7.istio SYNCED SYNCED SYNCED SYNCED istiod-7ff8bfbf5b-x9jql 1.11.2
sentinel-rls-server-5548bf96cd-5w7sk.istio SYNCED SYNCED SYNCED SYNCED istiod-7ff8bfbf5b-x9jql 1.11.2

[root@node01 multi-Primary]# istioctl pc endpoint -n istio-system istio-ingressgateway-74c9989fff-zvcs9 |grep productpage
172.20.1.81:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.0.70:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-69d5766c4f-w6ntl.istio SYNCED SYNCED SYNCED SYNCED istiod-78548b8fcb-6w8hr 1.11.2
istio-egressgateway-7ccffd9f9c-n6gj7.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-78548b8fcb-6w8hr 1.11.2
istio-ingressgateway-5ccbcd45ff-d79zd.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-78548b8fcb-6w8hr 1.11.2
nginx-7674599775-wv94x.istio SYNCED SYNCED SYNCED SYNCED istiod-78548b8fcb-6w8hr 1.11.2
productpage-v1-7fd98586c5-hcsbm.istio SYNCED SYNCED SYNCED SYNCED istiod-78548b8fcb-6w8hr 1.11.2
ratings-v1-c9586cbb5-w84kr.istio SYNCED SYNCED SYNCED SYNCED istiod-78548b8fcb-6w8hr 1.11.2
reviews-v1-6d4f656d5c-755d2.istio SYNCED SYNCED SYNCED SYNCED istiod-78548b8fcb-6w8hr 1.11.2
reviews-v2-9d46df857-f8g7v.istio SYNCED SYNCED SYNCED SYNCED istiod-78548b8fcb-6w8hr 1.11.2
reviews-v3-587f69f6df-ml4cx.istio SYNCED SYNCED SYNCED SYNCED istiod-78548b8fcb-6w8hr 1.11.2

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-5ccbcd45ff-d79zd|grep productpage
172.20.1.81:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.0.70:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

You can see that cluster1's proxies connect to cluster1's istiod and cluster2's proxies connect to cluster2's istiod. The endpoints in both clusters are pod IPs, because the two clusters share one network. cluster1's istiod watches both the cluster1 and cluster2 API servers, and so does cluster2's istiod.

Create the Gateway and VirtualService in cluster1 and cluster2:

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

Replace the NodePort with the one from your own cluster (see the lookup sketch earlier).

Access:

http://192.168.229.128:32498/productpage

Then check the productpage logs in each cluster:

cluster1: kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8
cluster2: kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

Both show access log entries. Repeat with http://192.168.229.131:31614/productpage; again, both clusters log requests.

This shows the multi-cluster setup works: each Envoy cluster has two endpoints, and requests are balanced across them round-robin.

Cleanup:

cluster1:

kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete secret istio-remote-secret-cluster2 -n istio-system
istioctl x uninstall -f cluster1.yaml

reboot

cluster2:

kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete secret istio-remote-secret-cluster1 -n istio-system
istioctl x uninstall -f cluster2.yaml

reboot

In different networks

(figure: two control planes, different networks)

Description:

cluster1 is on network1 and cluster2 is on network2. Each cluster runs its own istiod, and each istiod watches both the cluster1 and cluster2 API servers. cluster1's services connect to cluster1's istiod and cluster2's services connect to cluster2's istiod. cluster1's services reach cluster2's services through cluster2's east-west gateway, and cluster2's services reach cluster1's services through cluster1's east-west gateway.

Cluster 1: 128, 129, 130
Cluster 2: 131, 132, 133

Label the istio-system namespace
cluster1:
kubectl label namespace istio-system topology.istio.io/network=network1
cluster2:
kubectl label namespace istio-system topology.istio.io/network=network2

cluster1:
Generate the IstioOperator deployment file
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Generate the IstioOperator deployment file
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster2
scp cluster2.yaml [email protected]:/root

Generate the secret for watching the API server
istioctl x create-remote-secret --name=cluster1 --server=https://192.168.229.128:6443 > remote-secret-cluster1.yaml
Copy the secret to cluster2
scp remote-secret-cluster1.yaml [email protected]:/root

cluster2:
Generate the secret for watching the API server
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.131:6443 > remote-secret-cluster2.yaml

Copy the secret to cluster1
scp remote-secret-cluster2.yaml [email protected]:/root

cluster1:
Apply the remote API server secret
kubectl apply -f remote-secret-cluster2.yaml

Install Istio
istioctl install -f cluster1.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 | istioctl install -y -f -

Assign the east-west gateway IP
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.100"]}}'

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml


cluster2:
Apply the remote API server secret
kubectl apply -f remote-secret-cluster1.yaml

Install Istio
istioctl install -f cluster2.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster2 --network network2 | istioctl install -y -f -

Assign the east-west gateway IP
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.101"]}}'

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml

cluster1:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster2:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

Verify
kubectl exec -n istio "$(kubectl get pod -n istio -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -- curl productpage.istio:9080/productpage

Verification:

cluster1:

[root@node01 multi-Primary-different-network]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-97f8b5b6-8q8lj.istio SYNCED SYNCED SYNCED SYNCED istiod-68b7fb46fb-2f6c4 1.11.2
istio-eastwestgateway-7df75bb497-8xsht.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-68b7fb46fb-2f6c4 1.11.2
istio-egressgateway-85dcbc779-b2cd9.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-68b7fb46fb-2f6c4 1.11.2
istio-ingressgateway-574d4d7bcf-2tzjw.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-68b7fb46fb-2f6c4 1.11.2
productpage-v1-7945c8b546-cj8mg.istio SYNCED SYNCED SYNCED SYNCED istiod-68b7fb46fb-2f6c4 1.11.2
ratings-v1-5976db7f7d-qfn4q.istio SYNCED SYNCED SYNCED SYNCED istiod-68b7fb46fb-2f6c4 1.11.2
reviews-v1-54b55d79d-7wg8r.istio SYNCED SYNCED SYNCED SYNCED istiod-68b7fb46fb-2f6c4 1.11.2
reviews-v2-7b6c6688d9-klzvl.istio SYNCED SYNCED SYNCED SYNCED istiod-68b7fb46fb-2f6c4 1.11.2
reviews-v3-66bfff85b5-vfl68.istio SYNCED SYNCED SYNCED SYNCED istiod-68b7fb46fb-2f6c4 1.11.2
sentinel-rls-server-7cf9f56ff-sn56p.istio SYNCED SYNCED SYNCED SYNCED istiod-68b7fb46fb-2f6c4 1.11.2

[root@node01 multi-Primary-different-network]# istioctl pc endpoint -n istio-system istio-ingressgateway-574d4d7bcf-2tzjw|grep productpage
172.20.1.93:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.101:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-d6c6465cc-r7v89.istio SYNCED SYNCED SYNCED SYNCED istiod-9f45d48fc-wdpqg 1.11.2
istio-eastwestgateway-677969f99c-p9qcq.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-9f45d48fc-wdpqg 1.11.2
istio-egressgateway-694848755c-x9mph.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-9f45d48fc-wdpqg 1.11.2
istio-ingressgateway-76ddb7fb77-xsvvr.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-9f45d48fc-wdpqg 1.11.2
nginx-9855dcfdc-2l4cf.istio SYNCED SYNCED SYNCED SYNCED istiod-9f45d48fc-wdpqg 1.11.2
productpage-v1-7f5c787875-kdjnv.istio SYNCED SYNCED SYNCED SYNCED istiod-9f45d48fc-wdpqg 1.11.2
ratings-v1-58b7976469-pcmgl.istio SYNCED SYNCED SYNCED SYNCED istiod-9f45d48fc-wdpqg 1.11.2
reviews-v1-ddb8856fc-rm8gk.istio SYNCED SYNCED SYNCED SYNCED istiod-9f45d48fc-wdpqg 1.11.2
reviews-v2-dfdc64c49-h89zq.istio SYNCED SYNCED SYNCED SYNCED istiod-9f45d48fc-wdpqg 1.11.2
reviews-v3-54db675c77-bv6l4.istio SYNCED SYNCED SYNCED SYNCED istiod-9f45d48fc-wdpqg 1.11.2

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-76ddb7fb77-xsvvr |grep productpage
172.21.1.93:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

Each cluster's proxies connect to their own cluster's istiod. cluster1's endpoint 192.168.229.101:15443 is cluster2's east-west gateway, and cluster2's endpoint 192.168.229.100:15443 is cluster1's east-west gateway, because the clusters are on different networks. cluster1's istiod watches both the cluster1 and cluster2 API servers, and so does cluster2's istiod.

Create the Gateway and VirtualService in cluster1 and cluster2:

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

Replace the NodePort with the one from your own cluster (see the lookup sketch earlier).

Access:

http://192.168.229.128:32498/productpage

Then check the productpage logs in each cluster:

cluster1: kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8
cluster2: kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

Both show access log entries. Repeat with http://192.168.229.131:31614/productpage; again, both clusters log requests.

This shows the multi-cluster setup works: each Envoy cluster has two endpoints, and requests are balanced across them round-robin.

Cleanup:

cluster1:

kubectl label namespace istio-system topology.istio.io/network-
kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete gw cross-network-gateway -n istio-system
kubectl delete secret istio-remote-secret-cluster2 -n istio-system
istioctl x uninstall -f cluster1.yaml

reboot

cluster2:

kubectl label namespace istio-system topology.istio.io/network-
kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete gw cross-network-gateway -n istio-system
kubectl delete secret istio-remote-secret-cluster1 -n istio-system
istioctl x uninstall -f cluster2.yaml

reboot

Three-cluster preparation

First create the root CA. All of the Istio clusters must share the same root CA:

cluster1:
mkdir -p certs
make -f ../tools/certs/Makefile.selfsigned.mk root-ca
make -f ../tools/certs/Makefile.selfsigned.mk cluster1-cacerts
make -f ../tools/certs/Makefile.selfsigned.mk cluster2-cacerts
make -f ../tools/certs/Makefile.selfsigned.mk cluster3-cacerts
scp -r cluster2 [email protected]:/root/cluster2
scp -r cluster3 [email protected]:/root/cluster3

cluster1:
kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
--from-file=cluster1/ca-cert.pem \
--from-file=cluster1/ca-key.pem \
--from-file=cluster1/root-cert.pem \
--from-file=cluster1/cert-chain.pem

cluster2:
kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
--from-file=cluster2/ca-cert.pem \
--from-file=cluster2/ca-key.pem \
--from-file=cluster2/root-cert.pem \
--from-file=cluster2/cert-chain.pem
cluster3:
kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
--from-file=cluster3/ca-cert.pem \
--from-file=cluster3/ca-key.pem \
--from-file=cluster3/root-cert.pem \
--from-file=cluster3/cert-chain.pem

Three clusters

Single control plane

Single network

(figure: three clusters, single control plane, single network)

Description:

All three clusters share one network. cluster1 runs an istiod; cluster2 and cluster3 do not. cluster1's services connect to cluster1's istiod directly, while the services in cluster2 and cluster3 connect to it through cluster1's east-west gateway. Services in cluster1, cluster2 and cluster3 talk to each other directly. cluster1's istiod watches the API servers of all three clusters.

Connect the three cluster networks:
Cluster 1: 137, 138, 139
Cluster 2: 140, 141, 142
Cluster 3: 143, 144, 145

Connect the networks.
On 137, 138, 139:
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.142
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.141
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.140

route add -net 172.22.2.0 netmask 255.255.255.0 gw 192.168.229.145
route add -net 172.22.0.0 netmask 255.255.255.0 gw 192.168.229.144
route add -net 172.22.1.0 netmask 255.255.255.0 gw 192.168.229.143

route add -net 10.70.0.0 netmask 255.255.0.0 gw 192.168.229.143
route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.140

On 140, 141, 142:
route add -net 172.20.2.0 netmask 255.255.255.0 gw 192.168.229.139
route add -net 172.20.0.0 netmask 255.255.255.0 gw 192.168.229.138
route add -net 172.20.1.0 netmask 255.255.255.0 gw 192.168.229.137

route add -net 172.22.2.0 netmask 255.255.255.0 gw 192.168.229.145
route add -net 172.22.0.0 netmask 255.255.255.0 gw 192.168.229.144
route add -net 172.22.1.0 netmask 255.255.255.0 gw 192.168.229.143

route add -net 10.70.0.0 netmask 255.255.0.0 gw 192.168.229.143
route add -net 10.68.0.0 netmask 255.255.0.0 gw 192.168.229.137


On 143, 144, 145:
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.142
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.141
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.140

route add -net 172.20.2.0 netmask 255.255.255.0 gw 192.168.229.139
route add -net 172.20.0.0 netmask 255.255.255.0 gw 192.168.229.138
route add -net 172.20.1.0 netmask 255.255.255.0 gw 192.168.229.137

route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.140
route add -net 10.68.0.0 netmask 255.255.0.0 gw 192.168.229.137



cluster1:
Generate the IstioOperator deployment file
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Here I set the IP of cluster1's east-west gateway to 192.168.229.100.
If you are using a LoadBalancer instead, you can get the address with:
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and use that value for remotePilotAddress.

Generate the IstioOperator deployment file
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network1
      remotePilotAddress: 192.168.229.100
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster2
scp cluster2.yaml [email protected]:/root


Here I set the IP of cluster1's east-west gateway to 192.168.229.100.
If you are using a LoadBalancer instead, you can get the address with:
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and use that value for remotePilotAddress.

Generate the IstioOperator deployment file
cat <<EOF > cluster3.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster3
      network: network1
      remotePilotAddress: 192.168.229.100
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster3
scp cluster3.yaml [email protected]:/root


Install Istio
istioctl install -f cluster1.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 | istioctl install -y -f -


Assign the east-west gateway IP
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.100"]}}'

Expose istiod
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-istiod.yaml



cluster2:
Generate the secret for accessing the API server
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.140:6443 > remote-secret-cluster2.yaml

Copy the secret to cluster1
scp remote-secret-cluster2.yaml [email protected]:/root


cluster3:
Generate the secret for accessing the API server
istioctl x create-remote-secret --name=cluster3 --server=https://192.168.229.143:6443 > remote-secret-cluster3.yaml

Copy the secret to cluster1
scp remote-secret-cluster3.yaml [email protected]:/root

cluster1:
Apply the secrets
kubectl apply -f remote-secret-cluster2.yaml
kubectl apply -f remote-secret-cluster3.yaml
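A quick check that both remote secrets landed in cluster1; the names follow the istio-remote-secret-<cluster> convention also used in the cleanup steps below:

kubectl get secrets -n istio-system | grep istio-remote-secret
# expect istio-remote-secret-cluster2 and istio-remote-secret-cluster3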



cluster2:
Install Istio
istioctl install -f cluster2.yaml

cluster3:
Install Istio
istioctl install -f cluster3.yaml

cluster1:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster2:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster3:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

Verification:

cluster1:
[root@node01 samenetwork]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-56975cb5d-dct8n.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
details-v1-7dbccb7cd6-4p4tb.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
details-v1-b76db5cdc-k8s79.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
istio-eastwestgateway-6bfdcdcc69-sx6lr.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-776d477cb4-44fjd 1.11.2
istio-egressgateway-66685b4c55-bzvdx.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-776d477cb4-44fjd 1.11.2
istio-egressgateway-6ffc646d79-ljpk2.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-776d477cb4-44fjd 1.11.2
istio-egressgateway-b57dff6c-jj4tv.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-776d477cb4-44fjd 1.11.2
istio-ingressgateway-58f4585dc7-qllq2.istio-system SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
istio-ingressgateway-78fb9874cc-zzsgk.istio-system SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
istio-ingressgateway-8f69f699f-tnvxc.istio-system SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
productpage-v1-57c54b6546-rtzst.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
productpage-v1-75b4995fdf-zkh7n.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
productpage-v1-ff858974b-796lx.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
ratings-v1-78857d59c5-dpg6q.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
ratings-v1-7f665dc48b-btqpp.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
ratings-v1-85d69fd654-tt57m.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
reviews-v1-545b7f7ddb-2v9m5.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
reviews-v1-5968f866f4-g9lvc.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
reviews-v1-99f485987-nm87s.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
reviews-v2-59649dc9ff-n825z.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
reviews-v2-84dd779d8b-2bn2p.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
reviews-v2-85c4f76b94-8gw2x.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
reviews-v3-54c5fc78-bqhq5.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
reviews-v3-6979cc5bc8-7dj5j.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2
reviews-v3-6df8b58dc5-p2mdm.istio SYNCED SYNCED SYNCED SYNCED istiod-776d477cb4-44fjd 1.11.2

[root@node01 samenetwork]# istioctl pc endpoint -n istio-system istio-ingressgateway-58f4585dc7-qllq2 |grep productpage
172.20.1.69:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.2.135:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.1.96:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-78fb9874cc-zzsgk|grep productpage
172.20.1.69:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.2.135:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.1.96:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster3:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-8f69f699f-tnvxc |grep productpage
172.20.1.69:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.2.135:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.1.96:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

From the output above, the proxies of all three clusters connect to cluster1's istiod. Every cluster sees three productpage endpoints, all of them pod IPs, because everything is on one network. cluster1's istiod watches the API servers of all three clusters.
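Instead of grepping, istioctl can also filter the endpoint table down to a single Envoy cluster, which is handier as the mesh grows; a sketch against the same ingress gateway pod:

istioctl pc endpoint -n istio-system istio-ingressgateway-58f4585dc7-qllq2 \
  --cluster "outbound|9080||productpage.istio.svc.cluster.local"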

Deploy the bookinfo Gateway and VirtualService

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

cluster3:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

Access:

http://192.168.229.137:32498/productpage

Then check the productpage logs in each cluster:

cluster1: kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8
cluster2: kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l
cluster3: kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

All three show access log entries. Repeat with http://192.168.229.140:31614/productpage and http://192.168.229.143:32050/productpage; in each case all three clusters log requests.

This shows the multi-cluster setup works: each Envoy cluster has three endpoints, and requests are balanced across them round-robin.

Cleanup:

cluster1:

kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete secret istio-remote-secret-cluster2 -n istio-system
kubectl delete secret istio-remote-secret-cluster3 -n istio-system
kubectl delete vs istiod-vs -n istio-system
kubectl delete gw istiod-gateway -n istio-system
istioctl x uninstall -f cluster1.yaml

reboot



cluster2:

kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
istioctl x uninstall -f cluster2.yaml

reboot



cluster3:

kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
istioctl x uninstall -f cluster3.yaml

reboot

Two networks

Two east-west gateways

(figure: three clusters, two networks, two east-west gateways)

Description:

cluster1 is on network1; cluster2 and cluster3 are on network2. cluster1 runs an istiod; cluster2 and cluster3 do not. cluster1 has its own east-west gateway, while cluster2 and cluster3 share one east-west gateway that lives in cluster2. Pods in cluster2 and cluster3 talk to each other directly, but traffic between cluster1's services and the services in cluster2/cluster3 goes through the east-west gateway in cluster2. cluster1's proxies connect to cluster1's istiod directly; the proxies in cluster2 and cluster3 connect to it through cluster1's east-west gateway. cluster1's istiod watches all three clusters.

Two networks.
The network2 east-west gateway can live in either cluster2 or cluster3; here cluster2 has the gateway and cluster3 does not.
This layout is not recommended: locality-based load balancing misbehaves when two clusters hide behind a single east-west gateway. A hedged sketch of the kind of locality configuration this affects follows.
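To make the locality problem concrete: locality failover is normally driven by a DestinationRule like the illustrative sketch below, and Istio decides "where" an endpoint lives from the cluster it came from. When cluster2 and cluster3 sit behind one shared east-west gateway, remote callers see a single gateway endpoint for both clusters, so per-locality weighting or failover cannot tell them apart. The region names are placeholders, not values from this setup.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: productpage-locality        # illustrative only
  namespace: istio
spec:
  host: productpage.istio.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        failover:
        - from: region1             # placeholder region of the calling cluster
          to: region2               # placeholder region to fail over to
    outlierDetection:               # required for locality failover to engage
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s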

Cluster 1: 137, 138, 139
Cluster 2: 140, 141, 142
Cluster 3: 143, 144, 145

Connect the cluster2 and cluster3 networks.
On 140, 141, 142:
route add -net 172.22.2.0 netmask 255.255.255.0 gw 192.168.229.145
route add -net 172.22.0.0 netmask 255.255.255.0 gw 192.168.229.144
route add -net 172.22.1.0 netmask 255.255.255.0 gw 192.168.229.143

route add -net 10.70.0.0 netmask 255.255.0.0 gw 192.168.229.143


On 143, 144, 145:
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.142
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.141
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.140

route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.140

Label the istio-system namespace
cluster1:
kubectl label namespace istio-system topology.istio.io/network=network1

cluster2:
kubectl label namespace istio-system topology.istio.io/network=network2

cluster3:
kubectl label namespace istio-system topology.istio.io/network=network2

Generate the operator deployment files
cluster1:
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      imagePullPolicy: IfNotPresent
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Here I set the IP of cluster1's east-west gateway to 192.168.229.100.
If you are using a LoadBalancer instead, you can get the address with:
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and use that value for remotePilotAddress.

Generate the operator deployment file
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      imagePullPolicy: IfNotPresent
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
      remotePilotAddress: 192.168.229.100
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Here I set the IP of cluster1's east-west gateway to 192.168.229.100.
If you are using a LoadBalancer instead, you can get the address with:
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and use that value for remotePilotAddress.

Generate the operator deployment file
cat <<EOF > cluster3.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      imagePullPolicy: IfNotPresent
      meshID: mesh1
      multiCluster:
        clusterName: cluster3
      network: network2
      remotePilotAddress: 192.168.229.100
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster2
scp cluster2.yaml [email protected]:/root
Copy the deployment file to cluster3
scp cluster3.yaml [email protected]:/root

Install cluster1
istioctl install -f cluster1.yaml
Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 | istioctl install -y -f -


Assign the east-west gateway IP
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.100"]}}'

Expose istiod
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-istiod.yaml
Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml


cluster2:
Generate the secret for watching the API server
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.140:6443 > remote-secret-cluster2.yaml
Copy the secret to cluster1
scp remote-secret-cluster2.yaml [email protected]:/root


cluster3:
Generate the secret for watching the API server
istioctl x create-remote-secret --name=cluster3 --server=https://192.168.229.143:6443 > remote-secret-cluster3.yaml

Copy the secret to cluster1
scp remote-secret-cluster3.yaml [email protected]:/root

cluster1:
Apply the remote API server secrets
kubectl apply -f remote-secret-cluster2.yaml
kubectl apply -f remote-secret-cluster3.yaml


cluster2:
Install cluster2
istioctl install -f cluster2.yaml
Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster2 --network network2 | istioctl install -y -f -


Assign the east-west gateway IP
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.101"]}}'

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml


cluster3:
Install cluster3
istioctl install -f cluster3.yaml


cluster1:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster2:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster3:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

Verification

cluster1:
[root@node01 twonetwork]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-655b44b5cc-t65q5.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
details-v1-7464b47bb-4bl2x.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
details-v1-7f76bd59b7-qkqph.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
istio-eastwestgateway-6c54ff57f4-4n5tf.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-698b966cd5-hz9wr 1.11.2
istio-eastwestgateway-6cd4bf6996-xzhtp.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-698b966cd5-hz9wr 1.11.2
istio-egressgateway-546599b588-bqzbs.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-698b966cd5-hz9wr 1.11.2
istio-egressgateway-57d5564758-kqgbk.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-698b966cd5-hz9wr 1.11.2
istio-egressgateway-9d65f86fb-trbfm.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-698b966cd5-hz9wr 1.11.2
istio-ingressgateway-58c9f5d786-2vs5s.istio-system SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
istio-ingressgateway-5d97f85b98-5svwd.istio-system SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
istio-ingressgateway-84db9bb88-jscqs.istio-system SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
productpage-v1-647485fbf9-d4q8w.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
productpage-v1-864958696b-fbdc2.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
productpage-v1-86955ff989-4ckp9.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
ratings-v1-6b4f9cbd9c-7lk2p.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
ratings-v1-6df66b6b9f-zrk7v.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
ratings-v1-846f9d5898-fzv2c.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
reviews-v1-5b5d8475c5-m2snt.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
reviews-v1-7756f87fb6-dm8vw.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
reviews-v1-d77995db9-gmfbq.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
reviews-v2-b58b5c6f9-f5w8l.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
reviews-v2-d7cb7877d-pxvtg.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
reviews-v2-d8dcb445-gb62h.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
reviews-v3-59576f889-c6sbd.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
reviews-v3-5fb585c9db-4hzrj.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2
reviews-v3-df4597ff-74vdb.istio SYNCED SYNCED SYNCED SYNCED istiod-698b966cd5-hz9wr 1.11.2


[root@node01 twonetwork]# istioctl pc endpoint -n istio productpage-v1-647485fbf9-d4q8w|grep productpage
172.20.1.35:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.101:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio productpage-v1-864958696b-fbdc2|grep productpage
172.21.0.91:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.0.78:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster3:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio productpage-v1-86955ff989-4ckp9|grep productpage
172.21.0.91:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.0.78:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

The proxies of all three clusters connect to cluster1's istiod. cluster1 sees two productpage endpoints, one of which is cluster2's east-west gateway address; there are two rather than three because cluster2 and cluster3 share a single east-west gateway. cluster2 sees three endpoints, one of which is cluster1's east-west gateway address; since cluster2 and cluster3 are on the same network, they reach each other directly by pod IP. cluster3's endpoints look the same as cluster2's.

Deploy the bookinfo Gateway and VirtualService

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

cluster3:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

Access:

http://192.168.229.137:32498/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

http://192.168.229.140:31614/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

http://192.168.229.143:32050/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

This shows the multi-cluster configuration works: the productpage Envoy cluster has endpoints in all three clusters and requests are distributed across them in round-robin fashion.
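To watch the round-robin distribution from outside the mesh, a short curl loop against one ingress gateway is enough. This is a minimal sketch that assumes the NodePort 32498 used above; adjust it to your own environment:

# send 10 requests to cluster1's ingress gateway; every request should return HTTP 200
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" http://192.168.229.137:32498/productpage
done

While the loop runs, tailing the productpage logs in each cluster (as above) should show all three clusters taking a share of the traffic.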

Cleanup:

cluster1:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete secret istio-remote-secret-cluster2 -n istio-system

kubectl delete secret istio-remote-secret-cluster3 -n istio-system

kubectl delete gw cross-network-gateway -n istio-system

kubectl delete gw istiod-gateway -n istio-system

kubectl delete vs istiod-vs -n istio-system

istioctl x uninstall -f cluster1.yaml

reboot

cluster2:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete gw cross-network-gateway -n istio-system

istioctl x uninstall -f cluster2.yaml

reboot

cluster3:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

istioctl x uninstall -f cluster3.yaml

reboot

Three east-west gateways

istio多集群探秘,部署了50次多集群后我得出的结论_bc_13

Overview:

cluster1 is in network1; cluster2 and cluster3 are in network2. cluster1 has an istiod; cluster2 and cluster3 do not. cluster1's proxies connect to their own istiod directly, while the proxies in cluster2 and cluster3 reach cluster1's istiod through cluster1's east-west gateway. cluster1's istiod watches all three clusters. Services in cluster1 reach cluster2's services through cluster2's east-west gateway and cluster3's services through cluster3's east-west gateway; services in cluster2 and cluster3 reach each other directly.

Two networks
Three east-west gateways

Cluster 1
137,138,139
Cluster 2
140,141,142
Cluster 3
143,144,145

Connect the cluster2 and cluster3 networks
140,141,142
route add -net 172.22.2.0 netmask 255.255.255.0 gw 192.168.229.145
route add -net 172.22.0.0 netmask 255.255.255.0 gw 192.168.229.144
route add -net 172.22.1.0 netmask 255.255.255.0 gw 192.168.229.143

route add -net 10.70.0.0 netmask 255.255.0.0 gw 192.168.229.143


143,144,145
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.142
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.141
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.140

route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.140
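These static routes make the pod and service CIDRs of cluster2 and cluster3 mutually reachable. Before installing istio it is worth a quick sanity check from a node; the pod IP below is only an example, substitute a real one shown by kubectl get pod -o wide in the other cluster:

# on a cluster2 node (140/141/142): a cluster3 pod IP should answer
ping -c 3 172.22.1.10
# and the new routes should be visible in the routing table
ip route | grep -E '172\.22|10\.70'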

Label the istio-system namespace
cluster1:
kubectl label namespace istio-system topology.istio.io/network=network1

cluster2:
kubectl label namespace istio-system topology.istio.io/network=network2

cluster3:
kubectl label namespace istio-system topology.istio.io/network=network2

Generate the IstioOperator deployment file
cluster1:
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      imagePullPolicy: IfNotPresent
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Here I set cluster1's east-west gateway IP to 192.168.229.100.
If you are using a LoadBalancer, you can get the address with the command below
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and then substitute it for remotePilotAddress.

Generate the IstioOperator deployment file
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      imagePullPolicy: IfNotPresent
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
      remotePilotAddress: 192.168.229.100
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Here I set cluster1's east-west gateway IP to 192.168.229.100.
If you are using a LoadBalancer, you can get the address with the command below
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and then substitute it for remotePilotAddress.

Generate the IstioOperator deployment file
cat <<EOF > cluster3.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      imagePullPolicy: IfNotPresent
      meshID: mesh1
      multiCluster:
        clusterName: cluster3
      network: network2
      remotePilotAddress: 192.168.229.100
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster2
scp cluster2.yaml [email protected]:/root
Copy the deployment file to cluster3
scp cluster3.yaml [email protected]:/root

Install istio on cluster1
istioctl install -f cluster1.yaml
Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 | istioctl install -y -f -


Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.100"]}}'
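Because this lab assigns the gateway address through externalIPs instead of a cloud LoadBalancer, confirm that the Service actually carries the IP and exposes the cross-network port 15443 before continuing; a minimal check:

# EXTERNAL-IP should show 192.168.229.100 and the port list should include 15443
kubectl get svc istio-eastwestgateway -n istio-system
kubectl get pod -n istio-system -l istio=eastwestgateway -o wide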

Expose istiod
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-istiod.yaml
Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml
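expose-services.yaml is what makes cross-network traffic possible: it attaches a Gateway to the east-west gateway that auto-passes TLS through on port 15443, which is why the remote endpoints shown by istioctl pc endpoint are all of the form <gateway-ip>:15443. As of Istio 1.11 the sample looks roughly like the sketch below; check the file shipped with your release before relying on it:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
  - port:
      number: 15443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"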


cluster2:
Generate the secret that lets istiod watch this cluster's apiserver
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.140:6443 > remote-secret-cluster2.yaml
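The generated file is an ordinary Kubernetes Secret whose payload is a kubeconfig pointing at cluster2's apiserver; cluster1's istiod uses it to list-watch cluster2. A hedged sanity check on the file (the istio-remote-secret name and the cluster annotation shown in the comment are what istioctl 1.11 produces, as far as I can tell):

# expect a Secret named istio-remote-secret-cluster2 carrying the cluster-name annotation
grep -E 'kind: Secret|istio-remote-secret|networking.istio.io/cluster' remote-secret-cluster2.yaml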
Copy the secret to cluster1
scp remote-secret-cluster2.yaml [email protected]:/root


cluster3:
Generate the secret that lets istiod watch this cluster's apiserver
istioctl x create-remote-secret --name=cluster3 --server=https://192.168.229.143:6443 > remote-secret-cluster3.yaml

Copy the secret to cluster1
scp remote-secret-cluster3.yaml [email protected]:/root

cluster1:
Apply the apiserver secrets
kubectl apply -f remote-secret-cluster2.yaml
kubectl apply -f remote-secret-cluster3.yaml
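Once applied, the remote secrets sit in istio-system with the istio/multiCluster label, which is what istiod watches to start syncing the remote apiservers; a quick check (a sketch, assuming the label istioctl 1.11 puts on these secrets):

kubectl get secret -n istio-system -l istio/multiCluster=true
# expected: istio-remote-secret-cluster2 and istio-remote-secret-cluster3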


cluster2:
Install istio on cluster2
istioctl install -f cluster2.yaml
Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster2 --network network2 | istioctl install -y -f -


Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.101"]}}'

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml


cluster3:
Install istio on cluster3
istioctl install -f cluster3.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster3 --network network2 | istioctl install -y -f -


Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.102"]}}'

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml


cluster1:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster2:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster3:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

Verification:

cluster1:
[root@node01 twonetworkthreegateway]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-6b4b5c8bb4-qb8wz.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
details-v1-76dfcb7885-k97p9.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
details-v1-7b6c945f64-jjnn6.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
istio-eastwestgateway-64f8d96dd8-swdfp.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-c7bfddd65-fxgq4 1.11.2
istio-eastwestgateway-687cf775f4-smwjc.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-c7bfddd65-fxgq4 1.11.2
istio-eastwestgateway-76b4f4b7d5-rg4wm.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-c7bfddd65-fxgq4 1.11.2
istio-egressgateway-66bdf6b5d9-pgkx4.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-c7bfddd65-fxgq4 1.11.2
istio-egressgateway-6bc464c8d8-zxx4g.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-c7bfddd65-fxgq4 1.11.2
istio-egressgateway-7c46778d97-jdsjx.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-c7bfddd65-fxgq4 1.11.2
istio-ingressgateway-5689c54c7b-sgb59.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-c7bfddd65-fxgq4 1.11.2
istio-ingressgateway-76c7fd4dff-6kvlx.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-c7bfddd65-fxgq4 1.11.2
istio-ingressgateway-7f55cc9f77-4vzkk.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-c7bfddd65-fxgq4 1.11.2
productpage-v1-67fd8f54c9-fspdz.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
productpage-v1-6dd84945cb-fxczj.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
productpage-v1-cccdbd8f6-29892.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
ratings-v1-5cdc447c97-cnfr6.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
ratings-v1-6dd6959cb8-sq72s.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
ratings-v1-858bdc68d-2rffj.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
reviews-v1-6546f5967b-ksxzt.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
reviews-v1-7d89469cf6-fp9hg.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
reviews-v1-7f9844cc85-tp797.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
reviews-v2-5998778d96-2tj9v.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
reviews-v2-759b796d5c-pkxpl.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
reviews-v2-778645c964-hxdxb.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
reviews-v3-5888845bb5-chrql.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
reviews-v3-69b7957588-xnw2v.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2
reviews-v3-c78bc98bd-crbmt.istio SYNCED SYNCED SYNCED SYNCED istiod-c7bfddd65-fxgq4 1.11.2

[root@node01 twonetworkthreegateway]# istioctl pc endpoint -n istio-system istio-ingressgateway-5689c54c7b-sgb59|grep productpage
172.20.1.93:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.101:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.102:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-7f55cc9f77-4vzkk|grep productpage
172.21.2.150:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.1.106:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster3:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-76c7fd4dff-6kvlx|grep productpage
172.21.2.150:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.1.106:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

The proxies in all three clusters connect to cluster1's istiod: cluster1's proxies connect directly, while cluster2 and cluster3 connect through cluster1's east-west gateway. Two of cluster1's productpage endpoints are the east-west gateway addresses of cluster2 and cluster3; one of cluster2's productpage endpoints is the address of cluster1's east-west gateway, and cluster3 looks similar. In other words, services in cluster2 and cluster3 talk to each other directly, while traffic to and from cluster1 goes through the east-west gateways.
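To convince yourself that the 192.168.229.x:15443 endpoints really are the peer east-west gateways, compare them with the gateway Services of the other clusters; a small sketch (run each command with the kubeconfig of the cluster named in the comment):

# on cluster2: should print 192.168.229.101, matching the endpoint cluster1 sees
kubectl get svc istio-eastwestgateway -n istio-system -o jsonpath='{.spec.externalIPs[0]}{"\n"}'
# on cluster3: should print 192.168.229.102
kubectl get svc istio-eastwestgateway -n istio-system -o jsonpath='{.spec.externalIPs[0]}{"\n"}'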

Deploy the bookinfo Gateway and VirtualService

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

cluster3:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

Access:

http://192.168.229.137:32498/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

http://192.168.229.140:31614/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

http://192.168.229.143:32050/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

This shows the multi-cluster configuration works: the productpage Envoy cluster has endpoints in all three clusters and requests are distributed across them in round-robin fashion.

Cleanup:

cluster1:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete secret istio-remote-secret-cluster2 -n istio-system

kubectl delete secret istio-remote-secret-cluster3 -n istio-system

kubectl delete gw cross-network-gateway -n istio-system

kubectl delete gw istiod-gateway -n istio-system

kubectl delete vs istiod-vs -n istio-system

istioctl x uninstall -f cluster1.yaml

reboot

cluster2:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete gw cross-network-gateway -n istio-system

istioctl x uninstall -f cluster2.yaml

reboot

cluster3:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete gw cross-network-gateway -n istio-system

istioctl x uninstall -f cluster3.yaml

reboot

Three networks

istio多集群探秘,部署了50次多集群后我得出的结论_云原生_14

Overview:

cluster1 is in network1, cluster2 in network2, and cluster3 in network3. cluster1 has an istiod; cluster2 and cluster3 do not. cluster1's proxies connect to the istiod inside their own cluster, while the proxies in cluster2 and cluster3 connect to cluster1's istiod through the east-west gateway. cluster1's istiod watches all three clusters, and the services of the three clusters reach each other through the east-west gateways.

Three networks
Cluster 1
137,138,139
Cluster 2
140,141,142
Cluster 3
143,144,145

Label the istio-system namespace
cluster1:
kubectl label namespace istio-system topology.istio.io/network=network1

cluster2:
kubectl label namespace istio-system topology.istio.io/network=network2

cluster3:
kubectl label namespace istio-system topology.istio.io/network=network3
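The topology.istio.io/network label on istio-system is one of the signals the control plane uses to decide whether two workloads can talk pod-to-pod or must go through an east-west gateway, so double-check it before installing; a small sketch:

# should print network1 / network2 / network3 depending on the cluster
kubectl get namespace istio-system -o jsonpath='{.metadata.labels.topology\.istio\.io/network}{"\n"}'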


cluster1:
Generate the IstioOperator deployment file
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Here I set cluster1's east-west gateway IP to 192.168.229.100.
If you are using a LoadBalancer, you can get the address with the command below
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and then substitute it for remotePilotAddress.

Generate the IstioOperator deployment file
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
      remotePilotAddress: 192.168.229.100
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Here I set cluster1's east-west gateway IP to 192.168.229.100.
If you are using a LoadBalancer, you can get the address with the command below
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and then substitute it for remotePilotAddress.

Generate the IstioOperator deployment file
cat <<EOF > cluster3.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster3
      network: network3
      remotePilotAddress: 192.168.229.100
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

传输部署文件到cluster2
scp cluster2.yaml [email protected]:/root

传输部署文件到cluster3
scp cluster3.yaml [email protected]:/root

Install istio
istioctl install -f cluster1.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 | istioctl install -y -f -

Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.100"]}}'

Expose istiod
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-istiod.yaml
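Since cluster2 and cluster3 have no local control plane, their sidecars must reach cluster1's istiod through this east-west gateway, and expose-istiod.yaml provides that path. Its object names (istiod-gateway, istiod-vs) match the cleanup commands later in this article; as of Istio 1.11 the sample looks roughly like the abridged sketch below, so consult the file shipped with your release for the exact content:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istiod-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
  - port:
      number: 15012
      name: tls-istiod
      protocol: tls
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
# plus a second server on 15017 for the injection webhook, and the istiod-vs
# VirtualService that routes both ports to istiod.istio-system.svc.cluster.local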

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml

cluster2:
Generate the secret for apiserver access
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.140:6443 > remote-secret-cluster2.yaml

Copy the secret to cluster1
scp remote-secret-cluster2.yaml [email protected]:/root

cluster3:
Generate the secret for apiserver access
istioctl x create-remote-secret --name=cluster3 --server=https://192.168.229.143:6443 > remote-secret-cluster3.yaml

Copy the secret to cluster1
scp remote-secret-cluster3.yaml [email protected]:/root

cluster1:
Apply the secrets
kubectl apply -f remote-secret-cluster2.yaml
kubectl apply -f remote-secret-cluster3.yaml

cluster2:
Install istio
istioctl install -f cluster2.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster2 --network network2 | istioctl install -y -f -

Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.101"]}}'

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml


cluster3:
Install istio
istioctl install -f cluster3.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster3 --network network3 | istioctl install -y -f -

Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.102"]}}'

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml

cluster1:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster2:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster3:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

Verification

cluster1:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-68ffb7845c-wfc8s.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
details-v1-75fcf458c7-l5mk2.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
details-v1-8568696cf5-gc454.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
istio-eastwestgateway-6d99d5cf57-4rmvl.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-598cfb4f7b-2wsw2 1.11.2
istio-eastwestgateway-756fbb795d-fmcrr.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-598cfb4f7b-2wsw2 1.11.2
istio-eastwestgateway-887c5f6bf-vtns7.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-598cfb4f7b-2wsw2 1.11.2
istio-egressgateway-68c99f8bb6-8j9qx.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-598cfb4f7b-2wsw2 1.11.2
istio-egressgateway-7b9bd57b99-52hpq.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-598cfb4f7b-2wsw2 1.11.2
istio-egressgateway-84d6945467-jbkjt.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-598cfb4f7b-2wsw2 1.11.2
istio-ingressgateway-7c8d87d5f5-5pwbz.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-598cfb4f7b-2wsw2 1.11.2
istio-ingressgateway-856cbc54b9-bcr28.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-598cfb4f7b-2wsw2 1.11.2
istio-ingressgateway-b594544d4-hjsrf.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-598cfb4f7b-2wsw2 1.11.2
productpage-v1-596f75689-r6glq.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
productpage-v1-7bd457ff8c-9rdkp.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
productpage-v1-7fd5d8dc87-5z4t7.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
ratings-v1-5f58f859f-lmxts.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
ratings-v1-7cb488c88f-p2r4v.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
ratings-v1-84db8675f7-4tjhz.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
reviews-v1-5d94854d9-497k9.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
reviews-v1-6bdffd75c5-hxtht.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
reviews-v1-b957b7485-kgdsb.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
reviews-v2-57d55fccc-pm4sn.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
reviews-v2-69d544fd69-5z5kn.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
reviews-v2-6cf5595767-4gv7b.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
reviews-v3-566b69584c-gpnfz.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
reviews-v3-794f468c5c-kxtx7.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2
reviews-v3-7f579546d9-cbf9n.istio SYNCED SYNCED SYNCED SYNCED istiod-598cfb4f7b-2wsw2 1.11.2

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-856cbc54b9-bcr28|grep productpage
172.20.2.108:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.101:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.102:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-b594544d4-hjsrf|grep productpage
172.21.0.145:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.102:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster3:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-7c8d87d5f5-5pwbz |grep productpage
172.22.0.133:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.101:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

All proxies in the three clusters connect to cluster1's istiod: cluster1's connect directly, while cluster2 and cluster3 connect through cluster1's east-west gateway. Two of cluster1's productpage endpoints are the east-west gateway addresses of cluster2 and cluster3, and cluster2 and cluster3 look similar, because none of the three networks can reach each other directly.

Deploy the bookinfo Gateway and VirtualService

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

cluster3:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

Access:

http://192.168.229.137:32498/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

http://192.168.229.140:31614/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

http://192.168.229.143:32050/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

This shows the multi-cluster configuration works: the productpage Envoy cluster has endpoints in all three clusters and requests are distributed across them in round-robin fashion.

Cleanup:

cluster1:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete secret istio-remote-secret-cluster2 -n istio-system

kubectl delete secret istio-remote-secret-cluster3 -n istio-system

kubectl delete gw cross-network-gateway -n istio-system

kubectl delete gw istiod-gateway -n istio-system

kubectl delete vs istiod-vs -n istio-system

istioctl x uninstall -f cluster1.yaml

reboot

cluster2:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete gw cross-network-gateway -n istio-system

istioctl x uninstall -f cluster2.yaml

reboot

cluster3:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete gw cross-network-gateway -n istio-system

istioctl x uninstall -f cluster3.yaml

reboot

Two control planes

Single network

istio多集群探秘,部署了50次多集群后我得出的结论_bc_15

Overview:

cluster1, cluster2 and cluster3 are all in the same network. cluster1 and cluster2 each have an istiod; cluster3 uses cluster2's istiod. Both istiods watch all three clusters. The services of the three clusters connect to each other directly, and cluster3's proxies reach cluster2's istiod through cluster2's east-west gateway.

cluster1 has its own control plane
cluster2 and cluster3 share a control plane

The three networks are connected
Cluster 1
137,138,139
Cluster 2
140,141,142
Cluster 3
143,144,145

Connect the networks
137,138,139
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.142
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.141
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.140

route add -net 172.22.2.0 netmask 255.255.255.0 gw 192.168.229.145
route add -net 172.22.0.0 netmask 255.255.255.0 gw 192.168.229.144
route add -net 172.22.1.0 netmask 255.255.255.0 gw 192.168.229.143

route add -net 10.70.0.0 netmask 255.255.0.0 gw 192.168.229.143
route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.140

140,141,142
route add -net 172.20.2.0 netmask 255.255.255.0 gw 192.168.229.139
route add -net 172.20.0.0 netmask 255.255.255.0 gw 192.168.229.138
route add -net 172.20.1.0 netmask 255.255.255.0 gw 192.168.229.137

route add -net 172.22.2.0 netmask 255.255.255.0 gw 192.168.229.145
route add -net 172.22.0.0 netmask 255.255.255.0 gw 192.168.229.144
route add -net 172.22.1.0 netmask 255.255.255.0 gw 192.168.229.143

route add -net 10.70.0.0 netmask 255.255.0.0 gw 192.168.229.143
route add -net 10.68.0.0 netmask 255.255.0.0 gw 192.168.229.137


143,144,145
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.142
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.141
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.140

route add -net 172.20.2.0 netmask 255.255.255.0 gw 192.168.229.139
route add -net 172.20.0.0 netmask 255.255.255.0 gw 192.168.229.138
route add -net 172.20.1.0 netmask 255.255.255.0 gw 192.168.229.137

route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.140
route add -net 10.68.0.0 netmask 255.255.0.0 gw 192.168.229.137
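With these routes in place every pod CIDR is reachable from every cluster, which is exactly what the single-network model relies on. A hedged spot check from cluster1 (productpage-v1 is the bookinfo deployment used throughout this article; the target IP is only an example, substitute a productpage pod IP from cluster2):

# exec into a cluster1 sidecar and hit a cluster2 productpage pod directly on 9080
kubectl exec -n istio deploy/productpage-v1 -c istio-proxy -- curl -sI http://172.21.1.154:9080/productpage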


cluster1:
Generate the IstioOperator deployment file
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF


Generate the IstioOperator deployment file
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster2
scp cluster2.yaml [email protected]:/root



Here I set cluster2's east-west gateway IP to 192.168.229.101.
If you are using a LoadBalancer, you can get the address with the command below
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and then substitute it for remotePilotAddress.

Generate the IstioOperator deployment file
cat <<EOF > cluster3.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster3
      network: network1
      remotePilotAddress: 192.168.229.101
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster3
scp cluster3.yaml [email protected]:/root


cluster1:
Generate the secret for apiserver access
istioctl x create-remote-secret --name=cluster1 --server=https://192.168.229.137:6443 > remote-secret-cluster1.yaml

Copy the secret to cluster2
scp remote-secret-cluster1.yaml [email protected]:/root

cluster2:
Generate the secret for apiserver access
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.140:6443 > remote-secret-cluster2.yaml


Copy the secret to cluster1
scp remote-secret-cluster2.yaml [email protected]:/root

cluster3:
Generate the secret for apiserver access
istioctl x create-remote-secret --name=cluster3 --server=https://192.168.229.143:6443 > remote-secret-cluster3.yaml

Copy the secret to cluster1
scp remote-secret-cluster3.yaml [email protected]:/root
Copy the secret to cluster2
scp remote-secret-cluster3.yaml [email protected]:/root

cluster1:
Apply the secrets
kubectl apply -f remote-secret-cluster2.yaml
kubectl apply -f remote-secret-cluster3.yaml

Install istio
istioctl install -f cluster1.yaml

cluster2:
Apply the secrets
kubectl apply -f remote-secret-cluster1.yaml
kubectl apply -f remote-secret-cluster3.yaml

Install istio
istioctl install -f cluster2.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster2 --network network1 | istioctl install -y -f -

Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.101"]}}'

Expose istiod
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-istiod.yaml


cluster3:
Install istio
istioctl install -f cluster3.yaml

cluster1:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster2:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster3:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

Verification

cluster1:
[root@node01 samenetwork]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-7b7dc8c8bb-nxdz2.istio SYNCED SYNCED SYNCED SYNCED istiod-679bfd4b75-smfgf 1.11.2
istio-egressgateway-54fc8b8886-74thh.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-679bfd4b75-smfgf 1.11.2
istio-ingressgateway-69f9bb7558-6jjk4.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-679bfd4b75-smfgf 1.11.2
productpage-v1-65dfc8b75d-kh6p5.istio SYNCED SYNCED SYNCED SYNCED istiod-679bfd4b75-smfgf 1.11.2
ratings-v1-56b76cc558-bs4gl.istio SYNCED SYNCED SYNCED SYNCED istiod-679bfd4b75-smfgf 1.11.2
reviews-v1-867d698c46-ttpph.istio SYNCED SYNCED SYNCED SYNCED istiod-679bfd4b75-smfgf 1.11.2
reviews-v2-74f9ddbdf8-r95rp.istio SYNCED SYNCED SYNCED SYNCED istiod-679bfd4b75-smfgf 1.11.2
reviews-v3-7578ffd89d-kw4cn.istio SYNCED SYNCED SYNCED SYNCED istiod-679bfd4b75-smfgf 1.11.2

[root@node01 samenetwork]# istioctl pc endpoint -n istio-system istio-ingressgateway-69f9bb7558-6jjk4|grep productpage
172.20.1.121:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.1.154:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.2.133:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-74749b4689-kd69n.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2
details-v1-75cc4f5f59-5c7hb.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2
istio-eastwestgateway-f466d6c87-rvth7.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-84b444d5b9-rgj94 1.11.2
istio-egressgateway-5cd694544b-bwl8t.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-84b444d5b9-rgj94 1.11.2
istio-egressgateway-76d745cb77-fv86p.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-84b444d5b9-rgj94 1.11.2
istio-ingressgateway-6488f9dd78-d2x4b.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-84b444d5b9-rgj94 1.11.2
istio-ingressgateway-699d874f89-k6lhc.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-84b444d5b9-rgj94 1.11.2
productpage-v1-75fdbcb5bf-tdpp7.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2
productpage-v1-86999f8c7f-xnj4d.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2
ratings-v1-7d8cf65c4f-h28qz.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2
ratings-v1-fdb4848ff-hs2lz.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2
reviews-v1-5549dfc67-sbc5z.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2
reviews-v1-5c4f4db57c-5gqfb.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2
reviews-v2-68cbd8df77-sqt6r.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2
reviews-v2-69745c659f-fkdbc.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2
reviews-v3-545b56f8b5-hl2nl.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2
reviews-v3-85dfd86cc9-vl6bm.istio SYNCED SYNCED SYNCED SYNCED istiod-84b444d5b9-rgj94 1.11.2

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-6488f9dd78-d2x4b | grep productpage
172.20.1.121:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.1.154:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.2.133:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster3:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-699d874f89-k6lhc|grep productpage
172.20.1.121:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.1.154:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.2.133:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

Because this is a single network, every cluster's endpoints are plain pod IPs. cluster1's proxies connect to cluster1's istiod, while the proxies in cluster2 and cluster3 connect to cluster2's istiod.
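Because every endpoint here is a plain pod IP, each one can be traced back to a concrete pod in one of the clusters, which is a nice way to confirm that istiod has merged the three registries; a small sketch (172.21.x.x belongs to cluster2's pod network in this lab, so run the lookup there):

kubectl get pod -A -o wide | grep 172.21.1.154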

Deploy the bookinfo Gateway and VirtualService

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

cluster3:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

Access:

http://192.168.229.137:32498/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

http://192.168.229.140:31614/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

http://192.168.229.143:32050/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

This shows the multi-cluster configuration works: the productpage Envoy cluster has endpoints in all three clusters and requests are distributed across them in round-robin fashion.

Cleanup:

cluster1:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete secret istio-remote-secret-cluster2 -n istio-system

kubectl delete secret istio-remote-secret-cluster3 -n istio-system

istioctl x uninstall -f cluster1.yaml

reboot

cluster2:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete gw istiod-gateway -n istio-system

kubectl delete vs istiod-vs -n istio-system

istioctl x uninstall -f cluster2.yaml

reboot

cluster3:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

istioctl x uninstall -f cluster3.yaml

reboot

Two networks

istio多集群探秘,部署了50次多集群后我得出的结论_2d_16

Overview:

cluster1 is in network1; cluster2 and cluster3 are both in network2. cluster1 has an istiod and cluster3 has an istiod; cluster2 uses cluster3's istiod. Each istiod watches the apiservers of all the clusters it is given secrets for. The proxies in cluster1 and cluster3 connect directly to the istiod inside their own cluster, while cluster2's proxies reach cluster3's istiod through cluster3's east-west gateway. Services in cluster1 reach the services of cluster2 and cluster3 through cluster3's east-west gateway, and services in cluster2 and cluster3 reach cluster1's services through cluster1's east-west gateway.

cluster1 has its own control plane and is in a different network from the other clusters
cluster2 and cluster3 share a control plane (the istiod runs in cluster3) and are in the same network

Cluster 1
137,138,139
Cluster 2
140,141,142
Cluster 3
143,144,145


Connect the cluster2 and cluster3 networks

140,141,142
route add -net 172.22.2.0 netmask 255.255.255.0 gw 192.168.229.145
route add -net 172.22.0.0 netmask 255.255.255.0 gw 192.168.229.144
route add -net 172.22.1.0 netmask 255.255.255.0 gw 192.168.229.143

route add -net 10.70.0.0 netmask 255.255.0.0 gw 192.168.229.143


143,144,145
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.142
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.141
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.140

route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.140


Label the istio-system namespace
cluster1:
kubectl label namespace istio-system topology.istio.io/network=network1

cluster2:
kubectl label namespace istio-system topology.istio.io/network=network2

cluster3:
kubectl label namespace istio-system topology.istio.io/network=network2

cluster1:
Generate the IstioOperator deployment file
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Here I set cluster3's east-west gateway IP to 192.168.229.102.
If you are using a LoadBalancer, you can get the address with the command below
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and then substitute it for remotePilotAddress.

Generate the IstioOperator deployment file
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
      remotePilotAddress: 192.168.229.102
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster2
scp cluster2.yaml [email protected]:/root


Generate the IstioOperator deployment file
cat <<EOF > cluster3.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster3
      network: network2
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster3
scp cluster3.yaml [email protected]:/root


cluster1:
Generate the secret for apiserver access
istioctl x create-remote-secret --name=cluster1 --server=https://192.168.229.137:6443 > remote-secret-cluster1.yaml

Copy the secret to cluster3
scp remote-secret-cluster1.yaml [email protected]:/root

cluster2:
Generate the secret for apiserver access
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.140:6443 > remote-secret-cluster2.yaml

Copy the secret to cluster1
scp remote-secret-cluster2.yaml [email protected]:/root

Copy the secret to cluster3
scp remote-secret-cluster2.yaml [email protected]:/root

cluster3:

Generate the secret for apiserver access
istioctl x create-remote-secret --name=cluster3 --server=https://192.168.229.143:6443 > remote-secret-cluster3.yaml

Copy the secret to cluster1
scp remote-secret-cluster3.yaml [email protected]:/root


cluster1:
Apply the secrets
kubectl apply -f remote-secret-cluster2.yaml
kubectl apply -f remote-secret-cluster3.yaml

Install istio
istioctl install -f cluster1.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 | istioctl install -y -f -

Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.100"]}}'

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml


cluster3:
Apply the secrets
kubectl apply -f remote-secret-cluster1.yaml
kubectl apply -f remote-secret-cluster2.yaml

Install istio
istioctl install -f cluster3.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster3 --network network2 | istioctl install -y -f -

Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.102"]}}'

Expose istiod
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-istiod.yaml

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml


cluster2:
Install istio
istioctl install -f cluster2.yaml

cluster1:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster2:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster3:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

Verification:

cluster1:
[root@node01 twonetwork]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-675557b7-hds7p.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7778496d-fpr8q 1.11.2
istio-eastwestgateway-6487d7fffc-4cth4.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-5f7778496d-fpr8q 1.11.2
istio-egressgateway-68d5f4844c-ld6p6.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-5f7778496d-fpr8q 1.11.2
istio-ingressgateway-745f974c97-fn2dk.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-5f7778496d-fpr8q 1.11.2
productpage-v1-8749f4f5c-lnm9b.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7778496d-fpr8q 1.11.2
ratings-v1-d67575495-5wmzh.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7778496d-fpr8q 1.11.2
reviews-v1-66f57b46f9-477fq.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7778496d-fpr8q 1.11.2
reviews-v2-5f4b549bc5-sthc7.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7778496d-fpr8q 1.11.2
reviews-v3-886678bb7-lxnkd.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7778496d-fpr8q 1.11.2

[root@node01 twonetwork]# istioctl pc endpoint -n istio-system istio-ingressgateway-745f974c97-fn2dk|grep productpage
172.20.1.129:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.102:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-7fd7d75c6c-b7dhg|grep productpage
172.21.2.179:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.2.146:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster3:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-6b8f559ddf-pl6jb.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2
details-v1-784469d4f-w55n5.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2
istio-eastwestgateway-8559fb88f-q75jv.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-79cf7d6f5b-gf9nn 1.11.2
istio-egressgateway-658547cf44-x8hkx.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-79cf7d6f5b-gf9nn 1.11.2
istio-egressgateway-d766c8975-5b8ft.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-79cf7d6f5b-gf9nn 1.11.2
istio-ingressgateway-58dbfdfcdd-wjxvw.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-79cf7d6f5b-gf9nn 1.11.2
istio-ingressgateway-7fd7d75c6c-b7dhg.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-79cf7d6f5b-gf9nn 1.11.2
productpage-v1-7747df867-9qqck.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2
productpage-v1-c7d79d567-blvnk.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2
ratings-v1-6666868bd5-t74kl.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2
ratings-v1-747c59fc97-mfhpt.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2
reviews-v1-7675754b57-qrrfc.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2
reviews-v1-7d6f95f5f7-szjt7.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2
reviews-v2-567766cbb9-hm452.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2
reviews-v2-6f49b9d85c-lcskj.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2
reviews-v3-764bd6d485-9v8cx.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2
reviews-v3-7775f69cf7-6wn5n.istio SYNCED SYNCED SYNCED SYNCED istiod-79cf7d6f5b-gf9nn 1.11.2

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-58dbfdfcdd-wjxvw|grep productpage
172.21.2.179:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.2.146:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster1's proxies connect to cluster1's istiod, while the proxies in cluster2 and cluster3 connect to cluster3's istiod. cluster1's productpage has two endpoints, whereas cluster2 and cluster3 have three, because cluster2 and cluster3 share a network.

For cluster1, one endpoint is the local pod IP and the other is the address of cluster3's east-west gateway. For cluster2 and cluster3, two endpoints are pod IPs and the third is the address of cluster1's east-west gateway.

Deploy the bookinfo Gateway and VirtualService

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

cluster3:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

Access:

http://192.168.229.137:32498/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

http://192.168.229.140:31614/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

http://192.168.229.143:32050/productpage

Check the logs:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

logs appear

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

logs appear

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

logs appear

This shows the multi-cluster configuration works: the productpage Envoy cluster has endpoints in all three clusters and requests are distributed across them in round-robin fashion.

Cleanup:

cluster1:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete secret istio-remote-secret-cluster2 -n istio-system

kubectl delete secret istio-remote-secret-cluster3 -n istio-system

kubectl delete gw cross-network-gateway -n istio-system

istioctl x uninstall -f cluster1.yaml

reboot

cluster2:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

istioctl x uninstall -f cluster2.yaml

reboot

cluster3:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete gw istiod-gateway -n istio-system

kubectl delete vs istiod-vs -n istio-system

kubectl delete secret istio-remote-secret-cluster1 -n istio-system

kubectl delete secret istio-remote-secret-cluster2 -n istio-system

kubectl delete gw cross-network-gateway -n istio-system

istioctl x uninstall -f cluster3.yaml

reboot

Three networks

istio多集群探秘,部署了50次多集群后我得出的结论_运维_17

Overview:

None of the three clusters share a network. cluster1 and cluster2 each have an istiod; cluster3 uses cluster2's istiod. cluster3's proxies reach cluster2's istiod through cluster2's east-west gateway. Because no two clusters share a network, every cluster reaches the services of the other clusters through their east-west gateways.

cluster1 has its own control plane and is in a different network from the other clusters
cluster2 and cluster3 share a control plane (the istiod runs in cluster2) and are in different networks

Cluster 1
137,138,139
Cluster 2
140,141,142
Cluster 3
143,144,145

Label the istio-system namespace
cluster1:
kubectl label namespace istio-system topology.istio.io/network=network1

cluster2:
kubectl label namespace istio-system topology.istio.io/network=network2

cluster3:
kubectl label namespace istio-system topology.istio.io/network=network3

cluster1:
Generate the IstioOperator deployment file
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Generate the IstioOperator deployment file
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster2
scp cluster2.yaml [email protected]:/root

Here I set cluster2's east-west gateway IP to 192.168.229.101.
If you are using a LoadBalancer, you can get the address with the command below
# export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
and then substitute it for remotePilotAddress.

Generate the IstioOperator deployment file
cat <<EOF > cluster3.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster3
      network: network3
      remotePilotAddress: 192.168.229.101
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

Copy the deployment file to cluster3
scp cluster3.yaml [email protected]:/root


cluster1:
Generate the secret for apiserver access
istioctl x create-remote-secret --name=cluster1 --server=https://192.168.229.137:6443 > remote-secret-cluster1.yaml

Copy the secret to cluster2
scp remote-secret-cluster1.yaml [email protected]:/root

cluster2:
Generate the secret for apiserver access
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.140:6443 > remote-secret-cluster2.yaml

Copy the secret to cluster1
scp remote-secret-cluster2.yaml [email protected]:/root


cluster3:
Generate the secret for apiserver access
istioctl x create-remote-secret --name=cluster3 --server=https://192.168.229.143:6443 > remote-secret-cluster3.yaml

Copy the secret to cluster1
scp remote-secret-cluster3.yaml [email protected]:/root

Copy the secret to cluster2
scp remote-secret-cluster3.yaml [email protected]:/root


cluster1:
Apply the secrets
kubectl apply -f remote-secret-cluster2.yaml
kubectl apply -f remote-secret-cluster3.yaml

Install istio
istioctl install -f cluster1.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 | istioctl install -y -f -

Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.100"]}}'

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml


cluster2:
Apply the secrets
kubectl apply -f remote-secret-cluster1.yaml
kubectl apply -f remote-secret-cluster3.yaml

Install istio
istioctl install -f cluster2.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster2 --network network2 | istioctl install -y -f -

Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.101"]}}'

Expose istiod
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-istiod.yaml

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml

cluster3:
Install istio
istioctl install -f cluster3.yaml

Install the east-west gateway
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster3 --network network3 | istioctl install -y -f -

Assign an IP to the east-west gateway
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.102"]}}'

Expose the services
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml

cluster1:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster2:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster3:
Restart the pods
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

Verification:

cluster1:
[root@node01 threenetwork]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-5f5cb446d8-zsvl5.istio SYNCED SYNCED SYNCED SYNCED istiod-6684658bfc-7kzxh 1.11.2
istio-eastwestgateway-7cbd7986c8-m5mfc.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-6684658bfc-7kzxh 1.11.2
istio-egressgateway-757d4db884-9fzqp.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-6684658bfc-7kzxh 1.11.2
istio-ingressgateway-6579968d88-m64s7.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-6684658bfc-7kzxh 1.11.2
productpage-v1-575c979756-swppq.istio SYNCED SYNCED SYNCED SYNCED istiod-6684658bfc-7kzxh 1.11.2
ratings-v1-5b6884cb-2pxm9.istio SYNCED SYNCED SYNCED SYNCED istiod-6684658bfc-7kzxh 1.11.2
reviews-v1-7f6c557f64-z7rlv.istio SYNCED SYNCED SYNCED SYNCED istiod-6684658bfc-7kzxh 1.11.2
reviews-v2-6bc4479688-nn8jz.istio SYNCED SYNCED SYNCED SYNCED istiod-6684658bfc-7kzxh 1.11.2
reviews-v3-6bcfc5898d-vmqk7.istio SYNCED SYNCED SYNCED SYNCED istiod-6684658bfc-7kzxh 1.11.2

[root@node01 threenetwork]# istioctl pc endpoint -n istio-system istio-ingressgateway-6579968d88-m64s7 |grep productpage
172.20.1.145:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.101:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.102:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-5779dc6cc4-5pnwr.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2
details-v1-7b4b949d5c-4cbqb.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2
istio-eastwestgateway-655d7b44cf-85rv2.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-86d9c94ffb-x6bl7 1.11.2
istio-eastwestgateway-68d4b447db-v88z4.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-86d9c94ffb-x6bl7 1.11.2
istio-egressgateway-67657df8d6-h5849.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-86d9c94ffb-x6bl7 1.11.2
istio-egressgateway-76f7fbbcd9-gpmh5.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-86d9c94ffb-x6bl7 1.11.2
istio-ingressgateway-59d5cf88bf-mpwp5.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-86d9c94ffb-x6bl7 1.11.2
istio-ingressgateway-87f557db4-p8l69.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-86d9c94ffb-x6bl7 1.11.2
productpage-v1-5668d6748c-4g48t.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2
productpage-v1-84486d5d7d-mmnd8.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2
ratings-v1-57bdb55d69-kxmzl.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2
ratings-v1-db8dc96d6-4mpfk.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2
reviews-v1-6b665c9cd-d9mg2.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2
reviews-v1-6f8df4d857-zgxhd.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2
reviews-v2-5465455954-fkz6w.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2
reviews-v2-ff95566fc-kwlmj.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2
reviews-v3-575485ff6b-86jkc.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2
reviews-v3-78f896bc89-cgjpf.istio SYNCED SYNCED SYNCED SYNCED istiod-86d9c94ffb-x6bl7 1.11.2

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-87f557db4-p8l69| grep productpage
172.21.1.174:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.102:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster3:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION

[root@node01 ~]# istioctl pc endpoint istio-ingressgateway-59d5cf88bf-mpwp5 -n istio-system | grep productpage
172.22.1.142:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.101:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster1的proxy连的是cluster1的istiod,cluster2和cluster3的proxy连的是cluster2的istiod。由于三个集群互相都不在同一个网络中,所以看到的其他集群的endpoint都是对应集群东西向网关的地址。

部署bookinfo vs gw

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

cluster3:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

访问:

​http://192.168.229.137:32498/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

​http://192.168.229.140:31614/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

​http://192.168.229.143:32050/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

这说明多集群配置成功,而且每个envoy cluster有三个endpoint,以轮询的方式访问。
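
也可以用下面这个小循环粗略验证轮询效果(32498 取自上文 cluster1 的 ingressgateway NodePort,以实际环境为准),同时在三个集群各自 tail productpage 的日志,请求会大致平均地落到三个集群:
for i in $(seq 1 9); do
  curl -s -o /dev/null -w "%{http_code}\n" http://192.168.229.137:32498/productpage
done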

清理:

cluster1:
kubectl label namespace istio-system topology.istio.io/network-
kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete secret istio-remote-secret-cluster2 -n istio-system
kubectl delete secret istio-remote-secret-cluster3 -n istio-system
kubectl delete gw cross-network-gateway -n istio-system
istioctl x uninstall -f cluster1.yaml
reboot
cluster2:
kubectl label namespace istio-system topology.istio.io/network-
kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete gw istiod-gateway -n istio-system
kubectl delete vs istiod-vs -n istio-system
kubectl delete secret istio-remote-secret-cluster1 -n istio-system
kubectl delete secret istio-remote-secret-cluster3 -n istio-system
kubectl delete gw cross-network-gateway -n istio-system
istioctl x uninstall -f cluster2.yaml
reboot
cluster3:
kubectl label namespace istio-system topology.istio.io/network-
kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete gw cross-network-gateway -n istio-system
istioctl x uninstall -f cluster3.yaml
reboot

三控制面

单网络

istio多集群探秘,部署了50次多集群后我得出的结论_bc_18

说明:

三个集群在一个网络中。每个集群内部有一个istiod,集群内的proxy连接到本集群的istiod,集群之间的service可以直接连接。每个istiod监控所有集群的apiserver。

三个网络联通
集群1
137,138,139
集群2
140,141,142
集群3
143,144,145

网络联通
137,138,139
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.142
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.141
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.140

route add -net 172.22.2.0 netmask 255.255.255.0 gw 192.168.229.145
route add -net 172.22.0.0 netmask 255.255.255.0 gw 192.168.229.144
route add -net 172.22.1.0 netmask 255.255.255.0 gw 192.168.229.143

route add -net 10.70.0.0 netmask 255.255.0.0 gw 192.168.229.143
route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.140

140,141,142
route add -net 172.20.2.0 netmask 255.255.255.0 gw 192.168.229.139
route add -net 172.20.0.0 netmask 255.255.255.0 gw 192.168.229.138
route add -net 172.20.1.0 netmask 255.255.255.0 gw 192.168.229.137

route add -net 172.22.2.0 netmask 255.255.255.0 gw 192.168.229.145
route add -net 172.22.0.0 netmask 255.255.255.0 gw 192.168.229.144
route add -net 172.22.1.0 netmask 255.255.255.0 gw 192.168.229.143

route add -net 10.70.0.0 netmask 255.255.0.0 gw 192.168.229.143
route add -net 10.68.0.0 netmask 255.255.0.0 gw 192.168.229.137


143,144,145
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.142
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.141
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.140

route add -net 172.20.2.0 netmask 255.255.255.0 gw 192.168.229.139
route add -net 172.20.0.0 netmask 255.255.255.0 gw 192.168.229.138
route add -net 172.20.1.0 netmask 255.255.255.0 gw 192.168.229.137

route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.140
route add -net 10.68.0.0 netmask 255.255.0.0 gw 192.168.229.137
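
路由加完后可以先简单验证跨集群的 pod 网络是否打通再继续装 istio。注意 route add 的路由重启后会丢失,需要持久化的话要写进网卡配置或开机脚本。下面的目标 IP 只是示例,换成对端集群里任意一个真实的 pod IP 即可:
# 在 cluster1 的节点上执行
ping -c 2 172.21.2.10    # cluster2 的 pod 网段
ping -c 2 172.22.2.10    # cluster3 的 pod 网段
# service 网段(10.69.0.0/16、10.70.0.0/16)一般不响应 ping,可以用 curl 指定端口验证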



cluster1:
生成istio operator部署文件
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF


生成istio operator部署文件
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF
传输部署文件到cluster2
scp cluster2.yaml [email protected]:/root

生成istio operator部署文件
cat <<EOF > cluster3.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster3
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

传输部署文件到cluster3
scp cluster3.yaml [email protected]:/root


cluster1:
创建访问apiserver secret
istioctl x create-remote-secret --name=cluster1 --server=https://192.168.229.137:6443 > remote-secret-cluster1.yaml

传输secret
scp remote-secret-cluster1.yaml [email protected]:/root
scp remote-secret-cluster1.yaml [email protected]:/root

cluster2
创建访问apiserver secret
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.140:6443 > remote-secret-cluster2.yaml

传输secret
scp remote-secret-cluster2.yaml [email protected]:/root
scp remote-secret-cluster2.yaml [email protected]:/root

cluster3
创建访问apiserver secret
istioctl x create-remote-secret --name=cluster3 --server=https://192.168.229.143:6443 > remote-secret-cluster3.yaml

传输secret
scp remote-secret-cluster3.yaml [email protected]:/root
scp remote-secret-cluster3.yaml [email protected]:/root


cluster1:
应用secret
kubectl apply -f remote-secret-cluster3.yaml
kubectl apply -f remote-secret-cluster2.yaml

部署istio
istioctl install -f cluster1.yaml

cluster2:
应用secret
kubectl apply -f remote-secret-cluster1.yaml
kubectl apply -f remote-secret-cluster3.yaml

部署istio
istioctl install -f cluster2.yaml

cluster3:
应用secret
kubectl apply -f remote-secret-cluster1.yaml
kubectl apply -f remote-secret-cluster2.yaml

部署istio
istioctl install -f cluster3.yaml

cluster1:
重启pod
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system


cluster2:
重启pod
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system


cluster3:
重启pod
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

验证:

cluster1:
[root@node01 samenetwork]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-7bf594cdd4-sc4nb.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7bc846dc-2d44z 1.11.2
istio-egressgateway-5c5c85898b-8ds59.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-5f7bc846dc-2d44z 1.11.2
istio-ingressgateway-7975989ccd-2vqfg.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-5f7bc846dc-2d44z 1.11.2
productpage-v1-685dbb6fdd-597bd.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7bc846dc-2d44z 1.11.2
ratings-v1-659b848d68-qwk55.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7bc846dc-2d44z 1.11.2
reviews-v1-767d5656b8-5895x.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7bc846dc-2d44z 1.11.2
reviews-v2-58bb787dd4-gldhm.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7bc846dc-2d44z 1.11.2
reviews-v3-6468dcb8bd-zqwjv.istio SYNCED SYNCED SYNCED SYNCED istiod-5f7bc846dc-2d44z 1.11.2

[root@node01 samenetwork]# istioctl pc endpoint -n istio-system istio-ingressgateway-7975989ccd-2vqfg | grep productpage
172.20.2.165:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.2.193:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.2.168:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-7549748bc5-js5zk.istio SYNCED SYNCED SYNCED SYNCED istiod-85fdbb8cdb-szktn 1.11.2
istio-egressgateway-d89b5b999-tr7dt.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-85fdbb8cdb-szktn 1.11.2
istio-ingressgateway-5fbfcd6549-qzgzv.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-85fdbb8cdb-szktn 1.11.2
productpage-v1-b99b6cfb9-rx7ww.istio SYNCED SYNCED SYNCED SYNCED istiod-85fdbb8cdb-szktn 1.11.2
ratings-v1-5b589fc8b7-tqbq5.istio SYNCED SYNCED SYNCED SYNCED istiod-85fdbb8cdb-szktn 1.11.2
reviews-v1-7d7cfb6655-2xfht.istio SYNCED SYNCED SYNCED SYNCED istiod-85fdbb8cdb-szktn 1.11.2
reviews-v2-786bd46c9-h8zbz.istio SYNCED SYNCED SYNCED SYNCED istiod-85fdbb8cdb-szktn 1.11.2
reviews-v3-8684b84c9f-6hn5q.istio SYNCED SYNCED SYNCED SYNCED istiod-85fdbb8cdb-szktn 1.11.2

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-5fbfcd6549-qzgzv|grep productpage
172.20.2.165:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.2.193:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.2.168:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster3:

[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-789d874486-qqxzt.istio SYNCED SYNCED SYNCED SYNCED istiod-845697489-tdjq5 1.11.2
istio-egressgateway-69cd5559b4-xmfg5.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-845697489-tdjq5 1.11.2
istio-ingressgateway-5b6bd7d9bb-mlg6n.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-845697489-tdjq5 1.11.2
productpage-v1-59596f5958-gx68p.istio SYNCED SYNCED SYNCED SYNCED istiod-845697489-tdjq5 1.11.2
ratings-v1-5fc46dbfdb-b5rqf.istio SYNCED SYNCED SYNCED SYNCED istiod-845697489-tdjq5 1.11.2
reviews-v1-7b8544ccd6-lhkv2.istio SYNCED SYNCED SYNCED SYNCED istiod-845697489-tdjq5 1.11.2
reviews-v2-564ff556f8-lln88.istio SYNCED SYNCED SYNCED SYNCED istiod-845697489-tdjq5 1.11.2
reviews-v3-6fdd747ffd-chgww.istio SYNCED SYNCED SYNCED SYNCED istiod-845697489-tdjq5 1.11.2


[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-5b6bd7d9bb-mlg6n|grep productpage
172.20.2.165:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.21.2.193:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.2.168:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

可以看到三个集群上的endpoint都是pod ip,因为三个集群都在同一个网络中;各自的proxy也是注册到本集群的istiod中。
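
也可以直接从某个集群里带 sidecar 的 pod 发起请求,确认请求被负载均衡到三个集群的 productpage 上。下面借用 bookinfo 自带的 ratings 容器里的 curl(deployment 名以实际为准):
kubectl exec -n istio deploy/ratings-v1 -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
# 正常会输出类似 <title>Simple Bookstore App</title>,多执行几次并观察三个集群 productpage 的日志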

部署bookinfo vs gw

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

cluster3:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

访问:

​http://192.168.229.137:32498/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

​http://192.168.229.140:31614/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

​http://192.168.229.143:32050/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

这说明多集群配置成功,而且每个envoy cluster有三个endpoint,以轮询的方式访问。

清理:

cluster1:

kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete secret istio-remote-secret-cluster2 -n istio-system
kubectl delete secret istio-remote-secret-cluster3 -n istio-system
istioctl x uninstall -f cluster1.yaml
reboot

cluster2:

kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete secret istio-remote-secret-cluster1 -n istio-system
kubectl delete secret istio-remote-secret-cluster3 -n istio-system
istioctl x uninstall -f cluster2.yaml

reboot

cluster3:

kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete secret istio-remote-secret-cluster2 -n istio-system
kubectl delete secret istio-remote-secret-cluster1 -n istio-system
istioctl x uninstall -f cluster3.yaml
reboot

两网络

istio多集群探秘,部署了50次多集群后我得出的结论_云原生_19

说明:

cluster2和cluster3在一个网络中,它们和cluster1不在一个网络中。各个集群有各自的istiod,集群内的proxy连接的是自己集群的istiod。cluster2和cluster3的service可以直接通信;它们和cluster1之间的互访则要经过对方集群的东西向网关(cluster2/cluster3 访问 cluster1 走 cluster1 的东西向网关,反过来走 cluster2/cluster3 的)。

三个集群
集群1
137,138,139
集群2
140,141,142
集群3
143,144,145

每个集群都有各自的istiod
cluster2和cluster3在一个网络中,他们和cluster1在不同网络中

打通cluster2和cluster3网络
140,141,142
route add -net 172.22.2.0 netmask 255.255.255.0 gw 192.168.229.145
route add -net 172.22.0.0 netmask 255.255.255.0 gw 192.168.229.144
route add -net 172.22.1.0 netmask 255.255.255.0 gw 192.168.229.143

route add -net 10.70.0.0 netmask 255.255.0.0 gw 192.168.229.143


143,144,145
route add -net 172.21.2.0 netmask 255.255.255.0 gw 192.168.229.142
route add -net 172.21.0.0 netmask 255.255.255.0 gw 192.168.229.141
route add -net 172.21.1.0 netmask 255.255.255.0 gw 192.168.229.140

route add -net 10.69.0.0 netmask 255.255.0.0 gw 192.168.229.140

给istio-system namespace打网络标签
cluster1:
kubectl label namespace istio-system topology.istio.io/network=network1
cluster2:
kubectl label namespace istio-system topology.istio.io/network=network2
cluster3:
kubectl label namespace istio-system topology.istio.io/network=network2
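
这个标签用来告诉 istiod 各集群的工作负载属于哪个网络,打完之后可以顺手确认一下:
kubectl get namespace istio-system -L topology.istio.io/network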

生成istio集群operator部署文件
cluster1:
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

生成istio集群operator部署文件
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF


cat <<EOF > cluster3.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster3
      network: network2
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

传输部署文件到相关主机
scp cluster2.yaml [email protected]:/root
scp cluster3.yaml [email protected]:/root


部署cluster1
istioctl install -f cluster1.yaml

生成cluster1 东西向网关
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 | istioctl install -y -f -

配置东西向网关ip
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.100"]}}'
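
patch 完可以确认东西向网关的 Service 已经有了可用的对外地址(没有 LoadBalancer 的环境就是靠这个 externalIP 让其他集群访问 15443 端口):
kubectl get svc istio-eastwestgateway -n istio-system
# EXTERNAL-IP 一列应该能看到 192.168.229.100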

暴露cluster1中的服务
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml


cluster2:
安装集群2
istioctl install -f cluster2.yaml

配置东西向网关
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster2 --network network2 | istioctl install -y -f -

配置东西向网关ip
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.101"]}}'

暴露的服务
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml

cluster3:
安装集群3
istioctl install -f cluster3.yaml

配置东西向网关
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster3 --network network2 | istioctl install -y -f -

配置东西向网关ip
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.102"]}}'

暴露的服务
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml

cluster1:
生成k8s访问secret
istioctl x create-remote-secret --name=cluster1 --server=https://192.168.229.137:6443 > remote-secret-cluster1.yaml

传输k8s访问secret
scp remote-secret-cluster1.yaml [email protected]:/root
scp remote-secret-cluster1.yaml [email protected]:/root

cluster2:
生成k8s访问secret
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.140:6443 > remote-secret-cluster2.yaml
传输k8s访问secret
scp remote-secret-cluster2.yaml [email protected]:/root
scp remote-secret-cluster2.yaml [email protected]:/root

cluster3
生成k8s访问secret
istioctl x create-remote-secret --name=cluster3 --server=https://192.168.229.143:6443 > remote-secret-cluster3.yaml
传输k8s访问secret
scp remote-secret-cluster3.yaml [email protected]:/root
scp remote-secret-cluster3.yaml [email protected]:/root

cluster1
应用k8s访问secret
kubectl apply -f remote-secret-cluster2.yaml
kubectl apply -f remote-secret-cluster3.yaml

cluster2:
应用k8s访问secret
kubectl apply -f remote-secret-cluster1.yaml
kubectl apply -f remote-secret-cluster3.yaml

cluster3:
应用k8s访问secret
kubectl apply -f remote-secret-cluster1.yaml
kubectl apply -f remote-secret-cluster2.yaml

cluster1:
重启pod
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system


cluster2:
重启pod
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster3:
重启pod
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

验证:

cluster1:

[root@node01 twonetwork]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-65b5d4fc88-bp6cf.istio SYNCED SYNCED SYNCED SYNCED istiod-78f589b4fb-qc5j7 1.11.2
istio-eastwestgateway-f9ffd96f7-6hk4t.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-78f589b4fb-qc5j7 1.11.2
istio-egressgateway-66c984f45f-7dgjk.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-78f589b4fb-qc5j7 1.11.2
istio-ingressgateway-647868b87c-sms28.istio-system SYNCED SYNCED SYNCED SYNCED istiod-78f589b4fb-qc5j7 1.11.2
productpage-v1-6bb95f4848-r2dv8.istio SYNCED SYNCED SYNCED SYNCED istiod-78f589b4fb-qc5j7 1.11.2
ratings-v1-777544db8-p5k4n.istio SYNCED SYNCED SYNCED SYNCED istiod-78f589b4fb-qc5j7 1.11.2
reviews-v1-868944cf5b-z7khn.istio SYNCED SYNCED SYNCED SYNCED istiod-78f589b4fb-qc5j7 1.11.2
reviews-v2-db4c885b4-5xg6q.istio SYNCED SYNCED SYNCED SYNCED istiod-78f589b4fb-qc5j7 1.11.2
reviews-v3-89b69d49b-fr8pt.istio SYNCED SYNCED SYNCED SYNCED istiod-78f589b4fb-qc5j7 1.11.2

[root@node01 twonetwork]# istioctl pc endpoint -n istio productpage-v1-6bb95f4848-r2dv8.istio|grep productpage
172.20.2.35:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.101:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.102:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-7dfb994b7-sv4hr.istio SYNCED SYNCED SYNCED SYNCED istiod-d5888669f-4q7z5 1.11.2
istio-eastwestgateway-7f69d78759-mxnfj.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-d5888669f-4q7z5 1.11.2
istio-egressgateway-7f4fb98c67-969z6.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-d5888669f-4q7z5 1.11.2
istio-ingressgateway-8994b5b59-c7lr6.istio-system SYNCED SYNCED SYNCED SYNCED istiod-d5888669f-4q7z5 1.11.2
productpage-v1-6c458d7f9c-qcm4l.istio SYNCED SYNCED SYNCED SYNCED istiod-d5888669f-4q7z5 1.11.2
ratings-v1-5774967ddc-kf728.istio SYNCED SYNCED SYNCED SYNCED istiod-d5888669f-4q7z5 1.11.2
reviews-v1-6bb67f7c7f-ft8l2.istio SYNCED SYNCED SYNCED SYNCED istiod-d5888669f-4q7z5 1.11.2
reviews-v2-f5bb497f-xtrd7.istio SYNCED SYNCED SYNCED SYNCED istiod-d5888669f-4q7z5 1.11.2
reviews-v3-7cdc8c6f87-gnn7z.istio SYNCED SYNCED SYNCED SYNCED istiod-d5888669f-4q7z5 1.11.2

[root@node01 ~]# istioctl pc endpoint -n istio productpage-v1-6c458d7f9c-qcm4l|grep productpage
172.21.2.93:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.2.74:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster3:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-689546bb5d-9gqnb.istio SYNCED SYNCED SYNCED SYNCED istiod-65577f87b6-25x7q 1.11.2
istio-eastwestgateway-77cff7f899-k4ngb.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-65577f87b6-25x7q 1.11.2
istio-egressgateway-7d4cf6b9f8-glrw6.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-65577f87b6-25x7q 1.11.2
istio-ingressgateway-687bc9d67d-g5l6w.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-65577f87b6-25x7q 1.11.2
productpage-v1-6fffcfb4b-jmfs7.istio SYNCED SYNCED SYNCED SYNCED istiod-65577f87b6-25x7q 1.11.2
ratings-v1-59db4f7dbb-t27jf.istio SYNCED SYNCED SYNCED SYNCED istiod-65577f87b6-25x7q 1.11.2
reviews-v1-7ccb47d858-8w9cr.istio SYNCED SYNCED SYNCED SYNCED istiod-65577f87b6-25x7q 1.11.2
reviews-v2-576599fb8b-hnprc.istio SYNCED SYNCED SYNCED SYNCED istiod-65577f87b6-25x7q 1.11.2
reviews-v3-77d47cf98-hxngw.istio SYNCED SYNCED SYNCED SYNCED istiod-65577f87b6-25x7q 1.11.2

[root@node01 ~]# istioctl pc endpoint -n istio productpage-v1-6fffcfb4b-jmfs7 | grep productpage
172.21.2.93:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
172.22.2.74:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

各个集群的proxy连接各自集群的istiod。cluster1的endpoint有两个是其他集群东西向网关的地址;cluster2的endpoint有两个是pod ip(因为cluster2和cluster3在同一个网络中),另一个是cluster1东西向网关的地址(因为它们和cluster1不在同一个网络中);cluster3也是类似的。
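
跨网络的流量都会走对方集群东西向网关的 15443 端口(AUTO_PASSTHROUGH),可以在东西向网关 pod 上确认这个监听确实存在(pod 名取自上文,以实际为准):
istioctl pc listener -n istio-system istio-eastwestgateway-f9ffd96f7-6hk4t --port 15443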

部署bookinfo gw vs

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

cluster3:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

访问:

​http://192.168.229.137:32498/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

​http://192.168.229.140:31614/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

​http://192.168.229.143:32050/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

这说明多集群配置成功,而且每个envoy cluster有三个endpoint,以轮询的方式访问。

清理:

cluster1:

kubectl label namespace istio-system topology.istio.io/network-
kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete secret istio-remote-secret-cluster2 -n istio-system
kubectl delete secret istio-remote-secret-cluster3 -n istio-system
kubectl delete gw cross-network-gateway -n istio-system
istioctl x uninstall -f cluster1.yaml
reboot

cluster2:

kubectl label namespace istio-system topology.istio.io/network-
kubectl delete vs bookinfo -n istio
kubectl delete gw bookinfo-gateway -n istio
kubectl delete secret istio-remote-secret-cluster1 -n istio-system
kubectl delete secret istio-remote-secret-cluster3 -n istio-system
kubectl delete gw cross-network-gateway -n istio-system
istioctl x uninstall -f cluster2.yaml
reboot

cluster3:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete secret istio-remote-secret-cluster1 -n istio-system

kubectl delete secret istio-remote-secret-cluster2 -n istio-system

kubectl delete gw cross-network-gateway -n istio-system

istioctl x uninstall -f cluster3.yaml

reboot

三网络

istio多集群探秘,部署了50次多集群后我得出的结论_bc_20

说明:

三个集群的网络都是隔离的,每个集群都有各自的istiod和东西向网关。每个集群的service要访问其他集群的服务,都必须通过对方集群的东西向网关。每个集群的proxy连接到自己集群的istiod,每个istiod都监控所有集群的apiserver。

三个网络
集群1
137,138,139
集群2
140,141,142
集群3
143,144,145

给istio-system namespace打标签

cluster1:
kubectl label namespace istio-system topology.istio.io/network=network1

cluster2:
kubectl label namespace istio-system topology.istio.io/network=network2

cluster3:
kubectl label namespace istio-system topology.istio.io/network=network3

cluster1:
生成istio operator部署文件
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

生成istio operator部署文件
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

生成istio operator部署文件
cat <<EOF > cluster3.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster3
      network: network3
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

传输部署文件
scp cluster2.yaml [email protected]:/root
scp cluster3.yaml [email protected]:/root

部署istio
istioctl install -f cluster1.yaml

生成东西向网关
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 | istioctl install -y -f -

配置东西向网关ip
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.100"]}}'

暴露服务
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml


cluster2:
部署istio
istioctl install -f cluster2.yaml

生成东西向网关
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster2 --network network2 | istioctl install -y -f -

配置东西向网关ip
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.101"]}}'

暴露服务
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml

cluster3:
部署istio
istioctl install -f cluster3.yaml

生成东西向网关
/root/istio-1.11.2/samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster3 --network network3 | istioctl install -y -f -

配置东西向网关ip
kubectl patch svc -n istio-system istio-eastwestgateway -p '{"spec":{"externalIPs":["192.168.229.102"]}}'

暴露服务
kubectl apply -n istio-system -f /root/istio-1.11.2/samples/multicluster/expose-services.yaml

cluster1:
生成访问apiserver secret
istioctl x create-remote-secret --name=cluster1 --server=https://192.168.229.137:6443 > remote-secret-cluster1.yaml

传输secret
scp remote-secret-cluster1.yaml [email protected]:/root
scp remote-secret-cluster1.yaml [email protected]:/root

cluster2
生成访问apiserver secret
istioctl x create-remote-secret --name=cluster2 --server=https://192.168.229.140:6443 > remote-secret-cluster2.yaml

传输secret
scp remote-secret-cluster2.yaml [email protected]:/root
scp remote-secret-cluster2.yaml [email protected]:/root

cluster3
生成访问apiserver secret
istioctl x create-remote-secret --name=cluster3 --server=https://192.168.229.143:6443 > remote-secret-cluster3.yaml

传输secret
scp remote-secret-cluster3.yaml [email protected]:/root
scp remote-secret-cluster3.yaml [email protected]:/root

cluster1:
kubectl apply -f remote-secret-cluster2.yaml
kubectl apply -f remote-secret-cluster3.yaml

cluster2:
kubectl apply -f remote-secret-cluster1.yaml
kubectl apply -f remote-secret-cluster3.yaml

cluster3:
kubectl apply -f remote-secret-cluster1.yaml
kubectl apply -f remote-secret-cluster2.yaml

cluster1:
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster2:
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

cluster3:
kubectl rollout restart deploy -n istio
kubectl rollout restart deploy -n istio-system

验证:

cluster1:
[root@node01 multinetwork]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-96bfdccc8-znsvk.istio SYNCED SYNCED SYNCED SYNCED istiod-54d85d6859-8bcg6 1.11.2
istio-eastwestgateway-64df78c868-256x8.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-54d85d6859-8bcg6 1.11.2
istio-egressgateway-5f587b99fd-dpwqf.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-54d85d6859-8bcg6 1.11.2
istio-ingressgateway-79fdb4bfc8-5wmdq.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-54d85d6859-8bcg6 1.11.2
productpage-v1-c597885db-nskdn.istio SYNCED SYNCED SYNCED SYNCED istiod-54d85d6859-8bcg6 1.11.2
ratings-v1-555cb4c6f8-5g6gw.istio SYNCED SYNCED SYNCED SYNCED istiod-54d85d6859-8bcg6 1.11.2
reviews-v1-6db6c9c546-8mz42.istio SYNCED SYNCED SYNCED SYNCED istiod-54d85d6859-8bcg6 1.11.2
reviews-v2-974d5f97d-pgx92.istio SYNCED SYNCED SYNCED SYNCED istiod-54d85d6859-8bcg6 1.11.2
reviews-v3-6d68c7bd46-8htgc.istio SYNCED SYNCED SYNCED SYNCED istiod-54d85d6859-8bcg6 1.11.2

[root@node01 multinetwork]# istioctl pc endpoint -n istio-system istio-ingressgateway-79fdb4bfc8-5wmdq |grep productpage
172.20.1.176:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.101:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.102:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster2:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-7cf7f6dc77-xrmxd.istio SYNCED SYNCED SYNCED SYNCED istiod-6ccccd9f6f-c6dbf 1.11.2
istio-eastwestgateway-78475f7cc-hzh5p.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-6ccccd9f6f-c6dbf 1.11.2
istio-egressgateway-7b54fc89bf-4sp6n.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-6ccccd9f6f-c6dbf 1.11.2
istio-ingressgateway-5d7bf69f65-vn7fh.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-6ccccd9f6f-c6dbf 1.11.2
productpage-v1-7b49d49559-vkxz7.istio SYNCED SYNCED SYNCED SYNCED istiod-6ccccd9f6f-c6dbf 1.11.2
ratings-v1-7f7ddcf6d9-x5spr.istio SYNCED SYNCED SYNCED SYNCED istiod-6ccccd9f6f-c6dbf 1.11.2
reviews-v1-68c96467cc-qrpzw.istio SYNCED SYNCED SYNCED SYNCED istiod-6ccccd9f6f-c6dbf 1.11.2
reviews-v2-7bc46c79b9-gnxxk.istio SYNCED SYNCED SYNCED SYNCED istiod-6ccccd9f6f-c6dbf 1.11.2
reviews-v3-7f94dcf6f7-8kntd.istio SYNCED SYNCED SYNCED SYNCED istiod-6ccccd9f6f-c6dbf 1.11.2

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-5d7bf69f65-vn7fh |grep productpage
172.21.0.185:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.102:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

cluster3:
[root@node01 ~]# istioctl ps
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-6c8b8d7659-tkhnf.istio SYNCED SYNCED SYNCED SYNCED istiod-7b74f8d468-b7m6t 1.11.2
istio-eastwestgateway-776b49c855-g7lbw.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-7b74f8d468-b7m6t 1.11.2
istio-egressgateway-644457d9cc-l4ds5.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-7b74f8d468-b7m6t 1.11.2
istio-ingressgateway-646dbfc99c-2gq5j.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-7b74f8d468-b7m6t 1.11.2
productpage-v1-7c9dccbf65-hdmt6.istio SYNCED SYNCED SYNCED SYNCED istiod-7b74f8d468-b7m6t 1.11.2
ratings-v1-678c8559f5-7d7s6.istio SYNCED SYNCED SYNCED SYNCED istiod-7b74f8d468-b7m6t 1.11.2
reviews-v1-5f4c5b4699-hl5ff.istio SYNCED SYNCED SYNCED SYNCED istiod-7b74f8d468-b7m6t 1.11.2
reviews-v2-6854768ff9-qnv95.istio SYNCED SYNCED SYNCED SYNCED istiod-7b74f8d468-b7m6t 1.11.2
reviews-v3-7b67ffd547-vrqqr.istio SYNCED SYNCED SYNCED SYNCED istiod-7b74f8d468-b7m6t 1.11.2

[root@node01 ~]# istioctl pc endpoint -n istio-system istio-ingressgateway-646dbfc99c-2gq5j |grep productpage
172.22.1.155:9080 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.100:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local
192.168.229.101:15443 HEALTHY OK outbound|9080||productpage.istio.svc.cluster.local

可以看出上面的结果很对称。各个集群的proxy连接各自集群的istiod,每个envoy cluster都有三个端点:一个是本集群pod的ip,另外两个分别是其他两个集群东西向网关的地址。

部署bookinfo gw vs

cluster1:

multicluster/gateway-01.yaml

kubectl apply -f gateway-01.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

multicluster/vs-bookinfo-hosts-star.yaml

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage.istio.svc.cluster.local
        port:
          number: 9080

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

scp gateway-01.yaml [email protected]:/root

scp vs-bookinfo-hosts-star.yaml [email protected]:/root

cluster2:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

cluster3:

kubectl apply -f gateway-01.yaml -n istio

kubectl apply -f vs-bookinfo-hosts-star.yaml -n istio

访问:

​http://192.168.229.137:32498/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

​http://192.168.229.140:31614/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

​http://192.168.229.143:32050/productpage​

查看日志:

cluster1:

kubectl logs -f -n istio productpage-v1-6bb95f4848-r2dv8

有日志

cluster2:

kubectl logs -f -n istio productpage-v1-6c458d7f9c-qcm4l

有日志

cluster3:

kubectl logs -f -n istio productpage-v1-6fffcfb4b-jmfs7

有日志

这说明多集群配置成功,而且每个envoy cluster有三个endpoint,以轮询的方式访问。

清理:

cluster1:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete secret istio-remote-secret-cluster2 -n istio-system

kubectl delete secret istio-remote-secret-cluster3 -n istio-system

kubectl delete gw cross-network-gateway -n istio-system

istioctl x uninstall -f cluster1.yaml
reboot

cluster2:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete secret istio-remote-secret-cluster1 -n istio-system

kubectl delete secret istio-remote-secret-cluster3 -n istio-system

kubectl delete gw cross-network-gateway -n istio-system

istioctl x uninstall -f cluster2.yaml
reboot

cluster3:

kubectl label namespace istio-system topology.istio.io/network-

kubectl delete vs bookinfo -n istio

kubectl delete gw bookinfo-gateway -n istio

kubectl delete secret istio-remote-secret-cluster1 -n istio-system

kubectl delete secret istio-remote-secret-cluster2 -n istio-system

kubectl delete gw cross-network-gateway -n istio-system

istioctl x uninstall -f cluster3.yaml
reboot

总结

多集群要点

1. istiod 监视 apiserver

2. service 连接到 istiod

3. 不同集群之间 service 怎么访问

结论

1. 每个istiod都需要监视所有集群的apiserver

2. 如果多个集群共享一个istiod,其他集群的service必须通过istiod所在集群的东西向网关连接到istiod(见下面的示例)

3. 如果在同一个网络中,service可以直接访问;如果不在同一个网络中,service必须通过对方集群的东西向网关访问
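
对应结论2,共享控制面时远端集群的安装配置只需要把 remotePilotAddress 指向主集群东西向网关的地址,下面是一个最小示意(写法取自上文 cluster3 的部署文件,IP 换成主集群东西向网关的实际地址):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster3
      network: network3
      remotePilotAddress: 192.168.229.101   # 主集群(cluster2)东西向网关的地址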

部署模型选择

考虑因素:网络隔离性、集群容错、配置隔离性。

如果网络需要强隔离,则选择多网络模型,否则选择单网络模型。

如果控制平面容错要高,则选择多控制平面模型,否则可以选择单控制平面。

如果istio相关配置需要隔离,不同集群同一个服务配置不一样,则采用多控制平面模型,否则采用单控制平面模型。

