Istio

1. Istio overview


1.1 What Istio is

Istio is a service mesh: an infrastructure platform for governing services in cloud-native environments.

1.2 Istio highlights

# Observability
# Security
# Traffic management

1.3 Istio capabilities

By injecting a proxy as a sidecar next to each service, Istio provides the following capabilities:
1. Service discovery: the proxy discovers the Services it fronts; a Service usually has a list of Endpoints, and the proxy picks an instance according to the load-balancing policy
2. Isolating failed backend instances
3. Service protection: limiting the maximum number of connections and requests
4. Fast failure and timeouts for backend access
5. Rate limiting for individual endpoints
6. Automatic retries when a backend call fails
7. Dynamically rewriting request headers
8. Fault injection: simulating failed responses
9. Redirecting requests to a different backend
10. Canary releases: splitting traffic by weight or by request content
11. Mutual TLS between callers and backend services
12. Fine-grained authorization of access
13. Automatic access logs and request details
14. Automatic instrumentation for distributed tracing
15. Access metrics that build a complete topology of the application
# None of this requires intrusive changes to the application; updating the corresponding configuration takes effect dynamically

1.4 Istio components

1.4.1 Envoy (data-plane proxy)

# The Envoy proxy is deployed as a sidecar next to each service
It mediates all inbound and outbound traffic for every service in the mesh
# Built-in features
Dynamic service discovery
Load balancing
TLS termination
HTTP/2 and gRPC proxying
Circuit breakers
Health checks
Staged rollouts with percentage-based traffic splits
Fault injection
Rich metrics
# Features Istio adds on top of Envoy
Traffic control
Network resilience
Security and identity authentication
Pluggable extensions based on WebAssembly

1.4.2 istiod (control plane)

Components
Pilot
Galley
Citadel
Responsibilities
Service discovery, configuration, and certificate management

1.5 Deployment models

[]: https://istio.io/latest/zh/docs/ops/deployment/deployment-models/ "Istio deployment models"

2. Installing Istio

2.1 Installing with Helm

# Add the repository
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
# (alternatively, download the charts from GitHub)
# Download the Istio charts
helm pull istio/base
helm pull istio/istiod
helm pull istio/gateway
# Optionally change the image addresses in the charts to a private registry (and repackage/upload them to your own Helm repository)
# Create the namespace used by Istio
kubectl create namespace istio-system
# Install
helm install istio-base $base_path -n istio-system
helm install istiod $istiod_path -n istio-system
helm install istio-ingress $ingress_path -n istio-system
helm install istio-egress $egress_path -n istio-system
# Verify
helm list -A
kubectl get pod -n istio-system

3. Using Istio

3.1 Deploying the test services

# Deploy the Bookinfo sample microservices used for testing
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
  labels:
    account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      serviceAccountName: bookinfo-details
      containers:
      - name: details
        image: quanheng.com/pub/examples-bookinfo-details-v1:1.18.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
  labels:
    account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      serviceAccountName: bookinfo-ratings
      containers:
      - name: ratings
        image: quanheng.com/pub/examples-bookinfo-ratings-v1:1.18.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
  labels:
    account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: quanheng.com/pub/examples-bookinfo-reviews-v1:1.18.0
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: quanheng.com/pub/examples-bookinfo-reviews-v2:1.18.0
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: quanheng.com/pub/examples-bookinfo-reviews-v3:1.18.0
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  labels:
    account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9080"
        prometheus.io/path: "/metrics"
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: quanheng.com/pub/examples-bookinfo-productpage-v1:1.18.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
      volumes:
      - name: tmp
        emptyDir: {}
---

3.2 Enabling automatic sidecar injection

Label the namespace; Pods created in a namespace carrying this label get the sidecar proxy injected automatically
istio-injection=enabled
kubectl label ns $ns_name istio-injection=enabled
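A quick way to confirm injection is working (assuming $ns_name is the namespace labelled above): newly created Pods should carry an extra istio-proxy container.
kubectl get ns $ns_name --show-labels   # the istio-injection=enabled label should be present
kubectl get pod -n $ns_name             # READY should read 2/2 once the sidecar is injected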

3.3 Manual sidecar injection

# Generate a manifest with the sidecar added
istioctl kube-inject -f xx.yaml > xx_sidecar.yaml
# Deploy it
kubectl apply -f xx_sidecar.yaml

4. Observability

4.1 Metrics

Istio generates a set of service metrics based on the four golden signals of monitoring (latency, traffic, errors, saturation)
Proxy-level metrics: detailed statistics about the proxy itself, including configuration and health information
Service-level metrics: the latency, traffic, errors, and saturation of the monitored services
Control-plane metrics: monitoring the behaviour of Istio itself (as opposed to the services inside the mesh)

4.1.1 Built-in metrics

kubectl  exec -ti -n weather advertisement-v1-64c975fd5-tc2x7 -c istio-proxy -- pilot-agent request GET /stats/prometheus
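As a rough illustration (not part of the original workflow), the same endpoint can be filtered for the standard request counter; the Deployment name advertisement-v1 is inferred from the pod name above:
kubectl exec -n weather deploy/advertisement-v1 -c istio-proxy -- pilot-agent request GET /stats/prometheus | grep istio_requests_total | head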

4.1.2 Custom metrics

4.1.2.1 Modifying existing metrics

# Install the IstioOperator configuration
cd cloud-native-istio/09
istioctl install -f custom-metrics.yaml
kubectl get iop -n istio-system
# Add the following under spec.values.telemetry.v2.prometheus
configOverride:
  gateway:
    metrics:
    - name: requests_total # modify a specific metric; if no name is given, the change applies to all metrics
      dimensions: # attributes (labels) added to the metric
        request_host: request.host
        request_method: request.method
      tags_to_remove: # labels removed from the metric
      - response_flags
  inboundSidecar:
    metrics:
    - dimensions:
        request_host: request.host
        request_method: request.method
  outboundSidecar:
    metrics:
    - dimensions:
        request_host: request.host
        request_method: request.method

4.1.2.2 Defining a new metric

# Define a new metric named custom_count of type COUNTER, with the attribute reporter=proxy
# Add the following under spec.values.telemetry.v2.prometheus
configOverride:
  outboundSidecar:
    definitions: # define the new metric
    - name: custom_count
      type: COUNTER # a running total
      value: "1"
    metrics:
    - name: custom_count
      dimensions:
        reporter: "'proxy'"

4.2 Distributed tracing

The Envoy proxies perform distributed tracing: they automatically generate trace spans for their applications, and the application only needs to forward the appropriate request context (trace headers).
Istio supports many tracing backends, including Zipkin, Jaeger, Lightstep, and Datadog. Operators control the trace sampling rate (the rate at which trace data is generated per request), which lets them control how much tracing data the mesh produces.
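A minimal sketch of controlling the sampling rate with the Telemetry API, assuming a tracing provider named zipkin is configured in the mesh config (tracing and the accessLogging settings from 4.3.1 can live in the same mesh-default resource):
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  tracing:
  - providers:
    - name: zipkin
    randomSamplingPercentage: 10.0 # sample 10% of requests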

4.3 日志

4.3.1 开启

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
      - name: envoy

5. Traffic management

5.1 Canary releases (traffic splitting)

5.1.1 Defining service versions

# A DestinationRule wraps the path to each service version into a named subset; traffic-splitting rules can then refer to these names directly
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
  namespace: istio-test
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3

5.1.2 Splitting by traffic ratio


# Configure a VirtualService that splits traffic between the two versions by weight
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: istio-test
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v2
      weight: 50

5.1.3 Splitting by request content

(Screenshots omitted: the reviews page as rendered in Chrome and in Firefox.)

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: istio-test
spec:
  hosts:
  - reviews
  http:
  - match: # requests whose User-Agent matches a Chrome-based browser (any version) take the route below to subset v1; all other requests fall through to v2
    - headers:
        User-Agent:
          regex: .*(Chrome/([\d.]+)).*
    route:
    - destination:
        host: reviews
        subset: v1
  - route:   
    - destination:
        host: reviews
        subset: v2

5.1.4 Combining multiple conditions

# Configure a VirtualService that first splits Android user traffic by weight, then routes all remaining traffic to the default version
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-route
  namespace: weather
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/weather-gateway
  http:
  - match:
    - headers:
        User-Agent:
          regex: .*((Android)).*  
    route: # split the matched (Android) traffic by weight
    - destination:
        host: frontend
        subset: v1   
      weight: 50
    - destination:
        host: frontend
        subset: v2
      weight: 50
  - route: # default route for all remaining traffic
    - destination:
        host: frontend
        subset: v1

5.1.5 Multiple services, multiple versions

# Configure VirtualServices for several services
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-route
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/weather-gateway
  http:
  - match:
    - headers:
        cookie:
          regex: ^(.*?;)?(user=tester)(;.*)?$
    route:
    - destination:
        host: frontend
        subset: v2
  - route:
    - destination:
        host: frontend
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: forecast-route
spec:
  hosts:
  - forecast
  http:
  - match:
    - sourceLabels:
        version: v2
    route:
    - destination:
        host: forecast
        subset: v2
  - route:
    - destination:
        host: forecast
        subset: v1
# Traffic from the gateway whose cookie identifies user=tester is routed to frontend v2; everything else goes to v1
# Traffic coming from frontend workloads labelled version: v2 is routed to forecast v2; everything else goes to v1

5.2 Traffic policies

# These rules define how traffic is distributed from a service to its backend instances

5.2.1 Load balancing

5.2.1.1 ROUND_ROBIN & RANDOM

# Configure a DestinationRule
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: advertisement-dr
spec:
  host: advertisement
  subsets:
  - labels:
      version: v1
    name: v1
  trafficPolicy:
    loadBalancer:
      simple: RANDOM # change this policy for a different load-balancing behaviour
---
ROUND_ROBIN: round robin
RANDOM: random selection

5.2.1.2 Locality-based traffic distribution

# Configure a DestinationRule; the locality labels are set on the nodes
region: topology.kubernetes.io/region=cn-north-7
zone: topology.kubernetes.io/zone=cn-north-7b
sub_zone: topology.istio.io/subzone=nanjing

# Locality format: region/zone/sub_zone

---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: advertisement-distribution
spec:
  host: advertisement.weather.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:
        - from: cn-north-7/cn-north-7b/* # all traffic originating from this locality
          to: # when traffic from the locality above flows to the localities below, split it according to these weights
            "cn-north-7/cn-north-7b/nanjing": 10
            "cn-north-7/cn-north-7c/hangzhou": 80
            "cn-north-7/cn-north-7c/ningbo": 10

5.2.1.3 Failover load balancing

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: recommendation-failover
spec:
  host: recommendation.weather.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 1 
    loadBalancer:
      simple: ROUND_ROBIN # load-balancing policy used after failover
      localityLbSetting: # enable locality failover
        enabled: true
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 2m

5.2.2 Session affinity

# Forward requests carrying the same value of the given cookie to the same backend instance
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: advertisement-dr
  namespace: weather
spec:
  host: advertisement
  subsets:
  - labels:
      version: v1
    name: v1
  trafficPolicy:
     loadBalancer:
       consistentHash:
         httpCookie:
           name: user
           ttl: 60s

5.2.3 Fault injection

5.2.3.1 Delay injection

# Every request is delayed by 3 seconds
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: advertisement-route
spec:
  hosts:
  - advertisement
  http:
  - fault:
      delay:
        fixedDelay: 3s # delay duration
        percentage:
          value: 100
    route: # routing rule
    - destination:
        host: advertisement
        subset: v1

5.2.3.2 Abort injection

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: advertisement-route
spec:
  hosts:
  - advertisement
  http:
  - fault:
      abort:
        httpStatus: 521 # HTTP status code to return
        percentage:
          value: 100
    route:
    - destination:
        host: advertisement
        subset: v1

5.2.4 Timeouts

# Define a timeout for calls to the service; requests that exceed it return an error
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: forecast-route
spec:
  hosts:
  - forecast
  http:
  - route:
    - destination:
        host: forecast
        subset: v2
    timeout: 1s # calls on this route that exceed the configured time return a failure

5.2.5 Retries

# Retry the call when the service returns a matching status code
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: forecast-route
spec:
  hosts:
  - forecast
  http:
  - route:
    - destination:
        host: forecast
        subset: v2
    retries:
      attempts: 5 # number of retry attempts
      perTryTimeout: 1s # timeout for each individual attempt
      retryOn: "5xx" # conditions/status codes that trigger a retry

5.2.6 Redirects

# Redirect requests that match the given path
# advertisement.weather/ad is redirected to advertisement.weather.svc.cluster.local/maintenanced
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: advertisement-route
spec:
  hosts:
  - advertisement
  http:
  - match:
    - uri:
        prefix: /ad # path to match
    redirect:
      uri: /maintenanced # path to redirect to
      authority: advertisement.weather.svc.cluster.local # authority (host) of the redirect

5.2.7 Rewrites

# Rewrite a matched part of the request to a new value
# Requests routed to advertisement whose URL starts with /demo/ are rewritten to /
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: advertisement-route
spec:
  hosts:
  - advertisement
  http:
  - match:
    - uri:
        prefix: /demo/
    rewrite:
      uri: /
    route:
    - destination:
        host: advertisement
        subset: v1

5.2.8 Circuit breaking

5.2.8.1 Connection-pool settings

# Prevents a single failing service call from degrading overall performance
# When forecast already has 3 concurrent connections and more than 5 pending requests, the circuit breaks and further requests fail fast
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: forecast-dr
spec:
  host: forecast
  subsets:
    - labels:
        version: v1
      name: v1
    - labels:
        version: v2
      name: v2
  trafficPolicy:
    connectionPool: # connection pool settings
      tcp:
        maxConnections: 3 # maximum number of concurrent TCP connections
      http:
        http1MaxPendingRequests: 5 # maximum number of pending HTTP requests
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 2 # eject after 2 consecutive 5xx responses
      interval: 10s # scan the backend instances every 10s
      baseEjectionTime: 2m # base ejection time
      maxEjectionPercent: 40 # at most 40% of the failing instances may be ejected from the pool

5.2.8.2 Outlier detection

# Once an instance hits the configured error count it is ejected from the Service backend according to the policy below; if it recovers after a while it is added back, otherwise it keeps being ejected
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: forecast-dr
spec:
  host: forecast
  subsets:
    - labels:
        version: v1
      name: v1
    - labels:
        version: v2
      name: v2
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 2 # eject after 2 consecutive 5xx responses
      interval: 10s # scan the backend instances every 10s
      baseEjectionTime: 2m # base ejection time
      maxEjectionPercent: 40 # at most 40% of the failing instances may be ejected from the pool

5.2.9 Rate limiting

Global and local rate limits can be combined; the stricter (lower) limit is the one that takes effect

5.2.9.1 Global rate limiting

# Limits the total amount of traffic that all instances behind a Service may receive
# Define the rate-limit configuration as a ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: ratelimit-config
data:
  config.yaml: |
    domain: advertisement-ratelimit
    descriptors:
      - key: PATH 
        value: "/ad"
        rate_limit:
          unit: minute # per minute
          requests_per_unit: 3
      - key: PATH
        rate_limit:
          unit: minute
          requests_per_unit: 100 
---
# Deploy the store the rate-limit service uses to track counters (Redis here)
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - name: redis
    port: 6379
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis:alpine
        imagePullPolicy: Always
        name: redis
        ports:
        - name: redis
          containerPort: 6379
      restartPolicy: Always
      serviceAccountName: ""
---
apiVersion: v1
kind: Service
metadata:
  name: ratelimit
  labels:
    app: ratelimit
spec:
  ports:
  - name: http-port
    port: 8080
    targetPort: 8080
    protocol: TCP
  - name: grpc-port
    port: 8081
    targetPort: 8081
    protocol: TCP
  - name: http-debug
    port: 6070
    targetPort: 6070
    protocol: TCP
  selector:
    app: ratelimit
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratelimit
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratelimit
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ratelimit
    spec:
      containers:
      - image: quanheng.com/k8s/envoyproxy/ratelimit:6f5de117 # 2021/01/08
        imagePullPolicy: Always
        name: ratelimit
        command: ["/bin/ratelimit"]
        env:
        - name: LOG_LEVEL
          value: debug
        - name: REDIS_SOCKET_TYPE
          value: tcp
        - name: REDIS_URL
          value: redis:6379
        - name: USE_STATSD
          value: "false"
        - name: RUNTIME_ROOT
          value: /data
        - name: RUNTIME_SUBDIRECTORY
          value: ratelimit
        ports:
        - containerPort: 8080
        - containerPort: 8081
        - containerPort: 6070
        volumeMounts:
        - name: config-volume
          mountPath: /data/ratelimit/config/config.yaml
          subPath: config.yaml
      volumes:
      - name: config-volume
        configMap:
          name: ratelimit-config
---
# Wire the ConfigMap-defined configuration into an EnvoyFilter
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: global-ratelimit
spec:
  workloadSelector:
    labels:
      app: advertisement
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
              subFilter:
                name: "envoy.filters.http.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.ratelimit
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
            domain: advertisement-ratelimit # domain defined in the ConfigMap
            failure_mode_deny: true # deny requests when the rate-limit service is unreachable
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_cluster # name of the cluster added in the next patch
                timeout: 10s # timeout for calls to the rate-limit service
              transport_api_version: V3 # rate-limit API version
    - applyTo: CLUSTER # this patch targets a cluster
      match:
        cluster:
          service: ratelimit.weather.svc.cluster.local
      patch:
        operation: ADD # action
        value:
          name: rate_limit_cluster
          type: STRICT_DNS
          connect_timeout: 10s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          load_assignment:
            cluster_name: rate_limit_cluster
            endpoints:
            - lb_endpoints:
              - endpoint:
                  address:
                     socket_address:
                      address: ratelimit.weather.svc.cluster.local
                      port_value: 8081
---
# Configure the route actions that produce the rate-limit descriptors
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: global-ratelimit-svc
spec:
  workloadSelector:
    labels:
      app: advertisement
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: SIDECAR_INBOUND
        routeConfiguration:
          vhost:
            name: ""
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          rate_limits:
            - actions: 
              - request_headers:
                  header_name: ":path"
                  descriptor_key: "PATH"
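A rough way to check the global limit, assuming the advertisement service is reachable through the ingress gateway at $GATEWAY_URL: the ConfigMap above allows 3 requests per minute to /ad, so the later requests should return HTTP 429.
for i in $(seq 1 5); do curl -s -o /dev/null -w "%{http_code}\n" "http://$GATEWAY_URL/ad"; done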

5.2.9.2 Local rate limiting

# Limits how much traffic each individual instance may receive
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: local-ratelimit-svc
spec:
  workloadSelector:
    labels:
      app: advertisement
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.local_ratelimit
          typed_config:
            "@type": type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
            value:
              stat_prefix: http_local_rate_limiter
              token_bucket:
                max_tokens: 3
                tokens_per_fill: 3
                fill_interval: 60s
              filter_enabled:
                runtime_key: local_rate_limit_enabled
                default_value:
                  numerator: 100
                  denominator: HUNDRED
              filter_enforced:
                runtime_key: local_rate_limit_enforced
                default_value:
                  numerator: 100
                  denominator: HUNDRED
              response_headers_to_add:
                - append: false
                  header:
                    key: x-local-rate-limit
                    value: 'true'

5.2.10 Service isolation

# Similar to a NetworkPolicy: restricts which services a workload's sidecar is allowed to reach
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: sidecar-frontend
spec:
  workloadSelector:
    labels:
      app: frontend
  egress:
  - hosts:
    - "weather/advertisement.weather.svc.cluster.local"
    - "istio-system/*"

5.2.11 Traffic mirroring

# Sends a copy of live traffic to another destination; responses to the mirrored requests are discarded (fire and forget), so mirroring does not affect the primary request path
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: forecast-route
spec:
  hosts:
  - forecast
  http:
  - route:
      - destination:
          host: forecast
          subset: v1
        weight: 100 # send 100% of the live traffic to v1
    mirror: # destination that receives the shadow copy of the traffic
        host: forecast
        subset: v2
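One way to confirm the mirroring, assuming access logging is enabled (4.3.1) and a forecast-v2 Deployment exists in the weather namespace: mirrored requests show up in the v2 sidecar's access log (Envoy marks them with a "-shadow" suffix on the authority).
kubectl logs deploy/forecast-v2 -n weather -c istio-proxy --tail=20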

5.3 Service governance

5.3.1 Publishing an HTTPS service externally

# Deploy an HTTPS service
# Create a Secret from the certificate used by the HTTPS service
# Configure the Istio Gateway
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  generation: 1
  name: argocd-gateway
  namespace: argocd
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - '*'
      port:
        name: http-argocd
        number: 15036
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: argocd-secret
# Configure the VirtualService (a sketch follows below)
# Configure the DestinationRule
# Access the service
https://ip:port
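A minimal VirtualService sketch for the gateway above; the backend Service name argocd-server and port 80 are assumptions (Argo CD would typically need TLS terminated at the gateway, or a DestinationRule that originates TLS towards port 443):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: argocd-vs
  namespace: argocd
spec:
  hosts:
  - "*"
  gateways:
  - argocd/argocd-gateway
  http:
  - route:
    - destination:
        host: argocd-server
        port:
          number: 80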

5.3.2 Exposing an HTTP service externally


5.3.2.1 Configure the Gateway

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
  namespace: istio-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8899
      name: http-bookinfo
      protocol: HTTP
    hosts:
    - "bookinfo.com"

5.3.2.2 Configure the VirtualService

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
  namespace: istio-test
spec:
  hosts:
  - "bookinfo.com"
  gateways:
  - istio-ingress/bookinfo-gateway
  http:
  - match:
    - port: 8899
    route:
    - destination:
        host: productpage
        port:
          number: 9080

5.3.2.3 Configure the ingress gateway Service

# Add the 8899 port to the istio-ingressgateway Service
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: istio-ingress
    meta.helm.sh/release-namespace: istio-ingress
  creationTimestamp: "2024-02-18T07:27:49Z"
  labels:
    app: istio-ingressgateway
    app.kubernetes.io/managed-by: Helm
    install.operator.istio.io/owning-resource: unknown
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    release: istio-ingress
  name: istio-ingressgateway
  namespace: istio-ingress
  resourceVersion: "32406549"
  uid: a0d51225-836c-4ce0-b1e0-5167251b24bb
spec:
  clusterIP: 10.173.49.109
  clusterIPs:
  - 10.173.49.109
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 172.31.3.77
  ports:
  - name: status-port
    nodePort: 14221
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 27801
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: bookinfo
    nodePort: 15086
    port: 8899
    protocol: TCP
    targetPort: 8899
  - name: https
    nodePort: 29533
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

5.3.2.4 Configure hosts resolution

172.31.3.19 bookinfo.com


6. Traffic flow

istio-ingressgateway --> Gateway --> VirtualService --> matching route rules (DestinationRule) --> traffic is forwarded according to those rules
# Gateway
Defines which traffic from outside the mesh is allowed in; it is backed by a VirtualService
# VirtualService
Defines the traffic path from the gateway to a service; its backend must be a real service, such as a Kubernetes Service or a service registered into the mesh
# DestinationRule
Defines the traffic rules from the service to the backend application, chiefly how the traffic is distributed; its backend is the actual set of instances

7. Concepts

7.1 VirtualService (vs)

# Defines how a service is accessed
A virtual service lets you configure how requests are routed to a service inside the mesh

7.2 DestinationRule (dr)

# Defines how traffic is handled once it has been routed; it carries the access policies applied to traffic that reaches the backend service:
Load-balancing settings
TLS settings
Circuit-breaker settings

7.3 Gateway

Selects which port admits traffic into the mesh; a VirtualService must reference (bind to) the gateway for its routes to take effect

7.4 ServiceEntry

Adds an entry that brings a service outside the mesh under the mesh's management

7.5 Sidecar

Similar to a NetworkPolicy: used to restrict where a workload's traffic may go

9. Security

1. Traffic encryption
2. Access control: mutual TLS and fine-grained access policies
3. Auditing

9.1 Architecture

# A certificate authority (CA)
# An API server that distributes authentication policies, authorization policies, and secure-naming information to the proxies
# Sidecar proxies that act as policy enforcement points
# Envoy proxy extensions for auditing and telemetry
In short, the control plane issues certificates and the Envoy proxies act as enforcement points that talk to each other, giving encrypted traffic without touching the application


9.2 Concepts

9.2.1 Secure naming

Server identity: encoded in the certificate
Service name
Secure naming: the mapping from server identities to service names
Example:
	a mapping from server identity A to service name B can be read as "A is authorized to run service B", much like a username/password check: A passes B's verification
# Note: secure naming cannot protect non-L7 traffic, because such traffic is routed by IP address and that happens before it reaches the Envoy proxy

9.3 Identity and certificate management in detail

istio-agent refers to the pilot-agent process in the sidecar container
# Detailed flow
istiod offers a gRPC service that accepts certificate signing requests (CSRs).
On startup, istio-agent creates a private key and a CSR, then sends the CSR with its credentials to istiod for signing.
The istiod CA validates the credentials carried in the CSR and, on success, signs the CSR to produce the certificate.
When a workload starts, Envoy requests the certificate and key from the istio-agent in the same container via the Secret Discovery Service (SDS) API.
istio-agent sends the certificate and key received from istiod to Envoy via the Envoy SDS API.
istio-agent monitors the expiration of the workload certificate; the process above repeats periodically for certificate and key rotation.
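The certificates that istio-agent pushes over SDS can be inspected from any injected workload; the productpage pod from the Bookinfo example is assumed here:
POD=$(kubectl get pod -n istio-test -l app=productpage -o jsonpath='{.items[0].metadata.name}')
istioctl proxy-config secret "$POD" -n istio-test   # shows the workload certificate and the ROOTCA entry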


9.4 Authentication

Authentication policies can be scoped from narrow to wide (workload, namespace, mesh), similar to network policies

9.4.1 Peer authentication

Service-to-service authentication that verifies the client making the connection: mutual TLS (minimum version TLS 1.2) with encrypted communication
# How it works
Istio re-routes outbound traffic from a client to the client's local sidecar Envoy.
The client-side Envoy starts a mutual TLS handshake with the server-side Envoy. During the handshake the client-side Envoy also performs a secure-naming check to verify that the service account presented in the server certificate is authorized to run the target service.
The client-side and server-side Envoys establish a mutual TLS connection, and Istio forwards the traffic from the client-side Envoy to the server-side Envoy.
The server-side Envoy authorizes the request; if it is authorized, it forwards the traffic to the backend service over a local TCP connection.

9.4.1.1 Deploying the test services

kubectl create ns foo 
kubectl create ns bar 
kubectl create ns legacy
kubectl label ns foo istio-injection=enabled
kubectl label ns bar istio-injection=enabled
kubectl label ns legacy istio-injection=enabled
kubectl apply -f sleep.yaml -n foo
kubectl apply -f sleep.yaml -n legacy
kubectl apply -f sleep.yaml -n bar
kubectl apply -f httpbin.yaml -n foo
kubectl apply -f httpbin.yaml -n legacy
kubectl apply -f httpbin.yaml -n bar
# sleep.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep
---
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
    service: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      terminationGracePeriodSeconds: 0
      serviceAccountName: sleep
      containers:
      - name: sleep
        image: quanheng.com/pub/curl:v1
        command: ["/bin/sleep", "infinity"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/sleep/tls
          name: secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: sleep-secret
          optional: true

9.4.1.2 Enforcing mutual TLS

# Only sidecar-to-sidecar communication is allowed; clients without a sidecar can no longer connect
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: "default"
  namespace: "istio-system" # istio安装的根命名空间
spec:
  mtls:
    mode: STRICT 
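A quick check using the sleep/httpbin workloads from 9.4.1.1: calls between sidecar-injected pods still succeed, while a client without a sidecar would be rejected under STRICT mode (connection reset instead of an HTTP status).
kubectl exec deploy/sleep -n foo -c sleep -- curl -s -o /dev/null -w "%{http_code}\n" http://httpbin.bar:8000/ip   # expected: 200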

9.4.2 Request authentication

End-user authentication that verifies the credential attached to the request; instead of a username and password, a JWT is used

9.4.2.1 Deploying the test services

kubectl create ns foo
kubectl label ns foo istio-injection=enabled
kubectl apply -f httpbin.yaml -n foo
kubectl apply -f httpbin-gateway.yaml
# httpbin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: quanheng.com/pub/httpbin:v1
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
# httpbin-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: istio-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: foo
spec:
  hosts:
  - "*"
  gateways:
  - istio-ingress/httpbin-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000

9.4.2.2 Configuring JWT authentication at the ingress gateway

apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: ingress-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "[email protected]"
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.20/security/tools/jwt/samples/jwks.json"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: foo
spec:
  hosts:
  - "*"
  gateways:
  - istio-ingress/httpbin-gateway
  http:
  - match:
    - uri:
        prefix: /headers
      headers:
        "@request.auth.claims.groups":
          exact: group1
    route:
    - destination:
        port:
          number: 8000
        host: httpbin
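A hedged end-to-end test, assuming the sample token with a groups claim published next to the jwks.json above (groups-scope.jwt) and that $INGRESS_HOST/$INGRESS_PORT point at the ingress gateway:
TOKEN=$(curl -s https://raw.githubusercontent.com/istio/istio/release-1.20/security/tools/jwt/samples/groups-scope.jwt)
curl -s -o /dev/null -w "%{http_code}\n" "http://$INGRESS_HOST:$INGRESS_PORT/headers" -H "Authorization: Bearer $TOKEN"   # expected: 200
curl -s -o /dev/null -w "%{http_code}\n" "http://$INGRESS_HOST:$INGRESS_PORT/headers"   # without a token the claim-based route does not match and the request is not routed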

9.5 Authorization

Provides mesh-, namespace-, and workload-level access control for workloads in the mesh
Covers workload-to-workload and end-user-to-workload authorization.
A simple API: a single AuthorizationPolicy CRD that is easy to use and maintain.
Flexible semantics: operators can define custom conditions on Istio attributes and use the DENY and ALLOW actions.
High performance: Istio authorization is enforced natively in Envoy.
High compatibility: supports HTTP, HTTPS and HTTP/2 natively, as well as any plain TCP protocol.
An authorization policy consists of a selector, an action, and a list of rules:
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
 name: httpbin
 namespace: foo
spec:
 selector:
   matchLabels:
     app: httpbin
     version: v1
 action: ALLOW
 rules:
 - from:
   - source:
       principals: ["cluster.local/ns/default/sa/sleep"]
   - source:
       namespaces: ["dev"]
   to:
   - operation:
       methods: ["GET"]
   when:
   - key: request.auth.claims[iss]
     values: ["https://accounts.google.com"]

9.5.1 L7 access control

# Allow access using the GET method only
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: "productpage-viewer"
  namespace: default
spec:
  selector:
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
# Authorization based on the caller's service account
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: "details-viewer"
  namespace: default
spec:
  selector:
    matchLabels:
      app: details
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/bookinfo-productpage"]
    to:
    - operation:
        methods: ["GET"]

9.5.2 L4 access control

# Allow access to the listed ports
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: tcp-policy
  namespace: foo
spec:
  selector:
    matchLabels:
      app: tcp-echo
  action: ALLOW
  rules:
  - to:
    - operation:
        ports: ["9000", "9001"]
# Deny access
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: tcp-policy
  namespace: foo
spec:
  selector:
    matchLabels:
      app: tcp-echo
  action: DENY
  rules:
  - to:
    - operation:
        methods: ["GET"]

9.5.3 Ingress gateway access control

# Configure according to the external load balancer in front of the gateway; for details see
https://istio.io/latest/zh/docs/tasks/security/authorization/authz-ingress/#ip-based-allow-list-and-deny-list
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["1.2.3.4", "5.6.7.0/24"]

10. Service registration

Registers applications outside the mesh (typically virtual-machine workloads) into the mesh

[]: https://istio.io/latest/zh/docs/ops/deployment/vm-architecture/ "Bringing virtual-machine workloads into the mesh"

10.1 Concepts

A WorkloadGroup is a logical group of VM workloads that share common properties, similar to a Deployment in Kubernetes.
A WorkloadEntry represents a single instance of a VM workload, similar to a Pod in Kubernetes.

10.2 Procedure

10.2.1 Provision the virtual machine

Pick a Debian-based or CentOS 8 image as required

10.2.2 Plan the required variables

VM_APP="<the name of the app that will run on this VM>"
VM_NAMESPACE="<the namespace your service lives in>"
WORK_DIR="<working directory for the certificates>"
SERVICE_ACCOUNT="<the Kubernetes service account for this VM>"
CLUSTER_NETWORK=""
VM_NETWORK=""
CLUSTER="Kubernetes"

10.2.3 Create a directory for the generated files

mkdir -p vm

10.2.4 Configure the IstioOperator (iop)

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: 10.182.0.0/16 # Pod network

10.2.5 Deploy the east-west gateway

# samples/multicluster/gen-eastwest-gateway.sh --single-cluster > iop_for_vm.yaml
apiVersion: install.istio.io/v1alpha1                                                                                             
kind: IstioOperator
metadata:
  name: eastwest
spec:
  revision: ""
  profile: empty
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
        enabled: true
        k8s:
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017
  values:
    gateways:
      istio-ingressgateway:
        injectionTemplate: gateway
# gateway.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istiod-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        name: tls-istiod
        number: 15012
        protocol: tls
      tls:
        mode: PASSTHROUGH        
      hosts:
        - "*"
    - port:
        name: tls-istiodwebhook
        number: 15017
        protocol: tls
      tls:
        mode: PASSTHROUGH          
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istiod-vs
spec:
  hosts:
  - "*"
  gateways:
  - istiod-gateway
  tls:
  - match:
    - port: 15012
      sniHosts:
      - "*"
    route:
    - destination:
        host: istiod.istio-system.svc.cluster.local
        port:
          number: 15012
  - match:
    - port: 15017
      sniHosts:
      - "*"
    route:
    - destination:
        host: istiod.istio-system.svc.cluster.local
        port:
          number: 443

10.2.6 Create a ServiceAccount

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx
  namespace: vm
---
apiVersion: v1
kind: Secret
metadata:
  name: nginx-sa-secret
  namespace: vm
  annotations:
    kubernetes.io/service-account.name: nginx
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vm-clusterrole-binding
subjects:
- kind: ServiceAccount
  name: nginx
  namespace: vm
roleRef:
  kind: ClusterRole
  name: istiod-istio-system
  apiGroup: rbac.authorization.k8s.io

10.2.7 Generate the files for the virtual machine

# Create the WorkloadGroup; it acts as the template for WorkloadEntry instances, which are created automatically once the VMs register
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  name: nginx
  namespace: vm
spec:
  metadata:
    labels:
      app: nginx
  template:
    serviceAccount: nginx
  probe:
    httpGet:
      port: 80
istioctl x workload entry configure -f workloadgroup.yaml -o "${WORK_DIR}" --clusterID "${CLUSTER}" --
istioctl x workload entry configure -f workload.yaml -o /home/gu/k8s/istio/vm --clusterID cluster1 --ingressIP=10.173.31.65
cluster.env: metadata that identifies the namespace, service account, network CIDR and, optionally, the inbound ports.
istio-token: the Kubernetes token used to obtain certificates from the CA.
mesh.yaml: ProxyConfig for setting discoveryAddress, health checks, and some authentication options.
root-cert.pem: the root certificate used for authentication.
hosts: an addendum to /etc/hosts that the proxy uses to reach istiod for xDS.
After generating the files, upload them to the virtual machine's "${HOME}" directory

10.2.8 Configure the virtual machine

# Install the root certificate
sudo mkdir -p /etc/certs \
sudo cp root-cert.pem /etc/certs/root-cert.pem \
# Install the token
sudo mkdir -p /var/run/secrets/tokens \
sudo cp istio-token /var/run/secrets/tokens/istio-token \
# Install the istio-sidecar package
debian
curl -LO https://storage.googleapis.com/istio-release/releases/1.18.2/deb/istio-sidecar.deb
sudo dpkg -i istio-sidecar.deb
centos
curl -LO https://storage.googleapis.com/istio-release/releases/1.18.2/rpm/istio-sidecar.rpm
sudo rpm -i istio-sidecar.rpm
# Install cluster.env into /var/lib/istio/envoy/
sudo cp cluster.env /var/lib/istio/envoy/cluster.env \
# Install the mesh config into /etc/istio/config/mesh:
sudo cp mesh.yaml /etc/istio/config/mesh \
# Add the istiod host to /etc/hosts:
sudo sh -c 'cat $(eval echo ~$SUDO_USER)/hosts >> /etc/hosts'
# Transfer ownership of /etc/certs/ and /var/lib/istio/envoy/ to the Istio proxy:
sudo mkdir -p /etc/istio/proxy \
sudo chown -R istio-proxy /var/lib/istio /etc/certs /etc/istio/proxy /etc/istio/config /var/run/secrets /etc/certs/root-cert.pem
sudo mkdir -p /etc/certs && sudo cp root-cert.pem /etc/certs/root-cert.pem && sudo  mkdir -p /var/run/secrets/tokens && sudo cp istio-token /var/run/secrets/tokens/istio-token && sudo cp cluster.env /var/lib/istio/envoy/cluster.env && sudo cp mesh.yaml /etc/istio/config/mesh && sudo mkdir -p /etc/istio/proxy && sudo chown -R istio-proxy /var/lib/istio /etc/certs /etc/istio/proxy /etc/istio/config /var/run/secrets /etc/certs/root-cert.pem

10.2.9 Start Istio on the virtual machine

systemctl start istio
Check the logs in /var/log/istio/istio.log

10.2.10 Uninstalling

# Stop the service on the virtual machine
sudo systemctl stop istio
# Delete the namespace

10.3 ServiceEntry

# Wraps an external service so it gets the same service-discovery treatment as an in-cluster Service (a sketch follows below)
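A minimal ServiceEntry sketch (api.example.com is a placeholder for a real external host):
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: tls
    protocol: TLS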

10.4 WorkloadGroup

# Enables automatic registration of VM workloads; it can be thought of as a template for WorkloadEntry objects, similar to a Deployment

10.5 WorkloadEntry

# Registers a single service instance (a sketch follows below)
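A minimal WorkloadEntry sketch (the address and labels are placeholders); entries like this are what the WorkloadGroup in 10.2.7 registers automatically:
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: nginx-vm
  namespace: vm
spec:
  address: 10.0.0.12
  labels:
    app: nginx
  serviceAccount: nginx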
