
[K8s] Topic 7 (2): Kubernetes Service Discovery with Ingress


The following content is reorganized from my personal notes; corrections are welcome if you spot any mistakes! If it helps you, please like, follow, and share!



Contents

I. Introduction

II. How It Works

III. Resource Manifests (Examples)

1. Ingress Controller

2. Ingress Object

IV. Common Commands


I. Introduction

Ingress is a service discovery mechanism provided by Kubernetes. Its main role is to give traffic from outside the cluster an entry point to services running inside it: Ingress rules manage HTTP routing and reverse-proxy external requests to the Endpoints (i.e., the Pods) behind the different Services in the cluster.

Ingress has the following characteristics:

  • Ingress provides layer-7 load balancing; its rules cover HTTP and HTTPS traffic only
  • Ingress rules together with an Ingress Controller form a complete Ingress load balancer
  • The Ingress Controller reverse-proxies external requests directly to the Endpoints (Pod IPs), so for this traffic the ClusterIP forwarding performed by kube-proxy is bypassed (kube-proxy is still required for regular in-cluster Service access)
  • An Ingress object and the Service objects it proxies must be in the same namespace
  • Ingress routes requests to different services by path; the catch-all "/" path should be listed last so it does not intercept the other paths (see the sketch after this list)
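
For the last point, a minimal sketch (the backend Services web-api and web-root are hypothetical, and the manifest is illustrative only): keep the more specific prefixes first and the catch-all "/" as the final path.

# path-order-demo.yaml (illustrative only)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-order-demo
spec:
  ingressClassName: lb-develop
  rules:
  - http:
      paths:
      - path: /api                 # specific prefix listed first
        pathType: Prefix
        backend:
          service:
            name: web-api
            port:
              number: 8080
      - path: /                    # catch-all listed last, so it does not shadow /api
        pathType: Prefix
        backend:
          service:
            name: web-root
            port:
              number: 80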


II. How It Works
  • Define Ingress rules: the user creates an Ingress resource in the Kubernetes cluster that describes how external requests should be routed to in-cluster services
  • Watch for changes: the Ingress Controller watches Ingress resources; when a new Ingress is created or an existing one is updated, the controller reads its rules
  • Configure the load balancer or reverse proxy: based on those rules, the Ingress Controller configures its built-in load balancer or reverse proxy (e.g., Nginx, HAProxy) with the corresponding routing rules
  • Forward requests: the Ingress Controller forwards each incoming request to the correct backend service according to the configured rules
  • Return the response: the backend service handles the request and returns a response, which the Ingress Controller relays back to the client (a short verification sketch follows this list)
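
A quick way to observe this loop end to end (a sketch based on the manifests in the next section; the Ingress is named ingress in kube-system and the controller is the nginx-based lb-develop-controller, so adjust the names to your environment):

kubectl apply -f demo-ingress.yaml                                            # define the Ingress rules
kubectl -n kube-system get pods -l app.kubernetes.io/component=controller    # find the controller pod watching them
kubectl -n kube-system describe ingress ingress                              # events show the controller syncing the rules
kubectl -n kube-system exec <controller-pod> -- cat /etc/nginx/nginx.conf | grep "location "   # rules rendered into the nginx configuration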


III. Resource Manifests (Examples)
1. Ingress Controller
# nginx-ingress-controller.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: lb-develop-controller
  namespace: kube-system
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: lb-develop
    app.kubernetes.io/instance: lb-develop
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: lb-develop
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
data:
  allow-snippet-annotations: "true"
  client-body-buffer-size: "10m"
  client-body-timeout: "300"
  client-header-buffer-size: "64k"
  client-header-timeout: "300"
  compute-full-forwarded-for: "true"
  enable-access-log-for-default-backend: "true"
  log-format-escape-json: "true"
  log-format-upstream: "{\"@timestamp\": \"$time_iso8601\", \"nginx.name\": \"lb-develop\", \"remote_addr\": \"$remote_addr\", \"x_forwarded_for\": \"$http_x_forwarded_for\", \"x_forwarded_proto\": \"$pass_access_scheme\", \"node-forwarded-proto\": \"$http_node_forwarded_proto\", \"request_id\": \"$req_id\", \"remote_user\": \"$remote_user\", \"bytes_sent\": $bytes_sent, \"status\": $status, \"content_length\": \"$content_length\", \"scheme\":\"$scheme\", \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\", \"request_uri\": \"$request_uri\", \"request_body\": \"$request_body\", \"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time, \"method\": \"$request_method\", \"http_referer\": \"$http_referer\", \"http_client_source\": \"$http_client_source\", \"http_client_version\": \"$http_client_version\", \"http_user_agent\": \"$http_user_agent\", \"http_token\": \"$http_authorization\", \"http_authorization\": \"$http_authorization\", \"http_uid\": \"$http_http_uid\", \"http_device_id\": \"$http_device_id\", \"http_x_auth_user\": \"$http_x_auth_user\", \"http_x_auth_scope\": \"$http_x_auth_scope\", \"http_x_token_type\": \"$http_x_token_type\", \"http_x_auth_client\": \"$http_x_auth_client\", \"http_origin\": \"$http_origin\", \"cookie_token\": \"$cookie_access_token\", \"cookie_uid\": \"$cookie_uid\", \"k8s_ingress_name\": \"$ingress_name\", \"k8s_namespace\": \"$namespace\", \"k8s_service_name\": \"$service_name\", \"upstream_name\": \"$proxy_upstream_name\", \"upstream_addr\": \"$upstream_addr\", \"upstream_status\": \"$upstream_status\", \"upstream_response_time\": \"$upstream_response_time\"}"
  proxy-add-original-uri-header: "true"
  proxy-body-size: "200m"
  proxy-buffer-size: "512k"
  proxy-buffering: "on"
  proxy-buffers-number: "8"
  proxy-connect-timeout: "300"
  proxy-read-timeout: "300"
  proxy-send-timeout: "300"
  upstream-keepalive-connections: "100"
  upstream-keepalive-requests: "100"
  upstream-keepalive-timeout: "30"
  use-forwarded-headers: "true"

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: lb-develop
  namespace: kube-system
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: lb-develop
    app.kubernetes.io/instance: lb-develop
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: lb-develop
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
automountServiceAccountToken: true

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lb-develop
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: lb-develop
    app.kubernetes.io/instance: lb-develop
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: lb-develop
    app.kubernetes.io/managed-by: Helm
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: lb-develop
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: lb-develop
    app.kubernetes.io/instance: lb-develop
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: lb-develop
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: lb-develop
subjects:
  - kind: ServiceAccount
    name: lb-develop
    namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: lb-develop
  namespace: kube-system
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: lb-develop
    app.kubernetes.io/instance: lb-develop
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: lb-develop
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  # TODO(Jintao Zhang)
  # Once we release a new version of the controller,
  # we will be able to remove the configmap related permissions
  # We have used the Lease API for selection
  # ref: https://github.com/kubernetes/ingress-nginx/pull/8921
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: lb-develop
  namespace: kube-system
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: lb-develop
    app.kubernetes.io/instance: lb-develop
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: lb-develop
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: lb-develop
subjects:
  - kind: ServiceAccount
    name: lb-develop
    namespace: kube-system

---
apiVersion: v1
kind: Service
metadata:
  name: lb-develop-controller
  namespace: kube-system
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: lb-develop
    app.kubernetes.io/instance: lb-develop
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: lb-develop
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: lb-develop
    app.kubernetes.io/instance: lb-develop
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: lb-develop-controller
  namespace: kube-system
  annotations: 
    reloader.stakater.com/auto: "true"
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: lb-develop
    app.kubernetes.io/instance: lb-develop
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: lb-develop
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: lb-develop
      app.kubernetes.io/instance: lb-develop
      app.kubernetes.io/component: controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: lb-develop
        app.kubernetes.io/instance: lb-develop
        app.kubernetes.io/component: controller
        app: lb-develop
        release: lb-develop
    spec:
      containers:
        - name: controller
          image: nginx-ingress-controller:1.2.1
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          lifecycle: 
            preStop:
              exec:
                command:
                - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/lb-develop-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --ingress-class=lb-develop
            - --configmap=$(POD_NAMESPACE)/lb-develop-controller
            - --watch-ingress-without-class=true
          securityContext: 
            capabilities:
              drop:
              - ALL
              add:
              - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
            - name: TZ
              value: Asia/Shanghai
          livenessProbe: 
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe: 
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
      affinity: 
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: loadbalancer
                operator: In
                values:
                - lb-develop
      hostNetwork: true                         # bind directly to ports 80 and 443 on the host
      dnsPolicy: ClusterFirstWithHostNet        # DNS policy to use when hostNetwork is enabled
      serviceAccountName: lb-develop
      terminationGracePeriodSeconds: 300

---
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: lb-develop
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
  labels:
    helm.sh/chart: ingress-nginx-4.2.5
    app.kubernetes.io/name: lb-develop
    app.kubernetes.io/instance: lb-develop
    app.kubernetes.io/version: "1.3.1"
    app.kubernetes.io/part-of: lb-develop
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
spec:
  controller: k8s.io/ingress-nginx
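
With the manifest above saved as nginx-ingress-controller.yaml, a minimal rollout and check could look like the following (a sketch; <node-name> is a placeholder, and the loadbalancer=lb-develop label matches the nodeAffinity in the DaemonSet above):

kubectl label node <node-name> loadbalancer=lb-develop        # allow the DaemonSet to schedule onto this node
kubectl apply -f nginx-ingress-controller.yaml
kubectl -n kube-system get daemonset lb-develop-controller    # controller pods should become Ready on the labeled nodes
kubectl get ingressclass lb-develop                           # the IngressClass is registered and marked as default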

2. Ingress Object
  • networking.k8s.io/v1 type
# demo-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: "lb-develop"
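    # legacy controller-selection annotation; spec.ingressClassName below is the current, preferred mechanism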
spec:
  ingressClassName: lb-develop
  rules:
  - host: xx.xx.com                    # rule with a host (domain name)
    http:
      paths:
      - path: /prom
        pathType: Prefix               # pathType must be specified
        backend:
          service: 
            name: prometheus
            port: 
              number: 9090             # or use name: xxxx
      - path: /graf
        pathType: Prefix               # pathType must be specified
        backend:
          service: 
            name: monitoring-grafana
            port: 
              number: 8080

  - http:                              # rule without a host
      paths:
      - path: /nginx
        pathType: Prefix               # pathType must be specified
        backend:
          service: 
            name: nginx
            port: 
              number: 8080             # or use name: xxxx

  tls:
  - hosts:
    - xxx.xxx.com
    secretName: demo-secret
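
Because the controller pods use hostNetwork, the rules above can be exercised with plain curl against any node that runs the controller (a sketch; <node-ip> is a placeholder, and the hosts and paths come from the manifest above):

curl -H "Host: xx.xx.com" http://<node-ip>/prom     # routed to the prometheus Service (port 9090)
curl -H "Host: xx.xx.com" http://<node-ip>/graf     # routed to the monitoring-grafana Service (port 8080)
curl http://<node-ip>/nginx                         # host-less rule: matched regardless of the Host header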

From: https://blog.csdn.net/2401_82795112/article/details/139931577
