
A Bit of Basic K8S Every Day -- Ingress and ingress controller in K8S


Ingress and ingress controller

1. Background

# A Service (NodePort or ClusterIP) can proxy and load-balance traffic to the backend pods, but this is a Layer-4 proxy: it only operates on IP and port.

# As covered earlier, every Service created in K8S gets a corresponding DNS name, and inside the cluster the backend can be reached via service-name.namespace.svc.cluster.local.
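
# For example, a minimal sketch of resolving such a name from a temporary pod (it assumes the tomcat Service created in section 4 below):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup tomcat.default.svc.cluster.local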

# Most workloads running in a K8S cluster still need to be reachable from outside the cluster. There are several options:

# Option 1: expose the Service through a NodePort on the node's physical NIC; access is then only by IP + port, i.e. a Layer-4 proxy;

# Option 2: run an nginx server outside the K8S cluster and proxy traffic for a given URL to a Service inside the cluster, which in turn forwards it to the backend pods. This provides Layer-7 proxying for external traffic, but Services in K8S change frequently, so the nginx configuration has to be edited and reloaded by hand again and again, which is tedious to maintain.

# Option 3: use the Ingress and ingress controller resources in K8S. The controller regenerates the nginx configuration automatically whenever the Ingress resources change, giving Layer-7 proxying with much simpler maintenance.

2. Introduction to Ingress and ingress controller

# An Ingress exposes internal services to the outside of the cluster and provides host-based routing, load balancing, and similar features.

# The ingress controller is the load balancer and the entry point for client traffic; commonly used implementations are nginx and traefik. The ingress controller watches Ingress resources for changes and rewrites the nginx configuration automatically.
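
# The configuration that the controller renders can always be inspected from inside one of its pods (a sketch; substitute the controller pod name and namespace from your own cluster):
kubectl exec -n ingress-nginx <controller-pod-name> -- cat /etc/nginx/nginx.conf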


3. Highly available deployment of the ingress controller (nginx)

3.1 Deploy nginx + keepalived for high availability
# As explained above, the ingress controller is the entry point for client traffic, so in any serious environment it should be deployed in a highly available fashion.

# Here nginx + keepalived are deployed on the only-worker nodes to make the ingress controller entry point highly available.
# Label the worker nodes so they are easy to identify.
[root@master-worker-node-1 ingress]# kubectl label nodes only-worker-node-3 kubernetes/ingress-controller=nginx
node/only-worker-node-3 labeled
[root@master-worker-node-1 ingress]# kubectl label nodes only-worker-node-4 kubernetes/ingress-controller=nginx
node/only-worker-node-4 labeled
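
# The labels can be verified with a selector query (output omitted here):
kubectl get nodes -l kubernetes/ingress-controller=nginx --show-labels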

# Install nginx and keepalived to build the highly available entry point for the nginx ingress controller.
[root@only-worker-node-3 ~]# yum install -y nginx keepalived
[root@only-worker-node-4 ~]# yum install -y nginx keepalived

# Edit the nginx configuration file
# Layer-4 load balancing in front of the two nginx-ingress-controller instances; the nginx configuration is identical on both worker nodes.
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream nginx-ingress-controller {
       server 192.168.122.132:80 weight=5 max_fails=3 fail_timeout=30s;   # nginx ingress controller 1 IP:PORT
       server 192.168.122.182:80 weight=5 max_fails=3 fail_timeout=30s;   # nginx ingress controller 2 IP:PORT
    }

    server {
       listen 1080; 
       proxy_pass nginx-ingress-controller;
    }
}
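
# Before enabling the service, the configuration syntax can be checked (nginx -t only validates, it does not reload):
nginx -t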
[root@only-worker-node-3 nginx]# scp nginx.conf node4:/etc/nginx/nginx.conf
nginx.conf                                                                                                          100% 1393   139.3KB/s   00:00    
[root@only-worker-node-3 nginx]# ssh node4 systemctl enable nginx --now
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.

[root@only-worker-node-3 ~]# netstat -tnlupa |  grep nginx 
tcp        0      0 0.0.0.0:80           0.0.0.0:*               LISTEN      827/nginx: master p 
tcp        0      0 0.0.0.0:1080           0.0.0.0:*               LISTEN      827/nginx: master p 

# Configure keepalived
[root@only-worker-node-3 keepalived]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id nginx-ingress-slave    
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_script check_nginx {
   script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER  # VRRP role
    interface ens3   # NIC to bind
    virtual_router_id 100  # must be identical on node 3 and node 4
    priority 100  # priority; must differ between node 3 and node 4
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.122.252  # VIP
    }
    track_script {
        check_nginx  # health-check script
    }
}
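
# The configuration above references /etc/keepalived/check_nginx.sh, which is not shown here; a minimal sketch of such a script (an assumption, adjust to your environment) stops keepalived when the local nginx is down so that the VIP fails over to the peer:
#!/bin/bash
# /etc/keepalived/check_nginx.sh (sketch)
# If no nginx process is running on this node, stop keepalived so the
# other node claims the VIP.
if ! pidof nginx > /dev/null; then
    systemctl stop keepalived
fi
# Make the script executable afterwards: chmod +x /etc/keepalived/check_nginx.sh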

# Test keepalived failover
[root@only-worker-node-3 keepalived]# ip add  |  grep 192.168
    inet 192.168.122.132/24 brd 192.168.122.255 scope global noprefixroute ens3
    inet 192.168.122.252/32 scope global ens3
[root@only-worker-node-3 keepalived]# systemctl stop keepalived.service 
[root@only-worker-node-3 keepalived]# ip add  |  grep 192.168
    inet 192.168.122.132/24 brd 192.168.122.255 scope global noprefixroute ens3
[root@only-worker-node-3 keepalived]# systemctl start  keepalived.service 
[root@only-worker-node-3 keepalived]# ip add  |  grep 192.168
    inet 192.168.122.132/24 brd 192.168.122.255 scope global noprefixroute ens3
    inet 192.168.122.252/32 scope global ens3
    
# The highly available entry point for the nginx ingress controller is now in place; next, deploy the ingress controller itself.

3.2 Deploy the ingress controller
# The ingress controller runs as a Deployment with two replicas and pod anti-affinity configured; a sketch of the relevant part of the spec is shown below.
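# Sketch of the relevant part of the ingress-nginx-controller Deployment spec (an assumption reconstructed from the description above; since the front nginx upstream targets node IP:80, hostNetwork is presumably enabled, and the node label from section 3.1 is used as a nodeSelector):
spec:
  replicas: 2
  template:
    spec:
      hostNetwork: true                          # assumption: pods listen on node ports 80/443 directly
      nodeSelector:
        kubernetes/ingress-controller: nginx     # label applied in section 3.1
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/name: ingress-nginx
            topologyKey: kubernetes.io/hostname  # never schedule two controller pods on one node
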
[root@master-worker-node-1 plugin]# kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-fh44f        0/1     Completed   0          25h
pod/ingress-nginx-admission-patch-r2d7f         0/1     Completed   0          25h
pod/ingress-nginx-controller-64bdc78c96-9z7qk   1/1     Running     0          25h
pod/ingress-nginx-controller-64bdc78c96-wbcsw   1/1     Running     0          25h

NAME                                         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.99.152.0   <none>        80:31729/TCP,443:32480/TCP   25h
service/ingress-nginx-controller-admission   ClusterIP   10.99.69.38   <none>        443/TCP                      25h

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   2/2     2            2           25h

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-64bdc78c96   2         2         2       25h

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           11s        25h
job.batch/ingress-nginx-admission-patch    1/1           10s        25h

4. Set up the tomcat backend service

---
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
  - name: http
    targetPort: 8080
    port: 8080
  - name: ajp
    targetPort: 8009
    port: 8009
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5.34-jre8-alpine 
        imagePullPolicy: IfNotPresent  
        ports:
        - name: http
          containerPort: 8080
        - name: ajp
          containerPort: 8009
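
# Apply the manifests above (the file name tomcat.yaml is only an example):
kubectl apply -f tomcat.yaml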

[root@master-worker-node-1 plugin]# kubectl get service -o wide 
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE     SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP             3d22h   <none>
tomcat       ClusterIP   10.110.26.143   <none>        8080/TCP,8009/TCP   25h     app=tomcat,release=canary
[root@master-worker-node-1 plugin]# kubectl get deployment -o wide 
NAME            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                      SELECTOR
tomcat-deploy   2/2     2            2           25h   tomcat       tomcat:8.5.34-jre8-alpine   app=tomcat,release=canary
[root@master-worker-node-1 plugin]# kubectl get pods -o wide 
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE                 NOMINATED NODE   READINESS GATES
tomcat-deploy-64d6489dd9-gn88s   1/1     Running   0          25h   10.244.54.11   only-worker-node-4   <none>           <none>
tomcat-deploy-64d6489dd9-nqf98   1/1     Running   0          25h   10.244.31.20   only-worker-node-3   <none>           <none>

# First test access from inside the cluster
[root@master-worker-node-1 ingress]# curl -I 10.110.26.143:8080
HTTP/1.1 200 
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Wed, 11 Jan 2023 15:30:34 GMT

5. Use Ingress to proxy the backend HTTP tomcat service

5.1 Create the Ingress rule
[root@master-worker-node-1 ingress]# cat ingress-tomcat.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress-tomcat
spec:
  rules:
    - host: tomcat.test.com # route by host name for testing
      http:
        paths:
        - path: /   # path to match
          pathType: Prefix
          backend:  # the backend service
            service:
              name: tomcat
              port:
                number: 8080
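
# Note: the describe output below shows the deprecated kubernetes.io/ingress.class annotation while "Ingress Class" shows <none>. With networking.k8s.io/v1 the preferred way to bind an Ingress to a controller is spec.ingressClassName; a sketch, assuming the controller registered an IngressClass named nginx:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress-tomcat
spec:
  ingressClassName: nginx   # assumption: an IngressClass named "nginx" exists in the cluster
  rules:
    - host: tomcat.test.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: tomcat
              port:
                number: 8080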
                
# Apply the rule
[root@master-worker-node-1 ingress]# kubectl apply -f ingress-tomcat.yaml 
ingress.networking.k8s.io/test-ingress-tomcat created
[root@master-worker-node-1 ingress]# kubectl get ingress 
NAME            CLASS    HOSTS             ADDRESS                           PORTS   AGE
ingress-myapp   <none>   tomcat.test.com   192.168.122.132,192.168.122.182   80      25h

# Inspect the Ingress rule
[root@master-worker-node-1 ingress]# kubectl describe ingress ingress-myapp 
Name:             ingress-myapp
Labels:           <none>
Namespace:        default
Address:          192.168.122.132,192.168.122.182 # the addresses of the two ingress controllers
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host             Path  Backends
  ----             ----  --------
  tomcat.test.com  
                   /   tomcat:8080 (10.244.31.20:8080,10.244.54.11:8080)
Annotations:       kubernetes.io/ingress.class: nginx  # normally a default backend should also be configured
Events:            <none>

5.2 Test access from outside the cluster

# Add a hosts entry on the client machine
[23:44:28 remote-server root ~] # cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
122.112.148.100 huawei
120.48.27.104 bd1
192.168.122.252 tomcat.test.com

# Test access
[23:44:35 remote-server root ~] # curl -I tomcat.test.com:1080
HTTP/1.1 200 
Date: Wed, 11 Jan 2023 15:34:37 GMT
Content-Type: text/html;charset=UTF-8
Connection: keep-alive

5.3 Traffic flow

# Rough traffic path:
# 1. The client resolves tomcat.test.com to 192.168.122.252 via its hosts file and accesses 192.168.122.252:1080.  client --> 192.168.122.252:1080

# 2. Per the keepalived priority configuration, 192.168.122.252 lives on node only-worker-node-3, where port 1080 is the front nginx; its stream configuration load-balances the traffic to one of the real servers, say 192.168.122.132 (only-worker-node-3).  --> 192.168.122.132:80

# 3. Port 80 on 192.168.122.132 is exposed by a pod of the nginx-ingress-controller Deployment. One of the controller pods handles the traffic; its nginx.conf, generated from the host field of the Ingress rule, forwards requests for tomcat.test.com to port 8080 of the backend tomcat Service. The backend information can be inspected with the ingress-nginx kubectl plugin, as sketched after this list.  --> [service] tomcat:8080

# 4. The Service then load-balances the traffic to one of the backend pods (round robin by default).  --> tomcat-pod
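
# A sketch of inspecting those backends with the kubectl ingress-nginx plugin mentioned in step 3 (installable for example via krew; exact flags may differ between plugin versions):
kubectl ingress-nginx backends -n ingress-nginx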

6. Summary

1. A Service only load-balances on IP + port, i.e. Layer 4. Resources inside K8S can only be exposed outside the cluster via NodePort, and making that entry point highly available requires something like nginx + keepalived; however, maintaining the nginx configuration by hand becomes tedious when K8S Services change frequently.

2. The nginx ingress controller solves this by generating and reloading the nginx configuration automatically, and it also provides Layer-7 load balancing.
