20231112 Deploying MetalLB on K8S and Testing Applications


Environment

  • A 3-node K8S cluster
  • 1 control-plane + 2 workers
[root@rocky9-1 dashboard]# kubectl get node -o wide 
NAME       STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
rocky9-1   Ready    control-plane   2d21h   v1.28.2   192.168.100.21   <none>        Rocky Linux 9.2 (Blue Onyx)   5.14.0-284.30.1.el9_2.x86_64   containerd://1.6.24
rocky9-2   Ready    <none>          2d21h   v1.28.2   192.168.100.22   <none>        Rocky Linux 9.2 (Blue Onyx)   5.14.0-284.30.1.el9_2.x86_64   containerd://1.6.24
rocky9-3   Ready    <none>          2d21h   v1.28.2   192.168.100.23   <none>        Rocky Linux 9.2 (Blue Onyx)   5.14.0-284.30.1.el9_2.x86_64   containerd://1.6.24


[root@rocky9-1 dashboard]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.100.21:6443
CoreDNS is running at https://192.168.100.21:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Network Design

  • Node network: 192.168.100.21-29
  • VIP (LoadBalancer) network: 192.168.100.90-99

Install MetalLB

Check the latest version

[root@rocky9-1 dashboard]# MetalLB_RTAG=$(curl -s https://api.github.com/repos/metallb/metallb/releases/latest|grep tag_name|cut -d '"' -f 4|sed 's/v//')
[root@rocky9-1 dashboard]# 
[root@rocky9-1 dashboard]# echo $MetalLB_RTAG
0.13.12

Create a working directory

mkdir ~/metallb
cd ~/metallb

Download the latest version

wget https://raw.githubusercontent.com/metallb/metallb/v$MetalLB_RTAG/config/manifests/metallb-native.yaml
The manifest deploys the following components:
  • The metallb-system/controller deployment – the cluster-wide controller that handles IP address assignments.
  • The metallb-system/speaker daemonset – the component that speaks the protocol(s) of your choice to make the services reachable; it runs on every node.
  • Service accounts for the controller and speaker, along with the RBAC permissions the components need to function.
[root@rocky9-1 metallb]# kubectl apply -f metallb-native.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created

$ watch kubectl get all -n metallb-system
 
Every 2.0s: kubectl get all -n metallb-system                                                                                                                                     Thu Jul 20 10:18:38 2023
 
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-595f88d88f-6pk24   1/1     Running   0          47m
pod/speaker-2gthf                 1/1     Running   0          47m
pod/speaker-4nxnf                 1/1     Running   0          47m
pod/speaker-nqt7r                 1/1     Running   0          47m
 
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/webhook-service   ClusterIP   10.96.235.219   <none>        443/TCP   47m
 
NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/speaker   3         3         3       3            3           kubernetes.io/os=linux   47m
 
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           47m
 
NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-595f88d88f   1         1         1       47m
[root@rocky9-1 metallb]# kubectl get pod  -n metallb-system -w
NAME                          READY   STATUS    RESTARTS   AGE
controller-786f9df989-fh679   1/1     Running   0          48s
speaker-24dcb                 1/1     Running   0          48s
speaker-4v5ll                 1/1     Running   0          48s
speaker-cvtlp                 0/1     Running   0          48s
speaker-cvtlp                 1/1     Running   0          56s
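
Optionally, wait until all MetalLB pods are ready before applying the pool configuration. A minimal sketch using kubectl wait (the app=metallb label is set by the official manifest):

kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s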

Using the IPAddressPool approach

Edit the IP pool configuration file

[root@rocky9-1 metallb]# cat ipaddress_pools.yaml 
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: production
  namespace: metallb-system
spec:
  addresses:
  - 192.168.100.90-192.168.100.99
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advert
  namespace: metallb-system

These are the IPs that MetalLB assigns to LoadBalancer services. In this configuration the pool covers the range 192.168.100.90-192.168.100.99.
Addresses can be defined by range or by CIDR, and both IPv4 and IPv6 addresses can be assigned, for example:

  • 192.168.1.30-192.168.1.50
  • 192.168.1.0/24
  • fc00:f853:0ccd:e799::/124

The L2Advertisement resource announces the service IPs once they are assigned. With an empty spec, as above, it advertises every IP address pool created in the cluster. Advertisement can also be limited to specific pools, as in the sketch below.
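A minimal sketch that limits announcement to the production pool only (the name l2-advert-production is illustrative; this is not part of the applied file above):

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advert-production
  namespace: metallb-system
spec:
  ipAddressPools:     # only pools listed here are announced
  - production
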
[root@rocky9-1 metallb]# kubectl apply -f  ipaddress_pools.yaml 
ipaddresspool.metallb.io/production created
l2advertisement.metallb.io/l2-advert created

[root@rocky9-1 metallb]# kubectl get ipaddresspools.metallb.io -n metallb-system
NAME         AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
production   true          false             ["192.168.100.90-192.168.100.99"]

[root@rocky9-1 metallb]# kubectl get l2advertisements.metallb.io -n metallb-system
NAME        IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
l2-advert                                              
[root@rocky9-1 metallb]# kubectl describe ipaddresspools.metallb.io production -n metallb-system
Name:         production
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>
API Version:  metallb.io/v1beta1
Kind:         IPAddressPool
Metadata:
  Creation Timestamp:  2023-11-12T13:43:43Z
  Generation:          1
  Resource Version:    379843
  UID:                 432e795d-612f-45f5-83d1-7c2c424093e1
Spec:
  Addresses:
    192.168.100.90-192.168.100.99
  Auto Assign:       true
  Avoid Buggy I Ps:  false
Events:              <none>
[root@rocky9-1 metallb]# kubectl get pod -A -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
default                mypod                                        1/1     Running   0          175m    10.244.1.2       rocky9-2   <none>           <none>
kube-flannel           kube-flannel-ds-5hhr7                        1/1     Running   0          2d21h   192.168.100.22   rocky9-2   <none>           <none>
kube-flannel           kube-flannel-ds-jsw6l                        1/1     Running   0          2d21h   192.168.100.23   rocky9-3   <none>           <none>
kube-flannel           kube-flannel-ds-qcjnx                        1/1     Running   0          2d21h   192.168.100.21   rocky9-1   <none>           <none>
kube-system            coredns-5dd5756b68-bscgh                     1/1     Running   0          2d22h   10.244.2.3       rocky9-3   <none>           <none>
kube-system            coredns-5dd5756b68-lgl54                     1/1     Running   0          2d22h   10.244.2.2       rocky9-3   <none>           <none>
kube-system            etcd-rocky9-1                                1/1     Running   1          2d22h   192.168.100.21   rocky9-1   <none>           <none>
kube-system            kube-apiserver-rocky9-1                      1/1     Running   1          2d22h   192.168.100.21   rocky9-1   <none>           <none>
kube-system            kube-controller-manager-rocky9-1             1/1     Running   1          2d22h   192.168.100.21   rocky9-1   <none>           <none>
kube-system            kube-proxy-7sc8l                             1/1     Running   0          2d21h   192.168.100.23   rocky9-3   <none>           <none>
kube-system            kube-proxy-jfb45                             1/1     Running   0          2d22h   192.168.100.22   rocky9-2   <none>           <none>
kube-system            kube-proxy-t49dk                             1/1     Running   0          2d22h   192.168.100.21   rocky9-1   <none>           <none>
kube-system            kube-scheduler-rocky9-1                      1/1     Running   1          2d22h   192.168.100.21   rocky9-1   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-gzf89   1/1     Running   0          121m    10.244.2.4       rocky9-3   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-78f87ddfc-nd5tb         1/1     Running   0          121m    10.244.1.4       rocky9-2   <none>           <none>
metallb-system         controller-786f9df989-fh679                  1/1     Running   0          4m34s   10.244.1.5       rocky9-2   <none>           <none>
metallb-system         speaker-24dcb                                1/1     Running   0          4m34s   192.168.100.23   rocky9-3   <none>           <none>
metallb-system         speaker-4v5ll                                1/1     Running   0          4m34s   192.168.100.22   rocky9-2   <none>           <none>
metallb-system         speaker-cvtlp                                1/1     Running   0          4m34s   192.168.100.21   rocky9-1   <none>           <none>

Method #1: Deploy a test app without specifying an IP

apiVersion: v1
kind: Namespace
metadata:
  name: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  namespace: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: httpd
        image: httpd:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: web
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

Deploy the application

[root@rocky9-1 metallb]# kubectl apply -f web-app-demo.yaml
namespace/web created
deployment.apps/web-server created
service/web-server-service created

Check the Service

[root@rocky9-1 metallb]# kubectl get svc -A
NAMESPACE              NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                  AGE
default                kubernetes                  ClusterIP      10.96.0.1        <none>           443/TCP                  2d22h
kube-system            kube-dns                    ClusterIP      10.96.0.10       <none>           53/UDP,53/TCP,9153/TCP   2d22h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP      10.96.229.191    <none>           8000/TCP                 122m
kubernetes-dashboard   kubernetes-dashboard        ClusterIP      10.100.187.236   <none>           443/TCP                  122m
metallb-system         webhook-service             ClusterIP      10.103.240.127   <none>           443/TCP                  5m44s
web                    web-server-service          LoadBalancer   10.107.126.246   192.168.100.90   80:32181/TCP             9s

Verify

➜  ~ curl http://192.168.100.90
<html><body><h1>It works!</h1></body></html>
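
To confirm the allocation and see which node answers ARP for the VIP, inspect the service events (a sketch; the event reasons IPAllocated and nodeAssigned are from MetalLB 0.13.x and may differ in other versions):

kubectl describe svc web-server-service -n web
# Events should include an IPAllocated entry from the metallb controller,
# and a nodeAssigned entry naming the node that announces 192.168.100.90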

Method #2: Deploy a test app with a specified IP

$ vim web-app-demo.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  namespace: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: httpd
        image: httpd:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: web
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
  loadBalancerIP: 192.168.100.95          # <------------- must be an address inside the configured pool (192.168.100.90-99)
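
Note that spec.loadBalancerIP is deprecated in recent Kubernetes releases; MetalLB 0.13+ also accepts the metallb.universe.tf/loadBalancerIPs annotation instead. A minimal sketch of the same request using the annotation:

apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: web
  annotations:
    metallb.universe.tf/loadBalancerIPs: 192.168.100.95   # must be inside the configured pool
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer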

Advanced Usage

Requesting a specific IP pool

  • When creating a service of type LoadBalancer, you can request a specific address pool. This is a feature supported by MetalLB out of the box.
  • A specific pool can be requested for IP address assignment by adding the metallb.universe.tf/address-pool annotation to your service, with the name of the address pool as the annotation value.
apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: web
  annotations:
    metallb.universe.tf/address-pool: production ###<-----
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

Controlling IP address allocation

  • When the number of public IPs is limited, for example, you may not want MetalLB to assign them automatically.
  • This is a reasonable approach for smaller pools of “expensive” IPs (e.g. leased public IPv4 addresses). By default, MetalLB allocates free IP addresses from any configured address pool; setting autoAssign: false reserves a pool for explicit requests.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: expensive
  namespace: metallb-system
spec:
  addresses:
  - 42.175.26.64/30
  autoAssign: false
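
With autoAssign disabled, addresses from this pool are only handed out when a service requests them explicitly, for example via the address-pool annotation. A minimal sketch (the service name and selector are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: paid-service                              # hypothetical name
  namespace: web
  annotations:
    metallb.universe.tf/address-pool: expensive   # request the reserved pool by name
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer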

IP Address Sharing

  • In Kubernetes, services don't share IP addresses by default. If you need to colocate services on a single IP, add the metallb.universe.tf/allow-shared-ip annotation to enable selective IP sharing for your services.

  • The value of this annotation is a “sharing key”. For two services to share an IP address, the following conditions have to be met:

    • Both services share the same sharing key.
    • The services use different ports (e.g. tcp/80 for one and tcp/443 for the other).
    • The two services use the Cluster external traffic policy, or they both point to the exact same set of pods (i.e. the pod selectors are identical).
  • Setting spec.loadBalancerIP on both services additionally makes them share a specific address. The example configuration below shows two services sharing the same IP address:

apiVersion: v1
kind: Service
metadata:
  name: dns-service-tcp
  namespace: demo
  annotations:
    metallb.universe.tf/allow-shared-ip: "key-to-share-192.168.1.36" # same sharing key
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.36
  ports:
    - name: dnstcp
      protocol: TCP
      port: 53
      targetPort: 53
  selector:
    app: dns
---
apiVersion: v1
kind: Service
metadata:
  name: dns-service-udp
  namespace: demo
  annotations:
    metallb.universe.tf/allow-shared-ip: "key-to-share-192.168.1.36" # same sharing key
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.36
  ports:
    - name: dnsudp
      protocol: UDP
      port: 53
      targetPort: 53
  selector:
    app: dns
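
After applying the file, both services should report the same EXTERNAL-IP (192.168.1.36 here), distinguished only by port and protocol:

kubectl get svc -n demo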

Setting Nginx Ingress to use MetalLB
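
With MetalLB in place, ingress-nginx can simply be deployed with its controller Service of type LoadBalancer and it will receive an address from the pool. A minimal sketch; the manifest URL and version below are assumptions, check the ingress-nginx releases for the current one:

# Deploy ingress-nginx (the "cloud" manifest creates a LoadBalancer Service)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml

# MetalLB should assign the controller Service an EXTERNAL-IP from the pool
kubectl get svc -n ingress-nginx ingress-nginx-controller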

References

https://metallb.universe.tf/installation/
https://computingforgeeks.com/deploy-metallb-load-balancer-on-kubernetes/?expand_article=1
https://computingforgeeks.com/best-kubernetes-study-books/?expand_article=1
https://logz.io/blog/best-open-source-load-balancers/

