Implementing a LoadBalancer Service in Kubernetes with MetalLB


I. Experiment Overview
1. Objective
Implement LoadBalancer-type Services in Kubernetes using MetalLB.
2. Environment
Three virtual machines are created in VMware Workstation and host the Kubernetes cluster; the VM network uses NAT mode.

master 11.0.1.131
node01 11.0.1.132
node02 11.0.1.133

root@master:/home/user# kubectl get nodes
NAME     STATUS     ROLES           AGE   VERSION
master   Ready      control-plane   37d   v1.26.3
node01   Ready      <none>          37d   v1.26.3
node02   Ready      <none>          37d   v1.26.3

II. Installing MetalLB
Install by following the official documentation:
https://metallb.org/installation/

1. Enable strict ARP for kube-proxy (IPVS mode)

kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
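Strict ARP only matters when kube-proxy runs in IPVS mode, and the running kube-proxy Pods do not reload the ConfigMap on their own. A minimal follow-up sketch, assuming the kubeadm default DaemonSet name kube-proxy:

# Confirm the proxy mode and that strictARP is now true
kubectl get configmap kube-proxy -n kube-system -o yaml | grep -E 'mode|strictARP'

# Restart kube-proxy so the running Pods pick up the updated ConfigMap
kubectl rollout restart daemonset kube-proxy -n kube-system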

2. Deploy MetalLB

root@master:/home/user# kubectl apply -f https://mirror.ghproxy.com/https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
root@master:/home/user# kubectl get ns
NAME              STATUS   AGE
default           Active   37d
dev               Active   18d
kube-node-lease   Active   37d
kube-public       Active   37d
kube-system       Active   37d
metallb-system    Active   2m24s
myapp             Active   26d
myserver          Active   36d

root@master:/home/user# kubectl api-versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2
batch/v1
certificates.k8s.io/v1
coordination.k8s.io/v1
crd.projectcalico.org/v1
discovery.k8s.io/v1
events.k8s.io/v1
flowcontrol.apiserver.k8s.io/v1beta2
flowcontrol.apiserver.k8s.io/v1beta3
metallb.io/v1alpha1
metallb.io/v1beta1
metallb.io/v1beta2
networking.k8s.io/v1
node.k8s.io/v1
policy/v1
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1

root@master:/home/user# kubectl get pod -n metallb-system
NAME                          READY   STATUS              RESTARTS   AGE
controller-586bfc6b59-wg286   1/1     Running             0          61s
speaker-bzhpc                 1/1     Running             0          60s
speaker-k2s9j                 1/1     Running             0          61s
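Before creating MetalLB custom resources it is worth waiting until the controller and its admission webhook are ready, otherwise applying an IPAddressPool or L2Advertisement can be rejected. A minimal sketch, assuming the app=metallb label that the official manifests put on the controller and speaker Pods:

kubectl wait -n metallb-system pod \
  --selector=app=metallb \
  --for=condition=Ready \
  --timeout=120s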

3. Create a MetalLB address pool
MetalLB's Address Allocation: based on the user-configured address pools, MetalLB assigns IP addresses to the LoadBalancer Services that users create and configures them on the nodes.
In this experiment the address pool is taken from the host's NAT subnet; the chosen addresses must be reachable on that network.

root@master:/home/user# cat metallb-ippool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: localip-pool
  namespace: metallb-system
spec:
  addresses:
  - 11.0.1.140-11.0.1.146
  autoAssign: true
  avoidBuggyIPs: true

root@master:/home/user# kubectl apply -f metallb-ippool.yaml
ipaddresspool.metallb.io/localip-pool created

root@master:/home/user# kubectl get ipaddresspool  -n metallb-system
NAME           AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
localip-pool   true          true              ["11.0.1.140-11.0.1.146"]
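The addresses list also accepts CIDR notation, and a pool with autoAssign: false is held back until a Service requests it explicitly. A hedged sketch of such a reserved pool (the name and range below are hypothetical and would also have to be reachable on the NAT subnet):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: reserved-pool           # hypothetical name
  namespace: metallb-system
spec:
  addresses:
  - 11.0.1.150-11.0.1.155       # hypothetical range outside localip-pool
  autoAssign: false             # only used when a Service requests this pool

A Service can then request this pool with the metallb.universe.tf/address-pool annotation.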

4. Announce externally through a network interface
External Announcement: makes the network outside the cluster aware of the newly allocated IP addresses; MetalLB implements this with ARP, NDP, or BGP.

root@master:/home/user# cat metallb-l2.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: localip-pool-l2a
  namespace: metallb-system
spec:
  ipAddressPools:
  - localip-pool
  interfaces:
  - ens33

root@master:/home/user# kubectl apply -f metallb-l2.yaml
l2advertisement.metallb.io/localip-pool-l2a created

root@master:/home/user# kubectl get l2advertisement -n metallb-system
NAME               IPADDRESSPOOLS     IPADDRESSPOOL SELECTORS   INTERFACES
localip-pool-l2a   ["localip-pool"]                             ["ens33"]
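When interfaces is omitted, the speakers announce on all interfaces; an L2Advertisement can also be narrowed to specific nodes. A minimal sketch, assuming the standard kubernetes.io/hostname node label (restricting announcements to node01 is a hypothetical choice, not part of this lab):

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: node01-only-l2a          # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
  - localip-pool
  nodeSelectors:                 # only matching nodes answer ARP for the pool's IPs
  - matchLabels:
      kubernetes.io/hostname: node01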

III. Publishing an application externally
1. Create the application as a Deployment workload

root@master:/home/user# kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 --replicas=3
deployment.apps/demoapp created
root@master:/home/user# kubectl get pod
NAME                      READY   STATUS                   RESTARTS   AGE
demoapp-75f59c894-2p5wr   1/1     Running                  0          101s
demoapp-75f59c894-fx8xh   1/1     Running                  0          101s
demoapp-75f59c894-k5stt   1/1     Running                  0          101s
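The Service created in the next step selects Pods by the app=demoapp label, which kubectl create deployment sets automatically. A quick check that the label is present:

kubectl get pods -l app=demoapp --show-labels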

2. Create a LoadBalancer Service to expose the application

root@master:/home/user# cat services-loadbalancer-demo.yaml
kind: Service
apiVersion: v1
metadata:
  name: demoapp-loadbalancer-svc
spec:
  type: LoadBalancer
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
root@master:/home/user# kubectl apply -f services-loadbalancer-demo.yaml
service/demoapp-loadbalancer-svc created
root@master:/home/user# kubectl get svc
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
demoapp-loadbalancer-svc   LoadBalancer   10.200.135.7   11.0.1.140    80:30970/TCP   18s
kubernetes                 ClusterIP      10.200.0.1     <none>        443/TCP        37d

The Service has automatically been assigned the external address 11.0.1.140 from the pool.
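To see how MetalLB handled the allocation, describe the Service; its events should record the assigned IP and, in L2 mode, the node that answers ARP for it (the exact event reasons, IPAllocated and nodeAssigned, are assumed from current MetalLB behaviour):

# Check the MetalLB-related events on the Service
kubectl describe service demoapp-loadbalancer-svc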

IV. Accessing the cluster via the external IP

root@master:/home/user# kubectl get pod -o wide
NAME                      READY   STATUS                   RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
demoapp-75f59c894-2p5wr   1/1     Running                  0          36m   10.100.231.117   node02   <none>           <none>
demoapp-75f59c894-fx8xh   1/1     Running                  0          36m   10.100.231.119   node02   <none>           <none>
demoapp-75f59c894-k5stt   1/1     Running                  0          36m   10.100.231.118   node02   <none>           <none>

Refreshing the web page shows that traffic is load-balanced across the Pods; the command-line test below demonstrates the same thing.
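The same behaviour can be checked from any host on the NAT subnet. The demoapp image reports the serving Pod in its response body (an assumption about ikubernetes/demoapp:v1.0), so repeated requests should rotate through the three Pod IPs listed above:

# Send several requests to the external IP; responses should come from different Pods
for i in $(seq 1 6); do curl -s http://11.0.1.140/; done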

