What is a microservice
Controllers manage the cluster's workloads, but how does an application get exposed? It has to be exposed through a microservice (Service) before it can be accessed.
- A Service is the externally exposed interface to a group of Pods that provide the same service.
- With a Service, applications gain service discovery and load balancing.
- By default a Service only provides layer-4 load balancing; it has no layer-7 capability (that can be added with Ingress).
Types of microservices
Type | Description |
---|---|
ClusterIP | The default. The cluster assigns the Service a virtual IP that is reachable only from inside the cluster. |
NodePort | Exposes the Service on a chosen port of every node; a request to any nodeIP:nodePort is routed to the ClusterIP. |
LoadBalancer | Builds on NodePort: a cloud provider provisions an external load balancer that forwards requests to nodeIP:nodePort. Only usable on cloud platforms. |
ExternalName | Maps the Service to a DNS CNAME record pointing at the domain set in spec.externalName. |
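All four variants are selected with the spec.type field of an otherwise ordinary Service manifest; a minimal sketch (the name and selector here are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo            # hypothetical Service name
spec:
  type: NodePort        # ClusterIP (default) | NodePort | LoadBalancer | ExternalName
  selector:
    app: demo           # hypothetical Pod label to match
  ports:
  - port: 8080          # port the Service listens on
    targetPort: 80      # container port the traffic is forwarded to
```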
[root@k8s-master ~]# mkdir services
[root@k8s-master ~]# cd services/
#Generate the controller manifest and create the controller
[root@k8s-master services]# kubectl create deployment huazi --image myapp:v1 --replicas 2 --dry-run=client -o yaml > huazi-service.yml
[root@k8s-master services]# ls
huazi-service.yml
[root@k8s-master services]# kubectl apply -f huazi-service.yml
deployment.apps/huazi created
[root@k8s-master services]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
huazi-646d7864fd-4rtxl 1/1 Running 0 8s 10.244.2.7 k8s-node2.org <none> <none>
huazi-646d7864fd-kxsg9 1/1 Running 0 8s 10.244.1.8 k8s-node1.org <none> <none>
[root@k8s-master services]# kubectl expose deployment huazi --port 8080 --target-port 80 --dry-run=client -o yaml >> huazi-service.yml
[root@k8s-master services]# vim huazi-service.yml
[root@k8s-master services]# cat huazi-service.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: huazi
  name: huazi
spec:
  replicas: 2
  selector:
    matchLabels:
      app: huazi
  template:
    metadata:
      labels:
        app: huazi
    spec:
      containers:
      - image: myapp:v1
        name: myapp
--- #separate different resources with ---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: huazi
  name: huazi
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: huazi
[root@k8s-master services]# kubectl apply -f huazi-service.yml
deployment.apps/huazi configured
service/huazi created
[root@k8s-master services]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
huazi ClusterIP 10.109.230.54 <none> 8080/TCP 10s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17d
#svc is the abbreviation of services
[root@k8s-master services]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
huazi ClusterIP 10.109.230.54 <none> 8080/TCP 50s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17d
[root@k8s-master services]# kubectl get svc huazi --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
huazi ClusterIP 10.109.230.54 <none> 8080/TCP 3m13s app=huazi
[root@k8s-master services]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
huazi-646d7864fd-4rtxl 1/1 Running 0 9m21s 10.244.2.7 k8s-node2.org <none> <none>
huazi-646d7864fd-kxsg9 1/1 Running 0 9m21s 10.244.1.8 k8s-node1.org <none> <none>
[root@k8s-master services]# kubectl describe svc huazi
Name: huazi
Namespace: default
Labels: app=huazi
Annotations: <none>
Selector: app=huazi
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.109.230.54
IPs: 10.109.230.54
Port: <unset> 8080/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.8:80,10.244.2.7:80
Session Affinity: None
Events: <none>
[root@k8s-master services]# curl 10.109.230.54:8080
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
- The Service's app label is the same as the app label in the controller.
- If a Pod's labels match the Service's selector, the Pod's IP is listed in Endpoints; if they stop matching, the IP is removed from Endpoints.
#The rule can be inspected in the firewall (iptables)
[root@k8s-master services]# iptables -t nat -nL
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-SVC-NHHAOEMW36LYR3Y5 tcp -- 0.0.0.0/0 10.109.230.54 /* default/huazi cluster IP */ tcp dpt:8080
IPVS mode
Services are implemented jointly by the kube-proxy component and iptables. When kube-proxy handles Services through iptables, it has to install a very large number of iptables rules on the host; if the cluster runs many Pods, constantly refreshing those rules consumes a lot of CPU. IPVS-mode Services let a k8s cluster support far more Pods.
Configuring IPVS mode
- Install ipvsadm on all nodes
[root@k8s-master services]# yum install ipvsadm -y
[root@k8s-node1 ~]# yum install ipvsadm -y
[root@k8s-node2 ~]# yum install ipvsadm -y
- Edit the proxy configuration on the master node
[root@k8s-master services]# kubectl -n kube-system edit cm kube-proxy
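The field to change in that ConfigMap is mode; an excerpt of the relevant part (surrounding fields omitted):

```yaml
# inside the kube-proxy ConfigMap (config.conf)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # an empty string "" means the default iptables mode
```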
- Restart the kube-proxy pods. A pod reads its configuration at startup; changing the config file does not affect pods that are already running, so the pods must be restarted.
[root@k8s-master services]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
Note: after switching to IPVS mode, kube-proxy adds a virtual NIC named kube-ipvs0 on the host and assigns all Service IPs to it.
[root@k8s-master services]# ip a | tail -n 8
8: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 22:eb:d5:60:29:26 brd ff:ff:ff:ff:ff:ff
inet 10.96.0.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.109.230.54/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
- When a Service is deleted, its IPVS rules are removed automatically.
[root@k8s-master services]# kubectl delete svc huazi
service "huazi" deleted
[root@k8s-master services]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 172.25.254.100:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.2:53 Masq 1 0 0
-> 10.244.0.3:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.2:9153 Masq 1 0 0
-> 10.244.0.3:9153 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.2:53 Masq 1 0 0
-> 10.244.0.3:53 Masq 1 0 0
Microservice types in detail
ClusterIP type
ClusterIP mode is reachable only from inside the cluster; it provides health checking and automatic discovery for the cluster's Pods.
[root@k8s-master services]# kubectl run testpod --image myapp:v1
pod/testpod created
[root@k8s-master services]# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
testpod 1/1 Running 0 3m56s 10.244.2.9 k8s-node2.org <none> <none> run=testpod
[root@k8s-master services]# kubectl expose pod testpod --port 8080 --target-port 80 --dry-run=client -o yaml > testpod-svc.yml
[root@k8s-master services]# vim testpod-svc.yml
[root@k8s-master services]# cat testpod-svc.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    run: testpod
  type: ClusterIP #set the type to ClusterIP
[root@k8s-master services]# kubectl apply -f testpod-svc.yml
service/testpod created
[root@k8s-master services]# kubectl get svc testpod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
testpod ClusterIP 10.101.174.36 <none> 8080/TCP 60s
[root@k8s-master services]# kubectl describe svc testpod
Name: testpod
Namespace: default
Labels: run=testpod
Annotations: <none>
Selector: run=testpod
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.101.174.36
IPs: 10.101.174.36
Port: <unset> 8080/TCP
TargetPort: 80/TCP
Endpoints: 10.244.2.9:80
Session Affinity: None
Events: <none>
[root@k8s-master services]# ipvsadm -Ln
We can see that the IPVS rule was added automatically.
[root@k8s-master services]# kubectl run testpod1 --image myapp:v1
pod/testpod1 created
[root@k8s-master services]# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
testpod 1/1 Running 0 12m 10.244.2.9 k8s-node2.org <none> <none> run=testpod
testpod1 1/1 Running 0 12s 10.244.1.10 k8s-node1.org <none> <none> run=testpod1
[root@k8s-master services]# ipvsadm -Ln
#After changing the label, testpod1 is added to the IPVS backends
[root@k8s-master services]# kubectl label pod testpod1 run=testpod --overwrite
pod/testpod1 labeled
[root@k8s-master services]# ipvsadm -Ln
Once a Service is created, the cluster DNS provides name resolution for it.
[root@k8s-master services]# kubectl -n default get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17d
testpod ClusterIP 10.101.174.36 <none> 8080/TCP 8m43s
[root@k8s-master services]# kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 17d
[root@k8s-master services]# dig testpod.default.svc.cluster.local @10.96.0.10
; <<>> DiG 9.16.23-RH <<>> testpod.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36803
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 148e67c3c114f1a0 (echoed)
;; QUESTION SECTION:
;testpod.default.svc.cluster.local. IN A
;; ANSWER SECTION:
testpod.default.svc.cluster.local. 30 IN A 10.101.174.36
;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Mon Oct 21 09:38:00 EDT 2024
;; MSG SIZE rcvd: 123
Run a busyboxplus pod and access the Service by domain name from inside the cluster.
[root@k8s-master services]# kubectl run busybox -it --image busyboxplus
/ # curl testpod.default.svc.cluster.local.:8080
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Because each new Pod gets a different IP, communication inside the cluster is done through domain names.
/ # curl testpod:8080 #the domain name is auto-completed
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # nslookup testpod
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: testpod
Address 1: 10.101.174.36 testpod.default.svc.cluster.local
#The domain name resolves to the Service's IP
#Auto-completion works because of the search entries in /etc/resolv.conf
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local org
options ndots:5
A special ClusterIP mode: headless services
- Traffic does not go through the Service; it is sent directly to the Pods.
- A headless Service is not assigned a Cluster IP, kube-proxy does not handle it, and the platform performs no load balancing or routing for it. Cluster access is resolved by DNS directly to the business Pods' IPs; all scheduling is done by DNS alone.
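A headless Service differs from an ordinary ClusterIP Service by a single field, clusterIP: None; a sketch of what the edited huazi-svc.yml looks like (the edit itself is not shown in the transcript):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: huazi
  name: huazi
spec:
  clusterIP: None     # this one field makes the Service headless
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: huazi
```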
[root@k8s-master services]# kubectl create deployment huazi --image myapp:v1 --dry-run=client --replicas 2 -o yaml > huazi-dp.yml
[root@k8s-master services]# kubectl apply -f huazi-dp.yml
[root@k8s-master services]# kubectl expose deployment huazi --port 8080 --target-port 80 --dry-run=client -o yaml >> huazi-svc.yml
[root@k8s-master services]# kubectl apply -f huazi-svc.yml
[root@k8s-master services]# ls
huazi-dp.yml huazi-svc.yml
[root@k8s-master services]# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
huazi-646d7864fd-hs848 1/1 Running 0 7m39s 10.244.2.12 k8s-node2.org <none> <none> app=huazi,pod-template-hash=646d7864fd
huazi-646d7864fd-jzg6b 1/1 Running 0 7m39s 10.244.1.12 k8s-node1.org <none> <none> app=huazi,pod-template-hash=646d7864fd
[root@k8s-master services]# kubectl get svc huazi
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
huazi ClusterIP 10.97.121.243 <none> 8080/TCP 2m20s
[root@k8s-master services]# vim huazi-svc.yml
[root@k8s-master services]# kubectl delete -f huazi-svc.yml
service "huazi" deleted
[root@k8s-master services]# kubectl apply -f huazi-svc.yml
service/huazi created
[root@k8s-master services]# kubectl describe svc huazi
[root@k8s-master services]# kubectl run busybox -it --image busyboxplus
If you don't see a command prompt, try pressing enter.
/ # nslookup huazi
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: huazi
Address 1: 10.244.2.12 10-244-2-12.huazi.default.svc.cluster.local
Address 2: 10.244.1.12 10-244-1-12.huazi.default.svc.cluster.local
# The domain maps directly to the Pods' IPs rather than the Service IP
#Enter the busybox pod and access the domain name
[root@k8s-master services]# kubectl exec -it busybox -- /bin/sh
/ # curl huazi
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master services]# ipvsadm -Ln
NodePort type
NodePort exposes a port via IPVS so that external hosts can reach the Pod workload through a node's external ip:port (here the master's).
Access flow
[root@k8s-master services]# vim huazi-dp.yml
[root@k8s-master services]# cat huazi-dp.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: huazi
  name: huazi
spec:
  replicas: 2
  selector:
    matchLabels:
      app: huazi
  template:
    metadata:
      labels:
        app: huazi
    spec:
      containers:
      - image: myapp:v1
        name: myapp
[root@k8s-master services]# vim huazi-svc.yml
[root@k8s-master services]# cat huazi-svc.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: huazi
  name: huazi
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: huazi
  type: NodePort
[root@k8s-master services]# kubectl apply -f huazi-dp.yml
deployment.apps/huazi created
[root@k8s-master services]# kubectl apply -f huazi-svc.yml
service/huazi created
[root@k8s-master services]# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
huazi-646d7864fd-9877k 1/1 Running 0 38s 10.244.2.4 k8s-node2.org <none> <none> app=huazi,pod-template-hash=646d7864fd
huazi-646d7864fd-j24st 1/1 Running 0 38s 10.244.1.3 k8s-node1.org <none> <none> app=huazi,pod-template-hash=646d7864fd
[root@k8s-master services]# kubectl get svc huazi
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
huazi NodePort 10.96.205.98 <none> 8080:30310/TCP 18s
[root@k8s-master services]# ipvsadm -Ln
[root@k8s-master services]# kubectl describe svc huazi
Name: huazi
Namespace: default
Labels: app=huazi
Annotations: <none>
Selector: app=huazi
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.205.98
IPs: 10.96.205.98
Port: <unset> 8080/TCP
TargetPort: 80/TCP
NodePort: <unset> 30310/TCP
Endpoints: 10.244.1.3:80,10.244.2.4:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Access the master's ip:port
NodePort default port range
NodePort's default range is 30000-32767; a port outside it is rejected.
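A fixed node port is requested with the nodePort field; a sketch of the spec change made in the edits below (the edits themselves are not shown in the transcript):

```yaml
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
    nodePort: 30000   # must fall inside the service-node-port-range (default 30000-32767)
```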
[root@k8s-master services]# vim huazi-svc.yml
[root@k8s-master services]# kubectl apply -f huazi-svc.yml
service/huazi configured
[root@k8s-master services]# kubectl get svc huazi
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
huazi NodePort 10.96.205.98 <none> 8080:30000/TCP 10m
Access the master's ip:port
[root@k8s-master services]# vim huazi-svc.yml
[root@k8s-master services]# kubectl apply -f huazi-svc.yml
The Service "huazi" is invalid: spec.ports[0].nodePort: Invalid value: 33333: provided port is not in the valid range. The range of valid ports is 30000-32767
- The apply fails: the port must be within 30000-32767. Using a port outside this range requires extra configuration.
- Add the --service-node-port-range= parameter to customize the range.
- After the change, the api-server restarts automatically; wait until it is back up before operating the cluster. The restart completes on its own; no manual intervention is needed after editing the parameter.
[root@k8s-master services]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
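The flag goes into the kube-apiserver command list in the static Pod manifest; a hedged excerpt (the range value is only an example):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=30000-40000   # example custom range; other flags unchanged
```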
After the edit, the cluster's api-server restarts automatically.
#Now the apply no longer fails
[root@k8s-master services]# kubectl apply -f huazi-svc.yml
service/huazi configured
[root@k8s-master services]# kubectl get svc huazi
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
huazi NodePort 10.96.205.98 <none> 8080:33333/TCP 27m
Access the master's ip:port
LoadBalancer type
A cloud platform allocates a VIP for us and handles access; on bare-metal hosts, MetalLB is needed to allocate the IP.
Access flow
[root@k8s-master services]# kubectl delete -f huazi-svc.yml
service "huazi" deleted
[root@k8s-master services]# vim huazi-svc.yml
[root@k8s-master services]# kubectl apply -f huazi-svc.yml
service/huazi created
#EXTERNAL-IP is pending: the external load balancer's IP has not been assigned yet
[root@k8s-master services]# kubectl get svc huazi
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
huazi LoadBalancer 10.97.214.7 <pending> 8080:34662/TCP 8s
- EXTERNAL-IP is in the pending state, meaning the external load balancer's IP address has not yet been assigned.
LoadBalancer mode is intended for cloud platforms; bare-metal environments need MetalLB installed for support.
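The edit to huazi-svc.yml only changes the type field; a sketch of the resulting spec:

```yaml
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: huazi
  type: LoadBalancer   # EXTERNAL-IP stays <pending> until a cloud provider or MetalLB assigns one
```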
MetalLB website: https://metallb.universe.tf/installation/
What MetalLB does
- Allocates VIPs for LoadBalancer Services
Deploying MetalLB
- Set IPVS mode
#cm is short for ConfigMap
[root@k8s-master services]# kubectl edit cm -n kube-system kube-proxy
- Restart the kube-proxy pods. A pod reads its configuration at startup; changing the config file does not affect pods that are already running, so the pods must be restarted.
[root@k8s-master services]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
- Download the deployment file metallb-native.yaml
[root@k8s-master metalLB]# ls
configmap.yml metallb-native.yaml metalLB.tag.gz
- Edit the image addresses in metallb-native.yaml to match the harbor registry paths
[root@k8s-master metalLB]# vim metallb-native.yaml
...
...
...
image: metallb/controller:v0.14.8
image: metallb/speaker:v0.14.8
...
...
...
Upload the images
[root@k8s-master metalLB]# docker load -i metalLB.tag.gz
[root@k8s-master metalLB]# docker tag quay.io/metallb/controller:v0.14.8 harbor.huazi.org/metallb/controller:v0.14.8
[root@k8s-master metalLB]# docker tag quay.io/metallb/speaker:v0.14.8 harbor.huazi.org/metallb/speaker:v0.14.8
[root@k8s-master metalLB]# docker push harbor.huazi.org/metallb/controller:v0.14.8
[root@k8s-master metalLB]# docker push harbor.huazi.org/metallb/speaker:v0.14.8
[root@k8s-master metalLB]# kubectl apply -f metallb-native.yaml
[root@k8s-master metalLB]# kubectl -n metallb-system get pods
NAME READY STATUS RESTARTS AGE
controller-65957f77c8-2p2hj 1/1 Running 0 65s
speaker-jhdfd 1/1 Running 0 65s
speaker-rflp2 1/1 Running 0 65s
speaker-vvtlf 1/1 Running 0 65s
[root@k8s-master metalLB]# cat configmap.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool #address pool name
  namespace: metallb-system
spec:
  addresses:
  - 172.25.254.50-172.25.254.99 #change to your local address range
--- #two different kinds must be separated by ---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool #use the address pool
[root@k8s-master metalLB]# kubectl apply -f configmap.yml
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created
[root@k8s-master metalLB]# kubectl -n metallb-system get configmaps
NAME DATA AGE
kube-root-ca.crt 1 8m29s
metallb-excludel2 1 8m29s
Now we can see that an EXTERNAL-IP has been assigned.
[root@k8s-master metalLB]# kubectl get svc huazi
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
huazi LoadBalancer 10.97.214.7 172.25.254.50 8080:34662/TCP 51m
172.25.254.50 is a VIP; 172.25.254.100 is the master's real IP. On a real cloud platform, EXTERNAL-IP would be a public IP.
ExternalName type
- When the Service is created, no IP is allocated; instead DNS resolves a CNAME to a fixed domain name, which solves the problem of changing IPs.
- Typically used when external services talk to Pods, or when an external service is being migrated into the cluster.
- During an application's migration into the cluster, ExternalName is useful in the transition phase: while resources outside the cluster are being migrated in, their IPs may change, but domain name + DNS resolution handles this cleanly.
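The edited huazi-svc.yml points the Service at a domain instead of a backend; a sketch consistent with the describe output that follows:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: huazi
  name: huazi
spec:
  type: ExternalName
  externalName: www.baidu.com   # DNS resolves the Service name to this CNAME
  selector:
    app: huazi
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
```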
[root@k8s-master services]# kubectl delete -f huazi-svc.yml
service "huazi" deleted
[root@k8s-master services]# vim huazi-svc.yml
[root@k8s-master services]# kubectl apply -f huazi-svc.yml
service/huazi created
[root@k8s-master services]# kubectl get svc huazi
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
huazi ExternalName <none> www.baidu.com 8080/TCP 8s
[root@k8s-master services]# kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 18d
[root@k8s-master services]# kubectl describe svc huazi
Name: huazi
Namespace: default
Labels: app=huazi
Annotations: <none>
Selector: app=huazi
Type: ExternalName
IP Families: <none>
IP:
IPs: <none>
External Name: www.baidu.com
Port: <unset> 8080/TCP
TargetPort: 80/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
#We can see that Baidu's servers are responding
[root@k8s-master services]# kubectl run -it busybox --image busyboxplus
If you don't see a command prompt, try pressing enter.
/ # ping huazi
PING huazi (183.2.172.185): 56 data bytes
64 bytes from 183.2.172.185: seq=0 ttl=127 time=36.577 ms
64 bytes from 183.2.172.185: seq=1 ttl=127 time=34.153 ms
64 bytes from 183.2.172.185: seq=2 ttl=127 time=33.778 ms
64 bytes from 183.2.172.185: seq=3 ttl=127 time=33.871 ms
64 bytes from 183.2.172.185: seq=4 ttl=127 time=36.072 ms
The essence of ExternalName: inside the cluster a Service name is bound to an address outside the cluster. When the in-cluster Service is accessed, what is actually reached is the external address.