k8s network study notes
Previously with Docker, a container could simply use a port published on the host and be reachable from outside.
With Kubernetes this approach no longer works, so we need other mechanisms for external clients to reach the pods inside the cluster.
Understanding pod-to-pod packet flow across hosts
Same-node communication
Cross-node communication
A pod's packet first reaches the cni0 bridge (its gateway), is encapsulated into UDP by flannel, and then leaves through the physical NIC.
Capture on the physical NIC to see the ICMP packets wrapped in UDP.
Capture on the same host for comparison.
Common tcpdump options
Options
# -nn  show numeric IPs and ports only (no name/port resolution)
# -s 0 capture the full packet (no truncation)
# -v   verbose output
# -i   capture on the specified interface
# -p   do not put the interface into promiscuous mode
tcpdump -nn -i cni0 -s0 -v icmp
tcpdump -nn -i ens160 -s0 -v udp
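To generate the ICMP traffic being captured above, you can ping a pod on another node from a throwaway test pod; a minimal sketch (the target pod IP 10.2.2.30 is an example, take a real one from kubectl get pods -owide):
# in one terminal: start a temporary busybox pod and ping a pod IP on the other node
kubectl run net-test --image=busybox:1.35 --restart=Never -it --rm -- ping -c 3 10.2.2.30
# in another terminal on the sending node: watch the flannel-encapsulated packets leave the physical NIC
tcpdump -nn -i ens160 -s0 -v udp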
service
Official docs: https://kubernetes.io/zh-cn/docs/concepts/services-networking/service
Once the pods are created, how do we access them? Accessing pods directly has several problems:
- Pods can be deleted and recreated at any time by controllers such as Deployments, so the result of talking to a specific pod is unpredictable (pod IPs are dynamic).
- A pod's IP address is only assigned after the pod starts; it is unknown beforehand.
- An application usually consists of a group of pods running the same image, and addressing them one by one is impractical (we want a single load-balanced entry point).
For example, suppose an application has a frontend and a backend, both created with Deployments, and the frontend calls the backend to do some computation.
The backend runs 3 pods, which are independent and interchangeable. When a pod fails and is rebuilt, the new pod gets a new IP,
and the frontend pods have no way to notice that by themselves.
Service solves the pod access problem
The Service object in Kubernetes exists precisely to solve the access problems described above.
A Service has a fixed IP address; it forwards the traffic addressed to it to the pods selected by labels, and it load-balances across those pods.
For the example above, adding a Service in front of the backend means the frontend talks to the Service and never needs to notice changes to the backend pods.
Create the backend pods
[root@k8s-master ~]# cat cheng-svc-test.yaml
# Create 3 pods via a Deployment and label them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: cheng-svc-test   # a new namespace created for this exercise
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: container-0
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
Create and check the pods
# create the namespace
[root@k8s-master ~]# kubectl create ns cheng-svc-test
namespace/cheng-svc-test created
# create the pods
[root@k8s-master ~]# kubectl create -f cheng-svc-test.yaml
deployment.apps/nginx created
# Label the node; creation sometimes fails simply because the node is missing an expected label
kubectl label no k8s-node1 ingress=true
# check the pods
[root@k8s-master ~]# kubectl -n cheng-svc-test get pods
NAME READY STATUS RESTARTS AGE
nginx-56d58c56c7-bx5hh 1/1 Running 0 43s
nginx-56d58c56c7-j62h8 1/1 Running 0 43s
nginx-56d58c56c7-s5hrf 1/1 Running 0 43s
[root@k8s-master ~]#
# the 3 backend pods are now up
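Before putting a Service in front of them, it is worth confirming the pods really carry the label the Service will select on; a quick check (output omitted, the pod IP is just an example taken from kubectl get pods -owide):
# confirm every pod has the app=nginx label the Service will select
kubectl -n cheng-svc-test get pods --show-labels
# optionally curl one pod IP directly from a node to verify nginx answers
curl -s -o /dev/null -w "%{http_code}\n" http://10.2.1.18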
The Service resource
Three Service types
kubectl explain svc.spec.type
ClusterIP
NodePort
LoadBalancer
Create a ClusterIP Service
# Cluster-internal only: a fixed virtual IP (the Service IP) that load-balances across a group of pods
apiVersion: v1
kind: Service
metadata:
  name: cheng-clusterip          # Service name (also its DNS name); pick something that matches the business and is easy to understand
  namespace: cheng-svc-test      # same namespace as the backend created earlier
spec:
  selector:                      # label selector: Services automatically discover pods with app=nginx in this namespace
    app: nginx                   # must match the label set on the pods created earlier
  ports:
  - name: service0
    targetPort: 80               # the pod's port
    port: 80                     # the port the Service exposes, i.e. the ClusterIP port
    protocol: TCP                # forwarding protocol, TCP or UDP
  type: ClusterIP                # Service type
Check the ClusterIP
[root@k8s-master cheng-svc-test]# vim cheng-clusterip.yaml
#create the Service
[root@k8s-master cheng-svc-test]# kubectl create -f cheng-clusterip.yaml
service/cheng-clusterip created
#list all resources in this namespace
[root@k8s-master cheng-svc-test]# kubectl -n cheng-svc-test get all -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-56d58c56c7-bx5hh 1/1 Running 0 28m 10.2.1.18 k8s-node1 <none> <none>
pod/nginx-56d58c56c7-j62h8 1/1 Running 0 28m 10.2.2.30 k8s-node2 <none> <none>
pod/nginx-56d58c56c7-s5hrf 1/1 Running 0 28m 10.2.2.29 k8s-node2 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/cheng-clusterip ClusterIP 10.1.17.45 <none> 80/TCP 41s app=nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/nginx 3/3 3 3 28m container-0 nginx:latest app=nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/nginx-56d58c56c7 3 3 3 28m container-0 nginx:latest app=nginx,pod-template-hash=56d58c56c7
[root@k8s-master cheng-svc-test]#
Access the ClusterIP
Services and discovery
Service is usually abbreviated as svc.
A svc is a layer-4 proxy in front of a group of backend pods.
Frontend pods just use the svc address (or its DNS name) when they need to reach backends such as a database.
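Inside the cluster a Service is also reachable by DNS as <name>.<namespace>.svc.cluster.local (assuming the default cluster.local domain). A minimal sketch of testing both the ClusterIP and the DNS name from a temporary pod:
# hit the ClusterIP directly (IP taken from the listing above)
kubectl run svc-test --image=busybox:1.35 --restart=Never -it --rm -- wget -qO- http://10.1.17.45
# or use the Service's DNS name, which is what application pods normally configure
kubectl run svc-test --image=busybox:1.35 --restart=Never -it --rm -- wget -qO- http://cheng-clusterip.cheng-svc-test.svc.cluster.local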
[root@k8s-master cheng-svc-test]# kubectl -n cheng-svc-test edit deployments.apps nginx
deployment.apps/nginx edited
spec:
progressDeadlineSeconds: 600
  replicas: 5 # change the replica count here
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx
strategy:
rollingUpdate:
[root@k8s-master ~]# kubectl -n cheng-svc-test describe svc
Name: cheng-clusterip
Namespace: cheng-svc-test
Labels: <none>
Annotations: <none>
Selector: app=nginx
Type: ClusterIP
IP: 10.1.17.45
Port: service0 80/TCP
TargetPort: 80/TCP
Endpoints: 10.2.1.18:80,10.2.1.19:80,10.2.2.29:80 + 2 more...
Session Affinity: None
Events: <none>
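The Endpoints object behind the Service lists the pod IP:port pairs that kube-proxy actually forwards to, so it is a quick way to confirm the Service picked up all five replicas after the scale-up:
# one IP:80 entry should appear per pod that matches the selector
kubectl -n cheng-svc-test get endpoints cheng-clusterip
kubectl -n cheng-svc-test describe endpoints cheng-clusterip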
NodePort-type Service
A NodePort Service reserves the same port on every node of the Kubernetes cluster; external clients first connect to nodeIP:port, and those connections are then forwarded to the pods behind the Service.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: cheng-svc-test      # use whichever namespace you want to expose
spec:
  type: NodePort                 # Service type
  ports:
  - port: 80                     # the Service's in-cluster ip:port
    targetPort: 80               # the port the backend pods expose (e.g. 5000, 80)
    nodePort: 30188              # the port opened on every k8s node to reach this svc
                                 # by default, for convenience, the control plane allocates one from a range (default: 30000-32767)
    name: service1               # if omitted, it shows as unset in the svc output
  selector:                      # label selector
    app: nginx                   # same selector as used for the ClusterIP Service (the app=nginx label)
[root@k8s-master cheng-svc-test]# vim nodeprot-svc.yaml
[root@k8s-master cheng-svc-test]# kubectl create -f nodeprot-svc.yaml
service/nodeport-service created
[root@k8s-master cheng-svc-test]# kubectl -n cheng-svc-test get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cheng-clusterip ClusterIP 10.1.17.45 <none> 80/TCP 57m
nodeport-service NodePort 10.1.73.3 <none> 80:30188/TCP 48s
[root@k8s-master cheng-svc-test]#
Access the NodePort
After the NodePort Service is created, port 30188 is open on every node in the cluster, and accessing any one of them works.
In a public-cloud environment, where an SLB load balancer sits in front of the cluster and an EIP is bound to the SLB, you simply access the EIP plus the port.
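A quick test from outside the cluster; 192.168.1.101 below stands in for any node IP (substitute your own):
# any node IP works; 30188 is the nodePort defined above
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.101:30188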
External access with Ingress
Official docs: https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/
Ingress
Feature state: Kubernetes v1.19 [stable]
Ingress is an API object that manages external access to the services in a cluster, typically over HTTP.
Ingress can provide load balancing, SSL termination and name-based virtual hosting.
Why use Ingress?
We already exposed the containers to the outside with NodePort, so why do we still need Ingress?
NodePort access means IP + port; in a real production environment you cannot ask users to append a port to the OA or webmail URL, which is far too clumsy.
Ingress solves this: access is based on domain names.
Ingress workflow
ingress-nginx is a layer-7 proxy.
1. Deploy the Ingress controller (write its YAML).
2. Write the routing rules as an Ingress YAML; the nginx configuration is generated from it automatically.
3. Access the application through the domain name handled by the Ingress.
Choosing an Ingress controller
# You must have an Ingress controller to satisfy an Ingress; creating the Ingress resource alone has no effect.
# You may need to deploy an Ingress controller such as ingress-nginx; there are many controllers to choose from.
# Ideally every controller follows the reference spec, but in practice they each behave slightly differently.
# Install ingress-nginx first, using the ingress YAML from 'Kubernetes: The Definitive Guide' (5th edition).
https://github.com/kubeguide/K8sDefinitiveGuide-V5-Sourcecode/blob/main/Chapter04/4.6.1%20nginx-ingress-controller.yaml
Just copy that file and delete the trailing parts that are not needed; I have already trimmed it below.
---
apiVersion: v1
kind: Namespace
metadata:
name: nginx-ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress
namespace: nginx-ingress
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nginx-ingress
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- update
- create
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- list
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- list
- watch
- get
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- k8s.nginx.org
resources:
- virtualservers
- virtualserverroutes
- globalconfigurations
- transportservers
- policies
verbs:
- list
- watch
- get
- apiGroups:
- k8s.nginx.org
resources:
- virtualservers/status
- virtualserverroutes/status
verbs:
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nginx-ingress
subjects:
- kind: ServiceAccount
name: nginx-ingress
namespace: nginx-ingress
roleRef:
kind: ClusterRole
name: nginx-ingress
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
name: default-server-secret
namespace: nginx-ingress
type: Opaque
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN2akNDQWFZQ0NRREFPRjl0THNhWFhEQU5CZ2txaGtpRzl3MEJBUXNGQURBaE1SOHdIUVlEVlFRRERCWk8KUjBsT1dFbHVaM0psYzNORGIyNTBjbTlzYkdWeU1CNFhEVEU0TURreE1qRTRNRE16TlZvWERUSXpNRGt4TVRFNApNRE16TlZvd0lURWZNQjBHQTFVRUF3d1dUa2RKVGxoSmJtZHlaWE56UTI5dWRISnZiR3hsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUwvN2hIUEtFWGRMdjNyaUM3QlBrMTNpWkt5eTlyQ08KR2xZUXYyK2EzUDF0azIrS3YwVGF5aGRCbDRrcnNUcTZzZm8vWUk1Y2Vhbkw4WGM3U1pyQkVRYm9EN2REbWs1Qgo4eDZLS2xHWU5IWlg0Rm5UZ0VPaStlM2ptTFFxRlBSY1kzVnNPazFFeUZBL0JnWlJVbkNHZUtGeERSN0tQdGhyCmtqSXVuektURXUyaDU4Tlp0S21ScUJHdDEwcTNRYzhZT3ExM2FnbmovUWRjc0ZYYTJnMjB1K1lYZDdoZ3krZksKWk4vVUkxQUQ0YzZyM1lma1ZWUmVHd1lxQVp1WXN2V0RKbW1GNWRwdEMzN011cDBPRUxVTExSakZJOTZXNXIwSAo1TmdPc25NWFJNV1hYVlpiNWRxT3R0SmRtS3FhZ25TZ1JQQVpQN2MwQjFQU2FqYzZjNGZRVXpNQ0F3RUFBVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWpLb2tRdGRPcEsrTzhibWVPc3lySmdJSXJycVFVY2ZOUitjb0hZVUoKdGhrYnhITFMzR3VBTWI5dm15VExPY2xxeC9aYzJPblEwMEJCLzlTb0swcitFZ1U2UlVrRWtWcitTTFA3NTdUWgozZWI4dmdPdEduMS9ienM3bzNBaS9kclkrcUI5Q2k1S3lPc3FHTG1US2xFaUtOYkcyR1ZyTWxjS0ZYQU80YTY3Cklnc1hzYktNbTQwV1U3cG9mcGltU1ZmaXFSdkV5YmN3N0NYODF6cFErUyt1eHRYK2VBZ3V0NHh3VlI5d2IyVXYKelhuZk9HbWhWNThDd1dIQnNKa0kxNXhaa2VUWXdSN0diaEFMSkZUUkk3dkhvQXprTWIzbjAxQjQyWjNrN3RXNQpJUDFmTlpIOFUvOWxiUHNoT21FRFZkdjF5ZytVRVJxbStGSis2R0oxeFJGcGZnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdi91RWM4b1JkMHUvZXVJTHNFK1RYZUprckxMMnNJNGFWaEMvYjVyYy9XMlRiNHEvClJOcktGMEdYaVN1eE9ycXgrajlnamx4NXFjdnhkenRKbXNFUkJ1Z1B0ME9hVGtIekhvb3FVWmcwZGxmZ1dkT0EKUTZMNTdlT1l0Q29VOUZ4amRXdzZUVVRJVUQ4R0JsRlNjSVo0b1hFTkhzbysyR3VTTWk2Zk1wTVM3YUhudzFtMApxWkdvRWEzWFNyZEJ6eGc2clhkcUNlUDlCMXl3VmRyYURiUzc1aGQzdUdETDU4cGszOVFqVUFQaHpxdmRoK1JWClZGNGJCaW9CbTVpeTlZTW1hWVhsMm0wTGZzeTZuUTRRdFFzdEdNVWozcGJtdlFmazJBNnljeGRFeFpkZFZsdmwKMm82MjBsMllxcHFDZEtCRThCay90elFIVTlKcU56cHpoOUJUTXdJREFRQUJBb0lCQVFDZklHbXowOHhRVmorNwpLZnZJUXQwQ0YzR2MxNld6eDhVNml4MHg4Mm15d1kxUUNlL3BzWE9LZlRxT1h1SENyUlp5TnUvZ2IvUUQ4bUFOCmxOMjRZTWl0TWRJODg5TEZoTkp3QU5OODJDeTczckM5bzVvUDlkazAvYzRIbjAzSkVYNzZ5QjgzQm9rR1FvYksKMjhMNk0rdHUzUmFqNjd6Vmc2d2szaEhrU0pXSzBwV1YrSjdrUkRWYmhDYUZhNk5nMUZNRWxhTlozVDhhUUtyQgpDUDNDeEFTdjYxWTk5TEI4KzNXWVFIK3NYaTVGM01pYVNBZ1BkQUk3WEh1dXFET1lvMU5PL0JoSGt1aVg2QnRtCnorNTZud2pZMy8yUytSRmNBc3JMTnIwMDJZZi9oY0IraVlDNzVWYmcydVd6WTY3TWdOTGQ5VW9RU3BDRkYrVm4KM0cyUnhybnhBb0dCQU40U3M0ZVlPU2huMVpQQjdhTUZsY0k2RHR2S2ErTGZTTXFyY2pOZjJlSEpZNnhubmxKdgpGenpGL2RiVWVTbWxSekR0WkdlcXZXaHFISy9iTjIyeWJhOU1WMDlRQ0JFTk5jNmtWajJTVHpUWkJVbEx4QzYrCk93Z0wyZHhKendWelU0VC84ajdHalRUN05BZVpFS2FvRHFyRG5BYWkyaW5oZU1JVWZHRXFGKzJyQW9HQkFOMVAKK0tZL0lsS3RWRzRKSklQNzBjUis3RmpyeXJpY05iWCtQVzUvOXFHaWxnY2grZ3l4b25BWlBpd2NpeDN3QVpGdwpaZC96ZFB2aTBkWEppc1BSZjRMazg5b2pCUmpiRmRmc2l5UmJYbyt3TFU4NUhRU2NGMnN5aUFPaTVBRHdVU0FkCm45YWFweUNweEFkREtERHdObit3ZFhtaTZ0OHRpSFRkK3RoVDhkaVpBb0dCQUt6Wis1bG9OOTBtYlF4VVh5YUwKMjFSUm9tMGJjcndsTmVCaWNFSmlzaEhYa2xpSVVxZ3hSZklNM2hhUVRUcklKZENFaHFsV01aV0xPb2I2NTNyZgo3aFlMSXM1ZUtka3o0aFRVdnpldm9TMHVXcm9CV2xOVHlGanIrSWhKZnZUc0hpOGdsU3FkbXgySkJhZUFVWUNXCndNdlQ4NmNLclNyNkQrZG8wS05FZzFsL0FvR0FlMkFVdHVFbFNqLzBmRzgrV3hHc1RFV1JqclRNUzRSUjhRWXQKeXdjdFA4aDZxTGxKUTRCWGxQU05rMXZLTmtOUkxIb2pZT2pCQTViYjhibXNVU1BlV09NNENoaFJ4QnlHbmR2eAphYkJDRkFwY0IvbEg4d1R0alVZYlN5T294ZGt5OEp0ek90ajJhS0FiZHd6NlArWDZDODhjZmxYVFo5MWpYL3RMCjF3TmRKS2tDZ1lCbyt0UzB5TzJ2SWFmK2UwSkN5TGhzVDQ5cTN3Zis2QWVqWGx2WDJ1VnRYejN5QTZnbXo5aCsKcDNlK2JMRUxwb3B0WFhNdUFRR0xhUkcrYlNNcjR5dERYbE5ZSndUeThXczNKY3dlSTdqZVp2b0ZpbmNvVlVIMwphdmxoTUVCRGYxSjltSDB5cDBwWUNaS2ROdHNvZEZtQktzVEtQMjJhTmtsVVhCS3gyZzR6cFE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-config
namespace: nginx-ingress
data:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress
namespace: nginx-ingress
spec:
replicas: 1
selector:
matchLabels:
app: nginx-ingress
template:
metadata:
labels:
app: nginx-ingress
spec:
nodeSelector:
role: ingress-nginx-controller
serviceAccountName: nginx-ingress
containers:
- image: nginx/nginx-ingress:1.7.2
imagePullPolicy: IfNotPresent
name: nginx-ingress
ports:
- name: http
containerPort: 80
hostPort: 80
- name: https
containerPort: 443
hostPort: 443
securityContext:
allowPrivilegeEscalation: true
runAsUser: 101 #nginx
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
args:
- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
- -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
Deploy the Ingress controller
The manifest above is the file we copied.
Label the node (see below).
[root@k8s-master ingress-controller]# vim ingress-controller-nginx.yaml
#create it
[root@k8s-master ingress-controller]# kubectl create -f ingress-controller-nginx.yaml
namespace/nginx-ingress created
serviceaccount/nginx-ingress created
secret/default-server-secret created
configmap/nginx-config created
deployment.apps/nginx-ingress created
#an error appears
Error from server (AlreadyExists): error when creating "ingress-controller-nginx.yaml": clusterroles.rbac.authorization.k8s.io "nginx-ingress" already exists
Error from server (AlreadyExists): error when creating "ingress-controller-nginx.yaml": clusterrolebindings.rbac.authorization.k8
#this error is because I had created these objects before and did not clean them up completely
#check whether everything is running properly
[root@k8s-master ingress-controller]# kubectl -n nginx-ingress get all
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-75c88594dc-hnrwl 1/1 Running 0 61m #if the status here is not Running, you need to look at the logs/events
pod/nginx-ingress-75c88594dc-qchqh 1/1 Running 0 44m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress 2/2 2 2 61m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-ingress-75c88594dc 2 2 2 61m
[root@k8s-master ingress-controller]#
#inspect the pod details (describe shows the scheduling events)
[root@k8s-master ingress-controller]# kubectl describe pod nginx-ingress-75c88594dc-hnrwl -n nginx-ingress
Name: nginx-ingress-75c88594dc-hnrwl
Namespace: nginx-ingress
Priority: 0
Node: <none>
Labels: app=nginx-ingress
pod-template-hash=75c88594dc
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/nginx-ingress-75c88594dc
Containers:
nginx-ingress:
Image: nginx/nginx-ingress:1.7.2
Ports: 80/TCP, 443/TCP
Host Ports: 80/TCP, 443/TCP
Args:
-nginx-configmaps=$(POD_NAMESPACE)/nginx-config
-default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
Environment:
POD_NAMESPACE: nginx-ingress (v1:metadata.namespace)
POD_NAME: nginx-ingress-75c88594dc-hnrwl (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-token-fx6n5 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
nginx-ingress-token-fx6n5:
Type: Secret (a volume populated by a Secret)
SecretName: nginx-ingress-token-fx6n5
Optional: false
QoS Class: BestEffort
Node-Selectors: role=ingress-nginx-controller
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7m32s default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
Warning FailedScheduling 7m32s default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
#No node matches because the ingress-controller Deployment uses a nodeSelector label; just add that label to the nodes
[root@k8s-master ingress-controller]#
kubectl label nodes <node-name> <label-key>=<label-value>
[root@k8s-master ingress-controller]# kubectl label nodes k8s-node1 role=ingress-nginx-controller
node/k8s-node1 labeled
[root@k8s-master ingress-controller]# kubectl label nodes k8s-node2 role=ingress-nginx-controller
node/k8s-node2 labeled
#check again: the pod is now scheduled on node1 and running normally
[root@k8s-master ingress-controller]# kubectl -n nginx-ingress get all -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-ingress-75c88594dc-hnrwl 1/1 Running 0 14m 10.2.1.24 k8s-node1 <none> <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/nginx-ingress 1/1 1 1 14m nginx-ingress nginx/nginx-ingress:1.7.2 app=nginx-ingress
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/nginx-ingress-75c88594dc 1 1 1 14m nginx-ingress nginx/nginx-ingress:1.7.2 app=nginx-ingress,pod-template-hash=75c88594dc
#create one more replica
[root@k8s-master ingress-controller]# kubectl -n nginx-ingress edit deployments.apps nginx-ingress
spec:
progressDeadlineSeconds: 600
  replicas: 2 # changed to 2 here
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx-ingress
[root@k8s-master ingress-controller]# kubectl -n nginx-ingress get all -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-ingress-75c88594dc-hnrwl 1/1 Running 0 16m 10.2.1.24 k8s-node1 <none> <none>
pod/nginx-ingress-75c88594dc-qchqh 1/1 Running 0 3s 10.2.2.45 k8s-node2 <none> <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/nginx-ingress 2/2 2 2 16m nginx-ingress nginx/nginx-ingress:1.7.2 app=nginx-ingress
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/nginx-ingress-75c88594dc 2 2 2 16m nginx-ingress nginx/nginx-ingress:1.7.2 app=nginx-ingress,pod-template-hash=75c88594dc
Common k8s label commands
#Since I had created these before, first delete all resources in the namespace.
[root@k8s-master ~]# kubectl delete all --all -n nginx-ingress
pod "nginx-ingress-75c88594dc-kk6wl" deleted
pod "nginx-ingress-75c88594dc-t4pf4" deleted
deployment.apps "nginx-ingress" deleted
replicaset.apps "nginx-ingress-75c88594dc" deleted
#then delete the namespace
[root@k8s-master ~]# kubectl delete ns nginx-ingress
namespace "nginx-ingress" deleted
#show the labels
[root@k8s-master ~]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master Ready master 4d22h v1.19.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1 Ready <none> 4d21h v1.19.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env2=net,env3=net,env4=net,env=test-network,ingress=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,role=ingress-nginx-controller
k8s-node2 Ready <none> 4d21h v1.19.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env2=net,env3=net,env4=net,env=net,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,role=ingress-nginx-controller
#remove a label
[root@k8s-master ~]# kubectl label node k8s-node1 role-    #just the label key followed by a dash
node/k8s-node1 labeled
[root@k8s-master ~]# kubectl label node k8s-node2 role-
2. Node management
2.1 Kubectl auto-completion
source <(kubectl completion bash)
2.2 List the cluster nodes
kubectl get nodes
2.3 List nodes with extra details
kubectl get nodes -o wide
2.4 Show detailed information for a node
kubectl describe node node-name/node-address
2.5 Label a node
kubectl label nodes <node-name> labelName=<label-value>
2.6 Show node labels
kubectl get node --show-labels
2.7 Remove a node label
kubectl label node <node-name> labelName-
2.8 Cordon a node (mark it unschedulable)
kubectl cordon node-name
2.9 Uncordon a node (mark it schedulable again)
kubectl uncordon node-name
2.10 Drain a node for maintenance
kubectl drain node-name
2.11 Remove a node
Evict the pods running on the node
kubectl drain node-name/node-address --delete-local-data --force --ignore-daemonsets
Delete the node
kubectl delete node node-name/node-address
2.12 Show node resource metrics (requires metrics-server to be installed)
kubectl top node node-name
2.13 Join a worker node to the cluster
kubeadm join master-address:6443 --token xxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxx
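If the original bootstrap token has already expired, a fresh join command can be printed on the master; this creates a new token:
# print a ready-to-copy join command with a newly created token
kubeadm token create --print-join-command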
Create the Ingress routing rule
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cheng-test-ingress
  namespace: cheng-svc-test        # same namespace as the ClusterIP Service created earlier
spec:
  rules:                           # routing rules; these fields end up in the generated nginx.conf
  - host: "aoligei.cheng.com"      # your business domain -> server_name xxxxxxx;
    http:                          # HTTP, i.e. layer-7 routing rules
      paths:                       # URL path matching -> location
      - pathType: Prefix           # the path type is required, otherwise the object is invalid
        path: "/"                  # '/'-separated URL prefix match, case sensitive; '/' here matches every path -> location / { ... }
        backend:                   # the backend Service to proxy to -> proxy_pass
          service:                 # ingress -> svc -> pod
            name: cheng-clusterip  # proxy to the ClusterIP Service named cheng-clusterip
            port:                  # the Service port to proxy to
              number: 80
[root@k8s-master ingress-controller]# vim cheng-ingress.yaml
[root@k8s-master ingress-controller]# kubectl create -f cheng-ingress.yaml
ingress.networking.k8s.io/cheng-test-ingress created
After creation, the corresponding nginx configuration is generated automatically.
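To see that generated configuration, you can exec into a controller pod; a sketch, assuming the NGINX Inc. controller used above writes one file per Ingress under /etc/nginx/conf.d/ (the file name shown is illustrative, and other controllers use different paths):
# pick one controller pod, then list and dump the generated config
kubectl -n nginx-ingress get pods
kubectl -n nginx-ingress exec nginx-ingress-75c88594dc-hnrwl -- ls /etc/nginx/conf.d/
kubectl -n nginx-ingress exec nginx-ingress-75c88594dc-hnrwl -- cat /etc/nginx/conf.d/cheng-svc-test-cheng-test-ingress.conf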
Test access from outside
Edit the local hosts file so the domain resolves to one of the node IPs.
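A sketch of the local test setup, with 192.168.1.101 standing in for a node IP (use a real one):
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows) -- the IP is an example
192.168.1.101  aoligei.cheng.com
# or skip the hosts edit entirely and fake the Host header
curl -H "Host: aoligei.cheng.com" http://192.168.1.101/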
Ingress troubleshooting
The Ingress listens on port 80, but you will not see port 80 among the listening sockets on the k8s hosts; accessing the domain still works, so don't let that confuse you.
Sometimes an Ingress is deleted but the Ingress belonging to the namespace was not cleaned up; after the namespace is recreated, access fails. Delete the leftover Ingress under the old namespace and create it again, and it works.
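When access fails, checking the Ingress object itself usually shows whether the rule and its backend Service were picked up; the controller logs show reload errors:
# confirm the Ingress exists in the right namespace and lists the expected host and backend
kubectl -n cheng-svc-test get ingress
kubectl -n cheng-svc-test describe ingress cheng-test-ingress
# check the controller for reload/validation errors
kubectl -n nginx-ingress logs deployment/nginx-ingress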
Node failure and pod self-healing
#6 nginx pods are currently running across the two nodes; power off one node and watch whether the pods get recreated on the other node
[root@k8s-master ingress-controller]# kubectl -n cheng-svc-test get pods -owide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-56d58c56c7-29kgs 1/1 Running 1 50m 10.2.2.10 k8s-node1 <none> <none>
nginx-56d58c56c7-k77tq 1/1 Running 0 92m 10.2.1.3 k8s-node2 <none> <none>
nginx-56d58c56c7-l6qzh 1/1 Running 1 92m 10.2.2.11 k8s-node1 <none> <none>
nginx-56d58c56c7-qlkn6 1/1 Running 1 92m 10.2.2.9 k8s-node1 <none> <none>
nginx-56d58c56c7-swsjd 1/1 Running 0 7m13s 10.2.1.6 k8s-node2 <none> <none>
nginx-56d58c56c7-thf5l 1/1 Running 0 50m 10.2.1.5 k8s-node2 <none> <none>
Timeline of pod rescheduling after a node fails
0. The master contacts each node periodically to decide whether it has gone missing; the period is node-monitor-period, default 5s.
1. After the node has been unreachable for a while, Kubernetes marks it NotReady; that duration is node-monitor-grace-period, default 40s.
2. After a longer period, Kubernetes considers the node unhealthy; that duration is node-startup-grace-period, default 1m0s.
3. After a still longer period, Kubernetes starts deleting the pods that were on the node; that duration is pod-eviction-timeout, default 5m0s.
In practice, to shorten the time it takes for pods to be restarted elsewhere, you can tune the parameters above.
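These are kube-controller-manager flags. On a kubeadm-installed cluster (an assumption here) they can be set in the controller-manager's static pod manifest; a sketch of the relevant excerpt, with the values being the defaults quoted above:
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt; the kubelet restarts the pod after saving)
    - --node-monitor-period=5s
    - --node-monitor-grace-period=40s
    - --node-startup-grace-period=1m0s
    - --pod-eviction-timeout=5m0s
Note that with taint-based evictions (the default in recent Kubernetes versions), the 5-minute eviction delay actually comes from the pods' default tolerations (tolerationSeconds=300 for not-ready/unreachable, visible in the describe output earlier), so those may need adjusting as well.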
Source: https://www.cnblogs.com/9527com/p/18018848