Part 9: The StatefulSet Controller
References:
https://blog.csdn.net/styshoo/article/details/73731993
https://blog.51cto.com/xuexinhuan/5424144
Even with stateful-workload management in place, day-to-day operations still have to be scripted.
CoreOS: Operator (the Operator pattern grew out of this need)
StatefulSet: stateful
Cattle vs. pets: stateless workloads are cattle (interchangeable); stateful ones are pets (each one unique)
PetSet (renamed StatefulSet) guarantees:
a. stable, unique network identifiers;
b. stable, persistent storage;
c. ordered, graceful deployment and scaling;
d. ordered, graceful termination and deletion;
e. ordered rolling updates (update the replicas first, then the master).
Three required components:
- headless Service: must be headless, i.e. clusterIP set to None;
- the StatefulSet controller definition itself;
- volumeClaimTemplates: each pod requests a dedicated PVC and PV of its own, not shared with other pods.
StatefulSet controller:
Take Redis running in cluster mode as an example:
- each replica is assigned its own distinct storage volume; with one master and two replicas, every node holds different hash slots;
- a pod's name must never change, because the name is the pod's unique identifier.
volumeClaimTemplates: requests a PVC dynamically for each pod, provided usable PVs have been created in advance.
Building on the previous section, delete and recreate the PVs so that the first three each have 5Gi of capacity.
[root@k8s-master ~]# kubectl delete pvc mypvc
[root@k8s-master ~]# kubectl delete pv pv001 pv002 pv003 pv004 pv005
pv003 is still occupied and therefore hard to delete, but it has to go. How do we find out which pod is holding it? The CLAIM column of kubectl get pv shows the PVC each PV is bound to, which leads back to the pod.
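One way to trace the binding from the command line (a sketch; mypvc is the claim from the previous section, and the jsonpath fields are standard PV and pod spec fields):

# which namespace/PVC is pv003 bound to?
kubectl get pv pv003 -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name}{"\n"}'
# list every pod together with the PVCs it mounts
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}'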
[root@k8s-master volumes]# kubectl delete pod pod-vol-pvc    # once the pod is gone, the pending PVC deletion completes and the PV is released
Recreate the PVs
[root@k8s-master volumes]# kubectl create -f pv-nfs-d.yaml
[root@k8s-master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Available 14s
pv002 5Gi RWO Retain Available 14s
pv003 5Gi RWO,RWX Retain Available 14s
pv004 10Gi RWO,RWX Retain Available 14s
pv005 10Gi RWO,RWX Retain Available 14s
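pv-nfs-d.yaml itself is carried over from the volumes section and not reproduced in these notes; a minimal sketch of one entry matching pv001 above (the NFS server address and export path are placeholders) might look like:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1        # placeholder export path
    server: nfs.example.com       # placeholder NFS server
  accessModes: ["ReadWriteOnce", "ReadWriteMany"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain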
Create the pods
Any stateful application should be composed of three parts (headless Service, StatefulSet, volumeClaimTemplates), as in the example below.
[root@master stateful]# vim stateful-demo.yaml    # edit the manifest
[root@k8s-master stateful]# cat stateful-demo.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp-svc
  name: myapp-svc
spec:
  clusterIP: None            # headless, as required by StatefulSet
  ports:
  - port: 80
    name: web
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-svc
  replicas: 2
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp-pod
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
[root@k8s-master stateful]# kubectl create -f stateful-demo.yaml
[root@k8s-master stateful]# kubectl get pvc
[root@k8s-master stateful]# kubectl get pv    # both get pv and get pvc now show the bindings
By default the PVCs are not deleted along with the StatefulSet, so the data lives on, as shown below:
[root@k8s-master stateful]# kubectl delete -f stateful-demo.yaml
service "myapp-svc" deleted
statefulset.apps "myapp" deleted
[root@k8s-master stateful]# kubectl get pods
No resources found.
[root@k8s-master stateful]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myappdata-myapp-0 Bound pv002 5Gi RWO 14m
myappdata-myapp-1 Bound pv001 5Gi RWO,RWX 14m
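If the storage really does need to be reclaimed, the PVCs must be deleted explicitly (a sketch; with the Retain reclaim policy, the released PVs also need manual cleanup before reuse):

kubectl delete pvc myappdata-myapp-0 myappdata-myapp-1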
Because the PVCs survive by default, a pod that keeps its name after any delete-and-recreate is re-bound to the same storage volume. Inside the cluster, the pods' hostnames resolve automatically:
[root@k8s-master stateful]# kubectl exec -it myapp-0 -- nslookup myapp-0
nslookup: can't resolve '(null)': Name does not resolve
Name: myapp-0
Address 1: 10.244.1.14 myapp-0.myapp-svc.default.svc.cluster.local
[root@k8s-master stateful]# kubectl exec -it myapp-0 -- nslookup myapp-1
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'myapp-1': Name does not resolve
command terminated with exit code 1
[root@k8s-master stateful]# kubectl exec -it myapp-0 -- nslookup myapp-1.myapp-svc.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name: myapp-1.myapp-svc.default.svc.cluster.local
Address 1: 10.244.2.48 myapp-1.myapp-svc.default.svc.cluster.local
DNS resolution rule:
pod_name.service_name.ns_name.svc.cluster.local
myapp-1.myapp-svc.default.svc.cluster.local
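The stable names can also be exercised from a throwaway client pod (a sketch; busybox:1.28 and the dns-test name are illustrative choices, and myapp serves HTTP on port 80 per the manifest above):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  wget -qO- http://myapp-0.myapp-svc.default.svc.cluster.local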
===
Scaling out is also ordered: pods are created one at a time.
[root@k8s-master stateful]# kubectl scale sts myapp --replicas=4
[root@k8s-master stateful]# kubectl patch sts myapp -p '{"spec":{"replicas":5}}'    # either command scales the StatefulSet
Scale-down follows the same logic, removing one pod at a time from the highest ordinal down.
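The ordering is easy to observe from a second terminal (a sketch; the label selector comes from the manifest above):

kubectl get pods -l app=myapp-pod -w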
Partitioned updates
For example, define a partition of "partition": 3; it can be set on the StatefulSet directly with patch or edit:
kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"type":"RollingUpdate", "rollingUpdate":{"partition":3}}}}'
Because pod myapp-2 has an ordinal lower than the partition value 3, it is not updated; even if deleted, it is recreated from the previous revision.
In this way updates can be staged, similar to a grey/canary release.
There are likewise two ways to update the image version:
[root@k8s-master stateful]# kubectl set image sts/myapp myapp-pod=ikubernetes/myapp:v2
[root@k8s-master stateful]# kubectl patch sts myapp -p '{"spec":...}'    # to be completed
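For reference, the image bump can also be expressed as an explicit patch (a sketch using a JSON patch; container index 0 matches the single container in the manifest above):

kubectl patch sts myapp --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"ikubernetes/myapp:v2"}]'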
[root@k8s-master stateful]# kubectl get sts -owide
NAME READY AGE CONTAINERS IMAGES
myapp 5/5 133m myapp-pod ikubernetes/myapp:v2
The demo below walks through a partitioned update: set the partition, scale out, then update the image.
[root@k8s-master stateful]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"type":"RollingUpdate", "rollingUpdate":{"partition":3}}}}'
statefulset.apps/myapp patched
[root@k8s-master stateful]# kubectl scale sts myapp --replicas=5
statefulset.apps/myapp scaled
[root@k8s-master stateful]# for p in `seq 0 4`; do kubectl get po myapp-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
ikubernetes/myapp:v2
ikubernetes/myapp:v2
ikubernetes/myapp:v2
ikubernetes/myapp:v2
ikubernetes/myapp:v2
[root@k8s-master stateful]# kubectl set image sts/myapp myapp-pod=ikubernetes/myapp:v3
[root@k8s-master stateful]# for p in `seq 0 4`; do kubectl get po myapp-$p --template '{{range $i, $c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
ikubernetes/myapp:v2
ikubernetes/myapp:v2
ikubernetes/myapp:v2
ikubernetes/myapp:v3
ikubernetes/myapp:v3
This works like a canary release: shrink the partition value bit by bit to upgrade the fleet gradually.
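A sketch of finishing such a rollout by lowering the partition (strategic-merge patches; partition 0 lets every remaining pod update):

# allow ordinals >= 1 to update
kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":1}}}}'
# allow all pods to update
kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'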
For verification and further practice, see the official docs and example repos on GitHub; Redis, MySQL, and the like are worth trying.
For now, running stateful (StatefulSet) services on Kubernetes remains fairly difficult.