
K8S StorageClass Resource - Hands-On Practice [Supplementary Notes]


Kubernetes Learning Index

1. Preparation

1.1 Official documentation

Supported storage provisioners: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/#provisioner

NFS provisioner: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/#nfs

1.2 nfs-subdir-external-provisioner project repository

https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

1.3 Clone the project locally

git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
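
The manifests used in the rest of this article correspond to the files shipped in the repository's deploy/ directory; a quick look (the exact listing may differ slightly between releases):

cd nfs-subdir-external-provisioner/deploy
ls
# expect rbac.yaml, deployment.yaml, class.yaml, test-claim.yaml, test-pod.yaml, among others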

2. Deploy nfs-subdir-external-provisioner

2.1 Create the RBAC resources

2.1.1 Define the resource manifest

cat > rbac.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF

2.1.2 Apply the resource manifest

deploy]# kubectl apply -f rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

2.2 Prepare the offline image

docker pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
docker tag registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 192.168.10.33:80/k8s/sig-storage/nfs-subdir-external-provisioner:v4.0.2
docker push 192.168.10.33:80/k8s/sig-storage/nfs-subdir-external-provisioner:v4.0.2
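
If you start from the upstream deploy/deployment.yaml, one way to point it at the private registry used above (192.168.10.33:80, the registry from this article's environment) is a simple sed, sketched below:

sed -i 's#registry.k8s.io/sig-storage#192.168.10.33:80/k8s/sig-storage#' deployment.yaml
grep 'image:' deployment.yaml   # confirm the image now points at the private registry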

2.3 Create the Deployment resource

2.3.1 Define the resource manifest

cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: 192.168.10.33:80/k8s/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.10.33
            - name: NFS_PATH
              value: /nfs-data/promtheus_data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.10.33
            path: /nfs-data/promtheus_data
EOF

# Field notes: NFS_SERVER and NFS_PATH point at the NFS server address and export path; PROVISIONER_NAME is the provisioner identifier that the StorageClass created later must reference.


2.3.2 Apply the resource manifest

deploy]# kubectl apply -f deployment.yaml 
deployment.apps/nfs-client-provisioner created

2.3.3 Check the Deployment status

deploy]# kubectl get pod
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-f9465f6c4-w8gf5   1/1     Running   0          4m9s
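
A quick way to double-check the three environment variables actually present on the running Deployment (a small jsonpath sketch):

kubectl get deploy nfs-client-provisioner \
  -o jsonpath='{range .spec.template.spec.containers[0].env[*]}{.name}={.value}{"\n"}{end}'
# PROVISIONER_NAME=k8s-sigs.io/nfs-subdir-external-provisioner
# NFS_SERVER=192.168.10.33
# NFS_PATH=/nfs-data/promtheus_data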

2.3.4 Check the logs

deploy]# kubectl logs nfs-client-provisioner-f9465f6c4-w8gf5 
I0414 14:47:49.501029       1 leaderelection.go:242] attempting to acquire leader lease  default/k8s-sigs.io-nfs-subdir-external-provisioner...
I0414 14:47:49.507138       1 leaderelection.go:252] successfully acquired lease default/k8s-sigs.io-nfs-subdir-external-provisioner
I0414 14:47:49.507374       1 controller.go:820] Starting provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-f9465f6c4-w8gf5_9c86b64e-b49c-4873-b344-c8d6d3a83240!
I0414 14:47:49.508515       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"k8s-sigs.io-nfs-subdir-external-provisioner", UID:"21e32682-a95a-4e7b-96e7-c285bd35e110", APIVersion:"v1", ResourceVersion:"40247", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-f9465f6c4-w8gf5_9c86b64e-b49c-4873-b344-c8d6d3a83240 became leader
I0414 14:47:49.608224       1 controller.go:869] Started provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-f9465f6c4-w8gf5_9c86b64e-b49c-4873-b344-c8d6d3a83240!

2.4 NFS configuration

2.4.1 exports

]# cat /etc/exports
/nfs-data/promtheus_data *(rw,all_squash,anonuid=1000,anongid=1000)
/nfs-data/alertmanager_data *(rw,all_squash,anonuid=1000,anongid=1000)

2.4.2 Create the user

groupadd -g 1000 nfs
useradd -u 1000 -g 1000 nfs

]# id nfs
uid=1000(nfs) gid=1000(nfs) groups=1000(nfs)

2.4.3 Change directory ownership

# Note: the owner and group of the export directories must be changed to match the anonuid/anongid above
chown -R nfs.nfs alertmanager_data promtheus_data
]# ll /nfs-data/
drwxr-xr-x 2 nfs  nfs   4096 Apr 14 00:30 alertmanager_data
drwxr-xr-x 3 nfs  nfs   4096 Apr 14 23:14 promtheus_data

2.4.4 Restart the NFS service

systemctl restart nfs
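
After restarting, it is worth confirming that the exports are actually active. A quick check (the unit name may be nfs-server on some distributions, and showmount assumes nfs-utils is installed):

exportfs -v
showmount -e 192.168.10.33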

2.5 Create the StorageClass resource

2.5.1 Define the resource manifest

cat > class.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"
EOF

# The provisioner name must match the PROVISIONER_NAME environment variable set in the Deployment.
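
Once the StorageClass is applied in the next step, a simple way to verify the two values really match (a sketch; both commands should print k8s-sigs.io/nfs-subdir-external-provisioner):

kubectl get sc nfs-sc -o jsonpath='{.provisioner}{"\n"}'
kubectl get deploy nfs-client-provisioner \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="PROVISIONER_NAME")].value}{"\n"}'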

2.5.2 Apply the resource manifest

deploy]# kubectl apply -f class.yaml 
storageclass.storage.k8s.io/nfs-sc created

2.5.3 Check the StorageClass resource

deploy]# kubectl get sc
NAME     PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-sc   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  11s

3. Create a PVC Resource - Test

3.1 Goals

1. Confirm whether the PV is created and deleted automatically
2. Check whether the data is wiped when the PVC is deleted

3.2 Create the PVC resource

3.2.1 Define the resource manifest

cat > test-claim.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF

3.2.2 Apply the resource manifest

deploy]# kubectl apply -f test-claim.yaml 
persistentvolumeclaim/test-claim created
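
Before digging into the logs, a quick check that the claim has bound (a sketch; STATUS should show Bound and VOLUME the dynamically created pvc-... name):

kubectl get pvc test-claim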

3.3 Analyze the result

3.3.1 Check the nfs-client-provisioner logs

~]# kubectl logs nfs-client-provisioner-f9465f6c4-w8gf5
I0414 15:28:31.580103       1 controller.go:1317] provision "default/test-claim" class "nfs-sc": started
I0414 15:28:31.582795       1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-claim", UID:"cac9523c-e952-46c6-9c9d-2fbbd45a0177", APIVersion:"v1", ResourceVersion:"45275", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/test-claim"
I0414 15:28:31.587202       1 controller.go:1420] provision "default/test-claim" class "nfs-sc": volume "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177" provisioned
I0414 15:28:31.587257       1 controller.go:1437] provision "default/test-claim" class "nfs-sc": succeeded
I0414 15:28:31.587262       1 volume_store.go:212] Trying to save persistentvolume "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177"
I0414 15:28:31.590381       1 volume_store.go:219] persistentvolume "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177" saved
I0414 15:28:31.590614       1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-claim", UID:"cac9523c-e952-46c6-9c9d-2fbbd45a0177", APIVersion:"v1", ResourceVersion:"45275", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177
# Note: the PV was created automatically and successfully.

3.3.2 Check the PV resource

~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177   1Mi        RWX            Delete           Bound    default/test-claim   nfs-sc                  2m38s
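
To see exactly which NFS export and sub-path the dynamically created PV points at, describing it works (the PV name is the one from the listing above):

kubectl describe pv pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177 | grep -A 5 'Source:'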

3.3.3 Check the NFS directory

nfs-data]# tree 
.
├── alertmanager_data
├── lost+found
└── promtheus_data
    └── default-test-claim-pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177
# A subdirectory named after the namespace, PVC, and PV was created automatically

3.4 Delete the PVC and analyze

3.4.1 Delete the PVC

deploy]# kubectl delete -f test-claim.yaml 
persistentvolumeclaim "test-claim" deleted

3.4.2 Check the nfs-client-provisioner logs


I0414 15:35:59.999635       1 controller.go:1450] delete "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177": started
I0414 15:36:00.006549       1 controller.go:1478] delete "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177": volume deleted
I0414 15:36:00.010271       1 controller.go:1524] delete "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177": persistentvolume deleted
I0414 15:36:00.010294       1 controller.go:1526] delete "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177": succeeded

# Note: the PV was deleted successfully.

3.4.3 Check the NFS directory

nfs-data]# tree 
.
├── alertmanager_data
├── lost+found
└── promtheus_data

# The PV's subdirectory has been removed

4. Create a Pod - Test

4.1 Goals

1. Verify that a Pod can mount the volume and write data to it
2. The PVC needs to be created again; see section [3.2.1 Define the resource manifest] (the one-liner right after this list)
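
Since the PVC was removed in the previous test, re-apply the same manifest from 3.2.1 before creating the Pod:

kubectl apply -f test-claim.yaml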

4.2 Create the Pod

4.2.1 Define the resource manifest

cat > test-pod.yaml <<EOF
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:stable
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF

4.2.2 Apply the resource manifest

deploy]# kubectl apply -f test-pod.yaml 
pod/test-pod created

4.3 Analyze the result

4.3.1 Check the Pod status

deploy]# kubectl get pod -w
NAME                                     READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-f9465f6c4-w8gf5   1/1     Running             0          55m
test-pod                                 0/1     ContainerCreating   0          12s
test-pod                                 0/1     Completed           0          27s
test-pod                                 0/1     Completed           0          29s

# The Pod mounts the directory, writes the file, and then exits.

4.3.2 Check the data in the NFS directory

nfs-data]# tree 
.
├── alertmanager_data
├── lost+found
└── promtheus_data
    └── default-test-claim-pvc-55088be5-f8c3-43e8-871f-5c38e482639b
        └── SUCCESS   # the mount worked and the data was written successfully
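
When the test is done, the Pod and PVC can be cleaned up; what then happens to the PV and the NFS data depends on the policies discussed in the next section:

kubectl delete -f test-pod.yaml
kubectl delete -f test-claim.yaml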

5. How the StorageClass reclaim policy affects data

5.1 archiveOnDelete: false & reclaimPolicy: Delete

archiveOnDelete: "false"
reclaimPolicy: Delete   # not set in the manifest above; Delete is the default

Results:
1. After the Pod is deleted and recreated, the data still exists; the old claim's directory and data remain available to the new Pod.
2. After the SC is deleted and recreated, the data still exists; the old claim's directory and data remain available to the new Pod.
3. After the PVC is deleted, the PV is deleted and the corresponding data on the NFS server is removed.

5.2 archiveOnDelete: true & reclaimPolicy: Delete

archiveOnDelete: "true"
reclaimPolicy: Delete

1. After the Pod is deleted and recreated, the data still exists; the old claim's directory and data remain available to the new Pod.
2. After the SC is deleted and recreated, the data still exists; the old claim's directory and data remain available to the new Pod.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is kept.
4. After the SC is recreated, a new PVC binds to a new PV; the old data can be copied into the new PV.

5.3 archiveOnDelete: false & reclaimPolicy: Retain

archiveOnDelete: "false"
reclaimPolicy: Retain

1. After the Pod is deleted and recreated, the data still exists; the old claim's directory and data remain available to the new Pod.
2. After the SC is deleted and recreated, the data still exists; the old claim's directory and data remain available to the new Pod.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is kept.
4. After the SC is recreated, a new PVC binds to a new PV; the old data can be copied into the new PV.

5.4 archiveOnDelete: true & reclaimPolicy: Retain

archiveOnDelete: "true"
reclaimPolicy: Retain

1. After the Pod is deleted and recreated, the data still exists; the old claim's directory and data remain available to the new Pod.
2. After the SC is deleted and recreated, the data still exists; the old claim's directory and data remain available to the new Pod.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is kept.
4. After the SC is recreated, a new PVC binds to a new PV; the old data can be copied into the new PV.

5.5 Summary

Except for the first combination, the other three keep the data on the NFS server after the PV/PVC is deleted.
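
The StorageClass used in this article only sets archiveOnDelete; reclaimPolicy falls back to its default of Delete. A sketch of a manifest that sets both fields explicitly, assuming the same provisioner deployed above (the name nfs-sc-retain is just for illustration):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc-retain            # hypothetical name, for illustration only
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Retain            # keep the PV and NFS data when the PVC is deleted (default is Delete)
parameters:
  archiveOnDelete: "false"       # when the provisioner does delete a volume, remove its directory instead of renaming it to archived-*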

 

From: https://www.cnblogs.com/ygbh/p/17319970.html
