
Cloud Native Study Notes - DAY3


Advanced etcd and K8s Resource Management

1 Advanced etcd


1.1 etcd Configuration

etcd has no standalone configuration file; its parameters are loaded from the systemd service file.


1.2 etcd Election Mechanism

1.2.1 Election Overview

 etcd uses the Raft algorithm for cluster role election. Raft is also used by Consul, InfluxDB, Kafka (KRaft), and others.
 The Raft algorithm was proposed in 2013 by Diego Ongaro and John Ousterhout of Stanford University. Before Raft, Paxos was the best-known distributed consensus algorithm, but both its theory and its implementations are complex and hard to follow, so the two authors proposed Raft to address Paxos's problems such as being difficult to understand, difficult to implement, and limited in extensibility.
 Raft was designed for better readability, usability, and reliability so that engineers can understand and implement it more easily. It adopts some of Paxos's core ideas but simplifies and modifies them, aiming to keep the algorithm as simple as possible while still providing strong consistency guarantees and high availability.

1.2.2 etcd Node Roles

Each node in the cluster can only be in one of three states: Leader, Follower, or Candidate.
 follower: a follower (comparable to a Slave node in Redis Cluster)
 candidate: a candidate node, during an election
 leader: the leader node (comparable to the Master node in Redis Cluster)
 After startup, nodes vote for each other based on the term ID, an integer with a default value of 0. In Raft, a term represents one leader's period in office; whenever a node becomes leader a new term begins, and each node increments its term ID by 1 to distinguish the new election round from the previous one.


1.2.3 etcd Leader Election

(Figure: etcd leader election process)

1.3 etcd Configuration Tuning

1.3.1 Parameter Tuning

 --max-request-bytes=10485760 #request size limit (maximum size of a request; by default a single key is limited to 1.5 MiB, and the official recommendation is not to exceed 10 MiB)
 --quota-backend-bytes=8589934592 #storage size limit (backend database quota; the default is 2 GB, and a value above 8 GB produces a warning at startup)
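
These two flags are not set in a config file but appended to the etcd ExecStart line of the systemd unit (see 1.1). A minimal excerpt as a sketch, assuming a kubeasz-style layout with the binary at /usr/local/bin/etcd and the unit at /etc/systemd/system/etcd.service:

# /etc/systemd/system/etcd.service (excerpt; paths and the member name are assumptions)
[Service]
ExecStart=/usr/local/bin/etcd \
  --name=etcd-192.168.1.106 \
  --data-dir=/var/lib/etcd \
  --max-request-bytes=10485760 \
  --quota-backend-bytes=8589934592
# reload and restart after editing:
# systemctl daemon-reload && systemctl restart etcd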

1.3.2 Defragmentation

root@k8s-etcd1:~# ETCDCTL_API=3 /usr/local/bin/etcdctl defrag --cluster --endpoints=https://192.168.1.106:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem
Finished defragmenting etcd member[https://192.168.1.106:2379]
Finished defragmenting etcd member[https://192.168.1.107:2379]
Finished defragmenting etcd member[https://192.168.1.108:2379]
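
Defragmentation only reclaims space that has already been freed by compacting old revisions, so a compaction is often run first. A hedged sketch following the etcd space-quota documentation, reusing the endpoints and certificates from the defrag command above:

# read the current revision from one endpoint
rev=$(ETCDCTL_API=3 /usr/local/bin/etcdctl endpoint status --endpoints=https://192.168.1.106:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem --write-out=json | egrep -o '"revision":[0-9]*' | egrep -o '[0-9].*')
# compact away all revisions older than the current one, then run defrag as shown above
ETCDCTL_API=3 /usr/local/bin/etcdctl compact $rev --endpoints=https://192.168.1.106:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem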

1.4 Common etcd Commands

etcd exposes several API versions. v1 is deprecated; etcd v2 and v3 are essentially two independent applications sharing the same Raft code, with different interfaces and separate storage, so their data is isolated from each other. In other words, after upgrading from etcd v2 to etcd v3, data written through the v2 API can still only be accessed through the v2 API, and data created through the v3 API can only be accessed through the v3 API. On older K8s versions you may see the following message explaining how to select the etcdctl API version; newer K8s versions use etcd v3 by default.

WARNING:
Environment variable ETCDCTL_API is not set; defaults to etcdctl v2. #the v2 API is used by default
Set environment variable ETCDCTL_API=3 to use v3 API or ETCDCTL_API=2 to use v2 API. #select the API version

root@k8s-etcd1:~# etcdctl --help #show etcdctl help
root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl --help #show help for a specific API version

1.4.1 List etcd Cluster Members

root@k8s-etcd1:~# etcdctl member --help #show help for the etcdctl member command

root@k8s-etcd1:~# etcdctl member list #list all etcd member nodes
2a6ab8f664707d52, started, etcd-192.168.1.106, https://192.168.1.106:2380, https://192.168.1.106:2379, false
c6f4b8d228b548a4, started, etcd-192.168.1.107, https://192.168.1.107:2380, https://192.168.1.107:2379, false
ca70eae314bd4165, started, etcd-192.168.1.108, https://192.168.1.108:2380, https://192.168.1.108:2379, false

root@k8s-etcd1:~# etcdctl member list --write-out="table" #output as a table
+------------------+---------+--------------------+----------------------------+----------------------------+------------+
|        ID        | STATUS  |        NAME        |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+--------------------+----------------------------+----------------------------+------------+
| 2a6ab8f664707d52 | started | etcd-192.168.1.106 | https://192.168.1.106:2380 | https://192.168.1.106:2379 |      false |
| c6f4b8d228b548a4 | started | etcd-192.168.1.107 | https://192.168.1.107:2380 | https://192.168.1.107:2379 |      false |
| ca70eae314bd4165 | started | etcd-192.168.1.108 | https://192.168.1.108:2380 | https://192.168.1.108:2379 |      false |
+------------------+---------+--------------------+----------------------------+----------------------------+------------+

1.4.2 Check etcd Endpoint Health

root@k8s-etcd1:~# export NODE_IPS="192.168.1.106 192.168.1.107 192.168.1.108"
root@k8s-etcd1:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl endpoint health --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem; done
https://192.168.1.106:2379 is healthy: successfully committed proposal: took = 14.454889ms
https://192.168.1.107:2379 is healthy: successfully committed proposal: took = 12.552675ms
https://192.168.1.108:2379 is healthy: successfully committed proposal: took = 13.539823ms

1.4.3 Check Endpoint Status

The output shows which node is the current LEADER.

root@k8s-etcd1:~# export NODE_IPS="192.168.1.106 192.168.1.107 192.168.1.108"

root@k8s-etcd1:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl endpoint status --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem --write-out=table; done
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.1.106:2379 | 2a6ab8f664707d52 |   3.5.6 |  6.9 MB |     false |      false |        13 |     340434 |             340434 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.1.107:2379 | c6f4b8d228b548a4 |   3.5.6 |  6.9 MB |      true |      false |        13 |     340434 |             340434 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.1.108:2379 | ca70eae314bd4165 |   3.5.6 |  6.9 MB |     false |      false |        13 |     340434 |             340434 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

1.4.4 Inspect etcd Data

root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only #list all keys (keys only) by path prefix

List pod keys
 root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep pod

List namespace keys
 root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep namespaces

List Deployment controller keys
 root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep deployment

List Calico component keys:
 root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep calico
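
Dropping --keys-only prints the stored value as well. Kubernetes stores its objects under the /registry prefix in a mostly binary protobuf encoding, so the value is not plain text; a hedged example (the exact key depends on what exists in your cluster):

root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl get /registry/namespaces/default --print-value-only #value is largely binary protobuf; pipe it through a decoder such as auger to read it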

1.4.5 etcd CRUD Operations

Create data
 root@etcd1:~# ETCDCTL_API=3 /usr/local/bin/etcdctl put /name "tom"
OK

Read data
 root@etcd1:~# ETCDCTL_API=3 /usr/local/bin/etcdctl get /name
/name
tom

Update data (putting to the same key overwrites, i.e. updates, the value)
 root@etcd1:~# ETCDCTL_API=3 /usr/local/bin/etcdctl put /name "jack"
OK
Verify the update
 root@etcd1:~# ETCDCTL_API=3 /usr/local/bin/etcdctl get /name
/name
jack

Delete data
 root@etcd1:~# ETCDCTL_API=3 /usr/local/bin/etcdctl del /name
1
Verify the deletion
 root@etcd1:~# ETCDCTL_API=3 /usr/local/bin/etcdctl get /name #no value is returned, so the key has been deleted

1.5 etcd Watch Mechanism

Watch continuously monitors data and proactively notifies watching clients when a change occurs. The etcd v3 watch mechanism supports watching a single fixed key as well as watching a whole range (see the --prefix sketch below).

Watch a key on etcd node1; the key does not need to exist yet, it can be created later
root@k8s-etcd1:~# ETCDCTL_API=3 /usr/local/bin/etcdctl watch /data

Modify the data on etcd node2 and verify that etcd node1 sees the change
root@k8s-etcd2:~# ETCDCTL_API=3 /usr/local/bin/etcdctl put /data "data v1"
OK
root@k8s-etcd2:~# ETCDCTL_API=3 /usr/local/bin/etcdctl put /data "data v2"
OK
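
A whole range of keys can be watched with --prefix instead of a single key; a sketch:

root@k8s-etcd1:~# ETCDCTL_API=3 /usr/local/bin/etcdctl watch --prefix /data #prints a PUT or DELETE event for every change under the /data prefix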

The corresponding PUT events for both values are printed by the watch running on etcd node1.


1.6 etcd Backup and Restore

WAL stands for write-ahead log: before the real write operation is executed, a log entry is written first.
The WAL stores these pre-write log entries; its main role is to record the entire history of data changes. In etcd, every data modification must be written to the WAL before it is committed.

1.6.1 Backup and Restore for a Single-Node Deployment

1.6.1.1 Manual Backup and Restore of a Single-Node Deployment

When etcd is deployed as a single node, backups and restores can be done manually with etcdctl snapshot. Single-node deployments are for test environments only; production should use a highly available multi-node cluster.

Back up data with the v3 API:
root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl snapshot save /tmp/snapshot.db

Restore data with the v3 API:
1 Restore the backup into a new, not-yet-existing directory, e.g. /opt/etcd-testdir
root@k8s-etcd1:~# ETCDCTL_API=3 etcdctl snapshot restore /tmp/snapshot.db --data-dir=/opt/etcd-testdir
2 Stop the etcd service on the node and copy the restored data into etcd's default data directory /var/lib/etcd/default/
systemctl stop etcd && cp -rf /opt/etcd-testdir/member /var/lib/etcd/default/
3 Start the service again once the data has been copied
systemctl start etcd

1.6.1.2 Scripted Automatic Backup of a Single-Node Deployment

root@k8s-etcd1:~# mkdir /data/etcd-backup-dir/ -p #it is recommended to mount a dedicated storage partition at /data/etcd-backup-dir
root@k8s-etcd1:~# vim  etcd-backup.sh
#!/bin/bash
source /etc/profile
DATE=`date +%Y-%m-%d_%H-%M-%S`
ETCDCTL_API=3 /usr/local/bin/etcdctl snapshot save /data/etcd-backup-dir/etcd-snapshot-${DATE}.db
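
To run this automatically, the script can be scheduled with cron, e.g. a daily backup at 02:30 (a sketch; the script path and log path are assumptions):

root@k8s-etcd1:~# chmod +x /root/etcd-backup.sh
root@k8s-etcd1:~# crontab -e
30 2 * * * /bin/bash /root/etcd-backup.sh >> /var/log/etcd-backup.log 2>&1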

1.6.2 Backup and Restore for a Multi-Node Cluster

Use kubeasz ezctl to back up and restore the data of a highly available etcd cluster. In production the backup task can be added to crontab and run periodically.

1 The myserver namespace currently has one svc
root@k8s-master1:~/nginx-tomcat-case# kubectl get svc -n myserver
NAME                     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
myserver-nginx-service   NodePort   10.100.103.96   <none>        80:30004/TCP,443:30443/TCP   5s

2 Back up etcd with ezctl
root@k8s-deploy:/etc/kubeasz# ./ezctl backup k8s-cluster1

3 Verify the backup files were created
root@k8s-deploy:/etc/kubeasz# ll clusters/k8s-cluster1/backup/
total 14172
drwxr-xr-x 2 root root      57 Apr 25 18:01 ./
drwxr-xr-x 5 root root    4096 Apr 25 17:32 ../
-rw------- 1 root root 7249952 Apr 25 18:01 snapshot.db
-rw------- 1 root root 7249952 Apr 25 18:01 snapshot_202304251801.db

4 Delete the svc in the myserver namespace
root@k8s-master1:~/nginx-tomcat-case# kubectl delete -f nginx.yaml
deployment.apps "myserver-nginx-deployment" deleted
service "myserver-nginx-service" deleted
root@k8s-master1:~/nginx-tomcat-case# kubectl get svc -n myserver
No resources found in myserver namespace.

5 Restore etcd with ezctl
root@k8s-deploy:/etc/kubeasz# ./ezctl restore k8s-cluster1

6 Verify the svc in the myserver namespace is back, with the same CLUSTER-IP as before
root@k8s-master1:~/nginx-tomcat-case# kubectl get svc -n myserver
NAME                     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
myserver-nginx-service   NodePort   10.100.103.96   <none>        80:30004/TCP,443:30443/TCP   8m4s

1.6.3 Recovery Procedure After an etcd Cluster Outage (to be verified)

When more than half of the etcd cluster's nodes are down (e.g. two out of three), the whole cluster becomes unavailable and the data has to be restored afterwards. The recovery procedure is:
 1. Recover the server operating systems
 2. Redeploy the etcd cluster
 3. Stop kube-apiserver/controller-manager/scheduler/kubelet/kube-proxy (on the master nodes)
 4. Stop the etcd cluster
 5. Restore the same backup on every etcd node
 6. Start all nodes and verify the etcd cluster
 7. Start kube-apiserver/controller-manager/scheduler/kubelet/kube-proxy (on the master nodes)
 8. Verify the K8s master state and the pod data

2 CoreDNS Name Resolution Flow


2.1 Resolution Flow for Internal Domain Names

1 The pod sends an internal name resolution request to CoreDNS
2 CoreDNS checks whether it has the answer cached; if so, it returns the cached result directly
3 If there is no cached answer, CoreDNS looks the name up through the API server (whose data lives in etcd); if the record is found the result is returned, otherwise resolution fails
4 If step 3 succeeds, the result is returned to CoreDNS
5 CoreDNS returns the result to the pod and caches it
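
The reason a bare service name works at all is that the kubelet writes the CoreDNS ClusterIP and the cluster search domains into each pod's /etc/resolv.conf, so short names expand to <service>.<namespace>.svc.cluster.local. A hedged illustration (the DNS ClusterIP 10.100.0.2 matches the nslookup output in section 5.2.1):

# inside any pod
cat /etc/resolv.conf
# nameserver 10.100.0.2
# search default.svc.cluster.local svc.cluster.local cluster.local
nslookup kubernetes.default.svc.cluster.local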

2.2 Resolution Flow for External Domain Names

1 The pod sends an external name resolution request to CoreDNS
2 CoreDNS forwards the request to the company's internal DNS; if the internal DNS can resolve it, the result is returned
3 If the internal DNS cannot resolve it, the request is forwarded to public Internet DNS; if that succeeds the result is returned, otherwise resolution fails
4 When CoreDNS receives a successful answer, it returns the result to the pod and caches it
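
The forwarding in steps 2 and 3 is configured in the CoreDNS Corefile (the coredns ConfigMap in kube-system) through the forward plugin. A minimal sketch, assuming the internal DNS server sits at 192.168.1.254 (a hypothetical address):

.:53 {
    errors
    cache 30
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . 192.168.1.254     # unresolved names go to the internal DNS, which in turn forwards to Internet DNS
}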

3 K8s Pod Resources

Pods can be defined and created from a YAML file.

root@k8s-master1:~# kubectl explain pods #use explain to view a resource's definition format

root@k8s-master1:~/manifest/k8s-Resource-N76/case1-pod# vi mytest-pod.yaml
apiVersion: v1 # get the APIVERSION for pods with kubectl api-resources
kind: Pod # get the KIND of the pods resource with kubectl api-resources
metadata:
  name: mytest-pod #pod name
  labels: #pod labels
    app: nginx
  namespace: default #pod namespace, defaults to default
spec: #pod spec; the value is an <object>, written either as {} key/value pairs or as one key/value pair per line below
  containers: #container-related parameters of the pod spec; the value is a list of objects <[]object>
  - name: mytest-container1 #first container
    image: mynginx:v1
    ports: #ports of mytest-container1; the value is <[]object>, each list item starts with -
    - containerPort: 80
      hostPort: 8011 #map port 8011 on the node to port 80 in the container
  - name: mytest-container2 #second container
    image: alpine
    command: ['sh','-c','/bin/sleep 1000']
  dnsPolicy: ClusterFirst #dnsPolicy of the pod spec, defaults to ClusterFirst

root@k8s-master1:~/manifest/k8s-Resource-N76/case1-pod# kubectl apply -f mytest-pod.yaml

4 K8s Workloads (Job, CronJob, RC, RS, Deployment)

4.1 Job

Typically used for one-off tasks such as database initialization. A sample YAML definition:

root@k8s-master1:~# kubectl explain job #inspect the resource definition format with explain
root@k8s-master1:~/manifest/k8s-Resource-N76/case2-job# vi mytest-job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: mysql-init-job
  namespace: default #defaults to default if not specified
  #labels:
    #app: mysql-init-job
spec:
  template:
    spec:
      containers:
      - name: mysql-init-container
        image: alpine
        command: #the list can also be written in bracket form: ['/bin/sh','-c','echo "mysql init operation here" > /cache/init.log']
        - '/bin/sh'
        - '-c'
        - 'echo "mysql init operation here" > /cache/mysql_init.log'
        volumeMounts:
          - name: host-path-dir #must match a name defined under volumes, i.e. which volume to mount; names may only contain lowercase letters, digits and hyphens, not underscores
            mountPath: /cache #mount point inside the container: the host-path-dir volume is mounted at /cache, so reads and writes to /cache go to the volume
      volumes: #define the volumes
      - name: host-path-dir #volume name, user-defined; may only contain lowercase letters, digits and hyphens
        hostPath: #volume type
          path: /tmp/jobdata
      restartPolicy: Never #a Job only allows Never and OnFailure here, not Always; for a run-once Job it is usually Never, and with Always the Job cannot be created

root@k8s-master1:~/manifest/k8s-Resource-N76/case2-job# kubectl apply -f mytest-job.yaml
job.batch/mysql-init-job created
root@k8s-master1:~/manifest/k8s-Resource-N76/case2-job# kubectl get pods -o wide
NAME                   READY   STATUS      RESTARTS   AGE   IP               NODE            NOMINATED NODE   READINESS GATES
mysql-init-job-8qxrs   0/1     Completed   0          53s   10.200.182.153   192.168.1.113   <none>           <none>

root@k8s-node3:/tmp/jobdata# cat mysql_init.log #the job's operation completed and the data was written
mysql init operation here

4.2 CronJob

Typically used for periodic tasks such as data backups or report generation. By default a CronJob keeps the 3 most recent job records (see the history-limit sketch after the example). A sample YAML definition:

root@k8s-master1:~/manifest/k8s-Resource-N76/case2-job# kubectl explain cronjob #inspect the resource definition with explain

root@k8s-master1:~/manifest/k8s-Resource-N76/case2-job# vi mytest-cronjob.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysqlbackup-cronjob
spec:
  schedule: '* * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: mysqlbackup-cronjob-pod
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - echo "Hello from the K8s cluster at `date`" >> /cache/cronjob.log
            volumeMounts:
            - name: cache-volume
              mountPath: /cache
          volumes:
          - name: cache-volume
            hostPath:
              path: /tmp/cronjobdata
          restartPolicy: OnFailure

root@k8s-master1:~/manifest/k8s-Resource-N76/case2-job# kubectl apply -f mytest-cronjob.yaml
root@k8s-master1:~/manifest/k8s-Resource-N76/case2-job# kubectl get pods -o wide
NAME                                 READY   STATUS      RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
mysqlbackup-cronjob-28042977-jqmdn   0/1     Completed   0          2m49s   10.200.182.163   192.168.1.113   <none>           <none>
mysqlbackup-cronjob-28042978-sfrz4   0/1     Completed   0          109s    10.200.182.165   192.168.1.113   <none>           <none>
mysqlbackup-cronjob-28042979-v4x4s   0/1     Completed   0          49s     10.200.182.162   192.168.1.113   <none>           <none>

root@k8s-node3:/tmp# tail -f cronjobdata/cronjob.log
Hello from the K8s cluster at Thu Apr 27 06:49:21 UTC 2023
Hello from the K8s cluster at Thu Apr 27 06:50:00 UTC 2023
Hello from the K8s cluster at Thu Apr 27 06:51:00 UTC 2023
Hello from the K8s cluster at Thu Apr 27 06:52:00 UTC 2023
Hello from the K8s cluster at Thu Apr 27 06:53:00 UTC 2023
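
The "keeps the 3 most recent records" behavior is controlled by two optional fields on the CronJob spec; a sketch of how they would be added to mytest-cronjob.yaml:

spec:
  schedule: '* * * * *'
  successfulJobsHistoryLimit: 3   # how many completed Jobs to keep, default 3
  failedJobsHistoryLimit: 1       # how many failed Jobs to keep, default 1
  jobTemplate:                    # rest of the jobTemplate as in mytest-cronjob.yaml above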

4.3 ReplicationController

The first-generation replica controller; its selector only supports exact matching with = and !=.

root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# vi mytest-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mytest-rc
spec:
  replicas: 3
  selector:
    app: nginx-80-rc
    #app2: nginx-81-rc #the selector's labels must be a subset of (fewer than or equal to) template.metadata.labels; with extra selector labels the pods cannot be created
  template:
    metadata:
      name: mytest-rc-pod
      labels:
        app: nginx-80-rc
        app2: nginx-81-rc
    spec:
      containers:
      - name: mytest-rc-pod-container
        image: mynginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http-80
          containerPort: 80
          hostPort: 8081
      restartPolicy: Always


root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl apply -f mytest-rc.yaml

root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl get pods -o wide
NAME              READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
mytest-rc-6cqlx   1/1     Running   0          6m26s   10.200.117.19    192.168.1.111   <none>           <none>
mytest-rc-fgdnv   1/1     Running   0          6m26s   10.200.81.9      192.168.1.112   <none>           <none>
mytest-rc-qb5ql   1/1     Running   0          6m26s   10.200.182.148   192.168.1.113   <none>           <none>

Verify that after manually deleting a pod, the controller automatically recreates it
root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl delete pods mytest-rc-6cqlx
pod "mytest-rc-6cqlx" deleted
root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl get pods -o wide
NAME              READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
mytest-rc-fgdnv   1/1     Running   0          9m57s   10.200.81.9      192.168.1.112   <none>           <none>
mytest-rc-qb5ql   1/1     Running   0          9m57s   10.200.182.148   192.168.1.113   <none>           <none>
mytest-rc-tss5d   1/1     Running   0          2m38s   10.200.117.17    192.168.1.111   <none>           <none>

4.4 ReplicaSet

The second-generation replica controller; besides exact matching with = and !=, its selector also supports set-based matching with in and notin.

Reference: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicaset/

root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# vi mytest-rs.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mytest-rs
spec:
  replicas: 2
  selector:
    matchExpressions: #matchExpressions takes a list of objects
    #- {key: app, operator: In, values: [nginx-80-rs, nginx-81-rs]} #single-line list-object form; it can also be split across three lines as below
    - key: app
      operator: In
      values: [nginx-80-rs, nginx-81-rs] # values is a list of strings and can also be split across multiple lines
      #values:
      #- nginx-80-rs
      #- nginx-81-rs
    #matchLabels:
    #  app: nginx-80-rs
  template:
    metadata:
      labels:
        app: nginx-80-rs
    spec:
      containers:
      - name: mytest-rs-container
        image: mynginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
          hostPort: 8081

root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl apply -f mytest-rs.yaml

root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl get rs -o wide
NAME        DESIRED   CURRENT   READY   AGE     CONTAINERS            IMAGES       SELECTOR
mytest-rs   2         2         2       3m34s   mytest-rs-container   mynginx:v1   app=nginx-80-rs

root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl get pods -o wide
NAME              READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
mytest-rs-ptdl5   1/1     Running   0          3m39s   10.200.81.11     192.168.1.112   <none>           <none>
mytest-rs-pw9qp   1/1     Running   0          3m40s   10.200.182.149   192.168.1.113   <none>           <none>

A manually deleted pod is recreated
root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl delete pods mytest-rs-ptdl5
pod "mytest-rs-ptdl5" deleted
root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl get pods -o wide
NAME              READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
mytest-rs-mrswj   1/1     Running   0          7s      10.200.117.16    192.168.1.111   <none>           <none>
mytest-rs-pw9qp   1/1     Running   0          5m31s   10.200.182.149   192.168.1.113   <none>           <none>

4.5 Deployment

The third-generation pod controller, one level above ReplicaSet; besides the ReplicaSet features it adds many advanced capabilities, most importantly rolling updates and rollbacks.

Reference: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/deployment/

root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl explain deployments

root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# vi mytest-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mytest-deploy
spec:
  replicas: 2
  selector:
    matchExpressions:
    - key: app
      operator: In
      values: [nginx-80-deploy, nginx-81-deploy]
  template:
    metadata:
      labels:
        app: nginx-80-deploy
    spec:
      containers:
      - name: mytest-deploy-container
        image: mynginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
          hostPort: 8081


root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl get deploy -o wide
NAME            READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS                IMAGES       SELECTOR
mytest-deploy   2/2     2            2           2m40s   mytest-deploy-container   mynginx:v1   app in (nginx-80-deploy,nginx-81-deploy)
root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl get rs -o wide
NAME                       DESIRED   CURRENT   READY   AGE   CONTAINERS                IMAGES       SELECTOR
mytest-deploy-684695d765   2         2         2       3m    mytest-deploy-container   mynginx:v1   app in (nginx-80-deploy,nginx-81-deploy),pod-template-hash=684695d765
root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
mytest-deploy-684695d765-ffw9v   1/1     Running   0          3m10s   10.200.182.154   192.168.1.113   <none>           <none>
mytest-deploy-684695d765-m7r5x   1/1     Running   0          3m10s   10.200.81.17     192.168.1.112   <none>           <none>
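
The rolling update and rollback features mentioned above are driven with kubectl rollout; a hedged sketch against the Deployment created here (the nginx:1.20.0 image tag is only an example):

# trigger a rolling update by changing the container image
root@k8s-master1:~# kubectl set image deployment/mytest-deploy mytest-deploy-container=nginx:1.20.0
# watch the rollout and inspect the revision history
root@k8s-master1:~# kubectl rollout status deployment/mytest-deploy
root@k8s-master1:~# kubectl rollout history deployment/mytest-deploy
# roll back to the previous revision
root@k8s-master1:~# kubectl rollout undo deployment/mytest-deploy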

5 K8s Services

5.1 Service Types

 ClusterIP: for in-cluster access to services by service name
 NodePort: for clients outside the Kubernetes cluster to actively access services running inside the cluster
 LoadBalancer: for exposing services in public cloud environments
 ExternalName: maps a service outside the K8s cluster into the cluster so that in-cluster pods can reach the external service through a fixed service name; sometimes also used to let pods access services across namespaces (see the sketch below)
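
A minimal ExternalName sketch (the external host www.example.com is a placeholder); pods in the cluster can then reach the external service through the name my-external-svc:

apiVersion: v1
kind: Service
metadata:
  name: my-external-svc
  namespace: default
spec:
  type: ExternalName
  externalName: www.example.com #CoreDNS answers queries for my-external-svc with a CNAME to this host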

5.2 ClusterIP Services

Used to decouple access between pods inside the cluster.

5.2.1 ClusterIP Service Example

root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# cat 1-deploy_node.yml
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector: #label selector; the labels it defines must be a subset of the pod's metadata.labels for the pods to be selected
    #matchLabels: #rs or deployment
    #  app: ng-deploy3-80
    matchExpressions:
      - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80 # the pod label is usually kept consistent with the container name
    spec:
      containers:
      - name: ng-deploy-80 #container name
        image: nginx:1.17.5
        ports:
        - containerPort: 80
      #nodeSelector:
      #  env: group1

root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# vi mytest-svc-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80 #usually kept consistent with the selector label name
spec:
  type: ClusterIP
  selector: #label selector
    app: ng-deploy-80 #the selector label must match the pod's metadata.labels to select the backend pods
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP

Create the pods
root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# kubectl apply -f 1-deploy_node.yml
Create the ClusterIP service
root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# kubectl apply -f mytest-svc-clusterip.yaml

Verify the pods and the service were created
root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE            NOMINATED NODE   READINESS GATES
mynginx                             1/1     Running   0          43m   10.200.182.188   192.168.1.113   <none>           <none>
nginx-deployment-787957d974-qcjmf   1/1     Running   0          88m   10.200.81.30     192.168.1.112   <none>           <none>
root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# kubectl get svc -o wide
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE     SELECTOR
kubernetes     ClusterIP   10.100.0.1      <none>        443/TCP   6d23h   <none>
ng-deploy-80   ClusterIP   10.100.50.180   <none>        80/TCP    4m18s   app=ng-deploy-80

Verify access from inside a K8s pod, both to the pod directly and via the service
root@mynginx:/# curl -i 10.200.81.30 #access the pod IP directly; the request succeeds
HTTP/1.1 200 OK
Server: nginx/1.17.5
Date: Fri, 28 Apr 2023 10:07:43 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 22 Oct 2019 14:30:00 GMT
Connection: keep-alive
ETag: "5daf1268-264"
Accept-Ranges: bytes

root@mynginx:/# curl -i ng-deploy-80 #access the service by name; the request succeeds
HTTP/1.1 200 OK
Server: nginx/1.17.5
Date: Fri, 28 Apr 2023 10:08:08 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 22 Oct 2019 14:30:00 GMT
Connection: keep-alive
ETag: "5daf1268-264"
Accept-Ranges: bytes

Inside the pod, verify that the service name resolves to an address in the service network, 10.100.50.180
root@mynginx:/# nslookup ng-deploy-80
Server:         10.100.0.2
Address:        10.100.0.2#53
Name:   ng-deploy-80.default.svc.cluster.local
Address: 10.100.50.180

Check the ipvs rules on the node: port 80 of 10.100.50.180 is forwarded to port 80 of 10.200.81.30, which is the pod created above. This shows that when pods inside the cluster access the service name, the requests are forwarded to the backend pod.
root@k8s-node3:~# ipvsadm -ln -t 10.100.50.180:80
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.100.50.180:80 rr
  -> 10.200.81.30:80              Masq    1      0          0

5.2.2 ClusterIP Access Flow

When a pod inside the K8s cluster accesses a ClusterIP service by name, CoreDNS resolves the service name to the service address, and ipvs or iptables on the node forwards the traffic sent to the service address to the backend pods. The ipvs/iptables rules on the nodes are generated from the service definition.


5.3 NodePort Services

Used to expose services running inside the Kubernetes cluster to access from outside the cluster.

5.3.1 NodePort Service Example

root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# cat 1-deploy_node.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    #matchLabels: #rs or deployment
    #  app: ng-deploy3-80
    matchExpressions:
      - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.17.5
        ports:
        - containerPort: 80
      #nodeSelector: 
      #  env: group1

root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# vi mytest-svc-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  type: NodePort
  selector:
    app: ng-deploy-80 #must match the pod's metadata.labels to select the backend pods
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 38989
    protocol: TCP

Create the pods
root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# kubectl apply -f 1-deploy_node.yml
Create the NodePort service
root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# kubectl apply -f mytest-svc-nodeport.yaml

Verify the pods and the service were created
root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
mynginx                             1/1     Running   0          169m    10.200.182.188   192.168.1.113   <none>           <none>
nginx-deployment-787957d974-dtxwr   1/1     Running   0          11m     10.200.182.189   192.168.1.113   <none>           <none>
nginx-deployment-787957d974-qcjmf   1/1     Running   0          3h34m   10.200.81.30     192.168.1.112   <none>           <none>
root@k8s-master1:~/manifest/k8s-Resource-N76/case4-service# kubectl get svc -o wide
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE    SELECTOR
kubernetes     ClusterIP   10.100.0.1       <none>        443/TCP        7d1h   <none>
ng-deploy-80   NodePort    10.100.199.194   <none>        81:38989/TCP   41m    app=ng-deploy-80

Check the ipvs rules on a node: traffic to NodePort 38989 is forwarded to the corresponding pods

root@k8s-node3:~# ipvsadm -ln -t 192.168.1.113:38989
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.113:38989 rr
  -> 10.200.81.30:80              Masq    1      0          0
  -> 10.200.182.189:80            Masq    1      0          1

Verify access to the node port from outside the cluster: requests to NodePort 38989 are forwarded round-robin by ipvs on the node to the two backend pods


5.3.2 NodePort Access Flow

When an external user accesses the NodePort directly, ipvs or iptables on the node that receives the request forwards the traffic to the backend pods. The ipvs/iptables rules on the nodes are generated from the service definition.


6 K8s Volumes

A Volume decouples the data in a given container directory from the container itself and stores it in a specified location. Different volume types offer different capabilities; volumes backed by network storage can provide data sharing between containers as well as persistence.
Static storage volumes require a PV and PVC to be created manually before use and then bound to the pod.

Commonly used volume types:
 emptyDir: local ephemeral volume
 hostPath: local storage volume
 nfs and similar: network storage volumes
 Secret: an object holding a small amount of sensitive data such as passwords, tokens or keys
 configmap: configuration files (see the ConfigMap sketch after the reference link below)

Reference: https://kubernetes.io/zh/docs/concepts/storage/volumes/
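
Secret and ConfigMap volumes are consumed the same way as the other types, through spec.volumes plus volumeMounts; a minimal ConfigMap sketch (all names are hypothetical) that projects each key of the ConfigMap as a file under the mount path:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  app.conf: |
    server { listen 8080; }
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-test-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.20.0
    volumeMounts:
    - name: conf-volume
      mountPath: /etc/nginx/conf.d #the key app.conf appears as /etc/nginx/conf.d/app.conf
  volumes:
  - name: conf-volume
    configMap:
      name: nginx-conf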


6.1 emptyDir: Local Ephemeral Volume

When a pod is deleted, the emptyDir volume mounted by the pod is deleted with it.

1 Create the YAML file on the master node
root@k8s-master1:~/manifest/k8s-Resource-N76/case5-emptyDir# vi my_deploy_emptyDir.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment #the name is usually <app name>-<controller name>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-80-deploy #the selector labels must be a subset of the pod's metadata.labels to select the backend pods
  template:
    metadata:
      labels:
        app: nginx-80-deploy #labels are usually kept consistent with the container name
    spec:
      containers:
      - name: nginx-80-deploy
        image: mynginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: cache-volume
          mountPath: /cache
      volumes:
      - name: cache-volume
        emptyDir:
          sizeLimit: 500Mi

2 On the master node, create the pod that mounts the emptyDir volume
root@k8s-master1:~/manifest/k8s-Resource-N76/case5-emptyDir# kubectl apply -f my_deploy_emptyDir.yaml

root@k8s-master1:~/manifest/k8s-Resource-N76/case5-emptyDir# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE            NOMINATED NODE   READINESS GATES
nginx-deployment-6c54c6756d-fxmqk   1/1     Running   0          15s   10.200.182.190   192.168.1.113   <none>           <none>

3 From the master node, enter the pod's /cache directory and write some test data
root@k8s-master1:~/manifest/k8s-Resource-N76/case3-controller# kubectl exec -it nginx-deployment-6c54c6756d-fxmqk -- bash
root@nginx-deployment-6c54c6756d-fxmqk:/# cd /cache
root@nginx-deployment-6c54c6756d-fxmqk:/cache# echo "test info" > cache_test.txt

4 Log in to the node running the pod (step 2 shows it runs on 192.168.1.113) and change to the pods directory
root@k8s-node3:~# cd /var/lib/kubelet/pods/

5 On that node, search for the volume name and locate cache-volume
root@k8s-node3:/var/lib/kubelet/pods# find ./ -name cache-volume
./f34e3fbf-5daa-4ada-a763-50508dd1a5c7/volumes/kubernetes.io~empty-dir/cache-volume
./f34e3fbf-5daa-4ada-a763-50508dd1a5c7/plugins/kubernetes.io~empty-dir/cache-volume

6 On that node, enter the cache-volume directory
root@k8s-node3:/var/lib/kubelet/pods# cd f34e3fbf-5daa-4ada-a763-50508dd1a5c7/volumes/kubernetes.io~empty-dir/cache-volume

7 On that node, verify the data was written to the emptyDir volume
root@k8s-node3:/var/lib/kubelet/pods/f34e3fbf-5daa-4ada-a763-50508dd1a5c7/volumes/kubernetes.io~empty-dir/cache-volume# cat cache_test.txt
test info

8 From the master node, delete the deployment, which also deletes the pod
root@k8s-master1:~/manifest/k8s-Resource-N76/case5-emptyDir# kubectl delete -f my_deploy_emptyDir.yaml

9 On the node that previously ran the pod, verify that the emptyDir volume directory was deleted as well
root@k8s-node3:~# ls /var/lib/kubelet/pods/f34e3fbf-5daa-4ada-a763-50508dd1a5c7/volumes/kubernetes.io~empty-dir/cache-volume
ls: cannot access '/var/lib/kubelet/pods/f34e3fbf-5daa-4ada-a763-50508dd1a5c7/volumes/kubernetes.io~empty-dir/cache-volume': No such file or directory

6.2 hostPath: Local Storage Volume

After a pod is deleted, the hostPath volume it mounted is not deleted.

1 Create the YAML file
root@k8s-master1:~/manifest/k8s-Resource-N76/case6-hostPath# cat my_deploy_hostPath.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-80-deploy
  template:
    metadata:
      labels:
        app: nginx-80-deploy
    spec:
      containers:
      - name: nginx-80-deploy
        image: mynginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: cache-volume
          mountPath: /cache
      volumes:
      - name: cache-volume
        hostPath:
          path: /opt/k8sdata


2 Create the pod that mounts the hostPath volume
root@k8s-master1:~/manifest/k8s-Resource-N76/case6-hostPath# kubectl apply -f my_deploy_hostPath.yaml
deployment.apps/nginx-deployment created
root@k8s-master1:~/manifest/k8s-Resource-N76/case6-hostPath# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE            NOMINATED NODE   READINESS GATES
nginx-deployment-64485b56d9-fd7q2   1/1     Running   0          13s   10.200.182.173   192.168.1.113   <none>           <none>

3 Enter the pod and write data into /cache
root@k8s-master1:~/manifest/k8s-Resource-N76/case6-hostPath# kubectl exec -it nginx-deployment-64485b56d9-fd7q2 -- bash
root@nginx-deployment-64485b56d9-fd7q2:/# cd /cache
root@nginx-deployment-64485b56d9-fd7q2:/cache# echo "test hostPath volume write" > test_hostPath.log
root@nginx-deployment-64485b56d9-fd7q2:/cache# cat test_hostPath.log
test hostPath volume write

4 On the node running the pod, check whether the data was written to the hostPath volume
root@k8s-node3:~# cd /opt/k8sdata/
root@k8s-node3:/opt/k8sdata# ll
total 4
drwxr-xr-x 2 root root 31 Apr 29 14:08 ./
drwxr-xr-x 5 root root 50 Apr 29 14:06 ../
-rw-r--r-- 1 root root 27 Apr 29 14:08 test_hostPath.log
root@k8s-node3:/opt/k8sdata# cat test_hostPath.log
test hostPath volume write  

5 Delete the pod
root@k8s-master1:~/manifest/k8s-Resource-N76/case6-hostPath# kubectl delete -f my_deploy_hostPath.yaml
deployment.apps "nginx-deployment" deleted

6 On the node that previously ran the pod, the data in the hostPath volume still exists, which shows that deleting a pod does not delete the hostPath volume it mounted
root@k8s-node3:/opt/k8sdata# ll
total 4
drwxr-xr-x 2 root root 31 Apr 29 14:08 ./
drwxr-xr-x 5 root root 50 Apr 29 14:06 ../
-rw-r--r-- 1 root root 27 Apr 29 14:08 test_hostPath.log
root@k8s-node3:/opt/k8sdata# cat test_hostPath.log
test hostPath volume write

6.3 nfs: Network Storage Volume

When defining the pod, declare an nfs-type volume under volumes and mount it with containers.volumeMounts.

An nfs network volume can be shared by multiple pods for shared data storage. When the pods are deleted, the nfs volume is not deleted.

1 Make sure the nfs exports are set up on the nfs server
root@k8s-ha1:/data/k8sdata# showmount -e 192.168.1.109
Export list for 192.168.1.109:
/data/volumes         *
/data/k8sdata/pool2   *
/data/k8sdata/pool1   *
/data/k8sdata/kuboard *

2 Create the pods that mount the nfs volumes
root@k8s-master1:~/manifest/k8s-Resource-N76/case7-nfs# vi my_deploy_nfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-80-deploy
  template:
    metadata:
      labels:
        app: nginx-80-deploy
    spec:
      containers:
      - name: nginx-80-deploy
        image: mynginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: my-nfs-volume1
          mountPath: /usr/share/nginx/html/pool1
        - name: my-nfs-volume2
          mountPath: /usr/share/nginx/html/pool2
      volumes:
      - name: my-nfs-volume1
        nfs:
          server: 192.168.1.109
          path: /data/k8sdata/pool1
      - name: my-nfs-volume2
        nfs:
          server: 192.168.1.109
          path: /data/k8sdata/pool2

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-80-deploy #the service metadata.name is usually kept consistent with the spec.selector label
spec:
  type: NodePort
  selector:
    app: nginx-80-deploy
  ports:
  - name: http
    port: 80
    nodePort: 30017
    targetPort: 80
    protocol: TCP

root@k8s-master1:~/manifest/k8s-Resource-N76/case7-nfs# kubectl apply -f my_deploy_nfs.yaml
deployment.apps/nginx-deployment created
service/nginx-80-deploy created

3 Enter one of the pods and write data to the nfs volumes
root@k8s-master1:~/manifest/k8s-Resource-N76/case7-nfs# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-77f949fdf5-6tf57   1/1     Running   0          5s
nginx-deployment-77f949fdf5-rsf5q   1/1     Running   0          2m14s
root@k8s-master1:~/manifest/k8s-Resource-N76/case7-nfs# kubectl exec -it nginx-deployment-77f949fdf5-6tf57 -- bash
root@nginx-deployment-77f949fdf5-6tf57:/# cd /usr/share/nginx/html/
root@nginx-deployment-77f949fdf5-6tf57:/usr/share/nginx/html# ls
50x.html  index.html  pool1  pool2
root@nginx-deployment-77f949fdf5-6tf57:/usr/share/nginx/html# cd pool1
root@nginx-deployment-77f949fdf5-6tf57:/usr/share/nginx/html/pool1# echo "6tf57 pool1 dir" > test.info
root@nginx-deployment-77f949fdf5-6tf57:/usr/share/nginx/html/pool1# cat test.info
6tf57 pool1 dir
root@nginx-deployment-77f949fdf5-6tf57:/usr/share/nginx/html/pool1# cd ../pool2
root@nginx-deployment-77f949fdf5-6tf57:/usr/share/nginx/html/pool2# echo "6tf57 pool2 dir" > test.info
root@nginx-deployment-77f949fdf5-6tf57:/usr/share/nginx/html/pool2# cat test.info
6tf57 pool2 dir

4 In the other pod, verify that the same data is visible on the shared volumes
root@k8s-master1:~# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-77f949fdf5-6tf57   1/1     Running   0          5m40s
nginx-deployment-77f949fdf5-rsf5q   1/1     Running   0          7m49s
root@k8s-master1:~# kubectl exec -it nginx-deployment-77f949fdf5-rsf5q -- bash
root@nginx-deployment-77f949fdf5-rsf5q:/# cd /usr/share/nginx/html/pool1
root@nginx-deployment-77f949fdf5-rsf5q:/usr/share/nginx/html/pool1# cat test.info
6tf57 pool1 dir
root@nginx-deployment-77f949fdf5-rsf5q:/usr/share/nginx/html/pool1# cd ../pool2/
root@nginx-deployment-77f949fdf5-rsf5q:/usr/share/nginx/html/pool2# cat test.info
6tf57 pool2 dir

5 On the nfs server, verify the data written by the pod is visible
root@k8s-ha1:/data/k8sdata# cd pool1
root@k8s-ha1:/data/k8sdata/pool1# cat test.info
6tf57 pool1 dir
root@k8s-ha1:/data/k8sdata/pool1# cd ../pool2
root@k8s-ha1:/data/k8sdata/pool2# cat test.info
6tf57 pool2 dir

6 Delete the test pods
root@k8s-master1:~/manifest/k8s-Resource-N76/case7-nfs# kubectl delete -f my_deploy_nfs.yaml
deployment.apps "nginx-deployment" deleted
service "nginx-80-deploy" deleted

7 Verify the data on the nfs volumes still exists
root@k8s-ha1:/data/k8sdata/pool2# cat test.info
6tf57 pool2 dir
root@k8s-ha1:/data/k8sdata/pool2# cd ../pool1
root@k8s-ha1:/data/k8sdata/pool1# cat test.info
6tf57 pool1 dir

6.4 PVC

A PVC (PersistentVolumeClaim) can be either static or dynamic.

Difference: both kinds are consumed by pods in the same way, through a persistentVolumeClaim volume definition. The difference is in the PVC definition itself: a static PVC must declare the PV it uses with volumeName, while a dynamic PVC only needs to specify the storage class it uses with storageClassName.

6.4.1 Static PVC on nfs

Notes on static PVCs: the PV must be defined before the PVC, and the PVC declares the PV it uses with volumeName. The PVC and PV can only bind if their storageClassName values match; the PV's storageClassName defaults to an empty string, while the PVC's defaults to the name of the default StorageClass (a default StorageClass can be designated through the storageclass metadata.annotations). The safer approach is to set storageClassName explicitly on both the PV and the PVC; relying on the defaults sometimes prevents the PVC from binding to the PV.

1 Create the PV
root@k8s-master1:~/manifest/k8s-Resource-N76/case8-pv-static# vi mytest-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myserver-myapp-static-pv #PV name, usually in the form <namespace>-<app name>-<pv type>-pvXXX
  namespace: myserver #namespace field (note: PVs are cluster-scoped, so this has no real effect)
spec:
  storageClassName: nfs #storage class name; it must match the storageClassName in the PVC for binding to succeed; defaults to an empty value if not set
  volumeMode: Filesystem #volume mode; defaults to Filesystem if this parameter is not set
  accessModes: #access modes of the PV, a list of strings
  - ReadWriteMany
  capacity: #size of the PV
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain #reclaim policy; statically created PVs default to Retain, dynamically provisioned PVs default to Delete
  nfs: #the PV is backed by nfs
    server: 192.168.1.109
    path: /data/pv-volumes

root@k8s-master1:~/manifest/k8s-Resource-N76/case8-pv-static# kubectl apply -f mytest-pv.yaml

2 Create the PVC
root@k8s-master1:~/manifest/k8s-Resource-N76/case8-pv-static# vi mytest-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myserver-myapp-static-pvc #PVC name, usually <namespace>-<app name>-<pvc type>-pvcXXX
  namespace: myserver #namespace of the PVC
spec:
  #volumeMode: Filesystem #volume mode of the claim, defaults to Filesystem
  storageClassName: nfs #storage class name; if not set, the default storage class is used (check it with kubectl get storageclass). The PVC can only bind to the PV when their storageClassName values match, otherwise binding fails with a storageClassName mismatch error
  volumeName: myserver-myapp-static-pv #name of the PV being claimed; the PV is normally defined and created before the PVC
  accessModes: #access modes of the claim, a list of strings
  - ReadWriteMany
  resources: #requested size of the claim
    requests:
      storage: 10Gi

root@k8s-master1:~/manifest/k8s-Resource-N76/case8-pv-static# kubectl apply -f mytest-pvc.yaml

3 Create the pod and the svc
root@k8s-master1:~/manifest/k8s-Resource-N76/case8-pv-static# vi mytest-webserver.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-deployname
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-container
        image: mynginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: static-datadir
          mountPath: /usr/share/nginx/html/statics
      volumes:
      - name: static-datadir
        persistentVolumeClaim:
          claimName: myserver-myapp-static-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-servicename
  namespace: myserver
spec:
  type: NodePort
  selector:
    app: myserver-myapp-frontend #the svc selector label must match the pod's metadata.labels
  ports:
  - name: http
    port: 80
    nodePort: 30017
    targetPort: 80
    protocol: TCP

root@k8s-master1:~/manifest/k8s-Resource-N76/case8-pv-static# kubectl apply -f mytest-webserver.yaml

4 Verify the pod was created
root@k8s-master1:~/manifest/k8s-Resource-N76/case8-pv-static# kubectl get pods -o wide -n myserver
NAME                                        READY   STATUS    RESTARTS   AGE   IP               NODE            NOMINATED NODE   READINESS GATES
myserver-myapp-deployname-5784cdbf5-8nnzf   1/1     Running   0          96s   10.200.182.180   192.168.1.113   <none>           <none>

5 Enter the pod and confirm the PV mount directory is still empty
root@k8s-master1:~/manifest/k8s-Resource-N76/case8-pv-static# kubectl exec -it -n myserver myserver-myapp-deployname-5784cdbf5-8nnzf -- bash
root@myserver-myapp-deployname-5784cdbf5-8nnzf:/# cd /usr/share/nginx/html/
root@myserver-myapp-deployname-5784cdbf5-8nnzf:/usr/share/nginx/html# ls
50x.html  index.html  statics
root@myserver-myapp-deployname-5784cdbf5-8nnzf:/usr/share/nginx/html# cd statics/
root@myserver-myapp-deployname-5784cdbf5-8nnzf:/usr/share/nginx/html/statics# ls
root@myserver-myapp-deployname-5784cdbf5-8nnzf:/usr/share/nginx/html/statics# 

6 Write data into the PV on the nfs server
root@k8s-ha1:/data/pv-volumes# echo "static pv test" > test.info

7 In the pod, the written data is now visible
root@myserver-myapp-deployname-5784cdbf5-8nnzf:/usr/share/nginx/html/statics# ls
test.info
root@myserver-myapp-deployname-5784cdbf5-8nnzf:/usr/share/nginx/html/statics# cat test.info
static pv test

8 The written data is also reachable through the NodePort
root@k8s-deploy:~# curl http://192.168.1.189/statics/test.info
static pv test

6.4.2 Dynamic PVC on nfs

A dynamic PVC only needs to declare the storage class it uses with storageClassName.

1 Grant RBAC permissions to the serviceaccount
root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# cat 1-rbac.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner  
  # replace with namespace where provisioner is deployed 
  namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner 
  # replace with namespace where provisioner is deployed 
  namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# kubectl apply -f 1-rbac.yaml

2 Define the storage class
root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# cat 2-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
reclaimPolicy: Retain #deletion policy for the provisioned PVs; the default Delete removes the data on the NFS server as soon as the PV is released
mountOptions:
  #- vers=4.1 #some of these options misbehave with containerd
  #- noresvport #tell the NFS client to use a new TCP source port when re-establishing the network connection
  - noatime #do not update the access timestamp in the file inode; improves performance under high concurrency
parameters:
  #mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true"  #archive (keep) the data when the claim is deleted; with the default false the data is not kept

root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# kubectl apply -f 2-storageclass.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# kubectl get sc
NAME                  PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   k8s-sigs.io/nfs-subdir-external-provisioner   Retain          Immediate           false                  117s

3 Define the nfs provisioner
root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# cat 3-nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy: #deployment strategy
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.1.109
            - name: NFS_PATH
              value: /data/volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.109
            path: /data/volumes

root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# kubectl apply -f 3-nfs-provisioner.yaml

4 Create the dynamic PVC
root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# cat 4-create-pvc.yaml
# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myserver
spec:
  storageClassName: managed-nfs-storage #name of the storageclass to use
  accessModes:
    - ReadWriteMany #access mode
  resources:
    requests:
      storage: 500Mi #requested size

root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# kubectl apply -f 4-create-pvc.yaml
persistentvolumeclaim/myserver-myapp-dynamic-pvc created

5 Create the pod that uses the dynamic PVC volume
root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# cat 5-myapp-webserver.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-dynamic-pvc

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: myserver-myapp-frontend
root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# kubectl apply -f 5-myapp-webserver.yaml

6 Verify the pod is reachable and that the nfs volume is mounted inside the pod
root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# kubectl get pods -n myserver -o wide
NAME                                              READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
myserver-myapp-deployment-name-65ff65446f-j5hb7   1/1     Running   0          7m23s   10.200.182.154   192.168.1.113   <none>           <none>

root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# curl -i 192.168.1.113:30080
HTTP/1.1 200 OK
Server: nginx/1.20.0
Date: Fri, 05 May 2023 05:28:40 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 20 Apr 2021 13:35:47 GMT
Connection: keep-alive
ETag: "607ed8b3-264"
Accept-Ranges: bytes

root@k8s-master1:~/manifest/3-k8s-Resource-N76/case9-pv-dynamic-nfs# kubectl exec -it myserver-myapp-deployment-name-65ff65446f-j5hb7 -n myserver -- sh
# df -h
Filesystem                                                                                                Size  Used Avail Use% Mounted on
overlay                                                                                                    59G   12G   48G  20% /
tmpfs                                                                                                      64M     0   64M   0% /dev
/dev/vda3                                                                                                  59G   12G   48G  20% /etc/hosts
shm                                                                                                        64M     0   64M   0% /dev/shm
192.168.1.109:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-b1ff99d8-44e4-4676-8dac-f542b7fa406b   19G  5.9G   14G  32% /usr/share/nginx/html/statics
tmpfs                                                                                                     7.5G   12K  7.5G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                                     3.9G     0  3.9G   0% /proc/acpi
tmpfs                                                                                                     3.9G     0  3.9G   0% /proc/scsi
tmpfs                                                                                                     3.9G     0  3.9G   0% /sys/firmware
