
17_2 Kubernetes CKA Mock Exam Summary


Before starting a question, make sure you are in the required context.

# Show the current context
kubectl config current-context
# Output: kubernetes-admin@kubernetes

# Switch to the specified context
kubectl config use-context kubernetes-admin@kubernetes
# Output: Switched to context "kubernetes-admin@kubernetes".

Mock Questions

01 RBAC

Create a service account name dev-sa in default namespace, dev-sa can create below components in dev namespace:

  • Deployment
  • StatefulSet
  • DaemonSet

Analysis: this question is about RBAC. Create a service account named dev-sa in the default namespace that is allowed to create Deployments, StatefulSets and DaemonSets in the dev namespace. First create the service account dev-sa, the namespace dev and a role, then grant the permission with a rolebinding.

# Create the service account dev-sa in the default namespace
kubectl create sa dev-sa -n=default
# Create the namespace dev
kubectl create ns dev
# In namespace dev, create a role sa-role that can create deployments, statefulsets and daemonsets
kubectl create role sa-role -n=dev \
--resource=deployment,statefulset,daemonset --verb=create
# Bind the role sa-role to the service account dev-sa in the default namespace
kubectl create rolebinding sa-rolebinding -n=dev \
--role=sa-role --serviceaccount=default:dev-sa
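An optional sanity check (not required by the question) is to impersonate the service account with kubectl auth can-i:

# should return "yes"
kubectl auth can-i create deployments -n dev --as=system:serviceaccount:default:dev-sa
# should return "no" (only the create verb was granted)
kubectl auth can-i delete deployments -n dev --as=system:serviceaccount:default:dev-sa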

02 Volume

Create a pod named log. A container named log-pro uses image busybox and outputs the important information to /log/data/output.log. Another container named log-cus uses image busybox, loads the output.log at /log/data/output.log and prints it. Note: this log file can only be shared within the pod.

Analysis: this question is about pods and volumes. Create a pod with two containers: one writes some information to /log/data/output.log, the other loads that log file and prints it. Since the question states the log file may only be shared within this pod, an emptyDir volume is used. No namespace is specified, so default is used.

Key point: multi-container pod

# 1 Generate a YAML skeleton with --dry-run
kubectl run log --image=busybox --dry-run=client -o yaml > log-pod.yaml
# Docs: https://kubernetes.io/docs/concepts/storage/volumes/

# 2 Edit the YAML as required:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: log
  name: log
spec:
  containers:
  - image: busybox
    name: log-pro
    command: ["sh","-c","echo important information >> /log/data/output.log;sleep 1d"]
    volumeMounts:
    - name: date-log
      mountPath: /log/data
    resources: {}
  - image: busybox
    name: log-cus
    command: ["sh","-c","cat /log/data/output.log;sleep 1d"]
    volumeMounts:
    - name: date-log
      mountPath: /log/data
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  - name: date-log
    emptyDir: {}
status: {}

# 3 Create the pod from the edited YAML
kubectl create -f log-pod.yaml

# 4 Test
kubectl logs log -c log-cus
# Output: important information
kubectl exec -it log -c log-pro -- cat /log/data/output.log
# Output: important information

# On the node, the shared emptyDir volume lives under the kubelet pod directory, e.g.
cd /var/lib/kubelet/pods/9ce9a5ea-eb10.../volumes/kubernetes.io~empty-dir/data

03 NetworkPolicy

Only pods that in the internal namespace can access to the pods in mysql namespace via port 8080/TCP

Analysis: this question is about NetworkPolicy. Only pods in the internal namespace may reach pods in the mysql namespace over TCP port 8080, so use an ingress rule in a NetworkPolicy.

network-policy:https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/

# (1) Adapt the NetworkPolicy example from the docs as follows
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cka-network
  namespace: mysql
spec:
    # an empty podSelector selects every pod in this namespace
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          # match on this label; if the internal namespace does not have it yet, add it first (see below)
          ns: internal
    ports:
    - protocol: TCP
      port: 8080
# (2) Preparation
kubectl create ns internal
kubectl create ns mysql
# (3) Check whether the namespace already carries the label ns=internal used above
kubectl get ns --show-labels
# (4) If not, add it; it only has to match the namespaceSelector in the YAML
kubectl label namespace internal ns=internal
# (5) Create the network policy
kubectl apply -f network-policy.yaml

# (6) Check
kubectl describe networkpolicy cka-network -n=mysql
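Optionally the policy can be tested with throwaway pods; the names web/test1/test2 below are made up for this check and <pod-ip> stands for the IP shown for the server pod:

# server pod in the mysql namespace listening on 8080 (busybox httpd)
kubectl run web -n mysql --image=busybox --restart=Never --command -- sh -c 'httpd -f -p 8080'
kubectl get pod web -n mysql -o wide    # note the pod IP
# from the internal namespace: the connection is allowed (even a 404 response proves it got through)
kubectl run test1 -n internal --image=busybox --restart=Never --rm -it -- wget -T2 -O- http://<pod-ip>:8080
# from the default namespace: the request should time out
kubectl run test2 --image=busybox --restart=Never --rm -it -- wget -T2 -O- http://<pod-ip>:8080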

04 pod to node

Create a pod named nginx and schedule it to the node disk=stat

Create a pod named nginx and schedule it onto the node labelled disk=stat.

pod to nodes: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/

# (1) Preparation
kubectl label nodes k8s-node1 disk=stat # add the label
kubectl label nodes k8s-node1 disk- # remove the label (if needed)
kubectl get nodes --show-labels # list node labels
# (2) Generate nginx.yaml
kubectl run nginx --image=busybox --restart=Never --dry-run=client -o yaml > nginx.yaml

Edit nginx.yaml as follows:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: busybox
    name: nginx
    resources: {}
  nodeSelector:
    disk: stat
# (3) Create the pod
kubectl apply -f nginx.yaml
# (4) Verify
kubectl get pods -o wide

Note: nodeSelector is a hard requirement. If no node with the matching label has free resources, the Pod stays Pending.
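If the Pod does stay Pending (for example when no node carries disk=stat), the scheduler's reason shows up in the Pod events; the exact wording varies by version:

kubectl describe pod nginx | grep -A5 Events
# typically something like: 0/N nodes are available: N node(s) didn't match Pod's node affinity/selector.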

05 Save Logs

Set configuration context: $ kubectl config use-context k8s. Monitor the logs of Pod foobar and extract log lines corresponding to error unable-to-access-website. Write them to /opt/KULM00612/foobar

Analysis: read the logs of pod foobar and save the lines matching unable-to-access-website to the given file.

# (1) Switch to the required context
kubectl config use-context k8s
# (2) Save the matching log lines
kubectl logs foobar | grep 'unable-to-access-website' >> /opt/KULM00612/foobar

06 Daemonset

Start a daemonset named daemon-test, the pod name inside is nginx, use nginx image.

Start a DaemonSet named daemon-test; the pods inside it are named nginx and use the nginx image.

DaemonSet : https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/daemonset/

# (1) Generate daemonSet.yaml (starting from a deployment template)
kubectl create deploy daemon-test --image=nginx --dry-run=client -o yaml > daemonSet.yaml
# (2) Edit it as follows
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    run: daemon-test
  name: daemon-test
spec:
  selector:
    matchLabels:
      run: daemon-test
  template:
    metadata:
      name: nginx
      labels:
        run: daemon-test
    spec:
      containers:
      - image: nginx
        name: nginx
        
# (3) Create
kubectl apply -f daemonSet.yaml
# (4) Verify
kubectl get daemonset
kubectl describe daemonset daemon-test
kubectl get pods
kubectl describe pod daemon-test-hcpp5

07 create pod

Start a pod containing nginx, redis, zookeeper

Start a pod containing nginx, redis and zookeeper.

# Generate test-pod.yaml and edit it as follows
kubectl run test-pod --image=nginx --dry-run=client -o yaml > test-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: test-pod
  name: test-pod
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  - image: redis
    name: redis
    resources: {}
  - image: zookeeper
    name: zookeeper
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

08 upgrade and rollback

Start a deployment that includes nginx pods, the initial version is 1.9.1, upgrade to 1.13.1 and record, and roll back to the original version after the upgrade

Start a deployment containing nginx pods with initial version 1.9.1, upgrade to 1.13.1 and record the change, then roll back to the original version after the upgrade.

# The 1.9.1 image could not be pulled in my environment, so 1.19.1 -> latest is used instead; this should not be an issue in the exam
# (1) Create the deployment
kubectl create deployment deploy-nginx --image=nginx:1.19.1 --replicas=1
# (2) Upgrade
kubectl set image deployment deploy-nginx *=nginx:latest --record
# (3) Check the rollout history
kubectl rollout history deployment deploy-nginx
# (4) Roll back
kubectl rollout undo deployment deploy-nginx
# (5) Verify after the upgrade or rollback
kubectl describe deployment deploy-nginx

09 Secret

To create a secret, use the following:
name: super-secret
credential: tom
Create a pod named pod-secrets-via-file using the redis image; mount the secret named super-secret at mount path /secrets
Create a second Pod named pod-secrets-via-env using the redis image; export the credential as CREDENTIALS


Docs: https://kubernetes.io/zh-cn/docs/concepts/configuration/secret/

# (1) Create the secret
kubectl create secret generic super-secret --from-literal=credential=tom
kubectl get secret
kubectl get secrets super-secret -o yaml > super-secret.yaml
cat super-secret.yaml
# (2) Create pod-secrets-via-file
# generate the pod YAML
kubectl run pod-secrets-via-file --image=redis --dry-run=client -o yaml > pod-secrets-via-file.yaml
# vi pod-secrets-via-file.yaml and edit as follows
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-secrets-via-file
  name: pod-secrets-via-file
spec:
  containers:
  - image: redis
    name: myredispod
    volumeMounts:
    - name: foo
      mountPath: "/secrets"
      readOnly: true
    resources: {}
  volumes:
  - name: foo
    secret:
      secretName: super-secret
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

# Create pod-secrets-via-file
kubectl apply -f pod-secrets-via-file.yaml

# (3) Create the second Pod from the redis image and export the credential as CREDENTIALS
# generate the pod YAML
kubectl run pod-secrets-via-env --image=redis --dry-run=client -o yaml > pod-secrets-via-env.yaml
# vi pod-secrets-via-env.yaml and edit as follows
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-env
spec:
  containers:
  - image: redis
    name: pod-secrets-via-env
    env:
      - name: CREDENTIALS
        valueFrom: 
          secretKeyRef:
            name: super-secret
            key: credential
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
# Create pod-secrets-via-env
kubectl apply -f pod-secrets-via-env.yaml

# (4) Verify the pods
kubectl get pods

10 Drain Node

Set the node named host-10-19-83-154 as unavailable and reschedule all the pods running on it

Mark the node as unschedulable and evict all pods running on it.

Docs: https://kubernetes.io/zh-cn/docs/concepts/architecture/nodes/

# Mark the node unschedulable (k8s-node2 in this test environment)
kubectl cordon k8s-node2
# Evict all pods from the node. Use this command with care: it removes every pod on the node, which is fine on a test machine but risky on a node running real workloads.
kubectl drain k8s-node2 --delete-emptydir-data --ignore-daemonsets --force
# Check
kubectl get pods -o wide

# Make the node schedulable again
kubectl uncordon k8s-node2
# Recreate the pods that were removed from the node
kubectl get pod pod-name -o yaml | kubectl replace --force -f ./

Notes:
--force: pods not managed by a controller (RC, RS, Job, DaemonSet or StatefulSet) are deleted as well
--ignore-daemonsets: pods managed by a DaemonSet are not evicted
--delete-emptydir-data: pods using emptyDir volumes are deleted as well (their local data is lost)

11 SVC

  • Reconfigure the existing deployment daemon-test and add a port specifiction named http exposing port 80/tcp of the existing container nginx.

  • Create a new service named front-end-svc exposing the container prot http.

  • Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled.

  • Reconfigure the existing workload daemon-test: add a port specification named http exposing port 80/tcp of the existing nginx container

  • Create a Service named front-end-svc exposing the container port http

  • Configure it as a NodePort Service

# Method 1 (recommended)
#--> use expose to create a Service for the Deployment/Pod
kubectl expose deployment daemon-test --name=front-end-svc --port=80 --target-port=80 --type=NodePort
#--> check
kubectl get endpoints front-end-svc
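To check the NodePort end to end (the node IP and the assigned 3xxxx port depend on the environment; read them from the service):

kubectl get svc front-end-svc      # note the NodePort in the PORT(S) column, e.g. 80:3xxxx/TCP
curl http://<node-ip>:<node-port>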

# Method 2: via YAML
#--> switch to the required context
kubectl config use-context [NAME]
#--> edit the deployment and add the ports section as required
kubectl edit deployment front-end
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
          
## front-end-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
# Create front-end-svc
kubectl apply -f front-end-svc.yaml

12 Ingress

Create a new nginx Ingress resource as follows:

• Name: ping

• Namespace: ing-internal

• Exposing service hi on path /hi using service port 5678

The availability of service hi can be checked using the following command, which should return hi: curl -kL /hi


Docs: https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/

vi ingress.yaml 
# edit as follows
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  namespace: ing-internal
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
# Create the ingress
kubectl apply -f ingress.yaml
# Verify
kubectl get ingress -A
kubectl describe ingress ping -n ing-internal
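If an ingress controller is running, the check from the question can be reproduced roughly like this; the IP depends on how the controller is exposed in the environment (ingress-nginx is assumed here):

kubectl get pods,svc -n ingress-nginx
curl -kL http://<ingress-or-node-ip>/hi
# expected output: hi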

13 Schedule a pod

Schedule a pod as follows:

•name: nginx-kusc00401

•Image: nginx

•Node selector: disk: spinning

Task: create a pod named nginx-kusc00401 with the nginx image and schedule it onto a node labelled disk=spinning.

# Generate nginx-kusc00401-pod.yaml
kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > nginx-kusc00401-pod.yaml

vi nginx-kusc00401-pod.yaml
# edit as follows
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx-kusc00401
  name: nginx-kusc00401
spec:
  containers:
  - image: nginx
    name: nginx-kusc00401
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeSelector:
    disk: spinning
status: {}

# Label the node (if no node carries the label yet)
kubectl label nodes k8s-node1 disk=spinning

# Create the pod
kubectl apply -f nginx-kusc00401-pod.yaml
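Verify the Pod landed on the labelled node:

kubectl get pod nginx-kusc00401 -o wide
# the NODE column should show the node labelled disk=spinning (k8s-node1 here)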

14 create a multi-image pod

Create a pod named kucc8 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul

Create a pod named kucc8 with one app container for each of the following images (1 to 4 images may be specified): nginx + redis + memcached + consul.

# Generate the YAML file
kubectl run kucc8 --image=nginx --dry-run=client -o yaml > kucc8.yaml
# Edit the YAML file
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kucc8
  name: kucc8
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
# Create the pod
kubectl apply -f kucc8.yaml

15 persistent volume

Create a persistent volume with name app-config, of capacity 1Gi and access mode ReadOnlyMany. The type of volume is hostPath and its location is /srv/app-config.

Create a persistent volume named app-config with a capacity of 1Gi and access mode ReadOnlyMany. The volume type is hostPath and its location is /srv/app-config.

persistent volume : https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes/

# Build persistentVolume.yaml from the docs example
vi persistentVolume.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: /srv/app-config
    
# Create
kubectl apply -f persistentVolume.yaml
# Verify
kubectl get pv
kubectl describe pv app-config

16 SC + PVC + pod

Create a new PersistentVolumeClaim:
• Name: pv-volume
• Class: csi-hostpath-sc
• Capacity: 10Mi

Create a new Pod which mounts the PersistentVolumeClaim as a volume:
• Name: web-server
• Image: nginx
• Mount path: /usr/nginx/html

Configure the new Pod to have ReadWriteOnce access on the volume.

Finally, using kubectl edit or Kubectl patch expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.


StorageClass : https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes

# (1) Create the StorageClass
# storageclass.yaml based on the docs example
vi storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
# Create the StorageClass csi-hostpath-sc
kubectl apply -f storageclass.yaml
# Verify
kubectl get sc

PersistentVolumeClaim: https://kubernetes.io/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/

# (2) Create the PersistentVolumeClaim
# PersistentVolumeClaim.yaml as required:
vi PersistentVolumeClaim.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
  namespace: default
spec:
  resources:
    requests:
     storage: 10Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-hostpath-sc
  
# Create the PersistentVolumeClaim pv-volume
kubectl apply -f PersistentVolumeClaim.yaml
# Verify
kubectl get pvc
kubectl describe pvc pv-volume

volumes : https://kubernetes.io/zh-cn/docs/concepts/storage/volumes/

# (3) Create the pod and bind the PVC
kubectl run web-server --image=nginx --dry-run=client -o yaml > web-server.yaml
# edit as follows
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web-server
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: /usr/nginx/html
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pv-volume
# Create
kubectl apply -f web-server.yaml
# (4) Expand pv-volume to 70Mi and record the change
kubectl edit pvc pv-volume --save-config
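The question also allows kubectl patch; an equivalent patch, assuming the StorageClass permits volume expansion, would be (--record is deprecated in newer kubectl versions):

kubectl patch pvc pv-volume --record -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'
kubectl get pvc pv-volume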

17 sidecar container

Add a busybox sidecar container to the existing Pod big-corp-app. The new sidecar container has to run the following command:

/bin/sh -c tail -n+1 -f /var/log/big-corp-app.log

Use a volume mount named logs to make the file /var/log/big-corp-app.log available to the sidecar container. Don’t modify the existing container. Don’t modify the path of the log file,both containers must access it at /var/log/big-corp-app.log


Logging Architecture : https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/logging/

# Generate the YAML file
kubectl run big-corp-app --image=busybox --dry-run=client -o yaml > big-corp-app.yaml

apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
spec:
  containers:
  - name: big-corp-app
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo $(date) $i >> /var/log/big-corp-app.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - mountPath: /var/log
      name: logs
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'while true; do tail -n+1 -f /var/log/big-corp-app.log; sleep 1; done']
    volumeMounts:
    - mountPath: /var/log
      name: logs
  volumes:
  - name: logs
    emptyDir: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

# Create
kubectl apply -f big-corp-app.yaml
# Check
kubectl get pods -o wide
kubectl describe pods big-corp-app
docker ps -a | grep 54a34c20
docker exec -it 54a34c20a607 /bin/sh

18 pods are sorted by cpu

From the pod label daemon-test, find pods running high CPU workloads and write the name of the pod consuming most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists)

# Check the pod labels
kubectl get pod --show-labels

# Filter by label and sort by CPU (highest first)
kubectl top pods -l app=daemon-test --sort-by=cpu

# Write the name of the top consumer to the file
kubectl top pods -l app=daemon-test --sort-by=cpu | grep -v NAME | awk 'NR==1{print $1}' >> /opt/KUTR00401/KUTR00401.txt

19 NotReady

A Kubernetes worker node, labelled with name=wk8s-node-0 is in state NotReady . Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
Hints:
You can ssh to the failed node using $ ssh wk8s-node-0
You can assume elevated privileges on the node with the following command $ sudo -i


Approach: log on to the node and investigate. There are basically two angles, checking the service status and checking the logs; if something is wrong the error is usually printed there.

kubectl get nodes | grep NotReady
# Using k8s-node1 as an example, log in to it
ssh k8s-node1
sudo -i
systemctl status kubelet
sudo journalctl -f -u kubelet
# usually the kubelet just needs to be started, and enabled so the fix is permanent
systemctl start kubelet
systemctl enable kubelet

20 kubernetes upgrade

Given an existing Kubernetes cluster running version 1.18.8,upgrade all of Kubernetes control plane and node components on the master node only to version 1.19.0。 You are also expected to upgrade kubelet and kubectl on the master node。 Be sure to drain the master node before upgrading it and uncordon it after the upgrade. Do not upgrade the worker nodes,etcd,the container manager,the CNI plugin,the DNS service or any other addons


Upgrade A Cluster : https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/

# The CKA exam environment is Ubuntu, so use apt
apt update
apt-cache policy kubeadm
apt-get update && apt-get install -y --allow-change-held-packages kubeadm=1.19.0-00

# Check the kubeadm version
kubeadm version

# Drain the master node
kubectl drain master --ignore-daemonsets --delete-local-data --force

# List the versions available to upgrade to
sudo kubeadm upgrade plan

# Apply the upgrade, skipping the etcd upgrade (from 3.4.3 to 3.4.7)
sudo kubeadm upgrade apply v1.19.0 --etcd-upgrade=false

# Make the node schedulable again
kubectl uncordon master

# For an HA cluster, also run this on the other master nodes:
sudo kubeadm upgrade node

# On all master nodes, upgrade kubelet and kubectl
apt-get update && apt-get install -y --allow-change-held-packages kubelet=1.19.0-00 kubectl=1.19.0-00

# Reload and restart the kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
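After the restart, confirm the control plane node reports the new version:

kubectl get nodes    # the master node should show VERSION v1.19.0
kubelet --version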

How it works

(1) On the first master node being upgraded
kubeadm upgrade apply does the following:
1) Checks whether the cluster is in an upgradeable state:

  • the API server is reachable
  • all nodes are in Ready state
  • the control plane is healthy

2) Enforces the version skew policy.
3) Makes sure the control plane images are available or can be pulled onto the machine.
4) Generates replacement configurations and/or applies user-supplied overrides if a component configuration requires a version upgrade.
5) Upgrades the control plane components, or rolls back if any of them fails to start.
6) Applies the new CoreDNS and kube-proxy manifests and makes sure all required RBAC rules are created.
7) Creates new certificate and key files for the API server and backs up the old ones if they would expire within 180 days.

(2) On the other master nodes
kubeadm upgrade node does the following on the other control plane nodes:
1) Fetches the kubeadm ClusterConfiguration from the cluster.
2) Optionally backs up the kube-apiserver certificates.
3) Upgrades the static Pod manifests of the control plane components.
4) Upgrades the kubelet configuration for this node.

(3) On worker nodes
kubeadm upgrade node does the following on worker nodes:
1) Fetches the kubeadm ClusterConfiguration from the cluster.
2) Upgrades the kubelet configuration for this node.

21 service routes pod

Create and configure the service front-end-service so it’s accessible through NodePort and routes to the existing pod named front-end

Create and configure a Service named front-end-service that can be accessed via NodePort (or ClusterIP) and routes to the existing Pod front-end.

service : https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/

# Create a test pod
kubectl run pod-test --image=nginx --dry-run=client -o yaml > pod-test.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod-test
  name: pod-test
spec:
  containers:
  - image: nginx
    name: pod-test
    resources: {}
    ports:
      - containerPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

kubectl apply -f pod-test.yaml

# Create the service
kubectl expose pod pod-test --name=front-end-service --port=80 --target-port=80 \
--type=NodePort --dry-run=client -o yaml > front-end-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    run: pod-test
  name: front-end-service
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: pod-test
  type: NodePort
status:
  loadBalancer: {}

kubectl apply -f front-end-service.yaml

# Test
curl http://10.98.56.171:80

22 deployment upgrade and rollback

Create a deployment as follows

  • Name: nginx-app

  • Using container nginx with version 1.11.9-alpine

  • The deployment should contain 3 replicas

Next, deploy the app with new version 1.12.0-alpine by performing a rolling update and record that update.

Finally, rollback that update to the previous version 1.11.9-alpine


deployment : https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/

# (1) Create the deployment
kubectl create deploy nginx-app --image=nginx:1.11.9-alpine --dry-run=client -o yaml > nginx-app.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-app
  name: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  strategy: {}
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - image: nginx:1.11.9-alpine
        name: nginx
        resources: {}
status: {}

kubectl apply -f nginx-app.yaml

# (2) Upgrade the image and record the change
kubectl set image deployment nginx-app *=nginx:1.12.0-alpine --record
# (3) Check the rollout history
kubectl rollout history deployment nginx-app
# (4) Roll back
kubectl rollout undo deployment nginx-app
# (5) Verify
kubectl describe deployment nginx-app

23 Pod

Create a Pod as follows:

  • Name: jenkins
  • Using image: nginx
  • In a new Kubenetes namespace named website-frontend
kubectl get ns
kubectl create ns website-frontend
kubectl run jenkins --image=nginx --namespace=website-frontend \
--dry-run=client -o yaml > jenkins.yaml

---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: jenkins
  name: jenkins
  namespace: website-frontend
spec:
  containers:
  - image: nginx
    name: jenkins
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---

kubectl apply -f jenkins.yaml
kubectl get pods -n website-frontend

24 deployment

Create a deployment spec file that will:

  • Launch 7 replicas of the redis image with the label: app_env_stage=dev

  • Deployment name: kual0020

Save a copy of this spec file to /opt/KUAL00201/deploy_spec.yaml (or .json)

When you are done, clean up (delete) any new k8s API objects that you produced during this task

kubectl create deploy kual0020 --image=redis --dry-run=client -o yaml > /opt/KUAL00201/deploy_spec.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app_env_stage: dev
  name: kual0020
spec:
  replicas: 7
  selector:
    matchLabels:
      app_env_stage: dev
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app_env_stage: dev
    spec:
      containers:
      - image: redis
        name: redis
        resources: {}
status: {}
---

kubectl apply -f /opt/KUAL00201/deploy_spec.yaml
# when done, clean up the objects created for this task (the spec file itself stays)
kubectl delete -f /opt/KUAL00201/deploy_spec.yaml

25 find pod

Create a file /opt/KUCC00302/kucc00302.txt that lists all pods that implement Service foo in Namespace production.

The format of the file should be one pod name per line

# Find the selector used by Service foo (in this test environment it is run=pod-test)
kubectl get svc -n production --show-labels

kubectl get svc foo -n production -o wide

kubectl get pod -n production -l run=pod-test | grep -v NAME | awk '{print $1}' >> /opt/KUCC00302/kucc00302.txt
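An alternative sketch that avoids looking up the selector by hand is to read the Service's Endpoints object, which already lists the backing pods:

kubectl -n production get endpoints foo
kubectl -n production get endpoints foo \
  -o jsonpath='{range .subsets[*].addresses[*]}{.targetRef.name}{"\n"}{end}' > /opt/KUCC00302/kucc00302.txt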

26 Secret file and env

Create a Kubernetes Secret as follows:

  • name: super-secret

  • credential: alice or username:bob

Create a Pod named pod-secrets-via-file using the redis image which mounts a secret named super-secret at /secrets

Create a second Pod named pod-secrets-via-env using the redis image, which exports credential as TOPSECRET

https://kubernetes.io/zh-cn/docs/concepts/configuration/secret/

# (1)
kubectl create secret generic super-secret --from-literal=username=bob \
--dry-run=client -o yaml > super-secret.yaml
# (2)
kubectl apply -f super-secret.yaml
kubectl describe secret super-secret

kubectl run pod-secrets-via-file --image=redis --dry-run=client -o yaml > pod-secrets-via-file.yaml

---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod-secrets-via-file
  name: pod-secrets-via-file
spec:
  containers:
  - image: redis
    name: pod-secrets-via-file
    volumeMounts:
    - name: foo
      mountPath: "/secrets"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: super-secret
      optional: false
status: {}
---
kubectl apply -f pod-secrets-via-file.yaml
kubectl run pod-secrets-via-env --image=redis --dry-run=client -o yaml > pod-secrets-via-env.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod-secrets-via-env
  name: pod-secrets-via-env
spec:
  containers:
  - image: redis
    name: pod-secrets-via-env
    env:
      - name: TOPSECRET
        valueFrom:
          secretKeyRef:
            name: super-secret  # must match the Secret name
            key: username       # the key inside the Secret (the key of the key:value pair, here username)
status: {}
---
kubectl apply -f pod-secrets-via-env.yaml
kubectl get pods
kubectl get secrets

27 emptyDir

Name: non-persistent-redis

Container image: redis

Named-volume with name: cache-control

Mount path: /data/redis

It should launch in the pre-prod namespace and the volume MUST NOT be persistent.

https://kubernetes.io/zh-cn/docs/concepts/storage/volumes/#emptydir

kubectl create ns pre-prod

kubectl run non-persistent-redis --image=redis --namespace=pre-prod --dry-run=client -o yaml > non-persistent-redis.yaml

---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: non-persistent-redis
  name: non-persistent-redis
  namespace: pre-prod
spec:
  containers:
  - image: redis
    name: non-persistent-redis
    volumeMounts:
    - mountPath: /data/redis
      name: cache-control
  volumes:
  - name: cache-control
    emptyDir: {}
status: {}
---

kubectl apply -f non-persistent-redis.yaml

28 Taints

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/nodenum

[root@master ~]# kubectl describe nodes|grep Taints
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Taints:             <none>
Taints:             <none>
[root@master ~]# kubectl describe nodes|grep Taints | grep -v NoSchedule
Taints:             <none>
Taints:             <none>
[root@master ~]# kubectl describe nodes|grep Taints | grep -v NoSchedule | wc -l
2
[root@master ~]# kubectl describe nodes|grep Taints | grep -v NoSchedule | wc -l > /home/kubernetes/nodenum
[root@master ~]# cd /home/kubernetes/
[root@master kubernetes]# cat nodenum 
2

29 top and awk

From the Pod label name=cpu-utilizer, find pods running high CPU workloads and write the name of the Pod consuming most CPU to the file /opt/cpu.txt (which already exists)

kubectl top pods --sort-by=cpu -l name=cpu-utilizer -A | grep -v NAME | awk 'NR==1' | awk '{print $2}' > /opt/cpu.txt

30 dns pod service

Create a deployment as follows

  • Name: nginx-dns

  • Exposed via a service: nginx-dns

  • Ensure that the service & pod are accessible via their respective DNS records

  • The container(s) within any Pod(s) running as a part of this deployment should use the nginx image

Next, use the utility nslookup to look up the DNS records of the service & pod and write the output to /opt/service.dns and /opt/pod.dns respectively.

Ensure you use the busybox:1.28 image (or earlier) for any testing, as the latest release has an upstream bug which impacts the use of nslookup.

https://kubernetes.io/zh-cn/docs/concepts/services-networking/dns-pod-service/

# Create the deployment nginx-dns
kubectl create deployment nginx-dns --image=nginx --replicas=3
kubectl get deployment
# Expose it as the service nginx-dns
kubectl expose deployment nginx-dns --name=nginx-dns --port=80 --type=NodePort
kubectl get svc
# Test pod, pinning the busybox image version
kubectl run test --image=busybox:1.28 --command -- sleep 3600
kubectl get pod
# Look up the DNS record of a pod; 172.17.140.207 is the IP of one nginx-dns pod
kubectl exec -it test -- nslookup 172.17.140.207 > /opt/pod.dns
cat /opt/pod.dns
-----
[root@master kubernetes]# cat nslookup.logs 
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      172.17.140.207
Address 1: 172.17.140.207 172-17-140-207.nginx-dns.default.svc.cluster.local
---

# Look up the DNS record of the service
kubectl exec -it test -- nslookup nginx-dns > /opt/service.dns
cat /opt/service.dns
---
[root@master kubernetes]# cat nginx-dns.logs 
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-dns
Address 1: 10.14.92.212 nginx-dns.default.svc.cluster.local
---

31 Snapshot using etcdctl options

Create a snapshot of the etcd instance running at https://127.0.0.1:2379 saving the snapshot to the file path /data/backup/etcd-snapshot.db

The etcd instance is running etcd version 3.1.10

The following TLS certificates/key are supplied for connecting to the server with etcdctl

CA certificate: /opt/KUCM00302/ca.crt

Client certificate: /opt/KUCM00302/etcd-client.crt

Client key: /opt/KUCM00302/etcd-client.key

https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster

# Find the etcd pod
kubectl get pod -n kube-system | grep etcd
# or
find / -name etcdctl # run the binary via its absolute path
# Copy etcdctl to the local machine
kubectl -n kube-system cp etcd-k8s-master:/usr/local/bin/etcdctl /usr/local/bin/etcdctl

# The steps above only make sure a local etcdctl is runnable; if it is not, search the filesystem for one
# Run the snapshot save command with the certificates
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
--cacert=/opt/KUCM00302/ca.crt \
--cert=/opt/KUCM00302/etcd-client.crt \
--key=/opt/KUCM00302/etcd-client.key \
snapshot save /data/backup/etcd-snapshot.db

# Or use the etcdctl binary found inside the container, e.g.
/var/lib/docker/overlay2/5eff44b96798680f8288dccfa2b911c3623dbe1e07ff5213d56b21159e9827f1/diff/usr/local/bin/etcdctl
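Optionally verify the snapshot afterwards (newer etcd releases move this subcommand to etcdutl):

ETCDCTL_API=3 etcdctl --write-out=table snapshot status /data/backup/etcd-snapshot.db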

32 cordon and drain

Set the node labelled with name=ek8s-node-1 as unavailable and reschedule all the pods running on it.

# Find the node with the required label
kubectl get nodes --show-labels
kubectl get nodes -l name=ek8s-node-1
# Mark the node unschedulable (k8s-node2 in this test environment)
kubectl cordon k8s-node2
# Evict the pods on the node; the extra flags are needed when DaemonSet pods or emptyDir volumes are present. Use with care in production.
kubectl drain k8s-node2 --delete-emptydir-data --ignore-daemonsets --force
# Check
kubectl get pods -o wide
# Make the node schedulable again
kubectl uncordon k8s-node2
# Recreate the pods that were removed from the node
kubectl get pod pod-name -o yaml | kubectl replace --force -f ./

33 static pod

Configure the kubelet systemd managed service, on the node labelled with name=wk8s-node-1, to launch a Pod containing a single container of image nginx named myservice automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node.
Hints:
You can ssh to the failed node using $ ssh wk8s-node-1
You can assume elevated privileges on the node with the following command $ sudo -i

# Find the node with the required label
kubectl get nodes -l name=wk8s-node-1

vi /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests
# or
vi /usr/lib/systemd/system/kubelet.service
# check whether --pod-manifest-path=/etc/kubernetes/manifests is set


# Note: any pod YAML placed in that directory is created automatically by the kubelet
# For reference: on k8s-master, /etc/kubernetes/manifests contains the YAML for etcd, kube-apiserver, etc.

vi /etc/kubernetes/manifests/pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: myservice
spec:
  containers:
  - name: myservice
    image: nginx
---
# Back on the master node, verify
kubectl get pods

34 failing service

Given a partially-functioning Kubenetes cluster, identify symptoms of failure on the cluster. Determine the node, the failing service and take actions to bring up the failed service and restore the health of the cluster. Ensure that any changes are made permanently.
The worker node in this cluster is labelled with name=bk8s-node-0

Hints:
You can ssh to the relevant nodes using $ ssh $(NODE) where $(NODE) is one of bk8s-master-0 or bk8s-node-0
You can assume elevated privileges on any node in the cluster with the following command: $ sudo -i

# Case 1: kubectl still works. Run a health check and restart whichever component is unhealthy
kubectl get cs
# e.g. if controller-manager shows unhealthy
systemctl start kube-controller-manager.service

# Case 2: kubectl does not work
# First ssh to bk8s-master-0 and check the services, i.e. the four control plane services:
# kube-apiserver / kube-scheduler / kube-controller-manager / etcd
systemctl list-unit-files | grep controller-manager
systemctl list-unit-files | grep apiserver
systemctl list-unit-files | grep etcd
# If there are no systemd units for them
cd /etc/kubernetes/manifests
# The directory contains kube-apiserver.yaml, kube-controller-manager.yaml etc., so these components run as static pods
# Check the kubelet status
systemctl status kubelet
# This explains why the kube-apiserver, kube-controller-manager, etcd and kube-scheduler pods are not running.

# Check the static pod configuration: /var/lib/kubelet/config.yaml or /usr/lib/systemd/system/kubelet.service
# Make sure staticPodPath: /etc/kubernetes/manifests is correct; fix it if it is wrong and restart the kubelet

# Then check the nodes again and everything should be OK

35 persistent volume

Create a persistent volume with name app-config of capacity 1Gi and access mode ReadWriteOnce. The type of volume is hostPath and its location is /srv/app-config

https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

# Adapt the PV template from the docs
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/srv/app-config"
---
# Create the PV
kubectl apply -f app-config.yaml

36 Daemonsets

Ensure a single instance of Pod nginx is running on each node of the kubernetes cluster where nginx also represents the image name which has to be used. Do not override any taints currently in place.

Use Daemonsets to complete this task and use ds.kusc00201 as Daemonset name

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds.kusc00201
  namespace: kube-system
  labels:
    k8s-app: ds-kusc00201
spec:
  selector:
    matchLabels:
      name: ds-kusc00201
  template:
    metadata:
      labels:
        name: ds-kusc00201
    spec:
      tolerations:
      # these tolerations let the DaemonSet pods run on control plane nodes
      # remove them if you do not want the pods on the control plane
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: ds-kusc00201
        image: nginx
# Create
kubectl apply -f daemonset.yaml
# Verify
kubectl get daemonsets -n kube-system

37 Upgrade kubeadm

Upgrade kubeadm, only upgrade the master node, and upgrade the version from 1.20.0 to 1.20.1
The version components on the master, there are kubectl, kubelet, etcd without upgrading

kubectl get nodes
# Only kubeadm is to be upgraded
# ssh to the master node, then cordon and drain it
kubectl cordon k8s-master
kubectl drain k8s-master --delete-emptydir-data --ignore-daemonsets --force

apt-mark unhold kubeadm
apt-get update && apt-get install -y kubeadm=1.20.1-00
apt-mark hold kubeadm
# Check the upgrade plan and apply it; do not upgrade etcd, kubelet or kubectl
kubeadm upgrade plan
kubeadm upgrade apply v1.20.1 --etcd-upgrade=false

kubectl uncordon k8s-master
kubectl get nodes

38 Contexts

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.
Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.
Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.

kubectl config get-contexts -o name > /opt/course/1/contexts

# /opt/course/1/context_default_kubectl.sh
kubectl config current-context

# /opt/course/1/context_default_no_kubectl.sh
cat ~/.kube/config | grep current | sed -e "s/current-context: //"

39 Schedule Pod on Master Node

Use context: kubectl config use-context k8s-c1-H

Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on a master node, do not add new labels any nodes.
Shortly write the reason on why Pods are by default not scheduled on master nodes into /opt/course/2/master_schedule_reason .

# Check the master node's taints
kubectl describe node k8s-master | grep Taint # get master node taints
# Check the master node's labels
kubectl describe node k8s-master | grep Labels -A 10
kubectl get node k8s-master --show-labels

---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container
  nodeName: k8s-master
status: {}
---
or
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container                  # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations:                            # add
  - effect: NoSchedule                    # add
    key: node-role.kubernetes.io/master   # add
  nodeSelector:                           # add
    node-role.kubernetes.io/master: ""    # add
status: {}
---
kubectl apply -f pod1.yaml

# Why pods are not scheduled on master nodes by default
Pods are not scheduled on the master node by default because it carries the NoSchedule taint.

40 Scale down StatefulSet

Use context: kubectl config use-context k8s-c1-H

There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources. Record the action.

kubectl -n project-c13 get deploy,ds,sts | grep o3db
kubectl -n project-c13 get pod --show-labels | grep o3db
kubectl -n project-c13 scale sts o3db --replicas 1 --record
kubectl -n project-c13 get sts o3db

41 Pod Ready if Service is reachable

Use context: kubectl config use-context k8s-c1-H

Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply runs true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn’t ready because of the ReadinessProbe.
Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.
Now the first Pod should be in ready state, confirm that.

vi ready-if-service-ready.yaml
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: ready-if-service-ready
    resources: {}
    livenessProbe:
      exec:
        command:
        - 'true'
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
kubectl apply -f ready-if-service-ready.yaml

kubectl run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"
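To confirm the behaviour described in the question:

kubectl get pod ready-if-service-ready    # READY 0/1 while the ReadinessProbe fails
kubectl get ep service-am-i-ready         # after am-i-ready starts it appears as an endpoint
kubectl get pod ready-if-service-ready    # now READY 1/1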

42 Kubectl sorting

Use context: kubectl config use-context k8s-c1-H

There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).
Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.

# /opt/course/5/find_pods.sh
kubectl get pods -A --sort-by=.metadata.creationTimestamp
# /opt/course/5/find_pods_uid.sh
kubectl get pods -A --sort-by=.metadata.uid

43 Storage, PV, PVC, Pod volume

Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.
Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.

# PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"
# PVC
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
# Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: project-tiger
  labels:
    app: safari
  name: safari
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  template:
    metadata:
      labels:
        app: safari
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: safari-pvc
      containers:
      - image: httpd:2.4.41-alpine
        name: httpd
        volumeMounts:
        - mountPath: "/tmp/safari-data"
          name: data
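Verification: the PVC should be Bound and the Pod should mount it at /tmp/safari-data:

kubectl get pv safari-pv
kubectl -n project-tiger get pvc safari-pvc
kubectl -n project-tiger describe pod -l app=safari | grep -A2 Mounts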

44 Node and Pod Resource Usage

Use context: kubectl config use-context k8s-c1-H

The metrics-server hasn’t been installed yet in the cluster, but it’s something that should be done soon. Your colleague would already like to know the kubectl commands to:
show node resource usage
show Pod and their containers resource usage
Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.

# Check the command help
kubectl top pod -h

# /opt/course/7/node.sh
kubectl top node

# /opt/course/7/pod.sh
kubectl top pods --containers=true

45 Get Master Information

Use context: kubectl config use-context k8s-c1-H

Ssh into the master node with ssh cluster1-master1. Check how the master components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the master node. Also find out the name of the DNS application and how it’s started/installed on the master node.
Write your findings into file /opt/course/8/master-components.txt. The file should be structured like:
/opt/course/8/master-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]
Choices of [TYPE] are: not-installed, process, static-pod, pod

Approach: check whether each component runs as a process or as a pod; look in /etc/systemd/system for a service unit, in /etc/kubernetes/manifests for a static pod manifest, and otherwise for a normal pod.

ps aux | grep kubelet # shows kubelet process

ll /etc/kubernetes/manifests  # list the static pod manifests

find /etc/systemd/system/ | grep kube

find /etc/systemd/system/ | grep etcd

kubectl -n kube-system get pod -o wide | grep master1

kubectl -n kube-system get deploy
# /opt/course/8/master-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns

46 Kill Scheduler, Manual Scheduling

Use context: kubectl config use-context k8s-c2-AC

Ssh into the master node with ssh cluster2-master1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.
Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm its created but not scheduled on any node.
Now you’re the scheduler and have all its power, manually schedule that Pod on node cluster2-master1. Make sure it’s running.
Start the kube-scheduler again and confirm its running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it’s running on cluster2-worker1.

# stop the kube-scheduler
kubectl get nodes
kubectl -n kube-system get pods | grep schedule
cd /etc/kubernetes/manifests/
mv kube-scheduler.yaml ..

# Create POD manual-schedule
kubectl run manual-schedule --image=httpd:2.4-alpine
# manually schedule that Pod on node cluster2-master1
kubectl get pod manual-schedule -o yaml > manual-schedule.yaml

For scheduling you can set nodeName directly (as below); alternatively, label the node first and use a nodeSelector (see question 13 for that approach).

manual-schedule.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: manual-schedule
  managedFields:
...
    manager: kubectl-run
    operation: Update
  name: manual-schedule
  namespace: default
  resourceVersion: "3515"
  selfLink: /api/v1/namespaces/default/pods/manual-schedule
  uid: 8e9d2532-4779-4e63-b5af-feb82c74a935
spec:
  nodeName: cluster2-master1        # add the master node name
  containers:
  - image: httpd:2.4-alpine
    imagePullPolicy: IfNotPresent
    name: manual-schedule
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-nxnc7
      readOnly: true
  dnsPolicy: ClusterFirst
...

kubectl replace --force -f manual-schedule.yaml
# Start the kube-scheduler again
cd /etc/kubernetes/manifests/
mv ../kube-scheduler.yaml .
kubectl run manual-schedule2 --image=httpd:2.4-alpine
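Confirm both Pods ended up where expected:

kubectl get pod manual-schedule manual-schedule2 -o wide
# manual-schedule should be Running on cluster2-master1,
# manual-schedule2 should have been placed on cluster2-worker1 by the restarted kube-scheduler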

47 RBAC SA Role RoleBinding

Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.

# create sa
kubectl -n project-hamster create sa processor
# create role
kubectl -n project-hamster create role processor --verb=create --resource=secret,configmap
# create rolebinding
kubectl -n project-hamster create rolebinding processor \
--role=processor --serviceaccount project-hamster:processor

48 DaemonSet on all Nodes

Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, master and worker.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger
  labels:
    id: ds-important
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
  selector:
    matchLabels:
      id: ds-important
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
  template:
    metadata:
      labels:
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: ds-important
        image: httpd:2.4-alpine
        resources:
          requests:
            cpu: 10m
            memory: 10Mi

49 Deployment on all Nodes

Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image kubernetes/pause.
There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-worker1 and cluster1-worker2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won’t be scheduled, unless a new worker node will be added.
In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.

Solution using podAntiAffinity

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    id: very-important                  # change
  name: deploy-important
  namespace: project-tiger              # important
spec:
  replicas: 3                           # change
  selector:
    matchLabels:
      id: very-important                # change
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: very-important              # change
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1                # change
        resources: {}
      - image: kubernetes/pause         # add
        name: container2                # add
      affinity:                                             # add
        podAntiAffinity:                                    # add
          requiredDuringSchedulingIgnoredDuringExecution:   # add
          - labelSelector:                                  # add
              matchExpressions:                             # add
              - key: id                                     # add
                operator: In                                # add
                values:                                     # add
                - very-important                            # add
            topologyKey: kubernetes.io/hostname               # add
status: {}

Solution using topologySpreadConstraints

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    id: very-important                  # change
  name: deploy-important
  namespace: project-tiger              # important
spec:
  replicas: 3                           # change
  selector:
    matchLabels:
      id: very-important                # change
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: very-important              # change
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1                # change
        resources: {}
      - image: kubernetes/pause         # add
        name: container2                # add
      topologySpreadConstraints:                 # add
      - maxSkew: 1                               # add
        topologyKey: kubernetes.io/hostname      # add
        whenUnsatisfiable: DoNotSchedule         # add
        labelSelector:                           # add
          matchLabels:                           # add
            id: very-important                   # add
status: {}

50 Multi Containers and Pod shared Volume

Use context: kubectl config use-context k8s-c1-H

Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn’t be persisted or shared with other Pods.
Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME.
Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.
Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.
Check the logs of container c3 to confirm correct setup.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-playground
  name: multi-container-playground
spec:
  containers:
  - image: nginx:1.17.6-alpine
    name: c1                                                                      # change
    resources: {}
    env:                                                                          # add
    - name: MY_NODE_NAME                                                          # add
      valueFrom:                                                                  # add
        fieldRef:                                                                 # add
          fieldPath: spec.nodeName                                                # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  - image: busybox:1.31.1                                                         # add
    name: c2                                                                      # add
    command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]  # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  - image: busybox:1.31.1                                                         # add
    name: c3                                                                      # add
    command: ["sh", "-c", "tail -f /vol/date.log"]                                # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:                                                                        # add
    - name: vol                                                                   # add
      emptyDir: {}                                                                # add
status: {}

51 Find out Cluster Information

Use context: kubectl config use-context k8s-c1-H

You’re ask to find out following information about the cluster k8s-c1-H:
How many master nodes are available?
How many worker nodes are available?
What is the Service CIDR?
Which Networking (or CNI Plugin) is configured and where is its config file?
Which suffix will static pods have that run on cluster1-worker1?
Write your answers into file /opt/course/14/cluster-info, structured like this:

# Check the master and worker nodes
kubectl get nodes
# Service CIDR
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range
# Networking
find /etc/cni/net.d/
cat /etc/cni/net.d/10-calico.conflist


# The resulting /opt/course/14/cluster-info could look like:
/opt/course/14/cluster-info
How many master nodes are available?
1: 1
How many worker nodes are available?
2: 2
What is the Service CIDR?
3: 10.96.0.0/12
Which Networking (or CNI Plugin) is configured and where is its config file?
4: Weave, /etc/cni/net.d/10-weave.conflist
Which suffix will static pods have that run on cluster1-worker1?
5: -cluster1-worker1

52 Cluster Event Logging

Use context: kubectl config use-context k8s-c2-AC

Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time. Use kubectl for it.
Now kill the kube-proxy Pod running on node cluster2-worker1 and write the events this caused into /opt/course/15/pod_kill.log.
Finally kill the containerd container of the kube-proxy Pod on node cluster2-worker1 and write the events into /opt/course/15/container_kill.log.
Do you notice differences in the events both actions caused?

/opt/course/15/cluster_events.sh
kubectl get events -A --sort-by=.metadata.creationTimestamp

Now we kill the kube-proxy Pod:
# find pod running on cluster2-worker1
k -n kube-system get pod -o wide | grep proxy 
k -n kube-system delete pod kube-proxy-z64cg
Now check the events:
sh /opt/course/15/cluster_events.sh

Write the events the killing caused into /opt/course/15/pod_kill.log:
kube-system 9s Normal Killing pod/kube-proxy-jsv7t …
kube-system 3s Normal SuccessfulCreate daemonset/kube-proxy …
kube-system Normal Scheduled pod/kube-proxy-m52sx …
default 2s Normal Starting node/cluster2-worker1 …
kube-system 2s Normal Created pod/kube-proxy-m52sx …
kube-system 2s Normal Pulled pod/kube-proxy-m52sx …
kube-system 2s Normal Started pod/kube-proxy-m52sx …
Finally we will try to provoke events by killing the container belonging to the kube-proxy Pod:

# kill the kube-proxy container on the node
ssh cluster2-worker1
crictl ps | grep kube-proxy
crictl rm 1e020b43c4423
crictl ps | grep kube-proxy

Now we check whether this caused events again and write them into the second file:
sh /opt/course/15/cluster_events.sh
/opt/course/15/container_kill.log
kube-system 13s Normal Created pod/kube-proxy-m52sx …
kube-system 13s Normal Pulled pod/kube-proxy-m52sx …
kube-system 13s Normal Started pod/kube-proxy-m52sx …
Comparing the events we see that deleting the whole Pod caused more work, hence more events: the DaemonSet, for example, had to re-create the missing Pod. When we only killed the Pod's main container, the Pod itself kept existing and only its container had to be re-created, hence fewer events.
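
If the cluster-wide event stream is noisy, a hedged sketch for narrowing it down to a single object with a field selector (the pod name kube-proxy-m52sx is just the example from above):

kubectl -n kube-system get events --field-selector involvedObject.name=kube-proxy-m52sx --sort-by=.metadata.creationTimestamp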

53 Namespaces and API Resources

Use context: kubectl config use-context k8s-c1-H

Create a new Namespace called cka-master.
Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap…) into /opt/course/16/resources.txt.
Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.

kubectl create ns cka-master

kubectl api-resources
kubectl api-resources --namespaced -o name > /opt/course/16/resources.txt

# Namespace with most Roles
kubectl -n project-c13 get role --no-headers | wc -l
kubectl -n project-c14 get role --no-headers | wc -l
kubectl -n project-snake get role --no-headers | wc -l
kubectl -n project-tiger get role --no-headers | wc -l
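
Instead of checking each Namespace by hand, a small loop works too (a sketch; it simply counts Role objects in every project-* Namespace):

for ns in $(kubectl get ns -o name | grep project- | cut -d/ -f2); do
  echo "$ns: $(kubectl -n $ns get role --no-headers 2>/dev/null | wc -l) roles"
done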

Finally we write the name and amount into the file:
/opt/course/16/crowded-namespace.txt
project-c14 with 300 roles

54 Find Container of Pod and check info

Use context: kubectl config use-context k8s-c1-H

In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.
Using command crictl:
Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt
Write the logs of the container into /opt/course/17/pod-container.log

kubectl -n project-tiger run tigers-reunite --image=httpd:2.4.41-alpine --labels "pod=container,container=pod"

kubectl -n project-tiger get pods -o wide

ssh k8s-node1
crictl ps | grep tigers-reunite
crictl inspect b01edbe6f89ed | grep runtimeType

# Then we fill the requested file (on the main terminal):
/opt/course/17/pod-container.txt
b01edbe6f89ed io.containerd.runc.v2
# Finally we write the container logs into the second file:
ssh k8s-node1
crictl logs b01edbe6f89ed &> /opt/course/17/pod-container.log
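
If the log file must end up on the main terminal rather than on the node, a sketch that runs crictl over ssh in one step (container ID and node name taken from the steps above):

ssh k8s-node1 'crictl logs b01edbe6f89ed' &> /opt/course/17/pod-container.log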

55 Fix Kubelet

Use context: kubectl config use-context k8s-c3-CCC

There seems to be an issue with the kubelet not running on cluster3-worker1. Fix it and confirm that cluster has node cluster3-worker1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-worker1 afterwards.
Write the reason of the issue into /opt/course/18/reason.txt.

kubectl get nodes

ps aux | grep kubelet
service kubelet status
service kubelet start
service kubelet status

# the service config points to /usr/local/bin/kubelet, but the binary is actually at:
whereis kubelet
# kubelet: /usr/bin/kubelet

journalctl -u kubelet
service kubelet status | grep Drop-In: -A 5
# Well, there we have it, wrong path specified. Correct the path in
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and run:
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf # fix
systemctl daemon-reload && systemctl restart kubelet
systemctl status kubelet # should now show running
# Finally we write the reason into the file:
/opt/course/18/reason.txt
wrong path to kubelet binary specified in service config
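
To confirm the fix, a short sketch (the test Pod name and image are arbitrary; pinning spec.nodeName via --overrides is just one way to force scheduling onto cluster3-worker1):

kubectl get nodes                       # cluster3-worker1 should be Ready
kubectl run kubelet-check --image=nginx:alpine \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"cluster3-worker1"}}'
kubectl get pod kubelet-check -o wide   # should be Running on cluster3-worker1
kubectl delete pod kubelet-check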

56 Create Secret and mount into Pod

Use context: kubectl config use-context k8s-c3-CCC

Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time. It should be able to run on master nodes as well, create the proper toleration.
There is an existing Secret located at /opt/course/19/secret1.yaml, create it in the secret Namespace and mount it readonly into the Pod at /tmp/secret1.
Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod’s container as environment variables APP_USER and APP_PASS.
Confirm everything is working.

# (0)
kubectl create ns secret

cp /opt/course/19/secret1.yaml 19_secret1.yaml

vi 19_secret1.yaml
---
apiVersion: v1
data:
  halt: IyEgL2Jpbi9zaAo...
kind: Secret
metadata:
  creationTimestamp: null
  name: secret1
  namespace: secret           # change
---
kubectl -f 19_secret1.yaml create

# (1)
kubectl -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234

kubectl -n secret run secret-pod --image=busybox:1.31.1 --dry-run=client -o yaml -- sh -c "sleep 1d" > 19.yaml
vi 19.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secret-pod
  name: secret-pod
  namespace: secret                       # add
spec:
  tolerations:                            # add
  - effect: NoSchedule                    # add
    key: node-role.kubernetes.io/master   # add
  containers:
  - args:
    - sh
    - -c
    - sleep 1d
    image: busybox:1.31.1
    name: secret-pod
    resources: {}
    env:                                  # add
    - name: APP_USER                      # add
      valueFrom:                          # add
        secretKeyRef:                     # add
          name: secret2                   # add
          key: user                       # add
    - name: APP_PASS                      # add
      valueFrom:                          # add
        secretKeyRef:                     # add
          name: secret2                   # add
          key: pass                       # add
    volumeMounts:                         # add
    - name: secret1                       # add
      mountPath: /tmp/secret1             # add
      readOnly: true                      # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:                                # add
  - name: secret1                         # add
    secret:                               # add
      secretName: secret1                 # add
status: {}
---
kubectl -f 19.yaml create
# (2) check
kubectl -n secret exec secret-pod -- env | grep APP
kubectl -n secret exec secret-pod -- find /tmp/secret1
kubectl -n secret exec secret-pod -- cat /tmp/secret1/halt

57 Update Kubernetes Version and join cluster

Use context: kubectl config use-context k8s-c3-CCC

Your coworker said node cluster3-worker2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that’s running on cluster3-master1. Then add this node to the cluster. Use kubeadm for this.

kubectl get nodes

# Mark the node unschedulable and evict all workloads to prepare it for maintenance:
kubectl cordon cluster3-worker2
kubectl drain cluster3-worker2 --ignore-daemonsets

ssh cluster3-worker2
kubeadm version
kubectl version
kubelet --version
# On a worker node, the following command upgrades the local kubelet configuration
kubeadm upgrade node

# Next, update kubelet and kubectl
apt update
apt show kubectl -a | grep 1.23
apt install kubectl=1.23.1-00 kubelet=1.23.1-00
# Verify the version and restart kubelet
kubelet --version
systemctl restart kubelet
service kubelet status

# Add cluster3-worker2 to the cluster
# First log into cluster3-master1, generate a new TLS bootstrap token and print the join command
ssh cluster3-master1
kubeadm token create --print-join-command
---
kubeadm join 192.168.100.31:6443 --token leqq1l.1hlg4rw8mu7brv73 --discovery-token-ca-cert-hash sha256:2e2c3407a256fc768f0d8e70974a8e24d7b9976149a79bd08858c4d7aa2ff79a
---

kubeadm token list

# Log into cluster3-worker2 and run the join command
ssh cluster3-worker2
kubeadm join 192.168.100.31:6443 --token leqq1l.1hlg4rw8mu7brv73 --discovery-token-ca-cert-hash sha256:2e2c3407a256fc768f0d8e70974a8e24d7b9976149a79bd08858c4d7aa2ff79a
# If kubeadm join runs into problems, run kubeadm reset on the node and repeat the steps above
# Verify the status
ssh cluster3-master1
kubectl get nodes

58 Create a Static Pod and Service

Use context: kubectl config use-context k8s-c3-CCC

Create a Static Pod named my-static-pod in Namespace default on cluster3-master1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.
Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-master1 internal IP address. You can connect to the internal node IPs from your main terminal.

https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/static-pod/#static-pod-creation

https://kubernetes.io/zh-cn/docs/concepts/scheduling-eviction/pod-overhead/#set-up

# (0) create the static pod manifest on cluster3-master1 (ssh cluster3-master1 first)
kubectl run my-static-pod --image=nginx:1.16-alpine -o yaml \
--dry-run=client > my-static-pod.yaml

mkdir -p /etc/kubernetes/manifests/
cat <<EOF >/etc/kubernetes/manifests/my-static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-static-pod
  labels:
    run: my-static-pod
spec:
  containers:
    - name: web
      image: nginx:1.16-alpine
      resources:
        requests:
          cpu: 10m
          memory: 20Mi
EOF

# (1) create service
kubectl expose pod my-static-pod-cluster3-master1 --name static-pod-service \
--type=NodePort --port 80

apiVersion: v1
kind: Service
metadata:
  name: static-pod-service
spec:
  type: NodePort
  selector:
    run: my-static-pod
  ports:
    - port: 80
      targetPort: 80

# (2) check
kubectl get svc,ep -l run=my-static-pod
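
To check reachability through the node's internal IP, a sketch (the IP and port are placeholders to be read from the two commands before the curl):

kubectl get nodes -o wide                 # note the INTERNAL-IP of cluster3-master1
kubectl get svc static-pod-service        # note the assigned NodePort
curl <internal-ip>:<node-port>            # should return the nginx welcome page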

59 Check how long certificates are valid

Use context: kubectl config use-context k8s-c2-AC

Check how long the kube-apiserver server certificate is valid on cluster2-master1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration.
Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.
Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.

https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#check-certificate-expiration

ssh cluster2-master1


find /etc/kubernetes/pki | grep apiserver
# (0) Use openssl to get the expiration date
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep Validity -A2
Validity
Not Before: Jan 14 18:18:15 2021 GMT
Not After : Jan 14 18:49:40 2022 GMT
# Write the date from the step above into /opt/course/22/expiration
Jan 14 18:49:40 2022 GMT
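
A shorter variant (openssl's -enddate option) that prints only the end date and can be redirected straight into /opt/course/22/expiration:

openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt
# notAfter=Jan 14 18:49:40 2022 GMT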

# (1) Use kubeadm to list the expiration dates
kubeadm certs check-expiration | grep apiserver
apiserver Jan 14, 2022 18:49 UTC 363d ca no
apiserver-etcd-client Jan 14, 2022 18:49 UTC 363d etcd-ca no
apiserver-kubelet-client Jan 14, 2022 18:49 UTC 363d ca no

# (2) Write the kubeadm command that renews the apiserver server certificate into the requested file
/opt/course/22/kubeadm-renew-certs.sh
kubeadm certs renew apiserver

60 kubelet client/server cert info

Use context: kubectl config use-context k8s-c2-AC

Node cluster2-worker1 has been added to the cluster using kubeadm and TLS bootstrapping.
Find the “Issuer” and “Extended Key Usage” values of these two cluster2-worker1 certificates:
kubelet client certificate, the one used for outgoing connections to the kube-apiserver.
kubelet server certificate, the one used for incoming connections from the kube-apiserver.
Write the information into file /opt/course/23/certificate-info.txt.
Compare the “Issuer” and “Extended Key Usage” fields of both certificates and make sense of these.

# (0) First we check the kubelet client certificate:
ssh cluster2-worker1
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep Issuer
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep "Extended Key Usage" -A1

# (1) Next we check the kubelet server certificate:
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep Issuer
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep "Extended Key Usage" -A1
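
One hedged way to collect all four values into the requested file (the block writes on the node; if the file has to live on the main terminal, run it over ssh or copy it afterwards):

{
  echo "## kubelet client certificate"
  openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep Issuer
  openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep "Extended Key Usage" -A1
  echo "## kubelet server certificate"
  openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep Issuer
  openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep "Extended Key Usage" -A1
} > /opt/course/23/certificate-info.txt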

We see that the server certificate was generated on the worker node itself and the client certificate was issued by the Kubernetes API. The “Extended Key Usage” also shows whether a certificate is meant for client or server authentication.

61 NetworkPolicy

Use context: kubectl config use-context k8s-c1-H

There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.
To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:
  • connect to db1-* Pods on port 1111
  • connect to db2-* Pods on port 2222
Use the app label of Pods in your policy.
After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress                    # policy is only about Egress
  egress:
    -                           # first rule
      to:                       # first condition "to"
      - podSelector:
          matchLabels:
            app: db1
      ports:                    # second condition "port"
      - protocol: TCP
        port: 1111
    -                           # second rule
      to:                       # first condition "to"
      - podSelector:
          matchLabels:
            app: db2
      ports:                    # second condition "port"
      - protocol: TCP
        port: 2222
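
A verification sketch (pod names and IPs are placeholders, read the real ones from the first command; it also assumes the backend image ships curl):

kubectl -n project-snake get pod -o wide
# allowed by the policy
kubectl -n project-snake exec <backend-pod> -- curl -s -m 2 <db1-pod-ip>:1111
# should now time out
kubectl -n project-snake exec <backend-pod> -- curl -s -m 2 <vault-pod-ip>:3333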

62 Etcd Snapshot Save and Restore

Use context: kubectl config use-context k8s-c3-CCC

Make a backup of etcd running on cluster3-master1 and save it on the master node at /tmp/etcd-backup.db.
Then create any kind of Pod in the cluster.
Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.

# (0) First we log into the master and try to create a snapshot of etcd:
ssh cluster3-master1
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db

Error: rpc error: code = Unavailable desc = transport is closing
But it fails because we need to authenticate ourselves. For the necessary information we can check the etcd manifest:

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
  snapshot save <backup-file-location>
  
# Get trusted-ca-file, cert-file and key-file from the etcd Pod manifest
cat /etc/kubernetes/manifests/etcd.yaml | grep file
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

# Backup
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/etcd-backup.db

# 1.1 First stop the kube-apiserver and etcd containers by moving the static Pod manifests away
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bak
mv /var/lib/etcd/ /var/lib/etcd.bak
# 1.2 Restore
ETCDCTL_API=3 etcdctl \
snapshot restore /tmp/etcd-backup.db \
--data-dir=/var/lib/etcd
# 1.3 Bring the kube-apiserver and etcd containers back
mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests
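
A short verification sketch after the restore (run on cluster3-master1; it can take a minute for the static Pods to come back, and <your-pod> stands for the Pod created after the backup):

crictl ps | grep -E 'etcd|kube-apiserver'   # both containers should be running again
kubectl get pod -A                          # the cluster answers again
kubectl get pod <your-pod>                  # should report NotFound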

Practice Questions

01 Pod Logging

View the logs of a pod and write the lines containing Error to the specified file.

  • Pod name: web
  • File: /opt/web
kubectl logs pod/web-5f6bcbd7b-g8pfq | grep -i "error" >> /opt/web

02 Filter Pods

Find the pod with the given label that uses the most CPU and record it in the specified file.

  • Label: app=web
  • File: /opt/cpu
kubectl top pod -l app=web | grep -v NAME | sort -nr -k 3 | head -n 1 > /opt/cpu

Kubernetes Label: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#label

03 Pod Health Check

Check whether a file exists in the container; if it is not detected, the pod should be restarted.

  • path : /tmp/test.sock
vim liveness-test.yaml
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-test
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/test.sock; sleep 30; rm -f /tmp/test.sock; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/test.sock
      initialDelaySeconds: 5
      periodSeconds: 5
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/test.sock
      initialDelaySeconds: 5
      periodSeconds: 5
---
kubectl apply -f liveness-test.yaml
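
To observe the probe behaviour, a quick sketch (the container deletes /tmp/test.sock after 30 seconds, so restarts should start showing up shortly after):

kubectl get pod liveness-test -w                         # the RESTARTS counter increases over time
kubectl describe pod liveness-test | grep -A 10 Events   # shows the failed liveness probe events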
