
Cloud Native | Kubernetes | CKA Mock Exam 2022 (Questions 1-10) (Part 1)



Question 1:

Task weight: 1%

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.

Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.

Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.

Analysis:

You have access to multiple clusters from your terminal. The task asks you to list all kubectl context names, to write a command that prints the current context using kubectl, and to write a second command that prints it without kubectl, i.e. by reading the kubeconfig file directly.

答案:

1. kubectl config get-contexts

2. kubectl config current-context

3. cat $HOME/.kube/config | grep current
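
Since the task wants the context names and the two commands written into files, a minimal sketch of the complete answer (assuming the default kubeconfig at ~/.kube/config):

# write all context names into the requested file
kubectl config get-contexts -o name > /opt/course/1/contexts

# script that shows the current context using kubectl
echo 'kubectl config current-context' > /opt/course/1/context_default_kubectl.sh

# script that shows the current context without kubectl
echo 'cat ~/.kube/config | grep current-context' > /opt/course/1/context_default_no_kubectl.sh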

Question 2:

Task weight: 3%

Use context: kubectl config use-context k8s-c1-H

Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on a master node, do not add new labels to any nodes.

Analysis:

This task asks us to create a Pod whose name, container name and image are all specified, and which must be scheduled onto a master node. So four things have to be right: the Pod name, the container name, the image, and the node placement.

It tests the distinction between the Pod name and the container name, as well as scheduling a Pod to a specific node.

1.

Quickly generate a Pod manifest template with a dry run:

kubectl run pod1 --image=httpd:2.4.41-alpine --dry-run=client -o yaml > pod1.yaml

The template looks roughly like this:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

2.

Edit the template according to the task.

The Pod can be placed on the master node either via a nodeSelector or simply via nodeName; both work, but nodeName is clearly simpler.

nodeSelector approach:

This is the more general approach, but it requires writing more: besides the nodeSelector you also need a toleration, otherwise the master taint keeps the Pod off the node.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container                    # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations:                              # add
  - effect: NoSchedule                      # add
    key: node-role.kubernetes.io/master     # add
  nodeSelector:                             # add
    node-role.kubernetes.io/master: ""      # add
status: {}

The node labels are shown below; this approach relies on the node-role.kubernetes.io/master label that kubeadm sets on the master node. On a cluster deployed from binaries that role label usually does not exist, so this approach would not apply there.

root@k8s-master:~# kubectl get no --show-labels 
NAME STATUS ROLES AGE VERSION LABELS
k8s-master Ready control-plane,master 372d v1.22.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1 Ready <none> 2d17h v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2 Ready <none> 2d17h v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux


nodeName approach:

First look up the exact name of the master node; it must be copied accurately:

root@k8s-master:~# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 372d v1.22.10
k8s-node1 Ready <none> 2d17h v1.22.2
k8s-node2 Ready <none> 2d17h v1.22.2

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container      # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeName: k8s-master        # add


3.

Apply the manifest to create the Pod:

kubectl apply -f pod1.yaml
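
To verify that the Pod really landed on the master node, a quick check (the NODE column should show the master, k8s-master in this example):

kubectl get pod pod1 -o wide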



Question 3:

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources.

Analysis:

This one is fairly simple. Listing the Pods in project-c13 shows that the o3db-* Pods carry ordinal suffixes (o3db-0, o3db-1), so we can infer they are managed by a StatefulSet.

Therefore we just scale (or edit) that StatefulSet down to one replica, then list the Pods again and confirm that only one o3db Pod remains, as in the command sketch below.
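
A minimal command sketch (assuming the StatefulSet is named o3db, as it is in this environment):

kubectl -n project-c13 get pod | grep o3db                    # two Pods: o3db-0 and o3db-1
kubectl -n project-c13 get statefulset                        # confirm the owning StatefulSet
kubectl -n project-c13 scale statefulset o3db --replicas=1    # scale down to one replica
kubectl -n project-c13 get pod | grep o3db                    # only o3db-0 should remain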

Question 4:

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply runs true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.

Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.

Now the first Pod should be in ready state, confirm that.

Analysis:

This task tests livenessProbe and readinessProbe, i.e. the liveness and readiness probes.

The liveness probe simply runs the command true. For the readiness probe the task suggests wget -T2 -O- http://service-am-i-ready:80; the key point is that the probe has to check the Service URL rather than the Pod's own port 80 (a plain port check against the Pod itself would succeed immediately, while the first Pod is only supposed to become ready once the Service has an endpoint).

The Service already exists in the environment and selects the second Pod by label, so the second Pod must be created with exactly the right label.

The Service manifest (already created; we only look at it to confirm that it selects the second Pod):

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"id":"cross-server-ready"},"name":"service-am-i-ready","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}],"selector":{"id":"cross-server-ready"}},"status":{"loadBalancer":{}}}
  creationTimestamp: "2022-09-29T14:30:58Z"
  labels:
    id: cross-server-ready
  name: service-am-i-ready
  namespace: default
  resourceVersion: "4761"
  uid: 03981930-13d9-4133-8e23-9704c2a24807
spec:
  clusterIP: 10.109.238.68
  clusterIPs:
  - 10.109.238.68
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    id: cross-server-ready
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

The readiness probe, written as an exec probe with the wget command suggested by the task:

readinessProbe:
  exec:
    command:
    - sh
    - -c
    - 'wget -T2 -O- http://service-am-i-ready:80'

The first Pod (ready-if-service-ready):

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: ready-if-service-ready
    ports:
    - containerPort: 80
    readinessProbe:              # checks the Service URL, so the Pod only becomes ready once the Service has an endpoint
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:               # simply runs true, as required
      exec:
        command:
        - 'true'
      initialDelaySeconds: 5
      periodSeconds: 5
  dnsPolicy: ClusterFirst
  restartPolicy: Always
The second Pod (am-i-ready), carrying the label the Service selects on:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    id: cross-server-ready
  name: am-i-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: am-i-ready
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
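
A quick way to confirm the behaviour the task describes (file names are illustrative): the first Pod stays 0/1 until the second Pod exists and becomes an endpoint of the Service, after which the first Pod turns Ready.

kubectl apply -f ready-if-service-ready.yaml
kubectl get pod ready-if-service-ready       # READY 0/1, the readiness probe still fails
kubectl apply -f am-i-ready.yaml
kubectl get endpoints service-am-i-ready     # the am-i-ready Pod IP shows up as endpoint
kubectl get pod ready-if-service-ready       # READY 1/1 shortly afterwards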

 

Question 5:

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).

Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.

Analysis:

This one is fairly simple; we only have to write the query commands into the files.

The first command lists all Pods sorted by creation time and goes into /opt/course/5/find_pods.sh.

The second command lists all Pods sorted by their uid and goes into /opt/course/5/find_pods_uid.sh.

cat /opt/course/5/find_pods.sh
kubectl get po --sort-by {.metadata.creationTimestamp} -A
cat /opt/course/5/find_pods_uid.sh
kubectl get po --sort-by {.metadata.uid} -A


Question 6:

Task weight: 8%

Use context: kubectl config use-context k8s-c1-H

Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.

Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.

Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.

Analysis:

Medium difficulty; the manifests can be copied from the official documentation. Once the PVC is created it looks for a matching PV by itself: the PV offers 2Gi and the PVC requests 2Gi, both use accessMode ReadWriteOnce and neither defines a storageClassName, so this PV is the only suitable one. The binding between the two is established through the matching storage size and access mode.

Creating the PV:

cat safari-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: /Volumes/Data

Creating the PVC:

cat safari-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Once the PV and PVC are created, check their status; both must show Bound:

k8s@terminal:~$ kubectl get pv,pvc -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/safari-pv 2Gi RWO Delete Bound project-tiger/safari-pvc 4h2m

NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
project-tiger persistentvolumeclaim/safari-pvc Bound safari-pv 2Gi RWO 3h55m

Using the PVC in the Deployment, exactly as the task requires:

cat safari-dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: safari
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  strategy: {}
  template:
    metadata:
      labels:
        app: safari
    spec:
      containers:
      - image: httpd:2.4.41-alpine
        name: safari
        resources: {}
        volumeMounts:                 # path mounted inside the Pod
        - name: safari
          mountPath: /tmp/safari-data
      volumes:
      - name: safari                  # must match the volumeMounts name above
        persistentVolumeClaim:
          claimName: safari-pvc
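
To double-check that the Deployment is up and really mounts the claim (a quick sketch):

kubectl -n project-tiger get deploy safari                              # should show 1/1 READY
kubectl -n project-tiger describe pod -l app=safari | grep -A2 Mounts   # /tmp/safari-data from safari (rw)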

Question 7:

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

The metrics-server has been installed in the cluster. Your colleague would like to know the kubectl commands to:

  1. show Nodes resource usage
  2. show Pods and their containers resource usage

Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.

Analysis:

The metrics-server is already installed in the environment, so nothing needs to be set up; just write the two commands into the corresponding files. Note that kubectl top node does not take -A, since nodes are not namespaced.

cat /opt/course/7/node.sh
kubectl top node
cat /opt/course/7/pod.sh
kubectl top pod --containers=true


Question 8:

Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

Ssh into the master node with ssh cluster1-master1. Check how the master components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the master node. Also find out the name of the DNS application and how it's started/installed on the master node.

Write your findings into file /opt/course/8/master-components.txt. The file should be structured like:

# /opt/course/8/master-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]

Choices of [TYPE] are: not-installed, process, static-pod, pod

Analysis:

First log in with ssh cluster1-master1. The kubelet runs as a regular process managed by systemd; kube-apiserver, kube-scheduler, kube-controller-manager and etcd run as static Pods defined under /etc/kubernetes/manifests; and the DNS application is coredns, running as normal Pods managed by a Deployment. Check this with ps/systemctl for the kubelet, by listing the manifests directory, and with kubectl get pod -A for the rest, then fill the findings into the file using the choices given in the task. The answer is below.
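
A minimal sketch of the checks on cluster1-master1 (paths are the kubeadm defaults):

ssh cluster1-master1
ps aux | grep kubelet                       # kubelet runs as a process (managed by systemd)
systemctl status kubelet                    # confirms it is a service, not a Pod
ls /etc/kubernetes/manifests/               # etcd, kube-apiserver, kube-controller-manager, kube-scheduler -> static pods
kubectl -n kube-system get pod | grep dns   # coredns runs as normal Pods, owned by a Deployment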

Answer:

cat /opt/course/8/master-components.txt
# /opt/course/8/master-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns

Question 9:

Task weight: 5%

Use context: kubectl config use-context k8s-c2-AC

Ssh into the master node with ssh cluster2-master1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.

Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.

Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-master1. Make sure it's running.

Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-worker1.

Analysis:

This task tests how a static Pod is stopped and started: the kube-scheduler is a static Pod, so stopping it temporarily means moving /etc/kubernetes/manifests/kube-scheduler.yaml out of the manifests directory and, at the end, moving it back. It also tests manual node assignment: with no scheduler running, the first Pod stays Pending until we set nodeName ourselves (nodeName cannot be changed on an existing Pod, so recreate it, for example with kubectl replace --force -f).
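
A minimal sketch of the scheduler part, assuming we park the manifest one directory up:

ssh cluster2-master1
# stop the scheduler: the kubelet removes the static Pod once its manifest leaves the directory
mv /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/
kubectl -n kube-system get pod | grep scheduler     # the kube-scheduler Pod disappears

# ... create and manually schedule the manual-schedule Pod here (manifests below) ...

# start the scheduler again by moving the manifest back
mv /etc/kubernetes/kube-scheduler.yaml /etc/kubernetes/manifests/
kubectl -n kube-system get pod | grep scheduler     # the kube-scheduler Pod is Running again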

cat manual-schedule.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: manual-schedule
  name: manual-schedule
spec:
  containers:
  - image: httpd:2.4-alpine
    name: manual-schedule
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeName: cluster2-master1   # we do the scheduler's job and bind the Pod to the master node
status: {}


cat manual-schedule2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: manual-schedule2
  name: manual-schedule2
spec:
  containers:
  - image: httpd:2.4-alpine
    name: manual-schedule2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  # no nodeName here: the restarted kube-scheduler must place this Pod itself,
  # which is how we confirm the scheduler works again (it should pick cluster2-worker1)
status: {}
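
Finally, a quick verification that both Pods ended up where the task expects:

kubectl get pod manual-schedule manual-schedule2 -o wide
# manual-schedule    Running on cluster2-master1 (assigned manually via nodeName)
# manual-schedule2   Running on cluster2-worker1 (placed by the restarted scheduler)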

Question 10:

Task weight: 6%

Use context: kubectl config use-context k8s-c1-H

Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.

Analysis:

This task is about RBAC: create the ServiceAccount, then a Role that only allows creating Secrets and ConfigMaps in the Namespace, and bind the two together with a RoleBinding.
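
The ServiceAccount required by the task is a one-liner:

kubectl -n project-hamster create serviceaccount processor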

Create the Role:

kubectl -n project-hamster create role processor --verb=create --resource=secret --resource=configmap

The resulting Role object:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: processor
  namespace: project-hamster
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  - configmaps
  verbs:
  - create

Create the RoleBinding:

kubectl -n project-hamster create rolebinding processor \
  --role=processor \
  --serviceaccount=project-hamster:processor

The resulting RoleBinding object:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: processor
  namespace: project-hamster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: processor
subjects:
- kind: ServiceAccount
  name: processor
  namespace: project-hamster
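
To verify the permissions, kubectl auth can-i can impersonate the ServiceAccount (a quick sketch):

kubectl -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor      # yes
kubectl -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor   # yes
kubectl -n project-hamster auth can-i delete secret --as system:serviceaccount:project-hamster:processor      # no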



From: https://blog.51cto.com/u_15966109/6082666
