I. Docker Installation
1. Download the binaries
Docker static binary packages for aarch64: https://download.docker.com/linux/static/stable/aarch64/
This guide uses docker-20.10.20.tgz.
Extract the archive and copy the files in the docker directory to /usr/bin:
$ tar -xf docker-20.10.20.tgz
$ mv docker/* /usr/bin
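A quick, optional sanity check that the binaries are now on the PATH:
$ docker --version
$ dockerd --version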
2. Configure the systemd service
Create the docker.service unit file:
$ cat > /usr/lib/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
# Allow shared mounts
MountFlags=shared
[Install]
WantedBy=multi-user.target
EOF
Configure the daemon:
$ mkdir -p /etc/docker
$ tee /etc/docker/daemon.json <<-'EOF'
{
"exec-opts": ["native.cgroupdriver=cgroupfs"],
"registry-mirrors": ["https://isch1uhg.mirror.aliyuncs.com"]
}
EOF
Start and enable Docker:
$ systemctl daemon-reload
$ systemctl start docker
$ systemctl enable docker
$ systemctl status docker
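To confirm the daemon picked up daemon.json, the cgroup driver can be checked (with the configuration above it should report cgroupfs):
$ docker info | grep -i 'cgroup driver'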
II. Kubernetes Installation
1. Base components
Add the Aliyun yum repository for the Kubernetes packages.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm, and kubectl.
Version 1.22.2 is pinned here; if no version is specified, the latest release is installed.
$ yum install -y kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2
# Without a pinned version, the latest components are installed
$ yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Make sure kubelet starts on boot.
$ systemctl enable --now kubelet
The cgroup driver used by kubelet must match the one used by Docker. Since daemon.json above sets native.cgroupdriver=cgroupfs, it is recommended to set the same driver in "/etc/sysconfig/kubelet":
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs"
KUBE_PROXY_MODE="ipvs"
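Because KUBE_PROXY_MODE="ipvs" is set above, the IPVS kernel modules must be loadable on every node. A hedged sketch of that check (module names vary slightly across kernel versions; older kernels use nf_conntrack_ipv4 instead of nf_conntrack):
$ modprobe -- ip_vs
$ modprobe -- ip_vs_rr
$ modprobe -- ip_vs_wrr
$ modprobe -- ip_vs_sh
$ modprobe -- nf_conntrack
$ lsmod | grep -e ip_vs -e nf_conntrack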
Initialize the master node (run only on the master node):
# The default image registry k8s.gcr.io is unreachable from mainland China, so the Aliyun mirror is specified instead
$ kubeadm init \
--apiserver-advertise-address=172.172.31.217 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.22.2 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
If the output reports that the control plane initialized successfully and prints the join command for worker nodes, the master is up.
Following the printed instructions, set up the kubeconfig file on the master node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
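At this point kubectl should be able to reach the API server; the node will show NotReady until a CNI plugin is installed in the next step:
$ kubectl get nodes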
Join node01 and node02 to the cluster (run on each worker node):
$ kubeadm join 172.172.31.217:6443 --token yyalpm.0dspmr813eh8fgr2 \
--discovery-token-ca-cert-hash sha256:b8b0fcf26d886473dc346efb13874fe785e8bd51d860175aab7ce793c1198f8a
The token above is environment-specific; if it has been lost or needs to be regenerated for any reason, run:
$ kubeadm token create --print-join-command
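For reference, the discovery-token CA cert hash can also be recomputed from the cluster CA certificate with a standard OpenSSL pipeline (shown here as a sketch):
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'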
2. CNI
The flannel plugin is used here:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/release1.23/Documentation/kube-flannel.yml
Note that kubectl needs the raw manifest URL, not the GitHub blob page. If network access is poor, copy the manifest below directly:
---
kind: Namespace
apiVersion: v1
metadata:
name: kube-flannel
labels:
pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-flannel
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-flannel
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni-plugin
#image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
command:
- cp
args:
- -f
- /flannel
- /opt/cni/bin/flannel
volumeMounts:
- name: cni-plugin
mountPath: /opt/cni/bin
- name: install-cni
#image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
#image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: EVENT_QUEUE_DEPTH
value: "5000"
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni-plugin
hostPath:
path: /opt/cni/bin
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
Note:
After installation, check the status of the network plugin:
$ kubectl get pod -A
The coredns-xxxx Pods in the kube-system namespace may report an error about a missing network plugin: failed to find plugin "xxx" in path [/opt/cni/bin]
This happens because the CNI plugins the Pod needs are not installed; they have to be installed manually.
Find the matching version at https://github.com/containernetworking/plugins/releases, download it, extract it into /opt/cni/bin, and then restart kubelet.
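A hedged example of that manual step on an arm64 node (the v1.1.1 version number is only an illustration; pick the release that matches your cluster):
$ curl -LO https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz
$ mkdir -p /opt/cni/bin
$ tar -xzf cni-plugins-linux-arm64-v1.1.1.tgz -C /opt/cni/bin
$ systemctl restart kubelet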
3. Verification
Deploy an Nginx service in the Kubernetes cluster to verify that the cluster works correctly.
# Create the Nginx deployment
$ kubectl create deployment nginx --image=nginx:1.14-alpine
# Expose the port
$ kubectl expose deploy nginx --port=80 --target-port=80 --type=NodePort
# Check the service
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3m36s
nginx NodePort 10.97.172.215 <none> 80:31021/TCP 6s
# Access the service
curl http://172.172.31.217:31021
III. Enabling Volume Snapshots
Clone the external-snapshotter source; the release-6.2 branch is used here.
$ git clone -b release-6.2 https://github.com/kubernetes-csi/external-snapshotter.git
Install the snapshot CRDs:
$ kubectl kustomize client/config/crd | kubectl create -f -
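You can confirm the CRDs were registered:
$ kubectl get crd | grep snapshot.storage.k8s.io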
Install the snapshot controller:
Note:
The controller images live in "registry.k8s.io", which is unreachable here, so they need to be replaced with "registry.aliyuncs.com/google_containers".
Edit "external-snapshotter/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml" and replace "registry.k8s.io/sig-storage/" with "registry.aliyuncs.com/google_containers/".
Then install the snapshot controller with:
$ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
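Verify that the controller Pod comes up:
$ kubectl -n kube-system get pods | grep snapshot-controller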
IV. MinIO as the Kubernetes Storage Backend
1. Prerequisites
The Docker daemon must allow shared mounts (the systemd flag MountFlags=shared).
Since this was already configured when installing Docker above, you can verify it with:
$ systemctl show --property=MountFlags docker.service
MountFlags=shared
If it is not configured, edit the docker.service file, reload systemd, restart Docker, and check again.
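For reference, the reload-and-restart sequence looks like this (assuming you edited /usr/lib/systemd/system/docker.service as shown earlier):
$ systemctl daemon-reload
$ systemctl restart docker
$ systemctl show --property=MountFlags docker.service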
2. csi-s3
1) Modify the configuration files
The yandex-cloud/k8s-csi-s3 repository is used here (compared with the alternatives, its code is more actively maintained).
$ git clone https://github.com/yandex-cloud/k8s-csi-s3.git
The official images are built for amd64 (x86_64) only, so an ARM64 image has to be built first.
Start by modifying the Dockerfile.
$ cd k8s-csi-s3
The modified Dockerfile is shown below.
FROM golang:1.16-alpine as gobuild
WORKDIR /build
ADD go.mod go.sum /build/
ADD cmd /build/cmd
ADD pkg /build/pkg
ENV GOPROXY https://goproxy.cn
RUN go mod download
RUN go get -d -v ./...
RUN CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -a -ldflags '-extldflags "-static"' -o ./s3driver ./cmd/s3driver
FROM alpine:3.16.4
LABEL maintainers="Vitaliy Filippov <[email protected]>"
LABEL description="csi-s3 slim image"
# apk add temporarily broken:
#ERROR: unable to select packages:
# so:libcrypto.so.3 (no such package):
# required by: s3fs-fuse-1.91-r1[so:libcrypto.so.3]
#RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing s3fs-fuse rclone
ADD geesefs /usr/bin/geesefs
RUN chmod 755 /usr/bin/geesefs
COPY --from=gobuild /build/s3driver /s3driver
ENTRYPOINT ["/s3driver"]
geesefs is used to mount the file system, but the project does not provide an ARM binary either, so the executable has to be built from source.
$ git clone https://github.com/yandex-cloud/geesefs.git
$ cd geesefs
$ CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build
The build produces a geesefs binary, which must be copied into the k8s-csi-s3 directory.
With everything above in place, the csi-s3 image can be built.
$ cd k8s-csi-s3
$ docker build -t suninfo/csi-s3:0.34.4 .
Package the built image with docker save, distribute it to the other nodes, and load it there with docker load.
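A sketch of that distribution step (the node01 hostname is just a placeholder):
$ docker save -o csi-s3-0.34.4.tar suninfo/csi-s3:0.34.4
$ scp csi-s3-0.34.4.tar root@node01:/tmp/
# On each remote node:
$ docker load -i /tmp/csi-s3-0.34.4.tar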
Once the image is ready, the deployment manifests also need to be adjusted.
cd k8s-csi-s3/deploy/kubernetes
Replace the images referenced in the three manifests: attacher.yaml, csi-s3.yaml, and provisioner.yaml.
attacher.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-attacher-sa
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-attacher-runner
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments/status"]
verbs: ["patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-attacher-role
subjects:
- kind: ServiceAccount
name: csi-attacher-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: external-attacher-runner
apiGroup: rbac.authorization.k8s.io
---
# needed for StatefulSet
kind: Service
apiVersion: v1
metadata:
name: csi-attacher-s3
namespace: kube-system
labels:
app: csi-attacher-s3
spec:
selector:
app: csi-attacher-s3
ports:
- name: csi-s3-dummy
port: 65535
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: csi-attacher-s3
namespace: kube-system
spec:
serviceName: "csi-attacher-s3"
replicas: 1
selector:
matchLabels:
app: csi-attacher-s3
template:
metadata:
labels:
app: csi-attacher-s3
spec:
serviceAccount: csi-attacher-sa
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
containers:
- name: csi-attacher
image: longhornio/csi-attacher:v3.4.0-arm64
args:
- "--v=4"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/plugins/ru.yandex.s3.csi
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/ru.yandex.s3.csi
type: DirectoryOrCreate
csi-s3.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-s3
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-s3
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "update"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-s3
subjects:
- kind: ServiceAccount
name: csi-s3
namespace: kube-system
roleRef:
kind: ClusterRole
name: csi-s3
apiGroup: rbac.authorization.k8s.io
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: csi-s3
namespace: kube-system
spec:
selector:
matchLabels:
app: csi-s3
template:
metadata:
labels:
app: csi-s3
spec:
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- operator: Exists
effect: NoExecute
tolerationSeconds: 300
serviceAccount: csi-s3
hostNetwork: true
containers:
- name: driver-registrar
image: longhornio/csi-node-driver-registrar:v1.2.0-lh1
args:
- "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
- "--v=4"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /csi/csi.sock
- name: DRIVER_REG_SOCK_PATH
value: /var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: plugin-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration/
- name: csi-s3
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
image: suninfo/csi-s3:0.34.4
imagePullPolicy: IfNotPresent
args:
- "--endpoint=$(CSI_ENDPOINT)"
- "--nodeid=$(NODE_ID)"
- "--v=4"
env:
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: plugin-dir
mountPath: /csi
- name: pods-mount-dir
mountPath: /var/lib/kubelet/pods
mountPropagation: "Bidirectional"
- name: fuse-device
mountPath: /dev/fuse
volumes:
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry/
type: DirectoryOrCreate
- name: plugin-dir
hostPath:
path: /var/lib/kubelet/plugins/ru.yandex.s3.csi
type: DirectoryOrCreate
- name: pods-mount-dir
hostPath:
path: /var/lib/kubelet/pods
type: Directory
- name: fuse-device
hostPath:
path: /dev/fuse
provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-provisioner-sa
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-provisioner-runner
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-provisioner-role
subjects:
- kind: ServiceAccount
name: csi-provisioner-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: external-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: csi-provisioner-s3
namespace: kube-system
labels:
app: csi-provisioner-s3
spec:
selector:
app: csi-provisioner-s3
ports:
- name: csi-s3-dummy
port: 65535
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: csi-provisioner-s3
namespace: kube-system
spec:
serviceName: "csi-provisioner-s3"
replicas: 1
selector:
matchLabels:
app: csi-provisioner-s3
template:
metadata:
labels:
app: csi-provisioner-s3
spec:
serviceAccount: csi-provisioner-sa
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
containers:
- name: csi-provisioner
image: longhornio/csi-provisioner:v2.1.2
args:
- "--csi-address=$(ADDRESS)"
- "--v=4"
env:
- name: ADDRESS
value: /var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/plugins/ru.yandex.s3.csi
- name: csi-s3
image: suninfo/csi-s3:0.34.4
imagePullPolicy: IfNotPresent
args:
- "--endpoint=$(CSI_ENDPOINT)"
- "--nodeid=$(NODE_ID)"
- "--v=4"
env:
- name: CSI_ENDPOINT
value: unix:///var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/plugins/ru.yandex.s3.csi
volumes:
- name: socket-dir
emptyDir: {}
2) Installation
$ cd deploy/kubernetes
$ kubectl create -f provisioner.yaml
$ kubectl create -f attacher.yaml
$ kubectl create -f csi-s3.yaml
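Check that the csi-s3 components are running (the labels come from the manifests above):
$ kubectl -n kube-system get pods -l app=csi-s3
$ kubectl -n kube-system get pods -l app=csi-provisioner-s3
$ kubectl -n kube-system get pods -l app=csi-attacher-s3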
3) Testing
First, create a Secret with the S3 credentials:
apiVersion: v1
kind: Secret
metadata:
name: csi-s3-secret
# Namespace depends on the configuration in the storageclass.yaml
namespace: kube-system
stringData:
accessKeyID: root
secretAccessKey: suninfo@123
# For AWS set it to "https://s3.<region>.amazonaws.com", for example https://s3.eu-central-1.amazonaws.com
endpoint: http://172.172.31.217:44088
# For AWS set it to AWS region
#region: ""
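Save the manifest above and create it (the secret.yaml filename is just an assumption; use whatever name you saved it under):
$ kubectl create -f secret.yaml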
Create the storage class:
$ kubectl create -f examples/storageclass.yaml
Create the PVC:
$ kubectl create -f examples/pvc.yaml
Check that the PVC has been bound to a PV:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-s3-pvc Bound pvc-51f06ad4-ef6a-46b9-8dec-110ffd0f1965 5Gi RWX csi-s3 2d17h
Create a Pod to test the mount:
$ kubectl create -f examples/pod.yaml
Test the mount:
$ kubectl exec -ti csi-s3-test-nginx bash
$ mount | grep fuse
pvc-51f06ad4-ef6a-46b9-8dec-110ffd0f1965: on /usr/share/nginx/html/s3 type fuse.geesefs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
$ touch /usr/share/nginx/html/s3/hello.txt
You can log in to the MinIO web console and check whether the file shows up in the corresponding bucket.