
Deploying Kubernetes on an ARM System


I. Installing Docker

1. Download the package

Docker installation packages: https://download.docker.com/linux/static/stable/aarch64/

The version downloaded here is docker-20.10.20.tgz.

After extracting the archive, copy the files under the docker directory to /usr/bin:

$ tar -xf docker-20.10.20.tgz
$ mv docker/* /usr/bin
2. Configure the systemd service

Create the docker.service unit file:

$ cat > /usr/lib/systemd/system/docker.service <<'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
 
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
# Allow shared mounts
MountFlags=shared

[Install]
WantedBy=multi-user.target
EOF

Configure the Docker daemon:

$ mkdir -p /etc/docker
$ tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"], 
  "registry-mirrors": ["https://isch1uhg.mirror.aliyuncs.com"]
}
EOF

Start and enable Docker:

$ systemctl daemon-reload
$ systemctl start docker
$ systemctl enable docker
$ systemctl status docker
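
As a convenience check (not part of the original steps), you can confirm that the daemon picked up the cgroup driver set in daemon.json:

$ docker info --format '{{.CgroupDriver}}'
# expected output: systemd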

II. Installing Kubernetes

1. Base components

Add the Aliyun yum repository for the Kubernetes packages.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubelet, kubeadm, and kubectl.

Version 1.22.2 is pinned here; if no version is specified, the latest version is installed.

$ yum install -y kubelet-1.22.2  kubeadm-1.22.2  kubectl-1.22.2

# Without a version pin, the latest components are installed by default
$ yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Make sure kubelet is enabled to start at boot.

$ systemctl enable --now kubelet

To keep the cgroup driver used by Docker consistent with the one used by kubelet, it is recommended to edit "/etc/sysconfig/kubelet" as follows:

KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

Initialize the master node (run this only on the master node):

# The default image registry k8s.gcr.io is unreachable from mainland China, so the Aliyun mirror registry is specified here

$ kubeadm init \
  --apiserver-advertise-address=172.172.31.217 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.22.2 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

If the output contains a success message along with the command for joining nodes to the cluster, the master was initialized successfully.

Following the printed instructions, set up the kubeconfig file on the master node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
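
At this point the master node should be registered; a quick check (the node will show NotReady until the CNI plugin in the next step is installed):

$ kubectl get nodes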

Join node01 and node02 to the cluster (run on the worker nodes):

$ kubeadm join 172.172.31.217:6443 --token yyalpm.0dspmr813eh8fgr2 \
	--discovery-token-ca-cert-hash sha256:b8b0fcf26d886473dc346efb13874fe785e8bd51d860175aab7ce793c1198f8a

Use the token from your own init output; if it has been lost or needs to be regenerated for any other reason, run:

$ kubeadm token create --print-join-command
2. CNI

The flannel plugin is used here:

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/release1.23/Documentation/kube-flannel.yml

If your network connection is poor, just copy the manifest below:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Note:

After the installation completes, check the status of the network plugin pods:

$ kubectl get pod -A

The coredns-xxxx Pods in the kube-system namespace may report an error about a missing network plugin: failed to find plugin "xxx" in path [/opt/cni/bin]

This happens because a CNI plugin required by the Pod is not installed; in that case install it manually.

Find the matching version at https://github.com/containernetworking/plugins/releases, download it, extract it into /opt/cni/bin, and then restart kubelet.
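
A minimal sketch of that manual installation, assuming an arm64 node and a hypothetical plugin version v1.1.1 (pick the release that matches your environment):

$ CNI_VERSION=v1.1.1
$ curl -LO https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-arm64-${CNI_VERSION}.tgz
$ mkdir -p /opt/cni/bin
$ tar -xzf cni-plugins-linux-arm64-${CNI_VERSION}.tgz -C /opt/cni/bin
$ systemctl restart kubelet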

3. Verification

Deploy an Nginx service in the Kubernetes cluster to verify that the cluster works.

# Create the Nginx deployment
$ kubectl create deployment nginx --image=nginx:1.14-alpine

# Expose the port
$ kubectl expose deploy nginx --port=80 --target-port=80 --type=NodePort

# Check the service
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3m36s
nginx        NodePort    10.97.172.215   <none>        80:31021/TCP   6s

# Access it
curl http://172.172.31.217:31021
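
Once the curl above returns the Nginx welcome page, the test resources can be removed again:

$ kubectl delete svc nginx
$ kubectl delete deployment nginx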

III. Enabling Volume Snapshots

Download the external-snapshotter source code; release-6.2 is used here.

$ git clone -b release-6.2 https://github.com/kubernetes-csi/external-snapshotter.git

Install the snapshot CRDs (run from the repository directory):

$ cd external-snapshotter
$ kubectl kustomize client/config/crd | kubectl create -f -
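
To confirm the CRDs were registered, list them; the volumesnapshot, volumesnapshotcontent, and volumesnapshotclass CRDs in the snapshot.storage.k8s.io group should appear:

$ kubectl get crd | grep snapshot.storage.k8s.io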

Install the snapshot controller:

Note:

The controller images are hosted on "registry.k8s.io", so they need to be replaced with "registry.aliyuncs.com/google_containers".

Edit the "external-snapshotter/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml" file directly and replace "registry.k8s.io/sig-storage/" with "registry.aliyuncs.com/google_containers/".
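
A minimal sketch of that substitution with sed, run from the external-snapshotter directory:

$ sed -i 's#registry.k8s.io/sig-storage/#registry.aliyuncs.com/google_containers/#g' \
    deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml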

Then install the snapshot controller with the following command:

$ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
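
The controller runs in the kube-system namespace; check that its pod comes up:

$ kubectl -n kube-system get pods | grep snapshot-controller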

IV. Using MinIO as the Kubernetes Storage Backend

1. Prerequisites

The Docker daemon must allow shared mounts (the systemd flag MountFlags=shared).

Since this was already configured when installing Docker, it can be verified with the following command:

$ systemctl show --property=MountFlags docker.service
MountFlags=shared

If it is not configured, edit the docker.service file, reload systemd, restart Docker, and check again.
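
Concretely, if the flag is missing, add MountFlags=shared to the [Service] section of docker.service (as in the unit file above), then:

$ systemctl daemon-reload
$ systemctl restart docker
$ systemctl show --property=MountFlags docker.service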

2. csi-s3
1) Modify the configuration files

The yandex-cloud/k8s-csi-s3 repository is used here (its code is more actively maintained than the alternatives).

$ git clone https://github.com/yandex-cloud/k8s-csi-s3.git

The official image is built for x86_64, so an ARM version of the image has to be built first.

Start by modifying the Dockerfile.

$ cd k8s-csi-s3

Below is the modified Dockerfile.

FROM golang:1.16-alpine as gobuild

WORKDIR /build
ADD go.mod go.sum /build/
ADD cmd /build/cmd
ADD pkg /build/pkg

ENV GOPROXY https://goproxy.cn
RUN go mod download


RUN go get -d -v ./...
RUN CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -a -ldflags '-extldflags "-static"' -o ./s3driver ./cmd/s3driver

FROM alpine:3.16.4
LABEL maintainers="Vitaliy Filippov <[email protected]>"
LABEL description="csi-s3 slim image"

# apk add temporarily broken:
#ERROR: unable to select packages:
#  so:libcrypto.so.3 (no such package):
#    required by: s3fs-fuse-1.91-r1[so:libcrypto.so.3]
#RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing s3fs-fuse rclone

ADD geesefs  /usr/bin/geesefs
RUN chmod 755 /usr/bin/geesefs

COPY --from=gobuild /build/s3driver /s3driver
ENTRYPOINT ["/s3driver"]

geesefs is used to mount the filesystem, but no official ARM binary is provided for it either, so the binary has to be built as well.

$ git clone https://github.com/yandex-cloud/geesefs.git
$ cd geesefs
$ CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build

The build produces the geesefs binary, which must be copied into the k8s-csi-s3 directory.
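
For example, assuming geesefs and k8s-csi-s3 were cloned into the same parent directory (and you are still inside the geesefs directory after the build):

$ cp ./geesefs ../k8s-csi-s3/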

With everything above in place, the csi-s3 image can be built.

$ cd k8s-csi-s3
$ docker build -t suninfo/csi-s3:0.34.4 .

The built image also needs to be exported with docker save, distributed to the other nodes, and loaded there with docker load.
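
A minimal sketch of that distribution step, using node01 as a placeholder hostname:

$ docker save suninfo/csi-s3:0.34.4 -o csi-s3-0.34.4.tar
$ scp csi-s3-0.34.4.tar node01:/tmp/
$ ssh node01 'docker load -i /tmp/csi-s3-0.34.4.tar'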

Once the image is ready, the deployment YAML files also need to be modified.

$ cd k8s-csi-s3/deploy/kubernetes

The images in the three files attacher.yaml, csi-s3.yaml, and provisioner.yaml need to be replaced.

attacher.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-attacher-sa
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-attacher-runner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-role
subjects:
  - kind: ServiceAccount
    name: csi-attacher-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: external-attacher-runner
  apiGroup: rbac.authorization.k8s.io
---
# needed for StatefulSet
kind: Service
apiVersion: v1
metadata:
  name: csi-attacher-s3
  namespace: kube-system
  labels:
    app: csi-attacher-s3
spec:
  selector:
    app: csi-attacher-s3
  ports:
    - name: csi-s3-dummy
      port: 65535
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-attacher-s3
  namespace: kube-system
spec:
  serviceName: "csi-attacher-s3"
  replicas: 1
  selector:
    matchLabels:
      app: csi-attacher-s3
  template:
    metadata:
      labels:
        app: csi-attacher-s3
    spec:
      serviceAccount: csi-attacher-sa
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
        - key: CriticalAddonsOnly
          operator: Exists
      containers:
        - name: csi-attacher
          image: longhornio/csi-attacher:v3.4.0-arm64
          args:
            - "--v=4"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ru.yandex.s3.csi
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/ru.yandex.s3.csi
            type: DirectoryOrCreate

csi-s3.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-s3
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-s3
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "update"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-s3
subjects:
  - kind: ServiceAccount
    name: csi-s3
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-s3
  apiGroup: rbac.authorization.k8s.io
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-s3
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-s3
  template:
    metadata:
      labels:
        app: csi-s3
    spec:
      tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
        - operator: Exists
          effect: NoExecute
          tolerationSeconds: 300
      serviceAccount: csi-s3
      hostNetwork: true
      containers:
        - name: driver-registrar
          image: longhornio/csi-node-driver-registrar:v1.2.0-lh1
          args:
            - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
            - "--v=4"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration/
        - name: csi-s3
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: suninfo/csi-s3:0.34.4
          imagePullPolicy: IfNotPresent
          args:
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--nodeid=$(NODE_ID)"
            - "--v=4"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: fuse-device
              mountPath: /dev/fuse
      volumes:
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins/ru.yandex.s3.csi
            type: DirectoryOrCreate
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - name: fuse-device
          hostPath:
            path: /dev/fuse

provisioner.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provisioner-sa
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: csi-provisioner-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
  name: csi-provisioner-s3
  namespace: kube-system
  labels:
    app: csi-provisioner-s3
spec:
  selector:
    app: csi-provisioner-s3
  ports:
    - name: csi-s3-dummy
      port: 65535
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-provisioner-s3
  namespace: kube-system
spec:
  serviceName: "csi-provisioner-s3"
  replicas: 1
  selector:
    matchLabels:
      app: csi-provisioner-s3
  template:
    metadata:
      labels:
        app: csi-provisioner-s3
    spec:
      serviceAccount: csi-provisioner-sa
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
        - key: CriticalAddonsOnly
          operator: Exists
      containers:
        - name: csi-provisioner
          image: longhornio/csi-provisioner:v2.1.2
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=4"
          env:
            - name: ADDRESS
              value: /var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ru.yandex.s3.csi
        - name: csi-s3
          image: suninfo/csi-s3:0.34.4
          imagePullPolicy: IfNotPresent
          args:
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--nodeid=$(NODE_ID)"
            - "--v=4"
          env:
            - name: CSI_ENDPOINT
              value: unix:///var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/kubelet/plugins/ru.yandex.s3.csi
      volumes:
        - name: socket-dir
          emptyDir: {}

2) Installation
$ cd deploy/kubernetes
$ kubectl create -f provisioner.yaml
$ kubectl create -f attacher.yaml
$ kubectl create -f csi-s3.yaml
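
After the three manifests are applied, the csi-s3 DaemonSet plus the attacher and provisioner StatefulSets should come up in kube-system:

$ kubectl -n kube-system get pods | grep csi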

3) Testing

First create a Secret with the S3 credentials:

apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  # Namespace depends on the configuration in the storageclass.yaml
  namespace: kube-system
stringData:
  accessKeyID: root
  secretAccessKey: suninfo@123
  # For AWS set it to "https://s3.<region>.amazonaws.com", for example https://s3.eu-central-1.amazonaws.com
  endpoint: http://172.172.31.217:44088
  # For AWS set it to AWS region
  #region: ""

Create the storage class:

$ kubectl create -f examples/storageclass.yaml

Create the PVC:

$ kubectl create -f examples/pvc.yaml

Check whether the PVC has been bound to a PV:

$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-s3-pvc   Bound    pvc-51f06ad4-ef6a-46b9-8dec-110ffd0f1965   5Gi        RWX            csi-s3         2d17h

Create a Pod for testing the mount:

$ kubectl create -f examples/pod.yaml

Test the mount:

$ kubectl exec -ti csi-s3-test-nginx -- bash
$ mount | grep fuse
pvc-51f06ad4-ef6a-46b9-8dec-110ffd0f1965: on /usr/share/nginx/html/s3 type fuse.geesefs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)

$ touch /usr/share/nginx/html/s3/hello.txt

You can also log in to the MinIO web console to check whether the file appears in the corresponding bucket.

From: https://blog.51cto.com/u_16082673/6504661
