
Deploying a Kubernetes Cluster


Installation Methods

  • kubernetes binary install (the most tedious to configure, on par with installing openstack)

  • kubeadm install (Google's automated installer; makes demands on your network)

  • minikube install (only for getting a feel for k8s)

  • yum install (the simplest, though the version is fairly old; recommended for learning)

  • go build from source (the hardest)

Environment

ip: 192.168.115.149   hostname: node1 (renamed k8s-master below)
ip: 192.168.115.151   hostname: node2 (renamed k8s-node1 below)
ip: 192.168.115.152   hostname: node3 (renamed k8s-node2 below)

Preparation

Note: all three machines in the k8s cluster need these preparation steps.

1. Check IPs and UUIDs: make sure the MAC address and product_uuid are unique on every node

2. Allow iptables to see bridged traffic: make sure the br_netfilter module is loaded, that iptables can see bridged traffic, and set the routing sysctls

3. Disable SELinux, the firewall, and swap

4. Change the hostnames and add hosts entries

5. Install docker: mind the version mapping between docker and k8s, and set the cgroup driver (systemd is used here, otherwise the later kubeadm init prints warnings); a sketch of this setting follows this list
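
A minimal sketch of step 5's cgroup-driver setting, assuming docker reads its daemon config from /etc/docker/daemon.json and that overwriting that file is acceptable on these fresh machines:

cat <<EOF | tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
#verify: should now report "Cgroup Driver: systemd"
docker info | grep -i cgroup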

#####Check IPs and UUIDs
ifconfig -a    #or: ip a
cat /sys/class/dmi/id/product_uuid

#####Allow iptables to see bridged traffic
1. Make sure the br_netfilter module is loaded
#list the modules currently loaded
lsmod | grep br_netfilter
#if it is not loaded, load it
modprobe br_netfilter

2. Make sure iptables can see bridged traffic
Make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl config
sysctl -a | grep net.bridge.bridge-nf-call-iptables
3. Set the routing sysctls
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

#####Disable SELinux, the firewall, and swap
#disable selinux temporarily (lost after reboot)
setenforce 0
#to disable it permanently, edit the config and reboot the server
vim /etc/selinux/config
#change SELINUX=enforcing to
SELINUX=disabled
#disable the firewall
systemctl status firewalld
systemctl stop firewalld
#disable swap
#temporarily
swapoff -a
#permanently
vim /etc/fstab
#comment out the swap auto-mount line
#(optional) then pin swappiness to 0
vim /etc/sysctl.d/k8s.conf
#add the line below
vm.swappiness=0
sysctl -p /etc/sysctl.d/k8s.conf
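
As a sketch, commenting out the swap mount can also be scripted instead of editing /etc/fstab by hand (this assumes the swap entry contains the word "swap"; back the file up first):

cp /etc/fstab /etc/fstab.bak
#comment out every non-comment line that mentions swap
sed -ri 's/^([^#].*swap.*)$/#\1/' /etc/fstab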

#####Change the hostnames and add hosts entries
hostnamectl set-hostname k8s-master    #on 192.168.115.149
hostnamectl set-hostname k8s-node1     #on 192.168.115.151
hostnamectl set-hostname k8s-node2     #on 192.168.115.152

vim /etc/hosts
192.168.115.149   k8s-master
192.168.115.151   k8s-node1
192.168.115.152   k8s-node2
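
A quick sanity check that the new names resolve from every machine (hostnames as set above):

for h in k8s-master k8s-node1 k8s-node2; do ping -c 1 $h; done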

Installing and Deploying with yum

Reference: k8s搭建部署(超详细), Anime777's blog on CSDN

Install kubeadm, kubelet, and kubectl

Note: install these on all three machines in the k8s cluster.

#add the Aliyun k8s YUM repo
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg


#install kubeadm, kubelet, and kubectl; mind the version mapping against docker
yum install -y kubelet-1.21.1 kubeadm-1.21.1 kubectl-1.21.1

#start kubelet and enable it at boot; note that on the master it will not run cleanly until kubeadm init (see the sketch below)
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet
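
Until kubeadm init (or kubeadm join) runs, kubelet keeps restarting because it has no configuration yet; to watch what it is doing while troubleshooting, one option is:

#follow the kubelet logs; errors before initialization are expected
journalctl -u kubelet -f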

 

Cluster Deployment

#initialize the master node (run on the master only)
kubeadm init --apiserver-advertise-address=192.168.115.149 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.1 --service-cidr=10.140.0.0/16 --pod-network-cidr=10.240.0.0/16

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
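
With the kubeconfig in place, a quick check that kubectl can reach the new API server:

kubectl cluster-info
#the control-plane pods should be Running; coredns stays Pending until a network plugin is installed
kubectl get pods -n kube-system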

 

#join the worker nodes: copy the kubeadm join command printed by the successful kubeadm init and run it on each node
kubeadm join 192.168.115.149:6443 --token swshsb.7yu37gx1929902tl \
    --discovery-token-ca-cert-hash sha256:626728b1a039991528a031995ed6ec8069382b489c8ae1e61286f96fcd9a3bfc

#after the nodes join, check them from the master
kubectl get nodes
At this point the nodes still show NotReady; installing a network plugin is the last step of creating the k8s cluster.

Install a Network Plugin

Note: install this on the master node. Either the flannel plugin or the calico plugin works; flannel is installed here.

vim kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  seLinux:
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: rancher/mirrored-flannelcni-flannel:v0.18.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: rancher/mirrored-flannelcni-flannel:v0.18.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Edit net-conf.json
The network below must be changed to the --pod-network-cidr passed to kubeadm init above
sed -i 's/10.244.0.0/10.240.0.0/' kube-flannel.yml
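
To confirm the substitution took effect:

#should now print "Network": "10.240.0.0/16"
grep -n '"Network"' kube-flannel.yml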

#apply the manifest
kubectl apply -f kube-flannel.yml

#check the status of the rollout
kubectl get pods --all-namespaces

#check that the nodes are now Ready
kubectl get nodes

===Appendix: uninstalling flannel================

1. On the master node, delete flannel with the same manifest
kubectl delete -f kube-flannel.yml
2. On each node, clean up the files the flannel network leaves behind
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
rm -f /etc/cni/net.d/*
After the steps above, restart kubelet (a sketch follows).
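
A minimal sketch of that restart (the docker restart is an optional extra to drop any stale network state):

systemctl restart kubelet
systemctl restart docker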


Testing the Kubernetes Cluster

Note: create a pod and expose it externally. NodePort maps a random port, and with no namespace specified the deployment is created under default.

kubectl create deployment nginx --image=nginx

kubectl expose deployment nginx --port=80 --type=NodePort
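
Since NodePort picks the port at random, one way to find it and hit nginx from outside (node IP as used in this setup):

#read the assigned NodePort from the service
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.115.149:$NODE_PORT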


Problem Summary

 kubelet fails to start on the master node

Checking kubelet status at this stage shows errors (screenshot omitted); this is normal, just proceed with the master initialization.
 Handling master initialization failures

Running kubeadm init --apiserver-advertise-address=192.168.115.149 --kubernetes-version v1.21.1 --service-cidr=10.140.0.0/16 --pod-network-cidr=10.240.0.0/16

fails (error screenshot omitted).

Analysis: because of network conditions in China, kubeadm init hangs for a very long time before reporting this error. With no image repository set, kubeadm init defaults to pulling the docker images from k8s.gcr.io, and https://k8s.gcr.io/v2/ is unreachable from inside China.

Fix: add an image repository and run kubeadm init --apiserver-advertise-address=192.168.115.149 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.1 --service-cidr=10.140.0.0/16 --pod-network-cidr=10.240.0.0/16

This still fails (error screenshot omitted).

Analysis: pulling the registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 image failed.

Fix: list the images kubeadm needs, pull any missing ones by hand, and fix up the tag.

#list the images kubeadm needs to download
kubeadm config images list

#list the local images
docker images

A coredns:v1.8.0 image turns out to be present already, just under a different tag; retag it:

docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 
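
As a preventive sketch, the required images can also be pulled ahead of time with the flags used in this guide, so kubeadm init has nothing left to download (note it may still hit the same coredns path issue, in which case the retag above applies):

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.1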

Run kubeadm init --apiserver-advertise-address=192.168.115.149 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.1 --service-cidr=10.140.0.0/16 --pod-network-cidr=10.240.0.0/16 once more.

Success!

 Successful master initialization log

[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.115.149 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.1 --service-cidr=10.140.0.0/16 --pod-network-cidr=10.240.0.0/16
[init] Using Kubernetes version: v1.21.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Hostname]: hostname "k8s-master" could not be reached
    [WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 192.168.115.2:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.140.0.1 192.168.115.149]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.115.149 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.115.149 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 64.005303 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: swshsb.7yu37gx1929902tl
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.115.149:6443 --token swshsb.7yu37gx1929902tl \
    --discovery-token-ca-cert-hash sha256:626728b1a039991528a031995ed6ec8069382b489c8ae1e61286f96fcd9a3bfc

 

 kernel:NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [ksoftirqd/1:14] 

Many heavily loaded processes caused a cpu soft lockup.
A soft lockup is a kernel soft deadlock: it does not bring the whole system down, but some processes (or kernel threads) get locked up in some state, usually in kernel space, and in many cases this comes from how kernel locks are used.

https://blog.csdn.net/qq_44710568/article/details/104843432

https://blog.csdn.net/JAVA_LuZiMaKei/article/details/120140987

From: https://www.cnblogs.com/MeeSeeks-B/p/17062841.html
