K8s Overview and Deployment

Environment Planning

  1. Cluster types

    Kubernetes clusters fall into two broad categories: one master with multiple nodes, and multiple masters with multiple nodes.

    One master, multiple nodes: a single master plus several worker nodes. Simple to set up, but the master is a single point of failure; suited to test environments.

    Multiple masters, multiple nodes: several masters plus several worker nodes. More involved to set up, but highly available; suited to production environments.


  2. Installation methods

    Kubernetes can be deployed in several ways; the mainstream options today are kubeadm, Minikube, and binary packages.

    1. Minikube: a tool for quickly standing up a single-node Kubernetes instance.

    2. kubeadm: a tool for quickly bootstrapping a Kubernetes cluster, https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

    3. Binary packages: download each component's binary from the official site and install them one by one. This route is the most instructive for understanding the individual Kubernetes components, https://github.com/kubernetes/kubernetes
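
    As a quick taste of the Minikube route (not used in the deployment below; this assumes minikube and the docker driver are already installed):

    minikube start --driver=docker
    kubectl get nodes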

Environment Deployment

Environment preparation:

Role     IP                  OS               Components
master   192.168.100.10/24   CentOS Stream 8  docker, kubectl, kubeadm, kubelet
node1    192.168.100.11/24   CentOS Stream 8  docker, kubectl, kubeadm, kubelet
node2    192.168.100.12/24   CentOS Stream 8  docker, kubectl, kubeadm, kubelet

Steps:

  1. Disable the firewall and SELinux. (master/node1/node2)

    [root@master ~]# systemctl stop firewalld.service 
    [root@master ~]# systemctl disable firewalld.service 
    Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
    Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    [root@master ~]# setenforce 0	# disable temporarily
    [root@master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config	# disable permanently
    [root@master ~]# systemctl stop postfix  # stop this service too if it is present
    Failed to stop postfix.service: Unit postfix.service not loaded.
    
  2. Add every host to /etc/hosts. (master/node1/node2)

    [root@node1 ~]# cat /etc/hosts 
    192.168.100.10 master
    192.168.100.11 node1
    192.168.100.12 node2
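
    With the entries in place, a quick loop from any node confirms name resolution (a minimal check, not part of the original):

    [root@master ~]# for h in master node1 node2; do ping -c 1 $h > /dev/null && echo "$h ok"; done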
    
  3. Generate an SSH key pair on master

    [root@master ~]# ssh-keygen 
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa): 
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:wQm/nP6I2M6Z84DeBtrMp0gJ3ZffEaeFnv+snkFwwnE root@master
    The key's randomart image is:
    +---[RSA 3072]----+
    |      .   . E    |
    |       + o +     |
    |        = * +    |
    | . .   o = X     |
    |. . . o S = .    |
    | . ..o o . +     |
    |  o=... o . o    |
    | ..o+=+= o   =   |
    |  . +=Ooo ..+.o  |
    +----[SHA256]-----+
    
    
  4. Copy the public key to each node

    [root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@node1's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@node1'"
    and check to make sure that only the key(s) you wanted were added.
    
    [root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@node2's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@node2'"
    and check to make sure that only the key(s) you wanted were added.
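
    Passwordless login can then be verified in one pass (an added check):

    [root@master ~]# for h in node1 node2; do ssh root@$h hostname; done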
    
    
  5. Configure time synchronization

    # Make master the NTP server
    [root@master ~]# vim /etc/chrony.conf 
    local stratum 10
    allow 192.168.100.0/24    # chronyd denies NTP clients unless an allow directive is present
    [root@master ~]# systemctl restart chronyd.service 
    [root@master ~]# systemctl enable chronyd
    [root@master ~]# hwclock -w
    
    # Point the other nodes at master for time sync
    [root@node1 ~]# vim /etc/chrony.conf
    #pool 2.centos.pool.ntp.org iburst
    server master  iburst
    [root@node1 ~]# systemctl restart chronyd.service 
    [root@node2 ~]# vim /etc/chrony.conf
    #pool 2.centos.pool.ntp.org iburst
    server master  iburst
    [root@node2 ~]# systemctl restart chronyd.service 
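
    Whether the nodes are actually syncing from master can be checked with chronyc (an added verification step):

    [root@node1 ~]# chronyc sources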
    
  6. Disable the swap partition (master/node1/node2)

    [root@master ~]# vim /etc/fstab  # comment out the swap entry
    #/dev/mapper/cs-swap     none                    swap    defaults        0 0
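
    The fstab edit only takes effect on the next boot; to turn swap off immediately and confirm, something like the following works (added here, not in the original):

    [root@master ~]# swapoff -a
    [root@master ~]# free -h | grep -i swap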
    
  7. Enable IP forwarding and adjust kernel parameters (master/node1/node2)

    [root@master ~]# vim /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    [root@master ~]# modprobe br_netfilter
    [root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
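
    Note that br_netfilter is not reloaded automatically after a reboot; one way to persist it (an assumption, using the standard systemd modules-load.d mechanism) is:

    [root@master ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf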
    
  8. Enable IPVS support (master/node1/node2)

    [root@node2 ~]# vim /etc/sysconfig/modules/ipvs.modules 
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    [root@node2 ~]# bash /etc/sysconfig/modules/ipvs.modules
    [root@node2 ~]# lsmod | grep -e ip_vs
    ip_vs_sh               16384  0
    ip_vs_wrr              16384  0
    ip_vs_rr               16384  0
    ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack          172032  3 nf_nat,nft_ct,ip_vs
    nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
    libcrc32c              16384  5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
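
    To inspect the IPVS tables later on, the ipvsadm utility is convenient (an optional addition, not part of the original steps):

    [root@node2 ~]# dnf -y install ipvsadm
    [root@node2 ~]# ipvsadm -Ln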
    
  9. Install Docker (master/node1/node2)

    # Make sure the network repo mirrors are reachable before installing
    [root@master yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
    [root@master yum.repos.d]# dnf -y install epel-release
    [root@master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    # Install Docker
    [root@master ~]#  dnf -y install docker-ce --allowerasing
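
    Before moving on, it is worth enabling Docker at boot and confirming the installed version (added for completeness):

    [root@master ~]# systemctl enable --now docker
    [root@master ~]# docker --version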
    
  10. Configure a Docker registry mirror (master/node1/node2)

    # Start and then stop the docker service once before writing the config file
    [root@master ~]# systemctl restart docker.service 
    [root@master ~]# systemctl stop docker.service 
    Warning: Stopping docker.service, but it can still be activated by:
      docker.socket
    [root@master ~]# cat > /etc/docker/daemon.json << EOF
    {
      "registry-mirrors": ["https://rpnfe8c5.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
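
    For the new daemon.json to take effect, docker has to be restarted; the mirror then shows up in the daemon info (an added check):

    [root@master ~]# systemctl restart docker
    [root@master ~]# docker info | grep -A 1 -i "registry mirrors"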
    
  11. Add the Kubernetes yum repository (master/node1/node2)

    [root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
  12. Install the kubeadm, kubelet and kubectl tools (master/node1/node2)

    [root@master ~]# dnf -y install kubeadm kubelet kubectl
    [root@master ~]# systemctl restart kubelet
    [root@master ~]# systemctl enable kubelet
    Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
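
    A quick sanity check that the tooling landed (the -o short flag is standard kubeadm):

    [root@master ~]# kubeadm version -o short
    v1.25.4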
    
  13. Configure containerd (master/node1/node2)

    # So that cluster init and join succeed later, adjust containerd's config file /etc/containerd/config.toml. Do this on every node.
    [root@master ~]# containerd config default > /etc/containerd/config.toml
    [root@node1 ~]# vim /etc/containerd/config.toml
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
    [root@node1 ~]# systemctl restart containerd
    [root@node1 ~]# systemctl enable containerd
    Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
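
    Whether the sandbox image edit took can be confirmed directly (an added check):

    [root@node1 ~]# grep sandbox_image /etc/containerd/config.toml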
    
  14. Initialize the Kubernetes master node

    [root@master ~]# kubeadm init \
    --apiserver-advertise-address=192.168.100.10 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.25.4 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16
    
    # It is a good idea to save the init output to a file
    [root@master ~]# vim k8s
    To start using your cluster, you need to run the following as a regular user:
    
    	mkdir -p $HOME/.kube
    	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    	sudo chown $(id -u):$(id -g) $HOME/.kube/config
    	
    Alternatively, if you are the root user, you can run:
    
    	export KUBECONFIG=/etc/kubernetes/admin.conf
    	
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    	https://kubernetes.io/docs/concepts/cluster-administration/addons/
    	
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.100.10:6443 --token eav8jn.zj2muv0thd7e8dad \
    	--discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09
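
    Before kubectl can reach the new cluster, apply the admin kubeconfig from the output above; and if the join command is ever lost, kubeadm can regenerate it (the token create command is standard kubeadm, not shown in the original):

    [root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
    [root@master ~]# kubeadm token create --print-join-command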
    
  15. Install a pod network add-on

    [root@master ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    [root@master ~]# kubectl apply -f kube-flannel.yml
    namespace/kube-flannel created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds created
    [root@master ~]# kubectl get nodes
    NAME     STATUS     ROLES           AGE     VERSION
    master   NotReady   control-plane   6m41s   v1.25.4
    [root@master ~]# kubectl get nodes
    NAME     STATUS   ROLES           AGE     VERSION
    master   Ready    control-plane   7m10s   v1.25.4
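
    The flannel DaemonSet itself can be watched while it rolls out (an added check; the kube-flannel namespace is created by the manifest above):

    [root@master ~]# kubectl get pods -n kube-flannel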
    
  16. Join the worker nodes to the cluster

    [root@node1 ~]# kubeadm join 192.168.100.10:6443 --token eav8jn.zj2muv0thd7e8dad \
    	--discovery-token-ca-cert-hash sha256:dskxy6sa5bwi786c5a09cad5v6b56gvubtdfst554asd4fdd8b0c0645154c79ed
    
    [root@node2 ~]# kubeadm join 192.168.100.10:6443 --token eav8jn.zj2muv0thd7e8dad \
    	--discovery-token-ca-cert-hash sha256:dskxy6sa5bwi786c5a09cad5v6b56gvubtdfst554asd4fdd8b0c0645154c79ed
    
  17. Check node status with kubectl get nodes

    [root@master ~]# kubectl get nodes
    NAME     STATUS     ROLES           AGE     VERSION
    master   Ready      control-plane   9m37s   v1.25.4
    node1    NotReady   <none>          51s     v1.25.4
    node2    NotReady   <none>          31s     v1.25.4
    [root@master ~]# kubectl get nodes
    NAME     STATUS   ROLES           AGE     VERSION
    master   Ready    control-plane   9m57s   v1.25.4
    node1    Ready    <none>          71s     v1.25.4
    node2    Ready    <none>          51s     v1.25.4
    
  18. Create a pod running an nginx container on the cluster, then test it

    [root@master ~]# kubectl create deployment nginx --image nginx
    deployment.apps/nginx created
    [root@master ~]# kubectl get pods
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-76d6c9b8c-z7p4l   1/1     Running   0          35s
    [root@master ~]# kubectl expose deployment nginx --port 80 --type NodePort
    service/nginx exposed
    [root@master ~]# kubectl get pods -o wide
    NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
    nginx-76d6c9b8c-z7p4l   1/1     Running   0          119s   10.244.1.2   node1   <none>           <none>
    [root@master ~]# kubectl get services
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        15m
    nginx        NodePort    10.109.37.202   <none>        80:31125/TCP   17s
    
  19. Test access

    Browse to any node's IP on the NodePort from the service listing above, e.g. http://192.168.100.10:31125; the default nginx welcome page should come back.
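
    The same test can be scripted with curl against any node IP and the NodePort shown in the service listing (a minimal sketch):

    [root@master ~]# curl http://192.168.100.10:31125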

  20. Change the default page

    [root@master ~]# kubectl exec -it pod/nginx-76d6c9b8c-z7p4l -- /bin/bash
    root@nginx-76d6c9b8c-z7p4l:/# cd /usr/share/nginx/html/
    root@nginx-76d6c9b8c-z7p4l:/usr/share/nginx/html# echo "liu" > index.html
    


From: https://www.cnblogs.com/Archer-x/p/16901942.html
