
Single-Node K8S Deployment


Building a Single-Node Cluster with Kubeadm

Take a VM snapshot before installing, so the installation can be practiced repeatedly.

Virtual machine specs

OS: Ubuntu 20.04.6 LTS
Kernel: 5.15.0-67-generic
Disk: 60G
Memory: 12G
CPU: 4C4U

This installation follows this reference blog:

https://glory.blog.csdn.net/article/details/120606787

Pre-installation preparation

1. Disable the firewall

systemctl status ufw
systemctl stop ufw
systemctl disable ufw

2. Disable the swap partition

Edit /etc/fstab and comment out the /swapfile line:

cat /etc/fstab 
.......
#/swapfile                                 none            swap    sw              0       0
.......
After a reboot, check with free -h; Swap should show 0B:
free -h
              total        used        free      shared  buff/cache   available
Mem:           11Gi       660Mi       8.4Gi       7.0Mi       2.6Gi        10Gi
Swap:            0B          0B          0B
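
If you would rather not reboot, swap can also be turned off immediately; a small sketch, assuming the swap entry is the /swapfile line shown above:

# turn swap off for the running system
sudo swapoff -a
# comment out the /swapfile entry so it stays off after the next boot
sudo sed -i 's|^/swapfile|#/swapfile|' /etc/fstab
# confirm: Swap should show 0B
free -h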

I. Install and configure Docker

Recent K8S releases no longer integrate Docker natively (the dockershim layer has been removed), so newer clusters generally use podman or containerd as the underlying container runtime. However, most tutorials still pair K8S with Docker, so for learning purposes this guide sticks with the K8S + Docker combination.

The Docker installation steps are exactly the same as in the reference blog and are simply reproduced here.

1. Remove any existing Docker components

sudo apt-get autoremove docker docker-ce docker-engine  docker.io  containerd runc

2. Update apt-get

sudo apt-get update

3. Install the apt dependencies needed to fetch repositories over HTTPS

sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

4. Add Docker's official GPG key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

5. Set up the stable repository (added to /etc/apt/sources.list)

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

6. Update apt-get again

sudo apt-get update

7. List available docker-ce versions

sudo apt-cache policy docker-ce

8. Install a specific version

sudo apt-get install  docker-ce=5:20.10.7~3-0~ubuntu-focal

9. Verify the installation

docker --version

Update the cgroup driver to systemd

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://uy35zvn6.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload
systemctl restart docker
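
To confirm Docker has picked up the systemd cgroup driver, a quick check:

docker info | grep -i "cgroup driver"
# expected: Cgroup Driver: systemd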

II. Install Kubernetes

1. iptables configuration


cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system

bridge-nf

bridge-nf lets netfilter filter IPv4/ARP/IPv6 packets that traverse a Linux bridge. For example, with net.bridge.bridge-nf-call-iptables=1 set, packets forwarded by a layer-2 bridge are also filtered by the iptables FORWARD rules. Commonly used options include:

net.bridge.bridge-nf-call-arptables: whether to filter the bridge's ARP packets in the arptables FORWARD chain

net.bridge.bridge-nf-call-ip6tables: whether to filter IPv6 packets in the ip6tables chains

net.bridge.bridge-nf-call-iptables: whether to filter IPv4 packets in the iptables chains

net.bridge.bridge-nf-filter-vlan-tagged: whether to filter VLAN-tagged packets in iptables/arptables

A firewall is an important tool for protecting servers and infrastructure. In the Linux ecosystem, iptables is one of the most widely used firewall tools, and it is built on the kernel's packet filtering framework, netfilter.

iptables does its work by interacting with the packet-filtering hooks inside the protocol stack; these kernel hooks make up the netfilter framework.

Every packet that enters the networking stack (received or sent) triggers these hooks as it passes through, and programs can process traffic at key points by registering hook functions there. The iptables kernel modules register handlers at these hook points, which is why traffic can be made to conform to firewall rules simply by configuring iptables rules.
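
To confirm the module is loaded and the sysctls have taken effect, something like the following can be used:

# load the module now (the k8s.conf entry only applies it at boot)
sudo modprobe br_netfilter
lsmod | grep br_netfilter
# both values should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables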

2. Update apt packages

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

3. Add the GPG key

sudo curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

4. Add the Kubernetes apt repository and install kubelet, kubeadm and kubectl

sudo tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF


sudo apt-get update
sudo apt-get install -y kubelet=1.22.2-00 kubeadm=1.22.2-00 kubectl=1.22.2-00 
sudo apt-mark hold kubelet kubeadm kubectl           # hold these three packages so they are not auto-upgraded

Pinned version:  apt-get install -y kubelet=1.22.2-00 kubeadm=1.22.2-00 kubectl=1.22.2-00
Latest version:  apt-get install -y kubelet kubeadm kubectl
Note:
apt-mark usage
apt-mark [options] {auto|manual} pkg1 [pkg2 …]
Common apt-mark commands
auto – mark the given packages as automatically installed
manual – mark the given packages as manually installed
minimize-manual – mark all dependencies of meta packages as automatically installed
hold – mark the given packages as held back, preventing automatic upgrades
unhold – remove the held-back mark so the packages can be upgraded again
showauto – list all automatically installed packages
showmanual – list all manually installed packages
showhold – list all packages marked as held
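
For example, after the hold above, the pinned components can be listed:

sudo apt-mark showhold
# kubeadm
# kubectl
# kubelet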

5. Initialize the cluster with kubeadm init

kubeadm init \
 --image-repository registry.aliyuncs.com/google_containers \
 --kubernetes-version v1.22.2 \
 --pod-network-cidr=172.10.0.0/16 \
 --apiserver-advertise-address=192.168.80.10

--apiserver-advertise-address: the address the API server on the control-plane node advertises to the rest of the cluster
--pod-network-cidr: the pod network CIDR of the cluster
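
As the init output below also notes, the required images can be pulled ahead of time; a sketch using the same mirror and version as the init command above:

kubeadm config images pull \
 --image-repository registry.aliyuncs.com/google_containers \
 --kubernetes-version v1.22.2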

The output will look something like the following; keep it, because it is needed later when adding worker nodes:
root@K8s-master:/etc/docker# kubeadm init \
>  --image-repository registry.aliyuncs.com/google_containers \
>  --kubernetes-version v1.22.2 \
>  --pod-network-cidr=172.10.0.0/16 \
>  --apiserver-advertise-address=192.168.80.10
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.80.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.80.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.80.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.505395 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5d6yqs.3jmn7ih0ha4wj8pt
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.80.10:6443 --token 5d6yqs.3jmn7ih0ha4wj8pt \
	--discovery-token-ca-cert-hash sha256:4e9cae977b6d2b33b1b5a787e116ad396b064c0ce4038d126d870c5d92a4cf22 

6. Copy the kubeconfig file

# commands from the kubeadm init output above
 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
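
With the kubeconfig in place, kubectl can reach the cluster; the output will look roughly like this (NotReady is expected until the CNI plugin is installed in step 8):

kubectl get nodes
# NAME         STATUS     ROLES                  AGE   VERSION
# k8s-master   NotReady   control-plane,master   1m    v1.22.2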

7. Remove the taint from the master node

By default the master node is not allowed to schedule any pods, so for a single-node cluster the master taint has to be removed first:
kubectl taint nodes --all node-role.kubernetes.io/master-
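
A quick check that the taint is gone:

kubectl describe node k8s-master | grep Taints
# expected: Taints:             <none>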

8. Install the Calico CNI plugin

kubectl create -f https://projectcalico.docs.tigera.io/archive/v3.23/manifests/tigera-operator.yaml
kubectl create -f https://projectcalico.docs.tigera.io/archive/v3.23/manifests/custom-resources.yaml
If the --pod-network-cidr used at init time is not 192.168.0.0/16, first download the manifest:
wget https://projectcalico.docs.tigera.io/archive/v3.23/manifests/custom-resources.yaml
 
and change the cidr in custom-resources.yaml to your own --pod-network-cidr:
---
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 172.10.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

Apply the change with kubectl apply -f custom-resources.yaml
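
Calico takes a little while to come up; the pods can be watched with, for example:

watch kubectl get pods -n calico-system
# wait until calico-node and calico-kube-controllers are all Running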

9. Verify the cluster status

root@K8s-master:/etc/docker# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Healthy     ok                                                                                           
etcd-0               Healthy     {"health":"true","reason":""} 

The scheduler reports this error because kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/ set the default port to 0; the fix is to comment out the corresponding port argument, as follows:

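A minimal sketch of the edit, assuming the manifests contain a literal "- --port=0" argument (as they do in v1.22):

cd /etc/kubernetes/manifests
# comment out the --port=0 argument in both static Pod manifests
sed -i 's/- --port=0/# - --port=0/' kube-scheduler.yaml
sed -i 's/- --port=0/# - --port=0/' kube-controller-manager.yaml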

Then restart kubelet on the master node (systemctl restart kubelet.service) and query the status again; it should now be healthy.
What was port=0 for? It disables the component's insecure (non-HTTPS) port.

root@K8s-master:/etc/kubernetes/manifests# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}  

10. Resetting Kubernetes

If cluster initialization runs into problems, the following commands can be used to clean everything up:

# 1. Reset the kubeadm-installed state
 
kubeadm reset
 
# 2. Remove related containers and images
 
docker rm $(docker  ps -aq) -f
docker rmi $(docker images -aq) -f
 
# 3. Remove files left over from the previous cluster
 
rm -rf  /var/lib/etcd
rm -rf  /etc/kubernetes
rm -rf $HOME/.kube
rm -rf /var/etcd
rm -rf /var/lib/kubelet/
rm -rf /run/kubernetes/
rm -rf ~/.kube/
 
# 4. Clean up networking
 
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/*
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/*
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
 
# 5. Uninstall the tools
 
apt autoremove -y kubelet kubectl kubeadm kubernetes-cni
# Unmount everything under /var/lib/kubelet/ before deleting the directory
 
for m in $(sudo tac /proc/mounts | sudo awk '{print $2}'|sudo grep /var/lib/kubelet);do
 
sudo umount $m||true
 
done
 
# 6. Remove all Docker data volumes
 
sudo docker volume rm $(sudo docker volume ls -q)
 
# 7. List all containers and volumes again to confirm nothing is left
 
sudo docker ps -a
 
sudo docker volume ls

Testing Kubernetes

Deploy a Deployment

kubectl apply -f https://k8s.io/examples/application/deployment.yaml
The manifest at that URL contains:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
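
Check that the Deployment's two replicas come up:

kubectl get deployment nginx-deployment
# NAME               READY   UP-TO-DATE   AVAILABLE   AGE
# nginx-deployment   2/2     2            2           30s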

Deploy a NodePort Service

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
EOF

Verify

root@K8s-master:/etc/kubernetes/manifests# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-66b6c48dd5-vb9gt   1/1     Running   0          47s
nginx-deployment-66b6c48dd5-wflwh   1/1     Running   0          47s

root@K8s-master:/etc/kubernetes/manifests# kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        62m
my-nginx     NodePort    10.111.23.11   <none>        80:32235/TCP   107s

root@K8s-master:/etc/kubernetes/manifests# curl 192.168.80.10:32235     # host IP + exposed NodePort
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

III. Joining K8S worker nodes

3.1 Repeat all master-node steps before Section II step 5 on the worker node

On the worker node, repeat every step that was performed on the master up to (but not including) step 5 of Section II (kubeadm init).

At step 5, instead of initializing, join the master's cluster directly:

kubeadm join 192.168.80.10:6443 --token 5d6yqs.3jmn7ih0ha4wj8pt \
	--discovery-token-ca-cert-hash sha256:4e9cae977b6d2b33b1b5a787e116ad396b064c0ce4038d126d870c5d92a4cf22 
	
If the join hangs at [preflight] Running pre-flight checks, the token has expired and a new one must be generated on the master:

root@K8s-master:~# kubeadm token create
o1f5gq.1ev981a3s71iepvn

Then run the join command on the worker node with the new token:

kubeadm join 192.168.80.10:6443 --token o1f5gq.1ev981a3s71iepvn \
	--discovery-token-ca-cert-hash sha256:4e9cae977b6d2b33b1b5a787e116ad396b064c0ce4038d126d870c5d92a4cf22 
	
Join tokens expire after 24 hours. Once expired, run kubeadm token create --print-join-command on the master to get a fresh, complete join command (token plus CA cert hash).
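
For example, on the master:

kubeadm token create --print-join-command
# prints a ready-to-run "kubeadm join 192.168.80.10:6443 --token ... --discovery-token-ca-cert-hash sha256:..." line
kubeadm token list
# lists existing tokens along with their TTL / expiry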

3.2 Copy the master's kubeconfig to the worker node

If the worker node has no kubeconfig, any kubectl command will fail with:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

so the file needs to be copied over from the master node:

 mkdir -p $HOME/.kube
 sudo scp root@192.168.80.10:/etc/kubernetes/admin.conf $HOME/.kube/config    # copy from the master (192.168.80.10)
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

3.3 Network plugin

After a node joins the cluster via 3.1, there is no need to install the Calico plugin on it by hand: Calico is managed by a DaemonSet, so its pods are created automatically on newly joined nodes.

You can see the newly joined node (the .25 host) initializing its calico containers.

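To watch the new node's calico pod come up (the namespace matches the operator-based install from Section II step 8):

kubectl get pods -n calico-system -o wide
# a calico-node-xxxxx pod scheduled on the newly joined node should move to Running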


    目录1.安装部署1.1单机部署1.1.1下载安装1.1.2配置文件1.1.3zkserver状态管理1.1.4使用zk客户端登录服务器1.1.5使用PrettyZoo连接zk1.2集群部署1.2.1环境准备1.2.2配置修改1.2.3设置myid1.2.4启动集群1.2.5测试集群2.常用命令2.1分类2.2功能脚本2.2.1zkServer......