2. Installing Kubernetes
1. System initialization
- Initialization
- Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
- Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
setenforce 0                                         # temporary
- Disable swap:
swapoff -a        # temporary
vim /etc/fstab    # permanent: comment out the swap line
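If you would rather not edit /etc/fstab by hand, a minimal sketch (assuming the default CentOS fstab layout) is to comment out the swap entry with sed:
# Comment out every fstab line that mounts swap so it stays off after a reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab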
- Hostname:
The hostname must not contain special characters:
https://blog.csdn.net/qq_44895681/article/details/119947302
hostnamectl set-hostname <hostname>
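For example (the hostnames below are only illustrative; any names without underscores or other special characters will do):
hostnamectl set-hostname master   # on 192.168.10.100
hostnamectl set-hostname node01   # on 192.168.10.101
hostnamectl set-hostname node02   # on 192.168.10.102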
- Add hosts entries on the master:
cat >> /etc/hosts << EOF
192.168.10.100 kubernetes-master
192.168.10.101 kubernetes-note01
192.168.10.102 kubernetes-note02
EOF
- Pass bridged IPv4 traffic to the iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # apply the settings
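These sysctls assume the br_netfilter kernel module is loaded; if sysctl --system complains about the keys being missing, a quick check (not part of the original steps) is:
modprobe br_netfilter
lsmod | grep br_netfilter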
- Time synchronization:
yum install ntpdate -y
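Installing ntpdate by itself does not sync the clock; a minimal follow-up (the NTP server here is only an example) is:
# One-off sync against a public NTP server; any reachable server works
ntpdate ntp.aliyun.com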
-
2. Install Docker
3. Deploy Kubernetes with kubeadm
-
0. Install kubeadm, kubelet, and kubectl on all nodes
-
Switch the yum repository to the Aliyun mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
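Optionally confirm the new repository is visible (a quick sanity check, not in the original notes):
yum repolist | grep kubernetes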
-
Install a specific version of Kubernetes (not every Kubernetes version matches every Docker version).
Note: the version installed here is reused later in kubeadm init; keep the versions consistent.
yum -y install kubeadm-1.17.4 kubectl-1.17.4 kubelet-1.17.4
systemctl enable kubelet
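To confirm the pinned version actually landed on each node, something like the following can be run (a sanity check, not required by the procedure):
kubeadm version -o short   # should print v1.17.4
kubelet --version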
-
-
1. Deploy the Kubernetes master
Run on the master node.
-
Check the Kubernetes version and adjust the initialization command below accordingly:
[root@master home]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
-
Prepare the cluster images
- Check which image versions are required
[root@master home]# kubeadm config images list
I0131 11:34:52.187788   17583 version.go:251] remote version is much newer: v1.26.1; falling back to: stable-1.17
W0131 11:34:52.997658   17583 validation.go:28] Cannot validate kubelet config - no validator is available
W0131 11:34:52.997693   17583 validation.go:28] Cannot validate kube-proxy config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.17
k8s.gcr.io/kube-controller-manager:v1.17.17
k8s.gcr.io/kube-scheduler:v1.17.17
k8s.gcr.io/kube-proxy:v1.17.17
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
- Define the list of required images in a shell variable
[root@mini1 ~]# images=(
    kube-apiserver:v1.17.4
    kube-controller-manager:v1.17.4
    kube-scheduler:v1.17.4
    kube-proxy:v1.17.4
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5
)
- Pull the images to the local machine
[root@mini1 ~]# for imageName in ${images[@]}; do
    docker pull registry.aliyuncs.com/google_containers/$imageName
    docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.aliyuncs.com/google_containers/$imageName
done
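A quick way to confirm the pull-and-retag loop worked (purely a sanity check) is:
# All seven images should now be listed under the k8s.gcr.io name
docker images | grep k8s.gcr.io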
-
Adjust the master initialization command:
- Initialize
# --kubernetes-version: the Kubernetes version installed above
# --apiserver-advertise-address: this machine's IP for reaching the API server; anything that does not conflict locally
# --image-repository: the default registry k8s.gcr.io is unreachable from inside China, so the Aliyun mirror is used instead
kubeadm init \
  --apiserver-advertise-address=192.168.10.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.4 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
- Re-initialize
# Reset first, then run init again with the same options
kubeadm reset
kubeadm init \
  --apiserver-advertise-address=192.168.10.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.4 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
-
Problems during initialization:
https://cloud.tencent.com/developer/article/2039072
-
Set up the kubectl tool:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
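At this point kubectl should be able to talk to the cluster; a first smoke test (the master will show NotReady until the network plugin from step 3 is installed) is:
kubectl get nodes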
-
-
2. Join the Kubernetes nodes
-
Run on the nodes (192.168.10.101/102).
To add a new node to the cluster, run the kubeadm join command printed by kubeadm init:
kubeadm join 192.168.3.100:6443 --token i6802l.umkyymnqind3g1hi \
    --discovery-token-ca-cert-hash sha256:e09e9841c5605667815420717c87143887cfe7db07964cd40bcc516c22e0f0a6
-
The default token is valid for 24 hours. Once it expires it can no longer be used, and a new token has to be created as follows:
kubeadm token create --print-join-command
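Before creating a new token it can be worth checking whether the old one has really expired (not in the original notes):
kubeadm token list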
-
-
3. Install the Pod network plugin (CNI) on the master
- Fetch the yaml file
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml --no-check-certificate
- If the file cannot be downloaded, copy the content below into the file by hand (no changes needed):
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.20.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.20.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
- If the default image registry is unreachable, use the sed command to switch the images to the Docker Hub registry, then apply the manifest.
kubectl apply -f ./kube-flannel.yml
- Check the node status
[root@master home]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   28m   v1.17.4
note01   Ready    <none>   26m   v1.17.4
note02   Ready    <none>   26m   v1.17.4
- Check the pods running in the system namespace
[root@master home]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-j8r49          1/1     Running   0          29m
coredns-9d85f5447-jmzlz          1/1     Running   0          29m
etcd-master                      1/1     Running   0          29m
kube-apiserver-master            1/1     Running   0          29m
kube-controller-manager-master   1/1     Running   0          29m
kube-proxy-4fgl5                 1/1     Running   0          27m
kube-proxy-mgsbf                 1/1     Running   0          28m
kube-proxy-tvbjc                 1/1     Running   0          29m
kube-scheduler-master            1/1     Running   0          29m
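Since this flannel manifest deploys into its own kube-flannel namespace, those pods can be checked as well (a sanity check, not from the original notes):
# One kube-flannel-ds pod should be Running per node
kubectl get pods -n kube-flannel -o wide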
-
4. Test the Kubernetes cluster
-
Create a pod in the Kubernetes cluster and verify that it runs correctly:
-
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
Problem: if the creation does not succeed, see the reference link.
-
Access URL: http://NodeIP:Port (the IP of any node, plus the port printed by the command above).
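For example (the NodePort below is a placeholder; use whatever the command actually printed):
# Find the NodePort assigned to the nginx service (the 3xxxx part of 80:3xxxx/TCP)
kubectl get svc nginx
# Then open http://<any-node-ip>:<node-port> in a browser, or:
curl http://192.168.10.101:<node-port>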
-
-
5. Install the monitoring UI
-
Uninstall Rancher
#!/bin/bash
# Uninstall Rancher 2.x
KUBE_SVC='
kubelet
kube-scheduler
kube-proxy
kube-controller-manager
kube-apiserver
'
for kube_svc in ${KUBE_SVC}; do
  # Stop the service
  if [[ `systemctl is-active ${kube_svc}` == 'active' ]]; then
    systemctl stop ${kube_svc}
  fi
  # Disable the service at boot
  if [[ `systemctl is-enabled ${kube_svc}` == 'enabled' ]]; then
    systemctl disable ${kube_svc}
  fi
done

# Stop all containers
docker stop $(docker ps -aq)
# Remove all containers
docker rm -f $(docker ps -qa)
# Remove all container volumes
docker volume rm $(docker volume ls -q)

# Unmount kubelet/rancher mount points
for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher; do
  umount $mount
done

# Back up directories
mv /etc/kubernetes /etc/kubernetes-bak-$(date +"%Y%m%d%H%M")
mv /var/lib/etcd /var/lib/etcd-bak-$(date +"%Y%m%d%H%M")
mv /var/lib/rancher /var/lib/rancher-bak-$(date +"%Y%m%d%H%M")
mv /opt/rke /opt/rke-bak-$(date +"%Y%m%d%H%M")

# Remove leftover paths
rm -rf /etc/ceph \
       /etc/cni \
       /opt/cni \
       /run/secrets/kubernetes.io \
       /run/calico \
       /run/flannel \
       /var/lib/calico \
       /var/lib/cni \
       /var/lib/kubelet \
       /var/log/containers \
       /var/log/kube-audit \
       /var/log/pods \
       /var/run/calico

# Clean up network interfaces
no_del_net_inter='
lo
docker0
eth
ens
bond
'
network_interface=`ls /sys/class/net`
for net_inter in $network_interface; do
  if ! echo "${no_del_net_inter}" | grep -qE ${net_inter:0:3}; then
    ip link delete $net_inter
  fi
done

# Clean up leftover processes
port_list='
80
443
6443
2376
2379
2380
8472
9099
10250
10254
'
for port in $port_list; do
  pid=`netstat -atlnup | grep $port | awk '{print $7}' | awk -F '/' '{print $1}' | grep -v - | sort -rnk2 | uniq`
  if [[ -n $pid ]]; then
    kill -9 $pid
  fi
done

kube_pid=`ps -ef | grep -v grep | grep kube | awk '{print $2}'`
if [[ -n $kube_pid ]]; then
  kill -9 $kube_pid
fi

# Flush iptables
## Note: if the node has custom iptables rules, run the following with care
#sudo iptables --flush
#sudo iptables --flush --table nat
#sudo iptables --flush --table filter
#sudo iptables --table nat --delete-chain
#sudo iptables --table filter --delete-chain

systemctl restart docker
-
Install Rancher
https://blog.csdn.net/qq_37481017/article/details/118999716
-
-
6. Completely uninstall Kubernetes
Script
#!/bin/bash
yum remove -y kube*
kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
yum clean all
yum -y remove kube*