
Deploying Kubernetes 1.30 with Ansible

Posted: 2024-05-31

  • The cluster nodes run Ubuntu 24.04; the Ansible control node runs Rocky Linux 9.2.

1. Planning

| Node      | Spec    | Address     | Domain name              | Notes |
|-----------|---------|-------------|--------------------------|-------|
| master-01 | 2C, 2G  | 10.10.50.11 | k8s-master01.example.com |       |
| node-01   | 2C, 10G | 10.10.50.14 | k8s-node01.example.com   |       |
| node-02   | 2C, 10G | 10.10.50.15 | k8s-node02.example.com   |       |
| node-03   | 2C, 10G | 10.10.50.16 | k8s-node03.example.com   |       |

2. Passwordless SSH

$ ssh-keygen -t ed25519
$ ssh-copy-id [email protected]
$ ssh-copy-id [email protected]
$ ssh-copy-id [email protected]
$ ssh-copy-id [email protected]

# run on each cluster node: allow root login with keys only, disable password auth
$ sudo sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config \
&& sudo sed -i 's/#PubkeyAuthentication yes/PubkeyAuthentication yes/g' /etc/ssh/sshd_config \
&& sudo sed -i 's/#PasswordAuthentication yes/PasswordAuthentication no/g' /etc/ssh/sshd_config \
&& sudo rm -f /etc/ssh/sshd_config.d/50-cloud-init.conf \
&& sudo systemctl restart sshd
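The three `sed` substitutions above can be sanity-checked against a scratch copy before touching the real `sshd_config`; a minimal sketch (the sample lines mirror Ubuntu's defaults):

```shell
# Work on a throwaway copy so the real /etc/ssh/sshd_config is never touched.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
#PermitRootLogin prohibit-password
#PubkeyAuthentication yes
#PasswordAuthentication yes
EOF

# Same substitutions as above, applied to the copy.
sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' "$cfg"
sed -i 's/#PubkeyAuthentication yes/PubkeyAuthentication yes/g' "$cfg"
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication no/g' "$cfg"

cat "$cfg"
```

Only disable `PasswordAuthentication` after confirming that key-based login works, or you can lock yourself out.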

3. Install and configure Ansible

# the commands below are run on the Rocky Linux 9.2 control node
# step1: install ansible
$ yum -y install ansible-core.x86_64

# step2: write the Ansible configuration file
$ vim /etc/ansible/ansible.cfg 
[defaults]
inventory=/etc/ansible/hosts
roles_path=/etc/ansible/roles
host_key_checking=False

[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False

# step3: inventory (key-based auth); FQDNs are preferable, since deploying
# another cluster later then only requires changing the name-resolution records.
# The names below must resolve on the control node (e.g. via its /etc/hosts).
$ vim /etc/ansible/hosts
[master]
k8s-master01

[node]
k8s-node01
k8s-node02
k8s-node03

# step4: verify connectivity
$ ansible all -m ping

4. System configuration

# step1: set hostnames (the target patterns must match the inventory names)
$ ansible k8s-master01 -m hostname -a "name=k8s-master01.example.com"
$ ansible k8s-node01 -m hostname -a "name=k8s-node01.example.com"
$ ansible k8s-node02 -m hostname -a "name=k8s-node02.example.com"
$ ansible k8s-node03 -m hostname -a "name=k8s-node03.example.com"

# step2: hostname resolution on every node
$ ansible all -m shell -a "cat <<eof>> /etc/hosts
10.10.50.11 k8s-master01.example.com k8s-master01
10.10.50.14 k8s-node01.example.com k8s-node01
10.10.50.15 k8s-node02.example.com k8s-node02
10.10.50.16 k8s-node03.example.com k8s-node03
eof"
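Note that `cat <<eof>> /etc/hosts` appends on every run, so repeating the command duplicates the entries. A grep guard keeps the step idempotent; sketched here against a temporary file rather than the live `/etc/hosts`:

```shell
hosts=$(mktemp)        # stand-in for /etc/hosts
add_entry() {
    # append only if the address is not already present
    grep -q "^$1 " "$hosts" || echo "$1 $2" >> "$hosts"
}
add_entry 10.10.50.11 "k8s-master01.example.com k8s-master01"
add_entry 10.10.50.11 "k8s-master01.example.com k8s-master01"   # second run is a no-op
cat "$hosts"
```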

# step3: point apt at the Aliyun mirror
$ ansible all -m shell -a 'cat <<eof> /etc/apt/sources.list
deb https://mirrors.aliyun.com/ubuntu/ noble main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ noble main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ noble-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ noble-security main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ noble-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ noble-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ noble-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ noble-backports main restricted universe multiverse
eof'
$ ansible all -m shell -a 'apt clean && apt update'
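One caveat, stated as an assumption about stock Ubuntu 24.04: its default apt sources live in deb822 format at `/etc/apt/sources.list.d/ubuntu.sources`, so after writing `/etc/apt/sources.list` you may want to disable that file to avoid duplicate package lists. Sketched against a scratch directory rather than the live system:

```shell
aptdir=$(mktemp -d)                            # stand-in for /etc/apt
mkdir -p "$aptdir/sources.list.d"
touch "$aptdir/sources.list.d/ubuntu.sources"  # the stock deb822 file

# rename instead of deleting, so the default can be restored later
mv "$aptdir/sources.list.d/ubuntu.sources" \
   "$aptdir/sources.list.d/ubuntu.sources.disabled"
ls "$aptdir/sources.list.d"
```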

5. NTP configuration

$ vim chrony-install.yaml
---
- name: Install and configure Chrony
  hosts: all
  tasks:
    - name: Install Chrony
      apt:
        name: chrony
        state: present
      when: ansible_os_family == "Debian"
    - name: Install Chrony
      yum:
        name: chrony
        state: present
      when: ansible_os_family == "RedHat"
    - name: Configure Chrony NTP servers
      copy:
        dest: /etc/chrony/chrony.conf
        content: |
          server ntp.aliyun.com iburst
          driftfile /var/lib/chrony/chrony.drift
          makestep 1.0 3
    - name: Restart Chrony service
      systemd:
        name: chronyd
        state: restarted
        enabled: yes
    - name: Set the timezone using timedatectl
      command: timedatectl set-timezone Asia/Shanghai

$ ansible-playbook chrony-install.yaml

6. Disable swap

$ ansible all -m shell -a "sed -ri 's/.*swap.*/#&/' /etc/fstab"
$ ansible all -m shell -a "swapoff -a"

7. Kernel configuration

# 1. load the required kernel modules (the file makes this persistent across reboots)
$ ansible all -m shell -a "cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF"
$ ansible all -m shell -a "sudo modprobe -a overlay br_netfilter"
# 2. set the required sysctl parameters
$ ansible all -m shell -a "cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF"
$ ansible all -m shell -a "sysctl --system"

# 3. raise resource limits
$ ansible all -m shell -a "cat >>/etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF"

8. IPVS configuration

$ ansible all -m shell -a "sudo apt install -y ipset ipvsadm"
		
# 1. load the IPVS modules (modules-load.d files take bare module names, one per line)
$ ansible all -m shell -a "sudo tee /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF"
$ ansible all -m shell -a "systemctl restart systemd-modules-load.service"
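`systemd-modules-load` expects plain module names in `/etc/modules-load.d/*.conf`, one per line, with no `modprobe` prefix; a small format check, run here against a temporary file:

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# every non-comment, non-empty line must be a bare module name
bad=$(grep -Ev '^(#|$|[a-z0-9_-]+$)' "$conf" | wc -l)
echo "malformed lines: $bad"
```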

9. containerd

# step1: install containerd
$ wget -P /opt https://github.com/containerd/containerd/releases/download/v1.7.17/containerd-1.7.17-linux-amd64.tar.gz
# GitHub downloads can be slow, so fetch the tarball once on the Ansible host and copy it to every node
$ vim copy-containerd.yaml
- hosts: all
  gather_facts: no
  tasks:
  - name: Synchronize files to remote host
    copy:
      src: /opt/containerd-1.7.17-linux-amd64.tar.gz
      dest: /opt/containerd-1.7.17-linux-amd64.tar.gz
$ ansible-playbook copy-containerd.yaml 

$ ansible all -m shell -a "tar xzf /opt/containerd-1.7.17-linux-amd64.tar.gz -C /usr/local/"
#$ ansible all -m shell -a "cp -r /opt/bin/* /usr/local/bin/"
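The commented-out copy step is not needed: the release tarball carries a top-level `bin/` directory, so extracting with `-C /usr/local` already places the binaries in `/usr/local/bin`. A mock demonstration of the layout (stand-in tarball, not the real release):

```shell
tmp=$(mktemp -d)

# build a stand-in tarball with the same bin/ layout as the containerd release
mkdir -p "$tmp/src/bin"
echo fake > "$tmp/src/bin/containerd"
tar czf "$tmp/containerd.tar.gz" -C "$tmp/src" bin

# extract the way the step above does
mkdir -p "$tmp/usr/local"
tar xzf "$tmp/containerd.tar.gz" -C "$tmp/usr/local"
ls "$tmp/usr/local/bin"
```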
		
$ ansible all -m shell -a "cat > /lib/systemd/system/containerd.service << EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF"

10. runc

$ wget -P /opt https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
$ vim copy-runc.yaml
- hosts: all
  gather_facts: no
  tasks:
  - name: Synchronize files to remote host
    copy:
      src: /opt/runc.amd64
      dest: /usr/local/bin/runc
      mode: "0755"

$ ansible-playbook copy-runc.yaml
# step3: create the containerd configuration file
$ ansible all -m shell -a "mkdir -p /etc/containerd"
$ ansible all -m shell -a "containerd config default | tee /etc/containerd/config.toml"

# step4: switch to the systemd cgroup driver and a reachable pause image (skip the image change if you use a proxy)
$ vim modify-container.yaml
- hosts: all
  gather_facts: no
  tasks:
  - name: Execute bootstrapped shell command on remote host
    become: true
    shell: |
      sed -i 's|SystemdCgroup = false|SystemdCgroup = true|g' /etc/containerd/config.toml
      sed -ri -e 's@(.*sandbox_image = ).*@\1\"registry.aliyuncs.com/google_containers/pause:3.9\"@' /etc/containerd/config.toml
  - name: Reload the systemd daemon
    become: true
    command: systemctl daemon-reload
  - name: Restart the containerd service
    become: true
    systemd:
      name: containerd
      state: restarted

$ ansible-playbook modify-container.yaml
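The two `sed` expressions in the playbook can be previewed on a minimal excerpt of the generated `config.toml` (the snippet below is an abbreviated stand-in, not the full file):

```shell
toml=$(mktemp)
cat > "$toml" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
            SystemdCgroup = false
EOF

# same substitutions as the playbook tasks
sed -i 's|SystemdCgroup = false|SystemdCgroup = true|g' "$toml"
sed -ri -e 's@(.*sandbox_image = ).*@\1"registry.aliyuncs.com/google_containers/pause:3.9"@' "$toml"
cat "$toml"
```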

11. containerd proxy (optional, for nodes behind a firewall)

$ vim containerd-proxy.yaml
- hosts: all
  gather_facts: no
  tasks:
  - name: http-proxy
    become: true
    shell: |
      sed -i '5a Environment=HTTP_PROXY="http://10.10.50.2:7890"' /lib/systemd/system/containerd.service
      sed -i '5a Environment=HTTPS_PROXY="http://10.10.50.2:7890"' /lib/systemd/system/containerd.service
      sed -i '5a Environment="NO_PROXY=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"' /lib/systemd/system/containerd.service
      
  - name: Reload the systemd daemon
    become: true
    command: systemctl daemon-reload
  - name: Restart the containerd service
    become: true
    systemd:
      name: containerd
      state: restarted
  - name: Reboot
    reboot:

$ ansible-playbook containerd-proxy.yaml
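`sed '5a …'` inserts after line 5, which in the unit file written earlier is the `[Service]` header; note that three successive inserts end up in reverse order, because each new line lands directly after line 5. A demonstration on a sample unit file (the proxy address is illustrative):

```shell
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStart=/usr/local/bin/containerd
EOF

# same insertion pattern as the playbook
sed -i '5a Environment=HTTP_PROXY="http://10.10.50.2:7890"' "$unit"
sed -i '5a Environment=HTTPS_PROXY="http://10.10.50.2:7890"' "$unit"
sed -i '5a Environment="NO_PROXY=localhost,127.0.0.0/8"' "$unit"
sed -n '5,9p' "$unit"
```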

12. kubeadm

$ ansible all -m shell -a "apt-get update && apt-get install -y apt-transport-https"
$ ansible all -m shell -a 'mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/Release.key |
    gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /" |
    tee /etc/apt/sources.list.d/kubernetes.list'

$ ansible all -m shell -a "apt-get update"
$ ansible all -m shell -a "apt-get install -y kubelet kubeadm kubectl"
$ ansible all -m shell -a "apt-mark hold kubelet kubeadm kubectl"

13. init

# run on master-01 only
$ kubeadm init --apiserver-advertise-address=10.10.50.11 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --token-ttl=0 --image-repository registry.aliyuncs.com/google_containers --upload-certs
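After a successful init, `kubeadm` prints follow-up instructions; the usual next steps on the control-plane node are sketched below. Since they need a live cluster, the commands are only echoed here through a `run` wrapper (drop the wrapper to execute them for real):

```shell
run() { printf '+ %s\n' "$*"; }   # dry-run wrapper; remove for real execution

# make kubectl usable for the current user on master-01
run mkdir -p "$HOME/.kube"
run sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
run sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# print a fresh join command to run on each worker node
run kubeadm token create --print-join-command
```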

14. Calico deployment

# the master-branch manifest changes over time; consider pinning a tagged release instead, e.g. .../v3.28.0/manifests/calico.yaml
$ wget https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml --no-check-certificate
$ vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.100.0.0/16"
- name: CALICO_IPV4POOL_BLOCK_SIZE
  value: "24"
- name: CALICO_IPV4POOL_IPIP
  value: "Never"

$ kubectl apply -f calico.yaml

15. Test pods (one pinned to each worker via nodeName)

apiVersion: v1
kind: Pod
metadata:
  name: mypod1
spec:
  nodeName: k8s-node01.example.com
  containers:
  - name: daemonapp1
    image: busybox
    command: ["sh","-c","sleep 3600"]
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod2
spec:
  nodeName: k8s-node02.example.com
  containers:
  - name: daemonapp2
    image: busybox
    command: ["sh","-c","sleep 3600"]
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod3
spec:
  nodeName: k8s-node03.example.com
  containers:
  - name: daemonapp3
    image: busybox
    command: ["sh","-c","sleep 3600"]


From: https://blog.csdn.net/leepongmin/article/details/139294175

    1 metallb安装参考:Kubernetes(k8s)v1.30.1本地集群部署默认不支持LoadBalancermetallb来解决-CSDN博客2 删除Layer2模式配置kubectldelete-fIPAddressPool.yamlkubectldelete-fL2Advertisement.yamlkubectldelete-fdiscuz-srv.yaml3配置k8sMeta......