
Ansible maintenance notes

Posted: 2024-03-28 21:25:39

 Reference: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/unarchive_module.html

1. Checking the network

- hosts: all
  gather_facts: no 
  become: yes
  tasks:
  - name: Install traceroute
    package:
      name: "{{item}}"
      state: present
    with_items:
      - traceroute 

  - name: check ip
    shell: "{{item}}" 
    register: check_ip_list
    with_items:
      - traceroute -T -p 22 192.168.0.1 |tail -n 1

  - name: delete check 
    delegate_to: localhost
    run_once: true
    file:
      state: absent
      path: ./check_ip.log
    ignore_errors: yes

  - name: check 
    delegate_to: localhost
    blockinfile:
      path: ./check_ip.log
      owner: tiantao01
      group: tiantao01
      create: yes
      marker: ""
      block: |
        {{inventory_hostname}} {{item.cmd.strip()}} {{item.stdout.strip().split('\n')[-1]}}
    when: 
    # log only results where the target IP is missing from the traceroute's last hop line
    - not (item.cmd.split(' ')[-4].strip()) in (item.stdout.strip().split('\n')[-1])
    loop: "{{check_ip_list.results}}"
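The `when` condition in the logging task is pure string slicing: the target IP is the 4th-from-last token of the traceroute command, and a host is logged only when that IP does not appear in the last hop line. A minimal Python sketch of the same logic (the sample traceroute output below is invented for illustration):

```python
def unreachable(cmd: str, stdout: str) -> bool:
    """Mirror the playbook's when-condition: extract the target IP as the
    4th-from-last token of the command, then report True when it is
    absent from the last line of the traceroute output."""
    target = cmd.split(' ')[-4].strip()
    last_line = stdout.strip().split('\n')[-1]
    return target not in last_line

cmd = "traceroute -T -p 22 192.168.0.1 |tail -n 1"
# sample output where the last hop reached the target
ok_out = " 1  192.168.0.254  0.3 ms\n 2  192.168.0.1  0.5 ms"
# sample output where the trace died before the target
bad_out = " 1  192.168.0.254  0.3 ms\n 2  * * *"

print(unreachable(cmd, ok_out))   # target found on last line -> not logged
print(unreachable(cmd, bad_out))  # target missing -> host gets logged
```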

2. Partitioning data disks

- hosts: all
  gather_facts: yes
  become: yes
  tags:
    - parted
  tasks:
  - debug: "msg={{ansible_devices}}"
  - debug: "msg={{ansible_lvm.pvs}}"
  - debug: "msg={{ansible_mounts}}"
  - debug: "msg={{ansible_lvm}}"
  
  - name: set vars non-root SCSI/RAID disks
    set_fact:
      notroot_disk: |
        {%- set servers=[] -%}
        {%- set root_servers=[] -%}
        {%- for disk in ansible_devices.keys() -%}
          {%- if 'SCSI' in  ansible_devices[disk].host or  'RAID' in  ansible_devices[disk].host  -%}
            {%- set _=servers.append(disk) -%}
            {%- set _=root_servers.append(disk) -%}
          {%- endif -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- for mount in ansible_mounts -%}
           {%- for disk in servers -%}
             {%- if disk in mount.device and mount.mount == "/" -%}
                {%- if disk in servers and disk in root_servers -%}
                  {%- set _=root_servers.remove(disk) -%}
                {%- endif -%}
             {%- endif -%}
             {%- if disk not in mount.device and mount.mount == "/" -%}
                 {%- for lvms in ansible_lvm.lvs|dict2items -%}
                     {%- if lvms.key in mount.device and lvms.value.vg in mount.device -%}
                         {%- for pvs in ansible_lvm.pvs|dict2items -%}
                             {%- if lvms.value.vg ==  pvs.value.vg -%}
                                {%- set _tmp=(pvs.key|regex_replace('[0-9]+$','')).split('/')[-1] -%}
                                {%- if _tmp in root_servers -%}
                                  {%- set _=root_servers.remove(_tmp) -%}
                                {%- endif -%}
                             {%- endif -%}
                         {%- endfor -%}
                     {%- endif -%}
                 {%- endfor -%}
             {%- endif -%}
           {%- endfor -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- set root_servers= root_servers|unique|sort -%}
        {{ root_servers }}   
  
  - name: set vars non-root NVMe disks
    set_fact:
      notroot_nvme_disk: |
        {%- set servers=[] -%}
        {%- set root_servers=[] -%}
        {%- for disk in ansible_devices.keys() -%}
          {%- if 'NVMe' in  ansible_devices[disk].host  -%}
            {%- set _=servers.append(disk) -%}
            {%- set _=root_servers.append(disk) -%}
          {%- endif -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- for mount in ansible_mounts -%}
           {%- for disk in servers -%}
             {%- if disk in mount.device and mount.mount == "/" -%}
                {%- if disk in servers and disk in root_servers -%}
                  {%- set _=root_servers.remove(disk) -%}
                {%- endif -%}
             {%- endif -%}
             {%- if disk not in mount.device and mount.mount == "/" -%}
                 {%- for lvms in ansible_lvm.lvs|dict2items -%}
                     {%- if lvms.key in mount.device and lvms.value.vg in mount.device -%}
                         {%- for pvs in ansible_lvm.pvs|dict2items -%}
                             {%- if lvms.value.vg ==  pvs.value.vg -%}
                                {%- set _tmp=(pvs.key|regex_replace('[0-9]+$','')).split('/')[-1] -%}
                                {%- if _tmp in root_servers -%}
                                  {%- set _=root_servers.remove(_tmp) -%}
                                {%- endif -%}
                             {%- endif -%}
                         {%- endfor -%}
                     {%- endif -%}
                 {%- endfor -%}
             {%- endif -%}
           {%- endfor -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- set root_servers= root_servers|unique|sort -%}
        {{ root_servers }}   

  - name: set vars root disks
    set_fact:
      root_disk: |
        {%- set servers=[] -%}
        {%- set root_servers=[] -%}
        {%- for disk in ansible_devices.keys() -%}
          {%- if 'SCSI' in  ansible_devices[disk].host or  'RAID' in  ansible_devices[disk].host or 'NVMe' in  ansible_devices[disk].host -%}
            {%- set _=servers.append(disk) -%}
          {%- endif -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- for mount in ansible_mounts -%}
           {%- for disk in servers -%}
             {%- if disk in mount.device and mount.mount == "/" -%}
                {%- if disk not in root_servers -%}
                  {%- set _=root_servers.append(disk) -%}
                {%- endif -%}
             {%- endif -%}
             {%- if disk not in mount.device and mount.mount == "/" -%}
                 {%- for lvms in ansible_lvm.lvs|dict2items -%}
                     {%- if lvms.key in mount.device and lvms.value.vg in mount.device -%}
                         {%- for pvs in ansible_lvm.pvs|dict2items -%}
                             {%- if lvms.value.vg ==  pvs.value.vg -%}
                                {%- set _tmp=(pvs.key|regex_replace('[0-9]+$','')).split('/')[-1] -%}
                                {%- set _=root_servers.append(_tmp) -%}
                             {%- endif -%}
                         {%- endfor -%}
                     {%- endif -%}
                 {%- endfor -%}
             {%- endif -%}
           {%- endfor -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- set root_servers= root_servers|unique -%}
        {{ root_servers }}   
        
  
  
  - name: set root pv 
    set_fact:
      root_pv: |
        {%- set servers=[] -%}
        {%- set root_servers=[] -%}
        {%- for disk in ansible_devices.keys() -%}
          {%- if 'SCSI' in  ansible_devices[disk].host or  'RAID' in  ansible_devices[disk].host or 'NVMe' in  ansible_devices[disk].host -%}
            {%- set _=servers.append(disk) -%}
          {%- endif -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- for mount in ansible_mounts -%}
           {%- for disk in servers -%}
             {%- if disk not in mount.device and mount.mount == "/" -%}
                 {%- for lvms in ansible_lvm.lvs|dict2items -%}
                     {%- if lvms.key in mount.device and lvms.value.vg in mount.device -%}
                         {%- for pvs in ansible_lvm.pvs|dict2items -%}
                             {%- if lvms.value.vg ==  pvs.value.vg -%}
                                {%- set _tmp=(pvs.key|regex_replace('[0-9]+$','')).split('/')[-1] -%}
                                {%- set _=root_servers.append(_tmp) -%}
                             {%- endif -%}
                         {%- endfor -%}
                     {%- endif -%}
                 {%- endfor -%}
             {%- endif -%}
           {%- endfor -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- set root_servers=root_servers|unique -%}
        {{ root_servers }}
  
  
  - name: set root vg 
    set_fact:
      root_vg: |
        {%- set servers=[] -%}
        {%- set root_servers=[] -%}
        {%- for disk in ansible_devices.keys() -%}
          {%- if 'SCSI' in  ansible_devices[disk].host or  'RAID' in  ansible_devices[disk].host or 'NVMe' in  ansible_devices[disk].host -%}
            {%- set _=servers.append(disk) -%}
          {%- endif -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- for mount in ansible_mounts -%}
           {%- for disk in servers -%}
             {%- if disk not in mount.device and mount.mount == "/" -%}
                 {%- for lvms in ansible_lvm.lvs|dict2items -%}
                     {%- if lvms.key in mount.device and lvms.value.vg in mount.device -%}
                         {%- for pvs in ansible_lvm.pvs|dict2items -%}
                             {%- if lvms.value.vg ==  pvs.value.vg -%}
                                {%- set _tmp=(pvs.key|regex_replace('[0-9]+$','')).split('/')[-1] -%}
                                {%- set _=root_servers.append(pvs.value.vg) -%}
                             {%- endif -%}
                         {%- endfor -%}
                     {%- endif -%}
                 {%- endfor -%}
             {%- endif -%}
           {%- endfor -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- set root_servers=root_servers|unique -%}
        {{ root_servers }}
  
  - name: set root vglv 
    set_fact:
      root_vglv: |
        {%- set servers=[] -%}
        {%- set root_servers=[] -%}
        {%- for disk in ansible_devices.keys() -%}
          {%- if 'SCSI' in  ansible_devices[disk].host or  'RAID' in  ansible_devices[disk].host or 'NVMe' in  ansible_devices[disk].host -%}
            {%- set _=servers.append(disk) -%}
          {%- endif -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- for mount in ansible_mounts -%}
           {%- for disk in servers -%}
             {%- if disk not in mount.device and mount.mount == "/" -%}
                 {%- for lvms in ansible_lvm.lvs|dict2items -%}
                     {%- if lvms.key in mount.device and lvms.value.vg in mount.device -%}
                          {%- set _tmp=('/dev/mapper/'+lvms.value.vg+'-'+lvms.key) -%}
                          {%- set _=root_servers.append(_tmp) -%}
                     {%- endif -%}
                 {%- endfor -%}
             {%- endif -%}
           {%- endfor -%}
        {%- endfor -%}
        {%- for vg in root_vg -%}
            {%- for mount in ansible_mounts -%}
                {%- if vg in mount.device -%}
                      {%- set _=root_servers.append(mount.device) -%}
                {%- endif -%}
            {%- endfor -%}
        {%- endfor -%}
        {%- set servers= servers|unique -%}
        {%- set root_servers=root_servers|unique -%}
        {{ root_servers }}
  
  - name: Get non-root disk sizes
    set_fact:
      noroot_disks_sizes: | 
        {%- set servers=[] -%}
        {%- set root_servers=[] -%}
        {%- for disk in notroot_disk -%}
            {%- set _sectors= hostvars[inventory_hostname].ansible_devices[((disk|regex_replace('[0-9]+$','')).split('/')[-1])].sectors|int -%}
            {%- set _sectorsize= hostvars[inventory_hostname].ansible_devices[((disk|regex_replace('[0-9]+$','')).split('/')[-1])].sectorsize|int -%}
            {%- set _tmp=_sectors*_sectorsize -%}
            {%- set _data={} -%}
            {%- set _ = _data.update({'name': disk,'size_bytes':_tmp}) -%}
            {%- set _=servers.append(_data) -%}
        {%- endfor -%}
        {%- set servers= servers|unique|list -%}
        {{ servers }}
    when: notroot_disk is defined

  - name: Get non-root NVMe disk sizes
    set_fact:
      noroot_nvme_disks_sizes: | 
        {%- set servers=[] -%}
        {%- set root_servers=[] -%}
        {%- for disk in notroot_nvme_disk -%}
            {%- set _sectors= hostvars[inventory_hostname].ansible_devices[((disk).split('/')[-1])].sectors|int -%}
            {%- set _sectorsize= hostvars[inventory_hostname].ansible_devices[((disk).split('/')[-1])].sectorsize|int -%}
            {%- set _tmp=_sectors*_sectorsize -%}
            {%- set _data={} -%}
            {%- set _ = _data.update({'name': disk,'size_bytes':_tmp}) -%}
            {%- set _=servers.append(_data) -%}
        {%- endfor -%}
        {%- set servers= servers|unique|list -%}
        {{ servers }}
    when: notroot_nvme_disk is defined

  - name: Sort disks by size
    set_fact:
      noroot_sortet_disks_sizes: "{{ noroot_disks_sizes | sort(attribute='size_bytes')|reverse | map(attribute='name') | list }}"
    when: noroot_disks_sizes is defined

  - name: Sort NVMe disks by size
    set_fact:
      noroot_sortet_nvme_disks_sizes: "{{ noroot_nvme_disks_sizes | sort(attribute='size_bytes')|reverse | map(attribute='name') | list }}"
    when: noroot_nvme_disks_sizes is defined

  - debug: "msg={{root_disk}}"
    when: root_disk is defined
  - debug: "msg={{notroot_disk}}"
    when: notroot_disk is defined
  - debug: "msg={{notroot_nvme_disk}}"
    when: notroot_nvme_disk is defined
  - debug: "msg={{root_pv}}"
    when: root_pv is defined
  - debug: "msg={{root_vg}}"
    when: root_vg is defined
  - debug: "msg={{root_vglv}}"
    when: root_vglv is defined
  - debug: "msg={{noroot_sortet_disks_sizes}}"
    when: noroot_sortet_disks_sizes is defined
  - debug: "msg={{noroot_sortet_nvme_disks_sizes}}"
    when: noroot_sortet_nvme_disks_sizes is defined

  - set_fact:
      data_install_root: "{{mountpath}}"
      data_disk: "{{noroot_sortet_disks_sizes[0]}}"
    when: 
    - noroot_sortet_nvme_disks_sizes|length < 1
    - noroot_sortet_disks_sizes|length >=1

  - set_fact:
      data_install_root: "{{mountpath}}"
      data_disk: "{{noroot_sortet_nvme_disks_sizes[0]}}"
    when: 
    - noroot_sortet_nvme_disks_sizes|length >=1

  - name: Unmount volume unmounted
    changed_when: true
    failed_when: false
    ignore_errors: true 
    shell: "umount -l {{ data_install_root }}"
    when: 
    - data_disk is defined
    - data_disk is not none

  - name: Unmount volume absent /etc/fstab
    ignore_errors: true 
    lineinfile:
      path: /etc/fstab 
      state: absent
      regexp: '{{ data_install_root }}' 
    when: 
    - data_disk is defined
    - data_disk is not none
  
  - name: remove partition
    changed_when: true
    failed_when: false
    parted:
      device: "/dev/{{data_disk}}"
      number: "1"
      state: absent 
    when: 
    - data_disk is defined
    - data_disk is not none

  - name: Create mount dir  
    changed_when: true
    failed_when: false
    file:
      path: '{{item}}'
      state: directory
      mode: '0755'
    with_items:
      - "{{data_install_root}}"
    when: 
    - data_disk is defined
    - data_disk is not none
  
  - name: create partition
    changed_when: true
    failed_when: false
    parted:
      device: /dev/{{item}}
      number: 1
      label: gpt
      state: present
      part_end: 100%
    with_items: 
      - "{{data_disk}}"
    when: 
    - data_disk is defined
    - data_disk is not none

  - name: Get partition device name
    shell: lsblk -n -o NAME,TYPE | grep {{data_disk}} |tail -n 1 | awk '{print $1}'|sed 's/[^[:alnum:]]//g' 
    register: partition_name_result
    when: 
    - data_disk is defined
    - data_disk is not none

  - name: Set partition name fact
    set_fact:
      partition_name: "{{ partition_name_result.stdout }}"
    when: 
    - data_disk is defined
    - data_disk is not none

  - debug: "msg={{partition_name}}"

  #- name: partprobe
  #  shell:
  #    partprobe {{data_disk}}

  - name: Format the partition
    filesystem:
      fstype: xfs
      dev: /dev/{{partition_name}}
      force: yes
    when:
      - data_disk is defined
      - data_disk is not none

  - name: Get UUID of the {{data_disk}} data partition
    shell: blkid /dev/{{partition_name}} -s UUID -o value
    register: QAXDATA_DISK_UUID
    when: 
    - data_disk is defined
    - data_disk is not none


  - name: mount the {{data_disk}}
    mount:
      path: "{{data_install_root}}"
      src: "UUID={{QAXDATA_DISK_UUID.stdout}}"
      fstype: xfs
      state: mounted
    when: 
    - data_disk is defined
    - data_disk is not none
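The selection logic above boils down to: build `{'name', 'size_bytes'}` entries for every non-root disk, sort each list by size descending, and prefer the largest NVMe disk over the largest SCSI/RAID disk. A plain-Python sketch of that decision (input shapes follow `noroot_disks_sizes`; the device names are examples):

```python
def pick_data_disk(scsi_disks, nvme_disks):
    """Mirror the playbook's data_disk choice: sort each candidate list
    by size_bytes descending; prefer the largest NVMe disk, fall back
    to the largest SCSI/RAID disk, else None."""
    def by_size_desc(disks):
        ordered = sorted(disks, key=lambda d: d['size_bytes'], reverse=True)
        return [d['name'] for d in ordered]

    nvme = by_size_desc(nvme_disks)
    scsi = by_size_desc(scsi_disks)
    if nvme:
        return nvme[0]
    if scsi:
        return scsi[0]
    return None

print(pick_data_disk(
    [{'name': 'sdb', 'size_bytes': 500}, {'name': 'sdc', 'size_bytes': 2000}],
    []))  # largest SCSI disk wins when no NVMe is present -> sdc
```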

3. Firewall (iptables rules template)

#!/bin/bash

{{ '\n' }}
{%- set ip_all = [] -%}
{%- for host in ansible_play_hosts_all -%}
  {%- set ip = hostvars[host]['ansible_' + interface]['ipv4']['address'] | default('') -%}
  {%- if ip -%}
    {%- if ip_all.append(ip) -%}{%- endif -%}
  {%- endif -%}
{%- endfor -%}
{%- set _ = ip_all.append(ip_extension) -%}
ip_all={{ ip_all | join(",") }}
{{ '\n' }}

{%- set ports_all = [] -%}
{%- if groups['etcd'] is defined -%}
  {%- set etcd_port_client = hostvars[groups['etcd'][0]]['etcd_service_port'] -%}
  {%- if etcd_port_client is defined -%}
    {%- if ports_all.append(etcd_port_client) -%}{%- endif -%}
  {%- endif -%}
  {%- set etcd_port_peer = hostvars[groups['etcd'][0]]['etcd_service_port']|int + 1  -%}
  {%- if etcd_port_peer is defined -%}
    {%- if ports_all.append(etcd_port_peer) -%}{%- endif -%}
  {%- endif -%}
{%- endif -%}

{%- if groups['kafka'] is defined -%}
  {%- set kafka_zookeeper_service_port = hostvars[groups['kafka'][0]]['kafka_zookeeper_service_port'] -%}
  {%- if kafka_zookeeper_service_port is defined -%}
    {%- if ports_all.append(kafka_zookeeper_service_port) -%}{%- endif -%}
  {%- endif -%}
{%- endif -%}

{%- if groups['zookeeper'] is defined -%}
  {%- set zookeeper_service_port = hostvars[groups['zookeeper'][0]]['zookeeper_service_port'] -%}
  {%- if zookeeper_service_port is defined -%}
    {%- if ports_all.append(zookeeper_service_port) -%}{%- endif -%}
  {%- endif -%}
{%- endif -%}
ports_all={{ ports_all | join(',') }}
{{ '\n' }}

{%- set network_all = [] -%}
{%- if groups['k8s'] is defined -%}
  {%- set k8s_pod_subnet = hostvars[groups['k8s'][0]]['pod_subnet'] -%}
  {%- if k8s_pod_subnet is defined -%}
    {%- if network_all.append(k8s_pod_subnet) -%}{%- endif -%}
  {%- endif -%}
  {%- set k8s_service_subnet = hostvars[groups['k8s'][0]]['service_subnet'] -%}
  {%- if k8s_service_subnet is defined -%}
    {%- if network_all.append(k8s_service_subnet) -%}{%- endif -%}
  {%- endif -%}
{%- endif -%}
{%- set _ = network_all.append(network_extension) -%}
network_all={{ network_all | join(',') }}
{{ '\n' }}
{% raw %}
iptables -t mangle -F PREROUTING 
for port in $(echo $ports_all|tr ',' '\n' | grep -v '^$'); do
  for host in $(echo $ip_all|tr ',' '\n' | grep -v '^$'); do
    echo $port $host
    iptables -t mangle -A PREROUTING -s ${host}/32  -p tcp -m multiport --dport ${port}  -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
  done
done
for port in $(echo $ports_all|tr ',' '\n' | grep -v '^$'); do
  for network in $(echo $network_all|tr ',' '\n' | grep -v '^$'); do
    iptables -t mangle -A PREROUTING -s ${network}  -p tcp -m multiport --dport ${port}  -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
  done
done
iptables -t mangle -A PREROUTING -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# NOTE: ${network} below is not re-set in this loop; it still holds the
# last value left over from the network loop above
for port in $(echo $ports_all|tr ',' '\n' | grep -v '^$'); do
  iptables -t mangle -A PREROUTING -s ${network}  -p tcp -m multiport --dport ${port}  -j ACCEPT
done
for port in $(echo $ports_all|tr ',' '\n' | grep -v '^$'); do
  iptables -t mangle -A PREROUTING -p tcp -m multiport --dport ${port}  -j DROP
done
iptables -t mangle -S
{% endraw %}
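The nested shell loops emit one ACCEPT rule per (source IP, port) combination. The expansion can be sketched in Python as a simple cross product (IPs and ports below are example values, not taken from any real inventory):

```python
from itertools import product

def build_accept_rules(ips, ports):
    """Expand the nested shell loops: one mangle/PREROUTING ACCEPT rule
    per (source IP, port) pair, matching the rule shape in the script."""
    rules = []
    for ip, port in product(ips, ports):
        rules.append(
            f"iptables -t mangle -A PREROUTING -s {ip}/32 -p tcp "
            f"-m multiport --dport {port} "
            f"-m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT")
    return rules

rules = build_accept_rules(['10.0.0.1', '10.0.0.2'], [2379, 2380])
print(len(rules))  # 2 IPs x 2 ports = 4 rules
```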

 

4. Subnets (Ceph network CIDRs)

---
- hosts: all
  gather_facts: yes
  become: yes

  tags:
    - ceph

  tasks:

    - name: set vars network_cidr
      run_once: yes
      set_fact:
        network_cidr: |
          {%- set servers=[] -%}
          {%- for host in ansible_play_hosts_all -%}
              {%- set _=servers.append(hostvars[host]['ansible_facts']['default_ipv4']['network']+'/'+hostvars[host]['ansible_facts']['default_ipv4']['netmask']) -%}
          {%- endfor -%}
          {%- set servers= servers|cidr_merge('span') -%}
          {{servers}}


    - name: set vars cluster_network_cidr
      run_once: yes
      set_fact:
        cluster_network_cidr: "{{network_cidr.strip().split()[0]}}"

    - name: set vars public_network_cidr
      run_once: yes
      set_fact:
        public_network_cidr: "{{network_cidr.strip().split()[0]}}"

    - debug: "msg={{network_cidr}}"
    - debug: "msg={{public_network_cidr}}"
    - debug: "msg={{cluster_network_cidr}}"

    - name: Define ceph mon group
      add_host:
        groups:
          - ceph
          - cephmon
          - cephrgw
          - cephosd
          - cephmds
        name: "{{item}}"
        api_interface: "{{interface}}"
        public_network: "{{network_cidr.strip().split()[0]}}"
        cluster_network: "{{network_cidr.strip().split()[0]}}"
        ceph_env: "xian"
      when: hostvars[item]['ceph_is_master'] is defined
      loop: "{{ansible_play_hosts_all}}"

    - debug: "msg={{groups['cephmon']|length}}"

- hosts: all
  gather_facts: yes
  become: yes
  tags:
    - ceph

  tasks:
    - name: Define k8s_master host group
      add_host:
        groups:
          - k8s_master
        name: "{{item}}"
        interface: "{{interface}}"
      when: hostvars[item]['k8s_is_master'] is defined and hostvars[item]['k8s_is_master'] == "true"
      loop: "{{ansible_play_hosts_all}}"

    - name: Define k8s group
      add_host:
        groups:
          - k8s
        name: "{{item}}"
      when: hostvars[item]['k8s_is_master'] is defined
      loop: "{{ansible_play_hosts_all}}"

- hosts: ceph
  become: yes
  tags:
    - ceph

  roles:
    - role: docker
      tags:
        - ceph
      when: 
        - isuseceph|bool

- hosts: all
  become: yes
  tags:
    - ceph

  roles:
    - role: repo
      tags:
        - ceph
      when: 
        - isuseceph|bool

- hosts: all
  become: yes
  tags:
    - ceph

  roles:
    - role: ntp
      tags:
        - ceph

- import_playbook: deploy-ceph-mons.yml
  when: 
    - isuseceph|bool
  tags: ceph

- import_playbook: deploy-ceph-osds.yml
  when: 
    - isuseceph|bool
  tags: ceph

- import_playbook: deploy-ceph-rgws.yml
  when: 
    - isuseceph|bool
  tags: ceph

- import_playbook: init-ceph-rgw-user.yml
  when: 
    - isuseceph|bool
  tags: ceph

- import_playbook: deploy-ceph-mds.yml
  when: 
    - isuseceph|bool
  tags: ceph

- import_playbook: deploy-ceph-provisioner.yaml
  when: 
    - isuseceph|bool
  tags: ceph
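The `cidr_merge('span')` filter (from ansible.utils, backed by netaddr) collapses all per-host networks into the smallest single CIDR covering them. The same computation with only the standard library, assuming IPv4 inputs, looks roughly like this; note `ip_network(..., strict=False)` also accepts the `network/netmask` form the playbook builds:

```python
import ipaddress

def span_cidr(cidrs):
    """Smallest single network covering every input CIDR, analogous to
    cidr_merge with action='span'. Widens the prefix of the first
    network until all inputs fit inside it."""
    nets = [ipaddress.ip_network(c, strict=False) for c in cidrs]
    span = nets[0]
    while not all(n.subnet_of(span) for n in nets):
        span = span.supernet()  # drop one prefix bit at a time
    return str(span)

print(span_cidr(['192.168.0.0/24', '192.168.1.0/24']))  # 192.168.0.0/23
```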

  

5. Failing on bad node status (k8s deploy checks)

---
- hosts: all
  gather_facts: yes
  become: yes
  tags:
    - k8s
    - master
    - slave

  tasks:
    - name: set cgroup_driver
      set_fact:
        cgroup_driver: "cgroupfs"
      when: ansible_os_family != "RedHat"

    - name: set cgroup_driver
      set_fact:
        cgroup_driver: "systemd"
      when: ansible_os_family == "RedHat"

    - name: set k8s prefix path
      set_fact:
        k8sconfigpath: "{{k8s_prefix}}config"
      when: k8s_prefix is defined

    - name: Define k8s_master host group
      add_host:
        groups:
          - etcd
          - k8s_master
        name: "{{item}}"
        interface: "{{interface}}"
      when: hostvars[item]['k8s_is_master'] is defined and hostvars[item]['k8s_is_master'] == "true"
      loop: "{{ansible_play_hosts_all}}"

    - name: Define k8s_slave nodes
      add_host:
        groups:
          - k8s_slave
        name: "{{item}}"
        interface: "{{interface}}"
      when: hostvars[item]['k8s_is_master'] is defined and hostvars[item]['k8s_is_master'] == "false"
      loop: "{{ansible_play_hosts_all}}"


- hosts: all 
  become: yes
  tags:
    - k8s

  roles:
    - role: dassl 
    - role: docker
      vars:
        cgroup_driver: "{{cgroup_driver}}"

- hosts: k8s_master
  become: yes
  tags:
    - k8s
    - master

  roles:
    - role: init-system

    - role: etcd
      vars:
        etcd_with_tls: true
        etcd_with_basic_auth: false
        etcd_service_port: 2379
        etcd_port_peer: 2380
        etcd_version: v3.5.4 
        etcd_interface: "{{interface}}"

    - role: docker
      vars:
        cgroup_driver: "{{cgroup_driver}}"

    - role: kernel
      when:
        - isusemerge|bool

    - role: k8s
      vars:
        etcd_with_tls: true
        etcd_with_basic_auth: false
        etcd_service_port: 2379
        etcd_port_peer: 2380
        etcd_version: v3.5.4
        etcd_interface: "{{interface}}"

  post_tasks:
    - name: sleep
      shell: "sleep 300"
      delegate_to: "{{groups['k8s_master'][0]}}" 
      changed_when: true
      failed_when: false
      run_once: yes

    - name: check k8s master status 
      shell: "kubectl get node|grep master|grep -v NAME|grep NotReady"
      delegate_to: "{{groups['k8s_master'][0]}}" 
      changed_when: true
      failed_when: false
      run_once: yes
      register: result_k8s

    - name: check k8s plugin status
      shell: "kubectl get pods -n kube-system  -o wide|grep -v NAME|grep -v Running"
      delegate_to: "{{groups['k8s_master'][0]}}" 
      run_once: yes
      changed_when: true
      failed_when: false
      register: result_plugin
    
    - debug: "msg={{result_k8s}}"

    - debug: "msg={{result_plugin}}"

    - name: master node fail
      fail:
        msg: "The status of the master node {{inventory_hostname}} is wrong"
      when: 
        - result_k8s.stdout | length > 0
        - (result_k8s.stdout|regex_search(inventory_hostname))

    - name: master node plugin fail
      fail:
        msg: "The status of the master node plugin {{inventory_hostname}} is wrong"
      when: 
        - result_plugin.stdout | length > 0
        - (result_plugin.stdout|regex_search(inventory_hostname))

- hosts: k8s_slave
  become: yes
  tags:
    - k8s
    - slave

  roles:
    - role: init-system

    - role: docker
      vars:
        cgroup_driver: "{{cgroup_driver}}"


    - role: kernel
      when:
        - isusemerge|bool

    - role: k8s

  post_tasks:
    - name: sleep
      shell: "sleep 300"
      delegate_to: "{{groups['k8s_master'][0]}}" 
      changed_when: true
      failed_when: false
      run_once: yes

    - name: check k8s slave status 
      shell: "kubectl get node|grep -v master|grep -v NAME|grep NotReady"
      delegate_to: "{{groups['k8s_master'][0]}}" 
      changed_when: true
      failed_when: false
      run_once: yes
      register: result_k8s

    - name: check k8s plugin status
      shell: "kubectl get pods -n kube-system  -o wide|grep -v NAME|grep -v Running" 
      changed_when: true
      failed_when: false
      delegate_to: "{{groups['k8s_master'][0]}}" 
      run_once: yes
      register: result_plugin

    - debug: "msg={{result_k8s}}"

    - debug: "msg={{result_plugin}}"


    - name: slave node fail
      fail:
        msg: "The status of the slave node {{inventory_hostname}} is wrong"
      when: 
        - result_k8s.stdout | length > 0
        - (result_k8s.stdout|regex_search(inventory_hostname))

    - name: slave node plugin fail
      fail:
        msg: "The status of the slave node plugin {{inventory_hostname}} is wrong"
      when: 
        - result_plugin.stdout | length > 0
        - (result_plugin.stdout|regex_search(inventory_hostname))
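The post-task pattern pipes `kubectl get node` through grep to keep only NotReady lines, then fails any host whose name appears in the remainder. The same filtering in Python, with an invented sample of `kubectl get node` output:

```python
def not_ready_nodes(kubectl_output, exclude_masters=True):
    """Return node names reported NotReady, mirroring
    'kubectl get node | grep -v master | grep -v NAME | grep NotReady'."""
    bad = []
    for line in kubectl_output.splitlines():
        if line.startswith('NAME') or 'NotReady' not in line:
            continue  # skip the header and healthy nodes
        if exclude_masters and 'master' in line:
            continue  # the slave check filters out master rows
        bad.append(line.split()[0])
    return bad

sample = ("NAME     STATUS     ROLES    AGE\n"
          "node-1   Ready      worker   10d\n"
          "node-2   NotReady   worker   10d\n")
print(not_ready_nodes(sample))  # ['node-2']
```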

  

6. Applying group variables (monitoring)

---
- hosts: all 
  gather_facts: yes
  become: yes
  tags:
    - monitor

  pre_tasks:
    - name: Define pg_master host group
      add_host:
        groups:
          - pg_master
        name: "{{item}}"
      when: groups['pg'] is defined and hostvars[item]['pg_is_master'] is defined and hostvars[item]['pg_is_master'] == "true"
      loop: "{{groups['pg']}}"


    - name: 1 minio per node
      set_fact:
        minio_per_node: 1
      when: groups["minio"] is defined and groups["minio"]|length == 1
    
    - name: 2 minio per node
      set_fact:
        minio_per_node: 2
      when: groups["minio"] is defined and groups["minio"]|length >= 2
    
    - set_fact:
        minio_all_nodes_port: "{% for host in groups['minio'] %}{% for idx in range(0, minio_per_node) %}{{hostvars[host]['minio_service_port']|int +idx}}{% if not loop.last %},{% endif %}{% endfor %}{% if loop.first %}{% break %}{% endif %}{% endfor %}"
      when: groups["minio"] is defined

    - name: 3 instance per node
      set_fact:
        instance_per_node: 3
      when: groups["redis"] is defined and groups["redis"]|length == 2
    
    - name: 2 instance per node
      set_fact:
        instance_per_node: 2
      when: groups["redis"] is defined and groups["redis"]|length > 2 and groups["redis"]|length < 6
    
    - name: 1 instance per node
      set_fact:
        instance_per_node: 1
      when: groups["redis"] is defined and groups["redis"]|length >= 6
    
    - set_fact:
        redis_all_nodes_port: "{% for host in groups['redis'] %}{% for idx in range(0, instance_per_node) %}{{hostvars[host]['redis_cluster_port']|int +idx}}{% if not loop.last %},{% endif %}{% endfor %}{% if loop.first %}{% break %}{% endif %}{% endfor %}"
      when: groups["redis"] is defined and groups["redis"]|length > 1

    - set_fact:
        redis_all_nodes_port: "{% for host in groups['redis'] %}{{hostvars[host]['redis_standalone_port']}}{% break %}{% endfor %}"
      when: groups["redis"] is defined and groups["redis"]|length == 1

    - set_fact:
        prometheus_host: "{% for host in groups['prometheus'] %}{{host}}{% if loop.first %}{% break %}{% endif %}{% endfor %}"
      when: groups["prometheus"] is defined and groups["prometheus"]|length == 1

    - set_fact:
        pg_service_port: "{% for host in groups['pg'] %}{{hostvars[host]['pg_service_port']}}{% if loop.first %}{% break %}{% endif %}{% endfor %}"
      when: groups['pg'] is defined

    - set_fact:
        pg_service_user: "{% for host in groups['pg'] %}{{hostvars[host]['pg_service_user']}}{% if loop.first %}{% break %}{% endif %}{% endfor %}"
      when: groups['pg'] is defined

    - set_fact:
        es_service_user: "{% for host in groups['es'] %}{{hostvars[host]['es_service_user']}}{% if loop.first %}{% break %}{% endif %}{% endfor %}"
      when: groups['es'] is defined 

    - set_fact:
        es_service_port: "{{ hostvars[groups['es'][0]]['es_service_port'] }}"
      when: groups['es'] is defined

    - set_fact:
        minio_service_user: "{{ hostvars[groups['minio'][0]]['minio_service_user'] }}"
      when: groups['minio'] is defined

    - set_fact:
        kafka_service_user: "{{ hostvars[groups['kafka'][0]]['kafka_service_user'] }}"
      when: groups['kafka'] is defined

    - set_fact:
        kafka_service_port: "{{ hostvars[groups['kafka'][0]]['kafka_service_port'] }}"
      when: groups['kafka'] is defined

    - set_fact:
        zookeeper_service_port: "{{ hostvars[groups['zookeeper'][0]]['zookeeper_service_port'] }}"
      when: groups['zookeeper'] is defined

    - set_fact:
        etcd_service_port: "{{ hostvars[groups['etcd'][0]]['etcd_service_port'] }}"
      when: groups['etcd'] is defined

    - set_fact:
        mongo_service_user: "{{ hostvars[groups['mongo'][0]]['mongo_service_user'] }}"
      when: groups['mongo'] is defined

    - set_fact:
        mongo_service_port: "{{ hostvars[groups['mongo'][0]]['mongo_service_port'] }}"
      when: groups['mongo'] is defined

    - set_fact:
        edge2_addr: "{{ groups['edge'][0] }}"
      when: groups["edge"] is defined and groups["edge"]|length == 1

    - set_fact:
        edge2_grpc_port: "{{ hostvars[groups['edge'][0]]['edge2_grpc_port'] }}"
      when: groups["edge"] is defined and groups["edge"]|length == 1

    - set_fact:
        edge2_agent_port: "{{ hostvars[groups['edge'][0]]['edge2_agent_port'] }}"
      when: groups["edge"] is defined and groups["edge"]|length == 1

    - set_fact:
        edge_addr: "{{ groups['edge.v1'][0] }}"
      when: groups["edge.v1"] is defined and groups["edge.v1"]|length == 1

    - set_fact:
        edge_grpcweb_port: "{{ hostvars[groups['edge.v1'][0]]['edge_grpcweb_port'] }}"
      when: groups["edge.v1"] is defined and groups["edge.v1"]|length == 1

    - set_fact:
        edge_websocket_port: "{{ hostvars[groups['edge.v1'][0]]['edge_websocket_port'] }}"
      when: groups["edge.v1"] is defined and groups["edge.v1"]|length == 1

    - debug: "msg={{redis_all_nodes_port}}"
      when: groups["redis"] is defined

    - debug: "msg={{minio_all_nodes_port}}"
      when: groups["minio"] is defined

  roles:
    - role: monitor
- name: Get install_exporter dir
  set_fact:
    basedir: "{{ansible_inventory_sources[0].split('/ansible_inventory')[0]}}"

- debug: "msg={{basedir}}"
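The basedir fact above is a plain prefix split on the first inventory path. Expressed in Python terms (the example path is an assumption, not taken from the playbook):

```python
# Mirrors the Jinja expression:
#   ansible_inventory_sources[0].split('/ansible_inventory')[0]
# i.e. everything before the '/ansible_inventory' segment of the path.
inventory_source = "/opt/deploy/ansible_inventory/hosts.ini"  # hypothetical path
basedir = inventory_source.split("/ansible_inventory")[0]
print(basedir)  # /opt/deploy
```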



- name: Set umask
  lineinfile:
    path: "/home/{{ansible_user}}/.bashrc"
    line: "umask 0022"
    regexp: '^umask\s+\d+'
    create: yes
    mode: "0644"
  become: yes
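The umask set above controls the default permission bits of newly created files and directories. A quick shell sketch (using a throwaway mktemp directory) shows the effect:

```shell
# umask 0022 strips the group/other write bits:
# new files default to 0644 (0666 - 0022), new directories to 0755 (0777 - 0022).
umask 0022
demo=$(mktemp -d)
touch "$demo/f"
mkdir "$demo/d"
stat -c '%a' "$demo/f"   # prints 644
stat -c '%a' "$demo/d"   # prints 755
rm -rf "$demo"
```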


- name: Remove stale ansible temp dir
  file:
    state: absent
    path: "/home/{{ansible_user}}/.ansible"
- name: Create a directory if it does not exist
  file:
    path: /tmp
    state: directory
    mode: '1777'

- name: install node_exporter
  delegate_to: localhost
  environment:
    ANSIBLE_REMOTE_TEMP: /tmp/{{ansible_user}}/ansible
  changed_when: true
  failed_when: false
  shell: |
    ansible-playbook -i {{ansible_inventory_sources[0]}} {{basedir}}/ansible-playbooks/deploy/playbooks/install_exporter.yaml -t node -b  --extra-vars 'promethues_host={{prometheus_host}}' --extra-vars '{"prometheus_options": [{"env_name": "test","project_name": "test"}]}' 

6,shell

- name: get timezone
  become: yes
  delegate_to: localhost
  shell: "timedatectl | grep 'Time zone' | awk '{print $3}'"
  register: timezone_output

- name: show timezone 
  debug:
    var: timezone_output.stdout

- name: Define k8s_master host group
  add_host:
    groups:
      - k8s_master
    name: "{{item}}"
    interface: "{{interface}}"
  when: hostvars[item]['k8s_is_master'] is defined and hostvars[item]['k8s_is_master'] == "true"
  loop: "{{ansible_play_hosts_all}}"
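For the add_host filter above to match, each master host must carry `k8s_is_master` as the string "true" in the inventory. A minimal sketch (group name, addresses, and interface value are assumptions):

```ini
[k8s]
10.0.0.11 k8s_is_master="true" interface=eth0
10.0.0.12 k8s_is_master="false" interface=eth0
```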

- name: make repo
  changed_when: true
  failed_when: false
  shell:
    cmd: |
      rm -f /var/lib/rpm/__db.00* && rm -rf /var/cache/dnf/* && yum clean all

- name: Install chrony 
  package: 
    name: "chrony"
    state: present 

- name: config chrony.conf
  when: inventory_hostname == groups['k8s_master'][0]
  shell: |
    cat > /etc/chrony.conf << EOF
    server 127.0.0.1 iburst
    driftfile /var/lib/chrony/drift
    makestep 1.0 3
    rtcsync
    allow all
    local stratum 10
    logdir /var/log/chrony
    EOF

- name: config chrony.conf
  when: inventory_hostname != groups['k8s_master'][0]
  shell: |
    cat > /etc/chrony.conf << EOF
    server {{groups['k8s_master'][0]}} iburst
    driftfile /var/lib/chrony/drift
    makestep 1.0 3
    rtcsync
    allow all
    local stratum 10
    logdir /var/log/chrony
    EOF
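The two heredoc tasks above always report changed and bypass Ansible's idempotence. A hedged alternative (a sketch, not from the original playbook) collapses both into one copy task that selects the NTP server inline:

```yaml
# Sketch: one idempotent task replacing both shell heredocs.
# The first k8s_master node syncs against itself; all other hosts sync against it.
- name: config chrony.conf
  copy:
    dest: /etc/chrony.conf
    mode: "0644"
    content: |
      server {{ '127.0.0.1' if inventory_hostname == groups['k8s_master'][0] else groups['k8s_master'][0] }} iburst
      driftfile /var/lib/chrony/drift
      makestep 1.0 3
      rtcsync
      allow all
      local stratum 10
      logdir /var/log/chrony
```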

- name: restart chrony
  systemd:
    name: chronyd 
    enabled: yes
    daemon_reload: yes
    state: restarted

- name: chronyc sources -v
  shell:
    cmd: |
      chronyc sources -v
      chronyc -a makestep
      chronyc sourcestats -v
      timedatectl set-timezone {{timezone_output.stdout}} 
      timedatectl set-local-rtc 0
      timedatectl set-ntp yes
      timedatectl status

- name: restart chrony
  systemd:
    name: chronyd 
    enabled: yes
    daemon_reload: yes
    state: restarted

- block:
    - name: Install systemd-timesyncd 
package:
        name: systemd-timesyncd
        state: present

    - name: restart systemd-timesyncd
      systemd:
        name: systemd-timesyncd
        enabled: yes
        daemon_reload: yes
        state: restarted

    - name: checkntp
      shell:
        cmd: |
           timedatectl status
      register: checkntp_ntp
    
    - debug: msg="{{checkntp_ntp}}"
    
    - name: restart chrony
      systemd:
        name: chronyd
        enabled: yes
        daemon_reload: yes
        state: restarted
      when: "'synchronized: yes' not in checkntp_ntp.stdout"
    
    - name: checkntp
      shell:
        cmd: |
           systemctl restart chronyd
           systemctl restart systemd-timesyncd 
           sleep 10
           timedatectl status
      until: "'synchronized: yes' in checkntps_ntp.stdout"
      retries: 10
      delay: 5
      ignore_errors: yes
      register: checkntps_ntp
    
    - name: checkntp
      shell:
        cmd: |
           timedatectl status
      register: checkntps_ntp

    - name: Assert ntp status
      assert:
        that:
          - "'synchronized: yes' in checkntps_ntp.stdout"
        msg: "ntp checkout is not ok"
  when: 
    - ansible_distribution_major_version == "9"

- block:
    - name: checkntp
      shell:
        cmd: |
           timedatectl status
      register: checkntp_ntp
    
    - debug: msg="{{checkntp_ntp}}"
    
    - name: restart chrony
      systemd:
        name: chronyd
        enabled: yes
        daemon_reload: yes
        state: restarted
      when: "'synchronized: yes' not in checkntp_ntp.stdout"
    
    - name: checkntp
      shell:
        cmd: |
           systemctl restart chronyd
           sleep 10
           timedatectl status
      until: "'synchronized: yes' in checkntps_ntp.stdout"
      retries: 10
      delay: 5
      ignore_errors: yes
      register: checkntps_ntp
    
    - name: checkntp
      shell:
        cmd: |
           timedatectl status
      register: checkntps_ntp
    
    - name: Assert ntp status
      assert:
        that:
          - "'synchronized: yes' in checkntps_ntp.stdout"
        msg: "ntp checkout is not ok"
  when: 
    - ansible_distribution_major_version != "9"
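The until/retries/delay pattern used in both blocks above is essentially a bounded polling loop. A hedged shell equivalent (the function name and timedatectl usage sketch are illustrative, not from the playbook):

```shell
# retry_until PATTERN MAX DELAY CMD...
# Re-runs CMD until its output contains PATTERN, up to MAX attempts,
# sleeping DELAY seconds between attempts. Prints the matching output.
retry_until() {
  pattern="$1"; max="$2"; delay="$3"; shift 3
  i=1
  while [ "$i" -le "$max" ]; do
    out=$("$@" 2>/dev/null) || true
    if printf '%s\n' "$out" | grep -q "$pattern"; then
      printf '%s\n' "$out"
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# Usage sketch, mirroring the Ansible task on a real host:
# retry_until 'synchronized: yes' 10 5 timedatectl status
```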

From: https://www.cnblogs.com/tiantao36/p/18102635
