
CentOS 7: PostgreSQL + etcd + Patroni + HAProxy + Keepalived high-availability cluster deployment


I. Overview

1. Concept

The PostgreSQL high-availability cluster is built from postgresql + etcd + patroni + haproxy + keepalived: PostgreSQL is the database, etcd stores the cluster state, Patroni works with etcd to perform failover of the database cluster, HAProxy provides the database access layer with read/write splitting, and Keepalived moves the VIP between nodes.

2. Topology

(The diagram from the original post is not reproduced here; the cluster consists of three nodes, 192.168.19.71/72/73, fronted by HAProxy + Keepalived with VIP 192.168.19.110.)

Software download:

Link: https://pan.baidu.com/s/1VIWwXcfQRCumJjEXndSXPQ
Extraction code: 5bpz

 

II. PostgreSQL deployment (all three nodes)

1. Download and extract

# binaries: https://www.enterprisedb.com/download-postgresql-binaries
mkdir -p /data/pg_data
tar xf postgresql-10.18-1-linux-x64-binaries.tar.gz -C /data/

2. Create the user and set ownership

useradd postgres
passwd postgres
chown -R postgres.postgres /data/

3. Initialize the database (as the postgres user)

 

# switch to the postgres user
[root@centos7 ~]# su - postgres
# initialize the data directory
[postgres@centos7 ~]$ /data/pgsql/bin/initdb -D /data/pg_data/

4. Configure environment variables

su - postgres
vim .bash_profile

PATH=$PATH:$HOME/bin
export PATH
export PGHOME=/data/pgsql
export PATH=$PATH:$PGHOME/bin
export PGDATA=/data/pg_data
export PGLOG=/data/pg_log/pg.log

source .bash_profile
mkdir -p /data/pg_log
chown postgres.postgres /data/pg_data
chown postgres.postgres /data/pg_log

5. Configure a PostgreSQL systemd unit

vim /etc/systemd/system/postgresql.service

[Unit]
Description=PostgreSQL database server
After=network.target

[Service]
Type=forking
User=postgres
Group=postgres
ExecStart=/data/pgsql/bin/pg_ctl -D /data/pg_data/ start
ExecReload=/data/pgsql/bin/pg_ctl -D /data/pg_data/ restart
ExecStop=/data/pgsql/bin/pg_ctl -D /data/pg_data/ stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target

6. Start and stop

systemctl daemon-reload
# start
systemctl start postgresql
# stop
systemctl stop postgresql
# restart
systemctl restart postgresql

7. Set a password for the postgres superuser

[postgres@pgsql-19 ~]$ psql -U postgres -h localhost
postgres=# alter user postgres with password 'P@sswrd';

8. Allow remote connections

vim /data/pg_data/pg_hba.conf

host    all             all             0.0.0.0/0               md5
vim /data/pg_data/postgresql.conf

listen_addresses = '*'
password_encryption = on

# restart the database
systemctl restart postgresql
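To confirm that remote access works, you can test from another node in the 192.168.19.x network used in this guide (a quick check of my own, assuming port 5432 is open in the firewall):

psql -h 192.168.19.71 -p 5432 -U postgres -c "select version();"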

  

III. etcd deployment (all three nodes)

1. Download and extract

tar xf etcd-v3.1.20-linux-amd64.tar.gz -C /usr/local/
ln -s /usr/local/etcd-v3.1.20-linux-amd64 /usr/local/etcd

2. Configuration files (one per node)

# node1 (192.168.19.71)
mkdir -p /usr/local/etcd/data/etcd
vim /usr/local/etcd/conf.yml

name: pgsql_1971
data-dir: /usr/local/etcd/data/etcd
listen-client-urls: http://192.168.19.71:2379,http://127.0.0.1:2379
advertise-client-urls: http://192.168.19.71:2379,http://127.0.0.1:2379
listen-peer-urls: http://192.168.19.71:2380
initial-advertise-peer-urls: http://192.168.19.71:2380
initial-cluster: pgsql_1971=http://192.168.19.71:2380,pgsql_1972=http://192.168.19.72:2380,pgsql_1973=http://192.168.19.73:2380
initial-cluster-token: etcd-cluster-token
initial-cluster-state: new

# node2 (192.168.19.72)
mkdir -p /usr/local/etcd/data/etcd
vim /usr/local/etcd/conf.yml

name: pgsql_1972
data-dir: /usr/local/etcd/data/etcd
listen-client-urls: http://192.168.19.72:2379,http://127.0.0.1:2379
advertise-client-urls: http://192.168.19.72:2379,http://127.0.0.1:2379
listen-peer-urls: http://192.168.19.72:2380
initial-advertise-peer-urls: http://192.168.19.72:2380
initial-cluster: pgsql_1971=http://192.168.19.71:2380,pgsql_1972=http://192.168.19.72:2380,pgsql_1973=http://192.168.19.73:2380
initial-cluster-token: etcd-cluster-token
initial-cluster-state: new

# node3 (192.168.19.73)
mkdir -p /usr/local/etcd/data/etcd
vim /usr/local/etcd/conf.yml

name: pgsql_1973
data-dir: /usr/local/etcd/data/etcd
listen-client-urls: http://192.168.19.73:2379,http://127.0.0.1:2379
advertise-client-urls: http://192.168.19.73:2379,http://127.0.0.1:2379
listen-peer-urls: http://192.168.19.73:2380
initial-advertise-peer-urls: http://192.168.19.73:2380
initial-cluster: pgsql_1971=http://192.168.19.71:2380,pgsql_1972=http://192.168.19.72:2380,pgsql_1973=http://192.168.19.73:2380
initial-cluster-token: etcd-cluster-token
initial-cluster-state: new

3. Start etcd and add it to system startup

# add this to your startup mechanism (or use a systemd unit as sketched below)
nohup /usr/local/etcd/etcd --config-file=/usr/local/etcd/conf.yml &
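The original post starts etcd with nohup. If you prefer systemd, matching the postgresql.service and patroni.service units used elsewhere in this guide, a minimal unit along these lines should work; the unit name etcd.service and its contents are my own sketch, not from the original post:

vim /etc/systemd/system/etcd.service

[Unit]
Description=etcd key-value store for the Patroni cluster
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/etcd/etcd --config-file=/usr/local/etcd/conf.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then run systemctl daemon-reload, systemctl enable etcd and systemctl start etcd instead of the nohup command.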

4. Cluster check

netstat -lntup | grep etcd
/usr/local/etcd/etcdctl member list
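The etcdctl bundled with etcd v3.1 talks to the v2 API by default, so the following extra checks should also work (optional, not in the original post):

/usr/local/etcd/etcdctl cluster-health
/usr/local/etcd/etcdctl --endpoints http://192.168.19.71:2379,http://192.168.19.72:2379,http://192.168.19.73:2379 member list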

IV. Patroni deployment (all three nodes)

1. Update postgresql.conf (run step 3 below first, then come back and work through steps 1 and 2 in order)

# postgresql.conf settings

max_connections = '500'
max_wal_senders = '10'
port = '5432'
listen_addresses = '*'
synchronous_commit = on
full_page_writes = on
wal_log_hints = on
synchronous_standby_names = '*'
max_replication_slots = 10
wal_level = replica

Note: wal_log_hints = on and synchronous_standby_names = '*' made the database noticeably sluggish in my testing, most likely because synchronous_standby_names = '*' together with synchronous_commit = on forces every commit to wait for a standby to acknowledge. Comments from readers who know more about this are welcome.
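To confirm which of these settings are in effect after the restart, a quick psql check (my own addition, parameter names as configured above):

psql -U postgres -h localhost -c "show wal_level;"
psql -U postgres -h localhost -c "show wal_log_hints;"
psql -U postgres -h localhost -c "show synchronous_standby_names;"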

2. Update pg_hba.conf

vim /data/pg_data/pg_hba.conf

# remove the entry added at the end of the file earlier, then add the following
local   all             all                                     peer
host    all             all             127.0.0.1/32            md5
host    all             postgres        127.0.0.1/32            md5
host    all             all             192.168.19.0/24         md5
host    all             all             ::1/128                 md5
local   replication     replicator                              peer
host    replication     replicator      127.0.0.1/32            md5
host    replication     replicator      ::1/128                 md5
host    replication     replicator      192.168.19.71/32        md5
host    replication     replicator      192.168.19.72/32        md5
host    replication     replicator      192.168.19.73/32        md5

After the changes above, restart the database.

3. Create the replication user on the primary node (important: Patroni will use it)

postgres=# create user replicator replication login encrypted password '1qaz2wsx';
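To confirm the role exists and has the REPLICATION attribute (a quick check of my own, not in the original):

psql -U postgres -h localhost -c "select rolname, rolreplication from pg_roles where rolname = 'replicator';"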

4. Configure streaming replication (on the two replica nodes)

systemctl stop postgresql && su - postgres
cd /data/ && rm -rf pg_data
/data/pgsql/bin/pg_basebackup -h 192.168.19.71 -D /data/pg_data -U replicator -v -P -R
# exit back to root, then:
systemctl start postgresql
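Once both replicas are running again, replication can be verified with standard PostgreSQL views (not part of the original post):

# on the primary: one row per connected standby
psql -U postgres -h 192.168.19.71 -c "select client_addr, state, sync_state from pg_stat_replication;"
# on a replica: should return 't'
psql -U postgres -h 192.168.19.72 -c "select pg_is_in_recovery();"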

5. Install Patroni (all three nodes)

yum install -y python3 python-psycopg2 python3-devel
pip3 install --upgrade pip
pip3 install psycopg2-binary -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
pip3 install patroni[etcd] -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com

# verify the installation
which patroni
patronictl --help

6. Create the Patroni configuration file

mkdir -p /usr/patroni/conf
cd /usr/patroni/conf/
node1: save the following as /usr/patroni/conf/patroni_postgresql.yml (the file name used by the start command and patronictl below).

scope: batman
namespace: /service/
name: postgresql1

restapi:
  listen: 192.168.19.71:8008
  connect_address: 192.168.19.71:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

# ctl:
#   insecure: false # Allow connections to SSL sites without certs
#   certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#   cacert: /etc/ssl/certs/ssl-cacert-snakeoil.pem

etcd:
  # Provide host to do the initial discovery of the cluster topology:
  host: 192.168.19.71:2379
  # Or use "hosts" to provide multiple endpoints,
  # either as a comma separated string:
  #hosts: host1:port1,host2:port2
  #host: 192.168.19.71:2379,192.168.19.72:2379,192.168.19.73:2379
  # or an actual yaml list:
  #hosts:
  #- host1:port1
  #- host2:port2
  # Once discovery is complete Patroni will use the list of advertised clientURLs.
  # It is possible to change this behavior by setting:
  #use_proxies: true

#raft:
#  data_dir: .
#  self_addr: 192.168.19.71:2222
#  partner_addrs:
#  - 192.168.19.71:2223
#  - 192.168.19.71:2224

bootstrap:
  # this section will be written into etcd:/<namespace>/<scope>/config after initializing
  # a new cluster, and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
#    master_start_timeout: 300
#    synchronous_mode: false
    #standby_cluster:
      #host: 192.168.19.71
      #port: 1111
      #primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
#      use_slots: true
      parameters:
#        wal_level: hot_standby
#        hot_standby: "on"
#        max_connections: 100
#        max_worker_processes: 8
#        wal_keep_segments: 8
#        max_wal_senders: 10
#        max_replication_slots: 10
#        max_prepared_transactions: 0
#        max_locks_per_transaction: 64
#        wal_log_hints: "on"
#        track_commit_timestamp: "off"
#        archive_mode: "on"
#        archive_timeout: 1800s
#        archive_command: mkdir -p ../wal_archive && test ! -f ../wal_archive/%f && cp %p ../wal_archive/%f
#      recovery_conf:
#        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
  # For kerberos gss based connectivity (discard @.*$)
  #- host replication replicator 192.168.19.71/32 gss include_realm=0
  #- host all all 0.0.0.0/0 gss include_realm=0
  - host replication replicator 192.168.19.71/32 md5
  - host all all 0.0.0.0/0 md5
#  - hostssl all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
# post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which need to be created after initializing the new cluster
  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

postgresql:
  listen: 192.168.19.71:5432
  connect_address: 192.168.19.71:5432
  data_dir: /data/pg_data
  bin_dir: /data/pgsql/bin
#  config_dir:
  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: replicator
      password: 1qaz2wsx
    superuser:
      username: postgres
      password: P@sswrd
    rewind:  # Has no effect on postgres 10 and lower
      username: postgres
      password: P@sswrd
# Server side kerberos spn
#  krbsrvname: postgres
  parameters:
    # Fully qualified kerberos ticket file for the running user
    # same as KRB5CCNAME used by the GSS
#   krb_server_keyfile: /var/spool/keytabs/postgres
    unix_socket_directories: '.'

# Additional fencing script executed after acquiring the leader lock but before promoting the replica
#pre_promote: /path/to/pre_promote.sh

#watchdog:
#  mode: automatic # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

node2 and node3 use exactly the same file with only the node identity changed: on node2 set name: postgresql2 and replace every 192.168.19.71 with 192.168.19.72 (restapi listen/connect_address, etcd host, the pg_hba replication line, and postgresql listen/connect_address); on node3 set name: postgresql3 and use 192.168.19.73 in the same places.
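Before starting Patroni it is worth making sure the YAML parses cleanly; PyYAML is pulled in as a Patroni dependency, so a minimal check such as this should work (my own addition, not in the original post):

python3 -c "import yaml; yaml.safe_load(open('/usr/patroni/conf/patroni_postgresql.yml')); print('yaml ok')"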

7. Start the Patroni service on each node in turn

# start as the postgres user
nohup patroni /usr/patroni/conf/patroni_postgresql.yml &

8. Create a Patroni systemd unit

To make Patroni start automatically at boot, configure it as patroni.service on all three nodes. Once patroni.service is in place, you can switch the leader, restart PostgreSQL nodes, and so on directly from the root user (example patronictl commands are shown after step 9).

[root@pgsql_1971 ~]$ vim /etc/systemd/system/patroni.service

[Unit]
Description=patroni - a high-availability PostgreSQL
Documentation=https://patroni.readthedocs.io/en/latest/index.html
After=syslog.target network.target etcd.target
Wants=network-online.target

[Service]
Type=simple
User=postgres
Group=postgres
PermissionsStartOnly=true
ExecStart=/usr/local/bin/patroni /usr/patroni/conf/patroni_postgresql.yml
ExecReload=/bin/kill -HUP $MAINPID
LimitNOFILE=65536
KillMode=process
KillSignal=SIGINT
Restart=on-abnormal
RestartSec=30s
TimeoutSec=0

[Install]
WantedBy=multi-user.target

9. Disable the postgresql unit and let Patroni manage the database

# disable postgresql autostart; PostgreSQL is managed by Patroni from now on

systemctl stop postgresql
systemctl status postgresql
systemctl disable postgresql

systemctl status patroni
systemctl start patroni
systemctl enable patroni
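With patroni.service in place, leader switchover and node restarts can be driven from root with patronictl, as mentioned in step 8. For example (batman is the scope defined in the config; both commands prompt for confirmation before acting):

patronictl -c /usr/patroni/conf/patroni_postgresql.yml switchover batman
patronictl -c /usr/patroni/conf/patroni_postgresql.yml restart batman postgresql2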

V. Cluster checks

1. Database cluster check

patronictl -c /usr/patroni/conf/patroni_postgresql.yml list
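The Patroni REST API on port 8008 can also be queried directly; these are the same endpoints the HAProxy health checks rely on later (/master returns HTTP 200 only on the current leader, /replica only on replicas):

curl -s http://192.168.19.71:8008/patroni
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.19.71:8008/master
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.19.71:8008/replica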

2. etcd check

[root@pgsql_1971 ~]# /usr/local/etcd/etcdctl ls /service/batman
[root@pgsql_1971 ~]# /usr/local/etcd/etcdctl get /service/batman/members/postgresql1

VI. HAProxy deployment (the two replica nodes)

1. Install HAProxy

yum install -y haproxy
cp -r /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_bak

2. Configuration file

vi /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------
# global settings
global
    # log syntax: log <address> <facility> [max_level_1]
    # global log configuration: send info-level logs to the local0 device
    # of the syslog service on 127.0.0.1
#   log         127.0.0.1 local0 info
    log         127.0.0.1 local1 notice
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid

    # maximum connections per haproxy process; each connection has a client side
    # and a server side, so the maximum number of TCP sessions per process will
    # be twice this value
    maxconn     4096

    # user and group
    user        haproxy
    group       haproxy

    # run as a daemon
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# defaults section
defaults
    # mode {http|tcp|health}: http is layer 7, tcp is layer 4, health only answers OK
    mode tcp

    # log errors to the local3 device of the syslog service on 127.0.0.1
    log 127.0.0.1 local3 err

    # if you set mode to http, then you must change tcplog into httplog
    option     tcplog

    # do not log null connections, i.e. probes from an upstream load balancer or
    # monitoring system that only check whether the service is alive; the docs
    # recommend leaving this off when nothing sits in front of haproxy, otherwise
    # malicious scans from the internet would not be logged
    option     dontlognull

    # number of retries on a failed connection to a backend server before that
    # server is marked as unavailable
    retries    3

    # when cookies are used, haproxy inserts the backend serverID into the cookie
    # to keep the session sticky; if that backend goes down the client cookie is
    # not refreshed, so with this option the request is redirected to another
    # backend server to keep the service working
    option redispatch

    # maximum time to wait in the queue: when a server's maxconn is reached,
    # connections are left pending in a queue which may be server-specific or
    # global to the backend
    timeout queue           525600m

    # maximum time to wait for a connection attempt to a server to succeed
    timeout connect         10s

    # maximum client-side inactivity time: applies when the client is expected
    # to acknowledge or send data
    timeout client          525600m

    # maximum server-side inactivity time: applies when the server is expected
    # to acknowledge or send data
    timeout server          525600m
    timeout check           5s
    maxconn                 5120

#---------------------------------------------------------------------
# haproxy web stats page
listen status
    bind 0.0.0.0:1080
    mode http
    log global

    stats enable
    # refresh the statistics page every 30s
    stats refresh 30s
    stats uri /haproxy-stats
    # realm string shown when the stats page asks for authentication
    stats realm Private\ lands
    # user and password for the stats page; add more lines for more users
    stats auth admin:passw0rd
    # hide the haproxy version on the stats page
#    stats hide-version

#---------------------------------------------------------------------
listen master
    bind *:5000
    mode tcp
    option tcplog
    balance roundrobin
    option httpchk OPTIONS /master
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 192.168.19.71:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
    server node2 192.168.19.72:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
    server node3 192.168.19.73:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2

listen replicas
    bind *:5001
    mode tcp
    option tcplog
    balance roundrobin
    option httpchk OPTIONS /replica
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 192.168.19.71:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
    server node2 192.168.19.72:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
    server node3 192.168.19.73:5432 maxconn 1000 check port 8008 inter 5000 rise 2 fall 2
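Before starting or reloading the service, the file can be syntax-checked with HAProxy's built-in check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg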

3. Start the service and enable it at boot

systemctl start haproxy
systemctl enable haproxy
systemctl status haproxy

Browse to http://192.168.19.72:1080/haproxy-stats and log in with user admin, password passw0rd.

Port 5000 provides the write service and port 5001 the read service: to write to the database, clients only need to be pointed at the HAProxy address on port 5000 (once Keepalived is set up below, the VIP 192.168.19.110:5000). You can simulate a primary failure by shutting down the current master node and verifying that an automatic primary/standby switchover takes place.

VII. Keepalived deployment (the two replica nodes)

1. Install Keepalived

yum install -y keepalived

2. Update the configuration

# pg-node1: /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
   notification_email {
     root@localhost      # alert mail recipient
   }
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_01
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1221
    }
    virtual_ipaddress {
        192.168.19.110/24 dev eth0 label eth0:0
    }
}

# ------------------------------------------------------------------------------
# pg-node2: /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
   notification_email {
     root@localhost      # alert mail recipient
   }
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_02
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1221
    }
    virtual_ipaddress {
        192.168.19.110/24 dev eth0 label eth0:0
    }
}

3. Start the Keepalived service

systemctl restart keepalived
systemctl enable keepalived
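On the MASTER node the VIP should now be bound to eth0 with the eth0:0 label configured above. A quick way to confirm (my own check; ifconfig requires the net-tools package):

ip addr show eth0 | grep 192.168.19.110
ifconfig eth0:0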

4. Client-facing access points

VIP: 192.168.19.110
Port 5000: read/write (routed to the current primary)
Port 5001: read-only (routed to the replicas)
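A simple end-to-end test through the VIP, using the postgres superuser set up earlier (my own check, not in the original post): port 5000 should always land on the primary and port 5001 on a replica.

# write endpoint: expect 'f' (not in recovery)
psql -h 192.168.19.110 -p 5000 -U postgres -c "select pg_is_in_recovery();"
# read endpoint: expect 't'
psql -h 192.168.19.110 -p 5001 -U postgres -c "select pg_is_in_recovery();"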

 

Note: one remaining issue with this deployment is the mutual dependence between HAProxy and Keepalived (as configured, Keepalived tracks only the host, not the HAProxy process). My view is that the VIP should not move unless the machine itself dies, so I have not added any script-based health check. If you see other problems, suggestions from experienced readers are very welcome!

From: https://www.cnblogs.com/mayongjie/p/17131711.html
