Machine information
Node | Hostname | IP address | Installed software / role |
Node 1 | yanglinux01 | 192.168.222.128 | mysql, mysql-shell, mysql-router, MGR-Primary |
Node 2 | yanglinux02 | 192.168.222.132 | mysql, mysql-shell, mysql-router, MGR-Secondary |
Node 3 | yanglinux03 | 192.168.222.133 | mysql, mysql-shell, MGR-Secondary |
Set the hostname on each of the three machines:
hostnamectl set-hostname yanglinux01
hostnamectl set-hostname yanglinux02
hostnamectl set-hostname yanglinux03
Add the following entries to /etc/hosts on all three machines:
192.168.222.128 yanglinux01
192.168.222.132 yanglinux02
192.168.222.133 yanglinux03
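A quick way to confirm that name resolution works on every node is a small loop over the hostnames defined above (a minimal check, assuming ICMP is allowed between the hosts):
for h in yanglinux01 yanglinux02 yanglinux03; do ping -c1 -W1 $h >/dev/null && echo "$h ok" || echo "$h unreachable"; done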
Disable the firewall and SELinux on all three machines:
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
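If the firewall has to stay enabled in your environment, an alternative (not used in this walkthrough) is to open only the ports involved: 3306 for MySQL, 33061 for the group replication communication port that mysqlsh reports later, and 6446-6449 plus 8443 on the router hosts. A firewalld sketch for the MySQL nodes:
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --permanent --add-port=33061/tcp
firewall-cmd --reload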
Deploy the MySQL service on each of the three machines (steps omitted).
Configuration file contents for the three MySQL instances:
yanglinux01
[mysqld]
user = mysql
port = 3306
basedir = /usr/local/mysql
datadir = /data/mysql
socket = /tmp/mysql.sock
pid-file = /data/mysql/mysqld.pid
log-error = /data/mysql/mysql.err
server_id = 128
gtid_mode = on
enforce_gtid_consistency = on
binlog_checksum=NONE
log_bin = yanglinux01-bin
log_replica_updates = 1
binlog_format = row
sync_source_info = 1
sync_binlog = 1
skip_replica_start = 1
relay-log = yanglinux01-relay-bin
binlog_transaction_dependency_tracking = WRITESET
yanglinux02
[mysqld]
user = mysql
port = 3306
basedir = /usr/local/mysql
datadir = /data/mysql
socket = /tmp/mysql.sock
pid-file = /data/mysql/mysqld.pid
log-error = /data/mysql/mysql.err
server_id = 132
gtid_mode = on
enforce_gtid_consistency = on
binlog_checksum=NONE
log_bin = yanglinux02-bin
log_replica_updates = 1
binlog_format = row
sync_source_info = 1
sync_binlog = 1
skip_replica_start = 1
relay-log = yanglinux02-relay-bin
binlog_transaction_dependency_tracking = WRITESET
yanglinux03
[mysqld]
user = mysql
port = 3306
basedir = /usr/local/mysql
datadir = /data/mysql
socket = /tmp/mysql.sock
pid-file = /data/mysql/mysqld.pid
log-error = /data/mysql/mysql.err
server_id = 133
gtid_mode = on
enforce_gtid_consistency = on
binlog_checksum=NONE
log_bin = yanglinux03-bin
log_replica_updates = 1
binlog_format = row
sync_source_info = 1
sync_binlog = 1
skip_replica_start = 1
relay-log = yanglinux03-relay-bin
binlog_transaction_dependency_tracking = WRITESET
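After restarting mysqld with these files in place, the key settings can be spot-checked on each node (a sketch assuming a passwordless root login, as used in the user-creation step below):
mysql -uroot -e "select @@server_id, @@gtid_mode, @@enforce_gtid_consistency, @@binlog_format"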
Download the mysql-shell and mysql-router packages
cd /usr/local
① mysql-shell
wget 'https://cdn.mysql.com/archives/mysql-shell/mysql-shell-8.0.29-linux-glibc2.12-x86-64bit.tar.gz'
② mysql-router
wget 'https://cdn.mysql.com/archives/mysql-router/mysql-router-8.0.29-linux-glibc2.12-x86_64.tar.xz'
Install mysql-shell (on all three machines)
Extract:
tar zxf mysql-shell-8.0.29-linux-glibc2.12-x86-64bit.tar.gz -C /usr/local/
ln -s /usr/local/mysql-shell-8.0.29-linux-glibc2.12-x86-64bit /usr/local/mysql-shell
Configure the environment variable:
vi /etc/profile  # add the following line
export PATH=$PATH:/usr/local/mysql-shell/bin
Apply it:
source /etc/profile
Test
Type the command mysqlsh directly; it should drop you into the mysql-shell interface.
Test connecting to the local mysql service.
Press Ctrl+D to exit.
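The local connection test can also be scripted non-interactively, e.g. (a sketch assuming the /tmp/mysql.sock socket path from the configuration files above and a passwordless root account):
mysqlsh --sql -uroot -S /tmp/mysql.sock -e "select @@version, @@hostname"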
Create the user on all three nodes (run on each):
mysql -uroot
mysql> create user 'yang'@'192.168.%' identified with mysql_native_password by 'yanglinux.Com';
mysql> grant all on *.* to 'yang'@'192.168.%' with grant option;
mysql> flush privileges;
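The grants can then be verified on each node, for example:
mysql -uroot -e "show grants for 'yang'@'192.168.%'"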
Check
mysqlsh
MySQL JS > dba.checkInstanceConfiguration('yang@yanglinux01:3306'); ## it must report ok before you continue
The instance 'yanglinux01:3306' is valid to be used in an InnoDB cluster.
{
"status": "ok"
}
MySQL JS > dba.checkInstanceConfiguration('yang@yanglinux02:3306');
The instance 'yanglinux02:3306' is valid to be used in an InnoDB cluster.
{
"status": "ok"
}
MySQL JS > dba.checkInstanceConfiguration('yang@yanglinux03:3306');
The instance 'yanglinux03:3306' is valid to be used in an InnoDB cluster.
{
"status": "ok"
}
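If any of these checks reports configuration errors instead of ok, the instance can usually be fixed in place with dba.configureInstance(). A sketch run from the operating system shell (it will prompt for the password of the yang account and may ask to restart mysqld):
mysqlsh yang@yanglinux01:3306 -e "dba.configureInstance()"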
Create the cluster
Run on yanglinux01:
MySQL JS > shell.connect('yang@yanglinux01:3306')
MySQL yanglinux01:3306 ssl JS > dba.createCluster('mycluster')
A new InnoDB cluster will be created on instance 'yanglinux01:3306'.
Validating instance configuration at yanglinux01:3306...
This instance reports its own address as yanglinux01:3306
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'yanglinux01:33061'. Use the localAddress option to override.
Creating InnoDB cluster 'mycluster' on 'yanglinux01:3306'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
<Cluster:mycluster>
MySQL yanglinux01:3306 ssl JS > var cluster = dba.getCluster()
MySQL yanglinux01:3306 ssl JS > cluster.addInstance('yang@yanglinux02:3306')
WARNING: A GTID set check of the MySQL instance at 'yanglinux02:3306' determined that it contains transactions that do not originate from the cluster, which must be discarded before it can join the cluster.
yanglinux02:3306 has the following errant GTIDs that do not exist in the cluster:
437c2469-3501-11ed-a15c-000c2928f21c:1-6
WARNING: Discarding these extra GTID events can either be done manually or by completely overwriting the state of yanglinux02:3306 with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
Having extra GTID events is not expected, and it is recommended to investigate this further and ensure that the data can be removed prior to choosing the clone recovery method.
Please select a recovery method [C]lone/[A]bort (default Abort): C
Validating instance configuration at yanglinux02:3306...
This instance reports its own address as yanglinux02:3306
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'yanglinux02:33061'. Use the localAddress option to override.
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster...
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: yanglinux02:3306 is being cloned from yanglinux01:3306
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY ############################################################ 100% Completed
PAGE COPY ############################################################ 100% Completed
REDO COPY ############################################################ 100% Completed
NOTE: yanglinux02:3306 is shutting down...
* Waiting for server restart... ready
* yanglinux02:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 73.66 MB transferred in about 1 second (~73.66 MB/s)
State recovery already finished for 'yanglinux02:3306'
The instance 'yanglinux02:3306' was successfully added to the cluster.
MySQL yanglinux01:3306 ssl JS > cluster.addInstance('yang@yanglinux03:3306')
Please select a recovery method [C]lone/[A]bort (default Abort): C
The instance 'yanglinux03:3306' was successfully added to the cluster.
Check the cluster status:
dba.getCluster("mycluster").status();
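The same check can be scripted from the operating system shell without an interactive session, e.g. (a sketch; the password for yang will be prompted):
mysqlsh yang@yanglinux01:3306 -e "print(dba.getCluster('mycluster').status())"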
Install mysql-router (on yanglinux01 and yanglinux02)
###
Two mysql-router instances are installed for high availability: mysql-router is the entry point that routes client requests, so it must not become a single point of failure, which in turn means an additional piece of HA software is still needed on top of the two routers.
###
Extract:
cd /usr/local
tar Jxf mysql-router-8.0.29-linux-glibc2.12-x86_64.tar.xz
ln -s mysql-router-8.0.29-linux-glibc2.12-x86_64 /usr/local/mysql-router
Configure the environment variable:
vi /etc/profile  # add the following line
export PATH=$PATH:/usr/local/mysql-shell/bin:/usr/local/mysql-router/bin
Apply it:
source /etc/profile
Generate the configuration file:
mysqlrouter --user=mysql --bootstrap yang@yanglinux01:3306  ## if you run this as root, you must also specify a non-root user with --user
Please enter MySQL password for yang:
# Bootstrapping system MySQL Router instance...
- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /usr/local/mysql-router/mysqlrouter.conf
# MySQL Router configured for the InnoDB Cluster 'mycluster'
After this MySQL Router has been started with the generated configuration
$ /etc/init.d/mysqlrouter restart
or
$ systemctl start mysqlrouter
or
$ mysqlrouter -c /usr/local/mysql-router/mysqlrouter.conf
InnoDB Cluster 'mycluster' can be reached by connecting to:
## MySQL Classic protocol
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447
## MySQL X protocol
- Read/Write Connections: localhost:6448
- Read/Only Connections: localhost:6449
The configuration file has now been generated at:
/usr/local/mysql-router/mysqlrouter.conf
View its contents:
# File automatically generated during MySQL Router bootstrap
[DEFAULT]
name=system
user=mysql
keyring_path=/usr/local/mysql-router/var/lib/mysqlrouter/keyring
master_key_path=/usr/local/mysql-router/mysqlrouter.key
connect_timeout=5
read_timeout=30
dynamic_state=/usr/local/mysql-router/bin/../var/lib/mysqlrouter/state.json
client_ssl_cert=/usr/local/mysql-router/var/lib/mysqlrouter/router-cert.pem
client_ssl_key=/usr/local/mysql-router/var/lib/mysqlrouter/router-key.pem
client_ssl_mode=PREFERRED
server_ssl_mode=AS_CLIENT
server_ssl_verify=DISABLED
unknown_config_option=error
[logger]
level=INFO
[metadata_cache:bootstrap]
cluster_type=gr
router_id=1
user=mysql_router1_bv4vlk1u4sn1
metadata_cluster=mycluster
ttl=0.5
auth_cache_ttl=-1
auth_cache_refresh_interval=2
use_gr_notifications=0
[routing:bootstrap_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://mycluster/?role=PRIMARY
routing_strategy=first-available
protocol=classic
[routing:bootstrap_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://mycluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=classic
[routing:bootstrap_x_rw]
bind_address=0.0.0.0
bind_port=6448
destinations=metadata-cache://mycluster/?role=PRIMARY
routing_strategy=first-available
protocol=x
[routing:bootstrap_x_ro]
bind_address=0.0.0.0
bind_port=6449
destinations=metadata-cache://mycluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=x
[http_server]
port=8443
ssl=1
ssl_cert=/usr/local/mysql-router/var/lib/mysqlrouter/router-cert.pem
ssl_key=/usr/local/mysql-router/var/lib/mysqlrouter/router-key.pem
[http_auth_realm:default_auth_realm]
backend=default_auth_backend
method=basic
name=default_realm
[rest_router]
require_realm=default_auth_realm
[rest_api]
[http_auth_backend:default_auth_backend]
backend=metadata_cache
[rest_routing]
require_realm=default_auth_realm
[rest_metadata_cache]
require_realm=default_auth_realm
Change ownership:
chown -R mysql /usr/local/mysql-router/var
Define the systemd service unit:
vi /lib/systemd/system/mysqlrouter.service  # contents as follows
[Unit]
Description=MYSQL Router
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
TimeoutSec=0
PermissionsStartOnly=true
ExecStart=/usr/local/mysql-router/bin/mysqlrouter -c /usr/local/mysql-router/mysqlrouter.conf
ExecReload=/bin/kill -HUP $MAINPID
ExecStop=/bin/kill -QUIT $MAINPID
KillMode=process
LimitNOFILE=65535
Restart=on-failure
RestartSec=10
RestartPreventExitStatus=1
PrivateTmp=false
Start the service:
systemctl daemon-reload
systemctl enable mysqlrouter
systemctl start mysqlrouter
Check the process and listening ports:
ps aux |grep mysqlrouter
netstat -lnp |grep mysqlrouter
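The four routing ports and the REST port from the generated configuration should all be listening now; a quick filter (assuming the ss utility is available):
ss -lntp | grep -E '6446|6447|6448|6449|8443'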
Test mysql-router
mysql -uyang -p'yanglinux.Com' -hyanglinux01 -P6446 -e "select @@hostname"  ## read/write port; the backend is the primary node of the group, i.e. yanglinux01
mysql -uyang -p'yanglinux.Com' -hyanglinux01 -P6447 -e "select @@hostname"  ## read-only port; the backends are the secondary nodes of the group, here yanglinux02 and yanglinux03, served round-robin: the first request goes to yanglinux02, the second to yanglinux03
You can loop 10 times against ports 6446 and 6447 respectively:
for i in `seq 10`; do mysql -uyang -p'yanglinux.Com' -hyanglinux01 -P6446 -NB -e "select @@hostname" 2>/dev/null; done
for i in `seq 10`; do mysql -uyang -p'yanglinux.Com' -hyanglinux01 -P6447 -NB -e "select @@hostname" 2>/dev/null; done
Simulate a failure
Stop the mysqld service on yanglinux01:
systemctl stop mysqld
At this point the read/write node became yanglinux03 (the newly elected primary), while the remaining read-only node was yanglinux02.
Start the mysqld service on yanglinux01 again:
systemctl start mysqld
Now there are two read-only nodes again: yanglinux02 and yanglinux01.
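At any point during this test, the current roles can be inspected through the router's read/write port by querying the group membership table, e.g.:
mysql -uyang -p'yanglinux.Com' -hyanglinux01 -P6446 -e "select member_host, member_role, member_state from performance_schema.replication_group_members"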
An extra tip
If you want to create an InnoDB Cluster on top of an already existing group replication setup, you need to:
1) Switch the group to single-primary mode (a quick way to verify the mode is sketched after this list), i.e. change
loose-group_replication_single_primary_mode=off
loose-group_replication_enforce_update_everywhere_checks=on
to
loose-group_replication_single_primary_mode=on
loose-group_replication_enforce_update_everywhere_checks=off
2) Install mysql-shell (steps omitted)
3) Create a user and grant it privileges with grant option, as above
4) On yanglinux01, execute:
mysqlsh -uyang -pyanglinux.Com  # first enter the mysqlsh interface, then run
dba.createCluster('mycluster', {adoptFromGR: true});  ## output omitted here
Check the cluster status:
dba.getCluster("mycluster").status();
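As noted in step 1), you can confirm that single-primary mode is actually in effect before adopting the group, e.g. (a sketch; these variables only exist once the group_replication plugin is loaded):
mysql -uroot -e "select @@group_replication_single_primary_mode, @@group_replication_enforce_update_everywhere_checks"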