####################################################
Example cluster IPs:
172.21.243.141
172.21.243.69
172.21.243.47
172.21.243.33
172.21.243.184
172.21.243.64
172.21.243.223
Machine configuration: 7 machines, each with 2 CPU cores, 8 GB RAM, and a 100 GB disk.
####################################################
Offline deployment plan
I. Download the software
# tiup package (TiDB cluster management tool)
wget https://download.pingcap.org/tidb-community-server-v7.5.2-linux-amd64.tar.gz
# TiDB toolkit (assorted ecosystem tools)
wget https://download.pingcap.org/tidb-community-toolkit-v7.5.2-linux-amd64.tar.gz
II. Installation
SSH mutual-trust setup:
1. Before installing, every machine in the cluster needs an SSH key, and the keys must be distributed so that all nodes trust one another.
For example, machine A can log in to machines A, B, C, D, E, F, and G without a password; machine B can log in to machines A through G; and so on.
2. Install tiup
Extract the package:
tar -zxvf tidb-community-server-v7.5.2-linux-amd64.tar.gz
Install:
cd tidb-community-server-v7.5.2-linux-amd64
sh local_install.sh
Reload the environment variables:
source /root/.bash_profile
Run tiup to verify the installation succeeded:
tiup
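For an offline install, you can also confirm that tiup resolves on the PATH and that `local_install.sh` pointed it at the local package mirror rather than the online one (the exact install path is what the script typically uses, not something this post states):

```shell
which tiup          # typically resolves to ~/.tiup/bin/tiup
tiup --version      # prints the tiup client version
tiup mirror show    # for an offline install, prints the extracted package directory
```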
3. Install the TiDB toolkit
Extract:
tar -zxvf tidb-community-toolkit-v7.5.2-linux-amd64.tar.gz
The toolkit archive is a package of packages: extract only the tool you need. It is not required for this cluster installation.
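For example, to pull a single tool out of the toolkit (the nested file name below is an assumption; list the directory first and use the name actually shown):

```shell
cd tidb-community-toolkit-v7.5.2-linux-amd64
ls *.tar.gz                                  # see which nested tool packages exist
# e.g. extract only dumpling (file name assumed; match it to what ls printed):
tar -xzf dumpling-v7.5.2-linux-amd64.tar.gz
```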
III. Deploy the TiDB cluster with tiup
1. Generate a topology template
tiup cluster template > topology_test.yaml  # generates the template
2. Edit the configuration file. The file below is from my environment; adjust the IPs to match your own.
cat topology_test.yaml
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/data/tidb-deploy"
  data_dir: "/data/tidb-data"
  arch: "amd64"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  pd:
    replication.location-labels: ["zone","dc","rack","host"]

tidb_servers:
  - host: 172.21.243.141
    deploy_dir: "/data/tidb-deploy/tidb-4000"
    log_dir: "/data/tidb-deploy/tidb-4000/log"

pd_servers:
  - host: 172.21.243.69
    deploy_dir: "/data/tidb-deploy/pd-2379"
    data_dir: "/data/tidb-data/pd-2379"
    log_dir: "/data/tidb-deploy/pd-2379/log"
  - host: 172.21.243.47
    deploy_dir: "/data/tidb-deploy/pd-2379"
    data_dir: "/data/tidb-data/pd-2379"
    log_dir: "/data/tidb-deploy/pd-2379/log"
  - host: 172.21.243.33
    deploy_dir: "/data/tidb-deploy/pd-2379"
    data_dir: "/data/tidb-data/pd-2379"
    log_dir: "/data/tidb-deploy/pd-2379/log"

tikv_servers:
  - host: 172.21.243.184
    config:
      server.labels: { zone: "z1", dc: "d1", rack: "r1", host: "243179" }
    deploy_dir: "/data/tidb-deploy/tikv-20160"
    data_dir: "/data/tidb-data/tikv-20160"
    log_dir: "/data/tidb-deploy/tikv-20160/log"
  - host: 172.21.243.64
    config:
      server.labels: { zone: "z1", dc: "d1", rack: "r1", host: "24373" }
    deploy_dir: "/data/tidb-deploy/tikv-20160"
    data_dir: "/data/tidb-data/tikv-20160"
    log_dir: "/data/tidb-deploy/tikv-20160/log"
  - host: 172.21.243.223
    config:
      server.labels: { zone: "z1", dc: "d1", rack: "r1", host: "24393" }
    deploy_dir: "/data/tidb-deploy/tikv-20160"
    data_dir: "/data/tidb-data/tikv-20160"
    log_dir: "/data/tidb-deploy/tikv-20160/log"

grafana_servers:
  - host: 172.21.243.69
    deploy_dir: "/data/tidb-deploy/grafana-3000"

alertmanager_servers:
  - host: 172.21.243.69
    deploy_dir: "/data/tidb-deploy/alertmanager-9093"
    data_dir: "/data/tidb-data/alertmanager-9093"
    log_dir: "/data/tidb-deploy/alertmanager-9093/log"

monitoring_servers:
  - host: 172.21.243.69
    deploy_dir: "/data/tidb-deploy/prometheus-8249"
    data_dir: "/data/tidb-data/prometheus-8249"
    log_dir: "/data/tidb-deploy/prometheus-8249/log"
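Before handing the file to tiup, a quick local reachability probe of every host on the SSH port can save a round of failed checks. This sketch uses bash's built-in /dev/tcp (no extra tools needed); the IP list matches the example topology above.

```shell
# Probe TCP port 22 on every host in the topology.
hosts="172.21.243.141 172.21.243.69 172.21.243.47 172.21.243.33 172.21.243.184 172.21.243.64 172.21.243.223"
unreachable=0
for h in $hosts; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$h/22" 2>/dev/null; then
    echo "$h: ssh port open"
  else
    echo "$h: unreachable"
    unreachable=$((unreachable + 1))
  fi
done
echo "$unreachable of 7 host(s) unreachable"
```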
3. Check the deployment environment against the topology file
tiup cluster check ./topology_test.yaml --user root
If errors are reported: the check typically lists many items that need fixing.
4. Apply automatic fixes
tiup cluster check ./topology_test.yaml --apply
5. Run the check again after the fixes
tiup cluster check ./topology_test.yaml --user root
(a warning about missing numactl can be ignored)
6. Deploy the TiDB cluster: version v7.5.2, named tidb-test (the name is customizable)
tiup cluster deploy tidb-test v7.5.2 ./topology_test.yaml
7. Start the tidb-test cluster
# We deliberately start without `--init`, so no random root password is generated
tiup cluster start tidb-test
Many "start success" lines should be printed.
8. Check the cluster status
tiup cluster display tidb-test
The command returns output like the following:
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v7.5.2
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://172.21.243.69:2379/dashboard
Grafana URL: http://172.21.243.69:3000
View cluster monitoring through the Dashboard.
Log in as root, no password:
http://172.21.243.69:2379/dashboard
Grafana: log in as admin, no password:
http://172.21.243.69:3000
Access TiDB with a MySQL client:
mysql -h 172.21.243.141 -P 4000 -u root
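As a quick smoke test that the cluster answers SQL, you can run a one-shot query (the `mysql` client is assumed to be installed on the machine you connect from):

```shell
# tidb_version() is TiDB's built-in version function; a normal reply
# confirms the SQL layer is up and reachable.
mysql -h 172.21.243.141 -P 4000 -u root -e "SELECT tidb_version();"
```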
Detailed official documentation:
https://docs.pingcap.com/zh/tidb/stable/production-deployment-using-tiup
From: https://www.cnblogs.com/hmysql/p/18329931