1. TiDB Official Architecture
2. TiDB Deployment Hardware Requirements
Minimum instances for a basic OLTP cluster: 2 tidb-server, 3 PD, 3 TiKV
Lightweight HTAP cluster: add 1-2 TiFlash nodes
Real-time data warehouse capability: add 1-2 TiCDC nodes
3. Cluster Installation
3.1. Environment Requirements
Disable swap, disable the firewall, install NTP, apply operating system tuning, configure passwordless SSH (mutual trust), and install numactl.
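For reference, a minimal sketch of these preparation steps on CentOS/RHEL (illustrative commands assuming systemd and yum; adjust to your OS version and security policy):
swapoff -a                                          # disable swap immediately
sed -i '/ swap / s/^/#/' /etc/fstab                 # comment out swap entries so it stays off after reboot
systemctl disable --now firewalld                   # stop the firewall, or open the required TiDB ports instead
yum install -y ntp && systemctl enable --now ntpd   # install and start NTP for time synchronization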
PS: The numactl tool can be used to view the NUMA node layout and status of the current server, and to bind a process to specified CPU cores so that it runs only on those cores; a short usage example follows the installation command below.
Installation:
yum install -y numactl
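Typical numactl usage (illustrative; <command> is a placeholder for the process you want to bind):
numactl --hardware                              # show the NUMA nodes, CPU cores and memory of this server
numactl --cpunodebind=0 --membind=0 <command>   # run <command> bound to the CPUs and memory of NUMA node 0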
3.2. Deployment Topology and Configuration File
IP | hostname | role |
10.2.83.116 | tidb-01 | tidb-server,pd,prometheus,grafana |
10.2.83.117 | tidb-02 | tidb-server,pd |
10.2.83.118 | tidb-03 | tidb-server,pd |
10.2.83.119 | tidb-tikv01 | tikv |
10.2.83.120 | tidb-tikv02 | tikv |
10.2.83.121 | tidb-tikv03 | tikv |
10.2.83.123 | tidb-tiflash01 | tiflash |
10.2.83.124 | tidb-tiflash02 | tiflash |
3.3. Installation Steps (online installation with internet access; for offline installation, refer to the official documentation)
Step 1: Deploy the TiUP component on the control machine
Run the following command to install the TiUP tool:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
Step 2: Set up the TiUP environment variables as follows
Reload the global environment variables:
source .bash_profile
Check whether tiup was installed successfully:
which tiup
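If the installation succeeded, which tiup prints the path of the tiup binary; by default the install script places it under the home directory of the installing user, for example:
/root/.tiup/bin/tiup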
Step 3: Install the TiUP cluster component
tiup cluster
// If it is already installed, upgrade it to the latest version
tiup update --self && tiup update cluster
The expected output contains the message "Update successfully!".
Verify the current TiUP cluster version. Run the following command to check the TiUP cluster component version:
tiup --binary cluster
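This prints the path of the locally installed cluster component binary, roughly of the form below (the component version in the path will differ in your environment):
/root/.tiup/components/cluster/v<version>/tiup-cluster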
Step 4: Run the following commands to generate the cluster initialization configuration file
mkdir /root/tidb-deploy
tiup cluster template > tidb_deploy.yaml
Edit the configuration file:
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/app/tidb/deploy"
  data_dir: "/app/tidb/data"
  arch: "amd64"
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  deploy_dir: "/app/tidb/monitored/monitored-9100"
  data_dir: "/app/tidb/monitored/monitored-9100/data"
  log_dir: "/app/tidb/monitored/monitored-9100/log"
server_configs:
  tidb:
    split-table: true
    mem-quota-query: 2147483648
    oom-use-tmp-storage: true
    tmp-storage-quota: 2147483648
    oom-action: "log"
    max-server-connections: 500
    max-index-length: 6144
    table-column-count-limit: 4096
    index-limit: 64
    log.level: "info"
    log.format: "text"
    log.enable-slow-log: true
    log.slow-threshold: 3000
    log.record-plan-in-slow-log: 1
    log.expensive-threshold: 1000000
    log.query-log-max-len: 5242880
    log.file.max-days: 30
    binlog.enable: false
    binlog.ignore-error: false
    performance.max-procs: 32
    performance.server-memory-quota: 16106127360
    performance.memory-usage-alarm-ratio: 0.8
    performance.txn-entry-size-limit: 1048576
    performance.txn-total-size-limit: 52428800
    performance.cross-join: false
    performance.pseudo-estimate-ratio: 0.5
    status.record-db-qps: true
    stmt-summary.max-stmt-count: 20000
    stmt-summary.max-sql-length: 409600
    pessimistic-txn.max-retry-count: 64
    pessimistic-txn.deadlock-history-capacity: 1000
    experimental.allow-expression-index: true
  tikv:
    raftdb.defaultcf.force-consistency-checks: false
    raftstore.apply-max-batch-size: 256
    raftstore.apply-pool-size: 2
    raftstore.hibernate-regions: true
    raftstore.messages-per-tick: 1024
    raftstore.perf-level: 5
    raftstore.raft-max-inflight-msgs: 256
    raftstore.store-max-batch-size: 256
    raftstore.store-pool-size: 2
    raftstore.sync-log: false
    readpool.coprocessor.use-unified-pool: true
    readpool.storage.use-unified-pool: true
    readpool.unified.max-thread-count: 3
    rocksdb.defaultcf.force-consistency-checks: false
    rocksdb.lockcf.force-consistency-checks: false
    rocksdb.raftcf.force-consistency-checks: false
    rocksdb.writecf.force-consistency-checks: false
    server.grpc-concurrency: 2
    storage.block-cache.capacity: 2G
    storage.scheduler-worker-pool-size: 4
  pd:
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 1024
    schedule.replica-schedule-limit: 16
pd_servers:
  - host: 10.2.83.116
    ssh_port: 22
    name: "pd-116"
    client_port: 2379
    peer_port: 2380
    deploy_dir: "/app/tidb/deploy/pd-2379"
    data_dir: "/app/tidb/data/pd-2379"
    log_dir: "/app/tidb/deploy/pd-2379/log"
  - host: 10.2.83.117
    ssh_port: 22
    name: "pd-117"
    client_port: 2379
    peer_port: 2380
    deploy_dir: "/app/tidb/deploy/pd-2379"
    data_dir: "/app/tidb/data/pd-2379"
    log_dir: "/app/tidb/deploy/pd-2379/log"
  - host: 10.2.83.118
    ssh_port: 22
    name: "pd-118"
    client_port: 2379
    peer_port: 2380
    deploy_dir: "/app/tidb/deploy/pd-2379"
    data_dir: "/app/tidb/data/pd-2379"
    log_dir: "/app/tidb/deploy/pd-2379/log"
tidb_servers:
  - host: 10.2.83.116
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: "/app/tidb/deploy/tidb-4000"
    log_dir: "/app/tidb/deploy/tidb-4000/log"
    config:
      log.level: info
      log.slow-query-file: tidb_slow_query.log
  - host: 10.2.83.117
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: "/app/tidb/deploy/tidb-4000"
    log_dir: "/app/tidb/deploy/tidb-4000/log"
    config:
      log.level: info
      log.slow-query-file: tidb_slow_query.log
  - host: 10.2.83.118
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: "/app/tidb/deploy/tidb-4000"
    log_dir: "/app/tidb/deploy/tidb-4000/log"
    config:
      log.level: info
      log.slow-query-file: tidb_slow_query.log
tikv_servers:
  - host: 10.2.83.119
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: "/app/tidb/deploy/tikv-20160"
    data_dir: "/app/tidb/data/tikv-20160"
    log_dir: "/app/tidb/deploy/tikv-20160/log"
  - host: 10.2.83.120
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: "/app/tidb/deploy/tikv-20160"
    data_dir: "/app/tidb/data/tikv-20160"
    log_dir: "/app/tidb/deploy/tikv-20160/log"
  - host: 10.2.83.121
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: "/app/tidb/deploy/tikv-20160"
    data_dir: "/app/tidb/data/tikv-20160"
    log_dir: "/app/tidb/deploy/tikv-20160/log"
tiflash_servers:
  - host: 10.2.83.123
    ssh_port: 22
    tcp_port: 9000
    http_port: 8123
    flash_service_port: 3930
    flash_proxy_port: 20170
    flash_proxy_status_port: 20292
    metrics_port: 8234
    deploy_dir: "/app/tidb/deploy/tiflash-9000"
    data_dir: "/app/tidb/data/tiflash-9000"
    log_dir: "/app/tidb/deploy/tiflash-9000/log"
  - host: 10.2.83.124
    ssh_port: 22
    tcp_port: 9000
    http_port: 8123
    flash_service_port: 3930
    flash_proxy_port: 20170
    flash_proxy_status_port: 20292
    metrics_port: 8234
    deploy_dir: "/app/tidb/deploy/tiflash-9000"
    data_dir: "/app/tidb/data/tiflash-9000"
    log_dir: "/app/tidb/deploy/tiflash-9000/log"
monitoring_servers:
  - host: 10.2.83.116
    ssh_port: 22
    port: 9090
    deploy_dir: "/app/tidb/deploy/prometheus-8249"
    data_dir: "/app/tidb/data/prometheus-8249"
    log_dir: "/app/tidb/deploy/prometheus-8249/log"
grafana_servers:
  - host: 10.2.83.116
    port: 3000
    deploy_dir: /app/tidb/deploy/grafana-3000
alertmanager_servers:
  - host: 10.2.83.116
    ssh_port: 22
    web_port: 9093
    cluster_port: 9094
    deploy_dir: "/app/tidb/deploy/alertmanager-9093"
    data_dir: "/app/tidb/data/alertmanager-9093"
    log_dir: "/app/tidb/deploy/alertmanager-9093/log"
Step 5: Deploy the cluster with TiUP
1. Check the cluster configuration and report any potential configuration problems:
tiup cluster check ./tidb_deploy.yaml --user tidb
PS: Below is a script for setting up passwordless SSH (mutual trust); it only needs to be run on the control machine:
#!/bin/bash
## Create the tidb user
useradd tidb && echo 'xxxxxx' | passwd --stdin tidb
## Grant the tidb user passwordless sudo
sed -i "100a tidb ALL=(ALL) NOPASSWD: ALL" /etc/sudoers
## Install sshpass
yum install -y sshpass
## Generate an SSH key pair non-interactively
mkdir -p ~/.ssh && ssh-keygen -t rsa -P "" -q -f ~/.ssh/id_rsa
## Distribute the public key to every node to set up mutual trust
for ip in 116 117 118 119 120 121 124 123 126
do
  sshpass -p "1qaz@WSX" ssh-copy-id -i ~/.ssh/id_rsa.pub -o StrictHostKeyChecking=no tidb@10.2.83.$ip
done
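After the script finishes, you can verify the mutual trust from the control machine with a quick loop like the following (illustrative; adjust the user and host list to your environment):
for ip in 116 117 118 119 120 121 123 124
do
  ssh -o BatchMode=yes tidb@10.2.83.$ip hostname   # should print each hostname without prompting for a password
done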
2. Fix the reported risks:
tiup cluster check ./tidb_deploy.yaml --user tidb --apply
3. Deploy the cluster:
tiup cluster deploy tidb_cooperate v6.5.0 ./tidb_deploy.yaml --user tidb
tidb_cooperate is the cluster name and can be changed as needed.
v6.5.0 is the TiDB version; here the latest official release at the time of writing is installed.
Cluster `tidb_cooperate` deployed successfully, you can start it with command: `tiup cluster start tidb_cooperate --init`
When the message above appears, the deployment has completed successfully.
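As the output suggests, a typical next step is to start the cluster and confirm its status; tiup cluster start with --init initializes the cluster and prints a randomly generated root password once, and tiup cluster display shows the status of every node:
tiup cluster start tidb_cooperate --init
tiup cluster display tidb_cooperate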