Environment:
OS: CentOS 7
DB: TiDB v6.5.2
1. Download the installation media
https://cn.pingcap.com/product-community/
Choose the appropriate version and download it.
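If you prefer to fetch the package from the command line, the community server is published as a single tarball; the URL below follows the link pattern shown on the download page for v6.5.2 and should be double-checked there before use (it is an assumption, not taken from the original post):
wget https://download.pingcap.org/tidb-community-server-v6.5.2-linux-amd64.tar.gz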
2. Create the tidb user
[root@pxc04 /]# groupadd tidb
[root@pxc04 /]# useradd -g tidb -G tidb -s /bin/bash tidb
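Optionally, give the tidb user a password and passwordless sudo, which the official deployment guide recommends for target hosts; the sudoers line below follows that documented pattern and is an addition to this walkthrough:
passwd tidb
visudo    # append the line: tidb ALL=(ALL) NOPASSWD: ALL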
3. Extract the package
[root@localhost tidb]# tar -xvf tidb-community-server-v6.5.2-linux-amd64.tar.gz
4. Deploy the TiUP environment
[root@183-kvm tidb]# cd tidb-community-server-v6.5.2-linux-amd64
[root@183-kvm tidb-community-server-v6.5.2-linux-amd64]# sh local_install.sh
Disable telemetry success
Successfully set mirror to /soft/tidb/tidb-community-server-v6.5.2-linux-amd64
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
1. source /root/.bash_profile
2. Have a try:   tiup playground
===============================================
Run the following in the current terminal:
[root@localhost tidb-community-server-v6.5.2-linux-amd64]# source /root/.bash_profile
[root@localhost tidb-community-server-v6.5.2-linux-amd64]# which tiup
/root/.tiup/bin/tiup
local_install.sh automatically switches the mirror to the local directory, so no external network access is needed; tiup mirror show displays the mirror address.
[root@localhost tidb-community-server-v6.5.2-linux-amd64]# tiup mirror show
/soft/tidb/tidb-community-server-v6.5.2-linux-amd64
5. Generate the configuration file
Generate the initial topology template:
[root@localhost tidb]# tiup cluster template > /tmp/topology.yaml
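Recent TiUP releases can also emit a fuller template with the commonly tuned parameters included via the --full flag; check tiup cluster template --help on your installation to confirm it is available before relying on it:
tiup cluster template --full > /tmp/topology-full.yaml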
6. Edit the template file
vi /tmp/topology.yaml
The content is as follows:
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
server_configs: {}
pd_servers:
  - host: 192.168.1.183
tidb_servers:
  - host: 192.168.1.183
tikv_servers:
  - host: 192.168.1.183
monitoring_servers:
  - host: 192.168.1.183
grafana_servers:
  - host: 192.168.1.183
alertmanager_servers:
  - host: 192.168.1.183
tiflash_servers:
  - host: 192.168.1.183
7. Check whether the system meets the TiDB installation requirements
After configuring the topology, run tiup cluster check /tmp/topology.yaml to check whether the system meets the TiDB installation requirements.
--- Note: passwordless SSH login must be configured between the TiUP host and the machines running the other roles; Pass and Warn results are acceptable, but Fail items must be fixed.
[root@183-kvm tidb-community-server-v6.5.2-linux-amd64]# tiup cluster check /tmp/topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster check /tmp/topology.yaml

Error: none of ssh password, identity file, SSH_AUTH_SOCK specified (tui.id_read_failed)
Solution:
Configure passwordless SSH login from the host to itself.
[root@183-kvm ~]# mkdir ~/.ssh
[root@183-kvm ~]# chmod 700 ~/.ssh
[root@183-kvm ~]# ssh-keygen -t rsa
[root@183-kvm ~]# ssh-keygen -t dsa
[root@183-kvm ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@183-kvm ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@183-kvm ~]# ssh 192.168.1.183 date
Tue May 30 16:40:21 CST 2023
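If the ssh test above still prompts for a password, file permissions are the usual culprit; authorized_keys generally needs to be readable only by its owner (this step is an addition, not part of the original walkthrough):
chmod 600 ~/.ssh/authorized_keys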
Run the check again:
[root@183-kvm ~]# tiup cluster check /tmp/topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster check /tmp/topology.yaml
+ Detect CPU Arch Name
  - Detecting node 192.168.1.183 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.1.183 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
  - Getting system info of 192.168.1.183:22 ... Done
+ Check time zone
  - Checking node 192.168.1.183 ... Done
+ Check system requirements
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
+ Cleanup check files
  - Cleanup check files on 192.168.1.183:22 ... Done
Node           Check         Result  Message
----           -----         ------  -------
192.168.1.183  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.1.183  cpu-governor  Fail    CPU frequency governor is conservative, should use performance
192.168.1.183  memory        Pass    memory size is 49152MB
192.168.1.183  network       Pass    network speed of eth0 is 1000MB
192.168.1.183  network       Pass    network speed of eth1 is 1000MB
192.168.1.183  disk          Warn    mount point / does not have 'noatime' option set
192.168.1.183  disk          Fail    multiple components tikv:/tidb-data/tikv-20160,tiflash:/tidb-data/tiflash-9000 are using the same partition 192.168.1.183:/ as data dir
192.168.1.183  sysctl        Fail    net.core.somaxconn = 128, should be greater than 32768
192.168.1.183  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
192.168.1.183  sysctl        Fail    vm.swappiness = 60, should be 0
192.168.1.183  selinux       Pass    SELinux is disabled
192.168.1.183  command       Fail    numactl not usable, bash: numactl: command not found
192.168.1.183  cpu-cores     Pass    number of CPU cores / threads: 16
192.168.1.183  network       Fail    network speed of vnet2 is 10MB too low, needs 1GB or more
192.168.1.183  network       Fail    network speed of vnet0 is 10MB too low, needs 1GB or more
192.168.1.183  network       Fail    network speed of vnet1 is 10MB too low, needs 1GB or more
192.168.1.183  limits        Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
192.168.1.183  limits        Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
192.168.1.183  limits        Fail    soft limit of 'stack' for user 'tidb' is not set or too low
192.168.1.183  thp           Fail    THP is enabled, please disable it for best performance
Try the automatic fix (this command did not seem to work here):
#tiup cluster check /tmp/topology.yaml --apply
Install numactl:
[root@183-kvm tidb]# yum install numactl
The other issues can be fixed manually where possible; see the sketch below.
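As a rough sketch of those manual fixes (kernel parameters, nofile/stack limits for the tidb user, THP and the CPU governor), the values below simply mirror what the check messages above ask for; this is an addition to the original post, so adapt it to your environment and re-run tiup cluster check afterwards:
# kernel parameters flagged as Fail
echo "net.core.somaxconn = 32768" >> /etc/sysctl.conf
echo "net.ipv4.tcp_syncookies = 0" >> /etc/sysctl.conf
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p
# open-file and stack limits for the tidb user
cat >> /etc/security/limits.conf <<EOF
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
EOF
# disable Transparent Huge Pages for the running system
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
# switch the CPU frequency governor to performance (needs the cpupower tool)
cpupower frequency-set --governor performance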
8. Deploy the cluster
[root@183-kvm tidb]# tiup cluster deploy mytidb_cluster v6.5.2 /tmp/topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster deploy mytidb_cluster v6.5.2 /tmp/topology.yaml
+ Detect CPU Arch Name
  - Detecting node 192.168.1.183 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.1.183 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    mytidb_cluster
Cluster version: v6.5.2
Role          Host           Ports                            OS/Arch       Directories
----          ----           -----                            -------       -----------
pd            192.168.1.183  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          192.168.1.183  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb          192.168.1.183  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash       192.168.1.183  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus    192.168.1.183  9090/12020                       linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       192.168.1.183  3000                             linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  192.168.1.183  9093/9094                        linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
The deployment then runs automatically and prints a success message at the end:
Cluster `mytidb_cluster` deployed successfully, you can start it with command: `tiup cluster start mytidb_cluster --init`
9. Start the cluster
[root@183-kvm tidb]# tiup cluster start mytidb_cluster --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster start mytidb_cluster --init
Starting cluster mytidb_cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 192.168.1.183:2379
        Start instance 192.168.1.183:2379 success
Starting component tikv
        Starting instance 192.168.1.183:20160
        Start instance 192.168.1.183:20160 success
Starting component tidb
        Starting instance 192.168.1.183:4000
        Start instance 192.168.1.183:4000 success
Starting component tiflash
        Starting instance 192.168.1.183:9000

Error: failed to start tiflash: failed to start: 192.168.1.183 tiflash-9000.service, please check the instance's log(/tidb-deploy/tiflash-9000/log) for more detail.: timed out waiting for port 9000 to be started after 2m0s

Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2023-05-30-17-18-21.log.
Check the log:
Fail to check CPU flags: `avx2` not supported. Require `avx2 popcnt movbe`.
TiFlash requires a CPU that supports AVX2, so below we remove TiFlash from the topology and skip installing it.
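Before dropping TiFlash you can confirm whether the CPU actually exposes the required instruction set; if the command below prints nothing, avx2 is indeed missing:
grep -o 'avx2' /proc/cpuinfo | sort -u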
Stop the cluster:
tiup cluster stop mytidb_cluster
Edit the configuration file:
vi /tmp/topology.yaml
Remove the following lines:
tiflash_servers:
- host: 192.168.1.183
10. Redeploy
[root@183-kvm tidb]# tiup cluster deploy mytidb_cluster v6.5.2 /tmp/topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster deploy mytidb_cluster v6.5.2 /tmp/topology.yaml

Error: Cluster name 'mytidb_cluster' is duplicated (deploy.name_dup)

Please specify another cluster name
Destroy the existing cluster first:
[root@183-kvm tidb]# tiup cluster destroy mytidb_cluster
Deploy again:
[root@183-kvm tidb]# tiup cluster deploy mytidb_cluster v6.5.2 /tmp/topology.yaml
[root@183-kvm tidb]# tiup cluster start mytidb_cluster --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster start mytidb_cluster --init
Starting cluster mytidb_cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 192.168.1.183:2379
        Start instance 192.168.1.183:2379 success
Starting component tikv
        Starting instance 192.168.1.183:20160
        Start instance 192.168.1.183:20160 success
Starting component tidb
        Starting instance 192.168.1.183:4000
        Start instance 192.168.1.183:4000 success
Starting component prometheus
        Starting instance 192.168.1.183:9090
        Start instance 192.168.1.183:9090 success
Starting component grafana
        Starting instance 192.168.1.183:3000
        Start instance 192.168.1.183:3000 success
Starting component alertmanager
        Starting instance 192.168.1.183:9093
        Start instance 192.168.1.183:9093 success
Starting component node_exporter
        Starting instance 192.168.1.183
        Start 192.168.1.183 success
Starting component blackbox_exporter
        Starting instance 192.168.1.183
        Start 192.168.1.183 success
+ [ Serial ] - UpdateTopology: cluster=mytidb_cluster
Started cluster `mytidb_cluster` successfully
The root password of TiDB database has been changed.
The new password is: '+1RGb08e7_^9gn4JN@'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
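With the cluster started, tiup cluster display is the usual way to confirm that every component reports Up and to see the Grafana and Dashboard addresses:
tiup cluster display mytidb_cluster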
11. Log in and connect
The login password is the temporary password generated in the previous step.
[root@183-kvm tidb]# /opt/mysql57/bin/mysql -h 192.168.1.183 -P4000 -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 407
Server version: 5.7.25-TiDB-v6.5.2 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
12. Change the root account password
[root@183-kvm tidb]# /opt/mysql57/bin/mysql -h 192.168.1.183 -P4000 -uroot -p
mysql> select user,host,authentication_string from mysql.user;
+------+------+-------------------------------------------+
| user | host | authentication_string                     |
+------+------+-------------------------------------------+
| root | %    | *AB91939E07E2DC5EC3C598D8DED838681D3B8C30 |
+------+------+-------------------------------------------+
1 row in set (0.01 sec)
Change the password to mysql:
mysql> set password for 'root'@'%' = 'mysql';
Query OK, 0 rows affected (0.12 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.04 sec)
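To confirm the change, log back in with the new password; the one-off query here is just an illustrative check:
/opt/mysql57/bin/mysql -h 192.168.1.183 -P4000 -uroot -pmysql -e 'select version();'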