
TiDB Single-Machine Deployment


Environment:
OS: CentOS 7
DB: TiDB v6.5.2

 

1. Download the installation media
https://cn.pingcap.com/product-community/
Choose the appropriate version and download it.
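If the host has internet access, the offline package can also be fetched directly from the command line. The URL below follows PingCAP's published naming pattern for community server packages; verify it against the download page before use:

wget https://download.pingcap.org/tidb-community-server-v6.5.2-linux-amd64.tar.gz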

 

2. Create the tidb user
[root@pxc04 /]# groupadd tidb
[root@pxc04 /]# useradd -g tidb -G tidb -s /bin/bash tidb

 

3. Extract the package
[root@localhost tidb]# tar -xvf tidb-community-server-v6.5.2-linux-amd64.tar.gz

 

4. Set up the TiUP environment

[root@183-kvm tidb]# cd tidb-community-server-v6.5.2-linux-amd64
[root@183-kvm tidb-community-server-v6.5.2-linux-amd64]# sh local_install.sh
Disable telemetry success
Successfully set mirror to /soft/tidb/tidb-community-server-v6.5.2-linux-amd64
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
1. source /root/.bash_profile
2. Have a try:   tiup playground
===============================================

Run in the current terminal:

[root@localhost tidb-community-server-v6.5.2-linux-amd64]# source /root/.bash_profile
[root@localhost tidb-community-server-v6.5.2-linux-amd64]# which tiup
/root/.tiup/bin/tiup

 

local_install.sh automatically points the mirror at the local directory, so TiUP does not need external network access; tiup mirror show displays the current mirror address.

[root@localhost tidb-community-server-v6.5.2-linux-amd64]# tiup mirror show
/soft/tidb/tidb-community-server-v6.5.2-linux-amd64
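If you later want TiUP to pull components from the official online mirror again, it can be switched back with the standard mirror command:

tiup mirror set https://tiup-mirrors.pingcap.com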

 

5. Generate the configuration file
Generate the initial topology template:
[root@localhost tidb]# tiup cluster template > /tmp/topology.yaml
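The template command also accepts a --full flag that emits a fully commented topology covering optional components and tunable parameters, which is useful as a reference while editing:

[root@localhost tidb]# tiup cluster template --full > /tmp/topology-full.yaml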

 

6. Edit the template file

vi /tmp/topology.yaml
The content is as follows:

 

global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
server_configs: {}
pd_servers:
  - host: 192.168.1.183
tidb_servers:
  - host: 192.168.1.183
tikv_servers:
  - host: 192.168.1.183
monitoring_servers:
  - host: 192.168.1.183
grafana_servers:
  - host: 192.168.1.183
alertmanager_servers:
  - host: 192.168.1.183
tiflash_servers:
  - host: 192.168.1.183
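Because every component runs on one host here, TiKV and TiFlash end up on the same partition, which the check in the next step flags as a Fail. If separate disks are available, each instance can override data_dir in the topology; the /data1 and /data2 paths below are placeholders:

tikv_servers:
  - host: 192.168.1.183
    data_dir: "/data1/tidb-data"
tiflash_servers:
  - host: 192.168.1.183
    data_dir: "/data2/tidb-data"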

 

7. Check whether the system meets the TiDB installation requirements
After editing the topology, run tiup cluster check /tmp/topology.yaml to verify that the system meets the installation requirements.
Note: the TiUP host needs passwordless SSH access to the machines of all other roles. Pass and Warn results are acceptable, but Fail items must be fixed.

[root@183-kvm tidb-community-server-v6.5.2-linux-amd64]# tiup cluster check /tmp/topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster check /tmp/topology.yaml

Error: none of ssh password, identity file, SSH_AUTH_SOCK specified (tui.id_read_failed)

Solution:
Configure passwordless SSH login from the host to itself.

[root@183-kvm ~]# mkdir ~/.ssh
[root@183-kvm ~]# chmod 700 ~/.ssh
[root@183-kvm ~]# ssh-keygen -t rsa
[root@183-kvm ~]# ssh-keygen -t dsa
[root@183-kvm ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@183-kvm ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@183-kvm ~]# ssh 192.168.1.183 date
Tue May 30 16:40:21 CST 2023
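As an alternative to appending the public key by hand, ssh-copy-id performs the same setup in one step (the target must still accept password authentication at this point):

[root@183-kvm ~]# ssh-copy-id root@192.168.1.183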

 

Run the check again:

[root@183-kvm ~]# tiup cluster check /tmp/topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster check /tmp/topology.yaml

+ Detect CPU Arch Name
  - Detecting node 192.168.1.183 Arch info ... Done

+ Detect CPU OS Name
  - Detecting node 192.168.1.183 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
  - Getting system info of 192.168.1.183:22 ... Done
+ Check time zone
  - Checking node 192.168.1.183 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
  - Checking node 192.168.1.183 ... Done
+ Cleanup check files
  - Cleanup check files on 192.168.1.183:22 ... Done
Node           Check         Result  Message
----           -----         ------  -------
192.168.1.183  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.1.183  cpu-governor  Fail    CPU frequency governor is conservative, should use performance
192.168.1.183  memory        Pass    memory size is 49152MB
192.168.1.183  network       Pass    network speed of eth0 is 1000MB
192.168.1.183  network       Pass    network speed of eth1 is 1000MB
192.168.1.183  disk          Warn    mount point / does not have 'noatime' option set
192.168.1.183  disk          Fail    multiple components tikv:/tidb-data/tikv-20160,tiflash:/tidb-data/tiflash-9000 are using the same partition 192.168.1.183:/ as data dir
192.168.1.183  sysctl        Fail    net.core.somaxconn = 128, should be greater than 32768
192.168.1.183  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
192.168.1.183  sysctl        Fail    vm.swappiness = 60, should be 0
192.168.1.183  selinux       Pass    SELinux is disabled
192.168.1.183  command       Fail    numactl not usable, bash: numactl: command not found
192.168.1.183  cpu-cores     Pass    number of CPU cores / threads: 16
192.168.1.183  network       Fail    network speed of vnet2 is 10MB too low, needs 1GB or more
192.168.1.183  network       Fail    network speed of vnet0 is 10MB too low, needs 1GB or more
192.168.1.183  network       Fail    network speed of vnet1 is 10MB too low, needs 1GB or more
192.168.1.183  limits        Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
192.168.1.183  limits        Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
192.168.1.183  limits        Fail    soft limit of 'stack' for user 'tidb' is not set or too low
192.168.1.183  thp           Fail    THP is enabled, please disable it for best performance

 

Try the automatic fix (this did not seem to work; note that the flag must be typed as two plain ASCII hyphens, --apply, and a mistyped dash alone is enough to make the command fail):

# tiup cluster check /tmp/topology.yaml --apply

Install numactl:
[root@183-kvm tidb]# yum install numactl

The other failing items can be fixed manually where possible.
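A minimal manual-fix sketch covering the sysctl, limits, and THP Fail items from the check output above; the values follow the TiDB environment checklist, so adjust as needed and run as root:

# Kernel parameters flagged by the check
cat >> /etc/sysctl.conf << EOF
net.core.somaxconn = 32768
net.ipv4.tcp_syncookies = 0
vm.swappiness = 0
EOF
sysctl -p

# File-descriptor and stack limits for the tidb user
cat >> /etc/security/limits.conf << EOF
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack  32768
tidb hard stack  32768
EOF

# Disable transparent huge pages for the current boot
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag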

 

8. Deploy

[root@183-kvm tidb]# tiup cluster deploy mytidb_cluster v6.5.2 /tmp/topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster deploy mytidb_cluster v6.5.2 /tmp/topology.yaml

+ Detect CPU Arch Name
  - Detecting node 192.168.1.183 Arch info ... Done

+ Detect CPU OS Name
  - Detecting node 192.168.1.183 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    mytidb_cluster
Cluster version: v6.5.2
Role          Host           Ports                            OS/Arch       Directories
----          ----           -----                            -------       -----------
pd            192.168.1.183  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          192.168.1.183  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb          192.168.1.183  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash       192.168.1.183  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus    192.168.1.183  9090/12020                       linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       192.168.1.183  3000                             linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  192.168.1.183  9093/9094                        linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.

The deployment runs automatically and finishes with a success message:
Cluster `mytidb_cluster` deployed successfully, you can start it with command: `tiup cluster start mytidb_cluster --init`

 

9. Start the cluster

[root@183-kvm tidb]# tiup cluster start mytidb_cluster --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster start mytidb_cluster --init
Starting cluster mytidb_cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 192.168.1.183:2379
        Start instance 192.168.1.183:2379 success
Starting component tikv
        Starting instance 192.168.1.183:20160
        Start instance 192.168.1.183:20160 success
Starting component tidb
        Starting instance 192.168.1.183:4000
        Start instance 192.168.1.183:4000 success
Starting component tiflash
        Starting instance 192.168.1.183:9000

Error: failed to start tiflash: failed to start: 192.168.1.183 tiflash-9000.service, please check the instance's log(/tidb-deploy/tiflash-9000/log) for more detail.: timed out waiting for port 9000 to be started after 2m0s

Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2023-05-30-17-18-21.log.

 

Check the log:

Fail to check CPU flags: `avx2` not supported. Require `avx2 popcnt movbe`.

It turns out TiFlash requires a CPU with AVX2 support, so below we remove TiFlash from the topology and skip installing it.
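AVX2 support can be checked up front from the CPU flags in /proc/cpuinfo; empty output means the instruction set is missing:

[root@183-kvm ~]# grep -o avx2 /proc/cpuinfo | sort -u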

 

Stop the cluster:
tiup cluster stop mytidb_cluster

Edit the configuration file:
vi /tmp/topology.yaml
Remove the lines:
tiflash_servers:
  - host: 192.168.1.183

 

10. Redeploy

[root@183-kvm tidb]# tiup cluster deploy mytidb_cluster v6.5.2 /tmp/topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster deploy mytidb_cluster v6.5.2 /tmp/topology.yaml

Error: Cluster name 'mytidb_cluster' is duplicated (deploy.name_dup)

Please specify another cluster name

 

Destroy the existing cluster first:

[root@183-kvm tidb]# tiup cluster destroy mytidb_cluster
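tiup cluster list shows every cluster TiUP currently manages, which is a quick way to confirm the old cluster is gone before redeploying:

[root@183-kvm tidb]# tiup cluster list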

 

Deploy and start again:

[root@183-kvm tidb]# tiup cluster deploy mytidb_cluster v6.5.2 /tmp/topology.yaml
[root@183-kvm tidb]# tiup cluster start mytidb_cluster --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster start mytidb_cluster --init
Starting cluster mytidb_cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb_cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.183
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 192.168.1.183:2379
        Start instance 192.168.1.183:2379 success
Starting component tikv
        Starting instance 192.168.1.183:20160
        Start instance 192.168.1.183:20160 success
Starting component tidb
        Starting instance 192.168.1.183:4000
        Start instance 192.168.1.183:4000 success
Starting component prometheus
        Starting instance 192.168.1.183:9090
        Start instance 192.168.1.183:9090 success
Starting component grafana
        Starting instance 192.168.1.183:3000
        Start instance 192.168.1.183:3000 success
Starting component alertmanager
        Starting instance 192.168.1.183:9093
        Start instance 192.168.1.183:9093 success
Starting component node_exporter
        Starting instance 192.168.1.183
        Start 192.168.1.183 success
Starting component blackbox_exporter
        Starting instance 192.168.1.183
        Start 192.168.1.183 success
+ [ Serial ] - UpdateTopology: cluster=mytidb_cluster
Started cluster `mytidb_cluster` successfully
The root password of TiDB database has been changed.
The new password is: '+1RGb08e7_^9gn4JN@'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
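After startup, tiup cluster display reports the status, ports, and directories of every instance, which makes for a quick health check:

[root@183-kvm tidb]# tiup cluster display mytidb_cluster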

 

11. Log in

The login password is the temporary password generated in the step above.

[root@183-kvm tidb]# /opt/mysql57/bin/mysql -h 192.168.1.183 -P4000 -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 407
Server version: 5.7.25-TiDB-v6.5.2 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

 

12. Change the root account password

[root@183-kvm tidb]# /opt/mysql57/bin/mysql -h 192.168.1.183 -P4000 -uroot -p
mysql> select user,host,authentication_string from mysql.user;
+------+------+-------------------------------------------+
| user | host | authentication_string                     |
+------+------+-------------------------------------------+
| root | %    | *AB91939E07E2DC5EC3C598D8DED838681D3B8C30 |
+------+------+-------------------------------------------+
1 row in set (0.01 sec)
Change the password to mysql:
mysql> set password for 'root'@'%' = 'mysql';
Query OK, 0 rows affected (0.12 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.04 sec)
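The new password can be verified by reconnecting non-interactively with the same client (note there is no space after -p):

[root@183-kvm tidb]# /opt/mysql57/bin/mysql -h 192.168.1.183 -P4000 -uroot -pmysql -e "select version();"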

 
