
ClickHouse Cluster Deployment


Cluster node information

192.168.175.212 ch01
192.168.175.213 ch02
192.168.175.214 ch03
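These name-to-IP mappings are assumed to live in /etc/hosts on every node, since the configuration below refers to the machines as ch01..ch03. A minimal way to add them (run on each node as root):

cat >> /etc/hosts <<'EOF'
192.168.175.212 ch01
192.168.175.213 ch02
192.168.175.214 ch03
EOF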

Set up a ZooKeeper cluster

Replicated tables need ZooKeeper to coordinate data replication.

  • Download the package
ZooKeeper downloads: https://zookeeper.apache.org/releases.html#download
  • Extract
tar zxf apache-zookeeper-3.6.0-bin.tar.gz -C /apps/
  • Rename
mv apache-zookeeper-3.6.0-bin zookeeper
  • Edit the configuration file

Go to ZooKeeper's conf directory and copy zoo_sample.cfg to zoo.cfg (cp zoo_sample.cfg zoo.cfg), then edit zoo.cfg:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/apps/zookeeper/data/zookeeper
clientPort=2181
autopurge.purgeInterval=0
globalOutstandingLimit=200
server.1=ch01:2888:3888
server.2=ch02:2888:3888
server.3=ch03:2888:3888
  • Create the required directories
mkdir -p /apps/zookeeper/data/zookeeper 

Once configured, scp the zookeeper directory to the other two nodes.

  • Set the myid
vim /apps/zookeeper/data/zookeeper/myid  # 1 on ch01, 2 on ch02, 3 on ch03
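For example, the myid files can also be written directly (same path as above; run the matching line on its node):
echo 1 > /apps/zookeeper/data/zookeeper/myid   # on ch01
echo 2 > /apps/zookeeper/data/zookeeper/myid   # on ch02
echo 3 > /apps/zookeeper/data/zookeeper/myid   # on ch03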
  • Add environment variables
Append the following to /etc/profile on each node, then run source /etc/profile:
export ZOOKEEPER_HOME=/apps/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
  • Go to ZooKeeper's bin directory and start the service; every node must be started
zkServer.sh start
  • After starting, check the status of each node
root@ch01:~# zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /apps/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
root@ch02:~# zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /apps/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
root@ch03:~# zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /apps/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
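As an additional sanity check, the quorum can be probed over the client port with the stat four-letter word (a sketch that assumes nc is installed and stat is whitelisted; ZooKeeper 3.5+ restricts these commands via 4lw.commands.whitelist in zoo.cfg):
echo stat | nc ch01 2181 | grep Mode
echo stat | nc ch02 2181 | grep Mode
echo stat | nc ch03 2181 | grep Mode
One node should report Mode: leader and the other two Mode: follower, matching the zkServer.sh output above.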

Install single-node ClickHouse

Official quick start: https://clickhouse.tech/#quick-start

sudo apt-get install dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4

echo "deb http://repo.clickhouse.tech/deb/stable/ main/" | sudo tee \
    /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update

sudo apt-get install -y clickhouse-server clickhouse-client

sudo service clickhouse-server start
clickhouse-client
  • Create the data directories
mkdir -p /data/clickhouse /data/clickhouse/tmp/ /data/clickhouse/user_files/
  • Configure /etc/clickhouse-server/config.xml
<log>/var/log/clickhouse-server/clickhouse-server.log</log>

<path>/data/clickhouse/</path>

<tmp_path>/data/clickhouse/tmp/</tmp_path>

<user_files_path>/data/clickhouse/user_files/</user_files_path>

<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
  • Grant ownership of all the directories above to the clickhouse user
chown -R clickhouse:clickhouse /data
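A quick ownership check before starting the server (paths as created above):
ls -ld /data/clickhouse /data/clickhouse/tmp /data/clickhouse/user_files
Each entry should show clickhouse:clickhouse as owner and group.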
  • Start clickhouse
/etc/init.d/clickhouse-server start
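If the server does not come up after the path changes, the error log configured above is the first place to look:
tail -n 50 /var/log/clickhouse-server/clickhouse-server.err.log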
  • Verify the single-node clickhouse
root@ch01:~# clickhouse-client --password
ClickHouse client version 20.3.4.10 (official build).
Password for user (default): 
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.3.4 revision 54433.

ch01 :) show databases;

SHOW DATABASES

┌─name────┐
│ default │
│ system  │
└─────────┘

2 rows in set. Elapsed: 0.004 sec. 

ch01 :) 

Configure the cluster

Edit the config file: vim /etc/clickhouse-server/config.xml

Uncomment <listen_host>::</listen_host> so the server accepts connections from the other nodes.
  • Create the substitutions file: vim /etc/metrika.xml (ClickHouse of this vintage reads substitutions from /etc/metrika.xml by default; the path can be changed with <include_from> in config.xml)
<yandex>
    <clickhouse_remote_servers>
        <perftest_3shards_1replicas>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.175.212</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.175.213</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.175.214</host>
                    <port>9000</port>
                </replica>
            </shard>
        </perftest_3shards_1replicas>
    </clickhouse_remote_servers>

    <!-- ZooKeeper configuration -->
    <zookeeper-servers>
        <node index="1">
            <host>192.168.175.212</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>192.168.175.213</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>192.168.175.214</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>

    <macros>
        <replica>192.168.175.212</replica>
    </macros>

    <networks>
        <ip>::/0</ip>
    </networks>

    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>

The only part of this configuration that differs between the three nodes is:

<macros>
<replica>192.168.175.212</replica>
</macros>

Change it to the node's own IP, as in the sketch below.
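A minimal sketch of pushing the file to the other nodes and patching only the macros entry (assumes root ssh between the nodes; the sed pattern matches only the one-line <replica> element inside <macros>, not the <replica> blocks under remote_servers):
scp /etc/metrika.xml ch02:/etc/metrika.xml
scp /etc/metrika.xml ch03:/etc/metrika.xml
ssh ch02 "sed -i 's|<replica>192.168.175.212</replica>|<replica>192.168.175.213</replica>|' /etc/metrika.xml"
ssh ch03 "sed -i 's|<replica>192.168.175.212</replica>|<replica>192.168.175.214</replica>|' /etc/metrika.xml"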

  • Restart the clickhouse service
/etc/init.d/clickhouse-server restart
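After the restart, you can confirm that the server listens on all interfaces and accepts remote connections (9000 and 8123 are the default native and HTTP ports; adjust if you changed them):
ss -lntp | grep -E ':(9000|8123)'
clickhouse-client -h 192.168.175.213 --password --query 'SELECT 1'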

Verification

Start the clickhouse client on each node exactly as in the single-node setup, and query the cluster information:

select * from system.clusters;

(Screenshot of the system.clusters output omitted.)
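Beyond system.clusters, an end-to-end smoke test can confirm that sharding and replication actually work. The sketch below is hedged: it assumes a <shard> macro (e.g. 01/02/03) has been added next to <replica> in each node's <macros>, because the ReplicatedMergeTree path uses {shard}; the table names test_local and test_all are arbitrary. Run it once from any node's clickhouse-client:

CREATE TABLE default.test_local ON CLUSTER perftest_3shards_1replicas
(
    id UInt32,
    ts DateTime
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/test_local', '{replica}')
ORDER BY id;

CREATE TABLE default.test_all ON CLUSTER perftest_3shards_1replicas
AS default.test_local
ENGINE = Distributed(perftest_3shards_1replicas, default, test_local, rand());

INSERT INTO default.test_all VALUES (1, now()), (2, now()), (3, now());

SELECT count() FROM default.test_all;

Distributed inserts are asynchronous by default, so the final count may briefly lag behind 3 while the rows are forwarded to their shards.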
