1. Environment Preparation
| IP | Spec | ClickHouse version | ZooKeeper version | myid |
| --- | --- | --- | --- | --- |
| 192.168.12.88 | CentOS 7.9, 4 cores / 8 GB | 22.8.20.11 | 3.7.1 | 3 |
| 192.168.12.90 | CentOS 7.9, 4 cores / 8 GB | 22.8.20.11 | 3.7.1 | 2 |
| 192.168.12.91 | CentOS 7.9, 4 cores / 8 GB | 22.8.20.11 | 3.7.1 | 1 |
ClickHouse version choice: 22.8, matching the version offered by Alibaba Cloud and Tencent Cloud.
```shell
# Base OS settings
vim /etc/security/limits.conf
* soft nofile 655365
* hard nofile 655365
* soft nproc 128000
* hard nproc 128000

cat /etc/security/limits.d/20-nproc.conf
* soft nproc 4096
root soft nproc unlimited

# Disable SELinux
vim /etc/selinux/config
SELINUX=disabled

# Disable transparent hugepages (sysfs files cannot be edited with vim; write to them instead)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
```
If transparent hugepages are left enabled, the client reports a warning on connection:

```shell
Warnings:
 * Linux transparent hugepages are set to "always". Check /sys/kernel/mm/transparent_hugepage/enabled
```
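The active transparent-hugepage mode is the bracketed value in that sysfs file. A minimal sketch for checking it before deciding whether to write `never` (the helper name `thp_active` is my own, not a standard tool):

```shell
# Hypothetical helper: print the active (bracketed) value of a kernel
# toggle file such as /sys/kernel/mm/transparent_hugepage/enabled,
# whose content looks like: always madvise [never]
thp_active() {
  grep -o '\[[a-z]*\]' "$1" | tr -d '[]'
}

# Usage (run as root to actually change the mode):
# [ "$(thp_active /sys/kernel/mm/transparent_hugepage/enabled)" = never ] \
#   || echo never > /sys/kernel/mm/transparent_hugepage/enabled
```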
2. ZooKeeper Cluster Installation
Install JDK 11 and set the environment variables.
```shell
cd /usr/local/
tar -xzvf apache-zookeeper-3.7.1-bin.tar.gz
ln -s apache-zookeeper-3.7.1-bin zookeeper
cd /usr/local/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
mkdir -p /data/clickhouse/zookeeper/{data,logs}

# Write 1 here; write 2 and 3 on the other two machines (see the table above)
echo 1 > /data/clickhouse/zookeeper/data/myid
```

Configuration (`vim zoo.cfg`):

```shell
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2281
#maxClientCnxns=60
#autopurge.snapRetainCount=3
#autopurge.purgeInterval=1
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
dataDir=/data/clickhouse/zookeeper/data
dataLogDir=/data/clickhouse/zookeeper/logs
# Required on cloud servers if the cluster must be reachable over a public IP
# quorumListenOnAllIPs=true
server.1=192.168.12.91:2999:3999
server.2=192.168.12.90:2999:3999
server.3=192.168.12.88:2999:3999
```

Start the service:

```shell
cd /usr/local/zookeeper
bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
```
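Since only the `myid` value differs per host, a small sketch can keep the three installs identical (the helper name `myid_for_ip` is mine; the IP-to-id mapping follows the table in section 1):

```shell
# Hypothetical helper: map a node IP to its ZooKeeper myid,
# following the table in the environment section.
myid_for_ip() {
  case "$1" in
    192.168.12.91) echo 1 ;;
    192.168.12.90) echo 2 ;;
    192.168.12.88) echo 3 ;;
    *) echo "unknown node: $1" >&2; return 1 ;;
  esac
}

# Usage on each node:
# myid_for_ip "$(hostname -I | awk '{print $1}')" > /data/clickhouse/zookeeper/data/myid
```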
3. ClickHouse Deployment
3.1 Standalone installation
See the earlier article in this OLAP series on deploying standalone ClickHouse (Part 1).
3.2 Cluster configuration (3 shards, 1 replica per shard)
3.2.1 Modifying config.xml
```
cd /etc/clickhouse-server

# 1. Adjust the ClickHouse directory layout
# 1.1 Log directory: usually moved onto a large-capacity disk
<level>trace</level>
<log>/data/clickhouse/server-logs/clickhouse-server.log</log>
<errorlog>/data/clickhouse/server-logs/clickhouse-server.err.log</errorlog>

# 1.2 Data directories
<path>/data/clickhouse/clickhouse/</path>
<tmp_path>/data/clickhouse/clickhouse/tmp/</tmp_path>
<user_files_path>/data/clickhouse/clickhouse/user_files/</user_files_path>
# access control storage (the <path> inside <user_directories><local_directory>)
<path>/data/clickhouse/clickhouse/access/</path>

# 2. Ports: the defaults are fine, change them only if needed
<http_port>8123</http_port>
<tcp_port>9000</tcp_port>
<postgresql_port>9005</postgresql_port>
<interserver_http_port>9009</interserver_http_port>

# 3. Allow access from other hosts
# Uncomment this if the host has IPv6
<listen_host>::</listen_host>
# Uncomment this if the host only has IPv4
<listen_host>0.0.0.0</listen_host>
# Local-only access
<listen_host>::1</listen_host>
<listen_host>127.0.0.1</listen_host>

# 4. Time zone
<timezone>Asia/Shanghai</timezone>

# 5. Cluster configuration
# Comment out everything inside <remote_servers></remote_servers> and define the
# cluster in an included file instead; otherwise the default replica definitions remain in effect:
<include_from>/etc/clickhouse-server/config.d/metrika.xml</include_from>
```
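Before restarting the server it is worth confirming that any edited XML is still well-formed. A sketch, assuming `python3` is installed (the helper name `xml_ok` is mine):

```shell
# Hypothetical helper: exit non-zero if the given file is not
# well-formed XML (uses python3's stdlib parser).
xml_ok() {
  python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$1" 2>/dev/null
}

# Usage:
# xml_ok /etc/clickhouse-server/config.xml && echo "config.xml OK"
```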
Add the following configuration file on all 3 nodes: `/etc/clickhouse-server/config.d/metrika.xml`
```xml
<?xml version="1.0"?>
<yandex>
    <!-- In newer ClickHouse releases the root tag is <clickhouse> instead of <yandex> -->
    <remote_servers>
        <!-- Custom cluster name -->
        <clickhouse_cluster_3shards_1replicas>
            <!-- Three <shard> tags define the three shards of the cluster -->
            <shard>
                <!-- When true, data is written to only one replica and the replicated
                     table engine handles replication. The default false writes to all
                     replicas, which can cause duplicates and inconsistencies with
                     replicated tables, so set it to true -->
                <internal_replication>true</internal_replication>
                <!-- Replicas of this shard -->
                <replica>
                    <host>192.168.12.91</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.12.90</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.12.88</host>
                    <port>9000</port>
                </replica>
            </shard>
        </clickhouse_cluster_3shards_1replicas>
    </remote_servers>

    <!-- ZooKeeper cluster -->
    <zookeeper>
        <node index="1">
            <host>192.168.12.91</host>
            <port>2281</port>
        </node>
        <node index="2">
            <host>192.168.12.90</host>
            <port>2281</port>
        </node>
        <node index="3">
            <host>192.168.12.88</host>
            <port>2281</port>
        </node>
    </zookeeper>

    <!-- Per-node macros: with these set, replicated tables can be created later
         without spelling out the ZooKeeper path. The values differ on every
         machine; keep them consistent with each machine's identity -->
    <macros>
        <shard>01</shard>
        <replica>cluster01</replica>
    </macros>

    <networks>
        <ip>::/0</ip>
    </networks>

    <!-- Data compression settings -->
    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>
```
Note: the `macros` values must be changed on each ClickHouse node.
```xml
<!-- Node 2 -->
<macros>
    <shard>02</shard>
    <replica>cluster02</replica>
</macros>

<!-- Node 3 -->
<macros>
    <shard>03</shard>            <!-- the third shard -->
    <replica>cluster03</replica> <!-- replica name, chosen freely -->
</macros>
```
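With per-node macros in place, a replicated table can be created across the whole cluster with one statement; `{shard}` and `{replica}` expand differently on every node, so no ZooKeeper path needs to be written by hand. A sketch, where the table `default.events` and its columns are invented for illustration:

```sql
CREATE TABLE default.events ON CLUSTER clickhouse_cluster_3shards_1replicas
(
    dt Date,
    id UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events', '{replica}')
PARTITION BY toYYYYMM(dt)
ORDER BY id;
```

On node 1 this registers the replica under `/clickhouse/tables/01/events` as `cluster01`; on nodes 2 and 3 the same DDL lands under `02` and `03`.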
4. Starting and Verifying the Service
4.1 Starting the service
```shell
# 1. Log in to each of the 3 machines and start the service
clickhouse start
 chown -R clickhouse: '/var/run/clickhouse-server/'
Will run sudo -u 'clickhouse' /usr/bin/clickhouse-server --config-file /etc/clickhouse-server/config.xml --pid-file /var/run/clickhouse-server/clickhouse-server.pid --daemon
Waiting for server to start
Waiting for server to start
Server started

# or, via systemd (status output shown below)
systemctl start clickhouse-server.service
systemctl status clickhouse-server.service
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
   Loaded: loaded (/usr/lib/systemd/system/clickhouse-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-07-21 16:25:40 CST; 1min 26s ago
 Main PID: 1698 (clckhouse-watch)
    Tasks: 204
   Memory: 115.1M
   CGroup: /system.slice/clickhouse-server.service
           ├─1698 clickhouse-watchdog --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
           └─1699 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
```
4.2 Verification
```shell
# 1. Verify with a local login
clickhouse-client --password <password>

:) SHOW DATABASES

Query id: a80654a4-7946-4285-8b2a-87a46c6b347b

┌─name───────────────┐
│ INFORMATION_SCHEMA │
│ default            │
│ information_schema │
│ system             │
└────────────────────┘

4 rows in set. Elapsed: 0.001 sec.

:) SELECT * FROM system.clusters;

Query id: 647b0a0b-f9d2-4b74-8a06-8e40cd6e15eb

┌─cluster──────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─────┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
│ clickhouse_cluster_3shards_1replicas │         1 │            1 │           1 │ 192.168.12.91 │ 192.168.12.91 │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ clickhouse_cluster_3shards_1replicas │         2 │            1 │           1 │ 192.168.12.90 │ 192.168.12.90 │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ clickhouse_cluster_3shards_1replicas │         3 │            1 │           1 │ 192.168.12.88 │ 192.168.12.88 │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
└──────────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘

3 rows in set. Elapsed: 0.001 sec.

# 2. Connect from any node
clickhouse-client --host 192.168.12.88 --port 9000 --password
ClickHouse client version 22.8.20.11 (official build).
Password for user (default):
Connecting to 192.168.12.88:9000 as user default.
Connected to ClickHouse server version 22.8.20 revision 54460.

zookeeper3 :) show databases;

SHOW DATABASES

Query id: 44b06878-6bfe-4a21-87a9-f8c31e6c79eb

┌─name───────────────┐
│ INFORMATION_SCHEMA │
│ default            │
│ information_schema │
│ system             │
└────────────────────┘

4 rows in set. Elapsed: 0.001 sec.
```
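The `system.clusters` check above can also be scripted. A sketch (the function name `shard_count` is mine) that counts the distinct shards of a cluster from TSV output:

```shell
# Hypothetical helper: read `SELECT cluster, shard_num FROM
# system.clusters FORMAT TSV` on stdin and count the distinct
# shards of the named cluster.
shard_count() {
  awk -F'\t' -v c="$1" '$1 == c { print $2 }' | sort -u | wc -l
}

# Usage (expects 3 for the cluster built in this article):
# clickhouse-client -q "SELECT cluster, shard_num FROM system.clusters FORMAT TSV" \
#   | shard_count clickhouse_cluster_3shards_1replicas
```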
References:

- clickhouse集群部署指南(3分片1副本模式) (note the installation steps)
- Clickhouse集群安装与部署 (note the chproxy part)
- clickhouse集群部署与搭建 (focus: scaling out and in)
- clickhouse测试 (testing only)
From: https://www.cnblogs.com/yangmeichong/p/17570894.html