
Kafka Cluster Principles and Deployment


Official site

https://kafka.apache.org/

Overview

Kafka was originally developed at LinkedIn as a distributed messaging system and later became an Apache project. Written in Scala, it is widely used for its horizontal scalability and high throughput. A growing number of open-source distributed processing systems, such as Cloudera products, Apache Storm, and Spark, support integration with Kafka.

Principles


(Figure: typical Kafka architecture — Producers, brokers, Consumer Groups, and a ZooKeeper ensemble)

A typical Kafka architecture comprises a number of Producers (server logs, business data, page views generated by front-end pages, and so on), a number of brokers (Kafka scales horizontally; in general, more brokers mean higher cluster throughput), a number of Consumers organized into Consumer Groups, and a ZooKeeper ensemble. Kafka uses ZooKeeper to manage cluster configuration, elect leaders, and rebalance when a consumer group changes. Producers publish messages to brokers in push mode; Consumers subscribe to and consume messages from brokers in pull mode.

broker (Kafka node): a message-middleware processing node; one Kafka node is one broker, and one or more brokers form a Kafka cluster
Producer: the message producer; the client that publishes messages to brokers
Consumer: the message consumer; the client that reads messages from brokers
Topic: Kafka categorizes messages by topic; every message published to the cluster must specify a Topic
Partition: the messages of a Topic are spread across multiple partitions, the smallest unit of organization of Kafka's message queue; each partition can be viewed as an ordered, first-in-first-out queue
Consumer group: every Consumer belongs to exactly one consumer group; a message can be delivered to several different consumer groups, but within a group only one Consumer consumes it
Replica: Kafka's fault-tolerance mechanism, which ensures that if a node in the cluster fails, that node's partition data is not lost
leader: the replica of a partition designated "leader"; producers send data to it, and consumer groups consume from it
follower: the replicas of a partition designated "follower"; they replicate from the leader in real time to stay in sync with it


A topic can be thought of as one class of messages. Each topic is split into multiple partitions, and each partition is, at the storage level, an append-only log file. Any message published to a partition is appended to the tail of its log file; a message's position in the file is called its offset, a long integer that uniquely identifies the message within the partition.

Because every message is appended to a partition, writes are sequential disk writes and therefore very efficient (sequential disk writes have been measured to outperform even random memory writes, which is an important guarantee of Kafka's high throughput). When a message is sent to a broker, a partitioning rule decides which partition it is stored in. If the rule is chosen sensibly, messages are distributed evenly across partitions, which yields horizontal scaling. (If a topic mapped to a single file, the I/O of the machine holding that file would become the topic's bottleneck; partitions solve this problem.) Partitioning is therefore key to Kafka's performance: when cluster performance is low, a common remedy is to add partitions to the Topic. Within a partition, messages are kept in arrival order, oldest first: producers append messages at the tail of the queue and consumers read from the head.
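
To make partitions, replicas, and leaders concrete, here is a minimal sketch using the stock CLI tools, assuming the three-broker cluster built in the deployment section below is already running (the topic name demo-topic is made up for illustration):

# Create a topic whose messages are spread over 3 partitions, each replicated on all 3 brokers
bin/kafka-topics.sh --bootstrap-server 192.168.101.1:19092 \
  --create --topic demo-topic --partitions 3 --replication-factor 3

# Show each partition's leader broker, replica set, and in-sync replicas (ISR)
bin/kafka-topics.sh --bootstrap-server 192.168.101.1:19092 \
  --describe --topic demo-topic

Each partition line in the --describe output names one broker as Leader; producers write to and consumers read from that broker for that partition, while the followers replicate from it.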

Kafka compared with other mainstream distributed messaging systems

Cluster Deployment

Lab environment

node   IP               JDK           ZooKeeper  Kafka
node3  192.168.101.209  jdk1.8.0_333  zk3        broker.id=2
node2  192.168.100.64   jdk1.8.0_333  zk2        broker.id=1
node1  192.168.101.1    jdk1.8.0_333  zk1        broker.id=0
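
If you prefer to address the machines by the node names above, a hosts entry like the following can be added on every machine (this mapping is purely a convenience assumed here; all of the configuration below uses raw IPs):

# /etc/hosts (hypothetical convenience mapping)
192.168.101.1    node1
192.168.100.64   node2
192.168.101.209  node3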

Package download

https://kafka.apache.org/downloads

The package is on the order of 100 MB and can be slow to download; consider going through a proxy.
kafka_2.13-3.4.0.tgz
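
For a scripted download, the same release should also be available from the Apache archive; the URL below follows the standard archive layout (verify it against the downloads page above):

# Fetch Kafka 3.4.0 built for Scala 2.13
wget https://archive.apache.org/dist/kafka/3.4.0/kafka_2.13-3.4.0.tgz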

Deployment

  1. Install the JDK; see the companion post "zookeeper原理及集群部署" (ZooKeeper principles and cluster deployment).
  2. Install the ZooKeeper cluster; see the same post.
  3. Install the Kafka cluster (on each node)
    (Once the package is copied onto a system with a JDK, simply extract it, finish the configuration, and it can be started directly.)
    3.1 Extract
tar -zxf kafka_2.13-3.4.0.tgz
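
The configuration and systemd unit below refer to the installation as /app/kafka-19092, so the extracted tree needs to end up at that path, for example:

# Move the extracted directory to the path used by log.dirs and the systemd unit below
mkdir -p /app
mv kafka_2.13-3.4.0 /app/kafka-19092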

3.2 Edit config/server.properties

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# This configuration file is intended for use in ZK-based mode, where Apache ZooKeeper is required.
# See kafka.server.KafkaConfig for additional details and defaults
#

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
# Unique ID of this machine within the cluster; must differ on every node (same idea as ZooKeeper's myid)
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. If not configured, the host name will be equal to the value of
# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
# Address and port to listen on; set this to the node's internal (LAN) IP
#listeners=PLAINTEXT://:9092
listeners=PLAINTEXT://192.168.101.1:19092

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
# To allow access from outside the LAN, set the public IP here
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL


# The number of threads that the server uses for receiving requests from the network and sending responses to the network
# Number of threads the broker uses for receiving requests from and sending responses over the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
# Number of threads the broker uses for I/O processing
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
# Socket send buffer size; data is not sent immediately but buffered until the buffer reaches a certain size, which improves performance
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
# Socket receive buffer size; received data is persisted to disk once it reaches a certain size
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
# Maximum size of a single request for fetching messages from or sending messages to Kafka; must not exceed the JVM heap size
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
# Directory (or comma-separated list of directories) in which message data is stored. num.io.threads above should be at least as large as the number of directories. When multiple directories are configured, a newly created topic's partitions are persisted to whichever directory currently holds the fewest partitions
log.dirs=/app/kafka-19092/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
# Default number of partitions per topic (a topic gets 1 partition by default)
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
# Threads per data directory used for log recovery at startup and flushing at shutdown
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
# Replication factor for the internal offsets/transaction topics; default 1, but use a value greater than 1 (e.g. 3) in production
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
# Default maximum retention time for messages: 168 hours (7 days)
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
# Kafka appends messages to segment files; when a segment exceeds this size (default 1 GB), a new segment file is started
#log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
# Every 300000 ms (5 minutes), check whether any log segments have exceeded the retention policy above (log.retention.hours=168) and delete any that have expired
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=192.168.101.1:2281,192.168.100.64:2281,192.168.101.209:2281

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
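
The flush and retention comments above note that both policies can be overridden per topic. As a sketch (the topic name demo-topic is made up for illustration), kafka-configs.sh can give one topic a shorter retention without touching server.properties:

# Override retention for a single topic to 24 hours (86400000 ms)
bin/kafka-configs.sh --bootstrap-server 192.168.101.1:19092 \
  --alter --entity-type topics --entity-name demo-topic \
  --add-config retention.ms=86400000

# Confirm the override took effect
bin/kafka-configs.sh --bootstrap-server 192.168.101.1:19092 \
  --describe --entity-type topics --entity-name demo-topic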

3.3 Create a kafka user and change ownership

sudo groupadd kafka
sudo useradd -r -g kafka -s /bin/false kafka
chown -R kafka:kafka /app/kafka-19092/

3.4 Make the startup scripts executable

chmod a+x bin/*.sh

3.5 After the edits are done, copy the installation to node2 and node3.
Note: every server's broker.id must be unique: node2 uses broker.id=1 and node3 uses broker.id=2. The listeners address must likewise point at each node's own IP, as sketched below.
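
A sketch of the per-node differences in server.properties (IPs taken from the lab-environment table; 19092 is this deployment's listener port):

# node1 (192.168.101.1)
broker.id=0
listeners=PLAINTEXT://192.168.101.1:19092

# node2 (192.168.100.64)
broker.id=1
listeners=PLAINTEXT://192.168.100.64:19092

# node3 (192.168.101.209)
broker.id=2
listeners=PLAINTEXT://192.168.101.209:19092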
3.6 Configure a systemd unit
vim /etc/systemd/system/kafka.service

[Unit]
Description=kafka
After=network.target

[Service]
Type=simple
LimitNOFILE=65535
LimitNPROC=65535
Environment=JAVA_HOME=/usr/local/jdk1.8.0_333
User=kafka
Group=kafka
ExecStart=/app/kafka-19092/bin/kafka-server-start.sh  /app/kafka-19092/config/server.properties
ExecStop=/app/kafka-19092/bin/kafka-server-stop.sh
Restart=always
[Install]
WantedBy=multi-user.target

3.7 Enable at boot and start the service

systemctl daemon-reload
systemctl enable kafka
systemctl start kafka


Verification
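
A minimal end-to-end smoke test, assuming all three brokers are up (the topic name test is made up for this check):

# Confirm the service and the listening port on each node
systemctl status kafka
ss -tlnp | grep 19092

# List the broker ids registered in ZooKeeper (should print [0, 1, 2])
bin/zookeeper-shell.sh 192.168.101.1:2281 ls /brokers/ids

# Create a replicated test topic
bin/kafka-topics.sh --bootstrap-server 192.168.101.1:19092 \
  --create --topic test --partitions 3 --replication-factor 3

# Produce a few messages (type lines, then Ctrl-C)
bin/kafka-console-producer.sh --bootstrap-server 192.168.101.1:19092 --topic test

# Consume them back, bootstrapping from a different node
bin/kafka-console-consumer.sh --bootstrap-server 192.168.100.64:19092 \
  --topic test --from-beginning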

References:
https://www.cnblogs.com/diaozhaojian/p/10490741.html
https://www.cnblogs.com/luotianshuai/p/5206662.html
