1 Download Kafka
To install Kafka on Linux, first download the required packages.
This guide uses kafka_2.12-2.5.1; the download URL is http://archive.apache.org/dist/kafka/2.5.1/kafka_2.12-2.5.1.tgz
You can also fetch it with: wget http://archive.apache.org/dist/kafka/2.5.1/kafka_2.12-2.5.1.tgz
2 Download ZooKeeper
This guide uses apache-zookeeper-3.5.9-bin.tar.gz as an example; official site: https://zookeeper.apache.org/
3 Install ZooKeeper
1) Unpack
tar -zxvf apache-zookeeper-3.5.9-bin.tar.gz
2) Rename the directory so it is easier to work with
mv apache-zookeeper-3.5.9-bin zookeeper-3.5.9
3) Enter the conf configuration directory and copy the sample configuration file
cp zoo_sample.cfg zoo.cfg
4) Edit the configuration
In zoo.cfg, the main changes are the dataDir path and, if necessary, the client port:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper-3.5.9/data/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
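If the dataDir configured above does not exist yet, it is safest to create it before starting (the path here simply matches the config above):
mkdir -p /usr/local/zookeeper-3.5.9/data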
5) Start ZooKeeper
Enter the bin directory and run:
./zkServer.sh start
6) Check the status
./zkServer.sh status
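On a single-node install the status output should include a line like the following (the surrounding lines vary by version):
Mode: standalone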
7) Stop ZooKeeper
./zkServer.sh stop
4 Install Kafka
1) Unpack
tar -zxvf kafka_2.12-2.5.1.tgz
2) Enter the config configuration directory and copy the configuration file as a backup
cp server.properties server-copy.properties
3) Edit server.properties
Set the following two properties (here ZooKeeper and Kafka are installed in the same VM):
listeners: the IP of the server Kafka runs on
zookeeper.connect: the IP of the server ZooKeeper runs on
listeners=PLAINTEXT://192.168.28.110:9092
zookeeper.connect=192.168.28.110:2181
4) Start
Enter the bin directory and run:
./kafka-server-start.sh -daemon ../config/server.properties
5) Stop
./kafka-server-stop.sh
6) View the logs
Enter the logs directory and run:
tail -f server.log
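Before wiring up an application, it is worth smoke-testing the broker with the console tools shipped in bin. A minimal check, assuming the broker address configured above (the topic name here is arbitrary):
Create a topic:
./kafka-topics.sh --create --bootstrap-server 192.168.28.110:9092 --replication-factor 1 --partitions 1 --topic topic_test
Start a console producer and type a few messages:
./kafka-console-producer.sh --bootstrap-server 192.168.28.110:9092 --topic topic_test
In another shell, start a console consumer; the messages typed above should appear:
./kafka-console-consumer.sh --bootstrap-server 192.168.28.110:9092 --topic topic_test --from-beginning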
5 Testing Kafka with Spring Boot
1) Add the dependency
<!-- Kafka dependency -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.8.6</version>
</dependency>
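The REST endpoint in step 4) below also assumes a web starter is on the classpath; if the project does not already include one, add spring-boot-starter-web as well:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>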
2) Add the configuration to application.yml
# kafka
spring:
  kafka:
    bootstrap-servers: 192.168.28.110:9092 # Kafka cluster address; separate multiple brokers with commas
    # producer
    producer:
      # number of retries; with a value greater than 0, the client re-sends records that failed to send
      retries: 3
      batch-size: 16384 # batch size, 16 KB
      buffer-memory: 33554432 # buffer memory, 32 MB
      acks: 1
      # serializers for the message key and the message body
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    # consumer
    consumer:
      # consumer group
      group-id: TestGroup
      # whether to auto-commit offsets
      enable-auto-commit: false
      # offset reset policy
      # none: if no previous offset is found for the group (neither auto- nor manually maintained), throw an exception
      # earliest: if a committed offset exists in a partition, consume from it; otherwise consume from the beginning
      # latest: if a committed offset exists in a partition, consume from it; otherwise consume from the latest records
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    # listener
    listener:
      # record: commit after each record is processed by the listener (ListenerConsumer)
      # batch: commit after each batch returned by poll() is processed by the ListenerConsumer
      # time: commit after a batch is processed and more than TIME has elapsed since the last commit
      # count: commit after a batch is processed and at least COUNT records have been processed
      # count_time: commit when either the TIME or the COUNT condition is met
      # manual: commit after a batch is processed and Acknowledgment.acknowledge() has been called
      # manual_immediate: commit immediately when Acknowledgment.acknowledge() is called; generally recommended
      ack-mode: manual_immediate
3) Add a consumer
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

/**
 * Consumer: Kafka listeners.
 */
@Component
public class KafkaConsumer {

    /**
     * Listener 1: topic "topic_test", consumer group "group_topic_test".
     * @param record the consumed record
     * @param ack    handle for the manual commit
     */
    @KafkaListener(topics = "topic_test", groupId = "group_topic_test")
    public void topicListener1(ConsumerRecord<String, String> record, Acknowledgment ack) {
        String value = record.value();
        System.out.println(value);
        System.out.println(record);
        // manual commit
        ack.acknowledge();
    }

    /**
     * A second listener in a separate consumer group:
     * topic "topic_test2", consumer group "group_topic_test2".
     * @param record the consumed record
     * @param ack    handle for the manual commit
     */
    @KafkaListener(topics = "topic_test2", groupId = "group_topic_test2")
    public void topicListener2(ConsumerRecord<String, String> record, Acknowledgment ack) {
        String value = record.value();
        System.out.println(value);
        System.out.println(record);
        ack.acknowledge();
    }
}
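The listeners above assume the topics exist. With a default broker configuration (auto.create.topics.enable=true) they are created on first use, but if auto-creation is disabled you can declare them as beans, and Spring Boot's auto-configured KafkaAdmin creates them at startup. A sketch (the class name is illustrative):

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

/**
 * Optional: declare the topics so KafkaAdmin creates them at startup.
 */
@Configuration
public class KafkaTopicConfig {

    @Bean
    public NewTopic topicTest() {
        // one partition, one replica: enough for a single-broker test setup
        return TopicBuilder.name("topic_test").partitions(1).replicas(1).build();
    }

    @Bean
    public NewTopic topicTest2() {
        return TopicBuilder.name("topic_test2").partitions(1).replicas(1).build();
    }
}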
4) Add a producer
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/kafka")
public class KafkaController {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @RequestMapping("/send")
    public void send() {
        kafkaTemplate.send("topic_test", "key", "test kafka message");
    }
}
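kafkaTemplate.send() is asynchronous and returns before the broker acknowledges the record. In spring-kafka 2.8.x it returns a ListenableFuture, so a callback can log the outcome; a minimal sketch (the class name and log text are illustrative):

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

public class KafkaSendExample {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public KafkaSendExample(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendWithCallback() {
        ListenableFuture<SendResult<String, String>> future =
                kafkaTemplate.send("topic_test", "key", "test kafka message");
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                // partition and offset assigned by the broker
                System.out.println("sent to partition " + result.getRecordMetadata().partition()
                        + " at offset " + result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                System.err.println("send failed: " + ex.getMessage());
            }
        });
    }
}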
5) Start the project and call the endpoint
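Assuming the application runs on the default port 8080:
curl http://localhost:8080/kafka/send
The listener on topic_test should then print the message and the full ConsumerRecord to the console.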
6 Installing ZooKeeper and Kafka on Alibaba Cloud
1) ZooKeeper is installed exactly as above
2) Install Kafka
The configuration differs here:
listeners: the listening address; use the private (internal) IP (note: do not use localhost or 127.0.0.1)
advertised.listeners: the address advertised to external clients; use the public IP
listeners=PLAINTEXT://<Alibaba Cloud server private IP>:9092
advertised.listeners=PLAINTEXT://<Alibaba Cloud server public IP>:9092
zookeeper.connect=<Alibaba Cloud server public IP>:2181
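For external clients to reach the broker, port 9092 (and 2181, if ZooKeeper itself must be reachable from outside) has to be allowed both in the instance's security group rules and in any local firewall. For example, assuming firewalld is in use:
firewall-cmd --zone=public --permanent --add-port=9092/tcp
firewall-cmd --zone=public --permanent --add-port=2181/tcp
firewall-cmd --reload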
The server's private IP can be found in the Alibaba Cloud console, or with the ifconfig command.