1. Add the dependency (if the spring-kafka version does not match your Spring Boot version, the application fails at startup with missing-class errors)
```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.5.1.RELEASE</version>
</dependency>
```
My Spring Boot version:
```xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.3.4.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
```
2. Configure application.yml
```yaml
spring:
  kafka:
    bootstrap-servers: 192.168.233.11:9092,192.168.233.11:9093,192.168.233.11:9094
    # Producer configuration. Most defaults are fine; a few important properties are listed here.
    producer:
      # 0        -- do not wait for any acknowledgement
      # 1        -- acknowledge once the leader has written the record
      # all / -1 -- acknowledge only after the leader and the followers have written it
      acks: all
      # A value greater than 0 makes the client resend any record whose send failed.
      # Note these retries are no different from the client resending after a send error.
      # Allowing retries can reorder records: if two records go to the same partition
      # and the first fails while the second succeeds, the second can end up ahead of the first.
      retries: 2
      # Upper bound, in bytes, on the size of each record batch (not a message count)
      batch-size: 16384
      # Total memory the producer may use to buffer records. If records are produced faster
      # than they can be sent to the broker, the producer blocks or throws an exception.
      # This relates to the total memory the producer uses, but it is not a hard limit,
      # since not all producer memory is used for buffering: some extra memory is used for
      # compression (if enabled) and for maintaining in-flight requests.
      buffer-memory: 33554432
      # Key/value serializers
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      properties:
        linger.ms: 1
        enable:
          idempotence: true
    # Consumer configuration
    consumer:
      # Whether to auto-commit offsets
      enable-auto-commit: false
      # Interval between auto-commits (only relevant when auto-commit is enabled)
      auto-commit-interval: 100ms
      # Key/value deserializers
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      properties:
        session.timeout.ms: 15000
      max-poll-records: 15
    listener:
      ack-mode: manual_immediate
      type: batch
```
The configuration above uses manual acknowledgement and batch listening; each poll fetches up to 15 records.
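For reference, the same listener behavior can also be set up programmatically. This is only a sketch, not needed when the YAML above is present; note that declaring a bean named kafkaListenerContainerFactory (the default name @KafkaListener looks up) makes it supersede the YAML listener settings:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;

@Configuration
public class KafkaListenerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Equivalent to listener.type: batch
        factory.setBatchListener(true);
        // Equivalent to listener.ack-mode: manual_immediate
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        return factory;
    }
}
```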
For the meaning of each configuration parameter, see the official documentation: https://kafka.apachecn.org/documentation.html#configuration
3. Spring Boot auto-configuration

The starter is wired up by KafkaAutoConfiguration (decompiled source below):
```java
package org.springframework.boot.autoconfigure.kafka;

import java.io.IOException;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties.Jaas;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.security.jaas.KafkaJaasLoginModuleInitializer;
import org.springframework.kafka.support.LoggingProducerListener;
import org.springframework.kafka.support.ProducerListener;
import org.springframework.kafka.support.converter.RecordMessageConverter;
import org.springframework.kafka.transaction.KafkaTransactionManager;

@Configuration(proxyBeanMethods = false)
@ConditionalOnClass({KafkaTemplate.class})
@EnableConfigurationProperties({KafkaProperties.class})
@Import({KafkaAnnotationDrivenConfiguration.class, KafkaStreamsAnnotationDrivenConfiguration.class})
public class KafkaAutoConfiguration {

    private final KafkaProperties properties;

    public KafkaAutoConfiguration(KafkaProperties properties) {
        this.properties = properties;
    }

    @Bean
    @ConditionalOnMissingBean({KafkaTemplate.class})
    public KafkaTemplate<?, ?> kafkaTemplate(ProducerFactory<Object, Object> kafkaProducerFactory,
            ProducerListener<Object, Object> kafkaProducerListener,
            ObjectProvider<RecordMessageConverter> messageConverter) {
        KafkaTemplate<Object, Object> kafkaTemplate = new KafkaTemplate(kafkaProducerFactory);
        messageConverter.ifUnique(kafkaTemplate::setMessageConverter);
        kafkaTemplate.setProducerListener(kafkaProducerListener);
        kafkaTemplate.setDefaultTopic(this.properties.getTemplate().getDefaultTopic());
        return kafkaTemplate;
    }

    @Bean
    @ConditionalOnMissingBean({ProducerListener.class})
    public ProducerListener<Object, Object> kafkaProducerListener() {
        return new LoggingProducerListener();
    }

    @Bean
    @ConditionalOnMissingBean({ConsumerFactory.class})
    public ConsumerFactory<?, ?> kafkaConsumerFactory(
            ObjectProvider<DefaultKafkaConsumerFactoryCustomizer> customizers) {
        DefaultKafkaConsumerFactory<Object, Object> factory =
                new DefaultKafkaConsumerFactory(this.properties.buildConsumerProperties());
        customizers.orderedStream().forEach((customizer) -> customizer.customize(factory));
        return factory;
    }

    @Bean
    @ConditionalOnMissingBean({ProducerFactory.class})
    public ProducerFactory<?, ?> kafkaProducerFactory(
            ObjectProvider<DefaultKafkaProducerFactoryCustomizer> customizers) {
        DefaultKafkaProducerFactory<?, ?> factory =
                new DefaultKafkaProducerFactory(this.properties.buildProducerProperties());
        String transactionIdPrefix = this.properties.getProducer().getTransactionIdPrefix();
        if (transactionIdPrefix != null) {
            factory.setTransactionIdPrefix(transactionIdPrefix);
        }
        customizers.orderedStream().forEach((customizer) -> customizer.customize(factory));
        return factory;
    }

    @Bean
    @ConditionalOnProperty(name = {"spring.kafka.producer.transaction-id-prefix"})
    @ConditionalOnMissingBean
    public KafkaTransactionManager<?, ?> kafkaTransactionManager(ProducerFactory<?, ?> producerFactory) {
        return new KafkaTransactionManager(producerFactory);
    }

    @Bean
    @ConditionalOnProperty(name = {"spring.kafka.jaas.enabled"})
    @ConditionalOnMissingBean
    public KafkaJaasLoginModuleInitializer kafkaJaasInitializer() throws IOException {
        KafkaJaasLoginModuleInitializer jaas = new KafkaJaasLoginModuleInitializer();
        Jaas jaasProperties = this.properties.getJaas();
        if (jaasProperties.getControlFlag() != null) {
            jaas.setControlFlag(jaasProperties.getControlFlag());
        }
        if (jaasProperties.getLoginModule() != null) {
            jaas.setLoginModule(jaasProperties.getLoginModule());
        }
        jaas.setOptions(jaasProperties.getOptions());
        return jaas;
    }

    @Bean
    @ConditionalOnMissingBean
    public KafkaAdmin kafkaAdmin() {
        KafkaAdmin kafkaAdmin = new KafkaAdmin(this.properties.buildAdminProperties());
        kafkaAdmin.setFatalIfBrokerNotAvailable(this.properties.getAdmin().isFailFast());
        return kafkaAdmin;
    }
}
```
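Because every bean above is guarded by @ConditionalOnMissingBean, declaring your own bean makes the auto-configured one back off. A minimal sketch (the config class name and the default topic are illustrative):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class MyKafkaConfig {

    // This bean replaces KafkaAutoConfiguration's kafkaTemplate because of
    // @ConditionalOnMissingBean({KafkaTemplate.class}).
    @Bean
    public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> producerFactory) {
        KafkaTemplate<String, String> template = new KafkaTemplate<>(producerFactory);
        template.setDefaultTopic("shop-topic"); // so sendDefault(...) targets shop-topic
        return template;
    }
}
```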
4. Producer:
```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MyController {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @RequestMapping("/send/msg")
    public String sendMsg() {
        for (int i = 0; i < 100; i++) {
            // No key: records are distributed across the topic's partitions
            kafkaTemplate.send("shop-topic", i + "aaaa");
            // With a key: the partition is chosen by hashing the key (murmur2 in the default
            // partitioner) modulo the partition count, so the same key always lands on the
            // same partition
            kafkaTemplate.send("shop-topic", i + "aaa", i + "aaaa");
        }
        return "ok";
    }
}
```
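In spring-kafka 2.5, send() returns a ListenableFuture, so delivery results can be observed, and a partition can also be addressed explicitly. A sketch; the topic, key, and payload values are illustrative:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;

public class SendWithCallback {

    public static void send(KafkaTemplate<String, String> kafkaTemplate) {
        // Explicit partition 0: the partitioner is bypassed entirely
        ListenableFuture<SendResult<String, String>> future =
                kafkaTemplate.send("shop-topic", 0, "order-1", "payload");
        future.addCallback(
                result -> System.out.println("sent to partition "
                        + result.getRecordMetadata().partition()
                        + " at offset " + result.getRecordMetadata().offset()),
                ex -> System.err.println("send failed: " + ex.getMessage()));
    }
}
```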
5. Consumer, batch consumption (batch mode must be enabled in the YAML, as in section 2)
```java
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Service;

/**
 * author: yangxiaohui
 * date: 2023/7/13
 */
@Service
public class OrderService {

    /**
     * Kafka consumes by consumer group: within one group, a given message is consumed only
     * once. For example, if the order service is deployed on 5 nodes that all share the same
     * group, a message is consumed by exactly one of those 5 nodes. The groupId is created
     * automatically. Acknowledging commits the offset.
     */
    @KafkaListener(topics = "shop-topic", groupId = "my-group", clientIdPrefix = "orderService")
    public void listenMsg(List<ConsumerRecord<String, String>> consumerRecordList, Acknowledgment acknowledgment) {
        for (ConsumerRecord<String, String> consumerRecord : consumerRecordList) {
            System.out.println(consumerRecord.value());
        }
        // Commit the batch's offsets after processing (at-least-once delivery)
        acknowledgment.acknowledge();
    }
}
```
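To illustrate the group semantics described in the comment above: a listener with a different groupId gets its own copy of every record, because offsets are tracked per group. A sketch (audit-group is a made-up group id), still using the batch signature since listener.type is batch:

```java
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Service;

@Service
public class AuditService {

    // Different groupId: this listener independently receives every record
    // that my-group also consumes.
    @KafkaListener(topics = "shop-topic", groupId = "audit-group")
    public void auditMsg(List<ConsumerRecord<String, String>> records, Acknowledgment acknowledgment) {
        for (ConsumerRecord<String, String> record : records) {
            System.out.println("audit copy: " + record.value());
        }
        acknowledgment.acknowledge();
    }
}
```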
6. Consumer, single-record consumption: you must switch the listener configuration to single-record mode, otherwise the listener fails
The configuration is identical to section 2 except for spring.kafka.listener.type:

```yaml
spring:
  kafka:
    # ... bootstrap-servers, producer, and consumer settings exactly as in section 2 ...
    listener:
      ack-mode: manual_immediate
      type: single
```
Note the last entry of the configuration above: type: single.
```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    /**
     * Same consumer-group semantics as above, but the method now receives one
     * ConsumerRecord at a time because listener.type is single.
     */
    @KafkaListener(topics = "shop-topic", groupId = "my-group", clientIdPrefix = "orderService")
    public void listenMsg(ConsumerRecord<String, String> consumerRecord, Acknowledgment acknowledgment) {
        System.out.println(consumerRecord.value());
        acknowledgment.acknowledge();
    }
}
```
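With manual_immediate ack mode you can also reject a record: Acknowledgment.nack(sleepMillis), available since spring-kafka 2.3, skips the commit, pauses, then re-seeks so the same record is redelivered. A sketch; processOrder is a hypothetical business method:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Service;

@Service
public class ResilientOrderService {

    @KafkaListener(topics = "shop-topic", groupId = "my-group")
    public void listenMsg(ConsumerRecord<String, String> record, Acknowledgment acknowledgment) {
        try {
            processOrder(record.value());  // hypothetical business logic
            acknowledgment.acknowledge();  // commit only after successful processing
        } catch (Exception e) {
            // Do not commit; sleep 1s, then the container re-seeks and redelivers this record
            acknowledgment.nack(1000);
        }
    }

    private void processOrder(String payload) {
        // placeholder for real processing
    }
}
```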
7. Points to note:

- The spring-kafka version must be compatible with your Spring Boot version, otherwise the application fails at startup with missing classes (section 1).
- listener.type must match the listener method signature: batch requires a List of ConsumerRecord, single requires a single ConsumerRecord, otherwise the listener throws an error.
- With enable-auto-commit: false and a manual ack mode, an offset is committed only when acknowledge() is called; records that were polled but never acknowledged are redelivered after a rebalance or restart.