kafka
1.kafka"log_level":"log_error","errmsg":"Error while processing: ConsumerRecord(topic = xxx, partition = xx, offset = xxx, CreateTime = xx, serialized key size = 75, serialized value size = xx, headers = RecordHeaders(headers = [], isReadOnly = false), key = xx, value = {xxx)org.springframework.kafka.listener.ListenerExecutionFailedException:Listener method 'public void com.xx.xx(org.apache.kafka.clients.consumer.ConsumerRecord<java.lang.Object, java.lang.Object>,org.springframework.kafka.support.Acknowledgment) throws java.io.IOException' threw exception; nested exception is org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.; nested exception is org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
Problem: in manual-commit mode, if the records returned by a single poll() are not fully processed and acknowledged within the configured max.poll.interval.ms, this error is triggered.
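To make the failure mode concrete, here is a minimal sketch of a manual-ack listener with the same method signature as the one in the stack trace; the class, topic, and group names are placeholders, not taken from the original post:

```java
import java.io.IOException;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class XxListener {

    // Signature matches the stack trace: a record plus an Acknowledgment for
    // manual offset commits (requires manual ack mode on the listener container).
    @KafkaListener(topics = "xxx", groupId = "xx-group")
    public void onMessage(ConsumerRecord<Object, Object> record, Acknowledgment ack) throws IOException {
        handle(record);     // if processing the polled batch takes longer than
                            // max.poll.interval.ms, the group rebalances and...
        ack.acknowledge();  // ...this manual commit fails with CommitFailedException
    }

    private void handle(ConsumerRecord<Object, Object> record) throws IOException {
        // slow business logic goes here
    }
}
```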
Fix: adjust the consumer configuration, for example:

ConsumerConfig.MAX_POLL_RECORDS_CONFIG: 5
ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG: 5000

Reducing max.poll.records shrinks the amount of work done between two consecutive poll() calls; max.poll.interval.ms must remain longer than the worst-case time needed to process one polled batch (Kafka's default is 300000 ms, i.e. 5 minutes).
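A sketch of how these settings might be wired into a Spring Kafka consumer factory with manual ack mode enabled; the bootstrap server, group id, deserializers, and the interval value of 300000 ms are illustrative assumptions, not values from the original post:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<Object, Object> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "xx-group");                  // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);             // manual commit mode

        // Fewer records per poll() -> less work between two consecutive polls.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 5);
        // Must exceed the worst-case time needed to process one polled batch;
        // 300000 ms is Kafka's default, raise it if batches can run longer.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);

        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
            ConsumerFactory<Object, Object> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Manual ack mode so the listener's Acknowledgment parameter controls offset commits.
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        return factory;
    }
}
```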