
Apache Kafka Series: Migration and Scaling Tool Usage

Posted: 2023-06-04 10:01:55
Tags: replicas, --, partition, usage, topic, cluster, switch, apache, kafka


Using the Kafka Migration and Scaling Tools

Reference (Apache wiki): https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-6.ReassignPartitionsTool

Overview:

Scaling out a Kafka cluster involves two tasks:

  1. Migrate specified topics onto the newly added nodes.
  2. Migrate specified partitions of a topic onto the newly added nodes.

1. Migrating a topic to newly added nodes



Suppose a Kafka cluster is running three brokers with broker.id values 101, 102, and 103. Business data then grows sharply, so three more brokers are added with broker.id values 104, 105, and 106. The goal is to migrate push-token-topic onto the new nodes. The command and its JSON input file are shown below:



lizhitao@localhost:$ ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183/config/mobile/mq/mafka --topics-to-move-json-file migration-push-token-topic.json --broker-list "104,105,106" --generate

The migration-push-token-topic.json file contains:

{
        "topics": [
                {
                        "topic": "push-token-topic"
                }
        ],
        "version": 1
}

The command prints the current assignment and a proposed partition-assignment JSON. (Note: the sample output below comes from a different example topic, cluster-switch-topic.)

Current partition replica assignment
{"version":1,"partitions":[{"topic":"cluster-switch-topic","partition":10,"replicas":[8]},{"topic":"cluster-switch-topic","partition":5,"replicas":[4]},{"topic":"cluster-switch-topic","partition":3,"replicas":[5]},{"topic":"cluster-switch-topic","partition":4,"replicas":[5]},{"topic":"cluster-switch-topic","partition":9,"replicas":[5]},{"topic":"cluster-switch-topic","partition":1,"replicas":[5]},{"topic":"cluster-switch-topic","partition":11,"replicas":[4]},{"topic":"cluster-switch-topic","partition":7,"replicas":[5]},{"topic":"cluster-switch-topic","partition":2,"replicas":[4]},{"topic":"cluster-switch-topic","partition":0,"replicas":[4]},{"topic":"cluster-switch-topic","partition":6,"replicas":[4]},{"topic":"cluster-switch-topic","partition":8,"replicas":[4]}]}
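For intuition, the kind of spread `--generate` proposes can be approximated by round-robining each partition's replicas over the target broker list. This is only a sketch of the idea, not the tool's actual algorithm; the topic name and broker ids 4 and 5 match the sample output above:

```python
import json

def propose_assignment(topic, num_partitions, brokers, replication_factor=1):
    """Round-robin each partition's replica list across the target brokers."""
    partitions = []
    for p in range(num_partitions):
        replicas = [brokers[(p + r) % len(brokers)]
                    for r in range(replication_factor)]
        partitions.append({"topic": topic, "partition": p, "replicas": replicas})
    return {"version": 1, "partitions": partitions}

plan = propose_assignment("cluster-switch-topic", 12, [4, 5])
print(json.dumps(plan))
```

The output has the same shape as the reassignment JSON the tool emits, so the same file format works for hand-written plans.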



The proposed reassignment JSON, saved as migration-topic-cluster-switch-topic.json:
 {"version":1,"partitions":[{"topic":"cluster-switch-topic","partition":10,"replicas":[5]},{"topic":"cluster-switch-topic","partition":5,"replicas":[4]},{"topic":"cluster-switch-topic","partition":4,"replicas":[5]},{"topic":"cluster-switch-topic","partition":3,"replicas":[4]},{"topic":"cluster-switch-topic","partition":9,"replicas":[4]},{"topic":"cluster-switch-topic","partition":1,"replicas":[4]},{"topic":"cluster-switch-topic","partition":11,"replicas":[4]},{"topic":"cluster-switch-topic","partition":7,"replicas":[4]},{"topic":"cluster-switch-topic","partition":2,"replicas":[5]},{"topic":"cluster-switch-topic","partition":0,"replicas":[5]},{"topic":"cluster-switch-topic","partition":6,"replicas":[5]},{"topic":"cluster-switch-topic","partition":8,"replicas":[5]}]}

lizhitao@localhost:$ bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183/config/mobile/mq/mafka01 --reassignment-json-file migration-topic-cluster-switch-topic.json --execute

Progress can later be checked by rerunning the same command with --verify in place of --execute.

2. Changing a topic's replication factor

lizhitao@localhost:$ ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183/config/mobile/mq/mafka --reassignment-json-file replicas-update-push-token-topic.json --execute

Suppose push-token-topic initially has a single replica; to improve availability it should be changed to two replicas.

The replicas-update-push-token-topic.json file contains the following. (Note: the sample entries use a different topic, log.mobile_nginx; the last entry is a placeholder showing the required fields.)

{
        "partitions": [
                {
                        "topic": "log.mobile_nginx",
                        "partition": 0,
                        "replicas": [101,102,104]
                },
                {
                        "topic": "log.mobile_nginx",
                        "partition": 1,
                        "replicas": [102,103,106]
                },
                {
                        "topic": "xxxx",
                        "partition": <number>,
                        "replicas": [<broker id list>]
                }
        ],
        "version": 1
}
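Going from one replica to two just means appending a second broker to each partition's replica list. A minimal sketch of building such a file (the helper and its broker-selection rule are hypothetical; a real placement policy would also balance load and racks):

```python
import json

def add_replica(current_plan, candidate_brokers):
    """Append one extra broker to every partition's replica list,
    picking the first candidate that is not already a replica."""
    expanded = {"version": 1, "partitions": []}
    for entry in current_plan["partitions"]:
        extra = next(b for b in candidate_brokers if b not in entry["replicas"])
        expanded["partitions"].append({
            "topic": entry["topic"],
            "partition": entry["partition"],
            "replicas": entry["replicas"] + [extra],
        })
    return expanded

# Current single-replica layout (illustrative):
current = {"version": 1, "partitions": [
    {"topic": "push-token-topic", "partition": 0, "replicas": [101]},
    {"topic": "push-token-topic", "partition": 1, "replicas": [102]},
]}
print(json.dumps(add_replica(current, [101, 102, 103])))
```

The printed JSON can be fed straight to kafka-reassign-partitions.sh with --reassignment-json-file.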

3. Expanding a topic's partition count

 

a. First increase the partition count. For example, push-token-topic starts with 12 partitions and should be increased to 15:

lizhitao@localhost:$ ./bin/kafka-topics.sh --zookeeper 192.168.2.225:2183/config/mobile/mq/mafka --alter --partitions 15 --topic push-token-topic

b. Assign replicas for the new partitions:

 

lizhitao@localhost:$ ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183/config/mobile/mq/mafka --reassignment-json-file partitions-extension-push-token-topic.json --execute


 

The partitions-extension-push-token-topic.json file contains:

 


{
        "partitions": [
                {
                        "topic": "push-token-topic",
                        "partition": 12,
                        "replicas": [101,102]
                },
                {
                        "topic": "push-token-topic",
                        "partition": 13,
                        "replicas": [103,104]
                },
                {
                        "topic": "push-token-topic",
                        "partition": 14,
                        "replicas": [105,106]
                }
        ],
        "version": 1
}
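Note that the extension file only lists the newly added partitions (12 through 14). A short sketch that builds such a file from the first new partition number and a list of replica sets (the broker pairings here are illustrative, matching the file above):

```python
import json

def extension_plan(topic, first_new_partition, replica_lists):
    """Build a reassignment JSON covering only the newly added partitions."""
    partitions = [
        {"topic": topic, "partition": first_new_partition + i, "replicas": replicas}
        for i, replicas in enumerate(replica_lists)
    ]
    return {"version": 1, "partitions": partitions}

plan = extension_plan("push-token-topic", 12,
                      [[101, 102], [103, 104], [105, 106]])
with open("partitions-extension-push-token-topic.json", "w") as f:
    json.dump(plan, f, indent=8)
```

Existing partitions 0 through 11 are left out on purpose, so their replicas stay where they are and only the new partitions get placed.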

From: https://blog.51cto.com/u_16091571/6409998

    Elasticsearch作为当前主流的全文检索引擎,除了强大的全文检索能力和高扩展性之外,对多种数据源的兼容能力也是其成功的秘诀之一。而Elasticsearch强大的数据源兼容能力,主要来源于其核心组件之一的Logstash,Logstash通过插件的形式实现了对多种数据源的输入和输出。Kafka是一种高吞......