
Kafka and Eventing


Project links

https://strimzi.io/quickstarts/

https://github.com/strimzi/strimzi-kafka-operator/tree/0.31.1/examples/kafka

Deploy the ClusterRoles and CRDs

kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
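Note: the command above installs into the kafka namespace, which must already exist; the Strimzi quickstart creates it first:

kubectl create namespace kafka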

Deploy Kafka with multiple brokers and ephemeral storage (lab environment)

kubectl apply -f https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/0.31.1/examples/kafka/kafka-ephemeral.yaml -n kafka
[root@master ~]# kubectl get pods -n kafka
NAME                                          READY   STATUS    RESTARTS   AGE
my-cluster-entity-operator-6bd798bcdd-vvg46   3/3     Running   0          68s
my-cluster-kafka-0                            1/1     Running   0          99s
my-cluster-kafka-1                            1/1     Running   0          99s
my-cluster-kafka-2                            1/1     Running   0          99s
my-cluster-zookeeper-0                        1/1     Running   0          3m31s
my-cluster-zookeeper-1                        1/1     Running   0          3m31s
my-cluster-zookeeper-2                        1/1     Running   0          3m31s
strimzi-cluster-operator-5986447-qxhb7        1/1     Running   0          4m32s
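Optionally wait for the Kafka cluster to report Ready before continuing (the wait command from the Strimzi quickstart):

kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka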

Test

kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.31.1-kafka-3.2.3 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.31.1-kafka-3.2.3 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning

 

Deploy the KafkaChannel components

kubectl apply -f https://github.com/knative-sandbox/eventing-kafka-broker/releases/download/knative-v1.7.4/eventing-kafka-controller.yaml
kubectl apply -f https://github.com/knative-sandbox/eventing-kafka-broker/releases/download/knative-v1.7.4/eventing-kafka-channel.yaml
[root@master ~]# kubectl get pods -n knative-eventing
NAME                                        READY   STATUS    RESTARTS         AGE
eventing-controller-865f8c98d4-fl7j6        1/1     Running   11 (8m52s ago)   3d6h
eventing-webhook-84b9987ccb-p82vk           1/1     Running   18 (9m17s ago)   4d4h
imc-controller-67964f548d-tzsw9             1/1     Running   20 (9m17s ago)   3d6h
imc-dispatcher-8585c4c4bc-kwwcr             1/1     Running   18 (9m27s ago)   3d6h
kafka-channel-dispatcher-74475bbcd5-rlnch   1/1     Running   0                56s
kafka-channel-receiver-756cdb6684-z95x7     1/1     Running   0                56s
kafka-controller-5c667d6fd-vnhqn            1/1     Running   0                99m
kafka-webhook-eventing-646577987c-76tdn     1/1     Running   0                99m
mt-broker-controller-585768d967-mhq9l       1/1     Running   12 (8m52s ago)   3d6h
mt-broker-filter-55f57b859f-rcrbv           1/1     Running   15 (8m47s ago)   3d6h
mt-broker-ingress-844d685b8b-blswm          1/1     Running   16 (8m47s ago)   3d6h
pingsource-mt-adapter-f8c89895f-9425r       1/1     Running   16 (9m27s ago)   4d3h

Deploy the Knative Kafka broker

kubectl apply --filename https://github.com/knative-sandbox/eventing-kafka-broker/releases/download/knative-v1.7.4/eventing-kafka-controller.yaml
kubectl apply --filename https://github.com/knative-sandbox/eventing-kafka-broker/releases/download/knative-v1.7.4/eventing-kafka-broker.yaml
[root@master ~]# kubectl get pods -n knative-eventing
NAME                                        READY   STATUS    RESTARTS       AGE
eventing-controller-865f8c98d4-fl7j6        1/1     Running   11 (22m ago)   3d6h
eventing-webhook-84b9987ccb-p82vk           1/1     Running   18 (23m ago)   4d4h
imc-controller-67964f548d-tzsw9             1/1     Running   20 (23m ago)   3d6h
imc-dispatcher-8585c4c4bc-kwwcr             1/1     Running   18 (23m ago)   3d6h
kafka-broker-dispatcher-9fb7c7ff4-dffnh     1/1     Running   0              18s
kafka-broker-receiver-5c5d75d6f7-kkg7f      1/1     Running   0              18s
kafka-channel-dispatcher-74475bbcd5-rlnch   1/1     Running   0              14m
kafka-channel-receiver-756cdb6684-z95x7     1/1     Running   0              14m
kafka-controller-5c667d6fd-vnhqn            1/1     Running   0              112m
kafka-webhook-eventing-646577987c-76tdn     1/1     Running   0              112m
mt-broker-controller-585768d967-mhq9l       1/1     Running   12 (22m ago)   3d6h
mt-broker-filter-55f57b859f-rcrbv           1/1     Running   15 (22m ago)   3d6h
mt-broker-ingress-844d685b8b-blswm          1/1     Running   16 (22m ago)   3d6h
pingsource-mt-adapter-f8c89895f-9425r       1/1     Running   16 (23m ago)   4d3h

Create a Kafka channel

[root@master ~]# kn channel create kc01 --type messaging.knative.dev:v1beta1:KafkaChannel
Channel 'kc01' created in namespace 'default'.
[root@master ~]# kn channel list
NAME        TYPE              URL                                                     AGE     READY   REASON
channel01   InMemoryChannel   http://channel01-kn-channel.default.svc.cluster.local   106s    True    
imc01       InMemoryChannel   http://imc01-kn-channel.default.svc.cluster.local       3d21h   True    
kc01        KafkaChannel      http://kc01-kn-channel.default.svc.cluster.local        4s      True 
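For reference, the kn command above is roughly equivalent to applying a KafkaChannel manifest like the following; the partition and replication values below are illustrative, and if spec is omitted the defaults from the channel configuration apply:

apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: kc01
  namespace: default
spec:
  numPartitions: 3
  replicationFactor: 1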

Check that the data in kafka-channel-config matches the Kafka bootstrap Service (bootstrap.servers points at my-cluster-kafka-bootstrap in the kafka namespace)

[root@master ~]# kubectl get cm kafka-channel-config -n knative-eventing -o yaml
apiVersion: v1
data:
  bootstrap.servers: my-cluster-kafka-bootstrap.kafka:9092
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"bootstrap.servers":"my-cluster-kafka-bootstrap.kafka:9092"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"kafka.eventing.knative.dev/release":"aa777b447beccf0083783c36b7bc024e9e27294d"},"name":"kafka-channel-config","namespace":"knative-eventing"}}
  creationTimestamp: "2022-10-25T01:15:53Z"
  labels:
    kafka.eventing.knative.dev/release: aa777b447beccf0083783c36b7bc024e9e27294d
  name: kafka-channel-config
  namespace: knative-eventing
  resourceVersion: "779464"
  uid: aa087f2f-a077-4f43-bf59-9289b8aa7fb7
[root@master ~]# kubectl get svc -n kafka
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                               AGE
my-cluster-kafka-bootstrap    ClusterIP   10.111.24.20    <none>        9091/TCP,9092/TCP,9093/TCP            17m
my-cluster-kafka-brokers      ClusterIP   None            <none>        9090/TCP,9091/TCP,9092/TCP,9093/TCP   17m
my-cluster-zookeeper-client   ClusterIP   10.107.220.52   <none>        2181/TCP                              18m
my-cluster-zookeeper-nodes    ClusterIP   None            <none>        2181/TCP,2888/TCP,3888/TCP            18m

Change the default channel type

Only the demo namespace keeps InMemoryChannel as its default; every other namespace defaults to KafkaChannel, as configured below.

[root@master kafka-broker]# kubectl apply -f default-ch-webhook.yaml 
configmap/default-ch-webhook configured
[root@master kafka-broker]# cat default-ch-webhook.yaml 
apiVersion: v1
data:
  default-ch-config: |
    clusterDefault:
      apiVersion: messaging.knative.dev/v1beta1
      kind: KafkaChannel
      spec:
        numPartitions: 10 
        replicationFactor: 3
    namespaceDefaults:
      demo: 
        apiVersion: messaging.knative.dev/v1
        kind: InMemoryChannel
kind: ConfigMap
metadata:
  name: default-ch-webhook
  namespace: knative-eventing
[root@master kafka-broker]# kn channel create channel02
Channel 'channel02' created in namespace 'default'.
[root@master kafka-broker]# kn channel list
NAME        TYPE              URL                                                     AGE     READY   REASON
channel01   InMemoryChannel   http://channel01-kn-channel.default.svc.cluster.local   12m     True    
channel02   KafkaChannel      http://channel02-kn-channel.default.svc.cluster.local   6s      True    
imc01       InMemoryChannel   http://imc01-kn-channel.default.svc.cluster.local       3d21h   True    
kc01        KafkaChannel      http://kc01-kn-channel.default.svc.cluster.local        10m     True    
[root@master kafka-broker]# kubectl create namespace demo
namespace/demo created
[root@master kafka-broker]# kn channel create channel03 -n demo
Channel 'channel03' created in namespace 'demo'.
[root@master kafka-broker]# kn channel list
NAME        TYPE              URL                                                     AGE     READY   REASON
channel01   InMemoryChannel   http://channel01-kn-channel.default.svc.cluster.local   12m     True    
channel02   KafkaChannel      http://channel02-kn-channel.default.svc.cluster.local   32s     True    
imc01       InMemoryChannel   http://imc01-kn-channel.default.svc.cluster.local       3d21h   True    
kc01        KafkaChannel      http://kc01-kn-channel.default.svc.cluster.local        11m     True    
[root@master kafka-broker]# kn channel list -n demo
NAME        TYPE              URL                                                  AGE   READY   REASON
channel03   InMemoryChannel   http://channel03-kn-channel.demo.svc.cluster.local   8s    True
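To confirm which backing resource each channel actually got, list the typed channel objects directly (a quick check against the CRDs installed above):

kubectl get kafkachannels.messaging.knative.dev -n default
kubectl get inmemorychannels.messaging.knative.dev -n demo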

Change the default channel type used underneath by channel-based brokers

[root@master kafka-broker]# kubectl apply -f config-br-default-channel.yaml 
configmap/config-br-default-channel configured
[root@master kafka-broker]# cat config-br-default-channel.yaml 
apiVersion: v1
data:
  channel-template-spec: |
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    spec:
      numPartitions: 10
      replicationFactor: 3
kind: ConfigMap
metadata:
  name: config-br-default-channel
  namespace: knative-eventing
[root@master kafka-broker]# kubectl get cm config-br-default-channel -o yaml -n knative-eventing
apiVersion: v1
data:
  channel-template-spec: |
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    spec:
      numPartitions: 10
      replicationFactor: 3
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"channel-template-spec":"apiVersion: messaging.knative.dev/v1beta1\nkind: KafkaChannel\nspec:\n  numPartitions: 10\n  replicationFactor: 3\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"config-br-default-channel","namespace":"knative-eventing"}}
  creationTimestamp: "2022-10-20T06:56:19Z"
  name: config-br-default-channel
  namespace: knative-eventing
  resourceVersion: "821839"
  uid: 7dc266b2-8cfc-436e-9a12-1ed949203c5a
[root@master kafka-broker]# kn broker create mbroker03
Broker 'mbroker03' successfully created in namespace 'default'.
[root@master kafka-broker]# kn broker list
NAME        URL                                                                          AGE     CONDITIONS   READY   REASON
br01        http://broker-ingress.knative-eventing.svc.cluster.local/default/br01        3d19h   6 OK / 6     True    
mbroker03   http://broker-ingress.knative-eventing.svc.cluster.local/default/mbroker03   3s      6 OK / 6     True
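Since mbroker03 uses the default MTChannelBasedBroker class, it is backed by a KafkaChannel created from the template above; assuming the usual <broker-name>-kne-trigger naming convention, it can be inspected with:

kubectl get kafkachannel mbroker03-kne-trigger -n default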

Creating a Kafka-class broker, however, turns up an error

unable to build topic config from configmap: error validating topic config from configmap invalid configuration - numPartitions: 0 - replicationFactor: 0 - bootstrapServers: [] - ConfigMap data: map[channel-template-spec:apiVersion: messaging.knative.dev/v1beta1

[root@master kafka-broker]# kn broker create  kbroker01 --class Kafka
Broker 'kbroker01' successfully created in namespace 'default'.
[root@master kafka-broker]# kn broker list
NAME        URL                                                                     AGE     CONDITIONS   READY   REASON
br01        http://broker-ingress.knative-eventing.svc.cluster.local/default/br01   3d20h   6 OK / 6     True    
kbroker01                                                                           4s      1 OK / 7     False   unable to build topic config from configmap: error validating topic config from configmap invalid configuration - numPartitions: 0 - replicationFactor: 0 - bootstrapServers: [] - ConfigMap data: map[channel-template-spec:apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
spec:
  numPartitions: 10
  replicationFactor: 3
] - ConfigMap data: map[channel-template-spec:apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
spec:
  numPartitions: 10
  replicationFactor: 3
]
mbroker03   http://broker-ingress.knative-eventing.svc.cluster.local/default/mbroker03   34m   6 OK / 6   True

The cause: without an explicit spec.config, the Kafka-class broker falls back to the default ConfigMap referenced by config-br-defaults, which here is config-br-default-channel (its channel-template-spec data is exactly what appears in the error), and that ConfigMap carries none of the numPartitions, replicationFactor, or bootstrapServers settings a Kafka broker topic needs. After pointing the cluster default in config-br-defaults at kafka-broker-config, we create the broker again:

[root@master kafka-broker]# kubectl apply -f config-br-defaults.yaml 
configmap/config-br-defaults configured
[root@master kafka-broker]# cat config-br-defaults.yaml 
apiVersion: v1
data:
  default-br-config: |
    clusterDefault:
      brokerClass: Kafka
      apiVersion: v1
      kind: ConfigMap
      name: kafka-broker-config
      namespace: knative-eventing
    namespaceDefaults:
      some-namespace:
        brokerClass: MTChannelBasedBroker
        apiVersion: v1
        kind: ConfigMap
        name: config-br-default-channel
        namespace: knative-eventing
        delivery:
          retry: 10
          backoffPolicy: exponential
          backoffDelay: PT0.2S
kind: ConfigMap
metadata:
  name: config-br-defaults
  namespace: knative-eventing
[root@master kafka-broker]# kn broker create  kbroker02 --class Kafka
Broker 'kbroker02' successfully created in namespace 'default'.
[root@master kafka-broker]# kn broker list
NAME        URL                                                                     AGE     CONDITIONS   READY   REASON
br01        http://broker-ingress.knative-eventing.svc.cluster.local/default/br01   3d20h   6 OK / 6     True    
kbroker01                                                                           9m50s   1 OK / 7     False   unable to build topic config from configmap: error validating topic config from configmap invalid configuration - numPartitions: 0 - replicationFactor: 0 - bootstrapServers: [] - ConfigMap data: map[channel-template-spec:apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
spec:
  numPartitions: 10
  replicationFactor: 3
] - ConfigMap data: map[channel-template-spec:apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
spec:
  numPartitions: 10
  replicationFactor: 3
]
kbroker02   http://kafka-broker-ingress.knative-eventing.svc.cluster.local/default/kbroker02   7s    7 OK / 7   True   
mbroker03   http://broker-ingress.knative-eventing.svc.cluster.local/default/mbroker03         44m   6 OK / 6   True 
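For reference, kafka-broker-config (the ConfigMap that config-br-defaults now points at, installed with eventing-kafka-broker) needs roughly the following keys on this cluster; the bootstrap address below is assumed to match the Strimzi Service shown earlier, and the topic settings mirror the shipped defaults:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-broker-config
  namespace: knative-eventing
data:
  default.topic.partitions: "10"
  default.topic.replication.factor: "3"
  bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092"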

Deploy a GitLab instance

[root@master]# git clone https://github.com/iKubernetes/k8s-gitlab.git
[root@master]# cd k8s-gitlab
[root@master k8s-gitlab]# kubectl apply -f deploy-gitlab/
namespace/gitlab created
service/redis created
deployment.apps/redis created
secret/gitlab created
service/postgresql created
deployment.apps/postgresql created
service/code created
service/gitlab created
deployment.apps/gitlab created
[root@master k8s-gitlab]# kubectl get pods -n gitlab
NAME                          READY   STATUS    RESTARTS   AGE
gitlab-66f876df75-mlmdc       1/1     Running   0          2m55s
postgresql-84cb88d7d4-2ftlt   1/1     Running   0          3m9s
redis-6b96797cb4-7nnsl        1/1     Running   0          3m9s
[root@master k8s-gitlab]# kubectl apply -f istio/
destinationrule.networking.istio.io/gitlab created
gateway.networking.istio.io/gitlab-gateway unchanged
virtualservice.networking.istio.io/gitlab-virtualservice created
[root@master istio]# kubectl get svc -n istio-system
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                      AGE
istio-ingressgateway    LoadBalancer   10.100.247.178   10.211.55.30   15021:30446/TCP,80:31623/TCP,443:31154/TCP   9d
istiod                  ClusterIP      10.99.195.197    <none>         15010/TCP,15012/TCP,443/TCP,15014/TCP        9d
knative-local-gateway   ClusterIP      10.111.234.134   <none>         80/TCP

Deploy the GitLab source controller

[root@master k8s-gitlab]# kubectl apply -f https://github.com/knative-sandbox/eventing-gitlab/releases/download/knative-v1.8.0/gitlab.yaml
namespace/knative-sources created
serviceaccount/gitlab-controller-manager created
serviceaccount/gitlab-webhook created
clusterrole.rbac.authorization.k8s.io/gitlabsource-manager-role created
clusterrole.rbac.authorization.k8s.io/eventing-contrib-gitlab-source-observer created
clusterrolebinding.rbac.authorization.k8s.io/gitlabsource-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/eventing-sources-gitlab-addressable-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-sources-gitlab-webhook created
clusterrole.rbac.authorization.k8s.io/gitlab-webhook created
customresourcedefinition.apiextensions.k8s.io/gitlabbindings.bindings.knative.dev created
customresourcedefinition.apiextensions.k8s.io/gitlabsources.sources.knative.dev created
service/gitlab-controller-manager-service created
deployment.apps/gitlab-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/defaulting.webhook.gitlab.sources.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.gitlab.sources.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/gitlabbindings.webhook.gitlab.sources.knative.dev created
secret/gitlab-webhook-certs created
service/gitlab-webhook created
deployment.apps/gitlab-webhook created
[root@master k8s-gitlab]# kubectl get pods -n knative-sources
NAME                                         READY   STATUS    RESTARTS      AGE
gitlab-controller-manager-5fd8cc597f-rhfkf   1/1     Running   0             2m43s
gitlab-webhook-7fb4f8c48f-h2ppj              1/1     Running   0             2m43s

Set up local name resolution and open GitLab in a browser
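For example, a hosts entry on the workstation pointing the GitLab host at the istio-ingressgateway EXTERNAL-IP (the hostname below is a placeholder; use whatever host the gitlab VirtualService is configured with):

# /etc/hosts
10.211.55.30  gitlab.ik8s.io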

In the GitLab admin settings, allow requests to the local network from webhooks and services

Create a new project

Create a personal access token and copy it out

Create the namespace and an event-display service to receive events

[root@master gitlab-source]# kubectl apply -f 01-namespace.yaml -f 02-kservice-event-display.yaml
[root@master gitlab-source]# cat 01-namespace.yaml 
kind: Namespace
apiVersion: v1
metadata:
  name: event-demo
---
[root@master gitlab-source]# cat 02-kservice-event-display.yaml 
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
  namespace: event-demo
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"
    spec:
      containers:
        - image: ikubernetes/event_display
          ports:
            - containerPort: 8080
[root@master gitlab-source]# kubectl apply -f 03-secret-token.yaml 
[root@master gitlab-source]# cat 03-secret-token.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: gitlabsecret
  namespace: event-demo
type: Opaque
stringData:
  accessToken: glpat-iV5VH_4_6o4w6KTb9kug
  secretToken: E7RdUDfd+MBxbye/dygLSw
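The secretToken is an arbitrary shared secret that GitLab sends back in the X-Gitlab-Token header of each webhook delivery; one way to generate such a value:

openssl rand -base64 16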

Create the Kafka broker

[root@master gitlab-source]# kubectl apply -f 04-kafkabroker.yaml
[root@master gitlab-source]# cat 04-kafkabroker.yaml 
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: Kafka
  name: kbr01
  namespace: event-demo
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config
    namespace: knative-eventing
[root@master gitlab-source]# kn broker list -n event-demo
NAME    URL                                                                                AGE   CONDITIONS   READY   REASON
kbr01   http://kafka-broker-ingress.knative-eventing.svc.cluster.local/event-demo/kbr01   7s    7 OK / 7     True

Create the trigger

[root@master gitlab-source]# kubectl apply -f 05-Trigger-kafkabroker-to-knative-service.yaml 
[root@master gitlab-source]# cat 05-Trigger-kafkabroker-to-knative-service.yaml 
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: kafkabroker-to-knative-service
  namespace: event-demo
spec:
  broker: kbr01
  filter: {}
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
      namespace: event-demo
      
[root@master gitlab-source]# kn trigger list -n event-demo
NAME                            BROKER    SINK                AGE   CONDITIONS    READY    REASON
kafkabroker-to-knative-service  kbr01     ksvc:event-display  9s    6 OK / 6      True

Create the GitLabSource

[root@master gitlab-source]# kubectl apply -f 06-GitLabSource-to-knative-kafkabroker.yaml
[root@master gitlab-source]# cat 06-GitLabSource-to-knative-kafkabroker.yaml 
apiVersion: sources.knative.dev/v1alpha1
kind: GitLabSource
metadata:
  name: gitlabsource-to-kafkabroker
  namespace: event-demo
spec:
  eventTypes:
    - push_events
    - issues_events
    - merge_requests_events
    - tag_push_events
  projectUrl: http://code.gitlab.svc.cluster.local/root/spring-boot-helloworld
  sslverify: false
  accessToken:
    secretKeyRef:
      name: gitlabsecret
      key: accessToken
  secretToken:
    secretKeyRef:
      name: gitlabsecret
      key: secretToken
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: kbr01

GitLab now shows an automatically injected webhook
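The webhook is registered by the GitLabSource controller once the source becomes ready; its status can be checked through the CRD installed earlier:

kubectl get gitlabsources.sources.knative.dev -n event-demo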

Edit the webhook and copy gitlabsource-demo-nz85v.event-demo.svc.cluster.local into the URL field

Push test
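Any push to the project now emits a push_events webhook. A minimal sketch (repository host and branch name are assumed; adjust to the local GitLab setup):

git clone http://gitlab.ik8s.io/root/spring-boot-helloworld.git
cd spring-boot-helloworld
echo "event test" >> README.md
git add README.md
git commit -m "trigger push event"
git push origin main

Then tail the event-display logs to watch the CloudEvents arrive: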

[root@master gitlab-source]# kubectl logs -f event-display-00001-deployment-6cdd9979bf-5vtmr   -n event-demo

 
