
Common ActiveMQ Cluster Modes


Master Slave Mode

This mode is a high-availability cluster based on a shared file lock. In my understanding it is essentially a failover setup; it can be one master with one slave, or one master with two slaves.

This article uses Docker to build a master/slave ActiveMQ 5.8.0 cluster with two nodes; whichever node grabs the lock first becomes the master and the other acts as the standby.

For how the ActiveMQ image is built, see the earlier post 构建ActiveMQ镜像并运行 (building and running an ActiveMQ image).

version: "2.1"
services:
  activemq-A:
    image: cocowool/activemq:5.8.0
    hostname: activemq-A
    expose:
      - "8161"
      - "61613"
    ports:
      - "8161:8161"
      - "61613:61613"
    volumes:
      - ~/Projects/sh-valley/docker-conf/activemq/activemq-5.8.0-A.xml:/usr/local/apache-activemq-5.8.0/conf/activemq.xml
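      # Shared KahaDB directory: both containers bind-mount the same host folder,
      # whose lock file is what elects the active master (hedged explanatory comment).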
      - ~/Projects/sh-valley/docker-conf/activemq/kahadb:/usr/local/apache-activemq-5.8.0/data/kahadb
    networks:
      - default
  activemq-B:
    image: cocowool/activemq:5.8.0
    hostname: activemq-B
    expose:
      - "8162"
      - "61614"
    ports:
      - "8162:8161"
      - "61614:61613"
    volumes:
      - ~/Projects/sh-valley/docker-conf/activemq/activemq-5.8.0-B.xml:/usr/local/apache-activemq-5.8.0/conf/activemq.xml
      - ~/Projects/sh-valley/docker-conf/activemq/kahadb:/usr/local/apache-activemq-5.8.0/data/kahadb
    networks:
      - default

The two configuration files are actually identical; the key point is that the kahadb directory is shared between the two containers through the same local host directory. A configuration sketch and the startup log used to verify the behaviour follow.
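A minimal sketch of the relevant part of such an activemq.xml (assuming the stock KahaDB persistence adapter; the directory matches the container path mounted in the compose file above):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
    <!-- Both containers point at the same bind-mounted directory; the kahadb
         lock file inside it decides which instance becomes the active master. -->
    <persistenceAdapter>
        <kahaDB directory="/usr/local/apache-activemq-5.8.0/data/kahadb"/>
    </persistenceAdapter>
</broker>

In the log below, node B wins the lock and starts all of its connectors, while node A keeps waiting for the kahadb lock to be released.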

$ docker-compose up
Attaching to activemq_activemq-B_1, activemq_activemq-A_1
activemq-B_1  | INFO: Using default configuration
activemq-B_1  | (you can configure options in one of these file: /etc/default/activemq /root/.activemqrc)
activemq-B_1  | 
activemq-B_1  | INFO: Invoke the following command to create a configuration file
activemq-B_1  | /usr/local/apache-activemq-5.8.0/bin/activemq setup [ /etc/default/activemq | /root/.activemqrc ]
activemq-B_1  | 
activemq-A_1  | INFO: Using default configuration
activemq-A_1  | (you can configure options in one of these file: /etc/default/activemq /root/.activemqrc)
activemq-A_1  | 
activemq-A_1  | INFO: Invoke the following command to create a configuration file
activemq-B_1  | INFO: Using java '/usr/local/jdk1.8.0_144/bin/java'
activemq-A_1  | /usr/local/apache-activemq-5.8.0/bin/activemq setup [ /etc/default/activemq | /root/.activemqrc ]
activemq-A_1  | 
activemq-B_1  | INFO: Starting in foreground, this is just for debugging purposes (stop process by pressing CTRL+C)
activemq-A_1  | INFO: Using java '/usr/local/jdk1.8.0_144/bin/java'
activemq-A_1  | INFO: Starting in foreground, this is just for debugging purposes (stop process by pressing CTRL+C)
activemq-B_1  | Java Runtime: Oracle Corporation 1.8.0_144 /usr/local/jdk1.8.0_144/jre
activemq-B_1  |   Heap sizes: current=1005056k  free=989327k  max=1005056k
activemq-B_1  |     JVM args: -Xms1G -Xmx1G -Djava.util.logging.config.file=logging.properties -Dcom.sun.management.jmxremote -Djava.io.tmpdir=/usr/local/apache-activemq-5.8.0/tmp -Dactivemq.classpath=/usr/local/apache-activemq-5.8.0/conf; -Dactivemq.home=/usr/local/apache-activemq-5.8.0 -Dactivemq.base=/usr/local/apache-activemq-5.8.0 -Dactivemq.conf=/usr/local/apache-activemq-5.8.0/conf -Dactivemq.data=/usr/local/apache-activemq-5.8.0/data
activemq-B_1  | Extensions classpath:
activemq-B_1  |   [/usr/local/apache-activemq-5.8.0/lib,/usr/local/apache-activemq-5.8.0/lib/camel,/usr/local/apache-activemq-5.8.0/lib/optional,/usr/local/apache-activemq-5.8.0/lib/web,/usr/local/apache-activemq-5.8.0/lib/extra]
activemq-B_1  | ACTIVEMQ_HOME: /usr/local/apache-activemq-5.8.0
activemq-B_1  | ACTIVEMQ_BASE: /usr/local/apache-activemq-5.8.0
activemq-B_1  | ACTIVEMQ_CONF: /usr/local/apache-activemq-5.8.0/conf
activemq-B_1  | ACTIVEMQ_DATA: /usr/local/apache-activemq-5.8.0/data
activemq-A_1  | Java Runtime: Oracle Corporation 1.8.0_144 /usr/local/jdk1.8.0_144/jre
activemq-A_1  |   Heap sizes: current=1005056k  free=989327k  max=1005056k
activemq-A_1  |     JVM args: -Xms1G -Xmx1G -Djava.util.logging.config.file=logging.properties -Dcom.sun.management.jmxremote -Djava.io.tmpdir=/usr/local/apache-activemq-5.8.0/tmp -Dactivemq.classpath=/usr/local/apache-activemq-5.8.0/conf; -Dactivemq.home=/usr/local/apache-activemq-5.8.0 -Dactivemq.base=/usr/local/apache-activemq-5.8.0 -Dactivemq.conf=/usr/local/apache-activemq-5.8.0/conf -Dactivemq.data=/usr/local/apache-activemq-5.8.0/data
activemq-A_1  | Extensions classpath:
activemq-A_1  |   [/usr/local/apache-activemq-5.8.0/lib,/usr/local/apache-activemq-5.8.0/lib/camel,/usr/local/apache-activemq-5.8.0/lib/optional,/usr/local/apache-activemq-5.8.0/lib/web,/usr/local/apache-activemq-5.8.0/lib/extra]
activemq-A_1  | ACTIVEMQ_HOME: /usr/local/apache-activemq-5.8.0
activemq-A_1  | ACTIVEMQ_BASE: /usr/local/apache-activemq-5.8.0
activemq-A_1  | ACTIVEMQ_CONF: /usr/local/apache-activemq-5.8.0/conf
activemq-A_1  | ACTIVEMQ_DATA: /usr/local/apache-activemq-5.8.0/data
activemq-B_1  | Loading message broker from: xbean:activemq.xml
activemq-A_1  | Loading message broker from: xbean:activemq.xml
activemq-B_1  |  INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@68c4039c: startup date [Wed Sep 07 13:16:20 UTC 2022]; root of context hierarchy
activemq-A_1  |  INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@68c4039c: startup date [Wed Sep 07 13:16:20 UTC 2022]; root of context hierarchy
activemq-B_1  |  INFO | PListStore:[/usr/local/apache-activemq-5.8.0/data/localhost/tmp_storage] started
activemq-B_1  |  INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/usr/local/apache-activemq-5.8.0/data/kahadb]
activemq-A_1  |  INFO | PListStore:[/usr/local/apache-activemq-5.8.0/data/localhost/tmp_storage] started
activemq-A_1  |  INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/usr/local/apache-activemq-5.8.0/data/kahadb]
activemq-A_1  |  INFO | Database /usr/local/apache-activemq-5.8.0/data/kahadb/lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: File '/usr/local/apache-activemq-5.8.0/data/kahadb/lock' could not be locked.
activemq-B_1  |  INFO | KahaDB is version 4
activemq-B_1  |  INFO | Recovering from the journal ...
activemq-B_1  |  INFO | Recovery replayed 1 operations from the journal in 0.022 seconds.
activemq-B_1  |  INFO | Apache ActiveMQ 5.8.0 (localhost, ID:activemq-B-41795-1662556582026-0:1) is starting
activemq-B_1  |  INFO | Listening for connections at: tcp://activemq-B:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600
activemq-B_1  |  INFO | Connector openwire Started
activemq-B_1  |  INFO | Listening for connections at: amqp://activemq-B:5672?maximumConnections=1000&wireformat.maxFrameSize=104857600
activemq-B_1  |  INFO | Connector amqp Started
activemq-B_1  |  INFO | Listening for connections at: stomp://activemq-B:61612?transport.closeAsync=false
activemq-B_1  |  INFO | Connector stomp Started
activemq-B_1  |  INFO | Listening for connections at: stomp+nio://activemq-B:61613?transport.closeAsync=false
activemq-B_1  |  INFO | Connector stomp+nio Started
activemq-B_1  |  INFO | Apache ActiveMQ 5.8.0 (localhost, ID:activemq-B-41795-1662556582026-0:1) started
activemq-B_1  |  INFO | For help or more information please see: http://activemq.apache.org
activemq-B_1  | ERROR | Temporary Store limit is 51200 mb, whilst the temporary data directory: /usr/local/apache-activemq-5.8.0/data/localhost/tmp_storage only has 42398 mb of usable space
activemq-B_1  |  INFO | Web console type: embedded
activemq-B_1  |  INFO | ActiveMQ WebConsole initialized.
activemq-B_1  |  INFO | Initializing Spring FrameworkServlet 'dispatcher'
activemq-B_1  |  INFO | jolokia-agent: No access restrictor found at classpath:/jolokia-access.xml, access to all MBeans is allowed
activemq-A_1  |  INFO | Database /usr/local/apache-activemq-5.8.0/data/kahadb/lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: File '/usr/local/apache-activemq-5.8.0/data/kahadb/lock' could not be locked.

Depending on startup order, either A or B may grab the lock and serve clients; the instance that fails to grab it keeps logging that the database lock cannot be acquired and does not expose the Web Console. When trying the Python connection example, I found that stomp.py has a problem: it does not support failover very well.
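For illustration, a minimal stomp.py sketch (the queue name is hypothetical; 61613/61614 are the host ports published in the compose file above; stomp.py 8.x and the broker's default anonymous STOMP access are assumed):

import stomp

# List both brokers; the client connects to whichever instance currently
# holds the kahadb lock and therefore accepts connections.
conn = stomp.Connection(host_and_ports=[('localhost', 61613), ('localhost', 61614)])
conn.connect(wait=True)
conn.send(destination='/queue/failover.test', body='hello from a STOMP client')
conn.disconnect()

Unlike the OpenWire failover:// transport, stomp.py only walks this host list when establishing a connection; an already established connection is not transparently migrated when the master changes, which matches the limitation noted above.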

Networks of Brokers

This is a load-balancing architecture: the brokers connect to each other and share queue information, so a consumer connected to any broker in the network can consume all messages.

This approach mainly revolves around two elements of the ActiveMQ configuration:

  • TransportConnector, the transport connector, which controls communication between the broker and its clients
  • NetworkConnector, the network connector, which controls communication between brokers

This article builds the following topology: Broker1 is the node that producers talk to, responsible for receiving messages and relaying consumer replies back to producers; Broker2 and Broker3 are the nodes that consumers talk to and hold the connections from a large number of consumers. This layout suits scenarios with a very large number of consumers, typically more than 10,000 of them. In our production experience, a single ActiveMQ instance on a 4-core / 16 GB virtual machine can sustain roughly 2,000 connections while staying stable over the long term.
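The key settings live in each broker's activemq.xml. Below is a minimal sketch, not the actual files from GitHub, of what the consumer-facing Broker2 might contain (the connector names, the duplex flag and the OpenWire port 61616 are assumptions; 172.28.1.11 is Broker1's fixed address from the compose file below):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker2">
    <!-- NetworkConnector: broker-to-broker link towards the producer-facing Broker1.
         duplex="true" lets messages arrive and consumer replies flow back over
         the same connection. -->
    <networkConnectors>
        <networkConnector name="to-broker1" uri="static:(tcp://172.28.1.11:61616)" duplex="true"/>
    </networkConnectors>
    <!-- TransportConnector: the listeners that consumers connect to. -->
    <transportConnectors>
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
        <transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61612"/>
    </transportConnectors>
</broker>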

The docker-compose configuration is shown below; the related files can be downloaded from my GitHub.

version: "2.1"
services:
  activemq-broker1:
    image: cocowool/activemq:5.8.0
    hostname: activemq-broker1
    expose:
      - "8161"
      - "61611"
    ports:
      - "8161:8161"
      - "61611:61612"
    volumes:
      - ~/Projects/sh-valley/docker-conf/activemq-network/activemq-5.8.0-broker1.xml:/usr/local/apache-activemq-5.8.0/conf/activemq.xml
    networks:
      activemq-network:
        ipv4_address: 172.28.1.11
  activemq-broker2:
    image: cocowool/activemq:5.8.0
    hostname: activemq-broker2
    expose:
      - "8162"
      - "61612"
    ports:
      - "8162:8161"
      - "61612:61612"
    volumes:
      - ~/Projects/sh-valley/docker-conf/activemq-network/activemq-5.8.0-broker2.xml:/usr/local/apache-activemq-5.8.0/conf/activemq.xml
    networks:
      activemq-network:
        ipv4_address: 172.28.1.12
  activemq-broker3:
    image: cocowool/activemq:5.8.0
    hostname: activemq-broker3
    expose:
      - "8163"
      - "61613"
    ports:
      - "8163:8161"
      - "61613:61612"
    volumes:
      - ~/Projects/sh-valley/docker-conf/activemq-network/activemq-5.8.0-broker3.xml:/usr/local/apache-activemq-5.8.0/conf/activemq.xml
    networks:
      activemq-network:
        ipv4_address: 172.28.1.13
networks:
  activemq-network:
    ipam:
      config:
      - subnet: 172.28.1.0/24

Running docker-compose up starts the three ActiveMQ instances; their consoles are then available at http://localhost:8161, http://localhost:8162 and http://localhost:8163, and the Network tab of Broker2 and Broker3 shows the connection to Broker1. In production we have seen this broker-to-broker connection break while the ActiveMQ process itself showed no abnormality, which then affects the consumers attached to that instance.

To test the availability of this architecture, take the code from Python 与 ActiveMQ 交互的一些例子 (some examples of interacting with ActiveMQ from Python), change the ports accordingly, and observe the result; a rough sketch is shown below.
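As a rough sketch (hypothetical queue name; host ports 61611 and 61612 taken from the compose file above; stomp.py 8.x assumed), producing on Broker1 and consuming on Broker2 could look like this:

import time
import stomp

class Printer(stomp.ConnectionListener):
    def on_message(self, frame):
        print('consumed via broker2:', frame.body)

# Consumer attaches to Broker2 (host port 61612 -> container STOMP connector).
consumer = stomp.Connection(host_and_ports=[('localhost', 61612)])
consumer.set_listener('printer', Printer())
consumer.connect(wait=True)
consumer.subscribe(destination='/queue/network.test', id='1', ack='auto')

# Producer attaches to Broker1 (host port 61611).
producer = stomp.Connection(host_and_ports=[('localhost', 61611)])
producer.connect(wait=True)
producer.send(destination='/queue/network.test', body='hello across the broker network')

time.sleep(2)  # give the network connector time to forward the message
producer.disconnect()
consumer.disconnect()

If the consumer prints the message, the message produced on Broker1 has been forwarded to Broker2 over the broker network.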

This architecture can be further improved by combining it with the Master Slave mode described above:

The instances that producers connect to run as a Master Slave failover pair, and the instances that consumers connect to also use failover when configuring their NetworkConnector. Broker1 then provides master/slave high availability, further improving the whole cluster's resilience.
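On the consumer-facing brokers this is expressed in the NetworkConnector URI. A minimal sketch (the host names broker1-master/broker1-slave and port 61616 are assumptions) of what Broker2 and Broker3 could use to reach the Broker1 master/slave pair:

<networkConnectors>
    <!-- masterslave: tries the listed brokers in order and fails over to the
         slave when the master disappears (a shorthand for a non-randomized
         static failover list). -->
    <networkConnector name="to-broker1"
                      uri="masterslave:(tcp://broker1-master:61616,tcp://broker1-slave:61616)"
                      duplex="true"/>
</networkConnectors>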

References

  1. ActiveMQ cluster installation (ActiveMQ集群安装)
  2. Persistence vs. Durability in Messaging. Do you know the difference?
  3. Shared File System Master Slave
  4. Networks of Brokers
  5. Fixed IP addresses with docker-compose (docker-compose 固定IP)
  6. ActiveMQ load balancing + high availability deployment (ActiveMQ实现负载均衡+高可用部署方案)
  7. NetworkConnector in ActiveMQ explained (ActiveMQ中的NetworkConnector详解)

