I. Introduction to Amoro
On March 11, 2024, the Amoro project passed its vote and formally entered the incubator of the Apache Software Foundation (ASF), becoming an ASF incubating project.
Amoro is a lakehouse management system built on top of open data lake table formats. Starting in 2020, the NetEase big data team explored a lakehouse architecture based on Apache Iceberg inside the company and incubated the streaming lakehouse service Arctic, which later evolved into Amoro.
Official website: https://amoro.apache.org/
II. Installation
Note: when upgrading an existing deployment, stop the service first and take a backup; see the sketch below.
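A minimal backup sketch, assuming the running install lives under /usr/local/service/amoro and the metadata sits in the amoro MySQL database (both as set up in the steps below):
# stop AMS, then keep a copy of the install directory and a dump of the metadata database
sh /usr/local/service/amoro/bin/ams.sh stop
cp -R /usr/local/service/amoro /usr/local/service/amoro.bak-$(date +%Y%m%d)
mysqldump -h127.0.0.1 -uroot -p123456 amoro > /root/amoro-backup-$(date +%Y%m%d).sql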
1. Download the Amoro package (as root)
cd /root/wang
wget https://******/amoro/amoro-0.7.0-gaotu.tar.gz
2. Extract the package (as root)
tar -zxf amoro-0.7.0-gaotu.tar.gz
mv amoro-0.7.0 amoro
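The top-level directory inside the tarball may not be exactly amoro-0.7.0 (the package here is a customized 0.7.0 build), so it is worth listing it before renaming; a quick check:
# print only the archive's top-level entry to confirm the directory name
tar -tzf amoro-0.7.0-gaotu.tar.gz | head -n 1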
3. Download the MySQL JDBC driver jar
cd /root/wang/amoro/lib
MYSQL_JDBC_DRIVER_VERSION=8.0.30
wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/${MYSQL_JDBC_DRIVER_VERSION}/mysql-connector-java-${MYSQL_JDBC_DRIVER_VERSION}.jar
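If unzip is available, a quick way to confirm the driver downloaded completely and is a readable jar (same file name as above):
ls -lh mysql-connector-java-${MYSQL_JDBC_DRIVER_VERSION}.jar
unzip -l mysql-connector-java-${MYSQL_JDBC_DRIVER_VERSION}.jar | tail -n 2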
4. Create the amoro database
mysql -h127.0.0.1 -uroot -p123456
CREATE DATABASE IF NOT EXISTS amoro;
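AMS stores its metadata in this database, so it is worth confirming it exists before continuing (same credentials as above):
mysql -h127.0.0.1 -uroot -p123456 -e "SHOW DATABASES LIKE 'amoro';"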
5. Edit the configuration (an existing configuration file can be copied over directly)
cd /root/wang/amoro/conf
Items to change: server-expose-host (this machine's internal IP), bind-port under http-server (the web/API port, 9092 in this config), and the url: jdbc:mysql line (MySQL address and credentials). A quick check of these values is sketched after the config listing below.
ams:
  admin-username: admin
  admin-password: admin
  server-bind-host: "0.0.0.0"
  server-expose-host: "<local internal IP>"

  thrift-server:
    max-message-size: 104857600 # 100MB
    selector-thread-count: 2
    selector-queue-size: 4
    table-service:
      bind-port: 1260
      worker-thread-count: 20
    optimizing-service:
      bind-port: 1261

  http-server:
    bind-port: 9092
    rest-auth-type: basic

  refresh-external-catalogs:
    interval: 180000 # 3min
    thread-count: 10
    queue-size: 1000000

  refresh-tables:
    thread-count: 10
    interval: 60000 # 1min

  self-optimizing:
    commit-thread-count: 10
    runtime-data-keep-days: 30
    runtime-data-expire-interval-hours: 1

  optimizer:
    heart-beat-timeout: 60000 # 1min
    task-ack-timeout: 30000 # 30s
    polling-timeout: 3000 # 3s
    max-planning-parallelism: 1 # default 1

  blocker:
    timeout: 60000 # 1min

  # optional features
  expire-snapshots:
    enabled: true
    thread-count: 10

  clean-orphan-files:
    enabled: true
    thread-count: 10

  clean-dangling-delete-files:
    enabled: true
    thread-count: 10

  sync-hive-tables:
    enabled: true
    thread-count: 10

  data-expiration:
    enabled: false
    thread-count: 10
    interval: 1d

  auto-create-tags:
    enabled: true
    thread-count: 3
    interval: 60000 # 1min

  # Derby database configuration.
  # database:
  #   type: derby
  #   jdbc-driver-class: org.apache.derby.jdbc.EmbeddedDriver
  #   url: jdbc:derby:/root/amoro/derby-persistent;create=true
  #   connection-pool-max-total: 20
  #   connection-pool-max-idle: 16
  #   connection-pool-max-wait-millis: 1000

  # MySQL database configuration.
  database:
    type: mysql
    jdbc-driver-class: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://127.0.0.1:3306/amoro?useUnicode=true&characterEncoding=UTF8&autoReconnect=true&useAffectedRows=true&allowPublicKeyRetrieval=true&useSSL=false
    username: root
    password: 123456
    connection-pool-max-total: 20
    connection-pool-max-idle: 16
    connection-pool-max-wait-millis: 1000

  # Postgres database configuration.
  # database:
  #   type: postgres
  #   jdbc-driver-class: org.postgresql.Driver
  #   url: jdbc:postgresql://127.0.0.1:5432/db
  #   username: user
  #   password: passwd
  #   connection-pool-max-total: 20
  #   connection-pool-max-idle: 16
  #   connection-pool-max-wait-millis: 1000

  terminal:
    backend: local
    local.spark.sql.iceberg.handle-timestamp-without-timezone: false

  # Kyuubi terminal backend configuration.
  # terminal:
  #   backend: kyuubi
  #   kyuubi.jdbc.url: jdbc:hive2://127.0.0.1:10009/

  # High availability configuration.
  # ha:
  #   enabled: true
  #   cluster-name: default
  #   zookeeper-address: 192.168.88.170:2181,192.168.88.104:2182,192.168.88.164:2183

containers:
  - name: localContainer
    container-impl: org.apache.amoro.server.manager.LocalOptimizerContainer
    properties:
      export.JAVA_HOME: "/usr/local/jdk" # JDK environment

  # - name: KubernetesContainer
  #   container-impl: org.apache.amoro.server.manager.KubernetesOptimizerContainer
  #   properties:
  #     kube-config-path: ~/.kube/config
  #     image: apache/amoro:{version}
  #     namespace: default

  - name: flinkContainer
    container-impl: org.apache.amoro.server.manager.FlinkOptimizerContainer
    properties:
      flink-home: /usr/local/service/flink/ # Flink install home
      target: yarn-per-job # Flink run target (yarn-per-job, yarn-application, kubernetes-application)
      export.JVM_ARGS: -Djava.security.krb5.conf=/etc/krb5.conf # Flink launch JVM args, e.g. kerberos config when using kerberos
      export.HADOOP_CONF_DIR: /usr/local/service/hadoop/etc/hadoop/ # Hadoop config dir
      export.HADOOP_USER_NAME: hadoop # Hadoop user for submitting on YARN
      export.FLINK_CONF_DIR: /usr/local/service/flink/conf/ # Flink config dir
      # # flink kubernetes application properties.
      # job-uri: "local:///opt/flink/usrlib/optimizer-job.jar" # Optimizer job main jar for kubernetes application
      # flink-conf.kubernetes.container.image: "apache/amoro-flink-optimizer:{version}" # Optimizer image ref
      # flink-conf.kubernetes.service-account: flink # Service account used within the kubernetes cluster
      flink-conf.jobmanager.memory.process.size: 1024M
      flink-conf.taskmanager.memory.process.size: 1024M

  - name: sparkContainer
    container-impl: org.apache.amoro.server.manager.SparkOptimizerContainer
    properties:
      spark-home: /usr/local/service/spark/ # Spark install home
      master: yarn # The cluster manager to connect to, see https://spark.apache.org/docs/latest/submitting-applications.html#master-urls
      deploy-mode: cluster # Spark deploy mode, client or cluster
      export.JVM_ARGS: -Djava.security.krb5.conf=/etc/krb5.conf # Spark launch JVM args, e.g. kerberos config when using kerberos
      export.HADOOP_CONF_DIR: /usr/local/service/hadoop/etc/hadoop/ # Hadoop config dir
      export.HADOOP_USER_NAME: hadoop # Hadoop user for submitting on YARN
      export.SPARK_CONF_DIR: /usr/local/service/spark/conf/ # Spark config dir
      # # spark kubernetes application properties.
      # job-uri: "local:///opt/spark/usrlib/optimizer-job.jar" # Optimizer job main jar for kubernetes application
      # ams-optimizing-uri: thrift://ams.amoro.service.local:1261 # AMS optimizing uri
      # spark-conf.spark.dynamicAllocation.enabled: "true" # Enabling DRA makes full use of computing resources
      spark-conf.spark.shuffle.service.enabled: "true" # Set this to false if Spark DRA is used on kubernetes
      spark-conf.spark.dynamicAllocation.shuffleTracking.enabled: "true" # Enables shuffle file tracking for executors, allowing dynamic allocation without an external shuffle service
      # spark-conf.spark.kubernetes.container.image: "apache/amoro-spark-optimizer:{version}" # Optimizer image ref
      # spark-conf.spark.kubernetes.namespace: <spark-namespace> # Namespace used within the kubernetes cluster
      # spark-conf.spark.kubernetes.authenticate.driver.serviceAccountName: <spark-sa> # Service account used within the kubernetes cluster
      spark-conf.spark.driver.userClassPathFirst: "true"
      spark-conf.spark.executor.userClassPathFirst: "true"
      spark-conf.spark.executor.instances: 1
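After editing, a quick grep can confirm the three items listed above were actually changed; a minimal sketch, assuming the file is conf/config.yaml as in the default layout:
# spot-check the edited keys (adjust the path if your layout differs)
grep -nE 'server-expose-host|bind-port|jdbc:mysql' /root/wang/amoro/conf/config.yaml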
6. Move the directory to the service path
cp -R amoro /usr/local/service/
7. Fix directory ownership and permissions
cd /usr/local/service/
chown hadoop:hadoop amoro -R
chmod 755 -R amoro
8. Service management (as the hadoop user)
sudo su - hadoop
cd /usr/local/service/amoro/bin
Start the service: sh ams.sh start
Stop the service: sh ams.sh stop
Restart the service: sh ams.sh restart
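A quick way to confirm AMS came up after sh ams.sh start: probe the HTTP port from the config (9092 here) and tail the service logs. The log path below is an assumption based on the install directory; adjust it if your build writes logs elsewhere:
# expect HTTP 200 (or a redirect to the login page) once AMS is up
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:9092
# follow startup output; the exact log file name can differ between versions
tail -f /usr/local/service/amoro/logs/*.log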
III. Management
1. Automatic maintenance (self-optimizing) is disabled by default. Set the following table properties to grey-release maintenance on selected tables:
alter table data_lake_ods.test_table set tblproperties ('self-optimizing.enabled'='true','clean-dangling-delete-files.enabled'='true','clean-orphan-file.enabled'='true','table-expire.enabled' = 'true');
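To confirm the flags took effect on a table (or to roll it back out of the grey release), the properties can be inspected and unset from the same SQL endpoint; a sketch via the spark-sql CLI, assuming that is where the ALTER TABLE above is executed:
# show the table properties that drive Amoro's self-optimizing and cleanup features
spark-sql -e "SHOW TBLPROPERTIES data_lake_ods.test_table;"
# take the table back out of the grey release
spark-sql -e "ALTER TABLE data_lake_ods.test_table UNSET TBLPROPERTIES ('self-optimizing.enabled');"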
2. Calling the API
curl -H "Authorization: Basic <base64-token>" http://127.0.0.1:9092/api/ams/v1/optimize/optimizerGroups/all/optimizers
How to generate the Authorization value (the <base64-token> placeholder): Base64(username:password), as shown in the sketch below.
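For example, with the default admin/admin account from the config above (replace with your own credentials):
# Base64-encode "username:password" for HTTP Basic auth
AUTH=$(echo -n 'admin:admin' | base64)
curl -H "Authorization: Basic ${AUTH}" http://127.0.0.1:9092/api/ams/v1/optimize/optimizerGroups/all/optimizers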
IV. Articles
1. NetEase's Lakehouse Management System Amoro Enters the Apache Incubator
https://www.sohu.com/a/767189247_355140