
DataSphere Studio AppConn Deployment

Posted: 2024-09-26 12:53:20
Tags: dolphinscheduler, mnt, dss, sh, Studio, DataSphere, AppConn, linkis, datasphere

I. Exchangis AppConn Deployment

Reference documentation:

https://github.com/WeBankFinTech/Exchangis/blob/master/docs/zh_CN/ch1/exchangis_appconn_deploy_cn.md

https://github.com/WeBankFinTech/Exchangis/blob/dev-1.0.0/docs/zh_CN/ch1/exchangis_deploy_cn.md

1. Install ZooKeeper (standalone)

tar xf zookeeper-3.5.7.tar.gz -C /mnt/datasphere/
cd /mnt/datasphere/
ln -sv zookeeper-3.5.7/ zookeeper
cd zookeeper/conf
mv zoo_sample.cfg zoo.cfg 

vim /etc/profile.d/zookeeper.sh
export ZOOKEEPER_HOME=/mnt/datasphere/zookeeper
export ZOOCFGDIR=/mnt/datasphere/zookeeper/conf
export PATH=$PATH:${ZOOKEEPER_HOME}/bin

.  /etc/profile.d/zookeeper.sh
zkServer.sh start
zkServer.sh status
netstat -tnlp|grep 2181
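Renaming zoo_sample.cfg keeps the sample defaults, including a dataDir under /tmp that does not survive reboots. A minimal sketch of an explicit standalone zoo.cfg (the temp directory stands in for /mnt/datasphere/zookeeper/conf, and the dataDir path is our assumption):

```shell
# Write a minimal standalone zoo.cfg; ZK_CONF_DIR stands in for
# /mnt/datasphere/zookeeper/conf and the dataDir path is illustrative.
ZK_CONF_DIR="$(mktemp -d)"
cat > "${ZK_CONF_DIR}/zoo.cfg" <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/mnt/datasphere/zookeeper/data
clientPort=2181
EOF
grep '^clientPort=' "${ZK_CONF_DIR}/zoo.cfg"
```

Pointing dataDir at a persistent directory is the usual first edit to the sample file.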

2. Install Sqoop

tar xf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz -C /mnt/datasphere/
mv mysql-connector-java-5.1.27-bin.jar /mnt/datasphere/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/lib/  # the sqoop symlink does not exist yet, so use the real directory name
cd /mnt/datasphere/
ln -sv sqoop-1.4.6.bin__hadoop-2.0.4-alpha/ sqoop

vim /etc/profile.d/sqoop.sh
export PATH=${PATH}:/mnt/datasphere/sqoop/bin/
export SQOOP_HOME=/mnt/datasphere/sqoop

. /etc/profile.d/sqoop.sh

sqoop list-databases --connect jdbc:mysql://192.168.1.134:3306/ --username root  -P  # verify the configuration works
Enter password:     # the MySQL password for the user given above
information_schema
dss
hive
mysql
performance_schema
sys

3. Install and deploy the Exchangis server

Download the binary packages (use the links provided on the official page):

wget https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/Exchangis/exchangis1.0.0/wedatasphere-exchangis-1.0.0.tar.gz
wget https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/Exchangis/exchangis1.0.0/exchangis-appconn.zip
wget https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/Exchangis/exchangis1.0.0/dist.zip

Add a data source authentication token for Exchangis (run against the dss database):

INSERT INTO `linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`) VALUES ('EXCHANGIS-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
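Once this row exists, clients authenticate to the gateway by presenting the token in request headers. A hedged sketch of what such a call could look like: the Token-Code/Token-User header names follow Linkis' static-token convention, and the endpoint and helper name are our own illustration, so verify both against your Linkis version.

```shell
# Build (but do not execute) a curl command that presents a gateway auth token.
# Header names and the endpoint are assumptions; check them against your Linkis docs.
build_token_curl() {
  gateway="$1"; token="$2"; user="$3"
  printf 'curl -H "Token-Code: %s" -H "Token-User: %s" %s/api/rest_j/v1/user/userInfo' \
    "$token" "$user" "$gateway"
}
build_token_curl "http://192.168.1.134:9001" "EXCHANGIS-AUTH" "hadoop"
```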

Add Hive data source authentication for Exchangis. Replace 192.168.1.134:3306 with the host and port of your Hive metastore:

INSERT INTO `linkis_ps_dm_datasource_env` (`env_name`, `env_desc`, `datasource_type_id`, `parameter`, `create_time`, `create_user`, `modify_time`, `modify_user`) VALUES ('开发环境SIT', '开发环境SIT', 4, '{"uris":"thrift://192.168.1.134:3306", "hadoopConf":{"hive.metastore.execute.setugi":"true"}}',  now(), NULL,  now(), NULL);
INSERT INTO `linkis_ps_dm_datasource_env` (`env_name`, `env_desc`, `datasource_type_id`, `parameter`, `create_time`, `create_user`, `modify_time`, `modify_user`) VALUES ('开发环境UAT', '开发环境UAT', 4, '{"uris":"thrift://192.168.1.134:3306", "hadoopConf":{"hive.metastore.execute.setugi":"true"}}',  now(), NULL,  now(), NULL);
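The `parameter` column in both rows is the same JSON apart from the metastore address. A small sketch (the helper name is ours, not official tooling) that builds it from a host:port, which helps keep additional environments consistent:

```shell
# Build the parameter JSON for a linkis_ps_dm_datasource_env row from a
# Hive metastore host:port (helper name is illustrative only).
make_hive_env_param() {
  printf '{"uris":"thrift://%s", "hadoopConf":{"hive.metastore.execute.setugi":"true"}}' "$1"
}
make_hive_env_param "192.168.1.134:3306"
```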

Unpack and configure

#unpack and set parameters
mkdir  /mnt/datasphere/exchangis-server/
mv wedatasphere-exchangis-1.0.0.tar.gz  /mnt/datasphere/exchangis-server/
cd  /mnt/datasphere/exchangis-server/
tar xf wedatasphere-exchangis-1.0.0.tar.gz 

vim config/config.sh 
#IP of the LINKIS_GATEWAY service, used to locate linkis-mg-gateway
LINKIS_GATEWAY_HOST=192.168.1.134

#Port of the LINKIS_GATEWAY service, used to locate linkis-mg-gateway
LINKIS_GATEWAY_PORT=9001

#Exchangis service port
EXCHANGIS_PORT=9080

#Eureka service URL
EUREKA_URL=http://192.168.1.134:9600/eureka/


vim config/db.sh 
MYSQL_HOST=192.168.1.134
MYSQL_PORT=3306
MYSQL_USERNAME=root
MYSQL_PASSWORD=Qwer@123
DATABASE=exchangis  # created automatically; no need to create it in advance
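Before running the installer, it is worth checking that db.sh defines everything it should. A hedged sketch: the function name is ours, and the variable list is taken from the config above, not from the installer itself.

```shell
# Source a db.sh-style file and verify the expected variables are non-empty.
check_db_config() {
  # shellcheck disable=SC1090
  . "$1"
  for v in MYSQL_HOST MYSQL_PORT MYSQL_USERNAME MYSQL_PASSWORD DATABASE; do
    eval "val=\${$v}"
    [ -n "$val" ] || { echo "missing: $v"; return 1; }
  done
  echo "db.sh ok"
}
```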



sbin/install.sh
Do you want to initalize database with sql? (Y/N)y  # answer y to initialize the database

Start the service

sbin/daemon.sh start server

Confirm the service has started and registered successfully:


4. Install and deploy the Exchangis frontend

#Configure Nginx
vim /etc/nginx/conf.d/exchangis.conf
server {
    listen       10001; 
    server_name  localhost;
    location / {
    root   /appcom/Install/exchangis; # Exchangis frontend deployment directory
    autoindex on;
    }

    location /api {
    proxy_pass http://192.168.1.134:9001;  # address of the backend linkis-mg-gateway; change as needed
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header x_real_ipP $remote_addr;
    proxy_set_header remote_addr $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_http_version 1.1;
    proxy_connect_timeout 4s;
    proxy_read_timeout 600s;
    proxy_send_timeout 12s;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection upgrade;
    }


    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
    root   /usr/share/nginx/html;
    }
}

nginx -t
nginx -s reload

mkdir /appcom/Install/exchangis
unzip dist.zip
mv dist/* /appcom/Install/exchangis

Visit http://192.168.1.134:10001/#/projectManage

5. Install the Exchangis AppConn

mv exchangis-appconn.zip /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/dss-appconns/
cd /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/dss-appconns/
unzip exchangis-appconn.zip
cd ../bin
./appconn-install.sh   # enter the string "exchangis" plus the Exchangis service IP and port; the port here is the frontend port configured in Nginx, i.e. 10001

cd ../sbin 
sh ./dss-stop-all.sh
sh ./dss-start-all.sh

6. Verify

Create a new project test2:


Refresh the Exchangis frontend and check that the project was synced.


II. Visualis

Reference documentation:

https://github.com/WeBankFinTech/Visualis/blob/master/visualis_docs/zh_CN/Visualis_deploy_doc_cn.md

https://github.com/WeBankFinTech/Visualis/blob/master/visualis_docs/zh_CN/Visualis_appconn_install_cn.md

1. Install Maven

# pick the version the docs require; other versions: https://archive.apache.org/dist/maven/
tar xf apache-maven-3.6.3-bin.tar.gz -C /mnt/datasphere
cd /mnt/datasphere
ln -sv apache-maven-3.6.3 maven
sudo vim /etc/profile.d/maven.sh
export MAVEN_HOME=/mnt/datasphere/maven
export PATH=$PATH:$MAVEN_HOME/bin
. /etc/profile.d/maven.sh
mvn -v

2. Install Node.js/npm

# available versions: https://nodejs.org/dist/
wget --no-check-certificate https://npm.taobao.org/mirrors/node/v16.13.0/node-v16.13.0-linux-x64.tar.gz
tar xf node-v16.13.0-linux-x64.tar.gz -C /mnt/datasphere/
rm -f /usr/local/bin/node  # remove any old node, if needed
rm -f /usr/local/bin/npm   # remove any old npm, if needed
ln -sv /mnt/datasphere/node-v16.13.0-linux-x64/bin/node /usr/local/bin/node
ln -sv /mnt/datasphere/node-v16.13.0-linux-x64/bin/npm /usr/local/bin/npm
node -v
npm -v

3. Build the backend

git clone  https://github.com/WeBankFinTech/Visualis.git
cd Visualis
git checkout 1.0.0
mvn clean package -DskipTests=true
mv  assembly/target/visualis-server.zip /mnt/datasphere/

4. Build the frontend

cd webapp 
cp package-lock.json package-lock.json.bak  # keep a backup
vim package-lock.json
:g/"resolved".*/d  # in vim, delete every line containing "resolved"
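The same cleanup can be done non-interactively with sed instead of the vim command; a sketch, demonstrated on a throwaway file rather than the real package-lock.json:

```shell
# Delete every line containing "resolved" (mirrors :g/"resolved".*/d in vim),
# keeping a .bak backup; this run uses a scratch file for demonstration.
f="$(mktemp)"
printf '  "resolved": "https://registry.npmjs.org/x",\n  "version": "1.0.0",\n' > "$f"
sed -i.bak '/"resolved"/d' "$f"
cat "$f"
```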

npm i --registry https://registry.npm.taobao.org
npm run build --registry https://registry.npm.taobao.org
# if the build hangs without printing errors, Ctrl+C to interrupt; the build directory holds the frontend assets
mkdir  /appcom/Install/visualis/dss/visualis
mv build/* /appcom/Install/visualis/dss/visualis

5. Build the Visualis AppConn component

cd visualis-appconn
mvn clean package -DskipTests=true
mv target/visualis.zip /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/dss-appconns/

6. Initialize the database

mysql -uroot -h 192.168.1.134 -p
mysql> create database visualis default character set utf8mb4 COLLATE utf8mb4_general_ci;
mysql> use visualis;
mysql> source /home/hadoop/Visualis/db/davinci.sql; # under the db directory of the source tree
mysql> source /home/hadoop/Visualis/db/ddl.sql;

7. Install fonts

sudo yum -y install fontconfig-devel
sudo mkdir -p  /usr/share/fonts/visualis
sudo mv /home/hadoop/Visualis/ext/pf.ttf /usr/share/fonts/visualis

8. Configure the frontend

vim /etc/nginx/conf.d/dss.conf 
    location /dss/visualis {
    root   /appcom/Install/visualis; # change to the static files directory
    autoindex on;
    }
nginx -t
nginx -s reload

9. Configure the backend

cd /mnt/datasphere/
unzip visualis-server.zip
cd visualis-server/conf

vim application.yml     # adjust for your deployment
# ##################################
# 1. Visualis Service configuration   # the Visualis frontend address and port
# ##################################
server:
  protocol: http
  address: 192.168.1.134 # server ip address
  port:  9018 # server port
  url: http://192.168.1.134:8085/dss/visualis # frontend index page full path
  access:
    address: 192.168.1.134 # frontend address
    port: 8085 # frontend port


# ##################################
# 2. eureka configuration   # the Eureka address
# ##################################
eureka:
  client:
    serviceUrl:
      defaultZone: http://192.168.1.134:9600/eureka/ # Configuration required
  instance:
    metadata-map:
      test: wedatasphere


# ##################################
# 3. Spring configuration   # database settings
# ##################################
spring:
  main:
    allow-bean-definition-overriding: true
  application:
    name: visualis-dev
  datasource: # Visualis must be deployed on the same database instance as DSS
    url: jdbc:mysql://192.168.1.134:3306/visualis?characterEncoding=UTF-8&allowMultiQueries=true # Configuration required
    username: root
    password: Qwer@123


vim linkis.properties
wds.linkis.gateway.url=http://192.168.1.134:9001

cd ../bin/
sh start-visualis-server.sh
less logs/linkis.out  # check the log

Check the registration status in Eureka:


10. Install the AppConn

cd /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/dss-appconns/
unzip visualis.zip 
cd ../bin/
>> sh appconn-install.sh

# enter the AppConn name
>> visualis

# enter the Visualis frontend IP address
>> 192.168.1.134

# enter the Visualis frontend port
>> 8085

#restart the DSS services
sh dss-stop-all.sh 
sh dss-start-all.sh

11. Notes

Dragging nodes into a project that was created before Visualis was deployed raises a "project is not found" error; if you need Visualis, create the project after deploying it.


III. Streamis AppConn Installation

Dependencies: see the Linkis backend build docs: https://linkis.apache.org/zh-CN/docs/1.1.1/development/linkis-compile-and-package/

1. Install Scala

wget https://downloads.lightbend.com/scala/2.11.2/scala-2.11.2.tgz
tar xf scala-2.11.2.tgz -C /mnt/datasphere/
cd /mnt/datasphere/
ln -sv scala-2.11.2 scala

vim /etc/profile.d/scala.sh
export SCALA_HOME=/mnt/datasphere/scala
export PATH=$PATH:${SCALA_HOME}/bin
. /etc/profile.d/scala.sh

2. Install Flink

# other versions: https://archive.apache.org/dist/flink/
wget https://archive.apache.org/dist/flink/flink-1.12.2/flink-1.12.2-bin-scala_2.12.tgz
tar xf flink-1.12.2-bin-scala_2.12.tgz -C /mnt/datasphere/  # make sure the Scala version matches
cd /mnt/datasphere/
ln -sv flink-1.12.2/ flink


sudo vim /etc/profile.d/flink.sh 
# make sure the HADOOP_CONF_DIR and HADOOP_CLASSPATH environment variables are already set
export FLINK_HOME=/mnt/datasphere/flink
export FLINK_CONF_DIR=${FLINK_HOME}/conf/
export FLINK_LIB_DIR=${FLINK_HOME}/lib
export PATH=${FLINK_HOME}/bin:${PATH}

.  /etc/profile.d/flink.sh

3. Build the Flink engine

Flink engine usage: https://linkis.apache.org/zh-CN/docs/1.1.1/engine-usage/flink/

Linkis engine installation: https://linkis.apache.org/zh-CN/docs/1.1.1/deployment/engine-conn-plugin-installation/

git clone https://github.com/apache/linkis.git
cd linkis
git checkout release-1.1.1
vim pom.xml
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>${mysql.connector.version}</version>
    <!--<scope>test</scope>-->  <!-- comment out the test scope -->
</dependency>


mvn -N  install
mvn clean install -DskipTests
cd linkis-engineconn-plugins/engineconn-plugins/flink  # "flink" is the engine name
mvn clean install  # target/out/flink is the engine package
cp -r target/out/flink /mnt/datasphere/dss_linkis_one-click_install_20221201/linkis/lib/linkis-engineconn-plugins
chown -R hadoop:hadoop /mnt/datasphere

4. Add Flink engine labels

mysql -uroot -D dss -p   
SET @FLINK_LABEL="flink-1.12.2";
SET @FLINK_ALL=CONCAT('*-*,',@FLINK_LABEL);
SET @FLINK_IDE=CONCAT('*-IDE,',@FLINK_LABEL);

insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@FLINK_ALL, 'OPTIONAL', 2, now(), now());
insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@FLINK_IDE, 'OPTIONAL', 2, now(), now());

select @label_id := id from linkis_cg_manager_label where `label_value` = @FLINK_IDE;
insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);


INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.flink.url', '例如:http://127.0.0.1:8080', '连接地址', 'http://127.0.0.1:8080', 'Regex', '^\\s*http://([^:]+)(:\\d+)(/[^\\?]+)?(\\?\\S*)?$', 'flink', 0, 0, 1, '数据源配置');
INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.flink.catalog', 'catalog', 'catalog', 'system', 'None', '', 'flink', 0, 0, 1, '数据源配置');
INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.flink.source', 'source', 'source', 'global', 'None', '', 'flink', 0, 0, 1, '数据源配置');



insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
(select config.id as `config_key_id`, label.id AS `engine_type_label_id` FROM linkis_ps_configuration_config_key config
INNER JOIN linkis_cg_manager_label label ON config.engine_conn_type = 'flink' and label_value = @FLINK_ALL);

insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
(select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation
INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @FLINK_ALL);

exit
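The `validate_range` for linkis.flink.url above is a PCRE-style regex (it uses \s and \d). Translated to POSIX ERE, it can be checked locally before inserting the row; the helper below is our sketch, not part of Linkis:

```shell
# Validate a URL against the linkis.flink.url rule, rewritten for grep -E:
# \s -> [[:space:]], \d -> [0-9]. Note the port group is mandatory.
flink_url_ok() {
  printf '%s' "$1" |
    grep -Eq '^[[:space:]]*http://([^:]+)(:[0-9]+)(/[^?]+)?(\?[^[:space:]]*)?$'
}
flink_url_ok "http://127.0.0.1:8080" && echo valid
```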

5. Refresh the engine directory

select * from linkis_cg_engine_conn_plugin_bml_resources;  # check that flink rows exist and note their create time

6. Install and deploy the Streamis backend

mysql -uroot  -p
create database streamis default character set utf8mb4 COLLATE utf8mb4_general_ci;
wget https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/Streamis/0.2.0/wedatasphere-streamis-0.2.0-dist.tar.gz
mkdir /mnt/datasphere/streamis
mv wedatasphere-streamis-0.2.0-dist.tar.gz /mnt/datasphere/streamis/
cd /mnt/datasphere/streamis/
tar xf wedatasphere-streamis-0.2.0-dist.tar.gz 
vim conf/db.sh 
MYSQL_HOST=192.168.1.134
MYSQL_PORT=3306
MYSQL_DB=streamis
MYSQL_USER=root
MYSQL_PASSWORD=Qwer@123


vim conf/config.sh  # adjust to your environment
deployUser=hadoop
STREAMIS_PORT=9400
STREAMIS_INSTALL_HOME=/mnt/datasphere/streamis
EUREKA_INSTALL_IP=192.168.1.134
EUREKA_PORT=9600
GATEWAY_INSTALL_IP=192.168.1.134
GATEWAY_PORT=9001
STREAMIS_SERVER_INSTALL_IP=192.168.1.134
STREAMIS_SERVER_INSTALL_PORT=9400


sh bin/install.sh  # asks whether to initialize the database; choose yes on a first install
sh bin/start.sh
netstat -tnlp|grep 9400  # confirm the port is listening, then check the registration in Eureka

7. Install and deploy the Streamis frontend

wget https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/Streamis/0.2.0/streamis-0.2.0-dist.zip
unzip -q streamis-0.2.0-dist.zip 
mkdir /appcom/Install/streamis
mv dist/* /appcom/Install/streamis

# Nginx config (e.g. /etc/nginx/conf.d/streamis.conf)
server {
    listen       10002;
    server_name  localhost;
    location / {
        root   /appcom/Install/streamis; 
        index  index.html index.htm;
    }
    location /api {
    proxy_pass http://192.168.1.134:9001; 
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header x_real_ipP $remote_addr;
    proxy_set_header remote_addr $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_http_version 1.1;
    proxy_connect_timeout 4s;
    proxy_read_timeout 600s;
    proxy_send_timeout 12s;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection upgrade;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
    root   /usr/share/nginx/html;
    }
}

nginx -t 
nginx -s reload

8. Adjust the MySQL sql_mode

vim  /etc/my.cnf
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
systemctl restart mysqld
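The replacement value keeps strict mode but deliberately leaves out ONLY_FULL_GROUP_BY. A local sketch (no database needed; the function name is ours) that checks whether a sql_mode string matches that intent:

```shell
# Check a sql_mode string: STRICT_TRANS_TABLES must be present and
# ONLY_FULL_GROUP_BY absent (matching the value written to /etc/my.cnf above).
sql_mode_ok() {
  case ",$1," in
    *,ONLY_FULL_GROUP_BY,*) return 1 ;;
  esac
  case ",$1," in
    *,STRICT_TRANS_TABLES,*) return 0 ;;
  esac
  return 1
}
```

On a live server, the effective value comes from `SELECT @@GLOBAL.sql_mode;`, which is what you would feed to a check like this.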

9. Deploy the Streamis AppConn plugin

wget https://github.com/WeBankFinTech/Streamis/archive/refs/tags/0.2.0.tar.gz
tar xf 0.2.0.tar.gz
cd Streamis-0.2.0
cd streamis-appconn
mvn clean install
cp target/streamis.zip /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/dss-appconns/
cd /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/dss-appconns
unzip streamis.zip

cd ../bin
sh ./appconn-install.sh  # enter the string "streamis" plus the Streamis service IP and port; the port is the frontend port configured in Nginx (10002), not the backend service port

cd ../sbin/
sh ./dss-stop-all.sh
sh ./dss-start-all.sh

10. Verify

1. Create a project in the console.
2. Check in the database that a matching project row was created in sync:
SELECT * FROM linkis_stream_project WHERE name = '<project name>';

IV. DolphinScheduler AppConn Installation

Reference documentation:

https://dolphinscheduler.apache.org/zh-cn/docs/1.3.9/standalone-deployment

https://github.com/WeBankFinTech/DataSphereStudio-Doc/blob/main/zh_CN/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/DolphinScheduler%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3.md

1. Install dependencies

Dependency reference: https://dolphinscheduler.apache.org/zh-cn/docs/1.3.9/standalone-deployment

yum install psmisc -y  # the other dependencies were already satisfied on this host

2. Create the database

mysql -uroot -p
CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;  # using root here; if you use another user, grant it privileges on this database

3. Download the MySQL JDBC driver (mysql-connector-java)

Pick a matching version from https://repo1.maven.org/maven2/mysql/mysql-connector-java/ ; version 5.1.47 or later is required.

or

https://mvnrepository.com/artifact/mysql/mysql-connector-java


4. Download DolphinScheduler

wget https://archive.apache.org/dist/dolphinscheduler/1.3.9/apache-dolphinscheduler-1.3.9-bin.tar.gz  # pick the version matching your actual Linkis version; see the reference docs
tar xf apache-dolphinscheduler-1.3.9-bin.tar.gz 
mv mysql-connector-java-5.1.47.jar apache-dolphinscheduler-1.3.9-bin/lib/
cd apache-dolphinscheduler-1.3.9-bin

5. Configure DolphinScheduler

This installation uses Standalone mode, with hadoop as the deploy user.

vi conf/datasource.properties  # fill in the database info for your environment: comment out the PostgreSQL settings and enable MySQL
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://192.168.1.134:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
spring.datasource.username=root
spring.datasource.password=Qwer@123

sh script/create-dolphinscheduler.sh  # creates the tables and imports base data

vim conf/env/dolphinscheduler_env.sh   # JAVA_HOME and PATH are required; unused entries can be omitted or commented out
BASE_DIR=/mnt/datasphere
export HADOOP_HOME=${BASE_DIR}/hadoop
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop/
export SPARK_HOME=${BASE_DIR}/spark
export HIVE_HOME=${BASE_DIR}/hive
export FLINK_HOME=${BASE_DIR}/flink
export JAVA_HOME=/usr/local/java
export PYTHON_HOME=/appcom/Install/anaconda2
export PATH=$HADOOP_HOME/bin:$SPARK_HOME/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$PATH

sudo ln -sv /usr/local/java/bin/java /usr/bin/java



vim conf/config/install_config.conf  # see the docs for what each option means
dbtype="mysql"
dbhost="192.168.1.134:3306"
username="root"
dbname="dolphinscheduler"
password="Qwer@123"
zkQuorum="192.168.1.134:2181"
installPath="/mnt/datasphere/dolphinscheduler"   # note: this is the directory DolphinScheduler is installed into, not the directory you are deploying from
deployUser="hadoop"
mailServerHost="smtp.exmail.qq.com"
mailServerPort="25"
mailSender="xxxxxxxxxx"
mailUser="xxxxxxxxxx"
mailPassword="xxxxxxxxxx"
starttlsEnable="true"
sslEnable="false"
sslTrust="smtp.exmail.qq.com"
dataBasedirPath="/tmp/dolphinscheduler"
resourceStorageType="HDFS"
resourceUploadPath="/data/dolphinscheduler"
defaultFS="file:///data/dolphinscheduler"
resourceManagerHttpAddressPort="8088"
singleYarnIp="192.168.1.134"
hdfsRootUser="hadoop"
kerberosStartUp="false"
krb5ConfPath="$installPath/conf/krb5.conf"
keytabUserName="[email protected]"
keytabPath="$installPath/conf/hdfs.headless.keytab"
kerberosExpireTime="2"
apiServerPort="12345"
ips="localhost"
sshPort="22"
masters="localhost"
workers="localhost::default"
alertServer="localhost"
apiServers="localhost"



sudo mkdir -p /data/dolphinscheduler
sudo chown -R hadoop:hadoop /data/dolphinscheduler

6. Start the DolphinScheduler services

sh install.sh  # run from the unpacked application directory
jps |egrep  'MasterServer|WorkerServer|LoggerServer|ApiApplicationServer|AlertServer'  # check that the services started
18566 ApiApplicationServer
18129 MasterServer
18457 AlertServer
18348 LoggerServer
18238 WorkerServer
ll /mnt/datasphere/dolphinscheduler/logs/  # log directory, in case anything goes wrong

URL: http://192.168.1.134:12345/dolphinscheduler/ui/view/login/index.html

Default credentials: admin / dolphinscheduler123


7. Install the DolphinScheduler AppConn

wget https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/DolphinScheduler/DSS1.1.1_dolphinscheduler/dolphinscheduler-appconn.zip  
mv dolphinscheduler-appconn.zip /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/dss-appconns/
cd /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/dss-appconns/
unzip dolphinscheduler-appconn.zip  

vim dolphinscheduler/appconn.properties
wds.dss.appconn.ds.admin.user=admin 
wds.dss.appconn.ds.admin.token=b245a4931ccbf58850f67ed202fc6eb7  # create a token in the DolphinScheduler UI: Security Center -> Token Manage
wds.dss.appconn.ds.version=1.3.9
wds.dss.appconn.ds.client.home=/mnt/datasphere/dolphinscheduler-client  # install directory of dss-dolphinscheduler-client (installed in a later step; keep the paths consistent)
cd ../bin/
sh appconn-install.sh  # enter the string "dolphinscheduler" plus the DolphinScheduler service IP and port; the port is the DSS Nginx port, i.e. 8085

8. Replace jar packages

wget https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/DolphinScheduler/DSS1.1.1_dolphinscheduler/dss-dolphinscheduler-token-1.1.1.jar  # used for password-free calls to the DolphinScheduler API
mv  dss-dolphinscheduler-token-1.1.1.jar   /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/lib/dss-framework/dss-framework-project-server/
sh sbin/dss-daemon.sh restart project-server

wget https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/DolphinScheduler/DSS1.1.1_dolphinscheduler/dolphinscheduler-prod-metrics-1.1.1-jar-with-dependencies.jar
mv dolphinscheduler-prod-metrics-1.1.1-jar-with-dependencies.jar  /mnt/datasphere/dolphinscheduler/lib/ # copy into the lib directory of the DolphinScheduler deployment
cd /mnt/datasphere/dolphinscheduler/
sh bin/stop-all.sh
sh bin/start-all.sh

9. Configure frontend forwarding

vim /etc/nginx/conf.d/dss.conf  # add the following
location /dolphinscheduler {
    proxy_pass http://192.168.1.134:12345; # address of the backend DolphinScheduler service
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection upgrade;
}

nginx -t
nginx -s reload

10. Configure the scheduler center URL

vim /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/conf/dss-workflow-server.properties
wds.dss.workflow.schedulerCenter.url=http://192.168.1.134:8085/dolphinscheduler # add this property
sh sbin/dss-daemon.sh restart workflow-server

11. Deploy dss-dolphinscheduler-client

wget https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/DolphinScheduler/DSS1.1.1_dolphinscheduler/dss-dolphinscheduler-client.zip
unzip  dss-dolphinscheduler-client.zip

mv dss-dolphinscheduler-client /mnt/datasphere/dolphinscheduler-client   # must match the directory configured in appconn.properties

cd  /mnt/datasphere/dolphinscheduler-client
vim conf/linkis.properties
wds.linkis.gateway.url=192.168.1.134:9001  # gateway address
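Note the value here is a bare host:port, while the Visualis linkis.properties earlier used the full http:// form. A small hedged helper (the function name is ours) to normalize either form to the scheme-qualified one, if you want consistency across configs:

```shell
# Prefix http:// when the gateway address has no scheme; pass through otherwise.
normalize_gateway_url() {
  case "$1" in
    http://*|https://*) printf '%s' "$1" ;;
    *) printf 'http://%s' "$1" ;;
  esac
}
normalize_gateway_url "192.168.1.134:9001"
```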

12. Verify

Create a new workspace (it will not show up in workspaces created earlier).

Confirm password-free login works (the user must already be known to DolphinScheduler; try creating a hadoop user there, then check whether DSS can jump in without logging in).

V. Qualitis AppConn Installation

1. Build Qualitis

cd /mnt/datasphere/
wget https://services.gradle.org/distributions/gradle-4.6-bin.zip
unzip -q gradle-4.6-bin.zip
ln -sv gradle-4.6 gradle
vim ~/.bashrc
export PATH="/mnt/datasphere/gradle/bin/:$PATH"
. ~/.bashrc
gradle -v

wget https://github.com/WeBankFinTech/Qualitis/archive/refs/tags/release-0.9.2.zip
wget https://github.com/WeBankFinTech/Qualitis/releases/download/release-0.9.1/forgerock.zip
unzip -q forgerock.zip
mv forgerock ~/.m2/repository/org/
unzip -q release-0.9.2.zip
cd Qualitis-release-0.9.2
gradle clean distZip  # build artifacts land in build/distributions/

2. Initialize the database

cd build/distributions
unzip -q qualitis-0.9.2.zip
cd qualitis-0.9.2

mysql -uroot -h 192.168.1.134 -p --default-character-set=utf8
mysql> create database qualitis default character set utf8mb4 COLLATE utf8mb4_general_ci;
mysql> use qualitis;
mysql> source conf/database/init.sql
mysql> exit;

3. Configure Qualitis

vim conf/application-dev.yml  # add or modify the following; leave the rest at defaults
server:
  port: 8100
spring:
  datasource:
    username: root
    password: Qwer@123
    url: jdbc:mysql://192.168.1.134:3306/qualitis?createDatabaseIfNotExist=true&useUnicode=true&characterEncoding=utf-8


task:
  persistent:
    type: jdbc
    username: root
    password: Qwer@123
    address: jdbc:mysql://192.168.1.134:3306/qualitis?createDatabaseIfNotExist=true&useUnicode=true&characterEncoding=utf-8

zk:
  address: 192.168.1.134:2181

front_end:
  home_page: http://192.168.1.134:8100/#/dashboard
  domain_name: http://192.168.1.134:8100

cp mysql-connector-java-5.1.27-bin.jar /mnt/datasphere/spark/jars/

4. Start the Qualitis service

dos2unix bin/*
sh bin/start.sh

Open the web UI at http://192.168.1.134:8100/#/home  (username: admin, password: admin)


5. Fill in the system configuration


6. Install the Qualitis AppConn

git clone https://github.com/WeBankFinTech/Qualitis.git
cd Qualitis/appconn
mvn clean install
mv  target/out/qualitis /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/dss-appconns/
cd /mnt/datasphere/dss_linkis_one-click_install_20221201/dss/dss-appconns/
chmod -R 777 qualitis/
cd ../bin/
sh appconn-install.sh   # enter: qualitis 192.168.1.134 8100
cd ../sbin/
sh dss-stop-all.sh 
sh dss-start-all.sh

7. Verify

Create a new workspace.

DataSphere Studio  AppConn 部署_flink_16

Test automatic redirection and password-free login.

There is such a thing as "once and for all" in talk, but things that are truly once and for all are few and far between.



From: https://blog.51cto.com/u_8901540/12118316
