1. Overall Architecture
2. Installation and Deployment
2.1 Resource Planning
| Instance | Port name | Default port | Direction | Notes |
|---|---|---|---|---|
| FE | http_port | 8030 -> 8034 | user/FE <--> FE | HTTP port on the FE; serves the web admin UI (default 8030, changed to 8034 here) |
| BE | webserver_port | 8040 -> 8041 | BE <--> FE | HTTP port on the BE; changed to avoid a conflict with a Hadoop port |
| FE | query_port | 9030 | user <--> FE | MySQL-protocol port on the FE; the port clients such as Navicat connect to |
2.2 Deployment Steps
1. Option 1: install from the binary tarball
Download: https://doris.apache.org/zh-CN/download
1. Download apache-doris-2.0.0-bin-x64.tar.gz
2. Extract it
3. Edit the config files
Set the IP and ports
Add JAVA_HOME=/data/module/jdk1.8.0_212
4. Create the data directories
mkdir doris-meta
mkdir doris-storage1
mkdir doris-storage2
5. Create databases and users
# Connect to the FE; no password by default
mysql -h core1 -P 9030 -uroot
# Set the root password:
SET PASSWORD FOR 'root' = PASSWORD('000000');
# Reconnect with the password:
mysql -h master1 -P 9030 -uroot -p000000
# Add users and grant privileges:
CREATE USER 'test' IDENTIFIED BY 'test';
GRANT ALL ON wafbase TO test;
CREATE USER 'sase'@'%' IDENTIFIED BY 'Xnetworks.c0M';
GRANT SELECT_PRIV ON internal.wafbase.* TO 'test'@'%';
SHOW ROLES;
SHOW ALL GRANTS;
# View SQL syntax help:
HELP ALTER TABLE;
# Use IP addresses when adding BEs, not hostnames
ALTER SYSTEM ADD BACKEND "172.31.12.14:9050";
ALTER SYSTEM ADD BACKEND "172.31.12.10:9050";
ALTER SYSTEM ADD BACKEND "172.31.12.5:9050";
SHOW PROC '/backends';
# Add FE nodes (follower and observer)
ALTER SYSTEM ADD FOLLOWER "172.31.12.14:9010";
ALTER SYSTEM ADD OBSERVER "172.31.12.10:9010";
SHOW PROC '/frontends';
# Add brokers
ALTER SYSTEM ADD BROKER broker_name "172.31.12.5:8000","172.31.12.10:8000","172.31.12.14:8000";
SHOW PROC "/brokers";
6. Start the processes
On the first startup, newly added FE nodes must be started with --helper pointing at an existing (master) FE:
# On each newly added FE (follower/observer):
fe/bin/start_fe.sh --helper 172.31.12.5:9010 --daemon
# On each BE:
be/bin/start_be.sh --daemon
# On the master FE:
fe/bin/start_fe.sh --daemon
7. Web UI
http://core1:8034/login
2. Option 2: deploy Doris with Manager
Download: https://www.selectdb.com/download/enterprise#manager
QA: https://selectdb.feishu.cn/docx/AdaudiDKQovS7SxQwhucrUg7nPd
1. The user that installs Manager (hadoop) needs passwordless SSH configured
SSH config: vim /etc/ssh/sshd_config (note that paths differ between users)
Switch to the hadoop user and generate the key pair
2. Permissions
Check whether the user may use crontab: crontab -l
Check the setuid/setgid bits: ll /bin/crontab
chmod a+s /bin/crontab
chmod g-s /bin/crontab
vim /etc/security/access.conf
+ : hadoop : crontab : ALL
echo hadoop >> /etc/cron.allow
3. Start
1. Download Enterprise Manager: doris-manager-23.11.3-x64-bin.tar.gz
cd webserver/
bin/start.sh
4. Deploy the Doris cluster from the web UI
Manager: http://172.31.12.14:8004 root/@Xnetworks.c0M
SQL UI: http://172.31.12.14:8010/cluster/query
The SQL web UI requires upgrading apache-doris to selectdb-doris; Manager supports one-click upgrades.
1. Download Enterprise Core: selectdb-doris-2.0.4-b01-bin-x64.tar.gz
2. Move it to the /data/software/doris directory so Manager can find the package and run the upgrade directly
3. Usage
3.1 Data Models
3.1.1 Aggregate
For aggregation workloads.
Key: dimension columns; they are also the sort columns and the uniqueness columns
Value: metric columns, each with an AggregationType:
SUM / MAX / MIN / REPLACE / REPLACE_IF_NOT_NULL / HLL_UNION / BITMAP_UNION
CREATE TABLE IF NOT EXISTS example_tbl_agg
(
`user_id` LARGEINT NOT NULL COMMENT "user id",
`date` DATE NOT NULL COMMENT "data load date",
`timestamp` DATETIME NOT NULL COMMENT "data load timestamp",
`city` VARCHAR(20) COMMENT "user's city",
`age` SMALLINT COMMENT "user age",
`sex` TINYINT COMMENT "user gender",
`last_visit_date` DATETIME REPLACE DEFAULT "1970-01-01 00:00:00" COMMENT "user's last visit time",
`cost` BIGINT SUM DEFAULT "0" COMMENT "user's total spend",
`max_dwell_time` INT MAX DEFAULT "0" COMMENT "user's max dwell time",
`min_dwell_time` INT MIN DEFAULT "99999" COMMENT "user's min dwell time"
)
AGGREGATE KEY(`user_id`, `date`, `timestamp` ,`city`, `age`, `sex`)
DISTRIBUTED BY HASH(`user_id`) BUCKETS 1
PROPERTIES ("replication_allocation" = "tag.location.default: 1");
Limitations:
1. Queries that use an aggregate other than the column's declared type are slower, e.g. querying MIN(cost) when cost is declared SUM;
2. count(*) is expensive; instead, add a SUM column whose value is always 1 and run select sum(count) from table.
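The count(*) workaround can be sketched as follows; `cnt` is a hypothetical column name added here for illustration:

```sql
-- Hypothetical sketch: add a SUM column that every load sets to 1, so row
-- counting becomes a cheap pre-aggregated SUM instead of a full count(*).
ALTER TABLE example_tbl_agg ADD COLUMN `cnt` BIGINT SUM DEFAULT "1" COMMENT "row counter";
-- Per aggregate key, Doris keeps the sum of 1s over the merged source rows,
-- so this returns the number of originally loaded rows:
SELECT SUM(`cnt`) FROM example_tbl_agg;
```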
3.1.2 Unique
UNIQUE KEY: deduplicates rows by primary key; merge-on-write by default; key columns are also the sort and uniqueness columns
The effect is equivalent to REPLACE in the Aggregate model
CREATE TABLE IF NOT EXISTS example_tbl_unique
(
`user_id` LARGEINT NOT NULL COMMENT "user id",
`username` VARCHAR(50) NOT NULL COMMENT "user nickname",
`city` VARCHAR(20) COMMENT "user's city",
`age` SMALLINT COMMENT "user age",
`sex` TINYINT COMMENT "user gender",
`phone` LARGEINT COMMENT "user phone",
`address` VARCHAR(500) COMMENT "user address",
`register_time` DATETIME COMMENT "user registration time"
)
UNIQUE KEY(`user_id`, `username`)
DISTRIBUTED BY HASH(`user_id`) BUCKETS 1
PROPERTIES ("replication_allocation" = "tag.location.default: 1");
Limitation: the Unique model is a special case of the Aggregate model
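As a sketch of that equivalence, the Unique table above could be written as an Aggregate table in which every non-key column uses REPLACE (the table name example_tbl_unique_as_agg is hypothetical):

```sql
-- Sketch: Aggregate-model equivalent of the Unique table above.
-- Each non-key column keeps only the latest loaded value via REPLACE.
CREATE TABLE IF NOT EXISTS example_tbl_unique_as_agg
(
`user_id` LARGEINT NOT NULL COMMENT "user id",
`username` VARCHAR(50) NOT NULL COMMENT "user nickname",
`city` VARCHAR(20) REPLACE COMMENT "user's city",
`age` SMALLINT REPLACE COMMENT "user age",
`sex` TINYINT REPLACE COMMENT "user gender",
`phone` LARGEINT REPLACE COMMENT "user phone",
`address` VARCHAR(500) REPLACE COMMENT "user address",
`register_time` DATETIME REPLACE COMMENT "user registration time"
)
AGGREGATE KEY(`user_id`, `username`)
DISTRIBUTED BY HASH(`user_id`) BUCKETS 1
PROPERTIES ("replication_allocation" = "tag.location.default: 1");
```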
3.1.3 Duplicate
The default model; it keeps the raw detail rows. DUPLICATE KEY columns are sort columns only.
CREATE TABLE IF NOT EXISTS example_tbl_duplicate
(
`timestamp` DATETIME NOT NULL COMMENT "log time",
`type` INT NOT NULL COMMENT "log type",
`error_code` INT COMMENT "error code",
`error_msg` VARCHAR(1024) COMMENT "error details",
`op_id` BIGINT COMMENT "owner id",
`op_time` DATETIME COMMENT "handling time"
)
DUPLICATE KEY(`timestamp`, `type`, `error_code`)
DISTRIBUTED BY HASH(`type`) BUCKETS 1
PROPERTIES ("replication_allocation" = "tag.location.default: 1");
3.2 Data Partitioning
3.2.1 Partition & Bucket
A table's data volume can be checked with SHOW DATA; divide the result by the replica count to get the actual data size.
Partition: Range or List; keep each partition at or below roughly 50GB.
Bucket (Tablet): Hash or Random; keep each tablet at roughly 1G-10G.
Suggested sizing:

| Table size | Suggested layout |
|---|---|
| 500MB | 4-8 buckets |
| 5GB | 8-16 buckets |
| 50GB | 32 buckets |
| 500GB | 10 partitions, 32 buckets each |
| 5TB | 100 partitions, 32 buckets each |
3.2.2 Dynamic Partitioning
CREATE TABLE event_log (
ts DATETIME
)
DUPLICATE KEY(ts)
PARTITION BY RANGE(ts) ()
DISTRIBUTED BY HASH(ts)
PROPERTIES
(
"replication_num" = "1",
"dynamic_partition.enable" = "true",
"dynamic_partition.time_unit" = "DAY",
"dynamic_partition.start" = "-30", --删除历史分区
"dynamic_partition.end" = "2",
"dynamic_partition.prefix" = "p",
"dynamic_partition.buckets" = "32"
);
SHOW DYNAMIC PARTITION TABLES;
3.3 Query Optimization
3.3.1 Rollup
ROLLUP tables in the Aggregate and Unique models
A ROLLUP aggregates the Base table's data further at a coarser, specified granularity.
CREATE TABLE IF NOT EXISTS example_site_visit2
(
`user_id` LARGEINT NOT NULL COMMENT "user id",
`date` DATE NOT NULL COMMENT "data load date",
`timestamp` DATETIME COMMENT "data load time, to the second",
`city` VARCHAR(20) COMMENT "user's city",
`age` SMALLINT COMMENT "user age",
`sex` TINYINT COMMENT "user gender",
`last_visit_date` DATETIME REPLACE DEFAULT "1970-01-01 00:00:00" COMMENT "user's last visit time",
`cost` BIGINT SUM DEFAULT "0" COMMENT "user's total spend",
`max_dwell_time` INT MAX DEFAULT "0" COMMENT "user's max dwell time",
`min_dwell_time` INT MIN DEFAULT "99999" COMMENT "user's min dwell time"
)
AGGREGATE KEY(`user_id`, `date`, `timestamp`, `city`, `age`, `sex`)
DISTRIBUTED BY HASH(user_id) BUCKETS 10 PROPERTIES("replication_num" = "1");
insert into example_site_visit2 values
(10000,'2017-10-01','2017-10-01 08:00:05','北京',20,0,'2017-10-01 06:00:00',20,10,10),
(10000,'2017-10-01','2017-10-01 09:00:05','北京',20,0,'2017-10-01 07:00:00',15,2,2),
(10001,'2017-10-01','2017-10-01 18:12:10','北京',30,1,'2017-10-01 17:05:45',2,22,22),
(10002,'2017-10-02','2017-10-02 13:10:00','上海',20,1,'2017-10-02 12:59:12',200,5,5),
(10003,'2017-10-02','2017-10-02 13:15:00','广州',32,0,'2017-10-02 11:20:00',30,11,11),
(10004,'2017-10-01','2017-10-01 12:12:48','深圳',35,0,'2017-10-01 10:00:15',100,3,3),
(10004,'2017-10-03','2017-10-03 12:38:20','深圳',35,0,'2017-10-03 10:20:22',11,6,6);
select * from example_site_visit2;
desc example_site_visit2 all;
alter table example_site_visit2 add rollup rollup_cost_userid(user_id,cost);
alter table example_site_visit2 add rollup rollup_city_age_cost_maxd_mind(city,age,cost,max_dwell_time,min_dwell_time);
explain SELECT sum(cost),user_id FROM example_site_visit2 GROUP BY user_id;
explain SELECT sum(cost),user_id,city FROM example_site_visit2 GROUP BY user_id,city;
explain SELECT city, age,max(max_dwell_time), min(min_dwell_time), sum(cost) FROM example_site_visit2 GROUP BY age,city;
SHOW ALTER TABLE ROLLUP;
ROLLUP in the Duplicate model
Unlike MySQL, Doris does not support creating an index on arbitrary columns. It uses a prefix index (the first 36 bytes of the key columns), so adjust the column order (e.g. with a ROLLUP) so that queries hit the prefix index.
Notes:
1. Users can create and drop ROLLUP tables, but whether a query hits one is decided automatically by Doris;
2. The more ROLLUPs you create, the more disk space is used: queries get faster, but imports slow down.
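The prefix-index point can be illustrated with a hedged sketch (access_log_demo and its columns are hypothetical):

```sql
-- Hypothetical Duplicate table: the prefix index covers the leading key
-- columns, so filters on `ts` hit the index...
CREATE TABLE IF NOT EXISTS access_log_demo
(
`ts` DATETIME NOT NULL COMMENT "event time",
`user_id` BIGINT COMMENT "user id",
`url` VARCHAR(256) COMMENT "requested url"
)
DUPLICATE KEY(`ts`, `user_id`)
DISTRIBUTED BY HASH(`user_id`) BUCKETS 8
PROPERTIES ("replication_num" = "1");
-- ...while filters on `user_id` alone do not. A ROLLUP with a different
-- column order adds a second prefix index for those queries:
ALTER TABLE access_log_demo ADD ROLLUP r_user_ts(`user_id`, `ts`, `url`);
```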
3.3.2 Materialized Views
View: a stored SQL statement
Materialized view: a stored SQL statement plus its precomputed result
HELP CREATE MATERIALIZED VIEW
HELP ALTER TABLE;
create database test_db;
CREATE TABLE flinktest (
siteid INT DEFAULT '10',
citycode SMALLINT,
username VARCHAR(32) DEFAULT '',
pv BIGINT SUM DEFAULT '0'
)
AGGREGATE KEY(siteid, citycode, username)
DISTRIBUTED BY HASH(siteid)
BUCKETS 10
PROPERTIES("replication_num" = "1");
insert into flinktest values
(1, 1, 'jim', 2),
(2, 1, 'grace', 2),
(3, 2, 'tom', 2),
(4, 3, 'bush', 3),
(5, 3, 'helen', 3);
SELECT * from flinktest;
show data;
SHOW ALTER TABLE COLUMN;
desc flinktest;
ALTER TABLE flinktest ADD COLUMN uv BIGINT SUM DEFAULT '0' after pv;
ALTER TABLE flinktest ADD COLUMN ct BIGINT SUM DEFAULT '0' after pv;
ALTER TABLE flinktest DROP COLUMN uv;
ALTER TABLE flinktest MODIFY COLUMN username VARCHAR(32);
ALTER TABLE event_log RENAME COLUMN ruleId rule_id;
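Tying back to the definition at the top of this section, a single-table materialized view on flinktest can be sketched as follows (mv_site_pv is a hypothetical name):

```sql
-- Materialized view that pre-aggregates pv per siteid.
CREATE MATERIALIZED VIEW mv_site_pv AS
SELECT siteid, SUM(pv) FROM flinktest GROUP BY siteid;
-- The build runs asynchronously; check progress with:
SHOW ALTER TABLE MATERIALIZED VIEW FROM test_db;
-- Doris rewrites matching queries to the MV automatically; verify with:
EXPLAIN SELECT siteid, SUM(pv) FROM flinktest GROUP BY siteid;
```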
4. Data Import
Broker Load
create table hdfs_student
(
id int,
name varchar(50),
age int,
score decimal(10,4)
)
DUPLICATE KEY(id)
DISTRIBUTED BY HASH(id) BUCKETS 10 PROPERTIES("replication_num" = "1");
hadoop fs -put student.csv /
LOAD LABEL test_db.hdfs_student
(
DATA INFILE("hdfs://master1:8020/student.csv")
INTO TABLE `hdfs_student`
COLUMNS TERMINATED BY ","
FORMAT AS "csv"
(id, name, age, score)
)
WITH BROKER broker_name
PROPERTIES
(
"timeout" = "3600"
);
show load order by createtime desc limit 1;
select * from hdfs_student;
Stream Load
curl --location-trusted -u root:000000 -H "label:stream" -H "column_separator:," -T student.csv http://172.31.12.5:8034/api/test_db/hdfs_student/_stream_load
References:
Doris:
A comprehensive collection of Apache Doris learning materials (docs/videos/slides), continuously updated
Kafka:
Doris: https://doris.apache.org/zh-CN/docs/dev/data-operate/import/import-scenes/kafka-load/
SSL authentication: https://www.cnblogs.com/felixzh/p/14661156.html
Flink:
https://doris.apache.org/zh-CN/docs/dev/ecosystem/flink-doris-connector/
https://mp.weixin.qq.com/s/3VvaC8uzhkhJaHXhoGi2TQ