
Implementing SCD-2 (Slowly Changing Dimensions) with Apache Hudi


Data is a valuable asset in today's analytics world. When serving data to end users, it is important to track how that data changes over time. A Slowly Changing Dimension (SCD) is a dimension that stores and manages both current and historical data as it evolves. Among the SCD types, we will focus on Type 2 (SCD-2), which keeps the complete history of values. Each record carries an effective time and an expiration time that identify the period during which the record was active. This can be implemented with a handful of audit columns, for example an effective start date, an effective end date, and an active-record indicator.
Let's look at how this SCD-2 table design can be implemented with Apache Hudi.

Apache Hudi is a next-generation streaming data lake platform. It brings core warehouse and database functionality directly to the data lake. Hudi provides tables, transactions, efficient upserts/deletes, advanced indexing, streaming ingestion services, data clustering/compaction optimizations, and concurrency control, all while keeping the data in open-source file formats.

By default, Apache Hudi serves the snapshot of a table, i.e. the latest data as of the most recent commit. If we want to track historical changes, we need to leverage Hudi's point-in-time queries (https://hudi.apache.org/docs/quick-start-guide#point-in-time-query).

Hudi supports time travel: point-in-time queries let us read either an older version of the data or the latest one. However, traversing the historical changes of a given record through point-in-time queries alone is inefficient, because it requires analyzing the data across many separate time intervals.
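For reference, a minimal point-in-time query sketch along the lines of the quick-start guide linked above; the base path and the end commit timestamp here are illustrative, not taken from the original post:

//Point-in-time (incremental) read between two commit instants; values are illustrative
val basePath = "gs://target_bucket/hudi_product_catalog"
val pointInTimeDf = spark.read.format("hudi").
  option("hoodie.datasource.query.type", "incremental").
  option("hoodie.datasource.read.begin.instanttime", "000").              //from the earliest commit
  option("hoodie.datasource.read.end.instanttime", "20220722113258101").  //up to a chosen commit
  load(basePath)
pointInTimeDf.show(false)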
Let's see how we can work around this with a classical SCD-2 approach.
Consider a table that holds product details and seller discounts.

+---------+--------------+---------------+---------------+-------------------+-------------------+-------------------+--------+
|seller_id|prod_category |product_name   |product_package|discount_percentage|eff_start_ts       |eff_end_ts         |actv_ind|
+---------+--------------+---------------+---------------+-------------------+-------------------+-------------------+--------+
|3412     |Healthcare    |Dolo 650       |10             |10                 |2022-04-01 16:30:45|9999-12-31 23:59:59|1       |
|1234     |Detergent     |Tide 2L        |6              |15                 |2021-12-15 15:20:30|9999-12-31 23:59:59|1       |
|1234     |Home Essential|Hand Towel     |12             |20                 |2021-10-20 06:55:22|9999-12-31 23:59:59|1       |
|4565     |Gourmet       |Dairy Milk Silk|6              |30                 |2021-06-12 20:30:40|9999-12-31 23:59:59|1       |
+---------+--------------+---------------+---------------+-------------------+-------------------+-------------------+--------+

Steps

  1. Let's write this data into a Hudi table using Spark.
spark-shell \
--packages org.apache.hudi:hudi-spark-bundle_2.12:0.11.1,org.apache.spark:spark-avro_2.12:2.4.7,org.apache.avro:avro:1.8.2 \
--conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
--conf "spark.sql.hive.convertMetastoreParquet=false"

After launching the spark shell, we can import the required libraries and create the Hudi table as shown below.

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.8
      /_/
         
Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_312)
Type in expressions to have them evaluated.
Type :help for more information.
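The imports themselves are not shown in the original post; the snippets that follow appear to rely on roughly these (a hedged guess based on the symbols they use):

scala> import org.apache.spark.sql.functions._          //col, lit, to_timestamp, to_date, expr
scala> import org.apache.spark.sql.SaveMode.Append      //used later in .mode(Append)
scala> import org.apache.hudi.DataSourceWriteOptions
scala> import org.apache.hudi.DataSourceWriteOptions._  //TABLE_TYPE_OPT_KEY, RECORDKEY_FIELD_OPT_KEY, ...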
scala> spark.sql("""create table hudi_product_catalog (
     | seller_id int,
     | prod_category string,
     | product_name string,
     | product_package string,
     | discount_percentage string,
     | eff_start_ts timestamp,
     | eff_end_ts timestamp,
     | actv_ind int
     |  ) using hudi
     | tblproperties (
     |   type = 'cow',
     |   primaryKey = 'seller_id,prod_category,eff_end_ts',
     |   preCombineField = 'eff_start_ts'
     |  )
     | partitioned by (actv_ind)
     |  location 'gs://target_bucket/hudi_product_catalog/'""")
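The initial load itself is not shown in the post; a minimal sketch, assuming the four seed rows above are available in a DataFrame named seedDf with matching column types (eff_end_ts set to '9999-12-31 23:59:59' and actv_ind = 1), could look like this:

//Hypothetical initial load of the seed rows into the Hudi table location
seedDf.write.format("org.apache.hudi").
  option("hoodie.datasource.write.operation", "insert").
  option("hoodie.datasource.write.recordkey.field", "seller_id,prod_category,eff_end_ts").
  option("hoodie.datasource.write.precombine.field", "eff_start_ts").
  option("hoodie.datasource.write.partitionpath.field", "actv_ind").
  option("hoodie.datasource.write.keygenerator.class", "org.apache.hudi.keygen.ComplexKeyGenerator").
  option("hoodie.datasource.write.hive_style_partitioning", "true").
  option("hoodie.table.name", "hudi_product_catalog").
  mode("append").
  save("gs://target_bucket/hudi_product_catalog/")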

After the data is written to the bucket, the Hudi target table looks as follows.

+-------------------+---------------------+-------------------------------------------------------------------------+----------------------+--------------------------------------------------------------------------+---------+--------------+---------------+---------------+-------------------+-------------------+-------------------+--------+
|_hoodie_commit_time|_hoodie_commit_seqno |_hoodie_record_key                                                       |_hoodie_partition_path|_hoodie_file_name                                                         |seller_id|prod_category |product_name   |product_package|discount_percentage|eff_start_ts       |eff_end_ts         |actv_ind|
+-------------------+---------------------+-------------------------------------------------------------------------+----------------------+--------------------------------------------------------------------------+---------+--------------+---------------+---------------+-------------------+-------------------+-------------------+--------+
|20220722113258101  |20220722113258101_0_0|seller_id:3412,prod_category:Healthcare,eff_end_ts:253402300799000000    |actv_ind=1            |a94c9c58-ac6b-4841-a734-8ef1580e2547-0_0-29-1219_20220722113258101.parquet|3412     |Healthcare    |Dolo 650       |10             |10                 |2022-04-01 16:30:45|9999-12-31 23:59:59|1       |
|20220722113258101  |20220722113258101_0_1|seller_id:1234,prod_category:Home Essential,eff_end_ts:253402300799000000|actv_ind=1            |a94c9c58-ac6b-4841-a734-8ef1580e2547-0_0-29-1219_20220722113258101.parquet|1234     |Home Essential|Hand Towel     |12             |20                 |2021-10-20 06:55:22|9999-12-31 23:59:59|1       |
|20220722113258101  |20220722113258101_0_2|seller_id:4565,prod_category:Gourmet,eff_end_ts:253402300799000000       |actv_ind=1            |a94c9c58-ac6b-4841-a734-8ef1580e2547-0_0-29-1219_20220722113258101.parquet|4565     |Gourmet       |Dairy Milk Silk|6              |30                 |2021-06-12 20:30:40|9999-12-31 23:59:59|1       |
|20220722113258101  |20220722113258101_0_3|seller_id:1234,prod_category:Detergent,eff_end_ts:253402300799000000     |actv_ind=1            |a94c9c58-ac6b-4841-a734-8ef1580e2547-0_0-29-1219_20220722113258101.parquet|1234     |Detergent     |Tide 2L        |6              |15                 |2021-12-15 15:20:30|9999-12-31 23:59:59|1       |
+-------------------+---------------------+-------------------------------------------------------------------------+----------------------+--------------------------------------------------------------------------+---------+--------------+---------------+---------------+-------------------+-------------------+-------------------+--------+

  2. Assume our incremental data is stored in the following table (non-Hudi format; it could be a Hive table).

+---------+-------------+-----------------+---------------+-------------------+-------------------+
|seller_id|prod_category|product_name     |product_package|discount_percentage|eff_start_ts       |
+---------+-------------+-----------------+---------------+-------------------+-------------------+
|1234     |Detergent    |Tide 5L          |6              |25                 |2022-01-31 10:00:30|
|4565     |Gourmet      |Dairy Milk Almond|12             |45                 |2022-06-12 20:30:40|
|3345     |Stationary   |Sticky Notes     |4              |12                 |2022-07-09 21:30:45|
+---------+-------------+-----------------+---------------+-------------------+-------------------+
  3. Now let's filter out all the insert-only records in the incremental data by performing a left anti join against the target table.
//Read the incremental file and the current Hudi table
val updFileDf = spark.read.option("header",true).csv("gs://target_bucket/hudi_product_catalog/hudi_product_update.csv")
val tgtHudiDf = spark.sql("select * from hudi_product_catalog")
tgtHudiDf.createOrReplaceTempView("hudiTable")   //optional temp view; not used below

//Cast as needed
val stgDf = updFileDf.withColumn("eff_start_ts",to_timestamp(col("eff_start_ts")))
.withColumn("seller_id",col("seller_id").cast("int"))

//Prepare an insert DF from incremental temp DF
val instmpDf = stgDf.as("stg")
      .join(tgtHudiDf.as("tgt"),
        col("stg.seller_id") === col("tgt.seller_id") &&
          col("stg.prod_category") === col("tgt.prod_category"),"left_anti")
.select("stg.*")

val insDf = instmpDf.withColumn("eff_end_ts",to_timestamp(lit("9999-12-31 23:59:59")))
.withColumn("actv_ind",lit(1))


insDf.show(false)
+---------+-------------+------------+---------------+-------------------+-------------------+-------------------+--------+
|seller_id|prod_category|product_name|product_package|discount_percentage|       eff_start_ts|         eff_end_ts|actv_ind|
+---------+-------------+------------+---------------+-------------------+-------------------+-------------------+--------+
|     3345|   Stationary|Sticky Notes|              4|                 12|2022-07-09 21:30:45|9999-12-31 23:59:59|       1|
+---------+-------------+------------+---------------+-------------------+-------------------+-------------------+--------+
  4. We now have a DataFrame with the insert-only records. Next, let's build a DataFrame that carries attributes from both the delta table and the target table, using an inner join with the target to fetch the records that need to be updated.
//Prepare an update DF from incremental temp DF, select columns from both the tables
val updDf = stgDf.as("stg")
      .join(tgtHudiDf.as("tgt"),
        col("stg.seller_id") === col("tgt.seller_id") &&
          col("stg.prod_category") === col("tgt.prod_category"),"inner")
          .where(col("stg.eff_start_ts") > col("tgt.eff_start_ts"))
.select((stgDf.columns.map(c => stgDf(c).as(s"stg_$c"))++ tgtHudiDf.columns.map(c => tgtHudiDf(c).as(s"tgt_$c"))):_*)

updDf.show(false)

+-------------+-----------------+-----------------+-------------------+-----------------------+-------------------+-----------------------+------------------------+----------------------+--------------------------+---------------------+-------------+-----------------+----------------+-------------------+-----------------------+-------------------+-------------------+------------+
|stg_seller_id|stg_prod_category| stg_product_name|stg_product_package|stg_discount_percentage|   stg_eff_start_ts|tgt__hoodie_commit_time|tgt__hoodie_commit_seqno|tgt__hoodie_record_key|tgt__hoodie_partition_path|tgt__hoodie_file_name|tgt_seller_id|tgt_prod_category|tgt_product_name|tgt_product_package|tgt_discount_percentage|   tgt_eff_start_ts|     tgt_eff_end_ts|tgt_actv_ind|
+-------------+-----------------+-----------------+-------------------+-----------------------+-------------------+-----------------------+------------------------+----------------------+--------------------------+---------------------+-------------+-----------------+----------------+-------------------+-----------------------+-------------------+-------------------+------------+
|         1234|        Detergent|          Tide 5L|                  6|                     25|2022-01-31 10:00:30|      20220710113622931|    20220710113622931...|  seller_id:1234,pr...|                actv_ind=1| 2dd6109f-2173-429...|         1234|        Detergent|         Tide 2L|                  6|                     15|2021-12-15 15:20:30|9999-12-31 23:59:59|           1|
|         4565|          Gourmet|Dairy Milk Almond|                 12|                     45|2022-06-12 20:30:40|      20220710113622931|    20220710113622931...|  seller_id:4565,pr...|                actv_ind=1| 2dd6109f-2173-429...|         4565|          Gourmet| Dairy Milk Silk|                  6|                     30|2021-06-12 20:30:40|9999-12-31 23:59:59|           1|
+-------------+-----------------+-----------------+-------------------+-----------------------+-------------------+-----------------------+------------------------+----------------------+--------------------------+---------------------+-------------+-----------------+----------------+-------------------+-----------------------+-------------------+-------------------+------------+
  5. Now that we have a DataFrame holding the old and new data side by side in each record, let's pull the active and inactive instances of the updated records into separate DataFrames.


While doing so, we expire the obsolete record by setting its eff_end_ts to the active (new) record's eff_start_ts minus 1 second and updating actv_ind = 0.

//Prepare Active updates

val updActiveDf = updDf.select(col("stg_seller_id").as("seller_id"),
col("stg_prod_category").as("prod_category"),
col("stg_product_name").as("product_name"),
col("stg_product_package").as("product_package"),
col("stg_discount_percentage").as("discount_percentage"),
col("stg_eff_start_ts").as("eff_start_ts"),
to_timestamp(lit("9999-12-31 23:59:59")) as ("eff_end_ts"),
lit(1) as ("actv_ind"))

updActiveDf.show(false)
+---------+-------------+-----------------+---------------+-------------------+-------------------+-------------------+--------+
|seller_id|prod_category|product_name     |product_package|discount_percentage|eff_start_ts       |eff_end_ts         |actv_ind|
+---------+-------------+-----------------+---------------+-------------------+-------------------+-------------------+--------+
|1234     |Detergent    |Tide 5L          |6              |25                 |2022-01-31 10:00:30|9999-12-31 23:59:59|1       |
|4565     |Gourmet      |Dairy Milk Almond|12             |45                 |2022-06-12 20:30:40|9999-12-31 23:59:59|1       |
+---------+-------------+-----------------+---------------+-------------------+-------------------+-------------------+--------+

//Prepare inactive updates, which will become obsolete records

val updInactiveDf = updDf.select(col("tgt_seller_id").as("seller_id"),
col("tgt_prod_category").as("prod_category"),
col("tgt_product_name").as("product_name"),
col("tgt_product_package").as("product_package"),
col("tgt_discount_percentage").as("discount_percentage"),
col("tgt_eff_start_ts").as("eff_start_ts"),
(col("stg_eff_start_ts") - expr("interval 1 seconds")).as("eff_end_ts"),
lit(0) as ("actv_ind"))

scala> updInactiveDf.show
+---------+-------------+---------------+---------------+-------------------+-------------------+-------------------+--------+
|seller_id|prod_category|   product_name|product_package|discount_percentage|       eff_start_ts|         eff_end_ts|actv_ind|
+---------+-------------+---------------+---------------+-------------------+-------------------+-------------------+--------+
|     1234|    Detergent|        Tide 2L|              6|                 15|2021-12-15 15:20:30|2022-01-31 10:00:29|       0|
|     4565|      Gourmet|Dairy Milk Silk|              6|                 30|2021-06-12 20:30:40|2022-06-12 20:30:39|       0|
+---------+-------------+---------------+---------------+-------------------+-------------------+-------------------+--------+
  6. Now we union the inserts, the active updates, and the inactive updates into a single DataFrame, which becomes the incremental source for the final Hudi write.
scala> val upsertDf = insDf.union(updActiveDf).union(updInactiveDf)

scala> upsertDf.show

+---------+-------------+-----------------+---------------+-------------------+-------------------+-------------------+--------+
|seller_id|prod_category|     product_name|product_package|discount_percentage|       eff_start_ts|         eff_end_ts|actv_ind|
+---------+-------------+-----------------+---------------+-------------------+-------------------+-------------------+--------+
|     3345|   Stationary|     Sticky Notes|              4|                 12|2022-07-09 21:30:45|9999-12-31 23:59:59|       1|
|     4565|      Gourmet|Dairy Milk Almond|             12|                 45|2022-06-12 20:30:40|9999-12-31 23:59:59|       1|
|     1234|    Detergent|          Tide 5L|              6|                 25|2022-01-31 10:00:30|9999-12-31 23:59:59|       1|
|     4565|      Gourmet|  Dairy Milk Silk|              6|                 30|2021-06-12 20:30:40|2022-06-12 20:30:39|       0|
|     1234|    Detergent|          Tide 2L|              6|                 15|2021-12-15 15:20:30|2022-01-31 10:00:29|       0|
+---------+-------------+-----------------+---------------+-------------------+-------------------+-------------------+--------+

val path = "gs://target_bucket/hudi_product_catalog"

upsertDf.write.format("org.apache.hudi")
.option(TABLE_TYPE_OPT_KEY, "COPY_ON_WRITE") 
.option("hoodie.datasource.write.keygenerator.class","org.apache.hudi.keygen.ComplexKeyGenerator") 
.option(RECORDKEY_FIELD_OPT_KEY, "seller_id,prod_category,eff_end_ts")
.option(PRECOMBINE_FIELD_OPT_KEY, "eff_start_ts") 
.option("hoodie.table.name","hudi_product_catalog") 
.option(DataSourceWriteOptions.HIVE_DATABASE_OPT_KEY, "target_schema") 
.option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, "hudi_product_catalog") 
.option(OPERATION_OPT_KEY, UPSERT_OPERATION_OPT_VAL)
.option(DataSourceWriteOptions.HIVE_STYLE_PARTITIONING_OPT_KEY, "true")
.option(PARTITIONPATH_FIELD_OPT_KEY, "actv_ind")
.mode(Append)
.save(s"$path")

scala> spark.sql("refresh table hudi_product_catalog")

scala> spark.sql("select * from hudi_product_catalog").show(false)

+-------------------+---------------------+-------------------------------------------------------------------------+----------------------+--------------------------------------------------------------------------+---------+--------------+-----------------+---------------+-------------------+-------------------+-------------------+--------+
|_hoodie_commit_time|_hoodie_commit_seqno |_hoodie_record_key                                                       |_hoodie_partition_path|_hoodie_file_name                                                         |seller_id|prod_category |product_name     |product_package|discount_percentage|eff_start_ts       |eff_end_ts         |actv_ind|
+-------------------+---------------------+-------------------------------------------------------------------------+----------------------+--------------------------------------------------------------------------+---------+--------------+-----------------+---------------+-------------------+-------------------+-------------------+--------+
|20220722113258101  |20220722113258101_0_0|seller_id:3412,prod_category:Healthcare,eff_end_ts:253402300799000000    |actv_ind=1            |a94c9c58-ac6b-4841-a734-8ef1580e2547-0_0-72-2451_20220722114049500.parquet|3412     |Healthcare    |Dolo 650         |10             |10                 |2022-04-01 16:30:45|9999-12-31 23:59:59|1       |
|20220722113258101  |20220722113258101_0_1|seller_id:1234,prod_category:Home Essential,eff_end_ts:253402300799000000|actv_ind=1            |a94c9c58-ac6b-4841-a734-8ef1580e2547-0_0-72-2451_20220722114049500.parquet|1234     |Home Essential|Hand Towel       |12             |20                 |2021-10-20 06:55:22|9999-12-31 23:59:59|1       |
|20220722114049500  |20220722114049500_0_2|seller_id:4565,prod_category:Gourmet,eff_end_ts:253402300799000000       |actv_ind=1            |a94c9c58-ac6b-4841-a734-8ef1580e2547-0_0-72-2451_20220722114049500.parquet|4565     |Gourmet       |Dairy Milk Almond|12             |45                 |2022-06-12 20:30:40|9999-12-31 23:59:59|1       |
|20220722114049500  |20220722114049500_0_3|seller_id:1234,prod_category:Detergent,eff_end_ts:253402300799000000     |actv_ind=1            |a94c9c58-ac6b-4841-a734-8ef1580e2547-0_0-72-2451_20220722114049500.parquet|1234     |Detergent     |Tide 5L          |6              |25                 |2022-01-31 10:00:30|9999-12-31 23:59:59|1       |
|20220722114049500  |20220722114049500_0_4|seller_id:3345,prod_category:Stationary,eff_end_ts:253402300799000000    |actv_ind=1            |a94c9c58-ac6b-4841-a734-8ef1580e2547-0_0-72-2451_20220722114049500.parquet|3345     |Stationary    |Sticky Notes     |4              |12                 |2022-07-09 21:30:45|9999-12-31 23:59:59|1       |
|20220722114049500  |20220722114049500_1_0|seller_id:4565,prod_category:Gourmet,eff_end_ts:1655065839000000         |actv_ind=0            |789e0317-d499-4d74-a5d9-ad6e6517d6b8-0_1-72-2452_20220722114049500.parquet|4565     |Gourmet       |Dairy Milk Silk  |6              |30                 |2021-06-12 20:30:40|2022-06-12 20:30:39|0       |
|20220722114049500  |20220722114049500_1_1|seller_id:1234,prod_category:Detergent,eff_end_ts:1643623229000000       |actv_ind=0            |789e0317-d499-4d74-a5d9-ad6e6517d6b8-0_1-72-2452_20220722114049500.parquet|1234     |Detergent     |Tide 2L          |6              |15                 |2021-12-15 15:20:30|2022-01-31 10:00:29|0       |
+-------------------+---------------------+-------------------------------------------------------------------------+----------------------+--------------------------------------------------------------------------+---------+--------------+-----------------+---------------+-------------------+-------------------+-------------------+--------+
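With the expired rows retained alongside the active ones, a consumer can reconstruct the state of the catalog as of any instant directly from the audit columns. A minimal, hypothetical "as of" lookup (the timestamp literal is illustrative):

//Hypothetical point-in-time lookup driven purely by the SCD-2 audit columns
spark.sql("""
  select seller_id, prod_category, product_name, discount_percentage
  from hudi_product_catalog
  where to_timestamp('2022-01-15 00:00:00') between eff_start_ts and eff_end_ts
""").show(false)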

Points to consider during implementation

  • For every update to an existing record, the underlying parquet files are rewritten/moved in storage, which can affect write performance.
  • It is generally better to partition the target table on an attribute that represents the main filter used when querying the data, e.g. the sale date for a sales table, or the seller for a product catalog keyed by registered sellers. actv_ind was chosen in the example above simply to keep it easy to follow and to keep all active records in a single partition (see the sketch after this list).
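As an illustration, here is a hedged sketch of partitioning by a derived date column instead of actv_ind; the names eff_start_dt and hudi_product_catalog_by_dt are hypothetical, not from the original post:

//Hypothetical alternative partitioning: derive a date column and partition on it
val upsertByDtDf = upsertDf.withColumn("eff_start_dt", to_date(col("eff_start_ts")))

upsertByDtDf.write.format("org.apache.hudi").
  option("hoodie.datasource.write.operation", "upsert").
  option("hoodie.datasource.write.recordkey.field", "seller_id,prod_category,eff_end_ts").
  option("hoodie.datasource.write.precombine.field", "eff_start_ts").
  option("hoodie.datasource.write.keygenerator.class", "org.apache.hudi.keygen.ComplexKeyGenerator").
  option("hoodie.datasource.write.partitionpath.field", "eff_start_dt").
  option("hoodie.datasource.write.hive_style_partitioning", "true").
  option("hoodie.table.name", "hudi_product_catalog_by_dt").
  mode("append").
  save("gs://target_bucket/hudi_product_catalog_by_dt/")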

Conclusion

As we keep building Spark applications with Apache Hudi, we will keep refining our data-loading strategies; the approach above is just a starting point for implementing SCD-2 behavior with Hudi.

From: https://www.cnblogs.com/leesf456/p/16795745.html
