
Spark Explained (07-1) - SparkStreaming Hands-On Cases



Environment Setup

POM file

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>

    <!-- https://mvnrepository.com/artifact/com.alibaba/druid -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid</artifactId>
        <version>1.1.10</version>
    </dependency>

    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.27</version>
    </dependency>

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-core</artifactId>
        <version>2.10.1</version>
    </dependency>
</dependencies>

Change the log level

Add a log4j.properties file to the resources directory to change the log level to error:

log4j.rootLogger=error, stdout,R
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %5p --- [%50t] %-80c(line:%5L) : %m%n

log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=../log/agent.log
log4j.appender.R.MaxFileSize=1024KB
log4j.appender.R.MaxBackupIndex=1

log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %5p --- [%50t] %-80c(line:%6L) : %m%n

PropertiesUtil utility class

1) Create the package

com.zhangjk.util

2) Write a utility class that reads resource files

package com.zhangjk.util

import java.io.InputStreamReader
import java.util.Properties

// Utility for loading a .properties file from the classpath
object PropertiesUtil {

    def load(propertiesName: String): Properties = {

        val prop = new Properties()
        prop.load(new InputStreamReader(Thread.currentThread().getContextClassLoader.getResourceAsStream(propertiesName), "UTF-8"))
        prop
    }
}

config.properties

1) Create a config.properties file in the resources directory

2) Add the following configuration to config.properties

# JDBC configuration
jdbc.datasource.size=10
jdbc.url=jdbc:mysql://hadoop102:3306/spark2020?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
jdbc.user=root
jdbc.password=000000

# Kafka configuration
kafka.broker.list=hadoop102:9092,hadoop103:9092,hadoop104:9092
kafka.topic=testTopic
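
A quick way to verify the configuration plumbing is to load the file through PropertiesUtil and print a couple of keys. This is only a minimal sketch: the object name ConfigCheck is arbitrary, and it assumes config.properties sits under resources so that it is on the classpath.

package com.zhangjk.util

// Minimal sanity check: load config.properties from the classpath and print two keys
object ConfigCheck {
    def main(args: Array[String]): Unit = {
        val config = PropertiesUtil.load("config.properties")
        println(config.getProperty("kafka.broker.list")) // expected: hadoop102:9092,hadoop103:9092,hadoop104:9092
        println(config.getProperty("kafka.topic"))       // expected: testTopic
    }
}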

Real-Time Data Generation Module

RandomOptions

1) Generate random values according to the given weights

package com.zhangjk.util

import scala.collection.mutable.ListBuffer
import scala.util.Random

// The proportion in which a value appears, e.g. (男, 8) (女, 2)
case class RanOpt[T](value: T, weight: Int)

object RandomOptions {

    def apply[T](opts: RanOpt[T]*): RandomOptions[T] = {

        val randomOptions = new RandomOptions[T]()

        for (opt <- opts) {
            // Accumulate the total weight: 8 + 2
            randomOptions.totalWeight += opt.weight

            // Store each value in the buffer as many times as its weight; heavier values appear more often
            for (i <- 1 to opt.weight) {
                // 男 男 男 男 男 男 男 男 女 女
                randomOptions.optsBuffer += opt.value
            }
        }

        randomOptions
    }

    def main(args: Array[String]): Unit = {

        for (i <- 1 to 10) {
            println(RandomOptions(RanOpt("男", 8), RanOpt("女", 2)).getRandomOpt)
        }
    }
}

class RandomOptions[T](opts: RanOpt[T]*) {

    var totalWeight = 0
    var optsBuffer = new ListBuffer[T]

    def getRandomOpt: T = {
        // Pick a random index in [0, totalWeight)
        val randomNum: Int = new Random().nextInt(totalWeight)
        // Use the random number as an index into the buffer
        optsBuffer(randomNum)
    }
}
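
With the weights above, roughly eight out of every ten values returned by getRandomOpt should be 男 and about two should be 女, since each value occupies a share of optsBuffer proportional to its weight.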

MockerRealTime

1) Log generation logic

2) Create the package

com.zhangjk.macker

3) Write the code that generates the real-time data

package com.zhangjk.macker

import java.util.Properties
import com.zhangjk.util.{PropertiesUtil, RanOpt, RandomOptions}
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}

import scala.collection.mutable.ArrayBuffer
import scala.util.Random

// City info table: city_id: city id  city_name: city name  area: region the city belongs to
case class CityInfo(city_id: Long, city_name: String, area: String)

object MockerRealTime {

    /**
     * Mock data
     * Format: timestamp area city userid adid
     * a point in time, a region, a city, a user, an ad
     * e.g. 1604229363531 华北 北京 3 3
     */
    def generateMockData(): Array[String] = {

        val array: ArrayBuffer[String] = ArrayBuffer[String]()

        val CityRandomOpt = RandomOptions(
            RanOpt(CityInfo(1, "北京", "华北"), 30),
            RanOpt(CityInfo(2, "上海", "华东"), 30),
            RanOpt(CityInfo(3, "广州", "华南"), 10),
            RanOpt(CityInfo(4, "深圳", "华南"), 20),
            RanOpt(CityInfo(5, "天津", "华北"), 10)
        )

        val random = new Random()

        // Mock real-time data:
        // timestamp area city userid adid
        for (i <- 0 to 50) {

            val timestamp: Long = System.currentTimeMillis()
            val cityInfo: CityInfo = CityRandomOpt.getRandomOpt
            val city: String = cityInfo.city_name
            val area: String = cityInfo.area
            val adid: Int = 1 + random.nextInt(6)
            val userid: Int = 1 + random.nextInt(6)

            // Concatenate one record: timestamp area city userid adid
            array += timestamp + " " + area + " " + city + " " + userid + " " + adid
        }

        array.toArray
    }

    def main(args: Array[String]): Unit = {

        // Read the Kafka settings from config.properties
        val config: Properties = PropertiesUtil.load("config.properties")
        val brokers: String = config.getProperty("kafka.broker.list")
        val topic: String = config.getProperty("kafka.topic")

        // Create the producer configuration object
        val prop = new Properties()

        // Add configuration
        prop.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers)
        prop.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
        prop.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")

        // Create a Kafka producer from the configuration
        val kafkaProducer: KafkaProducer[String, String] = new KafkaProducer[String, String](prop)

        while (true) {

            // Generate random real-time data and send it to the Kafka cluster through the producer
            for (line <- generateMockData()) {
                kafkaProducer.send(new ProducerRecord[String, String](topic, line))
                println(line)
            }

            Thread.sleep(2000)
        }
    }
}
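
Note that the loop for (i <- 0 to 50) is inclusive on both ends, so each pass produces 51 records; together with Thread.sleep(2000) the generator pushes roughly 51 messages to Kafka every 2 seconds.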

4) Test:

(1) Start the Kafka cluster

zk.sh start

kf.sh start

(2) Consume the testTopic data from Kafka

bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --from-beginning --topic testTopic
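
If testTopic does not exist yet, it can be created first. The command below is only a sketch: the partition and replication-factor values are arbitrary choices, and it assumes a Kafka version whose kafka-topics.sh accepts --bootstrap-server.

bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --create --topic testTopic --partitions 3 --replication-factor 2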

Requirement 1: Ad Blacklist

Implement a real-time, dynamic blacklist: blacklist any user who clicks a given ad more than 30 times in a single day.

Note: the blacklist is stored in MySQL.

Requirement analysis
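
The overall flow for this requirement: filter each batch against the black_list table in MySQL, count the remaining clicks per (day, user, ad), upsert those counts into user_ad_count, and add any user whose accumulated daily count for an ad reaches 30 to black_list.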

MySQL table creation

1) Create the database spark2020

2) Table that stores blacklisted users

CREATE TABLE black_list (
    userid CHAR(1) PRIMARY KEY  -- user id
);

3) Table that stores each user's daily click count per ad

CREATE TABLE user_ad_count (
    dt VARCHAR(255),   -- date
    userid CHAR(1),    -- user id
    adid CHAR(1),      -- ad id
    count BIGINT,      -- ad click count
    PRIMARY KEY (dt, userid, adid)  -- composite primary key
);

4) Test: with the primary key in place, an existing row is updated and a missing row is inserted. Execute the insert twice in a row:

INSERT INTO user_ad_count (dt, userid, adid, count)
VALUES ('2020-12-12', 'a', '2', 50)
ON DUPLICATE KEY
UPDATE count = count + 5;
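
Expected result: the first execution inserts a row with count = 50; the second hits the duplicate key and runs the UPDATE branch, so the same row ends up with count = 55 instead of a second row being added.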

5) Test: without a primary key, two executions insert two separate rows

CREATE TABLE user_ad_count_test (
    dt VARCHAR(255),  -- date
    userid CHAR(1),   -- user id
    adid CHAR(1),     -- ad id
    count BIGINT      -- ad click count
);

Execute the insert statement twice in a row; two rows are produced:

INSERT INTO user_ad_count_test (dt, userid, adid, count)
VALUES ('2020-11-11', 'a', '1', 50)
ON DUPLICATE KEY
UPDATE count = count + 5;

MyKafkaUtil utility class

From here on we work on the real-time requirements, using SparkStreaming to process the real-time data. In production the data source is Kafka in the vast majority of cases, so first create a utility class that lets SparkStreaming read from Kafka.

package com.zhangjk.util

import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

object MyKafkaUtil {

    //1. Load the configuration
    private val properties: Properties = PropertiesUtil.load("config.properties")

    //2. Broker list used to connect to the Kafka cluster
    private val brokers: String = properties.getProperty("kafka.broker.list")

    // Create a DStream that returns the received input data
    // LocationStrategies: controls how the Kafka partitions are distributed across executors
    // LocationStrategies.PreferConsistent: distribute partitions evenly across all executors
    // ConsumerStrategies: controls how Kafka consumers are created and configured on the driver and executors
    // ConsumerStrategies.Subscribe: subscribe to a collection of topics
    def getKafkaStream(topic: String, ssc: StreamingContext): InputDStream[ConsumerRecord[String, String]] = {

        //3. Kafka consumer configuration
        val kafkaParam = Map(
            "bootstrap.servers" -> brokers,
            "key.deserializer" -> classOf[StringDeserializer],
            "value.deserializer" -> classOf[StringDeserializer],
            "group.id" -> "commerce-consumer-group" // consumer group
        )

        val dStream: InputDStream[ConsumerRecord[String, String]] = KafkaUtils.createDirectStream[String, String](
            ssc,
            LocationStrategies.PreferConsistent,
            ConsumerStrategies.Subscribe[String, String](Array(topic), kafkaParam)
        )
        dStream
    }
}

JDBCUtil utility class

package com.zhangjk.util

import java.sql.{Connection, PreparedStatement, ResultSet}
import java.util.Properties

import com.alibaba.druid.pool.DruidDataSourceFactory
import javax.sql.DataSource

object JDBCUtil {

    // Initialize the connection pool
    var dataSource: DataSource = init()

    // Method that initializes the connection pool
    def init(): DataSource = {

        val properties = new Properties()
        val config: Properties = PropertiesUtil.load("config.properties")

        properties.setProperty("driverClassName", "com.mysql.jdbc.Driver")
        properties.setProperty("url", config.getProperty("jdbc.url"))
        properties.setProperty("username", config.getProperty("jdbc.user"))
        properties.setProperty("password", config.getProperty("jdbc.password"))
        properties.setProperty("maxActive", config.getProperty("jdbc.datasource.size"))

        DruidDataSourceFactory.createDataSource(properties)
    }

    // Get a MySQL connection
    def getConnection: Connection = {
        dataSource.getConnection
    }

    // Execute a SQL statement that writes a single row
    def executeUpdate(connection: Connection, sql: String, params: Array[Any]): Int = {

        var rtn = 0
        var pstmt: PreparedStatement = null

        try {
            connection.setAutoCommit(false)
            pstmt = connection.prepareStatement(sql)

            if (params != null && params.length > 0) {
                for (i <- params.indices) {
                    pstmt.setObject(i + 1, params(i))
                }
            }

            rtn = pstmt.executeUpdate()

            connection.commit()
            pstmt.close()
        } catch {
            case e: Exception => e.printStackTrace()
        }

        rtn
    }

    // Check whether a row exists
    def isExist(connection: Connection, sql: String, params: Array[Any]): Boolean = {

        var flag: Boolean = false
        var pstmt: PreparedStatement = null

        try {
            pstmt = connection.prepareStatement(sql)

            for (i <- params.indices) {
                pstmt.setObject(i + 1, params(i))
            }

            flag = pstmt.executeQuery().next()
            pstmt.close()
        } catch {
            case e: Exception => e.printStackTrace()
        }

        flag
    }

    // Read a single value from MySQL
    def getDataFromMysql(connection: Connection, sql: String, params: Array[Any]): Long = {

        var result: Long = 0L
        var pstmt: PreparedStatement = null

        try {
            pstmt = connection.prepareStatement(sql)

            for (i <- params.indices) {
                pstmt.setObject(i + 1, params(i))
            }

            val resultSet: ResultSet = pstmt.executeQuery()

            while (resultSet.next()) {
                result = resultSet.getLong(1)
            }

            resultSet.close()
            pstmt.close()
        } catch {
            case e: Exception => e.printStackTrace()
        }

        result
    }

    // main method, used to test the methods above
    def main(args: Array[String]): Unit = {

        //1 Get a connection
        val connection: Connection = getConnection

        //2 Prepare the SQL statement
        val statement: PreparedStatement = connection.prepareStatement("select * from user_ad_count where userid = ?")

        //3 Bind the parameter
        statement.setObject(1, "a")

        //4 Execute the query
        val resultSet: ResultSet = statement.executeQuery()

        //5 Fetch the data
        while (resultSet.next()) {
            println("111:" + resultSet.getString(1))
        }

        //6 Release resources
        resultSet.close()
        statement.close()
        connection.close()
    }
}

BlackListHandler: ad blacklist logic

package com.zhangjk.handler

import java.sql.Connection
import java.text.SimpleDateFormat
import java.util.Date
import com.zhangjk.app.Ads_log
import com.zhangjk.util.JDBCUtil
import org.apache.spark.streaming.dstream.DStream

object BlackListHandler {

    // Date formatter
    private val sdf = new SimpleDateFormat("yyyy-MM-dd")

    def addBlackList(filterAdsLogDStream: DStream[Ads_log]): Unit = {

        // Count, for the current batch, how many times each user clicked each ad on each day
        //1. Transform and aggregate: ads_log => ((date,user,adid),1) => ((date,user,adid),count)
        val dateUserAdToCount: DStream[((String, String, String), Long)] = filterAdsLogDStream.map(
            adsLog => {

                //a. Convert the timestamp to a date string
                val date: String = sdf.format(new Date(adsLog.timestamp))

                //b. Return value
                ((date, adsLog.userid, adsLog.adid), 1L)
            }
        ).reduceByKey(_ + _)

        //2. Write out
        dateUserAdToCount.foreachRDD(
            rdd => {
                // Write each partition once
                rdd.foreachPartition(
                    iter => {
                        // Get a connection
                        val connection: Connection = JDBCUtil.getConnection

                        iter.foreach { case ((dt, user, ad), count) =>
                            // Upsert the accumulated click count into the user_ad_count table in MySQL
                            JDBCUtil.executeUpdate(
                                connection,
                                """
                                  |INSERT INTO user_ad_count (dt,userid,adid,count)
                                  |VALUES (?,?,?,?)
                                  |ON DUPLICATE KEY
                                  |UPDATE count=count+?
                                """.stripMargin, Array(dt, user, ad, count, count)
                            )

                            // Query the user_ad_count table and read back the accumulated click count
                            val ct: Long = JDBCUtil.getDataFromMysql(
                                connection,
                                """
                                  |select count from user_ad_count where dt=? and userid=? and adid =?
                                  |""".stripMargin,
                                Array(dt, user, ad)
                            )

                            // If the accumulated click count reaches 30, add the user to the blacklist
                            if (ct >= 30) {
                                JDBCUtil.executeUpdate(
                                    connection,
                                    """
                                      |INSERT INTO black_list (userid) VALUES (?) ON DUPLICATE KEY update userid=?
                                      |""".stripMargin,
                                    Array(user, user)
                                )
                            }
                        }

                        connection.close()
                    }
                )
            }
        )
    }

    // Filter out users that are already in the blacklist
    def filterByBlackList(adsLogDStream: DStream[Ads_log]): DStream[Ads_log] = {

        adsLogDStream.filter(
            adsLog => {
                // Get a connection
                val connection: Connection = JDBCUtil.getConnection

                // Check whether this user is in the blacklist
                val bool: Boolean = JDBCUtil.isExist(
                    connection,
                    """
                      |select * from black_list where userid=?
                      |""".stripMargin,
                    Array(adsLog.userid)
                )

                // Close the connection
                connection.close()

                // Keep the record only if the user is NOT in the blacklist
                !bool
            }
        )
    }
}

RealTimeApp main program

package com.zhangjk.app

import java.util.{Date, Properties}
import com.zhangjk.handler.BlackListHandler
import com.zhangjk.util.{MyKafkaUtil, PropertiesUtil}
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object RealTimeApp {

    def main(args: Array[String]): Unit = {

        //1. Create the SparkConf
        val sparkConf: SparkConf = new SparkConf().setAppName("RealTimeApp").setMaster("local[*]")

        //2. Create the StreamingContext
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        //3. Read the data
        val properties: Properties = PropertiesUtil.load("config.properties")
        val topic: String = properties.getProperty("kafka.topic")

        val kafkaDStream: InputDStream[ConsumerRecord[String, String]] = MyKafkaUtil.getKafkaStream(topic, ssc)

        //4. Convert the data read from Kafka into case class objects
        val adsLogDStream: DStream[Ads_log] = kafkaDStream.map(record => {

            val value: String = record.value()
            val arr: Array[String] = value.split(" ")

            Ads_log(arr(0).toLong, arr(1), arr(2), arr(3), arr(4))
        })

        //5. Requirement 1: filter the current data set against the blacklist in MySQL
        val filterAdsLogDStream: DStream[Ads_log] = BlackListHandler.filterByBlackList(adsLogDStream)

        //6. Requirement 1: write users that meet the condition into the blacklist
        BlackListHandler.addBlackList(filterAdsLogDStream)

        // Print for testing
        filterAdsLogDStream.cache()
        filterAdsLogDStream.count().print()

        // Start the job
        ssc.start()
        ssc.awaitTermination()
    }
}

// timestamp area city user id ad id
case class Ads_log(timestamp: Long, area: String, city: String, userid: String, adid: String)

Test

1) Start the Kafka cluster

zk.sh start

kf.sh start

2) Start the ad-blacklist main program: RealTimeApp.scala

3) Start the log generator: MockerRealTime.scala

4) Watch how the data in user_ad_count and black_list in spark2020 changes

Observed result: the blacklist ends up containing every user id, and after that no new data is written to the user count table.
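
The two tables can also be inspected directly in MySQL, for example with plain ad-hoc queries (not part of the project code):

SELECT * FROM black_list;

SELECT * FROM user_ad_count ORDER BY count DESC;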

Requirement 2: Real-Time Click Counts per Region, City and Ad

Description: count, in real time, the total daily clicks on each ad in each region and each city, and store the results in MySQL.

Requirement analysis
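
The analysis mirrors requirement 1: reuse the blacklist-filtered stream, map each record to ((dt, area, city, adid), 1L), reduce by key within the batch, and upsert each partial count into area_city_ad_count with ON DUPLICATE KEY UPDATE so that the daily totals accumulate across batches.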

MySQL table creation

CREATE TABLE area_city_ad_count (
    dt VARCHAR(255),
    area VARCHAR(255),
    city VARCHAR(255),
    adid VARCHAR(255),
    count BIGINT,
    PRIMARY KEY (dt, area, city, adid)
);

DateAreaCityAdCountHandler: real-time ad click statistics

package com.zhangjk.handler

import java.sql.Connection
import java.text.SimpleDateFormat
import java.util.Date
import com.zhangjk.app.Ads_log
import com.zhangjk.util.JDBCUtil
import org.apache.spark.streaming.dstream.DStream

object DateAreaCityAdCountHandler {

    // Date formatter
    private val sdf: SimpleDateFormat = new SimpleDateFormat("yyyy-MM-dd")

    // Using the blacklist-filtered data set, count the daily ad clicks per region and city and save them to MySQL
    def saveDateAreaCityAdCountToMysql(filterAdsLogDStream: DStream[Ads_log]): Unit = {

        //1. Count the daily ad clicks per region and city
        val dateAreaCityAdToCount: DStream[((String, String, String, String), Long)] = filterAdsLogDStream.map(ads_log => {

            //a. Format the timestamp as a date string
            val dt: String = sdf.format(new Date(ads_log.timestamp))

            //b. Build the key and return
            ((dt, ads_log.area, ads_log.city, ads_log.adid), 1L)
        }).reduceByKey(_ + _)

        //2. Merge the per-batch counts into the existing data in MySQL
        dateAreaCityAdToCount.foreachRDD(rdd => {

            // Handle each partition separately
            rdd.foreachPartition(iter => {
                //a. Get a connection
                val connection: Connection = JDBCUtil.getConnection

                //b. Write to the database
                iter.foreach { case ((dt, area, city, adid), ct) =>
                    JDBCUtil.executeUpdate(
                        connection,
                        """
                          |INSERT INTO area_city_ad_count (dt,area,city,adid,count)
                          |VALUES(?,?,?,?,?)
                          |ON DUPLICATE KEY
                          |UPDATE count=count+?;
                        """.stripMargin,
                        Array(dt, area, city, adid, ct, ct)
                    )
                }

                //c. Release the connection
                connection.close()
            })
        })
    }
}

RealTimeApp main program (extended with requirement 2)

package com.zhangjk.app

import java.util.{Date, Properties}
import com.zhangjk.handler.{BlackListHandler, DateAreaCityAdCountHandler}
import com.zhangjk.util.{MyKafkaUtil, PropertiesUtil}
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object RealTimeApp {

    def main(args: Array[String]): Unit = {

        //1. Create the SparkConf
        val sparkConf: SparkConf = new SparkConf().setAppName("RealTimeApp").setMaster("local[*]")

        //2. Create the StreamingContext
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        //3. Read the data
        val properties: Properties = PropertiesUtil.load("config.properties")
        val topic: String = properties.getProperty("kafka.topic")

        val kafkaDStream: InputDStream[ConsumerRecord[String, String]] = MyKafkaUtil.getKafkaStream(topic, ssc)

        //4. Convert the data read from Kafka into case class objects
        val adsLogDStream: DStream[Ads_log] = kafkaDStream.map(record => {

            val value: String = record.value()
            val arr: Array[String] = value.split(" ")

            Ads_log(arr(0).toLong, arr(1), arr(2), arr(3), arr(4))
        })

        //5. Requirement 1: filter the current data set against the blacklist in MySQL
        val filterAdsLogDStream: DStream[Ads_log] = BlackListHandler.filterByBlackList(adsLogDStream)

        //6. Requirement 1: write users that meet the condition into the blacklist
        BlackListHandler.addBlackList(filterAdsLogDStream)

        // Print for testing
        filterAdsLogDStream.cache()
        filterAdsLogDStream.count().print()

        //7. Requirement 2: count the daily ad clicks per region and city and save them to MySQL
        DateAreaCityAdCountHandler.saveDateAreaCityAdCountToMysql(filterAdsLogDStream)

        // Start the job
        ssc.start()
        ssc.awaitTermination()
    }
}

// timestamp area city user id ad id
case class Ads_log(timestamp: Long, area: String, city: String, userid: String, adid: String)

Test

1) Clear all data from the black_list table

2) Start the main program: RealTimeApp.scala

3) Start the log generator: MockerRealTime.scala

4) Watch how the data in the area_city_ad_count table in spark2020 changes
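
A simple ad-hoc query (not part of the project code) makes the accumulating totals easy to watch:

SELECT * FROM area_city_ad_count ORDER BY dt, area, city, adid;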

Requirement 3: Ad Clicks in the Last Hour

Note: to save time during testing, the statistics actually cover ad clicks in the last 2 minutes.

Result format: ad id, List[time -> click count, time -> click count, time -> click count]

1:List [15:50->10,15:51->25,15:52->30]

2:List [15:50->10,15:51->25,15:52->30]

3:List [15:50->10,15:51->25,15:52->30]

Approach
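
The idea: open a 2-minute window over the blacklist-filtered stream, map each record to ((adid, HH:mm), 1L), reduce by key to get per-minute counts, then regroup by adid and sort each ad's list of (time, count) pairs by time. The result is recomputed on every batch over the current window contents.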

LastHourAdCountHandler: ad clicks in the last hour

package com.zhangjk.handler

import java.text.SimpleDateFormat
import java.util.Date
import com.zhangjk.app.Ads_log
import org.apache.spark.streaming.Minutes
import org.apache.spark.streaming.dstream.DStream

object LastHourAdCountHandler {

    // Time formatter
    private val sdf: SimpleDateFormat = new SimpleDateFormat("HH:mm")

    // Using the filtered data set, count the per-minute ad clicks in the last hour (2 minutes here)
    def getAdHourMintToCount(filterAdsLogDStream: DStream[Ads_log]): DStream[(String, List[(String, Long)])] = {

        //1. Open a window with a length of 2 minutes: window()
        val windowAdsLogDStream: DStream[Ads_log] = filterAdsLogDStream.window(Minutes(2))

        //2. Reshape the data: ads_log => ((adid,hm),1L)  map()
        val adHmToOneDStream: DStream[((String, String), Long)] = windowAdsLogDStream.map(adsLog => {

            val hm: String = sdf.format(new Date(adsLog.timestamp))

            ((adsLog.adid, hm), 1L)
        })

        //3. Sum the counts: ((adid,hm),1L) => ((adid,hm),sum)  reduceByKey(_+_)
        val adHmToCountDStream: DStream[((String, String), Long)] = adHmToOneDStream.reduceByKey(_ + _)

        //4. Reshape again: ((adid,hm),sum) => (adid,(hm,sum))  map()
        val adToHmCountDStream: DStream[(String, (String, Long))] = adHmToCountDStream.map { case ((adid, hm), count) =>
            (adid, (hm, count))
        }

        //5. Group by adid: (adid,(hm,sum)) => (adid,Iter[(hm,sum),...])  groupByKey, then sort each list by time
        adToHmCountDStream
            .groupByKey()
            .mapValues(iter => iter.toList.sortWith(_._1 < _._1))
    }
}
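
One detail worth noting: the window length passed to window() must be a multiple of the batch interval. Minutes(2) = 120 seconds is a multiple of the 3-second batches used in RealTimeApp, and since no slide duration is given the window slides forward once per batch.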

RealTimeApp main program (extended with requirement 3)

package com.zhangjk.app

import java.util.{Date, Properties}

import com.zhangjk.handler.{BlackListHandler, DateAreaCityAdCountHandler, LastHourAdCountHandler}
import com.zhangjk.util.{MyKafkaUtil, PropertiesUtil}
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object RealTimeApp {

    def main(args: Array[String]): Unit = {

        //1. Create the SparkConf
        val sparkConf: SparkConf = new SparkConf().setAppName("RealTimeApp").setMaster("local[*]")

        //2. Create the StreamingContext
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        //3. Read the data
        val properties: Properties = PropertiesUtil.load("config.properties")
        val topic: String = properties.getProperty("kafka.topic")

        val kafkaDStream: InputDStream[ConsumerRecord[String, String]] = MyKafkaUtil.getKafkaStream(topic, ssc)

        //4. Convert the data read from Kafka into case class objects
        val adsLogDStream: DStream[Ads_log] = kafkaDStream.map(record => {

            val value: String = record.value()
            val arr: Array[String] = value.split(" ")

            Ads_log(arr(0).toLong, arr(1), arr(2), arr(3), arr(4))
        })

        //5. Requirement 1: filter the current data set against the blacklist in MySQL
        val filterAdsLogDStream: DStream[Ads_log] = BlackListHandler.filterByBlackList(adsLogDStream)

        //6. Requirement 1: write users that meet the condition into the blacklist
        BlackListHandler.addBlackList(filterAdsLogDStream)

        // Print for testing
        filterAdsLogDStream.cache()
        filterAdsLogDStream.count().print()

        //7. Requirement 2: count the daily ad clicks per region and city and save them to MySQL
        DateAreaCityAdCountHandler.saveDateAreaCityAdCountToMysql(filterAdsLogDStream)

        //8. Requirement 3: count the per-minute ad clicks in the last hour (2 minutes here)
        val adToHmCountListDStream: DStream[(String, List[(String, Long)])] = LastHourAdCountHandler.getAdHourMintToCount(filterAdsLogDStream)

        //9. Print
        adToHmCountListDStream.print()

        // Start the job
        ssc.start()
        ssc.awaitTermination()
    }
}

// timestamp area city user id ad id
case class Ads_log(timestamp: Long, area: String, city: String, userid: String, adid: String)

Test

1) Clear all data from the black_list table

2) Start the main program: RealTimeApp.scala

3) Start the log generator: MockerRealTime.scala

4) Watch the data printed to the console

(1,List((16:07,15), (16:08,257), (16:09,233)))

(2,List((16:07,11), (16:08,249), (16:09,220)))

(3,List((16:07,24), (16:08,267), (16:09,221)))

(4,List((16:07,14), (16:08,252), (16:09,259)))

(5,List((16:07,17), (16:08,265), (16:09,265)))

(6,List((16:07,22), (16:08,234), (16:09,235)))

 

