
Word Count with Spark

Posted: 2022-10-31 13:10:01

Tags: maven, scala, word count, apache, org, spark, 2.11


Word count:

The official examples page is a good starting point:

http://spark.apache.org/examples.html

This is a small case study: starting from the official example, I took it a step further and implemented it in two languages.


Main files:

words.txt:

hello me
hello you
hello her
hello me
hello you
hello her
hello me
hello you
hello her
hello me
hello you
hello her

pom.xml (the required dependencies):

<!-- Repository locations: aliyun, cloudera, and jboss, in that order -->
<repositories>
    <repository>
        <id>aliyun</id>
        <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
    </repository>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
    <repository>
        <id>jboss</id>
        <url>http://repository.jboss.com/nexus/content/groups/public</url>
    </repository>
</repositories>
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <encoding>UTF-8</encoding>
    <scala.version>2.11.8</scala.version>
    <scala.compat.version>2.11</scala.compat.version>
    <hadoop.version>2.7.4</hadoop.version>
    <spark.version>2.2.0</spark.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive-thriftserver_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>-->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql-kafka-0-10_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>

    <!--<dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.6.0-mr1-cdh5.14.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.2.0-cdh5.14.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-server</artifactId>
        <version>1.2.0-cdh5.14.0</version>
    </dependency>-->

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-server</artifactId>
        <version>1.3.1</version>
    </dependency>
    <dependency>
        <groupId>com.typesafe</groupId>
        <artifactId>config</artifactId>
        <version>1.3.3</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.38</version>
    </dependency>
</dependencies>

<build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <!-- <testSourceDirectory>src/test/scala</testSourceDirectory>-->
    <plugins>
        <!-- Plugin for compiling Java -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.5.1</version>
        </plugin>
        <!-- Plugin for compiling Scala -->
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>3.2.2</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                    <configuration>
                        <args>
                            <arg>-dependencyfile</arg>
                            <arg>${project.build.directory}/.scala_dependencies</arg>
                        </args>
                    </configuration>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.18.1</version>
            <configuration>
                <useFile>false</useFile>
                <disableXmlReport>true</disableXmlReport>
                <includes>
                    <include>**/*Test.*</include>
                    <include>**/*Suite.*</include>
                </includes>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <filters>
                            <filter>
                                <artifact>*:*</artifact>
                                <excludes>
                                    <exclude>META-INF/*.SF</exclude>
                                    <exclude>META-INF/*.DSA</exclude>
                                    <exclude>META-INF/*.RSA</exclude>
                                </excludes>
                            </filter>
                        </filters>
                        <transformers>
                            <transformer
                                    implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                <mainClass></mainClass>
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
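The shade plugin builds a fat jar at `package` time but leaves `<mainClass>` empty above, so the jar's manifest has no entry point. As an example (not part of the original post), you could point it at one of the classes defined later in this post, such as `cn.itcast.Demo`:

```xml
<transformer
        implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
    <!-- Example only: makes the shaded jar runnable with `java -jar`
         and lets spark-submit default to this class -->
    <mainClass>cn.itcast.Demo</mainClass>
</transformer>
```

Without it, the jar still works; you just have to name the class explicitly, e.g. with spark-submit's `--class` option.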

Java implementation:

package cn.itcast;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

public class Demo {
    public static void main(String[] args) {
        //1. Create the SparkContext
        SparkConf conf = new SparkConf().setAppName("wc").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        //2. Read the file
        JavaRDD<String> stringJavaRDD = sc.textFile("F:\\data\\words.txt");
        //3. Split each line on spaces
        //In Java, the function argument is an instance of a functional interface;
        //looking at the source, flatMap expects:
        //public interface FlatMapFunction<T, R> extends Serializable {
        //    Iterator<R> call(T t) throws Exception;
        //}
        //T is String, i.e. each input line;
        //Iterator<R> is Iterator<String>, an iterator over the words of that line
        //Java lambda syntax: (parameters) -> { body }
        JavaRDD<String> wordRDD = stringJavaRDD.flatMap(s -> Arrays.asList(s.split(" ")).iterator());
        //4. Map each word to (word, 1)
        //public interface PairFunction<T, K, V> extends Serializable {
        //    Tuple2<K, V> call(T t) throws Exception;
        //}
        JavaPairRDD<String, Integer> wordAndOne = wordRDD.mapToPair(w -> new Tuple2<>(w, 1));
        //5. Aggregate by key
        JavaPairRDD<String, Integer> result = wordAndOne.reduceByKey((a, b) -> a + b);
        //6. Print the results
        //result.foreach(System.out::println);
        result.foreach(t -> System.out.println(t));
        sc.close();
    }
}
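The flatMap → mapToPair → reduceByKey pipeline above can be sketched with plain Java streams, no Spark cluster required; this is not the post's Spark code, just a minimal local illustration of the same semantics, using the contents of words.txt as input:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class WordCountStreams {
    // Split every line on spaces, then count occurrences of each word.
    static Map<String, Long> wordCount(String[] lines) {
        return Arrays.stream(lines)
                .flatMap(line -> Arrays.stream(line.split(" ")))        // like RDD.flatMap
                .collect(Collectors.groupingBy(Function.identity(),     // group by the word itself
                        Collectors.counting()));                        // like reduceByKey with +
    }

    public static void main(String[] args) {
        String[] lines = {
                "hello me", "hello you", "hello her",
                "hello me", "hello you", "hello her",
                "hello me", "hello you", "hello her",
                "hello me", "hello you", "hello her"
        };
        // Expect hello=12 and me/you/her=4 each (map iteration order may vary)
        System.out.println(wordCount(lines));
    }
}
```

The key difference is that the stream version runs in one JVM, while Spark evaluates the same shape of pipeline lazily and in parallel across partitions.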

Scala implementation:

package cn.itcast.sparkhello

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object WordCount_3 {
  def main(args: Array[String]): Unit = {
    //1. Create the SparkContext
    val conf = new SparkConf().setAppName("wc").setMaster("local[*]")
    val sc = new SparkContext(conf)
    //2. Read the file
    //A Resilient Distributed Dataset (RDD) is a distributed collection:
    //Spark's abstraction over data spread across the cluster, letting you
    //work with distributed data as if it were a local collection
    //RDD[one element per line]
    val fileRDD: RDD[String] = sc.textFile("F:\\data\\words.txt")
    //3. Process the data
    //3.1 Split each line on spaces and flatten: RDD[word]
    val wordRDD: RDD[String] = fileRDD.flatMap(line => line.split(" "))
    //3.2 Map each word to (word, 1): RDD[(word, 1)]
    val wordAndOneRDD = wordRDD.map(word => (word, 1))
    //3.3 Aggregate by key
    val wordAndCount: RDD[(String, Int)] = wordAndOneRDD.reduceByKey((a, b) => a + b)
    //4. Print the results
    wordAndCount.foreach(println(_))
  }
}
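One thing worth understanding about `reduceByKey` is that it does not ship all values to one place: it first reduces within each partition, then merges the per-partition partial results during the shuffle (similar to a MapReduce combiner). A rough single-JVM sketch of that two-phase shape, with hypothetical partitions:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReduceByKeySketch {
    // Phase 1: local (per-partition) aggregation of (word, 1) pairs.
    static Map<String, Integer> localReduce(List<String> partition) {
        Map<String, Integer> acc = new HashMap<>();
        for (String word : partition) {
            acc.merge(word, 1, Integer::sum); // (a, b) -> a + b, as in reduceByKey
        }
        return acc;
    }

    // Phase 2: merge the partial maps from every partition.
    static Map<String, Integer> merge(List<Map<String, Integer>> partials) {
        Map<String, Integer> result = new HashMap<>();
        for (Map<String, Integer> partial : partials) {
            partial.forEach((k, v) -> result.merge(k, v, Integer::sum));
        }
        return result;
    }

    public static void main(String[] args) {
        // Two hypothetical partitions of the word RDD.
        List<String> p1 = Arrays.asList("hello", "me", "hello", "you");
        List<String> p2 = Arrays.asList("hello", "her", "hello", "me");
        Map<String, Integer> counts = merge(Arrays.asList(localReduce(p1), localReduce(p2)));
        // Expect hello=4, me=2, you=1, her=1 (map iteration order may vary)
        System.out.println(counts);
    }
}
```

This only works because the reduce function is associative and commutative, which is exactly what `reduceByKey` requires.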

Output (Java and Scala):

(Screenshot of the console output in the original post.)

From: https://blog.51cto.com/u_12277263/5809173
