- SparkContext: the entry point to a Spark cluster, used to create RDDs, broadcast variables, and so on
- RDD: Resilient Distributed Dataset, the core abstraction of a Spark application
- Transformation: an operation that derives a new RDD from an existing one, e.g. map and filter; transformations are lazy
- Action: an operation that triggers actual computation and returns a result, e.g. count and collect (see the short sketch after this list)
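A minimal sketch of the transformation/action distinction, assuming a JavaSparkContext named sc like the one created in the full program below (the variable names are illustrative):

JavaRDD<String> lines = sc.textFile("inputFile.txt");        // load the file, one element per line
JavaRDD<String> nonEmpty = lines.filter(s -> !s.isEmpty());  // transformation: lazy, nothing runs yet
long n = nonEmpty.count();                                   // action: triggers the job and returns a value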
- Environment: Spark Standalone mode
- Goal: count how often each word occurs in a text file
- Input file: inputFile.txt
- Output file: outputFile.txt
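For illustration (the file contents here are hypothetical), an inputFile.txt containing:

hello spark
hello world

would yield output records of the form (word,count):

(hello,2)
(spark,1)
(world,1)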
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaPairRDD;
import scala.Tuple2;
import java.util.Arrays;

public class WordCount {
    public static void main(String[] args) {
        // "local" runs Spark in-process; spark-submit's --master flag overrides this on a cluster
        SparkConf conf = new SparkConf().setAppName("WordCount").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read the input file; each element of the RDD is one line of text
        JavaRDD<String> textFile = sc.textFile("inputFile.txt");
        // Split each line into words
        JavaRDD<String> words = textFile.flatMap(line -> Arrays.asList(line.split(" ")).iterator());
        // Drop empty strings produced by consecutive spaces
        JavaRDD<String> filteredWords = words.filter(word -> word.length() > 0);
        // Map each word to (word, 1) and sum the counts per word
        JavaPairRDD<String, Integer> wordCounts = filteredWords
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey((x, y) -> x + y);
        // saveAsTextFile writes a directory named "outputFile.txt" containing part files
        wordCounts.saveAsTextFile("outputFile.txt");

        sc.close();
    }
}
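To run on a cluster, compile the class and package it into a JAR (the spark-core dependency is usually marked as provided, since the cluster supplies Spark at runtime), then submit it with spark-submit as described next.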
- spark-submit: Spark's script for submitting applications
- main-class: the fully qualified name of the class containing the main method
- path-to-jar: the path to the JAR file that contains that class
- application-arguments: arguments passed through to the application's main method
Submit the application JAR to the Spark cluster:
spark-submit --class <main_class> --master yarn --deploy-mode client <your_spark_app.jar> --input <input_path> --output <output_path>
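A concrete invocation might look like this (the JAR name and HDFS paths are hypothetical):

spark-submit --class WordCount --master yarn --deploy-mode client wordcount.jar --input hdfs:///data/inputFile.txt --output hdfs:///data/wordcount-output

Note that the sample program above hardcodes its input and output paths, so the trailing --input/--output arguments only take effect if main is changed to read them from args.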
From: https://www.cnblogs.com/playforever/p/17932240.html