Experiment 5
Basic MapReduce Programming Practice
1. Objectives
(1) Master the basic MapReduce programming workflow through hands-on practice;
(2) Learn to solve common data-processing problems with MapReduce, including data deduplication, data sorting, and data mining.
2. Platform
(1) Operating system: Linux (Ubuntu 16.04 or Ubuntu 18.04 recommended)
(2) Hadoop version: 3.1.3
3. Procedure
Task 1. Merging two files and removing duplicates
Given two input files, A and B, write a MapReduce program that merges the two files and removes duplicate records, producing a new output file C. Sample input and output files are shown below for reference.
Sample input file A:
20170101 x
20170102 y
20170103 x
20170104 y
20170105 z
20170106 x
Sample input file B:
20170101 y
20170102 y
20170103 x
20170104 z
20170105 y
Output file C obtained by merging A and B:
20170101 x
20170101 y
20170102 y
20170103 x
20170104 y
20170104 z
20170105 y
20170105 z
20170106 x
MergeMapper.java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;

public class MergeMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Each input line has the form "date value"; the fields may be separated by a tab or spaces.
        String[] parts = value.toString().trim().split("\\s+");
        if (parts.length == 2) {
            // Emit the date as the key and the value field as the value,
            // so all records for the same date meet in one reduce call.
            context.write(new Text(parts[0]), new Text(parts[1]));
        }
    }
}
MergeReducer.java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
import java.util.Set;
import java.util.TreeSet;

public class MergeReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        // A TreeSet removes duplicate values for the same date and keeps them
        // in sorted order, so the output matches the sample file C.
        Set<String> uniqueValues = new TreeSet<>();
        for (Text val : values) {
            uniqueValues.add(val.toString());
        }
        for (String value : uniqueValues) {
            context.write(key, new Text(value));
        }
    }
}
MergeDriver.java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MergeDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "Merge and Deduplicate");
        job.setJarByClass(MergeDriver.class);
        job.setMapperClass(MergeMapper.class);
        job.setReducerClass(MergeReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        // args[0] may be a directory containing both A and B, or addInputPath
        // can be called once per input file.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
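An alternative formulation that is often used for this exercise treats each whole input line as the key and emits no real value at all, so the shuffle phase itself removes duplicates and the reducer needs no set. The sketch below is only illustrative; the class names are assumptions and it is not part of the solution above.

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;

public class DedupByLine {

    // Emit the whole line as the key with a null value; identical lines collapse
    // into a single key during the shuffle.
    public static class LineMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(value, NullWritable.get());
        }
    }

    // Write each distinct line exactly once.
    public static class LineReducer extends Reducer<Text, NullWritable, Text, NullWritable> {
        @Override
        protected void reduce(Text key, Iterable<NullWritable> values, Context context)
                throws IOException, InterruptedException {
            context.write(key, NullWritable.get());
        }
    }
}

With this layout the reducer class can also be registered as a combiner (job.setCombinerClass(LineReducer.class)) to shrink the shuffled data, and the driver would set Text and NullWritable as the output key and value classes.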
Task 2. Sorting the contents of the input files
There are several input files, and every line in each file contains one integer. Write a program that reads all the integers from all files, sorts them in ascending order, and writes them to a new output file. Each output line contains two integers: the first is the rank of the second integer in the sorted order, and the second is the original integer. Sample input and output files are shown below for reference.
Sample input file 1:
33
37
12
40
Sample input file 2:
4
16
39
5
Sample input file 3:
1
45
25
Output file obtained from input files 1, 2 and 3:
1 1
2 4
3 5
4 12
5 16
6 25
7 33
8 37
9 39
10 40
11 45
SortMapper.java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;

public class SortMapper extends Mapper<LongWritable, Text, IntWritable, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        try {
            // Emit the integer as both key and value; the shuffle phase sorts
            // the keys, so the reducer receives them in ascending order.
            int number = Integer.parseInt(value.toString().trim());
            context.write(new IntWritable(number), new IntWritable(number));
        } catch (NumberFormatException e) {
            // Skip lines that are not valid integers.
        }
    }
}
SortReducer.java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;

public class SortReducer extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {
    // Running rank; globally correct only when the job uses a single reducer.
    private int rank = 1;

    @Override
    protected void reduce(IntWritable key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // Keys arrive in ascending order; write one line per occurrence so that
        // duplicate numbers each get their own rank.
        for (IntWritable value : values) {
            context.write(new IntWritable(rank++), value);
        }
    }
}
SortDriver.java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SortDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "Sort Numbers");
        job.setJarByClass(SortDriver.class);
        job.setMapperClass(SortMapper.class);
        job.setReducerClass(SortReducer.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(IntWritable.class);
        // Use a single reducer so that all keys end up globally sorted in one
        // output file and the running rank in SortReducer stays consecutive.
        job.setNumReduceTasks(1);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
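If more than one reducer were used, a custom Partitioner would be needed so that the key ranges assigned to the reducers keep the output globally ordered (and the rank counter would then have to be offset per partition). The sketch below is only illustrative and is not required for this exercise; it assumes the inputs are non-negative integers below a known bound.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Partitioner;

public class RangePartitioner extends Partitioner<IntWritable, IntWritable> {
    // Assumed upper bound on the input integers; only for demonstration.
    private static final int MAX_VALUE = 100;

    @Override
    public int getPartition(IntWritable key, IntWritable value, int numPartitions) {
        // Route the smallest keys to reducer 0, the next range to reducer 1, and so on,
        // so that concatenating the output files in reducer order yields a sorted list.
        int width = MAX_VALUE / numPartitions + 1;
        int partition = key.get() / width;
        return Math.min(partition, numPartitions - 1);
    }
}

It would be registered in the driver with job.setPartitionerClass(RangePartitioner.class) together with job.setNumReduceTasks(n).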
Task 3. Mining information from a given table
A child-parent table is given below. Write a program that mines the parent-child relationships in it and produces a table of grandchild-grandparent relationships.
The input file is as follows:
child parent
Steven Lucy
Steven Jack
Jone Lucy
Jone Jack
Lucy Mary
Lucy Frank
Jack Alice
Jack Jesse
David Alice
David Jesse
Philip David
Philip Alma
Mark David
Mark Alma
The output file is as follows:
grandchild grandparent
Steven Alice
Steven Jesse
Jone Alice
Jone Jesse
Steven Mary
Steven Frank
Jone Mary
Jone Frank
Philip Alice
Philip Jesse
Mark Alice
Mark Jesse
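One common way to produce this result with MapReduce is a single-table self-join: the mapper emits every record twice, once keyed by the parent and tagged as a "child" relation, and once keyed by the child and tagged as a "parent" relation; the reducer then pairs every grandchild with every grandparent that meet at the same join key. The sketch below is only illustrative (class names, tags and header handling are assumptions); the standalone FamilyTree.java program that follows produces the same result without Hadoop.

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class GrandchildJoin {

    // For each "child parent" record, emit two tagged records joined on the middle person:
    //   key = parent, value = "child:"  + child   (this person's children)
    //   key = child,  value = "parent:" + parent  (this person's parents)
    public static class JoinMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString().trim();
            if (line.isEmpty() || line.startsWith("child")) {
                return; // skip blank lines and the header line
            }
            String[] parts = line.split("\\s+");
            if (parts.length == 2) {
                context.write(new Text(parts[1]), new Text("child:" + parts[0]));
                context.write(new Text(parts[0]), new Text("parent:" + parts[1]));
            }
        }
    }

    // For each join key (the middle generation), pair every grandchild with every grandparent.
    public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
        private boolean headerWritten = false; // assumes a single reducer

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            if (!headerWritten) {
                context.write(new Text("grandchild"), new Text("grandparent"));
                headerWritten = true;
            }
            List<String> grandchildren = new ArrayList<>();
            List<String> grandparents = new ArrayList<>();
            for (Text val : values) {
                String v = val.toString();
                if (v.startsWith("child:")) {
                    grandchildren.add(v.substring(6));
                } else if (v.startsWith("parent:")) {
                    grandparents.add(v.substring(7));
                }
            }
            for (String gc : grandchildren) {
                for (String gp : grandparents) {
                    context.write(new Text(gc), new Text(gp));
                }
            }
        }
    }
}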
FamilyTree.java
import java.io.*;
import java.util.*;

public class FamilyTree {
    // Maps each parent to the list of his or her children.
    private static Map<String, List<String>> parentChildRelations = new HashMap<>();

    public static void main(String[] args) {
        String inputFilePath = "input.txt";
        String outputFilePath = "output.txt";
        try {
            // Load and parse the child-parent pairs from the input file.
            loadParentChildRelations(inputFilePath);
            // Derive all grandchild-grandparent relations.
            Map<String, List<String>> grandchildGrandparentRelations = findGrandchildGrandparentRelations();
            // Write the grandchild-grandparent relations to the output file.
            writeGrandchildGrandparentRelations(outputFilePath, grandchildGrandparentRelations);
        } catch (IOException e) {
            System.err.println("Error processing files: " + e.getMessage());
        }
    }

    // Load the child-parent pairs from the input file.
    private static void loadParentChildRelations(String filePath) throws IOException {
        try (BufferedReader br = new BufferedReader(new FileReader(filePath))) {
            String line;
            while ((line = br.readLine()) != null) {
                if (line.trim().isEmpty() || line.startsWith("child")) continue; // skip blank lines and the header line
                String[] parts = line.split("\\s+");
                if (parts.length == 2) {
                    String child = parts[0].trim();
                    String parent = parts[1].trim();
                    parentChildRelations.computeIfAbsent(parent, k -> new ArrayList<>()).add(child);
                }
            }
        }
    }

    // Derive all grandchild -> grandparents relations from the parent -> children map.
    private static Map<String, List<String>> findGrandchildGrandparentRelations() {
        Map<String, List<String>> grandchildGrandparentRelations = new HashMap<>();
        for (Map.Entry<String, List<String>> entry : parentChildRelations.entrySet()) {
            String parent = entry.getKey();
            for (String child : entry.getValue()) {
                // The children of this child are the grandchildren of this parent.
                List<String> grandchildren = parentChildRelations.get(child);
                if (grandchildren != null && !grandchildren.isEmpty()) {
                    for (String grandchild : grandchildren) {
                        grandchildGrandparentRelations.computeIfAbsent(grandchild, k -> new ArrayList<>()).add(parent);
                    }
                }
            }
        }
        return grandchildGrandparentRelations;
    }

    // Write the grandchild-grandparent pairs to the output file.
    private static void writeGrandchildGrandparentRelations(String filePath, Map<String, List<String>> relations) throws IOException {
        try (BufferedWriter bw = new BufferedWriter(new FileWriter(filePath))) {
            bw.write("grandchild\tgrandparent\n");
            for (Map.Entry<String, List<String>> entry : relations.entrySet()) {
                String grandchild = entry.getKey();
                for (String grandparent : entry.getValue()) {
                    bw.write(grandchild + "\t" + grandparent + "\n");
                }
            }
        }
    }
}
4. Lab Report
Title: Basic MapReduce Programming Practice
Name: 王士英
Date: 12.2
Environment: IntelliJ IDEA
Experiment content and completion status: completed.
Problems encountered: the job failed because of a permission error; running it as root did not help either.
Solution (list the problems encountered and how they were resolved, and any unresolved problems): the permission check concerns the HDFS user rather than the local account, so adding System.setProperty("HADOOP_USER_NAME", "anta"); to the driver resolved the failure.
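For reference, a minimal sketch of the fix described above. The property must be set at the very beginning of the driver's main method, before the first Configuration (and hence the HDFS client) is created; the username "anta" is the one used in this report, and in general it should be the owner of the HDFS directories being accessed.

import org.apache.hadoop.conf.Configuration;

public class HadoopUserNameDemo {
    public static void main(String[] args) {
        // Tell the Hadoop client which user to act as on HDFS; without this,
        // the local OS account is used and may fail HDFS permission checks.
        // In MergeDriver and SortDriver above, this line goes at the top of main().
        System.setProperty("HADOOP_USER_NAME", "anta");

        Configuration conf = new Configuration();
        System.out.println("Submitting as HDFS user: " + System.getProperty("HADOOP_USER_NAME"));
        System.out.println("Default file system: " + conf.get("fs.defaultFS"));
    }
}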