
MapReduce in Practice: Small File Handling Case (Custom InputFormat)



Small File Handling Case (Custom InputFormat)

1) Requirement

Small files hurt efficiency in both HDFS and MapReduce, yet in practice you often have to process large numbers of them, so a dedicated solution is needed. The idea here is to merge the small files into a single SequenceFile: it stores one record per original file, using the file path plus name as the key and the file content as the value.

2) Input Data

one.txt:

yongpeng weidong weinan
sanfeng luozong xiaoming

two.txt:

longlong fanfan
mazong kailun yuhang yixin
longlong fanfan
mazong kailun yuhang yixin

three.txt:

shuaige changmo zhenqiang 
dongli lingu xuanxuan

Expected final file format:

The final output is a single binary SequenceFile. Its header records the key class org.apache.hadoop.io.Text and the value class org.apache.hadoop.io.BytesWritable; after that, one record per input file follows (shown here in logical key/value form, omitting the sync markers and length fields of the binary layout):

file:/e:/inputinputformat/one.txt
    yongpeng weidong weinan
    sanfeng luozong xiaoming

file:/e:/inputinputformat/three.txt
    shuaige changmo zhenqiang
    dongli lingu xuanxuan

file:/e:/inputinputformat/two.txt
    longlong fanfan
    mazong kailun yuhang yixin
    longlong fanfan
    mazong kailun yuhang yixin
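
To check this layout in readable form, the merged file can be read back with Hadoop's SequenceFile.Reader. The sketch below is not part of the original case: the class name SequenceFileDump is made up for illustration, and it assumes the default output file e:/output1/part-r-00000 produced by the driver in section 5.

package com.atguigu.mapreduce.inputformat;

import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Illustrative helper: prints every (file path, file content) record of the merged SequenceFile.
public class SequenceFileDump {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed output location of SequenceFileDriver (single reduce task)
        Path path = new Path("e:/output1/part-r-00000");

        Text key = new Text();
        BytesWritable value = new BytesWritable();

        try (SequenceFile.Reader reader =
                new SequenceFile.Reader(conf, SequenceFile.Reader.file(path))) {
            while (reader.next(key, value)) {
                System.out.println("key   = " + key);
                System.out.println("value = "
                        + new String(value.getBytes(), 0, value.getLength(), StandardCharsets.UTF_8));
            }
        }
    }
}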

3) Analysis

Small-file optimization essentially comes down to the following approaches:

(1) At data-collection time, merge small files or small batches of data into larger files before uploading them to HDFS.

(2) Before business processing, run a MapReduce job on HDFS to merge the small files.

(3) At MapReduce processing time, use CombineTextInputFormat to improve efficiency (a driver sketch for this option follows this list).
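
For option (3), the sketch below shows how CombineTextInputFormat could be wired into a driver. It is a minimal illustration, not part of this case's code: the class name CombineSmallFilesDriver and the 4 MB split-size threshold are assumptions chosen for the example.

package com.atguigu.mapreduce.inputformat;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative driver skeleton: CombineTextInputFormat packs many small text
// files into a few large splits instead of creating one split per file.
public class CombineSmallFilesDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(CombineSmallFilesDriver.class);

        // Group small files into splits of at most ~4 MB (example value, in bytes)
        job.setInputFormatClass(CombineTextInputFormat.class);
        CombineTextInputFormat.setMaxInputSplitSize(job, 4 * 1024 * 1024);

        // Mapper/Reducer classes and output key/value types would be set here as usual

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}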

4) Approach

This section solves the small-file input problem with a custom InputFormat.

(1) Define a class that extends FileInputFormat.

(2) Override the RecordReader so that it reads one complete file at a time and wraps it as a single key/value pair.

(3) On the output side, use SequenceFileOutputFormat to write the merged file.

5) Implementation:

(1) Custom InputFormat

package com.atguigu.mapreduce.inputformat;

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Custom InputFormat: each whole file becomes a single record
public class WholeFileInputformat extends FileInputFormat<NullWritable, BytesWritable> {

    // Never split a file; one file maps to exactly one split
    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {

        WholeRecordReader recordReader = new WholeRecordReader();
        recordReader.initialize(split, context);

        return recordReader;
    }
}

(2) Custom RecordReader

package com.atguigu.mapreduce.inputformat;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// RecordReader that emits the entire file as one (NullWritable, BytesWritable) record
public class WholeRecordReader extends RecordReader<NullWritable, BytesWritable> {

    private Configuration configuration;
    private FileSplit split;

    // Becomes true once the single record of this split has been emitted
    private boolean processed = false;
    private BytesWritable value = new BytesWritable();

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {

        this.split = (FileSplit) split;
        configuration = context.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {

        if (!processed) {
            // 1 Allocate a buffer large enough for the whole file
            byte[] contents = new byte[(int) split.getLength()];

            FSDataInputStream fis = null;
            try {
                // 2 Get the file system for this path
                Path path = split.getPath();
                FileSystem fs = path.getFileSystem(configuration);

                // 3 Open the file
                fis = fs.open(path);

                // 4 Read the complete file content into the buffer
                IOUtils.readFully(fis, contents, 0, contents.length);

                // 5 Hand the content over as the record value
                value.set(contents, 0, contents.length);
            } finally {
                // Always close the stream; read errors propagate as IOException
                IOUtils.closeStream(fis);
            }

            processed = true;

            return true;
        }

        return false;
    }

    @Override
    public NullWritable getCurrentKey() throws IOException, InterruptedException {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return processed ? 1 : 0;
    }

    @Override
    public void close() throws IOException {
    }
}

(3) SequenceFileMapper

package com.atguigu.mapreduce.inputformat;

import java.io.IOException;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Mapper: turns each whole-file record into (file path, file bytes)
public class SequenceFileMapper extends Mapper<NullWritable, BytesWritable, Text, BytesWritable> {

    Text k = new Text();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // 1 Get the input split (exactly one file per split with WholeFileInputformat)
        FileSplit inputSplit = (FileSplit) context.getInputSplit();
        // 2 Get the file path and name
        String name = inputSplit.getPath().toString();
        // 3 Use it as the output key
        k.set(name);
    }

    @Override
    protected void map(NullWritable key, BytesWritable value, Context context)
            throws IOException, InterruptedException {

        context.write(k, value);
    }
}

(4) SequenceFileReducer

package com.atguigu.mapreduce.inputformat;

import java.io.IOException;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Reducer: writes one record per file path into the output SequenceFile
public class SequenceFileReducer extends Reducer<Text, BytesWritable, Text, BytesWritable> {

    @Override
    protected void reduce(Text key, Iterable<BytesWritable> values, Context context)
            throws IOException, InterruptedException {

        // Each file path appears exactly once, so the first value is the whole file
        context.write(key, values.iterator().next());
    }
}

(5) SequenceFileDriver

package com.atguigu.mapreduce.inputformat;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class SequenceFileDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        // Hard-coded input/output paths for local testing
        args = new String[] { "e:/input/inputinputformat", "e:/output1" };
        Configuration conf = new Configuration();

        Job job = Job.getInstance(conf);
        job.setJarByClass(SequenceFileDriver.class);
        job.setMapperClass(SequenceFileMapper.class);
        job.setReducerClass(SequenceFileReducer.class);

        // Use the custom InputFormat on the input side
        job.setInputFormatClass(WholeFileInputformat.class);
        // Use SequenceFileOutputFormat on the output side
        job.setOutputFormatClass(SequenceFileOutputFormat.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(BytesWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        boolean result = job.waitForCompletion(true);

        System.exit(result ? 0 : 1);
    }
}
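
Because the job runs with the default single reduce task, all merged records end up in one output file, part-r-00000 under e:/output1. It can be inspected with the SequenceFile.Reader sketch shown after the expected output above, or with the hadoop fs -text command, which decodes SequenceFiles into readable key/value lines.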

 

From: https://blog.51cto.com/u_12654321/5843247
