
MapReduce in Action: Inverted Index Example (Chaining Multiple Jobs)



0) Requirement: there is a large volume of text (documents, web pages), and a search index needs to be built over it.

Input data:

a.txt:

atguigu pingping
atguigu ss
atguigu ss

b.txt:

atguigu pingping
atguigu pingping
pingping ss

c.txt:

atguigu ss
atguigu pingping


(1) Expected output of the first pass

atguigu--a.txt 3
atguigu--b.txt 2
atguigu--c.txt 2
pingping--a.txt 1
pingping--b.txt 3
pingping--c.txt 1
ss--a.txt 2
ss--b.txt 1
ss--c.txt 1

(2) Expected output of the second pass

atguigu  c.txt-->2 b.txt-->2 a.txt-->3
pingping c.txt-->1 b.txt-->3 a.txt-->1
ss       c.txt-->1 b.txt-->1 a.txt-->2

1) First pass: count how many times each word appears in each file, using "word--filename" as the intermediate key

(1) First pass: write OneIndexMapper

package com.atguigu.mapreduce.index;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class OneIndexMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    String name;
    Text k = new Text();
    IntWritable v = new IntWritable();

    @Override
    protected void setup(Context context)
            throws IOException, InterruptedException {
        // Get the name of the file this input split belongs to
        FileSplit split = (FileSplit) context.getInputSplit();

        name = split.getPath().getName();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // 1 Get one line
        String line = value.toString();

        // 2 Split on spaces
        String[] fields = line.split(" ");

        for (String word : fields) {
            // 3 Build the key "word--filename"
            k.set(word + "--" + name);
            v.set(1);

            // 4 Write out
            context.write(k, v);
        }
    }
}
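To make the key format concrete: a line such as "atguigu pingping" read from a.txt produces one (word--filename, 1) pair per word. A minimal standalone sketch of the key-building logic (the sample line and file name are illustrative, not taken from the job):

// Standalone illustration of the mapper's key construction (not part of the job)
public class OneIndexMapperSketch {
    public static void main(String[] args) {
        String line = "atguigu pingping";   // illustrative input line
        String name = "a.txt";              // file name that setup() would capture
        for (String word : line.split(" ")) {
            // prints "atguigu--a.txt	1" and "pingping--a.txt	1"
            System.out.println(word + "--" + name + "\t" + 1);
        }
    }
}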

(2) First pass: write OneIndexReducer

package com.atguigu.mapreduce.index;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class OneIndexReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {

        int count = 0;
        // 1 Sum the counts for this word--filename key
        for (IntWritable value : values) {
            count += value.get();
        }

        // 2 Write out
        context.write(key, new IntWritable(count));
    }
}

(3) First pass: write OneIndexDriver

package com.atguigu.mapreduce.index;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class OneIndexDriver {

    public static void main(String[] args) throws Exception {

        // Hard-coded local paths for testing; replace with real command-line arguments as needed
        args = new String[] { "e:/input/inputoneindex", "e:/output5" };

        Configuration conf = new Configuration();

        Job job = Job.getInstance(conf);
        job.setJarByClass(OneIndexDriver.class);

        job.setMapperClass(OneIndexMapper.class);
        job.setReducerClass(OneIndexReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}
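Because OneIndexReducer only sums integer counts, it can also be registered as a combiner so partial sums are computed on the map side before the shuffle. This is an optional tweak that is not part of the original driver; if desired, add one line before submitting the job:

// Optional: run OneIndexReducer as a map-side combiner to reduce shuffle traffic
job.setCombinerClass(OneIndexReducer.class);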

(4) View the output of the first pass

atguigu--a.txt 3
atguigu--b.txt 2
atguigu--c.txt 2
pingping--a.txt 1
pingping--b.txt 3
pingping--c.txt 1
ss--a.txt 2
ss--b.txt 1
ss--c.txt 1

2) Second pass: regroup the first pass's per-file counts by word

(1) Second pass: write TwoIndexMapper

package com.atguigu.mapreduce.index;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TwoIndexMapper extends Mapper<LongWritable, Text, Text, Text> {
    Text k = new Text();
    Text v = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {

        // 1 Get one line
        String line = value.toString();

        // 2 Split on "--"
        String[] fields = line.split("--");

        k.set(fields[0]);
        v.set(fields[1]);

        // 3 Write out
        context.write(k, v);
    }
}
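Note that fields[1] still contains the tab the first job wrote between its key and the count. For a line such as "atguigu--a.txt	3", splitting on "--" yields "atguigu" as the new key and "a.txt	3" (file name, tab, count) as the value, which is exactly what the reducer in the next step turns into "a.txt-->3". A small standalone sketch of the split (the sample line is assumed to come from the first job's output):

// Standalone illustration of the split on "--" (not part of the job)
public class TwoIndexSplitSketch {
    public static void main(String[] args) {
        String line = "atguigu--a.txt\t3";  // one line of the first job's output (assumed)
        String[] fields = line.split("--");
        System.out.println(fields[0]);      // "atguigu"   -> becomes the key
        System.out.println(fields[1]);      // "a.txt\t3"  -> file name and count, still tab-separated
    }
}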

(2) Second pass: write TwoIndexReducer

package com.atguigu.mapreduce.index;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class TwoIndexReducer extends Reducer<Text, Text, Text, Text> {

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // Input for one key, e.g.:
        // atguigu   a.txt 3
        // atguigu   b.txt 2
        // atguigu   c.txt 2
        //
        // Desired output:
        // atguigu   c.txt-->2 b.txt-->2 a.txt-->3

        StringBuilder sb = new StringBuilder();
        // 1 Concatenate the per-file counts, turning the tab into "-->"
        for (Text value : values) {
            sb.append(value.toString().replace("\t", "-->") + "\t");
        }
        // 2 Write out
        context.write(key, new Text(sb.toString()));
    }
}

(3) Second pass: write TwoIndexDriver

package com.atguigu.mapreduce.index;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TwoIndexDriver {

    public static void main(String[] args) throws Exception {

        // Hard-coded local paths for testing; the input is the first job's output directory
        args = new String[] { "e:/input/inputtwoindex", "e:/output6" };

        Configuration config = new Configuration();
        Job job = Job.getInstance(config);

        job.setJarByClass(TwoIndexDriver.class);
        job.setMapperClass(TwoIndexMapper.class);
        job.setReducerClass(TwoIndexReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}

(4) View the final result of the second pass

atguigu  c.txt-->2 b.txt-->2 a.txt-->3
pingping c.txt-->1 b.txt-->3 a.txt-->1
ss       c.txt-->1 b.txt-->1 a.txt-->2
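
The two drivers above are submitted one after the other by hand. Since this case is about chaining multiple jobs, the same pipeline can also be wired together with Hadoop's JobControl and ControlledJob so that the second job starts only after the first one succeeds. Below is a minimal sketch, assuming the two jobs are configured exactly as in OneIndexDriver and TwoIndexDriver (the class name and the hard-coded paths are illustrative):

package com.atguigu.mapreduce.index;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Sketch of a chained driver (not in the original post): runs both index jobs as one pipeline
public class IndexJobChainDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Job 1: same configuration as OneIndexDriver
        Job oneJob = Job.getInstance(conf, "one-index");
        oneJob.setJarByClass(IndexJobChainDriver.class);
        oneJob.setMapperClass(OneIndexMapper.class);
        oneJob.setReducerClass(OneIndexReducer.class);
        oneJob.setMapOutputKeyClass(Text.class);
        oneJob.setMapOutputValueClass(IntWritable.class);
        oneJob.setOutputKeyClass(Text.class);
        oneJob.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(oneJob, new Path("e:/input/inputoneindex"));
        FileOutputFormat.setOutputPath(oneJob, new Path("e:/output5"));

        // Job 2: same configuration as TwoIndexDriver, reading job 1's output directory
        Job twoJob = Job.getInstance(conf, "two-index");
        twoJob.setJarByClass(IndexJobChainDriver.class);
        twoJob.setMapperClass(TwoIndexMapper.class);
        twoJob.setReducerClass(TwoIndexReducer.class);
        twoJob.setMapOutputKeyClass(Text.class);
        twoJob.setMapOutputValueClass(Text.class);
        twoJob.setOutputKeyClass(Text.class);
        twoJob.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(twoJob, new Path("e:/output5"));
        FileOutputFormat.setOutputPath(twoJob, new Path("e:/output6"));

        // Wrap both jobs and declare the dependency: job 2 starts only after job 1 succeeds
        ControlledJob first = new ControlledJob(oneJob.getConfiguration());
        first.setJob(oneJob);
        ControlledJob second = new ControlledJob(twoJob.getConfiguration());
        second.setJob(twoJob);
        second.addDependingJob(first);

        JobControl jobControl = new JobControl("inverted-index-chain");
        jobControl.addJob(first);
        jobControl.addJob(second);

        // JobControl is a Runnable: run it in a background thread and poll until both jobs finish
        Thread controller = new Thread(jobControl);
        controller.setDaemon(true);
        controller.start();
        while (!jobControl.allFinished()) {
            Thread.sleep(1000);
        }
        jobControl.stop();
        System.exit(jobControl.getFailedJobList().isEmpty() ? 0 : 1);
    }
}

Another common choice is to keep the two drivers separate and let a shell script or a workflow scheduler run them in order; JobControl simply keeps the dependency inside one program.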

