CS 551 Systems Programming, Fall 2024
Programming Project 2

In this project we are going to simulate the MapReduce framework on a single machine using multi-process programming.
1 Introduction

In 2004, Google (the paper "MapReduce: Simplified Data Processing on Large Clusters" by J. Dean and S. Ghemawat) introduced a general programming model for processing and generating large data sets on a cluster of computers. The general idea of the MapReduce model is to partition large data sets into multiple splits, each of which is small enough to be processed on a single machine, called a worker. The data splits will be processed in two phases: the map phase and the reduce phase. In the map phase, a worker runs user-defined map functions to parse the input data (i.e., a split of data) into multiple intermediate key/value pairs, which are saved into intermediate files. In the reduce phase, a (reduce) worker runs reduce functions that are also provided by the user to merge the intermediate files, and outputs the result to result file(s).

We now use a small data set (the first few lines of a famous poem by Robert Frost, see Figure 1) to explain what MapReduce does.
Figure 1: A small data set to be processed by MapReduce.
To run MapReduce, we first split the dataset into small pieces. For this example, we will split
the dataset by the four lines of the poem (Figure 2).
Figure 2: Partitioning the input data set into multiple splits.
The MapReduce framework will have four workers (in our project, the four workers are four
processes that are forked by the main program. In reality, they will be four independent machines)
to work on the four splits (each worker is working on a split). These four map workers will each
run a user-defined map function to process the split. The map function will map the input into
a series of (key, value) pairs. For this example, let the map function simply count the number of
each letter (A-Z) in the data set.

Figure 3: The outputs of the map phase, which are also the inputs to the reduce phase.
The map outputs in our example are shown in Figure 3. They are also the inputs for the
reduce phase. In the reduce phase, a reduce worker runs a user-defined reduce function to merge
the intermediate results output by the map workers, and generates the final results (Figure 4).
Figure 4: The final result.
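To make the letter-counting example concrete, the sketch below shows what a map function and a reduce function could look like for this task. It is illustrative only: the function names and signatures are assumptions made for this example and are not the interfaces declared in the base code, which your actual implementation must follow.

/* Illustrative sketch only -- the names and signatures here are assumptions,
 * not the interfaces required by the base code. */
#include <ctype.h>
#include <stdio.h>

/* Map step: count the occurrences of each letter A-Z in one split of text
 * and write the (letter, count) pairs to an intermediate file. */
static void example_letter_map(FILE *split_in, FILE *itm_out)
{
    long counts[26] = {0};
    int c;
    while ((c = fgetc(split_in)) != EOF) {
        if (isalpha(c))
            counts[toupper(c) - 'A']++;
    }
    for (int i = 0; i < 26; i++)
        fprintf(itm_out, "%c %ld\n", 'A' + i, counts[i]);
}

/* Reduce step: merge the (letter, count) pairs from all intermediate files
 * by summing the counts of each letter, then write the final result. */
static void example_letter_reduce(FILE *itm_in[], int num_itm, FILE *result_out)
{
    long totals[26] = {0};
    for (int i = 0; i < num_itm; i++) {
        char letter;
        long count;
        while (fscanf(itm_in[i], " %c %ld", &letter, &count) == 2)
            totals[letter - 'A'] += count;
    }
    for (int i = 0; i < 26; i++)
        fprintf(result_out, "%c %ld\n", 'A' + i, totals[i]);
}

Note that in the map phase each worker sees only its own split, so it can only produce per-split counts; the reduce phase is what merges those partial counts into the final result.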
2 Simulating MapReduce with multi-process programming
2.1 The base code
Download the base code from Brightspace. You will need to add your implementation into
this base code. The base code also contains three input data sets as examples.
2.2 The working scenario
In this project, we will use the MapReduce model to process large text files. The input will be a file that contains many lines of text. The base code folder contains three example input data files. We will be testing using the example input data files, or data files in a similar format.

A driver program is used to accept user inputs and drive the MapReduce processing. The main part of the driver program is already implemented in main.c. You will need to complete the mapreduce() function, which is defined in mapreduce.c and is called by the driver program. A Makefile has already been given. Running the Makefile gives you the executable of the driver program, which is named "run-mapreduce". The driver program is used in the following way:

./run-mapreduce "counter"|"finder" file_path split_num [word_to_find]

where the arguments are explained as follows.
- The first argument specifies the type of the task; it can be either the "Letter counter" or the "Word finder" task (explained later).
- The second argument "file_path" is the path to the input data file.
- The third argument "split_num" specifies how many splits the input data file should be partitioned into for the map phase.
- The fourth argument is used only for the "Word finder" task. This argument specifies the word that the user is trying to find in the input file.

Note that the sizes of the splits do not need to be exactly the same: if you force them to be, a word may be divided into two different splits. One way to choose such split boundaries is sketched below.
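The sketch below shows one possible way to choose split boundaries: start from an evenly spaced byte offset and move the boundary forward to the next whitespace character. The function find_split_end() and its parameters are assumptions made for this illustration, not part of the base code.

/* Illustrative sketch only: find_split_end() and its parameters are
 * assumptions for this example. */
#include <ctype.h>
#include <sys/types.h>
#include <unistd.h>

/* Starting from an even division of file_size into split_num pieces, move the
 * tentative end of split `idx` forward until a whitespace byte is reached, so
 * the split ends on a word boundary. Returns the offset one past the split. */
static off_t find_split_end(int fd, off_t file_size, int split_num, int idx)
{
    off_t end = file_size * (idx + 1) / split_num;   /* tentative boundary */
    if (end >= file_size)
        return file_size;

    char c;
    lseek(fd, end, SEEK_SET);
    while (read(fd, &c, 1) == 1 && !isspace((unsigned char)c))
        end++;                      /* still inside a word: keep going */

    end++;                          /* include the whitespace byte itself */
    return end > file_size ? file_size : end;
}

A caller would use the end of split i as the start of split i + 1, so the splits cover the whole file without cutting any word, even though their sizes differ slightly.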
After all the splits have been processed in the map phase, the first worker process forked will also need to run the user-defined reduce function to process all the intermediate files output by the map phase. Figure 5 below gives an example of this process.

Figure 5: An example of the working scenario.
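Below is a minimal sketch of this working scenario, under the assumption that a pipe is used to tell the first worker when the other map workers have finished. The helpers run_map() and run_reduce() are hypothetical stand-ins for the user-defined map/reduce work; your mapreduce() must use the interfaces provided in the base code.

/* Illustrative sketch only: the structure, the pipe-based synchronization and
 * the helper names are assumptions for this example. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical stand-ins for the user-defined map/reduce work. */
static void run_map(int i)    { printf("map worker for split %d -> mr-%d.itm\n", i, i); }
static void run_reduce(int n) { printf("reduce over %d intermediate files -> mr.rst\n", n); }

static void example_mapreduce(int split_num)
{
    int go[2];                        /* pipe: tells worker 0 when to reduce */
    pid_t pids[split_num];

    if (pipe(go) == -1) { perror("pipe"); exit(1); }

    for (int i = 0; i < split_num; i++) {
        pid_t pid = fork();
        if (pid == -1) { perror("fork"); exit(1); }

        if (pid == 0) {                       /* child: one map worker */
            close(go[1]);
            run_map(i);                       /* map phase: write mr-<i>.itm */
            if (i == 0) {                     /* first worker also reduces   */
                char dummy;
                if (read(go[0], &dummy, 1) < 0)   /* block until parent says go */
                    perror("read");
                run_reduce(split_num);
            }
            close(go[0]);
            exit(0);                          /* children must exit: no fork bomb */
        }
        pids[i] = pid;                        /* parent keeps forking */
    }

    close(go[0]);
    for (int i = 1; i < split_num; i++)       /* wait for map workers 1..N-1 */
        waitpid(pids[i], NULL, 0);
    if (write(go[1], "g", 1) < 0)             /* all other maps done: let worker 0 reduce */
        perror("write");
    close(go[1]);
    waitpid(pids[0], NULL, 0);                /* finally wait for the reduce worker */
}

int main(void)
{
    example_mapreduce(4);                     /* four splits, as in Figure 5 */
    return 0;
}

The pipe is only one possible way to signal the first worker; the important points are that every child eventually calls exit() (so there is no fork bomb) and that the reduce step starts only after all intermediate files are complete.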
2.3 The two tasks

The two tasks that can be performed by the driver program are described as follows.

The "Letter counter" task counts the number of occurrences of each letter (A-Z) in the input file. The intermediate files and the final result file should be written in the following format:

A number-of-occurrences
B number-of-occurrences
...
Y number-of-occurrences
Z number-of-occurrences

The "Word finder" task is to find the word provided by the user (specified by the "word_to_find" argument of the driver program) in the input file, and output to the result file all the lines that contain the target word, in the same order as they appear in the input file. For this task, you should implement the word finder as a whole-word match, meaning that the function should only recognize complete words that match exactly (case-sensitive) with the specified search term. And if the specified word is found multiple times in the same line, you only need to output that line once.
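For the whole-word, case-sensitive match described above, the sketch below shows one possible helper. The name line_contains_word() and its notion of a word boundary (any non-alphanumeric character) are assumptions made for this illustration; check against the example data files for the exact behavior expected.

/* Illustrative sketch only: the helper name and the word-boundary rule are
 * assumptions for this example. */
#include <ctype.h>
#include <string.h>

/* Return 1 if `line` contains `word` as a whole, case-sensitive word. */
static int line_contains_word(const char *line, const char *word)
{
    size_t wlen = strlen(word);
    const char *p = line;

    while ((p = strstr(p, word)) != NULL) {
        int left_ok  = (p == line) || !isalnum((unsigned char)p[-1]);
        int right_ok = !isalnum((unsigned char)p[wlen]);
        if (left_ok && right_ok)
            return 1;           /* whole-word match found */
        p += 1;                 /* partial match only: keep searching */
    }
    return 0;
}

Because the helper returns as soon as one whole-word match is found, a line that contains the target word several times is naturally output only once.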
2.4 Other requirements

- Besides the mapreduce() function defined in mapreduce.c, you will also need to complete the map/reduce functions of the two tasks (in usr_functions.c).
- About the interfaces listed in "user_functions.h" and "mapreduce.h": keep these interfaces unchanged (you may add your own helper functions where necessary). The above requirements allow the TA to test your implementations of worker logic and user map/reduce functions separately. Note that violation of these requirements will result in 0 points for this project.
- Use fork() to spawn processes. Be careful to avoid a fork bomb (check Wikipedia if you are not familiar with it). A fork bomb will result in 0 points for this project.
- The fd in the DATA_SPLIT structure should be a file descriptor to the original input data file.
- The intermediate file output by the first map worker process should be named "mr-0.itm", the intermediate file output by the second map worker process should be named "mr-1.itm", and so on. The result file is named "mr.rst" (already done in main.c).
- Your program should not automatically delete the intermediate files once they are created. They will be checked when grading. But your submission should not contain any intermediate files, as they should be created dynamically.

3 Submit your work

The files: compress your README file, all the files in the base code folder, and any additional files you add into a ZIP file. Name the ZIP file based on your BU email ID. For example, if your BU email is "abc@binghamton.edu", then the zip file should be "proj2_abc.zip".

Submission: submit the ZIP file to Brightspace before the deadline.

3.1 Grading guidelines

(1) Prepare the ZIP file on a Linux machine. If your zip file cannot be uncompressed, 5 points off.
(2) If the submitted ZIP file/source code files included in the ZIP file are not named as specified
(so that it causes problems for the TA's automated grading scripts), 10 points off.
(3) If the submitted code does not compile:
1 TA will try to fix the problem (for no more than 3 minutes);
2 if (problem solved)
3     1%-10% points off (based on how complex the fix is, TA's discretion);
4 else
5     TA may contact the student by email or schedule a demo to fix the problem;
6     if (problem solved)
7         11%-20% points off (based on how complex the fix is, TA's discretion);
8     else
9         All points off;
So in the case that the TA contacts you to fix a problem, please respond to the TA's email promptly or show up at the demo appointment on time; otherwise line 9 above will be effective.
(4) If the code is not working as required in this spec, the TA will take points off based on the assigned full points of the task and the actual problem.
(5) Last but not least, stick to the collaboration policy stated in the syllabus: you may discuss with your fellow students, but code should absolutely be kept private.