尚硅谷 Big Data Technology: Hadoop (MapReduce) (New), Chapter 1 MapReduce Overview

1.7 MapReduce Programming Conventions

A user-written MapReduce program consists of three parts: a Mapper, which extends org.apache.hadoop.mapreduce.Mapper and overrides map(); a Reducer, which extends Reducer and overrides reduce(); and a Driver, which configures the Job and submits it to the framework. The WordCount example below implements all three.

1.8 WordCount Hands-On Example

1. Requirement

Count and output the total number of occurrences of each word in a given text file.

(1) Input data

atguigu atguigu
ss ss
cls cls
jiao
banzhang
xue
hadoop

(2) Expected output data

atguigu 2
banzhang 1
cls 2
hadoop 1
jiao 1
ss 2
xue 1

2. Requirement Analysis

Following the MapReduce programming conventions, write a Mapper, a Reducer, and a Driver, as shown in Figure 4-2.

Figure 4-2: Requirement analysis
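Figure 4-2 itself is not reproduced here; what it depicts is the standard MapReduce data flow over the sample input, sketched below (the grouping step is the framework's built-in shuffle, not user code):

map:     "atguigu atguigu"  ->  (atguigu, 1), (atguigu, 1)
         "ss ss"            ->  (ss, 1), (ss, 1)
         ...one map() call per input line...
shuffle: groups values by key  ->  (atguigu, [1, 1]), (ss, [1, 1]), ...
reduce:  sums each group       ->  (atguigu, 2), (ss, 2), ...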

3. Environment Setup

(1) Create a Maven project.

(2) Add the following dependencies to pom.xml:

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.2</version>
    </dependency>
</dependencies>

(3) In the project's src/main/resources directory, create a new file named "log4j.properties" and put the following in it:

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n

4. Write the Program

(1) Write the Mapper class:

package com.atguigu.mapreduce.wordcount;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Generics: input key = byte offset of the line, input value = the line itself,
// output key = a word, output value = the count 1.
public class WordcountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    // Reused across map() calls; context.write() serializes the current
    // contents immediately, so reusing the objects is safe and avoids churn.
    Text k = new Text();
    IntWritable v = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 1. Get one line
        String line = value.toString();

        // 2. Split it into words
        String[] words = line.split(" ");

        // 3. Emit (word, 1) for each word
        for (String word : words) {
            k.set(word);
            context.write(k, v);
        }
    }
}

(2) Write the Reducer class:

package com.atguigu.mapreduce.wordcount;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Generics: input key/value = the Mapper's output types (word, 1),
// output key/value = (word, total count).
public class WordcountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    int sum;
    IntWritable v = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // 1. Sum all counts for this word
        sum = 0;
        for (IntWritable count : values) {
            sum += count.get();
        }

        // 2. Emit (word, total)
        v.set(sum);
        context.write(key, v);
    }
}

(3) Write the Driver class:

package com.atguigu.mapreduce.wordcount;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordcountDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        // 1. Get the configuration and create the job
        Configuration configuration = new Configuration();
        Job job = Job.getInstance(configuration);

        // 2. Set the jar to load by the class that contains it
        job.setJarByClass(WordcountDriver.class);

        // 3. Set the Mapper and Reducer classes
        job.setMapperClass(WordcountMapper.class);
        job.setReducerClass(WordcountReducer.class);

        // 4. Set the map output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // 5. Set the final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // 6. Set the input and output paths from the command-line arguments
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 7. Submit the job; true makes it print progress while waiting
        boolean result = job.waitForCompletion(true);

        System.exit(result ? 0 : 1);
    }
}

5. Local Testing

(1) If your computer runs Windows 7, extract the Windows 7 build of the Hadoop dependency package to a path containing no Chinese characters, and configure the HADOOP_HOME environment variable in Windows. If it runs Windows 10, extract the Windows 10 build and configure HADOOP_HOME the same way.

Note: Windows 8 machines and Windows 10 Home Edition may have problems; you may need to recompile the Hadoop source code or switch operating systems.
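As a sketch, the variables might look like this on Windows (the extraction path is hypothetical; point it at wherever you actually unpacked the package, and make sure %HADOOP_HOME%\bin, which holds winutils.exe, is on the Path):

HADOOP_HOME=D:\hadoop-2.7.2
Path=%Path%;%HADOOP_HOME%\bin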

(2) Run the program in Eclipse/IDEA.
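The Driver takes the input and output paths from args[0] and args[1], so supply them as program arguments in the run configuration, for example (these local paths are hypothetical):

d:/input/hello.txt d:/output

The output directory must not exist before the run; FileOutputFormat fails the job if it does.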

6. Testing on the Cluster

(0) To build the jar with Maven, add the following packaging plugin configuration:

Note: the mainClass value below must be replaced with the fully qualified name of your own driver class.

<build>
    <plugins>
        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>2.3.2</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <archive>
                    <manifest>
                        <mainClass>com.atguigu.mapreduce.wordcount.WordcountDriver</mainClass>
                    </manifest>
                </archive>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Note: if the project shows a red error mark, right-click the project -> Maven -> Update Project.

(1) Package the program as a jar and copy it to the Hadoop cluster.

Steps: right-click the project -> Run As -> Maven install. When the build finishes, the jar appears in the project's target folder; if you do not see it, right-click the project -> Refresh. Rename the jar built without dependencies to wc.jar and copy it to the Hadoop cluster.
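From the command line, an equivalent build-and-copy sequence would be roughly the following (the /opt/software destination is an assumption inferred from the shell prompt in step (3) below):

mvn clean package
# rename the jar built without dependencies to wc.jar, then copy it to the cluster
scp wc.jar atguigu@hadoop102:/opt/software/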

(2) Start the Hadoop cluster.
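With a Hadoop 2.x cluster this typically means running the standard start scripts; which host runs HDFS and which runs YARN depends on your cluster layout (the host assignments below are assumptions):

[atguigu@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
[atguigu@hadoop103 hadoop-2.7.2]$ sbin/start-yarn.sh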

(3) Run the WordCount program:

[atguigu@hadoop102 software]$ hadoop jar wc.jar com.atguigu.mapreduce.wordcount.WordcountDriver /user/atguigu/input /user/atguigu/output
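Once the job completes, the result can be checked with hadoop fs; part-r-00000 is the default file name for the first (and here only) reducer's output, and its contents should match the expected output listed in step 1:

[atguigu@hadoop102 software]$ hadoop fs -cat /user/atguigu/output/part-r-00000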
