Environment Setup: How to Connect Eclipse to a Remote Hadoop Cluster for Debugging
Follow the DLab數據實驗室 WeChat public account and learn big data together~
A note up front: I finally have some free time, so I plan to write up what I've learned, starting with the environment setup~
People starting out in big data these days may go straight to Spark or some other query engine, and Hadoop seems to have fallen out of use. Recently I wanted to understand Hadoop properly again and run some experiments, so I was once more faced with setting up the environment. I have configured it n times before, but I never wrote the steps down carefully, so I kept hitting the same small pitfalls; this time the lesson will stick. Enough preamble: this article assumes an existing Hadoop cluster and shows how to connect to it from a local Eclipse so you can conveniently debug jobs on the cluster.
I. Install the plugin in Eclipse
1. Search for hadoop-eclipse-plugin-2.6.5.jar (pick the plugin that matches your Hadoop version), download it, and put it into the plugins directory of your Eclipse installation;
2. Restart Eclipse, and you will see a new Map/Reduce entry under Window -> Show View;
This view is used to configure your remote Hadoop cluster, as shown below:
Location name: any name you like;
Map/Reduce Master: unless your Hadoop cluster specifies otherwise, any of the default ports 50000-50020 will work; most people use 50020. Host is the address of your master node, and either an IP or a hostname is fine;
DFS Master: same idea; just make sure the port matches your cluster configuration, which is usually 9000 (see the snippet below);
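To be precise, the DFS Master port is whatever fs.defaultFS is set to in the cluster's core-site.xml. For reference, a typical entry looks like the following sketch, assuming the db-01 master host used later in this article:

```xml
<!-- core-site.xml on the cluster; host name and port are illustrative -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://db-01:9000</value>
</property>
```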
3. Once the configuration succeeds, double-click the new location and the following will appear in the upper-left corner of Eclipse
If everything is correct, you will see the files stored on your HDFS, which means we have successfully connected to the Hadoop cluster;
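If the view stays empty and you want to rule out Eclipse itself, you can also verify connectivity with a few lines of plain HDFS client code. This is a minimal sketch, not part of the original walkthrough; the class name is made up, and it assumes the same db-01:9000 NameNode address used in the WordCount example below:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same address as the DFS Master entry in the plugin dialog.
        FileSystem fs = FileSystem.get(URI.create("hdfs://db-01:9000"), conf);
        // List the HDFS root; if this prints anything, the connection works.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath().toString());
        }
        fs.close();
    }
}
```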
II. WordCount test
1. Create a simple Maven project, named mrtest for example;
2.pom.xml
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>cn.edu.ruc.dbiir</groupId>
  <artifactId>mrtest</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>mrtest</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <hadoop.version>2.6.5</hadoop.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```
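One thing to note before moving on: the driver code below hard-codes, via setJar(...), the path of the jar that Maven builds. Run mvn install (or mvn package) on the project first so that target/mrtest-0.0.1-SNAPSHOT.jar actually exists; that jar is what gets uploaded to the cluster when the job is submitted from Eclipse.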
3. WordCount.java

```java
package cn.edu.ruc.dbiir.mrtest;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Logger;

public class WordCount {

    public static class WCMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String data = value.toString();
            String[] words = data.split(" ");
            Logger logger = Logger.getLogger(WCMapper.class);
            logger.error("Map-key:" + key + "|" + "Map-value:" + value);
            // Emit <word, 1> for every word in the line.
            for (String w : words) {
                context.write(new Text(w), new IntWritable(1));
            }
        }
    }

    public static class WCReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> value, Context context)
                throws IOException, InterruptedException {
            int total = 0;
            Logger logger = Logger.getLogger(WordCount.class);
            logger.error("Reduce-key:" + key + "|" + "Reduce-value:" + value);
            // Sum the counts for this word.
            for (IntWritable v : value) {
                total += v.get();
            }
            context.write(key, new IntWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        BasicConfigurator.configure();
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://db-01:9000");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.hostname", "db-01");
        // 1. Create a job and the task entry point.
        Job job = Job.getInstance(conf);
        job.setJarByClass(WordCount.class);
        ((JobConf) job.getConfiguration())
                .setJar("Your own path by maven install/mrtest/target/mrtest-0.0.1-SNAPSHOT.jar");
        // 2. Set the job's mapper and its output types <k2, v2>.
        job.setMapperClass(WCMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        // 3. Set the job's reducer and its output types <k4, v4>.
        job.setReducerClass(WCReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // 4. Set the job's input and output paths.
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // 5. Set additional parameters.
        job.setNumReduceTasks(2);
        // 6. Submit the job to YARN.
        boolean res = job.waitForCompletion(true);
        System.exit(res ? 0 : 1);
    }
}
```

4. Run the program. In Run Configurations, fill in the program arguments as follows:
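The original screenshot of the Run Configurations dialog is not reproduced here, but the two program arguments map to args[0] (the HDFS input file) and args[1] (the output directory) in the driver above. The paths below are hypothetical placeholders; adjust them to your cluster:

```
hdfs://db-01:9000/input/words.txt
hdfs://db-01:9000/output/wc
```

Two things to keep in mind: MapReduce refuses to start if the output directory already exists, and since the driver calls setNumReduceTasks(2), the final counts are split across two files, part-r-00000 and part-r-00001, inside the output directory.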
That completes the WordCount example of debugging a remote Hadoop cluster from a local Eclipse. To recap:
- Download the plugin and install it (put the jar into the plugins directory)
- Configure the cluster's Map/Reduce and HDFS addresses in the plugin
- Write the WordCount code and run it