Running C++ Programs on Hadoop 2.3: Assorted Pitfalls (Choosing Hadoop Pipes, an Error Collection, Building Hadoop 2.3, and More)
Preface
Hadoop feels like a trap: it marches around under the banner of "the best big-data solution" and ensnares the innocent. I remember reading an article that said for anything under 1 TB you shouldn't bother with Hadoop — it shows little advantage and can even become a burden. So Hadoop generally gets used in two settings: companies of a certain scale whose data volumes really are at the TB level (and there aren't that many of those), and university labs, for research. Unfortunately I ended up on this road too, purely for research. On top of that, my requirement isn't the usual one of developing applications on Hadoop: I have existing C++ programs that need to be tested on the Hadoop platform. Hadoop is a Java-based data-processing platform, so naturally Java is what it supports best. To run C++ programs there are three options:
- Use JNI/JNA/JNR. These three Java foreign-function-interface technologies all exist to call C++ functions from a Java program, so you write a Java application for Hadoop and have it invoke your C++ code, effectively running the C++ inside a Java MapReduce job. Around 2011 Alibaba used this approach to deploy a C-language word-segmentation tool on Hadoop; see reference 1. Anyone who has used JNI knows how unpleasant it is, which is why the two other foreign-function interfaces, JNA and JNR, were later developed; see references 2 and 3 for introductions and worked examples.
- Use Hadoop Streaming. This lets languages other than Java — C/C++/Python/C#, even shell scripts — run on Hadoop. A program only has to read records from standard input and write records to standard output in the expected format, so an existing single-machine program can usually be adapted for distributed processing on Hadoop with minor changes (a minimal sketch of this contract follows the list).
- Use Hadoop Pipes. This is dedicated to running C++ on Hadoop: it only supports writing MapReduce programs in C++. The application's C++ logic runs in a separate process, and the Java side talks to it over a socket. In that respect it is quite similar to Hadoop Streaming; the difference is the communication channel — standard input/output for Streaming, a socket for Pipes.
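For a concrete sense of the Streaming contract described above, here is a minimal sketch (mine, not taken from any Hadoop distribution) of a word-count mapper that relies on nothing but stdin and stdout:

```cpp
// Minimal word-count mapper for Hadoop Streaming (sketch).
// Streaming only requires reading input records from stdin and writing
// tab-separated key/value pairs to stdout; the reducer side reads the
// same kind of lines back, sorted by key.
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string line;
    while (std::getline(std::cin, line)) {      // one input record per line
        std::istringstream words(line);
        std::string word;
        while (words >> word) {
            std::cout << word << "\t" << 1 << "\n";   // emit <word, 1>
        }
    }
    return 0;
}
```

A binary like this would be submitted with the hadoop-streaming jar shipped in the distribution, passing it via the Streaming options -mapper/-reducer/-file together with -input and -output; the exact jar path depends on the installation.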
I investigated and compared all three. Option 1 was ruled out first, because I didn't want to write Java code just to call into C++ — it's clumsy and debugging is very inconvenient. That left Hadoop Streaming versus Hadoop Pipes. Each has its pros and cons (see reference 4 for a comparison, though it isn't entirely accurate), so to choose properly I needed to try both and see which fit my needs before making a final decision.
I started with Hadoop Pipes, the C++-only option, and that kicked off the whole series of events below…
Hadoop 2.3 Environment Setup
I had configured a Hadoop environment before (see reference 5), but that was version 1.1.2 — an old release with plenty of bugs. To avoid being haunted by legacy bugs I chose the latest release, 2.3, and set it up from scratch (mostly following reference 6). Since 2.3 uses the new MapReduce framework, YARN, the configuration differs somewhat from before.
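For reference, the single setting that most obviously distinguishes a 2.x setup is pointing MapReduce at YARN. A minimal sketch of that piece of mapred-site.xml (not my full configuration) looks like this:

```xml
<!-- mapred-site.xml: run MapReduce jobs on the YARN framework -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```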
Once the configuration was in place, I ran the bundled wordcount example to check that everything worked (the input data had already been uploaded to HDFS):
```sh
hadoop jar ./hadoop-2.3.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar wordcount wc_input.txt out
```

It failed with the following error:

```
2014-04-03 21:19:40,847 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From Slave1/192.168.1.152 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
	at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:185)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:199)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:354)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:401)
Caused by: java.net.ConnectException: Call From Slave1/192.168.1.152 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.GeneratedConstructorAccessor8.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1359)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy23.registerNodeManager(Unknown Source)
	at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
	at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy24.registerNodeManager(Unknown Source)
	at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:247)
	at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:179)
	... 6 more
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:601)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:696)
	at org.apache.hadoop.ipc.Client$Connection.access$2700(Client.java:367)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1458)
	at org.apache.hadoop.ipc.Client.call(Client.java:1377)
	... 18 more
```

This is a connection-refused problem: the NodeManager tried to register with "0.0.0.0:8031", which clearly means some address was left unconfigured and a useless default was used. After some digging, the culprit turned out to be yarn-site.xml: the ResourceManager's resource-tracker address (yarn.resourcemanager.resource-tracker.address) was not set, so the NodeManager fell back to the default of 0.0.0.0:8031. A working configuration is below (some of these entries are probably not strictly required, but I set most of them to be safe):

```xml
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.1.137:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.1.137:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.1.137:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.1.137:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.1.137:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
```

After that the job ran successfully. I assumed the environment was now fully set up — until I tried a Hadoop Pipes program…
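Incidentally, a quick sanity check I could have used here: after restarting YARN, ask the ResourceManager directly whether the NodeManagers actually registered (run on the master):

```sh
# Each slave should show up, in the RUNNING state, once registration works.
yarn node -list
```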
A Collection of Hadoop Pipes Errors
The Hadoop 2.3 release does not ship a Hadoop Pipes wordcount example, so I took wordcount-simple.cc from the old 1.1.2 release and fixed its header include paths (the new layout is completely different from the old one).
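For orientation, this is roughly what that Pipes word-count program looks like — a sketch based on the classic 1.x example rather than a verbatim copy, with the includes adjusted to wherever Pipes.hh, TemplateFactory.hh and StringUtils.hh sit under $HADOOP_INSTALL/include on your build:

```cpp
// Hadoop Pipes word count (sketch). The mapper and reducer are C++ classes;
// Hadoop's Java side drives them through a socket via runTask().
#include "Pipes.hh"            // adjust paths to the installed 2.x headers
#include "TemplateFactory.hh"
#include "StringUtils.hh"
#include <string>
#include <vector>

class WordCountMap : public HadoopPipes::Mapper {
public:
  WordCountMap(HadoopPipes::TaskContext& context) {}
  void map(HadoopPipes::MapContext& context) {
    // Input value is one line of text; emit <word, "1"> for each word.
    std::vector<std::string> words =
        HadoopUtils::splitString(context.getInputValue(), " ");
    for (size_t i = 0; i < words.size(); ++i) {
      context.emit(words[i], "1");
    }
  }
};

class WordCountReduce : public HadoopPipes::Reducer {
public:
  WordCountReduce(HadoopPipes::TaskContext& context) {}
  void reduce(HadoopPipes::ReduceContext& context) {
    // Sum the counts for one key and emit the total.
    int sum = 0;
    while (context.nextValue()) {
      sum += HadoopUtils::toInt(context.getInputValue());
    }
    context.emit(context.getInputKey(), HadoopUtils::toString(sum));
  }
};

int main(int argc, char* argv[]) {
  return HadoopPipes::runTask(
      HadoopPipes::TemplateFactory<WordCountMap, WordCountReduce>());
}
```

Only the map and reduce logic live in C++; record reading and writing stay on the Java side, which is why the job is submitted with hadoop.pipes.java.recordreader/recordwriter set to true below.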
Following reference 7, I compiled it with the makefile below (the -lssl may not be required):

```makefile
HADOOP_INSTALL = /opt/hadoop
CC = g++
CCFLAGS = -I$(HADOOP_INSTALL)/include

wordcount: wordcount-simple.cc
	$(CC) $(CCFLAGS) $< -Wall -L$(HADOOP_INSTALL)/lib/native -lhadooppipes -lhadooputils -lpthread -lcrypto -lssl -g -O2 -o $@
```

After a successful build I uploaded the binary to HDFS and launched it with:

```sh
hadoop pipes -D hadoop.pipes.java.recordreader=true -D hadoop.pipes.java.recordwriter=true \
    -D mapred.job.name=wordcount -input /data/wc_in -output /data/wc_out2 -program /bin/wordcount
```

The job sat at "map 0% reduce 0%" for what felt like forever, and eventually spat out this series of errors:

```
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.

14/04/03 23:59:48 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.137:8032
14/04/03 23:59:49 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.137:8032
14/04/03 23:59:50 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
14/04/03 23:59:50 INFO mapred.FileInputFormat: Total input paths to process : 2
14/04/03 23:59:51 INFO mapreduce.JobSubmitter: number of splits:2
14/04/03 23:59:51 INFO Configuration.deprecation: hadoop.pipes.java.recordreader is deprecated. Instead, use mapreduce.pipes.isjavarecordreader
14/04/03 23:59:51 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/04/03 23:59:51 INFO Configuration.deprecation: hadoop.pipes.java.recordwriter is deprecated. Instead, use mapreduce.pipes.isjavarecordwriter
14/04/03 23:59:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1396578697573_0004
14/04/03 23:59:52 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
14/04/03 23:59:53 INFO impl.YarnClientImpl: Submitted application application_1396578697573_0004
14/04/03 23:59:53 INFO mapreduce.Job: The url to track the job: http://Master:8088/proxy/application_1396578697573_0004/
14/04/03 23:59:53 INFO mapreduce.Job: Running job: job_1396578697573_0004
14/04/04 00:00:26 INFO mapreduce.Job: Job job_1396578697573_0004 running in uber mode : false
14/04/04 00:00:26 INFO mapreduce.Job:  map 0% reduce 0%
14/04/04 00:10:53 INFO mapreduce.Job:  map 100% reduce 0%
14/04/04 00:10:53 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000001_0, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000001_0 Timed out after 600 secs
14/04/04 00:10:54 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000000_0, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000000_0 Timed out after 600 secs
14/04/04 00:10:55 INFO mapreduce.Job:  map 0% reduce 0%
14/04/04 00:21:23 INFO mapreduce.Job:  map 100% reduce 0%
14/04/04 00:21:24 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000000_1, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000000_1 Timed out after 600 secs
14/04/04 00:21:24 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000001_1, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000001_1 Timed out after 600 secs
14/04/04 00:21:25 INFO mapreduce.Job:  map 0% reduce 0%
14/04/04 00:31:53 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000000_2, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000000_2 Timed out after 600 secs
14/04/04 00:31:53 INFO mapreduce.Job: Task Id : attempt_1396578697573_0004_m_000001_2, Status : FAILED
AttemptID:attempt_1396578697573_0004_m_000001_2 Timed out after 600 secs
14/04/04 00:42:24 INFO mapreduce.Job:  map 100% reduce 0%
14/04/04 00:42:25 INFO mapreduce.Job:  map 100% reduce 100%
14/04/04 00:42:26 INFO mapreduce.Job: Job job_1396578697573_0004 failed with state FAILED due to: Task failed task_1396578697573_0004_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
14/04/04 00:42:27 INFO mapreduce.Job: Counters: 9
	Job Counters
		Failed map tasks=8
		Launched map tasks=8
		Other local map tasks=6
		Data-local map tasks=2
		Total time spent by all maps in occupied slots (ms)=5017539
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=5017539
		Total vcore-seconds taken by all map tasks=5017539
		Total megabyte-seconds taken by all map tasks=5137959936
Exception in thread "main" java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
	at org.apache.hadoop.mapred.pipes.Submitter.runJob(Submitter.java:264)
	at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:503)
	at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:518)
```

I went hunting online for a solution — I had to, because neither the namenode, datanode nor YARN logs contained the error or anything otherwise useful, so I had no idea what was causing it — but none of the suggestions I found worked. After a lot of poking around I finally found the real error message in a rather hidden log, under the logs/userlogs directory of the Hadoop installation on the datanode. At first I dismissed that folder as useless; it turned out to solve every one of my problems. It contains one subdirectory per application, each of which contains one subdirectory per container, and each container directory holds the three standard output logs: stderr, stdout and syslog. The layout of these container logs on a datanode looks like this:
```
|-- <Hadoop install dir>
    |-- logs
        |-- userlogs
            |-- application_XXXXXXXX
                |-- container_XXXXXXXX
                    |-- stderr
                    |-- stdout
                    |-- syslog
```
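Rather than walking that tree by hand, a one-liner along these lines (with HADOOP_INSTALL as in the makefile above) dumps every container's stderr at once:

```sh
# Print every non-empty stderr line from all containers, prefixed with its file path.
find $HADOOP_INSTALL/logs/userlogs -name stderr -exec grep -H . {} \;
```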
Of the three, stderr is the most important file — the concrete cause of a failure is recorded there. For the failure above, for example, stderr contained this hint ("..." marks omitted, irrelevant detail):
```
.../application_1396607014314_0001/container_1396607014314_0001_01_000002/wordcount: error while loading shared libraries: libcrypto.so.10: cannot open shared object file: No such file or directory
```

A quick "locate libcrypto.so.10" on the datanode confirmed the file really was missing there, although the namenode did have it, so I copied it from the namenode into /usr/lib on the datanode and reran the job. Same behaviour — it hung and then failed the same way — so back to the container logs on the datanode, which now showed a new problem:

```
/usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by .../wordcount)
```

Again something the namenode had and the datanode lacked, so again I copied libstdc++.so.6 across and reran. Still failing, this time with:

```
error while loading shared libraries: /usr/lib/libstdc++.so.6: ELF file OS ABI invalid
```

At that point I stopped searching for piecemeal fixes, because the pattern was clear: every time, the datanode was missing a library the namenode had, and copying libraries over from a different system can simply produce mismatched, unusable files. The root cause had to be the mismatch between the namenode (master) and datanode (slave) systems. My master runs Ubuntu 13.04 while the slave runs the very old Ubuntu 9.10; both are Ubuntu, but the kernels and system libraries have long since diverged. The wordcount binary was compiled on the master against the master's libraries, yet the map tasks run it on the slaves, where those libraries are missing or incompatible, so the C++ child process dies before it can ever talk to the Java side. The Java task then waits, retries, and finally gives up, which is exactly the "Timed out after 600 secs" / "Job failed" behaviour above. You can imagine that a master on Ubuntu with slaves on Fedora would be even more likely to hit this. Why, then, did the plain Java wordcount run fine earlier? I don't know the internals well enough to be certain; my understanding is that with Pipes your own natively compiled code has to run on every slave (over a socket to the Java side), whereas a pure Java job ships bytecode and doesn't care about the slave's C libraries — though I wouldn't rule out a sufficiently complex Java job hitting something similar. Either way, I drew this conclusion:
In a Hadoop cluster, the master node and all slave nodes should run as close to identical environments as possible (same OS, same release); for Hadoop Pipes they effectively must be identical.
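A quick way to verify this kind of mismatch before rebuilding anything is to compare what the binary needs with what each node can actually resolve — a sketch, with the slave hostname being whatever yours is:

```sh
# On the machine where the pipes binary was compiled:
ldd ./wordcount
# Copy it to a slave and check again; any "not found" line is the smoking gun.
scp ./wordcount slave1:/tmp/ && ssh slave1 'ldd /tmp/wordcount'
```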
Since all my nodes are virtual machines, the simplest way to get identical environments is to clone the VM (a linked clone is enough). So I cloned, reconfigured the Hadoop 2.3 environment, and reran, assuming the problem was now solved — and a new one appeared:
```
14/04/06 01:11:49 INFO mapreduce.Job: Running job: job_1396756477966_0002
14/04/06 01:12:04 INFO mapreduce.Job: Job job_1396756477966_0002 running in uber mode : false
14/04/06 01:12:04 INFO mapreduce.Job:  map 0% reduce 0%
14/04/04 09:59:02 INFO mapreduce.Job: Task Id : attempt_1396618478715_0002_m_000000_0, Status : FAILED
Error: java.io.IOException
	at org.apache.hadoop.mapred.pipes.OutputHandler.waitForAuthentication(OutputHandler.java:186)
	at org.apache.hadoop.mapred.pipes.Application.waitForAuthentication(Application.java:195)
	at org.apache.hadoop.mapred.pipes.Application.<init>(Application.java:150)
	at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:69)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
14/04/04 09:59:02 INFO mapreduce.Job: Task Id : attempt_1396618478715_0002_m_000001_0, Status : FAILED
Error: java.io.IOException  [identical stack trace as above]
14/04/04 09:59:14 INFO mapreduce.Job: Task Id : attempt_1396618478715_0002_m_000000_1, Status : FAILED
Error: java.io.IOException  [identical stack trace as above]
14/04/04 09:59:15 INFO mapreduce.Job: Task Id : attempt_1396618478715_0002_m_000001_1, Status : FAILED
Error: java.io.IOException  [identical stack trace as above]
14/04/04 09:59:26 INFO mapreduce.Job: Task Id : attempt_1396618478715_0002_m_000000_2, Status : FAILED
Error: java.io.IOException  [identical stack trace as above]
14/04/04 09:59:27 INFO mapreduce.Job: Task Id : attempt_1396618478715_0002_m_000001_2, Status : FAILED
Error: java.io.IOException  [identical stack trace as above]
14/04/04 09:59:40 INFO mapreduce.Job:  map 100% reduce 100%
14/04/04 09:59:41 INFO mapreduce.Job: Job job_1396618478715_0002 failed with state FAILED due to: Task failed task_1396618478715_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
14/04/04 09:59:41 INFO mapreduce.Job: Counters: 13
	Job Counters
		Failed map tasks=7
		Killed map tasks=1
		Launched map tasks=8
		Other local map tasks=6
		Data-local map tasks=2
		Total time spent by all maps in occupied slots (ms)=88669
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=88669
		Total vcore-seconds taken by all map tasks=88669
		Total megabyte-seconds taken by all map tasks=90797056
	Map-Reduce Framework
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
Exception in thread "main" java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
	at org.apache.hadoop.mapred.pipes.Submitter.runJob(Submitter.java:264)
	at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:503)
	at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:518)
```

Note that this time the job did not sit at "map 0% reduce 0%" — it failed quickly — which means the earlier problem really was fixed; this is not a system mismatch but an I/O problem. Checking the container logs again revealed the cause:

```
Server failed to authenticate. Exiting
```

Searching for that led to the real culprit: Hadoop 2.3 itself. The libhadooppipes.a and libhadooputils.a static libraries that the makefile links against have to be rebuilt to suit your own system (officially, the pre-built native libraries are 32-bit and only 64-bit users need to rebuild; my host is 64-bit but the VMs are 32-bit, and I still had to rebuild — I don't know why). Not exactly a user-friendly requirement. And so began the descent into the hell of rebuilding the Hadoop 2.3 native libraries…
Building the Hadoop 2.3 Native Libraries
The first step, of course, is to download the Hadoop 2.3 source. After unpacking it, BUILDING.txt says to build the native libraries with:
```sh
mvn package -Pdist,native -DskipTests -Dtar
```

This threw all sorts of errors along the way; essentially every one of them appears in reference 9, and the fix in each case was to install the build dependencies beforehand — protobuf, cmake, zlib-devel, openssl-devel and so on. Once those were in place it finally reported "BUILD SUCCESS"; no small relief. The official documentation does list these prerequisites (see reference 10) — I had just been too lazy to read it at the start. A sketch of installing them on an Ubuntu node follows.
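For reference, on an Ubuntu build node the prerequisites boil down to something like the following (a sketch; package names differ across distros, and Hadoop 2.3's BUILDING.txt asks for protobuf 2.5.0 specifically, which may have to be built from source if the packaged version is older):

```sh
# Toolchain and native-build dependencies for the -Pdist,native profile.
sudo apt-get install build-essential cmake zlib1g-dev libssl-dev \
                     maven openjdk-7-jdk protobuf-compiler libprotobuf-dev

# The Hadoop 2.3 build checks the protobuf compiler version.
protoc --version    # should report libprotoc 2.5.0
```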
Once the build succeeded, the libhadooppipes.a and libhadooputils.a we need appear under:

```
HADOOP_PATH/hadoop-tools/hadoop-pipes/target/native/
```

Copy them into the lib/native directory on the master and every slave (back up the bundled libraries first, in case this approach fails), run make on the wordcount program again, resubmit it with hadoop pipes, and at last:

```
14/04/06 01:22:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/04/06 01:22:28 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.137:8032
14/04/06 01:22:28 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.137:8032
14/04/06 01:22:29 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
14/04/06 01:22:29 INFO mapred.FileInputFormat: Total input paths to process : 2
14/04/06 01:22:30 INFO mapreduce.JobSubmitter: number of splits:2
14/04/06 01:22:30 INFO Configuration.deprecation: hadoop.pipes.java.recordreader is deprecated. Instead, use mapreduce.pipes.isjavarecordreader
14/04/06 01:22:30 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/04/06 01:22:30 INFO Configuration.deprecation: hadoop.pipes.java.recordwriter is deprecated. Instead, use mapreduce.pipes.isjavarecordwriter
14/04/06 01:22:30 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1396756477966_0004
14/04/06 01:22:31 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
14/04/06 01:22:31 INFO impl.YarnClientImpl: Submitted application application_1396756477966_0004
14/04/06 01:22:31 INFO mapreduce.Job: The url to track the job: http://Master:8088/proxy/application_1396756477966_0004/
14/04/06 01:22:31 INFO mapreduce.Job: Running job: job_1396756477966_0004
14/04/06 01:22:40 INFO mapreduce.Job: Job job_1396756477966_0004 running in uber mode : false
14/04/06 01:22:40 INFO mapreduce.Job:  map 0% reduce 0%
14/04/06 01:22:56 INFO mapreduce.Job:  map 67% reduce 0%
14/04/06 01:22:57 INFO mapreduce.Job:  map 100% reduce 0%
14/04/06 01:23:09 INFO mapreduce.Job:  map 100% reduce 100%
14/04/06 01:23:11 INFO mapreduce.Job: Job job_1396756477966_0004 completed successfully
14/04/06 01:23:12 INFO mapreduce.Job: Counters: 51
	File System Counters
		FILE: Number of bytes read=118
		FILE: Number of bytes written=260641
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=266
		HDFS: Number of bytes written=86
		HDFS: Number of read operations=9
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=2
		Launched reduce tasks=1
		Data-local map tasks=2
		Total time spent by all maps in occupied slots (ms)=30171
		Total time spent by all reduces in occupied slots (ms)=10914
		Total time spent by all map tasks (ms)=30171
		Total time spent by all reduce tasks (ms)=10914
		Total vcore-seconds taken by all map tasks=30171
		Total vcore-seconds taken by all reduce tasks=10914
		Total megabyte-seconds taken by all map tasks=30895104
		Total megabyte-seconds taken by all reduce tasks=11175936
	Map-Reduce Framework
		Map input records=8
		Map output records=9
		Map output bytes=94
		Map output materialized bytes=124
		Input split bytes=190
		Combine input records=0
		Combine output records=0
		Reduce input groups=8
		Reduce shuffle bytes=124
		Reduce input records=9
		Reduce output records=8
		Spilled Records=18
		Shuffled Maps =2
		Failed Shuffles=0
		Merged Map outputs=2
		GC time elapsed (ms)=764
		CPU time spent (ms)=2490
		Physical memory (bytes) snapshot=384413696
		Virtual memory (bytes) snapshot=3685318656
		Total committed heap usage (bytes)=258613248
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	WORDCOUNT
		INPUT_WORDS=9
		OUTPUT_WORDS=8
	File Input Format Counters
		Bytes Read=76
	File Output Format Counters
		Bytes Written=86
14/04/06 01:23:12 INFO util.ExitUtil: Exiting with status 0
```

Mixed feelings, to say the least. Getting this environment working was genuinely painful, although it did sharpen my ability to find and solve problems, and it taught me one thing: when you hit a problem, the logs are always the best way out.
I'm writing this down partly to record my own experience of setting up a Hadoop Pipes environment, and partly to light the way for anyone who runs into the same problems…
References
1. How to run JNI programs on a Hadoop cluster
2. A JNI alternative: accessing Java's foreign function interface with JNA
3. Another JNI alternative: accessing Java's foreign function interface with JNR (jnr-ffi)
4. Understanding Hadoop Streaming and Pipes
5. Step-by-step Hadoop multi-node cluster installation and configuration
6. Hadoop 2.3.0 distributed cluster setup, illustrated
7. Hadoop Tutorial 2.2 -- Running C++ Programs on Hadoop
8. An in-depth look at Hadoop's new MapReduce framework, YARN
9. Building the Hadoop 2.3 native library
10. Native Libraries Guide