20200903-03 Hadoop Running Modes: Standalone (Local) Mode and Pseudo-Distributed Mode
Prerequisites:
1. A Linux machine (Windows is also supported; see https://cwiki.apache.org/confluence/display/HADOOP2/Hadoop2OnWindows)
2. JDK installed
3. Hadoop installed
4. Passwordless SSH login configured
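Before going further, each prerequisite can be confirmed from the shell. This is just a quick sanity check; the install path /opt/module/hadoop-2.7.2 is an assumption based on the paths used later in these notes:

    java -version                                     # JDK on the PATH?
    /opt/module/hadoop-2.7.2/bin/hadoop version       # Hadoop unpacked? (path assumed)
    ssh localhost exit && echo "passwordless ssh OK"  # no password prompt expected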
Standalone (local) mode walkthrough (see https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html#Standalone_Operation)
When to use standalone mode:
"By default, Hadoop is configured to run in a non-distributed mode, as a single Java process. This is useful for debugging."
Run the grep example locally, with no configuration changes:
    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar grep input output 'dfs[a-z.]+'

On my machine this failed:

    [atguigu@hadoop104 hadoop-2.7.2]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar grep input ouput 'dfs[a-z.]+'
    20/08/31 23:29:27 INFO client.RMProxy: Connecting to ResourceManager at hadoop102/192.168.59.102:8032
    java.net.NoRouteToHostException: No Route to Host from hadoop104/192.168.59.104 to hadoop101:9000 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:758)
        at org.apache.hadoop.ipc.Client.call(Client.java:1479)
        at org.apache.hadoop.ipc.Client.call(Client.java:1412)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy9.delete(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy10.delete(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2044)
        at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707)
        at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:703)
        at org.apache.hadoop.examples.Grep.run(Grep.java:97)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.examples.Grep.main(Grep.java:103)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
    Caused by: java.net.NoRouteToHostException: No route to host
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
        at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
        at org.apache.hadoop.ipc.Client.call(Client.java:1451)
        ... 32 more

First attempted fix: point fs.defaultFS at the local host in core-site.xml:

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop104:9000</value>
    </property>

It still failed:

    java.net.ConnectException: Call From hadoop104/192.168.59.104 to hadoop104:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
        at org.apache.hadoop.ipc.Client.call(Client.java:1479)
        at org.apache.hadoop.ipc.Client.call(Client.java:1412)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy9.delete(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy10.delete(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2044)
        at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707)
        at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:703)
        at org.apache.hadoop.examples.Grep.run(Grep.java:97)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.examples.Grep.main(Grep.java:103)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
    Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
        at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
        at org.apache.hadoop.ipc.Client.call(Client.java:1451)
        ... 32 more

The real fix: the old installation still carried extra YARN and MapReduce configuration, so a supposedly local job kept trying to reach a remote cluster. Delete it and extract a fresh copy (the tar command can be run repeatedly; it overwrites the target), then confirm JAVA_HOME is set:

    rm -rf hadoop-2.7.2
    tar -zxvf hadoop-2.7.2.tar.gz -C /opt/module/
    echo $JAVA_HOME

After running grep again, jps shows a RunJar process while the job runs; it disappears once the job finishes.
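For comparison, the complete standalone run in the official Standalone Operation section linked above looks like this (a sketch assuming the fresh 2.7.2 extraction from the previous step; the output directory must not already exist):

    cd /opt/module/hadoop-2.7.2
    mkdir input
    cp etc/hadoop/*.xml input         # use the bundled config files as sample input
    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar grep input output 'dfs[a-z.]+'
    cat output/*                      # in standalone mode the results land on the local filesystem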
Pseudo-distributed mode:
Edit the configuration files:
etc/hadoop/core-site.xml:
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
    </configuration>

etc/hadoop/hdfs-site.xml:

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>
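Once both files are saved, the values can be read back to confirm they are picked up; hdfs getconf supports querying individual keys (shown here as a quick sanity check, using the values configured above):

    $ bin/hdfs getconf -confKey fs.defaultFS
    hdfs://localhost:9000
    $ bin/hdfs getconf -confKey dfs.replication
    1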
Set up passwordless SSH login:
First check whether ssh prompts for a password:

    $ ssh localhost

If this fails with:

    ssh: Could not resolve hostname localhost: Name or service not known

add the following entry to /etc/hosts (IP address first; the change takes effect immediately):

    127.0.0.1   localhost

Then set up key-based login:

    $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    $ chmod 0600 ~/.ssh/authorized_keys

The chmod step is essential; without it, ssh keeps prompting for a password.
Common Hadoop commands:
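A quick way to verify the key setup (BatchMode disables password prompting, so the command fails outright instead of asking; exit status 0 means key-based login works):

    $ ssh -o BatchMode=yes localhost exit && echo "passwordless login OK"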
1. Run a MapReduce job in pseudo-distributed mode:
1) Format HDFS (running bin/hdfs with no arguments prints all available subcommands):

    $ bin/hdfs namenode -format

2) Start HDFS:

    $ sbin/start-dfs.sh

jps then shows the following processes:

    3585 Jps
    3465 DataNode
    3357 NameNode
    3534 GetConf        (only present during startup)
    SecondaryNameNode

3) Browse the NameNode web UI at http://localhost:9870/. I could not reach it at first: 2.x serves the UI on port 50070, while 3.x uses 9870, and the documentation I was reading is the latest 3.x version. To confirm access from Linux itself:

    curl 'localhost:50070'

To access it from Windows, replace localhost with the Linux machine's IP (to use the Linux hostname instead, add a mapping to the Windows hosts file).
4) Common dfs commands:

    $ bin/hdfs dfs -mkdir /user
    $ bin/hdfs dfs -mkdir /user/<username>
    $ bin/hdfs dfs -mkdir input
    $ bin/hdfs dfs -put etc/hadoop/*.xml input
    $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar grep input output 'dfs[a-z.]+'

(With the configuration changed and the NameNode running, these paths now refer to HDFS directories, not the local filesystem.)
Running grep produced this warning:

    20/09/01 00:24:31 WARN io.ReadaheadPool: Failed readahead on ifile
    EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

This seems related to how the paths are written in the grep arguments:

    hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar grep /input /output 'dfs[a-z.]+'
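When the job finishes, the results live in HDFS rather than on the local disk. The official single-cluster guide shows two ways to inspect them, and stop-dfs.sh shuts the daemons down when you are done:

    $ bin/hdfs dfs -cat output/*        # read the result files directly from HDFS
    $ bin/hdfs dfs -get output output   # or copy the whole directory to the local filesystem
    $ cat output/*
    $ sbin/stop-dfs.sh                  # stop the NameNode/DataNode daemons when finished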
Notes:
1. The official Hadoop documentation is genuinely well written (I recommend following it directly; it will save you a lot of detours).
2. If SecureCRT renders Chinese characters lying on their side in the KaiTi font, switch to SimSun (SongTi) to fix it.
3. HDFS shell paths can be relative or absolute, just like local paths: an absolute path such as /input starts from the HDFS root, while a relative path such as input resolves against the user's home directory (/user/<username>) — which is why step 4 above creates /user/<username> before using relative paths.
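A small illustration of the difference, assuming both directories exist from the commands run earlier:

    $ bin/hdfs dfs -ls input      # relative: lists /user/<username>/input
    $ bin/hdfs dfs -ls /input     # absolute: lists /input at the HDFS root (a different directory)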