Hadoop + HBase + ZooKeeper fully distributed setup (version 1)
Hadoop 2.0 has been released as a stable series and adds many features, such as HDFS HA and YARN. The latest hadoop-2.6.0 also adds YARN HA.
Part 1: Recompiling Hadoop for 64-bit
Note: the hadoop-2.6.0 package provided by Apache was compiled on a 32-bit operating system. Because Hadoop depends on some native C++ libraries, a 64-bit machine reports an error when loading the native .so libraries (this does not break basic functionality), so installing hadoop-2.6.0 on a 64-bit OS requires recompiling it on a 64-bit system.
Workarounds:
1. Recompile the source, then replace lib/native on every node in the cluster with the newly built lib/native.
2. Edit hadoop-env.sh and add:
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib:$HADOOP_PREFIX/lib/native"
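Whether the native libraries are actually being picked up can be verified with Hadoop's built-in check (a quick sketch, run once Hadoop is installed and on the PATH):
# lists the native libraries (hadoop, zlib, snappy, lz4, bzip2, openssl) and whether they were loaded
hadoop checknative -a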
Preparation needed before compiling:
0. Install the JDK and confirm with java -version that it is a 64-bit build.
   a. Unpack the JDK: $ tar -xvzf jdk-7u60-linux-x64.tar.gz
   b. Set the environment variables: vim /etc/profile
      export JAVA_HOME=/usr/local/jdk1.7
      export HADOOP_HOME=/usr/local/hadoop-2.6.0
      export MAVEN_HOME=/opt/apache-maven
      export FINDBUG_HOME=/opt/findbugs-3.0.0
      export ANT_HOME=/opt/apache-ant-1.9.4
      export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$MAVEN_HOME/bin:$FINDBUG_HOME/bin:$ANT_HOME/bin
      (Note: the PATH line must not be broken across lines.)
   c. Apply the changes: $ source /etc/profile
1. Install gcc and gcc-c++:
   yum install gcc
   yum install gcc-c++
   Verify with gcc --version.
2. Install Apache Maven:
   tar -zxvf apache-maven-3.2.1.tar.gz
   Configure the environment variables in /etc/profile:
   export MAVEN_HOME=/opt/apache-maven
   export PATH=$PATH:$MAVEN_HOME/bin
   Verify: mvn --version
3. Install Apache Ant (important):
   tar -zxvf apache-ant-1.9.4-bin.tar.gz
   Configure the environment variables in /etc/profile:
   export ANT_HOME=/opt/apache-ant-1.9.4
   export PATH=$PATH:$ANT_HOME/bin
4. Install protobuf (Google's serialization tool):
   tar -zxvf protobuf-2.5.0.tar.gz
   ./configure
   make          # compile
   make install
   Verify: protoc --version
5. Install CMake 2.6 or newer:
   yum install cmake
   yum install openssl-devel
   yum install ncurses-devel
   Verify: cmake --version
6. Install make:
   yum install make
   Verify: make --version
7. Add the following dependency to the pom.xml in hadoop-common-project (required for hadoop-2.2.0; not needed for hadoop-2.6.0):
   <dependency>
     <groupId>org.mortbay.jetty</groupId>
     <artifactId>jetty-util</artifactId>
     <scope>test</scope>
   </dependency>
8. To avoid java.lang.OutOfMemoryError: Java heap space during the build, run the following on the CentOS system:
   $ export MAVEN_OPTS="-Xms256m -Xmx512m"
9. Unpack the source package: tar -zxvf hadoop-2.6.0-src.tar.gz
   a. cd into the source directory: cd ${hostname_Local}/hadoop-2.6.0-src/
   b. Build: mvn package -DskipTests -Pdist,native
   c. The built distribution ends up under hadoop-2.6.0-src/hadoop-dist/target (here /root/Downloads/hadoop-2.6.0-src/hadoop-dist/target); the hadoop-2.6.0 directory there is the compiled package.
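To double-check that the freshly built native library really is 64-bit, the file utility can be pointed at it (a small sketch; the path assumes the build output location above):
# "ELF 64-bit LSB shared object" in the output confirms a 64-bit build
file /root/Downloads/hadoop-2.6.0-src/hadoop-dist/target/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0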
=====================================================================
Build log:
[INFO] --- maven-jar-plugin:2.3.1:jar (default-jar) @ hadoop-dist ---
[WARNING] JAR will be empty - no content was marked for inclusion!
[INFO] Building jar: /root/Downloads/hadoop-2.6.0-src/hadoop-dist/target/hadoop-dist-2.6.0.jar
[INFO]
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-dist ---
[INFO] No sources in project. Archive not created.
[INFO]
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-dist ---
[INFO] No sources in project. Archive not created.
[INFO]
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-dist ---
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (tar) @ hadoop-dist ---
[INFO] Executing tasks
main:
[INFO] Executed tasks
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /root/Downloads/hadoop-2.6.0-src/hadoop-dist/target/hadoop-dist-2.6.0-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [ 13.582 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 9.846 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 24.408 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 1.967 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 6.443 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 20.692 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 14.250 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [ 23.716 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 13.714 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [08:46 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [ 47.127 s]
[INFO] Apache Hadoop KMS .................................. SUCCESS [ 48.790 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [ 0.316 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [14:58 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [11:10 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [01:43 min]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 27.438 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 0.146 s]
[INFO] hadoop-yarn ........................................ SUCCESS [ 0.165 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [07:03 min]
[INFO] hadoop-yarn-common ................................. SUCCESS [03:31 min]
[INFO] hadoop-yarn-server ................................. SUCCESS [ 0.827 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [01:11 min]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [02:25 min]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [ 17.129 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [ 39.350 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [01:44 min]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [ 32.941 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [ 44.664 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [ 0.197 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [ 15.165 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [ 9.604 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [ 0.149 s]
[INFO] hadoop-yarn-registry ............................... SUCCESS [ 31.971 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [ 22.195 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [ 0.673 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [02:08 min]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [01:38 min]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [ 24.796 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [01:02 min]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [ 43.043 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [01:09 min]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [ 9.662 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 40.439 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [ 13.894 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 32.797 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [01:00 min]
[INFO] Apache Hadoop Archives ............................. SUCCESS [ 11.333 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [ 35.122 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 22.939 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [ 17.568 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [ 12.339 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [ 18.325 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 27.889 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 30.148 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [01:28 min]
[INFO] Apache Hadoop Client ............................... SUCCESS [ 25.086 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 0.657 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 25.302 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 23.268 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.156 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [01:06 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:17 h
[INFO] Finished at: 2014-12-29T20:45:54-08:00
[INFO] Final Memory: 94M/193M
[INFO] ------------------------------------------------------------------------
[root@Master hadoop-2.6.0-src]#
Part 2: Hadoop + HBase + ZooKeeper environment setup
Preparation on every node:
1. Change the Linux hostname
2. Change the IP address
3. Configure the hostname-to-IP mapping
###### Note ###### If you are on rented servers or cloud hosts (e.g. Huawei Cloud or Aliyun instances),
/etc/hosts must map the internal (private) IP addresses to the hostnames
4. Disable the firewall (a sketch follows this list)
5. Set up passwordless SSH login
6. Install the JDK and configure environment variables
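For step 4, a minimal sketch assuming a CentOS 6 style system that manages its firewall with iptables (adjust accordingly for firewalld or other distributions):
# stop the firewall now and keep it disabled across reboots
service iptables stop
chkconfig iptables off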
Cluster plan:
Hostname    IP              Software installed         Processes
Master      192.168.1.201   JDK, Hadoop                NameNode, DFSZKFailoverController (zkfc)
Slave1      192.168.1.202   JDK, Hadoop                NameNode, DFSZKFailoverController (zkfc)
Slave2      192.168.1.203   JDK, Hadoop                ResourceManager
Slave3      192.168.1.204   JDK, Hadoop                ResourceManager
Slave4      192.168.1.205   JDK, Hadoop, ZooKeeper     DataNode, NodeManager, JournalNode, QuorumPeerMain
Slave5      192.168.1.206   JDK, Hadoop, ZooKeeper     DataNode, NodeManager, JournalNode, QuorumPeerMain
Slave6      192.168.1.207   JDK, Hadoop, ZooKeeper     DataNode, NodeManager, JournalNode, QuorumPeerMain
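Based on the cluster plan above, /etc/hosts on every node would contain mappings along these lines (a sketch; adjust to your actual addresses):
192.168.1.201   Master
192.168.1.202   Slave1
192.168.1.203   Slave2
192.168.1.204   Slave3
192.168.1.205   Slave4
192.168.1.206   Slave5
192.168.1.207   Slave6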
Notes:
1. In hadoop 2.0, HDFS usually runs with two NameNodes, one active and one standby. The active NameNode serves clients, while the standby does not; it only synchronizes the active NameNode's state so that it can take over quickly if the active one fails.
hadoop 2.0 offers two official HDFS HA solutions, NFS and QJM; the simpler QJM is used here. In this scheme the active and standby NameNodes share edit-log metadata through a group of JournalNodes, and a write is considered successful once it reaches a majority of the JournalNodes, so an odd number of JournalNodes is normally configured.
A ZooKeeper ensemble is also configured for ZKFC (DFSZKFailoverController) failover: when the active NameNode goes down, the standby NameNode is automatically switched to active.
2. hadoop-2.2.0 still had only one ResourceManager and therefore a single point of failure; hadoop-2.4.1 fixed this by running two ResourceManagers, one active and one standby, with their state coordinated through ZooKeeper.
Installation steps:
1. Install and configure the ZooKeeper cluster (on Slave4)
1.1 Unpack
[root@Master local]#tar -zxvf zookeeper-3.4.6.tar.gz -C /usr/local/
[root@Master local]#mv zookeeper-3.4.6/ zookeeper
1.2修改配置
[root@Master local]#cd /usr/local/zookeeper/conf/
[root@Master local]#cp zoo_sample.cfg zoo.cfg
[root@Master local]#vim zoo.cfg
Change:
dataDir=/usr/local/zookeeper/zkData
Append at the end:
server.1=Slave4:2888:3888
server.2=Slave5:2888:3888
server.3=Slave6:2888:3888
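For reference, the resulting zoo.cfg would look roughly like the following; the tickTime, initLimit, syncLimit and clientPort values are the zoo_sample.cfg defaults and are assumptions here, only dataDir and the server.* lines come from the steps above:
# basic timing settings (zoo_sample.cfg defaults)
tickTime=2000
initLimit=10
syncLimit=5
# where ZooKeeper keeps its data and the myid file
dataDir=/usr/local/zookeeper/zkData
# client port referenced later by ha.zookeeper.quorum and yarn.resourcemanager.zk-address
clientPort=2181
server.1=Slave4:2888:3888
server.2=Slave5:2888:3888
server.3=Slave6:2888:3888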
Save and exit.
Then create the data directory:
[root@Master local]#mkdir /usr/local/zookeeper/zkData
Then create an empty myid file:
[root@Master local]#touch /usr/local/zookeeper/zkData/myid
Finally, write this server's ID into the file:
[root@Master local]#echo 1 > /usr/local/zookeeper/zkData/myid
1.3 Copy the configured ZooKeeper to the other nodes (into /usr/local/ on Slave5 and Slave6):
[root@Master local]#scp -r /usr/local/zookeeper/ Slave5:/usr/local/
[root@Master local]#scp -r /usr/local/zookeeper/ Slave6:/usr/local/
Note: update /usr/local/zookeeper/zkData/myid on Slave5 and Slave6 to match their server IDs.
On Slave5:
[root@Master local]#echo 2 > /usr/local/zookeeper/zkData/myid
On Slave6:
[root@Master local]#echo 3 > /usr/local/zookeeper/zkData/myid
2. Install and configure the Hadoop cluster (performed on Master)
2.1 Unpack
[root@Master local]#tar -zxvf hadoop-2.6.0.tar.gz -C /usr/local/
2.2 Configure HDFS (in hadoop 2.0 all configuration files live under $HADOOP_HOME/etc/hadoop)
# add hadoop to the environment variables
[root@Master local]#vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.7
export HADOOP_HOME=/usr/local/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
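After saving /etc/profile, the change can be applied and sanity-checked (a small sketch):
# reload the profile and confirm the hadoop command resolves
source /etc/profile
hadoop version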
# all hadoop 2.0 configuration files are under $HADOOP_HOME/etc/hadoop
[root@Master local]#cd /usr/local/hadoop-2.6.0/etc/hadoop
2.2.1 Edit hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.7
2.2.2 Edit core-site.xml
<configuration>
    <!-- set the HDFS nameservice to masters -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://masters</value>
    </property>
    <!-- hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp</value>
    </property>
    <!-- ZooKeeper quorum addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>Slave4:2181,Slave5:2181,Slave6:2181</value>
    </property>
</configuration>
2.2.3 Edit hdfs-site.xml
<configuration>
    <!-- set the HDFS nameservice to masters; must match core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>masters</value>
    </property>
    <!-- the masters nameservice has two NameNodes: Master and Slave1 -->
    <property>
        <name>dfs.ha.namenodes.masters</name>
        <value>Master,Slave1</value>
    </property>
    <!-- RPC address of the Master NameNode -->
    <property>
        <name>dfs.namenode.rpc-address.masters.Master</name>
        <value>Master:9000</value>
    </property>
    <!-- HTTP address of the Master NameNode -->
    <property>
        <name>dfs.namenode.http-address.masters.Master</name>
        <value>Master:50070</value>
    </property>
    <!-- RPC address of the Slave1 NameNode -->
    <property>
        <name>dfs.namenode.rpc-address.masters.Slave1</name>
        <value>Slave1:9000</value>
    </property>
    <!-- HTTP address of the Slave1 NameNode -->
    <property>
        <name>dfs.namenode.http-address.masters.Slave1</name>
        <value>Slave1:50070</value>
    </property>
    <!-- where the NameNode edit log is shared on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://Slave4:8485;Slave5:8485;Slave6:8485/masters</value>
    </property>
    <!-- where the JournalNodes store their data on local disk -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/hadoop-2.6.0/journal</value>
    </property>
    <!-- enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- failover proxy provider used by clients -->
    <property>
        <name>dfs.client.failover.proxy.provider.masters</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- fencing methods; multiple methods are separated by newlines, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- sshfence requires passwordless SSH -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <!-- sshfence connection timeout -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
2.2.4 Edit mapred-site.xml
<configuration>
    <!-- run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
2.2.5 Edit yarn-site.xml
<configuration>
    <!-- enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- cluster id of the ResourceManager pair -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>RM_HA_ID</value>
    </property>
    <!-- logical ids of the ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- hostnames of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>Slave2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>Slave3</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <!-- ZooKeeper ensemble addresses -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>Slave4:2181,Slave5:2181,Slave6:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
2.2.6 Edit slaves (the slaves file lists the worker nodes: because HDFS is started on Master and YARN on Slave2, the slaves file on Master determines where the DataNodes run, while the slaves file on Slave2 determines where the NodeManagers run)
Slave4
Slave5
Slave6
2.2.7 Configure passwordless SSH login
# first, set up passwordless login from Master to Slave1, Slave2, Slave3, Slave4, Slave5 and Slave6
# generate a key pair on Master
[root@Master local]#ssh-keygen -t rsa
# copy the public key to every node, including Master itself
[root@Master local]#ssh-copy-id Master
[root@Master local]#ssh-copy-id Slave1
[root@Master local]#ssh-copy-id Slave2
[root@Master local]#ssh-copy-id Slave3
[root@Master local]#ssh-copy-id Slave4
[root@Master local]#ssh-copy-id Slave5
[root@Master local]#ssh-copy-id Slave6
# set up passwordless login from Slave2 to Slave3, Slave4, Slave5 and Slave6
# generate a key pair on Slave2
[root@Slave2 ~]#ssh-keygen -t rsa
# copy the public key to the other nodes
[root@Slave2 ~]#ssh-copy-id Slave3
[root@Slave2 ~]#ssh-copy-id Slave4
[root@Slave2 ~]#ssh-copy-id Slave5
[root@Slave2 ~]#ssh-copy-id Slave6
# Note: the two NameNodes need passwordless SSH between each other; don't forget passwordless login from Slave1 to Master
# generate a key pair on Slave1
[root@Slave1 ~]#ssh-keygen -t rsa
[root@Slave1 ~]#ssh-copy-id -i Master
# generate a key pair on Slave3
[root@Slave3 ~]#ssh-keygen -t rsa
# copy the public key to the other nodes
[root@Slave3 ~]#ssh-copy-id Slave4
[root@Slave3 ~]#ssh-copy-id Slave5
[root@Slave3 ~]#ssh-copy-id Slave6
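A quick way to confirm the passwordless logins work is to loop over the hosts from the machine that should reach them (a sketch, run on Master; adjust the host list for Slave2, Slave1 and Slave3 accordingly):
# each hostname should print without prompting for a password
for h in Master Slave1 Slave2 Slave3 Slave4 Slave5 Slave6; do ssh $h hostname; done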
2.4 Copy the configured Hadoop to the other nodes
[root@Master local]#scp -r /usr/local/hadoop-2.6.0/ Slave1:/usr/local/
[root@Master local]#scp -r /usr/local/hadoop-2.6.0/ Slave2:/usr/local/
[root@Master local]#scp -r /usr/local/hadoop-2.6.0/ Slave3:/usr/local/
[root@Master local]#scp -r /usr/local/hadoop-2.6.0/ Slave4:/usr/local/
[root@Master local]#scp -r /usr/local/hadoop-2.6.0/ Slave5:/usr/local/
[root@Master local]#scp -r /usr/local/hadoop-2.6.0/ Slave6:/usr/local/
### Note: follow the steps below strictly in order
2.5 Start the ZooKeeper cluster (start zk on Slave4, Slave5 and Slave6)
[root@Master local]#cd /usr/local/zookeeper/bin/
[root@Master local]#./zkServer.sh start
# check the status: there should be one leader and two followers
[root@Master local]#./zkServer.sh status
2.6 Start the JournalNodes (run on Slave4, Slave5 and Slave6)
[root@Master local]#cd /usr/local/hadoop-2.6.0/sbin
[root@Master local]#./hadoop-daemon.sh start journalnode
# verify with jps: Slave4, Slave5 and Slave6 should each now show a JournalNode process
2.7 Format HDFS
# run on Master:
[root@Master local]#hdfs namenode -format
# formatting creates files under the directory configured as hadoop.tmp.dir in core-site.xml (here /usr/local/hadoop-2.6.0/tmp);
# copy /usr/local/hadoop-2.6.0/tmp to /usr/local/hadoop-2.6.0/ on Slave1:
[root@Master local]#scp -r tmp/ Slave1:/usr/local/hadoop-2.6.0/
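Many HA guides use an alternative to copying the tmp directory by hand: start the freshly formatted NameNode on Master first, then initialize the standby on Slave1 with the command below (mentioned here as an option, not part of the original steps):
# run on Slave1: pull the formatted metadata from the active NameNode
hdfs namenode -bootstrapStandby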
2.8 Format the ZKFC state in ZooKeeper (run on Master only)
[root@Master local]#hdfs zkfc -formatZK
2.9 Start HDFS (run on Master)
[root@Master local]#sbin/start-dfs.sh
2.10 Start YARN (##### Note #####: start-yarn.sh is run on Slave2. The NameNode and ResourceManager are placed on different machines for performance reasons, since both consume a lot of resources; once separated, each must be started on its own machine.)
On Slave2: [root@Slave2 ~]#${HADOOP_HOME}/sbin/start-yarn.sh
On Slave3: [root@Slave3 ~]#${HADOOP_HOME}/sbin/yarn-daemon.sh start resourcemanager
At this point hadoop-2.6.0 is fully configured and can be verified through a browser:
http://192.168.1.201:50070
NameNode 'Master:9000' (active)
http://192.168.1.202:50070
NameNode 'Slave1:9000' (standby)
Verifying HDFS HA
First upload a file to HDFS:
[root@Master local]#hadoop fs -put /etc/profile /profile
[root@Master local]#hadoop fs -ls /
Then kill the active NameNode:
[root@Master local]#kill -9 <pid of NN>
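The pid of the NameNode can be looked up with jps on the host that is currently active (a small sketch):
# prints "<pid> NameNode" for the local NameNode process
jps | grep NameNode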
Access through the browser: http://192.168.1.202:50070
NameNode 'Slave1:9000' (active)
The NameNode on Slave1 has now become active.
Then run:
[root@Master local]#hadoop fs -ls /
-rw-r--r--   3 root supergroup       1926 2014-02-06 15:36 /profile
The file uploaded earlier is still there!
Manually restart the NameNode that was killed:
[root@Master local]#sbin/hadoop-daemon.sh start namenode
Access through the browser: http://192.168.1.201:50070
NameNode 'Master:9000' (standby)
Verifying YARN:
Run the WordCount example shipped with Hadoop:
[root@Master local]#hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /profile /out
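Once the job finishes, the result can be read straight from HDFS (a sketch; part-r-00000 is the usual name of the single-reducer output file):
# list and print the WordCount output
hadoop fs -ls /out
hadoop fs -cat /out/part-r-00000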
The Hadoop HA cluster setup is complete.
Setting up hbase-0.98.9-hadoop2 (pseudo-distributed, on Master)
4.1 Unpack and rename
[root@Master local]#mv hbase-* hbase
Edit the environment variables:
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin
Save and exit, then run source /etc/profile to apply the change.
4.2 Edit the HBase configuration file $HBASE_HOME/conf/hbase-env.sh as follows:
export JAVA_HOME=/usr/local/jdk1.7
export HBASE_MANAGES_ZK=true    # whether HBase manages its own ZooKeeper instance
Save and exit.
4.3 Edit the HBase configuration file $HBASE_HOME/conf/hbase-site.xml as follows:
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://Master:9000/hbase</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>Master</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
Note: the host and port in hbase.rootdir here must match the host and port of fs.default.name (fs.defaultFS) in $HADOOP_HOME/etc/hadoop/core-site.xml.
4.4 (Optional) Change the contents of the regionservers file to Master.
4.5 Change into the bin directory and run start-hbase.sh.
****** Before starting HBase, make sure Hadoop is running normally and that files can be written to HDFS.
4.6 Verification: (1) run jps; three new processes should appear: HMaster, HRegionServer and HQuorumPeer (HQuorumPeer is the ZooKeeper process managed by HBase).
Note: before starting HBase with HBASE_MANAGES_ZK=true, first run /usr/local/zookeeper/bin/zkServer.sh stop to stop the standalone ZooKeeper process, otherwise HBase may fail to start.
(2) Check through a browser: http://Master:60010
5. Fully distributed HBase installation (built on top of the pseudo-distributed HBase already on Master):
5.1 Cluster layout: the master node (HMaster) is Master; the slave nodes (region servers) are Slave1, Slave2 and Slave3.
5.2 Modify several HBase files on Master:
(1) In hbase-env.sh, change the last line to export HBASE_MANAGES_ZK=false.
(2) In hbase-site.xml, change the value of hbase.zookeeper.quorum to Master,Slave1,Slave2,Slave3.
(3) In the regionservers file (which lists the region server hostnames), change the contents to Slave1, Slave2 and Slave3.
5.3 Copy the hbase directory from Master to the corresponding directory on Slave1, Slave2 and Slave3, and also copy Master's /etc/profile to those nodes (then source it on each of them):
[root@Master local]#scp -r hbase Slave1:/usr/local/
[root@Master local]#scp -r /etc/profile Slave1:/etc/profile
[root@Master local]#source /etc/profile
5.4 In the HA cluster, start things in order: first the ZooKeeper ensemble on its nodes, then the Hadoop cluster from Master, and finally the HBase cluster from Master.
6. Test whether HBase started correctly:
1) Run jps on Master and check the processes; a new HMaster process should appear.
2) Run jps on each region server; a new HRegionServer process should appear.
7. Open the HBase shell:
[root@Slave2 local]#hbase shell
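Inside the shell, a few basic commands confirm the cluster is usable (a sketch; the table name 'test' and column family 'cf' are arbitrary examples):
status                              # number of live region servers
create 'test', 'cf'                 # create a table with one column family
put 'test', 'row1', 'cf:a', 'v1'    # insert a cell
scan 'test'                         # read it back
list                                # list tables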