Hadoop, Hive, and HBase Cluster Setup
Abstract: Last year, while building a BI system whose ETL layer used Hadoop and Hive, I set up a Hadoop cluster on three Dell servers for development and testing.
In the next few posts I will cover the BI architecture design, along with the problems we hit during development and how we solved them. Today: setting up the cluster.
Runtime Environment
Server List
master 10.0.0.88
slave1 10.0.0.89
slave2 10.0.0.90
Deployment Architecture
Because we develop in Python, both Hive and HBase should expose a Thrift server. Hive supports Thrift out of the box, while HBase needs a separate Thrift Server instance. In a real production architecture, HBase would not share a cluster with Hadoop: running MapReduce jobs on an HBase cluster degrades HBase performance, and ZooKeeper would also run on its own machines. For the same reason, when integrating Hive with HBase, avoid pulling HBase data into Hadoop for processing; MapReduce performs poorly on it. Because Hive runs the concurrency-capable HiveServer2, it also needs ZooKeeper and MySQL: ZooKeeper provides lock management and MySQL stores the metastore, and both are required for concurrent Hive access.
Software List
centos6.4
hadoop-1.2.1
hbase-0.94.12
hive-0.12.0
python-2.7.5
thrift-0.9.1
setuptools-0.6c11
jdk-7u40-linux-x64
Preparing the Base Environment
Installing CentOS
Install it in Basic Server mode.
Adding a User
$useradd hadoop
$passwd hadoop
To grant sudo rights, temporarily make /etc/sudoers writable, add the line below, then restore the original permissions.
$vi /etc/sudoers
hadoop ALL=(ALL) ALL
Network Configuration
$vi /etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=yes
BOOTPROTO=dhcp
$vi /etc/sysconfig/network
GATEWAY=###.###.###.###
$sudo /etc/init.d/network restart
or
$/etc/sysconfig/network-scripts/ifup-eth
Note: I bound each host's MAC address on the router, so DHCP always assigns the same IP.
Change the Hostname
$vi /etc/sysconfig/network
HOSTNAME=#####
Disable the Firewall
This is important; otherwise the Hadoop processes on the slaves will die inexplicably.
$sudo service iptables stop
$sudo chkconfig iptables off
$vi /etc/hosts
10.0.0.88 master
10.0.0.89 slave1
10.0.0.90 slave2
Every host gets the same configuration apart from its own IP address; when done, ping each host from every other host.
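Since /etc/hosts must be identical on every node, it helps to generate the entries from a single node map rather than editing each host by hand. A minimal sketch (the names and addresses mirror the server list above; the function is my own illustration, not part of the original setup):

```python
# Render /etc/hosts entries from one authoritative node map, so every
# host in the cluster gets an identical copy.
NODES = {
    "master": "10.0.0.88",
    "slave1": "10.0.0.89",
    "slave2": "10.0.0.90",
}

def render_hosts(nodes):
    """Return the lines to append to /etc/hosts on each node."""
    return "\n".join("%s %s" % (ip, name) for name, ip in nodes.items())

print(render_hosts(NODES))
```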
Passwordless SSH Login
Do all of this as the hadoop user.
$ssh-keygen -t rsa
$cd .ssh
$cp id_rsa.pub authorized_keys
$chmod 600 authorized_keys
Do this on every host, then concatenate all hosts' authorized_keys files into one, copy it to every host, and overwrite the original authorized_keys.
[hadoop@master ~]$ ssh hadoop@slave1
Last login: Tue Feb 18 15:41:59 2014 from 10.0.0.123
Test every pair of hosts; if you can log in without a password, this step succeeded.
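Merging every host's public key into one authorized_keys file is easy to get wrong by hand. A hypothetical helper that concatenates, deduplicates, and applies the chmod 600 that sshd requires (file paths are illustrative, not from the original post):

```python
import os
import stat

def merge_authorized_keys(pubkey_files, out_path):
    """Concatenate public keys from several hosts into one
    authorized_keys file, deduplicating entries and setting the
    600 permissions sshd insists on."""
    seen, lines = set(), []
    for path in pubkey_files:
        with open(path) as f:
            for line in f:
                key = line.strip()
                if key and key not in seen:
                    seen.add(key)
                    lines.append(key)
    with open(out_path, "w") as f:
        f.write("\n".join(lines) + "\n")
    os.chmod(out_path, stat.S_IRUSR | stat.S_IWUSR)  # chmod 600
    return out_path
```

The resulting file is then copied to every host, replacing ~/.ssh/authorized_keys.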
Installing the Sun JDK
Download the jdk-7u40-linux-x64.tar.gz package.
$tar -xf jdk-7u40-linux-x64.tar.gz
$sudo cp -r jdk1.7.0_40 /usr/java
$sudo vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_40
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
$source /etc/profile
Install it on every host, with identical configuration.
Installing MySQL
Install MySQL (5.1.69):
$yum -y install mysql-server
$sudo chkconfig mysqld on
$service mysqld start
$/usr/bin/mysqladmin -u root password 'new_password'
Installing Hadoop
$tar -xf hadoop-1.2.1.tar.gz
$cd hadoop-1.2.1/conf
$vim core-site.xml
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>
</configuration>
$vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_40
$vim hdfs-site.xml
<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/home/hadoop/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/data</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
$vim mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
    </property>
</configuration>
$vim masters
master
$vim slaves
slave1
slave2
$scp -r hadoop-1.2.1 hadoop@slave1:/home/hadoop
$scp -r hadoop-1.2.1 hadoop@slave2:/home/hadoop
$cd bin
$./hadoop namenode -format
$./start-all.sh
After startup, check the Java processes:
$/usr/java/jdk1.7.0_40/bin/jps
On master you should see: NameNode, SecondaryNameNode, JobTracker
On slave1 and slave2 you should see: DataNode, TaskTracker
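This jps check can be scripted. A sketch that diffs the jps output against the daemons expected for each role, following the lists above (the function name and role map are my own illustration):

```python
# Expected Hadoop 1.x daemons per node role, per the lists above.
EXPECTED = {
    "master": {"NameNode", "SecondaryNameNode", "JobTracker"},
    "slave": {"DataNode", "TaskTracker"},
}

def missing_daemons(role, jps_output):
    """Return the expected daemons absent from the given `jps` output.
    jps prints one 'pid ProcessName' pair per line."""
    running = {parts[1] for line in jps_output.splitlines()
               if len(parts := line.split()) == 2}
    return EXPECTED[role] - running
```

An empty result means the node is running everything it should.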
$./hadoop fs -put /home/hadoop/test.py /user/test.py
$./hadoop fs -ls /user
If test.py shows up in the listing, the installation succeeded.
Installing HBase
$tar -xf hbase-0.94.12.tar.gz
$cd hbase-0.94.12
$cd conf
$vim hbase-site.xml
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>master:60000</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master,slave1,slave2</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/home/hadoop/hbase-0.94.12/zookeeper</value>
    </property>
</configuration>
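One easy mistake here is pointing hbase.rootdir at a host:port that differs from fs.default.name in core-site.xml, in which case HBase cannot find HDFS. A small illustrative consistency check (plain URI comparison, not an official tool):

```python
from urllib.parse import urlparse

def rootdir_matches_namenode(fs_default_name, hbase_rootdir):
    """hbase.rootdir must live on the same scheme and host:port as
    the NameNode configured in fs.default.name."""
    fs = urlparse(fs_default_name)
    hb = urlparse(hbase_rootdir)
    return (fs.scheme, fs.netloc) == (hb.scheme, hb.netloc)
```

With the values above, hdfs://master:9000 and hdfs://master:9000/hbase match; a rootdir on a different port would not.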
$vi hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_40/
export HBASE_MANAGES_ZK=true
$vi regionservers
slave1
slave2
$scp -r hbase-0.94.12 hadoop@slave1:/home/hadoop
$scp -r hbase-0.94.12 hadoop@slave2:/home/hadoop
$cd bin
Start HBase:
$./start-hbase.sh
Start Thrift in non-blocking mode, which performs better in practice. Getting Thrift itself built takes a few extra steps, covered at the end of this article.
$./hbase-daemon.sh start thrift -nonblocking
$/usr/java/jdk1.7.0_40/bin/jps
HMaster
HQuorumPeer
Jps
On the slaves:
$/usr/java/jdk1.7.0_40/bin/jps
HRegionServer
HQuorumPeer
Jps
HBase Shell
$cd bin
$./hbase shell
>list (lists tables, like SHOW TABLES in MySQL)
For the full set of shell commands, see the official Apache documentation.
Installing Hive
Note: we enable HiveServer2 so it can serve concurrent client requests; the next post will cover this in detail.
$tar -xf hive-0.12.0.tar.gz
$cd hive-0.12.0
$cd conf
$vim hive-env.sh
HADOOP_HOME=/home/hadoop/hadoop-1.2.1
export HIVE_CONF_DIR=/home/hadoop/hive-0.12.0/conf
export HIVE_HOME=/home/hadoop/hive-0.12.0
$vim hive-site.xml
<configuration>
<!-- Hive Execution Parameters -->
  <property>
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
  </property>
  <property>
    <name>hive.groupby.skewindata</name>
    <value>true</value>
    <description>Whether there is skew in data to optimize group by queries</description>
  </property>
  <property>
    <name>hive.exec.parallel.thread.number</name>
    <value>8</value>
    <description>How many jobs at most can be executed in parallel</description>
  </property>
  <property>
    <name>hive.exec.parallel</name>
    <value>true</value>
    <description>Whether to execute jobs in parallel</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://master:3306/bihive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
  </property>
  <property>
    <name>hive.server2.authentication</name>
    <value>NOSASL</value>
  </property>
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.server2.async.exec.threads</name>
    <value>50</value>
  </property>
  <property>
    <name>hive.server2.async.exec.wait.queue.size</name>
    <value>50</value>
  </property>
  <property>
    <name>hive.support.concurrency</name>
    <description>Enable Hive's Table Lock Manager Service</description>
    <value>true</value>
  </property>
  <property>
    <name>hive.zookeeper.quorum</name>
    <value>master</value>
  </property>
  <property>
    <name>hive.multigroupby.singlemr</name>
    <value>true</value>
  </property>
</configuration>
$vi hive-log4j.properties
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
Accessing MySQL requires the mysql-connector-java-5.1.27-bin.jar driver; place it in Hive's lib directory.
$./hive
hive>show tables;
OK
Time taken:2.585 seconds
$hiveserver2 &
$/usr/java/jdk1.7.0_40/bin/jps
RunJar
$netstat -nl | grep 10000
If a LISTEN entry shows up, HiveServer2 started successfully.
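The same listening-port check can be done from Python, which is convenient from a remote client machine. A sketch that simply attempts a TCP connection (10000 is HiveServer2's default port, as above; the function name is my own):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds,
    i.e. something is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("master", 10000) for HiveServer2,
#      port_open("master", 9090) for the HBase Thrift server
```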
Starting the HBase Thrift Server
Installing gcc
$su root
$yum -y install gcc
$yum install automake libtool flex bison pkgconfig gcc-c++ boost-devel libevent-devel zlib-devel python-devel ruby-devel
$yum install openssl-devel
Installing Python
$tar -xf Python-2.7.5.tar.bz2
$cd Python-2.7.5
$./configure --prefix=/usr/local --enable-shared
$make && make altinstall
Installing easy_install
$tar -xf setuptools-0.6c11.tar.gz
$cd setuptools-0.6c11
$python2.7 setup.py install
Installing Thrift
$tar -xf thrift-0.9.1.tar.gz
$cd thrift-0.9.1
$./configure
$make install
Generating the Thrift Client Files
$thrift --gen py [hbase-root]/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift
$easy_install thrift
$cp -r gen-py/hbase/ /usr/local/lib/python2.7/site-packages/
Starting the Thrift Server
$./hbase-daemon.sh start thrift -nonblocking