Big Data Scripts
1. xsync cluster distribution script
First make sure passwordless SSH login is configured across the cluster (see section 5).
(a) Create a bin directory under /home/atguigu (/home/atguigu/bin), then create a file named xsync in that directory with the following content:
[atguigu@hadoop102 ~]$ mkdir bin
[atguigu@hadoop102 ~]$ cd bin/
[atguigu@hadoop102 bin]$ touch xsync
[atguigu@hadoop102 bin]$ vi xsync
Write the following code in the file:
#!/bin/bash
#1 Get the number of input arguments; exit immediately if there are none
pcount=$#
if ((pcount==0)); then
  echo no args;
  exit;
fi

#2 Get the file name
p1=$1
fname=`basename $p1`
echo fname=$fname

#3 Get the absolute path of the parent directory
pdir=`cd -P $(dirname $p1); pwd`
echo pdir=$pdir

#4 Get the current user name
user=`whoami`

#5 Loop over the target hosts
for ((host=103; host<105; host++)); do
  echo ------------------- hadoop$host --------------
  rsync -rvl $pdir/$fname $user@hadoop$host:$pdir
done
(b) Make the xsync script executable:
[atguigu@hadoop102 bin]$ chmod 777 xsync
(c) Usage: xsync <file name>
[atguigu@hadoop102 bin]$ xsync /home/atguigu/bin
2. Cluster start script (Zookeeper, HDFS, YARN)
Create start-cluster.sh in /home/atguigu/bin. If Zookeeper fails to start through this script, add the missing environment setup to zkEnv.sh (see the sketch below).
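A minimal sketch of that zkEnv.sh addition: the common cause is that ssh runs a non-interactive shell that does not see JAVA_HOME, so exporting it in zkEnv.sh fixes the startup. The JDK path below is an assumption and should match your own installation.

# Add near the top of /opt/module/zookeeper-3.4.10/bin/zkEnv.sh so that
# non-interactive ssh sessions can find Java (JDK path is an assumption):
export JAVA_HOME=/opt/module/jdk1.8.0_144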
start-cluster.sh
#!/bin/bash
user=`whoami`
echo "=============== Starting services on all nodes ==============="

echo "=============== Starting Zookeeper... ==============="
for ((host=102; host<=104; host++)); do
  echo "--------------- hadoop$host Zookeeper... ----------------"
  ssh $user@hadoop$host '/opt/module/zookeeper-3.4.10/bin/zkServer.sh start'
done

echo "================ Starting HDFS ==============="
ssh $user@hadoop102 '/opt/module/hadoop-2.7.2/sbin/start-dfs.sh'

echo "================ Starting YARN ==============="
ssh $user@hadoop103 '/opt/module/hadoop-2.7.2/sbin/start-yarn.sh'

echo "================ Starting JobHistoryServer on hadoop102 ==============="
ssh $user@hadoop102 '/opt/module/hadoop-2.7.2/sbin/mr-jobhistory-daemon.sh start historyserver'

(b) Make the script executable:
[atguigu@hadoop102 bin]$ chmod 777 start-cluster.sh
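Typical invocation, assuming /home/atguigu/bin is on the PATH (it usually is for ~/bin; otherwise call the script by its full path):

[atguigu@hadoop102 ~]$ start-cluster.sh
[atguigu@hadoop102 ~]$ jps    # quick check that the expected daemons came up on this node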
3. Cluster stop script (Zookeeper, HDFS, YARN)
Create stop-cluster.sh in /home/atguigu/bin.
Contents:
#!/bin/bash
user=`whoami`
echo "================ Stopping services on all nodes ==============="

echo "================ Stopping JobHistoryServer on hadoop102 ==============="
ssh $user@hadoop102 '/opt/module/hadoop-2.7.2/sbin/mr-jobhistory-daemon.sh stop historyserver'

echo "================ Stopping YARN ==============="
ssh $user@hadoop103 '/opt/module/hadoop-2.7.2/sbin/stop-yarn.sh'

echo "================ Stopping HDFS ==============="
ssh $user@hadoop102 '/opt/module/hadoop-2.7.2/sbin/stop-dfs.sh'

echo "=============== Stopping Zookeeper... ==============="
for ((host=102; host<=104; host++)); do
  echo "--------------- hadoop$host Zookeeper... ----------------"
  ssh $user@hadoop$host '/opt/module/zookeeper-3.4.10/bin/zkServer.sh stop'
done

Make the script executable:
[atguigu@hadoop102 bin]$ chmod 777 stop-cluster.sh
Finally, distribute the scripts to the other nodes with xsync (remember to set the execute permission on each node).
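A sketch of that distribution step, reusing the xsync script from section 1; the remote chmod is shown explicitly so you do not rely on rsync carrying the execute bit over:

[atguigu@hadoop102 ~]$ xsync /home/atguigu/bin
[atguigu@hadoop102 ~]$ ssh atguigu@hadoop103 'chmod 777 /home/atguigu/bin/*.sh'
[atguigu@hadoop102 ~]$ ssh atguigu@hadoop104 'chmod 777 /home/atguigu/bin/*.sh'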
4. Check cluster processes
Create util.sh in /home/atguigu/bin.
Contents:
#!/bin/bash
for ip in hadoop102 hadoop103 hadoop104
do
  echo "------------------------------[ jps $ip ]-------------------------"
  ssh atguigu@$ip "source /etc/profile;jps"
done

5. Passwordless SSH login
Key-based passwordless setup
(1) The principle of passwordless login (see the diagram in the original post).
(2) Generate the public/private key pair:
[atguigu@hadoop102 .ssh]$ ssh-keygen -t rsa
Then press Enter three times; this generates two files: id_rsa (the private key) and id_rsa.pub (the public key).
(3) Copy the public key to every machine you want to log in to without a password:
[atguigu@hadoop102 .ssh]$ ssh-copy-id hadoop102
[atguigu@hadoop102 .ssh]$ ssh-copy-id hadoop103
[atguigu@hadoop102 .ssh]$ ssh-copy-id hadoop104
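A quick way to verify the setup (hostnames as above); the command should run without prompting for a password:

[atguigu@hadoop102 .ssh]$ ssh hadoop103 hostname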
6. Group start/stop for Zookeeper
Create zkstart.sh under /home/<username>/bin:
#!/bin/bash
user=`whoami`
echo "=============== Starting Zookeeper... ==============="
for ((host=102; host<=104; host++)); do
  echo "--------------- hadoop$host Zookeeper... ----------------"
  ssh $user@hadoop$host '/opt/module/zookeeper-3.4.10/bin/zkServer.sh start'
done

Then create zkStop.sh:
#!/bin/bash
user=`whoami`
echo "=============== Stopping Zookeeper... ==============="
for ((host=102; host<=104; host++)); do
  echo "--------------- hadoop$host Zookeeper... ----------------"
  ssh $user@hadoop$host '/opt/module/zookeeper-3.4.10/bin/zkServer.sh stop'
done

Make both scripts executable with chmod 777.
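A quick status check sketch after running zkstart.sh; zkServer.sh status is the standard Zookeeper status subcommand, and the host and path are the ones used above:

[atguigu@hadoop102 ~]$ ssh hadoop103 '/opt/module/zookeeper-3.4.10/bin/zkServer.sh status'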
7. Start Kafka in the background
Create startkafka.sh in the Kafka directory:
nohup bin/kafka-server-start.sh config/server.properties > kafka.log 2>&1 &

Make it executable with chmod 777.
Then run ./startkafka.sh to start Kafka.
Just distribute startkafka.sh to each machine and start it on each node individually.
[Note:
    a. > kafka.log redirects the program's output to the kafka.log file; 2>&1 redirects standard error to standard output.
    b. &: run in the background. With & alone, the process dies when the terminal is closed, so to keep the program running after the session ends, use nohup together with &.
    c. With nohup at the front of the command, Kafka and Flume keep running in the background even after the SSH client (e.g., SecureCRT) is closed, so there is no need to re-run the start command on every login. To stop the service, run kill -9 on the process id (see the sketch below).
]
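A minimal companion stop-script sketch based on note c; the file name stopkafka.sh and the use of jps to look up the broker's pid are assumptions, not part of the original post:

#!/bin/bash
# stopkafka.sh -- sketch: find the local Kafka broker's pid with jps and kill it,
# following the kill -9 approach described in note c.
pid=$(jps -l | grep 'kafka.Kafka' | awk '{print $1}')
if [ -z "$pid" ]; then
  echo "Kafka is not running on this node"
else
  kill -9 "$pid"
  echo "Stopped Kafka (pid $pid)"
fi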
Reposted from: https://www.cnblogs.com/songdanlee/p/10646824.html
總結(jié)