Upload the archive to the master host with the scp command; its usage is as follows. scp local_file remote_username@remote_ip:remote_folder
The arguments are, in order: the local file, the remote username and IP, and the destination folder. In PowerShell it looks like this.
PS E:\XunLeiDownload> scp .\jdk-8u271-linux-x64.tar.gz root@192.168.92.137:/usr/local
>>> # output
The authenticity of host '192.168.92.137 (192.168.92.137)' can't be established.
ECDSA key fingerprint is SHA256:DjkK5V/chVHAD1SsaosqdxfH4wClmH8S6M8kxw7X/RQ.
Are you sure you want to continue connecting (yes/no)?
Warning: Permanently added '192.168.92.137' (ECDSA) to the list of known hosts.
root@192.168.92.137's password:
jdk-8u271-linux-x64.tar.gz                100%  137MB  91.5MB/s   00:01 # upload succeeded
Extract JDK 1.8 under /usr/local. tar -xvf jdk-8u271-linux-x64.tar.gz
mv jdk1.8.0_271 jdk1.8 # rename the folder
Install vim and edit /etc/profile. yum install vim # install vim
vim /etc/profile
Append the following two lines. export JAVA_HOME=/usr/local/jdk1.8
export PATH=$PATH:$JAVA_HOME/bin
Run source /etc/profile to apply the changes, then check with echo $JAVA_HOME, or run: java -version
>>>
java version "1.8.0_271"
Java(TM) SE Runtime Environment (build 1.8.0_271-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.271-b09, mixed mode)
This output means the configuration succeeded.
To enable passwordless login between the master and the 3 slaves, first generate a public/private key pair on each machine with ssh-keygen. ssh-keygen -t rsa # generate a key pair
>>>
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): # just press Enter
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa. # private key location
Your public key has been saved in /root/.ssh/id_rsa.pub. # public key
The key fingerprint is:
SHA256:vGAdZV8QBkGYgbyAj4OkQ9GrYEEiilX5QLmL97CcFeg root@master
The key's randomart image is:
+---[RSA 2048]----+
|+o++o+ ..=*o+o. |
|==..= o oo o . |
|*..o.* .. . |
|=.o.+ +o . |
|o..+ .o.S |
| .. E... . |
| o * . |
| + . |
| |
+----[SHA256]-----+
Generate a key pair on the other three machines the same way.
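Generating the keys is only half the job: each node's public key must also end up in the other nodes' ~/.ssh/authorized_keys before passwordless login works. A minimal sketch using ssh-copy-id, assuming the hostnames master/slave1/slave2/slave3 resolve (e.g. via /etc/hosts):

```shell
# Run on each machine; ssh-copy-id appends ~/.ssh/id_rsa.pub to the
# remote ~/.ssh/authorized_keys (you enter each password one last time).
for host in master slave1 slave2 slave3; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@"$host"
done
# This should now log in without a password prompt:
ssh root@slave1 hostname
```

This is environment-dependent cluster configuration; adjust the hostnames and user to your own setup.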
Add the environment variables: vim /etc/profile and append the following two lines. export HADOOP_HOME=/usr/local/hadoop-3.2.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Run source /etc/profile to apply the configuration, then check that it took effect. hadoop version
>>>
Hadoop 3.2.1
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r b3cbbb467e22ea829b3808f4b7b01d07e0bf3842
Compiled by rohithsharmaks on 2019-09-10T15:56Z
Compiled with protoc 2.5.0
From source with checksum 776eaf9eee9c0ffc370bcbc1888737
This command was run using /usr/local/hadoop-3.2.1/share/hadoop/common/hadoop-common-3.2.1.jar
Format the NameNode; run this on master. hdfs namenode -format
>>>
2020-11-25 23:58:07,593 INFO common.Storage: Storage directory /usr/local/hadoop-3.2.1/hdfs/name has been successfully formatted.
2020-11-25 23:58:07,640 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop-3.2.1/hdfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2020-11-25 23:58:07,792 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop-3.2.1/hdfs/name/current/fsimage.ckpt_0000000000000000000 of size 396 bytes saved in 0 seconds .
2020-11-25 23:58:07,807 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2020-11-25 23:58:07,825 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2020-11-25 23:58:07,826 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.92.137
************************************************************/
Seeing Storage directory /usr/local/hadoop-3.2.1/hdfs/name has been successfully formatted. in the output means the format succeeded. Before formatting again, you must delete everything under the hdfs and tmp directories, or the cluster will fail to start.
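The cleanup before a re-format can be scripted. A sketch, assuming the hdfs and tmp directories live under the Hadoop install directory as the log output above suggests (adjust to your own dfs.namenode.name.dir / hadoop.tmp.dir if they differ):

```shell
# Wipe HDFS metadata and temp data on every node before re-formatting.
# HADOOP_HOME defaults to the install path used in this guide.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop-3.2.1}
rm -rf "$HADOOP_HOME"/hdfs/* "$HADOOP_HOME"/tmp/*
```

Run it on every node, not just the master, since DataNodes keep their own copies of the cluster ID.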
Start Hadoop from the sbin directory. ./start-all.sh
>>>./start-all.sh
Starting namenodes on [master]
Last login: Thu Nov 26 00:08:30 CST 2020 on pts/5
master: namenode is running as process 14328. Stop it first.
Starting datanodes
Last login: Thu Nov 26 00:14:03 CST 2020 on pts/5
slave1: WARNING: /usr/local/hadoop-3.2.1/logs does not exist. Creating.
slave3: WARNING: /usr/local/hadoop-3.2.1/logs does not exist. Creating.
slave2: WARNING: /usr/local/hadoop-3.2.1/logs does not exist. Creating.
localhost: datanode is running as process 14468. Stop it first.
Starting secondary namenodes [master]
Last login: Thu Nov 26 00:14:04 CST 2020 on pts/5
Starting resourcemanager
Last login: Thu Nov 26 00:14:09 CST 2020 on pts/5
resourcemanager is running as process 13272. Stop it first.
Starting nodemanagers
Last login: Thu Nov 26 00:14:19 CST 2020 on pts/5
Check with jps. jps
>>>
15586 Jps
14468 DataNode
14087 GetConf
13272 ResourceManager
14328 NameNode
15096 SecondaryNameNode
15449 NodeManager
The output above is for the master node; the slave nodes run fewer processes, which you can check the same way.
To verify, visit the NameNode web UI on port 9870 of the master; the firewall must be stopped first. service firewalld stop # stop the firewall
The web UI is at http://192.168.92.137:9870/.
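Stopping the firewall entirely is fine for a test cluster, but a narrower option is to open only the web UI ports. A sketch with firewalld's firewall-cmd (9870 is the HDFS NameNode UI, 8088 the YARN ResourceManager UI):

```shell
# Open just the Hadoop web UI ports instead of stopping firewalld.
firewall-cmd --permanent --add-port=9870/tcp   # HDFS NameNode UI
firewall-cmd --permanent --add-port=8088/tcp   # YARN ResourceManager UI
firewall-cmd --reload                          # apply the permanent rules
```

This requires a running firewalld; on this guide's setup, where firewalld is stopped, it is not needed.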
Start Spark from its own sbin directory, /usr/local/spark-3.0.1/sbin (its start-all.sh has the same name as Hadoop's, so run it with an explicit path). ./start-all.sh
>>>
org.apache.spark.deploy.master.Master running as process 16859. Stop it first.
slave1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-3.0.1/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave1.out
slave3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-3.0.1/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave3.out
localhost: org.apache.spark.deploy.worker.Worker running as process 17975. Stop it first.
slave2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-3.0.1/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave2.out
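To confirm the Spark daemons came up, jps on the master should show a Master process and each slave a Worker. A quick cluster-wide check, assuming the passwordless SSH set up earlier and the same hostnames:

```shell
# Print the Java processes running on every node.
for host in master slave1 slave2 slave3; do
    echo "== $host =="
    ssh root@"$host" jps
done
```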