I used a VM, with Red Hat 7.9 as the operating system, to install and set up the JDK and Hadoop.

This walkthrough is intended for testing and learning purposes only.

1. Add a new hadoop user

useradd -m hadoop -s /bin/bash # add the hadoop user

passwd hadoop # set the hadoop user's password

vi /etc/sudoers # edit the sudoers file: add "hadoop ALL=(ALL) ALL" on the line after the root entry to grant hadoop administrator privileges
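Editing /etc/sudoers directly can lock you out of sudo if a typo slips in; visudo opens the same file but validates the syntax before saving:

visudo # same edit as above, with a syntax check on save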

2. Configure passwordless SSH login (performed as the root user)

ssh-keygen -t rsa # just press Enter at every prompt

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys # authorize the key

chmod 0600 ~/.ssh/authorized_keys # restrict the file's permissions

Once this is done, run ssh hadoop (the hostname); if you can log in without being asked for a password, the configuration succeeded.

If ssh-keygen fails to run, OpenSSH is not installed; install it with yum.
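On RHEL 7 the packages can be installed in one go (a minimal sketch; it assumes the machine can reach a yum repository):

yum install -y openssh openssh-clients openssh-server # base package, client and server

You can then confirm which packages are installed: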

[root@hadoop dfs]# rpm -qa |grep openssh
openssh-server-7.4p1-21.el7.x86_64
openssh-clients-7.4p1-21.el7.x86_64
openssh-7.4p1-21.el7.x86_64

3. Configure the JDK environment

After downloading the JDK archive, install the JDK under the /usr/local/jdk directory.

tar -zxvf /home/hadoop/download/jdk-8u212-linux-x64.tar.gz -C /usr/local/jdk/
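Note that the -C target directory must exist before tar runs; a minimal sketch that creates it first and verifies the result (assuming the archive sits in /home/hadoop/download as above):

mkdir -p /usr/local/jdk # create the target directory
tar -zxvf /home/hadoop/download/jdk-8u212-linux-x64.tar.gz -C /usr/local/jdk/
ls /usr/local/jdk # should now contain jdk1.8.0_212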

Add the environment variables:

vi /etc/profile # open the environment configuration file and append the settings below

# Java environment variables
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_212
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

# after editing, run the following command to make the settings take effect
source /etc/profile
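A quick way to confirm the variables took effect in the current shell (an optional check, assuming the paths above):

echo $JAVA_HOME # should print /usr/local/jdk/jdk1.8.0_212
command -v java # should resolve to the java binary under $JAVA_HOME/bin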

While we are at it, add the HADOOP_HOME environment variables as well.

# Hadoop environment variables
export HADOOP_HOME=/usr/local/hadoop/hadoop-3.1.3
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

[root@hadoop download]# cat /etc/profile
unset i
unset -f pathmunge
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_212
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/usr/local/hadoop/hadoop-3.1.3
export PATH=$HADOOP_HOME/bin:$PATH
export PATH=$HADOOP_HOME/sbin:$PATH

To verify that the Java environment is configured correctly, run java -version; you should see the Java version information:

[root@hadoop download]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)

4. Install and configure Hadoop

Next we configure Hadoop so that it can run in pseudo-distributed mode, and afterwards format the NameNode and test it. Change into the directory that holds Hadoop's configuration files:

cd /usr/local/hadoop/hadoop-3.1.3/etc/hadoop

Configure hadoop-env.sh. Set JAVA_HOME inside this file, as shown below:

vi /usr/local/hadoop/hadoop-3.1.3/etc/hadoop/hadoop-env.sh
###
# Generic settings for HADOOP
###

# Technically, the only required environment variable is JAVA_HOME.
# All others are optional. However, the defaults are probably not
# preferred. Many sites configure these options outside of Hadoop,
# such as in /etc/profile.d

# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_212

# Location of Hadoop. By default, Hadoop will attempt to determine
# this location based upon its execution path.
export HADOOP_HOME=/usr/local/hadoop/hadoop-3.1.3

Setting JAVA_HOME to your own JDK installation path is all that is required.
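If you prefer not to open an editor, the same two settings can be appended from the shell (a minimal sketch, using the paths assumed throughout this guide):

echo 'export JAVA_HOME=/usr/local/jdk/jdk1.8.0_212' >> /usr/local/hadoop/hadoop-3.1.3/etc/hadoop/hadoop-env.sh
echo 'export HADOOP_HOME=/usr/local/hadoop/hadoop-3.1.3' >> /usr/local/hadoop/hadoop-3.1.3/etc/hadoop/hadoop-env.sh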

1) Configure hdfs-site.xml

Change the contents of hdfs-site.xml to the following:

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop_data/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop_data/dfs/data</value>
</property>
</configuration>

dfs.replication # number of replicas kept for each file
dfs.namenode.name.dir # NameNode data directory; change it to the directory you want (create it if it does not exist)
dfs.datanode.data.dir # DataNode data directory; change it to the directory you want (create it if it does not exist)

2) Configure core-site.xml

Change the contents of core-site.xml to the following:

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop_data</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop:9000</value>
</property>
</configuration>

hadoop.tmp.dir # Hadoop working/cache directory; change it to your own (create it if it does not exist)
fs.defaultFS # the HDFS access address and port

Create the data directories and hand them over to the hadoop user:

mkdir -p /home/hadoop_data/dfs/name
mkdir -p /home/hadoop_data/dfs/data
cd /home/hadoop_data/
chown -R hadoop:hadoop dfs && chmod -R 777 dfs

If you only need HDFS, the configuration is now complete; if you also want to use YARN, a few YARN-related settings remain.

3) Configure mapred-site.xml

Change the contents of mapred-site.xml to the following:

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
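Note: on Hadoop 3.x, MapReduce jobs submitted to YARN also need to know where the MapReduce jars live. The official single-node setup guide adds a classpath property alongside the one above; this is a suggested addition beyond the original walkthrough:

<property>
<name>mapreduce.application.classpath</name>
<value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>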

4) Configure yarn-site.xml

Change the contents of yarn-site.xml to the following:

<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
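Relatedly, the official Hadoop 3.x guide whitelists a few environment variables so that YARN containers inherit them; if MapReduce jobs later fail with environment errors, this property (again an addition beyond the original walkthrough) is worth placing in the same <configuration> block:

<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>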

Format and start Hadoop

The basic Hadoop configuration is now done; next, format the Hadoop NameNode, and then start the HDFS services.

1) Format the NameNode: change into Hadoop's bin directory and run the format command:

cd /usr/local/hadoop/hadoop-3.1.3/bin
./hdfs namenode -format

When the command finishes, look for the exit status in its output: an exit status of 0 means the format succeeded.

Hadoop is now formatted; next we start it.

Change into Hadoop's sbin directory:

cd /usr/local/hadoop/hadoop-3.1.3/sbin
./start-dfs.sh # start HDFS
./start-yarn.sh # start YARN

Running ./start-dfs.sh looks like this (here the HDFS daemons are restarted as the hadoop user, and jps confirms the processes):

[hadoop@hadoop ~]$ stop-dfs.sh
Stopping namenodes on [hadoop]
Stopping datanodes
Stopping secondary namenodes [hadoop]
[hadoop@hadoop ~]$ start-dfs.sh
Starting namenodes on [hadoop]
Starting datanodes
Starting secondary namenodes [hadoop]
[hadoop@hadoop ~]$ jps
48336 Jps
48002 DataNode
48210 SecondaryNameNode
46725 NodeManager
46621 ResourceManager
47886 NameNode
[hadoop@hadoop ~]$
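As a quick smoke test (an optional check beyond the transcript above), create the hadoop user's home directory in HDFS and list the filesystem root:

hdfs dfs -mkdir -p /user/hadoop # create the HDFS home directory for the hadoop user
hdfs dfs -ls / # should list /user without errors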

You can also check the logs to see whether anything failed during startup:

[hadoop@hadoop logs]$ ls -rlt
total 2528
-rw-rw-r--. 1 hadoop hadoop 0 Mar 17 23:15 SecurityAuth-hadoop.audit
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 09:37 hadoop-hadoop-datanode-hadoop.out.5
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 09:38 hadoop-hadoop-secondarynamenode-hadoop.out.5
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 10:04 hadoop-hadoop-namenode-hadoop.out.5
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 10:04 hadoop-hadoop-datanode-hadoop.out.4
-rw-rw-r--. 1 hadoop hadoop 4124 Mar 18 10:05 hadoop-hadoop-secondarynamenode-hadoop.out.4
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 10:06 hadoop-hadoop-namenode-hadoop.out.4
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 10:06 hadoop-hadoop-datanode-hadoop.out.3
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 10:31 hadoop-hadoop-namenode-hadoop.out.3
-rw-rw-r--. 1 hadoop hadoop 121390 Mar 18 10:50 hadoop-hadoop-secondarynamenode-hadoop.out.3
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 10:51 hadoop-hadoop-namenode-hadoop.out.2
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 10:52 hadoop-hadoop-datanode-hadoop.out.2
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 10:52 hadoop-hadoop-secondarynamenode-hadoop.out.2
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 10:53 hadoop-hadoop-datanode-hadoop.out.1
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 10:54 hadoop-hadoop-secondarynamenode-hadoop.out.1
-rw-rw-r--. 1 hadoop hadoop 6151 Mar 18 10:55 hadoop-hadoop-namenode-hadoop.out.1
-rw-rw-r--. 1 hadoop hadoop 2215 Mar 18 14:42 hadoop-hadoop-resourcemanager-hadoop.out
-rw-rw-r--. 1 hadoop hadoop 2199 Mar 18 14:43 hadoop-hadoop-nodemanager-hadoop.out
-rw-rw-r--. 1 hadoop hadoop 41972 Mar 18 14:52 hadoop-hadoop-resourcemanager-hadoop.log
-rw-rw-r--. 1 hadoop hadoop 37935 Mar 18 15:42 hadoop-hadoop-nodemanager-hadoop.log
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 15:48 hadoop-hadoop-namenode-hadoop.out
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 15:48 hadoop-hadoop-datanode-hadoop.out
-rw-rw-r--. 1 hadoop hadoop 690 Mar 18 15:48 hadoop-hadoop-secondarynamenode-hadoop.out
-rw-rw-r--. 1 hadoop hadoop 970190 Mar 18 15:48 hadoop-hadoop-datanode-hadoop.log
drwxr-xr-x. 2 hadoop hadoop 6 Mar 18 15:48 userlogs
-rw-rw-r--. 1 hadoop hadoop 572169 Mar 18 15:49 hadoop-hadoop-namenode-hadoop.log
-rw-rw-r--. 1 hadoop hadoop 656741 Mar 18 15:49 hadoop-hadoop-secondarynamenode-hadoop.log
[hadoop@hadoop logs]$ pwd
/usr/local/hadoop/hadoop-3.1.3/logs
[hadoop@hadoop logs]$ tail -20f hadoop-hadoop-namenode-hadoop.log
2022-03-18 15:48:39,521 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.10.248:9866, datanodeUuid=daafd206-fdfe-44cc-a1fc-8ac1279c5cda, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-f921bab0-7c73-44ef-bc61-ea81d176ec82;nsid=311459717;c=1647534325420) storage daafd206-fdfe-44cc-a1fc-8ac1279c5cda
2022-03-18 15:48:39,523 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.10.248:9866
2022-03-18 15:48:39,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockReportLeaseManager: Registered DN daafd206-fdfe-44cc-a1fc-8ac1279c5cda (192.168.10.248:9866).
2022-03-18 15:48:39,889 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-6dd0b7dc-1fd0-488c-924b-76de306ac2cb for DN 192.168.10.248:9866
2022-03-18 15:48:40,062 INFO BlockStateChange: BLOCK* processReport 0xf15bae747aec7666: Processing first storage report for DS-6dd0b7dc-1fd0-488c-924b-76de306ac2cb from datanode daafd206-fdfe-44cc-a1fc-8ac1279c5cda
2022-03-18 15:48:40,065 INFO BlockStateChange: BLOCK* processReport 0xf15bae747aec7666: from storage DS-6dd0b7dc-1fd0-488c-924b-76de306ac2cb node DatanodeRegistration(192.168.10.248:9866, datanodeUuid=daafd206-fdfe-44cc-a1fc-8ac1279c5cda, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-f921bab0-7c73-44ef-bc61-ea81d176ec82;nsid=311459717;c=1647534325420), blocks: 0, hasStaleStorage: false, processing time: 3 msecs, invalidatedBlocks: 0
2022-03-18 15:49:44,972 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.10.248
2022-03-18 15:49:44,972 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2022-03-18 15:49:44,973 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 85, 85
2022-03-18 15:49:44,973 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 84 Number of syncs: 2 SyncTimes(ms): 130
2022-03-18 15:49:44,988 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 84 Number of syncs: 3 SyncTimes(ms): 144
2022-03-18 15:49:44,990 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hadoop_data/dfs/name/current/edits_inprogress_0000000000000000085 -> /home/hadoop_data/dfs/name/current/edits_0000000000000000085-0000000000000000086
2022-03-18 15:49:44,992 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 87
2022-03-18 15:49:45,514 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Sending fileName: /home/hadoop_data/dfs/name/current/fsimage_0000000000000000083, fileSize: 533. Sent total: 533 bytes. Size of last segment intended to send: -1 bytes.
2022-03-18 15:49:45,641 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Sending fileName: /home/hadoop_data/dfs/name/current/edits_0000000000000000084-0000000000000000084, fileSize: 1048576. Sent total: 1048576 bytes. Size of last segment intended to send: -1 bytes.
2022-03-18 15:49:45,744 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Sending fileName: /home/hadoop_data/dfs/name/current/edits_0000000000000000085-0000000000000000086, fileSize: 42. Sent total: 42 bytes. Size of last segment intended to send: -1 bytes.
2022-03-18 15:49:46,668 INFO org.apache.hadoop.hdfs.server.common.Util: Combined time for file download and fsync to all disks took 0.00s. The file download took 0.00s at 0.00 KB/s. Synchronous (fsync) write to disk of /home/hadoop_data/dfs/name/current/fsimage.ckpt_0000000000000000086 took 0.00s.
2022-03-18 15:49:46,668 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000086 size 533 bytes.
2022-03-18 15:49:46,684 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 83
2022-03-18 15:49:46,684 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/home/hadoop_data/dfs/name/current/fsimage_0000000000000000081, cpktTxId=0000000000000000081)

At this point the NameNode has started successfully. If you format the NameNode a second time, the DataNode will fail to start because the two clusterIDs no longer match. To fix this, go into the DataNode's current directory:

-rw-rw-r--. 1 hadoop hadoop 229 Mar 18 15:48 VERSION
drwx------. 4 hadoop hadoop 54 Mar 18 15:48 BP-301391941-192.168.10.248-1647534325420
[root@hadoop current]# cat VERSION
#Fri Mar 18 15:48:38 CST 2022
storageID=DS-6dd0b7dc-1fd0-488c-924b-76de306ac2cb
clusterID=CID-f921bab0-7c73-44ef-bc61-ea81d176ec82
cTime=0
datanodeUuid=daafd206-fdfe-44cc-a1fc-8ac1279c5cda
storageType=DATA_NODE
layoutVersion=-57
[root@hadoop current]# pwd
/home/hadoop_data/dfs/data/current
Edit VERSION so that its clusterID matches the NameNode's clusterID, then restart the Hadoop services.
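A one-line way to apply that fix (a sketch; it assumes the NameNode's clusterID has been read from /home/hadoop_data/dfs/name/current/VERSION, here the value shown above):

sed -i 's/^clusterID=.*/clusterID=CID-f921bab0-7c73-44ef-bc61-ea81d176ec82/' /home/hadoop_data/dfs/data/current/VERSION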

The pseudo-distributed Hadoop cluster is now deployed. If Hadoop runs into trouble when starting, check the corresponding log file to see what caused it. Typical problems include JAVA_HOME not being set, the hadoop data directories not existing, or missing permissions.

You can now open the web UI of the HDFS component (by default at http://<hostname>:9870 in Hadoop 3.x).

You can now open the web UI of the YARN component (by default at http://<hostname>:8088).

You may run into all sorts of problems during installation, but checking the logs, searching online, or consulting the official site will usually turn up a solution. I have simply recorded my process here for future reference.


This is an original article by 九賢生活小編; if you reprint it, please credit the source: http://m.xiesong.cn/42372.html
