Running a Hadoop Cluster

Passwordless SSH login

[hadoop@master hadoop]$ rpm -qa | grep openssh
openssh-server-7.4p1-16.el7.x86_64
openssh-clients-7.4p1-16.el7.x86_64
openssh-7.4p1-16.el7.x86_64
[hadoop@master hadoop]$ ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
/home/hadoop/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:hHo70xwWmDm6dePHlYAdtXsaYP02NpguZCwAi8XPi58 hadoop@master
The key's randomart image is:
+---[RSA 2048]----+
|   .o     ...    |
|   o.o = o o .   |
|  . .oB + = o    |
|     oo+ + o *   |
|    o.o.S + B B  |
|    .+.B B o * o |
|    ..+.+ + o    |
|      Eo . .     |
|                 |
+----[SHA256]-----+
[hadoop@master /]$ cd ~/.ssh
[hadoop@master .ssh]$ ls
id_rsa  id_rsa.pub
[hadoop@master .ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@master .ssh]$ ssh-copy-id root@192.168.100.130
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
The authenticity of host '192.168.100.130 (192.168.100.130)' can't be established.
ECDSA key fingerprint is SHA256:PBtGVMglru206eEDbi9G1WgfQEtCgE78HO8doBP7hl4.
ECDSA key fingerprint is MD5:0e:4f:4f:70:7f:5f:1f:a2:a2:78:4f:37:a4:b3:fa:86.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.100.130's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.100.130'"
and check to make sure that only the key(s) you wanted were added.

[hadoop@master .ssh]$ ssh-copy-id root@192.168.100.131
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
The authenticity of host '192.168.100.131 (192.168.100.131)' can't be established.
ECDSA key fingerprint is SHA256:PBtGVMglru206eEDbi9G1WgfQEtCgE78HO8doBP7hl4.
ECDSA key fingerprint is MD5:0e:4f:4f:70:7f:5f:1f:a2:a2:78:4f:37:a4:b3:fa:86.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.100.131's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.100.131'"
and check to make sure that only the key(s) you wanted were added.

[hadoop@master .ssh]$ ssh root@192.168.100.130
Last login: Wed Mar 15 10:13:25 2023 from 192.168.100.1
[root@slave1 ~]# exit
logout
Connection to 192.168.100.130 closed.
[hadoop@master .ssh]$ ssh root@192.168.100.131
Last login: Wed Mar 15 10:13:19 2023 from 192.168.100.1
[root@slave2 ~]# exit
logout
Connection to 192.168.100.131 closed.
[hadoop@master .ssh]$ 
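
Note that the start-dfs.sh and stop-dfs.sh scripts later run as the hadoop user, so the hadoop user on master also needs passwordless SSH access to both slaves (and to master itself, plus 0.0.0.0 for the SecondaryNameNode). A quick way to verify this, assuming the hostnames master, slave1 and slave2 resolve on the master node, is the sketch below; each command should print the remote hostname without asking for a password:

# Hedged sketch: BatchMode makes ssh fail instead of prompting, so a missing key shows up immediately
ssh -o BatchMode=yes hadoop@master hostname
ssh -o BatchMode=yes hadoop@slave1 hostname
ssh -o BatchMode=yes hadoop@slave2 hostname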

1. Experiment 1: Running a Hadoop Cluster

1.1. Objectives

After completing this experiment, you should be able to:
Check Hadoop's running status
Format and configure the Hadoop file system
View Hadoop's Java processes
View the Hadoop HDFS report
View the status of the Hadoop nodes
Stop the Hadoop processes

1.2. Requirements

Be familiar with how to check Hadoop's running status
Be familiar with how to stop the Hadoop processes

1.3. Environment

The main resources and environment required for this experiment are listed in Table 1-1.
Table 1-1 Resources and environment
Server cluster: single node; minimum machine configuration of a dual-core CPU, 8 GB RAM and a 100 GB disk
Operating environment: CentOS 7.3
Services and components: installed as required by the experiment

1.4. Procedure

1.4.1. Task 1: Format the Hadoop file system

1.4.1.1. Step 1: Format the NameNode
Formatting clears the data on the NameNode. HDFS must be formatted before it is started for the first time; later startups do not need it, and formatting again afterwards will leave the DataNode processes missing. In addition, once HDFS has run, the Hadoop working directory (set to /usr/local/src/hadoop/tmp in this guide) contains data; if you do need to re-format, you must first delete the data in the working directory, otherwise the format will fail.
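If a re-format does become necessary, a minimal cleanup sketch is shown below; the paths are the ones used in this guide, so adjust them if your dfs and tmp directories differ, and stop all HDFS daemons first.

# Hedged sketch: clear old HDFS data before re-formatting (paths assume this guide's layout)
rm -rf /usr/local/src/hadoop/tmp/*
rm -rf /usr/local/src/hadoop/dfs/name/*
# repeat the dfs/data cleanup on each slave as well
rm -rf /usr/local/src/hadoop/dfs/data/*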
Run the following commands to format the NameNode:
[root@master ~]# su - hadoop
[hadoop@master ~]$ cd /usr/local/src/hadoop/
[hadoop@master hadoop]$ ls
bin  dfs  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share  tmp
[hadoop@master hadoop]$ ls dfs
data  name
[hadoop@master hadoop]$ ./bin/hdfs namenode -format

23/03/15 11:16:15 INFO namenode.FSImage: Allocated new BlockPoolId: BP-678784440-192.168.100.129-1678850175834
23/03/15 11:16:16 INFO common.Storage: Storage directory /usr/local/src/hadoop/dfs/name has been successfully formatted.
23/03/15 11:16:16 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
23/03/15 11:16:16 INFO util.ExitUtil: Exiting with status 0
23/03/15 11:16:16 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.100.129
************************************************************/

1.4.1.2. Step 2: Start the NameNode
Run the following commands to start the NameNode:
[hadoop@master hadoop]$ jps
1684 Jps
[hadoop@master hadoop]$ hadoop-daemon.sh start namenode
starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.out


1.4.2. Task 2: View the Java processes

After startup you can use the jps command to check whether it succeeded. jps is a tool shipped with Java that lists the PIDs of all currently running Java processes:
[hadoop@master hadoop]$ jps
1717 NameNode
1784 Jps

1.4.2.1. Step 1: Start the DataNodes on the slaves
[hadoop@slave1 ~]$ cd /usr/local/src/hadoop/
[hadoop@slave1 hadoop]$ jps
1233 Jps
[hadoop@slave1 hadoop]$ hadoop-daemon.sh start datanode
starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave1.out
[hadoop@slave1 hadoop]$ jps
1257 DataNode
1322 Jps

[hadoop@slave2 ~]$ cd /usr/local/src/hadoop
[hadoop@slave2 hadoop]$ hadoop-daemon.sh start datanode
starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave2.out
[hadoop@slave2 hadoop]$ jps
1285 DataNode
1324 Jps


1.4.2.2. Step 2: Start the SecondaryNameNode
Run the following command to start the SecondaryNameNode:
[hadoop@master hadoop]$  hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
[hadoop@master hadoop]$ jps
1330 SecondaryNameNode
1365 Jps
1240 NameNode

Seeing both the NameNode and SecondaryNameNode processes means that HDFS has started successfully.
1.4.2.3. Step 3: Check where HDFS data is stored
Run the following commands to look at the Hadoop working directory:
[hadoop@master hadoop]$ ls dfs
data  name
[hadoop@master hadoop]$ ls tmp
dfs
[hadoop@master hadoop]$ ls tmp/dfs
namesecondary
[hadoop@master hadoop]$ ls tmp/dfs/namesecondary/
in_use.lock

As the listing shows, HDFS data is stored under /usr/local/src/hadoop/dfs, where the NameNode and DataNode each have a directory, and the SecondaryNameNode stores its data in its own directory under /usr/local/src/hadoop/tmp/.
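These locations come from the Hadoop configuration: hadoop.tmp.dir in core-site.xml and, if set, dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml. A quick way to confirm what this cluster actually uses might be the following sketch (the property names are standard Hadoop 2.x; the file paths are the ones assumed throughout this guide):

# Hedged sketch: print the storage-related properties from the configuration files
grep -A1 hadoop.tmp.dir /usr/local/src/hadoop/etc/hadoop/core-site.xml
grep -A1 -e dfs.namenode.name.dir -e dfs.datanode.data.dir /usr/local/src/hadoop/etc/hadoop/hdfs-site.xml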

1.4.3. Task 3: View the HDFS report

[hadoop@master hadoop]$ hdfs dfsadmin -report
Configured Capacity: 57936977920 (53.96 GB)
Present Capacity: 53951262720 (50.25 GB)
DFS Remaining: 53951254528 (50.25 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.100.130:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 28968488960 (26.98 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 1994018816 (1.86 GB)
DFS Remaining: 26974466048 (25.12 GB)
DFS Used%: 0.00%
DFS Remaining%: 93.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Mar 15 18:02:19 CST 2023


Name: 192.168.100.131:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 28968488960 (26.98 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 1991696384 (1.85 GB)
DFS Remaining: 26976788480 (25.12 GB)
DFS Used%: 0.00%
DFS Remaining%: 93.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Mar 15 18:02:20 CST 2023


1.4.4. Task 4: Configure name resolution for master, slave1 and slave2 on Windows
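
This step edits the hosts file on the Windows machine so that its browser can resolve the cluster hostnames used below. A sketch of the entries is shown here; the IP addresses are the ones used throughout this guide, and the file path is the standard Windows location (edit it as Administrator):

# C:\Windows\System32\drivers\etc\hosts
192.168.100.129 master
192.168.100.130 slave1
192.168.100.131 slave2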


1.4.5. Task 5: View node status in a browser

Enter http://master:50070 in the browser's address bar to open a page showing NameNode and DataNode information, as shown in Figure 5-2.

(Figure 5-2: NameNode and DataNode information page; screenshot omitted)

Enter http://master:50090 in the browser's address bar to open a page showing SecondaryNameNode information, as shown in Figure 5-3.

(Figure 5-3: SecondaryNameNode information page; screenshot omitted)
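
If no browser is available, the same daemons can also be checked from the command line. A small sketch (the /jmx endpoint is a standard part of the Hadoop 2.x web UIs; the ports are the ones used above):

# Hedged sketch: fetch the NameNode metrics and the SecondaryNameNode page without a browser
curl -s http://master:50070/jmx | head -n 20
curl -s http://master:50090/ | head -n 20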

HDFS can also be started with the start-dfs.sh command (and stopped with stop-dfs.sh). This requires passwordless SSH login to be configured; otherwise the startup will repeatedly ask you to confirm connections and enter the hadoop user's password.
[hadoop@master hadoop]$ stop-dfs.sh
Stopping namenodes on [master]
master: stopping namenode
192.168.100.131: stopping datanode
192.168.100.130: stopping datanode
Stopping secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:PBtGVMglru206eEDbi9G1WgfQEtCgE78HO8doBP7hl4.
ECDSA key fingerprint is MD5:0e:4f:4f:70:7f:5f:1f:a2:a2:78:4f:37:a4:b3:fa:86.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: stopping secondarynamenode



[hadoop@master hadoop]$ start-dfs.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.out
192.168.100.131: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave2.out
192.168.100.130: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out

Test run: the following runs the official WordCount example to count how often each word appears in data.txt. The same kind of job can be used for things like the year's ten best-selling products, person of the year, or the hottest terms of the year.
1.4.5.1. Step 1: Create the data input directory in the HDFS file system
Make sure both DFS and YARN have started successfully:
[hadoop@master hadoop]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-resourcemanager-master.out
192.168.100.131: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave2.out
192.168.100.130: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave1.out

[hadoop@master hadoop]$ jps
2213 SecondaryNameNode
2026 NameNode
2397 ResourceManager
2654 Jps

If this is the first time a MapReduce program is run, you must first create a data input directory in the HDFS file system to hold the input data. Here /input is used as the input directory. Run the following commands to create the /input directory in HDFS:
[hadoop@master hadoop]$ hdfs dfs -mkdir /input
[hadoop@master hadoop]$ hdfs dfs -ls /
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2023-03-16 20:53 /input

The /input directory created here lives in the HDFS file system and can only be viewed and manipulated with HDFS commands.
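To see the distinction, compare a local listing with an HDFS listing on the master; a small sketch (the local /input path is not expected to exist unless it was created separately):

# Hedged sketch: /input exists only inside HDFS, not on the local filesystem
ls /input              # fails with "No such file or directory"
hdfs dfs -ls /input    # lists the directory inside HDFS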
1.4.5.2. Step 2: Copy the input data file into the HDFS /input directory

The test data file is the same ~/input/data.txt used in the previous section; its content is shown below.

[hadoop@master ~]$ mkdir input
[hadoop@master ~]$ ls
input
[hadoop@master ~]$ vi input/data.txt
[hadoop@master input]$ cat data.txt 
hello wangjialin
hello xiaoying
hello hadoop

Run the following command to copy the input data file into the HDFS /input directory:
[hadoop@master input]$ hdfs dfs -put ~/input/data.txt /input

Confirm that the file has been copied into the HDFS /input directory:
[hadoop@master hadoop]$ hdfs dfs -ls /input
Found 1 items
-rw-r--r--   3 hadoop supergroup         45 2023-03-16 21:07 /input/data.txt

1.4.5.3. Step 3: Run the WordCount example to count the frequency of each word in the data file
A MapReduce job must specify a data output directory. This directory lives in the HDFS file system and is created automatically; if it already exists before the job is submitted, the job will fail. For example, if the MapReduce command specifies /output as the output directory and /output already exists in HDFS, the command will error out. So unless this is the first run, first check whether /output already exists in HDFS; if it does, delete it before running the command.
The automatically created /output directory lives in HDFS and is viewed and manipulated with HDFS commands. (Here /output is created first on purpose, to demonstrate the case where it already exists.)
[hadoop@master hadoop]$ hdfs dfs -mkdir /output
First run the following command to list the files in HDFS:
[hadoop@master hadoop]$ hdfs dfs -ls /
Found 3 items
drwxr-xr-x - hadoop supergroup 0 2020-05-02 22:32 /input
drwxr-xr-x - hadoop supergroup 0 2020-05-02 22:49 /output
Of the directories above, /input holds the input data and /output holds the output data. Run the following command to delete the /output directory:
[hadoop@master hadoop]$ hdfs dfs -rm -r -f /output
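To guard against a leftover output directory before every run, a one-line check such as the following could be used (a sketch; hdfs dfs -test -d succeeds only if the directory exists):

# Hedged sketch: delete /output only when it is already present
hdfs dfs -test -d /output && hdfs dfs -rm -r -f /output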
Run the following command to run the WordCount example:
[hadoop@master hadoop]$ hadoop jar /usr/local/src/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input/data.txt /output
23/03/16 21:21:57 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.100.129:8032
23/03/16 21:21:58 INFO input.FileInputFormat: Total input paths to process : 1
23/03/16 21:21:58 INFO mapreduce.JobSubmitter: number of splits:1
23/03/16 21:21:58 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1678970903012_0001
23/03/16 21:21:59 INFO impl.YarnClientImpl: Submitted application application_1678970903012_0001
23/03/16 21:21:59 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1678970903012_0001/
23/03/16 21:21:59 INFO mapreduce.Job: Running job: job_1678970903012_0001
23/03/16 21:22:08 INFO mapreduce.Job: Job job_1678970903012_0001 running in uber mode : false
23/03/16 21:22:08 INFO mapreduce.Job:  map 0% reduce 0%
23/03/16 21:22:09 INFO mapreduce.Job: Task Id : attempt_1678970903012_0001_m_000000_0, Status : FAILED
Container launch failed for container_1678970903012_0001_01_000002 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
	at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
	at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

23/03/16 21:22:11 INFO mapreduce.Job: Task Id : attempt_1678970903012_0001_m_000000_1, Status : FAILED
Container launch failed for container_1678970903012_0001_01_000003 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
	at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
	at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

23/03/16 21:22:13 INFO mapreduce.Job: Task Id : attempt_1678970903012_0001_m_000000_2, Status : FAILED
Container launch failed for container_1678970903012_0001_01_000004 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
	at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
	at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

23/03/16 21:22:16 INFO mapreduce.Job:  map 100% reduce 100%
23/03/16 21:22:16 INFO mapreduce.Job: Job job_1678970903012_0001 failed with state FAILED due to: Task failed task_1678970903012_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

23/03/16 21:22:16 INFO mapreduce.Job: Counters: 4
	Job Counters 
		Other local map tasks=3
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=0
		Total time spent by all reduces in occupied slots (ms)=0
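
Note that the job above failed with InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist. This error means the NodeManagers are not configured with the MapReduce shuffle auxiliary service. A minimal sketch of the fix, assuming yarn-site.xml lives under /usr/local/src/hadoop/etc/hadoop as in this guide: add the following properties inside the <configuration> element on the master and both slaves, restart YARN with stop-yarn.sh and start-yarn.sh, then rerun the hadoop jar command above.

<!-- Hedged sketch: enable the MapReduce shuffle service in yarn-site.xml on every node -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>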

Enter http://master:50070 in the browser's address bar, then choose Browse the file system from the Utilities menu to view the contents of the HDFS file system. As shown in Figure 5-5, the HDFS root directory contains three directories: input, output and tmp.

(Figure 5-5: HDFS root directory in the web UI; screenshot omitted)

Looking at the output directory, as shown in Figure 5-6, there are two files. The _SUCCESS file indicates that processing succeeded, and the results are stored in the part-r-00000 file. File contents cannot be viewed directly on the page; they have to be downloaded to the local system first.

(Figure 5-6: contents of the /output directory; screenshot omitted)
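
To download the result to the local file system instead, something like the following would work (the destination path is just an example):

# Hedged sketch: copy the result file out of HDFS and view it locally
hdfs dfs -get /output/part-r-00000 ~/part-r-00000
cat ~/part-r-00000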

Alternatively, the contents of part-r-00000 can be viewed directly with an HDFS command; the result is shown below:
[hadoop@master output]$ hdfs dfs -cat /output/part-r-00000

hadoop	1
hello	3
wangjialin	1
xiaoying	1

The word counts are correct, which shows that Hadoop is working properly.

1.4.6. Task 6: Stop Hadoop

1.4.6.1. Step 1: Stop YARN
[hadoop@master output]$ stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
192.168.100.131: stopping nodemanager
192.168.100.130: stopping nodemanager
192.168.100.130: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop

1.4.6.2. Step 2: Stop the DataNodes
[hadoop@slave1 data]$ hadoop-daemon.sh stop datanode
stopping datanode
[hadoop@slave1 data]$ 

[hadoop@slave2 data]$ hadoop-daemon.sh stop datanode
stopping datanode
[hadoop@slave2 data]$ 

1.4.6.3. Step 3: Stop the NameNode
[hadoop@master input]$ hadoop-daemon.sh stop namenode
stopping namenode
[hadoop@master input]$ 

1.4.6.4. Step 4: Stop the SecondaryNameNode
[hadoop@master ~]$  hadoop-daemon.sh stop secondarynamenode
stopping secondarynamenode

1.4.6.5. Step 5: Check the Java processes to confirm that all HDFS processes have stopped
[hadoop@master ~]$ jps
3403 Jps
30838 RunJar

From: https://www.cnblogs.com/shuangmu668/p/17230830.html
