
Notes on Setting Up a Single-Node Hadoop Installation


1. Create the hadoop1 user group

[root@localhost ~]# groupadd hadoop1

2. Create the hadoop1 user and set its password

[root@localhost ~]# useradd -g hadoop1 hadoop1
[root@localhost ~]# passwd hadoop1
Changing password for user hadoop1.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.
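A quick check that the account and its primary group were created as expected (the numeric uid/gid will vary per system):

[root@localhost ~]# id hadoop1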

3. Download the JDK archive and extract it

jdk-8u161-linux-x64.tar.gz

 [hadoop1@localhost ~]$ tar -zxvf jdk-8u161-linux-x64.tar.gz -C jdk/
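Note: tar -C only extracts into a directory that already exists, so create it first if needed:

[hadoop1@localhost ~]$ mkdir -p ~/jdk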

4. Configure the JAVA_HOME environment variables (appended to ~/.bash_profile)

JAVA_HOME=/home/hadoop1/jdk/jdk1.8.0_161
JRE_HOME=$JAVA_HOME/jre
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASSPATH PATH

5. Run source to apply the environment variables, then check the Java version

[hadoop1@localhost bin]$ source ~/.bash_profile
[hadoop1@localhost bin]$ java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
[hadoop1@localhost bin]$ which java
~/jdk/jdk1.8.0_161/bin/java

 

6. Set up passwordless SSH login

[hadoop1@localhost ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop1/.ssh/id_rsa):
Created directory '/home/hadoop1/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop1/.ssh/id_rsa.
Your public key has been saved in /home/hadoop1/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Ysl0IQT7aQT32MU4oRdj7wO6uzBLn9/ZZA3aTYgya+w hadoop1@localhost.localdomain
The key's randomart image is:
+---[RSA 2048]----+
| oo+ *+. |
| + Bo*. |
| . = *.. |
| = * o . . |
| X S + o . |
| o + + + = |
| + . + . + o |
| . = = . = |
| . =oE o . |
+----[SHA256]-----+
[hadoop1@localhost ~]$ cd .ssh/
[hadoop1@localhost .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop1@localhost .ssh]$ chmod 600 authorized_keys

(Optional) To skip the interactive host-key confirmation, StrictHostKeyChecking can be disabled in the ssh client config (this option belongs in ~/.ssh/config, not in known_hosts):

[hadoop1@hadoop1 hadoop-3.1.3]$ cat ~/.ssh/config
StrictHostKeyChecking no
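Key-based login can then be verified; there should be no password prompt (a minimal check against the local machine):

[hadoop1@localhost ~]$ ssh localhost date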

7. Download hadoop-3.1.3.tar.gz from the Hadoop download site and extract it

[hadoop1@localhost ~]$ mkdir hadoop313
[hadoop1@localhost ~]$ tar -zxvf hadoop-3.1.3.tar.gz -C hadoop313/
hadoop-3.1.3/
hadoop-3.1.3/LICENSE.txt
hadoop-3.1.3/NOTICE.txt
hadoop-3.1.3/README.txt
hadoop-3.1.3/bin/
hadoop-3.1.3/bin/hadoop

8. Add the Hadoop environment variables

export HADOOP_HOME=/home/hadoop1/hadoop313/hadoop-3.1.3
export PATH=$HADOOP_HOME/bin:$PATH

Then apply them as in step 5 (source ~/.bash_profile).
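A quick sanity check that the client resolves from the new PATH; the first line of output should report Hadoop 3.1.3:

[hadoop1@localhost ~]$ hadoop version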

9. Edit the configuration files

  • Hadoop directory layout

    |--- bin        # Hadoop client commands
    |--- etc/hadoop # configuration files
    |--- sbin       # scripts that start the Hadoop daemons (server side)
    |--- share      # bundled examples (e.g. share/hadoop/mapreduce)

  • Configuration changes

    (1) etc/hadoop/hadoop-env.sh
    # add
    export JAVA_HOME=/home/hadoop1/jdk/jdk1.8.0_161
        
    (2) etc/hadoop/core-site.xml
    # add; hadoop1 is the hostname mapped in the local /etc/hosts (see the sketch after item (4))
    <configuration>
            <property>
                    <name>fs.defaultFS</name>
                    <value>hdfs://hadoop1:9000</value>
            </property>
    </configuration>
    
    (3) etc/hadoop/hdfs-site.xml
    # add
    <configuration>
        <!-- number of block replicas -->
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>

        <!-- Where file blocks are stored. The default lives under the system tmp
             directory and may be lost on reboot, so move it somewhere persistent. -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop1/data/tmp</value>
        </property>
    </configuration>
    
    (4) etc/hadoop/workers
    # add the worker's IP address or mapped hostname
    hadoop1
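Two things the configuration above relies on are worth setting up before formatting: the hadoop1 hostname mapping used in core-site.xml, and (optionally) the hadoop.tmp.dir directory, which the format step below also creates on its own. A minimal sketch, assuming the machine's address is 192.168.198.128 as reported in the NameNode log (run the hosts edit as root):

[root@localhost ~]# echo "192.168.198.128 hadoop1" >> /etc/hosts
[hadoop1@localhost ~]$ mkdir -p /home/hadoop1/data/tmp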

10. Start the services and verify

The HDFS filesystem must be formatted before the first start:

hdfs namenode -format

[hadoop1@hadoop1 hadoop]$ hdfs namenode -format
WARNING: /home/hadoop1/hadoop313/hadoop-3.1.3/logs does not exist. Creating.
2023-01-19 13:52:46,633 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop1/192.168.198.128
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 3.1.3
STARTUP_MSG: classpath = /home/hadoop1/hadoop313/hadoop-3.1.3/etc/hadoop:/home/hadoop1/hadoop313/hadoop-3.1.3/share/hadoop/common/lib/accessors-smart-1:/home/hadoop1/hadoop313/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-services-core-3.1.3.jar
STARTUP_MSG: build = https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579; compiled by 'ztang' on 2019-09-12T02:47Z
STARTUP_MSG: java = 1.8.0_161
************************************************************/
2023-01-19 13:52:46,645 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2023-01-19 13:52:46,952 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-2dd23274-d81f-4473-b829-8f5676ab1594
2023-01-19 13:52:48,697 INFO namenode.FSEditLog: Edit logging is async:true
2023-01-19 13:52:48,725 INFO namenode.FSNamesystem: KeyProvider: null
2023-01-19 13:52:48,733 INFO namenode.FSNamesystem: fsLock is fair: true
2023-01-19 13:52:48,733 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2023-01-19 13:52:48,838 INFO namenode.FSNamesystem: fsOwner = hadoop1 (auth:SIMPLE)
2023-01-19 13:52:48,838 INFO namenode.FSNamesystem: supergroup = supergroup
2023-01-19 13:52:48,838 INFO namenode.FSNamesystem: isPermissionEnabled = true
2023-01-19 13:52:48,838 INFO namenode.FSNamesystem: HA Enabled: false
2023-01-19 13:52:48,924 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2023-01-19 13:52:48,951 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2023-01-19 13:52:48,951 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2023-01-19 13:52:48,960 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2023-01-19 13:52:48,960 INFO blockmanagement.BlockManager: The block deletion will start around 2023 Jan 19 13:52:48
2023-01-19 13:52:48,962 INFO util.GSet: Computing capacity for map BlocksMap
2023-01-19 13:52:48,962 INFO util.GSet: VM type = 64-bit
2023-01-19 13:52:48,975 INFO util.GSet: 2.0% max memory 405.5 MB = 8.1 MB
2023-01-19 13:52:48,975 INFO util.GSet: capacity = 2^20 = 1048576 entries
2023-01-19 13:52:48,996 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2023-01-19 13:52:49,008 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2023-01-19 13:52:49,009 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2023-01-19 13:52:49,009 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2023-01-19 13:52:49,009 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2023-01-19 13:52:49,010 INFO blockmanagement.BlockManager: defaultReplication = 1
2023-01-19 13:52:49,010 INFO blockmanagement.BlockManager: maxReplication = 512
2023-01-19 13:52:49,010 INFO blockmanagement.BlockManager: minReplication = 1
2023-01-19 13:52:49,010 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
2023-01-19 13:52:49,010 INFO blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
2023-01-19 13:52:49,010 INFO blockmanagement.BlockManager: encryptDataTransfer = false
2023-01-19 13:52:49,010 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2023-01-19 13:52:49,102 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
2023-01-19 13:52:49,127 INFO util.GSet: Computing capacity for map INodeMap
2023-01-19 13:52:49,127 INFO util.GSet: VM type = 64-bit
2023-01-19 13:52:49,129 INFO util.GSet: 1.0% max memory 405.5 MB = 4.1 MB
2023-01-19 13:52:49,129 INFO util.GSet: capacity = 2^19 = 524288 entries
2023-01-19 13:52:49,129 INFO namenode.FSDirectory: ACLs enabled? false
2023-01-19 13:52:49,130 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2023-01-19 13:52:49,130 INFO namenode.FSDirectory: XAttrs enabled? true
2023-01-19 13:52:49,130 INFO namenode.NameNode: Caching file names occurring more than 10 times
2023-01-19 13:52:49,139 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2023-01-19 13:52:49,143 INFO snapshot.SnapshotManager: SkipList is disabled
2023-01-19 13:52:49,148 INFO util.GSet: Computing capacity for map cachedBlocks
2023-01-19 13:52:49,148 INFO util.GSet: VM type = 64-bit
2023-01-19 13:52:49,148 INFO util.GSet: 0.25% max memory 405.5 MB = 1.0 MB
2023-01-19 13:52:49,149 INFO util.GSet: capacity = 2^17 = 131072 entries
2023-01-19 13:52:49,160 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2023-01-19 13:52:49,160 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2023-01-19 13:52:49,160 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2023-01-19 13:52:49,167 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2023-01-19 13:52:49,167 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2023-01-19 13:52:49,169 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2023-01-19 13:52:49,169 INFO util.GSet: VM type = 64-bit
2023-01-19 13:52:49,169 INFO util.GSet: 0.029999999329447746% max memory 405.5 MB = 124.6 KB
2023-01-19 13:52:49,169 INFO util.GSet: capacity = 2^14 = 16384 entries
2023-01-19 13:52:49,211 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1767029477-192.168.198.128-1674107569197
2023-01-19 13:52:49,243 INFO common.Storage: Storage directory /home/hadoop1/data/tmp/dfs/name has been successfully formatted.
2023-01-19 13:52:49,305 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop1/data/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2023-01-19 13:52:49,498 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop1/data/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 394 bytes saved in 0 seconds .
2023-01-19 13:52:49,522 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2023-01-19 13:52:49,532 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid = 0 when meet shutdown.
2023-01-19 13:52:49,534 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.198.128
************************************************************/

[hadoop1@hadoop1 hadoop-3.1.3]$ sbin/start-dfs.sh
Starting namenodes on [hadoop1]
hadoop1: Warning: Permanently added 'hadoop1,192.168.198.128' (ECDSA) to the list of known hosts.
Starting datanodes
Starting secondary namenodes [hadoop1]
[hadoop1@hadoop1 hadoop-3.1.3]$ jps
3312 DataNode
3192 NameNode
3528 SecondaryNameNode
3663 Jps
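A short smoke test confirms HDFS is accepting requests; in Hadoop 3.x the NameNode web UI should also be reachable at http://hadoop1:9870 (the default dfs.namenode.http-address):

[hadoop1@hadoop1 hadoop-3.1.3]$ hdfs dfs -mkdir -p /user/hadoop1
[hadoop1@hadoop1 hadoop-3.1.3]$ hdfs dfs -ls /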

 

From: https://www.cnblogs.com/snake-fly/p/17061422.html
