Remove the bundled OpenJDK (skip this step on a minimal install, which ships without it):
[root@master ~]# rpm -qa | grep java
[root@master ~]# rpm -e --nodeps <each java package listed by the previous command>
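If several packages are listed, a common shortcut (a sketch assuming xargs is available, as it is on standard CentOS) pipes the query straight into the removal:
[root@master ~]# rpm -qa | grep java | xargs rpm -e --nodeps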
Set up passwordless SSH login
Generate a key pair: [root@master ~]# ssh-keygen -t rsa (press Enter four times)
Send the public key: [root@master ~]# ssh-copy-id 192.168.100.10
ssh-copy-id 192.168.100.20
ssh-copy-id 192.168.100.30
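To verify, each node should now accept a login without prompting for a password, e.g.:
[root@master ~]# ssh 192.168.100.20 hostname
(should print the remote hostname with no password prompt)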
Disable the firewall (on all three nodes):
systemctl stop firewalld
systemctl disable firewalld
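A quick check that it is really off:
[root@master ~]# systemctl is-active firewalld
inactive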
Set the hostnames (run each command on the corresponding machine):
hostnamectl set-hostname master
hostnamectl set-hostname slave1
hostnamectl set-hostname slave2
Map hostnames to IPs (on every node):
vi /etc/hosts
192.168.100.10 master
192.168.100.20 slave1
192.168.100.30 slave2
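A quick sanity check that the names resolve:
[root@master ~]# ping -c 1 slave1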
Configure the Java and Hadoop environment variables:
[root@localhost src]# vi /etc/profile
export JAVA_HOME=/usr/local/src/jdk1.8.0_181/
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/usr/local/src/hadoop-2.7.6/
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
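The new variables only apply to fresh login shells; to load them into the current session and confirm both tools are on the PATH:
[root@master ~]# source /etc/profile
[root@master ~]# java -version
[root@master ~]# hadoop version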
Copy the profile to the slave nodes:
scp -r /etc/profile slave1:/etc/
scp -r /etc/profile slave2:/etc/
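/etc/profile is read automatically at the next login on the slaves; to apply it immediately instead, run this on each slave:
[root@slave1 ~]# source /etc/profile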
Edit the configuration files (under $HADOOP_HOME/etc/hadoop):
1. vi core-site.xml
fs.defaultFS
hdfs://master:9000
io.file.buffer.size
131072
hadoop.tmp.dir
file:///usr/tmp/hadoop
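In the file itself, every name/value pair listed here goes inside a <property> element of the <configuration> block; core-site.xml, for example, ends up as:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///usr/tmp/hadoop</value>
    </property>
</configuration>
The same wrapping applies to the name/value pairs in all the files below.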
2. vi hadoop-env.sh
export JAVA_HOME=/usr/local/src/jdk1.8.0_181/ (line 25)
3. vi hdfs-site.xml
dfs.replication
1
dfs.namenode.name.dir
file:///hdfs/namenode
dfs.datanode.data.dir
file:///hdfs/datanode
dfs.block.size
134217728
dfs.http.address
master:50070
dfs.namenode.secondary.http-address
master:9001
dfs.webhdfs.enabled
true
dfs.permissions
false
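The metadata and data directories referenced above should exist on the relevant nodes before the first start; creating both everywhere is the simplest approach:
mkdir -p /hdfs/namenode /hdfs/datanode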
4. cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
mapreduce.framework.name
yarn
mapreduce.jobhistory.address
master:10020
mapreduce.jobhistory.webapp.address
master:19888
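Note that start-all.sh does not launch the JobHistory server; with the addresses above, it is started separately once the cluster is up:
[root@master hadoop]# mr-jobhistory-daemon.sh start historyserver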
5. vi yarn-site.xml
yarn.resourcemanager.hostname
master
yarn.nodemanager.aux-services
mapreduce_shuffle
yarn.resourcemanager.address
master:8032
yarn.resourcemanager.scheduler.address
master:8030
yarn.resourcemanager.resource-tracker.address
master:8031
yarn.resourcemanager.admin.address
master:8033
yarn.resourcemanager.webapp.address
master:8088
6. vi slaves (listing master here means master also runs a DataNode and NodeManager alongside the master daemons)
master
slave1
slave2
Distribute Hadoop to the slaves:
scp -r /usr/local/src/hadoop-2.7.6 slave1:/usr/local/src/
scp -r /usr/local/src/hadoop-2.7.6 slave2:/usr/local/src/
Distribute the JDK:
scp -r /usr/local/src/jdk1.8.0_181/ slave1:/usr/local/src/
scp -r /usr/local/src/jdk1.8.0_181/ slave2:/usr/local/src/
Format the NameNode (once only, on master, after the environment variables are in effect):
[root@master hadoop]# hadoop namenode -format
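Formatting erases the NameNode metadata. If the cluster has already been run, reformatting leaves the DataNodes with a mismatched clusterID, so clear /hdfs/namenode and /hdfs/datanode on every node before formatting again.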
Start all services:
[root@master hadoop]# start-all.sh
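Run jps on each node to confirm the daemons are up; with master also listed in slaves, master should show NameNode, SecondaryNameNode, DataNode, ResourceManager and NodeManager, while the slaves show DataNode and NodeManager:
[root@master hadoop]# jps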
Configure Windows name resolution:
Edit the hosts file under C:\Windows\System32\drivers\etc and add:
192.168.100.10 master
192.168.100.20 slave1
192.168.100.30 slave2
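With the mapping in place, the web UIs configured above are reachable from a Windows browser:
http://master:50070 (HDFS NameNode)
http://master:8088 (YARN ResourceManager)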
From: https://www.cnblogs.com/-liunian/p/17410957.html