
Deploying Hadoop


With the virtual machines installed last time,

we can now deploy Hadoop.

First, assign the daemon roles:

node1:Namenode、Datanode、ResourceManager、NodeManager、HistoryServer、WebProxyServer、QuorumPeerMain
node2:Datanode、NodeManager、QuorumPeerMain
node3:Datanode、NodeManager、QuorumPeerMain

Adjust the VM memory:

Give node1 at least 4 GB of RAM.

Give node2 and node3 at least 2 GB each.


Hadoop cluster deployment

Download the Hadoop tarball, extract it, and create a symlink

# 1. Download
wget http://archive.apache.org/dist/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz

# 2. Extract
# Make sure the directory /export/server exists first
tar -zxvf hadoop-3.3.0.tar.gz -C /export/server/

# 3. Create the symlink
ln -s /export/server/hadoop-3.3.0 /export/server/hadoop

Edit the configuration file: hadoop-env.sh

cd into /export/server/hadoop/etc/hadoop; all of the configuration files live in this directory.

Edit the hadoop-env.sh file.

This file sets environment variables that Hadoop uses.

They are transient, taking effect only while Hadoop processes run.

To make them permanent for login shells, add them to /etc/profile as well.

# Add at the top of the file:
# Java installation path
export JAVA_HOME=/export/server/jdk
# Hadoop installation path
export HADOOP_HOME=/export/server/hadoop
# Hadoop HDFS configuration directory
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
# Hadoop YARN configuration directory
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
# Hadoop YARN log directory
export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
# Hadoop HDFS log directory
export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs
# Users that start the Hadoop daemons
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export YARN_PROXYSERVER_USER=root
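
To sanity-check the edit, you can source the file in a subshell and print a couple of the variables (a minimal sketch using the paths defined above):

cd /export/server/hadoop/etc/hadoop
# Source in a subshell so nothing leaks into the current session
bash -c 'source hadoop-env.sh && echo "$JAVA_HOME" && echo "$YARN_LOG_DIR"'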


Edit the configuration file: core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:8020</value>
    <description></description>
  </property>

  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description></description>
  </property>
</configuration>
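
To confirm the value is picked up, Hadoop's getconf tool can read it straight from the XML without any daemons running (an optional check; the path is the install location used in this guide):

/export/server/hadoop/bin/hdfs getconf -confKey fs.defaultFS
# Expected output: hdfs://node1:8020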

Configure hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.datanode.data.dir.perm</name>
        <value>700</value>
    </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/nn</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently.</description>
  </property>

  <property>
    <name>dfs.namenode.hosts</name>
    <value>node1,node2,node3</value>
    <description>List of permitted DataNodes.</description>
  </property>

  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
    <description></description>
  </property>


  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
    <description></description>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/dn</value>
  </property>
</configuration>
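
The same getconf check works for these keys; note that dfs.blocksize = 268435456 bytes is 256 MB (an optional sketch):

/export/server/hadoop/bin/hdfs getconf -confKey dfs.namenode.name.dir   # /data/nn
/export/server/hadoop/bin/hdfs getconf -confKey dfs.blocksize           # 268435456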

Configure mapred-env.sh

# Add the following environment variable settings at the top of the file
export JAVA_HOME=/export/server/jdk
export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA

Configure mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description></description>
  </property>

  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>node1:10020</value>
    <description></description>
  </property>


  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node1:19888</value>
    <description></description>
  </property>


  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/data/mr-history/tmp</value>
    <description></description>
  </property>


  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/data/mr-history/done</value>
    <description></description>
  </property>
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
</configuration>

Configure yarn-env.sh

# Add the following environment variable settings at the top of the file
export JAVA_HOME=/export/server/jdk
export HADOOP_HOME=/export/server/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs

Configure yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
    <name>yarn.log.server.url</name>
    <value>http://node1:19888/jobhistory/logs</value>
    <description></description>
</property>

  <property>
    <name>yarn.web-proxy.address</name>
    <value>node1:8089</value>
    <description>proxy server hostname and port</description>
  </property>


  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    <description>Configuration to enable or disable log aggregation</description>
  </property>

  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
    <description>HDFS directory where application logs are moved on application completion.</description>
  </property>


  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node1</value>
    <description></description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    <description></description>
  </property>

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/nm-local</value>
    <description>Comma-separated list of paths on the local filesystem where intermediate data is written.</description>
  </property>


  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/data/nm-log</value>
    <description>Comma-separated list of paths on the local filesystem where logs are written.</description>
  </property>


  <property>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>10800</value>
    <description>Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled.</description>
  </property>



  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Shuffle service that needs to be set for Map Reduce applications.</description>
  </property>
</configuration>

Edit the workers file

# The full contents are as follows
node1
node2
node3
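
If you prefer not to open an editor, a heredoc writes the same contents (a sketch, run from the configuration directory):

cd /export/server/hadoop/etc/hadoop
cat > workers <<'EOF'
node1
node2
node3
EOF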

Distribute Hadoop to the other machines

# Run on node1
cd /export/server

scp -r hadoop-3.3.0 node2:`pwd`/
scp -r hadoop-3.3.0 node3:`pwd`/
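
Optionally confirm the copies landed (a sketch; relies on the passwordless SSH configured when the VMs were set up):

ssh node2 "ls -d /export/server/hadoop-3.3.0"
ssh node3 "ls -d /export/server/hadoop-3.3.0"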

Run on node2 and node3:

# Create the symlink
ln -s /export/server/hadoop-3.3.0 /export/server/hadoop


Create the required data directories

mkdir -p /data/nn
mkdir -p /data/dn
mkdir -p /data/nm-log
mkdir -p /data/nm-local
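
These directories must also exist on node2 and node3, since every node runs a DataNode and a NodeManager; a sketch that creates them remotely over SSH (strictly, /data/nn is only used by the NameNode on node1, but creating it everywhere is harmless):

for h in node2 node3; do
  ssh "$h" "mkdir -p /data/nn /data/dn /data/nm-log /data/nm-local"
done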

Configure environment variables: on node1, node2 and node3, edit /etc/profile

export HADOOP_HOME=/export/server/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
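
Then reload the profile and confirm the hadoop command resolves:

source /etc/profile
hadoop version   # should report Hadoop 3.3.0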

Format the NameNode (run on node1)

hdfs namenode -format
# (the legacy "hadoop namenode -format" also works, but prints a deprecation warning in Hadoop 3.x)

Start the HDFS cluster; running this on node1 is enough

start-dfs.sh

# To stop the cluster, run
stop-dfs.sh
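
Since this guide also configures YARN, the JobHistory Server (node1:19888) and the YARN web proxy (node1:8089), you will usually want to start those as well. A sketch using the standard Hadoop 3.x commands, run on node1:

start-yarn.sh                          # ResourceManager on node1, NodeManagers on the workers
mapred --daemon start historyserver    # JobHistory Server, web UI at node1:19888
yarn --daemon start proxyserver        # web proxy at node1:8089, if start-yarn.sh did not start it
jps                                    # verify which Java daemons are running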


