
Spark 3.5.1 + Hadoop 3.4.0 + Hive 4.0 Distributed Cluster Installation and Configuration


For the Hadoop installation, see:

Hadoop 3.4.0 + HBase 2.5.8 + ZooKeeper 3.8.4 + Hive 4.0 + Sqoop Distributed High-Availability Cluster Deployment (Big Data Series II) - CSDN Blog

I. Download

Downloads | Apache Spark

1. Download Maven (Maven – Welcome to Apache Maven)

# Install and configure Maven
wget https://dlcdn.apache.org/maven/maven-3/3.8.8/binaries/apache-maven-3.8.8-bin.tar.gz
# Unpack and move into place
tar zxvf apache-maven-3.8.8-bin.tar.gz
mv apache-maven-3.8.8/ /usr/local/maven
# Edit /etc/profile and add:
export MAVEN_HOME=/usr/local/maven
export PATH=$PATH:$MAVEN_HOME/bin
# Reload the environment
source /etc/profile
# Check the version
[root@slave13 soft]# mvn --version
Apache Maven 3.8.8 (4c87b05d9aedce574290d1acc98575ed5eb6cd39)
Maven home: /usr/local/maven
Java version: 1.8.0_191, vendor: Oracle Corporation, runtime: /usr/local/jdk/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.18.0-348.el8.x86_64", arch: "amd64", family: "unix"
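The /etc/profile edit above is manual; a minimal non-interactive sketch of the same step (same two variables, appended with a heredoc):

cat >> /etc/profile <<'EOF'
export MAVEN_HOME=/usr/local/maven
export PATH=$PATH:$MAVEN_HOME/bin
EOF
source /etc/profile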

2. Download Scala (Scala 2.13.14 | The Scala Programming Language)

# Unpack and move into place
tar zxvf  scala-2.13.14.tgz
sudo mv scala-2.13.14/ /usr/local/scala
sudo vi /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin
source /etc/profile
# Check the version
scala -version
Scala code runner version 2.13.14 -- Copyright 2002-2024, LAMP/EPFL and Lightbend, Inc.
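As a quick smoke test beyond -version, the Scala runner can evaluate a one-liner (a sketch; it prints the library's own version string, which should match 2.13.14):

scala -e 'println(scala.util.Properties.versionString)'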

3. Install Spark

# Unpack and move into place
tar zxvf  spark-3.5.1-bin-hadoop3.tgz
sudo mv  spark-3.5.1-bin-hadoop3/ /usr/local/spark/
# Configure environment variables (slave12 and slave13 need the same entries)
sudo vi /etc/profile
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
export PATH=$PATH:$SPARK_HOME/sbin
source /etc/profile
# Create and edit spark-env.sh
cd /usr/local/spark/conf/
cp  spark-env.sh.template  spark-env.sh
vim spark-env.sh
export JAVA_HOME=/usr/local/jdk
export SCALA_HOME=/usr/local/scala
export HADOOP_CONF_DIR=/data/hadoop/etc/hadoop/
export SPARK_MASTER_HOST=master11
export SPARK_LIBRARY_PATH=/usr/local/spark/jars
export SPARK_WORKER_MEMORY=2048m
export SPARK_WORKER_CORES=2
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8082
export SPARK_DIST_CLASSPATH=$(/data/hadoop/bin/hadoop classpath)
# Edit the workers file to list the worker nodes
cp workers.template workers
vim workers
slave12
slave13
# Distribute the files to slave12 and slave13 (an environment-variable sketch follows below)
scp  -r /usr/local/spark/ slave12:/usr/local/
scp  -r /usr/local/spark/ slave13:/usr/local/
scp  -r /usr/local/scala/ slave12:/usr/local/
scp  -r /usr/local/scala/ slave13:/usr/local/
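scp copies only the Spark and Scala directories; the /etc/profile entries still have to be added on the workers. A sketch of pushing them over SSH, assuming the passwordless root SSH already configured for the Hadoop cluster:

for host in slave12 slave13; do
  ssh "$host" "cat >> /etc/profile" <<'EOF'
export SCALA_HOME=/usr/local/scala
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SCALA_HOME/bin:$SPARK_HOME/bin:$SPARK_HOME/sbin
EOF
done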

II. Startup

# Start the cluster from master11
[root@master11 ~]# /usr/local/spark/sbin/start-all.sh
# It fails with the following error
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/Logger
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
        at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
        at java.lang.Class.getMethod0(Class.java:3018)
        at java.lang.Class.getMethod(Class.java:1784)
        at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
        at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 7 more
# Fix: download the missing slf4j jars into Spark's jars directory
cd /usr/local/spark/jars/
wget https://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.7.9/slf4j-api-1.7.9.jar
wget https://repo1.maven.org/maven2/org/slf4j/slf4j-nop/1.7.9/slf4j-nop-1.7.9.jar
# Start again
[root@master11 ~]# /usr/local/spark/sbin/start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-master11.out
slave12: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave12.out
slave13: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave13.out
# Verify the processes (the original post shows a screenshot of the Master web UI, port 8082, here)
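Without the screenshot, the daemons can also be confirmed with jps, which ships with the JDK (a sketch; it uses the same passwordless SSH that start-all.sh relies on):

jps              # on master11: expect a Master process
ssh slave12 jps  # expect a Worker process
ssh slave13 jps  # expect a Worker process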

III. Spark and Hive Integration

1. Copy the configuration files and the MySQL driver

cp /data/hive/conf/hive-site.xml /usr/local/spark/conf/
cp /data/hadoop/etc/hadoop/hdfs-site.xml /usr/local/spark/conf/
cp /data/hadoop/etc/hadoop/core-site.xml /usr/local/spark/conf/
cp /data/hive/lib/mysql-connector-java-8.0.29.jar  /usr/local/spark/jars/
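Before launching spark-sql, it is worth confirming that the copied hive-site.xml actually points Spark at the Hive metastore. A sketch using the standard hive.metastore.uris property (assumes a remote metastore; adjust if yours is embedded):

grep -A1 'hive.metastore.uris' /usr/local/spark/conf/hive-site.xml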

2. Log in to Hive and create a test table

hive
create database testdb;
use testdb;
create table test(id int,name string) row format delimited fields terminated by ',';
# Create a test file
cat /root/test.csv
1,lucy
2,lili
# Load the data into the table
load data local inpath '/root/test.csv' overwrite into table test;
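A quick check that the load worked, run from the shell rather than the interactive CLI (a sketch; hive -e executes one statement and exits):

hive -e 'select * from testdb.test;'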

3. Launch spark-sql

spark-sql  --master spark://master11:7077  --executor-memory 512m  --total-executor-cores 2  --driver-class-path /usr/local/spark/jars/mysql-connector-java-8.0.29.jar
spark-sql (default)> show databases;
namespace
default
testdb
Time taken: 2.918 seconds, Fetched 2 row(s)
spark-sql (default)> use testdb;
Response code
Time taken: 0.478 seconds
spark-sql (testdb)> show tables;
namespace	tableName	isTemporary
test
Time taken: 0.454 seconds, Fetched 1 row(s)
spark-sql (testdb)> select  * from test;
id	name
1	lucy
2	lili
Time taken: 4.126 seconds, Fetched 2 row(s)
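The same query can also be run non-interactively with -e, which is handy for scripting (a sketch reusing the exact flags from the session above):

spark-sql --master spark://master11:7077 \
  --executor-memory 512m --total-executor-cores 2 \
  --driver-class-path /usr/local/spark/jars/mysql-connector-java-8.0.29.jar \
  -e 'select count(*) from testdb.test;'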

From: https://blog.csdn.net/tonyhi6/article/details/139510680

    DDD领域驱动设计批评文集做强化自测题获得“软件方法建模师”称号《软件方法》各章合集《软件方法》最新pdf和epub文件:umlchina.com/url/softmeth2024.html8.3建模步骤C-2识别类的关系8.3.4识别关联关系8.3.4.6DDD话语“聚合”中的伪创新(3)aggregateroot是伪创新......