Table of Contents
- 1. Error Message
- 2. Solution (Install the Hadoop Runtime Environment)
1. Error Message
Key error messages:
- WARN Shell: Did not find winutils.exe: java.io.FileNotFoundException:
- java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
When calling PySpark from PyCharm to run a computation task, the following error is reported:
D:\001_Develop\022_Python\Python39\python.exe D:/002_Project/011_Python/HelloPython/Client.py
23/08/01 11:25:24 WARN Shell: Did not find winutils.exe: java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see https://wiki.apache.org/hadoop/WindowsProblems
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/08/01 11:25:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
PySpark version: 3.4.1
File contents: ['Tom Jerry', 'Tom Jerry Tom', 'Jack Jerry']
Flattened file contents: ['Tom', 'Jerry', 'Tom', 'Jerry', 'Tom', 'Jack', 'Jerry']
As (word, 1) tuples: [('Tom', 1), ('Jerry', 1), ('Tom', 1), ('Jerry', 1), ('Tom', 1), ('Jack', 1), ('Jerry', 1)]
D:\001_Develop\022_Python\Python39\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\shuffle.py:65: UserWarning: Please install psutil to have better support with spilling
D:\001_Develop\022_Python\Python39\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\shuffle.py:65: UserWarning: Please install psutil to have better support with spilling
D:\001_Develop\022_Python\Python39\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\shuffle.py:65: UserWarning: Please install psutil to have better support with spilling
D:\001_Develop\022_Python\Python39\Lib\site-packages\pyspark\python\lib\pyspark.zip\pyspark\shuffle.py:65: UserWarning: Please install psutil to have better support with spilling
Final word counts: [('Tom', 3), ('Jack', 1), ('Jerry', 3)]
Process finished with exit code 0
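For context, the job that produced the output above is a standard word count. The PySpark script itself is not shown here; the transformations it performs (flatMap, map to pairs, reduceByKey) can be sketched in plain Python using the file contents from the log:

```python
# Lines as printed in the log above
lines = ['Tom Jerry', 'Tom Jerry Tom', 'Jack Jerry']

# flatMap: split each line into words and flatten into one list
words = [w for line in lines for w in line.split()]

# map: turn each word into a (word, 1) pair
pairs = [(w, 1) for w in words]

# reduceByKey: sum the counts per word
counts = {}
for word, n in pairs:
    counts[word] = counts.get(word, 0) + n

print(counts)  # {'Tom': 3, 'Jerry': 3, 'Jack': 1}
```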
2. Solution (Install the Hadoop Runtime Environment)
PySpark normally runs together with a Hadoop environment; if no Hadoop runtime is installed on Windows, the errors above are raised.
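Besides the system-wide installation described below, the Hadoop location can also be set from inside the Python script itself, before the SparkContext is created. A minimal sketch, assuming Hadoop is extracted to the path used in this article:

```python
import os

# Assumed install path -- must match where Hadoop was actually extracted
hadoop_home = r"D:\001_Develop\052_Hadoop\hadoop-3.3.4\hadoop-3.3.4"

# These must be set before pyspark starts the JVM / SparkContext
os.environ["HADOOP_HOME"] = hadoop_home
os.environ["PATH"] = os.path.join(hadoop_home, "bin") + os.pathsep + os.environ.get("PATH", "")
```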
Hadoop releases can be downloaded from https://hadoop.apache.org/releases.html ;
The latest version is currently 3.3.6; clicking the binary (checksum signature) link under Binary download
leads to the Hadoop 3.3.6 download page:
The download URL is:
https://dlcdn.apache.org/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz
Downloading from the official site is very slow;
As an alternative, here is a Hadoop build, Hadoop 3.3.4 + winutils, available on CSDN for 0 points:
After downloading, extract Hadoop; the install path used here is D:\001_Develop\052_Hadoop\hadoop-3.3.4\hadoop-3.3.4 ;
In the system environment variables, set
HADOOP_HOME = D:\001_Develop\052_Hadoop\hadoop-3.3.4\hadoop-3.3.4
Then add
%HADOOP_HOME%\bin
%HADOOP_HOME%\sbin
to the Path environment variable;
In the D:\001_Develop\052_Hadoop\hadoop-3.3.4\hadoop-3.3.4\etc\hadoop\hadoop-env.cmd script, set JAVA_HOME to the real JDK path;
Change
set JAVA_HOME=%JAVA_HOME%
to
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_91
Note: Hadoop's Windows scripts often fail on paths containing spaces (reporting "JAVA_HOME is incorrectly set"); if that happens, use the 8.3 short name instead, e.g. set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_91 ;
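This edit can also be scripted. A sketch with a hypothetical helper that rewrites the set JAVA_HOME line in the script text:

```python
def set_java_home(cmd_text, jdk_path):
    """Replace the 'set JAVA_HOME=...' line in hadoop-env.cmd content."""
    out = []
    for line in cmd_text.splitlines():
        if line.strip().lower().startswith("set java_home="):
            out.append("set JAVA_HOME=" + jdk_path)
        else:
            out.append(line)
    return "\n".join(out)

original = "set JAVA_HOME=%JAVA_HOME%"
print(set_java_home(original, r"C:\PROGRA~1\Java\jdk1.8.0_91"))
# set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_91
```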
Copy the hadoop.dll and winutils.exe files from winutils-master\hadoop-3.3.0\bin into the C:\Windows\System32 directory;
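The copy can be done in Explorer or scripted; a sketch (the function is hypothetical, and writing to System32 requires administrator rights). Placing winutils.exe and hadoop.dll in %HADOOP_HOME%\bin is often reported to work as well:

```python
import os
import shutil

def install_winutils(winutils_bin, dest=r"C:\Windows\System32"):
    """Copy hadoop.dll and winutils.exe from winutils_bin into dest."""
    for name in ("hadoop.dll", "winutils.exe"):
        shutil.copy2(os.path.join(winutils_bin, name), os.path.join(dest, name))
```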
Restart the computer; the restart is required;
Then, at the command line, run
hadoop version
(note: the subcommand is version, with no dash) to verify that Hadoop is installed correctly;
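The verification can also be driven from Python. A sketch: the subprocess call assumes hadoop (hadoop.cmd on Windows) is on PATH, and the parser assumes the first output line looks like "Hadoop 3.3.4", which is the typical format:

```python
import subprocess

def parse_hadoop_version(output):
    """Extract the version number from `hadoop version` output, or None."""
    first_line = output.splitlines()[0] if output else ""
    if first_line.startswith("Hadoop "):
        return first_line.split()[1]
    return None

def hadoop_version():
    """Return the installed Hadoop version, or None if hadoop is not usable."""
    try:
        # shell=True lets Windows resolve hadoop.cmd via PATH
        result = subprocess.run("hadoop version", shell=True,
                                capture_output=True, text=True, check=True)
    except (OSError, subprocess.CalledProcessError):
        return None
    return parse_hadoop_version(result.stdout)
```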