Problem: how to inspect the Hive logs
Go into the log directory and check hiveserver2.log. On my installation the Hive logs live in the following directory:
cd /var/log/my_hive_log
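As a quick way to surface errors without paging through the whole file, you can grep the log for the exception class. This is a minimal sketch: the log path follows the directory mentioned above (adjust it to your installation), and the sample log written here is only so the commands are runnable anywhere.

```shell
# Write a small sample log so this sketch is self-contained;
# in practice, point grep at /var/log/my_hive_log/hiveserver2.log instead.
log=$(mktemp)
cat > "$log" <<'EOF'
2024-01-01 00:00:00 INFO  query started
Caused by: org.apache.hadoop.hive.ql.metadata.HiveFatalException: [Error 20004]: Fatal error occurred
EOF

# -n prints line numbers so you can jump to the error in the full log.
grep -n "HiveFatalException" "$log"
```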
If the log contains an error like the following:
Maximum was set to 100 partitions per node, number of dynamic partitions on this node: 101
This means the number of dynamically generated partitions on one node exceeded the configured per-node limit: the node was allowed at most 100 dynamic partitions, but 101 were generated on it.
Caused by: org.apache.hadoop.hive.ql.metadata.HiveFatalException: [Error 20004]: Fatal error occurred when node tried to create too many dynamic partitions. The maximum number of dynamic partitions is controlled by hive.exec.max.dynamic.partitions and hive.exec.max.dynamic.partitions.pernode. Maximum was set to 100 partitions per node, number of dynamic partitions on this node: 101
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.getDynOutPaths(FileSinkOperator.java:1190)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:938)
at org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.process(VectorFileSinkOperator.java:111)
at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:966)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:939)
at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:158)
at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:966)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:939)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:889)
... 10 more
Solution:
Run the following command in the Hive session before executing your SQL:
set hive.exec.max.dynamic.partitions.pernode=10000;
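The error message above cites two related settings, so when raising the per-node limit you may also need to raise the overall limit, which must cover the total partition count across all nodes. A minimal sketch of such a session is below; the table and column names (sales_partitioned, sales_staging, dt) are hypothetical, and the nonstrict mode setting is only needed if the insert has no static partition column.

```sql
-- Allow fully dynamic partitioning (no static partition column required);
-- only needed when every partition column is dynamic.
SET hive.exec.dynamic.partition.mode=nonstrict;
-- Raise the per-node limit that the error above was hitting.
SET hive.exec.max.dynamic.partitions.pernode=10000;
-- Raise the overall limit as well; it must be at least the total
-- number of partitions the query will create.
SET hive.exec.max.dynamic.partitions=10000;

-- Hypothetical dynamic-partition insert: the last selected column (dt)
-- populates the partition column.
INSERT OVERWRITE TABLE sales_partitioned PARTITION (dt)
SELECT id, amount, dt FROM sales_staging;
```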
From: https://blog.csdn.net/m0_57764570/article/details/142486446