
Flink runtime error: java.lang.OutOfMemoryError: Direct buffer memory


If you run into the error shown below, increase the value of the configuration option taskmanager.memory.framework.off-heap.size. Its default is 128 MB, and the error indicates that this is not enough, so the value needs to be raised (a configuration sketch follows the log).

2022-12-16 09:09:21,633 INFO  [464321] [org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:366)]  - Hadoop UGI authentication : TAUTH
2022-12-16 09:09:21,735 ERROR [464355] [org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager$1.accept(SplitFetcherManager.java:119)]  - Received uncaught exception.
java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:695)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
    at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:241)
    at sun.nio.ch.IOUtil.read(IOUtil.java:195)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
    at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:103)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:118)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:561)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.transmitSends(ConsumerNetworkClient.java:324)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1246)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
    at org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.fetch(KafkaPartitionSplitReader.java:100)
    at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142)
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:105)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2022-12-16 09:09:21,735 INFO  [464355] [org.apache.kafka.common.metrics.Metrics.close(Metrics.java:659)]  - Metrics scheduler closed
2022-12-16 09:09:21,735 INFO  [464355] [org.apache.kafka.common.metrics.Metrics.close(Metrics.java:663)]  - Closing reporter org.apache.kafka.common.metrics.JmxReporter
2022-12-16 09:09:21,735 INFO  [464262] [org.apache.flink.connector.base.source.reader.SourceReaderBase.close(SourceReaderBase.java:259)]  - Closing Source Reader.
2022-12-16 09:09:21,735 INFO  [464355] [org.apache.kafka.common.metrics.Metrics.close(Metrics.java:669)]  - Metrics reporters closed
2022-12-16 09:09:21,736 INFO  [464262] [org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.shutdown(SplitFetcher.java:196)]  - Shutting down split fetcher 0
2022-12-16 09:09:21,736 INFO  [464355] [org.apache.kafka.common.utils.AppInfoParser.unregisterAppInfo(AppInfoParser.java:83)]  - App info kafka.consumer for Group-Ods99Log-xuexi-0008-0 unregistered
2022-12-16 09:09:21,736 INFO  [464355] [org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:115)]  - Split fetcher 0 exited.
2022-12-16 09:09:21,737 WARN  [464262] [org.apache.flink.runtime.taskmanager.Task.transitionState(Task.java:1111)]  - Source: t_abc_log_kafka[1] -> Calc[2] -> ConstraintEnforcer[3] -> row_data_to_hoodie_record (1/2)#1832 (16a69b5652ee73524a3f889acbe13ad5) switched from RUNNING to FAILED with failure cause: java.lang.RuntimeException: One or more fetchers have encountered exception
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:225)
    at org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:169)
    at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:130)
    at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:385)
    at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:527)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:812)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:955)
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:934)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:748)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:569)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Direct buffer memory. The direct out-of-memory error has occurred. This can mean two things: either job(s) require(s) a larger size of JVM direct memory or there is a direct memory leak. The direct memory can be allocated by user code or some of its dependencies. In this case 'taskmanager.memory.task.off-heap.size' configuration option should be increased. Flink framework and its dependencies also consume the direct memory, mostly for network communication. The most of network memory is managed by Flink and should not result in out-of-memory error. In certain special cases, in particular for jobs with high parallelism, the framework may require more direct memory which is not managed by Flink. In this case 'taskmanager.memory.framework.off-heap.size' configuration option should be increased. If the error persists then there is probably a direct memory leak in user code or some of its dependencies which has to be investigated and fixed. The task executor has to be shutdown...
    at java.nio.Bits.reserveMemory(Bits.java:695)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
    at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:241)
    at sun.nio.ch.IOUtil.read(IOUtil.java:195)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
    at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:103)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:118)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:561)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.transmitSends(ConsumerNetworkClient.java:324)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1246)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
    at org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.fetch(KafkaPartitionSplitReader.java:100)
    at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142)
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:105)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more

2022-12-16 09:09:21,737 INFO  [464262] [org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:832)]  - Freeing task resources for Source: t_abc_log_kafka[1] -> Calc[2] -> ConstraintEnforcer[3] -> row_data_to_hoodie_record (1/2)#1832 (16a69b5652ee73524a3f889acbe13ad5).
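
The fix described above is applied in flink-conf.yaml. The snippet below is a minimal sketch, not taken from the original post: the 256m value is only an illustrative choice, and the commented-out task.off-heap.size line covers the other case named in the error text (direct memory allocated by user code or its dependencies).

    # flink-conf.yaml
    # Raise the framework off-heap memory (default: 128m).
    # 256m is an example value; size it according to your actual workload.
    taskmanager.memory.framework.off-heap.size: 256m

    # If the direct memory is allocated by user code or one of its dependencies
    # (the first case mentioned in the error message above), raise this instead:
    # taskmanager.memory.task.off-heap.size: 256m

Depending on the deployment mode and Flink version, the same option can usually also be passed at job submission time as a dynamic property (for example -Dtaskmanager.memory.framework.off-heap.size=256m) instead of editing flink-conf.yaml.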

 

 

From: https://www.cnblogs.com/aquester/p/16986545.html
