
How to Implement DolphinScheduler YARN Task Status Tracking?

Background

For YARN-based tasks such as MapReduce, Spark, Flink, and even Shell tasks, DolphinScheduler originally checked whether a YARN application had been launched and parsed out its applicationId. The final DolphinScheduler task status was therefore determined not only by the client process, but also by the application's state on YARN. The community later refactored this logic (a good direction, but currently half-finished), which introduced some problems. For example, Flink's detached Application mode lets the client shell exit immediately after submission, so the DolphinScheduler task is marked successful right away, even though the job is still running on YARN, and DolphinScheduler can no longer track its state there.

So how can we implement status tracking for tasks running on YARN?

Note: version 3.2.1 is used as the example throughout.

Worker Task Relationship Diagram

First, let's look at how the Worker Task classes relate to each other in DolphinScheduler.

(Figure: Worker Task class hierarchy)

  • AbstractTask: defines a Task's basic lifecycle interface, i.e. init, handle, and cancel
  • AbstractRemoteTask: implements the handle method using the template method pattern, extracting the three core interface methods submitApplication, trackApplicationStatus, and cancelApplication (see the sketch below)
  • AbstractYarnTask: the abstraction for YARN tasks, whose submitApplication, trackApplicationStatus, and cancelApplication can directly call the YARN API
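
As a simplified sketch (not the exact 3.2.1 source; the real handle implementation contains more bookkeeping), AbstractRemoteTask wires the three template methods together roughly like this:

public abstract class AbstractRemoteTask extends AbstractTask {

    protected AbstractRemoteTask(TaskExecutionContext taskExecutionContext) {
        super(taskExecutionContext);
    }

    @Override
    public void handle(TaskCallBack taskCallBack) throws TaskException {
        // Template method: submit the application first,
        // then poll its remote state until a terminal state is reached.
        submitApplication();
        trackApplicationStatus();
    }

    // Submit the application and record its applicationId(s)
    public abstract void submitApplication() throws TaskException;

    // Poll the remote system (e.g. YARN) until the application finishes
    public abstract void trackApplicationStatus() throws TaskException;

    // Cancel the running remote application
    public abstract void cancelApplication() throws TaskException;
}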

Implementing YARN Status Tracking in AbstractYarnTask

AbstractYarnTask can implement the YARN status tracking itself. See org.apache.dolphinscheduler.plugin.task.api.AbstractYarnTask; the complete code is as follows:

public abstract class AbstractYarnTask extends AbstractRemoteTask {

    private static final int MAX_RETRY_ATTEMPTS = 3;

    private ShellCommandExecutor shellCommandExecutor;

    public AbstractYarnTask(TaskExecutionContext taskRequest) {
        super(taskRequest);
        this.shellCommandExecutor = new ShellCommandExecutor(this::logHandle, taskRequest);
    }

    @Override
    public void submitApplication() throws TaskException {
        try {
            IShellInterceptorBuilder shellActuatorBuilder =
                    ShellInterceptorBuilderFactory.newBuilder()
                            .properties(getProperties())
                            // todo: do we need to move the replace to subclass?
                            .appendScript(getScript().replaceAll("\\r\\n", System.lineSeparator()));
            // SHELL task exit code
            TaskResponse response = shellCommandExecutor.run(shellActuatorBuilder, null);
            setExitStatusCode(response.getExitStatusCode());
            setAppIds(String.join(TaskConstants.COMMA, getApplicationIds()));
            setProcessId(response.getProcessId());
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
            log.info("The current yarn task has been interrupted", ex);
            setExitStatusCode(TaskConstants.EXIT_CODE_FAILURE);
            throw new TaskException("The current yarn task has been interrupted", ex);
        } catch (Exception e) {
            log.error("yarn process failure", e);
            exitStatusCode = -1;
            throw new TaskException("Execute task failed", e);
        }
    }

    @Override
    public void trackApplicationStatus() throws TaskException {
        if (StringUtils.isEmpty(appIds)) {
            return;
        }


        List<String> appIdList = Arrays.asList(appIds.split(","));
        boolean continueTracking = true;

        while (continueTracking) {
            Map<String, YarnState> yarnStateMap = new HashMap<>();
            for (String appId : appIdList) {
                if (StringUtils.isEmpty(appId)) {
                    continue;
                }

                boolean hadoopSecurityAuthStartupState =
                        PropertyUtils.getBoolean(HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE, false);
                String yarnStateJson = fetchYarnStateJsonWithRetry(appId, hadoopSecurityAuthStartupState);
                if (StringUtils.isNotEmpty(yarnStateJson)) {
                    String appJson = JSONUtils.getNodeString(yarnStateJson, "app");
                    YarnTask yarnTask = JSONUtils.parseObject(appJson, YarnTask.class);
                    log.info("yarnTask : {}", yarnTask);
                    // guard against a malformed response: parseObject returns null on failure
                    if (yarnTask != null) {
                        yarnStateMap.put(yarnTask.getId(), YarnState.of(yarnTask.getState()));
                    }
                }
            }

            YarnState yarnTaskOverallStatus = YarnTaskStatusChecker.getYarnTaskOverallStatus(yarnStateMap);
            if (yarnTaskOverallStatus.isFinalState()) {
                handleFinalState(yarnTaskOverallStatus);
                continueTracking = false;
            } else {
                try {
                    TimeUnit.MILLISECONDS.sleep(SLEEP_TIME_MILLIS * 10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(e);
                }
            }
        }
    }

    private String fetchYarnStateJsonWithRetry(String appId,
                                               boolean hadoopSecurityAuthStartupState) throws TaskException {
        int retryCount = 0;
        while (retryCount < MAX_RETRY_ATTEMPTS) {
            try {
                return fetchYarnStateJson(appId, hadoopSecurityAuthStartupState);
            } catch (Exception e) {
                retryCount++;
                log.error("Failed to fetch or parse Yarn state for appId: {}. Attempt: {}/{}",
                        appId, retryCount, MAX_RETRY_ATTEMPTS, e);

                if (retryCount >= MAX_RETRY_ATTEMPTS) {
                    throw new TaskException("Failed to fetch Yarn state after "
                            + MAX_RETRY_ATTEMPTS + " attempts for appId: " + appId, e);
                }

                try {
                    TimeUnit.MILLISECONDS.sleep(SLEEP_TIME_MILLIS);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(ie);
                }
            }
        }
        return null;
    }

    private void handleFinalState(YarnState yarnState) {
        switch (yarnState) {
            case FINISHED:
                setExitStatusCode(EXIT_CODE_SUCCESS);
                break;
            case KILLED:
                setExitStatusCode(EXIT_CODE_KILL);
                break;
            default:
                setExitStatusCode(EXIT_CODE_FAILURE);
                break;
        }
    }

    private String fetchYarnStateJson(String appId, boolean hadoopSecurityAuthStartupState) throws Exception {
        return hadoopSecurityAuthStartupState
                ? KerberosHttpClient.get(getApplicationUrl(appId))
                : HttpUtils.get(getApplicationUrl(appId));
    }


    static class YarnTaskStatusChecker {

        public static YarnState getYarnTaskOverallStatus(Map<String, YarnState> yarnTaskMap) {
            // Check whether any task is in KILLED state
            boolean hasKilled = yarnTaskMap.values().stream()
                    .anyMatch(state -> state == YarnState.KILLED);

            if (hasKilled) {
                return YarnState.KILLED;
            }

            // Check whether any task is in FAILED state
            boolean hasFailed = yarnTaskMap.values().stream()
                    .anyMatch(state -> state == YarnState.FAILED);

            if (hasFailed) {
                return YarnState.FAILED;
            }


            // Check whether all tasks are in FINISHED state
            boolean allFINISHED = yarnTaskMap.values().stream()
                    .allMatch(state -> state == YarnState.FINISHED);

            if (allFINISHED) {
                return YarnState.FINISHED;
            }

            // Check whether any task is in RUNNING state
            boolean hasRunning = yarnTaskMap.values().stream()
                    .anyMatch(state -> state == YarnState.RUNNING);

            if (hasRunning) {
                return YarnState.RUNNING;
            }

            // Check whether any task is still being submitted
            boolean hasSubmitting = yarnTaskMap.values().stream()
                    .anyMatch(state -> state == YarnState.NEW || state == YarnState.NEW_SAVING
                            || state == YarnState.SUBMITTED || state == YarnState.ACCEPTED);

            if (hasSubmitting) {
                return YarnState.SUBMITTING;
            }

            // None of the above matched, so report UNKNOWN
            return YarnState.UNKNOWN;
        }
    }


    /**
     * cancel application
     *
     * @throws TaskException exception
     */
    @Override
    public void cancelApplication() throws TaskException {
        // cancel process
        try {
            shellCommandExecutor.cancelApplication();
        } catch (Exception e) {
            throw new TaskException("cancel application error", e);
        }
    }

    /**
     * get application ids
     *
     * @return application id list collected from the task log or app info file
     * @throws TaskException if the application ids cannot be collected
     */
    @Override
    public List<String> getApplicationIds() throws TaskException {
        // TODO: check whether appId.collect is configured in common.properties;
        // if so, collect appIds via AOP, otherwise parse them from the task log
        return LogUtils.getAppIds(
                taskRequest.getLogPath(),
                taskRequest.getAppInfoPath(),
                PropertyUtils.getString(APPID_COLLECT, DEFAULT_COLLECT_WAY));
    }

    /** Get the script used to bootstrap the task */
    protected abstract String getScript();

    /** Get the properties of the task used to replace the placeholders in the script. */
    protected abstract Map<String, String> getProperties();

    @Data
    static class YarnTask {
        private String id;
        private String state;
    }

    private String getApplicationUrl(String applicationId) throws BaseException {

        String yarnResourceRmIds = PropertyUtils.getString(YARN_RESOURCEMANAGER_HA_RM_IDS);
        String yarnAppStatusAddress = PropertyUtils.getString(YARN_APPLICATION_STATUS_ADDRESS);
        String hadoopResourceManagerHttpAddressPort =
                PropertyUtils.getString(HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT);

        String appUrl = StringUtils.isEmpty(yarnResourceRmIds) ?
                yarnAppStatusAddress :
                getAppAddress(yarnAppStatusAddress, yarnResourceRmIds);

        if (StringUtils.isBlank(appUrl)) {
            throw new BaseException("yarn application url generation failed");
        }

        log.info("yarn application url:{}", String.format(appUrl, hadoopResourceManagerHttpAddressPort, applicationId));
        return String.format(appUrl, hadoopResourceManagerHttpAddressPort, applicationId);
    }

    private static String getAppAddress(String appAddress, String rmHa) {

        String[] appAddressArr = appAddress.split(Constants.DOUBLE_SLASH);

        if (appAddressArr.length != 2) {
            return null;
        }

        String protocol = appAddressArr[0] + Constants.DOUBLE_SLASH;
        String[] pathSegments = appAddressArr[1].split(Constants.COLON);

        if (pathSegments.length != 2) {
            return null;
        }

        String end = Constants.COLON + pathSegments[1];

        // get active ResourceManager
        String activeRM = YarnHAAdminUtils.getActiveRMName(protocol, rmHa);

        if (StringUtils.isEmpty(activeRM)) {
            return null;
        }

        return protocol + activeRM + end;
    }

    /** yarn ha admin utils */
    private static final class YarnHAAdminUtils {

        /**
         * get active resourcemanager node
         *
         * @param protocol http protocol
         * @param rmIds yarn ha ids
         * @return yarn active node
         */
        public static String getActiveRMName(String protocol, String rmIds) {

            String hadoopResourceManagerHttpAddressPort =
                    PropertyUtils.getString(HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT);

            String[] rmIdArr = rmIds.split(Constants.COMMA);

            String yarnUrl = protocol
                    + "%s:"
                    + hadoopResourceManagerHttpAddressPort
                    + "/ws/v1/cluster/info";
            try {
                // send an HTTP GET request to each RM to find the active one
                for (String rmId : rmIdArr) {
                    String state = getRMState(String.format(yarnUrl, rmId));
                    if (Constants.HADOOP_RM_STATE_ACTIVE.equals(state)) {
                        return rmId;
                    }
                }

            } catch (Exception e) {
                log.error("get yarn ha application url failed", e);
            }
            return null;
        }

        /** get ResourceManager state */
        public static String getRMState(String url) {
            boolean hadoopSecurityAuthStartupState =
                    PropertyUtils.getBoolean(HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE, false);
            String retStr = Boolean.TRUE.equals(hadoopSecurityAuthStartupState)
                    ? KerberosHttpClient.get(url)
                    : HttpUtils.get(url);

            if (StringUtils.isEmpty(retStr)) {
                return null;
            }
            // to json
            ObjectNode jsonObject = JSONUtils.parseObject(retStr);

            // get ResourceManager state
            if (!jsonObject.has("clusterInfo")) {
                return null;
            }
            return jsonObject.get("clusterInfo").path("haState").asText();
        }
    }

    public enum YarnState {
        NEW,
        NEW_SAVING,
        SUBMITTED,
        ACCEPTED,
        RUNNING,
        FINISHED,
        FAILED,
        KILLED,
        SUBMITTING,
        UNKNOWN,
        ;

        // Convert a state string to the corresponding enum value
        public static YarnState of(String state) {
            try {
                return YarnState.valueOf(state);
            } catch (IllegalArgumentException | NullPointerException e) {
                // Invalid or null state strings map to UNKNOWN instead of null,
                // so the state map never contains null values
                return UNKNOWN;
            }
        }

        /**
         * Whether the task has reached a terminal state.
         *
         * @return true if the state is FINISHED, FAILED, or KILLED
         */
        public boolean isFinalState() {
            return this == FINISHED || this == FAILED || this == KILLED;
        }
    }
}
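
For reference, a concrete YARN task then only needs to supply the submission script and its placeholder properties; submit, track, and cancel are all inherited. A minimal, purely hypothetical subclass (DemoYarnTask is not a real DolphinScheduler plugin) could look like this:

import java.util.Collections;
import java.util.Map;

public class DemoYarnTask extends AbstractYarnTask {

    public DemoYarnTask(TaskExecutionContext taskRequest) {
        super(taskRequest);
    }

    @Override
    protected String getScript() {
        // Any command that launches a YARN application works here,
        // e.g. a spark-submit or flink run command.
        return "spark-submit --master yarn --deploy-mode cluster demo.jar";
    }

    @Override
    protected Map<String, String> getProperties() {
        // No placeholder substitution is needed for this demo
        return Collections.emptyMap();
    }
}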

As you can see, the core change is that the handle interface is no longer overridden directly; a YARN task now only needs to implement the two core interfaces submitApplication and trackApplicationStatus. As for cancelApplication, in principle it should delegate to YarnApplicationManager (this has not been integrated yet, but it does not affect the approach).

Displaying the applicationId for Streaming Tasks in the Frontend

dolphinscheduler-ui/src/views/projects/task/instance/use-stream-table.ts

(Screenshot: diff of use-stream-table.ts)

Wrapping the applicationId into a YARN URL on the Backend

Modify the following files:

  • dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TaskInstanceServiceImpl.java
  • dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/constants/Constants.java
  • dolphinscheduler-common/src/main/resources/common.properties
  • dolphinscheduler-storage-plugin/dolphinscheduler-storage-hdfs/src/main/java/org/apache/dolphinscheduler/plugin/storage/hdfs/HdfsStorageOperator.java
  • dolphinscheduler-storage-plugin/dolphinscheduler-storage-hdfs/src/main/java/org/apache/dolphinscheduler/plugin/storage/hdfs/HdfsStorageProperties.java
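
The gist of the backend change is to expand the comma-separated appIds stored on a task instance into clickable YARN web UI links. A rough sketch of the idea (the class below and its hard-coded address are illustrative placeholders, not the actual patch; in the real change the RM web address comes from common.properties):

import java.util.ArrayList;
import java.util.List;

public final class YarnAppUrlHelper {

    // Hard-coded here only to keep the sketch self-contained
    private static final String YARN_WEB_UI = "http://rm-host:8088";

    public static List<String> toYarnUrls(String appIds) {
        List<String> urls = new ArrayList<>();
        if (appIds == null || appIds.isEmpty()) {
            return urls;
        }
        for (String appId : appIds.split(",")) {
            if (!appId.trim().isEmpty()) {
                // The YARN web UI serves each application at /cluster/app/<appId>
                urls.add(YARN_WEB_UI + "/cluster/app/" + appId.trim());
            }
        }
        return urls;
    }
}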

The resulting page looks like this:

(Screenshot: task instance page with YARN application links)

Note: the URL pasting in the UI has to be written by yourself; the code above does not include it.

Issue Tracking

There is still a problem here. The YARN state has three terminal values: FINISHED, FAILED, and KILLED. However, a FINISHED application also carries a FinalStatus, and finishing does not necessarily mean success: under FINISHED, the FinalStatus can itself be SUCCEEDED, FAILED, or KILLED. In other words, FINISHED alone cannot be used as the DolphinScheduler terminal state; the FinalStatus needs to be checked as well.

org.apache.dolphinscheduler.plugin.task.api.AbstractYarnTask#handleFinalState

private void handleFinalState(YarnState yarnState) {
    switch (yarnState) {
        case FINISHED:
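            // problem: FINISHED is mapped straight to success, ignoring the FinalStatus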
            setExitStatusCode(EXIT_CODE_SUCCESS);
            break;
        case KILLED:
            setExitStatusCode(EXIT_CODE_KILL);
            break;
        default:
            setExitStatusCode(EXIT_CODE_FAILURE);
            break;
    }
}
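
One possible direction for a fix is to also parse the finalStatus field that YARN returns in the same /ws/v1/cluster/apps/{appid} response (SUCCEEDED, FAILED, KILLED, or UNDEFINED) and consult it for FINISHED applications. The sketch below assumes finalStatus has been added to the YarnTask DTO and threaded through to this method; it is not part of the code above:

private void handleFinalState(YarnState yarnState, String finalStatus) {
    switch (yarnState) {
        case FINISHED:
            // FINISHED only means the application ended;
            // success is determined by its finalStatus.
            if ("SUCCEEDED".equals(finalStatus)) {
                setExitStatusCode(EXIT_CODE_SUCCESS);
            } else if ("KILLED".equals(finalStatus)) {
                setExitStatusCode(EXIT_CODE_KILL);
            } else {
                setExitStatusCode(EXIT_CODE_FAILURE);
            }
            break;
        case KILLED:
            setExitStatusCode(EXIT_CODE_KILL);
            break;
        default:
            setExitStatusCode(EXIT_CODE_FAILURE);
            break;
    }
}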

Killing a Task via HTTP

curl -X PUT -d '{"state":"KILLED"}' \
    -H "Content-Type: application/json" \
    "http://xx.xx.xx.xx:8088/ws/v1/cluster/apps/application_1694766249884_1098/state?user.name=hdfs"

Note: be sure to specify user.name, otherwise the application may not actually be killed.
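
The same REST call can also be issued from Java. A self-contained sketch using only the JDK (host and application id are placeholders, as in the curl example):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class YarnKillExample {

    public static void main(String[] args) throws Exception {
        String url = "http://xx.xx.xx.xx:8088/ws/v1/cluster/apps/"
                + "application_1694766249884_1098/state?user.name=hdfs";
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            // Same JSON body as the curl example above
            os.write("{\"state\":\"KILLED\"}".getBytes(StandardCharsets.UTF_8));
        }
        // 200 once the transition is done; 202 while it is still in progress
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}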

Original article: https://segmentfault.com/a/1190000045058893

Publication of this article is supported by 白鲸开源 (WhaleOps)!
