Learn how it handles RPC, task plugins, fault tolerance & scheduling. Perfect for devs looking to optimize or extend their workflows!

Dissecting the Master Server: How DolphinScheduler Powers Workflow Scheduling

2025/09/26 13:03

In modern data-driven enterprises, a workflow scheduling system is the "central nervous system" of the data pipeline. From ETL tasks to machine learning training, from report generation to real-time monitoring, almost all critical business processes rely on a stable, efficient, and scalable scheduling engine.

The author considers Apache DolphinScheduler 3.1.9 a stable and widely used release, so this article focuses on that version. It walks through the Master service startup process, digging into the core source code, architectural design, module division, and key implementation mechanisms. The goal is to help developers understand how the Master works and to lay a foundation for secondary development or performance optimization.

This series of articles is divided into three parts: the Master Server startup process, the Worker server startup process, and related process diagrams. This is the first part.

1. Core Overview of Master Server Startup

  • Code Entry: org.apache.dolphinscheduler.server.master.MasterServer#run
```java
public void run() throws SchedulerException {
    // 1. Initialize rpc server
    this.masterRPCServer.start();

    // 2. Install task plugin
    this.taskPluginManager.loadPlugin();

    // 3. Self-tolerant (master registration)
    this.masterRegistryClient.start();
    this.masterRegistryClient.setRegistryStoppable(this);

    // 4. Master scheduling
    this.masterSchedulerBootstrap.init();
    this.masterSchedulerBootstrap.start();

    // 5. Event execution service
    this.eventExecuteService.start();

    // 6. Fault tolerance mechanism
    this.failoverExecuteThread.start();

    // 7. Quartz scheduling
    this.schedulerApi.start();
    ...
}
```

1.1 RPC Startup:

  • Description: Registers processors for relevant commands, such as task execution, task execution results, task termination, etc.
  • Code Entry: org.apache.dolphinscheduler.server.master.rpc.MasterRPCServer#start
```java
public void start() {
    ...
    // Task execution request processor
    this.nettyRemotingServer.registerProcessor(CommandType.TASK_EXECUTE_RUNNING, taskExecuteRunningProcessor);
    // Task execution result request processor
    this.nettyRemotingServer.registerProcessor(CommandType.TASK_EXECUTE_RESULT, taskExecuteResponseProcessor);
    // Task termination request processor
    this.nettyRemotingServer.registerProcessor(CommandType.TASK_KILL_RESPONSE, taskKillResponseProcessor);
    ...
    this.nettyRemotingServer.start();
    logger.info("Started Master RPC Server...");
}
```
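The pattern above is a dispatch table: each `CommandType` is mapped to a dedicated processor, and the Netty server routes incoming commands by type. The following is a minimal sketch of that pattern, not the real DolphinScheduler API; `ProcessorRegistry` and its nested `CommandType` enum are illustrative stand-ins for `NettyRemotingServer` and the actual command constants.

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch: map command types to handlers, mirroring how
// MasterRPCServer wires CommandType constants to request processors.
public class ProcessorRegistry {
    // Stand-in for DolphinScheduler's CommandType enum
    public enum CommandType { TASK_EXECUTE_RUNNING, TASK_EXECUTE_RESULT, TASK_KILL_RESPONSE }

    private final Map<CommandType, Consumer<String>> processors = new EnumMap<>(CommandType.class);

    public void registerProcessor(CommandType type, Consumer<String> processor) {
        processors.put(type, processor);
    }

    // Dispatch an incoming payload to the handler registered for its type
    public boolean dispatch(CommandType type, String payload) {
        Consumer<String> processor = processors.get(type);
        if (processor == null) {
            return false; // no processor registered for this command
        }
        processor.accept(payload);
        return true;
    }
}
```

The benefit of this design is that adding a new command only requires registering one more processor; the transport layer never needs to know what each command means.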

1.2 Task Plugin Initialization:

  • Description: Loads the task plugins, which provide task-template operations such as creating tasks, parsing task parameters, and retrieving task resource information. The plugins must be registered on the API, Master, and Worker sides; on the Master, their role is to populate data source and UDF information.
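To make the plugin idea concrete, here is a hedged sketch of a task-plugin registry. The real `TaskPluginManager` discovers `TaskChannelFactory` implementations via Java SPI; this illustration registers factories by task type explicitly, and the `TaskChannelFactory` interface shown here is a simplified stand-in, not the actual DolphinScheduler contract.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of a task-plugin registry keyed by task type.
public class TaskPluginManager {
    // Hypothetical plugin contract: each plugin knows its task type
    // and how to parse that type's raw parameters.
    public interface TaskChannelFactory {
        String getName();                    // task type, e.g. "SHELL"
        Object parseParameters(String json); // build typed task parameters
    }

    private final Map<String, TaskChannelFactory> plugins = new HashMap<>();

    // In the real code, loadPlugin() discovers factories via SPI;
    // here the caller registers them directly.
    public void loadPlugin(TaskChannelFactory factory) {
        plugins.put(factory.getName(), factory);
    }

    public TaskChannelFactory getPlugin(String taskType) {
        return plugins.get(taskType);
    }
}
```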

1.3 Master Registration (Self Fault Tolerance):

  • Description: Registers the master’s information to the registry (using Zookeeper as an example), and listens for changes in the registration of itself, other masters, and all worker nodes to perform fault-tolerant processing.
  • Code Entry: org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient#start
```java
public void start() {
    try {
        this.masterHeartBeatTask = new MasterHeartBeatTask(masterConfig, registryClient);
        // 1. Register itself to the registry
        registry();
        // 2. Listen to the connection state with the registry
        registryClient.addConnectionStateListener(
                new MasterConnectionStateListener(masterConfig, registryClient, masterConnectStrategy));
        // 3. Listen to the status of other masters and workers in the registry and perform fault-tolerant work
        registryClient.subscribe(REGISTRY_DOLPHINSCHEDULER_NODE, new MasterRegistryDataListener());
    } catch (Exception e) {
        throw new RegistryException("Master registry client start up error", e);
    }
}
```
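The `MasterHeartBeatTask` created above periodically refreshes this master's registry entry so that other nodes (and the failover thread) can detect whether it is still alive. Below is a minimal sketch of that heartbeat pattern under assumed names: the `Registry` interface here is a hypothetical stand-in for the ZooKeeper-backed registry client, not the real API.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch of the heartbeat pattern behind MasterHeartBeatTask.
public class HeartBeatTask {
    // Hypothetical stand-in for the registry client
    public interface Registry { void put(String path, String value); }

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // One heartbeat: refresh this master's registry entry with a fresh timestamp
    public void beat(Registry registry, String nodePath) {
        registry.put(nodePath, "heartbeat:" + System.currentTimeMillis());
    }

    // Schedule beat() at a fixed interval, mirroring the real task's lifecycle
    public ScheduledFuture<?> start(Registry registry, String nodePath, long intervalMillis) {
        return scheduler.scheduleAtFixedRate(() -> beat(registry, nodePath),
                0, intervalMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

In the real system the registry entry is an ephemeral ZooKeeper node, so a crashed master's entry disappears automatically once its session expires, which is what triggers the fault-tolerance listeners.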

1.4 Master Scheduling:

  • Description: A scanning thread that periodically checks the command table in the database and performs different operations based on command types. This is the core logic for workflow startup, instance fault tolerance, etc.
  • Code Entry: org.apache.dolphinscheduler.server.master.runner.MasterSchedulerBootstrap#run
```java
public void run() {
    while (!ServerLifeCycleManager.isStopped()) {
        try {
            // If the server is not in running status, it cannot consume commands
            if (!ServerLifeCycleManager.isRunning()) {
                logger.warn("The current server {} is not at running status, cannot consume commands.", this.masterAddress);
                Thread.sleep(Constants.SLEEP_TIME_MILLIS);
            }

            // Handle workload overload (CPU/memory)
            boolean isOverload = OSUtils.isOverload(masterConfig.getMaxCpuLoadAvg(), masterConfig.getReservedMemory());
            if (isOverload) {
                logger.warn("The current server {} is overload, cannot consume commands.", this.masterAddress);
                MasterServerMetrics.incMasterOverload();
                Thread.sleep(Constants.SLEEP_TIME_MILLIS);
                continue;
            }

            // Get commands from the database
            List<Command> commands = findCommands();
            if (CollectionUtils.isEmpty(commands)) {
                Thread.sleep(Constants.SLEEP_TIME_MILLIS);
                continue;
            }

            // Convert commands to process instances and handle the workflow logic
            List<ProcessInstance> processInstances = command2ProcessInstance(commands);
            if (CollectionUtils.isEmpty(processInstances)) {
                Thread.sleep(Constants.SLEEP_TIME_MILLIS);
                continue;
            }

            // Handle workflow instance execution
            processInstances.forEach(processInstance -> {
                try {
                    LoggerUtils.setWorkflowInstanceIdMDC(processInstance.getId());
                    WorkflowExecuteRunnable workflowRunnable = new WorkflowExecuteRunnable(processInstance, ...);
                    processInstanceExecCacheManager.cache(processInstance.getId(), workflowRunnable);
                    workflowEventQueue.addEvent(new WorkflowEvent(WorkflowEventType.START_WORKFLOW, processInstance.getId()));
                } finally {
                    LoggerUtils.removeWorkflowInstanceIdMDC();
                }
            });
        } catch (InterruptedException interruptedException) {
            logger.warn("Master schedule bootstrap interrupted, close the loop", interruptedException);
            Thread.currentThread().interrupt();
            break;
        } catch (Exception e) {
            logger.error("Master schedule workflow error", e);
            ThreadUtils.sleep(Constants.SLEEP_TIME_MILLIS);
        }
    }
}
```
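Stripped of the framework details, the loop above is a poll-with-backoff consumer: skip a round when overloaded, drain a batch of commands otherwise, and turn each command into a workflow instance. The sketch below captures just that control flow; `CommandConsumer`, `submit`, and `consumeRound` are illustrative names, and an in-memory queue stands in for the database command table.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch of the consume loop in MasterSchedulerBootstrap, with the
// DB command table replaced by an in-memory queue for illustration.
public class CommandConsumer {
    private final Queue<String> commandTable = new ArrayDeque<>();

    public void submit(String command) {
        commandTable.add(command);
    }

    // One scheduling round: returns the "process instances" created.
    // When overloaded, nothing is consumed and the commands stay
    // queued for a later round (the real loop sleeps and continues).
    public List<String> consumeRound(boolean overloaded, int batchSize) {
        List<String> instances = new ArrayList<>();
        if (overloaded) {
            return instances; // back off; leave commands in the table
        }
        for (int i = 0; i < batchSize && !commandTable.isEmpty(); i++) {
            instances.add("instance-of-" + commandTable.poll());
        }
        return instances;
    }
}
```

The key design point is that backing off under overload does not lose work: commands remain in the table (the database, in the real system) until some master is healthy enough to consume them.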

1.5 Event Execution Service:

  • Description: Responsible for polling the event queue of the workflow instance. Events such as workflow submission failures or task state changes are handled here.
  • Code Entry: org.apache.dolphinscheduler.server.master.runner.EventExecuteService#run
```java
public void run() {
    while (!ServerLifeCycleManager.isStopped()) {
        try {
            // Handle workflow execution events
            workflowEventHandler();
            // Handle stream task execution events
            streamTaskEventHandler();
            TimeUnit.MILLISECONDS.sleep(Constants.SLEEP_TIME_MILLIS_SHORT);
        } ...
    }
}
```
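The event queue this service drains is fed by the scheduler thread (the `workflowEventQueue.addEvent(...)` call in section 1.4). A minimal sketch of that producer/consumer handoff, using a plain `BlockingQueue` and string events in place of the real `WorkflowEvent` type:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the event handoff behind EventExecuteService: the scheduler
// thread enqueues events; the event service drains and handles them.
public class WorkflowEventQueue {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Producer side (scheduler thread)
    public void addEvent(String event) {
        queue.add(event);
    }

    // Consumer side (event service): drain whatever is queued without
    // blocking; a real handler would dispatch each event to its
    // workflow's execute runnable.
    public List<String> pollAll() {
        List<String> drained = new ArrayList<>();
        queue.drainTo(drained);
        return drained;
    }
}
```

Decoupling command consumption from event handling this way keeps the scheduler loop fast: it only enqueues work, while the slower state-machine transitions happen on the event thread.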

1.6 Fault Tolerance Mechanism:

  • Description: Responsible for Master and Worker fault tolerance.
  • Code Entry: org.apache.dolphinscheduler.server.master.service.MasterFailoverService#checkMasterFailover
```java
public void checkMasterFailover() {
    List<String> needFailoverMasterHosts = processService.queryNeedFailoverProcessInstanceHost()
            .stream()
            .filter(host -> localAddress.equals(host) || !registryClient.checkNodeExists(host, NodeType.MASTER))
            .distinct()
            .collect(Collectors.toList());
    if (CollectionUtils.isEmpty(needFailoverMasterHosts)) {
        return;
    }

    for (String needFailoverMasterHost : needFailoverMasterHosts) {
        failoverMaster(needFailoverMasterHost);
    }
}
```
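The filter condition is the heart of this method: an instance's owner needs failover either because it is this very node (which just restarted) or because its host has vanished from the registry (a crash). The sketch below isolates that predicate as a pure function, with the registry lookup replaced by an in-memory set of alive masters; the method name and parameters are illustrative.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of the filter in checkMasterFailover, as a pure function.
public class FailoverCheck {
    // instanceHosts: owner hosts of workflow instances marked as needing failover
    // localAddress:  this master's own address (restart case)
    // aliveMasters:  master hosts currently present in the registry (crash case)
    public static List<String> needFailoverHosts(List<String> instanceHosts,
                                                 String localAddress,
                                                 Set<String> aliveMasters) {
        return instanceHosts.stream()
                .filter(host -> localAddress.equals(host) || !aliveMasters.contains(host))
                .distinct()
                .collect(Collectors.toList());
    }
}
```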

Conclusion:

The article provides an in-depth look at the Apache DolphinScheduler 3.1.9 Master service startup process, fault tolerance mechanisms, and the overall architecture. Further articles will explore the Worker startup process and interactions between Master and Worker.
