TY - JOUR
T1 - Distributed Flexible Job Shop Scheduling With Heterogeneous Transportation Resources Constraints via Deep Reinforcement Learning and Graph Neural Network
AU - Zhu, Kaikai
AU - Li, Xiaobin
AU - Jiang, Pei
AU - Cheng, Min
AU - Wu, Yuanqing
AU - Gao, Kaizhou
AU - Ren, Lei
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2026
Y1 - 2026
N2 - The distributed flexible job shop scheduling problem (DFJSP) has emerged as a critical challenge in the field of scheduling optimization due to its intricate resource allocation and the demand for production–logistics collaboration across multiple factories. However, most existing studies related to DFJSP focus only on the production and transportation of jobs within a single factory, neglecting cross-factory logistics and the heterogeneous characteristics of transportation resources. Therefore, this article first investigates the distributed flexible job shop scheduling problem with heterogeneous transportation (DFJSPHT) resource constraints and proposes an end-to-end deep reinforcement learning (DRL) scheduling method to minimize the makespan. An innovative heterogeneous disjunctive graph model is constructed to uniformly represent the states of factories, machines, operations, and transportation resources in DFJSPHT, and the scheduling process is modeled as a Markov decision process (MDP). Next, a resource release strategy is developed to enhance the efficiency of transportation resources. To enhance the feature expression ability of the model, a graph neural network (GNN) is employed to capture the problem characteristics, and the policy network is trained using proximal policy optimization. Comparative experiments conducted on synthetic and benchmark instances demonstrate that the proposed method outperforms classical priority scheduling rules and two popular DRL-based scheduling methods in solving DFJSPHT, with performance improvements exceeding 10% in most instances.
AB - The distributed flexible job shop scheduling problem (DFJSP) has emerged as a critical challenge in the field of scheduling optimization due to its intricate resource allocation and the demand for production–logistics collaboration across multiple factories. However, most existing studies related to DFJSP focus only on the production and transportation of jobs within a single factory, neglecting cross-factory logistics and the heterogeneous characteristics of transportation resources. Therefore, this article first investigates the distributed flexible job shop scheduling problem with heterogeneous transportation (DFJSPHT) resource constraints and proposes an end-to-end deep reinforcement learning (DRL) scheduling method to minimize the makespan. An innovative heterogeneous disjunctive graph model is constructed to uniformly represent the states of factories, machines, operations, and transportation resources in DFJSPHT, and the scheduling process is modeled as a Markov decision process (MDP). Next, a resource release strategy is developed to enhance the efficiency of transportation resources. To enhance the feature expression ability of the model, a graph neural network (GNN) is employed to capture the problem characteristics, and the policy network is trained using proximal policy optimization. Comparative experiments conducted on synthetic and benchmark instances demonstrate that the proposed method outperforms classical priority scheduling rules and two popular DRL-based scheduling methods in solving DFJSPHT, with performance improvements exceeding 10% in most instances.
KW - Deep reinforcement learning (DRL)
KW - distributed flexible job shop
KW - graph neural network (GNN)
KW - heterogeneous disjunctive graph
KW - heterogeneous transportation resource constraints
UR - https://www.scopus.com/pages/publications/105029594438
U2 - 10.1109/TSMC.2026.3656196
DO - 10.1109/TSMC.2026.3656196
M3 - Article
AN - SCOPUS:105029594438
SN - 2168-2216
JO - IEEE Transactions on Systems, Man, and Cybernetics: Systems
JF - IEEE Transactions on Systems, Man, and Cybernetics: Systems
ER -