Abstract
The flexible job shop scheduling problem (FJSP) is a challenging combinatorial optimization problem in manufacturing systems. Existing intelligent optimization algorithms for FJSP often struggle to tune key parameters and genetic operations efficiently, which compromises the optimality of the obtained solutions. To address the tendency of genetic algorithms (GA) to converge slowly and become trapped in local optima, this paper proposes a deep reinforcement learning-assisted adaptive genetic algorithm (DRL-A-GA) for solving FJSP. In the proposed algorithm, continuous state vectors represent the population state of the GA, and four FJSP-specific mutation operations are designed as actions. Deep reinforcement learning is employed to adaptively tune the key parameters of the GA and to dynamically select appropriate genetic operations. To validate the performance of DRL-A-GA, three sets of benchmark instances are selected for testing, and the results are compared with those of classical optimization algorithms and hybrid algorithms. The experimental results demonstrate that the proposed DRL-A-GA significantly outperforms both traditional optimization and intelligent hybrid optimization algorithms on FJSP, effectively improving solution quality and accelerating convergence.
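The interaction loop described in the abstract can be illustrated with a minimal sketch: an agent observes a continuous state vector summarizing the GA population, selects one of four mutation operators, adaptively sets a mutation rate, and is rewarded by the improvement in best fitness. Everything below is a hypothetical stand-in, assuming details the abstract does not give: `evaluate_makespan` is a surrogate for a real FJSP decoder, the four operators are illustrative, and a linear Q-approximator replaces the paper's deep network.

```python
# Hypothetical sketch of a DRL-assisted adaptive GA loop; NOT the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
N_JOBS, N_MACHINES, POP_SIZE, N_ACTIONS = 6, 4, 30, 4  # four mutation operations as actions


def evaluate_makespan(chromosome):
    """Placeholder fitness: a real FJSP decoder would schedule operations on machines."""
    return float(np.sum(chromosome))  # hypothetical surrogate for the makespan


def population_state(fitness):
    """Continuous state vector summarizing the population (mean, best, spread, bias)."""
    return np.array([fitness.mean(), fitness.min(), fitness.std(), 1.0])


def mutate(chromosome, action, rate):
    """Four illustrative mutation operators indexed by the agent's action."""
    c = chromosome.copy()
    mask = rng.random(c.size) < rate
    if action == 0:      # swap two genes
        i, j = rng.choice(c.size, 2, replace=False)
        c[i], c[j] = c[j], c[i]
    elif action == 1:    # reset masked genes to random machine assignments
        c[mask] = rng.integers(0, N_MACHINES, mask.sum())
    elif action == 2:    # perturb masked genes by +/-1
        c[mask] = np.clip(c[mask] + rng.integers(-1, 2, mask.sum()), 0, N_MACHINES - 1)
    else:                # reverse a random segment
        i, j = sorted(rng.choice(c.size, 2, replace=False))
        c[i:j] = c[i:j][::-1].copy()
    return c


W = np.zeros((N_ACTIONS, 4))          # linear Q-approximator standing in for the deep network
alpha, gamma, eps = 0.01, 0.9, 0.2    # learning rate, discount, exploration rate

pop = rng.integers(0, N_MACHINES, (POP_SIZE, N_JOBS * N_MACHINES))
fit = np.array([evaluate_makespan(c) for c in pop])

for gen in range(50):
    s = population_state(fit)
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(W @ s))
    rate = 0.05 + 0.25 * (fit.std() / (fit.mean() + 1e-9))   # adaptively tuned GA parameter
    children = np.array([mutate(c, a, rate) for c in pop])
    child_fit = np.array([evaluate_makespan(c) for c in children])
    better = child_fit < fit                                  # elitist replacement
    pop[better], fit[better] = children[better], child_fit[better]
    s2 = population_state(fit)
    reward = float(s[1] - s2[1])                              # improvement in best makespan
    td_target = reward + gamma * np.max(W @ s2)
    W[a] += alpha * (td_target - W[a] @ s) * s                # TD(0) update of the action value

print("best surrogate makespan:", fit.min())
```

In the paper, the DRL component would additionally tune crossover/mutation probabilities and learn from richer population features; the sketch only conveys the state-action-reward structure implied by the abstract.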
Original language | English |
---|---|
Article number | 110447 |
Journal | Engineering Applications of Artificial Intelligence |
Volume | 149 |
DOIs | |
State | Published - Jun 1 2025 |
Scopus Subject Areas
- Control and Systems Engineering
- Artificial Intelligence
- Electrical and Electronic Engineering
Keywords
- Deep reinforcement learning
- Flexible job shop scheduling problem
- Genetic algorithm
- Mutation operations
- Parameter configuration