ISSN: 0256-1115 (print version) ISSN: 1975-7220 (electronic version)
Copyright © 2025 KICHE. All rights reserved


Conflict of Interest
In relation to this article, we declare that there is no conflict of interest.
Publication history
Received November 27, 2024
Revised January 7, 2025
Accepted February 21, 2025
Available online July 25, 2025
This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.


Deep Reinforcement Learning-Based Optimization Framework with Continuous Action Space for LNG Liquefaction Processes

Department of Chemical and Biological Engineering, Sookmyung Women’s University; 1Institute of Advanced Materials and Systems, Sookmyung Women’s University
ktpark@sm.ac.kr
Korean Journal of Chemical Engineering, July 2025, 42(8), 000042
https://doi.org/10.1007/s11814-025-00428-x

Abstract

Recently, the application of reinforcement learning in process systems engineering has attracted significant attention. However, the optimization of chemical processes using this approach faces various challenges related to performance and stability. This paper presents a process optimization framework using a continuous advantage actor–critic algorithm, modified from the existing advantage actor–critic algorithm by incorporating a normal distribution for action sampling in a continuous space. The proposed reinforcement learning-based optimization framework was found to outperform the conventional method in optimizing a single mixed refrigerant process with 10 variables, achieving a lower specific energy consumption of 0.294 kWh/kg compared with the 0.307 kWh/kg obtained using the genetic algorithm. Parametric studies on the hyperparameters of the continuous advantage actor–critic algorithm, including the maximum number of episodes, the learning rate, the maximum action value, and the neural network structures, are presented to investigate their impacts on the optimization performance. The optimal specific energy consumption of 0.287 kWh/kg was achieved by varying the learning rate from the base case to 0.00005. These results demonstrate that reinforcement learning can be effectively applied to the optimization of chemical processes.
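
The key modification described in the abstract, replacing the discrete policy of a standard advantage actor–critic with a normal distribution over a continuous action space, can be sketched as follows. This is a minimal illustration in PyTorch; the network sizes, variable names, and the one-step update are assumptions for demonstration only, not the authors' implementation or the coupling to the single mixed refrigerant process simulator.

```python
# Minimal sketch of a continuous advantage actor-critic (A2C) update with
# Gaussian action sampling. All dimensions and hyperparameters are illustrative.
import torch
import torch.nn as nn

class GaussianActorCritic(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        # Actor outputs the mean of a normal distribution over actions;
        # the log standard deviation is a learned, state-independent parameter.
        self.actor = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))
        # Critic estimates the state value V(s).
        self.critic = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                    nn.Linear(hidden, 1))

    def dist(self, obs):
        mean = self.actor(obs)
        return torch.distributions.Normal(mean, self.log_std.exp())

def a2c_update(model, optimizer, obs, action, reward, next_obs, done, gamma=0.99):
    """One-step advantage actor-critic update for a continuous action."""
    value = model.critic(obs).squeeze(-1)
    with torch.no_grad():
        next_value = model.critic(next_obs).squeeze(-1)
        target = reward + gamma * (1.0 - done) * next_value
    advantage = target - value

    dist = model.dist(obs)
    log_prob = dist.log_prob(action).sum(-1)           # sum over action dimensions
    actor_loss = -(log_prob * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()
    loss = actor_loss + 0.5 * critic_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a process-optimization setting such as the one studied here, the observation would encode the current operating point (e.g., the 10 decision variables), the sampled action would perturb those variables within bounds, and the reward would be derived from the simulated specific energy consumption.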
