A practically implementable reinforcement learning‐based process controller design (Journal Article)

abstract

  • The present article enables reinforcement learning (RL)‐based controllers for process control applications. Existing RL‐based solutions pose significant challenges for online implementation, since training an RL agent (controller) currently requires a practically impossible number of online interactions between the agent and the environment (the process). To address this challenge, we propose an implementable model‐free RL method that leverages industrially implemented model predictive control (MPC) calculations (often designed using a simple linear model identified via step tests). In the first step, the MPC calculations are used to pretrain an RL agent that mimics the MPC's performance: specifically, the MPC control moves are used to pretrain the actor, and the MPC objective function is used to pretrain the critic(s). The pretrained RL agent is then employed within a model‐free RL framework to control the process in a way that initially imitates MPC behavior (thus compromising neither process performance nor safety) but continuously learns and improves its performance over the nominal linear MPC. The effectiveness of the proposed approach is illustrated through simulations on a chemical reactor example.
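The actor-pretraining step described above can be sketched as behavior cloning against MPC demonstrations. The snippet below is a minimal illustration only, not the authors' implementation: it assumes the nominal linear MPC reduces to a state-feedback gain `K` (as for an unconstrained linear MPC) and fits a linear actor to logged (state, MPC action) pairs by least squares; all variable names and the data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed nominal MPC law u = -K x (unconstrained linear MPC collapses to
# a static gain); K here is an arbitrary illustrative value.
K = np.array([[1.5, -0.4]])

# Logged process states and the MPC "demonstration" actions at those states.
states = rng.normal(size=(200, 2))
mpc_actions = states @ (-K.T)

# Actor: a single linear layer u = x @ W, pretrained by least-squares
# regression (behavior cloning) to reproduce the MPC actions.
W, *_ = np.linalg.lstsq(states, mpc_actions, rcond=None)

actor_actions = states @ W
pretrain_error = np.max(np.abs(actor_actions - mpc_actions))
print(f"max imitation error: {pretrain_error:.2e}")
```

In the paper's scheme, this pretrained actor would then be handed to a model-free RL algorithm and refined online, so the controller starts at MPC-level performance rather than from a random policy.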

publication date

  • January 2024