Weighted Policy Learning Based Control for Two-Tank Level System

Authors

  • Mostafa D. Awheda, Department of Control Engineering, College of Electronic Technology, Bani Walid, Libya
  • Saad A. Abobakr, Department of Control Engineering, College of Electronic Technology, Bani Walid, Libya

Keywords

Reinforcement Learning, Weighted Policy Learning, Two-Tank Level Control

Abstract

Reinforcement learning (RL) is a model-free framework in which agents learn control policies through trial-and-error interaction with the environment. Classical controllers such as PID, LQR, and MPC offer stability guarantees and interpretability, but they rely on accurate models and can become suboptimal when the system dynamics are nonlinear and uncertain. In this work, we address the two-tank fluid level control benchmark, characterized by strong coupling between the tanks and nonlinear outflows, using a multi-agent RL formulation based on the Weighted Policy Learning (WPL) algorithm. The level of the second tank is regulated through WPL's policy-gradient weighting, which promotes smooth convergence under non-stationarity. Simulation results demonstrate rapid setpoint tracking, minimal overshoot, and improved disturbance robustness, confirming the effectiveness and novelty of applying WPL to process control problems.
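
To illustrate the policy-gradient weighting mentioned above, the following is a minimal sketch of a WPL-style update for a discrete action set (e.g. quantized pump commands). It is not the authors' implementation; function and parameter names such as `wpl_update`, `project_to_simplex`, and `learning_rate` are illustrative assumptions. The key idea is that each action's advantage is scaled by (1 − π(a)) when the action is better than the policy's expected value and by π(a) when it is worse, which slows updates near the probability simplex boundary and smooths convergence under non-stationarity.

```python
import numpy as np

def project_to_simplex(p, eps=1e-3):
    """Clip and renormalize so the policy remains a valid distribution."""
    p = np.clip(p, eps, 1.0)
    return p / p.sum()

def wpl_update(policy, q_values, learning_rate=0.05):
    """One WPL-style policy-gradient step (illustrative sketch)."""
    value = np.dot(policy, q_values)           # expected value under the current policy
    advantage = q_values - value               # per-action advantage
    # Weight the gradient: (1 - pi(a)) for improving actions, pi(a) for worsening ones
    weights = np.where(advantage > 0, 1.0 - policy, policy)
    new_policy = policy + learning_rate * weights * advantage
    return project_to_simplex(new_policy)

# Example: three discrete inflow levels for the pump feeding the first tank
policy = np.array([1/3, 1/3, 1/3])
q_values = np.array([0.2, 0.8, 0.5])           # e.g. estimated returns from tracking error costs
policy = wpl_update(policy, q_values)
print(policy)
```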

Published

2025-03-27

How to Cite

Mostafa D. Awheda, & Saad A. Abobakr. (2025). Weighted Policy Learning Based Control for Two-Tank Level System. African Journal of Advanced Pure and Applied Sciences (AJAPAS), 4(1), 513–519. Retrieved from https://aaasjournals.com/index.php/ajapas/article/view/1265