Microgrid control for renewable energy sources based on deep reinforcement learning and numerical optimization approaches
DOI: https://doi.org/10.21638/11701/spbu10.2023.307

Abstract
Optimal scheduling of a battery energy storage system plays a crucial part in a distributed energy system. As a data-driven method, deep reinforcement learning requires no knowledge of the system dynamics and can produce optimal solutions to nonlinear optimization problems. In this research, the financial cost of energy consumption is reduced by scheduling battery energy with a deep reinforcement learning (RL) method. Reinforcement learning can adapt to changes in equipment parameters and to noise in the data, whereas mixed-integer linear programming (MILP) requires highly accurate forecasts of power generation and demand, as well as accurate equipment parameters, to perform well, and carries a high computational cost in large-scale industrial applications. It can therefore be expected that a deep RL based solution is capable of outperforming the classic deterministic optimization model MILP. This study compares four state-of-the-art RL algorithms on the battery power plant control problem: PPO, A2C, SAC, and TD3. According to the simulation results, TD3 performs best, outperforming MILP by 5% in cost savings while reducing the time to solve the problem by about a factor of three.
Keywords:
reinforcement learning, energy management system, distributed energy system, numerical optimization
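To make the setup described in the abstract concrete, the sketch below defines a toy battery-scheduling environment in the Gym interface and trains TD3, the best-performing algorithm in the study. This is a minimal illustration, not the authors' implementation: the use of gymnasium and stable-baselines3, the synthetic PV, demand, and price profiles, and all plant parameters (capacity, power limit, efficiency) are assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the paper's code): a Gym-style battery
# environment where the action is charge/discharge power and the reward is
# the negative cost of energy imported from the grid.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3


class BatteryEnv(gym.Env):
    """Toy microgrid: synthetic PV output, demand, grid prices, one battery."""

    def __init__(self, horizon=24):
        super().__init__()
        self.horizon = horizon                 # hours per episode
        self.capacity = 100.0                  # battery capacity, kWh (assumed)
        self.max_power = 25.0                  # charge/discharge limit, kW (assumed)
        self.efficiency = 0.95                 # charging efficiency (assumed)
        # Action: battery power in [-1, 1] * max_power (negative = discharge).
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        # Observation: [state of charge, PV output, demand, buy price, hour].
        self.observation_space = spaces.Box(0.0, np.inf, shape=(5,), dtype=np.float32)

    def _obs(self):
        hour = self.t % 24
        pv = max(0.0, 30.0 * np.sin(np.pi * (hour - 6) / 12))  # synthetic PV, kW
        demand = 20.0 + 10.0 * (8 <= hour <= 20)               # synthetic demand, kW
        price = 0.30 if 17 <= hour <= 21 else 0.12             # peak vs off-peak price
        return np.array([self.soc, pv, demand, price, hour], dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.soc = 0.5 * self.capacity         # start half charged
        return self._obs(), {}

    def step(self, action):
        soc, pv, demand, price, _ = self._obs()
        power = float(action[0]) * self.max_power                 # requested power, kW
        power = float(np.clip(power, -soc, self.capacity - soc))  # respect SoC limits
        self.soc = soc + (power * self.efficiency if power > 0 else power)
        grid = demand - pv + power             # net import from the grid, kW
        cost = max(grid, 0.0) * price          # pay only for imported energy
        self.t += 1
        terminated = self.t >= self.horizon
        return self._obs(), -cost, terminated, False, {}


# Train TD3 on the toy environment; PPO, A2C, and SAC can be swapped in
# the same way for the comparison the study describes.
model = TD3("MlpPolicy", BatteryEnv(), verbose=0)
model.learn(total_timesteps=50_000)
```

TD3 fits this problem shape well: the charge/discharge action is continuous, and its twin critics and delayed policy updates reduce the value overestimation that can destabilize training on such tasks.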