The Academic Perspective Procedia publishes Academic Platform symposiums papers as three volumes in a year. DOI number is given to all of our papers.
Publisher : Academic Perspective
Journal DOI : 10.33793/acperpro
Journal eISSN : 2667-5862
[1] K. Sundareswaran and V. T. Sreedevi, “Boost converter controller design using queen-bee-assisted GA,” IEEE Transactions on industrial electronics, vol. 56, no. 3, pp. 778–783, 2008.
[2] M. H. Rashid, Power Electronics circuits, devices, and applications. Dorling Kindersley, 2004.
[3] R. P. Borase, D. K. Maghade, S. Y. Sondkar, and S. N. Pawar, “A Review of PID Control, Tuning Methods and Applications,” Int J Dyn Control, vol. 9, pp. 818–827, 2021.
[4] M. Çimen, Z. Garip, and A. Boz, “Chaotic flower pollination algorithm based optimal PID controller design for a buck converter,” Analog Integr Circuits Signal Process, 2021.
[5] O. Güngör and H. İ. Yüksek, “Modeling of Boost and Cuk Converters and Comparison of Their Performance in MPPT,” Sigma Journal of Engineering and Natural Sciences, vol. 11, no. 1, pp. 83–101, 2020.
[6] M. Alkrunz and İ. Yazıcı, “Design of discrete time controllers for the DC-DC boost converter,” Sakarya University Journal of Science, vol. 20, no. 1, pp. 75–82, 2016.
[7] A. Sezen and K. Keskin, “Hybrid Control of DC-DC Buck Boost Converter,” Demiryolu Mühendisliği, vol. 14, pp. 99–109, 2021.
[8] S. Bououden, O. Hazil, S. Filali, and M. Chadli, “Modelling and model predictive control of a DC-DC Boost converter,” in In 2014 15th international conference on sciences and techniques of automatic control and computer engineering (STA), 2014, pp. 643–648.
[9] H. Guldemir, “Sliding mode control of DC-DC boost converter,” Journal of Applied Sciences, vol. 5, no. 3, pp. 588–592, 2005.
[10] M. E. Harmon and S. S. Harmon, “Reinforcement learning: A tutorial,” WL/AAFC, WPAFB Ohio, vol. 45433, pp. 237–285, 1996.
[11] M. E. , Çimen and Z. Garip, “Controlling a Single Tank Liquid Level System with Classical Control Methods and Reinforcement Learning Methods,” Kocaeli Journal of Science and Engineering, vol. 7, no. 1, pp. 30–41, 2024.
[12] A. Angiuli, J. P. Fouque, and M. Laurière, “Unified reinforcement Q-learning for mean field game and control problems,” Mathematics of Control, Signals, and Systems, vol. 34, no. 2, pp. 217–271, 2022.
[13] W. You, G. Yang, J. Chu, and C. Ju, “Deep reinforcement learning-based proportional–integral control for dual-active-bridge converter,” Neural Comput Appl, vol. 35, no. 24, pp. 17953–17966, 2023.
[14] M. E. Çimen, Z. Garip, Y. Yalçın, M. Kutlu, and A. F. Boz, “Self Adaptive Methods for Learning Rate Parameter of Q-Learning Algorithm,” Journal of Intelligent Systems: Theory and Applications, vol. 6, no. 2, pp. 191–198, 2023.
[15] D. Alfred, D. Czarkowski, and J. Teng, “Reinforcement Learning-Based Control of a Power Electronic Converter,” Mathematics, vol. 12, no. 5, p. 671, 2024.
[16] R. F. Muktiadji, M. A. Ramli, and A. H. Milyani, “Twin-Delayed Deep Deterministic Policy Gradient Algorithm to Control a Boost Converter in a DC Microgrid,” Electronics (Basel), vol. 13, no. 2, p. 433, 2024.