A Strategy-Enhanced Reinforcement Learning by Fusing LSTM and PPO Models
Department of Computer Science and Information Engineering
Keywords: Long Short-Term Memory (LSTM); Proximal Policy Optimization (PPO)
Abstract
With the rise of research on artificial intelligence, many machine learning techniques have matured and been applied across a wide range of fields. The game domain, however, still leaves considerable room for development because of the complexity of games: a single agent action can lead to many different situations, which not only greatly increases model complexity but also lengthens training time. This study therefore proposes strategy-enhanced proximal policy optimization (SEPPO), which combines a long short-term memory (LSTM) model with proximal policy optimization (PPO). SEPPO formulates agent strategies from extracted features and uses the LSTM's strategy prediction to guide PPO, so that reinforcement learning reaches comparable results faster. Experimental results in the game domain confirm that SEPPO effectively reduces the problem of excessively long training time.
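At the core of the PPO algorithm that SEPPO extends is the clipped surrogate objective from the original PPO paper. The sketch below is a minimal plain-Python illustration of that objective for a single sample; the function name and the example numbers are illustrative choices, not taken from this thesis, and the LSTM strategy-prediction component described in Chapter 4 is not shown here.

```python
def ppo_clip_objective(ratio: float, advantage: float, eps: float = 0.2) -> float:
    """PPO's clipped surrogate objective for one (state, action) sample.

    ratio     -- pi_new(a|s) / pi_old(a|s), the policy probability ratio
    advantage -- estimated advantage A(s, a) under the old policy
    eps       -- clip range; 0.2 is the default used in the original PPO paper
    """
    clipped_ratio = max(1.0 - eps, min(ratio, 1.0 + eps))
    # Taking the minimum makes the objective pessimistic: a large policy
    # update that would over-exploit a noisy advantage estimate is cut off.
    return min(ratio * advantage, clipped_ratio * advantage)


# Illustrative values: with a positive advantage the benefit of a large
# ratio is capped at 1 + eps; with a negative advantage the penalty stays.
print(ppo_clip_objective(1.5, 1.0))   # capped at (1 + 0.2) * 1.0 = 1.2
print(ppo_clip_objective(0.5, -1.0))  # min(-0.5, -0.8) = -0.8
```

In practice this objective is averaged over a batch and maximized by gradient ascent; SEPPO's contribution, per the abstract, is supplying the policy with LSTM-predicted strategy information so fewer training episodes are needed.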
Table of Contents
Chinese Abstract
Table of Contents
List of Figures
List of Tables
1. Introduction
2. Related Work
2.1. Long Short-Term Memory
2.2. Single-Dimensional Action Space
2.3. Multidimensional Action Space
3. Preliminary
3.1. Notation
3.2. Problem Definition
4. Proposed RL: SEPPO
4.1. Feature Extraction and Reward Function
4.2. Strategy Prediction
4.3. Strategy-Enhanced Proximal Policy Optimization (SEPPO)
5. Performance Evaluation
5.1. Experiment Settings
5.2. The Effectiveness of Strategy Prediction
5.3. SEPPO Performance
5.4. The Effectiveness of SEPPO Training Episodes
6. Conclusion
List of Figures
Fig. 1 A snapshot of the StarCraft II game
Fig. 2 The architecture of SEPPO
Fig. 3 The architecture of strategy prediction
Fig. 4 A snapshot of the environment
Fig. 5 The total images in the feature sets
Fig. 6 The performance of different cell-size settings in strategy prediction
Fig. 7 The performance of different batch-size settings in strategy prediction
Fig. 8 Accuracy of the training process
Fig. 9 Accuracy of the validation process
Fig. 10 Reward of the training process
List of Tables
Table 1 Comparison of different models