Please use this identifier to cite or link to this item:
https://nccur.lib.nccu.edu.tw/handle/140.119/143834
Title: | The Learning Process of Deep Reinforcement Learning with Visualization Analysis Based on Procgen Benchmark Environment (深度強化學習歷程的視覺化分析以Procgen Benchmark環境為例)
Authors: | HUANG, Yi-Chen (黃亦晨)
Contributors: | Chi, Ming-Te (紀明德); HUANG, Yi-Chen (黃亦晨)
Keywords: | Deep reinforcement learning; Visualization; Procgen Benchmark
Date: | 2023 |
Issue Date: | 2023-03-09 18:37:03 (UTC+8) |
Abstract: | Deep reinforcement learning combines deep learning with reinforcement learning: game frames are sent to the computer, the computer chooses an action, and it is rewarded or punished according to that choice. The Procgen Benchmark environment, in which users can set the number of environments, provides almost fully randomized game levels, which reduces overfitting and makes model training more robust. Because of the black-box nature of neural networks, we can judge whether a model performs well only from its output; we cannot know how the model reaches its decisions. Visualization tools are therefore needed to help observe model behavior. First, we present charts of model-related information. Next, we use perturbation-based saliency maps to observe the regions the model attends to. In addition, we extract activations from a hidden layer of the network, reduce their dimensionality with t-SNE, and cluster them with the K-means algorithm. Finally, we integrate these functions into a visual analysis interface that helps users understand the model's decision-making process and check whether its decisions match our expectations. In a user study, participants judged our designed reward mechanism to be better than that of the control group, verifying the model's good performance; participants could also use the interface to observe differences between the agent's and human players' level-completion strategies, validating the effectiveness of the visual analysis interface.
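The abstract names two analysis components: perturbation-based saliency maps and t-SNE plus K-means clustering of hidden-layer activations. The sketches below are minimal illustrations of how such components are commonly built; they are not the thesis's actual code, and the function names, the `policy_fn` interface, and all hyperparameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perturbation_saliency(frame, policy_fn, sigma=5.0, stride=5):
    """Score each image region by how much blurring it changes the policy output
    (in the spirit of Greydanus et al., 2018).

    frame     : (H, W, C) float array, one game observation.
    policy_fn : callable mapping a frame to a vector of action probabilities
                (an assumed interface, not the thesis's model API).
    """
    base = policy_fn(frame)
    blurred = gaussian_filter(frame, sigma=(sigma, sigma, 0))  # blur spatial dims only
    H, W = frame.shape[:2]
    ys, xs = range(0, H, stride), range(0, W, stride)
    saliency = np.zeros((len(ys), len(xs)))
    yy, xx = np.ogrid[:H, :W]
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            # Soft circular mask centred at (y, x): blend the blurred frame into the original.
            mask = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma ** 2))[..., None]
            perturbed = frame * (1.0 - mask) + blurred * mask
            # A large change in the policy output means the hidden region mattered to the decision.
            saliency[i, j] = 0.5 * float(np.sum((policy_fn(perturbed) - base) ** 2))
    return saliency
```

For the embedding step, a common recipe matching the abstract's description (t-SNE first, then K-means) looks roughly like this; the thesis may instead cluster the raw activations rather than the 2-D coordinates.

```python
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

def embed_and_cluster(hidden_activations, n_clusters=8):
    """Project hidden-layer activations to 2-D with t-SNE, then group them with K-means.

    hidden_activations : (n_frames, n_features) array collected while the agent plays.
    Returns 2-D coordinates for plotting and one cluster label per frame.
    """
    coords = TSNE(n_components=2, perplexity=30, init="pca",
                  random_state=0).fit_transform(hidden_activations)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(coords)
    return coords, labels
```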
Reference: |
[1] Cheng, S., Li, X., Shan, G., Niu, B., Wang, Y., & Luo, M. (2022). ACMViz: A visual analytics approach to understand DRL-based autonomous control model. Journal of Visualization, 1-16.
[2] Cobbe, K., Hesse, C., Hilton, J., & Schulman, J. (2020). Leveraging procedural generation to benchmark reinforcement learning. In International Conference on Machine Learning (pp. 2048-2056). PMLR.
[3] Cobbe, K., Klimov, O., Hesse, C., Kim, T., & Schulman, J. (2019). Quantifying generalization in reinforcement learning. In International Conference on Machine Learning (pp. 1282-1289). PMLR.
[4] Deshpande, S., Eysenbach, B., & Schneider, J. (2020). Interactive visualization for debugging RL. arXiv preprint arXiv:2008.07331.
[5] Greydanus, S., Koul, A., Dodge, J., & Fern, A. (2018). Visualizing and understanding Atari agents. In International Conference on Machine Learning (pp. 1792-1801). PMLR.
[6] Hilton, J., Cammarata, N., Carter, S., Goh, G., & Olah, C. (2020). Understanding RL vision. Distill, 5(11), e29.
[7] Joo, H. T., & Kim, K. J. (2019). Visualization of deep reinforcement learning using Grad-CAM: How AI plays Atari games? In 2019 IEEE Conference on Games (CoG) (pp. 1-2). IEEE.
[8] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.
[9] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
[10] Mott, A., Zoran, D., Chrzanowski, M., Wierstra, D., & Jimenez Rezende, D. (2019). Towards interpretable reinforcement learning using attention augmented agents. Advances in Neural Information Processing Systems, 32.
[11] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
[12] Van der Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).
[13] Wang, J., Zhang, W., Yang, H., Yeh, C. C. M., & Wang, L. (2021). Visual analytics for RNN-based deep reinforcement learning. IEEE Transactions on Visualization and Computer Graphics, 28(12), 4141-4155.
[14] Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I (pp. 818-833). Springer International Publishing.
Description: | Master's thesis, Department of Computer Science, National Chengchi University (109753107)
Source URI: | http://thesis.lib.nccu.edu.tw/record/#G0109753107 |
Data Type: | thesis |
Appears in Collections: | [Department of Computer Science] Theses
Files in This Item: | 310701.pdf (14,075 KB, Adobe PDF)
All items in 政大典藏 are protected by copyright, with all rights reserved.