    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/143172


    Title: Double DQN 模型應用於自動股票交易系統
    Application of the DDQN Model in an Automated Stock Trading System
    Authors: Ko, Yuan-Fu (柯元富)
    Contributors: Tsai, Yen-Lung (蔡炎龍); Ko, Yuan-Fu (柯元富)
    Keywords: Deep Learning; Reinforcement Learning; Q-Learning; Automated Stock Trading System
    Date: 2022
    Issue Date: 2023-02-01 13:51:24 (UTC+8)
    Abstract: This thesis combines reinforcement learning and deep learning to build an automated stock trading system. In addition to raw stock market data, several technical indicators commonly used by investors are added as inputs; the system takes the previous 10 days of data and is trained with a fully connected neural network and Q-learning.
    Training was carried out on two groups. In the first group, all constituent stocks of the Taiwan 50 were used as training data, and performance was tested over the following 2 years. In the second group, 9 electronics stocks from the Taiwan 50 were used as training data, and performance was likewise tested over the following 2 years. The results show no significant difference between the first group and a buy-and-hold strategy, while the second group significantly outperforms buy-and-hold.
    The experiments demonstrate that the DQN model can be effective in an automated stock trading system under certain conditions.
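The Double DQN idea behind the system in this thesis can be sketched as follows. Everything concrete below is an illustrative assumption, not a detail from the thesis: the linear "networks", the feature count (10 days x 3 hypothetical features), the action set (buy / hold / sell), and the discount factor. Only the selection/evaluation split is Double DQN itself: the online network selects the next action, and the separate target network evaluates it, which reduces the Q-value overestimation of plain DQN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the online and target Q-networks: each maps a
# flattened 10-day feature window to Q-values for 3 actions
# (buy / hold / sell). The thesis uses fully connected networks;
# fixed random linear maps suffice to show the update target.
W_online = rng.normal(size=(30, 3))  # 10 days x 3 features (hypothetical)
W_target = rng.normal(size=(30, 3))

def q_online(s):
    # Q-values from the online (frequently updated) network
    return s @ W_online

def q_target(s):
    # Q-values from the slowly updated target network
    return s @ W_target

def double_dqn_target(reward, next_state, gamma=0.99, done=False):
    # Double DQN decoupling: the online net SELECTS the next action,
    # the target net EVALUATES it.
    if done:
        return float(reward)
    a_star = int(np.argmax(q_online(next_state)))          # selection
    return float(reward + gamma * q_target(next_state)[a_star])  # evaluation

s_next = rng.normal(size=30)  # one flattened 10-day observation window
y = double_dqn_target(reward=1.0, next_state=s_next)
```

In plain DQN the target network would both select and evaluate the next action (`max` over its own Q-values); the single changed line above is the entire difference.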
    Description: Master's thesis
    National Chengchi University
    Department of Applied Mathematics (應用數學系)
    109751009
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0109751009
    Data Type: thesis
    Appears in Collections: [應用數學系] Theses

    Files in This Item:

    File: 100901.pdf (1015 KB, Adobe PDF)

