    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/141182


    Title: Transformer 應用於中文文章摘要
    Using Transformer for Chinese article summarization
    Authors: 林奕勳
    Lin, Yi-Hsun
    Contributors: 蔡炎龍
    Tsai, Yen-Lung
    林奕勳
    Lin, Yi-Hsun
    Keywords: Transformer
    BERT
    GPT-2
    中文文章摘要
    抽取式摘要
    生成式摘要
    深度學習
    Transformer
    BERT
    GPT-2
    Chinese article summarization
    Extractive summarization
    Abstractive summarization
    Deep learning
    Date: 2022
    Issue Date: 2022-08-01 18:13:06 (UTC+8)
    Abstract: 自從 Transformer 發表後,無疑為自然語言處理領域立下新的里程碑,許多的模型也因應而起,分別在各自然語言處理項目有傑出的表現。如此強大的模型多數背後依靠巨量的參數運算,但各模型皆以英文為發展主軸,我們很難訓練一個一樣強的中文模型。在缺乏原生中文模型的情況下,我們利用現有的資源及模型訓練機器做中文文章摘要,使用 BERT 及 GPT-2,搭配中研院中文詞知識庫小組的中文模型,並採用新聞資料進行訓練。先透過 BERT 從原文章獲得抽取式摘要,使文章篇幅縮短並保留住重要資訊,接著使用 GPT-2 從抽取過的摘要中再進行生成式摘要,去除掉重複的資訊並使語句更平順。在我們的實驗中,我們獲得了不錯的中文文章摘要,證明這個方法是有效的。
    Since the publication of the Transformer, which undoubtedly set a new milestone in the field of Natural Language Processing, many models built on it have appeared and performed outstandingly on various Natural Language Processing tasks. Most of these powerful models rely on an enormous number of parameters, and most are developed primarily for English, so it is difficult to train an equally strong Chinese model. In the absence of a native Chinese model, we use existing resources and models to train a machine to summarize Chinese articles: we use BERT and GPT-2 together with the Chinese models released by the Chinese Knowledge and Information Processing (CKIP) group of Academia Sinica, Taiwan, and train on news data. First, BERT produces an extractive summary of the original article, which shortens the text while retaining the important information; then GPT-2 generates an abstractive summary from the extracted sentences, removing duplicated information and making the sentences smoother. In our experiments we obtained decent Chinese article summaries, showing that this method is effective.
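    The abstract describes a two-stage pipeline: a BERT model first selects the most informative sentences (extractive summarization), and a GPT-2 model then rewrites that shortened text into a more fluent abstractive summary. Below is a minimal sketch of how such a pipeline could be wired together with the Hugging Face transformers library; the CKIP checkpoint names, the cosine-similarity sentence scoring, and the plain-prompt generation are illustrative assumptions only, not the implementation used in the thesis.

```python
# A minimal sketch of the two-stage summarization pipeline described in the
# abstract: BERT-based extractive selection followed by GPT-2 abstractive
# rewriting. The checkpoint names, the similarity-based sentence scoring, and
# the plain-prompt generation are illustrative assumptions; the thesis trains
# both stages on news data and does not specify these details here.
import torch
from transformers import AutoModel, AutoTokenizer, GPT2LMHeadModel

BERT_NAME = "ckiplab/bert-base-chinese"   # assumed CKIP BERT checkpoint
GPT2_NAME = "ckiplab/gpt2-base-chinese"   # assumed CKIP GPT-2 checkpoint

bert_tok = AutoTokenizer.from_pretrained(BERT_NAME)
bert = AutoModel.from_pretrained(BERT_NAME).eval()
gpt2_tok = AutoTokenizer.from_pretrained(GPT2_NAME)
gpt2 = GPT2LMHeadModel.from_pretrained(GPT2_NAME).eval()  # would need summarization fine-tuning


def embed(text: str) -> torch.Tensor:
    """Mean-pooled BERT embedding of a piece of text."""
    enc = bert_tok(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state      # (1, seq_len, hidden_size)
    return hidden.mean(dim=1).squeeze(0)            # (hidden_size,)


def extractive_summary(article: str, top_k: int = 3) -> str:
    """Keep the top_k sentences most similar to the whole article.

    This cosine-similarity heuristic is only a stand-in for the trained
    BERT extractor described in the thesis.
    """
    sentences = [s for s in article.split("。") if s]          # naive sentence split
    doc_vec = embed(article)
    scores = [float(torch.cosine_similarity(embed(s), doc_vec, dim=0)) for s in sentences]
    keep = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:top_k])
    return "。".join(sentences[i] for i in keep) + "。"


def abstractive_summary(extracted: str, max_new_tokens: int = 64) -> str:
    """Ask (a fine-tuned) GPT-2 to rewrite the extracted sentences more fluently."""
    ids = gpt2_tok(extracted, return_tensors="pt").input_ids
    with torch.no_grad():
        out = gpt2.generate(ids, max_new_tokens=max_new_tokens,
                            do_sample=False, no_repeat_ngram_size=3)
    return gpt2_tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)


if __name__ == "__main__":
    article = "..."  # a Chinese news article
    print(abstractive_summary(extractive_summary(article)))
```

    In practice both stages would be fine-tuned on the news dataset mentioned in the abstract; off-the-shelf checkpoints run through this sketch only show how the extractive and abstractive stages connect, not the quality reported in the thesis.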
    Reference: [1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
    [2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
    [3] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
    [4] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information, 2016.
    [5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
    [6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
    [7] Kunihiko Fukushima. Neural network model for a mechanism of pattern recognition unaffected by shift in position-neocognitron. IEICE Technical Report, A, 62(10):658–665, 1979.
    [8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
    [10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
    [11] Anil K Jain, Jianchang Mao, and K Moidin Mohiuddin. Artificial neural networks: A tutorial. Computer, 29(3):31–44, 1996.
    [12] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature, 521(7553):436–444, 2015.
    [13] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
    [14] Moshe Leshno, Vladimir Ya Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural networks, 6(6):861–867, 1993.
    [15] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
    [16] Yang Liu and Mirella Lapata. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345, 2019.
    [17] Rada Mihalcea and Paul Tarau. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404–411, 2004.
    [18] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
    [19] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Icml, 2010.
    [20] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
    [21] Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543, 2014.
    [22] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations, 2018.
    [23] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
    [24] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
    [25] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1–67, 2020.
    [26] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. nature, 323(6088):533–536, 1986.
    [27] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, Jan 2015.
    [28] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484–489, 2016.
    [29] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
    [30] Ronald J Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270–280, 1989.
    [31] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
    Description: 碩士 (Master's)
    國立政治大學
    應用數學系
    109751004
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0109751004
    Data Type: thesis
    DOI: 10.6814/NCCU202200797
    Appears in Collections: [應用數學系] 學位論文

    Files in This Item:

    File: 100401.pdf  Size: 4193 KB  Format: Adobe PDF

