    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/124121


    Title: 基於文本分析方法探討流行歌曲情緒辨識之研究
    Song Mood Classification Based on Textual Analysis Method
    Authors: 駱昱岑
    Luo, Yu-Tsen
    Contributors: 翁久幸
    駱昱岑
    Luo, Yu-Tsen
    Keywords: Popular songs
    Textual analysis
    Mood recognition
    Date: 2019
    Issue Date: 2019-07-01 10:43:32 (UTC+8)
    Abstract: Music streaming services are now on the rise. Besides providing music for users to listen to, they also offer many additional services, such as song recommendations and curated playlists for various themes. However, with thousands of new songs released every day, we cannot rely on human effort to label and organize them one by one, so letting machines categorize songs for us is an important task. The goal of this study is to classify songs by mood using machine learning methods.
    First, starting from the lyrics, we compare the TF-IDF-related lyric feature extraction methods proposed by Laurier et al. [8] and by Zaanen et al. [1]. Second, since popular songs often contain many repeated words that may unduly influence TF-IDF, this thesis proposes a TF-IDF variant that removes duplicated words when extracting lyric features. In addition, we combine lyric and audio features to improve classification accuracy. Two datasets are used: the KKBOX-Song-Mood-Dataset of 593 Chinese songs and the NJU-Music-Mood V1.0-Dataset of 777 English songs. The experimental results show that, on both datasets, the classification accuracy obtained with the duplicate-removal TF-IDF is significantly higher than that of the previous methods.
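    As a rough illustration of the duplicate-removal TF-IDF idea described in the abstract above, the following sketch collapses each lyric to its unique tokens before TF-IDF weighting, so that repeated chorus words cannot dominate the term frequencies. This is not the thesis code: the toy lyrics, mood labels, and the logistic-regression classifier are illustrative assumptions, and real Chinese lyrics would first be word-segmented (e.g. with CKIP [4, 5]).

```python
# Minimal sketch (assumed, not from the thesis): TF-IDF over lyrics whose
# duplicated tokens have been removed, followed by a standard classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def dedupe_tokens(lyric: str) -> str:
    """Keep only the first occurrence of each whitespace-separated token."""
    seen, kept = set(), []
    for tok in lyric.lower().split():
        if tok not in seen:
            seen.add(tok)
            kept.append(tok)
    return " ".join(kept)

# Toy, made-up lyrics and labels; real data would be pre-segmented lyrics
# from the KKBOX or NJU datasets with their mood annotations.
lyrics = [
    "love love love you tonight tonight",
    "rain falls alone alone in the dark",
]
moods = ["happy", "sad"]

model = make_pipeline(
    # The custom preprocessor drops repeated words before TF-IDF weighting.
    TfidfVectorizer(preprocessor=dedupe_tokens, token_pattern=r"\S+"),
    LogisticRegression(max_iter=1000),
)
model.fit(lyrics, moods)
print(model.predict(["alone in the rain tonight"]))
```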
    Nowadays, music streaming services are on the rise. In addition to providing music for users to listen to, they also offer many additional services, such as song recommendations and curated playlists for various topics. However, thousands of new songs are released every day, and we cannot rely on people checking every single song by hand. It is therefore important to let machines handle this labeling work for us. The goal of this study is to use machine learning methods to classify songs by mood.
    This study mainly uses three TF-IDF-related lyric feature extraction methods. The first two were proposed by Laurier et al. [8] and Zaanen et al. [1]. Since no previous study has compared the advantages and disadvantages of these two methods, this study first compares them. Moreover, we found that the repeated words common in popular songs influence the method of Zaanen et al. [1], so we propose a new TF-IDF-related lyric feature extraction method that removes duplicated words. This study mainly uses the KKBOX-Song-Mood-Dataset and the NJU-Music-Mood V1.0-Dataset. The experimental results show that the classification accuracy obtained by the proposed method is significantly higher than that of the previous two methods.
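    The abstract also mentions combining lyric and audio features to improve accuracy. The sketch below shows one plausible way to perform this kind of early fusion using librosa [6] for per-song MFCC statistics; the feature choice, file paths, and classifier are assumptions for illustration, not the thesis's actual pipeline.

```python
# Hypothetical early-fusion sketch: concatenate a song's TF-IDF lyric vector
# with simple audio descriptors (MFCC means) extracted via librosa [6].
import numpy as np
import librosa
from sklearn.svm import SVC

def audio_features(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Average MFCCs over time to get one fixed-length vector per song."""
    y, sr = librosa.load(path)                        # decode the audio file
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                          # shape: (n_mfcc,)

def fuse(lyric_vec: np.ndarray, audio_vec: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate lyric and audio feature vectors."""
    return np.hstack([lyric_vec, audio_vec])

# Given per-song TF-IDF rows X_lyrics and a list of audio file paths
# (placeholders here), the fused matrix can feed any standard classifier:
#   X = np.vstack([fuse(x, audio_features(p)) for x, p in zip(X_lyrics, paths)])
#   SVC(kernel="linear").fit(X, mood_labels)
```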
    Reference: [1] Menno van Zaanen and Pieter Kanters. Automatic Mood Classification Using tf*idf Based on Lyrics. In J. Stephen Downie and Remco C. Veltkamp, editors, 11th International Society for Music Information Retrieval Conference, August 2010.
    [2] Hao Xue, Like Xue, and Feng Su. Multimodal Music Mood Classification by Fusion of Audio and Lyrics. In Proceedings of MMM 2015, LNCS 8936, pp. 26-37.
    [3] Jen-Yu Liu and Yi-Hsuan Yang. Event Localization in Music Auto-tagging, 2016. http://mac.citi.sinica.edu.tw/~yang/pub/liu16mm.pdf
    [4] Wei-Yun Ma and Keh-Jiann Chen. A bottom-up merging algorithm for Chinese unknown word extraction. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, volume 17, pages 31-38. Association for Computational Linguistics, 2003.
    [5] Wei-Yun Ma and Keh-Jiann Chen. Introduction to CKIP Chinese word segmentation system for the first international Chinese word segmentation bakeoff. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, volume 17, pages 168-171. Association for Computational Linguistics, 2003.
    [6] Brian McFee, Colin Raffel, Dawen Liang, Daniel P. W. Ellis, Matt McVicar, Eric Battenberg, and Oriol Nieto. librosa: Audio and Music Signal Analysis in Python. In Proceedings of the 14th Python in Science Conference, pp. 18-25, 2015.
    [7] Martin F. McKinney and Jeroen Breebaart. Features for Audio and Music Classification. In Proceedings of the International Conference on Music Information Retrieval, 2003.
    [8] C. Laurier, J. Grivolla, and P. Herrera. Multimodal Music Mood Classification Using Audio and Lyrics. In Proceedings of the International Conference on Machine Learning and Applications, 2008.
    [9] Y.-H. Yang, Y.-C. Lin, H.-T. Cheng, I.-B. Liao, Y.-C. Ho, and H. H. Chen. Toward Multi-modal Music Emotion Classification. In Proceedings of the Pacific-Rim Conference on Multimedia, pages 70-79. Springer, 2008.
    [10] Xing Wang, Xiaoou Chen, Deshun Yang, and Yuqian Wu. Music Emotion Classification of Chinese Songs Based on Lyrics Using TF*IDF and Rhyme.
    Description: Master's thesis
    National Chengchi University
    Department of Statistics
    106354007
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0106354007
    Data Type: thesis
    DOI: 10.6814/NCCU201900092
    Appears in Collections: [Department of Statistics] Theses

    Files in This Item:

    File: 400701.pdf (1511 KB, Adobe PDF)


    All items in 政大典藏 are protected by copyright, with all rights reserved.


