Song Mood Classification Based on Textual Analysis Method
Issue Date: 2019-07-01 10:43:32 (UTC+8)
First, starting from the lyrics, we compare the TF-IDF-related lyric feature extraction methods proposed by Laurier et al. and by Zaanen et al. Second, since popular songs often contain many repeated words, which may disproportionately influence TF-IDF weights, this thesis proposes a TF-IDF approach that removes repeated words when extracting lyric features. In addition, we combine lyric and audio features to improve classification accuracy. This thesis uses two datasets: the KKBOX-Song-Mood-Dataset of 593 Chinese songs and the NJU-Music-Mood V1.0-Dataset of 777 English songs. Experimental results show that the classification accuracy obtained with the duplicate-word-removing TF-IDF is significantly higher than that of the previous methods.
Nowadays, music streaming services are on the rise. Besides providing music to the user, they offer many additional services, such as song recommendation and playlists organized around various topics. However, thousands of songs are released every day, and we cannot rely on human beings to check every single one. It is therefore important to let machines take over such tedious work. The goal of this study is to use machine learning methods to classify songs into moods.
This study mainly uses three lyric feature extraction methods related to TF-IDF. The first two methods were proposed by Laurier et al. and Zaanen et al. Since no previous study has compared the advantages and disadvantages of these two methods, this study first compares the differences between them. Moreover, we found that in popular songs, repeated words influence the method of Zaanen et al., so we propose a new TF-IDF-related lyric feature extraction method. This study mainly uses the KKBOX-Song-Mood-Dataset and the NJU-Music-Mood V1.0-Dataset. The experimental results show that the classification accuracy obtained by our proposed method is significantly higher than that of the two previous methods.
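The core idea of the proposed feature extraction, removing within-song repeated words before computing TF-IDF so a heavily repeated chorus word does not dominate the weights, can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual code: the function names and the particular weighting scheme (raw term frequency times log inverse document frequency, with no smoothing) are assumptions.

```python
import math
from collections import Counter

def tfidf(docs, dedup=False):
    """Compute TF-IDF vectors for tokenized documents.

    If dedup is True, repeated words within a document are collapsed
    to a single occurrence before counting, limiting the influence of
    words repeated many times in a song's lyrics (e.g. in a chorus).
    """
    if dedup:
        # keep only the first occurrence of each word, preserving order
        docs = [list(dict.fromkeys(d)) for d in docs]
    n = len(docs)
    df = Counter()                      # document frequency of each word
    for d in docs:
        df.update(set(d))
    vectors = []
    for d in docs:
        tf = Counter(d)
        vectors.append({w: (c / len(d)) * math.log(n / df[w])
                        for w, c in tf.items()})
    return vectors

# Toy lyrics: the first "song" repeats one word heavily
docs = [["love", "love", "love", "rain"],
        ["rain", "sun"],
        ["sun", "love", "night"]]

plain = tfidf(docs)
dedup = tfidf(docs, dedup=True)
```

With plain TF-IDF, the repeated word "love" dominates the first song's vector; after duplicate removal, "love" and "rain" receive equal weight, which is the distortion the proposed method aims to correct.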
Reference:
Menno van Zaanen and Pieter Kanters. Automatic Mood Classification Using tf*idf Based on Lyrics. In J. Stephen Downie and Remco C. Veltkamp, editors, Proceedings of the 11th International Society for Music Information Retrieval Conference (ISMIR), August 2010.
Hao Xue, Like Xue, and Feng Su. Multimodal Music Mood Classification by Fusion of Audio and Lyrics. In Proceedings of MMM 2015, LNCS 8936, pp. 26–37.
Jen-Yu Liu and Yi-Hsuan Yang. Event Localization in Music Auto-tagging. 2016. http://mac.citi.sinica.edu.tw/~yang/pub/liu16mm.pdf
Wei-Yun Ma and Keh-Jiann Chen. A Bottom-up Merging Algorithm for Chinese Unknown Word Extraction. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, volume 17, pages 31–38. Association for Computational Linguistics, 2003.
Wei-Yun Ma and Keh-Jiann Chen. Introduction to CKIP Chinese Word Segmentation System for the First International Chinese Word Segmentation Bakeoff. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, volume 17, pages 168–171. Association for Computational Linguistics, 2003.
Brian McFee, Colin Raffel, Dawen Liang, Daniel P. W. Ellis, Matt McVicar, Eric Battenberg, and Oriol Nieto. librosa: Audio and Music Signal Analysis in Python. In Proceedings of the 14th Python in Science Conference, pp. 18–25, 2015.
Martin F. McKinney and Jeroen Breebaart. Features for Audio and Music Classification. In Proceedings of the International Conference on Music Information Retrieval, 2003.
C. Laurier, J. Grivolla, and P. Herrera. Multimodal Music Mood Classification Using Audio and Lyrics. In Proceedings of the International Conference on Machine Learning and Applications, 2008.
Y.-H. Yang, Y.-C. Lin, H.-T. Cheng, I.-B. Liao, Y.-C. Ho, and H. H. Chen. Toward Multi-modal Music Emotion Classification. In Proceedings of the Pacific-Rim Conference on Multimedia, pages 70–79. Springer, 2008.
Xing Wang, Xiaoou Chen, Deshun Yang, and Yuqian Wu. Music Emotion Classification of Chinese Songs Based on Lyrics Using TF*IDF and Rhyme.
Source URI: http://thesis.lib.nccu.edu.tw/record/#G0106354007
Data Type: thesis
Appears in Collections: [Department of Statistics (統計學系)] Degree Theses