    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/51589


    Title: 串流式音訊分類於智慧家庭之應用
    Streaming audio classification for smart home environments
    Authors: 溫景堯
    Wen, Jing Yao
    Contributors: 廖文宏
    Liao, Wen Hung
    溫景堯
    Wen, Jing Yao
Keywords: computational auditory scene analysis
streaming audio classification
    Date: 2009
    Issue Date: 2011-10-11 16:57:32 (UTC+8)
Abstract: Hearing and vision are the two most important human senses. Computational auditory scene analysis (CASA) draws on findings in the psychology of hearing about the relationship between properties of the human ear and auditory perception, and defines a possible direction for bringing machine hearing closer to human perception. This research applies psychoacoustic principles, together with image processing and pattern recognition techniques, to design corresponding operations for audio enhancement, segmentation, and description, and uses similarity computation to achieve real-time streaming audio classification in a smart home environment.
The research consists of three parts. The first is audio processing, which converts environmental sounds into signals that a computer can process and enhance. The second applies CASA principles to design image processing operations that carry out the audio processing in the image domain and describe audio events with image features. The third defines a distance measure over the image feature vectors and uses a K-nearest-neighbor (KNN) classifier to recognize and classify, in real time, audio events common in smart home environments. Experimental results show that the proposed approach is quite effective, achieving recognition rates of 80-90% for eight sounds common in household environments; under noise or interference from other sounds, the recognition rate remains around 70%.
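The three-stage pipeline outlined in the abstract (time-frequency image, image feature extraction, KNN classification) can be sketched as follows. This is a minimal illustrative toy, not the thesis's actual implementation: the frame size, band-energy features, distance measure, and synthetic test tones are all assumptions made for the example.

```python
import numpy as np

def spectrogram(signal, frame=256, hop=128):
    """Stage 1: turn a 1-D signal into a log-magnitude STFT image."""
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame, hop)]
    mag = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(mag)                      # shape: (time, frequency)

def features(image, bins=16):
    """Stage 2: coarse feature vector -- mean energy per frequency band."""
    bands = np.array_split(image, bins, axis=1)
    return np.array([b.mean() for b in bands])

def knn_classify(x, train_x, train_y, k=3):
    """Stage 3: majority vote among the k nearest training vectors."""
    dists = np.linalg.norm(train_x - x, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Toy demo with two synthetic "audio events": low tones vs. high tones.
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0                  # 1 s at 8 kHz

def tone(freq):
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(t.size)

train_x = np.array([features(spectrogram(tone(f)))
                    for f in (200, 210, 2000, 2100)])
train_y = np.array([0, 0, 1, 1])              # 0 = low event, 1 = high event

print(knn_classify(features(spectrogram(tone(205))), train_x, train_y))  # → 0
```

A real classifier in this spirit would replace the band-energy features with richer image descriptors (e.g., texture or shape features of the time-frequency image) and feed the KNN with labeled recordings of household sound events.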
Description: Master's thesis
National Chengchi University
Department of Computer Science
97753031
98
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0097753031
    Data Type: thesis
DOI Link: http://dx.doi.org/10.1109/ACPR.2011.6166676
    DOI: 10.1109/ACPR.2011.6166676
Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

303101.pdf (4866 KB, Adobe PDF)

