    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/119970


    Title: 基於眼動資料之影片字幕依賴性研究
    Investigating Viewer’s Reliance on Captions Based on Gaze Information
    Authors: 陳巧如
    Chen, Chiao-Ju
    Contributors: 廖文宏
    Liao, Wen-Hung
    陳巧如
    Chen, Chiao-Ju
    Keywords: eye movement data
    eye tracker
    subtitles
    Date: 2018
    Issue Date: 2018-09-03 16:02:31 (UTC+8)
    Abstract: Subtitles are present in almost all TV programs and films in Taiwan, and Taiwanese viewers appear to rely on them habitually. Are Taiwanese more dependent on subtitles than viewers of other nationalities? What happens if subtitles are removed, or replaced with an unfamiliar foreign language? In this study we used a Tobii EyeX to collect valid eye movement data from 45 native-speaking participants while they watched films under different controlled conditions, and we propose suitable indicators for analyzing their viewing behavior. Combining macro- and micro-level analyses gives a fuller picture of participants' reliance on subtitles and on other areas of interest (AOIs) such as faces. Because both subtitles and faces change position and content over time, the caption region is detected automatically with OpenCV's Canny edge detector and face regions with Faster R-CNN, yielding stable and valid AOIs suitable for automated analysis.
    Experimental results indicate that the auditory language is the most critical factor. At the macro level, subjects in Group 1 (viewing order: English, Chinese, English, Chinese) tended to focus on the face area, whereas subjects in Group 2 (viewing order: Chinese, English, Chinese, English) read the subtitles more often. At the micro level, the initial preference appears to determine the subsequent viewing pattern: Group 2 showed a significantly stronger preference for subtitles than Group 1 in the later films as well, and this habitual preference carried over into subsequent viewing, producing an immersion effect. We also observed an 'escaping' behavior: when unfamiliar text appeared, subjects avoided the text region. Notably, the first film shown to Group 2 was dubbed in the participants' native language, and that group nevertheless preferred to read the subtitles, so we can preliminarily confirm that Taiwanese viewers depend on subtitles to a certain degree.
    Description: Master's thesis
    National Chengchi University
    In-service Master's Program, Department of Computer Science
    1019710252
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G1019710252
    Data Type: thesis
    DOI: 10.6814/THE.NCCU.EMCS.009.2018.B02
    Appears in Collections: [In-service Master's Program, Department of Computer Science] Theses

    Files in This Item:

    025201.pdf (5831 KB, Adobe PDF)

