    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/69231


    Title: 照片詮釋資料標記工具之設計與製作
    Design and Implementation of a Metadata Annotation Tool for Images
    Authors: 陳昱宇
    Chen, Yu Yu
    Contributors: 陳恭
    Chen, Kung
    陳昱宇
    Chen, Yu Yu
    Keywords: Metadata; Annotation; Images
    Date: 2013
    Issue Date: 2014-08-25 15:22:22 (UTC+8)
    Abstract: With the rapid development of digital photography, users seldom print their photos anymore; instead they keep them on computers or in web albums, and the old habit of collecting prints into physical albums has become obsolete. As the volume of accumulated digital photos grows over time, however, finding a specific photo becomes difficult. Adding annotations to photos is an obvious way to make later searches easier, but existing annotation tools focus narrowly on face recognition and provide no way to annotate a photo's overall content, such as the people, events, times, places, and objects involved.
    This thesis presents a metadata annotation tool that enables users to attach multiple kinds of metadata to digital photos and to use that metadata as the basis for managing and searching them. The tool divides annotations into two categories: background annotations and content annotations. A background annotation describes the photo as a whole, such as when and where it was taken. A content annotation describes something inside the photo, and a single photo can carry several of them, for example one annotation for each person appearing in it. Every annotation carries a type (who, which, when, or where) that classifies it; at search time, the type serves as an additional search condition to improve precision. With the current implementation, users can add and delete photo metadata, and the stored metadata guides them to the correct photos.
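    The data model the abstract describes (two annotation categories, four types, type-filtered keyword search) can be sketched roughly as follows. This is an illustrative toy, not the thesis's actual implementation; all class, field, and function names are assumptions made for the example.

    ```python
    from dataclasses import dataclass, field
    from enum import Enum

    class Category(Enum):
        BACKGROUND = "background"  # describes the whole photo (e.g. date, location)
        CONTENT = "content"        # describes a person/event/object inside the photo

    class AnnType(Enum):
        # the four classification types from the abstract, also used as search filters
        WHO = "who"
        WHICH = "which"
        WHEN = "when"
        WHERE = "where"

    @dataclass
    class Annotation:
        category: Category
        ann_type: AnnType
        value: str

    @dataclass
    class Photo:
        filename: str
        annotations: list = field(default_factory=list)

    def search(photos, keyword, ann_type=None):
        """Return photos having an annotation whose value contains the keyword.
        Passing ann_type narrows the match, mirroring the type-as-search-condition
        idea used to increase precision."""
        hits = []
        for photo in photos:
            for ann in photo.annotations:
                if keyword in ann.value and (ann_type is None or ann.ann_type == ann_type):
                    hits.append(photo)
                    break  # one matching annotation is enough for this photo
        return hits

    # One background annotation plus two content annotations on the same photo.
    p = Photo("trip.jpg", [
        Annotation(Category.BACKGROUND, AnnType.WHERE, "Taipei"),
        Annotation(Category.CONTENT, AnnType.WHO, "Chen"),
        Annotation(Category.CONTENT, AnnType.WHICH, "bicycle"),
    ])
    ```

    Searching for "Chen" with type `WHO` finds the photo, while the same keyword under type `WHERE` does not, which is how the type field sharpens results.
    
    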
    Reference: [1] Amazon Mechanical Turk: http://aws.amazon.com/cn/documentation/mturk/, accessed January 27, 2014.
    [2] T. Götz and O. Suhre. "Design and Implementation of the UIMA Common Analysis System," IBM Systems Journal, 2004, pp. 476-489.
    [3] Apache UIMA: http://uima.apache.org/, accessed January 27, 2014.
    [4] M. Lux. "Caliph & Emir: MPEG-7 Photo Annotation and Retrieval," in Proceedings of the 17th ACM International Conference on Multimedia, 2009, pp. 925-926.
    [5] R. Sarvas. "User-centric Metadata for Mobile Photos," in Proc. of MobiSys 2004, ACM Press, New York, NY, 2004, pp. 33-35.
    [6] J. Kustanowitz and B. Shneiderman. "Motivating Annotation for Personal Digital Photo Libraries: Lowering Barriers While Raising Incentives," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '07).
    [7] OKFN Annotator: http://annotatorjs.org/, accessed January 27, 2014.
    [8] C. C. Marshall. "Annotation: From Paper Books to the Digital Library," in Proc. ACM International Conference on Digital Libraries, 1997, pp. 131-140.
    [9] D. C. Brabham. "Crowdsourcing as a Model for Problem Solving," Convergence: The International Journal of Research into New Media Technologies, Sage Publications, 2008, pp. 75-89.
    [10] jQuery Image Annotation: http://flipbit.co.uk/jquery-image-annotation.html, accessed January 27, 2014.
    [11] B. S. Manjunath, P. Salembier, and T. Sikora. Introduction to MPEG-7, Wiley, 2002.
    [12] A. Wilhelm, Y. Takhteyev, R. Sarvas, N. Van House, and M. Davis. "Photo Annotation on a Camera Phone," in Proc. of CHI 2004, ACM Press, 2004, pp. 1403-1406.
    [13] G. Munnelly, C. Hampson, N. Ferro, and O. Conlan. "The FAST-CAT: Empowering Cultural Heritage Annotations," in Proc. Digital Humanities 2013, University of Nebraska, Lincoln, 2013, pp. 320-322.
    [14] 林宸均. 「網路使用者圖像標記行為初探-以Flickr圖像標籤為例」 (An Exploratory Study of Web Users' Image Tagging Behavior: The Case of Flickr Image Tags), Master's thesis, Instructional Technology Program, Department of Education, National Taitung University.
    [15] S. Shatford Layne. "Some Issues in the Indexing of Images," Journal of the American Society for Information Science 45, no. 8 (1994): 583-588.
    [16] P. Matthew. "Gaps in Keywords: A Study into the 'Semantic Gap' between Images and Keywords in Users of the Witt Library, Courtauld Institute of Art," MSc dissertation in Information Science, 2007, pp. 17-23.
    [17] R. M. Diamond. "The Development of a Retrieval System for 35mm Slides Utilized in Art and Humanities Instruction: Final Report," ED 031 925.
    [18] V. Gudivada and V. V. Raghavan. "Content-based Image Retrieval Systems," IEEE Computer 28, no. 9 (1995): 18-22.
    Description: Master's thesis
    National Chengchi University
    Department of Computer Science
    100753033
    102
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G1007530331
    Data Type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    File: 033101.pdf  Size: 1143 KB  Format: Adobe PDF


    All items in 政大典藏 are protected by copyright, with all rights reserved.

