National Chengchi University Institutional Repository (NCCUR): Item 140.119/77007
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/77007


    Title: 基於MPEG-7之地理群眾註記個人生命記憶典藏暨檢索資訊系統
    Other Titles: MPEG-7 Based Geo-Crowdsourcing Personal Lifelong Archiving and Retrieving Information System
    Authors: 郭正佩
    Contributors: Department of Computer Science
    Date: 2013
    Issue Date: 2015-07-27 17:59:13 (UTC+8)
    Abstract: Nowadays, an individual can easily own more than one device with photographic capability. The proliferation of consumer digital recording and audio-visual products such as digital camcorders, digital cameras, tablets, and smartphones, together with ubiquitous social sharing services, means that personal digital life traces are recorded continuously, anytime and anywhere. With storage prices falling year by year and cloud services maturing, the construction of personal digital life-memory databases and of emerging digital text and audio-visual services has become an increasingly important topic. However, without appropriate multimedia content annotation, retrieving specific audio-visual information from long-accumulated personal digital life-memory data is a difficult and time-consuming task. In 2004, Kuo et al. proposed PARIS (Personal Archiving and Retrieving Image System), an MPEG-7 based multimedia content annotation system. [1] PARIS combines the MPEG-7 semantic annotation standard with the temporal and spatial information of multimedia content, aiming to semi-automatically generate metadata for continuously recorded personal life experiences and to exploit external social media data for automatic or semi-automatic annotation, retrieval, and management. Since 2010, Kuo and Cheng have experimented with using crowdsourced annotations from external social media to accelerate the annotation of personal images. [2],[4] In 2012, Chen and Kuo proposed the DDDC (Dozen Dimensional Digital Content) annotation standard extending PARIS, together with the iDDDC (Integrated Dozen Dimensional Digital Content) geographic annotation standard, which incorporates crowdsourced social media geographic information [3]; on the basis of iDDDC, they began building an MPEG-7 based geo-crowdsourced personal life-memory archiving and retrieval information system. Multimedia retrieval has been an active research topic since the 1970s. Much prior work has focused on combining the image feature analysis of Content-Based Image Retrieval with the semantic annotations of Metadata-Based Image Retrieval to improve the accuracy of audio-visual database retrieval. However, personal digital audio-visual databases differ in many respects from general-purpose audio-visual archives; accommodating these characteristics during archiving opens up archiving requirements and retrieval possibilities distinct from those of traditional audio-visual archives. In addition, with the emergence of tablet-style mobile devices, many innovative modes of presenting and recalling personal digital life memories remain to be explored. This project aims to build, tailored to the characteristics of personal audio-visual databases, an interoperable annotation structure and a crowdsourced geographic annotation ontology system that can effectively support future archiving, management, and retrieval, working toward a long-term personal text, image, and audio-visual archive. Besides exploiting the semantic annotations that recent audio-visual devices can generate automatically, the project will also use social geo-tags built collaboratively by users of related social services to construct an ontology database, achieving semi-automatic metadata construction and a novel retrospective browsing mode for personal digital life memories.
    Relation: Project No. NSC102-2221-E004-010
    Description: Nowadays, almost everyone easily possesses one or more camera-equipped devices. With the proliferation of digital recording devices such as digital video cameras, digital cameras, emerging tablets, and smartphones, an increasing number of users are building up private multimedia repositories. At the same time, the rapid growth of hard drive capacity and digital audio-visual device resolution results in ever-larger personal multimedia collections. Without appropriate multimedia content annotations, however, it is difficult and time-consuming to organize and relocate specific audio-visual items within a long-term personal digital life-memory database. In 2004, Kuo et al. proposed a multimedia description schema system based on MPEG-7 technology called PARIS (Personal Archiving and Retrieving Image System). [1] It was designed to integrate the spatial and temporal information of multimedia content into an MPEG-7 based semantic description. With this description architecture, PARIS envisioned continuously capturing and archiving personal experience as audio-visual recordings while exploiting social networking annotations provided by third-party services. Since 2010, Kuo and Cheng have experimented with using user-generated social network content to facilitate the annotation of personal photographs. [2],[4] In 2012, Chen and Kuo proposed an MPEG-7 based Integrated Dozen Dimensional Digital Content (iDDDC) description structure [3], extended from the PARIS DDDC (Dozen Dimensional Digital Content) description architecture, to enable semi-automatic crowdsourced geographic annotation via third-party social networking services. Multimedia retrieval has been a very active research area since the 1970s.
While previous research on multimedia database systems has focused mainly on general-purpose archives, our proposal targets continuous personal multimedia collections. Personal long-term archives have very different characteristics from general-purpose digital libraries, so we envision novel archiving requirements and retrieval possibilities. In addition, with emerging tablet devices and smartphones, creative personal life-memory storytelling systems and autobiographical narratives are yet to be developed. This research aims to construct a common annotation architecture and geography-specific ontologies in order to facilitate the archiving, retrieval, and management of lifelong personal multimedia collections. We also envision utilizing, and semi-automatically generating, crowdsourced geography-specific ontologies from related social network content to achieve novel personal audio-visual database management and autobiographical narrative implementations.
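As a rough illustration of the idea described above — attaching temporal, spatial, and crowdsourced semantic annotations to a photograph in an MPEG-7-style description — the following Python sketch builds such a record as XML. The actual PARIS, DDDC, and iDDDC schemas are not specified in this abstract, so the element names below are hypothetical and only loosely follow MPEG-7 naming conventions.

```python
# Illustrative sketch only: element names (Mpeg7, GeographicPoint, Keyword, ...)
# are hypothetical stand-ins, not the published PARIS/DDDC/iDDDC schema.
import xml.etree.ElementTree as ET

def describe_photo(filename, taken_at, lat, lon, crowd_tags):
    """Build a minimal MPEG-7-style description for one photograph,
    combining capture time, GPS location, and crowdsourced geo-tags."""
    root = ET.Element("Mpeg7")
    desc = ET.SubElement(root, "Description")
    ET.SubElement(desc, "MediaUri").text = filename
    ET.SubElement(desc, "CreationTime").text = taken_at   # temporal information
    point = ET.SubElement(desc, "GeographicPoint")        # spatial information
    ET.SubElement(point, "latitude").text = str(lat)
    ET.SubElement(point, "longitude").text = str(lon)
    for tag in crowd_tags:                                # crowdsourced semantic annotations
        ET.SubElement(desc, "Keyword").text = tag
    return ET.tostring(root, encoding="unicode")

xml_record = describe_photo("trip/001.jpg", "2013-05-01T10:30:00+08:00",
                            25.0330, 121.5654, ["Taipei 101", "Xinyi District"])
print(xml_record)
```

In a system like the one proposed, the `crowd_tags` would come from geo-tags contributed by users of third-party social services near the photo's coordinates, rather than being supplied by hand.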
    Data Type: report
    Appears in Collections: [Department of Computer Science] NSC Projects

    Files in This Item:

    File: index.html (HTML, 0 KB)


    All items in 政大典藏 are protected by copyright, with all rights reserved.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access for non-commercial purposes such as academic research and public education. Please use the content in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. Every effort has been made in building this website to avoid infringing the rights of copyright owners. If you believe that any material on the website nonetheless infringes copyright, please notify our staff (nccur@nccu.edu.tw); the work will be removed immediately and your claim investigated.
    DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTU Library IR team - Feedback