    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/138691


    Title: The Establishment of 3D LOD2 Objectivization Building Models based on Data Fusion
    基於資料融合3D LOD2建物模型物件化之研究
    Authors: Chio, Shih-Hong (邱式鴻); Lin, Tsen-Yann (林岑燕)
    Contributors: Department of Land Economics (地政系)
    Keywords: Data Fusion;Triangulation Network;Texture Mapping;Rectangle Packing;3D Building Model
    Date: 2021-06
    Issue Date: 2022-01-06 16:24:43 (UTC+8)
    Abstract: This paper discusses an automatic building objectivization method for generating a large number of 3D LOD2 building models by integrating 2D geospatial information, an airborne LiDAR point cloud, a DEM, and aerial vertical and oblique images. The procedure is divided into four stages. First, the elevation and the ground floor elevation of each building in the shapefile were determined from the LiDAR point cloud within each building outline and from the DEM, and were appended to the building outline shapefile. Second, individual 3D LOD1 building models, represented by triangulation networks in the standard object (OBJ) file format, were generated from every single building outline in the shapefile; even 3D LOD1 building models containing atriums could be produced. Third, using the aerial vertical and oblique images together with the whole set of 3D LOD1 building models, the most appropriate image was determined for each triangle of every 3D LOD1 building model for texture mapping. Fourth, each complete 3D LOD2 objectivized building model was constructed by splitting its building texture images and packing them into a single image for data compression. Finally, the objectivized models were imported into a WebGL platform to demonstrate advanced applications of this study.
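The first stage described in the abstract (deriving per-building elevations from the LiDAR returns inside each footprint plus a DEM ground sample) can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the function names, the percentile threshold, and the ray-casting point-in-polygon test are all assumptions made here for clarity.

```python
# Illustrative sketch of stage 1: estimate a building's roof and ground
# elevations from LiDAR points falling inside its 2D footprint, using a
# DEM sample for the ground. Names and thresholds are hypothetical.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the closed polygon [(x0, y0), ...]?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def building_elevations(footprint, lidar_points, dem_ground_z, roof_percentile=0.9):
    """Return (roof_z, ground_z) for one building outline.

    footprint     -- list of (x, y) vertices of the building outline
    lidar_points  -- iterable of (x, y, z) LiDAR returns
    dem_ground_z  -- ground elevation sampled from the DEM at the footprint
    """
    zs = sorted(z for (x, y, z) in lidar_points
                if point_in_polygon(x, y, footprint))
    if not zs:
        # No returns inside the footprint: fall back to a degenerate model
        return dem_ground_z, dem_ground_z
    # A high percentile is more robust to stray returns than the raw maximum.
    roof_z = zs[min(int(roof_percentile * len(zs)), len(zs) - 1)]
    return roof_z, dem_ground_z
```

For example, with a 10 m square footprint, two returns inside it at 21.0 m and 20.5 m, one stray return outside, and a DEM ground elevation of 5.0 m:

```python
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
pts = [(5, 5, 21.0), (6, 4, 20.5), (50, 50, 3.0)]
building_elevations(square, pts, dem_ground_z=5.0)  # → (21.0, 5.0)
```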
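The rectangle packing of stage 4 (merging the per-face texture images into a single atlas image) can be illustrated with a simple next-fit "shelf" heuristic. The paper does not specify its packing algorithm here, so this sketch is an assumption chosen only to show the general idea; the atlas width, function name, and pixel units are hypothetical.

```python
# Illustrative sketch of stage 4: pack (width, height) texture rectangles
# into one atlas image using a next-fit shelf heuristic. Hypothetical names.

def shelf_pack(rects, atlas_width):
    """Pack (w, h) rectangles left-to-right into horizontal shelves.

    Returns (placements, atlas_height), where placements[i] = (x, y) is the
    lower-left pixel offset of rects[i] inside the atlas.
    """
    placements = []
    x = y = shelf_height = 0
    for w, h in rects:
        if w > atlas_width:
            raise ValueError("rectangle wider than atlas")
        if x + w > atlas_width:      # current shelf is full: open a new one
            y += shelf_height
            x = shelf_height = 0
        placements.append((x, y))
        x += w
        shelf_height = max(shelf_height, h)
    return placements, y + shelf_height
```

In practice, sorting the rectangles by decreasing height before packing usually tightens the shelves considerably; the packed offsets would then be written out as the texture coordinates of the corresponding triangles.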
    Relation: Journal of Photogrammetry and Remote Sensing (航測及遙測學刊), Vol. 26, No. 2, pp. 57-73
    Data Type: article
    DOI: 10.6574/JPRS.202106_26(2).0001 (https://doi.org/10.6574/JPRS.202106_26(2).0001)
    Appears in Collections: [Department of Land Economics] Journal Articles

    Files in This Item:

    File: 148.pdf, 3147 Kb, Adobe PDF


    All items in 政大典藏 are protected by copyright, with all rights reserved.

