政大機構典藏 - National Chengchi University Institutional Repository (NCCUR): Item 140.119/119236
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/119236


    Title: 基於深度學習框架之衛照圖船艦識別
    Detection of Civilian Boat and War Ship in Satellite Images with Deep Learning Framework
    Authors: 吳信賢
    Wu, Shin-Shian
    Contributors: 廖文宏
    Liao, Wen-Hung
    吳信賢
    Wu, Shin-Shian
    Keywords: 深度學習
    物體偵測
    遷移學習
    資料增強
    衛照圖資分析
    Deep learning
    Object detection
    Transfer learning
    Data augmentation
    Satellite image
    Date: 2018
    Issue Date: 2018-08-06 18:23:57 (UTC+8)
    Abstract: The objective of this thesis is to detect and recognize civilian boats and warships in satellite images using deep learning approaches when only a limited amount of data is available, and to locate the detected ships within each image.
    The concept of transfer learning is employed to take advantage of existing pre-trained models. Because some categories contain only a few labeled samples, data augmentation techniques are also used to generate additional samples for the training set, improving the overall accuracy of ship detection.
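    The record itself contains no code; purely as a hedged illustration of the transfer-learning and data-augmentation ideas described above, a minimal sketch using TensorFlow and Keras (cited as [22] and [23]) might look like the following. The ResNet50 backbone, binary class layout, image size, and directory paths are assumptions made for illustration, not details taken from the thesis, and the sketch shows classification-style fine-tuning rather than the full object-detection pipeline.

        # Illustrative sketch only: augmenting satellite-image crops and fine-tuning
        # a pre-trained backbone. Backbone, paths, image size and class layout are
        # assumptions, not details from the thesis.
        import tensorflow as tf
        from tensorflow.keras import layers, models
        from tensorflow.keras.preprocessing.image import ImageDataGenerator

        IMG_SIZE = (224, 224)        # assumed input resolution
        TRAIN_DIR = "data/train"     # assumed layout: one sub-folder per class

        # Data augmentation: generate extra samples for sparsely populated classes.
        train_gen = ImageDataGenerator(
            rescale=1.0 / 255,
            rotation_range=30,       # ships appear at arbitrary headings
            horizontal_flip=True,
            vertical_flip=True,
            zoom_range=0.2,
        ).flow_from_directory(TRAIN_DIR, target_size=IMG_SIZE,
                              batch_size=16, class_mode="binary")

        # Transfer learning: reuse ImageNet weights and train only a new head.
        base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                              input_shape=IMG_SIZE + (3,))
        base.trainable = False

        model = models.Sequential([
            base,
            layers.GlobalAveragePooling2D(),
            layers.Dense(1, activation="sigmoid"),   # warship vs. civilian boat
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.fit(train_gen, epochs=10)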
    After extensive model selection and parameter fine-tuning, the average precision (AP) reaches 0.816 for warships and 0.908 for civilian boats, with an overall mAP of 0.862. The resulting framework is ready to be incorporated into a semi-automatic system that assists military image analysts and improves the efficiency of image detection and interpretation.
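    The reported mAP is consistent with taking the mean of the per-class average precisions; a minimal check in LaTeX notation, assuming exactly the two classes reported above:

        \mathrm{mAP} = \frac{1}{N}\sum_{c=1}^{N}\mathrm{AP}_c
        = \frac{\mathrm{AP}_{\text{warship}} + \mathrm{AP}_{\text{civilian boat}}}{2}
        = \frac{0.816 + 0.908}{2} = 0.862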
    This thesis is expected to lay the groundwork for finer-grained military facility detection models that can be extended to other image-interpretation tasks, improving the efficacy of future interpretation systems.
    Reference: [1] ImageNet Large Scale Visual Recognition Challenge from http://www.image-net.org/challenges/LSVRC/
    [2] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012.
    [3] The PASCAL Visual Object Classes from http://host.robots.ox.ac.uk/pascal/VOC/
    [4] ImageNet data set from http://image-net.org/
    [5] COCO dataset from http://cocodataset.org/#home
    [6] Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in neural information processing systems. 2015.
    [7] He, Kaiming, et al. "Mask R-CNN." Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017.
    [8] Liu, Wei, et al. "SSD: Single shot multibox detector." European conference on computer vision. Springer, Cham, 2016.
    [9] Redmon, Joseph, et al. "You only look once: Unified, real-time object detection." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
    [10] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." arXiv preprint arXiv:1708.02002 (2017).
    [11] Girshick, Ross. "Fast R-CNN." arXiv preprint arXiv:1504.08083 (2015).
    [12] Redmon, Joseph, and Ali Farhadi. "YOLO9000: better, faster, stronger." arXiv preprint (2017).
    [13] YOLO v2 from https://www.youtube.com/watch?time_continue=3&v=VOC3huqHrss
    [14] Ma, Zhong, et al. "Satellite imagery classification based on deep convolution network." Int. J. Comput. Autom. Control Inf. Eng 10 (2016): 1055-1059.
    [15] Albert, Adrian, Jasleen Kaur, and Marta C. Gonzalez. "Using convolutional networks and satellite imagery to identify patterns in urban environments at a large scale." Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2017.
    [16] Van Etten, Adam. "You Only Look Twice: Rapid Multi-Scale Object Detection In Satellite Imagery." arXiv preprint arXiv:1805.09512 (2018).
    [17] You Only Look Twice — Multi-Scale Object Detection in Satellite Imagery With Convolutional Neural Networks from https://medium.com/the-downlinq/you-only-look-twice-multi-scale-object-detection-in-satellite-imagery-with-convolutional-neural-38dad1cf7571
    [18] Google Earth from https://earth.google.com/web/
    [19] ArcGIS Earth from http://www.esri.com/software/arcgis-earth
    [20] ESRI from https://www.esri.com/en-us/home
    [21] labelImg from https://github.com/tzutalin/labelImg
    [22] TensorFlow from https://www.tensorflow.org/
    [23] Keras from https://github.com/keras-team/keras
    [24] Pan, Sinno Jialin, and Qiang Yang. "A survey on transfer learning." IEEE Transactions on knowledge and data engineering 22.10 (2010): 1345-1359.
    [25] Huang, Jonathan, et al. "Speed/accuracy trade-offs for modern convolutional object detectors." IEEE CVPR. 2017.
    [26] Chen, Liang-Chieh, et al. "Rethinking atrous convolution for semantic image segmentation." arXiv preprint arXiv:1706.05587 (2017).
    [27] Goodfellow, Ian, et al. "Generative adversarial nets." Advances in neural information processing systems. 2014.
    Description: Master's thesis
    National Chengchi University
    Executive Master Program of Computer Science
    104971022
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0104971022
    Data Type: thesis
    DOI: 10.6814/THE.NCCU.EMCS.004.2018.B02
    Appears in Collections:[Executive Master Program of Computer Science of NCCU] Theses

    Files in This Item:

    File: 102201.pdf (3210 KB, Adobe PDF)

