    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/134204


    Title: 基於深度學習框架之夜晚霧霾圖像模擬與復原方法評估
    Nighttime Haze Images Simulation and Restoration Using Deep Learning Frameworks
    Authors: 鄭可昕
    Cheng, Ko-Hsin
    Contributors: 廖文宏
    Liao, Wen-Hung
    鄭可昕
    Cheng, Ko-Hsin
    Keywords: 深度學習
    夜晚圖像
    霧霾模擬
    圖像去霧
    圖像復原
    Deep learning
    Nighttime images
    Fog simulation
    Haze removal
    Image restoration
    Date: 2021
    Issue Date: 2021-03-02 14:57:22 (UTC+8)
Abstract:
In recent years, extreme weather and air pollution have grown increasingly serious, making haze a frequent occurrence in daily life. Images captured in hazy conditions lose much of their sharpness and contrast, and when haze occurs at night, image quality degrades further due to interference from artificial light. With the rapid progress of deep learning in computer vision, applying deep neural networks to restore haze-degraded images has become a topic of growing interest among researchers.
This research combines the principles of haze image formation and scene depth with generative adversarial networks, the atmospheric scattering model, and image depth estimation to simulate nighttime haze images by superimposing haze on clear nighttime images. The simulated images then serve as training data for a deep learning model that restores the simulated nighttime haze images.
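The haze superimposition step described above follows the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)) (see references [6]-[8] below). A minimal sketch in Python, assuming the clear image and depth map are NumPy float arrays in [0, 1]; the function name and parameter values are illustrative, not the thesis's actual implementation:

```python
import numpy as np

def add_synthetic_haze(clear, depth, beta=1.0, airlight=0.8):
    """Superimpose haze on a clear image using the atmospheric
    scattering model I(x) = J(x) * t(x) + A * (1 - t(x)),
    where transmission t(x) = exp(-beta * d(x)) depends on the
    estimated per-pixel scene depth d(x).

    clear    : float array in [0, 1], shape (H, W, 3)
    depth    : float array, shape (H, W), relative scene depth
    beta     : scattering coefficient (controls haze density)
    airlight : global atmospheric light A
    """
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmission
    hazy = clear * t + airlight * (1.0 - t)   # scattering model
    return np.clip(hazy, 0.0, 1.0)

# Toy example: a 2x2 "image" whose bottom row is far from the camera.
clear = np.full((2, 2, 3), 0.2)
depth = np.array([[0.1, 0.1], [3.0, 3.0]])
hazy = add_synthetic_haze(clear, depth)
# Distant pixels are pushed toward the airlight value,
# while nearby pixels stay close to the clear image.
```

Deeper pixels receive lower transmission and are therefore washed out toward the atmospheric light, which is why a depth estimate for each nighttime image is needed before haze can be synthesized.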
To evaluate the effectiveness of the proposed approach, we also test on real nighttime haze images to examine the model's generalization ability. To assess the dehazing effect objectively, several image quality indices are computed and compared before and after restoration. Additionally, the YOLOv5 object detector is applied to both versions, with the resulting mAP serving as a benchmark. All results indicate improved performance after dehazing.
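Among full-reference image quality indices of the kind compared above, PSNR is the simplest to reproduce. A small self-contained sketch, assuming images normalized to [0, 1]; the synthetic "hazy" and "restored" arrays here are toy stand-ins for illustration, not the thesis's data:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio between a reference image and a
    restored image; higher values indicate closer agreement."""
    mse = np.mean((reference - restored) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
reference = rng.random((32, 32, 3))
# A heavily hazed copy (toy degradation) and a lightly noisy "restoration".
hazy = np.clip(reference * 0.6 + 0.8 * 0.4, 0.0, 1.0)
restored = np.clip(reference + rng.normal(0.0, 0.02, reference.shape), 0.0, 1.0)

print(psnr(reference, hazy))      # low score before dehazing
print(psnr(reference, restored))  # higher score after restoration
```

A successful dehazing model should raise such scores relative to the hazy input; the thesis complements these pixel-level indices with detection-level mAP from YOLOv5.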
References: [1] Central Weather Bureau Digital Science Outreach, "City in the Mist: Is It Fog, or Haze?" (in Chinese). https://pweb.cwb.gov.tw/PopularScience/index.php/weather/277
    [2] ImageNet Large Scale Visual Recognition Competition (ILSVRC). http://www.image-net.org/challenges/LSVRC/
[3] National Taiwan University Computer and Information Networking Center Newsletter, Issue 0038, ISSN: 2077-8813 (in Chinese). http://www.cc.ntu.edu.tw/chinese/epaper/0038/20160920_3805.html
    [4] Michael A. Nielsen, "Neural Networks and Deep Learning", Determination Press, 2015. http://neuralnetworksanddeeplearning.com/index.html
    [5] ImageNet Winning CNN Architectures (ILSVRC). https://www.kaggle.com/getting-started/149448
[6] McCartney, E. J. (1976). Optics of the atmosphere: scattering by molecules and particles. New York: John Wiley & Sons.
    [7] He, K., Sun, J., & Tang, X. (2010). Single image haze removal using dark channel prior. IEEE transactions on pattern analysis and machine intelligence, 33(12), 2341-2353.
[8] Zhang, T., Shao, C., & Wang, X. (2011, October). Atmospheric scattering-based multiple images fog removal. In 2011 4th International Congress on Image and Signal Processing (Vol. 1, pp. 108-112). IEEE.
    [9] Zhu, J. X., Meng, L. L., Wu, W. X., Choi, D., & Ni, J. J. (2020). Generative adversarial network-based atmospheric scattering model for image dehazing. Digital Communications and Networks.
    [10] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672-2680).
[11] Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134).
[12] Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).
    [13] Anoosheh, A., Sattler, T., Timofte, R., Pollefeys, M., & Van Gool, L. (2019, May). Night-to-day image translation for retrieval-based localization. In 2019 International Conference on Robotics and Automation (ICRA) (pp. 5958-5964). IEEE.
    [14] Li, Z., & Snavely, N. (2018). Megadepth: Learning single-view depth prediction from internet photos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2041-2050).
    [15] Godard, C., Mac Aodha, O., Firman, M., & Brostow, G. J. (2019). Digging into self-supervised monocular depth estimation. In Proceedings of the IEEE international conference on computer vision (pp. 3828-3838).
[16] Sakaridis, C., Dai, D., & Van Gool, L. (2018). Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 126(9), 973-992.
[17] Zhang, N., Zhang, L., & Cheng, Z. (2017, November). Towards simulating foggy and hazy images and evaluating their authenticity. In International Conference on Neural Information Processing (pp. 405-415). Springer, Cham.
    [18] W. Maddern, G. Pascoe, C. Linegar and P. Newman, "1 Year, 1000km: The Oxford RobotCar Dataset", The International Journal of Robotics Research (IJRR), 2016.
    [19] He, K., Sun, J., & Tang, X. (2010). Single image haze removal using dark channel prior. IEEE transactions on pattern analysis and machine intelligence, 33(12), 2341-2353.
    [20] Cai, B., Xu, X., Jia, K., Qing, C., & Tao, D. (2016). Dehazenet: An end-to- end system for single image haze removal. IEEE Transactions on Image Processing, 25(11), 5187-5198.
    [21] Li, Y., Tan, R. T., & Brown, M. S. (2015). Nighttime haze removal with glow and multiple light colors. In Proceedings of the IEEE international conference on computer vision (pp. 226-234).
    [22] Dong, H., Pan, J., Xiang, L., Hu, Z., Zhang, X., Wang, F., & Yang, M. H. (2020). Multi-Scale Boosted Dehazing Network with Dense Feature Fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2157-2167).
    [23] Qin, X., Wang, Z., Bai, Y., Xie, X., & Jia, H. (2020). FFA-Net: Feature Fusion Attention Network for Single Image Dehazing. In AAAI (pp. 11908-11915).
    [24] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4), 600-612.
    [25] Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 586-595).
[26] Mittal, A., Moorthy, A. K., & Bovik, A. C. (2012). No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 21(12), 4695-4708.
    [27] Mittal, A., Soundararajan, R., & Bovik, A. C. (2012). Making a “completely blind” image quality analyzer. IEEE Signal processing letters, 20(3), 209-212.
    [28] Yu, F., Chen, H., Wang, X., Xian, W., Chen, Y., Liu, F., ... & Darrell, T. (2020). BDD100K: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2636-2645).
    [29] Li, B., Ren, W., Fu, D., Tao, D., Feng, D., Zeng, W., & Wang, Z. (2017). Reside: A benchmark for single image dehazing. arXiv preprint arXiv:1712.04143, 1.
    [30] Jocher, G. (2020). Yolov5. Code repository https://github.com/ultralytics/yolov5.
    [31] Tzutalin. LabelImg. Git code (2015). https://github.com/tzutalin/labelImg
[32] IEEE International Conference on Multimedia and Expo 2020 Grand Challenge – Embedded Deep Learning Object Detection Model Compression Competition for Traffic in Asian Countries. http://2020.ieeeicme.org/www.2020.ieeeicme.org/index.php/grand-challenges/index.html
[33] Oxford Robotics Institute. Software Development Kit for the Oxford RobotCar Dataset. Git code repository. https://github.com/ori-mrg/robotcar-dataset-sdk
[34] Freeway Bureau, Ministry of Transportation and Communications, "Traffic Database" CCTV real-time information (v1.1) (in Chinese). https://tisvcloud.freeway.gov.tw/
Description: Master's thesis
National Chengchi University
In-service Master Program, Department of Computer Science (資訊科學系碩士在職專班)
107971017
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0107971017
    Data Type: thesis
    DOI: 10.6814/NCCU202100281
Appears in Collections: [In-service Master Program, Department of Computer Science] Theses

    Files in This Item:

File: 101701.pdf
Size: 69433 Kb
Format: Adobe PDF


All items in the NCCU Institutional Repository are protected by copyright, with all rights reserved.



Copyright Announcement
1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access to academic research and public education for non-commercial use. Please utilize it in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

2. The NCCU Institutional Repository is maintained to protect the interests of copyright owners. If you believe that any material on the website infringes copyright, please contact our staff (nccur@nccu.edu.tw). We will remove the work from the repository and investigate your claim.