    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/145938


    Title: 階層式深度神經網路及其應用
    Deep Hierarchical Neural Networks and Its Applications
    Authors: 朱家宏
    Chu, Jia-Hong
    Contributors: 廖文宏
    Liao, Wen-Hung
    朱家宏
    Chu, Jia-Hong
    Keywords: Deep learning
    Neural network
    Hierarchical classification
    Adversarial attack
    Date: 2023
    Issue Date: 2023-07-06 17:03:54 (UTC+8)
    Abstract:
    Hierarchical relationships exist in many datasets used for image classification. For example, animal recognition involves species identification, whose labels are organized hierarchically into family, genus, and species. Similarly, ship classification in satellite imagery places finer sub-classes under broad categories such as civilian and military vessels: civilian vessels include fishing boats and yachts, while military vessels include patrol ships and destroyers. For such applications, it is desirable to design models that, given an input image, predict a label at every level of the hierarchy.
    In hierarchical classification, predictions should not only maintain hierarchical consistency (i.e., the predicted sub-class should be a child of the predicted parent class), but should also avoid severe misjudgments in which the parent and child predictions are mutually consistent yet both wrong. Hence, hierarchical models should be evaluated with dedicated metrics rather than the Top-1 accuracy of each level in isolation. To address this, three metrics, namely Aggregated Accuracy, Hierarchy Consistency, and Risk Factor, are proposed to assess both the predictive performance of hierarchical models and the severity of their prediction errors (see the illustrative sketch below).
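    As a rough illustration of how such metrics could be computed, the sketch below assumes one plausible set of definitions; the abstract does not give the thesis's exact formulas, so the definitions, as well as the names coarse_pred, fine_pred, and parent_of, are illustrative assumptions only.

        import numpy as np

        def hierarchical_metrics(coarse_true, fine_true, coarse_pred, fine_pred, parent_of):
            """Illustrative hierarchical metrics (assumed definitions, not the thesis's exact formulas)."""
            coarse_true, fine_true = np.asarray(coarse_true), np.asarray(fine_true)
            coarse_pred, fine_pred = np.asarray(coarse_pred), np.asarray(fine_pred)
            parent_of = np.asarray(parent_of)  # maps each fine-class index to its coarse-class index

            # Hierarchy Consistency: the predicted fine class is a child of the predicted coarse class.
            consistent = parent_of[fine_pred] == coarse_pred
            # Aggregated Accuracy: both levels are predicted correctly.
            both_correct = (coarse_pred == coarse_true) & (fine_pred == fine_true)
            # Risk Factor: internally consistent predictions that are wrong at both levels,
            # i.e., the model has committed to the wrong branch of the hierarchy.
            risky = consistent & (coarse_pred != coarse_true) & (fine_pred != fine_true)

            return {"aggregated_accuracy": both_correct.mean(),
                    "hierarchy_consistency": consistent.mean(),
                    "risk_factor": risky.mean()}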
    In terms of network architecture, this thesis introduces a novel hierarchical model and training method called HCN (Hierarchy-constrained Network). During training, the feature layers of the coarse (parent-level) classifier are fused into those of the fine (child-level) classifier, and two constraints are added to the model's objective function. The first constrains the feature layers: if two images belong to the same parent class, their feature values should be close to each other. The second ensures that the predicted parent- and child-level outputs remain hierarchically consistent (e.g., a predicted child class of dog must correspond to the parent class animal, not truck). A sketch of such an objective follows.
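    A minimal sketch of how these two constraints might enter the training objective, assuming a model with separate coarse and fine heads plus a fine-level feature embedding; the loss forms (an L2 feature-similarity term and a KL-based consistency term), the coefficients lam_sim and lam_cons, and all names here are assumptions for illustration, not the thesis's exact formulation.

        import torch
        import torch.nn.functional as F

        def hcn_style_loss(coarse_logits, fine_logits, fine_feats,
                           coarse_labels, fine_labels, parent_of,
                           lam_sim=0.1, lam_cons=0.1):
            # parent_of: LongTensor mapping each fine-class index to its coarse-class index.
            # Standard classification losses at both levels of the hierarchy.
            loss = (F.cross_entropy(coarse_logits, coarse_labels)
                    + F.cross_entropy(fine_logits, fine_labels))

            # Constraint 1 (feature similarity): pull together the fine-level features
            # of samples in the batch that share the same parent class.
            same_parent = coarse_labels.unsqueeze(0) == coarse_labels.unsqueeze(1)
            same_parent.fill_diagonal_(False)
            if same_parent.any():
                dists = torch.cdist(fine_feats, fine_feats)  # pairwise L2 distances
                loss = loss + lam_sim * dists[same_parent].mean()

            # Constraint 2 (hierarchy consistency): the coarse distribution implied by
            # the fine prediction should agree with the coarse head's prediction.
            fine_probs = fine_logits.softmax(dim=1)
            implied = torch.zeros_like(coarse_logits).index_add_(1, parent_of, fine_probs)
            loss = loss + lam_cons * F.kl_div(coarse_logits.log_softmax(dim=1),
                                              implied, reduction="batchmean")
            return loss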
    Experimental analysis on three datasets, CIFAR100, TinyImageNet-200, and CUB200-2011, demonstrates that the proposed HCN outperforms existing hierarchical models on the proposed hierarchical metrics. Furthermore, because training pulls together the features of child classes that share a parent class, an adversarial attack finds it harder to push the predicted sub-class of an input image into a sub-class of a different parent, thereby enhancing the overall robustness of the model.
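    As a hedged sketch of the kind of robustness probe this suggests, the snippet below uses the single-step FGSM attack (Goodfellow et al., ICLR 2015) and then checks how often the attacked prediction crosses a parent-class boundary; the model, the epsilon value, and the parent_of mapping are placeholders, and the thesis may use stronger attacks (e.g., AutoAttack-style ensembles).

        import torch
        import torch.nn.functional as F

        def fgsm_attack(model, images, labels, eps=8 / 255):
            """One-step FGSM: perturb inputs along the sign of the loss gradient."""
            images = images.clone().detach().requires_grad_(True)
            F.cross_entropy(model(images), labels).backward()
            return (images + eps * images.grad.sign()).clamp(0, 1).detach()

        # Hypothetical usage, assuming `model` returns fine-level logits and
        # `parent_of` maps fine-class indices to coarse-class indices:
        #   adv = fgsm_attack(model, x, y_fine)
        #   fine_pred = model(adv).argmax(dim=1)
        #   cross_parent = (parent_of[fine_pred] != parent_of[y_fine]).float().mean()
        # A lower cross_parent rate means attacks rarely push predictions across
        # parent-class boundaries, which is the robustness effect described above.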
    Description: Master's degree
    National Chengchi University
    Department of Computer Science
    110753112
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0110753112
    Data Type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    File: 311201.pdf (19,830 KB, Adobe PDF)

