    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/152572


    Title: 基於階層式深度神經網路及模型可解釋性之機器遺忘方法
    Machine Unlearning Based on Deep Hierarchical Neural Networks and Explainable AI
    Authors: 歐姸君
    Ou, Yen-Chun
    Contributors: 廖文宏
    Liao, Wen-Hung
    歐姸君
    Ou, Yen-Chun
    Keywords: 深度學習
    神經網路
    階層式分類
    機器遺忘
    可解釋性人工智慧
    Deep Learning
    Neural Network
    Hierarchical Classification
    Machine Unlearning
    XAI
    Date: 2024
    Issue Date: 2024-08-05 12:45:52 (UTC+8)
    Abstract: 隨著機器學習的快速發展,生成式人工智慧模型的推出使得AI不再只是科技專業人員的限定技術,更成為大眾熱切關注的話題。然而,機器學習模型的訓練需要大量的資料,引起了一系列有關智慧財產權和個人隱私的爭議。歐盟的《一般資料保護規則》(GDPR)和美國加州的《加州消費者隱私保護法》(CCPA),規範資料刪除的遺忘權利;然而進入人工智慧與深度學習時代,僅僅刪除資料本身可能是不足的,因為這些資料已經被運用於模型的訓練過程,並留下了痕跡,成為模型的一部分。僅刪除資料而不對模型進行調整,只是處理了表面的問題。因此,本論文希望藉由機器遺忘(Machine Unlearning)方法,快速且有效地將相關資訊從模型中移除,同時確保模型資訊不會遭受潛在攻擊者的成員推斷攻擊和模型反轉攻擊。

    本論文提出一個新的機器遺忘方法,結合階層式模型對於相近類別中的資料之特徵層彼此數值會更相近的特性,維持遺忘後的資料分布與重新訓練一致,遺忘資料集於移除後的模型上表現能如同測試資料集,面對新資料時會被分到接近的類別。另外,由於精確遺忘方法之成本過高,而近似遺忘無法移除所有模型中相關資訊,對此我們希望藉由可解釋性方法,將模型中的遺忘資料集有關資訊在訓練效率高於使用保留資料集重新訓練的情況下,有效地完全移除,同時避免遭受攻擊時會洩漏遺忘資料集。
    As machine learning rapidly advances, the advent of generative artificial intelligence models has transformed AI from a technology exclusive to tech professionals into a topic of widespread public interest. However, training machine learning models requires substantial amounts of data, which has sparked numerous debates concerning intellectual property rights and personal privacy. Regulations such as the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) in the United States have empowered users with control over their personal information, including the right to be forgotten. In the era of artificial intelligence and deep learning, however, simply deleting the data itself may not be sufficient: the data has already been used in the model's training process and has left traces, becoming part of the model. Deleting data without adjusting the model addresses only the surface of the problem. Accordingly, this thesis explores machine unlearning methods that efficiently and effectively remove the relevant information from models, while ensuring that model information remains secure against potential membership inference attacks and model inversion attacks.
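The membership inference threat mentioned above can be illustrated with a minimal, hypothetical sketch (not the thesis's setup; the model outputs are toy numbers). A simple confidence-thresholding attacker exploits the fact that overfitted models tend to be more confident on data they were trained on:

```python
def confidence_attack(prob_rows, threshold=0.9):
    """Flag samples whose top softmax confidence exceeds a threshold
    as suspected training members. Overfitted models are typically
    more confident on data they were trained on, which is the signal
    this simple attack exploits."""
    return [max(row) > threshold for row in prob_rows]

# Toy softmax outputs: confident rows mimic training members,
# uncertain rows mimic unseen data.
probs = [
    [0.97, 0.02, 0.01],  # suspected member
    [0.95, 0.03, 0.02],  # suspected member
    [0.40, 0.35, 0.25],  # likely non-member
    [0.34, 0.33, 0.33],  # likely non-member
]
print(confidence_attack(probs))  # → [True, True, False, False]
```

A successful unlearning method should leave the forgetting set indistinguishable from non-member data under attacks of this kind.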

    In this research, we propose a machine unlearning method that leverages a property of hierarchical models: feature values are numerically closer for data in similar categories. The method keeps the post-unlearning data distribution consistent with that of a retrained model, so that the forgetting set, once removed, performs on the model as the test set does and is assigned to nearby categories when presented as new data. Additionally, because exact unlearning methods are costly and approximate unlearning cannot remove all relevant information from the model, we use Explainable AI to remove all information related to the forgetting set at a training cost lower than retraining on the retained set, while also preventing leakage of the forgetting set under attack.
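One common way to check the criterion described above, namely that the unlearned model treats the forgetting set like unseen test data, is to compare the model's output distributions on the two sets, for instance with the Jensen-Shannon divergence. The sketch below uses hypothetical numbers and is not the thesis's exact evaluation protocol:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions,
    using base-2 logs so the value lies in [0, 1]."""
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]          # normalize to proper distributions
    q = [x / sq for x in q]
    m = [0.5 * (a + b) for a, b in zip(p, q)]
    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Average softmax outputs of the unlearned model on the forgetting set
# and on held-out test data (toy numbers).
forget_avg = [0.30, 0.40, 0.30]
test_avg = [0.32, 0.38, 0.30]
print(round(js_divergence(forget_avg, test_avg), 4))
```

A value near 0 indicates that the unlearned model's behavior on the forgetting set is statistically close to its behavior on held-out data, which is the consistency-with-retraining property described above.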
    Reference: [1] P. P. Ray, “ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope,” Internet of Things and Cyber-Physical Systems, vol. 3, pp. 121–154, 2023.
    [2] P. Covington, J. Adams, and E. Sargin, “Deep neural networks for youtube recommendations,” in Proceedings of the 10th ACM conference on recommender systems, pp. 191–198, 2016.
    [3] Concord Music Group, Inc. v. Anthropic PBC, No. 3:23-cv-01092 (M.D. Tenn. filed Oct. 18, 2023).
    [4] Regulation (EU) 2016/679 of the European Parliament and of the Council (General Data Protection Regulation), Official Journal of the European Union, 2016.
    [5] State of California, “California Consumer Privacy Act (CCPA),” 2020.
    [6] E. Ullah, T. Mai, A. Rao, R. A. Rossi, and R. Arora, “Machine unlearning via algorithmic stability,” in Conference on Learning Theory, pp. 4126–4142, PMLR, 2021.
    [7] L. Bourtoule, V. Chandrasekaran, C. A. Choquette-Choo, H. Jia, A. Travers, B. Zhang, D. Lie, and N. Papernot, “Machine unlearning,” in 2021 IEEE Symposium on Security and Privacy (SP), pp. 141–159, IEEE, 2021.
    [8] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in 2017 IEEE symposium on security and privacy (SP), pp. 3–18, IEEE, 2017.
    [9] Z. He, T. Zhang, and R. B. Lee, “Model inversion attacks against collaborative inference,” in Proceedings of the 35th Annual Computer Security Applications Conference, pp. 148–162, 2019.
    [10] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, “Federated learning: Strategies for improving communication efficiency,” arXiv preprint arXiv:1610.05492, vol. 8, 2016.
    [11] Z. Liu, Y. Jiang, J. Shen, M. Peng, K.-Y. Lam, and X. Yuan, “A survey on federated unlearning: Challenges, methods, and future directions,” arXiv preprint arXiv:2310.20448, 2023.
    [12] L. Graves, V. Nagisetty, and V. Ganesh, “Amnesiac machine learning,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 13, pp. 11516–11524, 2021.
    [13] A. Warnecke, L. Pirch, C. Wressnegger, and K. Rieck, “Machine unlearning of features and labels,” arXiv preprint arXiv:2108.11577, 2021.
    [14] T. Shaik, X. Tao, H. Xie, L. Li, X. Zhu, and Q. Li, “Exploring the landscape of machine unlearning: A survey and taxonomy,” arXiv preprint arXiv:2305.06360, 2023.
    [15] “NeurIPS 2023 Machine Unlearning Challenge.” https://unlearning-challenge.github.io/, 2023. Accessed: 2024/4/5.
    [16] A. Thudi, H. Jia, I. Shumailov, and N. Papernot, “On the necessity of auditable algorithmic definitions for machine unlearning,” in 31st USENIX Security Symposium (USENIX Security 22), pp. 4007–4022, 2022.
    [17] A. K. Tarun, V. S. Chundawat, M. Mandal, and M. Kankanhalli, “Fast yet effective machine unlearning,” IEEE Transactions on Neural Networks and Learning Systems, 2023.
    [18] A. Golatkar, A. Achille, and S. Soatto, “Eternal sunshine of the spotless net: Selective forgetting in deep networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9304–9312, 2020.
    [19] L. Deng, “The mnist database of handwritten digit images for machine learning research [best of the web],” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 141–142, 2012.
    [20] H. Xu, T. Zhu, L. Zhang, W. Zhou, and P. S. Yu, “Machine unlearning: A survey,” 2023.
    [21] J. Wang, S. Guo, X. Xie, and H. Qi, “Federated unlearning via class-discriminative pruning,” in Proceedings of the ACM Web Conference 2022, pp. 622–632, 2022.
    [22] Z. Yan, H. Zhang, R. Piramuthu, V. Jagadeesh, D. DeCoste, W. Di, and Y. Yu, “Hd-cnn: Hierarchical deep convolutional neural networks for large scale visual recognition,” in 2015 IEEE International Conference on Computer Vision (ICCV), pp. 2740–2748, IEEE Computer Society, Dec. 2015.
    [23] D. Roy, P. Panda, and K. Roy, “Tree-cnn: a hierarchical deep convolutional neural network for incremental learning,” Neural networks, vol. 121, pp. 148–160, 2020.
    [24] S. Jiang, T. Xu, J. Guo, and J. Zhang, “Tree-cnn: from generalization to specialization,” EURASIP Journal on Wireless Communications and Networking, vol. 2018, 09 2018.
    [25] X. Zhu and M. Bain, “B-cnn: Branch convolutional neural network for hierarchical classification,” 2017.
    [26] S. Taoufiq, B. Nagy, and C. Benedek, “Hierarchynet: Hierarchical cnn-based urban building classification,” Remote Sensing, vol. 12, no. 22, p. 3794, 2020.
    [27] 朱家宏, “階層式深度神經網路及其應用 (Hierarchical deep neural networks and their applications),” Master’s thesis, National Chengchi University, Taiwan, 2023. National Digital Library of Theses and Dissertations in Taiwan.
    [28] V. S. Chundawat, A. K. Tarun, M. Mandal, and M. Kankanhalli, “Can bad teaching induce forgetting? unlearning in deep networks using an incompetent teacher,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 6, pp. 7210–7217, 2023.
    [29] J. Lin, “Divergence measures based on the shannon entropy,” IEEE Transactions on Information Theory, vol. 37, no. 1, pp. 145–151, 1991.
    [30] L. van der Maaten and G. Hinton, “Visualizing data using t-sne,” Journal of Machine Learning Research, vol. 9, no. 86, pp. 2579–2605, 2008.
    [31] S. Lin, X. Zhang, C. Chen, X. Chen, and W. Susilo, “Erm-ktp: Knowledge-level machine unlearning via knowledge transfer,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20147–20155, 2023.
    [32] A. Krizhevsky, “Learning multiple layers of features from tiny images,” University of Toronto, 05 2012.
    [33] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, “The caltech-ucsd birds-200-2011 dataset,” 2011.
    [34] M. Tan and Q. V. Le, “Efficientnet: Rethinking model scaling for convolutional neural networks,” ArXiv, vol. abs/1905.11946, 2019.
    [35] D. Arthur and S. Vassilvitskii, “K-means++: The advantages of careful seeding,” Proc. of the Annu. ACM-SIAM Symp. on Discrete Algorithms, vol. 8, pp. 1027–1035, 01 2007.
    [36] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” CoRR, vol. abs/1512.03385, 2015.
    [37] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015.
    Description: Master's thesis
    National Chengchi University (國立政治大學)
    Department of Computer Science (資訊科學系)
    111753143
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0111753143
    Data Type: thesis
    Appears in Collections: [Department of Computer Science] Theses ([資訊科學系] 學位論文)

    Files in This Item: 314301.pdf (23661 KB, Adobe PDF)

