    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/147745


    Title: 基於Associated Learning架構優化MEC環境訓練模型之效能
    Optimize the Performance of the Training Model in the MEC Environment based on the Associated Learning Architecture
    Authors: 張皓博
    Chang, Hao-Po
    Contributors: 張宏慶
    Jang, Hung-Chin
    張皓博
    Chang, Hao-Po
    Keywords: Associated Learning
    聯邦學習
    分散式學習
    邊緣運算
    D2D通訊
    Federated Learning
Collaborative Machine Learning
    Mobile Edge Computing
    Device-to-Device Communication
    Date: 2023
    Issue Date: 2023-10-03 10:49:01 (UTC+8)
    Abstract: 近年來,隨著行動通訊網路的進步,邊緣設備的數量及運算能力提升,再加上人工智慧的蓬勃發展,以及資料隱私意識的抬頭,催生出運用邊緣設備訓練模型的分散式機器學習,其中包括聯邦學習以及拆分學習,然而這兩種方法在架構上存在明顯的優缺點。本研究旨在提出一個訓練架構,與聯邦學習相比,不僅能達到相似的模型準確度,同時在訓練過程中也能減少邊緣設備的運算量以及降低邊緣伺服器的流量,並且改善使用模型時的延遲,進一步提升使用者體驗。為了實現這一目標,在系統架構中採用兩層式設計,提出一個啟發式的分群演算法,群組內各邊緣設備只訓練部分模型,邊緣設備間使用設備到設備通訊技術,利用Associated Learning架構來解決拆分模型後反向傳播的流量問題,此外群組內僅透過主設備與邊緣伺服器通訊,進一步降低了邊緣伺服器的流量負擔。為了驗證本研究是否有達成預期指標,模擬實驗中採用PyTorch及ns3進行模擬,從實驗結果可以驗證本研究相較於聯邦學習在實驗中有更佳的準確度,且透過Associated Learning特色能降低使用時的延遲,提升使用者體驗,針對特定情況下也能夠降低邊緣設備運算量及邊緣伺服器流量,最後提出本研究可優化之部分,並歸納出未來學者可持續往安全性、更通用的架構、更合乎現實情況的模擬等方向研究。
In recent years, with the advancement of cellular networks, the number and computing power of edge devices have increased. The rapid development of artificial intelligence, together with rising awareness of data privacy, has spawned distributed machine learning approaches that train models on edge devices, including federated learning and split learning. However, both architectures have clear advantages and disadvantages. This study proposes a training framework that, compared with federated learning, not only achieves similar model accuracy but also reduces the computation on edge devices and the traffic at the edge server during training, while improving inference latency and thus the user experience. To this end, the system adopts a two-layer design together with a heuristic grouping algorithm. Each edge device in a group trains only part of the model and communicates with the other devices over Device-to-Device links. The Associated Learning architecture is used to decouple the gradient dependencies of backpropagation when updating model parameters, which is expected to reduce the computation required to train the model. After grouping, a multi-objective function selects the master edge device, and the group communicates with the edge server only through that master device, which is expected to reduce the edge server's traffic. To verify whether the study meets these goals, experiments were simulated with PyTorch and ns-3. The results show that the proposed approach attains better accuracy than federated learning in our experiments; thanks to the Associated Learning design, it reduces inference latency and improves the user experience, and under certain circumstances it also reduces the computing load of edge devices and the traffic of edge servers.
Finally, the aspects of this work that can still be optimized are discussed, and directions for future research are summarized, including security, a more general architecture, and more realistic simulation.
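The decoupling that Associated Learning provides — each block of the split model trains against its own local loss through a small bridge head, so no gradient ever crosses a block (or device) boundary — can be sketched as follows. This is a minimal NumPy illustration under assumed toy dimensions, not the thesis's actual PyTorch implementation; the bridge-head shapes, learning rate, and synthetic regression task are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression task: learn y = A @ x from 64 samples.
A = rng.normal(size=(2, 4))
X = rng.normal(size=(4, 64))
Y = A @ X

# Two "blocks" of a split model. In the Associated Learning style,
# block 1 trains against a local loss through its own bridge head (V1),
# so no gradient crosses the block boundary: block 2 treats the
# inter-block activation as a constant (like .detach() in PyTorch).
W1 = 0.1 * rng.normal(size=(8, 4))   # block 1: feature transform
V1 = 0.1 * rng.normal(size=(2, 8))   # block 1: local bridge head
W2 = 0.1 * rng.normal(size=(2, 8))   # block 2: final predictor

lr, losses = 0.1, []
for step in range(2000):
    H = W1 @ X                        # block-1 activation

    # Block 1: local MSE loss through the bridge head; both gradients
    # are computed from block-1's own tensors only.
    E1 = 2 * (V1 @ H - Y) / Y.size
    gV1 = E1 @ H.T
    gW1 = (V1.T @ E1) @ X.T
    V1 -= lr * gV1
    W1 -= lr * gW1

    # Block 2: trains on a *detached* copy of H; its error signal
    # never reaches W1 — the decoupling AL provides.
    Hd = H.copy()
    E2 = 2 * (W2 @ Hd - Y) / Y.size
    W2 -= lr * (E2 @ Hd.T)
    losses.append(float(np.mean((W2 @ Hd - Y) ** 2)))

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because each block's update depends only on locally held tensors, the two updates can run on different edge devices with only the forward activation exchanged over a D2D link — the property the thesis exploits to avoid sending backpropagation gradients between devices.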
    Reference: [1] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, "Deep learning with differential privacy," in Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, 2016, pp. 308-318.
    [2] T. T. Anh, N. C. Luong, D. Niyato, D. I. Kim, and L.-C. Wang, "Efficient training management for mobile crowd-machine learning: A deep reinforcement learning approach," IEEE Wireless Communications Letters, vol. 8, no. 5, pp. 1345-1348, 2019.
    [3] Y. Cai and T. Wei, "Efficient Split Learning with Non-iid Data," in 2022 23rd IEEE International Conference on Mobile Data Management (MDM), 2022: IEEE, pp. 128-136.
    [4] M. Fan, C. Chen, C. Wang, W. Zhou, and J. Huang, "On the Robustness of Split Learning against Adversarial Attacks," arXiv preprint arXiv:2307.07916, 2023.
    [5] A. Imteaj, U. Thakker, S. Wang, J. Li, and M. H. Amini, "A survey on federated learning for resource-constrained IoT devices," IEEE Internet of Things Journal, vol. 9, no. 1, pp. 1-24, 2021.
    [6] J. Jeon and J. Kim, "Privacy-sensitive parallel split learning," in 2020 International Conference on Information Networking (ICOIN), 2020: IEEE, pp. 7-9.
    [7] M. S. Jere, T. Farnan, and F. Koushanfar, "A taxonomy of attacks on federated learning," IEEE Security & Privacy, vol. 19, no. 2, pp. 20-28, 2020.
[8] J.-P. Jung, Y.-B. Ko, and S.-H. Lim, "Resource Efficient Cluster-Based Federated Learning for D2D Communications," in 2022 IEEE 95th Vehicular Technology Conference (VTC2022-Spring), 2022: IEEE, pp. 1-5.
    [9] T. Li, M. Sanjabi, A. Beirami, and V. Smith, "Fair resource allocation in federated learning," arXiv preprint arXiv:1905.10497, 2019.
    [10] W. Y. B. Lim et al., "Federated learning in mobile edge networks: A comprehensive survey," IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 2031-2063, 2020.
    [11] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Artificial intelligence and statistics, 2017: PMLR, pp. 1273-1282.
    [12] J. Nguyen, K. Malik, H. Zhan, A. Yousefpour, M. Rabbat, M. Malek, and D. Huba, "Federated learning with buffered asynchronous aggregation," in International Conference on Artificial Intelligence and Statistics, 2022: PMLR, pp. 3581-3607.
    [13] T. Nishio and R. Yonetani, "Client selection for federated learning with heterogeneous resources in mobile edge," in ICC 2019-2019 IEEE international conference on communications (ICC), 2019: IEEE, pp. 1-7.
    [14] D. Pasquini, G. Ateniese, and M. Bernaschi, "Unleashing the tiger: Inference attacks on split learning," in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021, pp. 2113-2129.
    [15] R. Rouil, F. J. Cintrón, A. Ben Mosbah, and S. Gamboa, "Implementation and Validation of an LTE D2D Model for ns-3," in Proceedings of the 2017 Workshop on ns-3, 2017, pp. 55-62.
[16] R. A. Rouil. "Public Safety Communications Simulation Tool (ns3 based)." https://www.nist.gov/services-resources/software/public-safety-communications-simulation-tool-ns3-based (accessed Feb. 2021).
    [17] J. Ryu, D. Won, and Y. Lee, "A Study of Split Learning Model," in IMCOM, 2022, pp. 1-4.
    [18] H. S. Sikandar, H. Waheed, S. Tahir, S. U. Malik, and W. Rafique, "A Detailed Survey on Federated Learning Attacks and Defenses," Electronics, vol. 12, no. 2, p. 260, 2023.
    [19] C. Thapa, P. C. M. Arachchige, S. Camtepe, and L. Sun, "Splitfed: When federated learning meets split learning," in Proceedings of the AAAI Conference on Artificial Intelligence, 2022, vol. 36, no. 8, pp. 8485-8493.
    [20] V. Turina, Z. Zhang, F. Esposito, and I. Matta, "Federated or split? a performance and privacy analysis of hybrid split and federated learning architectures," in 2021 IEEE 14th International Conference on Cloud Computing (CLOUD), 2021: IEEE, pp. 250-260.
    [21] P. Vepakomma, O. Gupta, T. Swedish, and R. Raskar, "Split learning for health: Distributed deep learning without sharing raw patient data," arXiv preprint arXiv:1812.00564, 2018.
    [22] D. Y. Wu, D. Lin, V. Chen, and H.-H. Chen, "Associated Learning: an Alternative to End-to-End Backpropagation that Works on CNN, RNN, and Transformer," in International Conference on Learning Representations, 2021.
    [23] N. Yoshida, T. Nishio, M. Morikura, K. Yamamoto, and R. Yonetani, "Hybrid-FL for wireless networks: Cooperative learning mechanism using non-IID data," in ICC 2020-2020 IEEE International Conference On Communications (ICC), 2020: IEEE, pp. 1-7.
    [24] X. Zhang, Y. Liu, J. Liu, A. Argyriou, and Y. Han, "D2D-assisted federated learning in mobile edge computing networks," in 2021 IEEE Wireless Communications and Networking Conference (WCNC), 2021: IEEE, pp. 1-7.
    [25] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, "Federated learning with non-iid data," arXiv preprint arXiv:1806.00582, 2018.
Description: 碩士 (Master's thesis)
國立政治大學 (National Chengchi University)
資訊科學系 (Department of Computer Science)
110753113
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0110753113
    Data Type: thesis
    Appears in Collections:[資訊科學系] 學位論文

    Files in This Item:

File: 311301.pdf (3833 KB, Adobe PDF)

