    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/142640


    Title: 基於異質型偏好排序表示法之整合圖結構資訊於改進推薦系統效能
    Improving Recommendation Performance via Incorporating Graph Structural Information based on Heterogeneous Preference Embedding
    Authors: 張麒竑
    Chang, Chi-Hung
    Contributors: 蔡銘峰
    Tsai, Ming-Feng
    張麒竑
    Chang, Chi-Hung
    Keywords: 推薦系統
    圖形學習
    圖形結構
    Recommender system
    Graph learning
    Graph structure
    Date: 2022
    Issue Date: 2022-12-02 15:20:18 (UTC+8)
    Abstract: 推薦系統(Recommendation System)發展至今已有三十餘年,從最初較為簡單的暢銷品(Best-Seller)推薦,到有參考他人和商品資訊的方法,如傳統的協同過濾(Collaborative Filtering)演算法和基於內容過濾(Content-Based Filtering);後續進階有將多種方法混和的方法(Hybrid Method),以及近年相當盛行的使用了機器學習(Machine Learning)和深度學習(Deep Learning)的各式先進模型。然而,現在先進的模型或引入知識圖譜(Knowledge Graph),或加入神經網路(Neural Network),雖然能確實的提升模型訓練預測的準確度,但除了會耗費較長的訓練時間及記憶體空間消耗外,某些看似隱含著有助於預測的資訊也有可能會被忽略而未被訓練模型考慮,「圖形結構」即為一個可能隱含正向幫助的資訊,但是鮮少有將此資訊應用至模型訓練當中。

    在本篇當中,我們提出了 HPEstruc 方法。此方法發想於 Facebook 團隊在 2018 年提出的 SEAL(Learning from Subgraphs, Embeddings, and Attributes for Link prediction)模型,除了使用了深度學習進行訓練外,另外將「圖形結構」的資訊加入到模型中,並且在預測任務中得到了相當不錯的成績。因此我們認為,「圖形結構」對於推薦模型的訓練及預測,應能帶來正向的幫助。於是,在本篇當中,我們選定了異質型偏好排序表示法(Heterogeneous Preference Embedding,HPE)作為推薦的訓練模型,並且使用了可以將圖形結構轉換成向量表示法的 struc2vec 來進行圖形結構的擷取,將擷取到的圖形結構資訊加入異質型偏好排序表示法訓練模型當中,比較「圖形結構」對於推薦模型的訓練及預測是否有幫助。從實驗結果可以得知,HPEstruc 在社群網路類型的資料集可以得到比原始 HPE 模型更好的預測準確度,證明了「圖形結構」對於社群網路類型的資料,在推薦預測上是有所幫助的。

    除了比較「圖形結構資訊加入與否」對於推薦模型預測的效果是否有改善外,另外有將預測結果與現今廣泛被使用的深度學習模型預測結果進行比較,以及與較為先進、預測結果較佳的 SEAL 模型進行比較,並且針對 HPEstruc 與 SEAL 模型作法上的差異進行深入探討與比較。
    Recommender systems have been developed for roughly thirty years, evolving from simple best-seller recommendations to traditional methods such as collaborative filtering and content-based filtering, then to hybrid methods that combine multiple approaches, and in recent years to advanced models built on machine learning and deep learning. Although state-of-the-art models that incorporate knowledge graphs or neural networks do improve prediction accuracy, they require substantial training time and memory, and information that could aid prediction may still be overlooked during training. Graph structure is one such piece of information, yet it is rarely incorporated into recommendation models, which makes leveraging graph structural information a potentially crucial research direction.

    In this thesis, we propose a training method named HPEstruc, inspired by SEAL (Learning from Subgraphs, Embeddings, and Attributes for Link prediction). SEAL trains with deep learning, incorporates graph structural information into the model, and achieves strong results on link-prediction tasks. We therefore believe that graph structural information can positively influence the training and prediction of a recommender model. Accordingly, we choose Heterogeneous Preference Embedding (HPE) as our training model and use struc2vec, which converts graph structure into embeddings, to extract the graph structural information. We then add the extracted structural information to the HPE model to examine whether it improves training and prediction. The results show that HPEstruc achieves better prediction accuracy than the original HPE model on social-network datasets, demonstrating that graph structure is helpful for prediction on this type of recommendation problem.

    Furthermore, we compare the results of HPEstruc with those of widely used deep-learning prediction models and with SEAL, one of the state-of-the-art models, and we discuss the methodological differences between HPEstruc and SEAL in detail.
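The pipeline described in the abstract (extract structural embeddings with struc2vec, fuse them with HPE preference embeddings, then rank items by embedding similarity) can be sketched as follows. This is a minimal illustration with random stand-in vectors, not the thesis code: the names `combine` and `recommend` and the concatenation-based fusion are assumptions, since the abstract does not specify how the two embedding spaces are merged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: in the thesis, preference embeddings come from HPE and
# structural embeddings from struc2vec; here we use random vectors.
n_users, n_items, d_pref, d_struc = 4, 6, 8, 4
user_pref = rng.normal(size=(n_users, d_pref))
item_pref = rng.normal(size=(n_items, d_pref))
user_struc = rng.normal(size=(n_users, d_struc))
item_struc = rng.normal(size=(n_items, d_struc))

def combine(pref, struc):
    """Fuse preference and structural embeddings by concatenation
    (one plausible fusion; the actual HPEstruc fusion may differ)."""
    return np.concatenate([pref, struc], axis=1)

user_emb = combine(user_pref, user_struc)
item_emb = combine(item_pref, item_struc)

def recommend(u, k=3):
    """Return the top-k item indices for user u by inner-product score."""
    scores = item_emb @ user_emb[u]
    return np.argsort(-scores)[:k]

print(recommend(0))
```

The point of the sketch is only the data flow: the structural vectors enter the same ranking computation as the preference vectors, so structural similarity between nodes can influence the final recommendation order.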
    Reference: [1] N. S. Altman. An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician, 46(3):175–185, 1992.
    [2] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
    [3] J. S. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, UAI’98, pages 43–52, San Francisco, CA, USA, 1998. Morgan Kaufmann Publishers Inc.
    [4] C.-M. Chen, M.-F. Tsai, Y.-C. Lin, and Y.-H. Yang. Query-based music recommendations via preference embedding. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys ’16, pages 79–82, New York, NY, USA, 2016. Association for Computing Machinery.
    [5] G. E. Dahl, T. N. Sainath, and G. E. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8609–8613, 2013.
    [6] L. Ehrlinger and W. Wöß. Towards a definition of knowledge graphs. SEMANTiCS (Posters, Demos, SuCCESS), 48(1-4):2, 2016.
    [7] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982.
    [8] K. S. Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation, 1972.
    [9] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR ’17, 2017.
    [10] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
    [11] P. Liang. Semi-supervised learning for natural language. PhD thesis, Massachusetts Institute of Technology, 2005.
    [12] D. G. Lowe. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pages 1150–1157. IEEE, 1999.
    [13] H. P. Luhn. A statistical approach to mechanized encoding and searching of literary information. IBM J. Res. Dev., 1(4):309–317, 1957.
    [14] J. Mairal, J. Ponce, G. Sapiro, A. Zisserman, and F. Bach. Supervised dictionary learning. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 21. Curran Associates, Inc., 2008.
    [15] W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4):115–133, 1943.
    [16] A. A. Mohammed and V. Umaashankar. Effectiveness of hierarchical softmax in large scale classification tasks. In 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pages 1090–1094, 2018.
    [17] M. Müller. Dynamic time warping. Information Retrieval for Music and Motion, pages 69–84, 2007.
    [18] K. O’Shea and R. Nash. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458, 2015.
    [19] J. Qiu, Q. Chen, Y. Dong, J. Zhang, H. Yang, M. Ding, K. Wang, and J. Tang. GCC: Graph contrastive coding for graph neural network pre-training. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’20, pages 1150–1160, New York, NY, USA, 2020. Association for Computing Machinery.
    [20] L. F. Ribeiro, P. H. Saverese, and D. R. Figueiredo. struc2vec: Learning node representations from structural identity. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, pages 385–394, New York, NY, USA, 2017. Association for Computing Machinery.
    [21] S. E. Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, M. Gatford, et al. Okapi at TREC-3. NIST Special Publication SP, 109:109, 1995.
    [22] F. Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6):386, 1958.
    [23] G. Salton and C. Buckley. Term-weighting approaches in automatic text retrieval. Information processing & management, 24(5):513–523, 1988.
    [24] J. Schmidhuber. Deep learning in neural networks: An overview. Neural networks, 61:85–117, 2015.
    [25] Y. Shoham. Combining content-based and collaborative recommendation. Communications of the ACM, 1997.
    [26] C. Sun and G. Wu. Adaptive graph diffusion networks with hop-wise attention. arXiv preprint arXiv:2012.15024, 2020.
    [27] H. Wang, F. Zhang, J. Wang, M. Zhao, W. Li, X. Xie, and M. Guo. RippleNet: Propagating user preferences on the knowledge graph for recommender systems. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM ’18, pages 417–426, New York, NY, USA, 2018. Association for Computing Machinery.
    [28] X. Wang, Y. Xu, X. He, Y. Cao, M. Wang, and T.-S. Chua. Reinforced negative sampling over knowledge graph for recommendation. In Proceedings of The Web Conference 2020, WWW ’20, pages 99–109, New York, NY, USA, 2020. Association for Computing Machinery.
    [29] Z. Wang, G. Lin, H. Tan, Q. Chen, and X. Liu. CKAN: Collaborative knowledge-aware attentive network for recommender systems. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’20, pages 219–228, New York, NY, USA, 2020. Association for Computing Machinery.
    [30] Z. Wang, Y. Zhou, L. Hong, Y. Zou, and H. Su. Pairwise learning for neural link prediction. arXiv preprint arXiv:2112.02936, 2021.
    [31] H. Yin, M. Zhang, Y. Wang, J. Wang, and P. Li. Algorithm and system co-design for efficient subgraph-based graph representation learning. arXiv preprint arXiv:2202.13538, 2022.
    [32] F. Zhang, N. J. Yuan, D. Lian, X. Xie, and W.-Y. Ma. Collaborative knowledge base embedding for recommender systems. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 353–362, New York, NY, USA, 2016. Association for Computing Machinery.
    [33] M. Zhang and Y. Chen. Link prediction based on graph neural networks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS’18, pages 5171–5181, Red Hook, NY, USA, 2018. Curran Associates Inc.
    [34] M. Zhang, Z. Cui, M. Neumann, and Y. Chen. An end-to-end deep learning architecture for graph classification. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), 2018.
    Description: Master's thesis
    National Chengchi University (國立政治大學)
    Department of Computer Science (資訊科學系)
    109753119
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0109753119
    Data Type: thesis
    DOI: 10.6814/NCCU202201707
    Appears in Collections: [資訊科學系] 學位論文 (Department of Computer Science Theses)

    Files in This Item:

    311901.pdf (1710 KB, Adobe PDF)



