政大機構典藏-National Chengchi University Institutional Repository(NCCUR):Item 140.119/159417
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/159417


    Title: 結合高密度連結子圖檢索以提升大型語言模型的知識圖譜提示能力
    Enhancing Knowledge Graph Prompting for Large Language Models Based on Densely Connected Subgraph Retrieval
    Authors: 陳品伃 (Chen, Pin-Yu)
    Contributors: 沈錳坤 (Shan, Man-Kwan); 陳品伃 (Chen, Pin-Yu)
    Keywords: Knowledge Graph
    Subgraph Retrieval
    Medical Question Answering
    Prompt Engineering
    Large Language Model
    Date: 2025
    Issue Date: 2025-09-01 16:58:06 (UTC+8)
    Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation, yet their effectiveness in professional domains such as healthcare and law remains limited by challenges in complex reasoning and explainability. To address these limitations, we propose a knowledge graph-assisted reasoning framework that leverages Densely Connected Subgraph Retrieval to extract structurally cohesive and semantically relevant subgraphs from a knowledge graph. Within these subgraphs, reasoning paths are identified through a combination of shortest-path search and node-importance-based path search, and subsequently transformed into natural language to guide LLMs via prompt engineering. We evaluate this approach on a medical question-answering task built from doctor-patient dialogues (patient complaints and physician replies), assessing reasoning quality and explainability with GPT-4o Ranking and BERTScore. Experimental results show that our method improves both the reasoning quality and interpretability of LLM outputs, underscoring its potential for deployment in high-stakes professional applications.
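    The pipeline the abstract describes can be sketched roughly as follows. This is a minimal, hypothetical illustration (not the thesis implementation): dense subgraph retrieval is approximated with greedy degree peeling around the query entities, the reasoning path with a BFS shortest path restricted to that subgraph, and verbalization with a simple triple-to-text join; all entity and relation names are invented for the example, and the importance-based path search is omitted for brevity.

    ```python
    # Hypothetical sketch of the abstract's pipeline: (1) retrieve a densely
    # connected subgraph around the query entities, (2) find a shortest
    # reasoning path inside it, (3) verbalize the path for an LLM prompt.
    from collections import defaultdict, deque

    def build_adj(triples):
        """Undirected adjacency over (head, relation, tail) triples."""
        adj = defaultdict(set)
        for h, _, t in triples:
            adj[h].add(t)
            adj[t].add(h)
        return adj

    def dense_subgraph(triples, seeds):
        """Greedy peeling: repeatedly drop the lowest-degree non-seed node,
        keeping the snapshot with the best average degree (edges / nodes)."""
        adj = build_adj(triples)
        nodes = set(adj)
        def density(ns):
            edges = sum(1 for u in ns for v in adj[u] if v in ns) / 2
            return edges / len(ns) if ns else 0.0
        best, best_d = set(nodes), density(nodes)
        while len(nodes) > len(seeds):
            cand = [n for n in nodes if n not in seeds]
            if not cand:
                break
            n = min(cand, key=lambda x: sum(1 for v in adj[x] if v in nodes))
            nodes.discard(n)
            d = density(nodes)
            if d >= best_d:  # at equal density, prefer the tighter subgraph
                best, best_d = set(nodes), d
        return best

    def shortest_path(triples, src, dst, allowed):
        """BFS shortest path restricted to the retrieved subgraph."""
        adj = build_adj(triples)
        parent, queue = {src: None}, deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                path = []
                while u is not None:
                    path.append(u)
                    u = parent[u]
                return path[::-1]
            for v in adj[u]:
                if v in allowed and v not in parent:
                    parent[v] = u
                    queue.append(v)
        return None

    def verbalize(triples, path):
        """Turn a node path into a natural-language line for the prompt."""
        rel = {(h, t): r for h, r, t in triples}
        parts = []
        for u, v in zip(path, path[1:]):
            r = rel.get((u, v)) or rel.get((v, u), "related to")
            parts.append(f"{u} {r} {v}")
        return "; ".join(parts)
    ```

    On a toy medical graph such as `[("fever", "suggests", "infection"), ("infection", "treated by", "antibiotic"), ("fever", "treated by", "antibiotic"), ("fever", "measured with", "thermometer"), ("antibiotic", "sold at", "pharmacy")]` with seeds `{"fever", "antibiotic"}`, peeling keeps the dense triangle and discards the pendant nodes, and the verbalized shortest path becomes a prompt line like `fever treated by antibiotic`. The thesis additionally selects paths by node importance (in the spirit of PageRank, ref. [21]), which this sketch leaves out.
    
    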
    Reference: [1] Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. Bang, A. Madotto and P. Fung, “Survey of Hallucination in Natural Language Generation,” ACM Computing Surveys, vol. 55, no. 12, pp. 1-38, 2023.
    [2] S. Pan, L. Luo, Y. Wang, C. Chen, J. Wang and X. Wu, “Unifying Large Language Models and Knowledge Graphs: A Roadmap,” IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 7, pp. 3580-3599, July 2024.
    [3] L. Luo, Y. Li, G. Haffari, and S. Pan, “Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning,” International Conference on Learning Representations, Vienna, Austria, 2024.
    [4] Y. Wen, Z. Wang and J. Sun, “MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models,” 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, 2024.
    [5] D. Edge, H. Trinh, N. Cheng, J. Bradley, A. Chao, A. Mody and S. Truitt, “From Local to Global: A Graph RAG Approach to Query-Focused Summarization,” arXiv preprint arXiv:2404.16130, 2024.
    [6] M. Sozio and A. Gionis, “The Community-Search Problem and How to Plan a Successful Cocktail Party,” 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’10), pp. 939–948, Washington, DC, USA, 2010.
    [7] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le and D. Zhou, “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,” 36th Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, LA, USA, 2022.
    [8] T. Kojima, S. Gu, M. Reid, Y. Matsuo and Y. Iwasawa, “Large Language Models are Zero-Shot Reasoners,” 36th Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, LA, USA, 2022.
    [9] X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery and D. Zhou, “Self-Consistency Improves Chain of Thought Reasoning in Language Models,” arXiv preprint arXiv:2203.11171, 2023.
    [10] X. Xu, C. Tao, T. Shen, C. Xu, H. Xu, G. Long and J. Lou, “Re-Reading Improves Reasoning in Large Language Models,” 2024 Conference on Empirical Methods in Natural Language Processing, pp. 15549-15575, Miami, FL, USA, 2024.
    [11] S. Yao, D. Yu, J. Zhao, I. Shafran, T. Griffiths, Y. Cao and K. Narasimhan, “Tree of Thoughts: Deliberate Problem Solving with Large Language Models,” 37th Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, LA, USA, 2023.
    [12] M. Besta, N. Blach, A. Kubicek, R. Gerstenberger, M. Podstawski, L. Gianinazzi, J. Gajda, T. Lehmann, H. Niewiadomski, P. Nyczyk and T. Hoefler, “Graph of Thoughts: Solving Elaborate Problems with Large Language Models,” 38th AAAI Conference on Artificial Intelligence, vol. 38, no. 16, Vancouver, Canada, 2024.
    [13] J. Sun, C. Xu, L. Tang, S. Wang, C. Lin, Y. Gong, L. Ni, H. Shum and J. Guo, “Think-On-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph,” 12th International Conference on Learning Representations, Vienna, Austria, 2024.
    [14] B. Jiang, Y. Wang, Y. Luo, D. He, P. Cheng and L. Gao, “Reasoning on Efficient Knowledge Paths: Knowledge Graph Guides Large Language Model for Domain Question Answering,” 2024 IEEE International Conference on Knowledge Graph (ICKG), Abu Dhabi, United Arab Emirates, 2024.
    [15] M. Jia, J. Duan, Y. Song and J. Wang, “medIKAL: Integrating Knowledge Graphs as Assistants of LLMs for Enhanced Clinical Diagnosis on EMRs,” arXiv preprint arXiv:2406.14326, 2024.
    [16] L. Wei, G. Xiao and M. Balazinska, “RACOON: An LLM-based Framework for Retrieval-Augmented Column Type Annotation with a Knowledge Graph,” arXiv preprint arXiv:2409.14556, 2024.
    [17] M. Dehghan, M. Alomrani, S. Bagga, D. Alfonso-Hermelo, K. Bibi, A. Ghaddar, Y. Zhang, X. Li, J. Hao, Q. Liu, J. Lin, B. Chen, P. Parthasarathi, M. Biparva and M. Rezagholizadeh, “EWEK-QA: Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems,” arXiv preprint arXiv:2406.10393, 2024.
    [18] W. Xie, X. Liang, Y. Liu, K. Ni, H. Cheng and Z. Hu, “WeKnow-RAG: An Adaptive Approach for Retrieval-Augmented Generation Integrating Web Search and Knowledge Graphs,” arXiv preprint arXiv:2408.07611, 2024.
    [19] V. Sanh, L. Debut, J. Chaumond and T. Wolf, “DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter,” arXiv preprint arXiv:1910.01108, 2019.
    [20] Y. Li, Z. Li, K. Zhang, R. Dan, S. Jiang and Y. Zhang, “ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model Meta-AI (LLaMA) Using Medical Domain Knowledge,” Cureus, vol. 15, no. 6, e40895, 2023.
    [21] S. Brin and L. Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Computer Networks and ISDN Systems, vol. 30, no. 1-7, pp. 107–117, 1998.
    Description: Master's thesis
    National Chengchi University
    Department of Computer Science
    112753204
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0112753204
    Data Type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    File: 320401.pdf | Size: 4844 KB | Format: Adobe PDF


    All items in 政大典藏 are protected by copyright, with all rights reserved.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository, provided free of charge for non-commercial purposes such as academic research and public education. Please use the content in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. Every effort has been made in building this website to avoid infringing the rights of copyright owners. If you believe that any material on the site nevertheless infringes copyright, please contact the maintainers (nccur@nccu.edu.tw); the work will be removed promptly and the claim investigated.
    DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTU Library IR team