    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/134023


    Title: 可解釋性人工智慧的研究架構及書目計量學分析
    Explainable Artificial Intelligence: Research Framework and A Bibliometric Analysis
    Authors: 楊承祐 (Yang, Cheng-Yu)
    Contributors: 梁定澎 (Liang, Ting-Peng)
    彭志宏 (Peng, Chih-Hung)
    楊承祐 (Yang, Cheng-Yu)
    Keywords: Explainable Artificial Intelligence
    Deep Learning
    Black-box Model
    Research Framework
    Bibliometric Analysis
    Date: 2021
    Issue Date: 2021-03-02 14:19:34 (UTC+8)
    Abstract: In recent years, as the field of artificial intelligence has advanced, the black-box models of deep learning have made prediction results difficult to understand, creating bottlenecks for AI development at the technical, legal, economic, and social levels. Whether explainable decision factors can be extracted from opaque black-box models has therefore become a crucial and urgent research direction, known as eXplainable Artificial Intelligence (XAI).
    Academic research on explainable artificial intelligence, however, is still at an early stage of development and lacks a complete context and a comprehensive synthesis. The main purpose of this study is therefore to compile and analyze previously published literature on XAI-related research topics, summarize the current state of development, clarify existing problems, and propose a research framework for future researchers. This study collects the existing XAI literature through the Web of Science database platform and applies bibliometric analysis, assisted by the VOSviewer software, to analyze the literature quantitatively and visually and to compile the academically important publications. It also provides a structured synthesis of XAI techniques and evaluation methods, offering a basic technical understanding to promote related research. Finally, it summarizes the open problems and limitations of current XAI research to provide directions for future researchers.
    Recently, Artificial Intelligence (AI) and deep learning have become popular in predictive modeling and decision making, but the process of producing results is not transparent and sometimes hard to understand. This has become a bottleneck for adopting artificial intelligence from technical, legal, economic, and social aspects. Hence, making AI decisions explainable from the opaque black-box model has become an important and imperative research direction, called eXplainable Artificial Intelligence (XAI). A number of papers related to XAI have been published in different areas, but explainability involves so many different issues that it is hard for researchers interested in entering the area to obtain a complete profile. The purpose of this research is to conduct a bibliometric analysis to provide a comprehensive overview of the explainable artificial intelligence literature. Published literature is identified, sorted, and clarified to build a research framework that can guide researchers. Based on our findings, future research issues and constraints of explainable artificial intelligence are identified. The findings of this research shed much light on understanding the current status and future directions of XAI.
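    The core counting step of the bibliometric workflow described above (export records from Web of Science, count keyword co-occurrences, then map them in a tool such as VOSviewer) can be sketched minimally as follows. The `records` data and the function name `cooccurrence_counts` are hypothetical illustrations, not the thesis's actual dataset or code:

    ```python
    from itertools import combinations
    from collections import Counter

    # Hypothetical sample: each entry is the keyword list of one publication,
    # standing in for records exported from Web of Science.
    records = [
        ["explainable artificial intelligence", "deep learning", "black-box model"],
        ["explainable artificial intelligence", "deep learning"],
        ["explainable artificial intelligence", "bibliometric analysis"],
        ["deep learning", "black-box model"],
    ]

    def cooccurrence_counts(records):
        """Count how often each unordered keyword pair appears in the same record."""
        pairs = Counter()
        for keywords in records:
            # Deduplicate within a record, sort for a canonical pair order.
            for a, b in combinations(sorted(set(keywords)), 2):
                pairs[(a, b)] += 1
        return pairs

    counts = cooccurrence_counts(records)
    # Frequent pairs form the strongest links in a co-occurrence map.
    print(counts.most_common(3))
    ```

    Tools like VOSviewer build on such raw counts by normalizing them (for example with the association-strength measure discussed in reference [35]) before laying out the keyword map.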
    Reference: [1] Arras, L., Horn, F., Montavon, G., Müller, K. R., & Samek, W. (2017). " What is relevant in a text document?": An interpretable machine learning approach. PloS one, 12(8), e0181142.
    [2] Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
    [3] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K. R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7), e0130140.
    [4] Biran, O., & Cotton, C. (2017). Explanation and justification in machine learning: A survey. In IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), Vol. 8, No. 1, pp. 8-13.
    [5] Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015, August). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1721-1730).
    [6] Molnar, C. (2019). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Available at https://christophm.github.io/interpretable-ml-book/.
    [7] Cobo, M. J., López‐Herrera, A. G., Herrera‐Viedma, E., & Herrera, F. (2012). SciMAT: A new science mapping analysis software tool. Journal of the American Society for Information Science and Technology, 63(8), 1609-1630.
    [8] Defense Advanced Research Projects Agency [DARPA]. (2016). Explainable Artificial Intelligence (XAI) Program. Retrieved from: https://www.darpa.mil/attachments/DARPA-BAA-16-53.pdf
    [9] Doran, D., Schulz, S., & Besold, T. R. (2017). What does explainable AI really mean? A new conceptualization of perspectives. arXiv preprint arXiv:1710.00794.
    [10] Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608v2.
    [11] Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5), 1189-1232.
    [12] Gall, R. (2018). Machine learning explainability vs interpretability: Two concepts that could help restore trust in AI. KDnuggets. https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html.
    [13] Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI magazine, 38(3), 50-57.
    [14] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
    [15] International Joint Conference on Artificial Intelligence [IJCAI]. (2017). IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI). Retrieved from: http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai
    [16] Quinlan, J. R. (1987). Simplifying decision trees. International Journal of Man-Machine Studies, 27(3), 221-234.
    [17] Khan, K. S., Kunz, R., Kleijnen, J., & Antes, G. (2003). Five steps to conducting a systematic review. Journal of the Royal Society of Medicine, 96(3), 118-121. https://doi.org/10.1258/jrsm.96.3.118
    [18] Kobsa, A. (1984). What is explained by AI models?. Communication & Cognition.
    [19] Krauskopf, E. (2018). A bibliometric analysis of the Journal of Infection and Public Health: 2008-2016. Journal of Infection and Public Health, 11(2), 224-229.
    [20] Lipton, Z.C. (2016). The mythos of model interpretability. Workshop on Human Interpretability in Machine Learning.
    [21] McDermott, D., Waldrop, M. M., Chandrasekaran, B., McDermott, J., & Schank, R. (1985). The Dark Ages of AI: A Panel Discussion at AAAI-84. AI Magazine, 6(3), 122.
    [22] Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.
    [23] Peng, C. Y. J., Lee, K. L., & Ingersoll, G. M. (2002). An introduction to logistic regression analysis and reporting. The journal of educational research, 96(1), 3-14.
    [24] Persson, O., Danell, R., & Schneider, J. W. (2009). How to use Bibexcel for various types of bibliometric analysis. Celebrating scholarly communication studies: A Festschrift for Olle Persson at his 60th Birthday, 5, 9-24.
    [25] Pieters, W. (2011). Explanation and trust: what to tell the user in security and AI?. Ethics and information technology, 13(1), 53-64.
    [26] Pritchard, A. (1969). Statistical bibliography or bibliometrics. Journal of documentation, 25(4), 348-349.
    [27] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135-1144.
    [28] Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215.
    [29] Samek, W., Wiegand, T., & Müller, K. R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.
    [30] Schildt, H. A. (2002). Sitkis: software for bibliometric data management and analysis. Helsinki: Institute of Strategy and International Business, 6, 1.
    [31] Schuchmann, S. (2019). Analyzing the Prospect of an Approaching AI Winter. DOI: 10.13140/RG.2.2.10932.91524.
    [32] Skirpan, M., & Yeh, T. (2017). Designing a moral compass for the future of computer vision using speculative analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 64-73.
    [33] Stumpf, S., Rajaram, V., Li, L., Wong, W. K., Burnett, M., Dietterich, T., et al. (2009). Interacting meaningfully with machine learning systems: Three experiments. International Journal of Human-Computer Studies, 67(8), 639-662.
    [34] Van Eck, N. J., & Waltman, L. (2010). Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics, 84(2), 523-538.
    [35] Van Eck, N. J., & Waltman, L. (2009). How to normalize cooccurrence data? An analysis of some well-known similarity measures. Journal of the American Society for Information Science and Technology, 60(8), 1635-1651.
    [36] Van Eck, N.J., & Waltman, L. (2020). VOSviewer Manual. Retrieved from: https://www.vosviewer.com/download/f-33t2.pdf
    [37] Wang, Q. (2018). Distribution features and intellectual structures of digital humanities. Journal of Documentation.
    [38] Xu, Z., & Yu, D. (2019). A Bibliometrics analysis on big data research (2009-2018). Journal of Data, Information and Management, 1(1), 3-15.
    [39] Zupic, I., & Čater, T. (2015). Bibliometric methods in management and organization. Organizational Research Methods, 18(3), 429-472.
    Description: Master's thesis
    國立政治大學 (National Chengchi University)
    資訊管理學系 (Department of Management Information Systems)
    107356032
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0107356032
    Data Type: thesis
    DOI: 10.6814/NCCU202100316
    Appears in Collections: [資訊管理學系] 學位論文 (Department of Management Information Systems, Theses)

    Files in This Item:

    File: 603201.pdf (6471 KB, Adobe PDF)

