Please use this permanent URL to cite or link to this document:
https://nccur.lib.nccu.edu.tw/handle/140.119/159094
| Title: | Implementing Responsible AI in Price Prediction as a Service through Visualization Arrangements in MLOps (MLOps的視覺化負責任人工智慧 – Price Prediction as a Service的應用) |
| Author: | Chiang, Ying-Ling (江應翎) |
| Contributors: | Tsaih, Rua-Huan (蔡瑞煌); Hong, Chih-Duo (洪智鐸); Chiang, Ying-Ling (江應翎) |
| Keywords: | MLOps; Responsible artificial intelligence (RAI); visualization |
| Date: | 2025 |
| Uploaded: | 2025-09-01 15:04:54 (UTC+8) |
| Abstract: | This thesis explores how to implement Responsible Artificial Intelligence (RAI) principles in MLOps through visualization, with the goal of enhancing the fairness, explainability, accountability, and reliability of financial AI applications, and of helping financial professionals without AI expertise comply with RAI principles when using MLOps. Grounded in the current state of Taiwan's financial industry, the study develops four visualization design guidelines: Information Traceability, Logical Explainability, Decision Participation, and Risk Anticipation. Based on these guidelines, five RAI visualization tools are implemented across the MLOps stages: a Traceability Dashboard for audit trails, a Data Quality Indicator for bias detection in the data preparation stage, Feature Importance and Model Explanation tools for improving explainability in the model validation stage, and a Model Comparison interface for deployment decisions. The tools are implemented in the Price Prediction as a Service (PPaaS) system and applied to stock price prediction. The research employs a comparative evaluation between the standard user interface and one enhanced with the RAI visualization tools. Financial professionals from the banking, securities, and insurance industries participated in the study, providing Likert-scale ratings and qualitative feedback on the degree to which each tool supports specific RAI principles. Results show significant improvements in the quantitative scores, and the qualitative feedback confirms that the RAI-enhanced interface addresses critical gaps in the standard platform regarding information traceability, logical explainability, decision participation, and risk anticipation. The study validates that targeted visualization interventions can operationalize abstract RAI principles into practical tools, enhancing non-technical users' understanding, trust, and decision quality in AI-driven financial applications, and it provides systematic guidelines for implementing responsible AI through visualization design. |
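The record itself contains no code, but as a rough illustration of the kind of computation behind the Feature Importance tool named in the abstract, the minimal sketch below computes permutation feature importance for a toy price-regression model. Everything in it is a hypothetical stand-in assuming a scikit-learn setup: the model, the synthetic data, and feature names such as `lag_price` are illustrative only, and the thesis's actual PPaaS implementation is not described in this record.

```python
# Minimal sketch: permutation feature importance for a toy price-regression model.
# All data, feature names, and model choices are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular features standing in for price-prediction inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
feature_names = ["lag_price", "volume", "market_index", "volatility"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: how much the test-set score drops when one
# feature's values are shuffled, breaking its link to the target.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

In a dashboard like the one the abstract describes, these per-feature scores would typically be rendered as a ranked bar chart so non-technical users can see at a glance which inputs drive a prediction.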
| Description: | Master's thesis, Department of Management Information Systems, National Chengchi University (student ID 112356030) |
| Source: | http://thesis.lib.nccu.edu.tw/record/#G0112356030 |
| Type: | thesis |
| Appears in Collections: | [Department of Management Information Systems] Theses |
Files in This Item:
| File | Description | Size | Format | Views |
| 603001.pdf | | 5544 KB | Adobe PDF | 0 |
All items in NCCUR are protected by copyright, with all rights reserved.