Please use this identifier to cite or link to this item:
https://nccur.lib.nccu.edu.tw/handle/140.119/159386
Title: | 人機互動中的陪伴:LLM 聊天機器人在心理支持上的歷程分析 Companionship in Human–Computer Interaction: A Process Analysis of Psychological Support by LLM Chatbots |
Authors: | 陳韋蓉 (Chen Wei Rong) |
Contributors: | 陳宜秀 (YiSiu Chen); 廖峻鋒 (Chun-Feng Liao); 陳韋蓉 (Chen Wei Rong) |
Keywords: | 大型語言模型 (Large Language Model, LLM); 心理健康支持 (Mental Health Support); 人工智慧 (Artificial Intelligence, AI); 支持性溝通理論 (Supportive Communication Theory); 信任自動化理論 (Trust in Automation); 聊天機器人 (Chatbot); 情感支持 (Emotional Support); 使用者體驗 (User Experience); 回應風格 (Response Style) |
Date: | 2025 |
Issue Date: | 2025-09-01 16:50:22 (UTC+8) |
Abstract: | This thesis examines whether chatbots built on large language models (LLMs) can provide emotional support and psychological companionship that approaches the supportive effect of counseling.
With the rapid development of generative artificial intelligence (AI), chatbots with natural language capabilities are increasingly applied in the mental health field, yet whether they can genuinely offer supportive interactions marked by feeling understood, emotional comfort, and trust building still requires careful verification. Grounded in Supportive Communication Theory, this study uses prompt engineering to have a chatbot simulate three response styles (emotional support, esteem support, and informational support) and asks whether these styles can effectively convey empathy and care, and thereby deliver emotional support. A pilot study first invited professionally trained counselors to evaluate and refine the chatbot's supportive responses. The main study then adopted a diary study method: participants interacted with the pilot-validated chatbot for ten consecutive days, with questionnaires and interviews administered at different stages to collect data on user perceptions and interaction quality. Results were analyzed through the Trust in Automation model and empathy scale frameworks (ECSS and CARE), assessing trust building, response fit, emotion perception, and emotional connection, to examine whether the chatbot can, like an anthropomorphic helper, hold users' emotional experiences and psychological stress.
Using grounded theory, the study constructs a three-stage interaction process of trust building, pacing regulation, and perspective transformation, and proposes a "generative AI psychological support interaction process model" to explain how people build supportive relationships with generative AI. The study also finds that perceived empathy arises from connections such as the system remembering conversation content and proactively bringing up past experiences, which evoke a stronger sense of being understood than single emotion labels. However, most participants still considered AI unable to replace a real person's deep empathy, experience integration, and understanding of values, and some expected more challenging and practical conversations, which they regarded as part of support rather than merely companionship and comfort. The study provides a theoretical basis and practical recommendations for AI counseling interfaces, and identifies design directions for strengthening AI's supportive dialogue. |
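The abstract describes prompt engineering a single chatbot into three supportive-communication response styles. As a minimal illustrative sketch only (the style texts, function name, and message format below are assumptions, not the thesis's actual prompts), each style can be realized as a distinct system prompt prepended to the conversation history in the common chat-message format:

```python
# Hypothetical sketch: one system prompt per response style from
# Supportive Communication Theory. These prompt texts are illustrative
# assumptions, not the prompts used in the study.
SUPPORT_STYLES = {
    "emotional": (      # 情感支持: validate and soothe feelings
        "You are a warm, empathic companion. Reflect the user's feelings, "
        "validate their experience, and respond with comfort, not solutions."
    ),
    "esteem": (         # 評價支持: affirm the user's worth and abilities
        "You are an encouraging companion. Affirm the user's strengths and "
        "efforts, and express confidence in their ability to cope."
    ),
    "informational": (  # 資訊支持: offer concrete advice and resources
        "You are a practical helper. Offer clear, actionable suggestions "
        "and relevant information for the user's situation."
    ),
}

def build_messages(style: str, history: list, user_text: str) -> list:
    """Assemble one chat request: style prompt + prior turns + new user turn."""
    if style not in SUPPORT_STYLES:
        raise ValueError(f"unknown style: {style!r}")
    return (
        [{"role": "system", "content": SUPPORT_STYLES[style]}]
        # Retained history is what lets the bot recall earlier disclosures,
        # the memory behavior the study found central to perceived empathy.
        + list(history)
        + [{"role": "user", "content": user_text}]
    )

# Example: an emotional-support turn with one remembered exchange.
msgs = build_messages(
    "emotional",
    [{"role": "user", "content": "Work has been exhausting lately."},
     {"role": "assistant", "content": "That sounds really draining."}],
    "Today was another rough day.",
)
print(len(msgs))          # system prompt + 2 history turns + 1 new turn
```

Holding the underlying model constant and varying only the system prompt is what lets a study of this shape attribute differences in perceived support to response style rather than to model capability.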
Reference: |
Chinese references:
吳宗儒(2018)。諮商心理師運用同理之方式研究。﹝碩士論文。國立嘉義大學﹞臺灣博碩士論文知識加值系統。https://hdl.handle.net/11296/49g885
沈奕辰. (2025). 結合GPT模型與VITS於心理諮商輔導之應用. 淡江大學電機工程學系人工智慧機器人碩士班學位論文, 1–93. https://doi.org/10.6846/tku202400764
社團法人臺灣憂鬱症防治協會. (2025). 各年齡層憂鬱症的求助阻礙. https://www.depression.org.tw/communication/info.asp?/167.html
胡幼慧. (2008). 質性研究—理論、方法及本土女性研究實例. https://www.books.com.tw/products/0010406897
財團法人台灣網路資訊中心. (2024). 2024 台灣網路報告. https://report.twnic.tw/2024/index.html
財團法人「張老師」基金會. (2024). 財團法人張老師基金會. https://www.1980.org.tw/news_show.php?news_id=579
陳向明. (2024). 社會科學質的研究. 五南. https://www.wunan.com.tw/bookdetail?NO=3448
陳德倫. (2023). 3次免費諮商,然後呢?推動「求助常態化」,擴大年輕世代心理健康支持網的新挑戰. 報導者 The Reporter. https://www.twreporter.org/a/free-counceling-for-young-people-program
統計處. (2021, October 26). 世界心理健康日衛生福利統計通報. 衛生福利部統計處. https://dep.mohw.gov.tw/dos/cp-5112-63761-113.html
黃惠惠. (2005). 助人歷程與技巧(新增訂版). https://www.books.com.tw/products/0010308213
葉寶玲, 郭文正, & 蔡佳容. (2024). 諮商系所學生使用聊天機器人經驗初探. 教育心理學報, 56(1), 45–72. https://doi.org/10.6251/BEP.202409_56(1).0003
廖本富. (2000). 同理心與焦點解決短期諮商. https://tpl.ncl.edu.tw/NclService/JournalContentDetail?SysId=A00003770
衛福部. (2020a). 心理師執行通訊心理諮商業務核准作業參考原則. https://www.twtcpa.org.tw/sites/default/files/field_files/news/%E5%BF%83%E7%90%86%E5%B8%AB%E5%9F%B7%E8%A1%8C%E9%80%9A%E8%A8%8A%E5%BF%83%E7%90%86%E8%AB%AE%E5%95%86%E6%A5%AD%E5%8B%99%E6%A0%B8%E5%87%86%E4%BD%9C%E6%A5%AD%E5%8F%83%E8%80%83%E5%8E%9F%E5%89%87%281090729%E4%BF%AE%E6%AD%A3%29.pdf
衛福部. (2020b). 壓力指數測量表. 健康九九+網站. https://health99.hpa.gov.tw/onlineQuiz/pressure
謝麗紅, 陳亭妍, 張瑋珊, & 陳雪均. (2024). 導入探究與實作精神的人工智慧及其應用課程效果研究. 教育心理學報, 56(1), 1–24. https://doi.org/10.6251/BEP.202409_56(1).0001
English references:
Ackerman, S. J., & Hilsenroth, M. J. (2003). A review of therapist characteristics and techniques positively impacting the therapeutic alliance. Clinical Psychology Review, 23(1), 1–33. https://doi.org/10.1016/S0272-7358(02)00146-0
Altman, I., & Taylor, D. A. (1973). Social penetration: The development of interpersonal relationships (pp. viii, 212). Holt, Rinehart & Winston.
Anthropic. (2024). Introducing the Model Context Protocol. https://www.anthropic.com/news/model-context-protocol
Asia Grace. (2024, October 9). Gen Zs, millennials are using AI for emotional support, calling it 'more effective' than a pet: Study. Yahoo Life. https://www.yahoo.com/lifestyle/gen-zs-millennials-using-ai-141641571.html
Barrett-Lennard, G. T. (1981). The empathy cycle: Refinement of a nuclear concept. Journal of Counseling Psychology, 28(2), 91–100. https://doi.org/10.1037/0022-0167.28.2.91
Becky Inkster, Shubhankar Sarda, & Vinod Subramanian. (2018). An Empathy-Driven, Conversational Artificial Intelligence Agent (Wysa) for Digital Mental Well-Being: Real-World Data Evaluation Mixed-Methods Study. JMIR mHealth and uHealth. https://mhealth.jmir.org/2018/11/e12106/
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877–1901. https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
Burleson, B. R. (2003). The experience and effects of emotional support: What the study of cultural and gender differences can tell us about close relationships, emotion, and interpersonal communication. Personal Relationships, 10(1), 1–23. https://doi.org/10.1111/1475-6811.00033
Bylund, C. L., & Makoul, G. (2002). Empathic communication and gender in the physician–patient encounter. Patient Education and Counseling, 48(3), 207–216. https://doi.org/10.1016/S0738-3991(02)00173-8
Byron Reeves & Clifford Nass. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. https://www.researchgate.net/publication/37705092_The_Media_Equation_How_People_Treat_Computers_Television_and_New_Media_Like_Real_People_and_Pla
Cathy Mengying Fang, Auren R. Liu, Valdemar Danry, & Sandhini Agarwal. (2025). How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study. https://www.researchgate.net/publication/390143219_How_AI_and_Human_Behaviors_Shape_Psychosocial_Effects_of_Chatbot_Use_A_Longitudinal_Randomized_Controlled_Study
Chang, Y.-H., Lin, C.-Y., Liao, S.-C., Chen, Y.-Y., Shaw, F. F.-T., Hsu, C.-Y., Gunnell, D., & Chang, S.-S. (2023). Societal factors and psychological distress indicators associated with the recent rise in youth suicide in Taiwan: A time trend analysis. The Australian and New Zealand Journal of Psychiatry, 57(4), 537–549. https://doi.org/10.1177/00048674221108640
Chatgptsmodel.com. (n.d.). ChatGPT AI girlfriend. Retrieved April 3, 2025, from https://chatgpt.com
Cohen, S., & Wills, T. A. (1985). Stress, social support, and the buffering hypothesis. Psychological Bulletin, 98(2), 310–357. https://doi.org/10.1037/0033-2909.98.2.310
Cristen Torrey, Susan R. Fussell, & Sara Kiesler. (2013). How a robot should give advice. IEEE. https://ieeexplore.ieee.org/document/6483599
Dan Jurafsky & James H. Martin. (2019). Speech and Language Processing. https://web.stanford.edu/~jurafsky/slp3/
David Bakker, Nikolaos Kazantzis, Debra Rickwood, & Nikki Rickard. (2016). Mental Health Smartphone Apps: Review and Evidence-Based Recommendations for Future Developments. JMIR Mental Health. https://mental.jmir.org/2016/1/e7/
Derks, D., Fischer, A. H., & Bos, A. E. R. (2008). The role of emotion in computer-mediated communication: A review. Computers in Human Behavior, 24(3), 766–785. https://doi.org/10.1016/j.chb.2007.04.004
Duncan Cramer. (2001). Facilitativeness, conflict, demand for approval, self-esteem, and satisfaction with romantic relationships. https://psycnet.apa.org/record/2003-02058-010
Elliott, R., Bohart, A. C., Watson, J. C., & Greenberg, L. S. (2011). Empathy. Psychotherapy, 48(1), 43–49. https://doi.org/10.1037/a0022187
Elliott, R., Watson, J. C., Bohart, A. C., & Murphy, D. (2018). Therapist empathy and client outcome: An updated meta-analysis. https://nottingham-repository.worktribe.com/output/921082/therapist-empathy-and-client-outcome-an-updated-meta-analysis
Felipe Thomaz, Carolina Salge, Elena Karahanna, & John Hulland. (2020). Learning from the Dark Web: Leveraging conversational agents in the era of hyper-privacy to enhance marketing. Journal of the Academy of Marketing Science. https://link.springer.com/article/10.1007/s11747-019-00704-3
Freitas, J. D., Castelo, N., Uguralp, A., & Uguralp, Z. (2024). Lessons From an App Update at Replika AI: Identity Discontinuity in Human-AI Relationships (No. arXiv:2412.14190). arXiv. https://doi.org/10.48550/arXiv.2412.14190
Fu, T. S.-T., Lee, C.-S., Gunnell, D., Lee, W.-C., & Cheng, A. T.-A. (2013). Changing trends in the prevalence of common mental disorders in Taiwan: A 20-year repeated cross-sectional survey. The Lancet, 381(9862), 235–241. https://doi.org/10.1016/S0140-6736(12)61264-1
Ghandeharioun, A., McDuff, D., Czerwinski, M., & Rowan, K. (2019). Towards Understanding Emotional Intelligence for Behavior Change Chatbots. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), 8–14. https://doi.org/10.1109/ACII.2019.8925433
Glaser, B., & Strauss, A. (2017). Discovery of Grounded Theory: Strategies for Qualitative Research. Routledge. https://doi.org/10.4324/9780203793206
Goldsmith, D. J. (2004). Communicating social support (pp. x, 207). Cambridge University Press. https://doi.org/10.1017/CBO9780511606984
Grant Packard & Jonah Berger. (2020). How Concrete Language Shapes Customer Satisfaction. Journal of Consumer Research. https://academic.oup.com/jcr/article/47/5/787/5873524?login=false
Gross, J. J. (1998). Antecedent- and response-focused emotion regulation: Divergent consequences for experience, expression, and physiology. Journal of Personality and Social Psychology, 74(1), 224–237. https://doi.org/10.1037//0022-3514.74.1.224
Haque, M. D. R., & Rubya, S. (2023). An Overview of Chatbot-Based Mobile Mental Health Apps: Insights From App Description and User Reviews. JMIR mHealth and uHealth, 11(1), e44838. https://doi.org/10.2196/44838
Hatch, S. G., Goodman, Z. T., Vowels, L., Hatch, H. D., Brown, A. L., Guttman, S., Le, Y., Bailey, B., Bailey, R. J., Esplin, C. R., Harris, S. M., Jr, D. P. H., McLaughlin, M., O'Connell, P., Rothman, K., Ritchie, L., Jr, D. N. T., & Braithwaite, S. R. (2025). When ELIZA meets therapists: A Turing test for the heart and mind. PLOS Mental Health, 2(2), e0000145. https://doi.org/10.1371/journal.pmen.0000145
Herrando, C., & Constantinides, E. (n.d.). Emotional contagion: A brief overview and future directions. Frontiers in Psychology. Retrieved April 3, 2025, from https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2021.712606/full
Høiland, C. G., Følstad, A., & Karahasanovic, A. (2020). Hi, can I help? Exploring how to design a mental health chatbot for youths. Human Technology, 16(2), Article 2.
Holmstrom, A. J., Bodie, G. D., Burleson, B. R., McCullough, J. D., Rack, J. J., Hanasono, L. K., & Rosier, J. G. (2015). Testing a dual-process theory of supportive communication outcomes: How multiple factors influence outcomes in support situations. Communication Research, 42(4), 526–546. https://doi.org/10.1177/0093650213476293
House, J. S., Umberson, D., & Landis, K. R. (1988). Structures and Processes of Social Support. Annual Review of Sociology, 14, 293–318.
Hyojin Chin, Hyeonho Song, Gumhee Baek, Mingi Shin, Chani Jung, Meeyoung Cha, & Junghoi Choi. (2024). The Potential of Chatbots for Emotional Support and Promoting Mental Well-Being in Different Cultures: Mixed Methods Study. Journal of Medical Internet Research. https://www.jmir.org/2023/1/e51712
Iryna Pentina, Tianling Xie, Tyler Hancock, & Ainsworth Anthony Bailey. (2023). Consumer-machine relationships in the age of artificial intelligence: Systematic literature review and research directions. https://www.researchgate.net/publication/371229071_Consumer-machine_relationships_in_the_age_of_artificial_intelligence_Systematic_literature_review_and_research_directions
Kathleen Kara Fitzpatrick, Alison Darcy, & Molly Vierhile. (2017). Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Mental Health. https://mental.jmir.org/2017/2/e19/
Kaye, L. K., Malone, S. A., & Wall, H. J. (2017). Emojis: Insights, Affordances, and Possibilities for Psychological Science. Trends in Cognitive Sciences, 21(2), 66–68. https://doi.org/10.1016/j.tics.2016.10.007
Kelly Ng. (2025). "DeepSeek brought me to tears": How young Chinese find therapy in AI. BBC News. https://www.bbc.com/news/articles/cy7g45g2nxno
Knapp, M. L., & Daly, J. A. (2002). Handbook of Interpersonal Communication. SAGE.
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111
Lai, T., Shi, Y., Du, Z., Wu, J., Fu, K., Dou, Y., & Wang, Z. (2024). Supporting the Demand on Mental Health Services with AI-Based Conversational Large Language Models (LLMs). BioMedInformatics, 4(1), Article 1. https://doi.org/10.3390/biomedinformatics4010002
Lan, A., Lee, A., Munroe, K., McRae, C., Kaleis, L., Keshavjee, K., & Guergachi, A. (2018). Review of cognitive behavioural therapy mobile apps using a reference architecture embedded in the patient-provider relationship. Biomedical Engineering Online, 17(1), 183. https://doi.org/10.1186/s12938-018-0611-4
Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Leslie S. Greenberg, Laura N. Rice, & Robert Elliott. (1996). Facilitating Emotional Change: The Moment-by-Moment Process. Guilford Press. https://www.guilford.com/books/Facilitating-Emotional-Change/Greenberg-Rice-Elliott/9781572302013?srsltid=AfmBOopnflKvY1QtX2HhZ5GHGM8dzAapZaxaa0EM_3korBI6mm6n1Pyt
Li, C., Wang, J., Zhang, Y., Zhu, K., Hou, W., Lian, J., Luo, F., Yang, Q., & Xie, X. (2023). Large Language Models Understand and Can be Enhanced by Emotional Stimuli (No. arXiv:2307.11760). arXiv. https://doi.org/10.48550/arXiv.2307.11760
Li, X. (Shirley), Chan, K. W., & Kim, S. (2019). Service with Emoticons: How Customers Interpret Employee Use of Emoticons in Online Service Encounters. Journal of Consumer Research, 45(5), 973–987. https://doi.org/10.1093/jcr/ucy016
Lialin, V., Deshpande, V., Yao, X., & Rumshisky, A. (2024). Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning (No. arXiv:2303.15647). arXiv. https://doi.org/10.48550/arXiv.2303.15647
Lui, J. H. L., Marcus, D. K., & Barry, C. T. (2017). Evidence-based apps? A review of mental health mobile applications in a psychotherapy context. Professional Psychology: Research and Practice, 48(3), 199–210. https://doi.org/10.1037/pro0000122
Mercer, S. W., Maxwell, M., Heaney, D., & Watt, G. C. (2004). The consultation and relational empathy (CARE) measure: Development and preliminary validation and reliability of an empathy-based consultation process measure. Family Practice, 21(6), 699–705. https://doi.org/10.1093/fampra/cmh621
Messina, I., Calvo, V., Masaro, C., Ghedin, S., & Marogna, C. (2021). Interpersonal Emotion Regulation: From Research to Group Therapy. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.636919
Mezirow, J. (1991). Transformative Dimensions of Adult Learning. Jossey-Bass.
Michal Kosinski. (2024). Evaluating large language models in theory of mind tasks. PNAS. https://www.pnas.org/doi/10.1073/pnas.2405460121
Moerk, E. L. (1974). Age and epogenic influences on aspirations of minority and majority group children. Journal of Counseling Psychology, 21(4), 294–298. https://doi.org/10.1037/h0036640
Moon, J. (n.d.). AI chats feel "emotionally meaningful," say about 40% of young South Koreans in survey. The Korea Herald. Retrieved April 3, 2025, from https://www.koreaherald.com/article/10429545
Murphy, R. (2021, September 21). The Importance of Empathic Listening for Making Meaning of Distress. Mad In America. https://www.madinamerica.com/2021/09/empathic-listening-meaning/
Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Norman, D. A. (2013). The design of everyday things (Revised and expanded edition). Basic Books.
OpenAI. (2024a, March 13). Introducing ChatGPT. https://openai.com/index/chatgpt/
OpenAI. (2024b, March 13). Introducing GPTs. https://openai.com/index/introducing-gpts/
Prensky, M. (2001). Digital Natives, Digital Immigrants Part 1. On the Horizon, 9(5), 1–6. https://doi.org/10.1108/10748120110424816
Roy, R., & Naidoo, V. (2021). Enhancing chatbot effectiveness: The role of anthropomorphic conversational styles and time orientation. Journal of Business Research, 126, 23–34. https://doi.org/10.1016/j.jbusres.2020.12.051
Ruth Williams, Sarah Hopkins, Chris Frampton, Chester Holt-Quick, Sally Nicola Merry, & Karolina Stasiak. (2021). 21-Day Stress Detox: Open Trial of a Universal Well-Being Chatbot for Young Adults. https://www.mdpi.com/2076-0760/10/11/416
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. The American Psychologist, 55(1), 68–78. https://doi.org/10.1037//0003-066x.55.1.68
Schaaff, K., Reinig, C., & Schlippe, T. (2023). Exploring ChatGPT's Empathic Abilities (No. arXiv:2308.03527). arXiv. https://doi.org/10.48550/arXiv.2308.03527
Sharma, A., Lin, I. W., Miner, A. S., Atkins, D. C., & Althoff, T. (2022). Human-AI Collaboration Enables More Empathic Conversations in Text-based Peer-to-Peer Mental Health Support (No. arXiv:2203.15144). arXiv. https://doi.org/10.48550/arXiv.2203.15144
Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2022). A longitudinal study of human–chatbot relationships. International Journal of Human-Computer Studies, 168, 102903. https://doi.org/10.1016/j.ijhcs.2022.102903
Soma, C. S., Knox, D., Greer, T., Gunnerson, K., Young, A., & Narayanan, S. (2023). It's not what you said, it's how you said it: An analysis of therapist vocal features during psychotherapy. Counselling and Psychotherapy Research, 23(1), 258–269. https://doi.org/10.1002/capr.12489
Strauss, A., & Corbin, J. (1998). Basics of qualitative research techniques.
Sun, H., Lin, Z., Zheng, C., Liu, S., & Huang, M. (2021). PsyQA: A Chinese Dataset for Generating Long Counseling Text for Mental Health Support (No. arXiv:2106.01702). arXiv. https://doi.org/10.48550/arXiv.2106.01702
Tao Zhou & Chunlei Zhang. (2024). Examining generative AI user addiction from a C-A-C perspective. https://www.sciencedirect.com/science/article/abs/pii/S0160791X2400201X?via%3Dihub
Thomas, P., Czerwinski, M., McDuff, D., Craswell, N., & Mark, G. (2018). Style and Alignment in Information-Seeking Conversation. Proceedings of the 2018 Conference on Human Information Interaction & Retrieval, 42–51. https://doi.org/10.1145/3176349.3176388
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., & Lample, G. (2023). LLaMA: Open and Efficient Foundation Language Models (No. arXiv:2302.13971). arXiv. https://doi.org/10.48550/arXiv.2302.13971
Ukpe, E. (2023). Information and Communication Technologies (ICTS) for E-Learning in Tertiary Education. Open Journal of Social Sciences, 11(12), 666–680. https://doi.org/10.4236/jss.2023.1112044
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is All you Need. Advances in Neural Information Processing Systems, 30. https://papers.nips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
Vollstedt, M., & Rezat, S. (2019). An introduction to grounded theory with a special focus on axial coding and the coding paradigm. In G. Kaiser & N. Presmeg (Eds.), Compendium for early career researchers in mathematics education (pp. 81–100). Springer International Publishing. https://doi.org/10.1007/978-3-030-15636-7_4
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2023). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (No. arXiv:2201.11903). arXiv. https://doi.org/10.48550/arXiv.2201.11903
Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
Wilson, T. (2025). A Psycho-Spiritual Journey. Mad In America. https://www.madinamerica.com/2025/01/a-psycho-spiritual-journey/
Wu, J., Gan, W., Chen, Z., Wan, S., & Yu, P. S. (2023). Multimodal Large Language Models: A Survey (No. arXiv:2311.13165). arXiv. https://doi.org/10.48550/arXiv.2311.13165
Xu, Y., Zhang, J., & Deng, G. (n.d.). Enhancing customer satisfaction with chatbots: The influence of communication styles and consumer attachment anxiety. Retrieved April 3, 2025, from https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2022.902782/full
Yin, Y., Jia, N., & Wakslak, C. J. (2024). AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences, 121(14), e2319112121. https://doi.org/10.1073/pnas.2319112121
Zengzhi Wang, Qiming Xie, Yi Feng, Zinong Yang, Rui Xia, & Zixiang Ding. (2023). Is ChatGPT a Good Sentiment Analyzer? A Preliminary Study. https://arxiv.org/abs/2304.04339
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang, X., Liu, Z., … Wen, J.-R. (2025). A Survey of Large Language Models (No. arXiv:2303.18223). arXiv. https://doi.org/10.48550/arXiv.2303.18223 |
Description: | Master's thesis, National Chengchi University, Master's Program in Digital Content, 108462008
Source URI: | http://thesis.lib.nccu.edu.tw/record/#G0108462008 |
Data Type: | thesis |
Appears in Collections: | [Master's Program in Digital Content] Theses
Files in This Item:
File | Size | Format
200801.pdf | 4202 KB | Adobe PDF
All items in 政大典藏 are protected by copyright, with all rights reserved.