    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/141349


    Title: The Effects of AI-Based Agent's Social Roles and Performance on Trust in Human-Agent Interaction
    Authors: Han, Shu-Jung
    Contributors: Chen, Yihsiu
    Chien, Shih-Yi
    Han, Shu-Jung
    Keywords: Human-Agent Interaction
    AI Agent
    Social Roles
    Trust
    Collaboration
    Date: 2022
    Issue Date: 2022-08-01 18:49:54 (UTC+8)
    Abstract: With the rapid development of artificial intelligence (AI) systems, trust in human-machine interaction has received growing attention. In recent years, AI technology has been widely applied in products and services, often interacting with people in the guise of a virtual agent. Because of how these agents' characters are shaped, people have begun to project social expectations onto AI agents. This study therefore examines whether, beyond the non-social factor of an AI's performance, social roles and statuses also affect trust in human-agent interaction. We conducted a 3 (social role) × 2 (performance) between-subjects experiment built around a collaborative face age estimation task designed for this study. The results show that both factors influence trust in human-agent interaction; however, when an AI agent holds a higher role status, that status helps offset the impact of the agent's poor performance on trust. We hope these findings highlight the influence of social factors on human-agent interaction and provide a reference for the design of future AI interactions.
    Studies show that trust in systems can shape human-computer interaction, a topic that has grown increasingly important with the rapid development of Artificial Intelligence (AI). Because AI capabilities are often presented in the form of an "agent" (e.g., a chatbot or robot), the question arises whether social qualities such as roles and statuses influence human trust in and interaction with these agents, in addition to non-social properties such as performance. This paper presents the results of an experiment manipulating two independent variables, perceived social role and performance, to investigate their effects on trust in an AI-based agent during a collaborative task. Results show that both factors impact trust in human-agent interaction; moreover, an agent's high social status can mitigate the influence of its poor performance on trust. The results highlight the importance of social-psychological factors in the future design and development of AI-based agents.
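    A 3 (social role) × 2 (performance) between-subjects design like the one described above is typically analyzed with a two-way ANOVA on the trust measure. The following is a minimal, hypothetical sketch in Python (pandas/statsmodels) of such an analysis; the role labels, cell size, effect sizes, and simulated ratings are illustrative assumptions, not the thesis's actual materials or results.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        rng = np.random.default_rng(seed=0)
        roles = ["assistant", "peer", "supervisor"]  # hypothetical role labels
        levels = ["low", "high"]                     # performance levels
        rows = []
        for role in roles:
            for perf in levels:
                for _ in range(20):                  # assumed n = 20 per cell
                    trust = 4.0                      # baseline on a 7-point scale
                    trust += 0.8 if perf == "high" else 0.0  # toy performance effect
                    trust += 0.4 * roles.index(role)         # toy status effect
                    rows.append({"role": role, "perf": perf,
                                 "trust": trust + rng.normal(0, 1)})
        df = pd.DataFrame(rows)

        # Two-way between-subjects ANOVA: main effects of social role and
        # performance, plus their interaction (the status-by-performance
        # effect reported in the abstract).
        model = ols("trust ~ C(role) * C(perf)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))

    In this framing, the thesis's mitigation finding would surface as a role-by-performance interaction term rather than two purely additive main effects.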
    Description: Master's thesis
    National Chengchi University
    Master's Program in Digital Content
    107462004
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0107462004
    Data Type: thesis
    DOI: 10.6814/NCCU202201055
    Appears in Collections: [Master's Program in Digital Content] Theses
    [Bachelor's Program in Digital Content and Technologies] Theses

    Files in This Item:

    File: 200401.pdf  Size: 1464 KB  Format: Adobe PDF

