    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/152407


    Title: 人機互動之間的初始信任感對於生成式人工智慧使用意圖之影響:以社會交換理論為框架
    The impact of initial trust on usage intention of generative artificial intelligence: A social exchange perspective on human-automation interaction
    Authors: 李芷瑄
    LEE, CHIH-HSUAN
    Contributors: 周致遠
    Chou, Chih-Yuan
    李芷瑄
    LEE, CHIH-HSUAN
    Keywords: Generative AI
    Initial trust
    Social exchange theory
    Usage intention
    Human-automation interaction
    Date: 2024
    Issue Date: 2024-08-05 12:06:21 (UTC+8)
    Abstract: Social exchange theory (SET) is widely used to analyze and explain human behaviors and relationships. Because people exhibit social behavior when interacting with technology, SET can also be applied to human-automation interaction (HAI). This study adopts SET to explore interactions between humans and generative artificial intelligence (GenAI), focusing on workplace use, given both the growing prevalence of GenAI and the rising importance of integrating technology into professional settings. Although GenAI can assist people at work, it also has drawbacks, which makes initial trust important to individuals' willingness to bear its shortcomings and risks. Framing GenAI use as a form of social exchange behavior, this study investigates the influence of initial trust on GenAI usage intention, incorporating three moderating factors that shape the social exchange process. Questionnaire data were collected from individuals with work experience in the technology manufacturing sector and analyzed quantitatively. The findings reveal direct and moderating effects among the constructs of the SET-based research model, with the control variables also exerting influence. This work contributes to the literature on HAI and offers practical insights for organizations seeking to employ GenAI technologies in the workplace.
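    Method sketch (illustrative only): the abstract reports direct and moderating effects of initial trust on usage intention, with control variables, and the references below (e.g., Chin, 1998; Hair et al., 2019) suggest a PLS-SEM analysis. The minimal Python sketch that follows shows only the general shape of such a moderation test, as an ordinary moderated regression on simulated survey-style data; every variable name, the sample size, and the coefficients are hypothetical and are not taken from the thesis.

        # Hedged illustration: a generic moderation test, not the thesis's actual model.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 300  # hypothetical sample size

        # Simulated standardized survey composites (hypothetical constructs)
        trust = rng.normal(size=n)              # initial trust in GenAI
        moderator = rng.normal(size=n)          # one social-exchange moderator
        age = rng.integers(22, 60, size=n)      # control variable
        intention = (0.5 * trust + 0.2 * moderator
                     + 0.15 * trust * moderator  # built-in moderating effect
                     + 0.01 * age
                     + rng.normal(scale=0.8, size=n))

        df = pd.DataFrame({"intention": intention, "trust": trust,
                           "moderator": moderator, "age": age})

        # "trust * moderator" expands to both main effects plus their interaction;
        # a significant interaction coefficient indicates moderation.
        model = smf.ols("intention ~ trust * moderator + age", data=df).fit()
        print(model.summary())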
    Reference: Abdul-Rahman, A., & Hailes, S. (1999). Relying on trust to find reliable information. In Proceedings 1999 International Symposium on Database, Web and Cooperative Systems (DWACOS’99), Baden-Baden, Germany. https://reurl.cc/WxKRKk
    Ahmad, R., Nawaz, M. R., Ishaq, M. I., Khan, M. M., & Ashraf, H. A. (2023). Social exchange theory: Systematic review and future directions. Frontiers in Psychology, 13, 1015921. https://doi.org/10.3389/fpsyg.2022.1015921
    Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl, & J. Beckmann (Eds.), Action control: From cognition to behavior (pp. 11-39). Springer. https://doi.org/10.1007/978-3-642-69746-3_2
    Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211. https://doi.org/10.1016/0749-5978(91)90020-T
    Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2). https://doi.org/10.7759/cureus.35179
    Andriulli, F., Chen, P.-Y., Erricolo, D., & Jin, J.-M. (2022). Guest editorial machine learning in antenna design, modeling, and measurements. IEEE Transactions on Antennas and Propagation, 70(7), 4948-4952. https://doi.org/10.1109/TAP.2022.3189963
    Anthropic (Ed.) (2024). Introducing the next generation of Claude. https://www.anthropic.com/news/claude-3-family
    Aydın, Ö., & Karaarslan, E. (2023). Is ChatGPT leading generative AI? What is beyond expectations? Journal of Engineering and Smart Systems, 11(3), 118-134. https://doi.org/10.21541/apjess.1293702
    Bacharach, M., Guerra, G., & Zizzo, D. J. (2007). The self-fulfilling property of trust: An experimental study. Theory and Decision, 63, 349-388. https://doi.org/10.1007/s11238-007-9043-5
    Baek, T. H., & Kim, M. (2023). Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence. Telematics and Informatics, 83, 102030. https://doi.org/10.1016/j.tele.2023.102030
    Baidoo-Anu, D., & Ansah, L. O. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52-62. https://doi.org/10.61969/jai.1337500
    Bansal, G., & Gefen, D. (2010). The impact of personal dispositions on information sensitivity, privacy concern and trust in disclosing health information online. Decision Support Systems, 49(2), 138-150. https://doi.org/10.1016/j.dss.2010.01.010
    Barley, S. R. (1988). The social construction of a machine: Ritual, superstition, magical thinking and other pragmatic responses to running a CT scanner. In M. Lock & D. Gordon (Eds.), Biomedicine examined (pp. 497-539). Springer. https://doi.org/10.1007/978-94-009-2725-4_19
    Bedué, P., & Fritzsche, A. (2022). Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management, 35(2), 530-549. https://doi.org/10.1108/JEIM-06-2020-0233
    Benbasat, I., & Wang, W. (2005). Trust in and adoption of online recommendation agents. Journal of the Association for Information Systems, 6(3), 4. https://doi.org/10.17705/1jais.00065
    Bilgihan, A. (2016). Gen Y customer loyalty in online shopping: An integrated model of trust, user experience and branding. Computers in Human Behavior, 61, 103-113. https://doi.org/10.1016/j.chb.2016.03.014
    Black, S. E., & Lynch, L. M. (2001). How to compete: The impact of workplace practices and information technology on productivity. Review of Economics and Statistics, 83(3), 434-445. https://doi.org/10.1162/00346530152480081
    Blau, P. M. (1964). Exchange and power in social life. John Wiley and Sons. https://doi.org/10.4324/9780203792643
    Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., & Brunskill, E. (2021). On the opportunities and risks of foundation models. arXiv. https://doi.org/10.48550/arXiv.2108.07258
    Braithwaite, D. O., & Schrodt, P. (2021). Engaging theories in interpersonal communication: Multiple perspectives. Routledge. https://doi.org/10.4324/9781003195511
    Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2022). My AI friend: How users of a social chatbot understand their human–AI friendship. Human Communication Research, 48(3), 404-429. https://doi.org/10.1093/hcr/hqac008
    Bright, J., Enock, F. E., Esnaashari, S., Francis, J., Hashem, Y., & Morgan, D. (2024). Generative AI is already widespread in the public sector. arXiv. https://doi.org/10.48550/arXiv.2401.01291
    Brown, B. (1988). The human-machine distinction as predicted by children’s para-social interaction with toys [Unpublished doctoral dissertation]. Stanford University.
    Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work. National Bureau of Economic Research. https://www.nber.org/papers/w31161
    Brynjolfsson, E., & Yang, S. (1996). Information technology and productivity: A review of the literature. Advances in Computers, 43, 179-214. https://doi.org/10.1016/S0065-2458(08)60644-0
    Bughin, J., Hazan, E., Lund, S., Dahlström, P., Wiesinger, A., & Subramaniam, A. (2018). Skill shift: Automation and the future of the workforce. McKinsey Global Institute. https://www.voced.edu.au/content/ngv:79805
    Burda, D., & Teuteberg, F. (2014). The role of trust and risk perceptions in cloud archiving—Results from an empirical study. The Journal of High Technology Management Research, 25(2), 172-187. https://doi.org/10.1016/j.hitech.2014.07.008
    Butler, J. K., Jr. (1991). Toward understanding and measuring conditions of trust: Evolution of a conditions of trust inventory. Journal of Management, 17(3), 643-663. https://doi.org/10.1177/014920639101700307
    Cabiddu, F., Moi, L., Patriotta, G., & Allen, D. G. (2022). Why do users trust algorithms? A review and conceptualization of initial trust and trust over time. European Management Journal, 40(5), 685-706. https://doi.org/10.1016/j.emj.2022.06.001
    Capel, T., & Brereton, M. (2023). What is human-centered about human-centered AI? A map of the research landscape. In A. Schmidt, K. Väänänen, T. Goyal, P.O. Kristensson, A. Peters, S. Mueller, J.R. Williamson, & M.L. Wilson (Eds.), Proceedings of the 2023 CHI conference on human factors in computing systems (359). Association for Computing Machinery. https://doi.org/10.1145/3544548.3580959
    Cardon, P. W., Getchell, K., Carradini, S., Fleischmann, C., & Stapp, J. (2023). Generative AI in the workplace: Employee perspectives of ChatGPT benefits and organizational policies. SocArXiv. https://doi.org/10.31235/osf.io/b3ezy
    Chattaraman, V., Kwon, W.-S., Gilbert, J. E., & Ross, K. (2019). Should AI-Based, conversational digital assistants employ social-or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Computers in Human Behavior, 90, 315-330. https://doi.org/10.1016/j.chb.2018.08.048
    Chen, T., Guo, W., Gao, X., & Liang, Z. (2021). AI-based self-service technology in public service delivery: User experience and influencing factors. Government Information Quarterly, 38(4), 101520. https://doi.org/10.1016/j.giq.2020.101520
    Chen, X. A., Burke, J., Du, R., Hong, M. K., Jacobs, J., Laban, P., Li, D., Peng, N., Willis, K. D., & Wu, C.-S. (2023). Next steps for human-centered generative AI: A technical perspective. arXiv. https://doi.org/10.48550/arXiv.2306.15774
    Chen, Y., Biswas, M. I., & Talukder, M. S. (2023). The role of artificial intelligence in effective business operations during COVID-19. International Journal of Emerging Markets, 18(12), 6368-6387. https://doi.org/10.1108/IJOEM-11-2021-1666
    Chen, Z., Jiang, Z., Yang, F., He, Z., Hou, Y., Cho, E., McAuley, J., Galstyan, A., Hu, X., & Yang, J. (2023). The first workshop on personalized generative AI @ CIKM 2023: Personalization meets large language models. In Proceedings of the 32nd ACM international conference on information and knowledge management (pp. 5267-5270). Association for Computing Machinery. https://doi.org/10.1145/3583780.3615314
    Chiasson, M. W., & Lovato, C. Y. (2001). Factors influencing the formation of a user's perceptions and use of a DSS software innovation. ACM SIGMIS Database: The Database for Advances in Information Systems, 32(3), 16-35.
    Chin, W. W. (1998). The partial least squares approach to structural equation modeling. In G. A. Marcoulides (Ed.), Modern methods for business research (pp. 295-336). Lawrence Erlbaum Associates.
    Chiou, E. K., Lee, J. D., & Su, T. (2019). Negotiated and reciprocal exchange structures in human-agent cooperation. Computers in Human Behavior, 90, 288-297. https://doi.org/10.1016/j.chb.2018.08.012
    Choi, J. K., & Ji, Y. G. (2015). Investigating the importance of trust on adopting an autonomous vehicle. International Journal of Human-Computer Interaction, 31(10), 692-702. https://doi.org/10.1080/10447318.2015.1070549
    Choung, H., David, P., & Ross, A. (2023). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 39(9), 1727-1739. https://doi.org/10.1080/10447318.2022.2050543
    Chu, Y.-H. (2020). An economy-wide assessment of artificial intelligence investment on manufacturing: a case study of Taiwan’s semiconductor and ICT industries. Modern Economy, 11(5), 1040-1052. https://doi.org/10.4236/me.2020.115078
    Chui, M., Hazan, E., Roberts, R., Singla, A., & Smaje, K. (2023). The economic potential of generative AI. McKinsey & Company. https://bit.ly/45Isbdi
    Chui, M., Roberts, R., & Yee, L. (2022). Generative AI is here: How tools like ChatGPT could change your business. Quantum Black AI by McKinsey. https://bit.ly/3RPlY9F
    Coleman, J. S. (1990). Foundations of social theory. Harvard University Press. https://bit.ly/4bkTD1Y
    Cook, K. S., Cheshire, C., Rice, E. R., & Nakagawa, S. (2013). Social exchange theory. In J. DeLamater, & A. Ward (Eds.), Handbook of social psychology (pp. 61-88). Springer Dordrecht. https://doi.org/10.1007/978-94-007-6772-0
    Cook, K. S., & Emerson, R. M. (1987). Social exchange theory. In S. Abrutyn, & K. McCaffree (Eds.), Theoretical sociology: The future of a disciplinary foundation (pp. 179-205). Routledge. https://bit.ly/4cgwGhO
    Corritore, C. L., Kracher, B., & Wiedenbeck, S. (2003). On-line trust: Concepts, evolving themes, a model. International Journal of Human-Computer Studies, 58(6), 737-758. https://doi.org/10.1016/S1071-5819(03)00041-7
    Cropanzano, R., & Mitchell, M. S. (2005). Social exchange theory: An interdisciplinary review. Journal of Management, 31(6), 874-900. https://doi.org/10.1177/0149206305279602
    Czaja, S. J., & Moen, P. (2004). Technology and employment. In R.W. Pew, & S.B. Van Hemel (Eds.), Technology for adaptive aging. National Academies Press (US). https://doi.org/10.17226/10857
    Dasgupta, D., Venugopal, D., & Gupta, K. D. (2023). A review of generative AI from historical perspectives. TechRxiv. https://doi.org/10.36227/techrxiv.22097942.v1
    Davenport, T. H. (2018). The AI advantage: How to put the artificial intelligence revolution to work. MIT Press. https://doi.org/10.7551/mitpress/11781.001.0001
    Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. https://doi.org/10.2307/249008
    Davlembayeva, D., Papagiannidis, S., & Alamanos, E. (2020). Sharing economy: Studying the social and psychological factors and the outcomes of social exchange. Technological Forecasting and Social Change, 158, 120143. https://doi.org/10.1016/j.techfore.2020.120143
    Deutsch, M. (1960). The effect of motivational orientation upon trust and suspicion. Human Relations, 13(2), 123-139. https://doi.org/10.1177/0018726760013002
    Dhoni, P. (2023). Unleashing the potential: Overcoming hurdles and embracing generative AI in IT workplaces: Advantages, guidelines, and policies. TechRxiv. https://doi.org/10.36227/techrxiv.23696709.v1
    Dillon, M. (2020). Introduction to sociological theory: Theorists, concepts, and their applicability to the twenty-first century. John Wiley & Sons. https://bit.ly/3VYX7CG
    Dixon, G. N., Deline, M. B., McComas, K., Chambliss, L., & Hoffmann, M. (2015). Saving energy at the workplace: The salience of behavioral antecedents and sense of community. Energy Research & Social Science, 6, 121-127. https://doi.org/10.1016/j.erss.2015.01.004
    Dohmke, T. (2023). GitHub Copilot for Business is now available. GitHub. https://github.blog/2023-02-14-github-copilot-for-business-is-now-available/
    Doney, P. M., & Cannon, J. P. (1997). An examination of the nature of trust in buyer–seller relationships. Journal of Marketing, 61(2), 35-51. https://doi.org/10.1177/002224299706100203
    Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., & Eirug, A. (2021). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
    Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., & Ahuja, M. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
    Dwivedi, Y. K., Pandey, N., Currie, W., & Micu, A. (2024). Leveraging ChatGPT and other generative artificial intelligence (AI)-based applications in the hospitality and tourism industry: Practices, challenges and research agenda. International Journal of Contemporary Hospitality Management, 36(1), 1-12. https://doi.org/10.1108/IJCHM-05-2023-0686
    Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697-718. https://doi.org/10.1016/S1071-5819(03)00038-7
    Egger, F. N. (2000). "Trust me, I'm an online vendor": Towards a model of trust for e-commerce system design. In CHI'00 extended abstracts on human factors in computing systems (pp. 101-102). Association for Computing Machinery. https://doi.org/10.1145/633292.633352
    Ehsan, U., Liao, Q. V., Muller, M., Riedl, M. O., & Weisz, J. D. (2021). Expanding explainability: Towards social transparency in AI systems. In Proceedings of the 2021 CHI conference on human factors in computing systems (82). Association for Computing Machinery. https://doi.org/10.1145/3411764.3445188
    Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv. https://doi.org/10.48550/arXiv.2303.10130
    Emerson, R. M. (1972a). Exchange theory, part I: A psychological basis for social exchange. Sociological theories in progress, 2, 38-57.
    Emerson, R. M. (1972b). Exchange theory, part II: Exchange relations and networks. Sociological theories in progress, 2, 58-87.
    Emerson, R. M. (1976). Social exchange theory. Annual Review of Sociology, 2(1), 335-362. https://doi.org/10.1146/annurev.so.02.080176.002003
    Emerson, R. M. (1981). Social exchange theory. In M. Rosenberg & R. H. Turner (Eds.), Social psychology: Sociological perspectives (pp. 30-65). Basic Books.
    Esmaeilzadeh, P. (2020). Use of AI-based tools for healthcare purposes: A survey study from consumers’ perspectives. BMC Medical Informatics and Decision Making, 20, 1-19. https://doi.org/10.1186/s12911-020-01191-1
    Fan, W., Liu, J., Zhu, S., & Pardalos, P. M. (2020). Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Annals of Operations Research, 294, 567-592. https://doi.org/10.1007/s10479-018-2818-y
    Farris, G. F., Senner, E. E., & Butterfield, D. A. (1973). Trust, culture, and organizational behavior. Industrial Relations, 12(2), 144-157.
    Fatima, J. K., Khan, M. I., Bahmannia, S., Chatrath, S. K., Dale, N. F., & Johns, R. (2024). Rapport with a chatbot? The underlying role of anthropomorphism in socio-cognitive perceptions of rapport and e-word of mouth. Journal of Retailing and Consumer Services, 77, 103666. https://doi.org/10.1016/j.jretconser.2023.103666
    Feldman, S. S. (2017). Co-creation: human and AI collaboration in creative expression. In J.P. Bowen, G. Diprose, & N. Lambert (Eds.), Proceedings of Electronic Visualisation and the Arts (EVA 2017) (pp. 422-429). BCS Learning & Development. http://dx.doi.org/10.14236/ewic/EVA2017.84
    Ferdous, A. S. (2010). Applying the theory of planned behavior to explain marketing managers’ perspectives on sustainable marketing. Journal of International Consumer Marketing, 22(4), 313-325. https://doi.org/10.1080/08961530.2010.505883
    Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111-126. https://doi.org/10.1007/s12599-023-00834-7
    Firat, M. (2023). How ChatGPT can transform autodidactic experiences and open education? OSF Preprints. https://doi.org/10.31219/osf.io/9ge8m
    Fishbein, M., & Ajzen, I. (1977). Belief, attitude, intention, and behavior: An introduction to theory and research. Philosophy and Rhetoric, 10(2), 130-132.
    Fiten, B., & Jacobs, E. (2023, March 23). ChatGPT and copyright: Many questions remain to be answered. TIMELEX. https://www.timelex.eu/en/blog/chatgpt-and-copyright-many-questions-remain-be-answered
    Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., & Rossi, F. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
    Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50. https://doi.org/10.1177/002224378101800104
    Frost, T., Stimpson, D. V., & Maughan, M. R. (1978). Some correlates of trust. Journal of Psychology, 99(1), 103-108.
    Fukuyama, F. (1996). Trust: The social virtues and the creation of prosperity. Simon and Schuster. https://bit.ly/3zkPJc9
    Fuller, C. M., Simmering, M. J., Atinc, G., Atinc, Y., & Babin, B. J. (2016). Common methods variance detection in business research. Journal of Business Research, 69(8), 3192-3198. https://doi.org/10.1016/j.jbusres.2015.12.008
    Gade, K., Geyik, S. C., Kenthapadi, K., Mithal, V., & Taly, A. (2019). Explainable AI in industry. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining (pp. 3203-3204). Association for Computing Machinery. https://doi.org/10.1145/3292500.3332281
    Gambetta, D. (1988). Trust: Making and breaking cooperative relations. Basil Blackwell. https://bit.ly/3VI5b9R
    Gambino, A., Fox, J., & Ratan, R. A. (2020). Building a stronger CASA: Extending the computers are social actors paradigm. Human-Machine Communication, 1, 71-85. https://search.informit.org/doi/10.3316/INFORMIT.097034846749023
    Gao, Y., & Wu, X. (2010). A cognitive model of trust in e-commerce: Evidence from a field study in China. Journal of Applied Business Research (JABR), 26(1), 37-44.
    Garibay, O. O., Winslow, B., Andolina, S., Antona, M., Bodenschatz, A., Coursaris, C., Falco, G., Fiore, S. M., Garibay, I., & Grieman, K. (2023). Six human-centered artificial intelligence grand challenges. International Journal of Human–Computer Interaction, 39(3), 391-437. https://doi.org/10.1080/10447318.2022.2153320
    Gefen, D., & Keil, M. (1998). The impact of developer responsiveness on perceptions of usefulness and ease of use: An extension of the technology acceptance model. ACM SIGMIS Database: The Database for Advances in Information Systems, 29(2), 35-49. https://doi.org/10.1145/298752.298757
    Gefen, D., Straub, D., & Boudreau, M.-C. (2000). Structural equation modeling and regression: Guidelines for research practice. Communications of the Association for Information Systems, 4(1), 7. https://doi.org/10.17705/1CAIS.00407
    Germanakos, P., & Belk, M. (2016). Human-centred web adaptation and personalization. Springer. https://doi.org/10.1007/978-3-319-28050-9
    Geyer, W., Weisz, J., Pinhanez, C. S., & Daly, E. (2022). What is human-centered AI? IBM. https://research.ibm.com/blog/what-is-human-centered-ai
    Gill, K. S. (1990). Summary of human-centered systems research in Europe. Systemist, the Journal of the UK Systems Society, 13(1), 7-27.
    Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660. https://doi.org/10.5465/annals.2018.0057
    Goldman Sachs (Ed.) (2023). Generative AI could raise global GDP by 7%. https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
    Gong, L. (2008). How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Computers in Human Behavior, 24(4), 1494-1509. https://doi.org/10.1016/j.chb.2007.05.007
    Good, D. (1988). Individuals, interpersonal relations and trust. In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 31-48). Blackwell Pub. https://doi.org/10.2307/591021
    Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, & K.Q. Weinberger (Eds.), Advances in neural information processing systems 27 (NIPS 2014) (pp. 2672-2680). Neural Information Processing Systems Foundation, Inc.
    Gouldner, A. W. (1960). The norm of reciprocity: A preliminary statement. American Sociological Review, 25(2), 161-178. https://doi.org/10.2307/2092623
    Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2023). A survey of generative AI applications. arXiv. https://doi.org/10.48550/arXiv.2306.02781
    Groover, M. P. (2016). Automation, production systems, and computer-integrated manufacturing. Pearson Education India.
    Groover, M. P. (2024, June 7). Automation. Encyclopedia Britannica. https://www.britannica.com/technology/automation
    Gulati, S., Sousa, S., & Lamas, D. (2017). Modelling trust: An empirical assessment. In R. Bernhaupt, G. Dalvi, A. Joshi, D.K. Balkrishan, J. O’Neill, & M. Winckler (Eds.), Human-computer interaction – INTERACT 2017: 16th IFIP TC 13 international conference, Mumbai, India, September 25-29, 2017, proceedings, part IV (pp. 40-61). Springer Cham. https://doi.org/10.1007/978-3-319-68059-0_3
    Gulati, S., Sousa, S., & Lamas, D. (2018). Modelling trust in human-like technologies. In Proceedings of the 9th Indian conference on human-computer interaction (pp. 1-10). Association for Computing Machinery. https://doi.org/10.1145/3297121.3297124
    Gulati, S., Sousa, S., & Lamas, D. (2019). Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology, 38(10), 1004-1015. https://doi.org/10.1080/0144929X.2019.1656779
    Haeffner, M., & Panuwatwanich, K. (2018). Perceived impacts of Industry 4.0 on manufacturing industry and its workforce: Case of Germany. In S. Şahin (Ed.), 8th International Conference on Engineering, Project, and Product Management (EPPM 2017) proceedings (pp. 199-208). Springer Cham. https://doi.org/10.1007/978-3-319-74123-9_21
    Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2-24. https://doi.org/10.1108/EBR-11-2018-0203
    Hall, H. (2003). Borrowed theory: applying exchange theories in information science research. Library & Information Science Research, 25(3), 287-306. https://doi.org/10.1016/S0740-8188(03)00031-8
    Harman, H. H. (1976). Modern factor analysis. University of Chicago Press. https://bit.ly/4eJf3ZC
    Harmon-Jones, E., & Mills, J. (2019). An introduction to cognitive dissonance theory and an overview of current perspectives on the theory. In E. Harmon-Jones (Ed.), Cognitive dissonance: Reexamining a pivotal theory in psychology (pp. 3-24). American Psychological Association. https://doi.org/10.1037/0000135-001
    Harrison, R. T., Dibben, M. R., & Mason, C. M. (1997). The role of trust in the informal investor's investment decision: An exploratory analysis. Entrepreneurship Theory and Practice, 21(4), 63-81. https://doi.org/10.1177/104225879702100405
    Heath, A. F. (1976). Rational choice & social exchange: A critique of exchange theory. Cambridge University Press. https://doi.org/10.2307/2577751
    Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43, 115-135. https://doi.org/10.1007/s11747-014-0403-8
    Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
    Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407-434. https://doi.org/10.1177/0018720814547570
    Hoffman, R. R., Johnson, M., Bradshaw, J. M., & Underbrink, A. (2013). Trust in automation. IEEE Intelligent Systems, 28(1), 84-88. https://doi.org/10.1109/MIS.2013.24
    Hois, J., Theofanou-Fuelbier, D., & Junk, A. J. (2019). How to achieve explainability and transparency in human AI interaction. In C. Stephanidis (Ed.), HCI International 2019 – posters: 21st international conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, proceedings, part II (pp. 177–183). Springer Cham. https://doi.org/10.1007/978-3-030-23528-4_25
    Holmes, J. G. (1981). The exchange process in close relationships: Microbehavior and macromotives. In M.J. Lerner & S.C. Lerner(Eds.), The justice motive in social behavior: Adapting to times of scarcity and change (pp. 261-284). Springer. https://doi.org/10.1007/978-1-4899-0429-4
    Homans, G. C. (1958). Social behavior as exchange. American Journal of Sociology, 63(6), 597-606. https://doi.org/10.1086/222355
    Homans, G. C. (1961). Social behavior: Its elementary forms. Harcourt, Brace & World.
    Houde, S., Liao, V., Martino, J., Muller, M., Piorkowski, D., Richards, J., Weisz, J., & Zhang, Y. (2020). Business (mis)use cases of generative AI. arXiv. https://doi.org/10.48550/arXiv.2003.07679
    Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. Yale University Press.
    Iivari, J., & Iivari, N. (2011). Varieties of user‐centredness: An analysis of four systems development methods. Information Systems Journal, 21(2), 125-153. https://doi.org/10.1111/j.1365-2575.2010.00351.x
    Janssen, C. P., Donker, S. F., Brumby, D. P., & Kun, A. L. (2019). History and future of human-automation interaction. International Journal of Human-Computer Studies, 131, 99-107. https://doi.org/10.1016/j.ijhcs.2019.05.006
    Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1-38. https://doi.org/10.1145/3571730
    Jiang, H., Cheng, Y., Yang, J., & Gao, S. (2022). AI-powered chatbot communication with customers: Dialogic interactions, satisfaction, engagement, and customer behavior. Computers in Human Behavior, 134, 107329. https://doi.org/10.1016/j.chb.2022.107329
    Jones, E. E. (1985). Major developments in social psychology during the past five decades. In J. DeLamater, & A. Ward (Eds.), Handbook of social psychology (pp. 47-107). Springer Dordrecht. https://doi.org/10.1007/978-94-007-6772-0
    Jones, G. R., & George, J. M. (1998). The experience and evolution of trust: Implications for cooperation and teamwork. Academy of Management Review, 23(3), 531-546. https://doi.org/10.5465/amr.1998.926625
    Joskowicz, J., & Slomovitz, D. (2023). Engineers' perspectives on the use of generative artificial intelligence tools in the workplace. IEEE Engineering Management Review, 52(1), 258-267. https://doi.org/10.1109/EMR.2023.3333794
    Kaber, D. B. (2018). Issues in human–automation interaction modeling: Presumptive aspects of frameworks of types and levels of automation. Journal of Cognitive Engineering and Decision Making, 12(1), 7-24. https://doi.org/10.1177/1555343417737203
    Karray, F., Alemzadeh, M., Abou Saleh, J., & Arab, M. N. (2008). Human-computer interaction: Overview on state of the art. International Journal on Smart Sensing and Intelligent Systems, 1(1), 137-159. https://doi.org/10.21307/ijssis-2017-283
    Kartikeya, A. (2022). Examining correlation between trust and transparency with explainable artificial intelligence. In K. Arai (Ed.), Intelligent computing proceedings of the 2022 computing conference, volume 2 (pp. 353–358). Springer Cham. https://doi.org/10.1007/978-3-031-10464-0_23
    Kashif, M., Zarkada, A., & Ramayah, T. (2018). The impact of attitude, subjective norms, and perceived behavioural control on managers’ intentions to behave ethically. Total Quality Management & Business Excellence, 29(5-6), 481-501. https://doi.org/10.1080/14783363.2016.1209970
    Kee, H. W., & Knox, R. E. (1970). Conceptual and methodological considerations in the study of trust and suspicion. Journal of Conflict Resolution, 14(3), 357-366. https://doi.org/10.1177/002200277001400307
    Kim, D. J., Ferrin, D. L., & Rao, H. R. (2008). A trust-based consumer decision-making model in electronic commerce: The role of trust, perceived risk, and their antecedents. Decision Support Systems, 44(2), 544-564. https://doi.org/10.1016/j.dss.2007.07.001
    Kim, H., So, K. K. F., & Wirtz, J. (2022). Service robots: Applying social exchange theory to better understand human–robot interactions. Tourism Management, 92, 104537. https://doi.org/10.1016/j.tourman.2022.104537
    Kim, K. K., & Prabhakar, B. (2004). Initial trust and the adoption of B2C e-commerce: The case of internet banking. ACM SIGMIS Database: The Database for Advances in Information Systems, 35(2), 50-64. https://doi.org/10.1145/1007965.1007970
    Kim, T., & Song, H. (2021). How should intelligent agents apologize to restore trust? Interaction effects between anthropomorphism and apology attribution on trust repair. Telematics and Informatics, 61, 101595. https://doi.org/10.1016/j.tele.2021.101595
    Kim, Y., & Sundar, S. S. (2012). Anthropomorphism of computers: Is it mindful or mindless? Computers in Human Behavior, 28(1), 241-250. https://doi.org/10.1016/j.chb.2011.09.006
    Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv. https://doi.org/10.48550/arXiv.1312.6114
    Kini, A., & Choobineh, J. (1998). Trust in electronic commerce: Definition and theoretical considerations. In H. El-Rewini (Ed.), Proceedings of the thirty-first Hawaii international conference on system sciences (Vol. 4, pp. 51-61). IEEE. https://doi.org/10.1109/HICSS.1998.655251
    Kizilcec, R. F. (2016). How much information? Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI conference on human factors in computing systems (pp. 2390-2395). Association for Computing Machinery. https://doi.org/10.1145/2858036.2858402
    Klumpp, M., Hesenius, M., Meyer, O., Ruiner, C., & Gruhn, V. (2019). Production logistics and human-computer interaction—state-of-the-art, challenges and requirements for the future. The International Journal of Advanced Manufacturing Technology, 105, 3691-3709. https://doi.org/10.1007/s00170-019-03785-0
    Kobsa, A., Koenemann, J., & Pohl, W. (2001). Personalised hypermedia presentation techniques for improving online customer relationships. The Knowledge Engineering Review, 16(2), 111-155. https://doi.org/10.1017/S0269888901000108
    Kocoń, J., Cichecki, I., Kaszyca, O., Kochanek, M., Szydło, D., Baran, J., Bielaniewicz, J., Gruza, M., Janz, A., & Kanclerz, K. (2023). ChatGPT: Jack of all trades, master of none. Information Fusion, 99, 101861. https://doi.org/10.1016/j.inffus.2023.101861
    Kong, W.-C., & Hung, Y.-T. C. (2006). Modeling initial and repeat online trust in B2C e-commerce. In Proceedings of the 39th annual Hawaii international conference on system sciences (HICSS'06) (Vol. 6, p. 120b). IEEE. https://doi.org/10.1109/HICSS.2006.354
    Koufaris, M., & Hampton-Sosa, W. (2004). The development of initial trust in an online company by new customers. Information & Management, 41(3), 377-397. https://doi.org/10.1016/j.im.2003.08.004
    Kramer, R. M., & Tyler, T. R. (1996). Trust in organizations: Frontiers of theory and research. Sage. https://doi.org/10.4135/9781452243610
    Larzelere, R. E., & Huston, T. L. (1980). The dyadic trust scale: Toward understanding interpersonal trust in close relationships. Journal of Marriage and the Family, 42(3), 595-604. https://doi.org/10.2307/351903
    Lee, E.-J. (2010). The more humanlike, the better? How speech type and users’ cognitive style affect social responses to computers. Computers in Human Behavior, 26(4), 665-672. https://doi.org/10.1016/j.chb.2010.01.003
    Lee, J.-C., & Chen, X. (2022). Exploring users' adoption intentions in the evolution of artificial intelligence mobile banking applications: the intelligent and anthropomorphic perspectives. International Journal of Bank Marketing, 40(4), 631-658. https://doi.org/10.1108/IJBM-08-2021-0394
    Lee, J., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10), 1243-1270. https://doi.org/10.1080/00140139208967392
    Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. https://doi.org/10.1518/hfes.46.1.50_30392
    Lee, M. K., & Turban, E. (2001). A trust model for consumer internet shopping. International Journal of Electronic Commerce, 6(1), 75-91. https://doi.org/10.1080/10864415.2001.11044227
    Lee, Y., & Kozar, K. A. (2008). An empirical investigation of anti-spyware software adoption: A multitheoretical perspective. Information & Management, 45(2), 109-119. https://doi.org/10.1016/j.im.2008.01.002
    Lewis, S. C., Guzman, A. L., & Schmidt, T. R. (2019). Automation, journalism, and human–machine communication: Rethinking roles and relationships of humans and machines in news. Digital Journalism, 7(4), 409-427. https://doi.org/10.1080/21670811.2019.1577147
    Li, C., Wang, J., Zhang, Y., Zhu, K., Hou, W., Lian, J., Luo, F., Yang, Q., & Xie, X. (2023). Large language models understand and can be enhanced by emotional stimuli. arXiv. https://doi.org/10.48550/arXiv.2307.11760
    Li, J. (2015). Knowledge sharing in virtual communities: A social exchange theory perspective. Journal of Industrial Engineering and Management (JIEM), 8(1), 170-183. https://doi.org/10.3926/jiem.1389
    Li, X., Hess, T. J., & Valacich, J. S. (2008). Why do we trust new technology? A study of initial trust formation with organizational information systems. The Journal of Strategic Information Systems, 17(1), 39-71. https://doi.org/10.1016/j.jsis.2008.01.001
    Lieberman, J. K. (1981). The litigious society. Basic Books.
    Lindebaum, D., Vesa, M., & Den Hond, F. (2020). Insights from “the machine stops” to better understand rational assumptions in algorithmic decision making and its implications for organizations. Academy of Management Review, 45(1), 247-263. https://doi.org/10.5465/amr.2018.0181
    Lindsay, P. H., & Norman, D. A. (2013). Human information processing: An introduction to psychology. Academic Press. https://bit.ly/3L9EWUA
    Liu, J. L. (2023). Loving a “defiant” AI companion? The gender performance and ethics of social exchange robots in simulated intimate interactions. Computers in Human Behavior, 141, 107620. https://doi.org/10.1016/j.chb.2022.107620
    Lv, Z. (2023). Generative artificial intelligence in the metaverse era. Cognitive Robotics, 3, 208-217. https://doi.org/10.1016/j.cogr.2023.06.001
    Machogu, A. M., & Okiko, L. (2012). The perception of bank employees towards cost of adoption, risk of innovation, and staff training's influence on the adoption of information and communication technology in the Rwandan commercial banks. Journal of Internet Banking and Commerce, 17(2), 1-15.
    Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46-60. https://doi.org/10.1016/j.futures.2017.03.006
    Manyika, J., & Sneader, K. (2018). AI, automation, and the future of work: Ten things to solve for. McKinsey & Company. https://mck.co/4cBPt6I
    Martins, N. (2002). A model for managing trust. International Journal of Manpower, 23(8), 754-769. https://doi.org/10.1108/01437720210453984
    Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734. https://doi.org/10.5465/amr.1995.9508080335
    McKnight, D. H. (2005). Trust in information technology. In G. B. Davis (Ed.), The Blackwell Encyclopedia of Management (pp. 329-331). Wiley-Blackwell.
    McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems (TMIS), 2(2), 1-25. https://doi.org/10.1145/1985347.1985353
    McKnight, D. H., Choudhury, V., & Kacmar, C. (2002a). Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, 13(3), 334-359. https://doi.org/10.1287/isre.13.3.334.81
    McKnight, D. H., Choudhury, V., & Kacmar, C. (2002b). The impact of initial consumer trust on intentions to transact with a web site: a trust building model. The Journal of Strategic Information Systems, 11(3-4), 297-323. https://doi.org/10.1016/S0963-8687(02)00020-3
    McKnight, D. H., Cummings, L. L., & Chervany, N. L. (1998). Initial trust formation in new organizational relationships. Academy of Management Review, 23(3), 473-490. https://doi.org/10.5465/amr.1998.926622
    Mead, G. H. (1934). Mind, self, and society from the standpoint of a social behaviorist. University of Chicago Press.
    Meeker, B. F. (1971). Decisions and exchange. American Sociological Review, 36(3), 485-495. https://doi.org/10.2307/2093088
    Meyer, J. (2001). Effects of warning validity and proximity on responses to warnings. Human Factors, 43(4), 563-572. https://doi.org/10.1518/00187200177587039
    Mich, L., & Garigliano, R. (2023). ChatGPT for e-Tourism: A technological perspective. Information Technology & Tourism, 25, 1-12. https://doi.org/10.1007/s40558-023-00248-x
    Michell, P., Reast, J., & Lynch, J. (1998). Exploring the foundations of trust. Journal of Marketing Management, 14(1-3), 159-172. https://doi.org/10.1362/026725798784959417
    Microsoft News Center (Ed.) (2023). Microsoft’s 2023 Work Trend Index Report reveals impact of digital debt on innovation, emphasizes need for AI proficiency for every employee. https://bit.ly/3VIPkYn
    Mohammed, B. S., Fethi, A., & Djaoued, O. B. (2017). The influence of attitude, subjective norms and perceived behavior control on entrepreneurial intentions: Case of Algerian students. American Journal of Economics, 7(6), 274-282. https://doi.org/10.5923/j.economics.20170706.02
    Molm, L. D. (1997). Coercive power in social exchange. Cambridge University Press. https://doi.org/10.1017/CBO9780511570919
    Molm, L. D. (2010). The structure of reciprocity. Social Psychology Quarterly, 73(2), 119-131. https://doi.org/10.1177/0190272510369079
    Molm, L. D., Takahashi, N., & Peterson, G. (2000). Risk and trust in social exchange: An experimental test of a classical proposition. American Journal of Sociology, 105(5), 1396-1427. https://doi.org/10.1086/210434
    Mostafa, R. B., & Kasamani, T. (2022). Antecedents and consequences of chatbot initial trust. European Journal of Marketing, 56(6), 1748-1771. https://doi.org/10.1108/EJM-02-2020-0084
    Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2021). How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electronic Markets, 31, 343-364. https://doi.org/10.1007/s12525-020-00411-w
    Mui, L., Mohtashemi, M., & Halberstadt, A. (2002). A computational model of trust and reputation. In R. H. Sprague, Jr. (Ed.), Proceedings of the 35th annual Hawaii international conference on system sciences (pp. 2431-2439). IEEE. https://doi.org/10.1109/HICSS.2002.994181
    Mukherjee, A., & Nath, P. (2003). A model of trust in online relationship banking. International Journal of Bank Marketing, 21(1), 5-15. https://doi.org/10.1108/02652320310457767
    Muller, M., Chilton, L. B., Kantosalo, A., Martin, C. P., & Walsh, G. (2022). GenAICHI: generative AI and HCI. In S. Barbosa, C. Lampe, C. Appert, D.A. Shamma (Eds.), CHI conference on human factors in computing systems extended abstracts (110). Association for Computing Machinery. https://doi.org/10.1145/3491101.3503719
    Murphy, G. B., & Blessinger, A. A. (2003). Perceptions of no-name recognition business to consumer e-commerce trustworthiness: the effectiveness of potential influence tactics. The Journal of High Technology Management Research, 14(1), 71-92. https://doi.org/10.1016/S1047-8310(03)00005-1
    Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304. https://doi.org/10.1080/15228053.2023.2233814
    Nalini, M. K., Kumar, R. A., & Sushma, D. (2023). Generative AI: A comprehensive study of advancements and application. International Journal of Science & Engineering Development Research, 8(8), 479-483.
    Nasrolahi Vosta, L., & Jalilvand, M. R. (2023). Electronic trust-building for hotel websites: A social exchange theory perspective. Journal of Islamic Marketing, 14(11), 2689-2714. https://doi.org/10.1108/JIMA-05-2022-0119
    Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103. https://doi.org/10.1111/0022-4537.00153
    Nass, C., Steuer, J., Tauber, E., & Reeder, H. (1993). Anthropomorphism, agency, and ethopoeia: Computers as social actors. In INTERACT'93 and CHI'93 conference companion on human factors in computing systems (pp. 111-112). Association for Computing Machinery.
    Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 72-78). Association for Computing Machinery.
    Neuburger, H. (1971). Perceived costs. Environment and Planning A: Economy and Space, 3(4), 369-376. https://doi.org/10.1068/a030369
    Norman, S. M., Avey, J., Larson, M., & Hughes, L. (2020). The development of trust in virtual leader–follower relationships. Qualitative Research in Organizations and Management: An International Journal, 15(3), 279-295. https://doi.org/10.1108/QROM-12-2018-1701
    Norvig, P. (2017, June 29). Artificial intelligence in the software engineering workflow [Keynote address]. O'Reilly Artificial Intelligence Conference 2017, New York, USA. https://bit.ly/3XKtcQ7
    Nundy, S., Montgomery, T., & Wachter, R. M. (2019). Promoting trust between patients and physicians in the era of artificial intelligence. Jama, 322(6), 497-498. https://doi.org/10.1001/jama.2018.20563
    Obrenovic, B., Gu, X., Wang, G., Godinic, D., & Jakhongirov, I. (2024). Generative AI and human–robot interaction: implications and future agenda for business, society and ethics. AI & SOCIETY, 1-14. https://doi.org/10.1007/s00146-024-01889-0
    OpenAI. (Ed.) (2024a). Data Controls FAQ. https://help.openai.com/en/articles/7730893-data-controls-faq
    OpenAI. (Ed.) (2024b). OpenAI Developer Forum. https://community.openai.com/
    Papamakarios, G., Pavlakou, T., & Murray, I. (2017). Masked autoregressive flow for density estimation. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems 30 (NIPS 2017) (pp. 2338-2347). Neural Information Processing Systems Foundation, Inc.
    Papenmeier, A. (2019). Trust in automated decision making: How user's trust and perceived understanding is influenced by the quality of automatically generated explanations [Unpublished master's dissertation]. University of Twente.
    Park, S. (2020). Multifaceted trust in tourism service robots. Annals of Tourism Research, 81, 102888. https://doi.org/10.1016/j.annals.2020.102888
    Peng, B., Galley, M., He, P., Cheng, H., Xie, Y., Hu, Y., Huang, Q., Liden, L., Yu, Z., & Chen, W. (2023). Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv. https://doi.org/10.48550/arXiv.2302.12813
    Pickering, B. (2021). Trust, but verify: informed consent, AI technologies, and public health emergencies. Future Internet, 13(5), 132. https://doi.org/10.3390/fi13050132
    Pillai, R., Sivathanu, B., Metri, B., & Kaushik, N. (2023). Students' adoption of AI-based teacher-bots (T-bots) for learning in higher education. Information Technology & People, 37(1), 328-355. https://doi.org/10.1108/ITP-02-2021-0152
    Pittman, K. (2016, October 28). A history of collaborative robots: From intelligent lift assists to cobots. Engineering.com. https://reurl.cc/NQKqok
    Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903. https://doi.org/10.1037/0021-9010.88.5.879
    Posard, M. N., & Rinderknecht, R. G. (2015). Do people like working with computers more than human beings? Computers in Human Behavior, 51, 232-238. https://doi.org/10.1016/j.chb.2015.04.057
    Purwanto, A., Zuiderwijk, A., & Janssen, M. (2020). Citizens’ trust in open government data: a quantitative study about the effects of data quality, system quality and service quality. In S.-J. Eom, & J. Lee (Eds.), Proceedings of the 21st annual international conference on digital government research (pp. 310-318). Association for Computing Machinery. https://doi.org/10.1145/3396956.3396958
    Qalati, S. A., Vela, E. G., Li, W., Dakhan, S. A., Hong Thuy, T. T., & Merani, S. H. (2021). Effects of perceived service quality, website quality, and reputation on purchase intention: The mediating and moderating roles of trust and perceived risk in online shopping. Cogent Business & Management, 8(1), 1869363. https://doi.org/10.1080/23311975.2020.1869363
    Rafaeli, S. (2019). Interacting with media: Para-social interaction and real interaction. In Mediation, information, and communication (pp. 125-181). Routledge. https://doi.org/10.4324/9781351317207
    Rafsanjani, H. N., & Nabizadeh, A. H. (2023). Towards human-centered artificial intelligence (AI) in architecture, engineering, and construction (AEC) industry. Computers in Human Behavior Reports, 100319. https://doi.org/10.1016/j.chbr.2023.100319
    Rahman, M., Ming, T. H., Baigh, T. A., & Sarker, M. (2021). Adoption of artificial intelligence in banking services: An empirical analysis. International Journal of Emerging Markets, 18(10), 4270-4300. https://doi.org/10.1108/IJOEM-06-2020-0724
    Rangarajan, G. (2023). Meet Synopsys.ai Copilot, Industry's First GenAI Capability for Chip Design. Synopsys. https://www.synopsys.com/blogs/chip-design/copilot-generative-ai-chip-design.html
    Redmond, M. (2015, January 1). Social exchange theory. English Technical Reports and White Papers. http://works.bepress.com/mark_redmond/2/
    Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people. Center for the Study of Language and Information; Cambridge University Press.
    Rempel, J. K., Holmes, J. G., & Zanna, M. P. (1985). Trust in close relationships. Journal of Personality and Social Psychology, 49(1), 95-112. https://doi.org/10.1037/0022-3514.49.1.95
    Ren, C., Deng, Z., Hong, Z., & Zhang, W. (2019). Health information in the digital age: an empirical study of the perceived benefits and costs of seeking and using health information from online sources. Health Information & Libraries Journal, 36(2), 153-167. https://doi.org/10.1111/hir.12250
    Repperger, D. W., & Phillips, C. A. (2009). The human role in automation. In S.Y. Nof (Ed.), Springer handbook of automation (pp. 295-304). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78831-7_17
    Rheu, M., Shin, J. Y., Peng, W., & Huh-Yoo, J. (2021). Systematic review: Trust-building factors and implications for conversational agent design. International Journal of Human–Computer Interaction, 37(1), 81-96. https://doi.org/10.1080/10447318.2020.1807710
    Ring, P. S., & Van de Ven, A. H. (1992). Structuring cooperative relationships between organizations. Strategic Management Journal, 13(7), 483-498. https://doi.org/10.1002/smj.4250130702
    Rom, S. C., Katzir, M., Diel, K., & Hofmann, W. (2020). On trading off labor and leisure: A process model of perceived autonomy and opportunity costs. Motivation Science, 6(3), 235-246. https://doi.org/10.1037/mot0000148
    Salim, T. A., El Barachi, M., Mohamed, A. A. D., Halstead, S., & Babreak, N. (2022). The mediator and moderator roles of perceived cost on the relationship between organizational readiness and the intention to adopt blockchain technology. Technology in Society, 71, 102108. https://doi.org/10.1016/j.techsoc.2022.102108
    Salo, J., & Karjaluoto, H. (2007). A conceptual model of trust in the online environment. Online Information Review, 31(5), 604-621. https://doi.org/10.1108/14684520710832324
    Schmidt, P., Biessmann, F., & Teubner, T. (2020). Transparency and trust in artificial intelligence systems. Journal of Decision Systems, 29(4), 260-278. https://doi.org/10.1080/12460125.2020.1819094
    Schoenherr, J. R., Abbas, R., Michael, K., Rivas, P., & Anderson, T. D. (2023). Designing AI using a human-centered approach: Explainability and accuracy toward trustworthiness. IEEE Transactions on Technology and Society, 4(1), 9-23. https://doi.org/10.1109/TTS.2023.3257627
    Schwab, K. (2017). The fourth industrial revolution. Crown.
    Selamat, M. A., & Windasari, N. A. (2021). Chatbot for SMEs: Integrating customer and business owner perspectives. Technology in Society, 66, 101685. https://doi.org/10.1016/j.techsoc.2021.101685
    Seong, Y., & Bisantz, A. M. (2008). The impact of cognitive feedback on judgment performance and trust with decision aids. International Journal of Industrial Ergonomics, 38(7-8), 608-625. https://doi.org/10.1016/j.ergon.2008.01.007
    Seyal, A. H., & Rahim, M. M. (2006). A Preliminary investigation of electronic data interchange adoption in Bruneian small business organizations. The Electronic Journal of Information Systems in Developing Countries, 24(1), 1-21. https://doi.org/10.1002/j.1681-4835.2006.tb00159.x
    Turkle, S. (1984). The second self: Computers and the human spirit. Simon & Schuster. https://reurl.cc/oRE7Q5
    Shin, D. (2020). User perceptions of algorithmic decisions in the personalized AI system: Perceptual evaluation of fairness, accountability, transparency, and explainability. Journal of Broadcasting & Electronic Media, 64(4), 541-565. https://doi.org/10.1080/08838151.2020.1843357
    Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
    Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495-504. https://doi.org/10.1080/10447318.2020.1741118
    Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47-53.
    Sison, A. J. G., Daza, M. T., Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2023). ChatGPT: More than a “weapon of mass deception” ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective. International Journal of Human–Computer Interaction, 1-20. https://doi.org/10.1080/10447318.2023.2225931
    Smelser, N. J., & Baltes, P. B. (2001). International encyclopedia of the social & behavioral sciences (Vol. 12). Elsevier. https://reurl.cc/qVWL0N
    Sousa, S., Cravino, J., Martins, P., & Lamas, D. (2023). Human-centered trust framework: An HCI perspective. arXiv. https://doi.org/10.48550/arXiv.2305.03306
    Sousa, S., Lamas, D., & Dias, P. (2014). A model for human-computer trust: contributions towards leveraging user engagement. In P. Zaphiris, & A. Ioannou (Eds.), Learning and collaboration technologies. designing and developing novel learning experiences: First international conference, LCT 2014, held as part of HCI international 2014, Heraklion, Crete, Greece, June 22-27, 2014, proceedings, part I (pp. 128-137). Springer Cham. https://doi.org/10.1007/978-3-319-07482-5_13
    Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work: Human–AI collaboration in managerial professions. Journal of Business Research, 125, 135-142. https://doi.org/10.1016/j.jbusres.2020.11.038
    Srinivasan, A. V., & de Boer, M. (2020). Improving trust in data and algorithms in the medium of AI. Maandblad voor Accountancy en Bedrijfseconomie, 94(3/4), 69-82. https://doi.org/10.5117/mab.94.49425
Starke, G., van den Brule, R., Elger, B. S., & Haselager, P. (2022). Intentional machines: A defence of trust in medical artificial intelligence. Bioethics, 36(2), 154-161. https://doi.org/10.1111/bioe.12891
    Sundar, S. S., & Lee, E.-J. (2022). Rethinking communication in the era of artificial intelligence. Human Communication Research, 48(3), 379-385. https://doi.org/10.1093/hcr/hqac014
    Tarhini, A., Arachchilage, N. A. G., & Abbasi, M. S. (2015). A critical review of theories and models of technology adoption and acceptance in information system research. International Journal of Technology Diffusion (IJTD), 6(4), 58-77. https://doi.org/10.4018/IJTD.2015100104
Thalpage, N. S. (2023). Unlocking the black box: Explainable artificial intelligence (XAI) for trust and transparency in AI systems. Journal of Digital Art & Humanities, 4(1), 31-36. https://doi.org/10.33847/2712-8148.4.1_4
Thibaut, J. W., & Kelley, H. H. (1959). The social psychology of groups. Transaction Publishers. https://reurl.cc/vaR7Ka
    Thiebes, S., Lins, S., & Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic Markets, 31, 447-464. https://doi.org/10.1007/s12525-020-00441-4
    Thomas, K. W. (2009). Intrinsic motivation at work: What really drives employee engagement. Berrett-Koehler Publishers. https://reurl.cc/nNGDpn
    Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879
Tong, Y., Wang, X., & Teo, H.-H. (2007). Understanding the intention of information contribution to online feedback systems from social exchange and motivation crowding perspectives. In 2007 40th Annual Hawaii International Conference on System Sciences (HICSS'07) (p. 28). IEEE. https://doi.org/10.1109/HICSS.2007.585
    Tran, H. T. T., Nguyen, N. T., & Tang, T. T. (2023). Influences of subjective norms on teachers’ intention to use social media in working. Contemporary Educational Technology, 15(1), ep400. https://doi.org/10.30935/cedtech/12659
    Tsung-Yu, H., Yu-Chia, T., & Wen, Y. C. (2024). Is this AI sexist? The effects of a biased AI’s anthropomorphic appearance and explainability on users’ bias perceptions and trust. International Journal of Information Management, 76, 102775. https://doi.org/10.1016/j.ijinfomgt.2024.102775
Turkle, S. (1984). The second self: Computers and the human spirit. Simon & Schuster. https://reurl.cc/oRE7Q5
Turkle, S. (2005). The second self: Computers and the human spirit. MIT Press. https://doi.org/10.7551/mitpress/6115.001.0001
Tymon, W. G., Jr., Stumpf, S. A., & Doh, J. P. (2010). Exploring talent management in India: The neglected role of intrinsic rewards. Journal of World Business, 45(2), 109-121. https://doi.org/10.1016/j.jwb.2009.09.016
Tzafestas, S. G. (2010). Human factors in automation (II): Psychological, physical strength, human error and human values factors. In Human and nature minding automation: Intelligent systems, control and automation: Science and engineering (pp. 35-46). Springer Dordrecht. https://doi.org/10.1007/978-90-481-3562-2_3
Uehara, E. (1990). Dual exchange theory, social networks, and informal social support. American Journal of Sociology, 96(3), 521-557. https://doi.org/10.1086/229571
    Usmani, U. A., Happonen, A., & Watada, J. (2023). Human-centered artificial intelligence: Designing for user empowerment and ethical considerations. In 2023 5th international congress on human-computer interaction, optimization and robotic applications (HORA) (pp. 1-7). IEEE. https://doi.org/10.1109/HORA58378.2023.10156761
    Uysal, E., Alavi, S., & Bezençon, V. (2022). Trojan horse or useful helper? A relationship perspective on artificial intelligence assistants with humanlike features. Journal of the Academy of Marketing Science, 50(6), 1153-1175. https://doi.org/10.1007/s11747-022-00856-9
Vance, A., Elie-Dit-Cosaque, C., & Straub, D. W. (2008). Examining trust in information technology artifacts: The effects of system quality and culture. Journal of Management Information Systems, 24(4), 73-100. https://doi.org/10.2753/MIS0742-1222240403
    Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems 30 (NIPS 2017) (pp. 5998-6008). Neural Information Processing Systems Foundation, Inc.
    Vroom, V. H. (1964). Work and motivation. Wiley.
    Wach, K., Duong, C. D., Ejdys, J., Kazlauskaitė, R., Korzynski, P., Mazurek, G., Paliszkiewicz, J., & Ziemba, E. (2023). The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrepreneurial Business & Economics Review, 11(2).
    Wagner, A. R. (2009). Creating and using matrix representations of social interaction. In Proceedings of the 4th ACM/IEEE international conference on human robot interaction (pp. 125-132). Association for Computing Machinery. https://doi.org/10.1145/1514095.1514119
Walkowiak, E., & MacDonald, T. (2023). Generative AI and the workforce: What are the risks? SSRN. http://dx.doi.org/10.2139/ssrn.4568684
Walsh, T. (2017). The AI revolution. NSW Department of Education, Education: Future Frontiers. https://www.saeon.com.au/toniedoc/ai-revolution.pdf
    Wang, W., & Benbasat, I. (2008). Attributions of trust in decision support technologies: A study of recommendation agents for e-commerce. Journal of Management Information Systems, 24(4), 249-273. https://doi.org/10.2753/MIS0742-1222240410
    Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219-232. https://doi.org/10.1177/1745691610369336
    Weiss, L., & Kivetz, R. (2019). Opportunity cost overestimation. Journal of Marketing Research, 56(3), 518-533. https://doi.org/10.1177/0022243718819474
    Winograd, T., & Flores, F. (1986). Understanding computers and cognition: A new foundation for design. Ablex Publishing Corporation. https://reurl.cc/aqKVK3
Wong, P. S.-P., & Cheung, S.-O. (2004). Trust in construction partnering: Views from parties of the partnering dance. International Journal of Project Management, 22(6), 437-446. https://doi.org/10.1016/j.ijproman.2004.01.001
    Wu, L.-Y., Chen, K.-Y., Chen, P.-Y., & Cheng, S.-L. (2014). Perceived value, transaction cost, and repurchase-intention in online shopping: A relational exchange perspective. Journal of Business Research, 67(1), 2768-2776. https://doi.org/10.1016/j.jbusres.2012.09.007
    Xin, H., Techatassanasoontorn, A. A., & Tan, F. B. (2015). Antecedents of consumer trust in mobile payment adoption. Journal of Computer Information Systems, 55(4), 1–10. https://doi.org/10.1080/08874417.2015.11645781
    Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., & Zhu, J. (2019). Explainable AI: A brief survey on history, research areas, approaches and challenges. In J. Tang, M.-Y. Kan, D. Zhao, S. Li, & H. Zan (Eds.), Natural language processing and Chinese computing: 8th CCF international conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, proceedings, part II (pp. 563-574). Springer Cham. https://doi.org/10.1007/978-3-030-32236-6_51
    Xu, W. (2019). Toward human-centered AI: A perspective from human-computer interaction. Interactions, 26(4), 42-46. https://doi.org/10.1145/3328485
    Xu, W., Dainoff, M. J., Ge, L., & Gao, Z. (2023). Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. International Journal of Human–Computer Interaction, 39(3), 494-518. https://doi.org/10.1080/10447318.2022.2041900
    Yang, R., & Wibowo, S. (2022). User trust in artificial intelligence: A comprehensive conceptual framework. Electronic Markets, 32(4), 2053-2077. https://doi.org/10.1007/s12525-022-00592-6
    Yates, J. F., & Stone, E. R. (1992). The risk construct. In J. F. Yates (Ed.), Risk-taking behavior (pp. 1–25). John Wiley & Sons.
    Yin, N. (2018). The influencing outcomes of job engagement: An interpretation from the social exchange theory. International Journal of Productivity and Performance Management, 67(5), 873-889. https://doi.org/10.1108/IJPPM-03-2017-0054
    Zainab, B., Awais Bhatti, M., & Alshagawi, M. (2017). Factors affecting e-training adoption: An examination of perceived cost, computer self-efficacy and the technology acceptance model. Behaviour & Information Technology, 36(12), 1261-1273. https://doi.org/10.1080/0144929X.2017.1380703
    Zhang, X., Zhang, Y., Sun, Y., Lytras, M., Ordonez de Pablos, P., & He, W. (2018). Exploring the effect of transformational leadership on individual creativity in e-learning: A perspective of social exchange theory. Studies in Higher Education, 43(11), 1964-1978. https://doi.org/10.1080/03075079.2017.1296824
    Zhao, H., Lan, J., Lyu, T., & Zeng, G. (2023). Working with artificial intelligence surveillance during the COVID-19 pandemic: A mixed investigation of the influence mechanism on job engagement in hospitality industry. Current Issues in Tourism, 26(20), 3318-3335. https://doi.org/10.1080/13683500.2022.2117593
    Zhao, R., Benbasat, I., & Cavusoglu, H. (2019). Do users always want to know more? Investigating the relationship between system transparency and users’ trust in advice-giving systems. In Proceedings of the 27th European Conference on Information Systems (ECIS), Stockholm & Uppsala, Sweden, June 8-14, 2019 (42). https://aisel.aisnet.org/ecis2019_rip/42/

Description: Master's thesis
National Chengchi University
Department of Management Information Systems
110356017
Source URI: http://thesis.lib.nccu.edu.tw/record/#G0110356017
Data Type: thesis
Appears in Collections: [Department of Management Information Systems] Theses

    Files in This Item:

601701.pdf (3035 KB, Adobe PDF)


All items in the NCCU Institutional Repository (政大典藏) are protected by copyright, with all rights reserved.

