    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/150265

    Title: 《幻意視界》- 以生成式藝術與眼動追蹤技術探討意識的變幻
    《Illusory Eyescape》- Exploring the Variations of Consciousness through Generative Art and Eye-Tracking Techniques
    Authors: 李欣霏 (Lee, Sin-Fei)
    Contributors: 紀明德 (Chi, Ming-Te); Tao, Ya-Lun; Lee, Sin-Fei
    Keywords: 生成式藝術 (generative art)
    Date: 2024
    Issue Date: 2024-03-01 14:13:35 (UTC+8)
    Abstract: Amid the torrent of technology, artistic forms and digital techniques have continually merged, adapted, and evolved: from the age of mechanical reproduction, to the virtual re-presentation of the real, to the present day, when even the work of generating a piece can be accomplished with a single prompt. What, then, is the essence of artistic creation? The Swiss artist Alberto Giacometti once said: "The object of art is not to reproduce reality, but to create a reality of the same intensity." Here, the artwork serves only as a medium for expressing genuine feeling; its essence lies in the intangible spirit and emotion it reveals beyond its material nature.
    Reference: [1] E. Mansimov, E. Parisotto, J. L. Ba, and R. Salakhutdinov, “Generating images
    from captions with attention,” arXiv preprint arXiv:1511.02793, 2015.
    [2] P. Wolfendale, Object-oriented philosophy: The noumenon’s new clothes. MIT
    Press, 2019, vol. 1.
    [3] M. Coeckelbergh, “Can machines create art?” Philosophy & Technology, vol. 30,
    no. 3, pp. 285–303, 2017.
    [4] J.-W. Hong and N. M. Curran, “Artificial intelligence, artists, and art: attitudes
    toward artwork produced by humans vs. artificial intelligence,” ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM),
    vol. 15, no. 2s, pp. 1–16, 2019.
    [5] E. S. Mikalonytė and M. Kneer, “Can artificial intelligence make art?: Folk intuitions as to whether AI-driven robots can be viewed as artists and produce art,”
    ACM Transactions on Human-Robot Interaction (THRI), vol. 11, no. 4, pp. 1–19, 2022.
    [6] A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. Voss, A. Radford, M. Chen, and
    I. Sutskever, “Zero-shot text-to-image generation,” pp. 8821–8831, 2021.
    [7] G. M. Edelman, Neural Darwinism: The theory of neuronal group selection. Basic books, 1987.
    [8] G. M. Edelman and G. Tononi, A universe of consciousness: How matter becomes
    imagination. Hachette UK, 2008.
    [9] G. M. Edelman and G. Tononi, A Universe of Consciousness: How Matter Becomes Imagination (retranslated Chinese edition), 2019.
    [10] G. Tononi, “An information integration theory of consciousness,” BMC neuroscience, vol. 5, pp. 1–22, 2004.
    [11] A. Haun and G. Tononi, “Why does space feel the way it does? towards a principled account of spatial experience,” Entropy, vol. 21, no. 12, p. 1160, 2019.
    [12] B. J. Baars, A cognitive theory of consciousness. Cambridge University Press, 1988.
    [13] ——, “Global workspace theory of consciousness: toward a cognitive neuroscience of human experience,” Progress in brain research, vol. 150, pp. 45–53, 2005.
    [14] S. Dehaene, M. Kerszberg, and J.-P. Changeux, “A neuronal model of a global
    workspace in effortful cognitive tasks,” Proceedings of the National Academy of
    Sciences, vol. 95, no. 24, pp. 14529–14534, 1998.
    [15] R. VanRullen and R. Kanai, “Deep learning and the global workspace theory,”
    Trends in Neurosciences, vol. 44, no. 9, pp. 692–704, 2021.
    [16] N. Block, “How many concepts of consciousness?” Behavioral and brain sciences, vol. 18, no. 2, pp. 272–287, 1995.
    [17] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and
    Y. Bengio, “Show, attend and tell: Neural image caption generation with visual
    attention,” pp. 2048–2057, 2015.
    [18] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: A neural image
    caption generator,” pp. 3156–3164, 2015.
    [19] K. Gregor, I. Danihelka, A. Graves, D. Rezende, and D. Wierstra, “Draw: A
    recurrent neural network for image generation,” pp. 1462–1471, 2015.
    [20] A. Mordvintsev, C. Olah, and M. Tyka, “Inceptionism: Going deeper into neural
    networks,” 2015.
    [21] L. A. Gatys, A. S. Ecker, and M. Bethge, “A neural algorithm of artistic style,”
    arXiv preprint arXiv:1508.06576, 2015.
    [22] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation
    using cycle-consistent adversarial networks,” pp. 2223–2232, 2017.
    [23] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” pp. 4401–4410, 2019.
    [24] J. Ho, A. Jain, and P. Abbeel, “Denoising diffusion probabilistic models,” Advances in neural information processing systems, vol. 33, pp. 6840–6851, 2020.
    [25] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry,
    A. Askell, P. Mishkin, J. Clark et al., “Learning transferable visual models from
    natural language supervision,” pp. 8748–8763, 2021.
    [26] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-resolution
    image synthesis with latent diffusion models,” pp. 10684–10695, 2022.
    [27] L. Wittgenstein and R. Monk, Tractatus logico-philosophicus. Routledge, 2013.
    [28] M. O’Sullivan, An Analysis of Ludwig Wittgenstein’s Philosophical Investigations. Macat Library, 2017.
    [29] T. Nagel, “What is it like to be a bat?” pp. 159–168, 1980.
    [30] G. Morrot, F. Brochet, and D. Dubourdieu, “The color of odors,” Brain and language, vol. 79, no. 2, pp. 309–320, 2001.
    [31] G. Harman, Object-oriented ontology: A new theory of everything. Penguin
    UK, 2018.
    [32] E. Husserl, Cartesian meditations: An introduction to phenomenology. Springer
    Science & Business Media, 2013.
    [33] A. Papoutsaki, P. Sangkloy, J. Laskey, N. Daskalova, J. Huang, and J. Hays,
    “Webgazer: Scalable webcam eye tracking using user interactions,” in Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI).
    AAAI, 2016, pp. 3839–3845.
    [34] 大學入學考試中心研究發展處, “高中英文參考詞彙表,” https://www.ceec.
    [35] K. Rayner, “Eye movements in reading and information processing: 20 years of
    research.” Psychological bulletin, vol. 124, no. 3, p. 372, 1998.
    [36] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word
    representations in vector space,” arXiv preprint arXiv:1301.3781, 2013.
    [37] Q. Le and T. Mikolov, “Distributed representations of sentences and documents,”
    in International conference on machine learning. PMLR, 2014, pp. 1188–1196.
    Description: 碩士 (Master's thesis)
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0110462008
    Data Type: thesis
    Appears in Collections: [數位內容碩士學位學程] 學位論文 (Master's Program in Digital Content: Theses)
    [數位內容與科技學士學位學程] 學位論文 (Bachelor's Program in Digital Content and Technologies: Theses)

    Files in This Item:

    File: 200801.pdf (39491 Kb, Adobe PDF)

    All items in 政大典藏 are protected by copyright, with all rights reserved.

