    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/157711


    Title: Model Attribution for Deepfake Videos/Images (深偽視頻/圖像的模型歸因)
    Authors: Ahmad, Wasim (汪新)
    Contributors: Yan-Tsung Peng (彭彥璁)
    Yuan-Hao Chang (張原豪)
    Wasim Ahmad (汪新)
    Keywords: Deepfake
    Deepfake Model Attribution (DFMA)
    Capsule Networks
    Dynamic Routing Algorithm (DRA)
    Spatial-Temporal Attention (STA)
    Attention Mechanism
    Temporal Analysis
    Face-swap Deepfakes
    Video Forensics
    Multimedia Forensics
    Information Security
    Generative Adversarial Networks (GANs)
    Date: 2025
    Issue Date: 2025-07-01 14:27:32 (UTC+8)
    Abstract: The rapid proliferation of Deepfake videos, enabled by sophisticated AI-driven face-swapping techniques, has intensified the demand for robust forensic tools capable of identifying the generative sources behind these manipulations. While binary real/fake classification has been the primary focus of prior research, model attribution, the task of determining the specific generative model or tool used to create a Deepfake, offers a more nuanced and actionable approach. By revealing model-specific artifacts, attribution facilitates source tracing and supports the development of tailored countermeasures against evolving threats.
    This dissertation addresses the model attribution problem in Deepfake forensics by casting it as a multiclass classification challenge. We first introduce the Capsule-Spatial-Temporal (CapST) model, a lightweight and effective framework that leverages a truncated VGG19 for efficient feature extraction, Capsule Networks for hierarchical feature modeling, and a spatio-temporal attention mechanism to aggregate frame-level features into a robust video-level representation. The model demonstrates strong attribution performance on the DFDM and GANGen-Detection datasets while maintaining a compact computational footprint.
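    The following is a minimal, hypothetical PyTorch sketch of the pipeline this paragraph describes: a truncated VGG19 backbone extracts per-frame features, a small capsule-style projection with the standard squashing non-linearity stands in for the Capsule Network stage, and a temporal attention layer pools frame features into one video-level vector for multiclass attribution. The truncation point, layer sizes, and attention form are illustrative assumptions, not the published CapST configuration.

    ```python
    import torch
    import torch.nn as nn
    from torchvision.models import vgg19

    def squash(s, dim=-1, eps=1e-8):
        # Standard capsule "squash" non-linearity: keeps direction, bounds norm in [0, 1).
        n2 = (s ** 2).sum(dim=dim, keepdim=True)
        return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

    class CapSTSketch(nn.Module):
        """Illustrative stand-in for CapST, not the authors' exact architecture."""
        def __init__(self, num_models=5, num_capsules=8, caps_dim=16):
            super().__init__()
            # Truncated VGG19: keep only the early conv blocks (up to 256 channels).
            self.backbone = nn.Sequential(*list(vgg19(weights=None).features)[:18])
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.to_caps = nn.Linear(256, num_capsules * caps_dim)
            self.caps_dim = caps_dim
            # Temporal attention: score each frame, softmax over time, weighted sum.
            self.attn = nn.Linear(num_capsules * caps_dim, 1)
            self.classifier = nn.Linear(num_capsules * caps_dim, num_models)

        def forward(self, video):                          # video: (B, T, 3, H, W)
            B, T = video.shape[:2]
            x = self.backbone(video.flatten(0, 1))         # (B*T, 256, h, w)
            x = self.pool(x).flatten(1)                    # (B*T, 256)
            caps = squash(self.to_caps(x).view(B * T, -1, self.caps_dim))
            f = caps.flatten(1).view(B, T, -1)             # frame-level features
            w = torch.softmax(self.attn(f), dim=1)         # (B, T, 1) frame weights
            v = (w * f).sum(dim=1)                         # video-level representation
            return self.classifier(v)                      # logits over source models

    logits = CapSTSketch()(torch.randn(2, 8, 3, 112, 112))
    print(logits.shape)  # torch.Size([2, 5])
    ```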
    Building on CapST and addressing its limitations, we propose a second, more generalizable framework, FAME (Fake Attribution via Multi-level Embeddings). While CapST proved effective within controlled datasets, it was less well suited to attribution across diverse and challenging video conditions. FAME addresses this gap with a novel multi-level spatio-temporal attention strategy designed to detect subtle generative traces across different encoder-decoder pipelines and compression settings. Unlike CapST, which primarily emphasizes hierarchical capsule features, FAME builds hybrid spatial-temporal embeddings from CNNs and LSTMs, improving attribution accuracy with even fewer parameters.
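    As a companion illustration, the sketch below shows one way the hybrid spatial-temporal embedding idea could look in PyTorch: a small CNN encodes each frame, spatial attention is applied at two feature levels as a stand-in for the multi-level strategy, and an LSTM summarizes the frame sequence before classification. Every name and size here is an assumption for illustration, not the published FAME architecture.

    ```python
    import torch
    import torch.nn as nn

    class FAMESketch(nn.Module):
        """Illustrative CNN+LSTM hybrid with two-level spatial attention."""
        def __init__(self, num_models=5, hidden=128):
            super().__init__()
            self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
            self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
            # One spatial-attention map per feature level (multi-level attention).
            self.attn1 = nn.Conv2d(32, 1, 1)
            self.attn2 = nn.Conv2d(64, 1, 1)
            self.lstm = nn.LSTM(32 + 64, hidden, batch_first=True)
            self.classifier = nn.Linear(hidden, num_models)

        @staticmethod
        def attend(feat, attn):
            # Softmax over spatial positions, then attention-weighted average pooling.
            B, C, H, W = feat.shape
            w = torch.softmax(attn(feat).view(B, 1, H * W), dim=-1)
            return (w * feat.view(B, C, H * W)).sum(dim=-1)    # (B, C)

        def forward(self, video):                              # (B, T, 3, H, W)
            B, T = video.shape[:2]
            x = video.flatten(0, 1)
            f1 = self.block1(x)
            f2 = self.block2(f1)
            # Concatenate attended descriptors from both levels per frame.
            emb = torch.cat([self.attend(f1, self.attn1),
                             self.attend(f2, self.attn2)], dim=-1).view(B, T, -1)
            _, (h, _) = self.lstm(emb)                         # temporal summary
            return self.classifier(h[-1])                      # (B, num_models)

    print(FAMESketch()(torch.randn(2, 8, 3, 112, 112)).shape)  # torch.Size([2, 5])
    ```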
    Both models have been extensively evaluated on benchmark datasets, including DFDM, FaceForensics++, FakeAVCeleb, and GANGen-Detection. CapST achieves high attribution accuracy at low computational cost, while FAME further advances generalization, accuracy, and runtime efficiency across varied scenarios. Together, these contributions offer a comprehensive solution to Deepfake model attribution, paving the way for scalable and effective forensic applications that can adapt to the fast-evolving landscape of generative media technologies.
    Description: Doctoral dissertation
    National Chengchi University
    International Doctoral Program in Social Networks and Human-Centered Computing (TIGP)
    108761506
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0108761506
    Data Type: thesis
    Appears in Collections: [International Doctoral Program in Social Networks and Human-Centered Computing (TIGP)] Theses

    Files in This Item:

    File: 150601.pdf (15,830 KB, Adobe PDF)

