    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/136967


    Title: 結合頻率域損失之生成對抗網路影像合成機制
    Image Synthesis Using Generative Adversarial Network with Frequency Domain Constraints
    Authors: 曾鴻仁
    Zeng, Hong-Ren
    Contributors: 廖文宏
    Liao, Wen-Hung
    曾鴻仁
    Zeng, Hong-Ren
    Keywords: 生成對抗網路
    離散傅立葉轉換
    離散小波轉換
    偽圖偵測
    Generative adversarial network
    Discrete Fourier transform
    Discrete wavelet transform
    Fake image detection
    Date: 2021
    Issue Date: 2021-09-02 16:56:07 (UTC+8)
Abstract: Generative adversarial networks (GANs) have evolved rapidly since their introduction in 2014, and the quality of synthesized images has improved to the point where human observers often cannot tell real and GAN-generated images apart. However, because GANs have difficulty faithfully reconstructing the high-frequency components of a signal, artifacts can be observed in the frequency-domain representation of generated images, which simple classification models can exploit to detect them. Other studies have also shown that high-frequency components adversely affect GAN training. Synthesizing visually realistic images while maintaining fidelity in the frequency domain is therefore a challenging task.
This thesis approaches the problem from the frequency domain. We first verify that filtering out part of the high-frequency noise does help GANs learn more effectively, and we then propose adding frequency-domain losses to the generator and discriminator networks to improve training. Experimental results indicate that both discrete Fourier transform (DFT) and discrete wavelet transform (DWT) losses help GANs produce higher-quality images: on the CelebA face dataset, the model with the added DWT loss reaches a best FID of 6.53, a large improvement over SNGAN's 16.53, and models trained with frequency losses are also more stable. We further test the results with a general-purpose fake-image classifier; its detection accuracy drops significantly on images generated by the improved models, indicating that they produce more realistic images. These findings confirm that supplying frequency information to GANs benefits the training process and offer reference directions for subsequent GAN research.
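The record does not reproduce the thesis's exact loss formulation, but the idea described above lends itself to a short sketch: measure the distance between generated and real images in a transform domain and add it, weighted, to the usual adversarial objective. The following PyTorch sketch is illustrative only; the names dft_loss, haar_dwt, dwt_loss, and lambda_freq, the log-magnitude spectrum comparison, and the single-level Haar decomposition are assumptions, not the formulation used in the thesis.

    import torch
    import torch.nn.functional as F

    def dft_loss(fake, real):
        # Compare log-magnitude spectra; log1p compresses the dynamic range
        # so high-frequency bins are not drowned out by the DC term.
        fake_mag = torch.log1p(torch.abs(torch.fft.fft2(fake, norm="ortho")))
        real_mag = torch.log1p(torch.abs(torch.fft.fft2(real, norm="ortho")))
        return F.l1_loss(fake_mag, real_mag)

    def haar_dwt(x):
        # Single-level 2D Haar transform via 2x2 block sums/differences.
        # Returns the LL (approximation) and LH/HL/HH (detail) subbands.
        a = x[..., 0::2, 0::2]  # top-left pixel of each 2x2 block
        b = x[..., 0::2, 1::2]  # top-right
        c = x[..., 1::2, 0::2]  # bottom-left
        d = x[..., 1::2, 1::2]  # bottom-right
        ll = (a + b + c + d) / 2
        lh = (a + b - c - d) / 2
        hl = (a - b + c - d) / 2
        hh = (a - b - c + d) / 2
        return ll, lh, hl, hh

    def dwt_loss(fake, real):
        # Sum of L1 distances over the four Haar subbands.
        return sum(F.l1_loss(f, r)
                   for f, r in zip(haar_dwt(fake), haar_dwt(real)))

    # Hypothetical use inside a generator update step (lambda_freq is a
    # tunable weight, not a value reported by the thesis):
    # g_loss = g_adv_loss + lambda_freq * dft_loss(fake_images, real_images)

Either frequency term can be used on its own; per the abstract, both the DFT and the DWT variants improved image quality (FID) and stabilized training in the thesis's experiments.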
Description: Master's thesis
National Chengchi University
Department of Computer Science
108753148
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0108753148
    Data Type: thesis
    DOI: 10.6814/NCCU202101331
Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

File: 314801.pdf (3829 KB, Adobe PDF)


    All items in 政大典藏 are protected by copyright, with all rights reserved.

