Please use this identifier to cite or link to this item:
https://nccur.lib.nccu.edu.tw/handle/140.119/148734
Title: Towards Adversarial Robustness for Multi-Mode Data through Metric Learning
Authors: 廖文宏 Liao, Wen-Hung; Khan, Sarwar; Chen, Jun-Cheng; Chen, Chu-Song
Contributors: 資訊系 (Department of Computer Science)
Keywords: adversarial attacks; adversarial training; classification; metric learning; multi-mode; prototypes
Date: 2023-07
Issue Date: 2023-12-13 14:16:36 (UTC+8)
Abstract: Adversarial attacks have become one of the most serious security issues for widely deployed deep neural networks. Although real-world datasets usually exhibit large intra-class variation or multiple modes, most adversarial defense methods, including adversarial training, currently one of the most effective defenses, focus on the single-mode setting and thus fail to capture the full data representation when defending against adversarial attacks. To address this challenge, we propose a novel multi-prototype metric-learning regularization for adversarial training, which strengthens the defense by preventing the latent representation of an adversarial example from drifting far from that of its clean counterpart. Extensive experiments on CIFAR10, CIFAR100, MNIST, and Tiny ImageNet show that the proposed method improves the performance of several state-of-the-art adversarial training methods without additional computational cost. In addition to Tiny ImageNet, on multi-prototype CIFAR10 and CIFAR100, where the full CIFAR10 and CIFAR100 datasets are reorganized into two and ten classes, respectively, the proposed method outperforms the state-of-the-art approach by 2.22% and 1.65%, respectively. The proposed multi-prototype method also outperforms its single-prototype version and other commonly used deep metric learning approaches applied as regularizers for adversarial training, further demonstrating its effectiveness.
Relation: Sensors, Vol. 23, No. 13, 6173
Data Type: article
DOI Link: https://doi.org/10.3390/s23136173
DOI: 10.3390/s23136173
Appears in Collections: [資訊科學系 Department of Computer Science] Journal Articles
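The abstract above describes the core mechanism: a multi-prototype metric-learning regularizer added to the adversarial training objective so that the latent representation of an adversarial example stays close to a prototype of its own class. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' released code: the two-output model interface (logits and features), the PGD attack settings, the nearest-prototype softmax regularizer, and hyperparameters such as K (prototypes per class), lam, and temperature are all assumptions made for this example.

# Minimal sketch (assumption-laden, not the paper's implementation) of adversarial
# training with a multi-prototype metric-learning regularizer.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard PGD used to craft adversarial examples during training.
    # The model is assumed to return (logits, features).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits, _ = model(x_adv)
        grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def multi_prototype_loss(feats, y, prototypes, temperature=0.1):
    # prototypes: learnable tensor of shape (num_classes, K, d), K prototypes per class.
    # Each feature is scored against every prototype; per class, only the closest
    # prototype counts, so different modes of a class can live in different prototypes.
    num_classes, K, d = prototypes.shape
    feats = F.normalize(feats, dim=1)                          # (B, d)
    protos = F.normalize(prototypes.view(-1, d), dim=1)        # (C*K, d)
    sims = (feats @ protos.t()) / temperature                  # (B, C*K)
    class_sims, _ = sims.view(-1, num_classes, K).max(dim=2)   # (B, C)
    return F.cross_entropy(class_sims, y)

def training_step(model, prototypes, optimizer, x, y, lam=1.0):
    # One adversarial-training step with the metric-learning term added to the usual loss.
    model.train()
    x_adv = pgd_attack(model, x, y)
    logits_adv, feats_adv = model(x_adv)
    loss = F.cross_entropy(logits_adv, y) + lam * multi_prototype_loss(feats_adv, y, prototypes)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In a sketch like this, prototypes would typically be an nn.Parameter of shape (num_classes, K, feature_dim) registered with the same optimizer as the model, so prototypes and encoder are learned jointly; the exact regularizer used in the paper may differ in form.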
Files in This Item:
File | Description | Size | Format
index.html | | 0Kb | HTML