Closeness and uncertainty aware adversarial examples detection in adversarial machine learning
Date
2022-07
Publisher
Elsevier Ltd
Access Rights
info:eu-repo/semantics/closedAccess
Abstract
While deep learning models are thought to be resistant to random perturbations, it has been demonstrated that these architectures are vulnerable to deliberately crafted, quasi-imperceptible perturbations. These vulnerabilities make it challenging to deploy Deep Neural Network (DNN) models in security-critical areas. Recently, many studies have been conducted to develop defense techniques that enable more robust models. In this paper, we target detecting adversarial samples by differentiating them from their clean equivalents, and we investigate various metrics for doing so. We first leverage moment-based predictive uncertainty estimates of DNN classifiers derived through Monte-Carlo (MC) Dropout sampling. We also introduce a new method that operates in the subspace of deep features obtained by the model. We verify the effectiveness of our approach on different datasets. Our experiments show that these approaches complement each other, and that combined usage of all metrics yields a 99% ROC-AUC adversarial detection score against well-known attack algorithms.
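As a rough illustration of the two detection signals named in the abstract, the sketch below shows how they might be computed in PyTorch. This is not the authors' implementation: enable_dropout, mc_dropout_moments, and closeness_score are hypothetical helpers, and the nearest-centroid distance is only one plausible instantiation of a metric over the model's deep-feature subspace.

import torch
import torch.nn.functional as F

def enable_dropout(model):
    # Keep only dropout layers stochastic at inference time,
    # leaving e.g. BatchNorm statistics frozen.
    model.eval()
    for module in model.modules():
        if module.__class__.__name__.startswith("Dropout"):
            module.train()

def mc_dropout_moments(model, x, n_samples=50):
    # Moment-based uncertainty: mean and variance of the softmax
    # output over repeated stochastic forward passes (MC Dropout).
    enable_dropout(model)
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.var(dim=0)

def closeness_score(features, class_centroids):
    # Closeness in deep-feature space: distance from each sample's
    # features (batch, d) to the nearest class centroid (classes, d),
    # with centroids computed beforehand from clean training data.
    dists = torch.cdist(features, class_centroids)  # (batch, classes)
    return dists.min(dim=-1).values

In this reading, a test input is flagged as adversarial when its predictive variance or its closeness score exceeds thresholds tuned on clean validation data; the scores can also be fed jointly to a simple downstream detector, which is one way the metrics could "complement each other" as the abstract states.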
Keywords
Adversarial example detection, Adversarial machine learning, Computational intelligence, Security, Uncertainty, Learning systems, Monte Carlo methods, Neural network models, Uncertainty analysis, Learning models, Machine-learning, Random perturbations, Research studies, Security-critical, Deep neural networks, Object detection, Deep learning, IOU
Source
Computers and Electrical Engineering
WoS Q Value
Q2
Scopus Q Value
Q1
Volume
101
Citation
Tuna, Ö. F., Çatak, F. Ö., & Eskil, M. T. (2022). Closeness and uncertainty aware adversarial examples detection in adversarial machine learning. Computers and Electrical Engineering, 101, 1-12. doi:10.1016/j.compeleceng.2022.107986