Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
dc.authorid | 0000-0002-6214-6262 | |
dc.authorid | 0000-0003-0298-0690 | |
dc.contributor.author | Tuna, Ömer Faruk | en_US |
dc.contributor.author | Çatak, Ferhat Özgür | en_US |
dc.contributor.author | Eskil, Mustafa Taner | en_US |
dc.date.accessioned | 2022-03-02T17:53:32Z | |
dc.date.available | 2022-03-02T17:53:32Z | |
dc.date.issued | 2022-03 | |
dc.department | Işık Üniversitesi, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü | en_US |
dc.department | Işık University, Faculty of Engineering, Department of Computer Engineering | en_US |
dc.description.abstract | Deep neural network (DNN) architectures are considered to be robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against these attacks with more robust DNN architectures. However, most of the current research has concentrated on utilizing the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout Sampling, for adversarial attack purposes: we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution, utilizing the epistemic uncertainty of the model. Our results show that our proposed hybrid attack approach increases the attack success rates from 82.59% to 85.14%, 82.96% to 90.13% and 89.44% to 91.06% on the MNIST Digit, MNIST Fashion and CIFAR-10 datasets, respectively. | en_US |
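The abstract's core mechanism — estimating epistemic uncertainty as the variance of repeated stochastic forward passes with dropout kept active (Monte-Carlo Dropout Sampling), then perturbing the input so that this uncertainty grows — can be sketched in a toy, stdlib-only form. This is an illustrative assumption, not the paper's implementation: the one-layer "network", the finite-difference gradient estimate, and all function names (`forward`, `mc_dropout_uncertainty`, `uncertainty_ascent_step`) are hypothetical stand-ins for the gradient-based procedure described in the article.

```python
import random
import statistics

def forward(x, w, p=0.5, rng=random):
    # One stochastic forward pass of a toy one-layer model: each weight is
    # dropped with probability p and the survivors are rescaled (inverted
    # dropout), so the output varies from pass to pass.
    return sum(wi * xi * (0.0 if rng.random() < p else 1.0 / (1.0 - p))
               for wi, xi in zip(w, x))

def mc_dropout_uncertainty(x, w, T=200, seed=0):
    # Monte-Carlo Dropout Sampling: the variance of T stochastic forward
    # passes serves as the epistemic-uncertainty estimate. A fixed seed keeps
    # the dropout masks identical across calls, so the estimate is a
    # deterministic function of x (convenient for finite differences below).
    rng = random.Random(seed)
    outs = [forward(x, w, rng=rng) for _ in range(T)]
    return statistics.pvariance(outs)

def uncertainty_ascent_step(x, w, eps=0.05, h=1e-2):
    # Hypothetical attack step: estimate the gradient of the uncertainty with
    # respect to each input feature by finite differences, then take a small
    # signed step that increases uncertainty (a toy analogue of pushing the
    # input toward regions the model was not trained on).
    base = mc_dropout_uncertainty(x, w)
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grad.append((mc_dropout_uncertainty(xp, w) - base) / h)
    return [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad)]
```

In this toy model the variance grows with the magnitude of each weighted input, so a single ascent step measurably increases the MC-Dropout uncertainty estimate; the paper applies the same idea with backpropagated uncertainty gradients on real DNNs and image data.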
dc.description.version | Publisher's Version | en_US |
dc.identifier.citation | Tuna, Ö. F., Çatak, F. Ö. & Eskil, M. T. (2022). Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples. Multimedia Tools and Applications, 81(8), 11479-11500. doi:10.1007/s11042-022-12132-7 | en_US |
dc.identifier.doi | 10.1007/s11042-022-12132-7 | |
dc.identifier.endpage | 11500 | |
dc.identifier.issn | 1380-7501 | |
dc.identifier.issn | 1573-7721 | |
dc.identifier.issue | 8 | |
dc.identifier.pmid | 35221776 | |
dc.identifier.scopus | 2-s2.0-85124727858 | |
dc.identifier.scopusquality | Q1 | |
dc.identifier.startpage | 11479 | |
dc.identifier.uri | https://hdl.handle.net/11729/3490 | |
dc.identifier.uri | http://dx.doi.org/10.1007/s11042-022-12132-7 | |
dc.identifier.volume | 81 | |
dc.identifier.wos | WOS:000757777400006 | |
dc.identifier.wosquality | Q2 | |
dc.indekslendigikaynak | Web of Science | en_US |
dc.indekslendigikaynak | Scopus | en_US |
dc.indekslendigikaynak | PubMed | en_US |
dc.indekslendigikaynak | Science Citation Index Expanded (SCI-EXPANDED) | en_US |
dc.institutionauthor | Tuna, Ömer Faruk | en_US |
dc.institutionauthor | Eskil, Mustafa Taner | en_US |
dc.institutionauthorid | 0000-0002-6214-6262 | |
dc.institutionauthorid | 0000-0003-0298-0690 | |
dc.language.iso | en | en_US |
dc.peerreviewed | Yes | en_US |
dc.publicationstatus | Published | en_US |
dc.publisher | Springer | en_US |
dc.relation.ispartof | Multimedia Tools and Applications | en_US |
dc.relation.publicationcategory | Article - International Refereed Journal - Institutional Academic Staff and Student | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Adversarial machine learning | en_US |
dc.subject | Deep learning | en_US |
dc.subject | Loss maximization | en_US |
dc.subject | Multimedia security | en_US |
dc.subject | Uncertainty | en_US |
dc.subject | Deep neural networks | en_US |
dc.subject | Monte Carlo methods | en_US |
dc.subject | Network architecture | en_US |
dc.subject | Epistemic uncertainties | en_US |
dc.subject | Learning models | en_US |
dc.subject | Machine-learning | en_US |
dc.subject | Neural network architecture | en_US |
dc.subject | Random perturbations | en_US |
dc.subject | Uncertainty analysis | en_US |
dc.subject | Neural network | en_US |
dc.title | Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples | en_US |
dc.type | Article | en_US |
dspace.entity.type | Publication |
Files
Original bundle
- Name: Exploiting_epistemic_uncertainty_of_the_deep_learning_models_to_generate_adversarial_samples.pdf
- Size: 4.92 MB
- Format: Adobe Portable Document Format
- Description: Publisher's Version
License bundle
- Name: license.txt
- Size: 1.44 KB
- Description: Item-specific license agreed upon to submission