Show simple item record

dc.contributor.author: Tuna, Ömer Faruk [en_US]
dc.contributor.author: Çatak, Ferhat Özgür [en_US]
dc.contributor.author: Eskil, Mustafa Taner [en_US]
dc.date.accessioned: 2022-03-02T17:53:32Z
dc.date.available: 2022-03-02T17:53:32Z
dc.date.issued: 2022-03
dc.identifier.citation: Tuna, Ö. F., Çatak, F. Ö. & Eskil, M. T. (2022). Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples. Multimedia Tools and Applications, 81(8), 11479-11500. doi:10.1007/s11042-022-12132-7 [en_US]
dc.identifier.issn: 1380-7501
dc.identifier.issn: 1573-7721
dc.identifier.uri: https://hdl.handle.net/11729/3490
dc.identifier.uri: http://dx.doi.org/10.1007/s11042-022-12132-7
dc.description.abstract: Deep neural network (DNN) architectures are considered to be robust to random perturbations. Nevertheless, it was shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against these attacks with more robust DNN architectures. However, most current research has concentrated on utilising the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout Sampling, for adversarial attack purposes, by which we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the difficulty of the target model in discriminating between samples drawn from the original and shifted versions of the training data distribution, by utilizing the epistemic uncertainty of the model. Our results show that our proposed hybrid attack approach increases the attack success rates from 82.59% to 85.14%, 82.96% to 90.13% and 89.44% to 91.06% on the MNIST Digit, MNIST Fashion and CIFAR-10 datasets, respectively. [en_US]
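The abstract above centers on Monte-Carlo Dropout Sampling as an estimator of epistemic uncertainty. As a minimal illustrative sketch (not the paper's implementation: the toy two-layer network, its random weights, and all names such as `mc_dropout_predict` are hypothetical), dropout is kept active at inference time, and the variance of the softmax outputs across stochastic forward passes serves as a proxy for epistemic uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W1, W2, p=0.5, T=100):
    """Run T stochastic forward passes of a toy 2-layer network with
    dropout kept active at inference (Monte-Carlo Dropout)."""
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)        # ReLU hidden layer
        mask = rng.random(h.shape) >= p    # dropout mask, active at test time
        h = h * mask / (1.0 - p)           # inverted-dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())  # numerically stable softmax
        preds.append(e / e.sum())
    preds = np.stack(preds)
    # Predictive mean, and total variance across passes as an
    # epistemic-uncertainty proxy.
    return preds.mean(axis=0), float(preds.var(axis=0).sum())

W1 = rng.standard_normal((4, 16))          # hypothetical toy weights
W2 = rng.standard_normal((16, 3))
mean, uncertainty = mc_dropout_predict(rng.standard_normal(4), W1, W2)
print(mean, uncertainty)
```

In the attack setting the abstract describes, a quantity like `uncertainty` would be maximized with respect to the input to push it toward shifted-domain regions the model was not trained on.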
dc.language.iso: eng [en_US]
dc.publisher: Springer [en_US]
dc.relation.isversionof: 10.1007/s11042-022-12132-7
dc.rights: info:eu-repo/semantics/closedAccess [en_US]
dc.subject: Adversarial machine learning [en_US]
dc.subject: Deep learning [en_US]
dc.subject: Loss maximization [en_US]
dc.subject: Multimedia security [en_US]
dc.subject: Uncertainty [en_US]
dc.subject: Deep neural networks [en_US]
dc.subject: Monte Carlo methods [en_US]
dc.subject: Network architecture [en_US]
dc.subject: Epistemic uncertainties [en_US]
dc.subject: Learning models [en_US]
dc.subject: Machine-learning [en_US]
dc.subject: Neural network architecture [en_US]
dc.subject: Random perturbations [en_US]
dc.subject: Uncertainty analysis [en_US]
dc.subject: Neural network [en_US]
dc.title: Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples [en_US]
dc.type: article [en_US]
dc.description.version: Publisher's Version [en_US]
dc.relation.journal: Multimedia Tools and Applications [en_US]
dc.contributor.department: Işık Üniversitesi, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü [en_US]
dc.contributor.department: Işık University, Faculty of Engineering, Department of Computer Engineering [en_US]
dc.contributor.authorID: 0000-0002-6214-6262
dc.contributor.authorID: 0000-0003-0298-0690
dc.identifier.volume: 81
dc.identifier.issue: 8
dc.identifier.startpage: 11479
dc.identifier.endpage: 11500
dc.peerreviewed: Yes [en_US]
dc.publicationstatus: Published [en_US]
dc.relation.publicationcategory: Article - International Refereed Journal - Institutional Faculty Member and Student [en_US]
dc.contributor.institutionauthor: Tuna, Ömer Faruk [en_US]
dc.contributor.institutionauthor: Eskil, Mustafa Taner [en_US]
dc.relation.index: WOS [en_US]
dc.relation.index: Scopus [en_US]
dc.relation.index: PubMed [en_US]
dc.relation.index: Science Citation Index Expanded (SCI-EXPANDED) [en_US]
dc.description.quality: Q2
dc.description.wosid: WOS:000757777400006
dc.description.pubmedid: PMID:35221776


Files in this item:


This item appears in the following Collection(s).
