Simple item record

dc.contributor.author: Tuna, Ömer Faruk [en_US]
dc.contributor.author: Çatak, Ferhat Özgür [en_US]
dc.contributor.author: Eskil, Mustafa Taner [en_US]
dc.date.accessioned: 2022-08-26T08:43:35Z
dc.date.available: 2022-08-26T08:43:35Z
dc.date.issued: 2022-07
dc.identifier.citation: Tuna, Ö. F., Çatak, F. Ö. & Eskil, M. T. (2022). Closeness and uncertainty aware adversarial examples detection in adversarial machine learning. Computers and Electrical Engineering, 101, 1-12. doi:10.1016/j.compeleceng.2022.107986 [en_US]
dc.identifier.issn: 0045-7906
dc.identifier.issn: 1879-0755
dc.identifier.uri: https://hdl.handle.net/11729/4800
dc.identifier.uri: http://dx.doi.org/10.1016/j.compeleceng.2022.107986
dc.description.abstract: While deep learning models are thought to be resistant to random perturbations, it has been demonstrated that these architectures are vulnerable to deliberately crafted perturbations, albeit being quasi-imperceptible. These vulnerabilities make it challenging to deploy Deep Neural Network (DNN) models in security-critical areas. Recently, many research studies have been conducted to develop defense techniques enabling more robust models. In this paper, we target detecting adversarial samples by differentiating them from their clean equivalents. We investigate various metrics for detecting adversarial samples. We first leverage moment-based predictive uncertainty estimates of DNN classifiers derived through Monte-Carlo (MC) Dropout Sampling. We also introduce a new method that operates in the subspace of deep features obtained by the model. We verified the effectiveness of our approach on different datasets. Our experiments show that these approaches complement each other, and combined usage of all metrics yields 99 % ROC-AUC adversarial detection score for well-known attack algorithms. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Elsevier Ltd [en_US]
dc.relation.isversionof: 10.1016/j.compeleceng.2022.107986
dc.rights: info:eu-repo/semantics/closedAccess [en_US]
dc.subject: Adversarial example detection [en_US]
dc.subject: Adversarial machine learning [en_US]
dc.subject: Computational intelligence [en_US]
dc.subject: Security [en_US]
dc.subject: Uncertainty [en_US]
dc.subject: Learning systems [en_US]
dc.subject: Monte Carlo methods [en_US]
dc.subject: Neural network models [en_US]
dc.subject: Uncertainty analysis [en_US]
dc.subject: Learning models [en_US]
dc.subject: Machine-learning [en_US]
dc.subject: Random perturbations [en_US]
dc.subject: Research studies [en_US]
dc.subject: Security-critical [en_US]
dc.subject: Deep neural networks [en_US]
dc.subject: Object detection [en_US]
dc.subject: Deep learning [en_US]
dc.subject: IOU [en_US]
dc.title: Closeness and uncertainty aware adversarial examples detection in adversarial machine learning [en_US]
dc.type: article [en_US]
dc.description.version: Publisher's Version [en_US]
dc.relation.journal: Computers and Electrical Engineering [en_US]
dc.contributor.department: Işık Üniversitesi, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü [en_US]
dc.contributor.department: Işık University, Faculty of Engineering, Department of Computer Engineering [en_US]
dc.contributor.authorID: 0000-0002-6214-6262
dc.contributor.authorID: 0000-0003-0298-0690
dc.identifier.volume: 101
dc.identifier.startpage: 1
dc.identifier.endpage: 12
dc.peerreviewed: Yes [en_US]
dc.publicationstatus: Published [en_US]
dc.relation.publicationcategory: Article - International Refereed Journal - Institutional Academic Staff [en_US]
dc.contributor.institutionauthor: Tuna, Ömer Faruk [en_US]
dc.contributor.institutionauthor: Eskil, Mustafa Taner [en_US]
dc.relation.index: WOS [en_US]
dc.relation.index: Scopus [en_US]
dc.relation.index: Science Citation Index Expanded (SCI-EXPANDED) [en_US]
dc.description.quality: Q2
dc.description.wosid: WOS:000798073500009
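The abstract above mentions moment-based predictive uncertainty estimates obtained through Monte-Carlo (MC) Dropout sampling. The code below is a minimal, hypothetical PyTorch sketch of that general idea only, not the authors' published implementation: the network architecture, the number of samples T, and the variance-based score are assumptions made for illustration. Dropout is kept active at inference time, several stochastic forward passes are drawn, and the spread of the softmax outputs is summarized as an uncertainty score that could be thresholded to flag suspected adversarial inputs. The paper's second, closeness-based metric in the deep-feature subspace is not sketched here.

    # Illustrative sketch of MC Dropout predictive uncertainty.
    # Architecture, T, and the scoring rule are assumptions, not the paper's exact method.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallNet(nn.Module):
        # A hypothetical classifier containing a dropout layer.
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.fc1 = nn.Linear(784, 256)
            self.drop = nn.Dropout(p=0.5)
            self.fc2 = nn.Linear(256, num_classes)

        def forward(self, x):
            x = F.relu(self.fc1(x))
            x = self.drop(x)  # stays stochastic during MC sampling
            return self.fc2(x)

    @torch.no_grad()
    def mc_dropout_uncertainty(model: nn.Module, x: torch.Tensor, T: int = 30):
        """Run T stochastic forward passes with dropout enabled and return
        the mean softmax prediction plus a simple variance-based uncertainty."""
        model.train()  # keep dropout layers active at inference time
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(T)])  # (T, B, C)
        mean_probs = probs.mean(dim=0)       # first moment: predictive mean
        var_probs = probs.var(dim=0)         # second moment: predictive variance
        uncertainty = var_probs.sum(dim=-1)  # one scalar score per input
        return mean_probs, uncertainty

    if __name__ == "__main__":
        net = SmallNet()
        batch = torch.randn(4, 784)  # stand-in inputs
        mean_probs, score = mc_dropout_uncertainty(net, batch)
        # Inputs whose score exceeds a tuned threshold could be flagged
        # as candidate adversarial examples.
        print(score)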

