Show simple item record

dc.contributor.author: Tuna, Ömer Faruk [en_US]
dc.contributor.author: Çatak, Ferhat Özgür [en_US]
dc.contributor.author: Eskil, Mustafa Taner [en_US]
dc.date.accessioned: 2022-05-24T19:17:27Z
dc.date.available: 2022-05-24T19:17:27Z
dc.date.issued: 2022-04-02
dc.identifier.citation: Tuna, Ö. F., Çatak, F. Ö. & Eskil, M. T. (2022). Uncertainty as a Swiss army knife: new adversarial attack and defense ideas based on epistemic uncertainty. Complex & Intelligent Systems, 1-19. doi:10.1007/s40747-022-00701-0 [en_US]
dc.identifier.issn: 2199-4536 [en_US]
dc.identifier.issn: 2198-6053 [en_US]
dc.identifier.uri: https://hdl.handle.net/11729/4356
dc.identifier.uri: http://dx.doi.org/10.1007/s40747-022-00701-0
dc.description.abstract: Although state-of-the-art deep neural network models are known to be robust to random perturbations, these architectures have been shown to be quite vulnerable to deliberately crafted, quasi-imperceptible perturbations. These vulnerabilities make it challenging to deploy deep neural network models in areas where security is a critical concern. In recent years, many studies have been conducted to develop new attack methods and new defense techniques that enable more robust and reliable models. In this study, we use the quantified epistemic uncertainty obtained from the model's final probability outputs, along with the model's own loss function, to generate more effective adversarial samples. We also propose a novel defense approach against attacks such as DeepFool, which produce adversarial samples located near the model's decision boundary. We verified the effectiveness of our attack method on the MNIST (Digit), MNIST (Fashion) and CIFAR-10 datasets. In our experiments, we showed that our proposed uncertainty-based reversal method achieves a worst-case success rate of around 95% without compromising clean accuracy. [en_US]
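
As a rough illustration of the attack idea summarized in the abstract (a hedged sketch, not the authors' published algorithm): one common way to exploit epistemic uncertainty is to estimate the predictive entropy of the softmax outputs under Monte Carlo dropout and add its gradient to the usual loss gradient before taking an FGSM-style step. Every name and parameter below (uncertainty_guided_fgsm, mc_samples, alpha) is hypothetical.

# Hedged sketch: FGSM-style attack whose perturbation direction combines the
# cross-entropy loss gradient with the gradient of a simple epistemic-uncertainty
# proxy (predictive entropy under MC dropout). Illustrative only; not the paper's
# exact method, which is not detailed in this record.
import torch
import torch.nn.functional as F


def uncertainty_guided_fgsm(model, x, y, eps=0.03, mc_samples=10, alpha=1.0):
    """Return adversarial examples x_adv with ||x_adv - x||_inf <= eps."""
    model.train()  # keep dropout active so repeated forward passes differ (MC dropout)
    x = x.clone().detach().requires_grad_(True)

    # Standard attack signal: gradient of the classification loss w.r.t. the input.
    loss = F.cross_entropy(model(x), y)

    # Epistemic-uncertainty proxy: entropy of the mean softmax over MC-dropout passes.
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(mc_samples)]).mean(0)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1).mean()

    # Combine both signals; alpha weights the uncertainty term (assumed hyperparameter).
    (loss + alpha * entropy).backward()

    x_adv = x + eps * x.grad.sign()        # single FGSM-style step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid [0, 1] range

With a dropout-equipped PyTorch classifier, calling uncertainty_guided_fgsm(model, x, y) would produce perturbed inputs whose direction is shaped by both the loss and the uncertainty proxy; the weighting and uncertainty estimator are assumptions for the sake of the example.
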
dc.language.iso: en [en_US]
dc.publisher: Springer [en_US]
dc.relation.ispartof: Complex & Intelligent Systems [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Adversarial machine learning [en_US]
dc.subject: Uncertainty [en_US]
dc.subject: Security [en_US]
dc.subject: Deep learning [en_US]
dc.subject: Object Detection [en_US]
dc.subject: Deep Learning [en_US]
dc.subject: IOU [en_US]
dc.title: Uncertainty as a Swiss army knife: new adversarial attack and defense ideas based on epistemic uncertainty [en_US]
dc.type: Article [en_US]
dc.description.version: Publisher's Version [en_US]
dc.department: Işık Üniversitesi, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü [en_US]
dc.department: Işık University, Faculty of Engineering, Department of Computer Engineering [en_US]
dc.authorid: 0000-0002-6214-6262
dc.authorid: 0000-0003-0298-0690
dc.identifier.volume: 9
dc.identifier.issue: 4
dc.identifier.startpage: 3739
dc.identifier.endpage: 3757
dc.peerreviewed: Yes [en_US]
dc.publicationstatus: Published [en_US]
dc.relation.publicationcategory: Article - International Refereed Journal - Institutional Faculty Member and Student [en_US]
dc.institutionauthor: Tuna, Ömer Faruk [en_US]
dc.institutionauthor: Eskil, Mustafa Taner [en_US]
dc.indekslendigikaynak: Web of Science [en_US]
dc.indekslendigikaynak: Scopus [en_US]
dc.indekslendigikaynak: Science Citation Index Expanded (SCI-EXPANDED) [en_US]
dc.identifier.wosquality: Q2 [en_US]
dc.identifier.wos: WOS:000777429400001 [en_US]
dc.identifier.scopus: 2-s2.0-85134203085 [en_US]
dc.identifier.doi: 10.1007/s40747-022-00701-0
dc.identifier.scopusquality: Q1 [en_US]


Files in this item:


This item appears in the following Collection(s).
