Show simple item record

dc.contributor.advisor    Eskil, Mustafa Taner    en_US
dc.contributor.author    Tuna, Ömer Faruk    en_US
dc.contributor.other    Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, Bilgisayar Mühendisliği Doktora Programı    en_US
dc.date.accessioned    2023-05-12T13:10:43Z
dc.date.available    2023-05-12T13:10:43Z
dc.date.issued    2022-12-19
dc.identifier.citation    Tuna, Ö. F. (2022). Using uncertainty metrics in adversarial machine learning as an attack and defense tool. İstanbul: Işık Üniversitesi Lisansüstü Eğitim Enstitüsü.    en_US
dc.identifier.uri    https://hdl.handle.net/11729/5538
dc.description    Text in English; abstracts in English and Turkish    en_US
dc.description    Includes bibliographical references (leaves 94-102)    en_US
dc.description    xv, 102 leaves    en_US
dc.description.abstract    Deep Neural Network (DNN) models are well known for their resistance to random perturbations. However, researchers have found that these models are extremely vulnerable to deliberately crafted and seemingly imperceptible perturbations of the input, known as adversarial samples. Adversarial attacks have the potential to substantially compromise the security of DNN-powered systems and pose high risks, especially in areas where security is a top priority. Numerous studies have been conducted in recent years to defend against these attacks and to develop more robust architectures that resist adversarial threats. In this thesis, we leverage various uncertainty metrics obtained from the model’s MC-Dropout estimates to develop new attack and defense ideas. On the defense side, we propose a new adversarial detection mechanism and an uncertainty-based defense method to increase the robustness of DNN models against adversarial evasion attacks. On the attack side, we use the quantified epistemic uncertainty obtained from the model’s final probability outputs, along with the model’s own loss function, to generate effective adversarial samples. We experimentally evaluated and verified the efficacy of our proposed approaches on standard computer vision datasets. (An illustrative sketch of these ideas follows this record.)    en_US
dc.description.abstract    Derin Sinir Ağları modelleri, yaygın olarak rastgele bozulmalara karşı dirençleri ile bilinir. Bununla birlikte, araştırmacılar, bu modellerin, karşıt (hasmane) örnekler olarak adlandırılan girdinin kasıtlı olarak hazırlanmış ve görünüşte algılanamaz bozulmalarına karşı gerçekten son derece savunmasız olduğunu keşfettiler. Bu gibi hasmane saldırılar, Derin Sinir Ağları tabanlı yapay zeka sistemlerinin güvenliğini önemli ölçüde tehlikeye atma potansiyeline sahiptir ve özellikle güvenliğin öncelikli olduğu alanlarda yüksek riskler oluşturur. Bu saldırılara karşı savunma yapmak ve hasmane tehditlere karşı daha dayanıklı mimariler geliştirmek için son yıllarda çok sayıda çalışma yapılmıştır. Bu tez çalışmasında, yeni saldırı ve savunma fikirleri geliştirmek için modelin Monte-Carlo Bırakma Örneklemesinden elde edilen çeşitli belirsizlik metriklerinin kullanımından yararlanıyoruz. Savunma tarafında, hasmane saldırılara karşı yapay sinir ağı modellerinin sağlamlığını artırmak için yeni bir tespit mekanizması ve belirsizliğe dayalı savunma yöntemi öneriyoruz. Saldırı tarafında, etkili hasmane örnekler oluşturmak için modelin kendi kayıp fonksiyonu ile birlikte modelin nihai olasılık çıktılarından elde edilen nicelleştirilmiş epistemik belirsizliği kullanıyoruz. Standart bilgisayarlı görü veri kümeleri üzerinde önerilen yaklaşımlarımızın etkinliğini deneysel olarak değerlendirdik ve doğruladık.    en_US
dc.description.tableofcontents    INTRODUCTION    en_US
dc.description.tableofcontents    Vulnerabilities of AI-driven Systems    en_US
dc.description.tableofcontents    Importance of Uncertainty for AI-driven Systems    en_US
dc.description.tableofcontents    Problem Statement    en_US
dc.description.tableofcontents    Motivation for Using Uncertainty Information    en_US
dc.description.tableofcontents    Main Contributions of the Thesis    en_US
dc.description.tableofcontents    Organization of the Thesis    en_US
dc.description.tableofcontents    ADVERSARIAL MACHINE LEARNING    en_US
dc.description.tableofcontents    Adversarial Attacks    en_US
dc.description.tableofcontents    Formal Definition of an Adversarial Sample    en_US
dc.description.tableofcontents    Distance Metrics    en_US
dc.description.tableofcontents    Attacker Objective    en_US
dc.description.tableofcontents    Capability of the Attacker    en_US
dc.description.tableofcontents    Adversarial Attack Types    en_US
dc.description.tableofcontents    Fast Gradient Sign Method    en_US
dc.description.tableofcontents    Iterative Gradient Sign Method    en_US
dc.description.tableofcontents    Projected Gradient Descent    en_US
dc.description.tableofcontents    Jacobian-based Saliency Map Attack (JSMA)    en_US
dc.description.tableofcontents    Carlini & Wagner Attack    en_US
dc.description.tableofcontents    DeepFool Attack    en_US
dc.description.tableofcontents    HopSkipJump Attack    en_US
dc.description.tableofcontents    Universal Adversarial Attack    en_US
dc.description.tableofcontents    Adversarial Defense    en_US
dc.description.tableofcontents    Defensive Distillation    en_US
dc.description.tableofcontents    Adversarial Training    en_US
dc.description.tableofcontents    MagNet    en_US
dc.description.tableofcontents    Detection of Adversarial Samples    en_US
dc.description.tableofcontents    UNCERTAINTY IN MACHINE LEARNING    en_US
dc.description.tableofcontents    Types of Uncertainty in Machine Learning    en_US
dc.description.tableofcontents    Epistemic Uncertainty    en_US
dc.description.tableofcontents    Aleatoric Uncertainty    en_US
dc.description.tableofcontents    Scibilic Uncertainty    en_US
dc.description.tableofcontents    Quantifying Uncertainty in Deep Neural Networks    en_US
dc.description.tableofcontents    Quantification of Epistemic Uncertainty via MC-Dropout Sampling    en_US
dc.description.tableofcontents    Quantification of Aleatoric Uncertainty via MC-Dropout Sampling    en_US
dc.description.tableofcontents    Quantification of Epistemic and Aleatoric Uncertainty via MC-Dropout Sampling    en_US
dc.description.tableofcontents    Moment-Based Predictive Uncertainty Quantification    en_US
dc.description.tableofcontents    ADVERSARIAL SAMPLE DETECTION    en_US
dc.description.tableofcontents    Uncertainty Quantification    en_US
dc.description.tableofcontents    Explanatory Research on Uncertainty Quantification Methods    en_US
dc.description.tableofcontents    Proposed Closeness Metric    en_US
dc.description.tableofcontents    Explanatory Research on Our Closeness Metric    en_US
dc.description.tableofcontents    Summary of the Algorithm    en_US
dc.description.tableofcontents    Results    en_US
dc.description.tableofcontents    Experimental Setup    en_US
dc.description.tableofcontents    Experimental Results    en_US
dc.description.tableofcontents    Further Results and Discussion    en_US
dc.description.tableofcontents    ADVERSARIAL ATTACK    en_US
dc.description.tableofcontents    Approach    en_US
dc.description.tableofcontents    Proposed Epistemic Uncertainty-Based Attacks    en_US
dc.description.tableofcontents    Fast Gradient Sign Method (Uncertainty-Based)    en_US
dc.description.tableofcontents    Basic Iterative Attack (BIM-A Uncertainty-Based)    en_US
dc.description.tableofcontents    Basic Iterative Attack (BIM-A Hybrid Approach)    en_US
dc.description.tableofcontents    Basic Iterative Attack (BIM-B Hybrid Approach)    en_US
dc.description.tableofcontents    Visualizing Gradient Path for Uncertainty-Based Attacks    en_US
dc.description.tableofcontents    Visualizing Uncertainty Under Different Attack Variants    en_US
dc.description.tableofcontents    Search for a More Efficient Attack Algorithm    en_US
dc.description.tableofcontents    Rectified Basic Iterative Attack    en_US
dc.description.tableofcontents    Attacker’s Capability    en_US
dc.description.tableofcontents    Results    en_US
dc.description.tableofcontents    Experimental Setup    en_US
dc.description.tableofcontents    Experimental Results    en_US
dc.description.tableofcontents    Further Results and Discussion    en_US
dc.description.tableofcontents    ADVERSARIAL DEFENSE    en_US
dc.description.tableofcontents    Approach    en_US
dc.description.tableofcontents    Intuition Behind Using Uncertainty-Based Reversal Process    en_US
dc.description.tableofcontents    Uncertainty-Based Reversal Operation    en_US
dc.description.tableofcontents    Enhanced Uncertainty-Based Reversal Operation    en_US
dc.description.tableofcontents    The Usage of Uncertainty-Based Reversal    en_US
dc.description.tableofcontents    The Effect of Uncertainty-Based Reversal    en_US
dc.description.tableofcontents    Variants of the Enhanced Uncertainty-Based Reversal Operation    en_US
dc.description.tableofcontents    Hybrid Deployment Options    en_US
dc.description.tableofcontents    Via Adversarial Training    en_US
dc.description.tableofcontents    Via Defensive Distillation    en_US
dc.description.tableofcontents    The Effect on Clean Data Performance    en_US
dc.description.tableofcontents    Results    en_US
dc.description.tableofcontents    Part 1    en_US
dc.description.tableofcontents    Experimental Setup    en_US
dc.description.tableofcontents    Experimental Results    en_US
dc.description.tableofcontents    Discussions and Results with Hybrid Approach (Adversarial Training)    en_US
dc.description.tableofcontents    Part 2    en_US
dc.description.tableofcontents    Discussions and Further Results    en_US
dc.description.tableofcontents    CONCLUSION AND FUTURE WORK    en_US
dc.language.iso    en    en_US
dc.publisher    Işık Üniversitesi    en_US
dc.rights    info:eu-repo/semantics/openAccess    en_US
dc.rights    Attribution-NonCommercial-NoDerivs 3.0 United States    *
dc.rights.uri    http://creativecommons.org/licenses/by-nc-nd/3.0/us/    *
dc.subject    Deep neural networks    en_US
dc.subject    Adversarial machine learning    en_US
dc.subject    Uncertainty quantification    en_US
dc.subject    Monte-Carlo dropout sampling    en_US
dc.subject    Epistemic uncertainty    en_US
dc.subject    Aleatoric uncertainty    en_US
dc.subject    Scibilic uncertainty    en_US
dc.subject    Derin sinir ağları    en_US
dc.subject    Karşıt makine öğrenmesi    en_US
dc.subject    Monte-Carlo bırakma örneklemesi    en_US
dc.subject    Model belirsizliği    en_US
dc.subject    Epistemik belirsizlik    en_US
dc.subject    Rassal belirsizlik    en_US
dc.subject    Bilinebilir belirsizlik    en_US
dc.subject.lcc    QC793 .T86 U85 2022
dc.subject.lcsh    Deep neural networks.    en_US
dc.subject.lcsh    Adversarial machine learning.    en_US
dc.subject.lcsh    Uncertainty quantification.    en_US
dc.subject.lcsh    Monte-Carlo dropout sampling.    en_US
dc.subject.lcsh    Epistemic uncertainty.    en_US
dc.subject.lcsh    Aleatoric uncertainty.    en_US
dc.subject.lcsh    Scibilic uncertainty.    en_US
dc.title    Using uncertainty metrics in adversarial machine learning as an attack and defense tool    en_US
dc.title.alternative    Belirsizlik metriklerinin hasmane makine öğrenmesinde saldırı ve savunma amaçlı kullanılması    en_US
dc.type    Doctoral Thesis    en_US
dc.department    Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, Bilgisayar Mühendisliği Doktora Programı    en_US
dc.authorid    0000-0002-6214-6262    en_US
dc.relation.publicationcategory    Tez    en_US
dc.institutionauthor    Tuna, Ömer Faruk    en_US
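
Illustrative sketch. The abstract describes two complementary uses of MC-Dropout uncertainty estimates: flagging adversarial inputs on the defense side and guiding perturbations on the attack side. The code below is a minimal sketch of those mechanics under stated assumptions (PyTorch, a hypothetical toy model named UncertaintyNet, MNIST-like 28x28 inputs, and a mutual-information measure of epistemic uncertainty); it is not the thesis implementation, whose actual detector, closeness metric, and reversal defense are covered in the chapters listed above.

# Minimal, illustrative sketch. Assumptions: PyTorch, a toy MLP named
# UncertaintyNet, MNIST-like 28x28 inputs, T stochastic forward passes,
# and mutual information as the epistemic-uncertainty measure. This is
# NOT the thesis code; it only mirrors the ideas in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyNet(nn.Module):
    """Hypothetical toy classifier with a dropout layer for MC-Dropout."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 256)
        self.drop = nn.Dropout(p=0.5)  # kept active at inference for MC-Dropout
        self.fc2 = nn.Linear(256, num_classes)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        return self.fc2(self.drop(F.relu(self.fc1(x))))

def mc_dropout_probs(model, x, T: int = 30):
    """Collect T stochastic softmax outputs with dropout left enabled."""
    model.train()  # .train() keeps the dropout layers stochastic
    with torch.no_grad():
        return torch.stack([F.softmax(model(x), dim=1) for _ in range(T)])

def epistemic_uncertainty(probs):
    """Mutual information: predictive entropy minus expected entropy."""
    mean_p = probs.mean(dim=0)                                       # (B, C)
    pred_ent = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=1)  # (B,)
    exp_ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=2).mean(dim=0)
    return pred_ent - exp_ent  # high score: the model "does not know"

def uncertainty_fgsm(model, x, eps: float = 0.1, T: int = 30):
    """One FGSM-style step ascending the epistemic-uncertainty surface,
    a single-step stand-in for the uncertainty-based attacks listed in
    the table of contents."""
    x_adv = x.clone().requires_grad_(True)
    model.train()  # dropout must stay active during the stochastic passes
    probs = torch.stack([F.softmax(model(x_adv), dim=1) for _ in range(T)])
    epistemic_uncertainty(probs).sum().backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Usage sketch: score a batch, then craft uncertainty-guided perturbations.
model = UncertaintyNet()
x = torch.rand(8, 1, 28, 28)  # stand-in batch of images
u_clean = epistemic_uncertainty(mc_dropout_probs(model, x))
x_adv = uncertainty_fgsm(model, x)

As a usage note, the defense-side idea would be to calibrate a threshold on the epistemic scores of clean data and flag inputs that exceed it as suspected adversarial samples, while the attack-side step perturbs the input in the direction that increases the model's epistemic uncertainty, optionally combined with the model's own loss as in the hybrid attack variants.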

