Search Results

Listing 1 - 3 of 3
  • Publication
    The role of context in desecuritization: Turkish foreign policy towards Northern Iraq (2008–2017)
    (Routledge, 2020-05-26) Kayhan Pusane, Özlem
    For decades, Turkish policymakers have perceived the possible emergence of a Kurdish autonomous region or an independent Kurdish state in northern Iraq as an existential threat to Turkey. However, from 2008 onwards, under the Justice and Development Party government, Turkish foreign policy towards the Iraqi Kurdistan Regional Government (KRG) was gradually desecuritized. In light of Turkey's experience, this paper explores the role of context in desecuritizing foreign policy issues in general and Turkish foreign policy towards the KRG in particular. It argues that changing civil-military relations in Turkey, as well as the country's broader political and economic conjuncture, allowed for the desecuritization of Turkey-KRG relations from 2008 onwards. The context also determined what kind of desecuritization Turkey experienced towards the KRG.
  • Publication
    Closeness and uncertainty aware adversarial examples detection in adversarial machine learning
    (Elsevier Ltd, 2022-07) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    While deep learning models are thought to be resistant to random perturbations, it has been demonstrated that these architectures are vulnerable to deliberately crafted, albeit quasi-imperceptible, perturbations. These vulnerabilities make it challenging to deploy Deep Neural Network (DNN) models in security-critical areas. Recently, many research studies have been conducted to develop defense techniques enabling more robust models. In this paper, we target detecting adversarial samples by differentiating them from their clean equivalents. We investigate various metrics for detecting adversarial samples. We first leverage moment-based predictive uncertainty estimates of DNN classifiers derived through Monte-Carlo (MC) Dropout sampling. We also introduce a new method that operates in the subspace of deep features obtained by the model. We verified the effectiveness of our approach on different datasets. Our experiments show that these approaches complement each other, and combined use of all metrics yields a 99% ROC-AUC adversarial detection score for well-known attack algorithms.
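    The moment-based MC Dropout idea the abstract mentions can be sketched in a few lines: keep dropout active at inference, run several stochastic forward passes, and use the variance across passes as an uncertainty signal. The tiny two-layer network, weights, and drop rate below are hypothetical placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: one ReLU hidden layer, random weights.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_forward(x, p_drop=0.5):
    h = np.maximum(x @ W1, 0.0)            # hidden layer
    mask = rng.random(h.shape) > p_drop    # dropout stays ON at inference
    h = h * mask / (1.0 - p_drop)          # inverted-dropout scaling
    return softmax(h @ W2)

def mc_dropout_uncertainty(x, T=100):
    """Run T stochastic passes; return the mean prediction and the
    per-class variance across passes (a moment-based uncertainty)."""
    probs = np.stack([stochastic_forward(x) for _ in range(T)])
    return probs.mean(axis=0), probs.var(axis=0)

x = rng.normal(size=(4,))
mean_p, var_p = mc_dropout_uncertainty(x)
# A high variance on the predicted class suggests the input may be
# adversarial; thresholding it is one way to flag suspect samples.
```

    In the detection setting described above, such a variance score would be one of several metrics combined with the deep-feature subspace method.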
  • Publication
    Uncertainty as a Swiss army knife: new adversarial attack and defense ideas based on epistemic uncertainty
    (Springer, 2022-04-02) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    Although state-of-the-art deep neural network models are known to be robust to random perturbations, it has been verified that these architectures are quite vulnerable to deliberately crafted, albeit quasi-imperceptible, perturbations. These vulnerabilities make it challenging to deploy deep neural network models in areas where security is a critical concern. In recent years, many research studies have been conducted to develop new attack methods and new defense techniques that enable more robust and reliable models. In this study, we use the quantified epistemic uncertainty obtained from the model's final probability outputs, along with the model's own loss function, to generate more effective adversarial samples. We also propose a novel defense approach against attacks like DeepFool that produce adversarial samples located near the model's decision boundary. We verified the effectiveness of our attack method on the MNIST (Digit), MNIST (Fashion) and CIFAR-10 datasets. In our experiments, we showed that our proposed uncertainty-based reversal method achieved a worst-case success rate of around 95% without compromising clean accuracy.