Listing indexed publications for the subject "Machine-learning"
Total records: 7, showing 1-7
Closeness and uncertainty aware adversarial examples detection in adversarial machine learning
(Elsevier Ltd, 2022-07) While deep learning models are thought to be resistant to random perturbations, it has been demonstrated that these architectures are vulnerable to deliberately crafted perturbations, albeit being quasi-imperceptible. These ...
Cost-conscious comparison of supervised learning algorithms over multiple data sets
(Elsevier Sci Ltd, 2012-04) In the literature, there exist statistical tests to compare supervised learning algorithms on multiple data sets in terms of accuracy but they do not always generate an ordering. We propose Multi(2)Test, a generalization ...
Eigenclassifiers for combining correlated classifiers
(Elsevier Science Inc, 2012-03-15) In practice, classifiers in an ensemble are not independent. This paper is the continuation of our previous work on ensemble subset selection [A. Ulas, M. Semerci, O.T. Yildiz, E. Alpaydin, Incremental construction of ...
Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
(Springer, 2022-03) Deep neural network (DNN) architectures are considered to be robust to random perturbations. Nevertheless, it was shown that they could be severely vulnerable to slight but carefully crafted perturbations of the input, ...
Machine learning-based model categorization using textual and structural features
(Springer Science and Business Media Deutschland GmbH, 2022-09-08) Model Driven Engineering (MDE), where models are the core elements in the entire life cycle from the specification to maintenance phases, is one of the promising techniques to provide abstraction and automation. However, ...
TENET: a new hybrid network architecture for adversarial defense
(Springer Science and Business Media Deutschland GmbH, 2023-08) Deep neural network (DNN) models are widely renowned for their resistance to random perturbations. However, researchers have found that these models are indeed extremely vulnerable to deliberately crafted and seemingly ...
Unreasonable effectiveness of last hidden layer activations for adversarial robustness
(Institute of Electrical and Electronics Engineers Inc., 2022) In standard Deep Neural Network (DNN) based classifiers, the general convention is to omit the activation function in the last (output) layer and directly apply the softmax function on the logits to get the probability ...