Search Results

Showing 1 - 2 of 2
  • Publication
    End-effector trajectory control in a two-link flexible manipulator through reference joint angle values modification by neural networks
    (Sage Publications, 2006-02) Öke, Gülay; İstefanopulos, Yorgo
    The basic difficulty in the control of flexible link manipulators stems from the fact that the link deflections cannot be controlled directly. Since the number of control inputs applied by the actuators is less than the total number of variables to be controlled, control approaches aiming at the suppression of deflections and vibrations are generally insufficient. Another possible approach is to determine new joint trajectories to minimize the error of the end-effector in the operational space. In this paper, a neural network is designed to compute incremental changes for the reference values of the joint angles to achieve successful tip tracking in the operational space. Tip position errors in the x- and y-directions are utilized as inputs to the neural network. The cost function, which is minimized in training the neural network, is also chosen as the sum of squares of the tip position error in both directions. Joint angle control is provided by a PD controller. Simulations are carried out to evaluate the performance of the neural-network-based trajectory tracking method, and the results are depicted in both joint and operational spaces.
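    A minimal Python/NumPy sketch of the control structure this abstract describes; the network size, the gains Kp and Kd, and the function names are illustrative placeholders, not the authors' actual design:

        import numpy as np

        # A small MLP maps tip position errors (e_x, e_y) to incremental
        # corrections for the joint angle references; a PD controller then
        # tracks the corrected references. Weights here are random
        # placeholders; in the paper they are trained to minimize the sum
        # of squared tip position errors in x and y.
        rng = np.random.default_rng(0)
        W1, b1 = 0.1 * rng.standard_normal((8, 2)), np.zeros(8)
        W2, b2 = 0.1 * rng.standard_normal((2, 8)), np.zeros(2)

        def nn_reference_correction(tip_error):
            h = np.tanh(W1 @ tip_error + b1)
            return W2 @ h + b2  # incremental (d_theta1, d_theta2)

        Kp = np.diag([40.0, 40.0])  # proportional gains (assumed values)
        Kd = np.diag([5.0, 5.0])    # derivative gains (assumed values)

        def control_step(theta_ref_nominal, theta, theta_dot,
                         tip_desired, tip_measured):
            tip_error = tip_desired - tip_measured  # operational-space error
            theta_ref = theta_ref_nominal + nn_reference_correction(tip_error)
            return Kp @ (theta_ref - theta) - Kd @ theta_dot  # joint torques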
  • Publication
    Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
    (Springer, 2022-03) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    Deep neural network (DNN) architectures are considered to be robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against these attacks with more robust DNN architectures. However, most of the current research has concentrated on utilizing the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout Sampling, for adversarial attack purposes: we perturb the input toward shifted-domain regions on which the model has not been trained. Exploiting the model's epistemic uncertainty, we propose new attacks that take advantage of the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution. Our results show that the proposed hybrid attack approach increases the attack success rates from 82.59% to 85.14%, 82.96% to 90.13%, and 89.44% to 91.06% on the MNIST Digit, MNIST Fashion, and CIFAR-10 datasets, respectively.
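    A minimal PyTorch sketch of the uncertainty-based perturbation idea in this abstract: epistemic uncertainty is estimated with Monte-Carlo Dropout (dropout kept active at inference), and the input is nudged toward higher uncertainty. The uncertainty measure (total predictive variance), the single sign-gradient step, and the names mc_dropout_uncertainty, uncertainty_ascent_step, eps, and n_samples are assumptions, not the authors' exact attack; the abstract's hybrid attack presumably also combines such an uncertainty signal with a loss-based one, but only the uncertainty part is sketched here:

        import torch
        import torch.nn.functional as F

        def mc_dropout_uncertainty(model, x, n_samples=20):
            # Keep dropout layers stochastic during the forward passes.
            model.train()
            probs = torch.stack([F.softmax(model(x), dim=-1)
                                 for _ in range(n_samples)])
            # Total predictive variance over classes: one scalar per input,
            # large in regions the model has not been trained on (epistemic).
            return probs.var(dim=0).sum(dim=-1)

        def uncertainty_ascent_step(model, x, eps=0.01):
            # One FGSM-style step that pushes x toward higher epistemic
            # uncertainty, i.e. toward shifted-domain regions.
            x = x.clone().detach().requires_grad_(True)
            mc_dropout_uncertainty(model, x).sum().backward()
            return (x + eps * x.grad.sign()).detach()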