Search Results

Showing 1 - 4 of 4
  • Publication
    Ground plane detection using an RGB-D camera for quadcopter landing
    (Işık Üniversitesi, 2013-06-04) Kırcalı, Doğan; Tek, Faik Boray; Işık Üniversitesi, Fen Bilimleri Enstitüsü, Elektronik Mühendisliği Yüksek Lisans Programı
    The purpose of this study is to build an autonomous quadcopter capable of automated detection of landing zones and of landing itself. Achieving this requires several steps linked together. The first step is to build a quadcopter that can fly. We used a commercially available quadcopter platform and an ArduPilot Mega control card to build a low-cost, easy-to-implement, and stable platform. An open-source firmware, named Arducopter, is used on the control card. This system can take off, land, hover, and follow a given flight path. To detect landing zones and land safely, we propose to use an RGB-D camera as a sensor and a small onboard PC as the computing engine. Hence, we modified the acquired quadcopter frame to integrate the additional components. In this thesis, we propose a novel and robust ground plane and obstacle detection algorithm based on depth information from an RGB-D camera. Our method was compared with the V-disparity algorithm from the literature. It has been shown that our algorithm performs better than the V-disparity method and produces useful ground plane-obstacle segmentations, even for difficult cases. The method is able to work on highly dynamic platforms. The algorithm is generic in the sense that it can be used for different forward-facing RGB-D placements, for example on ground vehicles or robots. Moreover, we developed a pre-processing step to allow the use of the method for down-facing sensor view positions, as the core method is inadequate for landing zone detection. The proposed method compensates for the movements of the camera caused by the air vehicle and detects ground plane obstacles successfully. It has been shown that the use of an RGB-D camera allows ground plane and landing zone detection even in no-light conditions. All necessary components in this thesis were financed by FMV Işık University internal research funds, BAP-10B302 project.
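    The core idea of depth-based ground plane segmentation can be sketched as follows: back-project depth pixels to 3-D, fit a plane to points assumed to be ground, and label pixels near the plane. This is an illustrative sketch under simplifying assumptions (lower image half used as the ground seed, a least-squares plane fit, a fixed distance threshold), not the algorithm proposed in the thesis.

    ```python
    import numpy as np

    def segment_ground(depth, fx, fy, cx, cy, dist_thresh=0.05):
        """Segment ground pixels in a depth image (metres).

        Back-projects pixels to 3-D using the pinhole model, fits a plane
        z = a*x + b*y + c by least squares to seed points (here: the lower
        image half, assumed mostly ground), then labels pixels whose
        point-to-plane distance is below dist_thresh as ground.
        Illustrative sketch only, not the thesis implementation.
        """
        h, w = depth.shape
        v, u = np.mgrid[0:h, 0:w]
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)

        # Seed: assume the lower half of the image is mostly ground.
        seed = pts[(v.ravel() > h // 2) & (pts[:, 2] > 0)]

        # Least-squares plane fit: z = a*x + b*y + c.
        A = np.c_[seed[:, 0], seed[:, 1], np.ones(len(seed))]
        coef, *_ = np.linalg.lstsq(A, seed[:, 2], rcond=None)

        # Point-to-plane distance, normal (a, b, -1).
        pred = pts[:, 0] * coef[0] + pts[:, 1] * coef[1] + coef[2]
        resid = np.abs(pts[:, 2] - pred) / np.sqrt(coef[0]**2 + coef[1]**2 + 1)
        return (resid < dist_thresh).reshape(h, w)
    ```

    A real system would pick the seed region more carefully (e.g. using the known camera pitch) and refit the plane robustly, since obstacles in the seed region bias a plain least-squares fit.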
  • Publication
    Ground plane detection using an RGB-D sensor
    (Springer, 2014-10-27) Kırcalı, Doğan; Tek, Faik Boray
    Ground plane detection is essential for successful navigation of vision-based mobile robots. We introduce a very simple but robust ground plane detection method based on depth information obtained using an RGB-Depth sensor. We present two different variations of the method: the simplest one is robust in setups where the sensor pitch angle is fixed and has no roll, whereas the second one can handle changes in pitch and roll angles. Our comparisons show that our approach performs better than the vertical disparity approach. It produces accurate ground plane-obstacle segmentation for difficult scenes, which include many obstacles, different floor surfaces, stairs, and narrow corridors.
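    For the fixed-pitch, no-roll case, one natural simplification is that the expected ground depth depends only on the image row: a reference frame of empty floor gives a per-row depth profile, and pixels that deviate from it are flagged as obstacles. The sketch below illustrates that idea; the per-row median reference and the 8% tolerance are assumptions for illustration, not the paper's exact model.

    ```python
    import numpy as np

    def row_reference(depth_empty):
        """Per-row median depth from an obstacle-free reference frame.
        With fixed pitch and no roll, ground depth is (roughly) a
        function of the image row only."""
        return np.median(depth_empty, axis=1)

    def detect_obstacles(depth, ref, tol=0.08):
        """Flag pixels whose depth deviates from the per-row reference
        by more than a relative tolerance (obstacles stand off the
        expected ground surface)."""
        return np.abs(depth - ref[:, None]) > tol * ref[:, None]
    ```

    The second variation described in the abstract, which tolerates changing pitch and roll, would instead have to re-estimate the ground model every frame rather than rely on a fixed per-row reference.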
  • Publication
    Ground plane detection using the Microsoft Kinect sensor
    (IEEE, 2013-06-13) Kırcalı, Doğan; Tek, Faik Boray; İyidir, İbrahim Kamil
    Ground plane detection is essential for the successful navigation of vision-based mobile robots. This paper proposes a novel and robust ground plane detection algorithm based on depth information obtained from the Kinect depth sensor. Unlike similar methods in the literature, it is not assumed that the ground plane is the largest region in the scene. Our method is presented as two different algorithms for two cases: one where the sensor's viewing angle is fixed, and one where it may vary. Our experiments show that the proposed algorithms are quite successful in both cases.
  • Publication
    Adaptive visual obstacle detection for mobile robots using monocular camera and ultrasonic sensor
    (Springer-Verlag, 2012-10-07) İyidir, İbrahim Kamil; Tek, Faik Boray; Kırcalı, Doğan
    This paper presents a novel vision-based obstacle detection algorithm that is adapted from a powerful background subtraction algorithm: ViBe (VIsual Background Extractor). We describe an adaptive obstacle detection method using monocular color vision and an ultrasonic distance sensor. Our approach assumes an obstacle-free region in front of the robot in the initial frame; however, the method dynamically adapts to its environment in the succeeding frames. The adaptation is performed using a model update rule based on the ultrasonic distance sensor reading. Our detailed experiments validate the proposed concept and the ultrasonic-sensor-based model update.
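    The flavour of a ViBe-style model with a sonar-gated update can be sketched as follows: each pixel keeps a small set of colour samples for the "floor" appearance; a pixel is an obstacle if too few samples match its current colour; and the model is only refreshed when the ultrasonic sensor reports a clear path. The sample count, match radius, and clearance threshold below are illustrative assumptions, not the paper's parameters.

    ```python
    import random
    import numpy as np

    N_SAMPLES = 10     # colour samples kept per pixel (ViBe-style)
    RADIUS = 20.0      # colour distance below which a sample matches
    MIN_MATCHES = 2    # matches needed to call a pixel "floor"
    CLEAR_DIST = 1.0   # metres: sonar reading above this => path is clear

    def init_model(frame):
        """Per-pixel sample set, bootstrapped from the first frame,
        which is assumed obstacle-free in front of the robot."""
        return np.repeat(frame[None].astype(np.float32), N_SAMPLES, axis=0)

    def classify_and_update(model, frame, ultrasonic_m):
        """Label obstacle pixels; if the sonar says the path is clear,
        refresh one random sample per floor pixel (conservative update)."""
        dist = np.linalg.norm(model - frame[None].astype(np.float32), axis=-1)
        matches = (dist < RADIUS).sum(axis=0)
        obstacle = matches < MIN_MATCHES
        if ultrasonic_m > CLEAR_DIST:          # update gated by the sonar
            idx = random.randrange(N_SAMPLES)  # replace one random sample slot
            floor = ~obstacle
            model[idx][floor] = frame[floor]
        return obstacle
    ```

    Gating the update on the sonar reading is what keeps the model from absorbing an obstacle that sits in front of the robot for a long time, which a plain background subtractor would eventually learn as background.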