Search Results
Showing 1 - 4 of 4
Publication: Coherent array imaging using phased subarrays. Part II: Simulations and experimental results (IEEE-INST Electrical Electronics Engineers Inc, 2005-01). Johnson, Jeremy A.; Oralkan, Ömer; Ergün, Arif Sanlı; Demirci, Utkan; Karaman, Mustafa; Khuri-Yakub, Butrus Thomas

Phased subarray (PSA) imaging provides the flexibility of reducing the number of front-end hardware channels to a count between that of classical synthetic aperture (CSA) imaging, which uses only one element per firing event, and full-phased array (FPA) imaging, which uses all elements for each firing. The performance of PSA generally ranges between that obtained by CSA and FPA using the same array, and depends on the amount of hardware complexity reduction. For the work described in this paper, we performed FPA, CSA, and PSA imaging of a resolution phantom using both simulated and experimental data from a 3-MHz, 3.2-cm, 128-element capacitive micromachined ultrasound transducer (CMUT) array. The simulated system point responses in the spatial and frequency domains are presented as a means of studying the effects of signal bandwidth, reconstruction filter size, and subsampling rate on PSA system performance. The PSA and FPA sector-scanned images were reconstructed using the wideband experimental data with 80% fractional bandwidth, with seven 32-element subarrays used for PSA imaging. The measurements on the experimental sector images indicate that, at the transmit focal zone, the PSA method provides a 10% improvement in the 6-dB lateral resolution, and the axial point resolution of PSA imaging is identical to that of FPA imaging. The signal-to-noise ratio (SNR) of the PSA image was 58.3 dB, 4.9 dB below that of the FPA image, and the contrast-to-noise ratio (CNR) was reduced by 10%. The simulated and experimental test results presented in this paper validate theoretical expectations and illustrate the flexibility of PSA imaging as a way to exchange SNR and frame rate for simplified front-end hardware.

Publication: Segmentation based classification of retinal diseases in OCT images (Institute of Electrical and Electronics Engineers Inc., 2024). Eren, Öykü; Tek, Faik Boray; Turkan, Yasemin

Volumetric optical coherence tomography (OCT) scans offer detailed visualization of the retinal layers, where any deformation can indicate potential abnormalities. This study introduces a method for classifying ocular diseases in OCT images through transfer learning. Applying transfer learning from natural images to OCT scans presents challenges, particularly when target-domain examples are limited. Our approach aims to enhance OCT-based retinal disease classification by leveraging transfer learning more effectively. We hypothesize that providing an explicit layer structure can improve classification accuracy. Using the OCTA-500 dataset, we explored various configurations by segmenting the retinal layers and integrating these segmentations with OCT scans. By combining horizontal and vertical cross-sectional middle slices and their blendings with segmentation outputs, we achieved a classification accuracy of 91.47% and an Area Under the Curve (AUC) of 0.96, significantly outperforming the classification of OCT slice images.

Publication: Retinal disease classification using optical coherence tomography angiography images (Institute of Electrical and Electronics Engineers Inc., 2024). Aydın, Ömer Faruk; Nazlı, Muhammet Serdar; Tek, Faik Boray; Turkan, Yasemin

Optical Coherence Tomography Angiography (OCTA) is a non-invasive imaging modality widely used for detailed visualization of the retinal microvasculature, which is crucial for diagnosing and monitoring various retinal diseases. However, manual interpretation of OCTA images is labor-intensive and prone to variability, highlighting the need for automated classification methods. This study presents an approach that utilizes transfer learning to classify OCTA images into different retinal disease categories, including age-related macular degeneration (AMD) and diabetic retinopathy (DR). We used the OCTA-500 dataset [1], the largest publicly available retinal dataset, containing images from 500 subjects with diverse retinal conditions. To address class imbalance, we employed k-fold cross-validation and grouped various other conditions under an 'OTHERS' class. Additionally, we compared the performance of the ResNet50 model with OCTA inputs to that of the ResNet50 and RETFound (Vision Transformer) models with OCT inputs to assess the efficiency of OCTA in retinal condition classification. In the three-class (AMD, DR, Normal) classification, ResNet50-OCTA outperformed ResNet50-OCT but slightly underperformed compared to RETFound-OCT, which was pretrained on a large OCT dataset. In the four-class (AMD, DR, Normal, Others) classification, ResNet50-OCTA and RETFound-OCT achieved similar classification accuracies. This study establishes a baseline for retinal condition classification using the OCTA-500 dataset and provides a comparison between OCT and OCTA input modalities.

Publication: Self-supervised learning of 3D structure from 2D OCT slices for retinal disease diagnosis on UK Biobank scans (Institute of Electrical and Electronics Engineers Inc., 2025-09-21). Nazlı, Muhammet Serdar; Turkan, Yasemin; Tek, Faik Boray

This study presents a self-supervised learning framework for retinal disease classification using Optical Coherence Tomography (OCT) scans. To balance the contextual richness of 3D volumes with the computational efficiency of 2D architectures, we introduce a quasi-3D input generation strategy. Each input is constructed by stacking three OCT slices, sampled from channel-specific Gaussian distributions centered on the volume midplane, and arranged in a standard three-channel 2D format compatible with existing pre-trained models. These quasi-3D images are used to pre-train a Vision Transformer (ViT-Base) via a Masked Autoencoder (MAE) with a shared masking pattern, encouraging the model to reconstruct masked regions by encoding anatomical continuity across slices. Pre-training is conducted on 10,000 unlabeled OCT volumes from the UK Biobank. The encoder is then fine-tuned on the OCTA-500 dataset for three-class and four-class retinal disease classification tasks, including macular degeneration and diabetic retinopathy. The model achieves 92.57% accuracy on the three-class task, matching the performance of RETFound while using over 150 times less pre-training data and a smaller backbone.
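The quasi-3D input construction described in the last entry can be sketched as follows. This is a minimal illustration only: the channel-center offset and standard deviation used here are assumed values for demonstration, not parameters reported in the paper, and `quasi_3d_input` is a hypothetical helper name.

```python
import numpy as np

def quasi_3d_input(volume, offset=4, sigma=2.0, rng=None):
    """Stack three OCT slices into a three-channel 2D image.

    Slice indices for the three channels are drawn from Gaussian
    distributions centered near the volume midplane (one center per
    channel). `offset` and `sigma` are illustrative assumptions.
    volume: array of shape (num_slices, H, W).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = volume.shape[0]
    mid = n // 2
    means = (mid - offset, mid, mid + offset)  # channel-specific centers
    channels = []
    for m in means:
        idx = int(round(rng.normal(loc=m, scale=sigma)))
        idx = min(max(idx, 0), n - 1)  # clamp to a valid slice index
        channels.append(volume[idx])
    return np.stack(channels, axis=-1)  # shape (H, W, 3)

# Example: a toy 64-slice volume of 32x32 images
vol = np.zeros((64, 32, 32), dtype=np.float32)
x = quasi_3d_input(vol, rng=np.random.default_rng(0))
print(x.shape)  # (32, 32, 3)
```

The resulting (H, W, 3) array matches the input layout expected by standard ImageNet-pretrained 2D backbones, which is what makes the MAE pre-training with existing ViT architectures straightforward.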
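The channel-count trade-off in the first entry (a 128-element array imaged with seven 32-element subarrays) can be checked with a quick sketch. The 50% subarray overlap assumed below is for illustration only; the abstract states only the element and subarray counts.

```python
# Phased subarray (PSA) channel arithmetic, using the numbers from the
# first entry: a 128-element array and seven 32-element subarrays.
ARRAY_ELEMENTS = 128
SUBARRAY_ELEMENTS = 32
STEP = SUBARRAY_ELEMENTS // 2  # assumed 50% overlap -> step of 16 elements

# Start element of each subarray across the aperture.
starts = list(range(0, ARRAY_ELEMENTS - SUBARRAY_ELEMENTS + 1, STEP))
print(len(starts))                      # 7 subarrays
print(starts[-1] + SUBARRAY_ELEMENTS)   # 128: last subarray ends at the aperture edge
```

Under this assumption the front end needs only 32 channels instead of 128, which is the hardware-reduction flexibility the abstract describes, paid for in SNR and frame rate.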












