Işık University Institutional Repository
Digitally stores academic resources such as books, articles, dissertations, bulletins, reports, and research data published directly or indirectly by Işık University in line with international standards. The repository helps track the academic performance of the university, provides long-term preservation of resources, and makes publications openly accessible in accordance with their copyright in order to increase their impact.

Recent Submissions
Sentiment analysis for hotel reviews in Turkish by using LLMs
(Institute of Electrical and Electronics Engineers Inc., 2024)
The field of sentiment analysis plays a pivotal role in consumer decision-making and service quality improvement within the hospitality industry. This study explores the application of Large Language Models (LLMs) for sentiment analysis of Turkish hotel reviews, contributing to the understanding of customer feedback and satisfaction. We created a dataset of 5,000 reviews by translating an English corpus into Turkish, which was then utilized to evaluate the performance of a state-of-the-art Turkish language model, TURNA. The study demonstrates that LLMs, particularly TURNA, outperform traditional machine learning algorithms and other advanced models in sentiment classification tasks, achieving an accuracy of 99.4%. This research underscores the potential of LLMs to enhance the accuracy of sentiment analysis, offering valuable insights for the tourism and hospitality sectors. The findings contribute to the ongoing evolution of sentiment analysis methodologies and suggest that LLMs can significantly improve the understanding and processing of customer feedback in Turkish hotel reviews.
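As a rough illustration of the workflow the abstract describes (fine-tuning a Turkish language model for review sentiment), the sketch below uses the Hugging Face Trainer API with an assumed Turkish encoder checkpoint (dbmdz/bert-base-turkish-cased) as a stand-in for TURNA and an assumed CSV layout with text/label columns; it is not the paper's exact configuration.

```python
# Minimal sketch of fine-tuning a Turkish transformer for binary hotel-review
# sentiment classification. The model name, file names, column names, and
# hyperparameters are illustrative assumptions, not the paper's TURNA setup.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "dbmdz/bert-base-turkish-cased"  # assumed stand-in for TURNA

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Assumed CSV files with columns "text" (Turkish review) and "label" (0 = negative, 1 = positive).
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sentiment-tr",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
print(trainer.evaluate())  # reports loss on the held-out reviews
```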
Retinal disease classification using optical coherence tomography angiography images
(Institute of Electrical and Electronics Engineers Inc., 2024)
Optical Coherence Tomography Angiography (OCTA) is a non-invasive imaging modality widely used for the detailed visualization of retinal microvasculature, which is crucial for diagnosing and monitoring various retinal diseases. However, manual interpretation of OCTA images is labor-intensive and prone to variability, highlighting the need for automated classification methods. This study presents an approach that utilizes transfer learning to classify OCTA images into different retinal disease categories, including age-related macular degeneration (AMD) and diabetic retinopathy (DR). We used the OCTA-500 dataset [1], the largest publicly available retinal dataset, which contains images from 500 subjects with diverse retinal conditions. To address the class imbalance, we employed k-fold cross-validation and grouped various other conditions under the 'OTHERS' class. Additionally, we compared the performance of the ResNet50 model with OCTA inputs to that of the ResNet50 and RetFound (Vision Transformer) models with OCT inputs to assess the efficiency of OCTA in retinal condition classification. In the three-class (AMD, DR, Normal) classification, ResNet50-OCTA outperformed ResNet50-OCT, but slightly underperformed compared to RetFound-OCT, which was pretrained on a large OCT dataset. In the four-class (AMD, DR, Normal, Others) classification, ResNet50-OCTA and RetFound-OCT achieved similar classification accuracies. This study establishes a baseline for retinal condition classification using the OCTA-500 dataset and provides a comparison between OCT and OCTA input modalities.
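A minimal sketch of the transfer-learning setup mentioned above (an ImageNet-pretrained ResNet50 with its classification head replaced for the retinal classes) follows. The folder layout, class list, and hyperparameters are assumptions for illustration; the paper's OCTA-500 preprocessing and k-fold protocol are not reproduced here.

```python
# Minimal transfer-learning sketch for OCTA image classification with ResNet50
# (torchvision). Assumes an ImageFolder layout octa500/train/<class>/<image>.png.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

CLASSES = ["AMD", "DR", "NORMAL", "OTHERS"]  # four-class setting from the abstract

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("octa500/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # replace the ImageNet head

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```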
Segmentation based classification of retinal diseases in OCT images
(Institute of Electrical and Electronics Engineers Inc., 2024)
Volumetric optical coherence tomography (OCT) scans offer detailed visualization of the retinal layers, where any deformation can indicate potential abnormalities. This study introduces a method for classifying ocular diseases in OCT images through transfer learning. Applying transfer learning from natural images to OCT scans presents challenges, particularly when target-domain examples are limited. Our approach aims to enhance OCT-based retinal disease classification by leveraging transfer learning more effectively. We hypothesize that providing an explicit layer structure can improve classification accuracy. Using the OCTA-500 dataset, we explored various configurations by segmenting the retinal layers and integrating these segmentations with OCT scans. By combining horizontal and vertical cross-sectional middle slices and their blendings with segmentation outputs, we achieved a classification accuracy of 91.47% and an Area Under the Curve (AUC) of 0.96, significantly outperforming the classification of OCT slice images.
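The sketch below illustrates one of the input configurations described above: blending an OCT slice with its retinal-layer segmentation before classification by a pretrained backbone. The blending weight, class count, and placeholder arrays are assumptions for illustration, not the paper's exact pipeline.

```python
# Minimal sketch: blend a grayscale OCT B-scan with its layer segmentation and
# feed the result to an ImageNet-pretrained ResNet50. All inputs are placeholders.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def blend_slice(oct_slice: np.ndarray, seg_map: np.ndarray, alpha: float = 0.5) -> torch.Tensor:
    """Blend an OCT slice with its layer segmentation into a 3-channel tensor."""
    oct_norm = oct_slice.astype(np.float32) / 255.0
    seg_norm = seg_map.astype(np.float32) / max(seg_map.max(), 1)
    blended = alpha * oct_norm + (1.0 - alpha) * seg_norm
    # Replicate to 3 channels so an ImageNet-pretrained backbone can consume it.
    x = torch.from_numpy(blended).unsqueeze(0).repeat(3, 1, 1)
    return x.unsqueeze(0)  # shape: (1, 3, H, W)

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Linear(backbone.fc.in_features, 4)  # assumed number of disease classes

oct_slice = np.random.randint(0, 256, (224, 224), dtype=np.uint8)  # placeholder B-scan
seg_map = np.random.randint(0, 7, (224, 224), dtype=np.uint8)      # placeholder layer labels

backbone.eval()
with torch.no_grad():
    logits = backbone(blend_slice(oct_slice, seg_map))
```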
Integrating the focusing neuron model with N-BEATS and N-HiTS
(Institute of Electrical and Electronics Engineers Inc., 2024)
The N-BEATS (Neural Basis Expansion Analysis for Time Series) model is a robust deep learning architecture designed specifically for time series forecasting. Its foundational idea lies in the use of a generic, interpretable architecture that leverages backward and forward residual links to predict time series data effectively. N-BEATS influenced the development of N-HiTS (Neural Hierarchical Interpretable Time Series), which builds upon and extends the foundational ideas of N-BEATS. This paper introduces new integrations that enhance these models by using the Focusing Neuron model in the blocks of N-BEATS and N-HiTS instead of Fully Connected (Dense) Neurons. The integration aims to improve the forward and backward forecasting processes in the blocks by facilitating the learning of parametric local receptive fields. Preliminary results indicate that this integration can significantly improve model performance on datasets with longer sequences, providing a promising direction for future advancements in N-BEATS and N-HiTS.
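For orientation, a minimal generic N-BEATS block is sketched below in PyTorch, with the layer constructor exposed as a parameter to mark where the paper substitutes focusing neurons for the fully connected layers. The focusing-neuron layer itself is defined in the paper and is not reproduced here; the block dimensions are assumptions.

```python
# Minimal sketch of a generic N-BEATS block with an identity basis. The `layer`
# argument is the substitution point: nn.Linear in the original architecture,
# a focusing-neuron layer (not reproduced here) in the proposed integration.
import torch
import torch.nn as nn

class NBeatsBlock(nn.Module):
    def __init__(self, backcast_len: int, forecast_len: int, hidden: int = 256, layer=nn.Linear):
        super().__init__()
        self.stack = nn.Sequential(
            layer(backcast_len, hidden), nn.ReLU(),
            layer(hidden, hidden), nn.ReLU(),
            layer(hidden, hidden), nn.ReLU(),
            layer(hidden, hidden), nn.ReLU(),
        )
        self.backcast_head = layer(hidden, backcast_len)  # reconstructs the input window
        self.forecast_head = layer(hidden, forecast_len)  # predicts the horizon

    def forward(self, x):
        h = self.stack(x)
        return self.backcast_head(h), self.forecast_head(h)

# Usage: residual stacking over blocks, as in N-BEATS.
block = NBeatsBlock(backcast_len=96, forecast_len=24)
window = torch.randn(8, 96)       # batch of input windows (assumed lengths)
backcast, forecast = block(window)
residual = window - backcast      # residual fed to the next block
```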
Machine learning-driven adaptive modulation for VLC-enabled medical body sensor networks
(Iran University of Science and Technology, 2024-12)
Visible Light Communication, a key optical wireless technology, offers reliable, high-bandwidth, and secure communication, making it a promising solution for a variety of applications. Despite its many advantages, optical wireless communication faces challenges in medical environments due to fluctuating signal strength caused by patient movement. Smart transmitter structures can improve system performance by adjusting system parameters to the fluctuating channel conditions. The purpose of this research is to examine how adaptive modulation performs in a medical body sensor network system that uses visible light communication. The analysis focuses on various medical scenarios and investigates machine learning algorithms. The study compares adaptive modulation based on supervised learning with that based on reinforcement learning. The findings indicate that both approaches greatly improve spectral efficiency, emphasizing the significance of implementing link adaptation in visible light communication-based medical body sensor networks. The use of the Q-learning algorithm in adaptive modulation enables real-time training and allows the system to adapt to the changing environment without any prior knowledge of it. A remarkable improvement is observed for photodetectors on the shoulder and wrist, since they experience more DC gain.
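A toy tabular Q-learning sketch for adaptive modulation is shown below: the agent selects a modulation order from a quantized SNR state and is rewarded with the spectral efficiency when an assumed SNR threshold is met. The thresholds, reward shaping, and i.i.d. channel sampling are illustrative assumptions, not the paper's VLC channel model.

```python
# Toy Q-learning for adaptive modulation: learn which modulation order to use
# at each quantized SNR level. Thresholds and rewards are assumed placeholders.
import numpy as np

MODS = [2, 4, 16, 64]                  # BPSK, QPSK, 16-QAM, 64-QAM
BITS = [np.log2(m) for m in MODS]      # spectral efficiency (bits/symbol)
SNR_STATES = 10                        # quantized received-SNR levels
THRESH = [2, 5, 12, 18]                # assumed minimum SNR (dB) per modulation

q = np.zeros((SNR_STATES, len(MODS)))
alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration
rng = np.random.default_rng(0)

def reward(snr_db, action):
    # Spectral efficiency if the link supports the chosen modulation, else a penalty.
    return BITS[action] if snr_db >= THRESH[action] else -1.0

for step in range(20000):
    snr_db = rng.uniform(0, 25)                      # fluctuating channel (e.g. body movement)
    state = min(int(snr_db / 25 * SNR_STATES), SNR_STATES - 1)
    action = rng.integers(len(MODS)) if rng.random() < eps else int(np.argmax(q[state]))
    r = reward(snr_db, action)
    next_state = state                               # SNR sampled i.i.d. in this toy setup
    q[state, action] += alpha * (r + gamma * q[next_state].max() - q[state, action])

print("Learned modulation per SNR state:", [MODS[int(a)] for a in np.argmax(q, axis=1)])
```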