Search Results

Now showing 1 - 2 of 2
  • Publication
    Assessing dyslexia with machine learning: a pilot study utilizing Google ML Kit
    (IEEE, 2023-12-19) Eroğlu, Günet; Harb, Mhd Raja Abou
    In this study, we explore the application of Google ML Kit, a machine learning development kit, for dyslexia detection in the Turkish language. We collected face-tracking data from two groups: 49 dyslexic children and 22 typically developing children. We then compared the dyslexia-detection performance of Google ML Kit against other machine learning algorithms trained on the eye-tracking data. Our findings reveal that Google ML Kit achieved the highest accuracy among the tested methods. This study underscores the potential of machine learning-based dyslexia detection and its practicality in academic and clinical settings.
  • Publication
    Secure and interpretable dyslexia detection using homomorphic encryption and SHAP-based explanations
    (Institute of Electrical and Electronics Engineers Inc., 2025-10-25) Harb, Mhd Raja Abou; Çeliktaş, Barış; Eroğlu, Günet
    Protecting sensitive healthcare data during machine learning inference is critical, particularly in cloud-based environments. This study addresses the privacy and interpretability challenges in dyslexia detection using Quantitative EEG (QEEG) data. We propose a privacy-preserving framework utilizing Homomorphic Encryption (HE) to securely perform inference with an Artificial Neural Network (ANN). Due to the incompatibility of non-linear activation functions with encrypted arithmetic, we employ a dedicated approximation strategy. To ensure model interpretability without compromising privacy, SHapley Additive exPlanations (SHAP) are computed homomorphically and decrypted client-side. Experimental evaluations demonstrate that the encrypted inference achieves an accuracy of 90.03% and an AUC of 0.8218, reflecting only minor performance degradation compared to plaintext inference. SHAP value comparisons (Spearman correlation = 0.59) validate the reliability of the encrypted explanations. These results confirm that integrating privacy-preserving and explainable AI approaches is feasible for secure, ethical, and compliant healthcare deployments.
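    The approximation strategy mentioned in the abstract reflects a general constraint of homomorphic encryption: most HE schemes support only additions and multiplications on ciphertexts, so non-linear activations such as the sigmoid must be replaced by a low-degree polynomial. The paper does not specify its exact approximation; the sketch below illustrates the general idea with a hypothetical least-squares cubic fit of the sigmoid over a bounded input range, evaluated with only HE-friendly operations.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Fit a degree-3 polynomial to the sigmoid over a bounded range.
    # The range and degree here are illustrative assumptions, not the
    # paper's actual parameters.
    xs = np.linspace(-5.0, 5.0, 200)
    coeffs = np.polyfit(xs, sigmoid(xs), deg=3)

    def poly_sigmoid(x):
        # np.polyval uses Horner's scheme: only additions and
        # multiplications, so the same evaluation could run on
        # ciphertexts in an HE scheme.
        return np.polyval(coeffs, x)

    max_err = np.max(np.abs(poly_sigmoid(xs) - sigmoid(xs)))
    ```

    The residual `max_err` is one way to see why encrypted inference shows the "minor performance degradation" the abstract reports: each activation is computed only approximately, and the error grows outside the fitted range.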