Search Results

Now showing 1 - 2 of 2
  • Publication
    Cross-layer ransomware detection framework for SDN using HMM, LSTM, and Bayesian inference
    (Institute of Electrical and Electronics Engineers Inc., 2025-08-28) Serter, Cemal Emre; Çeliktaş, Barış
    Ransomware continues to pose a serious threat to endpoint computers and network systems alike, especially in Software-Defined Networking (SDN) environments, where programmability and centralized control open novel attack surfaces. In this paper, a cross-layer ransomware detection model is introduced that integrates host-based behavioral modeling using Hidden Markov Models (HMM), flow-level anomaly detection using Long Short-Term Memory (LSTM) networks, and probabilistic fusion through Bayesian inference. By correlating host- and SDN-layer anomalies, the system enhances early-stage detection and reduces false positives. A variational Bayesian approximation technique is used to stabilize decision scores under ambiguous conditions. The model is evaluated on recent ransomware datasets and achieves F1-scores between 97.5% and 99.92% across three benchmark datasets, with detection latency under 50 ms. The hybrid framework offers a promising direction for real-time threat detection in resilient programmable networks.
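    The core fusion idea in the abstract — combining a host-layer (HMM) score with a flow-layer (LSTM) score through Bayesian inference — can be sketched in a few lines. This is an illustrative naive-Bayes-style combination assuming the two detectors are conditionally independent given the class and emit calibrated probabilities; it is not the authors' exact inference procedure, and the `prior` base rate is a hypothetical value.

    ```python
    def bayes_fuse(p_host: float, p_flow: float, prior: float = 0.01) -> float:
        """Fuse two calibrated anomaly probabilities into one posterior.

        p_host: P(ransomware | host-layer evidence), e.g. from an HMM.
        p_flow: P(ransomware | flow-layer evidence), e.g. from an LSTM.
        prior:  assumed base rate of ransomware activity (illustrative).
        """
        # Convert each detector score to a likelihood ratio and multiply
        # in odds space; conditional independence is assumed.
        odds = (prior / (1.0 - prior)) \
             * (p_host / (1.0 - p_host)) \
             * (p_flow / (1.0 - p_flow))
        return odds / (1.0 + odds)
    ```

    Two detectors that agree reinforce each other (the fused posterior exceeds either score for a neutral prior), while disagreeing detectors pull the result back toward the prior — which is the mechanism the abstract credits with reducing false positives.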
  • Publication
    Secure and interpretable dyslexia detection using homomorphic encryption and SHAP-based explanations
    (Institute of Electrical and Electronics Engineers Inc., 2025-10-25) Harb, Mhd Raja Abou; Çeliktaş, Barış; Eroğlu, Günet
    Protecting sensitive healthcare data during machine learning inference is critical, particularly in cloud-based environments. This study addresses the privacy and interpretability challenges in dyslexia detection using Quantitative EEG (QEEG) data. We propose a privacy-preserving framework utilizing Homomorphic Encryption (HE) to securely perform inference with an Artificial Neural Network (ANN). Due to the incompatibility of non-linear activation functions with encrypted arithmetic, we employ a dedicated approximation strategy. To ensure model interpretability without compromising privacy, SHapley Additive exPlanations (SHAP) are computed homomorphically and decrypted client-side. Experimental evaluations demonstrate that the encrypted inference achieves an accuracy of 90.03% and an AUC of 0.8218, reflecting only minor performance degradation compared to plaintext inference. SHAP value comparisons (Spearman correlation = 0.59) validate the reliability of the encrypted explanations. These results confirm that integrating privacy-preserving and explainable AI approaches is feasible for secure, ethical, and compliant healthcare deployments.
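    The abstract notes that non-linear activations are incompatible with encrypted arithmetic, since HE schemes support only addition and multiplication. One common workaround, shown here as a minimal sketch, is to replace the activation with a low-degree polynomial fitted over the expected input range; the paper's "dedicated approximation strategy" may differ, and the degree and interval below are illustrative assumptions.

    ```python
    import numpy as np

    def fit_activation_poly(degree: int = 3, lo: float = -5.0, hi: float = 5.0,
                            n: int = 1000) -> np.poly1d:
        """Least-squares polynomial substitute for the sigmoid activation.

        The resulting polynomial uses only additions and multiplications,
        so it can be evaluated under homomorphic encryption.
        """
        xs = np.linspace(lo, hi, n)
        sigmoid = 1.0 / (1.0 + np.exp(-xs))
        coeffs = np.polyfit(xs, sigmoid, degree)  # highest-degree term first
        return np.poly1d(coeffs)

    poly = fit_activation_poly()
    xs = np.linspace(-5.0, 5.0, 1000)
    max_err = float(np.max(np.abs(poly(xs) - 1.0 / (1.0 + np.exp(-xs)))))
    ```

    The approximation error of such a substitute is one source of the small accuracy gap the abstract reports between encrypted and plaintext inference.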