Search Results
Showing 1 - 4 of 4
Publication
Efficient estimation of Sigmoid and Tanh activation functions for homomorphically encrypted data using Artificial Neural Networks (Institute of Electrical and Electronics Engineers Inc., 2024)
Harb, Mhd Raja Abou; Çeliktaş, Barış
This paper presents a novel approach to estimating Sigmoid and Tanh activation functions using Artificial Neural Networks (ANN) optimized for homomorphic encryption. The proposed method is compared against second-degree polynomial and piecewise linear approximations, demonstrating a minor loss in accuracy while maintaining computational efficiency. Our results suggest that the ANN-based estimator is a viable alternative for secure machine learning models requiring privacy-preserving computation.

Publication
Privacy-preserving, centralized, hybrid movie recommendation system: an approach for the Internet of Vehicles (Institute of Electrical and Electronics Engineers Inc., 2025-08-15)
Şimşek, Musa; Tüysüz Erman, Ayşegül
This study presents a hybrid recommendation model with differential privacy support, aiming to improve recommendation accuracy while preserving the privacy of user data. The model architecture combines Matrix Factorization (MF), Multi-Layer Perceptron (MLP), and Long Short-Term Memory (LSTM) networks. Differential privacy is ensured during training through Laplace-mechanism noise injection, and hyperparameter optimization is also applied. The model is evaluated on the MovieLens 100K dataset of user-movie interactions. Performance is assessed with the MSE, MAE, and NDCG metrics; hyperparameter optimization yields roughly a 4% improvement in MSE, while at a high privacy level accuracy degrades by roughly 39%.

Publication
Privacy-preserving cyber threat intelligence: a framework combining private information retrieval, federated learning, and differential privacy (Institute of Electrical and Electronics Engineers Inc., 2025-09-21)
Çamalan, Emre; Çeliktaş, Barış
Threat Intelligence Platforms (TIPs) are essential for sharing indicators of compromise (IoCs), but querying them can leak sensitive organizational data. We propose a privacy-preserving framework that combines Private Information Retrieval (PIR), Federated Learning (FL), and Differential Privacy (DP) to mitigate this risk. Our approach addresses both content-level and metadata-level privacy concerns while supporting collaborative learning across organizations. It ensures that sensitive query patterns remain hidden, local threat data never leaves organizational boundaries, and model updates are protected against inference attacks. The framework integrates with existing TIPs such as MISP and OpenCTI, requiring minimal operational changes. We implement a prototype using a simulated Abuse IP dataset and evaluate it on latency, accuracy, and communication overhead. The system supports private queries in under 300 ms and maintains over 95% model accuracy under DP noise. These results indicate that strong privacy can be achieved with minimal performance trade-offs, making the approach viable for real-world CTI environments.

Publication
Secure and interpretable dyslexia detection using homomorphic encryption and SHAP-based explanations (Institute of Electrical and Electronics Engineers Inc., 2025-10-25)
Harb, Mhd Raja Abou; Çeliktaş, Barış; Eroğlu, Günet
Protecting sensitive healthcare data during machine learning inference is critical, particularly in cloud-based environments.
This study addresses the privacy and interpretability challenges in dyslexia detection using Quantitative EEG (QEEG) data. We propose a privacy-preserving framework utilizing Homomorphic Encryption (HE) to securely perform inference with an Artificial Neural Network (ANN). Due to the incompatibility of non-linear activation functions with encrypted arithmetic, we employ a dedicated approximation strategy. To ensure model interpretability without compromising privacy, SHapley Additive exPlanations (SHAP) are computed homomorphically and decrypted client-side. Experimental evaluations demonstrate that the encrypted inference achieves an accuracy of 90.03% and an AUC of 0.8218, reflecting only minor performance degradation compared to plaintext inference. SHAP value comparisons (Spearman correlation = 0.59) validate the reliability of the encrypted explanations. These results confirm that integrating privacy-preserving and explainable AI approaches is feasible for secure, ethical, and compliant healthcare deployments.
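The first and fourth listings both rely on replacing non-linear activations with forms that encrypted arithmetic can evaluate, since homomorphic encryption supports only additions and multiplications. A minimal sketch of the baseline idea those papers compare against (a least-squares low-degree polynomial fit of the sigmoid), not the papers' ANN-based estimator:

```python
import numpy as np

# Fit a degree-2 polynomial to the sigmoid over [-5, 5] by least squares.
# Under homomorphic encryption only + and * are available, so a low-degree
# polynomial is an HE-friendly stand-in for the exact activation.
# Interval and degree here are illustrative choices, not the papers' settings.
x = np.linspace(-5.0, 5.0, 1001)
sigmoid = 1.0 / (1.0 + np.exp(-x))

coeffs = np.polyfit(x, sigmoid, deg=2)   # highest-degree coefficient first
approx = np.polyval(coeffs, x)

max_err = np.max(np.abs(approx - sigmoid))
print(f"degree-2 max abs error on [-5, 5]: {max_err:.4f}")
```

By the odd symmetry of the sigmoid about (0, 0.5), the fitted quadratic term is near zero, so the approximation is effectively affine on a symmetric interval; higher-degree fits trade extra encrypted multiplications for lower error, which is the tension the listed papers address.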
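The second listing injects Laplace-mechanism noise during training to obtain differential privacy. A minimal sketch of the Laplace mechanism itself, with illustrative values (the statistic, sensitivity, and epsilon below are hypothetical, not taken from the study):

```python
import numpy as np

# Laplace mechanism: to release f(D) with epsilon-differential privacy,
# add noise drawn from Laplace(scale = sensitivity / epsilon), where
# 'sensitivity' bounds how much f can change when one record changes.
def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return value + noise

rng = np.random.default_rng(42)
true_stat = 3.7   # hypothetical aggregate statistic
private_stat = laplace_mechanism(true_stat, sensitivity=0.1, epsilon=0.5, rng=rng)
print(private_stat)
```

Smaller epsilon (stronger privacy) means a larger noise scale, which is the mechanism behind the accuracy degradation at high privacy levels that the study reports.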
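The third listing uses Private Information Retrieval so that a query for an IoC does not reveal which record was requested. A toy 2-server XOR-based PIR sketch conveys the core idea; real TIP deployments use considerably more sophisticated PIR protocols, and the database and indices below are made up for illustration:

```python
import secrets

# 2-server PIR: the client sends each non-colluding server a random-looking
# bit mask; the masks differ only at the wanted index, so neither server
# learns which record is being retrieved.
def pir_query(db_size, index):
    mask_a = [secrets.randbelow(2) for _ in range(db_size)]
    mask_b = mask_a.copy()
    mask_b[index] ^= 1          # flip only the target position
    return mask_a, mask_b

def pir_answer(db, mask):
    # Each server XORs together every record whose mask bit is 1.
    out = 0
    for record, bit in zip(db, mask):
        if bit:
            out ^= record
    return out

db = [0x1F, 0x2A, 0x3C, 0x4D]   # toy "IoC" database of integers
q_a, q_b = pir_query(len(db), 2)
recovered = pir_answer(db, q_a) ^ pir_answer(db, q_b)
assert recovered == db[2]        # XOR of the two answers isolates item 2
```

Because the two answers agree everywhere except the flipped position, their XOR is exactly the requested record, while each mask on its own is uniformly random.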












