Search Results

Showing 1 - 3 of 3
  • Publication
    Relationships among organizational-level maturities in artificial intelligence, cybersecurity, and digital transformation: a survey-based analysis
    (Institute of Electrical and Electronics Engineers Inc., 2025-05-19) Kubilay, Burak; Çeliktaş, Barış
    The rapid development of digital technology across industries has highlighted the growing need for enhanced competencies in Artificial Intelligence (AI), Cybersecurity (CS), and Digital Transformation (DT). While there is extensive research on each of these domains in isolation, few studies have investigated their relationships and joint impact on organizational maturity. This study aims to address this gap by analyzing the relationships among the maturity levels of AI, CS, and DT at the organizational level using Structural Equation Modeling (SEM) and descriptive statistical methods. A mixed-methods design combines quantitative survey data with synthetic modeling techniques to assess organizational preparedness. The findings demonstrate significant bidirectional correlations among AI, CS, and DT maturity, with the technology and finance sectors showing greater maturity than the government and education sectors. The research highlights the necessity of an integrated AI-CS strategy and provides actionable recommendations to increase investments in these domains. In contrast to previous fragmented evaluations, this study establishes a comprehensive, empirically grounded framework that acts as a strategic reference point for digital resilience. Follow-up studies will involve collecting real-world industry data to support empirical validation and improve predictive power in measuring AI and CS maturity. This research adds to the existing literature by bridging the gaps among fragmented digital maturity models and providing a consistent empirical base for organizations to thrive in an evolving technological environment.
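The pairwise correlation analysis described in the abstract can be sketched in Python. The Likert-style maturity scores below are synthetic stand-ins, not the study's survey data, and the variable construction is an illustrative assumption:

```python
# Illustrative only: pairwise Pearson correlations among synthetic
# AI, CS, and DT maturity scores (1-5 Likert-style responses).
import numpy as np

rng = np.random.default_rng(0)
ai = rng.integers(1, 6, size=100).astype(float)     # AI maturity scores
cs = np.clip(ai + rng.normal(0, 1, 100), 1, 5)      # CS maturity, correlated with AI
dt = np.clip(ai + rng.normal(0, 1, 100), 1, 5)      # DT maturity, correlated with AI

corr = np.corrcoef(np.vstack([ai, cs, dt]))         # 3x3 correlation matrix
print(np.round(corr, 2))  # off-diagonal entries estimate AI-CS, AI-DT, CS-DT links
```

On real survey data the same `np.corrcoef` call would report the strength of the bidirectional links the study describes; SEM would then model their structure.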
  • Publication
    A metric-driven IT risk scoring framework: incorporating contextual and organizational factors
    (Institute of Electrical and Electronics Engineers Inc., 2025-09-24) Ünal, Nezih Mahmut; Çeliktaş, Barış
    Risk analysis is a critical process for organizations seeking to manage their cybersecurity posture effectively. However, traditional risk analysis frameworks, such as the Common Vulnerability Scoring System (CVSS), primarily evaluate technical impacts without incorporating organizational context and dynamic risk factors. This paper presents a metric-based risk analysis framework designed to provide more adaptable and context-aware risk scoring. The proposed model enables risk owners to define customized threat scenarios and dynamically adjust metric weights based on organizational needs. Unlike traditional approaches, our method integrates contextual parameters to improve the accuracy and relevance of risk calculations. Experimental evaluations demonstrate that the proposed framework enhances risk prioritization and provides more actionable insights for decision-makers. This study contributes to the field by addressing the limitations of existing risk analysis models and offering a systematic approach for cybersecurity risk management.
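As a minimal sketch of the idea, not the paper's actual metric schema: the metric names, weights, and context factor below are hypothetical, but they illustrate a weighted, context-adjusted score with risk-owner-adjustable weights:

```python
# Hypothetical sketch of metric-driven, context-aware risk scoring.
# Metric names, weights, and the context factor are illustrative
# assumptions, not the framework's actual schema.

def risk_score(metrics, weights, context_factor=1.0):
    """Weighted average of 0-10 metric values, scaled by organizational context."""
    total = sum(weights.values())
    base = sum(metrics[m] * weights[m] / total for m in metrics)  # normalize weights
    return min(10.0, base * context_factor)                       # cap at 10.0

# A risk owner tunes the weights and context factor for their organization.
metrics = {"exploitability": 8.0, "impact": 6.0, "exposure": 4.0}
weights = {"exploitability": 0.5, "impact": 0.3, "exposure": 0.2}
print(round(risk_score(metrics, weights, context_factor=1.2), 2))  # → 7.92
```

Raising `context_factor` for a critical-infrastructure deployment, or re-weighting toward `exposure` for an internet-facing asset, is the kind of contextual adjustment the abstract describes.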
  • Publication
    Automating cyber risk assessment with public LLMs: an expert-validated framework and comparative analysis
    (Institute of Electrical and Electronics Engineers Inc., 2026-03-26) Ünal, Nezih Mahmut; Çeliktaş, Barış
    Traditional cyber risk assessment methodologies face a critical dilemma: they are either quantitative yet static and context-agnostic (e.g., CVSS), or context-aware yet highly labor-intensive and subjective (e.g., NIST SP 800-30). Consequently, organizations struggle to scale risk assessment to match the pace of evolving threats. This paper presents an automated, context-aware risk assessment framework that leverages the reasoning capabilities of publicly available Large Language Models (LLMs) to operationalize expert knowledge. Rather than positioning the LLM as the final decision-maker, the framework decouples semantic interpretation from risk-scoring authority through a transparent, deterministic Dynamic Metric Engine. Unlike complex closed-box machine learning models, our approach anchors the AI's reasoning to this expert-validated metric schema, with weights derived using the Rank Order Centroid (ROC) method from a survey of 101 cybersecurity professionals. We evaluated the framework through a comparative study involving 15 diverse real-world vulnerability scenarios (C1-C15) and three supplementary sensitivity stress tests (C16-C18). The validation scenarios were independently assessed by a cohort of ten senior human experts and two state-of-the-art LLM agents (GPT-4o and Gemini 2.0 Flash). The results show that the LLM-driven agents achieve scoring consistency closely aligned with the human median (Pearson r ranging from 0.9390 to 0.9717, Spearman ρ from 0.8472 to 0.9276) against a highly reliable expert baseline (Cronbach's α = 0.996), while reducing the assessment cycle time by more than 100× (averaging under 4 seconds per case vs. a human average of 6 minutes). Furthermore, a dedicated context sensitivity analysis (C13-C15) indicates that the framework adapts risk scores based on organizational context (e.g., SME vs. Critical Infrastructure) for identical technical vulnerabilities. Importantly, the system is designed not merely to replicate expert intuition, but to enforce bounded, policy-consistent risk evaluation under predefined governance constraints. Overall, these findings suggest that commercially available LLMs, when constrained by expert-validated metric schemas, can support reproducible, transparent, and real-time risk assessments.
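The Rank Order Centroid weighting mentioned in the abstract has a closed form: for n criteria ranked by importance, the weight of the i-th ranked criterion is w_i = (1/n) Σ_{k=i}^{n} 1/k. A short sketch (the ranked metric names are illustrative assumptions, not the framework's actual schema):

```python
# Rank Order Centroid (ROC) weights: w_i = (1/n) * sum_{k=i..n} 1/k,
# where rank 1 is the most important criterion. Weights sum to 1.

def roc_weights(n):
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

# Illustrative metric ranking (not the paper's actual metrics).
ranked = ["impact", "exploitability", "exposure", "detectability"]
for name, w in zip(ranked, roc_weights(len(ranked))):
    print(f"{name}: {w:.4f}")  # 0.5208, 0.2708, 0.1458, 0.0625
```

ROC turns an ordinal ranking (the kind a survey of professionals can reliably produce) into normalized cardinal weights, which is why it suits eliciting a metric schema from 101 ranked expert responses.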