Digital Shadow AI Risk Theoretical Framework (DART): Managing Data Disclosure and Privacy Risks of AI Tools at Work
Abstract
The accelerated integration of generative and agentic AI tools, particularly systems such as ChatGPT, into workplace settings has introduced complex challenges concerning data governance, regulatory compliance, and organizational privacy (GDPR 2016; CCPA/CPRA). This study introduces the Digital Shadow AI Risk Theoretical Framework (DART), a novel theoretical framework designed to systematically identify, classify, and address the latent risks arising from the widespread, and often unregulated, use of AI systems in professional environments (NIST, 2023; OECD AI Policy Observatory, 2023). DART comprises six interrelated constructs developed in this study: Unintentional Disclosure Risk, Trust-Dependence Paradox, Data Sovereignty Conflict, Knowledge Dilution Phenomenon, Ethical Black Box Problem, and Organizational Feedback Loops. Each construct captures a distinct dimension of risk that emerges as organizations increasingly rely on AI-driven tools for knowledge work and decision-making.
The framework is empirically tested through a mixed-methods research design involving hypothesis testing and statistical analysis of behavioral data gathered from cross-sectional surveys of industry professionals. Two cross-industry surveys (Survey-1: 416 responses, 374 analyzed; Survey-2: 203 responses, 179 analyzed) analyzed with covariance-based structural equation modeling (CB-SEM) supported seven of eight hypotheses; H4 (Data Sovereignty Conflict) was not significant, and H7 (Knowledge Dilution Phenomenon) was confirmed in the replication survey. The findings highlight critical gaps in employee training, policy awareness, and risk mitigation, underscoring the urgent need for updated governance frameworks, comprehensive AI-use policies, and targeted educational interventions. The paper contributes to emerging scholarship by offering a robust model for understanding and mitigating digital risks in AI-enabled workplaces, with practical implications for compliance officers, risk managers, and organizational leaders seeking to harness generative AI responsibly and securely. The novelty of DART lies in its explicit theorization of workplace-level behavioral risks, especially Shadow AI, which, unlike Shadow IT, externalizes organizational knowledge into adaptive systems, thereby offering a unified framework that bridges fragmented literatures and grounds them in empirical evidence.