Surge in Data Violations Linked to Generative AI Usage

The rise of generative AI is driving a sharp increase in data policy violations, with incidents more than doubling over the past year. According to Netskope Threat Labs’ Cloud and Threat Report: 2026, the average organization now records approximately 223 incidents per month of users uploading sensitive data to AI applications. At the most affected organizations, that figure climbs to 2,100 incidents.

The report highlights that user uploads of regulated data, including personal, financial, and healthcare information, account for the majority of violations, at 54% of incidents. Much of the problem stems from the persistence of shadow AI, in which employees use personal generative AI accounts. Although reliance on personal accounts has declined, 47% of generative AI users still access tools through unmanaged accounts, either exclusively or alongside company-approved applications.

Personal applications also pose a substantial insider risk, featuring in approximately 60% of insider threat incidents. These apps often serve as the channel for unauthorized transfers of regulated data, intellectual property, source code, and credentials, resulting in violations of organizational policy.

According to Ray Canzanese, director of Netskope Threat Labs, “Enterprise security teams exist in a constant state of change and new risks as organizations evolve and adversaries innovate. However, genAI adoption has shifted the goal posts. It represents a risk profile that has taken many teams by surprise in its scope and complexity, so much so that it feels like they are struggling to keep pace and losing sight of some security basics.”

The report also documents staggering growth: the number of generative AI users has increased by 200% over the last year, and the total volume of prompts sent to generative AI tools has surged by 500%, with monthly prompts rising from an average of 3,000 to 18,000 per organization.

In response to these rising risks, 90% of organizations now actively block at least one generative AI application, a notable increase from 80% the previous year; on average, organizations block around ten tools. Netskope recommends that organizations map how sensitive information flows, including through personal app usage.
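
In practice, gateway-level blocking often amounts to a denylist check on outbound traffic. The sketch below illustrates the idea in Python; the blocked domains and the BLOCKED_AI_APPS set are hypothetical examples, not applications named in the report or any Netskope configuration.

```python
# Hypothetical sketch of gateway-style app blocking: connections to
# generative AI apps on a denylist are refused. The domains below are
# invented placeholders, not real services or data from the report.
BLOCKED_AI_APPS = {
    "unvetted-chatbot.example",
    "free-image-gen.example",
}

def is_allowed(destination_host: str) -> bool:
    """Gateway decision: deny connections to blocked generative AI apps."""
    return destination_host not in BLOCKED_AI_APPS

print(is_allowed("unvetted-chatbot.example"))    # False -> connection blocked
print(is_allowed("approved-copilot.example"))    # True  -> approved app allowed
```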

Additionally, the report advises implementing controls to log and manage user activity across all cloud services, so that data movements are tracked and consistent policies are enforced across both managed and unmanaged services. Detailed logs are also essential for demonstrating compliance with data protection standards.
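
What such logging and enforcement might look like is sketched below: each upload to a generative AI app is written to an audit log, and regulated data headed to an unmanaged app is flagged as a violation. The event structure, app names, and data categories here are assumptions made for illustration, not a vendor API.

```python
# Hypothetical sketch of the logging-and-policy pattern the report describes:
# record every upload to a generative AI app, flag regulated data sent to
# unmanaged apps. All names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("ai-dlp")

MANAGED_APPS = {"chatgpt-enterprise", "copilot-corp"}           # company-approved
REGULATED_CATEGORIES = {"personal", "financial", "healthcare"}  # per the report

@dataclass
class UploadEvent:
    user: str
    app: str            # destination generative AI application
    data_category: str  # classification assigned by an upstream DLP scanner

def record_and_enforce(event: UploadEvent) -> bool:
    """Log the event for compliance; return True if it violates policy."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": event.user,
        "app": event.app,
        "category": event.data_category,
        "managed": event.app in MANAGED_APPS,
    }
    log.info(json.dumps(entry))  # detailed audit trail for compliance

    # Policy: regulated data may only flow to managed, approved AI apps.
    violation = (event.data_category in REGULATED_CATEGORIES
                 and event.app not in MANAGED_APPS)
    if violation:
        log.warning("policy violation: %s -> %s (%s)",
                    event.user, event.app, event.data_category)
    return violation

if __name__ == "__main__":
    record_and_enforce(UploadEvent("alice", "personal-chatbot", "healthcare"))
```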

The increasing use of agentic AI systems presents a vast new attack surface, necessitating a fundamental re-evaluation of security perimeters and trust models. To address this evolving threat landscape, organizations should integrate agentic AI monitoring into their risk assessments, which includes mapping the tasks these systems perform and ensuring they operate within defined governance frameworks.
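
One lightweight way to approach that mapping is to compare the actions an agent actually performs against an approved scope, as in the hypothetical sketch below; the agent names, action names, and the GOVERNANCE_SCOPE registry are all invented for illustration.

```python
# Hypothetical sketch of agentic AI governance: each agent has an approved
# scope of actions, and anything observed outside that scope is surfaced
# for risk review. Names are illustrative, not drawn from the report.
GOVERNANCE_SCOPE = {
    "report-summarizer": {"read_documents", "draft_summary"},
    "ticket-triage-bot": {"read_tickets", "assign_labels"},
}

def review_agent(agent: str, observed_actions: set[str]) -> set[str]:
    """Return observed actions that fall outside the agent's approved scope."""
    approved = GOVERNANCE_SCOPE.get(agent, set())  # unknown agents: nothing approved
    return observed_actions - approved

# An agent that starts exporting data has stepped outside its governance scope.
out_of_scope = review_agent(
    "ticket-triage-bot",
    {"read_tickets", "assign_labels", "export_customer_data"},
)
print(out_of_scope)  # {'export_customer_data'} -> flag in the risk assessment
```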

Canzanese emphasizes the need for security teams to adopt an “AI-aware” approach, stating, “Security teams need to expand their security posture to be ‘AI-aware’, evolving policy and expanding the scope of existing tools like DLP, to foster a balance between innovation and security at all levels.”

As the landscape of data security continues to evolve, organizations must adapt their strategies to mitigate the growing risks associated with generative AI technologies.
