Gen AI
We've curated 125 cybersecurity statistics about Gen AI to help you understand how generative artificial intelligence is shaping threat landscapes, enhancing security practices, and influencing detection technologies in 2025.
26.3% of employee ChatGPT use occurred via personal accounts.
68% of security leaders state that their boards now view the secure deployment of generative AI as a critical priority.
535 separate incidents of sensitive exposure were recorded involving Chinese GenAI tools.
7.95% of employees in the average enterprise used a Chinese GenAI tool.
Code leakage was the most common type of sensitive data sent to GenAI tools.
The average enterprise uploaded 1.32GB of files (half of which were PDFs) to GenAI tools and AI-enabled SaaS applications in Q2. A full 21.86% of these files contained sensitive data.
LLMs failed to secure code against log injection (CWE-117) in 88% of cases.
LLMs failed to secure code against cross-site scripting (CWE-80) in 86% of cases.
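For readers unfamiliar with these two weakness classes, the sketch below shows what they look like in practice and how each is typically neutralized. This is a minimal illustration of the vulnerability categories, not code from any of the cited studies; the function names are invented for the example.

```python
import html
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger(__name__)


def log_login_unsafe(username: str) -> None:
    # CWE-117 (log injection): attacker-controlled newlines let one request
    # forge extra log entries, e.g. username = "alice\nINFO admin login OK"
    log.info("Login attempt for " + username)


def sanitize_for_log(value: str) -> str:
    # Mitigation: neutralize CR/LF so each event produces exactly one log line
    return value.replace("\r", "\\r").replace("\n", "\\n")


def log_login_safe(username: str) -> None:
    log.info("Login attempt for %s", sanitize_for_log(username))


def render_greeting(username: str) -> str:
    # CWE-80 (basic XSS): mitigation is to HTML-escape user input before
    # embedding it in markup, so "<script>" renders as text, not code
    return "<p>Hello, %s!</p>" % html.escape(username)
```

The studies above report that LLM-generated code frequently omits exactly these neutralization steps (the `sanitize_for_log` and `html.escape` calls), emitting the unsafe variants instead.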
AI-generated code introduces security vulnerabilities in 45% of cases.
When given a choice between a secure and insecure method to write code, GenAI models chose the insecure option 45% of the time.
In 45% of all test cases, LLMs introduced vulnerabilities classified within the OWASP Top 10.
Java was the riskiest language for AI code generation, with a security failure rate of over 70%. Other major languages, including Python, C#, and JavaScript, also presented significant risk, with failure rates between 38% and 45%.
Organisations that implement light-touch guardrails and nudges, rather than blanket blocking of Chinese GenAI tools, have seen up to a 72% reduction in sensitive data exposure, while increasing AI adoption by as much as 300%.
Customer data represented 12.0% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Legal documents made up 4.9% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Among the 1,059 users who engaged with Chinese GenAI tools, there were 535 incidents of sensitive data exposure.
The majority of sensitive data exposure (roughly 85%) due to the use of Chinese GenAI tools occurred via DeepSeek, followed by Moonshot Kimi, Qwen, Baidu Chat and Manus.
Financial information accounted for 14.4% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Personally identifiable information (PII) comprised 17.8% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Roughly 1 in 12 employees (7.95%) used at least one Chinese GenAI tool at work.