Gen AI
We've curated 125 cybersecurity statistics about Gen AI to help you understand how generative artificial intelligence is shaping threat landscapes, enhancing security practices, and influencing detection technologies in 2025.
28% of healthcare executives say they are likely to invest in generative AI for social engineering attacks.
Enterprise PCs are logging millions of visits to popular generative AI platforms, with thousands of those visits landing specifically on DeepSeek.
Manufacturers investing in generative and causal AI increased 12% year-over-year.
29% of SOCs are using GenAI for writing and editing security policies.
DNSFilter blocked over 60 million generative AI requests in March.
Security for generative AI has quickly risen as a top spending priority, securing the second spot in ranked-choice voting, just behind cloud security.
Since January 2024, DNSFilter has been processing a monthly average of over 330 million queries that fall under the generative AI category.
These blocked requests represented about 12% of all generative AI queries DNSFilter processed in March.
There was a 92% decrease in malicious and fake ChatGPT and other generative AI sites between April 2024 and April 2025.
There was a 2,000% rise in malicious sites containing "openai" in their name between April 2024 and April 2025.
Nearly 70% of organizations identify AI's fast-moving ecosystem, particularly generative AI, as their top GenAI-related security risk.
A third of respondents indicate that GenAI is either being integrated into their operations or is actively transforming them.
31% of SOCs are using GenAI for querying security data.
In March, Notion accounted for 93% of all blocked generative AI queries, far more than Microsoft Copilot, SwishApps, Quillbot, and OpenAI combined.
33% of SOCs are using GenAI for threat intelligence analysis.
60% of organizations lack confidence in detecting unregulated AI deployments (shadow AI).
51% of employees are using approved third-party GenAI tools.
22% of employees have unrestricted access to public GenAI.
60% of IT teams are unaware of employee interactions with GenAI.