We've curated 9,942 cybersecurity statistics about agentic AI to help you understand how autonomous AI systems are transforming threat detection and response in 2025, enhancing security practices while also introducing new risks to navigate.
Threats on the DNSFilter network grew by 30% between October 2024 and September 2025.
84% of organizations doubt they can pass a compliance audit focused on agent behavior or access controls.
In 2025, a malicious email attack occurs every 19 seconds, more than doubling from 2024’s pace of one every 42 seconds.
Abuse of legitimate remote access tools increased 900% in volume.
A major U.S. healthcare provider has experienced over 15,000 unique bot fraud calls since summer 2025.
24.9% of Gemini usage occurs through personal accounts.
The top 1% of early adopter organizations use more than 300 GenAI tools.
Across the top 100 most-used GenAI SaaS applications, 82% are classified as medium, high, or critical risk.
32.3% of ChatGPT usage occurs through personal accounts.
Only 6% of developers use AI coding assistants in the lowest-adoption environments.
39.7% of all data movements into AI tools involve sensitive data, including prompts or copy-paste actions.
In companies leading in AI adoption, nearly 90% of developers use AI coding assistants.
Cautious enterprises typically employ fewer than 15 GenAI tools.
Developers at frontier companies are 11.5× more likely to use AI coding assistants than developers in low-adoption environments.
In the later months of 2025, 30% of developers using AI coding assistants reported using at least two assistants.
The average employee enters sensitive data into AI tools once every three days.
In a typical organization, about 50% of developers use AI coding assistants.
67% of banks are implementing AI.
64% of enterprise leaders in the UK view AI as posing little or no threat to networks.
75% of hackers report that hacking is now motivated more by money than by curiosity.