We've curated 9942 cybersecurity statistics about Agentic AI to help you understand how autonomous AI systems are revolutionizing threat detection and response in 2025, enhancing security practices while also introducing new risks to navigate.
Use of risk-ranking methods to determine where LLM-generated code is safe to deploy increased by 12%.
The DNSFilter network processed over 6 billion AI-related queries between October 2024 and September 2025.
DNSFilter blocked 44% more CSAM content in 2025 than in the previous year.
Humans detect AI-generated content only about 50% of the time.
In 2025, an AI agent placed in the top 5% of teams in a major cybersecurity competition.
Deepfake attacks increased by 880% in 2024.
GenAI traffic experienced a 102.13% month-over-month spike in September 2025.
82% of malicious files have unique hashes that traditional pattern-matching fails to detect.
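The hash statistic above reflects a basic property of cryptographic hashes: changing even a single byte of a file produces an entirely different digest, so a blocklist keyed on one sample's hash never matches the next mutated build. A minimal illustration in Python (the payload bytes are made up):

```python
import hashlib

# Two "malware samples" that differ by one byte, mimicking how
# attackers mutate each build to evade hash-based blocklists.
sample_a = b"MZ\x90\x00payload-v1"
sample_b = b"MZ\x90\x00payload-v2"

hash_a = hashlib.sha256(sample_a).hexdigest()
hash_b = hashlib.sha256(sample_b).hexdigest()

# A signature list containing hash_a will not flag sample_b,
# even though the two files are nearly identical.
print(hash_a == hash_b)  # False: a one-byte change yields a new hash
```

This is why per-sample uniqueness defeats traditional pattern matching and pushes detection toward behavioral and heuristic approaches.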
Credential phishing campaigns using .es domains increased 51-fold year-over-year, with the .es top-level domain jumping from the 56th to the 3rd most-abused TLD.
The average internet user encounters 66 threats per day, up from 29.
Even when explicitly warned that synthetic bots are common, 33% of study participants still shared sensitive information.
One in five developers grant AI agents permission for unrestricted file deletion, risking recursive wiping of a project or system.
14.5% of AI agent configuration files grant arbitrary code execution permissions for Python.
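Findings like the two above (unrestricted deletion, arbitrary code execution) can be audited mechanically from agent configuration files. A sketch of such a check, using a hypothetical JSON config whose schema and key names (`permissions`, `execute`, `delete`) are illustrative and not drawn from any specific agent framework:

```python
import json

# Hypothetical agent configuration; keys are made up for illustration.
config_text = """
{
  "agent": "build-helper",
  "permissions": {
    "execute": ["python"],
    "delete": "unrestricted"
  }
}
"""

# Map each permission name to a predicate that flags a risky grant.
RISKY = {
    "execute": lambda v: "python" in v,        # arbitrary code execution
    "delete": lambda v: v == "unrestricted",   # recursive file wiping
}

config = json.loads(config_text)
findings = [
    name for name, check in RISKY.items()
    if check(config["permissions"].get(name, []))
]
print(findings)  # ['execute', 'delete']
```

In a real audit the predicate table would be extended per framework, but the pattern of scanning declarative permission grants stays the same.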
Efforts to streamline responsible vulnerability disclosure grew by more than 40%.
Non-AI fraud increased by 195% by the end of 2025.
The number of organizations delivering expertise through open collaboration channels increased by 29%.
New domains make up over 65% of unique threat domains.
Nearly 60% of organizations report fraudsters using compromised Personally Identifiable Information (PII) to bypass knowledge-based authentication (KBA).
In Q4 2025, CEOs and senior executives accounted for 50% of impersonation-based BEC emails and 41% of total BEC incidents.
One in five developers grant AI code agents unrestricted access to perform high-risk actions without human oversight.