We've curated 9,942 cybersecurity statistics about Agentic AI to help you understand how autonomous AI systems are transforming threat detection and response in 2025, strengthening security practices while also introducing new risks to navigate.
63% of mid-sized AppSec teams (11–50 members) that use SCA cite the inability to verify if vulnerabilities are exploitable in production as their biggest pain point.
58% of large AppSec teams (50 members or more) that use SCA cite the inability to verify if vulnerabilities are exploitable in production as a major pain point.
93% of CISOs and AppSec executives are ready to replace existing tools with, or purchase, new AI-native application protection.
60% of ASPM platform users say issues are still ranked by theoretical severity instead of real exposure or exploitability.
16% of CISOs and AppSec executives want to consolidate the AppSec toolchain into one platform.
38% of small AppSec teams (1–10 members) that use SCA cite the inability to verify if vulnerabilities are exploitable in production as their biggest pain point.
71% of incidents in the Automotive and Smart Mobility ecosystem are attributed to black hat actors, up from 65% in 2024.
34% of incidents in the Automotive and Smart Mobility ecosystem focus on business and operational disruption.
20% of incidents in the Automotive and Smart Mobility ecosystem are massive-scale events.
67% of CISOs report limited visibility into how AI is used across their environment.
75% of CISOs report that their enterprises rely on extending controls originally designed for other attack surfaces to cover AI-driven workflows and infrastructure.
Only 11% of enterprise CISOs have security tools specifically designed to protect AI systems.
78% of enterprises fund AI security through existing security budgets.
27% of internal audit leaders view AI-enabled fraud as a high risk.
57% of internal audit functions currently assess control weaknesses that enable fraud.
26% of internal audit functions investigate and document AI's role in fraud incidents.
58% of internal audit leaders identify automated social engineering as a leading AI-enabled fraud threat.
28% of internal audit leaders are concerned about fabricated job applications or employee profiles created with AI.
27% of internal audit leaders are concerned about synthetic identity fraud enabled by AI.
57% of internal audit leaders identify a lack of appropriate technology or tools as a primary barrier to improving AI-enabled fraud preparedness.