
Agentic AI

We've curated 9,942 cybersecurity statistics about Agentic AI to help you understand how autonomous AI systems are transforming threat detection and response in 2025, enhancing security practices while also introducing new risks to navigate.

Showing 121-140 of 9942 results

63% of mid-sized AppSec teams (11–50 members) that use SCA cite the inability to verify if vulnerabilities are exploitable in production as their biggest pain point.

Rein Security, 2/22/2026
Application Security, SCA

58% of large AppSec teams (50 members or more) that use SCA cite the inability to verify if vulnerabilities are exploitable in production as a major pain point.

Rein Security, 2/22/2026
Application Security, SCA

93% of CISOs and AppSec executives are ready to replace existing tools with, or purchase new, AI-native application protection.

Rein Security, 2/22/2026
AI Security, AI-Native Application Protection

60% of ASPM platform users say issues are still ranked by theoretical severity instead of real exposure or exploitability.

Rein Security, 2/22/2026
Vulnerability Prioritization, ASPM

16% of CISOs and AppSec executives want to consolidate the AppSec toolchain into one platform.

Rein Security, 2/22/2026
Tool Consolidation, Application Security

38% of small AppSec teams (1–10 members) that use SCA cite the inability to verify if vulnerabilities are exploitable in production as their biggest pain point.

Rein Security, 2/22/2026
Application Security, SCA

71% of incidents in the Automotive and Smart Mobility ecosystem are attributed to black hat actors, up from 65% in 2024.

Upstream, 2/22/2026
Black Hat Actors, Threat Actors

34% of incidents in the Automotive and Smart Mobility ecosystem focus on business and operational disruption.

Upstream, 2/22/2026
Operational Disruption, Business Impact

20% of incidents in the Automotive and Smart Mobility ecosystem are massive-scale events.

Upstream, 2/22/2026
Mass-Scale Events, Automotive

67% of CISOs report limited visibility into how AI is used across their environment.

Pentera, 2/22/2026
AI Visibility, US

75% of CISOs report their enterprises rely on extending controls originally designed for other attack surfaces to cover AI-driven workflows and infrastructure.

Pentera, 2/22/2026
Legacy Systems, AI Security

11% of enterprise CISOs have security tools specifically designed to protect AI systems.

Pentera, 2/22/2026
Security Tools, AI Security

78% of enterprises fund AI security through existing security budgets.

Pentera, 2/22/2026
Security Budget, AI Security

27% of internal audit leaders view AI-enabled fraud as a high risk.

The Internal Audit Foundation and AuditBoard, 2/22/2026
Internal Audit, AI Fraud

57% of internal audit functions currently assess control weaknesses that enable fraud.

The Internal Audit Foundation and AuditBoard, 2/22/2026
Fraud Prevention, Internal Audit

26% of internal audit functions investigate and document AI's role in fraud incidents.

The Internal Audit Foundation and AuditBoard, 2/22/2026
Fraud Investigation, Internal Audit

58% of internal audit leaders identify automated social engineering as a leading AI-enabled fraud threat.

The Internal Audit Foundation and AuditBoard, 2/22/2026
Automated Social Engineering, AI Fraud

28% of internal audit leaders are concerned about fabricated job applications or employee profiles created with AI.

The Internal Audit Foundation and AuditBoard, 2/22/2026
Employment Fraud, AI Fraud

27% of internal audit leaders are concerned about synthetic identity fraud enabled by AI.

The Internal Audit Foundation and AuditBoard, 2/22/2026
Identity Fraud, AI Fraud

57% of internal audit leaders identify a lack of appropriate technology or tools as a primary barrier to improving AI-enabled fraud preparedness.

The Internal Audit Foundation and AuditBoard, 2/22/2026
AI Fraud, Organizational Barriers