
AI

We've curated 1475 cybersecurity statistics about AI to help you understand how machine learning algorithms, automated threat detection, and AI-driven defenses are shaping the landscape of cybersecurity in 2025.


80% of leaders are currently using, or planning to use, AI agents to defend against AI-driven cyberattacks in 2025.

Vanta, 11/1/2025
Agents

76% of organizations have a domain management strategy in place.

CSC, 11/1/2025
IP Infringement, Domain

28% of cybersecurity professionals in Germany reported that their organizations are fully prepared to handle AI-enhanced attacks in 2025.

Cision PR Newswire, 11/1/2025
Identity, Zero Trust

Only 16% of legal teams have total visibility into how their portfolios are managed.

CSC, 11/1/2025
IP Infringement

49% of organizations reported an increase in AI-generated phishing, 48% reported an increase in AI-powered malware, and 47% reported an increase in AI-driven identity theft or fraud in the past year.

Vanta, 11/1/2025
AI-powered threats

33% of respondents say AI agents have shared sensitive or inappropriate data.

10/29/2025
AI agents

60-70% of AI-generated code lacks deployment environment awareness, generating code that runs locally but fails in production.

OX Security, 10/23/2025
AI Risks
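As a hypothetical illustration of the pattern above (our example, not from the OX Security report), the gap often comes down to hardcoded local endpoints versus configuration read from the deployment environment. The `DATABASE_URL` variable name is a common convention we are assuming, not something the statistic specifies:

```python
import os

# Environment-unaware (typical of generated code): a hardcoded local
# endpoint runs fine on the developer's machine but fails in production.
LOCAL_ONLY_DB_URL = "postgresql://localhost:5432/app"

# Environment-aware: read the endpoint from configuration at deploy time,
# falling back to localhost only for local development.
def get_db_url():
    return os.environ.get("DATABASE_URL", "postgresql://localhost:5432/app")
```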

40-50% of AI-generated code inflates coverage metrics with meaningless tests rather than validating logic.

OX Security, 10/23/2025
AI Risks
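A minimal sketch of what coverage inflation looks like in practice (the example code is ours, not the report's): the first test exercises the code path, so coverage tools count its lines as tested, yet it asserts nothing; only the second would catch a regression.

```python
def apply_discount(price, pct):
    """Return price reduced by pct percent."""
    return round(price * (1 - pct / 100), 2)

# Coverage-inflating test: executes the function, so the lines show as
# "covered", but a broken implementation would still pass.
def test_discount_runs():
    apply_discount(100, 20)

# Logic-validating test: fails if the computation ever regresses.
def test_discount_value():
    assert apply_discount(100, 20) == 80.0
```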

80-90% of AI-generated code rigidly follows conventional rules, missing opportunities for more innovative, improved solutions.

OX Security, 10/23/2025
AI Risks

AI-powered phishing campaigns achieve a 54% click-through rate, over four times higher than traditional phishing.

Microsoft, 10/23/2025
Phishing

80-90% of AI-generated code creates hyper-specific, single-use solutions instead of generalizable, reusable components.

OX Security, 10/23/2025
AI Risks

80-90% of AI-generated code is functional for the immediate prompt but never refactors or architecturally improves the existing code around it.

OX Security, 10/23/2025
AI Risks

70-80% of AI-generated code violates code reuse principles, causing identical bugs to recur throughout codebases, requiring redundant fixes.

OX Security, 10/23/2025
AI Risks
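To make the reuse problem concrete, here is a hypothetical sketch (ours, not the report's) of the same validation pasted into two call sites, so any bug in it recurs in both and must be fixed twice, followed by the reuse-friendly alternative:

```python
# Anti-pattern: duplicated validation logic. A bug here (say, a missing
# check) exists in two places and needs two identical fixes.
def register(email):
    if "@" not in email:
        raise ValueError("invalid email")
    return "registered " + email

def invite(email):
    if "@" not in email:
        raise ValueError("invalid email")
    return "invited " + email

# Reuse-friendly version: one shared validator, one place to fix a bug.
def validate_email(email):
    if "@" not in email:
        raise ValueError("invalid email")
    return email

def register_v2(email):
    return "registered " + validate_email(email)

def invite_v2(email):
    return "invited " + validate_email(email)
```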

40-50% of AI-generated code reimplements from scratch instead of using established libraries, SDKs, or proven solutions.

OX Security, 10/23/2025
AI Risks

20-30% of AI-generated code over-engineers for improbable edge cases, causing performance degradation and resource waste.

OX Security, 10/23/2025
AI Risks

90-100% of AI-generated code contains excessive inline commenting, which dramatically increases computational burden and makes code harder to review.

OX Security, 10/23/2025
AI Risks

40-50% of AI-generated code defaults to tightly-coupled monolithic architectures, reversing decade-long progress toward microservices.

OX Security, 10/23/2025
AI Risks
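A hedged sketch of the coupling the statistic refers to (illustrative class names are ours): the first service constructs its own storage backend and so can only ship and be tested together with it, while the second takes the backend as a dependency, which is the separation that microservice-style architectures rely on:

```python
# Tightly coupled: the service hard-wires its own storage backend.
class CoupledReportService:
    def __init__(self):
        self._store = {}  # backend choice baked into the class

    def save(self, name, report):
        self._store[name] = report

# Loosely coupled: the storage dependency is injected, so the service can
# front any backend (an in-memory dict for tests, a database in production).
class ReportService:
    def __init__(self, store):
        self._store = store

    def save(self, name, report):
        self._store[name] = report

    def load(self, name):
        return self._store[name]
```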

40% of organizations reported that they already use AI for threat hunting in 2025.

Red Canary, 10/23/2025
Security Operations

42% of companies globally do not have a policy in place to govern the use of AI by employees as of 2025.

Riskonnect, 10/22/2025
Policy

32% of companies globally have formally trained or briefed the entire company on risks related to generative AI in 2025, up from 19% in 2024 and 17% in 2023.

Riskonnect, 10/22/2025
Risk, Training