AI
We've curated 1,475 cybersecurity statistics about AI to help you understand how machine learning algorithms, automated threat detection, and AI-driven defenses are shaping the landscape of cybersecurity in 2025.
80% of leaders are currently using or planning to use AI agents to protect against AI-driven cyber attacks in 2025.
76% of organizations have a domain management strategy in place.
28% of cybersecurity professionals in Germany reported that their organizations are fully prepared to handle AI-enhanced attacks in 2025.
Only 16% of legal teams have total visibility into how their portfolios are managed.
49% of organizations reported an increase in AI-generated phishing, 48% reported an increase in AI-powered malware, and 47% reported an increase in AI-driven identity theft or fraud in the past year.
33% of respondents say AI agents shared sensitive or inappropriate data.
60-70% of AI-generated code lacks awareness of the deployment environment, running locally but failing in production.
40-50% of AI-generated code inflates coverage metrics with meaningless tests rather than validating logic.
80-90% of AI-generated code rigidly follows conventional rules, missing opportunities for more innovative, improved solutions.
AI-powered phishing campaigns achieve a 54% click-through rate, over four times higher than traditional phishing.
80-90% of AI-generated code creates hyper-specific, single-use solutions instead of generalizable, reusable components.
80-90% of AI-generated code satisfies the immediate prompt but never refactors or architecturally improves the existing codebase.
70-80% of AI-generated code violates code reuse principles, causing identical bugs to recur throughout codebases and requiring redundant fixes.
40-50% of AI-generated code reimplements from scratch instead of using established libraries, SDKs, or proven solutions.
20-30% of AI-generated code over-engineers for improbable edge cases, causing performance degradation and resource waste.
90-100% of AI-generated code contains excessive inline comments, which dramatically increase computational burden and make the code harder to review.
40-50% of AI-generated code defaults to tightly coupled monolithic architectures, reversing a decade of progress toward microservices.
40% of organizations reported that they already use AI for threat hunting in 2025.
42% of companies globally do not have a policy in place to govern the use of AI by employees as of 2025.
32% of companies globally have formally trained or briefed the entire company on risks related to generative AI in 2025, up from 19% in 2024 and 17% in 2023.