Cybersecurity statistics about AI
Top concerns for CIOs regarding cybersecurity risk include malware and ransomware (42%), data breaches (37%), AI-driven attacks (34%), and phishing (33%).
The primary drivers for AI/ML adoption are improving operational efficiency (41%) and maintaining competitive advantage (40%).
42% of executives believe that AI-powered threats will materialize.
Among those who have encountered issues with AI-generated code, 92% reported insecure code as a concern.
65% believe AI will significantly reshape the AppSec function within the next year.
Only 29% of executives surveyed say they are reluctant to implement AI tools and technologies because of cybersecurity ramifications.
86% of respondents are already using or exploring generative AI tools in their security programmes.
Of organizations using AI/ML, 88% are incorporating generative AI at some level.
46% of respondents say their organizations use AI/ML to prevent cyberattacks.
59% of executives say that it is becoming more difficult for employees to identify real threats as AI-powered technologies make attacks more sophisticated.
Only 29% of executives say they are prepared for AI-powered threats.
More than 84% of respondents believe the AppSec leader's role is more important now than it was a few years ago. This increased importance is linked to factors such as growing challenges from AI-generated code and open source software.
Among those who have encountered issues with AI-generated code, 83% cited lack of transparency as a major concern.
48% say they need to improve their defenses against AI-powered cyber adversaries.
In January 2025, Skyhigh Security recorded DeepSeek usage among 43% of its customers.
94% of all AI services are at risk for at least one of the top Large Language Model (LLM) risk vectors, including prompt injection/jailbreak, malware generation, toxicity, and bias.
The volume of data uploaded to AI applications is up 80%.
83% of AI applications don’t support integration with multi-factor authentication (MFA) tools.
AI can automate 70% of all incident investigations and threat remediation activity.
Prompts specifying a need for security or requesting OWASP best practices produced more secure results, yet still yielded vulnerable code for 5 of the 7 LLMs tested.