Cybersecurity Statistics About AI
Only 23% of organizations surveyed have implemented comprehensive AI security policies.
11% of files uploaded to AI applications include sensitive corporate content.
OpenAI's GPT-4o had the lowest performance, scoring 1/10 on secure code generation when given "naive" prompts.
Claude 3.7 Sonnet scored 10/10 when given security-focused prompts.
Over 700 security issues in agentic AI repositories remain unaddressed.
AI now pulls indicators of compromise (IOCs) in as little as 10 seconds.
AI can automate 70% of all incident investigations and threat remediation activity.
Skyhigh Security research reveals a 200% increase in AI application traffic within the last year. This compares to a 23% increase in traffic to non-AI applications.
End-user engagement with DeepSeek's web interface surged after the R1 release, reaching 672.8% growth over pre-release baselines within the first seven weeks.
83.8% of enterprise data input into AI tools flows to platforms classified as medium, high, or critical risk.
Mid-level employees use AI tools 3.5 times more frequently than manager-level employees.
44% of organizations say it’s difficult to hire for automation and AI roles.
39.5% of AI tools carry a key risk factor: inadvertent exposure of user interactions and training data.
Claude usage rose 136.1% after version 3.5 launched.
34.4% of AI tools have user data accessible to third parties without adequate controls.
AI usage at work has grown 61x over the past 24 months.
AI usage at work has increased 4.6x in the past 12 months.
81% of security leaders state that AI-driven automation is a top priority for their strategy over the next 3 to 5 years.
35.9% of AI-generated content flows into email and messaging platforms.
89% of security teams have already begun integrating AI into their exposure validation processes.