Cybersecurity statistics about AI
Recent research indicates that half of organizations reporting AI-related security incidents estimated losses exceeding $50 million. Using an industry-standard metric of $169 per breached record, this equates to approximately 300,000 data records per organization.
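As a sanity check on the figure above, dividing the cited loss by the per-record cost reproduces the approximation (a minimal sketch; the dollar values are the ones quoted in the statistic):

```python
# Back-of-envelope check: $50M in losses at $169 per breached record
# implies roughly 300,000 records, as stated above.
loss_usd = 50_000_000        # estimated loss per affected organization
cost_per_record = 169        # industry-standard per-record breach cost
records = loss_usd / cost_per_record
print(f"{records:,.0f} records")  # ≈ 295,858, i.e. roughly 300,000
```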
The last two years have seen 150% year-over-year growth in AI-related incidents, with a significant inflection point coinciding with widespread cloud adoption in the late 2010s/early 2020s and the 2022 release of ChatGPT.
The FireTail API Data Breach Tracker shows a rise in API security incidents, increasing from 22 in 2023 to 26 in 2024.
MIT's AI Risk Repository identifies more than 1,000 risks from an academic perspective.
Recent research from Wiz highlights 6 known vulnerabilities in the underlying AI providers themselves.
Approximately 9% of API traffic from Russia, China, and Iran was flagged as bot activity, particularly in January, November, and December 2024.
The AI Incident Database maintained by the Responsible AI Collaborative tracks AI-related issues dating back to the 1980s, with concentrated growth from 2010 onwards.
The mean number of warnings per OpenAPI specification rose sharply, from 215 per spec in 2023 to 1,078 per spec in 2025. Unrestricted string and array lengths emerged as the most common warning type.
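The "unrestricted length" pattern above refers to string schemas without `maxLength` and array schemas without `maxItems`. A minimal sketch of such a check is below, walking a simplified, hypothetical OpenAPI spec loaded as a dict; real API linters are far more thorough:

```python
# Hedged sketch: flag string schemas lacking maxLength and array schemas
# lacking maxItems in an OpenAPI spec represented as a nested dict.
def find_unbounded_schemas(node, path="$"):
    """Recursively collect paths to schemas without length limits."""
    warnings = []
    if isinstance(node, dict):
        if node.get("type") == "string" and "maxLength" not in node:
            warnings.append(f"{path}: string without maxLength")
        if node.get("type") == "array" and "maxItems" not in node:
            warnings.append(f"{path}: array without maxItems")
        for key, value in node.items():
            warnings.extend(find_unbounded_schemas(value, f"{path}.{key}"))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            warnings.extend(find_unbounded_schemas(item, f"{path}[{i}]"))
    return warnings

# Hypothetical spec fragment for illustration only.
spec = {
    "components": {
        "schemas": {
            "User": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},                     # unbounded -> flagged
                    "email": {"type": "string", "maxLength": 254},  # bounded -> ok
                    "tags": {"type": "array",                       # no maxItems -> flagged
                             "items": {"type": "string", "maxLength": 32}},
                },
            }
        }
    }
}

for w in find_unbounded_schemas(spec):
    print(w)
```

Unbounded strings and arrays matter for security because they allow oversized payloads, which is why linters treat the missing constraints as warnings.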
61% of manufacturing respondents expect improved real-time threat detection and response as the benefit of AI adoption in remote access security.
38% of manufacturing respondents expect proactive risk identification as the benefit of AI adoption in remote access security.
Only 43% of cybersecurity functions are meaningfully involved in helping other functions adopt AI.
72% of respondents state AI agents pose a greater risk than machine identities.
60% of respondents say AI agents' ability to access privileged data is a factor contributing to AI agents as a security risk.
75% of financial institutions say fraudsters outpace defenders with generative AI.
An overwhelming 92% state that governing AI agents is critical to enterprise security.
32% of respondents say AI agents downloaded sensitive content.
54% of respondents say AI agents accessing and sharing inappropriate information is a factor contributing to AI agents as a security risk.
"Secure Creators" (organizations with more advanced cybersecurity functions than their peers) were more likely to help other business functions implement AI than "Prone Enterprises" (48% vs. 31%).
55% of respondents say AI agents making decisions based on inaccurate or unverified data is a factor contributing to AI agents as a security risk.
31% of respondents say AI agents accessed inappropriate data.