AI
We've curated 1475 cybersecurity statistics about AI to help you understand how machine learning algorithms, automated threat detection, and AI-driven defenses are shaping the landscape of cybersecurity in 2025.
65% of SMBs report cybersecurity as the #1 business function that could be managed more effectively with artificial intelligence (AI).
82.6% of all phishing emails analyzed exhibited some use of AI.
Menlo Security identified nearly 600 incidents of GenAI fraud in 2024.
AI is primarily being used to identify suspicious behavior (63%) across financial and professional services organizations.
44% of financial and professional services organizations use AI for identifying risk signals.
More than 1 in 3 security professionals (38%) predict that ransomware will become even more dangerous when powered by AI.
54% of financial and professional services organizations use AI for network analysis.
Nearly half of financial and professional services organizations (49%) expect to invest in AI solutions as part of their efforts to tackle financial crime.
Over a quarter (27%) of financial and professional services organizations have AI and machine learning as an established part of their financial crime compliance programs, exceeding 2023 levels (24%).
Only 20% of financial services professionals believe AI has had a "very positive" effect on their financial crime compliance framework – down from 37% in 2023.
94% of CIOs are actively seeking opportunities to incorporate AI into their business, compared to 89% last year.
86% of CIOs report growing pressure within their organization to ensure ROI from AI.
Artificial intelligence (AI - 95%), machine learning capabilities (93%), and Internet of Things (IoT - 89%) initiatives are among the most widely adopted emerging technologies over the past 12 months.
Speed of threat detection was used to evaluate AI efficacy by 57% of respondents.
57% of security teams find it difficult to enforce policies on training data usage.
83% of Security Engineers/Architects worry most about AI systems understanding data access rights.
39% of firms are using AI to solve the data overload problems that stymie vulnerability and exposure management work.
46% of respondents cited cost as an obstacle to the effective use of AI.
5% of Security Managers/Directors have low or no confidence in their ability to control the data used for AI training.