Cybersecurity statistics about AI
63.3% believe their organization experienced an attack involving some element of AI within the past 12 months.
68% of cybersecurity practitioners expressed concern about long-term genAI threats like adversarial attacks.
38% of respondents believe human oversight to review or approve AI agent decisions would increase trust.
32% of LLM pentest findings are classified as serious.
36% of security leaders and practitioners admit that generative AI (genAI) is moving faster than their teams can manage.
72% of security leaders cite genAI-related attacks as their top IT risk.
36% of security leaders expressed concern about near-term operational genAI risks such as inaccurate outputs.
50% of respondents want more transparency from software suppliers about how they detect and prevent vulnerabilities.
46% of all survey respondents are concerned about sensitive information disclosure due to genAI.
Overall, 69% of serious findings across all pentest categories are resolved.
51% of respondents consider AI-enhanced social engineering a fairly or extremely significant concern.
Only 34.1% of Americans feel “very safe” with their current bank or credit union.
Fraud is a particular concern among Americans aged 65 and older, with 69.9% extremely or very concerned.
Over 83% of consumers have concerns about AI-powered fraud.
48% of security leaders believe a “strategic pause” is needed to recalibrate defenses against genAI-driven threats.
Among digital payments providers, PayPal is the most trusted, named by 54.5% of consumers.
For younger age groups, 55% to 57% are extremely or very concerned about fraud.
20.3% of respondents view AI-powered malware as an extremely significant risk. This concern climbs to 25% among senior management, compared with just 15% of middle management.
34.3% of iGaming operators saw a large spike in AI-powered fraud.
42% of all survey respondents are concerned about genAI model poisoning or theft.