AI
We've curated 1,475 cybersecurity statistics about AI to help you understand how machine learning algorithms, automated threat detection, and AI-driven defenses are shaping the landscape of cybersecurity in 2025.
10% of faculty are 'Not concerned' about AI-related cybersecurity threats.
65% of faculty are concerned about student privacy violations due to AI.
62% of students use AI for research.
In Sweden, only 31% of respondents said government regulation of AI is incredibly important (the lowest rate among countries surveyed).
29% of consumers say they don’t even fully understand how apps are built in the first place.
53% of faculty are concerned about learning disruption due to AI.
7% of education institutions discourage but do not ban AI tools for faculty and staff.
52% of faculty are concerned about deepfake impersonation of staff/students due to AI.
35% of respondents from the Netherlands said government regulation of AI is incredibly important.
65% of education leaders are aware of data leakage as an AI cybersecurity risk.
23% of Gen Z would avoid AI apps entirely after an AI-related vulnerability.
1% of education institutions ban AI tools entirely for faculty and staff.
41% of schools report they have already experienced AI-related cyber incidents.
33% of consumers would be more cautious if they learned that AI-generated code caused a vulnerability in an app they used.
Boomers are nearly 2x more likely to lose trust if they find out AI was used to develop their favorite app.
34% of consumers see themselves as responsible for protecting personal data in an app.
51% of consumers stated that learning their favorite app uses AI-written code would have no effect on their trust.
34% of Gen Z respondents noted that learning their favorite app uses AI-written code would increase their trust.
Only 22% of consumers believe a typical mobile app’s code is mostly AI-generated.
44% of Millennials see developers as most responsible for protecting personal data in an app.