AI Agents
We've curated 52 cybersecurity statistics about AI agents to help you understand how automated systems are being used for threat detection and response in 2025, enhancing security practices while also introducing new vulnerabilities.
Only 6% of security leaders rank securing non-human identities as their most difficult challenge.
34% of healthcare organizations name AI impersonation of users as their top emerging threat.
Only 23% of healthcare organizations offer passwordless authentication.
Only 17% of healthcare organizations list compliance as a top concern.
Fewer than 50% of organizations monitor the access or behavior of the AI systems they deploy.
Over 50% of organizations use AI to detect threats.
85% of organizations lack proper security controls for AI agents.
85% of organizations state they are "ready for AI in security".
Only 30% of organizations regularly map AI agents to critical assets.
72% of respondents state AI agents pose a greater risk than machine identities.
60% of respondents say AI agents' ability to access privileged data is a factor in their security risk.
96% of tech professionals view AI agents as a growing security risk.
98% of organizations plan to expand their adoption of AI agents.
82% of organizations already use AI agents.
Only 44% of organizations report having policies in place to secure AI agents.
57% of respondents say AI agents' sharing of privileged data is a factor in their security risk.
58% of respondents say AI agents' potential to perform unintended actions is a factor in their security risk.
Alarmingly, 23% reported their AI agents have been tricked into revealing access credentials.
39% of respondents say AI agents accessed unauthorized systems or resources.
31% of respondents say AI agents accessed inappropriate data.