Cybersecurity statistics about AI
26% of organizations are adopting governance frameworks to establish rules for AI use in development.
51% of developers worry about unauthorized or excessive API calls from AI agents, making this their top security concern.
16% of developers have not yet considered AI agents as API consumers.
5% of developers are actively transitioning from human-first to AI-first design for APIs.
36% of developers lack trust in AI systems.
35% of respondents cited difficulty ensuring quality and reliability of AI-generated code.
56% of respondents cited a lack of control over AI model security used for code generation.
47% of respondents cited difficulty understanding and securing AI-generated code.
7% of developers primarily design APIs for AI agents/machine consumption.
45% of respondents cited the potential for new API vulnerabilities tied to AI-generated code.
41% of developers, architects, and executives rely on AI to generate API documentation.
49% of developers are concerned about AI systems accessing sensitive data they shouldn't see.
13% of developers design APIs equally for humans and AI agents.
In 14.5% of organizations adopting AI, the CISO holds primary responsibility for AI security.
23% of organizations adopting AI identify Shadow AI and unapproved tools as an area where they are least prepared to address threats.
In 29% of organizations adopting AI, the CIO and IT organization lead AI security.
3% of organizations adopting AI are unsure which area of AI security their organization is least prepared to address.
36% of respondents in Italy identify optimizing cloud security as a top security convergence goal.
49% of employees reported feeling empowered by their use of AI.
23% of respondents in Germany had AI tools provided by their IT team.