Cybersecurity statistics about AI
7.95% of employees in the average enterprise used a Chinese GenAI tool.
535 separate incidents of sensitive data exposure were recorded involving Chinese GenAI tools.
Files sent to GenAI tools showed a disproportionately high concentration of sensitive and strategic content compared with prompt data: files were the source of 79.7% of all stored credit card exposures, 75.3% of customer profile leaks, 68.8% of employee PII incidents, and 52.6% of total exposure volume in financial projections.
47.42% of sensitive employee uploads to Perplexity were from users with standard (non-enterprise) accounts.
In Q2, the average enterprise saw 23 previously unknown GenAI tools newly used by their employees.
5.0% of all sensitive prompts analysed in Q2 originated in Google Gemini.
2.5% of all sensitive prompts analysed in Q2 originated in Claude.
35% of all real-world AI security incidents were caused by simple prompts.
Some prompt injection incidents led to over $100,000 in real losses without requiring any code to be written.
Generative AI (GenAI) was involved in 70% of real-world AI security incidents.
2.1% of all sensitive prompts analysed in Q2 originated in Poe.
AI security incidents have doubled since 2024.
8% of organisations reported not knowing if they had been compromised by an AI-related breach.
Security incidents involving shadow AI compromised personally identifiable information at a higher rate (65%) than the global average (53%).
Of those compromised by an AI-related breach, 97% reported not having AI access controls in place.
Organisations with high levels of shadow AI use incurred an average of $670,000 in additional breach costs.
Security incidents involving shadow AI compromised intellectual property at a higher rate (40%) than the global average (33%).
60% of AI-related security incidents led to compromised data.
Organisations using AI and automation extensively throughout their security operations reduced the breach lifecycle by an average of 80 days.
Of the organisations that have AI governance policies in place, only 34% perform regular audits for unsanctioned AI.