Cybersecurity statistics about AI
Financial information accounted for 14.4% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Roughly 1 in 12 employees (7.95%) used at least one Chinese GenAI tool at work.
Among the 1,059 users who engaged with Chinese GenAI tools, there were 535 incidents of sensitive data exposure.
The majority of sensitive data exposure (roughly 85%) due to the use of Chinese GenAI tools occurred via DeepSeek, followed by Moonshot Kimi, Qwen, Baidu Chat and Manus.
Code and development artifacts made up 32.8% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Subscription prices for generative AI tools like FraudGPT and WormGPT, marketed for illicit uses such as phishing and malware creation, start at as little as $200 per month.
Personally identifiable information (PII) comprised 17.8% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Customer data represented 12.0% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Mergers & acquisitions data accounted for 18.2% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Organizations that implement light-touch guardrails and nudges, rather than blanket blocking of Chinese GenAI tools, have seen up to a 72% reduction in sensitive data exposure while increasing AI adoption by as much as 300%.
Legal documents made up 4.9% of sensitive data exposed through employee use of Chinese GenAI tools at work.
69% of respondents globally believe AI-powered fraud now poses a greater threat to personal security than traditional forms of identity theft.
31% of cybersecurity professionals believe that AI will create new types of entry- and junior-level roles or increase demand.
The smallest organizations are among the most conservative when it comes to adopting AI tools, with 23% reporting no plans to evaluate AI security tools.
Mid-to-large (2,500–9,999 employees) and smaller (100–499 employees) organizations each have 33% adoption rates of AI tools.
Within both financial services and commercial/consumer sectors, 41% of professionals reported actively evaluating AI tools.
36% of those in the public sector indicated they are actively evaluating AI tools.
The top five areas where AI security tools are expected to have the most positive impact on operations in the shortest amount of time, by improving efficiencies and automating time-consuming tasks, are network monitoring and intrusion detection (60%), endpoint protection and response (56%), vulnerability management (50%), threat modeling (45%), and security testing (43%).
44% of cybersecurity professionals said that their organizations are actively reconsidering the roles and skills needed to support the adoption and use of AI security tools.
Mid-sized (500–2,499 employees) and the smallest (1–99 employees) organizations show the lowest adoption rates of AI tools, with 20% in each group.