AI-Enabled Cybersecurity Threats Escalate Rapidly: Glider AI's ID Verify and AI Proctoring Foil North Korean Operatives

Published on 6.12.25

The increasing reliance on artificial intelligence (AI) across sectors has driven a sharp rise in AI identity security vulnerabilities. In a recent survey, nearly 23% of respondents admitted their AI agents had been tricked into revealing access credentials, and 72% said they believe AI agents pose greater risks than traditional machine identities.

Common AI-enabled threats include credential stuffing, brute-force attacks, deepfake impersonation in business email, AI-generated phishing scams, and polymorphic malware that mutates to evade detection. To counter these growing threats, companies such as Glider AI have launched secure identity verification products like ID Verify, which help organizations mitigate risk and confirm who is actually behind the screen.

A notable example of an AI-powered threat involved North Korean operatives posing as remote software developers; the scheme was flagged by Glider AI's ID Verify capability working in tandem with its AI Proctoring technology. The incident underscores the need for stronger identity security strategies, particularly around data control.

Related Posts


AI Bolsters Global Cybersecurity Measures
5.24.25
The integration of artificial intelligence (AI) is transforming the cybersecurity landscape globally by providing enhanced security measures to combat increasingly complex cyber threats. According to a Forrester report, ThreatBook's enterprise-scale...
