Microsoft plans to enhance Project Ire, an AI-powered malware detection tool, to improve its scalability and precision. The company aims to integrate this technology into Microsoft Defender as a threat detection and software classification tool. As malicious actors increasingly use AI to generate large quantities of malware, cybersecurity organizations are developing their own AI-based countermeasures.
NIST and CAISI have been tasked with leading critical AI and biosecurity initiatives, including developing high-level requirements for SOC databases and harmonizing global nucleic acid sequence-screening practices. They will also conduct pre-deployment evaluations of US and foreign commercial AI models for capabilities and national security risks. The Biden administration's 2024 national security memorandum on AI identifies NIST and CAISI as the lead agencies for security testing of frontier AI models, while the Trump administration proposes significant budget cuts, including a $325 million reduction in NIST's funding, citing the agency's support for climate change initiatives.
OpenAI is set to release GPT-5, its next large language model, despite facing ongoing training challenges, including data limitations and hardware failures. The company has made significant advances with its previous models, including GPT-4, which surpassed human performance on many tasks.
A group of seven Republican US Senators has requested a probe into DeepSeek, an AI chatbot developed by a Chinese firm, citing national security concerns. The move follows bans on the model across multiple government departments, with one study suggesting it is 11 times as dangerous as competing chatbots.
The US government's efforts to combat online sexual abuse are being put to the test as AI models continue to pose a significant risk to vulnerable communities, particularly women and girls. President Trump has signed the Take It Down Act, which aims to curb online sexual exploitation by keeping federal resources in place for survivors and holding offenders accountable. However, critics have raised concerns that AI developers lack both the motivation and the lived experience needed to prioritize safety, which exacerbates the problem.
UiPath has launched a solution providing end-to-end coverage for all AI agent categories, including no-code and coding agents. The solution offers visibility into and control over AI agent risks through automated discovery, secure integration, real-time threat monitoring, and built-in compliance controls. According to Kevin Mooney, UiPath's CISO, this addresses the need for a robust security posture as AI agents become central to enterprise automation and decision-making.
The US aims to secure leadership in AI through three pillars: AI innovation, AI-enabled applications, and AI infrastructure. A policy framework will ensure the secure adoption of US AI innovations, while public-private partnerships focus on workforce development and benefits for citizens. The State of Utah is catalyzing an AI-driven innovation ecosystem with government, academia, and industry partners to drive research, foster innovation, and create an AI-savvy workforce.
Vectra AI has launched MCP Server, a tool that brings natural language access to its cybersecurity platform through the Model Context Protocol. The tool allows security teams to engage with the platform using AI assistants like Claude Desktop and Cursor, reducing investigation time and increasing efficiency. Analysts can investigate incidents, reconstruct attack timelines, and report on security posture through conversational queries, eliminating the need for custom connectors. This move aims to democratize security expertise by empowering analysts with instant access to powerful insights through their existing tools.
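The pattern behind an MCP integration like the one described above is a server that exposes a registry of named tools with structured, JSON-serializable arguments and results, which an AI assistant invokes on the user's behalf when translating a conversational query. The following is a minimal stdlib-only sketch of that tool-registration and dispatch pattern; the tool names (`list_detections`, `get_host_detections`), the incident data, and the schema are illustrative assumptions, not Vectra's actual API.

```python
import json

# Hypothetical detection data standing in for the security platform's backend.
DETECTIONS = [
    {"id": 1, "host": "db-server-01", "type": "lateral movement", "severity": "high"},
    {"id": 2, "host": "web-frontend", "type": "beaconing", "severity": "medium"},
]

# Tool registry: MCP-style servers expose named tools that an assistant can
# discover and call with structured arguments instead of custom connectors.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool under its own name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def list_detections(min_severity="medium"):
    """Return detections at or above the given severity."""
    order = {"low": 0, "medium": 1, "high": 2}
    return [d for d in DETECTIONS if order[d["severity"]] >= order[min_severity]]

@tool
def get_host_detections(host):
    """Return all detections recorded for a single host."""
    return [d for d in DETECTIONS if d["host"] == host]

def call_tool(name, arguments):
    """Dispatch a tool call and return a JSON result, as an MCP-style server would."""
    result = TOOLS[name](**arguments)
    return json.dumps(result)

# An assistant translating "show me high-severity detections" might issue:
print(call_tool("list_detections", {"min_severity": "high"}))
```

Because every tool takes and returns plain JSON-compatible values, an assistant such as Claude Desktop can chain calls (list detections, then pull one host's timeline) to answer a multi-step investigative question conversationally.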
The White House's AI action plan has been criticized for failing to specify who will implement and oversee it, raising concerns about accountability and effectiveness. The plan aims to promote responsible AI development and deployment but does not name the government agencies or officials responsible for its execution.