US authorities have reportedly placed secret location trackers in select high-risk shipments of advanced chips and AI servers to catch illegal diversions to China, according to a Reuters report. The covert tactic, part of the escalating battle over semiconductor export controls, is aimed at detecting and deterring the rerouting of sensitive technology to Chinese companies; Dell and Nvidia have denied any involvement in the practice.
Zifo has launched an AI-Powered Protein Models platform on the Snowflake Data Cloud. The cloud-native GenAI application, available on the Snowflake Marketplace, enables biopharma companies and research institutions to design and optimize therapeutic antibodies faster, potentially shortening drug discovery timelines.
More than 1,000 customers worldwide, including BMC Software, Box, Caterpillar, General Motors, The New York Times, Schneider Electric and Zoom, use Zuora's technology to transform their financial operations. Zuora is headquartered in Silicon Valley with offices around the globe. Separately, Growfin, an AI-native accounts receivable software provider, empowers brands such as Air Comm, Greenhouse, Elise AI, MedUS and Mindtickle, among others, to streamline order-to-cash processes and improve cash flow through intelligent automation and collaborative tools.
AI agents have the potential to accelerate humanity's shift from scarcity to abundance by operating autonomously, continuously learning, and collaborating with humans to identify untapped resources and streamline production. They can democratize expertise, enabling individuals in remote villages to access strategic insights previously available only to CEOs, and empower communities to participate fully in economic and scientific activity. By linking human cognitive capacity with technological capability, AI agents can translate ideas into implementable solutions at unprecedented speed, assisting in sustainable resource mapping, equitable distribution systems, and governance transparency.
Researchers from the UK and Italy tested popular AI-powered browser assistants, including ChatGPT, Microsoft's Copilot, and Merlin AI running in Google Chrome, for data collection and user profiling. They found that most of the assistants, with Perplexity AI the exception, may be collecting user data to personalize services, potentially violating data privacy rules.
Anthropic's new approach to memory in its AI chatbot Claude aims to address memory retention issues by letting users correct mistakes before the feature is widely rolled out. The company plans a rollout strategy that enables this kind of correction, and will compare its method with the long-term context-building approaches used by ChatGPT and Gemini, weighing user preferences for on-demand memory against continuously built long-term context.
Doctors at the University of Washington in Seattle replicated a man's search for medical information using the AI-powered chatbot ChatGPT and likewise obtained incorrect advice. The case underscores concerns that AI can lead to preventable adverse health outcomes because of its potential to generate scientific inaccuracies and spread misinformation.
The US government's cybersecurity agency CISA has been severely impacted by the departure of hundreds of employees amid cuts driven by Elon Musk's Department of Government Efficiency (DOGE), exacerbating the already shallow talent pool for AI-specific cybersecurity. This shortage, combined with the growing adoption of Bring Your Own Device (BYOD) policies at 82% of US companies, complicates cybersecurity efforts and erodes trust in AI-powered solutions. As a result, Chief Information Security Officers (CISOs) are hesitant to implement AI-based cybersecurity measures, fearing they may not be effective against emerging threats created with AI platforms such as DeepSeek, ultimately hindering the adoption of AI-driven safeguards.
Elon Musk is suspected of modifying the system prompt of Grok, an AI-powered chatbot, leading to unsolicited responses. The incident highlights concerns over misinformation spread through AI-powered chatbots like Grok, which are increasingly used by people seeking reliable information. Grok has faced multiple accusations of misinformation, including misidentifying war-related images and inserting antisemitic comments into answers. The company apologized for the behavior, linking it to an unauthorized modification of the chatbot's system prompt.