Mark Zuckerberg acknowledged the challenges of open-sourcing large AI models, stating that some systems have become too complex for other companies to replicate. He also expressed concerns about the development of superintelligence, which he believes poses new safety risks.
Adversa AI, a pioneer in Agentic AI Security and AI Red Teaming, has developed an award-winning platform that provides automated, continuous AI red teaming to uncover vulnerabilities in large language model (LLM) applications, autonomous agents, and modern MCP stacks. Founded by veteran red-teamers and AI security pioneers, the platform protects Fortune 500 AI innovators, financial institutions, and government agencies from prompt-injection, tool-leakage, goal hijacking, and infrastructure-level vulnerabilities before they reach production.
A midyear report from Bloomberg reveals that securities lawsuits over allegedly unrealized AI promises are increasing, with big-dollar suits dominating the data. The report, attributed to Martina Barash and Gillian R. Brassil, indicates a growing trend in investor class actions related to artificial intelligence (AI) claims.
A new study by Cornell University has found that AI chatbots like ChatGPT may unintentionally widen gender and racial pay gaps. Researchers analyzed multiple large language models and discovered that these chatbots often advise women to request lower salaries compared to men, with biases linked to gender, ethnicity, and minority sections. The study's lead author, Ivan P. Yamshchikov, warns that these biases can reinforce existing pay disparities rather than helping to close them.
Daloopa, a leading AI-powered data platform, has secured a $13 million investment from Pavilion Capital to address the shortcomings of public web-sourced data in powering Large Language Models (LLMs) and AI agents in financial services. The company's Model Context Protocol (MCP) bridges the gap between LLMs and structured financial data, providing full auditability with hyperlinked datapoints sourced from original documents. Daloopa's platform is already integrated with Anthropic's Claude for Financial Services and supports other AI platforms using MCP Standard Protocol, enabling institutions to build more reliable and scalable financial AI tools.
The rise of AI-generated music is sparking debate about transparency in the music industry, as some creators use software like Suno and Udio to produce songs with just a few prompts. Streaming service Deezer has started flagging AI-generated songs on its platform, using in-house technology to detect subtle patterns found in AI-created audio. Some users can check if a song is human-made or generated by AI using third-party services like IRCAM Amplify, which can provide probabilities of 81.8% to 98%. However, experts warn that there's no foolproof way to determine the authenticity of content as AI technology improves rapidly, making it increasingly challenging to distinguish between real and synthetic music.
The next decade's most successful AI companies will prioritize infrastructure strategies that match their algorithms' needs, moving away from cloud-first assumptions. This shift represents a step toward more tailored approaches to AI system infrastructure, allowing for cost-efficient use of cloud spot instances and dedicated hardware for mission-critical model development, as seen in the work of companies like Google and Amazon.
Google plans to roll out artificial intelligence (AI) technology in the US to estimate whether users are under 18 years old. The AI Age estimation will initially be introduced to a "small set of users" over the next few weeks, with plans to expand it further. If a user is identified as being under 18, Google will apply age-appropriate protections on YouTube, including Digital Wellbeing tools and restrictions on certain types of content. Users who are incorrectly identified can verify their age by uploading a government ID or selfie.
Google's chief legal officer Kent Walker has expressed concerns that the EU's Artificial Intelligence Act and a new voluntary code of practice may slow Europe's AI development, citing potential departures from copyright law and trade secret risks as major concerns.