The US government has dismantled key guardrails against disinformation during President Donald Trump's first 100 days, slashing funding for research and closing an agency that combated foreign influence operations. The National Science Foundation cancelled hundreds of grants focused on diversity, equity, and inclusion, as well as misinformation and disinformation, citing cost savings of $233 million, a move that has left researchers concerned about the spread of false information, particularly in the health and tech sectors. Secretary of State Marco Rubio also shut down the State Department's Counter Foreign Information Manipulation and Interference hub, which had tracked and countered disinformation from foreign actors. Experts warn that these changes could give US adversaries freer rein to sow disinformation, just as social media platforms scale back content moderation and Meta suspends third-party fact-checking in the US.
https://www.thestar.com.my/tech/tech-news/2025/04/28/us-anti-disinformation-guardrails-fall-in-trump039s-first-100-days

A new licence aims to provide legal certainty and protection for creators by simplifying access to data for training GenAI models and ensuring fair compensation for their works. This move benefits smaller creators who previously lacked bargaining power, while also potentially reducing the number of copyright litigation cases worldwide. The availability of such licences in the UK and internationally could alter the risk profile for AI system transactions, addressing a long-standing concern about third-party copyright litigation risk.
https://natlawreview.com/article/could-be-ai-nswer-collective-copyright-licence-generative-ai-training

Vectra AI's platform, backed by the company's 35 patents in AI security, monitors and detects cyber attacks on customers' data centers, campuses, remote work environments, identities, cloud services, and IoT/OT systems. Its real-time threat detection connects the dots across these environments to prevent breaches, and organizations worldwide rely on it; the company's patented techniques are also referenced by MITRE D3FEND.
https://www.prnewswire.com/news-releases/vectra-ai-expands-partnership-with-crowdstrike-to-launch-offering-for-smb-and-midmarket-security-teams-302438783.html

Acuvity has won two Global Infosec Awards at RSAC 2025 for its innovative approach to addressing emerging threats in the GenAI era. Gary S. Miliefsky, publisher of Cyber Defense Magazine, praised Acuvity's ability to provide a cost-effective solution while innovating in unexpected ways to mitigate cyber risk. The platform offers real-time visibility, adaptive risk assessments, and proactive controls that continuously evolve with AI interactions.
https://www.prnewswire.com/news-releases/acuvity-secures-two-global-infosec-awards-for-generative-ai-security-at-rsac-2025-302438882.html

AuditBoard has launched an AI governance solution to help organizations meet best practices outlined in frameworks like NIST's AI RMF, protecting against cyber, reputational, and financial risks. The solution streamlines AI use case intake, review, and approval processes, establishes a single source of truth for approved models, and dynamically links AI risks to vendors, assets, and controls, enabling customers to manage AI risks more efficiently and proactively. AuditBoard's Chief Technology Officer Happy Wang said the solution addresses the urgent need for AI governance across industries.
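As a rough illustration of the workflow such a solution implies, the Python sketch below models an intake-review-approval lifecycle in which a use case cannot be approved until at least one mitigating control is linked to it. Every class, field, and value here is a hypothetical stand-in, not AuditBoard's actual schema or API.

# Minimal sketch of an AI governance data model, assuming a simple
# intake -> review -> approval lifecycle and links from each use case
# to vendors, assets, and mitigating controls. All names hypothetical.
from dataclasses import dataclass, field
from enum import Enum


class ReviewStatus(Enum):
    INTAKE = "intake"
    UNDER_REVIEW = "under_review"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class AIUseCase:
    name: str
    model: str                                          # the model backing this use case
    status: ReviewStatus = ReviewStatus.INTAKE
    vendors: list[str] = field(default_factory=list)
    assets: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)   # mitigating controls

    def approve(self) -> None:
        # Refuse approval until a reviewer has linked at least one control.
        if not self.controls:
            raise ValueError(f"{self.name}: no controls linked, cannot approve")
        self.status = ReviewStatus.APPROVED


# The "single source of truth" is then just a registry of approved use cases.
registry: dict[str, AIUseCase] = {}

use_case = AIUseCase("contract-summarizer", model="vendor-llm-v1",
                     vendors=["ExampleVendor"], assets=["contracts-db"])
use_case.controls.append("human-review-before-send")
use_case.approve()
registry[use_case.name] = use_case

Linking risks to concrete vendors, assets, and controls, rather than tracking them in a standalone list, is what lets a change to one vendor or control surface every affected AI use case automatically.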
https://www.prnewswire.com/news-releases/auditboard-launches-ai-governance-solution-to-help-customers-optimize-ai-innovation-302439010.html

Bloomberg has published two new academic papers focused on developing a safer framework for large language models (LLMs) in financial services. Researchers from Bloomberg's AI Engineering group, Data AI group, and CTO Office have identified the need for an AI content risk taxonomy to mitigate potential risks associated with retrieval-augmented generation (RAG)-based LLMs.
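To make the idea concrete, here is a minimal Python sketch of the guardrail pattern such a taxonomy suggests: classify a RAG draft against a set of risk categories and withhold it on any hit. The category names and the retrieve, generate, and classify_risk stand-ins are assumptions for illustration, not Bloomberg's published taxonomy or code.

# Minimal sketch of taxonomy-based output screening for a RAG pipeline.
# Categories and helper functions are illustrative stand-ins.
FINANCE_RISK_TAXONOMY = (
    "unlicensed_financial_advice",
    "market_manipulation",
    "confidential_disclosure",
)

def retrieve(query: str) -> list[str]:
    # Stand-in retriever; a real system would query a document index.
    return ["Quarterly filing: revenue grew 4% year over year."]

def generate(query: str, docs: list[str]) -> str:
    # Stand-in generator; a real system would prompt an LLM with the docs.
    return f"Drawing on {len(docs)} retrieved document(s): revenue grew 4%."

def classify_risk(text: str, category: str) -> bool:
    # Stand-in safety classifier; a real system would use a trained model,
    # not keyword matching.
    keywords = {"unlicensed_financial_advice": ("you should buy", "guaranteed return")}
    return any(kw in text.lower() for kw in keywords.get(category, ()))

def answer_with_guardrail(query: str) -> str:
    docs = retrieve(query)
    draft = generate(query, docs)
    # Check the draft against every category; block the answer on any hit.
    flagged = [cat for cat in FINANCE_RISK_TAXONOMY if classify_risk(draft, cat)]
    if flagged:
        return "Response withheld: flagged for " + ", ".join(flagged) + "."
    return draft

print(answer_with_guardrail("How did revenue trend last quarter?"))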
https://www.prnewswire.com/news-releases/bloomberg-ai-researchers-mitigate-risks-of-unsafe-rag-llms-and-genai-in-finance-302439547.html

The Children's Commissioner for England, Dame Rachel de Souza, has warned that apps creating deepfake images of sexual abuse should be banned due to the alarming risks they pose to youngsters. She cites teenage girls who are scared of AI apps that can digitally combine their faces with pornographic images without consent, fearing manipulation by strangers or friends.
https://www.dailymail.co.uk/news/article-14653779/Urgent-call-ban-deepfake-pornography-apps-target-teenage-girls.html

Dataminr has introduced Intel Agents, a new generation of AI-powered real-time information that enables private- and public-sector organizations to navigate a world of constant change. The innovation builds on Dataminr's ReGenAI, which automatically regenerates live event briefs as events unfold. Intel Agents provide context-enhanced real-time threat intelligence for cybersecurity teams, eliminating the need for laborious manual research. The technology is currently being piloted in Dataminr Pulse for Cyber Risk and will be expanded across the platform, including its public sector and corporate security solutions. Future updates include Client-Tailored Context and PreGenAI, which will enable customized real-time information and predictive intelligence, respectively.
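As a rough sketch of what regenerating, rather than appending to, a live brief could look like, the Python below rebuilds an event brief from all accumulated signals whenever a new one arrives. The EventBrief class and its string-join summarizer are hypothetical stand-ins, not Dataminr's actual product or API.

# Minimal sketch of the regeneration idea: the brief is rebuilt from every
# signal seen so far on each update, so it always reflects the full picture
# rather than reading as a changelog. All names hypothetical.
from dataclasses import dataclass, field


@dataclass
class EventBrief:
    event_id: str
    signals: list[str] = field(default_factory=list)
    brief: str = ""

    def ingest(self, signal: str) -> str:
        # Record the new signal, then regenerate the whole brief from scratch.
        self.signals.append(signal)
        # A production system would summarize with an LLM; a join stands in here.
        self.brief = (f"[{self.event_id}] {len(self.signals)} signals: "
                      + "; ".join(self.signals))
        return self.brief


brief = EventBrief("grid-outage-region-x")
brief.ingest("Sensor feed reports voltage drop")
print(brief.ingest("Utility confirms partial outage"))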
https://www.prnewswire.com/news-releases/dataminr-unveils-agentic-ai-roadmap-ushering-in-a-new-era-of-ai-powered-real-time-information-302439141.html

Chinese start-up DeepSeek is sparking speculation about its upcoming R2 AI model, with some predicting an imminent launch and improved benchmarks in terms of cost-efficiency and performance. The company's recent release of the advanced open-source AI models V3 and R1 generated significant interest online, particularly after it was revealed that these models were built at a fraction of the cost typically required by major tech companies for large language model projects.
https://www.scmp.com/tech/tech-trends/article/3308227/deepseek-speculation-swirls-online-over-chinese-ai-start-ups-much-anticipated-r2-model?module=top_story&pgtype=section