A petition by the Voice and Speech Association (VDS) has garnered over 75,500 signatures from voice actors worldwide, urging lawmakers to require explicit consent when training AI on artists' voices and ensure fair compensation for their work. The VDS collaborates with United Voice Artists, a network of over 20,000 voice actors advocating for ethical AI use. Some studios, like Neue Tonfilm Muenchen, are cautiously exploring AI-powered dubbing, while others, such as ViaPlay, have opted for hybrid approaches using human and AI voices. Despite concerns about the impact on traditional voice work, some experts believe AI will augment, rather than replace, human talent, with companies like Audio Innovation Lab and Flawless AI developing technologies that aim to match lip movements and create more authentic audio experiences.
Generative AI is being reevaluated due to concerns over compliance, regulatory risks, and operational disruptions. Curt Raffi, Chief Product Officer at Acrolinx, notes that the primary challenge now lies in governance and scaling across organizations as siloed workflows struggle to keep up with the increased volume of generated content.
Researchers found that humans identify AI-generated images only somewhat reliably, with a success rate of 63% across all images shown. Of the generative models tested, including Generative Adversarial Networks (GANs), GAN-generated images proved hardest to spot, producing the highest human error rate at 55%. By contrast, the team's own AI detection tool achieved over 95% accuracy in distinguishing both real and synthetic images, a significant improvement over past attempts.
Kent Walker, president of global affairs at Alphabet, has expressed concern that the European Commission's AI Act and its accompanying Code may hinder Europe's development and deployment of artificial intelligence (AI). The Code, released earlier this month, aims to provide transparency, copyright, and safety guidelines for providers of General Purpose Artificial Intelligence (GPAI) models. Walker warns, however, that departures from EU copyright law, slow approvals, or requirements exposing trade secrets could harm European competitiveness. Meta has already declined to sign the Code, arguing it would stifle innovation, while Google says it is committed to working with the AI Office to ensure the Code remains proportionate and responsive to the evolving AI landscape.
Apple is facing pressure from investors to accelerate its development of artificial intelligence (AI) features, following a slump in share prices and delays in introducing new AI capabilities. The company has been considering alternative approaches, including partnering with third-party AI models for Siri and acquiring AI startup Perplexity AI.
Arctic Wolf is integrating its technology with another company's platform to fuel the growth of its Aurora Platform. The integration will enhance threat detection and response for more than 10,000 global customers, leveraging Alpha AI's predictive and generative technologies, which are informed by 10 million hours of human SOC experience and a vast security telemetry dataset.
AuditBoard's latest research reveals that many global risk teams are struggling to effectively manage the risks associated with artificial intelligence (AI) technology. Despite having policies in place, only two-thirds of organizations conduct formal AI-specific risk assessments for third-party models or vendors, leaving one-third relying on external systems without clear understanding of potential risks. The main barriers to AI governance identified by respondents include lack of clear ownership, insufficient internal expertise, and resource constraints. To address these challenges, AuditBoard has released a report highlighting the need for integrated, operational approaches to AI risk management, emphasizing the importance of execution over awareness in implementing effective AI governance.
Australia plans to extend its landmark social media laws to YouTube, banning children under 16 from using the platform over concerns about "predatory algorithms" that target young users. Communications Minister Anika Wells said four in ten Australian children report having viewed harmful content on the platform. The move is part of Australia's broader effort to bar minors under 16 from social media sites such as Facebook and TikTok.
ChatGPT's app usage patterns are starting to resemble Google's, with users relying on it as a primary resource for both work and personal life. The platform reached 1 billion global downloads in record time, with users seeking answers on topics well beyond work and education, such as lifestyle and entertainment. The shift is driven largely by younger generations: Gen Z and zillennial consumers are increasingly turning to AI chatbots like ChatGPT for work and personal tasks, with only a small percentage reporting no usage at all.