**California Proposes New AI Governance Plan After Vetoing Strict Regulations**
California has unveiled a new AI governance framework after Governor Gavin Newsom vetoed Senate Bill 1047, which sought strict oversight for large AI models. The report, drafted by leading AI researchers, calls for transparency, third-party evaluations, and safeguards that balance innovation with risk mitigation. With AI capabilities evolving rapidly, the plan aims to prevent irreversible harms without stifling tech development.
**AI Risks Demand Third-Party Oversight, Says California’s New Report**
A new California report warns that powerful AI models could cause "severe and irreversible harms" without proper safeguards. The study recommends independent risk assessments, whistleblower protections, and public transparency to counter industry opacity. As federal AI regulation stalls, California could lead state-level efforts to harmonize policies and prevent catastrophic risks.
**Tech Giants Resist Transparency as California Pushes AI Safeguards**
Despite calls for accountability, major AI firms remain reluctant to grant third-party evaluators the full access needed to test models for risks. California's new report highlights the need for independent scrutiny, citing gaps in self-regulation by companies like OpenAI and Anthropic. The proposal aims to protect whistleblowers and mandate disclosures to ensure AI safety without hindering innovation.
**AI Policy Shift: California Charts Middle Path Between Innovation and Safety**
After rejecting strict AI regulations last year, California has proposed a balanced governance model emphasizing transparency and third-party audits. The report warns of AI’s potential weaponization but stresses avoiding heavy-handed rules that could stifle progress. With federal regulation uncertain, the state may set a precedent for managing AI risks responsibly.