**California Proposes New AI Governance Plan After Vetoing Strict Regulations**

California has taken a significant step toward shaping AI governance with the release of a new 52-page report, *The California Report on Frontier AI Policy*, which proposes a balanced approach to regulating AI while fostering innovation. The report comes after Governor Gavin Newsom vetoed Senate Bill 1047 last September, which would have imposed strict safety testing requirements on AI models costing over $100 million to train. Newsom argued the bill was too rigid and instead tasked a group of leading AI experts—including Stanford’s Fei-Fei Li and UC Berkeley’s Jennifer Chayes—with developing a more flexible alternative. The report emphasizes transparency, independent risk assessments, and safeguards against AI’s potential harms—including risks related to biosecurity, misinformation, and job displacement—without stifling technological progress.

### Third-Party Evaluations and Transparency Take Center Stage

A key recommendation in the report is third-party evaluation of AI models to ensure accountability, since self-regulation by companies like OpenAI and Google is deemed insufficient. The authors highlight the lack of transparency in AI development, citing “systemic opacity” in areas like data sourcing and safety testing, and propose whistleblower protections, independent audits, and public disclosures to address these gaps. AI companies, however, have been reluctant to grant deep access to their models, as seen in OpenAI’s limited data sharing with safety evaluator Metr. The report also warns that restrictive terms of service could deter independent researchers from exposing flaws, and it advocates legal safe harbors, similar to protections in cybersecurity research, to encourage rigorous external testing.

### Balancing Innovation and Risk Mitigation in a Fast-Evolving Field

The report acknowledges the rapid advancement of AI capabilities—including reasoning and autonomous functions—since SB 1047’s veto, underscoring the urgency of adaptable policies. While supporting California’s role as an AI innovation hub, the authors stress that unchecked development could lead to “severe and potentially irreversible harms.” They reject a compute-based regulatory approach, arguing that real-world impact and downstream risks should dictate oversight. The findings arrive amid a push by some lawmakers and tech firms for a federal moratorium on state-level AI laws, which the report’s authors oppose, advocating instead for state-led “harmonization” of sensible policies. With AI’s societal impact growing, the report sets the stage for California to pioneer governance that balances safety, transparency, and continued innovation.


This article was produced using Neural News AI (V1).

Source: https://www.theverge.com/ai-artificial-intelligence/688301/california-is-trying-to-regulate-its-ai-giants-again