### California Proposes New AI Governance Framework After Veto of Strict Regulations
A new report from California outlines a balanced approach to AI governance, aiming to foster innovation while mitigating risks. The 52-page “California Report on Frontier AI Policy” follows Governor Gavin Newsom’s veto last September of Senate Bill 1047, which would have imposed stringent testing requirements on AI models costing over $100 million to train. Newsom argued the bill was too rigid and instead tasked leading AI experts, including Stanford’s Fei-Fei Li and UC Berkeley’s Jennifer Tour Chayes, with developing a more flexible regulatory framework. The report highlights AI’s rapid advancement and its potential impact on sectors such as healthcare, finance, and education, while warning of severe risks, including AI-assisted weapons development.
### Transparency and Third-Party Evaluations Key to AI Safety
The report emphasizes the need for greater transparency and independent scrutiny of AI models, citing the industry’s current lack of standardized safety practices. Rather than relying on self-assessments by companies such as OpenAI and Anthropic, the authors advocate third-party evaluations to ensure unbiased risk assessments. They acknowledge obstacles, however, such as AI firms restricting access to their models, as seen in OpenAI’s limited data sharing with the safety tester Metr. To address this, the report calls for whistleblower protections and legal safeguards for independent researchers, similar to those afforded to cybersecurity testers. The authors argue that a diverse pool of third-party evaluators can identify risks better than in-house teams, which often lack demographic and geographic representation.
### Balancing Innovation and Regulation Amid Federal Uncertainty
While federal lawmakers debate a potential 10-year moratorium on state AI regulation, California aims to lead state-level governance with “commonsense policies.” The report warns against overly restrictive rules that could stifle innovation but insists on proactive measures to prevent irreversible harm. It also critiques regulatory thresholds based solely on the computing power used to train a model, suggesting a more nuanced approach grounded in real-world risks. With AI capabilities evolving rapidly, the authors stress the urgency of adaptable policies. As AI’s societal impact grows, California’s framework could serve as a model for other states, offering a middle ground between unchecked development and heavy-handed regulation.
This article was produced with Neural News AI (V1).