California’s New AI Safety Law: A Balance Between Regulation and Innovation
California Governor Gavin Newsom has signed SB 53 into law, a bill that prioritizes AI safety and transparency and marks a significant milestone in the state’s efforts to regulate the rapidly evolving AI industry. According to Adam Billen, vice president of public policy at Encode AI, the legislation demonstrates that state regulation does not have to hinder AI progress. In fact, Billen argues that policymakers can craft legislation that protects innovation while ensuring the safety of AI products.
SB 53 requires large AI labs to be transparent about their safety and security protocols, particularly with regard to preventing their models from being used to commit cyberattacks on critical infrastructure or to build bioweapons. The law also mandates that companies adhere to these protocols, with enforcement handled by the Office of Emergency Services. Billen notes that companies are already conducting safety testing on their models and releasing model cards, but some may be tempted to skimp on safety standards under competitive pressure.
The Importance of Safety Protocols in AI Development
Billen emphasizes that some AI firms have explicit policies that allow them to relax safety standards under competitive pressure. For instance, OpenAI has publicly stated that it may “adjust” its safety requirements if a rival AI lab releases a high-risk system without similar safeguards. This highlights the need for policy that holds companies to their existing safety promises and prevents them from cutting corners under competitive or financial pressure. By regulating AI safety and transparency, the government can ensure that companies prioritize the well-being of their users and the broader public.
While some may argue that AI regulation will hinder innovation, Billen disagrees. He points out that bills like SB 53 are designed to address specific subsets of AI-related risks, such as deepfakes, transparency, algorithmic discrimination, children’s safety, and governmental use of AI. These regulations can actually support American progress in the AI race by promoting safer, more responsible development of AI technologies.
Industry Reactions and the AI Moratorium
Despite the muted public opposition to SB 53, the rhetoric in Silicon Valley and among most AI labs has been that almost any AI regulation is anathema to progress and will ultimately hinder the U.S. in its race to beat China. Companies like Meta, VCs like Andreessen Horowitz, and powerful individuals like OpenAI president Greg Brockman have collectively invested hundreds of millions into super PACs to back pro-AI politicians in state elections. Additionally, they have pushed for an AI moratorium that would ban states from regulating AI for 10 years.
However, Encode AI, led by Billen, has been actively working to defeat such proposals. The organization assembled a coalition of more than 200 organizations to oppose the AI moratorium, and its efforts succeeded. Nevertheless, the fight is not over: Senator Ted Cruz has introduced the SANDBOX Act, which would allow AI companies to apply for waivers to temporarily bypass certain federal regulations for up to 10 years.
Export Controls and the Chip Security Act
Billen argues that if the goal is to beat China in the AI race, then policymakers should focus on export controls, such as the Chip Security Act, which aims to prevent the diversion of advanced AI chips to China. He also suggests that the existing CHIPS and Science Act, which seeks to boost domestic chip production, is a step in the right direction. However, some major tech companies, including OpenAI and Nvidia, have expressed reluctance or opposition to certain aspects of these efforts, citing concerns about effectiveness, competitiveness, and security vulnerabilities.
Billen speculates that OpenAI may be holding back on chip export advocacy to stay in the good graces of crucial suppliers like Nvidia, which has a strong financial incentive to continue selling chips to China. The inconsistent messaging from the Trump administration has also contributed to the complexity of the issue, with the administration reversing course on an export ban on advanced AI chips to China just three months after expanding it.
Democracy in Action: SB 53 as a Proof Point
Despite the challenges and controversies surrounding AI regulation, Billen views SB 53 as an example of democracy in action: industry and policymakers working together to craft a bill everyone can agree on. While the process may be “ugly and messy,” it is a testament to how American democracy and its economic system are meant to work. Billen hopes this approach will continue to succeed, as it is essential for promoting safer, more responsible development of AI technologies.
As the AI industry continues to evolve, it is crucial for policymakers, industry leaders, and the public to work together to ensure that AI development prioritizes safety, transparency, and responsibility. By doing so, we can promote a future where AI technologies benefit society as a whole, rather than just a select few.
Image Credit: techcrunch.com