WEEK 40: AI Safety Bill

By Sophie Pochtler | Published On: October 4th, 2024

READING TIME: 7 MINS

Can't be bothered to read?
Listen to our Podcast instead!


Gavin Newsom’s AI Veto: Navigating the Tension Between Innovation and Regulation


In a high-stakes decision that has sent ripples through the tech and political landscape, California Governor Gavin Newsom has vetoed a major bill aimed at regulating artificial intelligence (AI) safety. The bill, which would have required developers of large AI models to conduct rigorous safety testing before deployment, had the potential to set a national standard for AI regulation. Newsom's veto has sparked a debate that pits Silicon Valley's innovation engine against calls for stricter oversight in a rapidly advancing field.

In this blog post, we’ll break down the details of the bill, explore the arguments on both sides, and assess what Newsom’s decision means for California’s future as a tech hub and the broader AI landscape.

The Bill That Could Have Changed AI Regulation Nationwide

The AI safety bill, Senate Bill 1047 (SB 1047), spearheaded by State Senator Scott Wiener, sought to impose safety testing requirements on large AI models, particularly those developed by major tech companies like Google, OpenAI, and Microsoft. The goal was simple: ensure that these powerful AI systems were rigorously tested for safety before being released to the public. Proponents argued that given AI's potential to create societal risks—like aiding the creation of bioweapons or spreading mass misinformation—California needed to step in where Congress had failed to act.

The legislation gained traction due to California’s central role in the global AI ecosystem. With Silicon Valley serving as the heart of AI innovation, any regulation passed in the state could effectively become a national—or even global—standard. This bill was seen as California’s chance to lead the way on AI regulation, much as it had with climate change and consumer privacy.

However, Newsom had other ideas.

Newsom’s Veto: Balancing Innovation and Public Safety

Governor Newsom, who has long maintained close ties with Silicon Valley, explained his decision by emphasizing the bill’s broad scope. According to Newsom, the legislation did not differentiate between AI systems used in high-risk environments—like health care or critical infrastructure—and more benign applications. This lack of nuance, he argued, would impose burdensome regulations on all AI developers, even those working on less consequential projects.

In his veto message, Newsom stated, “I do not believe this is the best approach to protecting the public from real threats posed by the technology.” Instead of imposing stringent regulations across the board, Newsom suggested a more targeted approach that considers the specific risks of each AI application. He expressed concerns that overregulation could stymie California’s thriving AI sector, threatening the state’s economic competitiveness.

Rather than fully dismissing AI regulation, Newsom paired his veto with a commitment to work on future legislation that could provide guardrails for high-risk AI systems. He also signed a more modest bill requiring the state’s emergency response agency to study AI risks, indicating that California will continue to explore ways to regulate AI without hindering innovation.

The Silicon Valley Factor: Who Stands to Gain?

Newsom’s decision has been framed as siding with Silicon Valley, and for good reason. Major tech companies like Google and OpenAI were vocal opponents of the bill, as were prominent venture capitalists and business groups like the California Chamber of Commerce. These stakeholders argued that the bill’s stringent safety testing requirements would create unnecessary red tape, particularly for startups that might struggle to comply with the proposed regulations.

Some of the bill’s most influential opponents included Ron Conway, a prominent Silicon Valley investor, and the venture capital firm Andreessen Horowitz (A16z). Both parties lobbied hard against the bill, warning that it could undermine California’s role as a global leader in tech innovation. Their message was clear: overregulation could push AI development to other states or countries, leaving California behind.

This opposition extended to the political sphere as well. Several prominent California Democrats in Washington, D.C., including former House Speaker Nancy Pelosi and Representative Ro Khanna, joined the tech industry in opposing the bill. Even San Francisco Mayor London Breed, a close ally of Senator Wiener, warned that the legislation could harm the city’s economy by deterring tech investment.

The Missed Opportunity: What Proponents Are Saying

On the other side of the debate, Senator Scott Wiener and his allies have expressed disappointment in Newsom's decision. Wiener's bill was designed to protect the public from potential harms posed by AI, and he believes the veto is a missed opportunity for California to take a leading role in regulating a technology that is evolving faster than lawmakers can keep up with.

In a public statement following the veto, Wiener said, “This veto is a missed opportunity for California to once again lead on innovative tech regulation… and we are all less safe as a result.” He and other proponents argue that without strong regulations in place, the most powerful AI systems—those capable of creating widespread harm—will continue to operate without any meaningful oversight.

Interestingly, the AI safety bill found support from some of the very people responsible for advancing the technology. Elon Musk and several leading AI researchers backed the legislation, arguing that without regulation, AI could become an existential risk to humanity. This internal division within the tech community highlights the complexity of the AI safety debate. Even those at the forefront of AI development recognize the need for guardrails—though they may disagree on how stringent those regulations should be.

A Fine Line: The Future of AI Regulation in California

Newsom’s veto leaves California at a crossroads. On one hand, the state is home to some of the most innovative tech companies in the world, and it must ensure that any regulations don’t stifle creativity or drive companies away. On the other hand, the rapid development of AI presents real risks, and lawmakers cannot afford to wait for Washington to catch up.

In his veto message, Newsom hinted at a collaborative approach to future AI legislation. He pledged to work with tech leaders, organized labor, and academic experts—like Stanford professor Fei-Fei Li—to develop a more refined regulatory framework that balances safety with innovation. Newsom also mentioned expanding AI applications in state agencies, exploring how AI could improve traffic management and customer service in public sectors.

The question remains: Will this approach be enough? Many advocates of stronger regulation believe that without clear, enforceable rules, AI could exacerbate existing societal problems—from economic inequality to the spread of misinformation. As AI continues to permeate all aspects of life, from entertainment to public safety, the stakes are higher than ever.

What’s Next for AI Regulation?

The veto of the AI safety bill marks the end of one chapter, but the debate over AI regulation is far from over. Newsom’s call for more targeted legislation suggests that California will continue to wrestle with how to regulate AI without sacrificing its role as a global tech leader. This issue will only grow in importance as AI becomes more integrated into our daily lives.

For those concerned about the unchecked power of AI, this veto might feel like a setback. But Newsom’s decision also signals that the conversation is evolving. As lawmakers, industry leaders, and the public grapple with the implications of AI, there’s an opportunity to shape a regulatory framework that not only safeguards society but also promotes innovation in responsible ways.

The balance between innovation and regulation is delicate, but the stakes couldn’t be higher. As AI continues to transform industries, economies, and even our daily routines, finding that balance will be crucial to ensuring that the technology serves the public good while allowing innovation to flourish.

Join the conversation: What do you think about Governor Newsom’s veto of the AI safety bill? Is it a step in the right direction, or a missed opportunity to protect the public from AI risks? Share your thoughts in the comments below, and be sure to follow us on LinkedIn for more updates on AI and tech regulation.

Written by: Sophie Pochtler

Sophie is a Product Designer with over 10 years of experience in Product Development at a technology firm in the food industry. Her passion for innovation and her daily use of AI over the past 3 years have shaped her into a solution-oriented innovator. Embracing the principles of human-centered design, she collaborates closely with businesses to understand their unique goals and challenges, and develops tailored solutions to match her clients' needs.