
California’s governor has vetoed a historic AI safety bill

California Gov. Gavin Newsom speaks during a press conference with the California Highway Patrol announcing new efforts to boost public safety in the East Bay, in Oakland, California, July 11, 2024. | Stephen Lam/San Francisco Chronicle via Getty Images

Advocates said it would be a modest law setting “clear, predictable, common-sense safety standards” for artificial intelligence. Opponents argued it was a dangerous and arrogant step that would “stifle innovation.”

In any event, SB 1047 — California state Sen. Scott Wiener’s proposal to regulate advanced AI models offered by companies doing business in the state — is now kaput, vetoed by Gov. Gavin Newsom. The proposal had garnered wide support in the legislature, passing the California State Assembly by a margin of 48 to 16 in August. Back in May, it passed the Senate by 32 to 1.

The bill, which would hold AI companies liable for catastrophic harms their “frontier” models may cause, was backed by a wide array of AI safety groups, as well as luminaries in the field like Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, who have warned of the technology’s potential to pose massive, even existential dangers to humankind. It got a surprise last-minute endorsement from Elon Musk, who among his other ventures runs the AI firm xAI.

Lined up against SB 1047 was nearly all of the tech industry, including OpenAI, Facebook, the powerful investors Y Combinator and Andreessen Horowitz, and some academic researchers who fear it threatens open source AI models. Anthropic, another AI heavyweight, lobbied to water down the bill. After many of its proposed amendments were adopted in August, the company said the bill’s “benefits likely outweigh its costs.”

Despite the industry backlash, the bill seemed to be popular with Californians. In a poll designed jointly by supporters and a leading opponent of the bill (to ensure the questions were worded fairly), Californians backed the legislation by 54 percent to 28 percent after hearing arguments from both sides.

The wide, bipartisan margins by which the bill passed the Assembly and Senate, and the public’s general support (when not asked in a biased way), might make Newsom’s veto seem surprising. But it’s not so simple. Andreessen Horowitz, the $43 billion venture capital giant, hired Newsom’s close friend and Democratic operative Jason Kinney to lobby against the bill, and a number of powerful Democrats, including eight members of the US House from California and former Speaker Nancy Pelosi, urged a veto, echoing talking points from the tech industry.

That was the faction that eventually won out, keeping California — the center of the AI industry — from becoming the first state to establish robust AI liability rules. Oddly, Newsom justified his veto by arguing that SB 1047 did not go far enough. Because it focuses “only on the most expensive and large-scale models,” he worried that the bill “could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047.”

Newsom’s decision has sweeping implications not just for AI safety in California, but also in the US and potentially the world.

What’s inside SB 1047

Given all the intense lobbying it attracted, one might think SB 1047 was an aggressive, heavy-handed bill. But especially after several rounds of revisions in the State Assembly, the bill actually proposed to do fairly little.

It would have offered whistleblower protections to tech workers, along with a process for people with confidential information about risky behavior at an AI lab to take their complaints to the state attorney general without fear of prosecution. It would also have required AI companies that spend more than $100 million to train an AI model to develop safety plans. (The extraordinarily high threshold for this requirement to kick in was meant to protect California’s startup industry, which objected that the compliance burden would be too high for small companies.)

So what was it about this bill that prompted months of hysteria, intense lobbying from the California business community, and unprecedented intervention by California’s federal representatives? Part of the answer is that the bill used to be stronger. The initial version based its compliance threshold on computing power, meaning that over time, more companies would have become subject to the law as computers got cheaper and more powerful. It would also have established a state agency, the “Frontier Models Division,” to review safety plans; the industry objected to the perceived power grab.

Another part of the answer is that a lot of people were falsely told the bill did more. One prominent critic inaccurately claimed that AI developers could be guilty of a felony regardless of whether they were involved in a harmful incident, when the bill only imposed criminal liability if a developer knowingly lied under oath. (Those provisions were subsequently removed anyway.) Rep. Zoe Lofgren, a member of the House Science, Space, and Technology Committee, wrote a letter in opposition falsely claiming that the bill requires adherence to guidance that doesn’t exist yet.

But the standards do exist (you can read them in full here), and the bill did not require firms to adhere to them. It said only that “a developer shall consider industry best practices and applicable guidance” from the US Artificial Intelligence Safety Institute, National Institute of Standards and Technology, the Government Operations Agency, and other reputable organizations. 

A lot of the discussion of SB 1047 unfortunately centered around straightforwardly incorrect claims like these, in many cases propounded by people who should have known better. 

SB 1047 was premised on the idea that near-future AI systems might be extraordinarily powerful, that they accordingly might be dangerous, and that some oversight is required. That core proposition is extraordinarily controversial among AI researchers. Nothing exemplifies the split more than the three men frequently called the “godfathers of machine learning,” Turing Award winners Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. Bengio — a Future Perfect 2023 honoree — and Hinton have both in the last few years become convinced that the technology they created may kill us all and argued for regulation and oversight. Hinton stepped down from Google in 2023 to speak openly about his fears. 

LeCun, who is chief AI scientist at Meta, took the opposite tack, declaring that such worries are nonsensical science fiction and that any regulation would strangle innovation. Where Bengio and Hinton find themselves supporting the bill, LeCun opposed it, especially the idea that AI companies should face liability if AI is used in a mass casualty event. 

In this sense, SB 1047 was the center of a symbolic tug-of-war: Does government take AI safety concerns seriously, or not? The actual text of the bill may have been limited, but to the extent that it suggested government was listening to the half of experts that think that AI might be extraordinarily dangerous, the implications were big. 

It’s that sentiment that likely drove some of the fiercest lobbying against the bill from venture capitalists Marc Andreessen and Ben Horowitz, whose firm a16z worked relentlessly to kill it, and some of the highly unusual outreach to federal legislators demanding they oppose a state bill. More mundane politics likely played a role, too: Politico reported that Pelosi opposed the bill because she’s trying to court tech VCs for her daughter, who is likely to run against Scott Wiener for a House of Representatives seat.

Why SB 1047 is so important

It might seem strange that legislation in just one US state had so many people wringing their hands. But remember: California is not just any state. It’s where several of the world’s leading AI companies are based.

And what happens there is especially important because, at the federal level, lawmakers have been dragging out the process of regulating AI. Between Washington’s hesitation and the looming election, it’s falling to states to pass new laws. The California bill, had Newsom signed it, would have been one big piece of that puzzle, setting the direction for the US more broadly.

The rest of the world is watching, too. “Countries around the world are looking at these drafts for ideas that can influence their decisions on AI laws,” Victoria Espinel, the chief executive of the Business Software Alliance, a lobbying group representing major software companies, told the New York Times in June.

Even China — often invoked as the boogeyman in American conversations about AI development (because “we don’t want to lose an arms race with China”) — is showing signs of caring about safety, not just wanting to run ahead. Bills like SB 1047 could telegraph to others that Americans also care about safety. 

Frankly, it’s refreshing to see legislators wise up to the tech world’s favorite gambit: claiming that it can regulate itself. That claim may have held sway in the era of social media, but it’s become increasingly untenable. We need to regulate Big Tech. That means not just carrots, but sticks, too.

Now that Newsom has killed the bill, he may face some sticks of his own. A poll from the pro-SB 1047 AI Policy Institute found that 60 percent of voters were prepared to blame him for future AI-related incidents if he vetoed SB 1047. And they’d punish him at the ballot box if he ran for higher office: 40 percent of California voters said they would be less likely to vote for Newsom in a future presidential primary election if he vetoed the bill.

Editor’s note, September 29, 5 PM ET: This story, originally published on August 31, has been updated to reflect California Gov. Gavin Newsom’s decision to veto SB 1047.
