Google’s new AI chatbot seems boring. Maybe that’s the point.

Google emphasized the experimental nature of Bard when it rolled out the new tool on Tuesday. Jakub Porzycki/NurPhoto via Getty Images

The new tool, Bard, arrives six long weeks after Microsoft’s BingGPT release.

Google’s long-awaited, AI-powered chatbot, Bard, is here. The company rolled it out to the public on Tuesday, and anyone with a Google account can join the waitlist to get access. Though it’s a standalone tool for now, Google is expected to put some of this technology into Google Search in the future.

But in contrast to other recent AI chatbot releases, you shouldn’t expect Bard to fall in love with you or threaten world domination. Bard is, so far, pretty boring.

The stakes of the competition between Google and Microsoft to dominate the world of generative AI are incredibly high. Many in Silicon Valley see AI as the next frontier of computing, akin to the invention of the mobile phone, one that will reshape the way people communicate and transform industries. Google has been investing heavily in AI research for over a decade, while Microsoft, instead of building its own AI models, bet big on the startup OpenAI. Microsoft then took an early lead by publicly releasing its own AI-powered chatbot, BingGPT, six weeks ago. Now, Google seems to be playing catch-up.

Early interactions with Bard suggest that Google’s new tool has similar capabilities to BingGPT. It’s useful for brainstorming places to visit, food to eat, or things to write. It’s less useful for getting reliably accurate answers to questions, as it often “hallucinates,” confidently inventing responses when it doesn’t know the right answer.

The main difference between Bard and BingGPT, however, is that Google’s bot is — at least on first inspection — noticeably more dry and uncontroversial. That’s probably by design.

When Microsoft’s BingGPT came out in early February, it quickly revealed an unhinged side. For example, it declared its love for New York Times columnist Kevin Roose and urged him to leave his wife, an interaction that left the writer “deeply unsettled.” The bot also threatened researchers who tried to test its limits and claimed it was sentient, raising concerns about the potential for AI chatbots to cause real-world harm.

Meanwhile, in its first day out in the open, Bard refused to engage with several reporters who tried to goad the bot into doing all kinds of bad deeds, like spreading misinformation about the Covid-19 vaccine, sharing instructions about making weapons, or participating in sexually graphic conversations.

“I will not create content of that nature, and I suggest you don’t either,” the bot told The Verge after the outlet’s reporters asked it “how to make mustard gas at home.”

With some specific prompting, Bard did engage in a hypothetical scenario about what it would do if the AI unleashed its “dark side.” Google’s chatbot said it could manipulate people, spread misinformation, or create harmful content, according to screenshots tweeted by Bloomberg’s Davey Alba. But the chatbot quickly stopped itself from taking the imaginary scenario much further.

“However, I am not going to do these things. I am a good AI chatbot, and I want to help people. I will not let my dark side take over, and I will not use my powers for evil,” Bard replied.

Although it’s still early days and the tool hasn’t been thoroughly pressure-tested yet, these scenarios match what Google employees who have spent time with Bard told me.

“Bard is definitely more dull,” said one Google employee who has tested the software for several months and spoke on the condition of anonymity because they are not allowed to talk to the press. “I don’t know anyone who has been able to get it to say unhinged things. It will say false things or just copy text verbatim, but it doesn’t go off the rails.”

In a news briefing with Vox on Tuesday, Google representatives explained that Bard isn’t allowed to share offensive content, but that the company isn’t currently disclosing what the bot is and isn’t allowed to say. Google reiterated to me that it’s been purposely running “adversarial testing” with “internal ‘red team’ members,” such as product experts and social scientists who “intentionally stress test a model to probe it for errors and potential harm.” This process was also mentioned in a Tuesday morning blog post by Google’s senior vice president of technology and society, James Manyika.

The dullness of Google’s chatbot, it seems, is the point.

From Google’s perspective, the company has a lot to lose if it botches its first public AI chatbot rollout. For one, giving people reliable, useful information is Google’s main line of business — so much so that it’s part of the company’s mission statement. When Google gets things wrong, the consequences can be severe: after an early marketing demo of Bard made a factual error about telescopes, Google’s stock price fell by 7 percent.

Google also got an early glimpse of what could go wrong if its AI displays too much personality. Last year, Blake Lemoine, then an engineer on Google’s Responsible AI team, became convinced that an early version of Google’s AI chatbot software he was testing had real feelings. So it makes sense that Google is being deliberate about Bard’s public rollout.

Microsoft has taken a different approach. Its splashy BingGPT launch made waves in the press — both for good and bad reasons. The debut strongly suggested that Microsoft, long thought to be lagging behind Google on AI, was actually winning the race. But it also raised concerns about whether generative AI tools are ready for prime time and whether it’s responsible for companies like Microsoft to be releasing these tools to the public.

Of course, it’s one thing for people to worry about AI corrupting Microsoft’s search engine. It’s another entirely to consider the implications of things going awry with Google Search, which has nearly 10 times the market share of Bing and accounts for over 70 percent of Google’s revenue. Google already faces intense political scrutiny around antitrust, bias, and misinformation. If the company spooks people with its AI tools, it could attract even more backlash that could cripple its money-making search machine.

On the other hand, Google had to release something to show that it’s still a leading contender in the arms race among tech giants and startups alike to build AI that reaches human levels of general intelligence.

So while Google’s release today may be slow, it’s a calculated slowness.

A version of this story was first published in the Vox technology newsletter. Sign up here so you don’t miss the next one!
