The promise and peril of AI, according to 5 experts


Is AI going to kill us? Or take our jobs? Or is the whole thing overhyped? Depends on who you ask.

At this point, you have tried ChatGPT. Even Joe Biden has tried ChatGPT, and this week, his administration made a big show of inviting AI leaders like Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman to the White House to discuss ways they could make “responsible AI.”

But maybe, just maybe, you are still fuzzy on some very basics about AI — like, how does this stuff work, is it magic, and will it kill us all? — but don’t want to admit to that.

No worries. We have you covered: We’ve spent much of the spring talking to people working in AI, investing in AI, trying to build businesses in AI — as well as people who think the current AI boom is overblown or maybe dangerously misguided. We made a podcast series about the whole thing, which you can listen to over at Recode Media.

But we’ve also pulled out a sampling of insightful — and oftentimes conflicting — answers we got to some of these very basic questions. They’re questions that the White House and everyone else needs to figure out soon, since AI isn’t going away.

Read on — and don’t worry, we won’t tell anyone that you’re confused. We’re all confused.

Just how big a deal is the current AI boom, really?

Kevin Scott, chief technology officer, Microsoft: I was a 12-year-old when the PC revolution was happening. I was in grad school when the internet revolution happened. I was running a mobile startup right at the very beginning of the mobile revolution, which coincided with this massive shift to cloud computing. This feels to me very much like those three things.

Dror Berman, co-founder, Innovation Endeavors: Mobile was an interesting time because it provided a new form factor that allowed you to carry a computer with you. I think we are now standing in a completely different time: We’ve now been introduced to a foundational intelligence block that has become available to us, one that basically can lean on all the publicly available knowledge that humanity has extracted and documented. It allows us to retrieve all this information in a way that wasn’t possible in the past.

Gary Marcus, entrepreneur; emeritus professor of psychology and neural science at NYU: I mean, it’s absolutely interesting. I would not want to argue against that for a moment. I think of it as a dress rehearsal for artificial general intelligence, which we will get to someday.

But right now we have a trade-off. There are some positives about these systems. You can use them to write things for you. And there are some negatives. This technology can be used, for example, to spread misinformation, and to do that at a scale that we’ve never seen before — which may be dangerous, might undermine democracy.

And I would say that these systems aren’t very controllable. They’re powerful, they’re reckless, but they don’t necessarily do what we want. Ultimately, there’s going to be a question, “Okay, we can build a demo here. Can we build a product that we can actually use? And what is that product?”

I think in some places people will adopt this stuff. And they’ll be perfectly happy with the output. In other places, there’s a real problem.

How can you make AI responsibly? Is that even possible?

James Manyika, SVP of technology and society, Google: You’re trying to make sure the outputs are not toxic. In our case, we do a lot of generative adversarial testing of these systems. In fact, when you use Bard, for example, the output that you get when you type in a prompt is not necessarily the first thing that Bard came up with.

We run 15 or 16 different variations of the same prompt, look at those outputs, and pre-assess them for safety, for things like toxicity. We don’t catch every single one yet, but we’re already catching a lot.
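To make that idea concrete, here is a minimal sketch of that kind of candidate-filtering step. It illustrates the general pattern Manyika describes, not Google’s actual Bard pipeline; `generate_candidate` and `toxicity_score` are hypothetical stand-ins for a real language model call and a real safety classifier.

```python
import random
from typing import Optional


def generate_candidate(prompt: str, seed: int) -> str:
    """Stand-in for sampling one response from a language model."""
    random.seed(seed)
    return f"candidate response #{seed} to: {prompt}"


def toxicity_score(text: str) -> float:
    """Stand-in for a safety classifier; returns 0.0 (benign) to 1.0 (toxic)."""
    return random.random() * 0.3  # dummy score for illustration only


def safest_response(prompt: str, n_candidates: int = 16,
                    max_toxicity: float = 0.2) -> Optional[str]:
    """Sample several candidates for one prompt and return the least toxic
    one, or None if nothing passes the safety threshold."""
    candidates = [generate_candidate(prompt, i) for i in range(n_candidates)]
    scored = [(toxicity_score(c), c) for c in candidates]
    score, best = min(scored, key=lambda pair: pair[0])
    return best if score <= max_toxicity else None  # None -> fall back or refuse


print(safest_response("Explain how vaccines work."))
```

Real systems presumably rely on trained safety classifiers and more nuanced policies; the point is simply that the response a user sees can be the survivor of a pre-screening step rather than the model’s first draft.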

One of the bigger questions that we are going to have to face, by the way — and this is a question about us, not about the technology — is how do we think about what we value? How do we think about what counts as toxicity? That’s why we try to involve and engage with communities to understand those values, and why we involve ethicists and social scientists to research those questions. But those are really questions for us as a society.

Emily M. Bender, professor of linguistics, University of Washington: People talk about democratizing AI, and I always find that really frustrating because what they’re referring to is putting this technology in the hands of many, many people — which is not the same thing as giving everybody a say in how it’s developed.

I think the best way forward is cooperation, basically. You have sensible regulation coming from the outside so that the companies are held accountable. And then you’ve got the tech ethics workers on the inside helping the companies actually meet the regulation and meet the spirit of the regulation.

And to make all that happen, we need broad literacy in the population, so that people can ask for what’s needed from their elected representatives, and so that those representatives are, hopefully, literate in all of this.

Scott: We’ve spent from 2017 until today rigorously building a responsible AI practice. You just can’t release an AI to the public without a rigorous set of rules that define sensitive uses and without a harms framework. And you have to be transparent with the public about what your approach to responsible AI is.

How worried should we be about the dangers of AI? Should we worry about worst-case scenarios?

Marcus: Dirigibles were really popular in the 1920s and 1930s. Until we had the Hindenburg. Everybody thought that all these people doing heavier-than-air flight were wasting their time. They were like, “Look at our dirigibles. They scale a lot faster. We built a small one. Now we built a bigger one. Now we built a much bigger one. It’s all working great.”

So, you know, sometimes you scale the wrong thing. In my view, we’re scaling the wrong thing right now. We’re scaling a technology that is inherently unstable.

It’s unreliable and untruthful. We’re making it faster and giving it more coverage, but it’s still unreliable, still not truthful. For many applications that’s a problem. There are some for which it’s not.

ChatGPT’s sweet spot has always been making surrealist prose. It is now better at making surrealist prose than it was before. If that’s your use case, it’s fine, I have no problem with it. But if your use case is something where there’s a cost of error, where you do need to be truthful and trustworthy, then that is a problem.

Scott: It is absolutely useful to be thinking about these scenarios. It’s more useful to think about them grounded in where the technology actually is, and what the next step is, and the step beyond that.

I think we’re still many steps away from the things that people worry about. There are people who disagree with me on that assertion. They think there’s gonna be some uncontrollable, emergent behavior that happens.

And we are careful enough about that to have research teams thinking about the possibility of these emergent scenarios. But the thing that you would really have to have in order for some of the weird things to happen that people are concerned about is real autonomy — a system that could participate in its own development and have that feedback loop where you could get to some superhumanly fast rate of improvement. And that’s not the way the systems work right now. Not the ones that we are building.

Does AI have a place in potentially high-risk settings like medicine and health care?

Bender: We already have WebMD. We already have databases where you can go from symptoms to possible diagnoses, so you know what to look for.

There are plenty of people who need medical advice, medical treatment, who can’t afford it, and that is a societal failure. And similarly, there are plenty of people who need legal advice and legal services who can’t afford it. Those are real problems, but throwing synthetic text into those situations is not a solution to those problems.

If anything, it’s gonna exacerbate the inequalities that we see in our society. It amounts to saying: people who can pay get the real thing; people who can’t, well, good luck. You know: Shake the magic eight ball that will tell you something that seems relevant and give it a try.

Manyika: Yes, it does have a place, if I’m exploring a research question: How do we come to understand those diseases? But if I’m trying to get medical help for myself, I wouldn’t go to these generative systems. I’d go to a doctor, or to something where I know there’s reliable factual information.

Scott: I think it just depends on the actual delivery mechanism. You absolutely don’t want a world where all you have is some substandard piece of software and no access to a real doctor. But I have a concierge doctor, for instance. I interact with my concierge doctor mostly by email. And that’s actually a great user experience. It’s phenomenal. It saves me so much time, and I’m able to get access to a whole bunch of things that my busy schedule wouldn’t let me have access to otherwise.

So for years I’ve thought, wouldn’t it be fantastic for everyone to have the same thing? An expert medical guru that you can go to that can help you navigate a very complicated system of insurance companies and medical providers and whatnot. Having something that can help you deal with the complexity, I think, is a good thing.

Marcus: If it’s medical misinformation, you might actually kill someone. That’s actually the domain where I’m most worried about erroneous information from search engines.

Now people do search for medical stuff all the time, and these systems are not going to understand drug interactions. They’re probably not going to understand particular people’s circumstances, and I suspect that there will actually be some pretty bad advice.

We understand from a technical perspective why these systems hallucinate. And I can tell you that they will hallucinate in the medical domain. Then the question is: What becomes of that? What’s the cost of error? How widespread is that? How do users respond? We don’t know all those answers yet.

Is AI going to put us out of work?

Berman: I think society will need to adapt. A lot of those systems are very, very powerful and allow us to do things that we never thought would be possible. By the way, we don’t yet understand what is fully possible. We also don’t fully understand how some of those systems work.

I think some people will lose jobs. Some people will adjust and get new jobs. We have a company called Canvas that is developing a new type of robot for the construction industry, and it’s actually working with the union to train the workforce to use this kind of robot.

And a lot of the jobs that these technologies replace are not necessarily the jobs that people want to do anyway. So I think we are going to see a lot of new capabilities that will allow us to train people to do much more exciting jobs as well.

Manyika: If you look at most of the research on AI’s impact on work, if I were to summarize it in a phrase, I’d say it’s jobs gained, jobs lost, and jobs changed.

All three things will happen, because in some occupations a number of the tasks involved will probably decline. But there are also new occupations that will grow. So there’s going to be a whole set of jobs gained and created as a result of this incredible set of innovations. But I think the bigger effect, quite frankly — what most people will feel — is the jobs changed aspect of this.
