
Finding The Wisest Ways To Global AI Regulation

by
Eliza Strickland
from IEEE Spectrum

Welcome to Fixing the Future, an IEEE Spectrum podcast. I'm senior editor Eliza Strickland, and today I'm talking with Stanford University's Russell Wald about efforts to regulate artificial intelligence. Before we launch into this episode, I'd like to let listeners know that the cost of membership in the IEEE is currently 50 percent off for the rest of the year, giving you access to perks, including Spectrum Magazine and lots of education and career resources. Plus, you'll get an excellent IEEE-branded Rubik's Cube when you enter the code CUBE online. So go to IEEE.org/join to get started.

Over the past few years, people who pay attention to research on artificial intelligence have been astounded by the pace of developments, both the rapid gains in AI's capabilities and the accumulating risks and dark sides. Then, in November, OpenAI launched the remarkable chatbot ChatGPT, and the whole world started paying attention. Suddenly, policymakers and pundits were talking about the power of AI companies and whether they needed to be regulated. With so much chatter about AI, it's been hard to understand what's really happening on the policy front around the world. So today on Fixing the Future, I'm talking with Russell Wald, managing director for policy and society at Stanford's Institute for Human-Centered Artificial Intelligence. Russell, thank you so much for joining me today.

Russell Wald: Thanks so much. It's great to be here.

We're seeing a lot of calls for regulation right now for artificial intelligence. And interestingly enough, some of those calls are coming from the CEOs of the companies involved in this technology. The heads of OpenAI and Google have both openly discussed the need for regulations. What do you make of these calls for regulations coming from inside the industry?

Wald: Yeah. It's really interesting that the calls come from inside the industry. I think it demonstrates that they are in a race. There is a part here where we look at this and say they can't stop and collaborate, because you start to get into antitrust issues if you were to go down those lines. So I think that for them, it's trying to create a more balanced playing field. But of course, what really comes from this, as I see it, is that they would rather work now to help create some of those regulations than face reactive regulation later. So it's an easier pill to swallow if they can try to shape this now at this point. Of course, the devil's in the details on these things, right? It's always, what type of regulation are we talking about when it comes down to it? And the reality is we need to ensure that when we're shaping regulations, of course, industry should be heard and have a seat at the table, but others need to have a seat at the table as well: academia, civil society, people who are really taking the time to study what is the most effective regulation that still will hold industry's feet to the fire a bit but allow them to innovate.

Yeah. And that brings us to the question, what most needs regulating? In your view, what are the social ills of AI that we most need to worry about and constrain?

Wald: Yeah. If I'm looking at it from an urgency perspective, for me, the most concerning thing is synthetic media right now. And the question on that, though, is what is the regulatory area here? I'm concerned about synthetic media because of what will ultimately happen to society if no one has any confidence in what they're seeing and the veracity of it. So of course, I'm very worried about deepfakes, elections, and things like this, but I'm just as worried about the Pope in a puffy coat. And the reason I'm worried about that is because if there's a ubiquitous amount of synthetic media out there, what that's ultimately going to do is create a moment where no one's going to have confidence in the veracity of what they see digitally. And when you get into that situation, people will choose to believe what they want to believe, whether it's an inconvenient truth or not. And that is really concerning.

So just this week, an EU Commission vice president noted that they think the platforms should be disclosing whether something is AI-generated. I think that's the right approach because you're not going to be able to necessarily stop the creation of a lot of synthetic media, but at a minimum, you can stop the amplification of it, or at least put on some level of disclosure that signals it may not be in reality what it says it is, so that you are at least informed about that. That's one of the biggest areas. The other thing that I think we need to look at, in terms of overall regulation, is more transparency regarding foundation models. There's just so much data that's been hoovered up into these models. They're very large. What's going into them? What's the architecture of the compute? Because at least if you are seeing harms come out of the back end, by having a degree of transparency, you're going to be able to say, "Aha." You can go back to what that very well may have been.

That's interesting. So that's a way to maybe get at a number of different end-user problems by starting at the beginning.

Wald: Well, it's not just starting at the beginning, which is a key part; the primary part is the transparency aspect. That is what is significant, because it allows others to validate. It allows others to understand where some of these models are going and what ultimately can happen with them. It ensures that we have a more diverse group of people at the table, which is something I'm very passionate about. And that includes academia, which historically has had a very vibrant role in this field, but since 2014 we've seen a slow decline of academia in this space in comparison to where industry's really taking off. And that's a concern. We need to make sure that we have a diverse set of people at the table to be able to ensure that when these models are put out there, there's a degree of transparency so that we can help review and be part of that conversation.

And do you also worry about algorithmic bias and automated decision-making systems that may be used in judicial systems, or legal systems, or medical contexts, things like that?

Wald: Absolutely. And so much so in the judicial systems that I think if we are going to talk about where there could be pauses, it's less so, I guess, on research and development, but very much so on deployment. So without question, I am very concerned about some of these biases, and biases in high-risk areas. But again, coming back to the transparency side, that is one area where you can have a much richer ecosystem of being able to chase these down and understand why that might be happening in order to try to limit or mitigate these types of risks.

Yeah. So you mentioned a pause. Most of our listeners will probably know about the pause letter, as people call it, which was calling for a six-month pause in experiments with giant AI systems. And then, a couple months after that, there was an open statement by a number of AI experts and industry insiders saying that we must take seriously the existential risk posed by AI. What do you make of those kinds of concerns? Do you take seriously the concerns that AI might pose an existential threat to our species? And if so, do you think that's something that can be regulated or should be thought about in a regulatory context?

Wald: So first, I think, like all things in our society these days, everything seems to get so polarized so quickly. So when I look at this and I see people concerned about either existential risk or saying you're not focused on the immediacy of the immediate harms, I take people at their word that they come at this in good faith and from differing perspectives. When I look at this, though, I do worry about this polarization of these sides and our inability to have a genuine, true conversation. In terms of existential risk, is it the number one thing on my mind? No. I'm more worried about human risk being applied with some of these things now. But to say that existential risk is a 0% probability, I would say no. And so, therefore, of course, we should be having strong and thoughtful dialogs about this, but I think we need to come at it from a balanced approach. If we look at it this way, the positive of the technology is pretty significant. If we look at what AlphaFold has done with protein folding, that in itself could have such a significant impact on health and the targeting of rare diseases with therapies that would not have been available before. However, at the same time, there's the negative of one area that I am truly concerned about in terms of existential risk, and that is where the human comes into play with this technology. And that's things like synthetic bio, right? Synthetic bio could create agents that we cannot control, and there could be a lab leak or something that could be really terrible. So it's how we think about what we're going to do in a lot of these particular cases.

At the Stanford Institute for Human-Centered AI, we are a grant-making organization internally for our faculty. And before they can even get started with a project that they want to have funded, they have to go through an ethics and society review statement. And you have to go and you have to say, "This is what I think will happen and these are the dual-use possibilities." And I've been on the receiving end of this, and I'll tell you, it's not just a walk in the park with a checklist. They've come back and said, "You didn't think about this. How would you ameliorate this? What would you do?" And just by taking that holistic aspect of understanding the full risk of things, this is one step that we could do to be able to start to learn about this as we build this out. But again, just to get back to your point, I think we really have to just look at this and the broad risk of this and have genuine conversations about what this means and how we can address this, and not have this hyperpolarization that I'm starting to see a little bit, and it's concerning.

Yeah. I've been troubled by that too, especially the sort of vitriol that seems to come out in some of these conversations.

Wald: Everyone can be a little bit over the top here. And I think it's great that people are passionate about what they're worried about, but we have to be constructive if we're going to get towards things here. So it's something I very much feel.

And when you think about how quickly the technology is advancing, what kind of regulatory framework can keep up or can work with that pace of change? I was talking to one computer scientist here in the US who was involved in crafting the blueprint for the AI Bill of Rights who said, "It's got to be a civil rights framework because that focuses more on the human impact and less on the technology itself." So he said it can be an Excel spreadsheet or a neural network that's doing the job, but if you just focus on the human impact, that's one way to keep up with the changing technology. But yeah, I'm just curious about your ideas about what would work in this way.

Wald: Yeah. I'm really glad you asked this question. What I have is a greater concern that even if we came up with the optimal regulations tomorrow, ones that really were ideal, it would be incredibly difficult for government to enforce them right now. My role is really spending more time with policymakers than anything else. And when I spend a lot of time with them, the first thing that I hear is, "I see this X problem, and I want to regulate it with Y solution." And oftentimes, I'll sit there and say, "Well, that will not actually work in this particular case. You're not solving or ameliorating the particular harm that you want to regulate." And what I see that needs to be done first, before we can fully go ahead with regulations, is a pairing of this with investment, right? We don't have a structure that really looks at this, and if we said, "Okay, we'll just put out some regulations," I have concern that we wouldn't be able to effectively achieve them. So what do I mean by this? First, largely, I think we need more of a national strategy. And part of that national strategy is ensuring that we have policymakers as informed as possible on this. I spend a lot of time in briefings with policymakers. You can tell the interest is growing, but we need more formalized ways of making sure that they understand all of the nuance here.

The second part of this is we need infrastructure. We absolutely need a degree of infrastructure that ensures that we have a wider range of people at the table. That includes the National AI Research Resource, which I have been personally passionate about for quite a few years. The third part of this is talent. We've got to recruit talent. And that means we need to really look at STEM immigration and see what we can do, because the data, at least within the US, shows that for those students who can't stay here, the visa hurdles are just too terrible. They pick up and go, for example, to Canada. We need to expand programs like the Intergovernmental Personnel Act that can allow people who are in academia or other nonprofit research to go in and out of government and inform government so that they're more clear on this.

Then, finally, we need to bring regulation into this space in a systematic way. And on the regulatory front, I see there are two parts here. First, there are new, novel regulations that will need to be applied. And again, the transparency part would be one; I would get into mandated disclosures on some things. But the second part of this is there's a lot of low-hanging fruit with existing regulations in place. And I am heartened to see that the FTC and DOJ have at least put out some statements that if you are using AI for nefarious purposes or deceptive practices, or you are claiming something is AI when it's not, we're going to come after you. And the reason why I think this is so important is right now we're shaping an ecosystem. And when you're shaping that ecosystem, what you really need is to ensure that there is trust and validity in that ecosystem. And so I frankly think the FTC and DOJ should bring the hammer down on anybody that's using this for any deceptive practice so that we can actually start to deal with some of those issues. And under that entire regime, you're more likely to have the most effective regulations if you can staff up some of these agencies appropriately to help with this. And that's what I find to be one of the most urgent areas. So when we're talking about regulation, I'm so for it, but we've got to pair it up with that level of government investment to back it up.

Yeah. That would be a really good step to see what is already covered before we go making new rules, I suppose.

Wald: Right. Right. And there are a lot of existing areas where some of these things are easily covered, and it's a no-brainer, but I think AI scares people and they don't understand how it applies. I'm also very much for a federal data privacy law. Let's start early with some of that type of work on what goes into these systems at the very beginning.

So let's talk a little bit about what's going on around the world. The European Union seemed to get the first start on AI regulations. They've been working on the AI Act since, I think, April 2021, when the first proposal was issued, and it's been winding its way through various committees, and there have been amendments proposed. So what's the current status of the AI Act? What does it cover? And what has to happen next for it to become enforceable legislation?

Wald: The next step in this is you have the European Parliament's version of this, you have the Council, and you have the Commission. And essentially, what they need to look at is how they're going to merge these and which areas will go into the actual final law. So in terms of overall timeline, I would say we're still about another good year off from anything probably coming into enforcement. I would say a very good year off, if not more. But to that end, what is interesting is, again, this rapid pace that you noted and the change of this. What is in the Council and Commission versions really doesn't cover foundation models to the same level that the European Parliament's does. And the European Parliament, because it was a little bit later in this, has this area of foundation models that they're going to have to look at, which will have a lot more key aspects on generative AI. So it's going to be really interesting what ultimately happens here. And this is the problem of some of this rapidly moving technology. I was just talking about this recently with some federal officials. We did a virtual training last year where we had some of our Stanford faculty come in and record these videos. They're available for thousands of people in the federal workforce. And they're great, but they barely touched on generative AI, because it was last summer, and no one had really gotten into the deep end of that and started addressing the issues related to generative AI. Obviously, they knew generative AI was a thing then; these are brilliant faculty members. But it wasn't as broad or ubiquitous. And now here we are, and it is like the issue du jour. So the interesting thing is how fast the technology is moving. And that gets back to my earlier point of why you really need a workforce that gets this, so that they can quickly adapt and make changes that might be needed in the future.

And does Europe have anything to gain really by being the first mover in this space? Is it just a moral win if they're the ones who've started the regulatory conversation?

Wald: I do think that they have some things to gain. I do think a moral win is a big win, if you ask me. Sometimes I do think that Europe can be that good conscience and force the rest of the world to think about these things. As some of your listeners might be familiar with, there's the Brussels Effect. And essentially, the Brussels Effect, for those that don't know, is the concept that Europe has such a large market share that it's able to force through its rules and regulations, which, being the most stringent, become the model for the rest of the world. And so a lot of industries just base their entire approach to managing regulation on the most stringent set, and that generally comes from Europe. The challenge for Europe is the degree to which they are investing in the innovation itself. So they have that powerful market share, and it's really important, but where Europe is going to be in the long run is a little bit to be determined. I will say a former part of the EU, the UK, is actually doing some really, really interesting work here. They are speaking almost to that level of, "Let's have some degree of regulation, look at existing regulations," but they're really invested in the infrastructure piece of giving out the tools broadly. So the Brits have a proposal for an exascale computing system that's £900 million. So the UK is really trying to double down on the innovation side and, where possible, do the regulatory side, because they really want to see themselves as the leader. I think Europe might need to look into, as much as possible, fostering an environment that will allow for that same level of innovation.

Europe seemed to get the first start, but am I right in thinking that the Chinese government may be moving the quickest? There have been a number of regulations, not just proposed in the past few years, but I think actually put into force.

Wald: Yeah. Absolutely. So there's the Brussels Effect, but what happens now when you have the Beijing Effect? Because in Beijing's case, they don't just have market share, they also have a very strong innovative base. What happened in China was last year, around March of 2022, there were some regulations that came about related to recommender systems. And under some of these, you could call for redress or for a human to audit the system. It's hard to get the same level of data out of China, but I'm really interested in looking at how they apply some of these regulations. Because what I really find fascinating is the scale, right? So when you say you allow for a human review, I can't help but think of this analogy. A lot of people apply for a job, and most people who apply for a job think that they are qualified, or they're not going to waste their time applying for the job. And what happens if you never get that interview, and what happens if a lot of people don't get that interview and go and say, "Wait a minute, I deserved an interview. Why didn't I get one? Go lift the hood of your system so I can have a human review." I think that there's a degree of legitimacy to that. The concern is whether that can be scaled to meet that moment. And so I'm really watching that one. They also had last year the deep synthesis [inaudible] that came into effect in January of 2023, which spends a lot of time looking at deepfakes. And this year, related to generative AI, there is some initial guidance. And what this really demonstrates is a concern that the state has. The People's Republic of China, or the Communist Party in this case, refers to a need for social harmony, and generative AI should not be used for purposes that disrupt that social harmony. So I think you can see concern from the Chinese government about what this could mean for the government itself.

It's interesting. Here in the US, you often hear people arguing against regulations by saying, "Well, if we slow down, China's going to surge ahead." But I feel like that might actually be a false narrative.

Wald: Yeah. I have an interesting point on that, though. And I think it refers back to that last point on the recommender systems and the ability for human redress or a human audit of that. I don't want to say that I'm not for regulations. I very much am for regulations. But I always want to make sure that we're doing the right regulations, because oftentimes regulations don't harm the big player, they harm the smaller player, because the big player can afford to manage through some of this work. But the other part is there could be a sense of false comfort that can come from some of these regulations because they're not solving for what you want them to solve for. And so I don't want to say the US is at a Goldilocks moment. But you really can see what the Chinese do in this particular space and how it's working, whether it will work, and whether there might be other variables that come into play that would say, "Okay, well, this clearly would work in China, but it could not work in the US." It's almost like a test bed. You know how they always say that the states are the incubators for democracy? It's kind of interesting how the US can see what happens in New York, what happened with New York City's hiring algorithm law. Then from there, we can start to say, "Wow, it turns out that regulation doesn't work. Here's one that we could have here." My only concern is the rapid pace of this might necessitate that we need some regulation soon.

Right. And in the US, there have been earlier bills on the federal level that have sought to regulate AI. The Algorithmic Accountability Act last year, which went pretty much nowhere. The word on the street is now that Senator Chuck Schumer is working on a legislative framework and is circulating that around. Do you expect to see real concrete action here in the US? Do you think there'll actually be a bill that gets introduced and gets passed in the coming year or two?

Wald: Hard to tell, I would say, on that. What I would say is first, it is unequivocal. I have been working with policymakers for almost four years now on this specific subject. And it is unequivocal right now that since ChatGPT came out, there is this awakening on AI. Whereas before, I was trying to beat down their doors and say, "Hey, let's have a conversation about this," and now I cannot even remotely keep up with the inbound that is coming in. So I am heartened to see that policymakers are taking this seriously. And I have had conversations with numerous policymakers, without divulging which ones, but I will say that Senator Schumer's office is eager, and I think that's great. They're still working out the details. I think what's important about Schumer's office is it's one office that can pull together a lot of senators and pull together a lot of people to look at this. And one thing that I do appreciate about Schumer is that he thinks big and bold. And his level of involvement says to me, "If we get something, it's not going to be small. It's going to think big. It's going to be really important." So to that end, I would urge the office, as I've noted, to not just think about regulations, but also the crucial need for public investment in AI. Those two things don't necessarily need to be paired into one big mega bill, but they should be considered in every step that they take together: for every regulatory idea you're thinking about, you should have a degree of public investment that you're thinking about with it as well, so that we can make sure we have a more balanced ecosystem.

I know we're running short on time. So maybe one last question, and then I'll ask if I missed anything. But for our last question, how might a consumer experience the impact of AI regulations? I was thinking about the GDPR in Europe and how the impact for consumers was that they basically had to click an extra button every time they went to a website to say, "Yes, I accept these cookies." Would AI regulations be visible to the consumer, do you think, and would they change people's lives in obvious ways? Or would it be much more subtle and behind the scenes?

Wald: That's a great question. And I would probably posit back another question: how much do people see AI in their daily lives? I don't think you see that much of it, but that doesn't mean it's not there. That doesn't mean that there are not municipalities that are using systems that will deny benefits or allow for benefits. That doesn't mean banks aren't using this for underwriting purposes. So it's really hard to say whether consumers will see this, but the thing is, consumers, I don't think, see AI in their daily lives, and that's concerning as well. So I think what we need to ensure is that there is a degree of disclosure related to automated systems. People should be made aware of when this is being applied, and they should be informed when that's happening. That could be a regulation that they do see, right? But for the most part, no, I don't think it's as front and center in people's minds, and that's not to say that it's not there. It is there. And we need to make sure we get this right, because people are going to be harmed throughout this process. Think of the first man falsely arrested because of facial recognition technology, I think it was in 2020, Robert Williams, I believe his name was, and what that meant to his reputation, all of that kind of stuff, for literally having no association with the crime.

So before we go, is there anything else that you think it's really important for people to understand about the state of the conversation right now around regulating AI or around the technology itself? Anything that the policymakers you talk with seem to not get that you wish they did?

Wald: The general public should be aware that what we're starting to see is the tip of the iceberg. I think there have been a lot of things that have been in labs, and I think there's going to be just a whole lot more coming. And with that whole lot more coming, I think that we need to find ways to adhere to some kind of balanced argument. Let's not go to the extreme of, "This is going to kill us all." Let's also not allow for a level of hype that says, "AI will fix this." And so I think we need to be able to have a neutral view and say, "There are some unique benefits this technology will offer humanity and make a significant impact for the better, and that's a good thing, but at the same time there are some very serious dangers from this. How is it that we can manage that process?"

To policymakers, what I want them most to be aware of when they're thinking about this and trying to educate themselves is that they don't need to know how to use TensorFlow. No one's asking them to understand how to develop a model. What I recommend is that they understand what the technology can do, what it cannot do, and what its societal impacts will be. I oftentimes talk to people who say, "I need to know about the deep parts of the technology." Well, we also need policymakers to be policymakers. And particularly, elected officials have to be an inch deep but a mile wide. They need to know about Social Security. They need to know about Medicare. They need to know about foreign affairs. So we can't have the expectation for policymakers to know everything about AI. But at a minimum, they need to know what it can and cannot do and what that impact on society will be.

Russell, thank you so much for taking the time to talk all this through with me today. I really appreciate it.

Wald: Oh, it's my pleasure. Thank you so much for having me, Eliza.

That was Stanford's Russell Wald, speaking to us about efforts to regulate AI around the world. I'm Eliza Strickland, and I hope you'll join us next time on Fixing the Future.
