Sneha Revanur on empowering youth voices in AI, fighting for legislation and combating deepfakes and disinformation

by
Aron Yohannes
from The Mozilla Blog on (#6MYHK)

At Mozilla, we know we can't create a better future alone. That's why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Sneha Revanur, an activist behind Encode Justice, an organization that aims to elevate youth voices in support of human-centered AI. We talked with Sneha about her current work at Encode, working with legislators on AI regulation, fighting against disinformation and more.

So, the first question I wanted to ask you about is the work you've already done at such a young age, including, obviously, founding Encode Justice at 15. How did you become so knowledgeable and so passionate about these issues so early in your life?

Sneha Revanur: I say this quite a bit, but honestly, had I not been born in San Jose in the heart of Silicon Valley, it's totally possible Encode Justice would not exist. I think that growing up right there in the beating heart of innovation, being exposed to that culture from pretty much the day I was born, was really formative for me. I grew up in a household of software engineers - my parents both work in tech, my older sister works in tech. So I was kind of surrounded by that from a pretty young age. I think there was a point in my early childhood and middle school, probably, when I myself thought that I would go and pursue a career in computer science. It was only in and around early high school, when I began to think more critically about the social implications of these technologies, that I became more convinced I wanted to be involved on the policy side of things, because I felt that was an area where we didn't have as many voices who were actively involved in the conversation. So I definitely think just the fact that I was growing up in that orbit played a huge role, and I think that was really formative for me as a child. At the same time, it was really helpful to have a background in working on campaigns. I got super involved politically around the 2020 election when I was in high school - I got involved working on Congressional campaigns. That was kind of around the time of my political awakening, and I think that learning the ropes at that point, building out that skill set and that toolkit of really understanding how to do advocacy work, and better understanding how to apply that knowledge to the work that I wanted to do in the AI world, was really valuable for me. And so that's kind of how it became a perfect storm of factors.
In the summer of 2020 or so, I launched this initial campaign against a ballot measure in California that was seeking to replace the use of cash bail with an algorithm that had been shown in previous reports to be racially discriminatory. And so that was my initial entry point to AI-related advocacy, and that was kind of how the whole organization came to be following that campaign, and ever since then, the rest is history - we've just grown internationally, expanded to think about all sorts of risks and challenges in the age of AI. Not just bias, but also disinformation, job loss, so much more. And yeah, very excited to see where the future leads us as well.

In what ways were you able to become more informative about AI and the things that you're doing?

So I think that when I was first getting started in the space, there were a couple of resources and pieces of media that were really helpful for me. I remember watching Coded Bias by Dr. Joy Buolamwini for the first time, and that was obviously an incredible entry point. I also remember watching The Social Dilemma on Netflix as well. I read a couple of books like Automating Inequality, Algorithms of Oppression, Weapons of Math Destruction. A lot of the classic texts on algorithmic bias - those were really helpful for me as well. I actually think the initial entry point for me, the one that got me thinking about the ballot measure I was working on in California and the risk of algorithmic bias in pre-trial tools, was an exposé published by ProPublica. I find that ProPublica and The Markup have great investigative journalism on AI issues, and so those are obviously fantastic resources if you're thinking about tech-specific harms. So I think those are definitely some valuable resources for me, and ever since then, I've been expanding my repertoire of books. I also love The Alignment Problem, and Human Compatible by Stuart Russell. I think there's so much literature out there on the full gamut of risks posed by AI. But yeah, that's just a quick rundown of what I found to be most helpful.

A lot of younger people are growing up into this generation where AI is just a normal thing, right? How have you been able to see it become part of your daily life and in college as a young person?

I think over the past couple of years, the rate of AI adoption has just skyrocketed. I would say people probably use ChatGPT on a daily basis, if not many times per day - I myself use ChatGPT pretty actively. A lot of my peers do as well. I think there's a whole range of uses. I find it really promising that my generation is becoming better equipped to understand how to responsibly interact with these tools, and I think that only through trial and error, only through experimentation, can you figure out what kinds of use cases these tools are best equipped for and what kinds of use cases they're not as prepared for yet. And I think that it's really helpful that we're learning pretty early on how to integrate them into our lives in a meaningful and beneficial way. So I definitely think that the rate of adoption has really increased recently, and that's definitely been a promising development. I would also say, it's not just ChatGPT. Obviously, all of us are active social media users, or many of us are, and we're becoming intimately aware that our online experiences on social media are mediated by algorithms - the content that we're consuming online, the information we're being exposed to, whether that's TikTok - even that's under fire right now - or Instagram or Twitter, or anything of the sort. Like I said before, our online experiences are being shaped, governed, mediated by these complex algorithmic processes, and I think that young people might not be able to, in most cases, articulate the technical complexities of how those algorithms work, but they'll understand generally that it's looking at prior data about them, and they're becoming increasingly conscious of what kinds of personal information are being collected when they navigate online platforms.
So I think that's definitely true in relation to social media, in relation to general generative AI use and the integration of generative AI in the classroom as well. When it comes to general chatbots, for example, a lot of my peers were honestly quite disturbed by Snapchat's My AI tool, which is this chatbot that was just pinned to the top of your screen when you logged on, with no ability to opt out whatsoever. So I think that with the proliferation of those kinds of chatbots that are designed to be youth-facing tools, like ChatGPT and all sorts of things, I've just really seen it become a pivotal part of people's daily lives.

I don't think it is talked about enough how much the younger generation also feels that they should be included in being involved in the development of so much of the AI that's coming along. What are some of the things that you are advocating for the most with legislators and officials when it comes to regulating AI?

There's a whole host of things, I think. What's become more challenging for us as we've grown as an organization is we've also realized there are so many issues out there, and we want to have the capacity to take all of them on. This year, especially, we're thinking a lot about deepfakes and disinformation. Obviously, it's the largest election year in human history - we're going to have the governments of half the world's population up for election. And what that means is that people are going to be marching to the polls under a fog of disinformation. We're seeing how AI-generated disinformation has exploded online. We're seeing how deepfakes, not only in a political context, but also in the context of revenge porn, have been targeting vulnerable young girls and women, ranging from celebrities like Taylor Swift to ordinary people - girls in middle schools who are being impacted just because their classmates are able to make and disseminate these pretty sophisticated deepfake images on the spot. We've never had that kind of technology be so accessible to ordinary people. People always compare it to Photoshop, and it's just not at all analogous, because this is so hyperrealistic. We're talking not just photos, but also videos. I think we really are seeing some pretty concerning use cases already - again, not just in the realm of politics, but in people's daily and social lives as well. So I think that our top priority right now, especially in 2024, is going to be deepfakes and disinformation. But there's so much else we're thinking about as well. For example, we just had a member of our team return from Vienna, where they were hosting a conference on the use of autonomous weapons systems.
We're super concerned about the use of AI in warfare and some of the national security implications of that. We're obviously thinking a lot about algorithmic bias and job loss, especially as AI potentially begins to displace human labor. And of course, there are these growing debates over the potential catastrophic risks that could result from artificial intelligence, and whether or not it could empower bad actors by helping people design bioweapons or launch cyberattacks. Those are all things that we're really concerned about as well. So yeah, there's a full range of different issues here, but I would say the top thing we're prioritizing right now is the disinformation issue.

What do you think is the biggest challenge as a whole that we face in the world this year, and on and offline? And how do we combat it?

Well, this is a challenge that isn't just specific to AI; it's one that I'm seeing on a societal scale: it's this collapse of trust and human connection that I think is really, really concerning. And obviously AI is going to be the next frontier of that - whether it's young people turning to chatbots in lieu of friends and family, meaning that we're eventually going to erode the social bonds that sustain societies, or it's people being exposed to more and more AI-generated disinformation on social media and inherently not being able to trust what they see online. A couple of days ago, actually, I came across this deepfake recording of a principal in Baltimore, Maryland, where he was allegedly saying all these racist, antisemitic things, and it was completely doctored, obviously using AI. If you hear it, it sounds incredibly realistic. I wouldn't have thought to second-guess or interrogate it if I heard it without knowing that it was generated by AI. And so I think that we're really veering towards this state of almost reality collapse, as some have called it, where you don't really know how to sift through fact and fiction and understand what's real and what's not. And again, that's a larger problem that's not just related to AI, but AI is definitely going to be a driving force making things worse.

Where do you draw inspiration for the work that you do today?

I think that a lot of the names I mentioned before are some of the leading thinkers that I've been following in the space, and their books, their movies - things that have been super formative. But I would say, first and foremost, what I've found to be most inspiring is just seeing how this random issue that I was thinking about pretty much in a silo when I was 15 is now something that a lot more people my age are thinking a lot about. It's been really gratifying to see this movement grow from pretty much me in my bedroom when I was 15 to a thousand people now all over the world, and everyone's super passionate about it. It's just so amazing to see people hosting events in their countries and running workshops and reaching out to legislators. There's so much excitement and agency around this that I find really, really inspiring. So I would just say, what keeps me going and what I find really re-energizing is the spirit of the young people that I work with - seeing how immensely this network has grown, but also how deeply invested every single person is in the work, and how they're taking ownership over this in their own lives. That has been really, really powerful for me to see, so that's been really inspiring. In terms of direction and the issues, and who I'm taking inspiration from in that sense, like I mentioned, some big influences have been Joy Buolamwini, Stuart Russell, Yoshua Bengio - some of the top AI thinkers who, I think, are thinking about a broad range of risks. Getting that balance of perspectives has been really crucial for me in shaping my own views on AI.

Has anything in the last few years, from when you started at 15, surprised you that you maybe didn't anticipate?

Well, I mean, I did not realize this whole ChatGPT-induced boom in public interest would take place. There was a time, maybe two years ago, when I was like, "Am I just screaming into the void? What is going on here?" There was some interest in AI at that point, but definitely not at the level that it is right now, and I distinctly remember the feeling of going to lawmakers and feeling as though they would just be like, "Yeah, yeah, sounds good." And then at the end of the day, they had 20 other political priorities to get to. Obviously, there's still a long way to go when it comes to getting federal legislation on AI passed, but I think it was so inspiring coming out of a lot of the conversations around ChatGPT to have the same lawmakers who once ghosted us reaching out to us, asking for briefings and wanting to get up to speed on the issues. Seeing that absolute reversal of fortune was just stunning, and really promising - going from being in a silo to a topic that was being discussed on campus, in the dining halls, with students and professors, and seeing the conversation expand beyond the initial bubble. That has been really, really powerful for me.

What is one action that you think that everyone should take to make the world and our lives a little bit better?

There are so many things that I could say. The first thing that I'm thinking about right now, especially in this critical year for deepfakes and disinformation: if you're living in the U.S., call your member of Congress and urge them to pass deepfake legislation. I think it's such an important priority this year, and unfortunately, it's just not being prioritized, especially with so much else going on on the national political stage. So I would say call on your leaders to demand stronger AI regulation. I think that there are lots of ways that people can take direct action, whether or not you live in the U.S., and whether or not you know a lot about AI or have been exposed to AI issues in the past.

We started Rise25 to celebrate Mozilla's 25th anniversary. What do you hope that people are celebrating in the next 25 years?

I hope that we're celebrating a safer social media ecosystem where all users have agency and ownership over their personal data and their online experiences. I hope that we are moving towards a more AI-literate world where people are prepared to navigate the surge of, for example, disinformation they're going to experience, and understand how to navigate a world where you might be applying for a job and there's an algorithmic screening tool reviewing your application, or you're standing trial and there's a risk assessment tool assessing your level of criminal risk. I think people need to be aware of those things, and I hope we're moving towards a more AI-literate world in that sense. I hope that we have stronger international coordination on AI. I think that it's truly a borderless issue, and right now we're seeing a patchwork of different domestic regulations. We really need a harmonized international approach - some sort of Paris climate agreement, but for AI. I would say those are a couple of things that I'm thinking about and hoping for over the next couple of decades.

What gives you hope about the future of our world?

I've said this before, but I think what gives me hope is seeing the next generation so fired up and thinking a lot about this. And I think it's also really exciting to think about the fact that the next generation of people who are actually building these technologies are going to be approaching it with a much different mindset, and with a much different frame of thinking, than the people who have been building these technologies in the past. Seeing that seismic shift has been really rewarding, definitely. And I mean, I'm excited to see how the next couple of years shake out. So I think it's a mixture of optimism and, obviously, anxiety about the future. But first and foremost, the people that I work with, and my peers, have really inspired me.

External Content
Source RSS or Atom Feed
Feed Location http://blog.mozilla.com/feed/
Feed Title The Mozilla Blog
Feed Link https://blog.mozilla.org/en/