6 takeaways from The Washington Post Futurist Tech Summit in D.C.
Journalists from The Washington Post, U.S. policymakers and influential business leaders gathered for a day of engaging discussions about technology on March 21 in the nation's capital.
Mozilla sponsored "The Futurist Summit: The New Age of Tech," an event focused on addressing the wide range of promise and risks associated with emerging technologies - the largest of them being Artificial Intelligence (AI). It featured interviews moderated by journalists from The Post, as well as interactive sessions about tech for audience members in attendance at the paper's office in Washington, D.C.
Missed the event? Here are six takeaways you should know about:
1. How OpenAI is preparing for the election.
The 2024 U.S. presidential election is one of the biggest topics of discussion around the emergence and dangers of AI this year. It's no secret that AI has incredible power to create misinformation and fake media content (video, photos, audio) that can unfairly sway voters.
OpenAI, one of the biggest AI organizations, stressed the importance of providing transparency for its users to ensure its tools aren't being used in those negative ways to mislead the public.
"It's four billion people voting, and that is really unprecedented, and we're very, very cognizant of that," OpenAI VP of Global Affairs Anna Makanju said. "And obviously, it's one of the things that we work - to ensure that our tools are not used to deceive people and to mislead people."
Makanju reiterated that concerns about AI and the election play out on a very large scale, and that OpenAI is focused on engaging with other companies to hammer down transparency in the 2024 race.
"This is like a whole of society issue," Makanju said. "So that's why we have engaged with other companies in this space as well. As you may have seen in the Munich Security Conference, we announced the Tech Accord, where we're going to collaborate with social media companies and other companies that generate AI content, because there's the issue of generation of AI content and the issue of distribution, and they're quite different. So, for us, we really focus on things like transparency. ... We of course have lots of teams investigating abuse of our systems or circumvention of the use case guidelines that are intended to prevent this kind of work. So, there are many teams at OpenAI working to ensure that these tools aren't used for election interference."
And OpenAI will be in the spotlight even more as the election inches closer. According to a report from Business Insider, OpenAI is preparing to launch GPT-5 this summer, which will reportedly eclipse the abilities of the ChatGPT chatbot.
The Futurist Summit focused on the wide range of promise and risks associated with emerging technologies.
2. Policymakers address the potential TikTok ban.
The House overwhelmingly voted 352-65 on March 13 to pass a measure that presents ByteDance, the parent company of TikTok, with a choice: Sell the social media platform or face a nationwide ban on all U.S. devices.
One of the top lawmakers on the Senate Intelligence Committee, Sen. Mark Warner (D-Va.), addressed the national security concerns around TikTok on a panel moderated by political reporter Leigh Ann Caldwell alongside Sen. Todd Young (R-Ind.).
"There is something uniquely challenging about TikTok because ultimately if this information is turned over to the Chinese espionage services that could be then potentially used for nefarious purposes, that's not a good thing for America's long-term national security interests," Warner said. "End of the day, all we want is it could be an American company, it could be a British company, it could be a Brazilian company. It just needs not to be from one of the nation states, China being one of the four, that are actually named in American law as adversarial nations."
Young chimed in shortly after Warner: "Though I have not authored a bill on this particular topic, I've been deeply involved, for several years running now, in this effort to harden ourselves against a country, China, that has weaponized our economic interdependence in various ways."
The measure now heads to the Senate, which has not yet scheduled a vote on it.
3. Deep Media AI is fighting against fake media content.
AI to fight against AI? Yes, it's possible!
AI being able to alter how we perceive reality through deepfakes - in other words, synthetic media - is another danger of the emerging technology. Deep Media AI founder Rijul Gupta is countering that AI problem with AI of his own.
In a video demonstration alongside tech columnist Geoffrey Fowler, Gupta showcased how Deep Media AI scans and detects deepfakes in photos, videos and audio files to combat the issue.
For example, Deep Media AI can determine if a photo is fake by looking at wrinkles, reflections and things humans typically don't pay attention to. In the audio space, which Gupta described as "uniquely dangerous," the technology analyzes the waves and patterns. It can detect video deepfakes by tracking motion of the face - how it moves, the shape and movement of lips - and changes in lighting.
A good sign: At the start of Gupta's presentation, audience members were asked to identify which of two video clips (one real, one generated by OpenAI) was a deepfake. The majority of people in attendance guessed correctly. Even better: Deep Media AI detected that it was fake, scoring a perfect 100/100 in its detection system.
"Generative AI is going to be awesome; it's going to make us all rich; it's going to be great," Gupta said. "But in order for that to happen, we need to make it safe. We're part of that, but we need militaries and governments. We need buy-in from the generative AI companies. We need buy-in from the tech ecosystem. We need detectors. And we need journalists to tell us what's real, and what's fake from a trusted source, right? I think it's possible. We're here to help, but we're not the only ones here. We're hoping to provide solutions that people use."
Linda Griffin, VP of Global Policy at Mozilla, interviewed by The Washington Post's Kathleen Koch.
4. Mozilla's push for trustworthy AI
As we continue to shift toward a world where AI is genuinely helpful, it's important to keep human beings involved in that process as much as possible. When companies build AI with only profit in mind and not the public, it erodes public trust and faith in big tech.
This work is urgent, and Mozilla has been delivering its trustworthy AI report - first published in 2020 and given a status update in February - to aid in aligning with our vision of a healthy internet where openness, competition and accountability are the norms.
"We want to know what you think," Mozilla VP of Global Policy Linda Griffin said. "We're trying to map and guide where we think these conversations are. What is the point of AI unless more people can benefit from it more broadly? What is the point of this technology if it's just in the hands of the handful of companies thinking about their bottom line?
"They do important and really interesting things with the technology; that's great. But we need more; we need the public counterpoint. So, for us, trustworthy AI, it's about accountability, transparency, and having humans in the loop thinking about people wanting to use these products and feeling safe and understanding that they have recourse if something goes wrong."
5. AI's ability to change rules in the NFL (yes!).
While the NFL is early in the process of incorporating AI into the game of football, the league has found ways to get the ball rolling (pun intended) on using its tools to make the game smarter and better.
One area is health and safety, a major priority for the NFL. The league uses AI and machine learning tools on the field to run predictive analysis that identifies the plays and body positions most likely to lead to player injuries. Then it can adjust rules and strategies accordingly, if it chooses.
For example, kickoffs. Concussions sustained on kickoffs dropped by 60 percent in the NFL last season, from 20 to eight. That is because kickoffs were returned less frequently after the league adjusted the rules governing kickoff returns during the previous offseason, so that a returner could signal for a fair catch no matter where the ball was kicked, and the ball would be placed on the 25-yard line. This change came after the NFL used AI tools to gather injury data on those plays.
"The insight to change that rule had come from a lot of the data we had collected with chips on the shoulder pads of our players of capturing data, using machine learning, and trying to figure out what is the safest way to play the game," Brian Rolapp, Chief Media & Business Officer for the NFL, told media reporter Ben Strauss, "which led to an impact of rule change."
While kickoff injuries have gone down, tweaking one of the most exciting plays in football is a tough sell. So this year, the NFL is working on a compromise and exploring new ideas that balance safety and excitement. A vote on the matter will take place at league meetings this week in front of coaches, general managers and ownership.
6. Don't forget about tech for accessibility.
With the new chapter of AI, the possibilities for investing in and creating tools for people with disabilities are endless. For those who are blind, have low vision or have trouble hearing, AI offers an entirely new slate of capabilities.
Apple has been one of the companies at the forefront of creating features for people with disabilities who use its products. For example, Apple has implemented live captions, sound recognition and voice control on iPhones to assist users.
Sarah Herrlinger, Senior Director of Global Accessibility Policy & Initiatives at Apple, gave insight into how the tech giant decides what features to add and which ones to update. In doing so, she delivered one of the best talking points of the day.
"I think the key to that is really engagement with the communities," Herrlinger said. "We believe very strongly in the disability mantra of 'nothing about us without us,' and so it starts with first off employing members of these communities within our ranks. We never build for a community. We build with them."
Herrlinger was joined on stage by retired Judge David S. Tatel; Mike Buckley, the chair and CEO of Be My Eyes; and Amanda Morris, disability reporter for The Post. When asked about the future of accessibility for people who are blind, Tatel shared a touching sentiment that many in the disability space resonate with.
"It's anything that improves and enhances my independence, and enhances it seamlessly, is what I look for," Tatel said. "That's it. Independence, independence, independence."