Examining AI’s Effect on Media and Truth
Today, one of the biggest issues facing the internet - and society - is misinformation.
It's a complicated issue, but this much is certain: The artificial intelligence (AI) powering the internet is complicit. Platforms like YouTube and Facebook recommend and amplify content that will keep us clicking, even if it's radical or flat-out wrong.
Earlier this year, Mozilla called for art and advocacy projects that illuminate the role AI plays in spreading misinformation. And today, we're announcing the winners: Eight projects that highlight how AI technologies like machine learning affect our understanding of the truth.
These eight projects will receive Mozilla Creative Media Awards totaling $200,000, and will launch to the public by May 2020. They include a Turing Test app, a YouTube recommendation simulator, educational deepfakes, and more. Awardees hail from Japan, the Netherlands, Uganda, and the U.S. Learn more about each awardee below.
Mozilla's Creative Media Awards fuel the people and projects on the front lines of the internet health movement. Past Creative Media Award winners have built mock dating apps that highlight algorithmic discrimination; they've created games that simulate the inherent bias of automated hiring; and they've published clever tutorials that mix cosmetic advice with cybersecurity best practices.
These eight awards align with Mozilla's focus on fostering more trustworthy AI.
The winners

[1] Truth-or-Dare Turing Test | by Foreign Objects in the U.S.
This project explores deceptive AI that mimics real humans. Users play truth-or-dare with another entity and, at the conclusion of the game, must guess whether they were playing with a fellow human or an AI. ("Truths" are played out using text, and "dares" are played out using an online sketchpad.) The project also includes a website outlining the state of mimicry technology, its uses, and its dangers.
[2] Swap the Curators in the Tube | by Tomo Kihara in Japan
This project explores how recommendation engines present different realities to different people. Users will peruse the YouTube recommendations of five wildly different personas - including a conspiracist and a racist persona - to experience how their recommendations differ.
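To make the mechanism concrete, here is a minimal, hypothetical sketch of persona-driven divergence: a toy recommender that ranks videos by tag overlap with a user's watch history. The catalog, tags, and personas are invented for illustration and are not drawn from YouTube or from Kihara's project.

```python
# Toy recommender: rank catalog items by tag overlap with a user's
# watch history. Two personas with different histories get steered
# toward different corners of the same catalog.
CATALOG = {
    "moon_landing_doc":    {"space", "history"},
    "flat_earth_expose":   {"space", "conspiracy"},
    "lizard_people_truth": {"conspiracy"},
    "apollo_archive":      {"space", "history", "archive"},
}

def recommend(history_tags: set[str], top_n: int = 2) -> list[str]:
    """Return the top_n catalog titles whose tags best match the history."""
    scored = sorted(
        CATALOG.items(),
        key=lambda item: len(item[1] & history_tags),  # size of tag overlap
        reverse=True,
    )
    return [title for title, _ in scored[:top_n]]

historian    = {"history", "archive"}
conspiracist = {"conspiracy"}
print(recommend(historian))     # ['apollo_archive', 'moon_landing_doc']
print(recommend(conspiracist))  # ['flat_earth_expose', 'lizard_people_truth']
```

Even this crude similarity ranking never shows the two personas the same feed, which is the divergence the project lets users experience firsthand.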
[3] An Interview with ALEX | by Carrie Wang in the U.S.
This project is a browser-based experience that simulates a job interview with an AI in a future of gamified work and total surveillance. As the interview progresses, users learn that this automated HR manager is covering up the truth about the job, and is using facial and speech recognition to make assumptions and decisions about them.
[4] The Future of Memory | by Xiaowei Wang, Jasmine Wang, and Yang Yuting in the U.S.
This project explores algorithmic censorship and the ways language can be made illegible to censoring algorithms. It reverse-engineers how automated censors work to provide a toolkit of tactics built on a new "machine-resistant" language composed of emoji, memes, steganography, and homophones. The project will also archive censored materials on a distributed, physical network of offline modules.
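As a rough illustration of the underlying idea (not the project's actual toolkit), the sketch below pairs a naive keyword censor with a rewrite step that hyphenates or emoji-swaps flagged words so the filter no longer matches them; the blocklist and substitutions here are invented.

```python
# Hypothetical blocklist and substitutions, for illustration only.
BLOCKLIST = {"protest", "strike"}

SUBSTITUTIONS = {
    "protest": "pro-test",  # hyphenation breaks exact-match filters
    "strike": "🥢trike",    # emoji stands in for the leading letter
}

def is_censored(text: str) -> bool:
    """Naive exact-keyword censor, the way a simple automated filter works."""
    words = text.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

def make_machine_resistant(text: str) -> str:
    """Rewrite flagged words so the keyword filter no longer matches,
    while a human reader can still recover the meaning."""
    out = []
    for word in text.split():
        key = word.lower().strip(".,!?")
        out.append(SUBSTITUTIONS.get(key, word))
    return " ".join(out)

message = "Join the protest on Friday"
print(is_censored(message))                          # True
print(is_censored(make_machine_resistant(message)))  # False
```

Real censors are more sophisticated than an exact-match blocklist, which is why the project's toolkit layers multiple tactics rather than relying on any single substitution.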
[5] Choose Your Own Fake News | by Pollicy in Uganda
This project uses comics and audio to explore how misinformation spreads across the African continent. Users engage in a choose-your-own-adventure game that simulates how retweets, comments, and other digital actions can sow misinformation, and how that misinformation intersects with gender, religion, and ethnicity.
[6] Deep Reckonings | by Stephanie Lepp in the U.S.
This project uses deepfakes to address the issue of deepfakes. Three false videos will show public figures - like tech executives - reckoning with the dangers of synthetic media. Each video will be clearly watermarked and labeled as a deepfake to prevent misinformation.
[7] In Event of Moon Disaster | by Halsey Burgund, Francesca Panetta, Magnus Bjerg Mortensen, Jeff DelViscio, and the MIT Center for Advanced Virtuality
This project uses the 1969 moon landing to explore the topic of modern misinformation. Real coverage of the landing will be presented on a website alongside deepfakes and other false content, to highlight the difficulty of telling the two apart. And by tracking viewers' attention, the project will reveal which content captivated viewers more.
[8] Most FACE Ever | by Kyle McDonald in the U.S.
This project teaches users about computer vision and facial analysis technology through playful challenges. Users enable their webcam, engage with facial analysis, and try to "look" a certain way - say, "criminal" or "white." The game reveals how inaccurate and biased facial analysis often is.
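For a sense of how uneven off-the-shelf facial analysis can be, here is a minimal sketch using OpenCV's stock Haar-cascade face detector; the image filenames are hypothetical placeholders. Running one unchanged detector across different photos and comparing what it finds is the same kind of probing the game invites.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_faces(path: str) -> int:
    """Run the detector on one image and return how many faces it finds."""
    image = cv2.imread(path)
    if image is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# Hypothetical test photos; the same detector can find a face in one
# portrait and miss it in another, depending on lighting, pose, and skin tone.
for path in ["portrait_a.jpg", "portrait_b.jpg"]:
    print(path, count_faces(path))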
These eight awardees were selected based on quantitative scoring of their applications by a review committee and a qualitative discussion at a review committee meeting. Committee members included Mozilla staff, current and alumni Mozilla Fellows and Awardees, and outside experts. The selection criteria were designed to evaluate the merits of the proposed approach; diversity in applicant background, past work, and medium was also considered.
These awards are part of the NetGain Partnership, a collaboration between Mozilla, the Ford Foundation, the Knight Foundation, the MacArthur Foundation, and the Open Society Foundations. The goal of this philanthropic collaboration is to advance the public interest in the digital age.
Also see (May 2019): Seeking Art that Explores AI, Media, and Truth