Get ready to fight misinformation in 2024. Eric Schmidt has advice.
This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
We're already at that time of year when we start looking ahead to what's coming in 2024. For Technocrat readers (and the rest of the globe!), next year is going to be a doozy, with over 40 national elections worldwide and a landscape of constantly evolving information technologies.
One of the biggest areas to watch, of course, will be generative AI, particularly how it changes social media, political campaigning, and the fight over election misinformation. This confluence of new tech and big elections is also happening while the social media industry is going through major changes, including shifts in moderation approaches, legal battles, cuts to trust and safety teams, and platform shake-ups.
This is all poised to make the future of the fight against misinformation murky, to say the least. It's a topic my colleagues and I take very seriously and have covered extensively in the past. And recently in MIT Technology Review, former Google boss Eric Schmidt penned an op-ed that lays out what he calls a paradigm shift for social media platforms:
The role of Facebook and others has conditioned our understanding of social media as centralized, global "town squares" with a never-ending stream of content and frictionless feedback. Yet the mayhem on X (a.k.a. Twitter) and declining use of Facebook among Gen Z, alongside the ascent of apps like TikTok and Discord, indicate that the future of social media may look very different. In pursuit of growth, platforms have embraced the amplification of emotions through attention-driven algorithms and recommendation-fueled feeds.
But that's taken agency away from users (we don't control what we see) and has instead left us with conversations full of hate and discord, as well as a growing epidemic of mental-health problems among teens ... Now, with AI starting to make social media much more toxic, platforms and regulators need to act quickly to regain user trust and safeguard our democracy.
Schmidt goes on to lay out a six-point plan social media companies can follow to meet the moment. One thing I was happy to see him mention is the importance of provenance information, which I have written about a few times previously. It's an insightful and useful piece that I'd definitely urge you to read!
This is the last Technocrat of 2023, and I'll be back in your inbox in January. In the meantime, over the next few weeks we'll be publishing more stories about what's to come in technology in 2024, so be on the lookout for those. And if you want to catch up on some past stories that you may have missed, here are just a few of my favorites from my colleagues in 2023:
- This new tool could give artists an edge over AI, from Melissa Heikkila
- ChatGPT is going to change education, not destroy it, from Will Douglas Heaven
- ChatGPT is about to revolutionize the economy. We need to decide what that looks like, from David Rotman
- Why the dream of fusion power isn't going away, from Casey Crownhart
- Deepfakes of Chinese influencers are livestreaming 24/7, from Zeyi Yang
- Well, the EU AI Act has now been agreed on, setting the global standard for AI regulation! Here are five things you need to know about it, from my colleague Melissa. And if you want to know more about why this was so hard to get across the finish line, read my Technocrat from last week.
- I found this story from Vox on how chatbot therapy may be useful very enlightening.
- This investigation into the Yahoo Human Rights Fund and an ongoing lawsuit, which claims very little of the money went where it was supposed to go, raises really interesting questions about how tech companies deal with political pressure and messaging.
Microsoft's Bing AI chatbot, renamed Microsoft Copilot, got election information wrong one-third of the time, according to a new study from the nonprofits AI Forensics and AlgorithmWatch. Will Oremus in the Washington Post writes that "the study results reinforce concerns that today's AI chatbots could contribute to confusion and misinformation around future elections as Microsoft and other tech giants race to integrate them into everyday products, including internet search." Here's a reminder not to rely on generative AI for news!