Join me at EmTech Digital this week!
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
I'm excited to spend this week in Cambridge, Massachusetts. I'm visiting the mothership for MIT Technology Review's annual flagship AI conference, EmTech Digital, on May 22-23.
Between the world leaders gathering in Seoul for the second AI Safety Summit this week and Google and OpenAI's launches of their supercharged new models, Astra and GPT-4o, the timing could not be better. AI feels hotter than ever.
This year's EmTech will be all about how we can harness the power of generative AI while mitigating its risks, and how the technology will affect the workforce, competitiveness, and democracy. We will also get a sneak peek into the AI labs of Google, OpenAI, Adobe, AWS, and others.
This year's top speakers include Nick Clegg, the president of global affairs at Meta, who will talk about what the platform intends to do to curb misinformation. In 2024, over 40 national elections will happen around the world, making it one of the most consequential political years in history. At the same time, generative AI has enabled an entirely new age of misinformation. And it's all coalescing, with major shake-ups at social media companies and information platforms. MIT Technology Review's executive editor Amy Nordrum will press Clegg on stage about what this all means for democracy.
Here are some other sessions I am excited about.
A Peek Inside Google's Plans
Jay Yagnik, a vice president and engineering fellow at Google, will share what the history of AI can teach us about where the technology is going next and discuss Google's vision for how to harness generative AI.
From the Labs of OpenAI
Srinivas Narayanan, the vice president of applied AI at OpenAI, will share what the company has been building recently and what is coming next. In another session, Connor Holmes, who led work on video-generation AI Sora, will talk about how video-generation models could work as world simulators, and what this means for future AI models.
The Often-Overlooked Privacy Problems in AI
Language models are prone to leaking private data. In this session, Patricia Thaine, cofounder and CEO of Private AI, will explore methods that keep secrets secret and help organizations maintain compliance with privacy regulations.
A Word Is Worth a Thousand Pictures
Cynthia Lu, senior director and head of applied research at Adobe, will walk us through the AI technology that Adobe is building and the ethical and legal implications of generated imagery. I've written about Adobe's efforts to build generative AI in a non-exploitative way and how they're paying off, so I'll be interested to hear more about that.
AI in the ER
Advances in medical image analysis are now enabling doctors to interpret radiology reports and automate incident documentation. This session by Polina Golland, the associate director of the MIT Computer Science and AI Laboratory, will explore both the challenges of working with sensitive personal data and the benefits of AI-assisted health care for patients.
Future Compute
On Tuesday, May 21, we are also hosting Future Compute, a day looking at how business and technical leaders are navigating AI adoption. We have tech leaders from Salesforce, Stack Overflow, Amazon, and more discussing how they are managing the AI transformation and what pitfalls to avoid.
I'd love to see you there, so if you can make it, sign up and come along! Readers of The Algorithm get 30% off tickets with the code ALGORITHMD24.
Now read the rest of The Algorithm
Deeper Learning
To kick off this busy week in AI, heavyweights such as Turing Award winners Geoffrey Hinton and Yoshua Bengio, and a slew of other prominent academics and writers, have just written an op-ed published in Science calling for more investment in AI safety research. The op-ed, timed to coincide with the Seoul AI Safety Summit, represents the group's wish list for leaders meeting to discuss AI. Many of the researchers behind the text have been heavily involved in consulting with governments and international organizations on the best approach to building safer AI systems.
They argue that tech companies and public funders should invest at least a third of their AI R&D budgets into AI safety, and that governments should mandate stricter AI safety standards and assessments rather than relying on voluntary measures. The piece also calls for governments to establish fast-acting AI oversight bodies and to fund them at levels comparable to the budgets of safety agencies in other sectors. It also says governments should require AI companies to prove that their systems cannot cause harm.
But it's hard to see this op-ed shifting things much. Tech companies have little incentive to spend money on measures that might slow down innovation and, crucially, product launches. Over the past few years, we've seen teams working on responsible AI take the hit during mass layoffs. Governments have shown more willingness to regulate AI in the last year or so, with the EU passing its first piece of comprehensive AI legislation, but this op-ed calls for them to go much further and faster.
What's more, focusing on the hypothetical existential risks posed by AI remains controversial among researchers, with some experts arguing that it distracts from the very real problems AI is causing today. As my colleague Will Douglas Heaven wrote last June, when the AI safety debate was at a fever pitch: "The Overton window has shifted. What were once extreme views are now mainstream talking points, grabbing not only headlines but the attention of world leaders."
Even Deeper Learning
GPT-4o's Chinese token-training data is polluted by spam and porn websites
Last Monday OpenAI released GPT-4o, an AI model that you can communicate with in real time via live voice conversation, video streams from your phone, and text. But just days later, Chinese speakers started to notice that something seemed off about it: the tokens it uses to parse text were full of phrases related to spam and porn.
Oops, AI did it again: Humans read in words, but LLMs analyze tokens, the distinct units a sentence is split into. When it comes to the Chinese language, the new tokenizer used by GPT-4o has introduced a disproportionate number of meaningless phrases. In one example, the longest token in GPT-4o's public token library literally means "free Japanese porn video to watch." Experts say that's likely due to insufficient data cleaning and filtering before the tokenizer was trained. (MIT Technology Review)
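If you want to poke at this yourself, OpenAI's open-source tiktoken library exposes the token vocabulary GPT-4o uses (the "o200k_base" encoding). Here is a minimal sketch, assuming you have tiktoken installed, that splits a sentence into tokens and prints each token ID alongside the text fragment it stands for:

    # Minimal sketch: inspect how GPT-4o's tokenizer splits text.
    # Assumes `pip install tiktoken`; "o200k_base" is the encoding GPT-4o uses.
    import tiktoken

    enc = tiktoken.get_encoding("o200k_base")

    text = "Humans read in words, but LLMs analyze tokens."
    token_ids = enc.encode(text)  # the integer IDs the model actually sees

    for tid in token_ids:
        # Decode each ID back to its text fragment to see how the sentence was split.
        print(tid, "->", repr(enc.decode([tid])))

Running the same loop over Chinese text is how researchers spotted the long spam- and porn-related tokens: the fragments the tokenizer memorized reflect whatever was most common in its training data.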
Bits and Bytes
What's next in chips
Thanks to the boom in artificial intelligence, the world of chips is on the cusp of a huge tidal shift. We outline four trends to look for in the year ahead that will define what the chips of the future will look like, who will make them, and which new technologies they'll unlock. (MIT Technology Review)
OpenAI and Google are launching supercharged AI assistants. Here's how you can try them out.
OpenAI unveiled its GPT-4o assistant last Monday, and Google unveiled its own work building supercharged AI assistants just a day later. My colleague James O'Donnell walks you through what you should know about how to access these new tools, what you might use them for, and how much it will cost.
OpenAI has lost its cofounder and dissolved the team focused on long-term AI risks
Last week OpenAI cofounder Ilya Sutskever and Jan Leike, the co-lead of the startup's superalignment team, announced they were leaving the company. The superalignment team was set up less than a year ago to develop ways to control superintelligent AI systems. Leike said he was leaving because OpenAI's "safety culture and processes have taken a backseat to shiny products." In Silicon Valley, money always wins. (CNBC)
Meta's plan to win the AI race: give its tech away for free
Mark Zuckerberg's bet is that making powerful AI technology free will drive down competitors' prices, making Meta's tech more widespread while others build products on top of it, ultimately giving him more control over the future of AI. (The Wall Street Journal)
Sony Music Group has warned companies against using its content to train AI
The record label says it opts out of indiscriminate AI training and has started sending letters to AI companies prohibiting them from mining text or data, scraping the internet, or using Sony's content without licensing agreements. (Sony)
What do you do when an AI company takes your voice?
Two voice actors are suing Lovo, a startup, claiming it illegally took recordings of their voices to train its AI model. (The New York Times)