
Worldcoin just officially launched. Here’s why it’s already being investigated.

by Tate Ryan-Mosley, MIT Technology Review

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

It's possible you've heard the name Worldcoin recently. It's been getting a ton of attention, some good, some ... not so good.

It's a project that claims to use cryptocurrency to distribute money across the world, though its bigger ambition is to create a global identity system called "World ID" that relies on individuals' unique biometric data to prove that they are humans. It officially launched on July 24 in more than 20 countries, and Sam Altman, the CEO of OpenAI and one of the biggest tech celebrities right now, is one of the cofounders of the project.

The company makes big, idealistic promises: that it can deliver a form of universal basic income through technology to make the world a better and more equitable place, while offering a way to verify your humanity in a digital future filled with nonhuman intelligence, which it calls "proof of personhood." If you're thinking this sounds like a potential privacy nightmare, you're not alone.

Luckily, we have someone I'd consider the Worldcoin expert on staff here at MIT Technology Review. Last year investigative reporter Eileen Guo, with freelancer Adi Renaldi, dug into the company and found that Worldcoin's operations were far from living up to its lofty goals, and that it was collecting sensitive biometric data from many vulnerable people in exchange for cash.

As they wrote:

"Our investigation revealed wide gaps between Worldcoin's public messaging, which focused on protecting privacy, and what users experienced. We found that the company's representatives used deceptive marketing practices, collected more personal data than it acknowledged, and failed to obtain meaningful informed consent."

What's more, the company was using test users' sensitive, but anonymized, data to train artificial intelligence models; Eileen and Adi found that individuals did not know their data was being used that way.

I highly recommend you read their investigation, which builds on more than 35 interviews with Worldcoin executives, contractors, and test users recruited primarily in developing countries, to better understand how the company was handling sensitive personal data and how its idealistic rhetoric compared with the realities on the ground.

Given their reporting, it's no surprise that regulators in at least four countries have already launched investigations into the project, citing concerns with its privacy practices. The company claims it has already scanned nearly 2.2 million "unique humans" into its database, which was primarily built during an extended test period over the last two years.

So I asked Eileen: What really has changed since her investigation? How do we make sense of the latest news?

Since her story, Worldcoin CEO Alex Blania has told other outlets that the company has changed many of its data collection and privacy practices, though there are reasons to be skeptical. The company hasn't specified exactly how it's done this, beyond saying it has stopped some of the most exploitative and deceptive recruitment tactics.

In emails Eileen recently exchanged with Worldcoin, a spokesperson was vague about how the company was handling personal data, saying that "the Worldcoin Foundation complies with all laws and regulations governing the processing of personal data in the markets where Worldcoin is available, including the General Data Protection Regulation ('GDPR') ... The project will continue to cooperate with governing bodies on requests for more information about its privacy and data protection practices."

The spokesperson added, "It is important to stress that The Worldcoin Foundation and its contributor Tools for Humanity never have and never will sell users' personal data."

But, Eileen notes, we (again) have nothing but the company's word that this is true. That's one reason we should keep a close eye on what government investigators start to uncover about Worldcoin.

The legality of Worldcoin's biometric data collection is at the heart of an investigation the French government launched into Worldcoin, and of a probe by a German data protection agency, which has been investigating Worldcoin since November of last year, according to Reuters. On July 25, the Information Commissioner's Office in the UK put out a statement that it will be "making enquiries" into the company. Then on August 2, Kenya's Office of Data Protection suspended the project in the country, saying it will investigate whether Worldcoin is in compliance with the country's Data Protection Act.

Importantly, a core objective of the Worldcoin project is to perfect its "proof of personhood" methodology, which requires a lot of data to train AI models. If its proof-of-personhood system becomes widely adopted, this could be quite lucrative for its investors, particularly during an AI gold rush like the one we're seeing now.

The company announced this week that it will allow other companies and governments to deploy its identity system.

"Worldcoin's proposed identity solution is problematic whether or not other companies and governments use it. Of course, it would be worse if it were used more broadly without so many key questions being answered," says Eileen. "But I think at this stage, it's clever marketing to try to convince everyone to get scanned and sign up so that they can achieve the 'fastest' and biggest onboarding into crypto and Web3 to date, as Blania told me last year."

Eileen points out that Worldcoin has also not yet clarified whether it still uses the biometric data it collects to train its artificial intelligence models, or whether it has deleted the biometric data it already collected from test users and was using in training, as it told MIT Technology Review it would do before launch.

"I haven't seen anything that suggests that they've actually stopped training their algorithms, or that they ever would," Eileen says. "I mean, that's the point of AI, right? That it's supposed to get smarter."

What else I'm reading
  • Meta's oversight board, which issues independently drafted and binding policies, is reviewing how the company is handling misinformation about abortion. Currently, the company's moderation decisions are a bit of a mess, according to this nice explainer-y piece in Slate. We should expect the board to issue new abortion-information-specific policies in the coming weeks.
  • At the end of July, Twitter rebranded to X, in a strange, unsurprising-yet-surprising move by its new czar Elon. I loved Casey Newton's obituary-style take, in which he argues that Musk's $44 billion investment was really just a wasteful act of "cultural vandalism."
  • Nobel-winning economist Joseph Stiglitz is worried that AI will worsen inequality, and he spoke with Scientific American about how we might get off the path we seem to currently be on. Well worth a read!
What I learned this week

Bots on social media are likely being supercharged by ChatGPT. Researchers from Indiana University have released a preprint paper describing a Twitter botnet of over 1,000 accounts, which the researchers call fox8, that appears to employ ChatGPT "to generate human-like content." The botnet promoted fake-news websites and stolen images, and it's an alarming preview of a social media environment fueled by AI and machine-generated misinformation. Tech Policy Press wrote a great quick analysis of the findings, which I'd recommend checking out.

Additional reporting from Eileen Guo.
