Behind the scenes of Carnegie Mellon’s heated privacy dispute

by
Tate Ryan-Mosley
from MIT Technology Review

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

On April 3, my colleague Eileen Guo and I published a story that takes readers inside a tense debate about privacy within one of the world's most elite computer science programs.

Researchers at Carnegie Mellon University set out to create advanced smart sensors called Mites. The sensors were meant to collect 12 types of environmental data, including motion, temperature, and scrambled audio, in a more privacy-protecting and secure way than the existing infrastructure of the Internet of Things. But after they installed hundreds of the sensors around a new campus building, the project took an ironic turn when some students and faculty members accused the researchers of violating their privacy by failing to seek their consent first.

The debate that ensued within the Software and Societal Systems Department grew heated and complicated, and it highlighted just how nuanced questions around privacy and technology can be. These are issues we all have to contend with as a ballooning amount of data is collected on us: inside our homes, on our streets, in our cars, in our workplaces, and in most other spaces. As we write in the piece, if the technologists whose research sets the agenda can't come to a consensus on privacy, where does that leave the rest of us?

The story took us over a year to report. We tried to present different points of view about privacy, consent, and the future of IoT technology while acknowledging the very real roles that power, process, and communication play in how technologies are deployed.

One truth emerged clearly in the reporting: privacy is subjective. There is no clear set of criteria for what constitutes privacy-protecting technology, even in academic research. In the case of CMU, people on all sides of the debate were trying to advocate for a better future according to their own understanding of privacy. David Widder, a PhD student who focuses on tech ethics and a central character in our story, told us, "I'm not willing to accept the premise of ... a future where there are all of these kinds of sensors everywhere."

But the very researchers he criticized were also trying to build a better future. The chair of the department, James Herbsleb, encouraged people to support the Mites research. "I want to repeat that this is a very important project ... if you want to avoid a future where surveillance is routine and unavoidable!" he wrote in an email to department members.

Big questions about the future were at the core of the CMU debate, and they mirror the questions we are all grappling with. Is a world full of IoT devices inevitable? Should we spend our time and effort trying to make our new technologically enabled world safer and more secure? Or should we reject the technology altogether? Under what circumstances should we choose which option, and what mechanisms are required to make these decisions, collectively and individually?

Questions around consent and how to communicate about data collection became flashpoints in the debate at CMU, and these are key issues at the core of tech regulation discussions today as well. In Europe, for example, regulators are debating the rules around informed consent and data collection in response to the pop-ups that have been cluttering the internet since the passage of the General Data Protection Regulation, the European Union's data privacy law. Companies use the pop-ups to comply with the law, but the messages have been criticized for being useless when it comes to actually informing users about data collection and terms of service.

In the story, we similarly focus on the differences between technical approaches to privacy and the social norms around things like notice and consent. Cutting-edge techniques like edge computing may help preserve privacy, but they can't necessarily take the place of asking people if they want to participate in data collection in the first place. We also consistently encountered confusion about what the project was and what data was being collected, and the communications about data collection that we reviewed were often opaque and incomplete.
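To make that distinction concrete, here is a minimal sketch of the edge-computing idea in Python. This is our illustration, not code from the Mites project; the function name, frame size, and feature choice are invented for the example. The point is that the device reduces raw audio to coarse features before anything is transmitted, which protects the data technically but still never asks the people in the room whether they consented to being measured at all.

```python
import numpy as np

def scramble_audio(raw_audio: np.ndarray, frame_size: int = 1024) -> np.ndarray:
    """Reduce a raw waveform to coarse per-frame loudness values on the device.

    Only these low-resolution features would ever be transmitted; the
    waveform itself never leaves this function.
    """
    n_frames = len(raw_audio) // frame_size
    frames = raw_audio[: n_frames * frame_size].reshape(n_frames, frame_size)
    # Root-mean-square energy per frame: enough to tell whether a room is
    # noisy or quiet, but far too coarse to reconstruct speech from.
    return np.sqrt((frames ** 2).mean(axis=1))

# On the sensor: capture one second of (simulated) microphone input,
# keep only the scrambled features, and discard the raw buffer before
# anything is sent upstream.
raw = np.random.randn(16_000)   # stand-in for real microphone samples
features = scramble_audio(raw)
del raw                          # the raw audio is never stored or transmitted
print(features.shape)            # 15 coarse loudness values instead of 16,000 samples
```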

I asked my co-reporter, Eileen, to reflect on our story. She eloquently explained, "The Mites story was kind of two stories in one. On the one hand, it's a story about ideas: What is privacy? When do we need consent? What future do we want to build? On the other, it's a narrative about a specific situation at Carnegie Mellon University that became incredibly personal for the people at the center of it, and very high stakes."

I'd also add that while these issues are not new, as a society we are at the very beginning of contending with a reality where data can be extracted from almost everything we do. And the ability to use that data, both for good and for ill, is only going to get better.

We hope this piece will encourage people to consider their own stance on privacy. As Eileen noted, "It's really interesting and relevant to see how IoT researchers are thinking about these issues now, because it always takes some time before academic research eventually turns into or influences the creation of a commercial product. We're kind of getting a front-row seat to what may be coming a few years down the line."

Write to us and let us know what you think. It's a long story, so go get yourself a cup of coffee or tea, and dive in.

What I am reading this week

Parts of the US government are using mobile-phone geolocation tracking technology from the NSO Group, despite the Biden administration's ban on the group's surveillance tools, according to this deep investigation by Mark Mazzetti and Ronen Bergman of the New York Times. (We've written extensively about NSO, including this profile of the company and this article about where the paid-for hacking industry is going more broadly.)

OpenAI is promising to address the concerns of regulators in Italy who banned ChatGPT at the end of March over questions around data collection. Just what actions the company will take remains unclear, but many other countries are watching the back-and-forth closely.

President Biden's second Summit for Democracy featured a lot of talk about the role of technology and the future of democracy. Alex Engler, whom I featured in The Technocrat recently, wrote a great summary in Tech Policy Press. There was a lot of discussion about the role of civic tech and an emphasis on digital public services, a topic I think we can expect to hear more about from the Biden administration.

What I learned this week

Reuters reported that Tesla workers have shared among themselves images and videos recorded by the company's cars. Some are sensitive in nature, like a customer approaching his car in the nude. The company's privacy policy claims that recordings are anonymous and not linked to your vehicle, and that data collection helps Tesla improve its products. But former employees told Reuters that the company's software can show the location of recordings. At one point, Tesla was collecting recordings even when vehicles were turned off, if customers consented.

It's another story that exposes the privacy risks created when companies collect data. In December, Eileen Guo reported on a similar situation with Roomba robot vacuums, in which sensitive images captured by the devices, like a woman on the toilet, were shared among employees of a contracted company and ended up on Facebook.
