
The algorithms around us

by Ariel Bleicher
from MIT Technology Review

A metronome ticks. A record spins. And as a feel-good pop track plays, a giant compactor slowly crushes a Jenga tower of material creations. Paint cans burst. Chess pieces topple. Camera lenses shatter. An alarm clock shrills and then goes silent. A guitar neck snaps. Even a toy emoji is not spared, its eyes popping from their plastic sockets before the mechanical jaws close with a deafening thud. But wait! The jaunty tune starts up again, and the jaws open to reveal ... an iPad.

Watching Apple's now-infamous "Crush!" ad, it's hard not to feel uneasy about the ways in which digitization is remaking human life. Sure, we're happy for computers to take over tasks we don't want to do or aren't particularly good at, like shopping or navigating. But what does it mean when the things we hold dear and thought were uniquely ours – our friendships, our art, even our language and creativity – can be reduced to software?

Devil in the Stack: A Code Odyssey
Andrew Smith
Atlantic Monthly Press, 2024

In his new book Devil in the Stack, Andrew Smith confronts the fact that computer code is "seeping unchallenged and at an accelerating rate into every area of our existence." As a technology journalist covering the rise of phenomena like Amazon and Bitcoin, he had grown curious about the "haunting alien logic" behind them. So, like Upton Sinclair in The Jungle, he set out to see how the sausage gets made – in this case, by learning to code himself.

This proves easier said than done. Simply choosing which programming language to start with becomes daunting when Smith discovers there are more than 1,700 to pick from, each with its own quirks and foibles. At times, his forays into the particulars of programming – functions, data structures, assignment operators, conditionals, and while loops – are as torturous to read about as they apparently were for him to slog through. But his deep reporting on coding's history, philosophy, and mechanics is worth sticking around for and paints a fascinating – and, ultimately, unsettling – portrait of a technology into which most people have little insight.

Classical computing, Smith explains, depends on layers of abstraction – what programmers call "the stack." At the bottom is machine code, the patterns of 1s and 0s executed by electrical switches on a chip. At the top are high-level languages like Python, JavaScript, and Perl, which are easiest for humans to interpret but make more work for the machine because they must be translated into instructions that a microprocessor can implement. "Each new layer allows us to stop thinking about the one below it and simply take its function for granted," Smith writes.
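That layering is easy to glimpse firsthand. As a minimal illustration (my example, not Smith's), Python's built-in dis module will print the bytecode sitting one rung below a few lines of high-level source, the kind of translation the stack otherwise hides:

    import dis

    def add_tax(price):
        # High-level source: readable, and oblivious to every layer below it.
        return price * 1.08

    # Disassemble one layer down: the bytecode instructions the Python
    # interpreter actually executes (exact opcodes vary by Python version).
    dis.dis(add_tax)

Running this lists instructions such as LOAD_FAST and RETURN_VALUE; the machine code on the chip sits further down still, and the working programmer rarely has to think about either.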

In his view, coding is a devil's bargain that trades understanding for convenience. This compromise makes code both powerful and potentially perilous because it hides complexity, alienating us from the messy, analog processes the coder aims to represent. "Abstraction in computing," Smith argues, "stretches the conceptual distance between source and signal, input and output, concealing chains of connection and causality." That may be no big deal if, say, you're trying to simulate a forest in a video game or model a new drug. But when the thing being represented is human – relationships, markets, wars – abstraction feeds "a dangerous emaciation of empathy." Think social media trolls and killer drones.

Smith is even more troubled by AI, which essentially writes its own code from reams of training data. AI programs like ChatGPT have gotten uncannily good at imitating a person. But whereas humans can be made to explain themselves, AI is incapable of reflecting on its own decisions, and its processes are largely a black box. "Until our machines are intelligent enough to understand why they do what they do," Smith writes, "we will be empowering algorithmic systems that write themselves uncritically, and are understood by nothing and no one." His solution is regulation, such as safety labels and bans on algorithms shown to exacerbate inequalities.

In the end, Smith comes to deeply admire coders and coding culture. But he can't shake his worry that humanity's increasing reliance on digital technology will do more harm than good if we don't get serious about addressing its threats.

Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, offers a rosier view of AI in the book Co-Intelligence. Mollick teaches and studies innovation and the implications of working with new technologies. He regularly experiments with AI chatbots-he even used them to write and edit parts of the book-and he has his students employ them to generate business ideas and practice pitching to venture capitalists. In published research, he and others have reported that people who use AI for knowledge work, such as marketing or data analysis, are faster, more creative, and better writers and problem-solvers than those who rely solely on their own brains.

Co-Intelligence: Living and Working with AI
Ethan Mollick
Portfolio, 2024

This makes Mollick something of an evangelist for human-AI collaboration. In Co-Intelligence, he imagines a not-so-distant future in which AIs become our companions, creative partners, coworkers, tutors, and coaches. He can sound like a shill for Big Tech when he predicts that AI will boost our cognition, help us flourish in our jobs, and transform education in a way that ultimately "enhances learning and reduces busywork."

Lofty predictions aside, the book is a useful guide to navigating AI. That includes understanding its downsides. Anyone who's played around with ChatGPT or its ilk, for instance, knows that these models frequently make stuff up. And if their accuracy improves in the future, Mollick warns, that shouldn't make us less wary. As AI becomes more capable, he explains, we are more likely to trust it and therefore less likely to catch its mistakes.

The risk with AI is not only that we might get things wrong; we could lose our ability to think critically and originally.

Ethan Mollick, professor, the Wharton School of the University of Pennsylvania

In a study of management consultants, Mollick and his colleagues found that when participants had access to AI, they often just pasted the tasks they were given into the model and copied its answers. This strategy usually worked in their favor, giving them an edge over consultants who didn't use AI, but it backfired when the researchers threw in a trick question with misleading data. In another study, job recruiters who used high-quality AI became "lazy, careless, and less skilled in their own judgement" than recruiters who used low-quality or no AI, causing them to overlook good candidates. "When AI is very good, humans have no reason to work hard and pay attention," Mollick laments.

He has a name for the allure of the AI shortcut: The Button. "When faced with the tyranny of the blank page, people are going to push The Button," he writes. The risk is not only that we might get things wrong, he says; we could lose our ability to think critically and originally. By outsourcing our reasoning and creativity to AI, we adopt its perspective and style instead of developing our own. We also face a "crisis of meaning," Mollick points out. When we use The Button to write an apology or a recommendation letter, for example, these gestures – which are valuable because of the time and care we put into them – become empty.

Mollick is optimistic that we can avoid many of AI's pitfalls by being deliberate about how we work with it. AI often surprises us by excelling at things we think it shouldn't be able to do, like telling stories or mimicking empathy, and failing miserably at things we think it should, like basic math. Because there is no instruction manual for AI, Mollick advises trying it out for everything. Only by constantly testing it can we learn its abilities and limits, which continue to evolve.

And if we don't want to become mindless Button-pushers, Mollick argues, we should think of AI as an eccentric teammate rather than an all-knowing servant. As the humans on the team, we're obliged to check its lies and biases, weigh the morality of its decisions, and consider which tasks are worth giving it and which we want to keep for ourselves.

Beyond its practical uses, AI evokes fear and fascination because it challenges our beliefs about who we are. "I'm interested in AI for what it reveals about humans," writes Hannah Silva in My Child, the Algorithm, a thought-provoking mix of memoir and fiction cowritten with an early precursor of ChatGPT. Silva is a poet and performer who writes plays for BBC Radio. While navigating life as a queer single parent in London, she begins conversing with the algorithm, feeding it questions and excerpts of her own writing and receiving long, rambling passages in return. In the book, she intersperses its voice with her own, like pieces of found poems.

My Child, the Algorithm: An Alternatively Intelligent Book of Love
Hannah Silva
Footnote Press, 2023

Silva's algorithm is less refined than today's models, and so its language is stranger and more prone to nonsense and repetition. But its eccentricities can also make it sound profound. "Love is the expansion of vapor into a shell," it declares. Even its glitches can be funny or insightful. "I'm thinking about sex, I'm thinking about sex, I'm thinking about sex," it repeats over and over, reflecting Silva's own obsession. "These repetitions happen when the algorithm stumbles and fails," she observes. "Yet it's the repetitions that make the algorithm seem human, and that elicit the most human responses in me."

In many ways, the algorithm is like the toddler she's raising. "The algorithm and the child learn from the language they are fed," Silva writes. They both are trained to predict patterns. "E-I-E-I-...," she prompts the toddler. "O!" he replies. They both interrupt her writing and rarely do what she wants. They both delight her with their imaginativeness, giving her fresh ideas to steal. "What's in the box?" the toddler asks her friend on one occasion. "Nothing," the friend replies. "It's empty." The toddler drops the box, letting it crash on the floor. "It's not empty!" he exclaims. "There's a noise in it!"

Like the algorithm, the toddler gets stuck in loops. "Miss Mum on the phone Mummy miss Mum on the phone Mummy miss Mum on the phone Mummy ..." he cries one night from his bed, wanting his other mother, from whom Silva is separated. The difference, of course, is that his missing – and his tears – are real. Later in the book, he begs for her, wailing, and Silva can't console him. Overwhelmed with guilt, she lets the algorithm speak for her: "I felt exposed and alone and held accountable for every human thought I had ever had, and for my capacity to love, and a darkness welled inside me until I could feel my skull beneath the flood, and I was surrounded by the pale, flat rush of my life to come."

Throughout the book, human and AI mirror each other, forcing us to ask where one ends and the other begins. Silva wonders if she is losing her identity as a writer in the same way she has often lost herself in motherhood and in love. Yet she's having fun, relishing the magic and the madness, just as she does in her human relationships. As the algorithm says, "Queer is living with contradictions, and loving them too."

Ariel Bleicher is a science writer and editor whose work has appeared in Scientific American, Nautilus, IEEE Spectrum, and other publications.
