
Kids Grok AI, But Not Its Pitfalls

by
Mark Pesce
from IEEE Spectrum

The green light next to my computer's built-in webcam turned on. "Uh-oh," I thought to myself, realizing just then that the software I was trying out was recording me. My browser showed an image of my body with some dots placed over it, as the computer worked to map my posture. In using this software, I should have been down on the floor, stretched out, doing an isometric exercise known as the plank. But I wasn't, and the app eventually gave up trying to assess my performance, tut-tutting me for my poor form.

Making all of that happen in an app required the integration of many technologies: webcam live streaming, computer vision, and most significantly, a machine-learning model trained to discriminate a well-performed plank from my nonexistent effort. That entails quite a bit of work, most of it at the high end of what a software designer would typically be asked to deliver. But this amazing app had been written by a seventh grader.
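
To give a sense of the moving parts, here is a rough sketch of how such a plank evaluator might be wired together, using Google's MoveNet pose model from TensorFlow Hub. To be clear, this is my own reconstruction of the idea, not the student's actual code, and the form-scoring heuristic is purely illustrative:

    import numpy as np
    import tensorflow as tf
    import tensorflow_hub as hub

    # MoveNet single-pose model from TensorFlow Hub (a real, pretrained model;
    # the scoring logic below is an illustrative guess at the student's approach).
    movenet = hub.load(
        "https://tfhub.dev/google/movenet/singlepose/lightning/4"
    ).signatures["serving_default"]

    def keypoints(frame_rgb):
        # MoveNet Lightning expects a 192x192 int32 RGB image with a batch dim.
        img = tf.image.resize_with_pad(tf.expand_dims(frame_rgb, axis=0), 192, 192)
        outputs = movenet(tf.cast(img, tf.int32))
        return outputs["output_0"].numpy()[0, 0]  # 17 keypoints of (y, x, score)

    def hip_angle(kps):
        # Angle at the hip between the shoulder and the ankle (right side);
        # a flat, well-held plank should be close to 180 degrees.
        shoulder, hip, ankle = kps[6][:2], kps[12][:2], kps[16][:2]
        v1, v2 = shoulder - hip, ankle - hip
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Feed in webcam frames (say, via OpenCV) and tut-tut whenever the
    # hip angle drifts below some threshold like 160 degrees.

The genuinely hard part, estimating body keypoints from a webcam frame, is a single call to a pretrained model; the "judgment" that made the app feel smart is a few lines of geometry.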

For the last few decades, I've obsessively followed developments in interactive toys: the Furby and Lego Mindstorms, Sony's PlayStation and Bandai's Tamagotchi, all of it handed to kids without a second thought, and all of it shaping the way they think.

We often learn by playing, particularly when we're young. The objects we play with help us build an enduring model of the way things work. A child who chatted with a Furby 25 years ago has no trouble as an adult engaging with Alexa.

Today's kids have toys powered by artificial intelligence and are getting quite comfortable using it. Some of them are even able to apply AI to novel problems. I get to see this up close every year as a judge at an Australia-wide competition-cum-science-fair, where students prototype and present some incredibly creative IT projects.

A decade ago, a typical project might have involved an Arduino or a Raspberry Pi doing something clever like operating a scheduling system for a school playground. (Kids often solve problems they experience themselves.) This year saw an explosion of projects using Google's TensorFlow, such as that plank evaluator, and others using the still-in-beta application programming interface (API) for the awesomely powerful GPT-3 text-analysis engine from OpenAI.


Both have become accessible to secondary school students because Google and OpenAI recently released new APIs, making the sophisticated capabilities of these machine-learning systems easy to exploit. Kids dream up an application, then either adapt some existing code or just throw themselves into it and build something from scratch with the sort of obsessive focus adolescents find effortless.
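
For a sense of how low the barrier had become, here is roughly what calling the then-beta GPT-3 API looked like with OpenAI's Python library of that era; the prompt and parameters here are invented for illustration:

    import openai

    openai.api_key = "sk-..."  # beta access keys were granted per account

    # Ask the davinci engine to complete a prompt; a handful of lines
    # stands in for what once required training a model from scratch.
    response = openai.Completion.create(
        engine="davinci",
        prompt="Explain in one sentence why the plank is a good core exercise:",
        max_tokens=64,
        temperature=0.7,
    )
    print(response.choices[0].text.strip())

That is the entire program. No data collection, no training runs, no servers: just a prompt and an API key.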

Alongside Internet-of-Things and robotic projects, this year's crop of applications demonstrated that the next generation already understands the potential of AI and knows exactly how to use it to solve a problem. But they don't always grasp the pitfalls. That was particularly obvious in one of the apps I reviewed: Trained using a million Reddit comments, it reflected the worldview and experience of your average Redditor, a narrow enough base to inadvertently generate (and reinforce) unconscious biases.

These blind spots echo the broader challenge that AI poses. And they point to the growing importance of an education that includes both technical skills and a solid grounding in ethics. After all, with great power comes great responsibility. Youngsters have shown themselves adept at exercising these new AI powers; let's do what we can to make sure they're equally good at applying them responsibly.

This article appears in the January 2022 print issue as "Power Play."
