You need to talk to your kid about AI. Here are 6 things you should say.
In the past year, kids, teachers, and parents have had a crash course in artificial intelligence, thanks to the wildly popular AI chatbot ChatGPT.
In a knee-jerk reaction, some schools, such as the New York City public schools, banned the technology, only to reverse the ban months later. Now that many adults have caught up with the technology, schools have started exploring ways to use AI systems to teach kids important lessons on critical thinking.
But it's not just AI chatbots that kids are encountering in schools and in their daily lives. AI is increasingly everywhere: recommending shows to us on Netflix, helping Alexa answer our questions, powering our favorite interactive Snapchat filters, and unlocking our smartphones when we glance at them.
While some students will invariably be more interested in AI than others, understanding the fundamentals of how these systems work is becoming a basic form of literacy, something everyone who finishes high school should know, says Regina Barzilay, a professor at MIT and a faculty lead for AI at the MIT Jameel Clinic. The clinic recently ran a summer program for 51 high school students interested in the use of AI in health care.
Kids should be encouraged to be curious about the systems that play an increasingly prevalent role in our lives, she says. "Moving forward, it could create humongous disparities if only people who go to university and study data science and computer science understand how it works," she adds.
At the start of the new school year, here are MIT Technology Review's six essential tips for how to get started on giving your kid an AI education.
1. Don't forget: AI is not your friend

Chatbots are built to do exactly that: chat. The friendly, conversational tone ChatGPT adopts when answering questions can make it easy for pupils to forget that they're interacting with an AI system, not a trusted confidante. This could make people more likely to believe what these chatbots say, instead of treating their suggestions with skepticism. While chatbots are very good at sounding like a sympathetic human, they're merely mimicking human speech from data scraped off the internet, says Helen Crompton, a professor at Old Dominion University who specializes in digital innovation in education.
"We need to remind children not to give systems like ChatGPT sensitive personal information, because it's all going into a large database," she says. Once your data is in the database, it becomes almost impossible to remove. It could be used to make technology companies more money without your consent, or it could even be extracted by hackers.
2. AI models are not replacements for search engines

Large language models are only as good as the data they've been trained on. That means that while chatbots are adept at confidently answering questions with text that may seem plausible, not all the information they offer up will be correct or reliable. AI language models are also known to present falsehoods as facts. And depending on where their training data was collected, they can perpetuate bias and potentially harmful stereotypes. Students should treat chatbots' answers as they should any kind of information they encounter on the internet: critically.
"These tools are not representative of everybody; what they tell us is based on what they've been trained on. Not everybody is on the internet, so they won't be reflected," says Victor Lee, an associate professor at Stanford Graduate School of Education who has created free AI resources for high school curriculums. "We should pause and reflect before we click, share, or repost, and be more critical of what we're seeing and believing, because a lot of it could be fake."
While it may be tempting to rely on chatbots to answer queries, they're not a replacement for Google or other search engines, says David Smith, a professor of bioscience education at Sheffield Hallam University in the UK, who's been preparing to help his students navigate the uses of AI in their own learning. Students shouldn't accept everything large language models say as undisputed fact, he says, adding: "Whatever answer it gives you, you're going to have to check it."
3. Teachers might accuse you of using an AI when you haven't

One of the biggest challenges for teachers now that generative AI has reached the masses is working out when students have used AI to write their assignments. Plenty of companies have launched products that promise to detect whether text was written by a human or a machine, but AI text detection tools are unreliable and easy to trick. There have been many cases of teachers assuming an essay was generated by AI when it actually wasn't.
Familiarize yourself with your child's school's AI policies or AI disclosure processes (if any), and remind your child of the importance of abiding by them, says Lee. If your child has been wrongly accused of using AI in an assignment, stay calm, says Crompton. Don't be afraid to challenge the decision and ask how it was made, and feel free to point to the record ChatGPT keeps of an individual user's conversations if you need to prove your child didn't lift material directly, she adds.
4. Recommender systems are designed to get you hooked and might show you bad stuff

It's important to understand and explain to kids how recommendation algorithms work, says Teemu Roos, a computer science professor at the University of Helsinki who is developing an AI curriculum for Finnish schools. Tech companies make money when people watch ads on their platforms, so they have developed powerful AI algorithms that recommend content, such as videos on YouTube or TikTok, designed to get people hooked and keep them on the platform for as long as possible. The algorithms track and closely measure what kinds of videos people watch, and then recommend similar videos. The more cat videos you watch, for example, the more likely the algorithm is to think you will want to see more cat videos.
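To make that mechanism concrete, here is a toy, hypothetical sketch in Python of the core loop: count which topics a viewer lingers on, then rank unwatched videos by those counts. Real platforms weigh far more signals (watch time, clicks, social connections) with far more sophisticated models; the video catalog and function below are invented purely for illustration.

from collections import Counter

# Hypothetical catalog: each video is tagged with a single topic.
videos = {
    "v1": "cats", "v2": "cats", "v3": "soccer",
    "v4": "cats", "v5": "soccer", "v6": "news",
}

def recommend(watch_history, k=2):
    # Track which topics this viewer watches, and how often.
    topic_counts = Counter(videos[v] for v in watch_history)
    # Rank unwatched videos by how often the viewer watched their topic.
    unwatched = [v for v in videos if v not in watch_history]
    return sorted(unwatched, key=lambda v: -topic_counts[videos[v]])[:k]

print(recommend(["v1", "v2"]))  # ['v4', 'v3']: another cat video ranks first

Even this toy version shows the feedback loop: whatever you watch most is what you get shown more of.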
These services have a tendency to guide users toward harmful content like misinformation, Roos adds, because people tend to linger on content that is weird or shocking, such as health misinformation or extreme political ideologies. It's very easy to get sent down a rabbit hole or stuck in a loop, so it's a good idea not to believe everything you see online and to double-check information against other reliable sources.
5. Remember to use AI safely and responsibly

Generative AI isn't just limited to text: there are plenty of free deepfake apps and web programs that can impose someone's face onto someone else's body within seconds. While today's students are likely to have been warned about the dangers of sharing intimate images online, they should be equally wary of uploading friends' faces into risqué apps, particularly because doing so could have legal repercussions. For example, courts have found teens guilty of spreading child pornography for sending explicit material about other teens or even themselves.
"We have conversations with kids about responsible online behavior, both for their own safety and also to not harass, or doxx, or catfish anyone else, but we should also remind them of their own responsibilities," says Lee. "Just as nasty rumors spread, you can imagine what happens when someone starts to circulate a fake image."
It also helps to give children and teenagers specific examples of the privacy or legal risks of using the internet, rather than talking to them about sweeping rules or guidelines, Lee points out. "For instance, talking them through how AI face-editing apps could retain the pictures they upload, or pointing them to news stories about platforms being hacked, can make a bigger impression than general warnings to be careful about your privacy," he says.
6. Don't miss out on what AI's actually good at

It's not all doom and gloom, though. While many early discussions around AI in the classroom revolved around its potential as a cheating aid, used intelligently, it can be an enormously helpful tool. Students who are struggling to understand a tricky topic could ask ChatGPT to break it down step by step, to rephrase it as a rap, or to take on the persona of an expert biology teacher so they can test their own knowledge. It's also exceptionally good at quickly drawing up detailed tables to compare, for example, the relative pros and cons of certain colleges, which would otherwise take hours to research and compile.
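For readers who want to see what this kind of prompting looks like in practice, here is a minimal, hypothetical sketch using OpenAI's Python SDK; the model name and the quiz prompt below are illustrative assumptions, not part of the tips above. The same persona prompt works just as well typed straight into the ChatGPT interface.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# A persona-style study prompt, as described above (wording is illustrative).
prompt = (
    "Act as an expert biology teacher. Quiz me with three short questions "
    "about photosynthesis, one at a time, and correct my answers."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; any chat model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)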
Other beneficial uses include asking a chatbot for glossaries of difficult words, practicing history questions ahead of a quiz, or helping a student evaluate answers after writing them, Crompton points out. "So long as you remember the bias, the tendency toward hallucinations and inaccuracies, and the importance of digital literacy," she says, "if a student is using it in the right way, that's great. We're just all figuring it out as we go."