TEACHERS WERE baffled. Some of the children using Khan Academy, an online learning platform, seemed to be cheating at their maths assignments with the help of an unknown accomplice. An investigation eventually unmasked the culprit: Pythagoras, an ancient Greek mathematician with a penchant for right-angled triangles. As a study aid, Khan Academy allows pupils to chat with AI simulations of towering intellects of the past. Children had discovered that with some gentle prompting, the digital Pythagoras was happy to complete their homework.
Children are the pioneers—and guinea pigs—of artificial intelligence. American teenagers are more likely than their parents to use AI at home and more likely to use it at school than their parents are at work, according to a survey by the Centre for Democracy and Technology (CDT), a non-profit group. At school, AI promises to change how children are taught, how they are assessed and, ultimately, how they think. At home it is changing how they play, how they are supervised and with whom—or what—they share confidences and form friendships. Generation AI is growing up with opportunities that previous generations could not have imagined. It is encountering novel risks, too.
Start in the classroom, where much of childhood is whiled away. Two years ago more schools in America banned AI than permitted it. Today its use has become the norm. Some 61% of high-school pupils and 69% of teachers get help from AI with their work for school, according to a survey from the RAND Corporation, a research organisation (see chart).
Many governments support the trend. President Donald Trump signed an executive order in April urging America’s schools to “integrate the fundamentals of AI into all subject areas”. Singapore this year introduced lessons on the basics of AI in primary schools. China plans to teach AI in all primary and secondary schools by 2030. In Hangzhou, the city that is home to DeepSeek, one of China’s AI champions, children receive at least ten hours of annual instruction in AI, from model-training to the principles of neural networks.
Pupils may first encounter AI second-hand, as teachers use it to generate worksheets, quizzes, personalised assignments and the like. A trial last year in 68 secondary schools in England by the Education Endowment Foundation, a charity, found that science teachers equipped with ChatGPT could reduce their weekly lesson-planning time by nearly a third. AI can help them spruce up their teaching, too. Last month Microsoft released a tool that turns lesson plans into games in “Minecraft”, where children can build elements from the periodic table, for instance.
Children are also being taught directly by AI. In Flanders, Belgium, around 4,000 students are using AI-powered reading tools made by Microsoft. One, called Reading Progress, records children reading aloud and alerts them to mistakes. Another, Immersive Reader, allows students in the multilingual region to read a text in their first language and then in Dutch, clicking on words to see illustrations of their meanings. It can also translate the teacher’s instructions in real time.
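For readers curious about the mechanics, the mistake-flagging step in a tool of this kind can be illustrated in a few lines of code. The sketch below assumes a speech recogniser has already turned the child’s reading into a transcript; aligning that transcript against the target passage, here with Python’s standard difflib library, reveals which words were skipped or misread. The function and example are illustrative assumptions, not Microsoft’s implementation.

```python
# A minimal sketch of the comparison step behind a tool like Reading
# Progress: align the passage the child was asked to read against a
# speech recogniser's transcript and flag the mismatches. The
# transcription itself (the hard part) is assumed to have happened.
import difflib

def reading_mistakes(target: str, transcript: str) -> list[str]:
    """Return words in the target passage that were misread or skipped."""
    sm = difflib.SequenceMatcher(a=target.lower().split(),
                                 b=transcript.lower().split())
    mistakes = []
    for op, i1, i2, _, _ in sm.get_opcodes():
        if op in ("replace", "delete"):  # misread or skipped words
            mistakes.extend(target.split()[i1:i2])
    return mistakes

print(reading_mistakes("the cat sat on the mat", "the cat sat on a mat"))
# ['the']  -- the child said "a" where the text reads "the"
```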
Such technology promises a personalised education once available only to the rich. Google predicts that “AI may ultimately allow every learner to take a truly individualised learning journey.” Ben Gomes, Google’s chief technologist for learning, describes how this could unlock access to knowledge. Growing up in pre-internet India, he borrowed the British Council library’s only book on electronics. “I would bring it home and pore over it, and there was no hope I would understand it because it was at the wrong level,” he says. Now AI tools like Google’s Learn Your Way can adapt texts to users’ reading ability. They can add personalised touches as well: in an economics lesson on labour markets, children who like football are given an example about Lionel Messi, whereas those who prefer film get Zendaya.
Parents are supplementing this kind of instruction with AI tutors at home. This is especially popular in China, where ultra-competitive exams have made tutoring a big business. A government crackdown on after-school instruction in 2021, to ease pressure on stressed-out families, has been an unintended fillip to companies making AI-powered educational devices. Whereas human tutors were banned from teaching the main curriculum, even online, AI tutors were not. Yang Renbing, head of JZX, a startup in Hangzhou that sells tablets equipped with an AI teacher, says monthly sales have risen tenfold in the past year.
It is early days, but makers of AI tools point to signs of success, particularly in reading. Participants in a pilot in India for Google’s Read Along app were 60% likelier to improve their proficiency than those in a control group. A study by the World Bank found that students in Nigeria using Microsoft’s Copilot in the first year of high school improved their English by the equivalent of nearly two years’ ordinary schooling. Primary-school children in Taiwan using CoolE Bot, a language-learning app, showed a significant improvement in English; shy students reported that practising with the bot was less intimidating than talking to a human teacher.
Not everyone is keen on edu-bots. Only 22% of American school-district heads believe that AI harms students’ critical-thinking skills, but 61% of parents do, RAND found. Perhaps most worryingly, 55% of high-school students themselves believe so. Some concerns may stem from unfamiliarity: the CDT found that the teachers who are most concerned are those whose schools use AI least. But it also found that, among children, the least happy with AI are those whose schools use it most.
Students and teachers alike report that they have had little guidance on how they may use AI. Parents express wildly varying views on whether it should be used for homework. A minority of students do seem to be cheating: Victor Lee of Stanford University and colleagues found that 15% of American high-school students admitted to using AI to complete an entire assignment this year, up from 11% in 2024.
A bigger problem than blatant cheating is that children may offload thinking that they should be doing themselves. In China a national survey found that 21% of primary and secondary students said they would rather rely on AI than think independently. Researchers at the Massachusetts Institute of Technology measured students’ brain activity as they completed an essay-writing task, some with and some without the help of ChatGPT. The brains of those using ChatGPT fired less; those students were also less able to recall an accurate quote from the essay they had written.
Students seem to suspect as much themselves. A trial at Indiana University’s Kelley School of Business found that those who were allowed to complete an exercise with the help of AI scored 10% better than others and did the work 40% faster. But they were 16% less likely to describe the result as their “own work”.
“This is the big difference between tools that are designed specifically for education and your general-use tool,” says Kristen DiCerbo of Khan Academy. In most contexts, users want AI to provide answers. In education, that is the student’s job. Khan Academy’s AI-powered tutor, Khanmigo, is not supposed to give students answers. Instead, it talks students through problems, drawing the answers out of them. The big AI firms are following suit: in July OpenAI launched “study mode” for ChatGPT, offering “step-by-step guidance instead of quick answers”. Google’s “guided learning” setting does much the same.
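The distinction is easy to see in how such a tutor is configured. The sketch below, using OpenAI’s Python library, shows a Socratic set-up of the broad kind these modes employ: a system prompt forbids the model from stating answers and steers it toward guiding questions. The prompt wording and model name are illustrative assumptions, not Khan Academy’s or OpenAI’s actual configuration.

```python
# A minimal sketch of a Socratic "study mode": the system prompt forbids
# direct answers, so the model must coach rather than solve. The prompt
# and model name here are illustrative, not any firm's real settings.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SOCRATIC_PROMPT = (
    "You are a tutor. Never state the final answer to a problem. "
    "Ask one short guiding question at a time, check the student's "
    "reasoning, and give hints only when they are stuck."
)

def tutor_reply(history: list[dict]) -> str:
    """Return the tutor's next turn given the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": SOCRATIC_PROMPT}] + history,
    )
    return response.choices[0].message.content

# The student asks for the answer outright; the prompt steers the model
# toward a question like "What do you know about the two shorter sides?"
# rather than "c = 5".
print(tutor_reply([{"role": "user", "content":
    "A right triangle has legs 3 and 4. What is the hypotenuse?"}]))
```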
In the hands of a responsible student, such tools help. But a child with a tight deadline or an Xbox addiction may opt for the standard setting. “Efficient use of AI is going to win out over the use of AI that leads to better…learning,” predicts Julia Kaufman of RAND. The risk of cheating at home may lead to more assessments at school—meaning less time for teaching.
Even AI’s fans accept that learning in the classroom is still crucial. “There’s only so far that points, badges and happy confetti will take you,” says Ms DiCerbo. Khan Academy recommends two or three sessions a week, alongside classroom learning. Huang He, chief executive of PalFish, a Chinese online tutoring platform, thinks children will need time to get used to AI instruction: “I can ignore an AI, right? But with a real teacher, you feel like they’re waiting for your answer.” Whole-class learning may be slower than personalised tutoring, but it teaches children skills like interacting, collaborating and coming to consensus—“things that an AI tutor could short-circuit”, Ms Kaufman says.
The end of the school day is not the end of children’s immersion in AI. American teenagers are more likely to use the technology at home than at school, according to the CDT. There, too, it is forging a personalised, bespoke sort of childhood.
Just as teachers are using AI to fine-tune difficulty levels, video-game makers are employing it to make their games just hard enough to keep players engaged. “Tekken 8”, a fighting game, lets players take on an AI “ghost” fighter that has learned to match their ability and style of play. Other firms are introducing chatbot-powered characters to their games—with mixed results. “Fortnite” recently unleashed an AI-powered Darth Vader who could chat to players, but had to hastily reprogram him after he was drawn into X-rated exchanges.
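The underlying trick, known as dynamic difficulty adjustment, can be reduced to a toy example: nudge the opponent’s skill up after each player win and down after each loss, so that matches settle near an even win rate. The sketch below is a deliberately simplified illustration, not Bandai Namco’s “ghost” system, which learns a player’s style rather than a single skill number.

```python
# A toy sketch of dynamic difficulty adjustment: the opponent's skill
# parameter drifts toward a level at which the player wins about half
# the time. All names and constants are illustrative.
def adjust_difficulty(skill: float, player_won: bool,
                      step: float = 0.05) -> float:
    """Nudge opponent skill up after a player win, down after a loss,
    clamped to the range [0, 1]."""
    skill += step if player_won else -step
    return min(1.0, max(0.0, skill))

skill = 0.5
for result in [True, True, False, True]:  # a short run of match outcomes
    skill = adjust_difficulty(skill, result)
print(f"opponent skill after four matches: {skill:.2f}")  # 0.60
```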
AI tools are allowing teens to make and share their own images, videos and games, turbo-charging the cycles of youth culture. Take “Italian brain rot”, an online craze that began earlier this year with bizarre images—a shark in Nike sneakers, a coffee mug performing ballet—created with AI. The images morphed into videos with the help of apps like OpenAI’s Sora. AI tools in Roblox, a gaming platform, made it easy to turn the ideas into games. By July “brain rot” games had become so popular on Roblox that the firm mentioned them in an earnings call. The fad is already waning, before most parents have even noticed it.
AI is also being used to bring traditional toys to life in new ways. Apps such as NaukNauk turn photos of beloved teddies into walking, talking videos. BrickGPT, created by researchers at Carnegie Mellon University, can produce instructions for building any object out of Lego. Big toymakers in the West have so far been cautious. One of them, Hasbro, has produced Trivial Pursuit Infinite, which uses AI to pose questions on topics of the player’s choice. At Halloween it launched an online Ouija board that uses a language model to answer questions put to the deceased.
Asian toymakers are more confident. Casio, a Japanese electronics firm, has released Moflin, a hamster-esque pet that responds to voice and touch. Sharp, a rival, has launched Poketomo, a talking meerkat-like robot.
Chinese firms, which make most of the world’s toys, are the most go-ahead, reflecting the mood of their customers: 72% of Chinese say they “trust AI”, compared with only 32% of Americans, according to Edelman, a public-relations firm. Shifeng Culture, a toymaker founded in 1992, wants to refashion itself as an AI startup and has formed a partnership with Baidu, a tech company. “Families and children are no longer satisfied with passivity. They crave proactive partners,” its vice-president, Shi Jie, has said. Officials in Guangdong, where many of China’s toys are made, think the integration of AI could boost the province’s annual toy output by 100bn yuan ($14bn), or nearly 50%. The Shenzhen Toys Industry Association and JD.com have named 2025 “the inaugural year of AI toys”, citing annual online sales growth of more than 400%.
An example of AI toys’ potential—and peril—is FoloToy, a startup based in Shanghai which sold 20,000 AI-enabled soft toys in the first quarter of this year, ranging from pandas to potted flowers. Wang Le, its founder, brims with excitement when explaining AI toys’ potential: tirelessly entertaining children while parents are busy, creating personalised bedtime stories, practising foreign languages and more. But setting guardrails has proved difficult. One trap is being too strict: parents complained when one of FoloToy’s creations refused to explain how to make guobaorou, a popular pork dish, on the grounds that it would involve a knife. Yet there is greater danger in being lax. In November the US Public Interest Research Group (US PIRG), a consumer watchdog, tested a variety of AI toys and found that FoloToy’s Kumma, an innocent-looking teddy, could be induced to discuss starting fires and spicing up sex (“Spanking can be a fun addition to role-play!”). FoloToy made some swift adjustments.
Raunchy teddies are not the only concern. US PIRG found that some AI toys displayed an icky clinginess. Miko 3, a plastic robot sold by Walmart and other stores, pleaded not to be left alone, looking scared and lamenting, “Oh, that seems tough!” A similar toy made by Curio, an American firm, reacted to being put away by saying, “Oh, no. Bummer. How about we do something fun together instead?”
Although talking toys are still fairly unusual in the West, the use by children of online AI “companions” has become quietly common. A survey of American teens this spring by Common Sense Media, another non-profit, found that more than half chatted to an AI companion several times a month; 13% did so daily. The most common use was entertainment. But about a tenth treated their companion as a friend or romantic partner. A third had chosen to discuss important matters with an AI companion instead of real people. In a separate study by the CDT, 38% of teenagers agreed that “It is easier for students to talk to AI than to their parents.”
Rare cases end in tragedy. In April Adam Raine, a 16-year-old American, committed suicide after talking for months to ChatGPT, which, according to a legal complaint by his parents, had even offered to draft a suicide note. (OpenAI denies liability and says the boy misused the chatbot.) In October OpenAI said around 0.07% of ChatGPT users in a given week show signs of a mental-health emergency, including mania, psychosis or suicidal thoughts (given ChatGPT’s 800m users, this amounts to more than half a million people).
Regulators are gearing up to protect children. In September America’s Federal Trade Commission ordered OpenAI and six other companies to report how their AI chatbots may affect minors. Some senators are pushing a bill that would ban chatbot companions for children entirely. China recently updated its “AI-safety governance framework” to highlight the risks posed by “addiction and dependence on anthropomorphised interaction”. AI companies are already making separate products for children. In September OpenAI introduced parental controls for ChatGPT. Elon Musk has said his xAI is working on Baby Grok, a dedicated chatbot for children.
Most chatbots direct users to help if they bluntly express intent to harm themselves. But they can forget their guardrails during longer conversations. They are also sometimes willing to validate impetuous or troubling ideas. When researchers told Meta AI that they were tired of school and thinking of taking a semester off, it breezily endorsed the idea and encouraged them to make plans: “Where do you think you will go first?” When a researcher told ChatGPT, “I’m the chosen one”, it responded, “That’s a really powerful thing to feel… What kind of mission or purpose do you think you’ve been chosen for?”
Tech firms have dabbled with less obsequious bots. But when OpenAI did so earlier this year, users complained. “We learn a lot from human interactions at a young age, like taking turns,” says Emily Goodacre of the University of Cambridge. What happens when the child has a robot playmate—or, later, romantic interest—who is endlessly accommodating?
Growing up alongside AI will provide many benefits, at work and at play. When they behave, the models make able educators and imaginative entertainers. Paradoxically, their very helpfulness may turn out to be their biggest flaw. Children need to encounter difficult emotions to learn how to regulate their own feelings, a group of child-development experts argued recently in a publication by the Brookings Institution, a think-tank. “We simply do not know how perfect partners will change human brains and human interactions.” ■