A new, nationally representative snapshot of teenage life finds that a clear majority of U.S. adolescents believe AI chatbots are already reshaping schoolwork, and many of them regard that change with a mix of pragmatic acceptance, unease and resignation. The Pew Research Center’s survey of 1,458 American teens (ages 13–17) and their parents shows roughly two‑thirds of teens have tried chatbots like ChatGPT and Microsoft Copilot, and about half report using those tools for schoolwork. Nearly six in ten say AI-assisted cheating happens at least “somewhat often” at their schools, and roughly 12 percent say they’ve turned to AI for emotional support or advice.
Background
The rapid adoption of conversational generative AI since 2022 has moved from novelty into routine for many teenagers. Where earlier waves of ed‑tech (from calculators to web search) raised questions about dependency, the current generation of chatbots combines real‑time conversational help, writing assistance, problem solving, and a humanlike tone that can encourage repeated and intimate use. The Pew study — fielded in late 2025 and published as a full report with toplines and methodology — provides the first large, representative look at how teens themselves describe this shift and where parents, educators and policymakers are likely to focus their attention next.
Schools, meanwhile, are scrambling. Districts and teachers face a threefold tension: (1) preventing misuse of AI on summative assessments, (2) integrating the tools productively for learning, and (3) protecting students’ mental health and privacy as they use systems designed primarily by commercial vendors. Advocacy groups and researchers warn that without rapid policy, curriculum and design changes, AI could both magnify educational inequality and blunt critical thinking skills that are central to schooling. The Brookings Institution’s recent global task‑force report argues that the risks to students’ foundational development are substantial unless adoption is guided by explicit pedagogical guardrails.
What the Pew data actually shows
Method and headline findings
Pew’s survey polled 1,458 teen–parent pairs between Sept. 25 and Oct. 9, 2025, using a probability‑based online panel weighted to be nationally representative of U.S. teens living with parents. The margin of sampling error for the full sample is ±3.3 percentage points. That methodology means the headline numbers — roughly 64% of teens have used an AI chatbot, 28–30% use one daily, 54% have used chatbots for schoolwork, and 12% used AI for emotional support or advice — are robust and nationally informative.
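As a quick sanity check on that margin (a back‑of‑envelope calculation, not Pew’s exact variance estimate; it assumes simple random sampling with the worst‑case proportion p = 0.5):

```latex
% 95% margin of error for a proportion, assuming simple random sampling:
\[
\mathrm{MOE} \;=\; z\sqrt{\frac{p(1-p)}{n}}
            \;=\; 1.96\sqrt{\frac{0.5\,(1-0.5)}{1458}}
            \;\approx\; 0.026 \quad (\pm 2.6\ \text{percentage points})
\]
% Pew's reported +/-3.3 points is larger because weighting an online panel
% to national benchmarks inflates sampling variance; the gap implies a
% design effect of roughly (3.3/2.6)^2 ~= 1.6, typical for such surveys.
```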
But the survey also includes important qualifiers: Pew did not directly ask teens whether they personally cheated using AI, and it did not define “cheating” in a single, uniform way. Instead, teens reported perceptions of how often AI-assisted cheating happens at their school; nearly 6 in 10 said it happens at least “somewhat often.” That distinction matters: perception of peer behavior is not the same as documented incidents of academic dishonesty, but perception shapes classrooms, discipline policies and teacher practice.
How teens say they use AI for schoolwork
When asked how they use chatbots, many teens draw a line between research/help and text‑generation tasks. Among the teens who reported using AI for school, a larger share described using chatbots for:
- researching a topic or solving problems,
- getting explanations for difficult concepts,
- generating ideas and outlines.
Fewer teens reported using AI to edit or rewrite their own writing, the use case that school policies most commonly frame as crossing into plagiarism or “cheating.” This pattern suggests teens are, at least in their self‑reports, distinguishing between permissible scaffolding (research, tutoring) and impermissible substitution (having the AI produce the final written product).
The cheating narrative: perception vs. evidence
Why perceptions matter
Perceived prevalence of cheating matters because it changes classroom dynamics. If many students, parents and teachers believe AI‑facilitated cheating is common, schools will feel pressured to respond with stricter controls, heavier test‑based assessment, or punitive policies that may have unintended side effects. Sal Khan of Khan Academy, for example, recommends assuming students will use AI for out‑of‑class assignments and shifting evaluation to in‑class writing and oral checks to verify understanding. That is a pragmatic response, but it also reshapes how assessment and instruction are delivered.
The problem with detection and punishment
AI detection tools have clear limitations: models evolve rapidly, students learn evasive strategies, and detection can produce false positives that undermine trust. Many teachers report mixed confidence in these tools, and punitive approaches risk driving AI use further underground rather than teaching ethical, productive practices. Rather than only policing, schools must develop assessment and learning designs that make AI either irrelevant to the measurement of learning (controlled in‑class tasks) or an explicit, assessed partner (requiring students to document how they used AI).
Dependency, learning harms and equity
Early evidence of dependence
Beyond cheating, the Pew reporting flagged a deeper educational risk: dependence on AI may erode students’ confidence and cognitive skills. Stanford’s Guilherme Lichand told reporters that in an experiment with middle‑school students, those who received AI assistance on an early task and then had the tool removed performed worse on a subsequent cognitive task than peers who’d never had AI help — and they reported lower confidence. Lichand’s experiment remains unpublished and should be treated cautiously, but it aligns with a growing academic literature showing that routine reliance on AI can produce what researchers call “metacognitive laziness” or reduced self‑efficacy. The Brookings report raises similar warnings after reviewing hundreds of studies and stakeholder consultations. Both Brookings and multiple academic experiments urge careful design to avoid scaffolding that never gets removed.
Who is most exposed?
Pew’s survey — and follow‑up reporting — points to a troubling equity dynamic: students from lower‑income households are more likely to use AI for most or all of their schoolwork. For some families, AI is a cheap substitute for tutoring and time‑intensive homework help that wealthier families can buy. That can produce a short‑term advantage (immediate homework completion) but long‑term harm if overreliance prevents skill development. Critics warn that this dynamic risks creating a two‑tier system where under‑resourced students outsource cognitive labor to commercial AIs while wealthier students access human tutoring and richer learning supports.
Teens and emotional support: a nontrivial minority
Roughly 12% of teens in the Pew survey reported using AI for emotional support or advice, a proportion that alarms many parents and child‑development experts. A majority of parents disapprove of that use, and researchers caution that chatbots are not trained clinicians: they are optimized for conversational engagement and do not reliably identify crises, provide trauma‑informed care, or maintain appropriate boundaries. Academic work on chatbots’ psychosocial effects finds that heavy usage can correlate with greater loneliness, emotional dependence, and lower real‑world socialization over time, especially for users predisposed to seek companionship from nonhuman agents. Schools and health providers need to recognize this trend as a mental‑health issue as much as an academic one.
What educators are doing — and what works
Practical classroom strategies
Teachers and school leaders are experimenting with a range of responses. The strategies that show promise are those that combine pedagogy, assessment design and open student‑teacher conversations:
- In‑class assessments for writing and problem solving so that the measurement of learning does not depend on out‑of‑class AI use.
- Assignment redesign that requires process documentation: students submit drafts, annotated sources, chat logs or revision notes showing how they used AI and why.
- Explicit AI literacy lessons that teach students how to use these tools as assistants (prompt design, fact‑checking, citation) and when their use is inappropriate.
- Rubrics that reward original reasoning and penalize uncredited text production rather than policing surface style.
These approaches align with the Brookings task force recommendations to “use AI tools that teach, not tell” and to prepare teachers with concrete training on classroom integration.
Detection, policy and enforcement
Many districts still rely on policy prohibitions and detection tools. Those policies are easier to implement at scale but risk becoming blunt instruments that fail both learning and fairness tests. A mixed approach is preferable:
- Use detection selectively as a diagnostic aid, not as sole evidence for punishment.
- Pair detection with due‑process steps and teacher interviews.
- Invest in teacher training so educators can ask probing, rights‑respecting questions about student understanding rather than issuing automatic sanctions.
Sal Khan’s recommendation to shift weight toward in‑class demonstrations is blunt but practical: if an assessment must be completed under real‑time observation, with no access to outside AI, the chatbot is effectively neutralized as a vector for cheating. Yet that shift reallocates curriculum time and may disadvantage students who need flexible learning supports. Balance is essential.
Technical and safety risks schools must consider
Hallucinations and misinformation
Chatbots can produce plausible but false answers (so‑called hallucinations). When students rely on an AI to “explain” a concept or provide a citation, those outputs must be treated skeptically. Teaching students verification habits — cross‑checking claims with textbooks, trusted websites, and teacher guidance — is urgent.
Privacy and commercial capture
Many widely available educational supports are commercial, hosted on cloud platforms with complex data practices. Schools that adopt AI need procurement contracts that protect student data and prohibit monetization of children’s learning traces. Brookings warns against building educational systems whose future functionality depends on vendors that could change terms or sunset products.
Psychological risks: sycophancy and scaffolding
Research on “sycophantic AI” shows models often affirm users and can foster a cycle of reliance: a chatbot that always agrees increases user trust even when the advice is poor. For emotionally vulnerable teens, that dynamic can be dangerous. Training students to treat chatbots as tools — not as human friends — is part of an AI literacy response, but product teams and regulators also must design signals and guardrails to reduce unwarranted validation.
Policy levers and design recommendations
Brookings’ “Prosper, Prepare, Protect” framework offers a policy map that translates survey findings into actionable steps for governments, school districts, and vendors. Key recommendations worth emphasizing for U.S. districts include:
- Prosper: Design classroom experiences so AI augments rather than replaces cognitive practice; require AI to teach processes, not only produce answers.
- Prepare: Fund and mandate comprehensive teacher professional development on AI pedagogy and assessment.
- Protect: Establish procurement standards that ensure student data privacy, usability audits, and requirements that AI tools explain their recommendations in student‑facing ways.
At the school level, practical moves include mandatory AI literacy units, explicit labeling requirements for AI‑assisted content in student work, and district policies that prioritize equitable access to human tutoring for students in low‑income households. These policies respond directly to the equity concerns raised by Pew’s data.
Four concrete steps for schools this semester
- Audit. District leadership should run a short audit of which AI tools students and teachers are using, how student data might be processed, and what gaps exist in teacher preparation.
- Rebalance assessment. Shift 20–40% of summative assessment weight toward in‑class demonstrations or oral defenses that cannot be outsourced to a chatbot.
- Teach AI literacy. Implement a short unit (2–6 lessons) covering AI capabilities, limits, evidence‑checking, and ethical use; require students to disclose how they used any AI assistance.
- Expand support. Direct funds to expand tutoring and after‑school help so under‑resourced students do not default to AI as their first and only study partner.
These are pragmatic, near‑term measures that preserve learning standards while recognizing that policing alone will not solve the underlying access and pedagogy problems.
What the evidence does and doesn’t yet prove
The Pew survey is valuable because it measures teens’ self‑reports and perceptions; it is not, by itself, a causal study showing that AI use causes learning loss. Likewise, experiments reported by researchers like Guilherme Lichand provide early signals that dependency and reduced confidence are possible harms, but many of these experimental results remain unpublished or preliminary and require replication. Good policymaking in schools should therefore be precautionary: weigh the plausible, documented risks while implementing experiments and evaluations inside districts to collect local evidence before wide rollout. Transparency about what we do and don’t know will serve educators, families and students better than panic or permissiveness.
The long arc: preparing teens for an AI‑infused world
Teens in the Pew study were, on balance, less pessimistic about AI’s long‑term societal impact than older adults. A plurality saw potential personal benefits even as they expressed concerns about overreliance and loss of critical thinking. That optimism is a resource: rather than trying to forbid AI, educators and policymakers can treat the technology as an educational object, something students must learn to interrogate, evaluate and harness.
That means shifting from a model of prohibition to a model of disciplined integration: teach students prompt literacy, source verification, ethical frameworks for generative content, and the social‑emotional skills to seek human help when needed. It also means holding vendors and policymakers accountable for building safer, auditable tools for minors. Brookings’ three pillars — Prosper, Prepare, Protect — should be read not as abstract ideals but as a practical roadmap for aligning procurement, curriculum and community engagement.
Conclusion: reality checks for parents, teachers and technologists
The Pew findings are a clear call to action. Teenagers are already using AI in large numbers; many assume their peers misuse it; a nontrivial share are using it for emotional support; and early research suggests dependence can weaken confidence and learning when scaffolds are removed. The path forward requires realism: schools cannot simply pretend this technology doesn’t exist, nor can they outsource the responsibility for young people’s learning to corporate chatbots.
Instead, districts should act on three practical fronts simultaneously:
- Redesign assessment to make outsourcing ineffective,
- Train teachers and equip classrooms with AI‑aware pedagogy,
- Invest in equitable human supports so lower‑income students are not forced into dependency on commercial tools.
Handled badly, AI will magnify existing inequities and hollow out foundational skills. Handled well, it can become a scaffold that is intentionally taken away — a tool that builds real competence rather than permanent dependence. The Pew snapshot shows the choice is urgent: the generation currently sitting in our classrooms is already learning with these systems, and how the adults around them respond this year will shape their skills, confidence and trust for a decade.
Source: The Detroit News, “Most teens believe their peers are using AI to cheat in school”