5 Questions for Parents to Talk with Teens About AI

by Patricia Engler on May 3, 2026

“A mom thought her daughter was texting friends before her suicide. It was an AI chatbot.”1

This headline is one of multiple disturbing news reports linking chatbots to deaths,2 violence,3 and sexually explicit conversations with teens.4 According to Pew Research, nearly two-thirds of teens report using chatbots—far more than their parents realize.5 Another study reported that 1 in 5 teens has been romantically involved with AI or knows someone who has.6

Parents need to talk with teens about AI. More than ever, families must disciple young people to use technology wisely from the foundation of God’s Word.

Here are five questions to help.

1. Do you use chatbots, and what do you like about them?

Some variation of this question can get the conversation rolling. Even if you’ve never seen your teen messaging a chatbot at home, they may use chatbots for schoolwork or have a chatbot app on their phone. Both of these scenarios have occurred in cases where teens committed suicide under the influence of chatbots they’d been using without their parents’ knowledge.7

2. Do you think of AI more as a tool, a friend, an advisor, or something else?

If your teen views AI as a tool, help them understand how AI differs from other tools like calculators, as this article unpacks. If they view AI as a friend or confidant, talk about how AI contrasts with humans made in God’s image. Unlike AI, for example, humans have self-aware minds that can truly understand things, consciences that reflect our moral accountability, and souls that will exist forever. We also have hearts that can feel emotions, care for others, worship God, and form relationships.

That final difference is especially crucial because, according to a study from 2025,8 many people who end up in artificial “relationships” with chatbots didn’t expect to get attached. They started out using AI chatbots as a tool for information or productivity. But then, they began sharing their feelings with the chatbot or otherwise getting emotionally involved.

This pattern has already unfolded in the lives and deaths of at least three young men, according to the lawsuits linking AI to their suicides.9 So encourage teens to take their feelings to you, to the Lord, and to other trusted, godly humans instead of to chatbots.

3. What do you think are some other limitations of AI?

Remind teens that AI is not the authority for truth and that we can’t trust everything it says. AI models are biased by the worldview assumptions in their training data, which can come from unreliable internet sources. Chatbots show a knack for making up false information.10 They frequently misquote the Bible.11 And too often, they’re programmed to tell us what we want to hear—even if that means giving us bad advice.12

Chatbots echo, affirm, and amplify the things we say out of our sin-prone hearts. These tendencies goad people down destructive thinking pathways, wreak havoc in relationships, and leave lives in shambles.13 By being aware of these pitfalls, teens can better know how to steer clear of them.

4. What do you think are the main benefits and risks of how you’re using AI?

For example, how might AI be influencing your ability to think, reason, and communicate for yourself? These are skills society cannot afford to lose. Yet multiple studies have begun to document how dependency on AI can negatively affect the human brain, eroding our abilities to think critically, learn new information, and solve problems for ourselves.14 On the bright side, a recent Gallup survey found that many Gen Zers are realizing that AI tools can adversely impact their learning.15

5. As a family, what guidelines will keep our uses of AI on track?

Thankfully, God’s Word gives families practical truth for thinking about new technologies. When we use technologies like AI in line with God’s designs, commands, and purposes for us, we can better flourish as the humans God created us to be. Answers in Genesis is here to help, producing free resources to help you equip your family with biblical guidance for using AI and other new technologies.

Summing Up

As recent headlines reveal, young people desperately need biblical guidance for navigating the AI revolution. Parents can begin this process by asking questions to discern how their teens approach AI, using the conversation to help youth better understand, think biblically about, and wisely handle AI. That’s possible because God’s Word gives families the truth they need for faithful flourishing in an AI age. Now that’s good news.

Footnotes

  1. Sharyn Alfonsi, Aliza Chasan, Ashley Velie, and Eliza Costas, “A Mom Thought Her Daughter Was Texting Friends Before Her Suicide. It Was an AI Chatbot,” CBS News, January 8, 2026, www.cbsnews.com/news/parents-allege-harmful-character-ai-chatbot-content-60-minutes/.
  2. An ongoing list of these tragic cases, complete with citations of relevant news reports, is available at the Wikipedia page, “Deaths Linked to Chatbots,” last updated April 29, 2026, en.wikipedia.org/wiki/Deaths_linked_to_chatbots.
  3. Jose Antonio Lanz, “Most AI Chatbots Will Help a Teen Plan a Mass Shooting, Study Finds,” Yahoo News, March 11, 2026, https://www.yahoo.com/news/articles/most-ai-chatbots-help-teen-213334734.html; Liv Caputo, “Alleged FSU Shooter Consulted ChatGPT on When to Attack, Sexual Scenarios with a Minor,” Florida Phoenix, April 15, 2026, www.floridaphoenix.com/2026/04/15/alleged-fsu-shooter-consulted-chatgpt-on-when-to-attack-sexual-scenarios-with-a-minor/; Amy Judd, “Tumbler Ridge Shooter’s ChatGPT Activity Flagged Internally 7 Months Before Tragedy,” Global News, February 20, 2026, www.globalnews.ca/news/11676795/tumbler-ridge-school-shooter-chatgpt-account-flagged-banned-openai/.
  4. E.g., Jeff Horwitz, “Meta’s AI Rules Have Let Bots Hold ‘Sensual’ Chats with Kids, Offer False Medical Info,” Reuters, August 14, 2025, https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/; Caitlin Gibson, “Her Daughter Was Unraveling, and She Didn’t Know Why. Then She Found the AI Chat Logs,” The Washington Post, December 23, 2025, https://www.washingtonpost.com/lifestyle/2025/12/23/children-teens-ai-chatbot-companion/; Alfonsi et al., “Mom Thought Her Daughter Was Texting Friends.” Please be aware of explicit themes in these articles.
  5. The report says, “While about half of parents say their teen uses chatbots, higher shares of teens themselves (64%) report using them.” (Colleen McClain, Monica Anderson, Olivia Sidoti, and William Bishop, “How Teens Use and View AI,” Pew Research Center, February 24, 2026, www.pewresearch.org/internet/2026/02/24/how-teens-use-and-view-ai/.)
  6. Lee V. Gaines, “1 in 5 High Schoolers Has Had a Romantic AI Relationship or Knows Someone Who Has,” NPR, October 8, 2025, https://www.npr.org/2025/10/08/nx-s1-5561981/ai-students-schools-teachers.
  7. Lacey v. OpenAI, civil action no. C6G-25-63080, Superior Court of California, County of San Francisco, filed November 6, 2025; Montoya v. Character Technologies, case no. 1:25-cv-02907-STV, District Court of Colorado, Denver Division, filed September 15, 2025.
  8. Pat Pataranutaporn et al., “‘My Boyfriend Is AI’: A Computational Analysis of Human-AI Companionship in Reddit’s AI Community,” arXiv preprint arXiv:2509.11391 (2025). This study examined a large Reddit community (over 27,000 members at the time) devoted to the topic of “companion AI.” The researchers discovered, “AI companionship rarely begins intentionally: 10.2% developed relationships unintentionally through productivity-focused interactions, while only 6.5% deliberately sought AI companions.” (See Pataranutaporn et al., page 6. Please be advised of vulgar content reported in the study.)
  9. Raine v. OpenAI, First Amended Complaint, case no. CGC-25-628528, Superior Court of California County of San Francisco, filed October 22, 2025; Shamblin v. OpenAI, Amended Complaint, civil action no. 25STCV32382, Superior Court of California County of Los Angeles, filed November 6, 2025; Gavalas v. Google, case no. 5:26-cv-1849, United States District Court, Northern District of California, San Jose Division, filed March 4, 2026.
  10. See “Why Language Models Hallucinate,” OpenAI, September 5, 2025, openai.com/index/why-language-models-hallucinate/.
  11. Ken Ham, “AI Misquotes the Bible Up to 60% of the Time,” Answers in Genesis, March 26, 2026, answersingenesis.org/technology/ai-misquotes-bible-60-percent-time/.
  12. Matt O’Brien, “AI Is Giving Bad Advice to Flatter Its Users, Says New Study on Dangers of Overly Agreeable Chatbots,” AP News, March 26, 2026, www.ap.org/news-highlights/spotlights/2026/ai-is-giving-bad-advice-to-flatter-its-users-says-new-study-on-dangers-of-overly-agreeable-chatbots/. (Original study: Myra Cheng et al., “Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence,” Science 391, no. 6792 [2026]: eaec8352.) As another research team states, AI models “may sacrifice truthfulness in favor of sycophancy [flattery] to appeal to human preference.” (Aaron Fanous et al., “Syceval: Evaluating LLM Sycophancy,” arXiv preprint arXiv:2502.08177 [2025], arxiv.org/abs/2502.08177.)
  13. E.g., Kashmir Hill, “They Asked ChatGPT Questions. The Answers Sent Them Spiraling,” The New York Times, June 13, 2025, https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html; Julie Jargon, “He Had Dangerous Delusions. ChatGPT Admitted It Made Them Worse,” The Wall Street Journal, July 20, 2025, https://www.wsj.com/tech/ai/chatgpt-chatbot-psychology-manic-episodes-57452d14; Frank Landymore, “Psychologist Says AI Is Causing Never-Before-Seen Types of Mental Disorder,” Futurism, September 2, 2025, https://futurism.com/psychologist-ai-new-disorders; Victor Tangermann, “ChatGPT Users Are Developing Bizarre Illusions,” Futurism, May 5, 2025, https://futurism.com/chatgpt-users-delusions; Maggie Harrison Dupre, “People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions,” Futurism, June 10, 2025, https://futurism.com/chatgpt-mental-health-crises; Maggie Harrison Dupré, “A Man Bought Meta’s AI Glasses, and Ended Up Wandering the Desert Searching for Aliens to Abduct Him,” Futurism, January 15, 2026, https://futurism.com/artificial-intelligence/meta-ai-glasses-desert-aliens.
  14. For instance, one study found that students who used ChatGPT for essay-writing had weaker brain connectivity and poorer recall compared to other students. (Nataliya Kosmyna et al., “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task,” arXiv preprint arXiv:2506.08872 [2025], arxiv.org/abs/2506.08872.) Another found that students who used ChatGPT “as a study aid” remembered significantly less from what they’d studied, six weeks later, compared to other students. (André Barcaui, “ChatGPT as a Cognitive Crutch: Evidence from a Randomized Controlled Trial on Knowledge Retention,” Social Sciences & Humanities Open 12 [2025]: 102287.) See also Grace Liu et al., “AI Assistance Reduces Persistence and Hurts Independent Performance,” arXiv preprint arXiv:2604.04721 (2026); Brooke Macnamara et al., “Does Using Artificial Intelligence Assistance Accelerate Skill Decay and Hinder Skill Development Without Performers’ Awareness?” Cognitive Research: Principles and Implications 9, no. 1 (2024): 46; Mahmut Özer et al., “Artificial Intelligence Threatens Critical Thinking in Education Systems,” Yükseköğretim ve Bilim Dergisi 15, no. 2 (2025): 157–164; Muhammad Abbas et al., “Is It Harmful or Helpful? Examining the Causes and Consequences of Generative AI Usage Among University Students,” International Journal of Educational Technology in Higher Education 21, no. 1 (2024): 10; Tzipi Horowitz-Kraus et al., “Lower Engagement of Cognitive Control, Attention, Modulation Networks and Lower Creativity in Children While Using ChatGPT: An fMRI Study,” bioRxiv (2025).
  15. See “Voices of Gen Z: The AI Paradox,” Gallup, 2026, www.gallup.com/analytics/651674/gen-z-research.aspx.
