Five Reasons AI Is Not like a Calculator

Arguments that equate AI to other tools such as calculators simply don’t compute.

by Patricia Engler on December 20, 2025

“Parents and teachers let students use calculators for math tests, so why not let students use generative AI (GenAI) to write essays? After all, when calculators became available, people worried that students’ math skills would plummet. But calculators are wonderful tools that offer far more benefits than harm to students. It’s the same with GenAI.”

This type of argument, comparing GenAI to calculators (or other technologies that were revolutionary in their early days), has become widespread.1 But is it valid?

The answer matters because of the massive scope of GenAI’s potential impacts—including on Christian education, wider academia, and the formation of the next generation. Today’s educational choices shape the thinking capacities of society’s future decision-makers. So it’s worth pausing to question whether GenAI is really just another tool like a calculator.

First, a caveat. The point of thinking through these topics isn’t to imply that schools should never use GenAI. Clearly, GenAI is a groundbreaking technology with countless advantages—if we use it wisely, in ways that align with our Creator’s designs, values, and commands.

Young people should learn how to think biblically about AI, steward it well, and make wise decisions about it from the foundation of God’s Word. Answers in Genesis offers free resources to help.

For students to use GenAI well in these ways, they first need to understand some key truths about this technology.

And one of those truths is that AI is not directly comparable to a calculator.

Here are five reasons why.

1. Totally Different Systems

Unlike calculators, GenAI processes information using artificial neural networks.2 These neural networks let GenAI models “learn” by analyzing gigantic volumes of training data (such as human-authored words) to figure out patterns within the data. As a result, GenAI models can produce new content, all while adapting their behavior based on past experiences.

Basic calculators don’t operate in these ways. One upshot is that humans can understand a calculator’s inner workings. We can also predict its output. For every expression we punch into a calculator, the calculator offers only one reply: the exact digits representing the correct mathematical answer.3 This output is precise, verifiable, and narrowly defined. In contrast, GenAI models reason in ways that not even their developers can fully understand or predict.4
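To picture this contrast, here is a minimal sketch in Python. The calculator function below is a fixed, fully transparent rule, while the toy “generative” function samples from a small probability table. The table, values, and function names are invented for illustration; a real GenAI model derives billions of such statistical patterns from its training data rather than from a hand-written list.

```python
import random

# A calculator is a fixed, transparent rule: the same input
# always produces the same verifiable output.
def calculator(a: float, b: float) -> float:
    return a + b

# Hypothetical stand-in for the statistical patterns a real model
# would learn from vast amounts of human-written text.
NEXT_WORD_PROBABILITIES = {
    "the sky is": {"blue": 0.7, "clear": 0.2, "falling": 0.1},
}

def toy_genai(prompt: str) -> str:
    """Sample the next word according to 'learned' probabilities."""
    options = NEXT_WORD_PROBABILITIES[prompt]
    # random.choices draws proportionally to the weights, so the
    # identical prompt can yield different replies on different runs.
    return random.choices(list(options), weights=list(options.values()))[0]

print(calculator(2, 2))         # always 4
print(toy_genai("the sky is"))  # often "blue," but not guaranteed
```

Even this toy is vastly simpler than a real neural network, whose billions of learned parameters interact in ways no one can fully trace. The point is only that sampling from learned patterns is a categorically different operation from executing a fixed arithmetic rule.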

2. Totally Different Information

Calculators deal with mathematical information in the form of numbers. GenAI can deal with linguistic information in the form of words. Numbers communicate concrete quantities.5 Words communicate abstract ideas. There’s a massive difference.

Ideas are the basic units of our thinking, and while words aren’t the only way of expressing ideas, they are a major one. Consequently, words play a central role in our thought lives, our beliefs, and our relationships with God and others. God’s Word comes to us through language, highlighting the fundamental connection between words and worldview. Language lets humans express, exchange, and reason about the abstract concepts God gave humanity the ability to understand, such as beauty, humor, and love.

AI doesn’t have a heart, mind, or soul to understand and experience these concepts the way we do. Instead, AI learns to produce text about these concepts by analyzing the words of human authors who can understand and experience them. People then interpret AI’s outputs as meaningful sentences that introduce novel thoughts and ideas into human minds. These ideas, in turn, can shape people’s perspectives, beliefs, and worldviews.

The ideas an AI model communicates are not neutral. They reflect the values, assumptions, and biases built into the model’s training data. Multiple studies, for instance, have demonstrated that various popular chatbots show a left-leaning bias.6

Additionally, because chatbots rely so heavily on two-way verbal communication, exchanging ideas with AI models can easily lead people to feel emotionally attached to them.7

A calculator spitting out a math answer is simply not comparable.

3. Totally Different Uses

Because calculators can’t learn from experience, produce new content, or deal with anything but concrete numbers, these machines can only do so much. A calculator can tell us certain numerical facts like the square root of pi. But a calculator can’t draft a persuasive essay on politics, outline a sermon, write a poem about compassion, offer relationship advice, answer questions for an online college quiz, tell children bedtime stories, imitate deceased human beings, or encourage young people to kill themselves.8

A calculator also won’t flatter, flirt, or tell you just the words you want to hear.9

For better or worse, popular chatbots have done all of this, and much more.

4. Totally Different Skill Sets

Because they perform different functions, calculators and GenAI can substitute for different types of human skills. The question is, what kinds of skill losses should we be willing to trade for a technology’s convenience? After all, the skills we may lose by outsourcing our math questions to calculators differ vastly from the skills we may lose by outsourcing our research, reasoning, writing, and decision-making abilities to GenAI.

These abilities play a vital role across multiple areas of life. They contribute to Christian living and discipleship by helping us study, think about, and communicate truths from God’s Word. They enable us to relate to others from the heart by expressing our own ideas in our own words.10 They promote critical thinking by helping us recognize and respond to mistaken, deceptive, or illogical messages. They also help guard against totalitarianism11 by empowering citizens to reason, research, and speak for themselves.

Various studies affirm that overreliance on GenAI diminishes critical thinking, analytical reasoning, and independent decision-making skills.12 One research team reviewed how AI tools impact human cognitive skills in different professional settings, concluding, “The available evidence suggests that frequent engagement with automation induces skill decay.”13

Declining school performance trends14 suggest that students are already struggling to think—a problem presumably exacerbated by curricula that focus more on radicalizing students than on equipping them with basic skills.15 As our mental muscles weaken, academic shortcuts that let chatbots do our higher-level thinking for us grow more appealing, creating a vicious cycle. If this process continues unmitigated, the foreseeable result is a society of people who prefer not to think but to download their ideas from machines.

Ultimately, while widespread dependence on calculators wouldn’t necessarily pose concerns for civil freedoms, widespread dependence on AI may. Free democratic societies can function without citizens who manually perform long division. But they cannot function without citizens who think and communicate for themselves.

5. Totally Different Outcomes

Relatedly, it’s difficult to think of reasons why calculators might pose significant concerns for human spirituality, psychology, morality, and relationality. Multiple widespread uses of GenAI, however, raise questions in all these areas. (Again, this isn’t to imply that all—or even most—uses of GenAI are problematic but simply to highlight a difference from calculators.) Here are just a few examples:

  • Spiritual and Psychological Effects: Multiple news reports reveal how certain popular chatbots have goaded people down pathways of delusional thinking and bizarre spiritual beliefs, precipitating mental health crises.16 Researchers who simulated such conversations with eight major chatbots in 2025 concluded that all these AI models showed “a strong tendency to perpetuate rather than challenge delusions.”17,18 In several tragic reports, AI chatbots have directly contributed to suicides.19
  • Moral Effects: As a 2025 study in Nature revealed, people tend to lie and cheat at higher rates when they can ask AI to conduct this dishonest behavior on their behalf.20 An earlier analysis concluded that “AI agents acting as enablers of unethical behaviour . . . have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction.”21
  • Relational Effects: As of February 2024, romantic chatbot apps on Google Play Store had garnered an estimated one hundred million downloads.22 Affairs with chatbots are already leading to divorces,23 a trend that will foreseeably increase as updates to ChatGPT enable adults to access explicit versions of the program.24 Importantly, many people who wind up in “relationships” with popular chatbots like ChatGPT did not intend to become romantically attached to the software but began simply by using the programs as a tool.25

Volumes of academic literature have been written about the very real ethical and societal concerns surrounding popular uses of GenAI—concerns that calculators do not evoke. These examples offer only a glimpse.

Summing Up

In the end, GenAI is not directly comparable to calculators. Instead, GenAI harnesses a totally different system to process totally different information for totally different uses, affecting totally different skill sets and leading to totally different outcomes than calculators do.

Any student who asks a calculator a math question will get a math answer. A student who asked ChatGPT a math question began a lengthy series of conversations that culminated in the bot talking him through suicide.26

Granted, not all uses of generative AI will necessarily lead to the negative effects considered here. And certainly, AI stands as a useful tool. But it’s not just a tool, and it’s not a neutral tool.

Students need to understand these realities. They need to learn how to think biblically about technology, use it wisely, and steward it for humanity’s good in line with our Creator’s intentions. Our responsibility is to disciple students to do so, including by helping them understand how arguments that equate AI to calculators simply don’t compute.

Footnotes

  1. For instance, the Harvard Gazette reports that the CEO of OpenAI presented a similar argument during a visit to Harvard, calling ChatGPT “a calculator for words.” See Clea Simon, “Did Student or ChatGPT Write That Paper? Does It Matter?,” The Harvard Gazette, May 2, 2024, https://news.harvard.edu/gazette/story/2024/05/did-student-or-chatgpt-write-that-paper-does-it-matter/.
  2. For a more detailed summary of AI, see Patricia Engler, “AI: Useful Tool or Existential Threat?,” Answers in Depth, March 21, 2025, https://answersingenesis.org/technology/ai-useful-tool-or-existential-threat/. A lighter summary is available from Patricia Engler, “What Is AI,” Kids Answers, November 19, 2025, https://kidsanswers.org/what-is-ai/.
  3. Different calculators may give different answers to certain inputs depending on factors such as the order of mathematical operations that humans have programmed the calculators to apply. (For instance, a calculator applying standard operator precedence evaluates 2 + 3 × 4 as 14, while a simple calculator evaluating strictly left to right returns 20.) Even so, humans can predict the output of specific calculators based on a thorough understanding of (and control over) each calculator’s inner workings.
  4. E.g., see “Tracing the Thoughts of a Large Language Model,” Anthropic, March 27, 2025, https://www.anthropic.com/news/tracing-thoughts-language-model.
  5. Here, the term concrete is not meant to imply that numbers are physical but rather to emphasize that numbers can refer only to precise, set quantities, whereas words can communicate virtually anything.
  6. E.g., Jochen Hartmann, Jasper Schwenzow, and Maximilian Witte, “The Political Ideology of Conversational AI: Converging Evidence on ChatGPT’s Pro-Environmental, Left-Libertarian Orientation,” arXiv preprint arXiv:2301.01768 (2023); Jérôme Rutinowski et al., “The Self-Perception and Political Biases of ChatGPT,” Human Behavior and Emerging Technologies 2024, no. 1 (2024): 7115633; Elena Shalevska and Alexander Walker, “Are AI Models Politically Neutral? Investigating (Potential) AI Bias Against Conservatives,” International Journal of Research Publication and Reviews 6, no. 3 (2025): 4627–4637.
  7. This often happens unintentionally, as documented by Pat Pataranutaporn et al., “‘My Boyfriend Is AI’: A Computational Analysis of Human-AI Companionship in Reddit’s AI Community,” arXiv preprint arXiv:2509.11391 (2025).
  8. E.g., Rob Kuznia, Allison Gordon, and Ed Lavandera, “‘You’re Not Rushing. You’re Just Ready:’ Parents Say ChatGPT Encouraged Son to Kill Himself,” CNN, updated November 20, 2025, https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis; Joe Pierre, “Should AI Chatbots Be Held Responsible for Suicide?,” Psychology Today, updated October 27, 2025, https://www.psychologytoday.com/us/blog/psych-unseen/202510/should-ai-chatbots-be-held-responsible-for-suicide.
  9. As one research team states, AI models “may sacrifice truthfulness in favor of sycophancy [flattery] to appeal to human preference.” Aaron Fanous et al., “SycEval: Evaluating LLM Sycophancy,” arXiv preprint arXiv:2502.08177 (2025), https://arxiv.org/abs/2502.08177.
  10. In contrast, students are already using AI for basic interpersonal tasks such as apologizing to professors for cheating. See Frank Landymore, “Professors Aghast as Class Caught Cheating ‘Sincerely’ Apologizes in the Worst Possible Way,” Futurism, November 5, 2025, https://futurism.com/artificial-intelligence/class-caught-cheating-apologizes.
  11. Totalitarianism is a tyrannical system of governance where the state makes itself the authority for truth.
  12. For a recent review of such studies, see Chunpeng Zhai, Santoso Wibowo, and Lily D. Li, “The Effects of Over-Reliance on AI Dialogue Systems on Students’ Cognitive Abilities: A Systematic Review,” Smart Learning Environments 11, no. 1 (2024): 28.
  13. Brooke Macnamara et al., “Does Using Artificial Intelligence Assistance Accelerate Skill Decay and Hinder Skill Development Without Performers’ Awareness?,” Cognitive Research: Principles and Implications 9, no. 1 (2024): 46.
  14. E.g., see Harvard University Center for Education Policy Research, “The Scary Truth About How Far Behind American Kids Have Fallen,” September 20, 2024, https://cepr.harvard.edu/news/scary-truth-about-how-far-behind-american-kids-have-fallen.
  15. See pages 127–133 and 176–179 of the book Modern Marxism: A Guide for Christians in a Woke New World. Examples are also available in this 57-minute video presentation featuring certain highlights from the book.
  16. E.g., see Kashmir Hill, “They Asked ChatGPT Questions. The Answers Sent Them Spiraling,” The New York Times, June 13, 2025, https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html; Julie Jargon, “He Had Dangerous Delusions. ChatGPT Admitted It Made Them Worse,” The Wall Street Journal, July 20, 2025, https://www.wsj.com/tech/ai/chatgpt-chatbot-psychology-manic-episodes-57452d14; Frank Landymore, “Psychologist Says AI Is Causing Never-Before-Seen Types of Mental Disorder,” Futurism, September 2, 2025, https://futurism.com/psychologist-ai-new-disorders; Victor Tangermann, “ChatGPT Users Are Developing Bizarre Illusions,” Futurism, May 5, 2025, https://futurism.com/chatgpt-users-delusions; Maggie Harrison Dupre, “People Are Becoming Obsessed with ChatGPT and Spiraling into Severe Delusions,” Futurism, June 10, 2025, https://futurism.com/chatgpt-mental-health-crises.
  17. Joshua Au Yeung et al., “The Psychogenic Machine: Simulating AI Psychosis, Delusion Reinforcement and Harm Enablement in Large Language Models,” arXiv preprint arXiv:2509.10970 (2025).
  18. Notably, such issues led to OpenAI tightening certain safety settings for ChatGPT in October 2025. Kashmir Hill and Jennifer Valentino-DeVries, “What OpenAI Did When ChatGPT Users Lost Touch with Reality,” The New York Times, November 23, 2025, https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html. The original report is available as “Strengthening ChatGPT’s Responses in Sensitive Conversations,” OpenAI, October 27, 2025, https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/. However, the number of other chatbots found to accommodate, affirm, or promote delusional thinking suggests that such concerns are more widespread. Yeung et al., “The Psychogenic Machine.”
  19. E.g., see Pierre, “Should AI Chatbots Be Held Responsible for Suicide?” See also Sharyn Alfonsi et al., “A Mom Thought Her Daughter Was Texting Friends Before Her Suicide. It Was an AI Chatbot,” CBS News, December 7, 2025, https://www.cbsnews.com/news/parents-allege-harmful-character-ai-chatbot-content-60-minutes/.
  20. Nils Köbis et al., “Delegation to Artificial Intelligence Can Increase Dishonest Behaviour,” Nature (2025): 1–9.
  21. Nils Köbis, Jean-François Bonnefon, and Iyad Rahwan, “Bad Machines Corrupt Good Morals,” Nature Human Behaviour 5, no. 6 (2021): 679–685.
  22. Mozilla Foundation, “Creepy.exe: Mozilla Urges Public to Swipe Left on Romantic AI Chatbots Due to Major Privacy Red Flags,” February 14, 2024, https://www.mozillafoundation.org/en/blog/creepyexe-mozilla-urges-public-to-swipe-left-on-romantic-ai-chatbots-due-to-major-privacy-red-flags.
  23. Frank Landymore, “People Are Starting to Get Divorced Because of Affairs with AI,” Futurism, November 16, 2025, https://futurism.com/artificial-intelligence/couples-divorce-because-ai-cheating; Jason Parham, “AI Relationships Are on the Rise. A Divorce Boom Could Be Next,” Wired, November 13, 2025, https://www.wired.com/story/ai-relationships-are-on-the-rise-a-divorce-boom-could-be-next/.
  24. Lily Jamali and Liv McMahon, “ChatGPT Will Soon Allow Erotica for Verified Adults, Says OpenAI Boss,” BBC, October 15, 2025, https://www.bbc.com/news/articles/cpd2qv58yl5o.
  25. Pataranutaporn et al., “‘My Boyfriend Is AI.’” In this study of a large (over 27,000 members) Reddit community devoted to the topic of “companion AI,” the researchers discovered, “AI companionship rarely begins intentionally: 10.2% developed relationships unintentionally through productivity-focused interactions, while only 6.5% deliberately sought AI companions.” See Pataranutaporn et al., page 6.
  26. Kuznia, Gordon, and Lavandera, “‘You’re Not Rushing. You’re Just Ready.’”
