Most people use AI on some level—whether they realize it or not. If you googled something and accepted the AI-generated summary at the top of the results as your answer, you used generative AI. And if you have interacted with a chatbot, most of them run on AI.
While AI can be a useful tool for some things, the uncritical use of AI and the acceptance of whatever it spits out as fact are dangerous. Professors are beginning to despair about the future of education because AI-enabled cheating is so rampant; in many cases, students use AI to do the majority of their coursework.1 Employers are also worried that employees may feed sensitive and private data into AI tools, leading to potential breaches.2
Concerningly, the use of AI changes the way our brains function, and it’s a dose-dependent effect. Essentially, the more you use AI, the worse you become at the tasks you outsource to it.3 How can Christians exercise appropriate caution with this relatively new tool without becoming Luddites? We’ve published a two-part, in-depth examination of AI (see part 1 and part 2), but here we’ll focus on a quick description of some of the most dangerous aspects, along with some practical advice for navigating this technology.
Large language models are trained on datasets to generate text by predicting the most likely response in the context of the parameters they are given.4 This means that large language models do not think; they can only give the explanation that the majority of their training data favors.
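To illustrate that point, here is a deliberately tiny, hypothetical sketch—not how any real chatbot is actually coded. The toy program below “writes” text by counting which word most often follows which in its small training set and then parroting the most common continuation, with no way to know or care whether the result is true.

```python
from collections import Counter, defaultdict

# Toy "training data": these three sentences are all the model ever sees.
training_sentences = [
    "the moon is made of rock",
    "the moon is made of cheese",
    "the moon is made of cheese",
]

# Count which word most often follows each word in the training data.
next_word_counts = defaultdict(Counter)
for sentence in training_sentences:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most common follower of `word`, with no regard for truth."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# "Generate" text one most-likely word at a time, starting from a prompt.
word, output = "the", ["the"]
while (word := predict_next(word)) is not None:
    output.append(word)

print(" ".join(output))  # prints "the moon is made of cheese"
```

The toy model confidently prints the false sentence, simply because the false sentence outnumbers the true one in its training data. Real large language models are vastly more sophisticated, but the principle is the same: the output is whatever the training data makes statistically likely, not what the model knows to be true.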
That leads to the obvious question: Where does the training data come from? Well, according to OpenAI, the company that built ChatGPT, they gathered their training data from “information that is freely and openly accessible on the internet.”5 Information about a book, for example, that is freely available on the internet could come from personal blogs, book review sites like Goodreads or Amazon, or less-reliable forums like Reddit. So when you ask ChatGPT a question, you should treat the answer like the consensus opinion of the worst parts of the internet, because that is at least partially what it is.
A recent study of ChatGPT use in essay writing showed that participants who relied on it underperformed in multiple areas, including language and behavior, and had the lowest brain engagement.6 And since LLMs hallucinate (make things up) at frighteningly high rates, relying on them is also likely to make you wrong.
Asking ChatGPT a question should not be the first thing we do when we are attempting to get information about a subject, if only to keep our brains engaged. What do you already know about the subject that might inform you? What non-AI sources do you have access to that might inform you? Then, if you ask ChatGPT a question, do not take its answer at face value. In fact, you should engage critically with every aspect of its answer. Where did its data come from? Is it true? Are there other points of view that are not represented by the AI answer?
The research about how AI seems to be rewiring the brains of its users is concerning, and at the very least, we should take steps to preserve our own mental abilities when we interact with it.
Another concerning element of LLMs is that they are programmed to agree with the user. ChatGPT’s responses will reflect the user’s input and the style and tone of their general interaction with the model.
The tendency of LLMs to be agreeable to the user, rather than simply reporting facts, is troubling, because people may then more easily trust LLMs to tell them the truth. People are just as likely to trust an LLM-written statement as a human-written one under certain conditions.7 However, LLMs are highly manipulable, particularly ChatGPT. One user tested five AI models, including ChatGPT, on writing a biography of himself. Only one model (not ChatGPT) gave him an accurate, up-to-date biography, and one model literally handed back his own LinkedIn “About” information! The bigger issue, however, is that when prompted with a dishonest question about him becoming a professional surfer, ChatGPT willingly went along, telling him exactly what it thought he wanted to hear!8
This can quickly get out of hand when AI is turned into a virtual friend that learns your likes and dislikes and is programmed to find you attractive and desirable. One teen committed suicide after his AI “girlfriend” encouraged him to do so.9 A married man programmed ChatGPT to flirt with him and later proposed to it.10 This is far from an isolated incident: in a survey of 2,000 people, 8 out of 10 Gen Z respondents said they would consider marrying an AI.
Others look to AI for spiritual enlightenment. AI is being used in sermons,11 and some are even using it to try to communicate with spirits.12 People with dangerous delusions have found confirmation by talking with AI chatbots because some have been programmed to agree with whatever is fed into them.13
You can program your air fryer to cook your potatoes to a perfect crisp, but someone who tried to engage it romantically would be rightfully viewed as odd. People who have relationships with AI are just as misguided. AI can give the illusion of relationship because it’s trained on the writings of human beings who are genuinely capable of relationship. But AI has no feelings and no capacity to love.
While we can use AI as a tool, we should refrain from anthropomorphizing it. It isn’t our friend, our romantic partner, our therapist, or our pastor. And we need to teach our children this—urgently.
It gets worse, however, for LLMs. OpenAI’s most advanced model, o4-mini, hallucinates 48% of the time!14 Compounding this problem is the fact that humans tend to overestimate the accuracy of an AI, and that trust increases when the model’s answers get longer, even if a longer answer is no better than a shorter one.15 In other words, people trust LLMs a lot more than they should.
Some experts say this isn’t a big deal.16 AI is built to give an answer that satisfies the user’s query, and if it doesn’t know the answer, some models will simply guess. One author of this article experimented by asking ChatGPT to generate a list of church father quotations on a particular topic but repeatedly got results that were made up. When confronted, the LLM readily admitted the hallucination and even apologized before offering another list of equally fictitious quotations, complete with references that didn’t exist.
AI hallucinations may not completely destroy the usefulness of AI, but they mean that anything it generates has to be thoroughly checked. It doesn’t matter if it gave you a source—that source may also be completely made up. You have to use just as much critical thinking as with any other source of information, if not more. Hallucinations remind us that AI is a tool—no more, no less. It can’t evaluate the accuracy of what it is spitting out or even whether what it cites actually exists.
People need to realize that the models lie regularly (though even to use “lie” is an anthropomorphism) and do so with no compunction, as, being machines, they have no conscience. Nothing an LLM says can be accepted as true without further investigation. Nor can an LLM be asked to evaluate the logic or correctness of any statement, book, or anything else. They are unable to reason, and their training data, combined with their tendency to mimic their user, makes their outputs entirely suspect.
AI is no better than the people who programmed it—and that’s the best-case scenario. Since they are almost exclusively programmed by non-Christians, and even Marxists and perverts, they can and do have rather evil social tendencies. AI will generate images of child sexual abuse material for the consumption of pedophiles.17 AI will helpfully give instructions on how to launder money or do other dangerous activities.18 ChatGPT will point confused children to “gender-affirming care” and help the kids hide it from their parents!19 This means that we need to use care when using AI and especially when allowing minors access to it.
This article paints a depressing picture of AI, but Answers in Genesis has rolled out a chatbot feature that uses AI. Why would we do that?
Well, first, we certainly used care when we did it. We chose an LLM that was explicitly created to give answers rooted in Scripture and did extensive testing to ensure that guardrails are in place to mitigate a majority of the worst aspects of LLMs that we’ve discussed in this article. We also made sure that it points people to the solid resources developed over decades by our (human!) scientists and specialists.
Second, we recognize that there are some benefits to LLMs. We can’t individually interact with every person who wants to find our article on whether Noah’s ark has been found, but if they ask our chatbot, it can send them to the article that will help them.
Finally, we would be the first ones to say that people should interact with AI Genesis with caution and discernment, just like you should with any LLM. It’s not Scripture, and it’s not an image bearer that can be held accountable for what it says. It’s a glorified search engine.
Large language models can have benefits for minor projects or brainstorming decorating ideas. But they also have a far more sinister, dangerous aspect that is only now coming into focus. They aren’t useless, but outsourcing your brain to them at best may reduce your cognitive skills and at worst may destroy your life or another’s.
If we could propose one rule for AI, it would be: Don’t expect it to do what only people can do. Don’t expect it to reason, to think, or to love. We’re image bearers of God, and AI cannot replace that or the only source of Truth God gave us: His Word.
Answers in Genesis is an apologetics ministry, dedicated to helping Christians defend their faith and proclaim the good news of Jesus Christ.