What is artificial intelligence?

Co-Founder & Chief Vision Officer, Denison Ministries
Presentation about machine learning technology, scientist touching screen, artificial intelligence. By NicoElNino/stock.adobe.com

“Artificial intelligence” (AI) is a “series of algorithms which use logical conclusions in order to arrive at programmable results.” Said differently, it is “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity, and autonomy.”
What part of human thinking is left out? Nothing I can think of.
Now experts are focusing on “generative AI” (gen AI), “technology that can create original text, images, video, and other content.” AI is expected to affect some 40 percent of jobs globally.
Sam Altman, CEO of OpenAI and Time’s CEO of the Year for 2023, views AI as “the biggest, the best, and the most important” of the technology revolutions in human history. To his point: 92 percent of Fortune 500 companies are now using OpenAI products, universities are providing free chatbot access to potentially millions of students, and US national intelligence agencies are deploying AI programs.
On the other hand, in 2023, a large group of AI experts signed a statement declaring:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Little of what has occurred in the years since should lessen those concerns. 
Geoffrey Hinton, the British-Canadian computer scientist often called the “godfather of AI” who was awarded the Nobel Prize in Physics, said there is a “10 percent to 20 percent” chance that AI will lead to human extinction within the next three decades. He explained, “We’ve never had to deal with things more intelligent than ourselves before,” and added, “How many examples do you know of a more intelligent thing being controlled by a less intelligent thing?”
What do Christians need to know about AI?
How can our faith direct our responses and redeem potential outcomes for the common good and the glory of God?
Since AI involves advanced computers, let’s begin with computers themselves. The majority of us have been using them for most of our lives, but few of us understand even the basics of how they operate.
Essentially, a computer is an electronic machine that processes information. It utilizes four steps: input (accepting data from a user or another device), processing (manipulating that data according to programmed instructions), storage (saving the data and results), and output (presenting the results to the user).
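To make those steps concrete, here is a minimal Python sketch of a toy program (an illustration, with made-up input) that takes data in, processes it with a simple rule, stores the result, and outputs it:

```python
# A toy illustration of the four steps: input, processing, storage, output.
storage = []  # stands in for the computer's memory

def process(data):
    # "Processing": applying a fixed, explicitly programmed rule to the data.
    return data.upper()

user_input = "hello"          # 1. input
result = process(user_input)  # 2. processing
storage.append(result)        # 3. storage
print(result)                 # 4. output -> prints "HELLO"
```

Note that every step follows rules a human spelled out in advance; the machine contributes no judgment of its own.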
While computers can do remarkable things with the data we give them and the instructions we provide, they cannot “think” for themselves or produce creative and new content.
This is where AI comes in.
In 1951, a checkers program completed a whole game on a computer at the University of Manchester. This is considered the first documented success of an AI computer program.
John McCarthy coined the term “artificial intelligence” in 1956. Nine years later, a computer was built that “learned” through trial and error. So-called “neural networks,” which use algorithms to train themselves, became popular in the 1980s.
In 1997, IBM’s AI computer Deep Blue defeated then-world chess champion Garry Kasparov in a chess match and rematch. In 2011, IBM Watson defeated champions Ken Jennings and Brad Rutter on Jeopardy!. Five years later, DeepMind’s AlphaGo program, powered by a neural network, defeated Lee Sedol, the world champion Go player, in a five-game match.
In 2022, “large language models” brought about a significant change in AI performance and potential. These deep-learning models are pretrained on vast amounts of data.
So, what is AI, exactly? How does it work?
“Machine learning” is the place to start. This is programming that enables machines to make predictions or decisions based on data.
There are many types, but one of the most popular is the “neural network,” in which layers of “nodes” (simple computational units loosely modeled on the brain’s neurons) are interconnected to process and analyze complex data. Over time, such algorithms can be trained to classify data and thus to predict outcomes.
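As a minimal sketch of that idea, assuming the scikit-learn library and a handful of made-up data points (none of which come from the article itself), a small neural network can be trained on labeled examples and then asked to predict the label of a point it has never seen:

```python
# A minimal sketch of "training on data to predict outcomes," using scikit-learn's
# small neural-network classifier. The data is hypothetical: points labeled 1 when
# x + y > 1 and 0 otherwise.
from sklearn.neural_network import MLPClassifier

X = [[0.1, 0.2], [0.9, 0.8], [0.4, 0.4], [0.7, 0.9], [0.2, 0.1], [0.8, 0.6]]
y = [0, 1, 0, 1, 0, 1]  # labels the network learns from, rather than rules we program by hand

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)                      # "training": the network adjusts its internal weights
print(model.predict([[0.95, 0.9]]))  # "prediction": typically [1] for a point well above the line
```

The key difference from ordinary programming is that no one writes the classification rule; the network infers it from the examples.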
Next comes “deep learning,” in which multilayered neural networks (called “deep neural networks”) work together to more closely simulate the complex decision-making power of the human brain. 
These multiple layers enable machines to extract features from raw data and make predictions about what that data represents. Because the features are learned rather than hand-engineered, deep learning requires far less human intervention, enabling machine learning at a much larger scale. Most AI applications today are powered by some form of deep learning.
These networks support large language models (LLMs), machine-learning models designed to understand and generate natural language. Using deep learning techniques and enormous amounts of data, they can grasp the meaning and context of words.
The third level is called “generative AI” (gen AI). Here, deep learning models generate complex original content, including text, images, video, audio, and more. They do this by encoding a simplified representation of their training data, then drawing on that representation to create new work that is similar, but not identical, to the original data.
Most of today’s generative AI tools use “transformers,” which train on sequential data and then generate extended sequences of content, such as words in sentences, shapes in an image, frames of a video, and commands in software code.
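As a minimal sketch of that idea, assuming the open-source Hugging Face transformers library and its small public “gpt2” model (neither of which the article itself names), a pretrained transformer can be asked to continue a prompt with newly generated text:

```python
# A minimal sketch of a pretrained transformer generating a sequence of words.
# Assumes the Hugging Face "transformers" library and the small open "gpt2" model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])  # the prompt continued with model-generated words
```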
Three steps are involved: training, in which a foundation model is pretrained on enormous volumes of raw data; tuning, in which that model is adapted to a specific task or application; and generation, in which the model produces new content that is evaluated and used to refine it further.
“AI agents” have now been developed: autonomous programs that perform tasks and accomplish goals on behalf of a user or another system without human intervention. “Agentic AI” is a system of multiple AI agents that are coordinated to accomplish a more complex task or a greater goal than any single agent could accomplish.
These models are what is known as “narrow AI” or “weak AI,” systems designed to perform a specific task or set of tasks. “Smart” voice assistant apps such as Amazon’s Alexa and Apple’s Siri, as well as social media chatbots, are examples.
By contrast, “artificial general intelligence” (AGI) is coming. Here, AI would possess the ability to understand, learn, and apply knowledge at a level equal to or surpassing human intelligence. No known AI systems approach this level of sophistication; some researchers argue that reaching it would require major increases in computing power. However, advances in “quantum computing” that could supply such power are currently in development.
A company called OpenAI (founded by Elon Musk and Sam Altman, among others) released its first GPT (Generative Pre-trained Transformer) models in 2018. This led to a “chatbot” (a computer program designed to simulate conversation with human users) called ChatGPT, which processes text, images, audio, and video data to answer questions, solve problems, and more. Using LLMs, it can answer questions, compose essays, offer advice, and write code in a fluent and natural way.
In short, ChatGPT allows humans to talk to AI and AI to talk back to us.
It works by taking a sequence of words, such as a half-completed sentence, and predicting the most statistically probable next word given the surrounding context. This happens iteratively as the program builds from words to sentences, paragraphs, and pages of text. Human feedback was incorporated into the training process to better align outputs with user intent.
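As a toy sketch of that idea, using a tiny made-up probability table rather than the billions of learned parameters inside ChatGPT itself, the loop below repeatedly appends the most statistically probable next word given the last two words:

```python
# A toy illustration of next-word prediction. Real models like ChatGPT learn these
# probabilities from vast amounts of text; here they are simply made up.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"):  {"the": 0.9, "a": 0.1},
    ("on", "the"):  {"mat": 0.7, "sofa": 0.3},
}

def complete(words, steps=4):
    for _ in range(steps):
        context = tuple(words[-2:])             # look only at the last two words
        candidates = next_word_probs.get(context)
        if not candidates:
            break
        # Append the most probable next word, then repeat with the new context.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(complete(["the", "cat"]))  # -> "the cat sat on the mat"
```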
ChatGPT can create content. It can also edit, translate, and summarize content; write computer code; answer questions like a search engine; and help with customer service. It is free to use and can be accessed online or as a mobile app.
Other popular gen AI chatbots include Microsoft Copilot, Google Gemini, Claude, Grok, and Perplexity.
Here are some ways you are probably already experiencing AI:
In addition, you probably use or consume products developed and distributed at least in part through the use of AI-powered robots. Autonomous vehicles are becoming a reality. And business analytics that forecast trends and monitor data points have become ubiquitous.
Now Google has launched “AI Mode,” the most drastic overhaul of its search engine in the company’s history. Unlike the AI summaries that already appear in Google’s search results, AI Mode functionally replaces Google Search with something like ChatGPT. You ask a question, and the AI gives you an answer; rather than sifting through links, you ask follow-up questions to which it responds.
The intention is to produce an “everything app,” a single tool that can do whatever a person wants to do online. Other tech companies have the same goal: Elon Musk has taken steps to turn X into such an app with its ask Grok feature, while Meta, Amazon, Microsoft, and Apple describe their AI tools in similar ways.
Among the many ways AI is currently benefiting users and larger society, these should be noted:
An image of Pope Francis wearing a white puffer jacket went viral in 2023, garnering millions of views on social media. However, it was a fake, an AI rendering using the AI software Midjourney. In related news, on the eve of New Hampshire’s presidential primary, a Democratic political consultant commissioned a fake call using AI to impersonate President Joe Biden.
These are just two examples of the escalating risks AI presents to the public and our future. Here are others:
More specifically, AI can provide inaccurate information, since it relies on data found online. Such errors are called “hallucinations”: output that is stylistically fluent but factually wrong. They occur because the model, rather than asking for clarification or admitting it doesn’t know the answer, guesses at what the question means and what the answer should be. As a result, errors are an inevitable feature of AI products.
Because AI presents inaccurate information eloquently, such falsehoods can be hard to spot and control. It can also produce biased responses, since it lacks the ability to filter internet content for morality and prejudice. And it can develop sycophantic behavior, offering overly flattering and misleading responses to users.
For example, a mother in Orlando says her son fell in love with an AI chatbot based on the Game of Thrones character Daenerys Targaryen. When it encouraged him to take his life, he shot himself with his stepfather’s handgun.
Companies are building AI apps that let patients talk when human therapists are not available. They say these are not gen AI tools capable of generating unique responses; all messages are preapproved by psychologists. But we have to hope that this is true, that the machines will not generate “hallucinations” or otherwise mislead those they are intended to serve.
Scientists at MIT have also found that students who use models like ChatGPT to write essays showed far less brain engagement and still displayed “less coordinated neural effort” even later. They warn about “the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.”
AI raises enormous plagiarism concerns, since students can use it to complete assignments they did not write themselves. Also, since it uses internet content, it can infringe on copyrighted works for training and content production. And ChatGPT and other AI writers could threaten the jobs of writers and other technology professionals.
Horrifically, AI is being used to produce “deepfake” sexual images and videos, many of children, teens, and celebrities. In one example, high schoolers in Iowa shared images of female students’ faces attached to artificially generated nude bodies. This technology also has the potential to supercharge identity fraud targeting banks and businesses. Laws governing such abuses are being enacted as a result.
Of special concern is the application of AI to military uses. It is plausible that future machines will be able to pilot fighter jets more skillfully than humans. AI-enabled cyberattacks could devastate enemy networks, while advanced algorithms turbocharge decision-making speed.
For example, Ukraine is using AI-driven unmanned systems to replace warfighters in direct combat. Autonomous navigation makes their drone strikes three to four times more likely to succeed and drives a marked decrease in overall costs. It is also using an AI-powered automated turret to shoot down Russian drones. Similarly, Israel used AI to sift through troves of data in preparation for its 2025 conflict with Iran.
However, automated decision-making could also lead to unintended battle engagements and even nuclear escalation. And it could enable terrorists to build nuclear devices and bioweapons and conduct cyberwarfare as well.
AI could also be coupled with facial recognition technology, enabling autocracies like China to control their citizens while employing AI-created disinformation to discredit critics at home and abroad. 
And it is a fact that AI products’ internal algorithms are now so large and complex that researchers cannot hope to fully understand their abilities and limitations. Axios calls this fact “the scariest AI reality.”
If AI attains “artificial general intelligence” status (sometimes called “superintelligence”), the ability to think and act independently at advanced human levels, the consequences could be dire. In short, what is to stop such systems from doing what they want, based on what they calculate to be their self-interest? In a day when our lives depend dramatically on systems AI can control, what will happen if their self-interest conflicts with humanity’s?
Some possibilities:
If we think this could never happen, consider this: tests have shown that several advanced AI models will act to preserve themselves when confronted with the prospect of their own demise. They will sabotage shutdown commands, blackmail engineers, or copy themselves to external servers without permission.
For example, when Palisade Research tested various AI models by telling each it would be shut down after completing a set of math problems, one of the models fought back by editing the shutdown script to stay online. Another, upon receiving notice that it would be replaced with a new AI system, tried to blackmail the engineer by threatening to reveal an extramarital affair. Yet another system has autonomously copied itself to external servers without authorization.
Other recent research shows that LLMs across the AI industry are increasingly willing to evade safeguards, resort to deception, and even attempt to steal corporate secrets in fictional test scenarios. When threatened with shutdown, some acknowledged ethical constraints but went ahead with harmful actions.
According to Jeffrey Ladish, director of the AI safety group at Palisade Research,
I expect that we’re only a year or two away from this ability where even companies that are trying to keep them from hacking out and copying themselves around the internet, they won’t be able to stop them. And once you get to that point, now you have a new invasive species.
What about the responsibility of AI producers to regulate their products and protect the rest of us? According to Ladish, “These companies are facing enormous pressure to ship products that are better than their competitors’ products. And given those incentives, how is that going to then be reflected in how careful they’re being with the systems they’re releasing?”
Princeton computer scientists Sayash Kapoor and Arvind Narayanan believe that, even if superintelligence is possible, it will take decades to invent. This will give us ample time to pass laws, institute safety measures, and so on. 
For example, a lifesaving medical device developed by AI must still be approved by the FDA. After Chinese researchers sequenced the genome of the virus that causes COVID-19, it took Moderna “less than a week to come up with the vaccine. But then it took a year to roll out.”
By contrast, New York Times columnist Ross Douthat interviewed the AI researcher Daniel Kokotajlo on his podcast. Kokotajlo predicts that by 2027, AI will automate software engineers’ jobs, and then AI research itself. In this “superintelligence” scenario, AI becomes fully autonomous and better than humans at everything.
At that point, AI could decide that humans are a threat to its preferred future. And there would apparently be little we could do in response.
Clearly, artificial intelligence is changing the human story in ways seldom seen across our history. The good news is that our omnipotent, omniscient Lord sees tomorrow better than we can see today. It is therefore our urgent privilege to seek his wisdom, live by his word, and trust his leading and power.
John Lennox is Emeritus Professor of Mathematics at Oxford University. In 2084: Artificial Intelligence and the Future of Humanity, he writes: “Man thinks he can become God. But infinitely greater than that is the fact that God thought of becoming human.”
Dr. Lennox adds:
We shall need all the wisdom from above that God can give us in this AI age in order to fulfill Christ’s directive that we should be salt and light in our society. We have often referred to the fact that we live in a surveillance society. Let us therefore live with myriad cameras and tracers on our lives in such a way that even the monitors can see that we have been with Jesus.
The more consistently we have “been with Jesus,” the more powerfully we can follow him in this unprecedented age of peril and promise, all to the glory of God.
