
Demystifying AI

Blog | Dec 21, 2023

By John Martinous III

By now I’m sure a friend of a friend has told you that AI is about to take over the world and replace all sorts of jobs with its so-called “consciousness.” It’s coming for writers, coders and even lawyers. Which leads one to speculate … could it come for the church?

Simply put, Artificial Intelligence (AI) is developed by training a program on large sets of data until it can imitate the patterns in that data and generate “new” data of its own. The most noteworthy public form of AI today is ChatGPT, developed by OpenAI. ChatGPT is a “large language model,” which means it learned from nearly every form of public text on the internet up to the year 2021 (e.g. message boards, Wikipedia articles, books and just about anything else that was readily accessible online).

This volume of data had never before been used to build a piece of software. Needless to say, it took many years before models like ChatGPT produced anything coherent, but today they have clearly made a breakthrough. The results can be haunting. They can feel … like us.

Does AI have consciousness? It’s natural for people to project themselves onto these algorithms, code and textual output. The effect can be uncanny. But ChatGPT can only predict the most probable next word in a sentence, so it only appears intelligent from our point of view. The question, then, becomes: What is the difference between predicting the next word in a sentence and true intelligence? That turns out to be much more difficult to answer.
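To make the “predict the next word” idea concrete, here is a toy sketch in Python. It is not OpenAI’s actual code; the tiny corpus and the word counts are invented for illustration, and a real model learns probabilities over billions of documents rather than a dozen words.

```python
from collections import Counter, defaultdict

# A toy "training set" -- a stand-in for the internet-scale text a real model reads.
corpus = (
    "the church opened its doors "
    "the church opened its heart "
    "the pastor opened his notes"
).split()

# Count which word tends to follow which (a crude bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the toy corpus."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("church"))  # -> "opened", the most frequently counted follower
print(predict_next("the"))     # -> "church"
```

A large language model is doing this same kind of guessing at an unimaginably larger scale, which is why its guesses start to look like understanding.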

The philosopher John Locke claimed that humans navigate the world through simple sensations (like a particular color or a specific sound) and complex ideas (formed by combining multiple simple ideas). The material world offers more data than any database that has ever been created, yet GPT has no built-in process for discerning that material data from fictional data. We have had to teach it to do so manually.

Perhaps the primary difference between ChatGPT and human intelligence is our ability to discern fact from fiction through our senses, an ability that comes from our being made in God’s image. ChatGPT’s understanding of the world is not sensory; it is based solely on text.

I don’t generally like to jump on the hype train for new technologies. I remember when Google Glass was supposed to revolutionize augmented reality and incorporate it into our daily lives. Now it’s a museum piece. I also remember when the Metaverse and virtual reality headsets were destined to be the future of communication and social media. Now I see used VR headsets being sold on Facebook Marketplace for $100. I even remember when 3D televisions were said to be the future of home entertainment. Now they’re a faint memory of a consumer fad.

But AI, somehow, is different. It speaks to our greatest existential insecurities about being replaceable, or perhaps, not good enough (experiences we’ve felt since we were created). It makes us question things, question the future, question what this means for the faith market.

To better understand whether AI truly intends to take over our pulpits, I figured I’d sit down and have a chat with ChatGPT itself. The following are actual responses from GPT-4.
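(For the technically curious: you don’t have to type into the web interface to have this kind of exchange; the same question can be sent from a short script. The sketch below uses OpenAI’s Python client and assumes you have installed the openai package and set up an API key; the prompt is simply the first question from the interview.)

```python
# Minimal sketch: ask GPT-4 a single question through OpenAI's API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me about yourself."}],
)

print(response.choices[0].message.content)
```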

——

Me: Tell me about yourself.

GPT-4: Certainly! I'm ChatGPT, based on OpenAI's GPT-4 architecture. Here's a brief overview:

  1. Nature: I am a virtual assistant and a product of machine learning. I don't have emotions, consciousness, or self-awareness. I operate based on patterns in the data I was trained on.
  2. Training: I was trained on vast amounts of text from the internet, so I can generate human-like text based on the patterns I've learned. This allows me to answer questions, generate content, and assist with a variety of tasks.
  3. Capabilities: I can assist with various queries spanning many topics, from science and history to pop culture and technology. I can help write essays, generate ideas, code in multiple programming languages, and much more.
  4. Limitations: My knowledge has a cutoff in January 2022, which means I don't have data on events or developments after that time. I also don’t “know” in the same way humans do—I don't understand context in the same way, and I can't form opinions or beliefs.
  5. Safety: I’m designed to avoid engaging in harmful actions or promoting harm. However, like any tool, the responsibility for safe and ethical use rests with the user.
  6. No Memory: I don't have the ability to remember personal user data from one request to the next, ensuring user privacy. Always be cautious and avoid sharing personal or sensitive information.

——

What we can learn: As in a typical job interview, ChatGPT’s answers are almost too polished and polite, almost as if it knows what it’s supposed to say. This is likely intentional on the part of OpenAI, the company behind ChatGPT, since it is in their interest to make their seemingly apocalyptic product look as harmless as possible (fair enough).

So when using ChatGPT, it is normal to feel that it is locked in a constant state of political correctness, even more so than a typical Wikipedia article. Despite these intentional restrictions on the software, you (or the staff writer using it) cannot help but feel that there is a strong intelligence operating behind every question you ask. It definitely feels like you are talking to something, or someone.

——

Me: List five words that describe your character.

GPT-4: Since I don’t possess consciousness, emotions, or a true “character” in the way humans do, the following terms are more about my functional attributes than a personality:

  1. Informative: I provide information on a vast array of topics based on my training data.
  2. Neutral: I don’t have opinions, feelings or biases. I present information based on patterns in my data.
  3. Responsive: I generate replies based on user prompts and aim to do so efficiently.
  4. Consistent: I operate based on my programming and training data, ensuring a predictable interaction.
  5. Impartial: I don’t take sides or have personal preferences.

——

What we can learn: The primary concern for the average Joe is not AI consciousness but the automation of our work: writing, church administration, even graphic design. I sympathize with the first-time user who sees GPT as a threat, but a more seasoned user is less concerned and sees opportunities for this technology to augment human effort, not replace it.

In other words, ChatGPT is closer to a computer replacing a typewriter than a robot replacing a human. It definitely makes certain tasks easier, such as summarizing sermon notes or generating Sunday school lessons for volunteers. But it falls short on the more human elements, the idiosyncrasy and creativity where God shines through.

Often its answers can seem repetitive and uninspired, which makes it perfect for the world of paperwork and summaries. I have even seen some of my students use it to write public prayers. But the software comes up short in the world of praise and worship, storytelling and youth group programming. Is it capable of performing those tasks someday? Maybe, but in the meantime, I’m not too concerned.

ChatGPT seems to be closer to the dishwasher than to the Terminator: a useful appliance that makes our personal and work lives easier, but can also unintentionally increase the volume of tasks expected from the user. Now that your dishes are clean, you have more time to tidy up your house! Now that you have a robot vacuum you have more time to fold the laundry! Inch by inch, the expectation to do more (and more perfectly) increases.

A journalist who used to write four articles a week for a moderately successful church blog could very well be expected to write forty in the same amount of time, because, as the thinking goes, more automation means more content. We need to be vigilant about burnout on our teams more than ever. If we are not careful, we’ll lose them. So much of our work could be automated in the next couple of years that we’ll wish we could take a break.

——

Me: Where do you see yourself in twenty years?

GPT-4: I don’t have the ability to see, predict, or plan for the future, nor do I possess consciousness, self-awareness, or desires. However, based on technological trends and speculations up to my last training data in January 2022, here are some possibilities for AI models like me in 20 years:

  1. Advanced Versions: There may be more advanced iterations of models like mine with even larger training data, better generalization, and improved capabilities.
  2. Specialization: Future AI models might be specialized for specific domains or tasks, fine-tuned for precision in particular areas.
  3. Integration: AI could be more deeply integrated into daily life, assisting in a broader range of tasks, from personal to professional applications.
  4. Ethical and Regulatory Evolution: As AI technology advances, there might be new ethical considerations and regulatory frameworks guiding its use.
  5. Collaboration with Other Technologies: AI could be combined with other emerging technologies (like augmented reality, quantum computing, or neuromorphic hardware) to produce new tools and applications.

——

In summary, AI does not appear to be conscious, but it can be a useful tool in our churches. Clearly, we are in the early stages of this unfamiliar technology, and anyone who uses it for work, advice, research or simply for fun is participating in a global experiment to find out what GPT can do and, more importantly, what it cannot.
