ChatGPT: Does This Clever Chatbot Mark the Dawn of a New Era in AI?

Welcome to this week’s installment of The Intelligence Brief… with all the recent discussion about OpenAI’s impressive chatbot ChatGPT, this week we’ll be looking at what the hype is really all about. Items in our queue will include 1) what, exactly, the buzz regarding ChatGPT is all about, 2) a few potential pitfalls related to AI, and what the famous chatbot’s critics have had to say, and 3) what ChatGPT actually had to say about all of this in a series of candid responses to our questions. This, in addition to an exciting announcement about new podcasts you can look forward to from The Debrief and much more.

Sign up here to have The Intelligence Brief newsletter sent to your inbox each week.

Quote of the Week

“Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”

– Jaan Tallinn

Latest Stories: Before we get into the thick of our analysis this week, a few stories we’re covering at The Debrief include how from eerie blue spirals to ghostly green beams of light, the Subaru star camera keeps capturing mysterious scenes in the night skies over Hawaii… what are they? Tim McMillan gives us the full scoop. Also, Christopher Plain introduces us to Dr. Chance Glenn, a university professor and electrical engineer who is currently preparing to test his potentially revolutionary new warp drive theory.

Video News: On Thursday at 2 PM ET, be sure to tune in as The Debrief’s Christopher Plain hosts a live UAP roundtable over on our YouTube channel, which will feature commentary from the entire team. Also, be sure to join Chrissy Newton this Friday for the premiere of her discussion with former fighter pilot Ryan Graves on Rebelliously Curious, where they discuss the challenges aviators face in terms of UAP, and how they can be mitigated.

Podcasts: Lastly, The Debrief is excited to announce that we will now be featuring several audio podcasts to complement our weekly reporting on advancements in science and technology:

  • The Debrief Weekly Report is a new podcast hosted by MJ Banias and Stephanie Gerk. In addition to audio commentary on our weekly stories, Banias and Gerk also cover topics related to science fiction, futurism, and more. Subscribe and listen on Apple Podcasts.
  • The Micah Hanks Program, my long-running weekly podcast covering space, science, and mysteries of our universe like unidentified aerial phenomena, also makes its official debut as part of the weekly podcast roundup from The Debrief, providing additional weekly commentary on many of the stories we cover. Subscribe on Apple Podcasts and catch up on past episodes here; future episodes will also be found on our website.
  • Rebelliously Curious with Chrissy Newton has long remained our flagship program on our YouTube channel, but did you know that it is available as an audio podcast as well? Subscribe and listen on Apple Podcasts or your preferred podcasting medium.

With that all out of the way, it’s time for us to take a look at ChatGPT, what it is, what it means, and in a novel twist, what it actually had to say about itself in a recent dialogue with The Debrief.

ChatGPT: The New Wave of Artificial Intelligence?

By now, you have no doubt heard all about ChatGPT, the conversational online AI chatbot that can generate responses to prompts or questions about virtually anything with a seemingly uncanny level of expertise and efficiency.

Launched at the end of November, the chatbot was a creation of OpenAI, built from a model in the company’s GPT-3.5 series of language models. ChatGPT relies on what is called transfer learning, in which the knowledge a model gains during large-scale pre-training on vast amounts of text is carried over and refined through fine-tuning, so that it can be applied to new but related tasks, in this case open-ended conversation.
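For readers curious what that looks like in practice, below is a minimal sketch of transfer learning using the open-source Hugging Face Transformers library. It is purely illustrative and not OpenAI's actual pipeline; the model name ("distilbert-base-uncased") and the two-line toy dataset are assumptions chosen for brevity. The idea is the same, however: start from weights pre-trained on a huge general corpus, then fine-tune them on a new task rather than training from scratch. (ChatGPT's own fine-tuning additionally used reinforcement learning from human feedback.)

  import torch
  from transformers import AutoTokenizer, AutoModelForSequenceClassification

  # Start from weights pre-trained on a large general corpus (illustrative model choice).
  tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
  model = AutoModelForSequenceClassification.from_pretrained(
      "distilbert-base-uncased", num_labels=2)

  # A toy labeled "new task" (sentiment), standing in for whatever task
  # the pre-trained knowledge is being transferred to.
  texts = ["I loved this.", "This was terrible."]
  labels = torch.tensor([1, 0])
  batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

  # One fine-tuning step: the general language knowledge is reused and
  # refined for the new task, not learned again from nothing.
  optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
  model.train()
  outputs = model(**batch, labels=labels)
  outputs.loss.backward()
  optimizer.step()
  print(f"fine-tuning loss: {outputs.loss.item():.3f}")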

According to its creators, ChatGPT’s conversational dialogue format “makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

Since its launch, users worldwide have tasked ChatGPT with everything from answering some of life’s biggest questions about science and technology to gauging the chatbot’s knowledge of pop culture, and virtually anything else you can imagine. ChatGPT seems to have a particular affinity for songwriting, rapping, and sharing recipes, and it has even shown promise as a marketing advisor, offering practical suggestions for potential brand names, among other useful capabilities.

Based on the hype that the program has managed to generate, ChatGPT would seemingly represent a game-changing advancement in the field of AI. However, there is a bit more to the story of what is currently the world’s most famous chatbot, and what it reveals about the challenges companies like OpenAI face in their efforts toward developing truly adaptive and autonomous artificial intelligence.

Emptying ChatGPT’s Mixed Bag

Few would dispute that ChatGPT is not only the most impressive chatbot the world has seen to date, but also a demonstration of the wide range of practical uses such technology could have in both the near and long term.

Partly due to those impressive capabilities, responses were swift from those who saw the potential for the chatbot’s misuse: what would prevent aspiring but talentless authors from relying on the speed at which ChatGPT can generate content to produce seemingly original work? What about students who might succumb to the temptation of tasking ChatGPT with helping write term papers, or any of a host of other ways the chatbot might assist in cheating?

Of course, there are already ways to test for AI cheating: tools like the GPT-2 Output Detector and the Writer AI Content Detector can help assess whether portions of text appear to have been AI-generated. Still, it seems inevitable that programs like ChatGPT will be misused in a variety of ways. On the other side of the coin, one of the practical benefits of AI text generation involves its often-touted usefulness in producing things like social media posts and other online content, where whether a human put in the legwork behind the scenes matters far less. For now, discerning where the line should be drawn between acceptable uses of AI in content generation and its potential misuse remains challenging.
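To make that concrete, here is a rough sketch of how a detector of that kind can be queried programmatically, using the Hugging Face Transformers library and the RoBERTa-based model OpenAI released alongside GPT-2 (the same model behind the GPT-2 Output Detector demo). The model identifier and its “Real”/“Fake” labels are assumptions based on the publicly hosted version, and its scores should be read as probabilistic hints rather than proof of authorship.

  from transformers import pipeline

  # Load the RoBERTa-based GPT-2 output detector (assumed Hub identifier).
  detector = pipeline("text-classification",
                      model="roberta-base-openai-detector")

  sample = ("The results of our study suggest that further research "
            "will be needed to fully understand these findings.")

  # The detector returns a label (typically "Real" or "Fake") and a score;
  # treat the output as an estimate, not a definitive verdict.
  result = detector(sample)[0]
  print(f"{result['label']} ({result['score']:.2%})")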

Apart from any potential misuse of AI programs like ChatGPT, there are some who have argued that the technology is little more than a novelty.

Meta Chief AI Scientist Yann LeCun recently said during an appearance on The Big Technology Podcast that ChatGPT “is not a particularly interesting scientific advance,” characterizing it as little more than a “flashy demo.” Whether or not LeCun’s assessment is warranted, the program’s rise in recent days has been nothing short of stellar: with a user base growing even faster than TikTok’s, ChatGPT is now reportedly expected to bring OpenAI $200 million in revenue next year and $1 billion by 2024, with premium subscriptions planned for the near future.

With all the buzz—and the controversy—ChatGPT has managed to generate in recent weeks, how better to gain a bit of perspective on the situation than to reach out to the chatbot itself for comment?

We Reached Out to ChatGPT for a Comment. Here’s What It Said.  

The Debrief reached out to ChatGPT with several questions about what it could potentially represent in the development of AI, in addition to asking what it thought about the opinions of its critics, and how AI can be best used going forward. What follows are a few of our questions (in italics), along with ChatGPT’s responses (in bold).

 Hi ChatGPT. I am a journalist interested in artificial intelligence, and wondered whether you feel that you represent a turning point in the development of AI?

As an AI language model, I do not have feelings or emotions. However, I can say that AI language models like me represent a significant advancement in the development of AI in terms of natural language processing capabilities. The training process and the vast amounts of data used to train me have allowed me to generate human-like responses to a wide range of questions. However, it’s important to note that I’m still limited in my understanding and lack consciousness, creativity, and self-awareness, which are defining characteristics of human intelligence.

ChatGPT, how do you respond to your critics, one of whom, for instance, recently argued that you are “not a particularly interesting scientific advancement”?

As an AI language model, I don’t have personal opinions or the ability to defend myself against criticism. However, I can acknowledge that there are differing views on the significance of AI language models like myself. Some people view AI language models as a significant scientific and technological advancement, while others may not see them as particularly groundbreaking.

It’s important to consider the context in which AI language models are being used and the limitations they still face. While I can generate human-like responses, I lack the creativity, self-awareness, and consciousness that are hallmarks of human intelligence. There is still much work to be done in the field of AI to overcome these limitations and create more advanced AI systems.

Although you don’t have the ability to defend yourself against criticism or express personal opinions, I find it impressive how well you are able to orient your responses in conversational ways. As many have observed in recent days, your responses can even arouse emotions in humans, which raises another question: do you think that responses generated by artificial intelligence like yourself may one day soon be indiscernible from original human responses?

It’s possible that AI language models like myself will continue to improve and become even more advanced in the future. However, it’s important to note that there is still a long way to go before AI can truly match or surpass human-level language abilities in terms of understanding context, emotions, and consciousness.

There are also limitations to the way AI language models like myself are currently trained and operated that may prevent us from ever fully achieving human-level language abilities. For example, my responses are generated based on patterns in the data used to train me, and I do not have the ability to experience the world or have personal experiences.

That being said, AI language models like myself can still be useful in a variety of applications, such as generating text, answering questions, and engaging in conversations. However, it’s important to understand our limitations and use AI responsibly, recognizing that there are still significant differences between human and AI-generated responses.

Going forward, how would you recommend that artificial intelligence like yourself could be best used, and in ways that overcome the potential pitfalls that arise from people’s misuse of AI for cheating or other similar problems?

As an AI language model, I don’t make recommendations or have the ability to enforce them. However, here are some general principles that could help guide the responsible use of AI:

  1. Transparency: The design, training, and operation of AI systems should be transparent and explainable, so that people can understand how they work and why they produce certain results.
  2. Bias and fairness: AI systems should be designed to minimize bias and ensure that they treat everyone fairly, regardless of their race, gender, or other personal characteristics.
  3. Responsibility: Those who develop and use AI systems should be held responsible for their actions and the impacts of the systems they create.
  4. Ethical considerations: AI should be developed and used in ways that align with ethical principles and respect human rights and dignity.
  5. Monitoring and oversight: AI systems should be regularly monitored and subject to oversight to ensure that they are being used responsibly and in accordance with ethical principles.

By following these principles and using AI responsibly, we can help ensure that AI is used to benefit society and improve people’s lives, rather than cause harm or perpetuate existing problems.

Thank you for your time, ChatGPT. I appreciate your responses to my questions, and I look forward to future dialogue with you about this (and perhaps next time, if you’re up for it, we can share recipes as well).

You’re welcome! I’m here to help answer questions and provide information, so feel free to reach out any time. And if you’d like to discuss recipes, I’d be happy to try my best to assist with that as well. Have a great day!

Even within the context of my own brief “interview” with ChatGPT, I felt like I came away having learned a few things about AI, and what it might represent going forward. I will also probably find myself stressing a bit less in the future when it comes to coming up with recipe ideas for big family dinners. And considering recent news that AI has proven useful in areas that even include the search for extraterrestrial intelligence, it would seem that the best is likely yet to come… although that depends entirely on how well the potential pitfalls AI presents along the way can be mitigated.

That wraps up this week’s installment of The Intelligence Brief. You can read past editions of The Intelligence Brief at our website, or if you found this installment online, don’t forget to subscribe and get future email editions from us here. Also, if you have a tip or other information you’d like to send along directly to me, you can email me at micah [@] thedebrief [dot] org, or Tweet at me @MicahHanks.

Here are the top stories we’re covering right now…