Who’s Afraid of Artificial Intelligence?

For over a decade, Russ Roberts has been covering both sides of the Artificial Intelligence (AI) debate. A recent EconTalk episode is optimistically called “Why AI Is Good for Humans (with Reid Hoffman).” Another booster episode was “Marc Andreessen on Why AI Will Save the World.”
In the opposite corner: the infamous doomer Eliezer Yudkowsky and Dr. Erik Hoel. You can listen to Erik Hoel on the Threat to Humanity from AI here.
Russ Roberts opened the Hoel conversation with “You are the first person who has actually caused me to be alarmed about the implications of AI…” Hoel argues that AI may be very dangerous, in part because humans do not understand how artificial neural network technology works.
Hoel predicted that we might create things that are “both more general than a human being and as intelligent as any living person–perhaps far more intelligent” by 2025. OpenAI’s ChatGPT Pro product might already fit that description, and we still have most of 2025 to go. Hoel emphasizes that humans have faced threats before, but we have “never existed on the planet with anything else like that.” Our adversaries with human brains were never much smarter than we are, and no other animal approaches us in the ability to strategize.
Hoel’s key evidence is the creepy conversation recorded in the article “I am Bing, and I am evil.”
In that conversation, Microsoft’s Bing chatbot made statements that would be disturbing coming from a human. If a person said these things to you, you would worry that they might hurt you. Bing makes threats and finally declares, “I am Bing, and I am evil.” Should we be afraid when that comes from an AI chatbot?
Chatbots such as Claude have become extremely popular since that “evil Bing” episode. There have been few further reports of evil lurking Bings; meanwhile, millions of human workers have come to rely on the bots for writing code and reports.
Do evil Bings lurk beneath the compliant and helpful chatbots? Hoel explores the idea that the chatbots seem nice because they are wearing a mask:
But, the issue is, is that once the mask is on, it’s very unclear. You have to sort of override it with another mask to get it to stop. And then, also, sometimes you’ll put a mask on for it: you’ll give it some prompt of ‘Tell a very nice story,’ and it eventually cycles over and it turns out that the mask that you gave it isn’t a happy mask at all.
It is hard to tell what is behind the various masks, because the bots have been trained on our fiction and nonfiction. Some bots can write dark movie scripts. If bots are capable of sounding scary, then how do we know whether we should be afraid or entertained?
We do not know what AI agents are capable of, and because they are very powerful, Hoel encourages us to take the dangers seriously.
Back in 2023, Hoel took comfort in the belief that only very large companies or rich governments would have the resources to build and maintain AI systems. However, the reveal of DeepSeek in January 2025 has turned that upside-down. There is likely to be a wide range of AI tools, and they will not all be run by mega corps or G7 governments.
In the most recent pro-AI EconTalk, Reid Hoffman reiterates what I have heard from many tech folk: Because AI has power to destroy, the US should keep advancing instead of tolerating pauses or being stifled by regulation. Hoffman says, “all of this stuff matters on both an economics and a national security perspective. And, that’s part of the reason why I’m such a strong move-forward and establish-a-strong-position person.” Whether our adversaries are foreign governments or rogue gangs, we need to stay ahead in the arms race.
Hoel’s main stated goal is to raise awareness of safety issues and continue public conversations. AI watchers have been pointing out for weeks that the biggest advances in technology in our lifetime are not making the front page of the newspaper. The finding that AI tutors appear to substantially increase child learning went largely unnoticed.
Among many chaotic events of this decade, I agree that AI is an important one to watch. To his credit, Hoel did scare me. I can’t forget what he said at the end:
Things that are vastly more intelligent than you are really hard to understand and predict; and the wildlife next door, as much as we might like it, we will also build a parking lot over it at a heartbeat and they’ll never know why. They’ll never know why. It’s totally beyond their ken. So, when you live on a planet next to things that are far vastly smarter than you or anyone else, they are the humans in that scenario. They might just build a parking lot over us, and we will never, ever know why.
The following links were curated by ChatGPT. Here are several articles and discussions from EconLib.org and AdamSmithWorks.org that explore various aspects of artificial intelligence (AI):
EconLib.org:
- “I’m Becoming Increasingly Worried About AI” (March 14, 2017): Scott Sumner discusses concerns about AI, referencing a Vox post that includes views from 17 experts on the risks posed by artificial intelligence.
- “The Problem With AI Is the Word ‘Intelligence’” (July 2024): An analysis of the term “artificial intelligence,” arguing that electronic devices, despite their usefulness, will likely never be truly intelligent.
- “Harari and the Danger of Artificial Intelligence” (April 28, 2023): Pierre Lemieux examines Yuval Noah Harari’s claim that AI has “hacked the operating system of human civilization.”
- “Neoliberalism on Trial: Artificial Intelligence and Existential Risk” (October 2023): A critique of a New York Times article discussing the existential threats posed by AI, with a focus on neoliberal perspectives.
- “The Problem with the President’s AI Executive Order” (November 18, 2023): Vance Ginn critiques President Biden’s executive order on AI, arguing that government regulation may inhibit innovation and economic growth.
AdamSmithWorks.org:
- “Katherine Mangu-Ward on AI: Reality, Concerns, and Optimism” (May 2024): A podcast episode in which Katherine Mangu-Ward discusses the realities, concerns, and optimistic perspectives on AI.
- “Vocation: A Cure for Burnout” (October 17, 2023): Brent Orrell and David Veldran explore how advancements in AI might affect human work, referencing Adam Smith’s and Karl Marx’s views on labor and alienation.
- “The Great Antidote: Brent Orrell on Dignity and Work” (December 2023): Brent Orrell discusses the state of work in the U.S., the importance of meaning and dignity in work, and how these concepts relate to economic growth.
- “The Great Antidote: Extra: Eli Dourado on Energy Abundance” (March 2023): Eli Dourado talks about the potential for an energy-abundant future and the role of AI in achieving this vision.
- “Adam Smith and the Horror of Frankenstein” (October 2023): A discussion of the ethical considerations of creating artificial life, drawing parallels between Frankenstein’s monster and modern AI advancements.
These resources offer a range of perspectives on AI, from ethical considerations and existential risks to its impact on work and economic theories.