In this post I discuss the Chinese Room argument, which supports Marc Andreessen’s assertion that the idea of a dangerous AGI is “a category error”.
That does not mean AI can never be dangerous. It just means we should treat the risk like the risk from any other powerful machine or new technology.
The Chinese Room Argument
In a room sits a man who knows no Chinese. Notes, inscribed with intricate Chinese characters, are slipped under the door.
He takes each one, studies it, and consults an exhaustive guide, an instruction book of sorts. Following the guide meticulously, he performs detailed calculations and methodically crafts a response in Chinese, character by character, stroke by stroke.
He sends it back under the door. The outsider, receiving these perfect responses, thinks the room understands Chinese. But does it really?
This thought experiment asks whether a machine that performs calculations can truly understand the world, or whether it merely mimics understanding without comprehension.
(Note: this summary of John Searle’s Chinese Room argument was written by GPT-4 and edited by me.)
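To make the rulebook concrete, here is a minimal sketch in Python (my own illustration, not Searle’s; the phrases in the table are arbitrary stand-ins): the room is just a lookup from input symbols to output symbols, and nothing in it represents what the symbols mean.

```python
# Toy "Chinese Room": purely syntactic rule-following.
# RULEBOOK is a hypothetical stand-in for the man's instruction book.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "It's nice today."
}

def room(note: str) -> str:
    """Return whatever reply the rulebook prescribes for this note.

    The function only matches and copies symbols; it models no meaning.
    """
    return RULEBOOK.get(note, "请再说一遍。")  # fallback: "Please say that again."

if __name__ == "__main__":
    print(room("你好吗？"))  # a fluent-looking reply, produced with zero understanding
```

From the outside the replies look competent; inside there is only symbol matching. That is the intuition the thought experiment trades on.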
Can computers do the same thing that brains do?
The mainstream of philosophy of mind holds physicalism, i.e. there is no difference between mental and physical states, so mental states can in principle be perfectly simulated by automation and AI. Therefore, AI can have mental states.
So, it seems to me, does Dwarkesh Patel, when he asks:
How does it [AI] do all this without developing something like a mind?
Much of the fear of AI risk comes from the assumption that AI can do what brains do: it is “intelligent”, i.e. it must have a mind and/or a will if it can perfectly simulate one.
The best response I’ve heard to this is by philosopher Michael Huemer:
Q: But the computer is doing the same thing the brain does! So how can it not be conscious?
A: I don’t think it does the same thing the brain does. You probably didn’t learn English by reading 8 million web pages and 570 Gigabytes of text. You did, however, observe the actual phenomena that the words refer to. The computer didn’t observe any of them. The computer also is not made up of 86 billion little cells sending electrical impulses to each other. (And btw, the “neurons” of the neural network are generally not physically real.) It contains completely different materials in a completely different arrangement, doing completely different things. (More discussion of the nature of the mind is warranted, but there’s no time for that now.)
Caveat: I don’t know exactly what causes consciousness, so I can’t rule out that we are somehow going to artificially create it at some future time. I am just saying that the mere fact that a machine produces humanlike text, via the sort of processes current computers use, is not good evidence that it has any mental states.
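Huemer’s parenthetical that the “neurons” of a neural network are not physically real can be made concrete. In code, a “layer of neurons” is just a weight matrix and a nonlinearity applied to an array of numbers; there are no cells and no electrical impulses. A minimal sketch (illustration only, using NumPy):

```python
import numpy as np

# A "layer of neurons" is a linear map followed by a nonlinearity.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # weights: 4 "neurons", each reading 3 inputs
b = np.zeros(4)               # biases

def layer(x: np.ndarray) -> np.ndarray:
    """One artificial 'neuron layer': W @ x + b passed through ReLU."""
    return np.maximum(0.0, W @ x + b)

print(layer(np.array([1.0, -2.0, 0.5])))  # four activations: just numbers, not cells
```

Whether that difference matters for consciousness is exactly what the physicalism debate above is about; the sketch only shows what artificial “neurons” literally are.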
Vindicating Marc Andreessen
Marc Andreessen, in his essay “Why AI Will Save The World”, says “the idea that AI will decide to literally kill humanity is a profound category error.”
The example above vindicates this view.
It is nevertheless not crazy to believe in AI x-risk, for two reasons:
1. Mainstream philosophers of mind and “hard scientists” believe in physicalism, and I don’t think it’s crazy to believe what the mainstream says.
2. There are reasons to see risk even from non-conscious AI. Any powerful machine or technology can hurt people without ever having decided to hurt them.
I do, however, agree with Marc that the fear of AI x-risk is overblown.
The Risk is Government Abuse of Technology
As Andreessen says, we should embrace AI. The changes brought by new technology do not benefit everyone, but they are almost always strongly net-positive.
The risk arises not from the technology itself, conscious or not, but from its abuse by conscious humans with bad intentions, bad incentives, and so on.
Historically, almost all x-risk technologies (biological weapons, atomic bombs, napalm, etc.) have come out of military programs commanded by governments.
This should give us pause.
Regardless of how AI turns out, giving governments the power to regulate these technologies is the last thing we want to do if we care about x-risk.
More on that in my podcast conversation with Michael Huemer.