What it Means to Understand
An analysis of the Chinese Room Argument and the nature of understanding.
Do we understand what it means to ‘understand?’ Understanding may seem like a concrete concept, but AI has prompted many to doubt that we know what it truly means. This doubt was first sown by philosopher John Searle’s Chinese Room Argument.
The Chinese Room Argument
The argument is based on a scenario in which a non-Chinese speaker isolated in a room is given slips of paper with Chinese characters. Along with the slips, the person has a manual specifying which strings of characters may be given in response to any string received. Using this manual, the person can write coherent responses to any slip given to them. To those outside the room receiving the responses, it seems the person in the room understands Chinese.
Searle concludes that, despite what it seems, the person in the room does not truly understand Chinese. He then argues that this conclusion can be applied to computers, which work similarly to the Chinese room. Specifically, computers receive inputs and then use algorithms — analogous to the instruction manual — to produce an appropriate response. Thus, he argues that computers are incapable of ‘true understanding.’
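The manual-following analogy can be made concrete with a toy sketch. This is purely illustrative: the table, its entries, and the fallback reply are invented for the example, and no real system reduces to so small a lookup table.

```python
# Toy sketch of the Chinese Room: the "manual" is a lookup table
# mapping input strings to canned responses. The program follows
# rules over symbols without any grasp of what they mean.
manual = {
    "你好": "你好！",        # "Hello" -> "Hello!"
    "你好吗？": "我很好。",  # "How are you?" -> "I am fine."
}

def room_reply(message: str) -> str:
    """Return the manual's response for a message, or a stock
    fallback ("Please say that again.") for anything unlisted."""
    return manual.get(message, "请再说一遍。")

print(room_reply("你好吗？"))  # prints 我很好。
```

Like the person in the room, the program produces appropriate responses by rule-following alone; nothing in it connects the characters to anything they represent.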
Missing the Point
There have been many counterarguments to the Chinese Room Argument, the most common being the Systems Reply. The Systems Reply posits that the person in the room may not understand Chinese, but he is only a part of the system, which, as a whole, does. In this reply, the person is compared to a computer's CPU and the instruction manual to a computer's memory.
Searle’s response is that the person in the room could memorize the manual and thereby become the entire system, yet would still not understand Chinese. However, most counterarguments, and Searle himself in both his replies and his original argument, fail to recognize the most important point the Chinese Room raises. The central question is not whether the person or the system understands Chinese; it’s what that question is even asking. What does it mean to understand?
What is ‘True Understanding?’
First of all, does the person understand Chinese? Even if they memorize the manual, the only reasonable answer is no. The only aspects of Chinese they can know are the characters, how they are strung together, and what strings form a coherent conversation. This cannot be said to constitute a complete understanding of Chinese.
What is missing for the person to understand Chinese is what the characters represent. If the person were to get another manual with pictorial representations of the characters, they could build an understanding of Chinese. Of course, they would still not completely understand Chinese, as they wouldn’t know the phonetics of the characters and how to speak the language. Regardless, this point sheds light on what understanding means.
In essence, understanding is the ability to associate information. To say one understands Chinese means that one can associate individual characters with others — they know the grammar and syntax of the language. Additionally, they can associate individual characters with other sensory representations, such as images, emotions, sounds, or scents — they know the meaning of the language. These sensory representations are what is missing from the Chinese Room.
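This account of understanding as association can also be sketched in code. The structure and data below are invented for illustration: each symbol is linked both to other symbols (grammar and syntax) and to non-linguistic sensory representations (meaning), and on this essay’s account only the latter links mark the difference from the Chinese Room.

```python
# Illustrative sketch: understanding as the ability to associate.
# A symbol links to other symbols (syntax) and to sensory
# representations such as images or sounds (meaning).
associations = {
    "狗": {  # the character for "dog"
        "symbols": ["猫", "动物"],               # related words: cat, animal
        "sensory": ["image:dog", "sound:bark"],  # non-linguistic links
    },
    "树": {  # the character for "tree"
        "symbols": ["花"],  # related word: flower
        "sensory": [],      # no sensory links: the Chinese Room case
    },
}

def has_meaning(symbol: str) -> bool:
    """On this essay's account, a symbol is understood only if it is
    linked to representations beyond other symbols."""
    entry = associations.get(symbol, {})
    return bool(entry.get("sensory"))

print(has_meaning("狗"))  # prints True
print(has_meaning("树"))  # prints False
```

The person in the room, even after memorizing the manual, holds only the “symbols” half of each entry; the second manual of pictures would begin to fill in the “sensory” half.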
Are Computers Capable of Understanding?
The Chinese Room Argument says nothing meaningful about a computer’s ability to ‘truly understand.’ In the scenario of the Chinese Room, the conclusion that the person inside, and thus the system as a whole, does not understand Chinese can only be made because they lack information. If a computer is built to process text together with images or sound, it can form the same associations that constitute ‘true understanding’ in humans.
There is nothing special about understanding that is exclusive to humans. This applies even to abstract concepts. An abstract concept is defined as something intangible that cannot be directly experienced. However, this definition is misleading, as no concepts are independent of experience. Take, for example, love. To understand what love means, you have to be able to associate the word with emotions, people, memories, or other sensory information. This association is functionally no different from what computers do.
Asking The Wrong Questions
Computers are undoubtedly capable of ‘true understanding,’ but many won’t be satisfied with that answer. This is likely because when asking, “Are computers capable of true understanding?” they are looking for the answer to “Are computers capable of experiencing?” These questions are often conflated because many see experience as an essential part of understanding. However, this is not necessarily the case. Understanding is just something we experience.
The experience alone of something familiar may feel like understanding, but it is not. This feeling of understanding depends entirely on your ability to associate the subject of your experience with other information, which is what understanding truly is. As such, the feeling of understanding is more aptly the feeling of familiarity and is not part of true understanding or even the experience of understanding.
Understanding is only experienced when the associations of the subject of understanding are expressed as conscious thought. This is why we sometimes feel that we understand something but realize that we don’t when trying to justify that feeling by attempting to express our understanding. Even if a subject is familiar, we may not necessarily be able to make the proper associations that constitute a true understanding.
Computers Can Understand, But Can They Experience?
This question may never be answered. We aren’t even capable of answering with complete certainty whether other humans can experience. We can only infer that others have conscious experience as they behave as though they do.
Although we may never have an answer to the question, “Are computers capable of experiencing?”, the answer is probably yes. Understanding is not exclusive to humans, and as AI progresses, we continue to learn that there is less and less that is. Maybe experience will be the last of our capabilities that AI won’t share, but I doubt it.