The Chinese Room Experiment
With easily accessible tools like ChatGPT, there is much talk of AIs becoming “smarter.” But how is artificial intelligence evaluated? And can AI really exhibit human attention, rather than merely simulating it?
The American philosopher John Searle asks us to imagine a locked room into which we send questions written in Chinese characters. The room returns sensible answers in Chinese, even about complex texts like the Dao De Jing. Surely there is a Chinese speaker inside the room, right?
But in this thought experiment, the person inside the room knows no Chinese. The room is lined with thousands of books that give detailed directions, written entirely in English, for how to answer every possible combination of Chinese characters. To the person in the room, those characters are meaningless squiggles. Yet, like many machine learning models trained on vast quantities of data, every input leads to the right output in the eyes of an outside observer.
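The procedure Searle describes is pure symbol matching with no interpretation. A minimal sketch in Python might look like the following; the rulebook entries here are invented placeholders, not real question-and-answer data:

```python
# A toy "rulebook": maps input character strings to scripted replies.
# The occupant (this function) never understands the symbols; it only
# matches them against directions, as in Searle's thought experiment.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",  # placeholder: "How are you?" -> "I'm fine, thanks."
    "道可道": "非常道。",          # placeholder: a line and its continuation
}

def room_reply(symbols: str) -> str:
    """Return the scripted reply for a string of characters.

    No meaning is ever assigned to the input; unmatched input
    yields a placeholder symbol rather than an interpretation.
    """
    return RULEBOOK.get(symbols, "？")
```

The point of the sketch is that correct-looking output requires only lookup, not comprehension, which is exactly the gap the experiment highlights.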
Would an AI that exhibits all the qualities of careful human attention actually be using human attention? The thought experiment asks how we might carve out a realm of human attention between an essentialist viewpoint (exclusionary, seeing the mind as the sole property of humans) and a functionalist viewpoint (goal-oriented, seeing the mind as a tool for achieving ends). What is the role of human attention in an age of AI? Only time will tell, although our own decisions may help settle the matter.