Perception is necessarily unique and individual, because it is shaped by far more than the circumstances or settings we share. Perception comes through the total sum of our experiences, memories, feelings, emotions, upbringing, and more. When phrased this way, it seems an exercise in obviousness to say that our perceptions are all different. But that's just the tip of the iceberg in thinking about the concept of perception: how we understand the world around us.
We start with the thought experiment of the Chinese room, wherein language is reduced to a series of inputs and appropriate outputs. You are sitting in a room, and you are trained to recognize Chinese characters shown to you, and then display an appropriate response. For all intents and purposes, anyone outside the room interacting with you would assume that you are a native Chinese speaker—but you’re just pushing buttons. So what is language? It’s about understanding, emotion, and empathy; yet what happens if we conceive of it as a calculator with numbers and equations? What does it mean to actually understand someone on a real level, instead of just responding to them as you think you should? For all of our best intentions, are we just simulating thinking, behavior, actions, and emotion?
Our final chapter, in a way, brings us to the heart of the deepest value of thought experiments: they are a sneaky way to delve into how we perceive the world, and why. This is the realm of what philosophers call theory of mind. Through what channels do we experience the world, and are we (can we be) accurate in our assessment? The thought experiments below attempt to unpack some of these themes by looking closely at what it takes to dismantle all the ordinary perceptive abilities we take for granted (i.e., the ones you've been using all along in reading this book). Those abilities are not wrong, or even necessarily lacking, but they can always stand to improve, especially once you take a look at the ways your perception can be bent.
While our first chapter was about what can be known and learnt, and the limits of our knowledge, in this chapter we go a little further and try to understand perception, experience, and being in and of itself.
As your eyes scan along the letters on this page, a million separate processes are happening. It seems like such a simple thing: to see what you see, to think what you think. But by slowing everything down and taking a microscopic view of things we ordinarily take for granted, we open up new worlds of insight: for example, the entirely new universe of what other people's experiences are like compared to our own.
As with all thought experiments, the right kind of question can offer you insights far more valuable than merely having an answer to your question.
Do You Speak Chinese?
The philosopher John Searle created this experiment in 1980 to challenge the claims then being made for artificial intelligence (AI). Now, it's important to understand exactly what is meant here by AI. Some machines behave "as if" they were intelligent but are merely simulating thought or going through the motions. Others are more sophisticated and can be said to actually comprehend things, i.e., not just something that acts like a mind, but something that is one. Searle was talking about the latter type when he proposed the Chinese room thought experiment.
It goes like this. You are an English speaker who knows no other language, locked in a room with a batch of Chinese writing, a second batch of Chinese writing, and some rules, written in English, for matching up the first batch with the second. Just by looking at the shapes of the Chinese characters, you can see which group of symbols corresponds to which other group.
So far so good. You are also given a third set of Chinese writing and still more instructions (again in English) that let you draw correlations between the third batch and the first two. The first batch is called a "script," the second a "story," and the third the "questions." However, you are not told any of this. You are asked only to provide responses, called "answers," to the questions.
It takes you a while to learn the rules. Despite not knowing a word of Chinese, you eventually become so good at supplying the right answers to the questions that, to someone looking from the outside in, you appear to be fluent in Chinese.
So, what are you really doing by manipulating the symbols in this way? Searle would say you are behaving as a computer would, and even though your behavior is indistinguishable from a Chinese speaker's, you are not one. You are merely applying a script to process a story and respond in the form of answers. You don't even strictly need to know that these are "questions" or "answers" to respond correctly.
The inputs and outputs are the same as for a real Chinese speaker, but what is missing is comprehension. According to Searle, this situation is analogous to what a computer does: it behaves in every way as if it were thinking, yet does no real thinking, since it has no understanding of what it is doing. The computer is simply manipulating symbols. For Searle, it didn't matter how complex the behavior, set of rules, or symbols; if there's no understanding, there's no actual thought.
Philosophers call this symbol juggling “syntax,” but the meaning ascribed to those symbols is “semantics.” So, although a machine can demonstrate syntactical behavior, it doesn’t have semantics and lacks any kind of mind as we understand it. A machine can “say” something like, “the sky is blue” without having any eyes to see the color blue, any feelings about the fact, any past history with skies or anything else. It has no beliefs, feelings, or indeed any interior life that we typically associate with having a mind.
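Since the room reduces to a lookup procedure, a minimal sketch can make the syntax/semantics gap concrete. Everything here, including the rule table and the sample phrases, is a hypothetical stand-in rather than anything from Searle's paper:

```python
# A toy "Chinese room": the operator follows a rule book that maps input
# symbol strings to output symbol strings. The meanings of the phrases
# never enter the program, which is exactly the point.
RULE_BOOK = {
    "天空是什么颜色？": "天空是蓝色的。",  # "What color is the sky?" -> "The sky is blue."
    "你会说中文吗？": "我当然会说中文。",  # "Do you speak Chinese?" -> "Of course I do."
}

def operate_room(question: str) -> str:
    # Pure syntax: match the shape of the symbols and return the scripted
    # response. No step in here involves knowing what anything means.
    return RULE_BOOK.get(question, "对不起，我不明白。")  # fallback "sorry" symbols

if __name__ == "__main__":
    # To an observer outside the room, the answer looks fluent.
    print(operate_room("天空是什么颜色？"))
```

On Searle's view, it makes no difference whether the table has two entries or two billion; a bigger rule book buys more convincing behavior, never understanding.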
Now, we won’t take the time to dwell here on AI as a topic, as enormous and interesting a field as it is. Rather, we can use some of these ideas to springboard a more general inquiry into how we understand and perceive our world, how we acquire knowledge, and what we do with it when we have it. You might have had a few responses to hearing about the Chinese room. For example, how do we know that a Chinese speaker really is a Chinese speaker?
Well, simple: by their external behavior. That is all we can see, observe, or know. If this is enough to convince us that another person is a Chinese speaker, why do we hold machines to more stringent standards? In fact, one could argue (if one were feeling argumentative) that we don't have much proof that we're not doing exactly this ourselves when we claim to be able to speak a language!
There are many responses and counterarguments to Searle’s hypothetical situation, most of which are concerned more technically with linguistics, cognitive science, programming, consciousness and artificial intelligence. However, thought experiments allow us to engage with these questions without necessarily needing a deep understanding of the technical details.
We can ask what this experiment shows us not about AI, but about ourselves. If we say that a machine can’t have a mind, well, why not exactly? More precisely, what is it about what we do that makes us distinct from a robot programmed to do the same thing?
Where exactly do things like "intention," "consciousness," or "understanding" actually come from, and why is a mere replica of these things not the same as the original? You may be seeing parallels here with the swamp man, as well as with the shadows on the wall of Plato's cave. In fact, you may start to see the bigger question: real consciousness versus a symbol, illusion, or image of consciousness.
Though these thought experiments appear to be about computers, they are really about humans, and about what we're really talking about when we say we have minds. The computational theory of mind says that mental states are like software installed on the hardware of our physical brains. A mental state is then characterized by the function it performs. If we accept this model, we have to concede that computers can do the same, and that AI can think in the same way we can, only with hardware that is slightly different.
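Here is a rough sketch of that "same software, different hardware" idea, under the functionalist assumption that a mental state just is a functional role. The class names and the "pinprick" example are invented for illustration, not drawn from any particular theorist:

```python
# Computational theory of mind, caricatured: what makes a state "mental"
# is the function it performs, not the substrate that performs it.
from typing import Protocol

class Mind(Protocol):
    def respond(self, stimulus: str) -> str: ...

class BrainMind:
    """Stands in for wetware: neurons realizing the function."""
    def respond(self, stimulus: str) -> str:
        return f"winces and withdraws from {stimulus}"

class SiliconMind:
    """The same function realized on different hardware."""
    def respond(self, stimulus: str) -> str:
        return f"winces and withdraws from {stimulus}"

def poke(mind: Mind) -> str:
    # The caller cannot distinguish the substrates; on this theory, both
    # systems are in the same mental state when they fill the same role.
    return mind.respond("a pinprick")

assert poke(BrainMind()) == poke(SiliconMind())
```

Searle's objection, put in these terms, is that filling the role is still only syntax; nothing in either class understands or feels anything.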
Alan Turing’s famous test for machine intelligence was whether a machine could converse with a human in so authentic a way that the human would not realize it was talking to a machine. Searle saw the human mind as more than just a program, and would disagree that the Turing test shows us anything about the “mind” of a machine.
Isn’t a mind so much more than having an intelligent conversation? More than just being able to juggle symbols? What about the feelings of love, the ability to dream, to have a sensation of consciousness, to really smell the proverbial roses, to believe deeply in the experience that you are alive and have a distinct being?
These questions bring us to a final consideration: how accessible is the idea of the “mind” anyway?
What are we really talking about, and are we even talking about the same thing? Let’s zoom right out again and, as we’ve learnt in previous thought experiments, take a closer look at all the assumptions we’ve made in constructing the Chinese room argument.
A related thought experiment is the antepenultimate one we’ll consider in this book, and in a way the most fundamental. Wittgenstein was a philosopher who wanted to cut to the heart of all philosophical questions by looking closely at the way in which we connect our thoughts and ideas about the world with the world itself. He was interested in language and logic—not the content of an argument, but its underlying mechanism.