
How similar are we?

Chris Walker

Computer terms have become common, and most of us use them to describe ourselves, not just our electronic devices. We may reach for computer terminology because it is how we are told to think of ourselves, through phrases like “our brains are hardwired a certain way” or “sleep is vital so we can recharge our energy levels.” I am sure you could rattle off a few more statements we hear or use daily. Such terms may not even seem out of place, because medical professionals use them when describing human functioning. The development of artificial intelligence has blurred the question of whether we are like computers or computers are like us, when in fact neither is true (Epstein, 2016). Tech companies have worked diligently to connect humans with computer interfaces, and those interfaces have become strangely personal, leading us to interact with them as if they, too, were human.

Most of you reading this are probably familiar with the voice assistants Siri and Alexa, and you may use them frequently. These software programs have become commonplace in our lives, and their existence humanizes computers even more. Their uncanny ability to imitate human responses may have led you to think that tech companies are not far from producing real artificial intelligence. Before accepting this idea entirely, it is worth asking what aspects of a computer would need to be perfected to deem it intelligent.

When the behavior of an object or animal resembles our own, we tend to attribute human qualities to it. We may interact with it as if it were another person, perhaps without even noticing. We typically do this with common house pets such as dogs and cats. Pets express emotion in ways similar to ours: they whine when they are hungry or injured, grow energetic when excited, and become cuddly when they are sleepy. They are not using a language we understand, but because of their behavior, we connect with them on a personal level. Now suppose electronic devices began to mimic our behavior: would we start treating them the way we treat other people or our pets? Cynthia Breazeal gives an example of this kind of personal interaction with a robot named Leonardo, which looks like an animal from a fairy tale, resembling an advanced version of a Furby (Breazeal, 2010). Leonardo does not speak a language but instead mimics human sounds and emotional behavior. When people interact with Leonardo, they tend to talk to it like a child and respond to its emotional cues.

When Apple began producing the iPhone, MacBook, iMac, and iPod, we witnessed a multigenerational connection to technology unlike anything before. According to Pew Research, 81% of the United States’ population owns a smartphone. These advanced devices do more than let us communicate with one another. They give us information about ourselves, such as how many steps we have taken, the calories we have burned, and the hours we have slept. Applications downloaded to these smartphones send us notifications and reminders of our daily tasks, provide up-to-date information, and even offer motivational quotes to encourage us.

All these new capabilities raise the question of how intelligent these devices really are, and whether they are inching closer to becoming autonomous. What exactly would need to be created for these devices, or ones like them, to become autonomous and perhaps even conscious entities? To begin answering these questions, we need to understand the various aspects of our intelligence and how complex some of the seemingly simple elements are.

We use language so often in our daily lives that we may fail to recognize its beauty and complexity. Western speech, for instance, is laced with sarcasm, yet even amid that ambiguity we can decipher the semantics of the words and understand the message being conveyed. Semantic understanding, moreover, is only one part of a whole system used for memory storage (Baddeley, 2012). To fully understand the information we receive, we must also make sense of the phonology of the words (Baddeley, 2012). Each component plays a different role in processing information and storing it in our brains: semantics helps us assign meaning to the words we hear or read, while phonology supports recall of the serial order of information (Baddeley, 2012). This verbal interpretation is one area tech companies are still improving in order to build a more personal connection between humans and machines, and it is proving far more complicated than once imagined. We run into these limits when we ask Siri a question and it cannot provide an answer because it does not grasp the semantics or phonology of what we are saying. Unlike humans, machines are preprogrammed with information, in this case the words and sentences they use to give us feedback (Pavlus, 2019). If the machine or software system does not have the information preprogrammed, it cannot produce new data, unlike our brains, which generate new information through active learning (Pavlus, 2019). Our ability to come up with novel ideas and learn as we go is due to our working memory system (Doolittle, 2013).
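To make this limitation concrete, here is a minimal sketch in Python of a lookup-based responder. It is only an illustration of the general idea, not how Siri or Alexa is actually implemented: every phrase and its reply must be supplied in advance, so any wording the programmer did not anticipate produces no answer.

# A toy, hypothetical "assistant" that can only answer from a fixed,
# preprogrammed table of phrases.
RESPONSES = {
    "what time is it": "It is 3:00 PM.",
    "what is the weather": "It is sunny and 72 degrees.",
}

def answer(question: str) -> str:
    # Normalize the input, then look it up in the preprogrammed table.
    key = question.lower().strip(" ?!.")
    # A phrasing that was never programmed in yields no new information,
    # unlike a human learner, who could infer the meaning.
    return RESPONSES.get(key, "Sorry, I don't understand the question.")

print(answer("What time is it?"))      # programmed wording: answered
print(answer("Got the time on you?"))  # same meaning, unseen wording: fails

Real assistants use far more sophisticated statistical models than a lookup table, but as Pavlus (2019) describes, the underlying gap is the same: the system can only draw on what it was given, rather than actively learning the way we do.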

It is also vital to understand how language changes the way we think and interact with the world. Cultures around the world vary so widely in part because of the languages used within them. Lera Boroditsky notes that some languages have a very descriptive vocabulary, while others have a minimal one (Boroditsky, 2012). This matters because humans write every computer program, and they write them from within particular cultural backgrounds. If the programmer does not include certain words or phrases, the program has no such information to draw on for answers. Likewise, if we use vocabulary the software was not preprogrammed with, our interaction with that interface is limited. For computer scientists to create a robot capable of fully understanding us, and of being understood by us, programmers would have to devise a coding system for every language along with its semantic and phonological syntax. Currently, even the most sophisticated and advanced software cannot rival the most basic levels of human language processing (Pavlus, 2019). This is primarily because we cannot artificially replicate the human brain’s neural network (Epstein, 2016; Kassan, 2014).

There is no doubt that tech companies and others have made significant progress in the field of artificial intelligence, but that progress has also brought to light just how much further they have to go. It is encouraging to realize that with the advancements in A.I., we are gaining even more knowledge about humans and can invent novel ways of improving our lives, whether medical or recreational, through the symbiosis we currently share with technology.

References 

Baddeley, A. (2012). Working memory: Theories, models, and controversies. Annual Review of Psychology, 63(1), 1–29. https://doi.org/10.1146/annurev-psych-120710-100422

Boroditsky, L. (2012). How the languages we speak shape the ways we think. In The Cambridge Handbook of Psycholinguistics (pp. 615–632). https://doi.org/10.1017/cbo9781139029377.042

Breazeal, C. (2010). The rise of personal robots [Video]. TED Conferences.

Pew Research Center. (n.d.). Demographics of mobile device ownership and adoption in the United States. https://www.pewresearch.org/internet/fact-sheet/mobile/

Doolittle, P. (2013). How your “working memory” makes sense of the world [Video]. TED Conferences.

Epstein, R. (2016). Your brain does not process information and it is not a computer. Aeon. https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

Kassan, P. (2014). A.I. gone awry: The futile quest for artificial intelligence. Skeptic. https://www.skeptic.com/reading_room/artificial-intelligence-gone-awry/

Murphy, J. (2018). Why are robots and corpses so creepy? Welcome to the Uncanny Valley. MDLinx. https://www.mdlinx.com/article/why-are-robots-and-corpses-so-creepy-welcome-to-the-uncanny-valley/lfc-2852

Pavlus, J. (2019). Machines beat humans on a reading test. But do they understand? Quanta Magazine. https://www.quantamagazine.org/machines-beat-humans-on-a-reading-test-but-do-they-understand-20191017/

 

License


The Singularity Isn’t Nigh and Here’s Why Copyright © 2020 by Chris Walker is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.
