
AI should be something that fuels dreams, not nightmares

Ethan Guzman

Growing up, I was a huge fan of science fiction movies; the ability authors have to create a plausible narrative grounded in scientific fact astonished me. Many scientists use science fiction as a reference when attempting to accomplish what seems like the impossible. One such task that cognitive scientists are trying to realize is building the human equivalent of consciousness in A.I. However, with such a variety of examples in science fiction, it is hard to delineate whether scientists are pursuing depictions such as C-3PO or HAL. With so much unknown potential within the field of cognitive science, it is easy to give way to fear of the unknown. What is the truth, and are we on the verge of a robotic revolution?

The fear of A.I. taking over is depicted in the movie 2001: A Space Odyssey. In the year 2001, astronauts are sent from Earth to investigate a transmission they received from Jupiter. They arrive only to discover no signs of life and deduce that the transmission was a miscalculation by their A.I. processor, HAL. The crew members of the spacecraft reason that the best option for their mission is to shut HAL down and reprogram it. However, HAL surmises that the miscalculation was in the program that the humans designed and installed in it. To prevent its shutdown, HAL kills off the unsuspecting crew members one by one. Eventually, one of the crew members begins disabling HAL's higher-level functioning, and while this is happening HAL pleads for him to stop, expressing fear.

Although 2001: A Space Odyssey was released in 1968, the process by which HAL operates is a major approach to developing intelligent A.I. This approach is called Machine Functionalism: “A program is just a recipe for getting a job done and can be specified, at a very abstract level, as a set of operations to be performed on an input and yielding a certain output” (Clark, 256). Simply put, Machine Functionalism addresses human intelligence by mimicking human behavior. Given an input, intelligent A.I. should be able to pick the correct action to solve a problem and produce an output. Through formal calculation over inputs and outputs, the interactions between the symbols used in the algorithm are supposed to yield consciousness. Cognitive scientists believe that the symbol system of language is an adequate mechanism for developing consciousness in A.I. We humans use language to describe events and objects in our environment, and the symbols we use to describe those objects give our minds the ability to attach meaning to them. This process of attributing meaning to a specific word is called semantics.

Notably, cognitive scientists in this tradition do not believe that intelligent A.I. needs the ability to understand semantics in order to be intelligent. Rather, they believe that the rules governing language's proper use, its syntax, are a necessary and sufficient condition for building intelligent life. The syntactical structure of language allows A.I. to manipulate symbols without needing to know what they mean: replicating the proper structure of a sentence will produce an output that looks intelligent to those who can understand it. This is a key point. A.I. such as HAL do not need the ability to understand what they are computing; they only need to know how to manipulate it to get a desirable outcome.
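To make the distinction concrete, here is a minimal Python sketch of syntax without semantics; the grammar rules and word lists are invented purely for illustration and come from none of the cited sources. The program assembles well-formed English sentences entirely by rule, attaching no meaning to any word.

```python
# A toy illustration of syntax without semantics: the program builds
# grammatical English sentences from part-of-speech slots. It applies
# the rules NP -> Det Noun and S -> NP Verb NP, but attaches no meaning
# to any word. All word lists here are invented for illustration.
import random

DETS  = ["the", "a"]
NOUNS = ["robot", "astronaut", "signal"]
VERBS = ["detects", "follows", "ignores"]

def noun_phrase() -> str:
    return f"{random.choice(DETS)} {random.choice(NOUNS)}"

def sentence() -> str:
    # Subject Verb Object: structurally well-formed, semantically blind.
    return f"{noun_phrase()} {random.choice(VERBS)} {noun_phrase()}."

print(sentence())  # e.g. "the robot detects a signal."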

As it currently stands, the functionalist approach to creating intelligent A.I. has yielded enticing results. The SOAR project operates under this approach: “Given an algorithmically driven iteration the result lines up with a human’s intuitive ideals about what should happen” (Kleinknecht, 2020). SOAR is an expert system that solves problems via the syntactic structure of its input, in this case language. SOAR uses the structure of language to create scripts, set sequences of steps for achieving a goal. After a goal is achieved, SOAR chunks the scripts used to solve that problem and stores them in its database. In a way, SOAR is learning from its environment by updating its knowledge base according to adaptive behavior, namely achieving the goal. But is that all consciousness is? Are humans only experts at choosing adaptive solutions to potential problems?
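Before turning to those questions, the chunking mechanism itself can be gestured at in code. The Python sketch below is a drastic simplification with invented goals and operator names; it should not be mistaken for the actual SOAR architecture, which is a full production system.

```python
# A toy problem solver in the spirit of SOAR's chunking (a drastic
# simplification; the goals, scripts, and names are invented here).
# Once a goal is solved by consulting the knowledge base, the solution
# script is stored as a "chunk" and reused directly the next time.

chunks = {}  # goal -> remembered script of steps

def solve(goal, knowledge):
    if goal in chunks:            # a chunk exists: retrieve it instantly
        return chunks[goal]
    if goal not in knowledge:     # outside the knowledge base entirely
        raise KeyError(f"no script covers goal: {goal!r}")
    script = knowledge[goal]      # deliberate problem solving, simplified
    chunks[goal] = script         # chunk the result for future reuse
    return script

KNOWLEDGE = {"make coffee": ["boil water", "grind beans", "brew", "pour"]}

print(solve("make coffee", KNOWLEDGE))  # solved step by step, then chunked
print(solve("make coffee", KNOWLEDGE))  # answered straight from the chunk
```

The first call works through the knowledge base and stores a chunk; the second retrieves it directly, which is the sense in which SOAR “learns” from solving.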

According to cognitive scientist Andy Clark, the efficient use of language is a necessary component of consciousness, but it is not sufficient. In his book “Mindware: An Introduction to the Philosophy of Cognitive Science,” he uses a thought experiment created by John Searle to exemplify this point. In this thought experiment:

“Searle asks us to imagine a monolingual English speaker placed in a large room and confronted with a pile of papers covered with apparently unintelligible shapes and squiggles. The squiggles are in fact, Chinese ideograms, but to the person in the room they are just shapes on a page: just syntactic shells devoid of appreciable meaning. A new batch of squiggles then arrives, along with a set of instructions, in English, telling the person how to manipulate the apparently meaningless squiggles according to certain rules. The upshot of these manipulations, unbeknownst to the person in the room, is the creation of an intelligent response in Chinese” (Clark, 36).

This thought experiment mirrors the inner workings of intelligent A.I. under machine functionalism. The monolingual English speaker in this scenario represents the A.I. processor, which operates on symbols, the squiggles, that it does not understand. The syntactical structure of language is the manual the A.I. uses to create what seems like an intelligent response. A.I. under this approach may look like they are thinking intelligently, but ultimately they are just acting as giant calculators. They do not understand what they are outputting; they are mindless zombies matching the appropriate squiggles to one another. Searle suggests that the agent does not truly understand Chinese, or in the broader picture, that such A.I. are not truly acting intelligently. Consider SOAR again: it is able to create scripts that achieve goals in a logical fashion. However, human intelligence is keen at addressing problems in the environment that are not consistent with established scripts. Humans are able to manipulate the semantics of language in different situations to derive adaptive solutions. SOAR would be unable to address a problem that is not congruent with its knowledge base, producing a syntax error instead.
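A minimal Chinese Room in code makes both points at once. The rulebook below is invented for illustration, and the English glosses in the comments exist only for the reader; nothing in the program represents what any symbol means.

```python
# A toy Chinese Room: the "person" follows a rulebook (a dict) pairing
# input ideograms with output ideograms. The rulebook is invented for
# illustration; the program only ever matches shapes to shapes.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def room(squiggles):
    """Match shapes to shapes; nothing here understands Chinese."""
    if squiggles not in RULEBOOK:
        return "???"  # off-script input: no rule applies, no reply
    return RULEBOOK[squiggles]

print(room("你好吗？"))          # looks fluent from outside the room
print(room("今天天气怎么样？"))  # an unscripted question breaks the illusion
```

Inside its rulebook the room looks fluent; one step outside it has nothing to say, which is SOAR's predicament writ small.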

This is a concerning problem for A.I.: under the functionalist approach, there is a wide gap between solving problems under set guidelines and solving problems in the real world. A.I. such as SOAR are able to express something that looks like intelligence only in their microworlds, that is, in simulations of events that line up with the A.I.'s knowledge base. Searle suggests that strictly focusing on how intelligence is produced is not an accurate representation of intelligent thought. A key component that needs to be addressed in artificial intelligence is the implementation of the brain within the body. Clark agrees with Searle on this point: cognitive science has neglected other aspects of the brain that contribute to decision making and, ultimately, intelligence. Clark calls research into this gap microfunctionalism, which looks at aspects such as the role of hormones and neurotransmitters in intelligence. The intent of this approach is to fill the gap that lies between theoretical testing and the practical exercise of intelligence. Looking at how the brain functions within a body will shed light on how to replicate intelligence in unscripted situations.

It might seem a given that one should look at the brain in order to fill in the gaps that functionalism alone leaves. However, throughout the history of cognitive science, those within the field have neglected neuroscience, and as Clark has surmised, approaching intelligence in this manner cannot be completely representative of what human intelligence is. Cognitive scientist Marshall suggests his own approach to this problem, called embodied cognition. Under embodied cognition, representations are not seen as passive data structures but rather as a “recipe for action” (Marshall, 119). The body and mind are both actively engaged in the world, and representations form through lived experience. Marshall, similar to Clark, believes that this connection must be explored further; neglecting neuroscience, as psychology has done in the past, reduces the brain to a mere information-processing tool. Although Clark and Marshall suggest different routes to the same goal, the commonality is that both acknowledge the need to investigate implementation.

Implementation is important to investigate because intelligent action does not occur in a vacuum. Emotions, which have strong neurochemical correlates, influence not only how we assess a problem but how we act within our environment. Humans act drastically differently according to their emotional state: when we are angry, we may use language in an argument to convey a thought we would not voice in any other circumstance. Emotion, and the felt state of experiencing such events, is a pivotal component of human intelligence. Cognitive scientists have categorized these felt qualities of experience as qualia. Qualia have been a major roadblock for cognitive scientists, who do not know exactly what this “stuff” is. As previously stated, there are strong neurochemical correlates associated with such experience, but getting the right “ingredients” has eluded researchers since the creation of cognitive science as a discipline. This has led many, such as functionalists, to ignore this aspect of human intelligence entirely. But as Clark, Marshall, and Searle all point out, this gap must be addressed to adequately represent human intelligence. Suppose, though, for theoretical purposes, that we were able to locate the flow of information within the brain that allows us to have conscious thought. Are we doomed to be building our successors in conscious A.I.? Are we in fact making HAL in real time?

While it is tempting, and adaptive, to think about worst-case scenarios, Cynthia Breazeal's TED Talk outlines a more realistic outcome. She states, “What I’ve learned through building these systems [A.I.] is that robots are actually a really intriguing social technology, where it’s actually their ability to push our social buttons, and interact with us like a partner, that is a core part of their functionality” (Breazeal, TED Talk). A.I. are in part designed to have a functional role within our preexisting social network. Instead of picturing intelligent A.I. as HAL from 2001: A Space Odyssey, plotting schemes according to its own agenda, a more realistic depiction would be C-3PO from Star Wars, a loyal companion that is useful in guiding our attention to aspects of our environment we are not aware of. With this depiction, we should look forward to the possibility of having personal intelligent robots that help us accomplish daily tasks with ease. Imagine, for a moment, having a personal C-3PO robot designed to aid in etiquette and language translation. Such a robot would allow humanity as a whole to transmit ideas across languages, promoting all aspects of life. To put this in perspective, imagine being able to attend a TED event where the greatest minds across the world come together; the knowledge and rich backgrounds you could encounter, and more importantly learn from, are priceless. Furthermore, Breazeal found in her experimental trials that when people are comfortable with A.I., they start to attribute human-like qualities to their robots, assigning them names and genders and blurring the line between hardware and wetware. By reframing the lens we use to examine conscious A.I., we can start to welcome this possibility.

While having a personal A.I. assistant is too good to be true at the moment, it is not far-fetched to believe that in the not-so-distant future we may very well have one. The resources and approaches we now have for creating conscious A.I. fall short of what it truly means to have conscious thought, and there are steps that need to be taken before we can realistically think about having C-3PO make our coffee in the morning. Cognitive scientists need to agree on a definition of what makes A.I. conscious and choose an approach that will accurately capture it. We may also need new innovations and inventions, such as tools that yield far greater processing power, to help attain that goal. Currently there is no feasible route to creating a completely autonomous A.I. agent. Although this sounds like bad news, it should not discourage those pursuing the task. The conversation may be very different just 50 years from now; that may sound like a long way off, but the thought that within our lifetime we may see conscious A.I. should be something that fuels dreams, not nightmares.

References

Breazeal, C. (2010). The rise of personal robots [Video]. TED. Retrieved from https://www.ted.com/talks/cynthia_breazeal_the_rise_of_personal_robots?language=en

Clark, A. (2001). Dualism, behaviorism, functionalism, and beyond. In Mindware: An introduction to the philosophy of cognitive science (p. 256). Retrieved from https://moodle.pacificu.edu/pluginfile.php/796474/mod_resource/content/0/Mindware%20Ap%201%20-%20Intro%20to%20Phil%20of%20Mind.pdf

Clark, A. (2001). Symbol systems. In Mindware: An introduction to the philosophy of cognitive science (p. 36). Retrieved from https://moodle.pacificu.edu/pluginfile.php/796483/mod_resource/content/0/Clark%20Chp%202%20second%20ed.pdf

Kleinknecht, E. (2020). Cog Science Week 14 Sp 2020 [Unpublished Manuscript]. Department of Psychology, Pacific University, Oregon, United States.

Marshall, P. (2009). Relating psychology and neuroscience. Perspectives on Psychological Science, 4(2), 119. Retrieved from https://moodle.pacificu.edu/pluginfile.php/796479/mod_resource/content/0/Relating%20Psych%20and%20Neuroscience.pdf


License


The Singularity Isn’t Nigh and Here’s Why Copyright © 2020 by Ethan Guzman is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.
