Mind Operation and Relationship between Thought and Language with Artificial Intelligence: Autonomous Vehicles

Emily Ung

Amid the rapid advancement of technology, it is worth asking whether machines have anything like human consciousness.  Artificial intelligence is the ability of a computer program to learn and think.  One real-world example of artificial intelligence (AI) that can generate language- or thought-like behavior is the self-driving car.  For some time now, companies such as Waymo have conducted extensive test drives on the way to offering the first AI-based public transportation service.  This AI system collects data through the vehicle’s radar, cameras, GPS, and cloud services to produce the control signals that operate the vehicle.  Autonomous vehicles are able to perform the same actions a human driver would, with as few mistakes as possible.
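The sense-plan-act loop described above can be sketched in code.  This is a minimal, hypothetical illustration, not Waymo's actual software: every function name, field, and threshold here is an assumption made for the example.

```python
# Hypothetical sketch of the sense -> plan -> act loop of an autonomous
# vehicle.  All names and thresholds are illustrative, not a real AV API.

def fuse_sensors(radar_m, camera_objects, gps_position):
    """Combine raw sensor readings into one simple world model (a dict)."""
    return {
        "nearest_obstacle_m": min(radar_m) if radar_m else float("inf"),
        "objects": camera_objects,
        "position": gps_position,
    }

def plan_control(world):
    """Turn the fused world model into a control signal (throttle/brake)."""
    if world["nearest_obstacle_m"] < 10.0 or "pedestrian" in world["objects"]:
        return {"throttle": 0.0, "brake": 1.0}   # brake for close obstacles
    return {"throttle": 0.5, "brake": 0.0}       # otherwise cruise

world = fuse_sensors(radar_m=[42.0, 18.5],
                     camera_objects=["car", "traffic_light"],
                     gps_position=(37.77, -122.42))
print(plan_control(world))  # {'throttle': 0.5, 'brake': 0.0}
```

The point of the sketch is only the shape of the pipeline: sensors are fused into a single picture of the world, and that picture is turned into a driving decision, the "control signals" mentioned above.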

Another notable and popular self-driving car is the Tesla.  The AI implemented in this vehicle provides computer vision, image detection, and a deep-learning control system that can detect other vehicles and objects, allowing it to drive without human intervention—meaning no person needs to operate it from behind the wheel.  Deep learning algorithms detect the objects around the vehicle and predict what they will do.  This is what makes cars like the Waymo or Tesla safer, reducing the number of accidents that occur.
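The "detect and predict" idea can be made concrete with a toy example.  This is not Tesla's deep learning system; it is a hypothetical constant-velocity predictor standing in for the prediction step, with all names and numbers invented for illustration.

```python
# Toy stand-in for the "predict what detected objects will do" step.
# A real system uses deep learning; this uses a constant-velocity model.

def predict_position(position, velocity, dt=1.0):
    """Predict an object's next (x, y) assuming constant velocity."""
    x, y = position
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt)

def will_collide(ego_pos, obj_pos, obj_vel, threshold=2.0, dt=1.0):
    """Flag a detected object whose predicted path comes too close to the car."""
    px, py = predict_position(obj_pos, obj_vel, dt)
    distance = ((px - ego_pos[0]) ** 2 + (py - ego_pos[1]) ** 2) ** 0.5
    return distance < threshold

# An object 5 m ahead, moving toward the car at 5 m/s, is flagged:
print(will_collide(ego_pos=(0.0, 0.0), obj_pos=(5.0, 0.0), obj_vel=(-5.0, 0.0)))  # True
```

Detection tells the car what is around it; prediction, even in this simplified form, is what lets it brake before a collision rather than after.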

In the reading, Lev Vygotsky’s “Thought and Word,” he states that there is a connection between thought and speech, and that the two are intertwined and work together.  Words are assigned meanings; without meaning, Vygotsky says, a word is just an empty sound.  He writes that “word meanings are dynamic,” meaning that they can change.  Put another way, a specific word can have a double meaning, or more than one meaning.  Take the saying “the cat is out of the bag”: it can be meant literally or figuratively.  Vygotsky says language gives thought a shareable meaning, showing how people interact within their environment (Prof. K, personal communication).

Vygotsky further notes the speech within us: inner speech.  Inner speech was difficult for researchers to examine because there was no way to measure it directly.  However, one way to analyze inner speech is to connect it with an external activity (Vygotsky, p. 227).  The two processes—thought and speech—do not correspond exactly to each other; the structure and foundation of thought differ from those of speech.  Thoughts and words emerge as we develop and are always dynamic and changing.  The same is true of the AI in autonomous vehicles: its systems and technology are continually updated to keep up with modern advances.

Peter Doolittle, in his TED Talk, presented on working memory (WM), the ability to take what we know and apply it to the present.  As human beings, we go through life learning from experiences and extracting meaning from them.  The components of WM are described in the working memory model proposed by Baddeley and Hitch: the central executive, the phonological loop, and the visuospatial sketchpad (Baddeley, 2011).  The central executive is important because it controls attention and coordinates the other components.  The phonological loop briefly stores verbal and auditory information, such as language and music; the visuospatial sketchpad handles visual semantics; and the episodic buffer links working memory to episodic long-term memory.  This model connects working memory with long-term memory.  Similarly, in self-driving cars, AI enables the vehicle to navigate through traffic and handle complex situations.  AI software and cameras with sensors act like a visual sense that ensures proper and safe driving: they detect traffic lights, read road signs, track other vehicles, and look for pedestrians.  But while we go through life through lived experiences, autonomous vehicles do not.  They already have a pre-determined idea of what certain objects look like, and they make few or no mistakes.
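Working memory's defining feature in Doolittle's talk—a small, temporary store where new information displaces old—can be illustrated with a toy model.  This is not a cognitive model from the readings, just a hypothetical sketch assuming a fixed capacity of about four chunks, a commonly cited estimate.

```python
# Toy illustration of working memory as a small, limited-capacity buffer:
# once it is full, attending to a new item pushes out the oldest one.
from collections import deque

class WorkingMemory:
    def __init__(self, capacity=4):          # ~4 chunks, a common estimate
        self.items = deque(maxlen=capacity)  # oldest item drops when full

    def attend(self, item):
        self.items.append(item)

    def recall(self):
        return list(self.items)

wm = WorkingMemory()
for word in ["stop", "yield", "merge", "exit", "toll"]:
    wm.attend(word)
print(wm.recall())  # ['yield', 'merge', 'exit', 'toll'] -- "stop" was displaced
```

The contrast with the essay's point about autonomous vehicles is that the car's "knowledge" of objects is pre-trained and fixed, whereas human working memory is a live, lossy workspace.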

Language and consciousness in humans and in machines are worth examining.  Language is characterized as communicative, arbitrary, structured, generative, and dynamic (Prof. K, personal communication).  Language acts as a meta-schema, a complex outline of units of memory about events, actions, or people.  Schemas are activated by the sensory register (SR), working memory (WM), and working memory and long-term memory (WM + LTM) combined.  Neural networks are a synced-up collection of cell assemblies working together.

In the reading “How the Languages We Speak Shape the Ways We Think,” Lera Boroditsky answers frequently asked questions about language and thought.  These questions concern whether language can shape the way we think, and whether some thoughts are unthinkable without language.  We demonstrate different cognitive patterns when tested in one language rather than another.  Unlike us, autonomous vehicles do not demonstrate the different cognitive patterns that we are able to.

Along these lines, what makes humans different from artificial intelligence is that we are notably not perfect beings.  Humans make mistakes, while AI systems follow a set of programmed guidelines or rules.  In the article “The empty brain,” Epstein discusses the differences between humans and computers.  As human beings, “[our] brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer” (Epstein, 2016).  Unlike AI, we are not born with knowledge, information, rules, memories, or processors within us; in fact, we never even develop them.  For example, we never store in our minds exactly what a dollar bill looks like, despite seeing it regularly and knowing what it looks like.

Overall, I do not believe that a machine-learning algorithm that can generate understandable linguistic outputs is much cause for concern, because it is meant to aid humans, so long as it does no harm to anyone.  It may even be very useful for helping people with physical disabilities get to places.  Artificial intelligence in autonomous vehicles may function in advanced ways, but that does not mean it is conscious.  Again, AI follows a set of rules or guidelines already programmed for it, which separates it from humans.


Baddeley, A. (2011). Working Memory: Theories, Models, and Controversies. Annu. Rev. Psychol., 63, 1-29.

Boroditsky, L. (2012). How the Languages We Speak Shape the Ways We Think. The Cambridge handbook of psycholinguistics, 31, 615-632.

Doolittle, P. (2013). How your “working memory” makes sense of the world. TED Conferences.

Epstein, R. (2016). The empty brain. Aeon Magazine.

Johari, A. (2020). AI Applications: Top 10 Real World Artificial Intelligence Applications. edureka!, https://www.edureka.co/blog/artificial-intelligence-applications/.

Vygotsky, L. Thought and Language. The MIT Press, 209-257.

