10 Epilogue: Intelligence Augmentation and Beyond: What is its Role in Human Development?

Dr Jay Seitz

Michael I. Jordan, of the University of California, Berkeley, posits “intelligence augmentation” as a better description of the current state of so-called “human-imitative AI.”

Although it has been referred to as “artificial intelligence” (AI), it might be better thought of as “synthetic intelligence”: a synthesis of the presumptive ways that the human brain, human cognition, and the social and physical world work together to create human-like “intelligence.”

Indeed, this “intelligence augmentation” involves an intelligent infrastructure made up of data, computation (“software”), and physical entities (“hardware”) that “make human environments more supportive, interesting and safe,” according to Dr. Jordan.

That is, data, computation, and physical entities augment human intelligence as in the use of a search engine, the translation of a natural language, or pattern detection in the analysis of MRI images.

Machine Intelligence

And this is where it gets interesting…


If you like, you can actually chat with a computer right here (ChatGPT, from OpenAI, launched on November 30, 2022): ChatGPT.

ChatGPT is a large language model developed by OpenAI that can be used for natural language processing tasks such as text generation and language translation.
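
For readers curious about what interacting with such a model looks like programmatically, here is a minimal sketch that assumes the openai Python package; the model name and prompt are illustrative examples rather than anything specified in this chapter.

# A minimal, hypothetical sketch of querying a ChatGPT-style model
# through the openai Python package.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Translate 'good morning' into French."},
    ],
)

print(response.choices[0].message.content)

Running this prints the model’s reply, illustrating the text generation and translation tasks mentioned above.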

OpenAI is an American artificial intelligence research laboratory headquartered in San Francisco. The company conducts research in the field of AI with the goal of developing friendly AI that benefits humanity as a whole. The organization was founded in late 2015 by Sam Altman, Elon Musk, and others, who collectively pledged US$1 billion. In 2019, OpenAI received a US$1 billion investment from Microsoft, followed by a further investment of roughly US$10 billion in 2023.

A Princeton University senior, Edward Tian, launched GPTZero on January 3, 2023. GPTZero identifies whether a text was written by a bot or a human, in order to deter plagiarism by students using ChatGPT to write essays or papers: GPTZero.


Programmers and computer scientists have been bragging for over 60 years that computers will attain “artificial general intelligence,” and some now claim that we are approaching a point of “singularity” (Ray Kurzweil; see book below) in which “machine intelligence” will catch up to and eventually surpass human intelligence (not to mention animal intelligence, which is currently far ahead of any computer or software program).

The Singularity is Near: When Humans Transcend Biology (2005) – Ray Kurzweil

Ray Kurzweil is an American computer scientist, author, inventor, and futurist. He has been involved in the invention of optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments (Kurzweil digital pianos). His books include texts on health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism. Ray Kurzweil.

As regards robots, “The Matrix” (1999) and “I, Robot” (2004) are great cinematic depictions of the future of robotics, both good and bad.

Here are some current “robots” on the market (see photos below).

Boston Dynamics Robot, ‘Spot’

SONY’s Aibo Robotic Dog

Nevertheless, programmers and computer scientists claim that “deep/machine learning,” often coupled with a form of reinforcement learning, can extract patterns better than any human. So a few years ago they alleged that they would soon put radiologists out of business. It turns out that there is much more to “reading” an MRI, X-ray, or CT scan than simply extracting a pattern, as it also involves attaching substantive meaning to these radiographs. And since deep/machine learning systems have no understanding of symbols, nor any ability to interpret them, they can fail miserably, often missing malignant tumor masses and forms of cancer, among other things.

Historically, reinforcement learning is an area of machine learning in which “intelligent agents” take actions so as to maximize cumulative reward. Sound familiar? It is basically a variant of operant conditioning. And, yes, it is a staple of animal and human intelligence, but only a relatively small part of it.
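
To make “maximize cumulative reward” concrete, here is a minimal sketch of tabular Q-learning on a toy one-dimensional “corridor” task; the environment, reward values, and learning parameters are hypothetical illustrations, not part of any system discussed in this chapter.

# A toy reinforcement-learning example: tabular Q-learning in a 5-position
# corridor where reaching the right end yields a reward of 1.
import random

N_STATES = 5          # positions 0..4
ACTIONS = [-1, +1]    # step left or step right
GOAL = N_STATES - 1   # rightmost position is rewarded

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move the agent; reward 1.0 only when the goal is reached."""
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Update the running estimate of cumulative (discounted) reward.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned policy simply steps rightward toward the rewarded goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

In other words, the “agent” does nothing more than adjust its estimates of expected reward after each action, which is why the comparison to operant conditioning is apt.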

The other two “deep/machine learning” paradigms are “supervised” and “unsupervised” learning. In supervised learning, the available data consist of labeled examples, meaning that each data point contains features and an associated label. Unsupervised learning is a type of program or “algorithm” in which the program learns patterns from untagged data. It operates through a kind of “mimicry,” which is, interestingly, another universal feature of the human and animal mind.
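
The contrast can be seen in a few lines of code. The following is a minimal sketch assuming the scikit-learn library and synthetic data; the dataset and the particular models used are illustrative choices only.

# Supervised vs. unsupervised learning on the same synthetic data
# (scikit-learn is assumed to be installed).
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic feature vectors X; y holds the labels, used only in the supervised case.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: every example comes with a label, and the model learns a
# mapping from features to that label.
classifier = LogisticRegression().fit(X, y)
print("supervised training accuracy:", classifier.score(X, y))

# Unsupervised: the same features without labels; the algorithm must
# discover structure (here, two clusters) on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("cluster assignments for the first 10 points:", clusters[:10])

The classifier is told the correct answer for every training point, while the clustering algorithm has to find the two groups from the features alone.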

Operant and Classical Conditioning

Operant or “instrumental” conditioning is a common form of learning in which behaviors are modified through the association of stimuli with reinforcement, which increases the behavior, or with some form of punishment, which decreases it. “Operants” are behaviors that are a normal part of the animal’s or human’s behavioral repertoire (e.g., raising one’s hand in class), are voluntary, and are conditioned to occur or not occur depending on the environmental consequences of the behavior.

On the other hand, classical conditioning is a form of learning where stimuli are paired with biologically relevant events to produce involuntary and reflexive behaviors (e.g., a puff of air in one’s eye makes one blink).

And, by the way, although we have briefly discussed just two kinds of learning, operant and classical conditioning, there is no question that learning has a huge impact on human development.

Nonetheless, in the first half of the 20th century the operant conditioning paradigm attempted to explain cognition and language, but it was rapidly displaced by the triumvirate of cognitive psychology; cognitive science; and cognitive, affective, and evolutionary neuroscience, not to mention emerging models and research in the field of artificial intelligence. However, it is still widely used and cited in the behavioral health literature (e.g., addiction, autism, brain injury, and phobia) and in a slew of ABA (Applied Behavior Analysis) programs that have popped up in undergraduate and graduate programs across the US, where its real strengths shine.

The Pattern Recognition System: Pareidolia and Visuospatial Thinking

In any event, the ability to extract patterns in the world, known as “pareidolia,” actually has a very long evolutionary history in Animalia (i.e., human and non-human animals), going back at least to the Cambrian period, some 520 million years ago. To wit: we have been doing it effortlessly for hundreds of millions of years.

Indeed, in the biological world, “pattern-seeking” is one of the essential ways that organisms gain information and acquire knowledge about physical reality and it occurs very early in life even in young human infants.

But over millions of years it has also become the foundation for paradigmatic thought, that is, categorization of the natural world, and, much later, it became the core of image-making or visuospatial thinking, which goes back in modern human history at least 120 thousand years, based on recent discoveries of human artifacts in sub-Saharan Africa.

Paradigmatic versus Narrative or Syntagmatic Thought

To be sure, thinking in “categories” and in a hierarchical manner is a defining feature of paradigmatic thought (e.g., categorize all the canines in your neighborhood) and thinking in “stories” or in a narrative manner is a defining feature of narrative or syntagmatic thought (e.g., the writing of a great novelist).

The Rule-Based System: Thinking in Symbols

A rule-based system, on the other hand, which we commonly think of as “intelligence,” involves manipulations of relations among symbols. This is what most of us think thinking is: thinking in symbols, which, coincidentally, corresponds to the “symbolic stage” in Jean Piaget’s theory of cognitive development, in which older infants, by the middle of the second year, begin to think symbolically.

What makes up this rule-based system? It includes the use of general “algorithms” or rules, “heuristics” or cognitive strategies, deductive and inductive reasoning, conditional and pragmatic reasoning, categorical and conceptual inference, reasoning by analogy, metaphoric thought (see below), and the use of schemas and scripts, as in implementing a restaurant script in one’s mind for obtaining a table and ordering food in an eatery (a simple version is sketched below), as well as global and local planning, insight, and creative thought. The latter two often rely on partially unconscious processes that may overlap with the pattern recognition system. A rule-based system would also include serial understanding, that is, the ability to string actions together to achieve a pre-defined end or “praxis.”
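
A toy version of such a restaurant script can be written as an explicit rule-based program: a handful of condition-action rules applied one after another until a pre-defined end state is reached. The particular states, rules, and wording below are hypothetical illustrations, not a model taken from the cognitive literature.

# A toy rule-based "restaurant script": each rule pairs a condition on the
# current state with the action to perform and the state change it produces.
def restaurant_script():
    state = {"seated": False, "ordered": False, "eaten": False, "paid": False}
    rules = [
        (lambda s: not s["seated"],                  "seated",  "ask the host for a table"),
        (lambda s: s["seated"] and not s["ordered"], "ordered", "read the menu and order food"),
        (lambda s: s["ordered"] and not s["eaten"],  "eaten",   "eat the meal"),
        (lambda s: s["eaten"] and not s["paid"],     "paid",    "pay the bill and leave"),
    ]
    fired = True
    while fired:                       # keep going until no rule applies
        fired = False
        for condition, flag, action in rules:
            if condition(state):
                print(action)          # carry out the action...
                state[flag] = True     # ...and record its effect on the state
                fired = True
                break
    return state

restaurant_script()

Each pass fires the first rule whose condition matches the current state, which is essentially how classic production systems in artificial intelligence string rule applications together toward a goal.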

Metaphoric or Supramodular Thought

Yet, in spite of all the above, an essential core of human intellect is the ability to recognize and create similarities beneath the seemingly disparate outward appearance of things (see book below). We call this “metaphorical” or “supramodular” thought, that is, the ability to think of one thing in terms of another. It involves cross-modal, body-kinesthetic, physiognomic, and other kinds of correspondences (e.g., comparing a visual shape or color to an emotion, a balletic movement to a spinning top, or seeing a cloud in the sky as having a sad expression, respectively). We begin to see these abilities early in development, in young children.

Metaphors We Live By (2003) – George Lakoff and Mark Johnson

Nonetheless, sensory systems do not operate completely independently, but rather sensory information is commingled in the brain, a process referred to as cross-modal perception or synesthesia. In young human infants it’s called “neonatal synesthesia” because the brain systems underlying these abilities are not fully mature and infants have little choice but to experience the world as more integrated than it really is.

Indeed, this supramodular system has four defining features and emerges early in life:

  1. The ability to relate sensory qualities across different sensory modalities
  2. The ability to link an inanimate object to an emotion
  3. The ability to associate a sensory quality to an abstract property
  4. The ability to transform or relate one movement to another

So, What is “Intelligence”?

Unfortunately, one of the fundamental issues in contemporary psychology and the social sciences is the lack of understanding about the concept of intelligence. Indeed, it is not even clear what intelligence actually is in humans or human infants, or other animal species. And least of all, what it might consist of in robots and computers, if it even exists in those machines at all.

Both Robert Sternberg (Yale, Cornell) and Howard Gardner (Harvard) have put forth important theories about what intelligence might be. Sternberg argues that it revolves around specific mental components in the mind, the context in which it is used, and prior experience. Gardner, on the other hand, suggests that it is not one thing, but many things, and has put forth a theory of “multiple intelligences,” actually eight, that have a brain, cognitive, and bodily basis and exist in all humans the world over.

In a nutshell, intelligence might be something like this:

  1. Sternberg: “Mental self-management skills.”
  2. Gardner: “The ability to solve problems or fashion products valued in a community or culture.”

A test of Gardner’s eight intelligences is available here: MIDAS.

Project Zero (Harvard Graduate School of Education), overseen by Gardner and others, seeks to understand and nurture human potential: learning, thinking, ethics, intelligence, and creativity. Their research examines the nature of these potentials, the contexts and conditions in which they develop, and the practices that support their flourishing: Harvard Project Zero.

Here are a few other suggestions about what intelligence might be:

To my mind, a human intellectual competence must entail a set of skills of problem solving — enabling the individual to resolve genuine problems or difficulties that he or she encounters and, when appropriate, to create an effective product — and must also entail the potential for finding or creating problems — and thereby laying the groundwork for the acquisition of new knowledge. (Gardner, Frames of Mind)

A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience . . . it reflects a broader and deeper capability for comprehending our surroundings—“catching on,” “making sense” of things, or “figuring out” what to do.

Synthetic Intellect

Historically, “artificial intelligence” programs have gotten better and better at pattern recognition using “deep learning” to recognize voices or images; forms of image analysis to spot particular fingerprints as well as iris and face identification; harmonic, audio, and movement analysis; biopharmaceutical assay; drug discovery; and the like.

But, looking ahead, why would we want artificial intelligence to just mimic human intellect? Augmenting biological intelligence? Great. But simply emulating human intelligence would have limited advantages.

What we want to create is a new kind of artificial intelligence, a “synthetic intellect,” that will abet human intelligence and creativity.

Not a mental doppelganger, but a new class of factitious mentality, one that thinks and creates in unfamiliar and novel ways.

Syncretism, not simulacrum.


References

Feinberg, T. E., & Mallatt, J. M. (2017). The ancient origins of consciousness: How the brain created experience. Cambridge, MA: MIT Press.

Gardner, H. (2011). Frames of mind: The theory of multiple intelligences (3rd ed.). NY: Basic Books.

Jordan, M. I. (2019). Artificial intelligence: The revolution hasn’t happened yet. Harvard Data Science Review, 1(1). Original article.

Seitz, J. A., & Beilin, H. (1987). The development of comprehension of physiognomic metaphor in photographs. British Journal of Developmental Psychology, 5, 321-331. Original article.

Seitz, J. A. (1998). Nonverbal metaphor: A review of theories and evidence. Genetic, Social, and General Psychology Monographs, 124(1), 121-143. Original article.

Seitz, J. A. (1997). The development of metaphoric understanding: Implications for a theory of creativity. Creativity Research Journal, 10(4), 347-353. Original article.

Seitz, J. A. (1997). Metaphor, symbolic play, and logical thought in early childhood. Genetic, Social, and General Psychology Monographs, 123(4), 373-391. Original article.

Seitz, J. A. (2000). The bodily basis of thought. New Ideas in Psychology: An International Journal of Innovative Theory in Psychology, 18, 23-40. Original article.

Seitz, J. A. (2005). The neural, evolutionary, developmental, and bodily basis of metaphor. New Ideas in Psychology: An International Journal of Innovative Theory in Psychology, 23, 74-95. Original article.

Seitz, J. A. (2019). Mind embodied: The evolutionary origins of complex cognitive abilities in modern humans. NY: Peter Lang International Academic Publishers.

Sternberg, R. J. (1988). The triarchic mind: A new theory of human intelligence. NY: Viking.

License


Retracing the Steps of Human Ontogeny Copyright © 2023 by Dr Jay Seitz is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
