4

1.

Ed Feigenbaum and Julian Feldman’s book was Computers and Thought. The papers that went into this antediluvian volume of readings came from a variety of journals and conference proceedings, such as the Proceedings of the Western Joint Computer Conference, Lernende Automaten, the Symposium on Bionics, and the Proceedings of the Institute of Radio Engineers. One came from the IBM Journal of Research and Development, and from Mind came Alan Turing’s brilliant and amusing essay, “Computing Machinery and Intelligence,” proposing the imitation game, which came to be called the Turing test.

Seeking these led me far away from the main library, where I’d done my own studying, to small science and engineering libraries scattered around the northern part of the campus. These early efforts in AI had appeared in so many odd niches because sometimes a scientist simply owed a paper to a journal or conference, and strategic publishing hadn’t entered his mind. But really, the scattering signified how many researchers in different disciplines—engineering, psychology, business—were stirring with the possibility that these new computers might, in some way, be said to think. The zeitgeist was pregnant with the idea.

In the introduction to Computers and Thought, Feigenbaum and Feldman laid out their criteria for inclusion: the most important was that they wanted to focus on results, not speculation. Results, not speculation. Over the years, that issue has come up repeatedly because scientists from various fields used to believe (to use the words of one Nobel physicist I knew) they could “just come in and clean up AI.” Although the field has taken various paths in its development and incorporated research from many fields, results are still the main requirement.

Feigenbaum and Feldman drew a distinction between what was then called neural cybernetics, in which a computer would learn from scratch (later known as neural nets: roughly, brain-like structures), and cognitive models. They favored cognitive models for two reasons. First, they argued, intelligent performance by a machine is difficult enough to achieve without starting from scratch, from the cell up, so to speak, as neural cybernetics required. Therefore cognitive-model scientists built into their systems as much complexity of information processing as they understood and could program into a computer. Second, the cognitive-model approach had yielded results, whereas results from neural cybernetics were “barely discernible.” This was to change, but not yet. Building some intelligence into an intelligent system already violated the philosophical notion of the mind as a tabula rasa, but it accorded with reality: humans are born knowing a lot and get trained as they develop.

What did I retrieve from all these arcane technical journals? Reports described programs that played chess and checkers—not brilliantly, but recognizably competing. Two programs proved mathematical theorems, and another program could solve symbolic integration problems in freshman calculus. Certain programs could answer questions (stringently limited in both topic and syntax) and others recognized simple patterns. Several programs had been written to imitate human cognition, at least as it was then understood, at the end of the 1950s. This too defined a division that would continue for a while in AI: on the one side, imitations—simulations, properly—of how humans think, and on the other, results achieved by whatever worked—mathematical models, statistical models, algorithms.[1] In those early days, the field of AI was ecumenical.

Human intelligence seemed as obvious and substantial as the Great Wall of China, but when humans reached out for it, it dissolved in a miasma of conjectures and swamp gas. Computer programs might offer a way of modeling and understanding it by actually behaving intelligently for anyone to see.

I fetched, cut, pasted, and typed. Because my two bosses didn’t care when I worked so long as I got the work done, I mostly worked evenings, when my new husband, Tom Tellefsen, was absorbed as an architecture student. Typical of computing, nighttime was when the lab and office came alive. As I worked, I was surrounded by young graduate students in the field who were not only high-spirited good company but also taught me by osmosis things I needed to know about the scientific method.

Gradually the book was assembled, with Feigenbaum and Feldman adding brief paragraphs that gave the context of each paper. It seemed a natural for the Prentice-Hall series in computing, but the consulting editor of the series, the ubiquitous Herb Simon, told Prentice-Hall that the book wouldn’t sell, and they should reject it. (For that, Simon would laugh at himself for many years to come. More than a half-century later, the book is available for nothing on the Web, but The MIT Press will sell you a printed and bound copy if you wish.)

McGraw-Hill signed it up gladly. The firm had set up a branch office in Marin County, across San Francisco Bay from Berkeley, and I have vivid memories of going with Feigenbaum and Feldman to visit their editor. I’d read books since I was four. I had, in some very vague way, an idea that I might be involved in writing books someday. A visit to an actual publisher? Catnip.

By the time Computers and Thought was published in 1963, I was gone from the campus. I hadn’t entered law school after all but was working instead in my family’s business. It wasn’t a good fit for me, so I was thrilled when, in 1965, Ed Feigenbaum called and asked me to join him at Stanford as his assistant. He’d finally thrown over his dispiriting missionary work among members of the Berkeley faculty and decamped to Stanford, which had an actual computer science department, one of the first.

2.

“It can’t think because it’s not human!” a dear friend shouted at me recently. He meant a computer. In the words of Harvard’s Leslie Valiant, my friend is confusing what the computer is with what it does (Valiant, 2014). But my friend has plenty of company in his conviction.

Yet in the last half century, something has changed. These days, what we consider to be intelligence, thinking, or cognition has stretched to encompass much more behavior and extended to many more entities than anyone in the 1960s could have anticipated. Animal behaviorists study intelligence in primates, cetaceans, elephants, dogs, cats, raccoons, parrots, rodents, bumblebees, and even slime molds, and nobody now is surprised. Entire books appear on comparative intelligence across species, trying to tease out what’s uniquely human. It isn’t obvious. Our fellow creatures are pretty smart. David Krakauer, a theoretical biologist and president of the Santa Fe Institute, an independent think tank, argues that in biological systems cognition is ubiquitous, from the cell on up, from the brain on down. But so far as we can tell, no other animal seems to possess the faculties of our uniquely wired frontal lobes, the seat of planning, self-restraint, elaborated language, and symbolic cognition. As we’ll see, that competence has astounding effects.

In 2013 Dennis Tenen, a young assistant professor of English at Columbia University, presented a humanities seminar at Harvard. He suggested that maybe intelligence should be considered to reside in systems, not in individual skulls, and wasn’t hooted out of the room. However novel this idea is to humanists, it’s been central to AI for decades. Even humanists need look back no further than “Among School Children” by William Butler Yeats:

O body swayed to music, O brightening glance,

How can we know the dancer from the dance?

Feigenbaum’s explanation to me, back in 1960 (“Ah! That’s intelligent behavior”), was only an operational definition of intelligence. He hadn’t said that a computer needed to imitate human thinking processes. We simply had to recognize its behavior, or output, as what we’d call intelligent if humans did it. Of course this itself is problematic, specific to time and culture. In the 19th century, clerks and bookkeepers were paid professionals; their jobs required intelligent behavior. Without much fuss, machines have long since replaced them. Later, as programs grew more complex, Feigenbaum would add that any computer doing a task that required intelligence needed to be able to explain its line of reasoning to human satisfaction.[2] This idea, articulated early by Feigenbaum, has re-emerged compellingly as flash algorithms make decisions, sometimes of life and death, that no one can verify.

3.

More than half a century after Ed Feigenbaum’s brief 1960 definition to me, Russell and Norvig (2010) grouped AIs into four general categories:

acting humanly;

thinking humanly;

thinking rationally; and

acting rationally.

Acting humanly means the artifact can hear and speak a natural language, store what it knows or hears, use that knowledge to answer questions and draw new conclusions, adapt to new circumstances, and detect and extrapolate patterns. It might also be able to see and manipulate objects. It would know the rules of social interaction—if embodied, the right distance to stand from humans, how to conduct a conversation, when to smile, when to look solemn. No single program, no artifact extant, can do all these things now.

Thinking humanly means to model specifically human ways of thinking, drawing on (and contributing to) the latest in cognitive psychology and neuroscience. This is a way to understand human cognition, whether planning tasks, interpreting a scene, or lending a hand. With a sufficiently precise theory of some aspect of mind, a computer program can express that theory. Imitating input and output isn’t enough: the program must trace the same steps as human thought and be part of the construction and testing of theories of the human mind.

Thinking rationally is to obey the laws of thought, reason, or logic, sometimes expressed in the formal terms of logical notation. An Aristotelian ideal of thinking, thinking rationally often runs into trouble in the messy real world.

Acting rationally refers to an agent—something or someone—operating autonomously in its environment, persisting over a long time period, adapting to change, and creating and pursuing goals. A rational agent acts to achieve the best outcome (or, under uncertainty, the best expected outcome; a rational agent isn’t omniscient or perfect).

Rational agents offer two advantages. First, they can be more general than the “laws of thought” approach allows. Second, rational agents are, in the words of Russell and Norvig, “more amenable to scientific development than are approaches based on human behavior or human thought.” That’s a tactful way of saying rational agents can improve quickly. Humans take a while, and sometimes never do. These days, most AIs can be characterized as rational agents. As defined here, perhaps acting rationally and acting humanly are slowly converging in computers. But it hasn’t happened yet. Nor in humans, come to that.
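To make “best expected outcome” a little more concrete, here is a minimal sketch, purely my own toy illustration rather than anything from Russell and Norvig’s text (the names and numbers are invented): an agent weighs each possible result of an action by how probable it believes that result to be, then picks the action whose weighted average looks best.

```python
# A toy "rational agent" step: choose the action with the highest expected
# utility under the agent's uncertain beliefs. All names and numbers here
# are invented for illustration only.

def expected_utility(action, belief_state, utility, outcome_model):
    """Average the utility of each possible outcome of an action,
    weighted by how probable the agent believes that outcome to be."""
    return sum(prob * utility(outcome)
               for outcome, prob in outcome_model(action, belief_state).items())

def choose_action(belief_state, actions, utility, outcome_model):
    """A rational agent picks the best *expected* outcome, not a guaranteed one."""
    return max(actions,
               key=lambda a: expected_utility(a, belief_state, utility, outcome_model))

# Example: deciding whether to carry an umbrella when rain is merely likely.
def outcome_model(action, belief_state):
    p_rain = belief_state["p_rain"]
    if action == "take umbrella":
        return {"dry but encumbered": 1.0}
    return {"soaked": p_rain, "dry and unencumbered": 1.0 - p_rain}

utility = {"dry but encumbered": 7, "soaked": 0, "dry and unencumbered": 10}.get

print(choose_action({"p_rain": 0.6}, ["take umbrella", "leave umbrella"],
                    utility, outcome_model))
# With p_rain = 0.6, taking the umbrella scores 7.0 and leaving it scores 4.0,
# so the agent takes the umbrella.
```

The point of the sketch is only that “rational” here means doing as well as one can expect given one’s beliefs, not acting with certainty or omniscience.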

A second look at those categories, acting and thinking humanly, acting and thinking rationally, and I’m struck by how intimate this machine is. With AI, we’re creating our own doppelgängers: a new, improved version of humans, or maybe our successors. “Just tell me,” the machine seems to say to us softly, flattering as a lover, “Just tell me how you do it.” Irresistibly, we move closer. “So that I can do it too.” We back away. “Just tell me.” We move closer again, of course. We confide, impeded only by what we don’t know, our own tricks we have no conscious access to, which turn out to be deeper and more complex than we expected. I’m also struck by how often humanly appears in these four definitions. That reflects a long-held belief that intelligence is solely a human property, although, as I noted earlier, in recent decades scientists have widened the definition of intelligence to include other species.

Perhaps physicist Max Tegmark’s distinctions in Life 3.0 (2017) are useful here. He categorizes three stages of life: biological evolution, cultural evolution, and technological evolution. “Life 1.0 is unable to redesign either its hardware or its software during its lifetime: both are determined by its DNA, and change only through evolution over many generations. In contrast, Life 2.0 can redesign much of its software: humans can learn complex new skills—for example, languages, sports and professions—and can fundamentally update their world-view and goals. Life 3.0, which doesn’t yet exist on Earth, can dramatically redesign not only its software, but its hardware as well, rather than having to wait for it to gradually evolve over generations.”

What I didn’t understand until long after I’d followed my twitching nose down the AI path was that the computer would be the instrument that finally brought the Two Cultures together. AI and its principles would be central to that rapprochement. Judging by the thrashes I’d get into over the years trying to explain AI, and computing generally, to my colleagues in the First Culture, nobody else understood this either.

Such understanding dawned gradually. By the early 21st century, a field called the digital humanities had blossomed, although that’s only the most obvious sign of détente. I also mean something deeper, both intellectually and emotionally: the beginning of that enormous scientific enterprise called computational rationality that I described earlier, which aims to encompass, explain, and account for intelligence in all its guises.


  1. Decades later, I’ve heard people say that everything in the field was laid out in principle in Computers and Thought, and that the subsequent work is mere technological commentary. No. AI has evolved mightily in the sixty years since the book was published, thanks to new ideas, new techniques, and dramatically better technology. The book doesn’t include a word about robotics, for example. Certainly many fundamental ideas appeared in Computers and Thought, but they’ve developed in unforeseen ways.
  2. A 2013 Google system seems to have baffled everyone. Exposed to millions of random YouTube videos, it unerringly identified images of paper shredders, objects that, unlike cats, most humans can’t readily identify and whose features Google’s engineers were unable to classify and code with any precision. The system isn’t saying how it learned. Some decades after Feigenbaum’s declaration, others now endorse this need for explanation, and some deep-learning programs are slowly being equipped with explanatory lines of reasoning.
