8

1.

Before I could begin to write, I embarked on a crash course through the AI literature, beginning with Computers and Thought (1963), whose substance I’d barely understood when I worked on it. Then I picked up Herbert A. Simon’s brilliant little classic, The Sciences of the Artificial (1969). Both were lucky choices, for each laid out relatively simple principles that later would be richly elaborated in the field’s research.

In The Sciences of the Artificial, Simon argued that artificial phenomena—whether a business executive’s behavior, the fluctuations of economic systems, or the way individual psychology shapes people’s thinking, anything that wasn’t physics or biology—deserved empirical scientific attention as surely as natural phenomena did.[1] “Artificiality is interesting principally when it concerns complex systems that live in complex environments. The topics of artificiality and complexity are inextricably interwoven” (1981).

We live for the most part in a human-made environment, he went on, its most significant element those strings of artifacts called symbols, which we exchange in language, mouth to ear, hand to eye, all a consequence of our collective artifice.

Simon also argued that complex behavior appears in response to a deeply complex environment—an ant finds its way over the ground responding to its environment, not because it has any grand plan to get from here to there. But the ant has a goal and knows through trial and error when it has reached that goal. This image is another version of Simon’s longtime preoccupation with the maze, which I’ll say more about in the next chapter. That complex behavior appears as a response to a deeply complex environment is a potent idea, and emerges in many different ways in AI.

Two profound principles were explicit here. First, complexity arises out of simplicity. Simplicity can be a single neuron that, with its fellow neurons, eventually produces great complexity of thought. Simplicity can be a zero-one state of a computer register, which, together with its fellow zero-one “cells,” leads to great complexity of thought. In each case the result is intelligent behavior, our most human characteristic. Second, our rich human languages are artifacts: we invented them and elaborate upon them daily.

Computers, Simon continued, are empirical objects, capable of storing symbols, acting on them, copying them, erasing them, comparing them. This happens to match much of what the human nervous system does with symbols, which therefore suggests that some parts of human cognition can be modeled, or simulated, on a computer. Intelligence is the work of a symbol system. It can be enacted in the human brain or in a computer.
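
To make that abstraction a little more concrete, here is a minimal sketch in Python—my own toy, nothing like Newell and Simon’s actual programs, with hypothetical symbols p, q, and r—of what “storing, copying, comparing, and acting on symbols” can mean in code: a handful of symbolic facts and a single inference rule, applied until nothing new follows.

```python
# A toy illustration of a symbol system -- my own sketch, not the Logic Theorist.
# Symbols are stored, copied, compared, and acted upon by one rule (modus ponens).

facts = {("implies", "p", "q"), ("implies", "q", "r"), "p"}  # stored symbol structures

def apply_rule(known):
    """Compare stored symbols and add whatever the one rule licenses."""
    derived = set(known)                      # copy the symbols
    for f in known:
        if isinstance(f, tuple) and f[0] == "implies" and f[1] in known:
            derived.add(f[2])                 # act on them: add the consequent
    return derived

while True:
    new_facts = apply_rule(facts)
    if new_facts == facts:                    # nothing new follows; stop
        break
    facts = new_facts

print(facts)  # now includes "q" and "r", reached by symbol manipulation alone
```

Nothing in that loop cares whether the symbols stand for logic, chess positions, or words; that indifference to the medium and the meaning of the symbols is exactly Simon’s point.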

Simon’s argument foreshadowed an idea that would eventually become a given in the sciences of the mind: intelligence arises not from the medium that embodies it—whether flesh and blood, or electronic components—but from the way interactions among the system’s elements are arranged. Sixty or more years later, this idea would come to be called computational rationality, encompassing intelligence in brains, minds, and machines.

2.

I needed to go back to the beginning of AI, whenever that was. In the first interview I conducted for my book, Ed Feigenbaum told me a story I recounted verbatim in Machines Who Think (1979):

I was an undergraduate senior, but I was taking a graduate course over in GSIA (the Graduate School of Industrial Administration at Carnegie Tech) from Herb Simon called Mathematical Models in the Social Sciences. It was just after Christmas vacation—January 1956—when Herb Simon came into the classroom and said, “Over Christmas, Allen Newell and I invented a thinking machine.” And we all looked blank. We sort of knew what he meant by thinking, but we didn’t know. We kind of had an idea of what machines were like. But the words thinking and machine didn’t quite fit together, didn’t quite make sense. And so we said, “Well, what do you mean by a thinking machine? And in particular, what do you mean by a machine?” In response to that, he put down on the table a bunch of IBM 701 manuals and said, “Here, take this home and read it and you’ll find out what I mean by a machine.” Carnegie Tech didn’t have a 701, but RAND did. So we went home and read the manual—I sort of read it straight through like a good novel. And that was my introduction to computers.[2]

Over the Christmas holidays, Allen Newell and Herb Simon had invented a thinking machine. It was one of those moments so modest in the telling that the world would be unaware of the momentous changes to come, like the instant a single-celled creature added a cell or two and became multicellular; or the first hominid stood up on her hind legs, the better to survey the plain; or when some early 20th-century physicists proved to themselves, if not yet to anyone else, that the physical world was not what it seemed.

Speculation about thinking machines, I discovered, had persisted throughout history and cultures—the early Egyptians, the Greeks, the early Chinese, the Japanese. By the 19th century, scientists and poets shared such speculations, especially scientists who longed to create something real and practical. The record says scientists were nearly all driven by no-nonsense goals. For instance, the brilliant English inventor Charles Babbage wanted to calculate by machine (and automate printing—a source of many errors) tables that were essential to navigation and ballistics, tables whose production had, until then, been the task of bored and overeducated clergymen, marooned on the bleak English moors and fens. “We shall substitute brass for brain!” thundered Lord Kelvin, as he oversaw construction of a machine to calculate the elementary constituents of tidal rise and fall (McCorduck, 1979).

But I wonder. When Babbage ran out of money to build his Analytical Engine, he and Ada Lovelace, his associate, thought to raise funds by building a machine to play tic-tac-toe or chess. I don’t know if that was before or after their proposed system to play the ponies.

These deeply perceptive people must have known their machines offered something well beyond the calculation of tidal tables. If these 19th-century pioneers left no written record of that insight, there were sensible, even compelling, reasons to keep quiet.

The superiority of machines over muscle was the core of the industrial revolution, transforming life dramatically at the very time these Victorian pioneers were speculating about machine intelligence. Might machines, strong and tireless, excel at thinking, too? Such a fear was—and is—fundamental.

3.

While I was on a crash course through AI’s scientific literature in 1973 and 1974, I was also trying to raise money to support my new project. I needed funds for travel and to pay a transcriber of the interview tapes I was making.

I begged from every source I could think of. The Office of Naval Research granted me a meeting, and allowed as how it was an interesting project and certainly should be done. They’d think about it.

The National Science Foundation said essentially the same thing but added with a frown: didn’t I realize I wasn’t a trained historian of science? Merely a writer? Surely the trained historians of science would be crawling all over the place, I replied. A new scientific field aborning: what could enthrall a science historian more? But I planned to write a book for the general reader, not for other historians of science.

I may have written my first in a series of unsuccessful proposals to the Guggenheim Foundation.

I approached a program in the National Endowment for the Humanities whose explicit purpose was to support projects that married the sciences with the humanities. Here I was, a faculty member in a university English department, ready to write about this entrancing new science, whose genesis lay in some of the dearest and most persistent myths and legends of world culture. The National Endowment for the Humanities couldn’t say no fast enough. Its reaction was a harbinger, but I didn’t know it. I knew only that I failed to please, and that was that.

And then, as if by magic, money appeared. It seemed to come from Allen Newell, maybe Raj Reddy, but they said no. It was from somebody at MIT with a great interest in the project, who wished to remain anonymous. After Machines Who Think was published, I discovered that my anonymous benefactor was Ed Fredkin, then at MIT, whose private foundation sponsored oddball projects and had decided to sponsor mine. I am ever deeply grateful. Without that help, the book wouldn’t have been written.

The experience made me skeptical of the whole grant proposal process. No matter who was making the decisions, judgments seemed arbitrary and timid. Many years later, I asked an historian of science, now studying scientists in AI, why I hadn’t encountered any of her colleagues at the dawn. “Oh,” she said, a little embarrassed, “we weren’t sure it would turn out to be important.”

So I had some money and was on my way. I only needed to decide how to tell this fabulous contemporary tale. I’d learn much in the course of writing the book, but more important, I enriched my life by getting to know better the geniuses who, with their own intelligence and sense of adventure, had invented thinking machines. Not only were their passports mostly American, but they were in the American grain: optimistic, inventive, pragmatic, plainspoken, and up for fun.

4.

One of them was Herb Simon.

Over Christmas, Allen Newell and I invented a thinking machine.

When I told Simon himself this story I’d heard from Ed Feigenbaum, he laughed disbelievingly. “Did I say that?”

In 1971, when we first met, Simon was 55, still brown-haired, though gray was beginning to show at his temples. As a rule, his visage was grim, almost truculent. His face seemed to rest in a distrustful snarl, and his small brown eyes looked out skeptically from under low eyebrows. It was an astonishing masquerade for a man who liked to laugh as much as Simon did, a man who found delighted wonder in everything, whether it was the grackles in the pine trees beside my Pittsburgh house or the endless entertainment of doing science. As I listen to the recordings of our interviews, they’re often punctuated by loud laughter, for he was teaching me, and he loved to teach. It was also the laughter of two flirts, practicing their art shamelessly on each other.

After the Christmas holidays of 1955-56, Simon, Newell, and their longtime RAND collaborator, Cliff Shaw, didn’t yet have a running program on the computer. But they knew how to organize the process, and Simon had recruited his family to enact how it would work—okay, step forward, Dot, and Peter, you fork left over to Barbara. The exercise told them that the program they envisioned, called the Logic Theorist, which would prove theorems in Alfred North Whitehead and Bertrand Russell’s Principia Mathematica, could be done, given much coding. After that watershed December result, Simon claimed it was all over but the shouting.

It wasn’t, of course. The coding was arduous and prone to bugs. Many late-night, long-distance, budget-busting phone calls took place among Simon, Newell, and Shaw. Newell said of Shaw, who at the time was in Santa Monica executing the team’s ideas on RAND’s Johnniac,[3] that he was “not just a programmer, but a real computer scientist in some sense that neither Herb nor I were.”

In Simon’s 1991 autobiography, Models of My Life, he wrote that it was 1954 when he and Newell seized the opportunity to use the computer as a general processor for symbols (hence for thoughts) rather than just a speedy engine for arithmetic. Neither of them was ever interested in numerical computing. As I’ve said, Charles Babbage and Ada Lovelace saw this possibility in the mid-19th century; Alan Turing and Konrad Zuse saw it at the end of the 1930s.[4] At the 1948 Hixon Symposium at Caltech, John McCarthy, then an undergraduate in the audience, heard comparisons between the brain and the computer that determined the rest of his research career: he knew he wanted to design machines that could think. In 1950, Marvin Minsky, still an undergraduate at Harvard but under the influence of psychologist Warren McCulloch and mathematician Walter Pitts at MIT, nurtured the same ambition. Thinking machines were in the air.

With access to one of the best computers of the time, RAND Corporation’s Johnniac, Simon and Newell had the means to realize this elusive, longstanding, maybe hubristic ambition: a thinking machine.


  1. This assertion would have gone without saying during the Enlightenment, when thinkers aspired to be “the Newton of the moral sciences.” The subsequent Romantic period insisted on a division between the natural sciences and knowledge about humans, or “the humanities.” These days, Nobel Prizes are awarded for knowledge about humans, both biological and psychological. Moreover, the sciences of complexity have been the central theme of the independent think tank, the Santa Fe Institute, for more than a quarter century. The term computer science means the study of the phenomena surrounding the digital computer, which is full of surprises every day. But as Simon was writing, these were novel notions.
  2. The IBM 701 was a vacuum tube machine with a total memory of 2,048 words of 36 bits each (a 36-bit word amounts to roughly four and a half of today’s eight-bit bytes). An addition took five 12-microsecond machine cycles, or 60 microseconds; a multiplication or division took 456 microseconds. The desktop I write this on has 4 gigabytes of memory, or 4,000,000,000 bytes, and works, well, fast enough for me. Iliano Cervesato, a teaching professor of computer science at Carnegie Mellon, notes that the average smartphone has more computing power than the entire world had in the 1970s, carries out more tasks than were imaginable just a few years ago, and is so cheap that half the earth’s population can afford one. Iliano Cervesato, “Thought Piece,” Welcome to the <source> of it all: A Symposium on the Fiftieth Anniversary of the Carnegie Mellon Computer Science Department, 2015.
  3. Johnniac was named for John von Neumann. The Maniac I, built at Los Alamos, was a copy of his machine at the Institute for Advanced Study in Princeton. Maniac was purported to be an acronym for Mathematical Analyzer, Numerical Integrator, and Computer. David E. Shaw’s Non-Von was a later computer architecture, so named because it departed from the von Neumann design.
  4. Newell said that neither Babbage nor Turing had influenced him and Simon. He didn’t know much about either of these early forays into AI. No, what was to be done was just so obvious.
