7

1.

Rebuilding an academic department is time-consuming, so Joe and I began going away summers so he could do research undistracted. We spent a summer in Boulder at the National Center for Atmospheric Research, and in 1973, we were invited to Stanford for the summer. It was a joy to be back on a bike, pedaling under the benevolent Stanford sun, seeing old friends and making new ones.

At the beginning of that stay, I heard from Ed Feigenbaum that John McCarthy, who now had a pilot’s license, had made an emergency landing in his small plane at a remote Alaskan spot called, in rough translation, the Pass of Much Caribou Shit. He’d been found and rescued only by chance. I’d finished a novel called Three Rivers and was casting about for a new project. What about a novel based on these unusual people I knew in AI?

Then, lying lazily in the shade one afternoon, I thought, why write a novel? Why not write a history? It had to be a cinch—I’d interview a few key people, splice the interviews together, and presto! On July 4, 1973, I tried the idea out on Joe—an intellectual history of AI, told by those who were making that history, told while they were still alive. He loved the idea. Five days later, I tried it out on Ed Feigenbaum, who was very encouraging.

But I began to doubt. Did I have the intellectual wherewithal to do this? Ed grinned. You’ve got lots of friends who’ll help you, and besides, maybe you shouldn’t get into very detailed analyses so much as the genealogy of ideas and the personalities involved. The time was right, he went on, and the major figures accessible to me—good friends, in some cases. “I’ll get McCarthy on board,” Ed said and arranged a lunch at Stanford’s Faculty Club for me to get reacquainted with John.

2.

Although artificial intelligence hadn’t been a field for very long—John McCarthy had only named it in 1956—McCarthy had been a towering figure from the start. His office was in Polya Hall when I first met him, but he soon moved into the hills behind Stanford to oversee the Stanford Artificial Intelligence Laboratory, which he’d started in 1966. SAIL was to become legendary. McCarthy already was.

McCarthy gets credit for a host of key ideas in computing generally, and in AI in particular. One major idea is time-sharing. Computing processes are so much faster than human processes that the computer’s time can be shared simultaneously among a group of users without the human users noticing. McCarthy was the first to suggest this publicly—obvious, once put into words—and then designed and helped implement several systems, first at MIT, where he’d been earlier, and then at Stanford. But implementation was hard work. Daunting technical problems had to be solved, and, as several people have told me, in the age of 24-hour turnaround for results (me toting the boxes of punched cards, remember?) the idea was either incomprehensible to many, or bitterly opposed by them.

Time-sharing would change the way people interacted with computers, and how they interacted with each other via the computer. The machines, faster than us even then, were speeding up continuously. Time-sharing is still fundamental to the Internet’s operation and underlies everything from the World Wide Web to cell phones, servers, and cloud computing.

John McCarthy had dreamed up a workable time-sharing system simply because he wanted to pursue his AI research more effectively. In Machines Who Think, I wrote about how he and Marvin Minsky organized the embryonic Dartmouth Conference in the summer of 1956, where the first serious practitioners of the art met to talk about the possibilities of intelligent machines.

McCarthy strongly believed that human-level intelligence in a computer might be achieved by using mathematical logic, both as a language for representing the knowledge that an intelligent machine should have and as a means for reasoning with that knowledge.

In that, his beliefs fell unequivocally into that category of AI known as thinking rationally, employing the “laws of thought,” embodied in formal logic and mathematics. Knowledge representation was a hazy idea at the time; only later did other AI researchers realize that it was fundamental to intelligent behavior, and, as McCarthy was the first to argue, it needed to be explicit.[1] Knowledge representation would bedevil AI and demand refinement for decades, the neats versus the scruffies (the mathematical versus the nonmathematical), until, in the early part of the 21st century, it would become one structural element in a tentative bridge between the Two Cultures.

McCarthy’s concept, that thinking involves both knowledge and reasoning, led to his invention of a programming language called Lisp, not the first list-processing language, but certainly the first to be generally useful. List processing is a way of manipulating lists of lists, convenient for representing and acting on the attributes of entities, whether objects or actions, in, for example, tree-like fashion. An earlier list-processing language, IPL-V (Information Processing Language 5), designed by Newell, Shaw, Simon, and Feigenbaum, seemed to McCarthy a great idea, cumbersomely realized. He was proved correct by how widely accepted and long-lived Lisp became. In addition to his work in AI, McCarthy also made fundamental contributions to the mathematical theory of computation.
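
To make the idea concrete, here is a minimal sketch, written in present-day Python rather than Lisp, of what “lists of lists” can do: nested lists stand in for a small tree of attributes, and a short recursive function walks the tree to act on them. The entity and its attributes here are hypothetical, chosen only for illustration.

    # A toy "list of lists": a tree of attributes for a hypothetical robot arm.
    # Each node is [name, child, child, ...]; leaves are plain strings.
    robot = [
        "robot",
        ["arm", ["joints", "shoulder", "elbow", "wrist"], ["grip", "two-finger"]],
        ["vision", ["camera", "monochrome"]],
    ]

    def leaves(node):
        """Recursively collect the leaf attributes of a nested-list tree."""
        if isinstance(node, str):
            return [node]
        collected = []
        for child in node[1:]:  # node[0] is the node's own name
            collected.extend(leaves(child))
        return collected

    print(leaves(robot))
    # ['shoulder', 'elbow', 'wrist', 'two-finger', 'monochrome']

Lisp itself goes further: programs, like data, are written as nested lists, one reason the language has proved so durable.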

In 1971, in recognition of all this, he was honored with the Turing Award, computing’s Nobel. He’d go on to win many other international awards and be inducted into both the National Academy of Engineering and the National Academy of Sciences.

3.

But at this 1973 lunch, when I was proposing to write a history of AI, I knew him best as the founder of SAIL. It was McCarthy’s brainchild and would be the incubator of some of the most famous names, techniques, and programs in the history, not just of AI, but of computing generally.

When I’d worked at Stanford, SAIL was a diverting place to visit. Perched on a grassy golden hillside, it had views in clear weather of peaks as far away as Mt. Tamalpais in Marin County, Mt. Diablo in Contra Costa County, and Mt. Hamilton outside San Jose. I’d drive up the winding approach road and smile to see signs warning me to beware of robot cars. An OK Corral moment between my VW Beetle and a mobile robot? I might’ve stopped to play a game of volleyball from time to time.

Inside the building, the computers, monoliths in the center of a great open room, had been named by the graduate students: Gandalf, Frodo, Bilbo, Gollum, maybe even Sauron. Because we were all taken up with J. R. R. Tolkien’s Lord of the Rings, everyone’s office doors had Middle Earth numbers, rendered by computer in elegant Elvish script, itself a graphics first (not because it was Elvish, but because it was script).

A freestanding robotic arm stood in one area, rather more free than it needed to be; it was eventually confined behind a Plexiglas barrier to keep it from whacking innocent passersby.

SAIL would be the prototype Silicon Valley organization, with its free-and-easy, 24-hours-a-day atmosphere. But it also laid the foundation of another Silicon Valley tradition. Genius though John McCarthy was—not a word I use lightly—he had little patience for worldly details. He knew enough to hire another gifted scientist as deputy director to make the trains at SAIL run on time. This was Lester Earnest, who was comfortable with the whimsy and the hot tubs, but knew how to set clear goals, get graduate students to meet them, and produce results.

Over the years, results surged forth. SAIL’s students and alumni made significant contributions to robotics, to computer miniaturization (laptops and smartphones), to graphical user interfaces (what appears on the screens of those devices), to voice recognition and understanding (Siri, Alexa, and the voices of airline reservation systems or prescription drug ordering). Spell-checkers and inexpensive laser printers came out of SAIL, and video games went from simplicity to complexity. The technical contributions make an even lengthier list.

When I’d first known McCarthy in the mid-1960s, he had an intense sense of social justice, maybe its genesis his politically radical parents. McCarthy taught himself Russian and, beginning in 1965, visited and taught in the Soviet Union. Later he championed freedom of expression for Soviet dissidents, pressuring the Soviet government to allow them to travel more freely. It’s said that for a particular dissident, he illegally brought a fax machine and a copier into the Soviet Union.

Around 1966, he became active in anti-Vietnam War protests, and one day came into my office at Stanford to ask me to sign a pledge, for publication in the Stanford Daily. It declared that the Vietnam War was so reprehensible and contrary to all that the United States stood for, that we signatories would gladly receive, shelter, and otherwise aid any deserters from the armed forces. I believed the war was reprehensible, probably criminal, but I was worried about the risks I might run by making such a pledge. I hesitated.

McCarthy waited, giving off a silent, indisputable righteousness.

I signed.

Just as I was about to go to New York City the first time, my marriage over, I met John McCarthy at a party at Ed Feigenbaum’s. He followed me out and invited me for a cup of coffee. We found ourselves in a garishly lighted coffee shop on El Camino Real, all turquoise surfaces and orange light fixtures, and made small talk, which neither of us was very good at. To be with McCarthy for a few moments was to be awed, even made uneasy, by his intensity. He had a bone-deep commitment to the austerity and correctness of formal logic. Relaxing his standards to write about AI without using theorems was genuinely painful for him. For one thing, it implied that humans themselves didn’t conform to that austere logic. All that made me worry that McCarthy had relaxed his standards just to be sociable with me.

We might have talked about pop music, which we both loved. Sometime just before this, I’d stood in line with a friend at San Francisco’s Winterland Ballroom, waiting for the doors to open for a Janis Joplin show. McCarthy wandered past. We chatted for a moment, and I asked him if he’d like to step in front of us into what was getting to be a longer line by the minute. “No,” he said politely, “that wouldn’t be right.” He made his way to the back of the line.

McCarthy went to Czechoslovakia in November 1968, just after the Soviet invasion and repression, and wrote a chatty letter to his deputy, Lester Earnest, giving him not only clear-eyed descriptions of the physical and political results of the invasion and quick evaluations of the scientific groups he visited, but also an account of the music he was hearing—all of it American. When he traveled on to Austria, he wrote, he was delighted to hear West Coast music, which he hadn’t heard in Czechoslovakia: Blue Cheer, Jefferson Airplane, plus the Beatles, Otis Redding, and Wilson Pickett.

So maybe he and I talked about music across the table in our El Camino coffee shop, me staring at his Struwwelpeter hair, his beard a horticultural wonder. His long dip into the counterculture much amused him. “Most of the people there have ambitions to put together a ‘key,’ a kilo of pot, the better to set themselves up in business. They’re exactly the capitalists they’re railing against,” he laughed.

After a while I said, “You are considered—odd.”

He gazed at me in silence, and I was sure I’d offended him. Finally he said, “No. I’m just shy.”

I had no words for that disarming self-disclosure. I was deeply ashamed that I, who’d suffered two, maybe three, long periods of paralyzing silent shyness in my life, hadn’t recognized this.

4.

When John McCarthy met Ed Feigenbaum and me for lunch on a summer midday in July 1973 at the Stanford Faculty Club, he looked superb—all clear-eyed and pink-cheeked, not at all bothered by his recent scrape in the Alaskan wilderness. His hair was still fairly long (but ruly, I noted in my journal), a leftover from his fling with the counterculture. His beard, showing the first streaks of gray, was now trimmed, though square-shaped, because he kept tugging on its corners as he spoke.

At first he discouraged me. It was too soon to be writing any histories of AI, he said, the major ideas had not yet emerged. And anyway, why didn’t I write about—John had some arcane mathematical project he thought would make a better book. Over the iced tea, I shook my head. “I’m not a woman in search of a project,” I said, with more confidence than I felt. “I want to do the history—so far—of artificial intelligence.”

Ed excused himself, but John and I spoke for nearly four hours that afternoon. Story after story poured out of him, all amusing, all incisive. In my journal, I noted:

It’s a pleasure to hear him talk. He brings a wonderful intelligent optimism to life, as if people really can be persuaded to do what’s best for them if only it’s approached right. He’s completely at home with technology, and wonders at the prejudice so many people have against it. If you mention a technological headache, he’ll reply firmly, “But there is a technological solution…” Not only is he infectious, but egad, what a delightful change from all the congenitally gloomy people I’ve been around. Though it drives his colleagues up the wall, I find his sense of play delightful. He seems to be homo ludens in the best sense. Joe tells me that Newell and Simon are often perturbed because two of the elder statesmen in AI, McCarthy and Minsky, refuse to behave in a “responsible” manner. I don’t know enough about Minsky, but I say, viva John!

At the end of those delightful, stimulating four hours (the patient staff at the Faculty Club apparently resigned to such marathon conversations), McCarthy said, “Well, mutter mutter, it’s your time.” “But John,” I countered, “you’re your own best argument for someone doing this. I’m much more enthusiastic about it now than before we talked.”

It was true. Still, McCarthy must never have liked the book. Much later, when he was asked at a grand anniversary celebration of SAIL why computer scientists didn’t write their own histories, he said that wasn’t their job, but added that the histories that existed weren’t very good. I hope he meant only that Machines Who Think was by then way out of date.

That summer afternoon, McCarthy shrugged and agreed to cooperate. With Feigenbaum and McCarthy cooperating, and Newell, Simon, and Reddy at Carnegie willing to go along, if only to keep me busy so I exercised no pull on Joe to take him away from Pittsburgh, I needed to get the people at MIT to agree.

I don’t know now who made that connection for me, but the connection was made, and I’d go to Cambridge, Massachusetts often to interview them all—Marvin Minsky, Seymour Papert, Ed Fredkin, Joseph Weizenbaum, Joel Moses, and even the reclusive Claude Shannon, the founder of information theory, who’d retired to his fine old Victorian house in Somerville, Massachusetts.

In retrospect, the kindness, generosity, open-heartedness, and candor of each man I interviewed astounds me. Most of them were at a peak of their research careers, in hot pursuit of the next discovery. They had pressing work to do, not only for the research they were conducting, the graduate students they were overseeing, and the undergraduate classroom teaching they owed their universities, but also for fund-raising, that punishing academic treadmill. Computers were costly; robotic equipment was dear; and the money necessary to pursue AI was, by the standards of the time, colossal. Yet they opened their minds to me for hours, struggling patiently with the elementary questions I asked and with questions nobody had ever asked them before. I was humbled by that. I still am.

5.

Before my eyes, the book laid itself out for me. Western literature has a long rich tradition of imagining intelligence outside the human cranium, beginning with Homer (robotic attendants assist in the forge of the god Hephaestus, who limps badly; they show up as party help and power the ships of Odysseus around the Aegean); the sages of the European Middle Ages and their brazen heads (both sign and source of their worldly wisdom); mad Paracelsus and his homunculus; Joseph Golem, spying on the gentiles of Prague; Dr. Frankenstein’s canonical monster; the robots of R.U.R. The endearing and menacing robots of that worldwide phenomenon Star Wars appeared while I was writing, and robots have become a staple of TV, movies, and video games. As I did my research, I discovered many early quasi-scientific attempts to create intelligence outside the human cranium, so it wasn’t just dream-weavers who sang this song.

I’d use that historical framework to make two major points. First, the urge to create intelligence outside the human cranium is an enduring human impulse, with mythical examples across the ages and cultures. Second, this impulse was finally coming to realization in a scientific field called artificial intelligence.

Two attitudes to all this prevailed side by side, I’d say. The first was a general delight in AI, what I called the Hellenic view, because Homer’s robots had been welcomed and useful to the Olympians. The other I called the Hebraic view, the fear and sense of sacrilege these creations engendered, rooted in the commandment forbidding graven images.[2] That fear and sense of sacrilege moved through literature (and life, as it would turn out) in examples of such creations gone rogue—Joseph Golem, Frankenstein’s nameless monster, the Sorcerer’s Apprentice, and on and on.

Such a thrilling narrative must seize my English Department colleagues at the University of Pittsburgh. How could they resist? Literature brought to life. It would captivate any intelligent reader.

And what did I think about AI myself? I was agnostic. I simply couldn’t judge the scientific importance of what was underway. But surrounded by the most intellectually exciting human beings I’d ever known, I easily gave them the benefit of the doubt.

The optimism about all this I shared with the AI people was impermeable. We didn’t know when, but AI would inevitably come (art is long, life is short, success was very far off). Its arrival would be an excellent thing, for as I’ve said, pursuing more intelligence was like pursuing more virtue. It had everything to recommend it. Our only adversaries then were the scoffers and the skeptics: this can never be done. How I failed to see that this was a human endeavor and would bring in its train all the flaws that have persistently bedeviled so much of human behavior, I can’t explain.

I failed to see what else lurked in the shadows.

Perhaps worse, I failed to know my own subconscious wish for AI, what its success might virtuously bring about. That wish was buried and wouldn’t surface for another three decades, to be deeply disappointed by AI’s flaws even as its influence dramatically ascended in the first two decades of the 21st century.


  1. Early 20th century art explicitly queried representation, too. Examples include Picasso and Braque with cubism; Magritte’s surrealism (Ceci n’est pas une pipe); Duchamp’s Readymades; and James Joyce’s Ulysses. But it wouldn’t be until the introduction of computation into humanities scholarship that those scholars would have reason to query the precision of their own modes of representation. Chapter 26 has more about this topic.
  2. For years, I forgot I borrowed these terms from Matthew Arnold’s Culture and Anarchy, which I read in my freshman year of college.

