1

1.

Stanford University’s climate is so benign that one luminous day dissolves into another, almost without distinction. On a radiant day in 1965—spring? summer? autumn?—a world-famous but jet-lagged and weary Soviet computer scientist visited the Stanford computer science department. He was Andrei Yershov, and I was asked to show him around.

He particularly wanted to see something called the Doctor program in action, one of the first-ever interactive computer programs. Doctor’s home was the Stanford Artificial Intelligence Laboratory, a few miles away, but graduate students had rigged a remote hook-up on the second floor of Polya Hall on the main Stanford campus, which terminated in a teletype machine. This was an ordinary keyboard on a stand, used to transmit and receive text messages. You typed your message to transmit it; in receiving, the keys depressed, letter by letter, with ghostly autonomy. It had no screen: the exchange was recorded by physical strikes on a continuous roll of paper.

Yershov sat down at the teletype, and the graduate students and I gathered around him to watch.

A loud electric hum suddenly filled the room; the teletype clattered to life, and banged out a message:

DOCTOR: This is the doctor. Good afternoon. Please type your name.

The visitor did, and the Doctor program warmed up with some polite preliminaries. Then:

DOCTOR: What brought you here to see me today?

VISITOR: Oh, nothing much. I’m feeling a bit tired, that’s all.

DOCTOR: Why do you think you’re feeling a bit tired?

VISITOR: Well, I’ve been traveling a lot, and away from home.

DOCTOR: Tell me about your family.

The exchange suddenly turned intimate. Our visitor knew perfectly well that a computer, not a human being, was at the other end. Nevertheless, he began to disclose his worries about his wife, his children, his distance from them—both geographical and emotional. We watched in painful embarrassment, trying hard not to stare, yet mesmerized. Something about that impersonal machine had evoked a response from the visitor forbidden by the norms of polite human conduct. If a sophisticated computer scientist could succumb to such a revealing conversation, insensible to the spectators over his shoulder, what would a less sophisticated person make of it?
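The mechanics behind such an exchange were surprisingly simple: programs of the Doctor’s kind spotted a keyword in what you typed, swapped the pronouns, and handed your own phrase back to you as a question. The sketch below is emphatically not the Doctor program itself (its patterns and canned responses are invented for illustration), but a few lines of modern Python are enough to show the kind of keyword-and-reflection trick involved.

```python
# A minimal, illustrative sketch (not the original Doctor code) of the
# keyword-spotting and pronoun-reflection trick such programs relied on.
# The patterns and canned responses below are invented for this example.
import random
import re

# Swap first- and second-person words so a captured phrase can be echoed back.
REFLECTIONS = {
    "i": "you", "i'm": "you're", "my": "your", "me": "you",
    "am": "are", "you": "i", "your": "my",
}

# (pattern, response templates); {0} is filled with the reflected match.
RULES = [
    (r"i'?m (.*)", ["Why do you think you're {0}?"]),
    (r"i'?ve been (.*)", ["How long have you been {0}?"]),
    (r".*\b(family|wife|children|home)\b.*", ["Tell me about your family."]),
    (r".*", ["Please go on.", "What does that suggest to you?"]),
]

def reflect(phrase):
    """Rewrite a captured phrase from the speaker's point of view to the listener's."""
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.split())

def respond(statement):
    """Build a response from the first rule whose pattern matches the input."""
    text = statement.lower().strip().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            template = random.choice(templates)
            groups = match.groups()
            return template.format(reflect(groups[0])) if groups else template

print(respond("I'm feeling a bit tired, that's all"))
# -> Why do you think you're feeling a bit tired, that's all?
```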

By now I’d been around the field of artificial intelligence some five years, since 1960. But in that moment, something changed for me. This wasn’t chess and checkers; this wasn’t proving theorems or solving puzzles. It wasn’t any of the other abstract tasks to which AI had been applied. This was a connection between two minds, one human, one—Other. It was uncanny. Unreal. Yet altogether real. Explicable. Yet not.

Some epiphanies are evanescent. They burst upon you, then hide for years. So it was here. This epiphany would come back every so often in its revelation of a machine’s mind. I never thought it was wicked. I didn’t think it was inappropriate. But I thought about it. I’d think about it the rest of my life.

On that sunny afternoon, I couldn’t know that, one way or another, AI would preoccupy me for decades. That I was poised for a journey, leading from a place of friendly curiosity all the way to conviction, sometimes moving past cartoon versions of the field (and of me), ducking best-selling jeremiads and scathing mockery about it all. At times the path seemed to lead me through the looking glass to an odd and disquieting world, a place that turned everything I thought I knew upside down, inside out. But I’d emerge at last to a place of relatively serene and optimistic understanding, along with some grave misgivings.

Each of us will take this journey in the future. This book is an invitation to follow me. Maybe the path is easier, knowing someone has stumbled along it before. And I urge readers impatient with the technical details to skip ahead to the next place where the personal emerges again.

Though much sound and fury has swirled around whether machines really think (the way humans do) or are only faking it, that tired dispute bores me. An old behaviorist trope holds that although birds and airplanes don’t operate on exactly the same principles, both birds and airplanes fly. Actually, the same principles of aerodynamic lift underlie them both. I feel the same way about thinking brains and thinking computers. Unless you’re a cognitive scientist, devoted specifically to modeling human cognitive behavior, why get exercised about the authenticity of thinking? Not, disputants insist, because human cognition marks our superiority as a species—though the passion and fury people bring to the topic make me suspect that this is exactly what they’re defending. No, they say, if the machines don’t think like us, then how can they share our values? That could be. But many humans wouldn’t recognize “our” values either. I’m not sure I recognize yours.

2.

Nearly half a century after that Stanford afternoon, on Valentine’s Day 2011, and for two distinctly cold nights following, I watched the old gladiators of human and machine intelligence clash again. Their coliseum was the TV quiz show Jeopardy! On the show, contestants must quickly summon great stores of trivia; figure out riddles, puns, and jokes; and interpret ambiguous statements. Representing human intelligence were Ken Jennings and Brad Rutter, the two best players the game had ever had. The two humans stood at podiums, and between them was a big blue (of course) box, the avatar of Watson, a computer program designed by a team at IBM. Its logo reminded me of Keith Haring’s “Radiant Baby.”

Who would triumph, man or machine?

As I watched, that 1965 afternoon at Stanford came back, Yershov and the wheezing, clattering teletype. How far things had come. A small host of benign spirits crowded into my Manhattan living room, men I’d known over half a century, whose work helped to bring all this about: Herb Simon, Allen Newell, John McCarthy, and Marvin Minsky. They were founding fathers of artificial intelligence, American geniuses all of them.

The first evening, responding in a not-quite-human voice,[1] Watson barely pulled ahead of its human competition. The second evening, the humans might’ve come back, but no, this time Watson-the-machine pulverized its human competition. The third evening was a mop-up—Watson grabbed three points for every one point of its nearest human competitor. The spirits surrounding me smiled, and so did I.

AI’s accomplishments were suddenly public and heady: now came nearly accident-free self-driving cars (and their hacker perils); phones that unlock by recognizing your face and respond to your spoken instructions; applications that began to transform entire fields: law, finance, medicine, science, engineering, entertainment. And yes, spying. Daily, the applications continue to cascade out.

A grand spiral nebula of the sciences, statistics, mathematics, logic, and dazzling engineering has swirled together to create modern artificial intelligence.

Over the next few years, IBM’s Watson marched on: into medical research and clinical applications, business and financial applications. Watson was poised to answer questions you hadn’t yet asked. In 2015, Watson held the red carpet at the Tribeca Film Festival in New York City, ready to be your partner as screenwriter or designer. In 2017, the program did a few weeks as an art guide in the Pinacoteca of São Paulo, Brazil, answering direct questions from visitors about individual paintings. No one would guess what a fraught relationship IBM once had with AI, but I remember, and smiled again. Google, moving in other directions, promotes the self-driving car and machine-reads and digests tons of text and images; in London, DeepMind, another subsidiary of Google’s parent company, Alphabet, has produced both the chess champion and the Go champion of the world, the latter a feat long considered impossible. DeepMind’s program, first called AlphaGo and later AlphaZero (the “zero” because it started knowing nothing but the rules of the game), developed its skills by playing against itself, not by taking instruction from humans. Therefore it could be said to understand the game, and unlike previous game-playing programs, it didn’t rely on brute force. A game of Go has more possible positions than there are atoms in the universe; according to a January 2016 blog post by Demis Hassabis, the co-founder and CEO of DeepMind, such a program will lead the way to general, as opposed to specialized, artificial intelligence. Maybe.

About AlphaZero as chess champion, mathematician Steven Strogatz wrote: “Most unnerving was that AlphaZero seemed to express insight. It played like no computer ever has, intuitively and beautifully, with a romantic, attacking style. It played gambits and took risks…. Grandmasters had never seen anything like it. AlphaZero had the finesse of a virtuoso and the power of a machine. It was humankind’s first glimpse of an awesome new kind of intelligence” (Strogatz, 2018). James Somers sensitively describes the reactions of the human champions to AlphaZero: first sadness and depression, at last acceptance. “The algorithm behind AlphaZero can be generalized to any two-person, zero-sum game of perfect information (that is, a game in which no hidden elements exist, such as face-down cards in poker)…. At its core was an algorithm so powerful that you could give it the rules of humanity’s richest and most studied games and, later that day, it would become the best player there has ever been. Perhaps more surprising, this iteration of the system was also by far the simplest” (Somers, 2018). AlphaZero has since made inroads on multi-person games.

As the second decade of the 21st century came to an end, artificial intelligence was on every editor’s uneasy mind, and the phrase sprang up daily in broadcasts, newspapers, journals, and blogs. Within one week in May 2018, Carnegie Mellon announced the creation of the first undergraduate degree with a major in artificial intelligence[2]; the White House held a summit meeting on the future of AI with representatives of major corporations[3]; The New York Times quoted Sundar Pichai, Google’s chief executive, who said that advances in artificial intelligence had pushed Google to be more reflective about its corporate responsibilities[4]; and The New Yorker ran an article entitled “Superior Intelligence: Do the perils of A.I. exceed its promise?”[5] In 2019 The New York Times devoted an entire section to the ethics of AI.[6]

Suddenly, the idea of the Singularity—that strangely Oedipal crossroad when machines become smarter than their creators—began to seem like something more than science fiction.[7]

Larry Birnbaum, a professor of computer science at Northwestern, calls it “living in the exponential.” An exponential curve seems to move gradually at first. Then it begins to climb more steeply. And climb ever more steeply. We all now live in the exponential of artificial intelligence.

3.

For sixty years, I’ve lived in AI’s exponential. I’ve watched computers evolve from plodding sorcerer’s apprentices to machines that can best any human at checkers, then chess, then the quiz game Jeopardy!, and now the deeply complex game of Go. By 2018, about two-thirds of American adults had adopted AI into their pockets and handbags, in the form of smartphones. Although only 15% of American households had voice-activated smart speakers, such as Amazon’s Echo with its Alexa assistant, their adoption rate was outpacing that of smartphones and tablets.

More technically, according to the 2017 AI Index, AI became better at tasks that once required human intelligence. AI’s error rate in labeling images declined from 28% in 2010 to less than 3% in 2016. (Human error at this task is about 5%.) In 2017, speech recognition achieved parity with humans in the limited domain of Switchboard, a benchmark corpus of recorded telephone conversations. Two programs, one from Carnegie Mellon and one from the University of Alberta, beat professional experts in poker, and a Microsoft program achieved the maximum possible score in Ms. Pac-Man on an Atari 2600. However, the index also acknowledged that these champion programs falter if the task is altered even slightly.

At first a friendly skeptic, I slowly came to believe that AI is inevitable. To push beyond human limitations has been our collective obsession for more than half a millennium, since the beginning of the Scientific Revolution. Maybe longer. At the dawn of that revolution, we learned that knowledge is power—power in the sense that Francis Bacon meant when he coined the phrase: power to shape our environment, our health, our fortunes, perhaps our future. AI is now a fundamental part of that.

Despite its dubious early reputation, AI began at once to elucidate the nature of human intelligence and pushed scientists to look at intelligence in other species. As it matured, the field began to propose principles of intelligence that might govern both the biological and social worlds. Certainly thanks to AI, the very horizons that human intelligence yearns toward have dramatically expanded. AI and its sibling cognitive sciences suggest that, even if we’re still pre-Newtonian in our understanding of the laws of intelligence, such laws may exist and may yet be discovered.

Meanwhile, AI is transforming everything, including what I studied in college, the humanities. Up to now, assertion or hand-waving has mostly served to answer the key questions of the Western tradition—what is thinking, memory, self, beauty, love, ethics? But in AI, the questions must be specified precisely, realized in executable computer code. Thus, eternal questions are re-examined. Some people think this somehow diminishes human achievement in the arts, the humanities, and philosophy. Even near-centenarian Henry Kissinger declared that AI marks the end of the Enlightenment—a statement that gives pause for many reasons (2018). But I believe that AI helps us understand and perhaps answer those vital questions better.

For several decades, people like Ray Kurzweil, an engineer, inventor, and futurist, have talked about AI’s imminent, inevitable, full, and lush arrival in the form of The Singularity. In a 2012 PBS NewsHour interview with Paul Solman, Kurzweil said, “Artificial intelligence will reach human levels by around 2029. Follow that out further to, say 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” He’s a professional futurist. It’s his job to be provocative. Although sometimes he admits he might be off by a few decades, he assures us the eventual result will be the same. Exceptions to this view, however, are plentiful, as we’ll see.[8]

More than forty years ago, Ed Fredkin, then at MIT, said he expected that, to mature AIs, humans would be dull—maybe they’d keep us as pets. This possibility played out in Spike Jonze’s 2013 movie, Her, where the AIs didn’t revolt against humans or try to conquer them as they had in so many classic tales. Instead, the bored AIs abandoned humans (and left humans yearning for the return of their brainy pals).

Most philosophers, whose response to AI over its early decades was a species of mind games, parables, and fables to prove the technology wasn’t really thinking, began at last to pay serious attention. In 2010, David Chalmers, an NYU philosopher, presented a set of reasonable scenarios to address the Singularity and invited colleagues from all over the world to respond.[9] Nick Bostrom, a philosopher and cognitive scientist at Oxford University, sees AI’s possibilities and dangers and concludes that meeting its challenges successfully, especially learning to control it, is the essential human task of our century (2014). In From Bacteria to Bach and Back: The Evolution of Minds, philosopher Daniel Dennett, long a friendly observer and critic of artificial intelligence and a deep thinker about matters of intelligence generally, offers a wonderfully nuanced view of the whole situation because he sees issues of intelligence in context and as parts of a great whole (2017).[10]

Predicting the future of AI isn’t just for scholars. In an interview with Julian Sancton, Beau Willimon, creator, developer, and producer of the television hit House of Cards, says:

This is where I sound like a complete fucking madman. But I actually think that humans are only the beginning. I think we’ve moved beyond biological evolution, which is incredibly inefficient. It took 15 billion years for us to get where we are. And there’s still 15 billion more years to go. We are the salamanders crawling out of an ocean. And there’s something way beyond. And I don’t think it’s God that created the universe. I think it’s the universe’s project to create God. And the things that we do in our rudimentary ways are teaching the next thing how to imagine. And then it’s going to take it from there. It will ask questions we don’t even know how to ask. It will think the things we are incapable of thinking. It will experience and feel the things that we aren’t capable of. (Sancton 2014)

It’s certainly a point of view.

It will ask questions we don’t even know how to ask. It will think the things we are incapable of thinking. It will experience and feel the things that we aren’t capable of. Yes, I believe that will happen eventually. We think of AI in terms of personal gadgets—my search engine will be better, my car will drive itself, my doctor will be better able to heal me, my grandma can be safely left home alone as she ages, a robot will finally do the housework. But greater contributions of AI will be planetary, teasing out how the environment and human wellbeing are subtly intertwined. AI’s greatest contribution might be its fundamental role in understanding and illuminating the laws of intelligence, wherever intelligence manifests itself, in organisms or machines.

For a long time, I’ve been comfortable with such ideas. Unduly optimistic, maybe, but I look forward to having other, smarter minds around. (I’ve always had such minds around in the form of other humans.) I don’t much worry they’ll want my niche—though that presupposes a planet that won’t, in one of Bostrom’s scenarios, be tiled over entirely with solar panels to supply power for the reigning AI. Humans will endure, but possibly not as the dominant species. Instead, that position might belong someday to our non-biological descendants. But really, the scary future scenarios sound as if humans have no agency here. We certainly do, and as we’ll see, it’s already at work.

A search (powered by AI techniques, of course) will quickly show how we’ve already woven AI around and inside our lives, turning scientific inquiry into human desire, even stark necessity. Where we had not—at the nuclear catastrophe of Fukushima Daiichi, for example—we wished we had. AIs fly, crawl, inhabit our personal devices, connect us with each other willingly or not, shape our entertainment, and vacuum the living room.

Robots, a particularly visible form of AI (embodied, in the field’s term), occupy a significant space in our imaginations, their very birthplace. Books, movies, TV, and video games provoke us to conjecture about some of the ways we might behave and the issues embodied AIs will raise when they become our companions. But this visible embodiment, humanoid or otherwise, is only one form AIs will take. The disembodied, more abstract intelligences, like Google Brain, AlphaZero, and Nell[11] at Carnegie Mellon, are hidden inside machines, invisible to the human eye and scoffing at human boundaries. Their implications are even more profound.

Distributed intelligence and multiagent software inhabit electronic systems all over the globe, seizing information that can be studied, analyzed, manipulated, redistributed, re-presented, exploited, and, above all, learned from. Human knowledge and decision-making are rapidly moving from books and craniums into executable computer code. But fair warnings and deep fears abound: algorithms that take big data at face value reinforce the bigotries those data already represent. Bad enough that the data you knowingly volunteer (for a driver’s license, for example) are collected, aggregated, and marketed; much worse that data gathered involuntarily from your online behavior (your purchases, your use of public transportation) are also a profit center and a spy on you. Horrors have crawled up from the dark side: bots that lie and mislead across social media, trolls without conscience, and applications whose unforeseeable consequences could be catastrophic.

Larry Smarr, founding director of the California Institute for Telecommunications and Information Technology, on the campus of the University of California at San Diego, calls this distributed intelligence and multiagent software the equivalent of a global computer. “People just do not understand how different a global AI will be with all the data about all the people being fed in real time,” he emailed me a few years ago. By sharing data, he continued, the whole world is helping to create AI at top speed, instead of a few Lisp programmers working at it piecemeal. The next years will see profound changes. In short, AI already surrounds us. Is us.

The industrialization of reading, understanding, and question-answering is well underway, ready to be delivered to your personal device. Some of these machines learn statistically; others learn at multiple, asynchronous levels, which resembles human learning. They don’t wait around for programmers but are teaching themselves. Understanding the importance of this, many conventional firms, such as Toyota and General Electric, are reinventing themselves as software firms in which AI figures prominently.

Word- and text-understanding programs particularly interest me, partly because I’m a word and text person myself, and partly because words, spoken or written, at the level of complexity humans achieve, seem to be one of the few faculties that separate human intelligence from the intelligence of other animals. (Making images is another.) Other animals communicate with each other, of course. But if their communication is deeply symbolic, that symbolism has so far evaded us. Moreover, humans can communicate not only face to face but also across generations and distances: first orally and through pictorial representations, then in writing and print, and now through electronic texts and videos.

For a long time, we were the only symbol-manipulating creatures on the planet. Now, with smarter and smarter computers, we at last have symbol-manipulating companions. A great conversation has begun that won’t be completed for a long time to come.

What follows is one story (there are many) of an extraordinary half-century and more, when humans edged toward an epochal event: the emergence of a new kind of intelligence, designed by us, to live beside our own. But this book is about humans, not machines. As it happens, AI’s coming of age, if not its full maturity, has paralleled my own life. So this is the saga of a grand scientific quest, intertwined with my personal quest. It’s a coming-of-age story of a scientific field and of a naïve young woman—now slightly wiser and decidedly older.

I kept a journal—I still do—for I sensed what I witnessed would be momentous.[12] So much was to happen as a consequence of AI. For good or otherwise, everything—social life, medicine, transportation, communication—has changed. But I’d spend a long time pulling on the sleeves of serious thinkers, trying to tell them that this—artificial intelligence—could be important.

I offer a personal story because it’s the particulars that illuminate: personalities, friendships, enmities, context, chance. To grasp these early times, abstractions won’t do.


  1. Technologists then and now have debated how close to “human” a synthesized voice should—or might—sound. See John Markoff (2016, February 15). An artificial, likable voice. The New York Times. Further disputes have arisen about the gender of such voices: do we want a stereotyped subservient female voice, or a male voice mansplaining?
  2. In October 2018, MIT topped CMU’s establishment of a mere major in AI by announcing an entire college devoted to computing and AI, to be supported by a billion-dollar commitment.
  3. Shepardson, David. (2018, May 10). White House to hold artificial intelligence meeting with companies. Reuters.
  4. Wakabayashi, Daisuke. (2018, May 9). Google promotes A.I. but acknowledges technology’s perils. The New York Times.
  5. Friend, Tad. (2018, May 14). Superior Intelligence: Do the perils of A.I. exceed its promise? The New Yorker.
  6. The New York Times, “Artificial Intelligence: Ethical AI.” March 4, 2019.
  7. Science fiction writer Vernor Vinge has often been cited as the originator of this phrase to describe such a moment in the evolution of intelligence, but John von Neumann or Stanislaw Ulam seems to have been the first to appropriate the phrase from mathematics. In his memorial tribute to von Neumann, Ulam writes: “One conversation [we had] centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Ulam, Stanislaw. (May 1958). John von Neumann, 1903-1957. Bulletin of the American Mathematical Society, 64, 3. My thanks to Bruce Garetz for bringing this to my attention.
  8. Many people strongly disagree with this scenario. For example, a 2016 panel of AI experts convened to examine the ethical and instrumental consequences of AI disagreed. Kevin Kelly, a co-founder and the founding executive editor of Wired and a perceptive observer of technology for four decades, disagrees. John Markoff, who covered Silicon Valley for The New York Times for many years, also disagrees. I’ll take this up later.
  9. Chalmers’ article appeared in the Journal of Consciousness Studies and was followed by responses from many colleagues in a 2012 issue of the same journal. See “The Singularity: Ongoing Debate Part II,” 19(7–8).
  10. Dennett and I nearly always end up in the same place, but he does the careful, heavy thinking. I just barge ahead, grateful to outsource that careful, heavy thinking to him.
  11. In this book, I will not fully capitalize most program acronyms or abbreviations, beyond an initial capital. Full capitalization is unnecessary and tiring to the reader’s eye.
  12. The original hand-written spiral-bound notebooks are archived in Hunt Library at Carnegie Mellon University.
