20
1.
A month or two after The Fifth Generation was published, Mike Dertouzos, an intense, Greek-born man of ebullient cheer, then the director of MIT’s Laboratory for Computer Science, came to Columbia for some event. We sat next to each other during lunch at The Terrace restaurant, gazing from Morningside Heights over Morningside Park, Harlem rooftops, and the East River. “Beware,” Dertouzos said with uncharacteristic seriousness. “Weizenbaum has apparently reviewed The Fifth Generation, and he’s walking around the halls, rubbing his hands together, telling anyone who’ll listen that this’ll get ‘em.”
Joe Weizenbaum had originally engaged in a mid-1960s fight with Kenneth Colby about the use in mental hospitals of Colby’s program, Doctor. Colby was trying to bring some automated relief to patients who, if they were lucky, saw a psychiatrist once a month. Weizenbaum asserted that no machine should interfere in the psychoanalytic process, and if that meant patients went without any therapy, so be it. This eventually led him to attack the entire field of AI, steadily and publicly.
Unlike philosophers of the time, who maintained AI could never succeed, Weizenbaum argued the contrary: the effort certainly could succeed. His early work on Eliza, the therapist program, and his affiliation with MIT gave that opinion public gravitas. But he believed strongly that it mustn’t be done, that it was the fast chute to catastrophe. To make his points, he leaned heavily on comparisons between AI and the Holocaust.
In 1976 he’d put these arguments together in his book, Computer Power and Human Reason, described in Chapter 17 of this book. Computer Power and Human Reason was influential and warmly welcomed by people already uneasy at the whole idea of computers, never mind machine intelligence. As a consequence, in the last few years, he’d risen to singular prominence as the self-declared “conscience” of AI and computing in general.
“He used to believe about ten percent of the stuff he was spouting,” Dertouzos said over that Morningside Heights lunch, “but now he’s become a complete convert to his own line. He’s also reading passages aloud to people from the chapter, ‘Intellectuals in the Cherry Orchard,’ saying how shocking it all is.”
“Intellectuals in the Cherry Orchard” was a chapter in The Fifth Generation that used Chekhov’s Madame Ranevsky, tragically oblivious of the future and her responsibility to it, to make the same point about some present-day intellectuals. But really, I was restating one of C. P. Snow’s points from his Two Cultures lecture years earlier. Here’s my passage from that offending chapter:
In short, no plausible claim to intellectuality can possibly be made in the near future without an intimate dependence upon this new instrument. Those intellectuals who persist in their indifference, not to say snobbery, will find themselves stranded in a quaint museum of the intellect, forced to live petulantly, and rather irrelevantly, on the charity of those who understand the real dimensions of the revolution and can deal with the new world it will bring about. (Feigenbaum & McCorduck, 1983)
“Weizenbaum’s had a hard time since he’s come to MIT,” Dertouzos continued. “You know—no PhD. So that means you have to be twice as good. As it happens, he hasn’t been twice as good. His research has gone nowhere. He knows he’s not making the grade. It’s got to be very painful. Still, I wish he hadn’t chosen this way to compensate.”
“Maybe,” I said. “But it’s also about World War II, the Holocaust. He’s possessed by it. Interesting, because he didn’t actually go through it himself.” He’d told me so during his long phone call from Berlin. I went on to speculate: “So, is that the problem? It’s some kind of obsession because he didn’t personally experience it?”
Dertouzos and I exchanged our own World War II stories. He’d been a child during the German occupation in Greece. His father was a partisan, and Dertouzos knew from infancy that in that savage occupation, if he betrayed anything that happened at home, he could cause his father’s and uncles’ deaths and his own. I’d been born in England at the height of the Blitz, into a world where the bombs rained down nightly on the just and unjust alike. So we’d both actually lived through it and more or less put it behind us.
We shrugged. Who knew what was driving Joe Weizenbaum? He’d recently appeared on television, contradicting himself by saying machines could never think because they didn’t feel cold and lonely in an empty house. Thirty years earlier, someone had protested to Alan Turing about thinking machines: Impossible! They can never love strawberries and cream! Despite Marvin Minsky’s defense of emotion as an integral part of intelligence, would being afraid in a dark house or loving strawberries and cream get you admitted to MIT?[1]
2.
I was warned. I alerted Ed Feigenbaum, and we waited. Sure enough, The New York Review of Books soon published a five-page review of The Fifth Generation by Weizenbaum (1983). In the opening paragraph, we were compared with Mussolini, Hitler, and Pinochet, my only consolation being that at least Pinochet was still alive. “We wrote a book,” I said aloud to the broadsheet in my hands. “We didn’t jail, torture, or kill anyone, or overthrow any governments. Your editor let you get away with this?”
All in all, it was a review that served neither Weizenbaum nor our readers, although I suppose he got a weight off his chest.
Weizenbaum might have had some technological quarrels with the Japanese—we’d had our own and said so. But his biggest outrage was against me, because what he found most shocking about The Fifth Generation was my proposal that AI get busy and do something useful, like the Geriatric Robot. Only people must look after people, he thundered from the pages of The New York Review of Books, an echo of his warnings twenty years earlier, that only psychiatrists must conduct the psychotherapeutic interview, even if it meant that souls in torment went without any treatment whatsoever.
“I shouldn’t have let you leave it in,” Ed said. But he laughed. Soon, at a Manhattan book party for a friend, a stranger made small talk with me about the review, and when I cheerfully told him it was my book, he backed away as if plague cankers had erupted all over my face.
I wondered if Weizenbaum had ever actually looked after an elderly person or whether he just thought somebody—some woman, let’s face it—should. Regardless, demographics suggested that caregivers for the old and infirm were headed the way of telephone operators in the 1950s: if trajectories held, half the population would soon be thus occupied. Luckily the Bell System invented automatic switching, and we were all released from compulsory careers as telephone operators.
So while I’d meant the Geriatric Robot as some jokey relief, in fact the telephone operator problem loomed. As a population, as an economy, we simply cannot devote that kind of one-on-one human care to the old and chronically ill. We’re all “stragglers from the wreck of time,” as Henry James almost put it, or we will be. According to the U.S. Centers for Disease Control and Prevention, in 2018, 61 million Americans had some form of disability. About 65 percent of working-age people with disabilities are unemployed. By 2030, twenty percent of the population will be elderly, and one in two working adults will be an informal caregiver. An acute shortage of trained health professionals in geriatrics will only grow worse. The Japanese were straightforward about it; why did we have to hide behind such pious humbug?
Ed and I sent a snappy reply to The New York Review of Books, which printed it, and that was that. But it wasn’t.
3.
I was soon invited to a Japanese conference on robotic help for the aged. I declined with thanks. Then Der Stern called from Germany. Who was doing the software for this? The hardware? A little joke, I said to the stern lady from Der Stern. Nobody was doing it, unless she wanted to call Tokyo and follow up further.
But necessity is the mother of invention. Twenty or so years later, Nursebot Pearl appeared, the brainchild of Sebastian Thrun, a young German roboticist then at Carnegie Mellon. Thrun had never heard of the Geriatric Robot, but he deeply loved his grandmother and was sure that, with a little help, she ought to be able to spend her last years in her own home. He had the skills to help her and all other grandmothers threatened with leaving familiar homes of a lifetime.
Nursebot Pearl was experimentally deployed in several nursing homes and centers for the elderly in Pittsburgh, in Cleveland, and elsewhere. She was perhaps four feet high, rolled around on casters, doubling as a walker, had a sweet and expressive female-ish face, and reminded her elderly clients to take their meds, turn on the TV for their favorite programs, and grasp Pearl’s handles so that vital signs could be instantly dispatched to humans on the watch for anomalies. But Pearl became outdated.
In the mid-2000s, an internationally known hardware specialist at Carnegie Mellon, Daniel P. Siewiorek, together with his wife, saw their two sets of parents through a sad and trying end-of-life period. “We can do better than this,” he said to himself. CMU received a highly unusual ten-year grant from the National Science Foundation to establish the Quality of Life Technology Center on the campus, pushing a grand suite of technologies to work not just for the elderly, but for anybody, adult or child, with disabilities.
Thanks to partnerships with major corporations, and CMU’s strong policy of encouraging such enterprises, some of the Center’s products are approaching commercialization. They include robots as home help (one is called Herb, an ungainly-looking but useful personal assistive robot for the home) and robots that perceive human emotions and react to them. Virtual coaches have been designed for a variety of disorders, offering cognitive and memory assistance, Siewiorek told me. What has been learned in robot vision inspires visual perception enhancements for everyone from the legally blind to Alzheimer’s patients to the general aging population—for example, night vision enhancement for safe night driving. Smart wheelchairs help prevent occupants from tipping over, and automobile interiors are being redesigned for the disabled, hand in hand with the designers of self-driving cars.
At least as important is support for caregivers. For example, a relatively cheap and gentle robot with acute sensors might lift and turn a patient over in bed, saving the backs of human caregivers. The design emphasis is on low-cost: apps for smartphones and tablets instead of expensive special-purpose gadgets.
At the University of Southern California, Maja J. Matarić, a professor of computer science, neuroscience, and pediatrics, and her team have been working on robots that take advantage of wired-in human responses to speech, facial expressions, gestures, movements, and other biomimetic behaviors to offer help. That is, these robots monitor, encourage, and sustain all sorts of activities for their clients. They’re intended to improve learning, training, performance, and the general health of anyone at risk, whether because of age, autism, stroke, or brain injury. Even the healthy elderly have worked with robotic exercise coaches to keep fit in as pleasant and personal a way as possible.
In the autumn of 2013, Matarić gave a riveting lecture at Harvard, which I attended. She told her listeners she believes that humans respond more deeply to this “embodied” presence, these robots, than to instructions or conversation on a screen. We’re wired, it seems, to assign agency to such a being, so long as its behavior is familiar, interactions with it are believable (not necessarily realistic), and the robot is autonomous (not a guided puppet). Curiously, people who need help respond better to not-quite-perfect robots than they would to perfectionism. “I can’t do that,” says a vaguely humanoid-looking robot to a human client in one of Matarić’s videos, and the client looks relieved. “I can’t either.”
Matarić shows videos of stroke victims going through their exercises, led by various robots. In stroke rehabilitation especially, motivation is the biggest problem: recovering and using a stricken limb is hard, frustrating, and boring for patient and therapist alike. Even primitive robots of ten years earlier, nothing more than rolling bits of machinery, evoked a relationship with their human clients. Later, more sophisticated robots push, prod, and know, by means of various sensors, when a client is about to quit from frustration. They quickly praise the client’s work, and suggest it’s time for a break.
Matarić emphasizes the strong effects of the human voice, a problem in AI that hasn’t yet been solved to the degree that humans and robots can converse as humans do with each other. Accurate speech understanding isn’t enough. Robots will need to enact about seventy nonverbal behaviors too, some as subtle as proxemics—who stands where, and how close?—or the dynamic allocation of roles—now you lead the conversation, now I do.
The uncanny valley effect, first described by Ernst Jentsch in 1906 and elaborated by Freud, was named and quantified in 1970 by Masahiro Mori, a roboticist. Our reactions to something only vaguely human are primarily positive. But when that something reaches a point where it’s almost, but not entirely, human, our comfort level drops dramatically: our positive feelings turn to strong revulsion. Then, as a robot’s appearance and behavior cross yet another line and come even closer to human behavior, we ascend from the revulsion of the uncanny valley and react positively again. The uncanny valley is provoked not just by robots but by disfigured burn victims, some plastic surgery patients, neurological patients, and even 3-D animations. And it is provoked across several dimensions. Matarić says that an Alzheimer’s patient grew very disturbed when her robot began to sing like Frank Sinatra. “That isn’t Frankie!” she said crossly.
4.
What struck me about Matarić’s robots was that they didn’t need much intelligence to be valuable in rehabilitation, or with autistic children, although Matarić is careful to say that not all autistic children respond to robots. But for those who do, the robot is safe and, with its flaws, “like them.” Semi-intelligent robots interacting with autistic children can elicit social behaviors, communication, turn-taking, initiating play, and even the first social smile.
So far, economics has prevented scaling up various institutional robots so that we could have one in every older person’s home. But as I write, the European Union, facing demographics similar to Japan’s, has developed an ensemble of smart clothing and a smart environment, plus a personal robot, all to allow the elderly to stay in their own homes longer. The EU plans pilot studies and also hopes that economies of scale will shrink the ten thousand euros that the smart environment and robot now cost to a more affordable sum. Canada and, of course, Japan are working on such systems and robots. Across the United States, centers from Stanford to MIT are studying ways that technology can help. Startups around the country want to remake home care and adapt social networking for the aged.
I imagine something gradual for most of us—a clever combination of miniaturized off-the-shelf components that deliver smart programs. We’ll wear our future geriatric robot as garments or special-purpose prosthetics, wired into our laptops, our smartphones, our baseboards. For all I know, such robots could be implanted. “Apps, gadgets and dongles,” says Joe Flower, a specialist on the future of health care, and I’ll bet for eldercare, too. Robots around the house could do the boring stuff. Brett the robot, under development at the University of California, Berkeley, named for “Berkeley Robot for the Elimination of Tedious Tasks,” will learn by apprenticing—watching you or YouTube—and might eventually be available to fold laundry or anything else humans can physically manage.
As I write this, I’m in my seventies, and for all the idealization of human helpers, I’ve watched some of my friends struggle with the present disadvantages of human home help. While some helpers are magnificent and dedicated, others are poorly trained, paid barely more than minimum wage by their agencies yet costing at least twice that sum out of my friends’ pockets. They’re disruptively on their cellphones or want the TV on nonstop. I say the Geriatric Robot, whatever its form, will be just in time. Sign me up.
5.
Joe Weizenbaum and I were still not finished. In 1984, I was invited to give a talk at the annual summer meeting of Ars Electronica in Linz, Austria. After an arduous trip, I fell gratefully on my bed in instant and deep sleep. Two hours later, the phone rang. It was Weizenbaum, also at the meeting, and he wanted to talk to me. We agreed on Linzer torte and coffee in an hour.
By now, Weizenbaum was a celebrated fixture on the lecture circuit, warning how the foolish use (which seemed to be any use whatsoever) of computers would inevitably lead to another Holocaust. He regularly preached that for moral and ethical reasons, people must be persuaded to abjure computers altogether.
In the passing years, his hair had grown longer and grayer, his eyebrows more bosky, his moustaches more salient, the pouches beneath his eyes more bulging. Maybe because he was spending much time in Berlin and speaking German regularly, his German accent in English was even more Teutonic. He was the very model of the modern prophet of doom.
Over the Linzer torte, I sat across from him and waited. He made desultory small talk—how was my trip? I waited some more. I began to sense that he wanted to make peace, but he couldn’t bring himself to apologize for, or even explain, his disproportionate attack on The Fifth Generation in The New York Review of Books. Where was peacemaking to begin? He said something about how tragic the Holocaust had been, and if we didn’t watch out…
“When did you get out of Germany, Joe?”
The lugubrious voice: “January. 1936.”
So I repeated what I’d said during that long-ago phone call he made to me from Berlin after the publication of Machines Who Think. I told him again what I’d said when we re-met at MIT after The Fifth Generation was published and I’d objected so strongly to his trivializing the Holocaust in his essay for The New York Review of Books. Now, for the third time, I told him how my husband had got out in January 1939, two months after Kristallnacht, three years after Weizenbaum. My husband’s every aunt, uncle, and cousin had perished, and his paternal grandmother had survived the French camp Gurs but without her mind intact. I told Weizenbaum my World War II experience. So what was all this about, then? What moral rank did he hold that he might be so judgmental, so condescending to those who believed AI had human promise? This time he heard me.
The face before me crumpled. He had depended on that moral superiority he claimed for himself and played to the hilt. In his imagination, he’d concocted a portrait of me as some corn-fed bliss-ninny who had No Idea. He’d cherished his sorrows as unique, superior, inaccessible to someone like me. Now he knew.
I waited for a response.
“Well,” he finally said. A pause. “Well.” More silence. “Well… Well… Well…”
My parting words were kind, because before me I watched a grave psychological collapse, and it was painful to see.
A day or so later at the meeting, Weizenbaum delivered his usual speech, reminding us all that the German military used computers and therefore… A young German artist beside me muttered, “The German military uses knives and forks; let us not use knives and forks.” Weizenbaum had become a cruel caricature of himself.
Joe Weizenbaum eventually moved permanently to Germany in pursuit of balm that might heal whatever had broken his heart. Was it exile that had crushed him? But exile from where? From Germany, where he was among millions exiled (and lucky to escape murder) during the Third Reich? From AI, where he’d tried valiantly and failed? From MIT, where an Institute for Computing and Human Values had been established, but he had pointedly not been invited to join? Perhaps the exile was from some imagined paradise that had never existed, and never would. But that’s only speculation. He died in Germany in 2008.
By then, whatever Weizenbaum believed about computers being unable to make fine, even humane, judgments was no longer true, even if, back in the 1960s, when he was designing banking systems, it might have been. In the new millennium, the human race urgently needs help with problems it can’t solve with other humans alone, and is slowly, step by step, turning to its intelligent machines for collaboration. The great ethical issues AI raises are getting serious attention, and I’ll take that up in a later chapter.
[1] Emotions are part of intelligence, Minsky and many others have argued, necessary but not sufficient for intelligent behavior.