5

1.

In 1965, the Stanford University campus was surely one of the loveliest places in the world. My father once read—and never forgot—that on all Planet Earth, the ideal daily temperature for humans is in Redwood City, California, a few miles from Stanford. Days are sunny and warm; cool evenings come as the marine fog creeps over the Santa Cruz Mountains in the late afternoon from the west. For variety, it rains a little, but mostly the sun-drenched days follow each other benevolently; Stanford students and faculty bike (and nowadays skate and skateboard) everywhere.

From the old Southern Pacific Railway station in Palo Alto (I was living in San Francisco then), I biked daily beneath the noble Canary Island date palms of Palm Drive, an allée that led to the golden sandstone mission-style main campus. Then I’d swerve north to Polya Hall.

George Pólya had been the great mathematical star of the Stanford campus since 1940, when he left ETH, the Swiss Federal Institute of Technology. He’d made significant contributions to many mathematical fields and was still an active emeritus professor. He was particularly fascinated by the study of heuristics—a word he popularized. Just how did people go about solving problems? What rules of thumb did they use? This would be dear to the hearts of people in AI, who believed that heuristics compensated for computing deficits in the human brain.

Polya Hall was temporary. The firm of Joseph Eichler, best known for his residential subdivisions that were modernist architecture for the masses, had sited a small group of buildings around an ancient, gnarled live oak, a tree typical of the California coastal range. Except for Polya Hall, which had two stories, these were single-story white stucco with prominent beams and floor-to-ceiling windows that welcomed the California light. Computers might not care about sun and light, but the humans who used and tended them certainly did.

My office in Polya Hall had a door connecting to Ed Feigenbaum’s and looked out on a pretty side garden. Until I got a study overlooking Riverside Park and the Hudson River, it was the most appealing office I’d ever work in. It was a joy to arrive in the morning, the fog newly burned off to reveal indigo skies, the air fresh, the flowers blooming year-round, and get to work.

Along with his professorship in the new department of computer science, Feigenbaum had also been appointed head of the computation center, a computing utility for the entire campus. As his assistant, one of my first projects was to interview faculty users all over the campus and ask what they used the computation facilities for. After I compiled and wrote up that information, the computation center decided it was time to replace its aging (maybe two years old) Burroughs B 5500 Information Processing System with a new system, the IBM System/360. This was next-generation computing: it was up-and-down compatible, meaning the machines with the smallest memories and processors would still be compatible with machines at the high end. It could also combine two kinds of computers that had been separate lines at IBM, those for business and those for science and engineering.

Other manufacturers of high-end mainframes mounted a lively competition and courtship. I took minutes at meetings where claims were staked, promises made. But Ed and his colleagues decided that IBM offered the best solution to their problems, so they chose the S/360. Alas, like a princess unready to meet the prince signed to the marriage contract, the new system was coy. Software problems, hardware problems—vaporware, as the later term would have it. I’d struggle to keep a straight face as the IBM reps swept in like Mafia dons (although far better dressed—conspicuously well-cut suits and rich ties on the decidedly informal Stanford campus). They managed to swagger and apologize simultaneously, then finish with, “But that’s history. What can we do for you now?” An unforgettable response. Later I’d find an occasion or two to use it myself.

Ed would assess them in silence as he puffed on his pipe. Then he’d say evenly, “When do you think we’ll see the system?”

“We’re working round the clock,” the IBM reps would assure him eagerly. Ed would remain quiet, puff, blink thoughtfully behind his horn-rimmed glasses. After a silence, he’d respond, “Let’s see something.”

Exasperating as this whole episode must have been for him, I never saw Ed lose his equanimity. In some sixty years of friendship, I’ve never heard him raise his voice. This was another remarkable lesson for me. I’d grown up in a family of Anglo-Irishmen who understood the tactical value of bellowing, voices that rattled the windows in their frames and made female knees knock. Many years later, I whimsically wondered whether one of the great attractions of AI for me was its antipodean location from the rage and unreason that, as a child of the mid-20th century, had surrounded me from the beginning, both on a world scale and in the domestic theater of my family.

The world’s rage and unreason need no repeating, but in my family, rages, tantrums, sulks, threats, and high-decibel shouting (my father) led to rebellious but submissive tears (my mother and us kids) at least weekly. To a soul that longed for serenity and craved intellectual stimulation, AI, based on calm reason, was emotional balm. Yet into AI’s cool Apollonian order, the Dionysian too would suddenly erupt: it’s always with us.

This description of my family’s domestic theater is incomplete. It was richer in comedy than melodrama, rich in music as well as words. My mother, a country girl from an English village so picturesque it belonged on the lid of a biscuit tin, was a gifted musician. My first real memory, as distinct from ones told to me, was of sitting in our front room and listening to her play Debussy’s Clair de Lune on our piano. She’d been an independent career girl in the 1930s, a busy hairdresser by day, pianist in a dance band at night, and her glamorous sequined crepe gowns showed up in my dress-up box years later. She was hard-working, generous-minded, enthusiastic, fascinated by humans in all their variety. I see now she paid a heavy price for the 1950s backwardness of women’s roles, especially in the suburbs of California where we fetched up. Later, her kindness and sense of humor (she was known in her seventies and eighties as “Sunshine” to her fellow musicians), along with her renewed devotion to music and dancing, saw her admirably through twenty-five years of merry widowhood. She was a witty storyteller about her fellow musicians and about astonishing family lore. My father dead, she and I became friends in a way we couldn’t have earlier. In that late mother-daughter friendship, I began to see how I owed some of the best of myself to her.

My father was ambitious, intelligent, and eventually a very successful businessman in the New World; he was handsome, suave, utterly charming, with an abundant Irish storyteller’s gift. Thus he’d transmute stories of his grievously poor boyhood, moments of excruciating humiliation (I now see), into tales so funny that we wept helplessly with laughter and begged to hear them again and again. The boy in the pith helmet (a castoff from some relative in the colonial service) provoking the Liverpool schoolmaster to cry, “Willy McCorduck! Come out from under that hat!” Or the men’s trousers so inexpertly cut down that the flies reached to his bony knees. Or how the nearly mythical aunts in London took him over and taught him manners, and he almost starved when he came back to the anarchy of his own family’s Liverpool dinner table. There were dozens of these stories; we could almost mouth the words as he spoke them, we’d loved and heard them so often. We loved him. Thanks to the comedy, and to the vibrant intellectual curiosity that always drove him, we forgave the rages and tantrums. But we couldn’t forget them.

2.

My Stanford office was a gathering place. Anyone who wanted to see Ed Feigenbaum, whether in his role as professor, or as head of the computation center, had to come through my office. Waiting, they chatted amiably with me, which is how John Reynolds taught me about Venn diagrams and Joshua Lederberg, the Nobel Laureate in genetics, gave me a quick course in mass spectroscopy.

In retrospect, it seems unbelievable that at the same time Ed was running Stanford’s computation center and dealing with recalcitrant computer manufacturers, he and Lederberg were starting research with the audacious goal of studying how scientists formed hypotheses. This project would turn AI research upside down and overthrow convictions of more than two thousand years of Western philosophy.

I didn’t really understand the impact of this research until ten years later, when I began writing my history of AI, Machines Who Think. As I’d discover, the program called Dendral, led by Feigenbaum and Lederberg, involved the radical rethinking of induction, human as well as machine. Its emphasis was securely on knowledge, not reasoning. I’d later come to think of Dendral as the Tristan chord of AI, a bold insight that changed everything.[1]

So, like the Tristan chord, knowledge was present in early AI—the rules of chess, for example—but Feigenbaum and his colleagues changed the emphasis radically: knowledge, not reasoning. This emphasis makes more sense if I consider Dendral in context when, later on, I write about the early days of AI.

3.

My husband Tom and I were weary of San Francisco’s perpetual fog and wind. We realized we could live in that ideal climate of the sunny, warm peninsula that had seized my father’s attention when he read about it so many years earlier. We found a tumbledown little cabin to rent, a mile or two west of the village of Skylonda, the crossroads of La Honda Road and Skyline Boulevard, high above and west of Stanford. The cabin clung precariously to a canyon wall on the western slopes of the mountains. (It literally slid into the canyon some years after we left.) We heard it had once been the local brothel, but our neighbor laughed. “Oh no,” he said, “speakeasy. Isn’t that big bar still in the cellar?” It was.

Now, instead of taking the old Southern Pacific train and then my bike, I could zip east up La Honda Road, over the Skyline Boulevard summit, and down to Stanford in my black Volkswagen Beetle, which lacked both gas gauge and seat belts.

The cabin was isolated, as different as possible from our former Potrero Hill flat above the Bayshore Freeway in San Francisco—where, for a stunning view of the San Francisco skyline, we inhaled ominous levels of traffic fumes. In the mountains, we were surrounded by lofty coast redwoods and tan oaks and stepped over fat yellow banana slugs every morning as we made our way up the packed dirt stairway to our cars. We knew our neighbors by sight but at night couldn’t even see their lights.

Further down La Honda Road, Ken Kesey and his Merry Pranksters lived and partied in the village of La Honda itself. I’d smile to pass the Pranksters’ psychedelically painted bus at the garage at the corner of Skyline and La Honda Road, in for repairs almost perpetually. Every weekend, sometimes weeknights, the Hell’s Angels would shatter the forest silence, their Harley hogs in formation down the mountain road on a visit to the Pranksters. Tom was often in the city, working and taking night courses. I began to feel vulnerable. Once, I thought I had an intruder and called the sheriff’s office. The officer finally arrived and shook his head. No way anyone from the sheriff’s office could get to us in less than half an hour. He strongly recommended I arm myself. Tom and I went to Sears and bought a hunting rifle, no ammunition, and this we stored under the bed. I only knew from the movies which way to point it.

It never occurred to me to go down the road and introduce myself to Kesey and his Pranksters. I was shy, serious, and, I thought, probably much too straitlaced for their wild ways. Dope didn’t interest me; my eccentricities were still seedlings. When I met some Pranksters much later, we liked each other, but we’d all transformed, neither wild prankster nor prim Stanford employee.

Two decades on, I met Kesey. My literary agent then, John Brockman, ran occasional meetings called The Reality Club, and we’d gathered to honor Kesey, who was in town to promote a book. We met in somebody’s immaculate white apartment, high above Central Park, with a southern view to the dazzling luminosity of nighttime midtown Manhattan. I introduced myself to Kesey and told him how our paths had almost crossed twenty years earlier. He looked around this magazine-shoot apartment and began to laugh a deeply amused, generous laugh, reached out, and squeezed my hand. “You’ve come a long way, baby!”

The La Honda cabin made me self-sufficient. It had a pitiful and expensive propane gas heating system, so its fireplaces were the main source of heat—up in the Santa Cruz Mountains, we had cold nights even in midsummer, when the Pacific marine fog flowed into the canyons and up to the mountain summits. We had occasional snow in the winter. So I learned to split wood. I learned to crawl, hand-over-hand, across the face of the canyon wall to find and fix the break in the water system, probably a monthly event. We got a dog, a Saint Bernard, and I was a spectacle when I bundled behemoth Sebastian into the back seat of my Beetle and tooled around Palo Alto.

Living on the peninsula also meant that I could spend some evenings and weekends working on my writing, using the luxurious electric typewriter that served me all day in the office. But many people worked evenings and weekends in Polya Hall, and if luck was with me, somebody would drop by and interrupt—“You doing anything important? No”—and chat. My favorite visitor was Bill Miller, a specialist in interpreting the graphics of bubble chambers for basic physics research. He was later Stanford’s provost and, later still, CEO and president of the Stanford Research Institute. Miller was simultaneously focused, far-sighted, and practical.

His vision is apparent in what he went on to accomplish, not just as Stanford’s provost and vice-president, but in the major role he played in continuing to nurture—on those acres of former cherry, apricot, and peach orchards—what Frederick Terman had begun, what came to be known as Silicon Valley. In addition to his academic roles, he was an early venture capitalist, an entrepreneur, and a board member of many organizations. Above all, he was someone who saw and made others see the social, scientific, and economic promise in computing.

Miller was born in rural Indiana and went to university nearby at Purdue. His open and planar face made him appear a bit simple, which he decidedly wasn’t. He had a farm boy’s friendly skepticism combined with utter pragmatism, which colored everything he did. He thought much about how people behaved and had begun to coin pithy precepts, presented to me as “The Sayings of Middle-Class William.” Always negotiate, Middle-Class William counseled. If you negotiate from strength, you have nothing to lose. If you negotiate from weakness, you still have nothing to lose. His tales of boar hunting when he’d been a young officer in the American Occupation in Germany found their way, decades later, into one of my novels, The Edge of Chaos. I don’t know if it was his army or his farm experience that inspired him to improvise support for a young computer music composer, John Chowning. Miller told funders that Chowning was studying the ambient noise in computer rooms.

Miller was full of practical advice. When I needed a cat to keep the mice in the cabin manageable, he said, “Be sure and get a kitten whose mother is a mouser. Mousing doesn’t come naturally to cats; they need to be taught.” But if I asked him how he liked living in all this glorious Palo Alto sunshine, he’d be puzzled. He’d look out the window—yes, there it was—and shrug. “I might just as well be in Indiana,” he’d say. “My work is what I care about.”

No words could’ve surprised me more. I’d always been deeply sensitive to place. When I arrived in California as a seven-year-old, I emerged suddenly into its brilliant light out of a great, oppressive darkness that had literally silenced me. You don’t think of graves at that early age, but I knew the California light had restored me to life.

4.

At Stanford, I was learning by osmosis again, the way I’d learned from the graduate students at Berkeley. I was mainly learning about AI, deeply important at Stanford, which, along with Carnegie Mellon and MIT, was then one of the three great world centers of AI research. All three were undisputed world centers of computing research generally, and it’s no coincidence that AI was centrally embedded in that wider, pioneering research.

Ed Feigenbaum had come to Stanford hoping that he and John McCarthy could collaborate. They remained personally friendly but realized their destiny was to pursue different paths in AI research. When I arrived at Stanford, McCarthy was in the process of moving his research team to a handsome, low-slung semicircle of a new industrial building in the Stanford hills, perhaps five miles from Polya Hall. A now-defunct firm called General Telephone and Electronics, seeing the new structure didn’t fit their research plans after all, had given it to Stanford, and it became the Stanford Artificial Intelligence Laboratory, SAIL.[2]

Among the research projects that had moved from Polya Hall to SAIL was Kenneth Colby’s Doctor program. Colby was an MD and psychiatrist who thought there must be some way to improve the therapeutic process—perhaps by automating it. Patients in state psychiatric hospitals might see a therapist maybe once a month if they were lucky. If instead they could interact with an artificial therapist anytime they wanted, then whatever its drawbacks, Colby argued, it was better than the current situation. In those pre-psychotropic-drug days, Colby wasn’t alone in thinking so. Similar work was underway at Massachusetts General Hospital. Colby had collaborated for a while with Joseph Weizenbaum, an experienced programmer, who’d come from a major role in automating the Bank of America and was interested in experimenting with Lisp. Weizenbaum would soon create a list-processing language called Slip, for Symmetric List Processor; despite the Lisp-like sound of its name, it really had no symbolic aspirations as AI understood the term.

Doctor was the program that the visiting and eminent Soviet scientist, Andrei Yershov, asked to see. His encounter with Doctor was the moment that artificial intelligence suddenly became something deeper and richer for me than just an interesting, even amusing, abstraction.

But Doctor raised questions. Should the machine take on this therapeutic role, even if the alternative was no help at all? The question and those that flowed from it deserved to be taken seriously. Arguments for and against were fierce. Weizenbaum warned that the therapeutic transaction was one area where a machine must not intrude, but Colby maintained that even machine-based therapy was surely preferable to none.

Thus Doctor was the beginning of a bitter academic feud between Weizenbaum and Colby, which I would later be drawn into when I published Machines Who Think and made for myself a determined enemy in Weizenbaum.

At the time Yershov was playing (or not) with the Doctor demonstration, Weizenbaum was already beginning to claim that Colby had ripped him off—Doctor, he charged, was just a version of Weizenbaum’s own question-answering program, called Eliza (after Eliza Doolittle). Eliza was meant to simulate, or caricature, a Rogerian therapist: it simply turned any patient’s statement back into a question. I’m feeling sad today. Why are you feeling sad today? I don’t know exactly. You don’t know exactly? Feigenbaum, who’d taught Lisp to Weizenbaum, says that Eliza had no AI aspirations and was no more than a programming experiment.
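That reflection trick is simple enough to sketch in a few lines. What follows is only a toy illustration in modern Python, assuming nothing more than a pronoun swap and a question mark; Weizenbaum’s actual Eliza, written in MAD-SLIP, relied on scripted, keyword-driven pattern matching well beyond this.

```python
# Toy sketch of Rogerian "reflection": swap first- and second-person pronouns,
# then hand the patient's statement back as a question. Illustrative only;
# this is not Weizenbaum's Eliza, which used keyword-driven scripts in MAD-SLIP.

PRONOUN_SWAPS = {
    "i": "you", "i'm": "you're", "i'd": "you'd", "i've": "you've",
    "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    """Turn a first-person statement into a second-person question."""
    words = statement.strip().rstrip(".!?").split()
    swapped = [PRONOUN_SWAPS.get(word.lower(), word) for word in words]
    return " ".join(swapped).capitalize() + "?"

if __name__ == "__main__":
    print(reflect("I'm feeling sad today."))  # You're feeling sad today?
    print(reflect("I don't know exactly."))   # You don't know exactly?
```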

Colby objected strenuously to Weizenbaum’s charges. Yes, they’d collaborated for a brief period, but putting real, if primitive, psychiatric skills into Doctor was Colby’s original contribution and justified the new name. Furthermore, Colby was trying to make this a practical venture whereas Weizenbaum had made no improvements in his toy program.

Maybe because Weizenbaum seemed to get no traction with his claims of being ripped off, he turned to moralizing. Even if Colby could make it work, Doctor was a repulsive idea, Weizenbaum said. Humans, not machines, should be listening to the troubles of other humans. That, Colby argued, was exactly his point. Nobody was available to listen to people in mental anguish. Should they therefore be left in anguish?[3]

I agreed with Colby. Before this, going only on first feelings, I might well have sided with Weizenbaum.

But at Stanford, I was learning to think differently. One day, I tried to explain to Feigenbaum how I’d always groped my way fuzzily, instinctively into issues, relying on feelings. Now I began to think them through logically. Ed laughed. “Welcome to analytic thinking.”

I’d entered university hoping to learn “the best which has been thought and said in the world,” as I read in Matthew Arnold’s Culture and Anarchy in my eager freshman year. But Arnold said more: the purpose of that knowledge was to turn a stream of fresh and free thought upon our stock notions and habits.

For me, meeting artificial intelligence did exactly that.

5.

Other things were happening in my life. Evenings, I’d been taking writing workshops at the University of California Extension in San Francisco, led by an ebullient New Yorker called Leonard Bishop, a successful novelist in the 1950s and 1960s. He took me aside and told me, in his delicious Lower East Side bawl, that I’d better start taking all this writing stuff seriously, awright?

I was restless. I was ambitious to be something beyond somebody’s assistant. When the computation center decided it was generating enough documents to need a technical writer and editor, I applied for the job. Ed was upset, he told me later, but instructed the man doing the hiring that I should be considered just like anyone else. I didn’t get that job—it went to a slack-jawed guy I called The Schlump (along with AI, I was learning Yiddish) who, for all I know, was better qualified. But I couldn’t keep on as I had. My marriage was dissolving. I decided to get away from it all and go east.

One of the last days at Polya Hall, I stood gazing out at the computing quadrangle, at the great old live oak at its center, and watched the graduate students come and go among the various buildings. There were Pat Suppes’s early project on teaching machines, the computation center that housed the mainframes, and an adjacent building that was home to a mathematics education project. I knew nearly all of those graduate students. The word was barely in use, but nerds are what I suppose we were. Geeks.

We did know that we were all kin in a strange point of view, a fellowship, a clan, maybe a cult, which nearly nobody else in the world was aware of. In 1966, none of us could possibly grasp the radical changes, the immense profundity, of the coming information revolution, where nearly everything, from biology to physics to literature to commerce to the arts, could be described in information-processing terms, where the world was going to go digital. That early in the revolution, we could only take soul-nourishing comfort in connecting with other members of our rare tribe. But we were utterly confident, unshakably sure, that what we were devoted to would somehow change everything. As I headed back to the outside world, I wondered if I’d ever feel so at home again.

If I was so happy living in the future, what drew me backward to the present, the past, that outside world? It was the gravitational pull in my mind of the First Culture—literature, the humanities—that I still imagined was somehow primary. This was not a culture that would ever particularly welcome me, nor appreciate what I was saying. But for a long time, I’d yearn for it just the same.


  1. In Richard Wagner’s opera, Tristan und Isolde, the Tristan chord is a four-note mix of major and minor thirds, which was shocking at the time, although some musicologists argue it wasn’t original with Wagner. Its shock came from the emphasis Wagner gave it. That chord later found a welcome home in the American Songbook—Arlen, Ellington, Gershwin, Strayhorn—whose fusion of late Romantic tonality and African-American rhythms (not to mention a story in 32 concise bars) leapt out of Tin Pan Alley and captivated the world.
  2. Read an autobiography of SAIL at http://infolab.stanford.edu/pub/voy/museum/pictures/AIlab/SailFarewell.html
  3. By 2018, online therapy was thriving. One project, Woebot, a joint effort between Stanford psychologists and computer scientists, offered cheap but not free therapy to combat depression. It was a hybrid—one part interaction with a computer, and one part interaction with human therapists, this for people who couldn’t afford the high cost of conventional therapy. Earlier projects included one at the Institute for Creative Technologies in Los Angeles, called Ellie, to assist former soldiers with PTSD. Ellie’s elaborate protocols seem to have overcome the problem that many patients resist telling the truth to a human therapist but feel freer with a computer. (We saw this with Soviet computer scientist Andrei Yershov.) Some decades ago, the Kaiser Foundation discovered the same reaction to ordinary medical questions—people felt judged by human doctors in ways they didn’t by computers and could thus be more candid.
