3

1.

My personal quest begins with a lecture that would shake and consternate the entire Anglo-American literary world in the mid-20th century, as surely as Martin Luther’s Ninety-Five Theses shook the very foundations of the 16th-century church.

In the autumn of 1960, the English department at the University of California at Berkeley had two visitors for the term. One of them was C.P. Snow, famous then as the author of the Strangers and Brothers series of novels about the British scientific world and the way mathematicians and scientists helped win World War II for Britain and the United States. The other visitor was his wife, Pamela Hansford Johnson. She too was a novelist, as well known as her husband, at least in Britain.

Snow and Johnson held an open house for students to come to tea each Thursday afternoon, and though I longed to go, I was much too shy. I was especially eager—but still couldn’t bring myself to do it—after Snow delivered a talk about what he called “the Two Cultures,” the humanities and the sciences, a kind of road show version of his Rede Lecture delivered a year or so earlier at Cambridge University. This lecture was growing more famous and controversial by the month.[1]

The Two Cultures would reverberate so deeply—sometimes comically, sometimes painfully—through my life that it’s worth saying a bit about Snow’s theme. It outraged the First Culture, the humanities, into near apoplexy and foreshadowed what would become a needlessly Manichean struggle between human and artificial intelligence.

As he stood at the lectern in that autumn of 1960, Snow was a corpulent man in a baggy gray suit. His black-rimmed spectacles, his mouth, indeed his entire face seemed slightly too small for his oversized bald head. His heft and his Oxbridge accent—adopted, I learned later—seemed deeply authoritative.

Two Cultures exist side by side, Snow began, the humanities and the sciences. Trained as a chemist and physicist and now writing novels, he was fortunate to travel in both worlds. But few others did. In truth, he went on, both our intellectual and our practical lives were dividing into polar opposites, the humanists and the scientists. Each group had lost the ability, to say nothing of the wish, to speak across the chasm. The humanists had taken the label “intellectuals” to describe themselves, but from that label they excluded scientists. In the view of humanists, scientists were shallow and brash (heroic age of science, pah!). Humanists were the sole custodians of culture, and culture meant only what they said it meant.

The scientists in turn believed that the humanists were “totally lacking in foresight, peculiarly unconcerned with their fellow men, in a deep sense anti-intellectual, anxious to restrict both art and thought to the existential moment,” said Snow. The typical scientist thought that humanists were intellectually impoverished and vain about their ignorance as if traditional literary culture were the only culture that mattered. They behaved as if the natural order didn’t exist, and its exploration was of no value, either for itself or for its consequences.

It was a caricature, he conceded, but not a very extreme one.

In 155 Dwinelle Hall, one of the biggest lecture halls on the Berkeley campus, I sat electrified by Snow’s lecture. I’d enjoyed my required science courses. I’d thought seriously of declaring an anthropology or paleontology major instead of being an English major. I loved kneeling in the dirt, digging things out, and the possibility of finding human precursors seemed the ultimate thrill. What stopped me was not my comically romantic view of this enterprise, but one of the people I confided in. “How would you go on field trips?” she scoffed. “What would your husband and children do while you were away?” I had no answer. For the sake of my hypothetical husband and children, I kept on with English literature. Fine, I loved that, too.

Snow proposed a challenge: can you describe the second law of thermodynamics? A day later, John Paterson, leading my Virginia Woolf seminar, shook his head in—to his credit—bewildered amusement. “I don’t even know what the first law of thermodynamics is.”[2]

Snow’s ideas startled me, and at the same time put into words what, in some inchoate way, I already knew. Science students seldom showed up in my discussion-size classes, so I didn’t know any. I’d never dated an engineering student. What would we talk about? A course offered by the Berkeley philosophy department on British empiricism must address, I thought, the British Empire. I was basically ignorant about science, and sealed inside one of those two cultures, I’d never be privileged to break out. I regretted it, but it wasn’t a tragedy.

No. It was.

Fortunately for me, within weeks of hearing Snow’s talk, I was introduced to artificial intelligence, which began to shatter that seal. I didn’t make the connection between the two events at the time: that only appeared in retrospect, and only much later when I’d given up a yearning for human precursors and got involved with what might be human successors. But that connection would shape my life. It came about this way.

2.

I was working my way through college, a typist in the basement of South Hall, one of the last Victorian buildings on the Berkeley campus: tall, with wrought iron frills bedecking its steep mansard roof, aggressive ivy threatening its lofty windows. It housed the School of Business (then not nearly the glamorous and well-heeled academic discipline it would become).

From Summer 1959 to Winter 1961, I’d arrive every weekday afternoon from my English major courses—the Rise of the Novel, the Age of Milton, Modern Poetry, Literature of 20th Century France—and type course outlines for classes in administrative behavior, type reading lists for marketing, type midterm exams for decision-making under uncertainty, type scholarly papers in macro-economics. One especially long summer, I typed an entire accounting textbook.

In those course outlines and reading lists, I briefly encountered the term artificial intelligence. But most striking, the name of Herbert A. Simon seemed to be everywhere. A course on municipal administration? The textbook was coauthored by Herbert A. Simon. Decision-making in organizations? Textbook coauthored by Herbert A. Simon. Theory of the firm? Herbert A. Simon. Introduction to artificial intelligence? Herbert A. Simon. It went on—and embarrassingly on. Fresh over from Dwinelle Hall and the riches of English and other European literatures, I felt as if Shakespeare appeared in every course. (Or actually, Dryden. One of the minor greats you’ve heard of, but never read.) In my snooty English major way, I thought the field of business with all its little offshoots was surely the thinnest academic field ever, if one Herbert A. Simon had a hand in almost every textbook. I didn’t know the word polymath.

Two of Simon’s former PhD students had arrived in Berkeley from what was then Carnegie Tech (later Carnegie Mellon University) in Pittsburgh to be new assistant professors in the business school. Julian Feldman arrived in the fall of 1959, and Edward Feigenbaum, having spent a year on a Fulbright in England, arrived in that critical fall of 1960. Feldman always seemed rushed and overwhelmed with the press of life. (I was put to work typing transcripts of his psychology experiments with nonsense syllables, in this case, JIK and DAX, which gave me names for my two goldfish.) He was a big man and seemed to fill the room, not so much with his size as with his ever-harried presence.

Feigenbaum, by contrast, was placid and soothing, no stereotyped hot-tempered redhead. He was always ready to stop and share a joke and, between puffs on his omnipresent pipe, shake his head slowly in sweet wonder at life. With his round face and benign grin, he seemed a bespectacled, sunnier, and infinitely more self-confident Charlie Brown.

Feigenbaum and Feldman were the emissaries of this new field called artificial intelligence, or as their part of it was called, computer simulation of cognitive processes, a fancy way of describing the effort to make a computer mimic certain aspects of human thinking. They were trying desperately to teach not only puzzled business school students but also their colleagues all over the university.[3]

The lack of textbooks compounded the difficulties of teaching AI to business students at Berkeley (and any others who found their way across the campus from psychology, operations research, or engineering). AI had been born as a distinct field only a few years earlier and only named by John McCarthy for the summer workshop he organized at Dartmouth in 1956 along with Marvin Minsky, Nathaniel Rochester, and Claude Shannon, the founder of information theory. None of these names meant anything to me then; some of them still don’t mean much to the general public. But they and their colleagues would change the world.

3.

Feigenbaum and Feldman decided to put together a textbook of readings, so the most important AI work could be found between two covers instead of, in those pre-Internet days, hidden in the odd conference proceedings or journal. I was graduating in January 1961 and meant to go to law school the following September. But about the same time I heard C.P. Snow’s Two Cultures lecture, Feigenbaum said to me, “Would you like to work on our book during that semester you’re off?” I enthusiastically agreed. Only then did I ask what the book was about.

“Artificial intelligence,” he said.

I needed a better definition than what I’d been able to glean from typing course outlines.

“Artificial intelligence,” Feigenbaum said patiently, beginning the first of a thousand and one explanations that, over a lifetime, would cover a grand list of topics, “is computers behaving in ways that if humans did that, we’d say, ‘Ah! That’s intelligent behavior.’”

At the time, it was a fair answer. But in 1960, I must have taken a breath. I’d often been to the new computer center in another basement, a few buildings away from South Hall. With dire warnings from fretful faculty that I mustn’t drop or mix up the stacks of punch cards, I’d deliver them across the counter to one of the clerks, who stood guard before the massive tape drives and processing units of the contemporary computer. Twenty-four hours later, I’d fetch the cards and whatever printout the computer had delivered as a result of running the program once. State of the art, 1960.

To associate all this in any way with intelligence was, well, ludicrous.

Intelligence was what the English, French, classics, anthropology, and architecture departments—all places I was taking courses—cherished as the sine qua non of human nature. Intelligence was central to literature, art, music, and beautiful structures. It had nothing to do with computers—all flashing lights and tape drives, on-off switches, dunderheaded insistence on flawless step-by-step instructions. Otherwise, like Victorian ladies, they’d swoon on their fainting couches, awaiting the smelling salts of debugging.

4.

Ed Feigenbaum’s offer intrigued me. I needed a job during that break between graduation and law school. But something else was happening, too. As a literature major, I was thoroughly ground down by the pessimism of the post-World War II product, whether British, American, or French. Oppressed by it. Repelled.

I wasn’t naïve. I’d gone through World War II—as a child, yes, but I’d gone through it, born in the middle of an air raid during the English Blitz, nothing between my laboring mother and the deadly night but a flapping tarpaulin. From my first breath, I was given to know that people in the skies wanted to kill me. I’d seen hard times afterwards, both in England and the United States, to which my family immigrated. By the time I got to college, the Holocaust was the hottest topic of debate over long espressos at Il Piccolo on Telegraph Avenue. New friends had just fled the Hungarian revolution. At the Cinema Guild and Studio, a storefront movie house on Telegraph Avenue run by Pauline Kael, one day to be the celebrated film critic of The New Yorker (Kael herself sometimes sold tickets and then disappeared to run the projector), I saw all the new European films. I could do world-weariness with the best of them.

But I was twenty. I was in love. Spring in Berkeley, summer in Berkeley, even autumn in Berkeley were glorious with a sense of the imminent. Any season, the views across the campus to San Francisco Bay and the Golden Gate Bridge stopped you short with their panoramic grandeur. I’d begun to know that happiest of human moments, the simultaneous arrival of intellectual and sexual awakening. It was hard to be world-weary. No, it was impossible.

What I found among the artificial intelligentsia more than half a century ago was the same kind of excitement, optimism, and joyful hope in the future that I secretly harbored. But these men were open about it. They felt themselves on the cusp of something momentous. Grand. Epoch-making. They were right.


  1. Snow sketched the Two Cultures idea in a 1956 essay for The New Statesman and presented versions of it elsewhere before his 1959 Rede Lecture. He’d expand the ideas into long essays that appeared in two issues of Encounter in 1959. The fur flew—it got personal and nasty; British class prejudices were shamelessly exhibited to the embarrassment of American disputants, who thought one could disagree with Snow without resorting to such things. A small industry of pro and contra arguments labored furiously for years. It seems impossible to believe in the science-saturated world we inhabit now, but Snow’s challenge was grievously heretical to the orthodoxy and intellectual hegemony of the humanities that had prevailed for almost three centuries in Britain and elsewhere (for the book was translated into many languages and sold well). Four years after the Rede Lecture, Snow published a reconsideration, whose major regret was that he hadn’t used molecular biology instead of the second law of thermodynamics as a question to test scientific literacy. Stefan Collini’s Introduction to Snow’s The Two Cultures (1998, Cambridge University Press) offers a useful context for both Snow’s arguments and those of his critics. Snow’s grievance had a long personal genesis (fixed in the 1930s, Collini argues) and was based on both fact and passion, but it struck a public nerve, already raw from the work of the Angry Young Men, whose working-class, kitchen-sink realism (and anti-Modernism) was just then sweeping the West End and Broadway stages and soon Hollywood movies. The launch of Sputnik in early October 1957 had also disquieted the public. In 2008, the Times Literary Supplement named The Two Cultures and the Scientific Revolution as among the 100 books that had most influenced public discourse since World War II. As I walked across the Columbia campus in February 2015, I saw announcements of at least two lectures that had “Two Cultures” in their titles. Ed Feigenbaum recently told me that, as a post-doctoral student in 1959 England, he’d bought a first edition of The Two Cultures and still has it in a fireproof safe.
  2. The second law of thermodynamics says that in an isolated physical system, disorder, or entropy, usually increases; the system can never spontaneously reorder itself. If Snow had mentioned that entropy is information concealed from us, or that information reduces uncertainty, as particle physicists believe, it might’ve added to the fun. That particular audience knew something about information.
  3. Why this topic washed up in a business school at all is a bit of historical contingency. Herbert A. Simon and his former student, Allen Newell, had been teaching at Carnegie’s Graduate School of Industrial Administration and were adventurous enough to go wherever their research took them. One major place it took them was the invention of thinking machines, computers that simulated certain aspects of human cognition. Anyone who wanted to study the new field with them had to enter Carnegie’s Graduate School of Industrial Administration and take his PhD from there. Feldman had originally trained in psychology and Feigenbaum in electrical engineering, so it was an odd but apparently happy path. Because they both had PhDs from the Graduate School of Industrial Administration, it must have seemed obvious to their colleagues at Berkeley that they’d fit into the business school. They didn’t quite, but no one yet understood that.
