1.
At the end of his book Chance and Necessity, Jacques Monod, a molecular biologist and Nobel laureate, asks humanity to put its faith in art and science for human salvation. For a long time, I could see why science might save us collectively. I was less confident about art. It’s cliché to say that Nazi commandants loved Brahms and Wagner and still behaved inhumanly, but it’s true and hasn’t yet been satisfactorily answered on behalf of art. But if art only enriches the lives of those individuals whom science saves, then perhaps that’s enough to ask of art.
As a humanist, I soon understood that AI was a human enterprise. From its beginning, it was, and still is, two-pronged: first, to model and therefore better understand human cognition, and second, to apply better-than-human cognition to various problems humans want to solve. I was curious about AI’s history, its prospects, and the people who were making it happen. The computer, I thought, was a tool, human thought made manifest (even before I tangled with Ashley Montagu on that). As you’ve seen from these pages, this saga, I was innocently surprised to discover that other humanists didn’t view AI, or computing in general, the same way. They saw my curiosity as “selling out to the machines,” as scheming to “exchange machines for humans,” and other fearful nonsense.
“As a substitute for humans?” Oliver Selfridge once mused to me. “How? Will the machine substitute me for myself? Of course not.” But over my lifetime, most humanists just dismissed AI.
In 1989, a simple question on a publisher’s publicity questionnaire about “how I came to write this book”—Aaron’s Code—stopped me. It wasn’t just one book; it was my life’s fata morgana. Why artificial intelligence? What drew me and never let me go?
July 21, 1989:
Simple question, but the answer is by no means simple. That early on, I recognized how significant AI was, and wanted to tell the world? Paltry themes bore me. I have no desire to paint teacups. I want to run with the big dogs, but in my own way. Seize a bone you can hardly get your jaw around, then hang on for dear life.
This long romance with AI? The appeal to my rational, cool, cerebral self, yes. I admire in it what I admire in myself (like everybody else doing AI). Yet surely I wanted to winkle out the passion in these people too. I knew my own.
The easy answers: How much I admired the people engaged in the research—astonishing visionaries, pursuing with acute natural intelligence what had been a human dream forever, but never before possible. If they succeeded, they’d change the world. (And so they have.)
Herb Simon, Allen Newell, John McCarthy, Marvin Minsky, Seymour Papert, Raj Reddy, Ed Feigenbaum, Harold Cohen, Lotfi Zadeh, Tomaso Poggio, Patrick Winston, and so many others, all of them at extremes, stretching the mind—their own, mine, the world’s, and newly created artificial minds—in unprecedented directions. I was enchanted by their vision.
Some of them, like Newell and Simon, like Minsky, McCarthy, Feigenbaum, Poggio and Winston, pursued deep scientific goals. Some of them, like Reddy and Papert, wanted to free ordinary people, whether in poverty or just in poor schools, from wretched circumstances. Some of them, like Harold Cohen, had an artistic goal to pursue. For all of them, their goals were demanding, yet high-spirited fun. This young woman, depressed by the pessimism of post-World War II literature, loved the optimism, the excitement, and the ambitions of early AI research. All these were answers to that question.
Did AI represent an attractive transgression for me? A stand-in for being a Merry Prankster? Hardly. For most people in the coming decades, my romance with AI was so absurd it wasn’t even transgressive. To them it was just—incomprehensible. Wacky. Preposterous. Likewise, my serene confidence in it.
Yet in conversation with Harold Cohen, I once blurted out that I thought if intelligence could be shown to exist outside the human cranium—outside the male human cranium—then all the stupidity I’d endured, the conventional wisdom from every side (“women mustn’t be too smart; they won’t get a husband,” “graduate school isn’t for you, you’ll only get married and have babies,” “the most valuable asset you bring to this organization is your typing speed”) was exactly that, stupidity. In the 1970s, when the second wave of feminism began to free me and millions of other women, we finally saw those platitudes for the old-fashioned superstition and self-serving patriarchy they were. Intelligence exhibited by a machine made nonsense of the universally sanctified superiority of male thought. In my enchantment with AI, I could’ve been sticking out my tongue at all that.
As a corollary, I naïvely imagined that AI would be neutral intelligent behavior without sexism or other bigotries, that somehow symbolic reasoning and algorithms descended from some Platonic Ideal Heaven, free of traditional earthly biases. I wasn’t alone in such a hope. Algorithmic decision-making in loan approvals, welfare benefits, college admissions, employment, and what social media shows its participants seemed to offer mathematical detachment and lack of bias.
But this neutrality was a subconscious hope for AI I kept even from myself for a very long time. How wrong I was. Algorithms arise from the soil where they germinate: they embody the unexamined assumptions of their programmers, a majority of whom are white or Asian men. These are the people who label images (in the beginning) and assign the weights of importance to one fact or another (in the beginning). Moreover, the data sets these already compromised algorithms work over comprise tens of thousands, often millions, of human decisions, and as those decisions exhibit bigotries, so does the program.[1] Silicon Valley has so far shown itself feudally backward about shaping a more inclusive social culture, so we can’t be astonished that its products reflect that sorry state of affairs. We can’t be astonished that parts of AI might be equally biased.[2]
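To make that mechanism concrete, here is a minimal sketch, in Python with wholly invented data, of how a model trained on a biased historical record reproduces the bias. Nothing in it comes from any real system; the synthetic “skill” variable, the prejudice term, and the use of scikit-learn’s logistic regression are all assumptions chosen purely for illustration.

```python
# A synthetic illustration: a model trained on biased historical
# decisions learns to reproduce the bias. All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)    # 0 or 1: a protected attribute
skill = rng.normal(0.0, 1.0, n)  # identically distributed in both groups

# Historical hiring decisions: partly skill, partly prejudice
# against group 1 (the -1.0 * group term).
p_hire = 1.0 / (1.0 + np.exp(-(skill - 1.0 * group)))
hired = rng.random(n) < p_hire

# Train on the historical record exactly as it stands.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two applicants of identical skill, differing only in group:
applicants = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(applicants)[:, 1])  # group 1 scores markedly lower
```

Note that simply deleting the group column doesn’t cure the problem if other features act as proxies for group membership, which in real data sets they usually do. That, in brief, is why “the algorithm is neutral” was a vain hope.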
For a sorry example, in October 2018, Google employees were outraged by a New York Times report that scores of their colleagues, found to be sexual harassers, had been quietly let go, many with generous exit packages, including Andy Rubin, a creator of the Android mobile software, who left with $90 million (Wakabayashi & Brenner, 2018). Some 20,000 Googlers worldwide, female and male, staged a temporary walkout on November 1, protesting their firm’s behavior and demanding more transparent handling of sexual harassment and greater efforts on behalf of diversity. Google’s CEO, Sundar Pichai, publicly apologized and promised that the firm would do better (Wakabayashi, Griffith, Tsang, & Conger, 2018).[3] But two of the protest’s female organizers later claimed they’d suffered retaliation from Google.
Silicon Valley’s extreme sexism, racism, and ageism are a bitter disappointment to me. I long for that to change. I long for the kinds of projects that would do credit to us all and yes, slowly eradicate bias. Instead, right now we face unaccountable algorithms that “improve” themselves opaquely and top executives at a major firm like Facebook (an AI company if ever there was one) who practice and cover up grossly harmful social behavior by their engines.
On machine learning, writer and longtime software engineer Ellen Ullman warns, “In some ways we’ve lost agency. When programs pass into code and code passes into algorithms and then algorithms start to create new algorithms, it gets farther and farther from human agency. Software is released into a code universe which no one can fully understand” (Smith, 2018).
Yet alongside my naïve early hopes, my unexamined belief that pursuing more intelligence was like pursuing more virtue, my disappointments with the social aspects of Silicon Valley, I sensed very early that AI was momentous. I wanted to bear witness. To this day, for all my present deep misgivings, I’m thrilled and grateful that I could.
2.
In the mid-teens of the 21st century, a startling efflorescence of declarations, books, articles, and reviews appeared. (Typical titles: “The Robots Are Winning!” “Killer Robots are Next!” “AI Means Calling Up the Demons!” “Artificial Intelligence: Homo sapiens will be split into a handful of gods and the rest of us.”) Even Henry Kissinger (2018) tottered out of the twilight of his life to declare that AI was the end of the Enlightenment, a declaration that gives pause for many reasons.
The profound, imminent threat AI posed to privileged white men caused this pyrexia. I laughed to friends, “These guys have always been the smartest ones on the block. They really feel threatened by something that might be smarter.” Because most of these privileged white men admitted AI had done good things for them (and none of them so far as I know was willing to give up his smartphone), they brought to mind St. Augustine: “Make me chaste, oh Lord, but not yet.”
Very few women took this up the same way (you’d think we don’t worry our pretty heads). One who did, Louise Aronson, a specialist in geriatric medicine (2014), dared to suggest that robot caregivers for the elderly might be a positive thing, but Sherry Turkle (2014), another woman who responded to Aronson’s opinion piece in The New York Times with a letter to the editor, worried that such caregivers only simulated caring about us. That opened some interesting questions about authentic caring and its simulation even among humans, but didn’t address the issues around who would do this caregiving and how many of those caregivers society could afford.
As I read this flow of heated declarations about the evils of AI, ranging from the thoughtful to the personally revealing to the pitifully derivative—a Dionysian eruption if ever there was one—I remembered the brilliant concept, described and named by the film critic Laura Mulvey in 1975: the male gaze. She coined it to describe the dominant mode of filmmaking: a narrative inevitably told from a male point of view, with female characters as bearers, not makers, of meaning. Male filmmakers address male viewers, she argued, soothing their anxieties by keeping the females, so potent with threat, as passive and obedient objects of male desire. (The detailed psychoanalytic reasoning in her article you must read for yourself.)
In many sentences of Mulvey’s essay, I could easily substitute AI for women: AI signifies something other than (white or Asian) male intelligence and must be confined to its place as a bearer not a maker of meaning. To the male gaze, AI is object; its possible emergence as autonomous subject, capable of agency, is frightening and must be prevented, because its autonomy threatens male omnipotence, male control (at least those males who fret in popular journals and make movies). Maybe that younger me who hoped AI might finally demolish universal assumptions of male intellectual superiority was on to something.
The much older me knows that if AI poses future problems (how could it not?) it already improves and enhances human intellectual efforts and has the potential to lift the burden of petty, meaningless, often backbreaking work from humankind. Who does a disproportionate share of that petty, meaningless, backbreaking work? Let a hundred Roombas bloom.[4]
But the handwringing said that people were at last taking AI seriously.
3.
Another great change I’ve seen is the shift of science from the intellectual perimeters of my culture to its center. (Imagine C. P. Snow presenting his Two Cultures manifesto now. Laughable.) These days, not to know science at some genuine level is to forfeit your claims to the life of the mind. That shift hasn’t displaced the importance of the humanities. As we saw with the digital humanities—sometimes tentative, sometimes ungainly, the modest start of something profound—the Two Cultures are reconciling, recognizing each other as two parts of a larger whole, which is what it means to be human. Not enough people yet know that a symbol-manipulating computer could be a welcome assistant to thinking, whether about theoretical physics or getting through the day.
AI isn’t just for science and engineering, as in the beginning, but reshapes, enlarges, and eases many tasks. IBM’s Watson, for instance, stands ready to help in dozens of ways, including artistic creativity: the program (“he” in the words of both his presenter and the audience) was a big hit at the 2015 Tribeca Film Festival when it was offered as an eager colleague to filmmakers (Morais, 2015).
At the same time, AI also complicates many tasks. If an autonomous car requires millions of lines of code to operate, who can detect when a segment goes rogue? Mary Shaw, the Alan J. Perlis professor of computer science and a highly honored software expert, worries that autonomous vehicles are moving too quickly: from expert assistants beside the wheel and responsible for oversight, to ordinary human drivers responsible for oversight, to full automation without oversight. She argues that we lack the experience to make this leap. Society would be better served by semi-autonomous systems that keep the vehicle in its lane, observe the speed limit, and stay parked when the driver is drunk. A woman pushing a bike, its handles draped with shopping bags, was killed by an autonomous vehicle because who anticipated that? If software engineering becomes too difficult for humans, and algorithms are instead written by other algorithms, then what? (Smith, 2018). Who gets warned when systems “learn” but that learning takes them to places that are harmful to humans? What programming team can anticipate every situation an autonomous car (or medical system, or trading system, or. . .) might encounter? “Machine learning is inscrutable,” Harvard’s James Mickens says (USENIX, 2018). What happens when you connect inscrutability to important real-life things, or even what he calls “the Internet of hate,” also known as simply the Internet? What about AI mission creep?[5]
Columbia University’s Jeannette Wing has given thought to these issues and offers an acronym: FATES. It stands for all the aspects that must be incorporated into AI, machine learning in particular: Fairness, Accountability, Transparency, Ethics, Security, and Safety. Those aspects should be part of every data scientist’s training from day one, she says, and at all levels of activity: collection, analysis, and decision-making models. Big data is already transforming all fields, professions, and sectors of human activity, so everyone must adhere to FATES from the beginning.
But fairness? In real life, multiple definitions exist, and they cannot always be satisfied at once; the sketch following these questions makes the conflict concrete.
Accountability? Who’s responsible is an open question at present, but policy needs to be set, compliance must be monitored, and violations exposed, fixed, and if necessary, fined.
Transparency? Explanations of why the output can be trusted are vital, but we don’t yet fully understand how some of the technology works. That’s an active area of research.
Ethics? Sometimes a problem has no “right” answer, even when the ambiguity can be encoded. Microsoft has the equivalent of an institutional review board (IRB) to oversee research (Google’s first IRB fell apart publicly after a week), but firms aren’t required to have such watchdogs, nor to comply with them. According to Wing, a testing algorithm for deep learning, DeepXplore, recently found thousands of errors, some of them fatal, in fifteen state-of-the-art deep neural networks trained on data sets including ImageNet and in software for self-driving cars. Issues around causality versus correlation have hardly begun to be explored.
Safety and security? Research in these areas is very active, but not yet definitive.
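Wing’s caution about fairness deserves a concrete illustration. The sketch below, in plain Python with invented counts, checks two standard formal criteria on the same hypothetical predictions: demographic parity (equal selection rates across groups) and equal opportunity (equal true positive rates). One holds while the other fails, which is exactly why “fair” has no single definition.

```python
# Two standard fairness criteria evaluated on invented counts.
# Demographic parity: P(predicted positive) equal across groups.
# Equal opportunity: P(predicted positive | truly positive) equal.

groups = {
    # tp: true positives, fp: false positives,
    # actual_pos: truly positive members, size: group size
    "A": dict(tp=40, fp=10, actual_pos=50, size=100),
    "B": dict(tp=20, fp=30, actual_pos=50, size=100),
}

for name, g in groups.items():
    selection_rate = (g["tp"] + g["fp"]) / g["size"]  # demographic parity
    true_pos_rate = g["tp"] / g["actual_pos"]         # equal opportunity
    print(f"group {name}: selected {selection_rate:.2f}, "
          f"true positive rate {true_pos_rate:.2f}")

# Both groups are selected at rate 0.50, so demographic parity holds;
# but true positive rates are 0.80 versus 0.40, so equal opportunity
# fails. Satisfying one definition of fairness can violate another.
```

Conflicts like this are not an artifact of these invented numbers; except in degenerate cases, no classifier can satisfy all the common formal fairness criteria at once.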
This could be important.
So I said again and again over my lifetime. Now we know. AI applications arrive steadily. Some believe we’ll eventually have indefatigable, smart, and sensitive personal assistants to transform and enhance our work, our play, our lives. Researchers are acting on those beliefs to bring such personal assistants about: the Guardian Angel, Maslow, Watson. With such help, humans could move into an era of unprecedented abundance and leisure. Others cry halt! Jobs are ending! Firms and governments are spying on our every move! The machines will take over! They want our lunch! They lack human values! It will be awful![6]
Which will it be?
History says that however painful the transition, a major revolution—the agricultural, the industrial—has led, on balance, to greater advantages for humans in general. Historians know this, but it doesn’t stop them from saying that this time may be different; this time we may be the losers. Because we may be. At the moment, we can only guess.[7] Kai-Fu Lee (2018), the eminent Chinese AI researcher and venture capitalist, reminds us that this revolution will be at least as important as the previous two—one plus one equals three, as he puts it—and it’s coming a lot faster. As AI gets better, the intellectual, ethical, and political issues it brings with it are starkly challenging, and we are imperfect vessels to wrestle with them.
For example, as I write, the U.S. government, at least, is hostile to compensating humans for the jobs they lose because of automation. All governments will face a totally novel reality in the future. Wise minds, human and machine, will need to consider their response to that reality, which might include income redistribution or might offer unimaginable new occupations, using every variety of intelligence—ethical, analytical, emotional, machine—that exists.
In 1969, the visionary Buckminster Fuller published Operating Manual for Spaceship Earth. He foresaw the rise of automation and the loss of jobs. He proposed a “life fellowship” in research and development, or in just plain thinking, for every human who became unemployed by automation. Of 100,000 such fellowships, perhaps only one might yield a breakthrough idea, but the idea would be so potent it would more than pay for the other 99,999. He imagined such fellowships—these days called universal basic income—would give everyone a chance to develop their most powerful mental and intuitive faculties. He imagined young people, frustrated by soulless jobs, might just want to go fishing. “Fishing provides an excellent opportunity to think clearly; to review one’s life; to recall one’s earlier frustrated and abandoned longings and curiosities. What we want everybody to do is to think clearly” (Fuller, 1978). From this he foresaw the advent of an age of great abundance and tranquility.
Our present landscape is mixed. The Chinese artist Cao Fei makes a film in a factory where the silence is eerie, humans nearly nonexistent. What should we feel? Sadness that, in the United States at least, decent wages for many have disappeared? Or gladness that hard, repetitive work is now the province of machines, that humans have been released from a grueling forty hours per week of monotony? Both?
We know that if jobs are disappearing, the planet has no lack of tasks. A friend returns in 2018 from a trip to Guatemala, where, as a volunteer nurse, she helped in a dental clinic for the rural poor and unserved. Her watercolor images depict not only Guatemala’s natural beauty, but also half a dozen of the rotted teeth that volunteer dentists had pulled from the mouths of young children, a small fraction of those pulled daily. The planet offers endless such tasks waiting to be done.
I spoke at the beginning of this book of the great intellectual structure underway, called computational rationality, which embraces intelligence wherever it’s found: brains, minds, machines. I called it a new Hagia Sophia, temple of holy wisdom, because intelligence, as you’ve surely guessed, is one of the things I hold sacred.
But like everyone else, I can neither define nor measure intelligence. I think I know it when I see it (or its absence). Its definition is barely underway and its measurement a conundrum. This part is not yet science.
Late in his life, Harold Cohen said, “The whole of my history in relation to computing really has had to do with a change from the notion of the computer as an imitation human being to the recognition of the computer as an independent entity that has its own capacities which are fundamentally different from the ones we have.” Many researchers share that view. Fundamentally different? Superficially different? We don’t know. Perhaps computational rationality will clarify those questions, along with questions of what’s appropriate for AI to take on, and what it mustn’t.
We return to the male gaze. In a recent NYU study (West et al., 2019), a picture emerges of AI as a field in crisis around diversity across gender and race. Only 18% of authors at leading AI conferences are women, and more than 80% of AI professors are men. At Facebook, for example, women are only 15% of AI research staff; at Google, only 10%. (For people of color, the proportions are even worse.) The male gaze transmutes into the male stranglehold. This means that products—algorithms, heuristics—reinforce biases that follow historical patterns of discrimination. Face recognition programs have infamously failed to identify people of color, and the binary assumptions in assigning gender—a human subject is either male or female—are far too simplified and stereotypical to be effective across a variegated population. Efforts that emphasize training women, the report goes on, will probably benefit white women preponderantly (which is no reason why they shouldn’t continue, but perhaps with different shapings).
The pushbacks against diversification are especially telling. One skeptic about diversity argues that “cognitive diversity” can be achieved by ten different white men in a room, so long as they weren’t raised in the same household. Others argue that everyone is different from everyone else: isn’t that sufficient diversity? The report calls this a “flattening” of diversity, making it into an empty signifier that ignores the lived experiences and documented history of women and minorities. Biological determinism ascends once more, not only in hiring practices, but in the systems that emerge from such a workforce.
To improve workplace diversity, the report makes eight recommendations, among them publishing compensation levels across all roles and job categories, broken down by race and gender; greater transparency in hiring practices; and incentives to encourage hiring and retention of under-represented groups. The report’s introduction puts it in boldface: “The diversity problem is not just about women. It’s about gender, race, and most fundamentally about power. It affects how AI companies work, what products get built, who they are designed to serve, and who benefits from their development.” Let me add that the employees of such companies have been active in protest against these built-in bigotries, especially at Google. (But again, two women organizers of the Google protest claim to have suffered retaliation as a result.)
To address bias in AI systems, the report recommends four steps: transparency in systems and their uses; rigorous testing across the lifecycle of AI systems, including pre-release trials and independent monitoring; a multi-disciplinary approach to detect examples of bias; and thorough risk assessment before AI systems are designed.
It is all about power—power in the workplace, power in the products that emerge, power in society. No one with power gives it up without a fight.
1. See, for example, Weapons of Math Destruction by Cathy O’Neil (Penguin Random House), Life in Code: A Personal History of Technology by Ellen Ullman (Farrar, Straus and Giroux), and Plain Text: The Poetics of Computation by Dennis Tenen (Stanford University Press).
2. For a good, if brief, survey of sexism in Silicon Valley, see “Letter from Silicon Valley: The tech industry’s gender-discrimination problem” by Sheelah Kolhatkar (The New Yorker, November 20, 2017). The Valley’s ageism and racism are no better. See also “Amazon’s facial recognition wrongly identifies 28 lawmakers, ACLU says” by Natasha Singer (The New York Times, July 26, 2018) and “How white engineers built racist code—and why it’s dangerous for black people” by Ali Breland (The Guardian Weekly, December 4, 2017), and too many more such articles. A particularly detailed and damning report on sexism in AI has been issued by the AI Now Institute at New York University: West, S.M., Whittaker, M., & Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute. Retrieved from https://ainowinstitute.org/discriminatingsystems.html. Does this suggest that diverse employees might design a better product? It does. Chinese facial recognition programs claim to be more skilled than the Western versions and are already meting out good citizen points for good behavior, demerits for undesirable behavior. We do not know if they’re more skilled, or whether individual Chinese have protested that the system has been unfair to them. We do know that such social systems in the West would be deeply flawed by gender and racial bias. See too work showing how predictive algorithms in the justice system unfairly and inconsistently target defendants. Retrieved from https://news.harvard.edu/gazette/story/2018/05/grad-discovers-a…medium=email&utm_campaign=Daily Gazette 20180530
3. The Googlers who began the walkout cheerfully used Google-invented tools to organize and recruit for it.
4. Journalist Sarah Todd wrote “Inside the surprisingly sexist world of artificial intelligence” (Quartz, October 25, 2015) about the sexism and lack of diversity in AI. The piece suggests women won’t pursue AI because it de-emphasizes humanistic goals. Maybe public fears about AI stem from the field’s homogeneity, she went on. To close the gap, schools need to emphasize the humanistic applications of AI. And so on. Although many applications of AI grow out of a sexist culture and reflect that, readers of this history can also see the fallacies in Todd’s argument. AI started out as a way of understanding human intelligence. That continues to be one of its major goals, which is why it partners with psychology and brain science. Its humanistic goals are central, whether to understand intelligence or to augment it. But all scientific and technological fields save, perhaps, the biological sciences, could use more women practitioners and more people of color. That is being addressed in many places and many ways, beyond the scope of this book, but one example is the national nonprofit AI4All, launched in 2017 by Stanford’s Fei-Fei Li and funded by Melinda Gates, which aims to make AI researchers, hence AI research, more diverse. The 2019 report from NYU says this is not enough (West et al., 2019).
5. The video in which Mickens’ quote appears is mostly about the perils of machine learning, especially the hilariously sad story of Tay, Microsoft’s chatbot, which had to be taken down from the Internet after 16 hours because of what it was learning from its training set, the gutter of the Internet.
6. The cries of pain and alarm are too numerous to list. Privacy, meddling, reshaping our sense of ourselves as unique, and more. About the future job market, for example, books and articles abound. See, for example, the relatively optimistic book by Erik Brynjolfsson and Andrew McAfee, Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (Digital Frontier Press, 2011) or the careful quantitative study from the University of Oxford by Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible are Jobs to Computerisation?” (September 17, 2013), available at https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf. But later economists questioned these findings as mere extrapolation, with no allowance for new jobs that will be created. For example, Forbes.com’s Parmy Olson wrote about a PwC report on AI in “AI won’t kill the job market but keep it steady, PwC report says” (July 17, 2018).
7. In 2017, Brynjolfsson and Tom Mitchell, an eminent AI researcher, chaired a National Academies panel that strongly recommended the development of new, more precise tools for measuring the impact of AI on jobs, including better data monitoring and analysis, the kinds of tools in common use at firms like Google and Facebook. Some take issue with AI research being sponsored by the Department of Defense, or by the big, profit-making tech firms. These are arguable complaints, but the alternatives—to leave it to chance, the Chinese government, or some well-heeled foundation—don’t seem more desirable. Dropping out is not an option.