32
1.
Forty years ago, Herb Simon put the question to me: if human values could be perpetuated, in the form of “beasties” which carried on those values flawlessly, would I agree to that? Because I didn’t say no at once, which surprised him, we never got to what we meant by human values. People still speak casually as if these values are immutable, when in fact they differ from culture to culture, and in any given culture, gradually (or sometimes suddenly) change.
Ethics evolve. Their conclusions are always provisional. We’ve enlarged and transformed human values, declared slavery an evil, condemned colonialism, and, within nation states, tentatively extended civil rights, educated our young, emancipated women. At times these values have moved in reverse. All this is incomplete.
We think we know: machines must never… and machines must always…
In all humility, we don’t know. Ethical principles are proposed by professional associations, government consortiums and other groups, even, these days, by employees of AI-based firms, and must be tested, refined, amended, and tested again. Such principles might be transposed into law, but the law is a blunt, slow, and imperfect instrument for the subtle swiftness AI exhibits. As we’ve seen again and again, the great and sobering effect of AI upon intellectual projects is that it requires executable code. To work at all, an ethical AI requires, and will continue to require, bracing specificity. It will demand generation, testing, revisions, and exquisite sensitivity to outcomes.
The first widely reported test of ethics in AI has come with social media—should its masters have insouciantly allowed endless invasions of individual privacy merely to fatten their already staggering profits? Should they have ignored the spread of political propaganda by malicious actors (and then concealed that manipulation)? It’s no secret that these malicious actors pose a real threat to electoral processes and democracy. Recent public rebellion against social media seems to say no, firms must exercise better control, but how are such firms to be compelled, or even curbed? Much AI research is proprietary, in private hands. To make individual humans behave ethically is difficult enough. To demand that enterprises behave ethically raises different obstacles and questions—the profit motive seems to be as strong as the sex drive. And again, whose ethics?
A few years ago, I came down with a puzzling physical condition. After several doctors could name it but not suggest a cure, I considered turning my medical records over to a Silicon Valley firm that claimed to read millions, not hundreds, of papers reporting medical and scientific findings, and thereby discover a remedy. But I didn’t turn those records over. Silicon Valley had already lost my trust.
Yet in 2018, a number of employees at Google risked their jobs—threatened to resign—by publicly protesting work pursued by the company’s executives: first work for the Department of Defense, and then an agreement for Google to enter the Chinese market by means of a product accommodating Chinese censorship (Conger & Metz, 2018). In these cases, Google executives reconsidered. One Defense contract, they decided, would not be renewed, and late in the year, they decided against competing at all for another $10 billion cloud computing contract at Defense. This was directly in response to employee concerns (Nix, 2018). Whether Google will enter the Chinese market, accommodating Chinese censorship rules, is unclear. Some 300 Microsoft employees protested their company’s contracts with the federal Immigration and Customs Enforcement (ICE) agency, especially during the summer of 2018, when that agency was separating immigrant children from their parents as a presumed deterrent to illegal immigration.
Microsoft, however, has announced that the firm will sell to the Pentagon whatever advanced technology it needs for a strong national defense, and Amazon has joined in. China, as competitor, as adversary, as a political system that is inimical to democracy and hostile to human rights, looms ever in the calculations. At the same time, Microsoft’s president, Brad Smith, has sued to protect customers’ personal data from the government, and he’s actively engaged in designing international agreements that would limit cyberweapons (Sanger, 2018).
I am of several minds about all this. After all these years, I feel attached to AI and share the revulsion young Google engineers and scientists feel about their work possibly being used to harm other humans. I hopelessly wish this were not a world that forced us to confront such possibilities. Why not a different world, of cooperation and even kindness? But if you’ve read this far, you see that I was born amidst the pounding of a war and kept safe in my cot because women and men I would never know were willing to risk and often sacrifice their lives to defend me. I cannot be a pacifist. I love the country where I’ve lived and been an active citizen for most of my life. Do I wish it came closer to its ideals? Yes, deeply. But I want the opportunity, the time, to push harder at making my country match those ideals. For the foreseeable future, then, I want the best possible defenses of this imperfect land.
Plausible arguments exist that despite local conflicts, what has kept general world peace these last seventy years has been mutually assured destruction in case of nuclear war. If this is so—and how can we really know?—can the same principle of mutually assured destruction in cyberdefenses keep the future peace?
Kai-Fu Lee (2018), the scientist and venture capitalist whom we met in Chapter 30, contracted a grave illness that brought him close to death, and compelled him to think about many of these issues. His spiritual guide, Master Hsing Yun in Taipei, helped open his eyes. He concludes his book, AI Superpowers: Silicon Valley and China, by sharing what he learned and what he imagines for a righteous AI future. He’d like to see not the elimination of the professions by AI, but their transformation, for instance turning physicians into compassionate caregivers who no longer need to hold in their heads all the knowledge a physician must now hold, but who have easy access to universal health knowledge via AI. Such compassionate caregivers will be well trained, but they can be “drawn from a larger pool of workers than doctors and won’t need to undergo the years of rote memorization that is required of doctors today. As a result, society will be able to cost-effectively support far more compassionate caregivers than there are doctors, and we would receive far more and better care”. He envisions this transformation for many of the professions. “In the long run, resistance may be futile, but symbiosis will be rewarded.” Unlike present human service jobs, these new jobs will be well paid, because the logic of private enterprise must alter. He argues that this alteration—a reversal of the emphasis on huge profits toward an emphasis on human service—is not just morally right, but self-protective. Yes, this kind of investment will need to accept linear rather than exponential returns, but such companies “will be a key pillar in building an AI economy that creates new jobs and fosters human connections.”
Public policy must be involved too. Lee writes:
I don’t want to live in a society divided into technological castes, where the AI elite live in a cloistered world of almost unimaginable wealth, relying on minimal handouts to keep the unemployed masses sedate in their place. I want to create a system that provides for all members of society, but one that also uses the wealth generated by AI to build a society that is more compassionate, loving, and ultimately human.
Instead of a universal basic income, Lee proposes a social investment stipend, a decent government salary to those who invest their time and energy in activities that promote a kind, compassionate, and creative society. These would include three broad categories: care work, community service, and education, providing the basis for a new social contract that valued and rewarded socially beneficial activities in the same way we now reward economically productive activities. This could put “the economic bounty of AI to work in building a better society, rather than just numbing the pain of AI-induced job losses.” He raises many questions about how such a change would be accomplished but believes the obstacles are not insurmountable.
Had Lee never experienced a terrifying diagnosis, harsh chemotherapy, the sharing of wisdom by his spiritual guide, and the love of his family, he writes, he might never have awakened to the centrality of love in the human experience. That made a simple universal basic income—a mere resource-allocation problem—seem hollow.
Meanwhile, I’ve already named a few problems AI presses us with: the loss of personal privacy, the lack of genuine diversity, the problem of making life better for everyone, not just the privileged few. Mark Zuckerberg, Facebook’s founder and CEO, calls for government regulation so it will be illegal for his firm and its rivals to gather, expose, or sell users’ private information. Personal privacy, though tied up with AI, is clearly the forerunner of how we as a society will respond to AI in general. For now, there is only public uneasiness; policy is unformed. We need to work hard at this: policy around privacy is our tryout. We won’t get it right the first time, nor the second. But each version will get closer to a righteous balance.
Machines smarter than we are, which (or who) might not share our values—why would we do this? Why have we dreamed of doing it for so long? What happens to us when they arrive? How can we control them? Where should boundaries be drawn between human concerns and machine powers? Where can they be drawn?[1] The cultural impulse toward them has been very long, very persistent: why? Might something be learned from such other intelligences?
Thinking fast, our first impulse, is no good here. AI brings on a host of the largest questions whose best answers will require collective thoroughness, collective experience, and time. Gradually, we might have to amend the social contract to include protection and responsibilities for humans and machines. A nonhuman intelligent entity raises wrenching questions that demand attention from well-trained ethicists, philosophers, spiritual leaders, computer scientists, historians, legal scholars, and many others, over a long period of time.
Can’t we just pull the plug?
Who’s we? Practically speaking, AI isn’t conducted by some small group of scientists sequestered on a mountaintop in the New Mexico wilderness, to whom we can say imperiously: never mind. This is an international effort and has been for decades. If one nation (or group) decides to forswear AI because of possible dangers, what other would cede the advantage along with it? Any group that unilaterally decides to give up AI would find itself in shocking arrears, intellectually, socially, politically, and economically. As Ed Feigenbaum liked to say, AI is the manifest destiny of computing. To give up AI, a nation would have to give up computing altogether. Back to index cards in shoeboxes? Street corner public telephones (connected to a pathetic electro-mechanical system)? Seat-of-the-pants flying? The end of medical and biological research? Give up your smartphone? Ain’t gonna happen.
Until now, only a handful of philosophers have taken AI seriously. Daniel Dennett, a philosopher at Tufts, always did—to the field’s great benefit—and a few years ago, David Chalmers, a philosopher at NYU, also took up some of the questions AI raises.[2] Nick Bostrom, a philosopher and cognitive scientist at Oxford, has written Superintelligence: Paths, Dangers, Strategies,[3] a crackling study of when human-level or better AI might appear (probably by mid-century, he says, though it could be somewhat earlier or later), what that might mean for us, and what strategies could shape AI to be desirable for humans rather than a peril to them. He calls for “a bitter determination to be as competent as we can, much as if we were preparing for a difficult exam that will either realize our dreams or obliterate them.” It is, he says, the essential task of our age.
One of them, certainly. Saving the planet looms at least as large. So does preserving democracy. So does relieving economic inequality.
I part ways with Bostrom on whether AI is unnatural or inhuman. Again, I think its creation is altogether natural, quintessentially human, inevitable. Complete control is a brittle illusion. But I agree with him and many others that a profound challenge looms. This challenge has never been a secret. In his 1976 address “Fairy Tales,” Allen Newell (1992) said it long ago: there are trials to overcome, dangers to brave, giants and witches to outwit. We must grow in virtue and in mature understanding; we must earn our prize. On the other hand, Eric Horvitz, an AI pioneer (an MD as well as a PhD), rightly reminds us that without AI, 40,000 patients per year die right now from preventable medical accidents, and thousands more die daily on roads and in cars that ought to be safer, to name just a few problems that AI could prevent. In India, millions suffer from retinal pathologies leading to blindness, which AI could readily diagnose. The list goes on.
A report from Freedom House, a U.S. watchdog of worldwide democracy, exposes the repressive computer systems the Chinese are exporting and offers some remedies: sanctions upon companies that knowingly provide technology designed for repressive crackdowns, passage by U.S. legislators of the Global Online Freedom Act, “which would direct the secretary of state to designate Internet-freedom-restricting countries and prohibit export to those countries of any items that could be used to carry out censorship or repressive surveillance,” along with requiring the companies that operate in repressive environments to release annual reports on what they are doing to protect human rights. Above all, the West needs to provide a better model of free information and protect citizens’ data from misuse by governments, firms, and criminals (Abramowitz & Chertoff, 2018).
A group of European scientists has offered a sobering (if sometimes self-contradictory) survey of major things that might go wrong, encapsulated in the title: “Will democracy survive big data and artificial intelligence?” (Helbing et al., 2017). The group specifically rejects any top-down solution to our problems derived from AI or anywhere else. Social complexity continues to grow, they argue, and collective intelligence, formed from pluralism and diversity, has the best chance to solve unanticipated problems that arise. Not only is pluralism likelier to offer solutions to the problems such complexity brings, but like biological diversity, social diversity offers us resilience. Sadly, as we’ve seen, neither Silicon Valley nor its Chinese counterparts are diverse.
The Europeans also argue for informational self-determination and participation, improved transparency to achieve greater trust, reduced pollution and distortion of information, user-controlled information filters, digital assistants and coordination tools, collective intelligence, and the promotion of responsible citizen behavior in the digital world through digital literacy and enlightenment. The group details how such principles might be enacted.
2.
AI researchers themselves have formally been studying the social and ethical problems their field presents for more than a decade. In 2009, Eric Horvitz, Technical Fellow and Director at Microsoft Research Labs and then president of the Association for the Advancement of Artificial Intelligence (AAAI), believed it was time for AI to become proactive instead of reactive. He commissioned, and cochaired with Bart Selman, a Cornell professor of computer science, a meeting at Asilomar, California. Modeled on a 1975 gathering of molecular biologists who’d met to consider the long-term prospects and dangers of their research, the 2009 meeting was called the Presidential Panel on Long-Term AI Futures. Like the molecular biologists, AI researchers knew that they understood the possibilities and problems of AI better than nonspecialists, especially politicians, and wanted to assume responsibility for guidelines that would keep their research safe and beneficial.
This committee of AI professionals addressed popular beliefs that AI would produce disruptive outcomes, whether catastrophic or utopian. Panel experts were skeptical about these extremes—the Singularity, the “explosion of artificial minds”—and assigned them a low likelihood, but agreed that methods were needed to understand and verify the range of behaviors of complex systems, which would minimize unexpected outcomes. Efforts should be made to educate a rattled public and highlight the ways AI enhances the quality of human life.
The panel also proposed ways AI could help in the short term: protecting individual privacy even as personal services improve, improving joint human-AI tasks, and improving the ways machines explain their reasoning, goals, and uncertainties. AI might become more active in seeking out and preventing malicious uses of computing.
Ethical and legal issues in AI were scrutinized, especially as machines become more autonomous and active in what’s called “high-stakes decisions,” such as medical therapy or weaponry. Many are deeply offended by the idea of robots in warfare, for example, and want them banned absolutely, but for others, the issue isn’t so clear-cut. We wish war would disappear and must work as hard as we can toward sustained peace. But if war doesn’t disappear, will it be ethically more desirable to sacrifice human life on the battlefield when robots might be suitable substitutes? (I say nothing about the problems of understanding robot decision-making in real time, let alone controlling them.) And what about human relationships with systems that synthesize believable affect, feelings, and personality—those robots that read our faces and react “appropriately”? The panelists called for the participation of ethicists and legal scholars to help work through all these problems.[4] With some melancholy I note that a decade later, all these are still vexing problems (Horvitz, 2009).
Convinced that this was a vital effort and would be for years to come, in December 2014 Eric Horvitz and his wife Mary announced a large gift to Stanford University for a hundred-year study of the social and ethical issues surrounding AI, establishing AI100, a program to support ethicists, philosophers, AI researchers, historians, biologists, and anyone else whose work might be relevant. AI100 is overseen by a standing committee of distinguished AI researchers, its detailed agenda available on its website.[5] The study group’s work was to be issued every five years. The Horvitzes believe that the social and ethical problems AI raises won’t be solved once and for all—this must be a century-long effort, co-evolving with AI itself.
In late 2016, AI100 issued its first five-year study, Artificial Intelligence and Life in 2030 (Stone et al., 2016). Among the topics taken up are technical trends and surprises, key opportunities for AI, the delays with technology transfer in AI, privacy and machine intelligence, democracy and freedom, law, ethics, economics, AI and warfare, the criminal uses of AI, AI and human cognition—the list is long.
The measured tone taken by the AI100 blue ribbon panelists, all of them eminent researchers in AI and related fields, didn’t make headlines but might comfort the anxious. (Because much of the AI we encounter now in our everyday lives is based on research done twenty or more years ago by these scientists, they deserve our attention.) Yes, the science enables “a constellation of mainstream technologies that are having a substantial impact on everyday lives” such as video games, a bigger entertainment industry than Hollywood, practical speech understanding on our phones and in our homes and living rooms, and new power for Internet searches.
But—these technologies are highly tailored to specific tasks. General intelligence is very distant on the horizon. Thus the report focuses on specific domains: transportation, service robots, healthcare, education, low-resource communities, public safety and security, employment and workplace, and entertainment. The report limits its scope to thirty years: the achievements of the last fifteen years and developments anticipated within the next fifteen. The report makes some policy recommendations. It isn’t dry reading (in fact, it’s exciting); it just doesn’t deal in headline-making fantasies.
Some topics jump out: AI with neither commercial nor military applications has historically been underfunded, but targeted incentives and funding could help address the problems of poor communities—beginning efforts are promising. The State of Illinois and the city of Cincinnati use predictive models to identify pregnant women who might be at risk for lead poisoning or sites where code violations are likely. AI task-assignment scheduling and planning techniques have been used to redistribute excess food from restaurants to food banks, communities, and individuals. AI techniques could propagate health information to individuals in large populations who would otherwise be unreachable.
In the workforce, AI seems to be transforming certain tasks, changing jobs rather than replacing them, and creating new jobs, too. Humans can see disappearing jobs but have a harder time imagining jobs yet to be created. The report says:
Because AI systems perform work that previously required human labor, they have the effect of lowering the cost of many goods and services, effectively making everyone richer at least in the aggregate. But as exemplified in current political debates, job loss is more salient to people—especially those directly affected—than diffuse economic gains, and AI unfortunately is framed more as a threat to jobs than a boon to living standards. (Stone et al., 2016)
In the longer term, AI may be thought of as a radically different mechanism for wealth creation in which everyone should be entitled to a portion of the world’s AI-produced treasures. “Policies should be evaluated as to whether they foster democratic values and equitable sharing of AI’s benefits, or concentrate power and benefits in the hands of a fortunate few.”
Changes, yes: living standards raised, lives saved. But no imminent threat to humankind is seen, nor is one likely to develop soon. The challenges to the economy and to society itself, however, will be broad. A long section of the report is devoted to public policy issues around AI and is frank: without intervention, AI could widen existing inequalities of opportunity if access to the technologies is unfairly distributed across society. “As a society, we are underinvesting resources in research on the societal implications of AI technologies. Private and public dollars should be directed toward interdisciplinary teams capable of analyzing AI from multiple angles.” Pressure for more and tougher regulation might be inevitable, but regulations that stifle innovation would be equally counterproductive.
The real impediments and threats aren’t scientific but political and commercial. Sadly, we can expect few politicians to read or act wisely upon such a report right now. As for the private sector, which engineers and managers at Facebook had read these recommendations before they allowed adversarial actors to mount a wholesale attack on American political discourse in the 2016 elections? How did those engineers and managers then square with their consciences the act of concealing their knowledge until forced by investigative journalists to admit the facts?
Perhaps more firms should establish institutional review boards, like those of universities and research hospitals, to examine proposed research and protect the humans who might be harmed by such research or business practices, but that seems a weak response to grave threats. In a capitalist society, it seems the only way to get firms to change their ways is for customers to vote with their feet. But if customers don’t know what the choices are, how are they to vote?
Meanwhile, AAAI, the professional AI group, has established an annual conference on AI, Ethics, and Society, which calls for papers on such topics as building ethically reliable AI systems, moral machine decision-making, trust and explanation in AI systems, fairness and transparency in AI systems, AI for the social good, and other germane topics. In 2019, a young Chinese company called Squirrel AI Learning, which specializes in AI to enhance human education, announced the establishment of a million-dollar annual prize for artificial intelligence that benefits humanity, to be administered by the Association for the Advancement of Artificial Intelligence. Although Squirrel AI Learning’s own commercial focus is on education (the firm’s teacher-plus-AI programs had won, among other prizes, the 2018 annual innovation award from Bloomberg/BusinessWeek), the firm’s president declared that the prize is for AI innovations across all sectors. Its eye-catching largesse—at the level of the Nobel and the Turing—is meant to persuade the public that AI has great benefits, and to coax researchers to apply their intellectual resources toward such benefits.
“With great code comes great responsibility,” begins an announcement of a new competition called the Responsible Computer Science Challenge (Mozilla blog, 2018), sponsored by a consortium that includes the Omidyar Network (eBay), Mozilla, Schmidt Futures (Eric and Wendy Schmidt, he of Google), and Craig Newmark Philanthropies (Craigslist). The challenge aims to bring ethical education into computer science classes at the earliest moment and has two stages. The first seeks concepts from professors or teams of faculty and students for integrating ethics deeply into existing computer science courses. Stage 1 winners will receive $150,000 apiece to pilot their ideas. Stage 2 will support the dissemination of the best of the Stage 1 programs; winners will be announced in 2020 and receive $200,000 each to accomplish their goals. A distinguished committee of ethicists, computer scientists, and others will judge the projects.
3.
Just after the establishment of AI100, a meeting was held in January 2015 in Puerto Rico, organized by the Future of Life Institute (FLI), a group of scientists and citizens concerned about issues presented by technology and especially AI. Elon Musk of Tesla put up $10 million to fund studies, although the Institute’s agenda is less detailed than that of AI100. The public face of FLI roared extravagantly of threats, alarms, and catastrophes. Musk, Stephen Hawking, and Stuart Russell (an AI researcher at Berkeley and coauthor of the important textbook Artificial Intelligence: A Modern Approach, who has himself publicly compared AI to atomic weapons) promoted an “open letter” in the summer of 2015, which argued for a ban on autonomous weapons, a term that became “killer robots” in the media. The letter soon collected tens of thousands of signatures and was presented to the U.N.[6] A later, more detailed letter is on the Future of Life website: https://futureoflife.org/ai-open-letter.
Although some people think of these two efforts as competitive, the one slow and steady, the other a bright flash of male-gaze egos, Eric Horvitz believes in diversity. He helped organize the Puerto Rico program and describes each program as having related but distinct goals. The FLI program addresses fears of AI and safety, whereas the One Hundred Year Study “casts a broader focus on a wide variety of influences that AI might have on society. While concerns about runaway AI are a topic of interest at AI100, so is the psychology about smart machines,” he told me.
The Future of Life Institute’s original open letter was sloppily written, with yawning loopholes had it somehow been transmuted into law (enacted by whom? enforced by whom? with what sanctions? exceptions for national defense?). But psychologically, it represents something deeper. Those sensational earlier statements (“calling up the demons,” “the end of the human race,” “as dangerous as atomic weapons”), the open letter’s language and promotion, are altogether Dionysian in the way Nietzsche meant the term: that human embrace of the irrational, the extreme, the ecstatic and destructive, the terrifying darkness of the human psyche.[7] But Nietzsche was at pains to remind us that the Dionysian is as much a part of dualistic human nature as the Apollonian: rational, measured, illuminated, pursuing beauty and joy. Our truth is we are—and need—both. Six months after its establishment, the FLI reported distributing $7 million in grants to fund research toward its goals.
The Apollonian AI100 at Stanford will not be spending a million dollars a year funding incoming proposals. Horvitz told me:
We have a different model. On funding levels, there have been offers of engagement and deeper funding to AI100 from others, including corporate sponsors. I’ve additionally suggested raising the level of funding by upping my own philanthropy. So far, the committee has come back with “stand by—we don’t need additional funding,” as they believe they have enough to work with—with the endowment (which is actually set up to fund studies of a thousand years and beyond). My sense is that it’s not the sheer funding level that’s important, but the ideas, scholarship, programs, and balance, and the people attracted to and sought out by the project.
One more way to disinfect with sunshine: the eponymous Craig Newmark, of Craigslist, has given $20 million to a startup website called The Markup, whose purpose is to investigate technology and its effect on society (Bowles, 2018). Its editors are Julia Angwin, part of a Pulitzer Prize–winning team at The Wall Street Journal, and data journalist Jeff Larson. They both also worked for ProPublica. (Three million dollars had also been raised from several other foundations, including the Ethics and Governance of Artificial Intelligence Initiative.) The site would employ programmers and data scientists as well as journalists, with three initial focuses: how profiling software discriminates against the poor and other vulnerable groups; Internet health and infections, like bots, scams, and misinformation; and the awesome power of tech companies. Each editor was experienced in examining algorithms (“increasingly . . . used as shorthand for passing the buck,” Larson said) for unintentional biases—showing, as they did, how criminal sentencing algorithms were unintentionally racist, how African-Americans are overcharged for auto insurance, and how Facebook allowed political ads that were actually scams and malware.
This seemed a grand idea until, a year later, Julia Angwin was forced to resign, and a majority of the newly hired staff resigned with her in protest. The future of The Markup remains to be seen.
In On Liberty, in 1859, John Stuart Mill wrote: “It is as certain that many opinions, now general, will be rejected by future ages, as it is that many, once general, are rejected by the present.” Thus the time scale of AI100 is significant.
These major efforts are complemented by independent queries in many countries—at universities in departments of philosophy, law, and computer science, and in think tanks like the United Kingdom’s Centre for the Study of Existential Risk at Cambridge University or the National Academies in the United States. Silicon Valley itself has put together OpenAI, a not-for-profit research company “that aims to carefully promote and develop friendly AI in such a way as to benefit humanity as a whole. The organization aims to freely collaborate with other institutions and researchers by making its patents and research open to the public. It’s supported by over $1 billion in commitments, but that sum will be spent over a long period.”[8] I’ve mentioned the annual Conference on AI, Ethics, and Society.
We humans are thinking about it. Whether we or our policymakers are up to the task of making it work for us in terms we value is another question. Whether engineers and managers think about it, carried away as they might be by the next big thing, isn’t apparent, though we’ve seen lively protests erupt from people who work in Silicon Valley. The basic researchers in the field have been examining the social implications of the field for a while and continue to do so.[9]
A consensus might possibly build to halt AI, but to repeat, that consensus must be worldwide and deeply thought out over a long period. It can’t be merely the snap judgment of the privileged. At the very least, it’s unseemly for the privileged to bar research when AI might provide knowledge, abundance, and ease for that great majority on the planet who are now without. Self-righteousness is not sufficient: in the past, great ethical thinkers ferociously supported slavery, misogyny, racism, and homophobia, to name just a few of the ethical stances we’ve tried to evolve beyond. But that evolution took time and is incomplete.
A good ethical stance attends to both the inner and outer worlds. The outer world has the goal of seeing that the stance is effective: policymakers must not only be made aware of any consensus, but be persuaded to act on it. The inner world examines itself to be sure that its members are truly diverse and representative of the constituencies that will be affected. Whose assumptions about reality are included? Members of any panels or committees need to be deeply probed to guard against outcomes that are merely personal preferences writ large.
A good ethical stance also distinguishes among reality now, the normative, and the desirable. It considers what is, what ought to be (the content), and how to achieve what ought to be (strategies). Emergencies of the moment can suck up resources and contract the scope of the ethical stance instead of expanding it.[10]
When it comes time to turn decisions over to the experts, we need to know who they are. What are their goals? What’s their individual character? Are they worthy of our trust? If ever a project presented itself as perfect for a synthesis of the humanities and the sciences, this surely qualifies.
We’re on a long, difficult, but exhilarating journey.
- Oren Etzioni proposes three rules for AI “inspired by, yet develop further, the ‘three laws of robotics’ that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being, or through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.” Etzioni’s developed laws are: An AI system must be subject to the full gamut of laws that apply to its human operator; an AI system must clearly disclose that it is not human; an AI system cannot retain or disclose confidential information without explicit approval from the source of that information. Etzioni, “How to Regulate Artificial Intelligence,” New York Times, September 1, 2017. ↵
- Or at least those alarmed by the notion of the Singularity, an idea that has always struck me as simplistic. David J. Chalmers, “The Singularity: A Philosophical Analysis,” Journal of Consciousness Studies 17 (9–10), pp. 7–65, 2010. This was followed by responses from many: “The Singularity: Ongoing Debate Part II,” Journal of Consciousness Studies 19 (7–8), 2012. My own contribution said in essence that people who actually faced the problem would be in the best position to deal with it. ↵
- Bostrom’s book also contains some hair-raising future scenarios of AI gone rogue. ↵
- You’re invited to lose sleep over an essay by Sarah A. Topol, “Attack of the Killer Robots,” https://www.buzzfeed.com/sarahtopol/how-to-save-mankind-from-the-new-breed-of-killer-robots?utm_term-ronLOqqXlb#.hfJJRYY45D. Or a video that was making the rounds in 2017 from Stuart Russell: https://www.theguardian.com/science/2017/nov/13/ban-on-killer-robots-urgently-needed-say-scientists ↵
- The AI100 website address is https://ai100.stanford.edu/ ↵
- One response to that came from Evan Ackerman, editor in chief of IEEE Spectrum, in “We Should Not Ban ‘Killer Robots’ and Here’s Why,” which appeared in the July 29, 2015 issue: “The problem with this argument is that no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots,” Ackerman began. He cited the toys already available at small cost and assumed market forces would only make them better and cheaper. Thus autonomous armed robots need to be made ethical, he argued, because we can’t prevent them from existing. But they could make war safer: because they can be programmed, armed robots can perform better than humans in armed combat, and their accuracy and ethical behavior can be improved by reprogramming (which humans cannot be). “I’m not in favor of robots killing people. If this letter was about that, I’d totally sign it. But that’s not what it’s about; it’s about the potential value of armed autonomous robots, and I believe that this is something that we need to have a reasoned discussion about rather than banning.” Jerry Kaplan of Stanford wrote an op-ed piece in The New York Times, “Robot Weapons: What’s the Harm?” (August 17, 2015), saying approximately the same thing. A later argument is that the United States has no choice but to increase its technological advantages, however fleeting and however difficult, given that so much AI research is for profit in the commercial sector. AI warfare is unprecedented, and we are hardly prepared for it. Matthew Symonds, “The New Battlegrounds,” The Economist, January 27, 2018. ↵
- For a deeper look at the founding and goals of the Future of Life Institute, along with some interesting scenarios of a future saturated with AI, see Max Tegmark’s admirable Life 3.0: Being Human in the Age of Artificial Intelligence (2017). Recall Tegmark’s schema of three stages of life: Life 1.0 is simple biological evolution, which can design neither its hardware nor its software during its lifetime and can change only through evolution. Life 2.0 can redesign much of its software (humans can learn complex new skills, like languages or the professions, and can update their worldview and goals). Life 3.0, which doesn’t yet exist on Earth, can dramatically redesign not only its software but its hardware as well, instead of being delayed by evolution over generations. ↵
- Melinda Gates, the philanthropist, and Fei-Fei Li, on leave as head of the Stanford University AI Lab and acting as chief scientist of artificial intelligence and machine learning for Google Cloud, have recently formed AI4All, which aims to bring much more human diversity into AI research: more women, more people of color. “As an educator, as a woman, as a woman of color, as a mother, I’m increasingly worried. AI is about to make the biggest changes to humanity, and we’re missing a whole generation of diverse technologists and leaders,” says Li. ↵
- In his 2017 presidential address, Thomas Dietterich, the president of AAAI, argued that AI technology is not yet robust enough to support its emerging applications and proposed steps to remedy this. See “Steps Toward Robust Artificial Intelligence” in the Fall 2017 issue of AI Magazine. ↵
- I’m grateful for thought-provoking and, yes, entertaining talk on this topic with Larry Rasmussen, the Reinhold Niebuhr Professor Emeritus of Social Ethics, Union Theological Seminary. ↵