24
1.
Jaron Lanier is a striking-looking man by anyone’s estimate. His ginger-colored dreadlocks always reach at least to his shoulders. He so resembles the self-portrait of Albrecht Dürer, painted at age thirty, that from Munich once, I sent a postcard of that painting to Lanier just for fun. When he was living in New York City, he sometimes had trouble getting cabs to stop for him because of his appearance. If he, Joe, and I went out to dinner together, afterwards I’d save us all trouble by stepping out to hail a cab—any cabbie will pick up a middle-aged white woman—and then give Lanier a big hug, and open the taxi door for him, to the driver’s astonishment.
Under those dreadlocks, Lanier has a sweetly cherubic face, the celestial effect enhanced by his habitual black or white attire. That cherubic face reflects a kind, gentle, and deeply decent soul within.
Jaron Lanier also has one of the most interesting minds in computing. The conventional wisdom is never Lanier’s wisdom. You might not agree, but you’re always stretched to argue.
We met first in the summer of 1985 when Lanier was chief scientist of a startup he’d founded called VPL. He confided the initials “sort of but not really” stood for virtual reality, a term he’s credited with popularizing. The firm was selling systems that produced make-believe reality, realized electronically.
For this, I donned a headpiece with tight-fitting goggles. Before my eyes, an electronic landscape appeared that Lanier kept assuring me he’d knocked together over the weekend, a Greek-like temple in a pastoral landscape. I wore a glove that let me interact with this landscape. Heights scare me, so I particularly remember being alarmed by edges I might fall from, staircases I needed to negotiate, objects that floated before me and needed to be swatted or grasped with my glove. I could tell myself firmly that it wasn’t real—the sketchiness of the landscape assured me of that—but my heart beat faster and I backed away when I saw dangerous edges or had to walk down imaginary staircases without banisters.
To an observer, people in the goggles and gloves looked clownish, stepping high over imaginary obstacles or batting at floating objects only they could see. Once you’d donned the helmet and glove yourself, it looked real enough. But only enough: the landscape was sufficiently suggestive for you to behave as if it were real, although you knew it wasn’t. VR would come to have applications in medicine, military training, PTSD treatment, and entertainment. Recently at the Jewish Museum in New York City, I visited an architectural exhibit that allowed viewers to see, through VR glasses, an architect’s furniture designs in imagined room settings.
Lanier and I immediately hit it off. We cheerfully argued philosophy with each other while Lanier’s businesspeople fumed—after all, real, not virtual, customers were waiting in the reception room. They needn’t have worried. Sitting with us, waiting to take his turn, was Alex Singer, a Hollywood producer, whom we also knew from an informal business network we belonged to. Singer would provide plenty of business for VPL and later, for Lanier as a consultant.
After VPL was disbanded (Lanier recounts its founding and complicated ending in his 2017 memoir, Dawn of the New Everything), Lanier left Silicon Valley and came to live in New York City, where Joe and I got to know him better. At what must have been tremendous expense and trouble, he brought along just a few of the unusual musical instruments he collects; they adorned his Tribeca loft like elegant sculptures. If I was lucky, he’d float among them and play a few, usually so exotic I’d never before heard the sounds they made. He knew their names, their histories, and their connections with similar instruments all over the world. One day I was in the Metropolitan Museum and stopped in the musical instrument collection. For a moment, I longed so strongly for Lanier to appear and explain some of them to me that I wasn’t even surprised when in fact he did appear—flowing dreadlocks, in white from head to toe, but with a friend, and apologetically, too busy to linger.
Lanier’s music is as fundamental to him as any technological skills he has. He often invited us to The Kitchen, an experimental New York City music venue where he regularly performed with other artists such as Philip Glass and Yoko Ono. One night we were lucky enough to join him for dinner with a young Sean Lennon, another musician he performed with.
The World Trade Center was very close to Lanier’s loft, and after the 9/11 attack, he was prevented from going home for weeks. When he was finally permitted back, he packed up the exotic instruments and returned to California, to our regret.
Any evening with Lanier was—and is—always a treat. Ideas explode; some of them even stay aloft. In the days when he was a visitor at Columbia, he was helping produce a kind of virtual reality system for heart surgeons that would eventually allow surgery without breaking the breastbone, a project that has gone on to real success.
One summer day in Santa Fe, I knew Lanier was in town—we planned to be together the following day—so it didn’t entirely surprise me to see him strolling toward me along Palace Avenue. I was with a friend and her very conventional visitor from Tulsa, Oklahoma. I stopped, gave Lanier a hug after not seeing him for a while, and we made last-minute plans for the next day.
When Lanier and I parted, Ms. Smugly Conventional of Tulsa said: “Well! What a strange-looking individual!” After an entire lunchtime of such stuff, I’d had enough. “I believe your husband just had heart surgery,” I snapped. “That strange-looking individual was probably responsible for your husband’s successful outcome.”
The following day, Lanier, Lena (who would later become his wife), Joe, and I drove to see another composer (Lanier not only plays every instrument under the sun, he also composes). This composer lived in a geodesic dome in the remote desert east of Santa Fe, an evocative journey for Lanier, who had himself once lived in the southern New Mexico desert in a geodesic dome he’d designed at about age thirteen, relying for its construction on a book he believed was sound but was in reality only describing “ongoing experiments.”
The drive that day was long and bumpy, over dirt roads, so Lanier entertained us by teaching us how to call goats, not as easy as you’d think. He’d herded goats to pay his college tuition after he’d skipped high school and gone straight from middle school to New Mexico State University in Las Cruces.
What was a young scientific genius like Lanier doing in southern New Mexico? His parents had both been immigrants, his mother a survivor of a concentration camp, his father an escapee from the pogroms of Ukraine. They’d eventually immigrated to the United States, where they met and married. Although Lanier was born in New York City, somehow the family found its way—fled?—to New Mexico, where his mother, trained as a dancer, supported the family as a kind of day trader. When Lanier was only nine, she died in an automobile accident. In Dawn of the New Everything, Lanier movingly describes his catatonic grief lasting for a year or so after her death. Father and son lived a nomadic life in tents, and then the geodesic dome. At age thirteen, Lanier persuaded New Mexico State in Las Cruces to allow him to take courses in science and music, and eventually he came to the attention of scientists like Marvin Minsky, who would later welcome him at MIT (Lanier, 2017).
Lanier has been consistent in his strong belief that computers and humans are not interchangeable in any significant way. We discuss it good-naturedly; in most respects, I agree. His earlier book, Who Owns the Future?, argues that you should be paid for any private information a corporation or government has about you, in the same way someone who uses any property of yours would compensate you. He even suggests that this might be a way of providing at least a minimum income for those who’ll inevitably be unemployed by technology (Lanier, 2013). Recently, he’s elaborated on that pay-for-use-of-intellectual-property argument in terms of AI: algorithms improve themselves by learning from human accomplishments. Don’t those humans deserve some compensation for their contributions to smart algorithms? (Brockman, 2014).
With AI’s present public prominence, Lanier has begun speaking out about AI itself. On the existential threat that some boldface names in the science and tech world have expressed about AI—for example, Elon Musk, Stephen Hawking, and Martin Rees—Lanier says that as much as he respects these scientists for their scientific accomplishments, he thinks they’re placing a layer of mystification around technology that makes no realistic sense. If, on the other hand, their anxiety is a call for increased human agency—let’s not allow bad things to happen with this new technology—then it serves a purpose. “The problem I see isn’t so much with the particular techniques, which I find fascinating and useful, and am very positive about, and should be explored more and developed, but the mythology around them which is destructive.”
This distinction between techniques and mythology is important. Of those layers of mythology, one of the most interesting is what Lanier sees as the confusion with religion, a magical, mystical thing. AI is not religion, nor is it mystical: its abilities rest on the work of thousands, maybe millions, of human intelligences, which are being used without financial compensation. Translators, for example, become part of a victim population, as do recording musicians or investigative journalists. Now AI becomes a structure that uses big data, but it uses big data. . .
. . . in order not to pay large numbers of people who are contributing. . . . Big data systems are useful. There should be more and more of them. If that’s going to mean more and more people not being paid for their actual contributions, then we have a problem. (Brockman, 2014)
Informal payoffs, as distinct from formal payoffs (royalties), are useless to people who actually have to pay the rent. With that I agree altogether, and I hope that the new explorations of ethics in AI will address this problem and find a fair, just, and ethical solution.
The mythology, Lanier believes, is a very old idea in a new costume:
To my mind, the mythology surrounding AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world. All the damages are essentially mirror images of old damages that religion has brought to science in the past. There’s an anticipation of a threshold, an end of days. This thing we call artificial intelligence, or a new kind of personhood . . . if it were to come into existence it would soon gain all power, supreme power, and exceed people. The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods—is similar to divinity. Not all ideas about divinity, but a certain kind of superstitious idea about divinity, that there’s this entity that will run the world, that maybe you can pray to, maybe you can influence, but it runs the world and you should be in terrified awe of it. That particular idea has been dysfunctional in human history. It’s dysfunctional now, in distorting our relationship to our technology. (Brockman, 2014)
And like many religions in the past, this mythology of AI exploits ordinary people in the service of the elite priesthood. Above all, it ignores human agency. We can shape our future legally, economically, and in matters of security.
As we’ll see, others also believe we can and are working to make that happen.
These days, Lanier is settled in a house in the Berkeley hills with Lena and their young daughter, Lillybell, along with a selection from his musical instrument collection on three floors up and down the hillside. He writes and commutes to Silicon Valley. The house is witty: the last long conversation we had there was over delicious Russian tea while I sat in their living room on a four-poster Chinese bed draped in red silk. I suspect Joe and I regretted on his behalf far more than he did that, thirty years after VPL, Facebook paid $2 billion for a new virtual reality startup, Oculus VR.[1]
2.
Lotfi Zadeh was one of the best arguments I know for the tenure system. He arrived at the University of California, Berkeley in 1959, and because he’d already been tenured at Columbia in electrical engineering, he received immediate tenure at Berkeley. He’d been a brilliant young student in Tehran, his family’s home. (However, he was born in Baku, Azerbaijan, where his Persian father was a foreign correspondent for an Iranian newspaper, and that city so captivated and influenced him that he wished to be buried there after his death, and was.) After college he made his way to the United States, where he received a master’s from MIT and a PhD from Columbia. He taught at Columbia for ten years until he moved to Berkeley.
In 1965, academically secure, Zadeh published his first paper on fuzzy sets, a system, he’d claim, that allowed you to say something was “almost” there, or “not quite,” or “very much” there. He once defined fuzzy logic to me as “a bridge between crisp, precise computer reasoning, and human reasoning.” It was a kind of approximate reasoning, which, Zadeh said, includes most everyday reasoning, such as where to park your car or when to place a telephone call.
Such problems can’t be precisely analyzed because we lack the information for precise analysis. Moreover, standard logical systems, he argued, have limited expressive power. High precision entails high cost and low tractability, as if you had to park your car within plus or minus one one-thousandth of an inch. Fuzzy logic, on the contrary, exploits the tolerance for imprecision. Fuzzy logic, he said finally, was easy to understand because it was so close to human reasoning.
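To make the contrast concrete, here is a minimal sketch of my own, not Zadeh’s formulation or notation: where crisp logic answers “is this temperature warm?” with a flat yes or no, a fuzzy set assigns a degree of membership between 0 and 1. The “warm” function below, and its thresholds, are purely illustrative assumptions.

```python
# Illustrative sketch only: a toy fuzzy membership function, not Zadeh's own code.
# Crisp logic: a temperature either is or is not "warm" (membership 0 or 1).
# Fuzzy logic: membership is a matter of degree, a value anywhere in [0, 1].

def warm(temp_f: float) -> float:
    """Degree to which temp_f counts as 'warm', using an arbitrary
    triangular membership function that peaks at 75 degrees F."""
    if temp_f <= 60.0 or temp_f >= 90.0:
        return 0.0
    if temp_f <= 75.0:
        return (temp_f - 60.0) / 15.0   # rising edge: 60 F -> 0.0, 75 F -> 1.0
    return (90.0 - temp_f) / 15.0       # falling edge: 75 F -> 1.0, 90 F -> 0.0

for t in (55, 68, 75, 82, 95):
    print(f"{t} F is warm to degree {warm(t):.2f}")
```

A fuzzy controller strings together many such graded judgments (“if the load is fairly heavy and the water fairly dirty, wash somewhat longer”), which is roughly the spirit of the fuzzy appliances mentioned below.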
To say this idea was greeted with puzzlement by mathematicians and computer scientists is to put it generously. The theoreticians didn’t know what to make of this strange logic, and if Zadeh felt he belonged among AI people, that camp was not merely puzzled but dismissive. Had Zadeh been a young assistant professor, following where his brilliant mind led, he’d eventually have been forced to follow a different career. With tenure, he was safe to stretch.
Joe and I met Lotfi Zadeh when we first came to Berkeley for one of our summer stays, and after we bought a Berkeley condominium, we saw the Zadehs often. Zadeh was slender, so spare that it seemed his flesh was barely sufficient to cover his skull, the cheekbones prominent, the forehead high and uncreased, large brown almond-shaped eyes that watched the world guardedly. He and his wife Fay were generous hosts, including us in dinner parties and their polyglot and musical New Year’s Eve parties. I liked Fay very much. A striking, nearly life-sized oil portrait of her, clad in a sweeping pink evening gown, hung over a staircase in their Berkeley home, and yet it hardly did her justice. Fay had an enviable gift for languages, so her friends included women who spoke German, French, Japanese, Farsi, and any number of other tongues in which Fay was fluent. She was extremely warm, extremely practical. She was also kept busy working as Zadeh’s personal secretary, because Berkeley was already on straitened budgets, and the amount of correspondence from around the world on fuzzy concepts was enormous—I’d see the stacks of letters on her desk when we visited.
Thanks not only to the success of fuzzy logic, but also to his diligent apostolizing (two-day round trips from Berkeley to Japan were commonplace), Zadeh was becoming famous around the world, if not yet in the United States. At dinner parties we usually met one or two of his foreign disciples, especially the Japanese, although followers came from everywhere. I wondered if fuzzy logic’s attraction was something I’ve mentioned about expert systems, that it offered a spectrum of opportunities: problems for the brilliant and less brilliant to tackle. That’s a formula for getting your ideas out in the world, something for everyone. As Zadeh himself had argued, fuzzy logic was close to human logic, so it was easy to understand. When I went to Japan and encountered fuzzy washing machines, fuzzy rice cookers, and fuzzy braking systems on the trains (I didn’t yet know about the fuzzy logic embedded in HDTV sets, camera focusing, and Sony palmtop computers), I commented on it to my Japanese hosts. They, in turn, were deeply impressed that I counted Lotfi Zadeh among my friends.
But the AI community in the United States still ostracized him. At a 1978 meeting of the International Joint Conferences on Artificial Intelligence in Boston, I stood in a hotel lobby among a group of AI people who’d been invited to dinner at a local professor’s house. Zadeh passed by, hesitated for a moment, then saw no one was going to invite him, and continued on hurriedly. I felt an acute flush of shame and misery—it wasn’t my party, so not my place to invite him, but the feeling was from that archetypal school birthday party, where some kids in the class are pointedly excluded.
Zadeh and I had lunch together one day in 1983. “Ah, my dear,” he said philosophically. “It’s a good news/bad news joke. The good news is that AI is working. The bad news is that it’s fuzzy logic.” He was right in an important sense: he was becoming not just famous, but acclaimed.
And appropriated. In London in June 1988, I came across an art exhibit at the Barbican of the newest, most promising young artists in France. One piece was called Information:Fiction:Publicité:Fuzzy Set. The artists, Jean-François Brun and Dominique Pasqualini, had mounted color photographs of clouds and sky in tall light boxes, hinting at fuzzy, hinting at whatever else. I sent the brochure to Zadeh to amuse him.
Zadeh was an ardent photographer himself, so no guest escaped the house without having a picture taken. Because the portraits on his wall were of famous people—Rudolf Nureyev in mid-grand jeté, Alexander Kerensky looking suitably melancholy at his life’s outcome—I didn’t mind holding that pose (whatever it was) every time I was with the Zadehs. Until he died, Joe had a picture of me in his office that Zadeh took in the 1970s, where I’m stretched out on the fine silk Persian rug in the Zadehs’ living room, wearing bright blue slacks and turtleneck against the scarlet medallion of the rug. When I needed an author’s picture for Machines Who Think, Zadeh took it in his Berkeley garden.
Over dinner one night, I heard this story from Tia Monosoff, who had been Zadeh’s student as an undergraduate in the 1980s. She was taking the final exam in Zadeh’s course in fuzzy logic, and for reasons she can’t remember, had a complete meltdown and couldn’t finish the exam. She walked out of the examination room and immediately called Zadeh, apologizing for and trying to explain her lapse. He began to question her about fuzzy logic, questions she could now answer fluently. “Okay,” he said after some minutes, “I’ll give you a B.” Expecting at best an Incomplete, or dreading worse, she was deeply grateful for his generous and wise understanding of how things can go awry.
In December 1991, I heard Zadeh give a lecture at Pasadena’s Jet Propulsion Laboratory and realized then, with people standing, crowding the aisles, that he’d become something of a legend. In that lecture, he showed viewgraphs of all the terrible things that had been said about fuzzy logic over the years. “Pamela McCorduck,” he added, nodding at me, “has written enough about artificial intelligence to become something of a fixture in that community, and has probably heard even worse.” (Actually, I hadn’t. Nobody even took the trouble to think of insults.) Fuzzy, he continued, was still pejorative in the United States but had high status in Japan, such that there were “fuzzy chocolates” and “fuzzy toilet paper.”
In short, Zadeh laughed at himself and won an already loving audience completely over to his side. “Funny Lotfi,” I wrote in my journal that night. “Laughing last and laughing best.”
Zadeh was always gracious and hospitable, but a membrane persisted between him and the world that was impossible to penetrate. The only time I came close was an evening when he called to invite me to dinner. I was alone and glad to see him. He too was alone. But although we two talked easily, he ate nothing. Why? I asked. He said deprecatingly, “A little medical procedure tomorrow.” So I called him the next day to see how it had gone. He was ecstatic I’d remembered—“cool remote Lotfi,” I wrote in my journal, “so humanly pleased with an ordinary human gesture.”
By the mid-1990s, fuzzy logic had proved itself even to the doubters—and Zadeh had lived to triumph. He received the Allen Newell Award, which is presented “for career contributions that have breadth within computer science, or that bridge computer science and other disciplines.”[2] This award came almost literally from the same group that had once excluded him from that Boston dinner (and everything else). He was inducted into the Institute of Electrical and Electronics Engineers AI Hall of Fame, became a Fellow of the Association for the Advancement of Artificial Intelligence (as well as a fellow of many other distinguished professional societies) and could count twenty-four honorary degrees from all over the world.
Joe and I took the Zadehs to lunch in Berkeley when we visited a few years ago. They were each ninety, and although they showed their age in little ways, Fay was still scolding him as Lotfichen, and Lotfi was as intellectually sharp—and personally opaque—as ever. Fay was to die early in 2017, and Lotfi died on September 6, 2017, at the age of ninety-six.
3.
In 1983, Gwen Bell, an innovative and gifted city planner, then married to computer architect Gordon Bell, adopted the modest corporate museum of the Digital Equipment Corporation and transformed it into The Computer Museum of Boston, relocated to Boston’s Museum Wharf. Its holdings would eventually form the kernel of Silicon Valley’s Computer History Museum. In the early days, Gwen began a popular fundraiser for the museum, a trivia quiz called The Computer Bowl, which pitted two teams of people well-known in computing, East and West, against each other, and was broadcast on a nationally syndicated television show, Computer Chronicles. When Bill Gates, then the CEO of Microsoft, participated in an early episode as a member of the West team (and earned MVP status), he was hooked, and for the rest of its nearly ten-year run, he was the quizmaster.
Gwen invited me to be captain of the East team in 1991. “You didn’t!” Ed Feigenbaum groaned to me on the phone. “What if you’re humiliated…?” I hadn’t thought of that. Entirely possible. Pie in the face, dunked into the tank: hey, it was a fundraiser.
The Computer Museum chose superb teammates for me: James Clark, vice president for high-performance systems of AT&T (and African American, unusual in the field); John Markoff, who then covered the computer industry for The New York Times and had written books about it; John Armstrong, vice president for science and technology of IBM; and Sam Fuller, research vice president of Digital Equipment Corporation (Nichols, 1991).
Because everyone would expect us to appear all uptight Eastern, tie and jacket, I suggested to my teammates we do the contrary. Maybe they’d wear whatever outfits they used for exercise? John Markoff rolled his eyes: I am not going on TV in bicycle shorts!
But we all got into the spirit. One team member surprised us with black satin team windbreakers, and under these we wore sassy tee shirts and easy pants. We each had on a baseball hat (backward, of course, fashion forward at the time). John Armstrong’s was the John Deere hat he wore when he mowed his lawn; James Clark wore a Top Gun lid. I’d asked a ten-year-old skateboarder to go shopping with me and found an oversize Daffy Duck t-shirt and skateboarding pants. I also borrowed my west coast nephew’s skateboard, plus his hat, which read: “If you can’t run with the big dogs, stay on the porch.” Decked out so, we had the audience laughing the moment we marched to our buzzers. It gave us a quick psychological edge over the business-attired, and openly astonished, West team.
I’d seen Bill Gates arrive earlier at the San Jose Convention Center, where the quiz was televised, and was surprised that he had only one assistant with him. I imagined that one of the richest men in the world would be surrounded by a phalanx of bodyguards and gofers, but no. This was a hang-loose Bill Gates who was very endearing.
The professional host, Stewart Cheifet, moved things along; Bill Gates asked the questions; the buzzers sounded; and the East team, led by their skateboarder captain, won handily, 460 to 170 (Nichols, 1991). I won MVP status, something to share with Bill Gates. Maybe our goofy outfits had relaxed us or maybe it was just our night. For sure we were lucky to have well-distributed arcane knowledge. I watched myself later, amazed at how cool, even snooty, I seemed. In fact I was determined not to be humiliated, as Ed Feigenbaum had warned, so I was nervous and concentrating hard.
“A computer historian!” Dave Liddle of the West team protested. “No fair! Of course they were gonna win!”
4.
In May 1986, an editor at Harper and Row, Harriet Rubin, surprised me. How did I feel about collaborations? My only collaboration up to then, The Fifth Generation, had been great fun. Moreover, I was surrounded by scientists who loved to collaborate, which not only amplified each individual’s work, but also made it less lonely. With the right coauthor, I was open. She told me John Sculley, who’d been brought in by Steve Jobs to run Apple and then fired Jobs, was looking to do another book like Lee Iacocca’s best-selling Iacocca: An Autobiography.
“Ah,” I said, “he wants a ghostwriter. I’m not sure I’m right for that.” No, he wanted a collaborator, was even “willing to share the credit.” I didn’t say no at once, but I was dubious. Still, it would be interesting to meet him. It might even be a provocative project. “There’s writing for the movies, and then there’s writing for the movies,” I wrote in my journal.
I reviewed Lee Iacocca’s as-told-to memoir. Iacocca had been the anointed successor to Henry Ford II at the Ford Motor Company, but capriciously, or feeling threatened, Ford had suddenly fired his crown prince. Iacocca was stunned, deeply hurt, but took the best possible revenge: he went to Chrysler, which was nearly bankrupt, turned it around, and Chryslers began to outsell Fords (Iacocca & Novak, 1984).
Iacocca had two splendid myths going for him—first, immigrant rags-to-riches (though his father had been the immigrant); and second, the crown prince exiled by the old king, who finds another kingdom to rule, with subsequent success even grander than the old king’s. But the underlying myth in Sculley’s conflict with Steve Jobs sounded like the old king slaying the crown prince: an idolized victim exiled, effectively slain, by a bean counter. At best, it was Cain and Abel. At this time, Sculley had not yet saved Apple, and Jobs had not yet found a kingdom where he could outdo Apple. It would be an enormous writerly challenge, but not impossible, I thought, if I put my mind to it.
Everyone seemed in a great rush. They’d want a manuscript at the end of the summer, and it was already mid-May. Another writer was doing Steve Jobs’s side of the story, so Sculley’s and my book would be a riposte, or better, a preemptive strike. That was possibly more interesting: at least it could aspire to human drama, and not be just another forgettable piece of businessman’s lore. Sculley was seeking someone who was strong-minded, who insisted her name be on the spines of books. If he’d wanted merely a journalist or a ghostwriter, thousands more pliant were available.
Jane Anderson, a young Englishwoman who was Sculley’s personal PR person, invited me to lunch in San Francisco. A Rosenkavalier of sorts, she was very open that they wanted me, and asked what would move me. That it be more than just another businessman’s book, I answered, and quoted Melville on mighty books, mighty themes. That seemed to please her. I wasn’t antibusiness; I thought great literature could come from anywhere, approached with suitable intelligence, complexity, and freshness. Sculley was very shy, very private, she said; he had begun in design—which surprised me—and had an avocation in science and technology. She gave me some further background, some of Sculley’s thoughts, some of his memos, and suggested I call Alan Kay for a reading on Sculley. Kay was very positive: a man “who loves ideas”; I’d enjoy working with him.
But the memos Anderson gave me were uninspired, and I still didn’t see any way of dealing with the underlying myth problem, except to portray Sculley as a rounded, vulnerable, contradictory human being, which I was sure no CEO in his right mind would permit. Early in his career, Sculley had turned from design to marketing because he saw that in corporate America, marketing made most of the decisions. Yet marketing values were shallow, soda pop values. Was this why Sculley could be lured to Apple, to return to meaningful values?
I was still mulling all this when John Sculley, Jane Anderson, and I finally had dinner together. Shy he might be; egoless he wasn’t. He wanted the book to be told in the first person (so by a ghostwriter after all) and wanted final say on every word. Although Kay had said Sculley loved ideas, I waited for a surprising idea from him, but none came. Perhaps he was saving them for the book. So I probed. Why a book, since most of us write for fame and fortune, and he had ample amounts of each? “I think I have a book in me,” he said, with a sweet ingenuousness that made me smile.
Aloud I mused on the underlying myths of the Iacocca book, a book he admired: rags-to-riches immigrant, old king banishes threatening crown prince. Then I examined the underlying myth of his own story: interloper banishes the crown prince; at best, Cain and Abel. This startled him. It isn’t insuperable, I told him, but it would be difficult, and would take some imagination to solve the problem. It couldn’t be just puffery, or people would ask why he’d got me to write it when he already had first-rate Silicon Valley PR people.
But I could see I’d lost him. Bean counter banishes crown prince? Cain and Abel? Who wants to be part of those tales?
Jobs was difficult—how difficult we wouldn’t really know until Walter Isaacson’s biography, Steve Jobs, was published after Jobs’s death. An impossible situation had developed between Sculley and Jobs at Apple in the mid-1980s, and at the time, Apple’s board of directors backed Sculley. Jobs had to go. Sculley would indeed preside over a period of great profitability in the late 1980s at Apple, although die-hard Jobs supporters argue that Sculley was cashing in on new products Jobs had already put into place. In the turning of the corporate wheel, Sculley himself was eventually ousted from Apple, and Jobs came back and made Apple into an even more profitable company, with products that were globally admired and emulated. Sculley went away to be an extremely successful entrepreneur, investor, and businessman.
I liked Sculley personally, and when I ran into him at Brown University two years later, where, as an alumnus, he was much involved with a new computer science building, I reintroduced myself and said I hoped we might find something to collaborate on. I’d just begun work on a book about art and artificial intelligence, and he said politely that he was looking forward to reading it.
Meanwhile, I’d met Steve Jobs, the exiled prince, at a cocktail party to celebrate the inauguration of his NeXT machine, which he hoped would indeed be the next big thing, a way of reclaiming his kingdom. To my embarrassment, Joe told Jobs the Sculley story. When he heard Joe repeat my phrase: “Iacocca was the prince who, in exile, bested the old king, but you banished the prince,” Jobs suddenly stopped being the gee-whiz kid. Joe brought him over to me for corroboration. Yes, I said, I’d presented this to Sculley as a problem for any writer who undertook to tell the story. But for Jobs, of course, it was his life. He grasped my hand. “You really said that to him?” he asked with great intensity. “Yes, of course.” Jobs’s young face was struggling with many emotions. He was nearly in tears. He didn’t let go of my hand. “Thank you,” he said gratefully. “Thank you for telling me that.”
5.
I sometimes wonder how the AI pioneers would regard present-day Silicon Valley. They’d be very pleased that AI is so prominent, highly honored, and pursued. They might perhaps be amazed that by mid-2018, the FAANG group of firms (Facebook, Amazon, Apple, Netflix and Google, AI firms all) was worth more than the whole of the FTSE 100, according to The Economist. They might be less enchanted by a culture that revolves so single-mindedly around making money. Each of AI’s four founding fathers lived modestly, in houses they’d acquired when they were new associate professors, houses where they’d brought their children up, where they ended their days. Science, not the acquisition of capital, drove them. Each of the four had strong if varying senses of social justice, and would be troubled by how much of machine learning learns, draws conclusions from, and reinforces unexamined social bigotries. That the social spirit of Silicon Valley mirrors the most retrograde of other commercial sectors, finance, would dismay them. They would have wanted something better, more honorable, of their brainchild.[3]
It would be left to me and many other women to be impatient, and then angry, with the sexism that dominates Silicon Valley. But that moment was still to come.
Meanwhile, AI had moved into that empty mansion of the humanities and wasn’t just cleaning it up and straightening things out, but was making major alterations, as we’ll see in the next part.
- VR was expected to transform video games, which it has. But as Corinne Iozzio notes in “Virtually Revolutionary,” an article she wrote for the October 2014 issue of Scientific American, VR technology is also being used widely in psychological treatments for post-traumatic stress disorder, anxiety, phobias, and addiction, and in aviation training. Later speculators imagine the boundary between humans and the world dissolving, a kind of late-stage Buddhism achieved instantaneously with a headset. ↵
- The quotation is from the ACM SIGAI web page for the Allen Newell Award, retrieved from http://sigai.acm.org/awards/allen_newell.html ↵
- And oh, the problems of casual, inept, cut-rate, or overweening applications of machine learning. Virginia Eubanks’s Automating Inequality (St. Martin’s Press, 2018) is a horror story of rigid and brittle systems ruling over and punishing the American poor. Andrew Smith’s article for The Guardian, “Franken-algorithms: The deadly consequences of unpredictable code” (August 30, 2018), deserves a book. Yasmin Anwar’s Berkeley News article “Everything big data claims to know about you could be wrong” (June 18, 2018) describes the follies of averaging over large groups in, say, medical outcomes. Clare Garvie’s story for The Washington Post, “Facial recognition threatens our fundamental rights” (July 19, 2018), speaks for itself. This threat is already operational in China, with 1 in 3 billion accuracy of face recognition, which monitors citizen behavior at a level where individuals earn “good citizenship points” for behaving exactly as the state wishes, and demerits when that behavior is considered bad. On the other hand, Thomas McMullan’s story for Medium, “Fighting AI Surveillance with Scarves and Face Paint” (June 13, 2018), shows that guerrillas are now inventing electronic scarves and face paint. In The Washington Post story “Microsoft calls for regulation of facial recognition” (July 13, 2018), Drew Harwell notes that Microsoft has officially called for government regulation of facial recognition software, as “too important and potentially dangerous for tech giants to police themselves.” ↵