I’ve never known a scientist more singularly driven than Allen Newell. Science, and science alone, drove him. Newell was probably the purest scientist—someone who adored doing science for its own sake—that I’ve ever met. He honored work above anything else. He was subtly—and sometimes not so subtly—dismissive of people who didn’t share that value. His scientific, intellectual, and moral stature was such that if you thought there might be more to life than work, you felt slightly shamed by your shabbiness.
At least this was his public persona. In fact he read widely (several times I interrupted him and Noël reading aloud to each other in the evening; once The Lord of the Rings) and he was not to be reached during Monday Night Football, especially if his beloved Steelers were playing.
Newell’s work in computing was wide and deep. At the end of his life, he declared that his career had been devoted to understanding the human mind. This he’d done in a series of computer programs that, as Herbert Simon put it, “exhibited the very intelligence they explained.”
Newell and Simon produced the first working AI program, the Logic Theorist, described in Chapter 9. They collaborated in designing the early list-processing language called IPL-V. Although it was superseded by John McCarthy’s more elegant Lisp, IPL-V laid out the paradigm of lists of lists for both the representation and the solution of nonmathematical problems by the computer. The Logic Theorist and their next program, General Problem Solver, codified some heuristics that humans used to solve problems, such as means-ends analysis (“Here’s my goal; what’s the best way to reach it?”), backward chaining (“If I were at the goal, what steps would I have needed to arrive there?”), and the identification of subgoals that moved the program further toward the main goal. At the time, these two programs succeeded enough to confirm what we’d believed since at least Aristotle: reasoning was the glory of human intelligence.
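Means-ends analysis can be sketched in a few lines. The toy solver below is mine, not Newell and Simon’s General Problem Solver (which was written in IPL); the facts, operators, and goal are hypothetical. But the mechanism is theirs: find an operator whose effects reduce the difference between the current state and the goal, then subgoal on that operator’s preconditions.

```python
# A minimal sketch of means-ends analysis in the spirit of GPS.
# States and goals are sets of facts; each operator lists its
# preconditions, the facts it adds, and the facts it deletes.
# These example operators are entirely hypothetical.
OPERATORS = [
    # (name, preconditions, adds, deletes)
    ("buy-fare", {"have-money"}, {"have-fare"}, {"have-money"}),
    ("take-bus", {"have-fare"}, {"at-work"}, {"at-home"}),
]

def achieve(state, goal, depth=5):
    """Pick an operator whose effects reduce the difference between
    state and goal, then subgoal on its preconditions (backward chaining)."""
    if goal <= state:
        return state, []                 # no difference left: done
    if depth == 0:
        return None                      # give up on runaway recursion
    for name, pre, adds, dels in OPERATORS:
        if adds & (goal - state):        # operator addresses the difference
            sub = achieve(state, pre, depth - 1)      # satisfy preconditions first
            if sub is None:
                continue
            mid, prefix = sub
            new_state = (mid - dels) | adds           # apply the operator
            rest = achieve(new_state, goal, depth - 1)
            if rest is not None:
                final, suffix = rest
                return final, prefix + [name] + suffix
    return None

state, plan = achieve({"at-home", "have-money"}, {"at-work"})
# plan is ["buy-fare", "take-bus"]: buying the fare is the subgoal
# identified on the way to the main goal of arriving at work.
```

The essential move, which the Logic Theorist and GPS made explicit, is that the solver reasons about the *difference* between where it is and where it wants to be, rather than blindly enumerating possibilities.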
In 1972, Newell and Simon published Human Problem Solving, a massive and highly influential book in cognitive psychology that explored the ways humans solved various kinds of problems—slow thinking—based on what psychologists had elicited from human subjects who spoke aloud while solving problems. For example, a subject trying to guess the next number in a series might say, “No, I won’t choose four, because it’s already come up so many times that I doubt it’s going to come up again.”
For Newell’s work toward a general theory of the mind—the human mind, where the computer was a means to embody his theories—he’d receive many prizes and honors, including the American Psychological Association Award for Distinguished Scientific Contributions, membership in both the National Academy of Sciences and the National Academy of Engineering, and honorary degrees. He delivered the 1987 William James Lectures at Harvard, won the National Medal of Science, and received with Simon the Turing Award, computing’s Nobel equivalent. But, as Newell once laughingly said to me, by the time the awards come, the game is really over. It wasn’t; it still isn’t; but it amused him to say so.
Although understanding the human mind was Newell’s great goal, he yielded to several diversions, which produced major science and technology in their own right. With Gordon Bell, a giant in computer design, he worked on computer architecture, and the two published an influential book in 1971. He played a major role as advisor to ARPA’s speech-recognition program in the early 1970s. With scientists at Xerox Palo Alto Research Center (PARC), he worked on the psychology of human-computer interaction. That work fed into the 1973 Xerox Alto, a machine with a graphical user interface, controlled by a mouse (an inspired idea of Douglas Engelbart), and a forerunner of many of the personal computing environments that followed, such as the Macintosh. Along the way, Newell also wrote a series of papers and several books.
In the late 1970s and 1980s, Newell was not only filling out his research toward a unified theory of cognition but also working doggedly on hypertext and computer networking, so that Carnegie Mellon became one of the earliest of the wired campuses. His ultimate work, unfinished at his death, was that unified general model of human cognition, called Soar, which in many prescient ways anticipated today’s major AI programs, such as Google Brain, Nell, and AlphaZero.
Newell was a big man with a round face that broke into an easy grin. His porcelain dome seemed to signify intelligence itself. White sideburns popped out cheekily from beneath the earpieces of his glasses. His arms were so long, he had his shirts custom-made. I can see him shambling down the halls at Carnegie Mellon, a high-school football player’s body matured into softer stuff, stopping to talk to graduate students and colleagues. In meetings, he knew how to listen, but also how to get quickly to the heart of the matter. (His two characteristic phrases became common currency in our household: “That’s not entirely ridiculous,” meaning worth some consideration, and “Are we there?” which meant he thought the decision was arrived at, the problem solved—quickly and usually wisely, in his case.) He was a composer and conductor of the symphony of a profound mind, leading you along new paths, beguiling you with the sheer audacity of his ideas.
Soar was Newell’s last big idea and possibly his most audacious. He aimed to construct a scientific theory of mind, detailed and encompassing, with hierarchical layers that were to explain mind from the lowest, the neural level, to the highest, the symbolic manipulation level. That top level included rationality and creativity. The PhD students and postdocs who worked with him on Soar—John E. Laird, now a professor at the University of Michigan and a founder of a firm to commercialize Soar; and Paul Rosenbloom, a professor of computer science at the University of Southern California—went on with their students to develop the model extensively. At a 2014 event called the AI Summit, where AI leaders discussed what the next great steps should be, Kenneth Forbus, a distinguished AI researcher from Northwestern University, named Soar as an example of the kind of grand thinking that once propelled AI research and should again.
Newell was at pains to point out that Soar was a scientific model, not mere metaphor. For decades, the computer-brain comparison had been commonplace: “giant brains,” early journalists prattled. But Newell wanted to move beyond metaphor to scientific model. He warned, “To view the computer as only metaphor is to keep the mind safe from analysis.” Although one must always acknowledge the necessity of approximation and the inevitability of error, a scientific model tries to describe its subject matter directly, not metaphorically.
Newell described this daunting task in his eight William James Lectures at Harvard in 1987. Soar was different from philosophical theories of mind because Newell and his students were trying to instantiate, in executable computer programs, the nature of each level of the model, matching and modifying them to conform with what psychologists then knew about human cognition and the ways one level interacted with adjacent levels, above or below (Newell, 1990). Grand theories of mind are still problematic, but we now know human brains seem to work in a way similar to Newell’s proposed multilayer model. Today, when neuroscience and AI are rapidly driving each other to achieve better, more accurate models of human cognition, I regret that Newell didn’t live to see it.
Soar was an epic undertaking. But audacity, thinking big, was characteristic of Newell and how he wooed his colleagues. One day, as I was interviewing him for Machines Who Think, he speculated that the idea of a physical symbol system—a system that could think embodied in some physical way—was, in its implications, as profound for understanding mind as the idea of natural selection had been for biology. He saw my face light up. “You like that,” he declared, didn’t ask. Yes, I liked that, as he knew I would.
Yet Newell was aware that to analyze mind by means of a computer would also be to synthesize intelligence in a hardier medium than flesh and blood. Once, in the mid-1970s, he repeated what was common currency then (and now, since I saw it in a 2013 book by Eric Schmidt and Jared Cohen, The New Digital Age): in the future, machines would do what they do best, and humans would do what they do best. “But that’s bullshit,” Newell said, a man not given to coarse language. “The machines will just keep on getting smarter and smarter. There won’t be much left for humans to do.”
The Newells had deep roots in the San Francisco Bay Area. Newell Avenue in now-urbanized Walnut Creek, across the bay from San Francisco, is named for a cousin of Newell’s father, who owned a large orchard in the area, a place where Allen spent many days of his boyhood. Newell’s father, Robert, was an eminent professor of radiology at the Stanford Medical School when it was still located in San Francisco, where Allen grew up.
“He was in many respects a complete man,” Newell said of his father. “He’d built a log cabin up in the Sierra. He could fish, pan for gold, the whole bit. At the same time, he was the complete intellectual….He was extremely idealistic. He used to write poetry. He thought that friendship was so important that he consciously cultivated his friends. He made regular appointments to see them. He actually thought this was important.” Newell said this last with a sense of wonder and just a touch of skepticism.
“And I got my strong sense of ethics from my father,” Newell said with some ambivalence, as if it had cost him pleasures, burdened him uncommonly. He offered the statement to me like a flaming brand, a keen double-edged sword: would I seize it? Come up to his impossible standards? That challenge may have been unconscious, but I felt it nevertheless.
After Allen Newell’s death in 1992, Herb Simon wrote an affectionate and scientifically detailed memoir of Newell for the National Academy of Engineering and mentioned that Newell had met Noël McKenna at Lowell High in San Francisco, where, at age sixteen, they’d fallen in love. They married when Newell was twenty. “The marriage demonstrated that Allen and Noël were excellent decision-makers even at that early age, for they formed a close and mutually supporting pair throughout the forty-five years of their marriage” (Simon, 1997).
It’s not quite so straightforward. When I could coax Noël off her perennial sickbed, where she’d been felled yet again by chronic migraines, to come and have lunch with me, I’d hear a somewhat different tale.
First, the socially prominent Newells were appalled by Noël McKenna as a wife for their young scion. In their view, she was nobody from a nothing family. Shortly before the start of the Great Depression, in Noël’s infancy, her mother died, her father deserted the family, and she was thrust upon a struggling widowed aunt to be brought up, considered just another unwelcome mouth to feed. The Newells did everything they could to oppose the marriage. Allen Newell, who was in love and willful, married her anyway. When we met, Noël was ethereally beautiful with perfectly molded cheekbones, large sad brown eyes, and prematurely grey hair pulled back prettily in a low knot. She was delicately built with finely formed hands and a soft but not girlish voice. I saw her once in the gym and marveled aloud to her that she didn’t have an ounce of extra flesh on her. “Lovely Noël,” I wrote in my journal, “like a delicate fern, beautiful and fragile. Yet in many ways I hardly know her. Is she sensual? Does she have a temper?”
Noël was ridden by self-doubt. In her Dickensian childhood, her aunt often threatened to throw her out on the street, where she’d have nothing, did she understand that? She did. She fled to Allen and his love as the first reliable shelter she’d ever known. She loved Allen for the rest of her life, and Allen did love her deeply—all his life. But he loved science perhaps as much; maybe, she sometimes thought, more. The shelter of his love became for Noël Newell a refuge, but sometimes a kind of prison. With their only child now grown and gone from home, she was alone in an enormous house with a companion whose greatest attention focused on his teletype and the telephone.
Noël and I met from time to time to talk about what we were reading or to see a movie together. We’d both grown up in the San Francisco Bay Area, and our meetings might then devolve into mutual lamentations over being stuck in a gray, ugly, beer-and-a-shot town because of our husbands’ work. We both yearned to be out of 1970s Pittsburgh and back home, the Bay Area. We both worried about the impact on our husbands if we insisted on getting out. “Would Allen lose his muse?” Noël wondered. It was unthinkable to leave, unthinkable to stay.
Late in May 1973, when Joe and I had been in Pittsburgh for about two years, my journal reports a lunch with Noël. She protested feebly that she’d “made her peace with her life as it is,” yet added that she was worried that when Allen died, she’d have nothing, not even a roof to call her own—or as she interestingly slipped, “I guess I should be glad to have a roof over my mouth.” When Allen dies. They were both in their mid-forties, and Newell, at least, was in robust health, except for perpetual back troubles. Although financial worry was irrational—the Newells lived in a grand old house on Marlborough Street in Squirrel Hill that they’d recently moved to because their former house couldn’t structurally support the five thousand books in Allen’s library—it was a demon of her childhood haunting her yet. Noël was undergoing intensive psychotherapy to treat the migraines, considered in those days solely a psychosomatic disorder, but therapy was dredging up all the other miseries of her childhood, too.
That lunch might have been the one where I arrived with an idea to cheer her up. Why didn’t she think of going around with a tape recorder and asking all these AI people what they thought they were up to? But I couldn’t bring myself even to suggest it; she was so down and forlorn that I spent my time with her just listening, sympathizing. Only years later did I recognize with what immense courage Noël lived her life.
She and Allen loved each other deeply—that was clear. But nobody loves profoundly without some difficulties. I myself was having some troubles with Newell. I felt I liked him better than he liked me. I understood that moral challenge, that double-edged sword he’d flung at me. It went like this.
Joe was becoming more prominent in computer science as he simultaneously published significant research and began to restore the former luster of the Carnegie Mellon computer science department. (Newell told him much later that the few remaining senior faculty members had agreed among themselves to give him a year: if he couldn’t turn the place around, they’d feel free to leave, too.) He not only turned it around; he set it on a firm path to future distinction. Thus Joe stood out at a moment when computer science departments were being formed all over the country and looking for someone to head and build them. He was regularly approached about moving from Carnegie Mellon, but nothing really tempted him until the University of California San Diego called.
California. Warmth and blue skies. My heart leaped. I’d awakened more than one morning and stared out at Pittsburgh’s dreariness, with the unwelcome thought that my parents did not get me out of Liverpool, England, at enormous effort, for me to end up in Pittsburgh, Pennsylvania. But Allen Newell, with a kind of magisterial contempt that only he could express, proclaimed that anybody who went to California—he called it Lotusland—was “only interested in getting a suntan.” For all his geniality (and he was very genial), he had a streak of puritanism, almost self-righteousness, that one hoped not to provoke.
Joe was deeply divided. He too loved the West and the idea of more dimensions to life than research, administration, and teaching. Everybody at CMU worked seven days a week because not only were they immersed in their work, but, especially during the long Pittsburgh winters, what alternatives existed? He didn’t like 1970s Pittsburgh. He never learned to find his way around the city. But he deeply loved his department and his work and was very good at it. More than twenty years after we left Pittsburgh, on the occasion of the 25th anniversary of the founding of the computer science department, Catherine Copetas, then an assistant dean, introduced Joe’s talk, saying that Joe had “implemented many of the traditions here. Alan Perlis, Allen Newell, and Herbert Simon founded the department, but it was Joe who took this place and turned it into an organizational wonder.” He felt immense respect, affection, and loyalty toward his colleagues: a big chunk of his heart would remain at Carnegie Mellon forever.
Joe was caught not only between two kinds of life he might like to lead, but also between two strong people: his wife, who wanted very much to get out of Pittsburgh, and his mentor and deeply admired friend, Allen Newell, who provided invaluable administrative advice and set a shining example of deepest devotion to the life of the mind.
Probably from the beginning, Newell was wary of me. I wasn’t going to be the usual faculty wife, a phrase I detested. I had my own faculty position, unusual in the early 1970s. I’d published two books and was working on a third. I was getting more and more involved with second-wave feminism: I taught a course at the University of Pittsburgh in women’s studies, and I was an officer of the Allegheny County branch of the National Women’s Political Caucus. I was trouble.
In fact, I wasn’t trouble. History was. We were all at an epochal point in relations between the sexes: the easy entitlement any man could once assume was now in question—under assault, some said—and it seemed clear to me then which way history would go. Newell surely felt that I was a bad, even subversive, influence on a fragile woman like Noël.
Late that spring, Joe decided he couldn’t bear to leave his department, and said no to San Diego. I was deeply sad. But we too made our peace with Pittsburgh, at least for the time being. We bought a house, and I began the work that would be Machines Who Think. Newell exhaled and decided I was okay. I was a serious interviewer, intent on writing a good history of AI, and Newell appreciated that. As the Squirrel Hill Sages got underway, he seemed to grow to like me better, and I was relieved.
When Allen Newell was awarded the U. A. and Helen Whitaker professorship in September 1976 at Carnegie Mellon, there was a great celebratory dinner, and for the occasion, he gave a talk entitled “Fairy Tales,” which was to become a classic in computing literature.
Fairy tales, he began, are the way we, as children, learn to cope with the world, enduring trials, overcoming obstacles. But now we are all as children, facing an unknown future. “I see the computer as the enchanted technology. Better, it is the technology of enchantment” (Newell, 1992). Computing is the technology of how to apply knowledge to action to achieve goals. It provides the capability for intelligent behavior, with algorithms that are frozen action, to be thawed when needed. The continuing miniaturization of these physical systems, smaller, faster, more reliable, less energy-demanding, means that everything is happening in the right direction simultaneously. Thus computing offers the possibilities of incorporating intelligent behavior in all the nooks and crannies of our world. “With it, we could build an enchanted land.” He went on to say how, but warned that in fairy tales, trials had to be undertaken and dangers overcome. We must grow in wisdom and maturity; we must earn our prize.
Over the years, as problems arise with computing in society that are sufficiently grave to make us falter and wonder if we’ve made a bad trade and might retreat, I’ve reminded myself: we must earn our prize.
Joe and I called Newell the following day to say how good it was. “I didn’t see your name on the guest list, so I wasn’t sure you’d be there,” he said to me. “But then I saw you, and was very aware of you as a professional in the audience, hearing not only what I said, but if it scanned.”
It scanned. It, well, soared. Its message suited me perfectly, optimistic but cautionary, weaving together the deepest purposes of story with the promise of a science, and its concomitant technology, that I was falling in love with.
By the time we got to the International Joint Conference on Artificial Intelligence in Boston in the summer of 1977, Newell and I were pals. We had one long dinner alone together, where he raised an interesting and, for me, evocative theme. “Do you believe in the Two Cultures?” he asked.
I nodded. I knew all too much about it.
“I don’t,” he said. “I think there’s more like seventy-five cultures, none of them able to talk to each other in any sensible way.” I must have protested; it had been less than a year since I’d heard “Fairy Tales.” Wasn’t this a way for those seventy-five cultures to speak to each other? But he wouldn’t budge.
We were all spending that summer in California, he and Noël in Palo Alto, where he was consulting at Xerox PARC on human-computer interaction, and Joe and I in Berkeley, Joe with the University of California’s computer science department as the guest of Richard Karp. We lived in the Berkeley condominium we’d bought a few years earlier, so I could be near my family and in my beloved Bay Area, at least part of the year. Thus Newell and I flew back to San Francisco together from the Boston meeting, and he was Scheherazade—he entertained me every moment of the more than six-hour flight. We talked about aging (“I’ll be glad to lay the burden down,” he said cheerfully); about religion; again about the sciences versus the humanities, a topic he’d spent some time thinking about. He retold Frank Stockton’s old story, “The Lady, or the Tiger?” which he loved (Stockton, 1895).
A humble young man and a princess fall in love. When the king discovers their love, the man must undergo an ordeal of judgment in the arena: he must choose between two identical wooden doors. Behind one is a beautiful woman, whom he must marry and with whom he will live a long and happy life. Behind the other is a hungry tiger and certain death. The princess has managed to discover which door is which and promises to signal her lover. She has also discovered that the lady waiting behind one door is her rival in beauty and charm. At the moment of trial, she signals her lover subtly. The story ends with the question: which door does she send him to?
Newell thought that this story encapsulated the idea that, given the complexity of the real world, there is no way to predict with accuracy the outcome of a determinate process of any complexity. The computer “does only what you tell it to do,” but we can’t know exactly what that will be.
We had high serious conversation; we had lowdown gossip. He gave me a lecture (nay, sermon) on commitment: “I get so angry at people who get divorces—and usually very hostile to the guy, because he takes his 75 friends and she takes her 4 friends, and they split….”
So between us finally, all was calm, all was bright. On January 25, 1978, my journal says, “Allen wrote me a net message [one of our early terms for email] and invited me to their house to play with ZOG.” A few days later, I was with the Newells, playing with ZOG, an early hypertext system that Newell and his students had developed as a way of accessing psychology and AI programs developed at CMU. ZOG was fun, and I thought what a superb writer’s notebook it would make. But no writer I knew could afford the hardware, much less the software of such a thing. (True: ZOG was implemented on the USS Carl Vinson to access its administrative database.) Yet as Newell had said in “Fairy Tales,” this technology gets cheaper and better in every way—even impoverished writers now have such things at their fingertips, pretty much for free: the World Wide Web, Wikipedia, not to mention cheap, special-purpose programs for organizing large bodies of prose.
In 1980, as the founding president of the American Association for Artificial Intelligence, Newell addressed the newly formed association with a demanding talk called “The Knowledge Level” (Newell, 1981). A precursor to his William James Lectures, the talk proposed multiple levels of cognition that the brain has since been shown to exhibit and that one sophisticated machine-learning technique, deep learning, now employs. Newell defined the top level: “Rational agents can be described and analyzed at an abstract level defined by the knowledge they possess rather than the programs they run.” This is the knowledge level.
This top level is what the system knows about the world in which it operates and can use to reach its goals, including the ability to identify and search for missing knowledge. Humans tend to find and store these search results for future use, although some machines are fast enough to search problem spaces whenever they need the knowledge, without necessarily storing it for the future. From lower levels of knowledge, the system aggregates knowledge at a higher level (Newell, 1981). This was mostly speculative on Newell’s part, but the current work of brain scientists points in the same direction.
Many details could still not be filled in or verified when, five years later, Newell delivered his William James Lectures at Harvard in 1987. (He confessed his personal embarrassment that he himself had not done every experiment, something he’d always vowed to do to verify his scientific theories.) Moreover, in 1987 it would have been difficult to envision that top level, the knowledge level, with the kind of access to vast worldwide data that programs like Google Brain, Nell, or others now have. That said, he seems to have been generally correct in his notions of how thinking takes place in the human brain, at multiple and asynchronous levels. This strikes me as the kind of insight only a computer scientist could have had.
Soar, the project in which Newell instantiated the ideas in “The Knowledge Level” and his William James Lectures, showed how relatively few elements of architecture can combine to produce new capabilities, without necessarily building a new module for each new capability. John E. Laird and others would take Soar further, seeking cognitive Newton’s laws, a small set of very general mechanisms that give rise to the richness of intelligent behavior in a complex world. The Hagia Sophia I referred to in Chapter 2 is another instance of the search for general laws of intelligence.
This last great intellectual effort, Soar, had such ambitious goals that Newell’s premature death kept him from seeing it develop fully. (In his last illness, he wondered to Joe and me whether his fatal cancer had arisen from the days of his naval service, when, from only a few miles away, he witnessed the atomic bomb tests at Eniwetok.) Although researchers continue to work on Soar and models like it, grand models of the complete suite of human cognitive behaviors are not yet at hand. For one thing, they demand the utmost of human intelligence to fill them out and get the details right.
As it happens, about the time Newell was giving his William James Lectures, a brilliant researcher at the University of Toronto, Geoffrey Hinton, was exploring a part of AI that had lain dormant, even presumed dead since Marvin Minsky and Seymour Papert had seemed to say all there was to say about it. This was neural networks. The model Minsky and Papert analyzed was a vastly simplified version of the brain, with only an input layer and an output layer. In 1986, Hinton showed that a technique called backpropagation could train a deep neural net, one with more than two or three layers. Much more computing power was needed before Hinton and two of his colleagues could show that deep neural nets, using backpropagation, dramatically improved upon old techniques in image recognition. This has led to deep learning, and applications that propagate like mayflies, including nearly unerring human facial recognition by computer, talking digital assistants, and not incidentally, the 2018 Turing Award for Hinton (vice president and engineering fellow at Google, chief scientific advisor at the Vector Institute, and professor at the University of Toronto), Yann LeCun (vice president and chief AI scientist at Facebook and a professor at NYU), and Yoshua Bengio (professor at the University of Montreal, and science director of Quebec’s AI Institute and the Institute for Data Valorization).
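The mechanics of backpropagation can be shown in miniature. The sketch below is mine, not Hinton’s, and far smaller than any real deep net: a single hidden layer of two sigmoid units and one output unit, with hypothetical weights and a single hypothetical training example. It pushes the error signal backward through the layers by the chain rule, checks the result against a finite-difference estimate, and takes a few gradient-descent steps.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(W1, W2, x):
    # each hidden row of W1 holds two input weights plus a bias
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + W2[2])  # output unit
    return h, y

def loss(W1, W2, x, t):
    return 0.5 * (forward(W1, W2, x)[1] - t) ** 2     # squared error

def backprop(W1, W2, x, t):
    """Gradient of the loss w.r.t. every weight, via the chain rule."""
    h, y = forward(W1, W2, x)
    dy = (y - t) * y * (1 - y)                  # error signal at the output
    gW2 = [dy * h[0], dy * h[1], dy]
    gW1 = []
    for j in range(2):
        dh = dy * W2[j] * h[j] * (1 - h[j])     # error propagated back to hidden unit j
        gW1.append([dh * x[0], dh * x[1], dh])
    return gW1, gW2

# hypothetical fixed weights and one training example
W1 = [[0.5, -0.4, 0.1], [-0.3, 0.8, -0.2]]
W2 = [0.7, -0.6, 0.2]
x, t = [1.0, 0.0], 1.0

# sanity check: analytic gradient vs. a finite-difference estimate
gW1, gW2 = backprop(W1, W2, x, t)
analytic = gW1[0][0]
eps = 1e-6
W1p = [row[:] for row in W1]
W1p[0][0] += eps
numeric = (loss(W1p, W2, x, t) - loss(W1, W2, x, t)) / eps

# a few gradient-descent steps should lower the loss
before = loss(W1, W2, x, t)
for _ in range(100):
    gW1, gW2 = backprop(W1, W2, x, t)
    W1 = [[w - 0.5 * g for w, g in zip(rw, rg)] for rw, rg in zip(W1, gW1)]
    W2 = [w - 0.5 * g for w, g in zip(W2, gW2)]
after = loss(W1, W2, x, t)
```

The same chain-rule bookkeeping, repeated across many layers and millions of weights, is what the 1986 result made practical for deep nets.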
The time when a computer exhibits the complete suite of human cognitive behaviors may be approaching. In January 2016, MIT and Harvard sponsored a day-long symposium called “The Science and Engineering of Intelligence: A Bridge across Vassar Street.” Vassar Street separates MIT’s computer science and AI research from the Broad Institute and other Cambridge neuroscience research centers. The symposium’s aim was to show how AI and neuroscience have critically influenced and inspired each other and how quickly each field is learning from the other.
Skeptics exist. Ed Feigenbaum, for one, believes a grand theory of mind can’t happen for years, maybe never, because human intelligence grew in such a contingent, biologically opportunistic way. He strongly believes that intelligence in machines will come from the bottom up, not the top down, and incrementally.
Neuroscientists are more sanguine. But their task is mighty, and they could be wrong. Stuart Russell, a professor of computer science at the University of California Berkeley and coauthor of the textbook Artificial Intelligence: A Modern Approach, recently said that, although we know how to make the computer do many things humans can do, we haven’t yet put them all together in a working grand scheme—and maybe, he added, that’s a good thing. He seemed to imply that this might court a dismal fate. In 2017, as if to make the point another way, he gave a lecture where he presented a working example of a small, cheap killer drone that “could wipe out the population of half a city,” a drone “impossible to defend from.” Impossible is a big word. Should the quest for general AI therefore be abandoned? I don’t think Allen Newell would agree. Newell strongly believed that a grand working scheme—an overarching question that drives a personal scientific agenda, which in his case was understanding the human mind—is exactly how science should be done.
- The committee Newell chaired to explore this in the late 1980s faced heavy weather. In his typical style, he wanted the meetings to be completely open and recorded on the hyperlinked ZOG net he and his students had invented, so that the committee’s minutes could be searched in a variety of ways. Even at CMU, people asked whether the computer wasn’t mere gimmickry. Worse, would it attract only nerds to CMU? Could the day-to-day support be maintained by a school that had only recently installed telephones in dormitory rooms? It all went well: the Andrew System of centralized computing and file retrieval was one of the first instances of cloud computing.
- Three years later, a University of California Berkeley group of engineers and computer scientists issued a report, “A Berkeley View of Systems Challenges for AI,” which addressed the kind of cross-disciplinary sharing that future AI systems must incorporate. Stoica, I., Song, D., Popa, R. A., Patterson, D., Mahoney, M. W., Katz, R. H., Joseph, A. D., . . . Abbeel, P. (2017, October 16). A Berkeley View of Systems Challenges for AI (Technical Report No. UCB/EECS-2017-159). Retrieved from http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-159.html
- In October 2015, at the celebration of the fiftieth anniversary of the creation of CMU’s computer science department, CMU’s provost announced the creation of the Joseph F. Traub Chair in Computer Science to honor Joe’s early leadership. This was nearly 35 years after Joe had left CMU for Columbia. I was in the audience, still stunned and raw only two months after Joe’s sudden death. This honor to Joe’s memory made me burst into astonished and grateful tears.
- Newell includes “The Lady, or the Tiger?” in his address as the first president of the American Association for Artificial Intelligence (Newell, 1981).
- Today, this organization is named the Association for the Advancement of Artificial Intelligence (AAAI).
- In Newell’s 1987 William James Lectures at Harvard, he compared his proposed (and then, only partial) computer model to what was then known about human cognition, from the lowest, the device level (cellular), to the highest, the knowledge level (the agent with goals, actions, and body that operates in the medium of knowledge—what it already knows from experience, what the outside world provides, employing all these layers to exhibit intelligent behavior). Yes, the two sets of levels, human and machine, differ physically, from electrons and magnetic domains at the device level to (several layers up) symbolic expressions at the symbolic level. But functionally, they were the same. “System characteristics change from continuous to discrete processing, from parallel to serial operation, and so on” (Newell, 1990). The sets of levels also operated asynchronously, some quickly, others more slowly. He boldly proposed Soar as a unified theory of cognition. Significantly, Newell went on, computer system levels are a reflection of the nature of the physical world. “They are not just a point of view that exists solely in the eyes of the beholder. This reality comes from computer system levels being genuine specializations rather than being just abstractions that can be applied uniformly” (Newell, 1990).
- Remember from Chapter 1 that Demis Hassabis of DeepMind argued that his company’s program, AlphaGo, is the opening to general as opposed to specialized artificial intelligence, which implies a unified theory of cognition.