BIRTH:

March 19, 1927, San Francisco, CA.

DEATH:

July 19, 1992, Pittsburgh, PA.

EDUCATION:

BS, Physics (Stanford University, 1949); PhD (Carnegie Institute of Technology, Graduate School of Industrial Administration, 1957).

EXPERIENCE:

US Navy (1943-45); RAND Corporation (1950-1961); Carnegie-Mellon University (1955-1992: graduate student, 1955-57; Professor of Computer Science and Psychology, 1961-1992).

HONORS AND AWARDS:

Harry Goode Memorial Award, American Federation of Information Processing Societies (1971); elected to the United States National Academy of Sciences (1972); elected Fellow of the American Academy of Arts and Sciences (1972); A.M. Turing Award of the ACM (1975); Alexander C. Williams Jr. Award (with William C. Biel, Robert Chapman and John L. Kennedy), Human Factors Society (1979); elected to the United States National Academy of Engineering (1980); First President, American Association for Artificial Intelligence (1980); IEEE Computer Society Computer Pioneer Award (1981, charter recipient); Distinguished Scientific Contribution Award, American Psychological Association (1985); William James Lectures, Harvard University (1987); Award for Research Excellence, International Joint Conference on Artificial Intelligence (1989); Doctor in the Behavioral and Social Sciences (Honorary), University of Groningen, The Netherlands (1989); William James Fellow Award, American Psychological Society (1989, charter recipient); IEEE Emanuel Piore Award (1990); U.S. National Medal of Science (1992); the ACM/AAAI Allen Newell Award was named in his honor.

Allen Newell

United States – 1975
CITATION

In joint scientific efforts extending over twenty years, initially in collaboration with J. C. Shaw at the RAND Corporation, and subsequently with numerous faculty and student colleagues at Carnegie-Mellon University, Newell and co-recipient Herbert A. Simon made basic contributions to artificial intelligence, the psychology of human cognition, and list processing.

Allen Newell was born in San Francisco on March 19, 1927, to Robert R. Newell, a prominent professor of radiology at Stanford Medical School, and Jeanette La Valley Newell. While Newell did not follow his father into medicine, he admired him greatly, and he certainly inherited his father’s taste for research and his broad intellectual interests. While at Lowell High School in San Francisco, Newell met Noël McKenna; the two married in 1947, when both were 20, and remained married until Newell’s death from cancer in 1992.

Professionally, Newell is chiefly remembered for his important contributions to artificial intelligence research, his use of computer simulations in psychology, and his inexhaustible, infectious energy. His central goal was to understand the cognitive architecture of the human mind and how it enabled humans to solve problems. His remarkable accomplishments in computer science were all means to this end, whether they were the development (with Herbert Simon and J.C. Shaw) of the first list-processing language (IPL) and of programs designed to use heuristics in solving problems (especially the Logic Theorist and General Problem Solver, also developed with Simon and Shaw), or advances in speech recognition and in human-computer interaction. For Newell, the goal was to make the computer into an effective tool for simulating human problem-solving. A computer program that solved a problem in a way that humans did not, or could not, was not terribly interesting to him, even if it solved that problem “better” than humans did. The hope was to develop programs that succeeded at solving problems when and how humans succeeded and failed when and how humans failed.

Newell’s career in science, however, began on a different path. After a stint in the Navy, during which he assisted his father in mapping radiation distribution after the Bikini Atoll atomic tests, Newell re-enrolled at Stanford, majoring in physics. While at Stanford, he took several courses with George Polya, one of the leading exponents of heuristic problem-solving in mathematics (explained best in Polya’s 1945 book, How to Solve It). The idea of heuristic problem-solving made a great impression on Newell, who realized that humans have neither the time nor the processing power necessary to solve problems using exhaustive algorithmic methods. Rather, humans must use simplified rules—heuristics—to guide selective searches for solutions.
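
To make that contrast concrete, here is a minimal sketch, in Python and entirely illustrative rather than anything drawn from Newell’s own work, of exhaustive search versus heuristic-guided search on an invented toy puzzle: reach a target number from 1 using only the operations “add 3” and “double.”

    # Illustrative sketch: exhaustive (breadth-first) search vs. heuristic-guided
    # (best-first) search on a toy puzzle. The puzzle, bound, and heuristic are
    # invented for illustration.
    from collections import deque
    from heapq import heappush, heappop

    OPS = [("add 3", lambda x: x + 3), ("double", lambda x: x * 2)]

    def exhaustive_search(start, target):
        """Breadth-first search: tries operation sequences in order of length."""
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            value, path = frontier.popleft()
            if value == target:
                return path
            for name, op in OPS:
                nxt = op(value)
                if nxt <= 2 * target and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
        return None

    def heuristic_search(start, target):
        """Best-first search guided by a simple heuristic: distance from the target."""
        frontier, seen = [(abs(target - start), start, [])], {start}
        while frontier:
            _, value, path = heappop(frontier)
            if value == target:
                return path
            for name, op in OPS:
                nxt = op(value)
                if nxt <= 2 * target and nxt not in seen:
                    seen.add(nxt)
                    heappush(frontier, (abs(target - nxt), nxt, path + [name]))
        return None

    print(exhaustive_search(1, 53))  # explores broadly, finds a shortest sequence
    print(heuristic_search(1, 53))   # examines fewer states, but its path may be longer

The trade-off in the last two lines is the point: the heuristic search gives up the guarantee of a shortest solution in exchange for examining far less of the search space, which is exactly the bargain Newell saw human problem-solvers making.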

Newell was excited by the power and elegance of mathematics, and in 1949 he left Stanford to begin graduate work in mathematics at Princeton. There Newell worked as a research assistant for Oskar Morgenstern, who had recently co-authored Theory of Games and Economic Behavior with John von Neumann, creating the new field of game theory. This stay at Princeton was brief, only one year, and Newell did not become a game theorist or a pure mathematician, but his experiences and contacts there started him on a new path, one that connected the powerful abstractions of formal mathematics with the messy realities of empirical experience.

The first step on this new path was to join John Williams (another Princeton mathematician) in the mathematics division at the newly created RAND Corporation in Santa Monica, CA. At RAND, the Air Force’s “think tank”, Newell’s first work applied game-theoretic methods to organization theory—and organizational reality—resulting in a pair of reports co-authored with Joseph Kruskal. This work led to his involvement with a series of experiments on decision-making in groups being conducted by John L. Kennedy, William Biel, and Robert Chapman at RAND’s Systems Research Center (which was spun off almost a decade later as the Systems Development Corporation).

One of the chief problems in the study of human behavior is the difficulty of creating a true controlled experiment. At the Systems Research Center, Kennedy, Biel, and Chapman sought to create a simulated environment (a model air defense control center) that could be controlled so as to give insight into how people working in this environment interacted with each other, with their machines, and with the information presented to them. Newell’s specific task for the group was to use a computer (an IBM Card Programmed Calculator, then quickly becoming a dinosaur in the new age of stored-program machines) to create simulated radar maps. In the process, Newell became fascinated with how people in this environment processed information and made decisions. In addition, experience taught him to think of computers as symbol processors and simulators rather than as big, fast calculators. Symbol processing, decision-making, problem-solving, and simulation thus went together for Newell, leading him to think of minds, computers, experiments, and organizations in new ways.

One of the consultants to the Systems Research Center was Herbert Simon, then a professor at Carnegie Institute of Technology’s new Graduate School of Industrial Administration. Newell and Simon met during Simon’s visit to RAND in the summer of 1952 and immediately discovered that they spoke the same language of symbols, problem-solving, heuristics, and simulations. At the time, they both focused on decision-making in organizations, and Newell saw the computer as a tool for simulating experimental environments, not experimental subjects. It was not until 1954, after attending a seminar at RAND given by Oliver Selfridge, that Newell had his “conversion experience,” suddenly seeing the possibilities of using computers to simulate human problem-solving.[1] Indeed, to Newell, the analogy between humans and computing machines was tight: problem-solving was something done by “physical symbol systems,” a category that included both humans and computers as separate species of the same genus.

After several years of collaboration at a distance (and in person during summers), Newell moved to Pittsburgh to work with Simon at Carnegie Tech in early 1955. While nominally Simon’s PhD student, Newell was in truth an equal partner in their growing research program. The first fruit of their collaboration was the first successful artificial intelligence program, the Logic Theorist (LT), completed in late 1955 and first run on a computer in 1956, which was used to prove theorems from Russell and Whitehead’s Principia Mathematica. In a wonderful twist of irony, Newell and Simon first used Simon’s family to simulate the workings of the Logic Theorist before it was programmed into a computer, so they had people simulate the workings of a machine designed to simulate the workings of people’s minds!

In addition to employing principles of heuristic problem-solving, the Logic Theorist was an error-controlled feedback “machine” that compared the goal state (the statement to prove) with the current state and performed one of a small set of basic operations in order to reduce the difference between the two states. The Logic Theorist was a remarkable success, and Simon, Newell, and Shaw elaborated on its basic principles in creating another renowned program, the General Problem Solver (GPS), in 1957-1958. GPS was not quite so universal as its name implied, but it was startlingly good at solving certain kinds of well-defined problems. Moreover, GPS, like LT, appeared to solve them in much the same ways that humans did, employing a core method of means-ends analysis that was both simple and general, if not quite universal.
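
The core loop of means-ends analysis can be sketched in a few lines. The Python below is purely illustrative: the “monkey and bananas”-style operators are invented for this example, and the real GPS used difference tables and a much richer operator representation, along with bookkeeping this naive sketch omits. The idea is simply to compute the difference between the current state and the goal, pick an operator whose effects reduce that difference, and treat the operator’s preconditions as subgoals.

    # A toy rendering of means-ends analysis in the spirit of GPS. The domain and
    # operators are invented for illustration; this is not Newell and Simon's code.
    OPERATORS = [
        # (name, preconditions, facts added, facts removed)
        ("walk to box",  {"at door"},                         {"at box"},                          {"at door"}),
        ("push box",     {"at box"},                          {"box under bananas", "at bananas"}, {"at box"}),
        ("climb box",    {"box under bananas", "at bananas"}, {"on box"},                          set()),
        ("grab bananas", {"on box"},                          {"has bananas"},                     set()),
    ]

    def achieve(state, goal):
        """Return (new_state, plan) satisfying `goal`, or None if stuck."""
        plan = []
        while not goal <= state:
            difference = goal - state                     # unmet goal conditions
            for name, pre, add, delete in OPERATORS:
                if add & difference:                      # operator reduces the difference
                    sub = achieve(state, pre)             # subgoal: satisfy its preconditions
                    if sub is not None:
                        state, subplan = sub
                        state = (state - delete) | add    # apply the operator
                        plan += subplan + [name]
                        break
            else:
                return None                               # no operator helps: give up
        return state, plan

    _, plan = achieve({"at door"}, {"has bananas"})
    print(plan)   # ['walk to box', 'push box', 'climb box', 'grab bananas']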

As part of this work on cognitive simulation, Newell, Simon, and Shaw developed the first list-processing language, IPL, which, according to Simon, “introduced many ideas that have become fundamental for computer science in general, including lists, associations, schemas (frames), dynamic memory allocation, data types, recursion, associative retrieval, functions as arguments, and generators (streams).”[2] John McCarthy’s LISP, which became the standard language in the AI community after its development in 1958, incorporated these basic principles of IPL in a language with an improved syntax and a “garbage collector” that recovered unused memory.
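
For a rough flavor of what “list processing” means in practice, here is a short sketch, written in Python rather than IPL or LISP syntax and intended only as an illustration, of several of the ideas Simon lists: lists built from linked cells, processed recursively, with a function passed as an argument.

    # Illustrative only: list processing with linked cells, recursion, and a
    # function passed as an argument (Python stand-ins, not IPL or LISP code).

    def cons(head, tail):          # build one list cell (a linked pair)
        return (head, tail)

    def to_linked(items):          # build a linked list from a Python sequence
        result = None
        for item in reversed(items):
            result = cons(item, result)
        return result

    def map_list(fn, cell):        # apply fn to every element, recursively
        if cell is None:
            return None
        head, tail = cell
        return cons(fn(head), map_list(fn, tail))

    def to_python(cell):           # flatten back into a Python list for printing
        out = []
        while cell is not None:
            head, cell = cell
            out.append(head)
        return out

    squares = map_list(lambda x: x * x, to_linked([1, 2, 3, 4]))
    print(to_python(squares))      # [1, 4, 9, 16]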

Newell collaborated with Simon for a number of years, but their paths did diverge over time, with Newell working on speech recognition, computer architectures, and human-computer interaction during the late 1960s and 1970s before turning his focus to his “Soar” project, which occupied him from the 1970s until his death in 1992. The Soar project was Newell’s attempt to develop a unified theory of cognition (a unified theory, not the unified theory, he would note). This unified theory centered on problem-solving, which it described in terms of “production systems” (sets of “if-then” statements), and it incorporated a theory of learning by “chunking” into the problem-solving schema.[3] While the Soar architecture is informed by psychological and neuroscientific data, it is a structural model of human cognition that does not seek to explain how this structure is realized physically in the brain. The Soar project continues today, long after Newell’s death, fulfilling Newell’s maxim that one should “choose a final project that will outlast you.”
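
A toy forward-chaining production system gives a sense of the “if-then” machinery described above. The rules and working-memory elements below are invented for illustration; Soar’s actual rule language, working memory, and decision cycle are far more elaborate.

    # A toy forward-chaining production system (illustrative only). Each
    # production pairs a condition (an "if") with facts to add (a "then").

    PRODUCTIONS = [
        ({"goal: make tea", "have kettle"}, {"boil water"}),
        ({"boil water"},                    {"hot water"}),
        ({"hot water", "have tea bag"},     {"tea made"}),
    ]

    def run(working_memory):
        """Fire any production whose conditions hold until nothing new is added."""
        working_memory = set(working_memory)
        changed = True
        while changed:
            changed = False
            for conditions, additions in PRODUCTIONS:
                if conditions <= working_memory and not additions <= working_memory:
                    working_memory |= additions
                    changed = True
        return working_memory

    print(run({"goal: make tea", "have kettle", "have tea bag"}))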

Newell, like Simon, was an accomplished institution-builder and grant-winner as well as a profound thinker. In these administrative capacities, Newell was instrumental in transforming Carnegie Mellon University’s (CMU) Psychology Department into one of the most influential in the United States, in creating CMU’s pioneering School of Computer Science, and in building CMU’s campus-wide computer network (one of the first in the nation) in 1982.

For all these labors, intellectual and institutional, Newell received a great many honors, including the Harry Goode Memorial Award of the American Federation of Information Processing Societies (1971); the A.M. Turing Award of the Association for Computing Machinery (in 1975, with Simon); the Distinguished Scientific Contribution Award of the American Psychological Association (1985); and, just before he died, the National Medal of Science (1992). He was survived by his wife, Noël, his son Paul, and his sister Ann.

For further information on Newell, see the following, especially the pieces by Piccinini and Simon, on which I have drawn heavily for this essay.

Boden, Margaret. Mind as Machine: A History of Cognitive Science, Oxford University Press, 2006.

Laird, John E. and Paul S. Rosenbloom. “The Research of Allen Newell.” AI Magazine, Vol. 13, Num. 4 (1992), pp. 17-45.

Michon, John and Aladin Akyurek, eds. Soar: A Cognitive Architecture in Perspective, Kluwer Academic, Norwell, MA, 1992.

Piccinini, Gualtiero. "Allen Newell." In New Dictionary of Scientific Biography, Thomson Gale.

Simon, Herbert A. "Allen Newell: 1927-1992." IEEE Annals of the History of Computing, Vol. 20, Num. 2 (1998), pp. 63-76.

Steier, David and Tom M. Mitchell, eds. Mind Matters: A Tribute to Allen Newell, Lawrence Erlbaum Associates, Mahwah, N.J., 1996.

Author: Hunter Heyck



[1] Simon, Herbert A. “Allen Newell: 1927-1992,” IEEE Annals of the History of Computing, Vol. 20, Num. 2 (1998), pp. 63-76.

[2] Ibid.

[3] Learning by chunking means that when Soar solves a problem, it creates a new production (an “if-then” rule) that links condition to result; the next time Soar encounters the same problem, the system follows the link from condition to result without having to solve the problem again.
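
The idea in this note can be sketched minimally by treating a chunk as a learned condition-to-result rule that short-circuits future problem solving. The sketch below is illustrative only: the function names are invented, and Soar’s actual chunking mechanism builds genuine productions from the dependency structure of the solution rather than a simple lookup table.

    # Illustrative sketch of chunking as "remember the condition -> result link".

    chunks = {}                        # learned condition -> result rules

    def solve_by_search(condition):
        # stand-in for an expensive problem-solving episode
        print(f"searching for a solution to {condition!r}...")
        return f"result-for-{condition}"

    def solve(condition):
        if condition in chunks:        # a chunk fires: no search needed
            return chunks[condition]
        result = solve_by_search(condition)
        chunks[condition] = result     # chunking: store the condition -> result link
        return result

    solve("tower-of-hanoi(3 disks)")   # first encounter: searches, then learns a chunk
    solve("tower-of-hanoi(3 disks)")   # second encounter: answered directly from the chunk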