BIRTH:

July 27, 1945.

EDUCATION:

BA, Mathematics (University of Virginia, 1967); MA, Mathematics (Duke University, 1968); Ph.D., Computer Science (Cornell University, 1976).

EXPERIENCE:

Duke University (1976–1978); Harvard University (1978–1982); Carnegie Mellon University (1982–present, including FORE Systems Professor 1995–present, University Professor 2008–present).

HONORS AND AWARDS:

Technical Excellence Award, Semiconductor Research Corporation (1995); ACM Paris Kanellakis Theory and Practice Award (1998, with Randal Bryant, E. Allen Emerson, and Kenneth L. McMillan); IEEE Harry H. Goode Memorial Award (2004); Elected to the National Academy of Engineering (2005); ACM Turing Award (2007, with Emerson and Sifakis); International Conference on Automated Deduction (CADE) Herbrand Award for Distinguished Contributions to Automated Reasoning (2008); Logic in Computer Science (LICS) 2010 Test-of-Time Award for his 1990 paper "Symbolic model checking…"; Elected to the American Academy of Arts and Sciences (2011); Fellow of the ACM and IEEE; Honorary Doctorate (Vienna University of Technology, 2012).
 

Edmund Melson Clarke

United States – 2007
CITATION

Together with E. Allen Emerson and Joseph Sifakis, for their role in developing Model-Checking into a highly effective verification technology that is widely adopted in the hardware and software industries.

Birth and education

Edmund Melson Clarke was born on July 27, 1945. He initially studied mathematics, receiving a BA from the University of Virginia in 1967 and an MA from Duke University in 1968. But by the time he enrolled in a doctoral program at Cornell University, he had switched to computer science. At Cornell he studied under Robert Constable, a pioneer in making deep connections between mathematical logic and computing. After graduation Clarke returned to teach at Duke for two years, moving to Harvard University in 1978. He joined Carnegie Mellon University in 1982, where he is currently the FORE Systems University Professor of Computer Science and Professor of Electrical and Computer Engineering.

Clarke’s career has focused on mathematical reasoning about computer systems, with an emphasis on reasoning about the reliability of those systems. Such reasoning is necessary but very hard. A computer system executes simple operations, but those operations can occur in a staggering number of different orders. This makes it impossible for the designer to envision every possible sequence and predict its consequences. Yet every one of those sequences, no matter how infrequently executed, must be correct. If a program executes an incorrect sequence, at the very least it will waste a user’s time, while at the worst it could cause injury or loss of life.

The sequences become even more difficult to envision in systems with multiple programs running at the same time—a feature that has long been present in computer hardware and has become widespread in software since the beginning of the 21st century. Mathematical reasoning, and specifically its expression in formal logic, is in principle sufficient to describe every possible sequence and ensure that all of them are correct, even for simultaneously running programs. In practice, however, classical mathematical reasoning is awkwardly matched to describing the many possible execution orderings in a computer system.

Inventing model checking

Early researchers addressed this mismatch by developing logical forms better suited to describing computer systems. One of the first was developed by Tony Hoare. Hoare’s logic could be used to prove that every possible execution of a system would only execute an acceptable sequence of operations. This opened the possibility that systems could be proven to perform according to specification every time, no matter the circumstances. Clarke’s early research strengthened the foundations of Hoare’s logic and extended his method. Although Hoare’s method worked for smaller systems, it was close to impossible to apply to systems of any real size. The dream of powerful, effective methods for reasoning about all possible orderings of a system remained unfulfilled.
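
As an illustration (the notation is standard, not quoted from Clarke’s or Hoare’s papers), Hoare’s logic is organized around triples {P} C {Q}: if precondition P holds before command C runs, then postcondition Q holds afterward. In LaTeX notation:

    \[
    \{P\}\; C\; \{Q\}
    \qquad \text{for example} \qquad
    \{x = 0\}\; x := x + 1\; \{x = 1\}
    \]

Proving a whole system correct amounts to composing such triples until they cover every operation the system can perform.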

In 1977, Amir Pnueli introduced temporal logic, an alternative logic for verifying computer systems. As the name implies, temporal logic allows explicit reasoning about time. In temporal logic it is possible to express statements such as, “This condition will remain true until a second condition becomes true”.
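In standard temporal-logic notation (supplied here for concreteness, not quoted from Pnueli), that statement is the “until” operator, and a closely related pattern asserts that every request is eventually answered:

    \[
    p \;\mathbf{U}\; q
    \qquad\qquad
    \mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{grant})
    \]

Here G is read “at every future moment” and F “at some future moment”, so the second formula says that whenever a request occurs, a grant eventually follows.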

Clarke and his student E. Allen Emerson saw an important possibility in temporal logic: it could be directly checked by machine. Whereas Hoare’s logic required the designer to consider every detail of both the system and the argument about the system’s correctness—substantially increasing the designer’s workload—Pnueli’s logic could be implemented in a computer program. The responsibilities could be divided: the designer focused on specifying the system, while the software ensured that the proposed system would always perform correctly.

Clarke and Emerson realized that a program could exhaustively construct every possible sequence of actions a system might perform, and for every action it could evaluate a property expressed in temporal logic. If the program found the property to be true for every possible action in every possible sequence, this proved the property for the system. In the language of mathematical logic, Clarke and Emerson’s program checked that the possible execution sequences form a “model” of the specified property. Working independently, Jean-Pierre Queille and Joseph Sifakis developed similar ideas. The technology of model checking was born.
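
In the usual formalization from the model checking literature (given here for concreteness, not quoted from the original papers), the system is abstracted as a Kripke structure and the checker decides a satisfaction relation:

    \[
    M = (S, S_0, R, L), \qquad
    M \models \varphi \;\Longleftrightarrow\; \varphi \text{ holds in every initial state of } M,
    \]

where S is the set of states, S_0 the initial states, R the transition relation describing which states can follow which, and L labels each state with the conditions true in it.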

A great strength of the model checking approach is that when it detects a problem in a system design, the checker prints out an example sequence of actions that gives rise to the problem. Initial designs always get some details wrong. The error traces provided by model checking are invaluable for designers, because the traces precisely locate the source of the problems.
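
The sketch below illustrates both points in miniature. It is not Clarke and Emerson’s algorithm: it checks only the simplest kind of temporal property, an invariant that must hold in every reachable state, and all of the names are made up for the example. What it shares with a real model checker is the exhaustive search over states and the counterexample trace reported on failure.

    # A minimal sketch of explicit-state model checking, restricted to one
    # kind of temporal property: an invariant that must hold in every
    # reachable state. All names here are illustrative.

    from collections import deque

    def check_invariant(initial_states, successors, invariant):
        """Exhaustively explore every reachable state.

        Returns None if the invariant holds everywhere; otherwise returns a
        counterexample trace, the sequence of states leading to a violation.
        """
        parent = {s: None for s in initial_states}   # how each state was reached
        frontier = deque(initial_states)
        while frontier:
            state = frontier.popleft()
            if not invariant(state):
                # Reconstruct the error trace by following parent links back
                # to an initial state; this is the diagnostic designers rely on.
                trace = []
                while state is not None:
                    trace.append(state)
                    state = parent[state]
                return list(reversed(trace))
            for nxt in successors(state):
                if nxt not in parent:                # visit each state only once
                    parent[nxt] = state
                    frontier.append(nxt)
        return None                                  # the invariant is proved

    # Toy system: a counter that is supposed to stay below 4 but can reach it.
    if __name__ == "__main__":
        trace = check_invariant(
            initial_states=[0],
            successors=lambda n: [n + 1] if n < 4 else [],
            invariant=lambda n: n < 4,
        )
        print("counterexample trace:", trace)        # trace is [0, 1, 2, 3, 4]

Run as written, the toy counter violates its invariant and the checker reports the trace [0, 1, 2, 3, 4], pinpointing exactly how the bad state is reached.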

Averting the state space explosion

Although the 1981 paper [2] demonstrated that model checking was possible in principle, its application to practical systems was severely limited. The most pressing limitation was the number of states to search. Early model checkers required explicitly computing every possible configuration of values the program might assume. For example, if a program counts the millimeters of rain at a weather station each day of the week, it will need 7 storage locations. Each location will have to be big enough to hold the largest rain level expected in a single day. If the highest rain level in a day is 1 meter, this simple program will have 10^21 possible states, slightly less than the number of stars in the observable universe. Early model checkers would have to verify that the required property was true for every one of those states.
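
The arithmetic behind that figure, assuming daily totals are recorded in whole millimeters from 0 to 1,000 (so each of the 7 locations can take 1,001 distinct values):

    \[
    1001^{7} \approx 1.007 \times 10^{21}.
    \]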

Systems of practical size manipulate much more data than the simple example above. The number of possible states grows as well, at an explosive speed. This rapid growth is called the state space explosion. Although early model checkers demonstrated that the technology was feasible for small systems, it was not ready for wider use.

Clarke and his student Ken McMillan had a fundamental insight: The state space explodes because the number of states a memory location can assume is much, much bigger than the size of the location itself. The memory location is compact because it encodes many potential states but only contains one at a time. Clarke and McMillan observed that this process could be applied in reverse: with the right encoding, many potential states could be represented by a single value. From the literature, McMillan found an encoding that met the twin goals of tersely encoding multiple states while at the same time permitting efficient computation of formulas in temporal logic.
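
The encoding McMillan drew on was the ordered binary decision diagram, developed by Randal Bryant. The fragment below illustrates only the underlying idea, describing a set of states by a predicate (its characteristic function) rather than by listing its members; the names are illustrative and the canonical form that makes the real encoding efficient to compare is omitted.

    # Illustrative only: a set of states represented symbolically, by a
    # predicate over the states (its characteristic function), instead of by
    # an explicit list of members. States here are 64-bit values, so there are
    # 2**64 of them, far too many to enumerate. The encoding used in symbolic
    # model checking, the ordered binary decision diagram, additionally gives
    # a canonical form so that emptiness and equality tests are cheap; that
    # refinement is omitted here.

    def even_states(state):        # stands for all 2**63 states with an even value
        return state & 1 == 0

    def union(f, g):               # set operations become operations on predicates
        return lambda s: f(s) or g(s)

    def intersection(f, g):
        return lambda s: f(s) and g(s)

    def complement(f):
        return lambda s: not f(s)

    if __name__ == "__main__":
        huge_set = even_states     # a tiny description that covers 2**63 states
        empty_set = intersection(huge_set, complement(huge_set))
        full_set = union(huge_set, complement(huge_set))
        print(huge_set(2**40))     # True: 2**40 is even
        print(empty_set(2**40))    # False: the intersection denotes no state at all
        print(full_set(7))         # True: the union denotes every state

A predicate a few bytes long stands for 2^63 states, and combining sets amounts to combining predicates; what the diagram representation adds is a normal form in which questions such as “is this set empty?” can be answered without enumerating anything.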

The new representation dramatically reduced the storage required to represent state spaces, in turn reducing the time required to run a model checker on systems of practical size. They called these new systems symbolic model checkers. In 1995, Clarke, McMillan, and colleagues used this approach to demonstrate flaws in the design of an IEEE standard for interconnecting computer components. Before this, the reliability of such standards had only been informally analyzed, leaving many rare cases unconsidered and potential errors undiscovered. This was the first time every possible case of an IEEE standard had been exhaustively checked.

With enhancements such as these, model checking has become a mature technology. It is widely used to verify designs for integrated circuits, computer networks, and software, by companies such as Intel and Microsoft. Model checkers have been used to analyze systems whose state space (10^120) is substantially larger than the number of atoms in the observable universe (around 10^80). It is becoming particularly important in the verification of software designed for recent generations of integrated circuits, which feature multiple processors running simultaneously. Model checking has substantially improved the reliability and safety of the systems upon which modern life depends.

Author: Ted Kirkpatrick