BIRTH:

Caracas, Venezuela, April 26, 1938

EDUCATION:

B.S., Electrical Engineering, MIT (1959); M.S., Electrical Engineering, MIT (1961); Ph.D., Mathematics, MIT (1964).

EXPERIENCE:

Research Assistant and Research Associate for Dr. Warren S. McCulloch, Research Laboratory of Electronics, MIT (1960-1965); Assistant Professor, Mathematics, MIT (1966-1968); Visiting Assistant Professor, Associate Professor, Professor, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley (1968-2001); Associate Chair for Computer Science, U.C. Berkeley (1977-1980); Arthur J. Chick Professor of Computer Science, U.C. Berkeley (1995-2001); Visiting Professor of Computer Science, City University of Hong Kong (1997-1999); Bruce Nelson Professor of Computer Science, Carnegie Mellon University (2001-present).

HONORS AND AWARDS:

Fellow of the Institute of Electrical and Electronics Engineers (1982); the American Association for the Advancement of Science (1983); and the American Academy of Arts and Sciences (1995). ACM Turing Award (1995); Member, National Academy of Engineering; Member, United States National Academy of Sciences (2002).

 

Manuel Blum

United States – 1995
CITATION

In recognition of his contributions to the foundations of computational complexity theory and its application to cryptography and program checking.

Manuel Blum was born in Caracas, Venezuela in 1938. He remembers that as a child he wanted to know how brains work. The reason: he wanted to be smarter. (“I was the opposite of a prodigy,” he claims). At MIT he studied electrical engineering because he thought electric circuits might hold the answer. As a junior he studied brains in Warren McCulloch's lab. Eventually he became convinced that the limitations of our brain are related to computational complexity, and that is how he embarked on a life-long journey that transformed that subject as well as much of computer science.

By the early 1960s computers were already solving all kinds of problems in business and science. An algorithm for solving a problem expends resources: the time it takes to run, the amount of memory it uses, the energy it consumes, etc. The irresistible question is: Can I do better? Can I find an algorithm that solves the problem faster, using fewer resources? You can sort n numbers in a computer's memory with n log n comparisons using Mergesort, Quicksort, or a similar sorting algorithm. But can it be done better? You can find the factors of an n-bit integer in time proportional to 2^(n/2) by checking all candidate divisors up to the square root of the number. But this is exponential in n. Is there a better, truly fast algorithm? You can search a maze with n corridors using only (log n)^2 memory. Is there a solution utilizing log n memory? This is the stuff of complexity.
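
To make the factoring question concrete, here is a minimal Python sketch (an illustration, not from the article) of the naive algorithm just described; for an n-bit number the loop can run on the order of 2^(n/2) times:

```python
# A minimal sketch (not from the article) of factoring by trial division.
# For an n-bit number N, the loop may try up to sqrt(N) ~ 2^(n/2)
# candidate divisors, which is exponential in the bit length n.

def trial_division(N):
    """Return the prime factorization of N as a list of primes."""
    factors = []
    d = 2
    while d * d <= N:          # candidates up to the square root of N
        while N % d == 0:
            factors.append(d)
            N //= d
        d += 1
    if N > 1:                  # whatever remains is itself prime
        factors.append(N)
    return factors

print(trial_division(2**16 + 1))   # [65537]: prime, so the loop runs ~2^8 times
```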

Of the three questions above, after half a century of contemplation we know the answer to only the first one, about sorting: An elementary argument discovered in the 1960s establishes that you cannot sort with fewer than n log n comparisons.  (This is proved by considering the decision tree of a sorting algorithm and then taking the logarithm of its size). So sorting a list of n numbers requires n log n time.
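
The decision-tree argument can be spelled out in a few lines (a standard reconstruction of the proof sketched above):

```latex
% A comparison sort is a binary decision tree: each internal node is a
% comparison, each leaf an output ordering. To sort correctly it must
% distinguish all n! input orderings, so the tree has at least n! leaves,
% and a binary tree with n! leaves has depth at least \log_2(n!).
% Since n! \ge (n/e)^n,
\log_2(n!) \;\ge\; n \log_2 n - n \log_2 e \;=\; \Omega(n \log n),
% so some input forces on the order of n \log n comparisons.
```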

Computing the average of a list of numbers is, of course, easier than sorting, since it can be done in just n steps. But the median is usually more informative than the average–in connection with incomes, for example. Unfortunately, you need to first sort the numbers in order to find the median. Or do you? In the late 1960s Blum was convinced that computing the median does indeed require n log n steps, just like sorting. He tried very hard to prove that it does, and in the end his labors were rewarded with a most pleasant surprise: in 1971 he came up with an algorithm that finds the median in linear time! (A few other great researchers had in the meantime joined him in this quest: Robert Tarjan, Ronald Rivest, Bob Floyd, and Vaughan Pratt.) [2]
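
A minimal Python sketch of the idea behind that linear-time algorithm, now known as “median of medians” (the structure follows the published algorithm [2], but the code itself is an illustration):

```python
# A sketch of linear-time selection in the spirit of the
# Blum-Floyd-Pratt-Rivest-Tarjan algorithm. Names and structure are
# illustrative, not the authors' original code.

def select(items, k):
    """Return the k-th smallest element (0-indexed) in worst-case linear time."""
    if len(items) <= 5:
        return sorted(items)[k]

    # Split into groups of 5 and take the median of each group.
    groups = [items[i:i + 5] for i in range(0, len(items), 5)]
    medians = [sorted(g)[len(g) // 2] for g in groups]

    # Recursively find the median of the group medians: a pivot
    # guaranteed to be above ~30% and below ~30% of the items,
    # which keeps both recursive calls small enough for linear time.
    pivot = select(medians, len(medians) // 2)

    lows = [x for x in items if x < pivot]
    highs = [x for x in items if x > pivot]
    pivots = [x for x in items if x == pivot]

    if k < len(lows):
        return select(lows, k)
    elif k < len(lows) + len(pivots):
        return pivot
    else:
        return select(highs, k - len(lows) - len(pivots))

def median(items):
    """The median is the middle order statistic."""
    return select(items, (len(items) - 1) // 2)

print(median([7, 1, 5, 3, 9, 4, 8]))   # 5
```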

Blum discusses the algorithm he created to find the median of a series of numbers in linear time.

But we are getting ahead of our story. In the early 1960s Blum wanted to understand complexity. To do so one must first pick a specific machine model on which algorithms run, and then a resource to account for in each computation. But Blum wanted to discover the essence of complexity–its fundamental nature that lies beyond such mundane considerations. In his doctoral thesis at MIT under Marvin Minsky, he developed a machine-independent theory of complexity that underlies all possible studies of complexity. He postulated that a resource is any function that has two basic properties (since then called the Blum axioms), essentially stating that the amount of the resource expended by a program running on a particular input can be computed—unless the computation fails to halt, in which case it is undefined.
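
In the standard formulation (a textbook reconstruction; the article does not display the axioms themselves), the two Blum axioms read:

```latex
% Fix an effective enumeration \varphi_1, \varphi_2, \ldots of the partial
% computable functions, and let \Phi_i(x) measure the resource expended by
% program i on input x. The family \Phi is a complexity measure if:
%
% Axiom 1: the resource is defined exactly when the computation halts.
\Phi_i(x) \text{ is defined} \iff \varphi_i(x) \text{ is defined}.
%
% Axiom 2: a claimed resource bound can be checked effectively.
\text{The predicate } \Phi_i(x) = m \text{ is decidable, given } (i, x, m).
```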

Surprisingly, these simple and uncontroversial axioms spawn a rich theory full of unexpected results. One such result is Blum’s speedup theorem, stating that there is a computable function such that any algorithm for this function can be sped up exponentially for almost all inputs [1]. That is, any given algorithm for that function can be modified to run exponentially faster, and then the modified algorithm can be modified again, and then again: an infinite sequence of exponential improvements! Blum's theory served as a cautionary tale for complexity theorists: unless resource bounds are restricted to the familiar, well-behaved ones (such as n log n, n^2, polynomial, exponential, etc.), very strange things can happen in complexity.
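
Stated symbolically, in its exponential special case (the general theorem allows any computable speedup function), the result says:

```latex
% Blum's speedup theorem, exponential form: there is a computable f such
% that for every program i computing f there is another program j, also
% computing f, with
2^{\Phi_j(x)} \;\le\; \Phi_i(x) \qquad \text{for almost all inputs } x.
% Applying the theorem to j in turn yields a further exponential
% improvement, and so on forever: f has no fastest program.
```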

Blum explains his counter-intuitive thesis result known as the “speedup theorem.”

By the early 1970s Manuel Blum was a professor at UC Berkeley, where he would teach for over thirty-five years. He had married the notable mathematician Lenore Blum, about whom Manuel wrote a haiku: “Honor her requests as if / Your life depends on it / It does.” They have a son, Avrim, now a professor of Computer Science at Carnegie Mellon, where Lenore and Manuel are also now employed.

By the early 1970s the study of complexity had become much more concrete, partly because of the scary phenomena exposed by Blum's model. It had already succeeded in articulating the fundamental P vs NP question, which remains famously unresolved to this day: are there problems that can only be solved by rote enumeration of all possible solutions? At that point, when most students of complexity embarked on a long-term research effort aiming at either taming the beast or establishing that all such efforts are futile, Blum's thinking about the subject took a surprising turn that would constitute the overarching theme of his work for the coming decades: he decided to make friends with complexity, to turn the tables on it, and use it to accomplish useful things.

One particular such “application” of complexity was emerging just around that time: public key cryptography, which allows secure communication through the use of one-way functions that are easy to compute but hard to reverse. It so happened that in the mid-1970s Blum decided to spend a sabbatical studying number theory—the beautiful branch of mathematics theretofore famously proud of its complete lack of application. Nothing could be more fortuitous. The RSA approach to public key cryptography, which was being invented around that time, exploits important one-way functions inspired by number theory. RSA's inventors Ron Rivest, Adi Shamir, and Len Adleman (the latter a doctoral student of Blum's) won the 2002 Turing Award for this idea.

Blum wondered what else complexity can help us with beyond cryptography. One strange thought came to him: can two people, call them Alice and Bob, settle an argument over the telephone about, say, who is buying drinks next time, by flipping a coin? This seems impossible: who is going to force the one who flips the coin, say Bob, to admit that he lost the toss? Surprisingly, Blum showed [3] that it can be done: Alice “calls” the flip by announcing to Bob not her choice, but some data which commits her to her choice. Then Bob announces the outcome of the flip, and they both check whether the committing data reveals a choice that wins or loses the flip. The notion of commitment (Alice creating a piece of data from a secret bit that can later be used, with Alice's cooperation, to reveal the bit in a way that can neither be manipulated by Alice nor questioned by Bob) has since become an important tool in cryptography.
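
Here is a minimal sketch of the protocol in Python. Blum's paper [3] builds the commitment from number-theoretic one-way functions; the salted hash used below is a modern stand-in chosen only to keep the sketch short:

```python
# A minimal sketch of coin flipping by telephone using a commitment.
# Blum's original construction is number-theoretic; the salted SHA-256
# hash here is a substitute for illustration only.

import hashlib, secrets

def commit(bit: int):
    """Alice: commit to a secret bit without revealing it."""
    nonce = secrets.token_bytes(32)              # random salt hides the bit
    digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return digest, nonce                         # send digest; keep nonce secret

def reveal_ok(digest: str, nonce: bytes, bit: int) -> bool:
    """Bob: check that the revealed bit matches the earlier commitment."""
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == digest

# Protocol: Alice commits to her call, Bob announces the coin,
# then Alice opens the commitment. Alice wins iff her call equals the coin.
call = secrets.randbelow(2)
digest, nonce = commit(call)             # 1. Alice -> Bob: digest
coin = secrets.randbelow(2)              # 2. Bob -> Alice: coin
assert reveal_ok(digest, nonce, call)    # 3. Alice -> Bob: (call, nonce)
print("Alice wins" if call == coin else "Bob wins")
```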

Perhaps the most consequential application of complexity initiated by Manuel Blum relates to randomness. We often use pseudo-random number generators to instill randomness into our computations. But to what extent are the numbers generated truly random, and what does this mean, exactly? Blum's idea was that the answer lies in complexity: distinguishing the output of a good generator from a truly random sequence should be an intractable problem, and many random number generators that were popular at the time fail this test. In 1984, Blum and his student Silvio Micali came up with a good generator [4] based on another notoriously difficult problem in number theory, the discrete logarithm problem. With Lenore Blum and Michael Shub he found another generator [5] using repeated squaring modulo the product of two large primes. To discover the seed, the adversary must factor the product, which is widely believed to be a hard problem.
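
A toy Python sketch of the squaring generator just described (the Blum-Blum-Shub construction [5]); the tiny primes are for illustration only, and real use requires enormous primes, both congruent to 3 mod 4:

```python
# A toy sketch of the Blum-Blum-Shub generator: repeated squaring
# modulo N = p*q. The parameters below are far too small for security.

def bbs_bits(seed: int, p: int, q: int, count: int):
    """Yield `count` pseudorandom bits from seed via x -> x^2 mod N.
    Requires p, q prime and congruent to 3 mod 4, and seed coprime to N."""
    N = p * q
    x = (seed * seed) % N          # square once so x is a quadratic residue
    for _ in range(count):
        x = (x * x) % N            # one squaring step
        yield x & 1                # output the low-order bit

# Example with toy parameters (p = 11 and q = 23 are both 3 mod 4):
print(list(bbs_bits(seed=3, p=11, q=23, count=16)))
```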

Blum describes work with his students Micali and Goldwasser on zero knowledge proofs.

On top of all this, Blum demonstrated that cryptographically strong pseudorandom generators can pay back their intellectual debt to cryptography. He and his student Shafi Goldwasser came up in 1986 with a public key encryption scheme [6] based on the Blum-Blum-Shub generator [5], which, unlike RSA, has been mathematically proven to be as hard to break as factoring.
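
A toy sketch of the scheme's mechanics, following the usual textbook description of the Blum-Goldwasser construction [6]: encrypt by XORing the message bits with a Blum-Blum-Shub keystream, send the final generator state in the clear, and let the holder of the factors rewind it (parameters and helper names are illustrative; requires Python 3.8+ for modular inverses via pow):

```python
# Toy Blum-Goldwasser: BBS keystream + XOR, with the final state shipped
# so that only the holder of the factors p, q can rewind the generator.

p, q = 11, 23                  # toy Blum primes (both 3 mod 4); real keys are huge
N = p * q                      # public key; (p, q) is the private key

def encrypt(bits, r):
    """Encrypt a list of bits using a random r coprime to N."""
    x = (r * r) % N                        # x_0: a random quadratic residue
    cipher = [ ]
    for b in bits:
        x = (x * x) % N
        cipher.append(b ^ (x & 1))         # XOR with one keystream bit
    x = (x * x) % N                        # x_{t+1}, sent in the clear
    return cipher, x

def decrypt(cipher, x_final):
    """Rewind x_{t+1} back to x_0 using the factors, then replay the stream."""
    t = len(cipher) + 1
    # Square roots are easy modulo Blum primes: rewinding t steps is one
    # exponentiation mod p and one mod q, recombined with the CRT.
    dp = pow((p + 1) // 4, t, p - 1)
    dq = pow((q + 1) // 4, t, q - 1)
    xp, xq = pow(x_final, dp, p), pow(x_final, dq, q)
    x = (xp * q * pow(q, -1, p) + xq * p * pow(p, -1, q)) % N   # CRT combine
    plain = []
    for c in cipher:
        x = (x * x) % N
        plain.append(c ^ (x & 1))
    return plain

msg = [1, 0, 1, 1, 0, 0, 1, 0]
c, xf = encrypt(msg, r=7)
assert decrypt(c, xf) == msg
```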

In the mid-1980s, Blum became interested in the problem of checking the correctness of programs. Given a program that purports to solve a particular problem, can you write a program checker for it: an algorithm that interacts with the program and its input, and eventually decrees (in reasonable time and with great certainty) whether or not the program works correctly? Checkability turns out to be a windfall of complexity, and Manuel and his students Sampath Kannan and Ronitt Rubinfeld were able to come up with some interesting checkers [7]. They used the tricks of their trade: randomness, reductions, and algebraic maneuvers reminiscent of coding theory.
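
For flavor, here is the classic example of verifying an answer with randomness and algebra: Freivalds' test for a claimed matrix product. It predates, and is far simpler than, the checkers of [7], but it shows the same spirit: check a program's output in much less time than recomputing it.

```python
# Freivalds' randomized check of a claimed product A*B = C: each trial
# costs O(n^2) (three matrix-vector products) instead of the O(n^3) of
# recomputing A*B, and a wrong C survives each trial with prob. <= 1/2.

import random

def check_product(A, B, C, trials=20):
    """Accept A*B == C with error probability at most 2**-trials."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]   # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False        # witness found: the product is definitely wrong
    return True                 # all trials passed: almost certainly correct

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(check_product(A, B, [[19, 22], [43, 50]]))   # True: correct product
print(check_product(A, B, [[19, 22], [43, 51]]))   # False, with high probability
```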

Blum describes his work with Rubinfeld on automated program checkers.

All said, this work was not very successful in helping software engineers make a dent in the practical problem of software testing. Instead, it serendipitously inspired other theoreticians to come up with more clever tricks of this sort, and eventually to use them—closing a full circle—in the study of complexity. The concept of interactive proof was proposed, whereby an all-powerful prover can convince a distrustful but limited checker that a theorem is true (just as a program interacts with its checker to convince it of its correctness), and such proof systems were demonstrated for truths at higher and higher spheres of complexity. And finally, in 1991, this long chain of ideas and results—this intellectual Rube Goldberg structure initiated by Blum's ideas about program checking—culminated in the famed PCP (probabilistically checkable proof) theorem, which states that any proof can be rewritten in such a way that its correctness can be tested with high confidence by examining just a few of its bits for consistency! Importantly, the PCP theorem implies a result long sought by complexity theorists: assuming that P is not NP, not only can we not solve the notorious NP-complete optimization problems exactly in polynomial time, we cannot even solve them approximately.

In the mid-1990s, Manuel and Lenore Blum went to Hong Kong for a long sabbatical, and soon after that Manuel decided to retire from UC Berkeley and move to CMU. At CMU, Blum envisioned a new use for complexity, motivated by the vagaries of our networked environment: What if one takes advantage of the complexity of a problem—and the difficulty involved in solving it by a computer program—to verify that the solver is not a computer program? This kind of twisted Turing test would be useful, for example, in making sure that the new users signing up for a web service are not mischievous bots with base intentions. In 2000, Blum and his student Luis von Ahn, with the collaboration of others, came up with the idea of a visual challenge (“what is written on this wrinkled piece of paper?”) as an appropriate test. They called it a “Completely Automated Public Turing test to tell Computers and Humans Apart,” or CAPTCHA for short [8]. And, as we all know all too well from our encounters with this CAPTCHA device on web pages, the idea spread. Blum is now working on harnessing people's creativity through games, and on a computational understanding of consciousness.

Blum recalls his invention of the CAPTCHA system to distinguish humans from bots.

The research career of Manuel Blum is extraordinary. It is not just the sheer number and impact of his contributions; it is the unmistakable style that runs through them. Manuel believes that research results should be surprising, paradoxical, and nearly contradictory. “When you can prove that a proposition is true, and also that the same proposition is false, then you know you are on to something,” he says. A wonderful advisor with an amazing array of former students, he is always delighted to impart wisdom. His “advice to a new graduate student” essay is a must-read. Quoting Anatole France, he implores students to “know something about everything and everything about something.” He urges them to read books as random access devices, not necessarily from beginning to end. And to write: if we read without writing, he observes, we are reduced to finite state machines; it is writing that makes us all-powerful Turing machines. He advocates the advice of John Shaw Billings: “First, have something to say. Second, say it. Third, stop when you have said it.”

Author: Christos H. Papadimitriou