28 March 1949, Budapest, Hungary
Latymer Upper School, London, England; King’s College, Cambridge, England (BA, Mathematics, 1970); Imperial College, London, England (DIC in Computing Science); University of Warwick, England (PhD, Computer Science, 1974)
Carnegie Mellon University (Visiting Assistant Professor, 1973-1974); Leeds University, England (Lecturer, 1974-1976); University of Edinburgh, Scotland (Lecturer, later Reader, 1977-1982); Harvard University (Gordon McKay Professor of Computer Science and Applied Mathematics, from 1982; T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics, from 2001).
International Mathematical Union Nevanlinna Prize (1986); Fellow of the Royal Society (1991); ACM Special Interest Group on Algorithms and Computation Theory and the IEEE Technical Committee on the Mathematical Foundations of Computation Theory Knuth Prize (1997); member of the US National Academy of Sciences (2001); European Association for Theoretical Computer Science Distinguished Achievement Award (2008); Turing Award (2010); Fellow of the American Association for Artificial Intelligence.
For transformative contributions to the theory of computation, including the theory of probably approximately correct (PAC) learning, the complexity of enumeration and of algebraic computation, and the theory of parallel and distributed computing.
Les Valiant has had an extraordinarily productive career in theoretical computer science, producing results of great beauty and originality. His research has opened new frontiers and transformed many areas. His work encompasses both natural and artificial phenomena: the natural side includes the algorithms used by computing objects such as the human brain, while the artificial side includes computers and their capabilities. For computers, the limitations of these devices are only beginning to be understood; for natural objects such as the human brain, the question of how they operate remains to be answered.
|Valiant discusses his realization that machine learning is the central concept in artificial intelligence.
In 2003, in the 50th anniversary volume of the Journal of the ACM, he published a paper that advocates this view that computer science describes both natural and artificial phenomena, framed as three general problem areas for further investigation:
In 1983 he published an important paper in the area of semantics for cognitive computation. In it he devised a model of learning that offers a quantitative criterion for when a computing device can be considered able to learn. This probably approximately correct (PAC) model has given rise to a fruitful research area now known as computational learning theory. The PAC model considers a learning algorithm that takes experience from the past to create a hypothesis that can be used to make decisions in the future with controlled error. The model has been intensively studied and extended by other researchers into an important tool in practical applications.
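The flavor of the PAC guarantee can be seen in a tiny example: learning a threshold concept on the interval [0, 1] from random labeled samples. This sketch is illustrative only and is not Valiant's own formulation; the function names and the simple threshold concept class are assumptions made for the demonstration.

```python
import math
import random

def learn_threshold(samples):
    """Hypothesis: the smallest positively labeled point seen; predict
    positive for any x >= h. Assumes the unknown target concept is
    'x >= theta' for some threshold theta."""
    positives = [x for x, label in samples if label]
    return min(positives, default=1.0)

def pac_trial(theta=0.3, epsilon=0.1, delta=0.05, rng=random):
    # For this concept class, m >= (1/epsilon) * ln(1/delta) samples
    # suffice for the PAC guarantee: with probability at least 1 - delta,
    # the hypothesis errs on at most an epsilon fraction of inputs.
    m = math.ceil((1 / epsilon) * math.log(1 / delta))
    data = [(x, x >= theta) for x in (rng.random() for _ in range(m))]
    h = learn_threshold(data)
    return h - theta  # error mass: inputs in [theta, h) are misclassified
```

Under a uniform input distribution the learned threshold h always lies at or above the true one, so the error h - theta is nonnegative; the PAC bound says it exceeds epsilon only with probability at most delta.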
|Valiant discusses his 1983 paper “A Theory of the Learnable” which introduced the PAC (Probably Approximately Correct) model of learning.
In 1994 he broadened the PAC concept in his book Circuits of the Mind to investigate how the brain accesses and computes on the large amount of information it needs when reasoning. This investigation uncovered some important quantitative constraints on neural computation, such as the strength of interconnections, and offers a range of questions for experimentalists. The book also provides a model in a computational language and framework that can be used in future investigations into memory, learning and reasoning.
The power of computation is the subject of computational complexity theory. In the early 1970s, computational complexity generally dealt with the difficulty of decision problems, such as whether a graph has a perfect matching or whether a traveling salesman can find a route of at most a certain length. This difficulty was characterized by complexity classes, such as P (tractable problems) and NP (problems for which a solution can easily be checked once it has been produced, but perhaps not easily found in the first place). Of course, many important practical problems are not simply decision problems: instead of asking whether there is a route of at most 1000 miles, one could ask for the length of the shortest possible route. However, these more general problems can often be reduced to a sequence of decision problems, for example by using binary search to narrow in on the length of the shortest route.
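That binary-search reduction can be sketched in a few lines. The decision oracle here is a hypothetical stand-in for whatever procedure answers the yes/no question, and integer route lengths are assumed.

```python
def shortest_length(has_route_of_length_at_most, lo, hi):
    """Reduce the optimization problem ('how long is the shortest route?')
    to O(log(hi - lo)) calls to the decision problem ('is there a route of
    length at most L?'). Assumes the answers are monotone: once True for
    some L, True for every larger L."""
    while lo < hi:
        mid = (lo + hi) // 2
        if has_route_of_length_at_most(mid):
            hi = mid  # a route of length <= mid exists; look lower
        else:
            lo = mid + 1  # no such route; the optimum is larger
    return lo
```

For example, if the (hypothetical) oracle answers True exactly when L >= 742, the search over the range 0..1000 returns 742 after about ten oracle calls.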
One of Valiant’s most noteworthy discoveries is that counting problems are much more subtle than previous experience suggested. A counting problem asks for the number of some combinatorial objects: for example, how many perfect matchings are there in a graph? Now we are not just asking the decision problem of whether that number is positive, but also how large it is. If the decision problem is difficult, then the counting problem must be as well, but Valiant’s surprising realization was that the converse fails. In his paper “The complexity of computing the permanent”, he showed that although there is an efficient algorithm to tell whether a graph has a perfect matching, counting perfect matchings is as hard as any counting problem, including the counting versions of NP-complete problems. This came as a shock to the computational complexity community, which had grown accustomed to the idea that decision problems would easily capture the key features of a problem. Instead, Valiant extended the theory of complexity classes to include the new counting class #P.
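For small graphs the contrast is easy to demonstrate. In this brute-force sketch (the helper name is our own), counting perfect matchings of a bipartite graph amounts to computing the permanent of its 0/1 biadjacency matrix; the obvious algorithm takes time factorial in n, while the decision question "is the count positive?" is answered efficiently by standard matching algorithms.

```python
from itertools import permutations

def count_perfect_matchings(adj):
    """Number of perfect matchings of a bipartite graph given by its
    n x n 0/1 biadjacency matrix -- exactly the permanent of that
    matrix. Brute force over all n! pairings; no polynomial-time
    algorithm is known, and Valiant showed the problem is #P-complete."""
    n = len(adj)
    return sum(
        1
        for perm in permutations(range(n))
        if all(adj[i][perm[i]] for i in range(n))
    )
```

For instance, count_perfect_matchings([[1, 1], [1, 1]]) returns 2, while the decision version merely asks whether that result is nonzero.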
|Valiant discusses the origins of #P-completeness in his analysis of the complexity of counting problems.
If counting problems were limited to esoteric mathematical problems, Valiant’s theory would still have been conceptually fascinating but it would have had limited impact. However, it turns out that these problems pervade much of computer science. For example, estimating any probability amounts to an approximate counting problem, and random sampling is closely related. Approximate counting and sampling are both important goals in their own right (for example, in statistics) and valuable tools for solving other problems; entire subfields of computer science, such as Markov Chain Monte Carlo algorithms, are by now devoted to these topics. The ideas Valiant introduced have grown into a thriving field, which has united computer science, probability and statistics, combinatorics, and even aspects of statistical physics.
In the area of artificial computation, his work has analyzed how to control very large parallel computing systems. These range from the multi-core processors commonly found in personal computers to large, widely distributed clusters of computing machines. He centered his work on the role played by communication between parallel processors. In 1990 he published a paper describing his bulk synchronous parallel (BSP) model. This model has had wide influence on the way parallel computation is conceptualized and carried out.
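The BSP cost model itself is simple enough to state in code. In each superstep every processor does local work, exchanges at most h messages (an "h-relation"), and then all processors synchronize at a barrier; the standard per-superstep cost is w + g·h + l, where g is the network's communication throughput parameter and l the barrier latency. The function below is a sketch of that accounting only, not an implementation of BSP itself.

```python
def bsp_cost(supersteps, g, l):
    """Total cost of a BSP computation. Each superstep is a pair
    (local_work, h), where local_work lists each processor's computation
    time and h is the largest number of messages sent or received by any
    one processor in that superstep."""
    return sum(max(local_work) + g * h + l for local_work, h in supersteps)
```

With two supersteps [([10, 12, 8], 4), ([5, 5, 5], 2)], g = 2, and l = 3, the cost is (12 + 8 + 3) + (5 + 4 + 3) = 35.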
|Valiant describes the origins of the bulk synchronous model for the organization of communication within large parallel computing systems.
In describing Valiant’s contributions, the ACM Turing Award Committee pointed out: