BIRTH:

13 January 1934 Yealmpton, near Plymouth, England;

DEATH:

20 March 2010 in Cambridge, England

EDUCATION:

Eton; King's College, Cambridge (1954-57): B.Sc. (mathematics) 1956; Part II (moral sciences) 1957.

EXPERIENCE:

Second Lieutenant, Royal Engineers (1952-1954); Mathematics teacher, Marylebone Grammar School (1958-1959); Programmer, Ferranti (1960-1962); Lecturer, The City University (1963-1967); Researcher, Swansea University (1968-1970) and Stanford University (1971-1972); Lecturer, then Personal Chair (1984), University of Edinburgh (1973-1995); Chair, Cambridge University Computer Laboratory (1995-2001); Chair, Department of Computer Science, University of Edinburgh (2009-2010).

HONORS AND AWARDS:

British Computer Society Technical Award (1987); Distinguished Fellow of the British Computer Society (1988); Founding Member, Academia Europaea (1988); Fellow of the Royal Society (1988); Turing Award (1991); Fellow of the Royal Society of Edinburgh (1993); Fellow of the ACM (1994); Royal Medal of the Royal Society of Edinburgh (2004); Foreign Associate of the National Academy of Engineering (2008).

Arthur John Robin Gorell ("Robin") Milner DL

United Kingdom – 1991
CITATION

For three distinct and complete achievements:

  1. LCF, the mechanization of Scott's Logic of Computable Functions, probably the first theoretically based yet practical tool for machine assisted proof construction;
  2. ML, the first language to include polymorphic type inference together with a type-safe exception-handling mechanism;
  3. CCS, a general theory of concurrency.
In addition, he formulated and strongly advanced full abstraction, the study of the relationship between operational and denotational semantics.

Working in challenging areas of computer science for twenty years, Robin Milner has established an international reputation for three distinct achievements, each of which has had a marked, important, and widespread effect on both the theory and practice of computer science. In addition, he formulated and strongly advanced full abstraction, the study of the relationship between denotational semantics (which formalizes the meaning of programs using abstract mathematical descriptions of behavior called denotations), and operational semantics (which models program execution).

A key ingredient in all of his work has been his ability to combine deep insight into the mathematical foundations of the subject with an equally deep understanding of practical and aesthetic issues, thus allowing the feedback of theory into practice in an exciting way. Further, his style of scholarship, rigor, and attention to detail sets a high example for all to follow.

Milner was born into a military family residing on the south coast of England. He was awarded a scholarship to Eton College in 1947, and subsequently served in the Royal Engineers, attaining the rank of Second Lieutenant. He then enrolled at King's College, Cambridge, graduating in 1957. At Cambridge, Milner studied Mathematics (B.Sc.), then Moral Sciences (Philosophy) (Part II).

His first encounter with computing, as an undergraduate in 1956, was uninspiring. He took a 10-day course in programming on Maurice Wilkes’ EDSAC (Electronic Delay Storage Automatic Calculator). His reaction was characteristically decisive: “Programming was not a very beautiful thing. I resolved I would never go near a computer in my life.” He found the activity of “writing one instruction after the other” to be “inelegant” and “arbitrary”.

Robin was, however, interested in tools as prosthetic devices that serve to extend our reach. He was intensely practical, as evidenced by the wooden geometric shapes on the light-strings on his lamps, and the comprehensive collection of chisels, gimlets and awls arranged neatly in the utility room (a spotlessly clean workshop that also served as the laundry) in his Cambridge home. Robin later became fascinated with computers as tools. He realized very early that they were not mere calculating devices, but could become ‘tools of the mind’.

Milner never studied for a PhD. After two years of national service in Suez and a year as a school teacher, his next engagement with computers came at the computer manufacturer Ferranti, where he “made sure that all the new machines they were selling were actually working,” and also wrote parts of a compiler. In 1963 he moved to academia with a passionate belief that a deeper understanding of programming could form the basis for a more elegant practice.

At City University, London, he wondered how question-answering in databases might relate to relational algebra, but “didn’t get very far with that.” Then, at Swansea University in Wales, he began to publish on “program schemes,” an early flowchart-like operational model of computer programs.

Milner went to Oxford to hear lectures from Christopher Strachey and Dana Scott, who had been thinking about computable functionals, an abstract mathematical model for the meaning (“denotational semantics”) of a program, and about a logic of computable functions (LCF) for reasoning about these denotations. Milner said that getting interested in program verification and semantics was the real start of his research career. At Swansea he wrote his first automatic theorem prover, but was still “shattered with the difficulty of doing anything interesting”.

He was then invited to Stanford, where he spent two years (1971-1973) as a research associate with John McCarthy’s Artificial Intelligence Project. Milner was more interested in how human intelligence might be “amplified” than in what an artificial intelligence might achieve independently. Milner thought that machines might help humans apply the Scott-Strachey approach to denotational semantics to practical examples. The calculations required were laborious and error-prone, but many were mechanical and amenable to computerization.

He started work, together with Whitfield Diffie, Richard Weyhrauch, and Malcolm Newey, on an implementation of Scott’s LCF which he described as

…designed to allow the user interactively to generate formal proofs about computable functions and functionals over a variety of domains, including those of interest to the computer scientist - for example, integers, lists and computer programs and their semantics.

This LCF proof-assistant included a sub-goaling facility that allowed the user to divide the search for a proof into searches for smaller subproofs, with the machine keeping track of how to put the pieces of the proof back together. It also included a powerful simplification mechanism that allowed the machine to shoulder the burden of performing the more routine calculations.

The beauty of this architecture is that improvements in mechanization can easily be incorporated to build a more powerful assistant without invalidating existing proofs. Furthermore, patterns of proof can be coded as tactics that can be used in many proofs. Proof search is a combination of subgoaling and simplification steps. A tactic (an example might be induction on n) automates some combination of steps. Tactics allow common patterns to be reused; they are not simply a list of one instruction after the other. These ideas have themselves been reused, and built upon, to create proof assistants for a variety of logics.
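To make the architecture concrete, here is a minimal sketch in OCaml, a direct descendant of Milner's ML; the toy logic, the names, and the single tactic are illustrative inventions, not code from Edinburgh LCF. A theorem is an abstract type that can only be produced by the kernel's inference rules, and a tactic maps a goal to a list of subgoals together with a validation function that reassembles the final theorem.

    (* A minimal LCF-style sketch in OCaml; the toy logic and names are illustrative. *)

    (* Formulas of a toy logic: propositional atoms and conjunction. *)
    type formula = Atom of string | And of formula * formula

    (* The kernel: [thm] is abstract, so theorems can only be built by its rules. *)
    module Kernel : sig
      type thm
      val concl : thm -> formula            (* the formula a theorem proves *)
      val axiom : string -> thm             (* assume an atom, for the demo *)
      val conj_intro : thm -> thm -> thm    (* from A and B, derive A /\ B  *)
    end = struct
      type thm = formula
      let concl t = t
      let axiom a = Atom a
      let conj_intro a b = And (a, b)
    end

    open Kernel

    (* A goal is a formula to prove; a tactic splits it into subgoals and returns
       a validation that rebuilds a theorem from the subgoals' proofs. *)
    type goal = formula
    type tactic = goal -> goal list * (thm list -> thm)

    let conj_tac : tactic = function
      | And (a, b) ->
          let validate = function
            | [ta; tb] -> conj_intro ta tb
            | _ -> failwith "conj_tac: expected exactly two subproofs"
          in
          ([a; b], validate)
      | _ -> failwith "conj_tac: goal is not a conjunction"

    (* Subgoaling in action: split "p /\ q", prove the pieces, recombine. *)
    let () =
      let subgoals, validate = conj_tac (And (Atom "p", Atom "q")) in
      let proofs =
        List.map (function Atom a -> axiom a | _ -> failwith "unproved") subgoals
      in
      assert (concl (validate proofs) = And (Atom "p", Atom "q"))

In Edinburgh LCF the host language for writing such tactics was ML itself, which is exactly why ML needed to be a real, general-purpose programming language.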

Milner was still dissatisfied with the difficulty of using the then-existing programming languages – he was programming in LISP – and with the possibility of error. How could one be sure that the system could not introduce an inconsistency, and hence be useless because it could prove anything? When he moved to Edinburgh, in 1974, he embarked on an ambitious program to develop and implement a practical programming language that drew on LISP, but was also much influenced by Peter Landin’s ISWIM (If you See What I Mean), an ‘abstract’ language that was never really implemented but which has clear mathematical foundations and operational semantics based on Landin’s ‘SECD’ abstract machine.

Milner’s new language was designed as a metalanguage (hence its name, ML) for the implementation of a new proof-assistant called Edinburgh LCF. It was a robust and efficient practical tool well-adapted for this task, and was defined with the mathematical clarity exemplified by Landin’s work.

ML was way ahead of its time. It is built on clean and well-articulated mathematical ideas, teased apart so that they can be studied independently and relatively easily remixed and reused. ML has influenced many practical languages, including Java, Scala, and Microsoft’s F#. Indeed, no serious language designer should ignore this example of good design.
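The two language features singled out in the citation are easy to see in OCaml, one of ML's direct descendants; the snippet below is our illustration, not historical ML code. The function zip carries no type annotations, yet the compiler infers the fully general type 'a list -> 'b list -> ('a * 'b) list, and exceptions carry typed data whose raising and handling the compiler checks.

    (* Polymorphic type inference: no annotations, yet the inferred type is
       'a list -> 'b list -> ('a * 'b) list, usable at many concrete types. *)
    let rec zip xs ys =
      match xs, ys with
      | x :: xs', y :: ys' -> (x, y) :: zip xs' ys'
      | _ -> []

    (* Type-safe exceptions: the exception carries a typed value. *)
    exception Missing_key of string

    let lookup key table =
      try List.assoc key table
      with Not_found -> raise (Missing_key key)

    let () =
      ignore (zip [1; 2; 3] ["a"; "b"; "c"]);   (* used at int and string *)
      ignore (zip [true] [0.5]);                (* and again at bool and float *)
      match lookup "pi" [ ("pi", 3.14159) ] with
      | v -> Printf.printf "pi = %f\n" v
      | exception Missing_key k -> Printf.printf "missing key: %s\n" k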

Meanwhile, Milner continued to worry about the foundational issues of semantics, in particular the link between the properties of the mathematical denotation of a program and its operational behavior. If two programs can be observed to behave differently, they must have different denotations; otherwise the semantics would be unsound. Ideally, the converse should hold: any two observationally indistinguishable programs should have the same denotation.
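In symbols (the notation is ours), writing $[\![P]\!]$ for the denotation of a program $P$ and $P \simeq_{\mathrm{obs}} Q$ to mean that no observation can distinguish $P$ from $Q$, soundness is the first implication and full abstraction is the two-way correspondence:

    \[ [\![P]\!] = [\![Q]\!] \;\Longrightarrow\; P \simeq_{\mathrm{obs}} Q
       \qquad\text{(soundness)} \]
    \[ [\![P]\!] = [\![Q]\!] \;\Longleftrightarrow\; P \simeq_{\mathrm{obs}} Q
       \qquad\text{(full abstraction)} \]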

However, Gordon Plotkin discovered a key limitation of sequential programming, which meant that the natural Scott-Strachey model for an ISWIM-like language could have two programs that are observationally indistinguishable, but have different denotations. Milner constructed the first fully abstract model in which two observationally equivalent programs must have the same denotation.

Milner worried about modeling non-deterministic programs and concurrent processes, which Scott-Strachey semantics didn’t address conveniently. The advent of timesharing and networking made this a practical as well as a theoretical issue. A simple functional model of sequential computation does not account for the possibility of interference between two programs that share the same memory, nor does it account for the interactions in a client-server architecture, or the interactions between a system and its users.

It is possible to create clever denotational models that account for such features, but Milner wanted a “semantic theory in which interaction or communication is the central idea.” His first concern was about “the choice of interpretation and of mathematical framework, a small well-knit collection of ideas, which justifies the manipulations of the calculus.” After a long period of reflection and experimentation, he settled on Robert M. Keller’s deceptively simple model of labeled transition systems with synchronized communication, to which he applied the notion of observational equivalence. This formed the basis for his 1980 book A Calculus of Communicating Systems (CCS) [2], which sets out a ‘general theory of concurrency’.
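A flavour of that labeled-transition view, in a standard textbook example of ours rather than one taken from the book: a process $a.P$ can perform the action $a$ and become $P$, its partner $\bar{a}.Q$ can perform the complementary action $\bar{a}$, and the two placed in parallel can synchronize, which CCS records as the silent action $\tau$:

    \[ a.P \xrightarrow{\;a\;} P \qquad
       \bar{a}.Q \xrightarrow{\;\bar{a}\;} Q \qquad
       a.P \mid \bar{a}.Q \xrightarrow{\;\tau\;} P \mid Q \]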

But he was not satisfied. He observed that “there is still latitude for choice in the definition of observational equivalence,” and conjectured that CCS might be extended by allowing “labels to be passed as values in communication.”

Shortly after the publication of CCS, Milner learnt of David Park’s notion of “bisimulation”, a subtle but compelling refinement of the notion of observational equivalence. Milner completely reworked his theory in a new book, Communication and Concurrency [3] in 1989. CCS is little-changed, but the whole theory, in particular the treatment of observational equivalence, is more elegant and less arbitrary.
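Park's idea, paraphrased here in standard notation rather than quoted from the book, is that a relation $R$ between processes is a (strong) bisimulation when related processes can match each other's transitions step by step and end up related again:

    \[ \text{if } P \mathrel{R} Q \text{ and } P \xrightarrow{\;a\;} P'
       \text{, then } Q \xrightarrow{\;a\;} Q' \text{ for some } Q'
       \text{ with } P' \mathrel{R} Q' \text{, and symmetrically for the transitions of } Q. \]

Two processes are bisimilar when some bisimulation relates them; the observational equivalence of Communication and Concurrency is the ‘weak’ variant of this idea, which abstracts from the silent action τ.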

These three strands of his early works – proof, concurrency, and ML – are highlighted in Milner’s Turing Award citation. The award trophy, representing computing’s highest honor, sat inconspicuously in his kitchen between a vase and a bowl of fruit.

An account that stopped at this point would not do justice to his legacy. In addition to making his own technical contributions to the field, Milner eloquently articulated a vision of Computer Science as an intellectual endeavor. His inaugural address for the Laboratory for Foundations of Computer Science at the University of Edinburgh asked, “Is Computing an Experimental Science?”, and argued that theories of computation should be tested against their impact on practice.

In accepting his honorary doctorate from the University of Bologna in 1997, he reflected on the power of programming:

Every tool designed by man is a prosthetic device, and for every prosthetic device there is a means of control. Many tools are physical, and their control is manual. In contrast, computers are the most complex tools ever invented, and they are tools of the mind; the means of control is hardly muscular - it is primarily linguistic. Quite simply, the versatility of computers is exactly equal to the versatility of the languages by which we prescribe their behaviour, and this appears to be unbounded.

and then articulated his vision of informatics:

Computing is not only about a computer's internal action; it is about the way a computer behaves as part of a larger system. The terms in which the computer behaviour is prescribed must harmonize with the way we describe information flow in such systems. Thus computing expands into informatics, the science of information and communication […] it can provide an exact view of the world which is not the province of any previously existing science.

His technical work continued with this broader perspective. Working with Joachim Parrow and David Walker, he developed the π-calculus in 1992, in which labels are passed as values to create a calculus of communicating systems that can naturally express processes with changing structure. For Milner’s definitive account, see Communicating and Mobile Systems: The π-calculus [5].
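The calculus's central step, shown here in standard notation as our illustration rather than an excerpt from the book, is communication that passes a name: one process sends the name $z$ along the channel $x$, a parallel partner receives it as $y$, and the receiver may then use the name it has just received as a channel of its own, which is what lets the connectivity of a system change as it runs:

    \[ \bar{x}\langle z \rangle.P \;\mid\; x(y).Q
       \;\longrightarrow\; P \;\mid\; Q\{z/y\} \]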

He was increasingly concerned about ensuring the impact of theory on practice. For example, he contributed as an invited expert to the Web Services Choreography Description Language (WS-CDL); aspects of this business-process description language are inspired by the π-calculus.

The calculus also led to further formal calculi for describing and reasoning about other systems, including the spi calculus for cryptographic protocols and the stochastic π-calculus for cellular signaling pathways. Such widely differing applications reinforced Milner’s vision of informatics as a science of information and interaction that can be applied in many domains. “Informatics,” he said, “is a synonym for computer science. The ‘info’ words have an advantage: they express the insight that informatic behavior is wider than what computers do, or what computing is.”

Milner retired in 2001, but he did not stop. The π-calculus had been applied to new examples of interaction that he had not anticipated, but he now saw even more. For example, a

“daunting range of concepts [is] needed to understand ubiquitous computing: data provenance, privacy, authorization, trust (among agents), self-awareness, self-management, reconfiguration, failure recovery, goals, beliefs, ..., and the list can be extended much further.”

He embarked enthusiastically on the development of a further model, “a topographical model which aims to provide a theoretical foundation for mobile interactive systems”.

This ambitious enterprise led to the “bigraphical” model reported in his final book, The Space and Motion of Communicating Agents [7], in 2009. In it Milner seeks to account for agents with locality and connectivity that may be reconfigured through motion (placing) and interaction (linking). His goal is to model “not only interactive behavior among artificial agents, but also interaction with and among natural agents. Ultimately our informatic modelling should merge with, and enrich, natural science.”

Bigraphs model the concurrent and interactive behaviors of mobile communicating agents. Applications include computational systems biology, where the agents include genes and proteins, and the internet, where the agents include people, computers, and programs. Like Milner's earlier work, this is destined to have far-reaching and profound significance, both within computer science and beyond.
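In the simplest terms, a bigraph equips a single set of nodes with two independent structures: a place graph recording where each agent is, and a link graph recording what it is connected to; reconfiguration acts on the first by placing and on the second by linking. The OCaml fragment below is a deliberately simplified sketch of ours, not Milner's formal definition (it omits, among other things, sites and inner names).

    (* A simplified sketch of a bigraph's two structures over one set of nodes. *)
    type node_id = int

    (* Place graph: a node sits either in a root region or inside another node. *)
    type place = Root of int | InNode of node_id

    (* Link graph: each port is attached to a closed edge or to an outer name. *)
    type link = Edge of int | OuterName of string

    type node = {
      id      : node_id;
      control : string;        (* the kind of agent, e.g. a room or a person *)
      place   : place;         (* locality *)
      ports   : link list;     (* connectivity *)
    }

    type bigraph = node list

    (* A toy state: an agent inside a room, reachable through the name "net". *)
    let example : bigraph = [
      { id = 0; control = "Room";  place = Root 0;   ports = [] };
      { id = 1; control = "Agent"; place = InNode 0; ports = [OuterName "net"] };
    ]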

He was interviewed as ‘Geek of the Week’ just a few weeks before his death, and said of his work on bigraphs:

I would dearly love to follow this kind of work through to the front line of design and experience, but I don't think I'll be around for long enough. I'm making up for it by listening and talking to those who will be!

Robin’s listening and talking stimulated various groups to work on bigraphical programming languages, and on the application of bigraphs to model natural systems. He had returned part-time to a chair at Edinburgh, and was still full of plans and projects for the future, when he died of a heart attack on 20 March 2010, just a few days after the funeral of his beloved wife Lucy.

Author: Michael Fourman