
Frances ("Fran") Elizabeth Allen

United States – 2006
Short Annotated Bibliography
  1. Allen, Frances E. and John Cocke. “A catalogue of optimizing transformations,” in Randall Rustin (ed.), Design and Optimization of Compilers (Prentice-Hall, 1972), pp. 1-30. Perhaps the first paper to systematically catalog code transformations used by an optimizing compiler to improve code performance. The methods described—procedure integration (today called inlining), loop unrolling, loop fusion, common subexpression elimination, code motion, constant folding, dead code elimination, strength reduction, instruction scheduling, register allocation, and anchor pointing (now called Boolean short-circuiting)—are still in use today.
  2. Allen, Frances E., “Interprocedural data flow analysis,” Proceedings of Information Processing 74, IFIP, Elsevier / North-Holland (1974), pp. 398-402. An early paper describing a compiler algorithm for tracing the flow of data among multiple procedures. This work assumes the procedures are not recursive, and does not yet use the graph interval representation.
  3. Allen, Frances E. and J. Cocke, “A program data flow analysis procedure,” Communications of the ACM, Vol. 19, Num. 3 (March 1976), pp. 137-147. Perhaps the most important of Allen's technical contributions, this paper describes a "definition-use" data flow analysis on control-flow graphs that is particularly effective. It features a graph interval decomposition of the control flow graph and the use of bit vectors as an efficient representation for sets. An appendix presents an implementation of the algorithm in PL/I.
  4. Allen, Frances E. et al., “The Experimental Compiling System,” IBM Journal of Research and Development, Vol. 24, Num. 6 (November 1980), pp. 695-715. Notable features of the system are flow-free program analysis (to cope with programs that use procedure-valued, label-valued, and/or pointer-valued variables) and a general system for compile-time constant folding and procedure inlining.
  5. Allen, Frances E., “The history of language processor technology at IBM,” IBM Journal of Research and Development, Vol. 25, Num. 5 (September 1981), pp. 535-548. An overview, with extensive bibliography, of language and compiler development at IBM from 1952 to 1981, including early symbolic assembly languages (including the SOAP optimizing assembler for the IBM 650), FORTRAN I, FORTRAN IV, Comtran and COBOL, Quiktran, the Stretch-Harvest compiler, PL/I, APL, and the Advanced Computing Systems project.
  6. Allen, Frances E. et al. “An overview of the PTRAN analysis system for multiprocessing,” Proceedings of the 1st International Conference on Supercomputing (Athens, Greece, 1988), Springer-Verlag, pp. 194-211. Also in Journal of Parallel and Distributed Computing, Vol. 5 (Academic Press, 1988), pp. 617-640. PTRAN was a Parallel TRANslator that automatically restructured sequential FORTRAN programs for execution on parallel architectures. This paper introduced a linear-time analysis of control dependence (improving on previous quadratic-time techniques) as well as algorithms for solving data dependence equations, and it incorporated interprocedural information into the dependence analysis.
  7. Allen, Frances E., Oral interview. The transcript is available here, and some selected video clips are available here. This is the transcript of an extensive interview conducted by the IBM Corporate Archivist in 2003.
  8. The IBM ACS System: A Pioneering Supercomputer Project of the 1960s. Speakers: Russ Robelen, Bill Moone, John Zasio, Fran Allen, Lynn Conway, Brian Randell. Computer History Museum, February 18, 2010. Video, TRT 1:33:35. Available here. The Advanced Computing Systems project built on the experience with the Stretch-Harvest project, and was, in some ways, an alternative design to the System 360/91. The ACS architecture incorporated innovations that remain important today, including dynamic out-of-order scheduling of multiple instructions, multiple condition codes, a decoupled branch architecture, and instruction pre-fetching (from today's standpoint we might describe it as the first "superscalar" processor design). Advanced ECL circuits and optimizing compilers were also crucial parts of the plans for ACS.
  9. Buchholz, Werner (ed.), Planning a Computer System: Project Stretch, McGraw-Hill, 1962. Now in the public domain and available here. Fran Allen is not a co-author of this work, but it gives an overview of the Stretch computer system, including its Harvest coprocessor, providing useful background for understanding Allen's work on compilers for this complex system. The work that Allen herself did for the Harvest software was classified as secret by the NSA, as were her written reports.
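Two of the transformations cataloged in entry 1, constant folding and strength reduction, can be illustrated with a minimal before/after sketch. The example is invented for illustration and is not taken from the paper:

```python
# A made-up illustration of two transformations Allen and Cocke catalog:
# constant folding (evaluating constant expressions at compile time) and
# strength reduction (replacing a multiply in a loop with an addition).

def before(n):
    """Unoptimized form: recomputes 60 * 60 and multiplies every iteration."""
    total = 0
    for i in range(n):
        total += i * 8 + 60 * 60
    return total

def after(n):
    """Optimized form a compiler could produce automatically."""
    total = 0
    K = 3600        # constant folding: 60 * 60 computed once, ahead of time
    step = 0        # strength reduction: 'step' tracks i * 8
    for i in range(n):
        total += step + K
        step += 8   # one addition replaces the multiply i * 8
    return total
```

Both versions compute the same result; the second simply does less work per iteration, which is the point of the catalog's transformations.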
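The bit-vector data-flow analysis of entry 3 can be sketched in a few lines: each definition gets one bit, so set union and difference become cheap word operations. The following is a minimal sketch of the classic reaching-definitions formulation on a made-up four-block CFG, not Allen and Cocke's PL/I implementation (which also uses their graph interval decomposition, omitted here):

```python
# Sketch of bit-vector reaching-definitions analysis over a hypothetical
# diamond-shaped CFG. Bits: 0 = "x = 1" (entry), 1 = "x = 2" (then),
# 2 = "y = x" (else). Python ints serve as arbitrary-width bit vectors.

# CFG: block -> list of predecessor blocks
preds = {
    "entry": [],
    "then":  ["entry"],
    "else":  ["entry"],
    "exit":  ["then", "else"],
}

GEN  = {"entry": 0b001, "then": 0b010, "else": 0b100, "exit": 0b000}
KILL = {"entry": 0b010, "then": 0b001, "else": 0b000, "exit": 0b000}

def reaching_definitions(preds, GEN, KILL):
    """Iterate IN[b] = union of OUT[p] over predecessors p, and
    OUT[b] = GEN[b] | (IN[b] & ~KILL[b]), until a fixed point."""
    IN  = {b: 0 for b in preds}
    OUT = {b: GEN[b] for b in preds}
    changed = True
    while changed:
        changed = False
        for b in preds:
            new_in = 0
            for p in preds[b]:
                new_in |= OUT[p]            # set union is bitwise OR
            new_out = GEN[b] | (new_in & ~KILL[b])  # difference is AND-NOT
            if (new_in, new_out) != (IN[b], OUT[b]):
                IN[b], OUT[b] = new_in, new_out
                changed = True
    return IN, OUT

IN, OUT = reaching_definitions(preds, GEN, KILL)
```

Here both definitions of x and the definition of y reach the exit block (IN["exit"] has all three bits set), since each arrives along some path. The bit-vector representation is what made this style of analysis practical on 1970s hardware.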