William (“Velvel”) Morton Kahan

United States – 1989
Additional Materials

Floating Point

One of the most fundamental characteristics of a computer is its “word length” – the size of the standard chunk of information processed by its hardware. Although the first personal computers processed only 8 bits at a time, computers intended for scientific computation supported much longer words. In the 1950s a 36-bit word, which could hold a ten-digit decimal number, was common. Other scientific machines had words of 60 bits or even longer.

Ten-digit accuracy would be sufficient for most purposes. However, the numbers used in scientific calculations are generally not integers (whole numbers), meaning that the programmer had to keep track of the position of the decimal point. More seriously, the physical quantities being represented are often very large or very small, and hence require an ungainly number of zeros before or after the decimal point. Scientists had traditionally dealt with this by splitting numbers into two parts, with the second part specifying the number of zeros to be added. For example, Avogadro’s number, an estimate of the number of molecules in a mole of a substance, is written as 6.02214129×10²³ rather than as 602,214,129,000,000,000,000,000. Programmers adopted the same technique, now called “floating point” representation, but keeping track of the details was a major source of complexity and bugs in early programs.
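
The same split can be seen directly in a modern language. Below is a minimal Python sketch (not from the article; math.frexp is a standard library call) showing a float decomposed into the significand and exponent that the hardware actually stores, in base 2 rather than base 10:

    import math

    # Avogadro's number, written in scientific notation rather than 24 digits.
    avogadro = 6.02214129e23

    # Floating point hardware stores x as significand * 2**exponent.
    # math.frexp recovers that split: x == m * 2**e with 0.5 <= abs(m) < 1.
    m, e = math.frexp(avogadro)
    print(m, e)      # roughly 0.9963 and 79
    print(m * 2**e)  # 6.02214129e+23, reconstructed exactly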

Adding hardware support for floating point number representation could speed things up and greatly simplify the work of programmers. It took only a few years from the first computers for floating point instructions to emerge as a standard feature of scientific computers. But the problem was that different manufacturers, and even different computers from the same manufacturer, took different approaches to representing and manipulating floating point numbers. They differed in the word length, the number base (for example binary or decimal) used to represent numbers, the methods used to round off excess digits, the allowable size and precision of the number, and the response given when a result was too large (overflow) or too small (underflow) or represented infinity.
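
One of those differences, the choice of number base, is easy to demonstrate today because Python ships both binary floats and a decimal type. A minimal sketch (standard library only, not from the article):

    from decimal import Decimal

    # A binary machine must round 0.1, since one tenth has no finite
    # base-2 representation; a decimal machine stores it exactly.
    print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
    print(Decimal("0.1"))  # 0.1, as a decimal representation would hold it

Two machines that disagreed on the base, the rounding rule, or the word length would disagree, digit by digit, on results like these.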

By the 1960s almost all scientific and technical programs were written in FORTRAN, a language standardized so that a program written for one computer could be compiled and run on another without being rewritten. However, one could not count on the program producing the same results on different computers. A program might run inefficiently, stop with an unexpected error, or produce results less accurate than expected. This was a serious problem at a time when no single manufacturer dominated the scientific computing market enough to establish a de facto standard. The IEEE standard for floating point arithmetic solved that problem.
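
What the standard pinned down can be checked on any machine today. Here is a minimal Python sketch (the helper name float_bits is mine, for illustration) exposing the fixed binary64 layout of 1 sign bit, 11 exponent bits, and 52 significand bits, along with the standardized responses to overflow and underflow:

    import struct
    import sys

    def float_bits(x: float) -> str:
        # Unpack the raw 64 bits of an IEEE 754 binary64 value.
        (raw,) = struct.unpack(">Q", struct.pack(">d", x))
        bits = f"{raw:064b}"
        return f"sign={bits[0]} exponent={bits[1:12]} significand={bits[12:]}"

    print(float_bits(1.0))   # sign=0 exponent=01111111111 significand=0000...0
    print(float_bits(-2.5))

    # Overflow and underflow responses are standardized as well:
    print(sys.float_info.max * 2)  # inf: overflow yields infinity, not a halt
    print(sys.float_info.min / 2)  # a subnormal number: gradual underflow

Gradual underflow through subnormal numbers was among the most contested provisions during the standardization debates, and Kahan argued strongly for it.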

Kahan was interviewed by Charles Severance; the transcript appeared in IEEE Computer in March 1998 and is also available online.