FLOW-MATIC (ID:27/flo013)

Release name for B-0; possibly the first English-like data-processing (DP) language.

So called because it eased the recording of workflow.

Remington Rand, 1957.

Provided part of the basis for COBOL (a sketch follows this list):
     Permitted English-like names for variables
     Began each statement with an English verb
     Enabled separation of data descriptions from procedures
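
The flavor of these three ideas can be suggested with a small sketch (Python rather than FLOW-MATIC itself; the file names, field names, and verbs below are invented for illustration, not taken from the FLOW-MATIC manuals):

    # Toy, FLOW-MATIC-flavored mini-interpreter (illustrative only).
    # Data descriptions are kept apart from the procedure, data items
    # carry English-like names, and every statement begins with an
    # English verb that drives the dispatch.

    DATA_DESCRIPTIONS = {                    # "data" separate from "procedure"
        "INVENTORY": ["PRODUCT-NO", "QUANTITY"],
        "PRICE-FILE": ["PRODUCT-NO", "UNIT-PRICE"],
    }

    PROCEDURE = [                            # every statement is verb-first
        "INPUT INVENTORY",
        "INPUT PRICE-FILE",
        "COMPARE PRODUCT-NO",
    ]

    def execute(statement):
        verb, _, operands = statement.partition(" ")
        fields = DATA_DESCRIPTIONS.get(operands, [])
        print(f"{verb}: {operands} {fields}")

    for sentence in PROCEDURE:
        execute(sentence)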


Related languages:
Procedure Translator => FLOW-MATIC   Renaming
X-1 => FLOW-MATIC   Target language for
FLOW-MATIC => COBOL   Evolution of
FLOW-MATIC => FACT   Influence
FLOW-MATIC => SHARE Information Algebra   Influence

References:
  • Hopper, Grace M. "Automatic Programming for Business Applications" Abstract: A discussion of the history of automatic programming and of the philosophy behind the development of Flow-Matic, Remington-Rand's programming system for business applications.
          in "Proceedings of the Fourth Annual Computer Applications Symposium", Armour Research Foundation, Illinois Institute of Technology, Chicago, Illinois 1957
  • [Bemer, RW] [Addendum to the Automatic Programming Systems Chart of 1(4)] June 1958
          in [ACM] CACM 1(06) (June 1958)
  • [Bemer, RW] [State of ACM automatic coding library August 1958]
          in [ACM] CACM 1(06) (June 1958)
  • Bemer, R "Techniques Department" - Translation to another language rather than compiling
          in [ACM] CACM 1(07) July 1958
  • FLOW-MATIC Input Tape. Remington-Rand Univac Publication U-1163.536 1958
  • FLOW-MATIC Programming (Univac I and II). Remington-Rand Univac Publication U-1518. 1958
  • Asch, Alfred. 1959 July 29. Minutes of Committee Meeting on Data Systems Languages Held at Bureau of Standards, June 23-24. (Cited in Sammet 1978) Extract: Languages examined by CODASYL
    An important decision of the committee was to agree (Asch, 1959) "that the following language systems and programming aids would be reviewed by the committee: AIMACO, Comtran [sic], Flowmatic [sic], Autocoder III, SURGE, Fortran, RCA 501 Assembler, Report Generator (GE Hanford), APG-I (Dupont)"
  • FLOW-MATIC Operating Statements (Univac Solid-State 90). Remington-Rand Univac Publication U-1984.1 1959
  • Hopper, Grace "Automatic programming: present status and future trends" view details
          in Proceedings of the Symposium on the Mechanisation of Thought Processes. Teddington, Middlesex, England: The National Physical Laboratory, November 1958 view details
  • Locks, Mitchell O. "Automatic Programming for Automatic Computers" Journal of the American Statistical Association 54(288) Dec 1959 pp744-754 Extract: SOAP, RECO, X-1
    Assembly and Compiling Systems both obey the "pre-translation" principle. Pseudo instructions are interpreted and a running program is produced before the solution is initiated. Usually this makes possible a single set of references to the library rather than many repeated references.
    In an assembly system the pseudo-code is ordinarily modified computer code. Each pseudo instruction refers to one machine instruction or to a relatively short subroutine. Under the control of the master routine, the assembly system sets up all controls for monitoring the flow of input and output data and instructions.
    A compiler system operates in the same way as an assembly system, but does much more. In most compilers each pseudo instruction refers to a subroutine consisting of from a few to several hundred machine instructions. Thus it is frequently possible to perform all coding in pseudo-code only, without the use of any machine instructions.
    From the viewpoint of the user, compilers are the more desirable type of automatic programming because of the comparative ease of coding with them. However, compilers are not available with all existing equipments. In order to develop a compiler, it is usually necessary to have a computer with a large supplementary storage such as a magnetic tape system or a large magnetic drum. This storage facilitates compilation by making possible as large a running program as the problem requires.
    Examples of assembly systems are Symbolic Optimum Assembly Programming (S.O.A.P.) for the IBM 650 and REgional COding (RECO) for the UNIVAC SCIENTIFIC 1103 Computer. The X-1 Assembly System for the UNIVAC I and II Computers is not only an assembly system, but is also used as an internal part of at least two compiling systems. Extract: MATHMATIC, FORTRAN and UNICODE
    For scientific and mathematical calculations, three compilers which translate formulas from standard symbologies of algebra to computer code are available for use with three different computers. These are the MATH-MATIC (AT-3) System for the UNIVAC I and II Computers, FORTRAN (for FORmula TRANslation) as used for the IBM 704 and 709, and the UNICODE Automatic Coding System for the UNIVAC SCIENTIFIC 1103A Computer. Extract: FLOW-MATIC and REPORT GENERATOR
    Two advanced compilers have also been developed for use with business data processing. These are the FLOW-MATIC (B-ZERO) Compiler for the UNIVAC I and II Computers and REPORT GENERATOR for the new IBM 709. In these compilers, English words and sentences are used as pseudocode.
    Extract: FLOW-MATIC
    It is worth noting that the same compiler is used in this case on two different computers with substantially different command structures. This illustrates the fact that the language of a compiler can be independent of that of the computer.
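    A toy sketch of the distinction Locks draws, with invented pseudo-instructions and expansion tables: an assembly system replaces each pseudo instruction with one machine instruction or a short fixed sequence, while a compiler may expand a single pseudo instruction into a whole library subroutine.

        # Invented opcodes and expansions, for illustration only.
        ASSEMBLY_TABLE = {      # assembly system: roughly one-to-one
            "ADD": ["add"],
            "CLR": ["load-zero", "store"],
        }

        COMPILER_LIBRARY = {    # compiler: one pseudo op -> long subroutine
            "SORT": ["enter sort-subroutine"] + [f"sort-step-{i}" for i in range(200)],
        }

        def expand(pseudo_program, table):
            machine_code = []
            for op in pseudo_program:
                machine_code.extend(table[op])
            return machine_code

        print(len(expand(["ADD", "CLR"], ASSEMBLY_TABLE)))   # 3 instructions
        print(len(expand(["SORT"], COMPILER_LIBRARY)))       # 201 instructions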
  • Programmer's Guide for Preparation of Library Routines. Remington-Rand Univac Publication U-1163.537 July 1959
  • B2 FLOW-MATIC: Programming (Univac II). Preliminary Users' Manual to accompany COBOL Manual. (Second Edition, September 1960)
  • FLOW-MATIC Data Design (Univac Solid-State 90). Remington-Rand Publication U-1984.5 (1960)
  • Martin, E. Wayne Jr.; Hall, Dale J. "Data Processing: Automation in Calculation" Review of Educational Research, Vol. 30, No. 5, The Methodology of Educational Research (Dec., 1960), 522-535. Abstract: Availability of the electronic computer makes it possible currently to employ new methods in many areas of research. Performance of 1 million multiplications on a desk calculator is estimated to require about five years and to cost $25,000. On an early scientific computer, a million multiplications required eight minutes and cost (exclusive of programing and input preparation) about $10. With the recent LARC computer, 1 million multiplications require eight seconds and cost about 50 cents (Householder, 1956). Obviously it is imperative that researchers examine their methods in light of the abilities of the computer.
    It should be noted that much of the information published on computers and their use has not appeared in educational or psychological literature but rather in publications specifically concerned with computers, mathematics, engineering, and business. The following selective survey is intended to guide the beginner into this broad and sometimes confusing area. It is not an exhaustive survey. It is presumed that the reader has access to the excellent Wrigley (1957) article; so the major purpose of this review is to note additions since 1957.
    The following topics are discussed: equipment availability, knowledge needed to use computers, general references, programing the computer, numerical analysis, statistical techniques, operations research, and mechanization of thought processes. Extract: Compiler Systems
    Compiler Systems
    A compiler is a translating program written for a particular computer which accepts a form of mathematical or logical statement as input and produces as output a machine-language program to obtain the results.
    Since the translation must be made only once, the time required to repeatedly run a program is less for a compiler than for an interpretive system. And since the full power of the computer can be devoted to the translating process, the compiler can use a language that closely resembles mathematics or English, whereas the interpretive languages must resemble computer instructions. The first compiling program required about 20 man-years to create, but use of compilers is so widely accepted today that major computer manufacturers feel obligated to supply such a system with their new computers on installation.
    Compilers, like the interpretive systems, reflect the needs of various types of users. For example, the IBM computers use "FORTRAN" for scientific programing and "9 PAC" and "ComTran" for commercial data processing; the Sperry Rand computers use "Math-Matic" for scientific programing and "Flow-Matic" for commercial data processing; Burroughs provides "FORTOCOM" for scientific programming and "BLESSED 220" for commercial data processing.
    There is some interest in the use of "COBOL" as a translation system common to all computers.
  • Taylor, A., "The FLOW-MATIC and MATH-MATIC Automatic Programming Systems"
          in Goodman, Richard (ed) "Annual Review in Automatic Programming" (1) 1960 Pergamon Press, Oxford
  • Blum, E. K. review of Goodman 1960 Abstract: This volume contains the 18 papers presented to the Conference on Automatic Programming of Digital Computers held in April 1959 at Brighton Technical College. The papers are, for the most part, brief descriptions of various automatic programming systems in use in Great Britain at the time of the conference. The following sample of titles gleaned from the table of contents will convey some idea of the scope and content of the papers: "The MARK 5 System of Automatic Coding for TREAC"; "PEGASUS: An Example of an Autocoded Program for Sales Analysis and Forecasting"; "The Application of Formula Translation to Automatic Coding of Ordinary Differential Equations"; "Further DEUCE Interpretive Programs and some Translating Programs"; and "Automatic Programming and Business Applications."

    Most of the papers are written in a style and manner which seem to have become universally accepted for papers on computer programming, at least in the English-speaking world and probably in others. This style insists on a liberal dosage of impressively detailed flow charts which, considering the well-known and understandable reluctance of programmers to read their own programs much less those of others, one suspects most readers hastily skip over, willingly granting their authenticity. The flow charts are invariably accompanied by long lists of special instructions described in the private patois of the author, who seems blissfully unaware or unconcerned that his specially constructed vocabulary of acronyms may present rough going to the reader from the inlying provinces. Finally, the style demands long and wearisome descriptions of basic concepts (e.g., subroutine, symbolic instruction, etc.) long since familiar to the average reader, some indication of difficulties as yet to be surmounted (e.g., automatic storage allocation, easier debugging, et al.). Nevertheless, the volume does give some idea of the status of automatic programming systems in Great Britain in early 1959. It also contains a concise description of the 709 SHARE operating system, and another brief account of FLOW-MATIC and MATH-MATIC. There are two interesting appendices worthy of mention. Appendix One consists of reprints of two papers by the late A. M. Turing, "On Computable Numbers with an Application to the Entscheidungsproblem", in which the "Turing machine" was conceived, and a brief corrective note on the same subject. Appendix Two contains the "Preliminary Report of the ACM-GAMM Committee on an International Algebraic Language", since published elsewhere.

    The reviewer cannot suppress the question of whether this sort of material (Appendices excepted), so soon obsolescent or obsolete and so difficult to present adequately in short papers, deserves the effort and expense required to reproduce it between the bound hard covers of a handsome book.

          in ACM Computing Reviews 2(03) May-June 1961
  • Sammet, Jean E "1960 Tower of Babel" diagram on the front of CACM January 1961
          in [ACM] CACM 4(01) (Jan 1961)
  • Willey, E.L.; d'Agapeyeff, A.; Tribe, Marion; Gibbens, B.J.; Clark, Michelle "Some commercial Autocodes -- A comparative study", A.P.I.C. Studies in Data Processing #1, Academic Press, London, 1961, pp. 53.
  • d'Agapeyeff, Alex "An introduction to Commercial Compilers" Extract: Introduction
    Introduction
    It is desirable to begin by defining what we mean by a compiler. In the broadest terms this might be described as a Programming System, running on some computer, which enables other programs to be written in some artificial source language. This result is obtained by the simulation of the artificial machine represented by the source language, and the conversion of such programs into a form in which they can be executed on one or more existing computers.
    This definition, however inadequate, at least avoids any artificial distinction between 'Interpreters' and 'Compilers' or 'Translators'. For although these terms are still in widespread use it now appears that the distinction between them is only valid for particular facets of any modern programming language, and even then only as a rather inadequate indication of the moment in time that certain criteria are evaluated.
    On the more positive side the definition does bring out a number of points which are sometimes forgotten.

    1. The Compiler as a Program
    A compiler is a program of a rather specialized nature. As such it takes a considerable time and effort to write and, more particularly, to get debugged. In addition it often takes a surprising amount of time to run. Nevertheless compilers are concerned with the generalized aspects of programming and have in consequence been the source of several important techniques. These could be applied to a wider field if a sufficient number of programmers would take an interest in such developments.
    2. The Three Computers
    There are three possible computers involved-the one on which the compiler will run, the artificial computer, and the one which will execute the program (although the first and last are not necessarily distinct). In general the more that the facilities in the computers involved diverge the harder is the task of the compiler. On the other hand the use of a large computer to compile programs for smaller computers can avoid storage problems during compilation, and any consequent restrictions in the language, and make the process more flexible and efficient. This can be readily appreciated from the fact that some COBOL compilers running on small computers take more than forty passes! But the full benefit of large computers used at a distance for this purpose is dependent on cheap and reliable data links, since at the moment they introduce delays in compilation and the reporting of errors.
    3. Presentation to the User
    Any user of the system must, to be effective, learn how to write programs in the artificial source language which will fit the capabilities of both the artificial and the actual object machine. This requirement has become obscured by the current myths of the so-called 'Natural' or 'English' languages, and the widespread claims that 'anyone can now write programs' are highly misleading. Indeed existing source languages are rather difficult to learn and show no particular indication that they are well adapted toward the task for which they are intended.
    A more serious drawback of the 'Natural' language approach is that it hinders or prevents the new facilities being presented in terms of a machine. This is unfortunate because to do so would probably give a better mental image of the realities involved and a better understanding of the rules and restrictions of the language, which tend to be confusing until it is appreciated that they are compiler-, or computer-, oriented. Extract: The Main Requirements of a Commercial Compiler
    The Main Requirements of a Commercial Compiler
    A major problem in commercial compilers is the diversity of the tasks which they are required to achieve, some of which often appear to have been specified without regard to the complexity they introduce into the compiler compared to the benefit they confer on users. This ambitiousness may be contrasted with that of the authors of ALGOL (Backus et al., 1960) who concentrated on the matters which were considered important and capable of standardization (e.g. general procedures), and largely ignored those considered of lesser importance (e.g. input/output). There is no doubt that this concentration is the chief reason why ALGOL compilers normally work on or about the date intended whilst commercial compilers normally do not. The main tasks which are commonly required are discussed here.
    1. Training and Protecting the User
    The System must enable intending users who have no previous experience of computers to obtain that experience. This means that the language must include both a 'child's guide to programming' and the facilities which the user will require when he actually comes to writing real and complex programs. In addition the compiler is expected to protect the user, as much as possible, from the results of his own folly. This implies extensive checking of both the source and object program and the production of suitable error reports. But the main difficulty is to provide error protection which is not unduly inefficient in terms of time or restrictive on the capabilities of the user.
    2. Data Declarations
    The declaration of the properties of the users' data should be separable from the procedures which constitute a particular program, because the data may have an independent existence that is quite distinct from the action of any one program. Thus for example a Customer Accounts File of a distributing company may be originally set up by one program and regularly updated by a number of others. In terms of the 'global' and 'local' concepts of ALGOL this introduces a 'universal' declaration which is valid for all, or a number, of programs.
    This requirement means that the compiler should keep a master file of declarations which is accessible to all programs, and at the same time provides the means of extending, amending and reporting on the contents of this file.
    3. Input/Output Data
    The input and output data must have a wide range of formats and representations because it is normally intended to allow users to continue with the same media, and the same codes and conventions, which they employ at present. And there is of course no assurance that these conventions were designed, or are particularly suitable, for a computer! This is the kind of requirement which is very hard on the compiler writer.
    If the specification of format and any other information is to be made in the data description it must be done in a way that is understandable and not too complicated for the users, which means the compiler may not find it simple to pick up the relevant parameters. The actual process of implementation is a choice of one of the following.
    (a) A very generalized routine is built to cover all possibilities. This is safe but inefficient in the average case.
    (b) A fairly general routine is built to cover the anticipated range. This is less safe but only slightly more efficient.
    (c) Individual routines are constructed for each particular variation. This is very efficient but also very expensive in terms of compiler effort.
    (d) A combination is provided of both (a) and (c) which is economical only if a correct guess has been made of the most commonly occurring cases.
    But this is of course the type of quandary which arises in several different areas of a compiler.
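    A minimal sketch of the trade-off between options (a) and (c) above, assuming an invented fixed-character numeric field format: the generalized routine re-interprets its layout parameters on every call, while the individually constructed routine has them built in.

        # Option (a): one generalized conversion routine, safe for any
        # layout but paying the parameter-interpretation cost each call.
        def convert_general(raw, width, decimals):
            return int(raw[:width]) / (10 ** decimals)

        # Option (c): an individual routine constructed for one layout;
        # here a closure stands in for generated code with the
        # parameters "compiled in".
        def make_converter(width, decimals):
            scale = 10 ** decimals
            def convert(raw):
                return int(raw[:width]) / scale
            return convert

        price = make_converter(width=6, decimals=2)
        assert price("012345") == convert_general("012345", 6, 2) == 123.45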
    4. Properties and Manipulation of Data
    It must be possible to form data structures during the running of the object program and to be able to manipulate data in a reasonably general way. The latter has been deliberately left indefinite because there is considerable disparity between the different commercial languages as to both the properties and the manipulation of data. The points of distinction are:
    (a) Whether the most common unit of data (normally referred to as a Field) should be allowed to vary dynamically in length. The argument in favour of this property centres on such items as postal addresses-these usually vary between twenty to 100 or more characters with an average of forty or less.
    This property has a very marked effect on the compiler. Not only must the address determination of such fields be handled in a particular way but other fields may have to be moved round dynamically to accommodate their changes in length.
    (b) Whether two or more Lists of fields (i.e. vectors) should be allowed in the unit of data handled sequentially on input and output (normally referred to as a Record). Some types of Record kept by conventional methods are claimed to have this property but it is very difficult for a compiler to handle more than one list economically. The problem is simply one of addressing when the lists grow unevenly.
    (c) Whether characters within fields should be individually addressable.
    This is the kind of facility which is very important when master files are being loaded on to magnetic tape for the first time. Such loading often constitutes the largest single task of an installation and may be very complex because of the variety of media in which parts of these files were previously held. The addressing of characters even on fixed word machines turns out to be quite simple using a variation of the normal representation of subscripts.
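    A sketch of the subscript variation mentioned under (c), assuming a six-character machine word: the character subscript splits into a word offset and a position within that word.

        CHARS_PER_WORD = 6   # assumed word size; machine-dependent

        def char_address(field_start_word, char_index):
            # Split the character subscript into word and in-word parts.
            word_offset, char_offset = divmod(char_index, CHARS_PER_WORD)
            return field_start_word + word_offset, char_offset

        # Character 14 of a field starting at word 100 sits in word 102,
        # character position 2.
        assert char_address(100, 14) == (102, 2)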
    5. Efficiency of Object Programs
    There is a greater emphasis on the efficiency of object programs in Commercial as opposed to Scientific Languages due to the higher frequency of use expected of such programs. Unfortunately this emphasis usually takes the form of rather superficial comparisons in terms of time, whereas on most computers actually available in Europe space is the dominant factor. The distinction between the two is however as valid for a compiled program as any other; the production of open subroutines will take least time and most space whilst closed subroutines, particularly if the calls and parameters are evaluated at object time, will take most time and least space.
    6. Operating Characteristics
    Little attention is normally given to the operating characteristics of either the Compiler or the Object Program until some user is actually running both. Yet both are important in practice. In regard to the former this involves inter alia the details of loading the compiler and source programs, the options available in the media of the Object Program and the production of reports thereon, and the actions to be taken on the detection of errors. They are all quite trivial tasks for the compiler and usually depend on the time available to add the necessary frills to make the compiler more convenient to the user. The operating characteristics of the Object Program are much more serious and largely dominated by the question of debugging and the action on errors detected through checks inserted by the compiler. There is no doubt that the proper solution to both lies in making all communication between the user and the machine in terms of the source language alone. At the same time the difficulty is obvious because there is no other reason why such terms should be present in the Object Program. In addition the dynamic tracing of the execution of the program in source language is hindered both by any optimization phase included in the compiler, and by the practice of compiling into some kind of Assembly Code.
    No existing Commercial Compiler has succeeded in solving this problem. Instead it is customary to print out a full report of the Object Program which is related to the source language statements (e.g. by printing them side by side). Error messages will then refer to that report and special facilities may be provided to enable test data to be run on the program with similar messages as to the results obtained. It will be apparent that this is not very difficult for the compiler, since it is normally possible to discover what routine gave rise to an error jump, but more knowledge is required by the user in terms of the real machine than is otherwise necessary.
    The other items which can be considered as part of the Operating Characteristics are greatly assisted by the hardware facilities available and include: (a) checks that the correct peripherals have been assigned, and the correct media loaded, for the relevant program; (b) changes in the assignments of input/output channels due to machine faults, and options in peripherals for the same reason; (c) returns to previous dump points particularly in regard to the positioning of input/output media. Extract: Flowmatic
    Flowmatic
    The first genuine commercial compiler was that of FLOWMATIC, designed for UNIVAC I and II, and it has been described in detail by Dr Hopper (4). This compiler has had a considerable influence on all subsequent commercial languages and in many cases on their methods of implementation. The FLOWMATIC compiler consists essentially of three main phases, Translation, Selection and Conversion, which operate as follows.
    (i) Translation-this examines each of the sentences in the source program and performs the operations described in 4.2 and 4.3 above. Its output is a 'file entry' which consists of a code word for the verb, the properties of the operands and details of any implied jumps.
    (ii) Selection-this uses the code word in the file entry to call a 'generator' which produces the assembly code for the sentence. It also lists the storage requirements of the code produced, and any constants and cross references.
    (iii) Conversion-this uses the lists produced in (ii) to allocate storage economically and then acts as an Assembler to the output code. It also produces a print out on the lines described on p. 209.
    The method used in this compiler had two great drawbacks. First, each sentence, which consisted of the scope of a verb, had to be treated as an independent entity, which prevented any form of expression being allowed. Second, the generators tended to produce open subroutines which made the Object Program very long.
          in Wegner, Peter (ed.) "An Introduction to Systems Programming" proceedings of a Symposium held at the LSE 1962 (APIC Series No 2) view details
  • McGee, William C. "The property classification method of file design and processing" pp450-458 Abstract: Introduction
    A problem of continuing concern to the computer programmer is that of file design: Given a collection of data to be processed, how should these data be organized and recorded so that the processing is feasible on a given computer, and so that the processing is as fast or as efficient as required? While it is customary to associate this problem exclusively with business applications of computers, it does in fact arise, under various guises, in a wide variety of applications: data reduction, simulation, language translation, information retrieval, and even to a certain extent in the classical scientific application. Whether the collections of data are called files, or whether they are called tables, arrays, or lists, the problem remains essentially the same.

    The development and use of data processing compilers places increased emphasis on the problem of file design. Such compilers as FLOW-MATIC of Sperry Rand, Air Materiel Command's AIMACO, SURGE for the IBM 704, SHARE's 9PAC for the 709/7090, Minneapolis-Honeywell's FACT, and the various COBOL compilers each contain methods for describing, to the compiler, the structure and format of the data to be processed by compiled programs. These description methods in effect provide a framework within which the programmer must organize his data. Their value, therefore, is closely related to their ability to yield, for a wide variety of applications, a data organization which is both feasible and practical.

    To achieve the generality required for widespread application, a number of compilers use the concept of the multilevel, multi-record type file. In contrast to the conventional file which contains records of only one type, the multi-level file may contain records of many types, each having a different format. Furthermore, each of these record types may be assigned a hierarchal relationship to the other types, so that a typical file entry may contain records with different levels of "significance." This article describes an approach to the design and processing of multi-level files. This approach, designated the property classification method, is a composite of ideas taken from the data description methods of existing compilers. The purpose in doing this is not so much to propose still another file design method as it is to emphasize the principles underlying the existing methods, so that their potential will be more widely appreciated.
          in [ACM] CACM 5(08) August 1962
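    A sketch of the multi-level, multi-record-type file McGee describes, with invented record types and fields; the point is only that one file entry can hold records of different types and hierarchic levels, unlike the conventional single-type file.

        from dataclasses import dataclass, field

        @dataclass
        class Record:
            rtype: str      # record type, each type with its own format
            items: dict     # the record's data items
            children: list = field(default_factory=list)  # lower levels

        # One file entry: a level-1 CUSTOMER record owning level-2 ORDERs.
        entry = Record("CUSTOMER", {"CUST-NO": 42, "NAME": "ACME"}, [
            Record("ORDER", {"ORDER-NO": 1, "AMOUNT": 19.95}),
            Record("ORDER", {"ORDER-NO": 2, "AMOUNT": 5.00}),
        ])

        def walk(record, level=1):
            print(f"level {level} {record.rtype}: {record.items}")
            for child in record.children:
                walk(child, level + 1)

        walk(entry)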
  • Rosen, Saul "Programming Systems and Languages: a historical Survey" (reprinted in Rosen, Saul (ed) Programming Systems & Languages. McGraw Hill, New York, 1967) Extract: FLOWMATIC
    Actually, there had been very little previous experience with data processing compilers. Univac's B-0 or Flow-Matic, which was running in 1956, was probably the first true data-processing compiler. It introduced the idea of file descriptions, consisting of detailed record and item descriptions, separate from the description of program procedures. It also introduced the idea of using the English language as a programming language.
    It is remarkable to note that the Univac I on which Flow-Matic was implemented did not have the data processing capabilities of a good sized 1401 installation. To add to the problems caused by the inadequacy of the computer, the implementation of the compiler was poor, and compilation was very, very slow. There were installations that tried it and dropped it. Others used it with the philosophy that even with compiling times measured in hours the total output of the installation was greater using the compiler than without it. Experience with Flow-Matic was almost the only experience available on data processing compilers prior to the launching of the COBOL project.

          in [AFIPS JCC 25] Proceedings of the 1964 Spring Joint Computer Conference SJCC 1964
  • Bemer, Robert W. "The PL/I Family Tree" Extract: Introduction
    The family tree of programming languages, like those of humans, is quite different from the tree with leaves from which the name derives.
    That is, branches grow together as well as divide, and can even join with branches from other trees. Similarly, the really vital requirements for mating are few. PL/I is an offspring of a type long awaited; that is, a deliberate result of the marriage between scientific and commercial languages.
    The schism between these two facets of computing has been a persistent one. It has prevailed longer in software than in hardware, although even here the joining was difficult. For example, the CPC (card-programmed calculator) was provided either with a general purpose floating point arithmetic board or with a board wired specifically to do a (usually) commercial operation. The decimal 650 was partitioned to be either a scientific or commercial installation; very few were mixed. A machine at Lockheed Missiles and Space Company, number 3, was the first to be obtained for scientific work. Again, the methods of use for scientific work were then completely different from those for commercial work, as the proliferation of interpretive languages showed.
    Some IBM personnel attempted to heal this breach in 1957. Dr. Charles DeCarlo set up opposing benchmark teams to champion the 704 and 705, possibly to find out whether a binary or decimal machine was more suited to mixed scientific and commercial work. The winner led to the 709, which was then touted for both fields in the advertisements, although the scales might have tipped the other way if personnel assigned to the data processing side had not exposed the file structure tricks which gave the 705 the first edge. Similarly fitted, the 704 pulled ahead.
    It could be useful to delineate the gross structure of this family tree for programming languages, limited to those for compilers (as opposed to interpreters, for example).
    On the scientific side, the major chronology for operational dates goes like this:
    1951, 52      Rutishauser language for the Zuse Z4 computer
    1952      A0 compiler for Univac I (not fully formula)
    1953      A2 compiler to replace A0
    1954      Release of Laning and Zierler algebraic compiler for Whirlwind
    1957      Fortran I (704)
    1957      Fortransit (650)
    1957      AT3 compiler for Univac II (later called Math-Matic)
    1958      Fortran II (704)
    1959      Fortran II (709)
    A fuller chronology is given in the Communications of the ACM, 1963 May, 94-99.
    IBM personnel worked in two directions: one to deriving Fortran II, with its ability to call previously compiled subroutines, the other to Xtran in order to generalize the structure and remove restrictions. This and other work led to Algol 58 and Algol 60. Algol X will probably metamorphose into Algol 68 in the early part of that year, and Algol Y stands in the wings. Meanwhile Fortran II turned into Fortran IV in 1962, with some regularizing of features and additions, such as Boolean arithmetic.
    The corresponding chronology for the commercial side is:
    1956      B-0, counterpart of A-0 and A-2, growing into
    1958      Flowmatic
    1960      AIMACO, Air Material Command version of Flowmatic
    1960      Commercial Translator
    1961      Fact
    Originally, I wanted Commercial Translator to contain set operators as the primary verbs (match, delete, merge, copy, first occurrence of, etc.), but it was too much for that time. Bosak at SDC is now making a similar development. So we listened to Roy Goldfinger and settled for a language of the Flowmatic type. Dr. Hopper had introduced the concept of data division; we added environment division and logical multipliers, among other things, and also made an unsuccessful attempt to free the language of limitations due to the 80-column card.
    As the computer world knows, this work led to the CODASYL committee and Cobol, the first version of which was supposed to be done by the Short Range Committee by 1959 September. There the matter stood, with two different and widely used languages, although they had many functions in common, such as arithmetic. Both underwent extensive standardization processes. Many arguments raged, and the proponents of "add A to B giving C" met head on with those favoring "C = A + B". Many on the Chautauqua computer circuit of that time made a good living out of just this, trivial though it is.
    Many people predicted and hoped for a merger of such languages, but it seemed a long time in coming. PL/I was actually more an outgrowth of Fortran, via SHARE, the IBM user group historically aligned to scientific computing. The first name applied was in fact Fortran VI, following 10 major changes proposed for Fortran IV.
    It started with a joint meeting on Programming Objectives on 1963 July 1, 2, attended by IBM and SHARE Representatives. Datamation magazine has told the story very well. The first description was that of D. D. McCracken in the 1964 July issue, recounting how IBM and SHARE had agreed to a joint development at SHARE XXII in 1963 September. A so-called "3 x 3" committee (really the SHARE Advanced Language Development Committee) was formed of 3 users and 3 IBMers. McCracken claimed that, although not previously associated with language developments, they had many years of application and compiler-writing experience, I recall that one of them couldn't tell me from a Spanish-speaking citizen at the Tijuana bullring.
    Developments were apparently kept under wraps. The first external report was released on 1964 March 1. The first mention occurs in the SHARE Secretary Distribution of 1964 April 15. Datamation reported for that month:
    "That new programming language which came out of a six-man IBM/ SHARE committee and announced at the recent SHARE meeting seems to have been less than a resounding success. Called variously 'Sundial' (changes every minute), Foalbol (combines Fortran, Algol and Cobol), Fortran VI, the new language is said to contain everything but the kitchen sink... is supposed to solve the problems of scientific, business, command and control users... you name it. It was probably developed as the language for IBM's new product line.
    "One reviewer described it as 'a professional programmer's language developed by people who haven't seen an applied program for five years. I'd love to use it, but I run an open shop. Several hundred jobs a day keep me from being too academic. 'The language was described as too far from Fortran IV to be teachable, too close to be new. Evidently sharing some of these doubts, SHARE reportedly sent the language back to IBM with the recommendation that it be implemented tested... 'and then we'll see. '"
    In the same issue, the editorial advised us "If IBM announces and implements a new language - for its whole family... one which is widely used by the IBM customer, a de facto standard is created.? The Letters to the Editor for the May issue contained this one:
    "Regarding your story on the IBM/SHARE committee - on March 6 the SHARE Executive Board by unanimous resolution advised IBM as follows:
    "The Executive Board has reported to the SHARE body that we look forward to the early development of a language embodying the needs that SHARE members have requested over the past 3 1/2 years. We urge IBM to proceed with early implementation of such a language, using as a basis the report of the SHARE Advanced Language Committee. "
    It is interesting to note that this development followed very closely the resolution of the content of Fortran IV. This might indicate that the planned universality for System 360 had a considerable effect in promoting more universal language aims. The 1964 October issue of Datamation noted that:
    "IBM PUTS EGGS IN NPL BASKET
    "At the SHARE meeting in Philadelphia in August, IBM?s Fred Brooks, called the father of the 360, gave the word: IBM is committing itself to the New Programming Language. Dr. Brooks said that Cobol and Fortran compilers for the System/360 were being provided 'principally for use with existing programs. '
    "In other words, IBM thinks that NPL is the language of the future. One source estimates that within five years most IBM customers will be using NPL in preference to Cobol and Fortran, primarily because of the advantages of having the combination of features (scientific, commercial, real-time, etc.) all in one language.
    "That IBM means business is clearly evident in the implementation plans. Language extensions in the Cobol and Fortran compilers were ruled out, with the exception of a few items like a sort verb and a report writer for Cobol, which after all, were more or less standard features of other Cobol. Further, announced plans are for only two versions of Cobol (16K, 64K) and two of Fortran (16K and 256K) but four of NPL (16K, 64K, 256K, plus an 8K card version).
    "IBM's position is that this emphasis is not coercion of its customers to accept NPL, but an estimate of what its customers will decide they want. The question is, how quickly will the users come to agree with IBM's judgment of what is good for them? "
    Of course the name continued to be a problem. SHARE was cautioned that the N in NPL should not be taken to mean "new"; "nameless" would be a better interpretation. IBM's change to PL/I sought to overcome this immodest interpretation.
    Extract: Definition and Maintenance
    Definition and Maintenance
    Once a language reaches usage beyond the powers of individual communication about it, there is a definite need for a definition and maintenance body. Cobol had the CODASYL committee, which is even now responsible for the language despite the existence of national and international standards bodies for programming languages. Fortran was more or less released by IBM to the mercies of the X3.4 committee of the U.S.A. Standards Institute. Algol had only paper strength until responsibility was assigned to the International Federation for Information Processing, Technical Committee 2.1. Even this is not sufficient without standard criteria for such languages, which are only now being adopted.
    There was a minor attempt to widen the scope of PL/I at the SHARE XXIV meeting of 1965 March, when it was stated that X3.4 would be asked to consider the language for standardization. Unfortunately it has not advanced very far on this road even in 1967 December. At the meeting just mentioned it was stated that, conforming to SHARE rules, only people from SHARE installations or IBM could be members of the project. Even the commercial users from another IBM user group (GUIDE) couldn't qualify.
    Another major problem was the original seeming insistence by IBM that the processor on the computer, rather than the manual, would be the final arbiter and definer of what the language really was. Someone had forgotten the crucial question, "The processor for which version of the 360?", for these were written by different groups. The IBM Research Group in Vienna, under Dr. Zemanek, has now prepared a formal description of PL/I, even to semantic as well as syntactic definitions, which will aid immensely. However, the size of the volume required to contain this work is horrendous. In 1964 December, RCA said it would "implement NPL for its new series of computers when the language has been defined."
    If it takes so many decades/centuries for a natural language to reach such an imperfect state that alternate reinforcing statements are often necessary, it should not be expected that an artificial language for computers, literal and presently incapable of understanding reinforcement, can be created in a short time scale. From initial statement of "This is it" we have now progressed to buttons worn at meetings such as "Would you believe PL/II?" and PL/I has gone through several discrete and major modifications.
          in PL/I Bulletin, Issue 6, March 1968
  • Sammet, Jean E. "Computer Languages - Principles and History" Englewood Cliffs, N.J. Prentice-Hall 1969. pp.316-324.
  • Sammet, Jean E., "Programming languages: history and future" view details
          in [ACM] CACM 15(06) (June 1972) view details
  • Sammet, Jean E. "The early history of COBOL" view details Abstract: This paper discusses the early history of COBOL, starting with the May 1959 meeting in the Pentagon which established the Short Range Committee which defined the initial version of COBOL, and continuing through the creation of COBOL 61. The paper gives a detailed description of the committee activities leading to the publication of the first official version, namely COBOL 60. The major inputs to COBOL are discussed, and there is also a description of how and why some of the technical decisions in COBOL were made. Finally, there is a brief “after the fact” evaluation, and some indication of the implication of COBOL on current and future languages.

          in SIGPLAN Notices 14(04) April 1979 including The first ACM SIGPLAN conference on History of programming languages (HOPL) Los Angeles, CA, June 1-3, 1978
  • Sammet, Jean E. "Farewell to Grace Hopper: End of an Era!" Extract: Cobol
    In my view, the most significant technical contribution Grace Hopper made was the concept of Flow-Matic (originally called B-0) and the leadership of its design and implementation. In attempting to develop a language suitable for business data processing, she realized that although mathematics had a relatively common vocabulary and abbreviations (for example, sin, cos, x + y), there was no similar common terminology for data processing. Thus, she said, in several informal papers and articles, that full English words should be used for data names (for example, unit-price, discount, inventory) and commands (for example, count, divide, replace). Furthermore, although mathematical problems could generally be stated and solved using only fixed and floating-point data representation, data-processing problems required a system that permitted the description of user-defined data types. Flow-Matic development started in 1955, and manuals and a system were generally available by 1958. It was used for practical work by several companies, including the Metropolitan Life Insurance Company. People from Met Life reported on the work at the Automatic Coding Symposium held January 1957 at the Franklin Institute in Philadelphia.

    Grace Hopper's role in Cobol has been generally misunderstood, and I would like to take this opportunity to correct the incorrect statements and impressions that have consistently been conveyed in almost all articles and books, and even by a misleading Navy commendation. These comments are based on original Cobol records from 1959, which I still have and are reported in detail in my paper, "The Early History of Cobol." A draft of that paper was sent to many people, including Grace, for comments, and she generally agreed with what I said.

    Grace was one of a group of six people who met in April 1959 and decided to suggest to Charles Phillips in the Department of Defense that he convene a meeting to consider the development of specifications for a common business language. She attended the meeting called by Phillips in May 1959, along with approximately 40 other people, including myself, from business, government, and academia. That meeting established the Codasyl Executive Committee and the Short-Range Committee, as well as other committees. Grace was one of two technical advisors to the self-appointed Executive Committee (the other being Robert Bemer from IBM.)

    Under the aegis of and with minimal guidance from the Executive Committee, the Short-Range Committee defined the Cobol specifications by December 1959. There were initially nine members (including myself), and eventually over 25 people participated in some phase of the basic Cobol language design: this large group included two people who worked for Grace, but Grace herself was not a member of the committee that defined Cobol. She did not participate in its work except through the general guidance she gave to her staff who were direct committee members. Thus, while her indirect influence was very important, regrettably the frequent repeated statements that "Grace Hopper developed Cobol" or "Grace Hopper was a codeveloper of Cobol" or "Grace Hopper is the mother of Cobol" are just not correct.

    Grace's primary contribution to Cobol was indirect, and via Flow-Matic. It was the only business-oriented programming language in use at the time the Cobol development started (aside from Aimaco, a dialect of Flow-Matic). Without the existing practical use of Flow-Matic, I doubt that we would have had the courage to develop a language such as Cobol. (The other significant input to the early Cobol work was Commercial Translator, a set of specifications from IBM, but it had not yet been implemented.) Thus, in my view, without Flow-Matic we probably never would have had a Cobol. The practical experience of implementing and using that type of language was priceless. This is a major contrast with the mathematical area, in which there had been many small attempts at a high-level language going back as early as 1952.

    Grace spent a lot of time convincing managers in various companies of the feasibility of Math-Matic, Flow-Matic, Cobol, and other high-level languages at a time when this was a unique and generally uncomfortable concept. She led her own group in the very practical "race" with RCA to produce the first Cobol compiler and demonstrate machine independence. Both companies demonstrated their successful results in December 1960.
          in [ACM] CACM 35(04) (April 1992)
  • Gürer, Denise "Amazing Grace – Computer Pioneer, Technologist, Teacher, and Visionary" Abstract: Hopper's brilliant insight liberated the very source of a computer's power – a program to create a program, or what we know today as a compiler. No longer constrained to the relentless demands of minutia required by machine coding, Hopper established the path that within a few years would lead to computers as something far more powerful, something that was of interest to not just scientists and technicians, but to business and industry as well.

    Hopper's next triumph was A-2, what we today call an assembly language compiler. As a front-end translator to A-0, A-2 implemented a three-address machine code (e.g., to add x to y to give z, you use [ADD 00X 00Y 00Z]). Next Hopper developed the A-3 and the AT-3, languages with a mathematical flavor (AT-3 was similar to FORTRAN), which were marketed as ARITH-MATIC and MATH-MATIC, respectively, by Remington Rand in the hopes of making them more appealing to customers.
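    A toy executor for the three-address form quoted above (the memory contents are invented): each instruction names two operand addresses and one result address.

        memory = {"00X": 2, "00Y": 3, "00Z": 0}

        def step(instruction):
            op, a, b, c = instruction.split()
            if op == "ADD":
                # add the contents of a and b, store the result at c
                memory[c] = memory[a] + memory[b]

        step("ADD 00X 00Y 00Z")
        assert memory["00Z"] == 5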

    A mathematician herself, Hopper understood that mathematics was in a sense a short-hand version of natural language. Undaunted by the naysayers who proclaimed that a computer would never understand English, Hopper set about to build a new language, a language of business software engineering. Hopper strove to cleave the meaningful from the jargon, preferring well-known English words and terms over their computer-laden counterparts. Hopper and her team began by identifying about 30 verbs which seemed to capture the semantics and operators of data processing.

    FLOW-MATIC's key technical contributions included the use of data names instead of short symbolic names, such as UNIT-PRICE and INVENTORY, the use of full English words for commands, such as DIVIDE and COMPARE, the allocation of less than a full machine word for each data item (thus saving on memory), and the separation of data descriptions from instructions, a technique taken for granted nowadays.
