MATRIX MATH (ID:94/mat017)

Matrix algebra compiler for UNIVAC II 

Also called Matrix Compiler

Algebraic compiler for UNIVAC, developed at Franklin Institute, had matrix math capabilities

Related languages
MATRIX MATH => Matrix PRORAB   Influence

  • Kahrimanian, H. G. "Analytic Differentiation by a Digital Computer" 1954 pp6-14 view details
  • [Remington Rand Univac] "Manual for the Matrix Math Tape" Remington 1955 view details Extract: Preface

    In order to facilitate reference to material in this manual, the sections have been arranged so that they explain the fundamental principles involved in adapting problems to Univac. These sections indicate the use of library routines, define input-output handling and editing methods, and also outline the three categories of matrix routines.
    Each section is prefaced by an introduction explaining the type of routines involved. These sections are followed by operating instructions and other suggestions whenever necessary. The routines offer the programmer the benefit of the results of previous problems encountered and the experience gained by many people over a period of years during which matrix problems were solved at Univac installations.
    Much of the material contained in the Matrix Math Manual is a result of the initial and subsequent efforts of the Computational Division, Directorate of Management Analysis, Headquarters, United States Air Force. We wish, therefore, to express our special indebtedness to this division of the Air Force. The efforts of all Univac users who have made valuable contributions to this publication are also deeply appreciated. Extract: INTRODUCTION
    This manual has been compiled to aid programmers in the use of Matrix Math Tape with the Univac System. Descriptions of the routines and operating instructions required for the successful solution of matrix problems are included.
    The Matrix Math Tape, a collection of discrete routines, allows the programmer to select operations and organize them in any logical sequence he desires. This affords him a wider choice of options to suit varied requirements than would be possible with a single rigid program. The routines have been so devised that a program of matrix operations can be carried out with a minimum of intermediary action by the Supervisory Control.
    This method does not prevent the building of so-called "Super Routines" which effect a series of operations without interruption. This type of routine will, of course, continue to be built. As new methods are discovered from time to time, this manual will be revised and the changes made available to users of the Univac System.
    In order to obtain the full benefit of organizing a program incorporating the matrix routines as components, the programmer must have a thorough knowledge of the operations that can be performed by the routines. He must be familiar with the specifications, descriptions and operating procedures associated with each routine. In order to assure an effective, successful solution to his problem with a minimum expenditure of production time, he must plan the detailed series of operations beforehand.
    Three important considerations are involved in the design and interrelating of these routines. First, a consistent scheme of input and output from one routine to the next is provided. Second, it is important to maintain control of the accuracy of the results while operating on large arrays of elements. Finally, routines are devised to operate within reasonably established limits for both tape and Uniservos. At present, as space is not the most important factor, it is possible to include routines that sacrifice space to time in runs involving multiplication and inversion. However, in the future, alternate methods for these operations may be programmed. Existing routines are extremely general in purpose in order to have a wider application, and therefore do not necessarily take advantage of any special peculiarities of a given array.
    Processing, Arithmetic, and Service Routines are the three categories of Matrix Math Routines. In each case the matrix has been partitioned into sub-matrices. The so-called high level sub-routines treat the sub-matrices as entities (i.e., as members of a ring), while the low level sub-routines are used to handle the elements within a sub-matrix. Extract: PROCESSING ROUTINES
    PROCESSING ROUTINES: The purpose of these routines is essentially one of editing. Because of the inconvenience of preparing the input matrix in a partitioned array and computing the parameters associated with it, an input editing routine has been devised to perform this preparation automatically. The programmer need only prepare a "list" tape, containing the elements of a matrix arranged sequentially row by row.
    A tape containing a matrix in one of several fixed or floating decimal formats can be prepared by the output editing routines. The formats can be printed by the Uniprinter.
    The generating routines automatically prepare certain types of matrices (e.g., the unit matrix of arbitrary order) for testing or for computational purposes.
    Then too, there is a conversion routine that generates a list from a tape prepared from punched cards by the Card-to-Tape Converter and also checks the accuracy of the conversion.
    ARITHMETIC ROUTINES: These routines accomplish the following operations performed with matrices:
    1.  Addition
    2.  Multiplication
    3.  Inversion
    4.  Transposition
    5.  Modification
    The Modifier routine provides the inverse of a matrix that differs in one row or column from another matrix whose inverse has already been obtained. By repetitive use of this routine, it may be feasible in certain instances to obtain an inverse when there are differences from a known inverse in many rows or columns.
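    In modern terms the Modifier routine corresponds to a rank-one inverse update. The sketch below is not the original UNIVAC code; it is a minimal Python/numpy illustration of the Sherman-Morrison formula for a single changed column, with invented function and variable names:

```python
import numpy as np

def modified_inverse(A_inv, A, j, new_col):
    """Given A_inv = A^-1, return the inverse of the matrix that equals A
    except that column j is replaced by new_col.

    Writing the change as B = A + u e_j^T with u = new_col - A[:, j],
    Sherman-Morrison gives
        B^-1 = A^-1 - (A^-1 u)(e_j^T A^-1) / (1 + e_j^T A^-1 u).
    """
    u = new_col - A[:, j]          # rank-one difference in column j
    Ainv_u = A_inv @ u
    denom = 1.0 + Ainv_u[j]        # 1 + e_j^T A^-1 u
    if abs(denom) < 1e-12:
        raise ZeroDivisionError("modified matrix is (nearly) singular")
    # outer product of A^-1 u with row j of A^-1
    return A_inv - np.outer(Ainv_u, A_inv[j, :]) / denom
```

    Repeated application, one row or column at a time, mirrors the manual's suggestion for matrices differing from a known inverse in many rows or columns.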
    SERVICE ROUTINES: At the present time there are three types of Matrix Service Routines available: (1) the Norm, which computes the square root of the sum of the squares of the elements and is used for certain error and condition estimates; (2) Row and Column Sums; and (3) the Diagonal Unit Fill Routine.
    Whenever required, routines are provided with automatic reruns. It will be noted that the routines follow a pattern of Uniservo allocation. The input is usually on Uniservo two, or two and three, and the output on Uniservo four. The programs have been so designed that the computational matrix in each case fits on one tape. The obligation of meeting this requirement limits the size of the matrix in terms of a square array to an order of approximately 310.
    The time required to perform complex arithmetic operations on Univac is consumed not so much in addition and multiplication, as in controlling and sequencing the process, and in transferring information. This is particularly true when an involved floating decimal system is used.
    Matrices are handled in partitioned form. Since tape has to be employed as auxiliary memory for large matrices, the use of standard size sub-matrices makes this convenient and efficient, and no time is lost in reading information. Submatrices permit the use of standard sub-routines to do the arithmetic, which can be highly refined by virtue of minimum latency coding and other means. However, the most important use of the sub-matrix is that it permits a high degree of control of the accuracy of inversion. The memory space and block length determine the optimum size for sub-matrices. In these routines they have been limited to a maximum of two blocks each, with a 10 x 10 sub-matrix apparently giving the highest efficiency.
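    The partitioned scheme described above can be sketched, purely as a modern illustration, as a blocked multiplication in which a high-level loop drives low-level sub-matrix operations. The 10 x 10 block size follows the manual's figure; everything else (names, numpy) is assumed:

```python
import numpy as np

BLOCK = 10  # the manual reports 10 x 10 sub-matrices as most efficient

def blocked_matmul(A, B, block=BLOCK):
    """Multiply A @ B by iterating over sub-matrix blocks, the way a
    high-level routine would drive low-level sub-matrix arithmetic."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must agree"
    C = np.zeros((n, m))
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                # low-level step: one sub-matrix multiply-accumulate
                C[i:i+block, j:j+block] += (
                    A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
                )
    return C
```

    On UNIVAC the blocks lived on tape rather than in arrays, but the control structure (high-level sequencing over sub-matrices, low-level arithmetic within them) is the same.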
    Computational routines have generally been placed in two divisions, "high-level" and "low-level". The low-level routines carry out basic operations on sub-matrices of relatively small dimensions, and the high-level routines operate on the "super-matrices" having the sub-matrices as elements.
    All routines discussed here are recorded on the Matrix Math Instruction Tape, but the parameters associated with the order of the matrices to be operated upon are left undetermined. When a routine is selected, control is transferred to a preliminary section of the appropriate high-level routine which sets the parameters, in accordance with the order indicated by the input matrices. Computation then proceeds according to the instructions of the high-level routine, which uses the low-level routines for arithmetic processing. The high-level routine may also select various other low-level routines or further sections of itself as needed in the course of the computation.
    Progress of the computation is indicated on the Supervisory Control at certain crucial intervals. In general, no action is required by the operator from the commencement of an operation to its completion. When a series of distinct operations is to be carried out, tapes must be changed and the starting sequence repeated for each new processing.
    Re-run procedures have been developed which do not depend upon the retention of the contents of the internal memory. This reduces operator intervention to a minimum, requiring little more of the operator than the initial starting procedure. This is accomplished by using a variation of the word that selects the routine, followed by the last type-out on the Supervisory Control Printer.
    The proper program is then placed in the memory, parameters and counters reconstituted, and the computation begins again at a point slightly previous to that at which operations stopped.
    Inasmuch as these routines were designed to be as general as possible, some form of floating decimal arithmetic has to be adopted. Briefly, each element of an n x m sub-matrix is represented in the full two word floating decimal form; then to conserve space each row is contracted according to a formula based on the following:
    A row exponent is determined by choosing the exponent of the element of greatest absolute value in the row of the sub-matrix.  A sub-exponent is then determined for each element in the row by subtracting the element exponent from the row exponent. This sub-exponent is restricted to one digit and is inserted in the least significant digit of the word containing the element. This form is standard for all computational routines.
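    A modern sketch of this row-contraction scheme follows. It is simplified: Python floats stand in for UNIVAC's decimal words, and the function names are invented; only the row-exponent/one-digit-sub-exponent bookkeeping follows the manual:

```python
import math

def contract_row(row):
    """Pack a row using one shared row exponent plus a one-digit
    sub-exponent per element, as the manual describes."""
    def exp10(x):
        # decimal exponent of x; zero gets a very small exponent
        return math.floor(math.log10(abs(x))) if x != 0 else -9
    exps = [exp10(x) for x in row]
    row_exp = max(exps)                      # exponent of largest |element|
    packed = []
    for x, e in zip(row, exps):
        sub = min(row_exp - e, 9)            # sub-exponent restricted to one digit
        mant = x / 10.0 ** (row_exp - sub)   # mantissa relative to 10^(row_exp - sub)
        packed.append((mant, sub))
    return row_exp, packed

def expand_row(row_exp, packed):
    """Recover the row; elements more than 9 decades below the row
    maximum lose scale, the price of the one-digit sub-exponent."""
    return [mant * 10.0 ** (row_exp - sub) for mant, sub in packed]
```

    Sharing one exponent per row, with only a single digit of per-element adjustment, is what let each element fit in one word instead of the full two-word floating decimal form.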
    There is no one procedure that can be outlined for solving every problem in matrix algebra, since each case has its own peculiar requirements. In general, however, the following suggestions are applicable:
    1.  Analyze the problem in order to derive the simplest and most economical expression for the equation or equations.
    2.  Prepare the lists for the various distinct input matrices.
    3.  Draw a simple flow chart and servo allocation chart outlining the processing to be done each step of the way, including input and output editing.
    4.  On the basis of this, write a complete list of operating instructions to be used on the computer. These should provide for eventualities such as down-time in the middle of a long computation, computer detection of errors such as incorrect scaling, etc.
    5.  Observe all the precautionary measures mentioned in this manual as well as those usually applicable to computer operation.
    6.  If possible, output edit and print the original input matrices after they have been input edited, as an additional check.
    7.  When feasible employ checking operations, particularly if working with poorly conditioned matrices. This may involve determining Turing's condition number* and an independent determination of the error matrix.**
    *  Turing, A.M. "Rounding-Off Errors in Matrix Processes"
    Briefly, C(A) = (1/n) N(A) N(A⁻¹) is small for well conditioned matrices, for example in the vicinity of 2.5 for Leontieff matrices.
    ** For example, (I - AA⁻¹) = E1
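    The checks suggested in step 7 can be illustrated with a short modern sketch, taking N to be the root-sum-of-squares (Frobenius) norm as computed by the Norm service routine; the function names are mine:

```python
import numpy as np

def turing_condition_number(A):
    """Turing's N-condition number: C(A) = (1/n) * N(A) * N(A^-1),
    where N is the root-sum-of-squares (Frobenius) norm."""
    n = A.shape[0]
    return np.linalg.norm(A) * np.linalg.norm(np.linalg.inv(A)) / n

def error_matrix(A, A_inv_computed):
    """Residual E = I - A @ A_inv: exactly zero when the computed
    inverse is exact, so its size estimates inversion error."""
    return np.eye(A.shape[0]) - A @ A_inv_computed
```

    For the identity matrix C(I) = 1, the minimum possible value, which is why values near 2.5 (as quoted for Leontieff matrices) indicate good conditioning.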
    A fairly good example of the economies that can be effected by the use of one type of reduction in preference to an equally legitimate alternative reduction is clearly seen in the case of a system with complex coefficients.

    A system, Ax = K, with complex coefficients is broken down into its real and imaginary parts as follows, and expressions are derived for the solution of the real and the imaginary components of the unknowns separately:
    1.  (A + iB)(x + iy) = (K + iL)
    2.  Ax + iBx + iAy - By = (K + iL)
    From these are formed 2 equations, the first containing only real terms, and the second containing only imaginary terms.
    a.  Ax - By = K
    b.  Bx + Ay = L
    3.  x = A⁻¹(K + By), and substituting this in (2b)
    4.  BA⁻¹K + BA⁻¹By + Ay = L
    5.  y = (BA⁻¹B + A)⁻¹(L - BA⁻¹K)
    Then, solving for y in (2b),
    6.  y = A⁻¹(L - Bx) and substituting in (2a)
    7.  Ax - BA⁻¹L + BA⁻¹Bx = K, which reduces to
    8.  x = (BA⁻¹B + A)⁻¹(K + BA⁻¹L)
    The similarities between (5) and (8) are noted. However, if the expression for y in (2a) had been substituted in (2b), then
    9.  x = (AB⁻¹A + B)⁻¹(L + AB⁻¹K)
    Thus, using the expressions in (5) and (8) requires only 4 large operations, two inversions and two multiplications, whereas using (5) and (9) requires four inversions and four multiplications.
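    The economical reduction via (5) and (8) can be checked with a short modern sketch (numpy, invented names), which shares the single inversion of (BA⁻¹B + A) between x and y:

```python
import numpy as np

def solve_complex_via_real(A, B, K, L):
    """Solve (A + iB)(x + iy) = K + iL over the reals using the
    economical reduction (5) and (8): one inversion of A and one
    shared inversion of (B A^-1 B + A)."""
    A_inv = np.linalg.inv(A)
    BA_inv = B @ A_inv
    M_inv = np.linalg.inv(BA_inv @ B + A)   # (B A^-1 B + A)^-1, shared
    y = M_inv @ (L - BA_inv @ K)            # equation (5)
    x = M_inv @ (K + BA_inv @ L)            # equation (8)
    return x, y
```

    Comparing against a direct complex solve confirms that the real and imaginary parts come out of (8) and (5) respectively, at the cost of only two inversions and two large multiplications.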
  • Bemer, R. W. "The Status of Automatic Programming for Scientific Problems" view details Abstract: A catalogue of automatic coding systems that are either operational or in the process of development together with brief descriptions of some of the more important ones Extract: Summary
    Let me elaborate these points with examples. UNICODE is expected to require about fifteen man-years. Most modern assembly systems must take from six to ten man-years. SCAT expects to absorb twelve people for most of a year. The initial writing of the 704 FORTRAN required about twenty-five man-years. Split among many different machines, IBM's Applied Programming Department has over a hundred and twenty programmers. Sperry Rand probably has more than this, and for utility and automatic coding systems only! Add to these the number of customer programmers also engaged in writing similar systems, and you will see that the total is overwhelming.
    Perhaps five to six man-years are being expended to write the Model 2 FORTRAN for the 704, trimming bugs and getting better documentation for incorporation into the even larger supervisory systems of various installations. If available, more could undoubtedly be expended to bring the original system up to the limit of what we can now conceive. Maintenance is a very sizable portion of the entire effort going into a system.
    Certainly, all of us have a few skeletons in the closet when it comes to adapting old systems to new machines. Hardly anything more than the flow charts is reusable in writing 709 FORTRAN; changes in the characteristics of instructions, and tricky coding, have done for the rest. This is true of every effort I am familiar with, not just IBM's.
    What am I leading up to? Simply that the day of diverse development of automatic coding systems is either out or, if not, should be. The list of systems collected here illustrates a vast amount of duplication and incomplete conception. A computer manufacturer should produce both the product and the means to use the product, but this should be done with the full co-operation of responsible users. There is a gratifying trend toward such unification in such organizations as SHARE, USE, GUIDE, DUO, etc. The PACT group was a shining example in its day. Many other coding systems, such as FLAIR, PRINT, FORTRAN, and USE, have been done as the result of partial co-operation. FORTRAN for the 705 seems to me to be an ideally balanced project, the burden being carried equally by IBM and its customers.
    Finally, let me make a recommendation to all computer installations. There seems to be a reasonably sharp distinction between people who program and use computers as a tool and those who are programmers and live to make things easy for the other people. If you have the latter at your installation, do not waste them on production and do not waste them on a private effort in automatic coding in a day when that type of project is so complex. Offer them in a cooperative venture with your manufacturer (they still remain your employees) and give him the benefit of the practical experience in your problems. You will get your investment back many times over in ease of programming and the guarantee that your problems have been considered.
    The IT language is also showing up in future plans for many different computers. Case Institute, having just completed an intermediate symbolic assembly to accept IT output, is starting to write an IT processor for UNIVAC. This is expected to be working by late summer of 1958. One of the original programmers at Carnegie Tech spent the last summer at Ramo-Wooldridge to write IT for the 1103A. This project is complete except for input-output and may be expected to be operational by December, 1957. IT is also being done for the IBM 705-1, 2 by Standard Oil of Ohio, with no expected completion date known yet. It is interesting to note that Sohio is also participating in the 705 FORTRAN effort and will undoubtedly serve as the basic source of FORTRAN-to-IT-to-FORTRAN translational information. A graduate student at the University of Michigan is producing SAP output for IT (rather than SOAP) so that IT will run on the 704; this, however, is only for experience; it would be much more profitable to write a pre-processor from IT to FORTRAN (the reverse of FOR TRANSIT) and utilize the power of FORTRAN for free. Extract: Matrix-Math
    MATRIX MATH COMPILER.-This program is an adaptation, by the Franklin Institute, of several previously separate UNIVAC service routines into one extensive package. Two installations are using the system with a preliminary manual, and a final manual and system are expected by January, 1958.
          in "Proceedings of the Fourth Annual Computer Applications Symposium" , Armour Research Foundation, Illinois Institute of Technology, Chicago, Illinois 1957 view details
  • McGinn, Laurence C. "A Matrix Compiler for UNIVAC" view details Abstract: The sequence of steps in problem solution is described as: (1) definition of the problem, (2) logical solution of the problem, or flow charting, (3) translation of the problem into machine code, or coding. The basis for and development of the Matrix Compiler is then described. The principal components of this system are (1) input, (2) catalogue and library, (3) output. Examples of the use of this system are included.
          in Automatic Coding, Monograph No. 3, Journal of the Franklin Institute Philadelphia, Pa., April 1957. view details
  • McKay, William "The Matrix Math Compiler for Univac I", The Franklin Institute Laboratories for Research and Development—OPM 2499, November 15, 1957 view details
  • [Bemer, RW] [Addendum to the Automatic Programming Systems Chart of 1(4)] June 1958 view details
          in [ACM] CACM 1(06) (June 1958) view details
  • [Bemer, RW] [State of ACM automatic coding library August 1958] view details
  • [Bemer, RW] [State of ACM automatic coding library May 1959] view details Extract: Obiter Dicta
    Bob Bemer states that this table (which appeared sporadically in CACM) was partly used as a space filler. The last version was enshrined in Sammet (1969) and the attribution there is normally misquoted.
          in [ACM] CACM 2(05) May 1959 view details
  • Carr, John W III; "Computer Programming" volume 2, chapter 2, pp115-121 view details
          in E. M. Crabbe, S. Ramo, and D. E. Wooldridge (eds.) "Handbook of Automation, Computation, and Control," John Wiley & Sons, Inc., New York, 1959. view details
  • Locks, Mitchell O. "Automatic Programming for Automatic Computers" Journal of the American Statistical Association, 54(288) Dec 1959 pp744-754 view details Abstract: Anyone who attempts to process data or perform computations with an automatic digital computer encounters the obstacle of preparing a detailed program in computer code and checking this program out on the computer. Automatic programming techniques help reduce the magnitude and cost of this task. Complex statistical and data processing problems can be programmed with little or no knowledge of the computer code. An example is given of an automatically programmed one way analysis of variance F computation. External link: Online copy Extract: SOAP, RECO, X1
    Assembly and Compiling Systems both obey the "pre-translation"7 principle. Pseudo instructions are interpreted and a running program is produced before the solution is initiated. Usually this makes possible a single set of references to the library rather than many repeated references.
    In an assembly system the pseudo-code is ordinarily modified computer code. Each pseudo instruction refers to one machine instruction or to a relatively short subroutine. Under the control of the master routine, the assembly system sets up all controls for monitoring the flow of input and output data and instructions.
    A compiler system operates in the same way as an assembly system, but does much more. In most compilers each pseudo instruction refers to a subroutine consisting of from a few to several hundred machine instructions.8 Thus it is frequently possible to perform all coding in pseudo-code only, without the use of any machine instructions.
    From the viewpoint of the user, compilers are the more desirable type of automatic programming because of the comparative ease of coding with them. However, compilers are not available with all existing equipments. In order to develop a compiler, it is usually necessary to have a computer with a large supplementary storage such as a magnetic tape system or a large magnetic drum. This storage facilitates compilation by making possible as large a running program as the problem requires.
    Examples of assembly systems are Symbolic Optimum Assembly Programming (S.O.A.P.) for the IBM 650 and REgional COding (RECO) for the UNIVAC SCIENTIFIC 1103 Computer. The X-1 Assembly System for the UNIVAC I and II Computers is not only an assembly system, but is also used as an internal part of at least two compiling systems. Extract: MATHMATIC, FORTRAN and UNICODE
    For scientific and mathematical calculations, three compilers which translate formulas from standard symbologies of algebra to computer code are available for use with three different computers. These are the MATH-MATIC (AT-3) System for the UNIVAC I and II Computers, FORTRAN (for FOR-mula TRANslation) as used for the IBM 704 and 709, and the UNICODE Automatic Coding System for the UNIVAC SCIENTIFIC 1103A Computer. Extract: FLOW-MATIC and REPORT GENERATOR
    Two advanced compilers have also been developed for use with business data processing. These are the FLOW-MATIC (B-ZERO) Compiler for the UNIVAC I and II Computers and REPORT GENERATOR for the new IBM 709. In these compilers, English words and sentences are used as pseudocode.
    Extract: Matrix-Math
    One of the methods of solving this problem is the construction of special-purpose automatic programming which can be best applied to particular classes of problems. An example of one of these systems that should be of interest to many statisticians is the Matrix-Math Compiler developed by the Franklin Institute in Philadelphia. This compiler was constructed especially for performing matrix algebra computations on the UNIVAC 1 Computer. In this system there are pseudo instructions for operations such as Matrix Inversion, Matrix or Scalar Multiplication, Matrix Multiplication and Addition to the Identity Matrix and Transposition.

  • Martin, E. Wayne Jr.; Hall, Dale J. "Data Processing: Automation in Calculation" Review of Educational Research, Vol. 30, No. 5, The Methodology of Educational Research (Dec., 1960), 522-535. view details Abstract:
    Availability of the electronic computer makes it possible currently to employ new methods in many areas of research. Performance of 1 million multiplications on a desk calculator is estimated to require about five years and to cost $25,000. On an early scientific computer, a million multiplications required eight minutes and cost (exclusive of programing and input preparation) about $10. With the recent LARC computer, 1 million multiplications require eight seconds and cost about 50 cents (Householder, 1956). Obviously it is imperative that researchers examine their methods in light of the abilities of the computer.
    It should be noted that much of the information published on computers and their use has not appeared in educational or psychological literature but rather in publications specifically concerned with computers, mathematics, engineering, and business. The following selective survey is intended to guide the beginner into this broad and sometimes confusing area. It is not an exhaustive survey. It is presumed that the reader has access to the excellent Wrigley (1957) article; so the major purpose of this review is to note additions since 1957.
    The following topics are discussed: equipment availability, knowledge needed to use computers, general references, programing the computer, numerical analysis, statistical techniques, operations research, and mechanization of thought processes.
  • Rosen, Saul "Programming Systems and Languages: a historical Survey" (reprinted in Rosen, Saul (ed) Programming Systems & Languages. McGraw Hill, New York, 1967) view details Extract: Early UNIVAC languages
    The first large scale electronic computer available commercially was the Univac I (1951). The first Automatic Programming group associated with a commercial computer effort was the group set up by Dr. Grace Hopper in what was then the Eckert-Mauchly Computer Corp., and which later became the Univac Division of Sperry Rand. The Univac had been designed so as to be relatively easy to program in its own code. It was a decimal, alphanumeric machine, with mnemonic instructions that were easy to remember and use. The 12 character word made scaling of many fixed-point calculations fairly easy. It was not always easy to see the advantage of assembly systems and compilers that were often slow and clumsy on a machine with only 12,000 characters of high speed storage (200 microseconds average access time per 12 character word). In spite of occasional setbacks, Dr. Hopper persevered in her belief that programming should and would be done in problem-oriented languages. Her group embarked on the development of a whole series of languages, of which the most used was probably A2, a compiler that provided a three address floating point system by compiling calls on floating point subroutines stored in main memory. The Algebraic Translator AT3 (Math-Matic) contributed a number of ideas to Algol and other compiler efforts, but its own usefulness was very much limited by the fact that Univac had become obsolete as a scientific computer before AT3 was finished. The B0 (Flow-Matic) compiler was one of the major influences on the COBOL language development which will be discussed at greater length later. The first sort generators were produced by the Univac programming group in 1951. They also produced what was probably the first large scale symbol manipulation program, a program that performed differentiation of formulas submitted to it in symbolic form.
          in [AFIPS JCC 25] Proceedings of the 1964 Spring Joint Computer Conference SJCC 1964 view details
  • Sammet, Jean E. "Computer Languages - Principles and History" Englewood Cliffs, N.J. Prentice-Hall 1969. p.642. view details Extract: Matrix Compiler
    One of the earliest of these specialized systems was the UNIVAC Matrix Compiler. [...] It provided the user with language and facilities for performing a number of operations on matrices, including addition, multiplication, inversion, computing the norm, and transposition. For example, if the user wrote I+MULTYOOIJO this meant that the matrices on tapes I and J were to be multiplied together and have the identity matrix added to them to produce the result. This was followed by another 12-character operation to specify information about the output. Extract: Matrix Computations: Matrix Compiler
    Matrix Computations: Matrix Compiler

    One of the earliest of these specialized systems was the UNIVAC Matrix Compiler. It was highly oriented to the 12-character UNIVAC word and thus violates some of the criteria in Section 1.4.2. Its interest is historical since it provided the user with language and facilities for performing a number of operations on matrices, including addition, multiplication, inversion, computing the norm, and transposition. For example, if the user wrote I+MULTYOOIJO this meant that the matrices on tapes I and J were to be multiplied together and have the identity matrix added to them to produce the result. This was followed by another 12-character operation to specify information about the output.

  • Stock, Marylene and Stock, Karl F. "Bibliography of Programming Languages: Books, User Manuals and Articles from PLANKALKUL to PL/I" Verlag Dokumentation, Pullach/Munchen 1973 370 view details Abstract: PREFACE  AND  INTRODUCTION
    The exact number of all the programming languages still in use, and those which are no longer used, is unknown. Zemanek calls the abundance of programming languages and their many dialects a "language Babel". When a new programming language is developed, only its name is known at first and it takes a while before publications about it appear. For some languages, the only relevant literature stays inside the individual companies; some are reported on in papers and magazines; and only a few, such as ALGOL, BASIC, COBOL, FORTRAN, and PL/1, become known to a wider public through various textbooks and handbooks. The situation surrounding the application of these languages in many computer centers is a similar one.

    There are differing opinions on the concept "programming languages". What is called a programming language by some may be termed a program, a processor, or a generator by others. Since there are no sharp borderlines in the field of programming languages, works were considered here which deal with machine languages, assemblers, autocoders, syntax and compilers, processors and generators, as well as with general higher programming languages.

    The bibliography contains some 2,700 titles of books, magazines and essays for around 300 programming languages. However, as shown by the "Overview of Existing Programming Languages", there are more than 300 such languages. The "Overview" lists a total of 676 programming languages, but this is certainly incomplete. One author has already announced the "next 700 programming languages"; it is to be hoped the many users may be spared such a great variety for reasons of compatibility. The graphic representations (illustrations 1 & 2) show the development and proportion of the most widely-used programming languages, as measured by the number of publications listed here and by the number of computer manufacturers and software firms who have implemented the language in question. The illustrations show FORTRAN to be in the lead at the present time. PL/1 is advancing rapidly, although PL/1 compilers are not yet seen very often outside of IBM.

    Some experts believe PL/1 will replace even the widely-used languages such as FORTRAN, COBOL, and ALGOL. If this does occur, it will surely take some time - as shown by the chronological diagram (illustration 2).

    It would be desirable from the user's point of view to reduce this language confusion down to the most advantageous languages. Those languages still maintained should incorporate the special facets and advantages of the otherwise superfluous languages. Obviously such demands are not in the interests of computer production firms, especially when one considers that a FORTRAN program can be executed on nearly all third-generation computers.

    The titles in this bibliography are organized alphabetically according to programming language, and within a language chronologically and again alphabetically within a given year. Preceding the first programming language in the alphabet, literature is listed on several languages, as are general papers on programming languages and on the theory of formal languages (AAA).
    As far as possible, most of the titles are based on autopsy. However, the bibliographical description of some titles will not satisfy bibliography-documentation demands, since they are based on inaccurate information in various sources. Translation titles whose original titles could not be found through bibliographical research were not included. In view of the fact that many libraries do not have the quoted papers, all magazine essays should have been listed with the volume, the year, issue number and the complete number of pages (e.g. pp. 721-783), so that interlibrary loans could take place with fast reader service. Unfortunately, these data were not always found.

    It is hoped that this bibliography will help the electronic data processing expert, and those who wish to select the appropriate programming language from the many available, to find a way through the language Babel.

    We wish to offer special thanks to Mr. Klaus G. Saur and the staff of Verlag Dokumentation for their publishing work.

    Graz / Austria, May, 1973