SPEEDCODING(ID:7/spe013)

Backus Autocoder  


also Speedcode, proto-FORTRAN

One of the languages described by Backus (1979) as creating a "synthetic computer"

Team led by John Backus 1953

Backus says:
Speedcoding took the 701, which was a fixed-point, single-address computer without index registers, and made it look like a floating-point, free-address computer with index registers. In other words, programmers no longer had to tangle with the binary code that was the true "machine" language. Most 701 programmers used Speedcoding, although it exacted a price. The synthetic computer ran a lot slower because it had to do all the extra work of simulating floating-point and index registers.

Hopper (1955) type 1 - Interpretive routines - system

from the Manual

It is hardly possible to assign credit for the IBM Speedcoding System because the group in IBM which developed the system has benefited from the suggestions of so many.
Historically, the ingenious development of general-purpose systems for the IBM Card Programmed Calculator certainly influenced the basic thinking. Once the IBM 701 was announced, scientists concerned with preparing for these machines actively considered the problem of reducing problem preparation. Many useful and provocative ideas in this area were discussed in Poughkeepsie, New York, during the week of August 26-28, 1952, when representatives from the following organizations met to consider programming ideas: Boeing Airplane Company, Douglas Aircraft Company (El Segundo and Santa Monica), General Electric Company, International Business Machines Corporation, Lockheed Aircraft Corporation, Los Alamos Scientific Laboratory, National Bureau of Standards, U.S. Naval Ordnance Laboratory, U.S. Naval Ordnance Test Station (Inyokern), North American Aviation, Inc., United Aircraft Corporation, Bell Telephone Laboratories, Rand Corporation, U.S. Atomic Energy Commission. At that meeting there were specific discussions of systems analogous to Speedcoding.
The group at IBM which developed the Speedcoding System undertook to do so after discussing such systems with Dr. Willard Bouricius, head of the 701 Planning Group at Los Alamos Scientific Laboratory. This group, under the direction of Mr. Bengt Carlson, had completed a program with objectives similar to those of Speedcoding. Many discussions were held with Mr. Stuart Crossman's group at United Aircraft, particularly with Mr. Walter Ramshaw, whose assistance was extremely helpful in organizing and collating information and text.
Finally, appreciation is expressed to those at IBM who have been most closely associated with the Speedcoding project since its beginning in January, 1953. These are: Mr. John Backus, who supervised the project, Mr. Harlan L. Herrick, Mr. Donald A. Quarles, Jr., Mr. Sherwood Skillman, Mr. John Pulos, and Miss Lucy A. Siegel. The project was carried out under the general direction of Mr. John Sheldon.

People at GM Allison referred to it as "Ramshaw's Speed Co"


People: Hardware:
Related languages
Aiken CPC translator => SPEEDCODING   Influence
SAP => SPEEDCODING   Influence
SHACO => SPEEDCODING   Positive Moderate Influence
Williams => SPEEDCODING   Influence
SPEEDCODING => FORTRAN   Evolution of
SPEEDCODING => Speed Co   Adaptation of
SPEEDCODING => Speed Co   Productisation of
SPEEDCODING => SPEEDCODE   Alias
SPEEDCODING => SPEEDCODING 3   Port of
SPEEDCODING => TRANSCODE   Influence
SPEEDCODING => Wolontis-Bell Interpreter   Influence

References:
  • [IBM] IBM Speedcoding System SpeedCo 1 Extract: General introduction to SpeedCo I
    SpeedCo I is the name of a system - an integrated combination of a large-scale digital computer and a method by which that computer may readily be programmed to solve scientific and engineering problems.
    The computer used is the IBM 701. Its internal high-speed memory is on cathode-ray tubes and will be referred to as the "electrostatic memory." When the amount of storage available in the electrostatic memory is not large enough, magnetic drums are used to store and supply large blocks of information for ready access at frequent intervals. The "drum memory" is also capable of retaining its contents while the power is turned off, so that intermediate results remain available overnight when the machine is shut down. Any part of the information on the drums may be selectively altered by SpeedCo I at any time.
    If a larger secondary memory is needed, or if information is to be filed away for future reference, magnetic tapes may be used instead of magnetic drums. Magnetic tape is a storage and input-output medium that provides compactness, allows rapid reading and writing, and can be reused many times.
    To achieve a greater computing efficiency, the machine works internally in the binary number system. The user of SpeedCo I will find, however, that this fact does not in any way affect the programming. During SpeedCo I operation all numbers introduced into the machine and all results printed by the machine are expressed in the decimal number system. SpeedCo I automatically performs all necessary conversions between the decimal and binary number systems.
    The programs may be written and introduced into the computer in various ways. Usually the instructions are keypunched on cards and read into the machine. If the program is to be preserved for future use, it can then be recorded on tape and filed away in a compact form. To prepare the machine for calculation the appropriate magnetic tapes are inserted in the tape units, and the cards containing the instructions and data of the problem are placed in the hopper of the card reader. Pushing a button then causes the machine to store the program and data of the problem and start calculating. From then on, operation of the computer is fully automatic, with all of the components being under the complete control of the program, although it is possible for the operator to interrupt the calculation manually at any time.
    The primary unit of information handled by SpeedCo I consists of 72 binary digits and is called a "word."  
  • Speedcoding System for the Type 701 Electronic Data Processing Machines, IBM Corp., 24-6059-0 (Sept., 1953).
  • Backus, J. W. "The IBM 701 Speedcoding System" pp. 4-6 Extract: Introduction
    The IBM 701 Speedcoding System is a set of instructions which causes the 701 to behave like a three-address floating point calculator. Let us call this the Speedcoding calculator. In addition to operating in floating point, this Speedcoding calculator has extremely convenient means for getting information into the machine and for printing results; it has an extensive set of operations to make the job of programming as easy as possible. Speedcoding also provides automatic address modification, flexible tracing, convenient use of auxiliary storage, and built-in checking. The system was developed by IBM's New York Scientific Computing Service.
    Since this floating point Speedcoding calculator is slower than the 701, despite the conveniences that it offers, you might ask: Why go to a lot of trouble to make an extremely fast calculator like the 701 behave like a slower one? There are two principal reasons. The first is that some computing groups are working against time, and the elapsed time for solving a problem may often be reduced by minimizing the time for programming and checking out the problem even though the running time is thereby increased.
    The second and most important reason for having a Speedcoding calculator, in addition to the 701, is a matter of economy. Often, the expense of operating a computing installation is almost equally divided between machine costs and personnel costs. Furthermore, machine time spent checking out problems is frequently a very appreciable percentage of the total machine time. It is clear, therefore, that programming and testing costs often comprise between 50 and 75% of the total cost of operating a computing installation. Since Speedcoding reduces coding and testing time considerably, and is fairly fast in operation, it will often be the more economical way of solving a problem.
    Speedcoding is an interpretive system. I have implied that Speedcoding is a three-address system. Actually this is not quite the case. In a floating point system data and instructions have completely different forms and are treated differently. Therefore, it was thought desirable to have separate methods of dealing with each of these two types of information. Thus, each Speedcoding instruction has two operation codes in it called OP 1 and OP 2. OP 1 has three addresses A, B, and C, associated with it and is always an arithmetic or an input-output operation. OP 2 has one address, D, associated with it and is always a logical operation. OP 1 deals with floating point numbers, OP 2 deals with instructions. This arrangement was also adopted because it makes efficient use of the space available for an instruction and because it often speeds up operation by reducing the number of instructions which must be interpreted. OP 1 operations consist of the usual arithmetic operations plus square root, sine, arc tangent, exponential, and logarithm. There are also orders for transferring arbitrary blocks of information between electrostatic storage and tapes, drums or printer. These input-output orders have built-in automatic checks for correct transmission.
    Accompanying the OP 1 operation code is a code to specify that any or all of the three addresses, A, B, C, should be modified during interpretation by the contents of three associated special registers (B tubes) labeled RA, RB, RC. This feature often enables one to reduce the number of instructions in a loop by a factor of 1/2. The OP 2 operation in an instruction is executed after the OP 1. By means of this operation one can obtain conditional or unconditional transfer of control. One can initialize the contents of any of the R-registers or one can, in one operation, increment any or all of the R-registers and transfer control. Another OP 2 operation allows one to compare the contents of an R-register with the given D-address and skip the next instruction if they are equal. OP 2 also provides a set of operations for using a fixed point accumulator for computations with addresses and for comparing the contents of this accumulator with the D-address. Finally, OP 2 provides a convenient means of incorporating checking in a problem if desired. This feature consists mainly of two operations, START CHECK and END CHECK; all instructions between these two orders may be automatically repeated as a block and at the end of the second repetition two separate check sums which have been accumulated during the two cycles are compared and the instruction following the END CHECK skipped if they agree.
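    The two-part instruction format Backus describes can be sketched as a toy interpreter. This is only an illustration of the scheme, not IBM's code: the class name, field layout, and the mnemonic TRIA (for "increment all R-registers and transfer") are invented for this example.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Instr:
        op1: str    # arithmetic/input-output operation on floating-point data
        a: int      # A-address
        b: int      # B-address
        c: int      # C-address
        mod: tuple  # which of "A", "B", "C" get R-register modification
        op2: str    # logical operation, executed after OP 1
        d: int      # D-address

    def step(instr, mem, regs):
        """Interpret one instruction: OP 1 first, then OP 2.
        Returns the next instruction address if OP 2 transfers control,
        else None (fall through to the next instruction)."""
        # Address modification during interpretation: add the contents
        # of RA/RB/RC to whichever addresses the instruction tags.
        a = instr.a + (regs["RA"] if "A" in instr.mod else 0)
        b = instr.b + (regs["RB"] if "B" in instr.mod else 0)
        c = instr.c + (regs["RC"] if "C" in instr.mod else 0)
        if instr.op1 == "ADD":      # C <- A + B (floating point)
            mem[c] = mem[a] + mem[b]
        elif instr.op1 == "MUL":    # C <- A * B
            mem[c] = mem[a] * mem[b]
        # OP 2: "increment all R-registers and transfer control"
        # (TRIA is a hypothetical mnemonic for this sketch).
        if instr.op2 == "TRIA":
            for r in regs:
                regs[r] += 1
            return instr.d
        return None
    ```

    Executing OP 2 after OP 1 in the same step is what lets one instruction both do arithmetic and close a loop, which is the space and speed economy the text describes.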
    Instructions or data may be stored anywhere in electrostatic or auxiliary storage as single Speedcoding words. Average execution times for various Speedcoding operations are as follows:
    Add: 4.2 milliseconds
    Multiply: 3.5 milliseconds
    Read Tape: 14 milliseconds access plus 1.6 milliseconds per word
    Transfer Control: 0.77 milliseconds
    Electrostatic storage space available is about 700 words.
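    Taken at face value, the average timings above let one estimate the cost of an interpreted loop. A back-of-envelope calculation (the loop shape is hypothetical, only the per-operation figures come from the text):

    ```python
    # Average execution times quoted above, in milliseconds per
    # interpreted Speedcoding operation.
    ADD_MS = 4.2
    MUL_MS = 3.5
    TRANSFER_MS = 0.77

    # A hypothetical inner loop of one multiply, one add, and one
    # transfer of control, executed 1000 times:
    per_iteration_ms = MUL_MS + ADD_MS + TRANSFER_MS
    total_seconds = 1000 * per_iteration_ms / 1000.0
    print(round(per_iteration_ms, 2))  # 8.47 ms per interpreted iteration
    print(round(total_seconds, 2))     # 8.47 seconds for 1000 iterations
    ```

    Milliseconds per operation, rather than microseconds, is the price of interpretation that Backus acknowledges; the economy argument is that programmer time saved usually outweighed it.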
    Let us follow a problem from its coded form on programming sheets and data sheets until it is checked out and ready to run. First the instructions and data are punched on decimal cards whose formats are identical to those of the sheets. If there are any data or instructions which the program requires from tapes or drums, loading control cards are punched (one for each block of information) which will cause the loading system to put this information in the proper places in auxiliary storage. The deck of binary cards for Speedcoding is placed in front of this decimal deck consisting of instruction cards, data cards, and, possibly, loading control cards, and the entire deck is put in the 701 card reader. When the load button is pressed, the information will be stored in electrostatic storage, on tapes or on drums as indicated by locations on the cards. When the last card is read, execution of the program will begin automatically.
    In checking out the program, use will be made of a feature of Speedcoding which has not been mentioned yet. Each Speedcoding instruction includes a list code which may be assigned one of three possible values. Associated with each of these values is a switch on the operator's panel of the 701. During execution of a program all instructions will be printed which have list codes corresponding to switches which are on. If one has properly assigned list codes, one may then check out a problem in the following way: one begins execution of the program with all three switches on; after seeing the most repetitive portions of the program printed once or twice, one of the switches is turned off, after which only moderately repetitive parts of the program are listed. Finally, the second switch is turned off and only the least repetitive instructions are seen. If trouble is encountered in the last cycle of a much repeated loop, one can approach this point rapidly with a minimum of printing and just before reaching it one can turn on all three switches and see all details of the program. Each instruction is printed with alphabetic operation codes just as it was originally written on the programming sheet. The floating point numbers at A, B, and C, the contents of the R-registers and the address accumulator, are also printed with each instruction.
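    The three-level tracing scheme just described amounts to a simple filter: each instruction carries a list code, and it prints only when the console switch for that code is on. A minimal sketch (the instruction names and the assignment of codes to "most/moderately/least repetitive" are illustrative):

    ```python
    def traced(program, switches):
        """Return the instructions that would be printed for a given
        switch setting. `program` is a list of (name, list_code) pairs;
        `switches` maps a list code to True (switch on) or False."""
        return [name for name, code in program if switches.get(code, False)]

    program = [("LOOP-BODY", 1),   # most repetitive part: list code 1
               ("OUTER-STEP", 2),  # moderately repetitive: list code 2
               ("SETUP", 3)]       # least repetitive: list code 3

    # All switches on at the start of checkout: everything prints.
    print(traced(program, {1: True, 2: True, 3: True}))
    # Switch 1 off once the inner loop has been seen a few times.
    print(traced(program, {1: False, 2: True, 3: True}))
    # Only switch 3 on: just the least repetitive instructions.
    print(traced(program, {1: False, 2: False, 3: True}))
    ```

    Turning switches off progressively gives the coarse-to-fine view of a run that the text describes, without rerunning the program under a different trace setting.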
    Extract: To summarize:
    Speedcoding is a floating point three-address system which greatly simplifies programming, and checking out a program. Speedcoding provides convenient input-output operations, built-in checking, easy loading and printing. Therefore, Speedcoding reduces programming and testing expenses considerably. These expenses are often a large part of the cost of operating a computing installation. Thus Speedcoding is economical as well as convenient to use.
          in [ACM] JACM 2(1) (Jan 1955)
  • Backus, John W., and Herrick, Harlan, Jr "IBM 701 Speedcoding and Other Automatic Programming Systems" pp. 106-45
          in Symposium on Automatic Programming For Digital Computers, Office of Naval Research, Dept. of the Navy, Washington, D.C. PB 111 607, May 13-14, 1954
  • Hopper, Grace "Automatic Coding for Digital Computers" Extract: Introduction
    Automatic coding is a means for reducing problem costs and is one of the answers to a programmer's prayer. Since every problem must be reduced to a series of elementary steps and transformed into computer instructions, any method which will speed up and reduce the cost of this process is of importance.
    Each and every problem must go through the same stages:
    Analysis,
    Programming,
    Coding,
    Debugging,
    Production Running,
    Evaluation
    The process of analysis cannot be assisted by the computer itself. For scientific problems, mathematical or engineering, the analysis includes selecting the method of approximation, setting up specifications for accuracy of sub-routines, determining the influence of roundoff errors, and finally presenting a list of equations supplemented by definition of tolerances and a diagram of the operations. For the commercial problem, again a detailed statement describing the procedure and covering every eventuality is required. This will usually be presented in English words and accompanied again by a flow diagram.
    The analysis is the responsibility of the mathematician or engineer, the methods or systems man. It defines the problem and no attempt should be made to use a computer until such an analysis is complete.
    The job of the programmer is that of adapting the problem definition to the abilities and idiosyncrasies of the particular computer. He will be vitally concerned with input and output and with the flow of operations through the computer. He must have a thorough knowledge of the computer components and their relative speeds and virtues.
    Receiving diagrams and equations or statements from the analysis he will produce detailed flow charts for transmission to the coders. These will differ from the charts produced by the analysts in that they will be suited to a particular computer and will contain more detail. In some cases, the analyst and programmer will be the same person.
    It is then the job of the coder to reduce the flow charts to the detailed list of computer instructions. At this point, an exact and comprehensive knowledge of the computer, its code, coding tricks, details of sentinels and of pulse code are required. The computer is an extremely fast moron. It will, at the speed of light, do exactly what it is told to do, no more, no less.
    After the coder has completed the instructions, it must be "debugged". Few and far between and very rare are those coders, human beings, who can write programs, perhaps consisting of several hundred instructions, perfectly, the first time. The analyzers, automonitors, and other mistake hunting routines that have been developed and reported on bear witness to the need of assistance in this area. When the program has been finally debugged, it is ready for production running and thereafter for evaluation or for use of the results.
    Automatic coding enters these operations at four points. First, it supplies to the analysts, information about existing chunks of program, subroutines already tested and debugged, which he may choose to use in his problem. Second, it supplies the programmer with similar facilities not only with respect to the mathematics or processing used, but also with respect to using the equipment. For example, a generator may be provided to make editing routines to prepare data for printing, or a generator may be supplied to produce sorting routines.
    It is in the third phase that automatic coding comes into its own, for here it can release the coder from most of the routine and drudgery of producing the instruction code. It may, someday, replace the coder or release him to become a programmer. Master or executive routines can be designed which will withdraw subroutines and generators from a library of such routines and link them together to form a running program.
    If a routine is produced by a master routine from library components, it does not require the fourth phase - debugging - from the point of view of the coding. Since the library routines will all have been checked and the compiler checked, no errors in coding can be introduced into the program (all of which presupposes a completely checked computer). The only bugs that can remain to be detected and exposed are those in the logic of the original statement of the problem.
    Thus, one advantage of automatic coding appears, the reduction of the computer time required for debugging. A still greater advantage, however, is the replacement of the coder by the computer. It is here that the significant time reduction appears. The computer processes the units of coding as it does any other units of data --accurately and rapidly. The elapsed time from a programmer's flow chart to a running routine may be reduced from a matter of weeks to a matter of minutes. Thus, the need for some type of automatic coding is clear.
    Actually, it has been evident ever since the first digital computers first ran. Anyone who has been coding for more than a month has found himself wanting to use pieces of one problem in another. Every programmer has detected like sequences of operations. There is a ten year history of attempts to meet these needs.
    The subroutine, the piece of coding, required to calculate a particular function can be wired into the computer and an instruction added to the computer code. However, this construction in hardware is costly and only the most frequently used routines can be treated in this manner. Mark I at Harvard included several such routines — sin x, log10 x, 10^x. However, they had one fault, they were inflexible. Always, they delivered complete accuracy to twenty-two digits. Always, they treated the most general case. Other computers, Mark II and SEAC have included square roots and other subroutines partially or wholly built in. But such subroutines are costly and invariant and have come to be used only when speed without regard to cost is the primary consideration.
    It was in the ENIAC that the first use of programmed subroutines appeared. When a certain series of operations was completed, a test could be made to see whether or not it was necessary to repeat them and sequencing control could be transferred on the basis of this test, either to repeat the operations or go on to another set.
    At Harvard, Dr. Aiken had envisioned libraries of subroutines. At Pennsylvania, Dr. Mauchly had discussed the techniques of instructing the computer to program itself. At Princeton, Dr. von Neumann had pointed out that if the instructions were stored in the same fashion as the data, the computer could then operate on these instructions. However, it was not until 1951 that Wheeler, Wilkes, and Gill in England, preparing to run the EDSAC, first set up standards, created a library, and the required satellite routines and wrote a book about it, "The Preparation of Programs for Electronic Digital Computers". In this country, comprehensive automatic techniques first appeared at MIT where routines to facilitate the use of Whirlwind I by students of computers and programming were developed.
    Many different automatic coding systems have been developed - Seesaw, Dual, Speed-Code, the Boeing Assembly, and others for the 701, the A—series of compilers for the UNIVAC, the Summer Session Computer for Whirlwind, MAGIC for the MIDAC and Transcode for the Ferranti Computer at Toronto. The list is long and rapidly growing longer. In the process of development are Fortran for the 704, BIOR and GP for the UNIVAC, a system for the 705, and many more. In fact, all manufacturers now seem to be including an announcement of the form, "a library of subroutines for standard mathematical analysis operations is available to users", "interpretive subroutines, easy program debugging - ... - automatic program assembly techniques can be used."
    The automatic routines fall into three major classes. Though some may exhibit characteristics of one or more, the classes may be so defined as to distinguish them.
    1) Interpretive routines which translate a machine-like pseudocode into machine code, refer to stored subroutines and execute them as the computation proceeds — the MIT Summer Session Computer, 701 Speed-Code, UNIVAC Short-Code are examples.
    2) Compiling routines, which also read a pseudo-code, but which withdraw subroutines from a library and operate upon them, finally linking the pieces together to deliver, as output, a complete specific program for future running — UNIVAC A — compilers, BIOR, and the NYU Compiler System.
    3) Generative routines may be called for by compilers, or may be independent routines. Thus, a compiler may call upon a generator to produce a specific input routine. Or, as in the sort-generator, the submission of specifications (such as item-size and position of key) produces a routine to perform the desired operation. The UNIVAC sort-generator, the work of Betty Holberton, was the first major automatic routine to be completed. It was finished in 1951 and has been in constant use ever since. At the University of California Radiation Laboratory, Livermore, an editing generator was developed by Merrit Ellmore — later a routine was added to even generate the pseudo-code.
    The type of automatic coding used for a particular computer is to some extent dependent upon the facilities of the computer itself. The early computers usually had but a single input-output device, sometimes even manually operated. It was customary to load the computer with program and data, permit it to "cook" on them, and when it signalled completion, the results were unloaded. This procedure led to the development of the interpretive type of routine. Subroutines were stored in closed form and a main program referred to them as they were required. Such a procedure conserved valuable internal storage space and speeded the problem solution.
    With the production of computer systems, like the UNIVAC, having, for all practical purposes, infinite storage under the computers own direction, new techniques became possible. A library of subroutines could be stored on tape, readily available to the computer. Instead of looking up a subroutine every time its operation was needed, it was possible to assemble the required subroutines into a program for a specific problem. Since most problems contain some repetitive elements, this was desirable in order to make the interpretive process a one-time operation.
    Among the earliest such routines were the A—series of compilers of which A-0 first ran in May 1952. The A-2 compiler, as it stands at the moment, commands a library of mathematical and logical subroutines of floating decimal operations. It has been successfully applied to many different mathematical problems. In some cases, it has produced finished, checked and debugged programs in three minutes. Some problems have taken as long as eighteen minutes to code. It is, however, limited by its library which is not as complete as it should be and by the fact that since it produces a program entirely in floating decimal, it is sometimes wasteful of computer time. However, mathematicians have been able rapidly to learn to use it. The elapsed time for problems— the programming time plus the running time - has been materially reduced. Improvements and techniques now known, derived from experience with the A—series, will make it possible to produce better compiling systems. Currently, under the direction of Dr. Herbert F. Mitchell, Jr., the BIOR compiler is being checked out. This is the pioneer - the first of the true data-processing compilers.
    At present, the interpretive and compiling systems are as many and as different as were the computers five years ago. This is natural in the early stages of a development. It will be some time before anyone can say this is the way to produce automatic coding.
    Even the pseudo-codes vary widely. For mathematical problems, Laning and Zierler at MIT have modified a Flexowriter and the pseudo-code in which they state problems clings very closely to the usual mathematical notation. Faced with the problem of coding for ENIAC, EDVAC and/or ORDVAC, Dr. Gorn at Aberdeen has been developing a "universal code". A problem stated in this universal pseudo-code can then be presented to an executive routine pertaining to the particular computer to be used to produce coding for that computer. At the Bureau of Standards, Dr. Wegstein in Washington and Dr. Huskey on the West Coast have developed techniques and codes for describing a flow chart to a compiler.
    In each case, the effort has been three-fold:
    1) to expand the computer's vocabulary in the direction required by its users.
    2) to simplify the preparation of programs both in order to reduce the amount of information about a computer a user needed to learn, and to reduce the amount he needed to write.
    3) to make it easy to avoid mistakes, to check for them, and to detect them.
    The ultimate pseudo-code is not yet in sight. There probably will be at least two in common use; one for the scientific, mathematical and engineering problems using a pseudo-code closely approximating mathematical symbolism; and a second, for the data-processing, commercial, business and accounting problems. In all likelihood, the latter will approximate plain English.
    The standardization of pseudo-code and corresponding subroutine is simple for mathematical problems. As a pseudo-code "sin x" is practical and suitable for "compute the sine of x", "PWT" is equally obvious for "compute Philadelphia Wage Tax", but very few commercial subroutines can be standardized in such a fashion. It seems likely that a pseudocode "gross-pay" will call for a different subroutine in every installation. In some cases, not even the vocabulary will be common since one computer will be producing pay checks and another maintaining an inventory.
    Thus, future compiling routines must be independent of the type of program to be produced. Just as there are now general-purpose computers, there will have to be general-purpose compilers. Auxiliary to the compilers will be vocabularies of pseudo-codes and corresponding volumes of subroutines. These volumes may differ from one installation to another and even within an installation. Thus, a compiler of the future will have a volume of floating-decimal mathematical subroutines, a volume of inventory routines, and a volume of payroll routines. While gross-pay may appear in the payroll volume both at installation A and at installation B, the corresponding subroutine or standard input item may be completely different in the two volumes. Certain more general routines, such as input-output, editing, and sorting generators will remain common and therefore are the first that are being developed.
    There is little doubt that the development of automatic coding will influence the design of computers. In fact, it is already happening. Instructions will be added to facilitate such coding. Instructions added only for the convenience of the programmer will be omitted since the computer, rather than the programmer, will write the detailed coding. However, all this will not be completed tomorrow. There is much to be learned. So far as each group has completed an interpreter or compiler, they have discovered in using it "what they really wanted to do". Each executive routine produced has led to the writing of specifications for a better routine.
    1955 will mark the completion of several ambitious executive routines. It will also see the specifications prepared by each group for much better and more efficient routines since testing in use is necessary to discover these specifications. However, the routines now being completed will materially reduce the time required for problem preparation; that is, the programming, coding, and debugging time. One warning must be sounded, these routines cannot define a problem nor adapt it to a computer. They only eliminate the clerical part of the job.
    Analysis, programming, and definition of a problem required 85%, coding and debugging 15%, of the preparation time. Automatic coding materially reduces the latter time. It assists the programmer by defining standard procedures which can be frequently used. Please remember, however, that automatic programming does not imply that it is now possible to walk up to a computer, say "write my payroll checks", and push a button. Such efficiency is still in the science-fiction future.

          in the High Speed Computer Conference, Louisiana State University, 16 Feb. 1955, Remington Rand, Inc. 1955
  • Bemer, R. W. "The Status of Automatic Programming for Scientific Problems" Abstract: A catalogue of automatic coding systems that are either operational or in the process of development, together with brief descriptions of some of the more important ones. Extract: Summary
    Let me elaborate these points with examples. UNICODE is expected to require about fifteen man-years. Most modern assembly systems must take from six to ten man-years. SCAT expects to absorb twelve people for most of a year. The initial writing of the 704 FORTRAN required about twenty-five man-years. Split among many different machines, IBM's Applied Programming Department has over a hundred and twenty programmers. Sperry Rand probably has more than this, and for utility and automatic coding systems only! Add to these the number of customer programmers also engaged in writing similar systems, and you will see that the total is overwhelming.
    Perhaps five to six man-years are being expended to write the Model 2 FORTRAN for the 704, trimming bugs and getting better documentation for incorporation into the even larger supervisory systems of various installations. If available, more could undoubtedly be expended to bring the original system up to the limit of what we can now conceive. Maintenance is a very sizable portion of the entire effort going into a system.
    Certainly, all of us have a few skeletons in the closet when it comes to adapting old systems to new machines. Hardly anything more than the flow charts is reusable in writing 709 FORTRAN; changes in the characteristics of instructions, and tricky coding, have done for the rest. This is true of every effort I am familiar with, not just IBM's.
    What am I leading up to? Simply that the day of diverse development of automatic coding systems is either over or, if not, should be. The list of systems collected here illustrates a vast amount of duplication and incomplete conception. A computer manufacturer should produce both the product and the means to use the product, but this should be done with the full co-operation of responsible users. There is a gratifying trend toward such unification in such organizations as SHARE, USE, GUIDE, DUO, etc. The PACT group was a shining example in its day. Many other coding systems, such as FLAIR, PRINT, FORTRAN, and USE, have been done as the result of partial co-operation. FORTRAN for the 705 seems to me to be an ideally balanced project, the burden being carried equally by IBM and its customers.
    Finally, let me make a recommendation to all computer installations. There seems to be a reasonably sharp distinction between people who program and use computers as a tool and those who are programmers and live to make things easy for the other people. If you have the latter at your installation, do not waste them on production and do not waste them on a private effort in automatic coding in a day when that type of project is so complex. Offer them in a cooperative venture with your manufacturer (they still remain your employees) and give him the benefit of the practical experience in your problems. You will get your investment back many times over in ease of programming and the guarantee that your problems have been considered.
    Extract: IT, FORTRANSIT, SAP, SOAP, SOHIO
    The IT language is also showing up in future plans for many different computers. Case Institute, having just completed an intermediate symbolic assembly to accept IT output, is starting to write an IT processor for UNIVAC. This is expected to be working by late summer of 1958. One of the original programmers at Carnegie Tech spent the last summer at Ramo-Wooldridge to write IT for the 1103A. This project is complete except for input-output and may be expected to be operational by December, 1957. IT is also being done for the IBM 705-1, 2 by Standard Oil of Ohio, with no expected completion date known yet. It is interesting to note that Sohio is also participating in the 705 FORTRAN effort and will undoubtedly serve as the basic source of FORTRAN-to-IT-to-FORTRAN translational information. A graduate student at the University of Michigan is producing SAP output for IT (rather than SOAP) so that IT will run on the 704; this, however, is only for experience; it would be much more profitable to write a pre-processor from IT to FORTRAN (the reverse of FORTRANSIT) and utilize the power of FORTRAN for free.
          in "Proceedings of the Fourth Annual Computer Applications Symposium" , Armour Research Foundation, Illinois Institute of Technology, Chicago, Illinois 1957 view details
  • Gorn, Saul "Standardized Programming Methods and Universal Coding" view details Extract: Introduction
    Introduction
    It is possible so to standardize programming and coding for general purpose, automatic, high-speed, digital computing machines that most of the process becomes mechanical and, to a great degree, independent of the machine. To the extent that the programming and coding process is mechanical a machine may be made to carry it out, for the procedure is just another data processing one.
    If the machine has a common storage for its instructions along with any other data, it can even carry out each instruction immediately after having coded it. This mode of operation in automatic coding is known as 'interpretive'. There have been a number of interpretive automatic coding procedures on various machines, notably MIT's Summer Session and Comprehensive systems for Whirlwind, Michigan's Magic System for MIDAC, and IBM's Speedcode; in addition there have been some interpretive systems beginning essentially with mathematical formulae as the pseudocode, such as MIT's Algebraic Coding, one for the SEAC, and others.
    We will be interested, however, in considering the coding of a routine as a separate problem, whose result is the final code. Automatic coding which imitates such a process is, in the main, non-interpretive. Notable examples are the A-2 and B-0 compiler systems, and the G-P (general purpose) system, all for UNIVAC, and IBM's FORTRAN, of the algebraic coding type.
    Although, unlike interpretive systems, compilers do not absolutely require their machines to possess common storage of instructions and the data they process, they are considerably simpler when their machines do have this property. Much more necessary for the purpose is that the machines possess a reasonable amount of internal erasable storage, and the ability to exercise discrimination among alternatives by simple comparison instructions. It will be assumed that the machines under discussion, whether we talk about standardized or about automatic coding, possess these three properties, namely, common storage, erasable storage, and discrimination. Such machines are said to possess "loop control".
    We will be interested in that part of the coding process which all machines having loop control and a sufficiently large storage can carry out in essentially the same manner; it is this part of coding that is universal and capable of standardization by a universal pseudo-code.
    The choice of such a pseudo-code is, of course, a matter of convention, and is to that extent arbitrary, provided it is
    (1) a language rich enough to permit the description of anything these machines can do, and
    (2) a language whose basic vocabulary is not too microscopically detailed.
    The first requirement is needed for universality of application; the second is necessary if we want to be sure that the job of hand coding with the pseudo-code is considerably less detailed than the job of hand coding directly in machine code. Automatic coding is pointless practically if this second condition is not fulfilled.
    In connection with the first condition we should remark on what the class of machines can produce; in connection with the second we should give some analysis of the coding process. In either case we should say a few things about the logical concept of computability and the syntax of machine codes.
          in [ACM] JACM 4(3) July 1957 view details
  • [Bemer, RW] [State of ACM automatic coding library August 1958] view details
          in [ACM] JACM 4(3) July 1957 view details
  • Carr, John W III; "Computer Programming" volume 2, chapter 2, pp115-121 view details
          in E. M. Crabbe, S. Ramo, and D. E. Wooldridge (eds.) "Handbook of Automation, Computation, and Control," John Wiley & Sons, Inc., New York, 1959. view details
  • Bemer, R "ISO TC97/SC5/WGA(1) Survey of Programming Languages and Processors" December 1962 view details
          in [ACM] CACM 6(03) (Mar 1963) view details
  • Bemer, R "ISO TC97/SC5/WGA(1) Survey of Programming Languages and Processors" December 1962 view details
          in [ACM] CACM 6(03) (Mar 1963) view details
  • Rosen, Saul "Programming Systems and Languages: a historical Survey" (reprinted in Rosen, Saul (ed) Programming Systems & Languages. McGraw Hill, New York, 1967) view details Extract: Speedcode
    The IBM 701 (1952) was the first commercially marketed large scale binary computer. The best known programming language on the 701 was Speedcode, a language that made the one address, binary, fixed point 701 appear to be a three address decimal floating point machine with index registers. More than almost any other language, Speedcode demonstrated the extent to which users were willing to sacrifice computer speed for the sake of programming convenience.
          in [AFIPS JCC 25] Proceedings of the 1964 Spring Joint Computer Conference SJCC 1964 view details
  • Sammet, Jean E. "Computer Languages - Principles and History" Englewood Cliffs, N.J. Prentice-Hall 1969. p.130. view details
          in [AFIPS JCC 25] Proceedings of the 1964 Spring Joint Computer Conference SJCC 1964 view details
  • Stock, Marylene and Stock, Karl F. "Bibliography of Programming Languages: Books, User Manuals and Articles from PLANKALKUL to PL/I" Verlag Dokumentation, Pullach/Munchen 1973 573 view details Abstract: PREFACE  AND  INTRODUCTION
    The exact number of all the programming languages still in use, and those which are no longer used, is unknown. Zemanek calls the abundance of programming languages and their many dialects a "language Babel". When a new programming language is developed, only its name is known at first and it takes a while before publications about it appear. For some languages, the only relevant literature stays inside the individual companies; some are reported on in papers and magazines; and only a few, such as ALGOL, BASIC, COBOL, FORTRAN, and PL/1, become known to a wider public through various text- and handbooks. The situation surrounding the application of these languages in many computer centers is a similar one.

    There are differing opinions on the concept "programming languages". What is called a programming language by some may be termed a program, a processor, or a generator by others. Since there are no sharp borderlines in the field of programming languages, works were considered here which deal with machine languages, assemblers, autocoders, syntax and compilers, processors and generators, as well as with general higher programming languages.

    The bibliography contains some 2,700 titles of books, magazines and essays for around 300 programming languages. However, as shown by the "Overview of Existing Programming Languages", there are more than 300 such languages. The "Overview" lists a total of 676 programming languages, but this is certainly incomplete. One author has already announced the "next 700 programming languages"; it is to be hoped the many users may be spared such a great variety for reasons of compatibility. The graphic representations (illustrations 1 & 2) show the development and proportion of the most widely-used programming languages, as measured by the number of publications listed here and by the number of computer manufacturers and software firms who have implemented the language in question. The illustrations show FORTRAN to be in the lead at the present time. PL/1 is advancing rapidly, although PL/1 compilers are not yet seen very often outside of IBM.

    Some experts believe PL/1 will replace even the widely-used languages such as FORTRAN, COBOL, and ALGOL. If this does occur, it will surely take some time - as shown by the chronological diagram (illustration 2).

    It would be desirable from the user's point of view to reduce this language confusion down to the most advantageous languages. Those languages still maintained should incorporate the special facets and advantages of the otherwise superfluous languages. Obviously such demands are not in the interests of computer production firms, especially when one considers that a FORTRAN program can be executed on nearly all third-generation computers.

    The titles in this bibliography are organized alphabetically according to programming language, and within a language chronologically and again alphabetically within a given year. Preceding the first programming language in the alphabet, literature is listed on several languages, as are general papers on programming languages and on the theory of formal languages (AAA).
    As far as possible, most of the titles are based on autopsy. However, the bibliographical description of some titles will not satisfy bibliography-documentation demands, since they are based on inaccurate information in various sources. Translation titles whose original titles could not be found through bibliographical research were not included. In view of the fact that many libraries do not have the quoted papers, all magazine essays should have been listed with the volume, the year, issue number and the complete number of pages (e.g. pp. 721-783), so that interlibrary loans could take place with fast reader service. Unfortunately, these data were not always found.

    It is hoped that this bibliography will help the electronic data processing expert, and those who wish to select the appropriate programming language from the many available, to find a way through the language Babel.

    We wish to offer special thanks to Mr. Klaus G. Saur and the staff of Verlag Dokumentation for their publishing work.

    Graz / Austria, May, 1973
          in [AFIPS JCC 25] Proceedings of the 1964 Spring Joint Computer Conference SJCC 1964 view details
  • Allen, F. E. "The history of language processor technology in IBM" pp535-548. view details
          in IBM Journal of Research and Development, 25(5), September 1981 25th anniversary issue view details
  • Kerner, L.O. review of Allen 1981 in ACM Computing Reviews September 1982 view details Abstract: This history of language processors in IBM can be divided quite naturally into five periods, generally delineated by the introduction of significant new products. The first period starts with the introduction of the IBM 701 in 1952 and the early "automatic-programming system" -- Speedcode -- for this computer in 1953. In 1954 the second period starts with the work on FORTRAN I, a language and system which established the foundations of compiler technology. The period from 1957, when the 709 was announced, provided many computers, languages, and systems: the 7070, 7090, 1401, COBOL, RPG, and FORMAC. In 1964 IBM announced the System/360 as a family of compatible computers. The new PL/I language had to support most of the configurations and to function in the environment supplied by the operating system. Another important language of this period was APL, implemented on the 360 in 1966. During the seventies IBM was busy with theory: program analysis and optimization, global register allocation, and implementation based on an LALR parser-generator. The reader can see the relationship between theory and practice. Much historical information is included, but of course all this is seen through the eyes of IBM. The paper contains 89 relevant references.



          in IBM Journal of Research and Development, 25(5), September 1981 25th anniversary issue view details
  • Ceruzzi, Paul with McDonald, Rod and Welch, Gregory "Computers: A Look at the First Generation" view details External link: The Computer Museum Report, Volume 7 online at Ed Thelen's site Extract: Programming first generation machines
    The first generation of computers was programmed in machine language, typically by binary digits punched into a paper tape. Activity in higher-level programming was found both on the large-scale machines and on the smaller commercial drum computers.

    High-level programming languages have their roots in the mundane. A pressing problem for users of drum computers was placing the program and data on the drum in a way that minimized the waiting time for the computer to fetch them.

    It did not take long to realize that the computer could perform the necessary calculations to minimize the so-called latency, and out of these routines grew the first rudimentary compilers and interpreters. Indeed, nearly every drum or delay line computer had at least one optimizing compiler. Some of the routines among the serial memory computers include SOAP for the IBM 650, IT for the Datatron, and Magic for the University of Michigan's MIDAC.

    Parallel memory machines had less sophisticated and diverse compilers and interpreters. Among the exceptions were SPEEDCODE developed for the IBM 701, JOSS for the Johnniac, and a number of compilers and interpreters for the Whirlwind.


          in The Computer Museum Report, Volume 7, Winter/1983/84 view details
  • "The Speedcoding Language and interpreter" 9.3 pp333-338 view details
          in C.J. Bashe, L.R. Johnson, J.H. Palmer, and E.W. Pugh "IBM's Early Computers" MIT Press, 1986 (Vol. 3 in the History of Computing series) view details
    Resources