Algebraic(ID:39/alg008)

MIT Automatic Coding System  


Adams and Laning, MIT. A non-interpretive (i.e., automatic code translator) language system for Whirlwind

Also known as Algebraic Coding (ref. Gorn 1957)

Became part of the Comprehensive system

Related languages
Laning and Zierler => Algebraic   Evolution of
Algebraic => Comprehensive   Incorporated features of

References:
  • Carr, J. W., "Review of Electronic Digital Computers: Extension of Remarks Made From the Floor" pp. 113-114
          in [JCC 01] Joint AIEE-IRE Computer Conference Proceedings, February 1952
  • John W. Carr, III "Progress of the whirlwind computer towards an automatic programming procedure" Abstract: This paper shall discuss present and proposed uses of subroutines and other pre-tested automatic programs on the Whirlwind computer at the Digital Computer Laboratory, Massachusetts Institute of Technology. At Whirlwind, programming methods are involved not only with a Subroutine Library, but also with the use of artificial "programmed codes," interpretive routines, automatic assembly schemes, and conversion programs. For this reason, this paper shall attempt both to summarize the use of all these devices and to show the directions that final organization of the machine's program structure may take. Extract: Introduction
    PROGRESS OF THE WHIRLWIND COMPUTER TOWARDS AN AUTOMATIC PROGRAMMING PROCEDURE
    John W. Carr III
    Massachusetts Institute of Technology
    Summary
    This paper shall discuss present and proposed uses of subroutines and other pre-tested automatic programs on the Whirlwind computer at the Digital Computer Laboratory, Massachusetts Institute of Technology. At Whirlwind, programming methods are involved not only with a Subroutine Library, but also with the use of artificial "programmed codes," interpretive routines, automatic assembly schemes, and conversion programs. For this reason, this paper shall attempt both to summarize the use of all these devices and to show the directions that final organization of the machine's program structure may take.

    Advantages of Subroutines
    The growing use of subroutines on high-speed automatic computing machines requires a re-evaluation of their use and purpose, as viewed in the light of the overall program structure of the machine. Although some hold-outs exist, most people involved in setting up problems for high-speed computing machines admit the usefulness of subroutines. In fact, the subroutine idea is basic to any machine: any instruction can be considered to be a wired-in subroutine combining electronically many "micro-instructions."
    The use of a set of instructions, or "order code," in a computing machine has, among others, the following advantages:
    1) The instructions are automatic in operation and in sequencing to the next instruction.
    2) The instructions are pre-tested, and so their procedure of operation does not have to be re-written and checked before each use.
    3) No provision has to be made for storage of the results of intermediate steps--partial sums, etc.--in the course of the performance of the instruction.
    4) Storage of instructions, upon read-in into the machine, is performed sequentially and automatically, so that no provision has to be made by the programmer in order for the next instruction to go in the succeeding storage location.
    5) The use of instructions -- "wired-in" subroutines -- saves a great deal of time and effort on the part of the mathematician or programmer, since he has to write only a very short coded instruction to represent a complicated operation.
    An obvious goal for the user of subroutines -- "programmed instructions" -- would be that they should have all the above advantages of the "electronic instructions." In addition they would automatically have the following additional advantages:
    6) The subroutines may be changed at will, without interfering with machine hardware, rewiring circuits, shutting down operation, etc., simply by changing the combination of electronic instructions.
    7) The subroutines are designed on the basis of actual need by the users of the machine, rather than on the basis of hypothesized future needs by designers who in most cases are not the users of the machine.
    It must not be overlooked that subroutines, like the wired-in electronic instructions, suffer from several disadvantages. They often require more machine storage than a one-shot routine (that is, one coded for a single specific purpose). Too, subroutines that are to cover a multitude of eventualities may often require more machine time than similar "one-shot" routines, just as an addition instruction that deals only with positive numbers could be made to operate faster than one which must handle all numbers irrespective of sign. Most of the subroutines and subroutine organizations devised up to now have included the advantages 1-3 listed above.
    Perhaps the first general use of such subroutines was at EDSAC, the machine of the Mathematical Laboratories at Cambridge University. However, until recently, very little effort has seemingly been made to write subroutines and organize the programming structure of a machine, so that advantages 4 and 5 would be incorporated into the use of subroutines. Both recent and present efforts in programming at Whirlwind have been to incorporate these two ideas -- automatic sequential storage and "programmed instruction code" -- into the use of subroutines. The final goal would be one or more instruction codes which would treat "electronic instructions" and "programmed instructions" (automatic subroutines with the five characteristics listed above) in exactly the same manner. In fact, separation of instructions into the two classes -- electronic and programmed -- is an artificial distinction which might well be eliminated.

    Future Programming Organization
    The general goal listed above, if possible of accomplishment on present machines, might lead an observer to predict that some of the general purpose, automatic high-speed computing machines of the future will have the following characteristics:
    1) A large, rapid-access high-speed storage.
    2) A simple, almost minimal instruction code.
    3) A simple control element.
    4) A set of programmed instructions, actually consisting of automatic subroutines, automatically called in, which would be used almost exclusively in programming problems.

    One of the general tendencies which is forcing the future of some machines, at least, in this direction is the difficulty of getting enough competent programmers. To the mathematicians connected with machines where would-be users are clamoring for time on the machine, and at the same time offering to perform the programming task, this may seem a little pessimistic. However, the history of some of the earlier automatic machines which, partially because of programming difficulties, spent many hours grinding out tables, might point a moral. With more efficient machines, and more time available, and at the same time more difficult problems being attempted, the job of planning and programming problems may well become the bottleneck in operation.
    Connected with this same problem is a similar one: what type of technically trained personnel should be used to do the programming job? At present, because of the intricacies of the various instruction codes and the lack of pre-tested coded units ("programmed instructions"), a large part of the coding is done by mathematicians who might better be spending more time planning the problem and evaluating the results from it. If their time is to be saved by use of personnel with a smaller amount of training, or even if they are still to be used in this task, it is imperative that the routine of coding be simplified and made as automatic as possible.
    Too, as problems become larger and more complicated, the time required for input of instructions from outside into the machine increases. For these several reasons, one solution seems to be to let the machine itself do as much of the routine work required in programming, assigning storage, assembling subroutines, etc., and for the machine to carry as many as possible of the subroutines in its own accessible storage.
    It thus appears that the planning and design of a computing machine is far from being finished when the last wire has been soldered and the last tube inserted in its socket. Actually the job of organization of the machine has only just begun. The organization and automatization of a machine's activities must have as thorough overall planning as was needed to establish its basic circuitry. New logical methods and devices, most of them unheard of when the present machines went on the drawing boards, may be needed to solve the programming problem mentioned above. Moreover, for ultimate success, these must be integrated into a logical whole rather than assembled piecemeal. Some of the programming methods that have arisen from separate needs, but that must finally be combined into an overall integrated system, are programmed instructions (closed automatic subroutines), postmortem print-outs, interpreter routines, error diagnostic programs, and the general large-scale conversion program.

    The Program at Whirlwind
    Whirlwind, the high-speed electronic computer of the Digital Computer Laboratory, M.I.T., has from the first offered a natural testing ground for such advanced programming ideas. The men who built it (some of whom are its present users) planned the operation to be similar to that of most other presently-working machines. Programs were first coded in hexadecimal form, with translation by hand. Then a general octal number scheme was used, with addresses being labelled octally, and numbers generally referred to by their octal value.
    However, Whirlwind is a very fast machine, having been originally planned as a simulation device, and its register length is short, sixteen binary digits. Therefore, for engineering and scientific calculations of any accuracy, Whirlwind cannot be used in its original "hardware" form, but must be programmed for multi-length or floating-point operation. Its high speed allows such operation at speeds competitive with most other machines (2 milliseconds for multiplication of two 24-digit floating-point numbers, each with 6-digit exponent).

    Early in the operation of Whirlwind, an important decision was made to translate the operation portion of an instruction from a two-letter pair (such as ca n: clear accumulator and add the contents of register n) typed on an automatic typewriter--and punched at the same time on paper tape--outside of the machine, into the five binary digits required inside the machine. This was accomplished by a so-called Conversion Program, which was originally an enlargement and expansion of the EDSAC input program devised by Wilkes, Wheeler, and Gill (1).
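
    [Illustrative sketch only: a minimal modern Python rendering of the kind of conversion just described -- packing a two-letter operation mnemonic and an address into a single 16-bit word with a five-bit operation field. The opcode values and the 11-bit address width below are assumptions for illustration, not the actual Whirlwind encodings.]

    # Hypothetical mnemonic table; the numeric opcodes are placeholders.
    OPCODES = {
        "ca": 0b10000,   # clear accumulator and add (placeholder value)
        "ad": 0b10001,   # add (placeholder value)
        "ts": 0b10010,   # transfer to storage (placeholder value)
    }

    def convert(mnemonic: str, address: int) -> int:
        """Pack a mnemonic/address pair into one 16-bit machine word."""
        op = OPCODES[mnemonic]
        if not 0 <= address < 2 ** 11:
            raise ValueError("address does not fit in 11 bits")
        return (op << 11) | address

    print(format(convert("ca", 100), "016b"))  # e.g. 1000000001100100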

    Conversion and Translation Programs
    Credit for this conversion program is due to C. W. Adams and J. T. Gilmore, who refused to limit themselves to the straightjacket of a minimal-storage conversion or translation program, and from the first expanded their basic program to include many "frills." At first, because not enough storage was available in the machine, the operation of a problem took two steps:
    1) Typing in a standard "outside-the-machine" form, and temporary storage on a punched-out "machine-dialect" binary paper tape.
    2) Reading in the second paper tape and performance of the problem.
    This process allowed a programmer to code in the two-letter-pair code with either octal or decimal addresses, simple automatic control combinations, relative or absolute addresses, and preset parameters similar to those at EDSAC. It proved successful as a simpler code for programmers, in that a large number of graduate students, research engineers, and outside mathematicians were able to learn and use it in a very short time. This conversion program was thus, in a way, a translation device from an outside programmer's language, entirely different from the machine's internal binary language, into that machine language. Once such a program is recognized to be just that, there is no reason why the outside language cannot be made to conform as closely as possible to the programmer's or mathematician's desires, rather than to the machine's standards.
    With this translation device at the same time have grown up various re-translation schemes, one of the most important of which is the "post-mortem" print-out, "to be resorted to only after the patient has died" (i.e., after the problem has performed unsuccessfully). The more different the outside language from that of the machine, the more obvious is the need for such re-translation programs, if the programmer is to avoid dealing simultaneously with two languages, his own outside, and that of the machine. In a step toward the evolution of a more elaborate outside language, Whirlwind programmers have devised an "algebraic address" code. [Rochester of IBM states that terminology in his laboratory is to call such addresses "symbolic addresses."] This code labels machine addresses in an outside language which is altogether different from the machine's standard binary notation. From the use of the preset parameters, and a suggestion which originated with Wilkes (2), a "free-" or "floating-address" type of programming, using automatic assembly techniques, has evolved. (Whirlwind notation prefers "floating-address" because of the confusion between "three-" and "free-address." It should be noted that this is not the same terminology as the SEAC "floating-address," which is called "relative-address" coding at Whirlwind.) The floating-address programming scheme calls addresses by their preset-parameter "name," which has no connection with the corresponding machine address, except through the conversion program. On the original "standard" tape, an address is noted by a letter and numeral combination which is actually the equivalent preset parameter. Upon read-in, the address is replaced by the contents of that preset parameter, which has been stored there previously. Insertions and corrections in programs are remarkably easy using this system.

    Automatic Assembly
    Some of the more experimental programmers have combined this idea with a "two-pass" paper-tape scheme, which sets up the proper address in the preset parameters automatically on the first read-in, and then substitutes the now correct values on the second run. This method is obviously more feasible with magnetic tape or drum storage, yet it should prove helpful on machines without these aids. The next step in the Whirlwind scheme for aiding the programmer is a proposed new conversion scheme which will make use of the presently available magnetic tape, and later, the drum. This scheme makes full use of the available Flexowriter automatic typewriter input, using almost every one of the approximately sixty symbols on the keyboard, in what has been an attempt to approximate a standard outside algebraic terminology.
    A single-address code is to be used, but programmers will have available a large number of "floating addresses" perhaps as many as 500. This means up to 500 addresses may be programmed using only letter and numeral designation. With this scheme, the original coding notation in "blocks," as proposed by Burks, von Neumann, and Goldstine (5) is completely possible, with the added advantage of algebraic addresses for single registers. All storage will be assigned automatically, with automatic checks to prevent programs from extending beyond available storage.
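
    [Illustrative sketch only: a toy two-pass rendering of floating-address assembly, assuming a simple (label, operation, symbolic address) source form. Pass 1 assigns a storage location to every label; pass 2 substitutes those locations for the symbolic names. This shows the idea in modern Python, not the actual Whirlwind conversion program.]

    source = [
        ("a1", "ca", "x1"),   # labelled instruction referring to the word labelled x1
        ("",   "ts", "x1"),
        ("x1", None, None),   # a data word
    ]

    def assemble(lines, origin=32):
        """Two passes: collect label locations, then substitute them."""
        symbols, loc = {}, origin
        for label, op, addr in lines:      # pass 1: assign storage locations
            if label:
                symbols[label] = loc
            loc += 1
        out, loc = [], origin
        for label, op, addr in lines:      # pass 2: replace symbolic addresses
            if op is not None:
                out.append((loc, op, symbols[addr]))
            loc += 1
        return out

    for word in assemble(source):
        print(word)                        # (32, 'ca', 34), (33, 'ts', 34)
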
    Along with the floating addresses and automatic assembly, the new program is to deal with multi-precision and floating-point number storage. If various portions of numbers are to be stored in separate registers, it is imperative that their conversion, storage, and assembly be made as automatic as possible. At Whirlwind, for numbers (either multi-precision or floating point) stored in separate registers, two schemes have been investigated:
    1) Separation of the various parts of a number by a separation constant K > 1, where actually K = total number of such numbers.
    2) Separation of the various parts of a number by a separation constant K = 1.
    The first alternative has been used in the various read-in programs so far used at Whirlwind. It has the advantage that with the Whirlwind instruction ao (add one) the address in a series of multi-length numbers may be advanced by the ao instruction, so that multi-length numbers may be stored and called in sequence in this simple fashion. This method has proved satisfactory in operation, but difficult to include in an automatic assembly scheme.
    The second method is similar in logic to the EDSAC built-in logic, and is much easier for automatic assembly. In the program now being planned for Whirlwind, it may be used instead of the previous separation scheme.
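
    [Illustrative sketch only: the two storage layouts just described, expressed as address calculations for N two-register numbers starting at a base address. A separation constant K = N corresponds to storing all first halves together followed by all second halves, so advancing an address by one steps to the next number; K = 1 corresponds to storing the two halves of each number in adjacent registers. The addresses used are assumptions for illustration.]

    def interleaved(base, n_numbers, i):
        """Addresses of the two halves of number i when K = n_numbers."""
        return base + i, base + n_numbers + i

    def contiguous(base, i):
        """Addresses of the two halves of number i when K = 1."""
        return base + 2 * i, base + 2 * i + 1

    N = 4
    for i in range(N):
        print("number", i, "K=N:", interleaved(100, N, i), "K=1:", contiguous(100, i))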

    Interpretive Subroutines
    The floating-point and multi-length operations have been handled completely automatically by interpretive subroutines, which act successively on instructions stored as program parameters, by examining them and performing a closed subroutine dependent on the instruction. The interpretive subroutines so far developed have been combined with the conversion program, so that the instructions to be interpreted are typed as two-letter pairs and translated into machine form automatically.
    The standard interpretive subroutines in use so far include:
    1) A 30 digit multi-precision (2 register) routine.
    2) A 24 digit, 6 digit exponent floating point (2 register) routine.
    3) A 45 digit multi-precision (3 register) routine.
    4) A 39 digit, 6 digit exponent floating point (3 register) routine.
    5) A 15 digit, 15 digit exponent floating point (2 register) routine.
    These and other variants all use a standard two-letter pair, single address code. The operations included are most of the "hardware" logical and arithmetic operations, with divide sometimes being omitted and those instructions affecting only single-length words not included. Future interpretive subroutines may be of a different logical type--perhaps three-address with programmed B-boxes and logic similar to the University of Manchester machine. Also special instructions to eliminate some round-off upon storing may be incorporated. Special credit is due to John Frankovich and Frank Helwig of the Whirlwind Applications Group for many of the ideas and notation evolved in these programs.
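
    [Illustrative sketch only: a toy dispatch loop in the spirit of the interpretive subroutines described above. Pseudo-instructions are held as data; the interpreter examines each one and performs a closed routine for it against a simulated accumulator. The operation names and their behaviour here are assumptions for illustration, not the actual Whirlwind programmed-arithmetic code.]

    memory = {10: 1.5, 11: 2.25, 12: 0.0}            # simulated storage
    program = [("ca", 10), ("ad", 11), ("ts", 12)]   # pseudo-code to be interpreted

    def interpret(program, memory):
        acc = 0.0
        for op, addr in program:
            if op == "ca":        # clear accumulator and add
                acc = memory[addr]
            elif op == "ad":      # add storage word to accumulator
                acc += memory[addr]
            elif op == "ts":      # transfer accumulator to storage
                memory[addr] = acc
            else:
                raise ValueError(f"unknown pseudo-operation {op!r}")
        return acc

    interpret(program, memory)
    print(memory[12])             # 3.75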

    Error Diagnosis
    One aspect of the programming problem that has received special attention at the Digital Computer Laboratory has been the problem of finding errors on the part of the programmer or tape-room personnel that escape undetected. To aid in this purpose, a special interpretive subroutine, duplicating the built-in operation of Whirlwind in an artificial manner, with duplicates of the arithmetic registers in storage, has been designed. This program performs all the "electronic instructions" of the machine by equivalent closed subroutines, at the same time allowing printing out of the contents of the various arithmetic registers before and after operation of each instruction. Thus any ordinary single-length program can be interpreted by a standard error diagnosis program, from time to time printing out the instruction operation, instruction address, accumulator contents, or whatever else is desired. This gives the programmer an inside look into the behavior of the problem. Much of the Error Diagnosis investigation has been done by Donn Combello, who has introduced many new twists of his own into these programs, including various delayed print-outs that save much time. Unfortunately, Error Diagnosis programs for the various interpretive routines are not yet completed. However, it is planned that they will be an integral part of the interpretive routine itself, with optional print-outs of the various register contents of interest.
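
    [Illustrative sketch only: the tracing idea behind such error-diagnosis routines -- interpret the program one instruction at a time and print selected register contents before and after each step. The miniature instruction set below is an assumption for illustration.]

    memory = {20: 3, 21: 4}
    program = [("ca", 20), ("ad", 21)]

    acc = 0
    for op, addr in program:
        before = acc
        if op == "ca":            # clear accumulator and add
            acc = memory[addr]
        elif op == "ad":          # add
            acc += memory[addr]
        print(f"{op} {addr}: accumulator {before} -> {acc}")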

    Programmed Instructions
    In addition to the present two-letter-pair Whirlwind and interpretive instructions, one suggestion that has been seriously under investigation is a set of further three-letter-triplets to represent automatically-called-in closed subroutines. The various printing subroutines now being used would be automatically called in by the programmer by inserting a "programmed instruction" (one tentative notation would call in a print subroutine by the three-letter-triplet PRTx, where x is some coded symbol which tells the form in which the printing is to be done--number of digits, decimal point position, etc.). The job of substitution and automatic call-in would be performed on read-in by the conversion or translation program itself. Thus, in place of the typewritten three-letter combination, a change-of-control instruction would be inserted automatically in the proper sequence, referring to a closed (self-returning) automatic subroutine. This -- combined with automatic assembly -- would make the use of subroutines as simple as present use of ordinary instructions.
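
    [Illustrative sketch only: the substitution described above, assuming that on read-in a three-letter "programmed instruction" is replaced by a transfer of control to a closed library subroutine followed by a parameter word. The subroutine entry addresses, the sp transfer mnemonic, and the expansion format are assumptions for illustration.]

    LIBRARY = {"PRT": 400, "SQR": 450}   # entry addresses of closed subroutines (made up)

    def expand(program):
        """Replace programmed instructions by subroutine transfers on read-in."""
        out = []
        for op, arg in program:
            if op in LIBRARY:
                out.append(("sp", LIBRARY[op]))   # transfer control to the closed subroutine
                out.append(("parm", arg))         # parameter word giving the print format, etc.
            else:
                out.append((op, arg))
        return out

    print(expand([("ca", 100), ("PRT", 6), ("ts", 101)]))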

    Subroutines in Use
    The present Whirlwind Library of Subroutines can be divided into two parts, those subroutines written for use with single-length numbers, and those for use with the interpretive subroutines. In the first category, the most useful routines have been a complete sequence of routines printing out the binary content of a single register in octal or decimal form, as fractions or integers, and with various possible layouts, spacing, digit lengths, etc. Similar display programs are available for the cathode-ray oscilloscope output. It is planned to combine all these slight variants into a few or one master routine, with an ultimate saving in handling time and programming effort. Programs for the square root use the standard iterative scheme; the trigonometric functions are approximated by optimized polynomials.
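
    [Illustrative sketch only: the "standard iterative scheme" for square root is usually the Newton (Heron) iteration; the Python below shows that idea in modern form and is not the actual Whirlwind library routine.]

    def sqrt_newton(a, tolerance=1e-12):
        """Approximate sqrt(a) by repeatedly averaging x and a/x."""
        if a < 0:
            raise ValueError("negative argument")
        if a == 0:
            return 0.0
        x = a if a >= 1 else 1.0               # a rough first guess
        while abs(x * x - a) > tolerance * a:
            x = 0.5 * (x + a / x)
        return x

    print(sqrt_newton(2.0))                    # 1.4142135623...
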
    The subroutines of the real-time applications group, where emphasis is on speed rather than accuracy, use step-by-step methods and very approximate polynomials. A similar set of subroutines, but more recent in origin, have been developed for use with multi-length and floating-point interpretive subroutines. The floating-point form of numbers is particularly suited for iterative schemes, since a first guess can often be obtained very close to the final wanted result. Polynomial evaluation and differentiation subroutines are also available in the floating point scheme. An automatic real-root-finder for polynomials with real coefficients is also available. There also exist a number of small special purpose subroutines (counters, switching programs, etc.), most of which were copied from similar EDSAC programs. These have been used very little, if at all; and they may be omitted later from the Subroutine Library as so much excess baggage.
    Other entries in the Subroutine Library include the aforementioned Error Diagnosis, Programmed Arithmetic (Interpretive) Subroutines, and Post Mortem print-outs. A proposed Post Mortem program is to print out registers changed only during the course of a program.

    Matrix Calculations
    Originally, a full matrix calculus, with completely automatic matrix operations, was planned, and some parts of it actually coded for single-length numbers, but the development of the interpretive subroutines and prospect of changes in the final system have caused temporary abandonment of this scheme. However, complete programs have been written and tested for several of the standard matrix inversion and simultaneous linear equation solution methods (Selection, Relaxation, Elimination, Steepest Descent) and for various eigenvalue and eigenvector methods (Iterative diagonalization, Direct Trace Methods, Iteration for Highest Eigenvalue, etc.). With these programs, any matrix typed in standard form, and satisfying certain mathematical prerequisites, can be inserted into the machine, along with the program, which then types out the required answers automatically.

    Conclusions
    The investigations at Whirlwind, along with similar ones made at other machines, suggest an answer to the problem of simplifying the job of programming. This answer is not the simple one, adopted by some, of using the basic machine language of "electronic instructions" and being satisfied with it. Rather, this proposed answer assumes that the language of the programmer shall be vastly different from that of the machine, and that it should be patterned so as to fit the previous languages of the programmer (English and algebra, for example, in the case of many problems here in the United States).
    Entries of information into and out of the machine must, then, all pass through translation or conversion programs. By means of interpretive subroutines and artificial codes, present-day codes must be extended to include all the basic subroutines of today. Notification of errors should be made in the outside language, if possible, with full explanation of just what has been declared, rather than the mystic lights and bells that are the plagues of the present machines. Post mortem programs should re-translate into the outside language, printing out only the registers changed during an operation of the program. Ideally, the machine should perform all the menial jobs now performed by the programmer.

    Bibliography
    1. Wilkes, M.V., Wheeler, D.J., and Gill, S., The Preparation of Programs for an Electronic Digital Computer, Addison-Wesley Press, Inc., Cambridge, Mass.
    2. Carr, J.W., "Extension of Remarks Made From the Floor," Review of Electronic Digital Computers, Joint AIEE-IRE Computer Conference (AIEE, February, 1952) pp. 113-114.
    3. Goldstine, H.H., and von Neumann, J., Planning and Coding of Problems for an Electronic Computing Instrument, The Institute for Advanced Study, Princeton, N.J., Parts I and II, Vol. III, 1947. Page 241
          in Proceedings of the 1952 ACM national meeting (Pittsburgh)
  • [Forrester, Jay]; Adams, Charles and Gill, Stanley "Notes on digital computers and their applications" Summer session 1953, Cambridge, MA: MIT, 1953
  • Adams, Charles W. and Laning, J.H. Jr. "The MIT System of Automatic Coding: Comprehensive, Summer Session and Algebraic"
          in Symposium on Automatic Programming For Digital Computers, Office of Naval Research, Dept. of the Navy, Washington, D.C. PB 111 607, May 13-14 1954
  • Bemer, R. W. "The Status of Automatic Programming for Scientific Problems" Abstract: A catalogue of automatic coding systems that are either operational or in the process of development together with brief descriptions of some of the more important ones Extract: Summary
    Let me elaborate these points with examples. UNICODE is expected to require about fifteen man-years. Most modern assembly systems must take from six to ten man-years. SCAT expects to absorb twelve people for most of a year. The initial writing of the 704 FORTRAN required about twenty-five man-years. Split among many different machines, IBM's Applied Programming Department has over a hundred and twenty programmers. Sperry Rand probably has more than this, and for utility and automatic coding systems only! Add to these the number of customer programmers also engaged in writing similar systems, and you will see that the total is overwhelming.
    Perhaps five to six man-years are being expended to write the Model 2 FORTRAN for the 704, trimming bugs and getting better documentation for incorporation into the even larger supervisory systems of various installations. If available, more could undoubtedly be expended to bring the original system up to the limit of what we can now conceive. Maintenance is a very sizable portion of the entire effort going into a system.
    Certainly, all of us have a few skeletons in the closet when it comes to adapting old systems to new machines. Hardly anything more than the flow charts is reusable in writing 709 FORTRAN; changes in the characteristics of instructions, and tricky coding, have done for the rest. This is true of every effort I am familiar with, not just IBM's.
    What am I leading up to? Simply that the day of diverse development of automatic coding systems is either out or, if not, should be. The list of systems collected here illustrates a vast amount of duplication and incomplete conception. A computer manufacturer should produce both the product and the means to use the product, but this should be done with the full co-operation of responsible users. There is a gratifying trend toward such unification in such organizations as SHARE, USE, GUIDE, DUO, etc. The PACT group was a shining example in its day. Many other coding systems, such as FLAIR, PRINT, FORTRAN, and USE, have been done as the result of partial co-operation. FORTRAN for the 705 seems to me to be an ideally balanced project, the burden being carried equally by IBM and its customers.
    Finally, let me make a recommendation to all computer installations. There seems to be a reasonably sharp distinction between people who program and use computers as a tool and those who are programmers and live to make things easy for the other people. If you have the latter at your installation, do not waste them on production and do not waste them on a private effort in automatic coding in a day when that type of project is so complex. Offer them in a cooperative venture with your manufacturer (they still remain your employees) and give him the benefit of the practical experience in your problems. You will get your investment back many times over in ease of programming and the guarantee that your problems have been considered.
    Extract: IT, FORTRANSIT, SAP, SOAP, SOHIO
    The IT language is also showing up in future plans for many different computers. Case Institute, having just completed an intermediate symbolic assembly to accept IT output, is starting to write an IT processor for UNIVAC. This is expected to be working by late summer of 1958. One of the original programmers at Carnegie Tech spent the last summer at Ramo-Wooldridge to write IT for the 1103A. This project is complete except for input-output and may be expected to be operational by December, 1957. IT is also being done for the IBM 705-1, 2 by Standard Oil of Ohio, with no expected completion date known yet. It is interesting to note that Sohio is also participating in the 705 FORTRAN effort and will undoubtedly serve as the basic source of FORTRAN-to-IT-to-FORTRAN translational information. A graduate student at the University of Michigan is producing SAP output for IT (rather than SOAP) so that IT will run on the 704; this, however, is only for experience; it would be much more profitable to write a pre-processor from IT to FORTRAN (the reverse of FORTRANSIT) and utilize the power of FORTRAN for free.
          in "Proceedings of the Fourth Annual Computer Applications Symposium" , Armour Research Foundation, Illinois Institute of Technology, Chicago, Illinois 1957 view details
  • Gorn, Saul "Standardized Programming Methods and Universal Coding" Extract: Introduction
    Introduction
    It is possible so to standardize programming and coding for general purpose, automatic, high-speed, digital computing machines that most of the process becomes mechanical and, to a great degree, independent of the machine. To the extent that the programming and coding process is mechanical a machine may be made to carry it out, for the procedure is just another data processing one.
    If the machine has a common storage for its instructions along with any other data, it can even carry out each instruction immediately after having coded it. This mode of operation in automatic coding is known as 'interpretive'. There have been a number of interpretive automatic coding procedures on various machines, notably MIT's Summer Session and Comprehensive systems for Whirlwind, Michigan's Magic System for MIDAC, and IBM's Speedcode; in addition there have been some interpretive systems beginning essentially with mathematical formulae as the pseudocode, such as MIT's Algebraic Coding, one for the SEAC, and others.
    We will be interested, however, in considering the coding of a routine as a separate problem, whose result is the final code. Automatic coding which imitates such a process is, in the main, non-interpretive. Notable examples are the A-2 and B-O compiler systems, and the G-P (general purpose) system, all for UNIVAC, and IBM's FORTRAN, of the algebraic coding type.
    Although, unlike interpretive systems, compilers do not absolutely require their machines to possess common storage of instructions and the data they process, they are considerably simpler when their machines do have this property. Much more necessary for the purpose is that the machines possess a reasonable amount of internal erasable storage, and the ability to exercise discrimination among alternatives by simple comparison instructions. It will be assumed that the machines under discussion, whether we talk about standardized or about automatic coding, possess these three properties, namely, common storage, erasable storage, and discrimination. Such machines are said to possess "loop control".
    We will be interested in that part of the coding process which all machines having loop control and a sufficiently large storage can carry out in essentially the same manner; it is this part of coding that is universal and capable of standardization by a universal pseudo-code.
    The choice of such a pseudo-code is, of course, a matter of convention, and is to that extent arbitrary, provided it is
    (1) a language rich enough to permit the description of anything these machines can do, and
    (2) a language whose basic vocabulary is not too microscopically detailed.
    The first requirement is needed for universality of application; the second is necessary if we want to be sure that the job of hand coding with the pseudo-code is considerably less detailed than the job of hand coding directly in machine code. Automatic coding is pointless practically if this second condition is not fulfilled.
    In connection with the first condition we should remark on what the class of machines can produce; in connection with the second we should give some analysis of the coding process. In either case we should say a few things about the logical concept of computability and the syntax of machine codes.
          in [ACM] JACM 4(3) July 1957
  • [Bemer, RW] [State of ACM automatic coding library August 1958]
  • [Bemer, RW] [State of ACM automatic coding library May 1959] Extract: Obiter Dicta
    Bob Bemer states that this table (which appeared sporadically in CACM) was partly used as a space filler. The last version was enshrined in Sammet (1969) and the attribution there is normally misquoted.
          in [ACM] CACM 2(05) May 1959
  • Carr, John W III; "Computer Programming" volume 2, chapter 2, pp. 115-121
          in E. M. Crabbe, S. Ramo, and D. E. Wooldridge (eds.) "Handbook of Automation, Computation, and Control," John Wiley & Sons, Inc., New York, 1959.
  • Rosen, Saul "Programming Systems and Languages: a historical Survey" (reprinted in Rosen, Saul (ed) Programming Systems & Languages. McGraw Hill, New York, 1967) Extract: FACT vs COBOL
    At the original Defense Department meeting there were two points of view. One group felt that the need was so urgent that it was necessary to work within the state of the art as it then existed and to specify a common language on that basis as soon as possible. The other group felt that a better understanding of the problems of data-processing programming was needed before a standard language could be proposed. They suggested that a longer range approach, looking toward the specification of a language in the course of two or three years, might produce better results. As a result two committees were set up: a short range committee and an intermediate range committee. The original charter of the short range committee was to examine existing techniques and languages, and to report back to CODASYL with recommendations as to how these could be used to produce an acceptable language. The committee set to work with a great sense of urgency. A number of companies represented had commitments to produce data-processing compilers, and representatives of some of these became part of the driving force behind the committee effort. The short range committee decided that the only way it could satisfy its obligations was to start immediately on the design of a new language. The committee became known as the COBOL committee, and their language was COBOL.
    Preliminary specifications for the new language were released by the end of 1959, and several companies, Sylvania, RCA, and Univac started almost immediately on implementation on the MOBIDIC, 501, and Univac II respectively.
    There then occurred the famous battle of the committees. The intermediate range committee had been meeting occasionally, and on one of these occasions they evaluated the early COBOL specifications and found them wanting. The preliminary specifications for Honeywell's FACT compiler had become available and the intermediate range committee indicated their feeling that Fact would be a better basis for a Common Business Oriented Language than Cobol.
    The COBOL committee had no intention of letting their work up to that time go to waste. With some interesting rhetoric about the course of history having made it impossible to consider any other action, and with the support of the Codasyl executive board, they affirmed Cobol as the Cobol. Of course it needed improvements but the basic structure would remain. The charter of the Cobol committee was revised to eliminate any reference to short term goals and its effort has continued at an almost unbelievable rate from that time to the present. Computer manufacturers assigned programming systems people to the committee essentially on a full time basis. Cobol 60, the first official description of the language, was followed by 61 and more recently by 61 extended.
    Some manufacturers dragged their feet with respect to Cobol implementation. Cobol was an incomplete and developing language, and some manufacturers, especially Honeywell and IBM, were implementing quite sophisticated data processing compilers of their own which would become obsolete if Cobol were really to achieve its goal. In 1960 the United States government put the full weight of its prestige and purchasing power behind Cobol, and all resistance disappeared. This was accomplished by a simple announcement that the United States government would not purchase or lease computing equipment from any manufacturer unless a Cobol language compiler was available, or unless the manufacturer could prove that the performance of his equipment would not be enhanced by the availability of such a compiler. No such proof was ever attempted for large scale electronic computers.
    To evaluate Cobol in this short talk is out of the question. A number of quite good Cobol compilers have been written. The one on the 7090, with which I have had some experience, may be typical. It implements only a fraction, less than half I would guess, of the language described in the manual for Cobol 61 extended. No announcement has been made as to whether or when the rest, some of which has only been published very recently, will be implemented. What is there is well done, and does many useful things, but the remaining features are important, as are some that have not yet been put into the manual and which may appear in Cobol 63.
    The language is rather clumsy to use, for example, long words like synchronized and computational must be written out all too frequently, but many programmers are willing to put up with this clumsiness because within its range of applicability the compiler performs many important functions that would otherwise have to be spelled out in great detail. It is hard to believe that this is the last, or even very close to the last word in data processing languages.
          in [AFIPS JCC 25] Proceedings of the 1964 Spring Joint Computer Conference SJCC 1964
  • Backus, John "Programming in America in the nineteen fifties - some personal impressions" pp. 125-135
          in Metropolis, N. et al., (eds.), A History of Computing in the Twentieth Century (Proceedings of the International Conference on the History of Computing, June 10-15, 1976) Academic Press, New York, 1980
  • Ceruzzi, Paul with McDonald, Rod and Welch, Gregory "Computers: A Look at the First Generation" External link: The Computer Museum Report, Volume 7 online at Ed Thelen's site Extract: Programming first generation machines
    The first generation of computers were programmed in machine language, typically by binary digits punched into a paper tape. Activity in higher-level programming was found on both the large-scale machines and on the smaller commercial drum computers.

    High-level programming languages have their roots in the mundane. A pressing problem for users of drum computers was placing the program and data on the drum in a way that minimized the waiting time for the computer to fetch them.

    It did not take long to realize that the computer could perform the necessary calculations to minimize the so called latency, and out of these routines grew the first rudimentary compilers and interpreters. Indeed, nearly every drum or delay line computer had at least one optimizing compiler. Some of the routines among the serial memory computers include SOAP for the IBM 650, IT for the Datatron, and Magic for the University of Michigan's MIDAC.

    Parallel memory machines had less sophisticated and diverse compilers and interpreters. Among the exceptions were SPEEDCODE developed for the IBM 701, JOSS for the Johnniac, and a number of compilers and interpreters for the Whirlwind.


          in The Computer Museum Report, Volume 7, Winter/1983/84
  • Diana H. Hook; Jeremy M. Norman; Michael R. Williams "Origins of Cyberspace" Jeremy Norman 2002 Extract: Early MIT Algebraic systems
    MIT was allocated a small amount of the Whirlwind I's computing time for academic work, an activity organized by Charles W. Adams, assistant professor of digital computation at MIT's Digital Computer Laboratory. (Adams later founded Adams Associates, and became the proprietor of Key Data.) In 1952 Adams held the first of what was to be a series of yearly summer sessions on computer programming at MIT, designed to provide scientists, engineers, and business people with a better understanding of the potentialities and limitations of electronic information-processing systems. MIT thus became the second American educational institution, after the Moore School, to offer courses on electronic digital computers. During the first summer session the students did not have access to the computer, but the next year's course, held between August 24 and September 4, 1953, was based on a specially designed hypothetical computer called the Summer Session computer, for which an emulation was written to run on the Whirlwind -- "a very progressive thing to do in 1953" (Wilkes 1985, 180-81). Among the session's lecturers were Stanley Gill and Maurice Wilkes, co-authors with David J. Wheeler of the first textbook of computer programming. Attending the 1953 session were 106 students: 28 from commercial and business groups, 53 from industrial research groups, and 25 from military and government establishments. The students attended lectures in the morning, and in the afternoon worked on writing computer programs for the solution of various problems, such as plotting the trajectory of a bouncing ball or selecting the winner from a given number of poker hands. The syllabus for the 1953 course includes a seven-page bibliography of computer literature.