IAL(ID:30/ial001)

International Algebraic Language

IAL stands for International Algebraic Language.

Subsequently renamed ALGOL.

Aims
1.  close to standard mathematical notation and readable with little explanation
2.  use for description of algorithms in publications
3.  mechanically translatable into machine programs




Related languages
FORTRAN II => IAL   Negative Slight Influence
IT 3 => IAL   Positive Strong Influence
MATH-MATIC => IAL   Evolution of
Sequentielle Formelübersetzung => IAL   Negative Slight Influence
XTRAN => IAL   Influence
ZMD Algorithmic => IAL   Evolution of
IAL => ALGO   Influence
IAL => ALGOL   Renaming
IAL => BALGOL   Influence
IAL => CLIP   Based on
IAL => JOVIAL   Influence
IAL => MAD   Influence
IAL => NELIAC   Influence
IAL => TUG Algol   Implementation of
IAL => V2   Based on
IAL => Wegstein algebraic language   Antipathy to
IAL => Z22 compiler   Implementation

References:
  • Bauer, F. L., H. Bottenbruch, H. Rutishauser, and K. Samelson, "Proposal for a universal language for the description of computing processes"
          in Carr, John W. III, ed., "Computer Programming and Artificial Intelligence", College of Engineering, University of Michigan, Ann Arbor, Michigan, 1958
  • Backus, J. W. "The syntax and semantics of the proposed international algebraic language of the Zurich ACM-GAMM conference"
          in Proc. Int. Conf. Information Processing, Oldenbourg, München, 1960
  • Carr, John W., III "Recursive subscripting compilers and list-type memories" pp4-6
          in [ACM] CACM 2(02) February 1959
  • Graham, Robert M. "On the implementation of the IAL" Extract: Introduction
    When the Michigan Algorithmic Decoder (MAD) was started it was intended to be an exact implementation of the IAL. As the project developed several serious difficulties were encountered. Thus MAD (now being checked out) differs from IAL in several respects. One of our prime objectives was to keep the translation time as small as possible. The majority of University problems are not run on a production basis. On the average there is probably one translation for every execution. A translator which takes several minutes to translate a problem which takes twenty seconds to run is not very practical for a university installation. The following discussion of the differences does not mention restrictions or omissions which can be added later, e.g., MAD has no DO statement (in the IAL sense), but it can be added at any time.

    Some of the more important changes are:
    1. Statement labels are restricted to integers. Switch vectors are replaced by one "statement switching dictionary" which contains the entry for each labeled statement in the program (this eliminates the need for any other switch vectors). The difficulty in using any switch vector occurs in the use of functions. If one of the arguments is to be the statement label of an alternate exit, then what does the translator do when an integer is found as an argument? At translate time no information is available as to what the integer will be used for; so, does the translator give the location of the integer constant or the location of the statement whose label is the integer? With the "switching dictionary" the location of the integer constant is given and this constant is used as an index in the dictionary.
    2. Compound statements are not accepted. The IF - OR IF statements, which allow the convenient selection of one of a set of sub-programs, are included. Since program branches are not usually considered in this form, a conditional IF, GOTO statement is also included.
    3. The FOR statement is generalized so that any Boolean expression may be given as the terminating condition.
    4. All arrays are stored in linear form, with elements indexed 0, 1, 2, ..., m. In addition to giving the value of m, the array declaration may also give, as a base, a subscript of the linear array and the values of n-1 dimensions if the elements are to be referred to in n-dimensional subscript notation. Both the base and dimensions may be variables, hence may be changed during execution.
    5. Internal functions may be defined in the same fashion as external functions (procedures), by subprograms. Function definitions may use an ERROR RETURN statement. If such a return is used as an exit from a function with n arguments, an n+1st argument - which is the number of the statement to be executed if the error is encountered - may be appended. If this argument is not given, ERROR RETURN causes a machine halt (or transfer back to an executive routine).
    6. Corresponding to the formal parameter "function with k empty parameter positions", IAL allows a function with k empty positions. MAD extends this to allow, in addition, a variable whose value is a function name. IAL specifies an array with k empty subscript positions as a formal parameter. Corresponding to this may be an array with k empty subscript positions whose dimension is greater than k. The intent of this seems to be to allow functions to operate on k-dimensional subspaces. The machinery for accomplishing this translation is extremely complex and the necessary computing time is too great to warrant, at this time, the inclusion of this notational convenience.
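    The following is a minimal, hypothetical sketch (in Python, not from Graham's paper) of the "statement switching dictionary" idea in point 1 above: the translator enters every labeled statement in one dictionary, and an integer constant passed as a function argument is resolved to a statement location only at run time by using its value as a key. The labels, values, and function names here are invented for illustration.

        # Sketch only: one dictionary holds an entry for every labeled statement;
        # labels are modelled as integers mapped to Python callables standing in
        # for object-code locations.
        switching_dictionary = {
            10: lambda: print("statement labeled 10"),
            20: lambda: print("statement labeled 20 (alternate exit)"),
        }

        def sqrt_or_exit(x, error_exit):
            # The translator only ever sees the integer constant; whether it names
            # a statement is decided at run time by looking it up in the dictionary.
            if x < 0:
                switching_dictionary[error_exit]()   # transfer via the dictionary
                return None
            return x ** 0.5

        sqrt_or_exit(-1.0, 20)   # takes the alternate exit labeled 20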
    IAL is an extremely rich language, but in some cases this richness is gained at a very high cost. Accordingly, the implemented language is limited in some ways. These limitations do not actually decrease the ease of use, and in some cases make it more convenient. A new group of statements allows the definition of new binary and unary operators. The definition may be either in MAD or in a SAP-like code. The defined operator is then used just as any other operator. The precedence and commutativity of a defined operator, the mode of the operands, and allowed conversions between these and other modes may all be specified by appropriate statements in MAD.
    As an example of definition, suppose one wishes to define two operators: a) extract the left-most BCD character in A, and b) concatenate two characters B and C. Defined operators are delimited by periods. In MAD one writes:
    DEFINE OPERATION (K=.KR1.A)
         CAL A
         ARS 30
         STO K
    DEFINE END
    DEFINE OPERATION (T=B.CONCAT.C)
         CLA B
         ALS 6
         ADD C
         STO T
    DEFINE END
    When these operators are used in a MAD statement, the translation takes advantage of the availability of the arguments. For example,
     M = .KR1.P .CONCAT. .KR1.Q
    translates into:
     CAL Q
     ARS 30
     STO (TMP)
     CAL P
     ARS 30
     ALS 6
     ADD (TMP)
     STO M
    This extension of the language costs essentially nothing in translation time since its implementation is a byproduct of the method of translation used in MAD. This translation scheme is described elsewhere.
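    To make the effect of the two defined operators concrete, the following is a minimal, hypothetical sketch (in Python, not part of MAD or of Graham's paper) of what the translated 704 code above computes on a 36-bit word holding six 6-bit BCD characters; the sample word values are invented.

        WORD_BITS = 36
        CHAR_BITS = 6

        def kr1(a):
            # .KR1. : extract the left-most BCD character of A
            # (the ARS 30 in the definition shifts the top 6 bits down).
            return (a >> (WORD_BITS - CHAR_BITS)) & 0o77

        def concat(b, c):
            # .CONCAT. : concatenate two 6-bit characters B and C
            # (the ALS 6 followed by ADD in the definition).
            return ((b & 0o77) << CHAR_BITS) | (c & 0o77)

        # M = .KR1.P .CONCAT. .KR1.Q  -- the expression translated above
        p, q = 0o251112131415, 0o311112131415   # two sample 36-bit words
        m = concat(kr1(p), kr1(q))
        assert m == 0o2531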

          in SESSION: Automatic programming: compilers
  • Green, Julien "Possible modification to the international algebraic language" pp6-8
          in [ACM] CACM 2(02) February 1959
  • Katz, Charles "The international algebraic language and the future of programming" Extract: Introduction
    In the past few years the computer world has undergone a tremendous change, particularly in the area of programmed computer aids. Just five or six years ago the atmosphere was completely hostile to such things as compilers and synthetic languages. Today there is not only general acceptance of these devices but a definite demand for them. A prospective computer customer will not consider any machine that is not equipped with an assembly system, a full library of subroutines, diagnostic routines, sort and merge generators, an algebraic compiler, an English-language business compiler, etc.

    The first automatic coding systems that were developed were the result of many man-years of experimentation and determination. They were written for machines that were rapidly approaching obsolescence. They had the many inefficiencies that are typical of prototype systems. In spite of these drawbacks, they met with tremendous success. As with any successful product, the band wagon got under way. Users demanded that compilers of all shapes, sizes, and descriptions be made available for all machines. Manufacturers built large staffs of inexperienced people to meet these demands. They produced "quick and dirty" systems that took three or four months to complete and three or four years to alter. Translators were indiscriminately built on top of existing assembly systems, and then other translators were built on top of these. The inefficiency of the products rose exponentially with the number of translators. Individuals at universities and installations throughout the country decided that they could build bigger and better compilers in a lot less time. As a result, the market was flooded with many similar systems for the same machines. Each of these systems had its own excellent features, and each had its inefficiencies. The source languages differed slightly from system to system, and no one seemed to take advantage of the experience of others in writing compilers. Each group was going to produce the ultimate system and propose its own "common natural language."

    Early in 1957 the GAMM organization of Germany and Switzerland formed a committee to consider the possibility of devising a common algebraic language for the various computers that were ordered by several universities. Their primary interests were to devise a language for the teaching of numerical methods at the universities, and one that could be used to publish algorithms in a standard and precise notation. They reviewed the systems that had been produced in the United States and found them to be very similar to one another. Rather than add another variation to the saturated atmosphere, they dispatched an emissary to the United States to try to start a movement toward a single internationally accepted  algebraic language. Professor Frederick Bauer of the University of Mainz was successful in this mission. A committee of about twenty people, representing computer manufacturers, universities, and industrial users, was formed under ACM sponsorship.

    This committee held several meetings and surprisingly enough came through the maze of conflicting opinions with a proposed language structure. Concurrently, the GAMM committee produced a similar proposal. Four representatives from each committee met in Zurich, Switzerland, in June, 1958, with their two formal proposals and their eight individual opinions. The vote on most controversial points was four to four. But somehow or other each difficulty was overcome, and ALGOL, a basically sound and reasonable language, was produced. It had elements that were obviously the result of compromise, but on the whole it had a more consistent structure and a more powerful and comprehensive potential than any of the languages that had preceded it.

    It was received in the computer community with varied reaction. Some felt that it was just impossible for eight people to do anything worthwhile in just one week; others declared that this accomplishment would settle all the problems of the computer world; still others made constructive and useful suggestions as to how the language could be improved. Users' organizations immediately formed committees to study the language and to make recommendations as to how it could be implemented on the machines with which they were concerned. Several manufacturers and several universities actually have started implementing ALGOL.

    These people have been the chief source of the suggestions for improvement. A meeting of the full ACM committee is scheduled to be held in Washington, D.C., during the first week of November, 1959, to consider the various proposals that have been made; to incorporate the best of these into the language; and to set up the mechanism to publish an up-to-date description of ALGOL.

    In all the excitement and discussion over the improvement of the language of ALGOL, there seems to have developed a tendency to overlook the reasons for its very existence. Let us therefore consider the advantages of ALGOL. To the user of many different machines, or to the installation that is replacing its present machine with a new one, the universality of programs coded in ALGOL presents an obvious advantage. For the communication of algorithms and effective computing techniques, ALGOL is a powerful tool. A program that is written in ALGOL can be published directly, without revision. The published program can then be used by any interested parties without expensive conversion or recoding.

    The language of formal mathematics has long been established and universally accepted. The relatively new field of numerical methods, however, has no commonly accepted method of notation. It is entirely conceivable that ALGOL will become the basis of the symbology for this field, not only as the vehicle for written communication of algorithms, but also as the language of instruction. Various universities in Germany have already begun to use ALGOL in the numerical methods classrooms.

    Extract: ALGOL and CODASYL
    Partly as a result of the success of ALGOL, a movement has recently begun to define a common business language. The Committee for Data Systems Languages (CODASYL) was organized by the Defense Department in May of this year. Various working subcommittees were formed, and a proposal is forthcoming by the end of the year.
    Extract: Conclusion
    Let us now consider the impact that these movements toward standardization and the development of user-oriented languages might have on the computer world. Or, more precisely, let us consider the trends that seem to exist in the computer world that have manifested themselves in these movements and are being perpetuated by them.

    The community of computer users has changed considerably in the past few years. During the Dark Ages of computing, the only way of communicating with a machine was through a highly intricate, sophisticated, and delicate piece of equipment known as a programmer. This programmer was highly specialized and intimately familiar with the computer to which it was attached. All problems for the computer had to pass through the programmer, and, although it sometimes produced works of art, it worked entirely too slowly and was much too expensive to operate.

    The climate today is not favorable to the widespread operation of such a specialized piece of equipment, and there are relatively few of the old model still being produced.

    The number of computers in use is increasing much more rapidly than the supply of new programmers. Many of the better members of the old school have deserted the art to enter the field of management. Personnel at installations that have more than one kind of computer, or at installations that change computers every two years, do not have the opportunity to gain the detailed knowledge of the machine that is essential to warrant the name "programmer."

    I do not wish to imply that there are no good programmers today. There are a great many excellent ones. However, there is an ever growing number of "nonprogrammer" users of computers. These are people who are experts in their own fields, who know the problems or systems for which they want results, and who want to use a computer, as a tool, to get these results in the quickest and most economical way possible. This is one of the major factors that have created the need for automatic coding systems et al. The use of synthetic languages has, in turn, pulled the users farther away from the details of the machine, thus creating the need for more sophisticated automatic systems: systems that make more decisions for the user; systems for automatic programming rather than just automatic coding; complete systems that carry the use of the source language through the entire effort, even to the debugging of the program.

    Another important effect that should result from the use of common languages is a lessening of duplication of effort. With the availability of scientific algorithms and of effective solutions to various business problems in convenient, understandable, and usable languages, it should no longer be necessary for every installation to resolve the same problems. This should permit a more rapid advancement, not only in the programming area, but in many of the fields that make use of computers.

    New computers are becoming more complex and hence more difficult to use. Every known piece of peripheral equipment is being "hung" on the main frame. Each of these devices is buffered and operative simultaneously with the processor. Parallel processors will soon be available, so that branches of a program, or different programs, can be executed concurrently. Recently there has been a great deal of emphasis on simultaneous execution of unrelated programs. This has created such programming problems as priority assignments and protection of instructions.

    To take advantage of these efficient concepts in programming a problem is nearly impossible. There is only one device that I know of that can make use of these complex features with any degree of efficiency, and that of course is a computer. One of the major objections that professional programmers have raised to the use of compilers in the past has been that programs produced automatically were not so efficient as those tailored by hand. This, of course, is dependent upon the tailor and, with the continuing development of better techniques, is becoming less true. As the machines become more difficult to program, it is my feeling that no individual will, within a reasonable amount of time, be able to match the efficiency of programs that are produced automatically. The time has certainly come to automate the programming field: to automate the production of programs, runs, systems of runs, and the production of automatic programming systems.

    I firmly believe that, in the near future, computers will be released to the customer with automatic programming systems and synthetic languages. The basic machine code will not be made available. All training and programming will take place at the source-language level. The desire to go below this language level will pass, just as our desire to get at the basic components of "wired-in" instructions has passed. Machines will be built in a truly modular fashion, ranging from a simple inexpensive model, expandable to a degree of complexity that we cannot now imagine. Automatic programming systems will also be built modularly, thus allowing the use of the same source language on all models. Every hardware module will be available also as a program module. The smallest model of the machine will have all the optional features available as program packages. As hardware modules are added to the machine, in order to increase speed, capacity, etc., the corresponding package in the programming system will be removed. Thus any programs written for a small model can be run without modification on a larger model, and growth of an installation is possible with a minimum of effort and expense.

    One word of caution is necessary. We cannot expect these transitions to take place overnight. The present frantic activity in the common-language area has its frivolous side. There seems to be a prevalent attitude that, as soon as these languages have been defined and implemented, all computing problems will be solved. This is, of course, absurd. The time when we may just state our problem to a computer and expect it to decide on a method of solution and deliver the results has not yet come.
          in Proceedings of the 1959 Computer Applications Symposium, Armour Research Foundation, Illinois Institute of Technology, Chicago, Ill., Oct. 29, 1959
  • Rosen, Saul "Programming Systems and Languages: a historical Survey" (reprinted in Rosen, Saul (ed) Programming Systems & Languages. McGraw Hill, New York, 1967) Extract: IAL, Algol 58
    Until quite recently, large scale computers have been mainly an American phenomenon. Smaller computers were almost worldwide right from the beginning. An active computer organization GAMM had been set up in Europe, and in 1957 a number of members of this organization were actively interested in the design of Algebraic compilers for a number of machines. They decided to try to reach agreement on a common language for various machines, and made considerable progress toward the design of such a language. There are many obvious advantages to having generally accepted computer independent problem oriented languages. It was clear that a really international effort in this direction could only be achieved with United States participation. The President of GAMM wrote a letter to John Carr who was then President of the ACM, suggesting that representatives of ACM and of GAMM meet together for the purpose of specifying an international language for the description of computing procedures.
    The ACM up to that time had served as a forum for the presentation of ideas in all aspects of the computer field. It had never engaged in actual design of languages or systems.
    In response to the letter from GAMM, Dr. Carr appointed Dr. Perlis as chairman of a committee on programming languages. The committee set out to specify an Algebraic compiler language that would represent the American proposal at a meeting with representatives of GAMM at which an attempt would be made to reach agreement on an internationally accepted language. The ACM committee consisted of representatives of the major computer manufacturers, and representatives of several Universities and research agencies that had done work in the compiler field. Probably the most active member of the committee was John Backus of IBM. He was probably the only member of the committee whose position permitted him to spend full time on the language design project, and a good part of the "American Proposal" was based on his work.
    The ACM committee had a number of meetings without any very great sense of urgency. Subcommittees worked on various parts of the language and reported back to the full committee, and in general there was little argument or disagreement. There is after all very general agreement about the really basic elements of an Algebraic language. Much of the language is determined by the desirability of remaining as close as possible to Mathematical notation. This is tempered by experience in the use of computers and in the design of compilers which indicates some compromises between the demands of desirable notation and those of practical implementation.
    At one meeting of the committee Dr. Bauer, one of the leaders of the GAMM effort, presented a report on the proposed European language. Among other things they proposed that English language key words, like begin, end, for, do, be used as a world-wide standard. Of course this is something the American committee would never have proposed, but it seemed quite reasonable to go along with the Europeans in this matter. Although some of the notations seemed strange, there were very few basic disagreements between what GAMM was proposing, and what the ACM committee was developing. Dr. Bauer remarked that the GAMM organization felt somewhat like the Russians who were meeting with constant rebuffs in an effort to set up a summit meeting. With such wide areas of agreement why couldn't the ACM-GAMM meeting take place?
    Although there is quite general agreement about the basic elements of an Algebraic language, there is quite considerable disagreement about how far such a language should go, and about how some of the more advanced and more difficult concepts should be specified in the language. Manipulation of strings of symbols, direct handling of vectors, matrices, and multiple precision quantities, ways to specify segmentation of problems, and the allocation and sharing of storage; these were some of the topics which could lead to long and lively discussion. The ACM language committee decided that it was unreasonable to expect to reach an agreement on an international language embodying features of this kind at that time. It was decided to set up two subcommittees. One would deal with the specification of a language which included those features on which it was reasonable to expect a wide range of agreement.
    The other was to work toward the future, toward the specification of a language that would really represent the most advanced thinking in the computer field.
    The short-range committee was to set up a meeting in Europe with representatives of GAMM. Volunteers for work on this committee would have to arrange for the trip to Europe and back, and were therefore limited to those who worked for an organization that would be willing to sponsor such a trip. The ACM was asked to underwrite the trip for Dr. Perlis.
    The meeting of the ACM and GAMM subcommittees was held in Zurich in the spring of 1958, and the result was a Preliminary report on an International Algebraic Language, which has since become popularly known as Algol 58.
    Extract: Algol vs Fortran
    With the use of Fortran already well established in 1958, one may wonder why the American committee did not recommend that the international language be an extension of, or at least in some sense compatible with Fortran. There were a number of reasons. The most obvious has to do with the nature and the limitations of the Fortran language itself. A few features of the Fortran language are clumsy because of the very limited experience with compiler languages that existed when Fortran was designed. Most of Fortran's most serious limitations occur because Fortran was not designed to provide a completely computer independent language; it was designed as a compiler language for the 704. The handling of a number of statement types, in particular the Do and If statements, reflects the hardware constraints of the 704, and the design philosophy which kept these statements simple and therefore restricted in order to simplify optimization of object coding.
    Another and perhaps more important reason for the fact that the ACM committee almost ignored the existence of Fortran has to do with the predominant position of IBM in the large scale computer field in 1957-1958 when the Algol development started. Much more so than now there were no serious competitors. In the data processing field the Univac II was much too late to give any serious competition to the IBM 705. RCA's Bizmac never really had a chance, and Honeywell's Datamatic 1000, with its 3 inch wide tapes, had only very few specialized customers. In the Scientific field there were those who felt that the Univac 1103/1103a/1105 series was as good or better than the IBM 701 / 704 /709. Univac's record of late delivery and poor service and support seemed calculated to discourage sales to the extent that the 704 had the field almost completely to itself. The first Algebraic compiler produced by the manufacturer for the Univac Scientific computer, the 1103a, was Unicode, a compiler with many interesting features that was not completed until after 1960, for computers that were already obsolete. There were no other large scale scientific computers. There was a feeling on the part of a number of persons highly placed in the ACM that Fortran represented part of the IBM empire, and that any enhancement of the status of Fortran by accepting it as the basis of an international standard would also enhance IBM's monopoly in the large scale scientific computer field.
    The year 1958 in which the first Algol report was published, also marked the emergence of large scale high speed transistorized computers, competitive in price and superior in performance to the vacuum tube computers in general use. At the time I was in charge of Programming systems for the new model 2000 computers that Philco was preparing to market. An Algebraic compiler was an absolute necessity, and there was never really any serious doubt that the language had to be Fortran. The very first sales contracts for the 2000 specified that the computer had to be equipped with a compiler that would accept 704 Fortran source decks essentially without change. Other manufacturers, Honeywell, Control Data, Bendix, faced with the same problems, came to the same conclusion. Without any formal recognition, in spite of the attitude of the professional committees, Fortran became the standard scientific computing language. Incidentally, the emergence of Fortran as a standard helped rather than hindered the development of a competitive situation in the scientific computer field.
    Extract: The Algol 58-alike languages
    To go on with the Algol development, the years 1958-1959 were years in which many new computers were introduced, The time was ripe for experimentation in new languages. As mentioned earlier there are many elements in common in all Algebraic languages, and everyone who introduced a new language in those years called it Algol, or a dialect of Algol. The initial result of this first attempt at the standardization of Algebraic languages was the proliferation of such languages in great variety.
    A very bright young programmer at Burroughs had some ideas about writing a very fast one pass compiler for Burroughs new 220 computer. The compiler has come to be known as Balgol.
    A compiler called ALGO was written for the Bendix G15 computer. At Systems Development Corporation, programming systems had to be developed for a large command and control system based on the IBM military computer (ANFSQ32). The resulting Algebraic language with fairly elaborate data description facilities was JOVIAL (Jules Schwartz' own Version of the International Algebraic Language). By now compilers for JOVIAL have been written for the IBM 7090, the Control Data 1604, the Philco 2000, the Burroughs D825, and for several versions of IBM military computers.
    The Naval Electronics Laboratory at San Diego was getting a new Sperry Rand Computer, the Countess. With a variety of other computers installed and expected they stressed the description of a compiler in its own language to make it easy, among other things, to produce a compiler on one computer using a compiler on another. They also stressed very fast compiling times, at the expense of object code running times, if necessary. The language was called Neliac, a dialect of Algol. Compilers for Neliac are available on at least as great a variety of computers as for JOVIAL.
    The University of Michigan developed a compiler for a language called Mad, the Michigan Algorithmic Decoder. They were quite unhappy at the slow compiling times of Fortran, especially in connection with short problems typical of student use of a computer at a University. Mad was originally programmed for the 704 and has been adapted for the 7090. It too was based on the 1958 version of Algol.
    All of these languages derived from Algol 58 are well established, in spite of the fact that the ACM GAMM committee continued its work and issued its now well known report defining Algol 60.
          in [AFIPS JCC 25] Proceedings of the 1964 Spring Joint Computer Conference SJCC 1964
  • Bemer, RW "A politico-Social History of Algol" pp 151-238
          in [AFIPS JCC 25] Proceedings of the 1964 Spring Joint Computer Conference SJCC 1964
  • Computer Oral History Collection, 1969-1973, 1977 Interviewee: Morton Bernstein Interviewer: Robina Mapstone Date: March 14, 1973 Archives Center, National Museum of American History Extract: IAL, Algol 58, Algol 60
    There was a tremendous number of ideas floating around at the time. Around that time, Al Perlis had come through RAND and talked about how to really do all this. He stimulated a lot of people to at least think about it if not do something about it. I got deeper and deeper into the whole mystique of language design and compiler construction, and began to follow the first international effort, IAL, rather closely. RAND, SDC and some other organizations decided that the International Algebraic Language, which was eventually called ALGOL 58, was something of interest. Even though it had some flaws and ambiguities in it, at least it looked like a reasonable improvement over FORTRAN. From it came JOVIAL at SDC.
    [...]
    ALGOL 58 came along and excited a lot of people, but neither ALGOL 58 nor ALGOL 60 was really accepted in this country, because by that time FORTRAN was really rolling down the road. SHARE had promoted, or helped promote FORTRAN II rather heavily and then along came FORTRAN III which in retrospect was a rather idiotic attempt.
    [...]
    So FORTRAN III was born and it allowed you to do much of what JOVIAL did, dropping into assembly language by adding a statement that allowed you to pop in and out of assembly from FORTRAN, which everybody knew was a disaster.
    [...]
    I have the strong recollection that FORTRAN III actually saw the light of day, but it died. [13] It just didn't have it; there were people who were much closer to FORTRAN, because I had begun to move away from FORTRAN per se and pursue other language oriented things. I was looking at the SDC's JOVIAL. I was trying to build some small compilers for open shop languages at RAND at the time and then I got involved in the SHARE ALGOL effort in '58-'59.
    [...]
         Well, there was a letter written by, I believe it was Rutishauser, to possibly Al Perlis through the ACM[15], which said the world was going off in a thousand different directions with this language stuff. Maybe we ought to get together, the Europeans and the Americans, and design one internationally based language that we would all agree upon, and maybe we will solve some of the world's nastier problems, particularly in program communications, if we have a common source language. We may end up getting real power in the world of commonality, usability, and things like that.
    [...]
    I have to back up. I just remembered something. There was another effort going on in SHARE called UNCOL: Universal Computer Oriented Language, which I got my fingers into. As a result of that, the ACM appointed a group, and I don't know whether Perlis was the chairman of the group, but I believe he was. I know John Backus was a member of the group. And I am trying to remember who the others were. Joe Wegstein may well have been a member. I know he was on ALGOL 60. I don't know whether he was on '58. Anyway, the group met in Zurich and sat down and designed a language. What they came back with was ALGOL '58. And they had done it very, very swiftly. Everyone came with their own ideas; there was an awful lot of argument and compromise. But it wasn't a terribly bad language. It had some ambiguities and some holes, e.g., they completely left the I/O out, so that was up to the implementer to decide how he had to do it on his local computer. It may not have been a full quantum improvement over FORTRAN, but it was an improvement. It incorporated an awful lot of things which you couldn't express or only expressed with great difficulty in FORTRAN. In IAL or ALGOL it was reasonably straight forward.
    [...]
         There were a lot of us, including me, who sat around and looked at ALGOL and realized that it had a lot of faults. There are notational things that could be cleaned up. There are things that they forgot to put in which could be easily added. There are ambiguities that really had to be straightened out. It didn't have any I/O. And it is not all that easy to get common implementation because there are these vagaries of interpretation about what certain things meant. And I can't remember whether ALGOL 58 was a block structured language or not, but there were some misunderstandings about the scope of variables and all the other things that you run into when you start putting together rather complex language structures.

    There were some early and some aborted attempts to implement ALGOL 58, which is when I got really involved with ALGOL through SHARE. We decided that ALGOL was a good thing and that SHARE was going to implement ALGOL 58, or at least convince IBM to help us. We would help specify how it was to be done, and IBM would do the implementation. Bob Bemer, Julian Green and several other people at IBM were building something called XTRAN, which was a tool that would help them implement almost any language, but in particular ALGOL. Things got to the point where it was clearly understood that ALGOL had to be cleaned up. And so the [original] group [that produced IAL] was to reconvene in late '59 or early '60. Their report came out in early '60, which is why it was called ALGOL 60, although they actually met in '59. Perlis was the chairman of the group that went to Paris to clean up IAL. That is when the name ALGOL was invented. [16] The earlier effort was designated ALGOL 58 because the new one was designated '60. I know Joe Wegstein was there. I know Julian Green was there. I know John Backus was there because that is when Backus Normal Form (before it became Backus Naur Form) was invented. I don't know whether it is apocryphal or what but the story goes that John finally figured out [how to represent the syntax of a programming language in a formal notation] on the plane over to Paris. I don't quite believe that, knowing John. He works very long and hard and maybe he had a flash of inspiration [on the plane] but the [representation problem] had been brewing as a result of '58. And the difficulties they had trying to get a syntax description pushed him into working that problem very hard. There were thirteen people [at the Paris meeting] in all. Seven from Europe and six from the United States, but I can't remember any of the rest of the six. The seventh U.S. representative was killed in an automobile accident just before he was supposed to go. I can't remember who that was [William Turanski].
    [...]
    They [the U.S. committee] came back and Julian Green showed up at the SHARE meeting in March following the ALGOL 60 meeting, and he quietly presented me [as Chairman of the SHARE ALGOL Committee] with a draft copy of the report written in Backus NORMAL FORM. I looked at the thing and had a hemorrhage, because we had all been working very hard on the supposition that they were going to go away and fix ALGOL 58 and make it useful and wonderful and clean and beautiful and unambiguous. What they had done was gone away and thrown ALGOL 58 out the window and started clean again. They came up with a whole new completely incompatible language. People who had implemented ALGOL 58 in any shape or form really had thrown their money away because that was no longer going to be the international language, ALGOL 60 was. The compatibility was just not there. I got very, very upset. I wrote Dr. Perlis, who was then at Carnegie Tech, a rather harsh and nasty letter about dereliction of duty and a few odd things like that. I'm sure there is a copy of it somewhere in RAND. I couldn't get it [mailed] out of the building the first time I tried. There was a lady who controlled all outgoing mail from the RAND Corporation. Her primary job was to see that no classified information was sent out without proper protection and was going to the recipient who knew how to handle it. She was also the guardian of the corporate morals, ethics and language, and my letter was rather harsh. I had gone to Paul Armer and said, "I am going to send this very nasty letter. Be warned." Apparently, he saw it and said, "If that is what you want to say, okay, that is what you want to say." The next morning when I came in the letter hadn't gone out. We had a minor battle inside RAND convincing them I had every right to say what I was saying, both as a member of the RAND Corporation staff and representing my chairmanship of the SHARE ALGOL Committee. Finally, it got out the door. And I will never forget Perlis' answer scribbled on the bottom of my original
    letter that said in essence, "Go away, you bother me or you don't bother me." But I think that was the death knell for ALGOL 60 and for the concept of ALGOL in this country.

    MAPSTONE:
         Maybe the death knell of any kind of universal language.

    BERNSTEIN:
         The basic problem was you can't get thirteen guys together and in six days invent the world. And it wasn't the right flavor of people. The mix was all wrong. I have yet to be able to plow through the wondrous and glorious [latest] ALGOL report. But it is probably a major improvement. It is far more consistent, has far fewer ambiguities, and may well be one of the most powerful and delightful programming languages in the world. If you could only understand the description. It is a terribly, terribly complex thing and you must spend many hours studying it in order to get any appreciation out of it. But it was done by a fairly small group over a long period of time who really honed and worked the problem.

    Julian Green, Joe Wegstein and I had a rather uproarious session which I didn't find very uproarious. I got rather upset when I heard the stories of how ALGOL 60 was created. It was like putting a piece of legislation through Congress rather than a bunch of scientists getting together and saying, "What's best? What's cleanest? What's purest? What really is scientifically or technologically the best possible thing, the apotheosis of our skill and our knowledge." Instead, what they did was barter as in, "If you will let me put this in, I will let you put that in. No, no. I am not going to vote for your thing if you are going to argue with me about that thing." Joe Wegstein was sitting there telling me that when things got out of order he would slam his hand down on the table and bring everybody to silence by shaking them up. It doesn't sound to me the way you design a language. You go away and then think about it for a while. Think through the problems: What is the objective of the whole thing? Who are we trying to get to use it? How would you like them to be able to use it? Are there any implementation aspects that you might take into account? Can this thing be implemented at all on the existing computer? Must it wait for the next generation? Things like that. That wasn't the kind of considerations that they made. Now they may have had all those considerations in the back of their mind. And maybe I am being over-critical, but I believe they blew it.

    They just destroyed whatever first, good step they had taken by not going after ALGOL 58, as impotent as it may have been in their two-year-further-down-the-pike view of the world. Who cared if it didn't have "own" variables. Nobody really understood what they were all about anyway. They probably represent one tenth of one percent of the utility of a true language. The decision that all procedures had to be recursive was an interesting concept, but not something you lay on the world before they are thoroughly and completely educated in the value and use and the need, and where you may or may not want to get into things like that. And they just came out with all this beautiful, gorgeous, not terribly pragmatic, inconsistent language. And then they did other wondrous and glorious things. They said, we will have the publication language and we leave to the implementer the hardware representation. And that is where we had great fun in SHARE. I must have spent sixty or seventy hours sitting in on arguments, chairing a committee, trying to satisfy all the diverse points of view about a SHARE hardware representation for ALGOL 60. Up to this time SHARE had a resolution which blessed the whole thing. And it was subsequent to that that I became completely disenchanted with the whole thing. The deeper I got into it, the angrier I got, and finally introduced a resolution that SHARE withdraw its original resolution and castigate the idiots who had done that, rather soundly. It didn't get that strongly worded, but we withdrew our original commendation at ACM and the [other] parties involved were going off on various crusades to create an ALGOL international language.

    MAPSTONE:
         What was the SHARE resolution?

    BERNSTEIN:
         I think it ended up just withdrawing SHARE's original resolution. It said: We no longer bless it, we don't care. SHARE no longer has any interest in ALGOL, particularly ALGOL 60.

    MAPSTONE:
         But the language did go on, and is still extant today?

    BERNSTEIN:
         ALGOL 60: yes, people still talk about it. It may be used in a few universities.

    MAPSTONE:
         Is there not a language called ALGOL which is in use today?

    BERNSTEIN:
         Yes. In fact, there are still some ALGOL 60 compilers around. How much of ALGOL 60 they really cover is not clear to me. SHARE actually did eventually, after I gave up, produce [a compiler]. This was a SHARE effort. We decided that we didn't have much of a chance to convince IBM [to implement ALGOL]. IBM was completely antithetical to ALGOL by this time. They had gotten turned off and put most of their interest in FORTRAN. We had essentially been told that IBM was not going to implement an ALGOL processor as an IBM supported language. They said, "If you guys want to do something, build your own compiler with blessings. We'll even help you." Julian Green and Rex Franciotti were two guys that I interfaced with from IBM, who were trying to build certain pieces of the ALGOL compiler for us. I had given up on trying to keep a cooperative effort going. I had worked rather hard building one piece with a guy at Lockheed in Sunnyvale. We would get together occasionally and check out code to get this and that going. And then somebody would turn up and hadn't done their job, so by that time I got sick and tired of the whole thing and said, "To hell with you guys." I thought that the whole business of trying to get a cooperative ALGOL compiler produced was an abortion. But some people stuck by it, particularly people at Oak Ridge and I can't remember where the other guy was from. I guess he was at Rocketdyne and had a certain personal dedication to the whole thing. Marjorie Lisky and he eventually put together an operable ALGOL compiler which she offered to distribute through SHARE. I don't know how many people took her up on that and I don't know if it ever really worked. It took umpteen thousand passes and probably worked reasonably well. It created at least one social upset. As a result of Julian Green and Marjorie Lisky working so closely together on ALGOL, both divorced their spouses and married each other.

    MAPSTONE:
         Well, that means they worked well together.


  • Backus, John "Programming in America in the nineteen fifties - some personal impressions" pp125-135
          in Metropolis, N. et al., (eds.), A History of Computing in the Twentieth Century (Proceedings of the International Conference on the History of Computing, June 10-15, 1976) Academic Press, New York, 1980
  • Bauer, Friedrich L. "A computer pioneer's talk: pioneering work in software during the 50s in Central Europe" Extract: Introduction
    Introduction
    In the late 40s and early 50s, there were a few groups in the USA, in England, in Continental Europe and other countries that started to construct computers. To be precise they constructed the computer hardware. The computing machine in the modern sense had to replace desk calculators which were slow, had limited capabilities and lacked automatic performance of complex tasks.
    In 1936, Alan Turing described the functioning of a computer in a theoretical way with the aim to create a sound basis for a definition of the concept of computable numbers. Turing's computer of 1936 would have been terribly slow if it had been constructed. However, Turing did not construct it at that time. Then Konrad Zuse, John Presper Eckert and John W. Mauchly and a few others constructed computer hardware with relays or even with vacuum tubes and they had to master immense difficulties of engineering nature. Their products were often clumsy since the main problem of the hardware designers was to make their machines function in the first place. Furthermore, they had to make compromises with respect to available technologies. Zuse's first outlines around 1934 ("Vom Formular zur Programmsteuerung") showed for example a machine that used non-erasable storage and thus was much closer to functional programming than his later designs. However, he had to give up this approach and came to the solution of using erasable storage with appropriate writing and reading mechanisms.
    Extract: How did software arise?
    Heinz Zemanek has put it this way: "Software started with the translation of algebraic formulas into machine code." Thus, the Plankalkül of Konrad Zuse in 1945, the Formelübersetzung of Heinz Rutishauser and of Corrado Böhm in 1951, the Algebraic interpreter of Alick E. Glennie in 1953, the Programming Program of E. Z. Ljubimskii and Sergej Sergeevich Kamynin in 1954 and of Andrei Ershov in 1955 stood at the beginning of software, soon followed by Remington-Rand's Math-Matic (1955) and IBM's Fortran (1956). All these advances were made before the word 'software' came into wider use in 1960, 1961 or 1962.

    Now I will describe how we started to construct software in Munich and Zurich in the 50s, we being Klaus Samelson (1918-1980), Heinz Schecher (1922-1984) and myself in Munich and Heinz Rutishauser (1919-1970) in Zurich.
    Extract: Early Work in Munich and Zurich
    Early Work in Munich and Zurich
    In Munich, we were part of a group, under the supervision of the engineer Hans Piloty and the mathematician Robert Sauer, that constructed a computer. Our specific task was to see to it that the computer under construction could perform the calculations it was intended for. Our group started in 1952, well-informed about the von Neumann report. Our first challenge was the EDSAC book of Wilkes-Wheeler-Gill, published in 1951. We learned from this book that we had to develop tools to make the programming job easier and more reliable. Only as a first step we decided to consider the subroutine technique advocated by Maurice V. Wilkes. We also were aware of Rutishauser's publication 'Über automatische Rechenplanfertigung bei programmgesteuerten Rechenanlagen' in ZAMM 1951, where the idea to use the computer in order to write its own program was exemplified. For this reason first personal contact to Rutishauser was established in November 1952. Apart from our daily work of fabricating subroutines for Sauer and checking the engineers' design, we started modestly to think about such simple tools as a supervisory program, while Rutishauser wrote a program for his relay computer Z4 performing the automatic generation of a linear sequence of machine instructions from a loop.
    In accordance with Rutishauser, we were convinced that 'automatic programming' was the direction we had to follow. Rutishauser knew that he could not use the Z4 for the realization of his 1951 ideas. Piloty's PERM in Munich was slightly ahead in time compared to Speiser's ERMETH in Zurich, so we aimed at constructing a practicable tool for 'automatic programming' as soon as our Munich machine was ready; in the meantime we followed closely the development in 'automatic programming' in the USA and in other countries, but we were not satisfied with the idea of constructing a 'compiler' that simply piled up prefabricated subroutines. Under the influence of Wilhelm Britzelmayr, our logic professor in Munich, we became more linguistically-oriented (when FORTRAN appeared in 1956, we were dissatisfied).
    Nevertheless, we now knew that we had to look for a better solution. We had several ideas how to achieve this aim. The PERM machine was ready for tests in 1955 (it was completely usable in 1956), so we could actually proceed.
    Extract: The Kellerprinzip
    The Kellerprinzip
    In 1955, when STANISLAUS was under construction, Samelson and I discussed how we would proceed with translating arithmetical formulas into machine code, and suddenly we came to the conclusion that besides the 'Zahlkeller' that was used in Polish notation to postpone the intermediate results, we had to use an 'Operationskeller' in a similar way for postponing operations when dealing with parentheses or implicit precedence.
    Soon, we found out that the functional principle we had discovered (we called it Kellerprinzip), "postpone operations that you cannot yet evaluate and evaluate them as soon as possible", was not only suitable for the evaluation or translation of parenthesized formulas, but for any constructs that show a parenthesized, interlocked structure.
    Samelson in particular used the Kellerprinzip to make storage allocation efficient, and when we designed step by step the programming language that we wanted to use for 'automatic programming,' we structured the language accordingly. It became known as a Chomsky type-2 language.
    Rutishauser in particular showed how to use the Kellerprinzip in parameter transfer, bringing the spirit of the Lambda notation of formal logic into programming.
    Our efforts finally led in 1958 to ALGOL and made ALGOL a well-structured, very safe programming language. As usual, when freed from the restrictions hardware implies, more elegant, more flexible, more versatile solutions can be obtained.
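    The following is a minimal, hypothetical sketch (in Python, not Bauer and Samelson's original formulation) of the Kellerprinzip described above: a number cellar (Zahlkeller) holds postponed operands, an operation cellar (Operationskeller) holds postponed operators, and an operator is applied as soon as an incoming symbol shows it can no longer be postponed. The token format and operator set are assumptions for illustration.

        import operator

        PREC = {'+': 1, '-': 1, '*': 2, '/': 2}
        APPLY = {'+': operator.add, '-': operator.sub,
                 '*': operator.mul, '/': operator.truediv}

        def evaluate(tokens):
            numbers, operations = [], []          # the two "cellars"

            def pop_op():
                op = operations.pop()
                b, a = numbers.pop(), numbers.pop()
                numbers.append(APPLY[op](a, b))

            for tok in tokens:
                if isinstance(tok, (int, float)):
                    numbers.append(tok)           # operands go to the number cellar
                elif tok == '(':
                    operations.append(tok)        # postpone until the matching ')'
                elif tok == ')':
                    while operations[-1] != '(':
                        pop_op()                  # evaluate everything inside the parentheses
                    operations.pop()              # discard the '('
                else:                             # an operator: flush higher or equal precedence first
                    while operations and operations[-1] != '(' and PREC[operations[-1]] >= PREC[tok]:
                        pop_op()
                    operations.append(tok)
            while operations:
                pop_op()
            return numbers[0]

        # (1 + 2) * 3 - 4 / 2  ->  7.0; one pass, effort proportional to formula length
        print(evaluate(['(', 1, '+', 2, ')', '*', 3, '-', 4, '/', 2]))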

          in "History of computing: software issues", Hashagen, Ulf; Keil-Slawik, Reinhard; Norberg, Arthur L. (eds), Proceedings of the international conference on History of computing: software issues, Paderborn, Germany, April 05-07, 2000, Springer 2002
  • Bauer, Friedrich L. "From the Stack Principle to Algol" Extract: Introduction to Zuse's Plankalkül
    Britzlmayr and Angstl
    The idea of what I later called the stack principle came into my mind before I became seriously acquainted with what was dubbed program-controlled calculators at that time. In the academic year 1948-1949 I was sitting in a class of Wilhelm Britzlmayr's, logician and honorary professor at the Ludwig-Maximilians-Universität in Munich (his regular occupation was director of a bank). One day he spoke on the syntactic aspects (but this terminology was not used at that time) of the parentheses-free notation that was advocated by Jan Lukasiewicz [1]. Something stirred my interest, and thus I was not completely lost when on June 27, 1949 in Britzlmayr's seminar a man by the name of Konrad Zuse, while giving a survey of his Plankalkül [4], used the checking of the well-formedness of a propositional formula written in parentheses-free notation as an example for a Plankalkül program (a Rechenplan, as he called it). The discussion between Zuse and Britzlmayr brought to light that it was an algorithm Zuse had learned from his colleague Hans Lohmeyer, a mathematician working, like Zuse, at Henschel-Flugzeug-Werke in Berlin. The algorithm originated in 1943 from the Berlin logician Karl Schröter [3], based on the 1938 work by the Viennese logician Karl Menger [2]. While Zuse wanted to sell his Plankalkül, Britzlmayr was interested only in the algorithm as such, and most of the discussion took place in two different jargons, which was rather confusing.
    Extract: Influence by Shannon
    I did not like [Angstl's] solution with the wooden apparatus, and influenced by the 1949 paper by Claude Shannon [6], The Synthesis of Two-Terminal Switching Circuits,   I started to look for a relay realization for the formula, which was to be typed in directly. At the same time, this allowed a direct evaluation of the propositional formula for true or false instantiations of the variables; the test for well-formedness turned out to be a byproduct of such a formula-programmed relay calculator for parentheses-free propositional formulas.
    Around the turn of the year 1950/51, during a stay in Davos, Switzerland, I made the wiring diagram for the relay calculator; in honor of the Polish school of logic I dubbed it STANISLAUS.
    Extract: sequential formula translation
    The Stack Principle
    In 1955, Samelson and I had quite a different motivation with respect to the STANISLAUS design. In February 1952, I had visited Heinz Rutishauser in Zurich and had seen the Z4 working; since the fall of 1953, I had had close contact with him, mainly in questions of numerical mathematics, which was my main duty under Sauer and also the field where I hoped to obtain my habilitation. Naturally, I did not neglect to take notice of Rutishauser's germinative publication [9] [10] of 1951, Uber automatische Rechenplanfertigung bei programmgesteuerten Rechenmaschinen, dealing for the first time with the translation of a mathematical formula language into machine programs by a Superplan in Rutishauser's terminology, a "programming program", as Andrei Ershov called it later. Samelson and I both had in mind to realize this Formelubersetzung for the PERM. When in mid-1955 we had time to start it and could expect a speedy finish to the construction of the PERM, it soon came to our attention that STANISLAUS solved a similar but simplified task. Its "cellar" stored intermediate yes-no values; in the case of arithmetic formulas this would be a "number cellar".
    [...]
    The translation algorithm turns out to be superior to Rutishauser's method [9] inasmuch as it avoids the Rutishauser Springprozession; the effort is only proportional to the length of the formula and not, as with Rutishauser, to the square of the length. In Rutishauser's terminology it amounts to decomposing the parenthesis mountain starting from the first pair of peaks in the first chain of peaks; hence it was sequential. Correspondingly, in the publication the method was characterized as "sequential formula translation".
    Extract: Recursion and the cellar principle
    Hardware Stacks
    We gave a report on our results to Sauer and Piloty. Piloty remarked that the German Research Council (Deutsche Forschungsgemeinschaft) had a tendency to make sure that patents were obtained for the projects it supported; he urged us to examine whether this would be possible in our case. We agreed, and he offered the prospect of providing the successor machine of the PERM with a number cellar and operation cellar in hardware. This must have been in the summer or fall of 1955. For the patent application we had to disguise our method as hardware, and for this purpose had to become engineers. The elaboration of the patent application therefore brought a lot of work and was fun, too; on the other hand it meant that publication of our results was paralyzed. Samelson therefore reported at the Dresden meeting at the end of November 1955 [13] with great caution. Both Rutishauser and Heinz Billing in Gottingen, who was building the G3 computer, were in on the secret. The German patent application [14, in the following partly reprinted] was finally filed March 30, 1957 (Auslegeschrift 109472019, Kl.42m), the U.S.-American one [15] March 28, 1958, within the priority time limit.
    A Software Patent for the Stack
    While the U.S.-American application contained an abundance of AND and OR gates, the German patent law allowed quite functional descriptions of methods; thus the German application stayed free of drawings for electrical circuits, and it was possible to design from it immediately a program with programmed cellar structures, later called stacks. Our patent can therefore be considered an early case of a software patent.
    The actual writing of a machine program for the PERM, which in the meantime was operating, was delegated in mid-1957 to the assistants Manfred Paul and Peter Graeff; the program was ready for tests after a few months. At first, an interpreting machine program was written; the transition to a generating translator (a compiler) then meant simply inserting into the program the corresponding instruction instead of executing it immediately.
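    The step can be sketched as follows, assuming the two-cellar routine shown earlier; the simple three-address form of the emitted instructions is an illustrative assumption, not the PERM instruction set. Wherever the interpreting version applies a postponed operator, the generating version appends the corresponding instruction to an output program instead.

        # Illustration of the interpreter-to-compiler step: instead of applying a
        # postponed operator at once, emit the corresponding instruction.
        import itertools

        temp_names = (f"t{i}" for i in itertools.count(1))

        def apply_top_interpreting(numbers, operations):
            """Interpreting version: execute the postponed operation immediately."""
            op = operations.pop()
            b, a = numbers.pop(), numbers.pop()
            numbers.append({'+': a + b, '*': a * b}[op])

        def apply_top_generating(numbers, operations, program):
            """Generating version: emit an instruction, push a name for its result."""
            op = operations.pop()
            b, a = numbers.pop(), numbers.pop()
            result = next(temp_names)
            program.append(f"{result} := {a} {op} {b}")
            numbers.append(result)

        # For a formula like a + b * c the generating version leaves behind, e.g.:
        #   t1 := b * c
        #   t2 := a + t1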
    The hardware stacks, the building of which Piloty had suggested, were not realized in Munich, since the PERM II project was not supported by the German Research Council. Billing, however, equipped his G3 computer with the hardware for a number stack.
    […] thus particularly well suited for efficient translation, which was a main concern of the impoverished European side.

    In Zurich, Samelson had particularly stressed the point that the block structure of storage allocation (Cellar Principle of State Transition and Storage Allocation, [30]), which followed so naturally from the cellar principle and became so typical in the later development, should become the dominant organization principle of the programming language. Storage allocation with complete parenthesis structure should be organized in a dynamic stack, which without further complications allowed mastery of recursive definitions. The compromise that was achieved in a struggle with the U.S.-American side did not reflect this issue in the published report; thus the implementation of recursive situations was later reinvented by some people not present at the Zurich meeting.
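    As a toy illustration of why a dynamic stack of activation records masters block structure and recursion without further complications (the frame layout and the example procedure are assumptions for this sketch, not the ALGOL 60 run-time model): each entry to a block or procedure pushes a frame for its local quantities, each exit pops it, and the innermost declaration of a name is the one found.

        # Toy model of stack-organized storage allocation for blocks and procedures.
        class RuntimeStack:
            def __init__(self):
                self.frames = []                     # one frame per active block or call

            def enter(self, **local_vars):
                self.frames.append(dict(local_vars)) # allocate locals on entry

            def leave(self):
                self.frames.pop()                    # storage is released automatically

            def lookup(self, name):
                for frame in reversed(self.frames):  # innermost declaration wins
                    if name in frame:
                        return frame[name]
                raise NameError(name)

        store = RuntimeStack()

        def factorial(n):
            store.enter(n=n)                         # a fresh frame for every call
            k = store.lookup('n')                    # finds this call's own 'n'
            result = 1 if k == 0 else k * factorial(k - 1)
            store.leave()
            return result

        print(factorial(5))    # 120; six frames were live at the deepest point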
    Extract: arrival of Algol
    The Preliminary ALGOL Report
    The goals the ZMD group had aimed at, to create a language closely following mathematical notation, readable without much ado, and suitable both for publications and for mechanical translation, had been largely reached. The publication was completed in 1959 in the first issue of the new journal Numerische Mathematik of Springer-Verlag, under the title Report on the Algorithmic Language ALGOL.
    Extract: Mainz 22 and algol 58
    When in 1958 I moved from Munich to Mainz, with Samelson soon following me, the ZMD group was widened to the ZMMD group. Emphasis was on finishing compilers for ALGOL 58. The common basis was the method of a stack automaton developed in Munich, which was extended without any difficulty to a full algorithmic language including statements, declarations, block structure, indexed variables, and so on. It was published in 1959 in the newly founded journal Elektronische Rechenanlagen [..] and in 1960 in Communications of the ACM [...]. Manfred Paul, who had done most of the preparatory work, finished a compiler for the Mainz Z22 towards the end of 1958. Soon afterwards, H. R. Schwarz and P. Lauchli followed in Zurich for the ERMETH, and G. Seegmuller in Munich for the PERM.
    Extract: ICIP, BNF, stack notation
    ICIP Conference
    A lot of work was caused by the preparations for ALGOL 60. At the International Conference on Information Processing (ICIP), arranged in Paris, June 15-20, 1959 by UNESCO, John Backus [24] made a famous proposal for the formal description of the syntax. The Backus notation (Backus Normal Form), soon generally accepted, allowed one to attach in an elegant way the semantics of the programming language to the syntax of a context-free language. Manfred Paul, in his 1962 dissertation, clarified how the transition matrix for the stack automaton could be derived formally from this description.
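    As a hint of what deriving a table mechanically from the syntax can mean (an illustration only, not Paul's 1962 construction; the grammar fragment and the particular table are assumptions), one can, for instance, compute from BNF-style productions which terminal symbols may begin each syntactic class; tables of this kind are what drive a stack automaton.

        # Illustration only: a small BNF-style grammar stored as data, and a table
        # derived from it mechanically (which terminals may begin each class).
        GRAMMAR = {
            "expr":   [["term"], ["expr", "+", "term"]],
            "term":   [["factor"], ["term", "*", "factor"]],
            "factor": [["id"], ["(", "expr", ")"]],
        }

        def first_terminals(grammar):
            """Iterate to a fixed point: each production contributes its first symbol
            if that symbol is terminal, or that symbol's own set if it is a class."""
            first = {cls: set() for cls in grammar}
            changed = True
            while changed:
                changed = False
                for cls, productions in grammar.items():
                    for production in productions:
                        head = production[0]
                        new = {head} if head not in grammar else first[head]
                        if not new <= first[cls]:
                            first[cls] |= new
                            changed = True
            return first

        print(first_terminals(GRAMMAR))
        # every class here can begin with 'id' or '(' (set order may vary)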
    Extract: British hostility and absence, ZMD excellence
    The Zurich Meeting
    In the summer of 1957, Bottenbruch was initiated into the Munich Sequential Formula Translator method [16], and at the Zurich meeting the ZMD group not only presented a draft [17] for the language, which at first was called International Algebraic Language, but also had a completed compiler design in the bag. Some U.S.-American delegates had experience with working compilers (Backus with FORTRAN, Perlis with IT, Katz with MATH-MATIC). An open discussion of the technical problems of translating programming languages into machine code was left out, as there would not have been enough time. Technically speaking, the state of the art within the ZMD group was far more advanced: FORTRAN used the method of parentheses completion, introduced by P. B. Sheridan [18] and discussed as early as 1952 by Corrado Boehm [11] in his Zurich dissertation, a method that, like Rutishauser's, required an effort proportional to n²; IT [12] used pure comparison of neighboring operators, enforcing an oppressive limitation to operator precedence grammars. This situation led from time to time to a paralysis of the discussion, which was basically oriented towards progress. On the whole, ALGOL, as it was called in the publication [19b], was an incarnation of the cellar principle […]
    Christopher Strachey, who - inadvertently - had not been invited to the Zurich meeting, went to the trouble of criticizing ALGOL 58 and produced not only considerable stir but also a lot of public attention. Thus, it was not too difficult to convince the International Federation for Information Processing, founded in Paris, to organize a conference for the "final ALGOL", later called ALGOL 60. The preparations were this time much more intensive; a European pre-conference was held in November 1959 in Paris; it nominated seven European delegates, who met again in December 1959 in Mainz. The U.S.-American side nominated its delegates in November 1959. This time, representatives from Great Britain, France, the Netherlands, and Denmark, besides representatives from the U.S.A., Germany, and Switzerland, were invited.
    Extract: Paris Conference
    Paris, 1960
    The ALGOL conference took place in Paris, January 11-16, 1960, under the patronage of the newly founded IFIP. It led to consolidations and completions of the Preliminary Report. Characteristically, the introduction to the report [25a, b] says "The present report represents the union of the committee's concepts and the intersection of its agreements". In this way, contradictions could remain here and there, and solutions were omitted. What made me particularly angry was that Samelson, who in 1958 had been unable to prevail against Alan Perlis on the question of the block structure, was given no credit for the block structure in 1960, when acceptance of recursion was no longer an issue; the editor, Peter Naur, who had not been present in Zurich, was not aware of this.
    In the short period of six days we also did not succeed in formalizing the semantics alongside the syntax, which was now formalized in BNF (Backus Normal Form); the semantics was still explained rather verbally, leading later to exegetic quarrels. Heinz Zemanek tried, with the IFIP Technical Committee 2 Working Conference Formal Language Description Language, held in 1964 in Baden near Vienna, to compensate for this lack. Peter Landin [29] gave a complete formal description of ALGOL 60, but it lacked the blessing of the authorities.
    Extract: The ALCOR Group
    The ALCOR Group
    After the ICIP Congress, June 1959 in Paris and particularly after the publication of the ALGOL 60 report, the ZMMD group decided to widen its membership and invited interested institutions in Europe and North
    America to participate in the efforts for propagation of ALGOL through the construction of ALGOL compilers for various different machine configurations; the documents of the previous ZMMD group were made available for this purpose. The offer was accepted by scientific institutions (among the first were Regnecentralen Copenhagen, Bonn University, Zemanek's Mailüfterl group in Vienna, Oak Ridge National Laboratory, the Neher Laboratory Leidschendam) as well as by some computer companies (Siemens and Halske AG for the 2002, Telefunken for the TR4, Standard Elektrik Lorenz AG, IBM Germany). The resulting multitude of concrete implementations was unavoidable because of the differences in the machines involved, but it was useful in its scientific aspects. For example, Albert A. Grau, Oak Ridge, introduced the concept of syntactic states and described the compiler as a system of mutually recursive subprograms [31]. Peter Lucas in Vienna went his own way [32] in generating, like Paul in Mainz [33,34], the compiler from the syntax in BNF. Jurgen Eickel and Manfred Paul, in 1964, studied the parsing and ambiguity problem for Chomsky languages in general [39].
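    Grau's compiler itself is described in [31]; the following is merely a generic sketch of the idea of a parser built as a system of mutually recursive subprograms, one per syntactic class, with a toy grammar and token handling assumed for the example.

        # Generic sketch of mutually recursive parsing subprograms, one per
        # syntactic class.  Grammar and tokens are illustrative assumptions.
        def parse(tokens):
            pos = 0

            def peek():
                return tokens[pos] if pos < len(tokens) else None

            def eat(expected):
                nonlocal pos
                assert peek() == expected, f"expected {expected!r}, found {peek()!r}"
                pos += 1

            def expr():                      # expr   ::= term { '+' term }
                node = term()
                while peek() == '+':
                    eat('+')
                    node = ('+', node, term())
                return node

            def term():                      # term   ::= factor { '*' factor }
                node = factor()
                while peek() == '*':
                    eat('*')
                    node = ('*', node, factor())
                return node

            def factor():                    # factor ::= identifier | '(' expr ')'
                if peek() == '(':
                    eat('(')
                    node = expr()
                    eat(')')
                    return node
                node = peek()
                assert node is not None, "operand expected"
                eat(node)
                return node

            tree = expr()
            assert pos == len(tokens), "trailing symbols after the formula"
            return tree

        print(parse(['a', '+', 'b', '*', '(', 'c', '+', 'd', ')']))
        # yields ('+', 'a', ('*', 'b', ('+', 'c', 'd')))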
    Extract: Algol 68 and the death of Algol
    After my return to Munich in 1963, the build-up of computer science there became my main obligation, leaving less time to be spent on the further development of ALGOL. The way it went became more and more of a nightmare, leading to ALGOL 68 and to the ruin of the ALGOL idea. One after another, people left the IFIP Working Group 2.1 on ALGOL: Peter Naur, Niklaus Wirth, Tony Hoare, Edsger Dijkstra, Brian Randell, Gerhard Seegmuller, Wlad Turski, Mike Woodger, Hans Bekic, and Fraser Duncan.
          in Broy, Manfred and Denert, Ernst, eds., "Software Pioneers: Contributions to Software Engineering", Bonn, June 28-29, 2001, Springer, 2002 view details
  • Galler and Galler "A Career Interview with Bernie Galler" pp22-33 view details Extract: MAD
    I should talk a little about the history of MAD, because MAD was one of our major activities at the Computing Center. It goes back to the 650, when we had the IT compiler from Al Perlis. Because of the hardware constraints - mainly the size of storage and the fact that there were no index registers on the machine - IT had to have some severe limitations on what could be expressed in the language. Arden and Graham decided to take advantage of index registers when we finally got them on the 650 hardware. We also began to understand compilers a little better, and they decided to generalize the language a little. In 1957, they wrote a compiler called GAT, the Generalized Algebraic Translator. Perlis took GAT and added some things, and called it GATE - GAT Extended.
    GAT was not used very long at Michigan. It was okay, but there was another development in 1958. The ACM [Association for Computing Machinery] and a European group cooperatively announced a "standard" language called IAL, the International Algebraic Language, later changed to Algol, Algorithmic Language, specifically Algol 58. They published a description of the language, and noted: "Please everyone, implement this. Let's find out what's wrong with it. In two years we'll meet again and make corrections to the language," in the hope that everyone would use this one wonderful language universally instead of the several hundred already developing all over the world. The cover of an issue of the ACM Communications back then showed a Tower of Babel with all the different languages on it.
    John Carr was very active in this international process. When he returned to Michigan from the 1958 meeting with the Europeans, he said:
         We've got to do an Algol 58 compiler. To help the
         process, let's find out what's wrong with the language.
         We know how to write language compilers,
         we've already worked with IT, and we've
         done GAT. Let's see if we can help.
    So we decided to do an Algol 58 compiler. I worked with Arden and Graham; Carr was involved a little but left Michigan in 1959. There were some things wrong with the Algol 58 language specification - foolish inclusions, some things very difficult to do. When you write a compiler, you have to make a number of decisions. By the time we designed the language that we thought would be worth doing and for which we could do a compiler, we couldn't call it Algol any more; it really was different. That's when we adopted the name MAD, for the Michigan Algorithm Decoder. We had some funny interaction with the Mad Magazine people, when we asked for permission to use the name MAD. In a very funny letter, they told us that they would take us to court and everything else, but ended the threat with a P.S. at the bottom - "Sure, go ahead." Unfortunately, that letter is lost.
    So in 1959, we decided to write a compiler, and at first it was Arden and Graham who did this. I helped, and watched, but it was mainly their work because they'd worked on GAT together. At some point I told them I wanted to get more directly involved. Arden was doing the back end of the compiler; Graham was doing the front end. We needed someone to do the middle part that would make everything flow, and provide all the tables and so on. I said, "Fine. That's my part." So Graham did part 1, Galler did part 2, and Arden did part 3.
    A few years later when Bob Graham left to go to the University of Massachusetts, I took over part 1. So I had parts 1 and 2, and Arden had 3, and we kept on that way for several years. We did the MAD compiler in 1959 and 1960, and I think it was 1960 when we went to that famous Share meeting and announced that we had a compiler that was in many ways better and faster than Fortran. People still tell me they remember my standing up at that meeting and saying at one of the Fortran discussions, "This is all unnecessary, what you're arguing about here. Come to our session where we talk about MAD, and you'll see why."
    Q: Did they think that you were being very brash, because you were so young?
    A: Of course. Who would challenge IBM? I remember one time, a little bit later, we had a visit from a man from IBM. He told us that they were very excited at IBM because they had discovered how to have the computer do some useful work during the 20-second rewind of the tape in the middle of the Fortran translation process on the IBM 704, and we smiled. He said, "Why are you smiling?" And we said, "That's sort of funny, because the whole MAD translation takes one second." And here he was trying to find something useful for the computer to do during the 20-second rewind in the middle of their whole Fortran processor.
    In developing MAD, we were able to get the source listings for Fortran on the 704. Bob Graham studied those listings to see how they used the computer. The 704 computer, at that time, had 4,000 36-bit words of core storage and 8,000 words of drum storage. The way the IBM people overcame the small 4,000-word core storage was to store their tables on the drum. They did a lot of table look up on the drum, recognizing one word for each revolution of the drum. If that wasn't the word they wanted, then they'd wait until it came around, and they'd look at the next word.
    Graham recognized this and said, "That's one of the main reasons they're slow, because there's a lot of table look-up stuff in any compiler. You look up the symbols, you look up the addresses, you look up the types of variables, and so on."
    So we said, fine. The way to organize a compiler then is to use the drum, but to use it the way drums ought to be used. That is, we put data out there for temporary storage and bring it back only once, when we need it. So we developed all of our tables in core. When they overflowed, we stored them out on the drum. That is, part 1 did all of that. Part 2 brought the tables in, reworked and combined them, and put them back on the drum, and part 3 would call in each table when it needed it. We did no lookup on the drum, and we were able to do the entire translation in under a second.
    It was because of that that MIT used MAD when they developed CTSS, the Compatible Time-Sharing System, and needed a fast compiler for student use. It was their in-core translator for many years.
    MAD was successfully completed in 1961. Our campus used MAD until the middle of 1965, when we replaced the 7090 computer with the System /360. During the last four years of MAD's use, we found no additional bugs; it was a very good compiler. One important thing about MAD was that we had a number of language innovations, and notational innovations, some of which were picked up by the Fortran group to put into Fortran IV and its successors later on. They never really advertised the fact that they got ideas, and some important notation, from MAD, but they told me that privately.
    We published a number of papers about MAD and its innovations. One important thing we had was a language definition facility. People now refer to it as an extensible language facility. It was useful and important, and it worked from 1961 on, but somehow we didn't appreciate how important it was, so we didn't really publish anything about it until about 1969. There's a lot of work in extensible languages now, and unfortunately, not a lot of people credit the work we did, partly because we delayed publishing for so long. While people knew about it and built on it, there was no paper they could cite.
          in IEEE Annals of the History of Computing, 23(1) January 2001 view details