High-level interpreter at Bell Labs for the IBM 650
Interpreter for the IBM 650, written by V. Michael Wolontis and Dolores Leagus at Bell Labs, 1955; operational on the 650 in August 1955. A significant higher-level language, better known later as the Bell 1 interpreter.
N.B. When the IBM 650 issue of the Annals of the History of Computing was put together, the editor recalls that everyone asked him to remember the Wolontis-Bell system.
The report serves a dual purpose. It presents the external characteristics of the interpretive system to the potential user by means of detailed explanations accompanied by illustrative examples, assuming no previous familiarity with internally programmed machines. It also describes the internal structure of the system to the professional designer of such systems, enabling him to modify it to suit his particular needs or to borrow ideas or building blocks he may find useful. Extract: Details
This memorandum gives a complete description of a General Purpose System for using the IBM 650 on general scientific problems. The system contains input and output, 8 decimal place floating-point arithmetic including the elementary transcendental functions to the full range of arguments, extensive logical control including five index registers, flexible tracing arrangements for easy program "debugging" and provisions for the ready use of library routines for special functions and operations. The use of a non-addressed order system makes it easy to learn and use. Previous knowledge of large scale digital computers is not required to understand and use the system. Many of the common mistakes in computing are eliminated by the system and ease of tracing aids in the location of others. The non-addressed order system in turn makes their correction a straightforward matter.
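The memorandum's "8 decimal place floating-point arithmetic" packed each number into a single ten-digit 650 word. The sketch below assumes one common layout for such drum-machine systems (an 8-digit mantissa followed by a 2-digit exponent biased by 50); the actual Bell layout may differ, so treat the encoding as illustrative rather than as the system's documented format.

```python
def encode(x):
    """Pack x into a signed 10-digit integer: 8 mantissa digits
    followed by a 2-digit exponent biased by 50 (layout assumed)."""
    if x == 0:
        return 50  # zero mantissa, zero exponent -> 0000000050
    sign = 1 if x > 0 else -1
    m, e = abs(x), 0
    while m >= 1.0:          # normalize so that 0.1 <= m < 1
        m /= 10.0
        e += 1
    while m < 0.1:
        m *= 10.0
        e -= 1
    mant = round(m * 10**8)
    if mant == 10**8:        # rounding pushed the mantissa over
        mant //= 10
        e += 1
    return sign * (mant * 100 + e + 50)

def decode(word):
    """Inverse of encode: unpack mantissa and biased exponent."""
    sign = -1 if word < 0 else 1
    word = abs(word)
    mant, exp = word // 100, word % 100 - 50
    return sign * (mant / 10**8) * 10.0**exp

print(encode(2.5))           # 2500000051
print(decode(2500000051))    # 2.5
```

With two digits reserved for the exponent, such a scheme covers a very wide range of magnitudes at a fixed eight decimal places of precision, which matches the memorandum's claim of full-range transcendental functions.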
I.1. General Design Considerations
The use of most existing computing devices whose degree of automatic performance substantially exceeds that of a desk calculator entails certain problems not encountered in desk computing. To cope with these problems, one may incorporate additional circuitry into the machine (this, indeed, appears to be the trend in recently announced commercially available machines) or, alternatively, one may program, in terms of the basic language of the machine, a system or super-language in terms of which the general user will program his problems. The user may consider the machine and the super-language as one entity, and no knowledge of the basic machine language is required of him. Before actual calculation, the programmer's instructions are translated by the machine into the basic language. If this translation or interpretation takes place each time an instruction is to be executed, rather than once for all at the beginning of a problem, the super-language is referred to as an interpretive language or system. Limitations in storage capacity may necessitate the choice of an interpretive system rather than a system of the once-for-all type in the case of most small or medium-sized computers.
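The distinction drawn above can be made concrete with a small sketch. This is not the Bell system itself; the (op, a, b, c) pseudo-order format and the opcodes are hypothetical, chosen only to show the difference between decoding every order at execution time and decoding the whole program once, ahead of time.

```python
# Hypothetical pseudo-program: mem[3] = (mem[0] + mem[1]) ** 2
PROGRAM = [("add", 0, 1, 2), ("mul", 2, 2, 3)]

OPS = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}

def interpret(program, mem):
    """Interpretive system: decode each order every time it executes."""
    for op, a, b, c in program:
        mem[c] = OPS[op](mem[a], mem[b])
    return mem

def translate(program):
    """Once-for-all translator: decode ahead of time into native steps."""
    steps = [(OPS[op], a, b, c) for op, a, b, c in program]
    def run(mem):
        for fn, a, b, c in steps:  # no decoding left to do here
            mem[c] = fn(mem[a], mem[b])
        return mem
    return run

print(interpret(PROGRAM, {0: 2.0, 1: 3.0})[3])   # 25.0
print(translate(PROGRAM)({0: 2.0, 1: 3.0})[3])   # 25.0
```

Both routes produce the same answer; the interpretive route pays a decoding cost on every execution, while the once-for-all route pays a storage cost for the translated program, which is exactly the trade-off the paragraph above attributes to small and medium-sized machines.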
The designers of an interpretive system are faced with a very large number of decisions. To provide a basis of motivation for these decisions, it is convenient to list here, in somewhat arbitrary order, some of the above-mentioned problems which the present interpretive system proposes to solve. All of them may fundamentally be measured in terms of total time spent by a programmer in learning to use the machine and in using it on a specific problem. In this sense, the "ease of use" referred to in the abstract above is implicitly defined by the list that follows. The price paid for the saving of programmer time is of course to be found in substantially reduced speed of operation.
In its external characteristics, the interpretive system described in this report owes much to the IBM Speedcoding System for the 701.
Let me elaborate these points with examples. UNICODE is expected to require about fifteen man-years. Most modern assembly systems must take from six to ten man-years. SCAT expects to absorb twelve people for most of a year. The initial writing of the 704 FORTRAN required about twenty-five man-years. Split among many different machines, IBM's Applied Programming Department has over a hundred and twenty programmers. Sperry Rand probably has more than this, and for utility and automatic coding systems only! Add to these the number of customer programmers also engaged in writing similar systems, and you will see that the total is overwhelming.
Perhaps five to six man-years are being expended to write the Model 2 FORTRAN for the 704, trimming bugs and getting better documentation for incorporation into the even larger supervisory systems of various installations. If available, more could undoubtedly be expended to bring the original system up to the limit of what we can now conceive. Maintenance is a very sizable portion of the entire effort going into a system.
Certainly, all of us have a few skeletons in the closet when it comes to adapting old systems to new machines. Hardly anything more than the flow charts is reusable in writing 709 FORTRAN; changes in the characteristics of instructions, and tricky coding, have done for the rest. This is true of every effort I am familiar with, not just IBM's.
What am I leading up to? Simply that the day of diverse development of automatic coding systems is either out or, if not, should be. The list of systems collected here illustrates a vast amount of duplication and incomplete conception. A computer manufacturer should produce both the product and the means to use the product, but this should be done with the full co-operation of responsible users. There is a gratifying trend toward such unification in such organizations as SHARE, USE, GUIDE, DUO, etc. The PACT group was a shining example in its day. Many other coding systems, such as FLAIR, PRINT, FORTRAN, and USE, have been done as the result of partial co-operation. FORTRAN for the 705 seems to me to be an ideally balanced project, the burden being carried equally by IBM and its customers.
Finally, let me make a recommendation to all computer installations. There seems to be a reasonably sharp distinction between people who program and use computers as a tool and those who are programmers and live to make things easy for the other people. If you have the latter at your installation, do not waste them on production and do not waste them on a private effort in automatic coding in a day when that type of project is so complex. Offer them in a cooperative venture with your manufacturer (they still remain your employees) and give him the benefit of the practical experience in your problems. You will get your investment back many times over in ease of programming and the guarantee that your problems have been considered.
Extract: IT, FORTRANSIT, SAP, SOAP, SOHIO
The IT language is also showing up in future plans for many different computers. Case Institute, having just completed an intermediate symbolic assembly to accept IT output, is starting to write an IT processor for UNIVAC. This is expected to be working by late summer of 1958. One of the original programmers at Carnegie Tech spent the last summer at Ramo-Wooldridge to write IT for the 1103A. This project is complete except for input-output and may be expected to be operational by December, 1957. IT is also being done for the IBM 705-1, 2 by Standard Oil of Ohio, with no expected completion date known yet. It is interesting to note that Sohio is also participating in the 705 FORTRAN effort and will undoubtedly serve as the basic source of FORTRAN-to-IT-to-FORTRAN translational information. A graduate student at the University of Michigan is producing SAP output for IT (rather than SOAP) so that IT will run on the 704; this, however, is only for experience; it would be much more profitable to write a pre-processor from IT to FORTRAN (the reverse of FORTRANSIT) and utilize the power of FORTRAN for free.
in "Proceedings of the Fourth Annual Computer Applications Symposium", Armour Research Foundation, Illinois Institute of Technology, Chicago, Illinois, 1957
During the summer of 1956, I attended a series of seminars at Endicott, New York, given by the International Business Machines Corporation. One of the talks was given by Dr. Richard Hamming on the new Bell General Purpose System Interpretive Scheme which was currently being developed at Bell Telephone Laboratories. Hamming sent me a copy of this system as soon as it was available. I turned it over to one of the professors in the School of Electrical Engineering and asked him to give it a trial run. At that time he did not have much experience with computers, but he was successfully able to employ this system and was, in fact, very enthusiastic about it. He then asked me if I would give a series of seminars on its use to the Electrical Engineering faculty and graduate students. As a consequence of this seminar, several faculty members and students utilized the computer on their individual research or thesis problems.
During the same school year, I used the Bell System in my course on the numerical solution of ordinary differential equations. All the students in this course were asked to do all their numerical problems employing the Bell System. I felt that this quarter of work was particularly successful in giving the students an understanding of the problems involved in numerical computation work. I was able to ask the students to solve many more problems than when I had previously taught the course, and I also was able to ask them to vary the increment of integration much more widely. I feel that one of the greatest benefits derived from such a course was the subtle one of providing a very high motivation for carrying out normally rather dull computational work.
Since the two experiences mentioned above, at least one seminar on the Bell System has been offered each quarter. Some quarters, seminars have been given for special groups such as chemical engineers, or civil engineers, or mechanical engineers, with an emphasis on their particular type of work. The attendance in these seminars has varied from thirty to sixty, with the latter number being more frequently the case. Somehow this system seemed to catch the fancy of both students and faculty and really went over. There are a number of courses now in which the instructors regularly require their students to solve problems on the computer using the Bell System. These include a special problems course in Chemical Engineering, a photogrammetry course in Civil Engineering, a machine design course in Mechanical Engineering, an optics course in Physics, and a structures course in Civil Engineering. In the latter course, in addition to using the Bell System, the students are also required to utilize our standard IBM 650 Library for such things as the solution of systems of simultaneous linear equations.
Our success with the Bell System on the IBM 650 inspired us to write a similar interpretive scheme for the 1101. This was again a three-address system, but unfortunately with our 24-bit word it was necessary to use three words to hold the interpretive order. It also employed the octal number system for addresses, as is customary on the 1101. The hope, of course, was to be able better to utilize the 1101. But these hopes were very quickly lost because the system was inherently more clumsy than the 650 Bell System. Thus, for the moment, we have abandoned our interpretive system for the 1101.
Extract: FORTRAN, FORTRANSIT, RUNCIBLE, Bell
Shortly after FORTRAN was made available on the 650 in the form of FORTRANSIT, we ran seminars on it. It was felt that the high mnemonic value of the FORTRAN language would be a further aid to programming. This turned out to be the case for those students or faculty members who were already familiar with the Bell System or with machine language. However, it appeared to us that those without some previous computer background did not take to the FORTRANSIT compiler system as well as they did to the Bell General Purpose Interpretive System. Students with the appropriate background and with more complex problems were pleased with the ease with which they could program their problems using FORTRANSIT.
It was about this stage that we decided that we would try to make the FORTRAN system available on our 1101. Also about this time the ElectroData Division of Burroughs Corporation indicated that they planned to make the FORTRAN system available on their forthcoming 220. Thus we felt that, by writing FORTRAN for the 1101, we would be able to program a problem in the FORTRAN language and run it on any one of our three machines. In this manner we would then be able to equalize the load on our three machines. Consequently, a year ago this past summer two of our programmers started writing FORTRAN for the 1101. They continued this work until they attended the seminar on compilers held last February 18-20, 1959, at Carnegie Institute of Technology, which was jointly sponsored by Carnegie, Case, and the University of Michigan. After returning from this seminar, these two boys reviewed their work and the compiler systems presented at this conference. They then recommended that the course of their work be changed from making FORTRAN available to making the RUNCIBLE system available for the three computers. As most of you probably know, the RUNCIBLE system of Case is the result of several revisions of Dr. A. J. Perlis' IT system. Our boys felt that RUNCIBLE was logically far more complete and much more flexible in its use. It was felt that these two major advantages were sufficiently great to overcome the loss of higher mnemonic value of FORTRAN. It was thus decided that the RUNCIBLE system would be used instead of the FORTRAN system. Since RUNCIBLE is more complete logically, it would be a relatively easy task to put a translator in front of RUNCIBLE to be able to handle the FORTRAN language if it was later decided that this was desirable.
Our decision to adopt RUNCIBLE rather than FORTRAN has been further justified by the fact that the ElectroData people appeared to have set aside their project to write FORTRAN for the 220. In the meantime, our programmers have also been able to run the RUNCIBLE on the 220 by use of the 650 Simulator. The simulator was written by the ElectroData people for the 220 and appears to be very efficient in the cases where we have employed it. It is true that this is not an exceedingly efficient use of the 220, but it is also true that in our case we will probably not run many compiler programs on the 220. It currently appears that we have enough people willing and able to write the machine language to keep the 220 busy. Even though we only installed the 220 early this year, we are already running two shifts on it. Most of this work is either sponsored research work or faculty-graduate student research projects that require the larger machine. Essentially, no one-shot small problems have been put on the 220.
We are currently running our third seminar on the RUNCIBLE system. The attendance at these seminars has varied. This quarter our Bell seminar drew the usual sixty people, and the RUNCIBLE seminar only seven. However, the two previous RUNCIBLE seminars had about twenty each. In order that we may not be accused of being out of date relative to the current push on compilers, we are considering the possibility of offering only the RUNCIBLE seminar next quarter. Perhaps this will help overcome the mass momentum that has developed relative to the Bell System. I still have, however, the strong feeling in my own mind that, for the smaller one-shot computer projects of the uninitiated, the actual time elapsed between problem initiation and problem solution may be considerably less in using the Bell System. I had the experience of sitting down with a sharp faculty person one afternoon and describing the Bell System to him. The next day he came back with a moderately complex integration problem completely programmed and ran it on the 650. I have not had the exact set of circumstances to try the RUNCIBLE system, but I doubt that the same degree of success could be achieved.
It seems very clear to me that an undisputed value for a compiler system such as RUNCIBLE or FORTRAN is for the larger-scale problems and for experimental mathematical studies, where the model is sufficiently changed to make it unfeasible efficiently to employ a simple interpretive scheme. My current feeling is that, within an educational research situation such as ours, there will always be a place for interpretive systems such as the Bell System. It seems to me that, in learning such a system, one gets a better feeling for the way in which the machine actually functions. After all, the interpretive schemes are not too far removed from machine-language programming and yet still have many advantages over such programming. It appears that, the wider the basic knowledge that a student has, the more useful he will be when he goes out into industry, even though there his computer work may be confined to the use of a compiler. I would also concur in the continued inclusion of machine-language programming in the basic programming courses offered for credit by the Mathematics Department, the Electrical Engineering Department, or whatever department happens to offer these courses; someone has to have a sufficiently strong background to be able to build compilers.
Extract: Not using assemblers (SOAP, STAR)
You probably have noted by now that I have made no direct mention of assembly routines. This lack of reference reflects our situation at Georgia Tech. Small use has been made of them. No seminars have been run on their use. A few people have used SOAP on the 650. A very few are using STAR I on the 220. An assembly program was written for our 1101, but it was purely for intermediate purposes and had no direct use. I currently see no necessity of ever running a non-credit seminar on assembly routines, but I would advocate their inclusion in the credit courses in programming.
in Proceedings of the 1959 Computer Applications Symposium, Armour Research Foundation, Illinois Institute of Technology, Chicago, Ill., Oct. 29, 1959
in E. M. Crabbe, S. Ramo, and D. E. Wooldridge (eds.) "Handbook of Automation, Computation, and Control," John Wiley & Sons, Inc., New York, 1959.
employ new methods in many areas of research. Performance of 1 million multiplications on a desk calculator is estimated to require about five years and to cost $25,000. On an early scientific computer, a million multiplications required eight minutes and cost (exclusive of programing and input preparation) about $10. With the recent LARC computer, 1 million multiplications require eight seconds and cost about 50 cents (Householder, 1956). Obviously it is imperative that researchers examine their methods in light of the abilities of the computer.
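The figures above reduce to per-multiplication costs and an overall speedup; a quick arithmetic check (the 365-day year used here is this sketch's assumption, not the passage's):

```python
# Per-multiplication cost and speed implied by the figures above.
n = 1_000_000
desk_cost, desk_secs = 25_000.0, 5 * 365 * 24 * 3600   # five years, $25,000
early_cost, early_secs = 10.0, 8 * 60                  # eight minutes, $10
larc_cost, larc_secs = 0.50, 8                         # eight seconds, 50 cents

print(desk_cost / n)            # $0.025 per multiplication by hand
print(early_cost / n)           # $0.00001 on an early computer
print(larc_cost / n)            # $0.0000005 on the LARC
print(desk_cost / larc_cost)    # 50000.0 -> a 50,000-fold cost reduction
print(desk_secs // larc_secs)   # 19710000 -> a roughly 20-million-fold speedup
```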
It should be noted that much of the information published on computers and their use has not appeared in educational or psychological literature but rather in publications specifically concerned with computers, mathematics, engineering, and business. The following selective survey is intended to guide the beginner into this broad and sometimes confusing area. It is not an exhaustive survey. It is presumed that the reader has access to the excellent Wrigley (1957) article; so the major purpose of this review is to note additions since 1957.
The following topics are discussed: equipment availability, knowledge needed to use computers, general references, programing the computer, numerical analysis, statistical techniques, operations research, and mechanization of thought processes. Extract: Interpretive Systems
Among the first approaches to automatic programing were the interpretive systems, in which the pseudo instructions of the programing language were stored in the memory of the computer, along with a program that translated these pseudo instructions into the proper sequence of machine instructions as the computer engaged in the process of solution. The most widely known general-purpose interpretive system for the IBM 650 is "Bell Telephone Laboratories Interpretive Code," which has been described by Wolontis (1956), Andree (1958), and Wrubel (1959). Other systems were developed for special purposes, such as the University of Michigan "MITLAC" (1955), which included differential equation operations, and "SIS" (Haynam, 1957), which is designed for the solution of routine statistical problems. Frequently one of the existing interpretive systems will lend itself to the solution of the problem at hand; however, since time is required to perform the translation of each program run, this convenience must be paid for in terms of computer execution time.
in E. M. Crabbe, S. Ramo, and D. E. Wooldridge (eds.) "Handbook of Automation, Computation, and Control," John Wiley & Sons, Inc., New York, 1959.
Delivery of the smaller, medium scale magnetic-drum computers started in 1953, and by 1955-56 they were a very important factor in the computer field. The IBM 650 was by far the most popular of the early drum computers. The 650 was quite easy to program in its own language, and was programmed that way in many applications, especially in the data-processing area. As a scientific computer it lacked floating point hardware, a feature that was later made available. A number of interpretive floating point systems were developed, of which the most popular was the one designed at the Bell Telephone Laboratories. This was a three address floating point system with automatic looping and with built-in mathematical subroutines. It was a logical continuation of the line of systems that had started with the general purpose CPC boards, and had been continued in 701 Speedcode. It proved that on the right kind of computer an interpretive system can provide an efficient, effective tool. Interpretive systems fell into disrepute for a number of years. They are making a very strong comeback at the present time in connection with a number of so-called micro-programmed computers that have recently appeared on the market.
in [AFIPS JCC 25] Proceedings of the 1964 Spring Joint Computer Conference SJCC 1964
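The extract above characterizes the Bell system as a three-address floating point interpreter with automatic looping and built-in mathematical subroutines. The sketch below illustrates that shape of system; the opcodes, the order format, and the memory addresses are hypothetical stand-ins, not the real Bell order codes.

```python
import math

def run(program, mem):
    """Interpret hypothetical three-address orders (op, a, b, c)."""
    pc = 0
    while pc < len(program):
        op, a, b, c = program[pc]
        if op == "add":
            mem[c] = mem[a] + mem[b]
        elif op == "mul":
            mem[c] = mem[a] * mem[b]
        elif op == "sqrt":              # stand-in for a built-in subroutine
            mem[c] = math.sqrt(mem[a])  # b unused for one-operand orders
        elif op == "tr<=":              # transfer to order c if mem[a] <= mem[b]
            if mem[a] <= mem[b]:
                pc = c
                continue
        pc += 1
    return mem

# Sum 1 + 2 + ... + 10 with the transfer order, then take the square root:
mem = {10: 1.0, 11: 10.0, 12: 0.0, 13: 1.0}
program = [
    ("add", 12, 10, 12),    # sum := sum + i
    ("add", 10, 13, 10),    # i := i + 1
    ("tr<=", 10, 11, 0),    # loop back while i <= 10 ("automatic looping")
    ("sqrt", 12, 0, 14),    # built-in subroutine: sqrt of the sum
]
print(run(program, mem)[12])   # 55.0
```

Because every arithmetic order names both operands and a result, short numerical programs read almost like the formulas they compute, which is consistent with the reports above of users programming successfully after a single afternoon's introduction.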
It is no accident that Bell Labs was deeply involved with the origins of both analog and digital computers, since it was fundamentally concerned with the principles and processes of electrical communication. Electrical analog computation is based on the classic technology of telephone transmission, and digital computation on that of telephone switching. Moreover, Bell Labs found itself, by the early 1930s, with a rapidly growing load of design calculations. These calculations were performed in part with slide rules and, mainly, with desk calculators. The magnitude of this load of very tedious routine computation and the necessity of carefully checking it indicated a need for new methods. The result of this need was a request in 1928 from a design department, heavily burdened with calculations on complex numbers, to the Mathematical Research Department for suggestions as to possible improvements in computational methods. At that time, however, no useful suggestions could be made.
Extract: History
To make this kind of operation really practicable, Bell Labs developed new problem-oriented programming languages that permitted such users to make effective use of the machine without the necessity of becoming completely familiar with programming in the machine's "native" language. These languages made floating-point operation available to the user (although the machines themselves operated in fixed-point arithmetic), greatly simplified the addressing of data in the memory, and provided useful diagnostic information as to program malfunctions. There were two such languages, each with specific advantages for certain types of work: the L1 language, developed by V. Michael Wolontis and Dolores C. Leagus, and the L2 language, developed by Richard W. Hamming and Ruth A. Weiss. They proved very convenient in operation, and both of them were released to users outside of Bell Labs, who usually referred to them as Bell 1 and Bell 2. In the late 1950s, at least half the IBM 650s doing scientific and engineering work used either Bell 1 or Bell 2. One organization became so fond of Bell 1 that, when its 650 was replaced by the more powerful IBM 1401 (which came complete with excellent IBM problem-oriented software), they went to the trouble of writing their own Bell 1 interpreter for the new machine.
With this software, the IBM 650s served Bell Labs scientists and engineers very well for several years. The operating procedures were straightforward: the user's program and data were keypunched and proofread, then the card deck, preceded by the L1 or L2 interpreter, was fed into the IBM 650, and the output appeared at the other end of the machine, also punched into cards. The output deck was then printed for the user on an IBM tabulator. If the user feared there might be undetected errors in the program, it could be run in tracing mode to obtain a complete listing of executed instructions. Clean decks were run by an operator without the user being present. During the last year of use of the 650s, the machines ran pretty well around the clock; on each of the second and third shifts, one operator ran both machines with no trouble.
in [AFIPS JCC 25] Proceedings of the 1964 Spring Joint Computer Conference SJCC 1964
At Carnegie Tech (now CMU) the 650 arrived in July 1956. Earlier in the spring I had accepted the directorship of a new computation center at Carnegie that was to be its cocoon. Joseph W. Smith, a mathematics graduate student at Purdue, also came to fill out the technical staff. A secretary-keypuncher, Peg Lester, and a Tech math grad student, Harold Van Zoeren, completed the staff later that summer. The complete annual budget -- computer, personnel, and supplies -- was $50,000. During the tenure of the 650, the center budget never exceeded $85,000. Before the arrival of the computer, a few graduate students and junior faculty in engineering and science had been granted evening access to a 650 at Mellon National Bank. In support of their research, the 650, largely programmed using the Wolontis-Bell Labs three-address interpreter system, proved invaluable. The success of their efforts was an important source of support for the newly established Computation Center.
The 650 operated smoothly almost immediately. The machine was quite reliable. Even though only a one-shift maintenance contract was in force, by the start of fall classes the machine was being used on the second shift, as well as weekends. The talented user group, the stable machine, two superb software tools -- SOAP (Poley 1957) and Wolontis (see Technical Newsletter No. 11 in this issue) -- and an uninhibited open atmosphere contributed to make the center productive and, even more, an idea-charged focus on the campus for the burgeoning insights into the proper -- nay, inevitable -- role of the computer in education and research. Other than the usual financial constraints, the only limits were lack of time and assistance. The center was located in the basement of the Graduate School of Industrial Administration (GSIA). Its dean, Lee Bach, was an enthusiastic supporter of digital computation. Consequently, he was not alarmed at the explosion in the use of the center by his faculty and graduate students, and he acceded graciously to the pressure, when it came, to support the center in its requests to the administration for additional space and equipment.
From its beginning the center, its staff, and many of the users were engaged in research on programming as well as with programming. So many problems were waiting to be solved whose programs we lacked the resources to write: We were linguistically inhibited, so that our programs were too often exercises in stuttering fueled by frustration. Before coming to Carnegie, Smith and I had already begun work on an algebraic language translator at Purdue intended for use on the ElectroData Datatron computer, and we were determined to continue the work at Carnegie. The 650 proved to be an excellent machine on which to pursue this development. Indeed, the translator was completed on the 650 well before the group at Purdue finished theirs. The 650 turned out to have three advantages over the Datatron for this particular programming task: punched cards being superior to paper tape, simplicity in handling alphanumerics, and SOAP. The latter was an absolutely crucial tool. Any large programming task is dominated by the utility with which its parts can be automatically assembled, modified, and reassembled.
The translator, called IT for Internal Translator (see Perlis and Smith 1957), was completed shortly after Thanksgiving of 1956. In the galaxy of programming languages IT has become a star of lesser magnitude. IT'S technical constructs are of historical interest only, but its existence had enormous consequences. Languages invite traffic, and use compels development. Thus IT became the root of a tree of language and system developments whose most important consequence was the opening of universities to programming research. The 650, already popular in universities, could be used the way industry and government were soon to use FORTRAN, and education could turn its attention to the subject of programming over and above applications to the worlds of Newton and Einstein. The nature of programming awaited our thoughts.
No other moment in my professional life has approached the dramatic intensity of our first IT compilation. The 650 accepted five cards (see Figure 1) and produced 42 cards of SOAP code (see Figure 2) evaluating a polynomial. The factor of 8 was a measure of magic, not the measure of a poor code generator. For me it was the latest in a sequence of amplifiers, the search for which exercises computation. The 650 implementation of IT had an elastic quality: it used 1998 of the 2000 words of 650 storage, no matter what new feature was added to the language. Later in 1957 IT-2 was made available and bypassed the need for SOAP completely. IT-2 translated the IT language directly into machine code. By the beginning of 1958 IT-3 became available. It was identical to IT-2 except that all floating-point arithmetic was performed in double precision. For its needs GSIA produced IT-2-S, which was IT-2 using scaled fixed-point arithmetic. The installation of the FORTRAN character set prompted the replacement of IT-2 by IT-2-A-S, which used both the FORTRAN character set and floating-point hardware. With IT-2-A-S the work on IT improvements came to an end at Carnegie.
While the IT developments were being carried out within our Computation Center, parallel efforts were under way on our machine in the development of list-processing languages under the direction of Allen Newell and Herbert Simon. The IPL family and the IT family have no linguistic structure in common, but they benefited from each other's existence through the continual interaction of the people, problems, and ideas within each system.
The use of Wolontis decreased. Soon almost all computation was in IT, and use expanded to three shifts. By the end of the summer of 1957, IT was in the hands of a number of other universities. Case and Michigan made their own improvements and GAT, developed by Michigan, became available in 1958 (see Arden and Graham 1958). It bypassed SOAP, producing machine code directly, and used arithmetic precedence. We were so impressed by GAT that we immediately embarked on its extension and produced GATE (GAT Extended) by spring of 1959. GATE was later transported to the Bendix G-20 when that computer replaced the 650 at Carnegie in 1961.
The increased use of the machine and the increased dependence on IT and its successors as a programming medium pressured the computing center into continual machine expansion. As soon as IBM provided enhancements to the 650 that would improve the use of our programming tools, our machine absorbed them: the complete FORTRAN character set, index registers, floating point, 60 core registers, masking and format commands and, most important, a RAMAC disk unit. All but the last introduced trivial modifications to our language processors. There was the usual grumbling from some users because the enhancements pressured (not required) them to modify both the form and logic of their programs. The users were becoming computer-bound by choice as well as need, though, and they had learned the first, and most important, lesson of computer literacy: In man-machine symbioses it is the human who must adjust, adapt, and learn as the computer evolves along its own peculiar gradients. Getting involved with a computer is like having a cannibal as a valet.
Most universities opted for magnetic tape as their secondary storage medium; Carnegie chose disks. Our concern with the improvement of the programming process had thrust upon us the question: How do programs come into being? Our answer: Pure reasoning and the artful combination of programs already available, understood, and poised for modification and execution. It is not enough to be able to write programs easily; one must be able to assemble new ones from old ones. Sooner or later everyone accepts this view -- first mechanized so admirably on the EDSAC almost 40 years ago (Wilkes, Wheeler, and Gill 1957). Looked at in hindsight, our concern with the process of assembly was an appreciation of the central role evolution plays in the man-computer dialogue: making things fit is a crucial part in making things work. It was obvious that retention and assembly of programs was more easily realized with disk than with tape. Like everything else associated with the 650, the RAMAC unit worked extremely well. Computation became completely dependent on it.
GATE, our extension of GAT, made heavy use of the disk (Perlis, Van Zoeren, and Evans 1959). Programs were getting larger, and a form of segmentation was needed. The assembly of machine-language programs and already compiled GATE programs into new ones was becoming a normal mode of use. GATE provided the vehicle for accomplishing these tasks. The construction of GATE was done by Van Zoeren and Smith. Not all programs originated in GATE; some were done in machine language. SOAP, always our model of a basic machine assembly language, had matured into SOAP II but had not developed into an adult assembler for a 650 with disks. IBM was about to stunt that species, so we designed and built TASS (Tech Assembly System). Smith and Arthur Evans wrote the code; Smith left Carnegie, and Evans completed TASS. A few months later he extended it to produce TASS II and followed it with SUPERTASS. TASS and its successors were superb assemblers and critical to our programming research (Perlis, Smith, and Evans 1959).
Essentially, any TASS routine could be assembled and appended to the GATE subroutine library. These routines were relocatable. GATE programs were fashioned from independently compiled segments connected by link commands whose executions loaded new segments from disk. Unfortunately, we never implemented local variables in GATE, although their value was appreciated and an implementation was sketched.
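The segment-and-link scheme described above can be illustrated with a modern sketch. The names here (`Segment`, `link`, `SEGMENT_STORE`) are illustrative inventions, not GATE's actual terms: the point is only that independently compiled segments live on secondary storage, and executing a link command loads the next segment and transfers control to it.

```python
# Hypothetical sketch of the overlay idea behind GATE's link commands.
# SEGMENT_STORE stands in for the RAMAC disk: name -> compiled segment.

SEGMENT_STORE = {}

class Segment:
    """An independently compiled program unit, retrievable by name."""
    def __init__(self, name, body):
        self.name = name
        self.body = body  # callable taking the shared program state

def store(segment):
    SEGMENT_STORE[segment.name] = segment

def link(name, state):
    """Load a segment from 'disk' and transfer control to it."""
    return SEGMENT_STORE[name].body(state)

# Two independently "compiled" segments; the first links to the second,
# much as a GATE segment ended by loading its successor from disk.
store(Segment("init",
              lambda state: (state.update(total=0),
                             link("accumulate", state))[1]))
store(Segment("accumulate",
              lambda state: sum(range(10)) + state["total"]))

result = link("init", {})
print(result)  # 45
```

Only one segment's body needs to be resident at a time, which is what made the scheme attractive on a machine with tiny core storage and a comparatively large disk.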
The TASS family represented our thoughts on the combinational issues of programming. In the TASS manual is the following list of desiderata for an assembler:
1. Programs should be constructed from the combination of units (called P routines in TASS) so that relationships between them are only those specified by the programmer.
2. A programmer should be able to combine freely P routines written elsewhere with his own.
3. Any program, once written, may become a P routine in the library.
4. When a P routine is used from the library, no detailed knowledge of its internal structure is required.
5. All of the features found in SOAP II should be available in P routines.
TASS supported an elaborate, but not rococo, mechanism for controlling circumstances when two symbols were (1) identical but required different addresses and (2) different but required identical addresses. Communication between P routines was possible both at assembly and at run time. Language extension through macrodefinitions was supported. SUPERTASS permitted nesting of macrocalls and P routine definition. Furthermore, SUPERTASS permitted interruptions of assembly by program execution and interruption of execution for the purpose of assembly.
Many of the modern ideas on modularization and structured programming were anticipated in TASS more as logical extensions to programming than as good practice. As so often happens in life cycles, both TASS and GATE attained stable maturity about the time the decision to replace the 650 by the Bendix G-20 was made.
Three other efforts to smooth the programming process developed as a result of the programming language work. IBM developed the FORTRANSIT system (see Hemmes in this issue) for translating FORTRAN programs into IT, thus providing a gradient for programs that would anticipate the one for computer acquisition. Van Zoeren (1959) developed a program GIF, under support of Gulf Oil Research, that went the other way, so that programs written by their engineering department for their 650 could run on an available 704. Both programs were written in SOAP II. GATE translated programs in one pass, statement by statement. Van Zoeren quickly developed a processor called CORREGATE that enabled editing of compiled GATE programs: only the new statements were processed, compiled, and assembled into the already compiled GATE program. GATE was anticipating BASIC, although the interactive, time-sharing mode was far from our thoughts in those days.
As so often happened, when a new computer arrived, sufficient software didn't. The 650 was replaced in the spring of 1961 by a superior computer, the Bendix G-20, whose software was inferior to that in use on our 650. For a variety of reasons, it had been decided to port GATE to the new machine -- but no adequate assembly language existed in which to code GATE. TASS had become as complex as GATE and appeared to be an inappropriate vehicle to port to the new machine, particularly because of the enormous differences in instruction architecture between the two machines. Consequently, a new assembler, THAT (To Help Assemble Translators), was designed (Jensen 1961). It was a minimal assembler and never attained the sophistication of TASS -- another example of the nonmonotonicity of software and hardware development over time.
We found an important lesson in this first transition. In the design and construction of software systems, you learn more from a study of the consequences of success than from analysis of failure. The former uses evolution to expose necessary revolution; the latter too often invokes the minimal backtrack. But who gave serious attention to the issues of portability in those days?
The 650 was a small computer, and its software, while dense in function, was similarly small in code size. The porting of GATE to the G-20 was accomplished in less than one man-year by three superb programmers, Evans, Van Zoeren, and Jørn Jensen, a visitor from the Danish Regnecentralen. They shared an office, each being the vertex of a triangle, and cooperated in the coding venture: Jensen was defining and writing THAT, Evans was writing the lexical analyzer and parser, and Van Zoeren was doing the code generator. The three activities were intricately meshed, much as co-routines with backtracking: new pseudo-operations for THAT would be suggested and approved, and code restructured, requiring reorganization in some of the code already written. This procedure converged quite quickly but would not be recommended for doing an Ada compiler.
in Annals of the History of Computing, 08(1), January 1986 (IBM 650 Issue)