TRANSCODE(ID:132/tra020)

Autocoder at Toronto 


An autocode system developed by J. N. Patterson Hume and Beatrice (Trixie) Worsley for the Ferut computer in Toronto; work began in 1953 and the system was completed in September 1954.

Ferut was based on the Manchester machine. TRANSCODE belonged to the AUTOCODE tradition and was influenced by Backus's Speedcoding article in JACM 1 (January 1954).


Related languages
AUTOCODE => TRANSCODE   Influence
SPEEDCODING => TRANSCODE   Influence

References:
  • Hume, J. N. P. "Input and Organisation of Sub-routines for FERUT" view details
          in [JCC 01] Joint AIEE-IRE Computer Conference Proceedings February 1952 view details
  • "Resolutions of a Non-Linear Differential Equation Arising in the Theory of Differential Flames" B.H. Worsley M.T.A.C. 1955 view details
          in [JCC 01] Joint AIEE-IRE Computer Conference Proceedings February 1952 view details
  • Hopper, Grace "Automatic Coding for Digital Computers" Extract: Introduction
    Automatic coding is a means for reducing problem costs and is one of the answers to a programmer's prayer. Since every problem must be reduced to a series of elementary steps and transformed into computer instructions, any method which will speed up and reduce the cost of this process is of importance.
    Each and every problem must go through the same stages:
    Analysis,
    Programming,
    Coding,
    Debugging,
    Production Running,
    Evaluation
    The process of analysis cannot be assisted by the computer itself. For scientific problems, mathematical or engineering, the analysis includes selecting the method of approximation, setting up specifications for accuracy of sub-routines, determining the influence of roundoff errors, and finally presenting a list of equations supplemented by definition of tolerances and a diagram of the operations. For the commercial problem, again a detailed statement describing the procedure and covering every eventuality is required. This will usually be presented in English words and accompanied again by a flow diagram.
    The analysis is the responsibility of the mathematician or engineer, the methods or systems man. It defines the problem and no attempt should be made to use a computer until such an analysis is complete.
    The job of the programmer is that of adapting the problem definition to the abilities and idiosyncrasies of the particular computer. He will be vitally concerned with input and output and with the flow of operations through the computer. He must have a thorough knowledge of the computer components and their relative speeds and virtues.
    Receiving diagrams and equations or statements from the analysis he will produce detailed flow charts for transmission to the coders. These will differ from the charts produced by the analysts in that they will be suited to a particular computer and will contain more detail. In some cases, the analyst and programmer will be the same person.
    It is then the job of the coder to reduce the flow charts to the detailed list of computer instructions. At this point, an exact and comprehensive knowledge of the computer, its code, coding tricks, details of sentinels and of pulse code is required. The computer is an extremely fast moron. It will, at the speed of light, do exactly what it is told to do, no more, no less.
    After the coder has completed the instructions, the program must be "debugged". Few and far between and very rare are those coders, human beings, who can write programs, perhaps consisting of several hundred instructions, perfectly the first time. The analyzers, automonitors, and other mistake-hunting routines that have been developed and reported on bear witness to the need for assistance in this area. When the program has finally been debugged, it is ready for production running and thereafter for evaluation or for use of the results.
    Automatic coding enters these operations at four points. First, it supplies to the analyst information about existing chunks of program, subroutines already tested and debugged, which he may choose to use in his problem. Second, it supplies the programmer with similar facilities not only with respect to the mathematics or processing used, but also with respect to using the equipment. For example, a generator may be provided to make editing routines to prepare data for printing, or a generator may be supplied to produce sorting routines.
    It is in the third phase that automatic coding comes into its own, for here it can release the coder from most of the routine and drudgery of producing the instruction code. It may, someday, replace the coder or release him to become a programmer. Master or executive routines can be designed which will withdraw subroutines and generators from a library of such routines and link them together to form a running program.
    If a routine is produced by a master routine from library components, it does not require the fourth phase - debugging - from the point of view of the coding. Since the library routines will all have been checked and the compiler checked, no errors in coding can be introduced into the program (all of which presupposes a completely checked computer). The only bugs that can remain to be detected and exposed are those in the logic of the original statement of the problem.
    Thus, one advantage of automatic coding appears: the reduction of the computer time required for debugging. A still greater advantage, however, is the replacement of the coder by the computer. It is here that the significant time reduction appears. The computer processes the units of coding as it does any other units of data -- accurately and rapidly. The elapsed time from a programmer's flow chart to a running routine may be reduced from a matter of weeks to a matter of minutes. Thus, the need for some type of automatic coding is clear.
    Actually, it has been evident ever since the first digital computers ran. Anyone who has been coding for more than a month has found himself wanting to use pieces of one problem in another. Every programmer has detected like sequences of operations. There is a ten-year history of attempts to meet these needs.
    The subroutine, the piece of coding required to calculate a particular function, can be wired into the computer and an instruction added to the computer code. However, this construction in hardware is costly, and only the most frequently used routines can be treated in this manner. Mark I at Harvard included several such routines -- sin x, log10 x, 10^x. However, they had one fault: they were inflexible. Always, they delivered complete accuracy to twenty-two digits. Always, they treated the most general case. Other computers, Mark II and SEAC, have included square roots and other subroutines partially or wholly built in. But such subroutines are costly and invariant and have come to be used only when speed without regard to cost is the primary consideration.
    It was in the ENIAC that the first use of programmed subroutines appeared. When a certain series of operations was completed, a test could be made to see whether or not it was necessary to repeat them and sequencing control could be transferred on the basis of this test, either to repeat the operations or go on to another set.
    At Harvard, Dr. Aiken had envisioned libraries of subroutines. At Pennsylvania, Dr. Mauchly had discussed the techniques of instructing the computer to program itself. At Princeton, Dr. von Neumann had pointed out that if the instructions were stored in the same fashion as the data, the computer could then operate on these instructions. However, it was not until 1951 that Wilkes, Wheeler, and Gill in England, preparing to run the EDSAC, first set up standards, created a library and the required satellite routines, and wrote a book about it, "The Preparation of Programs for Electronic Digital Computers". In this country, comprehensive automatic techniques first appeared at MIT, where routines to facilitate the use of Whirlwind I by students of computers and programming were developed.
    Many different automatic coding systems have been developed - Seesaw, Dual, Speed-Code, the Boeing Assembly, and others for the 701, the A-series of compilers for the UNIVAC, the Summer Session Computer for Whirlwind, MAGIC for the MIDAC and Transcode for the Ferranti Computer at Toronto. The list is long and rapidly growing longer. In the process of development are Fortran for the 704, BIOR and GP for the UNIVAC, a system for the 705, and many more. In fact, all manufacturers now seem to be including an announcement of the form, "a library of subroutines for standard mathematical analysis operations is available to users", "interpretive subroutines, easy program debugging - ... - automatic program assembly techniques can be used."
    The automatic routines fall into three major classes. Though some may exhibit characteristics of one or more, the classes may be so defined as to distinguish them.
    1) Interpretive routines which translate a machine-like pseudocode into machine code, refer to stored subroutines and execute them as the computation proceeds — the MIT Summer Session Computer, 701 Speed-Code, UNIVAC Short-Code are examples.
    2) Compiling routines, which also read a pseudo-code, but which withdraw subroutines from a library and operate upon them, finally linking the pieces together to deliver, as output, a complete specific program for future running -- UNIVAC A-compilers, BIOR, and the NYU Compiler System.
    3) Generative routines may be called for by compilers, or may be independent routines. Thus, a compiler may call upon a generator to produce a specific input routine. Or, as in the sort-generator, the submission of specifications, such as item-size and position of key, produces a routine to perform the desired operation. The UNIVAC sort-generator, the work of Betty Holberton, was the first major automatic routine to be completed. It was finished in 1951 and has been in constant use ever since. At the University of California Radiation Laboratory, Livermore, an editing generator was developed by Merrit Ellmore -- later a routine was added to even generate the pseudo-code.
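    [A minimal modern sketch, in Python and purely illustrative (none of the names below come from the systems Hopper describes), of the distinction between the first two classes: an interpretive routine looks up and executes each stored subroutine as the computation proceeds, while a compiling routine withdraws the subroutines once and links them into a complete program for future running.]

        SUBROUTINES = {
            "ADD": lambda store, a, b, c: store.__setitem__(c, store[a] + store[b]),
            "MUL": lambda store, a, b, c: store.__setitem__(c, store[a] * store[b]),
        }

        def interpret(pseudo_code, store):
            # Class 1: refer to the stored subroutine and execute it at once,
            # every time an operation is encountered during the computation.
            for op, a, b, c in pseudo_code:
                SUBROUTINES[op](store, a, b, c)

        def compile_program(pseudo_code):
            # Class 2: withdraw the subroutines from the library once, link
            # them together, and deliver a complete program for later running.
            steps = [(SUBROUTINES[op], a, b, c) for op, a, b, c in pseudo_code]
            def program(store):
                for sub, a, b, c in steps:
                    sub(store, a, b, c)
            return program

        pseudo = [("ADD", "x", "y", "t"), ("MUL", "t", "t", "z")]  # z = (x + y)^2
        store = {"x": 2.0, "y": 3.0, "t": 0.0, "z": 0.0}
        interpret(pseudo, store)        # interpretive: lookup on every step
        compile_program(pseudo)(store)  # compiled: lookup done once, run many times
        print(store["z"])               # 25.0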
    The type of automatic coding used for a particular computer is to some extent dependent upon the facilities of the computer itself. The early computers usually had but a single input-output device, sometimes even manually operated. It was customary to load the computer with program and data, permit it to "cook" on them, and when it signalled completion, the results were unloaded. This procedure led to the development of the interpretive type of routine. Subroutines were stored in closed form and a main program referred to them as they were required. Such a procedure conserved valuable internal storage space and speeded the problem solution.
    With the production of computer systems, like the UNIVAC, having, for all practical purposes, infinite storage under the computer's own direction, new techniques became possible. A library of subroutines could be stored on tape, readily available to the computer. Instead of looking up a subroutine every time its operation was needed, it was possible to assemble the required subroutines into a program for a specific problem. Since most problems contain some repetitive elements, this was desirable in order to make the interpretive process a one-time operation.
    Among the earliest such routines were the A-series of compilers, of which A-0 first ran in May 1952. The A-2 compiler, as it stands at the moment, commands a library of mathematical and logical subroutines of floating decimal operations. It has been successfully applied to many different mathematical problems. In some cases, it has produced finished, checked and debugged programs in three minutes. Some problems have taken as long as eighteen minutes to code. It is, however, limited by its library, which is not as complete as it should be, and by the fact that since it produces a program entirely in floating decimal, it is sometimes wasteful of computer time. However, mathematicians have been able rapidly to learn to use it. The elapsed time for problems -- the programming time plus the running time -- has been materially reduced. Improvements and techniques now known, derived from experience with the A-series, will make it possible to produce better compiling systems. Currently, under the direction of Dr. Herbert F. Mitchell, Jr., the BIOR compiler is being checked out. This is the pioneer -- the first of the true data-processing compilers.
    At present, the interpretive and compiling systems are as many and as different as were the computers five years ago. This is natural in the early stages of a development. It will be some time before anyone can say this is the way to produce automatic coding.
    Even the pseudo-codes vary widely. For mathematical problems, Laning and Zierler at MIT have modified a Flexowriter, and the pseudo-code in which they state problems clings very closely to the usual mathematical notation. Faced with the problem of coding for ENIAC, EDVAC and/or ORDVAC, Dr. Gorn at Aberdeen has been developing a "universal code". A problem stated in this universal pseudo-code can then be presented to an executive routine pertaining to the particular computer to be used to produce coding for that computer. At the Bureau of Standards, Dr. Wegstein in Washington and Dr. Huskey on the West Coast have developed techniques and codes for describing a flow chart to a compiler.
    In each case, the effort has been three-fold:
    1) to expand the computer's vocabulary in the direction required by its users.
    2) to simplify the preparation of programs both in order to reduce the amount of information about a computer a user needed to learn, and to reduce the amount he needed to write.
    3) to make it easy to avoid mistakes, to check for them, and to detect them.
    The ultimate pseudo-code is not yet in sight. There probably will be at least two in common use; one for the scientific, mathematical and engineering problems using a pseudo-code closely approximating mathematical symbolism; and a second, for the data-processing, commercial, business and accounting problems. In all likelihood, the latter will approximate plain English.
    The standardization of pseudo-code and corresponding subroutine is simple for mathematical problems. As a pseudo-code "sin x" is practical and suitable for "compute the sine of x", "PWT" is equally obvious for "compute Philadelphia Wage Tax", but very few commercial subroutines can be standardized in such a fashion. It seems likely that a pseudocode "gross-pay" will call for a different subroutine in every installation. In some cases, not even the vocabulary will be common since one computer will be producing pay checks and another maintaining an inventory.
    Thus, future compiling routines must be independent of the type of program to be produced. Just as there are now general-purpose computers, there will have to be general-purpose compilers. Auxiliary to the compilers will be vocabularies of pseudo-codes and corresponding volumes of subroutines. These volumes may differ from one installation to another and even within an installation. Thus, a compiler of the future will have a volume of floating-decimal mathematical subroutines, a volume of inventory routines, and a volume of payroll routines. While gross-pay may appear in the payroll volume both at installation A and at installation B, the corresponding subroutine or standard input item may be completely different in the two volumes. Certain more general routines, such as input-output, editing, and sorting generators will remain common and therefore are the first that are being developed.
    There is little doubt that the development of automatic coding will influence the design of computers. In fact, it is already happening. Instructions will be added to facilitate such coding. Instructions added only for the convenience of the programmer will be omitted since the computer, rather than the programmer, will write the detailed coding. However, all this will not be completed tomorrow. There is much to be learned. So far, as each group has completed an interpreter or compiler, they have discovered in using it "what they really wanted to do". Each executive routine produced has led to the writing of specifications for a better routine.
    1955 will mark the completion of several ambitious executive routines. It will also see the specifications prepared by each group for much better and more efficient routines, since testing in use is necessary to discover these specifications. However, the routines now being completed will materially reduce the time required for problem preparation; that is, the programming, coding, and debugging time. One warning must be sounded: these routines cannot define a problem nor adapt it to a computer. They only eliminate the clerical part of the job.
    Analysis, programming, and definition of a problem required 85%, and coding and debugging 15%, of the preparation time. Automatic coding materially reduces the latter time. It assists the programmer by defining standard procedures which can be frequently used. Please remember, however, that automatic programming does not imply that it is now possible to walk up to a computer, say "write my payroll checks", and push a button. Such efficiency is still in the science-fiction future.

          in the High Speed Computer Conference, Louisiana State University, 16 Feb. 1955, Remington Rand, Inc. 1955
  • Hume, J. N. P. and Worsley, Beatrice H. "Transcode, A System of Automatic Coding for FERUT"
          in [ACM] JACM 2(4) (Oct 1955)
  • Worsley, B. H. and Hume, J. N. P. "A New Tool for Physicists"
          in Physics in Canada 10(4) 1955
  • Gotlieb, C. C. et al. "Free use of the Toronto computer, and the remote programming of it, II" Comput. Auto. 5, 7 (1956), 29-31.
  • Bemer, R. W. "The Status of Automatic Programming for Scientific Problems" view details Abstract: A catalogue of automatic coding systems that are either operational or in the process of development together with brief descriptions of some of the more important ones Extract: Summary
    Let me elaborate these points with examples. UNICODE is expected to require about fifteen man-years. Most modern assembly systems must take from six to ten man-years. SCAT expects to absorb twelve people for most of a year. The initial writing of the 704 FORTRAN required about twenty-five man-years. Split among many different machines, IBM's Applied Programming Department has over a hundred and twenty programmers. Sperry Rand probably has more than this, and for utility and automatic coding systems only! Add to these the number of customer programmers also engaged in writing similar systems, and you will see that the total is overwhelming.
    Perhaps five to six man-years are being expended to write the Model 2 FORTRAN for the 704, trimming bugs and getting better documentation for incorporation into the even larger supervisory systems of various installations. If available, more could undoubtedly be expended to bring the original system up to the limit of what we can now conceive. Maintenance is a very sizable portion of the entire effort going into a system.
    Certainly, all of us have a few skeletons in the closet when it comes to adapting old systems to new machines. Hardly anything more than the flow charts is reusable in writing 709 FORTRAN; changes in the characteristics of instructions, and tricky coding, have done for the rest. This is true of every effort I am familiar with, not just IBM's.
    What am I leading up to? Simply that the day of diverse development of automatic coding systems is either out or, if not, should be. The list of systems collected here illustrates a vast amount of duplication and incomplete conception. A computer manufacturer should produce both the product and the means to use the product, but this should be done with the full co-operation of responsible users. There is a gratifying trend toward such unification in such organizations as SHARE, USE, GUIDE, DUO, etc. The PACT group was a shining example in its day. Many other coding systems, such as FLAIR, PRINT, FORTRAN, and USE, have been done as the result of partial co-operation. FORTRAN for the 705 seems to me to be an ideally balanced project, the burden being carried equally by IBM and its customers.
    Finally, let me make a recommendation to all computer installations. There seems to be a reasonably sharp distinction between people who program and use computers as a tool and those who are programmers and live to make things easy for the other people. If you have the latter at your installation, do not waste them on production and do not waste them on a private effort in automatic coding in a day when that type of project is so complex. Offer them in a cooperative venture with your manufacturer (they still remain your employees) and give him the benefit of the practical experience in your problems. You will get your investment back many times over in ease of programming and the guarantee that your problems have been considered.
    Extract: IT, FORTRANSIT, SAP, SOAP, SOHIO
    The IT language is also showing up in future plans for many different computers. Case Institute, having just completed an intermediate symbolic assembly to accept IT output, is starting to write an IT processor for UNIVAC. This is expected to be working by late summer of 1958. One of the original programmers at Carnegie Tech spent the last summer at Ramo-Wooldridge to write IT for the 1103A. This project is complete except for input-output and may be expected to be operational by December, 1957. IT is also being done for the IBM 705-1, 2 by Standard Oil of Ohio, with no expected completion date known yet. It is interesting to note that Sohio is also participating in the 705 FORTRAN effort and will undoubtedly serve as the basic source of FORTRAN-to-IT-to-FORTRAN translational information. A graduate student at the University of Michigan is producing SAP output for IT (rather than SOAP) so that IT will run on the 704; this, however, is only for experience; it would be much more profitable to write a pre-processor from IT to FORTRAN (the reverse of FORTRANSIT) and utilize the power of FORTRAN for free.
          in "Proceedings of the Fourth Annual Computer Applications Symposium" , Armour Research Foundation, Illinois Institute of Technology, Chicago, Illinois 1957 view details
  • [Bemer, RW] [State of ACM automatic coding library August 1958]
          in "Proceedings of the Fourth Annual Computer Applications Symposium" , Armour Research Foundation, Illinois Institute of Technology, Chicago, Illinois 1957 view details
  • [Bemer, RW] [State of ACM automatic coding library May 1959] Extract: Obiter Dicta
    Bob Bemer states that this table (which appeared sporadically in CACM) was partly used as a space filler. The last version was enshrined in Sammet (1969) and the attribution there is normally misquoted.
          in [ACM] CACM 2(05) May 1959
  • Carr, John W. III "Computer Programming" volume 2, chapter 2, pp115-121
          in E. M. Crabbe, S. Ramo, and D. E. Wooldridge (eds.) "Handbook of Automation, Computation, and Control," John Wiley & Sons, Inc., New York, 1959.
  • Bemer, R "ISO TC97/SC5/WGA(1) Survey of Programming Languages and Processors" December 1962 view details
          in [ACM] CACM 6(03) (Mar 1963) view details
  • Campbell-Kelly, Martin "The Development of Computer Programming in Britain (1945 to 1955)" Extract: FERUT and TRANSCODE
    The FERUT Programming Group.
    There was initially a very direct transfer of computer programming techniques from Manchester University to the FERUT programming group. The leader of the University of Toronto group, C. C. Gotlieb, spent three months at Manchester University in the spring of 1951 preparing for the delivery of the Mark I to Toronto.

    When the University of Toronto took possession of the Mark I, it inherited the input routines developed at Manchester University and the subroutine library. In addition, it obtained the services of three people with experience on the Manchester machine: D. G. Prinz of the Moston programming group, C. Strachey of the NRDC, and C. M. Popplewell of Manchester.

    The people who played the most important roles in developing programming systems for the FERUT were J. N. P. Hume and B. H. Worsley (who had been a research student at Cambridge). During 1953 a new input routine was developed, mainly by Hume (1954), to supplant the Manchester scheme. Hume's input routine, however, was similar to the Manchester schemes, including the awful teleprinter notation. This reflected the Manchester experience of the impossibility of making a break with the teleprinter notation once it had become established. A comprehensive library of over 100 routines was eventually developed, of which the library inherited from Manchester was the root stock (Gotlieb 1954).

    The most interesting programming innovation at Toronto was an automatic coding system TRANSCODE developed by Hume and Worsley (1955) that came into operation in September 1954. Although TRANSCODE was a completely separate development from Brooker's Mark I Autocode, which was being developed at about the same time at Manchester, the two systems had a good deal in common (although Brooker's notation was far more elegant). It is unfortunate that communications between Toronto and Manchester were by this time so limited that these separate but parallel developments took place with such a wasteful duplication of effort.
    Extract: Conclusions
    Conclusions
    When we compare the development of programming at the three centers -- Cambridge, Manchester, and Teddington -- there are several factors to consider. First, we must consider the quality of the programming system; this is a subjective issue that ranges from the purely aesthetic to the severely practical -- for example, from the elegance of an implementation at one extreme to the speed of a matrix inversion at the other. We must also consider the failures of the three centers, especially the failure to devise a programming system that exploited the full potential of the hardware. Finally, we must consider the influence of the programming systems on other groups; this is less subjective -- it was described in the previous two sections and is summarized in Figure 2.

    Few could argue that Cambridge devised the best of the early programming systems. The work done by Wilkes and Wheeler stood out as a model of programming excellence. Cambridge made several outstanding contributions to early programming: the use of closed subroutines and parameters, the systematic organization of a subroutine library, interpretive routines, and the development of debugging routines. Perhaps the finest innovation was the use of a symbolic notation for programming, as opposed to the use of octal or some variant. It is difficult for us today to appreciate the originality of this concept.
    If Cambridge can be said to have had a failure, it was the failure to develop programming languages and autocodes during the middle and late 1950s, as reflected in the second edition of Wilkes, Wheeler, and Gill (1957), of which Hamming said in a review,

    It is perhaps inevitable that the second edition, though thoroughly revised, does not represent an equally great step forward, but it is actually disappointing to find that they are no longer at the forefront of theoretical coding. (Hamming 1958)

    By neglecting research into programming languages, Cambridge forfeited its preeminence in the programming field.

    In the early 1950s, however, Cambridge was by far the most important influence on programming in Britain. This came about partly through the excellence of the programming system and partly through the efforts that Cambridge made to promote its ideas. Two machines (LEO and TREAC) based their programming system directly on EDSAC, and five machines (Nicholas, the Elliott 401 and 402, MOSAIC, and Pegasus) were strongly influenced by it. It is also probably true that no programming group was entirely uninfluenced by the Cambridge work. Overseas, the influence of the EDSAC programming system was just as great, largely through the classic programming textbook by Wilkes, Wheeler, and Gill (1951) (see Campbell-Kelly 1980a).

    At Manchester the programming system devised by Turing for the Mark I makes a disappointing contrast with the elegance of the Cambridge work. From the point of view of notation, it is difficult to find a single redeeming feature. Probably the only feature of real merit was the concept of dividing a program into physical and logical pages. Echoes of this idea can be discerned in today's segmented computers.

    In its way, Turing's programming system did have considerable influence, for all efforts to replace it with something more suitable were curiously unsuccessful.

    Thus programmers for both Mark Is and all seven Mark I*s had to struggle with Turing's clumsy teleprinter notation throughout the life of these machines. Here is perhaps one of the most valuable lessons of this study: poor design decisions taken early on are almost impossible to correct later. Thus even when people with a Cambridge background arrived at Manchester, they were unable to make a really fresh start. By producing two successive input routines that were not much better than Turing's, they managed to combine the worst of both worlds: an unsatisfactory programming system that was not even a stable one.

    The one real high spot of the Manchester programming activity was Brooker's Mark I Autocode. Brooker's achievement was the most important programming event of the mid-1950s in Britain. If Brooker had not devised his autocode at that time, programming in Britain might have developed very differently. The autocodes for DEUCE and Pegasus were directly inspired by Brooker's and had considerable notational similarities with it. Beyond the time scale of this paper, Brooker's Mark I Autocode and his later Mercury Autocode (1958) were a dominant influence on British programming until well into the 1960s, when languages such as ALGOL 60 and FORTRAN came onto the scene in Britain.

    Of the three programming systems devised at Cambridge, Manchester, and Teddington, it is probably the latter that inspires the least passion. If the punching of programs in pure binary was an efficient method, it was also a singularly uninspiring one. Curiously, aficionados of the Pilot ACE and the DEUCE had great enthusiasm for programming these machines, which really had more to do with the joys of optimum coding and exploiting the eccentric architecture than with any merits of the programming system.

    In many ways the crudity of the programming system for the Pilot ACE was understandable: the speed of events, the lack of a backing store, and so on. But perpetuating it on the DEUCE was a minor tragedy; by replicating the programming system on the 32 commercially manufactured DEUCES, literally hundreds of rank-and-file programmers were imbued in this poor style of programming. MOSAIC (Section 3.4) shows that it was entirely possible to devise a satisfactory programming system for machines of the ACE pattern; it is most unfortunate that this work was not well enough known to influence events.

    NPL did, however, have one notable programming success: the GIP matrix scheme devised by Woodger and Munday. This scheme became the sole reason for the existence of many DEUCES. The reliability of the mathematical programs produced by NPL, their comprehensiveness, and their speed have become almost legendary. A history of numerical methods in Britain would no doubt reveal the true role of NPL in establishing the methods of linear algebra as an analytical tool for the engineer.

    In an interview, P. M. Woodward, one of the principals of the TREAC programming activity, recalled, "Our impression was that Cambridge mattered in software whereas Manchester mattered in hardware" (Woodward and Jenkins 1977). He might well have added that NPL mattered in numerical methods.

    Because this paper has been primarily concerned with the development of programming during the period 1945-1955, Cambridge has received pride of place as the leading innovator. Had the paper been concerned principally with hardware or numerical methods, however, the ranking of the three centers would have been different. But considered purely as innovators of programming, there can be no question that Cambridge stood well above the rest.
    Abstract: By 1950 there were three influential centers of programming in Britain where working computers had been constructed: Cambridge University (the EDSAC), Manchester University (the Mark I), and the National Physical Laboratory (the Pilot ACE). At each of these centers a distinctive style of programming evolved, largely independently of the others. This paper describes how the three schools of programming influenced programming for the other stored-program computers constructed in Britain up to the year 1955. These machines included several prototype and research computers, as well as five commercially manufactured machines. The paper concludes with a comparative assessment of the three schools of programming.


          in Annals of the History of Computing 4(2) April 1982 IEEE
  • Griffith, B. A. "My Early Days in Toronto", pp62-63 Extract: Transcode and FERUT
    The first PhD candidate with whom I worked was Beatrice Worsley, always known to members of our computing group as "Trixie." She had graduated with a BA degree in honors mathematics at Toronto during World War II. In late 1945 or early 1946, she had gone to England for postgraduate study at Cambridge. There she became interested in the work on electronic computers, and for her PhD thesis she undertook to write an account of the early pioneer work on the construction of electronic computers at Cambridge, Manchester and the NPL. For reasons unknown to me she left Cambridge, probably early in 1950, and returned to Toronto before finishing her thesis.

    Before leaving Cambridge, Trixie was able to make arrangements to complete her thesis in Toronto. The authorities at Cambridge merely required that she find a senior staff member at Toronto who would act as an extramural representative of Cambridge and supervise the completion of her thesis. In the autumn of 1950, Trixie asked me if I would agree to act as her supervisor until she completed her thesis. I agreed to help, and during the next few months Trixie gave me quite regular progress reports to read. Her work was well organized and clearly written -- I had no need to do more than make some encouraging comments and perhaps a few minor suggestions. In a few months Trixie completed her thesis and had the required number of copies typed, bound, and sent to Cambridge. Soon she was granted the degree of PhD and continued to do valuable work for our Computation Centre until at least 1959 and probably for many years thereafter. [Editor's note: I believe this to be the first PhD in which the thesis actually involved modern computers, but not the first awarded by a Department of Computer Science.]

    In 1952 Trixie worked with Pat Hume in writing a program for Ferut that was known as Transcode. The program enabled Ferut to accept simple mnemonic instructions and convert them into the usual Ferut instructions. It contained a number of subroutines for the calculation of some transcendental functions such as trigonometrical functions and probably e^x, log x, and so on. [Editor's note: See the article in this issue by J.N.P. Hume on software for the Ferut.]
    The mnemonic instructions provided by Transcode were very simple and proved useful in the training of programmers. Pat Hume and Kelly Gotlieb initiated evening courses of about 20 sessions for the training of those wishing to learn computer programming. In each session Pat or Kelly would present a sample program, pointing out the need for each instruction and describing the operation that would be performed by the computer in response to that instruction. Then one or two exercises, similar to the given example, would be assigned to the members of the class. In addition to Pat and Kelly, there were always a few volunteers, some from the Computation Centre and others with experience in programming. They were available to assist, in a tutorial manner, members of the class who had any difficulty with the assigned exercises. Joe Kates and I often acted as two of these volunteers.

          in Annals of the History of Computing 16(2) Summer 1994
  • Patterson Hume, J.N. "Development of Systems Software for the Ferut Computer at the University of Toronto, 1952 to 1955" Abstract: The Ferut computer was a copy of the Mark I computer at the University of Manchester. Two years after its delivery in Toronto, systems software had been developed to vastly enlarge the community of users. To go from a few dedicated programmers patient enough to deal with the extremely difficult machine code to a situation where anyone with two hours to spare could program successfully was a major advance. This article retraces the steps in this pioneering experiment in automatic programming, in which the author played a central role. Extract: Transcode
    Transcode
    The Transcode system was finished by September 1954 and was the work of myself and Trixie Worsley. A total of six person-months was required. Many of the library subroutines used as part of Transcode were created by others. The distinctive difference from Speedcode was that ours was a compiler. Because our scale-of-32 computer already used letters of the alphabet, mnemonic operation codes were obvious. Since our Transcode instructions did not have to reside in memory after translation, we could be wasteful of space. We assigned four-letter words to each operation: MULT for multiplication, SUBT for subtraction, and so on. There was no need, as in Speedcode, to have two types of instructions. We made all instructions, even those requiring only one argument, the three-address type for uniformity. The notion of making our language machine independent was never considered, so we took every advantage of the nature of Ferut's hardware. Each three-address instruction was assigned four lines of memory, one for the operation and one each for the three addresses -- this made the translation much simpler.

    During execution five of the eight pages of the immediate-access memory were assigned to the translated program, Perm, and the floating-point instructions.

    Keeping the floating-point instructions in memory meant that the translated program had only to transfer control to the appropriate part, after planting the arguments and a return address in fixed locations. No transfer between drum and memory was required.

    This left three pages for data; part of one of these pages was also used to hold the link list of subroutines. Each floating number required three lines, one for the exponent and two for the significant digits. So a page of 64 lines could hold a maximum of 21 floating numbers. These locations were given names, essentially predeclared variable names. One page, the X-page, had locations (or variables) X01, X02, up to X21. Another, the Y-page, had Y01, Y02, up to Y21. The partial page, the Z-page, had locations Z01, Z02, up to Z13. To add the number in X01 to the number in Y01 and put the result in Z01, you used the instruction ADDN X01.0 Y01.0 Z01.0

    The extra position in each of the three arguments was used for indexing. The period was just for readability and was not part of the instruction. Each of the three sets of variables could act as a vector (or an array). Indexing was done by appending the number of a B- register, which held an integer, to the variable name. The address X01.3 was the one on the X-page that was designated by X01 plus the contents of B-register 3.

    The contents of any B-register could be set by BSET, incremented by INCB, decremented by NEGB, and stored in memory by JOTB. The contents of B-register 5 could, for example, be increased by 3 by the instruction INCB 000.5 003.0 000.0

    This is an example of an instruction that did not use all of its arguments, but, as mentioned, keeping all instructions in the same form made the compilation process much simpler.
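    [A minimal Python model of the addressing scheme just described may help; it is an illustration only, not Hume and Worsley's implementation, and details such as the number of B-registers and the treatment of ".0" as "unindexed" are assumptions.]

        class TranscodeModel:
            def __init__(self):
                # Predeclared variables: 21 each on the X- and Y-pages, 13 on
                # the Z-page (a 64-line page holds 21 three-line floating numbers).
                self.pages = {"X": [0.0] * 21, "Y": [0.0] * 21, "Z": [0.0] * 13}
                # B-registers; the count is an assumption, and B0 is kept at
                # zero here so that ".0" means "no indexing".
                self.b = [0] * 8

            def _resolve(self, arg):
                # "X01.3" -> page "X", base location 01, indexed by B-register 3.
                name, breg = arg.split(".")
                return name[0], int(name[1:]) - 1 + self.b[int(breg)]

            def step(self, op, a1, a2, a3):
                # Every instruction is uniformly three-address, even when some
                # arguments go unused, exactly as described in the text.
                if op in ("ADDN", "SUBT", "MULT"):
                    p1, i1 = self._resolve(a1)
                    p2, i2 = self._resolve(a2)
                    p3, i3 = self._resolve(a3)
                    x, y = self.pages[p1][i1], self.pages[p2][i2]
                    self.pages[p3][i3] = {"ADDN": x + y,
                                          "SUBT": x - y,
                                          "MULT": x * y}[op]
                elif op == "INCB":
                    # INCB 000.5 003.0 000.0 adds 3 to B-register 5; the third
                    # argument is present only to keep the format uniform.
                    self.b[int(a1.split(".")[1])] += int(a2.split(".")[0])

        m = TranscodeModel()
        m.pages["X"][0], m.pages["Y"][0] = 2.0, 3.0
        m.step("ADDN", "X01.0", "Y01.0", "Z01.0")  # Z01 := X01 + Y01
        print(m.pages["Z"][0])                     # 5.0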

    Extract: Experience with Transcode
    Experience with Transcode
    Transcode was an almost instant success. It could be learned with two hours of lectures or understood from a manual prepared by the Computation Centre staff. No longer were researchers in physics, chemistry, and astronomy waiting for the services of a professional programmer. The ease of programming and debugging decreased the time between wanting a calculation done and having results from weeks to a few days.

    Researchers at the National Research Council of Canada and at the Defense Research Board no longer had to rely on their programmer representatives in Toronto and were mailing programs to be punched and run. Often any errors could be found for them and results returned the same day.

    In 1955, thanks to the kindness of CN Telecommunications, the teletypewriter lines between Saskatoon and Toronto were made available certain evenings, and Transcode programs punched by people at the University of Saskatchewan in Saskatoon were sent to Ferut. These were run, and the results punched and sent back to their originators. This was the first recorded long-distance on-line use of a computer in Canada, and it would never have worked without Transcode.

    Transcode was not a toy. Examples of successful research calculations were recorded in an article addressed to Canadian physicists.

    Worsley and I sent a copy of our article to Grace Murray Hopper, whom we knew was working in this area, and received a warm reply: "It is so encouraging to read of the experience of others who have so successfully applied the automatic coding techniques and it helps no end in presenting the ideas to others to be able to cite your experience as well as our own."

    Extract: Automatic coding at Manchester
    Automatic coding at Manchester
    An early attempt at automatic coding called Autocode was completed by A.E. Glennie of the Ministry of Supply, but used only by him. It did not deal with scaling or the two-level store.

    In 1954 R.A. Brooker introduced his system, which he reluctantly also called Mark I Autocode. This he published in September 1955, too late to influence Transcode. His system had predeclared floating-point variables called v1, v2, v3, ..., v5000, thus eliminating all reference to the two-level store for the data. This introduced considerable inefficiency whenever a sequence of variables was used that was longer than could be kept in the electrostatic store. A form of the assignment statement was used instead of the three-address format for arithmetic instructions. For example, v12 = v12 + v13 was used instead of Transcode's ADDN Z01.0 Z02.0 Z01.0, or n2 = n2 + 1 instead of Transcode's INCB 000.2 001.0 000.0.

    The right-hand side of the "assignment" could only be a dyadic operation. There was no loop operation, all counted repetitions being programmed explicitly. Printing was of one number at a time; for example, *v12 = v12 would result in the printing of the contents of v12 (the asterisk meant to print). Reports of the success of Brooker's Autocode were similar to those of Transcode. Transcode was more sophisticated in many ways than Autocode in having counted loops, formatted printing, ability to create functions, faster compile time, and user control of data segmentation.
    Extract: After Transcode
    Ferut continued to serve the Canadian scientific community, but by 1958 the maintenance costs were seen to be larger than the rental costs of an IBM 650 -- and the Ferut service was still on again, off again. Age was taking its toll. Ferut went to the National Research Council in Ottawa (where it was used for a while), and we got the IBM 650. My interest flagged as there already existed the SOAP (Symbolic Optimal Assembly Program) system to place the one-plus-one machine-code instructions strategically around the drum. The machine was decimal and the machine (or assembly) code was not as forbidding as Ferut's had been. But the experience we had with Transcode led us to want a higher level language for writing programs. The Fortran language had been invented and was available in time for the 650. Although it took forever to compile, it did seem the way of the future and it was to survive the next generations of machines -- it was machine independent.

    Fortran provided, as advertised, formula translation. Arithmetic operations did not need to be dyadic only, as arithmetic expressions could be parsed and the appropriate machine code generated. At one point, just before the demise of Ferut, we had contemplated implementing such a language ourselves, where the translation would be two stage, from the algebraic language to Transcode and then, as before, to machine code. But we never did it.

    Convinced that higher level languages were the only way to go, we began offering four-hour courses on Fortran using color slides and audio tapes to prepare the scientific community in Toronto to use it. When we got the IBM 7094 in the early 1960s, we were ready.

    Transcode was for us a major step in the direction of what had become automatic programming. It was much more than an assembly language and yet, because it had a fixed format and was definitely not machine independent, it did not rate to be called a programming language. But that depends on who makes up the definition. It was an exciting time to be involved in computing. When you are a pioneer, all contributions are welcome.
    Extract: Translation
    Translation
    The compiler and operating system for Transcode occupied 18 tracks on the drum. After all the information was read in from the tape, all numbers and constants being converted from decimal to binary, translation began. Each Transcode instruction was replaced by a sequence of from one to 16 machine instructions.

    Since the translated program resident in memory could not exceed two pages at a time, segments of this length were created, each being sent in turn to the drum and having the calling-in sequence for the next segment as required for execution. A list was kept of the correspondences between Transcode instruction numbers and addresses of the beginning of the corresponding machine-code sequence (both drum track and store location). This list was used whenever a control transfer instruction (TRNS) was encountered. If the transfer was a jump ahead in the program, its location was stored in a list of as-yet-untranslated addresses. At the end of translation, which was signaled by the Transcode instruction QUIT in the program, all the untranslated addresses were translated in a second pass. Compile time was short relative to execution time of most programs, largely because of the simplicities obtained by using knowledge of the machine hardware to advantage in creating the Transcode language.
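    [The bookkeeping for forward jumps amounts to classic two-pass backpatching. The Python sketch below uses hypothetical opcodes; the real system expanded each Transcode instruction into one to 16 machine instructions and wrote the output to the drum in two-page segments, both omitted here. Only the list-keeping follows the description above.]

        def translate(program):
            # program: list of (opcode, argument) pairs, numbered from 1.
            machine_code = []   # the emitted machine instructions
            addr_of = {}        # Transcode instruction number -> machine address
            unresolved = []     # (machine address, target number) for jumps ahead

            for number, (op, arg) in enumerate(program, start=1):
                addr_of[number] = len(machine_code)
                if op == "TRNS":
                    if arg in addr_of:                # backward jump: known now
                        machine_code.append(("JUMP", addr_of[arg]))
                    else:                             # jump ahead: fix on pass two
                        unresolved.append((len(machine_code), arg))
                        machine_code.append(("JUMP", None))
                elif op == "QUIT":                    # QUIT signals end of translation
                    machine_code.append(("STOP", None))
                    break
                else:                                 # any other operation
                    machine_code.append((op, arg))

            # Second pass: translate the addresses left unresolved above.
            for machine_addr, target in unresolved:
                machine_code[machine_addr] = ("JUMP", addr_of[target])
            return machine_code

        demo = [("LOAD", "X01"), ("TRNS", 4), ("ADDN", "Y01"), ("QUIT", None)]
        print(translate(demo))
        # [('LOAD', 'X01'), ('JUMP', 3), ('ADDN', 'Y01'), ('STOP', None)]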
    Extract: Automatic coding
    Automatic coding
    About the fall of 1953 it was decided to try to get the computer itself to help with the coding process. In the first issue of the Journal of the Association for Computing Machinery, January 1954, J.W. Backus described his success with the Speedcoding system for the IBM 701. This material had been presented at the meeting of ACM in September 1953 and so was known to us. Speedcoding made the 701 behave as a three-address, floating-point computer and was an interpretive system. Some instructions, other than the arithmetic and input-output ones, were written with one address. The system simulated the existence of three B-tubes and thus permitted indexing of an array of memory locations. It had reasonably easy input operations for data and program, with checking features. Although programs in Speedcode ran much more slowly than machine code, the reduction in programming and testing time made it an economical alternative. This report encouraged us to launch into our own Transcode project.

    Extract: Beginnings of Coding on FERUT

    In the late spring of 1952 an electronic digital computer built by Ferranti Ltd., Manchester, England, arrived at the University of Toronto. It was a copy of the Mark I computer at the University of Manchester and the second computer ever sold. Our computer was dubbed Ferut, combining the names of the manufacturer and the new owner. It was set up in the Physics Department across the hall from an office that I, a relatively new assistant professor, shared with C.C. (Kelly) Gotlieb, who held the title of chief computer of the Computation Centre. I had been using the center's IBM punched-card equipment to carry out wave-function calculations for complex atoms, a considerable improvement over a hand-cranked Millionaire mechanical calculator that had been the best the Physics Department had when I finished my PhD in 1949. Ferut was to me a dream come true -- days of computation could be compressed into minutes. But there was a catch -- we had to get the new machine going. From a hardware point of view the maintenance engineers had their hands full with hundreds of vacuum tubes and thousands of soldered connections, a job that was to keep them busy for months.

    In September 1952, ACM had a meeting in Toronto, and this attracted a number of the people from the University of Manchester whose brains could be picked for details of operating systems and machine language. This job fell to me and Beatrice H. (Trixie) Worsley, since Gotlieb was taken up with responsibilities of the conference. There was no manual with the computer and none was brought from Manchester. But a user of that computer, D.G. Prinz, had prepared one and had the facts carefully memorized. We found ourselves sitting behind him as he typed, in a most systematic way, a sort of on-line version of the Prinz manual, adding comments as he went. That was the beginning of my conversion from physicist to computer scientist -- wave-function calculations had to wait while operating system programs were written -- and I was elected. I could not have done this without the encouragement of W.H. Watson, who was both head of the Physics Department and director of the Computation Centre.


          in Annals of the History of Computing 16(2) Summer 1994
  • Smillie, Keith "The History of Computing Science at the University of Alberta" Abstract: Keith Smillie recounts personal recollections of how computing science found a place in the traditional structure of a university. Extract: FERUT use at Alberta
    The first use of an electronic computer was probably in the Department of Physics, which in May 1957 established a link with the Ferranti computer, known as FERUT, in the Department of Physics at the University of Toronto. It used World War II vacuum tubes and occupied a large room. Input and output were by five-hole punched paper teletype tape. The machine had the capacity of one of today's programmable pocket calculators but was much less reliable. Although a crew of eight engineers was required for maintenance, it could not be depended upon to run without failure for more than half an hour or so. The Edmonton terminal was a teletype machine in a closet in the basement of the Arts Building. The National Research Council paid for the computer time. It was used one evening a week throughout the summer of 1957 by several faculty members in the Department of Physics and their graduate students.


          in IEEE Annals of the History of Computing, Spring 1996