INTERCODE(ID:3089/int012)

Mark I* Autocode 


Autocode system for AMOS, a modified Ferranti Mark I* at ARDE in the UK.

Gawlik and Berry, 1957. Later modified to run on all LEO IIIs.

According to Berry, it was based on Brooker's Mark I Autocode, not Brooker's Mercury Autocode.


People:
Related languages
Amos Input System => INTERCODE   Evolution of
INTERCODE => AS Intercode   Implementation
INTERCODE => LEO Intercode   Evolution of

References:
  • Gawlik, H.J. "A Comprehensive Input System for AMOS." Applied Mathematics and Mechanics Division, Memo 9/54, Armament Research Establishment, Fort Halstead, Kent, June 1954
  • Berry, F.J. "Catalogue of Amos Library Subroutines." Branch Memorandum B4/2/55, Armament Research Establishment, Fort Halstead, Kent
  • Berry, F.J. "Catalogue of Amos Library Subroutines. Supplement 1." Branch Memorandum B4/3/55, Armament Research Establishment, Fort Halstead, Kent
  • Gawlik, H.J. "A New and Enlarged Version of the Amos Input System." Branch Memorandum B4/1/56, Armament Research Establishment, Fort Halstead, Kent
  • Berry, F.J. "Intercode: An Easy Way of Using the Digital Computer AMOS" Branch Memorandum B4/3/57, Armament Research Establishment, Fort Halstead, Kent
  • Berry, F. J. "Intercode, a simplified coding scheme for AMOS" pp55-58 Abstract: BERRY, F. J. Intercode, a simplified coding scheme for AMOS. Comput. J. 2 (1959), 55-58.

    Intercode is an interpretive routine for the Ferranti Mark I* computer, providing for 11 three-address pseudo-instructions and 100 floating-point numbers. It is useful for training programmers and for short computing problems.

    C. C. Gotlieb, Toronto, Ont.
    Rev. No. 4573, Math. Rev. 21, 7 (July-August 1960)
    Extract: Introduction
    AMOS is installed in a research and development establishment where it is expected to satisfy the computing requirements of many different groups. All the computing problems which have arisen in practice can be put into one of two distinct classes. There are large problems, most of which involve the solution of partial differential equations and often require new computing techniques, and small routine problems involving no more than substituting numbers into formulae. AMOS has proved extremely satisfactory for the large problems, for the following reasons. It is not difficult to program in the basic code of the machine and so exploit all its possibilities. The coherent logical design of the whole machine and the flexibility of a one-address code are a definite advantage when organizing the mechanical computation of these large problems. Moreover, the total storage capacity of the machine has proved ample for all the programs written so far; the maximum size of problem which may be computed on AMOS is likely to be restricted more on account of running time than by storage capacity.
    AMOS has not proved so satisfactory for the user with small computing problems. It is clear that a machine of much smaller capacity would be adequate, and certain differences in basic structure would be desirable. Since AMOS must nevertheless be used for small problems, an alternative system of programming has been devised, which in effect makes AMOS behave like a smaller machine more suited to small, routine computations. This system is called Intercode; it was inspired chiefly by the Mark I Autocode (Brooker, 1956).
    Unlike the Mercury Autocode (Brooker, 1958), it is not intended to replace programming in the basic code of the machine, except for a strictly limited range of
    problems. Indeed, many conventional programming techniques were deliberately retained in Intercode so that it might also serve as a first step in learning to
    program in the basic code. Extract: BASIC FEATURES OF INTERCODE
    BASIC FEATURES OF INTERCODE
    For the purpose of multiplication in AMOS, the position of the binary point is considered fixed, so that all the numbers occurring in a computation must be less than 2 in modulus. While it is often possible, especially in partial differential equation problems, to choose variables which stay in the range ≤ 2, intermediate results frequently go "out of range," so that a programmer has to introduce extra scaling into his program, as necessary. No extra scaling would be needed in 90% of the problems done on AMOS if results as big as 10³ could be accumulated directly. In Intercode, numbers may go up to 2³⁹ (about 5 × 10¹¹); this removes the scaling problem completely, allowing all practical computations to be carried out in the original physical units, with a reasonable margin of safety for intermediate results.
    As far as the programmer is concerned, the storage in Intercode is all on one level and there is no question of access time. The instructions and numbers are kept quite separately. The capacity of the instruction store is limited only by the addressing system chosen for jump instructions, and is, in fact, far larger than will ever be required. In the type of computation for which Intercode was designed, there are never many numbers involved simultaneously. A number store capable of holding 100 numbers is therefore adequate and the location of any number can be specified economically by a two-digit decimal integer in the range 00 to 99.
          in The Computer Journal 2(2) July 1959
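Berry's scaling argument can be made concrete with a small sketch. This is a hypothetical illustration, not Intercode itself: the `multiply` helper, the scale factor, and the sample operands are all invented. It contrasts a machine whose numbers must stay below 2 in modulus (forcing the programmer to pre-scale operands and rescale results) with Intercode's range of up to 2³⁹, where the same computation runs directly in its original physical units.

```python
# Hypothetical sketch (not Berry's code): why a fixed binary point with
# |x| < 2 forces programmer-managed scaling, and why Intercode's wider
# range removes the need for it.

LIMIT_MARK_I = 2.0       # basic AMOS code: numbers must be < 2 in modulus
LIMIT_INTERCODE = 2**39  # Intercode: roughly 5 * 10**11

def multiply(a, b, limit):
    """Multiply, signalling 'out of range' as the machine would."""
    result = a * b
    if abs(result) >= limit:
        raise OverflowError(f"{result} out of range (limit {limit})")
    return result

# Quantities in physical units, e.g. a speed of 340 and a time of 12.5.
# In basic code the programmer pre-scales both operands (a power-of-two
# scale, as on a binary machine, keeps the arithmetic exact here) and
# rescales the product afterwards...
scale = 1024.0
scaled = multiply(340 / scale, 12.5 / scale, LIMIT_MARK_I) * scale * scale

# ...while in Intercode the same product fits directly, unscaled.
direct = multiply(340, 12.5, LIMIT_INTERCODE)

assert scaled == direct == 4250.0
```

Under this sketch's assumptions, both routes yield 4250.0; the difference is that the first requires the programmer to know a safe scale factor in advance, which is exactly the burden Berry says Intercode removes.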
  • Gotlieb, C. C. Review of Intercode (Review No 4573) in Math Rev 21(7) (July-August 1960) Abstract: Intercode is an interpretive routine for the Ferranti Mark I* computer, providing for 11 three-address pseudo-instructions and 100 floating-point numbers. It is useful for training programmers and for short computing problems.
          in The Computer Journal 2(2) July 1959
  • Forbes, J.M. "An introduction to compiler writing" Abstract: The Computer Journal, Volume 8, Issue 2, pp. 98-102.


    An introduction to compiler writing
    JM Forbes

    English Electric-Leo-Marconi Computers Ltd., Hartree House, Queensway, London. UK


    This paper is based on a talk given to the Birmingham branch of the B.C.S. in January 1965. It refers to problems met with in developing a translator and a compiler for LEO III computers. These have a single-address order code, one level of store is usually recommended, and the order code is designed to handle and store data directly in any radix: decimal, binary, or a mixed radix such as sterling.
    Extract: Cleo
    LEO Computers Ltd. started the development of CLEO, which is a full-scale Autocode, in 1961. It is now being used by the majority of those English Electric-LEO-Marconi Computers Ltd. customers who have a member of the LEO range of computers. This has been a most encouraging response; more than one user has said that all his future programming will be done in CLEO and at the time of writing at least 25 of the first 30 users of LEO equipment have used CLEO to some extent. Extract: Cleo
    CLEO, in common with other Autocodes, has two main divisions; the procedure division and the data division. These two divisions reflect the two main problems which occupy the attention of the CLEO compiler.
    These two main problems are
    (i) to break down the procedures into their constituent parts and into computer code;
    (ii) to allocate addresses to identifiers and ensure these addresses are associated with the references to their identifiers in the procedures.

    It should be added that the compilation process from CLEO to machine code is divided into two parts: from CLEO to INTERCODE and then from INTERCODE to machine code.
    Extract: INTERCODE
    INTERCODE
    It is established practice among LEO users that a data-processing system will not start to process any data until that data has been thoroughly vetted. This is equally true of compilers. If a compiler does not vet its data thoroughly it is likely to get into trouble itself or, even worse, produce wrong object-program coding without any indication that this is the case. INTERCODE is an intermediate language with upwards of 150 actions, with facilities for constants and tables and an expansion factor of about 1.0 to 1.5. Once the function part of any of these 150 actions has been recognized there remains the problem of vetting the other constituent parts of the action, of which there may be up to 5.
    It is a further requirement of the INTERCODE system that the nature of errors detected be printed out with the instruction in the program listing. The system adopted is to divide errors into two classes—disastrous and others. Disastrous errors would stop the program running correctly and are indicated by 5 special marks in the program listing. In addition the offending entry is designated.
    To carry out individual tests on each of these 150 actions, or even to identify them individually and to carry out tests by subroutine, is bound to be a fairly long process. Instead a table is held, whose entries are made up of an instruction number and certain coded details, each of which comprises a number of bits, which signify what the valid field values of the instruction are. This table is held in action-number order and, in fact, one entry is held only for the highest-numbered action in a group which shares common checking characteristics. An additional complication is that certain actions may have continuation lines, and this swells the number of entries. However, the total amount of space used is equivalent only to 85 instructions.
    The actual checking process consists of doing a "table look-up," finding the relevant checking constant and using that constant to enter a number of routines to carry out the appropriate detailed checking. This is not claimed to be some new and marvellous technique peculiar to compilers; there are many commercial data-vetting programs which use similar techniques, but what it does illustrate is that in an area where it may be thought that particular compiler problems exist, the solution lies in an approach which could well be applied to programs in general.
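The table look-up vetting scheme Forbes describes can be sketched as follows. The action numbers, field names, and bit assignments here are invented for illustration; only the structure follows his description: one table entry per group of actions sharing checking characteristics, keyed by the highest action number in the group, holding a bit-coded "checking constant" that drives the detailed field checks.

```python
# Hypothetical reconstruction of table-driven instruction vetting.
# The concrete numbers and field names are invented; the technique
# (group-keyed table, bit-coded checking constant) is the one described.
import bisect

FIELD_A, FIELD_B, FIELD_LITERAL = 1, 2, 4   # bit-coded field permissions

# (highest action number in group, checking constant), in action-number order
CHECK_TABLE = [
    (49,  FIELD_A),                             # actions 0-49: one address field
    (99,  FIELD_A | FIELD_B),                   # actions 50-99: two fields
    (149, FIELD_A | FIELD_B | FIELD_LITERAL),   # 100-149: also allow a literal
]

def checking_constant(action):
    """Table look-up: find the entry for the group containing this action."""
    i = bisect.bisect_left([hi for hi, _ in CHECK_TABLE], action)
    if i == len(CHECK_TABLE):
        raise ValueError(f"unknown action {action}")
    return CHECK_TABLE[i][1]

def vet(action, fields):
    """Return the names of supplied fields that the action does not permit."""
    allowed = checking_constant(action)
    errors = []
    for bit, name in [(FIELD_A, "A"), (FIELD_B, "B"), (FIELD_LITERAL, "literal")]:
        if fields & bit and not allowed & bit:
            errors.append(name)
    return errors

assert vet(30, FIELD_A) == []                 # a one-field action, used correctly
assert vet(30, FIELD_A | FIELD_B) == ["B"]    # second field not valid for this group
assert vet(120, FIELD_LITERAL) == []          # literals allowed in the 100-149 group
```

The space economy Forbes reports comes from the grouping: three table entries stand in for 150 per-action checks, and the look-up plus a handful of shared checking routines replaces 150 individual subroutines.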
    Extract: Conclusion
    Conclusion
    The case for automatic programming is well known; the two main disadvantages, inefficiency in the object program and the length of time it takes to compile, are probably equally well known. Unfortunately, these two disadvantages tend to pull in opposite directions. If the compilation process took longer one could have a more efficient object program.
    Further, compilers usually have to be written for minimal configurations. This tends to reduce the amount of store available to the compiler writer and hence the number of instructions, and this means that the number of passes must be increased.
    The inefficiency factor arises partly from the need to deal with situations in a general way. For example, at the entry to any routine one cannot make any assumptions about what is in any of the accumulators or modifiers or what radix is set. Therefore the compiler writer has to take precautions and perhaps insert some extra instructions, in case the correct values are not in the relevant registers. Any hand coder would probably only do this where necessary.
    Furthermore a compiler can make no assumptions about the likelihood or otherwise of any particular routine of a program being obeyed. To a compiler they are all one. And it is probably because of this that compilers for machines with two levels of storage have tended to be less efficient than others.

          in The Computer Journal 8(2) July, 1965
  • Pratt, Terrence W.; and Lindsay, Robert K. "A processor-building system for experimental programming languages" pp613-621
          in [AFIPS] Proceedings of the 1966 Fall Joint Computer Conference FJCC 29
  • Bachelor, G. A. review of Pratt 1966 Abstract: This article describes a system called AMOS, which is called a "processor-building" system. A processor is defined as a system which accepts programs written in a particular programming language as input and then executes those programs. The processing of programs is usually divided into two phases -- translation and execution. The translation phase (performed by a translator) converts the original program into an intermediate form, which may be machine code or some "high-level" form. An interpreter performs the execution phase by doing the operations specified in the intermediate form.

    One can define a processor for a programming language L by describing: (1) a translator for L which accepts L programs and data as input and produces an intermediate form of these programs and data as output; (2) a set of basic processes for performing the operations which can be specified in L programs; and (3) an interpreter which accepts the intermediate form of L programs as input and interprets it as indicating the basic processes to be called, the sequence of these calls, and the input parameters for each call.

    A convenient way to define a translator is to specify the language by a set of syntax rules (e.g., Backus Normal Form), and the translation by a corresponding set of semantic (or interpretation) rules. A number of such "syntax-directed" systems have been described and implemented, which allow one to construct translators for experimental programming languages with relative ease. In most such systems, there is a fixed intermediate form into which the source language must be translated. Such a restriction on the output from a translator may make it rather difficult to construct a translator for a new language, since there may not be a straightforward way to translate the new language into the specified intermediate form.

    The AMOS system alleviates this problem by allowing the user to choose a convenient intermediate form and then write both the translator and interpreter. The syntax of the language is described by a set of rules called a "tactic grammar." The interpretation routines for the translator are written in a simple programming language called the "L-language." This same L-language is used to write the interpreter which processes the output of the translator. Of course, if there already exists a suitable intermediate form and an interpreter for it, the user need only design a translator which translates his new language into the existing intermediate form.

    This article describes the tactic grammar and the L-language mentioned above, and illustrates them by an example: the translation of simple arithmetic assignment statements into a Polish suffix form, and the interpretation of this suffix form. The AMOS system operates on a CDC 3600 computer and has been used to implement a translator that translates HINT programs into IPL-V list structures, which are interpreted by the IPL-V interpreter. The article is clearly written and much easier to read than many other articles on syntax-directed systems.

          in ACM Computing Reviews 8(06) November-December 1967
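The worked example Bachelor refers to — translating a simple arithmetic assignment into Polish suffix (postfix) form and then interpreting that form — can be sketched in miniature. This is a generic illustration of the technique, here using operator precedence for the translation and a stack machine for the interpretation; it is not the actual AMOS tactic grammar or L-language.

```python
# Generic sketch of translate-to-suffix followed by stack interpretation;
# not the AMOS system itself.

PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_suffix(tokens):
    """Translate a parenthesis-free infix expression to Polish suffix."""
    output, ops = [], []
    for tok in tokens:
        if tok in PRECEDENCE:
            # pop operators of equal or higher precedence first
            while ops and PRECEDENCE[ops[-1]] >= PRECEDENCE[tok]:
                output.append(ops.pop())
            ops.append(tok)
        else:                      # operand (variable name)
            output.append(tok)
    return output + ops[::-1]      # flush remaining operators

def interpret(suffix, env):
    """Stack interpreter for the suffix form; env maps variables to values."""
    apply = {"+": lambda x, y: x + y, "-": lambda x, y: x - y,
             "*": lambda x, y: x * y, "/": lambda x, y: x / y}
    stack = []
    for tok in suffix:
        if tok in apply:
            y, x = stack.pop(), stack.pop()
            stack.append(apply[tok](x, y))
        else:
            stack.append(env[tok])
    return stack[0]

# right-hand side of "a := b + c * d", with b=2, c=3, d=4
suffix = to_suffix(["b", "+", "c", "*", "d"])
assert suffix == ["b", "c", "d", "*", "+"]
assert interpret(suffix, {"b": 2, "c": 3, "d": 4}) == 14
```

The split mirrors the two pieces a user of such a processor-building system supplies: a translator that emits a chosen intermediate form, and an interpreter that executes it.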
  • Feldman, Jerome and Gries, David "Translator writing systems" pp77-113 Abstract: A critical review of recent efforts to automate the writing of translators of programming languages is presented. The formal study of syntax and its application to translator writing are discussed in Section II. Various approaches to automating the postsyntactic (semantic) aspects of translator writing are discussed in Section III, and several related topics in Section IV.
          in [ACM] CACM 11(02) (February 1968)
  • Stock, Karl F. "A listing of some programming languages and their users" in RZ-Informationen. Graz: Rechenzentrum Graz 1971 125 Abstract: 321 Programmiersprachen mit Angabe der Computer-Hersteller, auf deren Anlagen die entsprechenden Sprachen verwendet werden können. Register der 74 Computer-Firmen; Reihenfolge der Programmiersprachen nach der Anzahl der Herstellerfirmen, auf deren Anlagen die Sprache implementiert ist; Reihenfolge der Herstellerfirmen nach der Anzahl der verwendeten Programmiersprachen.

    [321 programming languages, with an indication of the computer manufacturers on whose machines the respective languages can be used. Index of the 74 computer companies; ranking of the programming languages by the number of manufacturers on whose machines each language is implemented; ranking of the manufacturers by the number of programming languages used.]
          in [ACM] CACM 11(02) (February 1968)
  • Stock, Marylene and Stock, Karl F. "Bibliography of Programming Languages: Books, User Manuals and Articles from PLANKALKUL to PL/I" Verlag Dokumentation, Pullach/Munchen 1973 299 Abstract: PREFACE AND INTRODUCTION
    The exact number of all the programming languages still in use, and those which are no longer used, is unknown. Zemanek calls the abundance of programming languages and their many dialects a "language Babel". When a new programming language is developed, only its name is known at first and it takes a while before publications about it appear. For some languages, the only relevant literature stays inside the individual companies; some are reported on in papers and magazines; and only a few, such as ALGOL, BASIC, COBOL, FORTRAN, and PL/1, become known to a wider public through various text- and handbooks. The situation surrounding the application of these languages in many computer centers is a similar one.

    There are differing opinions on the concept "programming languages". What is called a programming language by some may be termed a program, a processor, or a generator by others. Since there are no sharp borderlines in the field of programming languages, works were considered here which deal with machine languages, assemblers, autocoders, syntax and compilers, processors and generators, as well as with general higher programming languages.

    The bibliography contains some 2,700 titles of books, magazines and essays for around 300 programming languages. However, as shown by the "Overview of Existing Programming Languages", there are more than 300 such languages. The "Overview" lists a total of 676 programming languages, but this is certainly incomplete. One author has already announced the "next 700 programming languages"; it is to be hoped the many users may be spared such a great variety for reasons of compatibility. The graphic representations (illustrations 1 & 2) show the development and proportion of the most widely-used programming languages, as measured by the number of publications listed here and by the number of computer manufacturers and software firms who have implemented the language in question. The illustrations show FORTRAN to be in the lead at the present time. PL/1 is advancing rapidly, although PL/1 compilers are not yet seen very often outside of IBM.

    Some experts believe PL/1 will replace even the widely-used languages such as FORTRAN, COBOL, and ALGOL. If this does occur, it will surely take some time - as shown by the chronological diagram (illustration 2).

    It would be desirable from the user's point of view to reduce this language confusion down to the most advantageous languages. Those languages still maintained should incorporate the special facets and advantages of the otherwise superfluous languages. Obviously such demands are not in the interests of computer production firms, especially when one considers that a FORTRAN program can be executed on nearly all third-generation computers.

    The titles in this bibliography are organized alphabetically according to programming language, and within a language chronologically and again alphabetically within a given year. Preceding the first programming language in the alphabet, literature is listed on several languages, as are general papers on programming languages and on the theory of formal languages (AAA).
    As far as possible, most of the titles are based on autopsy (firsthand inspection). However, the bibliographical description of some titles will not satisfy bibliography-documentation demands, since they are based on inaccurate information in various sources. Translation titles whose original titles could not be found through bibliographical research were not included. In view of the fact that many libraries do not have the quoted papers, all magazine essays should have been listed with the volume, the year, issue number and the complete number of pages (e.g. pp. 721-783), so that interlibrary loans could take place with fast reader service. Unfortunately, these data were not always found.

    It is hoped that this bibliography will help the electronic data processing expert, and those who wish to select the appropriate programming language from the many available, to find a way through the language Babel.

    We wish to offer special thanks to Mr. Klaus G. Saur and the staff of Verlag Dokumentation for their publishing work.

    Graz / Austria, May, 1973
          in [ACM] CACM 11(02) (February 1968)
  • Campbell-Kelly, Martin "The Development of Computer Programming in Britain (1945 to 1955)" Extract: Intercode
    By later standards, the programming support for the Mark I* was poor. The most glaring omission was an autocode; this was particularly surprising considering the precedent of the Mark I Autocode developed at Manchester University. It fell to individual installations to develop their own automatic programming systems. For example, F. J. Berry (1959) of the Armaments Research Establishment produced a system called "Intercode" that was directly inspired by Brooker's Mark I Autocode.
    Extract: Programming on the DEUCE
    DEUCE
    The English Electric DEUCE grew out of an active collaboration between English Electric and NPL. The DEUCE was based closely on the Pilot ACE (Haley 1956).

    The initial software effort for the DEUCE lay in converting the existing Pilot ACE programs developed by NPL. Most of this work was done during 1955 in a combined effort between the users of the first three DEUCES, which were installed at English Electric, NPL, and the Royal Aircraft Establishment. This conversion work was in fact coordinated by NPL; it seems that in the mid-1950s English Electric did not see the provision of programming systems as part of their brief, although they did organize the DEUCE Users Group and a library service.

    Several active programming groups were associated with DEUCE installations, and by 1958 three important interpretive schemes for the DEUCE had emerged: GIP, TIP, and Alphacode (Robinson 1959). These three schemes had complementary domains of application: GIP was, of course, the famous matrix interpretive scheme from NPL, TIP was used for calculations on vectors, and Alphacode was used for scalars.

    The GIP matrix scheme was easily the most important programming system for DEUCE. Apart from its remarkably high speed, GIP was noted for its reliability. By means of check sums and other devices, complete confidence could be had in the results in spite of the inherent unreliability of the DEUCE (which had no parity checking, for example).

    The TIP (Tabular Interpretive Program) scheme was in effect a variant of GIP restricted to vector operations. The system was designed by the DEUCE group at Bristol Aero Engines to simplify programming for engineers and was widely used. TIP was a rather elegant system and required no formal understanding of linear algebra. It was intended to be accessible to anyone who was familiar with a "desk machine ... and a sheet of paper ruled into rows and columns" (Robinson 1959). TIP is an interesting relic of the transition from machine language to true programming languages.

    The third interpretive scheme, Alphacode, was specified by S. J. M. Denison of English Electric as an automatic coding system for naive users and for one-time jobs; Alphacode was directly inspired by the Manchester Mark I Autocode (Denison 1959). The interpreter produced programs that were typically about five times slower than conventionally coded programs, actually a considerable achievement considering the high speed of DEUCE when optimally coded.

    In November 1957 a project for an Alphacode translator (as opposed to the existing interpreter) was begun (Duncan and Huxtable 1961). The aim was the exceedingly ambitious one of producing translated programs as good as hand-coded ones. The translator was developed by F. G. Duncan, working at first with E. N. Hawkins and later with D. R. Huxtable. The system came into use toward the end of 1959. It was one of the most impressive programming achievements of its day, both in terms of sheer size (22,000 instructions) and in the difficulty of producing code for a machine with a multilevel store. The translator in practice produced code that was about two-thirds as good as handwritten code, a truly remarkable achievement given the complexity and subtlety of programming for the DEUCE.

    Several other programming schemes were produced for the DEUCE by other installations in the late 1950s. These included STAC, STEVE, GEORGE, SODA, and EASICODE (Robinson 1959). All these systems were made available through the Users Group, but they do not appear to have been used as widely as the schemes already described.

    The development of software for DEUCE can be summarized as follows. The existence of a large amount of high-quality software from NPL led English Electric into believing that it was unnecessary to develop further programming systems. English Electric did see the need to coordinate and distribute programs through the Users Group and to organize programming courses. English Electric's failure to make a timely provision of an automatic programming system for DEUCE led to a number of ad hoc developments at various DEUCE installations during the period 1957-1959, which was a wasteful duplication of effort. In underwriting the Alphacode translator, however, English Electric demonstrated that it had at last come to recognize its duty to provide programming systems for the DEUCE. In January 1960 English Electric transferred its programming staff to the Data Processing and Control Systems Division at Kidsgrove, where an automatic programming section was established under the management of F. G. Duncan (1979). At this point, machines such as the KDF 9, for which excellent software was produced, were on the horizon. Extract: Conclusions
    Conclusions
    When we compare the development of programming at the three centers -- Cambridge, Manchester, and Teddington -- there are several factors to consider. First, we must consider the quality of the programming system; this is a subjective issue that ranges from the purely aesthetic to the severely practical -- for example, from the elegance of an implementation at one extreme to the speed of a matrix inversion at the other. We must also consider the failures of the three centers, especially the failure to devise a programming system that exploited the full potential of the hardware. Finally, we must consider the influence of the programming systems on other groups; this is less subjective -- it was described in the previous two sections and is summarized in Figure 2.

    Few could argue that Cambridge devised the best of the early programming systems. The work done by Wilkes and Wheeler stood out as a model of programming excellence. Cambridge made several outstanding contributions to early programming: the use of closed subroutines and parameters, the systematic organization of a subroutine library, interpretive routines, and the development of debugging routines. Perhaps the finest innovation was the use of a symbolic notation for programming, as opposed to the use of octal or some variant. It is difficult for us today to appreciate the originality of this concept.
    If Cambridge can be said to have had a failure, it was the failure to develop programming languages and autocodes during the middle and late 1950s, as reflected in the second edition of Wilkes, Wheeler, and Gill (1957), of which Hamming said in a review,

    It is perhaps inevitable that the second edition, though thoroughly revised, does not represent an equally great step forward, but it is actually disappointing to find that they are no longer at the forefront of theoretical coding. (Hamming 1958)

    By neglecting research into programming languages, Cambridge forfeited its preeminence in the programming field.

    In the early 1950s, however, Cambridge was by far the most important influence on programming in Britain. This came about partly through the excellence of the programming system and partly through the efforts that Cambridge made to promote its ideas. Two machines (LEO and TREAC) based their programming system directly on EDSAC, and five machines (Nicholas, the Elliott 401 and 402, MOSAIC, and Pegasus) were strongly influenced by it. It is also probably true that no programming group was entirely uninfluenced by the Cambridge work. Overseas, the influence of the EDSAC programming system was just as great, largely through the classic programming textbook by Wilkes, Wheeler, and Gill (1951) (see Campbell-Kelly 1980a).

    At Manchester the programming system devised by Turing for the Mark I makes a disappointing contrast with the elegance of the Cambridge work. From the point of view of notation, it is difficult to find a single redeeming feature. Probably the only feature of real merit was the concept of dividing a program into physical and logical pages. Echoes of this idea can be discerned in today's segmented computers.

    In its way, Turing's programming system did have considerable influence, for all efforts to replace it with something more suitable were curiously unsuccessful.

    Thus programmers for both Mark Is and all seven Mark I*s had to struggle with Turing's clumsy teleprinter notation throughout the life of these machines. Here is perhaps one of the most valuable lessons of this study: poor design decisions taken early on are almost impossible to correct later. Thus even when people with a Cambridge background arrived at Manchester, they were unable to make a really fresh start. By producing two successive input routines that were not much better than Turing's, they managed to combine the worst of both worlds: an unsatisfactory programming system that was not even a stable one.

    The one real high spot of the Manchester programming activity was Brooker's Mark I Autocode. Brooker's achievement was the most important programming event of the mid-1950s in Britain. If Brooker had not devised his autocode at that time, programming in Britain might have developed very differently. The autocodes for DEUCE and Pegasus were directly inspired by Brooker's and had considerable notational similarities with it. Beyond the time scale of this paper, Brooker's Mark I Autocode and his later Mercury Autocode (1958) were a dominant influence on British programming until well into the 1960s, when languages such as ALGOL 60 and FORTRAN came onto the scene in Britain.

    Of the three programming systems devised at Cambridge, Manchester, and Teddington, it is probably the latter that inspires the least passion. If the punching of programs in pure binary was an efficient method, it was also a singularly uninspiring one. Curiously, aficionados of the Pilot ACE and the DEUCE had great enthusiasm for programming these machines, which really had more to do with the joys of optimum coding and exploiting the eccentric architecture than with any merits of the programming system.

    In many ways the crudity of the programming system for the Pilot ACE was understandable: the speed of events, the lack of a backing store, and so on. But perpetuating it on the DEUCE was a minor tragedy; by replicating the programming system on the 32 commercially manufactured DEUCES, literally hundreds of rank-and-file programmers were imbued in this poor style of programming. MOSAIC (Section 3.4) shows that it was entirely possible to devise a satisfactory programming system for machines of the ACE pattern; it is most unfortunate that this work was not well enough known to influence events.

    NPL did, however, have one notable programming success: the GIP matrix scheme devised by Woodger and Munday. This scheme became the sole reason for the existence of many DEUCEs. The reliability of the mathematical programs produced by NPL, their comprehensiveness, and their speed have become almost legendary. A history of numerical methods in Britain would no doubt reveal the true role of NPL in establishing the methods of linear algebra as an analytical tool for the engineer.

    In an interview, P. M. Woodward, one of the principals of the TREAC programming activity, recalled, "Our impression was that Cambridge mattered in software whereas Manchester mattered in hardware" (Woodward and Jenkins 1977). He might well have added that NPL mattered in numerical methods.

    Because this paper has been primarily concerned with the development of programming during the period 1945-1955, Cambridge has received pride of place as the leading innovator. Had the paper been concerned principally with hardware or numerical methods, however, the ranking of the three centers would have been different. But considered purely as innovators of programming, there can be no question that Cambridge stood well above the rest.
    Abstract: By 1950 there were three influential centers of programming in Britain where working computers had been constructed: Cambridge University (the EDSAC), Manchester University (the Mark I), and the National Physical Laboratory (the Pilot ACE). At each of these centers a distinctive style of programming evolved, largely independently of the others. This paper describes how the three schools of programming influenced programming for the other stored-program computers constructed in Britain up to the year 1955. These machines included several prototype and research computers, as well as five commercially manufactured machines. The paper concludes with a comparative assessment of the three schools of programming.


          in Annals of the History of Computing 4(2) April 1982 IEEE