NEBULA (ID:150/neb001)

Ferranti business autocode 


Acronym for Natural Electronic Business Language (also given as New Electronic Business Language)

Developed by Ferranti Ltd (later ICL). An early business-oriented language for the Ferranti Orion computer.

Had a pronoun syntax, featuring QIH (quantity in hand).


Related languages
NEBULA => COMPL   Subset

References:
  • [Ferranti] "Introduction to NEBULA (Natural Electronic Business Language)" Ferranti Limited Publication CS 275 (October 1960) view details
  • [Ferranti] "NEBULA (programming examples and solutions)" Ferranti publication CS 283 (Nov. 1960) view details
  • [Ferranti] "NEBULA (programming manual)" Ferranti publication CS 282. Nov. 1960 and addendum (March 1961) view details
  • [Ferranti] "NEBULA: a programming language for Commercial Data Processing" Ferranti LD 12 Nov 1960 view details
  • [Ferranti] "NEBULA-A Programming Language for Computer Programming". Ferranti Limited Publication January 1960 view details
  • [Ferranti] "NEBULA, Addenda Nos 1 to 1-10" Ferranti April-Nov 1961 view details
  • BCS Bulletin - Literature and References to Simplified Programming Schemes for Computers, Available or Projected - November 1961 view details
  • Braunholtz T.G.H., A.G. Fraser and P.M. Hunt "NEBULA: A Programming Language for Data Processing" Abstract: NEBULA, a programming language for data processing, will be used on the Ferranti Orion Data Processing System. No previous acquaintance with automatic programming languages is assumed in this account of NEBULA. The aim is to give the reader a sense of mastery over NEBULA, rather than to give him every factual detail of the language. However, most of the basic topics are treated fully, otherwise a false impression of imprecision would be conveyed. A complete example is given in the Appendix.


          in The Computer Journal 4(3) October 1961
  • NEBULA (summarised Procedure Description). Ferranti publication CS 284 (Feb. 1961)
  • Willey, E.L.; d'Agapeyeff, A.; Marion Tribe, B.J. Gibbens, Michelle Clark, "Some commercial Autocodes -- A comparative study", A.P.I.C. Studies in Data Processing #1, Academic Press, London, 1961, pp. 53. Extract: NEBULA
    NEBULA
    The Natural Electronic Business Language. This language incorporates English expressions and a wide use of symbols. It is the first business language to allow formulae as the operands of non-arithmetic instructions, and which enables data description to be given in the Procedure Division.
  • d'Agapeyeff, A.; "Current developments in commercial automatic programming" pp107-111 view details Abstract: This paper discusses the progress made in certain aspects of commercial automatic programming, presents a progress report on the major commercial languages, and offers some hopes and expectations for the future. Extract: The properties of data
    The properties of data
    It is, of course, the available properties of the data which to a large extent determine the power of an automatic programming system, and distinguish commercial from mathematical languages.

    Consider the function of moving data within the internal store. In a mathematical language the problem is trivial because the unit which may be moved is very restricted, often to the contents of a single machine word. But in a commercial language this limitation is not acceptable. There, data units will occur in a variety of shapes and sizes, for example:

    i) Fixed Length Units (i.e. those which on each occurrence will always be of the same length) may vary widely in size and will tend not to fit comfortably into a given number of words or other physical unit of the machine. Generally the move will be performed by a simple loop, but there are some awkward points such as what to fill in the destination if the source is the smaller in size;
    ii) Static Variable Length Units (i.e. those whose length may vary when they are individually created but will not change subsequently) are more difficult to handle. Essentially the loop will have two controlling variables whose value will be determined at the moment of execution. There are again awkward points such as the detection of overflow in the destination (and deciding what to do when it occurs, since this will only be discovered at run time);
    iii) Dynamically Variable Length Units (i.e. those which expand and contract to fit the data placed in them) are even more difficult. They have all the problems of (ii), together with the need to find and allot space when they expand.

    It is clear, therefore, that a simple MOVE is less innocuous than it might seem at first. Actually the above remarks assumed that it was not possible to move data between different classes of units. The absence of this restriction, and the performance of editing functions during the process, can make the whole thing very complicated for the compiler indeed.
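    The three classes of unit described above can be made concrete with a short sketch. The Python fragment below is illustrative only: the function and class names, the padding character and the overflow policy are assumptions chosen for the example, not NEBULA semantics or syntax.

```python
# Illustrative sketch of moving data between the three unit classes discussed
# above (names and policies are assumptions, not NEBULA syntax).

def move_fixed(source: str, dest_len: int, fill: str = " ") -> str:
    """Fixed-length move: both lengths are known in advance.
    If the source is shorter, something must fill the destination;
    here we pad on the right with a fill character and truncate if longer."""
    return source[:dest_len].ljust(dest_len, fill)

def move_static_variable(source: str, dest_capacity: int) -> str:
    """Static variable-length move: lengths are only known at run time,
    so overflow of the destination has to be detected when it happens."""
    if len(source) > dest_capacity:
        raise OverflowError("destination too small for source unit")
    return source

class DynamicUnit:
    """Dynamically variable-length unit: expands and contracts to fit,
    which adds the problem of finding and allotting space on expansion."""
    def __init__(self) -> None:
        self.value = ""
    def move_in(self, source: str) -> None:
        self.value = source          # storage is (re)allotted as needed

if __name__ == "__main__":
    print(move_fixed("SMITH", 8))             # 'SMITH   '
    print(move_static_variable("JONES", 10))  # 'JONES'
    d = DynamicUnit(); d.move_in("A" * 40); print(len(d.value))
```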

    The properties of data will have a similar influence on most of the other operators or verbs in the language.

    This has particular significance when the desired attribute is contrary to that pertaining on the actual machine. Thus arithmetic on decimal numbers having a fractional part is thoroughly unpleasant on fixed-word binary machines.

    Nevertheless, despite these difficulties considerable progress has been made toward giving the user the kind of data properties he requires. Unfortunately this progress has not been matched by an improvement in machine design so that few, if any, of the languages have achieved all of the following.

    (a) The arbitrary grouping of different classes of unit, allowing their occurrence to be optional or for the repetition in variable-length lists.

    (b) The input and output of arbitrary types of records, or other conglomerations of data units, having flexible formats, editing conventions and representations.

    (c) The manipulation of individual characters, with the number and position of the characters being determined at run time.

    (d) The dynamic renaming or grouping of data units. Yet users do need these facilities. It is not always acknowledged that getting the main files on to the computer, before any processing is done, may constitute the largest single operation in many applications. Furthermore, these files will have a separate independent existence apart from the several programs which refer to them.

    Progress has also been made in declaring the data properties in such a way as to imply a number of necessary procedures to the compiler. For example, if one declares both the layout of some record to be printed, and the printed record, the compiler may deduce the necessary conversion and editing processes. It is here, in the area of input and output, that some languages have approached the aim of being problem-orientated. Extract: NEBULA
    NEBULA
    This is the language of Ferranti designed originally for the ORION but now to be also implemented on ATLAS. NEBULA has some very nice facilities, particularly in regard to input and output and the general use of formulae. Unfortunately, like FACT, it has grown a little untidy while the compiler has progressed, and there are now a great many rules in the language. It may well, however, prove to be a valuable tool once it is working.
          in The Computer Journal 5(2) July 1962
  • Gearing, H. W. "Autocodes for mathematical and statistical work" An address given at the inaugural meeting of the Edinburgh Branch on 13 December 1961 Extract: Introduction
    Introduction
    Electronic computers are able to work at high speeds only because they are programmed. Analysis of a problem, or a data processing procedure, and the programming of it for a computer in machine code, is a laborious task. In the early machines, users soon appreciated the advantage of having standard programs for assembly and program development, and a library of routines for regular calculations, complex number input and output, and for tracing the course of programs during program development, particularly when unexpected results were given when the program was tried on the machine.
    Where jobs are to be done regularly, it is still most economical, in the long run, to program them in machine language using such library routines as may be available. But for jobs which have to be done only once, or where it is desirable to try-out part of the job first and then extend the application of the computer, simplified programming systems have been developed. By their means, the computer can be addressed in a form of English, or by direct use of mathematical symbols if these are available in the character set of the teleprinter-punch used for punching the program. These simplified programming systems to which I shall apply the term "Autocodes" (originally named by Brooker at Manchester), together with available libraries of programs, constitute a very significant extension of the machinery which is now available.
    Besides saving the time spent on programming, these systems reduce the clerical errors of program writing by eliminating many tedious steps and this also reduces the time taken by program development. They also make it easier for the writer (or another person) to amend the program at a later date. In the earlier systems, the simplification entailed a varying loss of operating speed, varying between two and fifteen times as long to do a job on the machine. But a half hour of computer time after a few hours' programming in autocode is a more economic proposition for a one-off job, or trial of a routine job, than several weeks on programming, followed, after several trials, by a successful five-minute run on the computer.
    The newer programming systems, which involve a preliminary operation to compile a machine code program, will suffer less loss of speed and will become the normal method of programming computers for calculations and dataprocessing work, where the operations are not sufficiently standard to justify the writing of specific or general programs in machine language.
    Extract: Work at Rothamsted
    Work at Rothamsted
    In his valedictory Presidential address to The British Computer Society in London on 26 September 1961, Dr. Frank Yates reviewed the contribution which computers have made and are making in research statistics. It is on the solution of problems involving heavy numerical computation, in pure research, and in engineering design that, in his view, computers have achieved their most striking successes.
    Dr. Yates went on to point out that even in fields where computational tasks had previously been performed on desk calculators, computers could introduce three new features:
    (a) Speed, e.g. where further progress depends on knowing results to date.
    (b) A more thorough job, with better editing of data and more accurate calculations.
    (c) Relegation of computational methodology to the machine, so that people requiring to do calculations do not have to know the detail of the calculations involved.
    His paper (Yates, 1962), published in the January issue of The Computer Journal, reviews experience at Rothamsted and the development there of general programs for statistical work and some autocodes at other centres.
    If a general purpose program is available to include a mathematical procedure or the statistical method which one needs for a piece of analysis, then more people can use the computer. I have prepared a schedule of some of the schemes that are now available, or may be expected to be available early in 1962. Those who have access to a computer may find this of interest to follow up whichever line of development is applicable. Extract: Applications in Metal Box Company
    Applications in Metal Box Company
    In a paper published in The Computer Journal for April 1961 (Gearing 1961) I referred to the use which we, in The Metal Box Company Limited, had made of Pegasus autocode.
    I reviewed, in some detail, two programs. One of these was an analysis of a market survey for household trays in which interviews were conducted with some 1,100 households, covering 17 general questions and 10 observations of each tray found at the house: these questionnaires were analysed and 17 tables for the internal report were printed direct from the computer output tape. The other program related to part of our work on experimental sales forecasting and is now available in the Pegasus/Sirius interchange scheme (Ferranti, 1960).
    Our first computer application to quality control was in 1958-59 when we undertook an analysis of variance in connection with a productive operation being carried out by a group of machines in the chain between the sheet of tinplate and the finished open top can. Three factors were involved.
    The autocode program for Pegasus was thought to be rather slow and a full machine code program was written by Mr. D. Bulcock and is now available in the Pegasus interchange scheme. There are other analyses of variance programs available, notably one by BISRA (Caner and Taylor, 1960) which caters for up to seven factors, but if there are more than seven levels and only three factors involved, our program permits all the levels of data to be used.
    Nowadays, we would not attempt to write a full machine code program unless the job was going to be frequently done and would require considerable machine time. In the group of machines which we are using, a compiler-program has become available which automatically translates the Pegasus autocode program into Sirius machine orders (Ferranti, 1959 and 1961).
    In 1960 we were asked to assist in the analysis of data on the variability of some raw material which had been collected from sampled consignments over two years. Several different characteristics of the material had been measured. An autocode program was written to analyse each characteristic separately, printing sample means, ranges, standard deviations, and compiling frequency distributions of means and standard deviations. A hierarchic analysis of variance was also given at the end of each characteristic. We were asked to undertake this work on 29 March and the calculations were substantially completed on 12 April. Further calculations and a correlation between two characteristics were made on 3 June and 5 October 1960.
    Here I would like to stress that although the program was written in autocode, which is normally advocated for one-off jobs, the program is a general one. The progress of the calculations is controlled by ten parameters and the print routine by seven more. Thus one program served for the analysis of all the different characteristics, including some that involved preliminary arithmetic on pairs of observations.
    The correlation program was written separately but took only two hours to write, using pairs of existing data tapes fed in on the two tape readers simultaneously. Extract: Scientific Autocodes
    Scientific Autocodes
    The list appended covers a wide range of programs, compilers, autocodes. Among the autocodes which can be taught in a few days and which are already fully operational are:
    Mercury autocode.
    Pegasus/Sirius autocode.
    Ferranti Matrix Interpretive Scheme.
    Deuce Alphacode.
    IBM Fortran.
    Edsac 2 Autocode.
    Stantec Zebra Simple Code.
    Elliott 803 autocode.
    Elliott and other systems based on ALGOL
    Extract: Commercial Autocodes
    Commercial Autocodes
    Those concerned with Commercial Data Processing should have a look at:
    ICT Rapidwrite-Cobol.
    Ferranti Nebula.
    Cleo & Gypsy when available.
    These may take a couple of weeks to study, because, speaking from experience with Nebula, there are not only procedure descriptions but also file outlines and specifications of format of data and results when dealing with computers having considerable ancillary equipment. The Scientific Autocodes are usually concerned with one medium of input/output only, punched tape or punched-cards.
    Programming a data processing operation in a Commercial Autocode like Nebula becomes a full-time job; but it is easier to train staff in Nebula than a machine code and the autocode compiler will (we hope) take care of housekeeping routines when opening and closing files. We are using young men and women of O level mathematics, who have had experience of controlling our punched card routines, for this work.
          in The Computer Bulletin March 1962
  • Kilner, Daphne "Automatic Programming Languages for Business and Science" Abstract: A Conference under this title was held on 17-18 April 1962 by the Mathematics Department of the Northampton College of Advanced Technology in co-operation with the British Computer Society. The following is a summary report on the Proceedings which will be published in full in the Computer Journal Extract: Aims
    Aims
    What do we want from these Automatic Programming Languages? This is a more difficult question to answer than appears on the surface as more than one participant in the recent Conference of this title made clear. Two aims are paramount: to make the writing of computer programs easier and to bring about compatibility of use between the computers themselves. Towards the close of the Proceedings one speaker ventured that we were nowhere near achieving the second nor, indeed, if COBOL were to be extended any further, to achieving the first.
    These aims can be amplified. Easier writing of programs implies that they will be written in less, perhaps in much less, time, that people unskilled in the use of machine language will still be able to write programs for computers after a minimum of training, that programs will be written in a language more easily read and followed, even by those completely unversed in the computer art, such as business administrators, that even the skilled in this field will be relieved of the tedium of writing involved machine language programs, time-consuming and prone to error as this process is. Compatibility of use will permit a ready exchange of programs and applications between installations and even of programmers themselves (if this is an advantage!), for the preparation of programs will tend to be more standardised as well as simplified. Ultimately, to be complete, this compatibility implies one universal language which can be implemented for all digital computers.
    Extract: NEBULA
    NEBULA
    Ferranti considered that it was essential for their ORION customers to have a good auto-coding system and that COBOL 60 lacked some facilities they regarded as necessary. COBOL tended to be bound by business requirements in the USA, e.g. it is orientated towards character rather than binary machines, and if they had accepted it they would be penalising their customers for a language over which they had little control, besides the actual delay in getting the COBOL decisions through.
    Their language NEBULA is similar to COBOL but gives a greater freedom of choice of input/output media with choice of format within those media. It can, for example, take all existing punched card codes and there is nothing comparable in COBOL to provide for layout presentation as there is in NEBULA. A neat solution to the problem of debugging the object program had been found in developing two versions of the compiler, the production and program testing versions. An input parameter chose one or the other. The present compiler for ORION, of some 30,000 instructions, is expected to be ready by the end of 1962 and that for ATLAS is in the course of construction.    Both compilers owe a debt to techniques developed at Manchester University.

          in The Computer Bulletin September 1962
  • Rousell, A R "A progress report on NEBULA"
          in The Computer Journal 5(3) October 1962
  • D'Agapeyeff, A.; Baecker, H. D.; and Gibbens, B. J. "Progress In Some Commercial Source Languages" pp277-298
          in Goodman, Richard (ed) "Annual Review in Automatic Programming" (3) 1963 Pergamon Press, Oxford
  • Hirschmann, W. review of d'Agapeyeff 1962 (Comp J) Abstract: In a very short and concentrated paper, the author tries to treat three different aspects of commercial automatic programming: general requirements of a commercial compiler as opposed to those of an algebraic compiler; progress reports on existing commercial languages; and outlook into the future. The result is rather fragmentary. Time or space limits prevented the author from making more than a few relevant though well taken points concerning each of his topics.

    The paper contains some detailed description of the problems involved in data handling in commercial translators with emphasis on the need for more flexibility than presently available. The author, as have many before him, again questions the value of attempting to create languages "any fool can use" at the cost of efficiency and flexibility, when even these languages will not prevent the fool from having to debug his programs.

    On existing commercial translators, the author lists and compares briefly COBOL, FACT, COMTRAN and several English efforts which "are either not working, or not on a par with their American equivalents."

    The paper concludes with some not so new but nevertheless appropriate recommendations to computer manufacturers and standards committees, and the expectation of the universal acceptance of COBOL as the commercial language.

          in ACM Computing Reviews 4(01) January-February, 1963
  • Bennett, R. K. review of Roussel 1962 Abstract: This report contains a small amount of information which would be of interest to NEBULA (Ferranti's answer to COBOL for its ORION computer) followers, COBOL detractors, and those who wish to keep abreast of compiler development. The author reviews Ferranti's reasons for having "ignored" COBOL 60, and claims superiority of NEBULA over COBOL in the area of data and equipment descriptions, whereby READ and WRITE become "completely machine and data-medium independent."

    It is interesting to note that NEBULA permits the use of machine instructions, but the use of this facility is discouraged to make the programs machine-independent. The author reports over 95% of all programs are being written completely in Autocode. Those interested in NEBULA will find it described in an earlier issue.

          in ACM Computing Reviews 5(04) July-August 1964
  • Stoker, J. W. review of Willey et al 1961 Abstract: "This paper is intended to serve as an interim report between the publication of the first and second editions of "Some Commercial Autocodes -- A Comparative Study" (APIC Studies in Data Processing, 1, Academic Press, 1961).
    "We have compared the progress m four languages -- COBOL, IBM Commercial Translator, FACT and NEBULA; and introduced some description of three others -- ICT RAPIDWRITE, CLEO and FILECODE.J'
    The authors are associated with a firm of computer analysts in England, and apparently this was actually written sometime in 1962.
    After a general discussion of notation, functional capabilities, and debugging, a section deals with each of the seven languages. The authors compare COBOL 60 with COBOL 61, and emphasize that manufacturer commitments indicate future general use of COBOL in the United States. In conclusion the authors remark, "The most obvious conclusion . . . is the dismal record achieved in this field in the UK compared to the USA."

          in ACM Computing Reviews 5(06) November-December 1964
  • Boles, John A. Logical Design of the NEBULA Computer, Document cc-68-23, June 1 1968
  • King, P.J.H. "Systems analysis documentation: computer-aided data dictionary definition" Extract: BCL, COBOL, NEBULA
    The major data processing languages (e.g. COBOL, NEBULA) recognise the importance of data definition by having a distinct data division. Recent language development lays even greater emphasis on data structure specification and BCL (Hendry, 1966; Hendry and Mohan, 1968) could be described as 'structure oriented'.
    The systems methodology proposals discussed in the foregoing all present their data dictionary as an alphabetic list of data element names together with field requirements and miscellaneous information. They do not include group or structure names in the dictionary nor provide formal facilities for group naming and specification. The data divisions of COBOL and NEBULA, on the other hand, emphasise structure and display it clearly. BCL does not attempt to display structure but specifies it precisely in a very simple way. Notation of the BCL type is adopted in the suggestions in this paper. Information conveyed in this notation is processed to provide data dictionary documentation, the self-consistency or otherwise of the information being determined during processing. Amending and correcting facilities are provided to alter and improve the documentation. It is suggested that the processing involved should be a computer function.
    Attributes of a data dictionary - programming requirements
    As stated above a data dictionary provides working information for implementation. What does this requirement involve?
    First, it must specify all data elements involved giving a useful and meaningful name to each for human communication purposes. A field specification for each element or adequate information to determine one is required, together with allowable ranges for numerical items and sets of allowable values for non-numerical items. Secondly, it should provide information on the grouping and structuring of the data, since this is often a natural way of defining records within a programming scheme. As with elements, sensible and meaningful names are required for groups and structures. A third requirement is information on variable and optional occurrence of data. For example, in an input structure groups or elements occurring optionally or a variable number of times must be so specified. With variable occurrence, information is required on variability likely to be expected in practice, since decisions may be necessary on whether to use fixed or variable length records.
    Data names which are clearly meaningful frequently prove rather cumbersome for programming. A fairly common technique of avoiding this is to assign simple codes as alternatives to names to give the required brevity. We suggest that a data dictionary should define such codes.
    Data dictionary construction during analysis and design
    Data definition in programming requires individual data elements to be identified and defined before groups and structures; there is definition 'upwards' from the elements. This is so even where the notation (e.g. COBOL) is apparently 'downwards'. It is necessary for program generation but implies that formalised documentation and use of the computer's checking capacity must wait on all detail being specified.
    Data specification is 'from the top down', that is broad groups and structures are defined first and given names, then sub-groups and sub-structures and so on down to the individual elements; this is the natural approach of systems analysis. BCL has a simple notation for such a method of working and groups and structures are defined by statements of the type
    A is (B, C, D)
    B is (X, Y), etc.
    This notation corresponds to the natural approach of systems analysis. In BCL the above definition of A is only valid if B, C and D are already defined. Use of this type of notation for analysis and design requires this restriction to be relaxed, since introduction of data names without a precise definition must be allowed. There will, of course, be a general idea of what such a name signifies but there must be freedom to leave precise specification to later.
    In the example above A and B are defined as group names but C, D, X and Y remain undefined. Subsequently they may be defined as group names or specified as data elements by the giving of a field definition, information on permissible values, uniqueness, etc. There is advantage in not requiring field specifications at the time names are introduced as is usually required in programming languages. Names not defined as groups or given a field definition represent data about which further information must be provided.
    A large number of definitions of the type discussed, together with field specifications, information on variability, optionality, etc., would not be very readable even though precise in information content. In addition to proposing formalised ways of giving these definitions it is suggested they should be processed by computer, vetted for errors and used to create a data dictionary file. From this the data dictionary documentation will be obtained. Thus it is explicitly recognised that part of systems analysis and design is itself data processing and that a computer can be used advantageously in this. An important feature is that the dictionary file is created at an early stage and information is added continuously as the work proceeds. A display is always available of the 'state of the work so far' with processing and clarification as it proceeds. Much of the formal documentation chore thus becomes a computer function.
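    The top-down style of definition described above (group definitions of the form A is (B, C, D), whose members may remain unspecified until later) can be sketched roughly as follows. This Python sketch is illustrative only; the class and method names are assumptions, and it is neither BCL nor the author's actual processing scheme.

```python
# Minimal sketch of a top-down data dictionary (assumed names, not BCL syntax).

class DataDictionary:
    def __init__(self):
        self.groups = {}       # group name -> list of member names
        self.fields = {}       # element name -> field specification string
        self.mentioned = set() # every name introduced so far

    def define_group(self, name, members):
        """Record 'name is (members...)'; members need not be defined yet."""
        self.groups[name] = list(members)
        self.mentioned.update(members)
        self.mentioned.add(name)

    def define_field(self, name, spec):
        """Attach a field specification to a data element."""
        self.fields[name] = spec
        self.mentioned.add(name)

    def undefined(self):
        """Names introduced but not yet defined as a group or element,
        i.e. work still outstanding, as the paper suggests reporting."""
        defined = set(self.groups) | set(self.fields)
        return sorted(self.mentioned - defined)

dd = DataDictionary()
dd.define_group("A", ["B", "C", "D"])   # A is (B, C, D)
dd.define_group("B", ["X", "Y"])        # B is (X, Y)
dd.define_field("X", "numeric, 6 digits")
print(dd.undefined())                   # ['C', 'D', 'Y'] remain to be specified
```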
          in The Computer Journal 12(1) 1969
  • Leonard, AJ and Tribe, ME "Notable features of Orion" pp344-347 Abstract: The Orion Computer was designed by Ferranti Ltd. and announced in 1959. It is a binary machine with time-sharing capability and uses an operating system called OMP (Organisation and Monitor Program). Because of the unpromising state of COBOL at that time, a commercial language NEBULA was designed specifically for the Orion. It goes out of commission in 1976.

    So that many of its valuable and noteworthy characteristics do not become forgotten with the machine's demise, the following project was drawn up as a joint ICP/PACL effort. Hopefully it may be used, in part or whole, as an aid to forthcoming design. Extract: Introduction
    Introduction
    The purpose of this investigation, which was carried out at the Prudential Assurance Co. as a joint effort by them and ICL, was to make a note of all worthy and, in some ways, unique features of the Orion machine and its associated languages and operating facilities. These could then be borne in mind for subsequent design when more sophisticated data handling will surely extract more benefit from the many advanced facilities which the Orion was not fully able to exploit.
    The inability to capitalise on advanced facilities is particularly true of the NEBULA compiler: increased object program complexity is the price paid for data compression, and with the elaborate code optimisation facilities, the result is a very large and complicated compiler (some 250,000 words) with ensuing loss of speed. Computer time has been lengthened by the necessity to use magnetic tape rather than discs for file storage. Further, almost 15% of time is used in the dumping procedures.
    No doubt many of these anomalies would have been overcome with the passage of time had the Orion project not been abandoned: an attached data store, e.g. disc, would have made the supply of data in the compiler more efficient and, further, software advances would have removed many of the difficulties connected with the compiler.
    There is a strong argument to be made for concepts such as the separation of logical and physical description in the data structure, and for the use of a form as the logical Input-Output data unit rather than a line. These features are unique and there are opinions suggesting that this separation will become necessary in future, particularly with regard to the data base concept.
    It was eight years after this idea was first conceived for Orion that an article in Computer and Data Processing: Expert Opinion Survey VIIIA, dated June 1968, declared that '. . . a slight familiarity with the subject (data files) reveals a primary distinction between physical and logical file structures. Physical file structure is properly a part of operating systems and hardware design'.
    With the Orion, however, this has been an expensive feature in that reasonably sophisticated programming is necessary with consequently heavier overheads. The underlying philosophy remains sound and future designers can take note of the advantages in the context of data base file structuring.
    It is worthy of mention that NEBULA was the first issue of a collaboration between language experts and the users (this may have accounted for its exceptional flexibility).
    Multiprogramming has been operable since 1962 and there is justification for disappointment with the general lack of appreciation of the value of this advance which was foreseen so early in Orion. At the Cardiff Conference in September 1962 other systems were seen to have only inoperable or unsafe multijob facilities. It permits adequate machine time in the multiprogramming environment with excellent logging and diagnostic facilities. Time sharing in the present day sense is possible and, in fact, there has been an installation with simultaneously 'plugged in' terminals.
    The Operating System is regarded as very satisfactory even when NEBULA is not used and the systems overheads compare very favourably with those of systems implemented five or six years later.
    Largely because of the complementary nature of the Operating System and the NEBULA system, the Orion is a good machine for the operator to exercise his skill, although it is superficially easy to work on. An understanding of the machine system is necessary to draw the full benefit from the scope provided for the operators.
    So, many features are a decade in advance of their time and their full value is only now beginning to be realised when communications and centralised filing systems seem to be the way ahead. Extract: Nebula features
    6. NEBULA
    6.1. The compiler
    (a) The compiler makes use of a modular programming concept.
    (b) It is designed as an historical system in that all work carried out is remembered and taken account of automatically by the compilation process.
    (c) A natural corollary to the modular and historical approach is the opportunity it provides for standardisation of logical descriptions and the sections within them. This leads to good interfacing.
    (d) There are good, clear, comprehensive diagnostics. All errors in one phase are reported together at the end of the phase and must be corrected before proceeding to the succeeding phase.
    (e) NEBULA has a large vocabulary with only 10 reserved words. The identifiers can be any length.
    6.2. Controls
    (a) The idea of major and minor restarts is written into the NEBULA compiler. Major restarts comprise the ordinary dump and restart. Minor restarts are used when a peripheral failure has occurred, e.g. when paper tape is exhausted; also the 'GO ON', 'GO BACK' facility on punched cards can be employed as a restart.
    (b) The Restart facilities for magnetic tape are flexible. These utilise the block identifier and backward read facility. There are three commands available to the operator:
    GO ON - with the proviso that this procedure will ignore that block of data
    GO BACK - i.e. to the current block for up to six further attempts
    REPOSITION - i.e. after 'n' attempts to accommodate the data, the operator has the option of transferring the reel to another deck for a further 'n' attempts to read or write.
    (c) Working file values can be set by parameters on Job (Control) tape.
    e.g. A run constant such as current interest rate can be set to, say, 23 by writing Rate = 23.
    (d) There is a choice of either three or four work areas for magnetic tape files. The four area system is the more usual two input, two output buffer system and the more economical of tape, while the three area system has three buffers which are used cyclically (without copying) for Input, Updating and Output. The shared addressing routines also save core.
    There is a special verb for READ/WRITE.
    (e) There is an economical duplicate tape output facility which copies the dumps as well as data.
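    The magnetic-tape restart options listed under 6.2(b) above can be modelled as a small decision routine. The sketch below is a rough Python illustration: only the three command names and the limit of six further attempts come from the text; the function signature, return strings and retry mechanics are assumptions.

```python
# Illustrative model of the magnetic-tape restart options in 6.2(b).
# Only the command names and the six-attempt limit come from the source text.

MAX_REREADS = 6   # "up to six further attempts" for GO BACK

def handle_block_failure(command, read_block):
    """Decide what to do after a block fails to read or write.

    command    -- operator's choice: 'GO ON', 'GO BACK' or 'REPOSITION'
    read_block -- callable that retries the transfer and returns True on success
    """
    if command == "GO ON":
        # That block of data is ignored and processing continues.
        return "block ignored"
    if command == "GO BACK":
        for attempt in range(1, MAX_REREADS + 1):
            if read_block():
                return f"recovered on attempt {attempt}"
        return "still failing after six attempts; consider REPOSITION"
    if command == "REPOSITION":
        # After 'n' attempts the reel may be transferred to another deck
        # for a further 'n' attempts to read or write.
        return "transfer reel to another deck and retry"
    raise ValueError(f"unknown operator command: {command}")

# Example: a block that reads successfully on the third re-read.
flaky = iter([False, False, True])
print(handle_block_failure("GO BACK", lambda: next(flaky)))
```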
    6.3. Procedures
    (a) Monitoring facilities are excellent with up to 20 different traces available as fixed routines. These can be initiated by console or job tape.
    (b) There is a facility for clearing a field, record or group in a single source language instruction and similarly clearness can be examined in one instruction.
    (c) Overlay is governed by the 'usage' statements; in default of these the compiler organises overlay according to core specified at assembly time (machine description).
    6.4. Data structure
    (a) There is a neat separation of logical and physical structuring; each is set up in a columnar layout.
    (b) Logical:
    Packing can be arranged by compiler for any data structure. Distinctive features include:
    (a) Flexibility of packing
    (i) AUTOMATIC (BY COMPILER)
    0% - in words (minimum): speed of unpacking increased
    50% - at character boundaries (medium)
    100% - in bits (maximum): minimum storage space used but slower.
    (ii) MANUAL (USER) - by programmer possibly following an automatic draft.
    (b) Quantity on hand - temporary working storage is provided by an undeclared item which is assumed to have been already defined.
    (c) Data on magnetic tape is not unpacked when it is read in, but only if reference is made to it.
    (d) Copes with genuinely variable length records. 'Key' words are:
    'REPEAT'
    'OPTIONAL'
    'VARIABLE', or combination of these.
    (c) Physical:
    (A) A form rather than line is described for both paper tape and printing. There are various facilities to promote this concept.
    (i) one can position repeated groups anywhere on page.
    (ii) concatenation (chaining) of variable length items
    (iii) there is an automatic closing up of unwanted page space.
    (B) Checking of input and output of data is optimised where possible.
    (C) Input and Output physicals are covered by the same concepts where possible, e.g. 'TEXT' on input cards.
    (D) There is an option of automatic conversion on input, programmed conversion or 'REPLACE' technique.
    Editing characters can be inserted (output) or removed (input).
    (E) Peripheral Codes are set by the user so that one's own character set for a suite of programs, a single program or file can be defined.
    (F) Any punched card can be described.
    This includes
    -single/multi card records
    -optional cards
    -variable/repeated fields
    -single/overpunch columns
    (G) Inclusion of procedure and conditionals in the physical description
    e.g. PRINT IF
    PUNCH IF
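    The separation of logical and physical description, and the REPEAT / OPTIONAL / VARIABLE keywords for genuinely variable-length records (6.4 above), can be modelled roughly as follows. This is a Python sketch of the idea only: the field names, the Field/LogicalRecord/PhysicalForm classes and the packing labels are assumptions, not NEBULA syntax.

```python
# Rough model of the ideas in 6.4: a logical record description (what the data
# is) kept separate from a physical description (how it is laid out on a form).
# Names and structure are illustrative assumptions, not NEBULA syntax.

from dataclasses import dataclass, field

@dataclass
class Field:
    name: str
    optional: bool = False     # 'OPTIONAL': the item may be absent
    repeat: int = 1            # 'REPEAT': number of occurrences (1 = not repeated)
    variable: bool = False     # 'VARIABLE': length decided at run time

@dataclass
class LogicalRecord:
    name: str
    fields: list = field(default_factory=list)
    packing: str = "character"  # compiler packing: "word", "character" or "bit"

@dataclass
class PhysicalForm:
    """Physical description: a whole form (page or card layout), not a single line."""
    record: LogicalRecord
    positions: dict = field(default_factory=dict)  # field name -> (row, column)

invoice = LogicalRecord("INVOICE", [
    Field("CUSTOMER"),
    Field("ITEM", repeat=10, variable=True),   # a genuinely variable-length list
    Field("DISCOUNT", optional=True),
])
layout = PhysicalForm(invoice, {"CUSTOMER": (1, 1), "ITEM": (3, 1), "DISCOUNT": (20, 40)})
print([f.name for f in layout.record.fields if f.optional])   # ['DISCOUNT']
```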
          in The Computer Journal 14(4) 1971
  • Sammet, Jean E., "Roster of Programming Languages 1972" 191
          in Computers & Automation 21(6B), 30 Aug 1972
  • Stock, Marylene and Stock, Karl F. "Bibliography of Programming Languages: Books, User Manuals and Articles from PLANKALKUL to PL/I" Verlag Dokumentation, Pullach/Munchen 1973 413 Abstract: PREFACE AND INTRODUCTION
    The exact number of all the programming languages still in use, and those which are no longer used, is unknown. Zemanek calls the abundance of programming languages and their many dialects a "language Babel". When a new programming language is developed, only its name is known at first and it takes a while before publications about it appear. For some languages, the only relevant literature stays inside the individual companies; some are reported on in papers and magazines; and only a few, such as ALGOL, BASIC, COBOL, FORTRAN, and PL/1, become known to a wider public through various text- and handbooks. The situation surrounding the application of these languages in many computer centers is a similar one.

    There are differing opinions on the concept "programming languages". What is called a programming language by some may be termed a program, a processor, or a generator by others. Since there are no sharp borderlines in the field of programming languages, works were considered here which deal with machine languages, assemblers, autocoders, syntax and compilers, processors and generators, as well as with general higher programming languages.

    The bibliography contains some 2,700 titles of books, magazines and essays for around 300 programming languages. However, as shown by the "Overview of Existing Programming Languages", there are more than 300 such languages. The "Overview" lists a total of 676 programming languages, but this is certainly incomplete. One author has already announced the "next 700 programming languages"; it is to be hoped the many users may be spared such a great variety for reasons of compatibility. The graphic representations (illustrations 1 & 2) show the development and proportion of the most widely-used programming languages, as measured by the number of publications listed here and by the number of computer manufacturers and software firms who have implemented the language in question. The illustrations show FORTRAN to be in the lead at the present time. PL/1 is advancing rapidly, although PL/1 compilers are not yet seen very often outside of IBM.

    Some experts believe PL/1 will replace even the widely-used languages such as FORTRAN, COBOL, and ALGOL. If this does occur, it will surely take some time - as shown by the chronological diagram (illustration 2).

    It would be desirable from the user's point of view to reduce this language confusion down to the most advantageous languages. Those languages still maintained should incorporate the special facets and advantages of the otherwise superfluous languages. Obviously such demands are not in the interests of computer production firms, especially when one considers that a FORTRAN program can be executed on nearly all third-generation computers.

    The titles in this bibliography are organized alphabetically according to programming language, and within a language chronologically and again alphabetically within a given year. Preceding the first programming language in the alphabet, literature is listed on several languages, as are general papers on programming languages and on the theory of formal languages (AAA).
    As far as possible, most of the titles are based on autopsy. However, the bibliographical description of some titles will not satisfy bibliography-documentation demands, since they are based on inaccurate information in various sources. Translation titles whose original titles could not be found through bibliographical research were not included. In view of the fact that many libraries do not have the quoted papers, all magazine essays should have been listed with the volume, the year, issue number and the complete number of pages (e.g. pp. 721-783), so that interlibrary loans could take place with fast reader service. Unfortunately, these data were not always found.

    It is hoped that this bibliography will help the electronic data processing expert, and those who wish to select the appropriate programming language from the many available, to find a way through the language Babel.

    We wish to offer special thanks to Mr. Klaus G. Saur and the staff of Verlag Dokumentation for their publishing work.

    Graz / Austria, May, 1973