COBOL-61 Extended (ID:385/cob011)

COBOL with FACT features 


Extension of COBOL 1961 specifications, incorporating extra features from FACT (especially SORT).


Related languages
COBOL-61 => COBOL-61 Extended   Evolution of
FACT => COBOL-61 Extended   Influence
COBOL-61 Extended => ALGEK   Evolution of
COBOL-61 Extended => COBOL Edition 1965   Standardisation
COBOL-61 Extended => COBOL-68   Evolution of
COBOL-61 Extended => Pascal   Incorporated some features of

References:
  • Clippinger, R. F. "COBOL" Comput. J. 5(3) October 1962 pp177-180 Extract: Introduction
    The description of COBOL as of late 1960 is well handled by an article by Jean Sammet in the Annual Review in Automatic Programming, Volume 2. This discussion will, therefore, stress developments since that time. The period from March 1960 to May 1961 was used by the COBOL Committee to clarify COBOL 60 and remove ambiguities as far as possible. This resulted in the publication of the second COBOL report entitled: "COBOL Report to Conference on Data Systems Languages, including Revised Specifications for a Common Business Oriented Language (COBOL) for Programming Electronic Digital Computers, Department of Defense 1961," and is available from the Superintendent of Documents, US Government Printing Office, Washington 25, DC, price $1.25.

    By December 1961 there were twelve manufacturers in the United States and three foreign manufacturers committed to prepare compilers for thirty-five makes and models of computers. These compilers vary in regard to the faithfulness with which they have attempted to implement the required features, and in the extent to which they have proceeded to implement certain elective features. A few of these compilers are in systems test (RCA, Sperry Rand, Air Force Logistics Command, National Cash Register, IBM 1410), others are in various stages of development. The total volume of completely checked-out applications running today in COBOL language is negligible, representing less than 1% of the great volume of data processing going on in the United States today. A fairly significant number of computer users are committed to the use of COBOL, many of them in blind acceptance of all the advantages of compilers without any understanding of the limitations upon these advantages. There has already been a certain amount of vivid disappointment on the part of some of the earliest users, because they were expecting too much, and the compilers they were working with were not working well yet and had not been improved to the extent possible. Only heavy-volume usage of COBOL, with compilers that have been tuned up to a much higher level of efficiency, will produce the perspective which will enable us to evaluate with certainty its success. This heavy volume will not be available before the end of 1963.

    Meanwhile, assuming the ultimate success of COBOL, the COBOL Committee has proceeded, in the period from May 1961 to April 1962, to work on various extensions to COBOL, three of which have now been adopted, and are in final editing, getting ready for publication in July 1962 as "COBOL 61 extended." The present paper makes no attempt to provide an authoritative description of these extensions, which will be defined by the July official publication. The only purpose here is to give general information about the nature of these changes. Extract: Sorting
    Sorting
    Influenced to a considerable extent by FACT and by Honeywell, the COBOL Committee has extended COBOL 61 to include sorting. The changes required to incorporate sorting affect the data division and the procedure division. The form of a sort statement in a COBOL program will be:

    SORT file-name ON {ASCENDING / DESCENDING} KEY
        data-name-1 [, data-name-2 ...]
        [ON {ASCENDING / DESCENDING} KEY ...]

    The KEY clauses permit the specification of any number of sort keys, and each key can be sorted from low to high value (ASCENDING) or from high to low value (DESCENDING).
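These multi-key semantics can be sketched in a modern language (an illustration only, not part of the 1962 specification; the record fields and direction strings here are invented):

```python
from operator import itemgetter

def cobol_sort(records, keys):
    """Sort records on several keys, each ASCENDING or DESCENDING.

    `keys` is a list of (field-name, direction) pairs in order of
    significance, mimicking the KEY clauses of the COBOL-61 SORT verb.
    """
    # Sort on the least significant key first; Python's sort is stable,
    # so the more significant keys applied later dominate.
    for field, direction in reversed(keys):
        records.sort(key=itemgetter(field),
                     reverse=(direction == "DESCENDING"))
    return records

# Hypothetical payroll records:
recs = [
    {"dept": 2, "pay": 300},
    {"dept": 1, "pay": 500},
    {"dept": 1, "pay": 400},
]
cobol_sort(recs, [("dept", "ASCENDING"), ("pay", "DESCENDING")])
# dept 1 first; within a department, the higher pay first
```

The stable-sort trick mirrors the effect of listing several KEY clauses: each later, more significant key reorders groups without disturbing the order already established within them.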

    The USING and GIVING options allow different names to be assigned to a file before and after sorting. The INPUT PROCEDURE allows any amount of modification, selection or creation of records during the initial pass of the sort operation, and the OUTPUT PROCEDURE allows similar modification and selection during the final merge pass of the sort operation. The inclusion of the sort verb in COBOL eliminates the necessity to leave the COBOL language and enter some local language, and thus constitutes a giant step further towards commonality. Extract: Report writer
    Report writer
    Report writing is another extension of COBOL which was heavily influenced by FACT, though FACT was not the only influence since SURGE and other programming systems have incorporated report writers. The report writer is a major extension of COBOL which will greatly facilitate for the programmer the job of creating reports. There are approximately fifty pages of new material that will be inserted in the new July 62 COBOL specifications manual to define the report writer.

    The Report Section constitutes a fourth section in the Data Division. Three new verbs are provided in the Procedure Division: INITIATE, GENERATE, TERMINATE. These verbs are to report writing as OPEN, WRITE, CLOSE are to file processing. Thirty-seven new reserved words are introduced by the report writer; such words as COLUMN, CONTROLS, DETAIL, FINAL, FOOTING, GROUP, HEADING, LINE COUNTER, PAGE, PAGE COUNTER, REPORT are suggestive. Reports are named in the file description entries or described in the Report Section.
    [...]
    A report group may be an elementary item, a line of print, or a series of lines, hence the use of a level number and a TYPE description which clarifies whether the report group is a report heading, page heading, overflow heading, control heading, detail line, control footing, overflow footing, page footing, or report footing. The level number gives the depth of the group; the TYPE description defines the type of group. Any groups belonging to other groups have the same TYPE description. Report groups are created at object time as a result of the use of the verb GENERATE in the Procedure Division. Many of the expressions used in the report group description entry are sufficiently suggestive that one can guess their use. Space does not permit elaboration at this point. Suffice it to say that everything needed to create a report of considerable flexibility, including the generation of lines and tables, tabulated fields, headings, footings, etc., is available.
    It is expected that users of COBOL 61 will be pleased that COBOL 61 extended now includes the SORT Verb and the Report Writer.
  • Clippinger, R. F. "Information algebra" Extract: COBOL, DETAB, IA and influences
    COBOL was developed by the COBOL Committee, which is a subcommittee of a larger committee called the "Conference on Data Systems Languages," which has an Executive Committee and, in addition to the COBOL Committee, a Development Committee. Within the Development Committee are two working groups called the Language Structure Group (whose members are Roy Goldfinger of IBM, Robert Bosak of SDC, Carey Dobbs of Sperry Rand, Renee Jasper of Navy Management Office, William Keating of National Cash Register, George Kendrick of General Electric, and Jack Porter of Mitre Corporation), and the Systems Group. Both of these groups are interested in going beyond COBOL to further simplify the work of problem preparation. The Systems Group has concentrated on an approach which represents a generalization of TABSOL in GE's GECOM. The driving force behind this effort is Burt Grad.

    The Language Structure Group has taken a different approach. It notes that FACT, COBOL, and other current data-processing compilers are "procedure-oriented," that is, they require a solution to the data-processing problem to be worked out in detail, and they are geared to the magnetic-tape computer and assume that information is available in files on magnetic tape, sorted according to some keys. The Language Structure Group felt that an information algebra could be defined which would permit the definition of a data-processing problem without postulating a solution. It felt that if this point were reached, then certain classes of problems could be handled by compilers that would, in effect, invent the necessary procedures for the problem solution.

    One sees a trend in this direction in FLOW-MATIC, COBOL, Commercial Translator, and FACT, if one notes that a sort can be specified in terms of the input file and the output file, with no discussion of the technique required to create strings of ordered items and merge these strings. Similarly, a report writer specifies where the information is to come from and how it is to be arranged on the printed page, and the compiler generates a program which accomplishes the purpose without the programmer having to supply the details. The FACT update verb specifies a couple of input files and criteria for matching, and FACT generates a program which does the housekeeping of reading records from both files, matching them and going to appropriate routines depending on whether there is a match, an extra item, a missing item, etc. A careful study of existing compilers will reveal many areas where the compiler supplies a substantial program as a result of having been given some information about data and interrelationships between the data. The Language Structure Group has, by no means, provided a technique of solving problems once they are stated in abstract form. Indeed, it is clear that this problem is not soluble in this much generality. All the Language Structure Group claims is to provide an information algebra which may serve as a stimulus to the development of compilers with somewhat larger abilities to invent solutions in restricted cases. The work of the Language Structure Group will be described in detail in a paper which will appear shortly in the Communications of the ACM. Extract: Basic concepts
    Basic concepts
    The algebra is built on three undefined concepts: ENTITY, PROPERTY and VALUE. These concepts are related by the following two rules which are used as the basis of two specific postulates:
    (a) Each PROPERTY has a set of VALUES associated with it.
    (b) There is one and only one VALUE associated with each PROPERTY of each ENTITY.
    The data of data processing are collections of VALUES of certain selected PROPERTIES of certain selected ENTITIES. For example, in a payroll application, one class of ENTITIES is the employees. Some of the PROPERTIES which may be selected for inclusion are EMPLOYEE NUMBER, NAME, SEX, and PAY RATE, and a payroll file contains a set of VALUES of these PROPERTIES for each ENTITY.
    Extract: Property spaces
    Property spaces
    Following the practice of modern algebra, the Information Algebra deals with sets of points in a space. The set of VALUES assigned to a PROPERTY is called the PROPERTY VALUE SET. Each PROPERTY VALUE SET contains at least two VALUES: U (undefined: not relevant) and M (missing: relevant, but not known). For example, the draft status of a female employee would be undefined and not relevant, whereas if its value were missing for a male employee it would be relevant but not known. Several definitions are introduced:
    (1) A co-ordinate set (Q) is a finite set of distinct properties.
    (2) The null co-ordinate set contains no properties.
    (3) Two co-ordinate sets are equivalent if they contain exactly the same properties.
    (4) The property space (P) of a co-ordinate set (Q) is the cartesian product P = V1 × V2 × V3 × ... × Vn, where Vi is the property value set assigned to the ith property of Q. Each point (p) of the property space will be represented by an n-tuple p = (a1, a2, a3, ..., an), where ai is some value from Vi. The ordering of properties in the set is for convenience only. If n = 1, then P = V1. If n = 0, then P is the Null Space.
    (5) The datum point (d) of an entity (e) in a property space (P) is a point of P such that if d = (a1, a2, a3, ..., an), then ai is the value assigned to e from the ith property value set of P for i = 1, 2, 3, ..., n. (d) is the representation of (e) in (P). Thus, only a small subset of the points in a property space are datum points. Every entity has exactly one datum point in a given property space.
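Definitions (4) and (5) can be rendered minimally in a modern language (a sketch only; the property names and value sets are invented for illustration):

```python
from itertools import product

# Property value sets for a co-ordinate set Q of n = 2 properties.
VALUE_SETS = {
    "SEX": {"M", "F"},
    "GRADE": {1, 2, 3},
}

# The property space P is the cartesian product of the value sets.
P = set(product(sorted(VALUE_SETS["SEX"]), sorted(VALUE_SETS["GRADE"])))
assert len(P) == 2 * 3  # every combination of values is a point

def datum_point(entity):
    """The datum point of an entity: the value assigned to it from each
    property value set, as an n-tuple. Every entity has exactly one."""
    p = (entity["SEX"], entity["GRADE"])
    assert p in P  # a datum point is a point of the property space
    return p

print(datum_point({"SEX": "F", "GRADE": 2}))  # ('F', 2)
```

Only a few of the six points of P are datum points of actual entities; the rest are merely possible combinations of values.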
    Extract: Lines and functions of lines
    Lines and functions of lines
    A line (L) is an ordered set of points chosen from P.
    The span (n) of the line is the number of points comprising the line. A line L of span n is written as: L = (p1, p2, ..., pn). The term line is introduced to provide a generic term for a set of points which are related. In a payroll application the datum points might be individual records for each person for five working days of the week. These five records plotted in the property space would represent a line of span five.
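The five-record line of this payroll example can be sketched concretely, together with a mapping from the whole line to a single value, the weekly gross pay discussed next (a modern illustration; the record fields are invented):

```python
def gross_pay(line):
    """Assigns one and only one value (weekly gross pay) to a line of
    span 5, i.e. the five daily work records of one employee."""
    return sum(day["hours"] * day["rate"] for day in line)

# A line of span five: one work record per working day of the week.
week = [{"hours": 8, "rate": 2.50} for _ in range(5)]

print(gross_pay(week))  # 100.0
```

The function operates on the line as a whole, not on any single record, which is exactly what distinguishes it from a property of one point.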
    A function of lines (FOL) is a mapping that assigns one and only one value to each line in P. The set of distinct values assigned by an FOL is the value set of the FOL. It is convenient to write an FOL in the functional form f(X), where f is the FOL and X is the line. In the example of the five points representing work records for five days' work, an FOL for such a line might be the weekly gross pay of an individual, which would be the hours worked times pay rate summed over the five days of the week. An Ordinal FOL (OFOL) is an FOL whose value set is contained within some defining set for which there exists a relational operator R which is irreflexive, asymmetric, and transitive. Such an operator is "less than" on the set of real numbers. Extract: Areas and functions of areas
    Areas and functions of areas
    An area is any subset of the property space P; thus the representation of a file in the property space is an area. A function of an area (FOA) is a mapping that assigns one and only one value to each area. The set of distinct values assigned by an FOA is defined to be the value set of the FOA. It is convenient to write an FOA in the functional form f(X), where f is the FOA and X is the area.
    Extract: Bundles and functions of bundles
    Bundles and functions of bundles
    In data-processing applications, related data from several sources must be grouped. Various types of functions can be applied to these data to define new information. For example, files are merged to create a new file. An area set /A of order n is an ordered n-tuple of areas (A1, A2, ..., An) = /A. Area set is a generic term for a collection of areas in which the order is significant. It consists of one or more areas which are considered simultaneously; for example, a transaction file and a master file form an area set of order 2.

    Definition:
    The Bundle B = B(b, /A) of an area set /A for a selection OFOL b is the set of all lines L such that if
    (a) /A = (A1, A2, ..., An) and
    (b) L = (p1, p2, ..., pn), where pi is a point of Ai for i = 1, 2, ..., n,
    then b(L) = True.

    A bundle thus consists of a set of lines each of span n, where n is the order of the area set. Each line in the bundle contains one, and only one, point from each area. The concept of bundles gives us a method of conceptually linking points from different areas so that they may be considered jointly. As an example, consider two areas whose names are "Master File" and "Transaction File," each containing points with the property Part Number. The bundling function is the Part Number. The lines of the bundle are the pairs of points representing one Master File record and one Transaction File record with the same part number.
    A function of a bundle (FOB) is a mapping that assigns an area to a bundle such that
    (a) there is a many-to-one correspondence between the lines in the bundle and the points in the area;
    (b) the value of each property of each point in the area is defined by an FOL of the corresponding line of the bundle;
    (c) the value set of such an FOL must be a subset of the corresponding property value set.
    The function of a bundle assigns a point to each line of the bundle; thus a new Master File record is assigned to each line consisting of an old Master File record and a matching Transaction File record.
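The Master File / Transaction File example can be sketched as follows (a modern illustration; the record shapes and field names are invented). The bundle collects the matching pairs, and a function of the bundle assigns a new point to each pair:

```python
masters = [{"part": 101, "stock": 40}, {"part": 102, "stock": 5}]
transactions = [{"part": 101, "sold": 15}, {"part": 102, "sold": 2}]

# Bundle B(b, (masters, transactions)): all lines (m, t) for which the
# selection function b, equality of part numbers, is True.
bundle = [(m, t) for m in masters for t in transactions
          if m["part"] == t["part"]]

# A function of the bundle: assign to each line a point of a new area
# (the updated master record), each property given by an FOL of the line.
new_masters = [{"part": m["part"], "stock": m["stock"] - t["sold"]}
               for (m, t) in bundle]

print(new_masters)  # [{'part': 101, 'stock': 25}, {'part': 102, 'stock': 3}]
```

This is essentially the housekeeping that the FACT update verb generates automatically, stated here in the algebra's terms rather than as a file-processing loop.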
    Extract: Glumps and functions of glumps
    Glumps and functions of glumps
    If A is an area and g is an FOL defined over lines of span 1 in A, then a Glump G = G(g, A) of an area A for an FOL g is a partition of A by g such that an element of this partition consists of all points of A that have identical values for g. The function g will be called the Glumping Function for G. The concept of a glump does not imply an ordering either of points within an element, or of elements within a glump. As an example, let a back order consist of points with the three non-null properties: Part Number, Quantity Ordered, and Date. Define a glumping function to be Part Number. Each element of this glump consists of all orders for the same part. Different glump elements may contain different numbers of points. A function of a glump (FOG) is a mapping that assigns an area to a glump such that
    (a) there is a many-to-one correspondence between the elements of the glump and the points of the area;
    (b) the value of each property of a point in the assigned area is defined by an FOA of the corresponding element in the glump;
    (c) the value set of the FOA must be a subset of the corresponding property value set.
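The back-order example can be written out as code (a modern illustration; the field names are invented). The glumping function partitions the area by part number, and a function of the glump assigns one summary point per element:

```python
from collections import defaultdict

orders = [  # the area: back-order points (part, quantity ordered, date)
    {"part": 7, "qty": 4, "date": "620301"},
    {"part": 7, "qty": 1, "date": "620315"},
    {"part": 9, "qty": 2, "date": "620310"},
]

# Glump G(g, A): partition of the area A by the glumping function
# g(point) = part number; each element holds all orders for one part.
glump = defaultdict(list)
for p in orders:
    glump[p["part"]].append(p)

# Function of the glump: one point per element, each property an FOA
# of that element (here, the total quantity on back order per part).
summary = {part: sum(p["qty"] for p in elem) for part, elem in glump.items()}

print(summary)  # {7: 5, 9: 2}
```

Note that neither the points within an element nor the elements themselves are ordered; only the partition matters, which is what distinguishes a glump from a sorted file.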
          in The Computer Journal 5(3) October 1962
  • Paterson, J. B. "The COBOL sorting verb" pp255-258
          in [ACM] CACM 6(05) May 1963 "Proceedings of ACM Sort Symposium, November 29, 30, 1962"
  • Smith, L. W. review of Clippinger 1962 Abstract: This note is a summary of the main features of "COBOL 61 Extended" (still not officially published at the time of review). The position of Honeywell in some aspects of COBOL design and the status of their implementations are also discussed.

          in ACM Computing Reviews 4(03) May-June 1963
  • Weik, Martin H. "A Fourth Survey of Domestic Electronic Digital Computing Systems" Report No. 1227, January 1964 Ballistic Research Laboratories, Aberdeen Proving Ground, Maryland External link: Online copy at Computer History Museum
  • Sammet, Jean E. "Computer Languages - Principles and History" Englewood Cliffs, N.J. Prentice-Hall 1969. p.339.
  • Wirth, Niklaus "Pascal and its Successors" Extract: Pascal, 1968-1972
    Pascal, 1968-1972
    Freed from the constraining influence of a working group's consensus, Wirth developed the language Pascal in Zurich. The basis was Algol-W and the desire to have a language that would satisfy the requirements of system design (compilers, operating systems, etc.). Also, there was to be a basis of clear concepts and structures, definable axiomatically and independently of any particular computer, as the language was to be suitable also for teaching in an academic environment. Pascal has satisfied these requirements; it is today one of the most widely used languages in computer science education. The first Pascal compiler was designed in Zurich for the CDC 6000 computer family, and it became operational in 1970. Already in 1972 Pascal was used in introductory programming courses. Extract: 0. Introduction
    0. Introduction
    Many times I have been asked how one "invents" a programming language. One cannot really tell, but it certainly is a matter of experience with the subject of programming, and of careful deliberation. Sometimes I answered: "Like one designs an airplane. One must identify a number of necessary building blocks and materials, and then assemble and combine them properly to a functioning whole". This answer may not be entirely satisfactory, but at least in both cases the result either flies or crashes.

    Programming languages were one of the first topics that established computing science as a discipline with its own identity. The topic belonged neither to mathematics nor to electrical engineering. It was Algol 60 that introduced rigor and precision to the subject through its formal definition of syntax. A flurry of activities began in academia to investigate language properties, to find faults and inconsistencies, to devise powerful algorithms of syntax analysis, and to cope with the challenges of compilers. Soon the range of application of Algol was felt to be too narrow. A new, better language was required, perhaps a successor to Algol. Committees were established and hot controversies raged, some protagonists dreaming of grandiose formal systems, some thinking more modestly of a practical improvement. It was this environment that bred Pascal.


    Extract: 1. Structured Programming and Pascal
    1. Structured Programming and Pascal
    Pascal was born in 1969 out of an act of liberation [0]. In more than one sense. Confronted with the duty to teach programming, I had been faced with the dire options of Fortran and Algol. The former did not appeal to my taste as a scientist, the latter not to those of the practical engineer. I liberated myself from this jail by designing Pascal, convinced that an elegant style and an effective implementation were not mutually exclusive. I felt strongly -- and still do -- that a language used in teaching must display some style, elegance, consistency, while at the same time also reflecting the needs (but not necessarily bad habits) of practice. I wanted to design a language for both my classroom and my "software factory".

    The second liberation alluded to was from the design constraint imposed by committee work. In 1966, Algol W [1] had been a compromise bowing to many divergent opinions and requests from both an Algol committee and an Algol community. Surely, many of them were inspiring and beneficial, but others were incompatible and hindering. Some members had high ambitions of creating a language with novel features whose consequences were to be the subject of further research, whereas I had been brought up as an engineer feeling uneasy with proposals whose realization was still the subject of speculation. I wanted to have at least a concrete idea of how a construct was to be represented on available computers, and these, let me add, were rather ill-suited for any feature not already present in Fortran.

    The general idea dominating the design of Pascal was to provide a language appealing to systematic thinking, mirroring conventional mathematical notation, satisfying the needs of practical programming, and encouraging a structured approach. The rules governing the language should be intuitive and simple, and freely combinable. For example, if x+y stands for an expression, x+y should be usable as a subexpression, in assignments, as a procedure parameter, or as an index. For example, if a widely established convention interprets x-y-z to mean (x-y)-z, we should not redefine this to denote x-(y-z). Or if x=y has been used for centuries to denote equality of x and y, we should refrain from the arrogance of replacing it by x==y. Clearly, Pascal was to build up on the notational grounds established by mathematics and Algol. Pascal and its successors were therefore called Algol-like.

    Today, it is hard to imagine the circumstances prevailing in the 1960s. We must recall that the computing community was strictly split into two professional camps. The scientists and engineers used Fortran for programming their large-scale, word-oriented, binary computers, whereas the business community used Cobol for their smaller, character-oriented, decimal machines. System programmers were labouring within computer companies using proprietary machine-code assemblers. There were attempts to unite the two worlds, such as the highly innovative Burroughs B-5000 computer, or IBM's programming language PL/I. Both were ill-fated and devoured considerable budgets. Pascal was another such attempt, although less ambitious and without budget or industrial support. It applied the idea of recursively defined structures not only to executable statements, but also to data types. As elements, it adopted arrays (vectors, matrices) from Fortran and Algol, as well as records and files from Cobol. It allowed them to be freely combined and nested.

    The other fact about the 1960s that is difficult to imagine today is the scarcity of computing resources. Computers with more than 8K of memory words and less than 10 µs for the execution of an instruction were called super-computers. No wonder it was mandatory for the compiler of a new language to generate at least equally dense and efficient code as its Fortran competitor. Every instruction counted, and, for example, generating sophisticated subroutine calls catering to hardly ever used recursion was considered an academic pastime. Index checking at run-time was judged to be a superfluous luxury. In this context, it was hard if not hopeless to compete against highly optimized Fortran compilers.

    Yet, computing power grew with each year, and with it the demands on software and on programmers. Repeated failures and blunders of industrial products revealed the inherent difficulties of intellectually mastering the ever increasing complexity of the new artefacts. The only solution lay in structuring programs, to let the programmer ignore the internal details of the pieces when assembling them into a larger whole. This school of thought was called Structured Programming [2], and Pascal was designed explicitly to support this discipline. Its foundations reached far deeper than simply "programming without go to statements" as some people believed. It is more closely related to the top-down approach to problem solving.

    Besides structured statements, the concept of data types characterized Pascal profoundly. It implies that every object, be it a constant, a variable, a function, or a parameter has a type. Data typing introduces redundancy, and this redundancy can be used to detect inconsistencies, that is, errors. If the type of all objects can be determined by merely reading the program text, that is, without executing the program, then the type is called static, and checking can be performed by the compiler. Surely errors detected by the compiler are harmless and cheap compared to those detected during program execution in the field, by the customer. Thus static typing became an important concept in software engineering, the discipline emerging in the 1970s coping with the construction of large software complexes.

    A particularly successful concept was the integration of pointers into static typing as suggested by Hoare [3] and adopted in Pascal. The simple idea is to attribute a fixed type not only with every data object, but also with every pointer, such that a pointer variable can at any time only refer to an object of the type to which it is bound (or to none at all). Programming with pointers, then called list processing, notoriously fraught with pitfalls, now became as safe as programming without pointers.
    Yet, Pascal also suffered from certain deficiencies, more or less significant depending on personal perception and application. One of them had its origin in a too dogmatic interpretation of static typing, requiring that the type of every procedure parameter be known at compile-time. Since this included index bounds in the case of array types, the frequently convenient dynamic arrays were excluded. In hindsight, this rigidity was silly and kept many Algolites from adopting Pascal. Arrays are typically passed by a reference, and for dynamic arrays only the array bounds must be added to this information. The limited additional complexity of the compiler would certainly have been outweighed by the gained language flexibility.
    Extract: 2. Modular Programming and Modula-2
    2. Modular Programming and Modula-2
    With various defects clearly identified and new challenges in programming emerging, time seemed ripe for a fresh start, for a successor language. The two foremost novel challenges were multiprogramming and information hiding. For me personally, a third, quite practical challenge became an ambition: To create a language adequate for describing entire systems, from storage allocator to document editor, from process manager to compiler, and from display driver to graphics editor. I perceived that many problems in software development stemmed from the mixing of parts written in different languages. The challenge became real within our project to design and build the workstation Lilith in 1978 [6]. Its precursor was Xerox PARC's pioneering workstation Alto [5]. The Alto's software was mostly written in Mesa; Lilith's software entirely in Modula-2. It would have been prohibitive to implement more than one language. Evidently, Modula was born out of an act of necessity [7].

    The cornerstone of Modula-2 was the module construct. Whereas Pascal had served to build monolithic programs, Modula-2 was suitable for systems consisting of a hierarchy of units with properly defined interfaces. Such a unit was called module, and later package in Ada. In short, a module is like a Pascal program with the addition of an explicit interface specification to other modules. This is obtained as follows: Modules are described by two, distinct texts, a definition part and an implementation part. In the former all objects are defined which are visible by other modules, typically types and procedure signatures. They are said to be exported. The latter part contains all local, hidden objects, and the bodies of procedures, i.e. their implementations. Hence the term information hiding. The heading contains lists of identifiers to be imported from other modules.

    [...]

    This key feature catered for the urgent demands of programming in teams. Now it became possible to determine jointly a modular decomposition of the task and to agree on the interfaces of the planned system. Thereafter, the team members could proceed independently in implementing the parts assigned to them. This style is called modular programming. The concept of the module arose earlier in work by Parnas and, in conjunction with multiprogramming, in work by Hoare and Brinch Hansen, where the module construct was called a monitor [8, 9]. The module was also present in a concrete form in Mesa, which in Modula was simplified and generalized.

    The module construct would, however, have remained of mostly academic interest only, were it not for the technique of separate compilation, which was from its inception combined with the module. By separate compilation we understand that (1) full type checking is performed by the compiler not only within a module, but also across module interfaces, and (2) that compatibility (or version) checking between modules to be joined is achieved by a simple key comparison when the modules are linked and loaded. We refrain from expounding technical details, but emphasize that this is a crucial requirement for the design of complex systems, yet still poorly handled by most systems of commercial provenience.

    Besides the successful feature of the module with separate compilation, the language also had some drawbacks. Surely, the evident deficiencies of Pascal had been mended. The syntax was now unambiguous, type specifications complete, and the set of basic data types adequately comprehensive. But as a result, the language, and with it the compiler, had become relatively large and bulky, although still orders of magnitude less so than comparable commercial ventures. The goal of making the language powerful enough to describe entire systems was achieved by introducing certain low-level features, mostly for accessing particular machine resources (such as I/O device registers) and for breaching, overriding the rigid type system. Such facilities, such as type casting, are inherently contrary to the notion of abstraction by high-level language, and should be avoided. They were called loopholes, because they allow the programmer to break the rules imposed by the abstraction. But sometimes these rules appear too rigid, and use of a loophole becomes unavoidable. The dilemma was resolved through the module facility, which made it possible to confine the use of such "naughty" tricks to specific, low-level server modules. It turned out that this was a naive view of the nature of programmers. The lesson: If you introduce a feature that can be abused, then it will be abused, and frequently so! Extract: 3 Object-oriented Programming and Oberon
    3 Object-oriented Programming and Oberon
    The advent of the personal computer around 1980 dramatically changed the way in which computers were used. Direct, immediate interaction replaced remote access and batch processing. User interfaces became an important issue. They were shaped by the novel mouse and the high-resolution display, replacing the 24-line by 80-character screens. They established the paradigm of windows and multi-tasking. It had been pioneered by the Xerox Alto workstation, and in particular the Smalltalk project [10]. Along with it came the object-oriented style of programming. Object-oriented design emerged from the specialized subject of discrete event simulation and its language Simula [11], whose authors Dahl and Nygaard had realised that its concepts had a scope far beyond simulation. Some of the proponents of object-oriented programming even suggested that all of programming should be converted to the new view of the world.

    We felt that a revolution was undesirable to cure the lamented ills of the software profession, and we considered evolution as the wiser approach. Tempted to design a version of Modula stripped down to essentials, we also wanted to identify those features that were indispensable to encompass object-orientation. Our findings were revealing: A single feature would suffice, all other ingredients were already present in Modula. The one feature to be added had to allow the construction of data type hierarchies, called sub-classing in Smalltalk. Our own term was type extension: The new type adds attributes to those of the old type. Type extension had the welcome side effect of practically eliminating all needs for loopholes.
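    Type extension can be illustrated with a short sketch in Oberon notation (the type names are invented for illustration):

```oberon
TYPE
  Figure = RECORD x, y: INTEGER END;  (* base type *)
  Circle = RECORD (Figure)            (* extension: adds to the attributes of Figure *)
    radius: INTEGER
  END;
```

    A Circle is compatible wherever a Figure is expected, and a type test (v IS Circle) safely recovers the extension at run time, which removes the classical motivation for unchecked type casts.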

    The absence of loopholes is the acid test for the quality of a language. After all, a language constitutes an abstraction, a formal system, determined by a set of consistent rules and axioms. Loopholes serve to break these rules and can be understood only in terms of another, underlying system, an implementation. The principal purpose of a language, however, is to shield the programmer from implementation details, and to let him think exclusively in terms of the higher-level abstraction. Hence, a language should be fully defined without reference to any implementation or computer.

    The language Oberon was born out of the ambition to simplify the language to its essentials. The result turned out to be a considerably more powerful and more elegant language than its predecessors. The defining report of Pascal required 30 pages, that of Modula grew to 45 pages; Oberon's could do with 16 [12]. Not surprisingly, implementations profited substantially from this achievement, the fruit of 25 years of experience in language design and of a continuous evolution.

    One of the simplifications was the reunification of the definition and implementation parts of modules. An Oberon module is again defined by a single text. Its heading contains a single list of server modules (rather than of individual, imported objects). Declarations of objects that are to be accessible in client modules are specially marked. Unmarked, local objects remain hidden. From a didactic point of view this reunification may be regretted, because ideally definition parts are designed among team members and form contracts between them, whereas implementation parts can thereafter be designed by individual members without regard for the others, as long as the definition part remains untouched. However, the proliferation of files and the burden of keeping corresponding parts consistent were considered drawbacks. Moreover, reunification eliminated the compiler's duty to check for consistency between definition and implementation parts. Also, a definition part can readily be extracted from the module text by a simple tool.
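    The reunified form can be sketched as a single Oberon module text (the module and its contents are invented for illustration); the asterisk is the export mark, and the definition part is exactly what a tool would extract from the marked declarations:

```oberon
MODULE Buffers;  (* one text replaces the former definition and implementation parts *)
  CONST N = 64;                 (* unmarked: hidden from client modules *)

  TYPE Buffer* = RECORD         (* marked with "*": visible to clients *)
    n: INTEGER;                 (* unmarked fields stay hidden *)
    a: ARRAY N OF INTEGER
  END;

  PROCEDURE Put*(VAR b: Buffer; x: INTEGER);
  BEGIN
    IF b.n < N THEN b.a[b.n] := x; INC(b.n) END
  END Put;
END Buffers.
```

    Since interface and implementation live in one text, they cannot drift apart, and the compiler no longer needs to verify their agreement.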
    4 Conclusions and Outlook
    In his article about Oberon, M. Franz wrote in [13]: "The world will remember Niklaus Wirth primarily as 'that language guy' and associate his name with Pascal." His observation is accurate; also the invitation to speak at this Conference hinted that I should concentrate on Pascal. My disobedient digression stems from my conviction that its successors Modula and Oberon are much more mature and refined designs than Pascal. They form a family, and each descendant profited from experiences with its ancestors. In the end, the time span was 25 years.

    Why, then, did Pascal capture all the attention, and Modula and Oberon got so little? Again I quote Franz: "This was, of course, partially of Wirth's own making". He continues: "He refrained from ... names such as Pascal-2, Pascal+, Pascal 2000, but instead opted for Modula and Oberon". Again Franz is right. In my defense I can plead that Pascal-2 and Pascal+ had already been taken by others for their own extensions of Pascal, and that I felt that these names would have been misleading for languages that were, although similar, syntactically distinct from Pascal. I emphasized progress rather than continuity, evidently a poor marketing strategy.

    But of course the naming is far from the whole story. For one thing, we were not sufficiently active -- today we would say aggressive -- in making our developments widely known. Instead of asking what went wrong with Modula and Oberon, however, let us rather ask what went right with Pascal. In my own perception, the following factors were decisive:

    1. Pascal, incorporating the concepts of structured programming, was sufficiently different from, and more progressive than, Fortran to make a switch worthwhile. Particularly so in America, where Algol had remained virtually unknown.

    2. In the early 1970s, an organization of users (Pascal User Group PUG) was formed and helped to make Pascal widely known and available. It also published a Newsletter.

    3. Pascal was ported just in time for the first microcomputers (UCSD) [14], and thereby reached a large population of newcomers unconstrained by engrained habits and legacy code.

    4. Pascal was picked up by start-up companies. Borland's Pascal implementation was the first compiler to be available for less than $50, turning it into a household article.

    5. UCSD as well as Borland properly integrated the compiler into a complete development tool, including a program editor, a file system, and a debugging aid. This made Pascal highly attractive to schools and to beginners. It changed the manner in which programs were "written". A fast write-compile-test-correct cycle and interactivity were the new attractions.

    6. Shortly after the initial spread, an ever growing number of text books on programming in Pascal appeared. They were as important as the compiler for Pascal's popularity in schools and universities. Text books and software entered a symbiotic partnership.
    Perhaps it is worth observing that this chain reaction started around 1977, fully seven years after Pascal had been published and implemented on a CDC mainframe computer. Meanwhile, Pascal had been ported to numerous other large computers, but remained largely within universities. This porting effort was significantly facilitated by our project resulting in the Pascal P-compiler generating P-code, the predecessor of the later M-code (for Modula) and Java byte-code.
    In contrast to Pascal, Modula and Oberon did not appear at a time when computing reached new segments of the population. The module concept was not perceived in teaching as sufficiently significant to warrant a change to a new, albeit similar language. Text books had been selected, investments in learning had been made, time was not ripe for a change. Industry did not exactly embrace Modula either, with a few exceptions, mainly in Britain. A more palatable solution was to extend Pascal, retaining upward compatibility and old shortcomings. And there appeared competition in the form of C++ and Ada with powerful industrial backing.
    Oberon fared even worse. It was virtually ignored by industry. This is astounding, as not only the elegant and powerful language was presented in 1988, but also a compact and fast compiler in 1990, along with a modern, flexible development environment for workstations, complete with window system, network, document and graphics editor, neatly fitting into about 200 Kbytes of memory, the compiler alone taking less than 50 Kbytes. The entire system was fully described in a single, comprehensive book, including its source code [15]. It carried the proof that such a system could be built using only a small fraction of manpower typically allotted to such endeavors, if proper methods and tools were employed.
    One is tempted to rationalize this history with recent, deep changes in the computing field. Computing resources are no longer at a premium, memory is counted in megabytes rather than kilobytes as 15 years ago, instruction cycles in nanoseconds instead of microseconds, and clock frequencies in gigahertz instead of megahertz. The incentive to economize has dwindled alarmingly. The only scarce resource is manpower, competent manpower. Industry has difficulties even in finding good C programmers; the difficulties of finding Oberon programmers are insurmountable. So how could one reasonably expect companies to adopt Oberon?
    We recognize a deadlock: The adoption of new tools and methods is impracticable, yet the retention of the old ones implies stagnation. Are we therefore condemned to eternally produce an ever growing mountain of software of ever growing complexity, software that nobody fully understands, although everybody is well aware of its defects and deficiencies? In order to avert this highly unpleasant vision of the future, some fresh starts will have to be undertaken now and then. They require the courage to discard and abandon, to select simplicity and transparency as design goals rather than complexity and obscure sophistication.
    In the field of programming languages two fresh starts have been attempted recently. I refer to Java and C#. Both tend to restrict rather than augment the number of features, trying to suggest a certain programming discipline. Hence they move in the right direction. This is encouraging. But there still remains a long way to reach Pascal, and an even longer one to Oberon.

          in Software Pioneers: Contributions to Software Engineering, Bonn, 28-29. 6. 2001 eds Broy, Manfred and Denert, Ernst Springer 2002 view details