Base Language(ID:8452/)


Common Base Language for representing programs effectively to hardware, based on the VDM formalism but carried down to the level of instruction interpretation. Effectively the first dataflow language; it evolved into Dataflow and then VAL

"The Computation Structures Group of Project MAC is working toward the design and specification of a base language for programs and information structures.  The base language is intended to serve as a common intermediate representation for programs expressed in a variety of source programming languages."

Influenced by McCarthy, Landin, VDM, and Iliffe's hardware work. Dennis also corresponded with Ershov at Novosibirsk about Ershov's internal-language experiments

Intimately bound up with efficient parallel processing in hardware and its description, which Dennis had been researching at MIT from 1964 onwards. The problem was formalised as one solvable in some form by a computer language around 1969, and with the DARPA grant of 1970 it became a focussed research initiative, producing the first published drafts of the Base Language paper


Related languages
GENIE => Base Language   Incorporates some features of
OREGANO => Base Language   Influence
Vienna Definition Language => Base Language   Incorporated features of
Base Language => Dataflow   Evolution of

References:
  • Dennis, J. B. and Van Horn, E. C. "Programming semantics for multiprogrammed computations" CACM, Vol. 9, No. 3 (March 1966), pp. 143-155.
  • Dennis, Jack B. "Programming Generality, Parallelism and Computer Architecture" Computation Structures Group - Memo No. 32 MIT-Project MAC - MAC-M-409 view details Abstract: Two current trends in computing fortell a major transformation in the architecture of computer systems. One is the increasing Importance of concurrency or parallelism, both to improve hardware utilization through multiprogramming, and to heighten the useability of a computer through "multi-access" operation.  The other is "programming generality" - the independence of an algorithm description of the environment In which it is used.  Programming generality is the ability to move a program between computer installations; the ability to maintain a program within changing hardware; the ability to use a program in the construction of another -without altering the program description in any way.
    To exhibit programming generality a computer system must permit a program module to 1) create information structures of arbitrary extent;  2) call on procedures with unknown requirements for storage and information structures; and 3) transmit information structures of arbitrary complexity to a called procedure. These requirements imply that the hardware- or operating system must make storage allocation decisions and, hence, that location-independent addressing must be used. A small unit of storage allocation is necessary to prevent waste of valuable storage capacity and unnecessary information transfer.  Exploitation of detailed parallelism is then essential to fully utilize the processing hardware,
    A naming scheme is a set of ru?.es for relating instances of identifiers in algorithm descriptions to information units represented in a computer system memory.  To obtain programming generality, a naming scheme must provide for  1) referencing components of information structures,  2) sharing information structures among computations, 3)  extending information structures indefinitely, and A) transmitting arbitrary structures as procedure parameters.
    A graph model of programs is given that satisfies the naming requirements of programming generality and exhibits the detailed parallelism of a computation.  The nodes of a program graph represent operators; the arcs indicate the flow of data from one operator to another.  Some node types permit reference to information structures that are subtrees of an environment tree.  The apply node permits application of subordinate program graph procedures.
    A computer organization is proposed in which a machine language procedure is simply a transliteration of a program graph. A multiple-level memory system is suggested which employs associative, location-independent, addressing.  The processing hardware consists of many autonomous cells that employ service-on-demand control.
  • Dennis, J. B. "Future trends in time sharing systems" Time-Sharing Innovation for Operations Research and Decision-Making. Washington Operations Research Council 1969, pp 229-235. view details
  • Dennis, J. B. "Programming generality, parallelism and computer architecture". Information Processing 68, North-Holland, Amsterdam 1969, pp 484-492. view details
  • Dennis, Jack B. "On the Exchange of Information" Computation Structures Group - Memo No. 52 October, 1970 view details Abstract: There is widespread and increasing interest in moving information between computer installations which may differ in their hardware, software, or their program libraries and data files [1,2,3]. There ia general agreement that the task of making a major program written at one inatalla-tion available for use at another installation ia difficult, and is not seriously undertaken without thorough analysis. Yet there seems to be considerable optimism that the problem of accesaing a data base at a foreign installation can be solved through development of a "data description language" that would serve to characterize any class of data objects that might be communicated among computer installations [4]. The purpose of this paper is to argue that the general problem of data exchange is no less difficult than the problem of program exchange, and that the concept of a "data description language" is not a solution to the problem.
    Extract: The Problem
    The Problem
    To express the general problem of data base transfer, consider two contexts within which procedures may be executed so as to accomplish computations on behalf of users. Let these be known as context A and context B. By the term "context" we mean the collection of all those factors that determine the detailed course of execution of computations performed by a computer installation.  The factors defining a context for execution of a procedure P are normally these:
    1.  The programming language in which P is expressed for presentation to the computer installation.
    2.  The compiler for the programming language in which P is expressed.
    3.  The computer hardware in which the compiled form of P is run.
    4.  The file manipulation and communications services provided by the operating system of the installation.
    5.  The set of rules used by the installation for binding external references contained in P.
    6.  The status of catalogs or directories of data and procedures; the files themselves; the user account under which P is executed.
    A difference in any of these factors may cause the procedure P to have a different effect when performed at two installations.
    Now let us formulate the problem. A data base D has been created by procedures operating in context A.  As shown in Figure 1a, let Q be representative of these procedures.  It is desired to make use of D in a distinct context B. We assume that "make use of D" means that a user wishes to carry out computations in context B in which certain procedures either retrieve information from D or alter the content of D.  Let P be representative of these procedures (Figure 1b).
    Now in the user's mind P, Q, and D are objects that have existence independent of any computer system: P and Q are abstract procedures; D is an abstract data object.  Procedure Q must be expressed in a programming language and compiled before it is ready for execution in context A.  Let the compiled form of Q be Q(A), some representation of Q in context A. Operation of Q(A) and related procedures in context A produces D(A), a representation of D according to the data types and compiling conventions of context A.
    To apply procedure P to structure D in context B requires that P be expressed in a programming language and compiled into P(B), and that D be translated into D(B), where P(B) and D(B) are representations of P and D in context B.
    The problem is to ensure that the effect of executing P with D in context B will be "consistent" with the effect of executing Q to generate D in context A. To be more specific we assume that the effect desired by the user of P is exactly the effect that would result from applying P to a copy of D in context A: The desired effect is the result that would be obtained by expressing P in the same language as Q, translating it with the same compiler, and running it in the same operating environment as Q except for use of a copy of D.  Thus we adopt the following as our criterion for successful achievement of program and data transfer between two contexts:
    Application of P to D in context B must have the same effect as though P were applied to a copy of D in context A.  If this property holds for all procedures P and all structures D we say that contexts A and B are consistent.
    For investigating the conditions necessary for two contexts to be consistent, we offer the following interpretation of the phrase "have the same effect." Suppose the diagram in Figure 2 is an adequate model for procedure P:
    D represents the data base prior to execution of procedure P.
    D' represents the data base after execution of P.
    X denotes data (other than D) accessible to P but not altered by P (input data).
    Y denotes data (other than D') that execution of P generates (output data).
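    This criterion can be paraphrased as a small executable check. The sketch below uses invented names (consistent, encode_A, p_A, and so on) and models a procedure P, as in Figure 2, as a function from (data base D, input X) to (updated data base D', output Y); each context supplies its own encoding of the abstract data base.
```python
from typing import Any, Callable, Tuple

Proc = Callable[[Any, Any], Tuple[Any, Any]]   # P: (D, X) -> (D', Y)

def consistent(p_in_A: Proc, p_in_B: Proc, d_abstract: Any, x: Any,
               encode_A, decode_A, encode_B, decode_B) -> bool:
    """The consistency criterion on one (D, X) pair: applying P to D in
    context B must have the same effect as applying P to a copy of D in
    context A."""
    dA_prime, yA = p_in_A(encode_A(d_abstract), x)   # effect in context A
    dB_prime, yB = p_in_B(encode_B(d_abstract), x)   # effect in context B
    # Compare the abstract results, not the concrete encodings.
    return decode_A(dA_prime) == decode_B(dB_prime) and yA == yB

# Toy contexts: A represents D as a dict, B as a sorted list of pairs.
encode_A, decode_A = (lambda d: dict(d)), (lambda r: dict(r))
encode_B, decode_B = (lambda d: sorted(d.items())), (lambda r: dict(r))

def p_A(d, x):                                   # P as realized in context A
    d = dict(d); d["total"] = sum(v for k, v in d.items() if k != "total") + x
    return d, d["total"]

def p_B(d, x):                                   # P as realized in context B
    d = dict(d); d["total"] = sum(v for k, v in d.items() if k != "total") + x
    return sorted(d.items()), d["total"]

assert consistent(p_A, p_B, {"a": 1, "b": 2}, 10,
                  encode_A, decode_A, encode_B, decode_B)
```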

    Extract: Conclusion
    Conclusion
    For the purpose of moving data structures among computer installations, it will be necessary to define a "concrete representation" of a suitable class of abstract data structures for use as a common language for data interchange via communications networks. The examples in this paper have been chosen to demonstrate the merits of the class of abstract data structures.
    I see two principal uses for a data description language of the kind suggested above. One of these uses is in translating data structures in one context into the common representation adopted for interchange.  If the representations in the context do not contain type information for the elementary objects, or if the representation is ambiguous without additional information, then a formal specification of the class of data structures may make it possible for a standard program to perform the translations.  If complete type information is included as part of representations of data structures, then they may be converted into the common form without need for a data description.
    The second use for a data description is in the context to which the data is moved. A description is not needed to translate from the common form to representations in the new context, because the common form should have complete type information.  However, if the new context does not retain complete type information, a description may be useful to a general purpose program for retrieving information about transferred data structures.
    Neither of these uses of a data description language waives the requirement of consistency of contexts between which data is moved.
    Thus the concept of data description language is not a substitute for a universal representation for data structures and the procedures that operate on them.  The eventual solution will be a common intermediate language used as a standard semantic base for assigning meaning to programs and information structures.  We believe this is not an unreasonable goal: After all, System 360 machine language is currently serving this purpose for a large class of programs even though its qualifications for the role are far from ideal.
  • Dennis, J. B. "Coroutines and parallel computation", Princeton Conference an Information Sciences and Systems, Princeton, New Jersey, March 1971 view details
  • Dennis, J. B. "On the Design and Specification of a Common Base Language" Proceedings of the Symposium on Computers and Automata, Polytechnic Institute of Brooklyn, New York, April 1971.
  • Dennis, J.B. "On the Design and Specification of a Common Base Language" June 1972, MIT LCS TR-101 view details Abstract: Abstract:  This is a report on the work of the Computation Structures Group of Project MAC toward the design and specification of a common base language for programs and information structures.  We envision that the meanings of programs expressed in practical source languages will be defined by rules of translation into the base language.  The meanings of programs in the base language is fixed by rules of interpretation which constitute a transition system called the interpreter for the base language.  We view the base language interpreter as the functional specification of a computer system in which emphasis is placed on programming generality — the ability of users to build complex programs by combining independently written program modules.
    Our concept of a common base language is similar to the abstract programs of the Vienna definition method — but a single class of abstract programs applies to all source languages to be encompassed.  The semantic constructs of the base language must be just those fundamental cons tructs necessary for the effective realization of the desired range of source languages.  Thus we seek simplicity in the design of the interpreter at the expense of increased complexity of the translator from a source language to the base language. As an illustration of this philosophy, we oresent a rudimentary form of the base language in which nonlocal references are not permitted, and show how programs expressed in a simple block structured language may be translated into this base language.
    The importance of representing concurrency within and among computations executed by the interpreter is discussed, and our approach toward incorporating concurrency of action in the base langauge is outlined.
    Extract: Introduction
    INTRODUCTION
    The Computation Structures Group of Project MAC is working toward the design and specification of a base language for programs and information structures.  The base language is intended to serve as a common intermediate representation for programs expressed in a variety of source programming languages.
    The motivation for this work is the design of computer systems in which the creation of correct programs is as convenient and easy as possible. A major ingredient in the convenient synthesis of programs is the ability to build large programs by combining simpler procedures or program modules, written independently, and perhaps by different individuals using different source languages.  This ability of a computer system to support modular programming we have called programming generality [3, 4].  Programming generality requires the communication of data among independently specified procedures, and thus that the semantics of the languages in which these procedures are expressed must be defined in terms of a common collection of data types and a common concept of data structure.
    We have observed that the achievement of programming generality is very difficult in conventional computer systems, primarily because of the variety of data reference and access methods that must be used for the implementation of large programs with acceptable efficiency. For example, data structures that vary in size and form during a computation are given different representations from those that are static; data that reside in different storage media are accessed by different means of reference; clashes of identifiers appearing in different blocks or procedures are prevented by design in some source languages but similar consideration has not been given to the naming and referencing of cataloged files and procedures in the operating environment of programs. These limitations on the degree of generality possible in computer systems of conventional architecture have led us to study new concepts of computer system organization through which these limitations on programming generality might be overcome.
    In this effort we are working at the same time on developing the base language and on concepts of computer architecture suited to the execution of computations specified by base language programs.  That is, we regard the base language we seek to define as a specification of the functional operation of a computer system.  Thus our work on the base language is strongly influenced by hardware concepts derived from the requirements of programming generality [3].
    In particular, the choice of trees with shared substructures as our universal representation for information structures is based in part on a conviction that there are attractive hardware realizations of memory systems for tree structured data.  For example, Gertz [8] considers how such a memory system might be designed as a hierarchy of associative memories.  Also, the base language is intended to represent the concurrency of parts of computations in a way that permits their execution in parallel.  One reason for emphasizing concurrency is that it is essential to the description of certain computations — in particular, when a response is required to whichever one of several independent events is first to occur. An example is a program that must react to the first message received from either of two remote terminals.  Furthermore, we believe that exploiting the potential concurrency in programs will be important in realizing efficient computer systems that offer programming generality.  This is because concurrent execution of program parts increases the utilization of processing hardware by providing many activities that can be carried forward while other activities are blocked pending retrieval of information from slower parts of the computer system memory.
    Our proposal for the definition of a common base language may seem like a rebirth of the proposal to develop a Universal Computer Oriented Language [24].  Thus it is reasonable to inquire whether there is any better chance that the development suggested here will succeed whereas this earlier work did not result in a useful contribution to the art.  Our confidence in eventual success rests on important trends in the computer field during the past ten years and fundamental differences in philosophy.  The most important change is the increased importance of achieving greater programming generality in future computer systems.  The cost of acquiring and operating the hardware portion of computer systems has become dominated by the expense
    of creating and maintaining the system and application software. At present, there is great interest in the exchange of programs and data among computer installations, and in building complex procedures from components through the facilities of time-shared computers.  Computer users are often prepared to forsake efficiency of programs to gain the ability to operate them in different environments, and the ability to use the program in conjunction with other programs to accomplish a desired objective.
    Furthermore, the pace of programming language evolution has slowed.  It is rare that a fundamentally new concept for representing algorithms is introduced.  Workers on programming language design have turned to refining the conceptual basis of program representation, providing more natural modes of expressing algorithms in different fields, and consolidating diverse ways of representing similar actions.  Today, there is good reason to expect that a basic set of notions about data and control structures will be sufficient to encompass a usefully large class of practical programming languages and applications.  In particular, the set of elementary data types used in computation has not changed significantly since the first years of the stored program computer — they are the integers, representations for real numbers, the truth values true and false, strings of bits, and strings of symbols from an alphabet.  Also, considerable attention is currently devoted to the development of useful abstract models for information structures, and the prospects are good that these efforts will converge on a satisfactory general model.
    We are also encouraged by others who are striving toward similar goals. Andrei Ershov is directing a group at the Novosibirsk Computing Center of the Soviet Union in the development of a common "internal language" for use in an optimizing compiler for three different languages — PL/I, Algol 68, and Simula 67 [7].  The internal language would be a representation common to the three source languages and is to serve as the representation in which transformations are performed for machine independent optimization.
    The "contour model" for program execution, as explained by Johnston [10] and Berry [1] provides a readily understood vehicle for explaining the
    semantics of programming languages such as Algol 60, PL/I, and Algol 68 in which programs have a nested block structure.  It is easy to imagine how the contour model could be formalized and thus serve as a basis for specifying the formal semantics of programming languages.  The contour model may be considered as a proposal for a common base language and as a guide for the design of computer systems that implement block structured languages.
    John Iliffe has for some time recognized some of the fundamental implications of programming generality with respect to computer organization. His book Basic Machine Principles [9] is a good exposition of his ideas which are argued from the limitations of conventional computer hardware in executing general algorithms.  Again, Iliffe's machine defines a scheme of program representation that could be thought of as a common base language. However, Iliffe has not discussed his ideas from this viewpoint. Extract: Formal Semantics
    FORMAL SEMANTICS
    When the meaning of algorithms expressed in some programming language has been specified in precise terms, we say that a formal semantics for the language has been given.  A formal semantics for a programming language generally takes the form of two sets of rules — one set being a translator, and the second set being an interpreter.  The translator specifies a transformation of any well formed program expressed in the source language (the concrete language) into an equivalent program expressed in a second language — the abstract language of the definition.  The interpreter expresses the meaning of programs in the abstract language by giving explicit directions for carrying out the computation of any well formed abstract program as a countable set of primitive steps.
    It would be possible to specify the formal semantics of a programming language by giving an interpreter for the concrete programs of the source language.  The translator is then the identity transformation.  Yet the inclusion of a translator in the definition scheme has important advantages. For one, the phrase structure of a programming language viewed as a set of strings on some alphabet usually does not correspond well with the semantic
    structure of programs.  Thus it is desirable to give the semantic rules of interpretation for a representation of the program that more naturally represents its semantic structure.  Furthermore, many constructs present in source languages are provided for convenience rather than as fundamental linguistic features.  By arranging the translator to replace occurrences of these constructs with more basic constructs, a simpler abstract language is possible, and its interpreter can be made more readily understandable and therefore more useful as a tool for the design and specification of computer languages and systems.
    The abstract language that has received the most attention as a base for the formal semantics of programming languages is the lambda-calculus of Church.  For several reasons we have found the lambda calculus unsuited to our work.  The most serious problem is that the lambda calculus does not deal directly with structured data.  Thus it is inconvenient to use the lambda calculus as a common target language for programs that make use of selection to reference components of information structures.  It also rules out modeling of sharing in the form of two or more structures having the same substructure as a component.
    A second defect in terms of our goals is that the lambda calculus incorporates the concept of free and bound variables characteristic of block structured programming languages. We prefer to exclude these concepts so the base language and its interpreter are simpler and more readily applied to the study of computer organization.  Later in the paper we show how block structured programs may be translated into base language programs using the rudimentary version of the base language introduced below.  This translation of block structured programs into programs that are not block structured is an important example of how simplicity in the interpreter may be obtained by translating source language constructs into more primitive constructs.
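    As a rough illustration of this point (not the report's own translation scheme), the sketch below shows a nested procedure that refers nonlocally to a variable of its enclosing block, and an equivalent "flattened" version in which that value is passed as an explicit parameter, so the procedure can be interpreted without an environment of enclosing blocks.
```python
# Block-structured source: `inner` refers nonlocally to `scale`.
def outer_block(xs):
    scale = 10
    def inner(x):
        return x * scale          # nonlocal reference to `scale`
    return [inner(x) for x in xs]

# Base-language-style translation: every name `inner_flat` uses is a
# parameter or a local, so no free variables remain to be resolved
# through enclosing blocks.
def inner_flat(x, scale):
    return x * scale              # all operands arrive explicitly

def outer_flat(xs):
    scale = 10
    return [inner_flat(x, scale) for x in xs]

assert outer_block([1, 2, 3]) == outer_flat([1, 2, 3])
```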
    Our thoughts on the definition of programming languages in terms of a base language are closely related to the formal methods developed at the IBM Vienna Laboratory [17, 18], and which derive from the ideas of McCarthy [19, 20] and Landin [13, 14].  For the formal semantics of programming languages a general model is required for the data on which programs act.  We regard data as consisting of elementary objects and compound objects formed by combining elementary objects into data structures.
    Elementary objects are data items whose structure in terms of simpler objects is not relevant to the description of algorithms.  For the purposes of this paper, the class E of elementary objects is
    E = Z ∪ R ∪ W  where
    Z = the class of integers
    R = a set of representations for real numbers
    W = the set of all strings on some alphabet
    Data structures are often represented by directed graphs in which elementary objects are associated with nodes, and each arc is labelled by a member of a set S of selectors.  In the class of objects used by the Vienna group, the graphs are restricted to be trees, and elementary objects are associated only with leaf nodes. We prefer a less restricted class so an object may have distinct component objects that share some third object as a common component.  The reader will see that this possibility of sharing is essential to the formulation of the base language and interpreter presented here.  Our class of objects is defined as follows:
    Let E be a class of elementary objects, and let S be a class of selectors. An object is a directed acyclic graph having a single root node from which all other nodes may be reached over directed paths.  Each arc is labelled with one selector in S, and an elementary object in E may be associated with each leaf node.
    S = Z ∪ W
    Figure 1 gives an example of an object.  Leaf nodes having associated elementary objects are represented by circles with the element of E written inside; integers are represented by numerals, strings are enclosed in single
    quotes, and reals have decimal points.  Other nodes are represented by solid dots, with a horizontal bar if there is more than one emanating arc.
    The node of an object reached by traversing an arc emanating from its root node is itself the root node of an object called a component of the original object.  The component object consists of all nodes and arcs that can be reached by directed paths from its root node.
    At present, we rule out directed cycles in the graphs of objects for several reasons:  In the first place, the data structures of the most important source languages are readily modelled as objects according to our definition.  Also, it seems that realizing the maximal concurrency of computations on data structures will be difficult to do with a guarantee of determinism if objects are permitted to contain cycles.  Finally, the possibility of cycles invalidates the reference count technique of freeing storage for data items no longer accessible to computations, and some more general garbage collection scheme must be used.  The general techniques do not seem attractive with regard to the concepts of computer organization we have been studying — especially when data items are distributed among several physical levels of memory.
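    A rough model of this class of objects, with illustrative names, is sketched below: a rooted acyclic graph whose arcs carry selectors drawn from S = Z ∪ W, whose leaf nodes may hold elementary objects from E = Z ∪ R ∪ W, and whose nodes may be shared between parents, as in the example at the end.
```python
from typing import Dict, Optional, Union

Selector = Union[int, str]            # S = Z ∪ W
Elementary = Union[int, float, str]   # E = Z ∪ R ∪ W


class Node:
    """One node of an object: only leaves are given an elementary value."""
    def __init__(self, value: Optional[Elementary] = None):
        self.value = value
        self.arcs: Dict[Selector, "Node"] = {}  # selector-labelled arcs

    def attach(self, sel: Selector, child: "Node") -> None:
        self.arcs[sel] = child

    def select(self, sel: Selector) -> "Node":
        """The component object reached over the arc labelled `sel`."""
        return self.arcs[sel]


# Two components sharing a common substructure, as the definition permits;
# acyclicity is what lets simple reference counting reclaim unreachable nodes.
shared = Node()
shared.attach("unit", Node("meters"))
root, a, b = Node(), Node(), Node()
root.attach("a", a)
root.attach("b", b)
a.attach("dims", shared)
b.attach("dims", shared)
assert root.select("a").select("dims") is root.select("b").select("dims")
```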
    It is convenient to introduce our concept of a base language and its interpreter by comparison with the Vienna definition method as represented by the formal definitions of Algol 60 [15] and PL/I [18].  The Vienna method is outlined in Figure 2.  The concrete programs of the programming language being defined are mapped into abstract programs by the translator.  A concrete program is a string of symbols that satisfies a concrete syntax usually expressed as a form of context free grammar.  The interpreter is a nondeterministic state transition system defined by a relation that specifies all possible next states for any state of the interpreter. Abstract programs and the states of the interpreter are represented by objects (trees). Figure 2 shows the three major components of interpreter states.  The 'text'-component is the abstract program being interpreted.  The 'mem'-component is an object that contains the values of variables in the abstract program, thus serving as a model of memory.  The 'cont'-component of the state contains information about statements of the abstract program whose execution is in progress.  The interpreter is specified as a nondeterministic system so activities may be carried out concurrently where permitted by the language being defined.
    For comparison, note that a separate class of abstract programs and interpreter are specified for each formal definition of a source language; that states of the interpreter model only the information structures related to execution of one abstract program; and that statements in the concrete program retain their identity as distinct parts of the corresponding abstract program.
    Figure 3 is the corresponding outline showing how source languages would be defined in terms of a common base language. A single class of abstract programs constitutes the base language.  Concrete programs in source languages (L1 and L2 in the figure) are defined by translators into the base language — the class of abstract programs serves as the common target representation for several source languages.  For this to be effectively possible, the base language should be the "least common denominator" of the set of source languages to be accommodated.  The structure of abstract programs cannot reflect the peculiarities of any particular source language, but must provide a set of fundamental linguistic constructs in terms of which the features of these source languages may be realized.  The translators themselves should be specified in terms of the base language, probably by means of a specialized source language.  Formally, abstract programs in the base language, and states of the interpreter are elements of our class of objects defined above.
    The structure of states of the interpreter for the base language is shown in Figure 4.  Since we regard the interpreter for the base language as a complete specification for the functional operation of a computer system, a state of the interpreter represents the totality of programs, data, and control information present in a computer system.  In Figure 4 the universe is an object that represents all information present in the computer system when the system is idle — that is, when no computation is in progress. The universe has data structures and procedure structures as constituent objects.  Any object is a legitimate data structure; for example, a data structure may have components that are procedure structures.  A procedure structure is an object that represents a procedure expressed in the base language.  It has components which are instructions of the base language, data structures, or other procedure structures.  So that multiple activations of procedures may be accommodated, a procedure structure remains unaltered during its interpretation.
    The local structure of an interpreter state contains a local structure for each current activation of each base language procedure.  Each local structure has as components the local structures of all procedure activations initiated within it.  Thus the hierarchy of local structures represents the dynamic relationship of procedure activations.  One may think of the root local structure as the nucleus of an operating system that initiates independent, concurrent computations on behalf of system users as they request activation of procedures from the system files (the universe).
    The local structure of a procedure activation has a component object for each variable of the base language procedure.  The selector of each component is its identifier in the instructions of the procedure.  These objects may be elementary or compound objects and may be common with objects within the universe or within local structures of other procedure activations.
    The control component of an interpreter state is an unordered set of sites of activity. A typical site of activity is represented in Figure 4 by an asterisk at an instruction of procedure P and an arrow to the local structure L for some activation of P.  This is analogous to the "instruction pointer/environment pointer" combination that represents a site of activity in Johnston's contour model [10].  Since several activations of a procedure may exist concurrently, there may be two or more sites of activity involving the same instruction of some procedure, but designating different local structures. Also, within one activation of a procedure, several
    instructions may be active concurrently; thus asterisks on different instructions of a procedure may have arrows to the same local structure.
    Each state transition of the interpreter executes one instruction for some procedure activation, at a site of activity selected arbitrarily from the control of the current state.  Thus the interpreter is a nondeterministic transition system.  In the state resulting from a transition, the chosen site of activity is replaced according to the sequencing rules of the base language.  Replacement with two sites of activity designating two successor instructions would occur in interpretation of a fork instruction; deletion of the site of activity without replacement would occur in execution of a quit or join instruction.
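    The control behaviour just described can be caricatured in a few lines. The sketch below is a toy, not the base language interpreter itself: it uses a hypothetical instruction set (fork, quit, and a generic op) and represents each site of activity as a (procedure, instruction index, local structure) triple; one transition executes one instruction at an arbitrarily chosen site, fork replaces the site with two successors, and quit deletes it without replacement.
```python
import random

def step(control, procedures):
    """One nondeterministic transition of the toy control: execute one
    instruction at one arbitrarily chosen site of activity."""
    if not control:
        return control                        # idle: no sites of activity
    site = random.choice(sorted(control))     # arbitrary choice of site
    proc, pc, local = site                    # (procedure, instruction, local structure)
    op = procedures[proc][pc]
    control = control - {site}
    if op == "fork":                          # replace with two successor sites
        control |= {(proc, pc + 1, local), (proc, pc + 2, local)}
    elif op == "quit":                        # delete without replacement
        pass
    else:                                     # ordinary instruction: advance
        control |= {(proc, pc + 1, local)}
    return control

# One activation of a procedure P: fork two branches, one does an op, both quit.
procedures = {"P": ["fork", "op", "quit"]}
control = {("P", 0, "L1")}
for _ in range(6):
    control = step(control, procedures)
```
    Extract: Representation Of Concurrency In The Base Language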
    REPRESENTATION OF CONCURRENCY IN THE BASE LANGUAGE
    A subject of major importance in the design of the base language is the representation of concurrent activities.  In the introduction we noted that some computations inherently involve concurrent processes and cannot be simulated by sequential programs — also, that a high degree of concurrency within computations may prove essential to the practical realization of computer systems with programming generality.  To these motivations we may add that some contemporary source languages, notably PL/I, have explicit provision for programming concurrent processes.
    We regard the state transitions of the interpreter as representing the progress of all activities in a computer system that is executing many programs simultaneously.  The basic requirements for representing concurrent actions in the interpreter are met by providing for many sites of activity in the control component of the state (Figure 3), and by organizing the local structures of procedure activations as a tree so a procedure may spawn independent, concurrent activations of component procedures.  Multiple sites of activity may represent many actions required to accomplish different parts of one computation as well as parallel execution of many independent computations.
    Consideration of concurrent computation brings in the issue of nondeterminacy — the possibility that computed results will depend on the relative timing with which the concurrent activities are carried forward. The work of Van Horn [27], Rodriguez [22] and others has shown that computer systems can be designed so that parallelism in computations may be realized while determinacy is guaranteed for any program written for the system.  The
    ability of a computer user to direct the system to carry out computations with a guarantee of determinacy is very important. Most programs are intended to implement a functional dependence of results on inputs, and determinism is essential to the verification of their correctness.
    There are two ways of providing a guarantee of determinacy to the user of a computer system.  The distinction is whether the class of abstract or base language programs is constrained by the design of the interpreter to describe only determinate computations.  If this is the case, then any abstract program resulting from compilation will be determinate in execution. Furthermore, if the compiler is itself a determinate procedure, then each translatable source program represents a determinate procedure.  On the other hand, if the design of the interpreter does not guarantee determinacy of abstract programs, determinacy of source programs, when desired, must be ensured by the translator.
    In the base language, it is necessary to provide for computations that are inherently nondeterminate, such as the example of a process awaiting the first response from either of two terminals.  We want to include in the base language primitive features for representing essential forms of nondeterminacy. In principle, we wish to guarantee that any (base language) procedure that does not use these features will be determinate in its operation.  Furthermore, use of base language primitives for the construction of nondeterminate procedures is intended to be such that the choice among alternative outcomes always originates from the source intended by the program author, and never from timing relationships unrelated to his computation.
    Our current thoughts regarding representation of base language procedures so as to guarantee determinacy are based on data flow representations for programs in which each operation is activated by the arrival of its operands, and each result is transmitted, as soon as it is ready, to those operations requiring its use.  Rodriguez [22] has formulated a data flow model that applies to programs involving assignment, conditional, and iteration statements, and data represented by simple variables.  Procedures represented by Rodriguez program graphs are naturally parallel and the rules for their execution guarantee determinacy.  In [3], Dennis has given a similar program graph model for procedures that transform data structures, but do not involve
    conditional or iteration steps.  Determinacy is guaranteed for these program graphs if they satisfy a readily testable condition.
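    The firing rule underlying these data flow models can be sketched as follows (the graph and node names here are invented): an operator may fire only when all of its operands have arrived, and its result is forwarded to the operators that need it, so the final values do not depend on the order in which ready operators are chosen.
```python
import operator
import random

# Graph: node -> (function, [input arcs]); each arc holds at most one value.
graph = {
    "add": (operator.add, ["x", "y"]),
    "mul": (operator.mul, ["add.out", "z"]),
}
arcs = {"x": 3, "y": 4, "z": 2, "add.out": None, "mul.out": None}
wiring = {"add": "add.out", "mul": "mul.out"}   # where each result is sent

fired = set()
while len(fired) < len(graph):
    ready = [n for n, (_, ins) in graph.items()
             if n not in fired and all(arcs[a] is not None for a in ins)]
    node = random.choice(ready)                 # any ready operator may fire
    fn, ins = graph[node]
    arcs[wiring[node]] = fn(*(arcs[a] for a in ins))
    fired.add(node)

assert arcs["mul.out"] == (3 + 4) * 2           # same result for any order
```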
    We hope to be successful in combining and extending these two models to obtain a satisfactory data flow model for all determinate procedures. If this objective can be achieved, we expect to use program graphs as the nucleus of the base language.  On the basis of improved understanding of parallel programs obtained by recent research on program schemes by Karp and Miller [11], Paterson [21], Slutz [23], and Keller [12], we are optimistic about finding an inherently determinate scheme for representing the concurrency present in most algorithms. Extract: Conclusion
    This article has been an introduction to the goals, philosophy and methods of our current work on the design of a base language.  The material presented is an "instantaneous description" of an activity that still has far to go — many issues need to be satisfactorily resolved before we will be pleased with our effort.  In addition to the representation of concurrency, the base language must encompass certain concepts and capabilities beyond those normally provided in contemporary source languages.  Four aspects of this kind are:  1. Generation and transformation of information structures that share component structures; 2. Concurrent processes that, in pairs, have producer-consumer relationships; 3. Programming systems that are able to generate base language programs and monitor their execution; and 4. Provision for controlling and sharing access to procedures and data structures among users of a computer system. We are continuing investigation of how these capabilities should be incorporated in the base language.  Some ideas on intercommunicating processes have been reported briefly [5].  Some thoughts on program monitoring and controlled sharing of information are given by Dennis and Van Horn [6], and by Vanderbilt [25].