Dataflow(ID:6622/dat009)


Formalisation of the Base Language into a proper programming system


Related languages
Base Language => Dataflow   Evolution of
Dataflow => DFPL   Extension of
Dataflow => EPS   Influence
Dataflow => TDFL   Evolution of
Dataflow => VAL   Evolution of

References:
  • Dennis, J. B. "On the Design and Specification of a Common Base Language". Proceedings of the Symposium on Computers and Automata, Polytechnic Institute of Brooklyn, New York, April 1971
  • Dennis, J.B. "On the Design and Specification of a Common Base Language", June 1972, MIT LCS TR-101 Abstract:  This is a report on the work of the Computation Structures Group of Project MAC toward the design and specification of a common base language for programs and information structures.  We envision that the meanings of programs expressed in practical source languages will be defined by rules of translation into the base language.  The meaning of programs in the base language is fixed by rules of interpretation which constitute a transition system called the interpreter for the base language.  We view the base language interpreter as the functional specification of a computer system in which emphasis is placed on programming generality — the ability of users to build complex programs by combining independently written program modules.
    Our concept of a common base language is similar to the abstract programs of the Vienna definition method — but a single class of abstract programs applies to all source languages to be encompassed.  The semantic constructs of the base language must be just those fundamental constructs necessary for the effective realization of the desired range of source languages.  Thus we seek simplicity in the design of the interpreter at the expense of increased complexity of the translator from a source language to the base language. As an illustration of this philosophy, we present a rudimentary form of the base language in which nonlocal references are not permitted, and show how programs expressed in a simple block structured language may be translated into this base language.
    The importance of representing concurrency within and among computations executed by the interpreter is discussed, and our approach toward incorporating concurrency of action in the base language is outlined.
    Extract: Introduction
    INTRODUCTION
    The Computation Structures Group of Project MAC is working toward the design and specification of a base language for programs and information structures.  The base language is intended to serve as a common intermediate representation for programs expressed in a variety of source programming languages.
    The motivation for this work is the design of computer systems in which the creation of correct programs is as convenient and easy as possible. A major ingredient in the convenient synthesis of programs is the ability to build large programs by combining simpler procedures or program modules, written independently, and perhaps by different individuals using different source languages.  This ability of a computer system to support modular programming we have called programming generality [3, 4].  Programming generality requires the communication of data among independently specified procedures, and thus that the semantics of the languages in which these procedures are expressed must be defined in terms of a common collection of data types and a common concept of data structure.
    We have observed that the achievement of programming generality is very difficult in conventional computer systems, primarily because of the variety of data reference and access methods that must be used for the implementation of large programs with acceptable efficiency. For example, data structures that vary in size and form during a computation are given different representations from those that are static; data that reside in different storage media are accessed by different means of reference; clashes of identifiers appearing in different blocks or procedures are prevented by design in some source languages but similar consideration has not been given to the naming and referencing of cataloged files and procedures in the operating environment of programs. These limitations on the degree of generality possible in computer systems of conventional architecture have led us to study new concepts of computer system organization through which these limitations on programming generality might be overcome.
    In this effort we are working at the same time on developing the base language and on concepts of computer architecture suited to the execution of computations specified by base language programs.  That is, we regard the base language we seek to define as a specification of the functional operation of a computer system.  Thus our work on the base language is strongly influenced by hardware concepts derived from the requirements of programming generality [3].
    In particular, the choice of trees with shared substructures as our universal representation for information structures is based in part on a conviction that there are attractive hardware realizations of memory systems for tree structured data.  For example, Gertz [8] considers how such a memory system might be designed as a hierarchy of associative memories.  Also, the base language is intended to represent the concurrency of parts of computations in a way that permits their execution in parallel.  One reason for emphasizing concurrency is that it is essential to the description of certain computations — in particular, when a response is required to whichever one of several independent events is first to occur. An example is a program that must react to the first message received from either of two remote terminals.  Furthermore, we believe that exploiting the potential concurrency in programs will be important in realizing efficient computer systems that offer programming generality.  This is because concurrent execution of program parts increases the utilization of processing hardware by providing many activities that can be carried forward while other activities are blocked pending retrieval of information from slower parts of the computer system memory.
    Our proposal for the definition of a common base language may seem like a rebirth of the proposal to develop a Universal Computer Oriented Language [24].  Thus it is reasonable to inquire whether there is any better chance that the development suggested here will succeed whereas this earlier work did not result in a useful contribution to the art.  Our confidence in eventual success rests on important trends in the computer field during the past ten years and fundamental differences in philosophy.  The most important change is the increased importance of achieving greater programming generality in future computer systems.  The cost of acquiring and operating the hardware portion of computer systems has become dominated by the expense
    of creating and maintaining the system and application software. At present, there is great interest in the exchange of programs and data among computer installations, and in building complex procedures from components through the facilities of time-shared computers.  Computer users are often prepared to forsake efficiency of programs to gain the ability to operate them in different environments, and the ability to use the program in conjunction with other programs to accomplish a desired objective.
    Furthermore, the pace of programming language evolution has slowed.  It is rare that a fundamentally new concept for representing algorithms is introduced.  Workers on programming language design have turned to refining the conceptual basis of program representation, providing more natural modes of expressing algorithms in different fields, and consolidating diverse ways of representing similar actions.  Today, there is good reason to expect that a basic set of notions about data and control structures will be sufficient to encompass a usefully large class of practical programming languages and applications.  In particular, the set of elementary data types used in computation has not changed significantly since the first years of the stored program computer — they are the integers, representations for real numbers, the truth values true and false, strings of bits, and strings of symbols from an alphabet.  Also, considerable attention is currently devoted to the development of useful abstract models for information structures, and the prospects are good that these efforts will converge on a satisfactory general model.
    We are also encouraged by others who are striving toward similar goals. Andrei Ershov is directing a group at the Novosibirsk Computing Center of the Soviet Union in the development of a common "internal language" for use in an optimizing compiler for three different languages — PL/I, Algol 68, and Simula 67 [7].  The internal language would be a representation common to the three source languages and is to serve as the representation in which transformations are performed for machine independent optimization.
    The "contour model" for program execution, as explained by Johnston [10] and Berry [1] provides a readily understood vehicle for explaining the
    semantics of programming languages such as Algol 60, PL/I, and Algol 68 in which programs have a nested block structure.  It is easy to imagine how the contour model could be formalized and thus serve as a basis for specifying the formal semantics of programming languages.  The contour model may be considered as a proposal for a common base language and as a guide for the design of computer systems that implement block structured languages.
    John Iliffe has for some time recognized some of the fundamental implications of programming generality with respect to computer organization. His book Basic Machine Principles [9] is a good exposition of his ideas which are argued from the limitations of conventional computer hardware in executing general algorithms.  Again, Iliffe's machine defines a scheme of program representation that could be thought of as a common base language. However, Iliffe has not discussed his ideas from this viewpoint. Extract: Formal Semantics
    FORMAL SEMANTICS
    When the meaning of algorithms expressed in some programming language has been specified in precise terms, we say that a formal semantics for the language has been given.  A formal semantics for a programming language generally takes the form of two sets of rules — one set being a translator, and the second set being an interpreter.  The translator specifies a transformation of any well formed program expressed in the source language (the concrete language) into an equivalent program expressed in a second language — the abstract language of the definition.  The interpreter expresses the meaning of programs in the abstract language by giving explicit directions for carrying out the computation of any well formed abstract program as a countable set of primitive steps.
    It would be possible to specify the formal semantics of a programming language by giving an interpreter for the concrete programs of the source language.  The translator is then the identity transformation.  Yet the inclusion of a translator in the definition scheme has important advantages. For one, the phrase structure of a programming language viewed as a set of strings on some alphabet usually does not correspond well with the semantic
    structure of programs.  Thus it is desirable to give the semantic rules of interpretation for a representation of the program that more naturally represents its semantic structure.  Furthermore, many constructs present in source languages are provided for convenience rather than as fundamental linguistic features.  By arranging the translator to replace occurrences of these constructs with more basic constructs, a simpler abstract language is possible, and its interpreter can be made more readily understandable, and therefore more useful as a tool for the design and specification of computer languages and systems.
    The abstract language that has received the most attention as a base for the formal semantics of programming languages is the lambda-calculus of Church.  For several reasons we have found the lambda calculus unsuited to our work.  The most serious problem is that the lambda calculus does not deal directly with structured data.  Thus it is inconvenient to use the lambda calculus as a common target language for programs that make use of selection to reference components of information structures.  It also rules out modeling of sharing in the form of two or more structures having the same substructure as a component.
    A second defect in terms of our goals is that the lambda calculus incorporates the concept of free and bound variables characteristic of block structured programming languages. We prefer to exclude these concepts so the base language and its interpreter are simpler and more readily applied to the study of computer organization.  Later in the paper we show how block structured programs may be translated into base language programs using the rudimentary version of the base language introduced below.  This translation of block structured programs into programs that are not block structured is an important example of how simplicity in the interpreter may be obtained by translating source language constructs into more primitive constructs.
    Our thoughts on the definition of programming languages in terms of a base language are closely related to the formal methods developed at the IBM Vienna Laboratory [17, 18], and which derive from the ideas of McCarthy [19, 20] and Landin [13, 14].  For the formal semantics of programming languages a general model is required for the data on which programs act.  We regard data as consisting of elementary objects and compound objects formed by combining elementary objects into data structures.
    Elementary objects are data items whose structure in terms of simpler objects is not relevant to the description of algorithms.  For the purposes of this paper, the class E of elementary objects is
    E = Z ∪ R ∪ W, where
    Z = the class of integers
    R = a set of representations for real numbers
    W = the set of all strings on some alphabet
    Data structures are often represented by directed graphs in which elementary objects are associated with nodes, and each arc is labelled by a member of a set S of selectors.  In the class of objects used by the Vienna group, the graphs are restricted to be trees, and elementary objects are associated only with leaf nodes. We prefer a less restricted class so an object may have distinct component objects that share some third object as a common component.  The reader will see that this possibility of sharing is essential to the formulation of the base language and interpreter presented here.  Our class of objects is defined as follows:
    Let E be a class of elementary objects, and let S be a class of selectors. An object is a directed acyclic graph having a single root node from which all other nodes may be reached over directed paths.  Each arc is labelled with one selector in S, and an elementary object in E may be associated with each leaf node.
    S = Z ∪ W
    Figure 1 gives an example of an object.  Leaf nodes having associated elementary objects are represented by circles with the element of E written inside; integers are represented by numerals, strings are enclosed in single
    quotes, and reals have decimal points.  Other nodes are represented by solid dots, with a horizontal bar if there is more than one emanating arc.
    The node of an object reached by traversing an arc emanating from its root node is itself the root node of an object called a component of the original object.  The component object consists of all nodes and arcs that can be reached by directed paths from its root node.
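The object class just defined can be pictured with a small Python sketch. This is an illustrative model, not code from the report; the class and method names are invented for the example.

```python
# An illustrative model (not from the report): objects as single-rooted
# acyclic graphs; arcs carry selectors (integers or strings), and leaf
# nodes may carry elementary values.

class Obj:
    def __init__(self, value=None):
        self.value = value   # elementary object; meaningful only at a leaf
        self.arcs = {}       # selector -> component Obj

    def attach(self, selector, component):
        self.arcs[selector] = component
        return self

    def component(self, selector):
        # The object rooted at the node reached over `selector`.
        return self.arcs[selector]

# Two distinct components of `root` share a third object as a common
# component -- exactly the sharing the report insists on permitting.
shared = Obj().attach('size', Obj(3))
root = (Obj()
        .attach('a', Obj().attach('x', shared))
        .attach('b', Obj().attach('y', shared)))

# One graph node reachable over two paths, not two copies:
assert root.component('a').component('x') is root.component('b').component('y')
```

The identity check at the end is the point of the model: because the Vienna group's trees forbid such sharing, the two paths would there lead to separate copies.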
    At present, we rule out directed cycles in the graphs of objects for several reasons:  In the first place, the data structures of the most important source languages are readily modelled as objects according to our definition.  Also, it seems that realizing the maximal concurrency of computations on data structures will be difficult to do with a guarantee of determinism if objects are permitted to contain cycles.  Finally, the possibility of cycles invalidates the reference count technique of freeing storage for data items no longer accessible to computations, and some more general garbage collection scheme must be used.  The general techniques do not seem attractive with regard to the concepts of computer organization we have been studying — especially when data items are distributed among several physical levels of memory.
    It is convenient to introduce our concept of a base language and its interpreter by comparison with the Vienna definition method as represented by the formal definitions of Algol 60 [15] and PL/I [18].  The Vienna method is outlined in Figure 2.  The concrete programs of the programming language being defined are mapped into abstract programs by the translator.  A concrete program is a string of symbols that satisfies a concrete syntax usually expressed as a form of context free grammar.  The interpreter is a nondeterministic state transition system defined by a relation that specifies all possible next states for any state of the interpreter. Abstract programs and the states of the interpreter are represented by objects (trees). Figure 2 shows the three major components of interpreter states.  The 'text'-component is the abstract program being interpreted.  The 'mem'-component is an object that contains the values of variables in the abstract program, thus serving as a model of memory.  The 'cont'-component of the
    state contains information about statements of the abstract program whose execution is in progress.  The interpreter is specified as a non-deterministic system so activities may be carried out concurrently where permitted by the language being defined.
    For comparison, note that a separate class of abstract programs and interpreter are specified for each formal definition of a source language; that states of the interpreter model only the information structures related to execution of one abstract program; and that statements in the concrete program retain their identity as distinct parts of the corresponding abstract program.
    Figure 3 is the corresponding outline showing how source languages would be defined in terms of a common base language. A single class of abstract programs constitutes the base language.  Concrete programs in source languages (L1 and L2 in the figure) are defined by translators into the base language — the class of abstract programs serves as the common target representation for several source languages.  For this to be effectively possible, the base language should be the "least common denominator" of the set of source languages to be accommodated.  The structure of abstract programs cannot reflect the peculiarities of any particular source language, but must provide a set of fundamental linguistic constructs in terms of which the features of these source languages may be realized.  The translators themselves should be specified in terms of the base language, probably by means of a specialized source language.  Formally, abstract programs in the base language, and states of the interpreter are elements of our class of objects defined above.
    The structure of states of the interpreter for the base language is shown in Figure 4.  Since we regard the interpreter for the base language as a complete specification for the functional operation of a computer system, a state of the interpreter represents the totality of programs, data, and control information present in a computer system.  In Figure 4 the
    universe is an object that represents all information present in the computer system when the system is idle — that is, when no computation is in progress. The universe has data structures and procedure structures as constituent objects.  Any object is a legitimate data structure; for example, a data structure may have components that are procedure structures.  A procedure structure is an object that represents a procedure expressed in the base language.  It has components which are instructions of the base language, data structures, or other procedure structures.  So that multiple activations of procedures may be accommodated, a procedure structure remains unaltered during its interpretation.
    The local structure of an interpreter state contains a local structure for each current activation of each base language procedure.  Each local structure has as components the local structures of all procedure activations initiated within it.  Thus the hierarchy of local structures represents the dynamic relationship of procedure activations.  One may think of the root local structure as the nucleus of an operating system that initiates independent, concurrent computations on behalf of system users as they request activation of procedures from the system files (the universe).
    The local structure of a procedure activation has a component object for each variable of the base language procedure.  The selector of each component is its identifier in the instructions of the procedure.  These objects may be elementary or compound objects and may be common with objects within the universe or within local structures of other procedure activations.
    The control component of an interpreter state is an unordered set of sites of activity. A typical site of activity is represented in Figure 4 by an asterisk at an instruction of procedure P and an arrow to the local structure L for some activation of P.  This is analogous to the "instruction pointer/environment pointer" combination that represents a site of activity in Johnston's contour model [10].  Since several activations of a procedure may exist concurrently, there may be two or more sites of activity involving the same instruction of some procedure, but designating different local structures. Also, within one activation of a procedure, several
    instructions may be active concurrently; thus asterisks on different instructions of a procedure may have arrows to the same local structure.
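The control component and its transitions can be sketched in miniature. The sketch below is hypothetical, not the report's actual interpreter: the instruction repertoire (a fork that splits a site of activity in two, a quit that deletes one) and all names are invented for illustration.

```python
# A toy model (not from the report): the control component as a multiset
# of sites of activity; each transition removes one arbitrarily chosen
# site and replaces it according to the instruction's sequencing rule.

import random

# A hypothetical procedure: instruction -> (operation, successor list).
# 'fork' replaces a site with two sites; 'quit' replaces it with none.
procedure = {
    0: ('fork', [1, 2]),
    1: ('step', [3]),
    2: ('step', [3]),
    3: ('quit', []),
}

def interpret(proc, local):
    control = [(0, local)]   # one initial site: (instruction, local structure)
    trace = []
    while control:
        # Nondeterministic selection of the next site of activity.
        ip, env = control.pop(random.randrange(len(control)))
        op, succs = proc[ip]
        trace.append(ip)
        # The chosen site is replaced by sites at the successor
        # instructions; 'quit' contributes none, 'fork' contributes two.
        control.extend((s, env) for s in succs)
    return trace

# Instruction 3 is reached once along each forked path, so every run
# executes 0, 1, 2 and two activations of 3, in a varying interleaving.
assert sorted(interpret(procedure, 'L')) == [0, 1, 2, 3, 3]
```

Running `interpret` repeatedly yields different traces but the same multiset of executed instructions, which is the nondeterminism of scheduling without nondeterminism of outcome.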
    Each state transition of the interpreter executes one instruction for some procedure activation, at a site of activity selected arbitrarily from the control of the current state.  Thus the interpreter is a nondeterministic transition system.  In the state resulting from a transition, the chosen site of activity is replaced according to the sequencing rules of the base language.  Replacement with two sites of activity designating two successor instructions would occur in interpretation of a fork instruction; deletion of the site of activity without replacement would occur in execution of a quit or join instruction. Extract: Representation Of Concurrency In The Base Language
    REPRESENTATION OF CONCURRENCY IN THE BASE LANGUAGE
    A subject of major importance in the design of the base language is the representation of concurrent activities.  In the introduction we noted that some computations inherently involve concurrent processes and cannot be simulated by sequential programs — also, that a high degree of concurrency within computations may prove essential to the practical realization of computer systems with programming generality.  To these motivations we may add that some contemporary source languages, notably PL/I, have explicit provision for programming concurrent processes.
    We regard the state transitions of the interpreter as representing the progress of all activities in a computer system that is executing many programs simultaneously.  The basic requirements for representing concurrent actions in the interpreter are met by providing for many sites of activity in the control component of the state (Figure 3), and by organizing the local structures of procedure activations as a tree so a procedure may spawn independent, concurrent activations of component procedures.  Multiple sites of activity may represent many actions required to accomplish different parts of one computation as well as parallel execution of many independent computations.
    Consideration of concurrent computation brings in the issue of nondeterminacy — the possibility that computed results will depend on the relative timing with which the concurrent activities are carried forward. The work of Van Horn [27], Rodriguez [22] and others has shown that computer systems can be designed so that parallelism in computations may be realized while determinacy is guaranteed for any program written for the system.  The
    ability of a computer user to direct the system to carry out computations with a guarantee of determinacy is very important. Most programs are intended to implement a functional dependence of results on inputs, and determinism is essential to the verification of their correctness.
    There are two ways of providing a guarantee of determinacy to the user of a computer system.  The distinction is whether the class of abstract or base language programs is constrained by the design of the interpreter to describe only determinate computations.  If this is the case, then any abstract program resulting from compilation will be determinate in execution. Furthermore, if the compiler is itself a determinate procedure, then each translatable source program represents a determinate procedure.  On the other hand, if the design of the interpreter does not guarantee determinacy of abstract programs, determinacy of source programs, when desired, must be ensured by the translator.
    In the base language, it is necessary to provide for computations that are inherently nondeterminate, such as the example of a process awaiting the first response from either of two terminals.  We want to include in the base language primitive features for representing essential forms of nondeterminacy. In principle, we wish to guarantee that any (base language) procedure that does not use these features will be determinate in its operation.  Furthermore, use of base language primitives for the construction of nondeterminate procedures is intended to be such that the choice among alternative outcomes always originates from the source intended by the program author, and never from timing relationships unrelated to his computation.
    Our current thoughts regarding representation of base language procedures so as to guarantee determinacy are based on data flow representations for programs in which each operation is activated by the arrival of its operands, and each result is transmitted, as soon as it is ready, to those operations requiring its use.  Rodriguez [22] has formulated a data flow model that applies to programs involving assignment, conditional, and iteration statements, and data represented by simple variables.  Procedures represented by Rodriguez program graphs are naturally parallel and the rules for their execution guarantee determinacy.  In [3], Dennis has given a similar program graph model for procedures that transform data structures, but do not involve
    conditional or iteration steps.  Determinacy is guaranteed for these program graphs if they satisfy a readily testable condition.
    We hope to be successful in combining and extending these two models to obtain a satisfactory data flow model for all determinate procedures. If this objective can be achieved, we expect to use program graphs as the nucleus of the base language.  On the basis of improved understanding of parallel programs obtained by recent research on program schemes by Karp and Miller [11], Paterson [21], Slutz [23], and Keller [12], we are optimistic about finding an inherently determinate scheme for representing the concurrency present in most algorithms. Extract: Conclusion
    This article has been an introduction to the goals, philosophy and methods of our current work on the design of a base language.  The material presented is an "instantaneous description" of an activity that still has far to go — many issues need to be satisfactorily resolved before we will be pleased with our effort.  In addition to the representation of concurrency, the base language must encompass certain concepts and capabilities beyond those normally provided in contemporary source languages.  Four aspects of this kind are:  1. Generation and transformation of information structures that share component structures; 2. Concurrent processes that, in pairs, have producer-consumer relationships; 3. Programming systems that are able to generate base language programs and monitor their execution; and 4. Provision for controlling and sharing access to procedures and data structures among users of a computer system. We are continuing investigation of how these capabilities should be incorporated in the base language.  Some ideas on intercommunicating processes have been reported briefly [5].  Some thoughts on program monitoring and controlled sharing of information are given by Dennis and Van Horn [6], and by Vanderbilt [25].
  • Dennis, J.B. "First Version of a Dataflow Procedure Language" in Proc. Programming Symposium, Paris 1974, edited by G. Goos and J. Hartmanis, Springer-Verlag, New York (1974).
  • Weng, K.S. "An abstract implementation for a generalized data flow language," LCS TR-228, MIT, Cambridge, MA, May 1979 Extract: Introduction
    Introduction
    In this thesis we are concerned with issues arising from the need to achieve concurrency of operation within a computation on a large scale. Several factors contribute toward increasing interest in systems capable of exploiting the concurrency of computation. Concurrency provides the potential for performance improvement through concurrent operation of hardware components such as processors and memory modules. This results in better utilization of total resources and in faster response if a computation has a high level of concurrency. The dramatic progress of technology has made concurrent systems more attractive as an alternative for high performance systems. In particular, systems that have many replicated hardware modules can take advantage of the projected potential of the processing capability of a single chip device which can be very economically produced. Such systems may further offer better fault-tolerance capability and extendibility of system performance.
    So far, concurrent programming has not been adequately dealt with in conventional programming languages. It is our belief that future systems must depart from the prevalent view of sequential computation both at the programming language level and at the machine organization level if substantial progress is to be made toward practical large concurrent systems.
    The goal of this thesis is to demonstrate that an adequate computation model can provide a basis both for a good programming language and for an architecture that can fully exploit the inherent concurrency in algorithms expressed in the language. To this end, we show how a value-oriented language can be implemented based on a model of concurrent computation known as dataflow schemas and how this implementation can guide the design of an architecture that achieves a high level of concurrent operations.
    The model of computation is based on the notion of data driven computation, in the sense that an operation in a computation is executed as soon as all of the required operands become available. Thus, there is no notion of sequential control of execution. Data flow schemas allow many concurrent subcomputations to take place without creating side-effects. The lack of side-effects is essential for several reasons. First, the existence of side-effects among concurrent processes may cause the outcome of the computation to be dependent on the order in which the processes are executed -- that is, the computation is nondeterminate. In most applications, it is desirable to achieve concurrent operation while preserving the uniqueness of the result of the computation. From the semantic point of view, a language that is free of side-effects is easily formalized using denotational semantics. Furthermore, when a computation is expressed in a side-effect-free language, concurrency in the computation is easily recognized as subcomputations which do not depend on results of other subcomputations - and this data dependency is manifest in the program structure.
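The data-driven firing rule described above can be demonstrated with a short sketch. This is an illustration of the general principle, not code from the thesis; the function and graph names are invented for the example.

```python
# An illustrative sketch (not from the thesis): each operator fires as
# soon as all of its operands have arrived; there is no program counter.

import operator

def run_dataflow(nodes, inputs):
    # nodes: name -> (function, operand source names); a source is an
    # input name or another node's name.
    values = dict(inputs)
    pending = dict(nodes)
    while pending:
        # The data-driven firing rule: fire every node whose operands
        # are all available.
        ready = [n for n, (f, srcs) in pending.items()
                 if all(s in values for s in srcs)]
        if not ready:
            raise ValueError("deadlock: remaining nodes lack operands")
        for n in ready:
            f, srcs = pending.pop(n)
            values[n] = f(*(values[s] for s in srcs))
    return values

# (a + b) * (a - b): the two subexpressions are free to fire concurrently,
# yet the result is determinate regardless of firing order.
graph = {
    'sum':  (operator.add, ['a', 'b']),
    'diff': (operator.sub, ['a', 'b']),
    'prod': (operator.mul, ['sum', 'diff']),
}
assert run_dataflow(graph, {'a': 7, 'b': 3})['prod'] == 40
```

Because no node mutates shared state, any firing order for 'sum' and 'diff' produces the same final values, which is the determinacy property the side-effect-free semantics buys.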
    We introduce a simple value-oriented language that has two important features: streams, which are sequences of values communicated between computations, and forall constructs, in which one can express concurrent operations on components of data structures. A computation expressed in this language is guaranteed determinate unless explicit forms of nondeterminacy are used. In this thesis, we consider a limited form of nondeterminacy that merges two sequences of values in a nondeterminate manner. We discuss limitations of the language in Section 1.4.
    The architecture presented in this thesis is based on a form of data flow processor proposed by Dennis and Misunas. We show how the language can be effectively implemented on this architecture such that the concurrency of a computation can be exploited. The main extensions include suggestions for the design of the storage of a large number of activations of procedures and data structures such that contention in accessing data structures can be alleviated.

    Extract: Rationale for TDFL
    Data Flow Languages
    Because the data flow model is graphical in nature, numerous studies have attempted to define textual programming languages based on these models. While it is possible to define an algorithm that transforms programs written in existing sequential programming languages into data flow schemas, such an algorithm is complex because of the semantics of the sequential programming languages. Furthermore, the inherent concurrency of a computation is often hidden from the translator because of additional constraints built into the expressive forms of sequential programming languages. We believe that high level data flow programming languages will allow algorithms for concurrent computation to be easily expressible.

    Programming languages based on the data flow concept are sufficiently expressive to encompass conventional programming language constructs such as iterations, while-loops, conditionals, procedures, and data types such as data structures and procedure values. These constructs, however, are embedded in a semantics which is free of both side-effects and the sequential control of execution. The distinctive lack of control transfer primitives such as GOTO's and operations which introduce side-effects allows compilers to easily detect data dependencies between operations in a program. Languages with these characteristics have been shown to have simple denotational formal semantics.

    Additional features such as forall constructs, primitives for stream values, and constructs for nondeterminate computations are found to be natural extensions to these languages. The forall constructs allow programmers to specify concurrent operations on all components of a data structure. The notion of stream provides an alternative to the use of coroutines and synchronization primitives for expressing computations passing sequences of values among their component modules.
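    As a rough sketch of what these two extensions express (using ordinary Python rather than the language's actual syntax, with `forall` and `naturals` as hypothetical names): a forall applies a side-effect-free operation to every component of a structure with no ordering between components, and a stream is a lazily produced sequence of values passed between computations.

    ```python
    # forall: apply op to every component of a structure. Because op has
    # no side-effects, the components are independent and any evaluation
    # order -- or full concurrency -- yields the same result.
    def forall(structure, op):
        return [op(x) for x in structure]

    # A stream modelled as a generator: values are produced one at a time
    # and consumed by a downstream computation, with no coroutine or
    # synchronization primitives visible to the programmer.
    def naturals(limit):
        n = 0
        while n < limit:
            yield n
            n += 1

    squares = forall(list(naturals(5)), lambda x: x * x)  # [0, 1, 4, 9, 16]
    ```

    The point of the sketch is the contrast the passage draws: the producer and consumer are connected by the stream itself, not by explicit synchronization code.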

    A very important characteristic of these languages is that the determinacy of a computation is guaranteed when the computation is expressed without using the primitives or features explicitly provided for situations where nondeterminacy is required. In conventional languages, nondeterminate computations are expressed using semaphore primitives, call and wait primitives, and monitors. The semantics of these primitives, however, are not consistent with the semantics of data flow languages. Because there are significant applications where nondeterminacy is necessary, the formal semantics of languages with nondeterminate primitives is still an active area of research. In this thesis, we have chosen a very primitive form of nondeterminacy which seems essential as a basis for higher level constructs for nondeterminate computations.
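    The primitive form of nondeterminacy referred to here is the merge of two value sequences. A hedged sketch, modelling streams as Python iterators (the random choice stands in for arrival timing, which the program text does not determine):

    ```python
    # Nondeterminate merge of two streams: each stream's own ordering is
    # preserved, but the interleaving between the two streams is not
    # determined by the program -- here simulated with a random choice.
    import random

    def merge(left, right, rng=random.Random()):
        left, right = iter(left), iter(right)
        done = object()                       # sentinel marking exhaustion
        a, b = next(left, done), next(right, done)
        while a is not done or b is not done:
            # either available value may be emitted next
            take_left = b is done or (a is not done and rng.random() < 0.5)
            if take_left:
                yield a
                a = next(left, done)
            else:
                yield b
                b = next(right, done)
    ```

    Every run of `merge([1, 2, 3], [10, 20])` contains all five values, with 1 before 2 before 3 and 10 before 20, but different runs may interleave the two streams differently: exactly the outcome-order dependence the passage describes.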

          in Proc. Programming Symposium, Paris 1974, edited by G. Goos and J. Hartmanis, Springer-Verlag, New York (1974). view details
  • Sharp, J.A. "Data flow computing" Chichester: E. Horwood ; New York : Halsted Press, 1985 view details Extract: MIT Dataflow systems: Dataflow, DBFL, Val

    The major group in America is that led by Jack Dennis at MIT, and many of the other groups in the USA have used the MIT work as a basis. Jack Dennis's group are currently partially funded by the Lawrence Livermore Laboratory, and have built a simple version of their data flow machine. [Dataflow] Work is also in progress in the field of programming language design [Val], and in application areas. Kosinski has also worked at MIT looking into the formal semantics of data flow programming languages. [DFPL]
    Extract: Id languages
    Another large data flow research project in America was the one at the University of California at Irvine [Id].
    They originally took Dennis's graphical notation as a base language, and designed a high-level language which could be compiled into the MIT notation. Since then they have made many changes to the base language - adding and deleting features to fit more closely the view of data flow that they developed - so that it is now significantly different to Dennis's. Various designs for machine architectures have been produced, and work has been done on simulating them. As far as I am aware there is as yet no intention to build an actual machine. Around 1980 both the main researchers, Arvind and Gostelow, left Irvine, but the project is continuing on a smaller scale with Lubomir Bic being the main faculty member involved. Arvind moved to MIT, and is pursuing his research separately from Jack Dennis there, though obviously there is some interaction between the groups. A report on Arvind's ideas for a data flow computer was given in a paper by Arvind and Kathail in 1981.