dataflow language 

Bill Wadge and Ed Ashcroft, 1981. A dataflow language descended from ISWIM, lazy but first-order. Statements are regarded as equations defining a network of processors and communication lines through which the data flows. Every data object is thought of as an infinite stream of simple values, and every function as a filter. Lucid has no data constructors such as arrays or records. Iteration is simulated with 'is current' and 'fby' (read "followed by": the first value of one stream followed by all of another).
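As a rough illustration (not actual Lucid syntax), the stream-and-filter view can be sketched with Python generators; the names `fby`, `nats`, and `running_sum` are inventions of this sketch:

```python
from itertools import islice

def fby(first_val, rest):
    # Lucid-style 'fby' (followed by): a stream whose head is first_val
    # and whose tail is the stream rest.
    yield first_val
    yield from rest

def nats():
    # The Lucid equation "n = 0 fby n + 1" denotes the stream 0, 1, 2, ...
    n = 0
    while True:
        yield n
        n += 1

def running_sum(xs):
    # A filter: consumes one stream and produces another
    # (the partial sums of xs), in the spirit of "s = x fby s + next x".
    total = 0
    for x in xs:
        total += x
        yield total

print(list(islice(running_sum(nats()), 5)))  # [0, 1, 3, 6, 10]
```

Every variable is a whole history of values, and each definition is an equation between such streams; evaluation demands values lazily, as the generators above do.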

Later versions (which led to GLU) introduced the idea of dimensions, which simplified programming.

Related languages
ISWIM => LUCID   Evolution of
LUCID => Chronolog   Influence
LUCID => ELP   Influence
LUCID => GLU   Augmentation of
LUCID => Indexical Lucid   Evolution of
LUCID => LUSTRE   Subset
LUCID => mLucid   Augmentation of
LUCID => Plane Lucid   Implementation of

  • Ashcroft, E.A. and Wadge, W.W. "Lucid - a formal system for writing and proving programs" SIAM Journal on Computing, 5(3):336-354, September 1976
  • Ashcroft, E.A. and Wadge, W.W. "Lucid: Scope structures and defined functions" Rep. CS-76-22, Computer Science Dept., U. of Waterloo.
  • Ashcroft, E.A. and Wadge, W.W. "Lucid, a Nonprocedural Language with Iteration" Abstract: Lucid is a formal system in which programs can be written and proofs of programs carried out. The proofs are particularly easy to follow and straightforward to produce because the statements in a Lucid program are simply axioms from which the proof proceeds by (almost) conventional logical reasoning, with the help of a few axioms and rules of inference for the special Lucid functions. As a programming language, Lucid is unconventional because, among other things, the order of statements is irrelevant and assignment statements are equations. Nevertheless, Lucid programs need not look much different than iterative programs in a conventional structured programming language using assignment and conditional statements and loops.
          in [ACM] CACM 20(07) (July 1977)
  • Dromard, D. and Dromard, F. "Direct Execution Of An Algebraic Oriented Language" pp254-260 Extract: Introduction
    Current programming languages, even those called high-level languages, are actually very far from the formalisms used in algebra, physics, etc. After a first description in the adequate formalism, one must rewrite the problem to respect the programming language's constraints. In most cases, this programming step is not a mere translation and requires much effort. Furthermore, once the program is written, the user has no guarantee of its correctness. Now, new high-level languages are being developed that offer program-proving possibilities. Among them, the language defined by J. Arsac is an algorithm-description-directed high-level language, very close to algebraic notation. At the beginning, this language was not considered a programming language, because it requires too much processing time and, besides, it supposes a great amount of software on available conventional machines. The problem we had to cope with was to find an adequate computer architecture for efficient execution of such a language. In this case, a direct-execution machine seems to be a very attractive solution. Extract: The language peculiarities
    The language peculiarities
    The language defined by J. Arsac presents great similarities with LUCID [2,3]. Actually, these two languages were conceived to make program proving easy. Moreover, they are unconventional because the order of statements is irrelevant and because any instruction is considered as an equation. Nevertheless, while LUCID still has some syntactic similarities with a conventional programming language, the language we use has a different form because it was defined to be as close to algebraic notation as possible, so that the user need have no execution considerations in mind. In the following sections, we shall set out the language's particularities, some of which are also found in LUCID.

    2.1. Definition unicity
    In this language, every instruction actually represents the definition of a variable, so that any instruction gives the manner of obtaining the variable's value. Moreover, each variable must be defined once and only once. This implies that the value assigned to a given variable is unique: there is a one-to-one correspondence between a variable name and its value.
    So, we can consider an instruction as an equation. This property allows substitutions within the program: any variable occurrence can be replaced by its definition.
    2.2. Lack of control schema
    In the usual high level languages (Algol, Fortran, etc.) the control schema is explicit: the algorithm represented by a program depends on the program's instruction order. Any alteration of the instruction order may modify the algorithm itself.
    Thanks to the definition unicity (see § 2.1.), we do not change an algorithm by changing the definition order, even if these definitions contain conditional expressions. The instruction effect only depends on its content; it does not depend on its position in the program. So, the instruction order is irrelevant.
    2.3. Recurrences
    The language gives the opportunity to use recurrences. To describe a recurrence, we need:
    - one or several recurrent definitions
    - a recurrence stop condition
    - the recurrent variable initialization.
    As is done in algebraic language, the recurrent variables are indexed: two occurrences of a given variable name, having distinct indexes, are then considered as two different variables, so that definition unicity is respected.
    No recurrence delimiter (analogous to DO loops) is necessary. Moreover, the recurrent definitions need be neither contiguous nor ordered. A recurrence is defined by a recurrence index: only the variable definitions indexed by a given index are involved in the same iteration. The recurrence index has initially the value zero and is incremented by one at each recurrence step until the stop condition occurs; this condition is specified in a definition by a special operator (see § 2.5.).
    Moreover, the recurrence depth can be greater than one; we can use definitions of the following form: y(i) = f(x(i), y(i-1), y(i-2), ...). To nest recurrences, we only have to index the variables with several recurrence indexes, specifying a stop condition for each of them.
    2.4. Algorithm structure
    Any algorithm is composed of:
    - a header, where we find the name of the algorithm, the results, the input data
    - a body, represented by the result definitions and possibly by the local variable definitions
    (a local variable does not appear in the header)
    Any definition is composed of:
    - a left part, representing the variable to be defined
    - a left part delimiter, represented by the symbol " <--"
    - a right part, defining the variable value.
    Any right part may be composed of:
    - either an arithmetic expression
    - or a selection among several possible computations of the value (alternant selection)
    - or a sub-algorithm call; in this case, the name of the sub-algorithm is followed by the adequate input data.
    These different possibilities can be combined with one another.
          in Proceedings of the 1978 annual conference 1978, Washington, D.C., United States view details
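The indexed-recurrence scheme of § 2.3 above can be sketched in Python. This is a hypothetical illustration of the evaluation rule, not the direct-execution machine's actual mechanism; the names `run_recurrence`, `step`, and `stop` are assumptions of the sketch:

```python
def run_recurrence(init, step, stop):
    # Evaluate an indexed recurrence: y(0) = init, y(i) = step(i, y(i-1)).
    # The recurrence index starts at zero and is incremented by one at
    # each step until the stop condition holds, as described in § 2.3.
    i, y = 0, init
    while not stop(i, y):
        i += 1
        y = step(i, y)
    return y

# Factorial of 5 as the recurrence y(i) = i * y(i-1), y(0) = 1,
# stopping when the index reaches 5.
fact5 = run_recurrence(1, lambda i, y: i * y, lambda i, y: i == 5)
print(fact5)  # 120
```

Because each y(i) is a distinct variable, definition unicity is preserved even though the program describes an iteration.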
  • Hoffmann, C.M. "Semantic properties of LUCID's compute clause and its compilation" Acta Inf. 13, 1 (Jan. 1980), 9-20.
  • Kupka, I. and Wilsing, N. "Conversational Languages" John Wiley, 1980
  • Boehm, A.P.W. "Dataflow Computation", Mathematisch Centrum, Amsterdam, 1984 Extract: LUCID
    The motivation for single assignment in LUCID is the ease of proving program correctness. LUCID operators operate on sequences of values.
  • Ashcroft, E.A. "Eazyflow Architecture", Tech. Report TR-CSL-147, SRI, 1985
  • Ashcroft, E.A. and Wadge, W.W. "Lucid, the Dataflow Programming Language", Academic Press 1985.
  • Ashcroft, E.A.; Faustini, A.A.; Jagannathan, R. "An intensional language for parallel applications programming" pp. 11-49
          in Szymanski, B. (ed.) "Parallel Functional Languages and Compilers", Addison-Wesley, 1991 view details
  • Faustini, A.A. and Jagannathan, R. "Multidimensional Problem Solving in Lucid" Technical Report, SRI International, 1993.
  • Whiting, Paul G. and Pascoe, Robert S. V. "A History of Data-Flow Languages" pp38-59 Extract:
    Ashcroft and Wadge's Lucid, originally designed for program proving, was considered a number of times for use on data-flow machines by the authors. This culminated in 1985 with Ashcroft's design for the Eazyflow machine.
          in Annals of the History of Computing 16(4) Winter 1994 view details
  • Dick Grune's Annotated Literature Lists Extract: Review of language
    William W. Wadge, Edward A. Ashcroft, "Lucid, the Dataflow Programming Language", Academic Press, London, 1985, pp. 310.
    All variables in Lucid denote possibly infinite sequences of values; all definitions define filters on these sequences. There are no statements, there is no flow-of-control. I/O is done by seamlessly interfacing with UNIX filters (= pipes). Although the authors make an attempt to explain the difference between the data-flow model and functional programming, that text is not readily summarized.
    The authors defend Lucid and the data-flow paradigm in an entertaining way by classifying the critics under such headings as Cowboys, Mystics, Boffins, etc. A further piece of comic relief is that the Lucid OS is called LUNIX.
    The book starts with an algebraic foundation; sequences, iteration, recursion and nested iteration follow. Examples, a large chapter on program transformations, and one on further developments conclude the book. Extract: Review of language
    James R. McGraw, "The VAL language: description and analysis", ACM TOPLAS, 4, #1, pp. 44-82. Jan. 1982,

    VAL is a side effect free language, based on expressions and non-recursive functions. Normally the language is single-definition, but in a fairly complex construct redefinition is allowed to handle iteration. Its main features are explained using a single example, quad integration.
  • Library of Congress Subject Headings L83
  • Johnston, Wesley M.; Hanna, J. R. Paul and Richard J. Millar "Advances in Dataflow Programming Languages" ACM CSUR 36(1) March 2004 Extract: Lucid
    Originally developed independently of the dataflow field by Ashcroft and Wadge [1977], Lucid was a functional language designed to enable formal proofs. Recursion was regarded as too restrictive for loop constructs, but it was realized that iteration introduced two nonmathematical features into programming: transfer and assignment. Thus, Lucid was designed to permit iteration in a way that was mathematically respectable, through single assignment and the use of the keyword next to define the value of the variable in the next iteration. It quickly became apparent, however, that Lucid's functional and single-assignment semantics were similar to those required for dataflow machines, and Ashcroft and Wadge [1980] brooded on the topic in literature before publishing a book in 1985 [Wadge and Ashcroft 1985] that firmly established Lucid's claim to be a dataflow language. Extract: Introduction
    The original motivation for research into dataflow was the exploitation of massive parallelism. Therefore, much work was done to develop ways to program parallel processors. However, one school of thought held that conventional "von Neumann" processors were inherently unsuitable for the exploitation of parallelism [Dennis and Misunas 1975; Weng 1975]. The two major criticisms that were leveled at von Neumann hardware were directed at its global program counter and global updatable memory [Silc et al. 1998], both of which had become bottlenecks [Ackerman 1982; Backus 1978]. The alternative proposal was the dataflow architecture [Davis 1978; Dennis and Misunas 1975; Weng 1975], which avoids both of these bottlenecks by using only local memory and by executing instructions as soon as their operands become available. The name dataflow comes from the conceptual notion that a program in a dataflow computer is a directed graph and that data flows between instructions, along its arcs [Arvind and Culler 1986; Davis and Keller 1982; Dennis 1974; Dennis and Misunas 1975]. Dataflow hardware architectures looked promising [Arvind and Culler 1986; Dennis 1980; Treleaven and Lima 1984; Veen 1986], and a number of physical implementations were constructed and studied (for examples, see Davis [1978], Keller [1985], Papadopoulos [1988], Sakai et al. [1989], and Treleaven et al. [1982]).
    Faced with hardware advances, researchers found problems in compiling conventional imperative programming languages to run on dataflow hardware, particularly those associated with side effects and locality [Ackerman 1982; Arvind et al. 1977; Arvind and Culler 1986; Kosinski 1973; Wail and Abramson 1995; Weng 1975; Whiting and Pascoe 1994]. They found that by restricting certain aspects of these languages, such as assignments, they could create languages [Ackerman 1982; Ashcroft and Wadge 1977; Dennis 1974; Hankin and Glaser 1981; Kosinski 1978] that more naturally fitted the dataflow architecture and could thus run much more efficiently on it. These are the so-called dataflow programming languages [Ackerman 1982; Whiting and Pascoe 1994] that developed distinct properties and programming styles as a consequence of the fact that they were compiled into dataflow graphs-the "machine language" of dataflow computers.
    The often-expressed view in the 1970s and early 1980s that this form of dataflow architecture would take over from von Neumann concepts [Arvind et al. 1977; Treleaven et al. 1982; Treleaven and Lima 1984] never materialized [Veen 1986]. It was realized that the parallelism used in dataflow architectures operated at too fine a grain and that better performance could be obtained through hybrid von Neumann dataflow architectures. Many of these architectures [Bic 1990] took advantage of more coarse-grained parallelism where a number of dataflow instructions were grouped and executed in sequence. These sets of instructions are, nevertheless, executed under the rules of the dataflow execution model and thus retain all the benefits of that approach. Most dataflow architecture efforts being pursued today are a form of hybrid [Iannucci 1988; Nikhil and Arvind 1989], although not all, for example, Verdoscia and Vaccaro [1998].
    The 1990s saw a growth in the field of dataflow visual programming languages (DFVPLs) [Auguston and Delgado 1997; Baroth and Hartsough 1995; Bernini and Mosconi 1994; Ghittori et al. 1998; Green and Petre 1996; Harvey and Morris 1993, 1996; Hils 1992; Iwata and Terada 1995; Morrison 1994; Mosconi and Porta 2000; Serot et al. 1995; Shizuki et al. 2000; Shurr 1997; Whiting and Pascoe 1994; Whitley 1997]. Some of these, such as LabView and Prograph were primarily driven by industry, and the former has become a successful commercial product that is still used today. Other languages, such as NL [Harvey and Morris 1996], were created for research. All have software engineering as their primary motivation, whereas dataflow programming was traditionally concerned with the exploitation of parallelism. The latter remains an important consideration, but many DFVPLs are no longer primarily concerned with it. Experience has shown that many key advantages of DFVPLs lie with the software development lifecycle [Baroth and Hartsough 1995].
    This article traces the development of dataflow programming through to the present. It begins with a discussion of the dataflow execution model, including a brief overview of dataflow hardware. Insofar as this research led to the development of dataflow programming languages, a brief historical analysis of these is presented. The features that define traditional, textual dataflow languages are discussed, along with examples of languages in this category. The more recent trend toward large-grained dataflow is presented next. Developments in the field of dataflow programming languages in the 1990s are then discussed, with an emphasis on DFVPLs. As the environment is key to the success of a DFVPL, a discussion of the issues involved in development environments is also presented, after which four examples of open issues in dataflow programming are presented.
    Extract: Early Dataflow Programming Languages
    3. Early Dataflow Programming Languages
    3.1. The Development of Dataflow Languages
    With the development of dataflow hardware came the equally challenging problem of how to program these machines. Because they were scheduled by data dependencies, it was clear that the programming language must expose these dependencies. However, the data dependencies in each class of language can be exploited to different degrees, and the amount of parallelism that can be implicitly or explicitly specified also differs. Therefore, the search began for a suitable paradigm to program dataflow computers and a suitable compiler to generate the graphs [Arvind et al. 1988]. Various paradigms were tried, including imperative, logical, and functional methods. Eventually, the majority consensus settled on a specific type of functional language that became known as dataflow languages.
    An important clarification must be made at this stage. In early publications, dataflow graphs are often used to illustrate programs. In many cases, these graphs are simply representations of the compiled code [Dennis and Misunas 1975] that would be executed on the machine, where the graph was generated either by hand or by a compiler from a third-generation programming language. Until the advent of Dataflow Visual Programming Languages in the 1980s and 1990s, it was rarely the intention of researchers that developers should generate these graphs directly. Therefore these early graphs are not to be thought of as "dataflow programming languages."
    3.1.1. What Constitutes a Dataflow Programming Language ?. While dataflow programs can be expressed graphically, most of the languages designed to operate on dataflow machines were not graphical. There are two reasons for this. First, at the low level of detail that early dataflow machines required, it became tedious to graphically specify constructs such as loops and data structures which could be expressed more simply in textual languages [Whiting and Pascoe 1994]. Second, and perhaps more importantly, the hardware for displaying graphics was not available until relatively recently, stifling any attempts to develop graphical dataflow systems. Therefore, traditional dataflow languages are primarily text-based.
    One of the problems in defining exactly what constitutes a dataflow language is that there is an overlap with other classes of language. For example, the use of dataflow programming languages is not limited to dataflow machines. In the same way, some languages, not designed specifically for dataflow, have subsequently been found to be quite effective for this use (e.g., Ashcroft and Wadge [1977]; Wadge and Ashcroft [1985]). Therefore, the boundary for what constitutes a dataflow language is somewhat blurred. Nevertheless, there are some core features that would appear to be essential to any dataflow language. The best list of features that constitute a dataflow language was put forward by Ackerman [1982] and reiterated by Whiting and Pascoe [1994] and Wail and Abramson [1995]. This list includes the following:
    (1) freedom from side effects,
    (2) locality of effect,
    (3) data dependencies equivalent to scheduling,
    (4) single assignment of variables,
    (5) an unusual notation for iterations due to features 1 and 4,
    (6) lack of history sensitivity in procedures.
    Because scheduling is determined from data dependencies, it is important that the value of variables do not change between their definition and their use. The only way to guarantee this is to disallow the reassignment of variables once their value has been assigned. Therefore, variables in dataflow languages almost universally obey the single-assignment rule. This means that they can be regarded as values, rather than variables, which gives them a strong flavor of functional programming. The implication of the single-assignment rule is that the compiler can represent each value as one or more arcs in the resultant dataflow graph, going from the instruction that assigns the value to each instruction that uses that value.
    An important consequence of the single-assignment rule is that the order of statements in a dataflow language is not important. Provided there are no circular references, the definitions of each value, or variable, can be placed in any order in the program. The order of statements becomes important only when a loop is being defined. In dataflow languages, loops are usually provided with an imperative syntax, but the single-assignment rule is preserved by using a keyword such as next to define the value of the variable on the next iteration [Ashcroft and Wadge 1977]. A few dataflow languages offer recursion instead of loops [Weng 1975].
    Freedom from side effects is also essential if data dependencies are to determine scheduling. Most languages that avoid side effects do so by disallowing global variables and introducing scope rules. However, in order to ensure the validity of data dependencies, a dataflow program does not even permit a function to modify its own parameters. All of this can be avoided by the single-assignment rule. However, problems arise with this strategy when data structures are being dealt with. For example, how can an array be manipulated if only one assignment can ever be made to it? Theoretically, this problem is dealt with by conceptually viewing each modification of an array as the creation of a new copy of the array, with the given element modified. This issue is dealt with in more detail in Section 6.3.
    It is clear from the above discussion that dataflow languages are almost invariably functional. They have applicative semantics, are free from side effects, are determinate in most cases, and lack history sensitivity. This does not mean that dataflow and functional languages are equivalent. It is possible to write certain convoluted programs in the functional language Lucid [Ashcroft and Wadge 1977], which cannot be implemented as a dataflow graph [Ashcroft and Wadge 1980]. At the same time, much of the syntax of dataflow languages, such as loops, has been borrowed from imperative languages. Thus it seems that dataflow languages are essentially functional languages with an imperative syntax [Wail and Abramson 1995].
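    The execution model described above, in which an instruction fires as soon as all its operands are available, can be sketched in Python. This is a minimal illustration under assumed names (`run_dataflow`, the `{node: (fn, [operands])}` graph shape), not any particular machine's scheduler:

```python
from collections import deque

def run_dataflow(graph, inputs):
    # graph: {node: (fn, [operand nodes])}. Fires each node once all of
    # its operands have values, per the dataflow scheduling rule.
    values = dict(inputs)
    ready = deque(n for n, (fn, deps) in graph.items()
                  if all(d in values for d in deps))
    while ready:
        n = ready.popleft()
        if n in values:          # already fired (duplicates possible)
            continue
        fn, deps = graph[n]
        values[n] = fn(*[values[d] for d in deps])
        # Any node whose operands are now all available becomes ready.
        for m, (_, deps2) in graph.items():
            if m not in values and all(d in values for d in deps2):
                ready.append(m)
    return values

# The expression (a + b) * (a - b) as a dataflow graph; the two
# subtraction and addition nodes may fire in either order.
g = {
    "sum":  (lambda x, y: x + y, ["a", "b"]),
    "diff": (lambda x, y: x - y, ["a", "b"]),
    "prod": (lambda x, y: x * y, ["sum", "diff"]),
}
print(run_dataflow(g, {"a": 5, "b": 3})["prod"])  # 16
```

    Note how the single-assignment rule shows up directly: each node's value is written exactly once into `values`, and scheduling is determined entirely by data dependencies.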
    3.1.2. Dataflow Languages. A number of textual dataflow languages, or functional languages that can be used with dataflow, have been implemented. A representative sample is discussed below. (Whiting and Pascoe [1994] presented a fuller review of these languages.) Dataflow Visual Programming Languages are discussed in detail in Section 5.
