GRUNCH(ID:8206/)


Graphical variant of Crunch, which was a front end to Cajole. Considered to be an early graphical functional language.


Related languages
CAJOLE => GRUNCH   Extension of

References:
  • de Jong MD "SCRUNCH User Manual" Westfield College, March 1981
  • de Jong, M. D. and Hankin, C. L. "Structured data flow programming" Extract: Introduction
    Introduction
    Structured programming is of course a highly developed technique in control flow applications. Most of this work has been textually related [4,16] and, to a large extent, the method has become standard programming practice. However, data flow advocates have not greatly encouraged its use.
    Graphically, there has not been as much work done, although two approaches are immediately apparent. The Nassi-Shneiderman approach uses nested boxes to represent nested levels of computation [17]; whereas the tree-structured approach introduces the concept of refinement by the use of further dimensions [3,18]. Data flow workers have in the past emphasised the former avenue [2], although recent work is more general [5].
    This paper begins with a description of CRUNCH, a program development system specifically designed for textual data flow languages. A high-level description of a programming task is repeatedly made more specific by the programmer and is finally translated into Cajole by the package. Next, an equivalent graphical system, GRUNCH, is described, the result of work stemming from graphical ideas of quite long-standing [12]. Again, a high-level problem description is refined, this time by descending a graphical tree structure. The system also incorporates a compiler for generating data flow machine code. An example of how both systems are used follows.
    Extract: CRUNCH
    2 CRUNCH - The Textual System

    2.1 Introduction
    The group at Westfield have developed a stable graphical model for the data flow approach to computing. This has formed the basis for the development of an applicative programming language, Cajole, which attempts to mirror the graphical model in an efficient textual way. A language summary, in extended BNF, should serve to familiarise the reader with Cajole.
    A program is merely a list of function definitions, each of which may be followed by further auxiliary definitions.
    <program> ::= { <definition> }
    <definition> ::= <name> = <value> | <name> = <value> with { <definition> } wend
    The value associated with a particular name can be an ordinary arithmetic expression, a function application, a list of alternative values, the choice of which is dependent upon a number of associated guards or conditions, or a function.
    <value> ::= <expression> | <name> ( <value list> ) | { <guarded value list> } | <function>
    <guarded value> ::= [ <guard> ] : <value>
    Explicit data structuring is not provided although a shorthand bracket notation (<...>) for bounded functions with integer parameters is available. Function definition, function application and the runtime data dependent decision mechanism are demonstrated in the example of Section 4 and further details can be found in [11,13].
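    The run-time data dependent decision mechanism can be sketched in a few lines of Python: a guarded expression list becomes a list of (guard, value) pairs, and the value whose guard holds is delivered. This is a hypothetical rendering for illustration, not Cajole's own notation.

```python
def guarded_value(alternatives):
    # Each alternative is a (guard, value) pair of zero-argument
    # functions; the value whose guard evaluates to true is delivered.
    for guard, value in alternatives:
        if guard():
            return value()
    raise ValueError("no guard evaluated to true")

# Absolute value expressed as a guarded choice.
x = -3
result = guarded_value([
    (lambda: x >= 0, lambda: x),
    (lambda: x < 0, lambda: -x),
])
# result is 3
```

    Deferring both guards and values as thunks mirrors the applicative reading: nothing is evaluated until the choice is made.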
    The language has been used in a number of application areas. CAJOLE programs show the static structure of a problem very well. The nesting of with...wends illustrates the different levels of refinement. However, the auxiliary definitions must all be defined at the same time as the higher-level definition, and this is not conducive to structured program development, which is a dynamic process.
    Two main reasons emerged for the development of a top-down refinery system which encouraged the structured programming method. Firstly, as has already been mentioned, any static programming language is inadequate to cope with such a dynamic process. And, secondly, existing program development systems with their associated design languages are not applicable for a functional or data flow notation.
    CRUNCH is such a refinery system. It enables a programmer to write Cajole programs by means of top-down stepwise refinement. Programs are input in the form of a base version, together with a number of refinements and function definitions, described by the identification of a number of possible cases [15]. Having input the program, it can then be listed, with or without refinements, or translated into Cajole. The package is written in Pascal and runs on a PDP 11/44 under the UNIX operating system. An example of the use of CRUNCH is shown in Section 4 of this paper.

    2.2.1 Functions
    A function definition consists of a header line which specifies its name and parameters, if any, followed by the function body.
    <function definition> ::= <header> <body>

    <header> ::= function <name> [ <parameters> ] :-
    <body> ::= <string> | { <case> } [ otherwise deliver <string> ]
    <case> ::= when <guard> deliver <string>
    The body of the function describes either a single action which is to be performed by the function, or a number of cases, dependent on the input parameters, with their corresponding actions. Strings are generally a high-level (English) description of an action and can be refined as described below.

    2.2.2  Refinements
    A refinement consists of the specification of a string of characters followed by an arrow and a further string which is to replace that already specified. The left hand string may be abbreviated by replacing parts of the string that are unnecessary for the unique identification of the refinement by a series of dots (.).
    <refinement> ::= <string> => <string>
    When the refinement is included in the program, all occurrences of the first string that are found are replaced by the second string.
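    As a rough Python sketch of this replacement rule (the exact matching semantics of the dot abbreviation are an assumption), runs of dots in the left-hand string can be treated as non-greedy wildcards:

```python
import re

def refine(definitions, lhs, rhs):
    # Split the left-hand string on runs of two or more dots, escape the
    # literal pieces, and rejoin them with a non-greedy wildcard; then
    # replace every occurrence of the match with the right-hand string.
    pieces = [re.escape(p) for p in re.split(r"\.{2,}", lhs)]
    pattern = re.compile(".*?".join(pieces))
    return [pattern.sub(rhs, d) for d in definitions]

# 'read .. data' abbreviates 'read the data' in the first definition.
refined = refine(["deliver read the data", "deliver sort the data"],
                 "read .. data", "readline(input)")
# refined[0] is "deliver readline(input)"; refined[1] is unchanged
```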

    2.2.3 Using CRUNCH
    The user communicates with the system by typing CRUNCH commands in response to a prompt.
    Specifically, the commands crunch and crunchmore introduce new definitions and refinements: crunch overwrites any existing information, whereas crunchmore adds to the existing structure.
    <crunch command> ::= crunch { <unit> }* crunch
    <crunchmore command> ::= crunchmore { <unit> }* crunch
    <unit> ::= <function definition> | <refinement>
    Robust syntax checking, the routeing of either input or output via a UNIX file, a means of interacting with the UNIX shell, and a "help" facility, create an environment that is easy to work in.
    At any stage of the development process the user is able to delete or change any part of his program, to produce a listing of the functions and refinements thus far input, or to include the given refinements in the function definitions. Furthermore, the user is able to refine the program and output its Cajole translation.

    Extract: GRUNCH
    3. GRUNCH - The Graphical System

    3.1 Introduction
    Many people feel that a graphical approach to high-level data flow is more natural [2,12]. So a graphical interface was added as an extension to CRUNCH [6]. The user is given the option of writing his programs in graphical mode by filling in function templates supplied by the system. Each template section corresponds to a 'case' of a CRUNCH definition. The user is able to alternate between these two approaches as he wishes, there being a common data structure. This approach displays the inherent parallelism within a particular function very well. However, the user is left with no feel for the parallelism of the program as a whole, and the use of templates is also a very static application of the graphical approach.
    Thus we have developed a stand-alone graphical system called GRUNCH. This enables programs to be written in a high-level graphical language, and incorporates a compiler for translating to data flow machine code. The system displays the parallelism of a problem as the user constructs the program.
    GRUNCH enables programs to be written by means of top-down stepwise refinement. Although it is possible to interface with CRUNCH, the main means of input is graphical. Programs are input as a hierarchical tree of graphical levels. Each level corresponds to a refinement or function definition in CRUNCH, and consists of a number of nodes and arcs which represent the flow of data through the program. The system, like CRUNCH, is written in Pascal on a PDP 11/44 under UNIX and makes use of the graphics package Dgraph [9], and the Imlac vector drawing display terminal.

    3.2.1 Nodes
    In the graphical system, it has been found unnecessary to distinguish between functions and refinements, as in CRUNCH, since both merely transform data. Unavoidably, however, a means of choosing between two or more alternatives at run time must be provided. Hence two types of node are distinguishable which will be referred to as refinements and guarded expression lists (figure 1).
    At each level in the tree it is possible to create, move and delete nodes as desired. The system does the lower level work of drawing in the arcs, although its decisions can be overridden by the user. In addition, text can be associated with any node or arc. It is possible to add further guards to any node, and remove guards from guarded expression lists. This is because a refinement is conceptually a special case of a guarded expression list, with a simple guard evaluating to true.

    3.2.2 Graphical Programming
    The text of each refinement or box of a guarded expression list is either fully refined (as an arithmetic expression) or can be refined further. The former case corresponds to the leaves of the program tree, and the latter to an individual branch. Having refined down to the level of an arithmetic expression, the system completes the process by refining further if necessary, although this facility, too, can be overridden. When a user is at a particular level the screen also displays the immediate context of that level with regard to the whole tree; it is also possible to display the whole tree structure at once. These two facilities enable the tree to be traversed and changes to be made at any level.
    As well as creating levels in this way it is possible to insert an extra level in the tree (by naming a portion of a particular graph) or to collapse the tree slightly by removing redundant levels. This process can be taken to its logical conclusion, and the entire tree structure can be reduced to a much simpler one, corresponding to the inclusion of refinements in the textual version. Complete sub-trees can also be destroyed.
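    A toy Python model of this collapsing step (the tuple encoding of a refinement tree is hypothetical, not GRUNCH's internal data structure) substitutes each child's fully collapsed expansion into its parent's text:

```python
def collapse(node):
    # A node is (text, children), where children maps each name
    # appearing in text to the subtree that refines it.  Collapsing
    # recursively inlines every refinement, leaving only leaf text.
    text, children = node
    for name, child in children.items():
        text = text.replace(name, collapse(child))
    return text

tree = ("deliver mean", {
    "mean": ("total / n", {"total": ("(a + b + c)", {})}),
})
flat = collapse(tree)
# flat is "deliver (a + b + c) / n"
```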
    At any stage in the development process it is possible to inspect the entire structure graphically or textually (by translating to an equivalent CRUNCH program). The end result is a tree in which the leaves contain low-level data flow graphs. These can be inspected at any stage by the user.

    3.2.3 Compilation
    The final compilation process involves completely collapsing the tree as described above. This effectively distinguishes individual functions of the program. At this stage it is possible to analyse individual arithmetic expressions to produce a textual machine code.

          in SIGPLAN Notices 17(08) August 1982
  • Johnston, Wesley M.; Hanna, J. R. Paul and Richard J. Millar "Advances in Dataflow Programming Languages" ACM CSUR 36(1) March 2004 Extract: Introduction
    Introduction
    The original motivation for research into dataflow was the exploitation of massive parallelism. Therefore, much work was done to develop ways to program parallel processors. However, one school of thought held that conventional "von Neumann" processors were inherently unsuitable for the exploitation of parallelism [Dennis and Misunas 1975; Weng 1975]. The two major criticisms that were leveled at von Neumann hardware were directed at its global program counter and global updatable memory [Silc et al. 1998], both of which had become bottlenecks [Ackerman 1982; Backus 1978]. The alternative proposal was the dataflow architecture [Davis 1978; Dennis and Misunas 1975; Weng 1975], which avoids both of these bottlenecks by using only local memory and by executing instructions as soon as their operands become available. The name dataflow comes from the conceptual notion that a program in a dataflow computer is a directed graph and that data flows between instructions, along its arcs [Arvind and Culler 1986; Davis and Keller 1982; Dennis 1974; Dennis and Misunas 1975]. Dataflow hardware architectures looked promising [Arvind and Culler 1986; Dennis 1980; Treleaven and Lima 1984; Veen 1986], and a number of physical implementations were constructed and studied (for examples, see Davis [1978], Keller [1985], Papadopoulos [1988], Sakai et al. [1989], and Treleaven et al. [1982]).
    Faced with hardware advances, researchers found problems in compiling conventional imperative programming languages to run on dataflow hardware, particularly those associated with side effects and locality [Ackerman 1982; Arvind et al. 1977; Arvind and Culler 1986; Kosinski 1973; Wail and Abramson 1995; Weng 1975; Whiting and Pascoe 1994]. They found that by restricting certain aspects of these languages, such as assignments, they could create languages [Ackerman 1982; Ashcroft and Wadge 1977; Dennis 1974; Hankin and Glaser 1981; Kosinski 1978] that more naturally fitted the dataflow architecture and could thus run much more efficiently on it. These are the so-called dataflow programming languages [Ackerman 1982; Whiting and Pascoe 1994] that developed distinct properties and programming styles as a consequence of the fact that they were compiled into dataflow graphs-the "machine language" of dataflow computers.
    The often-expressed view in the 1970s and early 1980s that this form of dataflow architecture would take over from von Neumann concepts [Arvind et al. 1977; Treleaven et al. 1982; Treleaven and Lima 1984] never materialized [Veen 1986]. It was realized that the parallelism used in dataflow architectures operated at too fine a grain and that better performance could be obtained through hybrid von Neumann dataflow architectures. Many of these architectures [Bic 1990] took advantage of more coarse-grained parallelism where a number of dataflow instructions were grouped and executed in sequence. These sets of instructions are, nevertheless, executed under the rules of the dataflow execution model and thus retain all the benefits of that approach. Most dataflow architecture efforts being pursued today are a form of hybrid [Iannucci 1988; Nikhil and Arvind 1989], although not all; see, for example, Verdoscia and Vaccaro [1998].
    The 1990s saw a growth in the field of dataflow visual programming languages (DFVPLs) [Auguston and Delgado 1997; Baroth and Hartsough 1995; Bernini and Mosconi 1994; Ghittori et al. 1998; Green and Petre 1996; Harvey and Morris 1993, 1996; Hils 1992; Iwata and Terada 1995; Morrison 1994; Mosconi and Porta 2000; Serot et al. 1995; Shizuki et al. 2000; Shurr 1997; Whiting and Pascoe 1994; Whitley 1997]. Some of these, such as LabView and Prograph were primarily driven by industry, and the former has become a successful commercial product that is still used today. Other languages, such as NL [Harvey and Morris 1996], were created for research. All have software engineering as their primary motivation, whereas dataflow programming was traditionally concerned with the exploitation of parallelism. The latter remains an important consideration, but many DFVPLs are no longer primarily concerned with it. Experience has shown that many key advantages of DFVPLs lie with the software development lifecycle [Baroth and Hartsough 1995].
    This article traces the development of dataflow programming through to the present. It begins with a discussion of the dataflow execution model, including a brief overview of dataflow hardware. Insofar as this research led to the development of dataflow programming languages, a brief historical analysis of these is presented. The features that define traditional, textual dataflow languages are discussed, along with examples of languages in this category. The more recent trend toward large-grained dataflow is presented next. Developments in the field of dataflow programming languages in the 1990s are then discussed, with an emphasis on DFVPLs. As the environment is key to the success of a DFVPL, a discussion of the issues involved in development environments is also presented, after which four examples of open issues in dataflow programming are presented.
    Extract: Early Dataflow Programming Languages
    3. Early Dataflow Programming Languages
    3.1. The Development of Dataflow Languages
    With the development of dataflow hardware came the equally challenging problem of how to program these machines. Because they were scheduled by data dependencies, it was clear that the programming language must expose these dependencies. However, the data dependencies in each class of language can be exploited to different degrees, and the amount of parallelism that can be implicitly or explicitly specified also differs. Therefore, the search began for a suitable paradigm to program dataflow computers and a suitable compiler to generate the graphs [Arvind et al. 1988]. Various paradigms were tried, including imperative, logical, and functional methods. Eventually, the majority consensus settled on a specific type of functional language that became known as dataflow languages.
    An important clarification must be made at this stage. In early publications, dataflow graphs are often used to illustrate programs. In many cases, these graphs are simply representations of the compiled code [Dennis and Misunas 1975] that would be executed on the machine, where the graph was generated either by hand or by a compiler from a third-generation programming language. Until the advent of Dataflow Visual Programming Languages in the 1980s and 1990s, it was rarely the intention of researchers that developers should generate these graphs directly. Therefore these early graphs are not to be thought of as "dataflow programming languages."
    3.1.1. What Constitutes a Dataflow Programming Language? While dataflow programs can be expressed graphically, most of the languages designed to operate on dataflow machines were not graphical. There are two reasons for this. First, at the low level of detail that early dataflow machines required, it became tedious to graphically specify constructs such as loops and data structures which could be expressed more simply in textual languages [Whiting and Pascoe 1994]. Second, and perhaps more importantly, the hardware for displaying graphics was not available until relatively recently, stifling any attempts to develop graphical dataflow systems. Therefore, traditional dataflow languages are primarily text-based.
    One of the problems in defining exactly what constitutes a dataflow language is that there is an overlap with other classes of language. For example, the use of dataflow programming languages is not limited to dataflow machines. In the same way, some languages, not designed specifically for dataflow, have subsequently been found to be quite effective for this use (e.g., Ashcroft and Wadge [1977]; Wadge and Ashcroft [1985]). Therefore, the boundary for what constitutes a dataflow language is somewhat blurred. Nevertheless, there are some core features that would appear to be essential to any dataflow language. The best list of features that constitute a dataflow language was put forward by Ackerman [1982] and reiterated by Whiting and Pascoe [1994] and Wail and Abramson [1995]. This list includes the following:
    (1) freedom from side effects,
    (2) locality of effect,
    (3) data dependencies equivalent to scheduling,
    (4) single assignment of variables,
    (5) an unusual notation for iterations due to features 1 and 4,
    (6) lack of history sensitivity in procedures.
    Because scheduling is determined from data dependencies, it is important that the values of variables do not change between their definition and their use. The only way to guarantee this is to disallow the reassignment of variables once their value has been assigned. Therefore, variables in dataflow languages almost universally obey the single-assignment rule. This means that they can be regarded as values, rather than variables, which gives them a strong flavor of functional programming. The implication of the single-assignment rule is that the compiler can represent each value as one or more arcs in the resultant dataflow graph, going from the instruction that assigns the value to each instruction that uses that value.
    An important consequence of the single-assignment rule is that the order of statements in a dataflow language is not important. Provided there are no circular references, the definitions of each value, or variable, can be placed in any order in the program. The order of statements becomes important only when a loop is being defined. In dataflow languages, loops are usually provided with an imperative syntax, but the single-assignment rule is preserved by using a keyword such as next to define the value of the variable on the next iteration [Ashcroft and Wadge 1977]. A few dataflow languages offer recursion instead of loops [Weng 1975].
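    The point that data dependencies, not textual order, drive evaluation can be sketched in Python. The dictionary encoding of single-assignment definitions below is an illustrative assumption, not any particular dataflow machine's format:

```python
def evaluate(defs, inputs):
    # defs maps a name to (function, [operand names]) -- a toy stand-in
    # for the arcs of a dataflow graph.  A definition fires as soon as
    # all of its operands are available, regardless of textual order.
    values = dict(inputs)
    pending = dict(defs)
    while pending:
        ready = [n for n, (_, deps) in pending.items()
                 if all(d in values for d in deps)]
        if not ready:
            raise ValueError("circular reference among definitions")
        for name in ready:
            fn, deps = pending.pop(name)
            values[name] = fn(*(values[d] for d in deps))
    return values

# 'area' is written before 'height', yet evaluates after it.
defs = {
    "area": (lambda w, h: w * h, ["width", "height"]),
    "height": (lambda w: w + 1, ["width"]),
}
vals = evaluate(defs, {"width": 4})
# vals["height"] is 5, vals["area"] is 20
```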
    Freedom from side effects is also essential if data dependencies are to determine scheduling. Most languages that avoid side effects do so by disallowing global variables and introducing scope rules. However, in order to ensure the validity of data dependencies, a dataflow program does not even permit a function to modify its own parameters. All of this can be avoided by the single-assignment rule. However, problems arise with this strategy when data structures are being dealt with. For example, how can an array be manipulated if only one assignment can ever be made to it? Theoretically, this problem is dealt with by conceptually viewing each modification of an array as the creation of a new copy of the array, with the given element modified. This issue is dealt with in more detail in Section 6.3.
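    The conceptual copy-on-modification treatment of arrays can be shown in a few lines of Python; this is a deliberately naive sketch, since practical dataflow systems optimise away most of the copying:

```python
def array_update(arr, index, value):
    # Single-assignment style 'modification': a new array is produced
    # with one element changed; the original is never mutated.
    new = list(arr)
    new[index] = value
    return new

a = [1, 2, 3]
b = array_update(a, 1, 9)
# a is still [1, 2, 3]; b is [1, 9, 3]
```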
    It is clear from the above discussion that dataflow languages are almost invariably functional. They have applicative semantics, are free from side effects, are determinate in most cases, and lack history sensitivity. This does not mean that dataflow and functional languages are equivalent. It is possible to write certain convoluted programs in the functional language Lucid [Ashcroft and Wadge 1977], which cannot be implemented as a dataflow graph [Ashcroft and Wadge 1980]. At the same time, much of the syntax of dataflow languages, such as loops, has been borrowed from imperative languages. Thus it seems that dataflow languages are essentially functional languages with an imperative syntax [Wail and Abramson 1995].
    3.1.2. Dataflow Languages. A number of textual dataflow languages, or functional languages that can be used with dataflow, have been implemented. A representative sample is discussed below. (Whiting and Pascoe [1994] presented a fuller review of these languages.) Dataflow Visual Programming Languages are discussed in detail in Section 5.

    Extract: Cajole and GRUNCH
    Shortly afterwards, the Grunch system was developed by de Jong et al. [1982], the same researchers who created the Cajole textual dataflow language [Hankin and Glaser 1981]. While not a programming language in the proper sense, it was a graphical overlay for Cajole that allowed the developer to graphically express a dataflow program using stepwise refinement, and then use the tool to convert the graph into Cajole. The actual conversion was performed by an underlying tool called Crunch. The development of Grunch supported the claims of Davis and Keller [1982] that software engineering could be as much a motivation for pursuing graphical dataflow as the pursuit of efficient parallelism.
          in SIGPLAN Notices 17(08) August 1982