REF-ARF(ID:525/ref002)


Pioneering constraint-satisfaction (CSP) problem-solving language


Related languages
REF-ARF => RAMOS   Influence
REF-ARF => ThingLab   Influence

References:
  • Fikes, R.E. "REF-ARF: A System for Solving Problems Stated as Procedures"
          in Artificial Intelligence Journal 1(1) Spring 1970
  • Gibbons, Gregory Dean "Beyond REF-ARF: toward an intelligent processor for a nondeterministic programming language" PhD thesis, Carnegie-Mellon University, Department of Computer Science, 1972
  • Stanton, R.B. "The interpretation of graphics and graphic languages" pp144-160 Abstract: Computer processing of graphical data can be divided into two broad categories, by drawing a distinction between those systems which attempt to interpret the data as a 'picture' and those which do not. We are here concerned with the former, whilst acknowledging that the majority of computer graphics systems belong to the latter.

    Research into the interpretation of graphics has been motivated primarily from two sources. The first is simply the desire to realise in man-machine communication the kind of descriptive power supported by the use of graphics in man-man communication. The second springs from working with a data base of information which is most conveniently recorded in graphical form (e.g. maps, engineering drawings, etc.). The quality associated with this kind of interpretation is captured by the idea of the 'machine perception of graphics'. The acceptance of this idea places computer graphics, and with it graphic languages, in a cognate position with respect to picture interpretation and scene analysis, although there are, of course, important differences. The body of the paper is concerned with reviewing the status of graphical languages given that the task for which they have to be suited is one of interpretation.
          in Nake, F. and Rosenfeld, A. "Graphic Languages" Amsterdam: North-Holland Publishing Company 1972.
  • Stock, Marylene and Stock, Karl F. "Bibliography of Programming Languages: Books, User Manuals and Articles from PLANKALKUL to PL/I" Verlag Dokumentation, Pullach/München 1973 501 Abstract: PREFACE AND INTRODUCTION
    The exact number of all the programming languages still in use, and those which are no longer used, is unknown. Zemanek calls the abundance of programming languages and their many dialects a "language Babel". When a new programming language is developed, only its name is known at first and it takes a while before publications about it appear. For some languages, the only relevant literature stays inside the individual companies; some are reported on in papers and magazines; and only a few, such as ALGOL, BASIC, COBOL, FORTRAN, and PL/1, become known to a wider public through various text- and handbooks. The situation surrounding the application of these languages in many computer centers is a similar one.

    There are differing opinions on the concept "programming languages". What is called a programming language by some may be termed a program, a processor, or a generator by others. Since there are no sharp borderlines in the field of programming languages, works were considered here which deal with machine languages, assemblers, autocoders, syntax and compilers, processors and generators, as well as with general higher programming languages.

    The bibliography contains some 2,700 titles of books, magazines and essays for around 300 programming languages. However, as shown by the "Overview of Existing Programming Languages", there are more than 300 such languages. The "Overview" lists a total of 676 programming languages, but this is certainly incomplete. One author has already announced the "next 700 programming languages"; it is to be hoped the many users may be spared such a great variety for reasons of compatibility. The graphic representations (illustrations 1 & 2) show the development and proportion of the most widely-used programming languages, as measured by the number of publications listed here and by the number of computer manufacturers and software firms who have implemented the language in question. The illustrations show FORTRAN to be in the lead at the present time. PL/1 is advancing rapidly, although PL/1 compilers are not yet seen very often outside of IBM.

    Some experts believe PL/1 will replace even the widely-used languages such as FORTRAN, COBOL, and ALGOL. If this does occur, it will surely take some time - as shown by the chronological diagram (illustration 2).

    It would be desirable from the user's point of view to reduce this language confusion down to the most advantageous languages. Those languages still maintained should incorporate the special facets and advantages of the otherwise superfluous languages. Obviously such demands are not in the interests of computer production firms, especially when one considers that a FORTRAN program can be executed on nearly all third-generation computers.

    The titles in this bibliography are organized alphabetically according to programming language, and within a language chronologically and again alphabetically within a given year. Preceding the first programming language in the alphabet, literature is listed on several languages, as are general papers on programming languages and on the theory of formal languages (AAA).
    As far as possible, most of the titles are based on autopsy. However, the bibliographical description of some titles will not satisfy bibliography-documentation demands, since they are based on inaccurate information in various sources. Translation titles whose original titles could not be found through bibliographical research were not included. In view of the fact that many libraries do not have the quoted papers, all magazine essays should have been listed with the volume, the year, issue number and the complete number of pages (e.g. pp. 721-783), so that interlibrary loans could take place with fast reader service. Unfortunately, these data were not always found.

    It is hoped that this bibliography will help the electronic data processing expert, and those who wish to select the appropriate programming language from the many available, to find a way through the language Babel.

    We wish to offer special thanks to Mr. Klaus G. Saur and the staff of Verlag Dokumentation for their publishing work.

    Graz / Austria, May, 1973
  • Leavenworth, Burt M.; Sammet, Jean E. "An overview of nonprocedural languages" pp1-12 Abstract: This paper attempts to describe some of the basic characteristics and issues involving the class of programming languages commonly referred to as "nonprocedural" or "very high level". The paper discusses major issues such as terminology, relativeness, and arbitrary sequencing. Five features of nonprocedural languages are described, and a number of specific languages are discussed briefly. A short history of the subject is included.
    Extract: REF-ARF
    REF-ARF consists of a nondeterministic language for stating problems and a processor that attempts to find a successful execution of the nondeterministic program.
          in Proceedings of the ACM SIGPLAN symposium on Very high level languages, March 28-29, 1974, Santa Monica, California, United States
  • Fleck, A.C. review of Cohen 1975 Extract: Introduction
    Introduction
    In 1965, Golomb and Baumert described a technique called backtrack programming, which is useful in writing programs to search for specific configuration vectors in large multi-dimensional spaces. Two years later Floyd presented detailed information on how such a technique could be implemented in a computer. Floyd also suggested that programming languages could be enhanced if they contained features enabling their users to utilise backtrack programming.
    In this work we describe an implementation of Floyd's ideas that can be programmed in a short time and utilises FORTRAN as a base language. More specifically, the objectives of this paper are:
    1. To propose a set of FORTRAN-like primitives that can be used to express Floyd's non-deterministic algorithms.
    2. To describe a syntax-directed translator that transforms programs written in the extended FORTRAN containing the new primitives into standard (deterministic) FORTRAN programs. One such translator can be easily implemented in FORTRAN if one makes suitable conventions about the form in which programs should be input. These conventions do not semantically restrict the set of extended FORTRAN programs that can be processed by the translator. It is shown that a simple finite-state grammar can be used to drive the translation.
    3. To present examples of the usage and translation of nondeterministic FORTRAN algorithms.

    In what follows, the main features of Floyd's paper will be briefly reviewed. His example of an algorithm to solve the eight queens problem is also used to illustrate the usefulness of non-deterministic programs. This problem consists of placing eight queens on a chessboard in such a way that there is at most one queen in each row, column, or diagonal of the board (see Fig. 1).
    A series of steps describing an algorithm to find one solution to the problem is the following:
    1. consider the first column;
    2. choose a row, i.e. a number from 1 to 8;
    3. test if a queen can be placed in the current column and row.
    If so, go to Step 4; otherwise backtrack (i.e. 'undo' the actions that have been taken) to the previous point where a new choice can be made (Step 2);
    4. place the queen in the current row and column position and list the row number for future display when a solution is found;
    5. test if all queens have been placed. If so, report the successful solution of the problem by displaying the list prepared in Step 4 and then stopping; otherwise consider the next column and go to Step 2.
    Notice that the actions that have to be 'undone' in case the test of Step 3 fails may entail removing one or more previously placed queens and erasing their row numbers in the list of Step 4.
    The translation of the above algorithm into a computer-like language can be easily accomplished by observing that the sum or difference of the indices of an element in a diagonal is constant. A convenient way to perform the test of Step 3, making use of this property, is to represent the chessboard by three one-dimensional arrays a, b, and c, whose bounds are respectively 1 to 8 (the number of rows), 2 to 16 (the smallest and largest values of the sum of the indices of a row and column), and -7 to 7 (the smallest and largest values of the difference of the indices of a row and column). A 'one' in a_i, b_j, and c_k indicates that the i-th row, j-th left diagonal, and k-th right diagonal are occupied. A 'zero' indicates that the corresponding row and diagonals are free. The test of Step 3 in the above algorithm is then translated into a check that the chosen row and both of its diagonals are free, i.e. that a_row, b_(row+col), and c_(row-col) are all zero.
    Fig. 2 presents, in flowchart form, the algorithm just described to solve the eight queens problem. Notice that the terms CHOICE, FAILURE, and SUCCESS introduced in some of the boxes of Fig. 2 correspond to the expressions 'choose', 'backtrack', and 'report successful solution' used in the informal description of the algorithm. The reader may want to verify that the execution of the algorithm in Fig. 2 reaches the box SUCCESS if the variable row in the box row ← choice(8) takes the sequence of values 8, 4, 1, 3, 6, 2, 7, 5 (see Fig. 1).
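    A minimal Python sketch (an illustrative rendering, not Cohen's extended-FORTRAN version) of the algorithm just described: the CHOICE of a row becomes an explicit loop, backtracking becomes the undoing of a placement before the next choice is tried, and the Step 3 test uses the arrays a, b, and c (0-based here) to check the row and both diagonals.

        # Deterministic rendering of the non-deterministic eight queens algorithm.
        # a[i] == 1 iff row i is occupied; b and c mark the two kinds of diagonals,
        # indexed by row+col and by row-col (shifted to stay non-negative).
        N = 8
        a = [0] * N                # rows
        b = [0] * (2 * N - 1)      # "sum" diagonals
        c = [0] * (2 * N - 1)      # "difference" diagonals
        placed = []                # Step 4: rows chosen so far, shown on SUCCESS

        def solve(col=0):
            if col == N:                               # Step 5: SUCCESS
                print(placed)
                return True
            for row in range(N):                       # Step 2: CHOICE of a row
                if a[row] or b[row + col] or c[row - col + N - 1]:
                    continue                           # Step 3 test fails: next choice
                a[row] = b[row + col] = c[row - col + N - 1] = 1
                placed.append(row)                     # Step 4: place the queen
                if solve(col + 1):
                    return True
                placed.pop()                           # backtrack: undo the placement
                a[row] = b[row + col] = c[row - col + N - 1] = 0
            return False                               # FAILURE: no row works here

        solve()

    In this rendering the host language's call stack plays the role Floyd assigns to the memory stack M: every assignment made on the way forward is undone on the way back.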
    The objective of Floyd's paper is to provide detailed information on how a flowchart describing a non-deterministic algorithm (such as that of Fig. 2) can be translated into a standard deterministic flowchart representing a program executable by a computer. We transcribe in Fig. 3 some of Floyd's transformation rules that enable one to automatically perform such a translation. These rules make use of three stacks M, W, R (for memory, write, and read) and a temporary variable T; an f denotes an arbitrary expression, an X an arbitrary variable, and a P an arbitrary conditional. Each row in Fig. 3 corresponds to one transformation rule: a non-deterministic command in column 2 is transformed into a deterministic forward part under column C+ and a backtracking part under column C-. The forward parts of the commands specifying CHOICE, SUCCESS, and FAILURE are directly linked to their backtracking counterparts. The following comments are helpful in understanding the transformation rules presented in rows (a) to (i) of Fig. 3:
    (a) The assignment of an expression to a variable is preceded by stacking the previous value of that variable so as to recover it when backtracking. In two special cases, stacking is unnecessary: (1) the expression f is a function of X having a known inverse and recovery of X consists of assigning this inverse to the variable; (2) the programmer knows that the previous value of the variable is not needed when backtracking and specifies stacking as unnecessary.
    (b) There is no loss of information in the execution of a branch command; therefore, no special provisions are needed for backtracking.
    (c) A fork, i.e. the merging of two program paths into a single path, is translated in the following manner: in the forward part of the program a marker (a 0 or a 1) is stacked in each of the merging paths, so as to enable the program to recapture the proper path when backtracking.
    (d) The starting point of the forward part of a program corresponds to the halting point of its backtracking part.
    (e) A CHOICE command is translated into a test which initially commands the program to take the forward direction of execution with X = f, the previous value of X being stacked as in the case of assignments. Subsequent returns to the CHOICE command when backtracking result in re-executing the forward part with a decreased value of X until it reaches the value 0; in the latter case, the original value of X is restored, and backtracking continues to the command preceding the CHOICE command.
    (f) When a SUCCESS command is reached, the contents of stack W (used in the write command) are displayed and one of two actions may take place depending on the programmer's option: the program either stops or proceeds in its backtracking mode. The latter option is useful when one wishes to compute all solutions that can be obtained by a non-deterministic algorithm.
    (g) The forward part of a program is directly linked to its backtracking part.
    (h) The value of an output variable is stacked in W in the forward part and unstacked when backtracking.
    (i) Two pointers 'max' and 'min' are introduced to keep track of the last element actually read (max) and the element that should be considered when the next non-deterministic read command is activated (min). The transformation rule as presented in Fig. 3 is a corrected version of the one given by Floyd. His transformation rule for the read command contains a flaw which can best be shown through an example. Suppose the following sequence of commands is executed:
          in ACM Computing Reviews 16(02) February 1975
  • Borning, Alan "ThingLab - A Constraint-Oriented Simulation Laboratory" Xerox PARC Report SSL-79-3 July 1979 Extract: The Kernel ThingLab System
    The Kernel ThingLab System

    The kernel ThingLab system consists of a Smalltalk extension, written by the present author, that is used in all ThingLab simulations. Embedded in this program is knowledge about such things as inheritance hierarchies, part-whole relations, and constraint satisfaction techniques. The kernel system doesn't have any knowledge about specific domains in which ThingLab can be used, such as geometry or electrical circuits. Rather, it provides tools that make it easy to construct objects that contain such knowledge.

    Another goal in constructing the system, besides the exploration of language design as described above, was to investigate computer-based tools for use in education. For example, a ThingLab-style system might prove valuable as part of a geometry curriculum, or as an adjunct to a physics laboratory. With this in mind, it is anticipated that there would be two sorts of users of the system. The first sort of user would employ ThingLab to construct a set of building blocks for a given domain. For example, for use in simulating electrical circuits, such a user would construct definitions of basic parts such as resistors, batteries, wires and meters. The second sort of user could then employ these building blocks to construct and explore particular simulations. The knowledge and skills required by these two kinds of users would be different. The first kind of user would need to know about message passing, constraint specification, and the relevant domain theory (e.g. Ohm's Law). The second kind of user, on the other hand, could deal with the system using only simple interactive graphics techniques, such as selecting items in a menu or moving pictures around on the screen. Thus, this sort of user wouldn't need to be familiar with either the details of ThingLab, or with the domain-specific theory behind the simulation (although one of the objectives of a curriculum might be for such a user to acquire this domain-specific knowledge).

    Extract: Constraint Representation and Satisfaction
    Constraint Representation and Satisfaction

    Representation of Constraints

    The representation of constraints reflects their dual nature as both descriptions and commands. Constraints in ThingLab are represented as a rule and a set of methods that can be invoked to satisfy the constraint. The rule is used by the system to construct a procedural test for checking whether or not the constraint is satisfied, and to construct an error expression that indicates how well the constraint is satisfied. The methods describe alternate ways of satisfying the constraint; if any one of the methods is invoked, the constraint will be satisfied.
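
    As a rough illustration of this rule-plus-methods representation (the class and member names below are invented for this sketch, not ThingLab's Smalltalk protocol), a constraint can be modelled as an error expression derived from its rule together with alternative procedures that each re-establish it:

        # Illustrative sketch only; names are hypothetical, not ThingLab's actual protocol.
        class Constraint:
            def __init__(self, error, methods):
                self.error = error        # derived from the rule; 0 means satisfied
                self.methods = methods    # alternative ways of satisfying the rule

            def satisfied(self, state, eps=1e-9):
                return abs(self.error(state)) < eps

            def satisfy(self, state, which=0):
                self.methods[which](state)   # invoking any one method satisfies it

        # Example: a horizontal-line constraint on a line's two endpoints.
        horizontal = Constraint(
            error=lambda line: line["y1"] - line["y2"],
            methods=[
                lambda line: line.update(y2=line["y1"]),   # move endpoint 2
                lambda line: line.update(y1=line["y2"]),   # or move endpoint 1
            ],
        )

        side = {"y1": 3.0, "y2": 7.0}
        if not horizontal.satisfied(side):
            horizontal.satisfy(side, which=0)              # now side["y2"] == 3.0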

    Merges

    An important special case of a constraint is a merge. When several parts are merged, they are constrained to be identical. For efficiency, they are usually replaced by a single part rather than being kept as several separate objects. The owner of the parts keeps a symbolic representation of the merge for use in constraint satisfaction, as well as for reconstruction of the original parts if the merge is deleted. One use of merging is to represent connectivity. For example, to connect two sides of the triangle, an endpoint from one side is merged with an endpoint of the other. Another use of merging is to apply pre-defined constraints. Thus, to constrain the base of the triangle to be horizontal, one can simply merge an instance of HorizontalLine with the side to be constrained.

    Constraint Satisfaction

    It is up to the user to specify the constraints on an object; but it is up to the system to satisfy them. Satisfying constraints is not always trivial. A basic problem is that constraints are typically multi-directional. For example, the horizontal line constraint is allowed to change either endpoint of the line. Thus, one of the tasks of the system is to choose among several possible ways of locally satisfying each constraint. One constraint may interfere with another; in general the collection of all the constraints on an object may be incomplete, circular, or contradictory. Again it is up to the system to sort this out.

    The approach taken in ThingLab is first to analyze the constraints on an object and plan how to satisfy them, and then to make the actual changes to satisfy the constraints. In ThingLab, the particular values that an object holds usually change much more rapidly than its structure. For example, if on the display the user moves some part of a constrained geometric object with the cursor, the values held by this object will change every time its picture is refreshed. Each time some value is changed, other values may need to be changed as well to keep the constraints satisfied. However, the object's structure will change only when the user adds or deletes a part or constraint. The design of the ThingLab constraint satisfaction mechanism is optimized for this environment. A constraint satisfaction plan may depend on the particular structure of an object, but should work for any values that the object might hold. (If not, appropriate tests must be included as part of the plan.) Once a plan for satisfying some constraints has been constructed, Smalltalk code is compiled to carry out this plan. Thus each time the part of the constrained geometric object is moved, it is this pre-compiled method that is invoked, rather than a general interpretive mechanism. Also, the plan is remembered in case it is needed again. Planning is done using symbolic references to the constrained parts, so that the same plan may be used by all instances of a class. If the class structure is changed so that the plan becomes obsolete, it will be automatically forgotten.

    When an object is asked to make a change to one of its parts or subparts, it gathers up all the constraints that might be affected by the change, and plans a method for satisfying them. In planning a constraint satisfaction method, the object will first attempt to find a one-pass ordering for satisfying the constraints. There are two techniques available in ThingLab for doing this: propagation of degrees of freedom, and propagation of known states. In propagating degrees of freedom, the constraint satisfier looks for an object with enough degrees of freedom so that it can be altered to satisfy all its constraints. If such an object is found, that object and all the constraints that apply to it can be removed from further consideration. Once this is done, another object may acquire enough degrees of freedom to satisfy all its constraints. The process continues in this manner until either all constraints have been taken care of, or until no more degrees of freedom can be propagated. In the second technique, propagating known states, the constraint satisfier looks for objects whose states are completely known. If such an object is found, the constraint satisfier will look for one-step deductions that allow the states of other objects to be known, and so on recursively.
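
    A toy sketch of the first one-pass technique, propagation of degrees of freedom; the data layout and the "only one remaining constraint" test below are simplifications assumed for the sketch, not the kernel system's actual criterion:

        # Hypothetical sketch: a part with only one remaining constraint is treated as
        # having enough degrees of freedom to absorb it; that constraint is scheduled
        # to be satisfied by changing the part, and both are removed from consideration.
        def plan_by_degrees_of_freedom(parts, constraints):
            """parts: set of names; constraints: list of (label, set_of_parts_used)."""
            remaining, plan, progress = list(constraints), [], True
            while remaining and progress:
                progress = False
                for part in sorted(parts):
                    touching = [c for c in remaining if part in c[1]]
                    if len(touching) == 1:
                        plan.append((touching[0][0], part))  # satisfy it by moving `part`
                        remaining.remove(touching[0])
                        parts.discard(part)
                        progress = True
            # the plan runs in reverse removal order; leftover constraints are circular
            return list(reversed(plan)), remaining

        plan, circular = plan_by_degrees_of_freedom(
            {"p1", "p2", "p3"},
            [("horizontal(p1,p2)", {"p1", "p2"}), ("merge(p2,p3)", {"p2", "p3"})],
        )
        # plan == [("merge(p2,p3)", "p2"), ("horizontal(p1,p2)", "p1")], circular == []

    Propagation of known states works in the opposite direction: it starts from parts whose values are fully determined and makes one-step deductions outward from them.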

    If there are constraints that cannot be handled by either of these techniques, the object will invoke a method for dealing with circularity. Currently, the classical relaxation method is the only such method available. As will be described in Chapter 5, relaxation can be used only with certain numerical constraints, and is also slow. In this method, the object changes each of its numerical values in turn so as to minimize the error expressions of its constraints. These changes are determined by approximating the constraints on a given value as a set of linear equations, found by taking the derivative of the error expressions with respect to the value, and then solving this set of equations. Relaxation continues until all the constraints are satisfied (all the errors are less than some cutoff), or until the system decides that it cannot satisfy the constraints (the errors fail to decrease after an iteration).
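
    The relaxation loop itself can be sketched as follows; this is a deliberately naive version that estimates derivatives numerically, rather than whatever ThingLab compiles from its error expressions:

        # Naive relaxation: adjust each numerical value in turn to reduce the squared
        # errors of the constraints, by linearizing around the current value, until
        # every error is below a cutoff or the iteration budget runs out.
        def relax(values, errors, cutoff=1e-6, max_iters=100, h=1e-4):
            """values: dict name -> float; errors: list of functions of `values`."""
            for _ in range(max_iters):
                if max(abs(e(values)) for e in errors) < cutoff:
                    return True                          # all constraints satisfied
                for name in values:
                    def sq(v, name=name):                # squared error vs. this one value
                        trial = dict(values, **{name: v})
                        return sum(e(trial) ** 2 for e in errors)
                    x = values[name]
                    grad = (sq(x + h) - sq(x - h)) / (2 * h)
                    curv = (sq(x + h) - 2 * sq(x) + sq(x - h)) / (h * h)
                    if abs(curv) > 1e-12:
                        values[name] = x - grad / curv   # Newton step on this value alone
            return False                                 # errors failed to converge

        # e.g. a midpoint m of a segment from 0 to b, with b pinned at 10
        vals = {"m": 3.0, "b": 4.0}
        ok = relax(vals, [lambda v: v["m"] - v["b"] / 2, lambda v: v["b"] - 10])
        # ok is True; vals converges to approximately {"m": 5.0, "b": 10.0}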

    If the relaxation method is used, the system issues a warning message to the user. The user can either let things stand, or else supply additional information in the form of redundant constraints that eliminate the need for relaxation.

    Where are Constraints Useful?

    Where are constraints useful? In discussing this question, it is important to differentiate what can be expressed using constraints from what sets of constraints can be satisfied. Many more things can be expressed than can be satisfied. For example, it is easy to state the following constraints:

    x^n + y^n = z^n

    x, y, z, n integers

    x, y, z > 0

    n > 2.

    However, finding values that satisfy these constraints, or proving that no such values exist, requires either a counterexample to, or a proof of, Fermat's Last Theorem.

    What can be expressed using constraints? To express a relation as a constraint, the following information is needed: a rule (from which the system will derive a satisfaction test and an error expression); and one or more methods for satisfying the constraint. For numerical constraints, the methods may be omitted if the user is willing to live with the relaxation method. Any relation that must always hold, and for which this information can be supplied, may be expressed as a constraint. Some relations that cannot be expressed as constraints in a general way using current ThingLab techniques include: any relation involving ordering or time; relations that need hold only under certain conditions; and meta-constraints (constraints on other constraints or on constraint satisfaction strategies).

    What sets of constraints can be satisfied? If the constraint dependency graph has no circularities, or if the circularities can all be broken using one-step deductions, then the one-pass constraint satisfaction techniques will always succeed, and will provide correct results. Further, the constraints can be satisfied, or determined to be unsatisfiable, in time proportional to that required to execute the local methods provided by the constraints. If the dependency graph does have circularities that cannot be broken by one-step deductions, relaxation is applied to those constraints for which it can be used. These constraints must either be linear, or else constraints for which linearization is an adequate approximation. An example of a set of circular constraints for which the relaxation method does not work is the set describing a cryptarithmetic problem, e.g. DONALD + GERALD = ROBERT with D=5. [See Newell & Simon 1972 for a description of this domain.] Relaxation is useless here, since the constraints cannot be approximated by linear equations. To solve such kinds of problems, other constraint satisfaction techniques would be needed, such as heuristic search.

    Relation to Other Work

    As mentioned previously, the two principal ancestors of ThingLab are Sketchpad and Smalltalk. It is also closely related to work on constraints by Gerald Sussman and his students; other related work includes Simula, the Actor languages, KRL, and a number of problem solving systems. Following a discussion of these and other systems, a summary of the novel features of ThingLab is presented.
    Extract: REF-ARF
    Richard Fikes constructed a system, REF-ARF, for solving problems stated as procedures. In the programming language REF, select functions were used to indicate the permissible values for a variable, while condition statements were used to build up sets of constraints that the variables had to satisfy. ARF, the problem solver, then attempted to find values for the variables that satisfied all the conditions by first using a number of rather clever constraint manipulation methods to limit the possible values of the variables, followed by a GPS-style search to find an answer.
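
    As an illustration of this select/condition style (invented helper names, Python rather than REF syntax): select declares the permissible values of a variable, condition-like predicates record constraints, and a small solver standing in for ARF prunes the domains before searching for a satisfying assignment. ARF's actual constraint manipulation and GPS-style search were far more sophisticated than this enumeration.

        # Hypothetical rendering of the select/condition style; not REF's syntax or ARF's search.
        from itertools import product

        def select(*values):
            return list(values)                 # permissible values for a variable

        def solve(domains, conditions):
            """domains: var -> list of values; conditions: list of (vars, predicate)."""
            # constraint manipulation: shrink each domain using its single-variable conditions
            for var, dom in domains.items():
                unary = [pred for vars_, pred in conditions if vars_ == (var,)]
                domains[var] = [v for v in dom if all(pred(v) for pred in unary)]
            # search: look for an assignment over the reduced domains satisfying everything
            names = list(domains)
            for combo in product(*(domains[n] for n in names)):
                env = dict(zip(names, combo))
                if all(pred(*(env[v] for v in vars_)) for vars_, pred in conditions):
                    return env                  # a successful "execution" of the program
            return None                         # the nondeterministic program always fails

        domains = {"x": select(*range(10)), "y": select(*range(10))}
        conditions = [
            (("x",), lambda x: x % 2 == 0),             # x must be even
            (("x", "y"), lambda x, y: x + y == 10),     # x and y must sum to 10
            (("x", "y"), lambda x, y: x < y),
        ]
        print(solve(domains, conditions))               # -> {'x': 2, 'y': 8}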