PLANNER (ID:297/pla014) Goal-directed LISP dialect. Carl Hewitt, MIT, 1967. A language for writing theorem provers, developed in Hewitt's PhD thesis. Important for the development of daemons. Never fully implemented, but Sussman, Winograd and Charniak implemented MICRO-PLANNER in 1970. Related languages
References: Preface PLANNER is a language for proving theorems and manipulating models in a robot. Although we say that PLANNER is a programming language, we do not mean to imply that it is a purely procedural language like the lambda calculus in pure LISP. PLANNER is different from pure LISP in that function calls can be made indirectly through recommendations specifying the form of the data on which the function is supposed to work. In such a call the actual name of the called function is usually unknown. Many of the primitives in PLANNER are concerned with manipulating a data base. The language will be explained by giving an over-simplified picture and then attempting to correct any misapprehensions that the reader might have gathered from the rough outline. The basic idea behind the language is a duality that we find between certain imperative and declarative sentences. For example consider the statement (implies a b). As it stands the statement is a perfectly good declarative statement. It also has certain imperative uses for PLANNER. For example it says that we should set up a procedure which will note whether a is ever asserted and if so to consider whether b should then be asserted. Furthermore it says that we should set up a procedure that will watch to see if it ever is our goal to try to deduce b and if so whether it is wise to make a subgoal to deduce a. Similar observations can be made about the contrapositive of the statement (implies a b). Statements with universal quantifiers, conjunctions, disjunctions, etc. also have both declarative and imperative uses. Of course if what we have described thus far were all there was to the language, then there would be no point. From the above observations, we have constructed a language that permits both the imperative and declarative aspects of statements to be easily manipulated. 
PLANNER uses a pattern directed information retrieval system that is more powerful than a retrieval system based directly on association lists. The language permits us to set up procedures which will make assertions and automatically draw conclusions from other assertions. Procedures can make recommendations as to which theorems should be used in trying to draw conclusions from an assertion, and they can recommend the order in which the theorems should be applied. Goals can be created and automatically dismissed when they are satisfied. Objects can be found from schematic or partial descriptions. Properly formulated descriptions have their own imperative uses for the language. Provision is made for the fact that statements that were once true in a model may no longer be true at some later time and that consequences must be drawn from the fact that the state of the model has changed. Assertions and goals created within a procedure can be dynamically protected against interference from other procedures. Procedures written in the language are extendable in that they can make use of new knowledge whether it be primarily declarative or imperative in nature. The logical deductive system used by PLANNER is subordinate to the hierarchical control structure of the language. PLANNER has a sophisticated deductive system in order to give us greater power over the direction of the computation. In several respects the deductive system is more powerful than the quantificational calculus of order omega. Our criteria for an ideal deductive system contrast with those that are used to justify resolution based systems. Having only a single rule of inference, resolution provides a very parsimonious logical system. Workers who build resolution systems hope that their systems can be made efficient through acute mathematical analysis of the simple structure of their deductive system. 
We have tried to design a sophisticated deductive system together with an elaborate control structure so that lengthy computations can be carried out without blowing up. Of course the control structure can still be used when we limit ourselves to using resolution as the sole rule of inference. Indeed, R. Burstall has suggested that we might try to implement some of the well known resolution strategies in PLANNER. Because of its extreme hierarchical control and its ability to make use of new imperative as well as declarative knowledge, it is feasible to carry out very long chains of inference in PLANNER. Extract: PLANNER Now that we have described MATCHLESS, we are in a position to begin a detailed description of PLANNER. Consider a statement that will match the pattern (implies $-a $-b). The statement has several imperative uses. x1: If we can deduce $$a, then we can deduce $$b. In PLANNER the statement x1 would be expressed as (antecedent (()) $$a {assert $$b}) which means that $$a is declared to be the antecedent of a theorem such that if $$a is ever asserted in such a way as to allow the theorem to become activated then $$b will be asserted. x2: If we want to deduce $$b, then establish a subgoal to first deduce $$a. In PLANNER the statement x2 would be expressed as (consequent (()) $$b {thprog () {goal $$a} {assert-consequent}}) which means that $$b is declared to be the consequent of a theorem such that if the subgoal $$a can be established using any theorem then the consequent $$b will be asserted. We obtain two more PLANNER statements analogous to the above by considering the contrapositive of (implies $$a $$b), which is (implies (not $$b) (not $$a)). The following three forms are the ones which are presently defined in the language for satisfying requests made in the body of procedures: (consequent $-declaration $-consequent $-expression) declares that $$consequent is the consequent of the theorem.
The theorem can be used to try to establish goals that match the pattern $$consequent. Whether or not the theorem will actually succeed in establishing the goal depends on $$expression. However, no theorem can be activated for a goal for which it is already currently activated. The only way that a theorem that begins with the atom consequent can be called is by the function goal. (antecedent $-declaration $-antecedent $-expression) declares that $$antecedent is the antecedent of the theorem. The theorem can be used to try to deduce consequences from the fact that a statement that matches $$antecedent has been asserted. The only way that a theorem that begins with the atom antecedent can be called is by the functions assert and conclude-from. (erasing $-declaration $-statement $-expression) can be used to try to deduce consequences from the fact that a statement that matches $$statement has been erased. The only way that a theorem that begins with the atom erasing can be called is by the function erase. Some of the functions in PLANNER are listed below together with brief explanations of their function. Examples of their use will be given immediately after the definition of the primitives below. The primitives probably cannot be understood without trying to understand the examples since the language is highly recursive. In general PLANNER will try to remember everything that it is doing on all levels unless there is some reason to forget some part of this information. In the implementation of the language special measures must be taken to ensure that variables receive their correct bindings. The most efficient way to implement the language is to put pointers on the stack back to the place where the correct bindings are. Value cells do not provide an efficient means of implementing the language. The default response of the language when a simple failure occurs is to back track to the last decision that it made and try to fix it up.
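As a rough illustration of the antecedent/consequent duality described above, here is a minimal sketch in Python rather than PLANNER: the same implication acts as an antecedent theorem (forward chaining on assertion) and as a consequent theorem (backward chaining on goals). All names here are illustrative stand-ins, not PLANNER primitives.

```python
# Sketch of the two imperative uses of (implies a b): an antecedent theorem
# forward-chains when a is asserted; a consequent theorem backward-chains
# when b becomes a goal. Hypothetical Python, not any PLANNER implementation.

database = set()
antecedent_theorems = [("a", "b")]   # x1: if a is asserted, assert b
consequent_theorems = [("b", "a")]   # x2: to deduce b, set up the subgoal a

def assert_stmt(stmt):
    """Assert stmt and let antecedent theorems draw their conclusions."""
    if stmt in database:
        return
    database.add(stmt)
    for trigger, conclusion in antecedent_theorems:
        if trigger == stmt:
            assert_stmt(conclusion)

def goal(stmt):
    """Try to establish stmt, backward chaining through consequent theorems."""
    if stmt in database:
        return True
    for consequent, subgoal in consequent_theorems:
        if consequent == stmt and goal(subgoal):
            assert_stmt(stmt)
            return True
    return False

assert_stmt("a")
print("b" in database)   # True: the forward (antecedent) use of the implication
```

A real PLANNER theorem also decides *whether* to draw the conclusion; here the action is unconditional, which is the over-simplified picture the text starts from.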
{{" thval} $-expression $-bindings $-state} will evaluate the value of $$expression with bindings which are the value of $$bindings and local state which is the value of $$state. At any given time PLANNER expressions are being evaluated in a local state. This local state determines what changes have been made to the data base, i.e., what erasures and assertions have been made. {{" state}} returns as its value the current local state. {{" update} $-state} will update the data base according to the state which is the value of $$state. {{" assert} $-statement {recommendation}} where (def recommendation (kappa (()) ...)). If the statement $$statement has already been asserted then the function assert acts as the null instruction. Otherwise, the function assert causes the statement $$statement to be asserted with a recommendation as to how to try to draw some conclusions from the fact that $$statement has been asserted. The () recommendation means that we should take no action. The recommendation ? excludes no theorem from consideration in trying to deduce consequences from the value of $$statement. The disjunction of a list of recommendations requires that each recommendation be tried in turn until one works. The conjunction of a list of recommendations requires that they all work in the order in which they appear in the conjunction. In a sequence of recommendations, PLANNER will try each recommendation in turn regardless of whether any given one succeeds or not. The subset of a list of recommendations tries all the sublists of the list in all possible orders. If the recommendation of an assertion statement fails or if a lower level failure backs up to the assertion then the assertion that $$statement holds is withdrawn. {{" conclude-from} $-statement {recommendation}} will cause PLANNER to try to draw conclusions from the statement $$statement using the recommendation.
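The recommendation combinators just described (disjunction, conjunction, sequence, subset) can be pictured as a small interpreter. The Python below is our own hypothetical reading of that prose; in particular, what a sequence or subset "returns" is an assumption on our part, and `run` is an illustrative stand-in for activating a theorem.

```python
from itertools import combinations, permutations

# Hypothetical sketch of the recommendation combinators described above.
# run(name) stands for "activate this theorem" and returns True on success.

def try_rec(rec, run):
    kind, items = rec[0], rec[1:]
    if kind == "theorem":          # a single theorem to activate
        return run(items[0])
    if kind == "disj":             # try each until one works
        return any(try_rec(r, run) for r in items)
    if kind == "conj":             # all must work, in the order given
        return all(try_rec(r, run) for r in items)
    if kind == "seq":              # try each regardless of outcome (our guess
        return any([try_rec(r, run) for r in items])   # at the result value)
    if kind == "subset":           # all sublists of the list, in all orders
        return any(all(try_rec(r, run) for r in perm)
                   for n in range(len(items), 0, -1)
                   for sub in combinations(items, n)
                   for perm in permutations(sub))
    raise ValueError(f"unknown recommendation: {kind}")

# A disjunction succeeds as soon as one alternative does:
succeeds_on = lambda name: name == "t2"
print(try_rec(("disj", ("theorem", "t1"), ("theorem", "t2")), succeeds_on))
```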
{{" assert-consequent} {recommendation}} causes the consequent in which the function assert-consequent appears to be asserted. The function should be used only in theorems that begin with the atom consequent. After the function assert-consequent has been evaluated, execution will cease in the theorem in which the function appears. {{" permanent-assert} $-statement {recommendation}} is like the function assert except that $$statement continues to hold even if a failure backs up to the call to permanent-assert. {{" temporary-assert} $-statement {recommendation}} is like assert except that $$statement will be withdrawn if everything works out. In other words, $$statement is a temporary result that will go away after we solve our current problem. {{" erase} $-statement {recommendation}} If there is a statement that matches $$statement, then it is erased and then the recommendation is followed. Otherwise, the function erase generates a simple failure. If a simple failure backs up to the function erase, then the statement that was originally erased is restored and the whole process repeats with another statement that has been proved. The function erase is a partial left inverse of the function assert. {{" proved?} $-statement} tests to see if a statement that matches $$statement has already been asserted. If there is such a statement, then the variables in the pattern $$statement are bound to the appropriate values. If there is no such statement, then a simple failure is generated. If a simple failure backs up to the function proved?, then the variables that were bound to the elements of the statement that was found first are unbound and the whole process repeats with another statement that has been proved. PLANNER is designed so that the time it takes to determine whether a statement that matches $$statement is in the data base or not is essentially independent of the number of irrelevant statements that have already been asserted.
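The behavior of proved? — binding pattern variables to one matching assertion, then rebinding them against the next match when a failure backs up — can be pictured with a Python generator, where resuming the generator plays the role of backtracking. This is an illustrative stand-in (variables are strings beginning with "?"), not PLANNER's matcher.

```python
# Sketch of pattern-directed retrieval in the spirit of proved?.
# Each yielded dict is one consistent set of variable bindings; a failure
# backing up corresponds to asking the generator for its next value.

database = [("subset", "a", "b"), ("subset", "a", "d"), ("subset", "b", "c")]

def match(pattern, statement, bindings):
    """Return extended bindings if pattern matches statement, else None."""
    if len(pattern) != len(statement):
        return None
    bindings = dict(bindings)
    for p, s in zip(pattern, statement):
        if p.startswith("?"):               # a pattern variable
            if bindings.get(p, s) != s:     # must agree with earlier binding
                return None
            bindings[p] = s
        elif p != s:                        # constants must be equal
            return None
    return bindings

def proved(pattern, bindings=None):
    """Generate one set of bindings per asserted statement matching pattern."""
    for statement in database:
        b = match(pattern, statement, bindings or {})
        if b is not None:
            yield b

print(list(proved(("subset", "a", "?y"))))   # [{'?y': 'b'}, {'?y': 'd'}]
```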
When an s-expression is asserted PLANNER remembers the position of every atom that occurs in the s-expression. Two expressions are similar on retrieval only to the extent that they have the same atoms in the same position. If MATCHLESS had an efficient parallel processing capability then the retrieval could be even faster since we would do the look-ups on atoms by position in parallel. {{" proven} $-pattern} will return as value a list whose first element is the number of remaining elements in the list and such that the remaining elements of the list are statements that have been asserted and match the pattern $$pattern. {{" for-proved} $-declaration $-pattern $-body} where body is of type seg will attempt to execute $$body once for every proved statement that matches the pattern $$pattern. {{" proveable} $-pattern {goal-recommendation}} will return as its value a list whose first element is the number of remaining elements in the list and such that the remaining elements of the list are the proveable statements that match the pattern $$pattern and can be proved using the recommendation. Note that if there are an infinite number of proveable statements that match the pattern $$pattern then the function proveable will not converge. {{" goal} $-statement {goal-recommendation}} where (def goal-recommendation (kappa (()) {block ((theoremlist seg)) {disj (first $-theoremlist) (only $-theoremlist)}})) A goal-recommendation of (first $-theoremlist) means that the theorems on $$theoremlist are the first to be used to try to achieve the goal which is the value of $$statement. On the other hand a goal-recommendation of (only $-theoremlist) means that the theorems on $$theoremlist in the order given are the only ones to be used to try to achieve the goal. The first thing that the function goal does is to evaluate {proved? $$statement}. If the evaluation produces a failure then the goal recommendation is followed to try to find a theorem that can establish $$statement.
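One way to realize the retrieval property just described is to file every assertion under each (atom, position) pair it contains, so that a lookup intersects a few small buckets instead of scanning the whole data base. The Python below is a sketch of that idea under our own naming, not PLANNER's actual implementation.

```python
from collections import defaultdict

# Atom-by-position indexing: each asserted statement is filed under every
# (atom, position) pair it contains; retrieval intersects the buckets for
# the pattern's constant atoms, so cost is roughly independent of the
# number of irrelevant assertions. Illustrative sketch only.

index = defaultdict(set)
statements = []

def remember(stmt):
    """Assert stmt and index it under each (atom, position) pair."""
    i = len(statements)
    statements.append(stmt)
    for pos, atom in enumerate(stmt):
        index[(atom, pos)].add(i)

def candidates(pattern):
    """Statements sharing every constant atom of pattern, position by position."""
    buckets = [index[(atom, pos)] for pos, atom in enumerate(pattern)
               if not atom.startswith("?")]          # skip pattern variables
    if not buckets:
        return list(statements)
    return [statements[i] for i in sorted(set.intersection(*buckets))]

remember(("subset", "a", "b"))
remember(("subset", "b", "c"))
remember(("member", "x", "a"))
print(candidates(("subset", "a", "?z")))   # [('subset', 'a', 'b')]
```

The parallel-retrieval remark above corresponds to intersecting the buckets concurrently rather than one after another.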
{{" goals} $-pattern} returns as its value a list of the currently active goals. {{" genfail}} causes a simple failure to be reported above. {{" genfail} $-message} causes a failure to be reported above with the message the value of $$message. {{" fail?} $-expr $-failclauses} where failclauses is of type seg evaluates $$expr. If the evaluation does not produce a failure, then the value of $$expr is the value of the function fail?. If the message of the failure matches the first element of a clause then the rest of the elements of the clause are evaluated. Otherwise, the failure continues to propagate upward. {{" failto} $-tag} causes failure to the tag $$tag which must previously have been passed over. {{" blkfail}} causes the current block to fail. {{" thfail}} causes the current theorem to fail. {{" end}} causes the current theorem to cease execution. {{" goal-end}} causes execution to cease on the current theorem and the current goal to be dismissed without being asserted. {{" finalize-from} $-tag} causes all actions that have been taken since the last time that the tag $$tag was passed over to be finalized. Finalize statements are mainly used to save storage. The next statement to be executed is the one immediately after the call to finalize-from. {{" thfinalize}} causes all actions that have been taken in the current theorem to be finalized. {{" blkfinalize}} causes all actions that have been taken in the current block to be finalized. {{" defth} $-theorem-name $-theorem} defines $$theorem-name to be the name of the theorem $$theorem. {{" thcond} $-clauselist} where clauselist is of type seg is like the LISP function cond except that it treats a simple failure in the first element of a clause like a (). {{" thprog} $-variablelist $-progbody} where progbody is of type seg is like the LISP function prog except that it can handle the mechanism of failure. {{" thand} $-conjuncts} where conjuncts is of type seg is like the LISP function and.
{{" thor} $-disjuncts} where disjuncts is of type seg is like the LISP function or. {{" thrplaca} $-a $-b} is like the LISP function rplaca except that the old value of $$a is remembered so that it can be restored in case of failure. Suppose that we know that (subset a b), (subset a d), (subset b c), and (for-all (x y z) (implies (and (subset x y) (subset y z)) (subset x z))) are true. How can we get PLANNER to prove that (subset a c) holds? We would give the system the following theorems. (subset a b) (subset a d) (subset b c) (defth backward (consequent (((x ptr) (z ptr))) (subset $?x $?z) {thprog ((y ptr)) {goal (subset $?x $?y) (first backward)} {goal (subset $$y $?z) (only backward)} {assert-consequent}})) Now when we ask PLANNER to evaluate {goal (subset a c)}, it looks for a theorem that it can activate to work on the goal. It finds backward and binds x to a and z to c. Then it makes (subset a $?y) a subgoal with the recommendation that backward should be used first to try to achieve the subgoal. The system notices that y might be d, so it binds y to d. Next (subset d c) is made a subgoal with the recommendation that only backward be used to try to achieve it. Thus backward is called recursively, x is bound to d, and z is bound to c. The subgoal (subset d $?y) is established causing backward to again be called. Extract: Conclusion PLANNER does not provide a very natural way to do a proof by contradiction. Another type of problem that PLANNER will not solve very naturally is to nonconstructively show that there is some object x such that (p x) is true. We shall call the logistic system based purely on the primitives of PLANNER "robot logic". Robot logic is a kind of hybrid between the classical logics such as the quantificational calculus and intuitionism, and the recursive functions as represented by the lambda calculus and Post productions. The semantical definition of truth in robot logic is complicated by the existence of the primitive erase.
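The subset deduction traced above amounts to depth-first backward chaining with backtracking. The Python sketch below uses a generator in place of PLANNER's failure mechanism; the depth bound is our crude stand-in for the rule that a theorem cannot be activated again for a goal it is already working on, and all names are ours, not PLANNER's.

```python
# Backward chaining for the subset example above, with Python generators
# standing in for failure-driven backtracking. Illustrative sketch only.

facts = {("a", "b"), ("a", "d"), ("b", "c")}   # the asserted subset statements

def subset_goal(x, z, depth=0):
    """Yield once for each way of establishing (subset x z)."""
    if (x, z) in facts:                 # proved? succeeds directly
        yield True
        return
    if depth > 4:                       # stand-in for the reactivation check
        return
    # the backward theorem: (subset x z) if (subset x y) and (subset y z)
    for (u, y) in facts:
        if u == x:                      # subgoal (subset x y) holds with this y
            yield from subset_goal(y, z, depth + 1)

print(any(subset_goal("a", "c")))   # True: via (subset a b) and (subset b c)
```

If the first choice of y leads nowhere (as when y is bound to d in the trace above), exhausting the inner generator and moving to the next fact is exactly the backtracking step.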
There are interesting parallels between theorem proving and algebraic manipulation. The two fields face similar problems on the issues of simplification, equivalence of expressions, intermediate expression bulge, and man-machine interaction. In any particular case, the theorems need not allow PLANNER to lapse into its default conditions. It will sometimes happen that the heuristics for a problem are very good and that the proof proceeds smoothly until almost the very end. At that point the program gets stuck and lapses into default conditions to try to push through the proof. On the other hand the program might grope for a while trying to get started and then latch onto a theorem that knows how to polish off the problem in a lengthy but foolproof computation. PLANNER is designed for use where one has a great number of interrelated procedures (theorems) that might be of use in solving some problem along with a general plan for the solution of the problem. The language helps to select procedures to refine the plan and to sequence through these procedures in a flexible way in case everything doesn't go exactly according to plan. The fact that PLANNER is phrased in the form of a language forces us to think more systematically about the primitives needed for problem solving. We do not believe that computers will be able to prove deep mathematical theorems without the use of a hierarchical control structure. Nor do we believe that computers can solve difficult problems where their domain dependent knowledge is limited to finite-state difference tables of connections between goals and methods. Extract: Acknowledgements The preceding is a report on some of the work that I have done as a graduate student at Project MAC. Reproduction in full or in part is permitted for any purpose of the United States government. We would like to thank the various system "hackers" that have made this work possible: D. Eastlake, R. Greenblatt, J. Holloway, T. Knight, G.
Mitchell, S. Nelson, and J. White. We had several useful discussions with H. V. McIntosh and A. Guzman on the subject of pattern matching. S. Papert and T. Winograd made suggestions for improving the presentation of the material in this paper. in Donald E. Walker, Lewis M. Norton (Eds.): Proceedings of the 1st International Joint Conference on Artificial Intelligence IJCAI-69, Washington, DC, May 1969. William Kaufmann, 1969 in Proceedings of the Second International Joint Conference on Artificial Intelligence (IJCAI), September 5 - 8, 1971, London, England in Computers & Automation 21(6B), 30 Aug 1972 Research into the interpretation of graphics has been motivated primarily from two sources. The first is simply the desire to realise in man-machine communication the kind of descriptive power supported by the use of graphics in man-man communication. The second springs from working with a data base of information which is most conveniently recorded in graphical form (e.g. maps, engineering drawings, etc.). The quality associated with this kind of interpretation is captured by the idea of the 'machine perception of graphics'. The acceptance of this idea places computer graphics and with it, graphic languages, in a cognate position with respect to picture interpretation and scene analysis, although there are, of course, important differences. The body of the paper is concerned with reviewing the status of graphical languages given that the task for which they have to be suited is one of interpretation. in Nake, F. and Rosenfeld, A. "Graphic Languages" Amsterdam: North-Holland Publishing Company 1972.
in [AFIPS] Proceedings of the 1972 Fall Joint Computer Conference FJCC 41 List processing languages such as LISP and IPL-V have played an essential role in many programs developed for artificial intelligence. These languages provide convenient data structure manipulations and built-in recursion which have made the writing of programs easier than in conventional programming languages. The author has developed a programming language called PLANNER, which appears to be the forerunner of a new set of programming languages oriented towards problem solving in artificial intelligence. The features incorporated into PLANNER are described in the paper. PLANNER differs significantly from past programming languages in that the language is goal-directed and has a pattern-directed backtrack control structure that controls the search for a goal. Theorems within PLANNER can be thought of as subroutines, and can be called by specifying the goal which is to be satisfied. In attempting to solve a goal, data and theorems may be invoked. The user can control the sequence in which theorems may be applied to goals through a RECOMMENDATION list. When some path is blocked in attempting to solve a goal, PLANNER processes have the capability of backtracking to previous states to continue the search. A large number of primitive commands are available for matching patterns and manipulating a data base. The pattern-matching capability is similar to the unification algorithm in theorem provers. The above features are important developments in the area of problem solving. PLANNER provides the user with the ability to add semantic information by specifying procedures in the programming language. The power of the system is apparent for state-space and problem-reduction problems (see Nilsson [1971] for a description of these terms). PLANNER may also be compared to conventional theorem provers which are based on the Robinson Resolution Principle (Robinson [1965]).
In particular, input proofs [1] are achieved in PLANNER. However, it is known that input proofs are, in general, not complete. Hence, it would appear that PLANNER does not have the same power as a theorem prover would in solving problems. However, if the user asserts as a theorem every subgoal that is developed, then linear proofs [3, 4, 5] can be achieved. Hence, by the judicious use of PLANNER commands, the system can be made to be complete. A number of procedural languages have been developed that incorporate features not contained in PLANNER. Based on experience with using PLANNER, a language termed CONNIVER [8] has been developed that minimizes the amount of backtracking and permits parallel processing. Other procedural languages, such as SAIL [2, 7], permit similar capabilities. An important feature that has been incorporated into these systems is the ability to pass messages between nodes of a goal tree. This feature permits information discovered at some node of a goal tree to be communicated to other nodes so that actions may be taken at these nodes based upon the information transmitted. These developments will start to make an impact on problem-solving systems, particularly where users have a PDP-10 available to them (as all of the languages cited have been implemented on the PDP-10). The work by the author in the development of PLANNER is therefore significant, and all workers in artificial intelligence should be aware of the implications of PLANNER and other problem-solving languages. in ACM Computing Reviews 14(08) August 1973 in ACM Computing Reviews 15(04) April 1974 The exact number of all the programming languages still in use, and those which are no longer used, is unknown. Zemanek calls the abundance of programming languages and their many dialects a "language Babel". When a new programming language is developed, only its name is known at first and it takes a while before publications about it appear.
For some languages, the only relevant literature stays inside the individual companies; some are reported on in papers and magazines; and only a few, such as ALGOL, BASIC, COBOL, FORTRAN, and PL/1, become known to a wider public through various text- and handbooks. The situation surrounding the application of these languages in many computer centers is a similar one. There are differing opinions on the concept "programming languages". What is called a programming language by some may be termed a program, a processor, or a generator by others. Since there are no sharp borderlines in the field of programming languages, works were considered here which deal with machine languages, assemblers, autocoders, syntax and compilers, processors and generators, as well as with general higher programming languages. The bibliography contains some 2,700 titles of books, magazines and essays for around 300 programming languages. However, as shown by the "Overview of Existing Programming Languages", there are more than 300 such languages. The "Overview" lists a total of 676 programming languages, but this is certainly incomplete. One author has already announced the "next 700 programming languages"; it is to be hoped the many users may be spared such a great variety for reasons of compatibility. The graphic representations (illustrations 1 & 2) show the development and proportion of the most widely-used programming languages, as measured by the number of publications listed here and by the number of computer manufacturers and software firms who have implemented the language in question. The illustrations show FORTRAN to be in the lead at the present time. PL/1 is advancing rapidly, although PL/1 compilers are not yet seen very often outside of IBM. Some experts believe PL/1 will replace even the widely-used languages such as FORTRAN, COBOL, and ALGOL. If this does occur, it will surely take some time, as shown by the chronological diagram (illustration 2).
It would be desirable from the user's point of view to reduce this language confusion down to the most advantageous languages. Those languages still maintained should incorporate the special facets and advantages of the otherwise superfluous languages. Obviously such demands are not in the interests of computer production firms, especially when one considers that a FORTRAN program can be executed on nearly all third-generation computers. The titles in this bibliography are organized alphabetically according to programming language, and within a language chronologically and again alphabetically within a given year. Preceding the first programming language in the alphabet, literature is listed on several languages, as are general papers on programming languages and on the theory of formal languages (AAA). As far as possible, most of the titles are based on autopsy. However, the bibliographical description of some titles will not satisfy bibliography-documentation demands, since they are based on inaccurate information in various sources. Translation titles whose original titles could not be found through bibliographical research were not included. In view of the fact that many libraries do not have the quoted papers, all magazine essays should have been listed with the volume, the year, issue number and the complete number of pages (e.g. pp. 721-783), so that interlibrary loans could take place with fast reader service. Unfortunately, these data were not always found. It is hoped that this bibliography will help the electronic data processing expert, and those who wish to select the appropriate programming language from the many available, to find a way through the language Babel. We wish to offer special thanks to Mr. Klaus G. Saur and the staff of Verlag Dokumentation for their publishing work.
Graz / Austria, May, 1973 in ACM Computing Reviews 15(04) April 1974 The PLANNER concept was developed by Hewitt at MIT starting in 1967 (Hewitt 1971, 1972), and Sussman and Winograd built a first implementation, MICRO-PLANNER, which contained a subset of PLANNER features. These projects established the basis of the currently popular concept of procedural representation of knowledge. CONNIVER is a recent attempt by Sussman at MIT to remedy some observed shortcomings in the practical use of PLANNER, while preserving its good ideas. in [ACM] ACM Computing Surveys (CSUR) 6(3) September 1974 Extract: Planner Pattern directed structures are incorporated in the PLANNER language in a fundamental way. Specifically, PLANNER includes a pattern directed data base search, and the pattern directed invocation of procedures. The pattern directed data base search allows the user to ask for data items called assertions which match a given pattern, while pattern directed procedure invocation has the capability to initiate tasks of the form "call a subroutine which will achieve the desired result at this point" (Sussman, et al., 1971). in Proceedings of the ACM SIGPLAN symposium on Very high level languages, March 28-29, 1974, Santa Monica, California, United States Introduction In 1965, Golomb and Baumert described a technique called backtrack programming, which is useful in writing programs to search for specific configuration vectors in large multi-dimensional spaces. Two years later Floyd presented detailed information on how such a technique could be implemented in a computer. Floyd also suggested that programming languages could be enhanced if they contained features enabling their users to utilise backtrack programming. In this work we describe an implementation of Floyd's ideas that can be programmed in a short time and utilises FORTRAN as a base language. More specifically, the objectives of this paper are: 1.
To propose a set of FORTRAN-like primitives that can be used to express Floyd's non-deterministic algorithms. 2. To describe a syntax-directed translator that transforms programs written in the extended FORTRAN containing the new primitives into standard (deterministic) FORTRAN programs. One such translator can be easily implemented in FORTRAN if one makes suitable conventions about the form in which programs should be input. These conventions do not semantically restrict the set of extended FORTRAN programs that can be processed by the translator. It is shown that a simple finite-state grammar can be used to drive the translation. 3. To present examples of the usage and translation of non-deterministic FORTRAN algorithms. In what follows, the main features of Floyd's paper will be briefly reviewed. His example of an algorithm to solve the eight queens problem is also used to illustrate the usefulness of non-deterministic programs. This problem consists of placing eight queens on a chessboard in such a way that there is only one queen in each row, column, or diagonal of the board (see Fig. 1). A series of steps describing an algorithm to find one solution to the problem is the following: 1. consider the first column; 2. choose a row, i.e. a number from 1 to 8; 3. test if a queen can be placed in the current column and row. If so, go to Step 4; otherwise backtrack (i.e. 'undo' the actions that have been taken) to the previous point where a new choice can be made (Step 2); 4. place the queen in the current row and column position and list the row number for future display when a solution is found; 5. test if all queens have been placed. If so, report the successful solution of the problem by displaying the list prepared in Step 4 and then stopping; otherwise consider the next column and go to Step 2.
Notice that the actions that have to be 'undone' in case the test of Step 3 fails may entail removing one or more previously placed queens and erasing their row numbers in the list of Step 4. The translation of the above algorithm into a computer-like language can be easily accomplished by observing that the sum or difference of the indices of an element in a diagonal is constant. A convenient way to perform the test of Step 3, making use of this property, is to represent the chessboard by three one-dimensional arrays a, b, and c, whose bounds are respectively 1 to 8 (the number of rows), 2 to 16 (the smallest and largest values of the sum of the indices of a row and column), and -7 to 7 (the smallest and largest values of the difference of the indices of a row and column). A 'one' in a(i), b(j), and c(k) indicates that the ith row, jth left diagonal, and kth right diagonal are occupied. A 'zero' indicates that the corresponding row and diagonals are free. The test of Step 3 in the above algorithm is then translated into a test on these three array elements. Fig. 2 presents, in flowchart form, the algorithm just described to solve the eight queens problem. Notice that the terms CHOICE, FAILURE, and SUCCESS introduced in some of the boxes of Fig. 2 correspond to the expressions 'choose', 'backtrack', and 'report successful solution' used in the informal description of the algorithm. The reader may want to verify that the execution of the algorithm in Fig. 2 reaches the box SUCCESS if the variable row in the box row ← choice(8) takes the sequence of values 8, 4, 1, 3, 6, 2, 7, 5 (see Fig. 1). The objective of Floyd's paper is to provide detailed information on how a flowchart describing a non-deterministic algorithm (such as that of Fig. 2) can be translated into a standard deterministic flowchart representing a program executable by a computer. We transcribe in Fig. 3 some of Floyd's transformation rules that enable one to automatically perform such a translation.
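The three-array representation just described can be sketched as follows. This is a reconstruction from the prose, since the extract does not reproduce the paper's translated test expression; Python dicts stand in for FORTRAN arrays with bounds 1..8, 2..16, and -7..7:

```python
# Occupancy arrays for the test of Step 3.
a = {i: 0 for i in range(1, 9)}      # rows
b = {j: 0 for j in range(2, 17)}     # left diagonals: row + column
c = {k: 0 for k in range(-7, 8)}     # right diagonals: row - column

def free(row, col):
    """Step 3's test in constant time: the square is attackable only
    through its row or one of its two diagonals."""
    return a[row] == 0 and b[row + col] == 0 and c[row - col] == 0

def occupy(row, col, value):
    """Mark (value=1) or erase (value=0) a queen's row and diagonals."""
    a[row] = b[row + col] = c[row - col] = value

occupy(1, 1, 1)        # a queen at row 1, column 1
print(free(1, 5))      # False: same row
print(free(3, 3))      # False: same right diagonal (3 - 3 == 1 - 1)
print(free(2, 4))      # True: no shared row or diagonal
```

Erasing with `occupy(row, col, 0)` is exactly the 'undo' action needed when backtracking past a placed queen.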
These rules make use of three stacks M, W, R (for memory, write, and read) and a temporary variable T; an f denotes an arbitrary expression, an X an arbitrary variable, and a P an arbitrary conditional. Each row in Fig. 3 corresponds to one transformation rule: a non-deterministic command in column 2 is transformed into a deterministic forward part under column C+ and a backtracking part under column C-. The forward parts of the commands specifying CHOICE, SUCCESS, and FAILURE are directly linked to their backtracking counterparts. The following comments are helpful in understanding the transformation rules presented in rows (a) to (i) of Fig. 3:

(a) The assignment of an expression to a variable is preceded by stacking the previous value of that variable so as to recover it when backtracking. In two special cases, stacking is unnecessary: (1) the expression f is a function of X having a known inverse, and recovery of X consists of assigning this inverse to the variable; (2) the programmer knows that the previous value of the variable is not needed when backtracking and specifies stacking as unnecessary.

(b) There is no loss of information in the execution of a branch command; therefore, no special provisions are needed for backtracking.

(c) A fork, i.e. the merging of two program paths into a single path, is translated in the following manner: in the forward part of the program a marker (a 0 or a 1) is stacked in each of the merging paths, so as to enable the program to recapture the proper path when backtracking.

(d) The starting point of the forward part of a program corresponds to the halting point of its backtracking part.

(e) A CHOICE command is translated into a test which initially commands the program to take the forward direction of execution with X = f, the previous value of X being stacked as in the case of assignments.
Subsequent returns to the CHOICE command when backtracking result in re-executing the forward part with a decreased value of X until it reaches the value 0; in the latter case, the original value of X is restored, and backtracking continues to the command preceding the CHOICE command.

(f) When a SUCCESS command is reached, the contents of stack W (used in the write command) are displayed and one of two actions may take place depending on the programmer's option: the program either stops or proceeds in its backtracking mode. The latter option is useful when one wishes to compute all solutions that can be obtained by a non-deterministic algorithm.

(g) The forward part of a program is directly linked to its backtracking part.

(h) The value of an output variable is stacked in W in the forward part and unstacked when backtracking.

(i) Two pointers 'max' and 'min' are introduced to keep track of the last element actually read (max) and the element that should be considered when the next non-deterministic read command is activated (min). The transformation rule as presented in Fig. 3 is a corrected version of the one given by Floyd. His transformation rule for the read command contains a flaw which can best be shown through an example. Suppose the following sequence of commands is executed:

in ACM Computing Reviews 16(02) February 1975

in SIGPLAN Notices 13(11) Nov 1978

Planners are specialized production-system programs. In a production system the control responsibility for rule selection may be contained within the rules, lie outside the rules, or be some combination of the two. Backtrack-control information in Prolog, for example, is found in the clauses. A lack of domain-specific control knowledge can lead to inefficiency in planning.
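Returning to Floyd's transformation rules above, their combined effect can be illustrated by hand-translating a tiny non-deterministic program into deterministic form. The sketch below is in Python rather than FORTRAN and is written in the spirit of rules (a), (b), (e), (f), and (h) of Fig. 3, not as Floyd's exact output; M is the memory stack and W the write stack:

```python
def run():
    """Deterministic translation of the non-deterministic program:
         x <- CHOICE(3); y <- CHOICE(3);
         if x + y /= 4 then FAILURE;
         WRITE x; WRITE y; SUCCESS (continue, to find all solutions)."""
    M, W, solutions = [], [], []
    x = y = 0
    pc, forward = 0, True            # program counter and direction flag
    while True:
        if forward:
            if pc == 0:   M.append(x); x = 3; pc = 1    # CHOICE x: rule (e)
            elif pc == 1: M.append(y); y = 3; pc = 2    # CHOICE y: rule (e)
            elif pc == 2:                               # branch: rule (b)
                if x + y == 4: pc = 3
                else: forward = False; pc = 1           # FAILURE: go backward
            elif pc == 3: W.append(x); pc = 4           # WRITE x: rule (h)
            elif pc == 4: W.append(y); pc = 5           # WRITE y: rule (h)
            else:                                       # SUCCESS: rule (f)
                solutions.append(tuple(W))
                forward = False; pc = 4                 # keep backtracking
        else:
            if pc == 0:                                 # backward part of CHOICE x
                x -= 1
                if x > 0: forward = True; pc = 1        # retry with smaller x
                else: x = M.pop(); return solutions     # exhausted: restore, halt
            elif pc == 1:                               # backward part of CHOICE y
                y -= 1
                if y > 0: forward = True; pc = 2
                else: y = M.pop(); pc = 0
            elif pc == 2: pc = 1                        # branch: nothing to undo
            elif pc == 3: W.pop(); pc = 2               # un-write x
            elif pc == 4: W.pop(); pc = 3               # un-write y

print(run())  # [(3, 1), (2, 2), (1, 3)]
```

Because SUCCESS here re-enters the backtracking mode instead of stopping, the program enumerates every solution, as option (f) describes.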
Early examples of control information incorporated into the rules are PLANNER, QA4 and CONNIVER. The STRIPS model was one of the first applications of forward production. In the STRIPS model the world consisted of blocks that could be moved around by a simplified robot hand. Blocks could be cleared, placed on top of each other, and the robot hand could be emptied. Several precondition formulas (used as implicational rules) constrained production-rule firing. A database described the world, to which additional structures were appended. Forward production rules modeled robot actions which altered a global database. The STRIPS approach generates plans in advance, but errors in execution can cause plan failure. Improvement is obtained by applying partially executed and previous plans, via plan splicing or dynamic replanning. These techniques provide a recovery scheme for execution errors. Domain-specific knowledge must be incorporated in any planner that plans for real-world problems on a local level.

in SIGPLAN Notices 13(11) Nov 1978
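A STRIPS-style forward production rule of the kind described above can be sketched as a precondition list plus delete and add sets applied to the world database. The block-world predicates below are illustrative assumptions, not STRIPS's actual notation:

```python
def apply_op(database, preconds, delete, add):
    """Fire a forward production rule STRIPS-style: check the
    preconditions against the database, then delete and add assertions."""
    if not all(p in database for p in preconds):
        return None                      # rule not applicable
    return (database - delete) | add

# Illustrative action: pick up block b from block a with an empty hand.
world = {('on', 'b', 'a'), ('clear', 'b'), ('handempty',)}
result = apply_op(
    world,
    preconds=[('on', 'b', 'a'), ('clear', 'b'), ('handempty',)],
    delete={('on', 'b', 'a'), ('handempty',)},
    add={('holding', 'b'), ('clear', 'a')},
)
print(sorted(result))  # [('clear', 'a'), ('clear', 'b'), ('holding', 'b')]
```

A plan is then a sequence of such rule firings; plan splicing or dynamic replanning amounts to recomputing part of that sequence when the database no longer satisfies the next rule's preconditions.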