DL*(ID:3806/dl:001)


for Description Language, extended

Erotetic (question-answering) language, drawing heavily on the notions of LEAP and the GPS.
(Banerji and Ernst were Comfort's "gurus" according to the preface to his book, and there is an interesting insight into the intellectual prehistory of the work in Banerji's review of Ernst's account of the GPS; see under GPS)
Comfort's book contains a critique of PLANNER, CONNIVER, and QA4.


Related languages
DL => DL*   Evolution of
LEAP => DL*   Incorporated features of

References:
  • Comfort, John Craig "A Flexible Efficient Computer System To Answer Human Questions - The DL*-Programming Language For Artificial Intelligence Applications" Interdisciplinary Systems Research - ISR 8, Birkhäuser 1975 Abstract: Question answering computer programs may be divided into those which use a natural language, and spend much of their computational power in understanding the stated question, and those which use an artificial language, sacrificing a certain immediacy of use for more pertinent application of available computing power. This book describes a computer language, and a supporting computer system, using the second of these approaches.
    This language, DL*, based upon the predicate calculus, permits statements creating and modifying the data base, requesting answers to questions asked with respect to the data base, and the construction of entities derived from the data base. The data base itself contains both explicit data entries, in the form of LEAP-type associations and LISP-type lists, and implicit data entries, in the form of definitions or procedures written in DL*.
    The DL* system is quite efficient, in that to answer a given question, a quite small segment of the data base need be examined. As refinements were added to the DL* system in the interest of efficiency, this system evolved from a straightforward retrieval system to something resembling a moderately sophisticated programming language. Accordingly, a discussion of some of the modern programming languages used in artificial intelligence research is included, as is a comparison of some of the features of these languages and of DL*. The book also contains a rather detailed example written in DL* and a brief discussion of context relative question answering and parallelism in DL*. Extract: Preface
    Preface
    In the context of computer science, question answering may be roughly defined as that body of knowledge concerned with inferential retrieval of information contained in a data base. In this book I propose, after some analysis of previous work, some desirable criteria for question answering systems, and a particular system satisfying some of these criteria.
    The book is divided, unlike Gaul, into five parts and two appendices. Following the introduction, in which the proposed language (called DL*) is mentioned, is a fairly complete DL* operation manual, and a moderately extended example problem. The fourth part compares and contrasts DL* to some of the programming languages which have been used in artificial intelligence research, while the fifth suggests directions for further development. The first appendix lists the data base alluded to in the third chapter, and the second is actual output from a simple run. I was greatly assisted in the undertaking of this work by my guru, Professor Ranan B. Banerji, late of Case Western Reserve University, and my deputy guru, Professor George Ernst, also of Case Western Reserve University, and on another plane, by the systems group associated with the Case PDP-10 system, especially James Calvin and Alan Rosenfeld, and by the Air Force, which partially supported this work under contract number AFOSR-71-2110.
    John Craig Comfort
    Extract: Introduction
    Introduction
    The realm of interactive systems which permit users to ask questions regarding a data base may be analysed with respect to several criteria. Most important among these are the feasible size of the data bases, the complexity of questions to be asked, and the complexity of inference. In light of these criteria, three reasonably distinct classes of such systems may be discerned. Information retrieval systems are usually characterized by the ability to refer to a quite large data base, by the limited complexity of the types of questions which may be asked, and by the similarly limited complexity of inference which may be requested. Theorem proving systems, on the other hand, usually permit only a relatively small data base, and although quite complex questions are permitted, the common theorem to be proved will be rather simple in form (the complexity being hidden in other theorems and in definitions already a part of the data base). The complexity of inference is quite high, usually resulting in a few questions being answered in a large amount of time. The vague area referred to as "question answering" exists somewhere near the center of the aforementioned realm, as the data bases used are usually of some size, although not extremely large; the questions asked may be quite complex in form, but the inference rules and deduction modes are (usually) strongly tied to the procedures for searching through the data base, unlike the more powerful substitution methods (resolution and its successors) used in theorem proving.
    Question answering systems may be further categorized by the input language used. One class of languages accepts a subset of English (or another natural language) as its command language, and the system usually spends much more time in trying to understand the question than in discovering the answer. (For an outstanding example of a natural language question answering system, and for a review of earlier work, see Winograd [1].) A further drawback of a natural language system is that natural language itself is often an extremely inefficient way to describe problems to be solved. The alternative approach sacrifices the immediacy of being able to "talk" to the computer, in return for the compactness and power of some other input medium. In this research, the second approach was chosen, together with the mitigating goal of providing a language/interpretation that, even if unnatural, would not be too unfamiliar to the majority of the (anticipated) users. A second goal, of almost equal importance, was that the language design/implementation should limit the number of data base accesses as much as possible. The implementation was to be, initially, in a high level language, so blazing times would not be achieved, but this secondary measure of efficiency could still be minimized.
    It was felt that the user should be permitted to store information in the data base both explicitly and implicitly, the latter in the form of procedure-like entities. The mode of operation was intended to be interactive and sentential. The language to be created was explicitly not to be a programming language in the usual sense of the term, and there was to be no sophisticated pattern matching capability (more on this subsequently).
    A major weakness of many previous non-natural-language question answering systems was the extremely awkward or otherwise annoying form in which questions to the processor needed to be posed. In partial avoidance of this flaw, the language being designed was to be based upon the first order predicate calculus, a quite familiar mode of communication. Previous research by Banerji [2] had resulted in the development of a language, called the Description Language (DL), which was designed for the formulation of concepts and the description of patterns. This language became the basis for much of the syntactic structure of the new language (then christened DL*, to avoid confusion with the earlier language).
    Extract: The Structure of the DL* System
    The Structure of the DL* System
    The DL* system, as implemented on the PDP-10/TENEX system at Case Western Reserve University, consists of two major and several subsidiary phases. The two major phases are coroutines: a compiler, which translates the source language into a machine independent intermediate language, and an interpreter, which evaluates this intermediate language. As will be seen, it is difficult to decide at the time of compilation whether the value of a given entity is a fact resident in the data base, or the name of an entity requiring further compilation. Thus the compiler may be recalled several times in the course of a computation. The minor phases of the system include input, output, editing, and debugging routines. The virtual memory of the TENEX system considerably simplifies the problems associated with the maintenance of the data base.
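    In more concrete (if anachronistic) terms, the compile/evaluate interplay might be sketched as follows. This is a minimal Python sketch, not Comfort's implementation; every name in it (SOURCE, FACTS, compile_entity, and so on) is hypothetical.

        # A minimal sketch of the compile/evaluate coroutine arrangement: when
        # the interpreter meets a name whose value is uncompiled source rather
        # than a resident fact, the compiler is re-entered in mid-evaluation.
        SOURCE = {"Grandparent": "exist Y: Parent(in.1) = Y ? Parent(Y) = in.2"}
        FACTS = {("Parent", "Tom"): "Ann"}
        COMPILED = {}

        def compile_entity(name):
            # stand-in "compilation": wrap the source text as intermediate code
            COMPILED[name] = ("intermediate", SOURCE[name])
            return COMPILED[name]

        def evaluate(entity):
            if entity in FACTS:                # a fact resident in the data base
                return FACTS[entity]
            if entity in COMPILED:             # already translated earlier
                return COMPILED[entity]
            if entity in SOURCE:               # needs further compilation:
                return compile_entity(entity)  # the compiler is recalled here
            return None

        print(evaluate(("Parent", "Tom")))     # 'Ann'
        print(evaluate("Grandparent"))         # ('intermediate', 'exist Y: ...')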
    Extract: A DL* User's Manual: Introduction
    A DL* User's Manual: Introduction
    DL* is a language designed to facilitate the asking of moderately difficult questions about moderately large data bases, and to answer these questions with a minimal search through the data base. The language is designed for interactive use, and provides a somewhat rudimentary means for handling data bases on mass storage devices.
    The language is very similar in form to the lower predicate calculus, with modifications (in the interest of efficiency) made in the interpretation of certain of the operators.
    The statements of the language may be divided into two general classes: those statements which modify the data base, and those which do not. The former class consists of statements creating lists, definitions, and objects, and is discussed in sections 2, 4, and 5. In the latter class are questions about the status of the data base, or of the entities composing it, or requests for the construction of entities from the data base having designated properties. These are discussed in sections 3, 4, and 5. In addition, there are control statements, which permit input and output operations, removal of entities from the data base, and certain other housekeeping operations. These are enumerated in section 6. A listing of the messages with which the processor may respond to incorrect input is given in section 7, and some details of the implementation, together with a few caveats to the user, are given in section 8.
    Extract: DL* and Programming Languages
    DL* and Programming Languages
    DL* was initially designed to be a "super" information retrieval system: super, at least, in the sense of permitting the asking of quite difficult questions, and answering these questions with a near minimal amount of searching through the data base. As the system evolved, it became apparent that features incorporated into the language to make the implementation semantics agree with the semantics of the predicate calculus interpreted over finite domains, together with constructions introduced to ease the way for potential users, were providing the DL* language with a good deal more power than was originally foreseen. As an example, the connective "or" was originally simply banned from appearing in the search scope of a quantifier. However, this also prevented recursive definitions from appearing in such search scopes, and thus either undue computational time or extreme programming virtuosity was required to compute quantities like partial transitive closures. Permitting the disjunction operator within such contexts required the addition of a backtracking mechanism. This, in turn, resulted in the distressingly frequent generation of extremely temporally expensive extraneous searches, and resulted in the elevation of the "cons1" quantifier to its special place in the language. Certain other constructs were initially banned to simplify the implementation, then later restored in the interest of the prior aesthetic, resulting in a further augmentation of the implementation semantics. At some indeterminate point about midway through the research, it became apparent that the DL* language had sufficiently expanded that its peers were the then emerging group of programming languages intended primarily for research in artificial intelligence, rather than the much more specialized systems which had previously been used for question answering.
    Extract: DL* as a Programming Language
    DL* as a Programming Language
    Many of the types of programs which have been written in the programming languages discussed below, especially those programs which require significant search through an essentially static data base, may be rewritten in DL* with some advantage: usually either in program clarity, efficiency of search, or ease of programming. DL* was not originally intended as a programming language. To examine its limitations in this context, it would seem a reasonable endeavor to examine some of the properties of these languages.
    Programming languages must permit their users three basic macro-operations: the user must be able easily to construct complex command and control structures from simpler such structures, these structures must be able to modify and respond to the local environment of the program (be that environment a data base, a collection of variables, or whatever), and provision must be made for the program to communicate with entities external to the program. It is in the first two areas that DL* is at least partially deficient. As was shown in chapter three, statements *1) through *4), it is often necessary to enter a series of statements to produce a desired effect. It would certainly be more civilized to permit the user to specify a list of DL* statements and something like the APL "unquote" operator to permit their sequential execution. Something almost on this order can currently be "faked": for example, if
    "Exec def exist X«exist Y» in.l.Y = X  ?  tun X,in.3J isin in.2.Y",
    and if "Arrghs is a list of arguments and "Deefs" a list of definitions, then
    "cons X* [Arrghs,Deefs,X] isin Exec"
    would cause the application of each element of "Arrghs" to the corresponding element of "Deefs", and those definitions which were found true could deposit output in the variable "X" (which has been passed in as "in.3"). As "Deefs" and "Arrghs" may both contain the symbol "Exec", quite complex control paths may be generated. However, it does not seem especially reasonable (or, at least, humane) to force the user into rather elaborate circumlocutions to perform (apparently) natural acts.
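    In more conventional terms, the "Exec" trick pairs each argument with the definition at the same index. A rough Python analogue, with invented names throughout:

        # Rough analogue of the "Exec" circumlocution: each element of Arrghs
        # is applied to the definition at the same index of Deefs, and a
        # successful definition may deposit output in the shared slot,
        # mirroring the role of "in.3" above.
        def exec_def(arrghs, deefs, out):
            # mirrors: exist X: exist Y: in.1.Y = X ? [X,in.3] isin in.2.Y
            for x, d in zip(arrghs, deefs):
                if d(x, out):                  # definition found true
                    return True
            return False

        deefs = [lambda x, out: False,                       # first fails
                 lambda x, out: out.append(x * 2) or True]   # second deposits
        out = []
        print(exec_def([10, 21], deefs, out), out)           # True [42]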
    The above mentioned difficulty may represent a simple inconvenience. More serious, however, is the present restriction in DL* that all permanent modifications to the data base be made at the top (statement) level. This convention was originally included in the language to insure that some searches did not die unexpectedly due to a dangling pointer to a recently deleted data base entity. (Such disasters may happen in SAIL, at least in the 6/73 version.) The convention does prohibit the previously mentioned four lines from chapter 3 from being combined in the manner suggested above. There are again ways around this, such as placing all the elements of the data base on one (or perhaps several) large lists, and requiring that definitions in which it is desired to change the data base enter their proposed additions and deletions into a holding list created through a standard set of parameters. A pair of further definitions could then be used to actually perform the modification, at the same time checking for difficulties in the lists. This same type of mechanism, incidentally, could be used to simulate a context mechanism. Much more reasonable, however, is the addition of one bit to each triple, which would be energized if the triple were deleted at any level but the top. If a marked triple were encountered in a search, it would be passed over. When the triple was marked, it would also be indicated on the garbage collection list, to be deleted (if it were not shared) when the execution of the statement had terminated. In the current implementation there are no spare bits per triple, thus requiring an additional word per triple. This leaves enough bits remaining to specify at least two additional links and a few more bits for descriptor information, so that a context mechanism could be added without overmuch difficulty.
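    The proposed mark bit amounts to what would now be called tombstoning. A minimal Python sketch, assuming one spare mark bit per triple; the class and function names are illustrative, not Comfort's:

        # Deferred deletion: below the top level a triple is only marked;
        # searches pass marked triples over, and real reclamation waits
        # until the statement finishes, so no dangling pointers can arise.
        class Triple:
            def __init__(self, attr, obj, val):
                self.attr, self.obj, self.val = attr, obj, val
                self.marked = False                 # the proposed extra bit

        store, garbage_list = [], []

        def delete(t, at_top_level):
            if at_top_level:
                store.remove(t)                     # immediate removal is safe
            else:
                t.marked = True                     # mark only
                garbage_list.append(t)              # note for later collection

        def search(attr):
            return [t for t in store if t.attr == attr and not t.marked]

        def end_of_statement():
            for t in garbage_list:                  # now actually reclaim
                store.remove(t)
            garbage_list.clear()

        t = Triple("Color", "Sky", "Blue")
        store.append(t)
        delete(t, at_top_level=False)
        print(search("Color"))                      # []: marked triple skipped
        end_of_statement()
        print(len(store))                           # 0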
    Extract: DL* and the Others
    DL* and the Others
    Four of the more common programming languages used in areas related to artificial intelligence research are SAIL [4], PLANNER [5], CONNIVER [6], and QA-4 [7]/QLISP [8]. The first of these is based upon ALGOL-60, and includes the associative store mechanism of LEAP [9]; the other three are LISP based languages, incorporating various degrees of diffusion of control structure. As these languages have recently been analysed in Bobrow and Raphael [10], it would seem sufficient to discuss those features of the languages which are the most closely or distantly related to those of DL*. Following Bobrow and Raphael, the discussion will be divided under the general headings of data types and memory use, control flow, pattern matching, and deduction methods. In each section, the approach taken in DL* will be presented together with comments upon the methods used in the other languages.
    2.1.  Data Types and Memory Use
    In the languages considered, there is a plethora of data types, both assertive and inferential. The assertive types include associations (objects, property lists), lists (tuples), sets, and bags (which are similar to sets, but permit repetitions of elements); the inferential types include definitions, theorems of various kinds, functions, and procedures. The existence of all of these types in one language may be a convenience, but is obviously not necessary.
    The assertive data types permitted in DL* are lists and (essentially) LEAP-type objects. The data base itself consists of two large lists, a much smaller hashing area, and a few auxiliary control variables. The two large lists are the dictionary, containing descriptor and linking information, and available space, each entry of which contains three directed terms, each of which has a dictionary address and a pointer. The addresses are those of the attribute, object, and value in a triple associated with a DL* object, and those of the index (item number), list name, and item in a triple associated with a list. Each dictionary entry acts as the head of four distinct lists, one for each possible context in which the entity named may appear: attribute/index, object/list, value, and item. As an entity may be at most one of an object, list, or definition, the second list may serve both functions. Similarly, as integers may never be attributes and always are list indexes, the first list head may serve two functions. Since the same entity may easily appear as both a value and (in another triple) a list item, the two lists are not redundant. These lists are chained through available space, using the pointers associated with the directed terms composing the triple. This list structure results in extremely efficient answering of questions similar to "cons X: exist Y: Properties(X) = Y ? ..." and somewhat less efficient answering of questions such as "cons X: Properties(X) = Seldom ? ...".
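    The chained indexing can be modeled in a few lines. An illustrative Python sketch (an in-memory model, not the PDP-10 word layout; the data are invented):

        # Each entity heads one chain per context in which it may occur, so a
        # search touches only the triples actually involving that entity.
        from collections import defaultdict

        as_attr, as_obj, as_val = (defaultdict(list) for _ in range(3))

        def assert_triple(attr, obj, val):
            t = (attr, obj, val)
            as_attr[attr].append(t)     # attribute/index chain
            as_obj[obj].append(t)       # object/list chain
            as_val[val].append(t)       # value chain

        assert_triple("Properties", "Iron", "Magnetic")
        assert_triple("Properties", "Gold", "Seldom")
        assert_triple("Weight", "Gold", "Heavy")

        # "cons X: exist Y: Properties(X) = Y ? ..." walks one short chain:
        print([o for (_, o, _) in as_attr["Properties"]])   # ['Iron', 'Gold']
        # "cons X: Properties(X) = Seldom ? ..." must filter a second chain:
        print([o for (a, o, _) in as_val["Seldom"] if a == "Properties"])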
    When local entities are generated, as in "L: [A,[B]]", the name "*local-'N", where 'N is an ever-increasing integer value, is entered in the dictionary, and the address generated is used in the appropriate portion of the triples generated. Local entities are not shared (except in one special case). This demands that the expression used to determine the search space of a quantifier must be atomic, at least if the expected result is to be obtained: "cons X: exist Y: X.Y = [A] ? X.Y+1 = [B]" would result in the creation of a list containing nothing. An alternative approach is taken in QLISP: any entity to be placed in the data base is systematically compared to entities of similar description currently in the data base; if a match is found, the resident description is shared. If this storage method had been used in DL*, the above mentioned search would have produced one item for each pattern of the form "... [A], [B], ...". The major drawback of a QA-4 type system is the increased time necessary for entry of an entity into the data base. Whether this is a significant consideration will depend upon the particular application.
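    The QLISP-style alternative is essentially what is now called hash-consing. A minimal Python sketch of the contrast, with invented names:

        # Before an entity enters the data base it is compared against
        # resident entities and shared on a match; DL* instead mints a
        # fresh "*local-N" name for each occurrence.
        canonical = {}

        def intern(entity):
            key = repr(entity)              # stand-in for structural comparison
            if key not in canonical:
                canonical[key] = entity     # first copy becomes resident
            return canonical[key]           # later copies share it

        a = intern(("A",))
        b = intern(("A",))
        print(a is b)   # True: both occurrences of [A] denote one shared
                        # entity, so a search for something equal to [A]
                        # finds them all; under DL*'s scheme it would not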
    2.2.  Control Flow
    The control structure in DL* is completely hierarchical, and is determined, obviously, by the quantifier structure. Because of the backtracking, and the necessity of not prematurely terminating unfinished quantified searches, the processor must be capable of at least internally suspending and resuming process-like segments of DL* statements. Process control information is contained in several virtual stacks and a major virtual list structure, all of which share the available space with the data base.
    The backtracking mechanism is similar to that of PLANNER, though somewhat easier to turn off, and considerably easier to follow. In reaction to the often excessive searches generated by PLANNER programs, CONNIVER was created to permit the user full control over the search process, with the result of causing the programmer to spend a great deal of time on bookkeeping operations.
    DL* has neither a context mechanism nor an externally available process structure, although, as mentioned above, the internal basis for such a structure is present. A proposed addition to the language would provide another process-like capability. Let "sor" and "sand" (simultaneous or and simultaneous and, respectively) be defined to time-slice between the respective disjuncts/conjuncts, terminating when an appropriate subset of these "juncts" receives truth values. This would be of use principally when both juncts were of a high degree of complexity (and is a proposed topic for further research). These connectives could be implemented easily, except for the difficulty of interpreting a simultaneous connective appearing in a search scope. Initially, at least, these connectives would probably be replaced (surreptitiously, by the processor) with non-simultaneous connectives.
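    Using present-day generators as the suspendable processes the text calls for, "sor" might be sketched as follows; the names and the one-yield-per-slice granularity are assumptions of this sketch:

        # The juncts are advanced a slice at a time, and evaluation ends as
        # soon as one of them delivers a truth value that settles the "sor".
        def sor(*juncts):
            active = list(juncts)       # each junct yields None until it knows
            while active:
                for j in list(active):
                    verdict = next(j)
                    if verdict is None:
                        continue        # still searching: next junct's turn
                    if verdict:
                        return True     # one true disjunct settles "sor"
                    active.remove(j)    # a false junct simply drops out
            return False

        def junct(answer, steps):
            for _ in range(steps):
                yield None              # simulated unit of search work
            yield answer

        print(sor(junct(False, 50), junct(True, 2)))   # True, after ~3 slices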
    A more remote addition is the simultaneous quantifier. The effect of "sexist X: 'P(X) ? 'Q(X)" would be the construction of a list of all values of "X" satisfying "'P", and the initialization of a separate process "'Q(X)" for each value of "X" computed in the search scope. This would permit, in effect, a breadth-first search.
    2.3.  Pattern Matching
    DL* provides a low level of pattern matching, much on the order of that provided by SAIL: that is, templates of the form "A(B) = C" and "A.B = C", where one or two of "A", "B", and "C" are variables, may be matched against the data base. The indication of the binding of variables is also the same as in SAIL, in that the first occurrence of the variable within a search scope/foreach loop causes the variable to be bound; later appearances of the variable are used only for testing. DL* has one major advantage over SAIL, in that disjunctions are not permitted within a SAIL foreach loop.
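    The SAIL-like template match, with binding on first occurrence and testing thereafter, can be sketched briefly; the triple data and the "?" variable prefix are inventions of this sketch:

        # Positions in a template are constants or "?name" variables; a
        # variable's first occurrence binds it, later occurrences only test.
        triples = [("L", 1, "List"), ("L", 2, "Last"), ("M", 1, "List")]

        def matches(template):
            for triple in triples:
                bindings = {}
                for slot, actual in zip(template, triple):
                    if isinstance(slot, str) and slot.startswith("?"):
                        if slot in bindings:           # later occurrence: test
                            if bindings[slot] != actual:
                                break
                        else:
                            bindings[slot] = actual    # first occurrence: bind
                    elif slot != actual:
                        break                          # constant failed
                else:
                    yield bindings

        # all X such that X.1 = List:
        print([b["?X"] for b in matches(("?X", 1, "List"))])   # ['L', 'M']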
    Other, more restrictive patterns may be matched, provided that these patterns can be stated in such a way as to provide a binding mechanism. For example,
    "cons X: exist Y: Y.I = List ? exist Z* Y.Z = Last ? X = [[ Y from 1 to Z-1 ], [Y from Z+l to inf] ]"
    will match patterns looking like "[List,'R1,Last,'R2]", returning a list of entities looking like "[['R1],['R2]]". Were the data base to contain
    "L1: [List,Lost,Last,Lust,Lust,Lust]"

    "L2: [List,Last]"
    "L3: [Lust,List,Last]",
    the result of evaluating the above quantified statement would be the list
    "[[[Lost],[Lust,Lust,Lust] ], [[], []] ]". QLISP , as well as PLANNER / CONNIVER, provides a mechanism by which the user may specify the time of binding or changing of binding of variables. "_X" would indicate that "X" is to be bound, "$X" indicates that "X" is to use a previously bound value, and "?X" is to be bound only if it is not currently bound. The QLlSP function "(QLAMBDA (SET *X -Y SAM )) (SET $X )" , when invoked, will attempt to find a three element set containing "SAM", will bind the variable "X" to on« of the elements not "SAM", "Y" to the other, set up a failure point to permit backtracking to bind the variables in the other order (if needed) , and return as its value the set containing the value to which "X" was bound.
    QLISP also permits so-called segment variables, which may be used to match the remainder of a set, bag, or tuple after the fixed parts of the pattern have been matched, and the individual unbound variables have been assigned values. These variables are designated "←←X", "$$X", and "??X", with binding conventions the same as for the individual variables prefixed by the same symbols. The action of "(QLAMBDA (SET ←X SAM ←←Y)) (SET $$Y)" would match any set containing "SAM", choose one of the elements to bind to "X", set up a failure point, and return a set containing the remainder of the original set. Certain difficulties may arise with segment variables, especially if the entity being matched is a pattern containing another segment variable.
    A further example of pattern matching may be found in the next section.
    2.4.  Deduction Mechanisms
    The three LISP based languages are goal directed: that is, a desired goal is stated, and the data base is searched to see if the goal has been asserted. If not, the base is again searched for a theorem (procedure, function) whose conclusion resembles the desired goal. After an appropriate binding of variables, the hypotheses of the theorem are used to generate subgoals. Any changes made to the data base during the evaluation of the theorem must be marked so that they may be undone if the current deduction path should prove faulty. The process continues until all the goals generated along one path are satisfied, or no more theorems are applicable, or some external constraint is satisfied (such as a "demon" being awakened, or another kind of interrupt occurring).
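    The goal-directed loop, with its trail of undoable data-base changes, compresses into a short Python sketch; the facts and rules below are invented for illustration:

        # Consult the data base, then rules whose conclusion matches the goal,
        # with the hypotheses becoming subgoals; additions made along a path
        # are trailed so that a faulty path can be undone.
        facts = {("famous", "plato")}
        rules = [("happy", ["rich"]),        # happy(x) follows from rich(x)
                 ("rich",  ["famous"])]      # rich(x) follows from famous(x)

        def prove(goal, trail):
            if goal in facts:
                return True
            pred, arg = goal
            for conclusion, hypotheses in rules:
                if conclusion != pred:
                    continue
                mark = len(trail)
                if all(prove((h, arg), trail) for h in hypotheses):
                    facts.add(goal)          # record the deduced fact ...
                    trail.append(goal)       # ... and trail it for undoing
                    return True
                for g in trail[mark:]:       # path failed: undo its additions
                    facts.discard(g)
                del trail[mark:]
            return False

        print(prove(("happy", "plato"), []))   # True, via rich <- famous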
    In PLANNER, there are three classes of theorems, which are similar in appearance, but differ in the time at which they are applied to the data base. The most common type is the "consequent theorem", which is used directly in the goal finding procedure sketched above, and is invoked at the request for the satisfaction of a goal. "Antecedent theorems" are invoked when they are added to the data base, and also whenever an entity is added to the data base which matches the hypotheses of the theorem. As the application of antecedent theorems usually results in the addition of entities to the data base, still further antecedent theorems may be invoked. The chief use of antecedent theorems is to reduce the amount of search taken by subsequently invoked consequent theorems. If the data base primitive entities were the simple (one step) connection table of a graph, and the transitive closure were referred to frequently, it could be computed either piecemeal, as needed, in a consequent theorem, probably resulting in redundant computation and significant temporal expense, or at one time, by an antecedent theorem, with a probably significant spatial expense. Apparently antecedent theorems must be used quite cautiously, or an extremely cluttered data base could result. The final type of theorem permitted to PLANNER users is the "erase theorem", which is triggered on the deletion of entities from the data base, much as antecedent theorems are triggered upon additions. These theorems may then be used to undo the damage created by each other.
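    The transitive-closure trade-off can be shown in miniature: an antecedent-style rule pays in space at insertion time, while a consequent-style rule pays in time at query time. A purely illustrative Python sketch:

        edges, closure = set(), set()

        def add_edge_antecedent(a, b):          # triggered on each assertion
            edges.add((a, b))
            closure.add((a, b))
            while True:                         # propagate to a fixpoint
                new = {(x, w) for (x, y) in closure
                              for (z, w) in closure if y == z}
                if new <= closure:
                    break
                closure |= new

        def reachable_consequent(a, b):         # recompute on demand instead
            seen, frontier = set(), {a}
            while frontier:
                seen |= frontier
                frontier = {w for (v, w) in edges if v in frontier} - seen
            return b in seen

        add_edge_antecedent("a", "b")
        add_edge_antecedent("b", "c")
        print(("a", "c") in closure)            # True: precomputed and stored
        print(reachable_consequent("a", "c"))   # True: searched out per query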
    None of these kinds of theorems is precisely analogous to DL* definitions, though if something similar to a consequent theorem is desired, the following may suffice:
    "Match def exist Y : Y isin Match!set and (all Z: Z isin Pattern(Y) ? Z = Any or Z isin in.I ) ? in.I isin Definition(Y)
    This definition, when invoked, will cause the application of all definitions named in "Matchset" whose associated patterns, after deletion of appearances of "Any", are contained as a subset of the first parameter, to the first parameter. Somewhat more sophisticated is
    "Matcher def exist Y: Y isin Matchlset and (exist U: U = (cons X« exist Z« exist Wt Pattern(Y).rt=Z ? Z= Any and X = in.2.W or Z = in.2.W ) and U.O = Pattern (Y).O ? CU,in.l] isin Definition (Y))",
    as this definition restricts the data to be matched to be a list of the same length as the pattern, and only those parts of the data not used in the match are passed as parameters to the definition, together with a variable in which the successfully invoked definitions may deposit output information. As an example, suppose "Foolist" were a list of entities. Then the effect of
    "consl X« exist Y» Y  isin  Foolist  ?  [X,Y] isin Matcher"
    would be the attempted matching of successive items from "Foolist" with the patterns of "Matchset", the invoking of the definitions for which the match was successful, and the returning of a value in "X" by the first definition to take the value true. As the invoked definitions themselves may be defined in terms of "Matcher" or something similar, a very general directed search might ensue. The elements of "Foolist" have the function of alternative goals, and may be arranged by the user in order of decreasing desirability.
    Antecedent and erase theorems have no direct counterpart in DL*, as there is currently no "demon" mechanism which may be set to watch certain parts of the data base. If one accepts the fact that certain definitions may need to be re-invoked after modification of the data base, however, the options present in PLANNER with regard to deciding whether computation is to be distributed or done at one time are certainly present.
    As an alternative to the antecedent and erase theorems of PLANNER, QLISP and CONNIVER have implemented a context mechanism, in which the data base is considered to be a tree structure indexed by special "context" variables. Expressions may be evaluated with respect to any accessible context, and the mechanism of assumption may be implemented by "growing" a new context identical to the current one, and adding the assumptions to the data base of the new context. If the deduction proves faulty, restoring the context to the previous value causes the changes made and the assumptions to go away. SAIL has a much weaker context mechanism, in which the user must specify lists of variables to be saved or restored.
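    A context tree of this kind can be sketched directly; the class shape below is an assumption of this sketch, not the QLISP or CONNIVER implementation:

        # A child context layers assumptions over its parent, and abandoning
        # the child makes the assumed changes go away.
        class Context:
            def __init__(self, parent=None):
                self.parent = parent
                self.added, self.removed = set(), set()
            def assert_fact(self, fact):
                self.added.add(fact)
            def retract(self, fact):
                self.removed.add(fact)
            def holds(self, fact):          # look outward through ancestors
                if fact in self.removed:
                    return False
                if fact in self.added:
                    return True
                return self.parent is not None and self.parent.holds(fact)
            def sprout(self):               # "grow" a context for an assumption
                return Context(self)

        root = Context()
        root.assert_fact(("on", "a", "b"))
        trial = root.sprout()
        trial.assert_fact(("on", "b", "c"))   # visible only in the child
        print(trial.holds(("on", "b", "c")))  # True
        print(root.holds(("on", "b", "c")))   # False: dropping trial undoes it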
    As mentioned previously, it would not be overly difficult to add a context mechanism to DL*, although no such mechanism currently exists.

    Extract: Conclusions
    Conclusions
    The implementation of a sentential search directed language modeled on the first order predicate calculus must, if it is to preserve the common meaning of the quantifiers and connectives of the calculus, either restrict its admissible syntax considerably (to something like the separable statements of chapter two, containing no disjunctions in a search determining section of a statement), have no sophisticated search limiting mechanism, answering questions by a search (probably exhaustive) over a large segment of the data base, or internally contain one of the more sophisticated features of modern programming languages, the ability to suspend and resume processes. If disjunctions are permitted in search determining parts of statements, then a related feature, automatic backtracking, must also be added, and in addition, some means to terminate the backtracking when it is no longer desired. This means may be either manual (as in DL*) or extremely sophisticated and automatic (further research is suggested). Thus, the originally stated goal of this research, namely, the creation and implementation of an efficient search directed question answering system, became essentially the task of creating and implementing a moderately sophisticated programming language. Many of the originally anticipated difficulties, for instance, when to terminate a quantified search, and in general, when to advance an inner quantified search, were subsumed by the problem of determining the process structure of a statement, and the resulting run time control graph of the statement.
    The language/system meets its original design goals; questions are answered with an almost minimal number of accesses to the data base. Although available space is shared with several virtual stacks used for loop checking and backtracking purposes, the overhead from these structures rarely exceeds ten percent in space and five percent in time over the space/time required to evaluate the statement. Another five percent of operating time is spent in garbage collection.
    The actual time taken for the evaluation of a statement is not extremely small, because the implementation was done in a high level language not blessed with an available optimizing compiler, and, further, because of a relative plethora of debugging and trace features built into the implementation.
    The major area of dissatisfaction with the current DL* system, aside from run time, is in the area of the user interface. The input, output, editing, diagnostic, and debugging tools provided are primitive. A production version of DL* would certainly require more flexibility and power in these areas, and would undoubtedly be implemented using either an assembly language or a high level language with a good optimizing compiler.
    The language itself is still evolving. Further additions are under consideration, the most important of which are:
    1) The addition of a pseudo-process structure to DL* through the simultaneous connectives "sand" and "sor", as discussed in chapter four;
    2) The addition of a "program" capability, through the generalization of the data base to allow DL* statements to appear as values / list items, and an "execute" operator, accepting a list of such statements. This would permit such a statement as
    "execute (cons X: exist Y» Type(Y)= Program ? X= Body.(Y) ) ".
    3) The capability to virtually modify the data base at levels other than the top;
    4) The addition of a context mechanism, to permit conditional changes to the data base, which may be easily unmade.
    These additions would result in DL* becoming, in addition to a powerful question answering system, a language suitable for use in many artificial intelligence projects.

    Resources
    • Cover of book