KRL (ID:763/krl001)

Knowledge Representation Language 


Knowledge Representation Language. A frame-based language.


Related languages
Frames => KRL   Influence
KRL => KRL-0   Evolution of
KRL => LOOPS   Influence
KRL => ThingLab   Influence

References:
  • Bobrow, Daniel G. and Winograd, Terry A. "An overview of KRL, a Knowledge Representation Language" Report CS-TR-76-581, Department of Computer Science, Stanford University, November 1976. Abstract: This paper describes KRL, a Knowledge Representation Language designed for use in understander systems. It outlines both the general concepts which underlie our research and the details of KRL-0, an experimental implementation of some of these concepts. KRL is an attempt to integrate procedural knowledge with a broad base of declarative forms. These forms provide a variety of ways to express the logical structure of the knowledge, in order to give flexibility in associating procedures (for memory and reasoning) with specific pieces of knowledge, and to control the relative accessibility of different facts and descriptions. The formalism for declarative knowledge is based on structured conceptual objects with associated descriptions. These objects form a network of memory units with several different sorts of linkages, each having well-specified implications for the retrieval process. Procedures can be associated directly with the internal structure of a conceptual object. This procedural attachment allows the steps for a particular operation to be determined by characteristics of the specific entities involved. The control structure of KRL is based on the belief that the next generation of intelligent programs will integrate data-directed and goal-directed processing by using multi-processing. It provides for a priority-ordered multi-process agenda with explicit (user-provided) strategies for scheduling and resource allocation. It provides procedure directories which operate along with process frameworks to allow procedural parameterization of the fundamental system processes for building, comparing, and retrieving memory structures. Future development of KRL will include integrating procedure definition with the descriptive formalism.
  • Bobrow, Daniel G., and Terry Winograd, "An Overview of KRL, A Knowledge Representation Language"
          in Cognitive Science, 1(1) 1977
  • Bobrow, Daniel G., Terry Winograd, and the KRL research group, "Experience with KRL-0: One Cycle of a Knowledge Representation Language", pp. 213-222
          in Proceedings of the 5th International Joint Conference on Artificial Intelligence IJCAI-77, MIT, Cambridge, Mass., August, 1977
  • Bobrow, D.G. and Winograd, T., "KRL, Another Perspective" Abstract: Wendy Lehnert and Yorick Wilks (pp. 1-28 of this issue of Cognitive Science) have written a lengthy paper raising a number of issues concerning KRL. Much of their paper is an excellent explanation of some of the features and problems of KRL, and will serve to clarify things which we have explained poorly or not at all in previous papers. Other parts of what they say we find more contentious, and much of this response will be an argument against views of theirs which we feel are confused or wrong.

    The decision to focus on the disputes does not imply a general rejection of the paper. It was clearly intended in the spirit of constructive criticism and makes a number of valid and important points. We feel it is useful to write a response, not as a defense, but as a further step in a dialog through which we will all come to a better understanding of language and cognition.
          in Cognitive Science 3 (1979)
  • Borning, Alan "ThingLab - A Constraint-Oriented Simulation Laboratory" Xerox PARC Report SSL-79-3 July 1979 Extract: The Kernel ThingLab System
    The Kernel ThingLab System

    The kernel ThingLab system consists of a Smalltalk extension, written by the present author, that is used in all ThingLab simulations. Embedded in this program is knowledge about such things as inheritance hierarchies, part-whole relations, and constraint satisfaction techniques. The kernel system doesn't have any knowledge about specific domains in which ThingLab can be used, such as geometry or electrical circuits. Rather, it provides tools that make it easy to construct objects that contain such knowledge.

    Another goal in constructing the system, besides the exploration of language design as described above, was to investigate computer-based tools for use in education. For example, a ThingLab-style system might prove valuable as part of a geometry curriculum, or as an adjunct to a physics laboratory. With this in mind, it is anticipated that there would be two sorts of users of the system. The first sort of user would employ ThingLab to construct a set of building blocks for a given domain. For example, for use in simulating electrical circuits, such a user would construct definitions of basic parts such as resistors, batteries, wires and meters. The second sort of user could then employ these building blocks to construct and explore particular simulations. The knowledge and skills required by these two kinds of users would be different. The first kind of user would need to know about message passing, about constraint specification, and about the domain-specific theory behind the simulation (e.g. Ohm's Law). The second kind of user, on the other hand, could deal with the system using only simple interactive graphics techniques, such as selecting items in a menu or moving pictures around on the screen. Thus, this sort of user wouldn't need to be familiar with either the details of ThingLab, or with the domain-specific theory behind the simulation (although one of the objectives of a curriculum might be for such a user to acquire this domain-specific knowledge).

    Extract: Constraint Representation and Satisfaction
    Constraint Representation and Satisfaction

    Representation of Constraints

    The representation of constraints reflects their dual nature as both descriptions and commands. Constraints in ThingLab are represented as a rule and a set of methods that can be invoked to satisfy the constraint. The rule is used by the system to construct a procedural test for checking whether or not the constraint is satisfied, and to construct an error expression that indicates how well the constraint is satisfied. The methods describe alternate ways of satisfying the constraint; if any one of the methods is invoked, the constraint will be satisfied.
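    The dual rule-plus-methods representation described above can be sketched as follows. This is a hypothetical Python rendering (ThingLab itself was built in Smalltalk), and every name here (Constraint, horizontal) is invented for illustration, not ThingLab's API:

```python
# Hypothetical sketch of ThingLab's dual view of a constraint: a declarative
# rule (from which a test and an error expression are derived) plus a set of
# procedural methods, any one of which satisfies the constraint when invoked.

class Constraint:
    def __init__(self, rule, error, methods):
        self.rule = rule        # predicate: is the constraint satisfied?
        self.error = error      # numeric measure of how badly it is violated
        self.methods = methods  # alternative procedures; any one satisfies it

# A horizontal-line constraint on a line with endpoints p1, p2 (dicts with 'y').
def horizontal(line):
    return Constraint(
        rule=lambda: line['p1']['y'] == line['p2']['y'],
        error=lambda: abs(line['p1']['y'] - line['p2']['y']),
        methods=[
            lambda: line['p1'].__setitem__('y', line['p2']['y']),  # move p1
            lambda: line['p2'].__setitem__('y', line['p1']['y']),  # move p2
        ],
    )

line = {'p1': {'x': 0, 'y': 3}, 'p2': {'x': 10, 'y': 7}}
c = horizontal(line)
assert not c.rule()
c.methods[0]()          # invoke one of the methods; the constraint now holds
assert c.rule() and c.error() == 0
```

    The point of the two-sided representation is that the system can both check a constraint (via the rule) and enforce it (via a chosen method), without the methods having to be derived automatically from the rule.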

    Merges

    An important special case of a constraint is a merge. When several parts are merged, they are constrained to be identical. For efficiency, they are usually replaced by a single part rather than being kept as several separate objects. The owner of the parts keeps a symbolic representation of the merge for use in constraint satisfaction, as well as for reconstruction of the original parts if the merge is deleted. One use of merging is to represent connectivity. For example, to connect two sides of the triangle, an endpoint from one side is merged with an endpoint of the other. Another use of merging is to apply pre-defined constraints. Thus, to constrain the base of the triangle to be horizontal, one can simply merge an instance of HorizontalLine with the side to be constrained.
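    A merge, as described above, amounts to replacing several parts by one shared object while the owner keeps a symbolic record. A minimal Python illustration (the function and field names are invented, not ThingLab's):

```python
# Hypothetical sketch of a merge: two parts constrained to be identical are
# replaced by a single shared object, and the owner records the merge
# symbolically (for constraint satisfaction, and for undoing the merge).

def merge(owner, name_a, name_b):
    """Replace owner's parts name_a and name_b by a single shared part."""
    owner[name_a] = owner[name_b]                            # one object stands for both
    owner.setdefault('merges', []).append((name_a, name_b))  # symbolic record

triangle = {
    'side1_end': {'x': 0, 'y': 0},
    'side2_start': {'x': 5, 'y': 5},
}
merge(triangle, 'side1_end', 'side2_start')  # connect the two sides
triangle['side1_end']['x'] = 9               # moving one endpoint...
assert triangle['side2_start']['x'] == 9     # ...moves the merged one too
```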

    Constraint Satisfaction

    It is up to the user to specify the constraints on an object; but it is up to the system to satisfy them. Satisfying constraints is not always trivial. A basic problem is that constraints are typically multi-directional. For example, the horizontal line constraint is allowed to change either endpoint of the line. Thus, one of the tasks of the system is to choose among several possible ways of locally satisfying each constraint. One constraint may interfere with another; in general the collection of all the constraints on an object may be incomplete, circular, or contradictory. Again it is up to the system to sort this out.

    The approach taken in ThingLab is first to analyze the constraints on an object and plan how to satisfy them, and then to make the actual changes to satisfy the constraints. In ThingLab, the particular values that an object holds usually change much more rapidly than its structure. For example, if on the display the user moves some part of a constrained geometric object with the cursor, the values held by this object will change every time its picture is refreshed. Each time some value is changed, other values may need to be changed as well to keep the constraints satisfied. However, the object's structure will change only when the user adds or deletes a part or constraint. The design of the ThingLab constraint satisfaction mechanism is optimized for this environment. A constraint satisfaction plan may depend on the particular structure of an object, but should work for any values that the object might hold. (If not, appropriate tests must be included as part of the plan.) Once a plan for satisfying some constraints has been constructed, Smalltalk code is compiled to carry out this plan. Thus each time the part of the constrained geometric object is moved, it is this pre-compiled method that is invoked, rather than a general interpretive mechanism. Also, the plan is remembered in case it is needed again. Planning is done using symbolic references to the constrained parts, so that the same plan may be used by all instances of a class. If the class structure is changed so that the plan becomes obsolete, it will be automatically forgotten.

    When an object is asked to make a change to one of its parts or subparts, it gathers up all the constraints that might be affected by the change, and plans a method for satisfying them. In planning a constraint satisfaction method, the object will first attempt to find a one-pass ordering for satisfying the constraints. There are two techniques available in ThingLab for doing this: propagation of degrees of freedom, and propagation of known states. In propagating degrees of freedom, the constraint satisfier looks for an object with enough degrees of freedom so that it can be altered to satisfy all its constraints. If such an object is found, that object and all the constraints that apply to it can be removed from further consideration. Once this is done, another object may acquire enough degrees of freedom to satisfy all its constraints. The process continues in this manner until either all constraints have been taken care of, or until no more degrees of freedom can be propagated. In the second technique, propagating known states, the constraint satisfier looks for objects whose states are completely known. If such an object is found, the constraint satisfier will look for one-step deductions that allow the states of other objects to be known, and so on recursively.
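    Propagation of degrees of freedom can be sketched in Python. This is a deliberately simplified, hypothetical model (not ThingLab's algorithm as implemented): each part has a fixed count of degrees of freedom, and a part can absorb its remaining constraints whenever it has at least as many degrees of freedom as constraints touching it:

```python
# Sketch of propagation of degrees of freedom: repeatedly pick a part that is
# free enough to satisfy all the constraints still touching it, add those
# constraints to the plan, and remove them from consideration. If constraints
# remain when no part can make progress, a circularity was found and the
# caller must fall back on a circularity method (e.g. relaxation).

def plan_by_dof(parts_dof, constraints):
    """parts_dof: {part: degrees of freedom}; constraints: list of part sets.
    Returns an ordering of constraints, or None if a circularity remains."""
    remaining = list(constraints)
    plan = []
    progress = True
    while remaining and progress:
        progress = False
        for part, dof in parts_dof.items():
            touching = [c for c in remaining if part in c]
            if touching and dof >= len(touching):
                # this part can be altered to satisfy all its constraints
                plan.extend(touching)
                for c in touching:
                    remaining.remove(c)
                parts_dof[part] = 0   # its freedom is now used up
                progress = True
    return plan if not remaining else None

# A free point p1 (2 degrees of freedom) can absorb both constraints; the
# pinned point p3 (0 degrees of freedom) cannot be moved at all.
order = plan_by_dof({'p1': 2, 'p2': 2, 'p3': 0},
                    [{'p1', 'p3'}, {'p1', 'p2'}])
assert order is not None and len(order) == 2
```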

    If there are constraints that cannot be handled by either of these techniques, the object will invoke a method for dealing with circularity. Currently, the classical relaxation method is the only such method available. As will be described in Chapter 5, relaxation can be used only with certain numerical constraints, and is also slow. In this method, the object changes each of its numerical values in turn so as to minimize the error expressions of its constraints. These changes are determined by approximating the constraints on a given value as a set of linear equations (obtained by finding the derivative of each error expression with respect to the value) and solving this set of equations. Relaxation continues until all the constraints are satisfied (all the errors are less than some cutoff), or until the system decides that it cannot satisfy the constraints (the errors fail to decrease after an iteration).

    If the relaxation method is used, the system issues a warning message to the user. The user can either let things stand, or else supply additional information in the form of redundant constraints that eliminate the need for relaxation.
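    The relaxation loop described above (adjust each numeric value in turn, linearizing the error expressions via derivatives) can be sketched as follows. This is a hypothetical, simplified Python version rather than ThingLab's compiled Smalltalk plans; the derivative is taken by finite differences and each value gets a least-squares step against all of its error expressions:

```python
# Sketch of classical relaxation: sweep over the numeric values, and for each
# one take the linearized step that minimizes the (squared) errors of the
# constraints, until every error is below the cutoff or iterations run out.

def relax(values, errors, cutoff=1e-6, max_iters=500):
    """values: dict name -> float; errors: list of error expressions over values."""
    h = 1e-6
    for _ in range(max_iters):
        if max(abs(e(values)) for e in errors) < cutoff:
            return True                        # all constraints satisfied
        for name in values:
            num = den = 0.0
            for e in errors:
                err = e(values)
                values[name] += h
                de = (e(values) - err) / h     # derivative of error w.r.t. this value
                values[name] -= h
                num += err * de
                den += de * de
            if den > 0:
                values[name] -= num / den      # linearized (least-squares) step
    return False                               # errors failed to reach the cutoff

# Circular constraints x = y + 1 and y = x / 2 (solution: x = 2, y = 1).
vals = {'x': 0.0, 'y': 0.0}
ok = relax(vals, [lambda v: v['x'] - v['y'] - 1.0,
                  lambda v: v['y'] - v['x'] / 2.0])
assert ok and abs(vals['x'] - 2) < 1e-3 and abs(vals['y'] - 1) < 1e-3
```

    Because each step only uses a linear approximation, this works for linear or nearly linear constraints and fails on problems like the cryptarithmetic example discussed below, which matches the limitation the text describes.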

    Where are Constraints Useful?

    Where are constraints useful? In discussing this question, it is important to differentiate what can be expressed using constraints from what sets of constraints can be satisfied. Many more things can be expressed than can be satisfied. For example, it is easy to state the following constraints:

    x^n + y^n = z^n

    x, y, z, n integers

    x, y, z > 0

    n > 2.

    However, finding values that satisfy these constraints, or proving that no such values exist, requires either a counterexample or a proof of Fermat's Last Theorem.

    What can be expressed using constraints? To express a relation as a constraint, the following information is needed: a rule (from which the system will derive a satisfaction test and an error expression); and one or more methods for satisfying the constraint. For numerical constraints, the methods may be omitted if the user is willing to live with the relaxation method. Any relation that must always hold, and for which this information can be supplied, may be expressed as a constraint. Some relations that cannot be expressed as constraints in a general way using current ThingLab techniques include: any relation involving ordering or time; relations that need hold only under certain conditions; and meta-constraints (constraints on other constraints or on constraint satisfaction strategies).

    What sets of constraints can be satisfied? If the constraint dependency graph has no circularities, or if the circularities can all be broken using one-step deductions, then the one-pass constraint satisfaction techniques will always succeed, and will provide correct results. Further, the constraints can be satisfied, or determined to be unsatisfiable, in time proportional to that required to execute the local methods provided by the constraints. If the dependency graph does have circularities that cannot be broken by one-step deductions, relaxation is used for the remaining constraints. These constraints must either be linear, or else constraints for which linearization is an adequate approximation. An example of a set of circular constraints for which the relaxation method does not work is one that describes a cryptarithmetic problem, e.g. DONALD + GERALD = ROBERT with D=5. [See Newell & Simon 1972 for a description of this domain.] Relaxation is useless here, since the constraints cannot be approximated by linear equations. To solve such kinds of problems, other constraint satisfaction techniques would be needed, such as heuristic search.

    Relation to Other Work

    As mentioned previously, the two principal ancestors of ThingLab are Sketchpad and Smalltalk. It is also closely related to work on constraints by Gerald Sussman and his students; other related work includes Simula, the Actor languages, KRL, and a number of problem solving systems. Following a discussion of these and other systems, a summary of the novel features of ThingLab is presented.
    Extract: KRL
    Other relevant work includes representation languages, in particular KRL. Ideas in KRL regarding object-centered factorization of knowledge and the inheritance of properties have been very helpful. A comparison of the approaches taken in KRL and ThingLab to the questions of inheritance and the relation between classes and instances may be found in Chapter 3. Such questions also arise in the design of semantic nets.
  • Lehnert, W. and Wilks, Y., "A critical perspective on KRL" pp. 1-28 Abstract: Bobrow and Winograd have presented to the AI community two descriptions of KRL (Bobrow & Winograd, 1977; Bobrow, Winograd et al., 1977) which explicate both a high level AI programming language and a theory of knowledge representation. In actual practice, the line between these roles is necessarily vague. As is the case with all programming languages, commitments made to specific data formats or control structures profoundly affect design decisions made by the user. In KRL, there are additional commitments to knowledge representation in the programming language as well. While these commitments are neutrally presented as convenient features of a high level language, their impact on the user would be far less neutral. To a user who has not previously investigated problems of knowledge representation first-hand, KRL either suggests a particular approach or imposes that same approach. In either case, the user is liable to be unconscious of the continual trade-off between low level design options and high level programming convenience.
          in Cognitive Science 3 (1979)
  • Stefik, M. "An examination of a frame-structured representation system" pp. 845-852 Abstract: The Unit Package is an interactive knowledge representation system with representations for individuals, classes, indefinite individuals, and abstractions. Links between the nodes are structured with explicit definitional roles, types of inheritance, defaults, and various data formats. This paper presents the general ideas of the Unit Package and compares it with other current knowledge representation languages. The Unit Package was created for a hierarchical planning application, and is now in use by several AI projects.
          in Proceedings of the International Joint Conference on Artificial Intelligence, Tokyo, Japan, August 1979
  • Bobrow, Daniel G. "KRL, A Knowledge Representation Language" Abstract: KRL is intended to be a knowledge representation language with which to build sophisticated systems and theories of language understanding. A first implementation, KRL-0 (Bobrow and Winograd, 1977) was used for a number of test programs, and its problems (Bobrow, Winograd and the KRL Research Group, 1977) led to a redesign now being implemented as KRL-1. The major premises of KRL are:

    1. Knowledge should be organized around conceptual entities with associated descriptions and procedures.
    2. A description must be able to represent partial knowledge about an entity and accommodate multiple descriptors which can describe the associated entity from different viewpoints.
    3. An important method of description is comparison with a known entity, with further specification of the described instance with respect to the prototype.
    4. Descriptions are legitimate entities in the system, and thus can be described. A system ought to be able to describe all its own operations and structures.
    5. Reasoning is dominated by a process of recognition in which new objects and events are compared to stored sets of expected prototypes, and in which specialized reasoning strategies are keyed to these prototypes.
    6. Intelligent programs will require multiple active processes with explicit user-provided scheduling and resource allocation heuristics.
    7. Information should be clustered to reflect use in processes whose results are affected by resource limitation and differences in information accessibility.
    8. A knowledge representation language must provide a flexible set of underlying tools, rather than embody specific commitments about either processing strategies or the representation of specific areas of knowledge.
    9. A knowledge representation system ought to have a clear, well-defined semantics, where "semantics" is used here as parallel to the sense of semantics used by Tarski for logic. KRS (see the section by Brian Smith) is intended to provide a clean semantic basis for KRL.

    The basic mechanisms in KRL include:
    1. Scoped units of memory, with costs associated with traversing scope boundaries; these interact with a family of procedures to add information within a scope, and to seek and match beliefs found within and across scopes.
    2. Methods for associating procedural information with entities, and invocation of procedures dependent on specific actions on those entities. Procedures are controlled through a multiprocess agenda which can be scheduled by the user. Schedulers use information about both current goals and resource availability.
    3. An active belief context which acts as an attention-focusing mechanism. It is this active context on which the inference mechanism works to derive new beliefs. Incorporating information from memory structures into the active context is a separately controllable process from performing inferences on that set of active beliefs.
    4. An indexing mechanism which allows the user to specify how some data items can be addressed by contents, and which parts of a given structure are to act as an access key. Index keys are general descriptions, not just atoms as in simple semantic nets.
    5. A means of attaching descriptions to descriptions so that knowledge about knowledge is expressible easily and directly in the same formalism. For example, these "meta-descriptions" can specify when specific information might be useful, or the dependency of a particular description on other assumptions made in reasoning.
    6. A mechanism for compiling objects and operations so that the overhead of general forms and interpreter mechanisms can be avoided in simple cases. This depends on a data compaction scheme which allows specialized data structures to be interpreted as full fledged descriptions.
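    Mechanism 2, procedural attachment, can be illustrated with a small sketch. This is a hypothetical Python analogy, not KRL syntax; the trigger names ('whenFilled') and the Unit class are invented for illustration:

```python
# Hypothetical sketch of procedural attachment: procedures are associated
# with slots of a conceptual unit and invoked automatically by the action
# performed on that slot, so processing is data-directed.

class Unit:
    def __init__(self, name):
        self.name = name
        self.slots = {}
        self.triggers = {}              # (slot, action) -> attached procedure

    def attach(self, slot, action, proc):
        self.triggers[(slot, action)] = proc

    def fill(self, slot, value):
        self.slots[slot] = value
        proc = self.triggers.get((slot, 'whenFilled'))
        if proc:
            proc(self, value)           # invoked by the act of filling the slot

trip = Unit('Trip')
log = []
trip.attach('destination', 'whenFilled',
            lambda u, v: log.append(f'plan route to {v}'))
trip.fill('destination', 'Boston')      # filling the slot fires the procedure
assert log == ['plan route to Boston']
```

    The attached procedure runs because of what was done to the entity, not because a caller explicitly invoked it; that is the sense in which procedures are "associated directly with the internal structure of a conceptual object."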

    English Dialog
    A basic premise of our approach to natural language understanding is that a dialog system demands integration of a number of complex components through many layers of abstraction. As a basis one needs a computer representation, programming, monitoring and debugging system. On top of that must be constructed basic reasoning strategies. Specialized representations and reasoning must be provided for common domains such as time, causality, planning, events and states, etc. One also needs linguistic knowledge, e.g., syntax and parsing strategies, morphological rules, discourse structures. Finally, a dialog system must have expert knowledge of the task domain itself, such as travel planning, medical diagnosis, etc. Although we intend to implement modules at all these layers, KRL-1 will only embody the first two. The designer of a dialog system will have to provide specialized knowledge (built in KRL) for the rest of the layers.

    Strengths and Weaknesses
    A major strength of KRL is that it is a self-descriptive system. Because of the access compiler and data compaction mechanisms, this self description can be used to actually implement the system itself. This implies that the basis of the system is accessible to the user in a way that has not been true of other knowledge representations since LISP.
    A problem for KRL is that our current representation of processes is weak. Giving a sequence of instructions which an interpreter should follow in order is only one way of describing a procedure. We believe that it will be possible to develop a notion of factored description in which a procedure is described through multiple perspectives which may combine high level statements about the structure of the process, its results, conditions on various parts, ways it fits into goal structures, etc. We would like to apply the self descriptive power of the language to use the reasoning, matching and problem solving powers of KRL as fundamental elements in our tools for designing, building, and working with KRL programs.

          in [Bobrow, Daniel G.] A Panel On Knowledge Representation chaired by Daniel G. Bobrow