GLISP - Lisp with structural abstraction. Novak, Stanford, 1980. LISP extended to support abstractions after the fashion of CLU/Alphard, with a view to high-level reuse in software engineering. Later the focus of much research at U Texas, where it was extended to include the conceptualisation of units as part of this abstraction process. (Obiter dicta: a beautifully conceived language, thoroughly and consistently thought through, and executed elegantly.)
References: Introduction My earlier research on computer understanding of physics problems stated in English [1,2] has convinced me that English is best viewed as a programming language. That is, an English sentence does not contain the message to be transmitted to the reader, but rather is a program which provides the minimum information necessary for the reader to construct the message from what the reader already knows. In the case of physics problems, the size of the model constructed by the ISAAC program is some 30 times as large as the size of the English sentences which specified the construction of that model. Woods [3] has suggested that the need to communicate complex concepts over a bandwidth-limited serial channel (speech) was the driving force behind the evolution of natural language abilities in humans. Study of the ways in which English permits compact expression of complex ideas reveals several features which would be useful if incorporated into programming languages. The reader of an English text maintains a current context, or focus, which can be used to understand definite references to objects or features of objects which are not specified completely, but are closely related to objects in the current context. For example, consider the following sample of text: Last night I went to Scholz's for a beer. The bartender asked for a ride home, since his car was disabled. Somebody had let the air out of the tires. A person reading this passage can easily understand a definite reference such as "the air", which means "the air which was contained in the tires which are part of the car which is owned by the person who works as a bartender at the bar named Scholz's".
The reader has made these connections while reading the story by using world knowledge and by maintaining a current context relative to which definite references to previously unmentioned objects and features can be understood; each reference to an object or feature causes it to be brought into the context, thus enabling further references relative to it. The ability of the reader to infer the connection between a definite reference in a sentence and a "closely related" object in the current context permits the compact specification of complex relationships among objects. Another valuable feature of English is that it provides a standard interface for communicating information. The writer and the reader may have very different internal representations for certain objects, but they both have procedures for translation between their internal representations and corresponding English descriptions. A related feature is provided by "object-oriented" programming in the SMALLTALK language, which is based on the idea of Objects which communicate by exchanging Messages. In most programming languages, object representations are merely storage locations; the nature of the representation is represented implicitly in the programs which manipulate the storage locations. In SMALLTALK, the internal structure of objects is hidden, and programs cannot manipulate the internal structures directly; instead, programs query and change values in the objects by sending them messages, e.g., "what is your X?" or "set your X to the value y". Only the object itself knows whether it actually has an X, or whether its X is a consequence of other values. In addition, the object can act to maintain its own internal consistency; for example, changing the size of an object may require that the object change the size of its picture on a display screen. Unfortunately, SMALLTALK has been implemented on special hardware, and has been unavailable to most researchers.
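The message-passing style described above can be sketched in Python (an analogue only; the `send`, `get_x`, and `magnitude` names are hypothetical, not part of SMALLTALK or GLISP). The caller cannot tell whether an answer is stored or computed:

```python
class Point:
    """Object that answers "what is your X?" without revealing its storage."""

    def __init__(self, x, y):
        self._x = x          # internal structure, hidden from callers
        self._y = y

    def send(self, selector, *args):
        """Dispatch a message by selector name, SMALLTALK-style."""
        return getattr(self, selector)(*args)

    def get_x(self):
        return self._x

    def set_x(self, value):
        # The object could also act here to keep itself consistent,
        # e.g., redrawing its picture on a display screen.
        self._x = value

    def magnitude(self):
        # A "virtual" answer: no MAGNITUDE slot exists in the object.
        return (self._x ** 2 + self._y ** 2) ** 0.5

p = Point(3, 4)
print(p.send("get_x"))       # → 3
print(p.send("magnitude"))   # → 5.0
```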
Extract: Need for English-like Programming Languages English-like programming provides two features which are needed by workers in Cognitive Science and Artificial Intelligence and which are not provided by most existing programming languages: brevity of expression and ease of changing representations. In fact, these two are intertwined: the more detail one has provided about how to perform an action on an object, the more code one will have to change if the basic structure of the object is to be changed. Most existing programming languages implicitly specify the structures of objects within the code. For example, in either PASCAL or CLISP, referencing a field of a record structure requires that both the record and a complete path from the record to the desired field be specified in the code; if the record structure is to be changed, all the code which references such records will often have to be changed also. In a large system, such changes are so difficult that significant changes to data structures are seldom possible once a large body of code exists. Extract: GLISP GLISP is a LISP-based language which permits English-like programs containing definite references. GLISP is implemented by a compiler which compiles GLISP programs into LISP relative to a knowledge base which is separate from the programs; the resulting LISP code can be further compiled to machine language by the LISP compiler. In GLISP, the execution of a program causes an implicit context of computation to be constructed, just as an English conversation causes an implicit conversational context to be constructed in the minds of the conversants. The context is computed at compile time, using flow analysis, from the previous context, which includes Structure Descriptions of previously mentioned objects.
Definite references to features of objects which are currently in context are permitted; these cause the newly referenced objects to be added to the context, allowing further references relative to them. The initial context within a GLISP function consists of the arguments of the function, its PROG variables, and any declared global variables. The context contains, for each variable, its variable name, reference name, and Structure Description. When a definite reference is encountered within a GLISP program, the compiler determines whether the reference names such a variable or names a substructure or feature of some variable which is in context. If a substructure or feature is referenced, the compiler determines how to get it from the original structure; the resulting code replaces the definite reference in the compiled version of the program. In addition to producing code to get the feature from the starting structure, the compiler also determines the Structure Description of the result. The new item and its Structure Description are added to the context, thus enabling further definite references relative to it. When the compilation is finished, the context structures disappear; the compiled code contains only the LISP code necessary to perform the specified actions. Thus, the code produced by GLISP is relatively efficient; the user of GLISP must pay for compilation, but does not incur a runtime penalty. The GLISP compiler runs incrementally, so that functions are compiled automatically the first time they are called. The following example illustrates some of the features of GLISP. Suppose that a wicked witch curses a grandmother by decreeing that each of her calico cats shall age by five years.
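The compile-time resolution of definite references can be illustrated with a small Python sketch. The structure-description table and the `resolve` function here are hypothetical simplifications: the real compiler works on GLISP Structure Descriptions and searches them recursively, while this sketch searches only one level deep:

```python
# Hypothetical structure descriptions: each type maps field names to types.
STRUCTURES = {
    "GRANDMOTHER": {"NAME": "STRING", "CATS": "CAT"},
    "CAT": {"COLOR": "SYMBOL", "AGE": "INTEGER"},
}

def resolve(reference, context):
    """Find an access path for a definite reference by searching the
    structure description of each variable currently in context.
    Returns (variable, field, result type), or None if not found."""
    for var, typ in context:
        fields = STRUCTURES.get(typ, {})
        if reference in fields:
            return (var, reference, fields[reference])
    return None

# With a GRANDMOTHER in context, "CATS" resolves without naming a variable;
# the result's type would then be added to the context in turn.
context = [("arg1", "GRANDMOTHER")]
print(resolve("CATS", context))   # → ('arg1', 'CATS', 'CAT')
```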
The code to accomplish this can be written in GLISP as follows:

(CURSE (GLAMBDA ( (A GRANDMOTHER) )
  (PROG ()
    (FOR EACH CAT WITH COLOR = 'CALICO DO AGE _+ 5) )))

The GLAMBDA indicates that this is a GLISP function, and causes the GLISP compiler to be called when the function is first interpreted (using the LAMBDATRAN feature of INTERLISP). Since GLISP maintains a context and permits definite reference, it is often unnecessary to give names to variables; thus, we need only declare the type of the argument, (A GRANDMOTHER). Since a GRANDMOTHER is in context, the compiler can determine how to access her CATS and how to generate an appropriate loop to examine each of them. Within the loop, of course, the current CAT is in context, allowing definite reference to its features. The compiler generates the appropriate kind of test to compare the COLOR of the CAT against the constant 'CALICO; if needed, the constant and the operator could be coerced into the appropriate forms. For example, 'CALICO might have several possible meanings; in the context of the COLOR of a CAT, it could be coerced to the unique constant 'CALICO-CAT-COLOR. If the test is satisfied, the AGE of the CAT is increased by 5; the operator _+, which specifies appending when applied to lists, is interpreted as addition when applied to numbers. In the GLISP program, we have implied that certain objects have certain features, e.g., that a CAT has a COLOR, but we have said nothing about how to get or replace the COLOR of a CAT, or about what type of entity the COLOR actually is. This information is held separately in the knowledge base of Structure Descriptions and other information relative to which the program is compiled. This makes possible significant changes to data structures with no changes to the code - a goal long sought in high-level languages, but one which has been largely unrealized for structures involving pointers.
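Under an assumed record layout for GRANDMOTHER and CAT (nothing in the GLISP source fixes one), the LISP that the compiler produces would behave like this Python sketch: all structural knowledge has been compiled into direct field accesses, leaving no runtime lookup:

```python
# Hypothetical data layout; GLISP would obtain this from its knowledge base.
grandmother = {"cats": [
    {"color": "CALICO", "age": 3},
    {"color": "BLACK", "age": 7},
    {"color": "CALICO", "age": 10},
]}

def curse(a_grandmother):
    """Behavioral analogue of the compiled CURSE function: the definite
    references CATS, COLOR, and AGE have become direct accesses."""
    for cat in a_grandmother["cats"]:
        if cat["color"] == "CALICO":
            cat["age"] += 5

curse(grandmother)
print([cat["age"] for cat in grandmother["cats"]])   # → [8, 7, 15]
```

If the record layout later changed (say, CATS moved inside a nested PETS record), only the knowledge base would change; the GLISP source of CURSE would recompile unmodified.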
GLISP can be viewed as similar to SMALLTALK in the sense that a program does not specify directly how to manipulate objects. Instead of sending a message to an object, we can think of the GLISP compiler as generating the code to do what the object would do if it received such a message. This provides some of the flexibility of SMALLTALK with high runtime efficiency. The GLISP compiler allows GLISP expressions and ordinary LISP to be mixed; the user can use as much or as little GLISP as desired. A Structure Description language is provided for the common LISP data structures, and the compiler automatically generates code to access such structures. In addition, the compiler provides a clean interface to one or more representation languages; the user can use both ordinary LISP structures and units in his favorite representation language, accessing both in a transparent manner. A more compact, CLISP-like syntax for GLISP expressions is provided in addition to the English-like syntax. The GLISP compiler, accessing both LISP structures and our GIRL representation language, is currently running. Overview of GLISP GLISP is a LISP-based language which provides high-level language features not found in ordinary LISP. The GLISP language is implemented by means of a compiler which accepts GLISP as input and produces ordinary LISP as output; this output can be further compiled to machine code by the LISP compiler. The goal of GLISP is to allow structured objects to be referenced in a convenient, succinct language, and to allow the structures of objects to be changed without changing the code which references the objects. The syntax of many GLISP constructs is English-like; much of the power and brevity of GLISP derive from the compiler features necessary to support the relatively informal, English-like language constructs. The following example function illustrates how GLISP permits definite reference to structured objects.
(HourlySalaries (GLAMBDA ( (a DEPARTMENT) )
  (for each EMPLOYEE who is HOURLY
    (PRIN1 NAME) (SPACES 3) (PRINT SALARY) ) ))

The features provided by GLISP include the following: 1. GLISP maintains knowledge of the "context" of the computation as the program is executed. Features of objects which are in context may be referenced directly; the compiler will determine how to reference the objects given the current context, and will add the newly referenced objects to the context. In the above example, the function's argument, an object whose class is DEPARTMENT, establishes an initial context relative to which EMPLOYEEs can be found. In the context of an EMPLOYEE, NAME and SALARY can be found. 2. GLISP supports flexible object definition and reference with a powerful abstract datatype facility. Object classes are easily declared to the system. An object declaration includes a definition of the storage structure of the object and declarations of properties of the object; these may be declared in such a way that they compile open, resulting in efficient object code. GLISP supports object-centered programming, in which processes are invoked by means of "messages" sent to objects. Object structures may be LISP structures (for which code is automatically compiled) or Units in the user's favorite representation language (for which the user can supply compilation functions). 3. Loop constructs, such as (FOR EACH ...). 4. Compilation of infix expressions is provided for the arithmetic operators and for additional operators which facilitate list manipulation. Operator overloading for user-defined objects is provided using the message facility. 5. The GLISP compiler infers the types of objects when possible, and uses this knowledge to generate efficient object code.
By performing compilation relative to a knowledge base, GLISP is able to perform certain computations (e.g., inheritance of an attached procedure from a parent class of an object in a knowledge base) at compile time rather than at runtime, resulting in much faster execution. 6. By separating object definitions from the code which references objects, GLISP permits radical changes to object structures with no changes to code - a goal long sought in high-level languages, but one which has largely been unrealized for structures involving pointers. Extract: Implementation GLISP is implemented by means of a compiler [using the LAMBDATRAN feature of INTERLISP, written by Ron Kaplan], which incrementally compiles each function the first time the function is called. [Of course, compilation can be invoked directly as well.] GLISP functions are indicated by the use of GLAMBDA instead of LAMBDA in the function definition. When the INTERLISP interpreter sees the GLAMBDA, it effects an "interrupt" to the GLISP compiler, which compiles the GLISP function and returns a normal LISP EXPR; thereafter, the LISP version is used. Thus, use of GLISP entails the cost of a single compilation, but otherwise is about as efficient as normal LISP. The LISP code produced by GLISP can be further compiled to machine code by the LISP compiler. To use GLISP, it is only necessary to load the compiler: LOAD(GLISP.LSP). Thereafter, whenever a function which has GLAMBDA in its definition is interpreted, the compiler will be called automatically. The GLISP compiler is also called automatically when LISP compilation of a function is requested. An individual function can be compiled explicitly by invoking GLCOMPILE( ... ). The compiled code, result type, and original code for compiled functions are stored on the property list of the function name.
Properties of GLISP functions and Structure names can be examined with the function GLED( ... ). Extract: Declaration of Object Descriptions An Object Description in GLISP is a description of the structure of an object in terms of named substructures, together with definitions of ways of referencing the object. The latter may include virtual fields (i.e., fields whose values are not stored, but are computed from the values of other fields), adjectival predicates, and messages which the object can receive; the messages can be used to implement operator overloading and other compilation features. Object Descriptions are obtained by GLISP in several ways: 1. The descriptions of basic datatypes (e.g., INTEGER) are automatically known to the compiler. 2. Structure descriptions (but not full object descriptions) may be used as types in function definitions. 3. The user may declare object descriptions to the system using the function GLDEFSTRQ. 4. Object descriptions may be included as part of a knowledge representation language, and are then furnished to GLISP by the interface package written for that representation language. LISP data structures are declared using the function GLDEFSTRQ ("GLisp DEFine STRucture Quote"). GLDEFSTRQ takes one or more object descriptions as arguments, assuming the descriptions to be quoted. The format of each description begins with the object's storage structure and may be followed by the keyword sections PROP, ADJ, ISA, and MSG. Once declared, object descriptions may be included in INTERLISP program files by placing them in a (GLISPOBJECTS ...) form. The following example illustrates some of the declarations which might be made to describe the object type Vector.
(GLDEFSTRQ
  (VECTOR (CONS (X NUMBER) (Y NUMBER))
    PROP ( (MAGNITUDE ((SQRT X*X + Y*Y))) )
    ADJ  ( (ZERO (X IS ZERO AND Y IS ZERO))
           (NORMALIZED (MAGNITUDE = 1.0)) )
    MSG  ( (+ VECTORPLUS OPEN T)
           (- VECTORDIFFERENCE) ) ))

Since GLISP compilation is performed relative to the knowledge base of object descriptions, the object descriptions must be declared prior to GLISP compilation of functions using those descriptions. 2.2. Structure Descriptions Much of the power of GLISP is derived from its use of Structure Descriptions. A Structure Description (abbreviated "SD") ... Extract: Introduction GLISP is a high-level language, based on Lisp and including Lisp as a sublanguage, which is compiled into Lisp relative to a knowledge base of object descriptions, a form of abstract datatypes. GLISP syntax includes PASCAL-like control structures and infix arithmetic expressions. In addition, GLISP permits English-like phrases which may include definite reference to objects which are in the current computational context, or to their features. Substructures or properties of objects may be referenced by the forms ... in Proceedings of the 10th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL '83), 1983. Extract: Introduction GLISP (Novak 1982, 1983a, 1983b) is a high-level language, based on LISP and including LISP as a sublanguage, that is compiled into LISP (which can be further compiled to machine language by the LISP compiler). The GLISP system runs within an existing LISP system and provides an integrated programming environment that includes automatic incremental compilation of GLISP programs, interactive execution and debugging, and display-based editing and inspection of data. Use of GLISP makes writing, debugging, and modifying programs significantly easier; at the same time, the code produced by the compiler is optimized so that its execution efficiency is comparable to that of handwritten LISP.
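The VECTOR declaration above maps naturally onto object-oriented features of other languages. A Python analogue (an illustration only, not part of GLISP) renders the PROP entry MAGNITUDE as a computed property, the ADJ entries as boolean predicates, and the MSG entries as operator overloads:

```python
import math

class Vector:
    """Python analogue of the VECTOR object description (assumed layout)."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    @property
    def magnitude(self):
        # PROP MAGNITUDE: a virtual field, computed rather than stored.
        return math.sqrt(self.x * self.x + self.y * self.y)

    @property
    def is_zero(self):
        # ADJ ZERO: "X IS ZERO AND Y IS ZERO".
        return self.x == 0 and self.y == 0

    @property
    def is_normalized(self):
        # ADJ NORMALIZED: "MAGNITUDE = 1.0".
        return self.magnitude == 1.0

    def __add__(self, other):
        # MSG (+ VECTORPLUS): operator overloading via the message facility.
        return Vector(self.x + other.x, self.y + other.y)

    def __sub__(self, other):
        # MSG (- VECTORDIFFERENCE).
        return Vector(self.x - other.x, self.y - other.y)

v = Vector(3, 4) + Vector(0, 0)
print(v.magnitude)   # → 5.0
```

The "OPEN T" flag in the GLISP declaration asks the compiler to expand VECTORPLUS in-line; Python has no direct equivalent, so this analogue always dispatches at runtime.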
This article describes features of GLISP and illustrates them with examples. Most of the syntax of GLISP is similar to LISP syntax or PASCAL syntax, so explicit treatment of GLISP syntax will be brief. GLISP programs are compiled relative to a knowledge base of object descriptions, a form of abstract data types (Liskov et al. 1977; Wulf, London, and Shaw 1976). A primary goal of the use of abstract data types in GLISP is to make programming easier. The implementations of objects are described in a single place; the compiler uses the object descriptions to convert GLISP code written in terms of user objects into efficient LISP code written in terms of the implementations of the objects in LISP. This allows the implementations of objects to be changed without changing the code; it also allows the same code to be effective for objects that are implemented in different ways and thereby allows the accumulation of programming knowledge in the form of generic programs. Extract: Nature of GLisp GLISP contains ordinary LISP as a sublanguage; LISP code can be mixed with GLISP code, so that no capabilities of the underlying LISP system are lost. GLISP provides PASCAL-like reference to substructures and properties, infix arithmetic expressions, and PASCAL-like control statements. Object-centered programming is built in; optimized compilation allows object-centered programs to run efficiently. GLISP is easily extensible for new object representations. Operator overloading for user-defined objects occurs automatically when arithmetic operators are defined as message selectors for those objects. The compiler can compile optimized code for access to objects represented in user-specified representation languages. GLISP has also been extended as a hardware description language for describing VLSI designs.
in Proceedings of the SIGPLAN '83 Symposium on Programming Language Issues in Software Systems, 1983, San Francisco. External link: Online copy at U Texas. Extract: GLISP Language and Compiler GLISP [novak:glisp, novak:gluser, novak:tose92] is a high-level language with abstract data types that is compiled into Lisp. It allows the data structure of an object to be specified, so that the programmer has control of the structure and representation of data. Many features of GLISP are easily understood as extensions to object-oriented programming; a GLISP type is analogous to a class in OOP. Associated with each type are message-like operations; computed properties are side-effect-free operations involving only the object itself. As in OOP, there is a hierarchy of types; messages and properties can be inherited from parent types. GLISP supports run-time message sending; however, it is desirable to compile in-line code rather than interpreted message calls. The compiler performs type inference while compiling expressions.
When the type of an object is known at compile time, a message to the object can be compiled as a direct function call, or a specialized version of a generic function can be compiled, or the function can be expanded and compiled in-line. The type of the result of a message or property can be specified; this allows a particular view of an object to be selected. Symbolic optimization [schaefer] folds operations on constants, removes unnecessary code, and combines operations where appropriate; this improves efficiency and provides conditional compilation by eliminating conditionals when the test can be evaluated at compile time. External link: Online copy pdf.
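The constant-folding part of symbolic optimization can be sketched as follows. This is a minimal Python illustration using a hypothetical tuple representation of expressions, not the GLISP compiler's actual data structures:

```python
import operator

# An expression is either a constant, a variable name (string),
# or a nested tuple (op, left, right).
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def fold(expr):
    """Recursively fold operations on constants; leave variables alone."""
    if not isinstance(expr, tuple):
        return expr                        # constant or variable name
    op, left, right = expr
    left, right = fold(left), fold(right)
    if isinstance(left, (int, float)) and isinstance(right, (int, float)):
        return OPS[op](left, right)        # both operands known: compute now
    return (op, left, right)               # keep the residual operation

print(fold(("+", ("*", 2, 3), "x")))           # → ('+', 6, 'x')
print(fold(("*", ("+", 1, 1), ("-", 5, 2))))   # → 6
```

Conditional compilation falls out of the same idea: when the test of a conditional folds to a constant, the dead branch is simply dropped.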