GPS(ID:3075/gps001)

General Problem Solver

Newell & Simon 1957 RAND Corp/Carnegie Institute of Technology

Based on the Logic Theorist. Written in, and outputs to, IPL-V

A production system that modeled behavior as a function of memory operations, control processes, and rule sets.

These were used to build a simulation, and the results were then compared with the behavior of competent human problem solvers.


People:
Structures:
Related languages:
IPL-V => GPS   Written using
IPL-V => GPS   Target language for
GPS => LISP   Influence
GPS => OPS   Evolution of

References:
  • Newell, Allen; Shaw, J. C.; Simon, H. A.: "Preliminary description of general problem solving program - I (GPS-1)" Pittsburgh, Pa.: Graduate School of Industrial Administration, Carnegie Inst. of Technology 1957. (CIP Working Paper No. 7.) view details
  • Deutsch, M. L. review of Simon and Newell 1961 (GPS) view details Abstract: This article is the first of a series of two. The structure of the now well-known General Problem Solver (GPS) of the authors and J. C. Shaw is described in general terms. The plausible claim is made that the program embodying the GPS is a theory of human problem solving in the sense that at least certain aspects of human thinking are simulated by it. The GPS is presented as an example of a heuristic program of the simulation type. That is, the GPS' s processes resemble the processes of the human mind. Other heuristic programs may simply solve problems which humans also solve, e.g. chess playing, but which are approached by a computer in quite a different way. It remains to be seen, of course, whether simulation of human thought processes, or quite different attacks on problem solving by computers, will prove most fruitful in equipping computers to take over tasks which, when humans perform them, we call "thinking."
          in ACM Computing Reviews 2(03) May-June 1961 view details
  • Newell, Allen; and Simon, H. A.: "GPS, a program that simulates human thought" in Lernende Automaten. Munchen 1961 view details
  • Simon, Herbert A.; and Newell, Allen "Computer simulation of human thinking and problem solving" pp18-20 view details
          in Datamation 7(06) June 1961 view details
  • Bemer, R "ISO TC97/SC5/WGA(1) Survey of Programming Languages and Processors" December 1962 view details
          in [ACM] CACM 6(03) (Mar 1963) view details
  • Newell, A. and Simon, H. A., "GPS, A Program That Simulates Human Thought", pp279-93 view details
          in Feigenbaum, E. and Feldman, J. (eds.) "Computers and Thought" MIT Press, Cambridge, MA, 1963 view details
  • Newell, Allen: "A guide to the general problem-solver program GPS-2-2" Santa Monica, Calif.: Rand Corp. 1963. (RM-3337-PR) view details
  • Ernst, George Werner; Newell, Allen: "Generality and GPS". Pittsburgh, Pa.: Carnegie Inst. of Technology 1966 view details
  • Ware, Willis H. "JOHNNIAC Eulogy" RAND Corp. document P-3313, March 1966, pp. 11-12 view details Extract: Notable uses of JOHNNIAC
    In the earliest days of 1954, most programming was done in machine language and in absolute octal at that. In 1955 Jules Schwartz wrote the first assembly routine for JOHNNIAC and Cliff Shaw produced a revised assembler in 1956. Then came QUAD, an interpretive programming system, and SMAC, a small compiler. Each was noted for being foolproof. The non-professional programmer could use these systems comfortably; his errors would be reported to him in great detail by the machine. There were other significant contributions to the programming art as well; among them were items with such names as EASY-FOX, CLEM, JBL-4, J-100, MORTRAN done by Mort Bernstein, and Load-and-Go.

    In the late fifties, the nature of JOHNNIAC's task changed. The rental equipment from IBM carried most of the computing load from the RAND staff. JOHNNIAC became a free good; its time was available for research use. The cost of operation was sufficiently low that one need not be concerned about using large amounts of machine time. Much of its time was consumed by research on the general questions of artificial intelligence and the initials NSS came to be closely associated with JOHNNIAC. These are the initials of Allen Newell, Cliff Shaw, and Herb Simon who used the machine extensively for research. During this period came such achievements as:
  • List structures,
  • list processing techniques and their embodiment in such languages as IPL-2, -3, -4;
  • Chess-playing routines such as CP-1 and -2;
  • Theorem proving routines such as LT -- the Logic Theorist;
  • The general problem solver GPS;
  • The assembly line balancer of Fred Tonge.
  • Burton, C. R. review of Ernst and Newell 1967 (GPS) view details Abstract: The authors have programmed a new version, GPS-2.6, of the General Problem Solver. Its purpose is to extend the generality of GPS-2.5 and, at the same time, hold its power at a fixed level. It will be recalled that the GPS uses a heuristic method of difference reduction to solve problems of the type: determine a sequence of operators that transforms an initial situation into a desired situation when the two situations and a set of operators are specified.

    Eleven tasks, ranging from symbolic integration and the Tower of Hanoi ring puzzle to a letter-series completion test, were given to the new GPS. It solved all except Euler's problem of the seven bridges of Konigsberg (which is insolvable). For the missionaries and cannibals problem, the article gives the program and an instructive print-out listing the 57 goals the GPS set up, with its attempts to achieve them.

    It is apparent that a higher degree of generality was achieved than with the original GPS. The authors point out, however, that competitive games and many optimization problems may not be handled by either GPS. Also, part of the tradeoff for more generality was the use of a cruder representation of differences. The whole project is discussed in more detail by the same authors elsewhere (Generality and GPS, Carnegie Institute of Technology, 1966).
          in ACM Computing Reviews 8(05) September-October 1967 view details
  • Ernst, George W.; and Newell, Allen "Some issues of representation in a general problem solver" view details
          in [AFIPS] Proceedings of the 1968 Spring Joint Computer Conference SJCC 32 view details
  • Ernst, George Werner: "Sufficient conditions for the success of GPS" pp517-533 view details
          in [ACM] JACM 16(4) 1969 view details
  • Ernst, George Werner; Newell, Allen: "GPS: a case study in generality and problem solving" New York, London: Academic Press 1969 view details
  • Banerji, R. B. review of Ernst 1969 view details Abstract: The General Problem Solver (GPS) of Newell, Simon and Shaw (Self-organizing systems, Pergamon Press, N.Y., 1960) was designed to solve a large class of problems using a specific heuristic search strategy; it was studied under various problems by Ernst. The GPS conducts its search guided by 3 sets of information provided by the problem poser: 1) the definition of a set of differences between states of the problem; 2) an ordering of the differences (by difficulty); and 3) a table to show which problem transformations affect which difference (the "Table of Connections").

    In this paper, Ernst defines these concepts with mathematical precision and indicates how a specific form of the table of connections (the "triangular," where no difference can be affected by more transformations than affect an easier difference) is useful in reducing search. He then proves that, given a triangular table of connections, the GPS will find a solution if the problem has an "ordered solution," i.e., if the problem and all its subproblems can be solved by removing the most difficult differences first. This gives a sufficient condition for the GPS to succeed in a problem. Unfortunately, no test is known for the existence of an ordered solution. However, in the opinion of this reviewer, this paper makes a step in the right direction in the field of artificial intelligence -- in precision of analysis as well as in meticulous adherence to the realities of the phenomenon under discussion.
          in ACM Computing Reviews 11(10) October 1970 view details
  • Kain, R. Y. review of Ernst and Newell 1969 view details Abstract: In the middle 1950's, diverse groups of people worked on the problem of programming computers to perform complex reasoning tasks -- playing board games and finding mathematical proofs (in logic, geometry, etc.), for example. An early successful attempt at generating proofs was the Logic Theory Machine of Newell, Simon, and Shaw. In the process of structuring the internal representations of the logical statements, Newell, Simon, and Shaw created the concept of a list structure. Later they developed this concept in the form of a family of programming languages, IPL-V being the culmination of that effort.

    Having a powerful (a term we intentionally leave undefined) language, they used it on the artificial intelligence problem of simulating problem solving, and proposed the development of a General Problem Solver (GPS). This book describes a recent (and, it is suggested in the text, final) version of the evolutionary GPS.

    As the authors point out in numerous places, the issue of generality is very complex. In particular, the generality of a problem solver is limited in several ways by the structure and description of 1) the problem solver itself, and 2) the problem situations with which the problem solver deals. The generality of the problem solver is demonstrated by exhibiting its behavior with eleven different and diverse problems, including one which cannot be solved (a fact which the program cannot discover) and one pair which are similar. The statements and solutions for all of these problems are included as figures.

    GPS describes objects as tree structures. The problems it can solve are of the form "Transform object x into an object of the form y." Note that the answer is an object having a certain form, and not necessarily a specific object. (Thus, the task of evaluating an indefinite integral is described as "Transform x, the input, into an object which does not contain integral signs.")

    GPS also has procedures, called operators, which transform objects to produce new objects. Other procedures, called tests, compare objects to each other or to forms, and produce a "same" response, a specification of the difference between the objects, or a "Not the same but I don't know the difference" response. We will discuss some specifics of this below.

    The sequences of operations which can be used to transform the initial object into the desired object can be depicted in a tree structure, in which the nodes represent objects and a directed branch from node x to node y means that an operator can be used to transform the object at node x into the object at node y. The problem solving problem is to determine whether there exists a path from the initial object to the desired object. Of course, we do not want to consider algorithms which amount to generating this complete tree.

    The authors of GPS introduce the concept of "goal" to denote information about the desirable directions in which the tree might be searched. Typical goals have the forms "Apply operator x to object y," or "Remove the negation sign in the first position of object y," or "Transform object x into object y."

    Now, the general goal is to produce an object satisfying the given form from the given initial object. Once a difference between the two objects has been found, a table is consulted to determine what to do to remove that difference. The table will specify some operator to apply. Thus the subgoal is to apply the operator. But that operator may not be applicable to the current object, because the object does not meet the conditions (called "pretests") for the applicability of that operator. Thus, new differences are discovered which need to be removed, and the program proceeds to recursively remove those differences before attempting the previous goal again.

    One cannot quarrel with this general strategy, except that new operators cannot be created, and that the table describing how to remove various types of differences must be specified. However, there is a specific problem, which is that differences cannot themselves be structures, and thus the difference between two objects that are quite dissimilar must be reduced to a single item. This is achieved by specifying an ordering of the relative importance of various types of differences with the statement of the problem.

    It is discovered (hopefully, not surprisingly) that the nature of the problem specification affects the speed of the solution in significant ways. Good programming is needed even at very high levels!
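    The difference-reduction recursion described above can be sketched in a few lines of modern code. This is a hypothetical illustration, not the actual GPS program: the set-of-facts state representation, the operator names, and the toy "fix the car" domain are invented for the example, and an operator's relevance to a difference is read directly off its add-list, standing in for the Table of Connections.

```python
# A minimal means-ends-analysis sketch in the spirit of GPS (hypothetical
# toy domain; not the actual GPS implementation). States are sets of facts;
# an operator lists its preconditions, the facts it adds, and the facts it
# deletes. An operator is relevant to a difference d when d is in its add-list.

from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    name: str
    preconds: frozenset
    adds: frozenset
    deletes: frozenset = frozenset()

def solve(state, goals, ops, depth=10):
    """Return (final_state, plan) achieving every goal, or None on failure."""
    if depth == 0:                       # crude guard against endless recursion
        return None
    plan = []
    for g in goals:                      # each unmet goal is a "difference"
        if g in state:
            continue
        for op in ops:                   # find an operator that removes it
            if g not in op.adds:
                continue
            # Recursively remove the differences blocking this operator
            # (its unmet preconditions), then apply it.
            sub = solve(state, op.preconds, ops, depth - 1)
            if sub is None:
                continue
            state, subplan = sub
            state = (state - op.deletes) | op.adds
            plan += subplan + [op.name]
            break
        else:
            return None                  # no operator reduces this difference
    return state, plan

# Toy problem: get the car fixed.
ops = [
    Op("look-up-number", frozenset({"have-phone-book"}), frozenset({"know-number"})),
    Op("call-shop", frozenset({"know-number"}), frozenset({"shop-notified"})),
    Op("shop-fixes-car", frozenset({"shop-notified", "have-money"}),
       frozenset({"car-works"}), frozenset({"have-money"})),
]
state, plan = solve(frozenset({"have-phone-book", "have-money"}), ["car-works"], ops)
# plan == ['look-up-number', 'call-shop', 'shop-fixes-car']
```

    The sketch omits much that the book discusses: the explicit ordering of differences by difficulty, structured (rather than atomic) differences, and the interactions between sibling subgoals that make the full problem hard.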

    With this introduction to the general strategy of GPS, we turn to a discussion of this book.

    What it needs is less detail and better structure. The excessive detail (sometimes included in non-conventional program descriptions) makes reading very dull. Some of the apparent detail is misleading: in several places we find "Repeat . . ." below several lines of pseudo-code, and it is not clear how much of the preceding should be repeated. What structure there is is denoted by the size of type used in section headings, not by any decimal numbering scheme. Somehow this structure is not the best; the parts in many of the sections are interdependent in complex ways.

    Another problem is that much of the book is taken from the PhD thesis by the first author and consequently contains many comments about how the present GPS differs from its predecessor. These comments are useful for examining committees, but aren't interesting to the casual reader who doesn't have access to equivalent documentation about the predecessor. Therefore, they should have been deleted when this was made into a book.

    One who wishes to learn how GPS relates to artificial intelligence should skip Chapters 3, 4, 5, and 6 (which constitute 213 pages of the book), because it is hard to get much out of them without either: 1) expending enormous effort in imposing structure on the discussion, or 2) in Chapter 6, wading through many details of computer printout for each example presented. For these people, the discussion given above should help provide some structure.

    The book does present a reasonably balanced description of the shortcomings of the system, but it can be recommended only for those who can put up with the disorganized presentations unfortunately made by too many computer professionals.

          in ACM Computing Reviews 11(01) January 1970 view details
  • Sammet, Jean E., "Roster of Programming Languages 1972" p. 118 view details
          in Computers & Automation 21(6B), 30 Aug 1972 view details
  • Simon, Herbert A. "The structure of ill-structured problems" in Artificial Intelligence 4(3&4) (Winter 1973), pp181-201. view details
  • Stock, Marylene and Stock, Karl F. "Bibliography of Programming Languages: Books, User Manuals and Articles from PLANKALKUL to PL/I" Verlag Dokumentation, Pullach/Munchen 1973 272 view details Abstract: PREFACE  AND  INTRODUCTION
    The exact number of all the programming languages still in use, and those which are no longer used, is unknown. Zemanek calls the abundance of programming languages and their many dialects a "language Babel". When a new programming language is developed, only its name is known at first and it takes a while before publications about it appear. For some languages, the only relevant literature stays inside the individual companies; some are reported on in papers and magazines; and only a few, such as ALGOL, BASIC, COBOL, FORTRAN, and PL/1, become known to a wider public through various text- and handbooks. The situation surrounding the application of these languages in many computer centers is a similar one.

    There are differing opinions on the concept "programming languages". What is called a programming language by some may be termed a program, a processor, or a generator by others. Since there are no sharp borderlines in the field of programming languages, works were considered here which deal with machine languages, assemblers, autocoders, syntax and compilers, processors and generators, as well as with general higher programming languages.

    The bibliography contains some 2,700 titles of books, magazines and essays for around 300 programming languages. However, as shown by the "Overview of Existing Programming Languages", there are more than 300 such languages. The "Overview" lists a total of 676 programming languages, but this is certainly incomplete. One author has already announced the "next 700 programming languages"; it is to be hoped the many users may be spared such a great variety for reasons of compatibility. The graphic representations (illustrations 1 & 2) show the development and proportion of the most widely-used programming languages, as measured by the number of publications listed here and by the number of computer manufacturers and software firms who have implemented the language in question. The illustrations show FORTRAN to be in the lead at the present time. PL/1 is advancing rapidly, although PL/1 compilers are not yet seen very often outside of IBM.

    Some experts believe PL/1 will replace even the widely-used languages such as FORTRAN, COBOL, and ALGOL. If this does occur, it will surely take some time - as shown by the chronological diagram (illustration 2).

    It would be desirable from the user's point of view to reduce this language confusion down to the most advantageous languages. Those languages still maintained should incorporate the special facets and advantages of the otherwise superfluous languages. Obviously such demands are not in the interests of computer production firms, especially when one considers that a FORTRAN program can be executed on nearly all third-generation computers.

    The titles in this bibliography are organized alphabetically according to programming language, and within a language chronologically and again alphabetically within a given year. Preceding the first programming language in the alphabet, literature is listed on several languages, as are general papers on programming languages and on the theory of formal languages (AAA).
    As far as possible, most of the titles are based on autopsy. However, the bibliographical description of some titles will not satisfy bibliography-documentation demands, since they are based on inaccurate information in various sources. Translation titles whose original titles could not be found through bibliographical research were not included. In view of the fact that many libraries do not have the quoted papers, all magazine essays should have been listed with the volume, the year, issue number and the complete number of pages (e.g. pp. 721-783), so that interlibrary loans could take place with fast reader service. Unfortunately, these data were not always found.

    It is hoped that this bibliography will help the electronic data processing expert, and those who wish to select the appropriate programming language from the many available, to find a way through the language Babel.

    We wish to offer special thanks to Mr. Klaus G. Saur and the staff of Verlag Dokumentation for their publishing work.

    Graz / Austria, May, 1973
  • Kling, R. E. review of Simon 1972 view details Abstract: Many critics of AI allege that the scope of AI artifacts is limited to highly structured problems such as games and toy robot worlds. In this brief article the author argues that the mechanisms appearing in contemporary AI artifacts are adequate for tackling ill-structured problems. First he articulates certain conventionally accepted characterizations of well-structured problems. Then he argues that problem solving in certain apparently well-structured domains (chess, theorem-proving) is not in fact very well structured by his criteria. In passing, he notes that the designers of most extant AI artifacts add considerable structure to these task areas to improve the performance of their systems. In a second assault he illustrates the way in which problems which seem ill defined in the large appear quite well defined when they are decomposed into a loosely coordinated set of simpler subproblems. The author's reliance upon GPS as the technological base for his analysis limits the boldness of his proposals. GPS is a rather homogeneous system with weak control structures for coordinating subtasks. The GPS structure contrasts with his description of design organization as the cooperation of a community of heterogeneous "task specialists." These specialists may interrogate and negotiate with each other to work out the details of related but disparate subproblems. Unfortunately, the GPS "operators" can't directly interact and trigger each other. Such capabilities, however, do appear in a primitive form in systems such as PLANNER and QA4.

    In short, the author's arguments reinforce his faith that no "new" mechanisms are necessary for AI systems to tackle more complex ill-structured problems. And such faith, like all faith, finds support in a few positive events. For example, if some reasoning by analogy may be accomplished with the traditional AI mechanisms [1], well then, probably all reasoning by analogy can be automated by similar devices. Perhaps, but there is evidence for an alternate view: progress in AI can be seen as our growing sophistication in ways to structure ever more complex processes with an enlarging set of mechanisms. This set would include clever schemes for loosely coordinating diverse processes. Subroutine call by pattern match is one of the newest additions to the set. When we include coroutines, heterarchies, forking, and operator-difference computations, these may not be a sufficiently robust set to allow us the weak, but coherent, control of diverse specialized problem solvers that the author advocates. But in AI, the real proof is in the design principles, which are sufficient to build ambitious problem solvers, rather than in articulate arguments about what is or is not possible.

    REFERENCE

    [1] Kling, R. E. "A paradigm for reasoning by analogy," Artificial Intelligence, Winter 1972, 147-178.

          in ACM Computing Reviews 15(10) October 1974 view details
  • Gruenberger, F. J. "The History of the JOHNNIAC" pp57-59 (reprint of Gruenberger 1968) view details
          in Annals of the History of Computing, July 1979 view details
  • Norvig, Peter "Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp" Morgan Kaufmann 1991 view details Source code