IPL-III (ID:162/ipl003)

Information Processing Language version 3 


IPL-III was largely a conceptual phase and existed only briefly.

Related languages:
IPL-II => IPL-III   Evolution of
IPL-III => IPL-IV   Evolution of

References:
  • Ware, Willis H. "JOHNNIAC Eulogy", RAND Corporation document P-3313, March 1966, pp. 11-12. Extract: Notable uses of JOHNNIAC
    In the earliest days of 1954, most programming was done in machine language and in absolute octal at that. In 1955 Jules Schwartz wrote the first assembly routine for JOHNNIAC and Cliff Shaw produced a revised assembler in 1956. Then came QUAD, an interpretive programming system, and SMAC, a small compiler. Each was noted for being foolproof. The non-professional programmer could use these systems comfortably; his errors would be reported to him in great detail by the machine. There were other significant contributions to the programming art as well; among them were items with such names as EASY-FOX, CLEM, JBL-4, J-100, MORTRAN done by Mort Bernstein, and Load-and-Go.

    In the late fifties, the nature of JOHNNIAC's task changed. The rental equipment from IBM carried most of the computing load for the RAND staff. JOHNNIAC became a free good; its time was available for research use. The cost of operation was sufficiently low that one need not be concerned about using large amounts of machine time. Much of its time was consumed by research on the general questions of artificial intelligence, and the initials NSS came to be closely associated with JOHNNIAC. These are the initials of Allen Newell, Cliff Shaw, and Herb Simon, who used the machine extensively for research. During this period came such achievements as:
  • List structures;
  • list processing techniques and their embodiment in such languages as IPL-2, -3, -4;
  • Chess playing routines such as CP-1 and -2;
  • Theorem proving routines such as LT -- the Logic Theorist;
  • The general problem solver GPS;
  • The assembly line balancer of Fred Tonge.
  • Knuth, Donald E. The Art of Computer Programming, vol. 1, Addison-Wesley, Reading, MA, 1968. Extract: History And Bibliography
    Linear lists and rectangular arrays of information kept in consecutive memory locations were widely used from the earliest days of stored-program computers, and the earliest treatises on programming gave the basic algorithms for traversing these structures. [For example, see J. von Neumann, Collected Works, vol. V, 113-116 (written 1947); M. V. Wilkes, D. J. Wheeler, S. Gill, The Preparation of Programs for an Electronic Digital Computer (Reading, Mass.: Addison-Wesley, 1951), subroutine V-1.] Before the days of index registers, operations on sequential linear lists were done by performing arithmetic on the machine language instructions themselves, and this type of operation was one of the early motivations for having a computer whose programs share memory space with the data they manipulate.

    Techniques which permit variable-length linear lists to share sequential locations, in such a way that they shift back and forth when necessary, as described in Section 2.2.2, were apparently a much later invention. J. Dunlap of Digitek Corporation developed these techniques in 1963 in connection with the design of a series of compiler programs; about the same time the idea independently appeared in the design of a COBOL compiler at IBM Corporation, and a collection of related subroutines called CITRUS was subsequently used at various installations. The techniques remained unpublished until after they had been independently developed by Jan Garwick of Norway; see BIT 4 (1964), 137-140.

    The idea of having linear lists in non-sequential locations seems to have originated in connection with the design of computers with drum memories; the first such computer was developed by General Electric Corporation during 1946-1948, and the IBM 650 was the most notable later machine of this type. After executing the instruction in location n, such a computer is usually not ready to get its next instruction from location n + 1 because the drum has already rotated past this point. Depending on the instruction being performed, the most favorable position for the next instruction might be n + 7 or n + 18, etc., and the machine can operate up to six or seven times faster if its instructions are optimally located rather than consecutive. [For a discussion of the interesting problems concerning best placement of these instructions, see the author's article in JACM 8 (1961), 119-150.] Therefore the machine design provides an extra address field in each machine language instruction, to serve as a link to the next instruction. Such a machine is called a "one-plus-one-address computer," as distinguished from MIX, which is a "one-address computer." The design of one-plus-one-address computers is apparently the first appearance of the linked-list idea within computer programs, although the dynamic insertion and deletion operations which we have used so frequently in this chapter were still unknown.

    Linked memory techniques were really born when A. Newell, C. Shaw, and H. Simon began their investigations of heuristic problem-solving by machine. As an aid to writing programs that searched for proofs in mathematical logic, they designed the first "list-processing" language, IPL-II, in the spring of 1956. (IPL stands for Information Processing Language.) This was a system which made use of links and included important concepts like the list of available space, but the concept of stacks was not yet well developed; IPL-III was designed a year later, and it included "push down" and "pop up" for stacks as important basic operations. (For references to IPL-II see "The Logic Theory Machine," IRE Transactions on Information Theory IT-2 (Sept. 1956), 61-70; see also Proc. Western Joint Comp. Conf. (Feb. 1957), 218-240. Material on IPL-III first appeared in course notes given at the University of Michigan in the summer of 1957.)
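
    As a concrete illustration of the operations named above, here is a minimal sketch in C of a linked stack whose nodes are taken from and returned to a shared list of available space; the names get_node, put_node, push, and pop are illustrative, not IPL's own:

        #include <stdio.h>
        #include <stdlib.h>

        /* A one-word-style node: a value and a link. */
        struct node {
            int          info;
            struct node *link;
        };

        static struct node *avail = NULL;   /* the list of available space */

        /* Take a node from AVAIL, or obtain fresh storage if AVAIL is empty. */
        static struct node *get_node(void) {
            if (avail != NULL) {
                struct node *n = avail;
                avail = avail->link;
                return n;
            }
            return malloc(sizeof(struct node));
        }

        /* Return a node to AVAIL instead of freeing it. */
        static void put_node(struct node *n) {
            n->link = avail;
            avail = n;
        }

        /* "Push down": prefix a new node onto the stack. */
        static void push(struct node **stack, int value) {
            struct node *n = get_node();
            n->info = value;
            n->link = *stack;
            *stack = n;
        }

        /* "Pop up": remove the top node (assumes a non-empty stack). */
        static int pop(struct node **stack) {
            struct node *n = *stack;
            int value = n->info;
            *stack = n->link;
            put_node(n);
            return value;
        }

        int main(void) {
            struct node *stack = NULL;
            push(&stack, 1);
            push(&stack, 2);
            printf("%d %d\n", pop(&stack), pop(&stack));   /* prints: 2 1 */
            return 0;
        }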

    The work of Newell, Shaw, and Simon inspired many other people to use linked memory (which was often at the time referred to as NSS memory), mostly for problems dealing with simulation of human thought processes. Gradually, the techniques became recognized as basic computer programming tools; the first article describing the usefulness of linked memory for "down-to-earth" problems was published by J. W. Carr, III, in CACM 2 (Feb. 1959), 4-6. Carr pointed out in this article that linked lists can readily be manipulated in ordinary programming languages, without requiring sophisticated subroutines or interpretive systems. See also G. A. Blaauw, IBM J. Res. and Dev. 3 (1959), 288-301.

    At first, one-word nodes were used for linked tables, but about 1959 the usefulness of several consecutive words per node and "multilinked" lists was gradually being discovered by several different groups of people. The first article dealing specifically with this idea was published by D. T. Ross, CACM 4 (1961), 147-150; at that time he used the term "plex" for what has been called a "node" in this chapter, but he subsequently has used the word "plex" in a different sense to denote a class of nodes combined with associated algorithms for their traversal.

    Notations for referring to fields within nodes are generally of two kinds: the name of the field either precedes or follows the pointer designation. Thus, while we have written "INFO(P)" in this chapter, some other authors write, for example, "P.INFO". At the time this chapter was prepared, the two notations seemed to be equally prominent. The notation adopted here has been used in the author's lectures since 1962 and it has also been independently devised by numerous other people, primarily because it is natural to use mathematical functional notation to describe attributes of a node. (For example, see the article by N. Wirth and C. A. R. Hoare, CACM 9 (1966), 413-432.) Note that "INFO(P)" is pronounced "info of P" in conventional mathematical verbalization, just as f(x) is rendered "f of x." The alternative notation P.INFO has less of a natural flavor, since it tends to put the emphasis on P, although it can be read "P's info." The reason INFO(P) seems to be more natural is apparently the fact that P is variable, but INFO has a fixed significance when the notation is employed. By analogy, we could consider a vector A = (A[1], A[2], ..., A[100]) to be a node having 100 fields named 1, 2, ..., 100. Now the second field would be referred to as "2(P)" in our notation, where P points to the vector A; but if we are referring to the jth element of the vector, we find it more natural to write A[j], putting the variable quantity "j" second. Similarly it seems most appropriate to put the variable quantity "P" second in the notation INFO(P).
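
    In a present-day language the two notations name the same storage; a small hedged sketch in C, using the field names of this chapter, recovers the functional style INFO(P) from the member-access style by means of macros:

        #include <stdio.h>

        struct node { int info; struct node *llink, *rlink; };

        /* Functional-style field notation, INFO(P); in C the P.INFO style
           is spelled p->info. Both select the same field. */
        #define INFO(p)  ((p)->info)
        #define LLINK(p) ((p)->llink)
        #define RLINK(p) ((p)->rlink)

        int main(void) {
            struct node n = { 42, NULL, NULL };
            struct node *p = &n;
            printf("%d %d\n", INFO(p), p->info);   /* same field, two notations */
            return 0;
        }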

    Perhaps the first people to recognize that the concepts "stack" (last-in-first-out) and "queue" (first-in-first-out) are important objects of study were cost accountants interested in reducing income tax assessments; for a discussion of the "LIFO" and "FIFO" methods of pricing inventories, see any intermediate accounting textbook, e.g., C. F. and W. J. Schlatter, Cost Accounting (New York: Wiley, 1957), Chapter 7. In 1947 A. M. Turing developed a stack, called Reversion Storage, for use in subroutine linkage (see Section 1.4.5). No doubt simple uses of stacks kept in sequential memory locations were common in computer programming from the earliest days, since a stack is such a simple and natural concept. The programming of stacks in linked form appeared first in IPL, as stated above; the name "stack" stems from IPL terminology (although "pushdown list" was the more official IPL wording), and it was also independently introduced in Europe by E. W. Dijkstra. "Deque" is a term introduced by E. J. Schweppe.

    The origin of circular and doubly linked lists is obscure; presumably these ideas occurred naturally to many people. A strong factor in the popularization of these techniques was the existence of general List-processing systems based on them [principally the Knotted List Structures, CACM 5 (1962), 161-165, and Symmetric List Processor, CACM 6 (1963), 524-544, of J. Weizenbaum].

    Various methods for addressing and traversing multidimensional arrays of information were developed independently by clever programmers since the earliest days of computers, and thus another part of the unpublished computer folklore came into existence. This subject was first surveyed in print by H. Hellerman, CACM 5 (1962), 205-207. See also J. C. Gower, Comp. J. 4 (1962), 280-286.

    Tree structures represented explicitly in computer memory were described for the first time in applications to algebraic formula manipulation. The A-1 compiler language, developed by G. M. Hopper in 1951, used arithmetic expressions written in a three-address code; the latter is equivalent to the INFO, LLINK, and RLINK of a binary tree representation. In 1952, H. G. Kahrimanian developed algorithms for differentiating algebraic formulas represented in the A-1 compiler language; see Symposium on Automatic Programming (Washington, D.C.: Office of Naval Research, May 1954), 6-14.

    Since then, tree structures in various guises have been studied independently by many people in connection with numerous computer applications, but the basic techniques for tree manipulation (not general List manipulation) have seldom appeared in print except in detailed description of particular algorithms. The first general survey was made in connection with a more general study of all data structures by K. E. Iverson and L. R. Johnson [IBM Corp. research reports RC-390, RC-603, 1961; see Iverson, A Programming Language (New York: Wiley, 1962), Chapter 3]. See also G. Salton, CACM 5 (1962), 103-114.

    The concept of threaded trees is due to A. J. Perlis and C. Thornton, CACM 3 (1960), 195-204. Their paper also introduced the important idea of traversing trees in various orders, and gave numerous examples of algebraic manipulation algorithms. Unfortunately, this important paper was hastily prepared and it contains many misprints. The threaded lists of Perlis and Thornton actually were only "right-threaded trees" in our terminology; binary trees which are threaded in both directions were independently discovered by A. W. Holt, A Mathematical and Applied Investigation of Tree Structures (Thesis, U. of Pennsylvania, 1963). Postorder and preorder for the nodes of trees were called "normal along order" and "dual along order" by Z. Pawlak, Colloquium on the Foundation of Mathematics, etc. (Tihany, 1962, published by Akademiai Kiado, Budapest, 1965), 227-238. Preorder was called "subtree order" by Iverson and Johnson in the references cited above. Graphical ways to represent the connection between tree structures and corresponding linear notations were described by A. G. Oettinger, Proc. Harvard Symp. on Digital Computers and their Applications (April, 1961), 203-224.
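
    The traversal orders mentioned above differ only in where the "visit" step falls relative to the recursive descents into the subtrees; a minimal sketch in C (the function and field names are illustrative):

        #include <stdio.h>

        struct tnode { int info; struct tnode *llink, *rlink; };

        /* Preorder: visit a node before its subtrees. */
        static void preorder(struct tnode *t) {
            if (t) { printf("%d ", t->info); preorder(t->llink); preorder(t->rlink); }
        }

        /* Postorder: visit a node after its subtrees. */
        static void postorder(struct tnode *t) {
            if (t) { postorder(t->llink); postorder(t->rlink); printf("%d ", t->info); }
        }

        int main(void) {
            struct tnode l = {1, NULL, NULL}, r = {3, NULL, NULL}, root = {2, &l, &r};
            preorder(&root);  printf("\n");   /* prints: 2 1 3 */
            postorder(&root); printf("\n");   /* prints: 1 3 2 */
            return 0;
        }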

    The history of tree structures as mathematical entities, together with a bibliography of the subject, is reviewed in Section 2.3.4.6.

    At the time this section was written, the most widespread knowledge about information structures was due to programmers' exposure to List processing systems, which have a very important part in this history. The first widely used system was IPL-V (a descendant of IPL-III, developed late in 1959); IPL-V is an interpretive system in which a programmer learns a machine-like language for List operations. At about the same time, FLPL (a set of FORTRAN sub-routines for List manipulation, also inspired by IPL but using subroutine calls instead of interpretive language) was developed by H. Gelernter and others. A third system, LISP, was designed by J. McCarthy, also in 1959. LISP is quite different from its predecessors: programs for it are expressed in mathematical functional notation combined with "conditional expressions" (see Chapter 8), then converted into a List representation. Many List processing systems have come into existence since then, of which the most prominent historically is J. Weizenbaum's SLIP; this is a set of subroutines for use in FORTRAN programs, operating on doubly linked Lists.

    An article by Bobrow and Raphael, CACM 7 (1964), 231-240, may be read as a brief introduction to IPL-V, LISP, and SLIP, and it gives a comparison of these systems. An excellent introduction to LISP has been given by P. M. Woodward and D. P. Jenkins, Comp. J. 4 (1961), 47-53. See also the authors' discussions of their own systems, which are each articles of considerable historical importance: "An introduction to IPL-V" by A. Newell and F. M. Tonge, CACM 3 (1960), 205-211; "A FORTRAN-compiled List Processing Language" by H. Gelernter, J. R. Hansen, and C. L. Gerberich, JACM 7 (1960), 87-101; "Recursive functions of symbolic expressions and their computation by machine, I" by John McCarthy, CACM 3 (1960), 184-195; "Symmetric List Processor" by J. Weizenbaum, CACM 6 (1963), 524-544. The latter article includes a complete description of all of the algorithms used in SLIP. In recent years a number of books about these systems have also been written.

    Several string manipulation systems have also appeared; these are primarily concerned with operations on variable-length strings of alphabetic information (looking for occurrences of certain substrings, etc.). Historically, the most important of these have been COMIT (V. H. Yngve, CACM 6 (1963), 83-84) and SNOBOL (D. J. Farber, R. E. Griswold, and I. P. Polonsky, JACM 11 (1964), 21-30). Although string manipulation systems have seen wide use and although they are primarily composed of algorithms such as we have seen in this chapter, they play a comparatively small role in the history of the techniques of information structure representation; users of these systems have largely been unconcerned about the details of the actual internal processes carried on by the computer. For a survey of string manipulation techniques, see S. E. Madnick, CACM 10 (1967), 420-424.

    The IPL-V and FLPL systems for List-processing did not use either a garbage collection or a reference count technique for the problem of shared Lists; instead, each List was "owned" by one List and "borrowed" by all other Lists which referred to it, and a List was erased when its "owner" allowed it to be. Hence, the programmer was enjoined to make sure no List was still borrowing any Lists that were being erased. The reference counter technique for Lists was introduced by G. E. Collins, CACM 3 (1960), 655-657; see also the important sequel to this paper, CACM 9 (1966), 578-588. Garbage collection was first described in McCarthy's article cited above; see also CACM 7 (1964), 38, and an article by Cohen and Trilling, BIT 7 (1967), 22-30.
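
    A minimal sketch, in C, of the reference-count technique (the names rc_new, rc_retain, and rc_release are illustrative, not Collins's): each node records how many references exist to it, and its storage is reclaimed, along with any tail it alone was keeping alive, when that count reaches zero:

        #include <stdlib.h>

        struct rc_node {
            int             refs;   /* number of live references to this node */
            int             info;
            struct rc_node *link;
        };

        /* Create a node holding one reference; it takes over the caller's
           reference to the tail passed as link. */
        static struct rc_node *rc_new(int info, struct rc_node *link) {
            struct rc_node *n = malloc(sizeof *n);
            n->refs = 1;
            n->info = info;
            n->link = link;
            return n;
        }

        /* Share a node: bump its count. */
        static struct rc_node *rc_retain(struct rc_node *n) {
            if (n) n->refs++;
            return n;
        }

        /* Drop a reference; at zero, free the node and release its tail. */
        static void rc_release(struct rc_node *n) {
            while (n && --n->refs == 0) {
                struct rc_node *tail = n->link;
                free(n);
                n = tail;
            }
        }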

    Unfortunately, too few people have realized that manipulation of links is not a magic, complex process that must be done by fancy List manipulation routines. There is still a widespread tendency to feel that the advantages of link manipulation can be obtained only by using a large List-processing system; actually, as we have seen in many examples throughout this chapter, it is quite easy to design linking algorithms which operate more efficiently than a general List-processing subroutine could possibly do. The author hopes that these detailed examples will help to convey a better-balanced attitude towards the use of structural links in data.

    Dynamic storage allocation algorithms were in use several years before published information about them appeared. A very readable discussion is given by W. T. Comfort, CACM 7 (1964), 357-362 (an article written in 1961). The "boundary-tag" method, introduced in Section 2.5, was designed by the author in 1962 for use in a control program for the B5000 computer. The "buddy system" was first used by H. Markowitz in connection with the SIMSCRIPT programming system in 1963, and it was independently discovered and published by K. Knowlton, CACM 8 (1965), 623-625; see also CACM 9 (1966), 616-625. For further discussion of dynamic storage allocation, see the articles by Iliffe and Jodeit, Comp. J. 5 (1962), 200-209; Bailey, Barnett, and Burleson, CACM 7 (1964), 339-346; Berztiss, CACM 8 (1965), 512-513; and D. T. Ross, CACM 10 (1967), 481-492.
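
    The arithmetic at the heart of the buddy system is compact enough to show directly: a block of size 2^k at offset x has its unique "buddy" at offset x XOR 2^k, so the partner with which a freed block may recombine is found without any searching. A hedged sketch in C (buddy_of is an illustrative name):

        #include <stdio.h>

        /* Offset of the buddy of a block of size 2^k located at offset. */
        static unsigned long buddy_of(unsigned long offset, unsigned k) {
            return offset ^ (1UL << k);
        }

        int main(void) {
            /* The two 16-unit halves of a 32-unit block at offset 0: */
            printf("%lu\n", buddy_of(0, 4));    /* prints: 16 */
            printf("%lu\n", buddy_of(16, 4));   /* prints: 0  */
            /* Splitting the block at 16 into 8-unit halves gives 16 and 24: */
            printf("%lu\n", buddy_of(16, 3));   /* prints: 24 */
            return 0;
        }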

    A general discussion of information structures and their relation to programming has been given by Mary d'Imperio, "Data Structures and their Representation in Storage," Annual Review of Automatic Programming 5 (Oxford: Pergamon Press), in press. This paper is also a valuable guide to the history of the topic, since it includes a detailed analysis of the structures used in connection with twelve List processing and string manipulation systems. See also the proceedings of two symposia, CACM 3 (1960), 183-234 and CACM 9 (1966), 567-643, for further historical details. (Several of the individual papers from these proceedings have already been cited above.)

    An excellent annotated bibliography, which is primarily oriented towards applications to symbol manipulation and algebraic formula manipulation but which has numerous connections with the material of this chapter, has been compiled by Jean E. Sammet, Comput. Rev. 7 (July-August 1966), B1-B31.

  • Simon, H. A.; Newell, A. "Information Processing Language V on the IBM 650". Extract: IPLs
    In late 1954, the authors began a collaboration with J. C. (Cliff) Shaw in research on complex information processing with computers. The initial goal was to develop a computer program that could learn to play chess, but in the autumn of 1955 this target was displaced by the interim goal of writing a program that could prove theorems in the propositional calculus of Principia Mathematica. Newell and Shaw, both on the staff of the Rand Corporation in Santa Monica, had access to the newly completed JOHNNIAC computer there, while Simon, on the faculty of Carnegie Institute of Technology, was a consultant to Rand.
    In order to achieve our goals, we decided that it was essential to devise a programming language capable of handling large, complex symbol structures whose size and configuration would be extremely difficult to predict in advance, and would have to be modified during run time. Our solution was to organize memory in terms of list structures and to construct a language designed to operate on lists. By the end of 1955, the first list-processing language, IPL-II, had taken shape; early in the following year it was operating on the JOHNNIAC and a theorem-proving system had been coded in it.
    Meanwhile, Newell had moved to Pittsburgh in order to take a doctoral degree there, having interrupted his graduate education when he began work at Rand some years earlier. In Pittsburgh, he retained his affiliation with Rand, and the research continued vigorously, but with Shaw and Newell on opposite ends of a teletype wire that connected them across the continent -- a sort of prehistoric network with human node processors similar to Arpanet IMPs.
    On the campus of Carnegie Institute of Technology, a few faculty members had been exposed, in the early 1950s, to the new electronic computers. As we have seen, one of these was Simon; another was Charles C. Holt, an economist and engineer. Both were associated with the Graduate School of Industrial Administration (GSIA), a "new-look" business school established in 1949 with a strong emphasis on the developing tools of operations research and management science. Holt, Newell, and Simon thought the time ripe to bring a computer to the CIT campus, and were successful in persuading the dean of GSIA, G. L. Bach, and the dean of Science and Engineering, Richard Teare, to underwrite the cost. Since the electrical engineers on campus were mainly concerned with avoiding responsibility for maintaining such a machine if it arrived, and most mathematicians and scientists could not see how or why they would use one, there was no objection to locating the computer in the basement of GSIA.
    Choosing an appropriate computer called for consultation with other universities that already had one, and consultation led to Alan Perlis at Purdue, with the result that an IBM 650 and Alan arrived in Pittsburgh in the spring and summer, respectively, of 1956. Perlis has told his own story of the 650 at Carnegie in the preceding article in this issue. Here, we need only record our deep gratitude for his imaginative and productive leadership of computing at the university during the decade and a half he was our colleague.
    By the spring of 1956, a number of graduate students, enrolled in Simon's course on Mathematical Methods in Social Science, were considering doing theses on complex information processing (alias artificial intelligence). These included Edward A. Feigenbaum, Julian Feldman, Robert K. Lindsay, Fred M. Tonge, and soon afterward Geoffrey Clarkson. Although we could provide them with some access to the Rand machines (Tonge wrote his thesis program in IPL-IV on the JOHNNIAC), it became imperative that we bring up a list-processing language for student and faculty use at Carnegie. Today, a programmer might have second thoughts about putting a list-processing language on a machine with only 2000 words of high-speed memory. When we remember that the high-speed store of the JOHNNIAC was only 4096 words (supplemented by a drum with about 10,000 words of usable capacity), the memory limits of the 650, while severe, seemed manageable.
    IPL-V, the language developed for the IBM 650, "started out in late 1957 to be a 'modified copy' of IPL-IV (then being implemented on JOHNNIAC)" (Newell 1963, p. 86). An initial running system was produced under Newell's direction in early 1958, mainly by Carleton B. Hensley and Tonge (Hensley, Newell, and Tonge 1958). Meanwhile, since the Rand Corporation had acquired an IBM 704, it was decided that the language should be designed to run on both the 650 and the 704. The revised language, with Newell, Tonge, Feigenbaum, Bert Green, Jr., and George Mealy as its principal designers, was described, in June 1958, in a preliminary version of the IPL-V manual, and the system became operational on the 704 at the end of the summer of 1959 (Newell 1963).
    Thus IPL-V, having been coded for both the 650 and the 704 and provided with a manual (first edition, 1961; second edition, 1964), became the first list-processing language to be made available for public use. Subsequently, IPL-V was brought up on a substantial number of other computers and continued for a decade to be an important language, both for research in artificial intelligence and cognitive science and for teaching the basic concepts of list processing.
    A glance at the pioneering research that is collected in Feigenbaum and Feldman's Computers and Thought (1963) shows that the IPL-V system on the 650 at Carnegie made important contributions to the foundations of AI and cognitive science. Among the programs written at that time were Feigenbaum's EPAM, a simulation of verbal learning processes, Feldman's program for simulating human decisions in the binary choice experiment, Kenneth Laughery's program for concept formation, Lindsay's SAD SAM, an early program with natural language and reasoning capabilities, and Clarkson's simulation of the investment decisions of a bank trust officer. As can be seen from this list, most of this research focused on the simulation of human cognitive processes.
    Although these programs were written in IPL-V, and at least partially debugged on the 650 at Carnegie, most experience in running them on actual tasks was gained on other machines. Both the small memory of the 650 and its brief tenure at Carnegie after IPL-V became available prevented extensive runs of large programs with it. So the greatest significance of the machine in the history of AI was as the instigator and test bed of the first public list-processing language, and as an instrument for teaching list processing to the first generation of students in cognitive science.
    The availability of IPL-V on the 650, and of a carefully written manual describing the language (Newell et al. 1961; 1964), contributed much also to the diffusion of knowledge of list-processing techniques to other university campuses and AI research groups. Because it could be run on a wide variety of computers of the 1960s, someone quipped that IPL-V was a machine-independent language, and that the machine it was independent of was the 650.
    An interesting, but probably undecidable, historical question is whether IPL-V made a significant contribution to the set of concepts that were some years later labeled "structured programming." At the time IPL-V was produced, mainstream systems programmers and researchers on algebraic programming languages paid little attention to list-processing languages, which were generally regarded as esoteric and unbearably wasteful of machine time and memory. It was probably Chapter II of Knuth's memorable Fundamental Algorithms (1969) that first gave them a measure of credibility outside the AI community. Therefore, although a strong case can be made that the central principles of structured programming were developed and employed extensively by users of list-processing languages, almost from the day those languages became available, the principles were likely largely unknown to the developers of algebraic languages who independently reinvented them.
    The IPL-V manual is quite explicit in its advocacy of top-down programming and independent closed subroutines. A few brief quotes will indicate the explicitness of its conceptions in this domain.
    One programming strategy, often called the "top-down" approach, is to divide each large process, no matter how complicated, into a small number of subprocesses. Each of these subprocesses is given a name, and its function -- the processing it accomplishes -- is defined precisely by specifying exactly what inputs it requires and what outputs it produces. How the subprocess will carry on this processing does not matter at this stage. (Kelly and Newell 1964, pp. 104-105)
    Once any process is coded, attention can be directed to developing and coding each of its subprocesses, using exactly the same strategy of decomposing these into subprocesses. Ultimately, subprocesses are reached that can be defined directly in terms of the IPL primitive processes, so that the decomposition comes to a stop. Although apparently at each stage all the complexities are being relegated to the subprocesses and only codes for trivial processes are being written, it will be found at last that nothing complicated remains to be done at the bottom of the hierarchy. (1964, p. 105)
    Another principle may be called the principle of isolation. The flexibility in hierarchical organization depends on each subroutine being isolated from the rest of the program, except for a small number of well-defined connections.... Concretely, one subroutine should not link to another. (1964, p. 109)
    The top-down approach (with some needed qualifications, not quoted here), the characterization of processes solely in terms of inputs and outputs, hierarchical structure, and wariness of GOTOs are all here, quite explicitly. Nearly a quarter-century later, the principles of programming enunciated in the IPL-V manual sound as modern and relevant as when they were written. It can be seen that the IBM 650 at Carnegie Institute of Technology played a significant role in the early research on artificial intelligence and cognitive science, not so much because it provided computing cycles -- although it did that, too -- as because it provided the occasion for developing the first widely used list-processing language, and a facility for training many early computer scientists in the concepts and skills required for using computers to do complex information processing.

          in Annals of the History of Computing 8(1), January 1986 (IBM 650 Issue)