FLPL

for FORTRAN List Processing Language.

Package of FORTRAN subroutines for handling lists. They were developed initially for automatic geometry theorem proving at IBM in 1958 by Hansen, Gelernter, and Gerberich.

Implemented some of the notions (and terms) of IPL-V (whose storage organization they termed Newell-Shaw-Simon memory, or NSS), but in a more directly representational manner.

FLPL influenced Weizenbaum's SLIP, the language in which his program Eliza was first implemented.

"A set of subroutines added to FORTRAN in connection with work on geometry theorem proving" (Sammet 1966)

Related languages
FORTRAN II => FLPL   Extension of
IPL-V => FLPL   Influence
FLPL => LISP   Influence
FLPL => SLIP   Influence
FLPL => Tabory graph-theoretic language   Incorporated some features of

  • Gelernter, H. and Rochester, N., "Intelligent behavior in problem-solving machines", IBM J. Res. Dev. (1958), pp. 336-345
  • Gelernter, H., "Realization of a geometry theorem-proving machine", Proc. Int. Conf. on Information Processing, Unesco, Paris (1959)
  • Gelernter, H.; Hansen, J.R.; Gerberich, C.L. "A FORTRAN Compiled List Processing Language" Abstract: A compiled computer language for the manipulation of symbolic expressions organized in storage as Newell-Shaw-Simon lists has been developed as a tool to make more convenient the task of programming the simulation of a geometry theorem-proving machine on the IBM 704 high-speed electronic digital computer. Statements in the language are written in usual Fortran notation, but with a large set of special list-processing functions appended to the standard Fortran library. The algebraic structure of certain statements in this language corresponds closely to the structure of an NSS list, making possible the generation and manipulation of complex list expressions with a single statement. The many programming advantages accruing from the use of Fortran, and in particular, the ease with which massive and complex programs may be revised, combined with the flexibility offered by an NSS list organization of storage make the language particularly useful where, as in the case of our theorem-proving program, intermediate data of unpredictable form, complexity, and length may be generated.
    Extract: Introduction
    I. Introduction
    Until recently, digital computer design has been strongly oriented toward increased speed and facility in the arithmetic manipulation of numbers, for it is in this mode of operation that most calculations are performed. With greater appreciation of the ultimate capacity of such machines, however, and with increased understanding of the techniques of information processing, many computer programs are now being written that deal largely with entities that are purely symbolic and processes that are logistic rather than arithmetic. One such effort is the simulation of a geometry theorem-proving machine being investigated by the authors and D. Loveland at the Yorktown IBM Research Center [1, 2]. This massive simulation program has a characteristic feature in common with many other such symbol-manipulating routines, and in particular, with those intended to carry out some abstract problem-solving process by the use of heuristic methods [3, 4]. The intermediate data generated by such programs are generally unpredictable in their form, complexity, and length. Arbitrary lists of information may or may not contain as data an arbitrary number of items or sublists. To allocate beforehand to each possible list a block of storage sufficient to contain some reasonable maximum amount of information would quickly exhaust all available fast-access storage as well as prescribe rigidly the organization of information in the lists. A program failure caused by some list exceeding its allotted space while most of the remainder of storage is almost empty could be expected as a not uncommon occurrence.
    Faced with this problem, Newell, Shaw, and Simon, in programming a heuristic theorem-proving system for the propositional calculus, simulated (by programming) a kind of associative memory (henceforth referred to as an NSS memory) in which lists of arbitrary length and organization could be generated by annexing registers from a common store [5]. The price paid for this very substantial increase in programming flexibility is an apparent decrease (by a factor of about one-half) in usable high-speed storage and a real decrease in the speed of indexing consecutive items in a given list. The debilities are due to the fact that consecutive items of data are not in consecutive memory registers, as in a standard memory, but are, rather, connected by a string of location words. These location words, however, determine the organization of the data, and in programs such as the one for which the NSS memory was developed, the organization of the data contains a good deal of the information about it. In addition, the location words themselves can carry several bits of useful data and can be used to annex a given item on several different lists, making repetition of the data unnecessary. Consequently, as the number and complexity of the generated lists increases, the density of useful information stored in an NSS memory approaches one word per register.
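The cell structure described above can be sketched in modern terms (a Python illustration of the NSS idea, not FLPL itself; all names here are invented for the sketch):

```python
# An NSS-style list cell: each "location word" links to the next cell and
# can itself carry a few spare tag bits alongside the item it annexes.
class Cell:
    def __init__(self, item=None, link=None, tag=0):
        self.item = item   # address part: the datum, or the head of a sublist
        self.link = link   # decrement part: next location word in the list
        self.tag = tag     # useful bits carried in the location word itself

def annex(store, item, head, tag=0):
    """Grow a list by annexing a cell from the common free store."""
    cell = store.pop()
    cell.item, cell.link, cell.tag = item, head, tag
    return cell

# A common store of free registers shared by all lists.
free_store = [Cell() for _ in range(16)]

# Build the list (3 2 1) by annexing cells; consecutive items are connected
# by links, not by consecutive addresses.
head = None
for x in (1, 2, 3):
    head = annex(free_store, x, head)

# Indexing consecutive items means chasing location words one by one.
items = []
p = head
while p is not None:
    items.append(p.item)
    p = p.link
```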
    The decrease in processing speed is not so easily shrugged off. By modifying the logical design of the instruction roster to permit, for example, indirect addressing from both the left and the right half of a register (decrement and address, respectively, in the IBM 704), much improvement may be realized in this respect. But the ability to quickly withdraw a specified item of data by computing its address is inexorably lost. Lacking the built-in refinements of indirect addressing and other special instructions designed to manipulate NSS lists, Newell et al. designed an interpretive routine for their computer (the RAND JOHNNIAC) to lighten the task of translating their programming wishes into the arithmetic-oriented instruction code of the JOHNNIAC. In fact a series of these interpretive languages were written and were called by their authors "Information Processing Languages" [5]. Unfortunately, the introduction of an intermediate interpreter for each command further extracts its toll in computing speed, so that relatively simple operations require an inordinate amount of time. This is in large degree responsible for the great disparity in time required by the propositional calculus theorem prover of Newell et al. and that of Wang [6] to prove the same theorems, Wang's machine being from three to five orders of magnitude faster. The designers of the Information Processing Languages (IPL) estimate that a complex operation like choosing a strong move in a game of chess would require of the order of an hour when programmed in their interpretive system.
    When the present authors embarked upon their effort to simulate a geometry theorem-proving machine, it was early decided that an NSS organization of memory would best serve their purpose, and consideration was given to the translation of a JOHNNIAC IPL for use with the IBM 704 computer. However, J. McCarthy, who was then consulting for the project, suggested that Fortran could be adapted to serve the same purpose. He pointed out that the nesting of functions that is allowed within the Fortran format makes possible the construction of elaborate information-processing subroutines with a single statement. The authors have since discovered a further substantial advantage of an algebraic
    list-processing language such as one written within the framework of Fortran. It is the close analogy that exists between the structure of an NSS list and a certain class of algebraic expressions that may be written within the language. We shall return to this point in greater detail below. Not to be overlooked is the considerable sophistication incorporated into the Fortran compiler itself, all of which carries over, of course, into our Fortran-compiled list-processing language. It is reasonable to estimate that a routine written in our language would run about five times as fast as the same program written in an interpretive language. Extract: The FORTRAN List-Processing Functions
    III. The FORTRAN List-Processing Functions
    Throughout the remainder of this paper, it will be assumed that the reader is familiar with the FORTRAN compiling system for the IBM 704 Data Processing Machine and has at his disposal the reference manuals describing the original system and its extensions, FORTRAN II and III. It must be emphasized that FORTRAN is in itself an information processing language of great versatility and sophistication. Our list-processing functions merely serve to increase the "vocabulary" of the language so that list manipulation processes may be described within the FORTRAN framework as are ordinary computer processes. We are thus able to enjoy the same ease of programming, ease of modification, and extensive debugging aids available to the programmers of standard numerical problems. Since this paper is not intended to serve as a programmer's manual for FLPL, many details essential for its use will be omitted. The description of the language is completed in an IBM internal memorandum, soon to be available.
    The dominant characteristic of most of our special list-processing functions is that they are not functions at all in the normally understood sense of the term. For many of them, the value of the function depends not only upon its arguments, but also upon the particular internal state configuration of the computer at the time it is evaluated. Too, some of these functions not only have numerical values, but also produce changes in the internal state configuration of the computer as they are "evaluated." Indeed, one often uses them solely to effect such
    a change, discarding the unwanted value of the function. Most of our list-processing functions are, in fact, arbitrary subroutines that can be compounded and manipulated according to the algebraic rules for the compounding and manipulation of functions in the FORTRAN language.
    The primitive (coded directly in machine language, and available on the library tape) FLPL functions fall into seven rather well defined groups for performing the operations enumerated below:
    (a)  information retrieval
    (b)  information storage
    (c)  information processing
    (d)  information search
    (e)  list generation
    (f)   service routines
    (g)  special purpose functions
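The way such primitives compound by nesting can be sketched as follows (a Python illustration; the function names echo FLPL's Fortran naming style but are invented for this sketch, not the documented FLPL library):

```python
# Lists are modelled as linked pairs, terminated by None.
def XCONSF(item, lst):      # list generation: annex an item to the front
    return (item, lst)

def XCARF(lst):             # information retrieval: first item of a list
    return lst[0]

def XCDRF(lst):             # information retrieval: remainder of a list
    return lst[1]

# A compound operation written as a single nested "statement": generate a
# three-item list and fetch its second item, all in one expression.
second = XCARF(XCDRF(XCONSF(10, XCONSF(20, XCONSF(30, None)))))
```

Because each primitive is an ordinary function returning a value, arbitrarily complex list operations can be built up by composition, just as the paper describes for FORTRAN expressions.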
    By combining the primitive operations according to the rules of FORTRAN, list-processing operations and subroutines of arbitrary complexity may be constructed. The compound operations can be named, if desired, so that they may be used as elements of larger routines. Extract: Concluding Remarks
    IV. Concluding Remarks
    FLPL has been, in a sense, specifically "product-developed" for the geometry theorem-proving program, and thus far the latter is the only large-scale program written in that language. The geometry program, however, comprises three largely dissimilar subsections which span the entire range of complex information-processing operations [2]. On one hand the syntax computer deals almost exclusively with uninterpreted symbolic expressions, while on the other hand the diagram computer is mostly concerned with a highly structured array of numerical data. The heuristic computer, which serves as an intermediary between syntax and diagram computers, must process both kinds of information, interpreting symbolic expressions as numerical equations and converting numerical data into abstract symbolic expressions. On this basis it is reasonable to claim a fair degree of universality for FLPL, providing only that the requirements of the problem indicate the desirability of an NSS organization of storage.
    It is interesting to compare FLPL with IPL V, a Newell-Shaw-Simon interpretive list-processing language soon to be available for the IBM 704. In IPL V, Newell et al. have solved the problem of list priority assignment in much the same way that the authors have, by assigning the equivalent of our type code to each Z-word. Both languages are able to perform identical list-processing operations with the following exceptions. FLPL contains, in addition to the processes described in section III, all legitimate FORTRAN operations, so that the entire floating point arithmetic power of FORTRAN is at the programmer's disposal, together with the convenient input, output, indexing, and format processes available in
    FORTRAN. Too, the diagnostic services performed by FORTRAN offer great aid and comfort to the programmer. On the other hand, because IPL V completely discards the accumulator as a means of communication between IPL subroutines, substituting instead an NSS communication list, one may define routines recursively within the framework of IPL V. Although it is possible to do a limited amount of recursion within FLPL by purposefully saving all intermediate results and index registers in NSS lists, the authors have not yet felt the need to experiment with this mode of operation, since the traditional looping procedures have served the purpose well.
    Higher list-processing routines may easily be defined within either language, although perhaps here FLPL maintains a slight edge over IPL V in the variety of procedures whereby such routines may be constructed and named. In the other direction, the inclusion of basic machine language instructions within the vocabulary of the language is a decided advantage of FLPL, and one that has been pressed with great frequency in the programming of the geometry theorem-proving machine.
    FLPL trades a negligible increase in program storage for a significant increase in processing speed. In many cases, the difference in computing time required to produce results will make the difference between a program that is a useful research tool and one that is merely a curiosity. For a typical problem appearing in a high-school geometry examination, our FLPL-programmed geometry machine requires a reasonable twenty minutes. The authors feel that the two hours required by the same program written in IPL V would be an excessive amount of time to allow the machine to search for a proof.
    Comparison of the two languages in terms of programming convenience is largely taste-dependent. Again, the authors feel that an algebraic compounding of mnemonically-named expressions is preferable to a linear sequencing of "catalog-number" designated routines, but programmers experienced in the use of IPL seem to find little advantage in the change of program format. A new programmer, unless he has had previous training in the use of FORTRAN, is likely to require about the same amount of time to gain proficiency in either language.
    One feature of IPL V is excluded from FLPL by the nature of a compiler. Sequences of IPL instructions to be interpreted are stored in the computer as NSS lists, just as are the data. Although this property has been largely irrelevant to all programs written to date, it is conceivable that one might want to write a program in which the symbolic entities that are processed are IPL instructions themselves, and in which transfers of control take place between the meta-program and the machine-generated one. The fact that the transformation of FLPL expressions into computer activity is a two-stage, irreversible process places this kind of behavior beyond the range of our language, even though it is quite feasible to manipulate FLPL expressions within FLPL.
          in [ACM] JACM 7(2) (1960).
  • Hansen, J.R. "The Use of the FORTRAN Compiled List Processing Language", IBM T. J. Watson Research Center, Research Report RC-282, Yorktown Heights, June 1960
  • Hoffman, Walter: Review of Gelernter 1960 paper Extract: Review
    GELERNTER, H.; HANSEN, J. R.; AND GERBERICH, C. L. A Fortran-compiled list-processing language. J. Assoc. Comput. Mach. 7, 2 (April 1960), 87-101.
    This paper contains an introductory description of FLPL which is an extension of the Fortran Compiler System for the IBM 704. A geometry theorem proving program was written in this language by the authors and D. Loveland. In addition to standard Fortran, FLPL contains a number of list processing functions which are available on library tapes. Examples show how fairly complex symbol manipulation processes are easily coded by nesting and combining these functions according to the rules of Fortran.

    FLPL is compared with Newell, Simon, and Shaw's IPL V, an interpretive list processing language for the 704. The advantages of FLPL are the availability of the arithmetic and diagnostic abilities of Fortran as well as higher operating speed of the compiled program. On the other hand, IPL V has the advantage of defining functions recursively and the ability to consider and manipulate sequences of IPL instructions as lists.

    Walter Hoffman, Detroit, Mich.

          in ACM Computing Reviews, January-December 1960
  • R. M. Shapiro "Computers, connector systems, and data descriptions" pp. 72-73
          in Artificial languages
  • Barbieri, R. "Computer List-Processing Languages", IBM, Data Systems Division, Poughkeepsie, New York, Technical Report No. TR00.1209, November 1964 Extract: Sammet summary
    This report describes several list structures and some important aspects of their implementation. It describes and compares FLPL, IPL-V, LISP and SLIP. It also gives examples of the use of SLIP in manipulating power series.
          in Artificial languages
  • Foster, J. M. "List Processing", London, Macdonald, 1967
          in Artificial languages
  • Bobrow, Daniel: Review of Foster 1967 Extract: Review
    [ Book ] FOSTER, J. M.
    List processing.
    American Elsevier Publ., New York, 1967, 54 pp. $4.50.

    This tiny (54 pages) monograph on list processing is a beautiful introduction to techniques which have been used for some time in many applications. Among the topics included are: 1) the representations of lists (both in the computer and externally); and 2) operations on lists, garbage collection, and typical list languages. One short chapter is also devoted to an example of list processing used in a program to perform syntactic analysis of language. All the examples in the book are programmed in an extended ALGOL with added list processing functions, and are carefully chosen to illustrate important points about list processing. The book's principal weakness is its lack of depth, which is not really to be expected in such a brief monograph. The summaries of the list processing languages IPL-V, LISP, FLIP, FLPL, and COMIT are too brief to give any real feeling for these languages to people who have not seen them before. But these are minor quibbles with a very fine book which will broaden the outlook of people who think of computers only as tools to manipulate numbers, and will allow them to begin to appreciate what can be done with list processing techniques for symbol manipulation.

    D. G. Bobrow, Cambridge, Mass.

          in ACM Computing Reviews 9(01) January 1968
  • Knuth, Donald E. "The Art of Computer Programming", Addison-Wesley, Reading, MA, 1968 Extract: History And Bibliography
    History And Bibliography
    Linear lists and rectangular arrays of information kept in consecutive memory locations were widely used from the earliest days of stored-program computers, and the earliest treatises on programming gave the basic algorithms for traversing these structures. [ For example, see J. von Neumann, Collected Works, vol. V, 113-116 (written 1947); M. V. Wilkes, D. J. Wheeler, S. Gill, The Preparation of Programs for an Electronic Digital Computer (Reading, Mass.: Addison-Wesley, 1951), subroutine V-1. ] Before the days of index registers, operations on sequential linear lists were done by performing arithmetic on the machine language instructions themselves, and this type of operation was one of the early motivations for having a computer whose programs share memory space with the data they manipulate.

    Techniques which permit variable-length linear lists to share sequential locations, in such a way that they shift back and forth when necessary, as described in Section 2.2.2, were apparently a much later invention. J. Dunlap of Digitek Corporation developed these techniques in 1963 in connection with the design of a series of compiler programs; about the same time the idea independently appeared in the design of a COBOL compiler at IBM Corporation, and a collection of related subroutines called CITRUS was subsequently used at, various installations. The techniques remained unpublished until after they had been independently developed by Jan Garwick of Norway; see BIT 4 (1964), 137-140.

    The idea of having linear lists in non-sequential locations seems to have originated in connection with the design of computers with drum memories; the first such computer was developed by General Electric Corporation during 1946-1948, and the IBM 650 was the most notable later machine of this type. After executing the instruction in location n, such a computer is usually not ready to get its next instruction from location n + 1 because the drum has already rotated past this point. Depending on the instruction being performed, the most favorable position for the next instruction might be n + 7 or n + 18, etc., and the machine can operate up to six or seven times faster if its instructions are optimally located rather than consecutive. [ For a discussion of the interesting problems concerning best placement of these instructions, see the author's article in JACM 8 (1961), 119-150. ] Therefore the machine design provides an extra address field in each machine language instruction, to serve as a link to the next instruction. Such a machine is called a "one-plus-one-address computer," as distinguished from MIX, which is a "one-address computer." The design of one-plus-one-address computers is apparently the first appearance of the linked-list idea within computer programs, although the dynamic insertion and deletion operations which we have used so frequently in this chapter were still unknown.

    Linked memory techniques were really born when A. Newell, C. Shaw, and H. Simon began their investigations of heuristic problem-solving by machine. As an aid to writing programs that searched for proofs in mathematical logic, they designed the first "list-processing" language IPL-II in the spring of 1956. (IPL stands for Information Processing Language.) This was a system which made use of links and included important concepts like the list of available space, but the concept of stacks was not yet well developed; IPL-III was designed a year later, and it included "push down" and "pop up" for stacks as important basic operations. (For references to IPL-II see "The Logic Theory Machine," IRE Transactions on Information Theory IT-2 (Sept. 1956), 61-70; see also Proc. Western Joint Comp. Conf. (Feb. 1957), 218-240. Material on IPL-III first appeared in course notes given at the University of Michigan in the summer of 1957.)

    The work of Newell, Shaw, and Simon inspired many other people to use linked memory (which was often at the time referred to as NSS memory), mostly for problems dealing with simulation of human thought processes. Gradually, the techniques became recognized as basic computer programming tools; the first article describing the usefulness of linked memory for "down-to-earth" problems was published by J. W. Carr, III, in CACM 2 (Feb. 1959), 4-6. Carr pointed out in this article that linked lists can readily be manipulated in ordinary programming languages, without requiring sophisticated sub-routines or interpretive systems. See also G. A. Blaauw, IBM J. Res. and Dev. 3 (1959), 288-301.

    At first, one-word nodes were used for linked tables, but about 1959 the usefulness of several consecutive words per node and "multilinked" lists was gradually being discovered by several different groups of people. The first article dealing specifically with this idea was published by D. T. Ross, CACM 4 (1961), 147-150; at that time he used the term "plex" for what has been called a "node" in this chapter, but he subsequently has used the word "plex" in a different sense to denote a class of nodes combined with associated algorithms for their traversal.

    Notations for referring to fields within nodes are generally of two kinds: the name of the field either precedes or follows the pointer designation. Thus, while we have written "INFO(P)" in this chapter, some other authors write, for example, "P.INFO". At the time this chapter was prepared, the two notations seemed to be equally prominent. The notation adopted here has been used in the author's lectures since 1962 and it has also been independently devised by numerous other people, primarily because it is natural to use mathematical functional notation to describe attributes of a node. (For example, see the article by N. Wirth and C. A. R. Hoare, CACM 9 (1966), 413-432.) Note that "INFO(P)" is pronounced "info of P" in conventional mathematical verbalization, just as f(x) is rendered "f of x." The alternative notation P.INFO has less of a natural flavor, since it tends to put the emphasis on P, although it can be read "P's info"; the reason INFO(P) seems to be more natural is apparently the fact that P is variable, but INFO has a fixed significance when the notation is employed. By analogy, we could consider a vector A = (A[1], A[2], ..., A[100]) to be a node having 100 fields named 1, 2, ..., 100. Now the second field would be referred to as "2(P)" in our notation, where P points to the vector A; but if we are referring to the jth element of the vector, we find it more natural to write A[j], putting the variable quantity "j" second. Similarly it seems most appropriate to put the variable quantity "P" second in the notation INFO(P).
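The two notations correspond to the two accessor styles still in use today, which can be sketched minimally (a Python illustration; the node layout and names are invented for the sketch):

```python
# A node with INFO and LINK fields, accessible in both styles.
class Node:
    def __init__(self, info, link=None):
        self.info = info
        self.link = link

def INFO(p):      # field-as-function: reads as "info of P", like f(x)
    return p.info

def LINK(p):      # same style for the link field
    return p.link

p = Node(42, Node(7))
functional = INFO(LINK(p))   # INFO(LINK(P)) composes like nested functions
attribute = p.link.info      # P.LINK.INFO puts the variable pointer first
```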

    Perhaps the first people to recognize that the concepts "stack" (last-in-first-out) and "queue" (first-in-first-out) are important objects of study were cost accountants interested in reducing income tax assessments; for a discussion of the "LIFO" and "FIFO" methods of pricing inventories, see any intermediate accounting textbook, e.g., C. F. and W. J. Schlatter, Cost Accounting (New York: Wiley, 1957), Chapter 7. In 1947 A. M. Turing developed a stack, called Reversion Storage, for use in subroutine linkage (see Section 1.4.5). No doubt simple uses of stacks kept in sequential memory locations were common in computer programming from the earliest days, since a stack is such a simple and natural concept. The programming of stacks in linked form appeared first in IPL, as stated above; the name "stack" stems from IPL terminology (although "pushdown list" was the more official IPL wording), and it was also independently introduced in Europe by E. W. Dijkstra. "Deque" is a term introduced by E. J. Schweppe.
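The linked form of a "pushdown list" can be sketched in a few lines (a Python illustration of the idea, with cells modelled as linked pairs):

```python
# The head cell itself is the top of the stack, so both operations touch
# only the head link.
def push(top, item):
    return (item, top)        # new cell whose link is the old top

def pop(top):
    item, rest = top          # unlink the head cell
    return item, rest

top = None
for x in (1, 2, 3):
    top = push(top, x)

item, top = pop(top)          # last in, first out: yields the 3 pushed last
```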

    The origin of circular and doubly linked lists is obscure; presumably these ideas occurred naturally to many people. A strong factor in the popularization of these techniques was the existence of general List-processing systems based on them [principally the Knotted List Structures, CACM 5 (1962), 161-165, and Symmetric List Processor, CACM 6 (1963), 524-544, of J. Weizenbaum ] .

    Various methods for addressing and traversing multidimensional arrays of information were developed independently by clever programmers since the earliest days of computers, and thus another part of the unpublished computer folklore came into existence. This subject was first surveyed in print by H. Hellerman, CACM 5 (1962), 205-207. See also J. C. Gower, Comp. J. 4 (1962), 280-286.

    Tree structures represented explicitly in computer memory were described for the first time in applications to algebraic formula manipulation. The A-1 compiler language, developed by G. M. Hopper in 1951, used arithmetic expressions written in a three-address code; the latter is equivalent to the INFO, LLINK, and RLINK of a binary tree representation. In 1952, H. G. Kahrimanian developed algorithms for differentiating algebraic formulas represented in the A-1 compiler language; see Symposium on automatic Programming (Washington, D.C.: Office of Naval Research, May 1954), 6-14.

    Since then, tree structures in various guises have been studied independently by many people in connection with numerous computer applications, but the basic techniques for tree manipulation (not general List manipulation) have seldom appeared in print except in detailed description of particular algorithms. The first general survey was made in connection with a more general study of all data structures by K. E. Iverson and L. R. Johnson [ IBM Corp. research reports RC-390, RC-603, 1961; see Iverson, A Programming Language (New York: Wiley, 1962), Chapter 3 ] . See also G. Salton, CACM 5 (1962), 103-114.

    The concept of threaded trees is due to A. J. Perlis and C. Thornton, CACM 3 (1960), 195-204. Their paper also introduced the important idea of traversing trees in various orders, and gave numerous examples of algebraic manipulation algorithms. Unfortunately, this important paper was hastily prepared and it contains many misprints. The threaded lists of Perlis and Thornton actually were only "right-threaded trees" in our terminology; binary trees which are threaded in both directions were independently discovered by A. W. Holt, A Mathematical and Applied Investigation of Tree Structures (Thesis, U. of Pennsylvania, 1963). Postorder and preorder for the nodes of trees were called "normal along order" and "dual along order" by Z. Pawlak, Colloquium on the Foundation of Mathematics, etc. (Tihany, 1962, published by Akademiai Kiado, Budapest, 1965), 227-238. Preorder was called "subtree order" by Iverson and Johnson in the references cited above. Graphical ways to represent the connection between tree structures and corresponding linear notations were described by A. G. Oettinger, Proc. Harvard Symp. on Digital Computers and their Applications (April, 1961), 203-224.

    The history of tree structures as mathematical entities, together with a bibliography of the subject, is reviewed in Section

    At the time this section was written, the most widespread knowledge about information structures was due to programmers' exposure to List processing systems, which have a very important part in this history. The first widely used system was IPL-V (a descendant of IPL-III, developed late in 1959); IPL-V is an interpretive system in which a programmer learns a machine-like language for List operations. At about the same time, FLPL (a set of FORTRAN sub-routines for List manipulation, also inspired by IPL but using subroutine calls instead of interpretive language) was developed by H. Gelernter and others. A third system, LISP, was designed by J. McCarthy, also in 1959. LISP is quite different from its predecessors: programs for it are expressed in mathematical functional notation combined with "conditional expressions" (see Chapter 8), then converted into a List representation. Many List processing systems have come into existence since then, of which the most prominent historically is J. Weizenbaum's SLIP; this is a set of subroutines for use in FORTRAN programs, operating on doubly linked Lists.

    An article by Bobrow and Raphael, CACM 7 (1964), 231-240, may be read as a brief introduction to IPL-V, LISP, and SLIP, and it gives a comparison of these systems. An excellent introduction to LISP has been given by P. M. Woodward and D. P. Jenkins, Comp. J. 4 (1961), 47-53. See also the authors' discussions of their own systems, which are each articles of considerable historical importance: "An introduction to IPL-V" by A. Newell and F. M. Tonge, CACM 3 (1960), 205-211; "A FORTRAN-compiled List Processing Language" by H. Gelernter, J. R. Hansen, and C. L. Gerberich, JACM 7 (1960), 87-101; "Recursive functions of symbolic expressions and their computation by machine, I" by John McCarthy, CACM 3 (1960), 184-195; "Symmetric List Processor" by J. Weizenbaum, CACM 6 (1963), 524-544. The latter article includes a complete description of all of the algorithms used in SLIP. In recent years a number of books about these systems have also been written.

    Several string manipulation systems have also appeared; these are primarily concerned with operations on variable-length strings of alphabetic information (looking for occurrences of certain substrings, etc.). Historically, the most important of these have been COMIT (V. H. Yngve, CACM 6 (1963), 83-84) and SNOBOL (D. J. Farber, R. E. Griswold, and I. P. Polonsky, JACM 11 (1964), 21-30). Although string manipulation systems have seen wide use and although they are primarily composed of algorithms such as we have seen in this chapter, they play a comparatively small role in the history of the techniques of information structure representation; users of these systems have largely been unconcerned about the details of the actual internal processes carried on by the computer. For a survey of string manipulation techniques, see S. E. Madnick, CACM 10 (1967), 420-424.

    The IPL-V and FLPL systems for List-processing did not use either a garbage collection or a reference count technique for the problem of shared Lists; instead, each List was "owned" by one List and "borrowed" by all other Lists which referred to it, and a List was erased when its "owner" allowed it to be. Hence, the programmer was enjoined to make sure no List was still borrowing any Lists that were being erased. The reference counter technique for Lists was introduced by G. E. Collins, CACM 3 (1960), 655-657; see also the important sequel to this paper, CACM 9 (1966), 578-588. Garbage collection was first described in McCarthy's article cited above; see also CACM 7 (1964), 38, and an article by Cohen and Trilling, BIT 7 (1967), 22-30.
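    The reference-counter technique that Collins introduced can be sketched in a few lines. The names below are illustrative only, not drawn from any of the systems cited: each cell carries a count of the Lists referring to it, a link from one cell to another raises the successor's count, and releasing a reference reclaims a cell (cascading down the List) only when its count reaches zero, so shared tails survive until their last borrower lets go.

```python
class Node:
    """One cell of a singly linked List, carrying a reference count."""
    def __init__(self, value, link=None):
        self.value = value
        self.link = link
        self.refs = 0                 # raised by acquire() and by incoming links
        if link is not None:
            link.refs += 1            # this cell now refers to its successor

def acquire(node):
    """Record one more List (or variable) referring to node."""
    if node is not None:
        node.refs += 1
    return node

def release(node):
    """Give up one reference; reclaim cells whose count falls to zero."""
    freed = []                        # a real system would return cells to a free list
    while node is not None:
        node.refs -= 1
        if node.refs > 0:
            break                     # some other List still borrows from here on
        freed.append(node.value)
        node = node.link
    return freed

a = acquire(Node(1, Node(2, Node(3))))   # the List 1 -> 2 -> 3
shared = acquire(a.link)                 # a second List borrowing the tail at 2
assert release(a) == [1]                 # only cell 1 is reclaimed; the tail is shared
assert release(shared) == [2, 3]         # now the rest can go too
```

Erasing `a` frees only its unshared head; the shared tail is reclaimed only after `shared` is also released, which is exactly the problem the owner/borrower discipline of IPL-V and FLPL left to the programmer.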

    Unfortunately, too few people have realized that manipulation of links is not a magic, complex process that must be done by fancy List manipulation routines. There is still a widespread tendency to feel that the advantages of link manipulation can be obtained only by using a large List-processing system; actually, as we have seen in many examples throughout this chapter, it is quite easy to design linking algorithms which operate more efficiently than a general List-processing subroutine could possibly do. The author hopes that these detailed examples will help to convey a better-balanced attitude towards the use of structural links in data.
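    The point can be made concrete with a few assignments: deleting or inserting a node in a doubly linked list needs no general List-processing machinery at all. The following minimal sketch (an illustration, not code from any system discussed above) uses a circular list with a header node, so each operation is a handful of constant-time link assignments.

```python
class DNode:
    """A cell of a circular doubly linked list with a header node."""
    def __init__(self, value=None):
        self.value = value
        self.prev = self.next = self  # a lone node links to itself

def insert_after(pos, node):
    """Splice node in after pos: four link assignments, O(1)."""
    node.prev, node.next = pos, pos.next
    pos.next.prev = node
    pos.next = node

def delete(node):
    """Splice node out: two link assignments, O(1)."""
    node.prev.next = node.next
    node.next.prev = node.prev

head = DNode()                        # empty list: header only
one, two = DNode(1), DNode(2)
insert_after(head, one)               # head <-> 1
insert_after(one, two)                # head <-> 1 <-> 2
delete(one)                           # head <-> 2
assert head.next.value == 2 and head.next.next is head
```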

    Dynamic storage allocation algorithms were in use several years before published information about them appeared. A very readable discussion is given by W. T. Comfort, CACM 7 (1964), 357-362 (an article written in 1961). The "boundary-tag" method, introduced in Section 2.5, was designed by the author in 1962 for use in a control program for the B5000 computer. The "buddy system" was first used by H. Markowitz in connection with the SIMSCRIPT programming system in 1963, and it was independently discovered and published by K. Knowlton, CACM 8 (1965), 623-625; see also CACM 9 (1966), 616-625. For further discussion of dynamic storage allocation, see the articles by Iliffe and Jodeit, Comp. J. 5 (1962), 200-209; Bailey, Barnett, and Burleson, CACM 7 (1964), 339-346; Berztiss, CACM 8 (1965), 512-513; and D. T. Ross, CACM 10 (1967), 481-492.
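    The buddy system mentioned above admits a compact sketch (this is an illustration of the idea, not Markowitz's or Knowlton's actual code). Blocks have power-of-two sizes; the buddy of the size-2^k block at address a is the block at a XOR 2^k; allocation splits a larger free block as needed, and freeing merges a block with its buddy whenever the buddy is also free.

```python
def buddy_of(addr, k):
    """Address of the buddy of the size-2**k block at addr."""
    return addr ^ (1 << k)

class BuddyAllocator:
    """Toy buddy-system allocator over a pool of 2**max_k words."""
    def __init__(self, max_k):
        self.max_k = max_k
        self.free = {k: set() for k in range(max_k + 1)}
        self.free[max_k].add(0)               # one big free block at address 0

    def alloc(self, k):
        """Reserve a block of size 2**k; return its address, or None."""
        j = k
        while j <= self.max_k and not self.free[j]:
            j += 1                            # smallest free block that fits
        if j > self.max_k:
            return None                       # no block large enough
        addr = self.free[j].pop()
        while j > k:                          # split, freeing each upper half
            j -= 1
            self.free[j].add(addr + (1 << j))
        return addr

    def release(self, addr, k):
        """Free the size-2**k block at addr, merging with free buddies."""
        while k < self.max_k and buddy_of(addr, k) in self.free[k]:
            self.free[k].remove(buddy_of(addr, k))
            addr = min(addr, buddy_of(addr, k))
            k += 1                            # the merged pair is one larger block
        self.free[k].add(addr)

pool = BuddyAllocator(4)                      # a 16-word pool
a = pool.alloc(2)                             # a 4-word block at address 0
b = pool.alloc(2)                             # its buddy, at address 4
pool.release(a, 2)
pool.release(b, 2)                            # cascading merges restore the pool
assert pool.free[4] == {0}
```

The XOR trick is what makes merging cheap: a block's buddy address is computable from its own address and size alone, with no search.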

    A general discussion of information structures and their relation to programming has been given by Mary d'Imperio, "Data Structures and their Representation in Storage," Annual Review of Automatic Programming 5 (Oxford: Pergamon Press), in press. This paper is also a valuable guide to the history of the topic, since it includes a detailed analysis of the structures used in connection with twelve List processing and string manipulation systems. See also the proceedings of two symposia, CACM 3 (1960), 183-234 and CACM 9 (1966), 567-643, for further historical details. (Several of the individual papers from these proceedings have already been cited above.)

    An excellent annotated bibliography, which is primarily oriented towards applications to symbol manipulation and algebraic formula manipulation but which has numerous connections with the material of this chapter, has been compiled by Jean E. Sammet, Comput. Rev. 7 (July-August 1966), B1-B31.

          in ACM Computing Reviews 9(01) January 1968 view details
  • Sammet, Jean E. "Revised Annotated Descriptor Based Bibliography for the Use of Computers for Non-Numerical Mathematics" view details
          in Bobrow, D. G. (ed) "Symbol Manipulation Languages and Techniques", Proceedings of the IFIP Working Conference on Symbol Manipulation Languages. North-Holland Publishing Co., Amsterdam, 1968 view details
  • Sammet, Jean E. "Computer Languages - Principles and History" Englewood Cliffs, N.J. Prentice-Hall 1969. p.388. view details