SLIP

for Symmetric LIst Processor.

Joseph Weizenbaum, MIT, 1963

A list-processing language built on doubly-linked lists. Originally embedded in FORTRAN; later also embedded in MAD and ALGOL.

An extension of ALGOL to do formal algebraic manipulation. (Sammet 1966)

Related languages
FLPL => SLIP   Influence
FORTRAN IV => SLIP   Augmentation of
IPL-V => SLIP   Influence
KLS => SLIP   Evolution of
SLIP => ALGEM   Extension of
SLIP => AMPPL-I   Influence
SLIP => AMPPL-II   Written using
SLIP => ASLIP   Implementation
SLIP => GRASPE   Extension of
SLIP => LPL   Influence
SLIP => OCAL   Incorporated some features of
SLIP => OPL-1   Based on
SLIP => SIGMA   Incorporated features of

  • Yasaki, Edward K. "Computing at Stanford: Scholars & Throughput" Extract: Cutting Edge Computing at Stanford
    "The computation center also has the responsibility of supplementing the classroom instruction of the CSD - the educational role - by teaching programming and making available machine time," Forsythe said. The CSD last spring offered 18 hours of instruction, including what has been called "the second most popular course on campus": Use of Automatic Digital Computers (three credit hours). Other courses are Computer Programming for Engineers, Numerical (and Advanced Numerical) Analysis, Intermediate (and Advanced) Computer Programming, and such seminar-type courses as Computer Lab, Advanced Reading and Research, and Computer Science Seminar. Selected Topics in Computer Science this fall will cover computing with symbolic expressions and the LISP language ... in the winter, a mathematical theory of computation ... and spring, artificial intelligence.
    Last year, 876 students were enrolled in CSD courses, of whom more than 50 per cent were graduate students. (Total enrollment at Stanford is approximately 10,000, of whom 5,000 are graduate students.) During the current school year, 600-700 are expected in introductory programming courses, and 70 in the master's degree program. A PhD in numerical analysis is also offered, five graduates having been produced so far, and work has begun on a similar program in computer sciences.
    The majority of programs, and almost all students' work, has been written in BALGOL (Burroughs Algebraic Language), for which the 7090 has a translator written at Stanford and based on a previous translator for the 220. In timing runs, BALGOL was found to compile three to 10 times faster than FORTRAN, although it is five to 10 per cent slower on the object program. In a test case, a problem of 150 equations (150 unknowns) had a compile-and-go time of 2.1 minutes in FORTRAN, and 2.5 minutes in BALGOL; the latter time was reduced to 1.6 minutes by coding the inner loop in machine language. The same problem in B5000 ALGOL ran 3.2 minutes, a respectable figure considering that the 90's memory is almost three times faster than the 5000's. In another test, a 7090 FORTRAN program with a run time of two minutes was translated and run on the 5000 in four minutes.
    In a quite different test, involving the sorting of the columns of a two-dimensional array, the following times were found:

                      run time     total time
    B5000 ALGOL       405 sec.     about 425 sec.
    7090 FORTRAN       47 sec.      72 sec.
    7090 BALGOL        71 sec.      84 sec.
    Student utilization of the 90 during the last school year, however, was only 22 per cent; sponsored and unsponsored research projects took up the remainder - 52 and 28 per cent, respectively. With the installation of the 5000, the campus language will be switched to ALGOL. "The new machine will be used for the job-shop work," Forsythe says, "for which it is ideal. That will make the 90 available for experiments in time-sharing and other research."
    This, then, is the third area of activity of a university computing center - research which will increase man's understanding of the computing sciences. Under Forsythe's direction, Stanford computer activities have moved from under its heavy numerical analysis orientation toward more immediate or popular research - time-sharing and list processing: The recruitment of John McCarthy from MIT and the recent courtesy appointment of GE's Joe Weizenbaum as research associate signify the development of a CSD faculty resembling Forsythe's projections.
    McCarthy was one of four who delivered a time-sharing paper at the last Spring Joint Computer Conference. The system reported on then includes five operator stations on-line with an 8K PDP-1. The PDP is connected to a drum memory with which it swaps 4K words as each user's main frame time comes up. Thus, as far as the user is concerned, he faces a 4K machine. Stanford's PDP is being replaced in February with a 20K model with 12 stations, each with a keyboard and display console. Early next year, McCarthy says, the system should be on the air, and experimentation undertaken in preparation for its use with the 7090 and in the computer-based learning and teaching lab. To be established under a one-megabuck Carnegie grant, the lab's physical plant will be an extension of Pine Hall.
    Says Forsythe, "If you can see two or three years ahead, you're doing well in this field. But we're all looking toward the wide availability of computers to people. Our long-range goal is to continue development of the computer as an extension of the human mind - and enable almost anybody with a problem to have access to computers, just like the availability of telephone service."
    McCarthy is also working on the Advice Taker system in his three-year ARPA-funded research in artificial intelligence. Designed to facilitate the writing of heuristic programs, the Advice Taker will accept declarative sentences describing a situation, its goals, and the laws of the situation, and deduce from it a sentence telling what to do.
    "Out of optimism and sheer necessity of batch processing, we dream that computers can replace thinkers," says Forsythe. "This has not yet been realized, but there have been valuable by-products from this work." Forsythe and Prof. John G. Herriot have grants from the Office of Naval Research and the National Science Foundation, respectively, for work in numerical analysis.
    Forsythe, associate director R. Wade Cole, and Herriot are all numerical analysts. Herriot returned last summer from a year's sabbatical which he spent teaching numerical analysis under a Fulbright grant at the Univ. of Grenoble in France, and completing his recently-published book, "Methods of Mathematical Analysis and Computation" (Wiley).
    "When we get the new PDP, I hope to improve my chess and Kalah-playing programs," the heavily-bearded, bespectacled McCarthy says. His chess program can use considerable improvement, he adds, and the enlarged PDP will give him this opportunity.
    The most popular program, however, is the PDP's "space war," written by the programmer Steve Russell, who worked under McCarthy at MIT and was brought to Stanford from Harvard. Aptly described as the ideal gift for "the man who has everything," space war is a two-dimensional, dynamic portrayal of armed craft displayed on the console scope. Each player utilizes four Test Word switches on the console to control the speed and direction of his craft, the firing of "bullets" and, with the fourth, the erasure of his craft from the CRT (to avoid being hit - a dastardly way out). With each generation of images on the scope, each player's limited supply of "fuel" and "bullets" is pre-set, and a new image is automatically generated after a player scores a hit. Much to the consternation of the CSD staff, the game is a hit with everyone; by executive fiat, its use has been restricted to non-business hours.
    The young and jovial Russell, whose character and demeanor fit the nature of space war, is presently working on LISP-2.
          in Datamation 7(12) Dec 1961
  • Bobrow, Daniel G.; and Raphael, Bertram. "A comparison of list-processing computer languages" RAND Corp., Santa Monica, Calif., RM-3842-PR, Oct. 1963
  • Weizenbaum, J. "Symmetric List Processor"
          in [ACM] CACM 6(09) (September 1963)
  • Barbieri, R. Computer List-Processing Languages IBM, Data Systems Division, Poughkeepsie, New York Technical Report No. TR00.1209 November 1964 Extract: Sammet summary
    This report describes several list structures and some important aspects of their implementation. It describes and compares FLPL, IPL-V, LISP and SLIP. It also gives examples of the use of SLIP in manipulating power series.
  • Bobrow, D. G. and Weizenbaum, J., "List Processing and Extension of Language Facility by Embedding"
          in IEEE Transactions on Electronic Computers Vol EC-13 August 1964 (Special Issue on Programming Languages)
  • Brown, J. H. review of Weizenbaum 1963 (SLIP) Abstract: The description of the Symmetric List Processor is one of the very few reports about list structures and processors which can be understood by the average programmer. This report gives a detailed description of the list structure and also describes in detail the various functions of the processor. As an added bonus a complete listing (in FORTRAN language) of the processor is included in an appendix. This report will undoubtedly become the reference for anyone working with list structures.

    A symmetric list is one in which each item points back to its predecessor as well as forward to its successor. In addition the head points to the tail and the tail points to the head. The NAME of a symmetric list points to the HEADER of the list; the HEADER points to the top of the list and also to the bottom. The list is traversed by a READER. If a sublist is encountered in this traversal, the current reader is "stacked" in a push-down list and a new reader for the sublist is activated. When the end of the sublist is reached, the previous READER is re-activated and the traversal continues.

    The report concludes with a discussion concerning comparisons of SLIP with other list processors.

    It is recommended that this report be read, if only for the pleasure of reading a report on list structures which is well-documented and which can be understood.

          in ACM Computing Reviews 5(01) January-February 1964
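    The structure Brown's review describes (a doubly linked list whose header points to both its top and bottom, traversed by a reader that is stacked on entering a sublist) can be sketched in a few lines. The following Python model is illustrative only; the names Cell, SymmetricList, and traverse are invented here and do not correspond to SLIP's actual FORTRAN routines:

```python
class Cell:
    """One list cell: a datum plus forward (next) and backward (prev) links."""
    def __init__(self, datum=None):
        self.datum = datum
        self.next = self
        self.prev = self

class SymmetricList:
    """A circular doubly linked list. The header's next is the top of the
    list and its prev is the bottom, so head and tail reach each other."""
    def __init__(self, items=()):
        self.header = Cell()
        for item in items:
            self.append(item)

    def append(self, datum):
        cell, bottom = Cell(datum), self.header.prev
        bottom.next, cell.prev = cell, bottom
        cell.next, self.header.prev = self.header, cell

def traverse(slist):
    """Sweep by a 'reader': on meeting a sublist the current reader is
    stacked and a new one descends; at the sublist's end the stacked
    reader is reactivated and the sweep continues."""
    out, stack = [], []
    cur, cell = slist, slist.header.next
    while True:
        if cell is cur.header:                 # end of the current list
            if not stack:
                return out
            cur, cell = stack.pop()            # reactivate previous reader
        elif isinstance(cell.datum, SymmetricList):
            stack.append((cur, cell.next))     # remember the resume point
            cur, cell = cell.datum, cell.datum.header.next
        else:
            out.append(cell.datum)
            cell = cell.next
```

    Traversing a list that contains a sublist then visits the elements in order, with the stacked readers handling the descent and return.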
  • Weizenbaum, J. "More on the reference counter method"
          in [ACM] CACM 7(01) (Jan 1964).
  • Fano, Robert "The MAC system: a progress report" pp131-150
          in Sass, M. and W. Wilkinson, eds. Computer Augmentation of Human Reasoning Spartan Books, Washington, D.C., 1965
  • Bachman, C. W. review of Bobrow and Raphael 1963
          in ACM Computing Reviews 7(03) May-June 1966
  • Bordelon, S. A.: An Introduction to SLIP: The Symmetric List Programming Language. North American Aviation Inc. 1967. 55 pp.
  • Heindel, L. E. "The Automatic Optimization of Slip Routines" SIGSAM Bulletin no8, pp21-30 December 1967 Abstract: The code improvement technique discussed in this paper consists of a (Slip) compiler post-processor that inserts inline, open code in place of calls to very short run-time routines. Slip-compiled programs thus improved run 35 to 40 percent faster on the Control Data 1604 and occupy about 3 percent less core memory.
  • Knuth, Donald E. The Art of Computer Programming, Addison-Wesley Publishing Company, Reading, MA 1968 Extract: History And Bibliography
    History And Bibliography
    Linear lists and rectangular arrays of information kept in consecutive memory locations were widely used from the earliest days of stored-program computers, and the earliest treatises on programming gave the basic algorithms for traversing these structures. [ For example, see J. von Neumann, Collected Works, vol. V, 113-116 (written 1947); M. V. Wilkes, D. J. Wheeler, S. Gill, The Preparation of Programs for an Electronic Digital Computer (Reading, Mass.: Addison-Wesley, 1951), subroutine V-1. ] Before the days of index registers, operations on sequential linear lists were done by performing arithmetic on the machine language instructions themselves, and this type of operation was one of the early motivations for having a computer whose programs share memory space with the data they manipulate.

    Techniques which permit variable-length linear lists to share sequential locations, in such a way that they shift back and forth when necessary, as described in Section 2.2.2, were apparently a much later invention. J. Dunlap of Digitek Corporation developed these techniques in 1963 in connection with the design of a series of compiler programs; about the same time the idea independently appeared in the design of a COBOL compiler at IBM Corporation, and a collection of related subroutines called CITRUS was subsequently used at various installations. The techniques remained unpublished until after they had been independently developed by Jan Garwick of Norway; see BIT 4 (1964), 137-140.

    The idea of having linear lists in non-sequential locations seems to have originated in connection with the design of computers with drum memories; the first such computer was developed by General Electric Corporation during 1946-1948, and the IBM 650 was the most notable later machine of this type. After executing the instruction in location n, such a computer is usually not ready to get its next instruction from location n + 1 because the drum has already rotated past this point. Depending on the instruction being performed, the most favorable position for the next instruction might be n + 7 or n + 18, etc., and the machine can operate up to six or seven times faster if its instructions are optimally located rather than consecutive. [ For a discussion of the interesting problems concerning best placement of these instructions, see the author's article in JACM 8 (1961), 119-150. ] Therefore the machine design provides an extra address field in each machine language instruction, to serve as a link to the next instruction. Such a machine is called a "one-plus-one-address computer," as distinguished from MIX, which is a "one-address computer." The design of one-plus-one-address computers is apparently the first appearance of the linked-list idea within computer programs, although the dynamic insertion and deletion operations which we have used so frequently in this chapter were still unknown.

    Linked memory techniques were really born when A. Newell, C. Shaw, and H. Simon began their investigations of heuristic problem-solving by machine. As an aid to writing programs that searched for proofs in mathematical logic, they designed the first "list-processing" language IPL-II in the spring of 1956. (IPL stands for Information Processing Language.) This was a system which made use of links and included important concepts like the list of available space, but the concept of stacks was not yet well developed; IPL-III was designed a year later, and it included "push down" and "pop up" for stacks as important basic operations. (For references to IPL-II see "The Logic Theory Machine," IRE Transactions on Information Theory IT-2 (Sept. 1956), 61-70; see also Proc. Western Joint Comp. Conf. (Feb. 1957), 218-240. Material on IPL-III first appeared in course notes given at the University of Michigan in the summer of 1957.)

    The work of Newell, Shaw, and Simon inspired many other people to use linked memory (which was often at the time referred to as NSS memory), mostly for problems dealing with simulation of human thought processes. Gradually, the techniques became recognized as basic computer programming tools; the first article describing the usefulness of linked memory for "down-to-earth" problems was published by J. W. Carr, III, in CACM 2 (Feb. 1959), 4-6. Carr pointed out in this article that linked lists can readily be manipulated in ordinary programming languages, without requiring sophisticated sub-routines or interpretive systems. See also G. A. Blaauw, IBM J. Res. and Dev. 3 (1959), 288-301.

    At first, one-word nodes were used for linked tables, but about 1959 the usefulness of several consecutive words per node and "multilinked" lists was gradually being discovered by several different groups of people. The first article dealing specifically with this idea was published by D. T. Ross, CACM 4 (1961), 147-150; at that time he used the term "plex" for what has been called a "node" in this chapter, but he subsequently has used the word "plex" in a different sense to denote a class of nodes combined with associated algorithms for their traversal.

    Notations for referring to fields within nodes are generally of two kinds: the name of the field either precedes or follows the pointer designation. Thus, while we have written "INFO(P)" in this chapter, some other authors write, for example, "P.INFO". At the time this chapter was prepared, the two notations seemed to be equally prominent. The notation adopted here has been used in the author's lectures since 1962 and it has also been independently devised by numerous other people, primarily because it is natural to use mathematical functional notation to describe attributes of a node. (For example, see the article by N. Wirth and C. A. R. Hoare, CACM 9 (1966), 413-432.) Note that "INFO(P)" is pronounced "info of P" in conventional mathematical verbalization, just as f(x) is rendered "f of x." The alternative notation P.INFO has less of a natural flavor, since it tends to put the emphasis on P, although it can be read "P's info"; the reason INFO(P) seems to be more natural is apparently the fact that P is variable, but INFO has a fixed significance when the notation is employed. By analogy, we could consider a vector A = (A[1], A[2], ..., A[100]) to be a node having 100 fields named 1, 2, ..., 100. Now the second field would be referred to as "2(P)" in our notation, where P points to the vector A; but if we are referring to the jth element of the vector, we find it more natural to write A[j], putting the variable quantity "j" second. Similarly it seems most appropriate to put the variable quantity "P" second in the notation INFO(P).

    Perhaps the first people to recognize that the concepts "stack" (last-in-first-out) and "queue" (first-in-first-out) are important objects of study were cost accountants interested in reducing income tax assessments; for a discussion of the "LIFO" and "FIFO" methods of pricing inventories, see any intermediate accounting textbook, e.g., C. F. and W. J. Schlatter, Cost Accounting (New York: Wiley, 1957), Chapter 7. In 1947 A. M. Turing developed a stack, called Reversion Storage, for use in subroutine linkage (see Section 1.4.5). No doubt simple uses of stacks kept in sequential memory locations were common in computer programming from the earliest days, since a stack is such a simple and natural concept. The programming of stacks in linked form appeared first in IPL, as stated above; the name "stack" stems from IPL terminology (although "pushdown list" was the more official IPL wording), and it was also independently introduced in Europe by E. W. Dijkstra. "Deque" is a term introduced by E. J. Schweppe.

    The origin of circular and doubly linked lists is obscure; presumably these ideas occurred naturally to many people. A strong factor in the popularization of these techniques was the existence of general List-processing systems based on them [principally the Knotted List Structures, CACM 5 (1962), 161-165, and Symmetric List Processor, CACM 6 (1963), 524-544, of J. Weizenbaum ] .

    Various methods for addressing and traversing multidimensional arrays of information were developed independently by clever programmers since the earliest days of computers, and thus another part of the unpublished computer folklore came into existence. This subject was first surveyed in print by H. Hellerman, CACM 5 (1962), 205-207. See also J. C. Gower, Comp. J. 4 (1962), 280-286.

    Tree structures represented explicitly in computer memory were described for the first time in applications to algebraic formula manipulation. The A-1 compiler language, developed by G. M. Hopper in 1951, used arithmetic expressions written in a three-address code; the latter is equivalent to the INFO, LLINK, and RLINK of a binary tree representation. In 1952, H. G. Kahrimanian developed algorithms for differentiating algebraic formulas represented in the A-1 compiler language; see Symposium on automatic Programming (Washington, D.C.: Office of Naval Research, May 1954), 6-14.

    Since then, tree structures in various guises have been studied independently by many people in connection with numerous computer applications, but the basic techniques for tree manipulation (not general List manipulation) have seldom appeared in print except in detailed description of particular algorithms. The first general survey was made in connection with a more general study of all data structures by K. E. Iverson and L. R. Johnson [ IBM Corp. research reports RC-390, RC-603, 1961; see Iverson, A Programming Language (New York: Wiley, 1962), Chapter 3 ] . See also G. Salton, CACM 5 (1962), 103-114.

    The concept of threaded trees is due to A. J. Perlis and C. Thornton, CACM 3 (1960), 195-204. Their paper also introduced the important idea of traversing trees in various orders, and gave numerous examples of algebraic manipulation algorithms. Unfortunately, this important paper was hastily prepared and it contains many misprints. The threaded lists of Perlis and Thornton actually were only "right-threaded trees" in our terminology; binary trees which are threaded in both directions were independently discovered by A. W. Holt, A Mathematical and Applied Investigation of Tree Structures (Thesis, U. of Pennsylvania, 1963). Postorder and preorder for the nodes of trees were called "normal along order" and "dual along order" by Z. Pawlak, Colloquium on the Foundation of Mathematics, etc. (Tihany, 1962, published by Akademiai Kiado, Budapest, 1965), 227-238. Preorder was called "subtree order" by Iverson and Johnson in the references cited above. Graphical ways to represent the connection between tree structures and corresponding linear notations were described by A. G. Oettinger, Proc. Harvard Symp. on Digital Computers and their Applications (April, 1961), 203-224.

    The history of tree structures as mathematical entities, together with a bibliography of the subject, is reviewed in Section

    At the time this section was written, the most widespread knowledge about information structures was due to programmers' exposure to List processing systems, which have a very important part in this history. The first widely used system was IPL-V (a descendant of IPL-III, developed late in 1959); IPL-V is an interpretive system in which a programmer learns a machine-like language for List operations. At about the same time, FLPL (a set of FORTRAN sub-routines for List manipulation, also inspired by IPL but using subroutine calls instead of interpretive language) was developed by H. Gelernter and others. A third system, LISP, was designed by J. McCarthy, also in 1959. LISP is quite different from its predecessors: programs for it are expressed in mathematical functional notation combined with "conditional expressions" (see Chapter 8), then converted into a List representation. Many List processing systems have come into existence since then, of which the most prominent historically is J. Weizenbaum's SLIP; this is a set of subroutines for use in FORTRAN programs, operating on doubly linked Lists.

    An article by Bobrow and Raphael, CACM 7 (1964), 231-240, may be read as a brief introduction to IPL-V, LISP, and SLIP, and it gives a comparison of these systems. An excellent introduction to LISP has been given by P. M. Woodward and D. P. Jenkins, Comp. J. 4 (1961), 47-53. See also the authors' discussions of their own systems, which are each articles of considerable historical importance: "An introduction to IPL-V" by A. Newell and F. M. Tonge, CACM 3 (1960), 205-211; "A FORTRAN-compiled List Processing Language" by H. Gelernter, J. R. Hansen, and C. L. Gerberich, JACM 7 (1960), 87-101; "Recursive functions of symbolic expressions and their computation by machine, I" by John McCarthy, CACM 3 (1960), 184-195; "Symmetric List Processor" by J. Weizenbaum, CACM 6 (1963), 524-544. The latter article includes a complete description of all of the algorithms used in SLIP. In recent years a number of books about these systems have also been written.

    Several string manipulation systems have also appeared; these are primarily concerned with operations on variable-length strings of alphabetic information (looking for occurrences of certain substrings, etc.). Historically, the most important of these have been COMIT (V. H. Yngve, CACM 6 (1963), 83-84) and SNOBOL (D. J. Farber, R. E. Griswold, and I. P. Polonsky, JACM 11 (1964), 21-30). Although string manipulation systems have seen wide use and although they are primarily composed of algorithms such as we have seen in this chapter, they play a comparatively small role in the history of the techniques of information structure representation; users of these systems have largely been unconcerned about the details of the actual internal processes carried on by the computer. For a survey of string manipulation techniques, see S. E. Madnick, CACM 10 (1967), 420-424.

    The IPL-V and FLPL systems for List-processing did not use either a garbage collection or a reference count technique for the problem of shared Lists; instead, each List was "owned" by one List and "borrowed" by all other Lists which referred to it, and a List was erased when its "owner" allowed it to be. Hence, the programmer was enjoined to make sure no List was still borrowing any Lists that were being erased. The reference counter technique for Lists was introduced by G. E. Collins, CACM 3 (1960), 655-657; see also the important sequel to this paper, CACM 9 (1966), 578-588. Garbage collection was first described in McCarthy's article cited above; see also CACM 7 (1964), 38, and an article by Cohen and Trilling, BIT 7 (1967), 22-30.

    Unfortunately, too few people have realized that manipulation of links is not a magic, complex process that must be done by fancy List manipulation routines. There is still a widespread tendency to feel that the advantages of link manipulation can be obtained only by using a large List-processing system; actually, as we have seen in many examples throughout this chapter, it is quite easy to design linking algorithms which operate more efficiently than a general List-processing subroutine could possibly do. The author hopes that these detailed examples will help to convey a better-balanced attitude towards the use of structural links in data.

    Dynamic storage allocation algorithms were in use several years before published information about them appeared. A very readable discussion is given by W. T. Comfort, CACM 7 (1964), 357-362 (an article written in 1961). The "boundary-tag" method, introduced in Section 2.5, was designed by the author in 1962 for use in a control program for the B5000 computer. The "buddy system" was first used by H. Markowitz in connection with the SIMSCRIPT programming system in 1963, and it was independently discovered and published by K. Knowlton, CACM 8 (1965), 623-625; see also CACM 9 (1966), 616-625. For further discussion of dynamic storage allocation, see the articles by Iliffe and Jodeit, Comp. J. 5 (1962), 200-209; Bailey, Barnett, and Burleson, CACM 7 (1964), 339-346; Berztiss, CACM 8 (1965), 512-513; and D. T. Ross, CACM 10 (1967), 481-492.

    A general discussion of information structures and their relation to programming has been given by Mary d'Imperio, "Data Structures and their Representation in Storage," Annual Review of Automatic Programming 5 (Oxford: Pergamon Press), in press. This paper is also a valuable guide to the history of the topic, since it includes a detailed analysis of the structures used in connection with twelve List processing and string manipulation systems. See also the proceedings of two symposia, CACM 3 (1960), 183-234 and CACM 9 (1966), 567-643, for further historical details. (Several of the individual papers from these proceedings have already been cited above.)

    An excellent annotated bibliography, which is primarily oriented towards applications to symbol manipulation and algebraic formula manipulation but which has numerous connections with the material of this chapter, has been compiled by Jean E. Sammet, Comput. Rev. 7 (July-August 1966), B1-B31.

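    The reference-counter technique Knuth mentions (introduced by Collins, and the subject of Weizenbaum's "More on the reference counter method" above) can be sketched briefly. The Python model below is an illustration, not SLIP's implementation; the names Node, acquire, release, and the free_list stand-in are invented here:

```python
class Node:
    """A list node carrying a reference count. A node is returned to the
    list of available space only when its count drops to zero."""
    free_list = []                    # stand-in for the available-space list

    def __init__(self, datum=None, next=None):
        self.datum, self.next, self.refcount = datum, next, 0
        for ref in (datum, next):     # this new node references these
            if isinstance(ref, Node):
                ref.refcount += 1

def acquire(node):
    """Record one more reference to node."""
    node.refcount += 1
    return node

def release(node):
    """Drop one reference; once no references remain, reclaim the node
    and recursively release whatever it referenced."""
    node.refcount -= 1
    if node.refcount == 0:
        for ref in (node.datum, node.next):
            if isinstance(ref, Node):
                release(ref)
        Node.free_list.append(node)   # return the cell to available space
```

    A sublist referenced from several lists is reclaimed only after its last referent releases it, which is the property that lets shared sublists be erased safely, in contrast to the owner/borrower convention of IPL-V and FLPL described in the extract.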
  • S. Ramani "Slip Operations On Trees And Their Relevance To Problems Of Linguistic Interest" Computer Group, Tata Institute of Fundamental Research, Bombay, India Abstract: The essential features of abstract devices used in describing and in investigating processes of a linguistic nature, such as syntax analysis, are present in list processing computer languages. These features include the availability of push down stores and program recursion. It is argued here that a very important feature of list processors is that they are machines that perform computations over structured operands.
    The Symmetrical List Processor, SLIP, is shown to have many primitives which are identifiable with basic linguistic operations.  Procedures for generation and analysis of sentences using context-sensitive grammars and a procedure for transformational generation of sentences are described.  Results of a limited amount of experimentation are presented.
    Extract: Introduction
    It has been pointed out elsewhere that using a list processing language, programs of linguistic interest such as sentence generators could be written and debugged in a very short time [1]. It is argued here that this is not due to any superficial reasons, and that many primitives of list processing languages (abbreviated to l.p.l.s hereafter) are identifiable with certain basic operations needed to synthesize linguistic processes, such as transformations of sentences. Abstract devices such as push-down store automata and special purpose Turing machines are useful in describing these processes. In general, the significant features of such devices are found incorporated in l.p.l.s. Chomsky's identification of Context Free phrase structure grammars with push-down store automata, for instance, indicates the reason for the utility of a push-down stack available in every l.p.l., for the purpose of syntax analysis [3]. Kuno's description of syntax analysis using Cocke's algorithm shows clearly the need for operations that create and manipulate tree-like structures [4, 5]. His description of the cycling mechanisms needed in an analyser, to handle possibly ambiguous sentences, illustrates the complexity that has to be introduced into analysers. While Kuno does not mention list processing at all, his description of the cycling mechanisms shows that dynamic memory allocation and program recursion could be useful in analysers of this type.
    Many syntax analysers use list processing techniques to some degree or other. Stacks and provision for describing structural relationships between the elements of a sentence are specific examples. Obviously, the exclusive use of a standard l.p.l. to implement an important processor has its own drawbacks.  Many of these languages do not offer the convenience and efficiency of conventional programming languages for routine operations.  Most of the l.p.l.s produce slow running programs and they provide very limited input-output facilities. They often do not allow the use of magnetic tapes and do not provide the flexibility required to integrate programs written in several languages including the assembly language, into a system.
    SLIP is free of many of these shortcomings. However, there is no need to elaborate on these points here, since they have been well covered elsewhere [2]. The emphasis here is on the fact that list processors are machines that perform computations over structured operands, and that, as an essential characteristic of list processors, this is at least as important as dynamic memory allocation or program recursion.  List processors use certain primitive functions that have the set of structured operands as their domain.  SLIP takes particularly good care of this feature of list processing: it introduces more generality into its list structures and makes available a whole class of new primitives for analysing structures.
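    The "symmetric" in SLIP's name refers to cells carrying both a left and a right link, so a list can be traversed in either direction. The following is a minimal sketch in modern Python, not SLIP itself (which is embedded in FORTRAN): the method names echo SLIP's NEWTOP and NEWBOT primitives, but the cell layout is simplified, since real SLIP packs links into machine words.

```python
class Cell:
    """One list cell with a left (backward) and right (forward) link."""
    def __init__(self, datum):
        self.datum = datum
        self.lnkl = None   # link left
        self.lnkr = None   # link right

class SymmetricList:
    """A doubly linked ('symmetric') list traversable in both directions."""
    def __init__(self):
        self.head = self.tail = None

    def newbot(self, datum):            # after SLIP's NEWBOT: add at bottom
        cell = Cell(datum)
        if self.tail is None:
            self.head = self.tail = cell
        else:
            cell.lnkl, self.tail.lnkr = self.tail, cell
            self.tail = cell
        return cell

    def newtop(self, datum):            # after SLIP's NEWTOP: add at top
        cell = Cell(datum)
        if self.head is None:
            self.head = self.tail = cell
        else:
            cell.lnkr, self.head.lnkl = self.head, cell
            self.head = cell
        return cell

    def forward(self):
        cell = self.head
        while cell:
            yield cell.datum
            cell = cell.lnkr

    def backward(self):
        cell = self.tail
        while cell:
            yield cell.datum
            cell = cell.lnkl

lst = SymmetricList()
for x in "ABC":
    lst.newbot(x)
lst.newtop("Z")
print(list(lst.forward()))   # ['Z', 'A', 'B', 'C']
print(list(lst.backward()))  # ['C', 'B', 'A', 'Z']
```

    The two-way links are what make insertion and deletion at either end, and removal of an interior cell, uniform operations.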
    Study of the formal properties of languages by many of the transformational grammarians has been based on the premise that the processes of sentence generation and sentence recognition can be explained in terms of combinations of operations on sentence structures. It is argued here that the primitives provided by SLIP to explore and manipulate tree structures find an important and natural place in implementing these processes.
          in ACM Computing Reviews 7(03) May-June 1966 view details
  • Sammet, Jean E. "Revised Annotated Descriptor Based Bibliography for the Use of Computers for Non-Numerical Mathematics" view details
          in Bobrow, D. G. (ed) "Symbol Manipulation Languages and Techniques", Proceedings of the IFIP Working Conference on Symbol Manipulation Languages. North-Holland Publishing Co., Amsterdam, 1968 view details
  • Findler, Nicholas V.; McKinzie, Wiley R. "On a Tool in Artificial Intelligence Research: An Associative Memory, Parallel Language, AMPPL-II" pp259-270 view details Abstract: One of the remarkable things about human information processing is the way we store and retrieve information. The human associative memory is a device that has been only rather crudely approximated with computers in flexibility, self-modifiability, and complexity of associations.
    In this paper, we describe a computer language, AMPPL-II, which simulates a machine with an extensive, variable-size associative memory and parallel processing capability. It was to be made fairly machine-independent and is, therefore, embedded in a high-level algebraic language. The version outlined here is built as an extension of SLIP (Symmetric List Processing Language), itself being embedded in FORTRAN IV.
    The expected range of applicability of AMPPL-II covers a rather wide area. Candidates in the non-numerical field are game playing, picture processing and image identification, encoding-decoding, simulation of cognitive behavior, multikey information retrieval, language analysis, sorting, merging, collating, query systems, and so on. Numerical applications would be matrix calculations, solution of partial differential equations, computation of auto- and cross-correlation functions, signal correction, etc.
    Stimulated by this language, new computational techniques and algorithms may, hopefully, be developed. These, on one hand, and technological advances, on the other, may render the hardware implementation of an AMPPL-like machine economical.
    Extract: Introduction
    There is an obvious interrelation between the level of complexity of the problems that are potentially solvable on computers, and the power of available software and hardware. Further development in programing systems and languages enables us to attack problems that have so far been beyond the sphere of practicability. List processing and string manipulating languages, for example, have opened up new vistas in many research fields, probably most significantly in Artificial Intelligence.
    We describe in this paper an approach to memory and information processing, radically different from those of the traditional von Neumann-type computers. An Associative Memory, Parallel Processing Language, AMPPL-II*, is introduced, which simulates a computer built in hardware. (We point out that some of the facilities available in AMPPL-II would not be present in the engineer's implementation. However, they do not "cost" much in programming effort and in computing time, and they may also be handy to have. We have, therefore, decided to include a few "non-generic" features.)
    It should be noted here that there are many references in the literature to hardware and software efforts that are aimed at objectives similar to ours here. The engineering activity can be divided into two, rather distinct categories. First, very expensive "black boxes" have been constructed and attached to special or general purpose computers to perform some of the functions we shall discuss later. Second, small associative memory units have been included in computer systems for specific purposes, such as the paging operation with large-scale time-shared machines.
    A number of programming systems have also been developed, most of them directed towards specific, some of them towards more general applications. We only wish to single out here the work of Feldman and Rovner [3,4,5], and of Dodd, Beach and Rossol [6]. Their objectives are very similar to ours, although the respective techniques are different. Experience will show which of these languages is more powerful, easier to program and debug in.
    Excellent surveys, covering the field in question up to the middle of 1966, can be found in [7,8]. For the sake of completeness, we mention a few more, some left out of the above surveys, some of more recent vintage [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. Extract: Notes on implementation
    The first version, AMPPL-I, was reported on in [1,2] and was implemented but not completely debugged for the IBM 7044. The presently described language, running on the CDC 6400, has improved dynamic memory allocation facilities and more powerful instructions concerning Relations (cf. Section II/6). Extract: Authorship
    N. V. F. designed AMPPL and is responsible for the mistakes contained in this paper. W. R. McK. implemented the language on the CDC 6400. Teiji Furugori's contribution to the latter is also acknowledged. Further, Peter Muller and Charles Bergenstock worked on an IBM 7044 implementation of AMPPL-I.
    Extract: Some Basic Concepts
    Some Basic Concepts
    In contrast with conventional, word-oriented machines, computers with associative memories are content-addressable. These terms refer to the interrelationship between data as well as to the fact that a memory word or words can be accessed by matching against a selectable field of a special word, rather than by an address. Parallel processing, a related idea but distinct from multiprocessing, allows command sequences to be executed simultaneously over large numbers of data sets.
    It was intended to develop a language which incorporates, from the programmer's viewpoint, these facilities in addition to the previously available algebraic, list processing and string manipulating facilities.
    For understandable reasons, embedding seemed to be an economical and fairly efficient approach, which also achieves a reasonably high level of machine independence. The presently described version is an extension of SLIP, itself being  embedded  in  FORTRAN IV.  (The
    internal mechanism of SLIP had to be modified to a small extent but the user need not be aware of this fact.) There are only two AMPPL subprograms written in assembly language now. Converting often-used subroutines and functions, currently coded in FORTRAN, into assembly language would, of course, save considerable machine time with large programs.
    Unlike the expensive and inflexible associative memories presently built in hardware, the size of the Simulated Associative Memory (SAM) is determined by the programmer according to his needs. In fact, he has to specify the ratio between the sizes of SAM and of the Available Space List (AVSL) of SLIP at the beginning of his program. The sum of SAM and AVSL equals the storage area left unused after the compilation of the FORTRAN program.
    The programmer can also build into the program trap points, at which a part of SAM or AVSL is dynamically reassigned to the other part if there is a need for it, i.e. if the size of one memory type decreases below a certain prespecified threshold value. Memory contraction, coupled with a specific form of garbage collection, takes place in SAM at this point.
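    The trap-point reassignment can be pictured as a threshold rule over the two free pools. A schematic sketch only; the function name, the chunk policy and the numbers are invented for illustration, not AMPPL-II's interface:

```python
def rebalance(sam_free, avsl_free, threshold, chunk):
    """Move a chunk of free cells between SAM and AVSL when one pool
    falls below the prespecified threshold; otherwise leave both alone.
    A schematic model of the 'trap point' reassignment described above."""
    if sam_free < threshold and avsl_free >= chunk:
        return sam_free + chunk, avsl_free - chunk   # grow SAM from AVSL
    if avsl_free < threshold and sam_free >= chunk:
        return sam_free - chunk, avsl_free + chunk   # grow AVSL from SAM
    return sam_free, avsl_free

print(rebalance(10, 500, threshold=50, chunk=100))   # (110, 400)
```

    In the real system a rebalance in this direction would also trigger memory contraction and garbage collection in SAM.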
    There is an efficient and fast flow of information between any two of the SAM, AVSL and FORTRAN memory.
    It will be helpful in understanding the organization of AMPPL-II if we consider the diagram in Figure 1.

    The main block is the Memory Matrix, which consists of r SAM-words, each having 2n bit positions. There are four special words, the short registers, each n bits long. (On the CDC 6400, there are n=60 bits in a word.) [As will be seen later, every SAM-word consists of two actual memory words.]
    Two of these serve as Argument Registers 1 and 2. The other two represent Mask Registers 1 and 2. There are also three long registers, vertical columns r bits in length and 1 bit in width. Two of these are called Response Register 1 and Response Register 2. The third one is the Availability Register. (The role of the words FIRST and LAST in Figure 1 is explained later.)
    In the following, we shall briefly describe some of the operations used in AMPPL-II and discuss only two, the 'Search and  Flag'  and 'Set Relation', in detail. Extract: Operations in AMPPL-II (1) Memory Allocation
    Operations in AMPPL-II (1)  Memory Allocation:
    As mentioned above, the initial subdivision of free core into SAM and AVSL can be dynamically modified during the execution of the program. Instructions are available to count the number of currently available SAM and AVSL cells, and to accomplish the transfer of free cells from one memory category to the other. At these points, and also at the time of mass input into SAM (see below), memory contraction and garbage collection in SAM takes place.
    Extract: Operations in AMPPL-II (2) Input-Output
    Operations in AMPPL-II (2) Input-Output:
    Mass input-output is obtained when FORTRAN arrays or SLIP lists are read into SAM, and when designated SAM-words are loaded into FORTRAN arrays or SLIP lists. Also, wholesale clearing operations can be executed between any two specified SAM addresses. (A SAM-address can be conceived of as the index of a one-dimensional array.)
    Individual words can be transferred between any two of the SAM, SLIP and FORTRAN memories. Designated parts of SAM can be printed in different modes.
    Full words or specified bit configurations can be put into the Argument and Mask Registers, and the contents of the latter can also be read into the FORTRAN memory.
    Extract: Operations in AMPPL-II (3) Operations Concerning the Response Registers
    Operations in AMPPL-II
    (3) Operations  Concerning  the  Response Registers:
    To the AMPPL programmer, SAM appears to contain information in a manner that permits certain basic processes, including reading and writing, to be carried out simultaneously in particular cells. These designated cells contain responding pieces of information and were selected by a previous 'Search and Flag' operation or were deliberately flagged by a special flagging instruction. The 'Search and Flag' operations locate and mark SAM-words according to various criteria. Subsequent processes may then be performed on these again, and the results can represent Boolean combinations, AND's, OR's and NOT's, of consecutive searches.
    The basis of comparison is usually* put into one of the Argument Registers. The search is carried out only over those fields of SAM-words that are marked by bits '1' in the relevant Mask Register. The success of a search is indicated by flags (bits '1') at the level of the corresponding SAM-word in the relevant Response Register. Those SAM-words that have not been used or are no longer necessary are denoted by a tag (bit '1') in the Availability Register.
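    The Argument/Mask/Response machinery can be modelled in a few lines. A sketch assuming SAM is a list of integers and a response register is a list of flag bits; the real SERFLG instruction takes many more criteria than plain equality:

```python
def search_and_flag(sam, argument, mask):
    """Flag every SAM word whose masked field equals the masked argument.
    Returns a response register: one flag bit per SAM word.
    (Schematic model only, with equality as the sole search criterion.)"""
    return [1 if (word & mask) == (argument & mask) else 0 for word in sam]

sam = [0xAB12, 0xAB34, 0xCD12, 0xAB99]
r1 = search_and_flag(sam, 0xAB00, 0xFF00)   # high byte equals 0xAB
r2 = search_and_flag(sam, 0x0012, 0x00FF)   # low byte equals 0x12
both = [a & b for a, b in zip(r1, r2)]      # AND of consecutive searches
print(r1)    # [1, 1, 0, 1]
print(r2)    # [1, 0, 1, 0]
print(both)  # [1, 0, 0, 0]
```

    The bitwise AND of the two response registers is exactly the "Boolean combination of consecutive searches" mentioned above.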
    Two points should be mentioned here. In order to speed up the search processes in SAM, one-directional pointers link flagged SAM-words (see Figure 2) and, also, end markers indicate the addresses of the SAM-words first and last flagged in the two Response Registers (the contents of the FORTRAN variables FIRST and LAST in Figure 1).

    Two other FORTRAN variables, POINT1 and POINT2, contain the SAM-addresses of the last SAM-word searched, whether successfully or otherwise, with reference to Response Register 1 and Response Register 2, respectively. (Cf. the possible values of the argument WHICH of the subroutine SERFLG.)
    The actual form of the major instruction performing the above described processes is

    A few words of comment are needed here. Two subsequent 'Search and Flag' operations with CRITER = GTEQ and LTEQ yield responsive words of values between given limits. NEXTHI can be performed by two subsequent searches with criteria GTEQ and MIN; similarly, NEXTLO is done with criteria LTEQ and MAX. Whichever of NEXTHI and NEXTLO is nearer to the value in the Argument Register yields NEXT. The criteria BITSHI and BITSLO are useful in comparing non-numerical data and selecting the "most similar" or "least similar" pieces of information, respectively. The number of matching bits can be found as the values of special FORTRAN variables. GRPOFM finds, for example, misprints caused by transposed, missing and added characters. The character set can be represented by groups of 2-8 bits. Since the matching process ignores the position of the groups being matched, there are extra facilities to identify transposition errors. Also, the number of the matching groups is accessible.
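    The composition of criteria described here is easy to demonstrate. A sketch under the same list-of-integers model of SAM: GTEQ flags every word not below the argument, and a MIN pass over the flagged words then isolates NEXTHI (the smallest value at or above the argument). The function names are illustrative, not AMPPL-II's:

```python
def gteq_flags(sam, argument):
    """Criterion GTEQ: flag words >= the argument."""
    return [1 if w >= argument else 0 for w in sam]

def min_of_flagged(sam, flags):
    """Criterion MIN over already-flagged words: keep only the minimum.
    Composing GTEQ with MIN yields NEXTHI, as the text describes."""
    flagged = [w for w, f in zip(sam, flags) if f]
    if not flagged:
        return [0] * len(sam)
    m = min(flagged)
    return [1 if f and w == m else 0 for w, f in zip(sam, flags)]

sam = [7, 42, 13, 99, 21]
flags = gteq_flags(sam, 15)          # flags 42, 99 and 21
print(min_of_flagged(sam, flags))    # NEXTHI(15) is 21: [0, 0, 0, 0, 1]
```

    NEXTLO is the mirror image (LTEQ followed by MAX), and NEXT picks whichever of the two lands nearer the argument.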
    There are safeguards to prevent a SAM-word of the wrong information mode (e.g. a floating point number instead of alphabetic information) from becoming respondent if its contents happen to be the right bit configuration. A detailed description of this is to be found in [22].
    Finally, it should be noted that there exist instructions to flag and unflag specified SAM-words, to count the number of flagged SAM-words, and to put the SAM-addresses of flagged words in a FORTRAN array or a SLIP list.

    *If, for example, one searches for those numbers that are greater than a given one. However, if the criterion of search is, for example, 'maximum', the Argument Registers are ignored. Extract: Operations in AMPPL-II (4) Operations Concerning the Availability Register
    Operations in AMPPL-II
    (4) Operations Concerning the Availability Register:
    As mentioned before, a bit 1 in the Availability Register indicates that the corresponding SAM-word is free. We call this mark a 'tag', as distinct from the 'flag' in the Response Registers.
    There are instructions which tag and untag SAM-words, count the number of available SAM-words, and which put the SAM-addresses of tagged words in a FORTRAN array or a SLIP list. Extract: Operations in AMPPL-II (5) Inter-Register Operations
    Operations in AMPPL-II
    (5) Inter-Register Operations:
    All 16 possible logical operations are executable between any two of the short (Argument and Mask) Registers or between any two of the long (Response and Availability) Registers. Here the operands are the bit strings occupying the respective registers.
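    The 16 operations are just the 16 two-input Boolean functions, applied bitwise across the two registers. An illustrative Python sketch; representing registers as lists of bits is an assumption made here for clarity:

```python
def register_op(reg_a, reg_b, truth_table):
    """Apply one of the 16 two-input Boolean functions bitwise to two
    equal-length registers. truth_table[(a, b)] gives the output bit."""
    return [truth_table[(a, b)] for a, b in zip(reg_a, reg_b)]

AND  = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
NAND = {k: 1 - v for k, v in AND.items()}   # any of the 16 works this way

r1 = [1, 1, 0, 0]
r2 = [1, 0, 1, 0]
print(register_op(r1, r2, AND))    # [1, 0, 0, 0]
print(register_op(r1, r2, NAND))   # [0, 1, 1, 1]
```

    Enumerating all 16 truth tables gives every inter-register operation the text refers to.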
    Extract: Operations in AMPPL-II (6) Processes Regarding Relations
    Operations in AMPPL-II
    (6) Processes Regarding Relations:
    Besides the 'Search and Flag' operation, these processes are the most significant in AMPPL-II. We shall, therefore, discuss them, also, in some detail.
    Let us generalize the concept of an algebraic function and define a Relation (REL) between an Object (OBJ) and a Value (VAL)
    REL (OBJ) = VAL.
    Each of the above three entities can be single items or three kinds of lists. The first kind simply contains various equivalent names of the same item. (One can think of synonyms within the given context.) This is called the EQUIVALENT LIST. The second kind of list bears the names of a number of subunits, any processing on which is always uniform. An example of these lists may be the students of a class, who always have the same teacher, always stay in the same classroom, etc. Distinguishing processes, such as grading of individual exams, are not to be carried out on the elements of so designated lists. Finally, the third kind of list has distinct and, in some respect, independent elements. An example of this could be the pieces of furniture in a certain room if one would like to, say, paint them different colors.
    Also, the programmer can define a Relation as an ordered set of other Relations.  The combining connectives are:
    ^ (and), v (or), ¬ (not), + (concatenated), ← (reverse).
    Let us further define a reserved symbol SELF in order to be able to exclude self-reference in unwanted cases. Finally, the term on the left-hand side of a 'concatenated' symbol is considered to be in the (Teutonic) genitive. The following examples should make this clear:
    (i)      PARENT => FATHER v MOTHER
    i.e. a parent is defined as a father or mother;
    (ii)     CHILD => ← PARENT
    i.e. the child is defined as the reverse of the parent;
    (iii)    GRANDFATHER => (FATHER v MOTHER) + FATHER
    i.e. the grandfather is defined as the father's or mother's father;
    (iv)     HUSBAND => SPOUSE ^ ¬WIFE
    i.e. the husband is defined as a spouse but (and) not wife;
    (v)      BROTHER => (MOTHER ^ FATHER) + SON ^ ¬SELF
    i.e. the brother is defined as the mother's and father's son but (and) not self; if we wish to include half-brothers as well, we can put
    BROTHER => (MOTHER v FATHER) + SON ^ ¬SELF
    i.e. the mother's or father's son but (and) not self.
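    The connectives above can be modelled over a small base of REL(OBJ) = VAL facts. A Python sketch; the fact triples and the function-per-relation encoding are invented for illustration (AMPPL-II itself compiles such definitions to Polish prefix form rather than to functions):

```python
# Base facts as REL(OBJ) = VAL triples; names are invented examples.
facts = {
    ("FATHER", "ann", "tom"),    # ann's father is tom
    ("MOTHER", "ann", "sue"),
    ("SPOUSE", "tom", "sue"),
    ("SPOUSE", "sue", "tom"),
    ("WIFE",   "tom", "sue"),    # tom's wife is sue
}

def values(rel, obj):
    return {v for r, o, v in facts if r == rel and o == obj}

def parent(obj):                 # PARENT => FATHER v MOTHER   (union)
    return values("FATHER", obj) | values("MOTHER", obj)

def child(obj):                  # CHILD => reverse of PARENT
    return {o for r, o, v in facts
            if r in ("FATHER", "MOTHER") and v == obj}

def husband(obj):                # HUSBAND => SPOUSE and not WIFE
    return values("SPOUSE", obj) - values("WIFE", obj)

print(sorted(parent("ann")))     # ['sue', 'tom']
print(sorted(child("tom")))      # ['ann']
print(husband("sue"))            # {'tom'}
```

    The 'reverse' connective simply swaps the roles of Object and Value, and 'not' becomes set difference against the excluded relation.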
    As the system reads in these definitions from cards, it checks them in toto for circularity, which it does not accept, and puts them in a Polish prefix form on DEFINITION LISTs. The latter are sublists of the EQUIVALENT LISTs and therefore sub-sublists of the SLIP list CODEL.  (See Figure 2).  
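    The circularity check on definitions amounts to cycle detection in the graph of Relation names. A hedged sketch of one standard way to do it (depth-first search with colouring); this is a reconstruction in Python, not the actual SLIP/FORTRAN code:

```python
def circular(definitions):
    """Detect a circular Relation definition (which the system rejects).
    definitions maps a defined name to the set of names its body uses;
    names absent from the map are treated as primitive Relations."""
    GRAY, BLACK = 1, 2           # GRAY: on the current DFS path
    color = {}

    def visit(name):
        if name not in definitions:    # primitive Relation: cannot cycle
            return False
        if color.get(name) == GRAY:    # back edge: circular definition
            return True
        if color.get(name) == BLACK:   # already checked, no cycle through it
            return False
        color[name] = GRAY
        if any(visit(dep) for dep in definitions[name]):
            return True
        color[name] = BLACK
        return False

    return any(visit(name) for name in definitions)

ok  = {"PARENT": {"FATHER", "MOTHER"}, "GRANDFATHER": {"PARENT", "FATHER"}}
bad = {"A": {"B"}, "B": {"A"}}
print(circular(ok), circular(bad))   # False True
```

    A definition set that passes this check can always be expanded into the Polish prefix form the system stores on the DEFINITION LISTs.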

    On the CODEL list, the members are either entity (Relation, Object, Value) names, each less than 11 characters long, or the names of the three kinds of sublists mentioned before. The machine address of an item on the CODEL list is called its Code Number. Whenever a new Relation is defined by the subroutine
    the system sets up one (or more) Relation Descriptor word(s) in SAM, which contain(s), in the proper fields, the three code numbers representing the Relation, the Object and the Value. (See Figure 3).  K is an integer between 0 and
    7, and characterizes, for each of the three entities, whether it represents a single item or non-distinct elements of a list, on the one hand, or distinct elements of a list, on the other. In the latter case, as many Relation Descriptor words are established as the total number of combinations possible.

    There are altogether seven basic questions a retrieval system for Relations can answer.  These are as follows:
    (a)  Is a particular relation, between a given object and value, true?  REL(OBJ)=VAL?
    (b)  What is (are) the value(s) belonging to a given relation-object pair, if any?  REL(OBJ)=?
    (c)  What is (are) the object(s) belonging to a given relation-value pair, if any?  REL(?)=VAL
    (d)  What is (are) the relation(s) that connect(s) a given object-value pair, if any?  ?(OBJ)=VAL
    (e)  What relation-object pair(s) belong(s) to a given value, if any?  ?(?)=VAL
    (f)  What relation-value pair(s) belong(s) to a given object, if any?  ?(OBJ)=?
    (g)  Finally, what object-value pair(s) belong(s) to a given relation, if any?  REL(?)=?
    The answers are obtainable by using one simple instruction in every case.
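    All seven question forms reduce to matching descriptor triples against a pattern in which the unknowns are left open. A Python sketch; the triples and the single retrieve interface are illustrative assumptions, not AMPPL-II's actual instruction:

```python
descriptors = {
    ("father", "ann", "tom"),
    ("father", "bob", "tom"),
    ("mother", "ann", "sue"),
}

def retrieve(rel=None, obj=None, val=None):
    """Answer any of the seven query forms by leaving unknowns as None,
    e.g. retrieve(rel='father', obj='ann') asks FATHER(ANN) = ?"""
    return {(r, o, v) for r, o, v in descriptors
            if (rel is None or r == rel)
            and (obj is None or o == obj)
            and (val is None or v == val)}

print(retrieve("father", "ann", "tom"))   # (a) truth test: non-empty set
print(retrieve("father", "ann"))          # (b) REL(OBJ)=?
print(retrieve("father", val="tom"))      # (c) REL(?)=VAL
print(retrieve(obj="ann", val="sue"))     # (d) ?(OBJ)=VAL
```

    Leaving two or all three fields open gives forms (e) through (g) with the same mechanism.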
    Finally, we note two more instructions of considerable power. One of them creates
    REL2 (VAL) = OBJ
    where the Object and Value  are  connected by
    REL1 (OBJ) = VAL,
    i.e. REL1 and REL2 are Reversed relations. Examples of this are:
    REL1             REL2
    husband of      wife of
    spouse of       spouse of
    parent of       child of
    loves           is loved by
    superset of     subset of
    similar to      similar to
    greater than    less than
    above           below
    left to         right to
    Note that the Reverse of a Reversed relation always yields the original one.
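    The Reversed-relation instruction can be sketched over descriptor triples. Python, with the triple layout as an illustrative assumption:

```python
def reverse(descriptors, rel1, rel2):
    """Create REL2(VAL) = OBJ for every REL1(OBJ) = VAL,
    a sketch of the 'Reversed relation' instruction."""
    return {(rel2, v, o) for r, o, v in descriptors if r == rel1}

d = {("parent", "tom", "ann"), ("parent", "sue", "ann")}
print(sorted(reverse(d, "parent", "child")))
# [('child', 'ann', 'sue'), ('child', 'ann', 'tom')]
```

    Applying the operation twice with the relation names swapped returns the original descriptors, matching the symmetry of the table above.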
    Another  instruction  finds  X,  for which it is true that
    A : B = C : X
    where A, B, and C are any of the three entities, Relation, Object, or Value; A and C, on the one hand, and B and X, on the other, are of the same type; and the third entity in the Relation Descriptor words containing A, B and C, X is the same for both. (Occurrences of all possible combinations of A and B are considered.) Extract: Operations in AMPPL-II (7) Parallel Operations over SAM
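    The analogy instruction can be sketched for one case, where A and C are Objects and B and X Values, so the shared third entity is the Relation. Python, with invented example descriptors:

```python
def analogy(descriptors, a, b, c):
    """Solve A : B = C : X over (relation, object, value) descriptors:
    find every relation R with R(A) = B, then return every X with R(C) = X.
    A sketch of one case of the instruction; other cases permute roles."""
    rels = {r for r, o, v in descriptors if o == a and v == b}
    return {v for r, o, v in descriptors if r in rels and o == c}

d = {("capital", "france", "paris"),
     ("capital", "italy", "rome"),
     ("language", "france", "french")}
print(analogy(d, "france", "paris", "italy"))   # {'rome'}
```

    france : paris = italy : X is answered with rome because both pairs occur under the same Relation, capital.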
    Operations in AMPPL-II
    (7) Parallel Operations over SAM:
    Here, we briefly list some basic but high-level instructions that should be useful both in numerical and non-numerical applications:
    A constant word in one of the Argument Registers can be added to, subtracted from, multiplied by, divided into, Boolean AND-ed and OR-ed with designated SAM-words through one of the Mask Registers.
    The above operations can also be performed between any two fields, and the Boolean NOT of any single field, of designated SAM-words.
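    The masked, flag-gated arithmetic described in the last two paragraphs can be sketched as follows. A Python model only; the word width, mask convention and flag vector are assumptions made for illustration:

```python
def parallel_add(sam, flags, constant, mask):
    """Add a constant to the masked field of every flagged SAM word,
    leaving unflagged words, and bits outside the mask, untouched.
    A schematic model of AMPPL-II's parallel arithmetic over SAM."""
    out = []
    for word, flag in zip(sam, flags):
        if flag:
            field = (word & mask) + constant
            word = (word & ~mask) | (field & mask)   # write field back
        out.append(word)
    return out

sam   = [0x0A05, 0x0B06, 0x0C07]
flags = [1, 0, 1]
print([hex(w) for w in parallel_add(sam, flags, 1, 0x00FF)])
# ['0xa06', '0xb06', '0xc08']
```

    Subtraction, multiplication, division and the Boolean operations follow the same pattern, with a different update applied to the extracted field.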
    Specified fields of designated SAM-words can be cleared.
    Designated SAM-words can be shifted to the left or to the right by a specified number of places.
    Single elements, vectors, planes, cofactors or all elements of an array can be flagged if the array was read into SAM according to the usual mapping order.
    Vector addition, subtraction and scalar multiplication are single instructions.
    Also, single instructions yield the determinant and the inverse of one, and the product of two, matrices. Extract: An Overview
    An Overview
    We have tried to give a short outline of a new computer language. We, however, feel it is more than "just another language" -- it represents another philosophy of, another approach to, problem solving. After all, it is only the sequential design of the von Neumann-type machines that has imposed upon the computing community the presently prevalent but often quite unnatural computational methods. Even using these conventional methods, AMPPL-II (a) should decrease the length of written programs and (b) should simplify the writing, debugging and understanding of programs. (It has very powerful diagnostic facilities.) There is, however, a significant trend, as can be seen in the referenced literature, to develop new algorithms and techniques that make use of content addressability and parallel processing, expose latent parallelism, and introduce computational redundancy. We hope AMPPL-II will enhance this trend.
    We have had only limited programming experience with the language. The following, as yet incomplete, projects are representative examples:
    (1)   a query  system,  which  can be continually updated, dealing with  complex kinship structures;
    (2)    simulation of  a  learning and self-adapting organism  in  a  hostile environment;
    (3)   empirical proofs of conjectures in computational linguistics;
    (4) simulation of a demographical problem; and
    (5) scheduling classrooms, teachers and students.
    We have found that, although one must pay a certain price in machine time and available memory, the programming ease achieved is quite significant. (The disk-resident system consists of roughly 8k
    words for each of SLIP and AMPPL-II. Only the needed subprograms are called into core.) In the INTRODUCTION, we listed a number of broad areas of application in which AMPPL-II should prove useful. Now we can be more specific in terms of problem characteristics. We expect to save considerable programming effort in using the language whenever
    (i) data are to be addressed by a combination of various sets of reference properties,
    (ii) data elements satisfying the above reference properties are scattered throughout the memory in a sparse and random manner,
    (iii) data elements dynamically change their location in the memory as a consequence of the information processes acting on them,
    (iv) identical sequences of processes operate on distinct, non-interacting data elements,
    (v) the ratio between concurrently and serially executable processes is reasonably high.
    These criteria of language applicability occur to some extent with practically every complex programming problem. The availability of a conventional algebraic language with the AMPPL-cum-SLIP package renders programming more efficient.
    We intend to study in a later paper the numerous issues involved in using AMPPL-II in various fields. Here, we only wish to point to the fact that the proposed system, to an extent, is capable of simulating two distinct kinds of associative  memory.    In  the   exact associative memory, the information processes are performed on the basis of finding the intersection of several matching descriptors. Because of the non-uniqueness of many associations and, also, because retrieval requests may be incomplete, there can be several respondent pieces of information. However, ill-formulated and imprecise tasks cannot, in general, be solved, and there is no logical "interpolation" or "extrapolation".
    On the other hand, in a non-exact associative memory these restrictions do not apply. Associations connect statistically related entities, too. The measure of "nearness" is an important concept. Various counting processes, the criteria of search for the most and least similar items (BITSHI and BITSLO) and matching groups of bits (GRPOFM) represent steps in this direction. Biological systems, of course, incorporate both of the above described associative  memories.
          in Donald E. Walker, Lewis M. Norton (Eds.): Proceedings of the 1st International Joint Conference on Artificial Intelligence IJCAI-69, Washington, DC, May 1969. William Kaufmann, 1969 view details
  • Sammet, Jean E. "Computer Languages - Principles and History" Englewood Cliffs, N.J. Prentice-Hall 1969. p.387. view details
  • Weizenbaum, Joseph "Recovery of reentrant list structures in SLIP" view details
          in [ACM] CACM 12(07) (July 1969) view details
  • Harrison, Malcolm C "Data-structures and programming" New York: Courant Institute of Mathematical Sciences 1970 view details
  • Stock, Karl F. "A listing of some programming languages and their users" in RZ-Informationen. Graz: Rechenzentrum Graz 1971 231 view details Abstract: 321 Programmiersprachen mit Angabe der Computer-Hersteller, auf deren Anlagen die entsprechenden Sprachen verwendet werden kennen. Register der 74 Computer-Firmen; Reihenfolge der Programmiersprachen nach der Anzahl der Herstellerfirmen, auf deren Anlagen die Sprache implementiert ist; Reihenfolge der Herstellerfirmen nach der Anzahl der verwendeten Programmiersprachen.

    [321 programming languages with an indication of the computer manufacturers on whose machines each language can be used. Index of the 74 computer companies; the programming languages ordered by the number of manufacturers on whose machines each is implemented; the manufacturers ordered by the number of programming languages used.]
  • Findler, Nicholas "Basic List-processing Concepts and a Description of the Symmetric List-Processor, SLIP" view details
          in Findler, Nicholas et al "Four high-level extensions of FORTRAN IV : SLIP, AMPPL-II, TREETRAN, SYMBOLANG" New York : Spartan Books, 1972 view details
  • Findler, Nicholas [Foreword] view details
          in Findler, Nicholas et al "Four high-level extensions of FORTRAN IV : SLIP, AMPPL-II, TREETRAN, SYMBOLANG" New York : Spartan Books, 1972 view details
  • Sammet, Jean E., "Roster of Programming Languages 1972" 262 view details
          in Computers & Automation 21(6B), 30 Aug 1972 view details
  • Stock, Marylene and Stock, Karl F. "Bibliography of Programming Languages: Books, User Manuals and Articles from PLANKALKUL to PL/I" Verlag Dokumentation, Pullach/Munchen 1973 557 view details Abstract: PREFACE  AND  INTRODUCTION
    The exact number of all the programming languages still in use, and those which are no longer used, is unknown. Zemanek calls the abundance of programming languages and their many dialects a "language Babel". When a new programming language is developed, only its name is known at first and it takes a while before publications about it appear. For some languages, the only relevant literature stays inside the individual companies; some are reported on in papers and magazines; and only a few, such as ALGOL, BASIC, COBOL, FORTRAN, and PL/1, become known to a wider public through various text- and handbooks. The situation surrounding the application of these languages in many computer centers is a similar one.

    There are differing opinions on the concept "programming languages". What is called a programming language by some may be termed a program, a processor, or a generator by others. Since there are no sharp borderlines in the field of programming languages, works were considered here which deal with machine languages, assemblers, autocoders, syntax and compilers, processors and generators, as well as with general higher programming languages.

    The bibliography contains some 2,700 titles of books, magazines and essays for around 300 programming languages. However, as shown by the "Overview of Existing Programming Languages", there are more than 300 such languages. The "Overview" lists a total of 676 programming languages, but this is certainly incomplete. One author has already announced the "next 700 programming languages"; it is to be hoped the many users may be spared such a great variety for reasons of compatibility. The graphic representations (illustrations 1 & 2) show the development and proportion of the most widely-used programming languages, as measured by the number of publications listed here and by the number of computer manufacturers and software firms who have implemented the language in question. The illustrations show FORTRAN to be in the lead at the present time. PL/1 is advancing rapidly, although PL/1 compilers are not yet seen very often outside of IBM.

    Some experts believe PL/1 will replace even the widely-used languages such as FORTRAN, COBOL, and ALGOL.4) If this does occur, it will surely take some time - as shown by the chronological diagram (illustration 2) .

    It would be desirable from the user's point of view to reduce this language confusion down to the most advantageous languages. Those languages still maintained should incorporate the special facets and advantages of the otherwise superfluous languages. Obviously such demands are not in the interests of computer production firms, especially when one considers that a FORTRAN program can be executed on nearly all third-generation computers.

    The titles in this bibliography are organized alphabetically according to programming language, and within a language chronologically and again alphabetically within a given year. Preceding the first programming language in the alphabet, literature is listed on several languages, as are general papers on programming languages and on the theory of formal languages (AAA).
    As far as possible, most of the titles are based on autopsy. However, the bibliographical description of some titles will not satisfy bibliography-documentation demands, since they are based on inaccurate information in various sources. Translated titles whose original titles could not be found through bibliographical research were not included. In view of the fact that many libraries do not have the quoted papers, all magazine essays should have been listed with the volume, the year, issue number and the complete page range (e.g. pp. 721-783), so that interlibrary loans could take place with fast reader service. Unfortunately, these data were not always found.

    It is hoped that this bibliography will help the electronic data processing expert, and those who wish to select the appropriate programming language from the many available, to find a way through the language Babel.

    We wish to offer special thanks to Mr. Klaus G. Saur and the staff of Verlag Dokumentation for their publishing work.

    Graz / Austria, May, 1973