Associative array extensions to FORTRAN IV 

Associative Memory and Parallel Processing Language

Related languages
FORTRAN IV => AMPPL-I   Extension of
SLIP => AMPPL-I   Influence
AMPPL-I => AMPPL-II   Evolution of

  • Findler, Nicholas V.; McKinzie, Wiley R. "On a Tool in Artificial Intelligence Research: An Associative Memory, Parallel Language, AMPPL-II" pp. 259-270 Abstract: One of the remarkable things about human information processing is the way we store and retrieve information. The human associative memory is a device that has been only rather crudely approximated with computers in flexibility, self-modifiability, and complexity of associations.
    In this paper, we describe a computer language, AMPPL-II, which simulates a machine with an extensive, variable-size associative memory and parallel processing capability. It was to be made fairly machine-independent and is, therefore, embedded in a high-level algebraic language. The version outlined here is built as an extension of SLIP (Symmetric List Processing Language), itself being embedded in FORTRAN IV.
    The expected range of applicability of AMPPL-II covers a rather wide area. Candidates in the non-numerical field are game playing, picture processing and image identification, encoding-decoding, simulation of cognitive behavior, multikey information retrieval, language analysis, sorting, merging, collating, query systems, and so on. Numerical applications would be matrix calculations, solution of partial differential equations, computation of auto- and cross-correlation functions, signal correction, etc.
    Stimulated by this language, new computational techniques and algorithms may, hopefully, be developed. These, on one hand, and technological advances, on the other, may render the hardware implementation of an AMPPL-like machine economical.
    Extract: Introduction
    There is an obvious interrelation between the level of complexity of the problems that are potentially solvable on computers, and the power of available software and hardware. Further development in programing systems and languages enables us to attack problems that have so far been beyond the sphere of practicability. List processing and string manipulating languages, for example, have opened up new vistas in many research fields, probably most significantly in Artificial Intelligence.
    We describe in this paper an approach to memory and information processing, radically different from those of the traditional von Neumann-type computers. An Associative Memory, Parallel Processing Language, AMPPL-II*, is introduced, which simulates a computer built in hardware. (We point out that some of the facilities available in AMPPL-II would not be present in the engineer's implementation. However, they do not "cost" much in programming effort and in computing time, and they may also be handy to have. We have, therefore, decided to include a few "non-generic" features.)
    It should be noted here that there are many references in the literature to hardware and software efforts that are aimed at objectives similar to ours here. The engineering activity can be divided into two, rather distinct categories. First, very expensive "black boxes" have been constructed and attached to special or general purpose computers to perform some of the functions we shall discuss later. Second, small associative memory units have been included in computer systems for specific purposes, such as the paging operation with large-scale time-shared machines.
    A number of programming systems have also been developed, most of them directed towards specific, some of them towards more general applications. We only wish to single out here the work of Feldman and Rovner [3,4,5], and of Dodd, Beach and Rossol [6]. Their objectives are very similar to ours, although the respective techniques are different. Experience will show which of these languages is more powerful, easier to program and debug in.
    Excellent surveys, covering the field in question up to the middle of 1966, can be found in [7,8]. For the sake of completeness, we mention a few more, some left out of the above surveys, some of more recent vintage [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20].
    Extract: Notes on implementation
    The first version, AMPPL-I, was reported on in [1,2] and was implemented but not completely debugged for the IBM 7044. The presently described language, running on the CDC 6400, has improved dynamic memory allocation facilities and more powerful instructions concerning Relations (cf. Section II/6).
    Extract: Authorship
    NVF designed AMPPL and is responsible for the mistakes contained in this paper. W. R. McK. implemented the language on the CDC 6400. Teiji Furugori's contribution to the latter is also acknowledged. Further, Peter Muller and Charles Bergenstock worked on an IBM 7044 implementation of AMPPL-I.
    Some Basic Concepts
    In contrast with conventional, word-oriented machines, computers with associative memories are content-addressable. These terms refer to the interrelationship between data as well as to the fact that a memory word or words can be accessed by matching with a selectable field of a special word, rather than by an address. Parallel processing, a related idea distinct from multiprocessing, allows command sequences to be executed simultaneously over large numbers of data sets.
    It was intended to develop a language which incorporates, from the programmer's viewpoint, these facilities in addition to the previously available algebraic, list processing and string manipulating facilities.
    For understandable reasons, embedding seemed to be an economical and fairly efficient approach, which also achieves a reasonably high level of machine independence. The presently described version is an extension of SLIP, itself being embedded in FORTRAN IV. (The internal mechanism of SLIP had to be modified to a small extent but the user need not be aware of this fact.) There are only two AMPPL subprograms written in assembly language now. Converting often-used subroutines and functions, currently coded in FORTRAN, into assembly language would, of course, save considerable machine time with large programs.
    Unlike the expensive and inflexible associative memories presently built in hardware, the size of the Simulated Associative Memory (SAM) is determined by the programmer according to his needs. In fact, he has to specify the ratio between the sizes of SAM and of the Available Space List (AVSL) of SLIP at the beginning of his program. The sum of SAM and AVSL equals the storage area left unused after the compilation of the FORTRAN program.
    The programmer can also build into the program trap points, at which a part of SAM or AVSL is dynamically reassigned to the other part if there is a need for it, i.e. if the size of one memory type decreases below a certain prespecified threshold value. Memory contraction, coupled with a specific form of garbage collection, takes place in SAM at these instances.
    There is an efficient and fast flow of information between any two of the SAM, AVSL and FORTRAN memory.
    It will be helpful in understanding the organization of AMPPL-II if we consider the diagram in Figure 1.

    The main block is the Memory Matrix, which consists of r SAM-words, each having 2n bit positions. There are four special words, the short registers, each n bits long. (On the CDC 6400, there are n=60 bits in a word.) [As will be seen later, every SAM-word consists of two actual memory words.]
    Two of these serve as Argument Registers 1 and 2. The other two represent Mask Registers 1 and 2. There are also three long registers, vertical columns r bits in length and 1 bit in width. Two of these are called Response Register 1 and Response Register 2. The third one is the Availability Register. (The role of the words FIRST and LAST in Figure 1 is explained later.)
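    The register organization just described can be sketched in present-day terms. The following Python model is purely illustrative: every name in it (`SimulatedAssociativeMemory` and its fields) is our own invention, not AMPPL-II vocabulary.

```python
# A minimal model of the AMPPL-II register organization described above.
# All identifiers here are ours, chosen for illustration; the paper's
# Figure 1 defines the actual layout.

class SimulatedAssociativeMemory:
    def __init__(self, r, n=60):
        self.r = r                      # number of SAM-words
        self.n = n                      # bits per machine word (60 on the CDC 6400)
        self.words = [0] * r            # each SAM-word is 2n bits wide
        # four short registers, n bits each
        self.argument = [0, 0]          # Argument Registers 1 and 2
        self.mask = [0, 0]              # Mask Registers 1 and 2
        # three long registers, one bit per SAM-word
        self.response = [[0] * r, [0] * r]   # Response Registers 1 and 2
        self.availability = [1] * r          # bit 1 = SAM-word is free

sam = SimulatedAssociativeMemory(r=8)
assert len(sam.words) == 8
assert all(sam.availability)            # initially every SAM-word is available
```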
    In the following, we shall briefly describe some of the operations used in AMPPL-II and discuss only two, the 'Search and Flag' and 'Set Relation', in detail.
    Operations in AMPPL-II (1) Memory Allocation:
    As mentioned above, the initial subdivision of free core into SAM and AVSL can be dynamically modified during the execution of the program. Instructions are available to count the number of currently available SAM and AVSL cells, and to accomplish the transfer of free cells from one memory category to the other. At these points, and also at the time of mass input into SAM (see below), memory contraction and garbage collection in SAM take place.
    Operations in AMPPL-II (2) Input-Output:
    Mass input-output is obtained when FORTRAN arrays or SLIP lists are read into SAM, and when designated SAM-words are loaded into FORTRAN arrays or SLIP lists. Also, wholesale clearing operations can be executed between any two specified SAM addresses. (A SAM-address can be conceived of as the index of a one-dimensional array.)
    Individual words can be transferred between any two of the SAM, SLIP and FORTRAN memories. Designated parts of SAM can be printed in different modes.
    Full words or specified bit configurations can be put into the Argument and Mask Registers, and the contents of the latter can also be read into the FORTRAN memory.
    Operations in AMPPL-II (3) Operations Concerning the Response Registers:
    To the AMPPL programmer, SAM appears to contain information in a manner that permits certain basic processes, including reading and writing, to be carried out simultaneously in particular cells. These designated cells contain responding pieces of information and were selected by a previous 'Search and Flag' operation or were deliberately flagged by a special flagging instruction. The 'Search and Flag' operations locate and mark SAM-words according to various criteria. Subsequent processes may then be performed on these again, and the results can represent Boolean combinations, ANDs, ORs and NOTs, of consecutive searches.
    The basis of comparison is usually* put into one of the Argument Registers. The search is carried out only over those fields of SAM-words that are marked by bits '1' in the relevant Mask Register. The success of a search is indicated by flags (bits '1') at the level of the corresponding SAM-word in the relevant Response Register. Those SAM-words that have not been used or are no longer necessary are denoted by a tag (bit '1') in the Availability Register.
    Two points should be mentioned here. In order to speed up the search processes in SAM, one-directional pointers link flagged SAM-words (see Figure 2) and, also, end markers indicate the addresses of the SAM-words first and last flagged in the two Response Registers (the contents of the FORTRAN variables FIRST and LAST in Figure 1).

    Two other FORTRAN variables, POINT1 and POINT2, contain the SAM-addresses of the last SAM-word searched, whether successfully or otherwise, with reference to Response Register 1 and Response Register 2, respectively. (Cf. the possible values of the argument WHICH of the subroutine SERFLG.)
    The actual form of the major instruction performing the above described processes is

    A few words of comment are needed here. Two subsequent 'Search and Flag' operations with CRITER = GTEQ and LTEQ yield responsive words of values between given limits. NEXTHI can be performed by two subsequent searches with criteria GTEQ and MIN; similarly, NEXTLO is done with criteria LTEQ and MAX. The value of one of NEXTHI and NEXTLO that is nearer to the value in the Argument Register yields NEXT. The criteria BITSHI and BITSLO are useful in comparing non-numerical data and selecting the "most similar" or "least similar" pieces of information, respectively. The number of matching bits can be found as the values of special FORTRAN variables. GRPOFM finds, for example, misprints caused by transposed, missing, and added characters. The character set can be represented by groups of 2-8 bits. Since the matching process ignores the position of the groups being matched, there are extra facilities to identify transposition errors. Also, the number of the matching groups is accessible.
    There are safeguards to prevent a SAM-word of the wrong information mode (e.g. floating point number instead of alphabetic information) from becoming respondent if its contents happen to be the right bit configuration. A detailed description of this is to be found in [22].
    Finally, it should be noted that there exist instructions to flag and unflag specified SAM-words, to count the number of flagged SAM-words, and to put the SAM-addresses of flagged words in a FORTRAN array or a SLIP list.
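    The search mechanism of this section can be sketched in Python. The function below is a hedged approximation of what a SERFLG-like call does with the GTEQ, LTEQ and MAX criteria named above; the signature and helper names are ours, not the paper's.

```python
# Sketch of a 'Search and Flag' step: compare the masked field of every
# SAM-word against the masked Argument Register under a criterion, and
# record hits in a response register. Criterion names (GTEQ, LTEQ, MAX)
# follow the paper; the function itself is our own invention.

def search_and_flag(words, argument, mask, criterion):
    """Return a list of 0/1 flags, one per SAM-word."""
    masked = [w & mask for w in words]
    target = argument & mask
    if criterion == "GTEQ":
        flags = [1 if m >= target else 0 for m in masked]
    elif criterion == "LTEQ":
        flags = [1 if m <= target else 0 for m in masked]
    elif criterion == "MAX":            # the Argument Register is ignored here
        top = max(masked)
        flags = [1 if m == top else 0 for m in masked]
    else:
        raise ValueError(criterion)
    return flags

# Two consecutive searches AND-ed together select values within limits,
# as the paper notes for GTEQ followed by LTEQ:
words = [3, 9, 5, 12, 7]
lo = search_and_flag(words, 5, 0xFF, "GTEQ")
hi = search_and_flag(words, 9, 0xFF, "LTEQ")
between = [a & b for a, b in zip(lo, hi)]
assert between == [0, 1, 1, 0, 1]       # 9, 5 and 7 lie between 5 and 9
```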

    *If, for example, those numbers are searched for that are greater than a given one. However, if the criterion of search is, for example, 'maximum', the Argument Registers are ignored.
    Operations in AMPPL-II (4) Operations Concerning the Availability Register:
    As mentioned before, a bit 1 in the Availability Register indicates that the corresponding SAM-word is free. We call this mark a 'tag', as distinct from the 'flag' in the Response Registers.
    There are instructions which tag and untag SAM-words, count the number of available SAM-words, and which put the SAM-addresses of tagged words in a FORTRAN array or a SLIP list.
    Operations in AMPPL-II (5) Inter-Register Operations:
    All 16 possible logical operations are executable between any two of the short (Argument and Mask) Registers or between any two of the long (Response and Availability) Registers. Here the operands are the bit strings occupying the respective registers.
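    As a sketch of the idea, any of the 16 two-operand Boolean functions can be described by its four-row truth table and applied bit-by-bit between two registers; the formulation below is ours, not AMPPL-II's.

```python
# Each of the 16 Boolean functions of two bits is fully specified by a
# truth table mapping (bit_a, bit_b) -> result bit. Applying one between
# two equally long registers is then a bitwise map.

def boolean_op(reg_a, reg_b, truth_table):
    """Apply one of the 16 two-operand Boolean functions bit-by-bit."""
    return [truth_table[(a, b)] for a, b in zip(reg_a, reg_b)]

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

r1 = [1, 0, 1, 1]
r2 = [1, 1, 0, 1]
assert boolean_op(r1, r2, AND) == [1, 0, 0, 1]
assert boolean_op(r1, r2, XOR) == [0, 1, 1, 0]
```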
    Operations in AMPPL-II (6) Processes Regarding Relations:
    Besides the 'Search and Flag' operation, these processes are the most significant in AMPPL-II. We shall, therefore, discuss them, also, in some detail.
    Let us generalize the concept of an algebraic function and define a Relation (REL) between an Object (OBJ) and a Value (VAL)
    REL (OBJ) = VAL.
    Each of the above three entities can be single items or three kinds of lists. The first kind simply contains various equivalent names of the same item. (One can think of synonyms within the given context.) This is called the EQUIVALENT LIST. The second kind of list bears the names of a number of subunits, any processing on which is always uniform. An example of these lists may be the students of a class, who always have the same teacher, always stay in the same classroom, etc. Distinguishing processes, such as grading of individual exams, are not to be carried out on the elements of so designated lists. Finally, the third kind of list has distinct and, in some respect, independent elements. An example of this could be the pieces of furniture in a certain room if one would like to, say, paint them different colors.
    Also, the programmer can define a Relation as an ordered set of other Relations.  The combining connectives are:
    ∧ (and), ∨ (or), ¬ (not), + (concatenated), ← (reverse).
    Let us further define a reserved symbol SELF in order to be able to exclude self-referencing in unwanted cases. Finally, the term on the left-hand side of a 'concatenated' symbol is considered to be in (Teutonic) genitive. The following examples should make this clear:
    (i) PARENT => FATHER ∨ MOTHER
    i.e. a parent is defined as a father or mother;
    (ii) CHILD => ← PARENT
    i.e. the child is defined as the reverse of the parent;
    i.e.  the grandfather is defined as the father's or mother's father;
    (iv) HUSBAND => SPOUSE ∧ ¬ WIFE
    i.e. the husband is defined as a spouse but (and) not a wife;
    i.e. the brother is defined as the mother's and father's son but (and) not self; if we wish to include half-brothers as well, we can put
    i.e. the mother's or father's son but (and) not self.
    As the system reads in these definitions from cards, it checks them in toto for circularity, which it does not accept, and puts them in a Polish prefix form on DEFINITION LISTs. The latter are sublists of the EQUIVALENT LISTs and therefore sub-sublists of the SLIP list CODEL.  (See Figure 2).  
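    The circularity check performed on read-in can be sketched as a simple depth-first search over the definitions. The dictionary representation below is our own stand-in for the DEFINITION LISTs, not the paper's format.

```python
# Sketch of the circularity check applied when Relation definitions are
# read in: a definition is rejected if following its constituent Relation
# names ever leads back to a name already on the current path.

def is_circular(definitions):
    """definitions: dict mapping a Relation name to the names it uses."""
    def visit(name, seen):
        if name in seen:
            return True                      # came back to an ancestor
        for dep in definitions.get(name, []):
            if visit(dep, seen | {name}):
                return True
        return False
    return any(visit(name, set()) for name in definitions)

ok = {"PARENT": ["FATHER", "MOTHER"], "CHILD": ["PARENT"]}
bad = {"A": ["B"], "B": ["A"]}               # A and B define each other
assert not is_circular(ok)
assert is_circular(bad)
```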

    On the CODEL list, the members are either entity (Relation, Object, Value) names, each less than 11 characters long, or the names of the three kinds of sublists mentioned before. The machine address of an item on the CODEL list is called its Code Number. Whenever a new Relation is defined by the subroutine
    the system sets up one (or more) Relation Descriptor word(s) in SAM, which contain(s), in the proper fields, the three code numbers representing the Relation, the Object and the Value. (See Figure 3.) K is an integer between 0 and 7, and characterizes, for each of the three entities, whether it represents a single item or non-distinct elements of a list, on one hand, or distinct elements of a list, on the other. In the latter case, as many Relation Descriptor words are established as the total number of combinations possible.

    There are altogether seven basic questions a retrieval system for Relations can answer. These are as follows:
    (a) Is a particular relation, between a given object and value, true?
    (b) What is (are) the value(s) belonging to a given relation-object pair, if any? REL(OBJ) = ?
    (c) What is (are) the object(s) belonging to a given relation-value pair, if any? REL(?) = VAL
    (d) What is (are) the relation(s) that connect(s) a given object-value pair, if any? ?(OBJ) = VAL
    (e) What relation-object pair(s) belong(s) to a given value, if any? ?(?) = VAL
    (f) What relation-value pair(s) belong(s) to a given object, if any? ?(OBJ) = ?
    (g) Finally, what object-value pair(s) belong(s) to a given relation, if any? REL(?) = ?
    The answers are obtainable by using one simple instruction in every case.
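    All seven question forms reduce to matching (REL, OBJ, VAL) triples against a pattern in which any position may be a '?'. The following Python sketch, with invented names, illustrates this reduction; the paper's actual mechanism works over Relation Descriptor words in SAM.

```python
# A wildcard match over (REL, OBJ, VAL) triples covers all seven question
# forms: fixing all three positions answers (a), leaving one or two open
# answers (b)-(g).

ANY = None  # wildcard standing for the '?' in the question forms above

def query(triples, rel=ANY, obj=ANY, val=ANY):
    return [t for t in triples
            if (rel is ANY or t[0] == rel)
            and (obj is ANY or t[1] == obj)
            and (val is ANY or t[2] == val)]

facts = [("FATHER", "JOHN", "TOM"), ("FATHER", "MARY", "TOM"),
         ("MOTHER", "JOHN", "ANN")]

# (a) is FATHER(JOHN) = TOM true?
assert query(facts, "FATHER", "JOHN", "TOM") != []
# (b) FATHER(JOHN) = ?
assert query(facts, rel="FATHER", obj="JOHN") == [("FATHER", "JOHN", "TOM")]
# (c) FATHER(?) = TOM
assert len(query(facts, rel="FATHER", val="TOM")) == 2
```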
    Finally, we note two more instructions of considerable power. One of them creates
    REL2 (VAL) = OBJ
    where the Object and Value are connected by
    REL1 (OBJ) = VAL and
    i.e. REL1 and REL2 are Reversed relations. Examples of this are:
    REL1             REL2
    husband of      wife of
    spouse of       spouse of
    parent of       child of
    loves           is loved by
    superset of     subset of
    similar to      similar to
    greater than    less than
    above           below
    left to         right to
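    The creation of a Reversed relation can be sketched over the same triple representation of REL(OBJ) = VAL facts; the names below are illustrative only.

```python
# For every stored REL1(OBJ) = VAL, create REL2(VAL) = OBJ, as in the
# table of reversed relation pairs above.

def reverse_relation(triples, rel1, rel2):
    return [(rel2, val, obj) for (r, obj, val) in triples if r == rel1]

# PARENT(TOM) = JOHN reads "the parent of Tom is John", so reversing
# gives CHILD(JOHN) = TOM.
facts = [("PARENT", "TOM", "JOHN"), ("PARENT", "ANN", "JOHN")]
children = reverse_relation(facts, "PARENT", "CHILD")
assert ("CHILD", "JOHN", "TOM") in children
assert ("CHILD", "JOHN", "ANN") in children
```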
    Note that always
    Another instruction finds X, for which it is true that
    A : B = C : X
    where A, B, and C are any of the three entities, Relation, Object, or Value; A and C, on one hand, and B and X, on the other, are of the same type; and the third entity in the Relation Descriptor words containing A, B and C, X is the same for both. (Occurrences of all possible combinations of A and B are considered.)
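    The instruction can be sketched as a search over Relation Descriptor triples for a pair (C, X) occupying the same positions as some (A, B) with a matching third entity; this Python rendering is our own approximation of the stated rule.

```python
# Sketch of the analogy instruction A : B = C : X. For every triple in
# which A and B occupy two positions, collect every X that stands to C in
# those same positions within a triple whose third entity matches.

def analogy(triples, a, b, c):
    results = set()
    for t1 in triples:
        for i in range(3):
            for j in range(3):
                if i != j and t1[i] == a and t1[j] == b:
                    k = 3 - i - j            # the remaining position
                    for t2 in triples:
                        if t2[i] == c and t2[k] == t1[k]:
                            results.add(t2[j])
    return results

facts = [("FATHER", "JOHN", "TOM"), ("FATHER", "MARY", "SAM")]
# JOHN : TOM = MARY : X, sharing the relation FATHER, gives X = SAM.
assert analogy(facts, "JOHN", "TOM", "MARY") == {"SAM"}
```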
    Operations in AMPPL-II (7) Parallel Operations over SAM:
    Here, we briefly list some basic but high-level instructions that should be useful both in numerical and non-numerical applications:
    A constant word in one of the Argument Registers can be added to, subtracted from, multiplied by, divided into, Boolean AND-ed and OR-ed with designated SAM-words through one of the Mask Registers.
    The above operations can also be performed between any two fields, and the Boolean NOT of any single field, of designated SAM-words.
    Specified fields of designated SAM-words can be cleared.
    Designated SAM-words can be shifted to the left or to the right by a specified number of places.
    Single elements, vectors, planes, cofactors or all elements of an array can be flagged if the array was read into SAM according to the usual mapping order.
    Vector addition, subtraction and scalar multiplication are single instructions.
    Also, single instructions yield the determinant and the inverse of one, and the product of two, matrices.
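    As an illustration of one such parallel operation, the following sketch adds a constant to the selected field of every flagged SAM-word; the function and its arguments are our invention, standing in for what AMPPL-II issues as a single instruction.

```python
# Sketch of a parallel (per-word) operation: add a constant from an
# Argument Register to every flagged SAM-word, through a Mask Register
# that selects the field acted upon. Unflagged words are left untouched.

def masked_add(words, flags, constant, mask):
    out = []
    for w, f in zip(words, flags):
        if f:
            field = ((w & mask) + constant) & mask   # operate on the field only
            out.append((w & ~mask) | field)          # splice field back in
        else:
            out.append(w)
    return out

words = [0x10, 0x25, 0x30]
flags = [1, 0, 1]                 # as set by a prior 'Search and Flag'
assert masked_add(words, flags, 1, 0x0F) == [0x11, 0x25, 0x31]
```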
    An Overview
    We have tried to give a short outline of a new computer language. We, however, feel it is more than "just another language" -- it represents another philosophy of, another approach to, problem solving. After all, it is only the sequential design of the von Neumann-type machines that has imposed upon the computing community the presently prevalent but often quite unnatural computational methods. Even using these conventional methods, AMPPL-II (a) should decrease the length of written programs and (b) should simplify the writing, debugging and understanding of programs. (It has very powerful diagnostic facilities.) There is, however, a significant trend, as can be seen in the referenced literature, to develop new algorithms and techniques that make use of content addressability and parallel processing, expose latent parallelism, and introduce computational redundancy. We hope AMPPL-II will enhance this trend.
    We have had only limited programming experience with the language. The following, as yet incomplete, projects are representative examples:
    (1) a query system, which can be continually updated, dealing with complex kinship structures;
    (2) simulation of a learning and self-adapting organism in a hostile environment;
    (3) empirical proofs of conjectures in computational linguistics;
    (4) simulation of a demographical problem; and
    (5) scheduling classrooms, teachers and students.
    We have found that, although one must pay a certain price in machine time and available memory, the programming ease achieved is quite significant. (The disk-resident system consists of roughly 8k words for each of SLIP and AMPPL-II. Only the needed subprograms are called into core.) In the INTRODUCTION, we listed a number of broad areas of application in which AMPPL-II should prove useful. Now we can be more specific in terms of problem characteristics. We expect to save considerable programming effort in using the language whenever
    (i) data are to be addressed by a combination of various sets of reference properties,
    (ii) data elements satisfying the above reference properties are scattered throughout the memory in a sparse and random manner,
    (iii) data elements dynamically change their location in the memory as a consequence of the information processes acting on them,
    (iv) identical sequences of processes operate on distinct, non-interacting data elements,
    (v) the ratio between concurrently and serially executable processes is reasonably high.
    These criteria of language applicability occur to some extent with practically every complex programming problem. The availability of a conventional algebraic language with the AMPPL-cum-SLIP package renders programming more efficient.
    We intend to study in a later paper the numerous issues involved in using AMPPL-II in various fields. Here, we only wish to point to the fact that the proposed system, to an extent, is capable of simulating two distinct kinds of associative memory. In the exact associative memory, the information processes are performed on the basis of finding the intersection of several matching descriptors. Because of the non-uniqueness of many associations and, also, because retrieval requests may be incomplete, there can be several respondent pieces of information. However, ill-formulated and imprecise tasks cannot, in general, be solved, and there is no logical "interpolation" or "extrapolation".
    On the other hand, in a non-exact associative memory these restrictions do not apply. Associations connect statistically related entities, too. The measure of "nearness" is an important concept. Various counting processes, the criteria of search for the most and least similar items (BITSHI and BITSLO) and matching groups of bits (GRPOFM) represent steps in this direction. Biological systems, of course, incorporate both of the above described associative memories.
          in Donald E. Walker, Lewis M. Norton (Eds.): Proceedings of the 1st International Joint Conference on Artificial Intelligence IJCAI-69, Washington, DC, May 1969. William Kaufmann, 1969
  • Findler, Nicholas "The Associative Memory, Parallel-Processing Language, AMPPL-II"
          in Findler, Nicholas et al "Four high-level extensions of FORTRAN IV: SLIP, AMPPL-II, TREETRAN, SYMBOLANG" New York: Spartan Books, 1972
  • Findler, Nicholas [Foreword]
          in Findler, Nicholas et al "Four high-level extensions of FORTRAN IV: SLIP, AMPPL-II, TREETRAN, SYMBOLANG" New York: Spartan Books, 1972
  • Sammet, Jean E., "Roster of Programming Languages 1972" 11
          in Computers & Automation 21(6B), 30 Aug 1972
  • Stock, Marylene and Stock, Karl F. "Bibliography of Programming Languages: Books, User Manuals and Articles from PLANKALKUL to PL/I" Verlag Dokumentation, Pullach/München 1973 33 Abstract: PREFACE AND INTRODUCTION
    The exact number of all the programming languages still in use, and those which are no longer used, is unknown. Zemanek calls the abundance of programming languages and their many dialects a "language Babel". When a new programming language is developed, only its name is known at first and it takes a while before publications about it appear. For some languages, the only relevant literature stays inside the individual companies; some are reported on in papers and magazines; and only a few, such as ALGOL, BASIC, COBOL, FORTRAN, and PL/1, become known to a wider public through various text- and handbooks. The situation surrounding the application of these languages in many computer centers is a similar one.

    There are differing opinions on the concept "programming languages". What is called a programming language by some may be termed a program, a processor, or a generator by others. Since there are no sharp borderlines in the field of programming languages, works were considered here which deal with machine languages, assemblers, autocoders, syntax and compilers, processors and generators, as well as with general higher programming languages.

    The bibliography contains some 2,700 titles of books, magazines and essays for around 300 programming languages. However, as shown by the "Overview of Existing Programming Languages", there are more than 300 such languages. The "Overview" lists a total of 676 programming languages, but this is certainly incomplete. One author has already announced the "next 700 programming languages"; it is to be hoped the many users may be spared such a great variety for reasons of compatibility. The graphic representations (illustrations 1 & 2) show the development and proportion of the most widely-used programming languages, as measured by the number of publications listed here and by the number of computer manufacturers and software firms who have implemented the language in question. The illustrations show FORTRAN to be in the lead at the present time. PL/1 is advancing rapidly, although PL/1 compilers are not yet seen very often outside of IBM.

    Some experts believe PL/1 will replace even the widely-used languages such as FORTRAN, COBOL, and ALGOL.4) If this does occur, it will surely take some time - as shown by the chronological diagram (illustration 2) .

    It would be desirable from the user's point of view to reduce this language confusion down to the most advantageous languages. Those languages still maintained should incorporate the special facets and advantages of the otherwise superfluous languages. Obviously such demands are not in the interests of computer production firms, especially when one considers that a FORTRAN program can be executed on nearly all third-generation computers.

    The titles in this bibliography are organized alphabetically according to programming language, and within a language chronologically and again alphabetically within a given year. Preceding the first programming language in the alphabet, literature is listed on several languages, as are general papers on programming languages and on the theory of formal languages (AAA).
    As far as possible, most of the titles are based on autopsy. However, the bibliographical description of some titles will not satisfy bibliography-documentation demands, since they are based on inaccurate information in various sources. Translation titles whose original titles could not be found through bibliographical research were not included. In view of the fact that many libraries do not have the quoted papers, all magazine essays should have been listed with the volume, the year, issue number and the complete number of pages (e.g. pp. 721-783), so that interlibrary loans could take place with fast reader service. Unfortunately, these data were not always found.

    It is hoped that this bibliography will help the electronic data processing expert, and those who wish to select the appropriate programming language from the many available, to find a way through the language Babel.

    We wish to offer special thanks to Mr. Klaus G. Saur and the staff of Verlag Dokumentation for their publishing work.

    Graz / Austria, May, 1973
          in Computers & Automation 21(6B), 30 Aug 1972