Abrams APL Machine (ID:8234/)
Phil Abrams's machine implementation of APL. Phil Abrams's PhD thesis at Stanford featured a metalanguage for operations over n dimensions. Hugely influential in both practical and theoretical areas, even though never built per se.
References: Philip's first contact with APL came as an undergraduate in mathematics at Princeton University. While browsing in a bookstore, he saw a copy of Ken Iverson's book, thought it looked interesting, bought a copy and filed it away on the bookshelf. That was in 1962 or 1963.

In 1964, Philip went on to Stanford, where he met another graduate student, Larry Breed. They saw an article in the IBM Systems Journal by Ken Iverson, Adin Falkoff and Ed Sussenguth, called "A Formal Description of System/360", which used Iverson notation to describe the functions of the System/360 computer. Philip and Larry decided to teach a seminar about it in the spring of 1965. During this activity they discovered a few bugs and began communicating with Ken and Adin about them. Late that summer Philip and Larry began a project to write an interpreter for Iverson notation; remember that none existed at that time. Then, during the Christmas break in 1965, they ran their boxes of FORTRAN punched cards on an IBM 7090 computer, and, gee whiz, it worked! They had the world's first APL interpreter. This proved the feasibility of APL\360.

In a later project for IBM (in 1966), Philip created an APL operating system for an experimental desktop machine. As it turns out, this machine was really a microcomputer with an eight-instruction set, what we know today as a RISC architecture. Although the little computer never left the lab, Philip's work was easily ported to an IBM 1130 later that year.

Philip's thesis topic was the design of a highly optimized processor to execute APL directly: a machine whose native language would be APL. His thesis, entitled "An APL Machine," with its unique design features of "drag-along" and "beating," has influenced many APL implementors and continues to be cited frequently. Although his thesis has never been implemented per se, many of its ideas are embodied in the APL for the HP3000 and in various APL compilers.
Since the onset of computers and computation, arrays have been the primary data structure used for scientific problem formulation. [...] we develop an indexing function which will be used to operate on n-dimensional arrays over any axis, given the size of each dimension. The operations will be a subgroup of permutation and transformation groups. MOA will evolve to include other transformations and permutations which consistently appear throughout nature, i.e., in scientific applications.

In this paper we introduce the Psi function and its algebraic properties. Using the Psi function, we give a few elementary operations on 1-dimensional arrays. In particular, we give the definition of Blelloch's scan operation [3], noting that scan is defined in MOA for n-dimensional arrays over any axis [13]. We note that all operations are, by default, over the primary axis of an array. This includes scalars (0-dimensional arrays) and vectors (1-dimensional arrays), where the primary axis is the only axis. In this introductory paper we omit the higher-order operation Omega, which extends operations over all dimensions. Details on Omega as well as other operations may be referenced in [13].

Extract: MOA: a historical perspective
As previously mentioned, arrays with an associated algebra have been around for over 100 years. It was not until 1970 that Philip Abrams [1] investigated the mathematical properties of certain APL operations with the idea of developing an APL Machine. He recognized that certain operations could be defined using the structural information of an n-dimensional array, An, i.e., the size of each of An's dimensions. He developed a metalanguage as a preliminary to a full mathematical theory based on the definition of array operations with structural information and indexing as the building blocks. He described elementary properties of indexing with scalar operations and concatenation.
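As a rough illustration of the kind of shape-based indexing the extract describes, the following Python sketch shows a row-major indexing function and an inclusive scan generalized over any axis of a flat n-dimensional array. The names `psi` and `scan` are only suggestive of the paper's vocabulary; the actual MOA definitions in [13] are algebraic, and this code is an assumption-laden sketch, not MOA's notation.

```python
# Sketch (not from the paper): index an n-dimensional array stored as a
# flat row-major vector using only its shape, and scan along any axis.

def psi(index, shape):
    """Flat offset of a multi-index in a row-major array of the given shape."""
    offset = 0
    for i, n in zip(index, shape):
        assert 0 <= i < n, "index out of bounds"
        offset = offset * n + i
    return offset

def scan(data, shape, axis, op=lambda a, b: a + b):
    """Inclusive scan along `axis` of a flat row-major array; returns a copy."""
    out = list(data)
    stride = 1                       # distance between neighbors along `axis`
    for n in shape[axis + 1:]:
        stride *= n
    outer = 1                        # number of slabs before the scanned axis
    for n in shape[:axis]:
        outer *= n
    length = shape[axis]
    for o in range(outer):
        for i in range(stride):      # each 1-D "lane" along the axis
            base = o * length * stride + i
            for k in range(1, length):
                out[base + k * stride] = op(out[base + (k - 1) * stride],
                                            out[base + k * stride])
    return out
```

For a 2-by-3 array [[1,2,3],[4,5,6]] stored flat as [1,2,3,4,5,6], scanning over axis 1 sums within rows, while axis 0 sums down columns, which is the "over any axis" behavior the extract attributes to MOA's scan.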
He used these properties in what he referred to as the simplification of array expressions, which would be used in his D-Machine (or Deferred execution unit). The D-Machine parsed and simplified array expressions, and passed addresses of arguments to unary and binary operations in a stack-oriented architecture called the E-Machine (or Execution unit). In this context he was the first to coin the term deferred execution of an array expression. We can now see that Abrams's D-Machine is the basis for an intelligent compiler (i.e., a compiler that can simplify and derive optimal code) for a functional language with arrays. He left as open questions the need to develop a full mathematical theory on arrays using structure and indexing, and the application of these ideas to parallel processing. Work by Perlis [16], Miller [12], Budd [4] and others furthered this development. MOA achieves closure on the class of operations introduced by Abrams and formally unifies the inner and outer product, as he conjectured in his thesis.

in Restifo Mullin, Lenore M. et al. (eds), "Arrays, Functional Languages and Parallel Systems", Kluwer Academic Publishers, Boston, MA, 1991
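The deferred-execution ideas attributed to Abrams above can be sketched in modern terms: structural operations such as reversal and transposition are "beaten" into the indexing function instead of moving data, and elementwise operations are "dragged along" until an element is actually demanded. This is an illustrative Python sketch under those assumptions, not a model of the actual D-Machine or E-Machine design; all class and method names are invented here.

```python
# Sketch of deferred execution: a 2-D array represented only by its shape
# and an index-to-element function. Structural ops rewrite the index map
# ("beating"); elementwise ops build a deferred expression ("drag-along").

class Deferred:
    def __init__(self, shape, fetch):
        self.shape = shape          # tuple of dimension sizes
        self.fetch = fetch          # multi-index tuple -> element

    @classmethod
    def from_rows(cls, rows):
        return cls((len(rows), len(rows[0])),
                   lambda ix: rows[ix[0]][ix[1]])

    def reverse(self, axis=0):
        # beating: no data moves, only the index map changes
        n, f = self.shape[axis], self.fetch
        def fetch(ix):
            jx = list(ix)
            jx[axis] = n - 1 - jx[axis]
            return f(tuple(jx))
        return Deferred(self.shape, fetch)

    def transpose(self):
        f = self.fetch
        return Deferred(self.shape[::-1], lambda ix: f(ix[::-1]))

    def add(self, other):
        # drag-along: defer the elementwise sum until elements are demanded
        f, g = self.fetch, other.fetch
        return Deferred(self.shape, lambda ix: f(ix) + g(ix))

    def to_rows(self):
        r, c = self.shape
        return [[self.fetch((i, j)) for j in range(c)] for i in range(r)]
```

A chain such as `a.reverse(0).transpose().add(b)` allocates no intermediate arrays; only `to_rows` (or any single element access) forces evaluation, which is the essence of the deferral the D-Machine performed before handing work to the E-Machine.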