MOA (ID:6238/moa001)

MOA, for Mathematics of Arrays: an array-centered parallel language.

Related languages:
  Abrams APL Machine => MOA (influence)

References:
  • Mullin, Lenore M. R. "A Mathematics of Arrays", Ph.D. dissertation, Syracuse University, Syracuse, NY, December 1988.
  • Mullin, L. M. R. and G. Hains, "Formal Program Derivation for a Shared-Memory Architecture: Gauss Jordan Pivoting", McGill University, technical report in progress, February 1990.
  • Mullin, L. M. R.; G. Gao, M. Santavy, and B. Tiffou, "Formal Program Derivation for a Shared-Memory Architecture: LU-Decomposition", McGill University, technical report, May 1990.
  • Jenkins, Michael A.; Mullin, Lenore M. Restifo "A Comparison of Array Theory and A Mathematics of Arrays" Abstract: Array-based programming began with APL. Two mathematical treatments of array computations have evolved from the data concepts of APL. The first, More's array theory, extends APL concepts to include nested arrays and systematic treatment of second order functions. More recently, Mullin has developed a mathematical treatment of flat arrays that is much closer to the original APL concepts. The two approaches are compared and evaluated.
    Extract: Introduction
    The modern concept of an array as a multidimensional data structure has evolved from its use in early programming languages. The original motivation of arrays was to find a counterpart to subscript notation used for sequences, vectors, matrices and higher dimensional objects. The array concept was introduced in Fortran with static arrays of a fixed type and with a limited number of dimensions. It was extended in Algol 60 by allowing an arbitrary number of dimensions and by having the size of the array determined dynamically at block entry.

    In Algol 60 and Fortran, arrays are treated as data containers for scalar values. The array is viewed as a collection of variables each capable of holding a scalar value. Arrays can be passed as parameters to procedures, but cannot be returned as results from value returning functions. Similar restrictions have been passed on in most subsequent procedural languages such as Pascal and Turing.
    APL extended the array concept by treating it as a value at the same level as a single number or character, thus permitting expressions with array values to be used. One of the original contributions of APL was to give a consistent treatment of scalar values as arrays, thus allowing the universe of data objects in the language to be entirely arrays. This brought to the notation a mathematical uniformity that has much appeal. In APL as originally developed, an array was either an array of characters or an array of numbers. A formal description of APL arrays and the operations of APL was developed as part of the APL standard[ISO82].
    This paper examines two mathematical treatments of arrays that have followed from APL concepts. The first was motivated by a desire to extend APL to permit arrays of arrays in a way consistent with the nesting concepts of set theory. The treatment, called array theory (AT) [More 73], has gone through several versions and has been the basis for the data structures in the programming language Nial [Jenkins 85]. An earlier version motivated the extension of APL arrays to nested arrays in APL2 [Brown 84]. Based on experience gained using Nial, efforts are underway to refine array theory and to publish a definitive account [More 90, Franksen 90].
    The second mathematical treatment was developed to provide a firm mathematical reasoning system for algorithms involving flat arrays of numbers. This treatment, called a Mathematics of Arrays (MOA) [Mullin 88], has been used to prove theorems about register transfer operations in hardware design, and to describe the partitioning of linear algebra operations for parallel architectures [Mullin 89]. MOA follows APL more closely than AT and is largely concerned with having a succinct notation in which definitions can be stated and theorems on array transformations can be proved. This work has attracted the attention of researchers attempting to extend functional languages to include array objects.
    Both AT and MOA have their roots in APL and it is not surprising that they are consistent with each other as theories of arrays. The purpose of this paper is twofold: first, to explain the correspondence between concepts in the two notations in order to assist users of the two notations to communicate, and second, to compare the effectiveness of the two approaches for their purposes.

          in Restifo Mullin, Lenore M. et al. (eds) "Arrays, functional languages and parallel systems", Kluwer Academic Publishers, Boston, MA, 1991.
  • Restifo Mullin, Lenore M. "Psi, the Indexing Function: a basis for FFP with arrays" Abstract: An array of arbitrary components can be viewed as a variable whose value is a function from subscripts into its components [17]. The subscripts (or indices) of a 1-dimensional array are related to the system of integers, more precisely the natural numbers, which play an important role in the theory of groups. A Mathematics of Arrays (MOA) [13] builds upon these ideas to construct permutation and transformation groups into a calculus of n-dimensional arrays with the Psi function as the fundamental building block. We propose that this calculus provide a basis for Formal Functional Programming (FFP) [2] with arrays for parallel architectures.
    Extract: From introduction
    Since the onset of computers and computation, arrays have been the primary data structure used for scientific problem formulation.

    [...] we develop an indexing function which will be used to operate on n-dimensional arrays over any axis given the size of each dimension. The operations will be a subgroup of permutation and transformation groups. MOA will evolve to include other transformations and permutations which consistently appear throughout nature, i.e. in scientific applications.
    In this paper we introduce the Psi function and its algebraic properties. Using the Psi function, we give a few elementary operations on 1-dimensional arrays. In particular, we give the definition of Blelloch's scan operation [3], noting that scan is defined in MOA for n-dimensional arrays over any axis [13]. We note that all operations are, by default, over the primary axis of an array. This includes scalars (0-dimensional arrays) and vectors (1-dimensional arrays), where the primary axis is the only axis. In this introductory paper we omit the higher order operation Omega which extends operations over all dimensions. Details on Omega as well as other operations may be referenced [13].
    [A rough illustrative sketch of Psi-style indexing and a primary-axis scan follows the reference list below.]
    Extract: MOA: a historical perspective
    As previously mentioned, arrays with an associated algebra have been around for over 100 years. It was not until 1970 when Philip Abrams [1] investigated the mathematical properties of certain APL operations with the idea of developing an APL Machine. He recognized that certain operations could be defined using the structural information of an n-dimensional array A^n, i.e. the size of each of A^n's dimensions. He developed a meta-language as a preliminary to a full mathematical theory based on the definition of array operations with structural information and indexing as the building blocks. He described elementary properties of indexing with scalar operations and concatenation. He used these properties in what he referred to as the simplification of array expressions, which would be used in his D-Machine (or Deferred execution unit). The D-Machine parsed and simplified array expressions, and passed addresses of arguments to unary and binary operations in a stack-oriented architecture called the E-Machine (or Execution unit). In this context he was the first to coin the term deferred execution of an array expression. We can now see that Abrams's D-Machine is the basis for an intelligent compiler (i.e. a compiler that can simplify and derive optimal code) for a functional language with arrays. He left as open questions the need to develop a full mathematical theory on arrays using structure and indexing, and the application of these ideas to parallel processing. Work by Perlis [16], Miller [12], Budd [4] and others furthered this development. MOA achieves closure on the class of operations introduced by Abrams and formally unifies the inner and outer product, as he conjectured in his thesis.
    [A small illustrative sketch of deferred evaluation of array expressions, in this spirit, also follows the reference list.]

          in Restifo Mullin, Lenore M. et al. (eds) "Arrays, functional languages and parallel systems", Kluwer Academic Publishers, Boston, MA, 1991.
  • Skillicorn, David B. and Talia, Domenico "Models and languages for parallel computation", pp. 123-169.
          in ACM Computing Surveys (CSUR) 30(2), June 1998.
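
The following is a minimal, unofficial sketch (in Python) of the flat-array view described in the abstract and introduction of "Psi, the Indexing Function" above: an n-dimensional array as a shape plus a row-major flat vector, a Psi-like indexing function over the leading axes, and a prefix scan over the primary axis. The names FlatArray, psi, and scan are illustrative only and are not MOA's formal definitions, which should be consulted in the cited papers.

from dataclasses import dataclass
from functools import reduce
from operator import mul, add

@dataclass
class FlatArray:
    shape: list   # size of each dimension
    data: list    # row-major flat storage

def psi(index, a):
    """Select the subarray of `a` addressed by a (possibly partial) index
    over the leading axes; a full-length index yields a 0-dimensional array."""
    assert len(index) <= len(a.shape)
    sub_shape = a.shape[len(index):]
    sub_size = reduce(mul, sub_shape, 1)
    offset = 0
    for i, n in zip(index, a.shape):
        assert 0 <= i < n
        offset = offset * n + i          # row-major offset of the addressed subarray
    offset *= sub_size
    return FlatArray(sub_shape, a.data[offset:offset + sub_size])

def scan(op, a):
    """Prefix scan over the primary (leading) axis, the default axis for
    operations in the paper's convention."""
    n = a.shape[0]                       # primary-axis length (requires at least one axis)
    parts = [psi([i], a) for i in range(n)]
    out, acc = [], None
    for p in parts:
        acc = p.data if acc is None else [op(x, y) for x, y in zip(acc, p.data)]
        out.extend(acc)
    return FlatArray(a.shape, out)

# Example: a 2x3 array stored flat in row-major order.
A = FlatArray([2, 3], [1, 2, 3, 4, 5, 6])
print(psi([1], A).data)      # [4, 5, 6]  -- the second row
print(psi([1, 2], A).data)   # [6]        -- a single (scalar) component
print(scan(add, A).data)     # [1, 2, 3, 5, 7, 9]  -- running sums down axis 0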
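
And a very small sketch, in the same hedged spirit, of the deferred-execution idea attributed to Abrams's D-Machine in the historical extract above: array expressions are built lazily, simplified using only structural identities (here just reverse(reverse(A)) = A), and evaluated afterwards. The classes Expr, Leaf, and Reverse and their methods are invented for illustration and are not Abrams's actual design.

class Expr:
    """Deferred array expression: build first, simplify, then evaluate."""
    def reverse(self):
        return Reverse(self)

class Leaf(Expr):
    def __init__(self, data):
        self.data = list(data)
    def simplify(self):
        return self
    def evaluate(self):
        return self.data

class Reverse(Expr):
    def __init__(self, arg):
        self.arg = arg
    def simplify(self):
        arg = self.arg.simplify()
        if isinstance(arg, Reverse):     # structural identity: two reversals cancel,
            return arg.arg               # so no data is ever moved for this case
        return Reverse(arg)
    def evaluate(self):
        return list(reversed(self.arg.evaluate()))

expr = Leaf([1, 2, 3]).reverse().reverse()
print(type(expr.simplify()).__name__)    # Leaf -- simplified before any execution
print(expr.simplify().evaluate())        # [1, 2, 3]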