FAC(ID:1223/fac001)

Functional alternative to APL 


Functional Array Calculator. APL-like but purely functional and lazy, allowing infinite arrays.
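
To illustrate the idea (a minimal Haskell sketch, not FAC syntax; all names here are invented for illustration), a lazy language lets an unbounded structure be defined as a whole and consumed on demand:

    -- An "infinite array" of naturals, defined without a bound; laziness
    -- means only the elements actually demanded are ever computed.
    naturals :: [Integer]
    naturals = [0 ..]

    -- Whole-structure transformation, then an APL-style +/ reduction over
    -- a finite prefix of the infinite structure.
    squares :: [Integer]
    squares = map (^ 2) naturals

    sumFirstTen :: Integer
    sumFirstTen = sum (take 10 squares)   -- 0+1+4+...+81 = 285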

Co-developed at Yale by Hai-Chen Tu with Alan Perlis, who was a longtime advocate of array-centred programming.




Related languages:
  APL => FAC   Augmentation of

References:
  • Hai-Chen Tu and Alan J. Perlis, "FAC: A Functional APL Language", Hawaii International Conference on System Sciences, Honolulu, January 8-10, 1985.
  • Hai-Chen Tu, "FAC: Functional Array Calculator and its Application to APL and Functional Programming" YALEU/DCSnRJ168, Yale University, New Haven, CT, (Apr. 1986). view details
  • Tu, H.-C. and Perlis, A. J., "FAC: A Functional APL Language"
    Abstract: FAC, a functional array calculator language, features APL syntax and array operations but allows partitions, operators, and infinite arrays: extensions that are semantically implausible in APL.
    Extract: Introduction
    In the last two decades, the development of programming has unveiled new, important language concepts. In most cases, each concept is characterized by a programming language and is then identified as the paradigm of the language. Once the paradigm of a language has been recognized, the language is expanded accordingly. In the meantime, a language must also be ready to adopt other paradigms because new tasks emerge and its own paradigm may not be able to handle them. When two or more paradigms are integrated, each must contribute its strengths without weakening the others. In this respect, APL integrates nicely with functional languages.
    This article describes the functional array calculator language FAC and gives programming examples.
    Extract: Functional languages and list processing
    Functional languages and list processing.
    Functional (applicative) programming creates shorter and more compact programs than ordinary algorithmic (imperative) programming. It abstracts a problem through a sequence of function compositions instead of state transitions. The major difference is that functional programming maintains referential transparency while algorithmic programming does not. Referential transparency means that any variable or expression, within a given scope, can associate only with a single value. Without referential transparency, a program is harder to write and to debug, which makes program maintenance difficult and thus contributes to the software crisis.
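
    As an aside (an illustrative Haskell sketch, not from the article), referential transparency is what licenses substituting an expression by its value:

      -- Within its scope, `area r` always denotes the same value for the
      -- same r, so equals may be substituted for equals anywhere.
      area :: Double -> Double
      area r = pi * r * r

      twoCircles :: Double
      twoCircles = area 3.0 + area 3.0   -- interchangeable with 2 * area 3.0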
    Not only do functional languages exhibit better programming style, but they also show more parallelism than nonfunctional languages: arguments of a function application can be executed in parallel because there are no side effects. Indeed, both the dissatisfaction with sequential languages and advancing hardware technology have generated the research boom on dataflow machines, computers that can execute functional programs efficiently [2].
    Unfortunately, the parallelism and compactness of functional programs haven't been fully realized because Lisp-style list processing has dominated functional languages (see box above).
    To search for a better functional programming style, we can either investigate new styles of list processing or choose some other data structures. The FP system proposed by Backus [3] is an attempt of the first kind; it retains the list structure (streams) but advocates a nonrecursive functional programming style by using functional forms. On the other hand, we see little progress in identifying alternative data structures other than the current trend of research on typed lambda calculus [4,5], which emphasizes structured programming more than breaking the von Neumann bottleneck. However, we feel that a good candidate for an alternative data structure, arrays, has been effective in a nonfunctional language, APL. (Although some Lisps contain arrays, none has extended the array processing capability close to what APL has.)
    Extract: APL and array processing.
    APL and array processing.
    The central idea of APL programming is to treat arrays, instead of their individual elements, as basic data units. The concept of manipulating arrays as single units creates a unique style of programming in which computation is composed through a series of array transformations. Thus, APL enables us to write very condensed programs and to exhibit highly parallel computation.
    The powerful APL programming style is not derived from Algol-style, scalar-oriented controls, which are actually a distraction, but from linear algebra notation. More specifically, the style uses partitions (for example, A + B distributes + over scalar elements of A and B), and operators (for example, +/A sums all A's elements). The former explores parallelism, and the latter abstracts controls; both contribute to the compactness of APL programs. And, of course, we must credit the uniform structure of arrays for making these notations possible. In all, we can say arrays, partitions, and operators form the backbone of APL programming style. Brief explanations of some APL terminologies are given in the box on p. 39. Array index origin 0 is used throughout this article.
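
    For concreteness, here is a small Haskell rendering of the two notations just named (illustrative only; `a` and `b` are made-up arrays): the partition A + B becomes an element-wise zip, and the operator +/A becomes a plus-reduction.

      a, b :: [Int]
      a = [1, 2, 3, 4]
      b = [10, 20, 30, 40]

      -- Partition: APL's A + B distributes + over corresponding elements.
      elementwiseSum :: [Int]
      elementwiseSum = zipWith (+) a b   -- 11 22 33 44

      -- Operator: APL's +/A sums all of A's elements.
      plusReduce :: Int
      plusReduce = foldr (+) 0 a         -- 10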
    The three APL constructs have been well recognized, and in the last decade, the most significant progress in APL has been in generalizing those constructs. For example, in APL2 [6], nested arrays are used, partitions are extended to defined functions using the each (¨) operator, and operators become user definable (see box on p. 38).
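
    A rough Haskell analogue of those APL2 extensions (a sketch, assuming nested arrays map to nested lists): each (¨) becomes map, and a user-defined operator is simply a higher-order function that derives a new function.

      nested :: [[Int]]
      nested = [[1, 2], [3, 4, 5], []]

      -- Applying a function to each item of a nested array, e.g. the
      -- length ("shape") of each sub-array.
      lengthOfEach :: [Int]
      lengthOfEach = map length nested        -- [2, 3, 0]

      -- A user-defined "operator": given a function, derive the function
      -- that applies it twice.
      twice :: (a -> a) -> (a -> a)
      twice f = f . f

      twiceExample :: Int
      twiceExample = twice (+ 3) 10           -- 16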
    But all extensions fall short of expectation because of a major weakness in APL: its semantics. APL was initially designed as an algorithmic language with properties like gotos and side effects. Also, since it was designed as an interactive language, it uses dynamic binding instead of the preferred lexical binding. (When APL was designed, nobody knew how to implement an interactive lexical binding …
    Extract: List processing
    List processing
    Almost all functional languages have adopted lists as their main data structure because lists have a simple structure and flexibility. Even though list processing is orthogonal to functional languages, there are good reasons that functional languages rely on lists. First, since lists can simulate any desired data structures, there is little need for other data structures. Second, list processing fits quite well with recursive programming, which is strongly advocated by most functional languages. Finally, the simple semantics of list structure (just rely on car, cdr, and cons) can be easily implemented in any functional language. All these contribute to the misconception that functional programming is the same as Lisp-style list programming.
    From the programming point of view, Lisp-style list processing has some drawbacks in at least two respects. Lists may not be the appropriate data structure to represent problems, and the sequentiality introduced by its structure and recursion may not reflect the parallelism one could have. Lisp-style list processing thus creates the intellectual von Neumann bottleneck in functional languages. Users have to think and program in terms of scalar operations. Here is a simple example. The problem is to decide whether or not a set of numbers are all nonzero. The Lisp solution is to represent the set as a linear list and check through the list recursively for any zero.
    (define (check S)
      (cond ((null S) t)
            ((= 0 (car S)) nil)
            (t (check (cdr S)))))
    (check '(12 3 0 4))  ⇒  nil
    (check '(1 234))     ⇒  t
    (In Lisp, nil represents the Boolean value false, and t, true.)
    The Lisp solution is not as compact as the predicate shown below using set notation. The predicate also exhibits a high degree of parallelism, which the Lisp solution lacks.
    ∀ i ∈ S : i ≠ 0
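
    The same test written over the collection as a whole, as the predicate above suggests (a Haskell sketch; in APL one would write roughly ∧/S≠0):

      allNonZero :: [Int] -> Bool
      allNonZero s = all (/= 0) s

      -- allNonZero [12, 3, 0, 4]  ⇒  False
      -- allNonZero [1, 234]       ⇒  True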
          in IEEE Software 3(1), January 1986 (Multiparadigm language projects)