CM Fortran (ID:6873)

Fortran for the CM  


Fortran 77 for the Connection Machine, extended with machine-specific array-processing operations and the FORALL construct.

Unlike *Lisp and C*, CM Fortran could target the higher-level "slicewise" virtual-machine model, which yielded substantially faster execution.
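The FORALL construct mentioned above expressed elementwise array assignments whose iterations carry no ordering dependence, so each element could in principle be computed by a different processor; a variant of it was later adopted by High Performance Fortran and Fortran 95. A minimal sketch in Fortran-95-style syntax (the program and variable names are illustrative, not taken from any CM Fortran manual):

```fortran
      PROGRAM FORALL_DEMO
      INTEGER, PARAMETER :: N = 4
      REAL :: A(N,N), X(N)
      INTEGER :: I, J

!     Elementwise assignment over a whole index space; on the CM
!     each element could be evaluated by a separate processor.
      FORALL (I = 1:N, J = 1:N) A(I,J) = REAL(I + J)

!     An optional mask restricts the assignment to selected
!     elements -- here, everything off the diagonal is zeroed.
      FORALL (I = 1:N, J = 1:N, I /= J) A(I,J) = 0.0

      X = SUM(A, DIM=2)   ! array intrinsic: row sums
      PRINT *, X
      END PROGRAM FORALL_DEMO
```

After the masked assignment only the diagonal survives, so row I sums to 2*I. The key design point is that, unlike a DO loop, FORALL promises the right-hand sides are all evaluated before any assignment takes effect, which is what makes the data-parallel mapping legal.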


Related languages
FORTRAN 77 => CM Fortran   Implementation
Paris => CM Fortran   Targeting

References:
  • Argonne National Laboratory. Using the Connection Machine System (CM Fortran). Technical Report ANL/MCS-TM-118, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439, June 1989.
  • Fortran Programming Guide. Thinking Machines Corporation, Cambridge, MA, January 1991.
  • Fortran Reference Manual. Thinking Machines Corporation, Cambridge, MA, July 1991.
  • Schauble, C. J. C. "The Connection Machine CM: An Introduction". High Performance Scientific Computing, University of Colorado at Boulder, September 1993. Extract: CM Languages
    One of the original purposes of the computer was artificial intelligence; the eventual goal was a "thinking machine". Each processor was only a one-bit processor. The idea was to provide one processor per pixel for image processing, one processor per transistor for VLSI simulation, or one processor per concept for semantic networks.
    The first high-level language implemented for the machine was *Lisp, a parallel extension of Lisp. In fact, the design of portions of the *Lisp language is discussed in the Hillis dissertation.
    However, as the first version of this supercomputer came onto the market, TMC discovered that there was also significant interest - and money - for supercomputers that could be used for numerical and scientific computing.
    Hence, a faster version of the machine, the CM-2, came out in 1987. It included floating-point hardware and a faster clock, and increased the memory to 64K bits per processor. These models emphasised the use of data-parallel programming. Both C* and CM Fortran were available on this machine, in addition to *Lisp. External link: Slicewise programming on CM Extract: PARIS operations
    The language Paris (PARallel Instruction Set) is used to express the parallel operations that are to be run on the PPU. All *Lisp, CM Fortran, or C* parallel commands are compiled into Paris instructions. Such operations include parallel arithmetic operations (both floating-point and fixed-point), vector summation (and other reduction operations), sorting, and matrix multiplication.
    [...]
    A sequencer receives Paris instructions from the FE and breaks them down into a sequence of low-level instructions which can be handled by the one-bit processors. When that is done the sequencer broadcasts these instructions to all the processors in its section. Each processor then executes the instructions in parallel with the other processors. When the execution of low-level instructions is completed, control is returned to the FE. Used independently, each section sets up its own grid layout for computation and communication for each array; if the sections are grouped together, one grid per array is laid over all the processors. These grids may be altered dynamically during the execution of the program.
  • Mehrotra, Piyush; Van Rosendale, John; Zima, Hans. "High Performance Fortran: History, Status and Future". Technical Report TR 97-8, Institute for Software Technology and Parallel Systems, University of Vienna, September 1997. Extract: CM Fortran
    In the same time period, Thinking Machines, a supercomputer manufacturer, together with COMPASS, a compiler software company, incorporated static layout directives, including alignment of arrays, for a subset of Fortran-8x on the Connection Machine. The CM-Fortran compiler also included an element array assignment statement (a precursor for the HPF forall) which was not incorporated in Fortran 90. The COMPASS Fortran compiler technology for generating code for distributed-memory machines was later utilized in the Fortran compilers for MasPar and DEC as well. Extract: Conclusion
    HPF is a well-designed language which can handle most data parallel scientific applications with reasonable facility. However, as architectures evolve and scientific programming becomes more sophisticated, the limitations of the language are becoming increasingly apparent. There are at least three points of view one could take:
    1. HPF is too high-level a language --- MPI-style languages are more appropriate.
    2. HPF is too low-level a language --- aggressive compiler technologies and improving architectures obviate the need for HPF-style compiler directives.
    3. The level of HPF is about right, but extensions are required to handle some applications for some upcoming architectures.

    All three of these alternatives are being actively pursued by language researchers. For example, HPC++ [?] is an effort to design an HPF-style language using C++ as a base. On the other hand, F-- [?] is an attempt to provide a lower-level data-parallel language than HPF. Like HPF, F-- provides a single thread of flow control. But unlike HPF, F-- requires all communication to be explicit using "get" and "put" primitives.

    While it is difficult to predict where languages will head, the coming generation of SMP-cluster architectures may induce new families of languages which will take advantage of the hardware support for shared-memory semantics within an SMP, while covering the limited global communication capability of the architectures. In this effort the experience gained in the development and implementation of HPF will surely serve us well.