XALT (ID:8496)


ALGOL 60 compiler system for the ICL 1900 series


References:
  • Wichmann, B. A. "Five ALGOL compilers" pp. 8-12
    Abstract: A detailed comparison of the times taken to perform elementary statements in ALGOL 60 has revealed wide differences in performance. An examination of the machine code produced by five compilers (Atlas, KDF9 (Kidsgrove), 1900 (XALT), B5500 and 1108 (Trondheim compiler)) has been undertaken to find the reasons for the disparities. The large range of machine architecture means that very different techniques have been used for code generation. This enables one to give guidelines for a suitable architecture on which good ALGOL 60 code generation is possible.
    Extract: 1900 Algol
    This computer series has a conventional one-address architecture.
    From the point of view of ALGOL it has the disadvantage
    of only three index registers. This makes environment control
    and call-by-name very difficult, and is reflected in the time taken
    to execute 'man or boy'. The compiler assigns store sometimes
    at procedure level and sometimes at block level. The author
    has been told that the current versions of this compiler now
    assign all simple variables at procedure level. The compiler
    does a fairly extensive amount of simple optimisation, so that
    there are very few short sequences of code that could be
    radically improved.
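    [A minimal C sketch of the storage-assignment point above, using
    an invented frame layout rather than anything the XALT compiler
    actually generates: with procedure-level assignment the whole
    frame is laid out at procedure entry, and inner blocks that are
    never active together may overlay the same slots.]

        #include <stdio.h>

        /* Hypothetical frame for an ALGOL procedure with two disjoint
           inner blocks.  Block-level assignment would give each block
           fresh stack space on entry; procedure-level assignment lays
           out one frame at procedure entry and lets the two blocks
           share storage, since they are never live at the same time. */
        struct frame {
            double x;                 /* simple variable of the procedure  */
            union {                   /* the two inner blocks overlay this */
                struct { double a, b; } block1;
                struct { int    i, j; } block2;
            } u;
        };

        static double proc(double x0)
        {
            struct frame f;           /* one allocation at procedure entry */
            f.x = x0;

            {                         /* first inner block (begin ... end) */
                f.u.block1.a = f.x * 2.0;
                f.u.block1.b = f.u.block1.a + 1.0;
                f.x = f.u.block1.b;
            }
            {                         /* second inner block */
                f.u.block2.i = 3;
                f.u.block2.j = f.u.block2.i + (int)f.x;
                f.x += f.u.block2.j;
            }
            return f.x;
        }

        int main(void)
        {
            printf("%g\n", proc(1.0));   /* prints 9 */
            return 0;
        }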
    The address of the front of the stack is not kept in a register,
    which means that procedure entry and exit are not very fast. In
    order to simplify the parameter handling problem, a simple
    'thunk' mechanism is used for all parameters.
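    [The 'thunk' mechanism can be sketched in C as follows; the names
    and structures are illustrative only, not XALT's actual run-time
    routines. A name parameter is passed as a small routine, plus the
    caller's environment, that re-evaluates the address of the actual
    parameter at every use.]

        #include <stdio.h>

        /* A thunk: a parameterless routine that, given the caller's
           environment, yields the current address of the actual
           parameter.  The callee calls it on every use or assignment
           of the name parameter. */
        struct thunk {
            double *(*eval)(void *env);
            void    *env;
        };

        /* Caller-side environment and thunk for the actual parameter a[i]. */
        struct caller_env { double *a; int *i; };

        static double *eval_a_i(void *env)
        {
            struct caller_env *e = env;
            return &e->a[*e->i];      /* re-evaluated at every use */
        }

        /* ALGOL:  procedure incr(x); real x; x := x + 1;  with x by name */
        static void incr(struct thunk x)
        {
            double *addr = x.eval(x.env);
            *addr = *addr + 1.0;
        }

        int main(void)
        {
            double a[3] = {10.0, 20.0, 30.0};
            int i = 2;
            struct caller_env env = { a, &i };
            struct thunk t = { eval_a_i, &env };

            incr(t);                  /* increments a[2] because i == 2 */
            printf("%g\n", a[2]);     /* prints 31 */
            return 0;
        }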
    Extract: Conclusion
    Conclusions
    The main advantage of a non-conventional architecture for the
    compilation of ALGOL 60 appears to be the production of
    extremely compact object code. This is achieved with the
    B5500 by a very short address length within an instruction.
    Because of the dynamic storage allocation of ALGOL 60,
    access to simple variables is always by a small offset from an
    environmental pointer. Hence an address length within an instruction
    of only 9 bits is adequate. Anything in excess of
    9 bits is likely to be wasted. On the other hand, several index
    registers or their equivalent are necessary for environment
    control and array accessing. Such registers must be capable of
    being updated rapidly for procedure entry and exit, and for
    access to name parameters.
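    [A minimal sketch of this addressing scheme, assuming a display of
    per-level environment pointers; the register layout and names are
    illustrative, not the B5500's actual mechanism. Each simple
    variable is reached as a small offset from the pointer for its
    lexical level, which is why a 9-bit offset field is enough.]

        #include <stdio.h>

        enum { MAX_LEVELS = 8, OFFSET_BITS = 9 };

        /* One "display" register per lexical level, each pointing at
           the frame currently addressable at that level. */
        static double *display[MAX_LEVELS];

        static double load(int level, unsigned offset)
        {
            /* offset is what a 9-bit address field would hold */
            return display[level][offset & ((1u << OFFSET_BITS) - 1)];
        }

        static void store(int level, unsigned offset, double v)
        {
            display[level][offset & ((1u << OFFSET_BITS) - 1)] = v;
        }

        int main(void)
        {
            double outer_frame[4], inner_frame[4];
            display[0] = outer_frame;       /* outermost environment   */
            display[1] = inner_frame;       /* current inner procedure */

            store(0, 2, 3.14);              /* a global simple variable */
            store(1, 0, load(0, 2) * 2.0);  /* local := global * 2      */
            printf("%g\n", load(1, 0));     /* prints 6.28 */
            return 0;
        }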
    Access to array elements is usually via an array word which
    can be addressed in the same way as a simple variable. A short
    address length may preclude some array access optimisation,
    for instance if 'a' is a global array of fixed size, a[200] could be
    accessed by a single instruction provided the address field was
    large enough. In fact the B5500 does not allow array accessing
    optimisation because the storage protection system depends
    upon access via the array word (descriptor). The optimisation
    produced by the ALCOR compilers (Grau, 1967) could be
    done on a machine with a short address length, but not the
    B5500.
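    [As an illustration of access through an array word, here is a C
    sketch of descriptor-based element addressing; the struct layout
    is an assumption for illustration and not the B5500 descriptor
    format. The descriptor is addressable like a simple variable and
    carries the base address and bounds, so protection and bound
    checking naturally go through it.]

        #include <stdio.h>
        #include <stdlib.h>

        /* The "array word": holds the base address and the declared
           bounds.  Every element access goes through it. */
        struct descriptor {
            double *base;       /* address of element with lowest subscript */
            int lower, upper;
        };

        static double *element(const struct descriptor *d, int subscript)
        {
            if (subscript < d->lower || subscript > d->upper) {
                fprintf(stderr, "subscript %d outside [%d,%d]\n",
                        subscript, d->lower, d->upper);
                exit(1);
            }
            return d->base + (subscript - d->lower);
        }

        int main(void)
        {
            /* ALGOL:  real array a[-5:5] */
            struct descriptor a = { NULL, -5, 5 };
            a.base = malloc((size_t)(a.upper - a.lower + 1) * sizeof(double));

            *element(&a, -5) = 1.0;
            *element(&a, 0)  = 2.0;
            printf("%g %g\n", *element(&a, -5), *element(&a, 0));
            free(a.base);
            return 0;
        }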
    Array bound checking is an area where special hardware can
    be used to great advantage. Unfortunately the hardware on the
    B5500 does not deal with the general value of the lower bound,
    so that explicit code must be generated by the compiler to
    subtract the value of this lower bound if it is non-zero. Options
    to do bound checking on other machines tend to be very
    expensive in processor time. The 1108, although having no
    built-in hardware for array accessing, has a convenient instruction
    for bound checking. With this instruction, a single test
    can be made to see if the operand lies within the range
    defined by two registers.
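    [The single-test style of check can be rendered in C as one
    unsigned comparison after biasing by the lower bound; this is a
    software sketch of the idea, not the 1108 instruction itself.
    The subtraction of the lower bound is exactly the extra code the
    B5500 compiler must emit when that bound is non-zero.]

        #include <stdio.h>

        /* One comparison decides lower <= subscript <= upper: biased
           by the lower bound and viewed as unsigned, any out-of-range
           subscript wraps to a large value. */
        static int in_range(int subscript, int lower, int upper)
        {
            return ((unsigned)subscript - (unsigned)lower)
                       <= ((unsigned)upper - (unsigned)lower);
        }

        int main(void)
        {
            printf("%d %d %d\n",
                   in_range(-3, -5, 5),    /* 1: inside         */
                   in_range( 5, -5, 5),    /* 1: on upper bound */
                   in_range( 6, -5, 5));   /* 0: outside        */
            return 0;
        }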
    Apart from the production of compact code from ALGOL 60,
    it is clear that in many scientific fields non-conventional
    machines can have other substantial advantages. Array bound
    checking has already been mentioned, but other examples lie
    outside the scope of this paper, for instance distinction between
    data and program and the ability to share the available core
    store between processes. The majority of these advantages are
    in the field of operating system design, and so are not considered
    here. Such advantages are likely to have a substantial effect
    upon the performance of the compiling system itself, and upon the
    ease with which such systems can be developed.
          in The Computer Journal 15(1), February 1972