SAC-1 (ID:564/sac003)
SAC-1: Symbolic Algebraic Computing, v1
G. E. Collins, University of Wisconsin
Early symbolic math system, written in FORTRAN.
References:
in SIGSAM Bulletin 12(7) July 1968
in Tobey, R. G. (ed) Proceedings of the 1968 Summer Institute on Symbolic Mathematical Computation, IBM Federal Systems Center, Gaithersburg, Maryland, July-August 1968, IBM Programming Laboratory Report No. FSC-69-0312 (proceedings published June 1969).
TSS now has a selected user community of approximately one hundred. The mean time between system crashes is greater than two hours. Most user sessions are only two hours long and the system will crash approximately once every three sessions. The average number of users on line is about ten. Active user projects include IPL, ALGOL, LCC, and SAC (System for Algebraic Computation). The initial comparative measurements of the CMU configuration and the IBM standard configuration (4 boxes HSS, 2 drums) indicate that the configurations will support about the same number of users, with the CMU configuration giving faster conversational response (0.1 sec vs 1-2 sec). The current version of LCC will allow 40 simultaneous users. An improved version has a projected capacity of 75 simultaneous users on the CMU configuration. In addition to the current development on TSS, a network between several TSS installations is being implemented. By March 31 the installations at CMU, Princeton, and T. J. Watson Research will be able to send and receive data sets over automatic dial-up 2000-baud lines.
If this first phase is successful, the next phase will use 50K-baud leased lines and allow users at the different installations to use the network facility as a common data base and to run on any 67 from another one.
in [ACM] Proceedings of the Second Symposium on Symbolic and Algebraic Manipulation, March 23-25, 1971, Los Angeles (SYMSAM 71)
Introduction
SAC-1 is a program system for performing various operations on polynomials and rational functions. These polynomials have infinite-precision coefficients and may contain any number of variables. The rational functions are represented as relatively prime ratios of such polynomials. As a result, the system also provides arithmetic operations on infinite-precision integers and rational numbers.
Besides the customary operations on polynomials and rational functions, SAC-1 provides multivariate polynomial greatest common divisor and resultant calculation, multivariate polynomial factorization, real zero calculation for univariate polynomials, partial fraction decomposition and integration of univariate rational functions, and solution of systems of linear equations having polynomial coefficients. The SAC-1 system is a large hierarchical collection of Fortran subprograms. Since all subprograms are carefully programmed in complete conformity to the ASA Fortran specifications, the system can be quickly and easily implemented on virtually any existing medium or large general-purpose digital computer, and indeed has been implemented on many different computers without difficulty. At present the SAC-1 system is known to be implemented at more than 20 institutions, on at least 10 computer models from at least four different manufacturers. In addition to recompiling the Fortran subprograms on the target computer, it is only necessary to program in assembly language some 18 very simple "primitive" subprograms according to their functional specifications. The entire implementation process can usually be completed in less than a month by one programmer. Besides the broad range of capabilities and the portability and ease of implementation offered by the SAC-1 system, it is also distinguished by its computational efficiency. This efficiency is achieved as a result of both the data structures it employs and the mathematical characteristics of the algorithms which are embodied in the system. The benefit accruing to the user from this efficiency is not only a large saving in computing costs; frequently the increased efficiency is so great that problems can easily be solved which would otherwise not be feasible.
Although Fortran is somewhat less than an ideal language for development of an algebra system, it is familiar to most potential users of such a system. Since the SAC-1 system is also thoroughly and clearly documented, initiation into application of the system is quick and painless; one merely writes a Fortran subprogram which calls the appropriate SAC-1 subprograms, including utility subprograms for input, output and memory management.
Extract: Organization of SAC-1
SAC-1 is organized as a series of modules, or subsystems, each module being designed to provide a specific functional capability or class of capabilities. Each subsystem is itself a collection of interdependent subprograms which, in general, relies on the previous subsystems. Since nearly all arithmetic and algebraic objects are represented in SAC-1 as lists (or list structures), the first subsystem is a general list processing system. Then, since the algebraic capabilities depend on sophisticated arithmetic capabilities, the next module is a subsystem for infinite-precision integer arithmetic. Following is a list of all the completed, or nearly completed, subsystems.
1) List processing system.
2) Infinite-precision integer arithmetic system.
3) Polynomial system.
4) Rational function system.
5) Modular arithmetic system.
6) Rational function integration system.
7) Polynomial real zero system.
8) Polynomial g.c.d. and resultant system.
9) Polynomial factorization system.
10) Polynomial linear algebra system.
11) Rational polynomial system.
Each of these systems is documented in a separate report, and these reports are listed among the references at the end of this paper. In the following, a synopsis is given of each subsystem, including the type of data objects on which the subsystem operates, the representation of these objects in computer memory, the operations which may be performed on these objects and, in very broad terms, the algorithms employed to effect these operations.
The more detailed information required for implementation and use of these subsystems may be obtained from the documenting technical reports, which also contain detailed descriptions of the algorithms and listings of all the Fortran subprograms. The mathematical foundations of the more sophisticated algorithms, and mathematical analyses of the computing times of these algorithms, are partially treated in various other listed references.
Extract: The List Processing System
The SAC-1 list processing system is a reference count system as opposed to a garbage collection system. The use of reference counts in list processing was first proposed by Collins in [COL60], where certain advantages in computing time efficiency were discussed. The first volume of Knuth's monumental book, [KNU68], discusses both reference counts and garbage collection in detail. Apart from efficiency considerations, the desired embedding of SAC-1 in Fortran dictated against the use of garbage collection, which requires that the locations of all currently defined lists be accessible to the garbage collection program. This requirement is very inconvenient to satisfy within Fortran. Lists are constructed from cells. Each cell consists of k consecutive memory locations, for some positive integer k, where k is an unspecified implementation parameter. Each cell contains four fields: element field, successor field, type field, and reference count field. The arrangement of fields within a cell is unspecified, except that each field must be capable of assuming a certain range of values. Associated with each field are two primitive subprograms: one for extracting the current value of the field from a specified cell and one for storing a new value into the field. Two additional primitive subprograms, which constitute a basis for the SAC-1 input-output system, are required in the list-processing subsystem.
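The cell layout and reference-count discipline described above can be modeled in a few lines of Python. This is an illustrative sketch only, not the SAC-1 Fortran/assembly primitives; the `Cell` class and the `borrow`/`erase` names are invented for the sketch.

```python
# Illustrative model of a SAC-1 style list cell with a reference count.
# The four attributes mirror the four fields named in the text
# (element, successor, type, reference count); the API is hypothetical.

class Cell:
    def __init__(self, element, succ=None, ctype=0):
        self.element = element    # element field (atom or sub-list head)
        self.succ = succ          # successor field (next cell or None)
        self.type = ctype         # type field (0 = atom, 1 = sub-list here)
        self.refcount = 1         # reference count field

def borrow(cell):
    """Take another reference to a list: bump the head cell's count."""
    if cell is not None:
        cell.refcount += 1
    return cell

def erase(cell):
    """Drop a reference; conceptually free cells whose count falls to zero."""
    while cell is not None and cell.refcount == 1:
        nxt = cell.succ
        if cell.type == 1:        # element is itself a list: erase it too
            erase(cell.element)
        cell = nxt                # SAC-1 would return the cell to a free list
    if cell is not None:
        cell.refcount -= 1

# Build the list (1, 2, 3) and take a second reference to its tail.
three = Cell(3)
lst = Cell(1, Cell(2, three))
shared = borrow(three)            # tail cell now has refcount 2
erase(lst)                        # "frees" cells 1 and 2, keeps the shared tail
assert shared.refcount == 1
```

Erasure walks the list only while counts fall to zero, so a shared tail survives; this is the property that lets lists share structure safely without a global garbage-collection scan.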
One of these reads a standard 72-character record from a specified logical input unit and stores standardized integer codes for the characters read into a one-dimensional array of 72 locations. The other, given such an array, writes a corresponding 72-character record on a specified output unit. Input and output are restricted to the ASA Fortran standardized character set. The list processing system includes subprograms for reading and writing arbitrary lists, with conversion between their internal and external representations. The external representation is a sequence of characters including parentheses and commas, with numerals representing atomic constituents of lists. Recursive subprograms are not permitted in ASA Fortran. Yet recursive algorithms naturally arise in performing algebraic operations, and they must somehow be realized. For this purpose, a general procedure has been devised for rewriting, in ASA Fortran, subprograms which have initially been written as though recursion were permitted. An important part of this mechanism is a pushdown stack, implemented as a SAC-1 list, and several subprograms in the list processing system are designed to facilitate use of the pushdown stack. Other subprograms in the system are for list construction, list scanning, list erasure, and other fundamental list operations. Altogether, the list processing system contains 15 primitive subprograms and 19 Fortran subprograms consisting of approximately 300 Fortran statements.
Extract: The Infinite-Precision Integer Arithmetic System
In this system, infinite-precision integers are represented as lists whose successive elements are the successive digits in the base-Beta radix representation. The base Beta is an implementation parameter which allows for the different word lengths of different computers as well as different instruction sets for arithmetic operations.
Beta may be chosen arbitrarily subject to the conditions that it be even, greater than 64, and smaller than the largest integer permitted by the Fortran compiler which will be used for the implementation. However, for maximal efficiency, Beta will usually be chosen about as large as permitted by these conditions. Also, one will usually choose Beta as a power of two if a binary computer is to be used. For example, the University of Wisconsin SAC-1 implementation on the UNIVAC 1108 computer, which is a binary computer with a 36-bit word length, used Beta = 2^33. In the list representation of an infinite-precision integer, the lowest order digits appear first. Although this may seem unnatural, it is more efficient since in performing arithmetic operations the low-order digits must usually be processed first. In the representation of a negative number, all Beta-digits are negative or zero. The integer arithmetic system requires three additional primitive subprograms. These subprograms have inputs and outputs which are Beta-digits. For example, MPY has two inputs, say a and b. The two outputs are the unique Beta-digits, c and d, such that a·b = c·Beta + d and c·d ≥ 0. There is another primitive for division, and one for use in addition and subtraction. The Fortran subprograms of the system provide the arithmetic operations of addition, subtraction, multiplication, and division with or without remainder on infinite-precision integers, and also greatest common divisor calculation, miscellaneous operations, and input-output with conversion. The external form of an infinite-precision integer is of course just a sign and a sequence of decimal digits, which may extend over several records on the input-output unit. SAC-1 calculations involving integers several hundred decimal digits long are more common than exceptional.
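The digit-list representation just described can be sketched in Python. This is illustrative only (SAC-1's actual digit operations are the assembly-language primitives mentioned in the text); the function names are invented, and BETA = 2^33 follows the UNIVAC 1108 choice cited above.

```python
# Sketch of SAC-1 style radix-Beta digit lists, LOW-ORDER DIGIT FIRST.
BETA = 2 ** 33

def to_digits(n):
    """Radix-BETA digit list of n >= 0, least significant digit first."""
    digits = []
    while n:
        n, d = divmod(n, BETA)
        digits.append(d)
    return digits or [0]

def from_digits(digits):
    return sum(d * BETA ** i for i, d in enumerate(digits))

def add(a, b):
    """Digit-by-digit addition with carry propagation. Low-order-first
    storage lets both lists be scanned front to back, which is why the
    'unnatural' digit order is actually the efficient one."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        carry, d = divmod(s, BETA)
        out.append(d)
    if carry:
        out.append(carry)
    return out

x = 10 ** 50 + 12345
y = 10 ** 48 + 67890
assert from_digits(add(to_digits(x), to_digits(y))) == x + y
```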
The system operations are made efficient both by a careful choice of mathematical algorithms and by a careful implementation of the list processing operations required to realize these algorithms in the context of the data representation. In particular, division and greatest common divisor calculation are carried out by efficient mathematical algorithms described in Knuth's second volume, [KNU69]. The g.c.d. algorithm is Lehmer's version of the Euclidean algorithm, which turns out to be typically 10 to 20 times as fast as the ordinary Euclidean algorithm. The division algorithm employs Knuth's method for generation of successive trial quotient digits, which generates the correct quotient digit with probability greater than 1 - 3/Beta. Besides the three primitives, there are 17 Fortran subprograms in the integer arithmetic system. Altogether these subprograms contain approximately 800 Fortran statements.
Extract: The Polynomial System
The SAC-1 polynomial system, [COL68b], performs operations on multivariate polynomials with infinite-precision integer coefficients. The operations provided include input and output with conversion, the arithmetic operations of addition, subtraction, multiplication and division, substitution and evaluation, differentiation, and g.c.d. calculation. Polynomials are internally represented in recursive canonical form as lists, following the PM system, [COL66], which first introduced this representation. The basic idea here is that of regarding a polynomial in r variables, for r > 1, as a polynomial in one main variable whose coefficients are polynomials in the remaining variables. Mathematically, this just amounts to recognition of the natural isomorphism of R[x1,...,xr] onto R[x1,...,x(r-1)][xr] and hence, ultimately, onto R[x1]...[xr], the value of which, for theoretical purposes, is well understood. But the realization that it is preferable to work with R[x1]...[xr] in practical computations was not immediate.
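The idea of treating a polynomial in r variables as a polynomial in one main variable whose coefficients are themselves polynomials can be sketched in Python. The nested-tuple encoding below is an illustrative stand-in, not the SAC-1 list format.

```python
# Sketch of a recursive polynomial representation: a polynomial is either
# an integer (the r = 0 base case) or a pair (variable, [(A_i, e_i), ...])
# whose coefficients A_i are themselves polynomials in the other variables.

def evaluate(poly, env):
    """Evaluate a recursively represented polynomial at the point in env."""
    if isinstance(poly, int):
        return poly
    var, terms = poly
    x = env[var]
    return sum(evaluate(coef, env) * x ** e for coef, e in terms)

# 3*x^2*y + 5*y + 7, viewed recursively as a polynomial in the main
# variable y whose coefficients are polynomials in x: (3x^2 + 5)*y + 7
p = ('y', [(('x', [(3, 2), (5, 0)]), 1), (7, 0)])

assert evaluate(p, {'x': 2, 'y': 10}) == (3 * 4 + 5) * 10 + 7  # 177
```

The evaluator recurses exactly as the representation does, which is the point the text makes: subprograms for multivariate polynomials inherit the simplicity of the univariate case.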
The distributive canonical form associated with R[x1,...,xr] was used in an early version of the PM system, and also in Brown's important ALPAK system, [BRO63]. Generally speaking, the superiority of the recursive canonical form is twofold. First, the use of recursive canonical form leads naturally to recursive subprograms for multivariate polynomials which have essentially the simplicity of subprograms for univariate polynomials. Second, the use of recursive canonical form often results in more efficient implementation of algorithms, since the data are more highly structured, and in a manner which reflects more closely the algorithmic process being performed. This was brought out in some detail by Tobey in his Ph.D. thesis, [TOB67]. A non-zero polynomial A is in recursive canonical form when expressed as the sum, for i from 1 to k, of A_i(x1,...,x(r-1))·xr^(e_i), where each A_i(x1,...,x(r-1)) is a non-zero polynomial in recursive canonical form and e_1 > e_2 > ... > e_k. Notice that this is a recursive definition. The SAC-1 list representation of A is then the list (v_r, a_1, e_1, ..., a_k, e_k), where v_r is a specified list representation of x_r and a_i is the recursive canonical list representation of A_i. This is again a recursive definition, and when r = 1, the A_i are infinite-precision integers and the a_i are their list representations. This is nearly the same as the PM representation, where instead the list (v_r, (a_1, e_1), ..., (a_k, e_k)) was used. We shall not devote much attention to the algorithms of the polynomial system. They are, also, much the same as those of the PM system. In particular, g.c.d. calculation is performed using the reduced polynomial remainder sequence algorithm of [COL67]. Although this algorithm represented a major advance when it was discovered, the theorems on which it was based have subsequently been generalized and applied, along with other ideas, to obtaining a new polynomial g.c.d. algorithm which is, in turn, a major advance over the reduced p.r.s. algorithm. This newest algorithm, which is part of the SAC-1 polynomial g.c.d. and resultant system discussed in Section 10 following, will soon displace the reduced p.r.s. algorithm in the SAC-1 polynomial system. The polynomial system contains 35 Fortran subprograms totalling about 1100 Fortran statements. No additional primitives are required for either the polynomial subsystem or for any of the subsequent SAC-1 subsystems.
Extract: The Rational Function System
The rational function system, [COL68c], performs on multivariate rational functions the operations of addition, subtraction, multiplication, division, substitution, differentiation, input, output, and some miscellaneous operations. Non-zero rational functions are represented as pairs (A,B) of non-zero multivariate polynomials with infinite-precision integer coefficients, where A and B are relatively prime in the strong sense that gcd(A,B) = ±1. In order to achieve uniqueness of representation, it is further required that the leading numerical coefficient of the denominator, B, be positive. The leading numerical coefficient of a univariate polynomial is its leading coefficient and, recursively, the leading numerical coefficient of a polynomial in more than one variable is the leading numerical coefficient of its leading coefficient. A polynomial is said to be positive in case its leading numerical coefficient is positive. This turns out to be a frequently useful concept with important properties. For example, the class of positive polynomials is closed under addition and multiplication. As a special case, the numerator and denominator of a non-zero rational function may be "polynomials in zero variables", i.e., infinite-precision integers. Consequently, as a by-product, the rational function system provides arithmetic operations on infinite-precision rational numbers.
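The recursive definition of the leading numerical coefficient can be sketched in Python. This is illustrative; a polynomial is encoded here as either an integer or a pair (variable, [(coefficient, exponent), ...]) with exponents strictly decreasing, so the leading coefficient is always the first term's coefficient.

```python
# Sketch of the 'leading numerical coefficient' recursion: descend into
# the leading coefficient until a polynomial in zero variables (an
# integer) is reached. Encoding is an invented illustrative one.

def leading_numerical_coefficient(poly):
    while not isinstance(poly, int):
        _, terms = poly
        poly = terms[0][0]        # leading coefficient (highest exponent first)
    return poly

def is_positive(poly):
    """A polynomial is positive when its leading numerical coefficient is."""
    return leading_numerical_coefficient(poly) > 0

# (-2x + 9)*y^3 + 4: the leading numerical coefficient is -2,
# so this polynomial is not 'positive' in the sense defined above.
q = ('y', [(('x', [(-2, 1), (9, 0)]), 3), (4, 0)])
assert leading_numerical_coefficient(q) == -2
assert not is_positive(q)
```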
This achieves an economy in programming since essentially the same algorithms that are used for rational number arithmetic may be used for rational function arithmetic. Note also that, while numerator and denominator must formally contain the same variables (and even in the same order), the denominator need not actually depend on all these variables. In particular, the denominator may be constant, i.e. effectively an infinite-precision integer, in which case one has the equivalent of a multivariate polynomial with infinite-precision rational number coefficients. However, this is usually not the most natural or the most efficient representation of such objects, i.e. multivariate polynomials with rational function or rational number coefficients, and as a result a special SAC-1 system for such objects has been developed, namely the rational polynomial system. However, we shall not discuss the rational polynomial system further in this paper; its expected usefulness has not as yet warranted its complete development. The algorithms used for arithmetic operations and differentiation are essentially the same as those previously used by Brown in the ALPAK system, [BRO63]. Nevertheless, it seems worthwhile to briefly discuss them since their importance as efficient algorithms is probably not fully recognized. We will consider multiplication as an example; the principles are much the same for the other algorithms. The "classical algorithm" for multiplying a/b by c/d is to compute e = a·c and f = b·d, then reduce e/f to "lowest terms", i.e., compute g = gcd(e,f) and then divide both e and f by g. The "sophisticated algorithm" reduces a/d and c/b to lowest terms, obtaining a'/d' and c'/b', then computes e = a'·c' and f = b'·d'. It is easy to show that e/f is in lowest terms. The sophisticated algorithm requires more operations; in particular, it requires two g.c.d. calculations rather than one, and g.c.d. calculation is generally the most time-consuming operation.
However, the inputs to these two g.c.d. calculations are much smaller and so there is a net saving in computing time. This saving is already appreciable when the numerators and denominators are large integers, and it may be very large when they are multivariate polynomials. These algorithms had previously been used for rational number arithmetic by P. Henrici, [HEN56], in 1956. See also [KNU69]. The rational function system contains 15 Fortran subprograms and about 500 Fortran statements.
Extract: The Modular Arithmetic System
The modular arithmetic system, [COL69c], is a system for performing arithmetic operations in a prime Galois field, GF(p), and for performing various operations on multivariate polynomials over GF(p). Although this system has some immediate applications, its primary purpose is to provide a wide assortment of subprograms needed for subsequent subsystems, as will become clear as these are described. For brevity we will refer to the polynomials of this system as modular polynomials. The prime number p is an input parameter to most of the subprograms since most applications will require the use of many different primes; p is required to be a Fortran integer, i.e. single-precision. One will frequently require a precomputed list of such primes, say perhaps 100 of them, which are nearly as large as permitted, and so a subprogram which uses a sieve method to compute all the primes in a given range is provided. Multiplicative inverses in GF(p) are computed using the extended Euclidean algorithm, in Knuth's terminology, [KNU69]. Knuth's analysis of the Euclidean algorithm and empirical comparisons show this method to be approximately twice as fast as the alternative algorithm based on Fermat's theorem (see [COL69]). Besides arithmetic operations on multivariate modular polynomials, several other very important capabilities are provided. One of these pertains to the Chinese remainder theorem.
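The extended-Euclidean computation of multiplicative inverses in GF(p), which the text reports to be roughly twice as fast as the Fermat-theorem method (computing a^(p-2) mod p), can be sketched in Python. This is an illustrative sketch, not the SAC-1 subprogram.

```python
# Sketch of a GF(p) multiplicative inverse via the extended Euclidean
# algorithm: track only the cofactor of a, since gcd(a, p) = 1.

def inverse_mod(a, p):
    """Return a^-1 mod p for prime p and a not divisible by p."""
    r0, r1 = p, a % p
    t0, t1 = 0, 1
    while r1:                     # standard extended-Euclid iteration
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if r0 != 1:
        raise ValueError("a is not invertible mod p")
    return t0 % p

p = 1009                          # a single-precision prime, as in the text
assert (12345 * inverse_mod(12345, p)) % p == 1
```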
The elements of GF(p) are taken to be the integers 0, 1, ..., p-1, and the unique homomorphism of the ring I of the integers onto GF(p) which maps i into itself for 0 ≤ i < p is denoted by phi_p and called a modular homomorphism. phi_p induces a homomorphism from I[x1,...,xr] onto GF(p)[x1,...,xr] which maps each xi into itself and which is also called a modular homomorphism. Subprograms to effect these homomorphic mappings are provided. The algorithm employed for the Chinese remainder theorem is an iterative version of H. L. Garner's method (see [KNU69]), which is applied once for each prime modulus. This version is both simple and efficient, and is especially appropriate whenever the moduli to be used for a given calculation are not predetermined. The natural extension of this algorithm to multivariate polynomials, corresponding to the induced modular homomorphisms, is also realized in the system. If R is a ring and a ∈ R, the mapping phi_a of R[x] onto R defined by phi_a(A) = A(a) for A(x) ∈ R[x] is a homomorphism which is called an evaluation homomorphism. The evaluation homomorphisms are equal in importance with the modular homomorphisms for future applications when we take R to be GF(p) or GF(p)[x1,...,xr]. Interpolation plays the same role relative to the use of evaluation homomorphisms as does the Chinese remainder theorem relative to the use of modular homomorphisms, i.e., to reconstruct an object, given sufficiently many of its homomorphic images. Both evaluation homomorphisms and interpolation over GF(p) are realized by subprograms in the modular arithmetic system. Interpolation is performed iteratively in a manner entirely analogous to the iterative algorithm for the Chinese remainder theorem. The modular arithmetic system also provides some very important operations on univariate modular polynomials, namely g.c.d. calculation and factorization.
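The iterative, one-modulus-at-a-time Chinese remainder scheme described above can be sketched in Python (an illustrative Garner-style sketch for integers; the function names are invented, and Fermat inversion is used for brevity where SAC-1 would use its extended-Euclid routine).

```python
# Sketch of an iterative Chinese remainder algorithm: fold in one prime
# modulus at a time, never needing the full modulus set in advance,
# which matches the 'moduli not predetermined' remark in the text.

def crt_step(x, m, r, p):
    """Given x = X mod m, extend to X mod m*p using the residue r = X mod p."""
    t = ((r - x) * pow(m, p - 2, p)) % p   # Fermat inverse of m (p is prime)
    return x + m * t, m * p

def crt(residues, primes):
    x, m = 0, 1
    for r, p in zip(residues, primes):
        x, m = crt_step(x, m, r, p)
    return x, m

X = 123456789
primes = [101, 103, 107, 109]
residues = [X % p for p in primes]
x, m = crt(residues, primes)
assert x == X % m and all(x % p == X % p for p in primes)
```

Each step solves x + m·t ≡ r (mod p) for the digit t, so after k steps the result is correct modulo the product of the first k primes.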
The g.c.d. subprograms are important for application to multivariate polynomial g.c.d. calculation over the integers, and the factorization subprograms are similarly important for application to multivariate polynomial factorization over the integers. The factorization subprograms constitute an implementation of Berlekamp's important algorithm ([BER67], [BER68] and [KNU69]) for factorization of any squarefree univariate polynomial over a finite field. This algorithm, however, is practicable only for moderately small primes p (say p < 1000), and so another algorithm for "distinct degree factorization" of such polynomials (see [KNU69], pp. 388-390) is provided, which can be used for larger primes and which sometimes yields a complete factorization. The modular arithmetic system contains 35 subprograms comprised of approximately 1000 Fortran statements.
Extract: The Rational Function Integration System
The rational function integration system, [COL70], provides very efficient subprograms for computing the rational part of the integral of any univariate rational function. More precisely, for any such rational function R there are unique rational functions S and T such that ∫R = S + ∫T, and ∫T is either zero or of the form sum, for i from 1 to k, of c_i·log(x - r_i), where the c_i and r_i are complex algebraic numbers and the r_i are the distinct zeros of the denominator of T. S and ∫T are the rational part and transcendental part of ∫R, respectively. The system computes both S and T, using an efficient algorithm discovered by Horowitz, [HOR69]. Horowitz' new algorithm, which is also described in [HOR70] and [COL70], departs from the classical integration method of Hermite, [HAR16], [MAN68], [TOB67], in avoiding the use of a partial fraction decomposition of the integrand R. Instead, if R = A/B, the squarefree factorization of the denominator, B = b·B_1·B_2^2 ··· B_k^k, where b is the content of B, the B_i are pairwise relatively prime, each B_i is positive, and B_k has positive degree, is computed. This leads to the generation of a system of linear equations with integer coefficients. It is shown that the rational number components of the solution vector of this system can be identified as the coefficients of the numerators of S and T, if the denominators of S and T are taken to be b·B_2·B_3^2 ··· B_k^(k-1) and b·B_1·B_2 ··· B_k, respectively. While partial fraction decomposition is no longer required in Horowitz' integration algorithm, there are strong indications that partial fraction decomposition will be an important part of other processes, for example in Risch's more general integration algorithm [RIS69] or, as suggested by Moses, in simplification algorithms (see [COL69b]). Therefore, this SAC-1 subsystem also provides subprograms for partial fraction decomposition. Again, a new algorithm is used to compute such decompositions, which involves squarefree factorization and the generation and solution of a system of linear equations. In contrast, the classical method makes use of an "extended" Euclidean algorithm for polynomials. In each case the new algorithms of Horowitz have been shown, by theoretical analysis and by empirical comparison, to be considerably faster for large problems. For solving systems of linear equations with integer coefficients, the integration subsystem uses a modular arithmetic algorithm which is a very special case of the algorithms used in the SAC-1 polynomial linear algebra system, and which contributes much to the efficiency of the integration and decomposition algorithms. Likewise, for the required squarefree factorizations, the system uses a modular arithmetic algorithm for univariate integral polynomial g.c.d. calculation which is a special case of the multivariate g.c.d. algorithm used in the SAC-1 polynomial g.c.d. and resultant system.
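The squarefree factorization B = b·B_1·B_2^2 ··· B_k^k used above can be sketched in Python. This is illustrative only: univariate, exact rational arithmetic, and the classical repeated-g.c.d. scheme rather than SAC-1's modular algorithm; polynomials are coefficient lists, low order first.

```python
from fractions import Fraction

# Sketch of squarefree factorization by repeated g.c.d.s with the
# derivative: returns [B1, B2, ..., Bk] with b = const * prod Bi^i.

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def deriv(p):
    return trim([Fraction(i) * c for i, c in enumerate(p)][1:]) or [Fraction(0)]

def pdivmod(a, b):
    a, q = a[:], [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and any(a):
        k, c = len(a) - len(b), a[-1] / b[-1]
        q[k] = c
        for i, bc in enumerate(b):
            a[i + k] -= c * bc
        trim(a)
    return trim(q), a

def pgcd(a, b):
    while any(b):
        a, b = b, pdivmod(a, b)[1]
    return [c / a[-1] for c in a]          # make monic

def squarefree_factors(b):
    out = []
    g = pgcd(b, deriv(b))
    w = pdivmod(b, g)[0]                   # product of the distinct factors
    while len(w) > 1:
        y = pgcd(w, g)
        out.append(pdivmod(w, y)[0])       # factors of exactly this multiplicity
        w, g = y, pdivmod(g, y)[0]
    return out

# B = (x+1)(x+2)^2 = x^3 + 5x^2 + 8x + 4  ->  B1 = x+1, B2 = x+2
B = [Fraction(c) for c in [4, 8, 5, 1]]
assert squarefree_factors(B) == [[Fraction(1), Fraction(1)],
                                 [Fraction(2), Fraction(1)]]
```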
The rational function integration system is "incomplete" in the sense that it makes no attempt to express the transcendental part of the integral as the linear combination of logarithms, the sum for i from 1 to k of c_i·log(x - r_i). Since the r_i are just the (simple) complex zeros of the denominator polynomial of T and the c_i are then determined as the solution of a non-singular system of linear equations, the system could have been completed using approximate numerical methods. However, such an approach is incompatible with the objective of a system using only symbolic, algebraic and exact arithmetic methods, and the system will instead be completed later using methods which are still under development. The integration system contains 16 Fortran subprograms and about 500 Fortran statements.
Extract: The Polynomial Real Zero System
Given any non-zero univariate polynomial A of positive degree with integer coefficients and any positive rational number e, this system, [COL70b], computes to within a guaranteed accuracy of e all the real zeros of A. More precisely, it computes a list of disjoint intervals with rational endpoints, each of which contains exactly one real zero of A and which together contain all the real zeros of A. Both e and the endpoints are infinite-precision rational numbers. The polynomial A may also have multiple zeros. The algorithms used in the system constitute an implementation of Sturm's theorem using exact arithmetic. The first step is to compute the greatest squarefree divisor of A, B = A/gcd(A,A'), which has the same zeros as A but no multiple zeros. Next, a bound R for the zeros of B is computed, so that all zeros of B lie in the left-open, right-closed interval (-R, +R]. The third step is to isolate the real zeros of B into disjoint intervals (a_i, b_i] by repeated application of Sturm's theorem and interval bisection, starting with (-R, +R].
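The Sturm-sequence bisection of step three can be sketched in Python. This is illustrative, not the SAC-1 code: polynomials are coefficient lists, low order first, with exact rational arithmetic, and the input is assumed squarefree as produced by step one.

```python
from fractions import Fraction

# Sketch of real-zero isolation: count sign variations of the Sturm
# sequence at interval endpoints, and bisect until each interval (a, b]
# holds exactly one zero.

def pval(p, x):
    v = Fraction(0)
    for c in reversed(p):         # Horner evaluation
        v = v * x + c
    return v

def prem(a, b):
    a = [Fraction(c) for c in a]
    while len(a) >= len(b) and any(a):
        k, c = len(a) - len(b), a[-1] / b[-1]
        for i, bc in enumerate(b):
            a[i + k] -= c * bc
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return a

def sturm_chain(p):
    chain = [p, [Fraction(i) * c for i, c in enumerate(p)][1:]]
    while any(chain[-1]) and len(chain[-1]) > 1:
        chain.append([-c for c in prem(chain[-2], chain[-1])])
    return chain

def sign_changes(chain, x):
    signs = [s for s in (pval(q, x) for q in chain) if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def isolate(p, lo, hi):
    """Split (lo, hi] into sub-intervals each holding exactly one real zero."""
    chain = sturm_chain(p)
    count = lambda a, b: sign_changes(chain, a) - sign_changes(chain, b)
    work, done = [(Fraction(lo), Fraction(hi))], []
    while work:
        a, b = work.pop()
        n = count(a, b)           # number of real zeros in (a, b]
        if n == 1:
            done.append((a, b))
        elif n > 1:
            m = (a + b) / 2
            work += [(a, m), (m, b)]
    return sorted(done)

# x^2 - 2 has one real zero in each of (-2, 0] and (0, 2]
intervals = isolate([-2, 0, 1], -2, 2)
assert len(intervals) == 2
```

Refining each isolating interval below a tolerance e is then plain bisection on sign changes of the polynomial itself, as the text notes.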
Sturm's theorem enables one to compute the exact number of real zeros of B in any rational interval (a, b], so one need only bisect any interval containing more than one zero and discard any interval containing no zeros. The fourth and last step is to refine, by further bisection, each isolating interval until its length becomes less than e. Note that Sturm's theorem is not needed for the refinement; it is only necessary to evaluate B at the endpoints and midpoint of any interval. The application of Sturm's theorem requires the computation of a Sturm sequence for B, i.e., a polynomial remainder sequence (p.r.s.) beginning with B and its derivative, B', in which each following term is a "negative remainder" of the two previous terms. Thus the theory of p.r.s.'s, including the fundamental theorem of p.r.s.'s, [BRO68], was used to much advantage in the design of efficient algorithms for the polynomial real zero system. Important use is also made of modular arithmetic, although the advantages of modular arithmetic in this application appear to be less than in many other applications. A complete discussion of the underlying theory, a detailed analysis of the computing times of the algorithms used in this subsystem, and a brief discussion of its potential applicability to quantifier elimination algorithms for the elementary theory of real closed fields (see [TAR51]) are contained in Heindel's Ph.D. thesis, [HEI70]. Heindel also presents empirically observed computing times for various types of examples. As a fragmentary indication of such computing times, the system computed the 25 real zeros of the Chebyshev polynomial of degree 25 to an accuracy of 10^-15 in slightly less than three minutes on a UNIVAC 1108 computer. The polynomial real zero system consists of 26 Fortran subprograms and a total of approximately 900 Fortran statements.
Extract: The Polynomial G.C.D.
and Resultant System This system, [COL7Od], provides very fast algorithms, based on the use of modular and evaluation homomorphisms, for computing greatest common divisors and resultants of multivariate polynomials with integer coefficients. In broad outline, these algorithms proceed as follows. Given a Pair (A,B) of polynomials over the integers, I, whose o.k., is to be computed, the problem is reduced to computing the g.c.d, of several pairs (A i, B i) of polynomials over finite fields GF(Pi) in the same variables, where A. and B. are the results of applying the modular homomorphisms ~Pi to A and B. If C = gcd (A,B), then it is essentially arranged so that C i = ~bpi(C) can easily be obtained from gcd (A i, B 1) for most primes Pi" Application of the Chinese remainder theorem to sufficiently many C i then yields C. Similarly, given a pair (A,B) of polynomials in r variables over GF(p), the computation of its g.c.d. is reduced, if r > 1, to the computation of the g.c.d.'s of several pairs, (A i, Bi), of polynomials in r - 1 variables, where A i and B i result from applying an evaluation homomorphism ~a to A and B. If C = gcd (A,B) and C i = gcd (A i, Bi), then C is computed by interpolation from sufficiently many of those C i for which C i = ?a.(C)" Finally, if r = 1, then C is computed directly by the Euclidean algorithm for univariate polynomials over a field. The outlines of the resultant algorithms are much the same; these algorithms are described in detail and analyzed in [COLY0c]. As an indication of the tremendous superiority of these algorithms relative to the reduced p.r.s. algorithm, [COL67], which was until very recently the fastest known, the g.c.d, of two univariate integral polynomials, both of degree 50 and with integer coefficients 30 decimal digits long, was computed in 3.64 seconds on a UNIVAC 1108 computer. The degree of the g.c.d, was 25, and its integer coefficients were 15 decimal digits long. 
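The univariate case of this scheme can be sketched in Python (again an illustration, not SAC-1 code): g.c.d. images are computed over GF(p_i) by the Euclidean algorithm, scaled by the g.c.d. of the leading coefficients, and combined by the Chinese remainder theorem. The "luckiness" checks and coefficient bounds of the real algorithm are assumed here rather than implemented.

```python
# Illustrative sketch only, not SAC-1 code: the modular-homomorphism
# g.c.d. scheme for univariate integer polynomials.  We assume all the
# chosen primes are "lucky" (each image has the true g.c.d. degree) and
# that their product exceeds twice the largest coefficient of the scaled
# g.c.d., so the Chinese-remainder reconstruction is exact.
from math import gcd as igcd

def trim(a):
    while a and a[0] == 0:
        a = a[1:]
    return a

def rem_mod(a, b, p):
    """Remainder of a divided by b over GF(p); coefficients highest-first."""
    a = [c % p for c in a]
    inv = pow(b[0], -1, p)            # modular inverse; Python 3.8+
    while len(a) >= len(b):
        q = a[0] * inv % p
        for i in range(len(b)):
            a[i] = (a[i] - q * b[i]) % p
        a = trim(a)
    return a

def gcd_gf(a, b, p):
    """Monic g.c.d. over GF(p) by the Euclidean algorithm."""
    a, b = trim([c % p for c in a]), trim([c % p for c in b])
    while b:
        a, b = b, rem_mod(a, b, p)
    inv = pow(a[0], -1, p)
    return [c * inv % p for c in a]

def modular_gcd(A, B, primes):
    """Combine monic g.c.d. images, scaled by g = gcd(lc(A), lc(B)),
    by pairwise Chinese remaindering; then take symmetric residues and
    remove the integer content."""
    g = igcd(A[0], B[0])
    C, m = None, 1
    for p in primes:
        img = [g * c % p for c in gcd_gf(A, B, p)]
        if C is None:
            C, m = img, p
        else:                         # Garner-style CRT combination
            inv = pow(m, -1, p)
            C = [c + m * ((d - c) * inv % p) for c, d in zip(C, img)]
            m *= p
    C = [c - m if c > m // 2 else c for c in C]   # symmetric residues
    content = 0
    for c in C:
        content = igcd(content, c)
    return [c // content for c in C]
```

For example, with A = (2x+3)(x^2+x+1) and B = (2x+3)(x-1), `modular_gcd([2, 5, 5, 3], [2, 1, -3], [101, 103])` recovers the common factor 2x + 3 as `[2, 3]`.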
It is reliably estimated that the computing time for this problem using the reduced p.r.s. algorithm would have been about 13 minutes, or more than 200 times as long. As the degrees or the number of variables increase, the ratios of the computing times of these algorithms become larger and larger. Extract: The Polynomial Factorization System The Polynomial Factorization System The SAC-1 polynomial factorization system, [COL70e], is the first practical system for the complete factorization of even univariate polynomials over the integers, and this system also efficiently factors many polynomials in more than one variable. Previous attempts at factorization, for example [ENG65], [LOH66], and [IOR66], have generally relied on Kronecker's algorithm with embellishments or variations and have generally been unsuccessful in factoring even univariate polynomials of degrees greater than 10, at most. In contrast, this system has readily factored a large assortment of univariate polynomials of degrees up to 40. The algorithms in this system are somewhat analogous to those described above for g.c.d. and resultant calculation, but there are also substantial differences. Given a multivariate polynomial over the integers to be factored, a squarefree factorization is first computed; each squarefree factor so obtained must then be factored into irreducibles. Given a squarefree multivariate integral polynomial to be factored, the problem is reduced to the factorization of some squarefree polynomial over GF(p) for some (generally small) prime p, which is obtained by applying the modular homomorphism φ_p to the given polynomial. In place of the several large primes used in g.c.d. calculation, a single small prime is used (other primes may be "tried" but not "used").
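The squarefree-factorization first step mentioned above can be sketched as follows. This uses Yun's algorithm, one standard way to compute a squarefree decomposition by repeated g.c.d.'s; it is an illustrative Python rendering with exact rational arithmetic, not necessarily the procedure used in SAC-1.

```python
# Illustrative sketch only: squarefree factorization of a univariate
# polynomial via Yun's algorithm (repeated g.c.d.'s with the derivative),
# in Python with exact rationals rather than SAC-1's Fortran.
# Coefficients are listed highest degree first.
from fractions import Fraction

def trim(p):
    i = 0
    while i < len(p) and p[i] == 0:
        i += 1
    return p[i:]

def deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def sub(a, b):
    n = max(len(a), len(b))
    a = [Fraction(0)] * (n - len(a)) + list(a)
    b = [Fraction(0)] * (n - len(b)) + list(b)
    return trim([x - y for x, y in zip(a, b)])

def divmod_poly(a, b):
    """Quotient and remainder of a divided by b (b nonzero)."""
    a, q = list(a), []
    while len(a) >= len(b):
        c = a[0] / b[0]
        q.append(c)
        for i in range(len(b)):
            a[i] -= c * b[i]
        a.pop(0)
    return q, trim(a)

def gcd_poly(a, b):
    """Monic g.c.d. over the rationals (Euclidean algorithm)."""
    a, b = trim(list(a)), trim(list(b))
    while b:
        a, b = b, divmod_poly(a, b)[1]
    return [c / a[0] for c in a]

def squarefree(A):
    """Yun's algorithm: list of (monic factor, multiplicity) pairs."""
    A = [Fraction(c) for c in A]
    A = [c / A[0] for c in A]            # normalize to monic
    d = deriv(A)
    g = gcd_poly(A, d)                   # gcd(A, A')
    b = divmod_poly(A, g)[0]             # squarefree part of A
    z = sub(divmod_poly(d, g)[0], deriv(b))
    out, i = [], 1
    while len(b) > 1:
        g = gcd_poly(b, z)               # the multiplicity-i factor
        if len(g) > 1:
            out.append((g, i))
        b = divmod_poly(b, g)[0]
        z = sub(divmod_poly(z, g)[0], deriv(b))
        i += 1
    return out
```

For example, (x+1)^2 (x+2) = x^3 + 4x^2 + 5x + 2 decomposes as `squarefree([1, 4, 5, 2])` into the factor x + 2 with multiplicity 1 and x + 1 with multiplicity 2.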
And in place of the Chinese remainder theorem, a version of Hensel's lemma (see, e.g., [KNU69] or [VDW48]) is used to progress from a modulo p factorization to a modulo p^k factorization for some sufficiently large positive integer k. Similarly, to factor a squarefree polynomial in r variables over GF(p), if r > 1 an evaluation homomorphism, φ_a, is applied, reducing by one the number of variables. In place of the several evaluation homomorphisms used in g.c.d. calculation, only one is used. And in place of interpolation, another version of Hensel's lemma is used to progress from a modulo x - a factorization to a modulo (x - a)^k factorization for some sufficiently large k. Finally, when r = 1 Berlekamp's algorithm is used. We have given only a very broad sketch of the factorization algorithms used in this system. Further details, refinements, computing time analyses, underlying theory, alternative algorithms, discussion, examples, and empirically observed computing times are given in Musser's Ph.D. thesis, [MUS71]. Extract: The Polynomial Linear Algebra System The Polynomial Linear Algebra System The primary capability provided by this system, [COL70f], is that of solving exactly a system of linear equations having multivariate polynomial coefficients or, as a special case, infinite-precision integer coefficients. The given system may be inconsistent, have a unique solution, or have infinitely many solutions. In the latter case the system computes both a particular solution and a basis for the null space. Provision is also made for solving simultaneously several systems having the same coefficient matrix. More precisely, the system computes all solutions of a matrix equation AX = B, where A is an m by n matrix and B is an m by q matrix. The general method used is that of modular homomorphisms and the Chinese remainder theorem, and evaluation homomorphisms and interpolation, but this method cannot be applied in an obvious manner.
The basic problem in applying this method is that of defining an appropriate canonical form for the solution set. The canonical form so chosen is one consisting of a reduced row echelon form, A*, of the matrix A and a matrix B* = CB, where C is defined by A* = CA. Another important property of the reduced row echelon matrix A* is that all its elements belong to any integral domain to which the elements of A belong; this property follows from the fact that the elements of A* are all minors of A of order r, where r is the rank of A. A* is called the determinantal reduced row echelon form of A. It has the important property that, for all but a finite number of modular or evaluation homomorphisms φ, φ(A*) = φ(A)*, which yields a test for determining whether sufficiently many homomorphisms have been processed. When this test is not applicable, then an a priori bound on the required number of homomorphisms must be computed. The algorithms of this system are generally much faster than other algorithms which have been considered, for example the exact division method ([LIP69] and [BAR68]). For an m by m non-singular system with integer coefficients d digits long, the maximum computing time of the exact division method is proportional to m^5 d^2. For the modular arithmetic algorithm of this system, the maximum computing time is O(m^4 d + m^3 d^2). For systems with polynomial coefficients, the improvement in computing speed is even more pronounced. For further information concerning the theory supporting this system and for additional analyses of the computing times of its algorithms, the reader may consult the Ph.D. thesis by McClellan, [MCC71].
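For contrast, the rival exact division method mentioned above ([BAR68], fraction-free or Bareiss elimination) can be sketched compactly. This is a Python illustration of that competing method, not of SAC-1's modular algorithm; pivot selection is omitted for brevity.

```python
# Illustrative sketch only: the "exact division" elimination method of
# [BAR68] (Bareiss fraction-free Gaussian elimination), in Python rather
# than Fortran.  Every division below is exact, so intermediate entries
# stay integers; with no pivoting, a zero pivot raises ZeroDivisionError.
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b exactly for a square non-singular integer matrix A;
    returns the solution as a list of Fractions."""
    n = len(A)
    M = [list(row) + [rhs] for row, rhs in zip(A, b)]  # augmented matrix
    prev = 1                                           # previous pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n + 1):
                # Sylvester's identity guarantees this division is exact
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):                     # back-substitution
        s = Fraction(M[i][n]) - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x
```

All intermediate entries are determinants of submatrices of the augmented matrix, which is why their digit lengths grow with both m and d and why the modular algorithm's O(m^4 d + m^3 d^2) bound wins for large systems.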
in [ACM] Proceedings of the Second Symposium on Symbolic and Algebraic Manipulation, March 23-25, 1971 Los Angeles (SYMSAM 71)
in [ACM] CACM 14(08) August 1971
manipulation. The first part of the paper is tutorial and several typical problems are introduced which can be solved using algebraic manipulative systems. Sample programs for the solution of these problems using several algebra systems are then presented. Next, two more difficult examples are used to introduce the reader to the true capabilities of an algebra program and these are proposed as a means of comparison between rival algebra systems. A brief review of the technical problems of algebraic manipulation is given in the final section. in The Computer Journal 15(4) 1972
SAC-1: This is a large system of Fortran subroutines, callable from Fortran, for the manipulation of multivariate rational functions with (infinite-precision) integer coefficients. This implementation does have some unfortunate consequences for SAC-1 programming in that, unlike most other systems that create an applications-oriented environment, the burden of Fortran accounting is on the user. The excerpt in Fig. 3 from a SAC-1 program exhibits this unnaturalness.
Note that the users' concern is as much with type declarations and space allocations as with the mathematics. Algorithms supplied include those for rational function arithmetic, polynomial factorization, the solution of simultaneous linear equations, and one that returns (only) the rational part of the integral of a rational function. This system is important, not only for itself but also for the focus it has provided for the prolific school centered around G. E. Collins at the University of Wisconsin for the study of seminumeric algorithms. A series of Ph.D. dissertations has produced not only many new algorithms but also the majority of the worst-case analyses of such algorithms that we possess today. Extract: FORMAC The best known, purely symbolic systems are, of course, Formac and its current version PL/I-FORMAC (Petrick, 1971; pp. 105-114). Formac was the first widely available general-purpose algebraic manipulation system and served for a period to define the field. Certainly, there was a time when one could have safely made the statement that the majority of all mechanical symbolic mathematical computations had been done within Formac. The practical success of these systems, in spite of their rigidity with respect to user modifications and their lack of any seminumerical facilities for rational function computations, is probably due to the overall intelligence of the facilities that were provided. Above all, they were certainly sufficient to support the dominant application area of truncated power series expansion. Current support is minimal. Extract: Symbolic systems SYMBOLIC SYSTEMS. We should mention first a sequence of three early programs for the simplification of general symbolic mathematical expressions represented as prefix-notation tree structures. The first, at M.I.T., was due to Hart, and the other two were due to Wooldridge and Korsvold at Stanford.
The latter has survived in current usage as a result of its incorporation, subject to modification, into the MATHLAB, MACSYMA, and SCRATCHPAD systems. In the mid-1960s there appeared two systems, Formula Algol and FAMOUS, which, while dedicated to the symbolic manipulation of mathematical expressions, presented the user with almost no built-in automatic simplification facilities. This was due, at least in the case of FAMOUS, to a conscious decision that, since the "simplicity" of an expression is surely context-dependent, it should be reasonable to present the user with complete control over the simplification process. That is, the user should be compelled to define all transformations, rather than, as with most systems, be permitted simply to switch on and off the transformations supplied by the system architects. No system of this species has ever solved the inherent efficiency problems to the extent that it could serve more than didactic purposes. Probably neither Formula Algol nor FAMOUS could be revived today. Another lost symbolic system of importance is the Symbolic Mathematical Laboratory of W. A. Martin. This system provided high-quality 2-D graphics on a DEC-340 display and was also the first to employ a light pen for subexpression selection. In some ways, it represented a degree of interaction that has not been duplicated by any subsequent system. Nor were its innovative internal programming techniques restricted to its graphics facilities. Of particular interest is the use of hash coding for subexpression matching (Petrick, 1971; pp. 305-310). in Encyclopedia of Computer Science, Ralston, Anthony, and Meek, Chester L. (eds) New York, NY Petrocelli/Charter 1976 described. FORMULA PASCAL is an extension of PASCAL with the standard data type FORMULA. A variable of type formula is an infinite-precision integer, a polynomial in one or more variables, or a quotient of two formulas.
In FORMULA PASCAL we may use the operators +, -, *, / in formula expressions, and have variables, functions, parameters and arrays of type formula. A FORMULA PASCAL program is translated to a PASCAL program which calls the polynomial manipulation routines of the SAC-1 system. The goal is to make use of the advantages of SAC-1 and PASCAL and provide a system in which a user who knows PASCAL should not have any difficulties in writing a formula manipulation program. The presentation included an example of the calculation of coefficients for an algorithm to solve differential equations. References: FORMULA manipulation in PASCAL using SAC-1, Informatika Rapport IR 25, Nov. 1977. in Encyclopedia of Computer Science, Ralston, Anthony, and Meek, Chester L. (eds) New York, NY Petrocelli/Charter 1976 A BRIEF HISTORICAL SKETCH ------------------------- The development of systems for symbolic mathematical computation first became an active area of research and implementation during the decade 1961-1971. ... To put the decade 1961-1971 into perspective, let us recall that FORTRAN appeared about 1958 and ALGOL in 1960. These two languages were designed primarily for numerical mathematical computation. Then in 1960/1961 came the development of LISP, a language for list processing. LISP was a major advancement on the road to languages for symbolic computation. An operation such as symbolic differentiation which is foreign to FORTRAN and ALGOL is relatively easy in LISP. (Indeed this is one of the standard programming assignments for students first learning LISP.) As will be noted later, several computer algebra systems were written in LISP. 1961-1966 --------- In 1961, James Slagle at M.I.T. wrote a LISP program called SAINT for Symbolic Automatic INTegration. This was one of the earliest applications of LISP to symbolic computation and it was the first comprehensive attempt to program a computer to behave like a freshman calculus student.
The program was based on a number of heuristics for indefinite integration and it performed about as well as a good calculus student. One of the first systems for symbolic computation was FORMAC, developed by Jean Sammet, Robert Tobey, and others at IBM during the period 1962-1964. It was a FORTRAN preprocessor (a PL/I version appeared later) and it was designed for the manipulation of elementary functions including, of course, polynomials and rational functions. Another early system was ALPAK, a collection of FORTRAN-callable subroutines written in assembly language for the manipulation of polynomials and rational functions. It was designed by William S. Brown and others at Bell Laboratories and was generally available about 1964. A language now referred to as Early ALTRAN was designed at Bell Laboratories during the period 1964-1966. It used ALPAK as its package of computational procedures. There were two other significant systems for symbolic computation developed during this period. George Collins at IBM and the University of Wisconsin (Madison) developed PM, a system for polynomial manipulation, an early version of which was operational in 1961 with improvements added to the system through 1966. The year 1965 marked the first appearance of MATHLAB, a LISP-based system for the manipulation of polynomials and rational functions, developed by Carl Engelman at M.I.T. It was the first interactive system designed to be used as a symbolic calculator. Included among its many firsts was the use of two-dimensional output to represent its mathematical output. The work of this period culminated in the first ACM Symposium on Symbolic and Algebraic Manipulation held in March 1966 in Washington, D.C. That conference was summarized in the August 1966 issue of the Communications of the ACM. 1966-1971 --------- In 1966/1967, Joel Moses at M.I.T. wrote a LISP program called SIN (for Symbolic Integrator). 
Unlike the earlier SAINT program, SIN was algorithmic in approach and it was also much more efficient. In 1968, Tony Hearn at Stanford University developed REDUCE, an interactive LISP-based system for physics calculations. One of its principal design goals was portability over a wide range of platforms, and as such only a limited subset of LISP was actually used. The year 1968 also marked the appearance of Engelman's MATHLAB-68, an improved version of the earlier MATHLAB interactive system, and of the system known as Symbolic Mathematical Laboratory developed by William Martin at M.I.T. in 1967. The latter was a linking of several computers to do symbolic manipulation and to give good graphically formatted output on a CRT terminal. The latter part of the decade saw the development of several important general purpose systems for symbolic computation. ALTRAN evolved from the earlier ALPAK and Early ALTRAN as a language and system for the efficient manipulation of polynomials and rational functions. George Collins developed SAC-1 (for Symbolic and Algebraic Calculations) as the successor of PM for the manipulation of polynomials and rational functions. CAMAL (CAMbridge Algebra system) was developed by David Barton, Steve Bourne, and John Fitch at the University of Cambridge. It was implemented in the BCPL language, and was particularly geared to computations in celestial mechanics and general relativity. REDUCE was redesigned by 1970 into REDUCE 2, a general purpose system with special facilities for use in high-energy physics calculations. It was written in an ALGOL-like dialect called RLISP, avoiding the cumbersome parenthesized notation of LISP, while at the same time retaining its original design goal of being easily portable. SCRATCHPAD was developed by J. 
Griesmer and Richard Jenks at IBM Research as an interactive LISP-based system which incorporated significant portions of a number of previous systems and programs into its library, such as MATHLAB-68, REDUCE 2, Symbolic Mathematical Library, and SIN. Finally, the MACSYMA system first appeared about 1971. Designed by Joel Moses, William Martin, and others at M.I.T., MACSYMA was the most ambitious system of the decade. Besides the standard capabilities for algebraic manipulation, it included facilities to aid in such computations as limit calculations, symbolic integration, and the solution of equations. The decade from 1961 to 1971 concluded with the Second Symposium on Symbolic and Algebraic Manipulation held in March 1971 in Los Angeles. The proceedings of that conference constitute a remarkably comprehensive account of the state of the art of symbolic mathematical computation in 1971. 1971-1981 --------- While all of the languages and systems of the sixties and seventies began as experiments, some of them were eventually put into "production use'' by scientists, engineers, and applied mathematicians outside of the original group of developers. REDUCE, because of its early emphasis on portability, became one of the most widely available systems of this decade. As a result it was instrumental in bringing computer algebra to the attention of many new users. MACSYMA continued its strong development, especially with regard to algorithm development. Indeed, many of the standard techniques (e.g. integration of elementary functions, Hensel lifting, sparse modular algorithms) in use today either came from, or were strongly influenced by, the research group at M.I.T. It was by far the most powerful of the existing computer algebra systems. SAC/ALDES by G. Collins and R. Loos was the follow-up to Collins' SAC-1. 
It was a non-interactive system consisting of modules written in the ALDES (Algebraic DEScription) language, with a translator converting the results to ANSI FORTRAN. One of its most notable distinctions was in being the only major system to completely and carefully document its algorithms. A fourth general purpose system which made a significant mark in the late 1970's was muMATH. Developed by David Stoutemyer and Albert Rich at the University of Hawaii, it was written in a small subset of LISP and came with its own programming language, muSIMP. It was the first comprehensive computer algebra system which could actually run on the IBM family of PC computers. By being available on such small and widely accessible personal computers, muMATH opened up the possibility of widespread use of computer algebra systems for both research and teaching. In addition to the systems mentioned above, a number of special purpose systems also generated some interest during the 1970's. Examples of these include: SHEEP, a system for tensor component manipulation designed by Inge Frick and others at the University of Stockholm; TRIGMAN, specially designed for computation of Poisson series and written in FORTRAN by W. H. Jeffreys at University of Texas (Austin); and SCHOONSCHIP by M. Veltman of the Netherlands for computations in high-energy physics. Although the systems already mentioned have all been developed in North America and Europe, there were also a number of symbolic manipulation programs written in the U.S.S.R. One of these is ANALITIK, a system implemented in hardware by V. M. Glushkov and others at the Institute of Cybernetics, Kiev. 1981-1991 --------- Due to the significant computer resource requirements of the major computer algebra systems, their widespread use remained (with the exception of muMATH) limited to researchers having access to considerable computing resources. 
With the introduction of microprocessor-based workstations, the possibility of relatively powerful desk-top computers became a reality. The introduction of a large number of different computing environments, coupled with the often nomadic life of researchers (at least in terms of workplace locations) caused a renewed emphasis on portability for the computer algebra systems of the 1980's. More efficiency (particularly memory space efficiency) was needed in order to run on the workstations that were becoming available at this time, or equivalently, to service significant numbers of users on the time-sharing environments of the day. This resulted in a movement towards the development of computer algebra systems based on newer "systems implementation'' languages such as C, which allowed developers more flexibility to control the use of computer resources. The decade also marked a growth in the commercialization of computer algebra systems. This had both positive and negative effects on the field in general. On the negative side, users not only had to pay for these systems but also they were subjected to unrealistic claims as to what constituted the state of the art of these systems. However, on the positive side, commercialization brought about a marked increase in the usability of computer algebra systems, from major advances in user interfaces to improvements to their range of functionality in such areas as graphics and document preparation. The beginning of the decade marked the origin of MAPLE. Initiated by Gaston Gonnet and Keith Geddes at the University of Waterloo, its primary motivation was to provide user accessibility to computer algebra. MAPLE was designed with a modular structure: a small compiled kernel of modest power, implemented completely in the systems implementation language C (originally B, another language in the "BCPL family'') and a large mathematical library of routines written in the user-level MAPLE language to be interpreted by the kernel. 
Besides the command interpreter, the kernel also contained facilities such as integer and rational arithmetic, simple polynomial manipulation, and an efficient memory management system. The small size of the kernel allowed it to be implemented on a number of smaller platforms and allowed multiple users to access it on time-sharing systems. Its large mathematical library, on the other hand, allowed it to be powerful enough to meet the mathematical requirements of researchers. Another system written in C was SMP (Symbolic Manipulation Program) by Stephen Wolfram at Caltech. It was portable over a wide range of machines and differed from existing systems by using a language interface that was rule-based. It took the point of view that the rule-based approach was the most natural language for humans to interface with a computer algebra program. This allowed it to present the user with a consistent, pattern-directed language for program development. The newest of the computer algebra systems during this decade were MATHEMATICA and DERIVE. MATHEMATICA is a second system written by Stephen Wolfram (and others). It is best known as the first system to popularize an integrated environment supporting symbolics, numerics, and graphics. Indeed when MATHEMATICA first appeared in 1988, its graphical capabilities (2-D and 3-D plotting, including animation) far surpassed any of the graphics available on existing systems. MATHEMATICA was also one of the first systems to successfully illustrate the advantages of combining a computer algebra system with the easy-to-use editing features on machines designed to use graphical user-interfaces (i.e. window environments). Based on C, MATHEMATICA also comes with its own programming language which closely follows the rule-based approach of its predecessor, SMP. DERIVE, written by David Stoutemyer and Albert Rich, is the follow-up to the successful muMATH system for personal computers. 
While lacking the wide range of symbolic capabilities of some other systems, DERIVE has an impressive range of applications considering the limitations of the 16-bit PC machines for which it was designed. It has a friendly user interface, with such added features as two-dimensional input editing of mathematical expressions and 3-D plotting facilities. It was designed to be used as an interactive system and not as a programming environment. Along with the development of newer systems, there were also a number of changes to existing computer algebra systems. REDUCE 3 appeared in 1983, this time with a number of new packages added by outside developers. MACSYMA bifurcated into two versions, DOE-MACSYMA and one distributed by SYMBOLICS, a private company best known for its LISP machines. Both versions continued to develop, albeit in different directions, during this decade. AXIOM, (known originally as SCRATCHPAD II) was developed during this decade by Richard Jenks, Barry Trager, Stephen Watt and others at the IBM Thomas J. Watson Research Center. A successor to the first SCRATCHPAD language, it is the only "strongly typed'' computer algebra system. Whereas other computer algebra systems develop algorithms for a specific collection of algebraic domains (such as, say, the field of rational numbers or the domain of polynomials over the integers), AXIOM allows users to write algorithms over general fields or domains. As was the case in the previous decade, the eighties also found a number of specialized systems becoming available for general use. Probably the largest and most notable of these is the system CAYLEY, developed by John Cannon and others at the University of Sydney, Australia. CAYLEY can be thought of as a "MACSYMA for group theorists.'' It runs in large computing environments and provides a wide range of powerful commands for problems in computational group theory. 
An important feature of CAYLEY is a design geared to answering questions not only about individual elements of an algebraic structure, but more importantly, questions about the structure as a whole. Thus, while one could use a system such as MACSYMA or MAPLE to decide if an element in a given domain (such as a polynomial domain) has a given property (such as irreducibility), CAYLEY can be used to determine if a group structure is finite or infinite, or to list all the elements in the center of the structure (i.e. all elements which commute with all the elements of the structure). Another system developed in this decade and designed to solve problems in computational group theory is GAP (Group Algorithms and Programming) developed by J. Neubueser and others at the University of Aachen, Germany. If CAYLEY can be considered to be the "MACSYMA of group theory,'' then GAP can be viewed as the "MAPLE of group theory.'' GAP follows the general design of MAPLE in implementing a small compiled kernel (in C) and a large group theory mathematical library written in its own programming language. Examples of some other special purpose systems which appeared during this decade include FORM by J. Vermaseren, for high energy physics calculations, LiE, by A.M. Cohen for Lie Algebra calculations, MACAULAY, by Michael Stillman, a system specially built for computations in Algebraic Geometry and Commutative Algebra, and PARI by H. Cohen in France, a system oriented mainly for number theory calculations. As with most of the new systems of the eighties, these last two are also written in C for portability and efficiency. Research Information about Computer Algebra ------------------------------------------- Research in computer algebra is a relatively young discipline, and the research literature is scattered throughout various journals devoted to mathematical computation. 
However, its state has advanced to the point where there are two research journals primarily devoted to this subject area: the "Journal of Symbolic Computation" published by Academic Press and "Applicable Algebra in Engineering, Communication and Computing" published by Springer-Verlag. Other than these two journals, the primary source of recent research advances and trends is a number of conference proceedings. Until recently, there was a sequence of North American conferences and a sequence of European conferences. The North American conferences, primarily organized by ACM SIGSAM (the ACM Special Interest Group on Symbolic and Algebraic Manipulation), include SYMSAM '66 (Washington, D.C.), SYMSAM '71 (Los Angeles), SYMSAC '76 (Yorktown Heights), SYMSAC '81 (Snowbird), and SYMSAC '86 (Waterloo). The European conferences, organized by SAME (Symbolic and Algebraic Manipulation in Europe) and ACM SIGSAM, include the following whose proceedings have appeared in the Springer-Verlag series "Lecture Notes in Computer Science": EUROSAM '79 (Marseilles), EUROCAM '82 (Marseilles), EUROCAL '83 (London), EUROSAM '84 (Cambridge), EUROCAL '85 (Linz), and EUROCAL '87 (Leipzig). Starting in 1988, the two streams of conferences have been merged and they are now organized under the name ISSAC (International Symposium on Symbolic and Algebraic Computation), including ISSAC '88 (Rome), ISSAC '89 (Portland, Oregon), ISSAC '90 (Tokyo), ISSAC '91 (Bonn) and ISSAC '92 (Berkeley). ----------------------------------------------- Professor Keith Geddes Symbolic Computation Group Department of Computer Science University of Waterloo Waterloo ON N2L 3G1 CANADA in Encyclopedia of Computer Science, Ralston, Anthony, and Meek, Chester L. (eds) New York, NY Petrocelli/Charter 1976