9PAC (ID:35/pac001)
Report generator for 709 PACkage.
IBM, 1957. Report generator for the IBM 709.
From Fry and Sibley: "These routines provided the basis for a joint development by several users under the SHARE organization of the 709 Package (9Pac) for the IBM 709/90. 9Pac is the principal ancestor of most commercial report generators developed since 1960."
Hardware:
Related languages
References:

Assembly and Compiling Systems both obey the "pre-translation" principle. Pseudo instructions are interpreted and a running program is produced before the solution is initiated. Usually this makes possible a single set of references to the library rather than many repeated references.

In an assembly system the pseudo-code is ordinarily modified computer code. Each pseudo instruction refers to one machine instruction or to a relatively short subroutine. Under the control of the master routine, the assembly system sets up all controls for monitoring the flow of input and output data and instructions.

A compiler system operates in the same way as an assembly system, but does much more. In most compilers each pseudo instruction refers to a subroutine consisting of from a few to several hundred machine instructions. Thus it is frequently possible to perform all coding in pseudo-code only, without the use of any machine instructions. From the viewpoint of the user, compilers are the more desirable type of automatic programming because of the comparative ease of coding with them. However, compilers are not available with all existing equipments. In order to develop a compiler, it is usually necessary to have a computer with a large supplementary storage such as a magnetic tape system or a large magnetic drum. This storage facilitates compilation by making possible as large a running program as the problem requires.

Examples of assembly systems are Symbolic Optimum Assembly Programming (S.O.A.P.) for the IBM 650 and REgional COding (RECO) for the UNIVAC SCIENTIFIC 1103 Computer. The X-1 Assembly System for the UNIVAC I and II Computers is not only an assembly system, but is also used as an internal part of at least two compiling systems.

Extract: MATH-MATIC, FORTRAN and UNICODE
For scientific and mathematical calculations, three compilers which translate formulas from standard symbologies of algebra to computer code are available for use with three different computers. These are the MATH-MATIC (AT-3) System for the UNIVAC I and II Computers, FORTRAN (for FORmula TRANslation) as used for the IBM 704 and 709, and the UNICODE Automatic Coding System for the UNIVAC SCIENTIFIC 1103A Computer.

Extract: FLOW-MATIC and REPORT GENERATOR
Two advanced compilers have also been developed for use with business data processing. These are the FLOW-MATIC (B-ZERO) Compiler for the UNIVAC I and II Computers and REPORT GENERATOR for the new IBM 709. In these compilers, English words and sentences are used as pseudo-code.
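The distinction drawn in this extract, one pseudo instruction expanding to roughly one machine instruction in an assembly system versus a whole library subroutine in a compiler system, can be pictured with a small sketch. The following Python fragment is purely illustrative: the pseudo instructions, opcodes and expansion tables are invented for the example and do not correspond to any actual assembler or compiler of the period.

```python
# Toy illustration of the "pre-translation" principle described above.
# All pseudo instructions, opcodes and expansions here are invented; they do
# not correspond to any real 1950s machine or translation system.

# Assembly system: each pseudo instruction maps to (roughly) one machine
# instruction, so translation is a one-to-one substitution.
ASSEMBLER_TABLE = {
    "LOAD A":  ["LDA A"],
    "ADD B":   ["ADD B"],
    "STORE C": ["STO C"],
}

# Compiler system: each pseudo instruction may expand into a whole subroutine
# of machine instructions, drawn from the library once before the run.
COMPILER_TABLE = {
    "C = A + B": ["LDA A", "ADD B", "STO C"],
    "PRINT C":   ["LDA C", "JSB PRINT_SUBR", "NOP"],  # invented library call
}

def pre_translate(pseudo_program, table):
    """Translate the whole pseudo-code program into machine code once,
    before execution begins (the 'pre-translation' principle)."""
    machine_program = []
    for pseudo in pseudo_program:
        machine_program.extend(table[pseudo])
    return machine_program

if __name__ == "__main__":
    print(pre_translate(["LOAD A", "ADD B", "STORE C"], ASSEMBLER_TABLE))
    print(pre_translate(["C = A + B", "PRINT C"], COMPILER_TABLE))
```

Either way, the translation happens once, before the run begins, which is the "pre-translation" behaviour the extract describes.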
... employ new methods in many areas of research. Performance of 1 million multiplications on a desk calculator is estimated to require about five years and to cost $25,000. On an early scientific computer, a million multiplications required eight minutes and cost (exclusive of programing and input preparation) about $10. With the recent LARC computer, 1 million multiplications require eight seconds and cost about 50 cents (Householder, 1956). Obviously it is imperative that researchers examine their methods in light of the abilities of the computer. It should be noted that much of the information published on computers and their use has not appeared in educational or psychological literature but rather in publications specifically concerned with computers, mathematics, engineering, and business. The following selective survey is intended to guide the beginner into this broad and sometimes confusing area. It is not an exhaustive survey.

It is presumed that the reader has access to the excellent Wrigley (1957) article; so the major purpose of this review is to note additions since 1957. The following topics are discussed: equipment availability, knowledge needed to use computers, general references, programing the computer, numerical analysis, statistical techniques, operations research, and mechanization of thought processes.

Extract: Compiler Systems
A compiler is a translating program written for a particular computer which accepts a form of mathematical or logical statement as input and produces as output a machine-language program to obtain the results. Since the translation must be made only once, the time required to repeatedly run a program is less for a compiler than for an interpretive system. And since the full power of the computer can be devoted to the translating process, the compiler can use a language that closely resembles mathematics or English, whereas the interpretive languages must resemble computer instructions. The first compiling program required about 20 man-years to create, but use of compilers is so widely accepted today that major computer manufacturers feel obligated to supply such a system with their new computers on installation. Compilers, like the interpretive systems, reflect the needs of various types of users. For example, the IBM computers use "FORTRAN" for scientific programing and "9 PAC" and "ComTran" for commercial data processing; the Sperry Rand computers use "Math-Matic" for scientific programing and "Flow-Matic" for commercial data processing; Burroughs provides "FORTOCOM" for scientific programming and "BLESSED 220" for commercial data processing. There is some interest in the use of "COBOL" as a translation system common to all computers.

in [ACM] CACM 4(01) (Jan 1961)

A problem of continuing concern to the computer programmer is that of file design: Given a collection of data to be processed, how should these data be organized and recorded so that the processing is feasible on a given computer, and so that the processing is as fast or as efficient as required? While it is customary to associate this problem exclusively with business applications of computers, it does in fact arise, under various guises, in a wide variety of applications: data reduction, simulation, language translation, information retrieval, and even to a certain extent in the classical scientific application. Whether the collections of data are called files, or whether they are called tables, arrays, or lists, the problem remains essentially the same.

The development and use of data processing compilers places increased emphasis on the problem of file design. Such compilers as FLOW-MATIC of Sperry Rand, Air Materiel Command's AIMACO, SURGE for the IBM 704, SHARE's 9PAC for the 709/7090, Minneapolis-Honeywell's FACT, and the various COBOL compilers each contain methods for describing, to the compiler, the structure and format of the data to be processed by compiled programs. These description methods in effect provide a framework within which the programmer must organize his data. Their value, therefore, is closely related to their ability to yield, for a wide variety of applications, a data organization which is both feasible and practical. To achieve the generality required for widespread application, a number of compilers use the concept of the multi-level, multi-record type file.
In contrast to the conventional file which contains records of only one type, the multi-level file may contain records of many types, each having a different format. Furthermore, each of these record types may be assigned a hierarchal relationship to the other types, so that a typical file entry may contain records with different levels of "significance."

This article describes an approach to the design and processing of multi-level files. This approach, designated the property classification method, is a composite of ideas taken from the data description methods of existing compilers. The purpose in doing this is not so much to propose still another file design method as it is to emphasize the principles underlying the existing methods, so that their potential will be more widely appreciated.

in [ACM] CACM 5(08) (August 1962)

Historically, the whole area goes back to the General Electric Hanford system of Report and File Maintenance Generators for the IBM 702. This led to the development by SHARE of 9Pac for the IBM 709 and SURGE for the 704. These were nonprocedural languages with implied file maintenance and report generation functions. There were neither read nor write commands in these languages; they were completely declarative. The assumption was that if you ran the report generator you would get reports; if you ran the file maintenance generator you expected to maintain the file. The Honeywell FACT compiler successfully extracted many of the file disciplines from SURGE and imbedded them in a procedural language with declarative statements defining record structures. These ideas, that were used on a one-dimensional scale (for operating on magnetic tapes) in the 702 Report Generator, SURGE, 9PAC and FACT, were expanded to handle an n-dimensional or graph type structure on a random access device in the GE Integrated Data Store. Today's discussion is concerned with a back-fitting of the Integrated Data Store's concepts of data declarations and record processing commands to serially stored files, to create a unified generalized approach for all classes of data devices.

in [ACM] CACM 9(03) (March 1966), including the proceedings of the ACM Programming Languages and Pragmatics Conference, San Dimas, California, August 1965
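The multi-level, multi-record-type file described in the two extracts above can be pictured with a short sketch. The following Python fragment is only an illustration under assumed record types and field names (EMPLOYEE, JOB-HISTORY and DEPENDENT are invented for the example); it is not how 9PAC, SURGE or FACT actually declared their data.

```python
# Minimal sketch of a multi-level, multi-record-type file of the kind the
# extracts above describe.  Record types, levels and field names are invented
# for illustration only; they are not taken from 9PAC, SURGE or FACT.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Record:
    rtype: str                  # record type; each type may have its own format
    level: int                  # hierarchal "significance" within the file entry
    fields: Dict[str, object]   # the record's own data

@dataclass
class FileEntry:
    """One file entry: a level-1 master record plus its subordinate records."""
    master: Record
    details: List[Record] = field(default_factory=list)

# A conventional file would hold only one record type; a multi-level file
# entry mixes types, each subordinate to the master record above it.
entry = FileEntry(
    master=Record("EMPLOYEE", 1, {"emp_no": 1234, "name": "J. SMITH"}),
    details=[
        Record("JOB-HISTORY", 2, {"year": 1958, "title": "PROGRAMMER"}),
        Record("DEPENDENT", 2, {"name": "A. SMITH", "relation": "SPOUSE"}),
    ],
)

def report_lines(entry: FileEntry) -> List[str]:
    """Walk one entry record by record, the way a tape-oriented report
    generator would encounter it, indenting by level."""
    lines = [f"{entry.master.rtype}: {entry.master.fields}"]
    for rec in entry.details:
        lines.append("  " * (rec.level - 1) + f"{rec.rtype}: {rec.fields}")
    return lines

if __name__ == "__main__":
    print("\n".join(report_lines(entry)))
```

The point of the hierarchy is that a single file entry mixes record types of different formats, each subordinate to the master record above it, which is the kind of structure the property classification method sets out to describe.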
The exact number of all the programming languages still in use, and those which are no longer used, is unknown. Zemanek calls the abundance of programming languages and their many dialects a "language Babel". When a new programming language is developed, only its name is known at first and it takes a while before publications about it appear. For some languages, the only relevant literature stays inside the individual companies; some are reported on in papers and magazines; and only a few, such as ALGOL, BASIC, COBOL, FORTRAN, and PL/1, become known to a wider public through various text- and handbooks. The situation surrounding the application of these languages in many computer centers is a similar one. There are differing opinions on the concept "programming languages". What is called a programming language by some may be termed a program, a processor, or a generator by others.

Since there are no sharp borderlines in the field of programming languages, works were considered here which deal with machine languages, assemblers, autocoders, syntax and compilers, processors and generators, as well as with general higher programming languages. The bibliography contains some 2,700 titles of books, magazines and essays for around 300 programming languages. However, as shown by the "Overview of Existing Programming Languages", there are more than 300 such languages. The "Overview" lists a total of 676 programming languages, but this is certainly incomplete. One author has already announced the "next 700 programming languages"; it is to be hoped the many users may be spared such a great variety for reasons of compatibility.

The graphic representations (illustrations 1 and 2) show the development and proportion of the most widely-used programming languages, as measured by the number of publications listed here and by the number of computer manufacturers and software firms who have implemented the language in question. The illustrations show FORTRAN to be in the lead at the present time. PL/1 is advancing rapidly, although PL/1 compilers are not yet seen very often outside of IBM. Some experts believe PL/1 will replace even the widely-used languages such as FORTRAN, COBOL, and ALGOL. If this does occur, it will surely take some time, as shown by the chronological diagram (illustration 2). It would be desirable from the user's point of view to reduce this language confusion down to the most advantageous languages. Those languages still maintained should incorporate the special facets and advantages of the otherwise superfluous languages. Obviously such demands are not in the interests of computer production firms, especially when one considers that a FORTRAN program can be executed on nearly all third-generation computers.

The titles in this bibliography are organized alphabetically according to programming language, and within a language chronologically and again alphabetically within a given year. Preceding the first programming language in the alphabet, literature is listed on several languages, as are general papers on programming languages and on the theory of formal languages (AAA). As far as possible, most of the titles are based on autopsy. However, the bibliographical description of some titles will not satisfy bibliography-documentation demands, since they are based on inaccurate information in various sources. Translation titles whose original titles could not be found through bibliographical research were not included. In view of the fact that many libraries do not have the quoted papers, all magazine essays should have been listed with the volume, the year, issue number and the complete number of pages (e.g. pp. 721-783), so that interlibrary loans could take place with fast reader service. Unfortunately, these data were not always found.

It is hoped that this bibliography will help the electronic data processing expert, and those who wish to select the appropriate programming language from the many available, to find a way through the language Babel. We wish to offer special thanks to Mr. Klaus G. Saur and the staff of Verlag Dokumentation for their publishing work.

Graz, Austria, May 1973

in [ACM] CACM 15(06) (June 1972)

in [ACM] ACM Computing Surveys (CSUR) 8(1) (March 1976)