LAMINA(ID:1405/lam007)


Concurrent object-oriented language.


Structures:
Related languages
Actors => LAMINA   Influence

References:
  • Delagi, Bruce A.; Saraiya, Nakul P.; Nishimura, Sayuri and Byrd, Gregory T. "An instrumented architectural simulation system". Technical Report KSL-86-36, Knowledge Systems Laboratory, Stanford University, 1986.
  • Delagi, Bruce A.; Saraiya, Nakul P.; Nishimura, Sayuri and Byrd, Gregory T. "Lamina: Care applications interface". Technical Report KSL-86-67, Knowledge Systems Laboratory, Stanford University, 1986.
  • "Experiments with a Knowledge-based System on a Multiprocessor", Third Intl Conf Supercomputing Proc, 1988. view details
  • Delagi et al. "ELINT in LAMINA: Application of a Concurrent Object Language". Technical Report KSL-88-3, Knowledge Systems Laboratory, Stanford University.
  • Delagi, B. A. and Saraiya, N. P. "Elint in Lamina: application of a concurrent object language" pp194-196
    Abstract: The design and performance of an "expert system" signal interpretation application written in a concurrent object-based programming language, LAMINA, is described, together with a synopsis of the programming model that forms the foundation of the language. The effects of load balancing and the limits imposed by task granularity and message transmission costs are studied, and their consequences for application performance are measured over the range of one to 250 processors as simulated in SIMPLE/CARE, an extensively instrumented simulation system and computer array model.
    Extract:
    LAMINA Objects
    The LAMINA object programming model is based on asynchronous communicating objects. The objects communicate using streams. An object, as used here, is a collection of variables, the state variables of that object, manipulated by (and only by) a set of procedures, the methods associated with that object. Streams represent sequences of values over time; information sent to a stream builds the sequence represented by that stream.
    Each LAMINA object has associated with it a distinguished stream that is its task stream. The information arriving on an object's task stream specifies tasks for the object; each such piece of information is a message. Each message names a method to execute and includes the parameters for the execution. When a task execution sends a message to a stream, the execution is not normally delayed to wait for a responding message (or even for an acknowledgement of the receipt of the message).
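    By way of illustration, here is a minimal Python sketch of that model (LAMINA itself was Lisp-based and ran on the SIMPLE/CARE simulator; every name below is illustrative, not LAMINA's actual API). A stream is modeled as a thread-safe queue, each object owns a distinguished task stream, and sending a message never waits for a reply:

        import queue
        import threading

        class Stream:
            """A sequence of values over time; senders never wait for a reply."""
            def __init__(self):
                self._q = queue.Queue()

            def send(self, message):        # asynchronous: returns immediately
                self._q.put(message)

            def take(self):                 # blocks until the next value arrives
                return self._q.get()

        class LaminaObject:
            """State variables manipulated only by this object's own methods."""
            def __init__(self):
                self.state = {}                  # the object's state variables
                self.task_stream = Stream()      # the distinguished task stream
                threading.Thread(target=self._dispatch, daemon=True).start()

            def _dispatch(self):
                # The per-object dispatch process: take each message from the
                # task stream and execute the named method with its parameters.
                while True:
                    method_name, params = self.task_stream.take()
                    getattr(self, method_name)(*params)

        class Counter(LaminaObject):
            def bump(self, amount):
                self.state["count"] = self.state.get("count", 0) + amount

        c = Counter()
        c.task_stream.send(("bump", (3,)))  # sender does not wait for a response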
    The information sent to a stream consists of references to streams together with unshared values, which may be atoms as well as structures. Values that have internal structure must be encoded before transmission. Encoding involves both graph structure linearization and internal pointer relativization. When such a value arrives at its destination, storage must be allocated to contain it and it must be decoded, that is, its internal pointers must be re-expressed in absolute terms.
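    A hedged sketch of that encode/decode step, again in illustrative Python: the graph is linearized into a flat list of records whose internal pointers become indices (relativization), and decoding allocates fresh storage and makes the pointers absolute again:

        class Node:
            def __init__(self, value, next=None):
                self.value = value
                self.next = next

        def encode(root):
            """Linearize a (possibly cyclic) graph of Nodes into index-based records."""
            index, order, stack = {}, [], [root]
            while stack:
                n = stack.pop()
                if n is None or id(n) in index:
                    continue
                index[id(n)] = len(order)
                order.append(n)
                stack.append(n.next)
            # Each record carries the value plus the relativized (index-valued)
            # form of the node's internal pointer.
            return [(n.value, index.get(id(n.next))) for n in order]

        def decode(records):
            """Allocate fresh storage, then re-express indices as absolute pointers."""
            nodes = [Node(value) for value, _ in records]
            for node, (_, next_ix) in zip(nodes, records):
                node.next = nodes[next_ix] if next_ix is not None else None
            return nodes[0]

        # A two-node cycle survives transmission as an independent copy:
        a = Node("a"); b = Node("b", a); a.next = b
        copy = decode(encode(a))
        assert copy.value == "a" and copy.next.next is copy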
    Like ACTORS[1], the LAMINA object model is characterized by non-deterministic receipt of messages; message arrival order is not guaranteed to be in sending order. Like ACTORS, message arrival triggers computation. In other ways, however, as discussed in [2], the LAMINA object model departs from ACTORS, by generally trading off flexibility for efficiency, by dealing more directly with mutability, and, since streams are first-class entities, by allowing objects to establish communications over streams other than their task streams.
    [...] messages arriving on the task stream of an object specify tasks to be done by that object. There is an eternal dispatch process for each object which takes these messages from the stream and executes them in turn.
    Tasks usually mutate the state variables of the object and generate new messages. Tasks have exclusive access to their execution context but are preemptible and can also have implicit continuations.
    Tasks in the LAMINA object model are normally data driven and run to completion. They are generally intended to be accomplished as the stages of a pipeline, thus organizing the work performed by the objects of the application. Objects only begin tasks when all the needed information is available. In order not to block the pipeline, a task that is started is run to completion unless it is preempted by the underlying system (e.g. for a debug trap or the consumption of run quanta).
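    Reusing the Stream and LaminaObject classes from the first sketch, the pipeline organization might look like the following (Stage and its unit of work are invented for illustration); each stage runs its task to completion and forwards a message downstream without ever waiting on a reply:

        class Stage(LaminaObject):
            def __init__(self, downstream=None):
                super().__init__()
                self.downstream = downstream   # task stream of the next stage

            def process(self, item):
                result = item + 1              # stand-in for the stage's real work
                if self.downstream is not None:
                    self.downstream.send(("process", (result,)))

        sink   = Stage()
        middle = Stage(downstream=sink.task_stream)
        source = Stage(downstream=middle.task_stream)
        source.task_stream.send(("process", (0,)))  # flows through all three stages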
    Experience with the LAMINA object model[4,5] has demonstrated that, with few exceptions, the continuation of a task is most readily specified explicitly as a message that is sent to an object. When this is not sufficient, an implicit, anonymous continuation (as shown in figure 1) is used to capture the environment needed to later continue the computation, and this is deferred until further information from some other (server) object is available; the requesting object may perform other tasks while awaiting the required information. To form the continuation, any required bindings that are on the stack are copied into a closure and the stack storage is released. Stack allocation is thus used to the greatest extent possible and heap allocation is minimized.
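    A sketch of such an implicit continuation, built on the earlier illustrative classes (the request/reply protocol here is an assumption, not LAMINA's actual one): the needed bindings are copied into a closure, the original task completes, and the closure later runs as its own task once the server's reply arrives:

        class Server(LaminaObject):
            def answer(self, question, reply_to):
                reply_to.send((question, 42))        # stand-in computation

        class Client(LaminaObject):
            def ask(self, server_stream, question):
                reply_stream = Stream()              # a stream other than the task stream
                pending = question                   # binding copied into the closure

                def continuation(answer):
                    # Runs later as its own atomic task; 'pending' lives in the
                    # closure, not on the released stack of the original task.
                    self.state[pending] = answer

                def await_reply():
                    # Route the reply back through our own task stream so the
                    # continuation executes under the one-task-at-a-time rule.
                    _, answer = reply_stream.take()
                    self.task_stream.send(("run_continuation", (continuation, answer)))

                threading.Thread(target=await_reply, daemon=True).start()
                server_stream.send(("answer", (question, reply_stream)))
                # The original task ends here; other tasks may run on this
                # object before the continuation does.

            def run_continuation(self, continuation, answer):
                continuation(answer)

        server, client = Server(), Client()
        client.task_stream.send(("ask", (server.task_stream, "meaning")))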
    The binding and control stacks for both a task and its (implicit) continuation are empty when execution is begun, non-empty during execution, and empty again when execution is done. Since task preemption is an exceptional condition and since tasks and their continuations otherwise always run to completion, stack storage space is generally reusable among all the tasks on a processor.
    This avoids the high space penalty of using coarse-grained page-protection-based stack limit mechanisms, allowing the use of efficient virtual memory and cache mechanisms without resorting to coarse-grained task decomposition.
    When the system preempts an object's task, that object does not execute any other tasks until the preemption is resolved. In this way, although the object's pipeline is blocked while the preemption exception is dealt with, the illusion of atomic execution with respect to the context of a task is preserved. However, tasks for other objects continue to be executed. This means that data consistency can only be preserved if no state is shared between objects.
    LAMINA objects never share structures: they communicate only by exchanging messages, which may contain independent copies of local structures. Thus the atomicity of operations on an object is not affected by the operations on other objects.
    Implicit continuations are not part of the original task's atomic execution. Instead, the task and its continuation are independent atomic executions. The execution of the original task is first completed and its continuation is executed some time later, after the latter's requirements for additional information have been satisfied. In the meantime, other tasks are executed by the object, allowing messages specifying additional work to be passed down the pipeline to other objects.
    Although an implicit continuation is a separate atomic execution, it shares the spawning object's execution environment. Therefore, any structures which are closed over may be altered by other tasks on the same object while the continuation awaits execution: invariants must be reestablished by the completion of each task and continuation.

          in SIGPLAN Notices 24(04) April 1989, incorporating Proceedings of the 1988 ACM SIGPLAN workshop on Object-based concurrent programming, San Diego