Agent-K (ID:8163/)
Agent-oriented language with temporality
Winton Davies and Pete Edwards, U Aberdeen, 1994

Related languages
References:

... to explore interactions between supervised and unsupervised learning agents. To this end we are developing our own relational clustering algorithm which will permit unsupervised first-order learning. Our approach is in some respects similar to that of KBG (Bisson, 1992), which also performs conceptual clustering over a first-order logic representation. KBG finds similarities between the entities found in relations, whereas our algorithm attempts to find similarities between the relations that hold between entities. Our algorithm combines DINUS (Lavrac & Dzeroski, 1994) and COBWEB (Fisher, 1987). DINUS converts a first-order learning problem into a propositional one. COBWEB then finds clusters in the propositional representation. A final step is to describe the clusters in first-order form. This algorithm is still in the preliminary phase of investigation.

The remainder of this paper concentrates on the different ways that agents in a distributed learning system can interact. We briefly describe the different approaches to distributed learning and the related issues of theory revision and knowledge integration. We then conclude with a report on our preliminary experiments in this area. This work focuses on a distributed learning system composed of FOCL agents.

2 Distributed Learning

There are three ways learning can occur when data is distributed. These relate to when agents communicate with respect to the learning process:

- The first approach gathers the data into one place. The use of distributed database management systems to provide a single set of data to an algorithm is an example of this (Simoudis, 1994). The problem with such an approach is that it does not make efficient use of the resources usually associated with distributed computer networks.

- The second approach is for agents to exchange information whilst learning on local data. This is the approach taken by Sian (1991).
No revision or integration is needed, as the agents are effectively working as a single, tightly coupled algorithm over the entire data. This restricts the agents to using learning algorithms that have been specially modified to work in this way. Thus the main disadvantage of this approach is that it does not allow the use of 'off-the-shelf' learning algorithms.

- The third approach is for the agents to learn locally, and then to share their results, which are then refined and integrated by other agents in light of their own data and knowledge. This model permits the use of standard algorithms, and also allows inter-operation between different algorithms. Brazdil & Torgo (1990), Svatek (1995) and Provost & Hennessy (1994) have all taken this approach. The main problem here is how to integrate the local results.

We are adopting the latter approach, as it provides distributed processing together with flexibility in deploying 'off-the-shelf' algorithms. The following section describes the relationship between theory revision, knowledge integration and incremental learning. We will then describe an empirical comparison of three different approaches to distributed learning based on theory revision, knowledge integration, and a combination of the two.

This paper describes our current research, which spans the fields of knowledge discovery and software agents. Knowledge discovery (or data-mining) is concerned with extracting knowledge from databases and/or knowledge bases (Piatetsky-Shapiro & Frawley, 1991). Most data-mining systems employ one or more machine learning techniques to find previously unknown patterns in real-world data. Later in this section we will briefly introduce the learning method we plan to use in our approach, and mention some general issues which differentiate data-mining from machine learning.

Traditionally, data-mining systems are designed to work on a single dataset.
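The DINUS-plus-COBWEB pipeline described earlier (propositionalize a relational problem, then cluster the propositional representation) can be illustrated in miniature. The sketch below is ours, not the authors' implementation: the toy facts, the feature choices, and the trivial grouping step that stands in for COBWEB (which in reality builds a probabilistic concept hierarchy) are all illustrative assumptions.

```python
# Hypothetical sketch: propositionalize relational facts (DINUS-style),
# then cluster the resulting feature vectors (a crude stand-in for COBWEB).

from collections import defaultdict

# Toy relational data: parent(X, Y) facts.
parent = [("ann", "bob"), ("ann", "carol"), ("bob", "dave"),
          ("eve", "frank"), ("frank", "gina")]

def propositionalize(entities, facts):
    """DINUS-style step: describe each entity by propositional
    features derived from the relations it participates in."""
    rows = {}
    for e in entities:
        rows[e] = {
            "n_children": sum(1 for p, _ in facts if p == e),
            "n_parents":  sum(1 for _, c in facts if c == e),
        }
    return rows

entities = sorted({x for pair in parent for x in pair})
table = propositionalize(entities, parent)

# Stand-in for COBWEB: group entities whose feature vectors are identical.
clusters = defaultdict(list)
for e, feats in table.items():
    clusters[tuple(sorted(feats.items()))].append(e)

for key, members in sorted(clusters.items()):
    print(dict(key), "->", members)
```

The final step the authors mention, describing each cluster in first-order form, would translate a group such as "one parent, no children" back into a clause over the `parent` relation.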
However, with the growth of networks, data is increasingly dispersed over many machines in many different geographical locations. In addition, databases are being joined by other sources of information that can be accessed over networks, e.g. knowledge bases, on-line dictionaries, etc. This has raised the issue not only of how to gather distributed information, but of how new knowledge can be discovered in it.

[Footnote 1: Support was provided through a UK Engineering & Physical Sciences Research Council (EPSRC) studentship.]

Software agents (Levy, Sagiv & Srivastava, 1994; Oates, Prasad & Lesser, 1994) are one response to the problem of using the vast amounts of information stored on networked systems. There are many types of software agent (Wooldridge & Jennings, 1994); however, agents are typically thought of as being 'intelligent' programs which have some degree of autonomy. We intend to design an open, flexible data-mining agent. A group of these agents will be able to co-operate to discover knowledge from distributed sources.

To date, most knowledge discovery systems have focused on extracting numeric or propositional knowledge from databases. Such representations cannot express relations between individuals: for example, a propositional system could not learn the concept of grandparenthood from a database containing the names of people and their parents. Such concepts, together with recursive relations, are easily formulated as statements in first-order predicate calculus. Our approach aims to find first-order relations in data, using techniques from Inductive Logic Programming (ILP). Many ILP algorithms allow background knowledge expressed in first-order predicate calculus to be used during learning. Thus a knowledge base could be used to supply existing domain knowledge to an ILP-based data-mining agent.

Data-mining systems differ in certain ways from the machine learning algorithms from which they are typically derived. Firstly, they have to cope with large amounts of data.
For example, learning over a census database containing information on millions of families is very different from looking at a few hand-crafted examples of 'model' families. The second problem is that real-world data has a tendency to contain errors and missing information. Finally, a data-mining system aims to discover knowledge that is novel, useful, and understandable, which typically requires a human to focus the search and to provide feedback on the knowledge discovered.

Our high-level model is shown in Figure 1. One or more agents per network node are responsible for examining and analyzing a local data source. In addition, an agent may query a knowledge source for existing knowledge (such as rules or predicate definitions). The agents communicate with each other during the discovery process. This allows the agents to integrate the new knowledge they produce into a globally coherent theory. A user communicates with the agents via a user interface. In addition, a supervisory agent, responsible for coordinating the discovery agents, may exist. The interface allows the user to assign agents to data sources, and to allocate high-level discovery goals. It allows the user to critique new knowledge discovered by the agents, and to direct the agents to new discovery goals, including ones that might make use of the new knowledge.

[Figure 1: Data-Mining Using Multiple Agents. A user agent and user interface communicate over the network with discovery agents, each on its own network host with a local DB/KB.]

As far as possible, our intention is to base our work on the integration of existing technologies in the fields of software agents and first-order learning. This is in order to concentrate on the core issues of distributed data-mining. We intend to use agents based on Agent-Oriented Programming (AOP) (Shoham, 1990), and the techniques developed as part of the Knowledge Sharing Effort (Patil et al., 1992).
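The Figure 1 model, in which discovery agents mine local sources and their results are merged into a globally coherent theory, can be sketched very loosely as follows. Everything here is an illustrative assumption on our part, not the system described in the paper: the class names, the trivial stand-in learner, and the intersection-based merging strategy (the paper compares more sophisticated revision- and integration-based schemes).

```python
# Hypothetical sketch of the Figure 1 model: each discovery agent mines
# its local data source, then a supervisory agent merges the results.

class DiscoveryAgent:
    def __init__(self, name, local_data):
        self.name = name
        self.local_data = local_data  # list of attribute-value records

    def mine(self):
        # Stand-in for a real learner (e.g. an ILP algorithm run locally):
        # report each attribute-value pair that holds in every local record.
        common = set(self.local_data[0].items())
        for record in self.local_data[1:]:
            common &= set(record.items())
        return {f"{attr} = {val}" for attr, val in common}

class SupervisoryAgent:
    def integrate(self, reports):
        # One crude integration strategy: keep only the findings that
        # every discovery agent agrees on.
        merged = set.intersection(*reports)
        return sorted(merged)

a1 = DiscoveryAgent("agent1", [{"region": "eu", "churn": "no"},
                               {"region": "eu", "churn": "no"}])
a2 = DiscoveryAgent("agent2", [{"region": "eu", "churn": "no"},
                               {"region": "eu", "churn": "yes"}])
supervisor = SupervisoryAgent()
print(supervisor.integrate([a1.mine(), a2.mine()]))
```

In a real deployment the agents would exchange these reports as messages (e.g. via a Knowledge Sharing Effort-style protocol) rather than through direct method calls.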
In addition, we have already identified a number of recent ILP algorithms with which we plan to experiment. These include the information-gain based FOCL (Pazzani & Kibler, 1992), CLAUDIEN (De Raedt & Bruynooghe, 1993) and SIERES (Wirth & O'Rorke, 1992).

Extract: Page at Citeseer

Extract: Introduction

Introduction

Yoav Shoham has proposed a new programming paradigm (AOP) based on a societal view of computation. The key idea is to build computer systems as societies of agents, and the central features include: (i) agents are reactive, autonomous, concurrently executing computer processes; (ii) agents are cognitive systems, programmed in terms of beliefs, goals, and so on; (iii) agents are reasoning (internally-motivated) entities, specified in terms of logic; (iv) agents communicate via speech acts.

Since the presentation of AOP, agent-based computing has been hailed as "the new revolution in software" [6], because agent-based systems have advantages in dealing with openness, where components of the system are not known in advance, can change over time, and are highly heterogeneous (in that they are implemented by different people, at different times, using different problem-solving paradigms). In addition, agent-based systems offer a natural metaphor, can deal with problems such as distribution of data, control, expertise or resources, and can integrate legacy systems by adding an agent wrapper. However, the construction of large-scale embedded software systems demands the use of design methodologies and modeling techniques that support abstraction, inheritance, modularity, and other mechanisms for reducing complexity and preventing error. Unfortunately, so far there has been little such research in agent-oriented methodologies. If multi-agent systems are to become widely accepted as a basis for large-scale applications, adequate agent-oriented methodologies (AOM) will be essential.
Perhaps foremost amongst the methodologies that have been developed for the design, specification, and programming of conventional software systems are the various object-oriented approaches. They have achieved a considerable degree of maturity, and a large community of software developers familiar with their use now exists. At the same time, the OO design and development environment is well supported by diagram editors and visualization tools. But OO methodologies are not directly applicable to agent systems: agents are usually significantly more complex than typical objects, both in their internal structures and in the behaviors they exhibit.

In order to construct a complete methodology for AOP, one of the essential tasks is to develop a programming language, because the agent-oriented language and the implemented agent architectures determine the usefulness of AOP in applications. In this paper, we try to center upon the language problem by providing a computable programming language, SPLAW, for BDI agents. BDI agents are systems that are situated in a changing environment, receive continuous perceptual input, and take actions to affect their environment, all based on their internal mental state. Beliefs, desires, and intentions are the three primary attitudes, and they capture the informational, motivational, and decision components of an agent, respectively.

SPLAW is a programming language based on a restricted first-order logic. The behavior of an agent is dictated by the programs written in SPLAW; the beliefs, desires, and intentions of agents are not explicitly represented as modal formulas, but written in SPLAW. The current state of an agent, which is a model of itself, its environment, and other agents, can be viewed as its current belief state; states that an agent wants to bring about based on its external or internal stimuli can be viewed as desires; and the adoption of plans to satisfy such stimuli can be viewed as intentions.
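The belief/desire/intention cycle described above can be caricatured in a few lines. This is an illustrative sketch of a generic BDI execution step, not SPLAW's actual semantics; the plan representation and all names are our assumptions.

```python
# Minimal generic BDI step (illustrative only): beliefs are revised from
# percepts, desires whose plans have become applicable are adopted as
# intentions, and the intended plans are executed.

def bdi_step(beliefs, desires, plans, percept):
    beliefs = beliefs | percept                      # belief revision
    intentions = [d for d in desires                 # adopt applicable desires
                  if plans[d]["precondition"] <= beliefs]
    for goal in intentions:                          # execute adopted plans
        beliefs |= plans[goal]["effects"]
    return beliefs, intentions

plans = {
    "have_coffee": {"precondition": {"at_kitchen"},
                    "effects": {"have_coffee"}},
}
beliefs, intentions = bdi_step(
    beliefs=set(), desires=["have_coffee"], plans=plans,
    percept={"at_kitchen"})
print(intentions, sorted(beliefs))
```

Note how the mental attitudes appear only from the outside: the code just manipulates sets and plans, and it is the external reading, as the previous paragraph puts it, that views the state as beliefs, the goals as desires, and the adopted plans as intentions.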
We take a simple specification language as the execution model of an agent and then ascribe the mental attitudes of beliefs, desires, and intentions from an external viewpoint. In our opinion, this method is likely to have a better chance of unifying theory and practice.

in ACM SIGPLAN Notices 33(01), January 1998

Resources