Flavio S. Correa da Silva, Wamberto W. Vasconcelos, David Robertson, Virginia Brilhante, Ana C. V. de Melo, Marcelo Finger, Jaume Agusti. On the Insufficiency of Ontologies: Problems in Knowledge Sharing and
Alternative Solutions. Knowledge-Based Systems Journal, 15(3), pp 147-167, 2002
One of the benefits of formally represented knowledge lies in its
potential to be shared. Ontologies have been proposed as the ultimate
solution to problems in knowledge sharing. However, even when an agreed
correspondence between ontologies is reached, that is not the end of
the problems in knowledge sharing. In this paper we explore a number
of realistic knowledge-sharing situations and their related problems
for which ontologies fall short in providing a solution. For each
situation we propose and analyse alternative solutions.
Wamberto Vasconcelos, J. Sabater, C. Sierra, J. Querol. Skeleton-based Agent Development for Electronic Institutions. Proceedings of the 1st International Joint Conference on Autonomous Agents and
Multi-Agent Systems - AAMAS-2002. Bologna, Italy, July 2002
In this paper we describe an approach for semi-automatic agent development. We focus
on the scenario in which agents are designed to follow an electronic institution, a
formalism to specify open agent organisations. In our approach, an initial design
pattern is automatically extracted from a given electronic institution. This pattern
is then offered to programmers willing to develop agents to perform in the electronic
institution. Our approach supports developers when modifying the initial simple design
pattern into more sophisticated programs.
Virginia Brilhante, David Robertson. Metadata-supported Automated Ecological Modelling. In Claus Rautenstrauch and Susanne Patig (eds.), Environmental Information Systems in
Industry and Public Administration. Idea Group Publishing, Hershey, PA, USA, 2001.
Ecological models should be rooted in data derived from observation,
allowing methodical model construction and clear accounts of model
results with respect to the data. Unfortunately, many models are
retrospectively fitted to data because in practice it is difficult to
bridge the gap between concrete data and abstract models. Our research
is on automated methods to support bridging this gap. The proposed
approach consists of raising the data's level of abstraction via an
ecological metadata ontology and, from that, through logic-based
knowledge representation and inference, automatically generating
prototypical partial models to be further improved by the modeller.
In this chapter we aim to: 1) give an overview of current automated
modelling approaches applied to ecology, and relate them to our
metadata-based approach under investigation; and 2) explain and
demonstrate how it is realized using logic-based formalisms.
Wamberto Vasconcelos et al. A Lifecycle for Models of Large Multi-Agent Systems. Proceedings of the 2nd Int'l Workshop on Agent Oriented Software Engineering
(AOSE-2001). Montreal, Canada, May 29, 2001. Lecture Notes in Computer Science,
Vol. 2222. Springer-Verlag, Berlin, Germany
Two key issues in building multi-agent systems are their scalability and
the engineering of open systems. We offer solutions to these potential problems by
introducing a lifecycle for models of large multi-agent systems. Our proposal connects
a model for the collective analysis of agent systems with an individual-based model.
This approach leads on to a virtuous cycle in which individual behaviours can be
mapped on to global models and vice versa. We illustrate our approach with an
example that is formal but relatively easy for engineers to follow and adapt.
David Robertson, Jaume Agusti, F.C.S. Silva, Wamberto Vasconcelos, A.C.V. Melo. A Lightweight Capability Communication Mechanism. Proceedings of the 13th International Conference on Industrial and Engineering
Applications of Artificial Intelligence and Expert Systems, IEA/AIE 2000 - New
Orleans, Louisiana, USA, June 2000. Loganantharaj, Rasiah; Palm, Gunther; Ali, Moonis
(eds.). pages 660-670. Lecture Notes in Artificial Intelligence, Vol. 1821.
Springer-Verlag. Berlin, Germany. 2000
A persistent problem in managing the interaction between distributed agents is to be
able to coordinate the communication between systems without continually having to ask
each system for information about what it can do. One form of coordination is through
the use of capability descriptions that are advertised by each agent and managed by a
brokering mechanism. The task of the broker is to accept queries and to hypothesise
the means of obtaining answers based only on the capability descriptions. This has the
advantage that plans for coordinating answers can be constructed by the broker without
having to contact the agents. Brokering, however, is not straightforward because
capability descriptions can be complex and may be conditional on interactions with
other agents. Brokering must also take into account the possibility that the
ontologies used by each agent may differ, so some means of relating the terminology of
capabilities of agents is needed. Many sophisticated systems exist for tackling parts
of this problem but there have been comparatively few attempts to build lightweight
engineering solutions by adapting well established methods. We describe a simple way
of implementing a lightweight but powerful brokering mechanism.
Yannis Kalfoglou, T. Menzies, K-D. Althoff, E. Motta. Meta-Knowledge in Systems Design: Panacea... or Undelivered Promise. The Knowledge Engineering Review, vol 15 no 4, pp 381-404, 2000.
In this study we present a review of the emerging field of
meta-knowledge components as practised over the past decade among a
variety of practitioners. We use the artificially defined term
'meta-knowledge' to encompass all those different but overlapping
notions used by the Artificial Intelligence and Software Engineering
communities to represent reusable modelling frameworks: ontologies,
problem-solving methods, experience factories and experience bases,
patterns, to name a few. We then elaborate on how meta-knowledge is
deployed in the context of systems design to improve reliability
through consistency checking, enhance reuse potential, and manage
knowledge sharing. We speculate on its usefulness and explore
technologies for supporting deployment of meta-knowledge. We argue that,
despite the different approaches being followed in systems
design by divergent communities, meta-knowledge is present in all cases,
in a tacit or explicit form, and its utilisation depends on pragmatic
aspects which we try to identify and critically review against
criteria of effectiveness.
Yun-Heh Chen-Burger, David Robertson, Jussi Stader. Formal Support for an Informal Business Modeling Method. The International Journal of Software Engineering and Knowledge
Engineering, IJSEKE February, 2000. World Scientific Publishing Company.
Also appears as a research report in School of AI, Informatics Division,
University of Edinburgh.
Wamberto Vasconcelos, R. Schwitter, D. Molla, João Cavalcanti. Implementing Prolog-Run WWW Sites. In the Proceedings of the 13th International Conference on Applications of
Prolog (INAP'2000), Waseda University, Tokyo, Japan, 20-22 October 2000.
Also available as: Technical Report ifi-2000.04, June, 2000, Department of
Information Technology, University of Zürich, Zürich, Switzerland.
We describe a modular and customisable architecture for a WWW
server run by Prolog programs and show how each of its components can be
implemented. Our proposal employs standard Prolog-CGI technology but, to
improve efficiency, we also use client-server modules to perform the
actual services of the site.
Daniela Carbogim, Renata Wassermann. Full Acceptance via Argumentation. In Proceedings of the Discussion Track of The International Joint
Conference SBIA/IBERAMIA'2000 (15th Brazilian Symposium on Artificial
Intelligence and 7th Ibero-American Conference on Artificial
Intelligence), Atibaia, Brazil, 19-22 November 2000.
Any rational agent must have a strategy for deciding whether a piece of
uncertain information is acceptable or not. In this paper we argue for
the use of argumentation theory to solve this problem. The decision
process is seen as an internal argumentation, whereby the agent weighs
evidence for and against some piece of information. We work within a
framework for belief revision which takes into account the agent's
limited resources, and apply a formal framework
for argumentation to the problem of fully accepting information.
Yannis Kalfoglou. On the Convergence of Core Technologies for Knowledge Management and
Organisational Memories: Ontologies and Experience Factories. in Proceedings of the ECAI2000 Workshop on Knowledge Management and
Organisational Memories (W11), Berlin, Germany, 21-22 August 2000
In this paper we argue for the convergence of core technologies
for knowledge management and organisational memories. Most of the
work reported in the literature regards knowledge management and
organisational memories as intertwined areas. However, the
technologies used to implement and support them are not treated in
the same fashion. Usually, they are conceived, developed, and
deployed separately. This prevents us from fully exploiting their
strengths. We identify two such technologies in this paper:
ontologies and experience factories, originating in these different
communities. We elaborate on their strengths as potential core
technologies and show, through an example case, how their
convergence could be of mutual benefit. We generalise the approach
and speculate on the impact of such convergence in the broader
context of knowledge management and organisational memories.
Tim Menzies, K-D. Althoff, Yannis Kalfoglou, E. Motta. Issues with Meta-Knowledge. The International Journal of Software Engineering and
Knowledge Engineering, 10(4), pp. 549-555, August 2000 (to appear
in a special issue for SEKE99, a draft version is available).
At the SEKE99 conference, knowledge engineering researchers
held a panel on the merits of meta-knowledge (i.e., problem-solving
methods and ontologies) for the development of knowledge-based
systems. The original panel was framed as a debate on the merits of
meta-knowledge for knowledge maintenance. However, the debate quickly
expanded. In the end, we were really discussing the merits of different
technologies for the specification of reusable components for KBS.
In this brief article we record some of the lively debate from that
panel and the email exchanges it generated.
Yannis Kalfoglou, David Robertson. Applying Experienceware to Support Ontology Deployment. Proceedings of the 12th International Conference on
Software Engineering and Knowledge Engineering (SEKE2000),
Chicago, IL, USA, July 2000.
Experienceware is a paradigm which emerged in the late eighties
and evolved during the nineties, resulting in technologies such as
experience factories and their constituent experience bases. These
are designed to manage experiences collected throughout the
life-cycle of a software project. Ontologies emerged at around the
same time as a way to represent consensual knowledge about a domain
of interest in reusable and sharable formats. Despite their diverse
origins and ways of development, there is an overlap of scope
regarding one of their goals: to support reuse. In this paper we
make use of this overlap by applying the experience factories
paradigm to ontology deployment and in particular, to support
ontology verification.
João Cavalcanti, David Robertson. Synthesis of Web Sites from High Level Descriptions. 3rd Workshop on Web Engineering, May 2000, Amsterdam, The
Netherlands.
As the use of Web sites has exploded, large amounts of
effort have gone into the deployment of sites but little thought has
been given to methods for their design and maintenance. This paper
reports some encouraging results on the use of automated synthesis,
using domain-specific formal representations, to make design more
methodical and maintenance less time consuming.
Daniela Carbogim, David Robertson, John Lee. Argument-based Applications to Knowledge Engineering. The Knowledge Engineering Review vol 15 no 2, pp 119-149, 2000.
Argumentation is concerned with reasoning in the presence of imperfect
information by constructing and weighing up arguments. It is an
approach for inconsistency management in which conflict is explored
rather than eradicated. This form of reasoning has proved applicable
to many problems in knowledge engineering that involve uncertain,
incomplete or inconsistent knowledge. This paper concentrates on
different issues that can be tackled by automated argumentation
systems and highlights important directions in argument-oriented
research in knowledge engineering.
Daniela Carbogim, David Robertson. Contract-based Negotiation via Argumentation. In Proceedings of the Workshop on Multi-Agent Systems in Logic
Programming (MAS-99) at the 16th International Conference on Logic
Programming (ICLP-99), November 1999.
Negotiation is often described as the process by which
agents come to a mutually acceptable agreement about some
subject. This definition is quite broad, as this process can be
viewed from many different angles. Here we consider negotiation
from the perspective of contracts. We present a general model of
contract-based negotiation and propose a logic programming-based
argumentation framework to capture and formalise this model.
Renaud Lecoeuche, David Robertson, Catherine Barry. Using Focus Rules in Requirements Elicitation Dialogues. In Proceedings of the International Joint Conference on Artificial
Intelligence 1999 (IJCAI-99), August 1999.
Requirements engineering is a complex task which benefits from
computer support. Despite the progress made in automatic reasoning on
requirements, the tools supporting requirements elicitation remain
difficult to use. In this paper we propose a novel approach where a
tool's reasoning is intimately linked to the dialogue it has with its
users. Because the dialogue is guided by rules ensuring coherence,
the interaction with the tool is more natural. We discuss in detail
the rules we use to organise the dialogue and how we apply them to the
requirements elicitation tool. We present an evaluation of this
approach demonstrating improvements in usability during the
elicitation process.
Yannis Kalfoglou, David Robertson. Managing Ontological Constraints. In Proceedings of the IJCAI-99 workshop on Ontologies and
Problem-Solving Methods(KRR5), Stockholm, Sweden, August 1999.
Available as: University of Edinburgh, Dept. of AI, Research Paper #948.
We explore the use of ontological constraints in a new way:
deploying them in a software system's formal evaluation. We present
a formalism for ontological constraints and elaborate on a
meta-interpretation technique in the field of ontologies. Ontological
constraints often need enhancements to capture application-specific
discrepancies. We propose an editing system that provides guidance
in building those constraints and we explain how this helps us to
detect conceptual errors that reflect a misuse of ontological
constructs. We describe a multilayer architecture for performing
such checks and we demonstrate its usage via an example case. We
speculate on the potential impact of the approach for the system's
design process.
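The meta-interpretation technique this abstract refers to can be sketched in Prolog, the language used elsewhere in this work. The fragment below is an illustrative sketch only, not the paper's actual system: a vanilla meta-interpreter that proves a goal clause by clause and checks each derived fact against constraints; the predicates `violated/2`, `wingspan/2` and `aircraft/1` are hypothetical names invented for the example.

```prolog
% Illustrative sketch (not the paper's code): a vanilla Prolog
% meta-interpreter that solves a goal while collecting violations
% of ontological constraints.

% solve(+Goal, -Errors): prove Goal, accumulating constraint violations.
solve(true, []) :- !.
solve((A, B), Errors) :- !,
    solve(A, E1),
    solve(B, E2),
    append(E1, E2, Errors).
solve(Goal, Errors) :-
    clause(Goal, Body),
    solve(Body, BodyErrors),
    check_constraints(Goal, GoalErrors),
    append(BodyErrors, GoalErrors, Errors).

% check_constraints(+Fact, -Errors): a constraint violated(Fact, Why)
% signals a conceptual error in the use of Fact.
check_constraints(Fact, [error(Fact, Why)]) :-
    violated(Fact, Why), !.
check_constraints(_, []).

% Hypothetical AIRCRAFT-style constraint: only things known to be
% aircraft may be assigned a wingspan.
violated(wingspan(X, _), not_an_aircraft(X)) :-
    \+ aircraft(X).
```

A query such as `solve(goal, Errors)` then succeeds whenever the underlying program does, but returns in `Errors` every step of the proof that conflicts with an ontological constraint, which matches the idea of layering constraint checking over normal execution.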
Yannis Kalfoglou. The Role of Formal Ontologies. In Proceedings of the 11th International Conference on Software
Engineering and Knowledge Engineering (SEKE99), Kaiserslautern, Germany,
June 1999. Position paper presented at the Panel: "Meta-Knowledge: Does
it Confuse or Complicate Knowledge Maintenance?". Also available as:
University of Edinburgh, Dept. of AI, Research Paper #952.
Yannis Kalfoglou, David Robertson. A Case Study in Applying Ontologies to Augment and Reason about the
Correctness of Specifications. In Proceedings of the 11th International Conference on Software
Engineering and Knowledge Engineering (SEKE99), Kaiserslautern, Germany,
June 1999. A longer version of this paper is available as: University of
Edinburgh, Dept. of AI, Research Paper #927.
In this paper we investigate how software specifications can
benefit from the presence of formal ontologies to augment and enrich
their context. This makes it possible to verify the correctness of the
specification with respect to formally represented domain knowledge.
We present a
meta-interpretation technique that allows us to perform checks for
conceptual error occurrences in specifications. We illustrate this
approach through an experiment: we augmented an existing formal
specification presented by Luqi & Cooke with a formal ontology
produced by the Information Sciences Institute at USC, the AIRCRAFT
ontology. In addition, we explore how we can build and use
application specific ontological constraints to detect conceptual
errors in specifications.
Yannis Kalfoglou, David Robertson. Use of Formal Ontologies to Support Error Checking in Specifications. In Proceedings of the 11th European Workshop on Knowledge Acquisition,
Modelling and Management (EKAW99), Dagstuhl, Germany, May 1999. Also
available as: University of Edinburgh, Dept. of AI, Research Paper #935.
This paper explores the possibility of using formal ontologies to
support detection of conceptual errors in specifications. We define a
conceptual error as a misunderstanding of the application domain
knowledge which results in undesirable behaviour of the software
system. We explain how to use formal ontologies, and in particular
ontological constraints, to tackle this problem. We present a flexible
architecture based on meta-interpretation in logic programming in
which the specification is viewed as a multilayer design. We
illustrate the significance of this approach for the software and
ontology engineering community via example cases in two domains:
ecological modelling and process modelling.
David Robertson. Can Formal Argumentation Raise our Confidence in Safe Design? Towards System Safety: Proceedings of the Seventh Safety-Critical Systems Symposium,
Huntingdon, UK.
It is technically possible to build systems of formal argumentation
which help assemble evidence relating system designs to contextual
material, such as safety guidelines. However, a number of assumptions
underlie this choice of architecture and influence its ability to
support safer design. Many of these assumptions are not purely
technical and apply regardless of the choice of formal
representation. Using as an example a prototype argumentation system
from a safety-related domain, a set of assumptions is identified and
generalised to this class of system.
David Robertson, Jaume Agusti. Software Blueprints: Lightweight Uses of Logic in Conceptual Modelling. book published by Addison Wesley / ACM Press.
Conceptual models are descriptions of our ideas about a problem, used to
shape the implementation of a solution to it. Everyone who builds
complex information systems uses such models - be they requirements
analysts, knowledge modellers or software designers - but
understanding of the pragmatics of model design tends to be informal
and parochial. Lightweight uses of logic can add precision without
destroying the intuitions we use to interpret our descriptions.
Computing with logic allows us to make use of this precision in
providing automated support tools. Modern information scientists need
to know what these methods are for and may need to build their own.
This book gives you a place to begin.
Where do you start when building models in a precise language like
logic? One way is by following standard paradigms for design and
adapting these to your needs. Some of these come from an analysis of
existing informal notations. Others are from within logic itself. We
take you through a sample of these, from more commonplace styles of
formal modelling to non-standard methods such as techniques editing
and argumentation. Each of these provides a window onto broader
areas of applied logic and gives you a basis for adapting the
method to your own needs.
Jaume Agusti, Jordi Puigsegur, David Robertson. A Visual Syntax for Logic and Logic Programming. Journal of Visual Languages and Computing vol 9, pp 399-427.
It is commonly accepted that non-logicians have difficulty in
expressing themselves in first order logic. Part of the
visual language community is concerned with providing
visual notations which use visual cues ("declarative
diagrams") to make the structuring of logical expressions
more intuitive. One of the more successful metaphors used
in such diagrammatic languages is that of set inclusion,
making use of the graphical intuitions that are taught to
us at school. Existing declarative diagrammatic
languages do not make full use of such set-based
intuitions. We present a more uniform use of sets which
allow simple but highly expressive diagrams to be
constructed from a small number of primitive components.
These diagrams, we claim, provide a good alternative
notation for computational logic and, as we show in this
paper, are the basis of a visual logic programming
language. The first implementation of this language and a
heterogeneous logic programming environment are also
presented in this paper.
Steve Polyak. A Common Process Methodology for Engineering Process Domains. Accepted at the Systems Engineering for Business Process Change
(SEBPC) workshop, SMBPI: Systems Modelling for Business Process
Improvement (http://www.infc.ulst.ac.uk/informatics/events/smbpi.html),
University of Ulster, March 1999, Available as: Department of Artificial
Intelligence, Report Paper 931, University of Edinburgh, Scotland, 1998.
It will be part of a book publication by Artech House towards the end of
1999.
Process engineering involves a search for new models of organising
work. This synthesis task can become quite difficult and
time-consuming as the amount of detail required and the interactions
between activities increase. Domain-independent AI planning offers
some promising techniques and representations to assist in this
effort. One of the major impediments to transferring this technology
to applied, real-world settings is the difficulty encountered in
building the domain model which is used in the automated generation
of these plans. Competence, as well as good tools, is necessary to
carry out this task. A plan domain methodology should be available
which provides structured organisational development activities.
Users need to know what tasks they have to perform: for each step,
information must be available about what input is needed, what
output is required, what is to be done, and how it can be done
well. This paper presents the Common Process Methodology (CPM)
which aims at providing this support for engineering process
domains.
Renaud Lecoeuche, Chris Mellish, Catherine Barry, David Robertson. User-System Dialogues and the Notion of Focus. Knowledge Engineering Review 13(4).
In recent years, the capabilities of knowledge-based systems to
communicate with their users have evolved from simple interactions to
complex dialogues. With this evolution comes a need to understand what
makes a good dialogue. In this paper, we are concerned with dialogue
coherence. We review the notion of focus, which partly explains this
property, and its use for user-system communication. First, we examine
the major theories dealing with this notion. We describe what their
contribution is and how they differ. Then, we illustrate the benefits
of using the notion of focus and especially the improvement in text
coherence. We pay particular attention to how the notion can
concretely be implemented. Its integration with other techniques and
theories is described. We conclude the paper by pointing out
remaining issues in the understanding of the notion of focus. The
contribution of this paper is to provide a classification of the
theories of focus and to show the improvements they offer in elaborate
user-system dialogues.
Stefan Daume. A Knowledge-based System to Model Thinnings in Central European Forests. MSc Thesis, 1998.
The introduction of new objectives in forest management in recent
years has led to an increased demand for forest management tools to
develop and evaluate silvicultural scenarios. An essential part of
these are thinning models which predict the outcome of thinnings,
regarded as one of the most influential silvicultural techniques. This
project addresses problems observed in existing models and
descriptions of thinnings in general, namely their reliability and
operationality, and presents an alternative approach.
The first part of this work proposes a rule-based representation of
thinnings, which models, at a more detailed level than hitherto, the
decision procedures employed by a forester carrying out a
thinning. The model is implemented in Prolog and developed on the
basis of actual data of a thinning carried out by a forester in a
beech-spruce forest in Germany.
The second part presents a case-based extension to the rule-based
approach, in an attempt to cope with the heterogeneity of forests. Its
main part is a meta-interpreter written in Prolog that allows the
adaptation of a generic set of thinning rules from the first approach
according to a known thinning outcome in a tree group.
David Robertson. Pitfalls of Formality in Early System Design. Proceedings of the ARO/NSF Monterey Workshop on Increasing the Practical Impact of Formal Methods for Computer-Aided Software Development.
One of the advantages of using formal methods in design should be that
we can be precise about where our methods fail. However, it is rare to
find discussions in the literature of problems in applying formal methods -
particularly in the early stages of design. One reason for this is that
failures are often caused by the context in which a method is
applied, rather than by some purely technical limitation. Using examples
from research in which I have been involved I shall describe some of the
pitfalls I have encountered and which I have observed frequently in
the research of others.
David Robertson, Jaume Agusti. Pragmatics in the Synthesis of Logic Programs. Logic-Based Program Synthesis and Transformation: 8th International Workshop, Manchester, UK.
Many of the systems which we, and those who have worked with us, have
built were intended to make it easier for people with particular
backgrounds to construct and understand logic programs.
A major issue when designing this sort of system is pragmatics:
from the many logically equivalent ways of describing a
program we must identify styles of description which make particular
tasks easier to support. The first half of this paper describes
three ways in which we
have attempted to understand the pragmatics of particular domains
using well known methods from computational logic. These are: design
using parameterisable components; synthesis by incremental addition of
program slices; and meta-interpretation. These are helpful in structuring
designs but do not necessarily provide guidance in design lifecycles -
where less detailed designs are used to guide the description of more
detailed designs. The second half of this paper summarises an example
of this form of guidance.
Flavio Correa da Silva, Wamberto Vasconcelos, David Robertson. Cooperation Between Knowledge Based Systems. Proceedings of the Fourth World Congress on Expert Systems.
For as long as there has been interest in knowledge based systems
there has been interest in sharing formally expressed
knowledge. It is traditional for this to require a high
degree of social interaction between the suppliers and
recipients of such information but the Internet has
brought with it an interest in more opportunistic,
semi-automatic cooperation between systems. This raises a
variety of technical problems relevant to knowledge-based systems
design and implementation. We discuss these, concentrating on the
use of protocols and interlinguas.
Steve Polyak, J. Lee, M. Gruninger, C. Menzel. Applying the Process Interchange Format (PIF) to a Supply
Chain Process Interoperability Scenario. 13th European Conference on Artificial Intelligence ECAI'98
Workshop on Applications of Ontologies and Problem-Solving Methods,
Brighton, UK, August, 1998.
The goal of the PIF Project is to develop an interchange format to
help automatically exchange process descriptions among a wide
variety of business process modelling and support systems such as
workflow software, flow charting tools, process simulation systems,
and process repositories. As an example of such an exchange, a
demonstration scenario has been created which describes the use of
PIF in the modelling and simulation of an integrated supply chain
where different companies co-operate through a global supply chain
management procedure to deliver commercial electronic goods. This
scenario coordinates the exchange of process knowledge between a
business process modelling tool/library (Massachusetts Institute of
Technology's (MIT) Process Handbook) and a process simulation
package (Knowledge Based System Inc.'s (KBSI) ProSim) with PIF
acting as the interlingua.
Jessica Chen-Burger, David Robertson. Formal Support for an Informal Business Modelling Method. 10th International Conference on Software Engineering and
Knowledge Engineering (SEKE'98), San Francisco, USA.
Business modelling methods are popular but, since they operate primarily in
the early stages of software lifecycles, most are informal. This paper
describes how we have used a conventional formal notation (first order
predicate logic) in combination with automated support tools to replicate
the key components of an established, informal, business modelling method:
IBM's Business System Development Method (BSDM). We describe the knowledge
which we represent formally at each stage in the method and explain how the
move from informal to formal representation allows us to provide guidance and
consistency checking during the lifecycle of the model. It also allows us to
extend the original method to a model execution phase which is not described
in the original informal method. Although our formal modelling system has
not been extensively tested, it relies for much of its interaction on
diagrammatic notations which already appear in BSDM and are therefore already
tested in practice. The role of the formal notation in this case is not to
provide a formal semantics for BSDM but to provide a framework for sharing
the information supplied at different modelling stages, which we can
supplement with simple forms of automated analysis.
Steve Polyak. Applying Design Space Analysis to Planning. AIPS-98 workshop on Knowledge Engineering and Acquisition
for Planning: Bridging Theory and Practice, AAAI Technical Report WS-98-03,
Carnegie-Mellon University, June, 1998.
This paper describes a design space analysis approach towards a
"complete" planning solution. A complete solution is defined as
one containing the resultant plan, the context in which it applies,
and the argument structure that justifies it. The focus in this
paper is on defining and communicating the argument structure
component. A perspective of a plan as a specialised type of design
and planning as a specialised form of design activity is used. In
doing so, research is drawn upon from the design rationale community
for generating an explanation of a designed artifact. In particular,
the method of generating a design space which represents the
location of the plan within the space of possible plan elaborations
is adopted. An initial implementation, Nonlin+DR, is described and
its potential benefits to stand-alone and mixed-initiative planning
are discussed.
Austin Tate, Steve Polyak, Peter Jarvis. TF Method: An Initial Framework for Modelling and Analysing Planning Domains. AIPS-98 workshop on Knowledge Engineering and Acquisition
for Planning: Bridging Theory and Practice, AAAI Technical Report WS-98-03,
Carnegie-Mellon University, June, 1998.
Early work on the NONLIN and O-Plan projects indicated a need for a
defined methodology which would guide users performing various roles
in the acquisition and analysis of domain requirements for planning.
This work included links to a requirement analysis methodology, CORE
(COntrolled Requirements Expression), tool support via an
intelligent assistant as part of the Task Formalism (TF) Workstation
and an initial collection of guidelines and checklists to aid in
using the TF domain description language. This paper describes work
underway to follow on from this past research and to infuse it with
knowledge gained from recent research related to planning domain
development, knowledge modelling, design rationale and ontological
and requirements engineering.
Yannis Kalfoglou, David Robertson. Error Checking in Process Interchange Format (PIF) Ontology. Departmental Research Paper 887.
It is widely accepted that conceptual errors in the early stages
of a software process are often the most pernicious. This is because
they reflect a misunderstanding of the domain of application, hence
we can have a conceptual error in a description which "looks good"
from an abstract mathematical point of view. A section of the
Knowledge Based Systems community is now beginning to forge
agreements on the use of terminology for particular domains. Amongst
the more formal of these is the Process Interchange Format. The
existence of these formal agreements allows limited forms of
checking for conceptual errors as a supplement to the normal testing
process. This paper describes a method for performing such checks.
Renaud Lecoeuche, Chris Mellish, David Robertson. A Framework for Requirements Elicitation Through Mixed-Initiative Dialogue. 3rd IEEE International Conference on Requirements Engineering,
Colorado Springs, USA.
In this paper we present our work on requirements elicitation. The
elicitation process is a complex task which necessitates computer
support. Elicitation systems should ideally help their users check the
correctness of the specifications obtained but also actively guide
them in the acquisition of the requirements. We consider hereafter
systems that communicate in natural language. We describe a framework
that tries to improve the quality of the guidance it provides to its
users by taking into account natural language constraints. We discuss
the need for a theory of natural language dialogue structure, and we
show how we have integrated such a theory within an early prototype of
an elicitation system.
Paul Krause, Jane Hesketh, David Robertson. Reliable and Accountable System Design. Knowledge Engineering Review vol 12(3), pp 289-305
Few would disagree with the assertion that safe engineering starts
from the early stages of system design and should be
maintained throughout the lifecycle. Different
engineering domains have developed, mostly informal,
frameworks with which they hope to promote this attitude.
An interesting question for the KBS community is whether some of our
methods for knowledge representation and reasoning can be
used to assist understanding, representing and
interpreting such frameworks. This paper concentrates on
what is (arguably) the area of greatest concern: relating
system requirements to high-level design. We highlight
what appear to be the major difficulties which face us in
this area, using examples from systems which have been
built to tackle them.
Steve Polyak, Austin Tate. Rationale in Planning: Causality, Dependencies, and Decisions. Knowledge Engineering Review, 13(2), June, pp. 1-16, 1998.
Available as: O-Plan Technical Paper 41, Artificial Intelligence
Applications Institute, University of Edinburgh, Scotland, 1998.
Traditional approaches to plan representation focused on the
generation of a sequence of actions and orderings. Knowledge rich
models, which incorporate plan rationale, provide benefits to the
planning process in a number of ways. The use of rationale in planning
is reviewed in terms of causality, dependencies, and decisions. Each
dimension addresses practical issues in the planning process and
adds value to the resultant plan. The contribution of this paper is
to explore this categorisation and to motivate the need to
explicitly record and represent rationale knowledge in situated,
mixed-initiative planning systems.
Rhys Power, Steve Reynolds, John Kingston, Ian Harrison, Ann Macintosh, Jonathan Tonberg. Expert Provisioner: A Range Management Aid. Applications and Innovations in Expert Systems V, Proceedings
of Expert Systems'97, British Computer Society Specialist Group on
Expert Systems, SGES Publications 1997. Available as AIAI Technical Report -
AIAI-TR-216.
Expert Provisioner is a knowledge-based provisioning system prototyped
for use by the RAF Logistics Command to support their Range Managers in
the procurement of consumable parts. Spares provisioning is one of the most
fundamental and difficult logistics processes. Any item of equipment will
need to have some of its component parts replaced at some time during its
operational life. Re-provisioning is the art of ensuring that spare parts
are available when required without tying up much-needed capital in excessive
inventory holdings. To conduct re-provisioning properly requires a great deal
of specific knowledge about item characteristics and customer requirements,
coupled with a high level of expertise in re-provisioning procedures.
The starting point for Expert Provisioner is an electronic purchase order form
and its end point is a recommendation of whether to buy the item or not, its
cost and due delivery date. Purchase recommendations are made based on many
factors including forecast demand, unit costs, shelf life and existing stock
levels. The system removes much of the mundane work in order processing as
well as potential for misinterpretation of information. The system is
designed such that the user remains in control throughout the consultation
and can, if desired, override decisions. Expert Provisioner was implemented
using the NASA CLIPS development tool for the inference engine and knowledge
base. The system was developed and delivered under Windows 3.1 through the
use of AIAI's multi-platform wxCLIPS tool for the user interface.
Jane Hesketh, David Robertson, Norbert Fuchs, Alan Bundy. Automating Reasoning Support for Design. Departmental Research Paper 823 (a reconstructed version of this
paper appears in the Journal of Automated Software Engineering 5(2), 1998).
Formalised design supported by automated reasoning can assist in the
management of requirements - a particular problem for large, detailed
systems. Designers developing initial requirements into more detail
and then producing a system specification must show not only that all
the requirements have been met but also demonstrate how that has been
achieved. This is especially important in safety-critical systems where
sections of the requirements will be regulations or guidelines. Using
real life examples from emergency shutdown systems for drilling rigs,
we show how lightweight (and therefore less time-consuming) formalisation
supports validation in an engineering approach to requirements management.
We have developed a requirements assistant - an interactive system for
formalising and managing information about requirements including
guideline requirements. As a design proceeds, relevant requirements
are found automatically and checked before being notified to the
designer with an accompanying explanation of whether or not they
are currently satisfied. Progress in satisfying requirements is
monitored automatically and contributing choices are recorded.
Such evidence of adherence to guidelines is an assurance of the
validity of the design. During any subsequent system modification,
reference to this evidence can aid designers by drawing attention
to the implications changes will have on maintaining guideline
satisfaction.
David Robertson. Distributed Specification. Proceedings of the 12th European Conference on Artificial Intelligence
Most existing work on formal specification is focussed on a particular
method or specification language, considered in isolation. In practice,
few non-trivial specifications are produced by a single person, or by a
group of persons with a common view of the world. It is far more common
for a variety of views of a problem to coexist, each with different
forms of communication. It has proved difficult to assemble such
heterogeneous specifications - leading to breakdowns in communication
and consequent failures of systems. As more of this communication
is conducted remotely by electronic means the need to
support distributed specification in a controlled way is increased.
This paper presents one way of tackling this problem,
based on a set of tools for describing specifications at a variety of
stages in their development. By constraining the interfaces
between tools, we aim to provide a more structured system for
collaborative specification.
Norbert Fuchs, David Robertson. Declarative Specification. Knowledge Engineering Review (special issue on Logic Engineering),
vol 11(4), pp 317-331
Deriving formal specifications from informal requirements is
extremely difficult since one has to overcome the
conceptual gap between an application domain and the
domain of formal specification methods. To reduce this
gap we introduce application-specific specification
languages, i.e. graphical and textual notations that can
be unambiguously mapped to formal specifications in a
logic language. We describe a number of realised
approaches based on this idea and evaluate them with
respect to their domain specificity vs. generality.
Virginia Brilhante. Inform-Logic: a System for Representing Uncertainty in Ecological Models. MSc Thesis, 1996.
The construction of ecological models is known to be an activity in which
conventional modelling and simulation techniques are applied to a poorly
understood problem. The usual outcomes are simulation models populated
with uncertainty, giving numerical results without any statement of this
uncertainty behind them. What can be done about this? It is very unlikely
that in the near future we will be able to fully understand the complexities of
an ecosystem. The alternative that we investigate in this work is to make
uncertainty explicit to the modeller and/or user, having recognised that the
complexity of the real world, the poor understanding we have of it, and the
nature of modelling itself, lead to the inevitable presence of uncertainty in
ecological models. In INFORM-logic, a system dynamics ecological
model is taken as a sample and described in a logical representation.
Sources of uncertainty are declaratively represented, propagated during
simulation, and combined. The combined sources of uncertainty are then
presented to the user, giving support for reasoning about the existing
uncertainty in the ecological model.
David Robertson. An Empirical Study of the LSS Specification Toolkit in Use.
The LSS (Lightweight Specification System) toolkit assists in the
development of logic programs, using a variety of high level specification
methods. Many other high level systems impose a single, uniform view of
how specification should proceed. In practice, there is normally no
single understanding of how to describe specifications - there are
instead a variety of different forms of description which have evolved
from work practices of various domains. Any attempt to disturb these
work practices in a radical way will, naturally, meet with resistance
unless those who must be educated in new methods can see clearly that they
will benefit (soon) from their efforts. LSS addresses this problem by
providing a collection of comparatively simple independent tools, each
of which is directed at a particular community of users who might
reasonably be expected to adjust to the tool without excessive effort.
In this sense, LSS is lightweight - it is intended to be easy to pick up.
Communication between LSS tools is achieved by using Horn Clause logic as
a standard language, although users of some of the tools are buffered from
the logical details by interfaces targeted at the appropriate group of users.
This allows the products of specification from some of the tools to be used
as the basis for more detailed specification (perhaps by other people) using
other tools. This paper summarises the current LSS system and describes the
results of an experiment in applying it to a substantial software engineering
task: the specification of one of its own tools.
David Robertson. Domain Specific Problem Description. 8th International Conference on Software Engineering and Knowledge
Engineering, Nevada, USA.
Much of software engineering and knowledge engineering has concentrated
on generic languages and methods which are supposed to be transferable
between domains. By contrast, engineers working in real domains usually
employ domain-specific methods and terminology which have evolved from
their experience in getting the job done. This paper argues that we have
paid too little attention to instantiating generic methods to the demands
of specific engineering problems. We advocate narrow but deep studies of
carefully chosen domains, with the aim of harnessing domain-specific problem
descriptions to guide the construction of software specifications.
Jessica Chen-Burger, David Robertson, John Fraser. KBST: A Support Tool for Business Modelling in BSDM. British Computer Society Expert Systems - 95.
This paper describes a knowledge-based support tool
for business modelling with IBM's Business System
Development Method (BSDM).
The tool, KBST, is designed to support the activities
and capture the results of BSDM workshops, where senior
business managers together with a BSDM facilitator
develop business models. In this paper, we show how
case-based reasoning techniques can be used to build
domain specific knowledge into such a tool and how this
can provide guidance in building appropriate business models.
It is also shown that the application development package
HARDY, a hypertext diagramming tool, provides a good platform
for a BSDM support tool.
Peter Funk, David Robertson. Capturing and Matching Dynamic Behaviour in Case-Based Reasoning. First United Kingdom Case-Based Reasoning Workshop, University
of Salford, England.
In the telecommunications domain, reuse of service specifications is a
major issue. However, it has proved difficult to modularise services
because of the high degree of interaction within them. Direct
application of formal logics for the specification of services has
proved impractical because of the size of services. However, much of
this complexity stems from the details of implementation of services; by
contrast, the principal behaviours of a service are often approximated
by simple varieties of logic which are easily accessible to users. We
address the problem of determining, from a library of services, those
which might be appropriate for reuse in constructing a new service.
Simple behavioural sequences are used to provide features within a CBR
system which matches these to behavioural examples supplied by users. By
side-stepping the problem of formally specifying the entire service, we
aim to promote greater reuse of services while avoiding a commitment to
full logical specification.
Non-mathematicians often have difficulties expressing formal
requirements. By using a CBR approach the user can sketch out simple,
familiar behaviours and with these examples the system is able to
retrieve relevant cases and interactively produce a formal requirement
sketch capturing the new required behaviour. A case in the case library
encapsulates a particular formalised behaviour in a simple logic,
sufficient to capture the dynamic behaviour of the domain. With a
simulator the user can evaluate the behaviour without being confronted
with the formal representation itself. Our domain is telephone features
such as call waiting, redirect call, call back. These telephone services
are stored in the case library as cases, each consisting of a set of
state-transition rules. In previous papers we have described the general
architecture of the system (see [Funk, Robertson 1994]). In this paper
we focus on matching dynamic behaviour and the formal representation of
the cases.
Edjard Mota, David Robertson, Alan Smaill. NatureTime - Temporal Granularity in Simulation of Ecosystems. Early version of paper appearing in Journal of Symbolic
Computation 22(5).
Granularity of time is an important issue for the understanding
of how actions performed at coarse levels of time interact with
others, working at finer levels. However, it has not received much
attention from most AI work on temporal logic. In simpler domains
of application we may not need to consider it a problem but it
becomes important in more complex domains, such as ecological
modelling. In this domain, aggregation of processes working at
different time granularities (and sometimes cyclically) is very
difficult to achieve reliably. We have proposed a new time
granularity theory based on modular temporal classes, and
have developed a temporal reasoning system to specify cyclical
processes of simulation models in ecology at many levels of time.
Edjard Mota, David Robertson. Representing Interaction of Agents at Different Time Granularities. 3rd International Workshop on Temporal Representation and
Reasoning, Key West, Florida.
In this paper we describe NatureTime logic which we use to
represent and reason about the behaviour of interacting agents (in
an ecological domain), which behave at different time granularities.
Although the traditional application fields of temporal
representation and reasoning still raise many interesting theoretical
issues, we have been investigating some practical problems of
ecological systems which suit representations of time different
from those embodied in traditional simulation models of ecosystems.
These seem well suited to reconstruction using temporal logic
programs.
Edjard Mota, Mandy Haggith, Alan Smaill, David Robertson. Time Granularity in Simulation Models of Ecological Systems. IJCAI-95 Workshop on Executable Temporal Logics.
Granularity of time is an important
issue for the understanding of how actions performed at coarse levels of
time interact with others, working at finer levels. However, it has not
received much attention from most AI work on temporal logic. In
traditional domains of application (e.g. databases, planning, natural
language, etc), we may not need to consider it as a problem, but it
becomes important in more complex domains, such as ecological modelling.
In this domain, aggregation of processes working at different time
granularities is very difficult to do reliably. We have proposed a new
time granularity theory based on modular temporal classes,
and have developed a temporal reasoning system to reason about seasonal
cycles. This time theory may be a suitable framework for an executable
temporal logic for the specification of ecological models, where each
ecological entity is an active agent.
Nam Seog Park, David Robertson, Keith Stenning. An extension of the temporal synchrony approach to dynamic variable
binding in a connectionist inference system.
The relationship between symbolism and connectionism has been one of
the major issues in recent Artificial Intelligence research. An
increasing number of researchers from each side have tried
to adopt desirable characteristics of the other. A major
open question in this field is the extent to which a
connectionist architecture can accommodate basic concepts
of symbolic inference, such as a dynamic variable binding
mechanism and a rule and fact encoding mechanism involving
n-ary predicates. One of the current leaders in this area
is the connectionist rule-based system proposed by Shastri
and Ajjanagadde. We demonstrate that the mechanism for
variable binding which they advocate is fundamentally
limited and show how a reinterpretation of the primitive
components and corresponding modifications of their system
can extend the range of inference which can be supported. Our
extension hinges on the basic structural modification of
the network components and further modifications of the
rule and fact encoding mechanism. These modifications
allow the extended model to have more expressive power in
dealing with symbolic knowledge such as unification of
terms across many groups of unifying arguments.
Nam Seog Park, David Robertson. A localist network architecture for logical inference based on temporal
synchrony approach to dynamic variable binding. IJCAI95 Workshop on Connection-Symbolic Integration, Montreal,
Canada.
This paper describes a localist network architecture which translates
a significant subset of Horn-clause logic into a
connectionist representation which may be executed very
efficiently. The proposed architecture is based on an
extension of the temporal synchrony approach to dynamic variable binding
originally proposed by Shastri & Ajjanagadde and provides a rule and
fact encoding mechanism.
Andy Bowles, David Robertson, Wamberto Vasconcelos, Maria Vargas-Vera, Diana Bental. Applying Prolog Programming Techniques. Early version of a paper appearing in International Journal of
Human-Computer Studies 41(3).
Much of the skill of Prolog programming comes from the ability to harness
its comparatively simple syntax in sophisticated ways. It is possible to
provide an account of part of the activity of Prolog programming in terms
of the application of techniques - standard patterns of program development
which may be applied to a variety of different programming problems.
Numerous researchers have attempted to provide formal definitions of
Prolog techniques but there has been little standardization of the
approach and the computational use of techniques has been limited to small
portions of the programming task. We demonstrate that techniques knowledge
can be used to support programming in a wide variety of areas: editing,
analysis, tracing, transformation and techniques acquisition. We summarize
the main features of systems implemented by the authors for each of these
types of activity and set these in the context of previous work, using a
standard style of presentation. We claim that a techniques-based system
which integrates these features would be worth more than the sum of its
parts, since the same techniques knowledge can be shared by the different
subsystems.
Alberto Castro. A Techniques-based Program Generator. MSc Thesis, 1994. Departmental Technical Paper 29-94.
In this work we address program generation for simulation models
in a restricted class of ecological domains.
A first approach to deal with this problem, the EL Program Generation
System was presented in Robertson 1991. That system produced a Prolog program
by assembling modular predicate-level definitions obtained from a library of
program schemata.
The approach presented here is to use Prolog Programming Techniques
as the element of program construction which, acting at the clause level, is
expected to make the generation process more flexible and explicit.
The resulting system, TBPG -- A Technique-Based Program Generator, embodies
the ideas developed through this dissertation. It is a two-level processing
tool which uses a Techniques Editor to construct a Prolog predicate by using
a techniques library, controlled by a Generation Control
module which manages the generation process.
Peter Funk, David Robertson. Case-Based Support for the Design of Dynamic Systems. 2nd European Workshop on Case-Based Reasoning.
Using formal specifications based on varieties of mathematical logic is
becoming common in the process of designing and implementing software.
The advantage of this procedure is that it enables us to verify the
specification's properties before the real system is implemented. Up to
now, formal methods were usually applied to include all details of the
final system in the specification. In large, complex systems this
requires sophisticated logic, which makes theorem proving a difficult
and complex task. Telecommunication systems are large and complex.
However, our case-based approach uses coarse-grained requirements
sketches to outline the basic behaviour of the system's functional
components, thereby allowing us to identify, reuse and adapt
requirements (from cases stored in a library) to construct new cases. By
using cases that have already been tested, integrated and implemented,
less effort is needed to produce requirements specifications on a large
scale. Using a hypothetical telecommunication system as our example, we
shall show how comparatively simple logic can be used to capture
coarse-grained behaviour and how a case-based approach benefits from
this. The input from these examples is used to induce a set of
transition rules that are applied to match and identify the cases whose
behaviour corresponds most closely to the designer's intentions.
Peter Funk, David Robertson. Requirements Specification of Telecommunication Services Assisted
by Case-Based Reasoning. 2nd International Conference on Telecommunication Systems,
Modelling and Analysis, Nashville, USA.
Producing formal specifications within a suitable logical framework has
been used as a methodology for specifying systems with exceptionally
high reliability requirements. There are substantial difficulties in
scaling up the approach to complex real-world specification tasks. It is
time-consuming and tedious work to develop a formal specification of
some new demand, and often the connection with the initially required
behaviour is difficult to maintain. The addition and integration of a
new demand into the existing specification is a difficult task, in which
the risk of accidentally changing some previously required behaviour is
high. However, supporting the specification process with case-based
reasoning offers a number of advantages. First, by providing a case
library that stores both a required behaviour of the system and its
final representation, the connection between them can be maintained.
Similarly, previously successful modification and extension cases are
identified and can be used and adapted to the current task. Finally, we
can test the modified specification by verifying that previously
required behaviours are covered, and thus identify parts affected by
changes (a simulator and a theorem prover are implemented for this). Our
example domain is the specification of telecommunication network
services. A decidable and deterministic temporal logic is used as the
representation. The system accepts input in the form of behavioural
examples, which are used to identify similar cases in the case library.
A set of domain-independent metrics based on a set-theoretical approach
and domain dependent global parameters are used for fine-tuning matching
between cases.
Edjard Mota. Temporal Representation of Ecological Knowledge. MSc Thesis, 1994. Departmental Technical Paper 31-94.
This work proposes a temporal logic to represent knowledge about
seasonal cycles in ecosystems. The logic is mainly based on what we call
modular temporal class, and a simple temporal logic
interpreter system is also defined and implemented to reason with
ecological sentences expressed in a temporal language we call
NatureTime. The NatureTime system was tested with some ecological
examples from the Indigenous Knowledge (IK) project. This project
concerns the construction of a knowledge-based toolkit to represent
and reason with ecological and agroforestry knowledge in developing
countries. This work shows how useful a temporal reasoning system
such as NatureTime can be when integrated into the IK project for
making predictions and detecting conflicts.
Nam Seog Park, David Robertson, Keith Stenning. Reasoning with Limited Unification in a Connectionist Rule-Based System. ILP Workshop on Logic and Reasoning with Neural Networks,
S. Margherita Ligure, Italy.
At the intersection between symbolic inference and connectionism, there is
interest in producing systems constructed from connectionist components
which perform types of inference comparable to symbolic systems. The
attraction of such systems is that, under restricted conditions, they may
be capable of very fast, "reactive" responses to external stimuli. A
major open question in this field is the extent to which a connectionist
architecture can accommodate basic concepts of symbolic inference, such as
unification of terms. One of the current leaders in this area is the
rule-based system proposed by Shastri & Ajjanagadde (1993). We demonstrate
that the mechanism for variable binding which they advocate is
fundamentally limited and show how a reinterpretation of the primitive
components of their system can extend the range of inference which can be
supported.
David Robertson, Nam Seog Park, Jaume Agusti. Layered Design of KBS from Specification to Hardware. ECAI-94 Workshop on Formal Specification of Knowledge-based
Systems, Amsterdam, Netherlands.
Knowledge based systems are now being
embedded within the hardware of household items, such as cameras and
washing machines. These systems demand a high degree of reliability,
both in terms of the components used in implementation and in ensuring
that the initial specification of the system is accurately implemented.
We examine this problem in the context of a standard KBS specification
language: first order predicate calculus (FOPC). Two key problems
emerge. The first is to provide a reliable bridge between the axioms of
FOPC and the design required for hardware implementation. If our FOPC
specification can be translated to an equivalent propositional axiom set
then we can provide an automatic translation to existing design
languages (e.g. functional logics). However, most KBS specifications
cannot be simplified in this way. By drawing on work from a hitherto
unrelated area we demonstrate, by example, how a significant class of
non-propositional FOPC axioms can be translated automatically to an
implementation-level design language. The second problem is in keeping
track of the gradual process of refinement necessary when moving from an
initial, underspecified description to the restricted class of FOPC
appropriate for automatic translation to an implementation design. We
describe a layered system of formal specification which permits
incremental specification of the design and enables designers to
converge on the appropriate class of FOPC axioms.
David Robertson, Jaume Agusti, Jane Hesketh, Jordi Levy. Expressing Program Requirements using Refinement Lattices. Methodologies for Intelligent Systems (ISMIS-93).
Requirements capture is a term used in software engineering, referring
to the process of obtaining a problem description - a high level account
of the problem which a user wants to solve. This description is then
used to control the generation of a program appropriate to the solution
of this problem. Reliable requirements capture is seen as a key
component of future automated program construction systems, since even
small amounts of information about the type of problem being tackled can
often vastly reduce the space of appropriate application programs. Many
special purpose requirements capture systems exist but few of these are
logic based and all of them operate in tightly constrained domains. In
previous research, we have used a combination of order sorted logic (for
problem description) and Prolog (for the generated program) in an
attempt to provide a more general purpose requirements capture system.
However, in our earlier systems the connection between the problem
description and the resulting program was obtained using ad hoc methods
requiring considerable amounts of domain-specific information, thus
limiting the domain of application of the system. We are experimenting
with languages which provide a formal connection between problem
description and application program, thus eliminating the need for
domain-specific information in the translation process. This paper
introduces a formal language for requirements capture which bridges the
gap between an order sorted logic of problem description and the Prolog
programming language. The meaning of a Prolog predicate is often
characterised according to the set of bindings which can be obtained for
its arguments. It is therefore possible to develop a hierarchical
arrangement of predicates by comparing the sets of results obtained for
stipulated variables. Using this hierarchical structure, we provide
proof rules which may be used to support part of the requirements
capture process. We describe the notation used for the refinement
lattice; define its relationship to Prolog and demonstrate how the
language can be used to support requirements capture. An interactive
system for extracting Prolog programs from our refinement hierarchies,
using an algorithm similar to the one described in this paper, has been
implemented.
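The hierarchical arrangement described above, ordering predicates by the sets of bindings obtainable for stipulated arguments, can be sketched as follows (a minimal illustration with invented predicates and finite solution sets, not the paper's notation):

```python
# Illustrative sketch (invented predicates, not the paper's system):
# each predicate is modelled by the finite set of bindings it admits for
# a stipulated argument; one predicate refines another when its solution
# set is a subset of the other's.

solutions = {
    "number": {1, 2, 3, 4, 5, 6},
    "even":   {2, 4, 6},
    "prime":  {2, 3, 5},
    "two":    {2},
}

def refines(p, q):
    """p refines q iff every binding satisfying p also satisfies q."""
    return solutions[p] <= solutions[q]

def refinement_lattice():
    """Collect all strict refinement pairs (p, q) with p below q."""
    return sorted(
        (p, q)
        for p in solutions for q in solutions
        if p != q and refines(p, q)
    )

print(refinement_lattice())
```

On this toy data, "two" sits below "even", "prime" and "number", while "even" and "prime" are incomparable, giving the lattice structure over which refinement proof rules can operate.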
David Robertson, Jane Hesketh. Making Specification Design More Accountable. ONR/ARPA/AFOSR/ARO/NSF Workshop on Increasing the Practical
Impact of Formal Methods for Computer-Aided Software Development, Monterey,
California.
If something
goes wrong with a software/hardware implementation then it is important
to know what caused the problem. Sometimes the error lies in incorrect
implementation of a specification but it is thought that a more common
source of problems lies in earlier stages of the development process,
where there may be mismatches between the requirements of the domain and
the specification. For this reason, it is common (particularly in
safety-critical applications) to find tightly monitored regimes of
specification, in which the strategies of specification designers follow
prescribed conventions and are closely constrained by regulatory
reviewers. It would be useful if this process were as explicit as
possible, making the design and reviewing process more open and
accountable. One way of doing this is to provide formal descriptions of
design strategies and to require designers to endorse their use of these
by reference to appropriately formalised parts of the regulations. We
examine how this may be done, using as a concrete example the domain of
oil platform emergency shutdown systems.
Wamberto Vasconcelos. A Method of Extracting Prolog Programming Techniques. Departmental Technical Paper 27.
We present a method for extracting the programming techniques employed in
Prolog programs. Techniques are dynamic entities consisting of the syntax
of the program and how it is used. The method records how subgoals are
employed and uses this, together with their syntax and other auxiliary
information, to partition the program into single-argument procedures,
possibly sharing variables. A technique is formally characterised as a
sequence of such single-argument procedures.
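The partitioning into single-argument procedures can be sketched as follows (a hypothetical data representation chosen for illustration, not the paper's method): clause heads are projected onto each argument position in turn.

```python
# Illustrative sketch (hypothetical representation, not the paper's
# method): a clause head is modelled as a predicate name plus a list of
# argument terms; partitioning projects each argument position into its
# own single-argument procedure.

# The heads of a count/3 list-counting program, as (name, [args]) pairs.
clauses = [
    ("count", ["[]", "N", "N"]),
    ("count", ["[_|T]", "Acc", "N"]),
]

def argument_slices(prog):
    """Map each argument position to the single-argument procedure
    obtained by projecting every clause onto that position."""
    arity = len(prog[0][1])
    return {
        i: [(name, [args[i]]) for name, args in prog]
        for i in range(arity)
    }

slices = argument_slices(clauses)
print(slices[0])  # [('count', ['[]']), ('count', ['[_|T]'])]
```

Here slice 0 isolates the list-traversal behaviour, while the remaining slices isolate the accumulator technique layered on top of it.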
Soon-Ae Yang, David Robertson, John Lee. Use of case-based reasoning in the domain of building regulations. 2nd European Workshop on Case-Based Reasoning.
In traditional legal decision support systems, it has been regarded
as natural to represent statutes in terms of decision rules and to link
these to a separate case-based reasoning system for handling precedent.
Statutory legal rules used in these systems are formal and prescriptive.
Building regulations in Scotland are part of statute law and constitute
part of a legal system together with case histories. In recent years, the
regulations have become less prescriptive and more emphasis has been put
on their interpretive use. In developing
a system which can assist domain experts in interpreting the regulations,
this trend has presented us with difficulties in employing this traditional
approach and has led us to a unified case-based model of the regulations
and case histories. In this paper, we first describe the characteristics
of the regulations and the activities involved in this domain. Second, we
explain the reason why we abandoned the traditional approach. Third, we
describe the system which has been developed using this case-based model.
Soon-Ae Yang, David Robertson. A case-based reasoning system for regulatory information. Early version of a paper appearing in International Journal
of Construction Information Technology 3(2) 1995.
We describe a knowledge-intensive case-based reasoning system (KICS)
which is under development in the domain of statutory building regulations,
using case histories from The Scottish Office's Building Directorate.
The aim of the system is to assist domain experts in interpreting regulatory
information in the relaxation process. The system does not make any decision
about relaxation but is intended to provide the experts with relevant
information so that they can arrive at an informed final decision. First,
we describe three knowledge bases: the Model Knowledge Base, which contains
abstraction hierarchies of legal rules from the statutory regulations
and case histories; the Case Library, which contains information about
cases; and the Domain Knowledge Base, which contains background domain
knowledge used in processing cases. Second, we describe how relevant
information (relevant legal rules and similar cases) is retrieved. Finally,
we describe how new regulatory information is acquired from the relaxed case.
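The retrieval of similar cases can be sketched with a minimal example (invented attributes and cases, not the KICS knowledge bases): stored cases are ranked by how many attribute values they share with the query.

```python
# Illustrative sketch (invented data, not the KICS system): retrieving
# the most similar stored case by counting matching attribute values.

cases = [
    {"id": "A", "building": "dwelling", "issue": "stair width"},
    {"id": "B", "building": "office",   "issue": "stair width"},
    {"id": "C", "building": "dwelling", "issue": "fire escape"},
]

def similarity(query, case):
    """Number of attributes on which the query and the case agree."""
    return sum(1 for k, v in query.items() if case.get(k) == v)

def retrieve(query):
    """Return stored cases ordered from most to least similar."""
    return sorted(cases, key=lambda c: similarity(query, c), reverse=True)

best = retrieve({"building": "dwelling", "issue": "stair width"})[0]
print(best["id"])  # A
```

A knowledge-intensive system such as KICS goes beyond this surface matching, using background domain knowledge to judge implied similarity, but the ranking-by-relevance pattern is the same.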
Andy Bowles, Wamberto Vasconcelos. Characterizing Prolog Programming Techniques. Logic Programming Environments Workshop at International
Logic Programming Symposium '93.
A programming technique captures some common code pattern.
Programming techniques have been applied in a wide range of programming
contexts with some success. Here we discuss a language for describing
techniques in a formal way. This allows a precise definition of techniques
and is a prerequisite for a consistent treatment across a programming
environment.
Alberto Castro. A logic-based tool to deal with configuration problems.
This work proposes a first-order-logic-based tool to deal with
configuration problems. The proposed architecture involves representing
and processing knowledge at two levels, declarative and procedural, as
well as using multiple knowledge bases. In addition to the design of the
system, a language is specified to describe the knowledge about a
specific problem, including parameters related to the solution process.
Finally, a prototype of the proposed tool, developed in the Intelligent
Systems Laboratory at the Federal University of Espirito Santo, is
described.
Mauricio Miqueleiz, Nam Seog Park. A connectionist implementation of passive elaborative inferencing.
O'Brien et al. (1988) reported that readers generated elaborative
inferences only when a text contained characteristics (a strong biasing
context or a demand sentence) that made it easy to predict the specific
inference that a reader would draw and virtually eliminated the
possibility of the inference being disconfirmed. Garrod et al. (1990),
however, offered two refinements to the conclusions. First, the two text
characteristics manipulated may have produced different types of
elaborative inferencing: a biasing context results in a passive form of
elaborative inferencing, involving setting up a context of interpretation,
whereas the presence of a demand sentence invites the reader to actively
predict a subsequent expression. Secondly, clear evidence for either
type of inference will be apparent only with truly anaphoric materials.
This paper describes how the passive form of elaborative inferencing
reported by Garrod et al. may be implemented in a connectionist manner.
We use the connectionist model proposed by Shastri & Ajjanagadde (1993)
to represent, as a network, a text containing such characteristics. We
also adopt meta-predicates to represent relationships between contexts
and anaphors. These meta-predicates are encoded into the network and used
to control the flow of inference according to the relationships they
specify.
Nam Seog Park, David Robertson, Keith Stenning. An extension of the temporal synchrony solution to dynamic variable
bindings in a connectionist system. Departmental Research Paper 666.
A structured connectionist model using temporal synchrony has been
proposed by Shastri & Ajjanagadde. This model has provided a mechanism
which encodes rules and facts involving n-ary predicates and handles some
types of dynamic variable binding. Their model, however, has problems in
dealing with important knowledge representation issues such as binding
generation, consistency checking, unification, etc. This paper presents a
restructuring of the original model which overcomes many of its limitations
while, at the same time, reducing the number of types of node required and
maintaining the merits of the original model.
Maria Vargas-Vera, David Robertson, Robert Inder. Combining Prolog Programs in a Techniques Editing System. Third International Workshop on Logic Programming Synthesis
and Transformation.
Techniques editing, as proposed by Sterling et al., allows Prolog
programs to be constructed by initially selecting a "skeleton" which
determines the flow of control of the program, and then adding on top
of this the extra features required by the program. This means that it
is easy to obtain as an end-result of techniques editing not only the
final program but also a history of its development, in terms of the
skeleton and extensions used to build it. We describe how this program
history information can be used to produce efficient combined programs
from pairs of initial programs constructed independently by a techniques
editor.
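The combination of independently built programs that share a skeleton can be sketched as follows (a hypothetical example, not the editing system itself): two single-pass programs over the same traversal skeleton are fused into one program computing both results.

```python
# Illustrative sketch (hypothetical example, not the techniques editor):
# two programs built on the same list-traversal skeleton can be fused
# into a single traversal that computes both results at once.

def total(xs):
    # skeleton: traverse the list; technique: accumulate the sum
    acc = 0
    for x in xs:
        acc += x
    return acc

def length(xs):
    # same skeleton; technique: count the elements
    n = 0
    for _ in xs:
        n += 1
    return n

def total_and_length(xs):
    # combined program: one pass over the shared skeleton,
    # carrying both accumulators
    acc, n = 0, 0
    for x in xs:
        acc += x
        n += 1
    return acc, n

print(total_and_length([3, 1, 4]))  # (8, 3)
```

It is precisely the recorded development history (which skeleton, which extensions) that tells a combining system the two programs traverse the same structure and can therefore safely share one pass.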
Maria Vargas-Vera, Wamberto Vasconcelos, David Robertson. Building Large-Scale Prolog Programs using a Techniques Editing System. International Logic Programming Symposium '93.
We describe an integrated environment which addresses three distinct
aspects of large-scale Prolog Programs: the formalization of the
programming practices one should use in order to build reliable and
maintainable programs, the computer-aided use of these practices to
develop programs, and the combination of these programs into more
sophisticated and efficient programs. We also propose the use of program
histories (i.e., the information of how programs were developed using a
techniques-based editor) to improve the process of program combination.
Wamberto Vasconcelos. Designing Prolog Programming Techniques. Third International Workshop on Logic Program Synthesis
and Transformation.
We propose a medium in which expert programmers can design, test and
organise Prolog programming techniques. The proposed approach employs
simple single-argument program fragments and their combinations in order
to represent techniques. The devised techniques can be made available
to other programmers, by means of techniques-based editors.
Soon-Ae Yang, David Robertson, John Lee. KICS: A Knowledge-Intensive Case-Based Reasoning System for Statutory
Building Regulations and Case Histories. 4th International Conference on AI and Law, Amsterdam,
Netherlands.
There have been several knowledge-based systems for statutory building
regulations during the last decade, such as Fenves et al's systems using
the SASE model, Stone and Wilcox's system using a rule-based approach,
and Waard's system using Cornick et al's model-based approach. However,
they take into account only one side of building regulations, considering
them only in the context of design systems and ignoring the existence of
case histories. Building regulations are also part of a legal system and
have characteristics of law. In this paper, we propose a Knowledge-Intensive
Case-based reasoning System (KICS) which can be used for the retrieval
and maintenance of building regulations and case histories. First, we
propose a unified knowledge representation scheme for both statutory
building regulations and case histories. Second, we describe the retrieval
of regulations information, which uses the notion of implied similarity
as well as structural mapping. Finally, we describe knowledge acquisition
from case histories, which is guided by knowledge gained from statutory
regulations and case histories.
Maria Vargas-Vera, David Robertson, Robert Inder. An Environment for Combining Prolog Programs.
The purpose of this paper is to describe an environment for the
construction of complex Prolog programs by combining simpler Prolog
programs. The technique consists of the development in parallel of
pieces of software which are combined to produce the final program.
To produce elegant and efficient programs we need to keep information
such as the kind of program (according to a classification based on
features of the program) and its development history, such as the
initial skeleton and the techniques that the user applied in
constructing the program. Combining two programs to produce a new
program while retaining correctness properties is not easy. We therefore
designed a system which provides assistance in deciding how to combine
the two programs. The system is based on program transformation, a
technique used for program optimisation, supplemented with knowledge of
program development.
Maria Vargas-Vera, David Robertson, Robert Inder. A Mathematical Framework for the combination of Prolog Programs.
The purpose of this paper is to describe a mathematical framework which
supports our environment for the construction of complex Prolog programs by
combining simpler Prolog programs. This framework includes three groups of
properties: correctness properties which hold after applying each
composition method; properties for the type of program which is obtained in
each stage of the combining process and properties for the definition of
the join specification. Our motivation for providing this framework is
to guarantee the soundness of the composition process at these three
levels, i.e. our methods, the type of combined program, and the
properties which are useful at the user level.
Wamberto Vasconcelos. Formalising the Knowledge of a Prolog Techniques Editor. 9th Brazilian Symposium on Artificial Intelligence,
Rio de Janeiro, Brazil.
A Prolog Techniques Editor (PTE) is a knowledge-based software
development tool which provides its users with simple Prolog programs
depicting common flows of control and ways to enhance them into more
elaborate and useful code. However, the expert Prolog programmers who
provide these components are offered no methodology or notation to help
them formalise, test, compare and organise their programming knowledge
so that it can be supplied to the PTE. In this paper we propose the use of a
repertoire of commands and their combination in ordered sequences as a way
to formalise the PTE's programming practices.
David Robertson. A Simple Prolog Techniques Editor for Novice Users. 3rd Annual Conference on Logic Programming, Edinburgh.
This paper describes a working prototype
system which uses descriptions of standard Prolog techniques to provide
a basic techniques editing system, ultimately intended for use by novice
programmers. A notation for representing techniques, based on Definite
Clause Grammars, is described in the context of previous theoretical
work by Kirschenbaum, Lakhotia and Sterling. Details are supplied of a
mechanism for using these techniques to provide guidance during program
construction and an example is provided of the system in operation. I
conclude by suggesting the extensions needed in order to make the
prototype useful for practical applications.
David Robertson. Multi-Level Cooperative Dialogue in Intelligent Front Ends. Early version of a paper appearing in the Journal of Artificial
Intelligence in Engineering 6(1).
The control of multi-level cooperative dialogue
is considered a difficult problem in knowledge based systems but may be
easier to tackle in the more restricted domain of Intelligent Front
Ends. This paper discusses the requirement for multi-level cooperative
dialogue in IFE programs and describes several techniques for supporting
this form of interaction, using examples from implemented systems.