
A More General View of Endorsements

The mechanism which we described in Chapter 8 of the book propagates the intersections and unions of endorsement sets, leaving it to modellers to judge how this information influences their assessment of the model. In some cases we may wish to provide more sophisticated propagation methods which reduce the volume of information presented to those assessing the model. In this section we discuss briefly how this may be done, using an example adapted from Sullivan & Cohen's paper at IJCAI'85.

In user modelling systems it is common to have some representation (hidden from the user) of standard plans of action. By comparing user input to these stored plans, the user modelling system attempts to guess which particular plan the user is most likely to be following. This allows the system to provide the user with advice about how best to achieve the hypothesised plan. Clearly this matching process involves considerable uncertainty, since initial user input is unlikely to determine precisely what the correct plan should be. Plan recognition systems for complex real-world tasks need to use large numbers of sophisticated plans but, to keep our example simple, imagine that we have a rudimentary planning system in which only two linear plans are allowed. Each of these is described using a predicate with two arguments: P, the name of the plan, and S, the sequence of names of the tasks which it involves.

Plan 1 involves the sequence of tasks `a' then `b' then `c'. Plan 2 involves the sequence of tasks `b' then `d' then `e'.
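A minimal Prolog sketch of these two definitions, assuming the predicate is named plan/2 and holds the task sequence as a list (the functor name is our assumption rather than the book's), is:

    plan(plan1, [a, b, c]).   % Plan 1: task a, then b, then c
    plan(plan2, [b, d, e]).   % Plan 2: task b, then d, then e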

  

Input from the user will consist of selecting a particular sequence of tasks and we want to guess, at each point in the input sequence, which plan the user has in mind. Our classification of the plan at any given stage in the sequence may be uncertain for any of a number of reasons. To encode these reasons we use endorsements which we define as rules of the form:
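A plausible Prolog-style rendering of this general form, using the component letters explained below (the functor name endorsement/5, and the letter S for the prior-action sequence, are our assumptions), is the schematic clause:

    endorsement(P, A, S, E, W) :- C.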

where P is the identifier of the plan to which the endorsement applies; A is the action to which it applies; S is the sequence of actions which have taken place prior to action A; E is a term describing the endorsement; W is the weight of the endorsement, which can be either positive or negative; and C is the condition of the endorsement rule. In our example, we use five of these endorsement rules (labelled 4 to 8 below).

An observed task, A, could be a mistake if it appears in the sequence, S, of any plan, P.
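A hedged Prolog sketch of this rule (rule 4), reusing the illustrative plan/2 and endorsement/5 conventions above and an illustrative weight of -1, might be:

    % Rule 4: any observed task could be a user slip, so each plan whose
    % task sequence contains it carries a weak negative caveat.
    endorsement(P, A, _Prior, could_be_mistake(A), -1) :-
        plan(P, Tasks),
        member(A, Tasks).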

 

Plan P is the only grammatical possibility given observed action, A, if there is no other plan containing A.
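A corresponding sketch of rule 5, again with an illustrative weight, might be:

    % Rule 5: plan P is the only grammatical possibility for action A if no
    % other plan's task sequence also contains A.
    endorsement(P, A, _Prior, only_possibility(A), 1) :-
        plan(P, Tasks),
        member(A, Tasks),
        \+ ( plan(Other, OtherTasks),
             Other \= P,
             member(A, OtherTasks) ).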

 

There is an alternative plan to plan P for task A if both plans involve A. This weakens our certainty that plan P is the one we are observing.
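Rule 6 might be sketched as:

    % Rule 6: some other plan also contains A and so is an alternative to P;
    % this weakens our confidence in P.
    endorsement(P, A, _Prior, alternative_plan(Other, A), -1) :-
        plan(P, Tasks),
        member(A, Tasks),
        plan(Other, OtherTasks),
        Other \= P,
        member(A, OtherTasks).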

 

If the last task of the observed sequence was A and we have just observed task B and there is a plan, P, which has a subsequence containing A followed by B then our confidence that we are dealing with plan P is increased because this maintains continuity between tasks.
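Rule 7 is the first to consult the prior action sequence. A sketch, reading "A followed by B" as B appearing immediately after A in the plan's task sequence and relying on the standard list predicates last/2 and append/3, might be:

    % Rule 7: the new task B directly follows the previous task A in plan P,
    % so observing B maintains continuity within P.
    endorsement(P, B, Prior, continuity(A, B), 1) :-
        last(Prior, A),
        plan(P, Tasks),
        append(_, [A, B | _], Tasks).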

 

If the last task of the observed sequence was A and we have just observed task B and there is a plan, P, which contains B but there is another plan which contains A followed by B, then we are more sceptical that we are dealing with plan P because this disturbs the continuity between tasks for P.
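Rule 8 might be sketched as:

    % Rule 8: B belongs to plan P, but it is some other plan in which B
    % directly follows the previous task A, so switching to P disturbs
    % continuity.
    endorsement(P, B, Prior, discontinuity(A, B), -1) :-
        last(Prior, A),
        plan(P, Tasks),
        member(B, Tasks),
        plan(Other, OtherTasks),
        Other \= P,
        append(_, [A, B | _], OtherTasks).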

 

Suppose that the user attempts task 'a', followed by task 'b', followed by task 'd'. Using endorsement rules 4 to 8, we can generate at each step sets of endorsements for each of the two potential plans. This is shown in Figure 1. Each endorsement has been numbered for ease of reference.
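Under the illustrative clauses above, the endorsement set for a plan at any step can be collected with a simple query; endorsements_after/4 below is our own helper, not part of the original example:

    % Collect the endorsements triggered for plan P when NewTask is observed
    % after the tasks already seen in Prior.
    endorsements_after(P, Prior, NewTask, Endorsements) :-
        plan(P, _),
        findall(E-W, endorsement(P, NewTask, Prior, E, W), Endorsements).

For instance, with these clauses the query ?- endorsements_after(P, [a], b, Es). yields mistake, alternative-plan and continuity endorsements for plan1, and mistake, alternative-plan and discontinuity endorsements for plan2.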

  
Figure 1: Endorsements for the sequence a, b, d.

Looking at the sequence of tasks in Figure 1, we can see that the set of endorsements for each plan will tend to grow larger with each successive task. This is undesirable, since the number of active endorsements could become very large for long sequences of tasks. To reduce this problem, we introduce rules for removing from the pool of endorsements those which we consider to have become obsolete due to the arrival of new information. In our current, simple example we use only two such rules:

If for plan, P, we have the negative endorsement that task, A, could be a mistake and we also have the positive endorsement that continuity of the plan via some task, B, is desirable then we retract the negative endorsement on A.
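A sketch of this retraction rule, assuming that a plan's current endorsements are held as a list Pool of Endorsement-Weight pairs and that pruning is expressed by recognising obsolete entries (the obsolete/2 formulation is ours), might be:

    % A "could be a mistake" caveat on task A becomes obsolete once the pool
    % also records a continuity endorsement leading on from A.
    obsolete(Pool, could_be_mistake(A)) :-
        member(continuity(A, _B)-_, Pool).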

 

If task B allows us to switch to plan P, and thus introduces an undesirable discontinuity in our choice of plan given task A, but B also allows a desirable continuity in plan P to task C, then we retract the negative discontinuity endorsement on P.
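The second retraction rule might similarly be sketched as:

    % A discontinuity recorded when we switched to this plan at task B becomes
    % obsolete once the plan itself continues from B to some later task C.
    obsolete(Pool, discontinuity(_A, B)) :-
        member(continuity(B, _C)-_, Pool).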

 

By applying the above rules at each step in the sequence of tasks from Figure 1 we can prune out obsolete endorsements from each plan, thus leaving fewer to be carried forward to later tasks. Figure 2 illustrates this process diagrammatically. The numbers on the diagram refer to endorsements from Figure 1. The horizontal layers denote steps in the task sequence. The introduction of an endorsement is shown by the appearance of its number in the appropriate task layer and vertical lines show the duration of the endorsement. Lines ending in a cross denote pruning of an endorsement. Those endorsements which remain in the final task are terminated by an arrow head.

  
Figure 2: Example endorsement sequence
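Putting the illustrative pieces together, one generate-and-prune step for a single plan might look like the following sketch (again under our assumed list representation of the endorsement pool, not the implementation described in the book):

    % Extend plan P's pool with the endorsements triggered by NewTask, then
    % drop any entries which the obsolescence rules now rule out.
    step(P, Prior, NewTask, Pool0, Pool) :-
        findall(E-W, endorsement(P, NewTask, Prior, E, W), New),
        append(Pool0, New, Pool1),
        findall(E-W,
                ( member(E-W, Pool1),
                  \+ obsolete(Pool1, E) ),
                Pool).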








Dave Stuart Robertson
Tue Jul 7 09:57:08 BST 1998