Michael J. O'Donnell
8 April, 1993
Most formal systems for logical reasoning are essentially definitions of a relation ⊢, called logical inference. When Γ is a set of propositions expressed in the language used by such a formal system, and P is another such proposition, Γ ⊢ P means that the formal system allows P to be inferred from hypotheses in Γ. In the vast majority of formal systems studied by logicians, the relation ⊢ is required to be monotonic in its left argument: whenever Γ ⊢ P and Γ ⊆ Δ, also Δ ⊢ P.
Practical reasoning often involves defeasible [#!defeasible!#] (retractable) steps. In particular, there are often default assumptions [#!default!#] about typical cases, that are accepted until specific reasons to reject them are found. The example of concluding that a given bird flies until learning that the bird is a penguin has achieved chestnut status.
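The default pattern in the penguin example can be sketched as a toy procedure (in Python; the predicate names are illustrative only, and not part of any formal system discussed here):

```python
def flies(facts):
    """Default rule: conclude that a bird flies, unless the specific
    exception 'penguin' is known to apply."""
    if "penguin" in facts:       # specific knowledge defeats the default
        return False
    return "bird" in facts       # default conclusion for the typical case

print(flies({"bird"}))             # True: the default is accepted
print(flies({"bird", "penguin"}))  # False: new knowledge withdraws the conclusion
```

Adding "penguin" to the set of known facts withdraws a previously drawn conclusion, which is exactly the nonmonotonic behavior at issue.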
I believe that the penguin example, and many similar examples, provide a strong case for nonmonotonic behavior in a reasoning procedure. That is, there are certainly cases in practice where the discovery of new knowledge requires a retraction of old assertions. But, the explication of this nonmonotonic behavior can be attached to at least two different sites in a formal system of reasoning: the inference relation ⊢, and the formulae themselves. I will argue for the latter choice, but first I will reject some particular forms for doing so, in spite of their initial appeal.
In principle, nonmonotonic behavior of procedures using a monotonic inference relation may be explained by weakening the conclusions of inference. For example, instead of

    bird(tweety) ⊢ flies(tweety)

we may allow only the weaker conclusion

    bird(tweety) ⊢ flies(tweety) ∨ abnormal(tweety)

so that the discovery that tweety is a penguin merely settles which disjunct holds, rather than defeating the inference.
The basic idea of weakening conclusions to maintain monotonicity may be refined [#!weakening!#], but I believe that most reasoners rightly reject this complication of the form of final conclusions. Since the normal mode of assertion of a proposition P in everyday life implicitly means something like, ``As well as can be determined, P holds,'' it seems inappropriate to introduce explicit formal disclaimers into every proposition. The best known formal systems for mathematical reasoning (the Classical First-Order Predicate Calculus heads the list, but the same holds for higher-order classical logics, set theories, and formalizations of intuitionistic or constructive reasoning) all treat the formal assertion of a proposition P as meaning something like, ``P is absolutely true, with no possibility of error.'' But, nowhere in popular usage, not even in mathematical journals, is that formal notion actually practiced. If it were, Russell's antinomy would have been a catastrophe rather than a stimulus to further research. So, I reject the weakening of conclusions by explicit formal disclaimers, since it forces a detailed accounting for assumptions that are made uniformly and universally in the practice of language, and thereby obscures the natural structure of practical assertions of propositions.
Rather than explaining nonmonotonic behavior by weakening the statements of the conclusions of inference, we need to strengthen the hypotheses. An obvious possibility is to add the same sorts of propositions that are used to weaken the conclusions, yielding

    bird(tweety), ¬abnormal(tweety) ⊢ flies(tweety).
Another way of augmenting the hypotheses is to use a formal operator All, where for a proposition P and a set Γ of propositions, All(P, Γ) means that a reasonably diligent search for known propositions relevant to P yields only Γ. Then, the inference that tweety flies is represented by

    bird(tweety), All(flies(tweety), {bird(tweety)}) ⊢ flies(tweety).

This encoding can trigger nonmonotonic behavior, since the addition of new information to the knowledge base, or even the discovery of more information that was already in it, can change All(flies(tweety), {bird(tweety)}) from true to false. In the penguin example, once we discover the proposition penguin(tweety) among our knowledge, the metatheoretic proposition All(flies(tweety), {bird(tweety)}) becomes false, and the conclusion flies(tweety) can no longer be drawn, because the hypothesis required to infer it does not hold. All(flies(tweety), {bird(tweety), penguin(tweety)}) becomes true at this point, but it does not support the inference that flies(tweety).
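This behavior can be illustrated with a small executable sketch. The function and relation names below are invented for the illustration, and ``relevance'' is supplied as an explicit table rather than discovered by search:

```python
def all_relevant(kb, p, gamma, relevant):
    # Models the metatheoretic operator All(P, Gamma): a diligent search of
    # the knowledge base kb for propositions relevant to p yields only gamma.
    return {q for q in kb if q in relevant[p]} == set(gamma)

kb = {"bird(tweety)"}
relevant = {"flies(tweety)": {"bird(tweety)", "penguin(tweety)"}}

# Hypothesis for the default inference of flies(tweety):
gamma = {"bird(tweety)"}
print(all_relevant(kb, "flies(tweety)", gamma, relevant))  # True: inference allowed

kb.add("penguin(tweety)")  # new knowledge arrives
print(all_relevant(kb, "flies(tweety)", gamma, relevant))  # False: the All-hypothesis
                                                           # fails, so flies(tweety)
                                                           # can no longer be drawn
```

The inference rule itself never changes; only the truth of the metatheoretic hypothesis does, which is the point of attaching nonmonotonicity to the formulae rather than to ⊢.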
Further refinement of these ideas may show how to avoid the annoying occurrence of the same proposition P on both sides of ⊢, or may reveal that search keys have a different type than propositions, with some useful relations determining which search keys are relevant to inferring which propositions. One natural operator to consider is the consistency operator Con [#!consistent!#]. When Γ is a set of propositions, Con(Γ) is a proposition asserting that Γ is internally consistent (noncontradictory). Using Con, we might approach the penguin problem by allowing

    bird(tweety), Con({flies(tweety)}) ⊢ flies(tweety).
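A minimal sketch of such a Con-style default rule follows. Consistency is modeled crudely as the absence of an explicit negation in the knowledge base, with negation encoded as a "-" prefix; both conventions are assumptions of this sketch, not part of any formal system above:

```python
def con(kb, props):
    # Con(props): props is consistent with kb, modeled here as the absence
    # of the explicit negation of any member of props.
    return all("-" + p not in kb for p in props)

def default_flies(kb, x):
    # Implements: bird(x), Con({flies(x)}) |- flies(x)
    return f"bird({x})" in kb and con(kb, {f"flies({x})"})

kb = {"bird(tweety)"}
print(default_flies(kb, "tweety"))   # True: the consistency hypothesis holds

kb.add("-flies(tweety)")             # learn that tweety does not fly
print(default_flies(kb, "tweety"))   # False: the consistency hypothesis fails
```

Real consistency checking is of course far harder than this membership test, a point taken up again in the discussion of paraconsistency below.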
In the next two sections, I argue that the use of a nonmonotonic logical inference relation is highly misleading, and not the appropriate way to support nonmonotonic behavior in a procedure for reasoning. Rather, I argue that nonmonotonic behavior should be supported by including, in the hypotheses of a logical inference, an explicit formal representation of the metatheoretic fact that certain propositions contain all of the knowledge relevant to a given problem discovered by a reasonably diligent search. I use the form All(P, Γ), where P is a proposition and Γ is a finite set of propositions, to represent such metatheoretic assertions. I do not believe that this form is exactly the right one in practice, but it has enough of the essential elements to serve for the current comparative argument.
The crux of my argument depends on choosing between two interpretations of the logical inference relation denoted by ⊢, one of which allows the relation to be nonmonotonic, and the other of which demands monotonicity. Roughly, Γ ⊢ P means that, given knowledge of the propositions in Γ, it is rational to infer P as well. Two different ways of making this rough idea more precise yield radically different results for practical reasoning.
Since knowing each of the formulae in a set Δ implies knowing each of the formulae in every subset Γ of Δ, the first interpretation, written Γ ⊢ P (given knowledge of at least the propositions in Γ, it is rational to infer P), must be monotonic. The second interpretation, written Γ |~ P (given that Γ is all of the available relevant knowledge, it is rational to infer P), is typically nonmonotonic, since as we make the set Γ larger, the information that Γ contains all available relevant knowledge becomes weaker, and therefore supports fewer inferences. In principle, logicians may choose to study any interesting relation between propositions and sets of propositions that they please. But, by focusing attention on an infelicitously chosen relation, and referring to it as ``logical inference,'' we may mislead those who are trying to apply formalisms to practical problems. An intuitively well grounded focus of attention may clarify thinking. In the next section, I analyze the concrete operations performed by a reasoning agent, and argue that ⊢ is a useful relation for studying such agents, while |~ is not.
The choice of an inference relation such as ⊢ or |~ as the basis for studying practical reasoning should depend on the following properties of a typical useful reasoning agent.
In principle, propositions might be stratified hierarchically, so that whenever a judgement about the truth of P is required for a reasonably diligent search for all information relevant to Q, then no judgement about Q is required for a reasonably diligent search for all information relevant to P. Then |~ might conceivably work, if we require Γ to include all the propositions found by the latest search. That is, the two-phase procedure using |~ may be repeated to simulate interleaving of search and inference. But, experience doesn't suggest such a natural stratification, and it seems pointlessly expensive to require completion of a reasonably diligent search in the large number of cases where the inference does not require it semantically.
|~ is an interesting relation, and it is normally a stronger inference tool (not a logically stronger relation), in the sense that there are normally Γ and P such that Γ |~ P but not Γ ⊢ P, while the reverse will not happen with conventional modes of assertion. So, in those cases where an appropriate search has been completed, and all of its results included in Γ, an agent should be allowed to add P. But, the agent must notice the fact that a diligent search is complete (this is necessary even if all inference steps use |~, since the agent must decide when to stop gathering members of Γ, and start applying |~), so it seems natural to encode the act of noticing completion of the search as a metatheoretical proposition added to current knowledge. This leads to the encoding of Γ |~ P as

    Γ, All(P, Γ) ⊢ P.
So, it appears that the information carried by |~ is essential in the design of practical reasoning procedures, and that the information in |~ may be encoded into ⊢ by adding appropriate metatheoretic operators, which construct assertions that certain sets of propositions contain all of the results of certain searches, to our formal language. On the other hand, I doubt that ⊢ can be defined efficiently and naturally from |~. An obvious candidate definition is

    Γ ⊢ P if and only if Δ |~ P for every Δ ⊇ Γ.
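The trouble with a superset-quantifying definition of this sort can be seen in a toy illustration. Assume a finite universe of propositions (a strong simplifying assumption: in general the quantification is over infinitely many supersets), and a toy nonmonotonic relation that infers "flies" by default unless "penguin" is present:

```python
from itertools import combinations

def nm_infers(gamma, p):
    # Toy nonmonotonic relation |~ : flies is inferred by default,
    # unless the exception "penguin" is among the hypotheses.
    if p == "flies":
        return "bird" in gamma and "penguin" not in gamma
    return p in gamma

def m_infers(gamma, p, universe):
    # Candidate definition of a monotonic relation from |~ :
    # Gamma |- P iff Delta |~ P for every superset Delta of Gamma
    # drawn from the finite universe.
    extra = universe - set(gamma)
    return all(
        nm_infers(set(gamma) | set(c), p)
        for r in range(len(extra) + 1)
        for c in combinations(extra, r)
    )

universe = {"bird", "penguin"}
print(nm_infers({"bird"}, "flies"))            # True: the default applies
print(m_infers({"bird"}, "flies", universe))   # False: some superset contains "penguin"
```

The candidate definition both destroys every default inference (some superset always contains the exception) and requires enumerating exponentially many supersets, which suggests why ⊢ is the better primitive.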
I do not claim that the particular syntactic form of All(P, Γ) is the right one for practical reasoning. But, this form illustrates some of the crucial properties of the right solution.
Of course, arguments of the sort advanced above do not solve any hard technical problems. The hard problem to solve in automating practical reasoning is the problem of reacting to updates (changes in knowledge) [#!update-problem!#]. That problem is essentially the same at a technical level, whether we use a monotonic inference relation ⊢ or a nonmonotonic |~. In conventional monotonic systems, without metatheoretic operators, the addition of new knowledge leads only to the addition of new inferences. The hard part of the update problem is how to handle retraction of previously accepted knowledge. There is no clear consensus even on the proper form in which to express retractions.
Both nonmonotonic inference relations, such as |~, and metatheoretic operators, such as All, seem to add a complication to the problem of changing knowledge, since they both lead to cases where added knowledge requires conclusions to be retracted. I conjecture, though, that once the basic problem of managing retraction of knowledge is solved, the extra interaction of additions and retractions introduced by nonmonotonicity will become much clearer. I propose that, by representing nonmonotonic reasoning behavior through metatheoretic operators, such as All, with the monotonic inference relation ⊢, we can improve the flexibility and transparency with which future breakthroughs in the update problem are applied to a wide range of interesting domains of reasoning.
While the four properties of useful reasoning agents mentioned in Section 4 support the use of formal systems that are conventional to the extent of being monotonic, they by no means support the particular choices of classical or intuitionistic reasoning. In fact, an additional property of reasoning agents argues strongly against such choices: the knowledge gathered by a useful reasoning agent will, from time to time, contain errors and even outright contradictions, and in a classical or intuitionistic system a single contradiction allows every proposition to be inferred.
An obvious fix is to have the agent test the consistency of Γ before applying logical inference. Unfortunately, consistency checking ranges from intractably expensive to fundamentally impossible, so the time and resources spent on consistency checking would dominate those spent on real reasoning steps. Rather, property 5 requires the use of paraconsistent formal systems [#!paraconsistent!#], such as relevance logics [#!relevance!#], instead of classical or intuitionistic formal systems. Further research is required to illuminate the choice of a particular paraconsistent formal system for a particular reasoning agent. The right paraconsistent formal system can reduce the harm done by errors and contradictions in the hypotheses used by a reasoning agent, allowing useful reasoning to go on during the typically long time between the introduction of an error or contradiction and its detection. But, paraconsistent systems still do not address the hard problem of retracting information when an error is finally detected.
The translation was initiated by Mike O'Donnell on 2001-12-05