Abducing Priorities to Derive Intended Conclusions

Katsumi Inoue and Chiaki Sakama

Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI-99), pp. 44-49, Morgan Kaufmann, 1999.


We introduce a framework for finding the preference information needed to derive desired conclusions in nonmonotonic reasoning. A new abductive framework, called preference abduction, enables us to infer an appropriate set of priorities that explains a given observation skeptically, thereby resolving the multiple extension problem in the answer set semantics for extended logic programs. Preference abduction can also be combined with the usual form of abduction in abductive logic programming, and has applications such as the specification of rule preference in legal reasoning and preference view update. We also discuss the issue of learning abducibles and priorities, in which abduction to a particular cause is equivalent to abduction to a preference.
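The idea behind preference abduction can be illustrated with a toy example (this sketch, including the rule names and the simple dominance criterion, is our own illustration, not the paper's formal machinery). Two conflicting defaults about a penguin yield two answer sets; abducing a priority between the rules selects the preferred answer sets so that the observation becomes skeptically entailed, i.e., true in every preferred answer set:

```python
# Two conflicting defaults (classic penguin example, used here as an assumption):
#   r1: flies     <- bird,    not neg_flies
#   r2: neg_flies <- penguin, not flies
# Each answer set (extension) is generated by applying one of the two rules.
extensions = {
    "E1": {"bird", "penguin", "flies"},      # r1 applied
    "E2": {"bird", "penguin", "neg_flies"},  # r2 applied
}
applied = {"E1": "r1", "E2": "r2"}  # which default produced each extension

def preferred(priorities):
    """Keep extensions whose generating rule is not dominated:
    a pair (a, b) in `priorities` means rule a is preferred over rule b."""
    keep = {}
    for name, ext in extensions.items():
        r = applied[name]
        dominated = any((applied[other], r) in priorities
                        for other in extensions if other != name)
        if not dominated:
            keep[name] = ext
    return keep

def abduce_priority(observation):
    """Search candidate priority relations until the observation holds
    in every preferred extension (skeptical entailment)."""
    candidates = [set(), {("r1", "r2")}, {("r2", "r1")}]
    for prio in candidates:
        pref = preferred(prio)
        if pref and all(observation in ext for ext in pref.values()):
            return prio
    return None

print(abduce_priority("neg_flies"))  # → {('r2', 'r1')}: prefer r2 over r1
```

With no priority, both extensions survive and neither `flies` nor `neg_flies` is skeptically derived; abducing the priority r2 > r1 leaves only E2, explaining the observation `neg_flies`. The paper's framework does this within the answer set semantics for extended logic programs rather than over an explicit enumeration.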

Full Paper (gzipped postscript 161K)