Refining Theory with Multiple Faults

Somkiat TANGKITVANICH, Masamichi SHIMURA

Publication
IEICE TRANSACTIONS on Information and Systems   Vol.E75-D   No.4   pp.470-476
Publication Date: 1992/07/25
Print ISSN: 0916-8532
Type of Manuscript: Special Section PAPER (Special Issue on Algorithmic Learning Theory)
Keyword: theory refinement, correcting multiple faults, learning from theory and examples, inductive logic programming



Summary: 
This paper presents a system that automatically refines theories expressed in function-free first-order logic. Given only classified examples of the concept, our system can efficiently correct multiple faults in both the concept and the subconcepts of the theory. It can refine larger classes of theories than existing systems can, since it overcomes many of their limitations. Our system is based on a new combination of inductive and explanation-based learning algorithms, which we call biggest-first multiple-example EBL (BM-EBL). From a learning perspective, our system improves on the FOIL learning system in that it accepts a theory as well as examples. An experiment shows that even when given a theory with a classification error rate as high as 50%, our system still learns faster and more accurately than when it is given no theory.