SeCAM: Tightly Accelerate the Image Explanation via Region-Based Segmentation

Phong X. NGUYEN
Hung Q. CAO
Khang V. T. NGUYEN
Hung NGUYEN
Takehisa YAIRI

Publication
IEICE TRANSACTIONS on Information and Systems   Vol.E105-D    No.8    pp.1401-1417
Publication Date: 2022/08/01
Publicized: 2022/05/11
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2021EDP7205
Type of Manuscript: PAPER
Category: Artificial Intelligence, Data Mining
Keyword: 
Explainable Artificial Intelligence (XAI), machine learning, explanation, transparency, interpretability

Summary: 
In recent years, artificial intelligence has been applied in many different fields, with a profound and direct impact on human life. This raises the need to understand how a model makes its predictions. Since most current high-accuracy models are black boxes, neither AI scientists nor end-users fully understand what happens inside them. Therefore, many algorithms have been studied to explain AI models, especially for the image classification problem in computer vision, such as LIME, CAM, and Grad-CAM. However, these algorithms still have limitations, such as LIME's long execution time and CAM's lack of concreteness and clarity in its interpretations. In this paper, we therefore propose a new method called Segmentation - Class Activation Mapping (SeCAM), which combines the advantages of the algorithms above while simultaneously overcoming their disadvantages. We tested this algorithm with various models, including ResNet50, InceptionV3, and VGG16, on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) data set. Outstanding results were achieved: the algorithm met all the requirements for a specific explanation in a remarkably short time.
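The abstract describes SeCAM as a combination of segmentation with class activation mapping. The sketch below is not the paper's exact algorithm, only a minimal illustration of the underlying idea under assumed inputs: given a pixel-level CAM-style heatmap and a region label map (e.g. from a superpixel algorithm such as SLIC), the CAM scores are averaged within each region to produce a region-level explanation.

```python
import numpy as np

def secam_like_map(cam, segments):
    """Average a CAM heatmap over segmentation regions (SeCAM-style sketch).

    cam:      (H, W) float array, e.g. a CAM or Grad-CAM heatmap.
    segments: (H, W) int array of region labels (e.g. superpixels);
              label values may be arbitrary integers.

    Returns an (H, W) array where every pixel carries the mean CAM score
    of its region, turning a pixel-level heatmap into a region-level one.
    """
    out = np.empty_like(cam, dtype=float)
    for label in np.unique(segments):
        mask = segments == label
        out[mask] = cam[mask].mean()
    return out

# Toy example: a 4x4 "heatmap" split into two rectangular regions.
cam = np.arange(16, dtype=float).reshape(4, 4)
segments = np.zeros((4, 4), dtype=int)
segments[:, 2:] = 1          # left half = region 0, right half = region 1
region_map = secam_like_map(cam, segments)
```

Region-level averaging is what makes the explanation align with object boundaries rather than individual pixels; in practice the region labels would come from an image segmentation of the input, and the heatmap from the classifier being explained.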
