Explainable AI
While correctness and accuracy are core foci within the broad AI community, they are not the only desirable features of AI systems. When working with AI, humans also want to know the reasons behind its predictions. This has led to a burgeoning research area, ‘Explainable AI’, which aims to produce explanations supporting the outputs of AI systems. Explanations are useful in many ways, such as increasing human trust and satisfaction, ensuring the fairness of AI, extracting learned knowledge, and enabling human-AI collaboration.
We have been conducting research on two main approaches to explainable AI: devising (argumentation-based) transparent models and explaining (black-box and white-box) models. Our work covers many modes of data (e.g., numbers, images, text) and application domains (e.g., scheduling, product recommendation, and medical suggestion).
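To make the second approach concrete, here is a minimal sketch of one common post-hoc explanation technique: perturbation-based (occlusion) feature attribution for a black-box model. The model, feature names, and instance below are all hypothetical, invented purely for illustration; they do not reflect any specific system from our research.

```python
def black_box(features):
    # Hypothetical stand-in for an opaque model: a simple weighted score.
    weights = {"income": 0.5, "debt": -0.7, "age": 0.1}
    return sum(weights[k] * v for k, v in features.items())

def explain(model, instance):
    """Attribute the model's output to each feature by zeroing that
    feature out and measuring how much the output changes (occlusion)."""
    baseline = model(instance)
    attributions = {}
    for name in instance:
        perturbed = dict(instance, **{name: 0.0})
        attributions[name] = baseline - model(perturbed)
    return attributions

applicant = {"income": 4.0, "debt": 2.0, "age": 3.0}
print(explain(black_box, applicant))
```

The attribution for each feature here is the drop in the model's score when that feature is removed, so a positive value means the feature pushed the prediction up. Real explanation methods refine this basic idea, e.g., by averaging over many perturbations rather than a single occlusion.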