The workshop schedule can be found here: https://isee4xai.com/xcbr-workshop-2022/
XCBR is a workshop that aims to provide a forum for exchanging information about trends, research issues, and practical experiences in using Case-Based Reasoning (CBR) methods to add explanations to a range of AI techniques (including CBR itself).
The success of intelligent systems has led to an explosion of new autonomous systems with capabilities such as perception, reasoning, decision support and autonomous action. Despite their tremendous benefits, many of these systems operate as black boxes, and their effectiveness is limited by their inability to explain their decisions and actions to human users. The problem of explainability in Artificial Intelligence is not new, but the rise of autonomous intelligent systems has made it necessary to understand how they reach a solution, make a prediction or recommendation, or reason to support a decision, in order to increase users' trust in these systems. Additionally, the European Union's regulation on the protection of natural persons with regard to the processing of personal data includes provisions on the need for explanations to ensure fair and transparent processing in automated decision-making systems.
The goal of Explainable Artificial Intelligence (XAI) is “to create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of Artificial Intelligence (AI) systems”.
To this end, the XCBR workshop is structured around activities that foster the exchange of ideas and interaction, highlighting the main bottlenecks and challenges, as well as the most promising research lines, for CBR research related to the explanation of intelligent systems.
CBR research has long experience with interactive explanations and with exploiting memory-based techniques to generate them, and this experience can be successfully applied to explaining emerging AI and machine learning techniques.