Concepts have recently gained attention as an explainable AI (XAI) tool because of their human interpretability. While recent work shows that concepts can provide meaningful explanations for high-dimensional data drift, many open questions remain. This topic offers several directions: studying the properties of the embeddings required for concept extraction, adapting concept-based drift localization to online settings, extending the current approach from images to other data domains such as text, or developing new methods for detecting drift directly through concepts (a rough sketch of this last direction follows).
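To make the last direction concrete, the following Python example is one possible instantiation, not an established method from the literature: it learns a concept direction in embedding space with a linear probe (the concept activation vector, or CAV, idea from TCAV, Kim et al. 2018), projects a reference window and a current window onto it, and applies a two-sample Kolmogorov-Smirnov test to the resulting concept scores. All embeddings, dimensions, and the drift itself are synthetic placeholders.

    # Illustrative sketch: concept-based drift detection via a CAV and a KS test.
    # Everything below is synthetic; in practice pos/neg would be embeddings of
    # concept examples vs. random examples, and the windows would come from a stream.
    import numpy as np
    from scipy.stats import ks_2samp
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def concept_direction(pos, neg):
        """Fit a linear probe separating concept examples from random ones;
        the normalized weight vector serves as the concept activation vector."""
        X = np.vstack([pos, neg])
        y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
        w = LogisticRegression(max_iter=1000).fit(X, y).coef_.ravel()
        return w / np.linalg.norm(w)

    # Hypothetical 64-dimensional embeddings: examples expressing the concept
    # versus random counterexamples.
    d = 64
    axis = np.zeros(d)
    axis[0] = 1.0
    pos = rng.normal(size=(200, d)) + 2.0 * axis
    neg = rng.normal(size=(200, d))
    cav = concept_direction(pos, neg)

    # Reference window versus a current window with a simulated shift in how
    # strongly the concept is expressed.
    reference = rng.normal(size=(500, d))
    current = rng.normal(size=(500, d)) + 1.5 * axis

    # Two-sample KS test on the 1-D concept scores: a significant result flags
    # drift and attributes it to this particular concept.
    result = ks_2samp(reference @ cav, current @ cav)
    print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.2e}")

Under this setup, localizing drift would amount to repeating the test once per concept and reporting the concepts whose scores shift significantly, with an appropriate multiple-testing correction.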