Concept-based explanations have gained significant attention as an explainable AI (XAI) technique because concepts are human-interpretable. While such explanations are widely studied in the image and text domains, much less work has been done on 3D data. This project will explore whether human-interpretable concepts can be identified in 3D datasets and, if so, evaluate their suitability as an explanation technique.
Literature