This project is still a work in progress. Detailed information and background will follow (2025–2026).
Central starting points for explainable AI (XAI)
There are numerous methods for explainable AI. Many fall into one of the following high-level categories:
- Understanding the data distribution
- Models that are intrinsically interpretable
- Using physics or domain knowledge
- Visualizing model internals/representations (e.g., t-SNE)
- Understanding feature influence (e.g., Grad-CAM, LIME)
- Quantifying uncertainties
- Documenting training and data
- Providing textual explanations
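To make one of these categories concrete, here is a minimal sketch of the core idea behind LIME-style feature-influence analysis: perturb the input around a point of interest, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients approximate the local influence of each feature. The toy model, kernel width, and sample count below are illustrative assumptions, not part of this project; a real analysis would use the `lime` package or a comparable library.

```python
import numpy as np

# Hypothetical black-box model: feature 0 dominates, feature 1 barely matters.
def black_box(X):
    return 3.0 * X[:, 0] ** 2 + 0.1 * X[:, 1]

def lime_style_explanation(predict, x0, n_samples=1000, kernel_width=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate around x0 (the core idea of LIME)."""
    rng = np.random.default_rng(seed)
    # Sample perturbations around the point of interest and query the model.
    X = x0 + rng.normal(scale=kernel_width, size=(n_samples, x0.size))
    y = predict(X)
    # Proximity weights: samples close to x0 count more.
    d = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Weighted least squares (intercept + one slope per feature),
    # implemented by scaling each row with sqrt(weight).
    A = np.hstack([np.ones((n_samples, 1)), X - x0])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[1:]  # local feature influences

x0 = np.array([1.0, 1.0])
influence = lime_style_explanation(black_box, x0)
# Near x0 = (1, 1), the local gradient is roughly (6.0, 0.1),
# so feature 0 should come out as far more influential.
```

The same perturb-and-fit pattern carries over to images and text; only the perturbation scheme and the interpretable representation change.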
Working with LLMs, Chatbots, GenAI – from a user’s perspective
My blog post Collaborating with Chatbots discusses system-side limitations, human biases when interacting with chatbots, and numerous practical challenges relevant to user training.
Stakeholders and their explainable AI requirements
Different stakeholders have different requirements for trustworthy AI.
- AI developers
- AI users
- Affected persons (who are not necessarily users themselves)
- Auditors and regulators, governance specialists
- Decision-makers (e.g., in business)