Veritable Research: Explainable ML
A core research direction for Veritable is studying how to robustly explain models so that practitioners can understand, introspect, and trust them.
Veritable solutions are built on years of explainability research conducted at Carnegie Mellon University. We continue to view explainability as the backbone of trust in ML systems.
Publications
In the media

MODEL EXPLANATIONS | 3 min
Machine learning models require the right explanation framework, and it's easy to get wrong.

EXPLAINABILITY | 4 min