Veritable Research: Explainable ML

A core research direction for Veritable is how to robustly explain models so that we can understand, introspect, and trust them.

Veritable's solutions are based on years of explainability research conducted at Carnegie Mellon University. We continue to view explainability as the backbone of trust in ML systems.

In the media