Interpretable AI

Interpretable AI will be the Gold Standard. Be Right for the Right reasons.

Meet the Untangle Platform.
The Untangle Platform allows Deep Learning engineers, decision makers and domain experts to understand their models better.
Find out more

How Explainable AI Works:

Explainability while training

We explain how the model is learning while it is still being trained.
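
As a rough illustration only (not the platform's actual method), training-time explanation can be as simple as logging an input-gradient attribution for a fixed probe example after every update; the toy PyTorch model and data below are hypothetical placeholders:

    # A minimal sketch of surfacing explanations during training: after each
    # update we log a simple input-gradient attribution for a probe example.
    # The model, data and "attribution" here are hypothetical placeholders.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()

    probe = torch.randn(1, 8, requires_grad=True)   # fixed example to watch over time

    for step in range(3):                            # stand-in for a real training loop
        x, y = torch.randn(32, 8), torch.randn(32, 1)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

        # "explain while training": which probe features drive the prediction right now?
        if probe.grad is not None:
            probe.grad.zero_()
        model(probe).sum().backward()
        print(f"step {step}: attribution {probe.grad.abs().squeeze().tolist()}")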

Post-Hoc Explainability

After training has ended, we do a deep dive into what the model has learned and where its tipping points lie.
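
For illustration, one common post-hoc technique is gradient-based saliency on the trained model; the toy model, input and target class below are hypothetical stand-ins, not the platform's implementation:

    # A minimal sketch of a post-hoc technique: gradient-based saliency.
    # Model, input shape and target class are hypothetical examples.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 3))
    model.eval()                                # explain a model after training has ended

    x = torch.randn(1, 16, requires_grad=True)  # a single hypothetical input
    score = model(x)[0, 1]                      # score of the class we want explained
    score.backward()                            # gradients flow back to the input

    saliency = x.grad.abs().squeeze()           # larger values = more influential features
    print(saliency)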

Internal Explainability

We enable you to explore the internals of the network and visualise learned concepts, helping you derive more business value.
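
As a sketch of the underlying mechanism (assuming a PyTorch model; real concept visualisation goes further than raw activations), internal representations can be captured with forward hooks:

    # A minimal sketch of inspecting network internals via forward hooks.
    # Layer choice and shapes are hypothetical examples.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 3))
    activations = {}

    def save_activation(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    model[1].register_forward_hook(save_activation("hidden_relu"))

    x = torch.randn(4, 16)                       # a small hypothetical batch
    model(x)                                     # forward pass fills the activations dict
    print(activations["hidden_relu"].shape)      # inspect the internal representation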

External Explainability

We ask your model questions to figure out what it has learned in the context of your dataset.
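
As an illustrative sketch of external, black-box probing (assuming a simple perturbation approach, not necessarily the platform's own), each input feature is "questioned" by removing it and measuring how much the prediction shifts:

    # A minimal sketch of external probing: perturb one input feature at a
    # time and measure how much the prediction moves. The model and input
    # are hypothetical stand-ins for your own trained model and data.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 3))        # treated as a black box: we only call it
    x = torch.tensor([[0.5, -1.2, 0.3, 2.0]])

    with torch.no_grad():
        baseline = model(x)
        for i in range(x.shape[1]):
            perturbed = x.clone()
            perturbed[0, i] = 0.0                 # "ask a question": what if feature i is absent?
            delta = (model(perturbed) - baseline).abs().sum().item()
            print(f"feature {i}: output shift {delta:.4f}")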

Value of Interpretability:

  • Explainable outcomes leading to human or automated actions
  • Model debugging, error detection and correction
  • Faster and lighter Deep Learning models
  • Improved robustness, so models work in the real world
  • Identification of gaps in your data
  • Modification of models and their representations instead of having to retrain on new data

Curious to learn how we can help you?
Contact Us!

Where Deep Learning is used, we add value.

Understanding your Deep Learning models today can add huge value. Tomorrow it will be required by regulators.
  • Insurance
  • Manufacturing
  • Medtech
  • Self-Driving Cars
  • Fintech
  • Computer Vision

Automation of decision-making is not the future; it is here and now!

By understanding Deep Learning we can remove bias, harden models against adversarial attacks, and make sure that models are not relying on spurious correlations.

We are on a mission to build the most intuitive and effective tools for understanding AI.