Interpreting Machine Learning Models: Vidora’s Quarterly Report – Q3, 2018
As the business applications of Machine Learning (ML) expand, non-technical stakeholders are increasingly interested in interpreting machine learning models. Traditionally, ML systems are difficult to interpret, but a well-implemented and interpretable machine learning model can enable your business to make faster, smarter decisions.
In our Q3 quarterly report, we discuss the benefits of understanding and interpreting machine learning models, along with some of the techniques that allow you to realize those benefits. An interpretable model can help you:
- Validate whether your model is performing well enough to deploy
- Diagnose a model that is performing poorly
- Use models to inform important strategic aspects of your business
In this report we focus on a specific type of ML – Supervised ML. Supervised ML problems have the form y = f(x), where x is a set of input features, y is the output we want to predict, and f is a mapping learned from labeled examples.
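To make the y = f(x) form concrete, here is a minimal sketch (not from the report itself) in which f is a one-variable linear model learned from example (x, y) pairs by ordinary least squares. The function name and the toy data are our own illustration, not part of Cortex.

```python
def fit_linear(xs, ys):
    """Learn f(x) = a*x + b from training pairs via ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return lambda x: a * x + b

# Training data generated by the rule y = 2x + 1 (unknown to the model)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

f = fit_linear(xs, ys)
print(round(f(4.0), 6))  # the learned f recovers the rule, so f(4) = 9
```

Because f here is just a slope and an intercept, the model is directly interpretable: the learned coefficient a tells you exactly how much y changes per unit of x, which is the kind of transparency the report advocates.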
Vidora enables anyone in any business to build and use complex machine learning models. With Vidora’s self-service platform, Cortex, machine learning is intuitive, interpretable, and fast, and the entire machine learning pipeline – from raw data to model outputs – is automated. Cortex was developed by experts in machine learning and artificial intelligence from Stanford, Berkeley, and Caltech, and it sits at the heart of some of the largest global brands, such as Walmart, News Corp, and Discovery.