Richard Sutton’s recent blog post, “The Bitter Lesson”, makes a compelling case that most of the biggest advances in machine learning have come from general-purpose algorithms that scale with computation, rather than from specialized algorithms built around human domain knowledge.

This certainly matches my experience from my PhD, where I studied computer vision for object recognition. There, the state of the art moved towards increasingly abstract and less hand-crafted modeling techniques. During my time at Caltech, the state of the art relied both on choosing the underlying feature detectors (e.g. SIFT) and on specifying the underlying structure of the model by hand. For example, we worked with the “constellation of parts” models researched throughout the 2000s in Pietro Perona’s lab at Caltech, and then leveraged Bayesian inference to optimize the object models.
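To make that workflow concrete, here is a minimal sketch of hand-crafted feature extraction with SIFT, using OpenCV as a modern stand-in for the tooling of that era (the image path is a hypothetical placeholder):

```python
# Hand-crafted feature extraction, the pre-deep-learning workflow:
# the practitioner chooses the detector (here SIFT) rather than
# learning features from data. Requires opencv-python >= 4.4.
import cv2

# "object.jpg" is a hypothetical input image
image = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)

# Each keypoint gets a 128-dimensional descriptor; a part-based model
# (e.g. a constellation of parts) would then be fit on top of these.
print(f"Detected {len(keypoints)} keypoints, descriptors: {descriptors.shape}")
```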

Convolutional Neural Networks

Currently, the most successful object recognition algorithms have done away with both hand-picked feature detectors and explicit object models. These algorithms leverage large-scale neural networks (“deep neural networks”) called convolutional neural networks. Convolutional Neural Networks (CNNs), first developed on a smaller scale by Yann LeCun at Bell Labs in the 1990s, do not require explicit feature detectors. Instead, the early layers of the network learn their own features, which become increasingly abstract the deeper one traverses into the network. For instance, see this paper by Rob Fergus, which visualizes the activity within convolutional neural networks and evaluates them on Caltech datasets. These models no longer require the same careful structural design that “constellation of parts” models demanded. However, CNNs still require a large degree of hand-tuning, especially when it comes to defining the structure of the network itself.
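To illustrate how much of that structure is still chosen by hand, here is a minimal sketch of a small CNN in PyTorch. It is not any particular published architecture; every layer count, filter size, and channel width below is an arbitrary human choice:

```python
# A minimal CNN sketch in PyTorch. The features are learned from data,
# but the structure (layers, filter sizes, channels, pooling) is still
# hand-chosen by the practitioner.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Early layers tend to learn edge- and texture-like filters
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            # Deeper layers respond to increasingly abstract patterns
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)  # (N, 32, 8, 8) for 32x32 inputs
        return self.classifier(x.flatten(1))

model = SmallCNN()
logits = model(torch.randn(1, 3, 32, 32))  # one random 32x32 RGB image
print(logits.shape)  # torch.Size([1, 10])
```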

Richard Sutton notes that more general learning mechanisms have achieved superior performance in other areas of AI as well, citing examples like chess, Go, and NLP.

Vidora’s Approach

At Vidora we take this philosophy to heart, using general-purpose machine learning in our product, Cortex. Early stages of the machine learning pipeline, like feature cleaning, feature engineering, and model tuning, currently rely on bespoke techniques developed by humans from intuition and prior knowledge. Inevitably, the short-term gains of bespoke engineering will give way to clever learning and search paradigms powered by massive computational power, and we plan to be at the forefront as this transition becomes a reality.
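As a rough sketch of what automating these stages can look like (a generic scikit-learn example, not Cortex’s actual implementation), preprocessing choices and model hyperparameters can be folded into a single cross-validated search rather than tuned by hand:

```python
# A generic sketch of replacing bespoke, hand-tuned pipeline stages
# with search: the imputation strategy and model hyperparameters are
# selected by cross-validated grid search instead of human intuition.
# The dataset here is synthetic, purely for illustration.
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

pipeline = Pipeline([
    ("impute", SimpleImputer()),
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])

# The search space covers preprocessing choices as well as model tuning
param_grid = {
    "impute__strategy": ["mean", "median"],
    "model__C": [0.01, 0.1, 1.0, 10.0],
}

search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```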

Read Richard’s full blog post here: http://incompleteideas.net/IncIdeas/BitterLesson.html

To learn more about Vidora or to see a demo, email us at partners@vidora.com.

Want to Learn More?


Schedule a demo and talk to a product specialist about how Vidora’s machine learning pipelines can speed up your ML deployment and ultimately save you money.