Eric Siegel and I recently had a great discussion about doing Machine Learning BACKWARDS – you can watch the recording below or on our YouTube channel. Eric, if you don’t know him, is the founder of Predictive Analytics World, a leading consultant, and the author of “Predictive Analytics”. You can also check out Eric’s new Coursera class.
This discussion was prompted by Eric and me talking about the rate of failure in Machine Learning projects. For instance, one survey found that 85% or more of machine learning projects fail to add business value – and that number has gone up, not down, in recent years. Companies want to find a technology they can buy that will somehow solve this problem – they conclude that something is wrong or missing with their ML technology and that buying ModelOps or investing in their data infrastructure will fix it. At Decision Management Solutions, we have a lot of experience helping companies – big, boring, established companies – automate decision making. This experience shows that it’s not that the ML models fail completely so much as that they end up stuck in a perpetual pilot – they kinda sorta work, and people get excited about them, but they never get deployed and operationalized. They never make any business difference.
Eric has a great phrase he uses in his new Coursera class on greenlighting and managing ML projects – he talks about “planning backwards”. Project teams need to decide how they’re going to use the model before they build the model, or even define what the model should predict. The team needs to understand the operational environment and deployment scheme and focus on the “carrot” – the value of the model. Projects most often fail to realize value because they failed to launch – operationalize – the model. And it turns out you can’t build the model and then figure this stuff out – you have to define and agree, very specifically, on how any model is going to be deployed before you decide how to prepare the data and actually do the modeling. Otherwise you end up with an ML model described in PowerPoint and, as Eric says, “PowerPoint is the place where models go to die”.
Anyway, check out the video below and, if you are interested in how we approach this, why not read our white paper on Framing Analytic Requirements?