Three Steps to Boost Your Deployment Score

As we approach the end of the year, our friends over at the International Institute for Analytics (IIA) are re-tweeting some of their best content. One post in particular caught our eye – What’s Your Deployment Score? This is a great piece by Tom Davenport (who wrote a foreword for my new book Digital Decisioning: Using Decision Management to Deliver Business Impact from AI) about deployment:

Great data and fantastic models have no impact on the world unless they are deployed into operational systems and processes. If they’re not used to improve marketing campaigns, decide on optimal inventory levels, or influence hiring processes, they might as well not exist. The same goes, by the way, for AI models and systems. If they’re not deployed they’re just expensive and interesting baubles.

In this discussion he talks about some of the very low rates of deployment seen in surveys. Tom’s numbers – one in eight or so – are in line with other studies we have seen across predictive analytics, machine learning and AI more broadly (like this one by McKinsey). The vast majority of analytic (and machine learning and AI) models are therefore “expensive baubles”.

Failing to deploy and use a working model is one of those unacceptable analytic failures. Making sure this doesn’t happen on your machine learning and AI projects was the subject of a great article on applied AI from Cassie Kozyrkov of Google. Tom points out that one key is to define success based on deployment, tying the analytic team’s success to the deployment itself, and this is something we strongly recommend. Being clear about whether a project is a discovery-oriented effort, a pilot, or a “real” production project is also helpful.

We do a lot of work helping companies get models into deployment. Based on that work, here are three tips to boost your deployment score.

  1. Invest in decision-centric business understanding. Tom quotes Kimberly Holmes saying that “analytic models must solve stakeholders’ problems”. Like Tom and Kimberly, we think this is “THE most important factor in choosing which models to build”. To make sure you understand your stakeholders’ problems and can deliver models that solve them, build a decision model of the decision you expect to influence. Only a clear understanding of the decision-making you intend to change will effectively connect an analytic model to a business problem.
  2. Automate as much of this decision as you can and use a mix of technology to do so. Automation helps ensure that “stakeholders cannot opt out of using the model” as Kimberly puts it. Success in automating decisions means picking the right technology – predictive analytics, machine learning, AI, business rules or human decision-maker – for each piece of the modeled decision. Mix and match to get deployment to happen.
  3. Focus on continuous improvement. Start with a small, incremental improvement and then improve every week. Use simulation and ongoing analysis of decisions made and decision outcomes to improve results. Don’t aim for a single “moonshot” project; instead, apply lots of smaller, more practical models to create sustained improvement. This de-risks the project, gets the organizational change started earlier and keeps everyone focused on business results.
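To make tip 2 concrete, here is a minimal sketch of what “mix and match” can look like in practice: business rules handle the clear-cut and regulated parts of a decision, a predictive model scores the gray zone, and ambiguous cases are referred to a human decision-maker. All of the names and thresholds here (`decide_credit`, `predict_risk`, the cutoffs) are hypothetical illustrations, not a prescribed implementation.

```python
def predict_risk(applicant):
    """Stand-in for a deployed predictive/ML model.

    In a real system this would call a scoring service; here a toy
    formula returns a probability-of-default-style score in [0, 1].
    """
    base = 0.30 if applicant["income"] < 30000 else 0.10
    return min(1.0, base + 0.05 * applicant["open_loans"])


def decide_credit(applicant):
    """One modeled decision, automated with a mix of technologies."""
    # Business rule: regulatory knock-out, no model needed.
    if applicant["age"] < 18:
        return "decline"
    # Business rule: straight-through approval for small requests.
    if applicant["amount"] <= 500:
        return "approve"
    # Predictive model handles the gray zone.
    risk = predict_risk(applicant)
    if risk < 0.15:
        return "approve"
    if risk > 0.40:
        return "decline"
    # Escalate ambiguous cases to a human decision-maker.
    return "refer"
```

Because the model is embedded inside the decision logic rather than delivered as a standalone report, stakeholders cannot opt out of using it – every transaction flows through it by default.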

If you want to learn more, check out this paper on predictive enterprises or this short brief on succeeding with AI.