Cathy O’Neil recently wrote an interesting piece on regulating automated decision-making. I am not going to argue about whether the use of algorithms should or should not be regulated, because I think it is inevitable that it will be. The question is how companies should respond to these regulatory efforts as they are rolled out to ever more consumer-facing decisions.
Regulation, she says, is “a necessary effort to reassert human control as opaque algorithms take over bureaucratic processes.” Many organizations are going to react to this by trying to pass regulatory approval through manual decision-making: by insisting that the “final” decision is a human one. As an approach, this is simply not going to work. Too many decisions must be automated: the time window for making the decision is too short, the decision is required 24×7, consumer expectations are such that only an immediate response will be acceptable, or for one of many other reasons. If AI cannot be used in these decisions, then the potential (positive) impact of AI will be reduced dramatically.
So how do you ensure “human control” over “opaque algorithms” if not by ensuring that the final decision is made manually? Three things – explicability, decomposition and business control.
First, a lack of explanation is unconscionable and unnecessary for AI algorithms that are now or soon will be regulated. You can use opaque algorithms in many places – most regulators don’t care how you come up with marketing offers or detect fraud, for instance, so you can use any algorithm you like. But when you start thinking about credit, access to limited services or the criminal justice system, you need to use algorithms that can be explained. Many such algorithms exist, and it’s a matter of getting your data science team to let go of “must be the absolute most accurate algorithm” and adopt a “best of the algorithms that are sufficiently explicable” mindset.
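That selection rule can be sketched in a few lines. This is a minimal illustration, not a real benchmark: the candidate model names, accuracy figures, and the binary “explicable” flag are all hypothetical stand-ins for whatever evaluation your data science team actually runs.

```python
# Hypothetical model-selection sketch: rather than picking the single most
# accurate algorithm, pick the most accurate among those that can be explained.
# All names and scores below are illustrative, not from any real evaluation.

candidates = [
    {"name": "deep_neural_net",   "accuracy": 0.93, "explicable": False},
    {"name": "gradient_boosting", "accuracy": 0.91, "explicable": False},
    {"name": "scorecard",         "accuracy": 0.88, "explicable": True},
    {"name": "decision_tree",     "accuracy": 0.86, "explicable": True},
]

def pick_model(models):
    """Return the most accurate model that is sufficiently explicable."""
    explicable = [m for m in models if m["explicable"]]
    if not explicable:
        raise ValueError("no sufficiently explicable candidate")
    return max(explicable, key=lambda m: m["accuracy"])

print(pick_model(candidates)["name"])  # prints "scorecard"
```

The point of the sketch is the filter-then-maximize order: explicability is a hard constraint, accuracy is optimized only within it.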
Second, you need to move away from the idea that you are going to build one big “uber-algorithm” – an AI that decides if this loan should be approved or if this claim should be paid. Instead, you need to break your decision-making down into its component pieces (using decision modeling, for instance) and then find the right approach for each piece. Some of these pieces – the sub-decisions – will be good candidates for AI; others might use more traditional analytic approaches or be specified as explicit logic. This makes each AI algorithm easier to explain and reduces the impact of each one. If you implement the decision model the way we do, using a Business Rules Management System, you can also log every sub-decision as you go. This lets you describe the result of each algorithm, the explanation for that result (why the algorithm came up with it) AND what you did with the result. You can clearly show the regulator how you decided and what the role(s) of your AI algorithm(s) were.
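Here is a minimal sketch of that decomposition-plus-logging pattern for a loan decision. The sub-decisions, thresholds, and the shape of the log record are all hypothetical – a real implementation would live in a Business Rules Management System and call real models – but the structure shows how every sub-decision is recorded with its result and its explanation.

```python
# Sketch of a decomposed loan decision where every sub-decision is logged
# with its result and an explanation. Thresholds are illustrative only,
# not a real credit policy.

decision_log = []

def record(name, result, explanation):
    """Append a sub-decision outcome to the audit log and pass the result through."""
    decision_log.append({"sub_decision": name, "result": result,
                         "explanation": explanation})
    return result

def affordability(applicant):
    ratio = applicant["debt"] / applicant["income"]
    return record("affordability", ratio < 0.4,
                  f"debt-to-income ratio {ratio:.2f} vs threshold 0.40")

def credit_risk(applicant):
    # Stand-in for an explicable ML model's output.
    score = applicant["credit_score"]
    return record("credit_risk", score >= 620,
                  f"credit score {score} vs cutoff 620")

def approve_loan(applicant):
    # Evaluate each sub-decision explicitly so every one is logged,
    # then combine them into the final decision.
    affordable = affordability(applicant)
    low_risk = credit_risk(applicant)
    result = affordable and low_risk
    return record("approve_loan", result,
                  "all sub-decisions passed" if result else "a sub-decision failed")

applicant = {"income": 50000, "debt": 15000, "credit_score": 700}
decision = approve_loan(applicant)
print(decision, len(decision_log))  # prints "True 3"
```

After the run, the log contains one entry per sub-decision plus the final decision, which is exactly the trail you would show a regulator: what each algorithm concluded, why, and what was done with it.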
Finally, you need the design of this overall decision model – the structure of your overall algorithm, if you like – to be under business control. Fortunately, decision modeling is very business-friendly, and if you begin, as we do, by asking the business team how they want to decide, your model will reflect how they think. Into this business framework you can plug your AI algorithms. This business control makes it easier to explain the approach to regulators (they’ll find it easier to believe an experienced, gray-haired executive than her bright young data scientist, after all) and more likely that legal issues of discrimination will be identified before they are encoded.
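One way to keep that structure under business control is to declare it as data, separate from any model code, so the business team can review which sub-decision uses which approach. The decision names and approach labels below are hypothetical examples of such a declaration, not a prescribed schema.

```python
# Hypothetical decision-model structure declared as plain data, so the
# business team can review and own the structure without reading model code.

decision_model = {
    "decision": "approve_claim",
    "depends_on": [
        {"sub_decision": "policy_in_force",  "approach": "explicit business rules"},
        {"sub_decision": "fraud_likelihood", "approach": "AI model (explicable)"},
        {"sub_decision": "repair_estimate",  "approach": "traditional analytics"},
    ],
}

# A reviewer-friendly summary of who decides what, and how.
for dep in decision_model["depends_on"]:
    print(f'{dep["sub_decision"]}: {dep["approach"]}')
```

Because the structure is just data, it can be versioned, reviewed, and signed off by the business before any AI component is plugged into its slot.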
Because a decision model can be reviewed and explained, it allows regulators to see how you intend to build your algorithm before you build it. This helps catch problems before they are embedded in algorithms, without imposing the kind of delay that reduces accuracy in your AI algorithms. Such a decision model provides “evidence … that they follow relevant laws against discrimination”.
A decision model is a clear, understandable, business-centric and regulator-friendly framework for your AI algorithms. Plus, it helps you build the AI algorithms and analytic models you need to improve your decision-making. It’s good now as a way to focus and frame your AI projects for success, and it’ll be even better later when the regulators come calling.