Why is this ad appearing on this site? This was a frequent question both advertisers and publishers asked of Google. Behind the scenes, machine learning models match the best ad to the best website, given a set of constraints including budget and the dynamics of the ad auction. Untangling the decision chain across the many different targeting systems to answer that question is a knotty task indeed.
In March, Peter Norvig, Google's head of research, spoke at MIT. One of the challenges facing computer scientists over the next decade, he said, is debugging the complex machine learning systems that power revolutionary products from photo recognition to speech recognition to machine translation.
Microsoft announced yesterday that it had built a speech engine that understands spoken English as well as humans do. Google Photos identifies photos of people throughout their growth from infant to toddler to teen. These benefits are just the beginning of the advances we’ll see from new forms of machine intelligence. But they face the same challenge as the Google advertiser asking why the ad appeared here. How can we explain how hyper-complex systems work?
The problem isn’t limited to a few marketers. Regulators in the European Union are writing legislation that grants citizens a right to explanation. In a world where machine learning ranks certain recruiting candidates above others, evaluates college applications, and characterizes work output, we will all want to understand why.
Today, we ask these questions of email. Why wasn’t this one tagged as spam? Or Netflix and Amazon recommendations. No, I do not want to read a bodice-ripper.
But in the future, as computer systems assist humans in broader and more critical decisions, the questions will persist and multiply in both complexity and importance.
Why wasn’t I accepted to my first choice college? What about my code did the algorithm deem to be low-quality? I’m ranked lower than my peers at work: why? A large high-frequency trader sold a huge quantity of out-of-the-money puts on Apple and the stock market fell 10 percent in a day. Why?
I don’t believe that these novel machine intelligence systems will replace humans in consequential decisions any time soon for many applications. Instead, they will support and inform human decision-making, which reinforces the need for many of these systems to explain why.
Sakichi Toyoda pioneered the use of the Five Whys, an iterative questioning technique to diagnose, understand, and improve Toyota’s production system. Jeff Bezos used the Five Whys in 2004 to investigate why a conveyor belt had injured a worker. This technique helps us establish cause and effect.
Today, we can’t ask the Five Whys of most intelligent computer systems. Establishing cause and effect in complex multi-layer neural networks is a challenging problem to solve, and it will be a critical one if machine learning is to fulfill its promise.
Published 2016-10-19 in Trends