Gian Luigi Chiesa is one of the talented data scientists at FRISS. He loves combining R and Python.
What do we do every time we’re about to cross a road and there’s a car approaching? We look into the driver’s eyes and we think: Oh, he sees me, I’m safe. Or he doesn’t, and I’m not. We can draw this conclusion because we know the driver is human and we think alike. We know how they would react because we know how we would react.
But what if the next time you’re about to cross a road and a car is approaching there are no eyes to look into? What if it’s a self-driving car? You cannot make eye contact with an algorithm. And you may want to think about these kinds of scenarios, because they are much closer than we think.
Soon, self-driving, fully autonomous cars will be occupying our roads. And instead of humans behind the wheel there will be these powerful AI-driven machines. They make sense of a complex world. And they make split-second decisions we bet our lives on.
Such AI innovations are trickling down into other areas as well, in the form of smart assistants for example. How many of us already interact daily with smart assistants such as Siri, Amazon’s Alexa, or Google Duplex?
The interactions we have with such machines will become indistinguishable from human interaction. And this means we must be able to trust them. Having trust is easy when you can see that the decisions the machine makes are correct. For example, when a complex system is predicting the weather, we can clearly see that it is often correct. And even when it isn’t, often the worst thing that can happen is that you get wet.
Trust is much more difficult when we cannot directly observe the outcome, the consequences of their decisions. Think about banks when they have to grant a loan. Or insurance companies when they have to accept or reject a policy application at underwriting. In such circumstances it becomes paramount to be able to trust the machines, because by not accepting a loan, you’re affecting someone else’s life. A whole family’s future.
Let me tell you a short story. It’s 2015, in New York. A man called Jason was walking home when he got jumped by four guys. They broke his eye socket and his jaw. Only because he was lucky did it not affect his hearing or leave him blind. A few days later, his partner, Virginia, went to the pharmacy to pick up his painkillers.
And when she was about to receive them, the pharmacist told her that her insurance had been cancelled. Coverage was denied. She started panicking. She started asking why. But the pharmacist of course couldn’t tell her why she was denied. So she started calling the insurance company, asking why it happened. And even the insurance company on the other end of the phone could not tell her why she was denied.
Only because she had the resources and the time to investigate by herself did she later find out what had happened: a few weeks earlier, before the incident, they had just moved to New York because she had a new job. And right after that they claimed thousands of dollars for Jason’s facial surgery. And we all know that claiming shortly after entering a policy is an indicator of fraud. We use it on a daily basis at FRISS.
But the issue here is not the indicator itself. The issue is that the insurance company in this case was not able to provide an explanation of why this whole cancellation happened.
This is a real story. And the Virginia in this story is Virginia Eubanks, who after this incident wrote a book called Automating Inequality, in which she explores more extreme cases where these automated systems are used by the police to target and punish the poor.
So what can we do? Well, a step in the right direction is to make this AI machine, this black-box model, more transparent. More interpretable. Because we are facing two kinds of situations: either we observe that the decisions they make are systematically correct, like the weather forecasting, not always, but often. Or we know how such decisions are made. This leads me to talk to you about interpretability in AI.
Interpretability in general was very well defined by Miller as “the degree to which a human can understand the cause of a decision”. So how does this fit into the AI concept? Well, AI is no more than a model which takes data as input and provides predictions as output.
So interpretable AI in this context means interpretable models. Transparent models. And some models are interpretable by design. Think about decision trees. Decision trees are a very popular model for classification. In this example we try to predict whether a person is male or female, starting with their height.
So we ask: is the person taller than 180 centimeters? Yes: we classify them as male. No: we ask another question: do they weigh more than 80 kilograms?
Yes: we classify them as male.
No: we classify them as female.
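Expressed as code, that tree is just a couple of nested questions. Here is a minimal sketch in Python; the thresholds (180 cm, 80 kg) are the ones from the example, while in practice a library such as scikit-learn would learn them from data.

```python
# A minimal sketch of the height/weight tree described above. The thresholds
# come from the example; a real tree would learn them from data.

def classify(height_cm: float, weight_kg: float) -> str:
    """Walk the tree: every `if` is one node we can inspect and explain."""
    if height_cm > 180:      # root node: taller than 180 cm?
        return "male"
    if weight_kg > 80:       # second node: heavier than 80 kg?
        return "male"
    return "female"

print(classify(185, 70))     # -> "male"   (decided at the root node)
print(classify(170, 65))     # -> "female" (decided at the weight node)
```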
So at each step, at each node, we can clearly see how the prediction was made: which question was asked and which decision followed. The issue is that this is a very simple model.
Decision trees, like linear regressions, are very simple models. And simple models cannot answer more complicated questions. For more complicated problems such as image recognition or weather forecasting, AI came to the rescue with more complicated solutions: deep neural networks.
Deep neural networks are really complex but highly performant models which are at the core of the smart assistants, the self-driving cars and the weather forecasting. They are also a black box to us: we don’t really know how each prediction is made. And this can become a problem, because automated black-box models can have severe consequences.
Firstly, a hazard in the context of automated systems is confusing correlation with causation. For example: we can very well predict the number of shark attacks based on the number of ice cream sales. As ice cream sales go up, the number of shark attacks is also found to rise.
But obviously this is a spurious correlation. The reality behind it is that the common cause is summer: that’s when people go to the beach more often, and they’re more exposed to shark attacks. Why does this happen? Well, the data we use to build AI models, if collected properly, reflects society. And there’s a catch. Every society has its own fundamental flaws, its own challenges, which can create patterns hidden in the data. And if we’re not careful, these patterns can lead to poisonous prejudices.
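To see how easily a common cause produces a strong correlation, here is a toy simulation with made-up numbers (nothing in it is real data): both series are driven only by a shared seasonal term, yet they correlate almost perfectly.

```python
# Toy simulation of the ice cream / shark attack example: both series depend
# only on a common seasonal cause, so they correlate strongly even though
# neither causes the other. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
month = np.arange(1, 13)
summer = np.sin((month - 3) * np.pi / 6)          # peaks around mid-year

ice_cream_sales = 1000 + 800 * summer + rng.normal(0, 50, 12)
shark_attacks = 5 + 4 * summer + rng.normal(0, 0.5, 12)

print(np.corrcoef(ice_cream_sales, shark_attacks)[0, 1])  # close to 1
```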
Let’s analyze a couple of examples where this went completely wrong. It can lead, for example, to the phenomenon of redlining: the systematic denial of health services, bank loans, or even the building of a supermarket to people living in certain areas, areas often associated with ethnic minorities or a particular race.
Another, even more disturbing example happened back in 2016, when an Israeli startup called Faception claimed it could flag and classify criminals based only on an analysis of their facial traits, using a deep learning model.
The explanation behind the system was really scarce. And we can clearly see where this can go wrong: it can simply reinforce our stereotype of a criminal or a terrorist, and lead to an even worse and more controversial kind of discrimination. So we face a dilemma: we want highly performant models, but we also want transparent AI.
What can we do? Several techniques have been developed in recent years that allow you to take your black-box model and explain its predictions in a way humans can understand. One of these techniques is LIME.
LIME is the acronym for Local Interpretable Model-agnostic Explanations. Which sounds really fancy, but the idea behind this technique is really simple and very clever. So let me show you how it works.
Let’s consider, for example, a very famous black-box model: the Google Inception neural network, which classifies images. And let’s consider the picture of this majestic Wiener dog. If we run our model on this image, we get a probability of 88% that this picture represents a Wiener dog.
Now we want to know what made this black-box model make such a decision. What LIME does is randomly create perturbed versions of the image, by modifying its pixels. For example, in the first one you can see that the mouth of the Wiener dog was blurred out, and in the second one the head was taken out. If we rerun these two examples through the model, we see that the probability of being a Wiener dog stays pretty much the same. So these parts were not really influential for the prediction.
But what if we take out the body of the Wiener dog and rerun the example through the model? We see that the probability of being a Wiener dog decreases significantly, to 12%. What does this mean? It means that the shape of the body was one of the most important factors our model used to predict a Wiener dog.
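In code, this perturb-and-rescore loop looks roughly as follows. This is a sketch using the `lime` Python package; `image` and `predict_fn` are placeholders standing in for the Wiener dog photo and the Inception classifier, which are not included here.

```python
from lime import lime_image
from skimage.segmentation import mark_boundaries

# `image` is an HxWx3 RGB array and `predict_fn` maps a batch of images to
# class probabilities -- placeholders for the photo and the Inception model.
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    predict_fn,
    top_labels=1,
    hide_color=0,        # switched-off superpixels are filled with 0 (black)
    num_samples=1000,    # number of perturbed copies of the image to score
)

# Highlight the superpixels that pushed the prediction towards the top class
# (for the Wiener dog, this should light up the body).
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
highlighted = mark_boundaries(temp / 255.0, mask)
```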
This was with images, but with tabular, structured data it behaves pretty much the same way. So let’s say we have this row of data describing a claim: the car model, the claim type, the person’s age, the area, and the time at which the claim occurred. LIME creates three perturbed examples, where first the car model, then the person’s age, and then the time of the claim are perturbed. And we see that if we rerun these through our fraud model, the probability remains pretty much the same.
But we have a fourth example, where we perturb the area in which the claim occurred. And we see that the probability decreases to 32%. A huge decrease. What does this mean? It means that the area was one of the top predictors of fraud in our model. This is a clue that a redlining phenomenon may be going on.
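The tabular version looks roughly like this. It is a self-contained sketch using the `lime` Python package, with synthetic claims data and a toy random forest standing in for the fraud model from the example; the feature names and the “fraud depends on area” rule are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(42)
feature_names = ["car_model", "claim_type", "person_age", "area", "claim_hour"]

# Synthetic claims: categorical columns are label-encoded as integers.
X = np.column_stack([
    rng.integers(0, 5, 1000),     # car_model
    rng.integers(0, 3, 1000),     # claim_type
    rng.integers(18, 80, 1000),   # person_age
    rng.integers(0, 10, 1000),    # area
    rng.integers(0, 24, 1000),    # claim_hour
])
y = (X[:, 3] >= 7).astype(int)    # toy label: "fraud" driven entirely by area

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["no_fraud", "fraud"],
    categorical_features=[0, 1, 3],   # car_model, claim_type, area
    discretize_continuous=True,
)

# Explain a single claim: `area` should dominate the explanation, which is
# exactly the kind of redlining clue described above.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```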
Does this mean we solved the problem? Does explainable AI mean trustable AI? Unfortunately it’s not that simple. As Cassie Kozyrkov explains: “Trust based only on explainability is like trust based on a few pieces out of a giant puzzle.” These explanation techniques give us just an intuition of what’s going on. They can help us, but on their own they are by no means enough for us to trust AI.
So what is, let’s say, the correct path, a good set of principles, for trusting AI? We just saw LIME. Even more important is to have a reliable testing framework, with constant monitoring: you want to check regularly that your model is doing what it is supposed to do. And finally, reliable data collection: you cannot trust your model if you don’t trust the people who actually collected the data.
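As an illustration of what constant monitoring could look like in practice, here is a minimal sketch, assuming a scikit-learn-style classifier and a freshly labelled batch of data; the function name, the AUC metric and the tolerance are illustrative choices, not a prescription.

```python
# Minimal monitoring sketch: alert when performance on fresh, labelled data
# drifts too far from the baseline measured at deployment time.
from sklearn.metrics import roc_auc_score

def monitoring_check(model, X_new, y_new, baseline_auc, max_drop=0.05):
    """Recompute AUC on a new labelled batch and compare it to the baseline."""
    scores = model.predict_proba(X_new)[:, 1]
    auc = roc_auc_score(y_new, scores)
    if auc < baseline_auc - max_drop:
        print(f"ALERT: AUC dropped from {baseline_auc:.3f} to {auc:.3f}")
    else:
        print(f"OK: AUC {auc:.3f} is within tolerance of baseline {baseline_auc:.3f}")
    return auc
```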
As data becomes bigger and models become more complex, the gap between humans and machines becomes wider. And explainable AI, trustable AI, aims to close this gap. To build a bridge between humans and machines. So that humans can understand machines better.
And by researching and developing new model interpretation techniques we can gain a better understanding of intelligence, a better understanding of the world’s phenomena. Which, ultimately, is the real goal of science.