Gian Luigi Chiesa, Data Scientist at FRISS, talks about the need for explainable artificial intelligence.
AI will be everywhere
Amongst experts, there is zero doubt that Artificial Intelligence (AI) will soon be part of everyone’s daily life. Fully autonomous, self-driving vehicles will occupy our roads. Instead of humans behind the wheel, powerful AI-driven machines will make sense of the complex, ever-changing world, allowing us to travel even more safely than we do today by making split-second decisions we bet our lives on.
AI will be indistinguishable from humans
Such AI innovations are trickling down to other areas as well. Many of us are already interacting with AI-driven machines in the form of smart assistants, like Apple’s Siri, Google’s Duplex or Amazon’s Alexa. The interactions we have with such machines will become indistinguishable from human interactions. This means we must be able to trust their decisions.
AI needs to be trusted, sometimes
Having trust in a machine is easy if you can see that its predictions are correct. For instance, we can directly observe that a self-driving car takes a turn smoothly and brakes in time, and we can always overrule its decisions in more challenging conditions. When a complex system predicts the weather, we see that it is usually correct. And even when it isn’t, often the worst thing that can happen is that you get wet.
AI fairness and correctness
Trust is much more difficult when you cannot directly observe the predictions, for instance when an AI-driven model automatically rejects an insurance policy application. In such cases it is much harder to judge whether the decision is correct or even justified. This matters most when AI-driven decisions affect people’s lives, as in automated eligibility systems for food stamps or health insurance benefits. In such cases, you must be able to justify your decisions.
As models become better and data becomes bigger, the algorithms behind them grow more and more complex, as in Deep Neural Networks, which can consist of hundreds of hidden layers and thousands of nodes. The price we pay for this complexity is a loss of interpretability: the ability to answer why the model makes a certain decision.
Discrimination
AI-driven machines are very good at picking up trends in data and in using this information to make predictions. If collected properly, data reflects society. Every society has its challenges and fundamental flaws, which are hidden in the data.
If we are not careful, AI machines can encode potentially poisonous prejudices. This can lead, for example, to the phenomenon of “redlining”: the systematic denial of healthcare benefits and other financial services to people living in specific areas, areas that are often associated with ethnic minorities or a particular race.
FRISS = honest insurance = explainable AI
FRISS is committed to honest insurance. We strive to provide evidence that our predictions are systematically correct and fair. We achieve this by combining cutting-edge machine learning models with model interpretation and explanation techniques such as LIME or global surrogate models.
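To illustrate one of these techniques, here is a minimal sketch of a global surrogate model in Python with scikit-learn and synthetic data. The dataset, feature names, and the gradient boosting model are illustrative assumptions, not FRISS’s actual models or data; the point is only to show how a simple, interpretable tree can be trained to mimic, and thereby help explain, a complex black-box model.

```python
# A minimal global surrogate sketch: approximate a black-box model with a
# shallow decision tree and read off human-interpretable rules.
# All data and model choices here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic stand-in for (anonymized) policy application data.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# 1. Train the complex "black box" model.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2. Fit a shallow, interpretable tree on the black box's *predictions*,
#    not on the original labels: the tree approximates the model, not the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Measure fidelity: how closely the surrogate mimics the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")

# 4. Inspect the human-readable rules that approximate the model's behaviour.
print(export_text(surrogate, feature_names=feature_names))
```

The fidelity score indicates how faithfully the surrogate mimics the black box; only when it is high can the tree’s rules be read as an approximate, global explanation of the model’s behaviour.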
We believe that honest AI means transparent AI: you can see which data is used in each prediction, and you can quickly correct the model if you believe its predictions are unfair. Humanity’s history is a story of automation. But trust should not be automated. Trust is gained through honesty and fairness. And if you don’t understand the process, how do you know you are fair and honest?