Turning the Machine Learning Black Box into a Glass Box
Some Machine Learning methods are often referred to as a black box. Take this quote from the Sam Harris podcast episode featuring an interview with AI pioneer Stuart Russell:
"So, if I’m not mistaken, most, if not all of these deep learning approaches, or even more generally machine learning approaches are essentially black boxes, in which you can’t really inspect how the algorithm is accomplishing what it is accomplishing."
It is indeed difficult to argue against this; however, bear in mind that we choose Machine Learning over traditional statistical methods precisely because it can handle far larger amounts of data.
The same is true for rule-based systems: they may start out simple, but once the rules become sufficiently complex, understanding the reasoning behind an output grows just as difficult. It is precisely within these complex systems that Machine Learning shines.
We see that the black box issue stretches well beyond its recent and ubiquitous association with Machine Learning systems. Nevertheless, as with all complex systems, we need to ensure transparency and explainability.
There are two ways to do so:
1. Disclosing the individual components of a Machine Learning model: the algorithms and scripts
2. Explaining why and to what extent various components have influenced an output (see the sketch below)
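To make the second point concrete, here is a minimal sketch of one common explainability technique, permutation importance, which estimates how strongly each input feature influences a model's predictions. The dataset, model, and library choices below are illustrative assumptions, not the specific approach used at Visium or described in the article linked at the end.

```python
# A minimal sketch, not Visium's actual tooling: permutation importance is one
# common way to measure how strongly each input influences a model's output.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a "black box" model on a public dataset (illustrative choice).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and record how much the test accuracy drops:
# the larger the drop, the more that feature influenced the predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Techniques like this let us quantify, rather than merely assert, which parts of the input drove a given prediction.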
At Visium, we ensure both 1) knowledge transfer and 2) explainability of outputs. Turning the black box into a glass box is vital to building trust in AI and Machine Learning.
Do you want to know more about the Pandora's box of Machine Learning? Check out this article written by one of our AI experts. Axel goes into more detail about explainability of outputs and demonstrates how our engineers at Visium achieve this.
https://www.forum-epfl.ch/app/uploads/2019/05/Mag_final.pdf#page=32