Black Box is not a one-size-fits-all artificial intelligence solution

By Katelyn Liu

Modern milestones in artificial intelligence, from IBM’s Watson defeating reigning Jeopardy! champions in 2011 to DeepMind’s (now a subsidiary of Google) AlphaGo defeating world champion Lee Sedol at Go in 2016, have our science-fiction fantasies running rampant. Back in reality, however, the true capabilities of today’s AI deserve closer scrutiny.


What is a “Black Box”?

Though AI today is an all-pervasive technology, a major challenge is that most AI models are incomprehensible to humans3. The recent AI buzz has generated enough awareness across fields that you have almost certainly heard the term deep neural network3,4.

In deep neural networks, inputs are combined and recombined across numerous hidden layers that emulate the layered processing of neurons in the brain. As the layers stack up, it becomes increasingly difficult to interpret how all the inputs flow together to form the final outcome or prediction3.
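
To make this concrete, here is a minimal sketch in Python (using NumPy, with randomly initialized placeholder weights rather than any trained model) of how a small feedforward network mixes its inputs across hidden layers:

```python
import numpy as np

# A toy feedforward network: 10 input features -> 16 hidden -> 8 hidden -> 1 output.
# Weights are random placeholders, not a trained model.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

W1, b1 = rng.normal(size=(10, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=10)                          # one input record
h1 = relu(x @ W1 + b1)                           # inputs combined into 16 hidden values
h2 = relu(h1 @ W2 + b2)                          # recombined again into 8 hidden values
prediction = 1 / (1 + np.exp(-(h2 @ W3 + b3)))   # final score squashed into [0, 1]

print(prediction)
```

Even in this toy example, the hidden values h1 and h2 are mixtures of every input feature, so there is no direct way to read off which original input drove the final prediction; real deep networks have millions of such weights.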

This lack of transparency in AI has been termed a “Black Box”3. While deep neural network models are useful for tasks such as grouping unlabeled data into clusters based on similarity, the concern is a growing tendency to default to “Black Box” solutions for all situations3,4.

This becomes problematic when we need to understand the rationale behind a prediction, particularly when that prediction drives downstream decisions that can deeply affect people’s lives, such as determining an individual’s risk of recidivism3,4.

An example: COMPAS

COMPAS is an acronym for Correctional Offender Management Profiling for Alternative Sanctions. It is proprietary software used in 46 of the 50 US states to assess a defendant’s risk of future arrest and inform a judge’s sentencing decision2. The algorithm computes its prediction from the answers to a 137-item questionnaire, and how those answers are combined and recombined to produce the prediction is complicated and uninterpretable2.

In contrast, a simple interpretable model created by Angelino et al. (2018) considers only a handful of rules about the defendant’s age and criminal history: if the person has more than 3 prior offences, or is 18–20 years old and male, or is 21–23 years old with 2–3 prior offences, the model predicts they will be rearrested within 2 years; otherwise, it predicts they will not2.
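
To emphasize how transparent this model is, the entire rule set described above fits in a few lines of code (a sketch based on the paraphrase above; the attribute names are illustrative, not the authors’ exact encoding):

```python
def predicts_rearrest(prior_offences: int, age: int, is_male: bool) -> bool:
    """Angelino et al.'s rule list, as paraphrased above."""
    if prior_offences > 3:
        return True
    if 18 <= age <= 20 and is_male:
        return True
    if 21 <= age <= 23 and 2 <= prior_offences <= 3:
        return True
    return False

# Example: a 19-year-old male with no prior offences is predicted to be rearrested.
print(predicts_rearrest(prior_offences=0, age=19, is_male=True))  # True
```

Anyone can read these rules, check them against a specific case, and see exactly why the prediction came out the way it did.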

When it comes to accuracy, studies have demonstrated that Angelino et al.’s transparent set of rules is just as accurate as COMPAS2. Moreover, in a recent study conducted at Dartmouth College, Julia Dressel and Hany Farid showed that COMPAS is no better at predicting an individual’s risk of re-offending than random volunteers recruited from the internet5.

Another criticism of “Black Box” models is that, because they are data-dependent, if the data fed into them are biased, the resulting predictions are likely to reflect that bias as well.

Moreover, the secretive nature of “Black Box” models makes it inherently difficult to identify and troubleshoot where the algorithm may be contributing to biased or erroneous outcomes1,2,3.

Peekaboo.ai: an explainable AI solution

While “Black Box” models can be useful in certain situations, their limitations are known, and it is therefore important to resist the tendency to default to “Black Box” solutions for every scenario. This is especially critical when AI predictions are used in downstream decisions that can seriously affect people’s lives.

In such cases, we need AI models that are designed to be inherently explainable, where users can understand how the input data are combined to produce the resulting prediction.
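
As a generic illustration of what “inherently explainable” can mean (this sketch is not Peekaboo.ai’s actual algorithm, and the feature names and weights below are made up), consider a model whose prediction is an explicit sum of per-feature contributions that the user can read off directly:

```python
import numpy as np

# A linear scorer: the prediction is a sum of one additive term per feature,
# so every feature's contribution to the score is explicit and traceable.
feature_names = ["prior_offences", "age_under_21", "employed"]
weights = np.array([0.8, 0.5, -0.6])   # illustrative weights only
bias = -1.0

def explainable_predict(x):
    contributions = weights * x               # one term per feature
    score = bias + contributions.sum()
    probability = 1 / (1 + np.exp(-score))    # squash to [0, 1]
    return probability, dict(zip(feature_names, contributions))

prob, explanation = explainable_predict(np.array([2, 1, 0]))
print(prob)         # the prediction
print(explanation)  # how much each input contributed to it
```

Because every term in the score is explicit, a user can see exactly how much each input pushed the prediction up or down.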

Peekaboo.ai is a powerful AI solution that does exactly that: it delivers predictions from your data in a way that lets you trace back the computational steps and understand how each prediction was formulated.

Like many leading proprietary algorithms, Peekaboo.ai finds relationships in your data, but what sets it apart from the crowd is its explainability.

To find out more about how Peekaboo.ai can fit your AI needs, visit peekaboo.ai.

References:
1. Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., & Rudin, C. (2018, August 3). Learning certifiably optimal rule lists for categorical data. arXiv.org. Retrieved October 26, 2021, from https://arxiv.org/abs/1704.01701.
2. Rudin, C., & Radin, J. (2019). Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition. Harvard Data Science Review, 1(2). https://doi.org/10.1162/99608f92.5a8a3a3d
3. De, T., Giri, P., Mevawala, A., Nemani, R., & Deo, A. (2020). Explainable AI: A hybrid approach to generate human-interpretable explanation for deep learning prediction. Procedia Computer Science, 168, 40–48. https://doi.org/10.1016/j.procs.2020.02.255
4. Tu, J. V. (1996). Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes. Journal of Clinical Epidemiology, 49(11), 1225–1231. https://doi.org/10.1016/s0895-4356(96)00002-9
5. Yong, E. (2018, January 29). A popular algorithm is no better at predicting crimes than random people. The Atlantic. Retrieved October 25, 2021, from https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/.