
Abstract

The Explainable Machine Learning Challenge was a landmark challenge in artificial intelligence. The goal of the competition was to create a complicated black box model for a given dataset and explain how it worked. One team did not follow the rules: instead of sending in a black box, they created a model that was fully interpretable. This leads to the question of whether the real world of machine learning is similar to the Explainable Machine Learning Challenge, where black box models are used even when they are not needed. We discuss this team's thought processes during the competition and their implications, which reach far beyond the competition itself.

Keywords: interpretability, explainability, machine learning, finance

In December 2018, hundreds of top computer scientists, financial engineers, and executives crammed themselves into a room within the Montreal Convention Center at the annual Neural Information Processing Systems (NeurIPS) conference to hear the results of the Explainable Machine Learning Challenge. It was the first data science competition that reflected a need to make sense of outcomes calculated by black box models.

Over the last few years, advances in deep learning for computer vision have led to a widespread belief that the most accurate models for any given data science problem must be inherently uninterpretable and complicated. This belief stems from the historical use of machine learning in society: its modern techniques were born and bred for low-stakes decisions, such as online advertising and web search, where individual decisions do not deeply affect human lives.

In machine learning, black box models are created directly from data by an algorithm, meaning that humans, even those who design them, cannot understand how variables are being combined to make predictions. Even given a list of the input variables, black box predictive models can be such complicated functions of those variables that no human can understand how the variables are jointly related to each other to reach a final prediction.

Interpretable models are different: they are constrained so that a human can better understand how predictions are made. In some cases, it can be made completely clear how variables are combined to form the final prediction, for instance when only a few variables are combined in a short logical statement, or when a linear model is used, in which variables are weighted and added together. Sometimes interpretable models are composed of simpler models put together, or other constraints are placed on the model to add a new level of insight. Most machine learning models, by contrast, are not designed with interpretability constraints; they are designed simply to be accurate predictors on a static dataset that may or may not represent how the model would be used in practice.
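To make the two interpretable forms mentioned above concrete, here is a minimal sketch in Python. The variables, thresholds, and weights are invented for illustration only and are not drawn from any real model.

```python
# Two hypothetical interpretable model forms (illustrative numbers only).

def short_rule_model(age, prior_offenses):
    """A logical model: a short, human-readable rule."""
    return age < 25 and prior_offenses >= 2   # predict "high risk" if both conditions hold

def small_linear_model(income, debt, late_payments):
    """A sparse linear model: a few weighted variables added together."""
    score = 0.3 * income - 0.5 * debt - 2.0 * late_payments
    return score > 0                          # predict "approve" if the score is positive
```

Either form can be read in full and checked against domain knowledge, which is exactly what a black box does not allow.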

It is not true that accuracy must be sacrificed for interpretability. That belief has allowed companies to market and sell proprietary or complicated black box models for high-stakes decisions when very simple interpretable models exist for the same tasks. It lets model creators profit without considering the consequences to the affected individuals, while designers claim the models need to be complicated in order to be accurate. The Explainable Machine Learning Challenge serves as a case study of the tradeoffs made when favoring black box models over interpretable ones.

Prior to the announcement of the challenge winners, the audience was asked to take part in a thought experiment in which they had cancer and needed surgery to remove a tumor. Two images were displayed on the screen. One depicted a human surgeon, who could explain anything about the surgery but had a 15% chance of causing death during the operation. The other depicted a robotic arm that could perform the surgery with only a 2% chance of failure. The robot was meant to mimic a black box approach: total trust in the robot was required, no questions could be asked of it, and no specific understanding of how it reached its decisions would be provided. The audience was asked to raise their hands to vote for which of the two they would prefer to perform the life-saving surgery. All but one hand voted for the robot.

While it may seem obvious that a 2% chance of mortality is better than a 15% chance of mortality, framing the stakes of AI systems in this way obscures a more fundamental and interesting consideration: Why must the robot be a black box?

The audience of the workshop was given only the choice between the accurate black box and the inaccurate glass box; the possibility that the robot did not need to be a black box at all was never raised. The audience was not told how the surgical outcome rates were estimated, or about potential flaws in the data used to train the robot. In assuming that accuracy must come at the cost of interpretability, the thought experiment failed to consider that interpretability might not hurt accuracy, and might even improve it, by enabling an understanding of when the model might be incorrect.

Being asked to choose between a machine and a human in this way is a false dichotomy, and a dangerous one: the use of black box models for high-stakes decisions has already caused serious problems in healthcare, criminal justice, and beyond.

The assumption that we must always sacrifice some interpretability to obtain the most accurate model has been shown to be false many times, notably in the criminal justice system. Angelino et al. (2018) created a machine learning model for predicting rearrest that involves only a few rules about someone's age and criminal history: if the person has either more than three prior crimes, or is 18–20 years old and male, or is 21–23 years old and has two or three prior crimes, they are predicted to be rearrested within two years of their evaluation, and otherwise not. While we are not advocating the use of this particular model in criminal justice settings, this set of rules is as accurate as the widely used (and proprietary) black box model called COMPAS (Angelino et al., 2018).
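Because the model is just a short rule list, it can be written down in full in a few lines of code. The sketch below transcribes the conditions as stated above; treat the exact thresholds as paraphrased from the published rule list rather than authoritative.

```python
def predicts_rearrest_within_two_years(age, prior_crimes, is_male):
    """Rule-list model for rearrest prediction, as paraphrased above from
    Angelino et al. (2018). Returns True if rearrest within two years is predicted."""
    if prior_crimes > 3:
        return True
    if 18 <= age <= 20 and is_male:
        return True
    if 21 <= age <= 23 and 2 <= prior_crimes <= 3:
        return True
    return False
```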

This simple model is just as accurate as many other state-of-the-art machine learning methods (Angelino et al., 2018). In related studies of recidivism prediction, interpretable models, which in these cases were very small linear models or logical models, performed just as well as the more complicated machine learning models (Tollenaar & van der Heijden, 2013; Zeng et al., 2016). There does not appear to be any evidence of a benefit from using black box models for criminal risk prediction. If anything, the black boxes are harder to troubleshoot, trust, and use.

The same lack of an accuracy benefit for black box models has been found in several healthcare domains and across many other high-stakes machine learning applications where life-changing decisions are being made: Caruana et al. (2015), Razavian et al. (2015), and Rudin and Ustun (2018) all show models with interpretability constraints that perform just as well as unconstrained models. Worse, black box models can mask a multitude of serious mistakes (see Rudin, 2019). Even in computer vision, where deep neural networks are the quintessential example of black box models, we and other scientists (e.g., Chen et al., 2019; Y. Li et al., 2017; O. Li et al., 2018; Ming et al., 2019) have found ways to add interpretability constraints to deep learning models, leading to more transparent computations. These constraints have not come at the expense of accuracy, even for deep neural networks for computer vision.

Trusting a black box model means that you trust not only the model's equations, but also the entire database that it was built from. In the surgery thought experiment, without knowing how the 2% and 15% figures were estimated, we should question their relevance for any particular subpopulation of medical patients. Every dataset we have seen has had flaws: there can be huge amounts of missing data, unmeasured confounding, and systematic errors, as well as problems with data collection that cause the distribution of the data to differ from what we originally thought.

Data leakage is a common issue with black box models in medical settings, where some information about the label y sneaks into the variables x in a way you might not suspect from the titles and descriptions of the variables. For example, in predicting medical outcomes, the machine might pick up on information within doctors' notes that reveals the patients' outcome before it is officially recorded.
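As a toy illustration of how leakage can silently inflate apparent accuracy, the sketch below builds a synthetic dataset in which one variable is recorded after the outcome and therefore encodes the label. The variable names and numbers are invented; this is not the medical or credit data discussed in this article.

```python
# Minimal synthetic sketch of label leakage (hypothetical variables only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X_legit = rng.normal(size=(n, 5))                                    # ordinary predictors
y = (X_legit @ rng.normal(size=5) + rng.normal(size=n) > 0).astype(int)

# A "note_flag" recorded AFTER the outcome: it encodes y almost perfectly.
note_flag = np.where(rng.random(n) < 0.95, y, 1 - y).reshape(-1, 1)
X_leaky = np.hstack([X_legit, note_flag])

for name, X in [("with leaked variable", X_leaky), ("without it", X_legit)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    acc = GradientBoostingClassifier().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {acc:.2f}")   # the leaked variable inflates accuracy
```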

In response, some scientists have tried to offer explanations of black box models, that is, hypotheses about why they make the decisions they do. Such explanations usually either try to mimic the black box's predictions using an entirely different model, or they provide some other statistic that yields only incomplete information about the black box's calculation. These explanations extend the authority of the black box rather than acknowledging that it may not be necessary in the first place. And sometimes the explanations are simply wrong.

This happened when ProPublica journalists tried to explain what was in the proprietary COMPAS model for recidivism prediction (Angwin et al., 2016). They appear to have assumed that if one could create a linear model that approximated COMPAS and depended on race, age, and criminal history, then COMPAS itself must depend on race. However, when one approximates COMPAS using a nonlinear model, the explicit dependence on race vanishes; the dependence on race goes only through age and criminal history. An incorrect explanation of a black box can thus spiral out of control. If the justice system had used only interpretable models, ProPublica's journalists would have been able to write a different story. Perhaps they would have written about how typographical errors in these scores occur frequently, with no obvious way to fix them, leading to inconsistent life-changing decision making in the justice system (Rudin et al., 2019).
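To see how a linear approximation can manufacture an apparent dependence on race, consider the following synthetic sketch. The data-generating process and coefficients are invented for illustration; this is not COMPAS or the ProPublica analysis.

```python
# Sketch: a linear surrogate can falsely attribute race dependence to a score
# that uses only age and prior counts (synthetic data, hypothetical numbers).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 20000
race = rng.integers(0, 2, size=n)                      # binary proxy, for illustration only
# In this synthetic world, race is correlated with age and prior count...
age = rng.normal(35 - 5 * race, 8, size=n).clip(18, 70)
priors = rng.poisson(0.5 + 2.0 * race, size=n)

# ...but the "black box" score depends ONLY on age and priors, nonlinearly.
score = 10 * np.exp(-0.05 * (age - 18)) + 3 * np.sqrt(priors)

features = np.column_stack([age, priors, race])
lin = LinearRegression().fit(features, score)
print("linear surrogate coefficients [age, priors, race]:", lin.coef_.round(3))
# The race coefficient comes out nonzero even though the score never used race:
# the linear surrogate absorbs the nonlinearity through the correlated variable.
```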

Back at the NeurIPS conference, in the room full of experts who had just chosen the robot over the surgeon, the announcer described the competition. The home equity line of credit (HELOC) dataset, provided by FICO, contains data from thousands of anonymous individuals, including aspects of their credit history and whether or not each individual defaulted on a loan. The goal of the competition was to create a black box model for predicting loan default and then explain that black box.

One would expect that a competition requiring contestants to create a black box and explain it would involve a problem that actually needs a black box. It did not. After playing with the data for about a week, we realized that we could analyze it effectively without one. Whether we used a deep neural network or classical statistical techniques for linear models, we found that there was less than a 1% difference in accuracy between the methods, which is within the margin of error caused by random sampling of the data. Even when we used machine learning techniques that produced very interpretable models, we were able to achieve accuracy that matched the best black box model. At that point we were puzzled about what to do: we could either play by the rules and provide a black box to the judges, or we could provide a transparent, interpretable model.
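A minimal sketch of the kind of comparison described above is shown below, assuming the HELOC data have been saved locally. The file name, label column, and model settings are placeholders rather than a record of our actual experiments.

```python
# Sketch: compare an interpretable model with two black boxes on the HELOC data.
# Assumes a local file 'heloc.csv' with a 'RiskPerformance' label (placeholder names;
# adjust to the actual FICO release).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("heloc.csv")
y = (df.pop("RiskPerformance") == "Bad").astype(int)     # 1 = default / "Bad" outcome
X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.25, random_state=0)

models = {
    "sparse logistic regression (interpretable)": make_pipeline(
        StandardScaler(), LogisticRegression(penalty="l1", C=0.1, solver="liblinear")),
    "gradient boosting (black box)": GradientBoostingClassifier(),
    "neural network (black box)": make_pipeline(
        StandardScaler(), MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {acc:.3f}")   # differences are typically within noise
```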

Our team decided that for a problem as important as credit scoring, we would not provide the judges with a black box created solely for the purpose of being explained. Instead, we created a model we believed even a banking customer with little mathematical background could understand. The model was decomposable into different mini-models, each of which could be understood on its own. We also created an additional interactive online visualization tool: by playing with the credit history factors on our website, people could understand which factors were important for loan application decisions. No black box at all. We knew we probably would not win the competition that way, but there was a bigger point we needed to make.
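As a rough, purely hypothetical illustration of the "mini-models that can each be understood on their own" idea (not a description of our actual competition entry, which is linked in the notes below), consider:

```python
# Hypothetical two-part additive model: each subscale is a tiny model over a couple
# of related credit-history factors, and the final risk is their sum. All variable
# names, thresholds, and weights are invented for illustration.
def delinquency_subscale(months_since_delinquency, num_delinquencies):
    score = 0.0
    if months_since_delinquency < 12:
        score += 2.0
    score += 0.5 * min(num_delinquencies, 4)
    return score

def utilization_subscale(revolving_utilization_pct):
    return 1.5 if revolving_utilization_pct > 60 else 0.0

def default_risk(months_since_delinquency, num_delinquencies, revolving_utilization_pct):
    # The final prediction is just the sum of subscales: each piece can be
    # read, checked, and explained on its own.
    return (delinquency_subscale(months_since_delinquency, num_delinquencies)
            + utilization_subscale(revolving_utilization_pct))

print(default_risk(months_since_delinquency=6, num_delinquencies=2,
                   revolving_utilization_pct=75))   # higher score = higher risk
```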

There may well be applications where interpretable models cannot be as accurate as black box models. But if one could build an interpretable model with the same accuracy, why use a black box at all? Perhaps only because its creators want to keep the model proprietary. There is growing evidence that interpretable deep-learning models can be constructed even for computer vision and time-series analysis (Chen et al., 2019; Y. Li et al., 2017; O. Li et al., 2018; Ming et al., 2019). The default should be changed: rather than assuming that interpretable models do not exist, we should assume that they do, until proven otherwise.

When scientists understand what they are doing when they build models, they can create systems that are better able to serve the humans who rely on them. The so-called accuracy–interpretability tradeoff is a myth, as more interpretable models often become more accurate.

The false dichotomy between the black box and the transparent model has gone too far. When hundreds of leading scientists and financial company executives can be misled by it, imagine how the rest of the world might be fooled as well. The implications affect the functioning of our criminal justice system, our financial systems, our healthcare systems, and many other areas. Let us insist that we do not use black box machine learning models for high-stakes decisions unless no interpretable model can be constructed that achieves the same level of accuracy. It is possible that an interpretable model always exists and that we simply have not been trying hard enough to find it. If we did, perhaps we would never use black boxes for these high-stakes decisions at all.

  1. The Explainable Machine Learning Challenge website is here: https://community.fico.com/s/explainable-machine-learning-challenge

  2. This article is based on Rudin’s experience competing in the 2018 Explainable Machine Learning Challenge.

  3. Readers can play with our interactive competition entry for the challenge here: http://dukedatasciencefico.cs.duke.edu

  4. Our entry indeed did not win the competition as judged by the competition’s organizers. The judges were not permitted to interact with our model and its visualization tool at all; it was decided after the submission deadline that no interactive visualizations would be provided to the judges. However, FICO performed its own separate evaluation of the competition entries, and our entry scored well in their evaluation, earning the FICO Recognition Award for the competition. Here is FICO’s announcement of the winners:

    https://www.fico.com/en/newsroom/fico-announces-winners-of-inaugural-xml-challenge?utm_source=FICO-Community&utm_medium=xml-challenge-page

  5. As far as the authors know, we were the only team to provide an interpretable model rather than a black box.

Disclosure Statement

Cynthia Rudin and Joanna Radin have no financial or non-financial disclosures to share for this article.

References

Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., & Rudin, C. (2018). Learning certifiably optimal rule lists for categorical data. Journal of Machine Learning Research, 18(234), 1–78.

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1721–1730). https://doi.org/10.1145/2783258.2788613

Chen, C., Li, O., Tao, C., Barnett, A., Su, J., & Rudin, C. (2019). This looks like that: Deep learning for interpretable image recognition. In Proceedings of the 33rd International Conference on Neural Information Processing Systems (NeurIPS 2019).

Li, O., Liu, H., Chen, C., & Rudin, C. (2018). Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18).

Li, Y., Murias, M., Major, S., Dawson, G., Dzirasa, K., Carin, L., & Carlson, D. (2017). Targeting EEG/LFP synchrony with neural nets. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NeurIPS 2017) (pp. 4620–4630).

Ming, Y., Xu, P., Qu, H., & Ren, L. (2019). Interpretable and steerable sequence learning via prototypes. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 902–913). https://doi.org/10.1145/3292500.3330908

Razavian, N., Blecker, S., Schmidt, A. M., Smith-McLallen, A., Nigam, S., & Sontag, D. (2015). Population-level prediction of type 2 diabetes from claims data and analysis of risk factors. Big Data, 3(4).

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1, 206–215. https://doi.org/10.1038/s42256-019-0048-x

Rudin, C., & Ustun, B. (2018). Optimized scoring systems: Toward trust in machine learning for healthcare and criminal justice. Interfaces, 48(5), 449–466.

Rudin, C., Wang, C., & Coker, B. (2019). The age of secrecy and unfairness in recidivism prediction. Harvard Data Science Review. https://doi.org/10.1162/99608f92.6ed64b30

Tollenaar, N., & van der Heijden, P. G. M. (2013). Which method predicts recidivism best? A comparison of statistical, machine learning and data mining models. Journal of the Royal Statistical Society: Series A (Statistics in Society), 176(2), 565–584.

Zeng, J., Ustun, B., & Rudin, C. (2016). Interpretable classification models for recidivism prediction. Journal of the Royal Statistical Society: Series A (Statistics in Society).

This article by Cynthia Rudin and Joanna Radin is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
