Explaining and analyzing AI decisions
A University of Texas at Arlington computer scientist has earned a three-year, $385,000 grant from the National Institute of Standards and Technology to analyze both how machine learning systems make decisions and what happens when they make wrong ones.
Jeff Lei, professor in the Computer Science and Engineering Department, will address two specific issues in machine learning:
• explanations for decisions made by machine learning systems
• incorrect decisions and how to correct them
Machine learning systems increasingly help humans make decisions in areas ranging from mortgage approvals to medical diagnoses to self-driving cars. With such high stakes, it is critical that these systems make correct decisions.
A machine learning model bases its decisions on a large set of training data points, and points closer to a given decision point exert more influence than those farther away. Lei will use a technique called neighborhood exploration, examining the data points in the vicinity of a decision point rather than the entire training set, which can significantly reduce computational complexity.
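The article does not describe Lei's method in detail, but neighborhood exploration resembles local-surrogate explanation techniques such as LIME. The sketch below, in Python, illustrates the general idea under that assumption: sample perturbed points around the decision point, query the black-box model only there, and fit a proximity-weighted linear model whose coefficients estimate each feature's local influence. All names and parameters here are illustrative, not taken from Lei's work.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(predict_fn, x, n_samples=500, scale=0.1, seed=None):
    """Approximate a black-box model in the neighborhood of decision point x.

    predict_fn must return a numeric score (e.g., a predicted probability)
    for each row of its input. Returns per-feature local influence estimates.
    This is a hypothetical sketch, not Lei's actual algorithm.
    """
    rng = np.random.default_rng(seed)
    # Sample perturbed points near x instead of scanning the full training set.
    neighbors = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    scores = predict_fn(neighbors)  # query the black box only in the neighborhood
    # Closer samples get more weight, mirroring "closer points exert more influence."
    dists = np.linalg.norm(neighbors - x, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))
    # Fit a simple, interpretable surrogate on the weighted neighborhood.
    surrogate = Ridge(alpha=1.0).fit(neighbors, scores, sample_weight=weights)
    return surrogate.coef_  # each coefficient ~ that feature's local influence
```

Because the explanation needs only a fixed number of local queries rather than a pass over the entire training set, its cost does not grow with training-set size, which is the source of the reduced computational complexity.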
Data can be biased, or people can make mistakes during data collection, Lei said. The cause of a bad decision can therefore be identified by examining the data points that had the most influence on that decision.
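The article does not say how the most influential points are found; one simple reading, consistent with the neighborhood view above, is to surface the training points nearest the decision point so a human can audit them for collection mistakes or bias. A hypothetical sketch:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def most_influential_points(X_train, y_train, x, k=10):
    """Return the k training points nearest to decision point x.

    Under the neighborhood view, these points exerted the most influence
    on the decision at x, so they are the first candidates to inspect
    for labeling errors or biased data. Illustrative only.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    dists, idx = nn.kneighbors(np.asarray(x).reshape(1, -1))
    return idx[0], dists[0], y_train[idx[0]]  # indices, distances, labels
```

Correcting or removing flawed neighbors and retraining the model would then be one way to act on the root cause of an incorrect decision.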
“Artificial intelligence is helpful in making decisions, but because of the complexity of the process, it isn’t quite transparent,” he said. “This is a serious concern in domains where decisions have important consequences. We must provide good explanations for why decisions are made, pinpoint the root cause of any incorrect decisions and suggest changes to correct them to maintain public trust and ensure that the systems are working as intended.”
Lei’s work has great potential to expand the use and capability of AI technology in future applications, said Hong Jiang, chair of the Computer Science and Engineering Department.
“One of the main attractions of AI technology is its apparent power in automating the decision-making process by providing accurate predictions via training on massive amounts of data,” Jiang said. “Ironically, however, one of the biggest challenges facing AI is the inability to explain predictions and their accuracy, because how AI algorithms reach their conclusions has long been considered a mysterious black box. Professor Lei’s work on explaining decisions made by AI is very timely and potentially highly impactful.”
- Written by Jeremy Agor, College of Engineering