06/08/2023
The Challenge
Most of today’s machine learning (ML) methods are complex, and their inner workings are difficult to understand and interpret. Yet in many applications, explainability is desirable or even mandatory due to industry regulations. While interpretable machine learning models exist, some are not expressive enough, and models such as decision trees can quickly become deep and hard to interpret. Finding an expressive rule with low complexity but high accuracy is a hard optimization problem.
The Impact
Explainable AI models can be used in many areas of the firm, for example to create interpretable rules that explain why certain customers signed up for a product while others did not. These rules can yield high-level insights and help business owners improve their products.
The Outcomes
We successfully implemented our XAI model and benchmarked it on several public datasets covering credit, customer behavior, and medical conditions. Our model is generally competitive with other classifiers. It can potentially be powered by special-purpose hardware or quantum devices that solve Quadratic Unconstrained Binary Optimization (QUBO) problems. Adding QUBO solvers reduces the number of iterations and could lead to a speedup.
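To give a rough sense of what a QUBO looks like, the sketch below minimizes the objective x^T Q x over binary vectors by exhaustive search. The matrix values are hypothetical and the brute-force solver is only an illustrative stand-in: special-purpose hardware and quantum annealers target the same formulation at scales where enumeration is infeasible.

```python
import itertools
import numpy as np

def solve_qubo_brute_force(Q: np.ndarray) -> tuple[np.ndarray, float]:
    """Minimize x^T Q x over binary vectors x by exhaustive search.

    Fine for small n; QUBO hardware targets the same objective
    at much larger problem sizes.
    """
    n = Q.shape[0]
    best_x, best_val = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x  # QUBO objective: sum_ij Q[i,j] * x_i * x_j
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Hypothetical 3-variable QUBO (values are illustrative only).
Q = np.array([[-1.0, 2.0, 0.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, -1.0]])
x, val = solve_qubo_brute_force(Q)
print(x, val)  # [1 0 1] with objective -2.0
```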
The Deep Dive
FCAT researchers proposed a model based on expressive Boolean formulas. The Boolean formula defines a rule according to which input data are classified. Such a formula can include any operator that can be applied to one or more Boolean variables, such as And and AtLeast. For further details on this project, read the full paper here.
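To make the idea concrete, here is a minimal sketch of how a rule built from operators such as And and AtLeast could be evaluated on a binary feature vector. The Op class, the recursive evaluator, and the example rule are illustrative assumptions, not the exact representation used in the paper.

```python
from dataclasses import dataclass, field
from typing import Sequence, Union

# A node is either a feature index (leaf) or an operator over subnodes.
Node = Union[int, "Op"]

@dataclass
class Op:
    name: str                          # e.g. "And", "Or", "AtLeast"
    children: Sequence[Node] = field(default_factory=list)
    k: int = 0                         # threshold, used only by AtLeast

def evaluate(node: Node, x: Sequence[bool]) -> bool:
    """Evaluate a Boolean-formula rule on a binary feature vector x."""
    if isinstance(node, int):          # leaf: look up the feature value
        return bool(x[node])
    vals = [evaluate(c, x) for c in node.children]
    if node.name == "And":
        return all(vals)
    if node.name == "Or":
        return any(vals)
    if node.name == "AtLeast":         # true iff at least k children are true
        return sum(vals) >= node.k
    raise ValueError(f"unknown operator: {node.name}")

# Example rule: AtLeast 2 of (feature 0, feature 1, And(feature 2, feature 3)).
rule = Op("AtLeast", [0, 1, Op("And", [2, 3])], k=2)
print(evaluate(rule, [True, False, True, True]))  # True: feature 0 and And(2,3) hold
```

A classifier of this form stays interpretable because the learned rule can be read directly as a sentence, e.g. “classify as positive if at least 2 of these conditions hold.”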