The human brain processes vast amounts of information and often uses “shortcuts” to classify it, which can create cognitive biases. People tend to believe data that supports their current thinking (confirmation bias), to over-weight the first piece of data they see (anchoring), or to focus on the data they notice first while ignoring the rest (attentional bias). Each of these biases can undermine the effectiveness of an AI system. In a recent project, my FCAT team was tasked with processing data to uncover new insights for business stakeholders. We found that the stakeholders believed the system when it confirmed their thinking but didn’t trust it when it presented new or different information.
As luck would have it, I recently participated in a session for World Usability Day¹ focused on the topic of appropriate user trust in AI systems. The consensus from the discussion was that transparency and explainability will be critical for future adoption – both for the user experience and for the ethical considerations of these systems. The discussion highlighted Explainable AI (XAI), a developing field built on the principle that algorithms cannot be “black boxes”. The idea is gaining industry traction. For example, the Defense Advanced Research Projects Agency (DARPA) has an XAI research program² that aims to develop machine learning techniques that are both high-performing and understandable to human beings. At some point, XAI is not going to be optional. In the EU, for example, the General Data Protection Regulation (GDPR) Article 22³ gives individuals the right not to be subject to decisions based solely on automated processing.
For AI systems to be the most useful, they need to engender the appropriate level of trust in users. Users need to be able to tell when the system should be trusted and when it should be questioned. This is even more challenging, and important, when cognitive biases are in play. Another factor to consider is the “stakes” involved – e.g., an algorithm that recommends videos has more margin for error than one that approves loans, suggests medical diagnoses and courses of treatment, or is used in national defense.
To achieve higher levels of trust in AI systems, technologists, data scientists, and data engineers need to embrace the concept of XAI and work on their algorithms so that their output enables UX designers to:
- communicate transparently about how the algorithms work
- explain any biases in the data
- build “scaffolding” to develop user trust in systems and account for cognitive biases
- support users in making better decisions about when to trust and when to question the output of AI-enabled systems
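As a minimal sketch of what this kind of explainable output can look like, consider a hand-rolled linear scoring model that reports not just its score but each feature's contribution to it. The feature names and weights here are hypothetical, chosen purely for illustration – not any particular production algorithm:

```python
# Minimal sketch: a linear loan-scoring model that explains each
# prediction by reporting every feature's signed contribution.
# Feature names and weights are hypothetical, for illustration only.

WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
BIAS = 0.1

def predict_with_explanation(features):
    """Return the score plus per-feature contributions, ranked by impact."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

score, ranked = predict_with_explanation(
    {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
)
print(f"score = {score:.2f}")
for name, contribution in ranked:
    print(f"  {name:>15}: {contribution:+.2f}")
```

A ranked breakdown like this gives a UX designer something concrete to surface – for example, “this application scored low mainly because of its debt ratio” – which supports exactly the kind of transparent communication the points above call for.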
Colleen McCretton is Director, User Experience Design in FCAT