FCAT RESEARCH
Designing Automated Systems That Humans Will Trust
By: DEANNA LAUFER | May 17, 2021
Automation is moving up the value chain, taking on more “human” tasks like financial planning, factory floor management, and home health care. In response, companies need to reevaluate how to build trust in automated systems, considering both their reliability and the human-machine relationship.

Massive COVID-19-driven workplace, home, and social disruptions have accelerated the move toward automation. As David Gitlin, CEO of HVAC manufacturer Carrier Global, said last year, “Where you have people spaced out three feet and you want to get them spaced out six feet…you’re going to see a trend towards robotics.”1 Chatbot vendor Kasisto saw a 30% increase in inquiries since the pandemic began, while robots were used to ferry food and medical supplies, disinfect public spaces, and enforce quarantines.2

But these aren’t just the factory robots and virtual assistants we’re used to. As automated systems mature, they are taking on previously “human” tasks.

From assistant to specialist
Virtual assistants like Siri, Alexa, and Cortana are just that – they’ll tell you the weather forecast or your last five credit card transactions and not much more. But newer AI assistants can complete tasks independently, like Google’s Duplex that arranges hair appointments and Plataine’s intelligent manufacturing assistant that can reroute jobs to new machines to avoid bottlenecks. Launched in 2017, Woebot administers cognitive behavioral therapy via 2 million weekly conversations.

From software tool to colleague
Robotic process automation is hot – 66% of firms surveyed by Forrester planned to spend more on automating repetitive, rules-based tasks last year.3 Some firms are going even further. EY is training AI engines to read business documents as a first step toward having them independently generate client strategy reports.4 And hospital robots have graduated from delivering pills to performing surgical procedures autonomously.5

From consumer electronic to companion
Sony discontinued its original Aibo robot dog in 2006 due to poor sales, but successfully relaunched a new, cuter model in 2017 designed to reciprocate love and affection much the same way as a real pet. Taking animals as inspiration, Paro’s seal-shaped robot provides both therapy and companionship to dementia patients. And Embodied’s 15-inch Moxie robot uses machine learning to engage children in social, cognitive, and emotional development adapted to their individual needs.

The Building Blocks of Trust in Automated Systems

As automated systems advance further into the realm of human capabilities, the mechanisms for developing trust in them become more complex – just like the humans they’re emulating. Efficiency is no longer the overarching goal. Instead, building trust involves two elements: mechanical trust and relational trust (see Figure 1).

Mechanical Trust

Mechanical trust is earned when automated systems perform as expected, alert us when they don’t, and make it easy for us to understand how they work. The building blocks of mechanical trust are:

Dependability
Dependability is the promise that automated systems will perform as expected. If they didn’t, humans would not rely on them, period. Dependability encompasses technical competency proven via expert knowledge, technical facility, or routine performance. For example, algorithms have been developed that detect breast cancer, heart arrhythmias, and early-stage Alzheimer’s better and faster than humans. The UCSF Medical Center uses automated pharmacists to prepare and track medications; its systems performed with 100% accuracy in their first five years, compared to a human pharmacist error rate of 3.6%.6

Transparency
Transparency refers to a machine’s openness about its nature and capabilities. Says Kasisto’s CTO, "when the customer is unsure if they’re interacting with a human or a machine, they’ll feel misled, lose trust and have a poor experience."7 That’s why its KAI bot introduces itself to customers as such. Transparency is also important when machines make mistakes. A University of Massachusetts study found that humans have more confidence in robots that err when the machines can express self-doubt – in the study’s case via a red light and frowning face.8

Explainability
We don’t need to know how our Roomba works to trust it – we just care that it cleans our floors. But for higher risk tasks like medical diagnosis and autonomous driving, explainability becomes essential to developing trust. As Microsoft CEO Satya Nadella says, "We want not just intelligent machines but intelligible machines."9 A recent academic study found that 88% of physicians would prefer a diagnostic algorithm that could explain its decisions.10

Relational Trust

When machines perform social functions, we develop trust in them much the same way we would a friend or colleague. As such, relational trust between human and machine is earned through:

Relatability
When people perceive systems as more human-like, they develop stronger emotional relationships with them. To increase customer engagement with his Sphero robots, founder Ian Bernstein gave them a backstory (they come from the planet Spheron) and encouraged naming them. As a result of that relationship, Bernstein said people were more forgiving when their robot made mistakes, such as getting itself stuck in a corner, and even apologized to it.11 Chatbots are no different. Capital One designed Eno – which enjoys a 95% approval rating – with character quirks such as eagerness and a love of puns to make it feel more human. Eno’s team says the pattern of customers thanking the chatbot and telling it "I love you" is a sign that it’s perceived as warm, humble, and relatable.12

Adherence to behavioral norms
Relatability also extends into how a machine behaves in our society. In their book User Friendly, designers Cliff Kuang and Robert Fabricant write, "humans expect computers to act as though they were people and get annoyed when technology fails to respond in socially appropriate ways."13 So, it’s not enough that a self-driving car stops at a stop sign. Instead, Audi’s is designed to mimic the behavior of a human driver slowing down to a stop, relaying to pedestrians that it’s safe to cross. If a group of people surround and block the Starship delivery robot, it responds by saying "Hello, I’m a Starship delivery robot. Can you please let me pass?" According to a company executive, this polite behavior usually resolves the situation.14

Anthropomorphism
An innate tendency of human psychology is to ascribe human characteristics to non-human things. Designers of a shelf-scanning robot for Walmart purposely did not give it a face because they didn’t want customers to think they could interact with it; store staff intervened and outfitted the robot with a signature name badge.15 More advanced companion robots such as Misty can mimic human voices, expressions, and movements. Misty’s founder explained that without eyes or expressions, “we don’t know what the robot is thinking."16 Trust expert David DeSteno puts it this way, “Technology has reached the point where it can mimic human expressions closely enough that our minds will automatically respond. It can "ping" our trust machinery – the unconscious mechanisms our minds use to decide who to trust."17

As firms develop automated systems for customer service, planning and advice, and other applications higher up the value chain, they will need to consider how to apply the principles of building mechanical and relational trust. What type of skills are needed to design trusted machines? How should a firm measure trust in automation? And how should firms design systems that employees will embrace?

Deanna Laufer is Director, Emerging User Interfaces/Technology, FCAT

1 Lynch, D. (2020). Soaring joblessness could shake U.S. economy, politics for years. The Washington Post. https://www.washingtonpost.com/business/2020/05/08/jobs-coronavirus-unemployment-economy-politics/
2 Based on an April 2020 interview with Kasisto; and see Murphy, R. et al (2020). The unsung heroes of the COVID-19 crisis? Robots. Fast Company. https://www.fastcompany.com/90494765/the-unsung-heroes-of-the-covid-19-crisis-robots
3 The Future of Work Is Still Being Written: But Who Is Holding The Pen? Forrester Research (2020).
4 Castellanos, S. (2019). Unleash the Bots: Firms Report Positive Returns With RPA. The Wall Street Journal. https://www.wsj.com/articles/unleash-the-bots-firms-report-positive-returns-with-rpa-11551913920
5 Milner, M. et al (2019). Robots are coming to a hospital near you. Fast Company. https://www.fastcompany.com/90345453/robots-are-coming-to-a-hospital-near-you
6 Zaleski, A. (2016). Behind pharmacy counter, pill-packing robots are on the rise. CNBC. https://www.cnbc.com/2016/11/15/duane-reades-need-for-speed-pharmacy-robots-are-on-the-rise.html; and Cina, J. et al (2006). How Many Hospital Pharmacy Medication Dispensing Errors Go Undetected? Journal on Quality and Patient Safety. http://patientsafetyresearch.org/journal%20articles/Hosp_Pharm_Meds.pdf
7 Caskey, S. (2020). Creating a Convincing (and ethically sound) AI Personality. Kasisto. https://kasisto.com/blog/creating-a-successful-and-ethically-sound-ai-personality/
8 Desai, M. et al (2013). Impact of robot failures and feedback on real-time trust. 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
9 Nadella, S. (2016). The Partnership of the Future. Slate. http://www.slate.com/articles/technology/future_tense/2016/06/microsoft_ceo_satya_nadella_humans_and_a_i_can_work_together_to_solve_society.html
10 Diprose, W. et al. (2020). Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association.
11 Based on an interview with Ian Bernstein, founder of Misty Robotics, April 2020.
12 Hay, S. (2017). Eno = AI + EQ: Designing a Financial AI That Recognizes and Responds to Emotion. Capital One.
13 Kuang, C. and Fabricant, R. (2019). User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play.
14 Lee, T. (2020). The pandemic is bringing us closer to our robot takeout future. Ars Technica. https://arstechnica.com/tech-policy/2020/04/the-pandemic-is-bringing-us-closer-to-our-robot-takeout-future/
15 Corkery, M. (2020). Should Robots Have A Face? The New York Times. https://www.nytimes.com/2020/02/26/business/robots-retail-jobs.html
16 Based on an interview with Ian Bernstein, founder of Misty Robotics, April 2020.
17 DeSteno, D. (2017). Can You Trust Technology? HuffPost. http://www.huffingtonpost.com/david-desteno/can-you-trust-technology_b_4683614.html