ARTIFICIAL INTELLIGENCE
Human Centered AI: Q & A with Ben Shneiderman
By: JOHN DALTON | August 18, 2022
The remarkable progress in algorithms for machine and deep learning has opened the doors to new opportunities, and some dark possibilities. However, a bright future awaits those who build on their working methods by including human-centered AI strategies of design and testing. As many technology companies and thought leaders have argued, the goal is not to replace people but to empower them by making design choices that give humans control over technology.

FCAT recently hosted University of Maryland Professor Ben Shneiderman as a guest of our Speaker Series. Shneiderman is a trailblazer in the field of human-computer interaction. He is credited with pioneering the use of clickable highlighted weblinks, high-precision touchscreen keyboards for mobile devices, tagging for photos, and more. FCAT’s John Dalton was able to catch up with the professor for a brief Q&A.

JOHN DALTON: Your new book, Human-Centered AI, is the most balanced, pragmatic, and optimistic analysis of artificial intelligence that I’ve read. You lay out a comprehensive guide to building reliable, safe, and trustworthy applications that feature both high levels of human control and high levels of automation. A critical part of your argument is that if we want to achieve a flourishing and humane future, it’s essential for us to understand that computers are not in fact people, and vice versa. Why is clarifying the difference between humans and computers so important?

BEN SHNEIDERMAN: Some advocates of artificial intelligence promote the goal of human-like computers that match or exceed the full range of human abilities, from thinking to consciousness. This vision attracts journalists who are eager to write about humanoid robots and contests between humans and computers. I consider these scenarios misleading and counterproductive, diverting resources and effort from meaningful projects that amplify, augment, empower, and enhance human performance.

I respect and value the remarkable capabilities that humans have for individual insight, team coordination, and community building. I seek to build technologies that support human self-efficacy, creativity, responsibility, and social connectedness.

JOHN DALTON: We’re awash in news about automation that fails, involving everything from biased school admissions and credit applications to autonomous vehicles that kill. Even Boeing ran into challenges recently with the 737 MAX. Civil aviation has some of the most robust safety measures and standards in place. What can even those of us outside of the airline industry learn from tragedies like that?

BEN SHNEIDERMAN: The two Boeing 737 MAX crashes are a complex story, but one important aspect was the designers’ belief that they could create a fully autonomous system so reliable that the pilots were not even informed of its presence or activation. There was no obvious visual display to inform the pilots of its status, nor was there a control panel that would guide them to turn off the autonomous system. The lesson is that excessive belief in machine autonomy can lead to deadly outcomes. When rapid performance is needed, high levels of automation are appropriate, but so are high levels of independent human oversight to track performance over the long term and investigate failures.

JOHN DALTON: Your vision for the future is one in which AI systems augment, amplify and enhance our lives. Are there products and services out there today that you believe already do this?

BEN SHNEIDERMAN: Yes, the hugely successful digital cameras rely on high levels of AI for setting the focus, shutter speed, and color balance, while giving users control over the composition, zoom, and decisive moment when they take the photo. Similarly, navigation systems let users set the departure and destination, transportation mode, and departure time, then the AI algorithms provide recommended routes for users to select from as well as the capacity to change routes and destinations at will. Query completion, text auto-completion, spelling checkers, and grammar checkers all ensure human control while providing algorithmic support in graceful ways.

JOHN DALTON: As you point out in your book, there’s a lot of work to do before our design metaphors and governance structures support truly human-centered AI. What can we do to accelerate the adoption of HCAI?

BEN SHNEIDERMAN: Yes, it will take a long time to produce the changes that I envision, but our collective goal should be to reduce that time from 50 years to 15. We can all begin by changing the terms and metaphors we use. Fresh sets of guidelines for writing about AI are emerging from several sources, but here is my draft offering:

  1. Clarify human initiative and control
  2. Give people credit for accomplishments
  3. Emphasize that computers are different from people
  4. Remember that people use technology to accomplish goals
  5. Recognize that human-like physical robots may be misleading
  6. Avoid using human verbs to describe computers
  7. Be aware that metaphors matter
  8. Clarify that people are responsible for use of technology

Another step will be revising the images of future technologies to replace humanoid robots with devices that are more like cars, elevators, thermostats, phones, and cameras.

John Dalton is VP Research in FCAT, where he investigates socioeconomic trends and engages in in-depth studies focused on emerging interfaces (augmented reality, virtual reality, speech, gesture, and biometrics).
