RESEARCH
Eliminating AI Bias: A Human + Machine Approach
By: SARAH HOFFMAN | AUG 27, 2020
Bias in AI is a known problem. Cases involving medical care, parole, recruiting, and loans have all been tainted by flawed data sampling or training data that includes biased human decisions.[1] The good news: large organizations are waking up. Even the Vatican has chimed in with a charter on AI ethics.[2] Even better news: there are practical methods for combatting AI bias.

Combat AI Bias with Smart Human Interventions

Emphasize inclusive design. In the late 1940s, the U.S. Air Force faced a rash of deadly crashes caused in part by cockpits designed around the body dimensions of the "average" pilot, dimensions that matched not even a single airman.[3] AI needs the same inclusive mindset: designing systems so they can be accessed and used by as many people as possible. Instead of building for the "average" user, we need to make sure we account for all our users. Unfortunately, diversity in AI is even scarcer than diversity in technology overall.[4] One way to reduce bias in AI is to bring more diversity to the teams creating the technology, people who are able to understand, recognize, and care about the potential applications of a system.[5]

Set up ethics boards and hire AI ethics professionals. Search LinkedIn and you will find jobs with titles like "Senior Director, Trusted Artificial Intelligence," "AI Ethics and Governance Lead," and "Director, ML Ethics Transparency and Accountability," roles that exist to ensure that AI is being used responsibly. Microsoft's AI Ethics and Effects in Engineering and Research (Aether) group has issued recommendations on regulating the use of facial recognition technology, and its reviews have prompted the company to cancel significant sales over concerns about ethical misuse of its products.[6] Instagram's newly formed "equity and inclusion team" will examine how Black, Hispanic, and other minority users in the U.S. are affected by the company's algorithms, including its machine learning systems.[7]

Upgrade engineer training. Today, many computer science programs include at least one ethics course. Harvard goes further, embedding ethics throughout its computer science curriculum in partnership with its philosophy department, helping students see how ethical issues can arise in many contexts.[8] This kind of education shouldn't stop at graduation. Engineers should learn about the challenges of measuring accuracy across different demographics, best practices for labeling training data, and how to apply inclusive design during beta testing.
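
One concrete practice from that kind of training is disaggregated evaluation: report model performance separately for each demographic group rather than as a single overall number. Below is a minimal sketch of the idea in Python, assuming scikit-learn; the features, group labels, and 80/20 population split are synthetic and invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic data: 2,000 examples, 5 features, and an imbalanced demographic split.
X = rng.normal(size=(2_000, 5))
group = rng.choice(["group_a", "group_b"], size=2_000, p=[0.8, 0.2])
# The label depends on the features slightly differently for the minority group,
# so a model trained on pooled data tends to fit the majority pattern.
y = ((X[:, 0] + np.where(group == "group_b", X[:, 1], -X[:, 1])) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
preds = model.predict(X_te)

# A single overall number can hide a gap; per-group numbers reveal it.
print(f"overall accuracy: {accuracy_score(y_te, preds):.2f}")
for g in np.unique(g_te):
    mask = g_te == g
    print(f"{g}: accuracy {accuracy_score(y_te[mask], preds[mask]):.2f} (n={mask.sum()})")
```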

Combat AI Bias with Technology

Take advantage of tools that help with AI explainability and fairness. Google, Microsoft, and IBM have developed automated tools to detect and mitigate bias in AI algorithms.[9] Google's ML-fairness-gym, released as open source in early February 2020, lets researchers study the long-term effects of AI's decisions by simulating outcomes so the fairness of a policy can be assessed.[10] "Counterfactual fairness," a technique DeepMind is exploring, requires that a model's decisions stay the same in a counterfactual world where attributes such as race, gender, or sexual orientation are changed.[11] There are also open-source tools such as Local Interpretable Model-Agnostic Explanations (LIME) that explain individual predictions, helping teams spot unintended discrimination before a model is deployed.[12]
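
To make two of those ideas concrete, the sketch below runs a simplified "flip test" (change only the sensitive attribute and count how often predictions change, a rough stand-in for counterfactual fairness, which in its full form reasons over a causal model of the data) and then asks LIME to explain a single prediction. The loan-style data, feature names, and class labels are invented; the open-source lime package and scikit-learn are assumed to be installed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical loan data: income, debt ratio, and a binary sensitive attribute.
# The labels deliberately leak the sensitive attribute so the checks below
# have something to find.
feature_names = ["income", "debt_ratio", "sensitive_attr"]
X = np.column_stack([
    rng.normal(60_000, 15_000, 1_000),
    rng.uniform(0, 1, 1_000),
    rng.integers(0, 2, 1_000).astype(float),
])
y = (X[:, 0] / 100_000 - X[:, 1] + 0.3 * X[:, 2]
     + rng.normal(0, 0.1, 1_000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 1) Simplified flip test: change only the sensitive attribute and see how
#    often the prediction changes. (Counterfactual fairness proper reasons
#    over a causal model; this is just a cheap first check.)
X_flipped = X.copy()
X_flipped[:, 2] = 1.0 - X_flipped[:, 2]
flip_rate = (model.predict(X) != model.predict(X_flipped)).mean()
print(f"predictions that change when the sensitive attribute flips: {flip_rate:.1%}")

# 2) LIME: explain one individual prediction as weighted feature contributions.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["deny", "approve"],
    discretize_continuous=True,
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())  # if sensitive_attr carries a large weight, investigate
```

Toolkits such as IBM's AI Fairness 360 and Microsoft's Fairlearn package up many more fairness metrics and mitigation methods than this sketch covers.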

Combat Human Bias with Smart AI Interventions

Clean up data sets to improve human decision making. When there are known sensitive categories, such as race, gender, marital status, and sexual orientation, AI can strip or mask that data before a person ever sees it, so it can't nudge human reviewers toward snap judgments driven by common cognitive biases. San Francisco scans police reports and automatically redacts race information using AI.[13] Companies like Hulu and Twilio use AI recruiting tools to surface diverse candidates who might be overlooked by traditional recruiting methods.[14]
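
As a toy illustration of the "clean it up before a human sees it" pattern, the sketch below masks sensitive fields in a structured record before it reaches a reviewer. The field names and candidate record are hypothetical; production systems such as San Francisco's report-redaction tool work on free text and rely on NLP models rather than a fixed field list.

```python
# Fields a reviewer should not see when making the decision.
SENSITIVE_FIELDS = {"name", "race", "gender", "marital_status", "sexual_orientation", "photo_url"}

def redact(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    return {k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v) for k, v in record.items()}

candidate = {
    "name": "J. Doe",
    "gender": "F",
    "race": "Asian",
    "years_experience": 7,
    "skills": ["python", "sql"],
}

print(redact(candidate))
# {'name': '[REDACTED]', 'gender': '[REDACTED]', 'race': '[REDACTED]',
#  'years_experience': 7, 'skills': ['python', 'sql']}
```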

[1] https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/; https://newsroom.haas.berkeley.edu/minority-homebuyers-face-widespread-statistical-lending-discrimination-study-finds/; https://www.cnbc.com/2018/12/14/ai-bias-how-to-fight-prejudice-in-artificial-intelligence.html; https://www.cnbc.com/2018/10/10/amazon-scraps-a-secret-ai-recruiting-tool-that-showed-bias-against-women.html
[2] https://www.wsj.com/articles/vatican-advisory-group-issues-call-for-ai-ethics-11582893000
[3] https://99percentinvisible.org/episode/on-average/
[4] https://ainowinstitute.org/discriminatingsystems.pdf
[5] https://www.wsj.com/articles/diversity-in-tech-needed-to-reduce-ai-bias-academic-says-11581330601
[6] https://www.forbes.com/sites/deborahtodd/2019/06/24/microsoft-reconsidering-ai-ethics-review-plan/
[7] https://www.wsj.com/articles/facebook-creates-teams-to-study-racial-bias-on-its-platforms-11595362939
[8] https://embeddedethics.seas.harvard.edu/
[9] https://venturebeat.com/2020/02/05/googles-ml-fairness-gym-lets-researchers-study-the-long-term-effects-of-ais-decisions/
[10] https://venturebeat.com/2020/02/05/googles-ml-fairness-gym-lets-researchers-study-the-long-term-effects-of-ais-decisions/
[11] https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
[12] https://venturebeat.com/2018/05/21/explainable-ai-could-reduce-the-impact-of-biased-algorithms/; https://towardsdatascience.com/understanding-model-predictions-with-lime-a582fdff3a3b
[13] https://www.cnet.com/news/san-francisco-is-using-ai-to-reduce-racial-bias-in-making-criminal-charges/
[14] https://venturebeat.com/2019/04/24/eightfold-raises-28-million-for-ai-that-matches-job-candidates-with-employers/