FCAT RESEARCH
The Cost of Manipulating AI
By: SARAH HOFFMAN | JUNE 23, 2021
We’ve all received email messages with deliberate misspellings designed to outsmart AI-driven spam filters. Perhaps you’ve also heard how a two-inch piece of tape tricked Tesla cars into accelerating 50 miles per hour over the posted speed limit.1 What these and hundreds of other anecdotes demonstrate is that it’s relatively easy to manipulate AI systems. Indeed, efforts to "fool" AI have already impacted our industry, where we find numerous examples of people trying to manipulate:

Investors. Companies today recognize that the target audience of their disclosures is no longer solely human analysts and investors; a significant amount of buying and selling of shares is triggered by recommendations made by algorithms.2 Given this, companies have adapted the language in their forecasts, SEC regulatory filings, and earnings calls to manipulate their AI audiences.3 Since 2011, when two finance professors published a detailed finance-specific dictionary of positive and negative words as a training tool for Natural Language Processing algorithms, companies that expect high machine downloads of their filings have been avoiding those negative words and adopting more positive, excited vocal tones, hoping algorithmic readers will come to favorable conclusions about their content.4 The words cautious executives avoid most on Wall Street: "restatement," "declined," "misstatement," "closure," and "late."
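To make the mechanics concrete, here is a minimal sketch of a dictionary-based tone scorer in the spirit of the approach described above. The word lists are tiny illustrative samples (the real finance dictionary contains thousands of entries), and the scoring formula is an assumption for demonstration, not the professors' actual method.

```python
import re

# Illustrative samples only; the published finance dictionary has thousands of words.
NEGATIVE = {"restatement", "declined", "misstatement", "closure", "late"}
POSITIVE = {"growth", "improved", "record", "strong", "exceeded"}

def tone_score(text: str) -> float:
    """Return (positive - negative) / matched words, ranging from -1 to 1."""
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

# Swapping flagged words for neutral synonyms flips the score,
# even though the underlying facts are unchanged.
print(tone_score("Revenue declined and a restatement is expected."))     # -1.0
print(tone_score("Revenue was revised and strong growth is expected."))  #  1.0
```

This is precisely why word substitution works: a scorer that counts dictionary hits cannot tell a healthier company from a more carefully worded filing.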

Recruiters. Knowing that AI filters scour resumes to weed out candidates based on little more than matching the language in a resume with the language in a job posting, job seekers insert keywords from the posting to boost their chances.5 To beat AI video interviews, interviewees are learning to gesture, smile, and nod frequently.6 A recent experiment in Germany found that a local video-analyzing AI rated people as more hirable if they had a bookshelf behind them and less so if they wore eyeglasses.7 Want a job? Drop the glasses and buy some books.
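The screening weakness is easy to see in a toy model. Below is a hypothetical sketch of a naive keyword-overlap resume filter; real applicant-tracking systems are more sophisticated, but many still weight keyword overlap heavily, which is what makes copying the posting's language effective.

```python
def keyword_overlap(resume: str, posting: str) -> float:
    """Fraction of the posting's distinct words that also appear in the resume."""
    posting_words = set(posting.lower().split())
    resume_words = set(resume.lower().split())
    return len(posting_words & resume_words) / len(posting_words)

posting = "seeking data analyst experienced in sql python dashboards"

# Same candidate, two phrasings: echoing the posting's language wins.
plain = "built reports and queries for the finance team"
tailored = "data analyst experienced in sql python dashboards and reporting"

print(keyword_overlap(plain, posting))     # 0.0  (screened out)
print(keyword_overlap(tailored, posting))  # ~0.88 (passes the filter)
```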

Employers. With so many people working from home during the pandemic, many employers turned to AI and algorithms to track their employees’ productivity, and some workers figured out creative ways to outsmart these systems.8 Before Zoom removed its "attention tracking" setting, which alerted a call host when a participant was focused elsewhere, people fooled the system by using a second device or by making sure to click back to the Zoom window before 30 seconds passed. Presence Scheduler, an app that can set your Slack status as permanently active, saw its sales and traffic double at the beginning of the pandemic (until Slack closed the coding loophole).9

Why This Matters

As companies explore the many ways that AI can help them grow their businesses while controlling costs, these behaviors should give us pause.

A stubborn trust gap deepens. People generally tend to reject algorithms even when they are more accurate than humans. Research also shows that people lose trust in algorithms faster than in humans after seeing them make mistakes.10 Making matters more complicated, people with different educational backgrounds, genders, and ethnicities view machines differently, and people are more accepting of AI in some contexts, such as in a mall or hotel, than in others, such as watching children.11 As AI manipulation goes mainstream, trust in AI may become even more difficult to create and sustain.

Reliability of new data diminishes. In one essential dimension, AI is unlike any other technology: the performance of any AI system is only as good as the data it ingests. That means that fooling AI poses a serious challenge to building reliable AI solutions. As people figure out ways to outsmart AI, those manipulations become the new data for AI to train on, potentially giving an even greater advantage to those who know how to outsmart the system. Making things more unequal still, the ability to outsmart AI is not evenly distributed: those who are most knowledgeable about the specifics of the technology may be better at cheating AI tools. For example, some students easily outsmarted AI grading systems after their tech-savvy parents figured out that the AI was only checking for the presence of the correct answer.12
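The grading anecdote is worth spelling out, because the exploit requires no understanding at all. Here is an illustrative sketch, assuming (as the reports described) that the grader only checks whether expected answer keywords appear somewhere in the response; the keyword set is hypothetical:

```python
import re

# Hypothetical keyword set for one question; the grader passes any answer
# containing at least one expected keyword, per the behavior described above.
ANSWER_KEYWORDS = {"photosynthesis", "sunlight", "chlorophyll"}

def grade(answer: str) -> bool:
    """Pass if any expected keyword appears anywhere in the answer."""
    words = set(re.findall(r"[a-z]+", answer.lower()))
    return bool(ANSWER_KEYWORDS & words)

print(grade("Plants convert sunlight into energy via photosynthesis."))  # True
print(grade("photosynthesis sunlight chlorophyll roots stems leaves"))   # True: keyword salad passes
print(grade("Plants make their food from light."))                       # False: a correct idea fails
```

Once the pattern was noticed, appending a list of likely keywords to every answer was enough to earn full marks.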

The Path Forward

AI is not going away, and we need an approach that helps increase trust in AI and reliability of AI’s data. Systems that are perceived to be negligent, biased, or easy to manipulate could be harmful to a company’s brand. We need solutions that:

Incorporate humans. We need metrics that consider the human in addition to the algorithm. Consider remote employee performance evaluations: AI tools can be helpful, but we also need to consider how some employees may "game" those metrics. Measuring time online should be one small factor, not a substitute for manager and peer feedback. The U.S. Patent and Trademark Office uses AI to read the text of patent applications to spot the most relevant previous inventions. Some patent applicants attempt to manipulate the algorithm by hyphenating words, assigning new meanings to existing words, including irrelevant information, and omitting relevant citations so that their innovations appear novel to an algorithm. Researchers found that the AI performed markedly better in collaboration with humans who had domain expertise or experience using machine learning technology.13 Similarly, a purely algorithmic solution may not be the right solution for investment decisions.
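As a concrete illustration of keeping the human in the metric, here is a hedged sketch of a blended performance score in which online-time telemetry is capped at a small weight. The weights are illustrative assumptions, not a recommended formula.

```python
def performance_score(manager: float, peers: float, time_online: float) -> float:
    """Blend human feedback with telemetry; all inputs normalized to [0, 1].
    Telemetry gets a deliberately small weight so it cannot dominate."""
    return 0.45 * manager + 0.45 * peers + 0.10 * time_online

# Gaming the telemetry (time_online = 1.0) moves the needle far less
# than strong manager and peer reviews of the actual work.
print(performance_score(manager=0.9, peers=0.85, time_online=0.4))  # ~0.83
print(performance_score(manager=0.5, peers=0.5, time_online=1.0))   # 0.55
```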

Emphasize transparency and fairness. By being transparent about how we use data and AI models, we can defuse some of the urge people feel to manipulate a system. Google faced backlash because its Duplex AI could mimic a human and didn’t identify itself as a robot.14 IBM researchers also found that showing users the AI model’s confidence score increased levels of trust.15 Having a diverse AI team, and training that team on topics like measuring accuracy across demographics and using inclusive design during beta testing, can help reduce bias within AI systems and may also help uncover some of the issues driving the urge to fool AI.
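One simple way to put the confidence finding into practice is to surface the model's probability alongside every prediction and route low-confidence cases to a human. The function below is a minimal sketch; the threshold and wording are assumptions, and any classifier that exposes class probabilities (e.g., scikit-learn's predict_proba) could feed it.

```python
def present_prediction(label: str, probability: float, threshold: float = 0.75) -> str:
    """Show the prediction with its confidence; flag low-confidence cases
    for human review rather than stating them as fact."""
    pct = f"{probability:.0%}"
    if probability < threshold:
        return f"Suggestion: {label} (confidence {pct}) - please review manually"
    return f"Prediction: {label} (confidence {pct})"

print(present_prediction("approve", 0.92))  # Prediction: approve (confidence 92%)
print(present_prediction("reject", 0.58))   # Suggestion: reject (confidence 58%) - please review manually
```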

Leverage machine learning robustness techniques. A recent industry survey by Microsoft found that few industry practitioners take the threat of adversarial machine learning (techniques aimed at fooling AI models by supplying deceptive data) seriously or use tools to mitigate the risk.16 That’s unfortunate, because several techniques can improve machine learning robustness, making models harder to fool. Some approaches incorporate adversarial examples into training; others combine multiple machine learning models through ensemble learning.17 For example, a facial recognition system that uses multiple neural networks with different structures would be more resistant to an adversarial attack: the attack might exploit the vulnerabilities of one or two networks, but the final decision would rest on a majority vote.
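The majority-vote idea can be sketched in a few lines. The toy "models" below are stand-ins for independently trained networks; the point is structural: an adversarial input crafted against one model's weak spots must now fool most of the ensemble at once.

```python
from collections import Counter
from typing import Callable, Sequence

def ensemble_predict(models: Sequence[Callable[[object], str]], x: object) -> str:
    """Return the majority-vote label across all models."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

# Toy stand-ins for three differently structured networks: the first has
# been fooled by an adversarial input, but the majority still holds.
models = [
    lambda x: "impostor",  # fooled by the adversarial example
    lambda x: "alice",
    lambda x: "alice",
]
print(ensemble_predict(models, x="face_image"))  # "alice"
```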

Reimagine AI design. When we design products, we need to think about two sides of the problem: how to build trust in AI, perhaps by starting small, and how people might try to outsmart the products we’re building. We should also consider defaults that let users opt in to products that use customer or employee data. Beyond the human metrics and ensemble models mentioned above, consider diversity in the types of data used. For example, AI systems for investing could look beyond a static list of positive and negative words, possibly incorporating non-verbal cues such as body language, when available.


Sarah Hoffman is VP, AI and Machine Learning Research, in FCAT.

1 https://www.wired.com/story/tesla-speed-up-adversarial-example-mgm-breach-ransomware/
2 Cao, S., Jiang, W., Yang, B., & Zhang, A. L. (2020). How to Talk When a Machine is Listening: Corporate Disclosure in the Age of AI (No. w27950). National Bureau of Economic Research. https://www.nber.org/papers/w27950
3 Ibid.
4 Ibid.
5 https://www.nytimes.com/2021/03/19/business/resume-filter-articial-intelligence.html
6 https://www.insidehighered.com/news/2019/11/04/ai-assessed-job-interviewing-grows-colleges-try-prepare-students
7 https://web.br.de/interaktiv/ki-bewerbung/en/
8 https://www.cnbc.com/2021/05/27/office-surveillance-digital-leash-on-workers-could-be-crossing-a-line.html
9 https://www.wired.co.uk/article/work-from-home-surveillance-software
10 Hidalgo, C. A., Orghiain, D., Canals, J. A., De Almeida, F., & Martín, N. (2021). How Humans Judge Machines. MIT Press.
11 Ibid.
12 https://twitter.com/DanaJSimmons/status/1300680868269158400
13 Choudhury, P., Starr, E., & Agarwal, R. (2020). Machine learning and human capital complementarities: Experimental evidence on bias mitigation. Strategic Management Journal, 41(8), 1381-1411.
14 https://www.theguardian.com/technology/2018/may/11/google-duplex-ai-identify-itself-as-robot-during-calls
15 Zhang, Y., Liao, Q. V., & Bellamy, R. K. (2020, January). Effect of confidence and explanation on accuracy and trust calibration in ai-assisted decision making. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 295-305). https://arxiv.org/abs/2001.02114
16 Kumar, R. S. S., Nyström, M., Lambert, J., Marshall, A., Goertzel, M., Comissoneru, A., ... & Xia, S. (2020, May). Adversarial machine learning-industry perspectives. In 2020 IEEE Security and Privacy Workshops (SPW) (pp. 69-75). IEEE. https://arxiv.org/pdf/2002.05646.pdf
17 https://www.infoworld.com/article/3215130/how-to-prevent-hackers-ai-apocalypse.html