- 01/17/2025
In their recent book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, Sayash Kapoor and his co-author Arvind Narayanan give readers a clear-eyed explanation of why AI fails and why people keep falling for bogus claims and misleading hype. But this isn’t an anti-AI screed; on the contrary, Kapoor and Narayanan also explain why they believe that newer, generative forms of AI might unlock genuine utility.
FCAT’s VP of Research John Dalton had a chance to catch up with Kapoor about the state of AI and delve into the risks and possibilities surrounding these tools.
Q: First, Sayash, I’d like to thank you and Arvind for writing this book and for the blog that preceded it. Your work continues to provide some of the most sober and sane thinking about artificial intelligence that I’ve encountered. What do you think is the biggest misunderstanding people have about AI?
I think the biggest confusion stems from the fact that AI is an umbrella term for a set of loosely related but distinct technologies. While some types of AI have made massive progress in the last few years, other types, like AI used for making predictions about people's futures, have not made much progress at all. It doesn’t help that there’s no overarching definition of what people mean when they use the term “AI.”
How do you define it?
In our work, we found that there are three loose criteria, which are not necessarily independent or exhaustive, but they give you a flavor of what we mean when we say “AI.”
The first is a technology that automates a task that requires creative effort or training for humans. So, for example, in the last few years we've seen many text-to-image tools. These are models that can generate images from text descriptions, a task that would typically require a lot of creative effort from a human artist. Those tools could be considered AI.
The second criterion is that the behavior of the tool is not directly specified in the code by the developer. For example, consider a thermostat that learns from how you’ve previously set the temperature to automatically determine the setting you find most comfortable. That’s also AI.
The last criterion is that there should be some flexibility of inputs. If the tool can only recognize the cats or dogs it has already seen in the data used to train it, that’s not AI. However, if it works well (perhaps not perfectly) on new images of cats and dogs, then that’s AI.
I really love the title of your book and blog. As you know, in order for there to be snake oil, there’s got to be a buyer. In the book, you do a brilliant job of explaining why predictive AI is especially prone to overpromising and underdelivering — but we still fall for it. Why is that?
The fact that AI is an umbrella term for all of these different technologies causes a lot of confusion. We talk a lot in the book about social media algorithms, robotics, and self-driving cars, even robotic vacuums. Vendors and the media often conflate these applications with the advances we’ve seen in generative AI.
But I think it’s also important to look at what prompts the demand for AI snake oil. We have a whole section in the book on how AI appeals to broken institutions. For instance, if you look at hiring automation, these tools can be so appealing because you have a hiring manager who may have to sift through hundreds or even thousands of resumes for a single job or a small number of openings. When you're in that position, turning to a tool that claims to objectively rank the top 10 candidates seems extremely alluring. As long as you have resource-constrained institutions, they will turn to either AI snake oil or some other “magic bullet” to solve their problems.
Just to be clear, you’re not saying that all AI is snake oil. Predictive AI has a lot of problems, but is there a bright spot?
We’ve seen legitimate technical advances with generative AI. I think GenAI, taken as a whole, has the potential to impact the lives of all knowledge workers: broadly speaking, everyone who thinks for a living. I think this trend will only continue to grow with time as we figure out the appropriate use cases.
We didn't write this book because we think all AI is snake oil. On the contrary, we wanted to give people a way to distinguish snake oil from the tools making rapid and genuine progress, helping them to ignore the former and tap into the latter.
John Dalton is VP of Research at FCAT, where he studies emerging interfaces (augmented reality, virtual reality, speech, gesture, biometrics), socioeconomic trends, and deep technologies like synthetic biology and robotics.