How to spot real value in AI — and avoid the snake oil
Bad artificial intelligence doesn’t just miss the mark. It can lead to real consequences for people and serious risks for businesses.
That was the message from Princeton University professor Arvind Narayanan during a recent talk at MIT about his new book, “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.” The book was co-authored by Sayash Kapoor.
Their core argument: Some AI tools in use today, especially in hiring, lending, and criminal justice, don’t just underperform; they simply don’t work. Others work exactly as claimed but are used in the wrong ways or for harmful ends. In both cases, these tools can produce outcomes that are inaccurate, biased, or even dangerous when applied at scale.
For business leaders, the challenge is knowing what to trust, and where to draw the line. Narayanan offered the following insights.
Spot the weak links in predictive AI
Predictive AI systems — designed to forecast human behavior and support decision-making — are increasingly being used in areas such as hiring and criminal justice. According to Narayanan, many of these tools don’t live up to their claims.
One example he cited: software that analyzes 30-second videos of job candidates to evaluate their personality traits based on speech and body language. These videos often don’t focus on a candidate’s qualifications at all — sometimes on just their hobbies — yet are used to generate scores that claim to predict job fit.
Narayanan called the approach “an elaborate random-number generator” and pointed to experiments that tested the software’s reliability. Minor visual changes, like adding a bookshelf in the background or removing a pair of glasses, led to “radically different scores,” even when the underlying video was the same.
These concerns extend to the criminal justice system, where algorithms are used to guide decisions about whether someone should be detained before trial. The performance of these tools is often weak, Narayanan said — they have a predictive accuracy rate of less than 70%, at best.
“We’re making decisions about someone’s freedom based on something that’s only slightly more accurate than the flip of a coin,” he said.

Focus on what generative AI does well
Not all AI is snake oil. Generative AI — tools that can generate text, images, and code — is already proving valuable.
“Generative AI is useful to basically every knowledge worker — anyone who thinks for a living,” said Narayanan.
He offered a personal example of using an AI-powered app generator to help his daughter understand fractions. On the spot, the model helped him build an interactive tool that turned learning into a game.
“We played with this for like 15 minutes, and it really helped her,” he said. “You couldn’t have imagined doing this a couple of years ago.”
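To give a feel for the kind of thing an AI app generator can spin up on the spot, here is a minimal, hypothetical sketch in Python of a fraction-comparison game. It is not Narayanan’s actual app, and every name and detail in it is illustrative only.

    # Hypothetical sketch of a fraction-practice game, similar in spirit to the
    # interactive tool described above (not the actual app Narayanan built).
    import random
    from fractions import Fraction

    def play_round() -> bool:
        """Show two random fractions and ask which one is larger."""
        a = Fraction(random.randint(1, 9), random.randint(2, 9))
        b = Fraction(random.randint(1, 9), random.randint(2, 9))
        if a == b:
            return play_round()  # skip ties and draw a new pair
        answer = input(f"Which is bigger: {a} or {b}? ")
        correct = a if a > b else b
        if answer.strip() == str(correct):
            print("Correct!")
            return True
        print(f"Not quite. The bigger one is {correct}.")
        return False

    if __name__ == "__main__":
        score = sum(play_round() for _ in range(5))
        print(f"You got {score} out of 5.")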
Still, even the most impressive systems aren’t flawless. Hallucinations, in which a model produces information that sounds accurate but isn’t, remain a known risk. And they’re not easily fixed, given that generative AI inherently involves randomness.
The takeaway: These tools can be useful, but only when paired with the right checks and accountability.
Address hidden risks
As generative AI tools become more widely used, the risk of misuse grows, and many examples are already playing out in the real world.
Narayanan pointed to AI-generated foraging guides that gave users inaccurate, potentially dangerous advice about which mushrooms are safe to eat. He also noted the rapid spread of deepfake pornography apps, which can turn an uploaded photo into nonconsensual explicit images.
And even when AI systems are used for their intended purposes, their impact can still be damaging.
Facial recognition technologies, for example, have reached high levels of technical accuracy — but that hasn’t eliminated concerns about misuse. “Mass surveillance using facial recognition … now works really, really well,” Narayanan said. “And that, in fact, is part of the reason that it’s harmful, if it’s used without the right guardrails.”
The lesson is this: Just because a tool works doesn’t mean it’s safe. Leaders need to weigh both its value and its potential for misuse.
Ask the right questions
Narayanan suggested asking two questions to avoid costly missteps: Does the tool work as claimed? And, even if it does, could it cause harm?
He pointed to AI-based cheating detectors as a cautionary tale that would fail the test. These tools often flag the wrong students — especially non-native English speakers. “They just don’t work. … That very much feels like snake oil to me,” Narayanan said.
Generative AI has its own pitfalls. Narayanan said that while AI-based agents can do things like navigate a website or shop online, they lack the reliability users expect. Software products that aim to complete these tasks “are pretty much dead on arrival,” he said.
His guidance for businesses: Stay grounded; focus on narrow, well-defined problems; and don’t mistake hype for readiness.
Treat AI as infrastructure, not magic
AI should be viewed less as magic and more as infrastructure, Narayanan said: “One day, much of what we call AI today will fade into the background.”
That’s not a reason to step back; it’s a reason to stay sharp. “We need to know which applications are just inherently harmful or overhyped,” he said. “Even when it does make sense to deploy an AI application, we need guardrails.”
Watch Arvind Narayanan’s talk about “AI Snake Oil.”