
AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference

Confused about AI and worried about what it means for your future and the future of the world? You're not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works, why it often doesn't, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don't work, and probably never will.
While acknowledging the potential of some AI, such as ChatGPT, AI Snake Oil uncovers rampant misleading claims about the capabilities of AI and describes the serious harms AI is already causing in how it's being built, marketed, and used in areas such as education, medicine, hiring, banking, insurance, and criminal justice. The book explains the crucial differences between types of AI, why organizations are falling for AI snake oil, why AI can't fix social media, why AI isn't an existential risk, and why we should be far more worried about what people will do with AI than about anything AI will do on its own. The book also warns of the dangers of a world where AI continues to be controlled by largely unaccountable big tech companies.
By revealing AI's limits and real risks, AI Snake Oil will help you make better decisions about whether and how to use AI at work and home.

360 pages, Hardcover

First published September 24, 2024


About the author

Arvind Narayanan

8 books · 12 followers

Ratings & Reviews


Community Reviews

5 stars: 58 (27%)
4 stars: 84 (40%)
3 stars: 58 (27%)
2 stars: 7 (3%)
1 star: 3 (1%)
Jason Furman
1,308 reviews · 1,119 followers
November 2, 2024
Some of AI Snake Oil is very good, including its skepticism about AI hype, an excellent chapter on the limits of AI doomerism, and a focus on how AI is used by humans rather than its autonomous capabilities. But much of the book—including its ultimate recommendations—is deeply misguided, reflecting a misunderstanding of capitalism, a mix of concerns not really AI-related, a one-sided review of the evidence, and a failure to compare AI to the alternatives—namely flawed humans and non-AI technologies. They are also more skeptical about progress in AI than I would be, though I don’t have strong convictions about who is right on this.

The book opens well by pointing out that AI is an overly broad term which confuses debates about it, analogizing it to a world where we only used the word “vehicles” and some people arguing for their efficiency meant bicycles while their debate opponents were focused on SUVs.

They distinguish between predictive AI, generative AI, and social media content moderation AI (in a chapter that feels out of place). They argue that much predictive AI is based on unreproducible papers with several errors, including testing on training data (“leakage”), and that it lacks structural models, so it breaks down when behavior changes. Moreover, companies deploy and sell systems that are often untested and sometimes aren’t even AI (occasionally with humans behind them) as part of widespread AI hype.
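To make the “leakage” error concrete, here is a toy sketch of my own (synthetic data, scikit-learn assumed; not an example from the book): with purely random labels, selecting features on all the data before cross-validating inflates apparent accuracy well above the true chance level.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))   # many pure-noise features
y = rng.integers(0, 2, size=100)   # labels carry no real signal

# Leaky: feature selection sees every row, including future "test" rows.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
print(cross_val_score(LogisticRegression(), X_leaky, y).mean())  # well above 0.5, bogus

# Correct: selection is refit inside each training fold only.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
print(cross_val_score(pipe, X, y).mean())  # about 0.5, i.e. chance
```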

I found this mostly compelling but disagreed in places. They criticize flawed machine bail decisions without engaging with literature showing how it can improve or comparing to how terrible human judges are with their limited time and information. They discuss an AI hiring system that can be gamed based on attire or interview language—again something humans do too, probably worse. They’re overly fatalistic about prediction: while perfect prediction is impossible, we can do better than coin tosses and provide uncertainty estimates for users to weigh errors.

They’re more positive about generative AI except regarding what they view as large-scale intellectual property theft. While I haven’t settled my views here, I’ve long thought IP protections are overbroad and hinder innovation—my instincts lean that way on generative AI too, though I’m uncertain. People get enormous growing benefits from generative AI; if stricter IP protections just shifted rents that might be acceptable, but radically reducing innovation would be problematic.

Their chapter on AI existential threat is masterful and should be widely read. They effectively critique doomer arguments: they expect only incremental progress toward AGI, note that AI risks can be fought with better AI making unilateral disarmament counterproductive, argue “alignment” is premature given unknown future technologies, contend that paper clip-maximizing AI couldn’t exist without human-like understanding, and emphasize focusing on human misuse through measures like restricting bioweapon ingredients.

Their deeper flaws emerge from skepticism of capitalism that leads to indefensible positions. They criticize OpenAI’s Kenyan data annotators earning $1.46-3.74 hourly while engineers make nearly million-dollar salaries at an $80 billion company. This is pure demagoguery—the relevant comparison is to these workers’ alternatives, not to AI engineers. Even their criticism that “data annotation firms recruit prisoners, refugees, and people in collapsing economies” could be read positively: AI creating employment for the least employable is potentially beneficial.

The social media chapter focuses on Facebook’s Type I and Type II content moderation errors, but as they acknowledge, this mostly reflects human judgment rather than AI. They offer no real alternative to this complex task, noting Facebook couldn’t afford to handle 83 Ethiopian languages and moderate rare but crucial events. They praise Mastodon, which is far less usable than X and, by their admission, may not be scalable.

More broadly, they seem nostalgic for public provision, nonprofits, and smaller companies. They note “The early internet was funded by public funds and DARPA... before 1990s privatization,” overlooking that pre-privatization internet was barely accessible and limited in utility. Similarly, criticizing large AI companies ignores that they’ve produced the major breakthroughs.

They argue AI progress will be slow because profit-focused companies won’t invest in understanding how AI works. While true for some firms, well-funded companies with long-term horizons are likely to invest in understanding if it creates competitive advantage.

Some recommendations are sensible—like improving research reproducibility and enforcing deception laws. Others seem tangential, like supporting randomized college admissions above certain thresholds—an AI-irrelevant proposal they wouldn’t extend to bail decisions. Ultimately, what people attempt with AI, especially predictive AI, is challenging—but the alternatives are often worse.

[DISCLOSURE - I asked Claude "Can you do a very, very light edit of this" and posted that edit. I write these reviews very quickly, originally just did them for myself, and often have typos. Hopefully this eliminated the typos and improved the language a little--but it also may have introduced some changes I didn't love because I didn't check Claude's edit carefully. My hope is the improvements outweigh the worsenings--but even better would have been if I had spent more time to take advantage of, but not fully follow, the AI edits.]
Jean
177 reviews
August 28, 2024
Fantastic book! EVERYONE should read it. Clearly and thoroughly sorts out the reality from the hype, explaining why we are where we are (some extremely problematic uses already exist, hence: "snake oil") and what the future may hold (no, giant sentient robots aren't taking over). Excellent insights and discussions of the different forms of AI (predictive, generative, content moderation), the problems and promise of each, and how we might steer in the right direction.

Read this book if you're curious about AI, afraid of AI, have to make decisions about implementing AI, have kids, use social media, make policy, vote, wonder about AI in your work, are a journalist, are interested in tech, or just enjoy high-quality expository writing. Then sign up for the authors' newsletter.

I read an advance copy and reviewed it here: https://www.practicalecommerce.com/ai...
Ari Damoulakis
221 reviews · 7 followers
October 6, 2024
I am really not exaggerating: for me this is a very important book, and I hope all you my GR good friends will read it, so you will know to be careful, and will know how many dangerous things could be done by humans who are not careful with AI.
Listen, I love AI.
As a totally blind person, I have already had it do many amazing things for me and make wonderful changes in my life, but even I definitely also know that it has problems when, for example, it tells me an object is something which it isn’t.
I will rely on it even more once I achieve my plan to buy Envision Smart Glasses soon, which I am so super excited about.
But this book will also show you the terrible consequences AI could have for many humans, especially if other people use it wrongly, or even deliberately skew models to take advantage of or defraud other people, or if AI is unintentionally misused because biases are accidentally built in or, by mistake, many factors aren’t taken into account.
Or if humans start relying on flawed AI and do not apply their own judgments to many situations.
And as for predictive AI? Well, AI makes mistakes now. We as humans are sometimes irrational, and AI could create wrong futures even if it could predict them, haha.
Better that we humans live our own lives; let us hope we just don’t become cogs in decisions made by large companies that have too much faith in whatever future their AI predicts is best for us.
And yes, I am still mad at Facebook’s AI for refusing to let me comment on my friends’ posts with what we all know are stereotypical South African geographic jokes. You know, you can’t even use ‘I’ll kill you’ in a sentence in comments to friends you’ve had for over 20 years without the AI refusing to post it because it thinks I am issuing death threats or hate speech.
Trina
1,143 reviews · 3 followers
September 10, 2024
Having read several books on AI in the last few months, this wasn't groundbreaking for me. I do think some of their approaches were different (certainly less doom and gloom than some) and I thought the final part where they imagined the world in two different ways depending on how we deal with AI was interesting. I am still waiting for a real explanation of why LLMs were able to train on data that can now be used in perpetuity without compensation to the original creators.
Becca
68 reviews
September 5, 2024
This was a great read! It helped expand some of my ideas and understanding of AI, as well as temper some of the things I hear floating around.
Debbie Mitchell
413 reviews · 9 followers
November 20, 2024
With so much discourse about Artificial Intelligence over the last few years, I’m so grateful for this book!

The authors are both computer scientists who explain the difference between predictive AI, generative AI, and content moderation AI. They talk about the limitations and potential for each of these types of AI.

I really appreciated how these two computer scientists highlighted and addressed issues of racism in tech, and also the way tech industries are overly centralized in the West and all of the problems that has caused.

I also didn’t realize how bad AI detection software is and how it can often target non-native English speakers. Something to be careful about as a professor!

I also loved that they talked about universal basic income as a solution for folks whose livelihoods are threatened because of AI.

Would definitely recommend!
November 1, 2024
This book offers a valuable overview of the current AI landscape, distinguishing between effective AI applications and those that are merely hype. It delves into the inherent challenges of certain AI problems, particularly those related to predicting individual human behavior. Additionally, it sheds light on the complexities of content moderation in social media while exploring the potential opportunities presented by Generative AI.

A recurring theme throughout the book is the necessity for enhanced regulations, transparency and reproducibility within the AI field. It emphasizes the importance of proactively addressing and mitigating the risks associated with the growing use of AI technologies.

Finally, the book gives us two possible futures, depending on how we handle AI ethics and regulations. It's slightly unsettling and it makes you think about the choices we're making today.

If you're feeling confused about all the AI hype going on, this book is a quick and easy way to get a better understanding.

For those who have been following this field for some time, there is not a lot of new information, and they could probably skip to reading the two versions of the future at the end of the book.
Susan
191 reviews
November 5, 2024
A comprehensive deep dive into the failings and inadequacies of AI (predictive, generative, and content moderation AI), to warn readers of “AI snake oils”, i.e. AI that does not and cannot work as advertised. I loved the discussion on AGI, and how general intelligence requires a certain degree of common sense, judgement and social intelligence. Overall worth a read, although the book felt fairly long.
Wouter
183 reviews
October 2, 2024
Critical and positive book about AI. It explores three forms of AI: GenAI, predictive AI, and content moderation AI.

There weren't many eye-openers or profound insights. It solidly describes how we got here and the potential and deceit of AI.

It was ironic that the book ended with two scenario predictions whilst being very critical of predictive AI.
Gavin
1,137 reviews · 470 followers
October 25, 2024
Despite the title it's not a diatribe. They square the obvious success of some AI systems with their thesis about hype by saying that generative AI is real and predictive AI is often fake. This is an improvement over many ethicists and journalists, who say that it's all fake (and also dangerous).
Accurately predicting people's social behavior is not a solvable technology problem, and determining people's life chances on the basis of inherently faulty predictions [is immoral]

But their squaring isn't right either. First, pedantically, generative AI is literally predictive: an LLM implements generation by sequentially predicting the next word or pixel or waveform (a toy decoding loop is sketched below). But more broadly, predictive AI has had huge successes in well-defined domains: OCR, image captioning, controlling reactor plasma, predicting protein structure and interactions, and chip design. And the essentially identical field of statistical modelling includes applications that underwrite the modern world: unit pricing in firms, price discovery in markets, fraud detection, risk modelling, and industrial planning. Climate models have been very accurate about temperature rise. They grant this point in a nice section about weather models:
Increased computational power, more data, and better equations for simulating the weather have led to weather forecasting accuracy increasing by roughly one day per decade. A five-day weather forecast a decade ago is about as accurate as a six-day weather forecast today.

which they put down to it being simulation-based (modelling the physics) rather than machine learning (learning a probabilistic model without hard dynamics underneath). But many of the astonishing successes I mentioned above are pure probabilistic modelling.
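To make the "literally predictive" point above concrete, here is a toy decoding loop. The model is a hypothetical stand-in (random logits over a six-token vocabulary), not any real LLM; only the loop structure reflects how generation actually works.

```python
import numpy as np

def next_token_probs(context):
    # Stand-in for a real LLM forward pass: returns a probability
    # distribution over a tiny 6-token vocabulary given the tokens so far.
    rng = np.random.default_rng(hash(tuple(context)) % (2**32))
    logits = rng.normal(size=6)
    e = np.exp(logits - logits.max())
    return e / e.sum()  # softmax

tokens = [0]  # start token
for _ in range(8):
    p = next_token_probs(tokens)
    tokens.append(int(np.argmax(p)))  # greedy decoding; real systems often sample
print(tokens)
```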

OK, so they mean predictive AI in social domains besides economics. COMPAS is taken to be the central example of all predictive AI. It's right to point out that people use AI to predict complex and loopy systems like policing, hit songs, and epidemics, which are just hard / predictions are endogenous / just don't have enough data. But this isn't enough for their overall dismissive stance.

I liked this example of clear sight:
Facial recognition is different from other facial analysis tasks such as gender identification or emotion recognition, which are far more error-prone. The crucial difference is that the information required to identify faces is present in the images themselves. Those other tasks involve guessing something about a person... When critics oppose facial recognition on the basis that it doesn't work, they may simply try to shut it down or shame researchers who work on it. This approach misses out on the benefits that facial recognition has brought. For example, the Department of Homeland Security used it in a three-week operation to solve child exploitation cold cases based on photos or videos posted by abusers on social media. It reportedly led to hundreds of identifications of children and abusers.


They are unfussed by defining "AI", which is one ok choice:
We won't fret about the fact that there's no consistent definition. That might seem surprising for a book about AI. But recall our overarching message: there's almost nothing one can say in one breath that applies to all types of AI.

But you can go past "monovalent" (criterion-based) thinking by moving to a vector view - which these authors are easily capable of and which graphs could partially convey to the audience.


The depressing part is this: among the vast universe of "good enough" cultural products, it is a largely random process that determines success. This is a mathematical consequence of cumulative advantage. The effect of an initial review of a book or rainy weather on the opening weekend of a film can get amplified over time. A noted actor signing on might attract other famous actors, leading to success-breeds-success dynamics during the film production process...


Kudos to them for including a chapter on inherent AI risk. But no kudos for their actual argument:
The main problem with [the usual AI xrisk] argument is that it posits an agent that is unfathomably powerful yet lacks an iota of common sense to recognize the absurdity of the request, and will thus interpret it extremely literally, oblivious to the fact that it goes against human safety. This kind of mindless, literal interpretation is characteristic of traditional AI agents, which are programmed with knowledge of only a very narrow domain. For example, an AI agent was tasked to finish a boat race as quickly as possible, ideally learning complex navigation strategies. Instead, it discovered that by going in circles, it could accumulate reward points associated with hitting certain markers, without actually completing the race!

But the more general the agent, the less likely this is. We don't think an agent that acts in this extreme way will actually be intelligent enough to acquire power over anyone, much less all of humanity. In fact, it wouldn't last five minutes in the real world. If you asked it to go get a lightbulb from the store "as fast as possible," it would do so by ignoring traffic laws, risking accidents. It would also ignore social norms, cutting in line at the store. Or it might decide not to pay for the item at all. It would promptly get itself shut down.

The first point here is that they are making very confident claims about a system that doesn't exist, something they're usually against. How can they know that? How can they know that to a degree such that it's not worth worrying about despite the huge potential consequences?

The second is that we already have an existence proof for intelligent agents without moral sense: sociopaths.

The third is that it's actually them that are underestimating such an AGI. If it has any autonomous goal which conflicts in any way with ours, and if it is smart enough, then it will not stupidly ignore traffic laws and out itself, but instead play along until it has a decisive advantage.

As it happens I think we will get to see signs of deception and misalignment in models, they won't one-shot us. (We already had some in a somewhat dumb model.) So I look forward to them changing their minds when we do. (But one-shotting is not out of the question.)

They say that putting probabilities on unobserved quantities is meaningless. But this is equivalent to them saying that the probability is radically unknown, could be anything. This is not reassuring.

As usual, they misunderstand the actual biological threat model of AI: it's about making it cheap enough for scruffy terror outfits, and (even more so) actually designing the genetic sequence of the agent.

It's very funny to see them using cognitive biases to explain why people think AI will work and be dangerous. Bulverism.

Overall, worth reading, worth arguing with, far ahead of the crowd.

(To obtain the quotations in this review (despite Kindle blocking copy-paste) I used Snipping Tool's built-in OCR, a lovely tiny bit of AI.)
611 reviews · 11 followers
October 6, 2024
I’ve been following AI efforts over the last few years. I did a bit of a deep dive 2-3 years ago & came away with a skeptical view of the technology overall. While some is fun to play with, most of the technology is simply hype, of which some is quite harmful. I was delighted the authors of the site AI Snake Oil wrote a book to encapsulate their ideas.

I find most AI, in its current form, is primarily snake oil. Companies hype their products, but nothing pans out. Everyone is chasing the funds from investors who want to be first into what they believe is the next round of high tech billionaire creation.

But what is actually produced is a bunch of junk. The authors jump right into the worst of it, the predictive AI tools. There is absolutely no way a machine can predict an individual’s behavior at any point in the future. Life is full of too much randomness. Any data sets are highly biased. Take their takedown of predictive hiring tools: have a candidate answer a bunch of questions & the companies guarantee they can identify the candidate who will work out best. Yet when interrogated, the models’ outcomes are about as accurate as a coin flip (see the sketch below). The same with predicting who should get bail and who shouldn’t.
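A hedged sketch of that "coin flip" claim, on synthetic data of my own (not from the book): when a model's scores carry no signal about the outcome, its ROC AUC sits near 0.5, which is exactly chance level.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
worked_out = rng.integers(0, 2, size=500)  # hypothetical "candidate worked out" labels
model_scores = rng.random(500)             # an uninformative model's scores
print(roc_auc_score(worked_out, model_scores))  # hovers around 0.5: a coin flip
```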

It is this aspect of AI that I find scary. It isn’t rogue AI (that is purely imaginary), but the 100% belief in systems that affect individuals. There isn’t a way to analyze the results or to know why the system chose the result it did. It simply is. Following the machine without question is the lazy way out. We’ve made people believe in the infallibility of the machine, yet it is simply a reflection of the people who trained it.

The authors touch on the fact that each query or interaction with AI models is computationally expensive. Hoping newer GPU cards will make this better is wishful thinking. AI will hit a wall due to its enormous power & water requirements. The USA doesn’t have the power infrastructure to handle the desired exponential growth. Investors want a hockey stick graph (ie fast ramp), instead it’ll be a flat line. One way out is to push as much as possible down to local devices, but this is only starting. On top of that, who is going to pay for running the models? OpenAI is some hugely valued company, yet they can barely monetize what they have.

As someone who works for me said recently: during the dotcom era, everyone needed a website. Currently, not everyone needs or wants AI. I have SaaS tools that have rushed to add AI-type features, yet all of them suck. The hype has gotten the public to spout the desire for AI-like features while having zero idea what that means. They don’t realize that most are just snake oil.

Michael
334 reviews · 8 followers
November 17, 2024
Sadly this book was really bad. It’s just lots of classic “here’s how tech is bad”. It’s somewhat well summed up in the Ted Chiang quote towards the end that all fears about AI are actually fears about capitalism. These are not interesting or insightful. A book written in 2024 that uses studies from GPT-2 and doesn’t discuss hyperscaling or agents or Llama just isn’t contributing anything new to the conversation. There’s perhaps some utility in separating out generative and predictive AI (and for some reason adding social media moderation as a third thing?) but it totally fails to explain the parts that make this new era new. It’s not just ML 2.0; it’s that we have fundamentally new capabilities because of scaling and advances in the algorithms. It’s that we have better chips. It’s that we can eliminate many coordination and interoperability challenges by having computers interact more like humans. It’s that we can unlock the potential of smart but untrained people because most innovation is just pattern matching applied in new ways.

Sad because I was hoping for much more.
445 reviews · 7 followers
November 17, 2024
Good deep dive on AI and its potential impact on society. The book covers many of the questions that I see asked about the potential risks of AI, as well as helping the reader to understand what AI can actually do and what it cannot.

I felt the chapter on generative AI and the final chapter about "where do we go from here" were the two most helpful for framing the discussion, but the entire book is good for those interested in the topic.

Importantly, it is not highly technical and focuses more on the societal impacts. I would recommend this for a) any business owner, b) readers interested in AI as a topic, c) those who may have fears of how AI will impact their jobs or family.
Tejas
24 reviews · 3 followers
October 4, 2024
Prediction is only a probability, not an inevitability: yet how many predictive-AI-based solutions come to you giving any idea of the probability behind their answer, instead of a binary yes/no reply? None. (A sketch of the difference follows.)
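For illustration, a minimal sketch (scikit-learn, synthetic data; my own example, not the reviewer's) of the difference being pointed at: most classifiers can expose a probability, yet deployed tools often surface only the hard yes/no.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + rng.normal(scale=1.5, size=500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
x_new = rng.normal(size=(1, 4))
print(clf.predict(x_new))        # the binary yes/no verdict, e.g. [1]
print(clf.predict_proba(x_new))  # the probability behind it, e.g. [[0.31 0.69]]
```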

Transparency, privacy, reliability are all the factors in question here: quite correctly so.
This book will undoubtedly become a reason for AI-based policies to get implemented optimally within the next year or two. All the policymakers in developing nations should be reading this book to devise their regulations for AI.
150 reviews · 8 followers
October 13, 2024
AI Snake Oil is an immensely readable and engaging overview of AI as it stands today. Perfect for readers with no more than a passing awareness of what's under the hood of AI, the authors clearly break it down into its use cases (predictive/generative/content moderation), with the limitations and potentials of each. This is not a technology-heavy book--it places AI in the context of society and the structures around it, which makes the authors' recommendations for the future of AI a lot more nuanced and deliberate.
32 reviews
November 12, 2024
Excellent introduction to the field, particularly good at puncturing the hype around AI and showing how AI boosters are no different from the charlatans who jump on every new technology. There's a chilling vision of a world where governments have given a few dominant AI companies free rein to run the world as they see fit, because it's cheaper than proper regulation and they depend on billionaire funding come election time; chilling because we're already there. I would have liked more technical information on the nuts and bolts of how AI models work, but maybe that's for another book.
Daniel Maurath
235 reviews · 1 follower
November 10, 2024
It’s a good reference for understanding the ways that AI claims can be misleading, and also a good reality check on AI's current capability. But the writing was dry and formulaic, and their argument for why artificial intelligence isn’t going to go rogue was basically non-existent (in short: “of course we’ll program failsafes”). I’ll still follow their newsletter though - good to keep up with what happens!
P.
339 reviews · 3 followers
November 20, 2024
The book is fine. I think it's got a strange flow and structure to it... it's not really well put together. There's a lot of good information here and some important distinctions among the different kinds of tech we call "AI", but this book is quite basic for anyone who already has a working understanding of how AI works. Great for beginners, not so much for others. 6/10
Elizabeth Taireh
55 reviews · 4 followers
October 17, 2024
This was my first book about AI. If you've read a lot about AI or are already very informed on the topic, maybe this won't be an eye-opening book for you. For me, it made the topic much more approachable and gave me a deeper understanding of AI.
154 reviews · 5 followers
November 3, 2024
Interesting. As it turns out, I'm a member of a choir and I've just been preached to.

I already knew that there was a lot of overpromising around AI, but I still learned a lot more about it, so that's cool.
Jaanika Merilo
94 reviews · 42 followers
November 21, 2024
I am a great believer in AI in regard to expanding our capacities. But expanding while being evidence-based and controlled.

Having said that, this is a very interesting read about how AI might not really be there yet for different uses.
371 reviews
September 7, 2024
I really like how they explained AI and the specific scenarios where it is beneficial and detrimental. It really helped me to understand it better.
49 reviews · 1 follower
October 17, 2024
Not what I expected. Spent too much time on general social issues.
sam
75 reviews · 6 followers
November 5, 2024
If you're interested in better understanding how AI works, what it can actually do, and what companies and the media are lying to you about - this one is worth your time!
Aubrey Drummond
Author · 3 books · 3 followers
November 15, 2024
Common-sense book about the impact of AI. It reaffirmed my thoughts and offered some insightful history on artificial intelligence.
