360 pages, Hardcover
First published September 24, 2024
Accurately predicting people's social behavior is not a solvable technology problem, and determining people's life chances on the basis of inherently faulty predictions [is immoral]
Increased computational power, more data, and better equations for simulating the weather have led to weather forecasting accuracy increasing by roughly one day per decade. A five-day weather forecast a decade ago is about as accurate as a six-day weather forecast today.
Facial recognition is different from other facial analysis tasks such as gender identification or emotion recognition, which are far more error-prone. The crucial difference is that the information required to identify faces is present in the images themselves. Those other tasks involve guessing something about a person... When critics oppose facial recognition on the basis that it doesn't work, they may simply try to shut it down or shame researchers who work on it. This approach misses out on the benefits that facial recognition has brought. For example, the Department of Homeland Security used it in a three-week operation to solve child exploitation cold cases based on photos or videos posted by abusers on social media. It reportedly led to hundreds of identifications of children and abusers.
We won't fret about the fact that there's no consistent definition. That might seem surprising for a book about AI. But recall our overarching message: there's almost nothing one can say in one breath that applies to all types of AI.
The depressing part is this: among the vast universe of "good enough" cultural products, it is a largely random process that determines success. This is a mathematical consequence of cumulative advantage. The effect of an initial review of a book or rainy weather on the opening weekend of a film can get amplified over time. A noted actor signing on might attract other famous actors, leading to success-breeds-success dynamics during the film production process...
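The cumulative-advantage dynamic described above can be sketched with a small simulation (this is an illustrative model, not code from the book): a set of identical products competes for consumers, each consumer picks a product with probability proportional to its current popularity, and early random luck compounds into large final differences.

```python
import random

def simulate_market(n_products=10, n_consumers=10_000, seed=0):
    """Cumulative-advantage sketch (a Polya-urn-style process).

    All products start with equal appeal (count = 1). Each consumer
    picks a product with probability proportional to its current
    popularity, so small early leads get amplified over time.
    Returns the final popularity counts, largest first.
    """
    rng = random.Random(seed)
    counts = [1] * n_products  # identical initial appeal
    for _ in range(n_consumers):
        pick = rng.choices(range(n_products), weights=counts)[0]
        counts[pick] += 1
    return sorted(counts, reverse=True)

# Different random seeds crown different winners even though every
# product started out identical -- success is largely path-dependent.
print(simulate_market(seed=1))
print(simulate_market(seed=2))
```

Running this with different seeds shows heavily skewed outcomes whose winners differ from run to run, which is the sense in which success among "good enough" products is largely random.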
The main problem with [the usual AI xrisk] argument is that it posits an agent that is unfathomably powerful yet lacks an iota of common sense to recognize the absurdity of the request, and will thus interpret it extremely literally, oblivious to the fact that it goes against human safety. This kind of mindless, literal interpretation is characteristic of traditional AI agents, which are programmed with knowledge of only a very narrow domain. For example, an AI agent was tasked to finish a boat race as quickly as possible, ideally learning complex navigation strategies. Instead, it discovered that by going in circles, it could accumulate reward points associated with hitting certain markers, without actually completing the race!
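The boat-race failure is an instance of reward hacking: the reward function is a proxy (points for hitting markers) rather than the intended goal (finishing the race). A toy illustration of the misspecification, with made-up numbers and policies that are not from the actual boat-race system:

```python
def episode_reward(policy, max_steps=1000):
    """Toy misspecified reward: +10 per marker hit, +50 for finishing.

    A 'racer' hits a few markers and finishes the course; a 'looper'
    circles one cluster of markers for the whole episode and never
    finishes. The proxy reward never penalizes not finishing.
    """
    if policy == "racer":
        return 3 * 10 + 50        # 3 markers, then the finish bonus
    if policy == "looper":
        return max_steps * 10     # a marker every step, no finish
    raise ValueError(f"unknown policy: {policy}")

# Under this proxy objective, the looping policy dominates, so a
# reward-maximizing learner converges on circling, not racing.
assert episode_reward("looper") > episode_reward("racer")
```

The point is that the literal objective, not any malice or stupidity in the agent, produces the pathological behavior; this is characteristic of narrow, traditionally programmed agents.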
But the more general the agent, the less likely this is. We don't think an agent that acts in this extreme way will actually be intelligent enough to acquire power over anyone, much less all of humanity. In fact, it wouldn't last five minutes in the real world. If you asked it to go get a lightbulb from the store "as fast as possible," it would do so by ignoring traffic laws, risking accidents. It would also ignore social norms, cutting in line at the store. Or it might decide not to pay for the item at all. It would promptly get itself shut down.