I have just finished reading Yuval Harari’s current bestseller, Nexus: A Brief History of Information Networks from the Stone Age to AI. As the title suggests, this is a long (400-plus pages) and slow book. As a review published in The New York Times observed, it is really two books.
“Really, what we have is two separate books, neither brief. The first 200 pages are indeed historical in their way. Unfortunately, this is a dizzying, all-in version of history that swerves unsatisfyingly from Assyrian clay tablets to a 19th-century cholera outbreak to an adaptation of the “Ramayana” on Indian TV to the Peasants’ Revolt in medieval England to the Holocaust in Romania, and so on. It doesn’t feel controlled, or even particularly expert — and the effect is a little like a flight where the person sitting next to you is well-read, hyper-caffeinated and determined to tell you his Theory of Everything.
…the second half of the book is where the action is. The meat of “Nexus” is essentially an extended policy brief on A.I.: What are its risks, and what can be done? (We don’t hear much about the potential benefits because, as Harari points out, “the entrepreneurs leading the A.I. revolution already bombard the public with enough rosy predictions about them.”) It has taken too long to get here, but once we arrive Harari offers a useful, well-informed primer.”
I found the second half full of interesting and critical commentary. See, for instance, my earlier posting entitled “Insights from Yuval Harari’s “Nexus” – On Social Media Truth Loses!” His comments about the AI industry, the futility of trying to regulate it, and its inevitable dominance of our lives are sobering and probably true.
I recommend reading Nexus if you are interested in where AI is heading. Feel free to go straight to Part II on page 191.
Below is the entire New York Times review.
Tony
——————————————————–
The New York Times
Pulling Back the Silicon Curtain
Yuval Noah Harari’s study of human communication may be anything but brief, but if you can make it to the second half, you’ll be both entertained and scared.
Yuval Noah Harari sounds the alarm on our A.I. future. “When the tech giants set their hearts on designing better algorithms, they can usually do it,” he writes. But will they? (Photo credit: Philip Cheung for The New York Times)
By Dennis Duncan
Dennis Duncan is the author of “Index, A History of the.”
Published Sept. 10, 2024. Updated Sept. 22, 2024
NEXUS: A Brief History of Information Networks From the Stone Age to AI, by Yuval Noah Harari
In the summer of 2022, a software engineer named Blake Lemoine was fired by Google after an interview with The Washington Post in which he claimed that LaMDA, the chatbot he had been working on, had achieved sentience.
A few months later, in March 2023, an open letter from the Future of Life Institute, signed by hundreds of technology leaders including Steve Wozniak and Elon Musk, called on A.I. labs to pause their research. Artificial intelligence, it claimed, posed “profound risks to society and humanity.”
The following month, Geoffrey Hinton, the “godfather of A.I.,” quit his post at Google, telling this newspaper that he regretted his life’s work. “It is hard to see how you can prevent the bad actors from using it for bad things,” he warned.
Over the last few years we have become accustomed to hare-eyed messengers returning from A.I.’s frontiers with apocalyptic warnings. And yet, real action in the form of hard regulation has been little in evidence. Last year’s executive order on A.I. was, as one commentator put it, “directional and aspirational” — a shrewdly damning piece of faint praise.
Meanwhile, stock prices for the tech sector continue to soar while the industry mutters familiar platitudes: The benefits outweigh the risks; the genie is already out of the bottle; if we don’t do it, our enemies will.
Yuval Noah Harari has no time for these excuses. In 2011, he published “Sapiens,” an elegant and sometimes profound history of our species. It was a phenomenon, selling over 25 million copies worldwide. Harari followed it up by turning his gaze forward with “Homo Deus,” in which he considered our future. At this point, Harari, an academic historian, became saddled with a new professional identity and a new circle of influence: A.I. expert, invited into the rarefied echelons of “scientists, entrepreneurs and world leaders.” “Nexus,” in essence, is Harari’s report from this world.
First, it must be said that the subtitle — “A Brief History of Information Networks From the Stone Age to A.I.” — is misleading. Really, what we have is two separate books, neither brief. The first 200 pages are indeed historical in their way. Unfortunately, this is a dizzying, all-in version of history that swerves unsatisfyingly from Assyrian clay tablets to a 19th-century cholera outbreak to an adaptation of the “Ramayana” on Indian TV to the Peasants’ Revolt in medieval England to the Holocaust in Romania, and so on. It doesn’t feel controlled, or even particularly expert — and the effect is a little like a flight where the person sitting next to you is well-read, hyper-caffeinated and determined to tell you his Theory of Everything.
In a nutshell, Harari’s thesis is that the difference between democracies and dictatorships lies in how they handle information. Dictatorships are more concerned with controlling data than with testing its truth value; democracies, by contrast, are transparent information networks in which citizens are able to evaluate and, if necessary, correct bad data.
All of this is sort of obvious-interesting, while also being too vague — too open to objection and counterexample — to constitute a useful theory of information. After a lot of time, we have arrived at a loose proof of what we hopefully felt already: Systems that are self-correcting — because they promote conversation and mutuality — are preferable to those that offer only blind, disenfranchised subservience.
In the end, however, this doesn’t really matter, because the second half of the book is where the action is. The meat of “Nexus” is essentially an extended policy brief on A.I.: What are its risks, and what can be done? (We don’t hear much about the potential benefits because, as Harari points out, “the entrepreneurs leading the A.I. revolution already bombard the public with enough rosy predictions about them.”) It has taken too long to get here, but once we arrive Harari offers a useful, well-informed primer.
The threats A.I. poses are not the ones that filmmakers visualize: Kubrick’s HAL trapping us in the airlock; a fascist RoboCop marching down the sidewalk. They are more insidious, harder to see coming, but potentially existential. They include the catastrophic polarizing of discourse when social media algorithms designed to monopolize our attention feed us extreme, hateful material. Or the outsourcing of human judgment — legal, financial or military decision-making — to an A.I. whose complexity becomes impenetrable to our own understanding.
Echoing Churchill, Harari warns of a “Silicon Curtain” descending between us and the algorithms we have created, shutting us out of our own conversations — how we want to act, or interact, or govern ourselves.
None of these scenarios, however, is a given. Harari points to the problem of email spam, which used to clog up our inboxes and waste millions of hours of productivity every day. And then, suddenly, it didn’t. In 2015, Google was able to claim that its Gmail algorithm had a 99.9 percent success rate in blocking genuine spam. “When the tech giants set their hearts on designing better algorithms,” writes Harari, “they can usually do it.”
Even in its second half, not all of “Nexus” feels original. If you pay attention to the news, you will recognize some of the stories Harari tells. But, at its best, his book summarizes the current state of affairs with a memorable clarity.
Parts of “Nexus” are wise and bold. They remind us that democratic societies still have the facilities to prevent A.I.’s most dangerous excesses, and that it must not be left to tech companies and their billionaire owners to regulate themselves.
That may just sound like common sense, but it is valuable when said by a global intellectual with Harari’s reach. It is only frustrating that he could not have done so more concisely.