Book Review: Superagency

Reid Hoffman, the author, was an early supporter of OpenAI when it began as a non-profit organization in 2015. Before that, he was a founding board member of PayPal and a co-founder of LinkedIn, and he has been on the board of Microsoft since 2017. He defines superagency this way:

Superagency is the state of widespread empowerment that occurs when millions of people get simultaneous access to a breakthrough technology. With hands-on, self-directed AI, individuals benefit from their own new superpowers—and everyone else’s too.

Hoffman makes the point that often in history, new technologies have sparked visions of impending dehumanization and societal collapse. As examples, he cites the printing press, the power loom, the telephone, the camera and the automobile. Factory automation was thought to lead to the “permanently unemployed.”

The book’s tone is optimistic. Hoffman is clearly aware of the concerns about AI, but he sees it as a mostly positive force in our future. In fact, the subtitle of the book is What Could Possibly Go Right with Our AI Future.

Because the future course of AI is hard to predict, OpenAI uses a business strategy called “Iterative Deployment.” Rather than a grand plan, which would likely prove to be wrong, the idea is to release versions of the product to the public in a progressive manner and get feedback from users that will be much more helpful than anything that could be learned in a lab.

OpenAI’s ChatGPT was launched on November 30, 2022, with no fanfare. Within two months, it had 100 million users.

ChatGPT is a chatbot trained on vast amounts of text; the system underneath it is known as a Large Language Model, or LLM. An LLM doesn’t understand facts. It relates “tokens” of text to other tokens and then predicts the next token. It doesn’t think. AI can’t reason, at least not yet. Any apparent awareness is simulated. LLMs make mistakes, often called hallucinations but more accurately confabulations. They make mistakes even when they have the right answer.
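The “predict the next token” idea can be sketched with a toy bigram model. This is a deliberately minimal stand-in for what an LLM does statistically at vastly greater scale; the corpus and function names here are illustrative, not from the book:

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on trillions of tokens, not one sentence.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which: a bigram table, the simplest
# version of the pattern-matching an LLM performs over token sequences.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the token that most often followed `token` in the corpus."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Note the model never “understands” cats or mats; it only tracks which token tends to follow which, which is the sense in which an LLM relates tokens rather than facts.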

Despite fears of AI taking over the world, many say that AI, no matter how well trained, will never achieve “artificial general intelligence,” or AGI. This is the holy grail of the field, but it may not be possible. AI isn’t able to think as humans do. Can it be trained to reason like a human? That isn’t clear yet.

Human agency is a fundamental concept. It holds that we can each make our own choices, act independently, and exert influence over our own lives. Hoffman believes that humans, a species he calls Homo techne, are defined by the way we create new ways of being in the world through toolmaking. AI is our latest tool for remaking our way of being in this world, and it will enable our next great leap forward. By harnessing AI, humans can create superagency for themselves. This will happen when a critical mass of people using AI operates at levels that compound throughout society, which is the power of a network.

A common criticism of Big Tech is that the value created by gathering our private information is used to enrich the owners of those companies. Hoffman says the value flows both ways: while the owners are enriched, a lot of value also flows to the rest of us. Data isn’t an extractive industry; it is not diminished by use. Gathering data and making it widely available provides value to all.

The difference between what people pay for a service and its value to them is called “consumer surplus.” Researchers have measured the value of online services by asking how much a person would need to be paid to give up a service, such as access to Facebook. That amount is usually far more than what the tech company, in this example Meta, charges, since the service is free to use. Broadcast television and radio, also free, are good examples of consumer surplus.
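The consumer-surplus idea reduces to simple arithmetic. The figures below are hypothetical illustrations, not numbers from the book or from any study:

```python
# Consumer surplus = value to the user minus the price paid.
# Both figures are assumed for illustration only.
value_to_user = 48.0   # hypothetical: monthly payment needed to give up the service
price_paid = 0.0       # the service is free to use

consumer_surplus = value_to_user - price_paid
print(consumer_surplus)  # the entire value accrues to the user as surplus
```

When the price is zero, as with ad-supported services or broadcast TV, the whole measured value is surplus, which is why free services can be so valuable in aggregate despite generating no direct payment.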

When OpenAI released its first public models, there were calls to pause all AI development for a time (fat chance) so the rules could catch up. Hoffman argues, persuasively, that doing this would be futile. In fact, fast and relatively unregulated development of technology in general, and AI in particular, is the way to keep it safe: there is competition, there is constant testing in the real world, and there is the parallel development of AI tools for protection.

Starting in the 1990s, policymakers in the U.S. allowed largely unfettered development of high tech, an approach called permissionless innovation. It has proved to be a good model. Clinton and Gore released a policy document that took a hands-off approach to the internet. Maybe Al Gore really did invent the internet. This environment led to cloud services, smartphones and social media. It also helped bring us electric cars, CRISPR, solar power, telemedicine and much more. Some of it bad, but most of it good.

Hoffman says the invention of satellite-based GPS provides a good example. GPS was developed for the military, then opened to civilian use. At first, civilians received only degraded accuracy, but soon precise data was available to all, and that precision proved extremely valuable: turn-by-turn navigation became possible. This is an example of what can happen when government takes a pro-technology approach.

Hoffman says it will be similar with AI. Self-determination and broad participation are needed to get the widespread benefits. Large language models are systems for analyzing, synthesizing and mapping language flows. LLMs do not possess intelligence the way humans do.

The book puts forth a few fundamental principles:

  1. Designing for human agency is the key for producing broadly beneficial outcomes for individuals and societies.
  2. When agency prevails, shared data and knowledge become catalysts for individual and democratic empowerment, not control and compliance.
  3. Innovation and safety are not opposing forces but synergistic ones. Giving millions of people hands-on access to AI through iterative deployment is both a productive and safe way to make AI more capable and more inclusive.
  4. Similar to what happened during the rapid adoption periods of the automobile and the smartphone, our collective use of AI will have compounding effects. Not only will you as an individual benefit from your newly accessible superpowers, but you’ll also benefit from the fact that millions of other people and institutions will have access to these new superpowers too.

Hoffman says that technology isn’t a challenge to humanity, it is a time-tested key to human flourishing.

It’s a terrific book. It is well written, well-reasoned and a real eye-opener about the optimistic view of how the future of AI may unfold.
