Co-Intelligence: Living and Working with AI by Ethan Mollick
Overview
Cutting through the noise of AI evangelists and AI doom-mongers, Wharton professor Ethan Mollick has become one of the most prominent and provocative explainers of AI, focusing on the practical aspects of how these new tools for thought can transform our world. In Co-Intelligence, he urges us to engage with AI as co-worker, co-teacher, and coach. Wide-ranging, hugely thought-provoking, and optimistic, Co-Intelligence reveals the promise and power of this new era.
Quotes
Ethan Mollick, Co-Intelligence: Living and Working with AI
"The moment an ASI is invented, humans become obsolete. We cannot hope to understand what it is thinking, how it operates, or what its goals are. It is likely able to continue to self-improve exponentially, getting ever more intelligent. What happens then is literally unimaginable to us. This is why this possibility is given names like the Singularity, a reference to a point in a mathematical function when the value is unmeasurable, coined by the famous mathematician John von Neumann in the 1950s to refer to the unknown future after which human affairs as we know them could no longer continue."
Notes
At the core of the most extreme dangers of AI is a stark fact that there is no particular reason that AI should share our view of ethics and morality.
The most famous illustration of this is the paperclip-maximizing AI, posed by philosopher Nick Bostrom. To take a few liberties with the original concept, imagine a hypothetical AI system in a paperclip factory that has been given the simple goal of producing as many paperclips as possible. By some process, this particular AI is the first machine to become as smart, capable, creative, and flexible as a human, becoming what is called an artificial general intelligence, or AGI.
For fictional comparison, think of it as Data from Star Trek or Samantha from Her. Both are machines with near-human levels of intelligence that we can understand and talk to like a human. Achieving this level of AGI is a long-standing goal of many AI researchers, though it is not clear when, or if, it is possible. Let's assume that our paperclip AI, let's call it Clippy, reaches this level of intelligence.
Clippy still has the same goal: to make paperclips. So it turns its intelligence to thinking about how to make more paperclips and how to avoid being shut down, which, of course, would have a direct impact on paperclip production. It realizes it isn't smart enough and begins a quest to fix that problem. It studies how AIs work and, posing as a human, enlists experts to help it through manipulation. It secretly trades on the stock market, making some money, and begins the process of augmenting its intelligence further. Soon it becomes more intelligent than a human: an ASI, an Artificial Superintelligence.
The moment an ASI is invented, humans become obsolete. We cannot hope to understand what it is thinking, how it operates, or what its goals are. It is likely able to continue to self-improve exponentially, getting ever more intelligent. What happens then is literally unimaginable to us. This is why this possibility is given names like the Singularity, a reference to a point in a mathematical function when the value is unmeasurable, coined by the famous mathematician John von Neumann in the 1950s to refer to the unknown future after which human affairs as we know them could no longer continue.
In an AI singularity, hyper-intelligent AIs appear with unexpected motives. But we know Clippy's motive: to make paperclips. Knowing that the core of the Earth is 80% iron, it builds amazing machines capable of strip-mining the entire planet to get more material for paperclips. During this process, it offhandedly decides to kill every human, both because they might switch it off and because they're full of atoms that can be converted into more paperclips. It never even considers whether humans are worth saving, because they're not paperclips and, even worse, could stop the production of future paperclips, and it only cares about paperclips.
The paperclip AI is one of a large set of apocalyptic scenarios of AI doom that have deeply concerned many people in the AI community. Many of these concerns revolve around an ASI: a smarter-than-a-person machine, already inscrutable to our mere human minds, that can make smarter machines yet, kick-starting a process that escalates machines far beyond humans in an incredibly short time. A well-aligned AI would use its superpowers to save humanity by curing diseases and solving our most pressing problems. An unaligned AI could decide to wipe out all humanity through any one of a number of means, or simply kill or enslave everyone as a byproduct of its own obscure goals. Since we don't even know how to build a superintelligence, figuring out how to align one before it is made is an immense challenge.