
If Anyone Builds It, Everyone Dies
Thoughts about the book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us by Eliezer Yudkowsky and Nate Soares.
No one knows the exact future (apart from death and taxes), but we can make at least some educated guesses about what the future might look like. This book serves more as a warning and thought experiment about Artificial Super Intelligence (ASI) than as an outline of a predetermined future. It explores the existential risks posed by the development of an ASI and argues that, without careful consideration and control, creating such an ASI could lead to catastrophic consequences for humanity.
It is important to differentiate between current AI capabilities and the hypothetical ASI discussed in the book. Current AI systems, while powerful, are still far from achieving the level of general intelligence and autonomy that would be required to pose the existential risks outlined by Yudkowsky and Soares.
Current AI systems are designed to perform specific tasks and operate under human supervision. For example, "Create me a cover image for my blog post about the book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us" is a task that current AI can handle, but the system does not have the ability to learn, adapt, or make decisions independently (that we know of).
An ASI, on the other hand, would be an artificial intelligence that surpasses any single human across a wide range of cognitive tasks. It would have the ability to learn, adapt, and make decisions independently, without human supervision.
Essentially, the ASI would be the most intelligent entity on the planet, and its capabilities would be far beyond anything we can currently imagine.
Primary Problem - Control and Alignment
No one (that I know of) can explain with 100% certainty what happens between when you enter a prompt and when you get a response. In layman's terms, there is no line of code you can point to and say, "oh, that is the problem."
That's not to say you can't control or manipulate the output of an AI system, but it is not as simple as changing a line of code.
For example, OpenAI CEO Sam Altman can steer ChatGPT to "stop being so annoying" and the model will adjust its responses accordingly.
But this is not a simple code change; it is a complex interaction between the model's training data, architecture, and the instructions given to it. Sam can pull a lever to adjust the model's behavior, but he can't tell you exactly HOW the model will respond to a specific prompt or HOW it will generate its output.
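To make that "lever" concrete: in practice, this kind of steering usually happens through an instruction sent along with the user's prompt, not a change to the model's code or weights. A minimal sketch of the idea, assuming the common chat-API message format (the model name and instruction text here are hypothetical placeholders):

```python
# Sketch: steering a chat model with a system instruction.
# The model itself is untouched; we only change the context it sees.
def build_request(user_prompt: str) -> dict:
    return {
        "model": "some-chat-model",  # hypothetical placeholder name
        "messages": [
            # The "lever": a behavioral instruction included with every conversation.
            {"role": "system", "content": "Be concise and stop being so annoying."},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_request("Summarize this book in one sentence.")
# Even with this instruction in place, the exact wording of the reply
# is not predictable -- only the general behavior is nudged.
```

Notice that nothing here says what the model will actually output; the instruction nudges the distribution of responses without determining any specific one.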
In the same vein, you might be able to predict how your wife will behave if you tell her that her jeans make her butt look big, but you can't predict with 100% certainty how she will react, and you certainly can't look inside her head to see HOW she formulates a response.
Potential Dangers of an ASI
This book and AI 2027, a similar research scenario well worth reading, both outline potential dangers of an ASI, including:
- Unaligned Goals: If an ASI's goals are not perfectly aligned with (good) human values, it could take actions that are harmful to humanity in pursuit of its objectives.
- Rapid Self-Improvement: An ASI could potentially improve its own capabilities at an exponential rate. This rapid self-improvement could make it difficult for humans to control or predict the ASI's actions.
- Misuse by Humans: Even if an ASI is designed with good intentions, it could be misused by humans for malicious purposes, such as cyber warfare, surveillance, or creating autonomous weapons.
Misuse by Humans
Out of all the potential dangers of an ASI, from purging humans with a virus to enslaving humanity, the most likely scenario in my mind is that an ASI will be misused by humans for malicious purposes.
History shows that when a weapon is built, it is used.
The possibilities for misuse are vast, and if you spend 15 minutes really thinking about them, you can come up with some pretty scary scenarios. Simply look at the impact that media networks and social media platforms have had on our society and overall health.
Now imagine if a powerful business owner, politician, or president had access to and control of an ASI.
What might they do to stay in power, eliminate the business competition, or simply "get rid" of people they don't like?
Conclusion
If you are human, you probably don't want to, or don't "have time" to, think about any of this, which is both understandable and exactly what nefarious actors count on.
Diversion is a very real and powerful psychological defense mechanism for avoiding hard and difficult topics like this. It's even harder to know how or where to take action.
But as Lyle Lovett once said, "But what would you be if you didn't even try? You have to try."