# We All Might Be Screwed

<figure><img src="https://159734377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FM5P6xgzVrfbWUGkbvT0t%2Fuploads%2Fgit-blob-51308e3cde1c9d91a4fe6455804ac8c006db9d20%2Fimage%20(1).jpg?alt=media" alt="An epic robot god dominates the human landscape" width="563"><figcaption><p>Dramatic depiction of a post-AGI world ruled by giant robot gods.</p></figcaption></figure>

Before going any further, it's important to note that we're going to be running on the assumption that [artificial general intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence) (AI that possesses human-level intelligence) is not going to show up tomorrow, nor will [artificial superintelligence](https://en.wikipedia.org/wiki/Superintelligence) (AI that possesses god-like intelligence far beyond humans). That said, the timeline has shortened dramatically. When I first wrote this book in 2016, most serious researchers put AGI decades away, if they believed it was possible at all. Now, many of those same researchers are talking about years rather than decades.

There are two reasons I still focus on sub-AGI automation:

1. Even the most optimistic AGI timelines remain uncertain. Nobody actually knows when or if it will arrive.
2. If and when AGI/ASI is created, it's not unreasonable to assume that every human job (and potentially every human life) is going to be in danger, making this book irrelevant anyway.

In other words, AGI remains speculative enough that planning for it specifically isn't useful, but the possibility is no longer something you can dismiss. If you want to know more, I recommend starting with the [Wikipedia article for the "intelligence explosion" concept](https://en.wikipedia.org/wiki/Intelligence_explosion).

The core idea is that an AGI would be intelligent enough to improve itself, and the speed advantages conferred by computer hardware would make that process so rapid that a human-level machine would quickly become far superior to us. In other words, AGI leads to ASI, and it may happen so fast we can't stop it.

If AGI can be created and it does lead to an intelligence explosion, we're all screwed—at least in terms of work. Why would anyone ever hire people to produce goods and services again when the machines can do it all? Would the machines even be willing to take on the tasks we give them? There aren't any clear answers to these questions yet.

The future of AI is a hotly debated topic, and I'd prefer not to get bogged down in predicting when or if such a powerful technology might show up.

With all this in mind, I've decided to focus on the AI that's already here: machine learning, generative AI, and the automation systems being deployed right now. These are already reshaping how we work and live, and we can make better decisions if we focus on technology we can actually observe rather than speculation about god-like machines.

But you don't need AGI to be in serious trouble. AI systems available *today* are already powerful enough to displace millions of workers. That's the threat this book addresses, and as you'll see, it's more than enough to worry about.

### Key Points

* Artificial general intelligence (AGI) is human-level machine intelligence.
* Artificial superintelligence (ASI) is machine intelligence that is so far beyond ours that it is god-like and unfathomable to our puny human brains.
* AGI timelines have shortened dramatically. Serious researchers now talk in years rather than decades, but the timing remains uncertain.
* If human-level AI does show up, there won't be much you or I can do about it since it will likely be too powerful for anyone to resist.
* This book focuses on the AI that's already here, which is more than powerful enough to threaten your livelihood without AGI ever arriving.
