We All Might Be Screwed

If AI gods show up, nobody will be able to help you.

Before going any further, it's important to note that we're going to be running on the assumption that artificial general intelligence (AI that possesses human-level intelligence) is not going to show up in the near future, nor will artificial superintelligence (AI that possesses god-like intelligence far beyond humans). The reasons for this are two-fold:

  1. Nobody knows how long it might be until AGI/ASI shows up, but it looks like it's still a long way off (some believe it will never show up).

  2. If and when AGI/ASI is created, it's not unreasonable to assume that every human job (and potentially every human life) is going to be in danger.

In other words, it's highly speculative technology that, if it were invented, would make this book useless because nobody would be able to compete with the machines anyway. If you want to know more about why, I recommend starting with the Wikipedia article for the "intelligence explosion" concept.

The core idea is that an AGI would be intelligent enough to improve itself, and the speed advantages conferred by computer hardware would make that process so rapid that a human-level machine would quickly become far superior to us. In other words, AGI leads to ASI, and it may happen so fast we can't stop it.

If AGI can be created and it does lead to an intelligence explosion, we're all screwed—at least in terms of work. Why would anyone ever hire people to produce goods and services again when the machines can do it all? Would the machines even be willing to take on the tasks we give them? There aren't any clear answers to these questions yet.

The future of AI is a hotly debated topic, and I'd prefer not to get bogged down in predicting when or if such a powerful technology might show up.

With all this in mind, I've decided to focus on sub-AGI automation technology, such as machine learning and generative AI. These are already making a big splash in how we work and live, and we can make better decisions if we focus on technology that we can already see being deployed in the real world.

Worrying about terminators running the world without humans isn't helpful for what I'm trying to accomplish. We don't know if it will happen (or if it even could happen), and if it did unfold that way then we're screwed anyway. So let's move forward on the assumption that simpler automation systems are going to be the dominant AI paradigm for the foreseeable future.

Key Points

  • Artificial general intelligence (AGI) is human-level machine intelligence.

  • Artificial superintelligence (ASI) is machine intelligence that is so far beyond ours that it is god-like and unfathomable to our puny human brains.

  • So far it looks like both AGI and ASI are still a long way off, and it's not clear that either will arrive at all.

  • If human-level AI does show up soon, there won't be much you or I can do about it since it will likely be too powerful for anyone to resist.

  • As such, this book runs on the assumption that neither is right around the corner and there's still hope.
