Humans possess what is sometimes termed general intelligence, which is an academic way of saying that we're good at dealing with things we haven't encountered before. We know how to generalize what we've learned in the past and transfer that knowledge to new contexts.
This is not to say that we're perfect at this process—we aren't—but overall we're incredibly adaptable. We don't fall apart the moment something unfamiliar comes into our world, even if it's alien to us.
Humans use inquiry, experimentation, and stored memories to make the most of the novel stimuli we encounter. Our "ecological niche" is pretty much the whole universe: thanks to our engineering capabilities, we are the only animal capable of surviving even in the vacuum of space.
For example, if you visit a city you've never been to before and walk into a restaurant, you should feel confident that you still know how to buy food. Or if you meet a person you've never met before, you may know nothing about them but can generalize based on your past experiences with other people and find a way to get along with them.
This comes with trade-offs. Our brains have limits, and those limits create biases, which can sometimes short-circuit our ability to survive. But overall, we're quite flexible compared to most other animals.
In contrast, a machine version of this type of intelligence, which we call artificial intelligence (AI), does not currently exist.
The closest things we have at the moment are machine learning (ML) systems, which take in data, spot patterns, and then generate outputs that accomplish some task with those patterns.
For example, some machine learning researchers work on algorithms that can identify cancer in x-rays more accurately than a person can. This is accomplished by feeding an ML algorithm a large set of labelled examples, some classified as "cancer" and others as "not cancer." The ML system then "learns" what patterns constitute cancer and can start classifying new, unlabelled x-rays.
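The workflow above, feeding labelled examples to an algorithm so it can classify new ones, can be sketched in a few lines of pure Python. This is an illustrative toy, not a real medical system: each "x-ray" is reduced to two made-up numbers, and the classifier is a simple nearest-centroid rule rather than the deep neural networks used in practice.

```python
# Toy sketch of supervised classification: learn from labelled examples,
# then classify new ones. The data and labels here are entirely made up.
def train(examples):
    """Compute the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            s[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest to the new example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], features))

# Stand-in for the labelled x-ray dataset: (features, label) pairs.
labelled = [([1.0, 1.2], "cancer"), ([0.9, 1.1], "cancer"),
            ([5.0, 5.2], "not cancer"), ([4.8, 5.1], "not cancer")]
centroids = train(labelled)
print(classify(centroids, [1.1, 1.0]))  # → cancer (closest to that centroid)
```

The point of the toy mirrors the point of the prose: the "knowledge" here is nothing but averaged numbers, useful for exactly one narrow task.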
While ML has created useful systems like this, it does not constitute intelligence. These machines don't know anything about the world except how to spot cancer—and if you try to get them to handle novel tasks they will fail miserably.
Another set of examples are the game-playing ML systems built by companies like DeepMind.
They often utilize an ML paradigm known as reinforcement learning (RL), where an agent (in this case, a player) is situated in an environment (a game world, board, etc.) and given a reward function (winning the game, maximizing points, etc.). The agent then plays the game millions or billions of times, trying out countless moves until, after many iterations, it arrives at highly effective strategies.
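That loop, act, observe a reward, update, repeat many times, can be sketched with tabular Q-learning, one common RL algorithm, on a made-up toy game: the agent starts at position 0 on a short track and is rewarded for reaching position 3. Every number here (learning rate, discount factor, exploration rate) is an illustrative default, not a tuned value.

```python
import random

N_STATES, GOAL = 4, 3
ACTIONS = (-1, 1)                       # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(300):              # play the "game" over and over
    state = 0
    for _ in range(500):                # cap episode length
        if random.random() < 0.3:       # explore sometimes...
            action = random.choice(ACTIONS)
        else:                           # ...otherwise exploit what's known
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Nudge the value estimate toward reward + discounted future value.
        target = reward + 0.9 * max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += 0.5 * (target - q[(state, action)])
        state = nxt
        if state == GOAL:
            break

# The learned policy: the best-known action in each non-goal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)   # after training, the agent prefers moving right, toward the goal
```

Note what the agent "knows" at the end: a table of numbers for one tiny game, nothing more. Scale up the environment and the network, and you have the shape of systems like AlphaGo.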
These systems make for demos with far-reaching impact, like when DeepMind's AlphaGo beat the world's top-ranked Go player, but at the end of the day they are still not intelligent machines. They are machines trained with a very specific type of data to accomplish a very specific type of task.
One caveat: An intelligent machine does not necessarily have to be exactly like a human intelligence. It could theoretically operate at the same level as a human (or a higher one) through some means we haven't even conceptualized yet. But that's a conversation for a different time...
Why Knowing This Matters
There's tons of hype around AI and ML these days, and it's easy to get "lost in the sauce" of that hype. The ML systems you interact with on a regular basis are impressive, and often quite good at what they do.
No example stirs these feelings more than ChatGPT, the first free, publicly available system widely billed as AI. People with zero experience or expertise in AI/ML systems suddenly felt they had the most powerful technology ever at their fingertips. All they had to do was ask a question and BOOM, ChatGPT gave them a clear answer.
The problem is that this isn't AI. ChatGPT is built on top of what's known as a large language model (LLM), an ML system that ingests an ocean of text data and then uses the patterns it learns to probabilistically guess what output it should produce for a given input.
While the results are sometimes quite useful, such as when people use it to write Python code or blog posts, it's important to understand the limitations that stem from the fact that it is not actual AI.
ChatGPT, like all other LLMs, has no model of the larger world, no understanding of context, and in general doesn't know anything. It's just using a probability function to guess what a reasonable output looks like, and then spitting it out.
For example, if you ask an LLM like ChatGPT what 2 + 2 equals, it's easy for it to answer because there are undoubtedly many, many instances of that exact equation on the web. However, if you throw much more complex math at it, chances are high that you'll get an incorrect answer.
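The "probabilistic guessing" described above can be illustrated with a toy next-word predictor. Real LLMs use neural networks trained on enormous corpora; this sketch just counts which word tends to follow which in a tiny invented corpus.

```python
# Toy bigram model: predict the next word purely from frequency counts.
from collections import Counter, defaultdict

corpus = "two plus two equals four . one plus two equals three ."
words = corpus.split()

# Count, for each word, which words follow it and how often.
counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the most frequent follower of `word` in the training text."""
    return counts[word].most_common(1)[0][0]

print(predict("plus"))    # → two ("two" follows "plus" in every example seen)
print(predict("equals"))  # "four" and "three" are equally likely here; the
                          # model just picks one, with no idea what either means
```

The model has no concept of arithmetic; it only reproduces whatever pattern was most common in its training text, which is exactly why unfamiliar math trips up far larger versions of the same idea.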
The general idea here is worth spelling out: if you don't know the difference between artificial intelligence and machine learning, you won't have a sense of how this field works. It's easy to get hoodwinked by the various hype artists in AI when you don't understand that under the hood, it's just machine learning.
Why I Still Use AI (Sometimes)
The simple reason I use the term "AI" is that it's what people are curious about and it's how this type of technology is marketed and talked about. If I were to jam "machine learning" down everyone's throat, fewer people would know what I'm talking about and therefore fewer people would take the time to read through this guide.
I use AI and ML interchangeably; just know that from this point forward, when I say "AI" I pretty much always mean ML (unless I'm talking about a specific sub-concept).
Human intelligence, sometimes called general intelligence, is about the ability to understand the world, generalize knowledge and solve novel problems.
Artificial intelligence, in the sense of human-level machine intelligence, does not currently exist.
A more accurate term for modern AI is machine learning, which involves ingesting data, spotting patterns in said data, and then generating a specific type of output.
Even the most impressive ML systems, like game-playing RL agents, are trained on a very specific type of data to accomplish a very specific type of task.
Understanding the difference between the two will give you a clearer appreciation for the capabilities and limits of what we can do with these technologies.
I still use AI in this book because that's what most laypeople think of when they think of machine learning, and those are the people who will benefit most from this book.