# What is Artificial Intelligence?

## Human Intelligence

Humans possess what is sometimes termed **general intelligence**[^1], which is an academic way of saying that we're good at dealing with things we haven't encountered before. We know how to *generalize* what we've learned in the past and transfer that knowledge to new contexts.

This is not to say that we're perfect at this process—we aren't—but overall we're incredibly adaptable. We don't fall apart the moment something unfamiliar comes into our world, even if it's alien to us.

Humans learn through inquiry, experimentation, and stored memories, which lets us make the most of the novel stimuli we encounter. Our "ecological niche" is pretty much the whole universe: thanks to our engineering capabilities, we are the only animal that can survive even in the vacuum of space.

For example, if you visit a city you've never been to before and walk into a restaurant, you should feel confident that you still know how to buy food. Or if you meet a person you've never met before, you may know nothing about them but can generalize based on your past experiences with other people and find a way to get along with them.

This comes with trade-offs. Our brains have limits, and those limits create biases, which in turn short-circuit our ability to survive sometimes. But overall, we're quite flexible compared to most other animals.

## Artificial Intelligence

So where do machines stand? **Artificial intelligence** (AI) gets thrown around constantly, and it's worth understanding what the term actually means and where it falls short.

At the core of modern AI are **machine learning** (ML) systems, which take in data, spot patterns, and then use those patterns to generate useful outputs. For example, there are ML systems that can identify cancer in X-rays more accurately than a person. This is accomplished by feeding an ML algorithm a large set of labelled examples, some classified as "cancer" and others as "not cancer." The ML system then "learns" what patterns constitute cancer and starts classifying real X-rays.
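The labelled-examples recipe above can be sketched in a few lines. This is a toy nearest-centroid classifier, not any real medical system: the feature vectors and data are invented for illustration, and real diagnostic models use far richer features and architectures.

```python
# Toy sketch of supervised learning: average the feature vectors for each
# label ("training"), then assign new inputs to the closest average.
# All features and data here are made up for illustration.

def train(examples):
    """Compute one centroid (average feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [s + f for s, f in zip(sums[label], features)]
    return {label: [s / counts[label] for s in total]
            for label, total in sums.items()}

def classify(model, features):
    """Assign the label whose centroid is closest to the new input."""
    def dist(centroid):
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical labelled data: (feature vector, label)
labeled = [
    ([0.9, 0.8], "cancer"), ([0.8, 0.9], "cancer"),
    ([0.1, 0.2], "not cancer"), ([0.2, 0.1], "not cancer"),
]
model = train(labeled)
print(classify(model, [0.85, 0.75]))  # → cancer
print(classify(model, [0.15, 0.15]))  # → not cancer
```

The point of the sketch is the workflow, not the algorithm: labelled examples go in, a pattern summary comes out, and new inputs are classified against that summary.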

Then there are the game-playing systems built by companies like [DeepMind](https://deepmind.com), which utilize a paradigm known as *reinforcement learning* (RL). An *agent* (a player) is situated in an *environment* (a game world, board, etc.) and given a *reward function* (winning the game, maximizing points, etc.). The agent then plays the game millions or billions of times until it discovers highly effective strategies. DeepMind's AlphaGo system famously beat world champion Go players using this approach.
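The agent–environment–reward loop can be shown with a minimal tabular Q-learning sketch. The environment (a five-cell corridor where the agent earns a reward for reaching the right end) and all hyperparameters are illustrative inventions; AlphaGo's actual methods are vastly more sophisticated.

```python
# Minimal reinforcement-learning loop: an agent on a 1-D corridor of 5
# cells, rewarded for reaching the rightmost cell. Environment, rewards,
# and hyperparameters are toy inventions for illustration.
import random

N_STATES, ACTIONS = 5, [-1, +1]          # move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

random.seed(0)
for episode in range(500):               # "play the game many times"
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best known action
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + future value
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the learned policy in every non-terminal cell is "move right"
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

Nothing tells the agent that "right" is good; it discovers that purely from trial, error, and the reward signal, which is the essence of the RL paradigm.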

And then came the large language models (LLMs): systems like ChatGPT, Claude, and Gemini. These changed the game entirely. LLMs can write code, draft legal briefs, explain quantum physics, and carry on nuanced conversations across virtually any domain. They handle analogies, metaphors, and even irony with surprising competence. Genuinely powerful, genuinely useful, and a qualitative leap beyond older ML systems.

**But they are still not human intelligence.** LLMs don't experience the world. They don't have goals of their own, and they work in fundamentally different ways than our brains do. An LLM generates its outputs through sophisticated pattern completion over enormous datasets. Think of it as an alien form of capability: it can look like understanding without working the way understanding works in a human brain.
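"Pattern completion" can be made concrete with a radically simplified stand-in: a bigram model that predicts each next word from counts in its training text. Real LLMs use neural networks over vastly longer contexts and trillions of words, but the spirit (predict the next token from observed patterns) is the same. The tiny corpus here is invented for illustration.

```python
# A toy "pattern completion" model: count which word follows which in the
# training text, then generate by repeatedly picking the most common
# continuation. The corpus is made up for illustration.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

# "Training": tally each word's observed continuations
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def complete(word, n=4):
    """Generate n more words by always taking the most common continuation."""
    out = [word]
    for _ in range(n):
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))  # → the cat sat on the
```

The model produces fluent-looking local patterns with no notion of cats, mats, or meaning, which is the intuition behind "looks like understanding without working the way understanding works."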

This distinction matters less than it used to. Whether or not these systems are "truly intelligent" in a philosophical sense, they are capable enough to reshape entire industries. What matters for survival is a more practical question: what can it do, and what can't it do? If you can answer that accurately, you're ahead of most people.

## Why Knowing This Matters

There's tons of hype around AI these days, and it's easy to get "lost in the sauce" of that hype. But it's equally dangerous to dismiss these systems as toys. The truth is somewhere in between, and finding it requires understanding what's actually happening under the hood.

As discussed above, modern AI systems like ChatGPT are built on large language models, which ingest oceans of data and learn to generate outputs based on patterns in that data. The results are often impressive, sometimes astonishingly so. These systems can write working software, pass bar exams, solve graduate-level math problems, and generate analysis that would take a human researcher hours.

But they also have real limitations. They can confidently produce nonsense (a phenomenon called "hallucination"). Sometimes they fail at tasks that seem trivially easy to a human. They lack lived experience or genuine understanding of consequences. And they can be manipulated in ways that a thoughtful person wouldn't be.

Overestimate AI and you'll trust it with things it can't handle, making costly mistakes. Underestimate it and you'll be blindsided when it takes over tasks you thought were safe. People who thrive will be those who develop an accurate, clear-eyed view of what these systems can and can't do. That picture is changing fast, which is why staying informed matters so much.

## A Note on Terminology

Throughout this book I use "AI" and "ML" somewhat interchangeably. Technically, machine learning is the set of techniques that powers modern AI, but in everyday conversation people just say "AI." So will I. When a more specific distinction matters, I'll call it out.

## Key Points

* Human intelligence, sometimes called general intelligence, is about the ability to understand the world, generalize knowledge and solve novel problems.
* Modern AI is genuinely powerful. It can write code, pass professional exams, and handle tasks across many domains.
* But it is not human intelligence. It works in fundamentally different ways and has real, significant limitations.
* What matters for survival is a practical question: what can it do and what can't it do? That picture is shifting fast.
* Both overestimating and underestimating AI are dangerous. Develop a clear-eyed understanding of its actual capabilities.

[^1]: *It's worth mentioning that the concept of general intelligence is controversial, particularly when considered in the context of other animals. An octopus may not be as adaptable or "generally intelligent" as us, for example, but it is a master of its environment. Does that mean it has low general intelligence, or is general intelligence a poorly defined idea that is too human-centric? This is an ongoing debate.*

