# Are You Replaceable?

<figure><img src="https://159734377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FM5P6xgzVrfbWUGkbvT0t%2Fuploads%2Fgit-blob-e5f2eb86dd4ab5aa7fb74a82cd4132d24c753e04%2FJob%20automation.jpg?alt=media" alt="A robot orders a white-collar worker to get on an assembly line"><figcaption></figcaption></figure>

Given what you know about AI, ML and the importance of data, we need to define a framework for spotting occupations that are likely to get automated. On the one hand, as an entrepreneur, it may mean your barrier to entry is lower. You won't have to hire as many people (or any, in some cases), and that equals more potential opportunity.

For employees, the situation is a bit more grim, since your paychecks could end up on the chopping block. And it won't be a matter of finding a new employer: you'll need to come up with a whole different career to pursue! Anyone who is mid-career or later will have to think long and hard about this subject.

Here is a simple set of questions that you can use as a heuristic for figuring this out:

1. Does the work involve predictable patterns of activity?\
   *If it's very predictable, then it's a prime target for all kinds of automation.*
2. Are there already frameworks in place for handling the work?\
   *Frameworks often translate into structures that can be fed into an algorithm.*
3. How difficult is it to gather data about what is being done?\
   *If it's easy to gather data, it may be easy to train a machine learning system.*

In case you aren't sure about whether data is being (or even can be) collected, think about how many machines and/or pieces of software you interact with on a daily basis. This is important, because jobs that involve lots of tech are jobs where data can be easily collected.

A related question is: **"How ambiguous is my work?"** Even if there are lots of data points available, a job can be very difficult to automate when there's a lot of human-induced ambiguity involved. These types of jobs are usually heavy on human-to-human interaction, and tend to revolve around systems that are not easy to define.
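These questions can be sketched as a toy scoring heuristic. To be clear, this is a purely illustrative sketch: the function name, weights, and example ratings are my own assumptions, not a real predictive model.

```python
# Toy heuristic based on the chapter's questions. All weights and the
# 0.0-1.0 rating scale are illustrative assumptions, not real data.

def automation_risk(predictability: float,
                    existing_frameworks: float,
                    data_availability: float,
                    ambiguity: float) -> float:
    """Each input is a subjective rating from 0.0 (low) to 1.0 (high).

    Predictable, framework-heavy, data-rich work scores high;
    ambiguity (human-to-human interaction, hard-to-define systems)
    pulls the score back down.
    """
    raw = (predictability + existing_frameworks + data_availability) / 3
    return max(0.0, raw - ambiguity * 0.5)

# Hypothetical ratings for two of the jobs discussed below:
print(automation_risk(0.9, 0.8, 0.9, 0.2))  # routine copywriting: high
print(automation_risk(0.4, 0.3, 0.3, 0.9))  # police work: low
```

The point of the sketch is only that the answers combine: high scores on the first three questions can still be offset by heavy ambiguity, which is exactly the pattern the examples below illustrate.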

### What Can Be Automated <a href="#what-can-be-automated" id="what-can-be-automated"></a>

Given this framework, let's walk through some examples and see if we can spot some potential targets for automation.

#### Trucker: Eventually, But Slower Than Expected <a href="#trucker-definitely-going-to-be-automated" id="trucker-definitely-going-to-be-automated"></a>

Trucking is a multi-billion dollar industry that relies heavily on human labor. Companies need to move physical goods across land, and trucking does that by putting a human being behind the wheel.

For years, self-driving trucks were presented as an imminent revolution. And yet, progress has been slower than almost everyone predicted. The technology works reasonably well on highways in good conditions, but the messy reality of urban driving, bad weather, construction zones, and unpredictable human behavior has proven incredibly difficult to solve.

This is actually a great illustration of Moravec's paradox (more on that below): driving looks like a simple, mechanical task, but it's actually deeply embodied and requires the kind of real-world judgment that AI still struggles with. The data is easy to collect, the task is well-instrumented, and the financial incentive is enormous. And *still* it's taking longer than expected.

That said, the trajectory is clear. Autonomous trucking on major highways is being deployed in limited corridors, and the technology will expand over time. Truckers should be preparing for a future where their role changes significantly, even if full automation isn't arriving tomorrow.

#### Fast Food Worker: Being Automated in Stages <a href="#fast-food-worker-definitely-going-to-be-automated" id="fast-food-worker-definitely-going-to-be-automated"></a>

Fast food is a ripe target for automation because it's an assembly line for food. Getting food prepared and delivered to the customer is a well-defined, predictable, and repetitive process.

Ordering has already been largely automated. Self-service kiosks and AI-powered drive-through systems are now common at major chains. McDonald's, Wendy's, and others have been deploying these for years, and the trend is accelerating.

Food preparation has been slower to automate (another instance of Moravec's paradox). While the process *seems* simple (grab buns, add condiments, drop fries, fill a cup), the physical manipulation involved is actually harder to automate than the cognitive work of taking an order. Robot kitchens exist, but they're expensive and brittle compared to a human who can handle the thousand small variations that come up in a real kitchen.

The pattern here is instructive: the *information-processing* parts of the job (taking orders, managing inventory) are getting automated first, while the *physical* parts lag behind. This is the opposite of what most people would expect, and it's a pattern you'll see across many industries.

#### Software Engineer: Rapidly Being Transformed <a href="#software-engineer-degraded-by-automation" id="software-engineer-degraded-by-automation"></a>

When I first wrote this book in 2016, I predicted that programming would be automated and caught flak for it. Most programmers were confident their jobs were safe. They were wrong.

AI coding tools have gone from novelty to near-necessity in just a few years. What started with GitHub Copilot suggesting lines of code has evolved into AI systems that can build entire applications from natural language descriptions, debug complex codebases, and handle the kind of routine programming work that used to employ armies of junior developers.

Why did this happen so fast? Exactly what my original framework predicted: there's an enormous amount of data about how programs are built (open source repositories, documentation, Stack Overflow), and most programming involves assembling well-known components in predictable patterns. This made software engineering one of the most data-rich, pattern-heavy professions in existence. A prime target for ML.

Software engineering is now shifting from "person who writes code" to "person who directs AI to write code and verifies the output." The people who thrive will be those who understand systems at a deep architectural level, can evaluate AI-generated code critically, and can handle the genuinely novel problems that AI still struggles with.

Polarization has arrived. Elite engineers, particularly those working on AI systems themselves, command enormous salaries. But the market for mid-level developers who primarily write routine code is getting squeezed hard. If your value proposition is "I can build a standard web app," you're competing against tools that can do it in minutes for pennies.

#### Copywriter: Already Being Automated <a href="#lawyer-probably-wont-be-automated" id="lawyer-probably-wont-be-automated"></a>

People who write the words you see on ads, landing pages, and other marketing materials are quickly seeing their market value erode. With the appearance of ChatGPT and a variety of marketing-specific tools built on OpenAI's GPT models, it has become trivial to generate this kind of writing.

For now, the role of the copywriter has become that of a copyeditor: they feed some ideas into the machine, then edit or add to the output. While they can certainly still add value in places, it's hard to argue that becoming a copywriter, or hiring one full-time, is a wise decision at this point.

#### Lawyer: More Threatened Than Expected <a href="#lawyer-probably-wont-be-automated" id="lawyer-probably-wont-be-automated"></a>

I originally put lawyers in the "probably safe" category. That call is looking shakier than I'd like.

AI is now doing far more than just sorting through court rulings. LLMs can draft contracts, summarize case law, generate legal briefs, and even provide basic legal analysis that used to require hours of attorney time. The paralegal and junior associate work I said would get automated? It's happening faster than anticipated, and the automation is creeping further up the skill ladder than most lawyers expected.

That said, the core of legal work remains deeply human: consulting with clients in sensitive situations, crafting a narrative from incomplete evidence, reading a jury, negotiating with opposing counsel. These tasks are drenched in ambiguity, interpersonal judgment, and the kind of contextual awareness that AI still can't match.

The likely outcome is fewer lawyers doing more work, augmented by AI tools. Attorneys who learn to leverage AI effectively will be far more productive, which means firms need fewer of them. If you're in law, one lawyer with AI tools can now do the work of three without them. Guess which two get cut.

#### Police Officer: Probably Won't Be Automated <a href="#police-officer-probably-wont-be-automated" id="police-officer-probably-wont-be-automated"></a>

Like lawyers, police officers have a job that's incredibly difficult to automate. This is because their work revolves around complex, ambiguous interactions between people. Couple that with an equally complex, ambiguous legal system which has to be followed and you'll quickly see how difficult it would be to build robot cops.

Consider how hard it would be to automate the most controversial (and yet central) component of the job: the use of force. Police officers are unique in that they have to make decisions, usually very quickly, about whether it's acceptable to pull out their guns and shoot other human beings. Their decisions are sometimes wrong (for a variety of reasons) and [innocent people die](https://en.wikipedia.org/wiki/Shooting_of_Philando_Castile).

Even though there are certainly cases where the officer is acting in bad faith, much of the time these wrongful shootings are due to the need to make split-second decisions about ambiguous situations. Each situation is unique, and it would be incredibly difficult to build a computer system (even an ML system) that could accurately detect all the nuances needed to make a "good" decision to shoot.

Officers have been, and will continue to be, augmented with technology (such as cameras on their cars and uniforms), but the core of their job will be difficult to automate for the foreseeable future.

### Moravec's Paradox <a href="#recap" id="recap"></a>

An interesting idea that has proven to be quite accurate over the years is [Moravec's paradox](https://en.wikipedia.org/wiki/Moravec's_paradox). The TLDR of this concept is that we're quite bad at building machines which do things we view as easy, but good at building machines that handle tasks we view as difficult.

Consider this: unless you're an infant (and if you're reading this as a baby, I'm impressed), walking, talking and grabbing things aren't tasks you have to think too hard about. We do them subconsciously, as they are highly evolved abilities that have been around since the beginning of our species. Nobody has to really teach you how to do any of those things, unless you've suffered some kind of injury and have to re-learn them.

On the other hand, playing chess and writing code are *not* evolutionarily ancient skills. Because they don't come naturally to us, we have to be taught how to do them, and many people find both tasks very difficult.

Which one is harder to create a machine around? Turns out it's the first category, largely because we aren't even sure how to describe those kinds of actions to a computer. We've poured who-knows-how-much time and money into building robots that can walk like humans, but after decades of trying we still haven't nailed it.

Translate this to work, and you can see the pattern emerging. Blue-collar jobs, which everyone thought would be the first to go, tend to be the hardest to automate. How can you create a machine that crawls under a sink and does the work of a plumber when just about every house and plumbing situation has its own weird wrinkles?

White-collar workers like designers and writers are rapidly losing their jobs to machines, despite the fact that everyone assumed "creative" jobs would be the safest. It turns out creativity isn't what matters; ambiguity is.

### Augmentation and Displacement

AI is both augmenting and displacing workers, often simultaneously. The pattern that's emerging is that AI augments the best performers in a field while displacing the rest. One lawyer with AI tools can do the work of three without them, which means two of those three are out of a job even though the work is still being "done by a human."

Survival in this phase of the technology requires you to be on the right side of that equation: the person who uses AI tools to multiply their value, not the person who gets multiplied out of a job.

### Summary

* You can ask three questions to get a rough idea about how in danger a given profession is: Does the work involve predictable patterns of activity? Are there already frameworks in place for handling the work? How difficult is it to gather data about what is being done?
* Jobs with high levels of ambiguity and uncertainty are the least likely to be fully automated, while those that are structure-heavy and prone to data collection are the most likely.
* Moravec's paradox describes this dynamic well: physical and embodied skills are proving far harder to automate than cognitive, information-processing tasks.
* AI is increasingly a force for both augmentation and displacement. The pattern emerging is that one person with AI tools can do the work of several without them.
