AI and the economy
Lately, I've been thinking a lot about how AI will impact the lives and livelihoods of our global human population, and how it's already doing so. I think it's worth taking some time to explore this in more detail.
How AI models are trained
AI models are trained by being fed data that then teaches them how to write, draw, or animate (among other things) like a human. This requires ginormous amounts of data.
Where does this training data come from? It comes from different sources, but let's focus on LLMs for now. One big source of training data for LLMs is books. AI companies have repeatedly been caught downloading literal torrents of published books, millions of them, without permission, paying no one. If you or I did that, we'd face harsh penalties, and rightfully so.
These companies get a slap on the wrist. Meanwhile, this data is used to create new content, reducing the demand for human authors going forward. They use the works of human authors, without their permission, often explicitly against their wishes, to create technology that replicates their work, or uses the skills it learns from their work, to put those same authors out of a job.
This is not a theory. Demand for graphic designers is already tanking. Image generation models are trained on the works of artists from around the world, again without their permission, in ways that have led to multiple ongoing copyright lawsuits (and are ethically dubious). Laws are not the ultimate authority on right and wrong; they're frequently wrong. AI companies are pushing for governments to declare their training on copyrighted works fair use. People argue it's the same as human learning, but one artist taking inspiration from another does not put millions of others at risk of losing their jobs, of being unable to keep a roof over their heads or food on their plates. It's a bad-faith argument in my eyes.
The current economic impact
As AI model capabilities grow, software engineering roles are down and graphic design jobs are down. Coca-Cola is using AI to generate its holiday campaign videos, which might otherwise have employed teams of dozens of humans: actors, directors, writers, makeup artists, voiceover artists, videographers, video editors, and more.
These jobs are already gone, and they're not coming back. Sure, any one of these people could find a different job, but what happens when all of them are in the same boat? What happens when there is one job for every 100 applicants?
The broader economy is struggling, but the stock market lines keep going up, propped up almost entirely by mega-investments in AI companies. The reality on the ground for everyday people is not good.
The "just get good at AI" argument
AI fanatics will argue that you should just "get good at AI" and "upskill yourself". That might work in the short term, for maybe a couple of years. But AI continues to improve, and needs less and less supervision. Agentic AI tools like Claude Code are already working for 30+ hours without human supervision. There are AI "scientist" systems that can already do months of work in hours, again autonomously.
Today, humans can "get good at AI" and command these systems, but the role of prompting has already evolved dramatically in the three years since ChatGPT first came out. We used to ask for a single function, giving clear instructions on what the function should be and how it should work. Now, we give instructions like "add a notification system to my app", then sit back sipping coffee while AI agents create the database tables and indexes, design the APIs, build the UI, wire it up to the backend, write unit tests, and even control a browser to test the output. They already self-correct. I've done tasks like these where, after one clear instruction, the AI worked for an hour, an hour I spent sitting with nothing else to do.
Yes, the AI makes mistakes. It won't always. Every few months the capabilities grow.
My own prompts read more and more like they come from a PM/Tech Lead combo, and the level of abstraction will only increase from here. What "skill" will I need then to be "good at AI", when AI understands me and my instructions so well it's as though it can read my mind?
ChatGPT from three years ago could do nothing but reply to a single question. The ChatGPT of today, when using Codex, often works unattended for tens of minutes at a time, going through hundreds of iterations and steps to work toward a goal.
Coding is the easiest to automate this way, but whatever you can do, an AI soon will.
When the AI can do your job 10x faster, 10x cheaper, 10x better, tell me WHY ANYONE would hire YOU? 10 x 10 x 10 = 1000x human output. What will a human in the loop do? Slow the AI down?
When the AI can either already automate any skill you can imagine, or can learn to automate it within minutes, why are humans needed? Claude Skills is a good example of the direction we're heading here. MCP servers already make the world accessible to AI agents. Computer-use models already let AI agents do tasks where there's no easy API to call (e.g. the ChatGPT Atlas browsing agent).
The widening divide
This is going to further widen the divide between the haves and the have-nots. Today, someone who "has not" can work hard, grind, and, with a good amount of luck, become one of the "haves".
When the "haves" have armies of AI agents at their disposal, all of which cost money and are inaccessible to the "have nots", what could someone without pre-existing resources possibly do to accumulate them? Turn to crime? Because anything else you could do no longer exists as a job by this point.
You'll become a software engineer? Companies no longer hire junior engineers because they have Devin 17.0. You'll start a tech consultancy? Your would-be clients are already using Lovable to build full-stack apps from idea to production in days, with zero engineers involved. You'll open a bakery? Your would-be customers are in the same position as you: they have no money to buy your products.
You'll start a company? Even if your would-be customers have money, someone else will come in and replicate your entire business overnight using teams of AI agents. It will likely be a giant like Amazon, which already does this with Amazon Basics (without as much AI involvement, yet). But it doesn't need to be a megacorp: anyone a few steps ahead of you will have their capabilities, and the outcomes their capital can generate, magnified so much that it will be truly hopeless to even try competing. It's like when Sam Altman said it would be hopeless for anyone to try competing with OpenAI on building AI models (although that particular statement didn't stand the test of time).
Who will buy?
This is what worries me most. I am building my own products, and I'd be a fool to consider myself "safe". If all my customers are out of jobs, they cannot buy my products, I cannot make money, and I am on the streets.
If no one can add value to the economy anymore because they cannot hope to compete against AI, who will buy your products and services?
Companies like NVIDIA, OpenAI, and Oracle will trade among themselves like they already do, and they will wield so much power that they will have governments bending to their will (this is kind of already happening, though right now it's mostly through manipulative nationalist-sentiment narratives). They will not need regular people for anything. AI robots will cook their meals, clean their homes, guide their workouts, drive their cars. AI agents will design products for their companies, develop them, market them, handle customer care, and come up with the next product, end to end. The elites and the megacorps will have a circular economy where they are each other's customers.
Maybe if you're already a millionaire, you will survive. For a while. But it's not like AI companies will stop evolving, so who knows.
The jobs of the future
What will they look like?
I could go on...
What will be left to do?
Capitalism
The thing is, this feels inevitable. Capitalism optimises for cost saving. AI is cheaper. Humans are expensive: they demand leave, have emotional needs, and consume more resources. Capitalist societies give corporations "fiduciary duties" to "make as much money as inhumanely possible, consequences be damned". This is why fines are already seen as "the cost of doing business". AI is going to make this much worse.
Capitalists already gave mothers free formula until they stopped producing milk, then jacked up the price of formula, and babies starved: 212,000 babies in low- and middle-income countries died preventable deaths annually. The companies fulfilled their "fiduciary duty" and made more profits for their shareholders. If they didn't care about literal babies, you think they will care about Janet the SF hipster engineer losing her cushy job?
The incentives are perverse, and we already have billionaires running the government. The very government that was meant to protect the average citizen is now controlled by the people who are raking in money from this AI boom: Musk, Altman, Amodei.
Be careful who you listen to
What are their incentives and intentions?
When an AI company leader says something, understand that they're trying to shape public perception, trying to hype up their company for a bigger paycheck in their next funding round. Their words and actions are already inconsistent: they claim AI could be a species-ending mistake, even as they build ever more capable models and fire the AI safety researchers who raise the alarm.
What happens next? What can we do?
Honestly, I don't know. I'm an atheist, but this situation feels so hopeless that all I can think is "god help us".
Investment in AI continues to explode, while job growth stagnates or moves in the wrong direction, and people who speak out against AI are dismissed as "anti-progress" by the very people who will end up just as jobless as the "naysayers".
I am not anti-AI. I love the potential of the technology, I love how it democratises the power to create, I love how it accelerates progress. My problem is with the current crop of mega capitalist AI companies.
Will all of us starve? Will we figure out a solution that actually works? Will we put a stop to this madness until we figure this out? Is this the Great Filter?
Let's hope it isn't. Let's work towards figuring out where we go from here, and in the meantime, let's dramatically curb the power these companies have come to hold.
I do not want to just whine. I will think about possible solutions and share them in a future post. One I can already think of: 100% of AI company profits go into a World Development Fund that is distributed to all countries. There are many problems with that idea, but that's for another day.