AI is a big source of anxiety for many people nowadays. I am not qualified to talk about AI on a technical level, but I’m still not worried about it. Even though many people are scared that AI or AGI could automate away most jobs in the near future, I’m skeptical this will be as disruptive as they think.
Technological Adoption Is Slow and Incentivizes Reassignment
Adoption of new technologies in an economy is hard, and takes a long time, even if the technology already exists.
We think of the internet, cell phones, and other technology as being extremely disruptive, because they are, but these technologies didn’t start truly disrupting our lives or economy until decades after they were first introduced. Adoption was so slow that firms and employment markets were able to adapt to them.
For any firm, it takes time and resources to acquire new technology. One can’t replace an entire factory workforce with machines overnight; the transition happens slowly, if only because that’s the only affordable pace. This slower integration also gives a firm the flexibility to reassign employees to new tasks rather than fire them.1
I think reassigning workers (as opposed to firing and hiring new ones) is an understated way that firms optimize productivity when they introduce technology.
For example, when ATMs were introduced in the late 1970s and early 1980s, it was widely predicted that they would render bank tellers extinct. Instead, the number of tellers roughly doubled between the 1970s and 2010, because ATMs made it cheaper for banks to open new branches. It’s possible and perhaps likely that many tellers were fired during this process, but many others were likely reassigned to new local branches. Instead of Small Town A having one Bank of America branch employing three tellers, the bank may have opened two more branches, each employing one teller.
People Will Always Trust Other People More Than AI
Automation and technological innovations are often distrusted when they are first adopted, and that distrust slows adoption. You want someone on staff who knows how the automated thing works, so that if it breaks, you can revert to the previous method or troubleshoot a solution.
A good example of this is many firms running their website using a service like WordPress (which automates many web design processes), but they still have someone on staff (or a hired consultant) who can make necessary changes to the website in the event that WordPress misbehaves.
AI won’t be much different. Even though it’s more sophisticated and “smarter” than other technology, it’s not something you can trust the way you trust a human. The stakes for a human doing their job correctly or incorrectly are getting paid or losing their job; for an AI, the stakes are you clicking a button that says “this was wrong.” This is not reassuring!
In this sense, AIs have no “skin in the game,” and big mistakes can be costly. An AI malfunctioning is not like a PC malfunctioning or a lamp breaking, because there is someone “using” the PC or lamp who is accountable for it. People who fear that AI will kill jobs assume there’s no one “using” the AI.
So, I fully concede that AI may one day be better at designing an email campaign than I am, but my boss will want that campaign done right and will want someone with skin in the game to rely on. That boss would keep me around to make sure everything the AI does is done correctly, if only because they trust me. If I fail, they can fire me and hire someone who will do it right; they can’t really fire the AI in the same way.
Many Projects Are About Coordination, Not Computation or Production
Building on the point that trust issues slow AI adoption, it’s worth mentioning that the pace of supply and demand in a market is a human pace, not an “efficient” one.
Products sometimes go to market at a certain time of year because they will provide maximal value for a firm at that time.
Think of holiday decorations. Say your company has held a party every St. Patrick’s Day for the last three years, but no one wants to have one this year because the holiday falls on a Sunday. An optimized AI may go ahead and order party supplies in January because that’s what the algorithm suggests is the best financial decision based on previous years. But no one wanted the party, so you end up with decorations your company spent money on but ultimately didn’t want. You either have to tediously go through the motions of returning them or find a spot in the office to store them until next year.
Sure, AIs may learn not to do this over a long enough time period, but in the short and medium term, there are costs to over-efficiency. Put another way: there are false positives when one thinks too algorithmically and acts unencumbered, which is basically how many doomers imagine AI functioning in a workplace.
The best solution is simply not to outsource everything to AI, and to recognize that the relationship between production and consumption, or buyer and seller, isn’t a technical one but a human one2 (it’s about the time of year and human needs, not just creating or purchasing the thing).
AI is optimized for efficiency, productivity, and automation, but those are not the only qualities many jobs require. A significant portion of office productivity nowadays is project and product management: interfacing with multiple interested stakeholders and coordinating with them to solve problems.
As any office worker knows, an inordinate amount of time is spent in meetings and waiting for other people to get back to you. Speaking for myself, I probably spend more of my time coordinating for my job than I do creating.3
AI will save me lots of time, but I’d still have to check what the AI did before I act on it or present it to someone else, much like a copyeditor checks a writer’s work before it’s sent off to be published.
I’m not going to trust AI to create my deliverables without my approval or to take my place in these coordination meetings, and I don’t think most people would either.
There Are Economic Reasons Too…
To recap: adoption of new technologies is often slow, but it boosts productivity and leads firms to reassign workers, not fire them. I don’t see people trusting AI more than the people they pay when it comes to decisions with financial implications. Given these trust issues, and the fact that product or project releases at many companies are primarily coordination problems, not productivity or computation problems, I don’t see AI replacing all white collar workers any time soon.
By the time AI does replace current white collar jobs, we’ll either all be so collectively rich from the productivity gains that it won’t matter, or we’ll have created brand new industries and jobs that people will re-skill to fill. But now we’re getting into reasons rooted in economic theory not to worry about AI. That will be the subject of my next post. Stay tuned!
1. It takes less time and fewer resources to onboard an employee who is transferring into a new position, and who already has relevant skills and company knowledge, than it does to fire that employee and hire and onboard someone completely new. I’m personally experiencing this: my workplace had mass layoffs, and I’m now tasked with doing things I’m not technically an expert on. But it’s cheaper and more efficient to train me than to hire someone new.
2. We could be more inclusive of future AIs by calling them “economic stakeholders,” which I define as entities that have an interest in buying or selling in the marketplace. But since AIs function as productivity machines with pre-programmed, narrow interests, I don’t see them as economic stakeholders.
3. It’s not because I have a “bullshit job” or am unproductive, but because for my work, what you would call “product releases” (mass emails to supporters) happen on specific days of the week (per observed trends over time), and as a team we know about them well ahead of time. I’m efficient enough at what I do to create them well in advance.