A closer look at the GenAI jobs transition
Why GenAI job losses won't materialize the way some expect
The idea that technological change leads to long-term job losses is overblown, but the idea that technological change forces transitions in the economy, many of them painful, is real. Much of the discussion of generative machine learning models (LLMs, or GenAI), whether alarmist or utopian, assumes the impact will be uniformly negative or uniformly positive. Either assumption is short-sighted and reactionary.
The Economist gives a good overview of how LLMs will impact different jobs depending on the nature of the role:
Exploring the impact.
These three examples illustrate the broad impact we can expect from GenAI. We will keep distilling, at a deeper and deeper level, where the human touch is truly needed to drive value: building, maintaining, and understanding relationships with people, and discerning nuanced patterns and relationships in the real world where data is not meaningfully collected.
Anytime we use a human as an intermediary to play telephone with a machine, that is where we can expect jobs to be removed. It has never been easier to communicate directly with an expert system.
New roles will emerge that involve teaching machines how to do what we want them to do and monitoring them to ensure that quality improves over time. For all the hype about GenAI, these are still just machine learning models that need a continuous stream of high-quality training and evaluation data to improve (we can talk about whether synthetic data will save us later). How many human-feedback people has OpenAI hired by now?
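To make that concrete, here is a minimal sketch of what a human-feedback loop looks like in practice. Everything in it is illustrative, not drawn from any specific product: `generate_draft` is a stand-in for whatever model produces a first draft, and the JSONL file is just one simple way to accumulate evaluation data.

```python
import json
from datetime import datetime, timezone


def generate_draft(prompt: str) -> str:
    """Hypothetical stand-in for whatever model produces the first draft."""
    return f"[model draft for: {prompt}]"


def collect_feedback(prompt: str, reviewer: str, path: str = "feedback.jsonl") -> dict:
    """Show a draft to a human, capture their rating and correction,
    and append the example to a dataset used for future evaluation or fine-tuning."""
    draft = generate_draft(prompt)
    print(draft)
    rating = int(input("Rate this draft 1-5: "))
    correction = input("Corrected version (blank to accept as-is): ") or draft
    record = {
        "prompt": prompt,
        "draft": draft,
        "rating": rating,
        "preferred_output": correction,
        "reviewer": reviewer,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Whether those records end up in an eval set or a preference dataset, the point is the same: the machine only gets better on a steady diet of exactly this kind of human judgment.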
I think three patterns will hold across all knowledge work jobs, and especially jobs that involve the creation of written or coded assets and artifacts (I am not yet ready to discuss impacts on visual creative disciplines):
AI Copilots > AI Colleagues.
Overall, the best way to view LLMs is the way Microsoft has branded them: as copilots. Almost every knowledge-worker role will have a GenAI-based copilot. The barrier to building these systems is not the generative capability of today's models; it is the amount of implicit knowledge that has not yet been codified explicitly, in a form these models can be trained to understand.
Much of the work over the next five-plus years will be making knowledge-work processes explicit rather than implicit, so that the menial aspects of those processes can be automated and the nuanced aspects can be copiloted for better speed and quality. Some of this work will be done by B2B SaaS startups (look at this year’s YC cohort) targeting specific verticals for knowledge-work process codification. Some will require further industrialization of processes and procedures before the work can be meaningfully codified (e.g., biotech R&D). This is why LLMs will ultimately make people more productive (in a real-world sense, not necessarily an economic one), requiring fewer person-hours per task, and thus fewer people in that role per company.
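As a toy illustration of what "making a process explicit" can mean, the sketch below turns an implicit review habit into a codified checklist a copilot can apply step by step. The checklist, the contract-review example, and the `call_llm` stand-in are all hypothetical assumptions for the sake of the sketch, not a description of any real product.

```python
# A once-implicit review process, written down as explicit, checkable steps.
CONTRACT_REVIEW_CHECKLIST = [
    "Identify the counterparties and the effective date.",
    "Flag any auto-renewal or termination-for-convenience clauses.",
    "List payment terms and late-payment penalties.",
    "Note any liability caps and indemnification language.",
]


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM backs the copilot."""
    return f"[model response to: {prompt[:60]}...]"


def copilot_review(document_text: str) -> list[str]:
    """Run each codified step over the document; a human still reviews the output."""
    findings = []
    for step in CONTRACT_REVIEW_CHECKLIST:
        prompt = (
            "You are assisting with a contract review.\n"
            f"Task: {step}\n"
            f"Document:\n{document_text}\n"
            "Answer concisely and quote the relevant clause."
        )
        findings.append(call_llm(prompt))
    return findings
```

Nothing here is clever, and that is the point: the value is that the checklist now exists in a form a model can be pointed at and a manager can audit, which is exactly the codification work described above.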
Yet the number of new ventures launched, and correspondingly the number of successful ones, should grow faster than jobs disappear to automation. Early-stage companies will still need great sales and customer-service people to understand the market, build relationships, and deliver value. A company without a data model can't use an AI analyst. You can't train an AI on data you don't collect yet. You can’t automate what you don’t yet know how to do. If humans don’t know how to accurately collate feedback and find product-market fit, why should we expect AI to solve that problem?
But what about Devin?
Frankly, I think we should all have learned by now that until something is fully deployed in production at the enterprise level, we should not assume it’s as transformational as it appears. Self-driving cars and software engineer replacements (no code FTW!) have been coming for decades, but at the end of the day, few of us want to bet our lives or our livelihoods on agents we can’t sue, fire, or throw in jail. Accountability is real, and its importance is often underestimated. It's built into the very fabric of our regulations. People will always be expected to constrain, teach, and double-check the machines.
The hype machine is telling us we need "agents" to do our work for us, but in truth there are relatively few arenas where the speed + accuracy + cost tradeoffs from agents will deliver without human oversight, intervention, and training. We are closer to a world of independent agents than we have ever been, but not particularly close in my opinion.
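In the arenas where agents do make sense, the oversight usually looks something like the sketch below: a minimal human-in-the-loop gate where the agent acts on its own only for low-stakes steps and everything else waits for a person. The `AgentAction` shape and the risk threshold are illustrative assumptions, not a real framework.

```python
from dataclasses import dataclass


@dataclass
class AgentAction:
    description: str   # what the agent wants to do
    risk: float        # 0.0 (trivial) to 1.0 (irreversible/costly), estimated upstream


# Illustrative threshold: anything above this requires explicit human sign-off.
AUTO_APPROVE_RISK_THRESHOLD = 0.3


def execute(action: AgentAction) -> None:
    print(f"Executing: {action.description}")


def run_with_oversight(actions: list[AgentAction]) -> None:
    """Let the agent proceed on low-risk steps; pause for a human on everything else."""
    for action in actions:
        if action.risk <= AUTO_APPROVE_RISK_THRESHOLD:
            execute(action)
        else:
            answer = input(f"Approve '{action.description}' (risk={action.risk})? [y/N] ")
            if answer.strip().lower() == "y":
                execute(action)
            else:
                print(f"Skipped: {action.description}")


if __name__ == "__main__":
    run_with_oversight([
        AgentAction("Draft a summary email", risk=0.1),
        AgentAction("Send the email to the customer", risk=0.6),
    ])
```

The tradeoff is plain even in a toy like this: the human gate costs speed, but it is where accountability lives, and that is why the gate will stay for a long time.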
Transitions forced by tech are painful, but often productive.
With population growth slowing and the population aging, I don’t believe AI will lead to the level of joblessness some seem to think is imminent. Nevertheless, transitions are difficult for people in general, and particularly those who lack the financial means to gain skills and search for jobs in earnest, those who are averse to change, and those who have benefited from a role that has been protected from disruption for decades. So I expect people who fall into those categories to experience real pain from the transitions that are coming in the job market.
So if joblessness isn’t the concern some fear it is, could GenAI contribute to solving the US’s economic productivity problem? In theory, yes (haven’t I just told you how much more productive people are going to be?). But in truth, I don’t actually think the productivity problem is a problem, and I don’t care if US “productivity” increases or decreases because I don’t believe that it measures what we should care about as a society. If you’re interested in why, we can talk about that in the comments.
Cloud Data Architect | DataOps & Digital Transformation | AI & Data Product Development
Fully agree. Nice post! More than anything, I expect employers to keep all the same people in all the same roles, but just try to pay them 60% less because of how much of their jobs can be shifted to AI. There are some basic business rules we have to suspend to allow for this "AI will replace us all" mentality. If AI can replace engineers, a la Devin, does that mean that Google will no longer have engineers? Not a chance. Google's business proposition requires it to have unique access to products and services that others cannot offer. By nature, these things cannot be built by an AI, or everyone could have one without paying Google. Therein actually lies an interesting caveat: AI leads to a democratization of market access. It should allow more AI-centric businesses to emerge... but business hates democratization. Business is all about consolidating power to deliver shareholder value. This principle of business is inherently antithetical to AI. Just as we struggle to transition to renewable sources of energy, I expect AI to become an arms race where we are not trying to empower people to do more, but rather the major players compete to leverage patent law to lock down as much AI IP as possible. It's already happening.
Oncology and Organoids | Data-Driven Drug Development | Repeat Founder
9mo"Anytime we use a human as an intermediary to play telephone with a machine, that is where we can expect jobs to be removed. It has never been easier to communicate directly with an expert system." Great insight! We call this the "human-ware" layer where largely low-value work is being performed, often with errors, and with a consequently high opportunity cost. In our context, it would be things scientists manually entering in data into systems.