A closer look at the GenAI jobs transition

Why GenAI job losses won't materialize the way some expect

The idea that technological change leads to long-term job losses is overblown, but the idea that technological change forces transitions in the economy, many of them painful, is real. The alarmist discussion of generative machine learning models (LLMs, or GenAI) assumes the impact will be uniformly negative or uniformly positive. That is short-sighted and reactionary.

The Economist gives a good overview of how LLMs will impact different jobs depending on the nature of the role: 

  1. Sales roles, which rely more on differentiating oneself and building relationships with potential customers 1:1, will remain mostly unchanged. Great salespeople will be able to start conversations with more people without diminishing the quality of their interactions, but the art of the close will remain an art.
  2. Customer service roles that require quick problem resolution while de-escalating tension will become more productive as GenAI improves how efficiently solutions can be delivered without a human touchpoint, or by relatively inexperienced support people. Despite repeated attempts to automate the role out of existence, the average hourly rate for customer service roles has increased and the number of roles has expanded over time. I would expect average hourly rates to remain high, while the number of roles per company might start to contract as support teams specialize more thoroughly in the relational parts of their roles.
  3. Language translation roles have been under threat from software like Google Translate since the early 2000s, yet salaries for translation roles have fallen only slightly while the number of available translation roles has expanded. If you are in a business meeting, you still need a trusted native speaker who can discern subtleties and subtext. And globalization hasn't stopped just because translation has gotten easier. This is a great example of how the reductionist "GenAI will take our jobs" approach falls flat.

Exploring the impact.

These three examples are illustrative of the broad impact we can expect from GenAI. We will keep distilling, at a deeper level, where the human touch truly drives value: building, maintaining, and understanding relationships with people, and discerning nuanced patterns and relationships in the real world where data is not yet meaningfully collected.

Anytime we use a human as an intermediary to play telephone with a machine, that is where we can expect jobs to be removed. It has never been easier to communicate directly with an expert system. 

New roles will emerge that involve teaching machines how to do what we want them to do, and monitoring them to ensure that quality improves over time. For all the hype about GenAI, these are still just machine learning models that need a continuous stream of high-quality training and evaluation data to improve (we can talk about whether synthetic data will save us later). How many human-feedback people has OpenAI hired by now?
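To make that "continuous stream of high-quality training and evaluation data" concrete, here is a minimal Python sketch of what those human-in-the-loop roles actually produce: logged judgments on model outputs, plus a simple quality-drift check. The record fields, file format, and quality threshold here are illustrative assumptions, not any vendor's actual pipeline.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class FeedbackRecord:
    """One human judgment on a model output: the raw material for evaluation sets."""
    prompt: str
    model_output: str
    rating: int                       # e.g., a 1-5 quality score from a reviewer
    correction: Optional[str] = None  # optional human-written fix

def log_feedback(records: list[FeedbackRecord], path: str) -> None:
    """Append feedback to a JSONL file for later evaluation (or fine-tuning)."""
    with open(path, "a", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(asdict(rec)) + "\n")

def needs_review(records: list[FeedbackRecord], threshold: float = 3.5) -> bool:
    """Flag the model for human attention when average quality drifts below threshold."""
    average = sum(r.rating for r in records) / len(records)
    return average < threshold

if __name__ == "__main__":
    batch = [
        FeedbackRecord("Summarize this ticket", "Customer reports a billing error.", rating=4),
        FeedbackRecord("Draft a reply", "Dear valued customer, ...", rating=2,
                       correction="Hi Sam, thanks for flagging this."),
    ]
    log_feedback(batch, "feedback.jsonl")
    print("Escalate to human reviewers:", needs_review(batch))
```

The point of the sketch is that the monitoring role is ongoing, not one-off: someone has to keep rating outputs and noticing when the average drifts down.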

I think three patterns will hold across all knowledge work jobs, and especially jobs that involve the creation of written or coded assets and artifacts (I am not yet ready to discuss impacts on visual creative disciplines):

  1. The good will get much better; everyone else will stay the same: I expect top-end software engineers, doctors, lawyers, financial advisors, and data scientists to become dramatically more productive. A high-quality technical co-founder of a startup will be able to do the work of three people as that person's role shifts from writing code by hand to writing and reviewing generated code. This will diminish the number of software engineers required to reach product-market fit, so more companies will launch with less venture or customer funding, but GenAI will also increase the number of startups that can develop a meaningful minimum viable product and find product-market fit. The amount of effort required to develop a new version of an existing software product is lower than ever.
  2. Barriers to role entry will fall, and roles that used to require more training or expertise will require less: This will be the largest category of impact on the job market. Roles that don't require explicit licensure or credentials to enter (like software engineers, data engineers, and data scientists) will experience the impact first. The number of people entering these roles will continue to expand, and average salaries will fall. However, like the translators in the example above, the expansion of the labor pool for these roles will also lead more organizations to hire for them. Roles that do require explicit licensure (like lawyers, medical doctors, and nurses) will experience the same impact, but over a much longer period, as LLMs make passing licensure exams easier and less painful while reducing the importance of memorization. Since licensed roles have bodies designed to control the supply of practitioners, the impact will be dampened by the resulting lack of competition in the labor market. Often, a less tightly controlled, lower-salaried alternative (like nurse practitioners to doctors or analytics engineers to data engineers) will expand at a faster pace to capture a larger proportion of economic activity.
  3. Skills profiles for what constitutes a "quality" employee will change, and this will reshape the hiring process: Having a GenAI copilot for a role shifts the skills that role demands. For some roles, like customer service representatives, more emphasis will be placed on relationship building and de-escalation, since knowing the ins and outs of a company's systems will matter less. This will open these roles to people who are less adept at consuming vast quantities of content and better at connecting and communicating with human beings. While a frustrating subset of people have historically built knowledge fiefdoms in companies, where job security was directly related to tenure (not necessarily skills or experience), smart companies will prioritize written documentation. Confluence pages and wikis used to feel like a waste of time; now these documents are the key to making new employees feel like tenured ones via copilots.

AI Copilots > AI Colleagues.

Overall, the best way to view LLMs is in the way that Microsoft has branded them: as copilots. Almost every knowledge-worker role will have a GenAI-based copilot. The barrier to the development of these systems is not the generative capabilities of the models in existence today; it is the level of implicit knowledge that has not yet been codified explicitly in a way that these models can be trained to understand. 

Much of the work over the next 5+ years will be making knowledge work processes explicit rather than implicit, so that the menial aspects of those processes can be automated and the nuanced aspects can be copiloted for improved speed and quality. Some of this work will be done by B2B SaaS startups (look at this year's YC cohort) targeting specific verticals for knowledge work process codification. Some of it will require further industrialization of processes and procedures before the work can be meaningfully codified (e.g., biotech R&D). This is why LLMs will ultimately make people more productive (in a real-world sense, if not necessarily an economic one), requiring fewer person-hours per task, and thus fewer people in that role per company.
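As a toy illustration of why codification matters: a copilot can only ground its answers in knowledge that has been written down. The keyword-overlap retrieval below is a deliberate simplification (production copilots typically use embedding-based search), and the knowledge-base entries are hypothetical.

```python
from typing import Optional

def overlap(query: str, doc: str) -> int:
    """Count how many words from the query appear in the document."""
    doc_words = set(doc.lower().split())
    return sum(1 for w in query.lower().split() if w in doc_words)

def retrieve(query: str, knowledge_base: dict[str, str]) -> Optional[str]:
    """Return the best-matching documented process, or None when the
    knowledge was never made explicit (the copilot has nothing to stand on)."""
    if not knowledge_base:
        return None
    topic, doc = max(knowledge_base.items(), key=lambda kv: overlap(query, kv[1]))
    return doc if overlap(query, doc) > 0 else None

if __name__ == "__main__":
    kb = {
        "refunds": "To issue a refund, open the billing console and select the invoice.",
        "escalation": "Escalate outages to the on-call engineer via the incident channel.",
    }
    print(retrieve("how do I issue a refund", kb))              # documented: returns the doc
    print(retrieve("onboarding process for biotech labs", kb))  # undocumented: returns None
```

The second query returning None is the whole point: implicit, never-written-down process knowledge is invisible to the copilot until someone codifies it.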

Yet the number of new ventures launched, and correspondingly the number of successful ventures, should expand faster than jobs disappear to automation. Early-stage companies will still need great sales and customer service people to understand the market, build relationships, and deliver value. A company without a data model can't use an AI analyst. You can't train an AI on data you haven't collected yet. You can't automate what you don't yet know how to do. If humans don't know how to accurately collate feedback and find product-market fit, why should we expect AI to solve that problem?

But what about Devin?

Frankly, I think we should all have learned by now that until something is fully deployed in production at the enterprise level, we should not assume it’s as transformational as it appears. Self-driving cars and software engineer replacements (no code FTW!) have been coming for decades, but at the end of the day, few of us want to bet our lives or our livelihoods on agents we can’t sue, fire, or throw in jail. Accountability is real, and its importance is often underestimated. It's built into the very fabric of our regulations. People will always be expected to constrain, teach, and double-check the machines.

The hype machine is telling us we need "agents" to do our work for us, but in truth there are relatively few arenas where the speed + accuracy + cost tradeoffs from agents will deliver without human oversight, intervention, and training. We are closer to a world of independent agents than we have ever been, but not particularly close in my opinion.

Transitions forced by tech are painful, but often productive.  

With population growth slowing and the population aging, I don’t believe AI will lead to the level of joblessness some seem to think is imminent. Nevertheless, transitions are difficult for people in general, and particularly those who lack the financial means to gain skills and search for jobs in earnest, those who are averse to change, and those who have benefited from a role that has been protected from disruption for decades. So I expect people who fall into those categories to experience real pain from the transitions that are coming in the job market. 

So if joblessness isn’t the concern some fear it is, could GenAI contribute to solving the US’s economic productivity problem? In theory, yes (haven’t I just told you how much more productive people are going to be?). But in truth, I don’t actually think the productivity problem is a problem, and I don’t care if US “productivity” increases or decreases because I don’t believe that it measures what we should care about as a society. If you’re interested in why, we can talk about that in the comments.


Matt Swan

Cloud Data Architect | DataOps & Digital Transformation | AI & Data Product Development

Fully agree. Nice post! More than anything, I expect employers to keep all the same people in all the same roles, but just try to pay them 60% less because of how much of their jobs can be shifted to AI. There are some basic business rules we have to suspend to allow for this "AI will replace us all" mentality. If AI can replace engineers, à la Devin, does that mean that Google will no longer have engineers? Not a chance. Google's business proposition requires it to have unique access to products and services that others cannot match. By nature, these things cannot be built by an AI, or everyone could have one without paying Google. Therein lies an interesting caveat: AI leads to a democratization of market access. It should allow more AI-centric businesses to emerge... but business hates democratization. Business is all about consolidating power to deliver shareholder value, and that principle is inherently antithetical to AI. Just as we struggle to transition to renewable sources of energy, I expect AI to become an arms race in which, rather than empowering people to do more, the major players compete to leverage patent law to lock down as much AI IP as possible. It's already happening.

Alan Chramiec, PhD

Oncology and Organoids | Data-Driven Drug Development | Repeat Founder

"Anytime we use a human as an intermediary to play telephone with a machine, that is where we can expect jobs to be removed. It has never been easier to communicate directly with an expert system." Great insight! We call this the "human-ware" layer where largely low-value work is being performed, often with errors, and with a consequently high opportunity cost. In our context, it would be things scientists manually entering in data into systems.
