Three ways AI changes how we work

Large language models (LLMs), the hot AI technology of the moment, are transforming how we work. Despite their limitations (unpredictability, mistakes, and hallucinations), they're incredibly useful when approached as collaborators rather than infallible tools. Here are three fundamental ways AI is changing knowledge work, including creative work like music composition, visual art, and graphic design:

Who: What to delegate and what to do yourself. With LLMs, it's faster to do some things yourself than to ask someone else to do them and report back. With the help of Claude and other LLM-based tools, I now do research and analysis tasks myself that I previously would have delegated to my team.

What: What work gets done. Because LLMs are faster than people, we will produce more with them: more research, more analysis, more designs, more prototypes. It could become common to produce alternative outputs for a client or boss to choose from. We'll take on more simple micro-tasks as well: finding better phrasing, revising paragraph structure, and getting overviews of unfamiliar topics. ("What's a word for the family of animals that includes alpacas and llamas? And what else is in that family?")

How: How work gets done. LLMs invite new ways of getting things done. Prompt engineering—experimenting with different ways of phrasing a question to coax useful output from an LLM—is one example. Another is brainstorming with the LLM, or asking it for feedback on your ideas or your language as you go. Perhaps the most important change is how we review and validate work.
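
To make prompt engineering concrete, here's a minimal sketch in Python that asks the same question several ways and compares the answers. It's illustrative only: it assumes Anthropic's Python SDK with an API key in your environment, and the model name and question variants are stand-ins of my own, not anything this piece prescribes.

```python
# Minimal sketch of prompt engineering: ask the same question several
# ways and compare the answers. Assumes the Anthropic Python SDK
# (pip install anthropic) and ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

# Three phrasings of the llama question from earlier in this piece.
phrasings = [
    "What family of animals includes alpacas and llamas?",
    "Name the taxonomic family containing alpacas and llamas, and list its other members.",
    "I'm writing for a general audience: what do you call the animal group that alpacas and llamas belong to?",
]

for prompt in phrasings:
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    # Compare how each phrasing shapes the answer.
    print(f"--- {prompt}\n{response.content[0].text}\n")
```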

Using LLMs without understanding how to validate their output carries significant risks. There have been multiple cases of lawyers getting in trouble for submitting legal briefs that cited fake cases hallucinated by an LLM. I recently heard another story in which a lawyer used contract-management software (I don't know whether it was LLM-based) to draft a contract that contained a clause disadvantaging his own client in favor of the counterparty. That's not good lawyering.

These kinds of mistakes can happen if we assume the software is flawless, the way traditional software generally is, or if we're unaware of the unique types of mistakes, like hallucinations, that LLMs can make. We need an accurate mental model of how these systems work to validate their output effectively. We need to get smarter at error checking.
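
Getting smarter at error checking can start small. Here's a minimal sketch of one such check in Python: flag any quotation an LLM attributes to a source document that doesn't actually appear there. The function and the sample strings are hypothetical, just to show the shape of the idea; a real workflow would route the flagged passages to a human reviewer.

```python
# Minimal sketch of one error check for LLM output: verify that every
# passage the model presents as a direct quote actually appears in the
# source document. Names and sample text are hypothetical.
import re

def unverified_quotes(llm_output: str, source_text: str) -> list[str]:
    """Return quoted passages in the LLM's output that are absent from
    the source text -- candidates for hallucination."""
    quotes = re.findall(r'"([^"]+)"', llm_output)
    return [q for q in quotes if q not in source_text]

source = "The contract terminates on 31 December 2025 unless renewed."
output = ('The agreement "terminates on 31 December 2025" and '
          '"auto-renews annually" thereafter.')

for quote in unverified_quotes(output, source):
    print(f'Check manually: "{quote}" not found in source')
```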

Working with LLMs is more like working with people than with machines. But it’s not exactly like working with people either.

What new ways have you found to work with LLMs? How are they changing your daily tasks?


Note for fans of The Economist: I asked Claude, “Could you rewrite this in the style of The Economist magazine?” Here's the uncanny result.

Uri Fishelson

Global Director - Sustainability & Climate Technologies @ Deloitte, Open Innovation Expert

This was an interesting piece, as I interact with LLMs daily and always consider the best way to do so. The issue of delegating comes to mind: a task might take less time, but trading the effort of confirming facts and catching hallucinations against the effort of finding the right person for the job isn't an easy exchange. I loved the Economist version of the piece! It doesn't feel like a human wrote it, but there are humans who do write in this style. So is the LLM a good copycat, or do those humans write like LLMs?

Christopher Rice, Ph.D.

Futurist, Technologist, Strategist. I help leaders in higher education, foundations, and State & Local government to avoid the dangers of hype and build better futures in practical, actionable ways.

I really appreciate your thoughts on this, David. I think one of the biggest challenges we have is recognizing the time required to validate the outputs. I wonder lately whether, say, Excel would have earned the adoption it has if users had to validate the output of every calculation to make sure it didn't randomly change cells in the formulas entered, or randomly change the data in referenced cells. Would we want a world in which we had to pull out a calculator to check every Excel workbook? It would certainly be a lot less useful for me.

Similarly, experimenting with tools like Perplexity, Claude, ChatGPT, and Gemini, I found them citing too many false sources, misinterpreting the data in sources, or missing critical bits of nuance in their summarizations to be useful to me. I have to guarantee the quality of my work to clients, and the tools rapidly revealed themselves as too untrustworthy or too time-consuming to keep in my workflow. If I'd hired an intern or junior researcher to do the work and had to do this level of checking and supervision, I would fire them. So I fired the tools.
