Will agents redefine enterprise processes for good?

The idea that process changes are harder than tool changes came up in a recent conversation with a customer, and looking back at my notes I thought it would be a great topic to explore further.

First of all, let me share some definitions so that we are all on the same page: 

  • By “tool” I mean an application that helps a user carry out a task; for example, Salesforce allows teams to track the opportunities they are working on.
  • By “process” I mean the user journey that includes using that tool. Taking the example of Salesforce, the process starts when a team identifies an opportunity, and ends when the opportunity is either won or lost. 

Tools are the brushes and colors a painter has available, but the process of going from an idea to a finished masterpiece involves so much more than the tools. It's the intimate involvement of humans in processes that makes them harder to change. If you think about it, adding a new brush to the painter's toolkit simply expands their options, whereas asking the painter to stop painting indoors and move outdoors might deeply impact their creative process.

Working with many organizations across different verticals I’ve seen the tool/process interaction playing out in three main ways: 

  1. Because too many tools are available, or have been added over time without retiring old ones, processes are slow and unclear: the tools' capabilities partially overlap, and every user adopts a slightly different process.
  2. Tools are introduced with the hope of resolving a process issue, but that doesn’t happen because the tool only solves for part of the end-to-end user journey.
  3. Tools are not used because they don't align with the process employees are used to following.

In all my conversations, the biggest deterrent to change is a sticky process. Even so, most companies are looking at introducing AI within a tool rather than using it to disrupt a process. In the few cases where processes were involved, the conversation quickly turned from technology to people, because ultimately changing the way you do things is hard, harder than changing the tool you use to do it.

In this article I'll reflect on whether and how AI can make process changes more effective and more seamless for people and organizations.


Processes are all the things we expect people in our organization to do manually, and because of that, they are by definition error-prone and often highly repetitive. For example:

  • Data entry of customer information
  • Registration of products and their attributes for online selling
  • Maintenance of patient records
  • Tracking of budget and expenses
  • Inventory management

The problem with processes is that they are sticky and grow by inertia. Once a cohort of people gets used to doing things in a certain way, they will continue to do things in the way they are familiar with. They will adapt any new tools to the existing process rather than change their process to adapt it to the new tool.

What this means is that you might introduce a new tool that is more powerful and adds capabilities, but if the process around the tool stays the same, not much will change in terms of the overall end result. I strongly believe that disruption and innovation come from doing things differently, and so we need to figure out a better, faster and easier way to change processes.


How, more specifically, could GenAI and agents help with this?

  1. You could replace the entry point of the process with a conversational interface, so that users can feed in information in natural language.
  2. Behind the scenes, one or more agents would populate the system with the information received, replacing the manual work of a human navigating the system and entering data (a minimal sketch of both pieces follows this list).
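
To make this more concrete, here is a minimal sketch in Python of what those two pieces could look like. Everything in it is hypothetical: "call_llm", "crm_create_opportunity", and the field names are stand-ins for whichever model provider and system of record you actually use.

```python
import json

# Hypothetical stand-ins: wire these to whichever LLM provider and
# system of record (e.g. a CRM) your organization actually uses.
def call_llm(prompt: str) -> str:
    """Send a prompt to your model of choice and return its text response."""
    raise NotImplementedError("connect to your model provider here")

def crm_create_opportunity(record: dict) -> None:
    """Write a record into your system of record."""
    raise NotImplementedError("connect to your CRM API here")

EXTRACTION_PROMPT = (
    "Extract the opportunity details from the message below.\n"
    "Return JSON with keys: customer, product, amount, expected_close_date.\n"
    "Message: {message}"
)

def conversational_entry_point(user_message: str) -> dict:
    # 1. The user types (or dictates) plain language instead of filling in a form.
    raw = call_llm(EXTRACTION_PROMPT.format(message=user_message))
    record = json.loads(raw)

    # 2. Behind the scenes, an agent does the navigation and data entry
    #    that a human would otherwise do by hand.
    crm_create_opportunity(record)
    return record

# Example: the user never touches the CRM's forms.
# conversational_entry_point(
#     "Met with Acme today, they want 200 licences of our analytics suite, "
#     "roughly 150k, hoping to close by end of Q2."
# )
```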

The reason I think LLMs and agentic frameworks can be a true game changer for process optimization is that the interface users would interact with is already familiar: written text or spoken language. They won't need to learn to do things differently or adopt a new process; they'd just interact in natural language, as they already do many times a day. This also means that more people could potentially do the job, because we will be able to support more modalities and more languages, opening up functions to people with impairments that would otherwise prevent them from working with a single modality.

When it comes to the “behind the scenes”, the real engine that runs the process is an agentic system. The promise of agents and agentic systems is that they will be able to carry out tasks on behalf of the user, which means that a lot of the tasks that are currently manual could be automated.

You can test an agentic experience for free today by using Gemini's Deep Research capability. From your initial prompt it creates a plan of action, submits it for your review, and, once you sign it off, carries out the research on your behalf, autonomously performing activities like searching the web.

It's this combination of creating a plan, often referred to as "reasoning", and actually carrying out activities on your behalf that fundamentally differentiates a model from an agent.
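
This plan, review, execute pattern is easy to picture in code. The sketch below is not Gemini's actual implementation, just an illustration of the loop, reusing the same hypothetical "call_llm" stub as before: draft a plan, let a human sign it off, then carry out each step autonomously.

```python
def call_llm(prompt: str) -> str:  # hypothetical stub, as in the earlier sketch
    raise NotImplementedError("connect to your model provider here")

def plan_review_execute(request: str) -> str:
    # 1. The agent drafts a plan of action from the initial prompt.
    plan = call_llm("Break this request into numbered research steps:\n" + request)

    # 2. A human reviews and signs off the plan before anything runs.
    print("Proposed plan:\n" + plan)
    if input("Approve plan? (y/n) ").strip().lower() != "y":
        return "Plan rejected by the user."

    # 3. Only then does the agent carry out each step autonomously,
    #    for example by calling a web-search tool.
    findings = []
    for step in plan.splitlines():
        if step.strip():
            findings.append(call_llm("Carry out this step and summarise it: " + step))

    return call_llm("Write a short report from these findings:\n" + "\n".join(findings))
```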

Agents make use of models, often multimodal, as well as tools, functions, and potentially other agents as well. They have memory and a reasoning engine which, just like a brain, orchestrates all the other components.

The fact that one "trigger", in this case a prompt, can initiate a much more complex reasoning plan that includes using tools to carry out actions autonomously is what makes me think we will soon be able to automate processes in a way that benefits end users and organizations alike.
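
A toy version of that orchestration might look like the following. This is a deliberately simplified, hypothetical sketch; real agent frameworks handle tool schemas, retries, and far richer memory, but the shape of the loop is the same: a model decides, tools act, and memory keeps track.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    llm: Callable[[str], str]               # the (possibly multimodal) model
    tools: dict[str, Callable[[str], str]]  # e.g. {"search_web": ..., "crm_write": ...}
    memory: list[str] = field(default_factory=list)

    def run(self, goal: str, max_steps: int = 5) -> str:
        self.memory.append("GOAL: " + goal)
        for _ in range(max_steps):
            # The "reasoning engine": decide the next action from goal + memory.
            decision = self.llm(
                "Given the goal and history below, reply either "
                "'TOOL <name> <input>' or 'FINISH <answer>'.\n" + "\n".join(self.memory)
            )
            if decision.startswith("FINISH"):
                return decision.removeprefix("FINISH").strip()
            _, name, tool_input = decision.split(" ", 2)
            result = self.tools[name](tool_input)  # act on the user's behalf
            self.memory.append(f"{name}({tool_input}) -> {result}")
        return "Stopped after max_steps without finishing."
```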

A few components that will be fundamental to make this work are the following:

  1. You need to be able to connect to all your different data sources if you want to automate the process – this is a data problem rather than an AI problem.
  2. You need logic to spot whether the information provided is wrong in the first place – such a system is valuable not only for automating manual entry, but also because it can learn from the data and flag potential errors at entry time (a minimal sketch of such a check follows this list).
  3. There must be an observability strategy, whereby users can quickly visualize what happened to the information they have entered – even a simple dashboard would do.
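
As an illustration of point 2, here is a hypothetical sketch of the kind of check I have in mind: a few simple rules that flag suspicious entries before they are written to the system of record. The field names and thresholds are made up, and in practice the rules could also be learned from historical data.

```python
def validate_opportunity(record: dict) -> list[str]:
    """Return a list of warnings for a record about to be written to the CRM."""
    warnings = []

    # Required fields: flag anything missing or empty.
    for key in ("customer", "product", "amount", "expected_close_date"):
        if not record.get(key):
            warnings.append("Missing field: " + key)

    # Range check: flag amounts that fall outside what we normally see.
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and not (0 < amount < 10_000_000):
        warnings.append(f"Amount {amount} is outside the expected range")

    return warnings

# Example usage: route flagged records to human review instead of writing bad data.
# issues = validate_opportunity({"customer": "Acme", "amount": -5})
# if issues:
#     print("Flagged for review:", issues)
```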

It's a lovely coincidence that while I'm writing this we have just launched an enterprise-grade solution that does what I've been talking about: a natural language interface (remember NotebookLM?), powerful LLMs (the Gemini family), the ability to connect to all of a company's data, and the ability to create specialized agents out of the box. It's called Google Agentspace, and I'm looking forward to speaking with the enterprises who will start using it from today and reporting back to my audience here to tell you more about it.


The future ahead is truly great!


Lucrezia

