RIP to RPA: The Rise of Intelligent Automation

By Kimberly Tan

As AI turns labor into software, the opportunity to productize external professional services (e.g., in legal or accounting) has become a hot topic. However, we believe there is also substantial opportunity in productizing internal work within organizations. These responsibilities often fall under the umbrella term of “operations” and can range from full-time data entry and front desk roles to routine operational tasks embedded in every other role. This work generates fewer media headlines, but it is the internal stitching that holds companies together.

These ops roles involve critical but often repetitive and mundane tasks. Companies have historically attempted to automate these tasks using Robotic Process Automation (RPA), but with generative AI, we believe true automation through agents is now possible. We’ve already seen early examples of agents working in production, such as Decagon’s automated customer support. And with companies like Anthropic launching capabilities like computer use to enable models to meaningfully interact with existing software, there is a clear emerging infrastructure stack for founders to build verticalized intelligent automation applications.

These examples preview a world in which AI agents are able to fulfill the original promise of RPA, turning what used to be operations headcount into intelligent automation and freeing workers to focus on more strategic work.

The Original Promise of RPA and the Impact of AI

Read more


Welcome to LLMflation - LLM inference cost is going down fast

By Guido Appenzeller

To a large extent, technology cycles are driven by a rapid decline in the cost of the underlying commodity. Two prominent examples of this are Moore’s Law and Dennard scaling, which help explain the PC revolution by describing how chips become more performant over time. A lesser-known example is Edholm’s Law, which describes how network bandwidth increases — a key factor in the dotcom boom.

In analyzing historical price data since the public introduction of GPT-3, it appears that — at least so far — a similar law holds true for the cost of inference in large language models (LLMs). We’re calling this trend LLMflation, for the rapid increase in tokens you can obtain at a constant price.

In fact, the price decline in LLMs is even faster than that of compute cost during the PC revolution or bandwidth during the dotcom boom: For an LLM of equivalent performance, the cost is decreasing by 10x every year. Given the early stage of the industry, the time scale may still change. But the new use cases that open up from these lower price points indicate that the AI revolution will continue to yield major advances for quite a while.
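The 10x-per-year decline described above compounds quickly, and a back-of-the-envelope sketch makes the magnitude concrete. The $60-per-million-token starting price below is an illustrative assumption (roughly GPT-3’s launch-era pricing), not a figure from the article:

```python
def projected_cost(start_cost_per_mtok: float, years: float,
                   decline_factor: float = 10.0) -> float:
    """Cost per million tokens after `years`, assuming a constant
    `decline_factor`-per-year price decline for equivalent performance."""
    return start_cost_per_mtok / (decline_factor ** years)

if __name__ == "__main__":
    start = 60.0  # hypothetical $/1M tokens at year 0 (assumption)
    for year in range(4):
        print(f"year {year}: ${projected_cost(start, year):,.4f} per 1M tokens")
```

Under this sketch, a workload that cost $60 per million tokens at launch would cost six cents three years later — the kind of drop that turns previously uneconomical use cases into viable ones.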

Read more


More from a16z

The Best Way to Achieve AGI Is to Invent It

On this episode of the AI + a16z podcast, longtime machine-learning researcher and University of Washington Professor Emeritus Pedro Domingos joins a16z General Partner Martin Casado to discuss the state of artificial intelligence, whether we’re really on a path toward AGI, and the value of expressing unpopular opinions. It’s a very insightful discussion as we head into an era of mainstream AI adoption and begin to ask big questions about how to ramp up progress and diversify research directions.

Read more

How to Build a Thriving AI Ecosystem with Lisa Su, CEO of AMD

In this wide-ranging conversation with a16z Operating Partner Bob Swan — himself formerly CEO of Intel — Lisa Su lays out her vision for the evolution of compute within the AI ecosystem, touching not only on raw power and the continuation of Moore’s Law, but also how AMD will support “the right compute for each form factor” for a wider range of real-world gen AI use cases.

Read more

How GPU Access Helps Startups Be Agile

In this episode of AI + a16z, General Partner Anjney Midha explains the forces that lead to GPU shortages and price spikes, and how the firm mitigates these concerns for portfolio companies by supplying them with the GPUs they need through a program called Oxygen.

Listen now

Neural Nets and Nobel Prizes: AI’s 40-Year Journey from the Lab to Ubiquity

In this episode of AI + a16z, General Partner Anjney Midha shares his perspective on the recent collection of Nobel Prizes awarded to AI researchers in both Physics and Chemistry. He talks through how early work on neural networks in the 1980s spurred continuous advancement in the field — even through the “AI winter” — which resulted in today’s extremely useful AI technologies.

Listen now

a16z Growth’s David George on His Frameworks for Late-Stage Investing

David George, general partner and head of the a16z Growth fund, sat down with Tyler Hogge and Sterling Snow for the I/O Podcast to discuss his mental models for growth-stage investing, what it really takes to go public, where AI is today and where it’s headed, and more.

Listen now


Connect with a16z

  • Interested in receiving this newsletter on a monthly basis? Subscribe here.
  • Want more insights from a16z? Subscribe to our suite of newsletters here.


Disclosures

You are receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters, you can unsubscribe immediately.

This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. This newsletter may link to other websites and certain information contained herein has been obtained from third-party sources. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.

References to any companies, securities, or digital assets are for illustrative purposes only and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund which should be read in their entirety.) Past performance is not indicative of future results.

Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Content in this newsletter speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see disclosures for additional important information.

Comments

Michał Babula

Global Medical Manager Digital & Technology

For me, it is easier to set up agentic workflows than RPA – it is a matter of preference.
Nagesh Nama

CEO at xLM | Transforming Life Sciences with AI & ML | Pioneer in GxP Continuous Validation |

It's fascinating to see the potential of AI in transforming internal operations. In the life sciences sector, AI-driven automation can significantly enhance GxP compliance and validation processes. For instance, using AI to analyze manufacturing data can predict deviations before they occur, ensuring continuous compliance and reducing downtime. This proactive approach not only improves efficiency but also ensures higher standards of quality and safety. The integration of AI in these areas could revolutionize how we maintain compliance and streamline operations. Looking forward to more developments in this space.

Srividhya Vaidyanathan

Energy & Supply Chain Executive | Technology, Strategy, Supply Chain | AI Adoption | Strategic Decision Making | Doctoral Candidate | Views are my own

This post underscores a critical opportunity: using generative AI to revolutionize supply chain operations, turning repetitive tasks into strategic enablers. Supply chains thrive on efficiency, yet roles like inventory management and demand forecasting remain manual and reactive. Generative AI can change that. Imagine AI agents dynamically predicting stock needs by analyzing trends, weather, and geopolitics, or adapting to disruptions by rerouting logistics and renegotiating supplier terms. Unlike RPA’s rigid workflows, these agents respond in real time, adding agility and foresight to operations. Declining LLM inference costs ('LLMflation') are lowering adoption barriers, enabling mid-sized firms to deploy advanced tools previously reserved for industry giants. The challenge now is building AI systems that are efficient, transparent, and trustworthy—essential for a field as complex as supply chains. By turning operational headcount into intelligent automation, generative AI is transforming supply chains from reactive to anticipatory networks, unlocking resilience and unprecedented efficiency.
