Why I believe the hype around hyperautomation

Just when you’d got to grips with automation, along comes hyperautomation.

The buzzword cynics will think that's a bad joke, a Will Ferrell-style parody of a "top trend to watch!" from yet another thought leader.

I’d usually count myself among them, but on this occasion I’m sold on the potential of what Gartner has described as “the idea that anything that can be automated in an organization, should be automated”.

On the basis of that definition, hyperautomation will certainly take years to fully materialize. But that doesn’t mean that far-sighted data leaders shouldn’t start thinking now about what it might do for them — and here’s why.

Hyperautomation’s true potential lies beyond the low-hanging fruit

I had the opportunity to share my view of the general hyperautomation landscape in an interview with Digital Bulletin earlier this year. Much of the attention today is focused on optimizing away the disconnected, repetitive and labor-intensive processes that tend to grow organically over time in most businesses. These processes exist by the thousands in some companies, draining productivity while being difficult, if not impossible, to garner any intelligence from.

Achieving that would be a great result. Organizations would be able to direct more resources to value generation, and redeploy personnel to more engaging work.

But I believe that’s only the start of what hyperautomation can offer. Once that low-hanging fruit is taken care of, more interesting applications will arise as visibility into operations reaches unprecedented levels of clarity. I’m particularly interested in digital twins.

The near endless opportunities of digital twins

As businesses become increasingly digital and software-based, and connected physical assets more common, an opportunity has arisen to create virtual representations of entire systems (in this case, the system is the business itself). These digital twins are not models or simulations. They're digital counterparts rendered out of real-time data streams and capable — in theory — of showing the system's past, present and possible futures.

In industrial settings this kind of thing is already fairly commonplace, with monitoring and predictive maintenance applications for physical assets and infrastructure. Think airplane engines or transcontinental pipelines.

But expanding that concept to business operations opens up almost endless possibilities. For example, before pressing the button on important strategic decisions, leadership could refine their assumptions based on far-reaching performance insights and forecasts. They could also model various ways to run a particular process to discover the best possible approach. Or, by applying machine learning algorithms to the data, organizations could reveal optimization opportunities at a scale beyond any current analytics application.
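To make the "model various ways to run a process" idea concrete, here is a minimal, purely illustrative sketch. Everything in it is hypothetical: it compares two staffing levels for a made-up task-handling process using a toy first-come-first-served simulation, the kind of what-if question a process-level digital twin could answer with real data behind it.

```python
def simulate_process(arrivals, service_time, n_workers):
    """Toy what-if model: average cycle time (finish minus arrival) for a
    batch of tasks handled first-come-first-served by n_workers workers,
    each taking a fixed service_time per task. All values are hypothetical."""
    free_at = [0.0] * n_workers          # when each worker next becomes free
    cycle_times = []
    for arrival in arrivals:
        worker = min(range(n_workers), key=lambda i: free_at[i])
        start = max(arrival, free_at[worker])   # wait if the worker is busy
        finish = start + service_time
        free_at[worker] = finish
        cycle_times.append(finish - arrival)
    return sum(cycle_times) / len(cycle_times)

# Compare two hypothetical ways of running the same process:
# one task arrives per time unit; each task takes 1.5 time units of work.
arrivals = [float(i) for i in range(20)]
single = simulate_process(arrivals, service_time=1.5, n_workers=1)
double = simulate_process(arrivals, service_time=1.5, n_workers=2)
print(f"avg cycle time, 1 worker:  {single:.2f}")   # queue builds up
print(f"avg cycle time, 2 workers: {double:.2f}")   # no queueing delay
```

A real digital twin would of course be fed by live data streams rather than fixed numbers, but the principle is the same: run the alternative configurations virtually, then change the real process only once the better option is clear.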

Digital twins are getting closer, slowly but surely

That’s the theory. The reality will of course be incredibly complex and probably beyond many companies for some time. You’d need data of the very highest quality, which remains a persistent challenge even for advanced organizations. Across that data and any supporting applications you’d also need a level of standardization and integration not yet seen anywhere in the enterprise world.

Yet the reason hyperautomation is being talked about more and more is that the advancing maturity of digitalization and of data management tools has brought solutions to these challenges much closer. Today it is simpler than ever to work with legacy systems, to integrate platforms and standardize data flows. Data science skills have also become more advanced and some of the techniques more democratized.

If and when those technologies and capabilities mature to the required level, hyperautomation-enabled digital twins will still remain a huge undertaking. Organizations will probably start by building digital twins of individual processes, one by one, before eventually knitting them together to provide more complete coverage.

All of this might sound very technical, and some people may worry that advanced digitization and digital twins could replace jobs. I believe the core purpose of digital twins is the opposite. The key benefit is understanding your company's processes, by defining them in a structured, ideally digital way, and above all creating transparency. That transparency will eventually allow your employees to optimize aspects of their daily work and to develop their job roles positively.

Don’t overlook the soft benefits of cutting-edge analytics

Another potential upside of hyperautomation arises from what might be termed its soft benefits around recruitment and retention.

Data science talent is rare, expensive and not short of career opportunities. These individuals are eager to work on cutting-edge and impactful projects, and yet far too much of their time is spent on tedious struggles with data quality, ingestion, loading, governance and so on. As I told Digital Bulletin, hyperautomation could help here if it enables machine learning algorithms to be applied at massive scale to identify and even correct inaccurate data on their behalf.
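As a minimal sketch of that idea, the snippet below flags suspect values in a data feed using a robust statistical check (a median/MAD-based modified z-score) standing in for whatever learned model an automated data-quality layer would actually run. The data, the threshold and the function name are all hypothetical; the point is simply that flagging, and eventually correcting, bad records can be automated rather than left to data scientists.

```python
def flag_suspect_values(values, threshold=3.5):
    """Return the indices of entries whose modified z-score (based on the
    median and the median absolute deviation, MAD) exceeds the threshold.
    A stand-in for the automated checks a data-quality layer could run;
    median/MAD is used because it is robust to the very outliers it hunts."""
    def median(xs):
        s = sorted(xs)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

    med = median(values)
    mad = median([abs(v - med) for v in values])
    if mad == 0:                     # all values (nearly) identical
        return []
    # 0.6745 rescales MAD to be comparable to a standard deviation
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# A hypothetical sensor feed with one obviously corrupt reading.
readings = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 500.0, 10.0, 9.7, 10.1]
print(flag_suspect_values(readings))   # index of the corrupt reading
```

Scaled up across thousands of pipelines, checks like this are exactly the tedious work that hyperautomation could take off data scientists' plates.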

So, while it might be early days, it could also emerge that hyperautomation and digital twins become a powerful tool for recruiting and retaining data science talent — if (and it’s a big if) you can formulate a convincing vision for what is likely to be a multi-year effort.
