LTG, License To Grow
Simplification studies by author

The Legacy Paradox: Simplification increases complexity (temporarily)

Modern challenger companies are agile: they serve on demand, with no strings attached, and continuously improve their ability to scale core capabilities efficiently, providing a dynamic mix of products and services.

For incumbent companies to beat, or even match, challenger companies on agility, they must un-complicate flows and systems so they can change processes, systems and value chains faster.

Therefore, most, if not all, incumbent companies today have a simplification agenda aimed at killing legacy processes and applications and employing new, simpler processes and systems, to become more agile and respond faster to changes in business demand. To grow.

This is particularly true for financial services, where an almost monopolistic architectural imprint of mainframe vendors has caused an industry-wide legacy bias and a universal simplification agenda.

This article addresses the simplification challenges from a bank’s perspective, but aspects of both the diagnosis and the solution are universally applicable.

Ambidexterity and rapid change

All bank CTOs and CIOs are under fire for legacy consumption and legacy reliance, and that may not be entirely fair: the problem has been brewing for 20-30 years, and it stems from the same urge to consolidate and achieve scaling advantages that shaped the systems built 40 years ago, the same urge we see today in, for example, the drive towards centralization in cloud technologies.

Banks need a different, ambidextrous and much more agile profile: one that ramps up capabilities and services to serve the next generations of customers, while retaining the capabilities to simultaneously serve current generations.

Real-time banking means rapid change. For banks with growth ambitions, this means discharging legacy dependencies and simplifying system portfolios, with the ultimate goal of becoming agile and being able to rapidly adapt the product portfolio to the needs of new generations of consumers.

Anatomy of the Simplification journey

A successful simplification journey starts with the individual bank ‘cleaning up’. This means pruning the product portfolio, eliminating dormant and complex options with questionable profitability, and selling, sourcing or cancelling overly complex and expensive individually crafted products and contracts with terms and conditions which are demanding to service.

The key activity when initiating the simplification journey is anchoring rock-solid alignment with sponsor, management and board on expectations regarding duration, deliveries, investment level, capacity needs and focus over time.

That is, unless we want disruption to become our new best friend. The figure sequence below illustrates why understanding this can mean the difference between success and failure in legacy replacements; an explanation follows.


Figure 1: Three simplification journey perspectives (Expected, Actual and Execution)

In banks, products, services and their relationships with customers and partners are governed by processes and controls, and the governance is enabled and automated by systems, encapsulated in the infrastructure of the individual bank.  

As banks have grown bigger over time and business needs and product requirements have evolved, systems have evolved in a way which mirrors the evolution of business, except for one thing:

Where customers and employees disappear or retire as part of a natural cycle, system code persists and must be manually removed, initially function by function, ultimately system by system. Failure to do this will increase the cost of ownership exponentially.

The role of Simplification

The objective of the simplification journey is for banks to simplify by replacing accumulated complexity in the legacy application landscape with lean, structurally simpler modern solutions. In the risk-managed environment of banks, this replacement process is executed in a controlled, stepwise manner, from pre-program product clean-up through implementation and the following period of coexistence, allowing for proper stabilization and reconciliation prior to decommissioning of the legacy systems and data.

In a ‘safe’ replacement process, the period of co-existence starts as early as possible, preferably already during implementation, and ends with decommissioning, allowing for proper migration, reconciliation and close-down of the legacy systems, in control and in accordance with accounting and audit requirements.

The cost of the simplification journey is primarily driven by the need for program delivery capacity, and increased complexity increases duration, cost, or both. Differences between estimated and actual cost for the replacement of legacy systems have been seen to exceed a factor of two, which makes the simplification journey appear risky, counterintuitive and hard to want at first sight.

However, experience has taught us that for ageing legacy system portfolios there are no fast tracks, no shortcuts and very limited copy-paste opportunities. In other words: no free lunches. Figure 1 pretty much shows it as it is: cost is highly correlated with complexity, which means there is a high probability that funding beyond the initial estimates will be required, even when execution is properly governed. It is tempting to budget after the ‘expected’ linear journey in Figure 1, but the real cost will follow a pattern closer to the ‘actual’ journey, with co-existence causing cost to expand and then contract as decommissioning gets going.

The lesson here is that to simplify, we must endure higher complexity, at least temporarily. Lack of transparency into this dynamic can get management teams in trouble, unless they make sure to explain and remind sponsors and stakeholders about the particular complexity dynamics of the journey, reflected in the curve.
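
To make the shape of the ‘actual’ curve tangible, here is a minimal toy model of the spend profile, assuming a build phase, a co-existence phase where both estates run in parallel, and a post-decommissioning steady state. All cost levels and phase lengths are illustrative assumptions, not figures from the experience referenced above.

```python
# Toy model of the cost dynamics described above (all figures are illustrative assumptions).
# Phases: build -> co-existence (old and new estates run in parallel) -> after decommissioning.

BASE_RUN_COST = 100               # assumed quarterly run cost of the legacy estate
NEW_RUN_COST = 60                 # assumed quarterly run cost of the simplified estate
PROGRAM_COST = 40                 # assumed quarterly program delivery cost while the program runs
BUILD, COEXIST, AFTER = 4, 6, 4   # assumed phase lengths in quarters

def actual_cost(quarter: int) -> int:
    """Quarterly cost on the 'actual' journey: spend expands during co-existence."""
    if quarter < BUILD:                    # building the new solution next to the old one
        return BASE_RUN_COST + PROGRAM_COST
    if quarter < BUILD + COEXIST:          # co-existence: both estates run, plus residual change
        return BASE_RUN_COST + NEW_RUN_COST + PROGRAM_COST
    return NEW_RUN_COST                    # after decommissioning: the lower steady state

quarters = BUILD + COEXIST + AFTER
actual = [actual_cost(q) for q in range(quarters)]
# The 'expected' budget simply interpolates linearly from the old to the new steady state.
expected = [BASE_RUN_COST + (NEW_RUN_COST - BASE_RUN_COST) * q / (quarters - 1)
            for q in range(quarters)]

print("quarter  expected  actual")
for q in range(quarters):
    print(f"{q:7d}  {expected[q]:8.0f}  {actual[q]:6d}")
print(f"totals: expected {sum(expected):.0f} vs actual {sum(actual)}")
```

Even in this deliberately simple sketch, the cumulative actual spend lands well above the linear ‘expected’ budget, and the only lever that brings it back down is ending co-existence.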

Why is it so?

The linear intuition reflected in the ‘Expected’ scenario above seduces many inexperienced teams into underestimating the impact of the journey dynamics.

Simply put, complexity increases exponentially as the journey progresses; the example below illustrates why. Before decommissioning can commence, the new solution must be stabilized, and reconciliations can then be used to verify that differences between the new and the existing solution are in control (within acceptable limits).


Figure 2: The mechanism driving increasing complexity

Deploying the new solution adds technical complexity, and migrating data adds operational complexity. Reconciliation differences between the new and the old solution must be in control by the end of the stabilization period; if they are not, decommissioning cannot start.
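
As a minimal sketch of what such a reconciliation can look like, the example below compares account balances exported from the legacy and the new solution and flags any difference beyond a small tolerance. The account numbers, balances and tolerance are hypothetical; real reconciliations cover far more than balances and their limits come from finance and audit.

```python
from decimal import Decimal

# Illustrative reconciliation of account balances between the legacy and the new core system.
# The per-account tolerance is a hypothetical value, not an actual audit limit.
TOLERANCE = Decimal("0.01")

def reconcile(legacy: dict[str, Decimal], new: dict[str, Decimal]) -> list[str]:
    """Return a list of findings; an empty list means the differences are in control."""
    findings = []
    for account in sorted(set(legacy) | set(new)):
        if account not in new:
            findings.append(f"{account}: missing in new solution")
        elif account not in legacy:
            findings.append(f"{account}: missing in legacy extract")
        elif abs(legacy[account] - new[account]) > TOLERANCE:
            findings.append(f"{account}: differs by {legacy[account] - new[account]}")
    return findings

legacy_balances = {"1001": Decimal("250.00"), "1002": Decimal("99.95")}
new_balances    = {"1001": Decimal("250.00"), "1002": Decimal("99.90"), "1003": Decimal("0.00")}

issues = reconcile(legacy_balances, new_balances)
print("in control" if not issues else "\n".join(issues))
```

The point of the sketch is the gate it implements: as long as the findings list is not empty, the old system cannot be switched off, and both estates keep running.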

The start of the co-existence period marks the beginning of the most costly phase of the program, where the cost of operations more than doubles while residual development and change keep pushing costs up. The goal at this stage is to complete decommissioning, cutting away complexity and reducing the cost of operations to the new, lower steady-state level.

What to do about it

The probability of increasing complexity is asymptotic to 100%. It is a given, and rather than fight this reality, it pays off (big time) to work with it, respect the underlying logic and find effective ways to reduce the duration of co-existence.

There are essentially two options for reducing the duration. One is to reduce scope, by re-prioritizing and cutting away deliveries; the other is to chop up scope, where possible, and complete co-existence and decommissioning in smaller, parallel increments. Under capacity constraints, only the first option, reducing scope, is available.

In reality, what often happens is the application of budget constraints, typically leading to outsourcing- or offshoring-driven cost-down experiments. The mismatch of long-term solutions to short-term problems in these experiments almost always leads to misaligned incentives and frustration in the onboarded organization, on top of the ‘fight or flight’ responses, demotivation and rapid decay of governance in the old organization.

The main take-aways here are to recognize increasing complexity as a certain key challenge with the potential to disrupt any initiative, to mobilize all available forces to reduce the duration of co-existence, and to be mindful of the unwelcome side-effects of budget constraints.

To Cap or Not to Cap

Now, there is an approach which can be applied to increase execution certainty. It involves capping complexity, and it comes with a requirement to dedicate teams to end-to-end delivery of individual initiatives.

Complexity Capping is required to avoid boiling the ocean by putting too many things into motion too quickly. Capping is particularly effective in situations where aggressive funding is used to increase velocity by ramping up capacity at maximum speed. In these situations, capping allows individual teams to form, storm and norm, and completing end-to-end deliveries generates a feeling of accomplishment, bolsters morale and increases motivation.

Without Complexity Capping, complacency overhead builds as the organization struggles to keep up with the speed of onboarding, essentially leaving it stalling, unable to utilize the increasing capacity, and sending complexity, and with it cost per unit of output, through the roof.
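
A stylized model of this effect, assuming that each initiative put in motion costs every team a fixed slice of capacity in coordination overhead, shows why throughput peaks at a moderate cap and collapses when too much is started at once. The team count and overhead factor are illustrative assumptions, not measurements.

```python
# Stylized throughput model: every initiative in motion adds coordination overhead to each team,
# so effective output peaks and then collapses as more work is started (assumed linear drag).

TEAMS = 10                        # assumed number of delivery teams
OVERHEAD_PER_INITIATIVE = 0.08    # assumed share of team capacity lost per concurrent initiative

def throughput(wip: int) -> float:
    """Initiatives completed per period for a given number of initiatives in motion."""
    effective_capacity = max(0.0, 1.0 - OVERHEAD_PER_INITIATIVE * wip)   # per team
    return min(wip, TEAMS) * effective_capacity

for wip in (2, 4, 6, 8, 12, 16):
    print(f"initiatives in motion: {wip:2d} -> throughput {throughput(wip):.2f}")
# Under these assumptions, throughput peaks around a moderate cap and collapses
# when far more work is started than the teams can absorb.
```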

This is exacerbated by the Agile fallacy: advocating hard for the ‘management is overhead’ and ‘reduce capacity for management’ agendas while complexity is increasing exponentially.


Figure 3: Illustrated effect of complexity capping

Bottom line: cap complexity to increase throughput and avoid overloading teams, keeping them productive for longer and with higher cumulative output, and avoid falling prey to the Agile fallacy.

Complexify to Simplify

Simplification requires temporarily enduring increased complexity in order to reach the point where decommissioning can start and actual simplification begins.

Without decommissioning, total cost will increase exponentially and indefinitely in the co-existence period. To mitigate these cost increases, decommissioning must be started, and the overlap between the new and the old system (the co-existence period) must be minimized.

In the best-case scenario, only marginal cost reductions will materialize from simplification in the short to medium time horizon. Seen over longer periods of time, the combination of simplification and decommissioning is the only way to drive business growth without exponentially increasing costs. In the worst-case scenario, decommissioning will be down-prioritized and delayed, simplification will have no effect on complexity, and total cost will increase exponentially.

The last point here is that simplification is a license to grow; properly executed, it is merely a hurdle to overcome in a growth journey, but it will add complexity and cost temporarily in the co-existence period. Minimizing the co-existence period is key to achieving the operational and economic benefits of simplification (and to limiting the temporary complexification).

