There is waterfall and there is Agile; and how the twain shall meet.

Update: 18 Apr 2020 -

I have been holding back on this write-up for the last few days because I was not sure it was fully 'done' yet. For an article on iterative development, the irony is not lost on me.

On reading the Waterfall paper in early 2012, on the recommendation of my team-mates, I had the chance to re-evaluate my ideas about the sequential, phased manner in which we developed software at my earlier job. It was illuminating to find that the "waterfall" paper did not espouse a rigid, one-way progression of work through phases; it also squared with our working experience of developing under the "waterfall" methodology. I could not find the original article, but did find another blog article which elucidates the same thoughts in some detail (see here). As a team, we had been practicing SCRUM Agile since 2008.

While the phases left us to develop features in a three-month cycle, we were routinely interrupted to fix issues found in the testing phase, as well as oversights in the documentation handed over to development (post "Analysis and Design"). This was one of those situations where one needs to choose one's battles carefully. With the Manager under pressure to deliver the features in the prescribed time, and the BA and Architect unwilling to accept the presence of gaps in requirements, the only times development could push back on the rework from testing were -

1. Is it a large enough change to impact the current development on your plate?

2. Is it truly a requirement-specification gap, and can that be easily established? This would be more a matter of principle. Such gaps creep into specs because the tech-spec writer is not as close to the code base as a developer. (Some food for thought - why do you think following LEAN and AGILE methodologies easily avoids this problem? If you don't have a ready answer - read on.)

3. Will the code change for a minor issue require a relatively large rework and another full manual regression? If so, this could be negotiated with the BA and the test team :)

Side Note: It is worth pointing out that all three issues can be addressed by a proper CI/CD pipeline.

Notwithstanding these situations, and whether responsible or not, development was bound to fix issues found in testing: by default, any issue found was deemed a developer's mistake or oversight.

As a result, every estimate we made for a development cycle kept enough slack to address issues arising from the testing of our earlier development. Although the Waterfall methodology we followed had no room for work outside the current phase, the practical implementation required the exact opposite. From memory, it had always been clear to development and project management alike that development and testing required close collaboration and a proper feedback loop. Below is close to what we actually followed as a development team. Although in wide usage around the same time, I will keep away from the Spiral model to stay on point.

Above: Iterative waterfall process used in most development efforts

A confusion between framework and methodology is at the core of the conflict between Agile and traditional Project Management practice. Project Management evolved around the Waterfall SDLC methodology, but now attempts to operate with Agile-inspired development frameworks like SCRUM, DAD, SAFe, and Extreme Programming, to name a famous few.

Agile Crystal Framework

Agile DAD Framework

Agile Extreme Programming Framework

Agile SCRUM Framework

Having silo-ed teams work on each phase of development is a telltale sign of Waterfall methodology applied to product development - here each feature (or set of clubbed features) is bound as a project and executed through the specific Waterfall SDLC phases in use.

A close look at the Agile-inspired frameworks and methodologies reveals the fundamental difference in approach - which, as stated, is the source of the conflict. While an honest implementation of the Waterfall methodology is never strictly uni-directional, it still has clearly defined phases. At any given point in time, the development effort is identified as being in a specific phase, until an agreed-upon date (per the execution plan). This leads to an obsession with tracking dates (and other KPIs), and to declaring a project at risk only when it slips on a specific milestone (or KPI-based measure). The catch is that time slippage is a poor indicator of project / feature health. It also surfaces with such latency that there is precious little recourse for corrective action.

In an iterative, incremental methodology, the approach is to provide consistently updated, publicly shared "information radiators" that serve as immediate feedback for immediate corrective action without larger impact. This also consistently bubbles up missed or underestimated risks during initial ceremonies like design, Spikes, and estimation. (Coming soon - Agile information radiators vs Waterfall progress indicators.)

Clearly (as I too can attest), any Project Manager following a sequential model of development (specifically, Waterfall) is led into judgement errors by their traditional monitoring of signals. They either fail to act in time, believing the signal to be of no consequence this early, or veer to the other extreme of an alarmist reaction followed by over-correction. The whole scenario plays out repeatedly, with variations.

On repeated such occurrences, the resulting mutual exasperation of the PM and the development team leads to the PM only asking for end dates. The PM, unfortunately, can relegate themselves to being a mere observer of project execution (continuing to rely on the same unreliable progress indicators). The development team treats other important project activities as being outside its purview (creating a de facto silo) and loses out on feedback about the larger impact of its deliverable on the product. (Coming soon - real-life scenarios of the detrimental effect on job satisfaction due to lack of visibility into the value added by testing work.)

WBS, Product Backlog and the PM's quandary -

Product Backlog - Fundamentally, a story estimate is only that - a best-effort estimate based on expert opinion and the current level of information. Ideally, it is also corrected for known bias. It is tracked often and corrected from immediate feedback. The story at hand needs to be fully identified, with all known blockers removed. The backlog at any given time in the project is neither fully groomed, nor are all tasks / stories fully updated. In fact, that would be a big red anti-pattern flag irrespective of the Agile framework in use. Hopefully I will be able to say more on backlog grooming in another post; at present we run the risk of veering off course.

WBS - The WBS, by contrast, is expected to be fully identified as the sum of its individual tasks. All tasks are to be fully defined (a very impractical demand for large, complex systems), which is inherently difficult for any one team or even a select set of people. The unknown-unknowns are always hiding in plain sight :). Every task is broken down to a granularity that can become actionable work once the design phase is completed.

In this context, when working with Agile development teams, the Waterfall-accustomed PM is misdirected by the belief that the estimate held for each story in a sprint is development's commitment to complete it. If it is missed, that implies (to their mind) a slippage from the commitment made on the deliverable. That a story may be deemed incomplete despite substantial work being done is alien to their frame of reference for project monitoring and control. The story may have a newly identified dependency, or a requirement gap found only after testing. Or some other clause of the agreed-upon "Definition of Done" or "Acceptance Criteria" may not be met. An incomplete story moving to the next sprint certainly affects progress indicators (e.g. traditional SCRUM burn-down / burn-up charts). It is an input to overall progress parameters, but can be incorrectly read as cause for immediate concern. True slippage becomes visible only once the team has feedback on its velocity over multiple sprints, tracking completion to a date later than originally planned.
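To make the velocity point concrete, here is a minimal sketch (with entirely made-up numbers) of how a team might project the remaining sprints from observed velocity, rather than from any single missed story:

```python
# Project remaining sprints from observed velocity rather than from a
# single slipped story. All figures below are hypothetical.

def projected_sprints(remaining_points: float, velocities: list[float]) -> float:
    """Estimate sprints left using the average of past sprint velocities."""
    avg_velocity = sum(velocities) / len(velocities)
    return remaining_points / avg_velocity

# One story slipping a sprint barely moves this forecast; a sustained
# drop in velocity moves it a lot.
velocities = [21, 18, 24, 20]   # story points completed in past sprints
print(projected_sprints(130, velocities))  # roughly six sprints of work remain
```

A single incomplete story changes the sprint's burn-down, but the projection above only shifts meaningfully when the velocity trend itself shifts - which is exactly why it is the better slippage signal.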

Harnessing an important Waterfall tracking tool in Agile-inspired methodologies -

Moving to a backlog tracking method instead of a WBS provides better control and a tighter feedback loop to drive the project. However, we stand to lose the view of dependencies typically visible on a Gantt chart. I have found Gantt and fishbone diagrams to be excellent tools to understand inherent work dependencies and to avoid the pitfalls of stalled / bottle-necked work early in development iterations / sprints. While we do identify dependencies, a Gantt chart highlights critical dependencies, important assumptions, and the critical path(s) along which the team moves forward. This is of great relevance since external dependencies may be time-bound and may push the project's MVP timeline beyond acceptable cost or time. Gantt charts are usually plotted against a specific timeline, but we can still benefit by plotting tasks / stories along a relative timeline. A good reference is listed here.
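As an illustration of the dependency view a Gantt chart gives, the critical path can be computed directly from the dependency graph over a relative timeline (durations in sprints rather than calendar dates). The tasks, durations, and dependencies below are hypothetical:

```python
# Hypothetical sketch: longest-path pass over a dependency DAG to find
# when each task can finish, on a relative (sprint-count) timeline.
from functools import lru_cache

durations = {"auth": 2, "api": 3, "ui": 2, "search": 4, "release": 1}
depends_on = {"api": ["auth"], "ui": ["api"], "search": ["api"],
              "release": ["ui", "search"]}

@lru_cache(maxsize=None)
def earliest_finish(task: str) -> int:
    """Earliest finish of `task`: its duration after its slowest predecessor."""
    preds = depends_on.get(task, [])
    start = max((earliest_finish(p) for p in preds), default=0)
    return start + durations[task]

# The critical path here is auth -> api -> search -> release = 2+3+4+1.
print(earliest_finish("release"))  # 10
```

Even without calendar dates, this surfaces which stories gate the MVP: shortening "ui" changes nothing, while any delay on "search" delays the release one-for-one.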

The traditional Waterfall (phased) approach also pits testing and development into enemy camps, leading to an unhealthy relationship that is detrimental to product feature throughput (more on this in another post). When Waterfall performance standards are applied to a team that runs on an Agile methodology, an often-seen example is the exchange between development and test-automation teams. If the test or automation teams believe their contribution is measured only by errors reported, they are driven to find trivial errors and to test in isolation from their development counterparts. There is no incentive to actually collaborate with developers to find new scenarios, such that the test team adds to the regression suite(s) while developers also program defensively. Although such collaboration makes the best case for overall progress, the testing team's contribution in that setup remains hidden and unappreciated. Conversely, un-monitored test coverage of known use cases across user personas rears its ugly head as production issues. TL;DR: teams fail to optimize their throughput because the existing standards of progress monitoring are KPI-based, encouraging participants to work to KPIs instead of results.

Better computation resources and quicker quality feedback turn the value chain on its head -

The maturing of CI/CD, coupled with lowered computation cost, allows continuous test runs on work in progress (failures are less costly). Over time, developers realized that trying out an idea is the fastest way to reach a solution, rather than conjecturing from limited data or information. The idea of fail-fast could take root as the cost of computation dropped precipitously - and the Waterfall model, which formalized the development process precisely to economize on computation resources, has lost relevance.

In fact, over this period, what we have witnessed is the complete commodification of computation (mainframes to data centers, then cloud computing). Being able to run development code more freely makes the developer's skill the truly critical resource - justifiably also categorized as a potential bottleneck. Viewed through this lens, it is no surprise that most new software development approaches endeavor to address the efficiency of the development resource and the re-usability of developer output. Indeed, the penetration and commodification of data-intensive and computation-heavy projects involving machine learning, data mining, and scaled statistical modelling are all a result of reduced computation cost per unit. Such projects were neither feasible earlier, nor could they have been delivered successfully with a Waterfall approach.

With this as context, it is a small leap of imagination to see the direction of the development practices in vogue today: automation of test and deployment code (DevOps), the ability to run regression tests repeatedly (CI/CD pipelines), and large volumes of data used for aggregation and information extraction (Big Data) - all previously impossible without prohibitively expensive computation.

Side Note: The only exception to the above would be approaches devised for mission-critical or real-time systems - nevertheless, these too have seen some change with the use of simulations (hitherto unavailable).

The rigor of Waterfall needs to be inculcated into the Agile-framework-inspired methodologies. Some of its ideas still have relevance, and many have endured, finding their way into various practices as Agile teams mature their process for specific needs -

1. The Paradox of Plenty is a very real risk that modern methodologies need to guard against. Teams inculcate good habits in harnessing and leveraging the development resource, while taking highly available computation for granted - which hides other performance inefficiencies, such as -

  • less-than-optimal time-space complexity of core algorithms
  • lack of effort at code optimization
  • de-prioritized technical debt and its long-term costs, e.g. fueling poor development practices
  • poorly designed code that scales badly
  • strong platform or environment dependencies, as more services become commodified
  • absence of fault tolerance at higher layers

All of these (and more) lead to brittle code that works well in the usual environments but does not degrade gracefully. The rigor of tracking technical debt and setting aside resources to address it is critical to the long-term success of a product developed using Agile methodologies.
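As a toy illustration of the first bullet above, abundant compute can hide a quadratic algorithm where a linear one costs no extra effort to write:

```python
# Toy illustration: two ways to de-duplicate a list while preserving order.
# Cheap compute masks the quadratic version until inputs grow large.

def dedupe_quadratic(items):
    seen = []
    for x in items:            # 'x not in seen' scans a list: O(n) each time,
        if x not in seen:      # so the loop is O(n^2) overall
            seen.append(x)
    return seen

def dedupe_linear(items):
    seen = set()               # set membership is O(1) on average,
    return [x for x in items   # so the whole pass is O(n)
            if not (x in seen or seen.add(x))]

data = [3, 1, 3, 2, 1]
assert dedupe_quadratic(data) == dedupe_linear(data) == [3, 1, 2]
```

Both pass every test on small data; only a profile under realistic load (or a reviewer who tracks complexity as debt) catches the quadratic one before production does.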

2. POCs (called Spikes in SCRUM) for feasibility, Analysis & Design

The Waterfall methodology formalizes the thought process, helps a team's senior members commit to providing a reference architecture and design, and aids risk assessment. The catch, traditionally, has been that it attempts to capture unknown complexities and demand variations before they are even evident; it attempts to build a solution schematic without being able to fully identify the problem space.

If we do not over-commit or crystallize the outcomes, exercises to build a reference architecture, design, and POCs are directionally invaluable to development work. They also support improved estimation during development cycles. Agile frameworks are malleable to specific team needs; I have found incorporating these exercises into the early sprints of a project to be extremely beneficial. A journey to discover what works for your team can be started from here.

SAFe, as a framework for scaling the Agile approach across an organization, prescribes this more formally as Agile Architecture. In fact, referring to the SAFe site periodically helps cultivate a mindset of validating everyday process decisions.

Although in theory the idea of growing a complex design iteratively sounds great, in practice it carries great risk. Consider, in particular, complex systems where expert knowledge is required. E.g. a search solution chooses to incorporate an outsourced product to enable rapid turnaround on the first few requirements; eventually, it cannot support more involved requirements. By then the plumbing code around the outsourced product has been built - and the team must redo avoidable work. This is but a small example; over time I will write up more of my first-hand experiences.

With an eye on the product road map, evolving requirements, and technical community support, a proper design and architecture is required upfront to mitigate such risks. Two important factors to consider here are -

  • The development team may not have the technical experience, or the long-term vision, for how to evolve the product.
  • The architects and senior management are better placed to see the pitfalls and to seed the initial architecture and design based on their analysis, protecting the team from wasting time on solutions that won't work. Of course, providing visibility and documenting the decisions taken for the team is a critical responsibility of the architects and managers. This is essential to the practice of ANY Agile development methodology.

3. Road-map prioritization and resource feasibility based on team velocity.

"Agile road maps are built to reflect product strategy but also to respond to changes, such as shifts in the competitive landscape, value propositions, and engineering constraints" (ref). This is the primary difference in thought that requires a periodic revaluation of the road map in an agile approach. 

This does not sit well with the tradition of road-map and strategy documents brought out annually by senior members. The product road map and strategy were something teams visited after they were created by the product team, once a year.

A product road-map re-evaluation cadence requires the product backlog to be groomed accordingly. The product-backlog "funnel" visualization is something I came across only in recent years, but I believe it to be a must-use tool for any product development team - freely shared between product, business, and development. I agree wholeheartedly that the product backlog should be groomed following the typical 80-20 rule, and that stories be fleshed out and prioritized to meet the exact road-map requirements of the near future. In fact, the backlog should be ordered largely into near-term, mid-term, and long-term, with each addressed differently.

For a more formal treatment of backlog grooming, the reader could look at "WSJF - Weighted Shortest Job First" (used in SAFe), the Kano Model, or the MoSCoW Model, among other popular tools.

Agile Backlog prioritization "Funnel" 
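As a sketch of the WSJF idea described in SAFe - score each story by cost of delay divided by job size, then work the highest scores first - here it is with made-up stories and numbers:

```python
# Hedged sketch of WSJF ordering: higher cost-of-delay and smaller size
# float a story to the top. Stories and figures below are illustrative only.

def wsjf(cost_of_delay: float, job_size: float) -> float:
    """Weighted Shortest Job First score."""
    return cost_of_delay / job_size

backlog = [
    {"story": "checkout-fix", "cod": 20, "size": 2},
    {"story": "new-report",   "cod": 13, "size": 8},
    {"story": "dark-mode",    "cod": 5,  "size": 3},
]

backlog.sort(key=lambda s: wsjf(s["cod"], s["size"]), reverse=True)
print([s["story"] for s in backlog])  # ['checkout-fix', 'dark-mode', 'new-report']
```

Note how "dark-mode" outranks "new-report" despite a much lower cost of delay - its small size means value lands sooner, which is the whole point of the weighting.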

Product-backlog grooming is actually an excellent place for development's Agile leaders and the project drivers to start the discussion about the approach to development. A WBS dictates deep dives and clear definition of the project's tasks, whereas the backlog will have many "stories" related to the project that are not fully detailed when the work kicks off. This does not leave the PM in a comfortable position, and I suspect you will have to be very well prepared to address this dichotomy - likely the PM will come to you in due time. In these scenarios, it is imperative to remember that you are expected to "respond, not react". You must see your conversation through to the "notional" end of the project - milestone by milestone.

4. Documentation of business requirements and tech specs - living documents!!

It is critical to remember that if your stakeholders (think C-suite) and PM are Waterfall-oriented, they derive clarity on progress from the documents delivered out of the analysis and design phases. They understand the true nature and scope of the work to be undertaken from the details captured in such documents. In all likelihood, they will also mistake the documentation coming out of Agile analysis and design for the final version. As such, you run the risk of them deeming the contents too light on detail to green-light execution. They may also hold on to every goal in the initial documentation as the team's commitment.

This has been the experience of many a development team, including mine. Even when stakeholders are aware of and supportive of Agile methodologies, they cannot be expected to transform in a single day a mindset built over years of executing projects a certain way. This is particularly aggravating for development teams in organizations whose core business is not software-based / driven, where stakeholders need re-education for a proper understanding of software development practices. It is the responsibility of the Engineering Manager - and in their best interest - to introduce early (and repeatedly thereafter) the idea of a living document. The document must be versioned and updated as the situation changes. Having highlighted the need for flexibility in the scope of the deliverable, the Engineering Manager will also need to identify an MVP that can be delivered to production at the earliest, to help business feel secure and to earn an even footing.

In the longer run, business and the PM will come to appreciate the quality of the documentation that comes out of such an exercise, and how it organically records all decisions so they can be revisited to understand the thought process and conditions around them.

I would like to highlight that no methodology prescribes a specific set of documents, but every methodology insists on sufficient documentation. The caveat is that documentation must not become the sole vehicle for information transfer. This is embedded in one of the four values of the Agile philosophy (and thus flows as a principle into any specific methodology implementation): "individuals and interactions over processes and tools". If you think it should be "working software over comprehensive documentation" instead - think again, and do write in the comments. :)

Here is a quick read to get you started on the difference in approach to documentation between the two schools of software development methodology.

Signs that the PM is thinking in Waterfall SDLC when dealing with an Agile development team -

1. Constantly asks for delivery dates.

2. You get to hear about the commitments made by the team and how they were missed.

3. Micromanages each task and gets alarmed at each task slippage.

4. Misuses sprint retrospectives to blame individuals for delays or rework.

5. Uses the daily stand-up ceremony to publicly question technical decisions, or new unknowns not captured during planning.

6. Wants to add new stories or tasks to the current sprint - a common mistake; pushing back is then seen as inflexibility, despite all the talk of agility.

7. Asks for a percentage done on a feature - an innocent question, but one that should not be answered. As an exercise, think about why not - and please share your thoughts in the comments.

8. Questions the meaning and utility of tech debt and related changes, since they add no value to product features.

A cautionary note: re-check your own bias when enforcing process as a leader -

After so many years of practicing Agile development cycles, I still have to be mindful to occasionally self-evaluate my thought process. I list these here to show Agile champions the mental conditioning commonly found, so that they may take steps in advance to assuage such fears -

1. The need for perfection in my subconscious.

2. Fear of criticism - particularly that the criticism may turn out to be valid, with no place to hide. (If interested, read further on "servant leadership" and being publicly self-critical.)

3. Not recognizing that I was taking on too much at once (poor estimation). Future iterations with reader input will improve this article in ways I never could alone (an incompletely defined problem space).

4. Attaching value only to the final product - an imaginary 'ideal' that no one can validate, since IT DOES NOT EXIST! (hardening the design makes the solution inflexible to change).

5. Delaying publication of this article once it met the MVP requirement, even though I had reasonably captured my thoughts on the ground issues teams face when moving to Agile.
