Daniel Moka’s Post

I help you master Test-Driven Development (TDD)

If you catch a bug while coding, it will cost you $80.
If you catch a bug in your CI pipeline, it will cost you $240.
If you catch a bug in your QA process, it will cost you $960.
If you catch a bug in production, it will cost you $7,600.
If your client catches your bug, it will cost you your invaluable reputation.

------

Join 25,000+ software crafters: craftbettersoftware.com

sushil bharwani

Product Design | UI Architecture | UI Development | Leadership

23h

I agree that the cost of a bug fix increases depending on how late you find it; I just want to understand where these dollar amounts are coming from.

Robert Batusek

organization consultant | agile coach | CTO | Certified LeSS Trainer

1d

I liked this chart a lot ~20 years ago, but today I think it has one flaw. It assumes that your "QA process" happens after coding, which is not true when you use TDD and other modern technical practices like SBE or Trunk-Based Development. Your quality assurance happens together with the coding. The same is true for production: if you use modern practices like feature flags combined with observability, the cost of finding the defect can actually be quite low. Daniel Moka I get your point and agree with it - we should find defects as soon as possible by shortening the feedback loop. However, I think we should find a better way to visualize this.

Daniel Bartley

Data-oriented business solutions. Translates documentation into code and reverse docs-to-code.

1d

Can we reproduce your calculations? Is this data publicly available so we can adjust for inflation/exchange rates? Do the numbers vary by industry?

Vincent Wijnen

Automation Consultant

13h

I think it is bizarre that people still use this nonsensical rule and data. So a bug while coding costs you 80 dollars? Based on what? I have made so many bugs in my code that I could fix in 30 seconds; that number is nonsense. And if you lose reputation when the client finds a bug, it means you didn't manage expectations, didn't build in fail-safes, forgot proper monitoring and logging, and stopped thinking after the software went to production. Seriously, we always answer everything with 'it depends' (BECAUSE IT ALWAYS DOES!), but when you start slinging graphs it is suddenly the truth? Can we wake up and stop handing people in power things they can use to annoy us with? So really, what are you selling?

Jesse Braddock, JD

Engineering Leadership | FullStack, Data, Analytics, and Product Engineering | Operating at the Intersection of AI/ML, UI/UX, Product, and Data Science

22h

I'd like to align this post with the general 1:10:100 heuristic in quality management, which illustrates how cost escalates during each lifecycle stage of software: an activity that costs $1 in pre-production may cost $10 during development and $100 once in production. Exact figures can vary, and I've seen the 1:10:100 rule taught as 1:40 or 1:400 (agile teams may see numbers on the lower end, while waterfall teams will likely see higher costs). Whatever the exact numbers are, the underlying principle remains: addressing issues early significantly reduces costs and prevents larger problems down the line. The admonition is empirically based and clear: when possible, don't defer activity that can be done in pre-production to production and beyond.
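The 1:10:100 heuristic above can be sketched in a few lines. This is an illustrative calculation only: the base cost, the stage names, and the 10x multiplier are assumptions of the heuristic itself, not measured data.

```python
# Sketch of the 1:10:100 heuristic: the cost of fixing a defect is assumed
# to multiply by a constant factor (here 10x) at each later lifecycle stage.
def escalated_costs(base_cost: float, stages: list[str], factor: float = 10.0) -> dict[str, float]:
    """Return the heuristic fix cost per stage, compounding by `factor`."""
    return {stage: base_cost * factor**i for i, stage in enumerate(stages)}

costs = escalated_costs(1.0, ["pre-production", "development", "production"])
print(costs)  # {'pre-production': 1.0, 'development': 10.0, 'production': 100.0}
```

Swapping the factor to 40 reproduces the 1:40 variant mentioned above; the point of the exercise is that the ratio, not the dollar figure, carries the argument.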

Deepak P.

Principal Software Engineer @ Cyble Inc. | Go | Node.js | Rust | ReactJS | microservices | Cyber Security | Threat intelligence | Continuous Engineering | Agile

1d

Interesting, how did you calculate those costs?

Mateusz Szymczyk

Independent DevOps Consultant

47m

It's funny to read the comments. Developers and QAs say "Very true," while people with experience in other areas question the presented numbers as made up. :D The most valid comment I found is this: "With DevOps, good observability, continuous deployment, and testing in production, that graph shows its age." (cc Stuart Crocker)

Daniel Moka While I do think there must be some truth in your statements, is there any scientific proof for your statements and numbers? What’s backing your statements? Anyone can make claims and throw around statements and numbers without any sound basis…

Benjamin Hummel

Helping dev teams build better software | CTO & Co-Founder @ CQSE GmbH

1d

Daniel, I agree with the general goal of shifting left and catching issues early on. But I really dislike the way you present this here. There are studies supporting increased cost when fixing bugs later, but none has the precision you suggest (like exact factors or dollar amounts). Similarly, there is no support that cost rises exponentially, as suggested by your graph. It might be more expensive, but likely not exponentially. Finally, a lot of those studies are old. With modern development approaches and the ability to push a commit through QA into production within hours (if not minutes), a lot of the assumptions from older research just don't hold anymore. If you have studies that support those numbers that I am not aware of (and please, not tool vendors' marketing fluff), I would be keen to learn. But the way this is presented here does more harm than good to our profession. We are software *engineers*, not software marketers.

Edwin Siebrasse

Test Automation Specialist at Sogeti part of Capgemini

3h

It looks like this picture is OK. For many years there has been a pyramid of cost increase that says the cost grows 10x per phase. It was empirical, but I haven't seen an update on it. If my memory serves me well, the phases were: requirements; design; coding; testing; realization; maintenance. They then argued that the 100,000x cost increase between requirements and maintenance is the ratio between an eraser and a sledgehammer.
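The 100,000x figure in the pyramid above follows directly from compounding: with six phases and one 10x jump per phase boundary, there are five jumps from requirements to maintenance. A quick check, assuming the flat 10x multiplier (which is the heuristic itself, not measured data):

```python
# Check the 10x-per-phase cost pyramid: six phases, one 10x jump per
# phase boundary, so five jumps from the first phase to the last.
phases = ["requirements", "design", "coding", "testing", "realization", "maintenance"]
factor = 10
ratio = factor ** (len(phases) - 1)
print(ratio)  # 100000
```

So the eraser-to-sledgehammer ratio is 10^5, consistent with the 100,000x claim.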
