Measurable Software Engineering Best Practices vs. Software Development Life Cycle
I took this picture whilst on a seal-watching boat ride in Norfolk, 2023.


Software engineering is a wonderful ocean to swim in as long as you understand which direction to swim, which tide to accept, and which direction/tide to avoid.

The software engineering practice contains a number of stages, regardless of what tool you build and what company you are working at. There is also a set of good habits that we can inject into any of those stages to gain the best out of it.

This article covers some of the key stages involved in software engineering and a set of good habits that you can follow at each stage.

No practice is good enough if you cannot measure its outcome. Therefore, we will also delve into some of the measurable indicators that you can use to track and monitor the outcome of each stage.

Please note that this is not an exhaustive list. The measurable indicators discussed below give you a good starting point, but they should be adjusted to suit your own requirements.


📋 Requirement gathering

  • Understanding the requirement in detail, in line with the SMART approach (Specific, Measurable, Achievable, Relevant & Time-bound)
  • Communicating with stakeholders to understand the business requirement
  • Prioritising requirements based on business needs
  • Ensuring there are no vague or ambiguous objectives at the end of this phase


Measurable Indicators:

  • The number of high-level requirements & the number of those broken down into actionable deliverables
  • Itemised, grouped task list with their priority
  • Traceability score: how well the requirements map to the itemised task list (a minimal sketch follows this list)
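To make the traceability score concrete, here is a minimal Python sketch. The data structures and the scoring rule (a requirement counts as traced if at least one task references it) are assumptions for illustration, not a standard formula.

```python
# Hypothetical sketch: score how well high-level requirements map to itemised tasks.
# A requirement counts as "traced" if at least one task references its ID.

requirements = ["REQ-1", "REQ-2", "REQ-3"]          # high-level requirements
tasks = [
    {"id": "TASK-1", "requirement": "REQ-1", "priority": "high"},
    {"id": "TASK-2", "requirement": "REQ-1", "priority": "medium"},
    {"id": "TASK-3", "requirement": "REQ-3", "priority": "low"},
]

traced = {task["requirement"] for task in tasks}
traceability_score = len(traced & set(requirements)) / len(requirements)

print(f"Traceability score: {traceability_score:.0%}")  # 67% here, as REQ-2 has no task yet
```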


🖼️ Design

  • System design: architecture (e.g., service-oriented, microservices), infrastructure (e.g., containerisation), performance/scaling (e.g., fault tolerance, MTTR) & security (e.g., OWASP)
  • Database design: architecture (e.g., data redundancy, high availability) & objects (e.g., data normalisation, data partitioning, data sharding)
  • Code change: adhering to a change management process, reusability, modularity and so on


Measurable Indicators:

  • The number of reused modules & the number of newly created modules
  • The number of primary/secondary nodes & pods
  • Resources required to achieve a specific performance metric
  • The number of modules/classes affected & unaffected by the change
  • An impact score for the affected modules/classes (a hypothetical sketch follows this list)
  • Class dependency coverage (the number of child classes extending a parent class)
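As an illustration of an impact score, here is a small Python sketch that derives it from a module dependency map. The module names, the dependency map, and the scoring rule are all hypothetical; adapt them to however your codebase tracks dependencies.

```python
# Hypothetical sketch: estimate the impact of a change from a module dependency map.
# A change "impacts" the changed module itself plus every module that directly depends on it.

all_modules = {"auth", "api", "admin", "billing", "logging"}
dependants = {
    "auth":    ["api", "admin"],   # api and admin depend on auth
    "billing": ["api"],
}

changed = {"auth"}                                   # modules touched by the change
affected = set(changed)
for module in changed:
    affected.update(dependants.get(module, []))

impact_score = len(affected) / len(all_modules)      # fraction of modules affected
print(f"Affected modules: {sorted(affected)}, impact score: {impact_score:.0%}")  # 60%
```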


🛠️ Development

  • Framework-agnostic best practices: SOLID principles (Single responsibility; Open/closed: open to extension, closed to modification; Liskov substitution: a child class must be usable wherever its parent class is expected; Interface segregation; Dependency inversion: depend on abstractions rather than concrete implementations); see the sketch after this list
  • Framework best practices (e.g., dependency injection, tree shaking)
  • Software package management
  • Continuous development & build
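To illustrate dependency inversion (and, by extension, the open/closed principle and dependency injection), here is a minimal Python sketch. The class names and the reporting scenario are made up for the example.

```python
# Minimal sketch of dependency inversion (the "D" in SOLID): ReportService depends on
# the Exporter abstraction rather than a concrete format, so new exporters can be added
# without modifying the service (which also illustrates open/closed).
import json
from abc import ABC, abstractmethod


class Exporter(ABC):
    @abstractmethod
    def export(self, data: dict) -> str:
        ...


class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data)


class CsvExporter(Exporter):
    def export(self, data: dict) -> str:
        return ",".join(f"{key}={value}" for key, value in data.items())


class ReportService:
    def __init__(self, exporter: Exporter):      # the dependency is injected, not hard-coded
        self.exporter = exporter

    def run(self, data: dict) -> str:
        return self.exporter.export(data)


print(ReportService(JsonExporter()).run({"users": 42}))   # {"users": 42}
print(ReportService(CsvExporter()).run({"users": 42}))    # users=42
```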


Measurable Indicators:

  • The number of lines per class file (a small measurement sketch follows this list)
  • The percentage of code review coverage per class/per source
  • The number of successful/unsuccessful development builds
  • The number of package upgrades
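As an example of measuring the first indicator, here is a small Python sketch that flags source files exceeding a line budget. The `src` directory, the `*.py` pattern, and the 300-line threshold are assumptions; substitute your own layout and limits.

```python
# Hypothetical sketch: flag source files whose line count exceeds an agreed budget.
from pathlib import Path

MAX_LINES = 300                                   # assumed team threshold

for path in Path("src").rglob("*.py"):            # assumed project layout
    line_count = sum(1 for _ in path.open(encoding="utf-8"))
    if line_count > MAX_LINES:
        print(f"{path}: {line_count} lines (over the {MAX_LINES}-line budget)")
```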


🧪 Testing

  • Code syntax analysis/linters (e.g., unused variables, unintended missing breaks in switch cases, etc.)
  • Unit testing (a minimal example follows this list)
  • End-to-end testing
  • Stress testing & so on
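Here is a minimal unit-testing sketch using Python's standard unittest module. The `apply_discount` function under test is hypothetical.

```python
# Minimal unit-test sketch using the standard library's unittest module.
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    def test_applies_percentage(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_rejects_invalid_percentage(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```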


Measurable Indicators:

  • The number of passed deliverables (against the original requirement) in the first iteration
  • Passed and failed unit/e2e test cases
  • Usage (load) vs. average response time
  • The number of new issues introduced in the release cycle
  • Defect leakage ratio (the proportion of defects missed during testing and only found after release; a small calculation follows this list)
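Defect leakage ratio is commonly calculated as the defects found after release divided by the total defects found (in testing plus after release). A small sketch with made-up figures:

```python
# Sketch: defect leakage ratio = defects missed in testing / total defects found.
defects_found_in_testing = 46        # example figures, not real data
defects_found_after_release = 4

leakage_ratio = defects_found_after_release / (
    defects_found_in_testing + defects_found_after_release
)
print(f"Defect leakage ratio: {leakage_ratio:.1%}")   # 8.0%
```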


🚀 Deployment

  • Blue-green deployment (i.e., monitor the new (green) version for issues before rerouting traffic from the old (blue) version)
  • Canary deployment (i.e., monitor issues in a targeted/smaller set of users and progressively roll out to a larger audience; a simplified rollout sketch follows this list)
  • Automated continuous integration/deployment
  • Deployment with roll-back
  • Post-deployment health-checks (e.g., average response time)
  • Tagging
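To show the overall shape of a canary rollout with a post-deployment health check and roll-back, here is a deliberately simplified Python sketch. The `route_traffic`, `error_rate`, and `rollback` helpers are placeholders for whatever your load balancer, metrics backend, and deployment tooling actually provide, and the threshold and traffic steps are illustrative.

```python
# Simplified canary rollout sketch: shift traffic in steps, watch a health signal,
# and roll back if the error rate crosses a threshold. The helpers are placeholders.
import time

ERROR_THRESHOLD = 0.02          # assumed acceptable error rate (2%)
STEPS = [5, 25, 50, 100]        # percentage of traffic sent to the new version


def route_traffic(percent: int) -> None:        # placeholder for your load balancer API
    print(f"Routing {percent}% of traffic to the canary")


def error_rate() -> float:                      # placeholder for your metrics backend
    return 0.01


def rollback() -> None:                         # placeholder for your deploy tooling
    print("Error rate too high: rolling back to the previous version")


for percent in STEPS:
    route_traffic(percent)
    time.sleep(1)               # in practice: minutes of observation, not one second
    if error_rate() > ERROR_THRESHOLD:
        rollback()
        break
else:
    print("Canary promoted: 100% of traffic on the new version")
```

In practice, each step would be observed for minutes or hours against several health signals, not a single error-rate sample.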


Measurable Indicators:

  • Deployment success rate
  • MTTD (mean time to deploy)
  • The number of post-deployment defects/issues and the number of users affected by them


❤️ Maintenance

  • Proactive monitoring for issue patterns before they become serious (e.g., slight latency on low load converting to high latency on high load)
  • Reactive monitoring (e.g., a failed health check triggers a message on the team's communication channels; a minimal sketch follows this list)
  • Scaling (e.g., horizontal scaling due to demand)
  • Incident management
  • Technical-debt management
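A minimal reactive-monitoring sketch in Python: poll a health endpoint and raise an alert when it fails to respond. The endpoint URL and the `send_alert` function are placeholders for your actual service and alerting channel (Slack, PagerDuty, etc.).

```python
# Minimal reactive-monitoring sketch: poll a health endpoint and alert on failure.
# The endpoint URL and the alert mechanism are placeholders.
import urllib.request

HEALTH_URL = "https://example.com/health"       # placeholder endpoint


def send_alert(message: str) -> None:           # placeholder for your alerting channel
    print(f"ALERT: {message}")


def check_health() -> None:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as response:
            if response.status != 200:
                send_alert(f"Health check returned {response.status}")
    except Exception as exc:                    # no response at all
        send_alert(f"Health check failed: {exc}")


if __name__ == "__main__":
    check_health()              # in practice, run this on a schedule (cron, k8s probe)
```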


Measurable Indicators:

  • The number of new issues that occurred since the last deployment
  • The number of existing issues reduced since the last deployment
  • The number of issue triggers raised by the monitoring system
  • Mean time to resolution (MTTR) for incidents
  • Average latency from various geographical locations
  • The number of nodes added/removed from the cluster due to demand


I hope you enjoyed the article. I am curious to hear what you think of these good practices and measurable indicators. Feel free to drop a comment below.

♻ Repost if you found this post useful. It means a lot to me. 🙏

I share insights on software engineering topics for your growth.

Be sure to follow me and check my profile 👍

Let's learn and grow together 🚀
