How do we make sure our systems are healthy and sustainable? Watch our new video on how testing increases our confidence in our systems to find out. #customsoftware #businesssoftwaredevelopment #businesssolution #businesstools
Transcript
The RoleModel Way of Testing is the methods and mindset of software testing that we apply in our projects. We've developed these techniques over hundreds of successful projects that have resulted in sustainable software assets. Automated testing is code written alongside the business logic that executes scenarios and validates whether functionality is correct. It's not a replacement for manual testing, but a way to constantly ensure that the evolving software is healthy and maintains high quality.

Sometimes developers go to war over what kind of test to write or what techniques to use. But they're often missing the forest for the trees by not asking the question that guides us and our thinking: why do we write tests? So we want to answer that question. Writing tests is core to the RoleModel focus of iterative value. Without them, we don't have a solid foundation to build upon. We write tests to increase confidence and provide documentation so that ultimately we can work iteratively with confidence.

Having confidence in your test suite means ensuring that your tests and the overall suite are correct, that they do what they say they are doing. That means they need to be consistent, without intermittent or occasional failures that erode your confidence, and they need to provide a tight feedback loop that's significantly faster than manually testing each change you make to the system.

Why is confidence so important when you're developing software? Well, let's think about what happens when you don't have confidence in your software. You become afraid of it. You get this "don't touch it, it might break" feeling. I've seen this so many times as a consultant: we come into software projects where there are no tests, the software has continued to deteriorate over the years, and it gets to a place where we can't build new features because every time we do something, it breaks. The way you fight against that deterioration, which seems to just naturally happen in software, is to be continually improving the software, refactoring it, and making it better. That's the way you fight against the entropy that happens in systems. And tests enable you to make a change, make something better, then run your test suite and know: did I break anything?

Though correct and consistent should be obvious, they often get so much of the focus that the need for a tight feedback loop gets lost. Receiving quick feedback from tests looks like a developer being able to make a change to a class or a method and get feedback in a few seconds, refactor a subsystem and see the results of those tests in less than a minute, and run the entire test suite in a matter of a few minutes to ensure the change they made is correct.

Automated tests also provide documentation. Code is read 10 times more often than it is written, and well-written tests, like well-written code, provide insight into intent. So the way we write tests at RoleModel documents the system, bridging the gap between the business stakeholders and the technical team. The language used to describe the tests, to describe the expectations of the system, is the language of the business. And the tests themselves then execute that language to prove that the system fulfills the necessary conditions.
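To make that concrete, here's a minimal sketch in Python with pytest of what a test written in the language of the business can look like. The `Invoice` class, its late-fee rule, and the numbers are hypothetical, invented for illustration rather than drawn from any real RoleModel project:

```python
# A minimal sketch of business-language tests, using pytest.
# The Invoice class and its late-fee rule are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Invoice:
    subtotal: float
    days_overdue: int = 0

    def total_due(self) -> float:
        # Assumed business rule for this sketch: invoices more than
        # 30 days overdue accrue a flat 5% late fee.
        if self.days_overdue > 30:
            return round(self.subtotal * 1.05, 2)
        return self.subtotal


# The test names state expectations in the language of the business,
# so a stakeholder can read the suite as documentation.
def test_an_invoice_paid_on_time_owes_only_its_subtotal():
    assert Invoice(subtotal=100.00).total_due() == 100.00


def test_an_invoice_more_than_thirty_days_overdue_accrues_a_late_fee():
    assert Invoice(subtotal=100.00, days_overdue=45).total_due() == 105.00
```

Read as a list of names, a suite like this doubles as a specification the business stakeholders themselves could review.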
To have these tests act as a bridge between the business and the technical teams, they need to be written plainly, simply, and understandably, expressing what the system should do in the language of the business itself. The craft of writing the tests is to ensure that what is described by the language is actually done in the test, so that they become executable proof that the system works as intended. This means that tests are expressive: they don't just describe what they are testing, they describe the why behind it, so that it can be understood in a common language.

Our suite of automated tests allows us to work iteratively with confidence. This is because we have a safety net that ensures we can confidently delete or add code, because we know that we aren't going to break something that already exists. This means that knowledge from the developer and the team working on the project initially is encoded into the software asset, so that they can focus on the part of the problem they are solving at the moment, and it's also easier to ramp up additional developers on the project later. Our software is able to be adjusted to whatever those needs may be with confidence. We can add new features. We can adjust old ones. And we know that everything else still works the way it should.

For that to happen, you need to put care into how you write your tests. You need to apply craftsmanship to them. Some things to think about as you're writing your tests: Look for duplicate tests. Don't test the same feature multiple times; that just makes the suite hard to maintain in the future, and it makes your test suite run slower. Speaking of slowness, watch the overall time that it takes to run your test suite. If it takes an hour to run, your tests just aren't going to get run as much, and you're going to start losing the benefit of them. Also, be continually refactoring tests; they're code as well, and developers need to be looking at them to understand how they work. So put the same care into writing your tests as you do writing the actual features of the application. If you do these things, you're going to have a test suite that instills confidence and provides great documentation, and you and any future developers who work on this project will thank you.

Like the code that we write, we want our tests to embody intention-revealing names in the language of the business. We should look for opportunities to refactor and reuse tests as much as possible, and then ensure that they're always run and correct by having continuous integration and code reviews that look at the tests as much as they do the production code.

So, there are a lot of ways to write automated tests. How do we approach that at RoleModel? Well, we start by writing our tests first. We want to drive functionality from a failing test so we can go through the red-green-refactor cycle of the test-first development process. This also means we start from the outside and work our way in: when writing our tests, we start from the perspective of the end user of the software and work our way into the core business logic of the system.

We follow three steps, and we do them iteratively. The first step is writing a new expectation that the system does not fulfill: we create a test and we watch it fail. Second, we write code that makes the system meet that expectation, that causes the test to pass. Third, we step back, consider the system as a whole, and look at the trade-offs that we've made so far. We ask ourselves: are they correctly tuned? Is this the best system that could possibly meet the expectations that have now been written for it? And we adjust the code if we need to. Because we have the tests in place, we can adjust that code with confidence. We don't have to worry about regressing past behavior or failing to meet expectations that were previously met. A sketch of one turn of this cycle follows below.
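Here is a minimal sketch of one turn of that red-green-refactor cycle, again in Python with pytest. The `ShoppingCart` example is hypothetical and deliberately tiny; it stands in for whatever expectation you are driving out:

```python
# Step 1 (red): write a new expectation the system does not yet fulfill.
# Run the suite and watch this fail before any implementation exists.
def test_cart_total_sums_the_prices_of_its_items():
    cart = ShoppingCart()
    cart.add_item("widget", price=3.00)
    cart.add_item("gadget", price=2.50)
    assert cart.total() == 5.50


# Step 2 (green): write just enough code to meet the expectation
# and make the test pass.
class ShoppingCart:
    def __init__(self) -> None:
        self._prices: list[float] = []

    def add_item(self, name: str, price: float) -> None:
        self._prices.append(price)

    def total(self) -> float:
        return sum(self._prices)


# Step 3 (refactor): step back, weigh the trade-offs, and improve the
# design, for example by storing (name, price) pairs once a feature
# needs item names, re-running the suite after each change to confirm
# nothing regressed.
```

Because the third step happens with the tests already green, every design adjustment is immediately re-verified by the suite.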
Michelangelo is quoted as saying that inside a block of marble is a sculpture waiting to be uncovered; the job of the sculptor is simply to remove the excess and reveal what was already there inside it. For the skilled practitioner of TDD, the software system is similarly revealed by the process of writing these tests and developing the system. Ultimately, using TDD helps us create the best systems possible for our customers, systems that meet their needs of today and give us the flexibility to continue meeting their needs of tomorrow.

So the RoleModel Way of Testing helps us better manage the sustainable software assets that we build with our customers and ensure that we can deliver iterative value. We do this by ensuring that our tests are increasing our confidence and providing documentation.