Earlier this week, Uncle Bob stirred up the TDD hornet's nest again, trying to walk the fine line between RAD and pure TDD with heavy use of IoC containers. In one of the many responses to the Twitter debate, Jimmy Bogard replied that you should use testing methodologies "When it provides value … it depends." Since I am speaking about Unit Testing in SharePoint at the upcoming VS Live events, I figured I should put in my two cents as well.
In general, I tend to agree with Jimmy's pragmatic approach. I'm not a test-first zealot by any means, and I don't think that achieving 100% code coverage is a panacea. Indeed, even if you do have 100% code coverage, there can be significant business rules that you aren't testing but should be. Conversely, your code may well contain exception handling, input validation, data binding, and similar plumbing where spending time writing tests only decreases your overall ROI.
When designing systems, I tend to use the following rules of thumb to decide when to write coded tests for my solutions. This list is not exhaustive and, as always, every rule has its exceptions.
If it’s a core piece of business logic, it should have a test.
When building systems, there will be a number of cases where the business rules assert that under certain conditions something specific should happen. In those cases, I recommend writing a unit test named after the business-defined rule. For example, if the user should be notified when a customer's age drops below an acceptable minimum, I would name the test something along the lines of Customer_InvalidWhenAgeBelow18. This way we document the system requirements through the tests and have a quick way of validating our logic directly with the stakeholders to make sure our tests assert the correct things. Once we are done, we can give the test report to the customer to show them that all of their assertions hold before handing off the code.
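As a minimal sketch of this naming convention (in Python rather than MSTest/NUnit, with a hypothetical Customer class and an assumed minimum age of 18 taken from the example above):

```python
MINIMUM_AGE = 18  # assumed business minimum, per the rule in the example

class Customer:
    """Hypothetical domain object for illustration."""
    def __init__(self, age):
        self.age = age

    def is_valid(self):
        return self.age >= MINIMUM_AGE

# The test name mirrors the business-defined rule, so the test report
# reads like a checklist of requirements the stakeholders can verify.
def test_Customer_InvalidWhenAgeBelow18():
    assert not Customer(age=17).is_valid()

def test_Customer_ValidWhenAgeAtLeast18():
    assert Customer(age=18).is_valid()
```

The point is less the assertion than the name: a stakeholder who never reads code can still scan the list of test names and confirm the rules are the ones they asked for.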
If you hit F5 to debug an issue more than 5 times, it would have been quicker to write an automated test for it.
On most systems that I work with, debugging an issue can be quite time consuming. Just navigating the system to the point where you want to test often involves a number of steps, including logging into the system, creating a new record, setting the test values, submitting them, and checking the results. If you take the time to write a test harness for these situations, you can bypass many of the manual steps and exercise just the logic you need in isolation. Additionally, if you take the appropriate time up front to set up some test objects (fakes or mocks), they can be reused for similar tests in the future, further reducing your future testing development time and effort.
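To illustrate, here is a sketch (in Python, with entirely hypothetical names) of a reusable fake standing in for the data store, so the business operation can be exercised without logging in, navigating pages, or creating records by hand:

```python
class FakeCustomerRepository:
    """Reusable fake in place of the real data store; one instance of this
    class can serve many tests that need pre-seeded customer records."""
    def __init__(self, customers):
        self._customers = {c["id"]: c for c in customers}

    def get_by_id(self, customer_id):
        return self._customers.get(customer_id)

def apply_discount(repository, customer_id, rate):
    """Hypothetical business operation under test; it only needs the
    repository, not the rest of the running system."""
    customer = repository.get_by_id(customer_id)
    if customer is None:
        raise KeyError(customer_id)
    return round(customer["balance"] * (1 - rate), 2)

# Exercise just the logic in isolation -- no UI, no database, no login.
fake = FakeCustomerRepository([{"id": 1, "balance": 100.0}])
assert apply_discount(fake, 1, 0.10) == 90.0
```

Each manual debugging pass through the UI might take minutes; once the fake exists, re-running this check takes milliseconds, which is why the five-F5 threshold pays for itself quickly.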
If there’s a complex calculation that you are automating, include tests for it.
Even the most thorough coder can make simple math errors when coding calculations. Flipping a "+/-" check or evaluating a complex logical truth table can be a brain-twisting task at times, and a simple error can ripple through the system causing unexpected results. In many cases when automating calculations, you know up front what some of the expected input and output values are. Write specific tests for them up front and you will be well rewarded down the road. Better yet, write one test, provide a list (in Excel/XML/etc.) of the valid inputs and outputs, and use a data-driven approach to evaluate the various permutations of the values.
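A sketch of that data-driven shape (in Python; the calculation and its table are invented for illustration, and in practice the table would live in an Excel/CSV/XML file maintained with the stakeholders rather than inline):

```python
import csv
import io

def shipping_cost(weight_kg, express):
    """Hypothetical calculation under test: flat fee plus per-kilogram
    rate, with a surcharge multiplier for express delivery."""
    base = 5.0 + 2.0 * weight_kg
    return round(base * 1.5 if express else base, 2)

# Stand-in for the external data file of known inputs and outputs.
CASES = """weight_kg,express,expected
1,0,7.00
1,1,10.50
10,0,25.00
10,1,37.50
"""

def test_shipping_cost_data_driven():
    # One test body, many rows: each permutation is checked in turn.
    for row in csv.DictReader(io.StringIO(CASES)):
        actual = shipping_cost(float(row["weight_kg"]), row["express"] == "1")
        expected = float(row["expected"])
        assert actual == expected, f"{row}: got {actual}"
```

Adding a new permutation then means adding a row to the file, not writing another test method.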
If there is a user detected bug in logic, write a test before fixing the bug.
All too often, I find my developers trying to debug an issue, thinking they've found the solution, only to realize that they actually located a different bug and never fixed the true defect. When writing tests, you should aim to write them so that they fail first and only pass once you've corrected the underlying defect. I've also seen tests that appeared to pass even when their assertions failed; the author never knew, because the test swallowed the failure exceptions and so could never fail. By writing your test to fail first and then fixing the implementation, you are assured that your fix solves the problem at hand, and you gain a built-in regression test for future changes.
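Both points can be sketched in a few lines (Python again; the function, the bug, and the ticket number are all hypothetical):

```python
def full_name(first, last):
    # Fixed implementation. Assume the reported bug was that the original
    # code returned the names in reverse order: f"{last} {first}".
    return f"{first} {last}"

def test_full_name_regression_bug_1234():
    # Hypothetical ticket number. Run this test BEFORE applying the fix
    # and watch it fail; only then does a green run prove the fix
    # addressed this defect and not some other bug found along the way.
    assert full_name("Ada", "Lovelace") == "Ada Lovelace"

# The anti-pattern from the paragraph above: swallowing the failure
# exception means this "test" passes no matter what the code does.
def bad_test_always_passes():
    try:
        assert full_name("Ada", "Lovelace") == "WRONG"
    except AssertionError:
        pass  # failure swallowed -- the test can never fail
```

The second function looks green in every test run, which is exactly why it is worthless: a test that cannot fail verifies nothing.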
Often, when trying to get teams to adopt a testing approach, I have to work to convince them that the tests are worth the time and effort. I find resistance particularly from teams that follow the "just get 'er done" approach. In the consulting world, we often feel pressure to produce the requested system at the expense of tests, which are typically not included as a billable deliverable. However, a test suite proves its value the first time it catches a regression defect before the product ships, saving the cost that shipping with the defect would have carried. Following this more pragmatic approach and remembering these rules of thumb can provide a sense of safety when refactoring and increase your agility and velocity over time when maintaining systems.
Do you have additional rules of thumb that help to guide you? Let me know what you Thinq.