As with any startup, the excitement of improving and adding to our current product often overshadows more mundane aspects of software development, such as automated software testing. In response to this, here at RJMetrics we have recently been examining our current suite of automated tests and reevaluating our strategy towards testing. There are already lots of great blog posts about unit testing philosophy and best practices, so instead I’d like to share some of our personal experiences navigating the human elements of automated software testing.
- Testing can be a divisive issue. Each developer subscribes to his or her own philosophy of software development, and this includes testing. You will never get everyone to agree on these issues, and you could spend hours debating the goals and merits of testing, so my best advice is to decide upon and document a team philosophy early on. This includes how your team defines basic terms such as "unit test" and "integration test," and classifying the types of tests that are important for your product. You should also clarify the main goals of your tests (to catch bugs, to aid in refactoring, etc.), as this affects how tests are written and what is tested. Once this is in place and everyone is on the same page, you can move forward with more meaningful discussions.
- A smaller number of thoughtful unit tests is infinitely better than a large number of poorly written unit tests. In addition to failing to catch real bugs, bad unit tests break as a result of unrelated code changes. Fixing such tests slows down product development by taking time away from other projects, and the constant breakage takes an even worse psychological toll on the team. Spending minutes or hours debugging a failing unit test is enough to turn the most avid supporter sour on testing. To avoid such tests, everyone should be familiar with unit testing "best practices," and any testing pitfalls specific to your codebase should be addressed as early as possible.
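To make the distinction concrete, here is a minimal sketch of a brittle test next to a focused one. The function and test names (`invoice_total`, `BrittleInvoiceTest`) are hypothetical examples, not code from our product:

```python
import unittest
from unittest.mock import patch

def invoice_total(subtotal, tax_rate):
    """Return the invoice total: subtotal plus tax, rounded to cents."""
    return round(subtotal * (1 + tax_rate), 2)

class BrittleInvoiceTest(unittest.TestCase):
    def test_calls_round(self):
        # Brittle: asserts HOW the result is computed. A harmless
        # refactor (say, switching to Decimal arithmetic) breaks this
        # test even though no real bug was introduced.
        with patch("builtins.round", wraps=round) as mock_round:
            invoice_total(100.0, 0.08)
            mock_round.assert_called_once()

class FocusedInvoiceTest(unittest.TestCase):
    def test_total_includes_tax(self):
        # Focused: asserts WHAT the result is. This survives any
        # refactor that preserves the observable behavior.
        self.assertEqual(invoice_total(100.0, 0.08), 108.0)
```

The brittle version is the kind of test that eats an afternoon when an unrelated change lands; the focused version only fails when the behavior itself regresses.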
- People are more likely to write and run unit tests if it's easy to do. Try to simplify the process as much as possible by providing a collection of helper functions and objects that will allow developers to focus on testing the code at hand. For us, this means providing a set of "stock" objects, ready to be used in a test.
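A "stock" object helper might look something like the sketch below. The names here (`Customer`, `stock_customer`) are illustrative assumptions, not our actual helpers; the point is that each test overrides only the fields it cares about and inherits sensible defaults for the rest:

```python
import unittest
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Customer:
    # Defaults form a valid, fully populated object.
    name: str = "Test Customer"
    plan: str = "basic"
    active: bool = True

def stock_customer(**overrides):
    """Return a valid Customer, letting a test override only the
    fields relevant to the behavior under test."""
    return replace(Customer(), **overrides)

class ChurnTest(unittest.TestCase):
    def test_inactive_customer_is_flagged(self):
        # One line of intent instead of many lines of setup.
        customer = stock_customer(active=False)
        self.assertFalse(customer.active)
        self.assertEqual(customer.plan, "basic")  # default still applies
```

Because the stock object is always valid by construction, a schema change is fixed in one place instead of in every test that builds a `Customer` by hand.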
Bearing in mind the experiences above, we are moving forward to improve and simplify our automated testing. There is still much more for us to learn, and we will periodically reevaluate and keep everyone posted on the success of our testing initiative in catching bugs, refactoring old code, and aiding in new code development.