I think that one of the biggest things that developers – myself included – could do better is testing. I’m not necessarily talking about just writing unit tests, or just doing usability tests, but the general act of testing.
That is to say, perhaps it includes all of the above, perhaps it includes providing automated and manual testing for only part of an application, or perhaps it includes something else entirely.
Whatever the case is, testing is important. I’m not afraid to admit that it’s arguably my weakest aspect when it comes to working on projects for myself or for others. To be clear, this isn’t to say that I don’t test (because I do), but that I have room to improve (because I do), and I think we all do.
In fact, I think that there’s something psychological about developers and testing; otherwise, why would there be so many books, articles, and tools on the topic?
But here’s what I think: Developers get into a weird mindset when they’re working on a project and think that a minimal level of testing is enough – not because we’re malicious or lazy, but because we think we know how the users, and the context in which our code runs, will respond.
But we don’t, and this has been demonstrated time and time again.
To that end, there are a number of ways to improve the way in which we test our work. And if you’re not testing at all, then one of the easiest (and even one of the most complete) ways to go about testing your work is through the use of a software test plan.
What Is a Software Test Plan?
Ask 10 different developers (no binary jokes here, please :) and you’ll probably get 10 different answers, but generally speaking I think of a software test plan as any documentation that contains a list of steps that need to be evaluated during the alpha and beta stages of software development.
From the outset, the document should be based on the requirements that were provided when the project began, and it should be updated throughout the testing process as issues that were not previously documented are caught.
But this carries an implication: Someone else is working with you on testing the project.
Black Box Testing
One of my preferred ways to actually test a project is to have someone who is a non-developer – or at least someone who is detached enough from the codebase – run through scenarios for using the project.
This can be someone internal to the team, a friend, and so on. It doesn’t matter – at the very least, it needs to be someone who did not contribute to the codebase, who can work through the requirements and a set of user scenarios (or user stories), and who can easily voice their feedback and have it welcomed (that last piece is key).
The key here is twofold:
- You gather feedback from someone who is removed enough from the work to offer valuable insight as to how they see it (not how you see it).
- You get enough feedback to add, modify, check off, or improve your test plan.
If you go into this looking for ways to correct the people using your work, you’re not having the work tested properly.
After you’ve completed an initial round of testing, it’s important to review the feedback that came back. If bugs, errors, or even points of confusion were reported, clarify them, work to make the features easier to discover and use (or whatever the plan called for), and resolve any outstanding bugs.
Next, rinse, repeat, and do the same until there’s very little left to test – at least for a strong 1.0.
What Does a Test Plan Look Like?
Honestly, there’s no set template for this. I’ve seen things ranging from complex spreadsheets to simple printed documents that include checkboxes with labels to mark whether or not something works as expected.
But does this matter?
I’d say it only matters insofar as two conditions are met:
- It’s easy for the user or the test participant to understand and to work with,
- It’s easy to update for the next iteration of the tests.
Granted, you can make the test as simple or as involved as you’d like, depending on the complexity of the system, but the ultimate goal is to make sure that users can not only make sense of what’s in front of them, but are also capable of working through the items in the test plan.
For one example, with a recent project, I broke down my test plans for WordPress plugins into the following sections:
- Installation and Activation
- Plugin Settings
- Individual Post (or Custom Post Type) settings
From there, I then introduce sections for each of the elements on the individual post or CPT screens.
Of course, this is but one example.
I can’t provide a template for something as varied as plugins, but that’s not the point: The point is that we should be providing an exhaustive list of tasks the plugin should perform that allows us to determine whether or not each feature works and, if not, the steps required to reproduce the problem.
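To make that concrete, here’s a hedged sketch of what a checklist-style plan might look like for the sections above. The specific items are hypothetical – not taken from any actual plugin – and would be replaced with tasks drawn from your project’s requirements:

```markdown
## Installation and Activation
- [ ] Plugin installs from a ZIP file without errors
- [ ] Plugin activates without notices or warnings
- [ ] Deactivating and reactivating preserves existing settings

## Plugin Settings
- [ ] Settings page appears under the expected admin menu
- [ ] Each option persists after saving and reloading the page
- [ ] Invalid input is rejected with a clear message

## Individual Post (or Custom Post Type) Settings
- [ ] Meta box appears on the post edit screen
- [ ] Each field saves with the post and survives an update
- [ ] Steps to reproduce are recorded next to any item that fails
```

However it’s formatted, each unchecked item should leave room for the tester to note what happened, since those notes become the reproduction steps for the next round of fixes.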
What About WordPress Products?
So inevitably, the question will arise: how do you create software test plans for products that are going to be released to the masses?
I mean, it’s one thing to be working one-on-one with a client and have a small team available to evaluate your work; however, it’s a completely different beast to release a product to the masses.
Here’s the thing: In order to get people who will provide valuable feedback and testing for a theme or a plugin within the context of WordPress, you need to find people who are at least familiar with WordPress; otherwise, it’s all the same to them – there’s no distinguishing when the theme or plugin begins and WordPress ends.
If you end up presenting this to someone who isn’t familiar with WordPress, then you’re going to inadvertently begin gathering user testing on WordPress – not your product.
Trust me. I’m speaking from experience.
So with that said, test plans are just as applicable to products as they are to client projects, but the way in which you assemble a team needs to be geared toward those who know WordPress and themes; otherwise, I doubt you’ll find a lot of quality in the feedback that you get.
All Off the Cuff
Of course, these are all off-the-cuff thoughts. That isn’t to say they’re without value – this is something I’ve begun to institute in recent projects and it’s helped a ton (I can’t overstate that) – but I also know that this is not necessarily the proper, say, academic way to go about it.
But at the end of the day, when you’re in a time crunch to launch a project with limited time and resources, you need to make sure your project is as bulletproof as it can be, at least for the initial launch.
Thankfully, we have easy ways to release updates, but that shouldn’t be an excuse for releasing weak software from the beginning, right?