Most can agree that testing is important, yet so few actually test.
I landed my first full-time job as a PHP developer in 2007. Having just finished my master's degree in Business Administration and Computer Science, I was greatly influenced by one particular study partner, the Ruby on Rails creator, David Heinemeier Hansson. Among other things (design patterns and refactoring), he preached test-driven development. I had picked it up and was ready to practice it.
Like many other fresh graduates, I was sad to learn that my idealistic expectations didn't match reality. Testing was new to my first employer, but we gave it a try on my eager recommendation. We wrote our first few automated tests and quickly got going. It was fun. It worked. And it quickly proved valuable in preventing the introduction of future errors.
But like so many other times that good intentions flourished, a deadline struck us. Like lightning from a blue sky. We had to rush to finish on time and as a result, we abandoned our good intentions for a while. We promised ourselves that we would get back in the saddle again, once we had scrambled past the deadline.
We never picked up test-driven development again in that team.
I've seen similar stories unfold at each and every one of the companies I've worked in. On my own teams, on other teams, and in the teams I've managed in one way or the other. I've seen good intentions come and go – the latter mostly under pressure to meet deadlines and to deliver features incrementally (not iteratively), which forced teams to scramble.
Again and again, I've seen product managers and product owners so eager to deliver features from a plan that they didn't bother to test themselves, and so they ended up destroying the testing efforts of everybody else down the line.
Too often, I have fallen victim to committing teams to rollout plans that we had no way of knowing would hold. Rollout plans with risk logs of everything we thought could go wrong – everything we didn't control ourselves. We just failed to de-risk the one thing we did have control of – ourselves.
We should have tested
Looking back, what we should have spent more time doing was containing the risk that we could control. We should have reduced the complexity of our elaborate rollout plans by testing them. We should have focused on de-risking our proposed plans by removing uncertainty.
We should have made tests to see how easy it was going to be to migrate data from one system to the other, what approach would be more effective, and how much data we needed to migrate for the new system to be viable and effective.
We should have tested how much design we could reuse from one brand to the other, what design variations provided value and which didn’t, and how to best communicate and evolve these findings with stakeholders, so that we would spend our time on what mattered.
We should have tested how long it would take to roll out one section, so that we could adjust our plan for the rest. Or better, have tested three different ways we could have rolled out a section, and decided which was best by comparing and contrasting each solution.
We should have tested what problems our stakeholders were really struggling with and what part of the product would have had the greatest impact on the business, so that we could have focused on the most critical assumptions rather than simply assuming success.
We should have tested what mattered to our customers.
Could have. Should have. Would have. We didn’t.
We assumed success and never compared and contrasted
Even if we had tested all of these things, it wouldn’t have mattered much. Our biggest mistake was that for each problem, we had only one solution. And for that solution, we assumed success.
Sure, our projections could have been much better had we tested each solution before rolling it out. However, our main problem was that we had no idea whether there was a better solution to the problem we had tried to solve. We didn't make the effort.
Whatever solutions we had planned, we had only visualized them in our heads. They were imaginary. Thinking about how to execute on something is not the same as doing actual work to figure out whether that is the best solution.
We assumed that we were going to be successful and that our plan would work. So we skipped over the obvious experiments. If we did any testing, it was of the one solution we had planned.
For testing and experimentation to be of any value, you need to test multiple approaches so you can compare and contrast which is better. That comparison establishes a baseline against which to judge all subsequent results.
We don’t test to learn from the success cases. If you know in advance that it’s going to work, it is not a test. We test to learn from the failed cases. How do we know if a case has failed, if there are no other cases to compare to? We don’t.
The danger of committing to untested plans
Too many times, I’ve fallen victim to presenting and signing off on year-long plans that never had a chance of holding true. They were at best educated guesses.
The danger of presenting year-long plans and committing to them is that they look real. They seem as if they are full of certainty. Whoever approves the plans expects that success is an execution problem: that the projections will materialize as described.
The danger of committing to untested plans full of uncertainty is that someone fully intends to implement them.
Testing could be your new competitive advantage
The act of testing doesn’t have to be hard.
Software testing is already an elaborate discipline full of helpful and sophisticated tools to make test-driven development as easy as can be.
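To illustrate how low the barrier is, here is a minimal sketch of what a unit test looks like in Python's pytest style. The function under test, `format_price`, is a hypothetical example of my own, not something from the story above:

```python
# A hypothetical function under test: formats an amount in cents as a dollar string.
def format_price(cents: int) -> str:
    """Format an integer amount of cents, e.g. 1999 -> "$19.99"."""
    return f"${cents // 100}.{cents % 100:02d}"


# pytest-style tests: plain functions containing plain assertions.
# A test runner discovers any function whose name starts with "test_".
def test_whole_dollars():
    assert format_price(500) == "$5.00"


def test_cents_are_zero_padded():
    assert format_price(1005) == "$10.05"


if __name__ == "__main__":
    # The same assertions also run without any framework at all.
    test_whole_dollars()
    test_cents_are_zero_padded()
    print("all tests passed")
```

That is the whole ceremony: a function, an assertion, and a runner that finds them. The hard part was never the tooling.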
Usability testing is a well established discipline among user experience professionals with many helpful tools for recruitment, presentation, and automation.
"Our success at Amazon is a function of how many experiments we do per year, per month, per week, per day. Being wrong might hurt you a bit. But being slow will kill you." – Jeff Bezos
To help make product testing easier, I created the Validation Patterns card deck – a collection of 60 lean product experiments that will help you get answers in hours or days rather than weeks or months.
It is the perseverance and discipline to keep going, even when deadlines strike, that is hard. It is a sustained focus on increasing the velocity of experiments and learning over a longer period of time that will separate you from the competition.
Many companies want to do more testing and experimentation, and many manage to launch testing initiatives in one way or another. Most, however, struggle to sustain those initiatives and quickly fall back to how things were.
It is hard to argue that you shouldn’t test. Yet, why are so many companies struggling to do so on an ongoing basis?
If you can muster the discipline and perseverance to keep testing, it could be a game changer for your business. Since so many companies struggle to keep it up, it could become your competitive advantage.