There is an art to testing. Clearly there is a science too, but one key skill is understanding people and their typical behaviours. We often say that you cannot proof-read your own document: I know that whenever I have written something, a colleague will find many spelling errors and improvements that I just didn’t see! Well, the same is true for software – or indeed any product that you create. Many projects are now built in an Agile mode – a small, highly focussed team with direct ‘user’ input (or at least a representative for the users) and rapid turnaround of ideas and implementations. This can be highly effective, but it can also lead to a bubble effect.
Social media has taught us all about the ‘confirmation bubble’, where it is easy to get absorbed in the views of those who agree with you. In testing, the equivalent danger is confirmation bias – the human tendency to favour information that supports an existing viewpoint, while playing down the significance of any information that challenges it. We all do it to some extent: we build assumptions into our working practices, then fail to challenge those assumptions properly. In software development this can lead to costly mistakes, so every developer should be ready to test their assumptions as well as their code.
Modern Agile software development practices – particularly Test-Driven Development (TDD) – have led to frequent and detailed functional testing of software against specifications while it is under development. This can only be a good thing. But if you only ever think about testing to specification, you may miss the opportunity to test your assumptions.
Non-functional testing, which identifies “quality of experience” factors such as usability, is needed to make sure your software will be well received and understood by the target audience – it brings into focus the assumptions that have been made throughout the development process. The suggestions below might seem obvious, but they can get missed in the rush to get a product to market, allowing design flaws through to the final product. Now that users can instantly register delight or discontent via app store reviews, it’s an area that none of us can afford to ignore.
Prototype or mock up your app’s functionality and trial it with testers who have no knowledge of its design, or with potential users if possible. Exposing your design to people who don’t know how you intend it to be used will expose any weaknesses in the flow of usage, or functions and controls whose purpose will not be obvious in real-world situations. Involving the user at all stages of design and development is key to producing usable apps, but if this is done only through a single person acting as the “voice of the user”, assumptions can still creep in unnoticed. As no two users will think identically, or solve a problem identically, trialling the design with people who have no foreknowledge is a great way to uncover snags.
When functionality is around 75% complete, it’s a good time for a spot check on usability. Again, testers who don’t know the specification, or potential users, will be best at uncovering any discontinuities or awkward functions that might have seemed fine during development. This will probably work best as a more formal testing process against a consistent set of objectives (more about that below).
When the application is feature complete and fully functionally tested, it’s essential to do a last non-functional test cycle, in case something unhelpful has managed to creep in during the last stages of the project and evaded detection during functional testing. As before, try to use people who don’t know the application in detail, as their lack of prior assumptions helps uncover usability issues. This will act as a final “sanity check” that your app is the best it can be before it is released to market.
Remember that the non-functional testing suggested above doesn’t have to be big, formal or complex. For your sanity checks during design and early development, depending on the size of your business, it might be as simple as asking a trusted friend to try it out for a few minutes – or the finance director, or the person who maintains the server room – anyone who hasn’t been involved in the design and development.
But as you progress through development, and certainly by the time you get to the last check before release, it is best to make it a more formal and documented process. In a competitive market, it’s always good to have a record of a consistent QA process that you can produce if challenged on quality.
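To make the idea of a “formal and documented process” concrete, here is a minimal sketch of how a usability check might be recorded against a consistent set of criteria. It is purely illustrative – the criteria, names and report format are hypothetical examples, not an AQuA or industry-standard format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CriterionResult:
    criterion: str          # e.g. "First-run flow understandable without help"
    passed: bool
    notes: str = ""

@dataclass
class TestSession:
    app_version: str
    tester: str             # ideally someone NOT involved in development
    run_date: date
    results: list = field(default_factory=list)

    def record(self, criterion, passed, notes=""):
        # Log one criterion's outcome for this session
        self.results.append(CriterionResult(criterion, passed, notes))

    def report(self):
        # Produce a simple written record that can be filed with the release
        failed = [r for r in self.results if not r.passed]
        lines = [
            f"Usability check – v{self.app_version}, "
            f"{self.tester}, {self.run_date}",
            f"{len(self.results) - len(failed)}/{len(self.results)} "
            f"criteria passed",
        ]
        lines += [f"FAIL: {r.criterion} – {r.notes}" for r in failed]
        return "\n".join(lines)

# Example session (all values hypothetical)
session = TestSession("0.9.2", "External tester A", date(2024, 5, 1))
session.record("First-run flow understandable without help", True)
session.record("Error messages suggest a next step", False,
               "Login failure shows a raw error code")
print(session.report())
```

Even something this small gives you a dated, versioned record per tester, and keeping the criteria list fixed across sessions is what makes the results comparable from the 75% spot check through to the final pre-release cycle.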
It’s a good idea at the final stage to invest in the services of a professional testing service, if possible. They are experts at finding the sort of issues which may need to be considered in a risk assessment before release. If external testing is beyond the resources available for the project, allow at least half a day for a thorough check of all non-functional issues by in-house staff, in addition to your planned functional testing.
If you’re unsure what to use as the basis of non-functional testing, the App Quality Alliance publishes sets of recommended Testing Criteria for applications on various platforms (http://www.appqualityalliance.org/resources), covering all areas of usability, as well as other key issues such as accessibility, network performance and power usage.
In addition there is an Online Testing Tool that can be used by even the most inexperienced tester to step through the essential Baseline Testing Criteria, make sure all the tests are correctly completed, and produce a formal report at the end (http://www.appqualityalliance.org/AQuA-online-testing-tool-demo).
One advantage of using third party tools like these is that they counteract the unconscious tendency to build confirmation bias into the wording of in-house tests. Using external test cases can act as a valuable sanity check before committing to production.
These can all help bring a professional level of discipline and detail to your non-functional testing, and ensure that you’ve done everything possible to root out hidden assumptions.
This article was written by Greg and Martin from the App Quality Alliance.