Saturday, July 3, 2010

The Role and Importance of Quality Assurance (QA)

There is a moment when the young and enthusiastic learn that seat-of-the-pants development is quick but eventually leads to catastrophe. You can tell which stage an engineer is at by asking them what they think of QA: if they think it's an occupation for the lesser divinities of programming, they aren't there yet; if they have enough experience, they will think of QA engineers as demi-gods whose verdict makes or breaks months of coding.

Having been at this for decades, I am of course a very, very strong proponent of mandatory QA. To me, this last step in the development process fulfills three main goals:
  1. Interface stability and security: making sure that the code does what it is supposed to do, especially in boundary conditions that developers typically overlook. The most common scenario is empty data (null pointers, etc.) where the code assumes an object is present, but testing code against SQL injection is another invaluable example (see the sketch after this list). This has nothing to do with the functionality of the code, but with its ability to behave properly in unusual conditions.
  2. Performance and stress testing: checking how the code behaves under realistic scenarios, not just in the simple cases the developer faces. Instead of 5 users, have 500,000 run concurrently on the software and see what it does. Instead of 100 messages, see what the system does with 100,000,000. Instead of running on a souped-up developer machine with a 25" display, look at your software from the point of view of a user with a $200 netbook.
  3. User experience and acceptance: ensuring the flows make sense from the end user's perspective. Put yourself in the user's shoes and try performing some common tasks. See what happens if you try doing something normal but atypical. For instance, try adding an extension to a phone number and see whether the software rejects the input.
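
To make the first goal concrete, here is a minimal sketch of a boundary-condition test using Python's built-in unittest and sqlite3 modules. The `lookup_user` function and the `users` table are hypothetical stand-ins for whatever code is under test; the point is simply that empty data and malicious input get exercised explicitly, not that this is anyone's production code.

```python
import sqlite3
import unittest


def lookup_user(conn, name):
    """Hypothetical function under test: returns a user row or None."""
    if not name:                      # boundary condition: empty data
        return None
    # Parameterized query, so a malicious name cannot inject SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchone()


class BoundaryTests(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        self.conn.execute("INSERT INTO users VALUES (1, 'alice')")

    def test_empty_input_returns_none(self):
        self.assertIsNone(lookup_user(self.conn, ""))
        self.assertIsNone(lookup_user(self.conn, None))

    def test_injection_attempt_finds_nothing(self):
        # A classic injection string must neither return nor destroy rows.
        self.assertIsNone(lookup_user(self.conn, "' OR '1'='1"))
        row_count = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(row_count, 1)


if __name__ == "__main__":
    unittest.main()
```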

We have gone a long way towards understanding how these three goals help the development process. What is just as important, though, is to see (a) how they have to be implemented, and (b) what the downsides are of not implementing them.

Implementation

The modern trend is towards implementing interface tests at the developer level. The basic idea is that there is a contract between developers, and that each developer has to write a series of tests verifying that the code they wrote actually performs as intended. The upside is that the code does what is desired and that it is fairly easy to verify what kind of input is tested. The downside is that the testing code almost doubles the amount of programming that needs to be done.

Agile methods, with their quick iterations, are particularly emphatic about code testing. Each developer is required to provide testing code alongside the main deliverable. At first, it seems odd that people willing to throw out code regularly would be so adamant about testing it. Closer inspection, though, shows that if there is no complete set of tests, the time saved by not writing them is paid back in finding and removing inconsistencies and incompatible assumptions.

Stress and performance tests usually have to be separated from the interface tests, because they require a complex setup. Performing a pure stress test without a solid data set leads to false negatives (you think the code is OK, but as soon as the real data is handled, it breaks where you didn't think it would). A good QA department will have procedures to create a data set that is compatible with the production data and will test against it.
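
What such a procedure might look like, in a minimal sketch: the `orders` schema, the row counts, and the value distributions below are hypothetical placeholders; in a real QA setup they would be derived from the production database so that the test data set skews the same way real traffic does.

```python
import random
import sqlite3

# Hypothetical volumes; in practice these come from production metrics.
N_USERS = 50000
N_ORDERS = 500000

conn = sqlite3.connect("stress_test.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (user_id INTEGER, amount REAL)")


def production_like_rows():
    # Skew the data the way real traffic is skewed:
    # a few heavy users generate most of the orders.
    for _ in range(N_ORDERS):
        user_id = int(random.paretovariate(1.2)) % N_USERS
        amount = round(random.expovariate(1 / 40.0), 2)  # mean order ~ $40
        yield (user_id, amount)


conn.executemany("INSERT INTO orders VALUES (?, ?)", production_like_rows())
conn.commit()
conn.close()
```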

There are two goals to this kind of test: (a) characterization and (b) profiling. Characterization tells the department how the code performs as load increases. Load is a function of many factors (e.g. size of database, number of concurrent users, rate of page hits, usage mix) and a good QA department will analyze a series of these factors to determine a combined breaking point - a limit beyond which the software either doesn't function anymore or doesn't perform sufficiently well.
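
A bare-bones characterization sketch, under loud assumptions: `run_scenario` below is a simulated placeholder that returns a made-up 95th-percentile latency, and the threshold is arbitrary. In a real test it would drive the actual system with a chosen usage mix; the sweep logic of walking up the load levels until the agreed limit is violated is the part being illustrated.

```python
# Sweep one load factor (concurrent users) and report the breaking point.

def run_scenario(concurrent_users):
    """Placeholder driver: returns a simulated p95 latency in milliseconds."""
    base, knee = 120.0, 800            # hypothetical numbers for the sketch
    penalty = max(0, concurrent_users - knee) ** 1.5 / 50.0
    return base + concurrent_users * 0.1 + penalty


THRESHOLD_MS = 500.0                   # agreed "performs sufficiently well" limit


def find_breaking_point(levels):
    for users in levels:
        latency = run_scenario(users)
        print("%7d users -> p95 %.0f ms" % (users, latency))
        if latency > THRESHOLD_MS:
            return users
    return None


if __name__ == "__main__":
    print("breaking point:", find_breaking_point([100, 500, 1000, 2000, 5000, 10000]))
```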

Profiling, on the other hand, helps the developers. The software profile gives the developers an idea of where the code breaks down. Ironic, considering that a software profile is a breakdown of where the processor spent its time. Profiling requires very active interaction between QA and development, but is a very powerful tool for both.
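
For the profiling side, Python ships with cProfile; the sketch below profiles a hypothetical hot path (`build_report` is a stand-in for the code under test) and prints where the time went, which is exactly the kind of breakdown QA can hand back to development.

```python
import cProfile
import pstats


def build_report(n=200000):
    """Hypothetical hot path standing in for the code under test."""
    totals = {}
    for i in range(n):
        key = i % 1000
        totals[key] = totals.get(key, 0) + i * i
    return sorted(totals.items())


profiler = cProfile.Profile()
profiler.enable()
build_report()
profiler.disable()

# Print the ten entries where the processor spent the most time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```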

Finally, user acceptance tests are performed by domain experts or using scripts provided by domain experts. This is the most delicate function of QA, because the testers become advocates or stand-ins for users. In this capacity, they test how the software "feels". They have to develop a sense of what the user will ultimately think when faced with the software.
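
Acceptance scripts can be as plain as a checklist encoded in code. The sketch below replays the phone-extension scenario from the list above against a hypothetical `validate_phone` function (both the function and its rules are assumptions for illustration); the assertions state what the domain expert says the user should experience, not what the implementation happens to do.

```python
import re
import unittest


def validate_phone(raw):
    """Hypothetical validator under test: accepts an optional extension."""
    return bool(re.match(r"^\+?[\d\s\-()]{7,20}(\s*(x|ext\.?)\s*\d{1,5})?$",
                         raw.strip(), re.IGNORECASE))


class PhoneAcceptance(unittest.TestCase):
    def test_plain_number_is_accepted(self):
        self.assertTrue(validate_phone("(415) 555-0134"))

    def test_number_with_extension_is_accepted(self):
        # Normal but atypical input: a user adds an extension.
        self.assertTrue(validate_phone("415-555-0134 ext. 22"))

    def test_garbage_is_rejected(self):
        self.assertFalse(validate_phone("call me maybe"))


if __name__ == "__main__":
    unittest.main()
```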

It is here that the tension between developers and testers is at its worst. The attitude of many developers is that the software performs as intended, and they are frequently upset when a tester's complaint forces them to do a lot of work for something that seems minor, like splitting a page in two or reversing the logic with which information is gathered.

It is also here that the engineering manager has to be most adamant and supportive of the testers. Ultimately, the users will perform the same tasks many, many times. To them, an extra click may translate into hours wasted on a daily basis, something that would infuriate anyone.

Not Implementation

What is the downside of not implementing Quality Assurance? If you are a cash-strapped, resource-strapped Internet startup, the cruel logic of time and money almost forces you to do without things like QA, regardless of the consequences. So let's look at what happens when you don't follow best practices.

First, you can easily do without unit tests in the beginning. I know, you wouldn't have expected to hear that from me, but as long as your application is in flux and the number of developers is small, unit tests are very inefficient. You see, the more you change the way your application operates, the more likely you are to have to toss your unit tests overboard. On the other hand, the fewer developers you have, the less they are going to have to use each other's code.

Problems start occurring later on, and you certainly want to have unit tests in place after your first beefy release. What I like to do is to schedule unit test writing for the period right after the first beta release: the time allocated to new development is near nothing, and you don't want to push the developers onto the next release yet, which can cause all sorts of issues with development environments getting out of sync. So it's a good time to fill in the test harnesses and write testing code. Since the developers already know what features are upcoming, they will tend to write tests that will still function after the next release.

Second, performance tests are a must before the very first public release. As an architect, I have noticed how frequently the best architecture is maligned because of a stupid implementation mistake that manifests itself only under heavy load. You find the mistake, fix the issue, and everything works fine - but there is a period of time between discovery and fix that throws you off.

Performance and scalability problems are very hard to catch and extremely easy to create. The only real way to be proactive about them is to do performance and load testing, and you should really have a test environment in place before anything goes public.

There are loads of software solutions that allow you to emulate browser behavior, pretending to be thousands or millions of users. Some of them are free and open source; many are for-pay and extremely expensive. Typically, the high-end solutions are for non-technical people, while the open source solutions are designed by and for developers.
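
If you cannot afford any of them yet, a poor man's version can be put together from the standard library alone. The sketch below fires a configurable number of concurrent GET requests at a URL and reports throughput and error counts; the URL, concurrency level, and request count are placeholders to point at your own test environment, and it is nowhere near a full browser emulation.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # placeholder: point at your test environment
CONCURRENT_USERS = 200           # placeholder load level
REQUESTS_PER_USER = 10


def one_user(_):
    """Emulate a single user hitting the page a few times."""
    errors = 0
    for _ in range(REQUESTS_PER_USER):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
        except Exception:
            errors += 1
    return errors


if __name__ == "__main__":
    start = time.time()
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        errors = sum(pool.map(one_user, range(CONCURRENT_USERS)))
    elapsed = time.time() - start
    total = CONCURRENT_USERS * REQUESTS_PER_USER
    print("%d requests in %.1fs (%.1f req/s), %d errors"
          % (total, elapsed, total / elapsed, errors))
```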

Finally, lack of final acceptance testing will have consequences mostly if your organization is not able to cope quickly with user feedback. In an ideal world, you would release incremental patches on a frequent basis (say, weekly). Then you can take actual user input and modify the application accordingly.

The discipline required to do this, though, is a little beyond most development shops. Instead, most teams prefer to focus on the next release once one is out the door, and fixing bugs on an ongoing basis is nobody's idea of a fun time. So you are much better off putting in some sort of gateway function that has a final say in overall product quality.

Many engineering teams have a formal sign-off role: unless the responsible person in the QA department states that the software is ready for consumption, it isn't shipped. I have found that to be too constricting, especially because of the peculiar form of tunnel vision that is typical of QA: since all they ever see of the software is bugs, they tend to think of the software as buggy.

Instead, I think it more useful to have a vote on quality and release: in a meeting chaired by the responsible person in QA, the current state of the release is discussed and then a formal vote is taken, whose modality is known ahead of time. Who gets to vote, with what weight, and based on what information - that's up to you. But putting the weight of the decision on the shoulders of a person whose only responsibility is detecting issues is unfair.
