All software needs to be tested. In fact, software testing is a major part of the overall software development process that involves many people and countless hours of detailed work. Unfortunately, most testing efforts are under-planned: some software testing professionals work in the field for years without ever seeing a really comprehensive QA plan and test suite. Part of the problem is that QA efforts often begin too late in the release cycle when there is too much pressure to take shortcuts.
This white paper shows how ReadySET Pro can be used to quickly create a comprehensive system test suite with test cases. ReadySET Pro's unique content-rich templates help you write better test suites that can improve your product quality. These templates make the writing process much faster than starting from scratch, which means that you are more likely to be able to complete your plan, even under time pressure.
The figure below illustrates where your QA plan and test suite fit with other project documents. This white paper focuses on the yellow "Quality Assurance" box. Ideally, your testing documents are just part of an overall set of project documents. But even if you do not have the others, you will be able to follow the discussion of test planning documents.
Software development projects that don't have enough test planning tend to bog down with defects that can put the entire project's success at risk. Test planning helps in the following specific areas:
The rest of this white paper works through the steps shown in the diagram below. (You may notice that this is very similar to the use case writing steps.)
Note that in steps 4 and 5, we recommend that you only specify the most important test cases in detail. In any complex system, there will be a large number of potential test cases. We encourage you to take a breadth-first approach: map out your test case suite first, then fill in details incrementally as needed. This concept is key to getting the most value out of the limited time that you have for test planning.
Software quality is not one-size-fits-all: different software products need different types of testing because they have different QA goals. For example, a real-time system may place a much higher priority on performance than a typical desktop business application would.
The task of QA planning is discussed in detail in the "Quality Throughout the Life-Cycle" white paper. The main parts of the overall QA plan are:
The overall QA plan addresses all quality activities. Quality can be achieved by building in better quality from the start, and by testing to find and remove defects. Specific QA activities include: coding preconditions, reviewing design and code, unit testing, integration testing, system testing, beta testing, using analysis tools, and analyzing field failure reports, among others. The rest of this paper will focus on just the system testing activity.
Once you have prioritized your QA goals, it is time to outline the system test suite. A test suite document is an organized table of contents for your test cases: it simply lists the names of all test cases that you intend to write. The suite can be organized in several ways. For example, you can list all the system components, and then list test cases under each. Or, you could list major product features, and then list test cases for each of those.
One of the best test suite organizations is to use a grid where the rows are types of business objects and the columns are types of operations. Each cell in the grid lists test cases that test one type of operation on one type of object. For example, in an e-commerce system, a Product business object would have test cases for each of the following operations: adding a product to the system, listing or browsing products, editing products, deleting products, searching for products, and calculating values related to the product such as shipping cost or days-until-shipment. The next row in an e-commerce test suite grid might focus on the Customer Order business object and have test cases for almost all the same operations.
The advantage of using an organized list or grid is that it gives you the big picture, and it helps you put your finger on any area that needs more work. For example, in the e-commerce grid, there might be a business object "Coupon." It is obvious that shoppers use coupons, but it is easy to forget to test the ability for administrators to create coupons. If that is overlooked, there will be a clearly visible blank space in the test suite document. These clear indications of missing test cases let you improve the test suite sooner, make more realistic estimates of the testing time needed, and find more defects. Defects found earlier can be fixed sooner, and realistic estimates keep management expectations in sync with reality, which helps keep the project out of crisis-management mode.
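The grid idea can be sketched in code. In this minimal sketch, the business objects, operations, and test case names are all hypothetical, echoing the e-commerce example above; cells that are empty but not explicitly marked "N/A" are flagged as gaps to fill:

```python
# A test-suite grid: rows are business objects, columns are operations.
# Each cell holds a list of test case names, or the marker "N/A" for
# cells that logically should have no test cases.
OPERATIONS = ["create", "list", "edit", "delete", "search"]

suite_grid = {
    "Product":        {"create": ["product-create-1"],
                       "list":   ["product-list-1"],
                       "edit":   ["product-edit-1"],
                       "delete": ["product-delete-1"],
                       "search": ["product-search-1"]},
    "Customer Order": {"create": ["order-create-1"],
                       "list":   ["order-list-1"],
                       "edit":   ["order-edit-1"],
                       "delete": "N/A",   # orders are never deleted
                       "search": ["order-search-1"]},
    "Coupon":         {"create": [],      # easy to overlook!
                       "list":   ["coupon-list-1"],
                       "edit":   ["coupon-edit-1"],
                       "delete": ["coupon-delete-1"],
                       "search": "N/A"},
}

def find_gaps(grid):
    """Report cells with no test cases that are not explicitly N/A."""
    return [(obj, op) for obj, row in grid.items()
            for op in OPERATIONS
            if row.get(op) != "N/A" and not row.get(op)]

print(find_gaps(suite_grid))   # -> [('Coupon', 'create')]
```

The payoff is exactly the one described above: the overlooked "create Coupon" cell shows up as a visible blank rather than a silent omission.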
After you have outlined your test suite, this step becomes much easier to do well. Having an organized system test suite makes it easier to list test cases because the task is broken down into many small, specific subtasks.
Put your finger, or cursor, on each list item or grid cell in your test suite. Then, for each one, ask yourself about the relevant system requirements. If you have a written use case document, you will often be able to turn each use case into one or more test cases. There may be some list items or grid cells that really should be empty. For example, an e-commerce application might not have any delete operation for the Customer Order business object. Explicitly mark with "N/A" any cells that logically should not have test cases. If you cannot think of any test cases for a part of the suite that logically should have some test cases, explicitly mark it as "TODO".
The name of each test case should be a short phrase describing a general test situation. Append a unique number to each test for the given test situation. For example: login-1, login-2, login-3 for three alternative ways to test logging in. And, sales-tax-in-state-1 and sales-tax-out-of-state-1 for two different situations where collected sales taxes are reported to the government according to two different procedures. Use distinct test cases when different steps will be needed to test each situation. A single test case can be used when the steps are the same but different input values are needed.
As you gradually fill in the test suite outline, you may think of features or use cases that should be in the software requirements specification (SRS), but are not there yet. Quickly note any missing requirements in the SRS document as you go along.
Before moving on to the next step, it is worth highlighting the value of having a fairly complete test suite outline. The test suite outline is a useful asset that can help your project succeed. At this point, you can already get a better feeling for the scope of the testing effort. You can already roughly prioritize test cases. You are already starting to look at your requirements critically and you may have identified missing or unclear requirements. And, you can already estimate the level of specification-based test coverage that you will achieve.
In step three, you may have generated between ten and fifty test case names on your first pass. That number will go up as you continue to make your testing more systematic. The advantage of having a large number of tests is that it usually increases the coverage.
The disadvantage of creating a big test suite is simply that it is big. It could take a long time to fully specify every test case that you have mapped out. And, the resulting document could become too large, making it harder to maintain.
A good strategy is to be selective before drilling down to the next level of detail. For example, you might prioritize the test cases based on the priorities of the features or use cases that they test. Also, it's a good idea to first write descriptions rather than get into detailed steps for each test case. Going deep into the details of just a few test cases may be enough to shake out ambiguity or incompleteness in the requirements. The remaining cases should all be specified eventually; however, you might choose to rely on ad-hoc testing for lower-priority features in early releases.
For each test case, write one to three sentences describing its purpose. The description should provide enough information that you could come back to it after several weeks and recall the same ad-hoc testing steps that you have in mind now. Later, when you actually write detailed steps in the test case, any team member will be able to carry out the test the same way that you intended.
The act of writing the descriptions forces you to think a bit more about each test case. When describing a test case, you may realize that it should actually be split into two test cases, or merged with another test case. And again, make sure to note any requirements problems or questions that you uncover.
Now it is time for the main event: actually writing the test case steps and specifying test data. This is a task that you can expect to take ten to forty-five minutes for each test case. That might work out to approximately ten test cases in a typical work day. So, you must be selective to get the most value in return for your limited available time.
Focus on the test cases that seem most in need of additional detail. For example, select system test cases that cover:
Each test case should be simple enough to clearly succeed or fail, with little or no gray area in between. Ideally, the steps of a test case are a simple sequence: set up the test situation, exercise the system with specific test inputs, verify the correctness of the system outputs. You may use programming constructs such as if-statements or loops, if needed.
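Although this paper is describing manual system test cases, the same set-up/exercise/verify shape appears in automated tests, which may make the pattern easier to see. A minimal sketch using Python's unittest, with a hypothetical ShoppingCart class standing in for the system under test:

```python
# The set-up / exercise / verify pattern in an automated test.
# ShoppingCart is a hypothetical system under test.
import unittest

class ShoppingCart:
    def __init__(self):
        self.items = {}

    def add(self, product, qty):
        self.items[product] = self.items.get(product, 0) + qty

    def total_quantity(self):
        return sum(self.items.values())

class CartAddTest(unittest.TestCase):
    def setUp(self):
        # Set up the test situation.
        self.cart = ShoppingCart()

    def test_add_single_item(self):
        # Exercise the system with specific test inputs.
        self.cart.add("widget", 2)
        # Verify the correctness of the system outputs.
        self.assertEqual(self.cart.total_quantity(), 2)

# Run the test case programmatically and report the result.
suite = unittest.TestLoader().loadTestsFromTestCase(CartAddTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("passed" if result.wasSuccessful() else "failed")
```

A manual system test case follows the same three beats; only the "tester" executing the steps differs.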
Systems that are highly testable tend to have a large number of simple test cases that follow the set-up-exercise-verify pattern. For those test cases, a one-column format can clearly express the needed steps. However, not all test cases are so simple. Sometimes it is impractical to test one requirement at a time. Instead, some system test cases may be longer scenarios that exercise several requirements and verify correctness at each step. For those test cases, a two-column format can prove useful.
In the one-column format, each step is a brief verb phrase that describes the action that the tester should take. For example, "enter username," "enter password," "click 'Login'," "see Welcome page," and "verify that greeting has correct username" are all steps. Verification of expected outputs is written using the verbs "see" and "verify." If multiple inputs are needed, or multiple outputs must be verified, one-column test cases will simply have more steps.
In the two-column format, each test case step has two parts: a test input and an expected output.
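For example, a two-column test case can be thought of as an ordered list of (test input, expected output) pairs. In this sketch the login scenario, test case name, and step wording are all hypothetical:

```python
# A two-column test case as data: each step pairs a test input
# (the tester's action) with the expected output to verify.
login_1 = [
    ("Enter username 'alice' and password 'secret'; click 'Login'",
     "Welcome page appears with the greeting 'Hello, alice'"),
    ("Click 'My Orders'",
     "Order list page shows orders belonging to alice only"),
    ("Click 'Logout'",
     "Login page reappears and the session is ended"),
]

def print_two_column(steps):
    """Render the (input, expected output) pairs as numbered rows."""
    for i, (test_input, expected) in enumerate(steps, start=1):
        print(f"{i}. INPUT:    {test_input}")
        print(f"   EXPECTED: {expected}")

print_two_column(login_1)
```

Because every input has an expected output beside it, the tester verifies correctness at each step of the scenario rather than only at the end.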
You may notice that the two formats for test cases mirror the two formats for use cases. The difference is that use cases are a form of requirements, whereas test cases deal with more details of the implemented system. Use cases focus mainly on the user's tasks and how the system supports those tasks, while specifying as few implementation details as possible. A major advantage of use cases is that they are simple enough to be read by actual users who can help validate requirements. In contrast, test cases should be more technical documents with enough implementation detail to allow any member of the development team to carry out a test in exactly the same way.
If you have written use cases, they can be copied and pasted as a good starting point for test cases. When leveraging use cases in this way, make sure to add enough detail to make the test reliably repeatable.
If you only have one test input value for a given test case, then you could write that test data value directly into the step where it is used. However, many test cases will have a set of test data values which must all be used to adequately cover all possible inputs. We encourage you to define and use test input variables. Each variable is defined with a set of its selected values, and then it is used in test case steps just as you would use a variable in a programming language. When carrying out the tests, the tester should repeat each test case with each possible combination of test variable values, or as many as practical.
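The mechanics of test input variables can be sketched as follows; the variable names and values are hypothetical, and the combinations are generated with a simple cross product of the selected values:

```python
# Test input variables: each variable is defined with a set of
# selected values, and the tester repeats the test case once per
# combination of values (or as many combinations as practical).
from itertools import product

test_variables = {
    "state":    ["CA", "NY", "out-of-state"],
    "quantity": [1, 99],
}

def combinations(variables):
    """Yield one dict of variable bindings per combination of values."""
    names = list(variables)
    for values in product(*(variables[n] for n in names)):
        yield dict(zip(names, values))

combos = list(combinations(test_variables))
print(len(combos))   # 3 states x 2 quantities = 6 combinations
print(combos[0])     # {'state': 'CA', 'quantity': 1}
```

In a manual test suite the same idea applies on paper: the steps are written once against the variable names, and the value table tells the tester how many passes to make.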
Carefully selecting test data is as important as defining the steps of the test case. The concepts of boundary conditions and equivalence partitions are key to good test data selection. Try these steps to select test data:
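To illustrate the two concepts, consider a hypothetical rule (not from the paper) that orders of ten or more items earn a bulk discount. Quantities below ten and at-or-above ten form two equivalence partitions, with the boundary at ten:

```python
# Equivalence partitions and boundary values for a hypothetical rule.
# Partition 1: quantity < 10 (no discount); partition 2: quantity >= 10.
def bulk_discount_applies(quantity):
    return quantity >= 10

# One representative value per partition, plus values at the boundary.
partition_representatives = [3, 50]
boundary_values = [9, 10]   # just below, and exactly at, the boundary

for qty in partition_representatives + boundary_values:
    print(qty, bulk_discount_applies(qty))
```

Testing one representative per partition avoids redundant cases, while the boundary values (9 and 10) target the off-by-one errors that cluster at partition edges.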
Recall that one of the advantages of writing test cases is that it forces you to clearly think through the requirements. Capture your insights by writing notes and questions as you go. If a test case step exposes an unclear requirement, make a note of it in the appropriate part of the system requirements specification.
A suite of system test cases can find many defects, but still leave many other critical defects undetected. One clear way to guard against undetected defects is to increase the coverage of your test suite.
While a suite of unit tests might be evaluated in terms of its implementation coverage, a suite of system test cases should instead be evaluated in terms of specification coverage. Implementation coverage measures the percentage of lines of code that are executed by the unit test cases. If there is a line of code that is never executed, then there could be an undetected defect on that line. Specification coverage measures the percentage of written requirements that the system test suite covers. If there is a requirement that is not tested by any system test case, then you are not assured that the requirement has been satisfied.
You can evaluate the coverage of your system tests on two levels. First, the test suite itself is an organized table of contents for the test cases that can make it easy to notice parts of the system that are not being tested. Second, within an individual test case, the set of possible input values should cover all input value equivalence partitions for each parameter.
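The first level of that evaluation can be mechanized with a simple cross-check of the written requirements against the test suite. The requirement IDs and test case names in this sketch are hypothetical:

```python
# Specification coverage: what fraction of written requirements is
# covered by at least one system test case, and which are untested?
requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

tests_by_requirement = {
    "REQ-1": ["login-1", "login-2"],
    "REQ-2": ["checkout-1"],
    "REQ-4": ["search-1"],
}

covered = [r for r in requirements if tests_by_requirement.get(r)]
uncovered = [r for r in requirements if not tests_by_requirement.get(r)]
coverage = 100 * len(covered) / len(requirements)

print(f"Specification coverage: {coverage:.0f}%")   # 75%
print("Untested requirements:", uncovered)          # ['REQ-3']
```

Any requirement that maps to no test case is exactly the situation described above: nothing assures you that the requirement has been satisfied.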
This white paper laid out the steps needed to quickly create a system test suite and test cases using the ReadySET Pro templates. The keys to effective system testing are to:
ReadySET Pro provides valuable help for planning your system testing by giving you templates that include reusable content and set good examples for you to follow. Both of these advantages give you a big head start on your own test plans. ReadySET Pro users typically save at least three hours by using the test case suite and test case templates alone. See how these savings, and the savings from other templates, add up to days or weeks of project time by trying the ROI Calculator.