
Arcade Game Maker Pedagogical Product Line: Brickles Test Plan

Introduction

The product to be tested will be downloaded free of charge as a means of attracting new clients. Because it represents Arcade Game Maker (AGM) to those clients, it must be of high quality. This plan defines the tests that are the last in a series of quality assurance tasks.

This plan follows IEEE Std 829 for test documentation. To complete it, we performed the analyses required to derive the Brickles test cases.

Test Items

The Brickles game is a subset of the total functionality needed for the product line. The set of use cases is shown in the following diagram.


Brickles Use Cases

The following use cases apply to Brickles:

For the full text of these use cases, refer to Arcade Game Maker Pedagogical Product Line: Requirements Model.

Tested Features

Except for features listed in the next section, all features described in the above figure will be tested. Review each of the relevant use cases in Arcade Game Maker Pedagogical Product Line: Requirements Model.

Features Not Tested (Per Cycle)

The following features will not be tested:

Testing Strategy and Approach

Syntax

The player communicates with the Brickles program using the mouse and keyboard.

Description of Functionality

The functionality being tested is the basic Brickles game running in a simple Win32 environment.

Arguments for Tests

The following table presents an analysis of the scenarios in the use cases shown in the figure above; the table after it lists the expected results for each test case.

Variable Data Types

Variable                     Data Type   Equivalence Classes
Left mouse button            boolean     Up; down; down & up
Mouse location               Point       Inside window; outside window
Number of pucks lost         int         0; 0 < x < 3; 3
Number of bricks remaining   int         0; all; 0 < brickCount < all
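As an illustration only (the plan itself constructs no test software), the equivalence classes above can be captured as data and crossed to enumerate the pool of candidate inputs. The variable names below are our own shorthand, not identifiers from the plan:

```python
from itertools import product

# Equivalence classes from the Variable Data Types table.
# Each class is represented by a label; concrete representative
# values would be chosen when the tests are executed.
equivalence_classes = {
    "left_mouse_button": ["up", "down", "down & up"],
    "mouse_location": ["inside window", "outside window"],
    "pucks_lost": ["0", "0 < x < 3", "3"],
    "bricks_remaining": ["0", "all", "0 < brickCount < all"],
}

# Full cross product of the classes: the pool from which test
# cases are selected (pair-wise selection would prune this).
names = list(equivalence_classes)
candidates = [dict(zip(names, combo))
              for combo in product(*equivalence_classes.values())]

print(len(candidates))  # 3 * 2 * 3 * 3 = 54 combinations
```

A pair-wise or scenario-driven selection, as described later in this plan, reduces this pool to the handful of rows in the Expected Output table.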

Expected Output

Test Case   Left Mouse Button           Mouse Location   Pucks Lost   Bricks Remaining   Expected Results
1           Down & up                   Inside window    0            0                  Won dialog
2           Down & up                   Inside window    1            0                  Won dialog
3           Down & up                   Inside window    2            0                  Won dialog
4           Down & up                   Inside window    3            >0 and <all        Lost dialog
5           Down & up                   Inside window    3            all                Lost dialog
6           Down                        Outside window   Don't care   >0                 No effect
7           Down & up, then down & up   Inside window    2            0                  Won dialog
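The expected-results column can be read as a simple test oracle. The sketch below is our own reconstruction of that decision logic (the function name and value encodings are assumptions, not part of the plan), checked against the seven table rows:

```python
def expected_result(mouse_location, pucks_lost, bricks_remaining):
    """Reconstructed oracle for the Expected Output table."""
    if mouse_location == "outside":
        return "No effect"      # clicks outside the window are ignored
    if bricks_remaining == 0:
        return "Won dialog"     # all bricks destroyed
    if pucks_lost >= 3:
        return "Lost dialog"    # all pucks lost
    return "No effect"          # game simply continues

# Rows 1-7 of the Expected Output table:
# (location, pucks lost, bricks remaining, expected result)
table = [
    ("inside", 0, 0, "Won dialog"),
    ("inside", 1, 0, "Won dialog"),
    ("inside", 2, 0, "Won dialog"),
    ("inside", 3, 1, "Lost dialog"),    # ">0 and <all", e.g. 1
    ("inside", 3, 52, "Lost dialog"),   # "all", e.g. 52 bricks left
    ("outside", 0, 10, "No effect"),    # pucks lost: don't care
    ("inside", 2, 0, "Won dialog"),     # two clicks, same end state
]

for loc, lost, bricks, expected in table:
    assert expected_result(loc, lost, bricks) == expected
```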

Specific Exclusions

No functionality will be excluded.

Dependencies

These tests assume that all the components of the Brickles game have been completed and unit tested.

Test Case Success/Failure Criteria

The expected results are all discrete. A test case will be a success if the actual program results exactly match the expected results shown in the above table (Expected Output).

Pass/Fail Criteria for the Complete Test Cycle

All tests must pass for the product to be released. The use case set is a minimal set of requirements.

Entrance Criteria/Exit Criteria

Subsets of the system test suite will be executed as soon as a build is achieved.

The system test phase is exited when the release criteria described in Pass/Fail Criteria for the Complete Test Cycle are achieved.

Test Suspension Criteria and Resumption Requirements

Testing is suspended after the tests for all available features have been applied. Testing resumes when a new feature is added to the build.

Test Deliverables/Status Communications Vehicles

The test plan and use cases are placed in the configuration management system so that they can be used for each system revision. These assets will also contribute to creating the test cases for other products.

Arcade Game Maker Pedagogical Product Line: System Test Plan Template is also a deliverable from this first system test phase. The template is derived by parameterizing any Brickles-specific information.

A test report will be produced. It includes test results and the status of system testing.

Testing Tasks

The test process has three main phases: (1) analysis, (2) construction, and (3) execution.

Analysis

The scenarios in the use cases are the basis for the analysis. Each scenario is first examined for varying data. Varying data types are used as the basis for the initial data table. Specific data variables are instantiated and assigned values.

Construction

The test cases will be executed directly from the data tables. No software will be constructed.

Execution and Evaluation

The tests will be executed by a trained tester who will evaluate the response produced by each test and rate it as passed or failed.

Hardware and Software Requirements

The Brickles game runs on any hardware that supports the .NET Common Language Runtime (CLR). The system tests will be performed on a machine other than the development machines.

Problem Determination and Correction Responsibilities

The developers will be responsible for reading the test report, determining the cause of each failure, and correcting the underlying defects.

Staffing and Training Needs/Assignments

This testing will initially be carried out by executing the game. No special training is required for this activity. If a capture/playback tool is adopted later, tool training will be required.

Test Schedules

Within two working days of receiving a developer-certified build, system test personnel will test and report.

Risks and Contingencies

If system testing is not completed in a timely manner, there may not be enough time to repair all defects. This risk is high given the need to develop a test plan template concurrently. It can be mitigated to some extent by scheduling work on the template after the Brickles test plan is completed but before the Brickles system is ready for test.

Analyses

This section describes the techniques and agreed-upon standards used to test each product in the AGM product line. They are justified and put into context by McGregor [McGregor 01b].

Coverage Standards

The system test coverage standard is based on use case information in Arcade Game Maker Pedagogical Product Line: Requirements Model. The frequency and criticality fields for each use case determine the correct level of test coverage. Both are rated on high, medium, and low scales. The following table shows the combinations of these values and the coverage standard defined for each.

Coverage Standards

Frequency, Criticality   Scenarios Covered                Criteria Details
High, high               Main, Alternative, Exceptional   For each tested scenario, construct test cases for all pair-wise combinations of scenario variables.
High, low                Main, Alternative                For each tested scenario, construct test cases for all pair-wise combinations of scenario variables that are defined in the system state machine.
Medium, high             Main                             For each tested scenario, construct test cases for some pair-wise combinations of scenario variables that are defined in the system state machine.
Medium, low              Main                             For each tested scenario, construct test cases for some pair-wise combinations of scenario variables that are defined in the system state machine.
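To make "all pair-wise combinations of scenario variables" concrete, the sketch below (our illustration, using hypothetical variable names) enumerates, for every pair of variables, every combination of their equivalence classes:

```python
from itertools import combinations, product

def all_pairwise_combinations(variables):
    """For each pair of scenario variables, yield every
    combination of their equivalence classes.

    `variables` maps a variable name to its list of classes.
    """
    for (name_a, classes_a), (name_b, classes_b) in combinations(
            variables.items(), 2):
        for value_a, value_b in product(classes_a, classes_b):
            yield {name_a: value_a, name_b: value_b}

# Hypothetical scenario variables, for illustration only.
variables = {
    "mouse_location": ["inside", "outside"],
    "pucks_lost": ["0", "0 < x < 3", "3"],
    "bricks_remaining": ["0", "0 < brickCount < all", "all"],
}

pairs = list(all_pairwise_combinations(variables))
print(len(pairs))  # 2*3 + 2*3 + 3*3 = 21 pairwise combinations
```

For the "some pair-wise combinations" rows of the table, a subset of these pairs would be selected, typically guided by the system state machine.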

Examples

The analyses needed to support the creation of test cases from the use cases are conducted by filling out the tables below. Example values are provided in the following tables.

Data Type Analysis

Variable   Data Type   Equivalence Classes
age        int         0-18; 19-21; 22-65; 66+

Test Case Construction

Test Case No.   Variable 1 (age)   Variable 2   Variable 3   Variable 4
1               12
2               19
3               56
4               67
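The Test Case Construction table picks one representative value from each equivalence class of age. A minimal sketch of that selection step follows; the parsing rules for classes written as "lo-hi" or "lo+" and the mid-range picking strategy are our assumptions (the table's authors chose their own representatives):

```python
def representatives(classes):
    """Pick one value from each equivalence class, where a class
    is written "lo-hi" (inclusive range) or "lo+" (open-ended)."""
    values = []
    for cls in classes:
        if cls.endswith("+"):
            values.append(int(cls[:-1]) + 1)    # just inside the open range
        else:
            lo, hi = (int(p) for p in cls.split("-"))
            values.append((lo + hi) // 2)       # a mid-range value
    return values

# Equivalence classes for age from the Data Type Analysis table.
age_classes = ["0-18", "19-21", "22-65", "66+"]
print(representatives(age_classes))  # one test value per class
```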

Modifying the System Test Plan

Modify the specific system test plan, using the techniques described in the Examples section above, every time the requirements for the system being tested change. Modify the generic system test plan when the techniques are no longer producing effective test cases or the standards are not producing satisfactory results. That modification process is shown in the following figure.


Attached Process

Test effectiveness (TE) is computed for each unit test as shown below:

Identified defects are cataloged and then analyzed to determine their origin. At the end of a product line increment, the test effectiveness is computed on a component-by-component basis. When the average TE goes below 75%, test coverage standards are made more comprehensive.

References and Further Reading

For details about the references cited in this document, see Arcade Game Maker Pedagogical Product Line: Bibliography.