Test Director Overview

Testing Management Process

When using Test Director, the testing management process can be described in the following four steps:

  • Specify the testing requirements
  • Design and develop the tests
  • Run the tests in manual or automated mode
  • Analyze the defects

Accordingly, Test Director use can be divided into four phases:

  • Test requirement management
  • Test planning
  • Test execution
  • Test result analysis



Test Requirement Management

The requirements manager is used to link the requirements with the tests to be carried out. Each requirement in the SRS has to be tested at least once. The SRS specifies both functional and performance requirements. Functional requirements are generated from use case scenarios, while performance requirements depend on the application.
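As a rough illustration of this idea (not Test Director's actual interface), the requirement-to-test links can be pictured as a simple mapping, and coverage then amounts to checking that no requirement is left without a test; the requirement and test IDs below are hypothetical.

    # Sketch of requirement-to-test traceability; requirement and test IDs
    # are hypothetical, not taken from any real project.
    requirement_to_tests = {
        "FR-01 Login with valid credentials": ["TC_LOGIN_01", "TC_LOGIN_02"],
        "FR-02 Update a customer record":     ["TC_UPDATE_01"],
        "PR-01 Login completes within 2 s":   [],   # not yet covered by any test
    }

    # Each requirement in the SRS has to be tested at least once.
    uncovered = [req for req, tests in requirement_to_tests.items() if not tests]

    if uncovered:
        print("Requirements without a linked test:", uncovered)
    else:
        print("Every requirement is linked to at least one test.")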


Test Planning

In test planning, the QA manager does detailed planning and addresses the following issues (a rough sketch of such a plan captured as data appears after the list):
  • Hardware and software platforms on which the testing has to be carried out.
  • The various tests to be performed (functional/regression testing, performance testing, etc.).
  • Time schedule for conducting the tests.
  • Roles and responsibilities of the persons associated with the project.
  • Procedure for running the tests (manual or automated).
  • Various test cases to be generated.
  • Procedure for tracking the progress of testing.
  • Documents to be generated during the testing process.
  • Criteria for completion of testing.
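As a rough sketch (not a Test Director artifact), the outcome of these planning decisions could be captured as plain data like the following; every value shown is hypothetical.

    # Hypothetical test plan captured as plain data; all values are illustrative.
    test_plan = {
        "platforms": ["Windows / SQL Server", "Linux / Oracle"],
        "test_types": ["functional", "regression", "performance"],
        "schedule": {"start": "2008-06-01", "end": "2008-06-30"},
        "roles": {"QA manager": "planning and review",
                  "test engineers": "test design and execution"},
        "execution_mode": "manual and automated",
        "documents": ["test cases", "test logs", "defect reports", "summary report"],
        "completion_criteria": "all planned test cases executed, no open critical defects",
    }

    print("Planned test types:", ", ".join(test_plan["test_types"]))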

During the test planning stage, test design is done, which involves defining the sequence of steps to execute a test in manual testing. This is the most challenging task for test engineers, as the test cases have to be created intelligently to uncover possible bugs. The test engineers also identify common test scripts that can be reused to test different modules and map the workflow between tests (see the sketch below).
The test plan is communicated to all the test engineers and also to the development team.
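The reusable test scripts mentioned above can be pictured as a shared step sequence that several test cases call; the sketch below is only an illustration (the step descriptions and function names are made up), not Test Director's scripting interface.

    # Illustration of reusing a common login script across different test cases.
    # Step descriptions and function names are made-up placeholders.

    def login_steps(user):
        return ["open the login page",
                f"enter user name '{user}' and password",
                "click Login",
                "verify that the home page is displayed"]

    def update_record_test():
        # Reuses the common login script, then adds module-specific steps.
        return login_steps("qa_user") + ["open customer record 1001",
                                         "change the phone number",
                                         "click Save",
                                         "verify the confirmation message"]

    def logout_test():
        return login_steps("qa_user") + ["click Logout",
                                         "verify that the login page is displayed"]

    for step in update_record_test():
        print("-", step)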


Test Execution

The actual testing is carried out based on the test cases generated, either manually or automatically. In the case of automated testing, the test scheduling is done as per the plan. A history of all test runs is maintained, along with an audit trail to trace the history of tests and test runs.
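As an illustration of the kind of information such a run history and audit trail might hold (this is a sketch; the field names are assumptions, not Test Director's actual schema):

    # Sketch of a test run history; field names are assumptions, not
    # Test Director's actual schema.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class TestRun:
        test_name: str
        tester: str
        run_time: datetime
        mode: str            # "manual" or "automated"
        result: str          # "passed" or "failed"
        build_version: str

    run_history = [
        TestRun("TC_LOGIN_01", "asha", datetime(2008, 6, 2, 10, 15), "automated", "passed", "1.2.0"),
        TestRun("TC_LOGIN_01", "asha", datetime(2008, 6, 9, 10, 30), "automated", "failed", "1.3.0"),
    ]

    # The audit trail is simply the ordered history of runs for a given test.
    for run in run_history:
        print(run.run_time, run.test_name, run.result, "on build", run.build_version)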

During this phase, test sets are created. A test set is a set of test cases; for example, a login test set is the set of test cases that test the login process. In addition, the execution logic is specified. The logic defines what to do when a test fails in a series of tests. Consider the sequence of tests: log in to a database, update a record, and log out. Suppose that during testing the login test itself fails. One alternative is to stop the testing completely; the other is to go ahead with the remaining tests but report that the login test has failed.
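The two alternatives amount to an execution policy for the test set: stop on the first failure, or continue and report the failure. A minimal sketch of such a policy, with made-up test functions, is shown below.

    # Minimal sketch of running a test set with a configurable failure policy.
    # The test functions and the policy flag are illustrative assumptions.

    def login_test():
        return False            # simulate the login test failing

    def update_record_test():
        return True

    def logout_test():
        return True

    test_set = [("login", login_test),
                ("update record", update_record_test),
                ("logout", logout_test)]

    def run_test_set(tests, stop_on_failure=True):
        results = {}
        for name, test in tests:
            passed = test()
            results[name] = "passed" if passed else "failed"
            if not passed and stop_on_failure:
                break           # first alternative: stop the testing completely
        return results

    print(run_test_set(test_set, stop_on_failure=True))    # stops after login fails
    print(run_test_set(test_set, stop_on_failure=False))   # runs all tests, reports login as failed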


Test Result Analysis

In this phase, the test results are analyzed to determine which tests passed and which failed. For the failed tests, an analysis has to be carried out as to why they failed. Also, each bug is classified based on its severity. A simple classification is:

  • Critical
  • Major
  • Minor

A more detailed classification is:

  • Cosmetic or GUI related
  • Inconsistent performance of the application
  • Loss of functionality
  • System crash
  • Loss of data
  • Security violation

When a bug is reported to the developer, it is not enough to simply state that there is a bug. You need to give additional information: what the problem is, the system configuration on which the test was run, the version of the software, the testing tool used, and the step-by-step procedure to reproduce the problem.
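That additional information can be thought of as the fields of a defect record; the structure below is only a sketch, and the field names and values are hypothetical.

    # Sketch of a defect report; field names and values are illustrative only.
    bug_report = {
        "summary": "Login fails for user names longer than 20 characters",
        "severity": "major",                # e.g. critical / major / minor
        "system_configuration": "Windows Server, SQL Server, Internet Explorer",
        "software_version": "1.3.0",
        "testing_tool": "manual execution",
        "steps_to_reproduce": [
            "open the login page",
            "enter a 25-character user name and a valid password",
            "click Login",
            "observe the error page instead of the home page",
        ],
    }

    for field, value in bug_report.items():
        print(f"{field}: {value}")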

The bug report is stored in a database. The privileges to read, write, and update the database need to be decided by the QA manager. For example, the test engineer updates the database to record a new bug and gives it the status “new”. The developer may fix the bug and update the status to “cleared”. The QA manager may change the status to “approved” or “to be fixed”. It is also possible to assign priorities to bugs; critical bugs have higher priority than cosmetic bugs.
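These status changes form a small workflow; one way to sketch it is shown below. The roles, statuses, and allowed transitions are assumptions based on the description above, not Test Director's built-in defect workflow.

    # Sketch of the defect status workflow; the allowed transitions per role
    # are assumptions based on the description above.
    allowed_transitions = {
        "test engineer": {None: "new"},                  # reports a new bug
        "developer":     {"new": "cleared",              # claims the bug is fixed
                          "to be fixed": "cleared"},
        "QA manager":    {"new": "to be fixed",          # sends the bug for fixing
                          "cleared": "approved"},        # or verifies the fix
    }

    def change_status(role, current, new):
        if allowed_transitions.get(role, {}).get(current) == new:
            print(f"{role}: {current} -> {new}")
            return new
        raise PermissionError(f"{role} may not move a bug from {current} to {new}")

    status = change_status("test engineer", None, "new")
    status = change_status("developer", status, "cleared")
    status = change_status("QA manager", status, "approved")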

Using the bug tracking and analysis tools, the QA manager and the project manager can decide whether the software can be released to the customer or whether more testing is required.





