Stress Testing
The application is tested against heavy load, such as complex numerical values, a large number of inputs, a large number of queries, and so on, to check how much stress the application can withstand. Stress testing deals with the quality of the application in its environment: the idea is to create an environment more demanding than the application would experience under normal workloads.

This is the hardest and most complex category of testing to accomplish, and it requires a joint effort from all teams. A test environment is established with many testing stations. At each station, a script exercises the system; these scripts are usually based on the regression suite. More and more stations are added, all simultaneously hammering on the system, until the system breaks. The system is repaired and the stress test is repeated until a level of stress is reached that is higher than expected at a customer site.

Race conditions and memory leaks are often found under stress testing. A race condition is a conflict between at least two tests: each test works correctly when run in isolation, but when the two are run in parallel, one or both fail. This is usually due to an incorrectly managed lock. A memory leak happens when a test leaves allocated memory behind and does not correctly return it to the memory allocation scheme. The test seems to run correctly, but after being exercised several times, available memory is reduced until the system fails.
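Below is a minimal Python sketch (not from the original post; all names and counts are illustrative) of the kind of race condition stress testing tends to expose: several threads update a shared counter, and without a lock the read-modify-write interleaves and updates are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(use_lock, iterations=100_000):
    """Bump the shared counter; += on a global is not atomic across threads."""
    global counter
    for _ in range(iterations):
        if use_lock:
            with lock:
                counter += 1
        else:
            counter += 1  # unprotected read-modify-write: updates can be lost

def run(use_lock, n_threads=4):
    global counter
    counter = 0
    workers = [threading.Thread(target=increment, args=(use_lock,))
               for _ in range(n_threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter

print("without lock:", run(False))  # often < 400000: the race loses updates
print("with lock:   ", run(True))   # always 400000
```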
Load Testing
The application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what point the application fails or at what point its performance degrades. Load testing operates at a predefined load level, usually the highest load the system can accept while still functioning properly. Note that load testing does not aim to break the system by overwhelming it; instead, it tries to keep the system constantly humming like a well-oiled machine.

In the context of load testing, extreme importance should be given to having large datasets available for testing. Bugs simply do not surface unless you deal with very large entities: thousands of users in repositories such as LDAP/NIS/Active Directory, thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies on file systems, and so on. Testers obviously need automated tools to generate these large data sets, and fortunately any scripting language worth its salt will do the job.
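As a sketch of the scripting approach suggested above, here is a small Python script that generates thousands of synthetic user records as CSV; the file name, field layout, and record count are illustrative assumptions, not from the original post.

```python
import csv
import random

def generate_users(path, count=10_000):
    """Write `count` synthetic user records to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["uid", "name", "email", "quota_mb"])
        for i in range(count):
            uid = f"user{i:06d}"
            writer.writerow([uid, f"Test User {i}",
                             f"{uid}@example.com",
                             random.choice([100, 500, 1000])])

generate_users("test_users.csv")  # hypothetical input file for the load test
```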
Usability Testing
This testing is also called 'testing for user-friendliness'. It is done when the user interface of the application is an important consideration and needs to suit a specific type of user. Usability testing is the process of working with end users, directly and indirectly, to assess how they perceive a software package and how they interact with it. This process uncovers areas of difficulty for users as well as areas of strength. The goal of usability testing should be to limit and remove difficulties for users and to leverage areas of strength for maximum usability.

This testing should ideally involve direct user feedback, indirect feedback (observed behavior), and, when possible, computer-supported feedback. Computer-supported feedback is often (if not always) left out of this process. It can be as simple as a timer on a dialog to monitor how long it takes users to complete it, plus counters to determine how often certain conditions occur (i.e., error messages, help messages, etc.). Often this involves trivial modifications to existing software but can result in a tremendous return on investment.

Ultimately, usability testing should result in changes to the delivered product in line with the discoveries made regarding usability. These changes should be directly related to real-world usability by average users. As much as possible, documentation should be written supporting the changes so that similar situations can be handled with ease in the future.
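As an illustration of how cheap computer-supported feedback can be, here is a Python sketch that times how long a dialog stays open and counts error events. The InstrumentedDialog class is hypothetical and stands in for whatever UI toolkit the product actually uses.

```python
import time
from collections import Counter

class InstrumentedDialog:
    """Hypothetical dialog wrapper that records timing and event counts."""
    events = Counter()

    def __enter__(self):
        self._start = time.monotonic()
        return self

    def show_error(self, message):
        # count how often each error message is shown to the user
        InstrumentedDialog.events[f"error:{message}"] += 1

    def __exit__(self, *exc):
        InstrumentedDialog.events["dialog_opened"] += 1
        elapsed = time.monotonic() - self._start
        print(f"dialog open for {elapsed:.2f}s")

with InstrumentedDialog() as dlg:
    dlg.show_error("invalid date")  # simulated user interaction
    time.sleep(0.1)

print(InstrumentedDialog.events)
```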
Smoke Testing
This type of testing, also called sanity testing, is done to check whether the application is ready for further major testing and works properly without failing, at least to a minimally expected level. The name comes from a test of new or repaired equipment by turning it on: if it smokes, it doesn't work! The term was originally coined in the manufacture of containers and pipes, where smoke was introduced to determine whether there were any leaks. A common practice at Microsoft and some other shrink-wrap software companies is the "daily build and smoke test" process: every file is compiled, linked, and combined into an executable program every day, and the program is then put through a smoke test, a relatively simple check to see whether the product "smokes" when it runs.
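A daily smoke test can be as small as the following Python sketch, which launches the freshly built program and checks a couple of basic invocations. The ./myapp binary and its flags are placeholders for whatever the build actually produces.

```python
import subprocess
import sys

# Basic invocations the build must survive; binary and flags are placeholders.
CHECKS = [
    ["./myapp", "--version"],   # does the build start at all?
    ["./myapp", "--selftest"],  # does a trivial built-in check pass?
]

def smoke_test():
    for cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, timeout=60)
        if result.returncode != 0:
            print(f"SMOKE FAIL: {' '.join(cmd)} -> exit {result.returncode}")
            return 1  # a failed smoke test rejects the whole build
    print("smoke test passed: build accepted for further testing")
    return 0

if __name__ == "__main__":
    sys.exit(smoke_test())
```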
Volume Testing
Volume testing checks the efficiency of the application: a huge amount of data is processed through the application under test in order to probe the extreme limits of the system. Volume testing, as its name implies, purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction processing systems capturing real-time sales, or database updates and data retrieval. Volume testing seeks to verify the physical and logical limits of a system's capacity and to ascertain whether those limits are acceptable for the projected volume of the organization's business processing.
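Here is a minimal volume-test sketch in Python. SQLite is used only to keep the example self-contained; the table, row count, and query are illustrative assumptions.

```python
import sqlite3

ROWS = 1_000_000  # scale toward the organization's projected business volume

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO sales (amount) VALUES (?)",
    ((float(i % 100),) for i in range(ROWS)),
)
conn.commit()

# The assertion is the test: does the system still answer correctly at volume?
(count,) = conn.execute("SELECT COUNT(*) FROM sales").fetchone()
assert count == ROWS, f"expected {ROWS} rows, found {count}"
print(f"processed {count} rows successfully")
```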
Domain Testing
Domain testing is the most frequently described test technique. Some authors write only about domain testing when they write about test design. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent; then you test a representative from each subset.
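For instance, the following Python sketch partitions the input domain of a hypothetical discount function into equivalence classes and tests one representative from each subset, plus the boundary values between them.

```python
def discount(age):
    """Hypothetical pricing rule used only to illustrate the partitions."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return 0.50  # child discount
    if age < 65:
        return 0.00  # full price
    return 0.30      # senior discount

# One representative per equivalence class, plus the boundaries between them.
cases = {-1: ValueError, 0: 0.50, 17: 0.50, 18: 0.00,
         64: 0.00, 65: 0.30, 120: 0.30}

for age, expected in cases.items():
    if expected is ValueError:
        try:
            discount(age)
        except ValueError:
            continue
        raise AssertionError(f"age={age} should have raised ValueError")
    assert discount(age) == expected, f"wrong discount for age={age}"
print("all domain test cases passed")
```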
Regression Testing
Regression testing is a style of testing that focuses on retesting after changes are made. In traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Traditional regression tests are often partially automated. These notes focus on traditional regression testing.
Regression testing attempts to mitigate two risks: first, that a change intended to fix a bug did not actually fix it; and second, that a change had an unintended side effect, breaking something that previously worked or reintroducing an old bug.

Regression testing approaches differ in their focus. Common examples include:
Bug regression: We retest a specific bug that has allegedly been fixed.
Old fix regression testing: We retest several old bugs that were fixed, to see if they are back. (This is the classical notion of regression: the program has regressed to a bad state.)
General functional regression: We retest the product broadly, including areas that worked before, to see whether more recent changes have destabilized working code. (This is the typical scope of automated regression testing.)
Conversion or port testing: The program is ported to a new platform and a subset of the regression test suite is run to determine whether the port was successful. (Here, the main changes of interest might be in the new platform, rather than in the modified old code.)
Configuration testing: The program is run with a new device, on a new version of the operating system, or in conjunction with a new application. This is like port testing, except that the underlying code hasn't been changed; only the external components that the software under test must interact with are new.
Localization testing: The program is modified to present its user interface in a different language and/or following a different set of cultural rules. Localization testing may involve several old tests (some of which have been modified to take into account the new language) along with several new (non-regression) tests.
Smoke testing, also known as build verification testing: A relatively small suite of tests is used to qualify a new build. Normally the tester is asking whether any components are so badly broken that the build is not worth testing, whether components are broken in obvious ways that suggest a corrupt build, or whether critical fixes that were the primary intent of the new build didn't work. The typical result of a failed smoke test is rejection of the build (testing of the build stops), not just a new set of bug reports. One suite can serve several of these scopes, as sketched below.
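One way to serve several of these scopes from a single suite, assuming pytest is available, is to mark the quick build-verification subset so it can run on its own; the marker names and tests below are illustrative.

```python
import pytest

# Register the marker in pytest.ini ([pytest] markers = smoke: ...) to
# silence the unknown-marker warning; "smoke" is a convention, not a built-in.

@pytest.mark.smoke
def test_application_starts():
    assert True  # stand-in for "the new build launches at all"

@pytest.mark.smoke
def test_critical_fix_works():
    assert True  # stand-in for bug regression on the fix this build carries

def test_full_report_generation():
    assert True  # broader functional regression, too slow for smoke runs
```

Running `pytest -m smoke` then qualifies the build quickly, while a plain `pytest` run performs the general functional regression.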
User Acceptance Testing
In this type of testing, the software is handed over to the user in order to find out whether it meets user expectations and works as expected. In software development, user acceptance testing (UAT), also called beta testing, application testing, and end-user testing, is a phase of software development in which the software is tested in the "real world" by the intended audience. UAT can be done through in-house testing, in which volunteers or paid test subjects use the software, or, more typically for widely distributed software, by making the test version available for download and free trial over the Web. The experiences of the early users are forwarded back to the developers, who make final changes before releasing the software commercially.
Alpha Testing
In this type of testing, users are invited to the development center, where they use the application while the developers note every input or action carried out by the user. Any abnormal behavior of the system is noted and rectified by the developers.
Beta Testing
In this type of testing, the software is distributed as a beta version to users, who test the application at their own sites. As users explore the software, any exceptions or defects that occur are reported to the developers. Beta testing comes after alpha testing: versions of the software, known as beta versions, are released to a limited audience outside of the company, and that audience reports the errors it finds.