
Black Box Testing: Introduction

Black Box Testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester knows only the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. For this reason, black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. It also means the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. Test groups are often used for this kind of testing: "Test groups are sometimes called professional idiots...people who are good at designing incorrect data." 1 Also, due to the nature of black box testing, test planning can begin as soon as the specifications are written. The opposite of this is glass box testing, where test data are derived from direct examination of the code to be tested; for glass box testing, the test cases cannot be determined until the code has actually been written. Both of these testing techniques have advantages and disadvantages, but when combined they help to ensure thorough testing of the product.
Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.
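As a small illustration (a hypothetical example written in Python for this article; the function and its specification are invented), a black box test exercises only documented inputs and expected outputs, never the implementation:

    # Black-box test of a hypothetical abs_value() function. Only the
    # specification ("returns the absolute value of an integer") is used;
    # the implementation is never examined by the tester.
    def abs_value(x):                  # stand-in implementation; a real
        return x if x >= 0 else -x     # tester treats this as opaque

    def test_abs_value():
        assert abs_value(5) == 5       # legal positive input
        assert abs_value(-5) == 5      # legal negative input
        assert abs_value(0) == 0       # boundary between the two classes

    test_abs_value()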





Advantages & Disadvantages

Black Box Testing

Advantages & Disadvantages

Advantages of Black Box Testing

  • More effective on larger units of code than glass box testing
  • Tester needs no knowledge of the implementation, including the specific programming language
  • Tester and programmer are independent of each other
  • Tests are done from a user's point of view
  • Helps to expose any ambiguities or inconsistencies in the specifications
  • Test cases can be designed as soon as the specifications are complete


Disadvantages of Black Box Testing

  • Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever
  • Without clear and concise specifications, test cases are hard to design
  • There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
  • May leave many program paths untested
  • Can't be directed toward specific segments of code which may be very complex (and therefore more error prone)
  • Most testing related research has been directed toward glass box testing






Black Box Testing: Testing Strategies & Techniques

  • Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guesswork by the tester as to the methods of the function (the first four strategies in this list are combined in a sketch after the list)
  • Data outside of the specified input range should be tested to check the robustness of the program
  • Boundary cases should be tested (top and bottom of the specified range) to make sure the highest and lowest allowable inputs produce proper output
  • The number zero should be tested when numerical data is to be input
  • Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real-time systems
  • Crash testing should be performed to see what it takes to bring the system down
  • Test monitoring tools should be used whenever possible to track which tests have already been performed and the outputs of these tests, to avoid repetition and to aid in software maintenance
  • Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing, and state testing
  • Finite state machine models can be used as a guide to design functional tests
  • According to Beizer 2, the following is a general order by which tests should be designed:
    • Clean tests against requirements.
    • Additional structural tests for branch coverage, as needed.
    • Additional tests for data-flow coverage, as needed.
    • Domain tests not covered by the above.
    • Special techniques as appropriate--syntax, loop, state, etc.
    • Any dirty tests not covered by the above.
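
A minimal sketch, in Python, of the first four strategies above, assuming an invented validator whose specified input range is 0 to 120; the range, the constants, and the number of random cases are all illustrative assumptions:

    import random

    LOW, HIGH = 0, 120               # the specified input range (assumed spec)

    def black_box_cases(n_random=10):
        cases = [
            LOW, HIGH,               # boundary cases: bottom and top of range
            LOW - 1, HIGH + 1,       # outside the range, to check robustness
            0,                       # zero, always tested for numerical input
        ]
        # Randomly generated inputs: the tester specifies only the range,
        # not the values, eliminating guesswork about the implementation.
        cases += [random.randint(LOW, HIGH) for _ in range(n_random)]
        return cases

    print(black_box_cases())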






Black Box Testing: Functional Testing

In this type of testing, the software is tested for the functional requirements. The tests are written in order to check if the application behaves as expected. Although functional testing is often done toward the end of the development cycle, it can, and should, be started much earlier. Individual components and processes can be tested early on, even before it's possible to do functional testing on the entire system. Functional testing covers how well the system executes the functions it is supposed to execute, including user commands, data manipulation, searches, business processes, user screens, and integrations. It covers the obvious surface-level functions as well as back-end operations (such as security and how upgrades affect the system).
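
As a sketch, a functional test checks one stated requirement purely through the application's interface. The search() function and its requirement here are invented for illustration:

    # Functional-test sketch: search() is checked against its stated
    # requirement ("return records whose name contains the query"),
    # not against its internal implementation.
    def search(records, query):      # stand-in implementation for the demo
        return [r for r in records if query in r["name"]]

    def test_search_returns_matching_records():
        records = [{"name": "alpha"}, {"name": "beta"}, {"name": "alphabet"}]
        result = search(records, "alpha")
        assert [r["name"] for r in result] == ["alpha", "alphabet"]

    test_search_returns_matching_records()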









Black Box Testing: Stress Testing

The application is tested against heavy load, such as complex numerical values, a large number of inputs, a large number of queries, etc., to check how much stress/load the application can withstand. Stress testing deals with the quality of the application in its environment: the idea is to create an environment more demanding of the application than it would experience under normal workloads. This is the hardest and most complex category of testing to accomplish, and it requires a joint effort from all teams.

A test environment is established with many testing stations. At each station, a script exercises the system; these scripts are usually based on the regression suite. More and more stations are added, all simultaneously hammering on the system, until the system breaks. The system is repaired and the stress test is repeated until a level of stress is reached that is higher than expected to be present at a customer site.

Race conditions and memory leaks are often found under stress testing. A race condition is a conflict between at least two tests: each test works correctly in isolation, but when the two are run in parallel, one or both fail. This is usually due to an incorrectly managed lock. A memory leak happens when a test leaves allocated memory behind and does not correctly return it to the memory allocation scheme. The test seems to run correctly, but after being exercised several times, available memory is reduced until the system fails.
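
A minimal sketch of that race-condition scenario, with threads standing in for the testing stations; the counter, the lock, and the iteration counts are all illustrative:

    import threading

    # Stress-test sketch: several "stations" (threads) hammer a shared
    # counter simultaneously. Removing the lock demonstrates the kind of
    # race condition (lost updates) that stress testing tends to expose.
    counter = 0
    lock = threading.Lock()

    def station(iterations=100_000):
        global counter
        for _ in range(iterations):
            with lock:               # an incorrectly managed (or missing)
                counter += 1         # lock here causes lost updates

    threads = [threading.Thread(target=station) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert counter == 8 * 100_000    # can fail intermittently without the lock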






Black Box Testing: Load Testing

The application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what point the web site/application fails or at what point its performance degrades. Load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly. Note that load testing does not aim to break the system by overwhelming it, but instead tries to keep the system constantly humming like a well-oiled machine.

In the context of load testing, extreme importance should be given to having large datasets available for testing. Some bugs simply do not surface unless you deal with very large entities: thousands of users in repositories such as LDAP/NIS/Active Directory, thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies on file systems, etc. Testers obviously need automated tools to generate these large datasets; fortunately, any scripting language worth its salt will do the job.
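
As the paragraph above suggests, an ordinary scripting language is enough to generate such datasets. A sketch in Python (the file name, record shape, and sizes are arbitrary choices):

    import csv, random, string

    # Sketch of generating a large test dataset: 100,000 fake mailbox
    # records written to a CSV file for use by load-test scripts.
    def random_name(n=8):
        return "".join(random.choices(string.ascii_lowercase, k=n))

    with open("mailboxes.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["user", "mailbox_size_mb"])
        for _ in range(100_000):
            writer.writerow([random_name(), random.randint(1, 2048)])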







Black Box Testing: Usability Testing

This testing is also called 'Testing for User-Friendliness'. It is done when the user interface of the application is an important consideration and needs to be tailored to a specific type of user. Usability testing is the process of working with end-users, directly and indirectly, to assess how the user perceives a software package and how they interact with it. This process uncovers areas of difficulty for users as well as areas of strength. The goal of usability testing should be to limit and remove difficulties for users and to leverage areas of strength for maximum usability.

This testing should ideally involve direct user feedback, indirect feedback (observed behavior), and, when possible, computer-supported feedback. Computer-supported feedback is often (if not always) left out of this process. It can be as simple as a timer on a dialog to monitor how long it takes users to use the dialog, and counters to determine how often certain conditions occur (e.g. error messages, help messages). Often this involves trivial modifications to existing software, but it can result in a tremendous return on investment.

Ultimately, usability testing should result in changes to the delivered product in line with the discoveries made regarding usability. These changes should be directly related to real-world usability by average users. As much as possible, documentation should be written supporting the changes so that similar situations can be handled with ease in the future.
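
A sketch of that computer-supported feedback: a timer around a "dialog" plus counters for conditions such as error messages. The event names and the simulated interaction are invented:

    import time
    from collections import Counter

    # Computer-supported feedback sketch: time spent in a dialog is
    # recorded, and occurrences of certain conditions are counted.
    events = Counter()

    class DialogTimer:
        def __enter__(self):
            self.start = time.monotonic()
            return self
        def __exit__(self, *exc):
            events["dialog_seconds"] += time.monotonic() - self.start
            return False

    def show_error(msg):
        events["error_messages"] += 1    # counter for error conditions
        print("ERROR:", msg)

    with DialogTimer():                  # simulated user interaction
        show_error("invalid date format")
    print(dict(events))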






Black Box Testing: Smoke Testing

This type of testing is also called sanity testing and is done in order to check whether the application is ready for further major testing and works properly at the most basic level. The name comes from a test of new or repaired equipment by turning it on: if it smokes, it doesn't work. The term was originally coined in the manufacture of containers and pipes, where smoke was introduced to determine if there were any leaks. Applied to software, it refers to testing the basic functions of the product. A common practice at Microsoft and some other shrink-wrap software companies is the "daily build and smoke test" process: every file is compiled, linked, and combined into an executable program every day, and the program is then put through a "smoke test," a relatively simple check to see whether the product "smokes" when it runs.
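
A sketch of the daily build and smoke test loop, assuming a make-based build and a pytest smoke suite; both commands are placeholders for whatever a real project uses:

    import subprocess, sys

    # Daily build and smoke test sketch: build the product, then run a
    # small, fast check; any failure rejects the whole build.
    def daily_build_and_smoke_test():
        build = subprocess.run(["make", "all"])          # hypothetical build step
        if build.returncode != 0:
            sys.exit("build failed: rejecting build")
        smoke = subprocess.run(["python", "-m", "pytest", "tests/smoke", "-x"])
        if smoke.returncode != 0:
            sys.exit("smoke test failed: the build 'smokes' and is rejected")
        print("build accepted for further testing")

    if __name__ == "__main__":
        daily_build_and_smoke_test()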








Black Box Testing: Volume Testing

Volume testing is done against the efficiency of the application: a huge amount of data is processed through the application under test in order to check the extreme limitations of the system.
Volume testing, as its name implies, purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction processing systems capturing real-time sales, or database updates and data retrieval.
Volume testing seeks to verify the physical and logical limits of a system's capacity and to ascertain whether such limits are acceptable to meet the projected capacity of the organization's business processing.
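
A sketch of driving a target data volume through a system, with an in-memory SQLite table standing in for the system under test; the row count is an assumed projected capacity:

    import sqlite3

    # Volume-test sketch: push a target number of rows through a database
    # table to probe the system's capacity limits.
    ROWS = 1_000_000                     # assumed projected business volume

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        ((i, i * 0.01) for i in range(ROWS)),
    )
    count = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
    assert count == ROWS                 # the system handled the target volume
    print("processed", count, "rows")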








Black Box Testing: Domain Testing

Domain testing is the most frequently described test technique. Some authors write only about domain testing when they write about test design. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent. Then you test a representative from each subset.
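
A sketch of the idea, using an invented shipping-fee rule (orders over 100 ship free, orders from 1 to 100 pay a flat fee of 5, anything lower is rejected): the input space is divided into equivalence classes, and a few representatives, including the boundaries, are tested from each:

    # Domain-testing sketch: each equivalence class of the input space
    # gets one or more representatives, chosen to include boundaries.
    partitions = {
        "below_valid":   [-10, 0],        # below the valid domain
        "flat_fee":      [1, 50, 100],    # boundaries and an interior point
        "free_shipping": [101, 99999],    # boundary and a large value
    }

    def shipping_fee(total):              # stand-in implementation
        if total < 1:
            raise ValueError("invalid order total")
        return 0 if total > 100 else 5

    for cls, representatives in partitions.items():
        for total in representatives:
            try:
                print(cls, total, "->", shipping_fee(total))
            except ValueError as err:
                print(cls, total, "-> rejected:", err)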











Black Box Testing: Regression Testing

Regression testing is a style of testing that focuses on retesting after changes are made. In traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Traditional regression tests are often partially automated. These notes focus on traditional regression testing.

Regression testing attempts to mitigate two risks:
  • A change that was intended to fix a bug failed to fix it.
  • Some change had a side effect, unfixing an old bug or introducing a new bug.

Regression testing approaches differ in their focus. Common examples include:

Bug regression: We retest a specific bug that has been allegedly fixed.

Old fix regression testing: We retest several old bugs that were fixed, to see if they are back. (This is the classical notion of regression: the program has regressed to a bad state.)

General functional regression: We retest the product broadly, including areas that worked before, to see whether more recent changes have destabilized working code. (This is the typical scope of automated regression testing.)

Conversion or port testing: The program is ported to a new platform and a subset of the regression test suite is run to determine whether the port was successful. (Here, the main changes of interest might be in the new platform, rather than the modified old code.)

Configuration testing: The program is run with a new device or on a new version of the operating system or in conjunction with a new application. This is like port testing except that the underlying code hasn't been changed--only the external components that the software under test must interact with.

Localization testing: The program is modified to present its user interface in a different language and/or following a different set of cultural rules. Localization testing may involve several old tests (some of which have been modified to take into account the new language) along with several new (non-regression) tests.

Smoke testing, also known as build verification testing: A relatively small suite of tests is used to qualify a new build. Normally, the tester is asking whether any components are so badly broken that the build is not worth testing, whether components are broken in obvious ways that suggest a corrupt build, or whether critical fixes that were the primary intent of the new build didn't work. The typical result of a failed smoke test is rejection of the build (testing of the build stops), not just a new set of bug reports.
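
A sketch of bug regression, the narrowest approach above: each fixed bug keeps a permanent test, named after a hypothetical tracker id, that re-runs the exact input that originally failed:

    # Bug-regression sketch: parse_price() and BUG-1234 are invented.
    def parse_price(text):
        # Fix for hypothetical BUG-1234: inputs containing thousands
        # separators ("1,299.99") used to crash the parser.
        return float(text.replace(",", ""))

    def test_bug_1234_comma_in_price():
        # Permanent regression test: re-runs the input that originally failed,
        # so the suite catches this bug if it ever comes back.
        assert parse_price("1,299.99") == 1299.99

    test_bug_1234_comma_in_price()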






Black Box Testing: User Acceptance Testing

In this type of testing, the software is handed over to the user in order to find out if it meets user expectations and works as expected. In software development, user acceptance testing (UAT), also called beta testing, application testing, and end-user testing, is a phase of software development in which the software is tested in the "real world" by the intended audience. UAT can be done by in-house testing, in which volunteers or paid test subjects use the software, or, more typically for widely distributed software, by making the test version available for downloading and free trial over the Web. The experiences of the early users are forwarded back to the developers, who make final changes before releasing the software commercially.


Alpha Testing

In this type of testing, users are invited to the development center, where they use the application while the developers note every particular input or action carried out by the user. Any abnormal behavior of the system is noted and rectified by the developers.


Beta Testing

In this type of testing, the software is distributed as a beta version to users, who test the application at their own sites. As the users explore the software, any exceptions or defects that occur are reported to the developers. Beta testing comes after alpha testing: versions of the software, known as beta versions, are released to a limited audience outside of the company, and that audience reports the errors it finds.





