What types of software testing should be considered?

Black box testing: This type of testing does not rely on any knowledge of an application's internal design or code; tests are based on requirements and functionality.

White box testing: These tests are based on knowledge of the internal logic of an application's code; they cover code statements, branches, paths, and conditions.
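As a minimal sketch of the white-box idea, the tests below target each branch of a small hypothetical function (`classify` is invented for illustration), rather than being derived from external requirements:

```python
# A function with three branches; white-box tests exercise each branch explicitly.
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"

# One test case per branch gives full branch coverage of `classify`.
assert classify(-1) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
```

In practice a coverage tool (such as coverage.py for Python) reports which statements and branches the test suite actually reached.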

Unit testing: The most 'micro' scale of testing, used to test particular functions or modules of code. This is normally done by the programmer rather than by testers, as it requires detailed knowledge of the internal program design and code. It is not always easy to do unless the application has a well-designed architecture with tight code, and it may require developing test driver modules or test harnesses.
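A minimal sketch of a unit test, testing one function in isolation (`apply_discount` is a hypothetical function invented for the example; the tests use pytest-style plain asserts):

```python
# Hypothetical function under test: a simple price calculator.
def apply_discount(price, percent):
    """Return price reduced by percent; rejects invalid percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# pytest-style unit tests: plain functions with bare asserts (run with `pytest`).
def test_normal_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_zero_discount():
    assert apply_discount(80.0, 0) == 80.0

def test_invalid_percent_rejected():
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass  # expected: invalid input is refused
    else:
        raise AssertionError("expected ValueError")
```

Each test exercises exactly one function and one behavior, which is what keeps unit tests fast and their failures easy to localize.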

Incremental integration testing: Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are complete, or that test drivers be developed as needed; done by programmers or by testers.

Integration testing: Testing combined parts of an application to determine if they work together correctly. The 'parts' can be modules of code, individual applications, client and server applications on a network, etc. This type of testing is especially relevant for client/server and distributed systems.
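A small sketch of the integration idea: two stand-in "modules" (a parser and a store, both invented for this example) are tested together, so the test verifies the data flowing between them rather than either part alone:

```python
# Two hypothetical "modules": a record parser and an in-memory score store.
def parse_record(line):
    """Parse a 'name,score' line into a (name, int score) tuple."""
    name, score = line.split(",")
    return name.strip(), int(score)

class ScoreStore:
    def __init__(self):
        self._scores = {}

    def add(self, name, score):
        self._scores[name] = score

    def top(self):
        return max(self._scores, key=self._scores.get)

def test_parser_and_store_together():
    # Integration test: output of one part feeds directly into the other.
    store = ScoreStore()
    for line in ["ada, 90", "bob, 75"]:
        store.add(*parse_record(line))
    assert store.top() == "ada"
```

Bugs at the seam (a parser returning strings where the store expects numbers, say) are exactly what unit tests of each part in isolation tend to miss.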

Functional testing: These tests are oriented to the functional requirements of an application; this type of testing should be done by testers. This doesn't mean that developers shouldn't verify that their code works before publishing it (which, of course, applies to any stage of testing).

System testing: Based on the overall requirements specifications; covers all combined parts of a system.

End-to-end testing: Similar to system testing; involves testing an entire application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.

Sanity testing or smoke testing: Typically an initial test to determine whether a new version of software is working well enough to be accepted for a major testing effort. For example, if the new software is crashing systems every five minutes, slowing systems to a crawl, or corrupting databases, it may not be in a sound enough condition to warrant further testing in its current state.
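A smoke test can be sketched as a handful of shallow checks on the most basic behavior; depth comes later, in the full test effort. Here `create_app` is a hypothetical application factory standing in for whatever starts the real system:

```python
# Hypothetical application factory standing in for the system under test.
def create_app():
    return {"status": "ok", "version": "1.0"}

def smoke_test():
    """Return True if the basics work at all; no depth, just 'does it start?'."""
    try:
        app = create_app()            # does the application even come up?
        assert app["status"] == "ok"  # does its most basic invariant hold?
        return True
    except Exception:
        return False
```

If `smoke_test()` returns False, the build is rejected before any serious testing time is spent on it.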

Regression testing: Retesting after bug fixes or software modifications. It can be difficult to determine how much retesting is needed, especially near the end of the development cycle. Automated testing tools are very useful for this type of testing.

Acceptance testing: This can be considered the final test, performed based on end-user/customer specifications, or based on use by end users/customers over a limited period of time.

Load testing: Testing an application under heavy loads, such as testing a website under a range of loads to determine at what point the system's response time degrades or fails.
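The "range of loads" idea can be sketched with a toy harness: fire increasing numbers of concurrent requests at a stand-in handler and time each run (`handle_request` and its fixed delay are assumptions for illustration; a real load test would target the actual system over the network):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Hypothetical stand-in for the system under test."""
    time.sleep(0.001)  # simulate a fixed amount of work per request
    return payload.upper()

def measure(load):
    """Fire `load` concurrent requests and return the total wall-clock time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=load) as pool:
        list(pool.map(handle_request, ["req"] * load))
    return time.perf_counter() - start

# Step up the load and watch where response time starts to degrade.
for load in (1, 10, 50):
    print(f"{load:3d} concurrent requests: {measure(load):.3f}s")
```

Plotting elapsed time against load level is what reveals the knee of the curve, the point at which response time stops scaling gracefully.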

Stress testing: The term is often used interchangeably with 'load' and 'performance' testing. It is also used to describe tests such as functional testing of the system under unusually heavy loads, intense repetition of certain actions or inputs, input of large numerical values, large and complex queries to a database system, etc.

Performance testing: The term is often used interchangeably with 'stress' and 'load' testing. Ideally, 'performance' tests are defined in the requirements documentation or in Test or QA Plans.

Usability testing: These tests are done for 'ease of use'. Clearly, this is subjective and will depend on the target end user or customer. User interviews, surveys, video recording of user sessions, and other techniques may be used. Programmers and testers are generally not suitable as usability testers.

Compatibility testing: Testing how well the software works in a particular hardware/software/operating system/network environment.

User acceptance testing: determining whether the software is satisfactory to an end user or customer.

Benchmark testing: Comparison of the software's strengths and weaknesses with those of competing products.

Alpha testing: Testing an application when development is nearing completion; minor design changes may still be made as a result of such testing. This is typically done by end users or others, not by programmers or testers.

Beta testing: Testing when development and testing are essentially complete and final bugs and issues need to be found before the final release. This is normally done by end users or others, not by programmers or testers.

Mutation testing: A method of determining whether or not a set of test data or test cases is useful by intentionally introducing multiple code changes (“bugs”) and retesting with the original data/test cases to determine if the “errors” are detected.
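The mutation idea can be sketched in a few lines: a deliberate "bug" is planted in a copy of the function, and the existing test cases are rerun to see whether they notice (`max_of`, its mutant, and the tiny suite are all invented for this example):

```python
# A hypothetical function and a deliberately mutated copy of it.
def max_of(a, b):
    return a if a >= b else b

def max_of_mutant(a, b):  # mutation: '>=' changed to '<='
    return a if a <= b else b

def suite(fn):
    """Run the existing test cases against a given implementation."""
    cases = [((3, 5), 5), ((5, 3), 5), ((4, 4), 4)]
    return all(fn(a, b) == want for (a, b), want in cases)

# A useful test suite passes the original and "kills" the mutant.
assert suite(max_of) is True
assert suite(max_of_mutant) is False  # mutant detected: the suite is effective
```

A mutant that survives (the suite still passes) points at test data that never exercises the mutated behavior; real mutation tools generate and check many such mutants automatically.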
