April 3, 2006

A Glossary of Testing Terms

(Last updated August 18, 2010)
Acceptance Criteria
The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.  [IEEE]
Acceptance Testing
The process of the user testing the system and, based on the results, either granting or refusing acceptance of the software/system being tested.  [wikipedia]
The process of comparing the program to its initial requirements and the current needs of its end users. It is an unusual type of test in that it is usually performed by the program's customer or end user, and normally is not considered the responsibility of the development organization. [G. Myers]
Testing to verify readiness for implementation or use.
Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system.  [B. Hetzel] 
Accessibility Testing
Verifying that a product is accessible to people with disabilities.
Ad-Hoc Testing
The process of improvised, impromptu bug searching.  [J. Bach]
Agile Testing 
Testing practice for projects using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.  [ISTQB]
Alpha Testing
Testing conducted internally by the manufacturer, alpha testing takes a new product through a protocol of testing procedures to verify product functionality and capability. This in-house testing period precedes Beta Testing.
In-house testing performed by the test team (and possibly other interested, friendly insiders).  [C. Kaner, et al] 
Anomaly
Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone's perception or experience.  Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.  [IEEE]
Application Programming Interface (API)
An application programming interface (API) is an interface in computer science that defines the ways by which an application program may request services from libraries and/or operating systems.  [wikipedia] 
Application Under Test (AUT)
The application which is the target of the testing process.
Audit
An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:
(1) the form or content of the products to be produced
(2) the process by which the products shall be produced
(3) how compliance to standards or guidelines shall be measured [IEEE]
Audit Trail
A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point.  This facilitates defect analysis and allows a process audit to be carried out.  [ISTQB] 
Automated Testing
Testing which is performed, to a greater or lesser extent, by a computer, rather than manually. 
Availability
The degree to which a component or system is operational and accessible when required for use.  Often expressed as a percentage.  [IEEE]
Back-to-Back Testing 
Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.  [IEEE] 
Backus–Naur Form (BNF)
Backus–Naur Form (BNF) is a metasyntax used to express context-free grammars: that is, a formal way to describe formal languages.  [wikipedia]
Baseline
A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [IEEE] 
More generally, a baseline is a set of observations or values that represent the background level of some measurable quantity.  Once a baseline is established, variations to that baseline can be measured after something in the system is changed.  
Behavioral Testing
When you do behavioral testing, you specify your tests in terms of externally visible inputs, outputs, and events. However, you can use any source of information in the design of the tests. [C. Kaner]
Benchmark Test
(1) A standard against which measurements or comparisons can be made. (2) A test that is to be used to compare components or systems to each other or to a standard as in (1). [IEEE]
Bespoke Software
See Custom Software.
Beta Testing
Testing conducted at one or more customer sites by the end-user of a software product or system. This is usually a "friendly" user and the testing is conducted before the system is made generally available.
A type of user testing that uses testers who aren't part of your organization and who are members of your product's target market.  The product under test is typically very close to completion.  [C. Kaner, et al] 
Black Box Testing
Black box testing refers to testing that is done without reference to the source code or other information about the internals of the product. The black box tester consults external sources for information about how the product runs (or should run), what the product-related risks are, what kinds of errors are likely and how the program should handle them, and so on. [C. Kaner]
Testing that treats the product as a black box. You don't have any access to or knowledge of the internal workings, so you probe it like a customer would - you use it and abuse it till it breaks. [E. Brechner]
Blink Testing
Looking for overall patterns of unexpected changes, rather than focusing on specifics.  For example, rapidly flipping between two web pages which are expected to be the same.  If they are not the same, differences stand out visibly. Or, rapidly scrolling through a large log file, looking for unusual patterns of log messages.
Bottom-Up Testing
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components.  [wikipedia]
Boundary Value
In the context of a software program, a boundary value is a specific value at the extreme edges of an independent linear variable or at the edge or edges of equivalence class subsets of an independent linear variable.  [Bj Rollison]
Boundary Value Analysis
Guided testing which explores values at and near the minimum and maximum allowed values for a particular input or output.
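As a sketch, suppose a validator accepts ages in the inclusive range 0..130 (the function and range are illustrative assumptions, not from any definition above). Boundary value analysis selects values at and just outside each edge:

```python
# Hypothetical validator: accepts ages in the inclusive range 0..130.
def is_valid_age(age):
    return 0 <= age <= 130

# Boundary values: at and just outside each edge of the valid range.
boundary_cases = {
    -1: False,   # just below the minimum
    0: True,     # the minimum itself
    1: True,     # just above the minimum
    129: True,   # just below the maximum
    130: True,   # the maximum itself
    131: False,  # just above the maximum
}

results = {age: is_valid_age(age) for age in boundary_cases}
```

Off-by-one errors cluster at exactly these edges, which is why a handful of boundary values often finds more defects than many mid-range values.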
Branch Coverage 
The percentage of branches that have been exercised by a test suite.  100% branch coverage implies both 100% decision coverage and 100% statement coverage.  [ISTQB]
Buddy Drop
A private build of a product used to verify code changes before they have been checked into the main code base. [E. Brechner]
Bug
A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program's source code or its design. It is said that there are bugs in all useful computer programs, but well-written programs contain relatively few bugs, and these bugs typically do not prevent the program from performing its task. ... [wikipedia]  
Bug Bash 
In-house testing using secretaries, programmers, marketers, and anyone who is available.  A typical bug-bash lasts a half-day and is done when the software is close to being ready to release.  [C. Kaner, et al]
Bug Triage or Bug Crawl or Bug Scrub
A meeting or discussion focused on an item-by-item review of every active bug reported against the system under test.  During this review, fix dates can be assigned, insignificant bugs can be deferred, and project management can assess the progress of the development process. Also called a bug scrub. [R. Black]
A regular meeting toward the end of development cycles to manage issues. Typically, these meetings are attended by representatives from the three primary engineering disciplines: program management, development, and test. [E. Brechner]
Build Verification Test (BVT)  or Build Acceptance Test (BAT) 
A set of tests run on each new build of a product to verify that the build is testable before the build is released into the hands of the test team.  This test is generally a short set of tests, which exercise the mainstream functionality of the application software. 
Capability Maturity Model (CMM) 
A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best-practices for planning, engineering and managing software development and maintenance.  [CMM]
Capability Maturity Model Integration (CMMI) 
A framework that describes the key elements of an effective product development and maintenance process.  The Capability Maturity Model Integration covers best-practices for planning, engineering and managing product development and maintenance.  CMMI is the designated successor of the CMM.  [CMMI]
Capacity Testing 
Testing to determine the maximum users a computer or set of computers can support.  [A. Page]
Capture/Playback Tool (Capture/Replay Tool) 
A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed).  These tools are often used to support automated regression testing.  [ISTQB]
Cause-And-Effect Diagram 
A diagram used to depict and help analyze factors causing an overall effect.  Also called a Fishbone Diagram or Ishikawa Diagram.
Churn
Churn is a term used to describe the amount of changes that happen in a file or module over a selected period.  [A. Page]
Code Complete
The stage at which the developer believes all the code necessary to implement a feature has been checked into source control. Often this is a judgement call, but on better teams it's actually measured based on quality criteria (at which point it's often called "feature complete"). [E. Brechner]
Code Coverage
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.  [ISTQB]
Code Freeze
The point in time in the development process in which no changes whatsoever are permitted to a portion or the entirety of the program's source code.  [wikipedia]
Command Line Interface (CLI)
In Command line interfaces, the user provides the input by typing a command string with the computer keyboard and the system provides output by printing text on the computer monitor.  [wikipedia]
Commercial Off-The-Shelf Software (COTS)
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.  [ISTQB]
Compliance
The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions.  [ISTQB]
Compliance Testing
The process of testing to determine the compliance of the component or system.  [ISTQB]
Component Testing
Component testing is the act of subdividing an object-oriented software system into units of particular granularity, applying stimuli to the component’s interface and validating the correct responses to those stimuli, in the form of either a state change or reaction in the component, or elsewhere in the system.  [M. Silverstein]
Computer-Aided Software Testing (CAST)
Testing automated, in part or in full, by another program.
Concurrency Testing
Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system.  [IEEE]
Configuration Management 
A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.  [IEEE]
Consistency
The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a system or component.  [IEEE]
Correctness
The extent to which software is free from errors.  [B. Hetzel]
Critical Path 
A series of dependent tasks for a project that must be completed as planned to keep the entire project on schedule.  [SEI]
Cross Browser Testing 
Cross browser testing is a type of compatibility testing designed to ensure that a web application behaves correctly (sometimes identically) in several different browsers and/or browser versions.
Cross-Site Scripting 
Cross site scripting (XSS) is a type of computer security exploit where information from one context, where it is not trusted, can be inserted into another context, where it is. From the trusted context, an attack can be launched. [wikipedia]
Custom Software 
Software developed specifically for a set of users or customers.  The opposite is Commercial Off-the-shelf Software.  [ISTQB]
Cyclomatic Complexity 
Cyclomatic complexity may be considered a broad measure of soundness and confidence for a program. Introduced by Thomas McCabe in 1976, it measures the number of linearly-independent paths through a program module.  [SEI]
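As a rough illustration, complexity can be approximated as one plus the number of decision points in the source. This sketch counts Python AST decision nodes; it is not McCabe's full control-flow-graph algorithm, and real tools are more precise:

```python
import ast

def approx_cyclomatic_complexity(source):
    """Rough approximation: 1 + one per decision point.

    Counts if/while/for statements, except handlers, and the extra
    operands of and/or expressions.  Real tools build the full
    control-flow graph, which this sketch does not.
    """
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For, ast.ExceptHandler)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            complexity += len(node.values) - 1
    return complexity

sample = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
# Two decisions (the if and the elif) -> three independent paths.
```

A complexity of 3 for `classify` means at least three test cases are needed to cover every linearly independent path.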
Daily Build 
A development activity where a complete system is compiled and linked every day (usually overnight) so that a consistent system is available at any time including all latest changes.  [ISTQB]
Data-Driven Testing 
Testing in which the actions of a test case are parameterized by externally defined data values, often maintained as a file or spreadsheet. This is a common technique in Automated Testing.
A scripting technique that stores test inputs and expected outcomes as data, normally in a table or spreadsheet, so that a single control script can execute all of the tests in the data table.  [M. Fewster]
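A minimal sketch of the technique, assuming a hypothetical `to_celsius` function under test; in practice the table would typically live in an external file or spreadsheet:

```python
# Hypothetical function under test.
def to_celsius(fahrenheit):
    return (fahrenheit - 32) * 5.0 / 9.0

# The test table: (input, expected output) rows.  In practice this
# data often lives in a CSV file or spreadsheet.
test_table = [
    (32, 0.0),
    (212, 100.0),
    (-40, -40.0),
]

# A single control loop executes every row of the table.
failures = []
for fahrenheit, expected in test_table:
    actual = to_celsius(fahrenheit)
    if abs(actual - expected) > 1e-9:
        failures.append((fahrenheit, expected, actual))
```

Adding a new test case is then just adding a row of data, with no new script logic.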
Debugging
The process in which developers determine the root cause of a bug and identify possible fixes.  Developers perform debugging activities to resolve a known bug either after development of a subsystem or unit or because of a bug report. [R. Black]
Decision Coverage 
The percentage of decision outcomes that have been exercised by a test suite.  100% decision coverage implies both 100% branch coverage and 100% statement coverage.  [ISTQB]
Defect
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition.  A defect, if encountered during execution, may cause a failure of the component or system.  [ISTQB]
Defect Density
The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).  [ISTQB]
Defect Leakage Ratio (DLR)
The ratio of the number of defects which made their way undetected ("leaked") into production divided by the total number of defects.
Defect Masking
An occurrence in which one defect prevents the detection of another.  [IEEE]
Defect Prevention
The activities involved in identifying defects or potential defects and preventing them from being introduced into a product.  [SEI]
Defect Rejection Ratio (DRR)
The ratio of the number of defect reports which were rejected (perhaps because they were not actually bugs) divided by the total number of defects.
Defect Removal Efficiency (DRE)
The ratio of defects found during development to total defects (including ones found in the field after release).
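The three ratios above (DLR, DRR, DRE) reduce to simple arithmetic; the defect counts below are invented for illustration:

```python
def defect_leakage_ratio(leaked, total):
    """DLR: defects that escaped into production / total defects."""
    return leaked / total

def defect_rejection_ratio(rejected, total_reports):
    """DRR: rejected defect reports / total defect reports."""
    return rejected / total_reports

def defect_removal_efficiency(found_in_development, total):
    """DRE: defects found during development / total defects."""
    return found_in_development / total

# Invented example: 100 total defects, 90 caught before release.
dre = defect_removal_efficiency(90, 100)  # 0.9
dlr = defect_leakage_ratio(10, 100)       # 0.1
```

Note that DRE and DLR are complementary views of the same totals: every defect either was removed during development or leaked.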
Deviation
A noticeable or marked departure from the appropriate norm, plan, standard, procedure, or variable being reviewed.  [SEI]
Direct Metric 
A metric that does not depend upon a measure of any other attribute.  [IEEE]
Distributed Testing 
Testing that occurs at multiple locations, involves multiple teams, or both. [R. Black]
Domain
The set from which valid input and/or output values can be selected.  [ISTQB]
Driver
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.  [ISTQB] 
Eating Your Own Dogfood 
Your company uses and relies on pre-release versions of its own software, typically waiting until the software is reliable enough for real use before selling it. [C. Kaner, et al]
The practice of using prerelease builds of products for day-to-day work. It encourages teams to make products correctly from the start, and it provides early feedback on the products' value and usability. [E. Brechner]
End User 
The individual or group who will use the system for its intended operational use when it is deployed in its environment.  [SEI]
Entry Criteria 
A set of decision-making guidelines used to determine whether a system under test is ready to move into, or enter, a particular phase of testing.  Entry criteria tend to become more rigorous as the test phases progress. [R. Black]
Equivalence Partition 
A portion of an input or output for which the behavior of a component or system is assumed to be the same, based on the specification. [ISTQB]
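A small sketch of partitioning, using a hypothetical shipping-fee rule (the function, threshold, and fee are illustrative assumptions). One representative value is tested per class, on the assumption that every other member of that class behaves the same way:

```python
# Hypothetical rule: orders of 50 or more ship free; below that a
# flat 5.00 fee applies; negative totals are invalid.
def shipping_fee(order_total):
    if order_total < 0:
        raise ValueError("invalid order total")
    return 0.0 if order_total >= 50 else 5.0

# One representative value per equivalence class -- every other
# member of a class is assumed to behave the same way.
representatives = {
    "invalid (negative)": -10,
    "fee applies (0 to just under 50)": 20,
    "ships free (50 and above)": 75,
}
```

Three test values thus stand in for the entire input domain; boundary value analysis then refines this by probing the edges between the classes.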
Error
The occurrence of an incorrect result produced by a computer.
A software error is present when the program does not do what its end user reasonably expects it to do.  [G. Myers] 
A human action that produces an incorrect result.  [IEEE] 
Error Guessing
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.  [ISTQB] 
Error Seeding
The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects.  [ISTQB] 
Escalate
To communicate a problem to a higher level of management for solution.  [R. Black]
Exhaustive Testing 
A test approach in which the test suite comprises all combinations of input values and preconditions.  [ISTQB]
Exit Criteria 
A set of decision-making guidelines used to determine whether a system under test is ready to exit a particular phase of testing.  When exit criteria are met, either the system under test moves on to the next test phase or the test project is considered complete.  Exit criteria tend to become more rigorous as the test phases progress.  [R. Black]
Expected Results
Predicted output data and file conditions associated with a particular test case.  [B. Hetzel] 
Exploratory Testing
Test design and test execution at the same time.  This is the opposite of scripted testing (predefined test procedures, whether manual or automated). Exploratory tests, unlike scripted tests, are not defined in advance and carried out precisely according to plan. Exploratory testing is sometimes confused with "ad hoc" testing.  [J. Bach]
Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project. [Kaner, Bach, Bolton, et al]
Simultaneous (or parallel) test design, test execution and learning. [Bolton] 
Extreme Programming (XP) 
Extreme Programming (XP) is a method or approach to software engineering and the most popular of several agile software development methodologies. It was formulated by Kent Beck, Ward Cunningham, and Ron Jeffries. [wikipedia]
Failure
Deviation of the component or system from its expected delivery, service or result.  [ISTQB]
Failure Mode 
A particular way, in terms of symptoms, behaviors, or internal state changes, in which a failure manifests itself.  For example, a heat dissipation problem in a CPU might cause a laptop case to melt or warp, or memory mismanagement might cause a core dump.  [R. Black]
Failure Mode and Effect Analysis (FMEA)
A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence.  [ISTQB]
Falsification
The process of evaluating an object to demonstrate that it does not meet requirements.  [B. Beizer]
Fault
An incorrect step, process, or data definition in a computer program.  [IEEE]
Fault Injection
The process of intentionally incorporating errors in code in order to measure the ability (of the tester, or processes) to detect such errors.  Sometimes called Defect Injection. 
Fault Model
An engineering model of something that could go wrong in the construction or operation of a piece of equipment.  [wikipedia]  This can be extended to software testing, as a model of the types of errors that could occur in a system under test.
Fault Tolerance
The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface.  [ISO 9126]
Feature
A desirable behavior of an object; a computation or value produced by an object.  Requirements are aggregates of features.  [B. Beizer]
Feature Freeze 
The point in time in the development process in which all work on adding new features is suspended, shifting the effort towards fixing bugs and improving the user experience.  [wikipedia]
First Customer Ship (FCS) 
The period which signifies entry into the final phase of a project. At this point, the product is considered wholly complete and ready for purchase and usage by the customers.  It may precede the phase where the product is manufactured in quantity and ready for General Availability (GA).
Fishbone Diagram 
A diagram used to depict and help analyze factors causing an overall effect.  Also called a Cause-And-Effect Diagram or Ishikawa Diagram.
Formal Testing
Process of conducting testing activities and reporting test results in accordance with an approved test plan.  [B. Hetzel]
Functional Requirements 
An initial definition of the proposed system, which documents the goals, objectives, user or programmatic requirements, management requirements, the operating environment, and the proposed design methodology, e.g., centralized or distributed.
Functional Testing 
Process of testing to verify that the functions of a system are present as specified.  [B. Hetzel]
Testing requiring the selection of test scenarios without regard to the structure of the source code.  [J. Whittaker]
Fuzz Testing 
A software testing technique that provides random data ("fuzz") to the inputs of a program.  If the program fails (for example by crashing or by failing built-in code assertions), the defects can be noted. [wikipedia]
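A toy fuzz run, using Python's built-in `int` parser as the program under test. `ValueError` is its documented response to bad input, so only other exceptions would count as findings:

```python
import random
import string

random.seed(1234)  # make the fuzz run reproducible

def random_input(max_len=10):
    return "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, max_len)))

# Program under test: Python's own int() parser.  ValueError is its
# documented failure mode; anything else would be a real finding.
crashes = []
for _ in range(1000):
    data = random_input()
    try:
        int(data)
    except ValueError:
        pass
    except Exception as exc:
        crashes.append((data, exc))
```

Real fuzzers (such as coverage-guided tools) mutate inputs far more cleverly, but the core loop - generate, feed, watch for unexpected failure - is the same.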
Gap Analysis
An assessment of the difference between what is required or desired, and what actually exists.
General Availability (GA)
The phase in which the product is complete, and has been manufactured in sufficient quantity such that it is ready to be purchased by all the anticipated customers.
Glass Box Testing  (Clear Box Testing)
Glass Box testing is about testing with thorough knowledge of the code. The programmer might be the person who does this. [C. Kaner]
Graphical User Interface (GUI) 
Graphical user interfaces (GUIs) accept input via devices such as computer keyboard and mouse and provide graphical output on the computer monitor.  [wikipedia]
Gray Box Testing 
A combination of Black Box and White Box testing.  Testing software against its specification but also using some knowledge of its internal workings.
Happy Path 
A default scenario that features no exceptional or error conditions.  A well-defined test case that uses known input, that executes without exception and that produces an expected output.  [wikipedia]
Simple inputs that should always work.  [A. Page]
IEEE 829  (829 Standard for Software Test Documentation)
An IEEE standard which specifies the format of a set of documents used in software testing.  These documents are Test Plan, Test Design Specification, Test Case Specification, Test Procedure Specification, Test Item Transmittal Report, Test Log, Test Incident Report, and Test Summary Report.  [IEEE] 
Incident
An operational event that is not part of the normal operation of a system. It will have an impact on the system, although this may be slight or transparent to the users. [wikipedia] 
Incident Report
A document reporting on any event that occurred, e.g. during the testing, which requires investigation.  [IEEE 829]
Incremental Testing 
A disciplined method of testing the interfaces between unit-tested programs as well as between system components. Two types of incremental testing are often mentioned: Top-down and Bottom up.
Independent V&V 
Verification and validation of a software product by an independent organization (other than the designer).  [B. Hetzel]
Input Masking 
Input masking occurs when a program throws an error condition on the first invalid variable and subsequent values are not tested.  [Bj Rollison]
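A sketch of the effect, with two hypothetical validators over an invented form: the fail-fast version raises on the first bad field, so the age check is never exercised and any defect in it stays hidden; collecting every error avoids the masking:

```python
# A fail-fast validator raises on the first bad field, so checks on
# later fields are never exercised -- their defects are "masked".
def validate_fail_fast(form):
    if not form.get("name"):
        raise ValueError("name is required")
    if form.get("age", -1) < 0:
        raise ValueError("age must be non-negative")

# Collecting every error avoids the masking.
def validate_collect_all(form):
    errors = []
    if not form.get("name"):
        errors.append("name is required")
    if form.get("age", -1) < 0:
        errors.append("age must be non-negative")
    return errors

bad_form = {"name": "", "age": -5}           # two invalid fields
all_errors = validate_collect_all(bad_form)  # both problems surface
```

For the tester, the implication is that each invalid value should also be tried on its own, with all other inputs valid, so no check is shadowed by an earlier one.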
Inspection
A formal evaluation technique involving detailed examination by a person or group other than the author to detect faults and problems.
Integration Testing
Integration testing is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing. [wikipedia]
An orderly progression of testing in which software and/or hardware elements are combined and tested until the entire system has been integrated.  [B. Hetzel] 
The testing of multiple components that have each received prior and separate unit testing.  [J. Whittaker]
Interface Testing 
Testing conducted to ensure that the program or system components pass information and control correctly.
Internationalization (I18N)
Internationalization is the process of designing and coding a product so it can perform properly when it is modified for use in different languages and locales.
Ishikawa Diagram
A diagram used to depict and help analyze factors causing an overall effect, named after Kaoru Ishikawa.  Also called a Fishbone Diagram or Cause-And-Effect Diagram.
Keyword-Driven Testing 
A scripting technique that uses data files to contain not only test inputs and expected outcomes, but also keywords related to the application being tested.  The keywords are interpreted by special supporting scripts that are called by the control script for the test.  [M. Fewster] 
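A minimal keyword-driven sketch: the test itself is pure data, and a dispatch table maps each keyword to a supporting action. The "application" here is just a dict acting as a key-value store; the keywords and steps are invented for illustration:

```python
# The "application" is just a dict acting as a key-value store.
app_state = {}

def do_set(key, value):
    app_state[key] = value

def do_check(key, expected):
    assert app_state.get(key) == expected, (key, expected)

# Dispatch table: keyword -> supporting script.
keywords = {"set": do_set, "check": do_check}

# The test is pure data -- rows of (keyword, arguments) that could
# be loaded from a file or spreadsheet.
test_steps = [
    ("set", ("user", "alice")),
    ("check", ("user", "alice")),
]

for keyword, args in test_steps:
    keywords[keyword](*args)
```

Because steps are data, non-programmers can compose new tests from existing keywords without touching the supporting scripts.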
Kludge (or kluge) 
Any ill-advised, substandard, or "temporary" bandage applied to an urgent problem in the (often misguided) belief that doing so will keep a project moving forward.  [R. Black]
Link Rot 
The process by which links on a website gradually become irrelevant or broken as time goes on, because websites that they link to disappear, change their content or redirect to new locations. [Wikipedia]
Load Testing
Load testing is subjecting a system to a statistically representative (usually) load.  The two main reasons for using such loads are in support of software reliability testing and in performance testing.  [B. Beizer] 
Localization (L10N)
Localization refers to the process, on a properly internationalized base product, of translating messages and documentation as well as modifying other locale specific files.
Low-resource Testing
Low-resource testing determines what happens when the system is low or depleted of a critical resource such as physical memory, hard disk space, or other system-defined resources.  [A. Page]
Maintainability
Maintainability describes the effort needed to make changes in software without causing errors.  [A. Page]
Mean Time Between Failure (MTBF)
The average time between failures.
Mean Time To Repair (MTTR)
The average time to fix bugs.
MEGO
Acronym for My Eyes Glazed Over; refers to a loss of focus and attention, often caused by an attempt to read a particularly impenetrable or dense technical document.  [R. Black]
Memory Leak
A particular type of unintentional memory consumption by a computer program where the program fails to release memory when no longer needed. This condition is normally the result of a bug in a program that prevents it from freeing up memory that it no longer needs. [Wikipedia]
Method
A reasonably complete set of rules and criteria that establish a precise and repeatable way of performing a task and arriving at a desired result.  [SEI]
Methodology
A collection of methods, procedures, and standards that defines an integrated synthesis of engineering approaches to the development of a product.  [SEI]
Metric
A quantitative measure of the degree to which a system, component, or process possesses a given attribute.  [IEEE]
The assignment of a numeric value to an object or event according to a rule derived from a model or theory.  [C. Kaner]
Milestone
A scheduled event for which some individual is accountable and that is used to measure progress.  [SEI]
Monkey Testing
Pounding away at the keyboard with presumably random input strings until something breaks.  [B. Beizer]
Mutation Testing
With mutation testing, the system/program under test is changed to create a faulty version called a mutant.  You then run the mutant program through a suite of test cases, which should produce new test case failures.  If no new failures appear, the test suite most likely does not exercise the code path containing the mutated code, which means the program isn't fully tested.  You can then create new test cases that do exercise the mutant code.  [J. McCaffrey]
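A tiny sketch of the idea, with an invented function and a hand-made mutant (`>=` changed to `>`). A suite that never tests the boundary lets the mutant survive; adding the boundary case kills it:

```python
def is_adult(age):            # original, correct function
    return age >= 18

def is_adult_mutant(age):     # mutant: >= deliberately changed to >
    return age > 18

def weak_suite(fn):           # never tests the boundary value
    return fn(30) is True and fn(10) is False

def strong_suite(fn):         # adds the boundary case
    return weak_suite(fn) and fn(18) is True

# A mutant that passes a suite has "survived": the suite does not
# exercise the mutated behavior (here, the boundary at 18).
mutant_survives_weak = weak_suite(is_adult_mutant)      # survives
mutant_survives_strong = strong_suite(is_adult_mutant)  # killed
```

Real mutation tools generate many such mutants automatically and report a "mutation score" - the fraction of mutants the suite kills.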
Negative Testing
Testing whose primary purpose is falsification; that is testing designed to break the software.  (also called Dirty Testing)  [B. Beizer]
Non-functional Testing
Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.  [ISTQB]
Oracle
Any means used to predict the outcome of a test. [W. Howden]
The mechanism which can be used to compare the actual output that the application under test produces with the expected output.  [J. Whittaker]
Operational Testing 
Testing by the end user on software in its normal operating environment (DOD).  [B. Hetzel]
Pareto Analysis
The analysis of defects by ranking causes from most significant to least significant.  Pareto analysis is based on the principle, named after the 19th-century economist Vilfredo Pareto, that most effects come from relatively few causes, i.e. 80% of the effects come from 20% of the possible causes.  [SEI]
Performance Evaluation
The assessment of a system or component to determine how effectively operating objectives have been achieved.  [B. Hetzel]
Performance Testing
Testing conducted to evaluate the compliance of a system or component with specified performance requirements.  [IEEE] 
Testing that attempts to show that the program does not satisfy its performance objectives.  [G. Myers]
Pesticide Paradox
The phenomenon that the more you test software, the more immune it becomes to your tests - just as insects eventually build up resistance and the pesticide no longer works.  [B. Beizer] 
Positive Testing
Testing whose primary purpose is validation; that is testing designed to demonstrate the software's correct working.  (also called Clean Testing)  [B. Beizer]
Priority
Priority indicates when your company wants a particular bug fixed.  Priorities change as a project progresses.
Prototype
A prototype of software is an incomplete implementation of software that mimics the behavior we think the users need.  [M.E. Staknis]
Pseudo-Random
A series which appears to be random but is in fact generated according to some prearranged sequence.  [ISTQB]
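A short sketch of why this matters for testing: a seeded generator "appears" random yet replays exactly the same sequence, which makes randomized tests reproducible:

```python
import random

def sequence(seed, n=5):
    rng = random.Random(seed)  # prearranged: fully determined by seed
    return [rng.randint(0, 99) for _ in range(n)]

run1 = sequence(42)
run2 = sequence(42)  # same seed -> the exact same "random" sequence
run3 = sequence(7)   # a different seed -> a different sequence
```

Logging the seed of a failing randomized test run is what allows the failure to be reproduced later.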
Quality
The ability of a set of inherent characteristics of a product, system or process to fulfill requirements of customers and other interested parties [ISO 9000:2000]
Conformance to requirements or fitness for use. [North Carolina State University]
Quality means "meets requirements".  [B. Hetzel]
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.  [ISTQB]  
Quality Assurance (QA)
Preventing bugs.  [B. Beizer]
A planned and systematic pattern of actions necessary to provide confidence that the item or product conforms to established requirements.  [B. Hetzel] 
Quality Control (QC)
Testing, finding bugs.  [B. Beizer]
Quality Factor
A management-oriented attribute of software that contributes to its quality.  [IEEE]
Quality Gate
A milestone in a project where a particular quality level must be achieved before moving on.
Rainy-Day Testing
Checking whether a system adequately prevents, detects and recovers from operational problems such as downed network connections, databases which become unavailable, equipment failures and operator errors. [R. Stens]
Recoverability
The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure.  [ISTQB]
Regression Testing
Regression testing is that testing that is performed after making a functional improvement or repair to the program.  Its purpose is to determine if the change has regressed other aspects of the program.  It is usually performed by rerunning some subset of the program's test cases.  [G. Myers]
Selective testing to verify that modifications have not caused unintended adverse side effects or to verify that a modified system still meets requirements.  [B. Hetzel] 
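As a sketch of the "rerun a subset of test cases" idea, the following Python fragment replays saved (input, expected) pairs against a function after a change; the `discount` function and the recorded cases are invented for illustration:

```python
# Function under maintenance: recently modified, so old behavior must be rechecked.
def discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# Regression cases: (input, expected) pairs recorded when each behavior
# was first verified, rerun after every change to the function.
regression_cases = [
    ((100.0, 10), 90.0),
    ((19.99, 0), 19.99),
    ((50.0, 50), 25.0),
]

failures = [(args, want, discount(*args))
            for args, want in regression_cases
            if discount(*args) != want]
print("regressions:", failures)  # an empty list means nothing regressed
```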
Release Candidate
A Release Candidate is a build of a product undergoing final testing before shipment.  All code is complete, and all known bugs that should be fixed have been fixed.  Unless a critical, show-stopper bug is found during this final phase, the Release Candidate becomes the shipping version.
Release Test (or Production Release Test)
Tests designed to ensure that the code (which has already been thoroughly tested and approved for release) is correctly installed and configured in Production.  This is often a fairly quick test, but may involve some migration tests as well. 
Reliability
The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.  [ISTQB]
Repetition Testing  (Duration Testing)
A simple, brute force technique of determining the effect of repeating a function or scenario.  The essence of this technique is to run a test in a loop until reaching a specified limit or threshold, or until an undesirable action occurs.  [A. Page] 
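The loop-until-limit idea can be sketched in Python; `open_and_close` is a hypothetical function under test, and the iteration and time limits are arbitrary:

```python
import time

def run_repetition_test(func, iterations=1000, max_seconds=5.0):
    """Run `func` in a loop until an iteration limit or time budget is hit.

    Returns the number of completed iterations; any exception raised by
    `func` ends the test as a failure, which is exactly the kind of defect
    (leaks, exhaustion, state corruption) repetition testing hunts for.
    """
    deadline = time.monotonic() + max_seconds
    done = 0
    while done < iterations and time.monotonic() < deadline:
        func()
        done += 1
    return done

# Hypothetical function under test: repeated acquire/release of a resource.
def open_and_close():
    buf = bytearray(1024)   # stand-in for acquiring and releasing a resource
    del buf

completed = run_repetition_test(open_and_close, iterations=100)
print(completed)  # 100 if every repetition succeeded
```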
Requirement
A condition or capability needed by a user to solve a problem or achieve an objective.  [B. Hetzel]
That which an object should do and/or characteristics that it should have.  Requirements are arbitrary but they must still be consistent, reasonably complete, implementable, and most important of all, falsifiable.  [B. Beizer] 
Re-testing
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.  [ISTQB]
Risk
Possibility of suffering loss.  [SEI]
Robustness
The degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions.  [IEEE]
Root Cause
The underlying reason why a bug occurs, as opposed to the observed symptoms of the bug. [R. Black]
Sanity Testing
A quick test of the main portions of a system to determine if it is basically operating as expected, but avoiding in-depth testing.  This term is often equivalent to Smoke Testing.
Scalability
The ability to scale to support larger or smaller volumes of data and more or fewer users. The ability to increase or decrease size or capability in cost-effective increments with minimal impact on the unit cost of business and the procurement of additional services.
The ability of a software system to cope, as the size of the problem increases.
Scenario
A description of an end user accomplishing a task that may or may not be implemented in the current product. Scenarios typically involve using multiple features. [E. Brechner]
Severity
Severity refers to the relative impact or consequence of a bug, and usually doesn't change unless you learn more about some hidden consequences.
The impact of a bug on the system under test, regardless of the likelihood of its occurrence under end-user conditions or the extent to which the failure impedes use of the system. Contrast priority. [R. Black]
Smart Monkey Testing
In Smart Monkey Testing, inputs are generated from probability distributions that reflect actual expected usage statistics -- e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs. Suppose, for example, that a given test requires an input vector with five components: in low IQ testing, these would be generated independently, while in high IQ monkey testing, the correlation (e.g., the covariance) between these input distributions is taken into account. In all branches of smart monkey testing, the input is considered as a single event.  [T. Arnold]
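A sketch of the simplest ("low IQ") case in Python, where each input is drawn independently from a usage profile; the actions and weights below are invented for illustration:

```python
import random

# Hypothetical usage profile: actions weighted by observed user frequency.
profile = {"browse": 0.6, "search": 0.25, "purchase": 0.1, "logout": 0.05}

def low_iq_monkey(steps, seed=None):
    """Generate `steps` inputs, each drawn independently from the usage
    profile -- the simplest form of smart monkey testing."""
    rng = random.Random(seed)
    actions, weights = zip(*profile.items())
    return [rng.choices(actions, weights=weights)[0] for _ in range(steps)]

session = low_iq_monkey(10, seed=42)
print(session)
```

A higher-IQ variant would condition each draw on the previous actions (e.g., `purchase` only after `search`) instead of drawing independently.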
Smoke Test
A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details.  A daily build and smoke test is among industry best practices.  [ISTQB] 
Soak Testing
Testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use.  [wikipedia]
Software Audit
An independent review for the purpose of assessing compliance with requirements, specifications, standards, procedures, codes, contractual and licensing requirements, and so forth.
Software Error
Human action that results in software that contains a fault that, if encountered, may cause a failure.  [B. Hetzel]
Software Failure
A departure of system operation from specified requirements due to a software error.  [B. Hetzel]
Software Quality
The totality of features and characteristics of a software product that bears on its ability to satisfy given needs.  [B. Hetzel]
Software Quality Assurance 
The function of software quality that assures that the standards, processes, and procedures are appropriate for the project and are correctly implemented.  [NASA.gov] 
Software Reliability 
Probability that software will not cause the failure of a system for a specified time under specified conditions. [B. Hetzel]
Specification
A tangible, usually partial expression of requirements.  Examples: document, list of features, prototype, test suite.  Specifications are usually incomplete because many requirements are understood.  For example, "the software will not crash or corrupt data."  The biggest mistake a tester can make is to assume that all requirements are expressed by the specification.  [B. Beizer]
A statement of a set of requirements to be satisfied by a product.  [B. Hetzel]
Documentation that specifies how a product should be experienced, constructed, tested, or deployed. [E. Brechner]
SQL Injection
SQL injection is a hacking technique which attempts to pass SQL commands through a web application's user interface for execution by the backend database.
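A compact demonstration using Python's built-in SQLite driver: the same payload defeats a string-spliced query but is harmless as a bound parameter (the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attack = "' OR '1'='1"   # classic injection payload typed into a form field

# VULNERABLE: user input is spliced directly into the SQL string, so the
# payload rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{attack}'").fetchall()

# SAFE: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attack,)).fetchall()

print(vulnerable)  # [('alice',)] -- the injection succeeded
print(safe)        # []           -- the payload matched nothing
```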
Standards
Mandatory requirements employed and enforced to prescribe a disciplined uniform approach to software development.  [SEI]
Statement Coverage 
The percentage of executable statements that have been exercised by a test suite.  [ISTQB]
Statement Of Work
A description of all the work required to complete a project, which is provided by the customer.  [SEI]
Static Analysis
Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts. [ISTQB]
Straw Man Plan
Any lightweight or incomplete plan, such as the first draft of a test plan or a hardware allocation plan, that serves as a starting point for discussion and a framework for coalescing a more concrete plan. [R. Black]
Stress Testing
Stress testing involves subjecting the program to heavy loads or stresses.  This should not be confused with volume testing; a heavy stress is a peak volume of data encountered over a short span of time.  [G. Myers] 
String Testing
A software development test phase that finds bugs in typical usage scripts and operational or control-flow "strings".  This test phase is fairly unusual.   [R. Black] 
Structural Testing
Structural testing is sometimes confused with glass box testing. I don't think you can do black box structural testing, but I think the focus of structural testing is on the flow of control of the program, testing of different execution paths through the program. I think that there are glass box techniques that focus on data relationships, interaction with devices, interpretation of messages, and other considerations that are not primarily structural.  [C. Kaner]
Testing which requires that inputs be drawn based solely on the structure of the source code or its data structures.  Structural testing is also called code-based testing and white-box testing. [J. Whittaker]
Structural Tests
Tests based on how a computer system, hardware or software, operates. Such tests are code-based or component-based, and they find bugs in operations such as those that occur at levels of lines of code, chips, subassemblies, and interfaces. Also called white-box tests, glass-box tests, code-based tests, or design-based tests. [R. Black]
Stub
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it.  It replaces a called component. [IEEE]
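A minimal Python sketch: `checkout` depends on a payment gateway, and a stub with a canned answer replaces the real dependency during testing (all names here are illustrative):

```python
# Component under test: depends on a payment gateway we don't want to call.
def checkout(cart_total, gateway):
    """Charges the card via `gateway` and returns a receipt dict."""
    result = gateway.charge(cart_total)
    return {"charged": cart_total, "ok": result}

# Stub: a skeletal stand-in for the real gateway dependency.
class StubGateway:
    def charge(self, amount):
        return True   # canned answer; no network access, no side effects

receipt = checkout(99.95, StubGateway())
print(receipt)  # {'charged': 99.95, 'ok': True}
```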
Sunny-Day Testing
Positive tests.  Tests used to demonstrate the system's correct working. 
SWAG
Acronym for Scientific Wild-Ass Guess; an educated guess or estimate.  SWAGs abound in test scheduling activities early in the development process.  [R. Black]
System Reliability 
Probability that a system will perform a required task or mission for a specified time in a specified environment. [B. Hetzel]
System Testing
System Testing is done to explore system behaviors that can't be explored by unit, component, or integration testing; for example, testing performance, installation, data integrity, storage management, security, and reliability.  Ideal system testing presumes that all components have been previously, successfully integrated.  System testing is often done by independent testers.  [B. Beizer]
Testing of groups of programs. 
Process of testing an integrated system to verify that it meets specified criteria.  [B. Hetzel]
The testing of a collection of components that constitutes a deliverable product.  [J. Whittaker] 
System Under Test
(SUT) The system which is the target of the testing process. 
Test
The word test is derived from the Latin word for an earthen pot or vessel (testum).  Such a pot was used for assaying materials to determine the presence or measure the weight of various elements, thus the expression "to put to the test."  [B. Hetzel]
An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component.  [IEEE] 
Test and Evaluation
As employed in the DOD, T & E is the overall activity of independent evaluation "conducted throughout the system acquisition process to assess and reduce acquisition risks and to estimate the operational effectiveness and suitability of the system being developed."  [B. Hetzel]
Test Bed
An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case 
A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
A specific set of test data along with expected results for a particular test objective, such as to exercise a program feature or to verify compliance with a specific requirement. [B. Hetzel]
Test Case Specification
A document specifying the test data for use in running the test conditions identified in the Test Design Specification. [IEEE]
Test Condition
A test condition is a particular behavior of the system under test that you need to verify.
Test Data
Data that is run through a computer program to test the software.
Input data and file conditions associated with a particular test case.  [B. Hetzel] 
Test Design
A selection and specification of a set of test cases to meet the test objectives or coverage criteria. [B. Hetzel]
Test Design Specification
A document detailing test conditions and the expected results as well as test pass criteria. [IEEE]
Test-Driven Development 
Test-driven development (TDD) is a programming technique heavily emphasized in Extreme Programming. Essentially, the technique involves writing your tests first, then implementing the code to make them pass. The goal of TDD is rapid feedback; the technique implements the "illustrate the main line" approach to constructing a program.  [wikipedia]
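A toy red-green cycle in Python, using an invented `slugify` function as the unit being driven by its test:

```python
import unittest

# Step 1: write the test first -- it fails until slugify exists and behaves.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: write just enough code to make the test pass.
def slugify(title):
    return "-".join(title.lower().split())

# Step 3: run the test; green means it is safe to refactor.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```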
Test Environment
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
Test Harness
A test environment comprised of stubs and drivers needed to execute a test. [ISTQB]
Test Incident Report
A document detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed.  [IEEE] 
Test Item Transmittal Report
A document reporting on when tested software components have progressed from one stage of testing to the next. [IEEE]
Test Log
A chronological record of all relevant details of a testing activity. [B. Hetzel]
A document recording which test cases were run, who ran them, in what order, and whether each test passed or failed. [IEEE]
Test Plan
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.  [IEEE]
A detail of how the test will proceed, who will do the testing, what will be tested, how much time the test will take, and to what quality level the test will be performed.  [IEEE]
A document prescribing the approach to be taken for intended testing activities.  [B. Hetzel] 
Test Procedure
A document defining the steps required to carry out part of a test plan or execute a set of test cases.  [B. Hetzel]
Test Procedure Specification
A document detailing how to run each test, including any set-up preconditions and the steps that need to be followed. [IEEE]
Test Script
A document, program, or object that specifies for every test and subtest in a test suite: object to be tested, requirement (usually a case), initial state, inputs, expected outcome, and validation criteria.
Test Strategy
A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects). [ISTQB]
Test Suite
A set of one or more tests, usually aimed at a single object, with a common purpose and database, usually run as a set. [B. Beizer]
Test Summary Report
A management report providing any important information uncovered by the tests accomplished, and including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from the Incident Reports.  [IEEE] 
Test Tool
A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis.  [ISTQB]
Test Validity
The degree to which a test accomplishes its specified goal.
Testability
Normally the term "testability" refers to the ease or cost of testing, or the ease of testing with the tools and processes currently in use.  So a feature might be quite testable if you have all the right systems in place and plenty of time, or not very testable because you have reached a deadline and run out of time and/or money.  Sometimes the term "testability" refers to requirements, where it is used as a measure of clarity, so that you can know whether the test of a requirement passes or fails.  So "the UI must be intuitive and fast" may not be very "testable" without knowing what is meant by "intuitive" and how you would measure "fast enough".
The degree to which a software artifact (i.e. a software system, software module, requirements- or design document) supports testing in a given test context.  Testability is not an intrinsic property of a software artifact and can not be measured directly (such as software size). Instead testability is an extrinsic property which results from interdependency of the software to be tested and the test goals, test methods used, and test resources (i.e., the test context).  A lower degree of testability results in increased test effort. In extreme cases a lack of testability may hinder testing parts of the software or software requirements at all.  [wikipedia]
Tester
A skilled professional who is involved in the testing of a component or system.  [ISTQB]
Testing
Testing is the process of executing a program with the intent of finding errors. [G. Myers]
The act of designing, debugging and executing tests.  [B. Beizer]
Testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.  [B. Hetzel]
The process of executing a software system to determine whether it matches its specification and executes in its intended environment.  [J. Whittaker]
The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.  [IEEE] 
Testware
Software, and sometimes data, used for testing.
Thread Testing
Testing which demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application.
Top Down Testing
An approach to integration testing where the higher level components are tested first, while the lower level components are stubbed out.
Total Defect Containment Effectiveness (TDCE)
This metric shows the effectiveness of defect detection techniques in identifying defects before the product is released into operation.  TDCE is calculated as "Number of defects found prior to release" / "Total defects found (including those found after release)" * 100%   [Westfall]
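The Westfall formula translates directly into Python; the defect counts below are invented for illustration:

```python
def tdce(found_before_release, found_after_release):
    """Total Defect Containment Effectiveness: the percentage of all
    defects (pre- and post-release) that were caught before release."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total

# Hypothetical project: 90 defects caught before release, 10 escaped.
print(tdce(90, 10))  # 90.0
```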
Unit
The smallest thing that can be tested.  It (usually) begins as the work of one programmer and corresponds to the smallest compilable program segment, such as a subroutine.  A unit, as a tested object, does not usually include the subroutines or functions that it calls, fixed tables, and so on. [B. Beizer]
Unit Testing
Testing of units.  In unit testing, called subroutine and function calls are treated as if they are language parts (e.g. keywords).  Called and calling components are either assumed to work correctly or are replaced by simulators.  Unit testing usually is done by the unit's originator.  [B. Beizer]
Testing of individual programs as they are written.  [B. Hetzel] 
The testing of individual software components or a collection of components.  [J. Whittaker]
Usability Testing
Testing to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to the users under specified conditions. [ISTQB]
Use Case
In software engineering, a use case is a technique for capturing the potential requirements of a new system or software change. Each use case provides one or more scenarios that convey how the system should interact with the end user or another system to achieve a specific business goal. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert. Use cases are often co-authored by software developers and end users. [wikipedia]
User Acceptance Testing (UAT)
A formal product evaluation performed by a customer as a condition of purchase. Formal testing of a new computer system by prospective users. This is carried out to determine whether the software satisfies its acceptance criteria and should be accepted by the customer. User acceptance testing (UAT) is one of the final stages of a software project and will often be performed before a new system is accepted by the customer.  [wikipedia]
User Interface (UI)
The user interface (also known as Human Computer Interface or Man-Machine Interface (MMI)) is the aggregate of means by which people interact with the system. The user interface provides means of input and output.  [wikipedia] 
User Interface Freeze
The point in time in the development process in which no changes whatsoever are permitted to the user interface.  Stability of the UI is often necessary for creating Help documents, screenshots, marketing materials, etc. 
Validation Testing
The process of evaluating software at the end of the development process to ensure compliance with requirements.  [B. Hetzel] 
The process of evaluating an object to demonstrate that it meets requirements.  [D. Wallace]
The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase. [IEEE]
Verification
Evaluation performed at the end of a phase with the objective of ensuring that the requirements established during the previous phases have been met.  (More generally, verification refers to the overall software evaluation activity, including reviewing, inspecting, testing, checking, and auditing).  [B. Hetzel]
Volume Testing
Subjecting the program to heavy volumes of data. [G. Myers]
Walkthrough
A review process in which the designer leads one or more others through a segment of design or code he or she has written. [B. Hetzel]
White Box Testing
Testing done under structural testing strategy. (also called Glass Box Testing) [B. Beizer]
Testing that uses instrumentation to automatically and systematically test every aspect of the product. [E. Brechner]
Zero Bug Bounce (ZBB)
The first moment in a project where all features are complete and every work item is resolved. This moment rarely lasts very long. Often within an hour, a new issue arises through extended system testing and the team goes back to work. Nevertheless, ZBB means the end is predictably within sight. [E. Brechner]