Software Engineering Body of Knowledge (v3) (2014), page 23
The strongest criterion, all definition-use paths, requires that, for each variable, every control flow path segment from a definition of that variable to a use of that definition is executed. In order to reduce the number of paths required, weaker strategies such as all-definitions and all-uses are employed.

3.3.3. Reference Models for Code-Based Testing [1*, c4]

Although not a technique in itself, the control structure of a program can be graphically represented using a flow graph to visualize code-based testing techniques.
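As an illustration of these criteria, here is a minimal sketch (the graph, node numbers, and variable are hypothetical) that enumerates the definition-use paths for one variable over a flow graph represented as an adjacency list:

```python
# Sketch: enumerate definition-use paths for a single variable over a
# small control flow graph. Node and variable names are hypothetical.

# Flow graph as an adjacency list: nodes are statements, arcs are
# transfers of control.
flow_graph = {
    1: [2],     # 1: x = read()    (definition of x)
    2: [3, 4],  # 2: if x > 0
    3: [5],     # 3: y = x * 2     (use of x)
    4: [5],     # 4: y = -x        (use of x)
    5: [],      # 5: print(y)
}

def du_paths(graph, definition, uses):
    """All loop-free paths from the defining node to each use."""
    paths = []

    def walk(node, path):
        if node in uses:
            paths.append(path)  # a definition-use path ends at a use
            return
        for succ in graph[node]:
            if succ not in path:  # keep paths loop-free
                walk(succ, path + [succ])

    walk(definition, [definition])
    return paths

# All-definition-use-paths requires tests covering every such path;
# all-uses only requires one path per use of the definition.
print(du_paths(flow_graph, definition=1, uses={3, 4}))  # → [[1, 2, 3], [1, 2, 4]]
```

Here the criterion demands two test cases (one per branch of the decision at node 2), whereas a single statement-coverage test could miss one of them.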
A flow graph is a directed graph, the nodes and arcs of which correspond to program elements (see Graphs and Trees in the Mathematical Foundations KA). For instance, nodes may represent statements or uninterrupted sequences of statements, and arcs may represent the transfer of control between nodes.

3.4. Fault-Based Techniques [1*, c1s14]

With different degrees of formalization, fault-based testing techniques devise test cases specifically aimed at revealing categories of likely or predefined faults.
To better focus the test case generation or selection, a fault model can be introduced that classifies the different types of faults.

3.4.1. Error Guessing [1*, c9s8]

In error guessing, test cases are specifically designed by software engineers who try to anticipate the most plausible faults in a given program. A good source of information is the history of faults discovered in earlier projects, as well as the software engineer’s expertise.

3.4.2. Mutation Testing [1*, c3s5]

A mutant is a slightly modified version of the program under test, differing from it by a small syntactic change.
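For instance (the function and the mutation are hypothetical), a single-operator change produces a mutant, and a test case kills it by observing a differing result:

```python
# Original program under test (hypothetical example).
def price_with_discount(price, rate):
    return price - price * rate

# Mutant: one small syntactic change (the "-" became "+").
def price_with_discount_mutant(price, rate):
    return price + price * rate

# A test case kills the mutant if original and mutant disagree on it.
test_input = (100.0, 0.2)
killed = (price_with_discount(*test_input)
          != price_with_discount_mutant(*test_input))
print(killed)  # → True: 80.0 != 120.0, so this mutant is killed
```

Note that a weaker test input such as `(0.0, 0.2)` would leave this mutant alive, since both versions return 0.0 for it.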
Every test case exercises both the original program and all generated mutants: if a test case is successful in identifying the difference between the program and a mutant, the latter is said to be “killed.” Originally conceived as a technique to evaluate test sets (see section 4.2, Evaluation of the Tests Performed), mutation testing is also a testing criterion in itself: either tests are randomly generated until enough mutants have been killed, or tests are specifically designed to kill surviving mutants. In the latter case, mutation testing can also be categorized as a code-based technique. The underlying assumption of mutation testing, the coupling effect, is that by looking for simple syntactic faults, more complex but real faults will be found. For the technique to be effective, a large number of mutants must be automatically generated and executed in a systematic way [12].

3.5. Usage-Based Techniques

3.5.1. Operational Profile [1*, c15s5]

In testing for reliability evaluation (also called operational testing), the test environment reproduces the operational environment of the software, or the operational profile, as closely as possible.
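For example (the operations and their frequencies are hypothetical), test inputs can be drawn at random according to an operational profile, so that frequently occurring operations are exercised proportionally more often:

```python
import random

# Hypothetical operational profile: each operation's probability is
# its observed frequency of occurrence in actual use.
profile = {
    "check_balance": 0.60,
    "withdraw":      0.30,
    "transfer":      0.10,
}

random.seed(42)  # fixed seed for reproducible test-case selection

# Draw 1000 test cases with frequencies matching the profile.
operations = list(profile)
weights = [profile[op] for op in operations]
test_cases = random.choices(operations, weights=weights, k=1000)

for op in operations:
    print(op, test_cases.count(op))  # counts roughly 600 / 300 / 100
```

Failure data collected from such a run can then feed a reliability estimate, since the test-input distribution matches expected field usage.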
The goal is to infer from the observed test results the future reliability of the software when in actual use. To do this, inputs are assigned probabilities, or profiles, according to their frequency of occurrence in actual operation. Operational profiles can be used during system testing to guide derivation of test cases that will assess the achievement of reliability objectives and exercise relative usage and criticality of different functions similar to what will be encountered in the operational environment [3].

3.5.2. User Observation Heuristics [10*, c5, c7]

Usability principles can provide guidelines for discovering problems in the design of the user interface [10*, c1s4] (see User Interface Design in the Software Design KA). Specialized heuristics, also called usability inspection methods, are applied for the systematic observation of system usage under controlled conditions in order to determine how well people can use the system and its interfaces. Usability heuristics include cognitive walkthroughs, claims analysis, field observations, thinking aloud, and even indirect approaches such as user questionnaires and interviews.

3.6. Model-Based Testing Techniques

A model in this context is an abstract (formal) representation of the software under test or of its software requirements (see Modeling in the Software Engineering Models and Methods KA). Model-based testing is used to validate requirements, check their consistency, and generate test cases focused on the behavioral aspects of the software.
The key components of model-based testing are [13]: the notation used to represent the model of the software or its requirements; workflow models or similar models; the test strategy or algorithm used for test case generation; the supporting infrastructure for the test execution; and the evaluation of test results compared to expected results. Due to the complexity of the techniques, model-based testing approaches are often used in conjunction with test automation harnesses. Model-based testing techniques include the following.

3.6.1. Decision Tables [1*, c9s6]

Decision tables represent logical relationships between conditions (roughly, inputs) and actions (roughly, outputs). Test cases are systematically derived by considering every possible combination of conditions and their corresponding resultant actions.
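As a sketch (the conditions, actions, and business rule are hypothetical), one test case can be derived per combination of conditions:

```python
from itertools import product

# Hypothetical decision table: two conditions determine one action.
def expected_action(is_member, order_over_limit):
    if is_member and order_over_limit:
        return "free_shipping"
    if is_member:
        return "member_discount"
    return "standard"

# Systematic derivation: one test case per condition combination.
test_cases = [
    {"is_member": m, "order_over_limit": o,
     "expected": expected_action(m, o)}
    for m, o in product([True, False], repeat=2)
]

for case in test_cases:
    print(case)  # four cases: every combination of conditions covered
```

With n binary conditions the table yields 2^n combinations, which is why condition-combination techniques often prune combinations whose actions are identical.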
A related technique is cause-effect graphing [1*, c13s6].

3.6.2. Finite-State Machines [1*, c10]

By modeling a program as a finite-state machine, tests can be selected in order to cover the states and transitions.

3.6.3. Formal Specifications [1*, c10s11] [2*, c15]

Stating the specifications in a formal language (see Formal Methods in the Software Engineering Models and Methods KA) permits automatic derivation of functional test cases and, at the same time, provides an oracle for checking test results.

TTCN3 (Testing and Test Control Notation version 3) is a language developed for writing test cases. The notation was conceived for the specific needs of testing telecommunication systems, so it is particularly suitable for testing complex communication protocols.

3.6.4. Workflow Models [2*, c8s3.2, c19s3.1]

Workflow models specify a sequence of activities performed by humans and/or software applications, usually represented through graphical notations. Each sequence of actions constitutes one workflow (also called a scenario). Both typical and alternate workflows should be tested [6, part 4]. A special focus on the roles in a workflow specification is targeted in business process testing.

3.7. Techniques Based on the Nature of the Application

The above techniques apply to all kinds of software. Additional techniques for test derivation and execution are based on the nature of the software being tested; for example,

• object-oriented software
• component-based software
• web-based software
• concurrent programs
• protocol-based software
• real-time systems
• safety-critical systems
• service-oriented software
• open-source software
• embedded software

3.8. Selecting and Combining Techniques

3.8.1. Combining Functional and Structural [1*, c9]

Model-based and code-based test techniques are often contrasted as functional vs. structural testing. These two approaches to test selection are not to be seen as alternatives but rather as complements; in fact, they use different sources of information and have been shown to highlight different kinds of problems. They could be used in combination, depending on budgetary considerations.

3.8.2. Deterministic vs. Random [1*, c9s6]

Test cases can be selected in a deterministic way, according to one of many techniques, or randomly drawn from some distribution of inputs, such as is usually done in reliability testing.
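A minimal sketch of random test selection (the function under test, the input distribution, and the checked property are all hypothetical): inputs are drawn from a distribution and checked against a property rather than per-case expected values.

```python
import random

# Hypothetical function under test: clamp x into the range [lo, hi].
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

random.seed(7)  # fixed seed: reproducible random test selection

# Draw test inputs at random from a chosen input distribution.
for _ in range(1000):
    x = random.uniform(-100, 100)
    lo = random.uniform(-50, 0)
    hi = random.uniform(0, 50)
    result = clamp(x, lo, hi)
    # Property-based oracle: the result stays in range, and equals x
    # whenever x was already in range.
    assert lo <= result <= hi
    assert result == x or x < lo or x > hi
print("1000 randomly drawn test cases passed")
```

A deterministic technique would instead pick specific values (e.g., boundaries x == lo and x == hi), which is why the two approaches tend to expose different faults.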
Several analytical and empirical comparisons have been conducted to analyze the conditions that make one approach more effective than the other.

4. Test-Related Measures

Sometimes testing techniques are confused with testing objectives. Testing techniques can be viewed as aids that help to ensure the achievement of test objectives [6, part 4]. For instance, branch coverage is a popular testing technique. Achieving a specified branch coverage measure (e.g., 95% branch coverage) should not be the objective of testing per se: it is a way of improving the chances of finding failures by attempting to systematically exercise every program branch at every decision point. To avoid such misunderstandings, a clear distinction should be made between test-related measures that provide an evaluation of the program under test, based on the observed test outputs, and measures that evaluate the thoroughness of the test set.
(See Software Engineering Measurement in the Software Engineering Management KA for information on measurement programs. See Software Process and Product Measurement in the Software Engineering Process KA for information on measures.)

Measurement is usually considered fundamental to quality analysis. Measurement may also be used to optimize the planning and execution of the tests. Test management can use several different process measures to monitor progress. (See section 5.1, Practical Considerations, for a discussion of measures of the testing process useful for management purposes.)

4.1. Evaluation of the Program Under Test

4.1.1. Program Measurements That Aid in Planning and Designing Tests [9*, c11]

Measures based on software size (for example, source lines of code or functional size; see Measuring Requirements in the Software Requirements KA) or on program structure can be used to guide testing. Structural measures also include measurements that determine the frequency with which modules call one another.

4.1.2. Fault Types, Classification, and Statistics [9*, c4]

The testing literature is rich in classifications and taxonomies of faults. To make testing more effective, it is important to know which types of faults may be found in the software under test and the relative frequency with which these faults have occurred in the past. This information can be useful in making quality predictions as well as in process improvement (see Defect Characterization in the Software Quality KA).

4.1.3. Fault Density [1*, c13s4] [9*, c4]

A program under test can be evaluated by counting discovered faults; fault density is the ratio between the number of faults found and the size of the program.

4.1.4. Life Test, Reliability Evaluation [1*, c15] [9*, c3]

A statistical estimate of software reliability, which can be obtained by observing the reliability achieved, can be used to evaluate a software product and decide whether or not testing can be stopped (see section 2.2, Reliability Achievement and Evaluation).

4.1.5. Reliability Growth Models [1*, c15] [9*, c8]

Reliability growth models provide a prediction of reliability based on failures.
They assume, in general, that when the faults that caused the observed failures have been fixed (although some models also accept imperfect fixes), the estimated product’s reliability exhibits, on average, an increasing trend. There are many published reliability growth models. Notably, these models are divided into failure-count and time-between-failure models.

4.2. Evaluation of the Tests Performed

4.2.1. Coverage / Thoroughness Measures [9*, c11]

Several test adequacy criteria require that the test cases systematically exercise a set of elements identified in the program or in the specifications (see topic 3, Test Techniques).
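As a sketch of such a thoroughness measure (the branch identifiers and the hand-instrumented function are hypothetical), branch coverage is the fraction of branch outcomes exercised by the test set:

```python
# Sketch: compute branch coverage as a test-set thoroughness measure.
# Real tools instrument the code automatically; this is done by hand.

executed_branches = set()

def classify(n):
    # Each decision point contributes two branch outcomes.
    if n < 0:
        executed_branches.add("b1_true")
        return "negative"
    executed_branches.add("b1_false")
    if n == 0:
        executed_branches.add("b2_true")
        return "zero"
    executed_branches.add("b2_false")
    return "positive"

all_branches = {"b1_true", "b1_false", "b2_true", "b2_false"}

# A test set that never exercises the n == 0 outcome.
for test_input in (-5, 7):
    classify(test_input)

coverage = len(executed_branches) / len(all_branches)
print(f"branch coverage: {coverage:.0%}")  # → 75%: 3 of 4 outcomes
```

The uncovered outcome (`b2_true`) points directly at the missing test case, which is how such measures guide test-set improvement rather than serving as an end in themselves.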