Finally, Software Testing Tools are presented in topic six.

1. Software Testing Fundamentals

1.1. Testing-Related Terminology

1.1.1. Definitions of Testing and Related Terminology
[1*, c1, c2] [2*, c8]

Definitions of testing and testing-related terminology are provided in the cited references and summarized as follows.

1.1.2. Faults vs. Failures
[1*, c1s5] [2*, c11]

Many terms are used in the software engineering literature to describe a malfunction: notably fault, failure, and error, among others.
This terminology is precisely defined in [3, c2]. It is essential to clearly distinguish between the cause of a malfunction (for which the term fault will be used here) and an undesired effect observed in the system’s delivered service (which will be called a failure). Indeed, there may well be faults in the software that never manifest themselves as failures (see Theoretical and Practical Limitations of Testing in section 1.2, Key Issues). Thus testing can reveal failures, but it is the faults that can and must be removed [3].

The more generic term defect can be used to refer to either a fault or a failure when the distinction is not important [3]. However, it should be recognized that the cause of a failure cannot always be unequivocally identified. No theoretical criteria exist to definitively determine, in general, the fault that caused an observed failure. It might be said that it was the fault that had to be modified to remove the failure, but other modifications might have worked just as well. To avoid ambiguity, one could refer to failure-causing inputs instead of faults—that is, those sets of inputs that cause a failure to appear.
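
To make the distinction concrete, here is a minimal, invented sketch: the function below contains a fault (a wrong boundary condition), yet the fault surfaces as a failure only for particular inputs, so tests that never supply those inputs observe no failure.

    def is_adult(age: int) -> bool:
        """Intended behavior: return True whenever age >= 18."""
        return age > 18  # fault: '>' should be '>='; wrong only at the boundary

    # These inputs do not trigger the fault, so no failure is observed.
    assert is_adult(25) is True
    assert is_adult(17) is False

    # 18 is a failure-causing input: executing it exposes the fault as a failure.
    print("is_adult(18) =", is_adult(18))  # prints False, contradicting the intended behavior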

1.2. Key Issues

1.2.1. Test Selection Criteria / Test Adequacy Criteria (Stopping Rules)
[1*, c1s14, c6s6, c12s7]

A test selection criterion is a means of selecting test cases or determining that a set of test cases is sufficient for a specified purpose. Test adequacy criteria can be used to decide when sufficient testing will be, or has been, accomplished [4] (see Termination in section 5.1, Practical Considerations).
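
As a hypothetical illustration of an adequacy criterion used as a stopping rule, the sketch below treats branch coverage as the criterion: the test set is judged sufficient only once both branch outcomes of the (invented) function under test have been exercised.

    def classify(x: int) -> str:
        # Two branch outcomes: the "x < 0" branch taken, and not taken.
        if x < 0:
            return "negative"
        return "non-negative"

    def branches_covered(test_inputs) -> set:
        """Run the tests and record which branch outcomes they exercise."""
        covered = set()
        for x in test_inputs:
            classify(x)  # execute the unit under test
            covered.add("x < 0 taken" if x < 0 else "x < 0 not taken")
        return covered

    ALL_BRANCHES = {"x < 0 taken", "x < 0 not taken"}

    # Stopping rule: the test set is adequate once every branch outcome is covered.
    print(branches_covered([5, 7]) == ALL_BRANCHES)   # False: the negative branch is untested
    print(branches_covered([5, -3]) == ALL_BRANCHES)  # True: the criterion is satisfied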

1.2.2. Testing Effectiveness / Objectives for Testing
[1*, c11s4, c13s11]

Testing effectiveness is determined by analyzing a set of program executions. Selection of the tests to be executed can be guided by different objectives: it is only in light of the objective pursued that the effectiveness of the test set can be evaluated.

1.2.3. Testing for Defect Discovery
[1*, c1s14]

In testing for defect discovery, a successful test is one that causes the system to fail. This is quite different from testing to demonstrate that the software meets its specifications or other desired properties, in which case testing is successful if no failures are observed under realistic test cases and test environments.

1.2.4. The Oracle Problem
[1*, c1s9, c9s7]

An oracle is any human or mechanical agent that decides whether a program behaved correctly in a given test and accordingly produces a verdict of “pass” or “fail.” There exist many different kinds of oracles; for example, unambiguous requirements specifications, behavioral models, and code annotations. Automating such mechanized oracles can be difficult and expensive.
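
As a small, hedged sketch of a mechanized oracle, the example below derives the expected result from a trusted reference model and issues a pass/fail verdict; the program under test and the reference model are invented for the illustration.

    def program_under_test(values):
        """Hypothetical implementation whose behavior is being checked."""
        return sorted(values, reverse=True)

    def oracle(test_input, observed_output) -> str:
        """Mechanized oracle: compare the observed output against the output
        of a trusted reference model and return the verdict."""
        expected_output = sorted(test_input, reverse=True)
        return "pass" if observed_output == expected_output else "fail"

    test_input = [3, 1, 2]
    print(oracle(test_input, program_under_test(test_input)))  # "pass"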

1.2.5. Theoretical and Practical Limitations of Testing
[1*, c2s7]

Testing theory warns against ascribing an unjustified level of confidence to a series of successful tests. Unfortunately, most established results of testing theory are negative ones, in that they state what testing can never achieve as opposed to what it actually achieves. The most famous quotation in this regard is the Dijkstra aphorism that “program testing can be used to show the presence of bugs, but never to show their absence” [5]. The obvious reason for this is that complete testing is not feasible for realistic software.
Because of this, testing must be driven by risk [6, part 1] and can be seen as a risk management strategy.

1.2.6. The Problem of Infeasible Paths
[1*, c4s7]

Infeasible paths are control flow paths that cannot be exercised by any input data. They are a significant problem in path-based testing, particularly in the automated derivation of test inputs to exercise control flow paths.
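
A minimal, invented sketch of the problem: in the function below, the control flow path that takes both branch A and branch B cannot be exercised by any input, because the two conditions are mutually exclusive.

    def bonus(score: int) -> int:
        result = 0
        if score > 90:   # branch A
            result += 10
        if score < 50:   # branch B
            result += 5
        return result

    # The path "A taken, then B taken" is infeasible: no value of score satisfies
    # both score > 90 and score < 50, so automated test-input derivation aimed at
    # covering every path will search in vain for such an input.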

1.2.7. Testability
[1*, c17s2]

The term “software testability” has two related but different meanings: on the one hand, it refers to the ease with which a given test coverage criterion can be satisfied; on the other hand, it is defined as the likelihood, possibly measured statistically, that a set of test cases will expose a failure if the software is faulty. Both meanings are important.

1.3. Relationship of Testing to Other Activities

Software testing is related to, but different from, static software quality management techniques, proofs of correctness, debugging, and program construction. However, it is informative to consider testing from the point of view of software quality analysts and of certifiers.

• Testing vs. Static Software Quality Management Techniques (see Software Quality Management Techniques in the Software Quality KA [1*, c12]).
• Testing vs. Correctness Proofs and Formal Verification (see the Software Engineering Models and Methods KA [1*, c17s2]).
• Testing vs. Debugging (see Construction Testing in the Software Construction KA and Debugging Tools and Techniques in the Computing Foundations KA [1*, c3s6]).
• Testing vs. Program Construction (see Construction Testing in the Software Construction KA [1*, c3s2]).

2. Test Levels

Software testing is usually performed at different levels throughout the development and maintenance processes. Levels can be distinguished based on the object of testing, which is called the target, or on the purpose, which is called the objective (of the test level).

2.1. The Target of the Test
[1*, c1s13] [2*, c8s1]

The target of the test can vary: a single module, a group of such modules (related by purpose, use, behavior, or structure), or an entire system.
Three test stages can be distinguished: unit, integration, and system. These three test stages do not imply any process model, nor is any one of them assumed to be more important than the other two.

2.1.1. Unit Testing
[1*, c3] [2*, c8]

Unit testing verifies the functioning in isolation of software elements that are separately testable. Depending on the context, these could be individual subprograms or a larger component made of highly cohesive units. Typically, unit testing occurs with access to the code being tested and with the support of debugging tools. The programmers who wrote the code typically, but not always, conduct unit testing.
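
For instance, a minimal unit test written with Python’s standard unittest module (the function under test is invented for this sketch) exercises a single, separately testable element in isolation.

    import unittest

    def word_count(text: str) -> int:
        """Unit under test: count whitespace-separated words."""
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_simple_sentence(self):
            self.assertEqual(word_count("software testing fundamentals"), 3)

        def test_empty_string(self):
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()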

2.1.2. Integration Testing
[1*, c7] [2*, c8]

Integration testing is the process of verifying the interactions among software components. Classical integration testing strategies, such as top-down and bottom-up, are often used with hierarchically structured software. Modern, systematic integration strategies are typically architecture-driven, which involves incrementally integrating the software components or subsystems based on identified functional threads. Integration testing is often an ongoing activity at each stage of development, during which software engineers abstract away lower-level perspectives and concentrate on the perspectives of the level at which they are integrating. For other than small, simple software, incremental integration testing strategies are usually preferred to putting all of the components together at once—which is often called “big bang” testing.
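
As a hedged sketch of incremental integration, the example below checks the interaction between a pricing component and a tax dependency that is not yet integrated, substituting a stub for the missing component; all class and function names are invented for the illustration.

    import unittest
    from unittest.mock import Mock

    class PricingService:
        """Component under integration; it collaborates with a tax component."""

        def __init__(self, tax_provider):
            self.tax_provider = tax_provider

        def total(self, net_amount: float) -> float:
            return net_amount * (1 + self.tax_provider.rate_for("default"))

    class PricingIntegrationTest(unittest.TestCase):
        def test_total_uses_tax_rate(self):
            # The tax component is not integrated yet, so a stub stands in for it.
            tax_stub = Mock()
            tax_stub.rate_for.return_value = 0.20

            pricing = PricingService(tax_stub)
            self.assertAlmostEqual(pricing.total(100.0), 120.0)
            tax_stub.rate_for.assert_called_once_with("default")

    if __name__ == "__main__":
        unittest.main()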

2.1.3. System Testing
[1*, c8] [2*, c8]

System testing is concerned with testing the behavior of an entire system. Effective unit and integration testing will have identified many of the software defects. System testing is usually considered appropriate for assessing the nonfunctional system requirements—such as security, speed, accuracy, and reliability (see Functional and Non-Functional Requirements in the Software Requirements KA and Software Quality Requirements in the Software Quality KA). External interfaces to other applications, utilities, hardware devices, or the operating environment are also usually evaluated at this level.

2.2. Objectives of Testing
[1*, c1s7]

Testing is conducted in view of specific objectives, which are stated more or less explicitly and with varying degrees of precision. Stating the objectives of testing in precise, quantitative terms supports measurement and control of the test process.

Testing can be aimed at verifying different properties.
Test cases can be designed to check that the functional specifications are correctly implemented, which is variously referred to in the literature as conformance testing, correctness testing, or functional testing. However, several other nonfunctional properties may be tested as well—including performance, reliability, and usability, among many others (see Models and Quality Characteristics in the Software Quality KA). Other important objectives for testing include, but are not limited to, reliability measurement, identification of security vulnerabilities, usability evaluation, and software acceptance, for which different approaches would be taken. Note that, in general, the test objectives vary with the test target; different purposes are addressed at different levels of testing.

The subtopics listed below are those most often cited in the literature. Note that some kinds of testing are more appropriate for custom-made software packages—installation testing, for example—and others for consumer products, like beta testing.

2.2.1. Acceptance / Qualification Testing
[1*, c1s7] [2*, c8s4]

Acceptance / qualification testing determines whether a system satisfies its acceptance criteria, usually by checking desired system behaviors against the customer’s requirements.