To evaluate the thoroughness of the executed tests, software engineers can monitor the elements covered so that they can dynamically measure the ratio between covered elements and the total number. For example, it is possible to measure the percentage of branches covered in the program flow graph or the percentage of functional requirements exercised among those listed in the specifications document. Code-based adequacy criteria require appropriate instrumentation of the program under test.
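As a minimal illustration of such a coverage measure, the sketch below computes a branch coverage ratio from instrumentation data. The branch identifiers and the record_branch hook are hypothetical, standing in for whatever instrumentation a real test tool inserts into the program under test.

```python
# Minimal sketch of a branch coverage measure (hypothetical instrumentation).
# `all_branches` enumerates every branch in the program flow graph;
# `covered` records the branches actually exercised by the executed tests.

all_branches = {"f:if@10:true", "f:if@10:false", "g:while@22:true", "g:while@22:false"}
covered = set()

def record_branch(branch_id: str) -> None:
    """Called by the instrumented program each time a branch is taken."""
    covered.add(branch_id)

# ... run the test suite against the instrumented program ...
record_branch("f:if@10:true")
record_branch("g:while@22:true")

coverage = len(covered & all_branches) / len(all_branches)
print(f"Branch coverage: {coverage:.0%}")  # 50% for this toy run
```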
4.2.2. Fault Seeding
[1*, c2s5] [9*, c6]

In fault seeding, some faults are artificially introduced into a program before testing. When the tests are executed, some of these seeded faults will be revealed as well as, possibly, some faults that were already there. In theory, depending on which and how many of the artificial faults are discovered, testing effectiveness can be evaluated and the remaining number of genuine faults can be estimated, as illustrated in the sketch below. In practice, statisticians question the distribution and representativeness of seeded faults relative to genuine faults and the small sample size on which any extrapolations are based. Some also argue that this technique should be used with great care since inserting faults into software involves the obvious risk of leaving them there.
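A common way to formalize the estimate is the capture-recapture style estimator often attributed to Mills: if S faults are seeded, s of them are found, and n genuine faults are found by the same tests, the total number of genuine faults is estimated as N ≈ n · S / s. The sketch below applies it; the numbers are illustrative only.

```python
def estimate_genuine_faults(seeded: int, seeded_found: int, genuine_found: int) -> float:
    """Mills-style fault seeding estimate.

    Assumes seeded faults are found at the same rate as genuine ones,
    so total_genuine / genuine_found ~= seeded / seeded_found.
    """
    if seeded_found == 0:
        raise ValueError("No seeded faults found; the estimate is undefined.")
    return genuine_found * seeded / seeded_found

# Illustrative numbers: 20 faults seeded, tests reveal 15 of them
# plus 30 genuine faults.
total = estimate_genuine_faults(seeded=20, seeded_found=15, genuine_found=30)
remaining = total - 30
print(f"Estimated genuine faults: {total:.0f}, still latent: {remaining:.0f}")
# -> Estimated genuine faults: 40, still latent: 10
```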
4.2.3. Mutation Score
[1*, c3s5]

In mutation testing (see Mutation Testing in section 3.4, Fault-Based Techniques), the ratio of killed mutants to the total number of generated mutants can be a measure of the effectiveness of the executed test set.
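As an illustration, the toy sketch below hand-builds three mutants of a small function by changing one operator or branch each and computes the resulting mutation score; real mutation tools generate mutants systematically from a catalog of operators.

```python
# Toy mutation-score computation: a mutant is "killed" when at least one
# test case distinguishes it from the original function.

def max_of(a, b):
    return a if a >= b else b

mutants = [
    lambda a, b: a if a > b else b,   # >= mutated to >  (equivalent mutant:
                                      # behaves like max, so no test kills it)
    lambda a, b: a if a <= b else b,  # >= mutated to <=
    lambda a, b: b if a >= b else a,  # branch outcomes swapped
]

tests = [(3, 5), (5, 3), (4, 4)]

killed = sum(
    any(m(a, b) != max_of(a, b) for a, b in tests)
    for m in mutants
)
print(f"Mutation score: {killed}/{len(mutants)} = {killed / len(mutants):.0%}")
# -> Mutation score: 2/3 = 67%
```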
4.2.4. Comparison and Relative Effectiveness of Different Techniques

Several studies have been conducted to compare the relative effectiveness of different testing techniques. It is important to be precise as to the property against which the techniques are being assessed; what, for instance, is the exact meaning given to the term "effectiveness"? Possible interpretations include the number of tests needed to find the first failure, the ratio of the number of faults found through testing to all the faults found during and after testing, and how much reliability was improved. Analytical and empirical comparisons between different techniques have been conducted according to each of the notions of effectiveness specified above.

5. Test Process

Testing concepts, strategies, techniques, and measures need to be integrated into a defined and controlled process. The test process supports testing activities and provides guidance to testers and testing teams, from test planning to test output evaluation, in such a way as to provide assurance that the test objectives will be met in a cost-effective way.

5.1. Practical Considerations

5.1.1. Attitudes / Egoless Programming
[1*, c16] [9*, c15]

An important element of successful testing is a collaborative attitude towards testing and quality assurance activities.
Managers have a key role in fostering a generally favorable reception towards failure discovery and correction during software development and maintenance; for instance, by overcoming the mindset of individual code ownership among programmers and by promoting a collaborative environment with team responsibility for anomalies in the code.

5.1.2. Test Guides
[1*, c12s1] [9*, c15s1]

The testing phases can be guided by various aims—for example, risk-based testing uses the product risks to prioritize and focus the test strategy (as sketched below), and scenario-based testing defines test cases based on specified software scenarios.
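As a hypothetical illustration of risk-based prioritization, the sketch below orders test cases by a simple risk exposure score (likelihood times impact of the product risk each test addresses); the scoring scale and test names are invented for the example.

```python
# Hypothetical risk-based test prioritization: each test case is tied to a
# product risk scored on a 1-5 scale, and the suite is ordered by
# decreasing risk exposure (likelihood * impact).

test_risks = {
    "test_payment_rollback": (4, 5),   # (likelihood, impact)
    "test_login_lockout":    (3, 4),
    "test_report_layout":    (2, 1),
}

prioritized = sorted(
    test_risks,
    key=lambda name: test_risks[name][0] * test_risks[name][1],
    reverse=True,
)
for name in prioritized:
    likelihood, impact = test_risks[name]
    print(f"{name}: exposure {likelihood * impact}")
```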
5.1.3. Test Process Management
[1*, c12] [9*, c15]

Test activities conducted at different levels (see topic 2, Test Levels) must be organized—together with people, tools, policies, and measures—into a well-defined process that is an integral part of the life cycle.

5.1.4. Test Documentation and Work Products
[1*, c8s12] [9*, c4s5]

Documentation is an integral part of the formalization of the test process [6, part 3]. Test documents may include, among others, the test plan, test design specification, test procedure specification, test case specification, test log, and test incident report. The software under test is documented as the test item. Test documentation should be produced and continually updated to the same level of quality as other types of documentation in software engineering.
Test documentation should also be under the control of software configuration management (see the Software Configuration Management KA). Moreover, test documentation includes work products that can provide material for user manuals and user training.

5.1.5. Test-Driven Development
[1*, c1s16]

Test-driven development (TDD) originated as one of the core XP (extreme programming) practices and consists of writing unit tests prior to writing the code to be tested (see Agile Methods in the Software Engineering Models and Methods KA). In this way, TDD develops the test cases as a surrogate for a software requirements specification document rather than as an independent check that the software has correctly implemented the requirements. Rather than a testing strategy, TDD is a practice that requires software developers to define and maintain unit tests; it thus can also have a positive impact on elaborating user needs and software requirements specifications.
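A minimal sketch of the practice, using Python's unittest: the test class below is written first and fails until the function it specifies is implemented. The slugify function and its behavior are invented for the illustration.

```python
import unittest

# Step 1: write the test first. It acts as an executable specification
# of the (not yet written) slugify function and fails until step 2.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  trimmed  "), "trimmed")

# Step 2: write just enough code to make the tests pass.
def slugify(text: str) -> str:
    return "-".join(text.split()).lower()

if __name__ == "__main__":
    unittest.main()
```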
5.1.6. Internal vs. Independent Test Team
[1*, c16]

Formalizing the testing process may also involve formalizing the organization of the testing team. The testing team can be composed of internal members (that is, on the project team, involved or not in software construction), of external members (in the hope of bringing an unbiased, independent perspective), or of both internal and external members. Considerations of cost, schedule, maturity levels of the involved organizations, and criticality of the application can guide the decision.

5.1.7. Cost/Effort Estimation and Test Process Measures
[1*, c18s3] [9*, c5s7]

Several measures related to the resources spent on testing, as well as to the relative fault-finding effectiveness of the various test phases, are used by managers to control and improve the testing process.
These test measures may cover such aspects as number of test cases specified, number of test cases executed, number of test cases passed, and number of test cases failed, among others.

Evaluation of test phase reports can be combined with root-cause analysis to evaluate test process effectiveness in finding faults as early as possible. Such an evaluation can be associated with the analysis of risks. Moreover, the resources that are worth spending on testing should be commensurate with the use/criticality of the application: different techniques have different costs and yield different levels of confidence in product reliability.
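A hypothetical aggregation of such counts might look like the sketch below; the field names are invented, and in practice these numbers would typically come from a test management tool.

```python
from dataclasses import dataclass

@dataclass
class TestPhaseMeasures:
    """Basic counts a manager might track for one test phase (invented fields)."""
    specified: int
    executed: int
    passed: int
    failed: int

    @property
    def execution_ratio(self) -> float:
        return self.executed / self.specified

    @property
    def pass_rate(self) -> float:
        return self.passed / self.executed

m = TestPhaseMeasures(specified=120, executed=100, passed=93, failed=7)
print(f"Executed {m.execution_ratio:.0%} of specified tests; pass rate {m.pass_rate:.0%}")
# -> Executed 83% of specified tests; pass rate 93%
```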
5.1.8. Termination
[9*, c10s4]

A decision must be made as to how much testing is enough and when a test stage can be terminated. Thoroughness measures, such as achieved code coverage or functional coverage, as well as estimates of fault density or of operational reliability, provide useful support but are not sufficient in themselves. The decision also involves considerations about the costs and risks incurred by possible remaining failures, as opposed to the costs incurred by continuing to test (see Test Selection Criteria / Test Adequacy Criteria in section 1.2, Key Issues).
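One way such a decision might be operationalized is a checklist-style stopping rule combining several of these indicators, as in the sketch below. The thresholds are purely illustrative, not recommended values; an actual project would set them from its own risk analysis.

```python
# Illustrative stopping rule for a test stage (all thresholds made up).

def can_terminate(branch_coverage: float,
                  open_major_failures: int,
                  estimated_latent_faults: float,
                  acceptable_latent_faults: float) -> bool:
    return (branch_coverage >= 0.85          # thoroughness measure
            and open_major_failures == 0     # no known serious problems
            and estimated_latent_faults <= acceptable_latent_faults)

print(can_terminate(branch_coverage=0.91,
                    open_major_failures=0,
                    estimated_latent_faults=3.5,
                    acceptable_latent_faults=5.0))  # True
```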
5.1.9. Test Reuse and Test Patterns
[9*, c2s5]

To carry out testing or maintenance in an organized and cost-effective way, the means used to test each part of the software should be reused systematically. A repository of test materials should be under the control of software configuration management so that changes to software requirements or design can be reflected in changes to the tests conducted.

The test solutions adopted for testing some application types under certain circumstances, with the motivations behind the decisions taken, form a test pattern that can itself be documented for later reuse in similar projects.

5.2. Test Activities

As shown in the following description, successful management of test activities strongly depends on the software configuration management process (see the Software Configuration Management KA).

5.2.1. Planning
[1*, c12s1, c12s8]

Like all other aspects of project management, testing activities must be planned. Key aspects of test planning include coordination of personnel, availability of test facilities and equipment, creation and maintenance of all test-related documentation, and planning for possible undesirable outcomes.
If more than one baseline of the software is being maintained, then a major planning consideration is the time and effort needed to ensure that the test environment is set to the proper configuration.

5.2.2. Test-Case Generation
[1*, c12s1, c12s3]

Generation of test cases is based on the level of testing to be performed and the particular testing techniques. Test cases should be under the control of software configuration management and include the expected results for each test.

5.2.3. Test Environment Development
[1*, c12s6]

The environment used for testing should be compatible with the other adopted software engineering tools. It should facilitate development and control of test cases, as well as logging and recovery of expected results, scripts, and other testing materials.

5.2.4. Execution
[1*, c12s7]

Execution of tests should embody a basic principle of scientific experimentation: everything done during testing should be performed and documented clearly enough that another person could replicate the results. Hence, testing should be performed in accordance with documented procedures using a clearly defined version of the software under test.
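A minimal sketch of what such a replicable record could capture, pairing a test case with its expected result (5.2.2) and the exact software version and environment it ran against (5.2.4); all field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestExecutionRecord:
    """Hypothetical record pairing a test case with what is needed to replicate it."""
    test_case_id: str
    procedure_ref: str        # documented procedure followed
    sut_version: str          # clearly defined version of the software under test
    environment: str          # configuration the test environment was set to
    inputs: tuple
    expected_result: str
    actual_result: str

    @property
    def passed(self) -> bool:
        return self.actual_result == self.expected_result

record = TestExecutionRecord(
    test_case_id="TC-042",
    procedure_ref="TP-7, step 3",
    sut_version="2.3.1+build.88",
    environment="staging-linux-x86_64",
    inputs=("order-123", "EUR"),
    expected_result="invoice created",
    actual_result="invoice created",
)
print(record.passed)  # True
```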
5.2.5. Test Results Evaluation
[9*, c15]

The results of testing should be evaluated to determine whether or not the testing has been successful. In most cases, "successful" means that the software performed as expected and did not have any major unexpected outcomes. Not all unexpected outcomes are necessarily faults; some may be determined to be simply noise. Before a fault can be removed, an analysis and debugging effort is needed to isolate, identify, and describe it.
When test results are particularly important, a formal review board may be convened to evaluate them.

5.2.6. Problem Reporting / Test Log
[1*, c13s9]

Testing activities can be entered into a testing log to identify when a test was conducted, who performed the test, what software configuration was used, and other relevant identification information. Unexpected or incorrect test results can be recorded in a problem reporting system, the data for which forms the basis for later debugging and fixing the problems that were observed as failures during testing. Also, anomalies not classified as faults could be documented in case they later turn out to be more serious than first thought.
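A hypothetical shape for such log entries and problem reports is sketched below; the fields mirror the identification information listed in the text above, and the names are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TestLogEntry:
    """One row of a testing log (invented fields mirroring the text above)."""
    test_case_id: str
    performed_by: str
    software_configuration: str
    executed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class ProblemReport:
    """Entry in a problem-reporting system for an unexpected or incorrect result."""
    log_entry: TestLogEntry
    observed_failure: str
    classified_as_fault: bool  # anomalies are kept even when not classified as faults

entry = TestLogEntry("TC-042", "j.doe", "2.3.1 on staging-linux-x86_64")
report = ProblemReport(entry, "invoice total off by 0.01", classified_as_fault=True)
print(report.observed_failure)
```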