Test reports are also inputs to the change management request process (see Software Configuration Control in the Software Configuration Management KA).

5.2.7. Defect Tracking
[9*, c9]

Defects can be tracked and analyzed to determine when they were introduced into the software, why they were created (for example, poorly defined requirements, incorrect variable declaration, memory leak, programming syntax error), and when they could have been first observed in the software. Defect tracking information is used to determine which aspects of software testing and other processes need improvement and how effective previous approaches have been.
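To make the idea concrete, here is a minimal sketch (Python; the record fields, phase labels, and causes are illustrative assumptions, not prescribed by this Guide) of the kind of data a defect-tracking system records and how tallying it highlights the process areas most in need of improvement:

    # Illustrative defect-tracking records; field names and phase labels
    # are hypothetical, chosen only to mirror the questions above.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class DefectRecord:
        defect_id: str
        injected_in: str   # when the defect was introduced (phase)
        detected_in: str   # when it was first observed
        cause: str         # why it was created

    def summarize(defects):
        # Tallies by injection phase and by cause point to the testing
        # and development activities that most need improvement.
        return (Counter(d.injected_in for d in defects),
                Counter(d.cause for d in defects))

    defects = [
        DefectRecord("D-1", "requirements", "system test",
                     "poorly defined requirements"),
        DefectRecord("D-2", "coding", "unit test", "memory leak"),
        DefectRecord("D-3", "coding", "unit test", "syntax error"),
    ]
    by_phase, by_cause = summarize(defects)
    print(by_phase.most_common())   # [('coding', 2), ('requirements', 1)]

A rising count for a given phase or cause suggests where earlier reviews or stricter checks would pay off.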
6. Software Testing Tools

6.1. Testing Tool Support
[1*, c12s11] [9*, c5]

Testing requires many labor-intensive tasks, running numerous program executions, and handling a great amount of information. Appropriate tools can alleviate the burden of clerical, tedious operations and make them less error-prone. Sophisticated tools can support test design and test case generation, making testing more effective.

6.1.1. Selecting Tools
[1*, c12s11]

Guidance to managers and testers on how to select the testing tools that will be most useful to their organization and processes is a very important topic, as tool selection greatly affects testing efficiency and effectiveness. Tool selection depends on diverse evidence, such as development choices, evaluation objectives, execution facilities, and so on.
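One common way to combine such diverse evidence is a weighted-criteria score. The sketch below is a hedged illustration only: the candidate tools, criteria names, weights, and ratings are hypothetical, and a real selection would use criteria drawn from the organization's own context:

    # Hypothetical weighted scoring of candidate testing tools.
    WEIGHTS = {"fits_dev_stack": 0.40,    # development choices
               "meets_objectives": 0.35,  # evaluation objectives
               "runs_in_ci": 0.25}        # execution facilities

    CANDIDATES = {
        "ToolA": {"fits_dev_stack": 4, "meets_objectives": 3, "runs_in_ci": 5},
        "ToolB": {"fits_dev_stack": 5, "meets_objectives": 4, "runs_in_ci": 3},
    }

    def score(ratings):
        # Weighted sum of 1-5 ratings for one candidate tool.
        return sum(WEIGHTS[c] * r for c, r in ratings.items())

    for tool in sorted(CANDIDATES, key=lambda t: score(CANDIDATES[t]),
                       reverse=True):
        print(f"{tool}: {score(CANDIDATES[tool]):.2f}")
        # -> ToolB: 4.15, then ToolA: 3.90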
In general, there may not be a unique tool that will satisfy particular needs, so a suite of tools could be an appropriate choice.

6.2. Categories of Tools

We categorize the available tools according to their functionality:

• Test harnesses (drivers, stubs) [1*, c3s9] provide a controlled environment in which tests can be launched and the test outputs can be logged. In order to execute parts of a program, drivers and stubs are provided to simulate calling and called modules, respectively (a minimal driver-and-stub sketch follows this list).
• Test generators [1*, c12s11] provide assistance in the generation of test cases. The generation can be random, path-based, model-based, or a mix thereof.
• Capture/replay tools [1*, c12s11] automatically reexecute, or replay, previously executed tests whose inputs and outputs (e.g., screens) have been recorded.
• Oracle/file comparator/assertion checking tools [1*, c9s7] assist in deciding whether a test outcome is successful or not.
• Coverage analyzers and instrumenters [1*, c4] work together. Coverage analyzers assess which and how many entities of the program flow graph have been exercised among all those required by the selected test coverage criterion. The analysis is made possible by program instrumenters, which insert recording probes into the code (see the probe sketch after this list).
• Tracers [1*, c1s7] record the history of a program's execution paths.
• Regression testing tools [1*, c12s16] support the reexecution of a test suite after a section of software has been modified. They can also help to select a test subset according to the change made.
• Reliability evaluation tools [9*, c8] support test results analysis and graphical visualization in order to assess reliability-related measures according to selected models.
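As promised above, here is a minimal driver-and-stub sketch (Python; the charge function and GatewayStub are hypothetical stand-ins for a unit under test and the module it calls, not an API from the cited references):

    # Unit under test: depends on a 'gateway' (the called module).
    def charge(amount, gateway):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return gateway.submit(amount)

    class GatewayStub:
        # Stub simulating the called module: returns a canned response
        # and records calls, so the test stays deterministic and loggable.
        def __init__(self):
            self.calls = []
        def submit(self, amount):
            self.calls.append(amount)
            return "approved"

    # Driver simulating the calling module: launches the test in a
    # controlled environment and logs the test output.
    stub = GatewayStub()
    result = charge(25.0, stub)
    assert result == "approved" and stub.calls == [25.0]
    print("test log:", stub.calls, "->", result)

The driver controls the inputs while the stub isolates the unit from the real called module; together they form the controlled environment this category describes.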
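Similarly, the interplay between an instrumenter and a coverage analyzer can be sketched as follows (again illustrative Python: classify, the probe identifiers, and the branch criterion are assumptions made for the example):

    # Branches required by the selected coverage criterion.
    REQUIRED = {"b1", "b2"}
    exercised = set()

    def probe(branch_id):
        # Recording probe of the kind an instrumenter inserts into code.
        exercised.add(branch_id)

    def classify(x):
        # Hypothetical program under test, shown already instrumented.
        if x >= 0:
            probe("b1")
            return "non-negative"
        probe("b2")
        return "negative"

    classify(7)  # run a single test case
    missed = REQUIRED - exercised
    print(f"branch coverage: {len(exercised)}/{len(REQUIRED)}; missed: {missed}")
    # -> branch coverage: 1/2; missed: {'b2'} (no negative input was tried)

The analyzer's report of unexercised entities then drives the design of additional test cases.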
MATRIX OF TOPICS VS. REFERENCE MATERIAL

(Each topic below is mapped in the matrix to chapters and sections of Naik and Tripathy 2008 [1*], Sommerville 2011 [2*], Kan 2003 [9*], and Nielsen 1993 [10*].)

1. Software Testing Fundamentals
1.1. Testing-Related Terminology
1.1.1. Definitions of Testing and Related Terminology
1.1.2. Faults vs. Failures
1.2. Key Issues
1.2.1. Test Selection Criteria / Test Adequacy Criteria (Stopping Rules)
1.2.2. Testing Effectiveness / Objectives for Testing
1.2.3. Testing for Defect Identification
1.2.4. The Oracle Problem
1.2.5. Theoretical and Practical Limitations of Testing
1.2.6. The Problem of Infeasible Paths
1.2.7. Testability
1.3. Relationship of Testing to Other Activities
1.3.1. Testing vs. Static Software Quality Management Techniques
1.3.2. Testing vs. Correctness Proofs and Formal Verification
1.3.3. Testing vs. Debugging
1.3.4. Testing vs. Programming
2. Test Levels
2.1. The Target of the Test
2.1.1. Unit Testing
2.1.2. Integration Testing
2.1.3. System Testing
2.2. Objectives of Testing
2.2.1. Acceptance / Qualification
2.2.2. Installation Testing
2.2.3. Alpha and Beta Testing
2.2.4. Reliability Achievement and Evaluation
2.2.5. Regression Testing
2.2.6. Performance Testing
2.2.7. Security Testing
2.2.8. Stress Testing
2.2.9. Back-to-Back Testing
2.2.10. Recovery Testing
2.2.11. Interface Testing
2.2.12. Configuration Testing
2.2.13. Usability and Human Computer Interaction Testing
3. Test Techniques
3.1. Based on the Software Engineer's Intuition and Experience
3.1.1. Ad Hoc
3.1.2. Exploratory Testing
3.2. Input Domain-Based Techniques
3.2.1. Equivalence Partitioning
3.2.2. Pairwise Testing
3.2.3. Boundary-Value Analysis
3.2.4. Random Testing
3.3. Code-Based Techniques
3.3.1. Control Flow-Based Criteria
3.3.2. Data Flow-Based Criteria
3.3.3. Reference Models for Code-Based Testing
3.4. Fault-Based Techniques
3.4.1. Error Guessing
3.4.2. Mutation Testing
3.5. Usage-Based Techniques
3.5.1. Operational Profile
3.5.2. User Observation Heuristics
3.6. Model-Based Testing Techniques
3.6.1. Decision Table
3.6.2. Finite-State Machines
3.6.3. Testing from Formal Specifications
3.7. Techniques Based on the Nature of the Application
3.8. Selecting and Combining Techniques
3.8.1. Functional and Structural
3.8.2. Deterministic vs. Random
4. Test-Related Measures
4.1. Evaluation of the Program Under Test
4.1.1. Program Measurements That Aid in Planning and Designing Testing
4.1.2. Fault Types, Classification, and Statistics
4.1.3. Fault Density
4.1.4. Life Test, Reliability Evaluation
4.1.5. Reliability Growth Models
4.2. Evaluation of the Tests Performed
4.2.1. Coverage / Thoroughness Measures
4.2.2. Fault Seeding
4.2.3. Mutation Score
4.2.4. Comparison and Relative Effectiveness of Different Techniques
5. Test Process
5.1. Practical Considerations
5.1.1. Attitudes / Egoless Programming
5.1.2. Test Guides
5.1.3. Test Process Management
5.1.4. Test Documentation and Work Products
5.1.5. Test-Driven Development
5.1.6. Internal vs. Independent Test Team
5.1.7. Cost/Effort Estimation and Other Process Measures
5.1.8. Termination
5.1.9. Test Reuse and Patterns
5.2. Test Activities
5.2.1. Planning
5.2.2. Test-Case Generation
5.2.3. Test Environment Development
5.2.4. Execution
5.2.5. Test Results Evaluation
5.2.6. Problem Reporting / Test Log
5.2.7. Defect Tracking
6. Software Testing Tools
6.1. Testing Tool Support
6.1.1. Selecting Tools
6.2. Categories of Tools

REFERENCES
[1*] S. Naik and P. Tripathy, Software Testing and Quality Assurance: Theory and Practice, Wiley-Spektrum, 2008.
[2*] I. Sommerville, Software Engineering, 9th ed., Addison-Wesley, 2011.
[3] M.R. Lyu, ed., Handbook of Software Reliability Engineering, McGraw-Hill and IEEE Computer Society Press, 1996.
[4] H. Zhu, P.A.V. Hall, and J.H.R. May, "Software Unit Test Coverage and Adequacy," ACM Computing Surveys, vol. 29, no. 4, Dec. 1997, pp. 366–427.
[5] E.W. Dijkstra, "Notes on Structured Programming," T.H.-Report 70-WSK-03, Technological University, Eindhoven, 1970; http://www.cs.utexas.edu/users/EWD/ewd02xx/EWD249.PDF.
[6] ISO/IEC/IEEE P29119-1/DIS Draft Standard for Software and Systems Engineering—Software Testing—Part 1: Concepts and Definitions, ISO/IEC/IEEE, 2012.
[7] ISO/IEC/IEEE 24765:2010 Systems and Software Engineering—Vocabulary, ISO/IEC/IEEE, 2010.
[8] S. Yoo and M. Harman, "Regression Testing Minimization, Selection and Prioritization: A Survey," Software Testing, Verification and Reliability, vol. 22, no. 2, Mar. 2012, pp. 67–120.
[9*] S.H. Kan, Metrics and Models in Software Quality Engineering, 2nd ed., Addison-Wesley, 2002.
[10*] J. Nielsen, Usability Engineering, Morgan Kaufmann, 1993.
[11] T.Y. Chen et al., "Adaptive Random Testing: The ART of Test Case Diversity," Journal of Systems and Software, vol. 83, no. 1, Jan. 2010, pp. 60–66.
[12] Y. Jia and M. Harman, "An Analysis and Survey of the Development of Mutation Testing," IEEE Trans. Software Engineering, vol. 37, no. 5, Sep.–Oct. 2011, pp. 649–678.
[13] M. Utting and B. Legeard, Practical Model-Based Testing: A Tools Approach, Morgan Kaufmann, 2007.

CHAPTER 5
SOFTWARE MAINTENANCE

ACRONYMS

MR    Modification Request
PR    Problem Report
SCM   Software Configuration Management
SLA   Service-Level Agreement
SQA   Software Quality Assurance
V&V   Verification and Validation

INTRODUCTION

Software maintenance activities are performed during the predelivery stage as well as during the postdelivery stage.
Predelivery activities include planning for postdelivery operations, maintainability, and logistics determination for transition activities [1*, c6s9]. Postdelivery activities include software modification, training, and operating or interfacing to a help desk. The Software Maintenance knowledge area (KA) is related to all other aspects of software engineering.