Software Engineering Body of Knowledge (v3) (2014)
MATRIX OF TOPICS VS. REFERENCE MATERIAL

[Matrix rows: 3.4 Construction Testing; 3.5 Construction for Reuse; 3.6 Construction with Reuse; 3.7 Construction Quality; 3.8 Integration; 4. Construction Technologies: 4.1 API Design and Use; 4.2 Object-Oriented Runtime Issues; 4.3 Parameterization and Generics; 4.4 Assertions, Design by Contract, and Defensive Programming; 4.5 Error Handling, Exception Handling, and Fault Tolerance; 4.6 Executable Models; 4.7 State-Based and Table-Driven Construction Techniques; 4.8 Runtime Configuration and Internationalization; 4.9 Grammar-Based Input Processing; 4.10 Concurrency Primitives; 4.11 Middleware; 4.12 Construction Methods for Distributed Software; 4.13 Constructing Heterogeneous Systems; 4.14 Performance Analysis and Tuning; 4.15 Platform Standards; 4.16 Test-First Programming; 5. Construction Tools: 5.1 Development Environments; 5.2 GUI Builders; 5.3 Unit Testing Tools; 5.4 Profiling, Performance Analysis, and Slicing Tools. Matrix columns: McConnell 2004 [1*]; Sommerville 2011 [2*]; Clements et al. 2010 [3*]; Gamma et al. 1994 [4*]; Mellor and Balcer 2002 [5*]; Null and Lobur 2006 [6*]; Silberschatz et al. 2008 [7*]. The chapter-level cell entries are not recoverable from this extraction.]
FURTHER READINGS

IEEE Std. 1517-2010, Standard for Information Technology—System and Software Life Cycle Processes—Reuse Processes, IEEE, 2010 [8].

This standard specifies the processes, activities, and tasks to be applied during each phase of the software life cycle to enable a software product to be constructed from reusable assets. It covers the concept of reuse-based development and the processes of construction for reuse and construction with reuse.

IEEE Std. 12207-2008 (a.k.a. ISO/IEC 12207:2008), Standard for Systems and Software Engineering—Software Life Cycle Processes, IEEE, 2008 [9].

This standard defines a series of software development processes, including the software construction process, the software integration process, and the software reuse process.

REFERENCES

[1*] S. McConnell, Code Complete, 2nd ed., Microsoft Press, 2004.
[2*] I. Sommerville, Software Engineering, 9th ed., Addison-Wesley, 2011.
[3*] P. Clements et al., Documenting Software Architectures: Views and Beyond, 2nd ed., Pearson Education, 2010.
[4*] E. Gamma et al., Design Patterns: Elements of Reusable Object-Oriented Software, 1st ed., Addison-Wesley Professional, 1994.
[5*] S.J. Mellor and M.J. Balcer, Executable UML: A Foundation for Model-Driven Architecture, 1st ed., Addison-Wesley, 2002.
[6*] L. Null and J. Lobur, The Essentials of Computer Organization and Architecture, 2nd ed., Jones and Bartlett Publishers, 2006.
[7*] A. Silberschatz, P.B. Galvin, and G. Gagne, Operating System Concepts, 8th ed., Wiley, 2008.
[8] IEEE Std. 1517-2010, Standard for Information Technology—System and Software Life Cycle Processes—Reuse Processes, IEEE, 2010.
[9] IEEE Std. 12207-2008 (a.k.a. ISO/IEC 12207:2008), Standard for Systems and Software Engineering—Software Life Cycle Processes, IEEE, 2008.

CHAPTER 4

SOFTWARE TESTING

ACRONYMS

API    Application Program Interface
TDD    Test-Driven Development
TTCN3  Testing and Test Control Notation Version 3
XP     Extreme Programming

INTRODUCTION

Software testing consists of the dynamic verification that a program provides expected behaviors on a finite set of test cases, suitably selected from the usually infinite execution domain.

In the above definition, italicized words correspond to key issues in describing the Software Testing knowledge area (KA):

• Dynamic: This term means that testing always implies executing the program on selected inputs. To be precise, the input value alone is not always sufficient to specify a test, since a complex, nondeterministic system might react to the same input with different behaviors, depending on the system state. In this KA, however, the term "input" will be maintained, with the implied convention that its meaning also includes a specified input state in those cases for which it is important. Static techniques are different from and complementary to dynamic testing; they are covered in the Software Quality KA. It is worth noting that terminology is not uniform among different communities, and some use the term "testing" also in reference to static techniques.

• Finite: Even in simple programs, so many test cases are theoretically possible that exhaustive testing could require months or years to execute. This is why, in practice, a complete set of tests can generally be considered infinite, and testing is conducted on a subset of all possible tests, which is determined by risk and prioritization criteria. Testing always implies a tradeoff between limited resources and schedules on the one hand and inherently unlimited test requirements on the other.

• Selected: The many proposed test techniques differ essentially in how the test set is selected, and software engineers must be aware that different selection criteria may yield vastly different degrees of effectiveness. How to identify the most suitable selection criterion under given conditions is a complex problem; in practice, risk analysis techniques and software engineering expertise are applied.

• Expected: It must be possible, although not always easy, to decide whether the observed outcomes of program testing are acceptable or not; otherwise, the testing effort is useless. The observed behavior may be checked against user needs (commonly referred to as testing for validation), against a specification (testing for verification), or, perhaps, against the anticipated behavior from implicit requirements or expectations (see Acceptance Tests in the Software Requirements KA).

In recent years, the view of software testing has matured into a constructive one. Testing is no longer seen as an activity that starts only after the coding phase is complete, with the limited purpose of detecting failures. Software testing is, or should be, pervasive throughout the entire development and maintenance life cycle. Indeed, planning for software testing should start with the early stages of the software requirements process, and test plans and procedures should be systematically and continuously developed, and possibly refined, as software development proceeds. These test planning and test designing activities provide useful input for software designers and help to highlight potential weaknesses, such as design oversights/contradictions or omissions/ambiguities in the documentation.

[Figure 4.1. Breakdown of Topics for the Software Testing KA]

For many organizations, the approach to software quality is one of prevention: it is obviously much better to prevent problems than to correct them.
Testing can be seen, then, as a means for providing information about the functionality and quality attributes of the software and also for identifying faults in those cases where error prevention has not been effective. It is perhaps obvious but worth recognizing that software can still contain faults, even after completion of an extensive testing activity. Software failures experienced after delivery are addressed by corrective maintenance. Software maintenance topics are covered in the Software Maintenance KA.

In the Software Quality KA (see Software Quality Management Techniques), software quality management techniques are notably categorized into static techniques (no code execution) and dynamic techniques (code execution).
Both categories are useful. This KA focuses on dynamic techniques.

Software testing is also related to software construction (see Construction Testing in the Software Construction KA). In particular, unit and integration testing are intimately related to software construction, if not part of it.

BREAKDOWN OF TOPICS FOR SOFTWARE TESTING

The breakdown of topics for the Software Testing KA is shown in Figure 4.1. A more detailed breakdown is provided in the Matrix of Topics vs. Reference Material at the end of this KA.

The first topic describes Software Testing Fundamentals. It covers the basic definitions in the field of software testing, the basic terminology and key issues, and software testing's relationship with other activities.

The second topic, Test Levels, consists of two (orthogonal) subtopics: the first subtopic lists the levels in which the testing of large software is traditionally subdivided, and the second subtopic considers testing for specific conditions or properties and is referred to as Objectives of Testing. Not all types of testing apply to every software product, nor has every possible type been listed.

The test target and test objective together determine how the test set is identified, both with regard to its consistency (how much testing is enough for achieving the stated objective?) and to its composition (which test cases should be selected for achieving the stated objective?), although usually "for achieving the stated objective" remains implicit and only the first part of each of the two questions above is posed. Criteria for addressing the first question are referred to as test adequacy criteria, while those addressing the second question are the test selection criteria.

Several Test Techniques have been developed in the past few decades, and new ones are still being proposed. Generally accepted techniques are covered in the third topic.

Test-Related Measures are dealt with in the fourth topic, while the issues relative to Test Process are covered in the fifth.