Software Engineering Body of Knowledge (v3) (2014)
Effort (or equivalent cost) is the primary measure of resources for most software processes, activities, and tasks; it is measured in units such as person-hours, person-days, staff-weeks, or staff-months of effort, or in equivalent monetary units such as euros or dollars.

Effectiveness is the ratio of actual output to expected output produced by a software process, activity, or task; for example, the ratio of the actual number of defects detected and corrected during software testing to the expected number of defects to be detected and corrected, perhaps based on historical data for similar projects (see Effectiveness in the Software Engineering Economics KA). Note that measurement of software process effectiveness requires measurement of the relevant product attributes; for example, measurement of software defects discovered and corrected during software testing.

One must take care when measuring product attributes for the purpose of determining process effectiveness.
For example, the number of defects detected and corrected by testing may fall short of the expected number and thus yield a misleadingly low effectiveness measure, either because the software being tested is of better-than-usual quality or because a newly introduced upstream inspection process has reduced the number of defects remaining in the software.

Product measures that may be important in determining the effectiveness of software processes include product complexity, total defects, defect density, and the quality of requirements, design documentation, and other related work products.

Also note that efficiency and effectiveness are independent concepts.
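The effectiveness ratio defined above can be sketched in a few lines of code. All names and numbers here are illustrative, not taken from the Guide; the expected count would in practice come from historical data on similar projects.

```python
from dataclasses import dataclass

@dataclass
class TestingPhase:
    defects_found_and_fixed: int  # actual output of the testing process
    defects_expected: int         # e.g. derived from historical project data

def effectiveness(phase: TestingPhase) -> float:
    """Effectiveness = actual output / expected output of a process."""
    if phase.defects_expected <= 0:
        raise ValueError("expected output must be positive")
    return phase.defects_found_and_fixed / phase.defects_expected

# A ratio below 1.0 is not automatically bad: the input software may simply
# have been of better-than-usual quality, as noted above.
print(effectiveness(TestingPhase(defects_found_and_fixed=40, defects_expected=50)))  # 0.8
```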
An effective software process can be inefficient in achieving a desired software process result; for example, the amount of effort expended to find and fix software defects could be very high and result in low efficiency, as compared to expectations. An efficient process can be ineffective in accomplishing the desired transformation of input work products into output work products; for example, failure to find and correct a sufficient number of software defects during the testing process.

Causes of low efficiency and/or low effectiveness in the way a software process, activity, or task is executed might include one or more of the following problems: deficient input work products, inexperienced personnel, lack of adequate tools and infrastructure, learning a new process, a complex product, or an unfamiliar product domain. The efficiency and effectiveness of software process execution are also affected (either positively or negatively) by factors such as turnover in software personnel, schedule changes, a new customer representative, or a new organizational policy.

In software engineering, productivity in performing a process, activity, or task is the ratio of output produced divided by resources consumed; for example, the number of software defects discovered and corrected divided by person-hours of effort (see Productivity in the Software Engineering Economics KA).
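As a minimal sketch of the productivity ratio just defined (the figures are illustrative only): the effort denominator should cover all work needed to satisfy the exit criteria, including rework such as correcting the defects found.

```python
def productivity(output_units: float, effort_person_hours: float) -> float:
    """Productivity = output produced / resources consumed."""
    if effort_person_hours <= 0:
        raise ValueError("effort must be positive")
    return output_units / effort_person_hours

# Illustrative: 30 defects discovered and corrected during testing.
defects_corrected = 30
total_effort = 100 + 20  # testing effort + defect-correction effort, person-hours
print(productivity(defects_corrected, total_effort))  # 0.25 defects per person-hour
```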
Accurate measurement of productivity must include the total effort used to satisfy the exit criteria of a software process, activity, or task; for example, the effort required to correct defects discovered during software testing must be included in software development productivity.

Calculation of productivity must account for the context in which the work is accomplished. For example, the effort to correct discovered defects will be included in the productivity calculation of a software team if team members correct the defects they find, as in unit testing by software developers or in a cross-functional agile team. Or the productivity calculation may include either the effort of the software developers or the effort of an independent testing team, depending on who fixes the defects found by the independent testers.
Note that this example refers to the effort of teams of developers or teams of testers and not to individuals. Software productivity calculated at the level of individuals can be misleading because of the many factors that can affect the individual productivity of software engineers.

Standardized definitions and counting rules for measurement of software processes and work products are necessary to provide standardized measurement results across projects within an organization, to populate a repository of historical data that can be analyzed to identify software processes that need to be improved, and to build predictive models based on accumulated data. In the example above, definitions of software defects and staff-hours of testing effort, plus counting rules for defects and effort, would be necessary to obtain satisfactory measurement results.

The extent to which the software process is institutionalized is important; failure to institutionalize a software process may explain why "good" software processes do not always produce anticipated results.
Software processes may be institutionalized by adoption within the local organizational unit or across larger units of an enterprise.

4.2. Quality of Measurement Results
[4*, s3.4–3.7]

The quality of process and product measurement results is primarily determined by the reliability and validity of the measured results. Measurements that do not satisfy these quality criteria can result in incorrect interpretations and faulty software process improvement initiatives. Other desirable properties of software measurements include ease of collection, analysis, and presentation, plus a strong correlation between cause and effect.

The Software Engineering Measurement topic in the Software Engineering Management KA describes a process for implementing a software measurement program.

4.3. Software Information Models
[1*, p310–311] [3*, p712–713] [4*, s19.2]

Software information models allow modeling, analysis, and prediction of software process and software product attributes to provide answers to relevant questions and achieve process and product improvement goals.
Needed data can be collected and retained in a repository; the data can be analyzed and models can be constructed. Validation and refinement of software information models occur during software projects and after projects are completed to ensure that the level of accuracy is sufficient and that their limitations are known and understood. Software information models may also be developed for contexts other than software projects; for example, a software information model might be developed for processes that apply across an organization, such as software configuration management or software quality assurance processes at the organizational level.

Analysis-driven software information model building involves the development, calibration, and evaluation of a model.
A software information model is developed by establishing a hypothesized transformation of input variables into desired outputs; for example, product size and complexity might be transformed into estimated effort needed to develop a software product using a regression equation developed from observed data from past projects. A model is calibrated by adjusting parameters in the model to match observed results from past projects; for example, the exponent in a nonlinear regression model might be changed by applying the regression equation to a different set of past projects other than the projects used to develop the model. A model is evaluated by comparing computed results to actual outcomes for a different set of similar data.
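The develop and evaluate steps for a nonlinear regression model of the kind described above can be sketched as follows. This is only an illustration under stated assumptions: the hypothesized form effort = a * size^b, the fitting method (least squares in log-log space), the tolerance threshold, and all project data are invented for the example, not prescribed by the Guide.

```python
import math

def develop_model(sizes, efforts):
    """Develop: hypothesize effort = a * size**b and fit a and b by
    least squares in log-log space (log effort = log a + b * log size)."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

def predict(a, b, size):
    return a * size ** b

def evaluate(a, b, sizes, efforts, tolerance=0.25):
    """Evaluate: compare computed results with actual outcomes for a
    *different* data set; accept if mean relative error is within tolerance."""
    errors = [abs(predict(a, b, s) - e) / e for s, e in zip(sizes, efforts)]
    return sum(errors) / len(errors) <= tolerance

# Illustrative past-project data (size in KLOC, effort in person-months).
# Calibration would adjust a and b against yet another set of past projects.
a, b = develop_model([10, 20, 40, 80], [55, 130, 310, 740])
print(evaluate(a, b, [15, 60], [90, 500]))  # True: predictions are close
```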
There are three possible evaluation outcomes:

1. Results computed for a different data set vary widely from actual outcomes for that data set, in which case the derived model is not applicable for the new data set and should not be applied to analyze or make predictions for future projects.
2. Results computed for a new data set are close to actual outcomes for that data set, in which case minor adjustments are made to the parameters of the model to improve agreement.
3. Results computed for the new data set and subsequent data sets are very close, and no adjustments to the model are needed.

Continuous evaluation of the model may indicate a need for adjustments over time as the context in which the model is applied changes.

The Goals/Questions/Metrics (GQM) method was originally intended for establishing measurement activities, but it can also be used to guide analysis and improvement of software processes. It can be used to guide analysis-driven software information model building; results obtained from the software information model can be used to guide process improvement.

The following example illustrates application of the GQM method:

• Goal: Reduce the average change request processing time by 10% within six months.
• Question 1-1: What is the baseline change request processing time?
• Metric 1-1-1: Average of change request processing times on starting date
• Metric 1-1-2: Standard deviation of change request processing times on starting date
• Question 1-2: What is the current change request processing time?
• Metric 1-2-1: Average of change request processing times currently
• Metric 1-2-2: Standard deviation of change request processing times currently
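The metrics in the GQM example above reduce to simple descriptive statistics. A minimal sketch, using hypothetical processing times in days (the data and the six-month interval are invented for illustration):

```python
from statistics import mean, stdev

# Hypothetical change-request processing times, in days.
baseline = [4.0, 6.5, 5.0, 8.0, 6.5]  # Metrics 1-1-1 / 1-1-2: at the starting date
current = [3.5, 5.0, 4.5, 7.0, 5.5]   # Metrics 1-2-1 / 1-2-2: six months later

improvement = (mean(baseline) - mean(current)) / mean(baseline)
print(f"baseline: {mean(baseline):.2f} days (sd {stdev(baseline):.2f})")
print(f"current:  {mean(current):.2f} days (sd {stdev(current):.2f})")
print(f"goal met (>=10% reduction): {improvement >= 0.10}")  # True: 15% reduction
```

Tracking the standard deviation alongside the average guards against a misleading result, e.g. a lower mean driven by a few unusually simple change requests.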
4.4. Software Process Measurement Techniques
[1*, c8]

Software process measurement techniques are used to collect process data and work product data, transform the data into useful information, and analyze the information to identify process activities that are candidates for improvement. In some cases, new software processes may be needed.

Process measurement techniques also provide the information needed to measure the effects of process improvement initiatives.
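One simple transformation of collected process data into improvement candidates is to rank activities by a quality measure such as escaped-defect density. A sketch with invented numbers (the activities, the measure, and the cutoff are all assumptions for illustration):

```python
# Hypothetical measurement results: defects escaping each activity
# per KLOC of work product it handled.
escaped_defect_density = {
    "requirements": 1.8,
    "design": 0.9,
    "coding": 2.6,
    "testing": 0.7,
}

# Transform data into information: the worst performers become
# candidates for process improvement (or for a new process).
candidates = sorted(escaped_defect_density,
                    key=escaped_defect_density.get, reverse=True)[:2]
print(candidates)  # ['coding', 'requirements']
```

Re-running the same measurement after an improvement initiative yields the before/after comparison mentioned above.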