The Art of Software Testing, Myers (2nd edition, 2004)
Experiments show that people who shun such tools, even when they are debugging programs that are unfamiliar to them, are more successful than people who use the tools.

Avoid Experimentation—Use It Only as a Last Resort

The most common mistake novice debuggers make is trying to solve a problem by making experimental changes to the program. You might say, “I know what is wrong, so I’ll change this DO statement and see what happens.” This totally haphazard approach cannot even be considered debugging; it represents an act of blind hope. Not only does it have a minuscule chance of success, but it often compounds the problem by adding new errors to the program.

Error-Repairing Techniques

Where There Is One Bug, There Is Likely to Be Another

This is a restatement of the principle in Chapter 2 that states when you find an error in a section of a program, the probability of the existence of another error in that same section is higher than if you hadn’t already found one error.
In other words, errors tend to cluster. When repairing an error, examine its immediate vicinity for anything else that looks suspicious.

Fix the Error, Not Just a Symptom of It

Another common failing is repairing the symptoms of the error, or just one instance of the error, rather than the error itself.
If the proposed correction does not match all the clues about the error, you may be fixing only a part of the error.

The Probability of the Fix Being Correct Is Not 100 Percent

Tell this to someone and, of course, he would agree, but tell it to someone in the process of correcting an error and you may get a different answer. (“Yes, in most cases, but this correction is so minor that it just has to work.”) You can never assume that code added to a program to fix an error is correct.
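The difference between patching a symptom and fixing the underlying error can be made concrete with a small example. The scenario and function names below are invented for illustration and are not from the book:

```python
# Hypothetical scenario: a sales report crashes with ZeroDivisionError
# whenever a reporting period contains no transactions.

def average_sale(amounts):
    # Symptom "fix": make the crash disappear by returning 0.
    # The report now silently shows a 0 average, and the real
    # question -- why is an empty period reaching this routine? --
    # is never answered.
    if len(amounts) == 0:
        return 0
    return sum(amounts) / len(amounts)

def average_sale_fixed(amounts):
    # A correction that matches all the clues: an empty period is a
    # meaningful case, so it is surfaced explicitly and the caller
    # must decide deliberately how to present it.
    if not amounts:
        raise ValueError("no transactions recorded for this period")
    return sum(amounts) / len(amounts)
```

The first version hides the failure while leaving the output quietly wrong; the second forces the erroneous condition out into the open, which is what matching all the clues about the error looks like in practice.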
Statement for statement, corrections are much more error prone than the original code in the program. One implication is that error corrections must be tested, perhaps more rigorously than the original program. A solid regression testing plan can help ensure that correcting an error does not induce another error somewhere else in the application.

The Probability of the Fix Being Correct Drops as the Size of the Program Increases

Stating it differently, in our experience the ratio of errors due to incorrect fixes versus original errors increases in large programs. In one widely used large program, one of every six new errors discovered is an error in a prior correction to the program.

Beware of the Possibility That an Error Correction Creates a New Error

Not only do you have to worry about incorrect corrections, but also you have to worry about a seemingly valid correction having an undesirable side effect, thus introducing a new error.
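A minimal sketch of how a seemingly valid correction introduces a new error, and how a small regression suite catches it. The scenario and names are invented for illustration:

```python
# A reported error: the lookup for user "alice " (trailing space)
# failed to match "alice".  The first correction cleans up names
# aggressively -- and, as a side effect, breaks names that
# legitimately contain internal spaces.

def normalize_name(name):
    # Correction with an undesirable side effect:
    # "mary ann" -> "maryann"  (a new error)
    return "".join(name.split())

def normalize_name_v2(name):
    # Narrower correction: strip only leading/trailing whitespace,
    # leaving internal spaces untouched.
    return name.strip()

def run_regression_tests(fn):
    # Regression suite: the originally reported error, plus
    # pre-existing behavior that the fix must not disturb.
    cases = {
        "alice ":   "alice",     # the reported error
        "mary ann": "mary ann",  # existing behavior to protect
    }
    return all(fn(raw) == want for raw, want in cases.items())
```

Run against the suite, the first correction fails on the pre-existing case while the second passes both, which is exactly the situation regression testing exists to expose.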
Not only is there a probability that a fix will be invalid, but there also is a probability that a fix will introduce a new error. One implication is that not only does the error situation have to be tested after the correction is made, but you must also perform regression testing to determine whether a new error has been introduced.

The Process of Error Repair Should Put You Temporarily Back into the Design Phase

You should realize that error correction is a form of program design. Given the error-prone nature of corrections, common sense says that whatever procedures, methodologies, and formalism were used in the design process should also apply to the error-correction process. For instance, if the project rationalized that code inspections were desirable, then it must be doubly important that they be used after correcting an error.

Change the Source Code, Not the Object Code

When debugging large systems, particularly a system written in an assembly language, occasionally there is the tendency to correct an error by making an immediate change to the object code with the intention of changing the source program later.
Two problems associated with this approach are (1) it usually is a sign that “debugging by experimentation” is being practiced, and (2) the object code and source program are now out of synchronization, meaning that the error could easily surface again when the program is recompiled or reassembled. This practice is an indication of a sloppy, unprofessional approach to debugging.

Error Analysis

The last thing to realize about program debugging is that, in addition to its value in removing an error from the program, it can have another valuable effect: It can tell us something about the nature of software errors, something we still know too little about. Information about the nature of software errors can provide valuable feedback in terms of improving future design, coding, and testing processes. Every programmer and programming organization could improve immensely by performing a detailed analysis of the detected errors, or at least a subset of them.
It is a difficult and time-consuming task, for it implies much more than a superficial grouping such as “x percent of the errors are logic-design errors,” or “x percent of the errors occur in IF statements.” A careful analysis might include the following studies:

• Where was the error made? This question is the most difficult one to answer, because it requires a backward search through the documentation and history of the project, but it also is the most valuable question.
It requires that you pinpoint the original source and time of the error. For example, the original source of the error might be an ambiguous statement in a specification, a correction to a prior error, or a misunderstanding of an end-user requirement.

• Who made the error? Wouldn’t it be useful to discover that 60 percent of the design errors were created by one of the 10 analysts, or that programmer X makes three times as many mistakes as the other programmers? (Not for the purposes of punishment but for the purposes of education.)

• What was done incorrectly? It is not sufficient to determine when and by whom each error was made; the missing link is a determination of exactly why the error occurred.
Was it caused by someone’s inability to write clearly? Someone’s lack of education in the programming language? A typing mistake? An invalid assumption? A failure to consider valid input?

• How could the error have been prevented? What can be done differently in the next project to prevent this type of error? The answer to this question constitutes much of the valuable feedback or learning for which we are searching.

• Why wasn’t the error detected earlier? If the error is detected during a test phase, you should study why the error was not detected during earlier testing phases, code inspections, and design reviews.

• How could the error have been detected earlier? The answer to this is another piece of valuable feedback. How can the review and testing processes be improved to find this type of error earlier in future projects? Provided that we are not analyzing an error found by an end user (that is, the error was found by a test case), we should realize that something valuable has happened: We have written a successful test case.
Why was this test case successful? Can we learn something from it that will result in additional successful test cases, either for this program or for future programs?

Again, this analysis process is difficult, but the answers discovered can be invaluable in improving subsequent programming efforts. It is alarming that the vast majority of programmers and programming organizations do not employ it.

CHAPTER 8

Extreme Testing

In the 1990s a new software development methodology termed Extreme Programming (XP) was born. A project manager named Kent Beck is credited with conceiving the lightweight, agile development process, first testing it while working on a project at Daimler-Chrysler in 1996. Although several other agile software development processes have since been created, XP is by far the most popular.
In fact, numerous open-source tools exist to support it, which attests to XP’s popularity among developers and project managers.

XP was likely developed to support the adoption of programming languages such as Java, Visual Basic, and C#. These object-based languages allow developers to create large, complex applications much more quickly than with traditional languages such as C, C++, FORTRAN, or COBOL. Developing with those traditional languages often requires building general-purpose libraries to support your efforts.
Methods for common tasks such as printing, sorting, networking, and statistical analysis are not standard components. Languages such as C# and Java, by contrast, ship with full-featured application programming interfaces (APIs) that eliminate or reduce the need for custom library creation. However, with the benefits of rapid application development languages come liabilities. Although developers were creating applications much more quickly, the quality was not guaranteed.
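The point about full-featured standard libraries can be illustrated with Python, which, like the Java and C# APIs the text describes, ships several of the common tasks listed above (sorting, statistical analysis) as standard components rather than code you must write yourself. This sketch is not from the book:

```python
# Tasks that once required hand-written, general-purpose library
# code are a single standard-library call here.
import statistics

scores = [88, 92, 75, 92, 60]

ordered = sorted(scores)           # sorting: built into the language
mean = statistics.mean(scores)     # statistical analysis: stdlib module
spread = statistics.stdev(scores)  # no custom numeric library needed

print(ordered, mean, round(spread, 2))
```

In C or FORTRAN, each of these would typically mean writing or importing a general-purpose routine yourself, which is exactly the overhead the rapid-development languages were removing.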