The Art of Software Testing, Second Edition (Myers, 2004), excerpt
Security Testing

Security testing is the process of attempting to devise test cases that subvert the program's security checks. For example, you could try to formulate test cases that get around an operating system's memory protection mechanism, or you could try to subvert a database management system's data security mechanisms. One way to devise such test cases is to study known security problems in similar systems and generate test cases that attempt to demonstrate similar problems in the system you are testing. For example, published sources such as magazines, chat rooms, or newsgroups frequently cover known bugs in operating systems and other software systems.
By searching for security holes in existing programs that provide services similar to the one you are testing, you can devise test cases to determine whether your program suffers from similar problems. Web-based applications often need a higher level of security testing than do most applications. This is especially true of e-commerce sites. Although sufficient technology, namely encryption, exists to allow customers to complete transactions securely over the Internet, you should not rely on the mere application of technology to ensure safety. In addition, you will need to convince your customer base that your application is safe, or you risk losing customers. Again, Chapter 9 provides more information on security testing in Internet-based applications.
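To make this concrete, here is a minimal sketch in Python of what probe-style security test cases can look like. The validate_username function is an invented stand-in for the code under test, and the probe strings echo classes of published vulnerabilities; a real test suite would target your program's actual entry points.

    import re

    def validate_username(name: str) -> bool:
        # Stand-in for the program under test: accept short alphanumeric names only.
        return re.fullmatch(r"[A-Za-z0-9_]{1,32}", name) is not None

    # Each probe echoes a class of known security problems in similar systems.
    probes = [
        "alice'; DROP TABLE users;--",    # SQL injection
        "../../etc/passwd",               # path traversal
        "<script>alert(1)</script>",      # script injection
        "A" * 100_000,                    # oversized input
    ]

    for probe in probes:
        # The test succeeds only if every hostile input is rejected.
        assert not validate_username(probe), f"security check subverted by: {probe!r}"
    print("all probes rejected")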
Performance Testing

Many programs have specific performance or efficiency objectives, stating such properties as response times and throughput rates under certain workload and configuration conditions. Again, since the purpose of a system test is to demonstrate that the program does not meet its objectives, test cases must be designed to show that the program does not satisfy its performance objectives.
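As an illustration, the following Python sketch tests an invented response-time objective (a 95th-percentile latency under 50 milliseconds) against a hypothetical handle_request operation; the workload, the operation, and the threshold are all assumptions for the example.

    import time

    def handle_request() -> None:
        # Stand-in for the operation under test.
        sum(range(10_000))

    samples = []
    for _ in range(1_000):                 # fixed workload for this test run
        start = time.perf_counter()
        handle_request()
        samples.append(time.perf_counter() - start)

    samples.sort()
    p95 = samples[int(0.95 * len(samples)) - 1]   # approximate 95th-percentile latency
    # The test case is oriented toward exposing a failure to meet the objective.
    assert p95 < 0.050, f"95th-percentile response time {p95*1000:.1f} ms exceeds objective"
    print(f"p95 = {p95*1000:.2f} ms")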
Storage Testing

Similarly, programs occasionally have storage objectives that state, for example, the amount of main and secondary memory the program uses and the size of temporary or spill files. You should design test cases to show that these storage objectives have not been met.
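A storage objective can be checked in the same spirit. The sketch below uses Python's tracemalloc module to measure the peak main-memory use of a hypothetical build_index operation against an invented 10 MB objective.

    import tracemalloc

    def build_index(n: int) -> dict:
        # Stand-in for the code under test.
        return {i: str(i) for i in range(n)}

    tracemalloc.start()
    index = build_index(50_000)
    _, peak = tracemalloc.get_traced_memory()   # peak bytes allocated
    tracemalloc.stop()

    # As elsewhere, the test is written to show the objective has not been met.
    assert peak <= 10 * 1024 * 1024, f"peak memory {peak/2**20:.1f} MB exceeds the 10 MB objective"
    print(f"peak memory: {peak/2**20:.2f} MB")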
Configuration Testing

Programs such as operating systems, database management systems, and message-switching programs support a variety of hardware configurations, including various types and numbers of I/O devices and communications lines, or different memory sizes. Often the number of possible configurations is too large to test each one, but at the least, you should test the program with each type of hardware device and with the minimum and maximum configuration. If the program itself can be configured to omit program components, or if the program can run on different computers, each possible configuration of the program should be tested. Today, many programs are designed for multiple operating systems; if you are testing such a program, you should test it with all of the operating systems for which it was designed. Programs designed to execute within a Web browser require special attention, since numerous Web browsers are available and they do not all function the same way. In addition, the same Web browser will operate differently on different operating systems.
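Selecting which configurations to run can itself be automated. The following sketch, with invented browser, operating-system, and memory dimensions, keeps the minimum and maximum memory configurations and ensures that every value of every dimension is exercised at least once.

    from itertools import product

    browsers = ["Firefox", "Chrome", "Safari"]
    systems = ["Windows", "macOS", "Linux"]
    memory_mb = [512, 16_384]             # minimum and maximum memory sizes

    all_configs = list(product(browsers, systems, memory_mb))
    print(f"full matrix: {len(all_configs)} configurations")

    # Rather than run the full matrix, cover each value at least once,
    # always including the minimum and maximum memory configurations.
    selected, seen = [], set()
    for cfg in all_configs:
        if any(value not in seen for value in cfg):
            selected.append(cfg)
            seen.update(cfg)

    for cfg in selected:
        print("test configuration:", cfg)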
Compatibility/Configuration/Conversion Testing

Most programs that are developed are not completely new; they often are replacements for some deficient system. As such, programs often have specific objectives concerning their compatibility with, and conversion procedures from, the existing system. Again, in testing the program to these objectives, the orientation of the test cases is to demonstrate that the compatibility objectives have not been met and that the conversion procedures do not work. Here you try to generate errors while moving data from one system to another.
An example would be upgrading a database management system. You want to ensure that your existing data fit inside the new system. Various methods exist to test this process; however, they are highly dependent on the database system you employ.
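As an illustration of such a conversion test, the sketch below migrates rows between two invented SQLite schemas, one storing a full name and one splitting it, and then tries to show that the conversion loses or corrupts data.

    import sqlite3

    old_db = sqlite3.connect(":memory:")
    new_db = sqlite3.connect(":memory:")
    old_db.execute("CREATE TABLE users (id INTEGER, full_name TEXT)")
    new_db.execute("CREATE TABLE users (id INTEGER, first TEXT, last TEXT)")

    rows = [(1, "Ada Lovelace"), (2, "Alan Turing"), (3, "Grace Hopper")]
    old_db.executemany("INSERT INTO users VALUES (?, ?)", rows)

    # The conversion procedure under test.
    for user_id, full_name in old_db.execute("SELECT id, full_name FROM users"):
        first, _, last = full_name.partition(" ")
        new_db.execute("INSERT INTO users VALUES (?, ?, ?)", (user_id, first, last))

    # Try to demonstrate that the conversion does not work: every old
    # row must be reconstructible from the new system.
    migrated = {uid: f"{first} {last}" for uid, first, last
                in new_db.execute("SELECT id, first, last FROM users")}
    for user_id, full_name in rows:
        assert migrated[user_id] == full_name, f"row {user_id} corrupted in conversion"
    print(f"{len(rows)} rows migrated intact")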
Installability Testing

Some types of software systems have complicated installation procedures, and testing the installation procedure is an important part of the system testing process. This is particularly true of an automated installation system that is part of the program package. A malfunctioning installation program could prevent the user from ever having a successful experience with the main system you are charged with testing. A user's first experience with the product comes when he or she installs the application. If this phase performs poorly, the user or customer may find another product or have little confidence in the application's validity.
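The following sketch shows one shape such a test can take: an invented install routine runs into a temporary directory, and the test checks that the promised artifacts exist and that the installed entry point actually starts.

    import subprocess, sys, tempfile
    from pathlib import Path

    def install(target: Path) -> None:
        # Stand-in for the automated installation procedure under test.
        (target / "bin").mkdir(parents=True)
        (target / "bin" / "app.py").write_text("print('hello from app')\n")
        (target / "config.ini").write_text("[app]\nversion = 1.0\n")

    with tempfile.TemporaryDirectory() as tmp:
        root = Path(tmp)
        install(root)

        # Every artifact the installer promises must actually be present.
        for expected in ("bin/app.py", "config.ini"):
            assert (root / expected).exists(), f"installer did not create {expected}"

        # The installed program must at least start successfully.
        result = subprocess.run([sys.executable, str(root / "bin" / "app.py")],
                                capture_output=True, text=True)
        assert result.returncode == 0, result.stderr
        print("install smoke test passed:", result.stdout.strip())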
Reliability Testing

Of course, the goal of all types of testing is the improvement of program reliability, but if the program's objectives contain specific statements about reliability, specific reliability tests might be devised. Testing reliability objectives can be difficult. For example, a modern online system such as a corporate wide area network (WAN) or an Internet service provider (ISP) generally has a targeted uptime of 99.97 percent over the life of the system. There is no known way that you could test this objective with a test period of months or even years.
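A quick computation shows why. At 99.97 percent uptime, a system may be down only about 2.6 hours in an entire year, far too rare an event to observe reliably in a test period of months. The target percentages below are illustrative, not the book's Table 6.1.

    # Allowed downtime per year (8,760 hours) for various uptime targets.
    for uptime_pct in (99.0, 99.9, 99.97, 99.999):
        downtime_hours = 8760 * (1 - uptime_pct / 100)
        print(f"{uptime_pct:7.3f}% uptime -> "
              f"{downtime_hours:7.2f} hours of downtime per year")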
Today's critical software systems have even higher reliability standards, and today's hardware conceivably could be expected to support these objectives. Programs or systems with more modest mean time between failures (MTBF) objectives, or with operational error objectives that are reasonable in terms of testing, can potentially be tested. An MTBF of no more than 20 hours, or an objective that a program should experience no more than 12 unique errors after it is placed into production, for example, presents testing possibilities, particularly for statistical, program-proving, or model-based testing methodologies. These methods are beyond the scope of this book, but the technical literature (online and otherwise) offers ample guidance in this area.
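For a modest objective such as the 20-hour MTBF above, a simple statistical check is at least conceivable. The sketch below is an invented example: it takes failure times logged during a test run, forms a crude point estimate of MTBF, and compares it with the objective. Real statistical testing methodologies are considerably more sophisticated.

    test_hours = 480.0                       # total operating time under test
    failure_times = [35.0, 160.0, 410.0]     # hours at which failures were observed

    mtbf = test_hours / len(failure_times)   # simple point estimate of MTBF
    objective_hours = 20.0                   # the MTBF objective from the example above

    print(f"observed MTBF estimate: {mtbf:.0f} hours")
    # Oriented, as usual, toward demonstrating that the objective is not met.
    assert mtbf >= objective_hours, "MTBF objective not met"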
If this area of program testing is of interest to you, research, for example, the concept of inductive assertions. The goal of this method is the development of a set of theorems about the program in question, the proof of which guarantees the absence of errors in the program. The method begins by writing assertions about the program's input conditions and correct results. The assertions are expressed symbolically in a formal logic system, usually the first-order predicate calculus. You then locate each loop in the program and, for each loop, write an assertion stating the invariant (always true) conditions at an arbitrary point in the loop. The program has now been partitioned into a fixed number of fixed-length paths (all possible paths between a pair of assertions).
For each path, you then use the semantics of the intervening program statements to modify the assertion until you reach the end of the path. At that point, two assertions exist at the end of the path: the original one and the one derived from the assertion at the opposite end. You then write a theorem stating that the original assertion implies the derived assertion, and attempt to prove the theorem. If the theorems can be proved, you can assume the program is error free, as long as the program eventually terminates. A separate proof is required to show that the program will always eventually terminate.
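The flavor of the method can be suggested with executable assertions, although this is only an analogy: a genuine proof discharges the implications symbolically in the predicate calculus rather than checking them at run time, as this Python sketch does for a simple summation loop.

    def sum_to(n: int) -> int:
        assert n >= 0                        # input assertion (precondition)
        total, i = 0, 0
        while i < n:
            # loop invariant: at this point, total == 0 + 1 + ... + i
            assert total == i * (i + 1) // 2
            i += 1
            total += i
        # output assertion (correct result): total == n(n+1)/2
        assert total == n * (n + 1) // 2
        return total

    print(sum_to(10))   # 55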
As complex as this sort of software proving or prediction sounds, reliability testing and, indeed, the concept of software reliability engineering (SRE) are with us today and are increasingly important for systems that must maintain very high uptimes. To illustrate this point, examine Table 6.1 to see the number of hours per year a system must be up to support various uptime requirements. These values should indicate the need for SRE.

Recovery Testing

Programs such as operating systems, database management systems, and teleprocessing programs often have recovery objectives that state how the system is to recover from programming errors, hardware failures, and data errors. One objective of the system test is to show that these recovery functions do not work correctly. Programming errors can be purposely injected into a system to determine whether it can recover from them. Hardware failures such as memory parity errors or I/O device errors can be simulated. Data errors such as noise on a communications line or an invalid pointer in a database can be created purposely or simulated to analyze the system's reaction. One design goal of such systems is to minimize the mean time to recovery (MTTR).
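A recovery test can be sketched as fault injection plus a clock. In the invented example below, a simulated failure is injected into a toy service, the recovery path is exercised, and the measured time to recovery is compared with an assumed MTTR objective.

    import time

    class Service:
        def __init__(self):
            self.healthy = True
        def inject_fault(self):
            self.healthy = False             # simulated hardware or data error
        def watchdog_restart(self):
            time.sleep(0.05)                 # stand-in for the recovery procedure
            self.healthy = True

    service = Service()
    service.inject_fault()
    assert not service.healthy

    start = time.perf_counter()
    service.watchdog_restart()               # exercise the recovery function
    mttr = time.perf_counter() - start

    assert service.healthy, "recovery function did not restore service"
    assert mttr < 1.0, f"MTTR {mttr:.2f}s exceeds the (invented) objective"
    print(f"recovered in {mttr*1000:.0f} ms")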