GRASP—A New Search Algorithm for Satisfiability (page 4)
The programs were compiled with GCC 2.7.2 and run on a SUN SPARC 5/85 machine with 64 MByte of RAM. The experimental evaluation of the three programs is based on two different sets of benchmarks:

• The UCSC benchmarks [4], developed at the University of California, Santa Cruz, that include instances of SAT commonly encountered in test pattern generation of combinational circuits for bridging and stuck-at faults.
• The DIMACS challenge benchmarks [4], that include instances of SAT from several authors and from different application areas.

For the experimental results given below, GRASP was configured to use the decision engine described in Section 2.5, to allow the generation of clauses based on UIPs, and to limit the size of clauses added to the clause database to 20 or fewer literals. All SAT programs were run with a CPU time limit of 10,000 seconds (about three hours).

For the tables of results the following definitions apply. A benchmark suite is partitioned into classes of related benchmarks.
In each class, #M denotes the total number of class members; #S denotes the number of class members for which the program terminated in less than the allowed 10,000 CPU seconds; and Time denotes the total CPU time, in seconds, taken to process all members of the class.

The results obtained for the UCSC benchmarks are shown in Table 1. The BF and SSA benchmark classes denote, respectively, CNF formulas for bridging and stuck-at faults.
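The per-class quantities defined above (#M, #S, Time) can be computed mechanically from raw run records. The sketch below assumes a hypothetical list of (class, cpu_seconds, solved) tuples rather than any format used by the authors; only the 10,000-second limit comes from the paper.

```python
# Sketch: aggregating per-class benchmark statistics (#M, #S, Time).
# The input record format is hypothetical; the limit is from the paper.
from collections import defaultdict

TIME_LIMIT = 10_000.0  # CPU seconds allowed per instance

def summarize(runs):
    """runs: iterable of (class_name, cpu_seconds, solved) tuples.
    Returns {class_name: (#M, #S, total_time)}."""
    stats = defaultdict(lambda: [0, 0, 0.0])
    for cls, secs, solved in runs:
        entry = stats[cls]
        entry[0] += 1                            # #M: class members
        if solved and secs < TIME_LIMIT:
            entry[1] += 1                        # #S: solved within limit
        entry[2] += min(secs, TIME_LIMIT)        # Time: total CPU time
    return {cls: tuple(v) for cls, v in stats.items()}

runs = [("BF", 47.6, True), ("BF", 12000.0, False), ("SSA", 1.1, True)]
print(summarize(runs))
```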
For these benchmarks GRASP performs significantly better than the other programs. Both POSIT and TEGUS abort a large number of problem instances and require much larger CPU times. These benchmarks are characterized by extremely sparse CNF formulas for which BCP-based conflict analysis works particularly well. The performance difference between GRASP and TEGUS, a very efficient ATPG tool, clearly illustrates the power of the search-pruning techniques included in GRASP.

An experimental study of the effect of the growth of the clause database on the amount of search and the CPU time can be found in [15].
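The Boolean constraint propagation (BCP) underlying this conflict analysis is, at its core, iterated unit-clause resolution. A minimal sketch (not the authors' implementation) follows, with clauses as lists of non-zero integer literals in the usual DIMACS convention:

```python
# Minimal BCP (unit propagation) sketch: repeatedly satisfy unit clauses.
# Literals are non-zero ints; -v denotes the negation of variable v.
def bcp(clauses, assignment):
    """Propagate until fixpoint. Returns the extended assignment
    (dict var -> bool) or None if a clause is falsified (conflict)."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    satisfied |= assignment[var] == want
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return None            # conflict: clause falsified
            if len(unassigned) == 1:   # unit clause: assignment is forced
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

# (x1) AND (-x1 OR x2) forces x1 = x2 = True
print(bcp([[1], [-1, 2]], {}))  # -> {1: True, 2: True}
```

On the sparse formulas above, most clauses touch few variables, so each propagation pass is cheap; real solvers additionally index clauses by watched literals instead of rescanning the whole database.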
In general, adding larger clauses helps reduce the number of backtracks and the CPU time. This holds true until the overhead introduced by the additional clauses offsets the gains of reducing the amount of search.

GRASP was also compared with the other algorithms on the DIMACS benchmarks [4], and the results are included in Table 1. We can conclude that, for classes of benchmarks where GRASP performs better, the other programs either take a very long time to find a solution or are unable to find a solution in less than 10,000 seconds.
We can also observe that benchmarks on which POSIT performs better than GRASP can also be handled by GRASP; only the overhead inherent to GRASP becomes apparent.

Another useful experiment is to measure how well conflict analysis works in practice. For this purpose, statistics regarding some DIMACS benchmarks are shown in Table 2, where #B denotes the number of backtracks, #NCB denotes the number of non-chronological backtracks, #LJ is the size of the largest non-chronological backtrack, #UIP indicates the number of unique implication points found, %G denotes the variation in size of the clause database, and Time is the CPU time in seconds. From these examples several conclusions can be drawn.
First, the number of non-chronological backtracks can be a significant percentage of the total number of backtracks. Second, the jumps in the decision tree can save a large amount of search work. As can be observed, in some cases the jumps taken potentially save searching millions of nodes in the decision tree. Third, the growth of the clause database is not necessarily large.
Fourth, UIPs do occur in practice, and for some benchmarks a reasonable number is found given the number of backtracks.

[Table 1: Results on the UCSC and DIMACS benchmarks]

Finally, for most of these examples conflict analysis causes GRASP to be much more efficient than POSIT and TEGUS. Nevertheless, either POSIT or TEGUS can be more efficient on specific benchmarks, as the examples of the last three rows of Table 2 indicate.
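The non-chronological backtracks counted by #NCB arise when the learned conflict clause spans few decision levels: the solver can jump directly to the second-highest level among the clause's variables, skipping every level in between (the #LJ column measures the largest such jump). A hedged sketch of that level computation, with hypothetical names and data layout:

```python
# Sketch: choosing the backjump level from a learned conflict clause.
# level_of maps each variable in the clause to the decision level at
# which it was assigned (names here are illustrative only).
def backjump_level(conflict_clause, level_of):
    """Return the decision level to back up to: the second-highest
    level among the clause's variables, or 0 if only one level occurs."""
    levels = sorted({level_of[abs(lit)] for lit in conflict_clause},
                    reverse=True)
    return levels[1] if len(levels) > 1 else 0

# A conflict at level 40 whose clause also mentions a level-9 variable
# lets the solver skip levels 39..10 in a single jump.
print(backjump_level([-3, -7], {3: 40, 7: 9}))  # -> 9
```

A chronological backtracker would instead retreat one level at a time, re-exploring subtrees that the learned clause already rules out; this is the source of the "millions of nodes" saved in the examples above.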
TEGUS performs particularly well on these instances because they are satisfiable and because TEGUS iterates several decision making procedures.

5 Conclusions and Research Directions

This paper introduces a procedure for conflict analysis in satisfiability algorithms and describes a configurable algorithmic framework for solving SAT. Experimental results indicate that conflict analysis and its by-products, non-chronological backtracking and identification of equivalent conflicting conditions, can contribute decisively to efficiently solving a large number of classes of instances of SAT. For this purpose, the proposed SAT algorithm is compared with other state-of-the-art algorithms.

The natural evolution of this research work is to apply GRASP to different EDA applications, in particular test pattern generation, timing analysis, delay fault testing and equivalence checking, among others.
Despite being a fast SAT algorithm, GRASP introduces noticeable overhead that can become a liability for some of these applications.

[Table 2: Statistics of running GRASP on selected benchmarks]

Consequently, besides the algorithmic organization of GRASP, special attention must be paid to the implementation details. One envisioned compromise is to use GRASP as the second-choice SAT algorithm for the hard instances of SAT, whenever other, simpler algorithms with less overhead fail to find a solution in a small amount of CPU time.

Future research work will emphasize heuristic control of the rate of growth of the clause database.
Another area for improving GRASP is related to the deduction engine. Improvements to the BCP-based deduction engine are described in [14] and consist of different forms of probing the CNF formula for creating new clauses. This approach naturally adapts and extends other deduction procedures, e.g. recursive learning [9] and transitive closure [2].

Acknowledgments

This work was supported in part by NSF under grant MIP-9404632.

References

[1] M. Abramovici, M. A. Breuer and A. D. Friedman, Digital Systems Testing and Testable Design, Computer Science Press, 1990.
[2] S. T. Chakradhar, V. D. Agrawal and S. G. Rothweiler, "A Transitive Closure Algorithm for Test Generation," IEEE Transactions on Computer-Aided Design, vol. 12, no. 7, pp. 1015-1028, July 1993.
[3] M. Davis and H. Putnam, "A Computing Procedure for Quantification Theory," Journal of the Association for Computing Machinery, vol. 7, pp. 201-215, 1960.
[4] DIMACS Challenge benchmarks in ftp://Dimacs.Rutgers.EDU/pub/challenge/sat/benchmarks/cnf. UCSC benchmarks in /pub/challenge/sat/contributed/UCSC.
[5] J. W. Freeman, Improvements to Propositional Satisfiability Search Algorithms, Ph.D. Dissertation, Department of Computer and Information Science, University of Pennsylvania, May 1995.
[6] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, 1979.
[7] M. L. Ginsberg, "Dynamic Backtracking," Journal of Artificial Intelligence Research, vol. 1, pp. 25-46, August 1993.
[8] J. Giraldi and M. L. Bushnell, "Search State Equivalence for Redundancy Identification and Test Generation," in Proceedings of the International Test Conference, pp. 184-193, 1991.
[9] W. Kunz and D. K. Pradhan, "Recursive Learning: An Attractive Alternative to the Decision Tree for Test Generation in Digital Circuits," in Proceedings of the International Test Conference, pp. 816-825, 1992.
[10] T. Larrabee, Efficient Generation of Test Patterns Using Boolean Satisfiability, Ph.D.