UAFL: Additional Experimental Results
Typestate-Guided Fuzzer for Discovering Use-after-Free Vulnerabilities
About UAFL
Existing coverage-based fuzzers usually use individual control-flow graph (CFG) edge coverage to guide the fuzzing process, which has shown great potential in finding vulnerabilities. However, CFG edge coverage is not effective in discovering certain classes of vulnerabilities, such as use-after-free (UaF). This is because triggering a UaF vulnerability requires not only covering individual edges, but also traversing a long sequence of edges in a particular order, which is challenging for existing fuzzers. To this end, we first propose to model UaF vulnerabilities as typestate properties, and then develop a typestate-guided fuzzer, named UAFL, for discovering vulnerabilities that violate typestate properties. Given a typestate property, we first perform a static typestate analysis to find operation sequences that potentially violate the property. The fuzzing process is then guided by these operation sequences, progressively generating test cases that trigger property violations. In addition, we adopt information flow analysis to improve the efficiency of the fuzzing process. We performed a thorough evaluation of UAFL on 14 widely used real-world programs. The experimental results show that UAFL substantially outperforms state-of-the-art fuzzers, including AFL, AFLFast, FairFuzz, MOpt, Angora, and QSYM, in terms of the time taken to discover vulnerabilities. We discovered 10 previously unknown vulnerabilities and received 5 new CVEs.
[Figure: Code coverage relevant to operation sequences]
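To make the typestate intuition concrete, below is a minimal, hypothetical C program containing a use-after-free (it is an illustration of the problem, not code from UAFL). Triggering the bug requires a single input to drive the operations malloc → free → use on the same pointer, in that exact order; covering each branch individually, which is what plain edge coverage rewards, is easy but never violates the property.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Typestate of a heap pointer: allocated -> freed -> used (violation).
 * Each branch below is trivially coverable in isolation; the UaF only
 * manifests when one input takes all three operations in order. */
int main(int argc, char **argv) {
    if (argc < 2 || strlen(argv[1]) < 2) return 0;

    char *buf = malloc(16);            /* operation 1: allocate */
    if (!buf) return 1;
    strncpy(buf, argv[1], 15);
    buf[15] = '\0';

    if (argv[1][0] == 'F')
        free(buf);                     /* operation 2: free */

    if (argv[1][1] == 'U')
        printf("%c\n", buf[0]);        /* operation 3: use; a UaF only
                                          if the free branch ran first */
    return 0;
}
```

Inputs like "F." and ".U" together achieve full edge coverage without any violation; only an input such as "FU" traverses the complete operation sequence and triggers the use-after-free. Sequences of this kind are exactly what the static typestate analysis extracts and what UAFL's feedback is designed to reward.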
Experimental Raw Data
We have made the raw data from the experiments in our paper publicly available; you can download it from here.
Additional Experiment
To evaluate the effect of seed selection, we configured a variant of UAFL with our seed selection disabled. Initial experiments (8 repetitions) on two programs (GNU_cflow and boringssl) show that bug-finding performance decreases by 9.39%. We have also made the raw data of this experiment available; you can download it from the following link.
Our Evaluation Data Set
We selected the evaluation benchmarks considering several factors, e.g., popularity, how frequently they have been tested, development activeness, and functional diversity. We also release detailed information about our benchmark programs, including the PoC files and the initial seeds; you can download it from here.
The initial seeds and PoC files are located in the "Fuzzing" folder under each project.
Publication
@inproceedings{wang2020uafl,
  author    = {Wang, Haijun and Xie, Xiaofei and Li, Yi and Wen, Cheng and Liu, Yang and Qin, Shengchao and Chen, Hongxu and Sui, Yulei},
  title     = {Typestate-Guided Fuzzer for Discovering Use-after-Free Vulnerabilities},
  booktitle = {2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE)},
  year      = {2020},
  address   = {Seoul, South Korea},
}