Case Studies on Detected Vulnerabilities and Bugs

NOTE: A list of the vulnerabilities and concurrency bugs we detected during evaluation is available at our supplementary material page.

The following case studies on vulnerabilities and bugs aim to highlight:

  • Reasons why MUZZ achieves better results than other fuzzers.
  • Concurrency-bug induced vulnerabilities (V_cb) are a proper subset of multithreading-relevant vulnerabilities (V_m).
  • Initial seeds are usually not sufficient to reveal multithreading-relevant vulnerabilities (V_m).
  • Concurrency bugs do not necessarily introduce vulnerabilities.
  • How the tracked states help generate more high-quality seeds, detect vulnerabilities, and reveal concurrency bugs.

pbzip2-d

Vulnerability: Decompression in multithreading mode causes SIGSEGV (stack-overflow)

./pbzip2 -f -k -p2 -S16 -d ./c01.bz2

It may terminate unexpectedly with the following message:

pbzip2: *ERROR during BZ2_bzDecompress - failure exit code: ret=-4; block=4; seq=-1; isLastInSeq=1; avail_in=10
[1] 40436 segmentation fault ./pbzip2 -f -k -p2 -S16 -d ./c01.bz2

The AddressSanitizer-compiled version reports a stack-overflow:

AddressSanitizer:DEADLYSIGNAL
=================================================================
==51048==ERROR: AddressSanitizer: stack-overflow on address 0x7efc9e9a2e18 (pc 0x7efc9d4223c6 bp 0x7efc9e9a32d0 sp 0x7efc9e9a2d60 T3)
    #0 0x7efc9d4223c5 in _IO_vfprintf /build/glibc-OTsEL5/glibc-2.27/stdio-common/vfprintf.c:1275
    #1 0x7efc9d42567f in buffered_vfprintf /build/glibc-OTsEL5/glibc-2.27/stdio-common/vfprintf.c:2329
    #2 0x7efc9d422725 in _IO_vfprintf /build/glibc-OTsEL5/glibc-2.27/stdio-common/vfprintf.c:1301
    #3 0x441c24 in __interceptor_vfprintf (/home/ubuntu/work/pbzip2/pbzip2-asan-2/pbzip2+0x441c24)
    #4 0x4f60a9 in handle_error(ExitFlag, int, char const*, ...) /home/ubuntu/work/pbzip2/pbzip2-asan-2/pbzip2.cpp:637:2
    #5 0x4f6708 in issueDecompressError(int, outBuff const*, int, bz_stream const&, char const*, int) /home/ubuntu/work/pbzip2/pbzip2-asan-2/pbzip2.cpp:769:2
    #6 0x4f6708 in decompressErrCheckSingle(int, outBuff const*, int, bz_stream const&, char const*, bool) /home/ubuntu/work/pbzip2/pbzip2-asan-2/pbzip2.cpp:830
    #7 0x4f9358 in consumer_decompress /home/ubuntu/work/pbzip2/pbzip2-asan-2/pbzip2.cpp:1557:14
    #8 0x7efc9e50a6da in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x76da)
    #9 0x7efc9d4e888e in clone /build/glibc-OTsEL5/glibc-2.27/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
SUMMARY: AddressSanitizer: stack-overflow /build/glibc-OTsEL5/glibc-2.27/stdio-common/vfprintf.c:1275 in _IO_vfprintf
Thread T3 created by T0 here:
    #0 0x4ac4fd in pthread_create (/home/ubuntu/work/pbzip2/pbzip2-asan-2/pbzip2+0x4ac4fd)
    #1 0x503c1b in main /home/ubuntu/work/pbzip2/pbzip2-asan-2/pbzip2.cpp:4252:12
    #2 0x7efc9d3e8b96 in __libc_start_main /build/glibc-OTsEL5/glibc-2.27/csu/../csu/libc-start.c:310
==51048==ABORTING

The root cause is the call to vfprintf inside handle_error, which eventually exceeds the stack limit.
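To illustrate the shape of this error path, below is a minimal C sketch. The call chain and the function names come from the stack trace above; the simplified signature (the real handle_error takes an ExitFlag) and the function bodies are assumptions, not pbzip2's actual code.

    #include <stdarg.h>
    #include <stdio.h>

    /* Simplified stand-in for pbzip2's
       handle_error(ExitFlag, int, char const*, ...) seen in frame #4. */
    static void handle_error(int exitFlag, int errCode, const char *fmt, ...)
    {
        (void)exitFlag;
        (void)errCode;
        va_list args;
        va_start(args, fmt);
        /* Frames #0-#3 of the ASan report: the variadic arguments are
           forwarded to glibc's vfprintf, whose internal buffering needs
           additional stack; on a worker thread whose stack is already
           nearly exhausted, this last call exceeds the stack limit. */
        vfprintf(stderr, fmt, args);
        va_end(args);
    }

    int main(void)
    {
        handle_error(0, -4, "ERROR during BZ2_bzDecompress - ret=%d\n", -4);
        return 0;
    }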

Several observations:

  • During evaluation, we investigated several fuzzing runs manually. For this crash, MUZZ generated a proof-of-crash in each of its six runs. Based on the "seed generation genealogy", the proof-of-crash was derived either from a "crossover of two multithreading-relevant seeds" or from "arithmetic/bytewise-flip mutations of an existing multithreading-relevant seed". These ancestor seeds were not present in the seed queues generated by the other fuzzers.
  • This crash can only happen in a multithreading environment: when only one thread is specified (-p1), the program never executes consumer_decompress, so the stack never exceeds the limit.
  • The crash does not result from concurrency bugs such as data races or deadlocks. Had the author (wrongly) applied the same restrictions when writing the implementation code for the single-threaded flow, it could have caused the same stack-overflow error there, too.
  • The crash can never occur with the initial seeds, which are valid bz2 files; nor does it occur with input files whose content header does not match the bz2 file format. In fact, this vulnerability only manifests during error handling (handle_error), when one decompression thread finds something wrong while processing its part of the input content.
  • This vulnerability occurs while executing multithreading code. Conventional grey-box fuzzers that apply AFL-Ins instrument the program's basic blocks evenly; since a seed must pass several checks to be valid, this even instrumentation deviates those fuzzers toward emphasizing the non-multithreading parts of the program.

pbzip2-c

Vulnerability: a floating-point exception may occur when the number of processors is larger than the file size (in bytes).

echo > FILE
pbzip2 -r -f -k -p2 FILE

The root cause is that at pbzip2.cpp:4126, `blockSize` may be 0, which causes a divide-by-zero error. Given that the default number of processors used for compression equals the core count, files larger than the one in this example can still trigger the exception on machines with many cores.
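To make the arithmetic concrete, here is a hypothetical sketch of the failure mode (the variable names are assumptions, not the actual code at pbzip2.cpp:4126):

    #include <stdio.h>

    int main(void)
    {
        long fileSize = 1;      /* `echo > FILE` writes a single newline */
        int  numCPU   = 2;      /* -p2 */
        long blockSize = fileSize / numCPU;    /* integer division: 0 */
        long numBlocks = fileSize / blockSize; /* divide-by-zero -> SIGFPE */
        printf("%ld\n", numBlocks);
        return 0;
    }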

  • This vulnerability clearly has nothing to do with concurrency bugs.
  • This crash only occurs in multithreading mode, since with a single thread blockSize could only be zero for an empty (zero-byte) file, and no one compresses an empty file.
  • The initial seeds we used can never trigger this crash, since all of them are at least 4 bytes while 4 threads are utilized in our experiments.

vpxdec-v1.8.0-178

Vulnerability: SEGV during multithreaded vp9 decoding (vp9_predict_intra_block)

This has recently been assigned a CVE ID.

$ gdb --args ./vpxdec -t 4 ./poc.webm -o /tmp/test
Reading symbols from ./vpxdec...done.
gdb$ run
Starting program: /home/ubuntu/work/libvpx/libvpx-orig/install/bin/vpxdec -t 4 ../../../poc.webm -o /tmp/test
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffe33b85700 (LWP 5006)]
[New Thread 0x7ffe33384700 (LWP 5007)]
[New Thread 0x7ffe32b83700 (LWP 5008)]

Thread 1 "vpxdec" received signal SIGSEGV, Segmentation fault.
__memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:291
291 ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: No such file or directory.
gdb$ thread apply all bt

Thread 4 (Thread 0x7ffe32b83700 (LWP 5008)):
#0 0x00007ffff761e9f3 in futex_wait_cancelable (private=<optimized out>, expected=0x0, futex_word=0x9e6764) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1 __pthread_cond_wait_common (abstime=0x0, mutex=0x9e6710, cond=0x9e6738) at pthread_cond_wait.c:502
#2 __pthread_cond_wait (cond=0x9e6738, mutex=0x9e6710) at pthread_cond_wait.c:655
#3 0x00000000005e04fc in thread_loop ()
#4 0x00007ffff76186db in start_thread (arg=0x7ffe32b83700) at pthread_create.c:463
#5 0x00007ffff6da088f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 3 (Thread 0x7ffe33384700 (LWP 5007)):
#0 0x00007ffff761e9f3 in futex_wait_cancelable (private=<optimized out>, expected=0x0, futex_word=0x9e65b4) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1 __pthread_cond_wait_common (abstime=0x0, mutex=0x9e6560, cond=0x9e6588) at pthread_cond_wait.c:502
#2 __pthread_cond_wait (cond=0x9e6588, mutex=0x9e6560) at pthread_cond_wait.c:655
#3 0x00000000005e04fc in thread_loop ()
#4 0x00007ffff76186db in start_thread (arg=0x7ffe33384700) at pthread_create.c:463
#5 0x00007ffff6da088f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 2 (Thread 0x7ffe33b85700 (LWP 5006)):
#0 0x00007ffff761e9f3 in futex_wait_cancelable (private=<optimized out>, expected=0x0, futex_word=0x9e6404) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1 __pthread_cond_wait_common (abstime=0x0, mutex=0x9e63b0, cond=0x9e63d8) at pthread_cond_wait.c:502
#2 __pthread_cond_wait (cond=0x9e63d8, mutex=0x9e63b0) at pthread_cond_wait.c:655
#3 0x00000000005e04fc in thread_loop ()
#4 0x00007ffff76186db in start_thread (arg=0x7ffe33b85700) at pthread_create.c:463
#5 0x00007ffff6da088f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 1 (Thread 0x7ffff7fa2740 (LWP 30707)):
#0 __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:291
#1 0x000000000061e855 in vp9_predict_intra_block ()
#2 0x00000000004d63d3 in decode_block ()
#3 0x00000000004d584e in decode_partition ()
#4 0x00000000004d555d in tile_worker_hook ()
#5 0x00000000005e0417 in execute ()
#6 0x00000000004cc858 in vp9_decode_frame ()
#7 0x00000000004d8f95 in vp9_receive_compressed_data ()
#8 0x00000000004c9329 in decode_one ()
#9 0x00000000004c82b3 in decoder_decode ()
#10 0x000000000046d18b in vpx_codec_decode ()
#11 0x000000000040c2ec in main_loop (argc=<optimized out>, argv_=<optimized out>) at vpxdec.c:846
#12 main (argc=<optimized out>, argv_=0x7fffffffba58) at vpxdec.c:1122

The root cause turned out to be an integer overflow when the vp9 frame size is too big.
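A hedged sketch of the overflow pattern follows (the dimensions and the exact computation are assumptions, not libvpx's code): a 32-bit product of attacker-controlled frame dimensions wraps around, so a buffer sized from the wrapped value is far smaller than the region the decoder later memmove()s into it.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t w = 0x12000, h = 0x10000; /* oversized frame declared in the input */
        uint32_t wrapped = w * h;                     /* wraps modulo 2^32 */
        uint64_t actual  = (uint64_t)w * (uint64_t)h; /* bytes the decoder writes */
        printf("buffer sized with %u bytes, decoder writes %llu bytes\n",
               wrapped, (unsigned long long)actual);
        return 0;
    }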

  • With the provided initial seeds, MUZZ detected this vulnerability within 24 hours (two of the six fuzzing runs detected it, in 5h38min and 16h07min respectively), whereas MAFL/AFL/MOPT failed to detect it within 360 hours (15 days) in all of their six runs. It is worth noting that the proof-of-crash seed is really hard to generate, and we believe MUZZ's ability to detect this vulnerability results largely from its coverage-oriented instrumentation.
  • This vulnerability can only happen in multithreading settings.
  • This vulnerability has nothing to do with data-race, lock-order-inversion, etc.
  • The triggering input must be an invalid (malformed) input for vpxdec.

gm-cnvt

Concurrency bug: a data race on the statistical information used for output:

WARNING: ThreadSanitizer: data race (pid=29974)
  Atomic write of size 8 at 0x7ffee84393b8 by thread T6:
    #0 __tsan_atomic64_fetch_add <null> (gm+0x476040)
    #1 .omp_outlined. /home/exp/work/gm/GM-tsan/magick/gradient.c:123:7 (gm+0xafa24b)
    #2 __kmp_invoke_microtask /home/exp/work/imagemagick/openmp/BUILD/../runtime/src/z_Linux_asm.s:1399 (libomp.so+0x7a292)

  Previous read of size 8 at 0x7ffee84393b8 by main thread:
    #0 .omp_outlined. /home/exp/work/gm/GM-tsan/magick/gradient.c:124:11 (gm+0xafa261)
    #1 __kmp_invoke_microtask /home/exp/work/imagemagick/openmp/BUILD/../runtime/src/z_Linux_asm.s:1399 (libomp.so+0x7a292)
    #2 DrawImage /home/exp/work/gm/GM-tsan/magick/render.c:3538:20 (gm+0x624fd1)
    #3 DrawPatternPath /home/exp/work/gm/GM-tsan/magick/render.c:4610:10 (gm+0x631457)
    #4 DrawImage /home/exp/work/gm/GM-tsan/magick/render.c:2797:22 (gm+0x61d3a8)
    #5 ReadMVGImage /home/exp/work/gm/GM-tsan/coders/mvg.c:237:10 (gm+0x8e2ea6)
    #6 ReadImage /home/exp/work/gm/GM-tsan/magick/constitute.c:1607:13 (gm+0x555462)
    #7 ReadSVGImage /home/exp/work/gm/GM-tsan/coders/svg.c:3945:13 (gm+0x98ca4b)
    #8 ReadImage /home/exp/work/gm/GM-tsan/magick/constitute.c:1607:13 (gm+0x555462)
    #9 ConvertImageCommand /home/exp/work/gm/GM-tsan/magick/command.c:4362:22 (gm+0x4e225a)
    #10 MagickCommand /home/exp/work/gm/GM-tsan/magick/command.c:8886:17 (gm+0x5136b2)
    #11 GMCommandSingle /home/exp/work/gm/GM-tsan/magick/command.c:17408:10 (gm+0x539381)
    #12 GMCommand /home/exp/work/gm/GM-tsan/magick/command.c:17461:16 (gm+0x539025)
    #13 main /home/exp/work/gm/GM-tsan/utilities/gm.c:61:10 (gm+0x4c242b)

  Location is stack of main thread.

  Thread T6 (tid=29981, running) created by main thread at:
    #0 pthread_create <null> (gm+0x433666)
    #1 __kmp_create_worker /home/exp/work/imagemagick/openmp/BUILD/../runtime/src/z_Linux_util.cpp:958:14 (libomp.so+0x6ef74)
    #2 DrawImage /home/exp/work/gm/GM-tsan/magick/render.c:3538:20 (gm+0x624fd1)
    #3 DrawPatternPath /home/exp/work/gm/GM-tsan/magick/render.c:4610:10 (gm+0x631457)
    #4 DrawImage /home/exp/work/gm/GM-tsan/magick/render.c:2797:22 (gm+0x61d3a8)
    #5 ReadMVGImage /home/exp/work/gm/GM-tsan/coders/mvg.c:237:10 (gm+0x8e2ea6)
    #6 ReadImage /home/exp/work/gm/GM-tsan/magick/constitute.c:1607:13 (gm+0x555462)
    #7 ReadSVGImage /home/exp/work/gm/GM-tsan/coders/svg.c:3945:13 (gm+0x98ca4b)
    #8 ReadImage /home/exp/work/gm/GM-tsan/magick/constitute.c:1607:13 (gm+0x555462)
    #9 ConvertImageCommand /home/exp/work/gm/GM-tsan/magick/command.c:4362:22 (gm+0x4e225a)
    #10 MagickCommand /home/exp/work/gm/GM-tsan/magick/command.c:8886:17 (gm+0x5136b2)
    #11 GMCommandSingle /home/exp/work/gm/GM-tsan/magick/command.c:17408:10 (gm+0x539381)
    #12 GMCommand /home/exp/work/gm/GM-tsan/magick/command.c:17461:16 (gm+0x539025)
    #13 main /home/exp/work/gm/GM-tsan/utilities/gm.c:61:10 (gm+0x4c242b)
...
...
...
SUMMARY: ThreadSanitizer: data race (/home/exp/work/gm/GM-tsan/install/bin/gm+0x476040) in __tsan_atomic64_fetch_add
==================
ThreadSanitizer: reported 206 warnings

The corresponding concurrency bug has been fixed by the GraphicsMagick maintainer.

  1. ThreadSanitizer usually reports many pairs of issues, so we have to deduplicate them manually according to the root cause of the data race; otherwise there are too many duplicates (hence the difference between the number of observed concurrency-violation executions and the number of concurrency bugs).
  2. The root cause is a missing lock between the write of row_count and the subsequent read of it. The following two interleavings are both possible, and QuantumTick observes a different row_count value in each:
    T1: row_count++;
    T2: row_count++;
    T1: if (QuantumTick(row_count,image->rows))
    T2: if (QuantumTick(row_count,image->rows))

and

    T1: row_count++;
    T1: if (QuantumTick(row_count,image->rows))
    T2: row_count++;
    T2: if (QuantumTick(row_count,image->rows))

The bug, however, can be considered benign, since the value of row_count is only used to display some statistics on the console; it will never cause any vulnerabilities.
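For reference, below is a minimal, self-contained OpenMP sketch of the racy pattern and the missing lock (row_count and QuantumTick are names taken from the report; the loop body, the QuantumTick stand-in, and the critical-section fix are assumptions, not necessarily the maintainer's actual patch). Compile with -fopenmp; removing the critical section reintroduces the interleavings shown above.

    #include <stdio.h>

    static long row_count = 0;

    /* Stand-in for GraphicsMagick's QuantumTick: tick roughly every 1%. */
    static int QuantumTick(long offset, long span)
    {
        long step = span / 100 > 0 ? span / 100 : 1;
        return offset % step == 0;
    }

    int main(void)
    {
        const long rows = 1000;
        long ticks = 0;
    #pragma omp parallel for reduction(+:ticks)
        for (long y = 0; y < rows; y++) {
            /* The missing lock: without this critical section the
               increment and the check can interleave across threads. */
    #pragma omp critical (row_count_lock)
            {
                row_count++;
                if (QuantumTick(row_count, rows))
                    ticks++;
            }
        }
        printf("progress ticks: %ld, rows processed: %ld\n", ticks, row_count);
        return 0;
    }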