Fuzzing with DOM Level 2 and 3

Overview
Fuzzing techniques have proved very effective in finding vulnerabilities in web browsers.
Over time several valuable fuzzers have been written, and some of them (mangleme, cross_fuzz) have become de facto standards, widely adopted by the security research community.
The most common approach in browser fuzzing leverages DOM Level 1 interfaces: DOM elements are randomly created, crawled, tweaked and deleted.
Using this approach, hundreds of memory corruption bugs have been uncovered in all mainstream browsers but, due to its widespread coverage, spotting new bugs is becoming increasingly difficult.

At the DeepSec conference in Vienna, I presented an evolutionary approach to browser fuzzing that relies on DOM interfaces introduced by the W3C DOM Level 2 and Level 3 specifications.
Using this approach, a fuzzer prototype was built and tested against IE9, IE10 and Chrome, with interesting results: more than 70 different crashes were generated and several memory corruption errors were found, some of which turned out to be exploitable.

Technique analysis
The usual approach in browser fuzzing leverages DOM Level 1 interfaces to perform DOM mutations:
  1. a large number of random HTML elements is created and randomly added to the HTML document tree
  2. the DOM tree is crawled and element references are collected for later reuse
  3. element attributes and function calls are tweaked
  4. random DOM nodes are deleted
  5. garbage collection is forced
  6. collected references are crawled and tweaked again
This approach is effective but suffers from some design limitations:
  1. every DOM mutation (e.g. element tweaking, deletion, etc.) is performed on a single HTML element; no mass mutations are performed
  2. the execution workflow can only be altered by the cardinality and the type of randomly generated DOM nodes (e.g. different tag names, attributes, etc.).
The entropy of a browser fuzzer can be taken to the next level by introducing some new functionality defined in the DOM Level 2 and Level 3 specifications.

Working with aggregations of nodes (DOM Level 2)
The DOM Level 2 specification introduces several new interfaces that allow mutations to be performed on collections of DOM elements.
For instance, interfaces like DocumentFragment, Range, TextRange and SelectionRange make it possible to create logical aggregations of nodes and execute CRUD mutations on them using a rich set of APIs.
A quick list of these methods: createRange, deleteContents, extractContents, cloneContents, cloneRange, removeRange, createTextRange, pasteHTML, etc.
The expectation is that each of the methods provided for the insertion, deletion and copying of contents can be directly mapped to a series of Node editing operations enabled by the DOM Core.
In this sense, these operations can be viewed as convenience methods that also enable a browser implementation to optimize common editing patterns.
It turns out that implementation bugs in these methods can lead to serious memory corruption errors when they are not correctly mapped to atomic, safe node operations.
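As a minimal, browser-only sketch of this idea (the element IDs are hypothetical, not taken from Nduja), a Range can be stretched across an arbitrary span of nodes and mass mutations applied to the whole selection at once:

```javascript
// Build a Range spanning an arbitrary slice of the document tree.
// The element IDs ("a", "b") are hypothetical placeholders.
var range = document.createRange();
range.setStartBefore(document.getElementById("a"));
range.setEndAfter(document.getElementById("b"));

// Mass operations on the whole aggregation: internally the browser must
// map each of these onto a series of atomic Node editing operations.
var clone = range.cloneContents();   // deep copy of the selected subtree
range.deleteContents();              // mass-delete every node in the range
document.body.appendChild(clone);    // reinsert the cloned subtree elsewhere
```

A fuzzer simply picks the range boundaries and the mutation method at random, hoping to hit a code path where the convenience method and the underlying node operations disagree.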

Using document traversal data structures (DOM Level 2)
In the classic fuzzer approach, crawling the DOM tree means walking the physical tree from the top-level node (DOCTYPE or HTML) down to the leaves using DOM Level 1 methods (.children, .parentNode, .nextSibling, etc.).
DOM Level 2 provides several data structures for creating logical views of a document subtree (e.g. NodeList, TreeWalker, NodeIterator); we refer to these as logical views to distinguish them from the physical view, which corresponds to the document subtree per se.
These logical views are live, in the sense that their structure changes to reflect modifications made to the underlying document.
That is why memory corruption scenarios arise when DOM mutations performed on the physical tree are not correctly propagated to the logical counterpart.
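A browser-only sketch of this pattern (a simplification, not Nduja's actual code): build a logical view with a TreeWalker, then mutate the physical tree while walking the logical one, forcing the implementation to keep the two in sync.

```javascript
// Create a logical view (TreeWalker) over the body's element nodes.
var walker = document.createTreeWalker(
    document.body,
    NodeFilter.SHOW_ELEMENT,
    null,   // no custom filter
    false
);

var node;
while ((node = walker.nextNode())) {
    // Randomly rip nodes out of the physical tree mid-traversal;
    // a stale reference held by the logical view can turn into a
    // use-after-free if the implementation mishandles the update.
    if (Math.random() < 0.3 && node.parentNode) {
        node.parentNode.removeChild(node);
    }
}
```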

Introducing events (DOM Level 3)
In order to add more entropy to the fuzzer workflow, event firing and propagation can be used. The DOM Level 3 specification defines a standard way to create events, fire them and manage event listeners for every DOM node. On top of that, the specification defines a standard event propagation model.
From the spec page (http://www.w3.org/TR/DOM-Level-3-Events/#dom-event-architecture):
"Event objects are dispatched to an event target. At the beginning of the dispatch, implementations must first determine the event object's propagation path."
The propagation path of an event consists of three phases:
  1. capture phase: the event propagates through the target's ancestors, from the document root to the target's parent. Event listeners registered for this phase handle the event before it reaches its target.
  2. target phase: the event arrives at the final target element. Event listeners registered for this phase handle the event once it has reached its target.
  3. bubble phase: the event propagates through the target's ancestors in reverse order, starting with the target's parent and ending with the document root element. Event listeners registered for this phase handle the event after it has reached its target.
From the spec page: "The propagation path must be an ordered list of current event targets through which the event object must pass. For DOM implementations, the propagation path must reflect the hierarchical tree structure of the document. The last item in the list must be the event target; the preceding items in the list are referred to as the target's ancestors, and the immediately preceding item as the target's parent. Once determined, the propagation path must not be changed; for DOM implementations, this applies even if an element in the propagation path is moved within the DOM or removed from the DOM. Additionally, exceptions inside event listeners must not stop the propagation of the event or affect the propagation path."

So the idea here is to let an event fire and alter the DOM tree structure after the propagation path has already been defined. This can easily be achieved by attaching random event listeners to every DOM element with the capture flag set to true. Whenever an event targeting a node is intercepted by one of the node's ancestors, some random operations are performed on the DOM tree, removing or inserting whole sets of elements. Then the propagation of the event continues.
Note: this technique proved dramatically effective in crashing IE9/10, probably because IE9 is the first IE version to support the standard W3C event model, and the implementation is not yet bullet-proof.
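The core of the trick can be sketched in a few lines of browser-only code (a simplified illustration, not the prototype's actual implementation): a capturing listener high up the tree mutates the DOM while the event is still mid-propagation, even though the propagation path was frozen when dispatch started.

```javascript
// Capturing listener registered near the document root: it fires while
// the event is still in the capture phase, before reaching its target.
document.documentElement.addEventListener("click", function (e) {
    var target = e.target;
    // Rip the target's parent (and thus the target itself) out of the
    // tree mid-dispatch. Per the spec, the propagation path must not
    // change, so the event keeps travelling along the original path
    // through nodes that are no longer in the document.
    if (target.parentNode && target.parentNode.parentNode) {
        target.parentNode.parentNode.removeChild(target.parentNode);
    }
}, true); // true = register for the capture phase
```

If the implementation frees a detached node while the dispatch machinery still holds a pointer to it in the propagation path, a use-after-free results.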

The fuzzer prototype: Nduja
The prototype has been implemented in JavaScript and the fuzzing algorithm can be summed up as follows:
  • Initialization
    • create a given number of random HTML elements
    • for each element tweak random attributes/functions/styles
    • for each element add a given number of event listeners
    • append the elements in a random position in the document tree
    • create logical views of a random DOM subtree (nodeIterator, treeWalker)
  • Crawling
    • Crawl DOM tree
    • Create random Range (or similar DOM Level 2 structure) and apply random mutations on it (delete, insert, surround, etc)
    • Crawl DOM logical views
  • Event management: events are managed accordingly to the dispatching phase
    • if the event is in the capture/bubble phase:
      • remove some random event listeners from the current target
      • create random Range (or similar DOM Level 2 structure) and apply random mutations on it (delete, insert, surround, etc)
      • crawl physical tree and logical views
    • if the event is in the target phase
      • add some other event listeners to the event target node
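The initialization step above can be sketched as a browser-only skeleton (a hypothetical simplification: the tag/event lists, counts and helper names are mine, not Nduja's):

```javascript
// Hypothetical tuning parameters, not taken from the real fuzzer.
var TAGS   = ["div", "span", "table", "iframe", "a"];
var EVENTS = ["click", "focus", "change", "DOMNodeInserted"];

function randomItem(arr) {
    return arr[Math.floor(Math.random() * arr.length)];
}

function init(elementCount, listenersPerElement) {
    for (var i = 0; i < elementCount; i++) {
        // create a random HTML element and tweak attributes/styles
        var el = document.createElement(randomItem(TAGS));
        el.setAttribute("id", "fz" + i);
        el.style.cssText = "width:" + Math.floor(Math.random() * 500) + "px";

        // attach capturing listeners so ancestors can mutate the tree
        // while events targeting descendants are mid-propagation
        for (var j = 0; j < listenersPerElement; j++) {
            el.addEventListener(randomItem(EVENTS), onEvent, true);
        }

        // append the element at a random position in the document tree
        var all = document.getElementsByTagName("*");
        all[Math.floor(Math.random() * all.length)].appendChild(el);
    }
}

function onEvent(e) {
    // crawl, build Ranges and apply mass mutations here, as described above
}
```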

Results
The prototype has been tested against IE9/Win7, IE10/Win8 and Chrome 23/Win7.
Every crash was collected on the fly using a WinDbg script and classified with !exploitable/pageheap/!analyze.
Counting the distinct function calls/offsets where IE crashes, more than 70 different bugs exist.
The majority of them are heap corruption errors, mainly use-after-frees and double frees.
The Grinder framework was used to run the fuzzer, collect results and reproduce test cases.

Using the Nduja fuzzer I have been able to discover a number of 0-day vulnerabilities; three of them (use-after-frees) have been reported to MSRC and already patched in recent MS bulletins:
  • UAF in EventListener - CVE-2012-2546 (patched in bulletin MS12-063)
  • UAF in InjectHTMLStream - CVE-2012-4781 (patched in bulletin MS12-077)
  • UAF in CMarkup - CVE-2012-4782 (patched in bulletin MS12-077)
Further developments
There is no final version of the Nduja fuzzer; the attached sample is just one of the many possible implementations of the same fuzzing approach: events + massive mutations + iterators.
My suggestion is to pick up the code, modify it at will and test your own variant.
There are many browsers on which the fuzzer has not been widely tested yet: Firefox, Opera, Safari, and all mobile OS browsers.

Notes on the Nduja fuzzer
The fuzzer is:
- highly prototypal
- not really optimized or performant
- largely uncommented

It is published for educational purposes only and I will not be responsible for any misuse or any harm it may cause to your own HW or SW.
Use it at your own risk. :-)

Acknowledgements
  • Roberto Suggi Liverani for his valuable support during the early stage of my research
  • Giorgio Fedon of MindedSecurity for encouragements and for providing some precious insights on memory bugs exploitability conditions. THX.A.LOT.
  • Luca Carettoni: it was a pleasure to meet you and share a cup of coffee  @ DeepSec. Hope to see you again soon!
Rosario Valotta,
Jan 3, 2013, 2:56 AM