SoC 2006 Supplemental Information

All of the stuff too long to include in the application. 


A concise introduction to the background material relevant to my Project Application.  The material here is the nucleus of the documentation effort for this project.  As such, I will attempt a brief introduction to the research background, but will ultimately provide links and citations for the interested reader to follow.

 Over time, this page will expand to include periodic updates on the progress of my project.  Assuming my proposal is provided with positive acceptance post-haste!  Ah, alliteration!

Application (submitted: May 4, 2006)

My Google Summer of Code 2006 Project Application: 

Software Transactional Memory for User-Space Micro-Threads.


My resume can be viewed in PDF format.  Please pardon the fact that this particular draft was tailored to a specific job application.  A newer draft will take its place soon.


Haskell is the standard pure, non-strict, lazy, strongly-typed, higher-order, functional language. Haskell also contains a monadic sub-language.  Let us examine each of these terms so that their meaning is clear.

Pure: Haskell functions can only produce output from their input arguments.  This means that side-effects are not allowed.  Side-effects are computations that affect program state outside of a function; they include input/output, mutable state, exceptions, and concurrency (a.k.a. the "Awkward Squad").  However, Haskell includes a very clever means of allowing such things in a strictly controlled manner.
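A minimal sketch of this distinction (the function names are my own, chosen for illustration): a pure function depends only on its arguments, while side-effects such as printing are confined to the IO type.

```haskell
-- A pure function: its output depends only on its argument.
double :: Int -> Int
double x = x * 2

-- Side-effects live in the IO type; a pure function like `double`
-- cannot print, because printing is not pure.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)

main :: IO ()
main = do
  print (double 21)   -- pure computation
  greet "world"       -- side-effecting computation, tagged by IO
```

The type system itself enforces the separation: `double` simply has no way to perform I/O.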

Non-strict: Haskell functions do not evaluate their arguments until an argument is actually needed in the function body.  This means that Haskell functions can be passed very complicated and expensive computations as arguments and will only ever evaluate those computations if they are actually used.  This avoids unnecessary computation.
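A quick sketch of non-strictness (illustrative names only): the second argument below is never evaluated, so passing even `undefined` is harmless.

```haskell
-- pickFirst never touches its second argument, so that argument
-- is never evaluated.
pickFirst :: a -> b -> a
pickFirst x _ = x

main :: IO ()
main = print (pickFirst 7 (undefined :: Int))  -- prints 7; undefined is untouched
```

In a strict language this program would crash; in Haskell it happily prints `7`.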

Lazy: Haskell, in general, tries to avoid doing computations until they are necessary.  This is related to the non-strictness described above.  Like non-strictness, laziness can help Haskell avoid wasting computational time and memory.  Unfortunately, laziness can be a difficult concept to program against, but it allows such wild things as cyclic data structures.
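One of those wild things, sketched: a list defined in terms of itself. Laziness lets us take a finite prefix of a conceptually infinite structure.

```haskell
-- A cyclic data structure: `ones` is defined in terms of itself.
ones :: [Int]
ones = 1 : ones

main :: IO ()
main = print (take 5 ones)  -- only the first five cells are ever forced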

Strongly-typed: Haskell uses the Hindley-Milner type system.  This type system is extremely expressive.  All Haskell expressions have a type that strictly determines how they may be composed with other expressions.  Additionally, Haskell has a powerful notion of polymorphism, allowing functions to operate over "any" type.
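A small example of that polymorphism: a single definition that works at every pair of types, because the type variables `a` and `b` stand for "any" type.

```haskell
-- Parametric polymorphism: one definition, usable at any pair of types.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

main :: IO ()
main = do
  print (swap (1 :: Int, "one"))
  print (swap (True, 'c'))
```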

Higher-order: Haskell functions can accept functions as input and may return functions as output.  This allows many complex programs to be represented by a handful of higher-order functions, in a manner similar to the Object-Oriented notion of Design Patterns.
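A tiny sketch of a higher-order function (`twice` is my own illustrative name): it takes a function as input and returns a new function as output.

```haskell
-- `twice` both accepts a function and returns one: it composes
-- its argument with itself.
twice :: (a -> a) -> (a -> a)
twice f = f . f

main :: IO ()
main = do
  print (twice (+3) 10)             -- (10 + 3) + 3
  print (map (twice (*2)) [1,2,3])  -- each element doubled twice
```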

Functional: To paraphrase John Hughes in "Why Functional Programming Matters", functional programming languages provide more useful abstractions and stronger "glue" for developing complex applications.  Typical applications in a functional language use a small set of higher-order functions (dubbed "combinators") to build more complex procedures.  Recursion is used to simulate the looping behavior of imperative languages.  Haskell is usually deemed a "pure" functional language, for all the reasons listed above.  Haskell programs are often terse, and refactoring tends to lift repetitive function definitions into higher-order functions.

Monad: A scary word for a simple, powerful, "warm-fuzzy-of-an" idea.  In Haskell, one often programs in an imperative style using two simple operations: return (sometimes called "result") and bind (usually written ">>=").  Monads are "notions of computation"; in other words, a monad captures the sequence of operations that compose a computation and the series of variables that are bound in that computation.  This allows a programmer to control the flow of a procedure, something that is usually left to the language runtime or compiler given the non-strict and lazy nature of Haskell.  Monadic code is still lazy and non-strict, but it follows a defined sequence of operations.  This allows Haskell to easily interface with the outside world (input/output), other languages (foreign function interfaces), catch and throw exceptions, carry state between computations, and implement concurrency primitives.  All of these things are captured in Monads so that they cannot directly interfere with each other or disturb Haskell's assumptions about the purity of its functions.  Of course, the devel's [pun intended] in the details.
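A small sketch using the Maybe monad, where bind threads the result of one possibly-failing step into the next (the function `parseAndHalve` is my own illustrative example):

```haskell
import Text.Read (readMaybe)

-- In the Maybe monad, bind (>>=) passes a successful result to the
-- next step; a failure (Nothing) short-circuits the whole sequence.
parseAndHalve :: String -> Maybe Int
parseAndHalve s =
  readMaybe s >>= \n ->
    if even n then return (n `div` 2) else Nothing

main :: IO ()
main = do
  print (parseAndHalve "10")   -- both steps succeed
  print (parseAndHalve "7")    -- parse succeeds, halving step fails
  print (parseAndHalve "abc")  -- parse fails, second step never runs
```

The same plumbing pattern, with different definitions of bind, gives state, exceptions, I/O, and the STM monad discussed below.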

Software Transactional Memory (STM)

 In a nutshell, STM provides a means for disparate threads of computation to share a dependence on data.  Of course, data boils down to locations in computer memory.  So, STM allows one thread to modify memory while other threads must wait.  There are many synchronization algorithms that provide this property, but STM deals with it in a superior manner.  However, to keep the subject manageable, I will limit the discussion to STM Haskell.

 STM Haskell allows a transactional variable (a TVar) to be shared among different threads of computation.  When a thread reads from a TVar, the value read is actually recorded in a thread-specific log.  The thread then performs operations using this copy of the TVar.  Any updates to the TVar are written to a separate entry in the log.  At the end of the computation, the transaction attempts to commit its update to the original TVar: the TVar is compared to the copy in the thread's log.  If they match, then the update is committed and the TVar assumes its new value.  However, if the TVar does not match the copied value, then the log is erased and the computation is rescheduled for a later time.  The current STM Haskell implementation always reschedules the computation to occur after a successful commit to the TVar by another computation.
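A sketch of what this looks like to the programmer (using the stm library that ships with GHC; note the library spells the run function `atomically` where the paper says "atomic", and `transfer` is my own illustrative example):

```haskell
import Control.Concurrent.STM

-- Reads and writes inside STM go to the transaction's log; the
-- whole block commits, or is rerun, as a single unit.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  b1 <- readTVar from
  writeTVar from (b1 - amount)
  b2 <- readTVar to
  writeTVar to (b2 + amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  final <- atomically ((,) <$> readTVar a <*> readTVar b)
  print final
```

No other thread can ever observe the intermediate state where the money has left `a` but not yet arrived in `b`.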

STM Haskell allows STM computations to be composed in sequence using ";", like any other monadic construct.  A computation can "retry", which means it voluntarily erases its log and is scheduled to rerun when a successful commit occurs.  Two STM computations can be composed as alternatives using the "orElse" operator.  The STM computation composed by "T1 orElse T2" first tries transaction T1.  If T1 fails or retries, transaction T2 is attempted.  If T2 fails or retries, the entire transaction, "T1 orElse T2", is suspended and rerun when one of the transactional variables on which T1 or T2 depend has a successful commit.  Notice, there is no innate bias as to which transaction will be rerun; it is simply the first whose TVar(s) commit.  Finally, STM transactions are run using the "atomic" function.  Transactions can be composed freely, as required by the application, but they do not execute outside of the "atomic" function.
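A sketch of retry and orElse together, again using the stm library's spellings (`takeItem` and `takeEither` are my own illustrative names):

```haskell
import Control.Concurrent.STM

-- Blocks (retries) until the queue is non-empty; the retry wakes up
-- only after some other transaction commits to this TVar.
takeItem :: TVar [a] -> STM a
takeItem tv = do
  xs <- readTVar tv
  case xs of
    []     -> retry
    (y:ys) -> writeTVar tv ys >> return y

-- Try the first queue; if it would retry, fall back to the second.
takeEither :: TVar [a] -> TVar [a] -> STM a
takeEither q1 q2 = takeItem q1 `orElse` takeItem q2

main :: IO ()
main = do
  q1 <- newTVarIO ([] :: [Int])
  q2 <- newTVarIO [42]
  x  <- atomically (takeEither q1 q2)  -- q1 is empty, so q2 is used
  print x
```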

 Recent advances also allow transactions to have invariants.  In other words, at the creation of a TVar, or every time a commit is attempted, certain invariant properties can be checked.  Failure of an invariant fails the transaction, and invariant functions cannot perform any lasting side-effects, as their effects are discarded when the TVar commits.

STM Haskell is built on top of Concurrent Haskell.

User-Space Micro-Threads 

User-Space Micro-Threads is a slightly misleading name I am using for lack of a better description.  In actuality, they are a combination of event systems and threads.  They intersperse computation with events, which act as system calls to the event handler, which calls into the thread scheduler.  A key feature of micro-threads is that they are implemented entirely in Haskell, without modifying the language runtime or relying on an interface to operating system threads.  Of course, for making use of multiprocessors such facilities are desirable, and they can be provided by Concurrent Haskell.

In this project I will consider two implementations of micro-threads: Continuation-Passing Style (CPS) Monads and Resumption Monad Transformers (ResT).  Of the two methods, CPS is more powerful and general, but ResT is more concise and easier to understand.  It is possible that the project can provide useful code for both styles of micro-threads.  I am biased towards ResT as I have more understanding of it, having only recently become aware of a concrete implementation of the CPS technique.  
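To give a flavor of the ResT style, here is a toy (non-transformer) resumption monad of my own construction: a computation either finishes with a value or pauses, handing the rest of itself back to a scheduler. The real ResT layers this over another monad, but the shape is the same.

```haskell
-- A computation either finishes, or pauses and yields its
-- continuation to whoever is driving it (the scheduler).
data Res a = Done a | Pause (Res a)

instance Functor Res where
  fmap f (Done a)  = Done (f a)
  fmap f (Pause r) = Pause (fmap f r)

instance Applicative Res where
  pure = Done
  Done f  <*> r = fmap f r
  Pause f <*> r = Pause (f <*> r)

instance Monad Res where
  Done a  >>= k = k a          -- finished: feed the value onward
  Pause r >>= k = Pause (r >>= k)  -- paused: stay paused, remember the rest

-- A voluntary yield point.
pause :: Res ()
pause = Pause (Done ())

-- A trivial "scheduler": run to completion, counting yields.
run :: Res a -> (a, Int)
run (Done a)  = (a, 0)
run (Pause r) = let (a, n) = run r in (a, n + 1)

demo :: Res Int
demo = do
  pause
  pause
  return 5

main :: IO ()
main = print (run demo)
```

A real scheduler would interleave many such computations at their pause points, which is exactly where events and system calls slot in.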

Whence Google Should Pay Me! 

 Of course, the real meat and potatoes of this project will be determining exactly how STM Haskell depends upon Concurrent Haskell and swapping in user-space micro-threads.  And yet, STM Haskell is bound to a particular runtime of the Glasgow Haskell Compiler, as much of its functionality is implemented in C code.  This will be the potentially difficult part to implement entirely in Haskell.  Answering "Why was it implemented like this?" and "How do we do it in only Haskell?" is the first order of business.  After being selected for SoC 2006, of course ;)

It also helps that one of the implementors of STM Haskell is willing to be my mentor! 


Please see the Bibliography of the Project Application for the current bibliography.  More entries will be provided in the near future.