
Oregon Summer School: Early Lectures

I am having a great time at the Oregon Summer School for Language-based Techniques in Concurrent and Distributed Systems and learning loads of interesting things. I'll summarize a few of the earlier lectures, but there are lots more besides these.

Cormac Flanagan presented his technique for checking and inferring that Java programs observe a locking discipline that gives desired atomicity properties. A motivation for this is that Java's synchronized keyword allows you to protect a block with a lock, but it is up to you to make sure that all of the right locks are held; in general it can be hard to tell whether a piece of code is atomic with respect to the rest of the program. Flanagan's system allows you to annotate a method or a block with atomic; a static checker then infers whether it is truly atomic by virtue of the locks it holds (vis-à-vis other bits of code in the program). The analysis is somewhat conservative, in that it may reject programs that are actually correct, but the techniques seem to lead you to write the kind of lock-based code that is ordinarily used in practice. Flanagan's team has run the checker successfully on large bodies of threaded benchmark code, and has even found bugs in the Java standard library (e.g. with the append function on Strings). The biggest drawback to this work is that it still relies on locks, so deadlock can still occur.
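Here is a minimal Java sketch of my own (not from the lecture) of the kind of non-atomicity such a checker is designed to flag: every method is individually synchronized, yet the compound operation is not atomic, because the other buffer's lock is taken and released between calls.

    // Hypothetical example; each method holds a lock, but appendAll is still
    // not atomic with respect to the rest of the program.
    class Buffer {
        private final StringBuilder data = new StringBuilder();

        synchronized int length() { return data.length(); }
        synchronized char charAt(int i) { return data.charAt(i); }
        synchronized void add(String s) { data.append(s); }
        synchronized void clear() { data.setLength(0); }

        // Holds this buffer's lock throughout, but takes and releases
        // other's lock once per call, so a concurrent clear() on `other`
        // can slip in after length() and make charAt() throw.
        synchronized void appendAll(Buffer other) {
            int n = other.length();
            for (int i = 0; i < n; i++) {
                data.append(other.charAt(i));
            }
        }
    }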

Dan Grossman gave a nice survey of possible semantics for transactions. Here again, programmers would wrap blocks of code with an atomic keyword, but now we are proceeding from semantics to implementation, rather than the other way around. Some of the semantic questions surround the interaction of transactions with exceptions, the nesting of transactions, and the distinction between weak and strong atomicity [1]. Dan convinced me that when an exception escapes an atomic block, you should not roll back the transaction. One good reason for this (among many) is that it preserves "serial elision" [2]: if you erase the atomic keywords, you get a sequential program that behaves the same as the original program would behave in a sequential context.
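To make the exception question concrete, here is a rough stand-in of my own in Java (Grossman's atomic is a language construct, not this lock pattern): if the block throws, the partial update simply remains visible, which is how the sequential program obtained by erasing the atomic keyword would also behave; rolling the update back would break that correspondence.

    import java.util.ArrayList;
    import java.util.List;

    class Registry {
        private final Object monitor = new Object();
        private final List<String> names = new ArrayList<>();

        void addBoth(String a, String b) {
            synchronized (monitor) {   // stand-in for: atomic {
                names.add(a);
                if (b == null) {
                    throw new IllegalArgumentException("second name missing");
                }
                names.add(b);          // never reached if we threw above;
            }                          // } the first add is NOT rolled back
        }
    }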

Strong and weak atomicity are distinguished by how you treat reads and writes that are not protected by an atomic block. An interesting tidbit is that Haskell's STM system moots the distinction by cleanly separating transactional memory from non-transactional memory (they have different types). This means that the low-level implementation need only provide weak atomicity, yet at the language level there is no risk of other threads altering transactional memory.
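For a sense of what the weak/strong distinction looks like in practice, here is a hedged Java sketch of my own, using a lock as a stand-in for a transaction: the unprotected write below can interleave with the "transaction" under weak atomicity, while strong atomicity would exclude it. Haskell sidesteps the question because transactional data lives in TVars, which ordinary non-transactional code cannot touch at all.

    class Account {
        private int balance = 100;
        private final Object txLock = new Object();

        void withdrawTen() {
            synchronized (txLock) {    // stand-in for an atomic block
                int b = balance;       // read...
                balance = b - 10;      // ...write; an unprotected writer can
            }                          // sneak in between these two steps
        }

        void sneakyDeposit() {
            balance += 5;              // non-transactional access, no lock held
        }
    }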

Dan's thesis is that if you provide explicit threads and mutable shared memory, *then* transactions are a great solution to the problems that arise—but that it's not clear whether threads with shared memory are the best model of concurrency.

In contrast, we've had two presentations on alternative approaches to concurrency. Charles Leiserson presented Cilk, a conservative, faithful extension of C that offers declarative parallelism. Designed for big parallel algorithms (think FFT, matrix multiplication, etc.), the language lets you tag subexpressions for evaluation in separate threads; the principal mode of communication is simply the return value of the subexpression. This model removes the chance of race conditions and deadlocks (although the full power of C is still available, so you can still shoot yourself in the foot). The language design seems reasonably elegant (just a few keywords are added) and it has the property they call "serial elision"; a drawback is that return values need to be used in specific ways (e.g., assigning directly to a local variable), and there are ways to use it unsafely (e.g., trying to use a spawned return value before the spawned thread has actually returned).
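This is not Cilk itself, but a rough fork/join analogue of my own in Java of the spawn-and-return-a-value style; in Cilk the same structure would be ordinary C plus the spawn and sync keywords.

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Parallel Fibonacci in the Cilk style: the only communication between
    // the parent and the spawned child is the child's return value.
    class Fib extends RecursiveTask<Long> {
        private final int n;
        Fib(int n) { this.n = n; }

        @Override
        protected Long compute() {
            if (n < 2) return (long) n;
            Fib left = new Fib(n - 1);
            left.fork();                          // ~ "spawn fib(n-1)"
            long right = new Fib(n - 2).compute();
            return left.join() + right;           // ~ "sync", then combine
        }

        public static void main(String[] args) {
            System.out.println(new ForkJoinPool().invoke(new Fib(30)));
        }
    }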

Leiserson also gave some very simple and interesting ways to analyze the parallelism of an algorithm, which give you a good guideline for how much speedup you can expect as you add more processors. Essentially, you add up the total amount of work done as well as the length of the critical path (the longest dependency chain) and look at the ratio. I hope to post more about this another time.
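As a back-of-the-envelope illustration (my numbers, not Leiserson's): on P processors the running time can't beat max(T1/P, Tinf), where T1 is the total work and Tinf the critical-path length, so the ratio T1/Tinf caps the useful speedup.

    public class WorkSpan {
        public static void main(String[] args) {
            double work = 1_000_000;   // T1: total operations
            double span = 1_000;       // Tinf: longest dependency chain
            for (int p : new int[] {4, 64, 1024, 16384}) {
                double bound = Math.max(work / p, span);
                System.out.printf("P=%5d  time >= %.0f  speedup <= %.0f%n",
                        p, bound, work / bound);
            }
            System.out.printf("parallelism = T1/Tinf = %.0f%n", work / span);
        }
    }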

Matthew Flatt of PLT Scheme fame (currently at the University of Utah) gave a really neat demo of the Concurrent ML primitives in Scheme, in real time: he built up a program iteratively, running it each time, while we watched. This worked surprisingly well. At times it was easy to get confused, but asking lots of questions helped me grasp the ideas. The concurrency primitives are a lot like the pi-calculus, in that they allow synchronous rendezvous and a "choice" combinator. This sublanguage is pure in the way that Erlang is: locks are not needed. Of course, the synchrony means the primitives are hard to implement (efficiently and reliably) in a distributed setting.
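The rendezvous half of those primitives has a loose analogue in Java's SynchronousQueue; this sketch is my own and doesn't capture CML's choice combinator or its first-class events, but it shows the key property that a send does not complete until a receiver is ready.

    import java.util.concurrent.SynchronousQueue;

    public class Rendezvous {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<String> channel = new SynchronousQueue<>();

            Thread producer = new Thread(() -> {
                try {
                    channel.put("hello");   // blocks until someone takes
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();

            System.out.println("received: " + channel.take());  // rendezvous
            producer.join();
        }
    }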

Michael Hicks presented "futures," a language feature first implemented in MULTILISP. Futures permit declarative concurrency much as Cilk does, but the language handles some of the necessary mechanics: futures automatically collect or wait on result values when they're needed, whereas Cilk requires the programmer to wait for the results explicitly (and dies if this is done incorrectly!).
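Java's Future is only a rough analogue of MULTILISP futures (the wait is an explicit get() rather than an implicit touch), but this sketch of mine shows the shape: spawn the computation, keep working, and claim the value when it is actually needed.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class FutureDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newSingleThreadExecutor();
            Future<Long> sum = pool.submit(() -> {
                long s = 0;
                for (long i = 0; i < 1_000_000; i++) s += i;
                return s;
            });

            // ... other work could happen here while the sum is computed ...

            System.out.println("sum = " + sum.get());  // waits if not done yet
            pool.shutdown();
        }
    }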

[1] Weak atomicity requires only that transactions are atomic with respect to one another; strong atomicity requires that they be atomic even with respect to code outside atomic blocks. The latter is, of course, much harder to implement.

[2] So-called by the Cilk people.

Comments

You say that Cilk eliminates data races and deadlock.

My opinion is that Cilk lets programmers describe multithreaded programs in a higher-level manner, while still preserving low-level control over what is happening. It is orthogonal to the data-race issue. You still need to use locks if you start threads that access shared memory. That's why they provide the Nondeterminator, their dynamic data-race detection tool.

Now, I believe a comparison between (j)Cilk and the proxy framework presented by Hicks may be interesting. It's too bad the two groups haven't communicated so far.

The proxy framework is nice because it is general: you can implement both `futures' and lazy evaluation in essentially the same way. However, because the mechanism is more general, they paid less attention to semantics. I believe that the interaction between futures and exceptions is completely broken (serial elision is not preserved with exceptions, even in the absence of data races).

Radu,

I guess I didn't come across the way I wanted to. I said that the model of declarative concurrency eliminates the chance of race conditions; Cilk offers that model, but it still allows you to use other communication and synchronization features, as you mention.

Like yourself, I was sorry that the groups didn't communicate more. Cilk offers something very, very similar to the idea of futures from MULTILISP (and later other languages), which Michael Hicks described. Other groups use other names for very similar ideas, too. Someone should put this all on a common footing!
