July 7, 2009

Thesis submitted

So I've been rather busy. The first bit of news is, I submitted a thesis. Warning: it has not been examined and may be full of errors, inaccuracies, omissions, or ill-advised conclusions.

The thesis turned out better than I'd hoped, in the sense that all four major pillars of the work were also accepted to conferences (the papers are "Links: Web Programming without Tiers," "The Essence of Form Abstraction," "A Located Lambda Calculus," and "The Script-writer's Dream: How to Write Great SQL in Your Own Language and Be Sure It Will Succeed").

The night I printed it, I was elated, walking to the printer to collect my final copy, knowing I had done it—created some programming language features, written a 180-page thesis, persevered. In a couple of months I'll defend it, warts and all, and then perhaps I'll be completely finished. But it's downhill from here. I think.

Update: The final version has now been submitted, and the above link points to the latest version.

August 5, 2007

Memoizing pages

Tim Bray, "On Being for the Web":

What do ETags do? They allow you, when you fetch a Web resource, to send a short string along with the request saying “this is the signature you sent me for the version I know about”; then the server can look and see if the signature is still good, and if so not bother sending the data redundantly. Which is potentially a big, big win.

But also potentially not. If you look at actual real-world servers that are maxed out, they’re mostly maxing out on computation or back-end database traffic, not on bandwidth to the Net. So the saving that would be really valuable would be if you didn’t have to rebuild the requested page. Unfortunately, both Rails and Django rebuild the requested page, then compute the signature, then check the ETag. Which is extra run-time work in order to conserve a resource (Web bandwidth) that probably isn’t the problem.

This is sharp. Maybe web frameworks (or <cough /> web-centric languages) should provide support for easy memoization.
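To make that concrete, here's a minimal sketch (in Python, with hypothetical names) of what built-in support might look like: memoize the rendered page on a request key, so the ETag comparison happens before any rebuilding rather than after:

```python
import hashlib

_cache = {}   # request key -> (etag, rendered page)

def render_page(key, render):
    """Memoize page rendering: only call `render` when the key is absent."""
    if key not in _cache:
        body = render(key)
        etag = hashlib.md5(body.encode()).hexdigest()
        _cache[key] = (etag, body)
    return _cache[key]

def handle_request(key, render, if_none_match=None):
    """Consult the cache *before* rebuilding the page."""
    etag, body = render_page(key, render)
    if if_none_match == etag:
        return 304, etag, None      # Not Modified: no rebuild, no body
    return 200, etag, body
```

A second request carrying the ETag then costs a cache lookup, not a render. (Cache invalidation, the actually hard part, is left as an exercise.)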

March 21, 2007

Overloading in web programming languages

A list of things pronounced "links" that relate to either the web or programming:

Published slides from Amazon talk 16 Mar 07

Herewith I'm making available the slides from the lunchtime talk I gave at Amazon's Edinburgh office last week (16 March 2007). Thanks again to Andrew Birkett for inviting me.

It might be hard to gather much information from these slides without the accompanying performance, but you might enjoy looking at the drawings or trying to run the code samples.

March 17, 2007

Links Talk at Amazon in Edinburgh

So yesterday I went down to Amazon's development office in Edinburgh, and gave a talk on Links, at the invitation of Andrew Birkett. The talk went well and I was quite pleased with the response and the sharp questions.

One of the biggest concerns that the Amazon people had, which we haven't tried to address in Links, is failure: how the language allows them to handle failure. In Amazon's systems, loads of interdependent services cooperate to display a single page, and any of them might go wrong; the developers described to me a wide variety of alternate responses that they might want to give, depending on the situation. For example, in some instances, a page feature might collapse down to nothing (disappearing from the page) when it fails. Other times, if a service doesn't respond, the front-end web software might use cached data to display a feature.

This came up in regard to Links because of the way we're making client-server (Ajax) calls simple RPC calls. The question there is, what happens if the remote call doesn't finish successfully, either because the server code couldn't get the right data, or because the network mechanisms themselves failed; how in Links can we handle that anomaly? Part of the answer might be an ordinary exception mechanism, which we can support fairly easily, but we should think more about how Links programmers deal with exceptional situations.
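As a rough illustration of the kind of policies the Amazon folks described (a Python sketch with hypothetical names, not Links code): catch the failure of the remote call, fall back to cached data if we have any, and otherwise collapse the feature to nothing:

```python
class ServiceError(Exception):
    """Stands in for any RPC failure: bad data, network trouble, timeout."""
    pass

_feature_cache = {}   # last known-good result per page feature (hypothetical)

def call_with_fallback(feature, rpc):
    """Try the remote call; on failure, fall back to cached data,
    and failing that, make the feature disappear from the page."""
    try:
        result = rpc()
        _feature_cache[feature] = result   # remember for next time
        return result
    except ServiceError:
        if feature in _feature_cache:
            return _feature_cache[feature]  # stale but displayable
        return ""                           # feature collapses to nothing
```

An exception mechanism gives the raw material for this, but the interesting part is letting the programmer pick the policy per feature.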

The crowd was very friendly and engaged with the talk quite deeply, I think. They fed me plenty of questions on particular points. Many of these had to do with various kinds of possible failure, as I mentioned; another theme was metaprogramming, prompted because they noticed a certain amount of repetition in my slides (e.g. (foo=foo, bar=bar) when constructing records; I do hope we can improve on that).

I gather they [present-day Amazon developers] do most of their work in Java, but they weren't thrown off when I started talking about continuations or any of the functional-programming idioms that we use in Links. There were a few self-confessed language aficionados in the crowd, but they weren't the only ones who didn't miss a beat at the functional idioms, the tuple types, or the unusual kinds of variable binding we use for our "formlets" (oh, reader—you don't know about our formlets—I'll have to post about that soon).

Between the talk itself and chatting with developers outside of it, I had a quite nice time. Thanks to Andrew and all of the others for listening so well & treating me nicely, even though I'm an academic!

UPDATE: Slides from the talk are available.

January 8, 2007

Work Today

Spent some time hacking on Links:

  • Attempted to convert our manual (in POD format) into LaTeX. In the end I felt that it's easier to work with POD format as long as we don't need any fancy stuff like formulae.
  • Added a naive dead-code-elimination optimization.
  • Spent some time experimenting with converting the CPS translation (for the JS target) into a one-pass conversion. Although this was a good exercise and revealed some bugs and opportunities for optimization, I think a true one-pass translation is still beyond me.
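For the curious, a dead-code elimination as naive as the one I added can be sketched in a few lines. Here's a toy version in Python over tuple-encoded terms (an illustration, not our OCaml code): drop a let-binding whenever the bound variable doesn't occur free in the body, assuming the bound expression is pure.

```python
def free_vars(expr):
    """Free variables of a toy term: ('var', name) | ('lit', v) | ('let', name, bound, body)."""
    tag = expr[0]
    if tag == 'var':
        return {expr[1]}
    if tag == 'lit':
        return set()
    # ('let', name, bound, body): name is bound only in the body
    _, name, bound, body = expr
    return free_vars(bound) | (free_vars(body) - {name})

def eliminate_dead_code(expr):
    """Remove (pure) let-bindings whose variable is never used."""
    if expr[0] == 'let':
        _, name, bound, body = expr
        body = eliminate_dead_code(body)
        if name not in free_vars(body):
            return body                     # binding is dead: drop it
        return ('let', name, eliminate_dead_code(bound), body)
    return expr
```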

August 1, 2006

Oregon Summer School: Further Notes

I've been putting off writing up the rest of the lectures from the Oregon Summer School because I seem to have lost my trusty notebook where all the brilliant things were intoned. But I guess it's gone on long enough; the world must know what happened at Oregon Summer School 2006.

Some of the presenters I mentioned before talked on more than one topic. Michael Hicks, after talking about futures, went on to discuss some work on "dynamic software update" or DSU. Hicks has developed a set of techniques for dynamically loading a new version of an existing program, and switching it over while it runs, with zero downtime. This includes translating old data representations to new ones. As I listened to the talk, I kept thinking, "OK, sure, we're going to design our code in this highly-restricted way, and then as long as we're willing to bend over backwards to do some tricky retrofitting, then hypothetically we might be able to update the code without downtime."

I was wrong! Hicks and his team have actually taken three years' worth of updates to sshd and vsftpd (amounting to a couple dozen updates, including some whole-number revisions) and updated running instances with each successive version, all without crashing or taking the server down. I was quite astonished that these techniques could be applied to changes that have already been made in the wild. Of course, they had to write some code to translate in-memory data structures on the fly—but they didn't have to re-architect the application to make it fit. Everyone in the seminar room woke up when Hicks showed the slide showing all the versions, with their dates, that had been dynamically loaded into these servers.
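The flavor of the data-translation part, in a hypothetical Python sketch (the record shapes and names are made up, not taken from Hicks's system): when the new version changes a data layout, you supply a function from old state to new, and the update machinery maps it over the live state at the update point.

```python
def upgrade_v1_to_v2(conn):
    """Hypothetical translation: v2 splits the v1 'addr' field into host and port."""
    host, port = conn['addr'].rsplit(':', 1)
    return {'host': host, 'port': int(port), 'state': conn['state']}

def dynamic_update(live_connections, translate):
    """At the quiescent point, rewrite every live piece of state in place."""
    return [translate(c) for c in live_connections]
```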

I would be interested to see whether these DSU techniques turn out to be a good software-engineering tradeoff in the long run. Most of the time, just having an extra machine to handle load while you bounce individual servers to the new version is a cheap way to get the same result. And you still have the challenge of writing your updates so that they're compatible on the wire: you can update sshd's internal structures on the fly, but updating the protocol might be more challenging. Also, to be slightly critical, sshd and vsftpd together make a pretty constrained class of software: slow-changing servers that mainly wait for connections and spawn off processes to handle them. Would this work for a more sophisticated system like a fancy real-time game system, where the gamers are actively interacting through the system?

Matthew Flatt argued for programming-language features inspired by OS features. The case was reasonably compelling: an IDE like DrScheme needs to run user programs in a special bomb-proof box, so that user programs can't impinge on the workings of DrScheme itself. This extends to lots of issues: device ownership, memory consumption, non-termination. Flatt argued for an abstraction called a "custodian" that manages all those resources together; killing the custodian frees up all the resources it manages. At the same time, he wants to enable sharing of data between programs, as an OS might allow. This makes the memory-management problem much harder, of course, since you need a policy for determining which custodian is "charged" for a block of memory, when it's shared between many. Flatt outlined a policy, whose details I didn't get, which apparently works in his setting.

Sandhya Dwarkadas talked about transactional memory from the hardware point of view. Unfortunately, her talk was pitched in the vocabulary of computer architects, so I didn't understand any of it! At a high level, what I took away was that transactional memory might be easy for processor makers to provide, by taking advantage of the cache-coherency systems that are already being included in multiprocessor machines.

Jeff Foster talked about another system for statically detecting race conditions, like Flanagan's for Java, but this time for C code. It amounts to a kind of pointer alias analysis, and the details are very complicated. A question that wasn't raised, which just occurred to me: why was alias analysis necessary in C but not in Java? I suspect the answer is that the Java system may assume that most accesses to data members are from within the class definition (and thus are not by reference).

Shaz Qadeer had the true misfortune of presenting last, after we'd patiently sat through 48 hours of lectures. For myself, I know I didn't follow his (or Jeff Foster's) presentation as closely as most of the others. Someone has to go last, I guess. Qadeer's presentation was on model-checking concurrent software. Some of the material he presented was basic model-checking stuff (like "What is LTL?") but he quickly jumped ahead to cover fancy techniques for state-space reduction. I'm always surprised when speakers do that. If you assume that I don't know the basics, then why expect me to absorb those along with some advanced material in one lecture? If you want to focus on the advanced stuff, then why not just say, "This is for people who already know X," and give a quick refresher for X? The advanced students were probably bored while we newbies asked questions about LTL, and we newbies got bored once our intuition had been outstripped and we couldn't follow the lecture closely anymore.

All in all, the quality of the presentations at the Summer School was quite high. I was surprised that I could follow about 40 of the 48 hours of lectures, and got something out of almost every one (the previous 48 seminars I'd attended didn't have half that hit rate).

We also had a great time: Jim Allen's nightly walks around Eugene were quite nice, and we always ended up at a pub (if you like beer, they have lots of good ones in Oregon [my favorite: the Black Butte Porter, not to everyone's taste]). I met loads of people there and really enjoyed it. To PhD students in the US and abroad, I'd suggest you go to the Oregon Summer School in future years.

January 30, 2006

Multi-way Joins in Links

Multi-way joins are now working (transformed from simple list comprehensions, that is) in the Links interpreter. There were a number of subtleties in the rewriting code that prevented this from working.

Currently, my cohort Jeremy and I are not sure whether it's better to use a deep-searching algorithm, like the one we have now, or rather to use simple, local rewrite rules with a fixed depth. The advantage of simple rules would be the ease of showing correctness. But it might be hard to show that such rules always reach optimal behavior: there is the possibility of hill-climbing, reaching a local optimum and thereby missing a global optimum. The deep-searching rewriter we have now is rather hard to keep track of, as it relies on simultaneous knowledge of several different kinds of AST nodes, whereas we suspect that a shallow-rule system could be built with only about one rule for each way of combining two node constructors.
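To illustrate what I mean by simple, local rules: each rule inspects only a fixed-depth pattern of constructors, and the rule set is applied bottom-up until nothing fires. A Python sketch (tuple-encoded terms; the 'for'/'product' constructors are hypothetical stand-ins for our AST):

```python
def rewrite_once(expr, rules):
    """Apply the first matching local rule at the root, else leave expr alone."""
    for rule in rules:
        out = rule(expr)
        if out is not None:
            return out
    return expr

def rewrite_fixpoint(expr, rules):
    """Rewrite bottom-up, repeating until no rule fires anywhere."""
    if isinstance(expr, tuple):
        expr = tuple(rewrite_fixpoint(e, rules) if isinstance(e, tuple) else e
                     for e in expr)
    out = rewrite_once(expr, rules)
    return rewrite_fixpoint(out, rules) if out != expr else expr

def fuse_for(expr):
    """One local rule, for one pair of constructors:
    ('for', x, ('for', y, body, t), s)  ~>  one comprehension over a product."""
    if (isinstance(expr, tuple) and expr[0] == 'for'
            and isinstance(expr[2], tuple) and expr[2][0] == 'for'):
        _, x, (_, y, body, t), s = expr
        return ('for', (x, y), body, ('product', s, t))
    return None
```

The worry is exactly that such a loop can stop at a local optimum: no single shallow rule fires, even though a deeper rearrangement would expose a better program.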

To decide this question, I think we need to flesh out the shallow-rule system, and in parallel we need to pursue some kind of assertions about the deep-search system. I still haven't been able to find any tidy results about this in the literature. There are scads of papers by Libkin & Wong that sound related, but I haven't yet done a thorough search to see what's there.

Some things I discovered: Codd, 1970, A relational model for large shared data banks is the original paper for (flat) 'relational algebra' which is the formalism ours needs to compile into. Libkin & Wong's 1994 paper New Techniques for Studying Set Languages, Bag Languages and Aggregate Functions argues about the comparative power of various nested calculi, including those called BQL and NRCaggr.

January 23, 2006

Term-Rewriting Combinators

Progress on the Links interpreter.

Jeremy checked in his completely-overhauled optimizer combinators. We now have ways of defining term-rewriters by combining rewriters; and we can compose a structurally-recursive function just by giving one case and specifying a strategy, such as "bottom-up" or "top-down." This approach is pretty clearly laid out in the papers of Limsoon Wong, but it took us a while to rewrite our existing optimizer functions this way and to find just the right combinators. Hopefully we'll have a chance to publish the specific techniques that we're finding useful, and the pitfalls.
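The idea of the combinators, transposed into a Python sketch (our real code is OCaml; the names here are made up): a one-node rewriter is a function that either returns a rewritten term or None, and a strategy like "bottom-up" lifts it over whole terms.

```python
def bottom_up(rule):
    """Lift a one-node rule into a whole-tree rewriter, children first."""
    def go(term):
        if isinstance(term, tuple):
            term = (term[0],) + tuple(go(t) for t in term[1:])
        out = rule(term)
        return out if out is not None else term
    return go

def top_down(rule):
    """Apply the rule at the root first, then descend into children."""
    def go(term):
        out = rule(term)
        term = out if out is not None else term
        if isinstance(term, tuple):
            term = (term[0],) + tuple(go(t) for t in term[1:])
        return term
    return go

def then(r1, r2):
    """Sequence two whole-term rewriters."""
    return lambda term: r2(r1(term))

def fold_add(t):
    """Example one-node rule: constant-fold ('add', int, int)."""
    if isinstance(t, tuple) and t[0] == 'add' and all(isinstance(x, int) for x in t[1:]):
        return t[1] + t[2]
    return None
```

With this, `bottom_up(fold_add)` collapses arbitrarily nested constant additions in one definition, which is exactly the economy we're getting in the optimizer.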

Now, we've cut our optimizer code from ~1200 lines to about 400, a rather good improvement. It's far more readable and approximately as correct! We also spent some time pairing to create unit tests, which was fruitful and makes me feel much more confident that these things will keep working in the future.

January 11, 2006

Browsers are [not] quick; more on functional GUIs

UPDATE: This was complete rot. My test code was buggy, which caused the flicker. I patched it up and my draggable list moves smoothly now. Jeremy reports that he once tried an app that would re-create an entire (small) DOM tree on every keypress, and it worked reasonably fast. How do browsers manage to redraw pages as smoothly as they do?

An observation: web browsers move very fast when executing calls that modify the DOM.

For example, suppose list is a list of items, of which x is one, and you delete the item, then immediately reinsert it in the same place:

y = x.nextSibling;
list.removeChild(x);
list.insertBefore(x, y);

This will typically cause the whole list to be redrawn in the intermediate state, so the list gets shorter for a fraction of a second before the element is reinserted.

However, if you delete an element and reinsert it in a different place, the browser seems to redraw it smoothly, as if the element just moved:

y = x.previousSibling;
list.removeChild(x);
list.insertBefore(x, y);

That seems to be a difference of behavior, which I'm at a loss to explain.

In any event, this seems to rule out the "whole component replacement" that I talked about before, in implementing a purely-functional web GUI.

How to work around this difficulty? I still like the scoped, conditionalized structure that I outlined in that post. I suppose we could still have an inner function return a complete component, and have the runtime system compare it with the existing state of the component, executing a minimal series of DOM instructions to make the two agree.

Another approach, which I'm preferring, is to have the inner function return a "delta"—an expression representing how the component should change, something like a list of DOM instructions. I wouldn't want to allow the instructions in such an object to depend on one another, so I'd like to cook up a set of "deltas" which would be mutually independent, if possible. This would amount to a new mini-language for manipulating DOM trees, one that had no notion of sequential execution.
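The independence requirement can be made concrete: if a delta is a set of instructions each addressed to a distinct node, then applying them in any order gives the same result, and "sequential execution" never enters into it. A toy Python sketch (the DOM here is just a dict from node id to child list):

```python
def apply_delta(dom, delta):
    """dom: dict of node id -> list of child ids (a toy DOM).
    delta: list of (node id, replacement children) pairs, one per distinct
    node, so the instructions are mutually independent."""
    out = dict(dom)
    for node, children in delta:
        out[node] = children
    return out
```

Since each instruction touches a different node, permuting the delta cannot change the outcome, which is the property I'd want the mini-language to guarantee by construction.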

January 9, 2006

Query optimization limitations: projection

Another problem for Links' current query optimization algorithm. It should be possible to create a proper query out of something like this (supposing y is a value that might or might not be in the table one or more times):

for x <- t where x == y in
    [x]

Yet currently the algorithm requires the loop variable to be used only through projections (e.g. x.age), giving up if it sees it used raw.

To compile this into SQL we'd probably need to expand it to a fieldwise comparison (x.a == y.a && x.b == y.b && ...) but that should be no sweat.
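The expansion itself is mechanical, given the field names from the table's type. A Python sketch of the rewrite (operating on strings for illustration; the real thing would work on the AST):

```python
def expand_row_equality(var, other, fields):
    """Rewrite a raw row comparison `var == other` into the fieldwise form
    SQL can handle: x == y  ~>  x.a == y.a && x.b == y.b && ..."""
    comparisons = [var + "." + f + " == " + other + "." + f for f in fields]
    return " && ".join(comparisons)
```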

The slightly deeper problem is that the current algorithm works by tracking which variables correspond to which table fields, rather than a more general approach that would associate variables with whole table rows as well as fields. This might be a modest fix or it might require more digging.

January 5, 2006

Reference Draggable List

Worked up a reference implementation of a draggable list in raw JavaScript today. This reminded me of the language hurdles to doing such a simple thing, and gives me something that's as close to the functional style as I can presently imagine. It does use global data and mutable locations a bunch, so it will be fun to tear it apart and give a truly functional way of writing it. In all, it's about 100 lines, though it's not terribly flexible, so it will want some smithing.

January 3, 2006

Better Comprehension Through Memo(r)ization

I've been thinking about how to deal with these problems optimizing query-backed comprehensions.

A reasonably general example of the problem is as follows. I include a join condition for conversation's sake, but also keep in mind the possibility that the condition could be always-true.

for x <- s in
    let y = f(x) in
        for z <- t where z.id == x.id in
            (x, y, z)

First let's characterize the running time of this algorithm, as is. Let s also stand for the size of the table s, and let F(n) be the cost of running f on an input n. Also suppose that there is a cost q for each query, and that this is non-trivial since it involves marshalling data to and from an external process, which may be on another machine. Finally, let k be the average number of rows in t that match the join condition; i.e., the k such that the number of tuples in the result is sk. The above algorithm should run in time O(sF(n) + sk + sq).

Now how is it affected by optimization? We have a rewrite rule which would rewrite the above as

for (x, z) <- s * t where z.id == x.id in
    let y = f(x) in
        (x, y, z)

This version performs fewer queries but evaluates f(x) more times; it should take time like O(skF(n)). In lots of cases, this is worse.

It might be possible to estimate these costs, yielding a heuristic optimization rule that the compiler would employ.

But if we know that f is pure, we can memoize it, and that would give a much better optimization.

Supposing we can do the memoization lookups in some kind of O(log m) time, this would give a program that runs in time upper-bounded by O(sF(n) + sk + s log m); we've saved the O(sq) cost of doing multiple queries but we've incurred the cost of the lookups at O(s log m).
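To see the effect in miniature (Python stands in for Links here, and f is a hypothetical expensive pure function): in the joined version, f is called once per (x, z) pair, but with memoization its body only runs once per distinct x.

```python
from functools import lru_cache

calls = 0   # counts actual evaluations of f's body

@lru_cache(maxsize=None)
def f(x):
    global calls
    calls += 1
    return x * x          # stand-in for some expensive pure function

s = [1, 2, 3]
t = ['a', 'b']            # every x matches k = 2 rows of t

# The "optimized" join calls f once per (x, z) pair (sk = 6 calls)...
result = [(x, f(x), z) for x in s for z in t]
# ...but memoization means the body ran only s = 3 times.
```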

Tricky Query Optimizations

Optimizing language-integrated queries is trickier than it looks.

Besides the structural problem I talked about yesterday, there are cases where a naive "optimization" can easily make the result run much slower than the direct version. This will happen whenever some time-consuming operation occurs between an inner and an outer comprehension. Suppose t is a table with loads of big numbers in the number column; consider this Links algorithm:

for x <- t in
    factors = factors_of(x.number);  # (costly)
    for y <- s in
        (y, factors)

A naive optimizer would say, "I should combine these two queries rather than re-running the inner query over and over again." It would yield an algorithm that works like this:

for (x, y) <- (t, s) in
    factors = factors_of(x.number);  # (costly)
    (y, factors)

But this algorithm might evaluate factors_of many more times than the original, since there may be more than one pair (x, y) for any given value of x. The optimization improved performance with respect to the database, but worsened the performance of the algorithm itself.

January 2, 2006

Nesting into SQL

It's perfectly plain in Buneman et al.'s "Comprehension Syntax", but I didn't realize it until now: Links's list/bag/set comprehensions don't correspond to SQL. There are simple expressions, using only comprehensions and no fancy language features, that cannot be compiled directly into SQL, because SQL's relations are always flat, whereas the structures our comprehensions build can be nested. To wit:

for x <- t in
  [(x.name, for y <- s where y.t_id == x.id in [y.id])]

This should give us a list of pairs, the first of which is the name attribute of a thing in t and the second of which is a list of corresponding id values taken from s. But SQL has no structure directly corresponding to a "list of tuples of lists."

Currently, the Links interpreter will simply decline to optimize these comprehensions, leading to a single query against the table t and as many queries against s as there are rows in t. This is abysmal; it's distinctly worse than coding in PHP where at least you can write one efficient query.

I see two plausible solutions; one is to modify Links to bring it into line with SQL's model. I'm not too fond of this; ruling out nested structures across the board would be unacceptable, and creating a sub-language for queries, which contained only flat relations, would diminish the Links goal of unifying the algorithmic language with the query language.

Alternatively, and much better, we could do the join, and then post-process the results to group them into the proper structure. In this case, the query would be

select t.name, s.id from s, t where s.t_id = t.id order by t.name

And the post-processing might be

fun grouper(row, accum)
    if accum <> [] && row.name == hd(accum).name 
    then [{name = row.name; id = [row.id]++hd(accum).id}]++tl(accum)
    else [{name = row.name; id = [row.id]}]++accum
fold(grouper, [], results)

This just takes the records with a common name field and folds them into a single record, with all the corresponding id fields listed therein. I'm relying on the ordering by t.name in order to do this in linear time, but a more general/robust solution could probably work in n log n time, by keeping an indexed structure to look up the record into which any given row should be folded.
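For comparison, the same fold in Python (assuming, as above, that the rows arrive sorted by name; this version appends rather than prepends, so the groups come out in first-seen order):

```python
def group_rows(results):
    """Fold rows sharing a name into one record listing all their ids.
    Assumes `results` is sorted by 'name', so this runs in linear time."""
    grouped = []
    for row in results:
        if grouped and grouped[-1]['name'] == row['name']:
            grouped[-1]['id'].append(row['id'])   # same group: accumulate
        else:
            grouped.append({'name': row['name'], 'id': [row['id']]})
    return grouped
```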

It will take a bit of tinkering to come up with the grouper for a given nested comprehension, in general. Some thoughts toward that:

In SQL, only the so-called aggregate functions (max, average, sum, count) may be applied to the values being grouped together—essentially, only functions that yield an atomic datum. The operation we need might be equivalent to a "group by" whose operation is list/bag/set construction. Looking at it this way might allow us to easily apply algorithms and transformations that already exist for SQL's "group by." But how to recognize what columns are being grouped by and which ones are being grouped together?

Well, a condition that makes the example problematic is that we have nested queries, and there is a "bare" piece of the top query: a value from t is used outside of the inner query. I think this condition implies the sort of nested structure I'm talking about. Furthermore, it is a smoking gun as to how we should structure the groupings. The variable used in the outer comprehension is the one to group by; the inner comprehension, in itself, becomes the aggregation operation.

This needs to be refined, of course, but I think it is a lead. Essentially, the idea is to take a nested comprehension like this:

for x <- t in
  [(x.name, for y <- s where y.t_id == x.id in [y.id])]

and convert it to a faux-SQL query like so:

select t.name, makebag(s.id) from s, t
where s.t_id = t.id group by t.name

where makebag() is the "aggregate function" that constructs a bag from its arguments. (In fact, this should fall out as a comprehension over the grouped elements.) The faux query comes apart into a real query and a post-processing step:

select t.name, s.id from s, t where s.t_id = t.id

group_by [name] makebag results

I can't see writing a general post-processor within Links, since it would need something like reflection, to dissect an arbitrary row on given fields. Let's use a Haskell-like meta-language notation. Suppose we want to group by some list of fields g and that project g is a function that projects those fields out to form a new record. We also have an aggregation function, f. The needed post-processor should behave like this (though more efficient algorithms should be used):

group_by g f rows = map (onSecond f) (collate g rows)
    where collate g list = [ let ys = whereEq (project g) x list in
                                 (x, map (project g') ys)
                             | x <- uniq (map (project g) list) ]
              where g' = complement g       -- fudging here a bit
          whereEq f x = filter (\y -> f y == x)
          onSecond f (x, y) = (x, f y)
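The same post-processor is easy to render runnably in Python, where a row is a dict and the "reflection" is free; project and collate come out as below, and makebag can be plain list:

```python
def project(fields, row):
    """Project the named fields out of a row (a dict), as a tuple."""
    return tuple(row[f] for f in fields)

def group_by(g, f, rows):
    """Collate rows on the fields g, then apply the aggregation f to the
    leftover columns of each group (cf. the meta-language version above)."""
    keys = []          # group keys in first-seen order
    groups = {}        # key -> leftover columns of its rows
    rest = [c for c in (list(rows[0].keys()) if rows else []) if c not in g]
    for row in rows:
        k = project(g, row)
        if k not in groups:
            keys.append(k)
            groups[k] = []
        groups[k].append(project(rest, row))
    return [(k, f(groups[k])) for k in keys]
```

Using list as the aggregation gives exactly the "bag construction" grouping; swapping in sum or max recovers SQL's ordinary aggregates.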

I aim to set all this in a more gentle theoretical framework. I'd like to have a formal definition of the NRC, SQL, and our query-optimizable language; then we can give the translation between them, and then it's a simple matter of programming. Shouldn't be too hard.


December 5, 2005

Items to do from today's meeting


  • Active Links code as the target of a/@href. (done)
  • Table declarations at top level are too eager.
  • Label expressions rather than serializing them in full (when serializing).

Major tasks:

  • Implement Sam & Dave's Wineshop.
  • Package up Links interpreter for neophytes to build it.
  • Think about building Links interpreter as a web service.

November 17, 2005

Sugar for continuations

Implemented a rudimentary version of the handle/with construction in Links. Now you can write, e.g.:

handle
    <foobar> {string_of_cont(handler)} </foobar>
with
    result -> "You gave me " ++ result ++ ", fool!"

The result of this construct is something like

<foobar> (some base64 stuff) </foobar>

and the base64 stuff is a continuation which wraps its argument in "You gave me ..., fool!" This is not quite what I want but it's a start.

November 14, 2005

Bugs fixed in Links (Mutually-recursive functions, etc.)

Today, I got mutually-recursive functions working at the toplevel in the Links interpreter. I also modified the way pickling is done to make sure that we never try to pickle a first-class OCaml function. Gilles did a weird thing of putting the functions that implement Links primitives directly into the environment, which makes them hard to pickle. I pulled them out of the environment, but we still have the problem that partial applications of primitives won't pickle right: the pickled form loses the args and forgets how many have been applied. That should be straightforward to fix, although it requires changing every primitive in the book.
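The fix I have in mind, sketched in Python rather than OCaml (all names hypothetical): represent a partial application of a primitive by the primitive's name plus the arguments received so far, so serializing keeps both and never touches the underlying closure.

```python
import pickle

PRIMITIVES = {'add': lambda x, y: x + y}   # hypothetical primitive table

class PartialPrim:
    """A partially applied primitive: pickles as (name, args), never a closure."""
    def __init__(self, name, args=()):
        self.name, self.args = name, tuple(args)
    def apply(self, arg):
        args = self.args + (arg,)
        fn = PRIMITIVES[self.name]
        if len(args) == fn.__code__.co_argcount:
            return fn(*args)               # saturated: actually run it
        return PartialPrim(self.name, args)
    def __reduce__(self):
        # pickle only the name and collected args; look the function up on load
        return (PartialPrim, (self.name, self.args))
```

Round-tripping a half-applied primitive through the pickler then preserves both the argument collected so far and the fact that one more is expected.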

I also want to start creating end-to-end tests that check that our web interface is working OK. There's a lot of untested stuff there right now, which makes me nervous. I'm looking into using Perl's WWW::Mechanize for this.

November 11, 2005

Functional Reactive Web GUIs

Here's a bit of syntax for doing a kind of "functional reactive web programming." This example aims to implement the draggable list problem.

fun dragList(origWidget) : mouseIsDown => DOM -> DOM {
    draggingElement = findHitElement(mouseDownPos);
    fun updateIndicator(widget) : mouseMoved => DOM -> DOM {
        reorderNodes origWidget draggingElement mousePos.y
    }
    finally {
        # updates the widget once the behavior's condition
        # finally becomes false
        reorderNodes origWidget draggingElement mousePos.y
    }
}

This defines a "behavior" called dragList which could be applied to a node in a DOM tree. The dragList widget has a type mouseIsDown => DOM -> DOM. Here the small arrow -> denotes an ordinary function type (arg on the left, result on the right). What comes before the big arrow (=>) denotes a condition. This is something which must be true for the runtime system to invoke this function. So the type mouseIsDown => DOM -> DOM says, "if the mouse is down, you can call this function with a DOM tree and expect another DOM tree." The runtime system will repeatedly pass the current DOM of the relevant widget and replace it with the result of the function.

You might apply the behavior to an HTML element something like this:

<ul l:behavior=dragList>

The same behavior can be attached to many page components this way. Conceivably, more than one component's behavior could be active at a given time. But within a behavior, internal functions should be serialized. In this system, a behavior can only destructively affect the component on which it is invoked—it follows from this that concurrent behaviors on distinct components cannot interfere. It remains to be seen whether every desirable GUI experience can be implemented this way.

Now, diving into the body of the dragList behavior:

    draggingElement = findHitElement(mouseDownPos);

The dragList behavior begins by letting draggingElement refer to the element where the mouse is originally clicked; we'll use it later.

Next dragList gives an internal function which has its own condition; as long as the outer block is active, the inner block stands to be called.

    fun updateIndicator(widget) : mouseMoved => DOM -> DOM {
        reorderNodes origWidget draggingElement mousePos.y
    }

The inner function's condition only applies while the outer function's condition is true. So what this block expresses is that, while the outer condition is true, whenever the mouse moves, the runtime system should call updateIndicator. In order to implement the nested conditions, the compiler will write code to register and unregister the inner function with the appropriate event handlers whenever the outer function's condition is satisfied/falsified.

Finally, the use of the inner function, like that of the outer function, is to update the component. The runtime system passes the current DOM for the component and replaces it with whatever the function returns. In this way, we model changes over time without destructive operations.

Now to progress to the next level of sophistication, we can observe that the type annotations are unnecessary and the compiler should be able to derive both the argument types and also the conditions under which the function needs to be called. This works because there are special thunks which refer to environmental variables that can change. Omitting the type annotations from the original figure, we get:

fun dragList(origWidget) {
    draggingElement = findHitElement(mouseDownPos);
    if (not(childOf(origWidget, draggingElement)))
        return origWidget
    fun updateIndicator(widget) {
        reorderNodes origWidget draggingElement mousePos.y
    }
    finally {
        # updates the widget once the behavior's condition
        # finally becomes false
        reorderNodes origWidget draggingElement mousePos.y
    }
}

The compiler could still determine, making use of referential transparency, that updateIndicator will not return a distinct result unless the value of mousePos has recently changed. Thus it can infer the condition type of mouseMoved and behave as before, registering the function with the mouseMoved event at the JavaScript level. Similarly, the outer function, dragList, should be invoked whenever the mouseDownPos special thunk has changed its value. In fact, the value returned will not change as long as draggingElement is a null value—that is, as long as mouseDownPos lies outside the elements within this widget, and findHitElement returns some node which is not a part of this widget. (In fact this explicit test is ugly, and in a moment I'll look at ways of eliminating it.)

This can be seen as a declarative approach to web GUIs, because each function, inner or outer, is essentially just "giving the value" of the widget as a function of some environmental conditions. Besides the mouse button and position, other environmental variables could be used (keys down, wall-clock time, a changing signal from the server, etc.). These "special thunks" are analogous to what Elliott and Hudak call "signals" in the area of Functional Reactive Animation [1].
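A rough model of these special thunks, in my own Python (not Links, and far simpler than a real FRP implementation): reading a signal gives its current value, and setting it re-runs every function registered on it—which is exactly the register-on-event behavior the compiler is meant to arrange:

```python
class Signal:
    """A changeable environmental value with change listeners."""
    def __init__(self, value):
        self.value = value
        self.listeners = []          # functions to re-run on change

    def get(self):
        return self.value

    def set(self, value):
        if value != self.value:      # no redundant recomputation
            self.value = value
            for f in self.listeners:
                f()

mouse_pos = Signal((0, 0))

updates = []
def update_indicator():
    # a behavior whose only changing input is mouse_pos
    updates.append(("indicator at", mouse_pos.get()[1]))

# "registering the function with the mouseMoved event":
mouse_pos.listeners.append(update_indicator)

mouse_pos.set((5, 7))    # runs update_indicator once
mouse_pos.set((5, 7))    # value unchanged: nothing recomputed
```

Unregistering when the outer condition goes false would just remove the function from the listeners list.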

Now ideally, the compiler should detect when two inner functions of one behavior would compete—when their conditions overlap—since then there would be a race to update the component, possibly with different values. This is a mistake, and the programmer should be prompted to change the code so that only one value of the widget is implied by any given set of environmental changes. Perhaps this detection could be by means of a condition type on the special thunks, and a typing rule which bubbles these conditions up to any containing expressions.

Lexical scoping is an important factor here, which needs to be pinned down. In this case the inner function was a function of origWidget, that is, the original structure of the component—rather than its own parameter, widget, which would give the current structure of the widget at the time of invocation. This is just a design choice for the behavior-designer; under other circumstances it may be more natural to use the current widget instead. Of course, the runtime system should take care not to destroy the lexically-scoped origWidget value while this behavior is active.

A lot needs to be pinned down. The condition types have been presented a bit too breezily. After all, mouseIsDown has a clear truth value at any moment; but mouseMoved describes an instantaneous change in a sort of continuous variable. Some care should be taken in defining these conditions. The question of how to handle an instantaneously-changing variable and its downstream effects has been looked at in functional reactive programming.

Also, a bit more expressivity would be useful. Above, we would prefer to describe the condition of the dragList function more finely as: "mouse is down and its original hit point is within me." This calls for some way to parameterize the condition type, e.g. "mouseIsDown && firstHit(insideMe)." It's not obvious that all the natural conditions and parameters will be easily and soundly expressible.

Finally, in talking to Phil about some of these ideas, he suggested that rather than operate on successive values for the view of the widget, the behavior functions should instead operate on a model of the widget; another set of viewing functions should be automatically invoked to transform a model into its realization in DOM form. I think this is a good idea and need to explore it further.
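Phil's suggestion can be sketched like so (hypothetical structures, in Python rather than Links): behaviors transform a model, and a separate view function renders the model to a DOM-like form. The reordered data is then read off the model directly, with no DOM scraping:

```python
def view(model):
    """Render a list model to a nested DOM-like value."""
    return ("ul", [("li", item) for item in model])

def move_item(model, src, dst):
    """A behavior on the model: move the item at src to position dst."""
    items = list(model)          # copy; the model is never mutated
    item = items.pop(src)
    items.insert(dst, item)
    return items

model = ["milk", "eggs", "bread"]
model = move_item(model, 2, 0)   # the user drags "bread" to the top
dom = view(model)                # the runtime re-renders automatically
```

The drag behavior never touches the DOM; the view function is the only place the concrete HTML representation appears, which also satisfies the abstraction requirement from the Draggable List problem below.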

[1] Elliott, C. and Hudak, P. 1997. Functional reactive animation. In Proceedings of the Second ACM SIGPLAN international Conference on Functional Programming (Amsterdam, The Netherlands, June 09 - 11, 1997). A. M. Berman, Ed. ICFP '97. ACM Press, New York, NY, 263-273.

November 7, 2005

Ways of Denoting Form Handlers

I've been exploring the idea of a "continuation" as the handler for an http request; to that end I have the Links interpreter working with serializable first-class continuations, and an escape construct for creating them. The code is finally in a stable state.

Having created one of these things, which effectively represents a control stack, the programmer can embed such an object in a web form, and posting the form passes the "value" of the form to that continuation.
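As a toy rendering of that flow (plain Python with invented server glue—real Links serializes the continuation itself rather than storing it server-side): a captured continuation is stored under a key, the key is embedded in the form's action, and posting the form applies the form's value to it.

```python
import uuid

continuations = {}

def capture(k):
    """Store a continuation (here just a closure) and return its key."""
    key = uuid.uuid4().hex
    continuations[key] = k
    return key

def render_form(k):
    """Embed the continuation's key in the form's action."""
    key = capture(k)
    return f'<form action="/resume/{key}"><input name="v"/></form>'

def post(key, value):
    """The framework's side of form posting: resume the continuation."""
    return continuations.pop(key)(value)

# Usage: the "rest of the program" after the form is the continuation.
form = render_form(lambda v: f"you said {v}")
key = form.split("/resume/")[1].split('"')[0]
result = post(key, "hello")
# result == "you said hello"
```

The essential point survives the simplification: the form's "value" flows straight into suspended program context, with no dispatch code written by the programmer.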

In trying to program for it, I became convinced that using the escape construct is very confusing. I think my handle ... with ..., or something like it, would be much more natural. There are still some questions in my mind about that, as it still requires some notion of explicit continuation values, which I'd like to expunge. More on that later.

As an alternative, the Links programmer can specify an expression to be evaluated when the form is posted (as always). I call this feature the l:expression feature and the other one the l:handler feature. The difference is fairly subtle and I'm not yet sure how to explain the two; in fact each is as powerful as the other, but they have a difference of intention. With l:expression, the coder is saying, "this expression represents the next page the user will see," whereas with l:handler it's more like, "this handler will process the form and do more stuff, possibly showing many more pages, before dropping the user in a 'neutral' place." The l:handler mode is more natural if you want to handle the form and then return to some dynamic context, for example if you want to display a page that acts like a subroutine, returning to whatever user experience it was invoked from.
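The contrast can be caricatured in Python (my own framing, not Links syntax): with l:expression the stored expression *is* the next page; with l:handler the stored procedure may do more work and then hand control back to whatever dynamic context the form came from.

```python
# l:expression: "this expression represents the next page the user will see"
def expression_style(form_value):
    return f"<p>thanks, {form_value}</p>"

# l:handler: process the form, then return to the caller's context
# (modeled here as an explicit "return to caller" function).
def handler_style(form_value, return_to_caller):
    processed = form_value.upper()        # ...possibly much more work...
    return return_to_caller(processed)

page1 = expression_style("alice")
page2 = handler_style("alice",
                      lambda r: f"<p>back where you were; saved {r}</p>")
```

As the post says, each can encode the other; the difference is which continuation the programmer is naming.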

It will require a lot more playing around to figure out good ways to present these and a real account of the pros and cons. Ping me if you're interested.

November 3, 2005

Meeting Notes

Notes from today's Links Research Group meeting:

Places to store data:

  • URL
  • Hidden fields
  • Cookie
  • Database
  • JavaScript state
  • Server state

Parameters/issues around these storage types:

  • For data the client will handle, we need security options
  • Data stored in the database can be under an imported schema or the schema can be auto-generated from a type in the language.
  • Cookies have names
  • Tables have names
  • Server-side data might have security options, too

The idea was raised by Jeremy of offering different "views" of the same data, particularly data in the database. Ezra noted that this is true to common practice, where oftentimes an Object-Relational mapping is used to provide a layer of data abstraction above the database. Phil noted his work on alternate "views" of data structures in a functional language, supported by conversion functions that convert between representations.

Reasons for encryption:

  • protect data from intruders
  • protect data from everyone, as with passwords (think one-way encryption)
  • keep client data secret from the server (calling for proof-carrying code as a way for the user to trust the client-side code)
  • keep algorithms & other server-side proprietary data secret

Rodney mentioned that database systems already have the notion of separate read/write authorization for fields; could a similar policy be used to prevent different pieces of code from modifying given pieces of data, even if some code is on the client and some on the server?

Progress: Web Stack Implemented

My copy of the Links interpreter is now capable of serializing a control stack along the lines of what I called for on Monday.

I have a working demo of a login routine. You can call this routine and it will:

  1. Return a login form to the caller, with the form/@action bound to the continuation of the routine itself.
  2. On post, check the username/password; if they are incorrect, recurse, displaying the form again.
  3. If the username/password combo is correct, return to the caller.

This way, various callers can invoke the login routine and it will keep track of the context where it was called—you don't have to tell it explicitly.
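The three steps above can be sketched with a Python generator standing in for a serializable continuation (the credential store and the page strings are invented for illustration):

```python
VALID = {"ana": "secret"}        # hypothetical credential store

def login(caller_page):
    # 1. yield the login form; 2. on post, check and loop; 3. return.
    while True:
        user, password = yield "<login form>"
        if VALID.get(user) == password:
            return caller_page   # back to wherever we were invoked from

routine = login("<the caller's page>")
page = next(routine)                        # first request shows the form
page = routine.send(("ana", "wrong"))       # bad credentials: form again
try:
    routine.send(("ana", "secret"))         # success: return to the caller
except StopIteration as done:
    page = done.value
# page is now "<the caller's page>"
```

The caller's identity lives in the suspended routine itself—exactly the "you don't have to tell it explicitly" property—though unlike Links continuations, Python generators can't be serialized across requests.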

In working with this I've become aware of just how confusing it is to work with explicit first-class continuations. I'll be working on offering a better syntax for that.

To get precisely the feature outlined in Monday's post, I need a way to get and set the cookies—but I'm going to defer that as I think the syntax issue is more interesting.

October 31, 2005

Links App Work

I'm working on a demo app in Links, our language for web development. It's stimulating me to see lots of things wrong with the language and the implementation, and things that need to be done to make it work really well. I'll note some here; we need:

  • a way to perform redirections. We should, of course, allow redirections to go to Links "continuations" but also to arbitrary URLs. I've got an inkling that the type of continuations should be precisely the same as the type of URLs, or that there should be an easy conversion.
  • a way to load code from other files. I'm already copying and pasting code between different files.
  • call a subroutine which never returns to the caller, but which knows who the caller is & can return on some later request.
  • more helpful error messages
  • error messages that tell you what line the problem occurred in
  • a way to freeze an expression so that it will be evaluated when a link is clicked

The above needs are fairly desperate; without them we can't prove that Links is good for the web. Lower-priority needs:

  • Ability to interpolate code in {...} with regular old "strings" (why should XML quasiquotes be special?)
  • Ability to send error messages to the browser (exceptions rule)

(Links team at Edinburgh: try this)

October 24, 2005

My Research

Before getting too much into the thick of things, I should say who I am and what I'm doing.

I'm working on a doctoral project to help make a new language for web development at the University of Edinburgh. The thrust of the project is to combine best practices from a number of domains and present them in one coherent language—essentially, we want to mitigate the so-called "impedance mismatch" between different technologies that make up a given web system.

For example, web applications are typically backed by a database, but the database comes with its own language, SQL, for extracting data; one typically has to write a lot of glue code to repackage the data that comes from the DBMS for use within the main language the system is coded in. Then, in order to create a rich user experience, you typically have to code in JavaScript, and again you have to repackage your data to make it work with JavaScript & the DOM. All these multiple representations of the same data waste a lot of web developers' time. We hope to bring all these technologies in under one linguistic umbrella, and provide a more convenient way to express a complete web system.

Along the way, we might come up with some ideas and approaches that are useful in other contexts. So if these issues are interesting to you, even if you don't expect to adopt a new programming language anytime soon, I hope you'll subscribe and follow along.

The Draggable List: a Benchmark Widget for Web Languages

Here's a nicely-defined problem that comes up repeatedly in web development, and serves as a simple example of some of the problems that come up in programming client-side web applications. We can measure web languages by how well they can express a solution to this.

The goal is to create a "draggable list." Here the user is shown a list of things—for example, to-do items—and has the opportunity to re-arrange them by dragging & dropping. While the user is dragging, the UI must indicate where the dragged item would be placed if the user let up the mouse at that moment. When the user lets go of the mouse button, the list should, of course, be permanently reordered on the page; afterwards the user can do more drag actions, or not.

The behavior should be abstracted from the concrete HTML/CSS representation so that the behavior is reusable with respect to a wide variety of visual treatments.

A side condition is that it should be easy to get the reordered list data *itself*, not by scraping the DOM tree. In this case it's easy, but the approach ought to extend directly to more complex widgets with more complex models.

How to visually indicate where the item would be dropped should be up to the coder. It's worth noting that there are generally two good ways of doing it. One way is to leave all the existing items in place but to display a solid line between the two elements where this one would be inserted. Only after the user lets go are any of the other elements moved on the screen. The other approach is to move around a representation of the dragged element—in this case, the other elements are displaced while you drag. This approach is taken in the best-known ready-made example of the Draggable List (if anyone knows of other good examples, please let me know).

Judging by the source of that implementation, JavaScript performs poorly on the Draggable List benchmark problem. Any other contenders?

UPDATE: Another implementation, very appealing to the user. But, appealing to the programmer?

October 9, 2005

Links on MacOS X

UPDATE: This post originally referred to the version of Links that was current at the time. Updates made Jan 31, 2007 are in parens.

I got Links running on MacOS X today. Here are some things you have to do:

  • Install PostgreSQL from source. To do this, download the source and follow the instructions, but after ./configure and before make, you (no longer) have to:
    • undefine HAVE_POLL in src/include/pg_config.h (change the relevant line to #undef HAVE_POLL )
  • Install postgresql-ocaml (latest version in Markus Mottl's directory). Note that you don't want the postgresql-ocaml library that comes up first in the Google search; that's an older version. You want Markus Mottl's version.

    In one case, I had to modify OCamlMakefile to set INCDIRS and LIBDIRS to point to /usr/local/pgsql/include and /usr/local/pgsql/lib, respectively.

  • (No longer required.) Install pcre (get it via FTP). To do this, follow the instructions, but after ./configure and before make, go into the file libtool and change the line
    allow_undefined_flag="-flat_namespace -undefined warning"
    (it appears twice in the file)
  • (No longer required.) Install pcre-ocaml (latest version found in this directory)
  • (No longer required.) Install findlib (ocamlfind)
  • (No longer required.) Install equeue
  • Install ocamlnet
  • (No longer relevant) Don't install curl (ocurl). At least, I couldn't get it to install—I got some kind of mismatch between it and the libcurl headers that were installed on my MacOS 10.3 system. Instead, I redefined the grab_url function in so that it wouldn't use curl for now.

October 3, 2005

Links Bug

This program doesn't seem to infer a type for arg:

fun fact(n) { if (n == 0) then { 1 } else { n * fact(n-1) } }
fun factpair(n) { (arg=n|(fact=fact(n))) }
r = factpair(12);

(arg=12,fact=479001600) : (arg:'28,fact:int)
479001600 : int
12 : '34

September 30, 2005

Gripes with SLinks Implementation

Things to change when we re-do it:

  • OCaml is annoying:
    • Syntax too verbose.
    • Syntax errors are unhelpful (Syntax error)
  • Build objects into a build directory.
  • Get rid of sl_ prefix on everything.
  • Eliminate 'conn which infects all the expression types.
  • Need comment syntax in Links
  • Only one expression is allowed per batch file. Want more to support easy testing.


  • Why are results and expressions different types? Answer: because it separates eval'd and uneval'd objects.
  • Why is the environment split into two values, globals and locals? At least, make it (globals, locals)? Answer: semantics are different; also we want to avoid marshaling "static" data.
  • How best to do a testing framework in Links?

September 28, 2005

Do CPS transformation on interpreter

I eventually grokked lambda-lifting and worked on some code for it but ultimately I wasn't getting what I wanted out of it.

The larger goal was to have a function from terms to terms: to convert each term so that the old term was wrapped in a lambda expecting a continuation. Phil convinced me this was bunk, since evaluating the new terms still happens according to the same process as the old ones.

The proper goal should have been to modify the *interpreter*. The interpreter, then, should evaluate terms *using* a continuation-passing style. Trying to modify the terms in-place was a bad idea, since it makes them harder to read, messes up the types, and may screw up some other stuff down the road. CPS should simplify things for us, not complicate them! So, I wrote up a lambda-calculus interpreter that uses the continuation-passing style. SML code for this is below.
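Since the SML code is behind the jump, here is the same idea as a minimal Python sketch (my own rendering, not the post's code): the interpreter threads an explicit continuation `k` through every evaluation step.

```python
# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)

def eval_cps(term, env, k):
    tag = term[0]
    if tag == "var":
        return k(env[term[1]])
    if tag == "lam":
        # A closure takes its argument *and* the caller's continuation.
        _, x, body = term
        return k(lambda arg, k2: eval_cps(body, {**env, x: arg}, k2))
    if tag == "app":
        # Evaluate the function, then the argument, then apply;
        # each step names "what to do next" as a lambda.
        _, f, a = term
        return eval_cps(f, env,
               lambda fv: eval_cps(a, env,
               lambda av: fv(av, k)))
    raise ValueError(f"unknown term tag: {tag}")

# (\x. x) n, with n pre-bound to 42 in the environment
ident = ("lam", "x", ("var", "x"))
result = eval_cps(("app", ident, ("var", "n")), {"n": 42}, lambda v: v)
# result == 42
```

Because every "what happens next" is a first-class value, capturing the control stack for serialization reduces to capturing `k`—which is exactly why the interpreter, not the terms, needed the CPS treatment.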

Continue reading "Do CPS transformation on interpreter" »