" /> Ezra's Research: February 2010 Archives


February 27, 2010

Krugman

Economist Paul Krugman, as portrayed in this week's New Yorker profile, reminds me a lot of a computer scientist. MacFarquhar explains one of Krugman's economic ideas, namely that locations specialize economically—cars in Detroit and chips in Silicon Valley—and reports this interesting reaction to it:

'I explained this basic idea to a non-economist friend, who replied in some dismay, "Isn't that pretty obvious?" And of course it is.' Yet, because it had not been well modelled, the idea had been disregarded by economists for years.
(MacFarquhar, "The Deflationist." The New Yorker, March 1, 2010.)

Much of what we academics do (in computer science and, apparently, in economics) is to take an intuitive idea and make it precise—modelling it, if you will. Those who simply think intuitively may have already absorbed the idea and treat it as old hat—but the model, by making it more precise, illuminates it further and more securely and can lead to new inspirations.

MacFarquhar adds another interesting idea here:

His friend Craig Murphy, a political scientist at Wellesley, had a collection of antique maps of Africa. ... Sixteenth-century maps of Africa were misleading in all kinds of ways, but they contained quite a bit of information about the continent's interior—the River Niger, Timbuktu. Two centuries later, mapmaking had become much more accurate, but the interior of Africa had become a blank. As standards for what counted as a mappable fact rose, knowledge that didn't meet those standards—secondhand travellers' reports, guesses hazarded without compasses or sextants—was discarded and lost. Eventually, the higher standards paid off—by the nineteenth century the maps were filled in again—but for a while the sharpening of technique caused loss as well as gain.

I'm intrigued by this idea that higher standards of precision can cause us to 'forget' disciplinary knowledge, material that no longer meets the standards. Of course, the higher standards are all about eliminating 'knowledge' that isn't actually correct. But some of it, perhaps, is correct, and anyway there's an outer shape to the knowledge that is useful: after all, there was still a River Niger even if the mapmakers didn't know exactly where it flowed.

This might happen in computer science, too. I think of Don Knuth's penchant for reading antique sources describing algorithms from pre-modern times.

There might be inspiration in the older, less-precise knowledge of one's field: it might be a source of ideas to be modelled.

February 16, 2010

What I'm doing now

What I'm doing lately: I work for a company in Cambridge, MA, that has a big XQuery engine for searching "semi-structured" databases. That is, we use language technology to allow people to write nifty searches.
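To give a flavor of the kind of search I mean, here is a minimal XQuery query of the sort such an engine runs. This is only an illustrative sketch: the document name and element structure are made up for the example, not anything from our product.

    (: Find recent articles by a given author in a hypothetical
       collection of semi-structured article records. :)
    for $article in doc("articles.xml")//article
    where $article/author = "Krugman"
      and $article/@year >= 2008
    order by $article/title
    return <hit>{ $article/title/text() }</hit>

The appeal is that the query reads almost like a sentence, yet it still works when the records vary in shape, which is the whole point of "semi-structured" data.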

I had some other interesting job offers, but I took this one because (a) it was in the Boston area, which is where I wanted to be at the time and (b) I wanted to get behind one central product and contribute some programming-language knowledge to it. I was surprised to discover a company that would let me do both.

As part of how we're improving the product just now, we're building a parallelizing compiler for relational algebra: roughly the sort of stuff that titillates me as a language engineer, and tangentially (at least) related to some of my thesis work.
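To sketch what that means, here is a generic, textbook-style illustration of the correspondence between XQuery and relational algebra (made-up data, and not our actual compiler). Each clause of a FLWOR expression maps onto a relational-algebra operator:

    (: Hypothetical documents; the point is the clause-to-operator mapping. :)
    for $o in doc("orders.xml")//order          (: scan of relation O :)
    for $c in doc("customers.xml")//customer    (: scan of relation C :)
    where $o/custid = $c/id                     (: join, O ⋈ C :)
      and $o/total > 100                        (: selection, σ :)
    return <row>{ $c/name, $o/total }</row>     (: projection, π :)

Once a query is in operator form, a compiler can rewrite the plan (say, pushing the selection below the join) and evaluate independent parts of it in parallel across partitions of the data.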

Since it's a company trying to compete against others, I probably won't be blogging much about what I'm doing at work, but I still hope to keep this blog active and post researchy notes here.