

Economist Paul Krugman, as portrayed in this week's New Yorker profile, reminds me a lot of a computer scientist. MacFarquhar explains one of Krugman's economic ideas, namely that locations specialize economically—cars in Detroit and chips in Silicon Valley—and reports this interesting reaction to it:

'I explained this basic idea to a non-economist friend, who replied in some dismay, "Isn't that pretty obvious?" And of course it is.' Yet, because it had not been well modelled, the idea had been disregarded by economists for years.
(MacFarquhar, "The Deflationist." The New Yorker, March 1, 2010.)

Much of what we academics do (in computer science and, apparently, in economics) is to take an intuitive idea and make it precise—modelling it, if you will. Those who simply think intuitively may have already absorbed the idea and treat it as old hat—but the model, by making it more precise, illuminates the idea further and more securely, and can lead to new inspirations.

MacFarquhar adds another interesting idea here:

His friend Craig Murphy, a political scientist at Wellesley, had a collection of antique maps of Africa. ... Sixteenth-century maps of Africa were misleading in all kinds of ways, but they contained quite a bit of information about the continent's interior—the River Niger, Timbuktu. Two centuries later, mapmaking had become much more accurate, but the interior of Africa had become a blank. As standards for what counted as a mappable fact rose, knowledge that didn't meet those standards—secondhand travellers' reports, guesses hazarded without compasses or sextants—was discarded and lost. Eventually, the higher standards paid off—by the nineteenth century the maps were filled in again—but for a while the sharpening of technique caused loss as well as gain.

I'm intrigued by this idea that higher standards of precision can cause us to 'forget' disciplinary knowledge, material that no longer meets the standards. Of course, the higher standards are all about eliminating 'knowledge' that isn't actually correct. But some of it, perhaps, is correct, and anyway there's an outer shape to the knowledge that is useful: after all, there was still a River Niger even if the mapmakers didn't know exactly where it flowed.

This might happen in computer science, too. I think of Don Knuth's penchant for reading antique sources describing algorithms from pre-modern times.

There might be inspiration in the older, less-precise knowledge of one's field: it might be a source of ideas to be modelled.