100s of Results
The most exciting thing I've come across lately is the survey paper "Hundreds of Impossibility Results for Distributed Computing" by Faith Fich and Eric Ruppert. It shows what an exciting field distributed computing is: so many strong results, all within a relatively small number of well-defined models.
The paper is a good introduction to the field in general: it describes some of the key problems (consensus, broadcast, leader election, resource allocation) and the important models of distributed computation (synchronous/asynchronous, message-passing/shared-memory, with and without process failures), and gives a concise presentation of some of the clever proof techniques used to establish impossibility results.
Researchers in this area have a fairly clear task in front of them: present an algorithm that solves one of the problems in one of the models, or present a proof that no such algorithm exists for that combination of problem and model. Either outcome is an exciting, solid advance: either something is possible, and we've learned how to do it, or it is not possible, and the underlying model must be strengthened in order to achieve it.
Of course, there is no shortage of generality in the field; it's not all about individual algorithms. There is an appealing and powerful result known as Herlihy's hierarchy, or the consensus hierarchy, which states that primitives in the shared-memory model can be characterized by the number of processes for which they can be used to solve (wait-free) consensus. Some primitives, such as simple read/write registers, cannot solve it for more than a single process. Others, such as test-and-set, can be used to build consensus algorithms for small numbers of processes (two, in the case of test-and-set, but no more). Certain primitives, such as compare-and-swap, are universal: they can solve consensus for any number of processes. What's more, consensus itself is universal: any other shared object can be implemented on top of a system that solves consensus. Characterizing the primitives in this way has clear implications for building real systems.
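To make the hierarchy concrete, here is a minimal sketch, in Java (my choice of language, nothing to do with the paper), of how two-process wait-free consensus can be built from a test-and-set-style primitive. The class and method names are mine, and the test-and-set is emulated with an AtomicBoolean; the fact that this kind of construction provably cannot be extended to three processes is exactly what it means for test-and-set to sit at level two of the hierarchy.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Illustrative sketch only: two-process wait-free consensus built from a
// test-and-set-style primitive. Not code from the survey.
class TwoProcessConsensus<T> {
    // Each process announces its proposal in its own slot before racing.
    private final AtomicReferenceArray<T> proposals = new AtomicReferenceArray<>(2);
    // The single test-and-set object: the first process to flip it wins.
    private final AtomicBoolean winnerChosen = new AtomicBoolean(false);

    // Called by process id (0 or 1) with its proposed value. Both callers
    // return the same value, and neither ever waits for the other.
    T propose(int id, T value) {
        proposals.set(id, value);                               // announce my proposal
        boolean iWon = winnerChosen.compareAndSet(false, true); // the test-and-set step
        // The winner decides its own value; the loser adopts the winner's,
        // which is guaranteed to have been announced before the winning flip.
        return iWon ? value : proposals.get(1 - id);
    }
}
```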
Contrast this with the theory of "process calculus." Those who know me know that I am not at all fond of this vein of research; I believe its merits remain unproven after decades of work. The field of process calculus has hundreds of theorems, no doubt, but these theorems are all about the bisimilarity of one process calculus with another, or about when two individual systems can be considered equivalent. A result of Milner's (with Moller) shows that any finite CCS process can be factored uniquely, up to bisimilarity, into a parallel composition of prime processes, ones that cannot themselves be split into a non-trivial parallel composition. But this is a statement about the process calculus itself, rather than about the systems that it could model. The field seems to spend all its time proving that its formalisms are consistent or not entirely unrealistic, rather than cutting to the chase and proving interesting statements about computational phenomena.
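For what it's worth, here is the shape of that result as I understand it from Milner and Moller's paper, paraphrased in my own notation:

```latex
% A paraphrase (mine) of the Milner--Moller unique-decomposition theorem,
% stated for finite CCS processes up to strong bisimilarity $\sim$:
\[
  P \;\sim\; P_1 \mid P_2 \mid \cdots \mid P_n ,
\]
% where each $P_i$ is prime: $P_i \not\sim 0$, and $P_i \sim Q \mid R$
% implies $Q \sim 0$ or $R \sim 0$. The multiset $\{P_1, \dots, P_n\}$ is
% unique up to bisimilarity and reordering.
```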
Proofs of the kind summarized in Fich and Ruppert's paper must surely be considered theory, rather than systems research: they are strong mathematical arguments about a broad class of phenomena. But the proofs in "distributed computing" are eminently useful, whereas those in "process calculus" are largely abstracted to the point of dubious utility.
If anyone reading this knows more about process calculus/process algebra, and wants to challenge me on this, please do. I've not read the process algebra literature as deeply as I've read the distributed computing literature. I'd love to be convinced that I'm wrong.