Semantics
Interface Automata
by Luca de Alfaro, Thomas A. Henzinger
Conventional type systems specify interfaces in terms of values and domains.
We present a lightweight formalism that captures the temporal aspects of software
component interfaces. Specifically, we use an automata-based language to capture both
input assumptions about the order in which the methods of a component are called,
and output guarantees about the order in which the component calls external methods.
The formalism supports automatic compatibility checks between interface models, and
thus constitutes a type system for component interaction. Unlike traditional uses of
automata, our formalism is based on an optimistic approach to composition, and on
an alternating approach to design refinement. According to the optimistic approach,
two components are compatible if there is some environment that can make them work
together. According to the alternating approach, one interface refines another if it
has weaker input assumptions, and stronger output guarantees. We show that these
notions have game-theoretic foundations that lead to efficient algorithms for checking
compatibility and refinement.
The idea of expressing the order of message exchange as a type is certainly not new (as anyone exposed to web service choreography hype can tell ... oh, just kidding, of course the theory is much older). However, the specific approach looks interesting (not least because it appeals to game semantics).
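The optimistic compatibility check lends itself to a small executable sketch. The TypeScript below is not the paper's formalism; it is a toy, assuming deterministic automata with disjoint output alphabets, in which shared actions synchronise (becoming internal, hence unblockable by any environment) and the environment controls only inputs:

```typescript
// Toy interface automata: a compact, hypothetical encoding, not the
// paper's formal definition. Transitions are deterministic and keyed
// by "state|action"; output alphabets are assumed disjoint.
type IA = {
  start: string;
  inputs: Set<string>;
  outputs: Set<string>;
  step: Map<string, string>; // "state|action" -> successor state
};

const key = (s: string, a: string) => s + "|" + a;

// A product state is an error if one side can emit a shared action the
// other cannot currently accept.
function compatible(p: IA, q: IA): boolean {
  const shared = Array.from(p.outputs).filter(a => q.inputs.has(a))
    .concat(Array.from(q.outputs).filter(a => p.inputs.has(a)));
  const start = p.start + "," + q.start;
  const seen = new Set<string>([start]);
  const autonomous = new Map<string, string[]>(); // moves no environment can block
  const errors = new Set<string>();
  const todo = [start];
  while (todo.length > 0) {
    const st = todo.pop()!;
    const [ps, qs] = st.split(",");
    const succs: string[] = [];
    for (const a of shared) {
      const [e, es, r, rs] = p.outputs.has(a)
        ? ([p, ps, q, qs] as const) : ([q, qs, p, ps] as const);
      const out = e.step.get(key(es, a));
      if (out === undefined) continue;              // emitter cannot emit a here
      const inn = r.step.get(key(rs, a));
      if (inn === undefined) { errors.add(st); continue; } // unaccepted output
      succs.push(e === p ? out + "," + inn : inn + "," + out);
    }
    autonomous.set(st, succs);
    for (const n of succs) if (!seen.has(n)) { seen.add(n); todo.push(n); }
  }
  // A state is bad if it is an error, or an autonomous move reaches a bad
  // state. Optimistically, the interfaces are compatible iff some helpful
  // environment keeps the product out of the bad set: start must not be bad.
  const bad = new Set(errors);
  let grew = true;
  while (grew) {
    grew = false;
    autonomous.forEach((succs, st) => {
      if (!bad.has(st) && succs.some(n => bad.has(n))) { bad.add(st); grew = true; }
    });
  }
  return !bad.has(start);
}
```

For example, a client that sends `msg` and awaits `ack` is compatible with a server that consumes `msg` and replies `ack`, but not with one that never accepts `msg`, since no environment can prevent the client's output.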
Lightweight Static Capabilities
We describe a modular programming style that harnesses modern type systems to verify safety conditions in practical systems. This style has three ingredients:
- A compact kernel of trust that is specific to the problem domain.
- Unique names (capabilities) that confer rights and certify properties, so as to extend the trust from the kernel to the rest of the application.
- Static (type) proxies for dynamic values.
We illustrate our approach using examples from the dependent-type literature, but our programs are written in Haskell and OCaml today, so our techniques are compatible with imperative code, native mutable arrays, and general recursion. The three ingredients of this programming style call for (1) an expressive core language, (2) higher-rank polymorphism, and (3) phantom types.
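The phantom-type ingredient can be transliterated into TypeScript's branded types. This is a hedged sketch of the paper's flavour of once-checked array indices, not code from the paper (whose examples are in Haskell and OCaml); `withBoundedArray`, `Ix`, and the brand machinery are all invented here for illustration:

```typescript
// A phantom brand: Ix<B> is a number certified to be in bounds for the
// array whose brand is B. The brand has no runtime cost.
declare const brandTag: unique symbol;
type Ix<B> = number & { readonly [brandTag]: B };

// The compact kernel of trust: the ONLY place indices are checked.
// Clients receive a fresh brand B through a rank-2-style callback, so an
// index certified for one array cannot be used with another.
function withBoundedArray<A, R>(
  xs: A[],
  body: <B>(get: (i: Ix<B>) => A, check: (i: number) => Ix<B> | null) => R
): R {
  type B0 = { readonly tag: "fresh" };
  const check = (i: number): Ix<B0> | null =>
    Number.isInteger(i) && i >= 0 && i < xs.length ? (i as Ix<B0>) : null;
  const get = (i: Ix<B0>) => xs[i]; // no bounds check: the type certifies it
  return body<B0>(get, check);
}

// Usage: each index is checked once; every later access is unchecked.
const total = withBoundedArray([1, 2, 3], (get, check) => {
  let s = 0;
  for (let i = 0; ; i++) {
    const ix = check(i);
    if (ix === null) break;
    s += get(ix);
  }
  return s;
});
// total === 6
```

The design point is the trust boundary: everything outside `withBoundedArray` can only obtain an `Ix<B>` by going through `check`, so `get` may safely skip the bounds test.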
Pursuant to this thread about the membrane pattern in static languages from Mark Miller's excellent Ph.D. thesis. I don't yet know whether a solution is derivable from this work, but Mark was kind enough to point me to it, and Oleg seems to want to see it distributed, so here it is—Mark and/or Oleg, please let me know if this is premature.
From the "Whoa!" files:
Concoqtion: Mixing Indexed Types and Hindley-Milner Type Inference
This paper addresses the question of how to extend OCaml's Hindley-Milner type system with types indexed by logical propositions and proofs of the Coq theorem prover, thereby providing an expressive and extensible mechanism for ensuring fine-grained program invariants. We propose adopting the approach used by Shao et al. for certified binaries. This approach maintains a phase distinction between the computational and logical languages, thereby limiting effects and non-termination to the computational language, and maintaining the decidability of the type system. The extension subsumes language features such as impredicative first-class (higher-rank) polymorphism and type operators, which are notoriously difficult to integrate with the Hindley-Milner style of type inference that is used in OCaml. We make the observation that these features can be more easily integrated with type inference if the inference algorithm is free to adapt the order in which it solves typing constraints to each program. To this end we define a novel "order-free" type inference algorithm. The key enabling technology is a graph representation of constraints and a constraint solver that performs Hindley-Milner inference with just three graph rewrite rules.
Another tough-to-categorize one: dependent types, the Curry-Howard Correspondence, logic programming, theorem provers as subsystems of compilers, implementation issues... it's all in here.
Update: A prototype implementation is available here, but it took a bit of Google-fu to find, and it's brand new, so be gentle.
Update II: The prototype implementation isn't buildable out of the box, and includes a complete copy of both the Coq and O'Caml distributions, presumably with patches etc. already applied. So it's clearly extremely early days yet. But this feels very timely to me, perhaps because I've just started using Coq within the past couple of weeks, and got my copy of Coq'Art and am enjoying it immensely.
Update III: It occurs to me that this might also relate to Vesa Karvonen's comment about type-indexed functions, which occurs in the thread on statically-typed capabilities, so there might be a connection between this front-page story and the front-page story on lightweight static capabilities. That thought makes me happy; I love it when concepts converge.
Socially Responsive, Environmentally Friendly Logic
by Samson Abramsky
We consider the following questions: What kind of logic has a natural semantics in
multi-player (rather than 2-player) games? How can we express branching quantifiers, and
other partial-information constructs, with a properly compositional syntax and semantics?
We develop a logic in answer to these questions, with a formal semantics based on multiple
concurrent strategies, formalized as closure operators on Kahn-Plotkin concrete domains.
Partial information constraints are represented as co-closure operators. We address the
syntactic issues by treating syntactic constituents, including quantifiers, as arrows in a
category, with arities and coarities. This enables a fully compositional account of a wide
range of features in a multi-agent, concurrent setting, including IF-style quantifiers.
This paper seems to unify multiple interesting directions: logic, game semantics, concurrent constraint programming (and concurrent programming in general).
At the same time it remains very accessible, without an overwhelming amount of math, so it can hopefully be useful not only to academics. I, for one, have been waiting for exactly this kind of paper for two years (and my interest is very practical).
Multi-player Curry-Howard correspondence, anyone? Or Curry-Howard for web services?
Abstracting Allocation: The New new Thing. Nick Benton.
We introduce a Floyd-Hoare-style framework for specification and verification of machine code programs, based on relational parametricity (rather than unary predicates) and using both step-indexing and a novel form of separation structure. This yields compositional, descriptive and extensional reasoning principles for many features of low-level sequential computation: independence, ownership transfer, unstructured control flow, first-class code pointers and address arithmetic. We demonstrate how to specify and verify the implementation of a simple memory manager and, independently, its clients in this style. The work has been fully machine-checked within the Coq proof assistant.
This is, of course, related to TAL, PCC etc. If you find the details too much, I suggest reading the discussion (section 7) to get a feel for the possible advantages of this approach.
Programming Languages and Lambda Calculi looks like a comprehensive treatment of the semantics of typed and untyped call-by-value programming languages. I imagine if one had a basic undergraduate education in programming language theory and wanted to get up to speed in a hurry, this would be a great resource.
Securing the .NET Programming Model. Andrew J. Kennedy.
The security of the .NET programming model is studied from the standpoint of fully abstract compilation of C#. A number of failures of full abstraction are identified, and fixes described. The most serious problems have recently been fixed for version 2.0 of the .NET Common Language Runtime.
This is highly amusing stuff, of course. Some choice quotes:
if source-language compilation is not fully abstract, then there exist contexts (think 'attackers') in the target language that can observably distinguish two program fragments not distinguishable by source contexts. Such abstraction holes can sometimes be turned into security holes: if the author of a library has reasoned about the behaviour of his code by considering only source-level contexts (i.e. other components written in the same source language), then it may be possible to construct a component in the target language which provokes unexpected and damaging behaviour.
One could argue that full abstraction is just a nicety; programmers don't really reason about observations, program contexts, and all that, do they? Well, actually, I would like to argue that they do. At least, expert programmers...
"A C# programmer can reason about the security properties of component A by considering the behaviour of another component B written in C# that â€œattacksâ€ A through its public API." 
This can only be achieved if compilation is fully abstract.
To see the six problems identified by thinking about full abstraction you'll have to go read the paper...
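The phenomenon transfers to other compilers that erase source-level protections. TypeScript's `private`, for instance, is purely a compile-time notion, which gives a compact (if much milder) analogue; this illustrates the general idea only, not one of the paper's six C#/.NET problems:

```typescript
// Two fragments that no well-typed source context can distinguish: the
// private field is sealed off at the TypeScript level.
class A { private secret = "alpha"; ping() { return "pong"; } }
class B { private secret = "beta";  ping() { return "pong"; } }

// A source-level "attacker" sees only the public API.
const sourceView = (c: { ping(): string }) => c.ping();

// But `private` is erased in the emitted JavaScript, so a target-level
// attacker (modelled here with a cast) observes the difference: an
// abstraction hole that full abstraction would rule out.
const targetView = (c: object) => (c as any).secret;

console.log(sourceView(new A()) === sourceView(new B())); // true
console.log(targetView(new A()) === targetView(new B())); // false
```

In Kennedy's terms, the target context distinguishes `A` and `B` even though no source context can, so compilation is not fully abstract.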
We are seeking comments on the final draft of our ICFP 2006 paper: Delimited dynamic binding, by Oleg Kiselyov, Chung-chieh Shan, and Amr Sabry.
Dynamic binding and delimited control are useful together
in many settings, including Web applications, database cursors, and
mobile code. We examine this pair of language features to show that the
semantics of their interaction is ill-defined yet not expressive enough
for these uses.
We solve this open and subtle problem. We formalise a typed language
DB+DC that combines a calculus DB of dynamic binding and a calculus
DC of delimited control. We argue from theoretical and practical
points of view that its semantics should be based on delimited
dynamic binding: capturing a delimited continuation closes over
part of the dynamic environment, rather than all or none
of it; reinstating the captured continuation supplements
the dynamic environment, rather than replacing or inheriting it. We
introduce a type- and reduction-preserving translation from DB+DC to DC,
which proves that delimited control macro-expresses dynamic
binding. We use this translation to implement DB+DC in Scheme, OCaml,
and Haskell.
We extend DB+DC with mutable dynamic variables and a facility to
obtain not only the latest binding of a dynamic variable but also
older bindings. This facility provides for stack inspection and (more
generally) folding over the execution context as an inductive data
structure.
The paper comes with a large amount of
accompanying code—in Scheme, OCaml, SML, and Haskell. The code (described in the paper's appendix) uses realistic examples to show how the joint behavior of delimited continuations
and dynamic binding is ill-defined in various PL systems, and solves the problem by a
full-fledged implementation of dynamic binding in all these systems. Any comments or suggestions would be very welcome!
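The policy the authors argue for can be modelled in miniature with an explicit binding stack: capture closes over the bindings made since a prompt, and reinstatement pushes them onto, rather than overwriting, the current environment. This TypeScript toy (all names invented here) models only that policy, not the DB+DC calculi or their Scheme/OCaml/Haskell implementations:

```typescript
type Frame = [name: string, value: number];
const env: Frame[] = []; // the dynamic environment, innermost binding last

const lookup = (name: string): number => {
  for (let i = env.length - 1; i >= 0; i--) if (env[i][0] === name) return env[i][1];
  throw new Error("unbound dynamic variable: " + name);
};

// dlet name = value in body(): bind dynamically for the extent of body.
function dlet<R>(name: string, value: number, body: () => R): R {
  env.push([name, value]);
  try { return body(); } finally { env.pop(); }
}

// A prompt records how deep the environment was; capturing takes only the
// bindings made since then (the delimited part of the dynamic environment).
const mark = () => env.length;
const capture = (m: number): Frame[] => env.slice(m);

// Reinstating SUPPLEMENTS the current environment with the captured slice,
// rather than replacing it wholesale or ignoring it.
function reinstate<R>(slice: Frame[], body: () => R): R {
  env.push(...slice);
  try { return body(); } finally { env.length -= slice.length; }
}

// p is bound outside the prompt, so it is NOT closed over; q is.
let saved: Frame[] = [];
dlet("p", 1, () => {
  const m = mark();
  dlet("q", 2, () => { saved = capture(m); });
});
const [p, q] = dlet("p", 10, () => reinstate(saved, () => [lookup("p"), lookup("q")]));
// p === 10 (taken from the current environment), q === 2 (from the capture)
```

The two assertions at the end are exactly the abstract's slogan: the capture closes over part of the dynamic environment, and reinstating it supplements, rather than replaces, the bindings in force.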
Nested commits for mobile calculi: extending Join
by Roberto Bruni, Hernan Melgratti, Ugo Montanari
In global computing applications the availability of a mechanism for some form of committed choice can be useful, and sometimes necessary. It can conveniently handle, e.g., contract stipulation, distributed agreements, and negotiations with nested choice points to be carried out concurrently. We propose a linguistic extension of the Join calculus for programming nested commits, called Committed Join (cJoin).
It provides primitives for explicit abort, programmable compensations and interactions between ongoing negotiations. We give the operational semantics of cJoin in the reflexive CHAM style. Then we discuss its expressiveness on the basis of a few examples and of the cJoin encoding of other paradigms with similar aims but designed in different contexts, namely AKL and Zero-Safe nets.
To me the main interest lies in section 4.2, which shows an encoding of a subset of AKL in cJoin.
More references to relationships between logic/constraint and "pi-family" approaches to concurrency are welcome!
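cJoin itself is a concurrent calculus; as a very rough sequential intuition for two of its ingredients, explicit abort and programmable compensations, one can picture a negotiation scope that either commits or aborts and unwinds its registered compensations. Everything in this TypeScript sketch is invented for illustration:

```typescript
// A negotiation either returns a result (commit) or aborts, in which case
// the compensations registered so far run in reverse order.
class Abort extends Error {}

type Ctx = {
  compensate: (undo: () => void) => void; // programmable compensation
  abort: () => never;                     // explicit abort
};

function negotiate<R>(body: (ctx: Ctx) => R): R | null {
  const undos: Array<() => void> = [];
  const ctx: Ctx = {
    compensate: u => { undos.push(u); },
    abort: () => { throw new Abort(); },
  };
  try {
    return body(ctx); // commit: compensations are discarded
  } catch (e) {
    if (!(e instanceof Abort)) throw e;
    for (const u of undos.reverse()) u(); // abort: undo in reverse order
    return null;
  }
}

// A failed negotiation: reserve two resources, then back out of the deal.
const log: string[] = [];
const outcome = negotiate(({ compensate, abort }) => {
  log.push("reserve A"); compensate(() => log.push("release A"));
  log.push("reserve B"); compensate(() => log.push("release B"));
  abort(); // no agreement reached
});
// outcome === null; log: reserve A, reserve B, release B, release A
```

What this cannot show, and what cJoin is actually about, is several such negotiations running concurrently, interacting, and nesting their choice points.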
Continuations for Parallel Logic Programming
by Eneia Todoran and Nikolaos S. Papaspyrou
This paper gives denotational models for three logic programming
languages of progressive complexity, adopting the
"logic programming without logic" approach. The first language
is the control flow kernel of sequential Prolog, featuring
sequential composition and backtracking. A committed-choice
concurrent logic language with parallel composition
(parallel AND) and don't care nondeterminism is studied
next. The third language is the core of Warren's basic Andorra
model, combining parallel composition and don't care
nondeterminism with two forms of don't know nondeterminism
(interpreted as sequential and parallel OR) and favoring
deterministic over nondeterministic computation. We show
that continuations are a valuable tool in the analysis and
design of semantic models for both sequential and parallel
logic programming. Instead of using mathematical notation,
we use the functional programming language Haskell
as a metalanguage for our denotational semantics, and employ
monads in order to facilitate the transition from one
language under study to another.
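The paper's first model, the control-flow kernel of sequential Prolog, is essentially the classic two-continuation semantics of backtracking. Here is that standard model sketched in TypeScript rather than the paper's Haskell (the names and the solution-counting observer are mine, not the paper's):

```typescript
// Double-barreled continuations: a failure continuation resumes
// backtracking, and a success continuation receives the failure
// continuation as the current backtrack point.
type Fail = () => number;           // what to do when a goal fails
type Succ = (fk: Fail) => number;   // what to do on success
type Goal = (sk: Succ, fk: Fail) => number;

const succeed: Goal = (sk, fk) => sk(fk);
const fail: Goal = (_sk, fk) => fk();

// Sequential (AND) composition: run g, then h after each success of g.
const seq = (g: Goal, h: Goal): Goal => (sk, fk) => g(fk2 => h(sk, fk2), fk);

// Choice (OR) with backtracking: try g; on failure, fall back to h.
const alt = (g: Goal, h: Goal): Goal => (sk, fk) => g(sk, () => h(sk, fk));

// Observe a goal by counting solutions: each success adds one, then
// invokes its backtrack point to look for more.
const solutions = (g: Goal): number => g(fk => 1 + fk(), () => 0);

// (succeed OR succeed) AND (succeed OR fail) succeeds twice.
const twoWays = seq(alt(succeed, succeed), alt(succeed, fail));
// solutions(twoWays) === 2
```

Threading `fk` through `sk` is what makes backtracking compositional here; the paper's contribution is showing how far the same continuation discipline stretches once parallel composition and the Andorra model enter the picture.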
This paper happens to combine several topics that interest me lately: AKL (a precursor of Oz), denotational semantics, continuations, and implementing programming languages in Haskell.
If you share at least some of these interests, take a look!
