This Is the Title of This Story, Which Is Also Found Several Times in the Story Itself

This is the first sentence of this story. This is the second sentence. This is the title of this story, which is also found several times in the story itself. This sentence is questioning the intrinsic value of the first two sentences. This sentence is to inform you, in case you haven't already realized it, that this is a self-referential story, that is, a story containing sentences that refer to their own structure and function. This is a sentence that provides an ending to the first paragraph.

This is the link to the story

Active Libraries and Universal Languages

The ideas in this dissertation had their early origins in the years I spent working on the Blitz++
library for numerical arrays. Rather than relying on the compiler to optimize arrays, it performed
these optimizations itself using template techniques (‘expression templates’ and ‘template metaprograms’).
The fact that this could be done made me wonder about the general problem of domain-specific
code optimization. From reading the literature, this seemed to be a widespread debate: where ought
domain-specific optimizations to be performed? In compilers? Compiler plug-ins? Metalevel code?
Preprocessors? Libraries? The C++ experience indicates that with a sufficiently powerful language
and compiler, libraries can define their own optimizations, and we can package abstractions and
optimizations together as a coherent library. Template metaprogramming is, let’s be frank, a rather
miserable programming environment; its popularity suggests a real need in this area. The definition
of Active Libraries helped turn these vague thoughts into a concrete goal: to realize
compilers and languages that support such libraries in earnest.
This dissertation proposes one possible direction for this.
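The expression-template trick that Blitz++ uses can be illustrated outside C++. Here is a minimal Python analogue (my own sketch, not Blitz++ code): overloaded operators build a lazy expression tree instead of computing intermediate arrays, and evaluation then runs element-by-element in a single fused pass.

```python
# A Python analogue of C++ expression templates: operators build a lazy
# expression tree; evaluation fuses the whole expression into one loop,
# avoiding temporary arrays.
class Expr:
    def __add__(self, other):
        return BinOp(self, other, lambda a, b: a + b)
    def __mul__(self, other):
        return BinOp(self, other, lambda a, b: a * b)

class Array(Expr):
    def __init__(self, data):
        self.data = list(data)
    def __getitem__(self, i):
        return self.data[i]
    def __len__(self):
        return len(self.data)

class BinOp(Expr):
    def __init__(self, left, right, op):
        self.left, self.right, self.op = left, right, op
    def __getitem__(self, i):
        return self.op(self.left[i], self.right[i])
    def __len__(self):
        return len(self.left)

def evaluate(expr):
    # Single loop over the fused expression -- no intermediate arrays.
    return Array(expr[i] for i in range(len(expr)))

x = Array([1, 2, 3])
y = Array([10, 20, 30])
z = evaluate(x + y * y)   # computes x[i] + y[i]*y[i] in one pass
print(z.data)             # [101, 402, 903]
```

In C++ the same structure is built at compile time from template instantiations, so the fused loop has no runtime dispatch overhead at all; that is the optimization a library defines for itself.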

Or, shorter:

Roughly speaking, this thesis addresses the question: How might we provide DSLs that are fast
and safe?

Naked Objects

The Naked Objects Approach is not a sleazy way to make a quick buck, but a framework for writing business applications that does away with the usual Model-View-Controller architecture. To quote the website:

In the naked objects approach ... writing a business application implies writing only the business domain objects themselves. All business functionality is implemented as behaviours or methods on those objects - the objects can be described as being 'behaviourally complete'. These business objects are then presented directly and automatically to the user, by means of a completely generic viewing mechanism. Similarly, the persistence layer can be generated completely automatically from the domain object definitions (manual mapping will still be required if the objects must interface with existing databases).

Sounds like polytypic programming to me! Ruby on Rails has something similar with scaffolding, and the Django framework in Python does the same.
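The quoted idea is easy to sketch. Below is a minimal, hypothetical illustration (the `Customer` class and `generic_view` function are mine, not from the Naked Objects framework): a completely generic view is derived by reflection from a behaviourally complete domain object.

```python
# A minimal sketch of a "generic viewing mechanism": render any domain
# object from its attributes and public methods alone, with no
# hand-written view layer. All names here are hypothetical.
class Customer:
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
    def credit(self, amount):
        self.balance += amount

def generic_view(obj):
    cls = type(obj)
    actions = [m for m in dir(cls)
               if callable(getattr(cls, m)) and not m.startswith('_')]
    lines = [cls.__name__]
    lines += [f"  {k}: {v}" for k, v in vars(obj).items()]
    lines += [f"  [action] {m}" for m in actions]
    return "\n".join(lines)

print(generic_view(Customer("Ada", 100)))
# Customer
#   name: Ada
#   balance: 100
#   [action] credit
```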

It's nice to see application of theory, though I'm virtually certain those doing the applying wouldn't recognise it as such.


PyPy, the Python implementation written in Python, was mentioned here a couple of times in the past. After it was mentioned in a recent LtU discussion, I took another look, and boy did they make a lot of progress when I wasn't looking. PyPy can even compile itself now... You should check it out again if you are interested in this sort of thing.

There's even an introduction to the techniques used by PyPy, including a nice (but very high level) overview of abstract interpretation.
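For readers new to the idea, abstract interpretation can be shown in a few lines. This toy example (my own, nothing to do with PyPy's actual implementation) evaluates arithmetic expressions over the abstract domain of signs instead of concrete numbers:

```python
# A toy abstract interpreter: evaluate expressions over the sign domain
# {neg, zero, pos, top} rather than concrete integers. 'top' means
# "could be anything" -- the price of abstraction.
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def abs_const(n):
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_add(a, b):
    if ZERO in (a, b):
        return b if a == ZERO else a
    return a if a == b else TOP   # pos + neg could be any sign

def abs_mul(a, b):
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

# expressions: ('const', n) | ('add', l, r) | ('mul', l, r)
def abs_eval(e):
    tag = e[0]
    if tag == 'const':
        return abs_const(e[1])
    l, r = abs_eval(e[1]), abs_eval(e[2])
    return abs_add(l, r) if tag == 'add' else abs_mul(l, r)

# (-2) * (-3) + 1 is provably positive without computing the value
print(abs_eval(('add', ('mul', ('const', -2), ('const', -3)), ('const', 1))))
# pos
```

PyPy's flow analysis plays the same game at a much larger scale, abstracting Python objects into types so that the interpreter's own code can be translated to low-level code.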

Semantic Distance: NLP Not a Resource Sink

Following the story on Mind Mappers and other RDF comments of late, I thought this NLP slide show (PDF) should get a story. Dr. Adrian Walker offers an interesting perspective in a friendly crayon-colored format, including a critique of RDF. Source site Internet Business Logic has other offerings including an online demo.

Ragel State Machine Compiler

What kind of task is Ragel good for?

  • Lexing programming languages.
  • Parsing file formats.
  • Creating robust custom protocols.
  • Checking your "Theory of Computation" homework.

Features:

  • Describe arbitrary state machines using regular language operators and/or state tables.
  • NFA to DFA conversion.
  • Hopcroft's state minimization.
  • Embed any number of actions into machines at arbitrary places.
  • Control non-determinism using priorities on transitions.
  • Visualize output with Graphviz.
  • Use byte, double byte or word sized alphabets.
  • Generate C/C++/Objective-C code with no dependencies.
  • Choose from table or control flow driven output.
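To give a feel for what table-driven output means, here is a hand-written sketch in the same spirit (this is illustrative only, not actual Ragel output, which is generated C): a DFA for C-style identifiers driven by a transition table.

```python
# A hand-written table-driven DFA in the style of Ragel's table output
# (not actual generated code): recognizes [A-Za-z_][A-Za-z0-9_]*.
def char_class(c):
    if c.isalpha() or c == '_':
        return 'alpha'
    if c.isdigit():
        return 'digit'
    return 'other'

# states: 0 = start, 1 = accepting (inside identifier); missing
# entries are the implicit error state.
TABLE = {
    (0, 'alpha'): 1,
    (1, 'alpha'): 1,
    (1, 'digit'): 1,
}

def is_identifier(s):
    state = 0
    for c in s:
        state = TABLE.get((state, char_class(c)), -1)
        if state == -1:
            return False
    return state == 1

print(is_identifier("ragel_v6"))  # True
print(is_identifier("6ragel"))    # False
```

Ragel's control-flow-driven alternative compiles the same machine into nested switches and gotos instead of a lookup table, trading code size for speed.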

An Overview of the Singularity Project

Singularity is a research project in Microsoft Research that started with the question: what would a software platform look like if it was designed from scratch with the primary goal of dependability? Singularity is working to answer this question by building on advances in programming languages and tools to develop a new system architecture and operating system (named Singularity), with the aim of producing a more robust and dependable software platform. Singularity demonstrates the practicality of new technologies and architectural decisions, which should lead to the construction of more robust and dependable systems...
Singularity... starts from a premise of language safety and builds a system architecture that supports and enhances the language guarantees.

A nice overview of what sounds like an interesting project.

The choice of implementation language is also interesting:

Singularity is written in Sing#, which is an extension to the Spec# language developed in Microsoft Research. Spec# itself is an extension to Microsoft’s C# language that provides constructs (pre- and post-conditions and object invariants) for specifying program behavior. Specifications can be statically verified by the Boogie verifier or checked by compiler-inserted run-time tests. Sing# extends this language with support for channels and low-level constructs necessary for system code... Integrating a feature into a language allows more aspects of a program to be verified. Singularity’s constructs allow communication to be statically verified.

An interesting aspect is the support for meta-programming, which is implemented in an unusual manner:

Compile-time reflection (CTR) is a partial substitute for the CLR’s full reflection capability. CTR is similar to techniques such as macros, binary code rewriting, aspects, meta-programming, and multi-stage languages. The basic idea is that programs may contain place-holder elements (classes, methods, fields, etc.) that are subsequently expanded by a generator.
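The expand-the-placeholders model has a rough analogue in dynamic languages. The sketch below (all names hypothetical; this is my illustration, not Singularity's mechanism) uses a class decorator as the "generator" that fills in placeholder elements before the class is used:

```python
# A rough analogue of compile-time reflection: a generator inspects a
# class's placeholder field list and expands it into concrete methods.
# In Singularity this expansion happens at compile time, not at load time.
def generate_accessors(cls):
    for field in getattr(cls, '_fields', ()):
        def getter(self, _f=field):       # bind field name per method
            return getattr(self, '_' + _f)
        setattr(cls, 'get_' + field, getter)
    return cls

@generate_accessors
class Message:
    _fields = ('sender', 'body')          # placeholder elements
    def __init__(self, sender, body):
        self._sender, self._body = sender, body

m = Message("alice", "hi")
print(m.get_sender(), m.get_body())  # alice hi
```

The key difference CTR buys is that the generated code exists at compile time and can therefore be statically verified along with the rest of the program.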

Many other interesting design decisions are discussed in the paper (e.g., various DbC facilities), so do check it out.

Abstract interpretation for constraint handling rules

Abstract interpretation for constraint handling rules. Tom Schrijvers, Peter J. Stuckey, Gregory J. Duck. PPDP’05

Program analysis is essential for the optimized compilation of Constraint Handling Rules (CHRs) as well as the inference of behavioral properties such as confluence and termination. Up to now all program analyses for CHRs have been developed in an ad hoc fashion. In this work we bring the general program analysis methodology of abstract interpretation to CHRs: we formulate an abstract interpretation framework over the call-based operational semantics of CHRs. The abstract interpretation framework is non-obvious since it needs to handle the highly non-deterministic execution of CHRs. The use of the framework is illustrated with two instantiations: the CHR-specific late storage analysis and the more generally known groundness analysis. In addition, we discuss optimizations based on these analyses and present experimental results.

I haven't read this paper carefully yet, but since the authors claim it is the first published account of abstract interpretation for CHRs, I decided to mention it here anyway.

Automatic type inference via partial evaluation

Automatic type inference via partial evaluation. Aaron Tomb, Cormac Flanagan. PPDP’05.

Type checking and type inference are fundamentally similar problems. However, the algorithms for performing the two operations, on the same type system, often differ significantly. The type checker is typically a straightforward encoding of the original type rules. For many systems, type inference is performed using a two-phase, constraint-based algorithm. We present an approach that, given the original type rules written as clauses in a logic programming language, automatically generates an efficient, two-phase, constraint-based type inference algorithm. Our approach works by partially evaluating the type checking rules with respect to the target program to yield a set of constraints suitable for input to an external constraint solver. This approach avoids the need to manually develop and verify a separate type inference algorithm, and is ideal for experimentation with and rapid prototyping of novel type systems.

Also somewhat relevant to the discussions here about type checking as abstract interpretation.
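The two-phase shape the abstract describes can be sketched concretely. In this toy version (entirely my own; the paper derives this automatically from logic-programming rules), running the type rules over a term with unknown types collects equality constraints that could then be handed to an external solver:

```python
# Toy constraint generation for a tiny lambda calculus: the "checker"
# runs over a term containing fresh type variables and emits equality
# constraints instead of yes/no answers. All names are my own.
import itertools

_fresh = itertools.count()

def tv():
    return ('var', next(_fresh))      # a fresh type variable

# terms: ('int', n) | ('v', name) | ('lam', x, body) | ('app', f, a)
# types: ('Int',) | ('var', i) | ('fun', dom, cod)
def constraints(term, env):
    tag = term[0]
    if tag == 'int':
        return ('Int',), []
    if tag == 'v':
        return env[term[1]], []
    if tag == 'lam':
        a = tv()
        body_t, cs = constraints(term[2], {**env, term[1]: a})
        return ('fun', a, body_t), cs
    if tag == 'app':
        f_t, c1 = constraints(term[1], env)
        a_t, c2 = constraints(term[2], env)
        b = tv()
        # the function's type must equal (argument type -> result type)
        return b, c1 + c2 + [(f_t, ('fun', a_t, b))]

# \x. x 1 -- the constraints force x : Int -> b
t, cs = constraints(('lam', 'x', ('app', ('v', 'x'), ('int', 1))), {})
print(t)    # ('fun', ('var', 0), ('var', 1))
print(cs)   # [(('var', 0), ('fun', ('Int',), ('var', 1)))]
```

The paper's contribution is that this constraint generator is not written by hand, as above, but obtained by partially evaluating the declarative type rules with respect to the program.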

Implicitly Heterogeneous Multi-stage Programming

Implicitly Heterogeneous Multi-stage Programming. Jason Eckhardt, Roumen Kaiabachev, Emir Pasalic, Kedar Swadi and Walid Taha

Previous work on semantics-based multi-stage programming (MSP) language design focused on homogeneous language designs, where the generating and the generated languages are the same. Homogeneous designs simply add a hygienic quasi-quotation and evaluation mechanism to a base language. An apparent disadvantage of this approach is that the programmer is bound to both the expressivity and performance characteristics of the base language. This paper proposes a practical means to avoid this by providing specialized translations from subsets of the base language to different target languages.

The idea is that the first stage is done in OCaml and the second stage in C or Fortran (or another similar language). The main point is that the output from the first stage is a subset of OCaml (actually, a subset of the OCaml AST), which is more regular and permits efficient translation to C (as compared to what's required in compiling full OCaml).

The generated C code is, of course, automatically type-correct. As Oleg remarks, this brings us close to the goal of enjoying all benefits of abstractions with no overhead.
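The flavor of the approach can be shown with a deliberately tiny stand-in (Python here in place of MetaOCaml, and with none of the paper's type guarantees): a first-stage generator specializes `pow(x, n)` for a known `n` and emits C source for the second stage.

```python
# An illustration of staging across languages: the generator runs now
# (stage 1, here in Python standing in for OCaml) and emits specialized
# C code to be compiled and run later (stage 2). Names are hypothetical.
def gen_power(n):
    expr = "1"
    for _ in range(n):
        expr = f"({expr} * x)"       # unroll the loop at generation time
    return f"int power{n}(int x) {{ return {expr}; }}"

print(gen_power(3))
# int power3(int x) { return (((1 * x) * x) * x); }
```

The generated function has no loop and no recursion left in it: the abstraction (a general power function) was paid for entirely at generation time, which is exactly the "abstraction without overhead" point.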

More information here.
