## D is for Domain and Declarative

The list of accepted papers is out for the IFIP Working Conference on Domain Specific Languages. Happily for me, the program reveals much interest in languages for reasoning, decision making, and search. Even among people who are not my coauthors. :)

Declarative programming tends to attract skepticism because of its reputation for poor and hard-to-control performance. The approach of DSL embedding appears to ameliorate this problem, and the success of SAT solvers continues to chip away at that reputation.

Meanwhile, the call for papers is out for Principles and Practice of Declarative Programming 2009, which has a venerable program committee. The submission deadline is May 7.

### Declarative programming

Hi, I've got a question about how you do EDSLs. The EDSL argument you offer does go a long way to addressing skepticism, because you've got the host language's control and data structures. However, a lot of the sorts of optimizations I have wanted for EDSLs are intensional in character, and seem to call for analyzing the source programs. Straightforward "denotational" embeddings can often make this quite difficult to achieve.

For example, if you are doing monadic nondeterministic search, there can be a big difference between the expressions:

    do x <- a_big_set
       y <- generate_another_big_set x
       guard (divisible_by_one_thousand x)
       return (x + y)

versus

    do x <- a_big_set
       guard (divisible_by_one_thousand x)
       y <- generate_another_big_set x
       return (x + y)


Extensionally, they're the same, but operationally the first might end up doing way more work than the second! Being able to analyze cases like that and optimize them is often essential.
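To make the difference concrete, here is a small runnable sketch in the list monad. The names mirror the example above, but `aBigSet` and `generateAnotherBigSet` are hypothetical toy stand-ins, sized down so the effect is easy to measure:

```haskell
import Control.Monad (guard)

-- Toy stand-ins (hypothetical, sized down for illustration):
aBigSet :: [Int]
aBigSet = [1 .. 5000]

generateAnotherBigSet :: Int -> [Int]
generateAnotherBigSet x = [x .. x + 99]

divisibleByOneThousand :: Int -> Bool
divisibleByOneThousand x = x `mod` 1000 == 0

-- Filter-late: enumerates all 5000 * 100 pairs before the guard prunes.
filterLate :: [Int]
filterLate = do
  x <- aBigSet
  y <- generateAnotherBigSet x
  guard (divisibleByOneThousand x)
  return (x + y)

-- Filter-early: only the 5 surviving values of x ever reach the inner
-- generator, so a tiny fraction of the branches are explored.
filterEarly :: [Int]
filterEarly = do
  x <- aBigSet
  guard (divisibleByOneThousand x)
  y <- generateAnotherBigSet x
  return (x + y)
```

Both produce the same 500 results, so a purely extensional view cannot tell them apart; only the amount of work differs.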

I know how to cope with this in a few cases (basically, it's usually easy when the DSL can be expressed as an applicative functor, because NBE-style trickery often becomes feasible), but for more general things like monadic EDSLs I'm at sea.
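For what it's worth, here is a minimal sketch of why the applicative case is tractable (the `Choice` GADT below is a hypothetical toy, not from any of the papers): because `Ap`'s arguments cannot depend on each other's results, the shape of the term is static, and an analysis pass can, for example, bound the size of the search space without running the program.

```haskell
{-# LANGUAGE GADTs #-}

-- A toy applicative DSL for nondeterministic choice.
data Choice a where
  Pure  :: a -> Choice a
  Ap    :: Choice (a -> b) -> Choice a -> Choice b
  OneOf :: [a] -> Choice a

-- Run the term as nondeterministic search.
run :: Choice a -> [a]
run (Pure a)   = [a]
run (Ap f x)   = [g a | g <- run f, a <- run x]
run (OneOf xs) = xs

-- Statically bound the search-space size without running anything;
-- this is possible only because the term's shape is value-independent.
size :: Choice a -> Int
size (Pure _)   = 1
size (Ap f x)   = size f * size x
size (OneOf xs) = length xs
```

With monadic bind there is no analogous recursive case: the continuation's shape depends on a runtime value, so `size` would have nothing to recurse into.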

### Declarative metaprogramming hurts sometimes

I suspect that unless you can, e.g., support a large applicative fragment, you're probably sunk. Bind pretty much drops you in it: you can't really do much with the second parameter, because with different input it could yield something else entirely.

I'd actually started on a longer comment in terms of return and join, but the problem of host-language fixpoints turning your programs infinite shows up with any EDSL, of course. It did get me thinking of one thing that may be cool, though: is it worth modelling embeddings with Pure Type Systems as a way of looking at interactions between the host and embedded languages?

### Embedding DSL efficiently

Our paper is specifically about efficient EDSLs. We show that although
we cannot intensionally analyze the embedded code, we nevertheless
achieve performance comparable with state-of-the-art systems (IBAL,
PRISM). The code archive presents a few benchmarks.
The blood-type benchmark, for example, can't be handled in a naive
way. The music example is especially demanding, as the search space is
very large and the probability of evidence is very low (about
10^(-7)).

The example

    do x <- a_big_set
       y <- generate_another_big_set x
       guard (divisible_by_one_thousand x)
       return (x + y)

is very close to the one described in Section 5 of the paper. As the paper
says, "To reduce the delay between the production and observation of
random values, we can revise the probabilistic program using lazy
evaluation." Again, the code archive (file slazy.ml) gives the
complete code, near the end.
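As a rough illustration of that sentence (in Haskell rather than the paper's OCaml, and not taken from slazy.ml; `expensiveSamples` is a hypothetical stand-in): laziness lets the program bind the expensive generator before the guard without paying for it, because the thunk is only forced on branches that survive the observation.

```haskell
import Control.Monad (guard)

-- Hypothetical stand-in for an expensive random-value generator.
expensiveSamples :: Int -> [Int]
expensiveSamples x = [x + k * k | k <- [1 .. 100]]

delayedObservation :: [Int]
delayedObservation = do
  x <- [1 .. 5000]
  let ys = expensiveSamples x  -- production delayed: just a thunk so far
  guard (x `mod` 1000 == 0)    -- observation happens first
  y <- ys                      -- forced only on the 5 surviving branches
  return (x + y)
```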

### Thanks

Actually, I only saw the "J is for Javascript" paper -- I didn't realize at first that "my coauthors" was actually two links to two different papers! I'll look at your "Embedded Probabilistic Programming" paper now....