LtU Forum

A question of separation logic and monads

I am slowly working my way up to an understanding of separation logic, and one thing I have not seen is a transformation into functional code with immutable data structures.

With normal imperative code you just wrap a big fat monad around everything, and there is a straightforward transformation into functional code with immutable data structures.

The disadvantage of a big fat monad is that you lose the ability to reason about parts of the program that have side effects but are independent of each other. I would therefore expect smaller monads that could be split and combined according to the rules developed for separation logic and the separating conjunction.
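
To make the distinction concrete, here is a minimal Haskell sketch (all names are mine, assuming GHC and the mtl package). The "big fat" monad types every action over the whole heap, while smaller state monads over disjoint components make independence visible in the types, and a product combinator plays the role of a very crude separating conjunction:

    import Control.Monad.State

    data Heap = Heap { counter :: Int, messages :: [String] }

    -- Big fat monad: the type says nothing about which part of the
    -- heap this action actually touches.
    tickBig :: State Heap ()
    tickBig = modify (\h -> h { counter = counter h + 1 })

    -- Smaller monads over disjoint components:
    tick :: State Int ()
    tick = modify (+ 1)

    record :: String -> State [String] ()
    record msg = modify (msg :)

    -- Combine two independent computations over the product of their
    -- states; a poor man's analogue of the separating conjunction.
    together :: State a () -> State b () -> State (a, b) ()
    together ma mb = do
      (a, b) <- get
      put (execState ma a, execState mb b)

    main :: IO ()
    main = print (execState (together tick (record "hi")) (0, []))
    -- prints (1,["hi"])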

Does anyone know whether anyone has already researched this? Or perhaps I am making an elementary error in my reasoning?

On constness

I'm trying to decide how I want to approach the issue of immutability in my language. So far, I've identified three general approaches.

Obviously, there are the purely functional languages, in which there is no mutability at all. I'm not interested in that approach, mainly because my focus is on multi-paradigm programming and that approach is too narrow for me.

The second approach is that used by C++, where 'const' is a type modifier - that is, a mutable type can be declared read-only after the fact.

The third approach is that taken by Java and many other languages where there are separate mutable and immutable types, which may conform to a common interface but which have separate implementations. For example, you have ArrayList and ImmutableList, both of which implement the List interface.

An advantage of the Java approach is that the immutable implementations can in many cases be simpler and more efficient than their mutable counterparts. A case in point is ImmutableList, which has an optimization whereby empty lists are represented by a reference to a static singleton instance.
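
The shape of that approach can be sketched in a few lines. Here is a rough Haskell rendering (types and names hypothetical, not any particular library), with one read-only interface, separate mutable and immutable implementations, and the shared empty singleton:

    import Data.IORef

    -- A common read-only interface, analogous to Java's List.
    class ReadOnlyList l where
      size :: l a -> IO Int
      at   :: l a -> Int -> IO a

    -- Immutable implementation: a plain snapshot that never changes.
    newtype ImmList a = ImmList [a]

    instance ReadOnlyList ImmList where
      size (ImmList xs)   = pure (length xs)
      at   (ImmList xs) i = pure (xs !! i)

    -- One shared empty instance, as in the singleton optimization.
    emptyImm :: ImmList a
    emptyImm = ImmList []

    -- Mutable implementation: contents live behind a mutable reference.
    newtype MutList a = MutList (IORef [a])

    instance ReadOnlyList MutList where
      size (MutList r)   = length <$> readIORef r
      at   (MutList r) i = (!! i) <$> readIORef r

    -- Mutation is only available on the mutable type.
    push :: a -> MutList a -> IO ()
    push x (MutList r) = modifyIORef r (x :)

    main :: IO ()
    main = do
      m <- MutList <$> newIORef [1, 2, 3 :: Int]
      push 0 m
      size m >>= print                           -- 4
      size (emptyImm :: ImmList Int) >>= print   -- 0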

On the other hand, it's extra work to have to implement two versions of every collection. That extra work isn't a problem for the most commonly-used containers, but for the long tail of specialized and possibly application-specific types it can be a significant burden.

C++, on the other hand, has its own version of the 'double work' problem, whereby a lot of classes end up having to implement two versions of many methods, a 'const' and a 'non-const' version. However, this can be solved by language design - a 'conditional const' type modifier which is const only if the enclosing type is also const. This allows many of these double definitions to be collapsed.

However, you still have the problem of maximizing the efficiency of container implementations. Suppose I have a C++ class X that internally contains a vector. Adding the const modifier to X doesn't switch the implementation of the vector over to a more efficient, immutable version; and even if the language supported that, it wouldn't be very efficient, because now you have to add a level of indirection between X and the vector (using virtual methods or some other technique) so that X's implementation can switch between the mutable and immutable implementations of vector based on whether X is mutable.

So I'm wondering if there are any other interesting approaches out there, or if there's a sweet spot that I'm missing.

Jonathan Blow's next foray into game language design

As of about a week ago, Jonathan Blow (creator of various well-received games such as Braid, as well as various PL efforts) has posted a pair of talks on twitch.tv (later both put on YouTube) in which he issues a call to arms for game developers to design a new programming language around their concerns. He discusses what he sees as some of the shortcomings of existing languages that are supposed to be C++ replacements, and some of the requirements he feels game devs have. There was a Q&A session as well.
The talks are more practical than PL-theoretical, but interesting (and occasionally frustrating) nevertheless.

Twitter feed recommendation: Meredith Patterson

Meredith tweets under the handle @maradydd and contributes much to the tiny niche comprising the intersection of type theory and security. For example:

  1. She shares links to and comments upon interesting content; e.g., she summarises a 1964 post of Doug McIlroy's asserting four key failures of programming languages. (To recast these concerns as desiderata, I call these the McIlroy Systems-PL Minimum);
  2. Her retweets are sharp, e.g., on requirements gaps in formal specifications of programming languages; and 'It’s almost as if basing systems around an untyped programming language is a bad idea';
  3. And she promotes worthy things like the @typetheorypodcast and neat little sequent calculus tutorials.

Kaya: Declarative Reactive

The presentation of Kaya at the Future of Programming Workshop at the Strange Loop conference.

Kaya is declarative like SQL and reactive like a spreadsheet, and a spreadsheet metaphor is used to render powerful, expressive data structures. Code is not written in a text editor; instead you compose applications in a spreadsheet-like editor. The resulting contextual nature ameliorates some of the typical need for control structures and leads to a more natural way to compose applications.

Controlling time and space

Evan's (the Elm guy) Strange Loop talk is actually quite good (and not bad for a Prezi); slides:

http://prezi.com/rfgd0rzyiqp_/controlling-time-and-space/

Video of talk:

https://www.youtube.com/watch?v=Agu6jipKfYw

A very nice (and useful) overview of the different flavors of FRP.

Extended Axiomatic Language

Axiomatic language is a formal system for specifying recursively enumerable sets of hierarchical symbolic expressions. But axiomatic language does not have negation. Extended axiomatic language is based on the idea that when one specifies a recursively enumerable set, one is simultaneously specifying the complement of that set (which may not be recursively enumerable). This complement set can be useful for specification. Extended axiomatic language makes use of this complement set and can be considered a form of logic programming negation. The web page defines the language and gives examples.

Seeking article on syntax sugar and comparing programming languages

I recall there was an article on LtU that defined the term 'syntax sugar'. IIRC the general idea was that syntax sugar requires only a local transformation, whereas language features that require non-local program rewrites cannot be called syntax sugar; this distinction can be the core of an argument that one language is higher-level than another.
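
Haskell's do-notation is a standard example of sugar in this local sense: each do-block can be rewritten into >>= and lambdas without knowing anything about the rest of the program. A small illustration:

    -- Sugared form:
    greet :: IO ()
    greet = do
      name <- getLine
      putStrLn ("hello " ++ name)

    -- Desugared form, obtained by a purely local rewrite of the block:
    greet' :: IO ()
    greet' = getLine >>= \name -> putStrLn ("hello " ++ name)

    main :: IO ()
    main = greet >> greet'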

Optimisation by repeated beta- and eta-reduction

The following post recently showed up on Planet Haskell: Morte: an intermediate language for super-optimising functional programs. From the post:

Now suppose there were a hypothetical language with a stronger guarantee: if two programs are equal then they generate identical executables. Such a language would be immune to abstraction: no matter how many layers of indirection you might add, the binary size and runtime performance would be unaffected.

Here I will introduce such an intermediate language named Morte that obeys this stronger guarantee.

The typed lambda calculus possesses a useful property: every term in the lambda calculus has a unique normal form if you beta-reduce everything. If you're new to lambda calculus, normalizing an expression equates to indiscriminately inlining every function call.
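
To make the "inline every function call" reading concrete, here is a tiny normal-order normalizer for the untyped calculus (my sketch, not Morte's actual code; it is the typed calculus that guarantees termination):

    -- Untyped lambda terms.
    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    free :: Term -> [String]
    free (Var x)   = [x]
    free (Lam x b) = filter (/= x) (free b)
    free (App f a) = free f ++ free a

    -- Capture-avoiding substitution; binders are freshened when needed.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)
      | x == y    = s
      | otherwise = Var y
    subst x s (App f a) = App (subst x s f) (subst x s a)
    subst x s (Lam y b)
      | x == y          = Lam y b
      | y `elem` free s = let y' = fresh (free s ++ free b) y
                          in Lam y' (subst x s (subst y (Var y') b))
      | otherwise       = Lam y (subst x s b)

    fresh :: [String] -> String -> String
    fresh used y = head [v | v <- iterate (++ "'") y, v `notElem` used]

    -- Beta-reduce everywhere until no redex remains ("inline every
    -- call"); this loops on terms with no normal form, which the
    -- typed calculus rules out.
    normalize :: Term -> Term
    normalize (Var x)   = Var x
    normalize (Lam x b) = Lam x (normalize b)
    normalize (App f a) = case normalize f of
      Lam x b -> normalize (subst x a b)
      f'      -> App f' (normalize a)

    main :: IO ()
    main = do
      let idT = Lam "x" (Var "x")
      print (normalize (App idT (App idT (Var "y"))))  -- Var "y"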

I am worried about this guarantee because the author explicitly wishes to support both folds and unfolds, and, according to Catamorphisms and anamorphisms = general or primitive recursion, folds and unfolds together have the expressive power of general recursion - so that not every term has a normal form (right?). More generally, it seems to me that being able to offer the strong guarantee the author offers implies in particular a solution to the halting problem, and hence a non-Turing-complete language.
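
The worry can be made concrete in Haskell: composing an unfold with a fold yields a hylomorphism, and hylomorphisms give general recursion. In the sketch below (standard definitions; the Collatz example is mine), whether collatzSteps terminates on every input is precisely the open Collatz conjecture, so no normalizer could promise it a normal form:

    {-# LANGUAGE DeriveFunctor #-}

    newtype Fix f = Fix (f (Fix f))

    -- Fold (catamorphism) and unfold (anamorphism).
    cata :: Functor f => (f a -> a) -> Fix f -> a
    cata alg (Fix x) = alg (fmap (cata alg) x)

    ana :: Functor f => (a -> f a) -> a -> Fix f
    ana coalg = Fix . fmap (ana coalg) . coalg

    -- Their fused composition is general recursion.
    hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
    hylo alg coalg = alg . fmap (hylo alg coalg) . coalg

    data ListF e r = Nil | Cons e r deriving Functor

    -- Length of the Collatz trajectory of n.
    collatzSteps :: Integer -> Int
    collatzSteps = hylo alg coalg
      where
        coalg 1 = Nil
        coalg n
          | even n    = Cons n (n `div` 2)
          | otherwise = Cons n (3 * n + 1)
        alg Nil        = 0
        alg (Cons _ k) = k + 1

    main :: IO ()
    main = print (collatzSteps 27)  -- 111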

Later, the author says:

You can take any recursive data type and mechanically transform the type into a fold and transform functions on the type into functions on folds.

I have a similar concern here; it seems to me to be saying that folds can express general recursion, but I thought (though I don't have a reference) that they could express only primitive recursion.
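
For reference, the mechanical transformation in question is presumably the Böhm-Berarducci (Church-style) encoding, which represents a data type by its own fold. A minimal Haskell sketch:

    {-# LANGUAGE RankNTypes #-}

    -- A list represented *as* its fold.
    newtype ListC a = ListC (forall r. r -> (a -> r -> r) -> r)

    nil :: ListC a
    nil = ListC (\n _ -> n)

    cons :: a -> ListC a -> ListC a
    cons x (ListC xs) = ListC (\n c -> c x (xs n c))

    foldC :: r -> (a -> r -> r) -> ListC a -> r
    foldC n c (ListC xs) = xs n c

    -- Functions on the type become functions on folds:
    sumC :: ListC Int -> Int
    sumC = foldC 0 (+)

    main :: IO ()
    main = print (sumC (cons 1 (cons 2 (cons 3 nil))))  -- 6

Out of the box this gives exactly what folds give, which is why the primitive-recursion question above seems right to ask.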

Have I got something badly conceptually wrong?

Re-thinking Prolog

A recent paper by Oleg Kiselyov and Yukiyoshi Kameyama at the University of Tsukuba discusses weaknesses of Prolog and areas for improvement.

Quite many computations and models are mostly deterministic. Implementing them in Prolog with any acceptable performance requires the extensive use of problematic features such as cut. Purity is also compromised when interfacing with mainstream language libraries, which are deterministic and cannot run backwards. Divergence is the constant threat, forcing the Prolog programmers to forsake the declarative specification and program directly against the search strategy. All in all, Classical Prolog is the exquisite square peg in the world with mostly round holes.

The strong points of Prolog can be brought into an ordinary functional programming language. Using OCaml as a representative, we implement lazy guessing as a library, with which we reproduce classical Prolog examples. Furthermore, we demonstrate parser combinators that use committed choice (maximal munch) and can still be run forwards and backwards. They cannot be written in Classical Prolog. Logic variables, unification, and its WAM compilation strategy naturally emerge as a "mere optimization" of the Herbrand universe enumeration.

The paper mentions the strength of the approach used by miniKanren (which embeds logic programming, with a fairer search strategy than normal Prolog's, into Scheme) and Hansei (which embeds probability-based nondeterminism into OCaml, using delimited continuations to allow direct-style expression of monadic code).

After motivating some choices by studying the prototypical example of running append backwards, they cover running parsers with the "maximal munch" rule backwards - something that cannot be (declaratively) expressed in Prolog.
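
The append example can be reproduced with nothing fancier than the list monad. A rough sketch (mine, not the paper's library): given the result zs, enumerate every pair (xs, ys) with xs ++ ys == zs, which is append run backwards:

    -- Mirrors the two Prolog clauses:
    --   append([], Ys, Ys).
    --   append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).
    unappend :: [a] -> [([a], [a])]
    unappend zs =
      ([], zs) :
      case zs of
        []      -> []
        z : zs' -> [ (z : xs, ys) | (xs, ys) <- unappend zs' ]

    main :: IO ()
    main = mapM_ print (unappend "abc")
    -- ("","abc") ("a","bc") ("ab","c") ("abc","")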

A very interesting paper on logic programming! It also thanks Tom Schrijvers of CHR fame at the end.
