LtU Forum

Opposing Hierarchies of Complexity

Here is a thought for discussion: There are two models in programming

1. The machine/cpu model with pointers, addressing, manual memory allocation, modular arithmetic, floating point numbers etc.

2. The mathematical model with values, integers, real numbers etc.

The code itself, built up from machine code, is simplest when closest to (1); the abstractions built on top of it are of ever increasing complexity. For example, a GC is a fairly complex piece of code; adding two stack-allocated machine-word ints is not.

The mathematical model is built from type-theory. We can define mathematics constructively. (I am thinking more of Twelf than Coq here.) It seems the type system is at its simplest when expressing the fundamental concepts of mathematics, but modelling a real CPU is a complex application for type-theory (and mathematics).

So a function with a simple type has a complex implementation (involving garbage collection and layers of abstraction), and a function with a simple implementation would have a complex type (possibly something involving linear refinement types, monads, etc.). You can see this in the difference between a simple functional language, which requires a large runtime, and typed assembly language, which requires a complex type system.

TAL, whilst an interesting idea, seems too low level, being fixed on a single CPU instruction set architecture. So the idea is a language like 'C' with a complex type system like TAL's, which can implement a pure functional DSL whose purity is enforced by the type system. You could have a pure, strict functional language in which a simple monadic program would compile directly to 'C' code.

There should also be some proofs; for example, the garbage collector implementation should result in the correct simple types for objects managed by the GC. The complex linear/effect types expressing the manual management of memory will only disappear from the type signatures if the GC code is correct. As discussed in another topic, the GC would require the language to be capable of reflecting memory layout, stack context and concurrency control.
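
As a rough illustration of the asymmetry (an informal C++ sketch, not a formal system; the names Node, ManualNode, cons and cons_manual are made up): when a runtime mechanism manages memory, the signature stays simple, whereas with manual management the ownership discipline surfaces in the type itself, much as the linear/effect types above would.

    // Informal C++ sketch of the asymmetry above; all names are invented.
    #include <memory>

    struct Node {
        int head;
        std::shared_ptr<const Node> tail;   // automatically (ref-count) managed
    };

    // (1) "Mathematical" surface: the signature says nothing about memory;
    // a runtime mechanism (reference counting here, standing in for a GC)
    // carries the complexity instead.
    std::shared_ptr<const Node> cons(int head, std::shared_ptr<const Node> tail) {
        return std::make_shared<Node>(Node{head, std::move(tail)});
    }

    struct ManualNode {
        int head;
        std::unique_ptr<ManualNode> tail;   // ownership is explicit
    };

    // (2) "Machine" surface: manual management leaks into the type.
    // unique_ptr is move-only, so the tail is given up exactly once: an
    // affine (weaker-than-linear) discipline checked by the compiler
    // rather than hidden behind a garbage collector.
    std::unique_ptr<ManualNode> cons_manual(int head, std::unique_ptr<ManualNode> tail) {
        return std::make_unique<ManualNode>(ManualNode{head, std::move(tail)});
    }

    int main() {
        auto xs = cons(1, cons(2, nullptr));
        auto ys = cons_manual(1, cons_manual(2, nullptr));
        return (xs->head + ys->head == 2) ? 0 : 1;
    }

Of course unique_ptr only approximates an affine discipline (it can still be null at runtime), but it gives a feel for how the complex types could disappear from signatures once a managed runtime is trusted to be correct.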

I am interested in any work / papers that might present a similar language concept, or that develop a suitable type system or implementation for this.

Eve development diary

We put up a development diary for our work on the Eve language. The first post is a rough overview of the last nine months of development as reconstructed from github and slack, starting with the original demo at Strange Loop. In the future we will run more detailed posts once per month.

The original motivation came from reading an old comment on LtU bemoaning the fact that there is very little record of the development of today's languages and the thought processes behind them. Hopefully this will be useful to future readers.

Tools for layered languages?

I like things like Shen and Haxe and ATS, where I can take a high-level language and target another, more common language. The intent is to be able to, e.g., play with how portable your source code is vs. how much you use 'native' libraries. So Shen -> {Python,Javascript,...} or Haxe -> {C++,Javascript,...} or ATS -> {C,PHP,...}.

But of course a problem is that such a setup is unlikely to come with real tooling like a source-line debugger, profilers, etc.

Apparently there is some leverage to be had if you are targeting C, but to me it seems more like dumb luck than something based on principle. :-)
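
One concrete mechanism behind that leverage is the C preprocessor's #line directive: a compiler that targets C (or C++) can emit it so that debug info, compiler diagnostics and line-based profilers attribute the generated code to the original source file. A hypothetical sketch of what a backend might emit (the DSL file name fib.dsl and its line numbers are invented):

    // Hypothetical output of a compile-to-C/C++ backend for a DSL source
    // file "fib.dsl" (the file name and line numbers are invented). The
    // #line directives make the toolchain record fib.dsl in the debug info,
    // so debugger stepping and profiler samples point at the DSL source
    // rather than at this generated file.
    #include <cstdio>

    #line 3 "fib.dsl"
    static long fib(long n) {
    #line 4 "fib.dsl"
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    int main() {
    #line 8 "fib.dsl"
        std::printf("%ld\n", fib(30));
        return 0;
    }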

Does anybody know of efforts to "solve" this "problem"?

automatic test discovery without reflection?

A programming language which is intended for production use must support the ability to easily create unit tests. Unit testing frameworks typically support a feature called test discovery, whereby all tests that are linked within an executable can be automatically detected and run, without the need for the programmer to manually register them.

For many languages, such as Java and Python, reflection is used for this purpose. For languages that don't support reflection (e.g. C++), other, more cumbersome methods are employed, such as registration macros.
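
For concreteness, here is a minimal sketch of what such a registration macro typically expands to (TestRegistry, TestRegistrar and UNIT_TEST are invented names, not a real framework): a file-scope object whose constructor runs during static initialization adds the test to a global registry, so linking the test's object file is enough to register it.

    // Minimal sketch of the registration-macro approach; TestRegistry,
    // TestRegistrar and UNIT_TEST are invented names, not a real framework.
    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    struct TestRegistry {
        using Test = std::pair<std::string, std::function<void()>>;
        static std::vector<Test>& tests() {
            static std::vector<Test> all;   // function-local static: safe init order
            return all;
        }
    };

    struct TestRegistrar {
        TestRegistrar(std::string name, std::function<void()> fn) {
            // Runs during static initialization, before main().
            TestRegistry::tests().emplace_back(std::move(name), std::move(fn));
        }
    };

    // The macro hides the boilerplate, but every test must still be written
    // through it, which is the cumbersome part.
    #define UNIT_TEST(name) \
        static void name(); \
        static TestRegistrar name##_registrar(#name, name); \
        static void name()

    UNIT_TEST(addition_works) {
        if (2 + 2 != 4) std::cerr << "addition_works FAILED\n";
    }

    int main() {
        for (auto& t : TestRegistry::tests()) {
            std::cout << "running " << t.first << "\n";
            t.second();   // invoke the registered test body
        }
        return 0;
    }

Part of what makes this cumbersome in practice is that if the tests sit in a static library, the linker may discard the otherwise-unreferenced registrar objects unless they are forced into the link.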

One problem with using reflection is that for AOT (compiled ahead-of-time, as opposed to JIT) languages, reflection is expensive in terms of code size. Even in compressed form, the metadata describing a method or class will often be larger than the thing being described. This can be a significant burden on smaller platforms such as embedded systems and game consoles. (Part of the reason for the bulk is the need to encode the metadata in a way that is linker-compatible, so that a type defined in one module can reference a type defined in a different module.)

Most of the things that one would use reflection for can be accomplished via other means, depending on the language. For example, reflection is often used to automatically construct implementations of an interface, such as for creating testing mocks or RPC stubs, but these can also be done via metaprogramming. However, metaprogramming can't solve the problem of test discovery, because the scope of metaprogramming is limited to what the compiler knows at any given moment, whereas the set of tests to be run may be a collection of many independently-compiled test modules.

A different technique is annotation processing, where the source code is annotated with some tags that are then consumed by some post-processor which generates additional source code to be included in the output module. In this case, the processor would generate the test registration code, which would be run during static initialization. The main drawback here is complexity, because the annotation processor isn't really a feature of the language so much as it is a feature of the compilation environment - in other words, you can't specify the behavior you want in the language itself.

I would be curious to know if there are other inventive solutions to this class of problems.

BNFT (Backus Naur Form Transformation) rerelease

Another version of my BNFT (Backus Naur Form Transformation) - now in JavaScript (previous versions were in Java and C++).

The tool can be used to quickly set up a grammar for your shiny new DSL and translate it into text form (typically JavaScript).

The Java version was mentioned here: http://lambda-the-ultimate.org/node/3610

The JavaScript demo currently sports examples of Brainfuck and Turtle.

Github source is here: https://github.com/phook/BNFT

Online test console is here: http://phook.dk/BNFT/BNFT testbed.html

SNAPL, a new PL conference on "big-picture questions and long-running research programs"

SNAPL: The Inaugural Summit oN Advances in Programming Languages

Calling all Programming Language Researchers!

The programming languages community has several excellent conferences. However, conferences are focused on incremental bits of novelty. We want to create a new kind of venue that complements these: to present and discuss big-picture questions and long-running research programs; to view progress along the long arc of a research effort. Other communities do this better (e.g. the CIDR series in databases).

To this end, we are putting together the inaugural Summit oN Advances in Programming Languages (SNAPL). We hope that the biennial SNAPL will grow into a venue where our community comes to enjoy talks with inspiring ideas, fresh insights, and lots of discussion.

For the event to succeed, we need the stars in the community to submit. Please start working on your paper.

Important Dates:
Submission: January 6, 2015
Decisions announced: February 20, 2015
Final versions due: March 20, 2015
Conference: May 3-6, 2015 (Asilomar, California, US)

Preaching to the already converted: Om

So there are lots of things people here have already been exposed to, but a lot of them seem to fail to get much traction. Thus I very much appreciate this talk by David Nolen, which helps "sell" more of the FP side of the FP vs. OO wars, if you will, via Om: ClojureScript for React.

(Of course, he has to go and spoil it all by (1) selling AOP as well, and (2) insufficiently stressing the frustrations of trying to do these things in the Real World... but I guess I can't expect perfect nirvana to descend upon all people everywhere.)

Taking Back Control (Flow) of Reactive Programming

A short position paper that tries to motivate managed time by contrasting it with data flow approaches; to be presented at SPLASH's REBLS workshop; abstract:

Event-driven programming avoids wasting user and CPU time, but is difficult to perform since program control flow is necessarily inverted and twisted. To make reactive programming easier, many advocate burying control flow within abstractions that are composed via data flow instead. This might be a mistake: data flow has issues with expressiveness and usability that might not pan out. Instead, control flow could be re-invented to hide the adverse effects of CPU time while preserving expressiveness and directness.

CFP: Off-the-Beaten-Track (OBT) workshop at POPL 2015

Announcing the 2015 edition of the OBT workshop, to be co-located with POPL 2015, in Mumbai, India. Two-page paper submissions are due November 7, 2014.

From the web page (http://www.cs.rice.edu/~sc40/obt15/):

Programming language researchers have the principles, tools, algorithms and abstractions to solve all kinds of problems, in all areas of computer science. However, identifying and evaluating new problems, particularly those that lie outside the typical core PL problems we all know and love, can be a significant challenge. This workshop's goal is to identify and discuss problems that do not often show up in our top conferences, but where programming language research can make a substantial impact. We hope fora like this will increase the diversity of problems that are studied by PL researchers and thus increase our community's impact on the world.

While many workshops associated with POPL have become more like mini-conferences themselves, this is an anti-goal for OBT. The workshop will be informal and structured to encourage discussion. We are at least as interested in problems as in solutions.

A question of separation logic and monads

I am slowly working my way up to an understanding of separation logic, and one thing I have not seen with separation logic is a transformation into functional code with immutable data structures.

With normal imperative code you just wrap a big fat monad around everything, and there is a straightforward transformation into functional code with immutable data structures.

The disadvantage of a big fat monad is that you lose the ability to reason about parts of the program that have side effects but are independent of each other. I would therefore expect there to be smaller monads that could be split and combined by the rules developed for separation logic and the separating conjunction.

Does anyone know if anyone has already researched that? Or perhaps I am making an elementary error in my reasoning?
