
BNFT (Backus Naur Form Transformation) rerelease

Another version of my BNFT (Backus Naur Form Transformation) - now in JavaScript (previous versions were in Java and C++).

The tool can be used to quickly set up a grammar for your shiny new DSL and translate its programs into text form (typically JavaScript).

The Java version was mentioned here: http://lambda-the-ultimate.org/node/3610

The JavaScript demo currently sports Brainfuck and Turtle examples.
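For a rough sense of the kind of source-to-source translation involved, here is a hand-rolled Python sketch (not BNFT's own grammar notation) that maps the Brainfuck case onto JavaScript output; BNFT derives this sort of translator from a grammar instead of hard-coding the mapping.

```python
# Not BNFT itself: a hand-rolled sketch of the kind of translation the
# tool automates, using the Brainfuck example from the demo.
BF_TO_JS = {
    '>': 'p++;',
    '<': 'p--;',
    '+': 'mem[p]++;',
    '-': 'mem[p]--;',
    '.': 'out += String.fromCharCode(mem[p]);',
    ',': 'mem[p] = readByte();',  # readByte() is a placeholder for input
    '[': 'while (mem[p]) {',
    ']': '}',
}

def brainfuck_to_js(source: str) -> str:
    """Translate a Brainfuck program into the body of a JavaScript function."""
    body = [BF_TO_JS[c] for c in source if c in BF_TO_JS]
    prologue = 'let mem = new Array(30000).fill(0), p = 0, out = "";'
    return '\n'.join([prologue] + body + ['return out;'])

if __name__ == '__main__':
    # Prints JavaScript that, wrapped in a function and run, returns "A".
    print(brainfuck_to_js('+' * 65 + '.'))
```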

The GitHub source is here: https://github.com/phook/BNFT

The online test console is here: http://phook.dk/BNFT/BNFT testbed.html

automatic test discovery without reflection?

A programming language that is intended for production use must make it easy to create unit tests. Unit testing frameworks typically support a feature called test discovery, whereby all tests linked into an executable can be automatically detected and run, without the programmer having to register them manually.

For many languages, such as Java and Python, reflection is used for this purpose. For languages that don't support reflection (e.g. C++), other, more cumbersome methods are employed, such as registration macros.
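To make the reflection route concrete, here is a minimal Python sketch of convention-based discovery: enumerate a module's functions and run everything named `test_*`. This is essentially what frameworks like unittest and pytest do, with far more machinery around reporting, fixtures, and multi-module collection.

```python
# Minimal sketch of reflection-based test discovery in Python.
import inspect
import sys
import types


def test_addition():
    assert 1 + 1 == 2


def test_concatenation():
    assert "a" + "b" == "ab"


def discover_and_run(module: types.ModuleType) -> None:
    # Reflection: tests are found by naming convention, with no
    # manual registration step anywhere.
    for name, fn in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("test_"):
            print(f"running {name}")
            fn()


if __name__ == "__main__":
    discover_and_run(sys.modules[__name__])
```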

One problem with using reflection is that for AOT languages (compiled ahead of time, as opposed to JIT-compiled), reflection is expensive in terms of code size. Even in compressed form, the metadata describing a method or class will often be larger than the thing being described. This can be a significant burden on smaller platforms such as embedded systems and game consoles. (Part of the reason for the bulk is the need to encode the metadata in a linker-compatible way, so that a type defined in one module can reference a type defined in a different module.)

Most of the things one would use reflection for can be accomplished by other means, depending on the language. For example, reflection is often used to automatically construct implementations of an interface, such as for creating testing mocks or RPC stubs, but these can also be done via metaprogramming. However, metaprogramming can't solve the problem of test discovery, because its scope is limited to what the compiler knows at any given moment, whereas the set of tests to be run may be spread across many independently compiled test modules.
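As an illustration of the "construct an implementation of an interface" case, here is a minimal Python sketch of a recording stub built on runtime dispatch rather than code generation; mocking libraries such as unittest.mock hang off the same dynamic-attribute hook.

```python
# A recording stub that accepts any method call, with no code generation:
# __getattr__ fabricates a method on demand and records the call.
class RecordingStub:
    def __init__(self):
        self.calls = []

    def __getattr__(self, name):
        def method(*args, **kwargs):
            self.calls.append((name, args, kwargs))
            return None
        return method


rpc = RecordingStub()
rpc.send_payment(42, currency="EUR")   # no such method was ever declared
assert rpc.calls == [("send_payment", (42,), {"currency": "EUR"})]
```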

A different technique is annotation processing, where the source code is annotated with tags that are consumed by a post-processor, which generates additional source code to be included in the output module. In this case, the processor would generate the test registration code, which would run during static initialization. The main drawback here is complexity, because the annotation processor isn't really a feature of the language so much as a feature of the compilation environment - in other words, you can't specify the behavior you want in the language itself.
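For comparison, here is a rough Python sketch of what such generated registration code boils down to: a global registry populated at import time (the moral equivalent of static initialization), which the runner then iterates. An annotation processor would emit the register calls from the source tags rather than the programmer writing them by hand.

```python
# Explicit registration: the registry fills up as modules are loaded,
# then the runner walks it. The decorator stands in for the generated
# registration call an annotation processor would produce.
TEST_REGISTRY = []


def register(fn):
    TEST_REGISTRY.append(fn)
    return fn


@register
def test_subtraction():
    assert 3 - 1 == 2


@register
def test_division():
    assert 6 / 3 == 2


def run_all():
    for fn in TEST_REGISTRY:
        print(f"running {fn.__name__}")
        fn()


if __name__ == "__main__":
    run_all()
```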

I would be curious to know if there are other inventive solutions to this class of problems.

Tools for layered languages?

I like things like Shen and Haxe and ATS, where I can take a high-level language and target another, more common language. The intent is to be able to play with trade-offs such as how portable your source code is vs. how much you use 'native' libraries. So Shen -> {Python,Javascript,...} or Haxe -> {C++,Javascript,...} or ATS -> {C,PHP,...}.

But of course a problem is that such a toolchain is unlikely to come with real tooling like a source-line debugger, profilers, and so on.

Apparently there is some leverage to be had if you are targeting C, but to me it seems more like dumb luck than something based on principle. :-)

Does anybody know of efforts to "solve" this "problem"?

Eve development diary

We put up a development diary for our work on the Eve language. The first post is a rough overview of the last nine months of development, as reconstructed from GitHub and Slack, starting with the original demo at Strange Loop. In the future we will run more detailed posts once per month.

The original motivation came from reading an old comment on LtU bemoaning the fact that there is very little record of the development of today's languages and the thought processes behind them. Hopefully this will be useful to future readers.