Virtual Machine and Runtime Framework

Hello,

I'm currently designing an intermediate programming language, along with its virtual machine and compiler. The goal is to provide language designers with a common runtime framework that is easy to target, well optimized, and comes with a good set of libraries.

The language is designed to support a wide range of semantics and type systems (static and dynamic) and offers imperative, OO, and functional features together. It should be quite easy to generate this intermediate language from your favorite language. This way, you can use either a safe, small, and embeddable VM or a full-speed JIT to run the code.

To reuse the libraries, you might have to write some wrappers for your language's semantics, but that's still less work than doing C/C++ FFI.

I'm planning to release the full specification of the language soon, with an OCaml toy interpreter, and will welcome any comments. The VM and JIT will follow later.

But before that, what do you think about this project? Would you feel like using this runtime system? And what kind of features are you expecting to be in the specs?


What makes it different?

I know from my own experience that this is a hard question to answer, but why design yet another VM? What makes it different, e.g., from Parrot or Mozart?

OTOH, the purpose may be an exercise in design (as opposed to creating a software product).

Anyway, the declared goal is pretty ambitious for a single-person project. I am certainly looking forward to seeing the specs.

Lowlevel VS Highlevel

Parrot, and to a different degree C--, abstract the processor instructions or the VM bytecode behind another (human-readable but still low-level) bytecode.
Mozart is quite specific to distributed PLs and already provides semantics and a type system that may make it difficult to generate code for it (judging from what I have seen; I haven't studied Mozart/Oz very much).

The Neko intermediate language is already quite a high-level programming language, so you couldn't use this framework to efficiently compile low-level languages such as C, for example.

Compared to the low-level approach, that means the framework does more of the work, so language designers have less to do.

Why not just compile to Scheme, OCaml, ...

You're providing a high-level language. The specs say it's an intermediate language, but it's not, for the following reasons:

  • The syntax is complex. I would expect an intermediate language to have either a BASIC-like syntax (a procedure is a list of commands) or a LISP-like one (a procedure is a tree). Infix operators, with all their precedence and associativity issues, just don't belong in an intermediate language.
  • The language is not powerful enough. Why is there no label/goto in the language? Not all control flow graphs can be mapped easily to Neko.
  • The semantics are ambiguous. Which loop would break and continue target in the case of nested loops?
  • I think adding macros to an intermediate language is overkill. They open a can of worms (separate compilation, phase distinction, hygiene, ...) that you are not addressing. Your comment stating ``Macro local variables have then to be carefully chosen'' shows the shortcomings. The example you list uses is_true. As a user of the system, I would either expand is_true before outputting the Neko code (which should be easy compared to everything else I am doing), or just define is_true as a procedure.
  • The document does not address FFI.
  • This is in addition to the other comments posted by others in this thread.

Because...

  • syntax: it's true that the syntax is a bit complex compared to Lisp (every syntax is complex compared to Lisp :), but it might be more human-readable for people who are not familiar with Lisp. As for infix operators, you can safely add a lot of parentheses around each operation (as Lisp does). It's just that Neko doesn't force you to do that.
  • goto/label: it's an interesting feature that I might add if some people need it. I think most high-level languages ban them, but a language with loop constructs other than while and for might need them.
  • I'll make the break and continue semantics more precise.
  • macros are just a bit of syntactic sugar. I think some users might like them since they can look like procedure calls and still perform like inlining, without bloating the source code and the AST (see the sketch after this list).
  • FFI is a VirtualMachine feature. It will be accessible using some specific primitives.
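To make the "syntactic sugar" point concrete, here is a toy macro expander in OCaml: an invocation looks like a procedure call but is inlined into the AST before compilation. The AST type and the is_true definition are invented for illustration; they are not taken from the Neko spec.

type expr =
  | Const of int
  | Var of string
  | Call of string * expr list
  | If of expr * expr * expr

type macro = { params : string list; body : expr }

(* Substitute arguments for parameters in a macro body. There are no
   binders in this toy AST, so hygiene issues don't show up here. *)
let rec subst env = function
  | Const n -> Const n
  | Var x -> (try List.assoc x env with Not_found -> Var x)
  | Call (f, args) -> Call (f, List.map (subst env) args)
  | If (c, t, e) -> If (subst env c, subst env t, subst env e)

(* Expand macro invocations in place: looks like a call, behaves like
   inlining. A self-referencing macro would make this loop forever,
   which is one of the worms in the can mentioned above. *)
let rec expand macros = function
  | Call (f, args) when List.mem_assoc f macros ->
      let m = List.assoc f macros in
      expand macros (subst (List.combine m.params args) m.body)
  | Call (f, args) -> Call (f, List.map (expand macros) args)
  | If (c, t, e) -> If (expand macros c, expand macros t, expand macros e)
  | e -> e

(* Example: is_true(y) expands to If(y, 1, 0) directly in the AST. *)
let macros = [ "is_true", { params = ["x"]; body = If (Var "x", Const 1, Const 0) } ]
let expanded = expand macros (Call ("is_true", [Var "y"]))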

Using OCaml directly might be difficult if you're not familiar with its weird syntax and type inference. I'm trying to make Neko usable by a lot of people. But compiling Neko to OCaml might be interesting in order to compare VM performance with OCaml native compilation.

Thanks for your comments

Not sure

Nicolas Cannasse: the syntax is a bit complex compared to Lisp [...] but it might be more human-readable for people who are not familiar with Lisp.

I thought Neko was supposed to be an intermediate language. Machine readability is far more important than human readability.

Nicolas Cannasse: As for infix operators, you can safely add a lot of parentheses around each operation (as Lisp does). It's just that Neko doesn't force you to do that.

First, in Lisp, parentheses are not something you "safely add" around operations. Second, I don't buy the readability argument. Machine-generated code looks a heck of a lot worse in C-like languages with infix operators and tons of disambiguation parentheses than it does in Lisp-like languages with tons of grouping parentheses.

Nicolas Cannasse: macros are just a bit of syntactic sugar. I think some users might like them since they can look like procedure calls and still perform like inlining, without bloating the source code and the AST

And how will you perform inlining without bloating the source code and the AST? You are going to expand them, aren't you? Inlining decisions should be made by you, not your users, since in theory you should know better when to inline and when not to (based on the benefit of inlining vs. making a procedure call). Your users should not need to know the internals of your compiler to make such decisions. Would inlining hints be more appropriate? Also, how are you dealing with recursive macro expansion?

An AST?

Why not store the (compressed) AST and compile it on the fly? I've often wondered why this approach isn't more common. Are there any inherent drawbacks which make it infeasible in practice?

Michael Franz did this with Juice, but that's all I could find about this approach last I looked.

Calculus? Maybe logic? Type system?

The language is designed to support a wide range of semantics and type systems (static and dynamic)

What about reflection?

What about dependent types?

A more general question: are you going to base it on some (either widely known or custom) calculus?

LLVM

Did you have a look at LLVM?

LLVM

Yes I did. LLVM is quite low-level and requires a lot of work to target from dynamically typed languages, for example, since it works directly on memory (same as C--).

Runtime features

Here are a few ideas implemented very well in the VB runtime system:

a) The interpreter is capable of not running in the main thread when callbacks from Windows occur. This aids with screen refreshing, and makes live debugging possible

b) The VM supplies enough metadata to the IDE to enable live modification of code. This assists with rapid modification of programs. Recently, I've read some musings on whether source should be stored in a machine representation, in binary, like Word, rather than being held in ASCII.

c) Make the call stack serializable. Stackless Python offers some rather nice features that struggled to be accepted into the main source. See if you could put this in on day one.

First, a question

Is this related to Motion-* ?

Kind of.

Kind of.

I have now written several languages and compilers, MotionTypes and MotionScript being two of them, and they both need to target the MTS Virtual Machine (which currently runs one of our websites) and the Flash bytecode (for games and other client-side apps).

So of course, once Neko is working, I'm planning to use this technology as the runtime framework for both MotionTypes and MotionScript. The MTS virtual machine will be used as the base for the Neko VM.

Then, a few suggestions

  • Java's Thread.dumpStack() is wonderfully useful for debugging. Could you put in something similar?
  • Nearly-automatic binding to, say, COM, XPCOM, .Net, or some similar (preferably cross-platform) component framework would be equally wonderful.
  • Unicode strings, maybe.

And, as for using it, well, it depends on what you put in it. I might (or might not) soon be involved in the implementation of an experimental component-based distributed language, so this might be a good opportunity to test your platform.

Non-technical obstacle

Would you feel like using this runtime system?

The most important non-technical problem is the fear that the author will abandon the project, or that it will change in incompatible ways.

This makes it very hard for any new project that others would depend on to take off. If it does not change, its shortcomings will persist forever. If it changes, compilers will have to cope with several versions and will have to be adapted to new ones.

It's safer to target old and stable target languages, such as C.

The same applies to using libraries, but in that case it's generally easier to spot which changes need adaptation.

You're right.

You're right.

I guess this can be avoided with good community discussion, and by trying not to make incompatible changes without being sure everybody agrees.

However, the work needed to target Neko might be several times lower than for targeting C (and without some of the multiplatform problems that C also has to deal with).

Specs

I put the specification draft online here:

http://team.motion-twin.com/ncannasse/neko/specs.html

Any comments welcome

Looks like javascript!

I like javascript!

But now I'm wondering if you can really get optimized performance out of this. Javascript seems to be about a thousand times slower than comparable code in Java.

In what way is Neko different from Javascript that resolves this performance difference?

Javascript is interpreted.

Javascript is interpreted. Neko is made to be compiled. There can be several stages of optimization:

  • interpreted on the fly: very slow
  • compiled to bytecode: quite quick, usable
  • Just-In-Time: fast

Not completely true

At least in Mozilla, JavaScript is compiled to bytecode.

Some concerns

Just off the top of my head, what about ...:
  1. ... non-CBV languages (non-strict, or lazy, think Haskell)?
  2. ... tail-call optimization (think Scheme)?
  3. ... concurrency (any model, think Oz or Erlang)?
  4. ... more advanced types, at least tagged products and sums (almost any statically typed FP language needs more than that)?

I honestly don't see how this intermediate language is better for the aforementioned languages than C (or Perl 5). Am I missing something?

Re : Concerns

  1. laziness: this might be simulated using closures, but it is not straightforward (see the remarks on concurrency)
  2. tail-call optimization: it's true that it's not possible right now, since the language is call-by-value. The language needs a new construct for this, similar to OCaml's "let rec".
  3. concurrency: distributed programming is better left to DSLs, since it requires a lot of language-specific constructs as well as a specific runtime. Neko is trying to answer the Basic Language Runtime problem first; trying to do too many things at the same time will lead nowhere.
  4. advanced types: Neko's basic types should be enough to encode most (if not all) type systems. Tagged sums can easily be represented as arrays, with the first cell storing the identifier; that's actually the runtime representation used by most FP languages (see the sketch after this list). Neko types should be seen as "toolboxes" for implementing your language; you are free to represent your types the way you want using the basic constructs.
  5. better than C? I think there is a very big difference between writing a code generator from a high-level language to Neko and writing one to C. Try implementing closures, dynamic typing, exceptions, and garbage collection in C from scratch, for example, and compare that with just generating a Neko program and then using an optimized runtime with the tools that will come later (debugger, profiler, ...).
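As an illustration of the encoding mentioned in point 4, here is a minimal OCaml sketch of tagged sums represented as arrays whose first cell stores the constructor identifier. The value type and the shape example are invented here to stand in for Neko's dynamic values.

(* The source-language type  shape = Square of int | Rect of int * int
   encoded as arrays: [| tag; payload... |] *)
type value =
  | VInt of int
  | VArray of value array

let square s = VArray [| VInt 0; VInt s |]
let rect w h = VArray [| VInt 1; VInt w; VInt h |]

(* Consumers dispatch on the tag in the first cell, which is how
   pattern matching is typically compiled. *)
let area = function
  | VArray [| VInt 0; VInt s |] -> s * s
  | VArray [| VInt 1; VInt w; VInt h |] -> w * h
  | _ -> failwith "ill-formed shape"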

TCO essential for FP [nothing new here]

If you want your language to be considered as an intermediate language for FP languages, tail call optimization (or an efficient way to implement it) is essential.

tail-call optimization: it's true that it's not possible right now, since the language is call-by-value.

Most languages with TCO are call-by-value, e.g. Scheme, ML, OCaml.

The language needs a new construct for this, similar to OCaml's "let rec".

Let rec is not really related to TCO. Are you thinking of mutual recursion?

Clarifications

Unlike ML-style languages, all values are mutable in Neko. That means that the following Neko function:

fib = function(n) {
  if( n < 2 ) n else fib(n-1) + fib(n-2)
}

is translated into OCaml as:

let fib = ref (fun _ -> 0) in
fib := (fun n -> if n < 2 then n else !fib (n-1) + !fib (n-2))

which obviously cannot be tail-call optimized.
I was thinking of adding a new keyword "rec", referencing the current function, which would enable the following:

fib = function (n) {
  if( n < 2 ) n else rec(n-1) + rec(n-2)
}

Proper Tail Calls

Mutability has nothing to do with proper treatment of tail calls.
Maybe reading this clarifies the meaning.

Seen that before

This code

fib := proc(n)
    if n < 2 then n else fib(n-1) + fib(n-2) end if
end proc;

is correct Maple.

What is the problem with that? The amount of 'dynamism' implicit in allowing such constructs means that optimizations are extremely difficult to achieve. Why? Because in more complex examples, a function can update itself as it is running. While completely evil, it is perfectly legal - and it defies optimization.


Such a design only really works for a language which is meant to be interpreted.

Standard stuff

The above code would also be correct Scheme:

(define fib (lambda (n) (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2))))))

To emphasize the point, SET! can be used:

(define fib #f)
(set! fib (lambda (n) (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2))))))

Such a design only really works for a language which is meant to be interpreted.

Not so. It does have efficiency implications, though. To mitigate this, some Scheme compilers offer ways to specify that function values will not be mutated.

Tail recursion

Unlike ML-style languages, all values are mutable in Neko.

The same is true in Scheme — at least, all variables are identifiers bound to locations, and locations are mutable (with an exception to support read-only data). Scheme has "proper tail recursion" (a technical term), and it doesn't need anything like the 'rec' feature you describe. Scheme has a LETREC form, but that's not directly related to tail recursion.

The Scheme standard, R5RS, has more info about this. Section 3.5 covers tail recursion. For more details, see Will Clinger's paper Proper Tail Recursion and Space Efficiency. The first two pages of that paper provide a good overview of the subject.

If you haven't already examined Scheme closely, it's worth a close look, because it sounds as though Neko has some similar qualities. There's a lot of literature on Scheme implementation strategies.

Regarding the 'fib' example: the 'fib' function in Scheme would translate to similar OCaml. However, that version of 'fib' is not a tail-recursive function anyway, because the recursive calls to fib don't appear in tail position.
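For contrast, a properly tail-recursive 'fib' threads accumulators so that the recursive call is the last thing the function does. This is the standard textbook formulation in OCaml, not anything Neko-specific:

(* The call to 'loop' is in tail position, so a compiler that treats
   tail calls properly runs this in constant stack space. *)
let fib n =
  let rec loop i a b =
    if i = n then a
    else loop (i + 1) b (a + b)  (* tail call: nothing left to do after it *)
  in
  loop 0 0 1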

My mistake

My mistake, I answered too fast. Tail recursion does not need an additional construct, and can be performed even if the value is mutable.

Similar concerns

So far, this intermediate language seems a bit better than C, but not enough to make me want to use it rather than C. Now that Andris mentions it, concurrency does sound like an obvious prerequisite.

Am I to understand that during compilation from a functional programming language, all (or most) type information is dropped? Also, could you detail the intended semantics of your garbage collector (especially in the presence of concurrency)?

Btw, are you confident that you can write a compiler more efficient than, say, .NET's JIT? That sounds quite difficult to me.

31 bits

Why the 31-bit integers; is it because you're going to use the OCaml runtime/GC library? I know some of the advantages, but I think a system where you just bind an integer into a normal tagged record with autoboxing might actually perform just as well and be slightly easier to implement. It makes FFI a bit easier too, I guess.
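For readers unfamiliar with the trick, here is a minimal OCaml sketch of the 31-bit representation under discussion: the low bit of a machine word distinguishes an immediate integer from an aligned heap pointer. The helper names are made up for illustration.

(* A word with low bit 1 holds an immediate integer in its upper 31
   bits; a word with low bit 0 is an aligned heap pointer. *)
let tag_int n = (n lsl 1) lor 1   (* encode without allocating *)
let untag_int v = v asr 1         (* arithmetic shift keeps the sign *)
let is_int v = v land 1 = 1       (* pointers are even, ints are odd *)

(* Arithmetic can often skip full untagging:
   (2a + 1) + (2b + 1) - 1 = 2(a + b) + 1 *)
let add_tagged x y = x + y - 1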

Another note: if you want incremental compilation for generics, for instance, you will need complex RTTI compiled into the system.

31-bit performance

I'm thinking that unboxed 31-bit integers are quite a big speed improvement. Do you have any papers comparing performance?

What do you mean by "complex RTTI"? All Neko values are fully browsable using RTTI. If a language needs more RTTI because it has a richer type system, it can easily add its own RTTI written in Neko.

Living in a box


I'm thinking that unboxed 31-bit integers are quite a big speed improvement. Do you have any papers comparing performance?

No; I know that C# didn't take that route. I looked around a bit but found mainly noise on the subject. There is this reference which discusses it on the Python list; 31 bits don't seem to help much unless you're into numeric algorithms. To me it seems a simple trade-off. Most people tend to think that integer operations become slower because of continuous (un-)boxing, but with some simple rules it shouldn't matter too much (for example, if [] represents boxed values and operators, you can easily optimize [x] [+] [y] [+] [z] to [x + y + z]). Note that with a 'fast' 31-bit representation you essentially need to solve most of these problems too.
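As a toy OCaml sketch of the fusion rule quoted above, one can evaluate a whole arithmetic chain on unboxed machine integers and box only the final result, so [x] [+] [y] [+] [z] costs one allocation rather than two. The types here are invented for illustration:

type value = VBox of int                               (* boxed integer *)
type expr = Const of int | Var of string | Add of expr * expr

(* Evaluate a whole arithmetic expression without intermediate boxes... *)
let rec eval_raw env = function
  | Const n -> n
  | Var x -> let VBox n = List.assoc x env in n
  | Add (a, b) -> eval_raw env a + eval_raw env b

(* ...and box only the final result. *)
let eval env e = VBox (eval_raw env e)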

Of course, if you want arrays of integers, it helps if they fit in the same space as pointers, but even that is easily solved if you switch to a strongly typed intermediate language or have boxing rules in the type system.

Btw, it also helps when implementing generics, since you always know that a value corresponds to a pointer; I thought OCaml even had/has some bugs in the type system in that respect (expecting a pointer where suddenly an integer is returned).


What do you mean by "complex RTTI"? All Neko values are fully browsable using RTTI. If a language needs more RTTI because it has a richer type system, it can easily add its own RTTI written in Neko.

If you want to support generic/polymorphic datatypes - I use the term loosely - and have incremental compilation, you need to know whether an object is abstract in some type or value. So, hmm, there are three points in the process where you would like that information: compile time, link time, and run time. At the moment this kind of information is not present in your VM; it needs to be addressed if you want generics. But having said that, I think you are right: this can be solved by having RTTI structures and algorithms expressed in Neko. Your approach could be better.

SML/NJ uses 31-bit ints

Appel discusses the choice of 31-bit ints in his book "Compiling with Continuations". I don't recall the details — I'll take a look later.

[Update: I had previously summarized this in this post]

Frameworks versus toolkits

I reviewed your specification, and did a little reading on your blog. My assessment is that you'll probably be successful in implementing Neko, and I wish you well in that endeavour.

My own opinion is that frameworks don't have as much popular appeal as toolkits do. The trouble with a framework is that people have to buy into the entire concept, rather than being able to dabble with just one part of it at a time and use their legacy code with it. That's why FFI is so critical to a programming language's success.

In my own work, I'm trying to focus on creating a C library (named Figure) that decouples language and operating system features from one another as much as possible. The only critical part of the system is the calling convention, which is compatible with standard C calls.

My target audience for Figure is similar to yours for Neko: language developers. However, I'm also working on the larger pool - application developers who want to add advanced language features to their legacy programs incrementally.

Once the C implementation of Figure is complete, I intend to adapt the same work to other target languages. If you're willing, I'd be happy to talk more about what Figure provides so that you can take its ideas into your design of Neko's runtime. I'd benefit from the peer review, and it would make porting Figure to Neko a lot simpler.

http://fig.org/gord/