Virtual machine implementation and dynamic languages

I'm looking for references to virtual machine implementations and dynamic languages.

I seem to recall something recently about what the Java VM lacks with respect to dynamic languages, and what other implementations (Parrot?) do to enable them.

What would a Universal VM look like? Is such a thing possible?

I'm not googling the right keywords, so I'm not finding what I'm looking for.

Differences between JVM and Parrot

The main differences between the JVM and Parrot, IIRC:

* Parrot is register-based; the JVM is stack-based. Note that this has nothing to do with your question; it's just an interesting bit of trivia.

* Parrot has an "invoke dynamic" instruction, which does the runtime lookup (and handles failure appropriately). Java's invoke instructions all have semantics that assume the typechecker has already verified the call is valid. Note that Java can do dynamic invocation using the introspection interface, but that is slower (and inconvenient for the programmer, though that's not a JVM issue; see the sketch after this list). There have been proposals to add "invokedynamic" to the JVM.

* Do note that all reasonable VMs are "universal" in the sense that they are Turing-complete, and can host any Turing-complete language. Despite being optimized for static languages, the JVM hosts several dynamic languages, including Jython and Groovy. Likewise, there is no reason one couldn't write a Java-to-Parrot compiler.
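
To make the second point concrete, here is a minimal sketch of the reflective route a dynamic language on today's JVM has to take: the method lookup happens at runtime, failure is an exception to be handled rather than a verifier guarantee, and arguments and results are boxed.

    import java.lang.reflect.Method;

    public class DynamicCall {
        public static void main(String[] args) throws Exception {
            Object target = "hello";              // receiver whose class is unknown statically
            // Look up the method by name at runtime;
            // this throws NoSuchMethodException if the lookup fails.
            Method m = target.getClass().getMethod("length");
            Object result = m.invoke(target);     // boxed result of the dynamic call
            System.out.println(result);           // prints 5
        }
    }

An "invoke dynamic" instruction essentially bakes this lookup-then-call sequence into the VM, where the implementation is free to cache and optimize it.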

There are plenty of books on the particulars of Parrot and the JVM (as well as Microsoft's CLR). In addition, two recent books treat the subject at a more scholarly level: Iain Craig's Virtual Machines, and Virtual Machines: Versatile Platforms for Systems and Processes by Smith and Nair. See this thread for more information.

One book to avoid

I would recommend not buying Bill Blunden's Virtual Machine Design and Implementation in C/C++. This book covers very little about virtual machine design; it basically describes an implementation of a quite primitive virtual machine. The style is neither professional nor scholarly, and is frequently amateurish. The author is prone to ranting on irrelevant topics (he flames Microsoft and Linux with equal relish).

This is one of those books that clear-cuts entire forests in order to include page after page after page after page after page of printed source code. When I encounter books which include whole programs (especially with little or no annotations), I often feel cheated.

One interesting part of the book, which isn't covered in other texts that I'm aware of, is how to build a debugger (and put debugging support into your virtual instruction set). Unfortunately, his coverage of this topic is--like the rest of the book--amateurish. A scholarly treatment of this subject, it goes without saying, is sorely needed in the literature.

Last I heard...

"invokedynamic" was being added to the 1.7 JVM.

http://blogs.sun.com/roller/resources/gbracha/JAOO2005.pdf

What about dynamically optimizing introspection?

I have read that the latest HotSpot compilers can actually convert introspection calls into normal calls on the fly, by rewriting the underlying bytecodes, therefore making introspection/reflection quite fast.

Branch/call strength reduction

There are a whole bunch of ways one can "strength reduce" function calls. Going from invokevirtual (or whatever the JVM calls it) or invokeinterface to invokestatic is frequently a productive one. Even more productive (though it might require further bytecode patching) would be inlining of invokestatic calls at runtime/load time--that would be especially beneficial for small methods.
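
As a sketch of the kind of call site this targets (the Shape/Square classes are made up for illustration): if only one implementation ever reaches the call below, the invokevirtual is effectively static, so the VM can bind it directly and inline the small body into the loop.

    abstract class Shape { abstract double area(); }

    final class Square extends Shape {
        final double side;
        Square(double side) { this.side = side; }
        double area() { return side * side; }   // tiny body: a prime inlining candidate
    }

    public class Devirt {
        public static void main(String[] args) {
            Shape[] shapes = new Shape[1000];
            for (int i = 0; i < shapes.length; i++) shapes[i] = new Square(i);
            double total = 0;
            for (Shape s : shapes) total += s.area();   // monomorphic virtual call site
            System.out.println(total);
        }
    }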

Hey, way to point out another one

Self does this, but since its notion of "static" is much weaker, Self simply does this for all method calls where it can prove the target when the code is (re)compiled, which happens at runtime. In fact, even if it can't prove it, it will still do it by branching, like 'if foo? x then inlined x.m(y) else x.m(y)'.
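
Here is that guard-and-inline rewrite written out by hand in Java, with hypothetical Point/FastPoint classes standing in for the predicted receiver:

    interface Point { int getX(); }

    final class FastPoint implements Point {
        final int x;
        FastPoint(int x) { this.x = x; }
        public int getX() { return x; }
    }

    class Guarded {
        // What a compiler might emit for "p.getX()" after predicting FastPoint:
        static int getXGuarded(Point p) {
            if (p instanceof FastPoint) {    // "if foo? x"
                return ((FastPoint) p).x;    // "then inlined x.m(y)": body copied in place
            } else {
                return p.getX();             // "else x.m(y)": ordinary dynamic dispatch
            }
        }
    }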

Self

The programming language Self is definitely something you want to look at.

For high-performance (at one time), dynamic VM technology

Self, at one time, had rather advanced VM technology--especially for implementing dynamic method calls. Polymorphic inline caches, in particular, were shown to make method calls faster in the average case.
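
For readers who haven't met them: a polymorphic inline cache is a per-call-site cache mapping receiver classes to the methods a lookup resolved them to, so repeated sends to the same few classes skip the full lookup. Real PICs are stubs patched into generated machine code; this toy Java model (a HashMap standing in for the patched stub, zero-argument sends only) merely illustrates the bookkeeping.

    import java.lang.reflect.Method;
    import java.util.HashMap;
    import java.util.Map;

    class PicCallSite {
        private final String selector;
        private final Map<Class<?>, Method> cache = new HashMap<>();

        PicCallSite(String selector) { this.selector = selector; }

        Object send(Object receiver) throws Exception {
            Class<?> cls = receiver.getClass();
            Method m = cache.get(cls);
            if (m == null) {                 // miss: do the slow lookup once
                m = cls.getMethod(selector);
                cache.put(cls, m);           // cache grows: monomorphic -> polymorphic
            }
            return m.invoke(receiver);       // hit: dispatch through the cached target
        }
    }

For example, new PicCallSite("length").send("hello") returns 5, and any later send to a String hits the cache instead of repeating the lookup.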

How advanced Self is by modern standards, I don't know. Sun, MS, and IBM have poured oodles o' money into optimizing their implementations of the JVM and CLR, and doubtless have lots of clever tricks. Most "optimizing" VMs these days perform dynamic compilation/translation to native code (JITting) as a key optimization strategy, as even poorly-optimized native code is an order of magnitude faster than the most tightly-written bytecode interpreter.

The Smith and Nair book, mentioned above, goes into great detail on this topic.

And all these ideas came from Self

or at least are (and were) in Self first; many of them are quite old ideas. Self was never truly a bytecode interpreter (its bytecode is far too simple for interpretation to be performant anyway), but one of the guiding principles was that it should "look" like it is. In all likelihood Self is still more advanced, in both implementation and dynamic aspects, than mainstream languages.

I'm curious:

What does Self do that modern JVMs don't? I'm not doubting you--but I find it hard to believe that Self hasn't been strip-mined for good ideas to put in VM implementations elsewhere. Especially since it's a Sun-sponsored project.

(Some information on Self's implementation can be found here).

It's a pretty direct line

It's a pretty direct line from Self to the current Sun JVM. See this 1998 BYTE article. Here are the key paragraphs:
HotSpot (a code name) isn't new technology. It grew from research that began at Stanford University in 1987, based on earlier work on Smalltalk at Xerox Palo Alto Research Center (PARC). Stanford researchers created an object-oriented language called Self to explore dynamic compilation. Several of them later founded a small company known as LongView Technologies (or Animorphic Systems). Sun acquired the company to make HotSpot.

Sun is using HotSpot as an umbrella term for several acceleration techniques, including the generational garbage collector described earlier and a new thread monitor. But the most interesting part is adaptive compilation.

HotSpot is like a hyperactive JIT compiler -- it profiles, it compiles, it recompiles. It can start compiling when the program launches or wait to see how the program runs. It can make decisions about which parts of a program to compile according to how often the code executes or how much time the code takes to execute. It balances the time required for compilation against the time saved by faster execution.

Self

As someone already hinted at, a lot of the former Self compiler and VM researchers went on to help build HotSpot.

There are at least a couple of interesting technologies which originated in the context of Self yet aren't really applicable to Java. Examples that spring to mind are polymorphic inline caching and the Cartesian product algorithm (the latter is admittedly a compiler technology more than a VM technology).

but are applicable to Smalltalk

Eliot Miranda commenting on PICs & Smalltalk VM technology

why use PICs?

If you implement the dispatch scheme in this paper [Minimizing Row Displacement Dispatch Tables], dynamic dispatch becomes about as cheap as in C++ and speeding up dispatch becomes less of an issue than other things.
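
The paper's idea, in caricature: the class-by-selector dispatch matrix is sparse, so each class's row is slid into one shared array at a displacement chosen so that occupied slots never collide, and dispatch becomes one indexed load plus a class check. A toy illustration (the classes, selectors, and offsets are made up):

    class RowDisplacement {
        // Each slot records the class that owns it, so a lookup landing in an
        // overlapping row of a different class can be rejected.
        static final class Entry {
            final int classId; final Runnable method;
            Entry(int classId, Runnable method) { this.classId = classId; this.method = method; }
        }

        static Entry[] table = new Entry[8];   // all rows overlaid into one array
        static int[] rowOffset = new int[2];   // per-class displacement into the table

        public static void main(String[] args) {
            // Class 0 understands selectors 0 and 2; class 1 understands selector 1.
            rowOffset[0] = 0;
            table[0] = new Entry(0, () -> System.out.println("A.foo"));
            table[2] = new Entry(0, () -> System.out.println("A.baz"));
            rowOffset[1] = 2;                  // slid so its one entry lands in a free slot
            table[2 + 1] = new Entry(1, () -> System.out.println("B.bar"));

            dispatch(1, 1);                    // prints "B.bar"
            dispatch(0, 2);                    // prints "A.baz"
        }

        static void dispatch(int classId, int selector) {
            Entry e = table[rowOffset[classId] + selector];
            if (e != null && e.classId == classId) e.method.run();
            else throw new RuntimeException("message not understood");
        }
    }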

Pemo

PICs, in Self, have a variety of other uses, actual and potential, beyond just speeding up message sends. For example, Self essentially inlines its performance-monitoring code there. In general, a PIC is a good place to interleave some code. Also, in Self's case the technique meshes well with the various other techniques and can be added almost without cost. Finally, there are second-order optimizations: the inlining, even partial, can yield the information needed to perform other optimizations.

I'm surprised you're surprised

Consider the parallel statement: "but I find it hard to believe that Self hasn't been strip-mined for good ideas to put in [languages] elsewhere"

Variations on this are oft-lamented facts on LtU.

One half of my statement should be undeniable: Self is more "dynamic" than any mainstream language. As Per Vognsen points out, this immediately leads to techniques that simply are not useful in, e.g., Java. Further facets that _may_ also not be incorporated: Self has what are essentially higher-order functions, and the techniques used to gain performance with them may or may not be used for things like inner classes. I have no idea if the dynamic deoptimizing techniques (for debugging purposes) are used. I imagine many of the techniques to reduce latency for wide-ranging changes are not used, being much less relevant. I also think there are some optimization techniques that were "abandoned" by Self that may be much more useful in a statically typed language; what I've heard about HotSpot, for example, leads me to think that they have not been re-explored in this context.

Mining Self

I have been working on a research project for the last year, and we really wanted to use many ideas from Self, including the language, for a very complex simulation project which involved a lot of user scripting.

In my opinion, one of the main reasons Self hasn't been mined more for ideas is that it is one of the most frightening pieces of code ever released. It's a huge C++ code base with different styles (templates on and off, etc.) and a really complex use of preprocessor macros as higher-order functions.

I'm not trying to diminish the work of the programmers. The Self VM is still one of the most advanced and fastest VMs out there, even if the project's been in a coma for more than five years. Every paper that project produced is required reading for anyone implementing a dynamic language.

But anyone trying to look at how Self works is in for a lot of headaches.

Another reason is that the Self VM released by Sun only runs on PowerPC and SPARC. Maybe now that many programmers are "switching" to Mac OS, someone will rediscover it.

BlueBook online

In my to-read list I have the implementation part of the blue book, which is the foundation for the SqueakVM. Not state of the art, but it is interesting.

Neko

You might have a look at Neko.

Instead of defining the VM in terms of opcodes, it's defined by an intermediate (still high-level) dynamically typed language. As a result, it's less work to target the VM.

It's designed to be some kind of Universal VM for high-level languages: fast but lightweight and easily embeddable. However, I wonder if there are things missing before it can be called "Universal".

call/cc ?

I think using a simple language as the target for different frontend languages is a great idea, and in general I find neko/nekovm interesting, but I was wondering if maybe there is something missing to ease usage for languages that provide reified continuations.

You could probably implement, say, call/cc using exceptions, but maybe there is something I'm missing.
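
A minimal sketch of that exception trick in Java (the EscapeCont class and its names are made up): it buys you escape continuations (one-shot and upward-only), which covers early exits and the like, but not full call/cc, since the captured continuation cannot be re-invoked once control has returned past its capture point.

    import java.util.function.Function;

    class EscapeCont {
        private static final class Escape extends RuntimeException {
            final EscapeCont site; final Object value;
            Escape(EscapeCont site, Object value) {
                super(null, null, false, false);   // no message or stack trace needed
                this.site = site; this.value = value;
            }
        }

        // The "continuation": invoking it aborts back to the matching callEC.
        public Object invoke(Object value) { throw new Escape(this, value); }

        public static Object callEC(Function<EscapeCont, Object> body) {
            EscapeCont k = new EscapeCont();
            try {
                return body.apply(k);
            } catch (Escape e) {
                if (e.site == k) return e.value;   // our escape: deliver the value here
                throw e;                           // someone else's: keep unwinding
            }
        }

        public static void main(String[] args) {
            Object r = callEC(k -> k.invoke(42));  // the body aborts, yielding 42
            System.out.println(r);                 // prints 42
        }
    }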

Delimited continuations

Instead of call/cc, I would suggest supporting one of the families of delimited continuation (or control) operators (e.g., see Generic implementation of all four *F* operators: from control0 to shift).

The user can then easily and cheaply emulate the other families as well as unlimited first-class continuations.

[on edit: ugh, I totally forgot that Oleg demonstrated (once more) that call/cc AND a mutable cell are enough to efficiently implement delimited control operators. Still, depending on your philosophy of thought, you might prefer either orthogonal notions of unlimited continuations and cells, or a unified notion of delimited continuations.]

Shift/reset v. call/cc

There are still implementation reasons for choosing one or the other. For example, it's been suggested (though not, I believe, empirically verified) that for implementations that use heap-allocated stack frames (e.g. SML/NJ), the call/cc-with-mutable-cell technique would be one of the better ways to go, at least compared to some of the techniques that directly implement shift/reset. However, the information required for shift/reset is typically quite a bit more "local" than for call/cc, which suggests more efficient implementations for the common case. Of course, any VM/language is going to directly implement mutable cells, so... (or is it? a linear type discipline and optimizing linear array usage and a state monad interleaved with shift/reset...) Of course (also), if the VM supports TCO then the language compiler can simply use CPS.

Lua

You might want to take a look at Lua. Both the language and its VM were designed for "efficiency through simplicity". So, you won't see any fancy optimization strategies, but you will get a clean, simple, and fairly fast VM that's easy to tinker with. In fact, many users of the language go in and customize their VM in one way or another.

There's also a JIT under development that was designed according to the same principles. It even allows custom optimizers and code generators (written in Lua) to be plugged into the "compiler pipeline". It could potentially be a great playground for experimenting with those sorts of things.

Some (mostly simplistic) papers have been written about the implementation of Lua, and there is a guide to the VM instructions.