Invokedynamic

Gilad Bracha:

Basically, it will be a lot like invokevirtual (if you don’t know what that is, either open a JVM spec and find out, or stop reading). The big difference is that the verifier won’t insist that the type of the target of the method invocation (the receiver, in Smalltalk speak) be known to support the method being invoked, or that the types of the arguments be known to match the signature of that method. Instead, these checks will be done dynamically.

There will probably be a mechanism for trapping failures (a bit like messageNotUnderstood in Smalltalk).

The goal: Improve the support for dynamically type checked languages on the JVM, of course.
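To make that concrete: invokedynamic itself didn't exist yet when this was written, but a rough sketch of the linkage model Bracha describes, written against the java.lang.invoke API that eventually shipped with JSR 292 in Java 7, might look something like the following. The class and the fallback handler are purely illustrative; javac never emits invokedynamic for ordinary Java source, so in practice this bytecode is used by compilers for other languages and by bytecode generators.

import java.lang.invoke.CallSite;
import java.lang.invoke.ConstantCallSite;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class IndySketch {
    // Handler for the "message not understood" case (illustrative only).
    static Object notUnderstood(String name, Object[] args) {
        throw new UnsupportedOperationException("doesNotUnderstand: " + name);
    }

    private static final MethodHandle NOT_UNDERSTOOD;
    static {
        try {
            NOT_UNDERSTOOD = MethodHandles.lookup().findStatic(
                IndySketch.class, "notUnderstood",
                MethodType.methodType(Object.class, String.class, Object[].class));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // The JVM calls a bootstrap method like this one the first time an
    // invokedynamic call site is executed; it returns the CallSite the
    // instruction is then linked to.
    public static CallSite bootstrap(MethodHandles.Lookup lookup,
                                     String name, MethodType type) {
        MethodHandle target;
        try {
            // Look the method up dynamically instead of having the verifier
            // prove statically that the receiver supports it.  (A real
            // dynamic-language runtime would inspect the receiver's runtime
            // class and relink as needed.)
            target = lookup.findVirtual(type.parameterType(0), name,
                                        type.dropParameterTypes(0, 1));
        } catch (ReflectiveOperationException e) {
            // The messageNotUnderstood-style trap Bracha mentions: link the
            // site to a fallback handler instead of failing.
            target = NOT_UNDERSTOOD.bindTo(name)
                                   .asCollector(Object[].class, type.parameterCount());
        }
        return new ConstantCallSite(target.asType(type));
    }
}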


messageNotUnderstood ?

I think he means doesNotUnderstand:. MessageNotUnderstood would just correspond to NullPointerException.

By the way, going off topic, I was thinking about this today: is it possible to provide a facility similar to Smalltalk's doesNotUnderstand (or ruby's method_missing, or CL's no-applicable-method etc) in a statically typed language?

If it is possible, was it done somewhere?

I want you to think about what that would actually mean

I want you to think about what that would actually mean - do you mean winking at a type error? The alternative is that the type system is not powerful enough and a cast (or some other technique to circumvent the type system) was used, or that some sort of soft typing is being used (for instance, as in Oz).

If you don't mean that, then I assume that you mean a polymorphic function that is defined over a certain number of types, and is defined with some sort of default clause that does not actually use the value in any operation which is not known to be similarly defined over all other types. This is possible in C++ with templates.

Not a Type error

I wasn't thinking of "winking at a type error"; what I was thinking of was getting help from the type system to do automatic delegation (Java's Dynamic Proxy is close).

Which fits in your second statement, I think.

I would be thankful if you could point me to an example of how you would solve the problem with C++ templates.

Compile-time polymorphism

I would be thankful if you could point me to an example of how you would solve the problem with C++ templates

You can only do compile-time polymorphism using C++ templates and I'm assuming this is what Marcin was alluding to. There are basically two approaches, either using templated classes or templated functions.

Templated class example:

#include <vector>

template<typename T>
struct Helper
{
    static void f(const T& value)
    {
        // Default handler.
    }
};

template<>
struct Helper<int>
{
    static void f(int value)
    {
        // int handler.
    }
};

// Partial specialization for std::vector (the primary template cannot
// simply be redefined a second time).
template<typename T>
struct Helper<std::vector<T> >
{
    static void f(const std::vector<T>& value)
    {
        // std::vector handler.
    }
};


Templated function example:

template<typename T>
void f(const T& value)
{
    // Default handler.
}

void f(int value)
{
    // int handler.
}



It is hard to say whether one is strictly better than the other. Because the templated function approach uses argument list overloading rather than template specialization, it interacts nicely with inheritance whereas the templated class approach does not. On the other hand, you don't have the benefit of partial specialization with templated functions. (See the std::vector partial specialization in the templated class example.)

A similar question just popped up on gd-hackers

This thread on the Gwydion Dylan gd-hackers mailing list seems relevant.

MessageNotUnderstood class

I have a vague recollection that the Digitalk Smalltalk implementations may have named the message messageNotUnderstood:

In other Smalltalks, MessageNotUnderstood is a Smalltalk class - a kind of (perhaps resumable) Exception.

doesNotUnderstand: aMessage is a Smalltalk message - the default implementation, Object>>doesNotUnderstand:, creates a new instance of MessageNotUnderstood and initializes it from the current Context.

"Dynamic" vs. "dynamically typed"...

Tangent: that's dynamically typed languages - not the increasingly common horrible misnomer dynamic languages (as opposed to static languages, where nothing moves, like hieroglyphics, perhaps?).

This is a very thought-provoking comment, to me. For people who spend a lot of time thinking about type systems, it's pretty natural to assume that when people talk about "dynamic languages," that's just sort of a contraction of "dynamically typed languages," and the focus is really squarely on the type system.

But I think this hides a really important point. Advocates of typical "dynamic languages," especially the newer crop (Python, Ruby) are not typically advocating dynamic typing for its own sake. They don't want to write Java code with no type checking... What they really like about these languages is a whole host of "dynamic" features that, taken as a whole, we don't know how to statically check.

An obvious example is reflection. In a language like Ruby, reflection is pervasive, natural, and comes at a very low cost (syntactically and cognitively, if not in performance). This allows the object system in Ruby to be "dynamic" in a way that even typical object systems for Scheme, for example, are not. The syntax of the language is little more than sugar for API calls that manipulate a run-time data structure of classes, objects, etc.

We can't offer a good static analysis for a lot of these features. In many cases, of course, it's not even possible in principle. So if you want some of these features, then you're pretty much stuck with dynamic type checking as well, at least for the time being. But dynamic typing isn't the end toward which you're arguing, it's just a consequence.

So I think that, to some degree, insisting that everyone say "dynamically typed languages" rather than "dynamic languages" is already framing the discussion in a biased way. It focuses on the dynamism that we're stuck with, rather than the dynamism that we want.

This might seem like sort of a ridiculous nit to pick, but I think this way of thinking is already evident in the design that Bracha proposes (invokedynamic). The idea of invokedynamic seems like "Java without the type system," without offering any of the features that people find really valuable and interesting in those other languages.

The term

My aversion for the term dynamic languages is well known (and I also don't like dynamic typing: I prefer the term dynamic type checking), but never mind about that.

You have some reflection capabilities in Java, and on .Net. Does that make these languages dynamic? Not according to most.

Also: metaprogramming (macros, staging) can support some of the things reflection is used for (read below on AR). Are these enough for statically typed languages to be called dynamic? Again, not by most people who use the term.

My feeling is that the term doesn't really describe languages, but rather environments (REPLs etc.), programming style etc. But maybe I am wrong about this: most of these languages are indeed dynamically type checked.

I recall Guido saying something about the term in the presentation I linked to a couple of weeks ago.

Style

I think that the expression "dynamic language" is most often used to mean "late-bound language". Of course, this isn't just about type systems... but dynamic typing does help to delay binding a long way.

There's a distinct programming style that relies on extremely late binding. Languages like Java fail to support that style, not because they don't have enough features (reflection is nice sometimes), but because they have the wrong features. For example, you can't write a transparent proxy for a Java class that has final methods.
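For the curious, here is a minimal sketch of trap-style delegation with java.lang.reflect.Proxy (the Greeter interface and all names here are invented). It works only because the proxied type is an interface; Proxy cannot extend a concrete class, and no bytecode trick can override a final method, which is exactly the limitation being complained about.

import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

public class ProxyDemo {
    public static void main(String[] args) {
        Greeter real = n -> "Hello, " + n;

        // Forward every call on the proxy to the real object, trapping
        // each invocation along the way.
        Greeter proxy = (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            (Object p, Method m, Object[] a) -> {
                System.out.println("intercepted: " + m.getName());
                return m.invoke(real, a);
            });

        System.out.println(proxy.greet("world"));
    }
}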

advocacy vs technical description

"the term doesn't really describe languages"

Agreed, but dynamic language is about marketing and advocacy and promotion more than technical description - just be happy that "agile language" hasn't really caught on :-)


My hope is that many folk will use the term dynamic language in a mushy, vague way, and we'll all understand that it's a vague hand-waving term and be able to move past that to more specific discussions.

While dynamic type checking

While dynamic type checking (or, more commonly, spotty, uneven, and often no type checking at all) is a characteristic of the dynamic languages, it's by no means the most useful. Nor is it the most annoying, when dealing with things from the bottom up.

Introspection's not, in my experience, a defining characteristic. The languages that people commonly think of as dynamic generally have, at best, awkward introspection capabilities. That's sort of a shame, but there you go.

In addition to dynamic type checking, there's also late code loading, which is a bigger hassle. Perl/Python/Ruby et al, with their integrated compilers, tend to encourage this, and it is immensely useful -- it's not at all uncommon to find configuration files that are really executable code. Rather than parsing the file to extract the info, you just compile and run the configuration file.

Probably the single biggest feature that the dynamic languages have going for them is the extreme late-binding of functions and methods, late binding to the point where they're re-bound each and every time they're invoked, such that a function or method can be redefined or deleted on the fly and the changed function will be used instead of the old function on the next invocation.

These dynamic features -- delayed/weak/nonexistent typing, dynamic redefinition, and at-runtime code compilation -- certainly aren't the only features that make the dynamic languages appealing, but they are what makes them dynamic. (And a damn pain to write efficient runtimes for, but that's a rant for another time.)
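A toy sketch of what that per-call rebinding amounts to, in Java for concreteness (all names invented): functions live in a mutable table that is consulted on every invocation, so replacing an entry changes behaviour immediately.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class LateBinding {
    // Mutable dispatch table: the "environment" of function definitions.
    static final Map<String, Function<Integer, Integer>> defs = new HashMap<>();

    static int call(String name, int arg) {
        return defs.get(name).apply(arg);   // looked up again on every call
    }

    public static void main(String[] args) {
        defs.put("f", x -> x + 1);
        System.out.println(call("f", 41));  // 42

        defs.put("f", x -> x * 2);          // redefine f on the fly
        System.out.println(call("f", 41));  // 82
    }
}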

Interesting...

Probably the single biggest feature that the dynamic languages have going for them is the extreme late-binding of functions and methods, late binding to the point where they're re-bound each and every time they're invoked, such that a function or method can be redefined or deleted on the fly and the changed function will be used instead of the old function on the next invocation.

That makes sense to me, but I guess I've been thinking of that type of obsessively late binding as being effectively a type of reflection/introspection, in the sense that any syntax for method or function definition amounts to sugar for run-time manipulation of some interpreter representation of classes/namespaces/etc.

Now if the language syntax is the only way that this introspection facility is exposed, then I'd agree with your assessment that it's "awkward"... But I don't think that's quite what you meant... ;)

I'd be curious to know if you find this at all persuasive...

Well said!

I've always used the term "dynamic languages" to mean "languages that make me feel dynamic". I include both dynamically typed languages (e.g. Ruby, Smalltalk) and statically typed languages (e.g. Haskell) and even weakly typed languages for some things (Tcl).

There's much more to "dynamism" than the type system. I think it's more the malleability of the language: that you can change the language to match your thinking about the domain rather than vice versa.

But we do know how to check a lot of it...

I'm not certain that there's all that much useful stuff in Ruby that we don't know how to check, or could not be remade in a way that we know how to check whilst losing no useful functionality. The main issue with things like this in Ruby is that the language was never designed with any thought to early type checking, so a lot of design decisions were made that make it impossible.

I think in large part it's a matter of the spread (or lack thereof) of certain types of ideas through the programming community, too. After all, Ruby was designed more than ten years ago, back in the 90s, before even GC was mainstream outside of scripting languages. I've been playing with programming for more than twenty years, and doing it as a profession for more than five, and yet it's only this year I'm finally picking up FP, and starting to understand its advantages.

Give it another ten years, and all this stuff will really start to come together, I think. We are probably now at the point where no new language will lack GC and some sort of OO, and functional features are starting to creep into or be tacked on to existing popular languages. Recent changes in Java, such as generics, and the addition of invokedynamic to the JVM as this article mentions, show this sort of cross-fertilization effect as well. In ten years we may well have a "scripting" language that can feel as much like Haskell as Ruby can feel like Smalltalk.

We'll probably have a "scripting" equivalent of Haskell in another ten years or so.

Dynamism

I'm not certain that there's all that much useful stuff in Ruby that we don't know how to check, or could not be remade in a way that we know how to check whilst losing no useful functionality.

ActiveRecord (one of the most attractive things in Rails) makes heavy use of the fact that class definitions are executable code, i.e. you can do metaclass hackery - auto-generating methods for talking to the database etc. - at class-definition time.

Mapping this into a typed language is a non-trivial effort (yes, I know about HaskellDB, but it isn't aimed at the same problems as ActiveRecord).

ActiveRecord

ActiveRecord (one of the most attractive things in Rails) makes heavy use of the fact that class definitions are executable code, i.e. you can do metaclass hackery - auto-generating methods for talking to the database etc. - at class-definition time.

ActiveRecord and its ilk are really nice. While the dynamism of Ruby makes something like AR more convenient to implement, you can achieve the same effect in a more statically typed language by a hybrid approach based on code generation. Firstly, you would allow untyped access using accessors, e.g. getValue("name_of_column") and setValue("name_of_column", new_value). Secondly, you could generate strongly-typed interfaces at compile-time by connecting to the database. (This integrates really well with C# 2.0's support for "partial classes".) This seems to basically cover all the AR use cases I can think of. It could be made about as convenient as AR by decent build process integration.
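A hedged sketch of that hybrid approach, in Java rather than C# (the Record/Person names and the "name" column are invented): an untyped getValue/setValue base class, plus the kind of strongly typed wrapper a schema-driven generation step might emit.

import java.util.HashMap;
import java.util.Map;

// Untyped base class: every row is a bag of column values.
class Record {
    private final Map<String, Object> columns = new HashMap<>();

    public Object getValue(String column)           { return columns.get(column); }
    public void   setValue(String column, Object v) { columns.put(column, v); }
}

// What a compile-time generation step, driven by the database schema,
// might emit for a "people" table: strongly typed wrappers over the
// untyped accessors.
class Person extends Record {
    public String getName()            { return (String) getValue("name"); }
    public void   setName(String name) { setValue("name", name); }
}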

Ah!

Let the dynamic-fan in me make a few nitpicks.

by decent build process integration

But you don't _need_ a build process in Ruby. You edit the code, then run it.

This seems to basically cover all the AR use cases I can think of.

Does your AR look-alike know that "person" is the singular for "people"? The real ActiveRecord does :-)

generate strongly-typed interfaces at compile-time by connecting to the database

So now your build process needs a network connection to the main database?

See, dynamic fans have a reason to complain about static fans making things too complex :-)

Round two

Let the dynamic-fan in me make a few nitpicks.

FWIW, I do most of my programming these days in Python so I'm very much a "dynamic-fan". :)

But you don't _need_ a build process in Ruby. You edit the code, then run it.

I know. Ultimately what matters is workflow though, agreed? If you have to use a compiled, statically-typed language (there are good reasons that might constrain your choice) then my point was that you could achieve a workflow that is fairly close to working with AR in Rails.

Does your AR look-alike know that "person" is the singular for "people"? The real ActiveRecord does :-)

That kind of thing is not difficult regardless of how you end up handling things. I re-implemented most of AR in Python in just two evenings' worth of work; it's really pretty simple implementation-wise.
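For what it's worth, a toy sketch of the kind of inflection helper meant here (the table of irregular forms is obviously incomplete, and the names are invented):

import java.util.Map;

public class Inflector {
    // Small table of irregular plurals, plus a naive default rule.
    private static final Map<String, String> IRREGULAR =
        Map.of("person", "people", "child", "children", "mouse", "mice");

    public static String pluralize(String word) {
        String irregular = IRREGULAR.get(word);
        if (irregular != null) return irregular;
        if (word.endsWith("s")) return word + "es";
        return word + "s";
    }

    public static void main(String[] args) {
        System.out.println(pluralize("person"));  // people
        System.out.println(pluralize("book"));    // books
    }
}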

So now your build process needs a network connection to the main database?

Sure. If you already have the functionality for doing all that stuff (which you would need for the run-time parts of the class library) it's a simple matter to write an IBuildTask class that plugs into MSBuild. (MSBuild is the new build system in Visual Studio 2005, which is in beta right now. It's awesome.)

I agree this is more complex than it would be in a dynamic language (there's that word again) like Python or Ruby. But presumably you have good reasons for using a compiled, statically-typed language that would justify the extra work. My point was that in this case (and other cases) most of the workflow gap between dynamic and non-dynamic languages can be bridged with just a small amount of tool work.

BCPL VM

"Another tangent: I always refer to the .Net VM and not the the CLR. It’s a VM, such things have always been called VMs"

For years I probably imagined that VMs were invented for Smalltalk at PARC, so it's fun that the first chapter of Virtual Machines is about BCPL.

VMs

"Another tangent: I always refer to the .Net VM and not the the CLR. It’s a VM, such things have always been called VMs"
For years I probably imagined that VMs were invented for Smalltalk at PARC, so it's fun that the first chapter of Virtual Machines is about BCPL.

That looks like a fun book, thanks for the link! Springer always puts out excellent, if pricey, books.

By the way, the irony of that quote you reference is mind-boggling. Surely everyone remembers the original PR push for Java? Sun was bandying the term "VM" around in relation to Java as if they themselves had invented VMs! Because of this I think a lot of "commercial programmers" made (and still make) the connection "VM <-> Java". Given this, it's not at all surprising that Microsoft is favoring the CLR term as a point of differentiation. In any case, it's hilariously ironic that a Sun engineer is getting upset about it!

Dynamic methods

Another tangent maybe, but the way that invoking properties of any type fell out of the Ivory system goes as follows:

Say we have a function raising an event against a list of rules:

raiseEvent event::(Exp *) ads::ADS =
   let invokeMethod ref::Ref =
      case ref of {
         rule::Rule -> ((rule.method) rule event ads)::Void
      }
   in
      mapProcRefs invokeMethod (ads.ruleList)

The critical expression to consider is:

((rule.method) rule event ads)::Void

The type checker has sufficient information here to infer that:

(rule.method)::(Rule -> (Exp *) -> ADS -> Void)

But it is also in the scope of a class declaration:

class Select a where
{
...
   (.)            :: (a -> Name -> *);
...
}

Now, in a dynamically typed system, * unifies with any other type, so this ends up as a run-time type check.

What happens is that firstly, the arguments are stacked, then secondly, rule.method is partially applied; in this case with no arguments. If it has the correct function type, then the argument satisfaction check returns immediately with a pointer to the closure and the type register set.

The calling function (invokeMethod in this case) only has to check the return type and re-enter the closure with the argument base registers reset.

If any other type is returned, then an inconsistent type exception is thrown.