Martin Odersky is blogging "Raising Your Abstraction: A Brief History of Scala".
Interesting. I don't know much about Scala. I was slightly surprised that the author's goal was to merge the functional and OOP paradigms: I always thought of it as an OOP language with support for some functional features, like the direction C# seems to be heading in.
Is it fair to say that it is the primarily-OOP twin of the primarily-functional OCaml?
I wonder if this isn't a question of how the cultures around the languages use them, rather than a commentary on their technology. For example, what's missing in O'Caml's objects? We have classes (or immediate construction), multiple inheritance, parameterized classes...
Similarly, how is Scala "primarily OOP?" It has first-class functions, algebraic data types and pattern matching, closures, and much more. In fact, Scala and O'Caml really seem to be kissing cousins, down to their being the only two statically-typed languages I know of that don't suffer from the "expression problem."
So I suspect that it boils down to O'Caml being from the ML family (and its culture reflecting that), and Scala being from the Java family (and its culture reflecting that). But I think Scala could win a lot of OO/imperative fans over since it generates Java bytecode. It will be interesting to see how this plays out.
Do OCaml and Scala solve the "expression problem" by supporting both OO and FP styles, or is there something deeper I don't see?
Also, I'm looking forward to reading more of the blog posts about Scala so thanks for the link.
Independently Extensible Solutions to the Expression Problem, Zenger and Odersky
(more papers on the expression problem)
Creighton Hogg: Do OCaml and Scala solve the "expression problem" by supporting both OO and FP styles, or is there something deeper I don't see?
Less deep, as far as I can see: parametric polymorphism, proper typing of binary functions, open "self types," and structural typing are all it seems to take. "On the (un)reality of virtual types" uses O'Caml objects to demonstrate, while "Code Reuse Through Polymorphic Variants" naturally uses polymorphic variants, although there are some half a dozen alternative implementations at the site I link to here. Independently Extensible Solutions to the Expression Problem goes through a similar set of exercises for Scala, after first giving the best description I've yet seen of the difficulties that most statically-typed languages have in solving the problem. Somewhat astonishingly, the first two papers aren't cited in the rather recent "Independently Extensible." It appears that the groups arrived at essentially the same conclusions independently of each other.
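To give a feel for the Scala side, here is a minimal sketch in the spirit of the paper's abstract-type encoding; it is a compressed paraphrase rather than the paper's actual code, and it is written trait-based throughout (which, as discussed below, is what Scala 2 requires):

    trait Base {
      type exp <: Exp
      trait Exp { def eval: Int }
      trait Num extends Exp {
        val value: Int
        def eval = value
      }
    }

    // Data extension: add a Plus node without modifying Base.
    trait BasePlus extends Base {
      trait Plus extends Exp {
        val left: exp
        val right: exp
        def eval = left.eval + right.eval
      }
    }

    // Operation extension: add a show method to the existing nodes.
    trait Show extends Base {
      type exp <: Exp
      trait Exp extends super.Exp { def show: String }
      trait Num extends super.Num with Exp { def show = value.toString }
    }

The abstract type exp is the key: each extension tightens its bound, so new data variants and new operations can be added independently and mixed together later.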
Independently Extensible Solutions to the Expression Problem is an interesting paper. Unfortunately, the code doesn't work as-is with Scala 2. This seems to be related to a change in mixin composition, but I don't understand Scala nearly well enough to fix the code. This is an example of the error:
    expr.scala:121: error: `{' expected
        with super[DblePlusNeg].Num(v)

And if the argument is removed, then:

    expr.scala:121: error: class Num needs to be a trait to be mixed in
        with super[DblePlusNeg].Num
I had compiled the code with an earlier version of Scala (1.4.0.3) and it compiled without error. (The binary method example did not function correctly, though, so I upgraded to the latest version of Scala in case that malfunction was caused by the compiler.)
The code in the paper does indeed no longer work in Scala version 2. As you suspect, this is caused by changes between Scala 1 and Scala 2, in particular in the mixin model. It can easily be fixed, however. I think we need to update the paper to take the changes into account. I'll post here once that's done.
For the time being I have put the updated code on my website: http://lampwww.epfl.ch/~odersky/papers/ExpressionProblem#Note
This compiles under Scala version 2. Binary methods also work (I am not sure why they didn't in the version you tested).
abstract-data-mixin.scala compiles, but it throws an ExceptionInInitializerError when a test object is run.
I narrowed it down to this (stripped-down) code:
    trait Base {
      trait Num {
        val value: int
        def eval = value
      }
    }

    object testBase extends Base with Application {
      val term = new Num { val value = 1 }
    }
def eval = value seems to be the guilty line, because the code works without it. This is the error:

    Exception in thread "main" java.lang.ExceptionInInitializerError
            at testBase.main(temp.scala)
    Caused by: java.lang.NullPointerException
            at Base$Num$class.$init$(temp.scala:2)
            at testBase$$anon$0.<init>(temp.scala:9)
            at testBase$.<init>(temp.scala:9)
            at testBase$.<clinit>(temp.scala)
            ... 1 more
I am using Scala 2.1.5 on Windows. Sorry to be bothersome.
Hmm. The code won't work because the anonymous class's value isn't initialized until after the base class's initializers have run, and the base class relies on the value from the subclass. Scala's initializers (basically anything that isn't a def) run in order, subject to certain rules of class linearization, i.e. how multiple classes are folded together into one. Looks like a simple oversight to me.
The problem is not in the code but in scalac. There's a known compiler bug which causes member vals to be evaluated too early in some circumstances. It's fixed in the version I am running; that's why I did not notice the problem when I posted the file. The fix will be in the 2.1.6 distribution. Sorry for the bumpy ride!
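For reference, even apart from that compiler bug, there is a real initialization-order hazard in this area: a val in a trait that reads an abstract val will see the default value, because the subclass's initializers have not yet run. A minimal sketch (hypothetical names, written for current Scala):

    trait Parent {
      val value: Int
      val doubled = value * 2   // runs during Parent's initialization,
                                // before the subclass assigns value
    }

    object InitDemo {
      def main(args: Array[String]) {
        val p = new Parent { val value = 1 }
        println(p.doubled)      // prints 0, not 2
      }
    }

Making doubled a def (or a lazy val) avoids the problem, since evaluation is then deferred until after construction.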
I just noticed that Scala 2.1.6 was released on June 16 and tried your updated code again. It compiles and works. Thank you for your help in getting this working.
It was a regrettable oversight that we did not cite Garrigue's "Code Reuse through Polymorphic Variants" paper. We had come across it before, but did not remember that it was relevant when we wrote our paper. The papers solve essentially the same problem, but with quite different tools. The (un)reality paper is also about functional/OOP expressiveness in the large sense, but not specifically about the expression problem.
Thanks for the feedback!
To the extent that your and Garrigue's work addresses difficulties found in many statically-typed languages, difficulties which have come to be exemplified (identified?) by the "expression problem," and which neither Scala nor O'Caml suffers from, I would have thought that at least some comment to the effect of "Scala is not the only language to solve this problem; Objective Caml relies instead on..." would have been warranted. I think a compare-and-contrast between the Scala and O'Caml approaches would be quite useful, especially if you are able to identify significant advantages to the Scala approach.
Of course, I remain incredibly impressed by both Scala and O'Caml—both offer an extremely rare integration of functional and object-oriented programming in an inferred statically-typed setting, neither suffers from the expression problem, and both have extreme expressive power. Now if only I could convince my Java-shop employer to use Scala...
It was a regrettable oversight that we did not cite Garrigue's "Code Reuse through Polymorphic Variants" paper.
Afaics, the opening paragraph of that "Code Reuse through Polymorphic Variants" paper starts with a false premise, asserting that the tension between type-safety and reuse is a zero-sum tradeoff, when in fact type-safety aids reuse/composability, because the semantic bounds are well-defined, especially when combined with referential transparency. Rather, the author missed the generative essence, which is that the granularity of types is what determines the tension.
Afaics, the importance of typing is to enforce semantic bounds at compile-time (i.e. locality of concerns), to avoid proliferating run-time "exceptions" (misbehavior of any degree):
http://esr.ibiblio.org/?p=2491#comment-276758 (Jocelyn is me)
And static typing is required for mixing referentially transparent and opaque code.
For a statically typed OOP language that has abstract classes (aka "interfaces") and "shadowing" mixins, making interfaces as granular as possible minimizes the tension, while getting rid of virtual inheritance on non-abstract classes avoids the Liskov Substitution Principle problem.
Thus Haskell's (FP) notion of a class (where any extension can add "methods", i.e. functions that operate on a pre-existing type, without forcing a subtype) doesn't add any conceptual granularity power, and it undesirably removes the instantiator's option to control which behaviors can be attached to the instance.
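To make the comparison concrete, here is a rough sketch of that "attach operations to a pre-existing type" idea in Scala's implicit-parameter encoding of type classes (Show, IntShow, and display are illustrative names, not from the thread):

    trait Show[T] { def show(t: T): String }

    object ShowDemo {
      // Attach a show operation to Int after the fact; no subtyping
      // is involved, and the caller controls which instance is in scope.
      implicit object IntShow extends Show[Int] {
        def show(t: Int) = t.toString
      }

      def display[T](t: T)(implicit s: Show[T]) = println(s.show(t))

      def main(args: Array[String]) {
        display(42)   // prints 42
      }
    }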
Algebraic types (e.g. enum in Copute, a la HaXe) can be extended similarly to classes via inheritance, if they are unified with classes.
This is a reply to my post from yesterday (posted 7am EST; it is now midnight EST, noon for me in Asia), but it will not appear properly indented under my prior post, because I am unable to click "reply" on that post: it hasn't yet appeared on this page (not yet approved by the moderator), and is not even visible to me privately while logged into LtU.
Shelby Moore wrote:
...the generative essence, which is that granularity of types is what determines the tension.
Afaics, the importance of typing is to enforce semantic bounds at compile-time (i.e. locality of concerns), to avoid proliferating run-time "exceptions" (misbehavior of any degree)...
...while getting rid of virtual inheritance on non-abstract classes avoids the Liskov Substitution Principle problem...
Let me show the derivation of those above assertions.
LSP states that a property is inherited by all subsets when the property is provable for the superset.
LSP thus states that it is generally undecidable whether subsets inherit semantics. This is due to the Linsky Referencing principle, which says that it is undecidable what something is when it is described or perceived; i.e., the static typing compiler can enforce the relationships between the declared types, but not the algorithms applied to the variants of the interface. Thus the type of a method function (aka interface) is proven at compile time by the static typing compiler to be inherited, but the semantics derived from that inherited interface type are undecidable/unprovable.
Shelby Principles on LSP
The thing I've heard lacking... ...was a downcast. Either I was misinformed, or, more likely, it's considered a feature. Wondering whether Scala lacks the same.
Chris Rathman: The thing I've heard lacking.... ...was a downcast. Either I was misinformed, or more likely it's considered a feature.
It's absolutely considered a feature by the O'Caml community. The response to requests for downcasts in O'Caml tends to be "if you want variant types, why don't you use variant types?" True confession time: in working on some Unreal archive I/O code I've been noodling with for far too long, even switching from C++ to O'Caml along the way, I'd initially thought to use O'Caml's object system in an attempt to maintain a parallel between Unreal classes and O'Caml classes. But recently I ran into the limitations not having downcasts imposes, and it finally occurred to me—and I'm talking about within the past couple of weeks—that most of the object-oriented programming I've done really has been done in order to emulate variant types! So for me, Code Reuse Through Polymorphic Variants is probably the most important O'Caml paper I've read. Now I just need to investigate the new pa_monad syntax extension and Oleg's monadic delimited continuation code so that I can use zippers with my iterators over variant types...
Not missing. You'd probably want to avoid writing this, but it's there if you really really need it. You can usually do much better with pattern matching.
    val node: Node = ...
    if (node.isInstanceOf[Composite]) {
      val composite = node.asInstanceOf[Composite]
      ...
    } else if (node.isInstanceOf[Leaf]) {
      val leaf = node.asInstanceOf[Leaf]
      ...
    }
I could have written:
    node match {
      case composite: Composite => ...
      case leaf: Leaf => ...
    }
Pattern matching is more fun when combined with case classes. You pattern match and extract all at once:
    trait Node[T]
    case class Leaf[T](t: T) extends Node[T]
    case class Composite[T](left: Node[T], right: Node[T]) extends Node[T]
    def print[T](node: Node[T]): unit = node match {
      case Leaf(t) => println(t)
      case Composite(left, right) =>
        print(left)
        print(right)
    }
Note that pattern matching works very nicely with comprehensions. Here's a way to get all the "right" sides of composites into a collection.
    for (val Composite(_, right) <- nodes) yield right
In the latest compiler you can also do this:

    val fileWriterClass = classOf[java.io.FileWriter]
Nothing is missing from OCaml's objects. Both Scala and OCaml seem to produce interesting OOP papers.
Like I said, I don't know much about Scala. The differences between Scala code and OCaml code may indeed be more related to culture. I think the languages might encourage different approaches, even if they are very similar, though.
In languages like C++, where you don't have ADTs, pattern matching, or higher-order programming, you tend to approach all problems from an OOP point of view. In OCaml, OOP isn't used everywhere, just in the places where you want a stateful object. In Scala, it seems the object system is instead extended with features such as pattern matching on objects, widening the range of problems OOP can easily tackle.
As I said, I don't know much about Scala, these are the impressions I get of it. I hope you see what I'm saying: OCaml encourages the use of OOP only in certain cases, while Scala instead makes OOP more powerful, and encourages its use everywhere. I'm probably wrong of course ;-)
My sense at the moment is that you're basically correct as to what the languages emphasize—like you, I don't yet know Scala well enough to comment more strongly.
As I commented to Chris Rathman, I recently had the revelation about O'Caml and objects that the one feature they steadfastly refuse to implement—"downcasts" or dynamic casts—is exactly the right one to leave out, since the visitor pattern and downcasts are a poor man's (unsafe) approach to variant types. The irony for me is that that covers most of my use of OOP—I'd been wanting variant types for decades and just didn't realize it!
When we want to act "differently" for objects of different types, we have several basic choices. We can collect the various behaviors into a function (or group of functions): this is stratification by functionality. It's easier to see how the function is implemented across many types because the implementations are all in one place, but harder to add new types, as you must visit each function group and make the appropriate modifications.
You can stratify horizontally by using conventional OO polymorphism. This makes it easy to add new types to the system (just implement the "contract"), but makes it harder to add new functions (must add them to all types).
In Scala (and others) you can also "mix-in" functionality, in which functionality is selectively blended into an existing class. If thought through carefully this can give you the best of both worlds, but it can make writing the base classes more complex.
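As a rough illustration of the mix-in point (the names are invented for the example):

    // Functionality selectively blended into a class via a trait.
    trait Printing {
      def label: String                  // the mixing class must supply this
      def printLabel = println(label)    // behavior gained by mixing in
    }

    class Point(x: Int, y: Int) extends Printing {
      def label = "(" + x + ", " + y + ")"
    }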
You can combine pattern matching and polymorphism in Scala to an extent; Scala's "case classes" (algebraic data types) are somewhat limited in that a case class cannot extend another case class (I believe this restriction is present to allow optimized compilation of pattern matching). You can, however, create any number of conventional subclasses of a case class. You can then pattern match on the algebraic base class and invoke a polymorphic method as well:
    case class Node(left: Any, right: Any) {
      def isDynamic = false
    }

    class DynamicNode(left: Any, right: Any) extends Node(left, right) {
      override def isDynamic = true
    }

    class NonDynamic(left: Any, right: Any) extends Node(left, right)

    x match {
      case node @ Node(left, right) if node.isDynamic =>
        // ... operate on dynamic nodes ...
      case _ =>
        // ...
    }
OCaml is not OO-centric. For instance, the compiler and the standard library themselves barely use OO at all. OO in OCaml is also tricky. There is nothing missing, but there is maybe actually too much: binary methods are an interesting concept, but they create headaches even for people who don't want to use them :)
I would say that except for research, or some specific applications where OO makes perfect sense (such as naturally hierarchical structures like UI frameworks), the OCaml programmer doesn't need objects. Or at least that's my own experience.
Scala looks interesting, but with all due respect it might be "too functional" to drive acceptance by the Java community. OTOH it's perfectly suitable for OCaml programmers who need a clean language that targets the JVM.
Not quite. While I can't argue for most "OO" languages out there, it's worth noting that the ideology behind OO is sound. Specifically, it's all about abstraction. When you create a class and pass it around, you're implicitly passing all its functions around as well. This does several things: it isolates object creation, explicitly shows what a given piece of code does and does not have access to, and decouples the class from that code.

Say you're creating a bignum data structure and you want to create the usual operators for it. This works fine in O'Caml, barring the fact that you can't overload the main operators for it, which somewhat negates the point. Let's say, though, that for this example O'Caml supports operator overloading. So you've got your global operators which operate on the data structure, and you use it in some of your code. It works fine and you keep coding, until you realize you need to use it in some library code which has no notion of your bignum operators. The solution would require somehow passing the operators to that library function, preferably implicitly. It should be obvious where I'm going with this: you need a class.

Now, you may say this is an isolated example, but it's not. Most of the time, yes, you can get away without using classes, but that's not all you must consider; classes, and hence OOP, offer flexibility by default, meaning your code will be less rigid and more open to change.
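A rough Scala rendering of that point, with BigNum as a toy wrapper over BigInt (the names are invented for the example):

    object BigNumDemo {
      // The operators travel with the values, so "library" code that has
      // never heard of our operators can still combine BigNums.
      class BigNum(val n: BigInt) {
        def +(other: BigNum) = new BigNum(n + other.n)
        override def toString = n.toString
      }

      // Library code: it needs nothing beyond the objects themselves.
      def sum(xs: List[BigNum]) = xs.reduceLeft(_ + _)

      def main(args: Array[String]) {
        println(sum(List(new BigNum(1), new BigNum(2), new BigNum(3))))  // 6
      }
    }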
Modules are more suited for this situation. Objects are best used for encapsulating state. With a powerful module system, you are able to group collections of functions and pass them around in this manner. The boundaries between modules and objects can get quite blurry though.
First-class modules are stateless objects for all intents and purposes. In any event, operations on bignums aren't exactly stateless: you wouldn't want to pass around both a number and the bignum module just to do addition; that's what classes are for.
First-class modules are stateless objects for all intents and purposes.
Just put a reference variable in the module structure and you have instant stateful objects. A couple of examples can be found in the Alice ML Shape example and the translation of CTM chapters 6 & 7, where OO is emulated with (a) wrapped functions, (b) records, and (c) functors. Indeed, chapters 6 & 7 of CTM are a very good study of the transition from Abstract Data Types to Object-Oriented Programming (with which, IIRC, William Cook was helpful, though I don't remember if that was on LtU).
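In Scala terms (a singleton object standing in for the ML structure; the names are invented for the example), the same trick looks roughly like this:

    // A "module" with a reference cell inside: an instant stateful object.
    object Counter {
      private var count = 0               // the reference variable
      def next() = { count += 1; count }  // Counter.next() yields 1, 2, 3, ...
    }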
I think the module system of ML in general and Alice in particular could use some more refinements to make them friendlier to an OO style, but ultimately I think the better solution is to get the kinks worked out of higher order modules rather than classes. (Seems like self referential modules and some syntactic sugar should do the trick, though I don't remember all the specifics). I haven't looked at O'Caml to know if their modules are similar in capability, but odds are probably pretty good that they too can be used to encapsulate state.
[Note: And I guess I should mention that CTM chapters 6 & 7 try to make the case for both Abstract Types and Objects, lamenting the fact that many OO languages ignore the former.]
[Another Edit Note: Going to the fridge just now, I remembered the biggest problem of higher-order modules as they are currently used: you can't pass an instantiated functor instance as a parameter to a function call. If you could pass them as parameters, then you'd be home free. IIRC, there are some decidability problems still to be solved before we get to that point.]
This isn't quite true if you also relate them to concepts like compilation units, packages, and the like. If there are significant differences in what you can let the type system express for those compared to records (for example, you might have a different decidability/expressivity tradeoff), that's valuable.
A "collection of functions" sounds an awful lot like an object/trait to me, and is directly expressible in Scala. Scala's path-dependent type system allows very sophisticated combinations of functions to be created, implicitly derived from one another, and even easily engage in late binding.
Any function can be "enriched" into a fuller object. In Scala functions are objects:
    trait Function2[A, B, T] {
      def apply(a: A, b: B): T
    }

    class PreAndPost[A, B, T](func: Function2[A, B, T]) extends Function2[A, B, T] {
      def apply(a: A, b: B) = {
        pre
        val ret = func(a, b)
        post
        ret
      }
      def pre: unit = { }
      def post: unit = { }
    }
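A hypothetical use of the wrapper above, assuming the trait exactly as given:

    object PrePostDemo {
      def main(args: Array[String]) {
        val add = new Function2[Int, Int, Int] {
          def apply(a: Int, b: Int) = a + b
        }
        val wrapped = new PreAndPost(add)
        println(wrapped(1, 2))   // runs pre, then add, then post; prints 3
      }
    }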