A fork in the back? See discussion over at HN. People in the know are encouraged to shed light on the situation.
No knowledge of current events, but this sounds normal; it's the same community I remember from 7 years ago.
Paul is aggravated that Scala is retaining too many things done by other Scala contributors, most notably the collections library. I expect he will not stay motivated to work on Policy for long, given no funding and probably not many users, but I really look forward to seeing what he comes up with. I confess, I am sympathetic to his perspective on the latest collections design.
Typelevel is about not having to hack the Eclipse plugin any more, a fate I don't envy anyone. Wait, no, it's about being more "functional". I always struggle to respond constructively once we start talking about being "functional enough". In this case it's a conference chair of ICFP that is not functional enough. One way to look at this situation is that, as inclusive as Typesafe is, sometimes it does say no to including an idea in the compiler or the standard library. Some of those people eventually go off and do other things.
Unless there is some kind of secret funding that has not been revealed, these are both tiny hobby efforts compared to Typesafe. As well, in my opinion, Typesafe has some of the best language-design and compiler-implementation staff in the world.
I don't think that's a good summary. The GHC maintainers also reject ideas that are not good enough, but nobody thinks of forking GHC as a consequence, because, for all its imperfections, GHC is much better at what it does and is held to a much higher standard.
Scala, instead, is rich in warts, bugs, and type-unsoundness problems, and when the Scala maintainership keeps declaring things to be "OK", people are unhappy. In fact, Odersky himself agrees there are problems, so much so that he started rewriting the compiler from scratch. However, that's not going to fix enough of the warts in the language itself.
In the rest of this post I'm going to list a few of the warts I happen to know/care about. You can find more in https://github.com/typelevel/scala/issues (and on the Internet).
Another way of putting it: Scala is ignoring many lessons from Haskell, and many from ML, in favour of inferior solutions, and people keep running into the bad consequences.
Wait, no, it's about being more "functional". I always struggle to respond constructively once we start talking about being "functional enough". In this case it's a conference chair of ICFP that is not functional enough.
Sorry, but argument by authority isn't a valid form of reasoning. For instance, Scala lacks parametricity — everything can be compared through pointer identity.
It does not have a well-enough working inliner, which means that many functional coding practices have a much bigger cost than in other functional languages. But also, since everything can be compared by identity, it is not even valid to rewrite a :: Nil map identity to a :: Nil (this is just one example).
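To make that concrete, here is a minimal sketch (mine, not from the original comment) of how reference equality observes the difference:

    val xs = List(1, 2, 3)
    val ys = xs map identity   // map allocates fresh cons cells

    println(xs == ys)  // true: the lists are structurally equal
    println(xs eq ys)  // false: reference equality observes the copy, so a
                       // compiler cannot soundly rewrite `xs map identity` to `xs`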
By the way, since Odersky cares about the efficiency of Scalac, a lot of the compiler is overly micro-optimized. In https://github.com/scala/scala/pull/2879/files, we see people injecting booleans into Type for efficiency, and contributors struggling to deal with the resulting implementation complexity:

    // OPT: This could be Either[TypeVar, Boolean], but this encoding was chosen instead to save allocations.
Next point:
Unless there is some kind of secret funding that has not been revealed, these are both tiny hobby efforts compared to Typesafe.
Most of Typesafe is not working on Scalac. AFAIK, their compiler engineers are Adriaan Moors, Jason Zaugg, Grzegorz Kossakowski and Lukas Rytz, and IIRC that's it. EPFL has many more contributors, but they're mostly PhD students working on research.
Also, most of the Scalac activity was from Paul Phillips anyway — though I think most of the implementation was done by Odersky originally.
As well, in my opinion, Typesafe has some of the best language-design and compiler-implementation staff in the world.
Martin Odersky wrote (the basis of) the current javac as part of a research project — essentially alone, I understand. But Scala is a much bigger language. And Scalac is a bunch of undocumented code held together by duct tape implementing an undocumented language, which the staff has a hard time fixing. One possible explanation is that lots of code was hacked together by PhD students of varying coding ability and without a clear understanding of the intended compiler architecture.
The scariest bug I remember was in comparing type variables, from the 2.10.x series. Quoting from https://github.com/scala/scala/pull/2879:
Even though this was a regression in 2.10.1, I'm submitting this to 2.11 as the change isn't without risk, and the only reason this worked in 2.10.0 was because of another bug (TypeVar had a structural equals.)
I like the Scala core, but lots of the rest seems very inelegant, and badly implemented.
Say, because of their syntax for implicit arguments, you are not even guaranteed that you can refactor

    val a = f(args)
    a(b)

to f(args)(b).
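A minimal sketch of the wart (the definitions are mine, purely illustrative):

    def f(x: Int)(implicit s: String): Int => Int = y => x + y + s.length
    implicit val hint: String = "hi"

    val a = f(1)   // the implicit parameter list is filled in here
    a(2)           // fine: applies the function returned by f(1)

    // f(1)(2) is NOT equivalent: an explicit second argument list is taken
    // as supplying the implicit parameter, so 2 is checked against String
    // and rejected. You have to write f(1).apply(2) instead.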
My only disclaimer is that I never contributed much to Scalac, so my opinions wouldn't be relevant unless validated by more informed contributors, and that I never looked at GHC code so I only guess they're doing better. But I did look at papers on GHC implementation, and everything they describe is so much clearer.
That's also symptomatic of another problem: you can't get Scala right without writing as many papers as have been written for GHC. For instance, Scala "supports" GADTs, but for a rather different language than GHC's, so the solution must be novel. Is there a paper about it? Not for everything they do. Does the implementation work? Not always. Is it sound? Not that either. Is it expressive enough? Not even that. See my paper (sorry for the self-plug, but Odersky himself links to it, so I guess I can link to it as well).
Scala is intrinsically a harder problem than Haskell, which doesn't have to deal with subtyping. Comparing scalac to GHC is ridiculous and SPJ would probably be the first to call you out on it (and ignore the many summer interns who hack on it).
Anyways, I'm glad to be out of it. One wonders how Martin calmly deals with all the vitriol; the guy must be coated in steel.
The problem is writing a program, and both Scala and Haskell are potential solutions, so to say that Scala is a harder problem does not make sense. Perhaps you mean Scala is a more complex solution, and therefore comparing GHC and scalac is not a like-for-like comparison? To me this suggests that subtyping might be a bad idea if it makes the compiler hard to make sound. A sound compiler seems the more fundamental requirement.
A language involves design decisions that define its market. Scala and Haskell were designed under very different constraints: Scala wanted to merge FP and OOP (so subtyping must be dealt with); Haskell wanted to be the purest FP possible.
It is telling that many people with Haskell backgrounds are flocking to Scala; it must be doing something right.
Merging OOP and FP is not a requirement. The requirement is to write a program to solve a problem. OOP and FP are techniques that can be used in the solution. The idea that Scala needs to do 'xyz' is false if the problem can be solved in Haskell without it. Objects are not needed to solve the problem. If you cannot combine objects and FP soundly (or it's too complex to implement in an obviously correct way) then it seems like a bad idea.
However, I don't think objects are the problem; it's subtyping, and subtyping is not a necessary feature for an OO language in any case (I can see that Scala wanted to stick close to Java, but that is a design decision, not a requirement). I prefer parametric polymorphism with type classes operating over objects with no subtyping. This solves the dependency-injection problem in a simpler way than the cake pattern: type classes solve the same problems as subtyping, and support dependency injection, in a sound and elegant way.
For example, we can require implementors of the "Ord" type class to also implement "Eq" by dependency. This effectively makes Ord behave as a subtype of Eq: any type implementing Ord can be passed to a function requiring Eq, but without losing its unique type identity (thanks to parametric polymorphism in the function definition).
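A minimal sketch of that idea in Scala, with the dependency expressed as a record field rather than as subtyping (Eq and Ord here are illustrative, not the standard library's):

    trait Eq[A]  { def eqv(x: A, y: A): Boolean }
    trait Ord[A] { def eqInst: Eq[A]; def lt(x: A, y: A): Boolean }

    // a function requiring only Eq...
    def member[A](x: A, xs: List[A])(implicit E: Eq[A]): Boolean =
      xs.exists(E.eqv(_, x))

    // ...is usable from any Ord context by projecting the dependency,
    // while A keeps its concrete type throughout:
    def insertUnique[A](x: A, xs: List[A])(implicit O: Ord[A]): List[A] =
      if (member(x, xs)(O.eqInst)) xs else x :: xs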
What Scala does right is offer interoperability with Java on the JVM. This means it is easy for a Java shop to move to Scala, especially as you can transfer OO skills. My guess would be Haskell people get jobs using Scala, rather than it being a choice thing. Scala itself seems to be moving towards wanting things from Haskell like kind polymorphism.
Many of us would argue that OOP without nominal typing is simply...functional programming. Type classes have classes in the name, for sure, but they augment functional styles and provide nothing that we would recognize to support objects (e.g. encapsulation, is-a relations, ...). People also tend to forget that assignment and nominal subtyping really go hand in hand (both boil down to semi-unification when doing type inference), and that nominal subtyping is really the only way to go for your flexible imperative programming needs.
Take nominal subtyping away from Scala, so no meaningful classes or traits, and what kind of language are you left with? No Java interop surely, but more than that...it would be just another FPL.
What is nominal subtyping necessary for? Keeping full type information where possible is just a better solution, object-oriented or not. Of course you still get the 'is-a' relation: you can ask whether X implements Eq, which is much the same as a virtual base class called Eq.
The only difference comes with heterogeneous collections, where existential types provide the ability to have a list of 'X' where X implements Eq for example.
For assignment you just want a singleton existential container. The only thing you miss is upcasts, which I think are signs of bad design in any case.
You can still have objects for encapsulation, but I would prefer something like modules as this avoids the need for friend functions. Nothing as sophisticated as ML modules, as type classes provide functors etc.
Various people (including ML ones) would argue that subtyping is required in a module system.
The Backpack paper (coauthored by SPJ) explains some strong modularity limitations of Haskell. The solution still involves some form of subtyping and mixins.
I think typeclasses could be theoretically equivalent, but it seems somehow they aren't in practice. I'm fuzzy on this, but I understand that since you can't control what instance you use, you don't use them the same way — Haskell users want pretty strong interfaces from typeclasses — that is, typeclasses should be lawful, while no such requirement arises for modules.
Now, Scala simply gives you first-class ML-style modules by using objects and path-dependent types. As you'd expect, this gives you better expressivity and is harder to get right.
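As a small sketch of what that looks like (names are illustrative), an object with a type member already behaves like an ML module, and dependent method types play the role of functor arguments:

    trait Graph {
      type Node                               // abstract type member, as in an ML signature
      def node(id: Int): Node
      def connected(a: Node, b: Node): Boolean
    }

    // the argument types depend on the particular module value g:
    def sameEdge(g: Graph)(a: g.Node, b: g.Node): Boolean =
      g.connected(a, b)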
Type classes are completely equivalent to interfaces but limited to static polymorphism. They can be combined with existential types to recover dynamic polymorphism, minus upcasting.
You can do the rest, path-dependent types and simple modules, soundly in a straightforward way; it's just subtyping which does not fit. This is because it is really a different mechanism to achieve the same thing. In subtyping you downcast, losing the unique typing of an object, so A becomes B. With type classes you can have dependency chains (like inheritance hierarchies) but at the type level: an object of type A remains type A, even when you treat it as type B.
You don't need to worry about which instance to use if modelling subtyping, because a subclass always has a distinct type.
Modules go beyond objects in that the implementation can depend on the value, whereas in objects the virtual function table is part of the class (type). We can deal with this by creating a singleton type for each module instance (which is like the solution for applicative functors in "F-ing modules" for ML), in which case the existing type-dependency mechanisms already deal with this. Without this unique type per instance, applicative modules are unsound.
Well, Scala makes objects first-class, and gives them type members, but that doesn't really give you ML modules -- it is considerably more restrictive in various ways. I recently summed up a few key differences on StackOverflow.
In particular, modules are fully structurally typed, whereas Scala mostly relies on nominal typing. Likewise Haskell's type classes, btw.
This was helpful (but see comments on StackOverflow), and the discussion in the MixML paper seems very interesting (will have to study it more). I know of next to nothing else on the Scala-ML relation, with few exceptions.
1. The only discussion I knew on the topic was in "A Nominal Theory of Objects with Dependent Types", which is extremely brief and vague but says that Scala has modules, signatures and generative functors (which were encoded as classes back then).
2. Then there's "Scalable Component Abstractions" describing the "cake pattern", which is essentially a scheme to use ML-inspired modularity constructs, but I find it insufficient — many developers run into issues with the cake pattern, which aren't sufficiently discussed in that paper. (Also, IIRC it doesn't explain the relation with ML modules either).
(In case it's unclear: I don't doubt you know both papers).
There are also virtual classes, which were expressible (as a pattern) in Scala at least before 2.8. I think virtual classes are superior to the cake pattern in terms of modularity and understandability, but you need to be an OO lover to try them. It goes like this; start with a base trait:
    trait T0 {
      type S <: SImpl
      def makeS: S  // do not use SImpl as a type, just use S
      trait SImpl { def self: S; ... }

      type R <: S with RImpl
      def makeR: R  // use R instead of RImpl, R is a variant of S
      trait RImpl extends SImpl { def self: R; ... }
    }
T0 is a trait with a virtual class member S. S is represented as a type member and is initially implemented with the SImpl trait. Say you want to "upgrade" S with more code/members in trait T1 (my Scala is rusty, so I'll use override to express refinement):
    trait T1 extends T0 {
      override type S <: SImpl
      trait SImpl extends T0.SImpl { ... }
      override type R <: S with RImpl
      trait RImpl extends T0.RImpl with SImpl { ... }
    }

    trait T2 extends T0 {
      override type S <: SImpl
      trait SImpl extends T0.SImpl { ... }
      override type R <: S with RImpl
      trait RImpl extends T0.RImpl with SImpl { ... }
    }
Anyone who views trait T1 or T2 will see upgraded types of S. Likewise, anyone who views trait T1 or T2 will see an upgraded type of R that is always a subtype of S.
S is finally implemented as a class with all traits extended; composition of SFinal is checked by binding it to type S:
    class C3 extends T1 with T2 {
      override type S = SFinal
      override def makeS = new SFinal
      class SFinal extends T1.SImpl with T2.SImpl {
        override def self = this
      }

      override type R = RFinal
      override def makeR = new RFinal
      class RFinal extends SFinal with T1.RImpl with T2.RImpl {
        override def self = this
      }
    }
Basically, this is a type safe formulation of virtual classes. FP people dislike virtual classes because they solve the expression problem without using functional programming (which is supposed to be an exclusive benefit of functional programming). They probably also just dislike objects :)
But really, Scala is (or was) one of the most powerful OO languages out there; no other popular typed language came close to realizing virtual classes (gBeta isn't that popular, there are also Kim Bruce's languages). There was more than enough in Scala to get the OO community excited about it, however the FP community (some of them being very snobbish) that sprung up around Scala scared most (or all?) of the object people away.
This also shows that there is more to the future of OO programming than what we have become used to in Java. This is why I'm quite depressed about the stagnant state of OO research.
Just in case it's not clear, you should realise from my other posts that I like imperative programming with objects, although I would actually put myself in the datatype-generic programming category, i.e. I program in Java but prefer C++ with the STL.
My objection is not to the OO in Scala but to the type system. In some ways C++'s type system seems preferable even though it is less functional; template function objects work well, and when Concepts are added it's pretty good.
Virtual classes seem a rather complex way to implement dependency-injection. All this stuff is much easier with C++ templates, and can be made type safe with Concepts. Concepts of course map directly onto type-classes. So once again type classes provide a simpler and more elegant way of achieving the same thing.
C++ templates give you the ability to express some kind of mixins; I'm not sure if concepts make those type safe or not (but I'll take your word for it). Note, however, that the only way to parameterize across different kinds of template instances is through...templates. Whereas with virtual classes, subsumption still works! You could even leverage imperative assignment to multiple kinds of virtual class families.
Type classes have the same problem: no subsumption, and all you get is parametric polymorphism. As stated before, you can get some mileage out of assignment using existentials, but subsumption is a bit more flexible, I think.
The use cases can overlap, but not completely. Also, in Haskell, programming with type classes is quite different, you aren't really using object thinking at that point, but something else.
With concepts you get subsumption. If the concept of a bidirectional iterator requires a forward iterator, then any class implementing a bidirectional iterator will be accepted where a forward iterator is required. Or were you referring to some other notion of subsumption?
Type classes provide this as well, as you would expect being equivalent to concepts. The subsumption happens because of the dependency relations between type classes.
I'm referring to the formal definition of subsumption (aka polymorphic subtyping):
if σ is a subtype of τ, then a value of type σ may be provided whenever a value of type τ is required.
The closest you get to this in Haskell is...O'Haskell.
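Concretely, a small Scala sketch of what subsumption buys you directly (illustrative types): a mutable cell or a collection typed at the supertype absorbs subtype values with no wrapper:

    trait Shape { def area: Double }
    class Circle(r: Double) extends Shape { def area = math.Pi * r * r }
    class Square(s: Double) extends Shape { def area = s * s }

    var cell: Shape = new Circle(1.0)   // a mutable cell at the supertype
    cell = new Square(2.0)              // subsumption: Square <: Shape

    // heterogeneous collection, no existential encoding required:
    val shapes: List[Shape] = List(new Circle(1.0), new Square(2.0))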
That's a tautology: of course subtyping gives you a property defined in terms of subtypes that parametric polymorphism and type classes do not. But you have not said anything useful.
Type classes do not need subsumption to achieve the same level of expressiveness, with greater elegance.
Type classes can express anything subtyping can (with static polymorphism). Existential types allow dynamic polymorphism, the only thing you cannot express is an upcast (which I think is a good thing).
Perhaps you have an example in mind that you could share, as I am interested to put this to the test?
Subsumption has different capabilities. Really, don't take my word for it; just read about the differences from the source:
http://www.haskell.org/haskellwiki/OOP_vs_type_classes
It boils down to programming being different with type classes; the solutions are completely different. Ya, at the end of the day we have Turing completeness, but saying solutions A and B are similar, just that B is "more elegant", is quite outlandish. If you want to make such claims, at least justify them in some way. Otherwise, this is just a standard "OOP sucks, go functions" ideology thread :)
Also, I think type classes are a great way of typing non-object values (like points and vectors), but not what I would call objects (with non-structural aliasable identity, encapsulated mutable state).
Do you have a short example that makes your point in code (the shorter the more strongly it makes your point)?
I think the OOHaskell paper I contributed to is relevant here and goes into more depth than that wiki page, but I think you are confusing Haskell's pure immutable stuff with type classes. C++ concepts are the same thing in a mutable, impure language.
I think your categorisation of this as OOP vs FP is simplistic. My problem is with subtyping specifically, and I am suggesting that OOP is possible and still will look like OOP with type-classes and parametric-polymorphism instead. It may be possible to keep the syntax and semantics the same and just change the type system.
Do you have a link to the O'Haskell paper? I don't see it in the thread.
The template solution to these problems (at least those addressed by the virtual class pattern) is quite similar to what we did in Jiazzi. I wonder if concepts can correctly handle cyclic linking or the proper checking of extension across template boundaries. What Scala adds though, is that the compositional "modules" are just traits that are at the complete mercy of Scala's type system (compositions are just values treated in the same way). I'm guessing your solution using C++ concepts/templates doesn't involve classes at all, however.
As for examples, you could just check out the ones in the Jiazzi paper.
Here is a link to the OOHaskell paper: http://arxiv.org/pdf/cs/0509027v1
Here is a link to an interesting paper that references it: http://research.microsoft.com/en-us/um/people/simonpj/papers/oo-haskell/overloading.pdf
The second looks at type inference in Haskell's type system specifically; there should be other ways of achieving this, maybe overloading on type-class dependencies will work.
The method I am suggesting uses type classes as type classes; that is, functionality depends on the type of the object. You would combine this with modules for data hiding (separating the concerns of data hiding and overload resolution). Inheritance (subtyping) would be modelled by creating a new instance for the subtype; this could be generated automatically from a subtyping-like syntax.
Thanks, I'll take a look! I see OOHaskell is different from O'Haskell :)
Here's a quickly put together example. You have to imagine an OO language that is converted to 'Haskell' as the intermediate language in the compiler. Here is an encoding of subclassing with inheritance using type-classes:
    -- one class per method
    class Drawable t where
      draw :: t -> IO ()

    class Movable t where
      move :: t -> IO ()

    -- BaseWindow class
    class (Drawable x, Movable x) => BaseWindow' x

    data BaseWindow = BaseWindow
    instance BaseWindow' BaseWindow
    instance Drawable BaseWindow where
      draw _ = putStrLn "draw <base window>"
    instance Movable BaseWindow where
      move _ = putStrLn "move <base window>"

    -- FancyWindow class
    class FancyWindow' x -- no new methods

    data FancyWindow = FancyWindow
    instance BaseWindow' FancyWindow
    instance FancyWindow' FancyWindow
    -- inherit definition of draw
    instance Drawable FancyWindow where
      draw _ = draw BaseWindow
    -- override definition of move
    instance Movable FancyWindow where
      move _ = putStrLn "move <fancy window>"

    test :: BaseWindow' x => x -> IO ()
    test x = do
      draw x
      move x

    main :: IO ()
    main = do
      test BaseWindow
      test FancyWindow
output:
    *Main> main
    draw <base window>
    move <base window>
    draw <base window>
    move <fancy window>
Edit: The only reason this has one class per method is that I was originally trying to use "newtype" and "deriving" to inherit the definition of "draw" but it seems simpler just to explicitly forward the definition of draw to the 'superclass'.
Great, now call draw/move on a mixed list of windows :)
I think we disagree on what OO programming is. Here is my C# code for the same example:
    class BaseWindow {
        public void draw() { Console.WriteLine("draw <base window>"); }
        public void move() { Console.WriteLine("move <base window>"); }
    }

    class FancyWindow : BaseWindow {
        public new void move() { Console.WriteLine("move <fancy window>"); }
    }

    public static void Main() {
        var baseWindow = new BaseWindow();
        var fancyWindow = new FancyWindow();
        baseWindow.draw();
        baseWindow.move();
        fancyWindow.draw();
        fancyWindow.move();
    }
Output: the same as yours.
Right, we know polymorphism in Haskell is overloading (and dictionary magic carries over to generic parameter bindings). But what happens when you have a collection, or a mutable cell? Polymorphic subtyping can be useful.
I'm not saying that these problems cannot be solved, but the solutions are fundamentally different in design from what I would call OOP, or at least what the solution would look like in Scala.
You could look at the HList paper which I co-authored to see many solutions to the problem of a list of things implementing X: https://www.researchgate.net/profile/Keean_Schupke/publications
It's not really a problem for the type system. Let's just use existential types:
    {-# LANGUAGE ExistentialQuantification, FlexibleContexts, UndecidableInstances #-}

    -- BaseWindow class
    class BaseWindowClass t where
      draw :: t -> IO ()
      move :: t -> IO ()

    data BaseWindow = BaseWindow Int
    instance BaseWindowClass BaseWindow where
      draw _ = putStrLn "draw[base window]"
      move (BaseWindow x) = putStrLn $ "move[base window] " ++ show x

    -- FancyWindow class
    data FancyWindow = FancyWindow Float
    instance BaseWindowClass FancyWindow where
      draw _ = draw (undefined :: BaseWindow)
      move (FancyWindow x) = putStrLn $ "move[fancy window] " ++ show x

    data IsABaseWindow = forall x . BaseWindowClass x => IsABaseWindow x

    test [] = return ()
    test (IsABaseWindow x : xs) = do
      draw x
      move x
      test xs

    main = test [ IsABaseWindow (BaseWindow 1)
                , IsABaseWindow (FancyWindow 2.0)
                , IsABaseWindow (BaseWindow 3)
                , IsABaseWindow (FancyWindow 4.0) ]
Note the syntax is not ideal; it's the type system that I am thinking about. One could trivially add a layer of syntactic sugar to make this look like normal OOP. It would be straightforward to produce a simple parser that reads the restricted C# syntax you have above and outputs the required Haskell. Note the compiler could still compile the imperative code directly as it always has done (no need to actually translate into Haskell); you would just be using the type system to type-check the AST before accepting it for compilation. Maybe you are confusing the language syntax with the type system?
Ah, I see. So does IsABaseWindow basically reify the type at run-time?
No, reification would be where the type depends on the runtime value, for that you need dependent types, which is probably a better solution but not implementable in Haskell. This is probably the direction I would go in to actually do this, but I think you would need open-datatypes too (or maybe type-families would work).
What it actually does is hide the type, so that it is non-recoverable at runtime. You can see this as a virtual function table, where each type in the list guarantees it provides all the required methods (draw and move). Once the "downcast" to IsABaseWindow has happened there is no upcast, and no operations are permitted that would let you discover the 'hidden' type inside IsABaseWindow. So it behaves like virtual methods, but with no variance (co- or contra-).
Don't you mean upcast to IsABaseWindow (making the type more general)? It seems you don't allow downcasting, not upcasting.
You are right, I got it upside down. For some reason I was thinking of trees with roots in the ground, not the sky :-)
That is common in the FP world for some reason :) I wonder why our trees grow down while yours grow up; it is a weird cultural anomaly.
Binary methods are the big deal. See The Power of Interoperability: Why Objects Are Inevitable for a good argument. BTW, IMHO it does not really prove that objects are actually required — you can encode object-like binary methods with typeclasses:
    class Num a where
      add :: Num b => a -> b -> a
However, my Haskell-expert colleagues and I seldom see such use of typeclasses, while that's what you get with OO (Java syntax):
    class Num {
      Num add(Num b);
    }
This paper dismisses record encodings a bit too quickly. The paper says it "leads to awkwardness and verbosity", but the example code it gives is significantly less verbose than the corresponding code in a traditional OO language:
    let makeSingletonSet x = IntSet {
      contains   = fn y => x = y,
      isSubsetOf = fn IntSet s => s#contains x
    }
vs
    class SingletonSet implements IntSet {
        private int x;
        public SingletonSet(int x2) { x = x2; }
        public boolean contains(int y) { return x == y; }
        public boolean isSubsetOf(IntSet s) { return s.contains(x); }
    }
Objects are a particular mode of use of records. Perhaps you want special syntactic support for that mode of use, but I'm not at all convinced that that mode of use is common enough, and the advantages of special syntax great enough to outweigh the costs. A tiny amount of sugar for functions in records is enough to get a syntax that is vastly superior to any mainstream OO language:
    let makeSingletonSet x = IntSet {
      contains y = (x = y),
      isSubsetOf s = s.contains x
    }
Other than verbosity, the paper claims that this is awkward. That is an inherently subjective judgement. I find that this record encoding actually greatly clarifies OO semantics. Objects are just records of functions. A constructor is just a function that returns an object. An interface is just a record type. A private member is just a lexically bound variable that the functions close over.
Similarly, ADTs are another mode of use of (dependent) records, for which you may also not need special support. And type classes are just records + implicit arguments. Do more with less.
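For instance, a minimal Scala sketch of "type classes are just records + implicit arguments" (Show here is illustrative, not a library class):

    // a type class as a plain record of functions...
    case class Show[A](show: A => String)

    // ...whose instances are supplied by implicit resolution:
    implicit val showInt: Show[Int] = Show(i => "Int: " + i)

    def display[A](x: A)(implicit s: Show[A]): String = s.show(x)

    println(display(42))   // prints "Int: 42"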
I think if you write that program in Scala using OO, it would be just as short. It would look something like this (warning: I'm not a Scala programmer).
    class SingletonSet(x: Int) extends IntSet {
      def contains(y: Int) = y == x
      def isSubsetOf(s: IntSet) = s.contains(x)
    }
You're right, I was thinking of Java, C#, C++ as mainstream OO languages.
Since with product types you have a bunch of projection maps out of the same object, the benefit of implicit "this" or similar syntax would seem to be undeniable.
The idea that Scala needs to do 'xyz' is false if the problem can be solved in Haskell without it.
By the same token, the idea that Haskell needs to do 'xyz' is false, if the problem can be solved in assembly language without it. And by definition, all problems that a computer can solve at all can be solved by writing assembly-language programs.
The question is, at what cost? Scala is a tool: you pay a cost to use it, and a cost not to use it. If the latter exceeds the former, use it. The same with Haskell.
This misses the point. The argument was that if there exists an obviously sound way to do it (looking at Haskell), then to say Scala needs to do it in an unsound way (or one so complex that it is hard to ensure soundness) is clearly false. Scala doesn't need to do it that way; it chose the complex (unsound?) way.
I think the title refers to Dijkstra's "gotos considered harmful"? To respond with a quote from Stepanov, "Dijkstra was wrong" :-)
Given its design goals, Scala had to choose between tractability and soundness. I think the right choice was made.
I agree that the problem is harder, so Scalac will never be as simple; but one could argue that, as a consequence, you need to prioritize correctness even more.
Take GADTs again as an example: they're harder in Scala, yes, but that doesn't mean you have to hack together support without a foundation. Martin's plan now is to drop support for covariant GADTs. And it's not as if the two compilers implement GADTs the same way and it just happens not to work in Scalac: GHC's core language reifies equality proofs, so it can detect casts, while Scalac does nothing of the sort.
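To make the GADT point concrete, here is a minimal sketch of the kind of covariant GADT at issue (my example; how scalac's pattern-matching refinement treats it has varied across versions, which is exactly the problem):

    sealed trait Expr[+A]
    case class IntLit(i: Int)      extends Expr[Int]
    case class BoolLit(b: Boolean) extends Expr[Boolean]

    def eval[A](e: Expr[A]): A = e match {
      case IntLit(i)  => i   // requires refining A to Int under a covariant parameter
      case BoolLit(b) => b   // likewise A to Boolean
    }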
It might well be that a robust Scala will take a decade to be designed.
ignore the many summer interns who hack on it
Are you implying that GHC's code quality isn't as good as I imagine, or that it has more manpower?
GHC's core language reifies equality proofs, so it can detect casts, while Scalac does nothing of the sort.
Haskell has that whole equality thing working for them, while Scala has to deal with a lot of inequality (via subtyping). Very different problems.
No, quite the opposite: I'm implying that your slam about PhD-student code quality in scalac is unwarranted, since GHC is in relatively the same position (pre-Typesafe, anyway). We could maybe argue the coding or management abilities of SPJ vs. Odersky, but we are still comparing apples and oranges since the problems and goals are quite different.
Good point, so one should have inequality witness if possible or figure out a different solution and write a paper about it.
I don't mean that PhD students don't write good code in general, but that some don't — I'm thinking of the infamous pattern matcher, which was bad enough to be rewritten from scratch. I hear the new one's better, and still see enough bugs with it.
The guy who wrote that is quite smart, and the problems he was solving were novel; don't confuse "this code is crap" with "this programmer is bad." Sometimes rewrites are quite necessary, especially if understanding has increased substantially since the last hacky code was written.
It is not like Haskell, where you work everything out perfectly on paper (or with a theorem prover) first before you start programming (which is a myth anyways, GHC goes through plenty of changes). Sometimes, a journey is involved; that journey can be quite long and involve multiple hand offs. Many people don't get that and throw stones at anything they don't understand, or isn't well understood, or just like to show how far they can piss. A lot of pissers around Scala make it an unhappy place. Criticism should be like "X could be done this way to improve it" rather than "you suck."
I spent a couple of years doing V2 of the Scala IDE by myself, V1 was done by some undergrads and was crap. Someone else did V3, because they said my V2 was crap. I still hear there are plenty of people who aren't happy with the Scala IDE, but it is better than before. I've also progressed on IDE development given what I started at EPFL; that experience was useful.
I'm sorry if my tone has been aggressive. I've seen what you are describing, and I also dislike the atmosphere and the tone of some people. I appreciate the tradeoff and (part of) the difficulties facing Scala, I don't have the expertise to appreciate all of them.
But I don't think dismissing the problems is accurate either. In fairness, you didn't, but I disagreed with Lex Spoon's post.
But even without any polemic, Scala is at once powerful and frustrating. Learning Haskell is even more frustrating, until you grok it and you know what GHC will do. It's harder to get that certainty from advanced Scala. (One might speculate on other factors).
That's one reason some people stick around and try to deal with this frustration. (I'm not justifying, just analyzing). In fact, I believe the Typelevel fork is a constructive reaction to that frustration.
In case it needs saying, my drunken predictions about what people will do in the future should not be taken too seriously. I don't know what plans people have in mind, or what the future will hold.
I stand by my assertion, though, that my dad is WAY more functional than your dad.
Yup, if you want to ship something today with Scala you'd be OK going with Typesafe; there's no language/system without warts. Long term, if you want something that is more technically clean end-to-end, hope that ten years sees Policy doing well. There are other good people who want to Do Things Right who don't see eye-to-eye with Typesafe, cf. Lift vs. Play. Good thing we have so many choices; first world problems, I guess.