Twitter and Scala

Thought this might be of interest to the Scala (and Twitter and Ruby on Rails) folks here. The Twitter dev team has switched to Scala because RoR couldn't handle the server-side traffic. To me this is really interesting, since most shops are usually hesitant to switch to "academic" languages, but here we are. And I'm pretty sure it's not an April Fools' prank either!

How this happened

An implementation of (the back end of) Twitter in Scala appeared a couple of years ago, capable of handling about 1e6 users on a couple of Core Duo machines. When the real Twitter started having serious load issues, they jumped on it. More links: at scala-lang.org, at the Register, the code.

This is extremely cool!

This is extremely cool!

Not quite the story

I'm a System Engineer at Twitter and also a long-time LtU reader.

To clarify, we haven't replaced RoR with Scala; we've augmented our Rails system with Scala where we think it fits better.

If you're thinking about building a production system in Scala, I heartily recommend it.

Welcome!

I was aware that Scala and RoR were being used together at Twitter before this was posted, but this forum topic is very worthy of LtU. I definitely agree with Luke. Thanks to snedunuri for posting it, Tom for interesting background references, and you for your clarifications.

I look forward to reading more comments from you, as you are able. :-)

Yep. Scala has been on our

Yep. Scala has been on our language-to-watch list for quite a while now (it even has its own LtU department). Sure is nice to see it gain traction.

Scala cons?

As a programmer with a C/C++ and Java background, I find Scala's syntax much more comfortable than Ruby's. What did you guys at Twitter find weird? Ruby is a much bigger leap, in my opinion.

As for the language features, I agree. Scala has a lot going on, so much that the six months of Haskell I picked up in college barely help me keep my head above water. Overall it's a good thing; it pushes me into learning new paradigms. Not a deficiency.

Proebsting's Law

I think this story really helps put Proebsting's Law in a proper context. While I don't entirely disagree with his observation, I don't like the overall argument, the sentiment, or the methodology used to support it.

As others have pointed out, adding optimizations and informing programmers about how to make good use of them is very helpful. This improves programmer productivity by allowing them to work at a higher level of abstraction with little or no penalty. Implementation details cannot be neglected.

To illustrate, the Glasgow Haskell Compiler has improved remarkably over the last 10 years, in every respect. I'm particularly amazed at how much faster the compiler itself is these days, especially starting with 6.8. This makes me very happy.

I have Haskell programs that I wrote and use on a semi-regular basis that simply must be compiled with optimization. So I'm not sure his observation is overly applicable to Haskell. Haskell implementations in general have come a *long* way since the days of Gofer, Hugs, and Haskell 1.3.

"Proper context" == refuted?

This improves programmer productivity by allowing them to work at a higher level of abstraction with little or no penalty. Implementation details cannot be neglected.

I agree - Proebsting's Law applies mainly to languages with a low abstraction ceiling, which don't leave much room for performance gains from optimization.

Vague analogy: mechanical optimizations aren't much use because they haven't really made bicycles go that much faster over the past few decades. So we shouldn't bother performing mechanical optimizations on cars either - the Model T should be enough for anyone.

Alternatively, there's a

Alternatively, there's a range you'll get out of a particular language. To get more, you'll need to switch languages.

This is really more an argument against piling compiler optimization on compiler optimization, and for going after big ideas in your optimizations instead: irrespective of whether you need a new language, to make a big impact you'll need a new runtime.

Mostly agreed

This is basically what I was trying to get at when I said "proper context", more elegantly stated. However, the implementation issues aren't necessarily runtime issues.

A very interesting problem is how one can provide a given set of domain-specific optimizations as a "library", a question that certainly has not been adequately settled yet. GHC has its rewrite-rule system, though it's fairly arcane and sometimes limited. Macros enable some of this too; however, they aren't always a good choice when it comes to implementing these kinds of optimizations.

Basically, once your core language is adequately optimized, can you go beyond that for some specialized domain, without "polluting" the rest of the implementation?

Edit: Also, a "library" approach to optimizations does have a significant psychological and sociological benefit... it helps programmers become more aware of what's going on under the hood, and the organization helps in finding useful optimizations that apply to a given problem domain.

Modular Compiler Optimisations

Equality Saturation tries to create a very modular framework for compiler optimisations. I'm not sure how well it can scale, but it's certainly a very elegant approach and even avoids the unforeseen interaction problem that may occur with rewrite rules.

There are nice papers (with

There are nice papers (with examples!) every year about composable compilers, yet there still seems to be a huge disconnect between research in this area and what we do. I think the publishing cycle has significantly decreased the likelihood of any of these approaches being convincingly tested. Chris Lattner devoted many years to LLVM; that's shockingly rare for this type of systems+PL research.

JS

Look at JavaScript over the last year. It's another example.

Indeed

That came to mind as well. Thanks for explicitly bringing it up. :-)

benchmarking gcc?

I'd like to see benchmarks of gcc -O3 for older gcc versions plotted over time. That would be one way to experimentally support (or refute) Proebsting's Law.

Yep

This methodology would be in the right ballpark. :-)

A caveat, however, is that newer versions of a compiler often introduce support for newer processors... either by being somewhat aware of their implementation details or by using entirely new hardware instructions. So even this has its pitfalls.

Interview "Twitter on Scala"

An interview with three Twitter engineers, conducted by one of the authors of the book "Programming in Scala", in which they talk about the use of Scala in production at Twitter.

It's about the runtime!

It's all about the runtime!
Most language implementations translate programs into something lower-level: machine code (with the OS as the runtime), VM bytecode, or just an intermediate form (as in Python).

In theory, the language from which the translation was done does not matter - the runtime is the thing that executes the instructions. Thus, switching languages for performance is inaccurate - you switch runtimes for performance. It could be that for a given runtime only one language/translator (compiler) pair exists, but that is not always the case: for example, a number of compilers exist for x86, and for the Python language I know of at least three runtimes - CPython (the default, built-in), IronPython (.NET), and Jython (JVM).

My not-so-informed opinion is that Ruby's runtime is known for being slow, with plenty of quirks, as the Metasploit project has experienced. For example, switching the kind of string concatenation used decreased start-up time by several orders of magnitude.
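
To make the flavour of that kind of fix concrete, here is a minimal Scala sketch (illustrative only; ConcatDemo and its numbers are made up, and this is neither the Metasploit nor the Twitter code) of the same general principle: repeated concatenation of immutable strings copies the growing prefix on every step, whereas an in-place builder appends in roughly linear time.

    // Illustrative sketch: immutable-string concatenation vs. a mutable builder.
    object ConcatDemo {
      // Each + allocates a new String and copies the whole prefix: ~O(n^2) overall.
      def slowBuild(parts: Seq[String]): String =
        parts.foldLeft("")(_ + _)

      // StringBuilder appends in place: amortized O(n) overall.
      def fastBuild(parts: Seq[String]): String = {
        val sb = new StringBuilder
        parts.foreach(sb ++= _)
        sb.toString
      }

      def main(args: Array[String]): Unit = {
        val parts = Seq.fill(50000)("x")
        val t0 = System.nanoTime(); slowBuild(parts)
        val t1 = System.nanoTime(); fastBuild(parts)
        val t2 = System.nanoTime()
        println(f"slow: ${(t1 - t0) / 1e6}%.1f ms, fast: ${(t2 - t1) / 1e6}%.1f ms")
      }
    }

The Ruby-specific details differ, but the shape of the problem - a deceptively cheap-looking operation hiding quadratic work - is the same.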

Why not write a RoR translator for LLVM and see what happens?

Why not write a RoR translator for LLVM?

Have a look at MacRuby: they are doing exactly that (that, plus real Cocoa and Objective-C integration instead of just bindings).
http://www.macruby.org/blog/2009/03/28/experimental-branch.html

PL doesn't matter for performance?

Perhaps the target machine language does matter for relative performance, but I think it a bit of a stretch to say that programming language semantics has no impact.

Semantics matters!

Very much so... just a quick example, as Ruby is _inexpertly_ commented upon in my post below. Before R6RS, the completely unrestricted use of call/cc in Scheme caused quite a bit of pain for implementors and ruled out a number of appealing optimizations without a great deal more effort.

Basically, back in 1989 Alan Bawden realized there was an unintended consequence of call/cc and the way that letrec was defined in R3RS, allowing one to implement mutable references. This is detailed in his famous Usenet post LETREC + CALL/CC = SET! even in a limited setting.

Even though few programmers, if any, use call/cc in such a devious way in real code, it does prevent a number of blind optimizations to letrec and call/cc, some of which can be re-introduced with more sophisticated analyses.

With R6RS, there was a big push to restrict call/cc and patch up this oversight, allowing for simpler, better implementations. This is also part of R5RS-ERR, which gives R5RS implementations the option of benefiting from these changes as well.

Clearly, PL matters, but compilers matter more!

Clearly, PL matters for performance, but only as much as the interpreter/compiler can optimize it, so I'd say the compiler matters!

E.g., there have been tests of C/C++ vs. Java, and in a number of cases Java won the race. The reason: Java was more high-level, and the compiler could reason more about the intent and optimize for it.

PL, architecture > compiler

PL and computer architecture matter far more than the compiler with respect to performance. A compiler can only optimize to the extent that it can make certain assumptions, and language design is what creates the strong assumptions that lead to efficient code. In your example, a very high-level language where everything is either explicit or unspecified could potentially be heavily optimized, while a low-level language means that programmers are exposed to more details and the compiler can assume less. This is all PL design, not compilers.

Architecture also matters, as we've found out with Intel's EPIC failure: compilers can only do so much statically, and having dynamic optimizations on chip is still as important (or more important) for performance.

This is not to say that compiler optimizations don't matter, but they actually matter less, not more, than they did say 10 years ago.

Speeding up Ruby

I know a guy who's working on speeding up Ruby and RoR, and I listened to him talk informally for about an hour or more about all his profiling observations and efforts.

Honestly, I didn't follow most of what he said very well; it all sounded very plausible from what I do know of general implementation principles, but I know little about Ruby and have never had much incentive to learn it. (Though RoR is very cool!) I would have followed his ideas much better had he been talking about Python... then again, that's a different problem. ;-)
I think he was targeting LLVM, but I honestly don't recall.

Most of what I took away boiled down to: 1. speeding up RoR is not an easy problem, 2. many language features and commonly used idioms interfere with a quality implementation, 3. aggressive partial evaluation was what he was focusing on at the moment, though it was often rough going, due to certain language features, and 4. all in all, he wasn't terribly impressed with the decision making behind Ruby.

Edit: I remember him mentioning, as one of the difficulties he was up against at the time, the use of "undefined" methods: a call would traverse the entire inheritance hierarchy, fail, and then invoke a handler that would create that method on the fly.

<code>method_missing</code>

I remember him mentioning, as one of the difficulties he was up against at the time, the use of "undefined" methods: a call would traverse the entire inheritance hierarchy, fail, and then invoke a handler that would create that method on the fly.

This is the method_missing hook -- if a method lookup fails, then the object's method_missing method is called. Normally it just raises a NoMethodError, but you can override it to make a class that seems to magically always have a method for whatever you want to do. I believe this technique is used extensively in Rails.
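
For readers coming from the Scala side, a rough analogue (purely illustrative; MagicProxy and findByName are invented names, not anything from Twitter's or Rails' code) is Scala's Dynamic marker trait, which routes calls to otherwise-undefined methods through applyDynamic. The important difference is that Scala rewrites these calls at compile time, so the dispatch cost and the optimization obstacles are far more predictable than with Ruby's run-time lookup:

    import scala.language.dynamics

    // Calls to methods that don't exist on MagicProxy are rewritten by the
    // compiler into applyDynamic("name")(args...), which can synthesize a
    // response on the fly -- roughly what Rails-style dynamic finders do.
    class MagicProxy extends Dynamic {
      def applyDynamic(name: String)(args: Any*): String =
        s"intercepted call to '$name' with args ${args.mkString(", ")}"
    }

    object MagicDemo extends App {
      val proxy = new MagicProxy
      println(proxy.findByName("alice"))  // no such method is defined anywhere
    }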

Perhaps not entirely related to the language

Obie Fernandez (disclaimer: he is a Rails consultant) talks here about the Twitter-Scala move from the codebase perspective, which suggests two things:

  • the initial Ruby codebase was relying on statically-typed idioms, which may explain why they had issues with Ruby and why Scala is a perfect choice (i.e., to write statically-typed code, use a statically-typed language; see the sketch after this list)
  • rewriting a monolithic and spaghetti-like code base into a more structured one tends to improve things, independently of the language change
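
A minimal sketch of what that first point might mean in practice (hypothetical code, with made-up names like Tweet and Timeline, not Twitter's): in a dynamic codebase, "statically-typed idioms" tend to show up as hand-rolled run-time validation of every field, whereas in Scala most of that intent moves into the types and is checked by the compiler, with only genuine value-level rules left as run-time checks.

    // Hypothetical Scala sketch: the shape of the data is stated once, in the
    // type, and enforced at compile time; only the value-level rule remains a
    // run-time check.
    final case class Tweet(id: Long, author: String, text: String) {
      require(text.length <= 140, "tweet text too long")
    }

    object Timeline {
      // Callers cannot pass a bag of untyped fields here; a malformed Tweet
      // simply does not compile.
      def render(tweets: Seq[Tweet]): String =
        tweets.map(t => s"@${t.author}: ${t.text}").mkString("\n")
    }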

I don't judge either language, but some things aren't caused by the languages, and others may be caused by the way we use them.

(Ed. to fix link)

Your link is invalid. the

Your link is invalid.

the initial Ruby codebase was relying on statically-typed idioms, which may explain why they had issues with Ruby and why Scala is a perfect choice

Perhaps the quote is out of context, but that doesn't make much sense to me. How would making the codebase more dynamic make it faster?

Eh? They're not making it

Eh?

They're not making it more dynamic; they're taking code written in a dynamic language in a static style ('statically-typed idioms') and moving it to a (mostly) statically typed language. The idioms fit the language better, thus the overhead of fitting a square peg in a round hole is less.

That's how I read the quote anyways...

Quote: "the initial ruby

Quote: "the initial Ruby codebase was relying on statically-typed idioms, which may explain why they had issues with Ruby". How does using statically-typed idioms explain why they had issues with Ruby? That further implies that had they not used "static idioms", it would have been faster in Ruby.

I understand that static idioms would fit better with Scala, but it doesn't explain the problems with Ruby.

If the idioms fit worse with

If the idioms fit the language poorly, that is going to have some practical impact. But yes, it does imply that if they'd written things in "the Ruby way", it would have been better.

*shrug*

Lots of speculation.

Reliable, high performance code

In the interview I linked to earlier they explain their problem with Ruby. Look under "Reliable, high performance code". Mainly it comes down to the need for long-running processes, typing, and good threading support.
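
For concreteness, here is a minimal sketch (illustrative only; WorkerDemo, handle, and the workload are made up, and this is not Twitter's code) of what "good threading support" buys you from Scala on the JVM: workers are native OS threads, so CPU-bound tasks in a long-running daemon actually run in parallel rather than time-slicing behind a global interpreter lock or a green-thread scheduler.

    import java.util.concurrent.Executors
    import scala.concurrent.{Await, ExecutionContext, Future}
    import scala.concurrent.duration._

    object WorkerDemo {
      // Four native JVM threads; each can occupy a core.
      implicit val ec: ExecutionContext =
        ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

      // Stand-in for real, CPU-bound request handling.
      def handle(request: Int): Int = (1 to 1000000).foldLeft(request)(_ ^ _)

      def main(args: Array[String]): Unit = {
        val results = Future.traverse((1 to 100).toList)(r => Future(handle(r)))
        println(Await.result(results, 30.seconds).sum)
      }
    }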

backseat driving

the initial Ruby codebase was relying on statically-typed idioms, which may explain why they had issues with Ruby and why Scala is a perfect choice (i.e., to write statically-typed code, use a statically-typed language)

That is not clear at all from the linked document. Alex Payne pointed out that his system grew very large and that the level of validation they required was tantamount to a type system -- this is a not uncommon complaint about dynamic languages.

Given the great Rubyist's lack of visibility into the system under discussion, how is this any more than backseat driving?

Obie's Opinion

Yeah, to be perfectly honest, I read Obie's opinion and was not impressed. I have nothing against Ruby or RoR, and I am _not_ dismissing the language on (lack of) implementation merit, but it's obvious he's rather ideological in his worldview, and cannot accept fair criticism... not to mention he really wasn't making his case with regard to what little insight we have into Twitter's Ruby codebase.

On the other hand, I would totally dismiss GNU Guile. That's an implementation of Scheme that really has no merit relative to (almost any) other Scheme. Guile should be put to rest... an understanding of Guile really offers no insight into what Scheme can be.

Figuring I'd eventually be

Figuring I'd eventually be asked about this @ work, I looked into it a bit more. It's useful to divide it into frontend/backend (expressiveness/scalability), where frontend code makes webpages and the backend deals with requests.

The business problem requiring a change is the scaling of requests; Twitter's needs specialized so, once forked, they could no longer rely on the Ruby/RoR community and apparently couldn't do it themselves. For the scaling issues they hit, there are *runtime* issues with Ruby (e.g., big stacks, green threads, perhaps a GIL), but I wouldn't call them linguistic ones; you can fix these by tweaking the runtime. In this light, Scala itself doesn't matter: it's just a vehicle for providing the JVM. As a critique, this path to scalability is questionable. Wasting performance here has a noticeable hardware and energy cost, so using high-level languages for e.g., high-performance message queues seems like a choice for those with exorbitant funding or access to systems not in production. I'm not a backend guy, but that seems to be the trend in both research and industry.

The issue with duck typing is orthogonal and the traditional (religious) one. Given how much the developers enjoyed RoR, which exemplifies modern dynamic programming, it's surprising. I'm curious as to what the contracts community would have to say about it. Extrapolating, I suspect the gobs of code where they desire the typing *aren't* the hot dispatch loops needing the tweaking; the choice to go with Scala inappropriately conflates the two.

Anyways, linguistically, this doesn't seem interesting. 1) don't use an interpreter for your backend and 2) not using a static type checker means you won't get compile warnings (with the usual implications of forcing a decision in discipline on how to write code). Am I missing something?

Programming Linguistically Interesting

Anyways, linguistically, this doesn't seem interesting. 1) don't use an interpreter for your backend and 2) not using a static type checker means you won't get compile warnings (with the usual implications of forcing a decision in discipline on how to write code). Am I missing something?

Isn't this exactly the kind of thing that is PL linguistically interesting?

Let's start from some well-trodden truths here at LtU:

  • a PL is more than just the form of its code; it includes things like the default deployment model, the standards and contributions of its community, etc.
  • there is no such thing as a perfect PL, nor are statements about PLs such as "PL X is always better than PL Y" sensible.

If you take these truths to heart, then the kind of reasoned trade-off discussion within a specific problem domain such as that found in the interview is exactly the kind of thing that is PL linguistically interesting.

Ruby was designed as an interpreted language. Scala was designed with its type system to allow for useful compiler warnings.
Those are interesting and meaningful points of differentiation between them.

The thing that is not interesting from a strictly PL point of view is the dueling marketing campaigns to decide which PL is the most "exciting and productive for programmers". That question probably can't be resolved through reasoned study.

I suspect that is the real source of your feeling of "What's the fuss?". ;-)

Right, I meant PL as the

Right, I meant PL as the semantic object, and the rest as the greater "programming system" or "programming environment".

E.g., JavaScript was designed to be like Scheme, which, at that point in time, was also interpreted. Adobe JIT'd it and Andreas Gal added tracing; there's no reason we can't do the same with Ruby (e.g., JRuby). It's not that Twitter put Ruby on the backend, but a slow version of it. Perhaps JRuby, *for the hotspots*, would be as fast as Scala? And, if not, I'm sure the JRuby devs would like to know why. It's not about the long-term language design.

In terms of static errors... again, I'm not sure what their error detection/prevention regimen was, so I can't speculate. I can say it looked like an attempt at contracts. Regardless, this is about long-term language design: static analyses of such code have generally been a failure, but load-time analysis does work a lot better and might have avoided their woes.

For both of these cases, it wasn't the language difference but the abuse of the language, whether in choice of implementation or style of code. Perhaps that's the statement -- 1) flexible languages are more prone to abuse, and 2) going out on a limb, it currently takes substantially more effort to implement a good dynamic language.

can you clarify?

perhaps a GIL

Pardon my ignorance, but Googling didn't help me here-- what is a GIL?

it's just a vehicle for providing the JVM. As a critique, this path to scalability is questionable. Wasting performance here has a noticeable hardware and energy cost, so using high-level languages for e.g., high-performance message queues seems like a choice for those with exorbitant funding or access to systems not in production. I'm not a backend guy, but that seems to be the trend in both research and industry.

Can you clarify this? There are some ambiguous references ("this path" -- which path? "high-level languages" -- meaning Java? "that seems to be the trend" -- what seems to be the trend?) and it sounds like you have some interesting points.

GIL

Global Interpreter Lock. I believe the name originally comes from Python.

Right; you hope the global

Right; you hope the global lock occurs when all the other threads are blocked on IO. If not, you're wasting your machine.

The path to scalability being the JVM + JVM libraries, not Java in particular. In general/limited settings it's probably OK, but once you know your workload/requirements and are ready to roll your own backend, the JVM is an odd choice. Shap had an old post here somewhere about systems papers comparing themselves to Apache that comes down to the same thing: if you do less than the general version, of course you'll be faster. The other side is uptime, which Twitter struggled with, which might be more of an argument for Twitter to reexamine what they're building in-house (e.g., can they rely upon Erlang's VM?).