andrew cooke wrote:
[...]
# First, you'd expect C to win out because it's likely as close to assembly as you can get and all the problems are within the realm of an assembly program. So how come it doesn't? It turns out that there are no C programs for certain tasks. Apparently, for example, you can't program with linked lists in C(!).
Not as easily as in the other languages, for sure (see the sketch below).
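To make that concrete, here is a minimal sketch (nothing here is taken from the shootout sources; the node layout and function names are made up) of the bookkeeping a singly-linked list needs in C:

    #include <stdio.h>
    #include <stdlib.h>

    /* A node carrying one int; in C you spell out the layout yourself. */
    struct node {
        int value;
        struct node *next;
    };

    /* Prepend a value; the caller owns the list and must free it. */
    static struct node *cons(int value, struct node *tail)
    {
        struct node *n = malloc(sizeof *n);
        if (n == NULL)
            abort();    /* no exceptions: allocation failure handled by hand */
        n->value = value;
        n->next = tail;
        return n;
    }

    static void free_list(struct node *head)
    {
        while (head != NULL) {
            struct node *next = head->next;
            free(head);
            head = next;
        }
    }

    int main(void)
    {
        struct node *list = NULL;
        for (int i = 0; i < 5; i++)
            list = cons(i, list);
        for (struct node *p = list; p != NULL; p = p->next)
            printf("%d ", p->value);    /* prints: 4 3 2 1 0 */
        printf("\n");
        free_list(list);
        return 0;
    }

In a language with built-in lists and a GC, all of the above is a single expression; none of the allocation or freeing code exists.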
# Nor was setjmp compared with exceptions. So on a bunch of arbitrary tests C scores zero.
As for me, I find it normal not to call setjmp ``exceptions''. The problem with
the C score is that it doesn't tell you why it is so low.
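For comparison, this is roughly how ``exceptions'' look when faked with setjmp/longjmp; a sketch only, with invented names and error code:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf on_error;

    static void might_fail(int fail)
    {
        if (fail)
            longjmp(on_error, 1);   /* the "throw": unwinds back to setjmp */
        printf("no failure\n");
    }

    int main(void)
    {
        if (setjmp(on_error) == 0) {   /* the "try": setjmp returns 0 first */
            might_fail(1);
        } else {                       /* the "catch": reached after longjmp */
            printf("recovered from failure\n");
        }
        return 0;
    }

There is no unwinding of destructors or finally blocks here, which is exactly why calling this mechanism ``exceptions'' is a stretch.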
# Second, you'd expect C++ to be identical to C - if you are fine-tuning C++ programs for speed you can get down to a similar "assembly level". Instead, C++ programs are slower because they use untuned code with templates (a buffer in the C code for one program started large and doubled on exhaustion - behaviour tuned to tests like these - while the C++ code used a Vector that probably started smaller and increased less greedily, for example).
C++ programs *are* slower unless you write them in C. When writing in C you know
a lot about the behaviour of the program you're writing. A library can't make
such assumptions and can only hope to have good general behaviour. IMO it's
funny to think you won't lose anything using the STL vs a hand-adapted
algorithm. If you used a C generic library, it would also be slower (or even
worse, like glib). See the buffer sketch below.
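Here is a sketch of the kind of hand-tuned buffer the quoted C program reportedly used; the initial size and growth factor below are assumptions tuned to one benchmark, which is precisely what a general-purpose library cannot bake in:

    #include <stdlib.h>
    #include <string.h>

    struct buffer {
        char  *data;
        size_t len;
        size_t cap;
    };

    static void buffer_init(struct buffer *b)
    {
        b->cap = 65536;               /* start large: a guess tuned to the test */
        b->len = 0;
        b->data = malloc(b->cap);
        if (b->data == NULL)
            abort();
    }

    static void buffer_append(struct buffer *b, const char *s, size_t n)
    {
        while (b->len + n > b->cap) { /* double on exhaustion */
            b->cap *= 2;
            b->data = realloc(b->data, b->cap);
            if (b->data == NULL)
                abort();
        }
        memcpy(b->data + b->len, s, n);
        b->len += n;
    }

A std::vector has to pick a starting size and growth policy that are merely reasonable for everyone, so on any single benchmark the hand-tuned version can win.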
# These two points indicate that the scoring reflects some kind of trade-off between high-level code and speed. So why accept tuned code supplied by gurus for some languages? (All the cmucl code for example had type and compiler declarations added that should produce code significantly faster than what I would call "normal" Lisp).
For sure there should be a line drawn on which versions to accept or refuse
(otherwise you'd have to accept natively written parts, inline assembly...)
# Third, how can you draw useful conclusions from the memory usage figures? For most languages you need to know whether the GC has been working or not. I
Since the memory consumption is stable across different N values for many of
the examples, that means either that no GC was needed or that the GC did its
job, no?
[...]
# On the other hand, there were some interesting comments on different tests, and I would appreciate understanding why OCaml did so well on the string
Well, it has a compiler with a native-code backend; that helps...
# concatenation - if it is building a list of pointers to the string then amortized costs (including, say, printing the string) might be higher than with other languages. My final comment ignores all the above :-) Given what I've read in the OCaml/Caml faqs, I don't understand why that language is apparently so quick - IIRC, the compiler doesn't have many optimisations (there's one for handling argument tuples effectively and another to avoid boxing floats in arrays, but I have the impression that's about it) while Cmucl (CMUCL?) is reputed to be a very complex beast that can produce very fast code. Is the difference (OCaml "beating" Cmucl) due to the nature of ML? Some pay-off from compile-time type knowledge?
I see only two ways to match type inference (rough sketch below):
- JIT: i.e. doing at runtime what ML does at compile time
- type declarations/annotations: the cmucl code seems to include some, e.g.
  (declare (fixnum n))
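As a rough illustration, in C, of what that compile-time type knowledge buys (the tags and layout below are invented for the example, not how cmucl or OCaml actually represent values):

    /* With the type known at compile time, addition is one machine add: */
    static int add_fixnum(int a, int b)
    {
        return a + b;
    }

    /* Without it, a dynamically typed runtime must dispatch on boxed
       values at runtime, roughly like this: */
    struct boxed { int tag; union { int i; double d; } u; };

    static struct boxed add_boxed(struct boxed a, struct boxed b)
    {
        struct boxed r;
        if (a.tag == 0 && b.tag == 0) {        /* both fixnums */
            r.tag = 0;
            r.u.i = a.u.i + b.u.i;
        } else {                               /* fall back to doubles */
            double x = (a.tag == 0) ? a.u.i : a.u.d;
            double y = (b.tag == 0) ? b.u.i : b.u.d;
            r.tag = 1;
            r.u.d = x + y;
        }
        return r;
    }

Type inference gets you the first version everywhere without writing a single annotation; declarations get you there only where you ask for them.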
# People on c.l.l deny such an advantage exists...
With no type annotations? Wow. I've searched c.l.l a little on Deja but haven't
found anything :-/
Ehud Lamm wrote:
# it comes to performance, in-depth studies of language features (and how they relate to implementation) are much more useful than this type of benchmark.
An in-depth study doesn't show overall cross-language behaviour, and it is much
harder to draw a summary from one. Another problem, for languages with many
implementations, is knowing exactly which optimizations are implemented (e.g.
before 2.96, gcc didn't optimize tail calls; see the sketch below).
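A small example of the tail-call point; with optimization enabled, a gcc that performs the transformation compiles this into a loop running in constant stack space, while an older gcc grows the stack on every call:

    /* sum 1..n with an accumulator; the recursive call is in tail position */
    static unsigned long sum_to(unsigned long n, unsigned long acc)
    {
        if (n == 0)
            return acc;
        return sum_to(n - 1, acc + n);   /* nothing left to do after the call */
    }

So the very same source can behave like an O(1)-space loop or an O(n)-space recursion depending on the compiler version, which is why you need to know what the implementation does before reading the numbers.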