## Are STL algorithms doing damage to the cause?

I like the STL's containers. However, no matter how hard I try, I can't enjoy using the STL's algorithms (by which I don't mean sort, but for_each, transform, etc.). They require excess baggage, forcing me to invent weird structs to act as functors, or to declare redundant variables (with very verbose types) to collect the results. I desire the Haskell version that I see in my head, but the C++ version that pours from my fingers into the buffer looks ugly and rigid.

Disappointed, I end up converting them into explicit loops because those somehow look cleaner. I wondered whether this had ever happened to anybody else, and I found this thread started by Scott Meyers in comp.lang.c++.moderated. It's a gigantic thread, but from some of the arguments I've read (both pro and con), I realized that it is simply not fair to discuss these concepts in a C++ setting, just as it's not fair to discuss static typing in a C++ setting.

So, I wonder if (non-FP) people's experience with the STL is bad PR for FP…


### I don't think it's bad PR

> So, I wonder if (non-FP) people's experience with the STL is bad PR for FP…

Your problem is the same one that everyone trying these methods experiences: C++ doesn't implement closures in a natural way, so you have to write "weird structs" to represent them, and every closure you need typically requires a new struct tailored to that specific case. Like you, I often end up coding old-fashioned loops. But I'm sure that everyone who does this is fully aware that the ugliness is an issue with C++ and not a feature of functional programming itself. In fact, I myself was pushed to learn Haskell after realising that the weird templates I'd been writing all these years could be replaced with something simpler. So I doubt this is bad PR for FP.

### My poor choice of words

By "bad PR" I actually meant that their experience may cause people to throw the baby out with the bathwater, because, unlike you, I am not sure that everybody blames it on C++.

### C++ and STL

The STL algorithms are a good argument in favor of boost::lambda, FC++, or Phoenix.

### desire

"I desire the Haskell version that I see in my head, but the C++ version that pours from my fingers into the buffer looks ugly and rigid."

Well, I guess there's a very simple solution for that, then: use Haskell.

### Catch-22

Well, yeah. But there is a working C++ compiler on my computer already, and it is well maintained because everyone uses it (gcc, so let's not pretend it is perfect; it's just good enough, and it works). There are C++ compilers for more platforms than for just about any other language (C might beat it, but barely), so once again C++ is easier to get working on my obscure system, if only because gcc has been ported to everything.

By contrast, the last time I tried to install Haskell on my system I failed, because few people care about it, and thus there were bugs in the distribution that prevented an install. I could have gotten it working, but I was only looking to play with it. I would have played with it if it had been easy to get working; since it wasn't, I didn't feel a compelling need (despite all the people here who love Haskell), so I didn't.

One more point in favor of C++: the STL is the only library I know of that gives you the big-O complexity of its algorithms. I've worked in other languages where something seemed like the perfect solution until, after implementing it, I discovered there was an O(N^2) (or worse) algorithm involved under the covers.

In short, once again the better loses to the okay.

### Off-topic FYI

In many (most?) GHC library modules, the big-O figures are given; for instance, see Data.Map.

GHC runs on a ton of different systems, so I don't see why you bring up your exceptional situation as an argument.

### I wasn't referring to any specific library

As I said, I was unable to get Haskell to work on my machine (though I freely admit that I gave up early in the attempt).

I guess I should have been clearer: many libraries implement algorithms that are worse than they seem, e.g. "hash tables" that can be O(n) in some cases. Any library can get this right, but many do not.

### Simple solution

```
> ghc -V
The Glorious Glasgow Haskell Compilation System, version 6.4.1
> cat test.hs
main = print (sum [1..1000000])
> ghc -o test test.hs
> ./test
Stack space overflow: current size 8388608 bytes.
Use `+RTS -Ksize' to increase it.
```
Haskell is great for academic exercises, but for people who have to solve large-scale software engineering problems on constrained hardware it doesn't seem very practical. So I try to think in Haskell and implement in C++.

### Great

Then I eagerly await your one-line C++ program that prints the sum of the integers from 1 through 1,000,000 without error.

### Easy (well, in two lines)

```cpp
% cat gauss.cpp
#include <iostream>
int main() { std::cout << "500000500000\n"; return 0; }
```

### Cheating?

Hmm, this might be cheating, since I'm not sure any C++ standard allows it, but g++ calculates it correctly on my x86-based machine. (I believe `long long int` is perfectly valid ISO C99, though.)

```cpp
#include <iostream>

int main() {
    long long s = 0;  // needs 64 bits: the sum is 500000500000
    for (int x = 1; x <= 1000000; ++x) s += x;
    std::cout << s << "\n";
}
```


Of course, I'd rather use a real bignum library.

### Strictness analysis

I don't have access to GHC right now, but I guarantee that if you add -O it will work. sum is not defined with a strict foldl in the Report, so GHC's definition must preserve that non-strictness in its definition of sum. GHC can usually recover the strictness through strictness analysis, but that only runs when optimizations are on. Alternatively, this is almost as short and will work even without optimizations:

```haskell
import Data.List (foldl')

main = print (foldl' (+) 0 [1..1000000])
```

You may want to see http://www.haskell.org/hawiki/StackOverflow

### Thanks, that makes perfect sense.

I do find it tricky to predict the resource consumption of even a short Haskell program compared to C++. To bring this back on topic: I think that issues like this probably have a more significant impact on would-be FP programmers than people thinking that ugly C++ syntax is a necessary feature of FP.

### ML

While ML perhaps isn't as sexy as Haskell, it is usually quite easy to understand the resource consumption and run-time behavior of ML programs. Here is one way to do the sum in Standard ML (tested with SML/NJ 110.57):

```sml
(*** Stuff that you should have in a library ***)
datatype 'a stream = S of unit -> ('a * 'a stream) option

fun upto (lo, hi) : IntInf.int stream =
    S (fn () => if lo > hi then NONE else SOME (lo, upto (lo + 1, hi)))

fun getItem (S t) = t ()

fun fold f a s =
    case getItem s of
        NONE        => a
      | SOME (x, s) => fold f (f (x, a)) s

val sum = fold IntInf.+ 0

(*** The one-liner ***)
val () = print (IntInf.toString (sum (upto (0, 1000000))) ^ "\n")
```


### wrong conclusion

It works if compiled with optimization. If you don't want to rely on compiler magic, you have to be a bit more explicit about what you really want, which is eager evaluation in this case:

```haskell
import Data.List (foldl')  -- foldl' lives in Data.List

main = print (foldl' (+) 0 [1..1000000])
```

No, Haskell does not need ungodly amounts of memory, it just needs a different skill set than C++ does.

### The goal here is to solve difficult problems.

Not to write one-liners.

Would you like to show me the correct Haskell way to sum the integers from 1 to 1,000,000?

And I wish I could reparent this comment to its correct context below Paul Snively's. :-(

### -O

Same program, but compile with -O flag.

### I agree

The main problem I have in moving from explicit loops to STL algorithms is the need to introduce new structs (since C++ has no closures). The structs cannot be defined at local scope and then passed to the algorithms (local classes are not valid template arguments), and that makes the code more verbose.

I am not sure that clever template-based solutions are the best way around this problem, since figuring out compile-time errors in templated code is pretty tricky, and more so if the code was written by someone cleverer than you. :-)

Given all this, loops seem to be the simplest option in practice, at least for the easy stuff.

Yes, STL FP is awful. I usually just write a `for` loop instead.