The fate of reduce() in Python 3000

Link

Guido van Rossum comments on the elimination of map(), filter(), reduce() and lambda from Python 3000:

About 12 years ago, Python acquired lambda, reduce(), filter() and map(), courtesy of (I believe) a Lisp hacker who missed them and submitted working patches. But, despite the PR value, I think these features should be cut from Python 3000.

He believes map() and filter() are unnecessary because there are list comprehensions. But what about reduce()? It seems Guido doesn't believe in folds:

So now reduce(). This is actually the one I've always hated most, because, apart from a few examples involving + or *, almost every time I see a reduce() call with a non-trivial function argument, I need to grab pen and paper to diagram what's actually being fed into that function before I understand what the reduce() is supposed to do. So in my mind, the applicability of reduce() is pretty much limited to associative operators, and in all other cases it's better to write out the accumulation loop explicitly.

And then, with these functions gone, there is no use for lambdas, he argues.

I find it positive, in general, when designers eliminate rarely used features from a programming language, but maybe he is being too harsh on reduce(); it may be my predilection for functional programming getting in the way, though.
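To make the disputed trade-off concrete, here is a minimal sketch of the same accumulation written both ways (using functools.reduce, where the function now lives in Python 3; in the Python of this discussion reduce was a builtin):

```python
from functools import reduce  # a builtin in the Python 2 of this discussion

words = ["fold", "is", "a", "loop"]

# reduce() version: the accumulator threading is implicit in the callable.
total = reduce(lambda acc, w: acc + len(w), words, 0)

# The explicit accumulation loop Guido prefers for non-trivial functions.
acc = 0
for w in words:
    acc += len(w)

assert total == acc == 11
```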

via PLNews


Uses of reduce

I think I agree with Guido about the uses of reduce. Especially when sum works for most datatypes, like concatenation for arrays and strings.

I'd love to see everyone's most recent non-trivial use of reduce (or fold, or apl's insert or something similar).

recent folds

My most recent non-trivial folds are over files and databases rather than lists. It would be no fun to write these ones out as loops since for performance they take care of chunking/buffering within the custom fold operators. On the other hand, the Python way might be to move the hairy part into a generator and then write a trivial loop by hand, and that could be very neat.
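That suggestion can be sketched as follows. The `chunked_lines` generator and its buffer size are illustrative inventions, not code from the thread: the hairy chunking/buffering lives in the generator, and the fold itself becomes a trivial loop.

```python
def chunked_lines(path, bufsize=1 << 16):
    """Yield lines from a file, reading it in large chunks for performance."""
    with open(path, "rb") as f:
        leftover = b""
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                if leftover:
                    yield leftover
                return
            chunk = leftover + chunk
            # All complete lines go out; the trailing partial line is kept back.
            *lines, leftover = chunk.split(b"\n")
            for line in lines:
                yield line

# The fold is then a plain hand-written loop, e.g.:
# total = 0
# for line in chunked_lines("big.log"):
#     total += len(line)
```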

all() and any() look nice, but I don't understand why they need to be builtins. Is there some restriction on how generator expressions can be used, or something?

Today I feel like (reduce append list-of-lists) is a pretty subtle piece of code.

I Love Guido

I've just written some code to automate evaluation of a student project. One crucial part is a search over a directory tree for submitted code. This is naturally expressed as a fold.

My problem with GvR's reasoning is his desire to drag everyone down to his level of ability. That certainly doesn't lead to a language I want to program in.

Level of ability?

I don't think it's exactly a question of ability - I mean, I dare say GvR could transform himself into a Haskell ninja if he had a mind to. Plenty of other Pythonistas have managed it.

I think the problem, if it is a problem, is that Python is intended as a single-paradigm language, and that paradigm isn't functional.

There is a niche for a dynamically-typed, "impure" functional language with a simple syntax and broad library support. And that niche is filled by PLT Scheme.

Ability...

I refer to this quote:

almost every time I see a reduce() call with a non-trivial function argument, I need to grab pen and paper to diagram what's actually being fed into that function before I understand what the reduce() is supposed to do

It's not a problem I've ever had... It just takes some time to learn the pattern of using fold, just like you have to learn the patterns of using generators, or objects, or anything else.

naturally?

One crucial part is a search over a directory tree for submitted code. This is naturally expressed as a fold.

I don't find this natural at all. I'd implement it as a filter over an iterator/generator.

And a generator is...

an inside out fold!

Touché

Advantage Noel. If Python was serious about having many ways to do things...

Someone ought to write up an accessible-to-imperative-programmers explanation of the relationship between fold/unfold and generators. It would make a good Wikipedia article...
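In lieu of that article, here is a rough sketch of the correspondence (the `unfold` helper is my own naming, not a standard function): a generator is an unfold, producing a sequence from a seed, and a fold consumes a sequence back into a single value.

```python
from functools import reduce

def unfold(step, seed):
    """Generator form of unfold: step(seed) returns (value, next_seed), or None to stop."""
    while (pair := step(seed)) is not None:
        value, seed = pair
        yield value

# Counting down from 5 is an unfold...
countdown = unfold(lambda n: (n, n - 1) if n > 0 else None, 5)
# ...and summing the result back up is a fold.
assert reduce(lambda acc, x: acc + x, countdown, 0) == 5 + 4 + 3 + 2 + 1
```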

You're thinking of Perl

It's Perl's philosophy to have "many ways to do things". The Python philosophy has generally leaned towards "one clear, idiomatic way to do things". At least that's been my impression.

Towards the best collection traversal interface

Modularization

But it's much nicer to create a generator for all data types, and then define map, fold and filter in terms of generators, with list comprehensions.

I think it's the bananas paper in practice.
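A sketch of that modularization (the names `gmap`, `gfilter`, and `gfold` are mine): each operation is written once against the iterator protocol, so it works for any data type that can produce a generator.

```python
def gmap(f, xs):
    """map defined once over any iterable, via a generator expression."""
    return (f(x) for x in xs)

def gfilter(p, xs):
    """filter defined the same way."""
    return (x for x in xs if p(x))

def gfold(f, init, xs):
    """fold as a plain loop over any iterable."""
    acc = init
    for x in xs:
        acc = f(acc, x)
    return acc

# Doubled even numbers below 6: 0, 4, 8 -> sum 12.
evens_doubled = gmap(lambda x: 2 * x, gfilter(lambda x: x % 2 == 0, range(6)))
assert gfold(lambda a, b: a + b, 0, evens_doubled) == 12
```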

My most recent non-trivial use of reduce.

(defmethod lipschitz-value-function ((csg-union csg-union))
  (let ((blending-function (or (blending-function csg-union)
                               (constantly 0.0))))
    (reduce #'(lambda (value-function1 value-function2)
                #'(lambda (position)
                    (let ((value1 (funcall value-function1 position))
                          (value2 (funcall value-function2 position)))
                      (- (min value1 value2)
                         (blend-value blending-function value1 value2)))))
            (unioned-surfaces csg-union)
            :key #'lipschitz-value-function)))

This computes the implicit function of a blended union of implicit surfaces, by reducing a function-returning function over a list of functions. Is that non-trivial enough? ;-)

Oh, I have similar expressions for intersection and difference. I thought about abstracting things a bit so I'd have only one reduce instead of three, but I didn't feel it was worth it.

Looks good..

.. but let me just grab a pen and paper to figure out how that's supposed to work. ;-)

Nice but...

OK, I agree this is cool, but union really is the sum for surfaces isn't it? And intersection the product. How do you do difference for a list of surfaces? (Or difference of a list of anything for that matter.)

OK, I agree this is cool, but

OK, I agree this is cool, but union really is the sum for surfaces isn't it? And intersection the product.

I'm not sure what you're getting at.

How do you do difference for a list of surfaces? (Or difference of a list of anything for that matter.)

The way Common Lisp does it for numbers: left associatively. (- a b c d) == (- (- (- a b) c) d).
If I wanted something to be right associative, I'd have to use reduce with the :from-end t option.
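The same left-associative convention is easy to state in Python terms: folding operator.sub from the left gives exactly Common Lisp's (- a b c d).

```python
import operator
from functools import reduce

# (- 10 1 2 3) in Common Lisp == (((10 - 1) - 2) - 3)
assert reduce(operator.sub, [10, 1, 2, 3]) == ((10 - 1) - 2) - 3 == 4
```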

My most recent python use of reduce

Since the haskell uses I have on hand are pretty arcane, here's the use I had of python's reduce sitting in my sent-mail folder.

The problem in question was an investigation of various methods to turn a floating point number into a fraction (a rational number). Anyway, I wrote it in python because I was playing around with python at the time.

In general in this program fractions are represented as a tuple of (numerator, denominator) - however, even if I had gone and defined a class and overloaded +, I'd still need reduce or something like it because ... well, just look at the code:

def stdMethod(x, tol=defaultTolerance):
    L = [int(x)]
    x = x - int(x)
    maxerr = x
    while (maxerr > tol):
        x = 1/x
        next = int(x)
        L.append(next)
        x = x - next
        maxerr = x * maxerr / float(next)
    # *****
    L.append((1,0))
    L.reverse()
    # fadd is the author's addition on (numerator, denominator) tuples, defined elsewhere
    return reduce(lambda x,y: fadd((x[1],x[0]),(y,1)), L)

This code builds up a long continued fraction, eg: 4 + 1/(3 + 1/(2 + 1/(5))). In the code, this would be represented by L = [4, 3, 2, 5] at the point marked *****.

Anyway, reduce is then exactly what I need to convert this list of integers into a rational number. Even had I used a class and overloaded +, I still would want to let the last few lines be:

    L.reverse()
    L[0] = Rational(L[0])
    return reduce(lambda x,y: y + 1/x, L)

That is, a simple sum() just won't do.
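A self-contained variant of that final step, using the standard library's Fraction in place of the author's (numerator, denominator) tuples, for the example list L = [4, 3, 2, 5]:

```python
from fractions import Fraction
from functools import reduce

L = [4, 3, 2, 5]  # encodes 4 + 1/(3 + 1/(2 + 1/5))

# Fold from the innermost term outward: each step computes y + 1/x.
value = reduce(lambda x, y: y + 1 / x, map(Fraction, reversed(L)))
assert value == Fraction(163, 38)  # == 4 + 1/(3 + 1/(2 + 1/5))
```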

Interesting. I think your

Interesting. I think your second example is different from the first; I think you misread your original code when you wrote this post.

I think your example 4 + 1/(3 + 1/(2 + 1/(5))) is wrong; I think you mean 1/(4 + 1/(3 + 1/(2 + 1/(5)))). In which case your second example should have been:

return reduce(lambda x, y: 1/(x + y), reversed(L), 0)

Without reduce(), it looks like this:

r = Rational(0)
for i in reversed(L):
    r = 1/(i + r)
return r

Erm..

I think your example 4 + 1/(3 + 1/(2 + 1/(5))) is wrong;

Why do you suppose that?

I think you mean 1/(4 + 1/(3 + 1/(2 + 1/(5)))).

But then his list would be L=[0,4,3,2,5] and not L=[4,3,2,5]. I'm sure his specification is correct, because it follows the conventions of the compact abbreviated notation for simple continued fractions. I didn't verify his code, but the conversion of the list L—the compact abbreviated notation for simple continued fractions—into floating point notation looked correct to me. Correct me if I'm wrong, but it looks like your code will interpret the list [4,3,2,5] as [0,4,3,2,5]. In fact your code will interpret every list as having an implied 0 as the first element, in which case you cannot represent continued fractions greater than one. However, as far as I am aware, there is no such restriction on continued fractions.

nuts

It seems you could parallelize an associative fold rather easily - an explicit accumulation loop, maybe less so.

GvR says that you can just define a nested function - unless you can nest functions inside loops/conditionals and not just functions, and unless they can survive after their parent scope ends, that's a silly loss of expressive power.

I sure wish C++ had lambdas (or even just inline anonymous classes). Can a user write functions that take expressions as arguments in Python, or is that only allowed for builtins? If not, it seems like GvR wants to restrict functional style only to builtin functions and data structures.

How to parallelize

It seems you could parallelize an associative fold rather easily - an explicit accumulation loop, maybe less so.

That's the point: you can only parallelize an associative fold. And Guido's position is that in Python the common associative operators are + and *, for which sum() and product() are much easier to read.
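The readability point in concrete form (math.prod is a later addition, Python 3.8+; the thread's hypothetical product() would play the same role):

```python
import math
import operator
from functools import reduce

xs = [1, 2, 3, 4]

# The dedicated builtins say the same thing as the folds, more directly.
assert reduce(operator.add, xs, 0) == sum(xs) == 10
assert reduce(operator.mul, xs, 1) == math.prod(xs) == 24
```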

I can think of other associative operations than * or + ...

It seems Python's priority is neither performance nor parallel performance, so perhaps that was a red herring.

People do tend to use specific libraries for parallel code, but it would be nice if parallel-friendly constructs were an encouraged part of the basic idiom.

Lambdas in C++

I sure wish C++ had lambdas

There's always Boost.Lambda

Or...

Phoenix, which is also in Boost via Spirit; or FC++, which also gives you monads and almost a complete Haskell standard prelude.

If you want to do functional programming in C++, you have a lot of perfectly viable options.

maybe...

if you don't mind:
- extending functional expression libraries to your custom types when necessary
- reverting to a more explicit solution whenever your compiler fails to do template expansion because of some shortcoming in the library implementation or the standard or an obscure combination of both
- reverting to a more explicit solution whenever your compiler fails to do template expansion because of non-compliant implementation
- trial'n'error development cycle because of nonsense page-long compiler errors
- O(n^x) compile time
- unclear syntax
- ...

existing lambda libraries are a cool showcase for the power of the c++ language, but not a feasible replacement for first-class lambda in any but the trivial case.

there is (still) hope for lambda becoming a first-class citizen of c++...

You can.


>>> def f():
...     if 1:
...             def g():
...                     print "test"
...     g()
...
>>> f()
test

fold == for loop

It seems you could parallelize an associative fold rather easily - an explicit accumulation loop, maybe less so.

I don't understand this. Aren't the two equivalent? Seriously.

In either case, the main problem is proving that the operation is associative and without side effects. In an extremely dynamic language like Python (where there's no static type information, and classes can be modified at run time, and there's operator overloading, and the reduce() builtin could be replaced at run time with something else), I just don't think it's realistic. It's totally impossible at compile time; and exceedingly difficult and expensive at run time.

And even if all those stars happen to align, the case for parallelization is still bad in CPython, since the implementation can only use one CPU at a time anyway (due to the Global Interpreter Lock).

Actually, the idea of aggressively optimizing this seems a little far fetched to me to begin with. Do any Common Lisp implementations actually optimize reduce this way? The definition of reduce does not require the caller to guarantee associativity; there are even some non-associative examples.

Are HOF to be retained?

I think his reasoning is backwards here - map(), filter() and reduce() do not require lambdas (they could just as easily use named [formally defined] functions). The question is rather one of what is to happen to (a) first class functions and (b) higher order functions? Are we to gather that functions are to become second class with this direction? Can't say that I understand Guido's reasoning concerning adding typing nor eliminating any feature that might appeal to a Scheme hacker.

Personally, I'm beginning to like Matz's guiding philosophy more - programming should be an exercise in coolness. The day Python moves toward removing lambdas is the day I start to explore Ruby - which already has much better oop, regexp and reflection capabilities, and is not afraid to be appreciated by those who embrace FP.

Python's lambda is broken

...or at least deliberately hobbled, to limit its expressive power so that people will use alternative means of expression.

A proper anonymous function syntax would be an acceptable replacement. But I don't know if we'll get one.

reduce probably won't be much missed - there's very little Python code out there that uses folds the way a true FP origamist would use folds. But I would miss the ability to write HOFs like reduce very much.

Whacky scoping

Not to mention Python's so-called `lexical scoping'. What a bunch of jokers the python language designers are.

python

Python is following very closely in Java's footsteps: having just one formal way to do things, adding some metadata in the form of @attributes (called 'function decorators' in Python), the need to declare classes to cope with the broken lexical scoping...

Perhaps they are trying to go after Java programmers at all costs?

it's coming out very depressing of late...

"Function decorators" in Python

The "function decorators" are not so much attributive metadata as higher-order functions that take the decorated function as an argument and return another function. The decorators are executed when the decorated function object is created.

Example: If a method f() is decorated by @classmethod, a new function object f = classmethod(f) will be created. This mechanism can, of course, be used to attach attributes to the function:

def addfuncattr(attr, value=None):
    def wrap(f):
        setattr(f, attr, value)
        return f
    return wrap

@addfuncattr("some_attr", 42)
def foo(): pass

>>> foo.some_attr
42

in other words...

"The "function decorators" are not that much attributive metadata but usually higher order functions that take the decorated function as an argument and return another function."

... you "decorate" them with meta-data. :)

Personally, I'm beginning to

Personally, I'm beginning to like Matz's guiding philosophy more - programming should be an exercise in coolness.

I think programming should be an exercise in communication.

It really sounds like this gu

It really sounds like this guy doesn't use higher-order functions much, but that may in part be due to python's syntax. I looked up list comprehensions and, as a ruby hacker, I couldn't see why you would bother with them when functions like map() work just as well and don't require special syntax to be added. Then I realized that ruby's map() takes a block, and python's doesn't.

On the other hand, I do find ruby frustrating in other ways. My particular pet peeves are that there's a special syntax for passing a single block to a function that doesn't work for more than one block, and there's a lot of unnecessary syntax ("if", "while", "for", etc.) in the language, making the language harder to learn and remember.

SuperCollider

Plug: SuperCollider is like Ruby or Smalltalk, has list comprehensions, collect, select, inject (map, filter, fold), allows passing multiple blocks outside the parens. "if", "while", "for" are message sends, not built-in control structures.
http://c2.com/cgi/wiki?SuperCollider

if (x, { a }, { b })
is equivalent to:
if (x) { a } { b }

while({ a }, { b })
is equivalent to:
while { a } { b }

I looked up list

I looked up list comprehensions and, as a ruby hacker, I couldn't see why you would bother with them when functions like map() work just as well and don't require special syntax to be added. Then I realized that ruby's map() takes a block, and python's doesn't.

Yeah, but list comprehensions came from Haskell, which doesn't have that problem.

Have you looked at LINQ? Why have query comprehensions when you could write out all those method calls explicitly?

The answer is that syntax matters. You could write

x = swap . (concatMap .) . (. swap map) . swap (.)

but it's clearer to say

x f as bs = [f a b | a <- as, b <- bs]

The logic of GvR

The same GvR reasoning applied to OOP: Almost every time I see a virtual method call, I need to grab the code of all the possible callees and imagine inlining each at the call site before I understand what the method call is supposed to do. So in my mind, most of the time it is better to write out the case statement explicitly and to inline the callees.

(You know, at the dawn of OOP, some opinion leaders of the time actually held that view; they said "you cannot tell the flow of control" etc.)

So no one is using these features? Right...

Seems Google isn't going to be happy with the new direction of Python.

Not all that functional, actually

Goopy is somewhat underwhelming - they seem to have interpreted "functional" as meaning "doing things with lists" - more often than not through explicit iteration. Not many HOFs around. Not that that's bad - for Python - but Goopy doesn't make a great counter-example right at the moment...

underwhelming

You are right about it being underwhelming, but I think it does make a point. First, they use HOF, and these are easier to use when you have lambda. Second, contrary to GvR, for them "functional" isn't a dirty word...

Wasn't it this one?

Definitely google loves map & reduce, but I guess they did not do this in python :)

I've nominated this node...

...to be my first LtU node of the week. I described it by saying the thread:

  • has become a sort of forum for complaining about/defending Python's piss-poor support of core FP style. Regular readers of this diary will probably have become aware that my attitude to the language moves around the spectrum from "distaste" through "irritation" and "outrage" to "cold hatred".

You want functional Python?

I'll give you functional Python...

Teaser:

> token = isalpha |seq| many(isdigit)
> tokens = token |sepBy| whitespace
> runParser("p123 q456 hello world", tokens)
['p123', 'q456']

It's gorgeous, can you add an explicit license?

I'd like to use this in my Plone/Zope paying work, could you add an explicit license so I'm able to do so?

--Shae Erisson - ScannedInAvian.com

Seriously...?

I should point out that I don't recommend it for production use - it's never been tried with an input of significant size or a parser of significant complexity, and I'm not certain I understand either the space or time complexity of its operation (it's hard enough in Haskell...).

At the very least, I would test and tune it first.

Still, if you think you can use it, why not? I would normally attach a GPL notice, but if that's problematic I can make an exception (whatever goes nicely with Python/Plone/Zope, I guess).

There was a time...

when LtU discussions were more technical than this.

A better time it was, don't you think?

Instead of saying that you don't like (or do like) the language, how about giving focused examples?

A few comments did just that, and they deserve closer attention than they seem to have gotten.

Yeah, I want unit tests.

If it doesn't have unit tests it's just opinion.

(I think that's a misquote of Heinlein?)

--Shae Erisson - ScannedInAvian.com

Python reduce broken

Jacob Matthews has informed me that Python's reduce is broken, so it is significantly less useful than a true fold:

Heh. reduce() is broken in python (in that it's not actually a fold function), so I don't blame Guido for not being able to figure it out in his head. Rather than handling the zero- and one-element list cases properly, it (a) doesn't take a basis element, (b) raises an error on empty lists and (c) RETURNS THE SINGLE ELEMENT WITHOUT CALLING A FUNCTION ON IT in the one-element case (true even in recursion). For that reason it doesn't behave as you'd expect at all, and it's impossible to safely reduce an alpha list using any function that doesn't have type alpha -> alpha. For instance, as a Schemer you might expect this to sum up the lengths of all the strings in a list (note reduce calls the input function with the accumulator first rather than second):

 >>> reduce(lambda x,y:x+len(y), ["hello","there","everybody"])

But it actually raises an error:

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 1, in <lambda>
TypeError: cannot concatenate 'str' and 'int' objects

since it's trying to add "hello" to the length of "there".

The way it's defined, reduce is not fold, it's a broken fold. So if you want to be charitable to Guido, you could interpret his message as saying: "Look, we tried to put some functional programming tools into Python, but we ended up completely screwing them up; let's just take them all out rather than give the incorrect perception that we're a real functional programming language."
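The one-element behavior is easy to demonstrate (in modern Python, where reduce lives in functools):

```python
from functools import reduce

# Point (c) in action: with a one-element list and no initial value,
# reduce() returns the element untouched and never calls the function.
assert reduce(lambda x, y: x + len(y), ["hello"]) == "hello"  # not 5
```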

use an initializer


>>> reduce(lambda x,y:x+len(y), ["hello","there","everybody"], 0)
19

help(reduce)


>>> help(reduce)
reduce(...)
    reduce(function, sequence[, initial]) -> value

I think it's fascinating that

I think it's fascinating that this mystery Lisp hacker is never named, but always blamed, for submitting these implementations. =)

In Python's younger days, "map" and "filter" were often cited as examples of Python's great expressiveness, flexibility, and support for higher-order programming. "lambda" was a natural addition, since "map" and "filter" often take functions that are rather unique and not used elsewhere. When list comprehensions were added, though, there were suddenly two ways to do the same thing, and this jarred with Python's core philosophy (in strict opposition to Perl's TMTOWTDI).

Interestingly, list comprehensions suffer from the same limitation as the "crippled" lambda; they can only contain expressions. Nobody seems to complain about this fact.
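The shared limitation, illustrated (the commented-out lines show the forms the grammar rejects):

```python
# Both lambda bodies and list-comprehension clauses must be expressions.
squares = [x * x for x in range(4)]   # expression clause: fine
sq = lambda x: x * x                  # expression body: fine

# Statements are rejected in both positions:
# sq2 = lambda x: return x * x        # SyntaxError
# [y = x * x for x in range(4)]       # SyntaxError
```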

In general, Python's development seems to favor the common case, making the oft-needed tools standard and accessible, at the sacrifice of the more obscure and unusual. Slicing strings and lists, for example, is a very common operation in Python programming, so there is syntax support for it. The use of map and filter were, by a large margin, the most common use of HOFs, so list comprehensions were added.

But once you've got syntax support *and* library support for something, you've got two ways to do it. And the syntactical way seems more official most of the time, so it's not surprising which way they want to eliminate.

"lambda" is a tough one, because it's undoubtedly useful, but not useful enough. Anytime someone complains that there should be a better anonymous function or block syntax, they get shot down because you can already define a named function; surely you just want anonymity so that you can write obfuscated code and make everyone get out their pen and paper. But Ruby has shown that code blocks can be added to a syntax much like Python's, and this solves the expression/statement dichotomy rather nicely (IMHO) plus it allows you to design control structures and new looping constructs that can be quite handy. I think that it's mainly an issue of style; people don't like any of the options they've seen for adding code blocks to the language because they're all "ugly" in some way or another.

Now, "reduce" is really an odd one. The reason "sum()" and friends are being added is to eliminate the only common use cases for "reduce". It's not like an FPL, where "fold" is a fundamental construct from which "map" and friends are derived. Folds favor stateless, recursive programming. Python is designed with imperative, iterative programming in mind. Once again, two ways to do it, so you've got to kill the unpopular way.
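For contrast, the FPL-style derivation does work mechanically in Python; here is a quadratic-but-illustrative sketch (helper names are mine) of map and filter derived from reduce:

```python
from functools import reduce

def map_via_fold(f, xs):
    # Each step appends f(x) to the accumulated list.
    return reduce(lambda acc, x: acc + [f(x)], xs, [])

def filter_via_fold(p, xs):
    # Each step keeps x only when the predicate holds.
    return reduce(lambda acc, x: acc + [x] if p(x) else acc, xs, [])

assert map_via_fold(str, [1, 2]) == ["1", "2"]
assert filter_via_fold(lambda x: x > 1, [1, 2, 3]) == [2, 3]
```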

There really are very few use cases for "reduce" in Python, mainly because the common, built-in data structures (lists and dictionaries) are imperative. Reduce can be used with imperative data structures, but it's sort of silly to keep passing the accumulator around, if you're just going to mutate it anyway.

So, I'm not too sad to see reduce go away, but I do think it should be retained in a "functional" module; hopefully, this will happen at some point. It seems like Guido is a bit more open to the idea than I previously thought. Right now, "itertools" is sort of a sneaky way to get a module of functional tools into the language without actually using the f-word, and it's hard not to like "itertools" since it's based around lazy streams and thus can lead to more memory-efficient code.

Once again, I think this whole thing has way more to do with "one, and preferably only one, obvious way to do it" than with a deliberate bent against functional programming. But there is also a bit of resentment in the Python community toward "Lisp weenies", for whom Python's lambda, syntax, and (intentional) lack of macro support are never good enough. As has been pointed out already, Python and Scheme are actually quite similar, if you disregard parentheses and macros. However, the Python community seems intent on proving that Greenspun's Tenth does not apply to their language, and having "lambda" and friends lying around makes for a much harder argument.

Python is designed first and foremost to be readable, and that seems to be the motivating factor in these decisions. It's a language for the smart masses, if you will. It attracts smart people, but tries to remain understandable to many, hence the traditional control structures, block layout, emphasis on iteration over recursion, and of course, OO. These language attributes are probably not going to budge, no matter how powerful the alternatives might be.

However, if it weren't for map/filter/reduce/lambda and the excellent tutorials by David Mertz on "charming functional programming", I never would have caught on to FP, and that would have been a shame, because I've learned so much useful stuff by studying functional languages since I got a taste of FP in Python.

I would like to see Python go on being Python, but I also hope that FP is not deliberately made more difficult in Python due to political or idealistic reasons.

If you don't like Guido's direction, don't use Python.

Or fork it. Or use Scheme or Ruby or Perl or JavaScript or something else.

I'm not defending Guido at all--fold/reduce is such a fundamental thing that ought to be in the language somewhere (if nothing else, in the library). I suspect that if Py3k doesn't provide it, numerous application and library developers will provide their own versions (which, naturally, won't work in the same fashion, etc.)

But...

One interesting phenomenon I've long observed is that many programmers think that every language should be like their favorite. Many complaints about C++, especially early ones, seem to be of the form "it's not Smalltalk". Now there are lots of things about C++ which annoy me, but the fact that it's not a dynamically-typed "pure" OO language isn't among them.

Bjarne Stroustrup famously quipped "If you want Smalltalk, you know where to find it."

Likewise with the present situation. If you want Haskell, or Scheme, or ML--those are all easily available. Python serves a niche, and serves it well. Like anything, if it fails to continue serving the niche, it risks being replaced.

And I suspect that removal of "reduce" from the language will impair its ability somewhat.

PL as social contract

If you don't like Guido's direction, don't use Python.

If this was the only available approach to language designers, I would have only this to say:

Sic semper tyrannis!

Histrionics aside, I believe it is reasonable for a language community to expect their knowledge and practice in that PL to remain monotonically increasing (i.e., you don't have to unlearn previous knowledge or undo previous programs).

I can see if a feature was broken, and caused major problems in programs that used it, it would be reasonable to make a non-monotonic change.

But to arbitrarily remove features because they seem unaesthetic in retrospect seems guaranteed to erode a PLs credibility as a serious language for use in real apps.

One interesting phenomenon I've long observed is that many programmers think that every language should be like their favorite.

Is this unreasonable? If you like a language because of certain features, you consider those features good. Then of course their absence from another language will be bad (unless there is some mitigating good added instead).

Python serves a niche, and serves it well.

What is that niche? Pythonistas seem to tout it for a pretty wide range of purposes.

Bjarne Stroustrup famously quipped "If you want Smalltalk, you know where to find it."

That sounds like "let them eat cake". (I'm full of revolution today.... ;-) )

I really don't see what is wrong with asking PL designers, especially ones with clout, to try to make their languages "good" for a well-articulated, theoretically sound value of "good".

They can of course respond with a better-articulated, more theoretically or design-philosophically sound response, but "because I said so" is not such a response.

There are times to throw stuff out

There are times when you need to throw things away.

I don't keep a bottle of bluing in my laundry room, now that modern bleaches do the same job better. There may be a place where bluing would be better for my laundry, but nobody keeps it around so even where it is better we don't use it.

We are talking about Python3000. That is the next MAJOR version. The reason for a major version is to break backward compatibility in some way - in the process correcting mistakes. Since reduce is used so rarely in Python programs, it is time to re-think the decision to include it.

Python's goal is to be readable above all else. I have no doubt that a good Lisp hacker can write something from scratch faster/better than a good Python hacker could. However, with Python the code will be much easier to read when he is done.

If you like a language because of some features, that is fine. However you should broaden your mind to other ways.

Reduce and map in Python are not as powerful as in other languages that have the same features. Python has a different way it wants you to do things. This is not good or bad, it is different. (Overall, where the Python way won't work, Python is clearly worse; but where the Python way works, you can argue it is better for readability.)

I have been doing serious Python work for over a year now, and I have never found a need for map or reduce. There are Python alternatives that are easier to use and understand. I found a use for lambda exactly once, and could have written a full function just as easily. While everyone is different, I think my experience is typical of Python programmers. (That is, Python programmers using 2.4; early versions of Python lacked some of the alternatives I'm using.)
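A sketch of the kind of alternatives meant here — list comprehensions in place of map and filter, and sum in place of the common reduce cases. The example data is mine, chosen only to illustrate:

```python
# List comprehensions and sum() as the Python-native replacements
# for map/filter/reduce mentioned above.
words = ["alpha", "beta", "gamma"]
nums = [1, 2, 3, 4, 5]

# map(str.upper, words) becomes:
upper = [w.upper() for w in words]

# filter(lambda w: len(w) > 4, words) becomes:
long_words = [w for w in words if len(w) > 4]

# reduce(operator.add, nums) becomes:
total = sum(nums)

print(upper, long_words, total)
```

Each form names its intent directly, which is the readability argument in a nutshell.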

Not so simple

There's more to it than people trying to impose favorite features across languages. In the same way that modern mainstream media tends to dumb down the electorate, we're seeing mainstream languages dumbing down programmers. An average programmer who primarily uses a single language has to go to a lot of effort to learn about features outside their chosen language. That may not matter in many cases, but it can matter much more when the features in question are actually a natural fit for the language they're using — then it becomes as though they're learning a pidgin language instead of the real language(s) it's based on.

Python is training large numbers of programmers to think inside a box which is now threatening to become even more restrictive. That's not doing most of those programmers any favors. It's fine if those programmers recognize that, and expand their intellectual and professional horizons with other languages, but that doesn't always happen. For many of those who don't make that leap, GvR is doing them a disservice. That can have a broader impact, too — the Python community is large, with a diverse set of stakeholders, and it interacts with other language communities in various ways.

Python serves a niche, and serves it well. Like anything, if it fails to continue serving the niche, it risks being replaced.
True. And part of that process will be people making noise about why it should be fixed or replaced. The dodo didn't die out on its own; someone had to go out and hunt it to extinction.

Python vs FP?

Is it just me, or is there a larger trend of an anti-FP backlash within the Python community (or at least some elements thereof)? There was the brouhaha over tail-call elimination a while back (along with claims suggesting that TCE is not necessary in languages with mutable variables, as its primary use is to allow efficient implementation of iteration via recursion in single-assignment languages that can't implement it directly--or something like that). Guido (in the referenced posting) suggested that you don't need lambdas when you have objects (which is technically true--objects and lexical closures are known to be equivalent--but there are many times when one or the other is more syntactically convenient). And then there's the debate on introducing static typing to the language...
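The lambda/object equivalence mentioned above can be sketched in a few lines (the names here are mine, for illustration):

```python
# A closure and a callable object expressing the same "add n" behaviour.
def make_adder(n):
    return lambda x: x + n          # closure over n


class Adder(object):                # the object-based equivalent
    def __init__(self, n):
        self.n = n

    def __call__(self, x):
        return x + self.n


add3 = make_adder(3)
add3_obj = Adder(3)
print(add3(10), add3_obj(10))  # both compute 13
```

Both are first-class values you can pass around; the lambda is terser, the object carries named, inspectable state.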

Python is, in many ways, a philosophical descendent of Smalltalk--complete with the belief that mutability of a program ought to be maximized rather than minimized. Which is at odds with the philosophies underlying FP--which include the belief that constraints upon the programmer which increase the compiler's ability to reason about a program's behavior are a good thing.

Maybe I'm extrapolating the curve far beyond the sample set (and I hope I am)... but it would be a shame for this sort of thing to occur. But that said (the point of my original post)--the Python community ought to tailor the language to fit their needs. If the result doesn't meet the needs of other programmers, so be it.

Re: Python vs FP?

Is it just me, or is there a larger trend of an anti-FP backlash within the Python community (or at least some elements thereof)?

I think it's more pragmatic than that. Python has integrated key elements of FP: lists, tuples, unpacking of structures, returning complex types. When I was first exposed to FP (via the language Hope), what impressed me the most was the way lists were tossed about without any concern about what memory actually looked like. That's now part of the foundation of Python.

I don't think that deeper FP has truly proven itself. I don't mean that as flame bait; I've gotten to where I prefer FP for many things. But look at all the constant bickering in comp.lang.functional. Look at how short the lists of real world functional programs tend to be. (Erlang is the poster-child to a great extent, being used in high-end telecom systems, but then there's always debate about whether or not Erlang is functional because its concurrency model is impure).

In short, I think that the useful parts of FP have been integrated into the mainstream, while other parts have proven too esoteric (that is, they're beautiful from a purist's point of view, but there are lots of other approaches as long as you don't mind veering away from mathematical purity).

Thinking inside a box

All languages make you think inside a box. That's why you have to learn many languages to gain a real, broad understanding of programming. Purity is inversely proportional to box size.

every language should be like their favorite?

Or languages that develop significant clout, so that you can be forced by circumstances other than preference to use them? C++ has that clout, and Java has it. Python is moving toward that position, and while doing so it is dropping the abilities that made it palatable to functional programmers (a somewhat hyperbolic statement).

which is the cause and which is the effect

A cynical person might wonder whether Python's move into mainstream popularity merely coincides with the deprecation of overt functional programming techniques. Or whether one is the cause and the other is the effect.

Smalltalk doesn't maximize mutability

Python is, in many ways, a philosophical descendent of Smalltalk--complete with the belief that mutability of a program ought to be maximized rather than minimized.

I don't think that is the Smalltalk philosophy. I remember reading a quote I think from Alan Kay that the idea of Smalltalk was to shrink the assignment operator as far as possible and then encapsulate it.

The two goals are not, of cou

The two goals are not, of course, incompatible. In fact, Alan Kay has long said that late binding is key to building systems that can be rapidly modified, and that he thinks late binding is essential until software engineering becomes mature enough for that not to be a primary concern.

Agreed

Python is just a descendent of ABC, a Scripting Language for Third-Grade Students, and that's all it will ever be. To wit: every Plone developer I know is telling me I should write a better CMS in the language I work on, and I can't blame them. Unsurprisingly, it's a Smalltalk descendent that follows this quote that you bring up, although I can't recall where it's actually from (or if it's from Alan at all).

Quote

James McCartney: I remember reading a quote I think from Alan Kay that the idea of Smalltalk was to shrink the assignment operator as far as possible and then encapsulate it.

water: Unsurprisingly, it's a Smalltalk descendent that follows this quote that you bring up, although I can't recall where it's actually from (or if it's from Alan at all).

The quote is definitely from Kay. I seem to recall that it's from a relatively recent ACM interview. It's been refreshing to me to read/listen to much of what Kay has said recently, because it's reinforced something that I feel is important, which is the intellectual debt that Smalltalk owes Lisp. As a friend of mine who wrote Bistro ("a place where people meet to share Java and Smalltalk") reminded me, people tend to place too much emphasis on classes in Smalltalk and overlook the other two important constructs: protocols and blocks, and blocks are just lambda. So even the granddaddy of OO languages (yes, I know about Simula; please don't post) had a fully capable functional subset. It only took C++ about 15 years to catch up...

Smalltalk and FP

I don't think that is the Smalltalk philosophy. I remember reading a quote I think from Alan Kay that the idea of Smalltalk was to shrink the assignment operator as far as possible and then encapsulate it.

Well, but it does make almost everything mutable, e.g. all collections, and even class definitions. The iteration protocol relies on side effects. Object initialization is done by creating an object with nil attributes and then changing their values.

Functional style can be found only in anonymous functions. But even here they used to have broken scoping rules (local variables shared with the method), tail calls don't necessarily run in constant stack space, passing a named method requires creating a wrapper block (or using a symbol but then it's incompatible with blocks), there are no Lisp-style lists, the syntax of block invocation is not pretty, etc.

IMHO Smalltalk is as far from functional style as Perl, Python and Ruby. That is, it provides most technical mechanisms necessary for FP, but its conventions and customs discourage it.

Right

So why did Guido say that Python had lexical scoping, when almost nothing about binding can be determined for sure statically? I have to say I'm happy to see all the FP stuff being punted from Python, so that I don't have to hear the claim that "Python supports Haskell-like programming idioms" anymore.
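One small illustration of why so little about binding can be determined statically: Python closures capture variables, not values, so the binding is resolved at call time (a minimal sketch):

```python
# Each lambda closes over the loop variable i itself, not its value
# at definition time, so all three see i's final binding.
fns = [lambda: i for i in range(3)]
results = [f() for f in fns]
print(results)  # every closure reports the last value of i
```

A reader (or compiler) cannot tell from the lambda alone what `i` will be; the answer depends on when the call happens.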

I don't have to hear the clai

I don't have to hear the claim that "Python supports Haskell-like programming idioms" anymore.

Ahem.

Actually, I kind of agree. I don't think it's a good idea to try to fake this stuff in a language that doesn't really support it terribly well. But I'd prefer to see it supported better than not supported at all.

Heaviside operators

I've always found the Heaviside operators used in J and K to be much clearer than reduce. "+/" means "insert '+' between elements of a list." "*/" means "insert '*' between elements of a list." Examples:

   +/2 4 6
12
   */2 4 6
48
   -/2 4 6
4
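For comparison, here is a sketch of the same three examples using Python's reduce (imported from functools in Python 3). Note one real difference: Python's reduce is a left fold, while J's / groups from the right, so +/ and */ agree but the subtraction case does not:

```python
# The J/K examples above, redone with Python's reduce.
from functools import reduce
import operator

nums = [2, 4, 6]
plus = reduce(operator.add, nums)        # +/2 4 6 -> 12
times = reduce(operator.mul, nums)       # */2 4 6 -> 48

# J's -/ groups from the right: 2-(4-6) = 4.
# Python's left fold gives (2-4)-6 = -8 instead.
minus_left = reduce(operator.sub, nums)

# A right fold over the reversed list recovers J's answer:
minus_right = reduce(lambda x, y: y - x, reversed(nums))
print(plus, times, minus_left, minus_right)
```

The grouping direction only matters for non-associative operators like subtraction, which is exactly Guido's point about where reduce gets confusing.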

+/ same as fold(+,0) ?

Sorry to post to this old thread, but I'm curious to know whether these 'Heaviside' operators are as powerful as the fold operator. For the basic + and * it seems they are, but what about when fold is generalized to data types other than lists? I've read in more than one place that map, filter, and indeed all the relational algebra operators can be built from fold... so is this simpler notation of J and K still as powerful as fold?

So I think "+/" is the same as

So I think "+/" is the same as "fold(+,0)", but the notation you describe isn't as powerful, in general. (The full APL notation might have a real fold in it, but I don't know it, so I can't say for sure.)

The reason is that your "Heaviside" operators need a unit for the function they are applied to, to act as a seed. That is, for addition (+), 0 is a unit -- x + 0 = x. For multiplication (*), 1 is a unit -- x * 1 = x. So there's a tradeoff; you can apply the / to fewer functions than fold, but it's terser.

Fold only looks more powerful.

Actually, you can apply / to fewer items than fold.

The only purpose of unit in fold(foo,unit) is to handle the empty list case, something that
foo/unit,x
covers.

I'm not sure where the extra power is.
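The equivalence being argued here can be checked in Python terms: supplying a seed to reduce is the same as prepending that seed to the list (the foo/unit,x construction above), and the seed's only extra job is the empty-list case:

```python
# Seeded reduce vs. prepending the unit to the input.
from functools import reduce
import operator

xs = [1, 2, 3]
with_seed = reduce(operator.add, xs, 0)
prepended = reduce(operator.add, [0] + xs)   # same result

# The seed's real purpose: without it, reduce on an
# empty sequence raises TypeError.
empty_case = reduce(operator.add, [], 0)
print(with_seed, prepended, empty_case)
```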

For those who are really concerned

Go search through the python language mailing lists. Someone brings this up every now and then, usually someone who doesn't know Python as well as they think they do, and just about all of their arguments get shot down.

There's nothing with lambda you can't do with def, list comprehensions get rid of 99% of the time you're going to use the FP functions, and writing out a few reduce calls by hand isn't going to kill anyone. Guido has already acknowledged that somebody will make a lib to do these things immediately anyway.
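A sketch of what "writing out a reduce call by hand" amounts to — the explicit accumulation loop that a reduce with a non-trivial function abbreviates (the digit-assembly example is mine):

```python
# reduce(lambda acc, x: acc * 10 + x, digits) written as a plain loop.
digits = [1, 2, 3, 4]

acc = digits[0]
for x in digits[1:]:
    acc = acc * 10 + x
print(acc)  # assembles the digits into 1234
```

The loop is longer, but the accumulator and the step are both visible at a glance, which is the readability case for dropping reduce.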

Aha!

FP is math?

I'm just a beginner starting a cs grad program, but as I understand it, FP was the beginning of the grand attempt to make a program conform to mathematical rigor. If that is the Holy Grail, then Python is definitely a Monty Pythonesque attempt to find this Holy Grail. But then what language does offer this sort of rigor?

I recall in another life as a geography student learning about digital mapping and satellite remote sensing. There were always huge piles of data to produce a raster image--and the only intelligent thing you could do with it was compress it. The vector method seemed to address this huge redundancy somewhat, but that also was based on piles of data. Then we heard about fractal geometry, where you could literally define a coastline as 1.23425247767448955, i.e. a number describing where the line sat relative to a 2-dimensional field (that is, 1.2343... is not a 1-dim line, nor a 2-dim plane, but something in between). Where that went I don't know.

My point is that a purely mathematical way of doing mapping and satellite imagery would be to have a mathematical formula that would literally crank out the map or image. It might be way too Nietzschean to say there is a mathematical way to derive and prove that a place on an x,y grid would be named "Cheshireton". But you get the point. This seems to be what they're after with Fortress, if I'm not totally reading it wrong.

I've read FP people make very bold statements, such as FP is the only way forward. I take it they mean the only way to get the math rigor, the reliability--and not just some way to do a (new) sorting algorithm better or faster. A system based on proofs should be capable of being built to the sky--and not have it topple like a tower of Babel because impure handwaving or "anecdotal programming paradigms" weakened the structure. At least that's the theory as I understand it. Corrections welcome.

One of the great burrs under my saddle is what Ted K. (the Unabomber), a mathematician, said about software: how it runs somewhere but no one knows much about it. It handles something critical, but is for all intents and purposes, maybe not out of control, but, say, extra-control. If software were like math, any competent "math weenie" should be able to grok it, especially its probabilistic dimensions. This would quiet poor Ted's mind, and mine, considerably more than the old, retired COBOL pro who had a look-see for $200/hr.

But having some FP in Python is a start in the search for this Holy Grail, right?

Rigor

But then what language does offer this sort of rigor?

Haskell is the "pure" FP language, though I wonder if you already know this and are asking a question that I don't quite understand. If you are looking for something that guarantees your software will never fail, then you are looking for the impossible. Even programs that have been "proven" correct may have an error in the specification.

If you want even more rigor than Haskell provides, take a look here: A Comparison of Z, B, and VDM.

Being sequential

I'm just a beginner starting a cs grad program, but as I understand it, FP was the beginning of the grand attempt to make a program conform to mathematical rigor. If that is the Holy Grail, then Python is definitely a Monty Pythonesque attempt to find this Holy Grail. But then what language does offer this sort of rigor?

If you are interested in "rigor" you also might be interested in rigorous software development methodologies. Therefore I strongly recommend commitment to the latest Waterfall 2006 conference.

Kay

Learning through meta-programming

When I find myself worrying about what Common Lisp's REDUCE actually does, I write myself a formal function:

(defun formal-function (x y)
  (list 'f x y))

This allows me to try it out at the REPL:
CL-USER> (reduce #'formal-function
                 '(x y z))
(F (F X Y) Z)
That helps, but Common Lisp's reduce has the keyword arguments :from-end and :initial-value. What do they do? What does :from-end default to if the programmer omits it? One could sit at the REPL and try them out by hand, or one could type in a form that does it automatically:
CL-USER> (dolist (direction '(nil (:from-end t) (:from-end nil)))
           (dolist (initial-value '(nil (:initial-value 0)))
             (let ((form (append `(reduce #'formal-function '(1 2 3))
                                 direction
                                 initial-value)))
               (format t "~&~S~%=> ~S~%if F = cons => ~S~2%"
                       form
                       (eval form)
                       (eval `(flet ((f (x y)(cons x y)))
                               ,(eval form)))))))
(REDUCE #'FORMAL-FUNCTION '(1 2 3))
=> (F (F 1 2) 3)
if F = cons => ((1 . 2) . 3)

(REDUCE #'FORMAL-FUNCTION '(1 2 3) :INITIAL-VALUE 0)
=> (F (F (F 0 1) 2) 3)
if F = cons => (((0 . 1) . 2) . 3)

(REDUCE #'FORMAL-FUNCTION '(1 2 3) :FROM-END T)
=> (F 1 (F 2 3))
if F = cons => (1 2 . 3)

(REDUCE #'FORMAL-FUNCTION '(1 2 3) :FROM-END T :INITIAL-VALUE 0)
=> (F 1 (F 2 (F 3 0)))
if F = cons => (1 2 3 . 0)

(REDUCE #'FORMAL-FUNCTION '(1 2 3) :FROM-END NIL)
=> (F (F 1 2) 3)
if F = cons => ((1 . 2) . 3)

(REDUCE #'FORMAL-FUNCTION '(1 2 3) :FROM-END NIL :INITIAL-VALUE 0)
=> (F (F (F 0 1) 2) 3)
if F = cons => (((0 . 1) . 2) . 3)

NIL
If you conceive of the code you are trying to write as a variant on copying a list, you can quickly see that you need to take

CL-USER> (reduce #'cons '(a b c) :from-end t :initial-value nil)
(A B C)

as your starting point.

I think that this brings up three issues for the programming language designer to consider.

First, confining your language's reduce to a limited set of approved, easy functions stops the apprentice programmer from using a formal function to see what it does.

Second, program-writing programs are directly useful in understanding reduce. I use reduce with a formal function to get (f (f 2 3)), and if I am uncertain of the implications of what I am seeing, I can evaluate (f (f 2 3)) with f bound to a function of interest to me.

The concern about reduce is that it is hard to understand. The context for that debate is a set of assumptions about the resources the apprentice programmer has available to help him study the language. A language designer might omit meta-programming from his language. An educator might schedule meta-programming as the final, esoteric initiation, too late to assist in understanding other language constructs, or omit it from his curriculum entirely.

My concern is that the difficulties of reduce might be an artifact of that context. If apprentice programmers were taught meta-programming, not for immediate use, but as a learning aid, then other language constructs might be easier to learn in this context. Meta-programming provides the opportunity to test out variations on hard language constructs, a fresh perspective on language syntax, and a source of exercises which are about the language being studied rather than a toy problem in a domain that might be unfamiliar to the student.

Is there a domino effect here? Python lacks meta-programming, so it is not available as an aid to studying the language. This makes it harder to learn Python, which forces hard constructs to be dropped as being too difficult to learn.
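For what it's worth, the formal-function half of the trick needs no macros at all; it can be sketched directly in Python (reduce lives in functools in Python 3):

```python
# A "formal function" for Python's reduce: it records its own
# applications as nested tuples, making reduce's grouping visible.
from functools import reduce


def f(x, y):
    return ('f', x, y)


print(reduce(f, [1, 2, 3]))     # shows left-to-right nesting
print(reduce(f, [1, 2, 3], 0))  # shows where the initial value lands
```

Only the eval-based exploration of keyword-argument combinations relies on Lisp's meta-programming; the diagramming step itself is portable.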

Is there a domino effect

Is there a domino effect here? Python lacks meta-programming, so it is not available as an aid to studying the language.

Does it?

This makes it harder to learn Python, which forces hard constructs to be dropped as being too difficult to learn.

The docstring of Python's reduce function works perfectly fine for me:

print reduce.__doc__
reduce(function, sequence[, initial]) -> value

Apply a function of two arguments cumulatively to the items of a sequence,
from left to right, so as to reduce the sequence to a single value.
For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
((((1+2)+3)+4)+5).  If initial is present, it is placed before the items
of the sequence in the calculation, and serves as a default when the
sequence is empty.

Documentation is nice when you have it

Python's docstrings are nice, but for them to work they have to be (1) present and (2) accurate.

That being said, I've often wished that more languages had ways to attach notes to code, rather than relying on specially marked comments.