A name for this form of variable capture?

I have a question, which has arisen after running into an unexpected variable capture in a Python program I am working on.

Here is a sample:

def codeExample1(hashTable):
  def makeGenerator():
    for keyName in hashTable:
      def valueGenerator():
        for keyValue in hashTable[keyName]:
          yield keyValue

      yield (keyName,valueGenerator())

  return makeGenerator()

Now, I was expecting keyName in the function valueGenerator to be bound to the value of keyName at the current step in the iterator over hashTable, effectively creating a new copy of the valueGenerator function each step.

What occurred instead was that keyName was bound to the value of keyName at the last step of the iterator, effectively making each valueGenerator call the same.
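
For concreteness, here is a small driver (with a made-up sample dict, not part of my actual program) that reproduces what I saw:

sample = {'a': [1, 2], 'b': [3, 4]}

# Exhaust the outer generator first, then consume the inner ones.
pairs = list(codeExample1(sample))
for keyName, values in pairs:
  print(keyName, list(values))

# Both lines print the values of whichever key the outer loop visited last,
# e.g. "a [3, 4]" and "b [3, 4]", rather than [1, 2] and [3, 4].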

To correct this unexpected behavior, one would rewrite valueGenerator to take a parameter like this:

def codeExample1(hashTable):
  def makeGenerator():
    for keyName in hashTable:
      def valueGenerator(_keyName):
        for keyValue in hashTable[_keyName]:
          yield keyValue

      yield (keyName,valueGenerator(keyName))

  return makeGenerator()
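
(For completeness: another common Python idiom, which I did not use, is to snapshot the key with a default argument, since default values are evaluated each time the def statement runs, once per iteration:)

def codeExample1(hashTable):
  def makeGenerator():
    for keyName in hashTable:
      # the default is evaluated here, at definition time, on every iteration
      def valueGenerator(keyName=keyName):
        for keyValue in hashTable[keyName]:
          yield keyValue

      yield (keyName, valueGenerator())

  return makeGenerator()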

So that was a bit of a long lead-in, but what I'm trying to get at is this: what is the name for the type of variable capture exhibited by Python in this example, and what is the name for the variable capture I was expecting to occur?

Dynamic scoping?

Python is dynamically scoped, which causes more than a bit of heartburn when doing things like this. If a variable is not found in the current environment, then it is looked up in the *caller's* environment, then the caller's caller's, and so on up the call stack.

Other languages with dynamic scoping include Perl as well as early Lisp dialects. Many of the traditional Unix shells are also dynamically scoped.

The alternative, found in old procedural languages like Pascal, modern Lisps (including CL and Scheme), and functional languages like *ML or Haskell, is static (lexical) scoping. Free variables not in the current environment are resolved using the enclosing lexical scope, or its enclosing scope, and so on.

Lexical scoping makes creating closures much easier; simply reference the variables you want to close over at the time the closure is made, and it works.

Dynamic scoping is useful if you want the caller to be able to manipulate the environment of a function. But it isn't amenable to static analysis (a function can, in general, have arbitrary callers), so scope binding errors cannot be caught at compile time.

Common Lisp actually supports both types of scoping, though static scoping is the default. The switch from dynamic to static scoping is considered by many a key step in the development of Lisp.

It was my impression...

Python is dynamically scoped

It was my impression that Python was always statically scoped from the beginning, i.e. the call stack was never searched. It's just that nested blocks didn't necessarily capture bindings from the enclosing block. Instead, variables always came from the scope of the immediate block first, then a global scope, and then a "built-in" scope. Python 2.1/2.2 changed that and added nested lexical scoping. Here's the relevant spec. I'm not a Python expert, so somebody correct me if I'm wrong.
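
For example (my own minimal sketch, not from the spec), with the 2.2 rules a nested function resolves a free variable in its enclosing function rather than in its caller:

def makeAdder(n):
  def add(x):
    return x + n   # 'n' comes from the enclosing makeAdder, not from add's caller
  return add

addTwo = makeAdder(2)
print(addTwo(40))    # 42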

Other languages with dynamic scoping include Perl

I'm pretty sure Perl started with dynamic scoping only but later added lexical scoping in Perl 5, and lexical variables are generally recommended as default practice. This FAQ explains more.

modern Lisps (including CL..)...[are] static (lexical)

I seem to remember that in Common Lisp the default is lexical scoping, but dynamic scoping is there when you want it using defvar and defparameter. The CL Hyperspec has more.

Python is lexically scoped

Python is neither dynamically scoped nor statically scoped. It is lexically scoped, but name lookup is dynamic.

The issue is that the for loop does not create a new scope. So when the name is looked up, the value it is currently bound to is used, which is the last value from the loop.
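
A minimal example of the same effect, outside the original generator code:

funcs = []
for i in range(3):
  funcs.append(lambda: i)      # every lambda closes over the one and only 'i'

print([f() for f in funcs])    # [2, 2, 2], not [0, 1, 2]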

To quote myself

It was my impression that Python was always statically scoped from the beginning, i.e. the call stack was never searched. It's just that nested blocks didn't necessarily capture bindings from the enclosing block. Instead, variables always came from the scope of the immediate block first, then a global scope, and then a "built-in" scope. Python 2.1/2.2 changed that and added nested lexical scoping.

Edit: so I've rethought this since posting. I always thought of these kinds of scoping rules (both the original rules and the 2.2+ lexical rules) as "static" in the sense that, given a whole program, one can statically determine the scope of a symbol. However, languages like Python put an interesting wrinkle on things: dynamically loaded modules can add symbols to the global scope. In and of itself that wouldn't be interesting, but what is interesting is that a given function could try to work with a symbol that hasn't been defined yet, handle the resulting exception in some useful way, then later get called after the symbol has been defined and work with it.

I'm still inclined to call it a static scoping rule, but there does seem to be something distinctly un-static about it. Thoughts?

It's not "unusual" scoping.

It's not "unusual" scoping. Python is closing over a mutable variable in the first case. Every valueGenerator closure is referencing the same variable, keyName, which is being mutated. Arguments in python are call-by-value, so in the second example, you are getting a copy of the contents (an object reference) of keyName. The second example doesn't rely on scoping (except for hashTable). This can be much clarified by assuming immutable bindings and explicitly having a reference data type e.g. a la ML.

I didn't mean to imply it

I didn't mean to imply it was 'unusual'; in fact I avoided that term on purpose.

What I wanted to imply was the behavior was unexpected (by me at least). What I expected was what Scott called 'lexical scoping', and you say can be achieved with immutable bindings.

I've been hit with these problems in Python quite a bit, and they are always nasty bugs to track down because the code looks correct, and is in fact operating as intended, except my intentions were wrong!

It is lexical scoping.

It is lexical scoping. Whether the bindings are mutable or immutable doesn't change that. Python is not doing anything unusual or unexpected here as far as scoping is concerned. If you implemented 'yield' in Scheme, it would do the exact same thing. This is just aliasing. The problems you've been hit with are not Python problems, they are mutation problems. Let's say Python had immutable bindings like most FP languages, but had an ML-like ref type, so you can make an immutable binding to a ref to model a mutable binding. new x makes a ref containing x, !r reads the contents of that ref, and r := y sets the contents of the ref r to y. So we are assuming something like

a = 3
a = a + 1

is not allowed, but you can say instead

a = new 3
a := !a + 1

to accomplish the same.

Your first block of code looks like (expanding the outer for):

def codeExample1(hashTable):
  def makeGenerator():
    keyName = new None
    for kN in hashTable:
      keyName := kN  
      def valueGenerator():
        for keyValue in hashTable[!keyName]:
          yield keyValue

      yield (!keyName,valueGenerator())

  return makeGenerator()

and your second looks like:

def codeExample1(hashTable):
  def makeGenerator():
    keyName = new None
    for kN in hashTable:
      keyName := kN
      def valueGenerator(_keyName):
        for keyValue in hashTable[_keyName]:
          yield keyValue

      yield (!keyName,valueGenerator(!keyName))

  return makeGenerator()

A little thought should reveal why they behave differently.

Loop variable

I would add that the real problem here is not mutability in general, but that in Python (and other imperative languages) the index variable of a for loop is considered mutable. This doesn't go down well with higher-order programming like we see here. I would even call it a severe design error.

Pedantic, but...

The problem isn't even necessarily tied to mutability. The loop index being mutable wouldn't cause this problem so long as each loop iteration used a different mutable value.
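
A sketch of that idea in today's Python, faking a fresh cell per iteration with a helper function (my own illustration, not a proposed language change):

funcs = []
for i in range(3):
  def capture(j):              # each call gets a brand-new local binding 'j'
    funcs.append(lambda: j)
  capture(i)

print([f() for f in funcs])    # [0, 1, 2]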

Not pedantic - good design alternative

It's not necessarily pedantic; it seems like a good design alternative.

If a language doesn't generally have a concept of immutable variables then adding such a concept just in "foreach" loops might be a fairly major distraction. But specifying that "foreach" creates a binding to a fresh new reference cell on each iteration would solve the OP's problem and still stay true to the rest of the language.

If efficiency's a concern then the compiler/interpreter might even be able to use escape analysis to determine when it could reuse the same reference cell or even put it on the stack safely.

foreach's hidden functions

But specifying that "foreach" creates a binding to a fresh new reference cell on each iteration would solve the OP's problem and still stay true to the rest of the language.

I agree, that's a nice idea. It has the benefit of being completely equivalent to a functional foreach, e.g.:

 (foreach-key collection (lambda (keyName) ...))

And efficiency concerns can be handled by the standard analyses for optimizing lambda-based code, no specialized analyses should be needed.
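
In Python that equivalence might look like this (a sketch; foreach_key is a hypothetical helper, not an existing function):

def foreach_key(collection, body):
  for key in collection:
    body(key)                   # each call creates a fresh binding for body's parameter

funcs = []
foreach_key(range(3), lambda k: funcs.append(lambda: k))
print([f() for f in funcs])     # [0, 1, 2] -- each closure captured its own 'k'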

I'm now clear on what is

I'm now clear on what is occurring.

In the first example, each valueGenerator holds a binding to a reference to the content of keyName, and if the generators are evaluated after the for loop in makeGenerator they all will produce identical output.

Of course, what I had been expecting was that each generator would be given a copy of the value in keyName at its point of creation; this is what is done through explicit argument passing in the second example.

If the first example had worked as I wanted, would that still be considered lexical binding?

Sorta

If the first example had worked as I wanted, would that still be considered lexical binding?

Kind of, in that the binding can be determined statically, simply by examining the lexical structure of the code. However, the semantics you describe would be different from most languages which use lexical binding, so simply describing it as such could be misleading. You'd also have to describe what happens to mutable variables that are closed over.

Writing an interpreter is a highly recommended way to become familiar with these issues. That's what books like SICP and EOPL take you through.

Ok, so obviously I am missing

Ok, so obviously I am missing a lot of background here, so bear with me.

From my understanding, the variable keyName is closed over when valueGenerator() is called, not when it is defined, and in Python when you close over a mutable variable the closure is given a reference to the mutable variable. What I was expecting was that I would be given a copy of the mutable variable.

So to bring this back to my original question, does the semantic that mutable variables are by default copied into closures have any proper name? Conversely, does the typical semantic, the one Python exhibits, also have a proper name?

SICP

Let me heartily re-recommend Structure and Interpretation of Computer Programs (commonly called SICP). It clarifies the distinction between symbols, bindings, memory locations, etc. in a somewhat simpler language (Scheme) that can still have the same issues.

What's being captured by your closure is a "binding" - the relationship between a symbol (in this case the variable keyName) and the thing the symbol refers to (in this case a mutable cell which, confusingly, is also often called a variable).

Since bindings are what are normally captured I don't know if there's a distinct name for this form of capture. The alternative you're talking about, where the value of the cell is captured instead of the binding, may have a name but I've never heard of it.
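
A small Python sketch of the two disciplines (the names and the value-capture emulation via a default argument are mine, not standard terminology or machinery):

x = 1

def by_binding():
  return x                     # captures the binding; sees later assignments to x

def make_by_value(v=x):        # the default argument snapshots x's current value
  return lambda: v

by_value = make_by_value()

x = 2
print(by_binding())            # 2 -- the binding was captured
print(by_value())              # 1 -- the value at definition time was captured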

Anybody? Capture by reference vs capture by value?

Hmm...

I think it would be too unusual to have a name. The Python semantics are far more common in imperative languages. It's interesting to note that the semantics proposed by the original poster are, in fact, used in Java for bindings captured by inner classes. But Java introduces the restriction that such bindings must be declared final in the outer scope, presumably just to avoid having to name the capturing discipline: for immutable bindings, the two disciplines are indistinguishable.

We cannot implement what we cannot name?

...presumably just to avoid having to name the capturing discipline:

It would be fun seeing that being defended: "we didn't do this because we couldn't think of a name for it!"

But afaik the actual reason was that it meant that inner classes could be implemented without having to deal with extending the lifetime of the enclosing scope, which tends to involve copying the stack, heap-allocated frames, etc.

It's the reason inner classes could be implemented long ago with nary a second thought, compared to full closures which are only now being accepted for implementation in an upcoming version.

(Tangentially, James Gosling's recent post about closures is interesting.)

Could have been done

But afaik the actual reason was that it meant that inner classes could be implemented without having to deal with extending the lifetime of the enclosing scope, which tends to involve copying the stack, heap-allocated frames, etc.

It could have been implemented by adding another level of indirection to a mutable heap object under the hood just for captured mutable references - the compiler has plenty of info to not heap allocate the whole frame. That's what Java programmers do by hand when they want to achieve this effect and, AFAIK, how Scala achieves the effect on the JVM. I haven't kept up on the latest Java 7 proposals, but I wouldn't be surprised if the BGGA proposal does just that.

But in essence you're right about the reasons. I remember reading something somewhere that said the motivation for limiting capture to immutable references was to avoid that invisible compiler magic of an extra level of indirection and the concomitant extra heap allocation. I just wish I remembered where I read that and whether the person who wrote it was actually "in the know" or just speculating.

heap big trouble

It could have been implemented by adding another level of indirection to a mutable heap object under the hood just for captured mutable references

That's still equivalent to heap allocating the environment for the closure, and heap allocation is one of the things they apparently didn't want to do:

I just wish I remembered where I read that and whether the person who wrote it was actually "in the know" or just speculating.

Guy Steele discusses it in an ll1-discuss post:

Actually, the prototype implementation *did* allow non-final variables to be referenced from within inner classes. There was an outcry from *users*, complaining that they did not want this! The reason was interesting: in order to support such variables, it was necessary to heap-allocate them, and (at that time, at least) the average Java programmer was still pretty skittish about heap allocation and garbage collection and all that. They disapproved of the language performing heap allocation "under the table" when there was no occurrence of the "new" keyword in sight.
...
One of the early design principles of Java was that heap allocation occurs if and only if a construct involving the "new" keyword is executed. Adding full-blown closures violated this design principle. (Dynamic class loading also violated the principle, but users seemed to be comfortable with that, perhaps because they believed that "once the computation got going" no more heap allocation would occur.)

I Had Not Known This...

...so thanks for the quote. Very enlightening!

I remember when inner classes came out, in Java 1.1. My disbelief was profound, because I had met the designer of the spec, John Rose, some years earlier thanks to an introduction from John Ulrich, one of the developers of MacScheme. Ulrich was excited about Rose's work on Integrating the Scheme and C Languages. So it was a bit odd to see someone who understood closures extremely well design such an ugly hack. :-) Having the context of early user feedback, and the design rationale that heap allocation only occurs as a consequence of applying the "new" operator, clarifies things tremendously.

That was it

That was the post I had read. You'd think I would remember that Steele wrote it, eh?

It does illustrate one of the complexities in doing usability studies on a language. If the testers had been Schemers there would have been complaints about a million other things, but not that. But the testers were likely former C++ folk who of course expected to know exactly what was on the heap and how it got there and when it was going to go away.

C++ folk indeed

...we were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp.

Guy Steele

The Pied Piper of Mountain View

Of course, it must be mentioned that the "lot of C++ programmers" who were dragged "halfway to Lisp" consisted largely of the bad ones.

C++ owes a debt of gratitude to Java. Contrary to numerous mid-90s predictions of C++'s resulting demise, the language today is flourishing, shorn of an application domain (bespoke software) for which it was woefully inappropriate, both technically and culturally.

Of course, the poor folks who were dragged "halfway to Lisp" found out that this is like winning a flight "halfway to Hawaii". (Assume, for purposes of this analogy, that you start on the US West Coast...)

Heads, tails...

But afaik the actual reason was that it meant that inner classes could be implemented without having to deal with extending the lifetime of the enclosing scope, which tends to involve copying the stack, heap-allocated frames, etc.

Sure, but there are two sides to that coin... In terms of implementation rationale, I'm sure you're absolutely right. But I think in terms of semantics, my view is just as good... I imagine the original conversation like this:

"Well, to make this variable visible in the inner class, we'd have to copy the whole stack, or else require a complicated analysis [etc, etc, redacted...]"

"Hmm, that's a problem... Or else! We could just copy the content of the variable into a new variable with the same name! The programmer would never be the wiser!"

"Brilliant! But oh sweet Jesus, what if they change this new variable? How will we propagate the change to the old variable?"

"Crud! Correct as usual, Uncle Friday... But wait! What if we just force them to make the old variable final?! Then they'll really never know the difference, and better yet, we won't even have to explain it!"

"Let's ship it!"

I'm sure that's inaccurate in any number of ways, but it's my fantasy, so I'll have it the way I want. In any case, this is the binding discipline the original poster described, and as far as I know there's no name for it but "broken"... ;)

But wait! What if we just

But wait! What if we just force them to make the old variable final?! Then they'll really never know the difference, and better yet, we won't even have to explain it!"

It is a pretty compelling argument. :)

Re that binding discipline being "broken", I wouldn't go that far. "Limited", perhaps. As you pointed out, "for immutable bindings, the two disciplines are indistinguishable" - so it's a hybrid model, with closures being purely functional w.r.t. their lexical environment.

Actually, I agree.

Actually, I agree with you. I was mostly kidding when I said "broken". In fact, I think it's a clever and reasonable compromise, given their constraints. On the other hand, it sure has frustrated me at times.

Well it seems my question

Well it seems my question has been answered, thanks for all the replies.

In summary, the binding semantics really have no formal names beyond the self-descriptive 'capture by *'.

As for the semantics exhibited by Python, they are obviously correct for Python programs; as such, nothing is broken or even 'wrong'. Perhaps the implementation is a compromise, perhaps there are better solutions, but that is how Python does it. C'est la vie.

Also, maybe it's time I start looking for a programming language with more functional semantics, one that treats data as immutable, since these little mutability cases seem to be the source of most of my Python bugs.

:)

Also, maybe it's time I start looking for a programming language with more functional semantics, one that treats data as immutable, since these little mutability cases seem to be the source of most of my Python bugs.

Now there's something I'm sure we can all agree on!