
Barendregt's Convention

Hi, I am doing some proofs on a calculus with existential types, and I am fairly liberally applying Barendregt's convention (approximately, that bound variables are distinctly named, and named apart from the free variables). My assumption is that I have to justify using this convention, and that the justification is along the lines of 'the rules of the calculus respect the convention'. However, I cannot find any information about a formal way of justifying this. Does anyone have experience with this, or pointers to relevant papers?
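
For concreteness, here is a throwaway sketch of the renaming step the convention presupposes (the term representation and names are my own, not from any particular formalization):

import itertools

# Terms: ('var', x), ('lam', x, body), ('app', f, a).
# Assumes free variables never use the reserved names x0, x1, ...
_fresh = ("x%d" % i for i in itertools.count())

def barendregt(term, ren=None):
  """Alpha-rename so every binder is distinct from every other
  binder and from every free variable."""
  ren = ren or {}
  kind = term[0]
  if kind == 'var':
    return ('var', ren.get(term[1], term[1]))
  if kind == 'lam':
    x = next(_fresh)
    return ('lam', x, barendregt(term[2], {**ren, term[1]: x}))
  return ('app', barendregt(term[1], ren), barendregt(term[2], ren))

# (lam x. x (lam x. x)) y  becomes  (lam x0. x0 (lam x1. x1)) y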

Thanks, Nick

Relating Complexity and Precision in Control Flow Analysis

Relating Complexity and Precision in Control Flow Analysis, David Van Horn and Harry Mairson. ICFP 2007.

We analyze the computational complexity of kCFA, a hierarchy of control flow analyses that determine which functions may be applied at a given call-site. This hierarchy specifies related decision problems, quite apart from any algorithms that may implement their solutions. We identify a simple decision problem answered by this analysis and prove that in the 0CFA case, the problem is complete for polynomial time. The proof is based on a nonstandard, symmetric implementation of Boolean logic within multiplicative linear logic (MLL). We also identify a simpler version of 0CFA related to eta-expansion, and prove that it is complete for logarithmic space, using arguments based on computing paths and permutations.

For any fixed k > 0, it is known that kCFA (and the analogous decision problem) can be computed in time exponential in the program size. For k = 1, we show that the decision problem is NP-hard, and sketch why this remains true for larger fixed values of k. The proof technique depends on using the approximation of CFA as an essentially nondeterministic computing mechanism, as distinct from the exactness of normalization. When k = n, so that the "depth" of the control flow analysis grows linearly in the program length, we show that the decision problem is complete for exponential time.

In addition, we sketch how the analysis presented here may be extended naturally to languages with control operators. All of the insights presented give clear examples of how straightforward observations about linearity, and linear logic, may in turn be used to give a greater understanding of functional programming and program analysis.

There's a ton of really good stuff in here; I was particularly fascinated by the fact that 0CFA is exact for multiplicatively linear programs (i.e., programs that use each variable at most once), because linearity guarantees that each lambda can flow to at most one use site.
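
To see concretely what goes wrong when linearity fails, here is a rough sketch (a toy formulation of my own, not the paper's MLL encoding) of 0CFA over a tiny lambda-calculus AST; all names and the representation are invented for illustration:

# Terms: ('var', x), ('lam', x, body), ('app', fun, arg).
# Assumes bound-variable names are pairwise distinct (the Barendregt
# convention again), since the monovariant environment is keyed by name.

def cfa0(root):
  nodes, kids = [], {}              # label -> node, label -> child labels

  def lab(t):                       # assign labels in a preorder walk
    l = len(nodes)
    nodes.append(t)
    if t[0] == 'lam':
      kids[l] = (lab(t[2]),)
    elif t[0] == 'app':
      kids[l] = (lab(t[1]), lab(t[2]))
    else:
      kids[l] = ()
    return l

  lab(root)
  flow = {l: set() for l in range(len(nodes))}  # C(l): lambdas reaching l
  env = {}                                      # r(x): lambdas bound to x
  changed = True

  def add(target, items):
    nonlocal changed
    before = len(target)
    target |= items
    if len(target) != before:
      changed = True

  while changed:                    # naive fixpoint iteration
    changed = False
    for l, t in enumerate(nodes):
      if t[0] == 'lam':
        add(flow[l], {l})
      elif t[0] == 'var':
        add(flow[l], env.get(t[1], set()))
      elif t[0] == 'app':
        fl, al = kids[l]
        for lam in list(flow[fl]):  # each lambda that may be called here
          add(env.setdefault(nodes[lam][1], set()), flow[al])
          add(flow[l], flow[kids[lam][0]])
  return flow, env, nodes

ID = ('lam', 'x', ('var', 'x'))
A  = ('lam', 'a', ('var', 'a'))
B  = ('lam', 'b', ('var', 'b'))
# (lam id. (lam _. id B) (id A)) ID -- id is used twice, so non-linear
prog = ('app',
        ('lam', 'id',
         ('app', ('lam', '_', ('app', ('var', 'id'), B)),
                 ('app', ('var', 'id'), A))),
        ID)
flow, env, nodes = cfa0(prog)
print(sorted(nodes[l][1] for l in env['x']))  # ['a', 'b']: the two calls merge

Because id is applied twice, both arguments flow into its parameter x, so each call site appears to return both A and B. In a linear program id could be applied only once, env['x'] would be a singleton, and the analysis would be exact, which is the observation above.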

A name for this form of variable capture?

I have a question that arose after I ran into an unexpected variable capture in a Python program I am working on.

Here is a sample:

def codeExample1(hashTable):
  def makeGenerator():
    for keyName in hashTable:
      # valueGenerator closes over the variable keyName itself, not its
      # current value; keyName is only looked up when the inner
      # generator is actually consumed.
      def valueGenerator():
        for keyValue in hashTable[keyName]:
          yield keyValue

      yield (keyName,valueGenerator())

  return makeGenerator()

Now, I was expecting keyName inside valueGenerator to be bound to the value of keyName at the current step of the iteration over hashTable, effectively creating a new copy of valueGenerator at each step.

What happened instead was that keyName in every valueGenerator referred to the value keyName had at the last step of the iteration, effectively making every valueGenerator the same.
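
The same late binding can be reproduced without generators at all; a stripped-down example:

# Each closure refers to the loop variable i itself; i is looked up
# when the function is called, by which time the loop has finished.
fs = [lambda: i for i in range(3)]
print([f() for f in fs])  # prints [2, 2, 2], not [0, 1, 2]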

To get the behavior I expected, one can rewrite valueGenerator to take the key as a parameter, so that it is bound when valueGenerator is called:

def codeExample1(hashTable):
  def makeGenerator():
    for keyName in hashTable:
      # _keyName is bound to the value passed in at the call
      # valueGenerator(keyName) below, so each generator gets its own copy.
      def valueGenerator(_keyName):
        for keyValue in hashTable[_keyName]:
          yield keyValue

      yield (keyName,valueGenerator(keyName))

  return makeGenerator()
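
Another common idiom for the same fix, which I believe works because default values are evaluated once when the def statement runs, is a default argument:

def codeExample1(hashTable):
  def makeGenerator():
    for keyName in hashTable:
      # The default value is evaluated at function-definition time,
      # so each valueGenerator keeps its own snapshot of keyName.
      def valueGenerator(keyName=keyName):
        for keyValue in hashTable[keyName]:
          yield keyValue

      yield (keyName, valueGenerator())

  return makeGenerator()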

So that was a bit of a long lead-in, but what I am trying to get at is this: what is the name for the kind of variable capture Python exhibits in this example, and what is the name for the kind of capture I was expecting?