Higher order functions vs. function arity and calling conventions

Languages like CL or Scheme may have complex lambda lists, but still have a single "calling convention."

OTOH, Haskell, just for example, supports multiple-arity functions either via a single tuple argument or via curried function definitions.
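
For concreteness, the two styles of definition (plusT and plusC are just illustrative names):

plusT :: (Int, Int) -> Int   -- tupled: one argument, a pair
plusT (x, y) = x + y

plusC :: Int -> Int -> Int   -- curried: one argument at a time
plusC x y = x + y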

So now we look at some function like map, or say map/2 if arity overloading is not allowed. Let's presume that map/2 is a curried function. We might call it like this.


map/2 plus [1,2,3] [2,3,4]
  => [3,5,7]

Now, should plus here be a curried function or a function that takes a tuple of two arguments?
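
In standard Haskell (just as a sketch, since map/2 itself is hypothetical), the two readings correspond to different idioms:

zipWith (+) [1,2,3] [2,3,4]              -- if plus is curried
  => [3,5,7]

map (uncurry (+)) (zip [1,2,3] [2,3,4])  -- if plus takes a tuple
  => [3,5,7]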

This is NOT a "factual" question about Haskell specifically.

I'm more interested in how higher order functions might or might not support functions of multiple "calling conventions" (more below) in some more or less elegant manner. Should the "calling convention" be considered part of the type of the function and a kind of "type parameter" of higher order functions like map/2 above?

Just for the sake of argument, imagine that we supported a variety of calling conventions typical of the Windows and some C compilers - fastcall, c, pascal, stdcall, etc. - that are language level keywords.

Can we imagine a function type signature such as:


iplus :: stdcall Int -> Int -> Int  -- what do arrows really mean at this point?
iplus x y = x + y

map/2 iplus [1,2,3] [4,5,6] -- What happens now?

Presumably this is no longer "really" a curried function, and we might devise some other syntax to make this more clear, something like:


iplus :: stdcall Int * Int -> Int
iplus x y = x + y

map/2 iplus [1,2,3] [4,5,6] -- What happens now?

My point is to show that a more flexible language with regard to calling convention, here facilitating direct integration with OS supported calling conventions, further complicates cleanly implementing simple higher order functions like map or filter or any? or every? or foldl and so on - unless I'm missing something (which I hope I am).

Any words of wisdom on defining/implementing higher order functions (and compiling these) in the face of multiple language supported calling conventions?

Mucho thanks!

In dynamically typed

In dynamically typed Haskell:

apply f [] = f
apply f (x:xs) = apply (f x) xs

-- e.g. apply plus [1,2]

mapVarargs f xss = map (apply f) (transpose xss)
-- where transpose transposes a list of lists

-- e.g. mapVarargs plus [[1,2,3],[4,5,6]]
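
In actual (typed) Haskell this can't be typed as written, since apply's type changes with the number of arguments, but a typed approximation that trades the argument count for a single list argument works (mapListArg is just an illustrative name):

import Data.List (transpose)

mapListArg :: ([a] -> b) -> [[a]] -> [b]
mapListArg f = map f . transpose

-- e.g. mapListArg sum [[1,2,3],[4,5,6]]  =>  [5,7,9]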

HOFs imply a calling convention

Even in Scheme one can have arbitrarily many calling conventions. The binary addition function can be represented as a two-argument procedure in the usual way, a one-argument procedure accepting a two-element list, a one-argument procedure accepting a two-element vector, a two-argument function accepting two zero-argument procedures which return the actual values to be added, ....

In standard Scheme, at least, there is no way to introspect on a procedure to determine even the number of arguments it accepts, never mind what types they can have. As a consequence, a HOF like Scheme's three-argument map simply has to assume a particular calling convention, and in fact it assumes the two-argument procedure calling convention. This choice is justified only by convenience. It is the two-argument addition procedure that gets the standard name +, and it is the mapping procedure that calls + and procedures like it that gets the standard name map. If it's important to you to invoke functions with a non-standard calling convention from a mapping procedure, simply write such a mapping procedure and give it a new name.

Can't Scheme just use apply (most of the time)

The exception would be #:start style keyword arguments, which I believe aren't first class in Scheme.

IIRC, CL can apply keyword arguments as well.

So we hope + is defined as (define (+ x . xs) (foldl %primop+/2 x xs)) or similarly.

High-level vs. Low-level calling conventions

When we started the BitC work, we initially believed that the Haskell tuple convention handled arity. The early BitC procedure call semantics was specified in terms of tuplization.

We shortly realized that tuples are moderately strange beasts from a decomposition standpoint, and we re-specified our tuples as pair-consing (in BitC, pairs are unboxed types having unboxed content). That is (1,2,3) became convenience syntax for (1, (2, 3)). Then we noticed that if the third argument has parametric type we had lost arity again. So we considered re-defining the argument convention to always pair-cons unit onto the end. And then we noticed that we now needed to know arity as part of type in order to decide how to parse curried application correctly. At that point we gave up and went to multi-argument procedures.
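
A small Haskell rendering of that arity problem, with the pair-consing written out explicitly (f is a hypothetical example, not BitC code):

f :: (Int, (Int, a)) -> Int    -- intended as a 3-argument function
f (x, (y, _)) = x + y

-- Instantiating a := (Int, Int) gives
--   f :: (Int, (Int, (Int, Int))) -> Int
-- which, under the pair-consing encoding, is indistinguishable
-- from a 4-argument function: arity is lost again.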

More recently, I realized that the compiler can internally re-write curried application syntax into arity-aware application after the initial typing pass provided the arity is known either lexically or from type. The problem is that we need to be able to tell the difference at application time between:

'a -> 'b -> 'c   ; curried
'a 'b -> ('c)    ; two args returning 'c
'a -> 'b -> ('c) ; alternate writing of 'a 'b -> 'c

That is: given knowledge of arity, we can preserve the syntactic style of currying, but not its semantics. We eventually decided not to do this, because the implicitly curried style of application makes it hard for the programmer to understand which applications allocate from the heap and which do not. Allocation from the heap is not a problem, but given the design goals of BitC we want it to be syntactically evident when it happens.

From an implementation perspective, arity dictates register selection at the calling convention level, so the two types above are distinct at the implementation level. Given this, I wonder if we haven't abstracted too far by adopting the currying concept.

Though I concede that arity complicates the type rules. :-)

By the way, there is a subtlety with your example:

iplus :: stdcall Int * Int -> Int
iplus x y = x + y

The question is whether the types:

Int * Int -> Int
Int -> Int -> Int

are the same. If they are, then note that dropping the declaration can lead to a calling convention violation, and consequently to a safety failure. This can be worked around in the implementation either by trampoline support or link failure, but something needs to handle it.

Mind explaining what's going on with the syntax?

'a -> 'b -> 'c ; curried
'a -> 'b -> ('c) ; alternate writing of 'a 'b -> 'c

You put parentheses around 'c to change the meaning of the arrows?

He expanded on this proposal

He described this proposal on the BitC mailing list back in March if you're interested. The general idea is that the bracketing indicates where a closure allocation occurs.

Missing it

Is the post you linked to supposed to explain the syntax here?

Yes, he shows an example or

Yes, he shows an example or two of this bracketing notation.

# let multiply x y = x *

# let multiply x y = x * y;;
val multiply : int -> int -> int =        (* correct, and expected *)
# let multiply2 x = (fun y -> x * y);;
val multiply2 : int -> (int -> int) =     (* correct, but undesired *)

I assume you mean this (mailing list version), but this doesn't look the same as what's above (LtU version). The mailing list version seems to suggest that we greedily group arrows together into the largest possible atomic N-ary function type, using parens to indicate where first order functions are passed around. The LtU version doesn't follow that pattern, AFAICT.

Declarations and function types...

This was more or less just a "casual example". We might restrict our "special" calling conventions to functions with type declarations, as above.

Or we might allow the function definition itself to be written as follows without a type declaration:

stdcall plus x * y = x + y

This would allow the type inference mechanism to do its magic on the argument and return types, while still specifying the calling convention.

BTW, "stdcall" is meant as the Windows API calling convention (a hybrid of the C/Pascal calling conventions), not some abstract concept of "the standard calling convention."

Perhaps better examples would have been:

fastcall plus x * y = x + y
pascal minus x * y = x - y

To address a comment from jcowan, we have this lovely parametric polymorphism that allows us, among other things, to write, name, debug and maintain one type-parametrized function definition instead of many. While naming seems insignificant, it's a major boon to the API-riddled brains of us mere mortals :-)

So why not for function calling conventions? NOTE: For now, let's put aside a few of the "typed-scheme" papers describing typing strategies for variable arity functions, like map or zip, in cases such as:

x = map odd? [1,2,3]
y = map (+) [1,2,3] [2,3,4]
a = zip [1,2,3] [2,3,4] [3,4,5]
b = zip [1,2,3] [2,3,4]

Let's skip these simple arity issues altogether for now, and instead focus on calling conventions - whether de facto machine standards or language level, such as "all standard functions are of one argument" vs. "in addition we also support functions of complex CL style opt, rest and key lambda lists", just for examples.

So, for a language that is intentionally "promiscuous" with regard to (externally linkable libraries of) functions with many different calling conventions - it would be *very* nice to have a "parametric parameter" over the calling convention(s) of higher order functions.

I imagine that this might require:

(1) some auto-generation of "call-wrappers" around actual function parameters of "non-standard" calling conventions (with some loss of performance; a sketch follows below) or ...

(2) better still, some compilation strategy akin to C# and friends' generation and compilation of separate code for generic functions applied to arguments of static types, allowing for processing of "unboxed" values that may not fit within a machine word, etc.

This compilation strategy, applied to functions of various calling conventions, might increase code size, but in principle at least it should yield much better performance than generating "function call wrappers" for a wide variety of function calling conventions.
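
As a rough illustration of option (1), a wrapper via GHC's FFI on a Windows target might look like this (a sketch only: the C-level symbol iplus and its purity are assumptions, and in practice the wrapper would be compiler-generated rather than hand-written):

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CInt)

-- a foreign function using the stdcall convention
foreign import stdcall "iplus" c_iplus :: CInt -> CInt -> CInt

-- generated wrapper presenting the native convention to HOFs
iplus :: Int -> Int -> Int
iplus x y = fromIntegral (c_iplus (fromIntegral x) (fromIntegral y))

sums :: [Int]
sums = zipWith iplus [1, 2, 3] [4, 5, 6]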

Well, fine

If you have a theory for doing parametric polymorphism on calling conventions (in the sense I note, which is quite generally applicable to any language that has containers of some sort), then by all means make use of it. I don't know of any such theory, though.

It sounds like the problems

It sounds like the problems stem from the combination of pair-consing and the tupled-argument calling convention. I'm not sure what the rationale for pair-consing is, but by eliminating that, tupled arguments should map nicely onto the native calling convention.

Furthermore, tuples provide the necessary heap-allocating explicit partial application support you're seeking.

p.s. Partial Application > Curried Functions

In general, I prefer a general partial application facility over single tuple-argument functions, and not allowing curried function definitions (via a special syntax, at least - one can always define curried functions explicitly with lambda syntax).

Forgive any stupid syntax, to which I'm not currently paying much attention:

times-plus (a,b,c) = a * b + c
a = map (times-plus(_,4,_),  [1,2,3], [2,3,4])  -- partial application
b = map (times-plus(3, ...), [2,3,4], [4,5,6])  -- akin to currying
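
Spelled out with plain lambdas in Haskell (timesPlus is just a legal-identifier rename of times-plus, and zipWith stands in for the two-list map):

timesPlus :: (Int, Int, Int) -> Int
timesPlus (a, b, c) = a * b + c

a = zipWith (\x z -> timesPlus (x, 4, z)) [1,2,3] [2,3,4]  -- partial application
b = zipWith (\y z -> timesPlus (3, y, z)) [2,3,4] [4,5,6]  -- akin to currying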

Having read a paper by SPJ, I believe, on how to best optimize curried functions passed to higher order functions (caller discriminated, IIRC), I came to the conclusion that it's best to simply forgo curried function definitions altogether. This seems to me to yield overall greater expressive power.

But maybe my preference is unfounded or due to some other language quality criteria of mine that I've not (yet) fully analyzed rationally.

N-ary functions vs currying vs tupling

If you want to optimize multiple argument functions, or make arity meaningful for purposes of interoperability, then both tupling and currying as a means for expressing multiple arguments induce their own set of problems, at least in combination with polymorphism.

If you prefer currying, and thus define the arity to be the number of arrows in the type, then functions that are polymorphic in their codomain cause a problem, e.g. forall a. T -> a. Instantiation with an arrow type "raises" the arity.

If you prefer tupling, and thus define the arity to be the arity of the tuple left of the arrow, then functions that are polymorphic in their domain cause a problem, e.g. forall a. a -> T. Instantiation with a tuple type "raises" the arity.
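
A small Haskell sketch of both failure modes (size is a made-up example):

-- Currying: arity = number of arrows, so codomain polymorphism raises it.
-- id :: a -> a looks unary, but at a := (Int -> Int) it reads
--   id :: (Int -> Int) -> Int -> Int
-- and an application like (id succ 3) can be seen as unary or binary.

-- Tupling: arity = width of the tuple left of the arrow, so domain
-- polymorphism raises it.
size :: a -> Int          -- looks unary
size _ = 0
-- At a := (Int, Int) it reads  size :: (Int, Int) -> Int,
-- which under the tuple convention counts as binary.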

As shap was already saying, the only way to solve this statically is by introducing n-ary function types as a separate notion, so that T,U -> V is different from T -> U -> V as well as T * U -> V.

Otherwise, you will either need monomorphisation (i.e., whole program compilation or jitting), or some runtime arity conversion. Most FPL implementations go for the latter, in various forms.

PS: Not sure why you are suggesting that tupling is more powerful. Both approaches are equivalent in expressive power. The main question is what your implementation and your syntax optimizes for. (You can even optimize for both, but then things can really get hairy.)

Thanks for the clarification!

Thanks for the clarification!

Tuples vs. Currying

Here's a link to the SPJ article to which I referred above.

Making a Fast Curry: Push/enter vs eval/apply for Higher-Order Languages

I suppose implementation concerns drive my preference for using tuples to express arity. For a function whose argument syntactically destructures a tuple, one can generate both a "direct call" taking arguments on the stack as well as a second tiny "stub call" that takes a reference to a tuple. The stub unpacks the tuple elements and then calls the "direct call". In a simple implementation, HOFs are just passed the "stub call" taking a tuple reference.
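
A source-level sketch of that split (hypothetical names; in reality the stub would be compiler-generated and the distinction lives at the calling-convention level, not in the surface language):

timesPlusDirect :: Int -> Int -> Int -> Int   -- "direct call": arguments passed individually
timesPlusDirect a b c = a * b + c

timesPlusStub :: (Int, Int, Int) -> Int       -- "stub call": unpacks the tuple and forwards
timesPlusStub (a, b, c) = timesPlusDirect a b c

-- a simple implementation hands HOFs the stub:
results = map timesPlusStub [(1, 2, 3), (4, 5, 6)]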

In the case of a function that takes a non-destructured tuple type argument, we need only generate a single function taking a reference to the tuple actual argument.

Favoring tuples and requiring explicit syntax for partial application, if I'm thinking straight, is less expressive than using curried functions. If three-arg-fun is curried, a call such as map three-arg-fun [1,2,3,4] will nicely return a list of functions.

A tuple taking version of three-arg-fun in the above call would just generate a type error instead, and require some syntax like map (\x -> three-arg-fun(x, ...)) [1,2,3,4] to accomplish the same thing.
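
Concretely, in Haskell (using a two-argument plusT for brevity; the "..." partial-application syntax above is hypothetical):

plusT :: (Int, Int) -> Int
plusT (x, y) = x + y

partials :: [Int -> Int]
partials = map (+) [1,2,3,4]                     -- curried: a list of partial applications

-- map plusT [1,2,3,4] is a type error; the tupled version needs a lambda:
partialsT :: [Int -> Int]
partialsT = map (\x y -> plusT (x, y)) [1,2,3,4]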

The flip side

map three-arg-fun [(1,'a',1.0), (4,'b',6.0)]

That only works for tuples (assuming the obvious types).
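
Concretely (threeArg and its types are made up for illustration):

threeArg :: (Int, Char, Double) -> String
threeArg (n, c, d) = show n ++ [c] ++ show d

flipSide :: [String]
flipSide = map threeArg [(1, 'a', 1.0), (4, 'b', 6.0)]
-- a curried three-arg-fun would instead need an uncurrying wrapper here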

Very good point....

and I'd wager that this flip-side "apply-like function application behavior to tuple-grouped arguments" is of greater general programming utility than the automagical partial application of curried functions.

But maybe that's just an issue of individual programming style, targeted application domain or what have you.

But

But at least in a dynamically typed language, it's not hard to work around it:
map (apply curried-three-arg-fun) [(1, 'a', 1.0), (4, 'b', 6.0)]

If functions are untyped and curried by default, this form of apply is trivial. Just fold the original function over the list (or tuple) of arguments, until either you've used them all or you get an error ("attempt to call a non-function" or something like that).

Uniformly Applicative Structures

Perhaps this is relevant: Bellot and Jay describe uniformly applicative structures. "...polyadicity is almost never a primitive concept. It is carried out through curryfication or structuration." They go on to explain a calculus that is intended to naturally handle variable-arity functional forms. It appears to be a formalization of the approach taken in Bellot's language GRAAL.