Higher order functions vs. function arity and calling conventions

Languages like CL or Scheme may have complex lambda lists, but they still have a single "calling convention." OTOH, Haskell, just for example, supports multiple-arity functions either via a single tuple argument or via curried function definitions. So now we look at some function like map, or say map/2 if arity overloading is not allowed. Let's presume that map/2 is a curried function. We might call it like this:
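For concreteness (the plus function and the list of pairs here are illustrative, in a hypothetical Haskell-like surface syntax):

    -- map/2 applies the two-argument plus across a list of number pairs
    map/2 plus [(1, 2), (3, 4), (5, 6)]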
Now, should plus here be a curried function or a function that takes a tuple of two arguments? This is NOT a "factual" question about Haskell specifically. I'm more interested in how higher order functions might or might not support functions of multiple "calling conventions" (more below) in some more or less elegant manner. Should the "calling convention" be considered part of the type of the function, and a kind of "type parameter" of higher order functions like map/2 above? Just for the sake of argument, imagine that we supported a variety of calling conventions typical of Windows and some C compilers - fastcall, c, pascal, stdcall, etc. - as language-level keywords. Can we imagine a function type signature such as:
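Say, something like this (the placement of the stdcall keyword is purely hypothetical):

    -- a convention keyword attached to what is written as a curried type
    plus :: stdcall Int -> Int -> Int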
Presumably this is no longer "really" a curried function, and we might devise some other syntax to make this clearer, something like:
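Again purely hypothetical syntax, making the single-call, uncurried shape explicit, and sketching how the convention might then surface as a type parameter of map/2:

    -- one two-argument stdcall invocation, not two curried applications
    plus :: stdcall (Int, Int) -> Int

    -- map/2 would then seem to need the convention as a kind of type
    -- parameter, with cc ranging over fastcall, c, pascal, stdcall, ...
    map/2 :: forall cc. (a -[cc]-> b) -> [a] -> [b]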
My point is to show that a more flexible language with regard to calling convention, here facilitating direct integration with OS-supported calling conventions, further complicates cleanly implementing simple higher order functions like map or filter or any? or every? or foldl and so on - unless I'm missing something (which I hope I am). Any words of wisdom on defining/implementing higher order functions (and compiling these) in the face of multiple language-supported calling conventions? Mucho thanks!

Specifying Solvers?

In parsing, (non-dependent) type systems, and semantics, we have made remarkable progress in specification. In the first two areas, we have reference processing models and reference specification languages that tell us, definitively, whether a program is admissible or not. That is: they are definitive both positively and negatively. In the last, we have a definitive specification provided the program is well-formed and well-typed.

The situation seems less clear for solvers. We appear to have reference models for how a solver operates, and reference languages for specifying what a solver may do (that is: the legal operations), but no way of specifying what the solver must do, nor the conditions under which convergence on a solution (or at least certain types of convergence) is guaranteed.

One consequence of this is that the specification of languages that rely on general-purpose solvers is necessarily reduced to "the implementation is the definition", and in some cases this leaves us in a state where the admissibility of a program may depend on the details of a particular solver implementation, or even on the performance of the CPU on which the compilation occurs; a small sketch of this follows below. I see no problem with stronger vs. weaker compilers. My concern here is to ensure that there exists a precise specification of the language that all correct implementations must accept in order to be conforming implementations.

Is there work or progress along these lines that I should look at?
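To make that concrete, a sketch in the style of a refinement-typed language such as Liquid Haskell (the solver behavior described is an assumption about SMT-backed checkers in general, not a claim about any particular tool):

    -- Whether this definition is admitted can hinge on the SMT solver's
    -- handling of nonlinear integer arithmetic: x * x >= 0 is trivial to
    -- a human, but nonlinear arithmetic lies outside the decidable
    -- fragments many solvers commit to, so one solver (or one timeout
    -- setting) may accept the program while another rejects it.
    {-@ square :: x:Int -> {v:Int | v >= 0} @-}
    square :: Int -> Int
    square x = x * x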
1 year 5 weeks ago