Formal treatments (or examples) of "function concatenation"?

We're familiar with the notion of function composition which is essentially sticking functions end-to-end:

(a -> b) -> (b -> c) -> (a -> c)

However, is there any formal treatment of function "concatenation" for functions that operate on n-tuples that would be essentially sticking functions "side-by-side"? The type would be something like this (where a, b, c, and d are n-tuples and ++ is the concatenation operator):

(a -> b) -> (c -> d) -> (a ++ c) -> (b ++ d)
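For one fixed choice of tuple shapes, the signature can be sketched directly in Haskell (flat tuples here; the combinator name is mine):

```haskell
-- A fixed-arity instance of the concatenation signature: f works on the
-- first two components, g on the third, and their results are concatenated.
concat2x1 :: ((a1, a2) -> b) -> (c -> (d1, d2))
          -> (a1, a2, c) -> (b, d1, d2)
concat2x1 f g (x1, x2, y) =
  let b        = f (x1, x2)
      (d1, d2) = g y
  in (b, d1, d2)
```

The general operator would have to work over all tuple shapes at once, which is where the type-level `++` comes in.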

I think a visual example allows a good intuition for what I'm talking about. If F is a function from a 1-tuple to a 2-tuple, and G is a function from a 2-tuple to a 1-tuple, then their composition would be this:

     IN
      |
   .--o--------.
   |     F     |
   |--o-----o--|
      |     |
   .--o-----o--.
   |     G     |
   |--o--------|
      |
     OUT

and their concatenation would be this:

     IN          IN    IN
      |           |     |
   .--o--------.--o-----o--.
   |     F     |     G     |
   |--o-----o--|--o--------|
      |     |     |
     OUT   OUT   OUT

Composition indicates a dependence (e.g. G is dependent on the output of F) that requires sequential evaluation whereas concatenation indicates an independence (e.g. F and G do not share inputs) that allows for parallel evaluation. Both operations are associative.

There exist operations with a similar spirit to the concatenation expressed above: the functional form of construction in Backus's FP, the *** operator for the -> arrow in Haskell, and the family of spread combinators in Factor to name a few. However, none of these deal with concatenated inputs/outputs and none of these are associative, so they're not quite what I'm after.
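For reference, here is `(***)` from `Control.Arrow` at the function type, which pairs rather than concatenates, so repeated use nests instead of flattening:

```haskell
import Control.Arrow ((***))

-- (***) :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
-- side-by-side, but on nested pairs, not concatenated tuples
example :: (Int, String)
example = ((+ 1) *** show) (41, 42 :: Int)
```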

Any pointers towards related mathematical notions or languages with a similar feature would be much appreciated.


Mathematically invisible

I am tempted to give a mathematical explanation that involves some phrase of the form “Mathematicians usually consider functions operating on tuples, rather than multi-place functions”; but you've explicitly mentioned that you're operating on tuples, so I must be missing something.

Anyway, here's a go at it. A function F such as you indicate seems to be something like F : A \to X \times Y, and a function G to be something like G : B \times C \to Z (the notation is mathematical, not computer-scientific, or whatever the adjective is). To such maps we naturally associate a map A \times (B \times C) \to (X \times Y) \times Z, and then use associativity of Cartesian products as necessary to re-bracket the domain and co-domain.

As such, the answer seems to me to be that this is just what a mathematician would think of as a product function on a product space, written as F \times G; but, again, this relies on some invisible re-bracketing that one might not want to make invisible in programming.

Have I missed the point of the question?

It would seem so

It would seem so, but that's my fault for not being more specific.

The problem with your description is that the product operator I'm working with isn't associative; ((a, b), c) is not the same as (a, (b, c)), as is usually the case in programming languages.

If F has the type:

(a, (b, (c, ()))) -> (d, ())  -- an arity of 3 -> 1

And G has the type:

(e, ()) -> (f, (g, ()))  -- an arity of 1 -> 2

Then the concatenation of F and G must have the following type:

(a, (b, (c, (e, ())))) -> (d, (f, (g, ())))  -- an arity of 4 -> 3

(I've opted to associate to the right above, but the left would obviously work just as well if done consistently.)

The only way I can give a dynamic semantics for the concatenate operation would be to assume that all functions take an n-tuple and return a pair, the first element of which is the "result" n-tuple and the second element of which is the "additional values unused in the computation" n-tuple. For example, the addition function would be implemented as such:

add : (int, (int, a)) -> ((int, ()), a)
add (x, (y, r)) = ((x + y, ()), r)

Essentially, all functions need to be lifted such that they return the values they don't use. (This is rather similar to the semantics for addition in a stack-based language with the difference that the "extra" values are kept separate rather than just appended to the result.) Composition would have to work as follows (where ++ is the tuple concatenation operation):

compose : (a -> (b, c)) -> ((b ++ c) -> (d, e)) -> a -> (d, e)
compose f g = g . (++) . f
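As a runnable special case (names mine): if `b` is a single value, the re-bracketing `(b, c) -> b ++ c` is essentially the identity, and the compose above collapses to ordinary composition:

```haskell
-- concrete instance of the compose above, specializing to the case where
-- the "result" part b of f's output is a single value, so no re-bracketing
-- beyond the pair structure itself is needed
compose1 :: (a -> (b, c)) -> ((b, c) -> (d, e)) -> a -> (d, e)
compose1 f g = g . f
```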

The point of all this is that we can implement concatenation as such:

concatenate : (a -> (b, c)) -> (c -> (d, e)) -> a -> (b ++ d, e)
concatenate f g = first (++) . groupl . second g . f
  where groupl (a, (b, c)) = ((a, b), c)
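Here is a concrete, runnable instance, specializing `b` to a single value so that `b ++ d` is just the pair `(b, d)` (a sketch, name mine; the general version needs the type-level `++`):

```haskell
-- f produces its result b plus the leftovers c; g consumes those leftovers,
-- producing d and the final leftovers e; the two results are concatenated.
concatenate1 :: (a -> (b, c)) -> (c -> (d, e)) -> a -> ((b, d), e)
concatenate1 f g a =
  let (b, c) = f a
      (d, e) = g c
  in ((b, d), e)
```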

As you can hopefully see in the type for 'concatenate', 'g' is operating on the values that were unused by 'f'. Obviously typing such an operation requires a type-level concatenation operator (and an additional kind of variable that must represent an n-tuple cons or the n-tuple null, again, as is the case for a stack based language (see Cat)).
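One way to realize that type-level concatenation operator in Haskell is a closed type family over the nested-pair encoding, with a type class driving the value level (a sketch; names are mine):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- nested-pair tuples with () as nil; Append is the type-level ++
type family Append xs ys where
  Append ()      ys = ys
  Append (x, xs) ys = (x, Append xs ys)

-- value-level ++, driven by the shape of the first tuple
class AppendT xs where
  appendT :: xs -> ys -> Append xs ys

instance AppendT () where
  appendT () ys = ys

instance AppendT xs => AppendT (x, xs) where
  appendT (x, xs) ys = (x, appendT xs ys)
```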

Arrows come to mind.

An arrow tutorial.

Conveyor belt metaphor looks close to your drawing.

Monoidal category

This is exactly the setting of a monoidal category.

"++" is you monoid operation, types (a, b, c, d) are object, function types (a -> b) are arrows, and your operation acts on both objects and arrow, with the property that any arrow f ++ g, with (f : a -> b) and (g : c -> d), is typed (a++c -> b++d).

Monoidal categories do not necessarily have an associative product, but (a++b)++c and a++(b++c) should be isomorphic (which is probably the case in your setting). The special case where this isomorphism is an identity is called a "strict monoidal category".

Graphical notations such as the one you show have been developed for monoidal categories. The idea is basically that you can concatenate the graphical representations of two arrows to get the graphical representation of the arrow produced by the operation.
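In Haskell terms, `(***)` on pairs is such a tensor on arrows, and the bifunctor (interchange) law that the monoidal structure requires can be spot-checked:

```haskell
import Control.Arrow ((***))

-- tensoring two compositions equals composing two tensors
lhs, rhs :: (Int, Int) -> (Int, Int)
lhs = ((* 2) *** (+ 1)) . ((+ 3) *** (* 4))
rhs = ((* 2) . (+ 3)) *** ((+ 1) . (* 4))
```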

Very helpful

That was very helpful, thank you.

Cartesian product of predicates

Suppose you have two predicates

R(x,y) and S(y,z)

equipped with functional dependencies (to make them functions)

R:x->y and S:y->z

then their composition is natural join projected to the set of their distinct attributes. What you called "concatenation" is Cartesian product. If we rename attributes (of one or both relations) to become disjoint, then it is again natural join.
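Modeling relations as lists of tuples, the two constructions can be sketched like this (toy data; names are mine):

```haskell
-- R(x, y) with functional dependency x -> y
r :: [(Int, Int)]
r = [(1, 10), (2, 20)]

-- S(y, z) with functional dependency y -> z
s :: [(Int, Int)]
s = [(10, 100), (20, 200)]

-- composition: natural join on y, projected to the distinct attributes x, z
composeRel :: [(Int, Int)]
composeRel = [ (x, z) | (x, y) <- r, (y', z) <- s, y == y' ]

-- "concatenation": Cartesian product (a natural join once the attributes
-- are renamed apart, so nothing is shared)
productRel :: [((Int, Int), (Int, Int))]
productRel = [ (a, b) | a <- r, b <- s ]
```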

Bigraphs

You might be interested in Robin Milner's bigraphs, which have the operations you are talking about, but the "concatenation" is called "juxtaposition" and corresponds to the tensor product in a symmetric monoidal category.

Faust

Yann Orlarey's Faust is a functional signal processing language. It combines the lambda calculus with a block diagram algebra (BDA). Your example would be written simply as F:G in Faust. BDA ops for parallel composition and "looping" are provided as well. Very useful (and fun!) for programming audio plugins, software synthesizers and the like. It's the only practical FPL for signal processing I know of that generates code able to compete with carefully hand-coded signal processing algorithms.

Finally

I've finally gotten an opportunity to look at this. The "second extension to sequential composition" and "parallel composition" in this paper were the operations I was looking for. Thanks for the pointer.