## SequenceL - declarative computation on nonscalars

I recently came across the language SequenceL, which NASA seems to be using in some of its human spaceflight programs. SequenceL is described as a high-level language for declarative computation on nonscalars. One of its key features appears to be that recursive or iterative operations need not be specified explicitly. For example, given the function definition

  Search(scalar Target, tuple [Subscript, Word]) =
    Subscript when Word = Target


which applies to tuples, the programmer can apply the function directly to lists of tuples without any need to specify how that application will be performed, e.g.

  search(fox,[[1,dog],[2,cat],[3,fox],[4,parrot]]) → 3
  search(rabbit,[[1,dog],[2,cat],[3,fox],[4,parrot]]) → []


The language designers (Daniel Cooke and J. Nelson Rushton) claim that this kind of thing leads to more concise and readable code, and a more direct representation of a specification.
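In more conventional terms, the implicit application SequenceL performs might be sketched in Python as an explicit map over the list of pairs, keeping the non-empty results (the function names here are hypothetical, for illustration only):

```python
def search(target, pair):
    """Defined only on a single [subscript, word] pair, like the SequenceL version."""
    subscript, word = pair
    return subscript if word == target else None

def lifted_search(target, pairs):
    # What SequenceL does implicitly: map the pair-level definition
    # over the list and drop the "empty" (None) results.
    hits = [r for p in pairs if (r := search(target, p)) is not None]
    return hits[0] if hits else []

pairs = [[1, "dog"], [2, "cat"], [3, "fox"], [4, "parrot"]]
print(lifted_search("fox", pairs))     # -> 3
print(lifted_search("rabbit", pairs))  # -> []
```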

Unfortunately, the SequenceL website appears to be inaccessible at the moment. However, Daniel Cooke's site includes links to a number of papers and talks that describe SequenceL. In particular, the paper Normalize, Transpose, and Distribute: An Automatic Approach for Handling Nonscalars provides a detailed description of the "Consume-Simplify-Produce/Normalize-Transpose" approach that is embodied by SequenceL. There's also an overview of SequenceL available through CiteSeer.


### Seems kinda like the idea

Seems kinda like the idea that inspired what Itay explores here.

### Just mailed the SequenceL website issue to Dr. Daniel Cooke ...

Sorry about the website - we got seriously hacked, and the IT guys quickly swept in and disconnected us. I can send a SequenceL interpreter (a Unix binary) or other information to those who may be interested. We also have an alpha version of a compiler that generates multi-threaded C++ code for multi-core machines - but it is being commercialized. The CSP-NT identifies the parallelisms, so there is no need for programmer intervention. All the examples you may have seen (including the simple search above) are all you need to specify for the parallel code to be generated. We have a paper, just submitted to a workshop, that shows some of the parallel results. Please feel free to email me for the time being. (dcooke@coe.ttu.edu)

This approach extends every user function definition f:t->u to a function f':forall(F=Id|List|..) F(t)->F(u) - an overloaded function that accepts elements of type t as written (Id(t)==t), and also elements of type List(t), and so on, automatically performing a functor map operation on the elements.

By doing this, the system precludes the user from overloading a function irregularly on a container type (rather than just performing a map). It also prevents certain uses of universal and recursive types, since those situations lead to ambiguity over whether a given function application is intended to apply to the base type or to map over a container of another type.
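A minimal Python sketch of such an automatic lift, and of the ambiguity it creates, assuming nested lists as the container type:

```python
def lift(f):
    # Extend f to arbitrarily nested lists by mapping it recursively,
    # applying f itself only at non-list leaves.
    def g(x):
        if isinstance(x, list):
            return [g(e) for e in x]
        return f(x)
    return g

inc = lift(lambda n: n + 1)
print(inc(3))            # -> 4
print(inc([1, [2, 3]]))  # -> [2, [3, 4]]

# The ambiguity: if f itself consumes a list (say, list reversal),
# g cannot tell whether a list argument is a base-case value for f
# or a container to map over -- exactly the situation described
# above for universal and recursive types.
```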

I feel it's much cleaner to avoid this sort of feature in a language design, and compensate by making mapping syntactically easy as Haskell does with its list comprehensions.

### I like recursive map

I agree with this, but the paper makes a good case for having recursive map as a language feature. Does Haskell make recursive map easy?

Also, what if that were the only way to deal with lists in the language - would that help resolve the ambiguities?

### Got it wrong- it is not recursive map

In fact, the NTD system does not perform a recursive map in the instantiate case. Rather, it pushes the function down to the level of its declared argument type - here, vectors of scalars. This becomes obvious with reverse(vector x):

  reverse([[[1, 2, 3], [4, 5]], [6, 7]])
  transpose  -> [reverse([[1, 2, 3], [4, 5]]), reverse([6, 7])]
  transpose  -> [[reverse([1, 2, 3]), reverse([4, 5])], reverse([6, 7])]
  distribute -> [[[3, 2, 1], [5, 4]], [7, 6]]


If that were a recursive map, we would end up with [[7, 6], [[5, 4], [3, 2, 1]]].
I am mostly convinced that NTD is useful and am seriously thinking of adding it to my language dodo now, but it will require a special syntax - it will not be applied automatically everywhere.
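The distinction can be sketched in Python: an NT-style reverse maps itself down until its argument matches the declared depth (a vector of scalars), while a recursive map reverses at every level. The depth helper assumes non-empty, regularly nested lists:

```python
def depth(x):
    # Nesting depth of a regularly nested, non-empty list.
    d = 0
    while isinstance(x, list):
        d += 1
        x = x[0]
    return d

def nt_reverse(x):
    # reverse is "declared" on vectors of scalars (depth 1);
    # NT maps it down until the argument matches that depth.
    if depth(x) > 1:
        return [nt_reverse(e) for e in x]
    return list(reversed(x))

def recursive_map_reverse(x):
    # A genuinely recursive reverse: reverses at every level.
    if isinstance(x, list):
        return [recursive_map_reverse(e) for e in reversed(x)]
    return x

v = [[[1, 2, 3], [4, 5]], [6, 7]]
print(nt_reverse(v))             # -> [[[3, 2, 1], [5, 4]], [7, 6]]
print(recursive_map_reverse(v))  # -> [[7, 6], [[5, 4], [3, 2, 1]]]
```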

### Not good enough yet

I agree in principle, but my experience with APL variants has taught me that Haskell's support for this kind of thing still has a way to go. Applicative functor operations help a bit, but then that breaks down for infix operators, which sucks, because of course arithmetic ends up being lifted to sequences all the time.

Anyway, I agree that the "everything is auto-lifted all the time" approach leaves a lot to be desired, but once you've really used it, the convenience is hard to give up. I think the right balance remains to be found.

As for the balance, I think the spread operator (*.) in Groovy gets it about right. To lift a method call, just use *. instead of . as the method qualifier. Thus

x*.foo()

is equivalent to

x.collect { element -> element.foo() }

where "collect" is Groovy's name for what other functional languages call "map". Thus lifting isn't automatic and doesn't cover the recursive-lift case, but at one character it's hard to beat.
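In Python terms, the spread operator amounts to a single, explicit, one-level map (the Word class here is hypothetical, just to give the elements a method to call):

```python
class Word:
    # Hypothetical class for illustration only.
    def __init__(self, s):
        self.s = s
    def foo(self):
        return self.s.upper()

x = [Word("a"), Word("b")]

# Groovy's x*.foo() is one explicit, single-level map over x:
print([e.foo() for e in x])  # -> ['A', 'B']
```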

### Infix, multiple args?

How does spread handle lifting over multiple arguments or lifting infix operators? I'm thinking specifically of things like q's f[x;;y]'[w;z], which partially applies f to x and y and then maps the result over w and z, or x + y * z, which just "does the right thing" whether x, y and z are scalars, collections, etc.

(I'm not trolling, I'm really curious. I also think q does not set a high bar in any respect, I'm only using it as an example. Frankly, I think the language is a mess.)

### Neither

It only lifts the target, and doesn't do infix operators.

NT is only triggered when an actual argument of a function call has a different type (specifically, a more "deeply nested" type) than the declared type of the corresponding parameter in the function definition. Therefore, there is no ambiguity in functions on recursive data types: the function is strictly defined for the base case.

SequenceL prevents overloading of functions in ways that conflict with the "automatic" overloading induced by NT. The question is whether it is more valuable to write such overloaded definitions, or to have the convenience of NT. For all of the industrial-size applications we have written, NT was more handy. Your results may vary ;)

### K's atomic functions?

This seems very similar to K's 'atomic functions' (possibly extended).

For example, the "matrix multiplication" example from the paper above looks like this in K:

  _mul: {x _dot\:y}  / dot each row of x with the whole of y
  _dot: {+/x*y}      / sum of x*y


and

  1 2 + ((2 3;5 4);5 6 7)   / results in ((3 4;6 5);7 8 9)


In K, only a few functions are atomic - mostly the arithmetic ones. A generalization of this concept would seem very interesting. (I haven't read beyond the first pages yet :-))
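A Python sketch of K-style atomic addition - recursing into lists pairwise and broadcasting a scalar against a list - reproduces the example above:

```python
def atomic_add(x, y):
    # K-style "atomic" +: descend into lists, pairing elements
    # positionally (lengths assumed equal, as K requires), and
    # broadcasting a scalar against a list.
    xl, yl = isinstance(x, list), isinstance(y, list)
    if xl and yl:
        return [atomic_add(a, b) for a, b in zip(x, y)]
    if xl:
        return [atomic_add(a, y) for a in x]
    if yl:
        return [atomic_add(x, b) for b in y]
    return x + y

print(atomic_add([1, 2], [[[2, 3], [5, 4]], [5, 6, 7]]))
# -> [[[3, 4], [6, 5]], [7, 8, 9]]
```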