APL, J, Hardware
started 8/19/2000; 12:34:47 AM - last post 8/22/2000; 11:10:30 PM
andrew cooke - APL, J, Hardware
8/19/2000; 12:34:47 AM (reads: 1010, responses: 3)
Chris Rathman posted a link to APL idioms that gives a flavor of the language. If that makes you curious, here's a nice introduction to APL and J. The FAQ contains more info.
After reading this report I realised that different hardware supports different kinds of multi-threading. It seems that IBM's next chips will be better at supporting languages with explicit threading within programs, while Intel's are aimed at detecting parallelism within small chunks of code. It struck me that APL or J, with their strong emphasis on arrays, seem perfect for the kind of Instruction Level Parallelism that Intel is concentrating on.
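A minimal sketch of what I mean (in Haskell rather than APL or J, and purely illustrative): an APL-style whole-array operation computes each element of the result independently of the others, which is exactly the freedom a compiler or CPU needs to overlap work at the instruction level.

    -- Hypothetical example: an array-at-a-time operation in the APL
    -- style. Each output element depends only on the corresponding
    -- inputs, so the multiplies and adds for different elements can
    -- be issued in parallel by an ILP-oriented processor.
    saxpy :: Double -> [Double] -> [Double] -> [Double]
    saxpy a xs ys = zipWith (\x y -> a * x + y) xs ys

    main :: IO ()
    main = print (saxpy 2 [1, 2, 3] [10, 20, 30])  -- [12.0,24.0,36.0]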
I've never thought much about the link between hardware and languages, although I recently read a comment that functional languages, with less emphasis on storing values in memory, should be more efficient in traditional architectures (where memory access is the bottleneck). Anyone know of other hardware/language connections?
Posted to "" by andrew cooke on 8/19/00; 12:41:24 AM
David Bakin - Re: APL, J, Hardware
8/19/2000; 8:25:35 AM (reads: 1158, responses: 2)
"that functional languages, with less emphasis on storing values in memory, should be more efficient in traditional architectures"
I don't believe this to be true. Although a true functional language has no user-accessible operations that store into memory, the implementation certainly stores into memory - otherwise why would we need memory at all? The only way to get efficiency in a functional language is to detect previously computed results and share them - this is the "update" problem - so there are stores due to that as well as due to result construction. In fact, the memory bandwidth usage of functional languages (in current implementations) ranges from somewhat higher to extremely higher than that of imperative languages, due to the high rate of structure (closure) construction, which puts great pressure on the garbage collector.
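A tiny illustration of the update problem (hypothetical Haskell, not from the post): a pure "update" of one array element conceptually yields a whole new array, because the old version must remain readable.

    import Data.Array

    main :: IO ()
    main = do
      let a = listArray (0, 4) [10, 20, 30, 40, 50] :: Array Int Int
          -- A pure update: b is a fresh array, and a naive
          -- implementation copies all five elements to change one.
          b = a // [(2, 99)]
      print (a ! 2)  -- 30: the old version is still intact
      print (b ! 2)  -- 99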
An interesting note about APL (and, by extension, J): being array based, they lend themselves to algorithms that are O(n) or O(n**2). That is, the notation encourages one to think in terms that lead to such algorithms. You have to go out of your way to program an algorithm that runs in O(log n) or O(n log n).
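For example (again a hedged Haskell sketch, standing in for APL): the natural array idiom for membership touches every element, O(n), while the O(log n) alternative demands a different data structure and no longer reads as a one-liner over an array.

    import qualified Data.Map as Map

    -- The "array style" membership test: scan everything, O(n).
    memberLinear :: Eq a => a -> [a] -> Bool
    memberLinear x = any (== x)

    main :: IO ()
    main = do
      print (memberLinear 500 [1 .. 1000 :: Int])       -- O(n) scan
      -- The O(log n) version means building a tree first:
      let m = Map.fromList [(k, ()) | k <- [1 .. 1000 :: Int]]
      print (Map.member 500 m)                          -- O(log n) lookup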
I really like APL but never quite got the hang of J. The Iverson vocabulary for programming-language syntax, which is stolen from natural-language grammar (noun, gerund, adverb, etc.), is so different from the way anyone else describes programming languages that I find it difficult to get used to. On top of that, J relies on the ASCII character set rather than a special character set like APL's, so the resulting programs look more like line noise than TECO ever did. (Not that anyone sees line noise anymore, since error-correcting modems have been ubiquitous for a decade - much less does anyone still know the joys of TECO.)
andrew cooke - Re: APL, J, Hardware
8/19/2000; 2:34:57 PM (reads: 1216, responses: 1)
I must admit that your point about closures is convincing, and the update problem too. But for the sake of argument (I've tried to remember where I found the comment, but had no success), can we ignore closures and large data structures for the moment?
With those out of the way, does the lack of "real variables" (as opposed to identifiers for local intermediate values - what is the correct terminology here?) imply that, within a function, data are more likely to be kept in registers, and that the return value, rather than being stored in memory, is likely to be required immediately by the "next function" (the caller)?
(I'm assuming a non-lazy language, with static typing or a nice compiler or whatever to avoid worrying about type information.)
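A small sketch of the scenario (hypothetical Haskell; the function and names are my own): the let-bound identifiers label intermediate values rather than mutable memory cells, so nothing in the language forces the compiler to spill them to memory.

    -- xx and yy are identifiers for intermediate results, not
    -- variables: a compiler is free to keep them in registers, and
    -- the result flows straight back to the calling expression.
    norm2 :: Double -> Double -> Double
    norm2 x y =
      let xx = x * x
          yy = y * y
      in sqrt (xx + yy)

    main :: IO ()
    main = print (norm2 3 4)  -- 5.0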
Of course I'm clutching at straws - I'm just curious whether the above makes any sense at all (it seems believable to me, but that doesn't mean much). Is it a slight effect that is insignificant compared to the cost of closures and updating (or not) large data structures, or do you get equivalent gains from a good optimizing compiler for a procedural language, or am I just completely wrong?
Cheers,
Andrew
PS. I'm pretty sure there are problems for which functional languages are provably less efficient than procedural languages - there was a discussion about this not long ago on comp.lang.functional - and (a bit less sure) I think this is because of the update problem. But my original comment was about efficient use of memory access rather than efficient algorithms, although I'm not sure that's either a useful distinction or even correct....
PPS. Sorry, this could probably be more concise and better argued - I've just come back in from a large meal!
Skib - Re: APL, J, Hardware
8/22/2000; 11:10:30 PM (reads: 1272, responses: 0)
I think it is doubtful that values are any more or less likely to be in registers, since you will still have to make "proper" function calls at some stage (save the registers and the stack pointer, and do a jump). Besides, with any sufficiently large program I think you will run out of registers ;-)
Following on from this, the "labelled values" probably have a much better chance of being allocated on the stack, which helps the CPU cache. I have no proof this can be done, but I think it's plausible.
A good procedural compiler should have a better chance of inlining function calls, though, which would give it the advantage (though you did say non-lazy, so perhaps not - unless you're curried).
Well, the quicksort that can't sort arrays in place is going to make less efficient use of memory, so algorithms and memory usage are somewhat related. The exact implementation of lists, for example, makes a lot of difference to the efficiency of both memory use and algorithms.
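The textbook list-based quicksort in Haskell makes the point concrete (a standard example, not from this post): it is short and elegant, but every level of the recursion allocates fresh lists instead of partitioning in place.

    -- Classic list quicksort: builds new lists at every step rather
    -- than swapping elements within one array, hence far more memory
    -- traffic than the imperative in-place version.
    qsort :: Ord a => [a] -> [a]
    qsort []       = []
    qsort (p : xs) = qsort [x | x <- xs, x < p]
                  ++ [p]
                  ++ qsort [x | x <- xs, x >= p]

    main :: IO ()
    main = print (qsort [3, 1, 4, 1, 5, 9, 2, 6])  -- [1,1,2,3,4,5,6,9]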
If you changed the nice compiler to an intelligent, exhaustively optimising one, I am sure lots of run-time details could be inferred and hence optimised away, such as the need for a stacked function call.
PPS. Er, I'll blame this on a large drink!