The perfect advanced programming language for the productive industrial developer

To each their own language, but am I alone in aspiring to a productive, efficient, and fun programming language for the advanced programming professional?

Here's a list of 50 items I'd like to see as the basis for a new language:

* runs really fast (compiles to native code) and has a performance profile within 20% of C.
* compiles really fast (have you seen "GO" compile?)
* has an "interpreter" (compiles and runs code on the fly)
* can be used as a systems programming language (ie: you can write device drivers with this stuff)
* supports both functional and imperative programming (leans on the functional)
* functional programming is pure and identifiable at compile time.
* imperative programming is identifiable at compile time.
* passing an imperative function to a higher order pure function renders the pure code impure.
* for/while loops and mutability allowed in pure functions so long as the output is pure (and there's no IO and such)
* Mutability allowed within imperative code, but contained within function body for pure code.
* OOP but no classes (see Haskell Type classes and GO's interfaces)
* Strongly typed and supporting Generic programming
* Soft-realtime GC
* NO null pointers
* Continuations
* Data types can be extended, methods can be specialised.
* No exceptions, but a similar throw/handle mechanism for errors _within_ a single body of code to increase readability.
* Fault-Tolerance built right into the library
* Inner functions
* name spaces
* modules
* neither lazy nor eager evaluation by default, but three distinct function calling operators. CALLER PICKS {lazy, eager, usecallersemantics}
* exception to the above, function arguments -- lazy by default (so you can write your own control loops).
* understandable compiler errors
* efficient pure array programming
* efficient in-memory database structure (multi-index table data structure)
* easily parse binary data (see Erlang's bit syntax)
* compiled regular expression syntax
* extensive message-based concurrency
* multi-core, multi-node runtime environment (can relocate a computation from one box to the next and back again)
* parsable code so an IDE is easy to build
* possible and straightforward to provide a contextual method list in the IDE: it's a huge productivity booster
* executable carries meta-data such as dependencies and run-time reflection.
* supports embedding DSLs with code (e.g. so you can declare XML data and have it parsed and turned into code at compile time)
* supports embedded assembly code --automatically marking the code as "unsafe"
* scalable: not declaring a module header imports a standard set of modules, making it easy to write programs that do what Python, Perl, Ruby do.
* must be able to mark an executable as "safe" and indicate dependencies in meta-data so it can be run in a sand-boxed environment
* fast string processing
* UTF-8 support out of the box
* Module split into "public" (on top and default) and "private" section. --no mucking about retyping all the "exported" functions
* Immediate guaranteed collection of objects living on the stack: support for RAII (great for managing resources other than memory)
* Haskell-style indenting
* pattern matching
* guards
* Good debugger which supports back-stepping
* No debug build vs opt build.. Debug info efficiently ignored unless run under a debugger.
* currying possible, but explicit, e.g. map(someFun, _): implicit currying can lead to hard-to-figure-out compile errors.
* Scala-style _, i.e. _.moo() creates a lambda on the object. Similarly run(_, 42, _) returns a function with two args (see the sketch after this list).
* no semi-colons at the end of the line
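
As a rough illustration of the last two items, here is how the placeholder style could desugar into plain lambdas. The sketch is in Haskell, and the placeholder syntax itself is hypothetical; Haskell spells the same thing with lambdas and sections:

-- map(someFun, _): the missing argument becomes an explicit lambda
incAll :: [Int] -> [Int]
incAll = map (\x -> x + 1)

-- run(_, 42, _): fix the middle argument of a three-argument function,
-- leaving a function of the two remaining arguments
run :: Int -> Int -> Int -> Int
run a b c = a * 100 + b * 10 + c

runMid42 :: Int -> Int -> Int
runMid42 = \x y -> run x 42 y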

Ponies!!! Ok, it's probably

Ponies!!! Ok, it's probably impossible to support all of your features in one language, but Scala is probably your best bet.

Ponies?

Ponies? I think he wanted unicorns...

Could it be the D programming language?

* runs really fast (compiles to native code) and has a performance profile within 20% of C.

check

* compiles really fast (have you seen "GO" compile?)

check

* has an "interpreter" (compiles and runs code on the fly)

D has "compile time function execution", though that isn't the same as what you're asking for

* can be used as a systems programming language (ie: you can write device drivers with this stuff)

Yes, as long as you eschew stuff that uses the GC

* supports both functional and imperative programming (leans on the functional)

Yes.

* functional programming is pure and identifiable at compile time.

check

* imperative programming is identifiable at compile time.

check

* passing an imperative function to a higher order pure function renders the pure code impure.

check

* for/while loops and mutability allowed in pure functions so long as the output is pure (and there's no IO and such)

check

* Mutability allowed within imperative code, but contained within function body for pure code.

check

* OOP but no classes (see Haskell Type classes and GO's interfaces)

Has OOP and classes

* Strongly typed and supporting Generic programming

yes

* Soft-realtime GC

Not sure what that means, but D does have GC

* NO null pointers

D has null pointers

* Continuations

probably not

* Data types can be extended, methods can be specialised.

Extension inheritance, and yes to overloading and partial specialization

* No exceptions, but a similar throw/handle mechanism for errors _within_ a single body of code to increase readability.

Has exceptions

* Fault-Tolerance built right into the library

Not sure what that means

* Inner functions

Yes

* name spaces

Yes

* modules

Yes

* neither lazy nor eager evaluation by default, but three distinct function calling operators. CALLER PICKS {lazy, eager, usecallersemantics}

D does have lazy and eager function parameters, but the callee picks.

* exception to the above, function arguments -- lazy by default (so you can write your own control loops).

Lazy is specifiable as a storage class, but it isn't the default

* understandable compiler errors

Always a work in progress

* efficient pure array programming

Getting there. D currently has array operations, though the code gen for them could be better.

* efficient in-memory database structure (multi-index table data structure)

Strong user defined type creation abilities means you can create your own. Associative arrays are built in.

* easily parse binary data (see Erlang's bit syntax)

Don't see why that should be a problem with D.

* compiled regular expression syntax

Regex is supported via a library.

* extensive message-based concurrency

Coming soon!

* multi-core, multi-node runtime environment (can relocate a computation from one box to the next and back again)

IDEs are being developed.

* parsable code so an IDE is easy to build

Yes. The compiler also outputs JSON files with a great deal of information.

* possible and straightforward to provide a contextual method list in the IDE: it's a huge productivity booster

Up to the IDE, not the language

* executable carries meta-data such as dependencies and run-time reflection.

Mostly there, could be improved.

* supports embedding DSLs with code (e.g. so you can declare XML data and have it parsed and turned into code at compile time)

Yes.

* supports embedded assembly code --automatically marking the code as "unsafe"

Yes!

* scalable: not declaring a module header imports a standard set of modules, making it easy to write programs that do what Python, Perl, Ruby do.

Need to specifically import what you use.

* must be able to mark an executable as "safe" and indicate dependencies in meta-data so it can be run in a sand-boxed environment

Functions and modules can be optionally marked as "safe", "trusted", or "system".

* fast string processing

Yes.

* UTF-8 support out of the box

And UTF-16 and UTF-32.

* Module split into "public" (on top and default) and "private" section. --no mucking about retyping all the "exported" functions

Yes.

* Immediate guaranteed collection of objects living on the stack: support for RAII (great for managing resources other than memory)

Yes.

* Haskell-style indenting

Whitespace is not significant.

* pattern matching

Not directly, but you can do similar things with templates.

* guards

D does have scope guards.

* Good debugger which supports back-stepping

Can use any C debugger.

* No debug build vs opt build.. Debug info efficiently ignored unless run under a debugger.

Yes.

* currying possible, but explicit, e.g. map(someFun, _): implicit currying can lead to hard-to-figure-out compile errors.

Currying is possible.

* Scala-style _, i.e. _.moo() creates a lambda on the object. Similarly run(_, 42, _) returns a function with two args.

Does have lambda functions and closures.

* no semi-colons at the end of the line

Sorry, semi-colons required!

* multi-core, multi-node

* multi-core, multi-node runtime environment (can relocate a computation from one box to the next and back again)

IDEs are being developed.

I believe he's talking about distributed computing, not IDEs.

Working on it...

Looks like we have some similar ideas; dodo resembles your ideal language... but it exists only on paper :(

Here's a list of 50 items I'd like to see for the basis of a new language:

* runs really fast (compiles to native code) and has a performance profile within 20% of C.

Dodo is designed to compile to efficient code, but performance will depend on the compiler, which doesn't exist yet. Also, calling a function on a non-local object incurs a significant penalty.

* compiles really fast (have you seen "GO" compile?)

Can't tell...

* has an "interpreter" (compiles and runs code on the fly)

I want that, but I'm not sure how easily it would happen. Dodo code is meant to be translated to a sublanguage with a few primitives that should be fairly easy to interpret.

* can be used as a systems programming language (ie: you can write device drivers with this stuff)

Some aspects of my language support this indirectly; one of them is that functionality can be delegated to code written in another language (e.g. C).

* supports both functional and imperative programming (leans on the functional)

Yes

* functional programming is pure and identifiable at compile time.

Not really true in dodo, as the functional side can use capabilities which are non-pure.

* imperative programming is identifiable at compile time.

Side-effects are restricted to constructors in dodo. The only exception is that functions can use active capabilities.

* passing an imperative function to a higher order pure function renders the pure code impure.

Constructors cannot be invoked in a pure function. Passing a capability to a pure function makes it impure.

* for/while loops and mutability allowed in pure functions so long as the output is pure (and there's no IO and such)

Yes

* Mutability allowed within imperative code, but contained within function body for pure code.

Yes

* OOP but no classes (see Haskell Type classes and GO's interfaces)

No, dodo is prototype-based and supports OOP classes.

* Strongly typed and supporting Generic programming

Not sure what you mean exactly, but I want generic classes and functions in dodo.

* Soft-realtime GC

No GC implemented, can't tell.

* NO null pointers

All dodo variables start with a default value which is not null.

* Continuations

That is a basic feature of dodo, but continuations are not first-class.

* Data types can be extended, methods can be specialised.

Dodo has OOP inheritance and overloading for classes or objects (prototype).

* No exceptions, but a similar throw/handle mechanism for errors _within_ a single body of code to increase readability.

Dodo's exception mechanism is based on continuations.
The syntax in imperative code is inspired by D. Dodo allows exception propagation and only one event handler per function body, so that may not be what you are after.
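
For a flavour of how exceptions can be built from continuations, here is a rough Haskell sketch using callCC (illustrative only, not dodo's actual mechanism; the names are made up):

import Control.Monad (when)
import Control.Monad.Cont (Cont, callCC, runCont)

-- callCC captures "the rest of the body" as a first-class function;
-- invoking it (here as `throw`) abandons the remainder and returns
-- the error value to the caller directly.
safeDiv :: Int -> Int -> Cont r (Either String Int)
safeDiv x y = callCC $ \throw -> do
  when (y == 0) $ throw (Left "division by zero")
  return (Right (x `div` y))

-- runCont (safeDiv 10 0) id  ==>  Left "division by zero"
-- runCont (safeDiv 10 2) id  ==>  Right 5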

* Fault-Tolerance built right into the library

What does that entail?

* Inner functions

Yes

* name spaces

No

* modules

Yes

* neither lazy nor eager evaluation by default, but three distinct function calling operators. CALLER PICKS {lazy, eager, usecallersemantics}

Dodo uses eager evaluation by default, but it supports yielding iterators and futures.

* exception to the above, function arguments -- lazy by default (so you can write your own control loops).

You said "three distinct function calling operators", and now it is lazy by default?

* understandable compiler errors

I want that.

* efficient pure array programming

Dodo has array (matrix), map and set literals. It has map, filter, fold and zip. I also want SequenceL-like generalisations.

* efficient in-memory database structure (multi-index table data structure)

In the base language? For what purpose?

* easily parse binary data (see Erlang's bit syntax)

Need to check Erlang.
In dodo, a field of 5 bits is written flag[5]
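
For comparison, byte-oriented binary parsing in Haskell with the binary package might look like the sketch below (the Header fields are made up). Erlang's bit syntax, like dodo's flag[5], additionally handles sub-byte fields, which would need a bit-level library:

import Data.Binary.Get (Get, getWord8, getWord16be, runGet)
import Data.Word (Word16, Word8)
import qualified Data.ByteString.Lazy as BL

-- A made-up 3-byte header: a version byte, then a big-endian length.
data Header = Header { version :: Word8, payloadLen :: Word16 }
  deriving Show

getHeader :: Get Header
getHeader = Header <$> getWord8 <*> getWord16be

-- runGet getHeader (BL.pack [1, 0, 42])
--   ==>  Header {version = 1, payloadLen = 42}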

* compiled regular expression syntax

Not sure about "compiled", but there is a regex syntax.

* extensive message-based concurrency

Yes

* multi-core, multi-node runtime environment (can relocate a computation from one box to the next and back again)

That is planned

* parsable code so an IDE is easy to build

It may not be, but the syntax is based on C/Java.

* possible and straightforward to provide a contextual method list in the IDE: it's a huge productivity booster

Don't know

* executable carries meta-data such as dependencies and run-time reflection.

I need to think about that, dependencies may be runtime-only

* supports embedding DSLs with code (e.g. so you can declare XML data and have it parsed and turned into code at compile time)

Not planned.

* supports embedded assembly code --automatically marking the code as "unsafe"

Not planned, but dodo is designed to interface with code in other languages.

* scalable: not declaring a module header imports a standard set of modules, making it easy to write programs that do what Python, Perl, Ruby do.

Header and body separation is not mandatory.
The default imports can be overridden and specified on the command line when compiling.

* must be able to mark an executable as "safe" and indicate dependencies in meta-data so it can be run in a sand-boxed environment

That is equivalent to the requirements for code migration (multi-node), so it should happen

* fast string processing

Do you mean text transformations?
Dodo has specialised functions for different string encodings, but flexibility is a priority over pure performance.

* UTF-8 support out of the box

Yes.
Dodo supports several string encodings. A character is defined as one or more Unicode code points that together form a single user-perceived character (ligatures aside), so the number of code points in a UTF-8 string can differ from the number of characters.
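
As a quick Haskell illustration of the code point vs. character distinction (my example, not dodo's):

-- One user-perceived character, two code points:
-- 'e' followed by U+0301 COMBINING ACUTE ACCENT.
eAcute :: String
eAcute = "e\x0301"

-- Prelude's length counts code points, not user-perceived characters:
-- length eAcute  ==>  2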

* Module split into "public" (on top and default) and "private" section. --no mucking about retyping all the "exported" functions

The header contains only public declarations.
There are no public declarations in the body.
Retyping may be useful if you want the public function body in the body of the module.

* Immediate guaranteed collection of objects living on the stack: support for RAII (great for managing resources other than memory)

Dodo uses continuation-passing style, which I believe is equivalent to RAII.

* Haskell-style indenting

No

* pattern matching

Yes

* guards

No

* Good debugger which supports back-stepping

Er, that is still a dream

* No debug build vs opt build.. Debug info efficiently ignored unless run under a debugger.

I want this

* currying possible, but explicit, e.g. map(someFun, _): implicit currying can lead to hard-to-figure-out compile errors.

No currying

* Scala-style _, i.e. _.moo() creates a lambda on the object. Similarly run(_, 42, _) returns a function with two args.

Intriguing...
Dodo has lambda functions but they are much more verbose.

* no semi-colons at the end of the line

Yes

* Good debugger which

Good debugger which supports back-stepping
Er, that is still a dream.
Maybe not.

or ocaml

go go gadget ocaml.

I assume he meant for Dodo...

Stupid question

Why is it that everyone *insists* that the language they use to write the accounting system has to be the same language that the OS is written in, or at least could be written in? Why is it so bad to say "if you're writing a device driver, you do it in language X, otherwise, you should be writing in language Y"?

And why stop at device drivers? Why not also demand that the same language be suitable for designing hardware in? Why one language to design CPUs, and one language to program them in? Now, you may say "Brian, that's silly", but back when I did device drivers professionally, there was a lot of interest in code that could be moved between hardware and software as needed. Have the code in software for simpler and cheaper hardware, move the code into hardware for speed. FPGAs blur the line between "software" and "hardware". Why stop at device drivers?

I think this question would

I think this question would have been more clearly expressed in French.

As an aside

Have the code in software for simpler and cheaper hardware, move the code into hardware for speed.

As far as I know that's still the direction in hardware, especially on cell phones, where the major advances have been. Pushing codecs into hardware to get video processing to work at low power consumption has been a major advance of the last 5 years, and it has followed exactly this model.

A few more items

That's a great wish list for the next mainstream language. I would add...

* Optional/Soft/Partial/Gradual Typing. In other words, it's a dynamic language by default, but you can generate static code without switching to a completely different language and getting into interop headaches. Cython and Common Lisp are some clunky examples.

* Compiler extension (Lisp-style macros, only better). Maybe that's what you meant by DSLs. I'm thinking most of the language could be bootstrapped this way... not that it would be easy (or desirable) for the average programmer to write compiler extensions!

Not sure why people are so

Not sure why people are so enamoured with optional/soft/gradual typing, when it seems like such a burden when reasoning about code. That is the goal after all, and it seems much better achieved by static typing with support for dynamics where needed (and preferably dynamics that are safe, i.e. safe casts/typecase); thus, you always know just by looking at the types whether you're in a dynamic or static context, with static being the default. Admittedly, most statically typed languages do not have good support for dynamics, but that isn't a fundamental limitation.

Development and Distribution Architectures

Not sure why people are so enamoured with optional/soft/gradual typing, when it seems like such a burden when reasoning about code. [...] thus, you always know just by looking at the types whether you're in a dynamic or static context.

People want the rapid application development properties commonly associated with dynamic typing, and the optimization and meaningful static analyses often associated with static typing. They often see 'soft typing' as the means to achieve both.

It should be easy to see why people are enamored with the idea, especially if you aren't generously assuming they'll follow that idea to its logical ends. To most people, a 'hybrid' is the 'best of both worlds' with nary a thought that it might also be the 'worst of both worlds'.

The hybrid procedures/functions in Scheme and ML are certainly flexible, for example, but the potential side-effects in functions can greatly hinder optimizations - including partial-evaluations, eliminations, and reordering... especially where one function takes another as an argument. Further, worse than merely hindering optimizations, they make it very difficult to reason about the contexts in which an expression could be reused or distributed (i.e. can this 'function' be used in multi-threaded code? which dynamic vars does it need?).

Rather than a hybrid, something like a weave or layering is often only slightly less flexible yet far more practical in terms of other engineering properties (reuse and deep composition, optimization, concurrency, disruption tolerance, security, distribution, etc.) because you - and your compiler - can do more reasoning about it.

What you propose, Sandro, is something closer to a weave or layering between static and dynamic 'contexts'... though far too short on details to evaluate.

Opposing both of you, I believe that the sharing of manifest types between developers is required primarily by a deeply flawed development and code-distribution architecture that involves treating 'libraries' and 'applications' as deliverables. The most obvious problems I see with the common design are (a) the heavy reliance on ambient authority and trust (by both developer and client!), (b) the weak control by the developer over (truly) global instantiation patterns (and therefore persistence, sharing, run-time upgrade), (c) difficulty integrating with foreign resources.

I believe the future to involve 'tierless programming' where a developer may compile up a procedure in an IDE, feed it a few capabilities identifying remote resources and service registries, execute, and within milliseconds have new objects actively integrating with resources (web-cameras, unmanned systems, etc.) across the world. Ideally, the location or IDE at which the object-graph is instantiated has no lasting effect on where the processing is performed. The capabilities linked by the IDE/user would be far more critical. Those capabilities may optionally come from the IDE itself, to support debugging or IDE-as-a-browser.

In such an architecture, the developer could control instantiation-patterns with an iron fist. If one wants an instance-per-user, then one provides an instance-factory. If one wants a shared instance for all users, one provides a common resource. If one wants different instances for different domains, that can be done too. Singleton pattern would be unnecessary, and 'global' resources would be just as truly global as you need them to be. Because instancing is controlled at a global scale and from common initial capabilities, live upgrades can also be achieved without much difficulty, though it may take some careful design to ensure the upgrade 'capability' is unlikely to trouble active systems.

For 'client-side' security, foreign code hosted on a client's machine should obtain no more authority than would the same foreign code running on a foreign machine under ideal network conditions. If achieved, it means that the distribution of code is for performance, disruption-tolerance, and resilience. This is achievable using capability security. To be clear: ideally, all application and library code is considered foreign and untrustworthy.

For 'developer-side' security, the developer of a service should be able to limit the distribution of sensitive things - usually information and authority (capabilities) - such that untrusted hosts cannot gain access to them except via betrayal by a trusted host. For capabilities, one also wishes the ability to revoke trust from a compromised machine and thereby revoke direct access to those capabilities, limiting further compromise. To this end, one needs the ability to specify restrictions on what gets distributed, and where. This is feasible utilizing some of the designs aimed initially at DRM, except that preventing distribution to an untrusted host is a lot more practical than preventing further distribution from an untrusted host. Trust may be expressed via a combination of certificates and escalation policies. Limiting distribution of capabilities is far more readily defined (and enforced) than is limiting distribution of information... but E-language-style sealers/unsealers can help on the 'information' side by making a capability a requirement to access certain information from certain sources.

Favoring such an architecture still benefits from safety-analysis within a 'build', but benefits much less when comes time to integrate with the outside world. There is a limit to how much you can trust any 'manifest type' document. Even if you could trust it initially, it may become dated during the run, or be breached maliciously. So you don't trust such an API document - not for anything really important, like security.

Fortunately, once security is separate from safety, you don't really need to care whether the foreign code you host is "safe" except to maximize performance (i.e. by avoiding the extra conditions for fail-safety). Types in the foreign code aren't going to much help this purpose, either - at least not in any trustworthy way that avoids need for fail-safety conditions.

To be clear: ideally, all

To be clear: ideally, all application and library code is considered foreign and untrustworthy.

IMHO: that is exactly what leads to thinking of software architectures, in the most general sense, as first and foremost based on a dynamically federated, dynamically distributed system model, which can then be collapsed into fewer 'tiers' as trust boundaries become known.

A Matter of Perspective

If you look at it from the perspective of a programmer who puts a lot of value in the guarantees provided by a language with strong static typing, but would like the ability to occasionally "break the rules" in instances in which the type system can't cleanly handle what they're trying to do, what you say makes a lot of sense.

However, a programmer who generally likes dynamically typed programming languages might see things a little differently. They may be more interested in allowing the compiler to make more aggressive optimizations in key areas and have the compiler catch a few stupid mistakes here and there, but they probably don't have much interest in having most of their program statically typed (perhaps they should, but that's a separate issue).

Another category of programmer who might prefer optional/soft/gradual typing is one who finds a dynamically typed language more productive for prototyping/exploratory programming, but wants the safety of static typing for the final product. At the start of the project, they may not be interested in adding any type annotations at all, but as their understanding of the problem becomes more clear they'll want to start adding annotations to the code.

At the start of the project,

At the start of the project, they may not be interested in adding any type annotations at all, but as their understanding of the problem becomes more clear they'll want to start adding annotations to the code.

Experience has taught me that working out the types upfront clarifies my understanding far more quickly than cobbling together bits of inconsistent code. This benefit is contingent on a low management overhead for type information, as with type-inferred languages. Not the first time this has been said, of course, so take that personal anecdote for what it's worth.

for/while loops in functional

In a pure language, what is a for loop or a while loop going to do that something like map, fold or filter doesn't do? Either I want to access the data in an associative way, in which case foldl or foldr makes sense (depending on where I want to start); or the result doesn't have interdependencies, in which case map works; or I want to stop at some point, in which case takeWhile....

I don't see what for/while loops do for me in a pure language. Almost by construction they only make sense if I'm doing something, then doing something else, which means there is an implicit global "time" variable.

Consistency

The most likely reason for this request is so that fewer concepts are necessary to distinguish functional code from imperative code (i.e. so the former looks more like the latter). Essentially, the author of the OP is aiming for 'purity' to really mean 'all side-effects are confined'.

One can, of course, write for/while loops inside a functional program (so long as it's Turing complete) using something like monadic programming, as sketched below. This sort of program description might be useful for complex functional searches (i.e. for planning and AI).
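
For instance, a minimal Haskell sketch of a while-style loop written monadically; the "mutable" loop variables live in the State monad and are discharged purely:

import Control.Monad.State (State, evalState, get, put)

-- Sum 1..n in imperative style; (i, acc) plays the role of two
-- mutable variables, and evalState makes the whole thing pure.
sumTo :: Int -> Int
sumTo n = evalState loop (1, 0)
  where
    loop :: State (Int, Int) Int
    loop = do
      (i, acc) <- get
      if i > n
        then return acc
        else put (i + 1, acc + i) >> loop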

Personally, I'd favor achieving termination properties for the functional subset, and carefully add both impurity and non-termination in higher layers.

Adding non-termination.

Personally, I'd favor achieving termination properties for the functional subset, and carefully add both impurity and non-termination in higher layers.

Exactly how is it possible?

I mean that if you have a terminating functional base feature set, you cannot add non-termination in higher layers, at least by simple combination of features.

I can cite PiSigma as an example of a contrary opinion. They added non-termination right into the core language.

Consider a terminating core

Consider a terminating core language, and a Delay monad for expressing non-terminating programs.
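
Sketched in Haskell (a minimal illustration, not PiSigma's construction): each Later constructor is one observable step, so a consumer can bound how much work it performs.

data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure = Now
  Now f   <*> d = fmap f d
  Later f <*> d = Later (f <*> d)

instance Monad Delay where
  Now a   >>= f = f a
  Later d >>= f = Later (d >>= f)

-- Run a delayed computation for at most n steps;
-- Nothing means "has not terminated yet".
runFor :: Int -> Delay a -> Maybe a
runFor _ (Now a) = Just a
runFor n (Later d)
  | n <= 0    = Nothing
  | otherwise = runFor (n - 1) d

-- spin never terminates, but runFor n spin always does:
spin :: Delay a
spin = Later spin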

The difference is in terminology.

But we cannot translate core+Delay into the pure core language. Or can we?

So I see that pair as a whole basis, rather than as an extension of a basis.

Layered Languages

you cannot add non-termination in higher layers. At least, by simple combination of features

You cannot use features from a terminating, pure, functional language subset to add non-termination or communication effects.

But that doesn't mean you cannot add non-termination or communication effects in higher layers, i.e. via introducing a primitive that cannot be used inside the 'functional language subset' (i.e. any sub-program using a certain primitive will be in the 'higher layer').

The trick is to guarantee that code in the lower layer cannot invoke code in the higher layer. As Sandro suggests, introduction of a special 'delay' monad is one way to achieve this.

concepts

You are probably right about the motivation. But IMHO this makes things worse: the hidden concept of time sneaks in along with the familiar syntax. The real complexity in switching from for/while to map/fold/take is to stop thinking of algorithms in terms of do A, then do B, then do C.... There is no way to be pure and still think in terms of ordering operations. At least that's my $.02. So assuming you are correct about the motivation, it sounds like a bad idea.

Hidden concept of Time

The reason I guessed at that particular motivation is that, when I was first getting into language design, I had some similar thoughts on the subject of 'purity'.

At this time, I agree with you: "There is no way to be pure and still think in terms of ordering operations." Ideally, for a 'pure' language, the lazy, lenient, strict, etc. evaluation orderings (among others) always result in the same outcome (modulo performance). And that can be achieved by ensuring that the pure language subset is also strongly normalizing (in addition to confining communications effects), and that the concept of 'time' or 'delay' is introduced to the language more explicitly.

Familiarity

My motivation is so that you can write a side-effect-free function using Java-like syntax. I am not going to force programmers to learn new ways if they don't want to; I just want to make it easy to go the functional route if they desire to.

Example:

# Side effect-free iterative style function
int sum_iter(int[] values)
{
   int s = 0
   loop foreach (value in values)
   {
      .s += value
   }
   return s
}

# Functional version
def sum_func(int[] values) = `fold values` + `seed 0`

The base language is continuation-based, so there is a notion of do A, then do B, then do C... in dodo anyway.

Encapsulated mutability within a pure functional language

That is correct! The software I write needs to be fast.
Really fast. We measure our latency in microseconds and often improve speed by avoiding allocating on the heap and creating copies of objects.
The other challenge is that it is also very complicated software to reason about: there are large bodies of business code with many edge-case scenarios.

Why do I need to throw away the guarantees of strong static typing and functional pureness when I am writing a critical segment of code?

If I am serialising an object so that it can go out on the network,
it is much faster to express it using mutability (e.g. adding data to a pre-allocated byte-vector on the stack).

The end result, though is that I can still write a pure function:

serialize(trade: Trade) : Vector[Byte8]

For all intents and purposes the function is pure:
-> It causes no side-effects
-> The output is purely dependent on its input
-> run it twice with the same input and you'll get the same result back

Additionally, if this was some complex risk calculation, I could then use these guaranteed properties to cache or parallelize the computation if my static & dynamic analysis showed me it made sense.

The function uses mutability internally; so what? Why do I have to throw away for and while loops when they're good for the job?
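
In Haskell terms, this is what runST already permits: fill a pre-allocated buffer by mutation, freeze it, and the function as a whole remains observably pure. A minimal sketch (Trade and the real encoding are elided; this just serialises a list of bytes):

import Control.Monad.ST (runST)
import Data.Word (Word8)
import qualified Data.Vector.Unboxed as V
import qualified Data.Vector.Unboxed.Mutable as MV

-- Mutation is confined to the ST computation; runST guarantees
-- it cannot leak, so serialize is a pure function.
serialize :: [Word8] -> V.Vector Word8
serialize bytes = runST $ do
  buf <- MV.new (length bytes)
  mapM_ (uncurry (MV.write buf)) (zip [0 ..] bytes)
  V.freeze buf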

** For the people who may think "you should really be writing this stuff on an FPGA": yeah, we tried, but there's no way you can deliver results within a reasonable timescale. We also ran into problems where it wasn't any quicker because of the overhead of getting the data in and out. If the software were simple you could do it, but as it's business logic it depends on large amounts of referential data that sit cached in hashmaps in RAM.

*** Why a single language? Simple: language interop is too expensive (speed of execution, speed of development) because it involves writing adapters and marshalling.

Re: a single language?

How does .NET's approach fare as a middle ground?

This seems like a contradiction

The output is purely dependent on its input
...but as it's business logic it's dependent on large amounts of referential data that sit cached in hashmaps in ram.

Assuming these hash maps in RAM are static, you are really passing a pointer to large data structures, and that's something FP has no problems with: just predigest the data. Now if those hash maps are dynamic, you really do have state in a fully meaningful sense, and you aren't going to get much benefit from functional safety.

why do I have to throw away a for and while loop when it's good for the job?

Because they aren't good for the job in pure functions. They encourage exactly the things you are trying to avoid: side effects, and interdependencies that keep you from caching or parallelizing.... Either you don't need side effects, in which case you want looping structures that are much more powerful in other ways; or you do need them. Quite simply, for and while aren't good looping structures for non-imperative code; they aren't good for the job.

And if you need them, there are type-safe ways of getting them:

-- A pure, type-safe "for loop": iterate `next` from `init` until `done`
-- holds, then project out the result (`until` is from the Prelude)
fac n = result (for init next done)
        where init          = (0, 1)
              next   (i, m) = (i + 1, m * (i + 1))
              done   (i, _) = i == n
              result (_, m) = m

for i n d = until d n i

Possible Haskell answers to a couple of points

For:

* for/while loops and mutability allowed in pure functions so long as the output is pure (and there's no IO and such)

Haskell's ST monad lets you do this. Does that count as a satisfactory answer for this request?
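
A minimal sketch of the idea, with the mutation confined behind runST (my example, not the poster's code):

import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- A for-each loop over an STRef accumulator; the mutation cannot
-- escape runST, so sumST is a pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc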

Oh, and on the explicit currying side, one Haskell answer would be to define your functions in an uncurried form and use the new TupleSections language feature.

Untested code:

{-# LANGUAGE TupleSections #-}

myadd3 :: (Int, Int, Int) -> Int
myadd3 (x, y, z) = x + y + z

-- ... then, with a and c in scope, the tuple section (a,,c) is \y -> (a, y, c):
example :: Int -> Int -> [Int]
example a c = map (myadd3 . (a,,c)) [5, 6, 7, 8]