## On compositionality

Jules Hedges has written a thought-provoking blog post, On compositionality, in which he connects the familiar idea of compositionality to the idea of emergent effects in nature: systems can be understood as having either compositional or emergent properties.

The key point about emergent systems is that they are hard to understand, and this is as true for engineering as it is for science. He goes on to say, "As a final thought, I claim that compositionality is extremely delicate, and that it is so powerful that it is worth going to extreme lengths to achieve it"; avoiding emergent effects is thus a characteristic of good programming language design.

Some thoughts:

1. His examples of emergent systems are biology and game theory from an economic perspective. I would add physics to this list: regarding his co-authored paper showing that the spectral gap is undecidable, David Pérez-García said "For example, our results show that adding even a single particle to a lump of matter, however large, could in principle dramatically change its properties."
2. Spolsky's famous characterisation of interfaces built on shaky foundations as Leaky abstractions makes, to me, the distinction between compositional and emergent systems a little less than sharp.
3. We could talk endlessly about his list of compositionality-breaking features of PLs. The evils of global state are well documented, but I find dmbarbour's argument that Local state is poison a very interesting alternative way to look at what properties we want from code; more generally, what kind of compositionality a PL offers is very much paradigm-dependent. Gotos are considered harmful, but the Linux kernel has little trouble with longjmp because of its mandated coding style: compositionality in engineering is not just a matter of semantics but also of use. He targets OO and Haskell's type classes, and I think he is quite correct; note that within these paradigms one can regain compositionality by restricting to the LSP or to algebraic classes, and, supporting his thesis, these ideas have spawned designs for new, cleaner PLs.

## Comments

### "Local state is poison" link

"Local state is poison" link is invalid.

Thanks

### Leaky abstractions, like the

Leaky abstractions, like the Chomsky hierarchy, came to computer science from linguistics. From Edward Sapir's book Language (1921, p. 39):

The fact of grammar, a universal trait of language, is simply a generalized expression of the feeling that analogous concepts and relations are most conveniently symbolized in analogous forms. Were a language ever completely "grammatical," it would be a perfect engine of conceptual expression. Unfortunately, or luckily, no language is tyrannically consistent. All grammars leak.

### I wonder if that is the same

I wonder if that is the same kind of leakiness. The class of grammars is cleanly partitioned by the Chomsky hierarchy. Internally, any given grammar is a ball of spaghetti with no clear abstraction between its constituent rules, but do we really use grammars as building blocks whose internal details leak across the abstraction boundary? The example of grammar composition that springs to mind is the classic lexer/parser split, where the boundary does not leak if the language is context-free, although it does if the parser has context-sensitive behaviour (i.e. the lexer hack). I suspect this is really an issue because the regular grammar does not sit naturally inside the context-sensitive grammar, as we are missing a more natural way of expressing the context-sensitive grammar itself. I thought it was conventionally agreed that "grammars do not compose" - are there interesting cases where they do, and where the leakiness across the interface between them becomes an issue?
[ edit: wow autocorrect does some horrible things ]

### Common cause

I submit both forms of leak are caused by the mismatch between sapient human minds and non-sapient formal structures. Mathematics is (as I've remarked in another thread) a conversation between human minds, and its abstractions are sapience-shaped; programming abstractions break down because they are, in essence, attempting to "explain" abstractions to the non-sapient computer. Natural language is —not going out on a limb, here— also a conversation between human minds; hence formal grammars break down because they are trying to contain sapient dialog within a rigid structure. Sapience doesn't fit into the confines of a formal system (I've blogged relating this to Gödel's Theorems), so both programming abstractions and formal grammars "leak".

### Grammars do compose

Grammars compose if you do them the right way. It's like saying types don't compose.

The way to make grammars compose is the same as types .. because grammars ARE types, just more powerful ones than the weak types in most PLs.

The technique should be well known to readers here.

Open Recursion.
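For concreteness, here is a minimal sketch of that technique in JavaScript (the toy grammar, names, and parser shape are mine, not from any particular library): each grammar fragment is written against a `self` parameter instead of referring to its own rules directly, and a combining function ties the knot, so fragments written independently end up recursing through each other.

```javascript
// Parsers here are functions: string -> [value, rest] or null on failure.

// One fragment: single digits.
const digitFragment = (self) => ({
  expr: (s) => (/^[0-9]/.test(s) ? [Number(s[0]), s.slice(1)] : null),
});

// A separate fragment adding parenthesized expressions. It recurses through
// self.expr, so it composes with whatever "expr" ends up meaning overall.
const parenFragment = (self) => ({
  expr: (s) => {
    if (s[0] !== "(") return null;
    const inner = self.expr(s.slice(1));
    if (!inner || inner[1][0] !== ")") return null;
    return [inner[0], inner[1].slice(1)];
  },
});

// Tie the knot: the closed grammar's "expr" tries each fragment's rule.
const compose = (...fragments) => {
  const self = {};
  const rules = fragments.map((f) => f(self));
  self.expr = (s) => {
    for (const r of rules) {
      const out = r.expr(s);
      if (out) return out;
    }
    return null;
  };
  return self;
};

const g = compose(digitFragment, parenFragment);
console.log(JSON.stringify(g.expr("((7))"))); // prints [7,""]
```

Neither fragment mentions the other; only `compose` closes the recursion, which is the sense in which the grammars compose.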

### Grammar composition principles

Open recursion is not the best way to compose grammars, IMHO.

There are two better partial examples of grammar composition:

1. Free operator definition in Prolog (and some other languages)
2. Macros in Lisp

I tried to extend the first approach to generic grammars a long time ago. Some design decisions there are obsolete (particularly with respect to keywords), and I've changed them in a more recent unreleased version, but I am now focused on another project. I still believe that this is the better approach to grammar extension and composability. At least it defines a grammar along a natural line of thinking: contexts, statements, operators, literals, while traditional grammar definition languages force language designers to compile these concepts into grammar productions. Such translation should be done by a parser generator rather than by a human.

### Prolog operators

"Such translation should be done by parser generator rather than by human."

The parser generator for a sufficiently advanced language is maybe 1% of the work of compilation; it really isn't worth asking a tool to take that 1% off of your plate, because there's a lot (of great value) that you can learn and discover by actually writing the parser by hand. (I have regretted the few times that I let a tool steal this fun and educational work away from me.)

For basic languages, or for disposable grammars, I would guess that a parser generator would have a far greater positive impact (percentage-wise). OTOH, the absolute costs are low enough in those instances as to be ignored.

I am very curious, though, what specific aspects you like about the free operator definition in Prolog. What makes it (to you) such a compelling way to compose a grammar?

### Beauty of Prolog operators

Prolog operators are beautiful from a modularity and locality point of view. There is still the problem of global state and a lack of visibility rules, but for an old interpreted language this is understandable.

Let's consider two modules. One introduces the operator + and the other *.

The developers do not have to think about each other in Prolog. Their operator definitions are independent. They only write :- op(500, yfx, +) and :- op(300, yfx, *) in the source code. And it is possible to use any graphic symbol.

The combined language is updated so that both operators can now be used. No prior agreement on operators is required, provided that they do not conflict.

With usual grammar productions, if one adds an operator, the entire grammar has to be updated, because a production at a lower priority level has to be aware of the next priority layer. The grammar has to be global and updated each time we introduce something.

Another thing is that the developer's intention and the text in the code match each other. The developer wants to introduce an operator rather than update grammar productions. The operator syntax is practically the shortest way to get things done.

And you are considering closed programming languages. Think DSLs and programming language extensions. For DSLs or language extensions, we might need to write productions that are mutually recursive at the production level with the base language. But there is no such problem if we use statements and operators as language definition elements. Only some limited extensions like these could be implemented without introducing new language elements. The closedness of programming languages is the current roadblock to language innovation. For example, Java users had to wait a long time before they received enums and the for(i : iterable) loop in Java 5. Such language extensions should have been providable by a language plugin.
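The Prolog-style idea above can be sketched with a small Pratt-style expression parser (all names and the token format are mine, for illustration): independent "modules" register infix operators in a table with a priority, and the parser consults the table, so adding an operator never requires rewriting a global grammar.

```javascript
// symbol -> binding priority; lower priority binds tighter, as in Prolog op/3
const ops = new Map();
const op = (priority, symbol) => ops.set(symbol, priority);

// two independent "modules" each declare their own operator
op(500, "+");
op(300, "*");

// tokens: numbers and operator symbols, e.g. ["1", "+", "2", "*", "3"]
function parse(tokens, maxPriority = 1200) {
  let left = { num: Number(tokens.shift()) };
  while (tokens.length && ops.has(tokens[0]) && ops.get(tokens[0]) <= maxPriority) {
    const symbol = tokens.shift();
    // yfx: the right argument must have strictly lower priority (left-associative)
    const right = parse(tokens, ops.get(symbol) - 1);
    left = { symbol, left, right };
  }
  return left;
}

const ast = parse(["1", "+", "2", "*", "3"]);
// "+" ends up at the root with "*" binding tighter, without either
// module's declaration knowing about the other
console.log(JSON.stringify(ast));
```

The priority numbers in the table play exactly the role of the "priority layers" that a conventional production-based grammar would hard-code.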

### Might not be that simple?

I'll need to give this some thought. Unary and binary operators are quite simple (parsing, associativity, precedence) compared to the example you provided of "for(i : iterable)".

Think DSLs and programming language extensions.

Yes, I can clearly see the benefit in those cases. Entropy is a far greater devil in a language used more widely.

### ETL is an example

Yes. It is more complex, but the same composition principles can still be implemented.

See the ETL examples and the rest of the documentation. Version 0.2.x had some design mistakes (in the area of handling keywords and numeric literals) that were fixed in the unreleased version 0.3.0 (available on GitHub, lacking updated documentation for now). As a note, the grammar language is defined using the grammar language itself.

### Nicely done

I can see that you put great care into that project. It will take me some time to digest that, though!

What I've found is that any complex constructs tend to give grammar-driven parsers a very hard time, and often it is those very same constructs that tend to be most "useful" to the person using the language. Just to pick an example at random (that happens to be fresh in my memory), anything that isn't LALR(1) and may need to backtrack in any significant manner:

```java
/**
 * Parse the multi-form of the VariableInitializer construct, or nothing.
 *
 * @return null or a MultipleLValueStatement
 */
MultipleLValueStatement peekMultiVariableInitializer() {
    if (peek().getId() != Id.L_PAREN) {
        return null;
    }
    Mark mark = mark();
    expect(Id.L_PAREN);
    List listLVals = new ArrayList<>();
    while (true) {
        boolean fFirst = listLVals.isEmpty();
        if (peek().getId() == Id.VAR || peek().getId() == Id.VAL) {
            // this has to be a multi VariableInitializer
            listLVals.add(new VariableDeclarationStatement(
                    new VariableTypeExpression(current()), expect(Id.IDENTIFIER), false));
        } else {
            // assuming that we haven't already started building a list of declarations,
            // encountering an expression followed by anything other than an identifier (for
            // a declaration) or a comma indicates that we're going down the wrong path
            Expression expr = parseExpression();
            // next token    meaning
            // ----------    ----------------------------------------
            // COMMA         the expression must be an LValue (commit)
            // IDENTIFIER    expression was the type portion of a declaration (commit)
            // R_PAREN       list is empty    : oops, it's not a multi VariableInitializer
            //               list is NOT empty: done parsing the multi VariableInitializer
            // otherwise     list is empty    : oops, it's not a multi VariableInitializer
            //               list is NOT empty: it's an error
            if (peek().getId() == Id.IDENTIFIER) {
                // there is a variable declaration to use as an L-Value
                listLVals.add(new VariableDeclarationStatement(
                        expr.toTypeExpression(), expect(Id.IDENTIFIER), false));
            } else if (fFirst && peek().getId() != Id.COMMA) {
                restore(mark);
                return null;
            } else {
                // the expression has to be the L-Value
                if (!expr.isLValueSyntax()) {
                    log(Severity.ERROR, NOT_ASSIGNABLE, expr.getStartPosition(), expr.getEndPosition());
                }
                listLVals.add(expr);
            }
        }
        if (!fFirst && match(Id.R_PAREN) != null) {
            return new MultipleLValueStatement(listLVals);
        }
        expect(Id.COMMA);
    }
}
```
By cpurdy at Tue, 2018-09-18 20:44

### ETL compiles grammars to LL(*)

ETL compiles grammars to LL(*) for expressions and statement annotations, and LL(1) in all other places. The documentation incorrectly says LL(1) because the parser actually looks ahead only one token during parsing, but it keeps a stack of states internally, and the output queue for the term parser uses marks and structural token insertions to emulate different AST-building operations for the yf* cases.

The good thing is that this is done internally by the grammar interpreter, so the user of the language does not have to bother with it. Grammar compilation is done at runtime. It is theoretically possible to cache compilation results on disk using Java serialization, or to compile a grammar directly to Java code, but I currently see no need for it: compilation of grammars to runtime structures is relatively fast, and in 0.3.0 it is even asynchronous.

And the grammar compiler just fails on conflicts.

There is also error recovery built into the parser, relying on the common statement/block language layer. Version 0.2.0 supports only simple "end-of-statement" recovery. Version 0.3.0 supports a more complex error recovery strategy that recovers to the next keyword or block start.

P.S.: There is error recovery sample in the tutorial.

### Grammars do compose

Duplicate, slow server .. sorry.

### On "local state is poison"

On "local state is poison". I completely disagree.

Local state is a way of thinking locally about things. It is a tool of complexity management.

The real poisonous component is the unification of entities with different lifetimes. There is persistent state and ephemeral state in a program. The first needs live upgrade, and so on. The second can die at any time, and its task is to affect persistent state.

There is really no way to consistently upgrade information in a live process. But let's look at the patterns used in real enterprise applications. We could argue whether they are the best or not, but they follow the track of managing this problem. There are regular attempts to depart from that pattern (a recent one is the OODB approach, which has died almost completely), but they still return to the traditional path (in the form of graph databases).

Firstly, there is a clear separation between persistent and ephemeral state. The ephemeral state is not reliable and is lost on every application restart. It is designed to die, is not explicitly managed, and is often implicitly typed. The persistent state is usually explicitly designed and typed.

Secondly, there are usually clear code boundaries between persistent and ephemeral changes. Changes are usually done in transactions, which reduce the risk of inconsistent changes to persistent state.

Thirdly, the persistent state has a data definition language that assumes a live patch model (ALTER TABLE, CREATE TABLE, etc.). The ephemeral data definition language is a complete type model.

Yes, these languages are completely different because of different conditions. But we can see the same abstraction chain in both, corresponding to the groups of cognitive operations defined by J. Piaget.

| Cognitive stage | Organizational principle | Programming language | Database language |
| --- | --- | --- | --- |
| Sensorimotor | Names | Calculator | Key-value |
| Pre-operational | Single-level patterns (sequences, labels, global scope) | Fortran 66, Assembler, BASIC (classic) | CODASYL, SQL |
| Concrete operations | Hierarchies in code and data | C, Pascal | Graph databases |
| Formal operations | First-class meta-constructs (functional programming, generics, contracts) | OOP (Java, C#), FP (Haskell) | None |

I guess the line will continue. There will be true OO databases in the future (OODB was a false name; there was almost no OO), with garbage collection, generics, encapsulation, and other things.

As you can see, each stage adds a new construct that organizes the constructs from the previous stage. For example, patterns (sequences of actions) organize names (actions), hierarchies (blocks) organize patterns (sequences), and meta-structures (classes) organize hierarchies (procedures and data structures).

According to M. L. Commons, the next abstraction stage is systems thinking. So the true next-generation programming language will introduce some concept of a system that organizes classes. And if we look at the current state of big systems, there are numerous efforts to define DSLs for organizing systems. In the Java world there is Spring, which is a language for defining a system of connected components; in other languages there are also numerous dependency injection frameworks. There are also patterns for cloud applications that provide emergent system properties like reliability over unreliable hardware (which is likewise an effect of systems thinking).

### The first need live upgrade,

The first needs live upgrade, and so on. The second can die at any time, and its task is to affect persistent state.

The point that David was making in that article is that the ephemeral state often can't "die at any time". There can be any number of ongoing activities, and abandoning any of them to force an upgrade could leave the system in an inconsistent state, either internally or with external resources.

If the program is required to deal only with persistent state on the other hand, then that enforces an additional constraint that every state transition be well-defined, recoverable (and probably bounded).

My point is that it should be designed to be able to die at any time, and persistent state should be designed to be recoverable. If it cannot die at any time, that is a design bug. Hardware fails. And even worse, sometimes hardware fails suddenly.

Persistent state is expensive from a resource standpoint for a reason: the additional resources buy an additional property. And persistent state requires more work from the developer to plan ahead (per storage element, such as a field or table). Upgradability is not free; it costs additional work from the programmer.

On the other hand, ephemeral state is cheap from a resource standpoint and requires less developer time. An upgrade is just a regular wipe of the data. We do not have to pay for upgradeability.

This is a trade-off. In an ideal world we would have all the properties: cheap, durable, and upgradeable (I guess we can select at most two of them). However, I do not see how we could remove this trade-off.

Making all state global would not change anything. If anything, we would go back to the days of Fortran 66, where we had to implement recursion using arrays that emulated a call stack. So we would only be able to write programs of lesser behavioural complexity.

In the article "local state is poison" the author just suggests everyone pay the cost by moving everything to persistent state, whether one needs it or not.

### In the article "local state

In the article "local state is poison" the author just suggests everyone pay the cost by moving everything to persistent state, whether one needs it or not.

No, this is a strawman. Your argument is effectively the same as the argument made in that article; you just have a different understanding of what constitutes "local state". Rather than debate the minutiae, here's a summary of the argument:

A program is a set of purely functional transitions on persistent state. Any transitions that access external resources must be lifted to persistent states. Thus, any computation can be moved, killed/resumed, or analyzed at any point. This is exactly what the article argues for, and you seem to largely agree.
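That summary can be made concrete with a small sketch (the job shape, store, and names are invented for illustration): every transition is a pure function of the persisted state, and the store is the only mutable thing, so the computation can be killed after any transition and resumed from the store.

```javascript
// Stand-in for durable storage (a database table in a real system).
const store = new Map();

// A pure transition function: next state depends only on current state.
const step = (state) =>
  state.n >= state.limit
    ? { ...state, done: true }
    : { ...state, n: state.n + 1, sum: state.sum + state.n + 1 };

function resume(key) {
  // load the last persisted state, or start fresh
  let state = store.get(key) ?? { n: 0, sum: 0, limit: 5, done: false };
  while (!state.done) {
    state = step(state);
    store.set(key, state); // persist after every transition: safe to kill here
  }
  return state.sum;
}

console.log(resume("job-1")); // sums 1..5, prints 15; rerunning resumes and is a no-op
```

Because each `step` is pure and every intermediate state is persisted, "kill and resume anywhere" falls out for free, which is the property the summary claims.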

### IO is the purpose of program, and it is essentially ephemeral

A program is a set of purely functional transitions on persistent state. Any transitions that access external resources must be lifted to persistent states.

Disagree. This is a narrow niche of practical situations (batch jobs over a database), or it just moves the problem to intermediaries.

Thus, any computation can be moved, killed/resumed, or analyzed at any point.

Agree. But not "thus", as the previous statements describe a niche solution (for example, batch jobs). The part "or analyzed at any point" depends on interpretation, but in general it might not be possible.

This is indeed a difference in understanding of local state. I just do not believe in the "purely functional" limit in the real world. Real-world applications interact with the environment, and those environment interactions cannot be captured as purely functional in the normal situation. There is always room for failure (the CAP theorem, for example). The application should still be ready for it.

For example, the result of calling a web service is still local state until we save it to a database. The program should be designed to be ready to fail before the call, during the call, and after the call. The call to the service should have a meaningful recovery policy. Such a policy is not necessarily automatic. There might be detect-and-update SQL scripts launched manually, or just a sum reserved for paying the lawyers who will argue about lost transactions in court (I've seen both such manual solutions in my practice).

There is no way to provide absolute reliability. Programs and hardware will always have a chance to fail, and it is not possible to reliably maintain invariants and avoid data loss in distributed systems (the CAP theorem). The question is how damage control is implemented. Lawyers could be cheaper than the maintenance of an extra persistence layer that could fail anyway.

The "local state" article argues that ephemeral state is reducible to purely functional state. It is not. Part of it can be so reduced, but IO cannot be completely abstracted away. At best, it can be moved to intermediaries. And doing IO is the primary purpose of most practical applications (the income). Everything else is a loss of resources (the cost). The ideal program keeps the same IO behavior while having zero-resource computations. Even a compiler's purpose is to read sources and write good object code correctly; everything else contributes to global warming, electricity bills, and amortized hardware cost.

### "Local state" article argues

The "local state" article argues that ephemeral state is reducible to purely functional state. It is not. Part of it can be so reduced, but IO cannot be completely abstracted away.

"Completely abstracted" is undefined. Monads completely capture side-effects in a purely functional fashion, so in principle, your claim appears false.

I can't conceive of a single program that can't be represented with purely functional transitions between stable persistent states, so if you have a real example, please share.

### IO monads define a DSL isomorphic to imperative programming

I have limited exposure to Haskell, as I did not like its closed-world principle and lack of open types. However, I think I have figured out IO monads.

If we skip the long reasoning about the pure functional theory underneath them, the IO monad is a way to implement an imperative DSL inside functional code.

```haskell
main = do
  name <- getLine
  putStrLn ("Hey " ++ name ++ ", you rock!")
```


There would have been no difference if we had just embedded a functional sublanguage into an imperative language, as is done in Scala or Kotlin. The IO monad is just a more complex way of doing the same thing by purist means. We translate imperative code into some functional abstractions, then translate it back into imperative control flow (assembler and system calls). We could just as well skip the "purely functional" step.

Thus the IO monad is just a failed attempt to escape imperative code when working with IO. There is no escape from the submarine; promises provide a far better means of organizing IO code than the IO monad, using hybrid imperative-functional means, as there is no loss from concept translation.

### Promises

Promises are a monad, with unit = return and bind = then. They are used to implement an asynchronous imperative layer over an imperative layer.

### Promise is approximately the Async monad

A Promise is approximately equal to the Async monad in Haskell in functionality. But they are not necessarily implemented like that.

Most promises (that I have seen and created) have two fundamental interfaces:

• Resolve the promise with success or failure.
• Listen for the resolution outcome.

So it is more like a single-element memorizing event channel.

The flatMap operation on promises is usually a utility method built over these two fundamental operations. Saying that a promise is a monad is somewhat misleading; one could just as well say that it is an object.

Async is a somewhat atypical monad in Haskell. The IO, List, and STM monads provide a total order of operations, while Async provides a partial order. It is the only partial-order monad in Haskell that I know of, and I'm not sure whether that invalidates some assumptions about monads. Total order is an important feature of the other monads.

### Implementation not important

How monads are implemented is not important, they are an interface. In JavaScript:

```javascript
(new Promise((unit, err) => {
  setTimeout(() => { unit(1) }, 1000)
})).then((x) => {
  return new Promise((unit, err) => {
    setTimeout(() => { unit(x + 1) }, 1000)
  })
}).then((x) => {
  console.log(x)
})
```


Consider the type of 'then': it accepts a function from a value to a promise and returns a promise. In TypeScript:

```typescript
then(f: (x: any) => Promise<any>): Promise<any>
```


This would be a method on a promise object. Rewritten as a Haskell type, it would look like this:

```haskell
then :: Promise a -> (a -> Promise b) -> Promise b
```


This clearly fits the type of bind: m a -> (a -> m b) -> m b, with m = Promise.
Then unit is Promise.resolve, which has type 'a -> Promise a', fitting the type of unit, 'a -> m a', with m = Promise as well.

Should be pretty easy to show it follows the monad laws too.

### Except that JavaScript Promise is not a monad

See "No, Promise is not a monad" for an analysis.

### Sort of

The argument appears to be that left identity and associativity do not hold for every possible function; they do hold for some variants. In the end JavaScript is not strictly typed, so it cannot exclude the other variants. However, the above criticism is probably valid for _all_ attempts to define a monad in JavaScript. In TypeScript we could define a type signature for a monad, and a Promise would comply; it would just exclude the non-compliant use cases.

I guess this comes down to the distinction between some promises are monads, and all promises are monads.

Do you lose any useful functionality by excluding the non-monadic promise variants?
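The left-identity failure under discussion can be seen in a few lines (variable names mine): left identity says `unit(x)` followed by `f` should behave exactly like `f(x)`, but JS `then` and `Promise.resolve` recursively unwrap thenables, so the two sides disagree whenever `x` is itself a promise.

```javascript
// Left identity requires: Promise.resolve(x).then(f) ≡ f(x).
// JS breaks it when x is itself a promise, because resolve/then unwrap it.
const x = Promise.resolve(42);                   // a value that happens to be a promise
const f = (v) => Promise.resolve(typeof v.then); // observes what it received

f(x).then((r) => console.log(r));                       // "function": f saw the promise itself
Promise.resolve(x).then(f).then((r) => console.log(r)); // "undefined": f saw the unwrapped 42
```

With functions that never receive or return nested promises, the two sides agree, which is the "some variants comply" case described above.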

### DSL argument

The operations you have mentioned are read operations on the promise, but a promise (JavaScript or otherwise) has write operations as well (in JavaScript, these two operations are passed to the function with which the promise is created). And these write operations are not pure; they are non-deterministic and order-dependent. They cause non-local changes to the computation process.

This is at least an extension to the typical Monad protocol, which is focused on reading.
And this write protocol is an essential feature of a promise; discussing the Promise without acknowledging it is the same as just saying that a promise is an object. Yes, it is an object as well, but it is more than that. We can use promises well and express data dependencies with them exactly because promises are mutable objects.

But I still see no counterargument to the point of my reply that started this discussion about promises.

The thing is this, with promises or without: a Haskell program starts in an imperative DSL, with main in the IO monad, and then drops out of the IO monad somewhere to do purely functional computations. And it is only possible to work with persistent storage through this imperative DSL.

This imperative DSL has exactly the same problems as other imperative languages (order dependency, the ability to bring storage into an inconsistent state, and so on). Programming with this imperative DSL gives no advantages over using a normal imperative language (from a reliability and interruptibility point of view), but it brings a lot of disadvantages, as it is too cumbersome to use, IMHO, when one needs to get things done.

In Kotlin or Scala, we likewise start with an imperative language, but we can use functional features as needed. This is the same as with Haskell, but without the awkwardness of telling ourselves that we are writing purely functional code purely for the "side effects" it produces. We just honestly do the imperative part of the program, i.e. the primary and intended effects, rather than shyly calling them "side effects".

These are "side effects" only if we consider object methods to be "functions", because they look somewhat like them. They are not functions. They are processes with parameters. Pure functions are just well-isolated processes that have no interaction with the world other than accepting parameters and producing an outcome (and that is if we ignore memory and CPU load, which can affect the timing and success of other computations). This is an important case, but it is just a case that supports some interesting optimizations. The truth is that the algebra of processes is different from the algebra of functions despite some similarities, specifically because processes interact with the environment.

### Promises, Promises

See Andreas' comment below: the JS promise could have been defined to be a monad (if it did not recursively unwrap, I think), so "Promises" in general definitely can be a monad; it's just that the JS one misses by a small margin. If only the committee had accepted a compliant definition, it would make explaining this a lot easier :-)

### Monadic behavior is not a complete promise API

My argument is orthogonal to JS peculiarities. My argument is that a promise is a mutable object (a stateful channel) with both read and write operations. Monad-like behavior is just one of its facets (i.e. the read API), and an incomplete one at that.

Every promise implementation I know has a method that changes its state to some value or error if it was unresolved before. This is unavoidable because of the interaction with external systems. Something like:

```
setResult(p : Promise a, value : a) : Unit
setFailure(p : Promise a, failure : Error) : Unit
```


These methods return nothing, are order-sensitive, and are called only for their side effect.

It could be done in different ways.

• CompletableFuture in Java exposes these methods directly
• JS passes these two functions to the body supplied as a constructor parameter when the promise is created
• The E programming language and AsyncFlows use a resolver interface

But this is radically different from the IO monad, where there is no state change, only read operations. A promise behaves like a monad only after it has been completed with success or failure. Until that time a promise looks like a completely different animal. For example:

```javascript
function aNever() { return new Promise(function (a, b) {}) }
```


Returns a promise that never completes.

Since promises are non-deterministic and order-dependent, I doubt that any process that uses them can be called functional. This is different from the IO monad, which has a value or failure whenever it is available. Describing a promise without these resolver operations is a bit strange, since these operations cause non-deterministic, non-purely-functional behavior and are an essential part of the promise API. The other essential set of operations is listening for state changes, which is needed for writing coordination code like loops. So the minimal complete promise API is the following:

```
createPromise() : Promise a
setResult(p : Promise a, value : a) : Unit
setFailure(p : Promise a, failure : Error) : Unit
onFailure(p : Promise a, handler : Error -> Unit) : Unit
onSuccess(p : Promise a, handler : a -> Unit) : Unit
```


There is often a separation between the listening and resolution aspects (E, JS, AsyncFlows), but sometimes they are lumped together, as in Java's CompletableFuture.

Everything else is just usability extras. Creating an already-failed or already-resolved promise is also an optimization over this set of operations.

On the other hand, operations like map, flatMap, and then are accidental parts of the promise API that provide a utility read API on top of the listeners, and they could have been provided outside the promise as operators. For example, ANY or ALL operators look better when defined outside the promise as operators over closures that return promises (like this) rather than as promise methods.

Saying that a Promise is just a monad does not do it justice. The monad-like behavior is just one possible read API for the promise, so it is only one facet of the promise, and an incomplete one at that, since it does not include the listen operation, which does not return a promise.

When we use promises and connect them with pure functions, we get small deterministic islands in a sea of generally non-deterministic code. Any promise represents an unknown result; otherwise we would not have used it. Saying that such code is purely functional, restartable, or otherwise pure just masks the truth by declaring that the sea between the islands is not important.

First, a Future is not a Promise; it is something different.

A monad is an interface, so a Promise can be a Monad, and be other things at the same time. To put it in very object-oriented terminology, a Promise implements the Monad interface.

So when people say "A Promise is a Monad", it is like saying "A Dog is an Animal"; dogs, however, have special features that generic 'animals' do not have.

Promises are not in general cancellable; JavaScript promises in particular are not. And because a promise starts immediately, you cannot really 'set the result' of a promise. You might be able to do it, but it's really undefined behaviour, because the async process will carry on and eventually call resolve itself.

So in JavaScript, calling either "resolve" or "reject" from outside a running promise is bad practice, and not how promises were intended to be used. If you want something cancellable, use a Task instead.

In general, a loop over a promise is a fold, and can be expressed like this in JS:

return list.reduce((promise, x) => {
  return promise.then(() => {
    ...
  })
}, Promise.resolve())
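As a complete, runnable variant of that sketch (the `handle` callback here is a stand-in for whatever async work the loop body does):

```javascript
// Sequentially process a list, one promise per element.
// reduce chains the steps so each starts only after the previous completes.
function processSequentially(list, handle) {
  return list.reduce((promise, x) => {
    return promise.then(() => handle(x));
  }, Promise.resolve());
}

const order = [];
processSequentially([1, 2, 3], x => {
  order.push(x);
  return Promise.resolve();
}).then(() => console.log(order)); // elements were handled in list order
```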


### Terminology woes

Note that different schools of terminology exist regarding the meaning of the future/promise distinction, and they use these terms in incompatible ways. One puts the distinction on whether the type exposes a resolution operation; another on whether you can block on it directly or need to chain a continuation. And then there are others that follow neither of these distinctions and just use the terms randomly or more or less interchangeably. Ironically, that includes the original papers from the 70s.

### The argument appears to be

The argument appears to be that left identity and associativity do not hold for every possible function; they do, however, hold for some variations. In the end, JavaScript is not statically typed, so it cannot exclude the other variants. However, the above criticism is probably valid for _all_ attempts to define a monad in JavaScript. In TypeScript we could define a type signature for a monad, and a Promise would comply; it would just exclude the non-compliant use cases.

That is incorrect on both accounts. Monad laws and typing are orthogonal.

If JS promises had been designed properly and did not perform "recursive unwrapping", then they would indeed form a monad (and as a consequence, async would be a monad). This was very well known to the JS committee at the time, as this point was discussed to death (I was in those discussions). But in the end the faction won who argued for "convenience", whatever that means.

Conversely, types can't rescue it, because you cannot actually type the then method once you have given it this bogus semantics. The monadic type you'd want -- and the one that TypeScript uses(*) -- is simply unsound. It would take some form of type complement (with a quantifier, as in "Promise<T> for any T that is NOT a subtype of PromiseLike<U> for any U..."). But expressing such negative information is impractical in most type systems, because it interacts badly with type abstraction.

(*) TypeScript makes some weak attempt to cover more cases by typing it
Promise<T>.then<U, V>(
  onfulfilled? : (value : T) => U | PromiseLike<U>,
  onrejected? : (reason : any) => V | PromiseLike<V>
) : Promise<U | V>

but this breaks down if U or V is instantiated with, e.g., some Promise<Promise<W>>, in which case the actual result value is a Promise<W>, not a Promise<Promise<W>> -- provided W itself isn't again a promise type, and so on ad infinitum.
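The mismatch is observable at runtime. In this small sketch, the first `then` callback would be typed as returning `Promise<Promise<string>>`, yet the next callback receives a plain string:

```javascript
// JS flattens nested promises: the innermost value, not the inner
// promise, is what reaches the next .then callback.
Promise.resolve(1)
  .then(() => Promise.resolve(Promise.resolve('w'))) // "Promise<Promise<string>>"
  .then(v => console.log(typeof v)); // 'string': the nesting was collapsed
```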

### Shame

It seems a shame that TypeScript's type system is not able to constrain this adequately. I see it cannot fix it entirely, but it could disallow nested promises. It also seems a pity that JavaScript introduces something that cannot be soundly typed, but then again it is a dynamic language.

I can use a promise like a monad, though, so my claim that some promises are monadic still holds, and I could go a long time without ever realising that forms of promises exist that break the monad laws, as they seem to be completely unnecessary.

Edit: to clarify I mean some JS promises comply with the monad laws (the ones that do not use recursive unwrapping). As long as I do not use recursive unwrapping the promises are a monad.

### The issue isn't that some

The issue isn't that some promises are monads but the type system isn't strong enough to separate them from the others. It's that no promises are monads [edit: actually, I don't even know what that means; it's the promise type constructor that could be, but isn't, a monad], because the definition of monad has a quantifier in it, and there is a counterexample for promises: other promises. If you had a language that helpfully flattened [1, [2, 3], 4] to [1, 2, 3, 4] automatically for you, then lists wouldn't be a monad either.

### Hmm

Doesn't the IO monad do exactly that?

f _ = putStr "2 " >>= (\_ ->
        putStr "3 ")

main = putStr "1 " >>= (\_ ->
         f ()) >>= (\_ ->
           putStr "4 ")



Will print "1 2 3 4 "?

function print(x) {
  return new Promise((resolve, reject) => {
    console.log(x)
    resolve()
  })
}

function f() {
  return print("2 ").then(() => {
    return print("3 ")
  })
}

function main() {
  return print("1 ").then(() => {
    return f()
  }).then(() => {
    return print("4 ")
  })
}


will print "1 2 3 4 " also.

So doesn't the call semantics of Haskell 'auto-unwrap' as well?

Left Identity:
Promise.resolve(x).then((y) => { return f(y) })
=== f(x)

Right Identity:
m().then((x) => { return Promise.resolve(x) })
=== m()

Associativity:
m().then((x) => { return f(x) }).then((y) => { return g(y) })
=== m().then((x) => { return f(x).then((y) => { return g(y) }) })
=== (m().then((x) => { return f(x) })).then((y) => { return g(y) })


### Wrong examples

Your examples don't exhibit the problematic case. That occurs when the content of the promise is itself a promise.

Specifically, substitute Promise.resolve(z) for x in your Left Identity law. Then JS violates the law, because what the expression actually computes to is f(z) not f(Promise.resolve(z)) as the law would require. Right identity breaks in a similar manner, IIRC.
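A runnable sketch of that substitution, where a hypothetical `f` distinguishes a promise argument from a plain value:

```javascript
// Left identity requires: Promise.resolve(x).then(f) === f(x).
// Substituting a promise for x breaks the law, because JS unwraps x
// before calling f.
const z = 42;
const x = Promise.resolve(z);
const f = v => Promise.resolve(v instanceof Promise ? 'got promise' : 'got ' + v);

Promise.resolve(x).then(f).then(r => console.log(r)); // 'got 42': f saw z, not x
f(x).then(r => console.log(r));                       // 'got promise': f saw x itself
```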

Another angle on this is that the JS promise semantics deeply violates parametricity.

### Wrong Example?

That's the wrong example; you should put :-)

Promise.resolve(() => {return Promise.resolve(2)})


Anyway, I do see that you can break the laws when you pass a promise, because of the unwrapping. However, as long as 'm', 'f' and 'g' in the above examples do not accept a promise as their argument, the resulting code does obey the monad laws. I don't think I have ever written a promise like this, and a simple rule would be to avoid having promises as arguments to your building-block promises. A type system with disequality could write:

type Promise<A> requires forall B . A != Promise<B>


Edit: TypeScript 2.8 can actually define the type we want:

type NotPromise<A> = A extends Promise<any> ? never : A

function unit<A>(x : NotPromise<A>) : Promise<A> {
  return Promise.resolve(x)
}

let a : Promise<number> = unit(2)          // okay
let b : Promise<Promise<number>> = unit(a) // error


This will not allow the definition of 'b'. Unfortunately TypeScript infers the type 'Promise<{}>' for all promises, so you have to give explicit type annotations for it to work.

### Not sure

Not sure I understand your example, but note that both .then and Promise.resolve unwrap a promise argument recursively, which is what my version was exploiting.

Yes, I mentioned above that some form of type complement could express this, but type systems don't tend to have that (for good reasons). More to the point, a type constructor with a constraint on its domain wouldn't qualify as a monad AFAICT.

Edit in reply to edit: Wow, I wasn't aware that TS has added type-level conditionals now. From what I've seen about the theory of such features, I'm not sure how wise a move that is.

### Example

Well that wasn't meant to be a serious example, but it can be completed like this:

function unit(x) {
  return () => Promise.resolve(x)
}

function bind(x, y) {
  return () => x().then(y).then((z) => z())
}

function run(x) {
  return x()
}

run(bind(unit(1), (x) => {
  console.log(x)
  return unit()
}))

run(
  bind(
    bind(unit(2), (x) => unit(x)),
    (x) => {
      console.log(x)
      return unit()
    }
  )
)

run(bind(unit(unit(3)), (x) => {
  console.log(x)
  return unit()
}))

run(bind(
  unit(unit(4)),
  (x) => {
    run(bind(x, (y) => {
      console.log(y)
      return unit()
    }))
    return unit()
  }
))


Basically using a function to prevent recursive unwrapping.

In summary, const was right when they said that a (JavaScript) promise is approximately a monad, but for the wrong reasons. The additional API functions do not stop a promise from being a monad, and in general there are promises that are monads, just not the JS ones. If only the committee had listened to Andreas.

### TypeScript Soundness

TypeScript deliberately does not try to be sound. That's why it does not do HM type inference, and instead only infers the 'outer' types like 'Promise' for all promises.

If Gödel tells us a system cannot be both complete and consistent, TypeScript has deliberately gone for complete and sacrificed consistency. I would not be surprised to find a contradiction in there somewhere.

### If JS promises had been designed properly

Good job by the committee.

### ephemeral global state

Ephemeral state is indeed useful! Especially for transitory features of low information-relevance, like tweening animations.

But ephemeral vs persistent is on a distinct categorical axis from local vs global. We aren't limited to "global persistent" vs "local ephemeral". For example, we can model "global ephemeral" states like a publish-subscribe bus where old published records expire, and fresh records must be continuously published. Or we can mount shared-memory volumes as a temporary, ephemeral filesystem. Conversely, we can roughly model "local persistent" states using techniques like Smalltalk images.
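A toy sketch of such "global ephemeral" state: a publish-subscribe record store whose entries expire unless continuously republished. All names are illustrative, and the explicit `now` parameter just makes expiry observable:

```javascript
// Toy "global ephemeral" state: published records expire after a TTL
// unless fresh records are continuously published.
function makeBus(ttlMs) {
  const records = new Map(); // topic -> { value, expires }
  return {
    publish(topic, value, now = Date.now()) {
      records.set(topic, { value, expires: now + ttlMs });
    },
    read(topic, now = Date.now()) {
      const r = records.get(topic);
      return r && r.expires > now ? r.value : undefined; // stale records vanish
    },
  };
}

const bus = makeBus(1000);
bus.publish('temp', 21, 0);
console.log(bus.read('temp', 500));  // 21 (still fresh)
console.log(bus.read('temp', 2000)); // undefined (expired)
```

The state is global (any holder of the bus can read any topic) yet ephemeral (nothing outlives the TTL), illustrating that the two axes are independent.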

As far as "restarts" go: even with persistent state, it can be useful to model resets for volumes of state - e.g. clearing a volume of state to default values, capturing or restoring a checkpoint. I explicitly mention this in the article (look for term 'reset'). A restart can easily and explicitly be modeled as a reset caused by an external condition. In general, I would posit that restarts/resets are something we should explicitly model and support within our software system definitions or paradigms, not leave entangled with an ad-hoc OS process model. Ad-hoc restarts don't work nicely for large scale, long-running systems such as distributed apps or overlay networks, and become difficult to reason about.

In my experience, it's convenient if "ephemeral" is a semi-transparent property for a subset of stateful resources. By semi-transparent, I mean: a program may assign an ephemeral partition of state resources to a subprogram, but the subprogram cannot directly check whether given state resources are ephemeral. This makes it easy to support fine-grained orthogonal persistence, and fine-grained tradeoffs between durability and performance. Similarly, when I implement a transactional update model, I allow individual transactions to be marked 'durable', which determines whether we must wait for sync upon commit. (Non-durable transactions that only touch ephemeral state resources are effectively a software-transactional memory, which can be useful.)

In the end, I feel your "complete disagreement" builds upon a simple category mistake, whereby you wrongly conflate (or causally correlate) 'global' with 'persistent'.

### Performance

I am interested in your approach to state, and I think this is relevant to compositionality. But I cannot get past the performance aspect of global state. Surely on multi-core processors (Threadripper has 64 threads on 32 cores; next year AMD will have 64 cores) global state becomes an unacceptable performance limitation?

I can understand a small website keeping all its state in a database and having a stateless webserver/service, but as you scale up again you want multi-master databases with eventual consistency.

It seems to me that local state (perhaps something like the actor model, or a Kahn process network with bounded queue size, where each node has state) is what you want for parallel scalability?

### re: (parallel) performance

Local state does simplify local reasoning about parallelism. But whether this improves actual parallelism is a separate question.

In practice, we try to partition global state resources and partition our computations in ways that roughly align. For example, with tree-structured state, we can assign different subtrees to different subprograms or services. For a publish-subscribe state resource, we might partition it into channels and topics that are each processed by only a few agents or subprograms. For an entity-component system, we might partition state into 'component' tables, and arrange for most update tasks to only touch a few tables. At very large scales, these partitions might be physically distributed, migrating data or computations as needed.

Insofar as this alignment of partitions succeeds, we will discover opportunities for parallelism. We cannot reason about this locally because it depends on non-local configuration decisions, but we can still achieve a lot of parallelism in practice.

Also, there are many opportunities for parallelism in stateless computations, e.g. collections processing. In my experience, even with Actors model systems, this is often the greater source of parallelism.

I don't believe that local state is offering sufficient advantages for parallelism, especially not when we consider that runtime extension and upgrade also become significantly more important at larger scales.

### CPUs?

Consider designing something highly parallel like a modern pipelined CPU. We need state after each pipeline stage to locally store the computed result before the next pipeline stage. It would take a thousand times longer to store this back to main memory between stages, ruining the performance. Actors can model this kind of system precisely because they contain state, and an actor can model the pipeline register as well as the combinational logic that computes the next pipeline stage. This kind of local state is not even accessible from outside the device, and nor would you want it to be.

If we consider compiling code to an FPGA then we would want similar local state to pipeline computations.

Similarly in a shared nothing distributed computer (like most modern super-computers) each node computes locally and data is streamed between nodes.

I don't believe that local state is offering sufficient advantages for parallelism

These are all examples where local state enables significant parallelism. I don't think a CPU could realistically operate without it, and I do not think a large super-computer cluster, such as those used in weather forecasting or atomic simulation, could operate efficiently without it.

I agree with the original article that composition is an important property for software and hardware. I can see how that works with pipelined computation.

I find your logic on global state compelling. I would like it to work, but I can't seem to make it fit the use cases I am looking at.

### re CPUs

A system with a large number of CPUs can usefully be treated as a simple distributed system. Latencies are non-uniform due to cache, NUMA, network, etc.. Attention to this "physical locality" is relevant for performance, but is not the "local state" opposed in the article.

For global state resources, I usually model distributed systems by introducing logical latencies (and sometimes disruption) between state partitions. I've also experimented with modeling GPGPU partitions that constrain data types and computations. Modal types (such as Int@Here vs Int@There, or Int@Now vs Int@Then) also seem potentially useful to work with explicitly distributed and/or staged computations, but I haven't figured out how to best use them.

Global state does not imply random access.

### StateSpace

So, could global state have a non-uniform distribution? For example consider a 1000 node cluster each with "local" storage. Could we model this as a tree / file-system with a top level directory that contains 1000 sub directories, each being the local state of each machine?

Would the logical conclusion to this be effectively two declarative languages? One is the data/state, and the other the pure/combinatorial functions and assignments?

Would you have something like this for initial state:

/a = 1
/b = 2


And this sort of function:

/c := /a + /b


It does remind me of Prolog and DataLog, which are languages I like. Prolog treats everything as one global state space.

### re StateSpace

could global state have a non-uniform distribution? For example consider a 1000 node cluster each with "local" storage. Could we model this as a tree / file-system with a top level directory that contains 1000 sub directories, each being the local state of each machine?

Yes! If we intend to be more precise, we could instead model a logical network, defining available edges in a graph. But a tree is a very simple and convenient approximation. And arguably, non-uniform tree-structured global state is commonly used. Consider filesystems where we might "mount" ephemeral shared memory vs durable disk vs distributed network directories.

The state is "global" in the sense that the tree or graph of state resources is external to the computation, yet "local" in a physical/performance sense to computations performed at a particular node. My article regards only the former sense - local vs global, not local vs remote.

The ambiguity of natural language can too easily lead to confusion. In hindsight, I might have been wiser to use different adjectives. Perhaps "external vs internal" state, distinguished from "centralized vs distributed" state and "ephemeral vs persistent" state as three orthogonal axes. Then my article would favor external state and be neutral regarding the other two properties.

*Aside:* physically local vs remote is a relative property in context of a distributed resource model (e.g. this data is near that processor). In comparison, distributed vs centralized are better adjectives to describe a system as a whole. We can't reasonably discuss physically local state without first having a model of physical distribution.

Would the logical conclusion to this be effectively two declarative languages?

I do not believe that externalizing state strongly implies a declarative computation model. For example, entity-component-system effectively externalizes state (into component tables) from computations (the system updates). But ECS commonly uses an imperative system update model.

If we do go declarative, it would likely be a more sophisticated model than you describe above. For example, it's very convenient if we can represent many-to-one "inputs" such that separate expressions contribute to a shared value or decision (i.e. concurrency). We could do this by having expressions that "add" to a set or other commutative monoid, for example. For modeling stateful applications, it's very convenient if we can either refer to 'prior' states or contribute to 'future' states, using some variant on temporal logic.
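A small sketch of the many-to-one "inputs" idea, using a set as the commutative monoid so that contribution order and duplication don't matter (the API names are invented for illustration):

```javascript
// Separate expressions contribute to one shared value; because the
// combining operation (set union) is commutative and idempotent,
// contribution order does not affect the decision.
function makeSharedSet() {
  const items = new Set();
  return {
    contribute: x => { items.add(x); },  // commutative, idempotent "add"
    decide: () => [...items].sort(),     // read the combined value
  };
}

const wanted = makeSharedSet();
wanted.contribute('b');
wanted.contribute('a');
wanted.contribute('b'); // duplicate contribution is harmless
console.log(wanted.decide()); // ['a', 'b'] regardless of contribution order
```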

In the vein of Datalog, you might obtain some inspiration from Dedalus (Datalog + Time) and Bloom (Dedalus + Partitioned Space and more). But I'm not prescribing any specific model. Even favoring entity-component-system is a pretty good step in the right direction, IMO.

### Data Hiding

Presumably data hiding can be achieved by filesystem like permissions?

If using something like the actor model, or KPNs I assume instancing an actor would 'mount' its state in the hierarchy somewhere (the tree would follow the modules I guess) allowing a user with adequate permissions to look at that state, but not necessarily at the same latency and bandwidth that the actor would have locally to this?

### re Data Hiding

Within an application, the main purpose of data hiding is to isolate trust and bugs. It is sufficient for a program to partition access to data among different subprograms, controlling where data is shared. This requires that there be no ambient access to the tree of resources. The object capability model is easily adapted for this purpose. (It can also be enforced at global scales, cf. Tahoe-LAFS.)

But I wouldn't advocate Unix-style file permissions! That's a terrible security model, and fine grained partitioning of authority and access is too awkward.

As for latency and bandwidth, that depends on how we model data access. We might ask for a copy of a standard view of the data, which costs many round-trip latencies. Or we might support sending code to evaluate near the remote data. In the latter case, we can do a lot to mitigate latency and bandwidth issues by handling conditional behavior remotely and distributing computations across many machines, avoiding unnecessary round-trips.

### Capabilities or Effects

I would be interested in hearing your thoughts on capabilities vs effects for global state. This seems to me to be parallel to modules vs type-classes. I find I prefer type-classes because modules need to be passed down. You get the 'plumbing' problem: I have code that is called by some 3rd-party library, and I want to access some module or capability from my callback function, but I don't want to have to change all the function types in the library to get the capability or module to my code. You don't have the same problem with type-classes or effects.

### re Caps or Effects

I use both. Object capabilities and algebraic effects are easily layered.

We can use an opaque data type to model a capability. The value will be unforgeable by subprograms (though we can provide an abstract interface to manipulate it), transferrable (like any other value), and will represent both access and authority to a resource (in context of some effects model). This value is then used in a subset of effect requests.

Given an abstract interface for the opaque capability, we can support its refinement. For example, given a value representing full access to foo/ and a suitable API, we might be able to (via pure function or effectful request) refine this to read-only access to foo/bar/. Fully generalizing, an abstract "capability" value might simply be an opaque wrapper around another (possibly monadic) effects model. The main caveat is that we must never conflate layers. For example, if we're programming in Eff, and assuming Eff permits user-defined handlers, we must not treat Arg -> Eff Result as a capability. But this isn't a difficult constraint. It's trivial to wrap Eff with an opaque type and smart constructors, for example.
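A minimal closure-based sketch of a capability value and its refinement, in the spirit of the foo/ example above (the file-store API is made up for illustration; in a typed language the capability would additionally be an opaque type):

```javascript
// A capability as an unforgeable value, separate from any effect system.
// Holding `readWrite` grants full access; refinement derives a weaker
// capability from a stronger one without exposing the underlying store.
function makeFileCaps(files) {
  const readWrite = {
    read: name => files[name],
    write: (name, data) => { files[name] = data; },
  };
  // Refinement: read-only access derived from full access.
  const readOnly = { read: readWrite.read };
  return { readWrite, readOnly };
}

const { readWrite, readOnly } = makeFileCaps({ 'foo/bar': 'hello' });
readWrite.write('foo/bar', 'updated');
console.log(readOnly.read('foo/bar')); // 'updated': same underlying store
console.log('write' in readOnly);      // false: the refined cap cannot write
```

The values are unforgeable in the object-capability sense because `files` is only reachable through the closures handed out by `makeFileCaps`.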

Regarding the "plumbing problem", I feel the syntactic issues are relatively trivial, and not nearly so distinct as you imagine. For example, if you have some function foo :: (Foo a) => a -> a, and you want to add new behavior using another typeclass Bar, foo :: (Foo a, Bar a) => a -> a, you'll also need to add the Bar a restriction to all transitive clients of foo that don't already have it. This corresponds (roughly) to adding a new 'capability' argument to a function then passing it down to foo. Alternatively, Foo represents a collection of operations, and we could add more ops to Foo without changing the foo interface. This would correspond to simply providing a record of capabilities then later adding another cap to the record. There is some moderate convenience because typeclasses are implicit at call-site in Haskell. But, well, Scala has its implicits, and even Haskellers have played with (and mostly rejected) the idea of implicit parameters. AFAICT, making repetitive data-plumbing syntax more implicit should be treated as a separate concern.

### Transitive Functions

This corresponds (roughly) to adding a new 'capability' argument to a function.

I don't want to have to fork a dependency written and maintained by someone else to add an argument to all the functions in the call graph. This would be a big maintainability problem in large programs that tend to depend on many different third party libraries. If I count the number of libraries npm pulls in for an average project it is in the hundreds.

If we assume that every function has an implicit capability argument, because we don't want to create a dichotomy of functions that do and do not support IO callbacks (which would result in having two flavours of many common library functions), and this capability argument represents all capabilities available to a function as a collection, then we can omit this argument from function type signatures, and we are back to an effects model where we can assume that any access to an IO function must be using the capability context passed to the current enclosing function. This is essentially the same as the type-class or effect model.

### It is not an intrinsic

It is not an intrinsic feature of algebraic effects that the syntax for adding an effect is minimal. That's a *language feature* to make algebraic effects tolerable to program with. Otherwise it might be like adding every exception to a Java throws clause.

There are plenty of ways around adding explicit parameters, e.g. simple reader monads with an environment-sandbox value, or sophisticated staged programming with constraint based dependency injection. But this is fundamentally a separate concern from the features of caps vs effects.

I agree that carefully restricted use of caps can simulate fine-grained effects. Caps are quite expressive that way. However, that is not how I use them.

### Java Exceptions

Otherwise it might be like adding every exception to a Java throws clause.

Yes, this is an example of the kind of thing I want to avoid, where every function in the call graph between the throw and the catch has to be decorated with the exceptions.

In the spirit of this article, I don't find this a good solution because monads do not compose.

sophisticated staged programming with constraint based dependency injection

I think you are referring to type-class like mechanisms which can in effect be considered compile-time logic programs that propagate constraints and inject dependencies, which is the kind of solution I prefer.

But this is fundamentally a separate concern from the features of caps vs effects.

I must be misunderstanding something, because to me this seems to be exactly the difference between a capability and an effect. If we limit ourselves to a simple system where we only have channel IO, then a capability is just a channel endpoint passed as an argument. An effect would just allow this to be implicit rather than explicit, right? What am I missing?

### caps vs effects redux

this seems to be exactly the difference between a capability and an effect. If we limit ourselves to a simple system where we only have channel-IO, then a capability is just a channel endpoint passed as an argument. An effect would just allow this to be implicit rather than explicit right? What am I missing?

I think you've conflated some ideas too deeply for me to easily unravel. So I'll just present a simple re-introduction to the concepts. I think of it this way:

• A capability is a value, like a phone number. You can write it down in a phone book, send it in a text message, encrypt it, or seal it in an envelope. Except it's like sixty digits long and sparsely allocated in a secure way, so no spammers or scammers be guessing at phone numbers.
• An effect is an interaction with an environment, like dialing a phone number or spitting into the wind. An effect might be *described* by values, which represent events or requests or even work-orders with entire decision-trees. But strictly, it's the interaction, the rendezvous, the communication of values that we call "effect".
• Holding a capability does not imply direct access to an effect, much like a phone book cannot dial phone numbers. With this in mind, it's entirely feasible to separate caps and effects. Whether they still qualify as "object" capabilities is mere pedantry. Caps preserve the important compositional security properties so long as they're unforgeable.

The ideas of capability and effect aren't even in the same conceptual category.

It is true that we can use capabilities in a restricted manner to implement an effects system. Similarly, we can use wood to build a house, or use numbers to represent sound. But what does this tell us of the "difference" between "wood vs house" or "number vs sound" or "capability vs effect"? Nothing! Nothing except that there's probably a deep misunderstanding by whomever is presenting the argument.

You use "effect" in context of extensible algebraic effects, which is a particular way of describing effects and modeling interactions (basically, via open sum type of resumable exceptions). Even in that context, I find it very useful to distinguish a phone-number capability from a specific effect-description like Dial(phone-number), and to distinguish the effect-description from the actual effect where the computed Dial(phone-number) is somehow communicated to an "environment".

True, but not especially relevant. Choosing to have an implicit monad built-in to your language certainly does nothing to improve this particular issue. What we can do is develop monads over extensible models - e.g. row-polymorphic types, open variants, pubsub topics, temporal-constraint variables - such that we can "compose" them via extension.

### fwiw i'd guess that cap >> effect

...in that caps are often the way to get the ability to use/invoke an effect. in the chicken or the egg sense, i'd see caps as the ur-thing, since to be able to invoke an effect without a cap just means the cap is pervasive.

### Effect Permissions

Just as you can view an OO capability as both the ability and the permission, so that possessing the object is the permission, we can view effects as implicit permissions. Simply having the effect handler in scope is both the ability and the permission to use it.

I think some thought would need to be given to effects as permissions, as it does not seem as well defined as capabilities as abilities. For example there is an obvious need for a 'sandbox' as way to remove all effect handlers except specific ones from scope. Perhaps you could have 'hide x in ...' hide a variable in this scope, and have a constrained universal form 'hide all except x in ...'. In this way the 'main' program can start with OS defined effect handlers for all its IO, and functions can be called with constrained access. A final form would need to allow a new handler to be defined where that handler runs with the outer scopes permissions, but is available in the inner scope, to provide controlled/limited access to a resource as in 'hide all except (handler ...) in ...'
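A rough sketch of the proposed 'hide all except' construct, modeled here as a plain function over a record of effect handlers so that a subprogram runs with an explicitly narrowed environment (all handler names are invented for illustration):

```javascript
// 'hide all except x in ...' modeled as a function over handler records:
// the subprogram receives only the handlers it is permitted to use.
function hideAllExcept(handlers, allowed) {
  const narrowed = {};
  for (const name of allowed) narrowed[name] = handlers[name];
  return narrowed;
}

const osHandlers = {
  log: msg => console.log(msg),
  readFile: name => `contents of ${name}`,
  deleteFile: name => { /* destructive, should not leak into sandboxes */ },
};

// The sandboxed subprogram can log and read, but deleteFile is simply absent.
function subprogram(env) {
  env.log(env.readFile('config'));
  return 'deleteFile' in env;
}

console.log(subprogram(hideAllExcept(osHandlers, ['log', 'readFile']))); // false
```

The last form described above ('hide all except (handler ...)') would correspond to adding a freshly constructed handler, closed over the outer environment, to the `allowed` record before passing it in.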

### Object Model

I understand what you are saying, that the capability is just the phone number (or permission to dial the number), but then how do you dial? In an actor model the actor could be both the permission to dial and the IO channel to dial over. In this sense the actor is the IO endpoint, and simply possession of the endpoint is permission to use it. A classic example of this is a file handle in an object oriented system. Likewise with effects simply having the effect handler in scope could be considered permission to use it.

So I agree that capabilities and effects are not the same thing, and yet each is capable of working without the other. In my "keep it simple" mindset, why would you want to make things complex by having both? You should either choose an OO model (or actors) and use capability objects that provide the methods, or go functional and let the effects be the permissions too. What advantage is there to having both? There is a natural affinity between the language structure and either capabilities (OO) or effects (functional/algebraic). One advantage of tying the permission to the ability is that you cannot fail to dial: with object-capabilities you can only dial the numbers you have, "pn123456.dial()", and with effect-permissions you can only have the numbers you can dial, "dialPn123456 ()".

I would go further and say that functional programming has an algebraic structure, and OO has a co-algebraic structure. As such there is always going to be a mismatch in trying to fit algebraic and co-algebraic features together, unless you are aiming at a multi-paradigm (everything including the kitchen sink) approach. I think other features can be viewed in this light as well: type-classes (algebraic), modules (co-algebraic), and maybe others. This suggests two "sweet spots" for designing simple single-paradigm languages:

1. Objects, Modules, Object-Capabilities, Local State
2. Functions, Type-Classes, Effect-Permissions, Global State

I agree about an embedded monad and using row-polymorphism; in effect you then have imperative value-level syntax with a "functional" type assigned to it. This is one of the possible models I am interested in.

### re Object Model

then how do you dial? In an actor model the actor could be both the permission to dial and the IO channel to dial over

Consider the following: the Haskell base library implements channels. These channels exhibit all of the essential properties of object capabilities - unforgeable, transferable, accessible only through parenthood, endowment, or introduction. Yet, we can only directly use those channels in the context of the IO monad. Does this language-layer constraint mean the channels are not capabilities? I say nay.
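A hedged illustration of the point: a `Chan` from `Control.Concurrent.Chan` is unforgeable and transferable, and authority over it flows only by creation or explicit hand-off, even though it is usable only in IO:

```haskell
module Main where

import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

-- A function that has been *given* the channel holds the authority to
-- send on it; code never handed the Chan has no way to forge one.
producer :: Chan String -> IO ()
producer ch = writeChan ch "ping"

main :: IO ()
main = do
  ch <- newChan    -- parenthood: the creator holds the capability
  producer ch      -- introduction: authority transferred by argument
  msg <- readChan ch
  putStrLn msg     -- prints "ping"
```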

Similarly, imagine if you developed an actors-model language with a built-in purely-functional sub-language or DSL. Wouldn't those actor references still qualify as 'capabilities' even though they cannot be invoked from within a useful subset of your language expressions?

What matters most for capabilities is not the ambient ability to invoke them, but their security properties and the ability to transfer them.

capabilities and effects (...) What advantage is there to having both?

That's a reasonable question.

I had initially separated capabilities and effects in context of modeling distributed and mobile computations. You might think about this as an "actor near the user" vs "actor near the camera" vs "actor near the database" vs etc. which might be separated by a physical network, and which may further possess varying degrees of physical mobility (because mobile phones, camera drones, replication of databases, etc.). I eventually abandoned actors, but they're where I started.

In context, it's often convenient to model capabilities that can only be used by computations that are already "near" a particular resource. Benefits of doing so: we can restrict 'network disruption or partitioning' errors to a subset of capabilities, and we can better control network bandwidth and latency. Further, we can adapt to temporal partitions - i.e. staged computing - where we might wish to control which effects are permitted during a specific phase of computation. Finally, we might also work with specialized partitions, like GPGPUs, where external communication and effects must be severely restricted.

An obvious consequence is that capabilities are no longer globally usable. A computation can only use a subset of the capabilities it holds, perform a subset of effects, based on where (or when, etc.) it is located. Nonetheless, we still benefit from object-capability model security patterns for managing and distributing authority through an application or network.

But there was a less obvious consequence regarding the granularity of security reasoning. It's feasible to work with very fine-grained effects or capabilities, but developing, maintaining, and generalizing the fine-grained code can very easily become a hassle. However, when we combine effects and caps each of a more modest granularity, the combined result is both fine-grained and very easy to reason about. It's somewhat analogous to choosing ranges on orthogonal X and Y coordinates: the intersection is a fine-grained rectangle.

Thus, I would advocate use of both caps and effects on the basis that it simplifies code, mitigating hassles of fine-grained manipulations. And if you're working with staged or distributed computations, there are several more motives.

functional programming has an algebraic structure, and OO has a co-algebraic structure. As such there is always going to be a mis-match in trying to fit algebraic and co-algebraic features together

As I mentioned a few posts up, caps are easily modeled within functional languages by use of abstract or opaque data types, and are easily restricted to use within a subset of algebraic effects.

I grant that opaque data doesn't imply an object model (no messaging, no polymorphism, no inheritance, no local mutable state, etc.). You could pedantically argue they aren't "object" capabilities. But if values are unforgeable and transferable, and we can easily use them for object capability model security patterns, then I'm of the mindset "walks like ocaps, quacks like ocaps, it's ocaps". Or, at the very least, they're "caps".
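As a concrete sketch of "walks like ocaps" (module and names illustrative): an opaque newtype whose constructor is not exported is unforgeable, so possession of a value is the only route to the authority it wraps:

```haskell
-- Capability as opaque data: the 'FileCap' constructor is NOT exported,
-- so clients cannot forge one; they can only be granted one and pass
-- it on. (Module and names are illustrative, not a real library.)
module FileCap (FileCap, grant, readVia) where

newtype FileCap = FileCap FilePath

-- Only trusted setup code mints capabilities.
grant :: FilePath -> FileCap
grant = FileCap

-- Holding the capability is the only way to read through it; the
-- wrapped path can neither be extracted nor altered by clients.
readVia :: FileCap -> IO String
readVia (FileCap p) = readFile p
```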

### Extreme languages

I don't want to argue that what you propose are not capabilities; I agree that an unforgeable token is all you need.

It was more that I was interested to see what a completely "algebraic" language would look like and feel like to program in.

I am also interested in the following ideas which are relevant to this topic:

- Co-algebras are modular, algebras are anti-modular?

- Algebras compose (to algebras), co-algebras compose to co-algebras?

- Should we only care about composition, not modularity?

### re compositionality and modularity

Compositionality strongly implies modularity. Because we can always treat components as modules: given program X*Y, we can trivially modularize X and Y. However, the converse is not true. Modules cannot generally be treated as components because modularity does not imply standard composition operators or compositional properties for reasoning.

I would say that, other than compositionality, extensibility is a very interesting property: our ability to usefully modify a software system's behavior without deep modifications to its definition. Using global/external state, we can have composable extensions.

### Composition is Anti-Modular.

There is a problem with composition and modularity. Consider the program "(X*Y)*(Z*X)". Modularity should allow us to locally define the meaning of operators, so in the left subprogram '*' is locally defined to be some commutative, associative (and distributive?) function, and in the right subprogram a different function with the same properties. The question is, what definition of '*' do we use to compose the two programs?

Composition implies some global definition of composition operators, which is anti-modular.
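For what it's worth, one resolution Haskell offers for exactly this ambiguity is to tag each carrier with a newtype, so the type rather than the lexical scope selects which algebra applies. A small sketch using `Sum` and `Product` from `Data.Monoid`:

```haskell
import Data.Monoid (Product (..), Sum (..))

-- Two different monoids on Int coexist; the newtype wrapper, not the
-- enclosing "module", selects which composition operator applies.
left :: Sum Int                  -- "left subprogram": additive '*'
left = Sum 2 <> Sum 3            -- Sum 5

right :: Product Int             -- "right subprogram": multiplicative '*'
right = Product 2 <> Product 3   -- Product 6

-- Composing the two programs forces an explicit coercion, making the
-- choice of '*' for the combined program visible rather than ambiguous.
combined :: Sum Int
combined = left <> Sum (getProduct right)  -- Sum 11
```

This does not refute the point about a global composition operator; it relocates it (there is one global `<>`, disambiguated by types).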

We can see from above how this applies to algebras, I am not sure it applies to co-algebras, because given "x -> a * b * x", from a modularity perspective the decomposition of 'x' all takes place within the scope of a single module.

Edit: I am referring to a module, as in the computer science property of modularity, not the mathematical concept of a module.

### re absurd definitions of modularity

You posit that modularity should allow override of specific symbols like '*'. Why so? Would you also argue Java modules aren't modules because the 'class' and 'while' keywords have global meaning? Languages with keywords would be anti-modular by your definition, which is absurd.

Modularity isn't even about symbol or syntax overloading. When normal programmers discuss modular software systems, they are usually talking about some ability to slice and splice the system, not ability to fully override a host language syntax or behavior with a local language. I grant the latter is a fascinating property, but it should be given another name.

### Modularity of Algebras

'*' is just the product operation in our algebra, so why would it not be overloadable? It is in mathematics, as in an abstract algebra the product can be any commutative, associative operation. It is no different from a binary function except that it is infix rather than prefix.

I didn't think this was a particularly radical concept (you can do it in Haskell)?

### re conflating modularity with other concepts

Software modularity doesn't imply the ability to overload symbols. Modularity doesn't FORBID this either. Overloading is not a "radical concept", just an "irrelevant concept" (in context). Software modularity isn't about symbol or syntax overloading. It's about our ability to partition a software system for independent implementation and maintenance.

For compositional software systems, we normally have one algebra for software components and composition carefully chosen by a language designer, not a panoply of user-defined algebras. Overloading mostly helps for user-defined symbols. Keyword '*' without overloading works well for compositional software.

Relevant to a 'compositionality vs modularity' argument, modularity also implies nothing about whether the "modules" will have homogeneous vs heterogeneous structure. Compositionality allows for easy partitioning along 'component' boundaries. This allows a form of modularity with homogeneous, fractal structure aligned on the component type (which could be some model of "functions" or "stream processors" or "machines" or "behaviors" or "pubsub environments" or "rewriting graphs" or similar). However, modularity does not imply compositionality, because modularity in general might involve heterogeneous structure with ad-hoc glue. Hence, there is a clear subset relationship: all compositional systems are modular, but not vice versa.

Aside: Composition also doesn't prohibit symbol overloading, e.g. via namespace resolution or desugaring named locals into simple components. I had contemplated pointing this out above, but it's also irrelevant.

### to repeat something you've said before

e.g. objects in oop might be modular, but they don't tend to compose; whereas functions in fp more likely do.

(i get it viscerally but wonder technically why that is? because some state is internal in oo whereas it is all external in fp? because oo is too often is-a rather than has-a? also, does it depend on what composition means? are there algebraic orchestration languages so that e.g. actor message passing style could be seen as supporting composition?)

### re fp, oop, and compositionality

It's largely cultural. FP language designers have mathy backgrounds, adore parametric polymorphism, and largely shun non-algebraic data types (e.g. favoring Option over Nullable). FP could easily have gone another route, historically (cf. Ceylon's types). With the marvelous benefits of hindsight, FP language designers could have done much better in some ways (e.g. row polymorphic records or heterogeneous collections to construct data structures compositionally via anonymous intermediate types; effect types for composing fine-grained interactive behaviors). But the FP culture of composition is still there, and improvements are being made.

But there are also several structural reasons that OOP is not so good at composition. Some of which I describe in Why Not Events (see enumerated points).

### black boxes (existential, OOP) vs. white boxes (algebraic)

Some reports from the trenches: objects compose just fine; it just takes practice to get used to them, because they introduce a new composition principle: models, or black boxes. A model is a partial description of what actually happens, and other components interact via the model, without knowledge of the model's implementation. So it is a black-box composition principle. Structured programming and algebraic types, in contrast, use a hierarchical white-box composition principle.

An interface is an example of a model. A parameter of generic type is a model of a type. A generic function that accepts a function as an argument uses a model of behavior. So we split reasoning into two parts: outside the black box and inside the black box. Each part of the reasoning needs only to care about the black box's surface, without caring about the other side.

Encapsulation is just the principle that objects must have a minimal black-box surface (interface) while keeping volume (behavior and state). This is a complexity management principle.

Algebraic types are white-box composition, where all cases are enumerated, in contrast to objects, which are black-box composition (existential types). As black-box composition is higher on the meta-level, it is more costly to use, but it manages complexity better.

The systems with the most complex behavior are written in object-oriented languages today. And these systems are compositions of objects. Generic collections are more usable in modern Java than in Haskell, IMHO, and I have tried both.

Practically every graphical UI today is a composition of objects. The problem in modern complex web site programming is that HTML is an algebraic type, but UIs need objects. So frameworks like Angular turn algebraic HTML into objects. Even if a UI is written in C, it is still a composition of objects; it is just that the syntax of the language requires jumping through hoops to create them (void pointer + function pointer is an existential type, i.e. an object).

Functions are just one-method objects, after all. Purely functional languages use the black-box composition principle for behavior and the white-box composition principle for data. OO languages use black-box composition for both aspects. And the black-box composition principle is the more powerful composition tool, as it better handles complexity, since it restricts reasoning to the surface of the black box without requiring us to peek inside it. Functional languages discovered a good syntax for writing simple one-method objects, but that is all. All the OOP languages I know that adopted the idea picked up closures as syntactic sugar. FP languages picked up objects either as a significant change of semantics (OCaml, type classes in Haskell) or without related validation (Common Lisp). This is evidence for the higher level of abstraction.

And I would say that FP composes less well outside of selected examples, because one needs special effort to use a related set of functions. GUI and IO libraries did not look nice in FP compared to their OOP counterparts from the point of view of usability and extensibility.

Type classes in Haskell are worse in usability compared with Java interfaces and classes, because they require writing functions separately and then explicitly composing them into a type-class instance, and there is always a problem with function definition scope (why do I have to introduce a type class for overloads, and why do I have to invent global names for the functions that implement a type class?). In Java we just write "implements Interface" and the operation is active; there are fewer mental steps.

### Implementing Interfaces

In Java we just write "implements Interface" and the operation is active; there are fewer mental steps

Typeclasses are interfaces, and object classes are types, so there is no difference between declaring a Java class that implements an interface and declaring a Haskell type that implements a type-class.

The advantage of a type-class is that you can declare instances for types that already exist, even types written by other people. In other words typeclasses are interfaces that late-bind.

For all the cases where you would use Java interfaces there are no other differences, so there should not be any extra mental steps.

Of course you can do other meta-programming things with typeclasses that you cannot do with Java interfaces, but you don't need to think about such things if you just want a simple interface.

... because they require writing functions separately then explicit composition into instance of type class

I don't understand this, you can declare the functions directly in the implementation as in:

```haskell
instance Show MyType where
  show (MkMyType x) = show x
```


So there is no need to write functions separately from the instance.

### Type classes and recursive functions

My Haskell memories date back six years, so I may have gotten something wrong.

However, if I remember correctly, to implement a recursive function for a type class, one has to either define it separately or use a 'where' clause in which a new function name is defined. My statement was imprecise in the sense that it applies to the recursive case only.

Most type class samples that I saw at that time in the standard library chose to have a separate function, unless it was a simple one-liner, and sometimes even for one-liners. So there is an additional step between the implementation and the interface function for a large class of functions. Even if it is not required, it seems highly suggested by the language's features.
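For what it's worth, direct recursion through the class method does work without an auxiliary name, because the recursive call dispatches back into the same instance. A small sketch:

```haskell
-- A unary-numeral type to make the recursion visible.
data Nat = Z | S Nat

-- The recursive call to 'show' dispatches back into this very
-- instance, so no separately named helper is required.
instance Show Nat where
  show Z     = "Z"
  show (S n) = "S (" ++ show n ++ ")"
```

The separate-helper style is still common in the standard library, but for reasons of reuse and inlining rather than necessity.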

### re black boxes etc.

objects compose just fine

Within OOP systems, normally only subsets of objects from an instance of the appropriately named composite pattern will exhibit the characteristics of compositionality.

We aren't using the informal natural language definition of 'compose'. Compositionality is more than just ad-hoc gluing objects together. Review the OP.

Algebraic types are white box composition where all cases are enumerated

True, but misleading. The other half of the story: algebraic types are frequently layered with abstract data types. This allows a clean and convenient separation of responsibilities between structure description and black-box implementation hiding.

Practically every graphical UI today is a composition of objects.

True. Composite pattern has proven to work reasonably well for UI, and has a lot of historical momentum.

Of course, compositional data also works, which is why we've got HTML. Then we've got DOM that interprets HTML into an object. Then we've got React which reinterprets DOM back into a data structure (via virtual-DOM). There's also entity-component-system, which composes data non-hierarchically.

FP composes less well outside of selected examples, because one needs special effort to use a related set of functions.

Although functions are capable of composing, most FP languages and (critically!) their library ecosystems aren't developed around it. Much ad-hoc glue instead of tacit programming with simple composition operators.

To support functional composition well involves API design, discovering and flattening the speedbumps through empirical practice. Otherwise, you'll forever be adding data plumbing code before each useful function, or just forcing it with named locals.

OOP languages (..) picked closures as a syntax sugar. FP languages picked objects as significant change of semantics (..) This is evidence for higher level abstraction.

Well, objects are certainly a more universal abstraction. They're every language design responsibility rolled into one big language feature. Single responsibility rule? Hah, fools! Objects do data hiding, modularity, mutable state, identity, hashing, inheritance, reflection, AND interface abstraction, all in ad-hoc ways!

(From perspective of many pure FP programmers, Object is already a God Object.)

GUI and IO libraries did not look nice in FP comparing to OOP counterparts

IIRC, most GUI and IO libraries in FP are FFI translations from the OOP counterparts.

Type classes in Haskell are worse in usability comparing with Java interfaces and classes (...)

I suspect this is a matter of opinion and experience. You describe stumbling over issues with type-classes that I do not recall from my experiences. I do appreciate the ability with typeclasses to add new interfaces to old types without adding a wrapper.

### object = black box

Well, objects are certainly a more universal abstraction. They're every language design responsibility rolled into one big language feature. Single responsibility rule? Hah, fools! Objects do data hiding, modularity, mutable state, identity, hashing, inheritance, reflection, AND interface abstraction, all in ad-hoc ways!

(From perspective of many pure FP programmers, Object is already a God Object.)

An object expresses the single concept of a black box. Behavior and data cannot be separated inside a black box, and functional languages also mix them together for the behavior black-box abstraction. Lambdas capture state from their context and thus have private state too; it is just that this state is usually captured implicitly from the lexical scope. But it is still private state. Internally, a lambda is the same pointer-to-data plus pointer-to-stateless-function as a Java object.

The strong point of functional languages is where they do use black-box abstraction (higher-order functions); the weak one is where they do not use it consistently (data).

All the object features you have named are shadows of the high-level black-box concept cast onto lower-level concepts. The object is a high-level concept that organizes behavior and data into something new, and as it organizes them, it puts some restrictions on both. Data hiding is one of the restrictions; modularity is a declaration that there is a surface of the black box; and so on.

Although functions are capable of composing, most FP languages and (critically!) their library ecosystems aren't developed around it. Much ad-hoc glue instead of tacit programming with simple composition operators.

Yea. And I argue that this happens because the abstraction level is insufficient.

There is practically no reason to use pure functional languages like Haskell from a productivity point of view. For example, Kotlin and Scala can do practically everything that is good from a usability point of view in these languages, and they can do even more in areas where FP languages are weak.

Algebraic types are white box composition where all cases are enumerated

True, but misleading. Other half of this story: Algebraic types are frequently layered with abstract data types. This allows a clean and convenient separation of responsibilities between structure description and black box implementation hiding.

An abstract data type is a black-box abstraction. The problem is that Haskell is just a half step there. It has its abstract types in the form of type classes. But 99% of code uses the white-box linked list instead of a generic collection interface for declarations, because the linked list is explicitly supported by syntax and the generic type abstraction is not.

In OOP languages abstract data types are truly abstract, and most functionality is written against interfaces rather than implementations. There are iterator or stream interfaces, but they are still abstract, and do not require converting one data structure (for example, a hash map) to another (a list).

A dot-access operation like "object.toString()" looks like a trivial thing, but it separates the black box from the usage of the black box. The fact that Haskell does not have such an operator leads to numerous usability issues in the library and causes under-use of type classes.

Laziness and immutability of structures make lists behave almost like a stream or an iterator with a memoizing function, but still there are big warning letters everywhere about performance issues if the wrong version of a fold operator is used. The design of efficient data structures is much less straightforward in FP languages.

There are some advantages to immutable collections in corner cases, but straightforward usage is complicated because it breaks the black-box assumption. Every mutation operation on a black box has to return a new version of the black box in purely functional code. If there is a strict hierarchy of black boxes, this is fine, but if there is a complex graph, we have to replace every usage site of the changed black box, or introduce an intermediate structure that names all the black boxes, with everything referring through this structure (causing GC and other issues). The idea of a black box is that we do not care about its content, but in this case we suddenly have to care whether it holds a reference to another black box.

Again, OOP languages can do anything FP can do via syntactic sugar, but pure FP has some problems with what OOP can do trivially. This is a strong indication of a higher level of abstraction.

### Abstract data type is black

An abstract data type is a black-box abstraction. The problem is that Haskell is just a half step there. It has its abstract types in the form of type classes.

In my experience, most use of abstract data types in Haskell is via the module system: export a type-name without exporting constructors or accessors. This also supports smart constructors.
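A minimal sketch of that pattern (module and names illustrative): export the type name but not the constructor, and provide a smart constructor that enforces the invariant:

```haskell
-- Only the type 'NonEmptyList', 'fromList', and 'toList' escape the
-- module; the 'NE' constructor stays private, so the non-emptiness
-- invariant cannot be violated from outside. (Names illustrative.)
module NonEmptyList (NonEmptyList, fromList, toList) where

newtype NonEmptyList a = NE [a]

-- Smart constructor: the only way for clients to build a NonEmptyList.
fromList :: [a] -> Maybe (NonEmptyList a)
fromList [] = Nothing
fromList xs = Just (NE xs)

toList :: NonEmptyList a -> [a]
toList (NE xs) = xs
```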

Object expresses the single concept of black box.

In my experience, many objects in OOP software seem to be white boxes - JSON round tripping, a '.Class' getter, downcasting, public fields, etc...

linked list is explicitly supported by syntax, and generic type abstraction is not

The "view patterns" extension to Haskell has improved this situation, allowing Haskell pattern matching even with abstract data types.

I grant that Haskell programmers could make wider use of Traversable, Foldable, etc. generic collections interfaces.

There are some advantages for immutable collections in corner cases, but straightforward usage is complicated

I find immutable collections a lot easier to use. No issues with iterator invalidation, locking for atomic updates, reentrancy, etc..
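A small `Data.Map` sketch of the point: an "update" returns a new version while the old version remains valid, so there is nothing to invalidate and readers need no locks:

```haskell
import qualified Data.Map.Strict as M

base :: M.Map String Int
base = M.fromList [("a", 1), ("b", 2)]

-- 'insert' returns a new map; 'base' is untouched and the two versions
-- share most of their structure internally.
updated :: M.Map String Int
updated = M.insert "b" 20 base

-- Both versions can be read concurrently, without iterator
-- invalidation or locking.
pair :: (Maybe Int, Maybe Int)
pair = (M.lookup "b" base, M.lookup "b" updated)  -- (Just 2, Just 20)
```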

but if there is a complex graph

An FP programmer will learn to do most things without graphs. They're never the path of least resistance.

Of course, there's an exception if the domain naturally involves a graph, but even then you can probably make do with a simple graph model (e.g. a sparse matrix or adjacency list).

Where would I need a complex graph?

Again OOP languages could do anything FP could do as syntax sugar

An old argument: impure FP (like Scheme) does everything pure FP does, and strictly more (adding set-cdr!, prints, file access, etc.). Then what good is pure FP? There is also value in our ability to locally, precisely control and reason about a model, and I think syntactic sugar won't do the job.

### Pure code fragments are islands in Haskell as well

The "view patterns" extension to Haskell has improved this situation, allowing Haskell pattern matching even with abstract data types.

They are getting close to the point where Groovy and Scala started. So I assume it is not part of the standard language yet.

I grant that Haskell programmers could make wider use of Traversable, Foldable, etc. generic collections interfaces.

If they do not use them, then there is possibly something that makes it unnaturally hard. I would investigate it if I were interested in making Haskell more usable. This could eventually bring Haskell to the level of a full FOOP language.

In my experience, many objects in OOP software seem to be white boxes - JSON round tripping, a '.Class' getter, downcasting, public fields, etc...

Data transfer objects are just objects dedicated to keeping some state; they are shallow, but still black boxes that could, for example, start using virtual getters/setters for backward compatibility. Public non-constant fields are discouraged in every OOP language that has this distinction, and some outright do not allow them. And let's skip JavaScript, which was a procedural language incompletely lifted to OOP.

I find immutable collections a lot easier to use. No issues with iterator invalidation, locking for atomic updates, reentrancy, etc..

A mutable collection still provides something: a single version of the truth about its content. Immutable collections lead to multiple versions of the truth.

An FP programmer will learn to do most things without graphs. They're never the path of least resistance.

Of course, there's an exception if the domain naturally involves a graph, but even then you can probably make do with a simple graph model (e.g. a sparse matrix or adjacency list).

Where would I need a complex graph?

For example, if one needs to load an entity graph from a database, conditionally navigate it, updating here and there based on the incoming request, and save it back to the database. There are tricks like keeping entity references in a separate map and making the entities in the graph keep only the primary keys of referenced entities. But such an approach is an extra step for a straightforward task. And this task happens a lot in practice.

An old argument: impure FP (like Scheme) does everything pure FP does, and strictly more (adding set-cdr!, prints, file access, etc.). Then what good is pure FP? There is also value in our ability to locally, precisely control and reason about a model, and I think syntactic sugar won't do the job.

Pure FP code fragments are islands in a sea of imperative programming in the case of Haskell. The monad-related DSL is used to make imperative programming possible, and it still looks like imperative programming and quacks like imperative programming despite all the purity claims. We can reason purely only about those islands that do not use the IO monad; everything in the IO monad is practically imperative code with the same reasoning problems. And as an application exists for the purpose of input and output, this situation cannot be avoided.

Take Kotlin now: we could have functional islands over immutable structures as well, and we would be able to reason about them just the same. The only difference is the lack of a compiler check when we escape from pure functions, and some OOP languages do have such a feature. I just see little value in it, as we usually should not care what happens inside a black box. A black box could be impure and still behave acceptably in a pure-like way (for example, a memoizing function that uses mutable state under the hood).

So the reasoning argument fails here, I think.

### Emergent properties are the next big thing

We aren't using the informal natural language definition of 'compose'. Compositionality is more than just ad-hoc gluing objects together. Review the OP.

This is a completely separate topic that possibly deserves a separate post.

The first link in the original post describes the author's feelings about a new higher-level composition principle: systems. The author possibly feels that the only true composition principles are the hierarchical and black-box ones, and feels bad about emergent properties. Apparently, the OP follows this sentiment.

And that new system composition principle is already adopted in software to solve the most complex cases; it just has not resulted in a new programming-language paradigm yet. The new paradigm would make it easier to produce and describe desired emergent properties while making it harder to produce undesired ones, much as OOP helped with constructing good black boxes.

An example of applying this system composition principle is the emergent property of reliability over unreliable hardware in cloud computing. Eventual consistency is another application of it. Almost all current machine learning efforts are also part of this developing paradigm. So good emergent properties are the future of software development, and not something one should avoid at all costs. This just requires a mind shift to systems thinking, as OOP required a mind shift from white-box to black-box thinking.

PS: And the composite pattern is not the only OOP composition pattern. All the GoF patterns are composition patterns, possibly with the exception of the singleton. And UIs use these patterns as well: for example, the observer pattern for event listening, and the builder pattern for UI construction (for example, the Eclipse UI framework). But it can be seen in other places too, for example the IO streams in Java, where we could chain char IO, char-to-byte conversion, HTTP chunk encoding, SSL, and sockets using similar interfaces.

### Global Coherence.

To further elaborate, type-classes (which I think are algebraic, as opposed to object classes that are co-algebraic) need global coherence, which is the concrete effect of being an abstract algebra. Global coherence is anti-modular. So you can see the concrete consequences of this property.

### re global coherence

I disagree with your broad generalization that "global coherence is anti-modular". A more accurate statement would be qualified: "global coherence that is explicitly represented and maintained is anti-modular." Because shotgun edits and global consensus are anti-modular.

Yet, we can allow many forms of global coherence that are implicit due to the design of a system without hurting modularity of the system. Software modularity in most systems arguably depends on global coherence of interface features like calling conventions, data representations, concurrency model, etc.. Compositional systems naturally exhibit implicit global coherence, with the entire system possessing a consistent fractal structure.

type-classes need global coherence, which is the concrete effect of being an abstract algebra

Aside: It is not abstract algebras that require global coherence. Haskell type-classes have another aspect: a singleton mapping from type to class instance. It is this feature that requires global coherence. If we are just modeling and programming with abstract algebras, a first-class record of functions over an abstract type is sufficient and local. (Although, for static safety analysis, we'd also want alternative type-system features.)
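As a concrete sketch of that "first-class record of functions" alternative (a hypothetical Python rendering, with `Monoid` and `mconcat` as illustrative names): the algebra is an ordinary value passed as a parameter, so instance choice is local to each call site, and no global type-to-instance mapping is needed.

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

@dataclass(frozen=True)
class Monoid(Generic[T]):
    """A first-class record of functions: an explicit algebra value."""
    empty: T
    combine: Callable[[T, T], T]

def mconcat(m: Monoid[T], xs: list[T]) -> T:
    # The 'instance' is an ordinary argument; no global coherence required.
    acc = m.empty
    for x in xs:
        acc = m.combine(acc, x)
    return acc

# Two different 'instances' at the same type coexist without conflict:
sum_monoid = Monoid(0, lambda a, b: a + b)
product_monoid = Monoid(1, lambda a, b: a * b)
```

Note that nothing here picks an instance by type; both monoids over integers are usable side by side, which is exactly what the singleton mapping of type-classes forbids.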

### Singular Mapping

Haskell type-classes have another aspect: a singleton mapping from type to class instance.

Given that we have to choose an instance based on type only, there has to be a singular mapping, or else you have undefined behaviour.

If we allow overlapping instances, we must have a mechanism for picking one instance, for example specialisation. The problem here is that we now get "spooky action at a distance": defining a new specialisation can change the behaviour of other modules that are already deployed.
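The spooky action can be sketched in a few lines (a hypothetical dispatch registry, not a model of GHC's actual instance resolution): handlers are chosen by the most specific registered type, so registering a new specialisation later changes what existing call sites do.

```python
# Hypothetical specialisation-based instance selection: resolve a value's
# handler by walking its method resolution order, most specific first.
registry = {}

def register(typ, handler):
    registry[typ] = handler

def show(value):
    # Pick the most specific matching 'instance'.
    for cls in type(value).__mro__:
        if cls in registry:
            return registry[cls](value)
    raise TypeError("no instance")

class Animal: pass
class Dog(Animal): pass

register(Animal, lambda a: "some animal")
d = Dog()
before = show(d)            # resolves to the Animal instance

# A separately 'deployed' module later adds a specialisation...
register(Dog, lambda a: "a dog")
after = show(d)             # ...and the very same call now behaves differently
```

No code at the call site changed, yet its behaviour did: that is the distance the "spooky action" acts over.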

### re singular mapping

we have to choose an instance based on type only

That's indeed how type-classes work. But, to clarify, the limits of type-classes are not the point I disagreed with. Quoting you:

1. "type-classes need global coherence"
2. "which is a concrete effect of being an abstract algebra"

I disagree with the second point, which I had emphasized, not the first. If we're discussing the "effects of being an abstract algebra", then we aren't restricted by the arbitrary additional constraints of type-classes, such as singular mapping. For example, it's easy to simply parameterize an abstract algebra, in which case we can reason locally like with any other functional parameter.

A corrected second point would say, "which is a concrete effect of singular mapping". Of course, since the corrected version of your argument has nothing to do with algebras, it obviously won't support your earlier hypothesis about algebras being anti-modular.

"spooky-action-at-a-distance" as defining new specialisation can change the behaviour of other modules that are already deployed

Local reasoning is a useful property, but is NOT strongly implied by modularity. For example, the actor model and OOP systems are widely acknowledged as modular, but objects can interact at a distance through a service registry or database.

More generally, consider extensibility. It is possible for a model of extensions to be modular (like plugins) or even composable (define a system as a composition of extensions, e.g. publish-subscribe or behavioral programming). Yet any extension naturally involves some form of "action-at-a-distance", otherwise we're only left with invasive editing of code.

Aside: To me, the interesting question is how to maximize local reasoning without sacrificing too many benefits of an open/extensible system (or vice versa). It is this question that gets one into fine-grained effects, capabilities, logical monotonicity, conflict-free replicated data types, etc.: a lot of open research.

### Implicit Vs Explicit Operator Lookup.

Let's use a concrete algebra to illustrate. Boolean algebra has operators like 'and' and 'or'. Mathematically we might write "x = a & b"; in this case the convention is like a type-class: the definition of '&' is implicit.

then we aren't restricted by the arbitrary additional constraints of type-classes

I would argue the opposite: the normal presentation of an abstract algebra includes type-class-like implicit definition of operators. In order to make your point about locality, you are having to add a new property that is not part of the definition of an abstract algebra: the ability to look up an operator in an explicit dictionary.

### conflating medium and message

We should avoid conflating medium and message. When we're talking about the properties of abstract algebras, we are NOT talking about the properties of the language within which we represent the algebra. The abstract algebras don't gain the property of being ASCII just because our language is restricted to ASCII. Abstract algebras also won't inherit properties of typeclasses just because you use typeclasses to represent them. Abstract algebras would not inherit the features of lambda calculus if Church-encoded.

That aside, you're also dragging in an extra issue of "normal presentation" as though it's relevant. Ugly, verbose abstract algebras would still be abstract algebras. Use of "boolean.and" is no more or less a symbol to resolve than "&". But I grant that concise, aesthetic presentation is a good language design goal. Nonetheless, properties of type-classes are still not implied. For example, to solve the problem of "normal presentation", a language designer might try projectional editors or some form of specialized syntax macros. Typeclasses are just one option, and every option has its consequences.

Although you conclude "algebras are anti-modular" and "composition is anti-modular", the most you can successfully argue seems to be "type-classes have some negative consequences for modularity".

### Co-algebras

My limited understanding is that selecting a function from a product is co-algebraic, so you cannot have a dictionary from which you select a function without introducing a co-algebra.

### re co-algebras

Whether co-algebras are involved in representing an algebra is irrelevant. Why even mention them?

What matters to "compositional systems are modular" is that compositional systems are easy to partition on component boundaries for separate development and maintenance, which meets the normal definition of software modularity. QED.

Note that the encoding medium, including explicit or implicit use of co-algebras or support for abstraction or concise presentation, is irrelevant to this argument.

So far, your arguments seem to have the general form "composition is anti-modular BECAUSE I don't want just modularity, I also want these other several features which have nothing to do with modularity, and I've only thought up one way to achieve them all, which involves a variation on type-classes, which are known to have some issues with modularity." This isn't good logic. Pointing out how valuable those other features are cannot fix the logic. Digressions about co-algebras? Not helping.

Further, you clearly haven't spent much effort thinking about alternatives to type-classes.

### Back to co-algebras.

Can you think of a non-typeclass way that does not involve introducing a co-algebra?

I assume you ask about "ways OF abstracting algebras". An algebra is usually defined in mathematics by a product of functions and sets, like (+, 0, Integers). First-class representations of algebras in a PL will naturally involve a similar first-class product. If you intend to avoid that, we are left with second-class approaches, where the product of functions is implicit to an environment rather than a value we directly reference. Second-class approaches include typeclasses, implicit parameters, best-fit multi-methods, template programming to abstract over an algebra's prefix, use of local DSLs that prevent direct access to the tuple value, etc..

I think it is not relevant. Expression of algebras can be first-class or second-class within a language, and can freely use co-algebras or even Church encodings; these choices won't affect the properties of the algebra itself, such as whether algebraic expressions are modular, nor what can be represented within the algebra. Further, abstraction of algebras is mostly irrelevant for compositional software systems (where what matters most is what we can represent within one carefully chosen top-level algebra of software components, not how well we can abstract many user-defined algebras).

Although I usually don't mind a short digression, I feel that I'm climbing a tower of seemingly off-topic digressions upon digressions, none of which seem even remotely relevant at this point. Abstraction, presentation, representation: all irrelevant (in context). So, please argue for relevance to the original topic before anything else.

### Composition

The original topic was "On Compositionality", so I see this digression as exploring the nature of compositionality. For me there is something interesting here which I can best express as: There is something about composition, modularity and algebras that is problematic.

I disagree that composition and modularity are the same thing (or that anything composable is modular). Composition is a property something like:

A composed-with B -> B.A

Modularity is something like:

A:(x = 27) B:(x = 32) x(A) + x(B)

Building a program from modules does not require composition: we do not compose modules, we simply include them in the same program. At no point do we apply module 'B' to something and then apply module 'A' to something. Modules are not even applicative.

Conversely, composition does not require modules: we do not have separate namespaces with composition; we only require that applicative functions can be applied one after the other. At no point do we define anything to have different values within different scopes.

Regarding algebras, I am thinking more of abstract algebras like F-algebras and F-coalgebras, which are "F A -> A" and "A -> F A" respectively, where the function of the algebra, rather than being a simple '+' or '*', can encode a sum type, allowing arbitrary expression evaluation.
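The F-algebra idea can be sketched concretely (a hypothetical Python rendering of `F X = Lit Int | Add X X`, with evaluation as a fold): the algebra is one function over the whole sum type, applied bottom-up by a generic recursion scheme.

```python
from dataclasses import dataclass

@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: object    # recursive position
    right: object   # recursive position

def cata(algebra, node):
    """Catamorphism: recurse into children, then apply the algebra F A -> A."""
    if isinstance(node, Lit):
        return algebra(Lit(node.value))
    mapped = Add(cata(algebra, node.left), cata(algebra, node.right))
    return algebra(mapped)

def eval_alg(fa):
    # The algebra: a single function handling the whole sum type, F Int -> Int.
    if isinstance(fa, Lit):
        return fa.value
    return fa.left + fa.right

expr = Add(Lit(1), Add(Lit(2), Lit(3)))
```

Swapping `eval_alg` for a different algebra (say, one producing strings) reuses the same `cata`, which is the sense in which the functor encodes "arbitrary expression evaluation".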

### Composition and Modularity (redux)

Modularity is something like (...) Modules are not even applicative.

Methinks you simply have a very incorrect notion of modularity. You've somehow conflated it with symbol scoping and management.

Note that "modularity" in general has a meaning outside of software and language (we can speak of modular physical systems, robotics, etc.). Even if we restrict ourselves to "modular programming", the definitions orient around independence, exchangeability, and interface-vs-implementation. Nothing directly to do with symbol management. Conventional languages express "interfaces" in terms of symbols, but that's just a (very common) language design choice, not fundamental to modularity.

We can easily design a system of applicative modules, if we wish. For example, take a conventional language and impose a simple restriction: "every module must export one function, named f". Such a constraint would preserve the essential properties of modularity, although we'd need a few more tweaks to integrate with the type system and support opaque data types.
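That restriction can be made concrete with a toy sketch (hypothetical helpers `make_module` and `compose_modules`, not any existing module system): when every module exports exactly one function `f`, modules themselves become composable values.

```python
import types

def make_module(name, f):
    """A 'module' that exports exactly one function, named f."""
    mod = types.ModuleType(name)
    mod.f = f
    return mod

def compose_modules(a, b):
    """Module composition: the composite is itself a module exporting one f."""
    return make_module(a.__name__ + "." + b.__name__,
                       lambda x: a.f(b.f(x)))

double = make_module("double", lambda x: x * 2)
incr = make_module("incr", lambda x: x + 1)

combined = compose_modules(double, incr)  # applies incr, then double
```

The composite satisfies the same one-function interface as its parts, so modules here form a closed, applicative system.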

It's also not difficult to find or design modular software systems whose interfaces don't involve exporting and sharing symbols, e.g. plugins, device drivers, pubsub agents. Plug-in systems still meet the essential characteristics of modularity: independent, exchangeable, interface separate from implementation. These aren't conventional language modules, but conventions aren't what define software modularity.

To build a program from modules does not require composition

Indeed. Composition implies modularity. But modularity does NOT imply composition. And, in practice, conventional programming languages do not attempt to support composition at the module layer. (But don't let convention blind you to alternatives.)

At no point do we apply module 'B' to something followed by applying module 'A' to something.

Okay, it seems you ALSO have a severely deficient comprehension of compositionality. A compositional system is simply one where we have:

• a set (type/kind) of components {A,B,...}
• a finite set of composition operators like (*,+) closed within the set of components
• (hopefully) some useful compositional properties, which we can reason about incrementally (∃F. ∀A,*,B. P(A*B) = F(P(A),'*',P(B))).

Effectively, I've described an algebra, albeit easily more ad-hoc and informal than the rings and groups of mathematics. Some component systems might additionally describe 'primitive' components, with the full set reachable under transitive closure from the primitives.

We can restrict this further in context of compositional software systems. The kinds of components we're interested in must additionally fit some usable notion of "program". For example, our component might be a function, an event stream processor, a hierarchical state machine, a rewrite graph, a make-file, a constraint-set, a virtual model of a hardware chip, a pubsub 'environment' with multiple agents and a message bus, etc..

The set of composition operators depends heavily on the specific kind of component. Constraint sets may involve a few compositions like "either" or "both". Pubsub environments might involve compositions that join two message buses or filter a message bus. A make-file might add new rules, override old ones. Functions and stream processors would generally support pipeline composition, and maybe concurrent composition like (f *** g)(x,y) = (f(x),g(y)) (cf. Control.Arrow).

Composition includes functional composition, but is a much broader idea.

Besides some general benefits from standard glue and easily learned fractal structure, much utility of compositionality derives from the compositional properties, which allow us to reason effectively with a shallow understanding of the system. Static types are a common example of a compositional property.
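As a tiny sketch of such a compositional property (hypothetical components in Python, with a latency estimate standing in for P): the property of a composite is computed purely from the properties of the parts and the operator used, with no need to look inside the parts, i.e. P(A*B) = F(P(A), '*', P(B)).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proc:
    run: Callable
    latency: float  # the compositional property P

def pipeline(a: Proc, b: Proc) -> Proc:
    # Sequential composition: latencies add.
    return Proc(lambda x: b.run(a.run(x)), a.latency + b.latency)

def parallel(a: Proc, b: Proc) -> Proc:
    # Concurrent composition, like (f *** g): latency is the max of the parts.
    return Proc(lambda xy: (a.run(xy[0]), b.run(xy[1])),
                max(a.latency, b.latency))

f = Proc(lambda x: x + 1, 2.0)
g = Proc(lambda x: x * 2, 3.0)

seq = pipeline(f, g)
par = parallel(f, g)
```

The latency of `seq` and `par` is known without executing or inspecting `f` and `g`, which is exactly the "shallow understanding" benefit described above.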

Conversely composition does not require modules

Compositional systems are intrinsically modular: we can trivially decompose a compositional expression into a module per component.

Of course, we must express our compositional system in some language. We could devise a language without a module system or where the module system is maliciously awful in the spirit of INTERCAL. In that highly contrived scenario, I grant that utilizing the modularity intrinsic to the compositional system would be awkward. A solution might involve a projectional editor or zoomable UI.

But if we're designing a language around compositionality, that contrived scenario is a non-issue. Instead, modules will align nicely with software components. Like, in flow-based-programming systems, we might use a file per grey-box component in a UI view.

At no point do we define anything to be different values within different scopes.

It is not difficult to support named locals and shadowing within compositional software systems - e.g. as a syntactic sugar for the programmer's convenience. For example, with lambda calculus we can implement let expressions as a local rewrite let x = expr in body to ((λx.body) (expr)). With a Forth-like language, we might use a slightly more sophisticated rule, but it can still be achieved. Even in the worst case, we could just substitute the body at parse time like a preprocessor macro.
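The local rewrite mentioned above can be sketched over a tiny tuple-encoded AST (a hypothetical representation chosen for brevity): `("let", x, expr, body)` desugars to an application of a lambda, exactly the `((λx.body) expr)` rule.

```python
def desugar(ast):
    """Rewrite let-expressions into lambda applications, recursively."""
    if isinstance(ast, tuple) and ast[0] == "let":
        _, name, expr, body = ast
        return ("app", ("lam", name, desugar(body)), desugar(expr))
    if isinstance(ast, tuple):
        return tuple(desugar(part) if isinstance(part, tuple) else part
                     for part in ast)
    return ast

# let x = 5 in x + 1
program = ("let", "x", ("lit", 5), ("add", ("var", "x"), ("lit", 1)))
```

Because the rewrite is purely local, it can run as a preprocessing pass with no knowledge of the rest of the program, which is the point being made about named locals being mere sugar.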

Naturally, let x = expr1 in one context is distinct from let x = expr2 in another.

Besides named locals, we can also support scoped modules.

We can easily impose a simple rule that modules with names like foo-local-* can only be directly referenced from modules with names like foo-*. This essentially gives us locally scoped modules, private to a foo-* prefix/namespace. It also aligns conveniently with software packaging and development teams - just give each project a distinct prefix. A smart development environment could make it convenient to edit several components in one view.

Ultimately, programming with component systems doesn't need to be all that different from conventional programming from the symbol management point of view.

we do not have separate namespaces with composition

Namespaces are ultimately about short-hand expression of tediously long names. Namespaces exist to make software easier to read and write. It is not difficult to implement namespaces in a manner suitable for compositional systems.

For example, I very frequently name other modules. But instead of very-long-and-tedious-module-prefix-foo we might prefer a qualified vp-foo. A language in which we express components might support this via phrases like IMPORT very-long-and-tedious-module-prefix AS vp, local to a module or block, and simply implement it as syntactic sugar.

Alternatively, I could support pet-names in a projectional editor or edit-session, moving namespaces out of the code and into a user or project specific programming environment.

...

Anyhow, you're focused far too stubbornly on symbol management, to the extent of seeming willfully ignorant about the many historical approaches to, and layers of, software modularity. Local symbols may be used within modular or compositional systems if convenient (which is often the case if we express programs textually), but fine-grained symbol export is not required by any common definition of software modularity.

### Category Theory

Composition seems quite well defined to me, after all Category theory is all about things that compose.

... modularity does NOT imply composition

We agree here, not all module systems are going to be a category.

Composition implies modularity

If composition implies modularity, then all categories would be modules? This does not seem quite right to me.

We both agree that modularity is not composition. In which case how would you describe how modularity is different from composition? If composition is well defined in category theory, what does modularity add to this? If modularity is not about namespacing, or something other than composition, then it trivially reduces to composition.

### re category theory

how would you describe how modularity is different from composition?

Compositional systems are DEFINED by standard glue (composition operators), fractal structure (compositions of components are also components), recursive reasoning (compositional properties). These properties can and have been formalized. You will see a similar description in another's words if you can be bothered to follow the "On Compositionality" link in OP.

Modularity is DEFINED by the ability to split a system into independent, interchangeable parts that hide their internal complexity behind a simple interface.

That compositional systems are modular (via splitting on component boundaries) can be inferred from the definitions. That modular systems include many non-compositional systems can easily be proven by example. However, they're distinct properties and not directly comparable. If you ask for the difference, I can only ask you to compare the definitions.

all categories would be modules?

No. Compositional systems are modular on component boundaries. Representing a component as a module is not the same as representing a category or algebra as a module.

Also, compositional software systems usually only have one top-level 'category' (like functions or a process model). The extent to which we can represent other categories as modules would depend on Turing completeness and Felleisen expressiveness of this top-level "host" category.

If composition is well defined in category theory, what does modularity add to this?

Isn't this question rather similar to, "If all rivers are wet, and I clearly know what a river is, why do we even need this word 'wet'? What does it add?"

'Modular' describes many more systems than 'compositional' does. And it describes a different aspect of a system than 'compositional' does. Modularity is a distinct and useful property. That's enough to keep it.

If modularity is not about namespacing, or something other than composition, then it trivially reduces to composition.

Did you not very recently agree with "modularity does NOT imply composition"? Modularity certainly does NOT reduce to composition in the sense of standard glue, fractal structure, recursive reasoning.

Modularity is simply about what normal definitions of 'modularity' describe modularity to be about. That's how language works. What could possibly even motivate you to add extra properties to it?

(Keean in my imagination: "Well, I felt modularity was too boring as it was, so I decided to add abstraction, namespaces, generic programming, recursion, compile-time-cookies, and ice-cream. Now, modularity is a lot more exciting, but I still haven't figured out how to implement the ice-cream.")

Modularity is just a word to describe a useful system property. It's FINE for it to be simple and boring 99% of the time. If you want modularity to be exciting again, feel free to travel back in time to when GOTOs were still common, play with esoteric languages like Snusp, bang your head against aspect oriented programming for a while, or attempt to clean up academic software that's suffered a dozen thesis projects. Later, if you want to talk about namespaces, I suggest use of the word 'namespaces'.

### Tautology, and Languages with Composition that are not Modular.

Modularity is simply about what normal definitions of 'modularity' describe modularity to be about. That's how language works. What could possibly even motivate you to add extra properties to it?

That is called a tautology; it's not a definition, and it does not explain how modularity is different from composition.

Consider a system where we have a composition operator, but a bidirectional type system. Here we have a language that supports composition, but is not modular because we cannot split the code into separate parts.

I can think of other examples where the language has composition, but requires whole program analysis to compile, and hence is not modular. This is enough to show that composition does not imply modularity.

### Sophistry

That is called a tautology, its not a definition

I've given or linked definitions many times in this thread. Even earlier in the same post! You simply ignore them, quote an obvious non-definition, which was clearly not even intended to be a definition (it's intended as a reminder to stop ignoring the definition), and complain that it isn't a definition. Are you even attempting for honest discourse at this point?

requires whole program analysis to compile, and hence is not modular.

(Mumble mumble.) Seems I should have added "separate compilation" to this never-ending list of features Keean wrongly believes are required by definitions of "modularity". Oh, and "static compilation" too.

Do you really believe that features like link-time optimizations, deferred safety analysis, and integration testing are contrary to software modularity? That you cannot divide software module development across teams any longer after your linker added a new optimization feature?

There are plenty of real world module systems that involve some phase for further computation (type or size specialization, proof carrying code, constant propagation, JIT specialization, etc.) during or after link time.

### What is modularity?

Modularity is DEFINED by the ability to split a system into independent, interchangeable parts that hide their internal complexity behind a simple interface.

Yes, I agree that by that definition it seems composition implies modularity, as a composition would just be an applicative module. However, it really depends on how we view "independent". We could reasonably take the position that without separate compilation the parts are not really independent. After all, if we are making universal statements they must apply to both interpreted and compiled languages; otherwise we should qualify with "composition implies modularity in an interpreted language".

### independence of modules

The "independence" relevant to software modularity is simply the avoidance of coupling: the implementation of one module should not need to change when the implementation of the other modules it interfaces with change. That is, only changes in interfaces should force an update to another module's implementation.

Separate compilation isn't implied or required. Neither is separate execution.

### Composition does not imply well designed modularity?

It seems I am not alone in considering that modularity requires separate compilation; I can find it mentioned in several peer-reviewed papers, so perhaps there is more variance amongst CS researchers than your absolutist position implies?

Of course the converse applies, and I should accept that not everyone sees separate compilation as necessary, especially when considering interpreted languages.

Edit: Doing some more research on this, I came across a Princeton CS resource that had seven heuristic principles for a well-designed module system; point (1) mentions separate compilation. Thinking about this, and about the differing opinions on what is and is not modularity, perhaps I could restate my original hypothesis in a way that draws less opposition:

Composition does not imply well designed modularity.

### re composition and well designed modularity

not alone in considering modularity requires separate compilation

Separate compilation is a valuable feature, and a lot of people desire it together with modularity. But this isn't at all the same as "required BY modularity". I doubt most people who mention the properties together believe the definition of modularity requires or implies separate compilation. They simply want both.

Yet, even among language system designers that favor compilation and modularity, you'll find many that have either deferred part of compilation to after linking (e.g. CLR's generics) or have otherwise compromised separate compilation (e.g. for generic programming, compile time evaluation, export of inline functions, link time optimizations, templated interfaces). Performance seems to be valued by many more highly than separate compilation.

Composition does not imply well designed modularity.

Of course. Composition by itself does not even imply the 'components' are a sensible basis for software. Compositional software systems are software systems. Naturally, a lot of attention must be given to choosing a category and language with the various properties we desire.

If separate compilation is among the additional requirements or high-priority desiderata, it can be achieved. But doing so may have some opportunity costs, such as the ability to support a highly expressive type system, or specialization, inlining, and constant propagation across module boundaries. If separate compilation is a lower priority, it could be abandoned or partially compromised to make room for those other nice features. Perhaps it's sufficient for only a subset of components to be compiled separately, e.g. based on type or annotations.

### Separation compilation and optimizations

Dynamic component loading is a "must" for an extensible system. Not all systems need to be extensible, but some must be, for example IDEs.

Separate compilation does not exclude optimizations when compiling to intermediate code; .NET and Java are testaments to this. Static type checks and name resolution are done at compile time, while cross-module optimizations are done at run time.

With separate compilation, what matters is the ability to load separately compiled code, rather than the format of that code (intermediate or machine). However, distributing source code is usually a bad option for a number of reasons, particularly because of custom build pipelines and dependency management.

So one component could define an interface, another component could be written later to implement that interface, and the two components could be merged later to provide functionality; for example, IDE plugins, or web applications on an HTTP server. Last time I checked, with Haskell one needs to either compile a loaded application module from source or use the FFI to access a loaded Haskell module (thus losing all of Haskell's type system for those calls).

### re separate compilation and optimizations

Extensible systems are about non-invasive feature extension. Open systems are about dynamic entry and exit. Closed extensible systems are possible (e.g. using multi-methods that are sorted into a priority table and analyzed for complete coverage at compile-time). But I grant most interest and discussion of extensible systems regards open extensible systems.

Compilation is rewriting code into another language; the target language could be JavaScript, CLR, C--, or x86. I often think we'd be better off targeting a higher-level simplified intermediate language, like a C--. Nonetheless, whether we want separate compilation of source-language modules would still depend on relative access to optimizations, type analysis, etc., and on how well the target language's module system aligns with source modules.

### Not Alone?

I accept your overall argument; I just want to show that I am not (was not?) alone in thinking modularity requires separate compilation. At least one peer-reviewed paper (which I found within a few minutes of searching) seems to:

https://www.dtc.umn.edu/publications/reports/2006_19.pdf

make the following claim:

"However, true modularity calls for separate compilation and testing."

### re not alone

In context, that paper makes quite clear that the authors understand the definition of modularity, modular logic in this case, is achieved without separate compilation. The sentence you chose was just a segue into the topic of separate compilation, not a claim defended by the paper. Don't cherry pick or misrepresent, plz.

### True modularity

I thought it was fairly clear? Are you saying that "true modularity" is different from "modularity"? Is the common CS definition of modularity somehow not truly modular?

### No

The paper first describes modular logic (implemented without separate compilation), then several sections later describes implementation for separate compilation. It's clear in context the authors know that modularity doesn't imply separate compilation. And the "true modularity" sentence is not presented or defended as a claim by the paper. I believe you're delving into sophistry here, cherry picking a sentence out of context then misrepresenting both that sentence and its authors to provide false evidence for your argument.

You probably aren't alone in having believed that modularity requires separate compilation. But the authors of the quoted paper are not evidently an example.

Aside: Without a clear and widely accepted definition, I am inclined to interpret "true modularity" in the same spirit as "true Scotsman".

### Usefulness

With a compiled language, a module that does not support separate compilation would be pretty useless. I guess there is no rule against useless concepts, but I think that's the spirit in which their "true modules" comment is intended.

I get that many people are only interested in interpreted languages, or want to define concepts in a minimal way that allows concerns to be discussed separately, but considering "useful" concepts, with a compiled language we really want to discuss a "compilation module", or modular compilation.

On reflection what I should have really said was:

Composition does not imply modular compilation

### re Usefulness

Modularity without separate compilation is useful even for compiled languages and programs!

Programming with separate compilation and static linking is like this: we develop modules separately, compile them, gather the archive files in one place, link, execute. Separate testing is performed by linking into separate test executables.

Without separate compilation, but still static: we develop modules separately, gather them in one place, link and compile in one step, execute. The programmer experience is almost the same. The modules are still separately testable, too, via linking-compiling into separate test executables.

For runtime linking, without separate compilation we'd need a link-loader to also perform some runtime compilation - not an unusual feature these days, cf. JIT compilers.

How do you justify "pretty useless"?

From my perspective, it's separate compilation that lacks a strong argument for utility.

What exactly does separate compilation do for me? It allows working with multiple source languages? Well, very little prevents compiling other languages into mine, perhaps add it as a backend like compiling to C-- or JavaScript. Performance advantages? Must be weighed against cross-module link-time optimizations. It obfuscates distributed code? Well, we can develop dedicated source obfuscators if you care about that, and they'd likely work better against disassemblers. What other utility arguments exist in favor of separate compilation?

...

Addendum (apparently while you typed): Further, in context of the OP topic, compositional systems align very nicely with incremental techniques for analysis, optimization, and compilation, which can serve as an alternative to separate compilation for certain purposes like fast edit-compile-test.

### Opinions?

Modularity without separate compilation is useful even for compiled languages and programs!
...
Without separate compilation, but still static: we develop modules separately, gather them in one place, link and compile in one step, execute. The programmer experience is almost the same. The modules are still separately testable, too, via linking-compiling into separate test executables.

The programmer experience is not at all the same: it's going to be slow. The develop-compile-test cycle will be too slow, which will hurt productivity and break the programmer's flow.

For runtime linking, without separate compilation we'd need a link-loader to also perform some runtime compilation - not an unusual feature these days, cf. JIT compilers.

Interpreters have been common since Lisp (if not before); this is just moving the goalposts, and it's irrelevant to static compilation.

From my perspective, it's separate compilation that lacks a strong argument for utility.

I suppose you don't do much programming in C/C++, Pascal, Modula-2, Ada, Rust, Go, etc.

It obfuscates distributed code? Well, we can develop dedicated source obfuscators if you care about that, and they'd likely work better against disassemblers

Reverse engineering from assembly is hard, and any simple obfuscator can easily be reversed, or the code dumped after de-obfuscation. Fully homomorphic encryption does exist, but it is really slow, and it only protects the data, not the code. What would be cool would be a processor that could execute encrypted code on encrypted data and produce an encrypted result, all without any access to the key; however, that would operate at the assembly level, so you would still need a compiler to produce the encrypted code. Any JIT/compiler/interpreter on the 'insecure' machine would risk exposing the source code.

Well, very little prevents compiling other languages into mine

It's a big world and different people have different priorities. No one language will ever be suitable for all uses. For some people static compilation is a legitimate requirement, whether because of embedded systems, performance requirements, security of source code, or minimising distributed code size. There is no need for this to turn into static compilation vs JIT vs interpreter, which would be a big digression. Let's just assume static compilation has legitimate uses, and limit ourselves to composition and modularity in that context.

I find this discussion has been useful for me, because there is a tendency to become overly focussed on what you are working on directly, and then make assumptions about that. It is clear that earlier in this discussion I made this mistake. The lesson for me is that I may need to qualify things with the context in which I am working.

### From my perspective, it's

From my perspective, it's separate compilation that lacks a strong argument for utility.

I suppose you don't do much programming in C/C++, Pascal, Modula-2, Ada, Rust, Go, etc.

Separate compilation is indeed used widely, or at least pursued as a goal. (As A.Rossberg notes below, most languages in your list fail to accomplish this goal.) I will try to clarify my statement:

Separate compilation is a language design choice with various consequences. Some consequences are positive (e.g. avoid recompiling unmodified modules in edit-compile-link-test cycle), many are negative (e.g. hinders cross-module optimizations, hinders staged computing, hinders safety analysis for distrusted sources, reduces portability compared to deferred compilation), and a few are neutral (e.g. obfuscation of implementation code is good or evil depending on who you ask).

But separate compilation is not our only option.

Given a set of goals, such as fast re-compilation and obfuscation of code, we can design alternative solutions. For example, we can leverage incremental techniques for fast edit-compile-test (incremental is an excellent fit for compositional software), and separately use a dedicated code obfuscator tool for the rare cases it's desired.

I am not aware of any benefits that are unique to separate compilation, which cannot be achieved in alternative designs. Meanwhile, the weaknesses of separate compilation are rather severe for some important use-cases, so there's a lot of motivation to pursue those alternatives and move our design trade-offs to someplace less bothersome.

Comparing separate compilation vs naive full re-compilation is a false dilemma. Utility of separate compilation should only be judged in context of various practical alternatives. And it is with this perspective that I say, "From my perspective, it's separate compilation that lacks a strong argument for utility."

(re JIT compilers) ... this is just moving the goalposts, its irrelevant to static compilation.

Static compilation is a means, not an end. The goalposts are features like "good performance" and "dynamic software loading" and "fast edit-compile-test", etc. Those goalposts aren't moving.

We can achieve performance and dynamic software loading goals using JIT compilers. With some caching by the JIT compiler, we can also support incremental edit-compile-test with dynamically loaded source modules.

Reverse engineering from assembly is hard, and any simple obfuscator can easily be reversed

Compilers are not designed to obfuscate code, and don't do a great job of it. Reverse engineering is difficult for other reasons. For example, organically grown source code can be difficult to grok even when it's sitting in front of you with ad-hoc documentation like comments and function names.

That said, I don't prioritize obfuscation as a goal. IMO, it's more evil than good - why shouldn't I be allowed to understand what's running on my machine? But if I were to obfuscate code, I'd use a dedicated tool.

Its a big world and different people have different priorities.

True.

For some people static compilation is a legitimate requirement

Most people who would say static compilation is a requirement for their use case are simply wrong. They confuse or conflate their actual requirements (support for embedded systems, performance, etc.) with a specific means to achieve them (static compilation). I'm sure there are some legitimate exceptions. But there's a huge logical gap between "static compilation is necessary" (an intrinsic or unavoidable requirement) vs "static compilation is sufficient" (to achieve other requirements) or even "static compilation is the best choice for me right now" (in context of personal knowledge and available tools). Likewise separate compilation.

The reason for confusion? Lack of experience with or even knowledge of potential alternatives, I would guess.

### Performance

No JIT or interpreted language has the consistently fast performance of 'C'. Interestingly, looking at the work on HLVM, it seems that boxing values is the main cause of slowness, not garbage collection. Also, shipping the compiler with the executable results in a bloated runtime.

This reminds me of the problems with hardware/software co-design, where we can compile code to run efficiently on an FPGA, but we cannot easily ship the compiler with the application: it's huge, the optimisation takes ages, and it's generally proprietary.

It's a big problem with Java apps that if you package a runtime with the application, the download becomes huge, which puts people off; however, if the user has to download the runtime separately to run the application, that puts them off downloading it in the first place. Java apps work on Android because the runtime is bundled with the platform, but look at the trouble with Oracle that has come from that.

### re performance

No JIT or interpreted language has the consistently fast performance of 'C'.

First, that's simply untrue. For example, many Khronos group languages - OpenCL, etc. - involve runtime compilation, a form of JIT. OpenCL consistently provides better performance than C for certain problem domains. And if you want, even C code can be constructed at runtime, JIT compiled, and loaded. Clang supports that.

Second, I feel that you're misrepresenting me. In context, I had mentioned JIT as an alternative for runtime linking (e.g. to load a plugin). For example, even assuming sophisticated bi-directional types that make us unable to compile separately, we can still compile and link a component for each specific usage site where we know the types at both ends. Use of JIT for dynamic code does not imply using it globally. We can still use ahead-of-time (AOT) compilation for most of our application, and full AOT if we do not need runtime linking. You might look at Mono AOT for an example of mixed compilation modes.

Third, in context of our earlier discussion, it seems you imply that "consistently fast performance of 'C'" is a common performance requirement. But a "requirement" is bare minimum performance for a task, not the best possible. Performance better than sufficient is not a requirement. (Great performance is desirable. But wants versus needs.) For requirements, the right question is whether your JIT compiler or interpreter is adequate, not whether it's comparable to C.

shipping the compiler with the executable results in a bloated runtime

In many cases, that isn't a problem. When it is a problem, i.e. because you have some size requirements, you have at least four options: compile ahead of time, ship with a minimal compiler instead of the full-featured one, choose a language designed by people who hate bloat so your normal compiler and runtime is small, or offload some bloat over a network connection.

Static compilation isn't a bad choice! But to claim it's a "legitimate requirement" (rather than a choice) should require you to eliminate all other options, including those you and I do not think of. Outside of meeting certain legal certification requirements (like DO-178C), where "legitimate" refers to laws of men rather than laws of logic or physics, I think it'd be rare to find cases where static compilation is a proper non-functional software requirement.

Do we even need static compilation to boot an OS? Apparently not.

Well, you need static compilation to compile the bootloader, if you are lucky and can make it work without hand-coding it in assembler.

First, that's simply untrue. For example, many Khronos group languages - OpenCL, etc. - involve runtime compilation, a form of JIT. OpenCL consistently provides better performance than C for certain problem domains.

I guess that depends on what you mean by JIT-compiled. The graphics driver software contains what is effectively a static compiler that is used when the kernel is loaded. This is just-in-time "static" compilation.

When I referred to JIT I meant an interpreter that does hot-spot analysis and selective compilation of hot code. Maybe this is not strictly the definition of JIT, but it is how a lot of JIT-compiled languages work.

If we are considering OpenCL-style languages, this is the equivalent of distributing the source code and expecting the end user to compile it, vs. distributing a pre-compiled binary. The process is effectively static compilation.

I want a language where the type checking is done on the developers computer, and the code must have passed type checking before it gets shipped to the user. The user should never see a type error. I also want the code that is shipped to be obfuscated, tamper proof, and secure. If the code has passed type-checking it does not need to be redone on the user's computer, so there is no need to ship the compiler, the toolchain, the library source or the typechecker. With modern languages everything you need for even a simple application can be huge, and take a long time to download.

These days I have moved to TypeScript and the Stencil web-components compiler, so I even statically compile web applications. WebAssembly is an interesting target for compilers, but you can also compile to JavaScript.

Third, in context of our earlier discussion, it seems you imply that "consistently fast performance of 'C'" is a common performance requirement.

I didn't mean to imply it was common, just that it can be a requirement. Performance is a problem in a lot of things I do, but mostly algorithm choice and data storage make the biggest differences. Even so, moving from Python to Go on the server and from JavaScript to TypeScript on the client has been worthwhile.

For requirements, the right question is whether your JIT compiler or interpreter is adequate, not whether it's comparable to C.

For some applications faster is just better. Some things are competitive and if you can get the answer faster than someone else, you can beat them (financially, strategically, tactically etc).

TypeScript is actually quite a good argument for static compilation. You get type-checking before you ship, and you can develop using the latest version of the language, leaving the compiler to deal with version compatibility. Having to download the latest Java VM to run one application must put a lot of people off even trying some software.

### I want a language where the

I want a language where the type checking is done on the developers computer, and the code must have passed type checking before it gets shipped to the user. The user should never see a type error.

Consider a few scenarios:

• You try to load a signed, separately compiled plugin, requiring a set of methods of some given type. Signature check or link loader notices some issues (like older version of interface missing a method) and says, "Nope! Not right."
• You try to validate a configuration file or incoming form data. Parser or validation says, "Nope! Not right."
• You try to parse, compile, and load a component, requiring it to have a particular type. Parser or compiler says, "Nope! Not right."

Whether you call these type errors or not might depend on perspective. The calling app doesn't experience type errors (just plain old errors). But arguably the error in each case is because a "type-check" on the input has failed. However, we should choose one consistent perspective. Either none are type errors, or all are. Runtime compilation is not special in this respect.
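All three scenarios reduce to the same shape: a runtime check that an input value conforms to an expected interface. A minimal Python sketch of the plugin case (names like `check_conforms` and `OldPlugin` are invented for this example, not from any real system):

```python
def check_conforms(obj, required_methods):
    """Return the list of missing methods; an empty list means the check passed."""
    return [name for name in required_methods
            if not callable(getattr(obj, name, None))]

class OldPlugin:
    """A plugin built against an older interface version."""
    def start(self): pass
    # 'stop' was added in a newer interface revision, so it is missing here.

errors = check_conforms(OldPlugin(), ["start", "stop"])
assert errors == ["stop"]  # "Nope! Not right." - the same shape as a type error
```

The config-validation and runtime-compilation cases differ only in what plays the role of `required_methods`: a schema, or an expected component type.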

For some applications faster is just better.

I agree. I'd even extend that to most applications. But why do I get the impression that you mistakenly conflate "X is better" with "X is a requirement"?

Requirements are things we absolutely need to meet. Like, real-time decoding of a compressed video requires us to meet certain performance constraints - so many frames per second. Having better performance than required is better, perhaps lets us run on cheaper hardware or allows clients to open 40 tabs instead of 30. But that's not a requirement. It's desiderata.

We can't make tradeoffs on requirements. But we CAN trade among desiderata, according to our priorities. E.g. trade a little bit of performance to improve safety or scalability or security or even performance-predictability (such that average performance is a little worse but worst case performance is a lot better). It's important to not confuse requirements with desiderata.

Some things are competitive and if you can get the answer faster than someone else, you can beat them

Technical merit is secondary when it comes to deciding winners in the market. But I do acknowledge the marketing power of "performance" among a large subset of programmers.

(Meanwhile, in my role as a user, I open a tab to load my web-client e-mail and receive four megabytes in 147 requests over twenty seconds. I check the subject lines of a few new unread messages, then close the tab.)

### Type Errors

Programmers have problems understanding type errors so what chance do users have? The kind of feedback you are talking about needs to be in the interactive user interface. Considering file upload we might provide a button to choose a file from your local computer using a file picker. We might process this in the background to avoid stalling interactive use. On completion we would need to provide domain specific feedback to the user of the reason their file failed to pass. This often needs to be specifically crafted, as generic feedback like "type error line 37, not an Int" would not be usable feedback. Users do not have the time to learn an abstract system, it's not part of their job. Programmers on the other hand write a lot of programs, having a systematic way of expressing errors that is a formal language you can carry from one project to another is worthwhile.
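That crafting step can be sketched in a few lines of Python (a hypothetical example): the generic failure ("not an Int") is caught internally and rephrased as domain-specific, actionable feedback before it reaches the user.

```python
def validate_age(raw):
    """Return (value, error_message); exactly one of the pair is None."""
    try:
        age = int(raw)
    except ValueError:
        # The generic "not an Int" failure becomes usable, domain-specific feedback.
        return None, "Please enter your age as a whole number, e.g. 34."
    if not 0 <= age <= 130:
        return None, "That age looks out of range; please check it."
    return age, None

value, message = validate_age("thirty")
assert value is None and "whole number" in message
assert validate_age("34") == (34, None)
```

The programmer-facing formal error and the user-facing message are deliberately kept separate, which is exactly the division of labour the paragraph above argues for.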

Let's turn this around:

- Why not compile statically if you can?

- Why not transform away the runtime by transpilation to the underlying machine architecture if you can?

### Heh.

Programmers have problems understanding type errors so what chance do users have?

Indeed (said the guy who disapproves of static typing).

### Programmers have problems

Programmers have problems understanding type errors so what chance do users have? The kind of feedback you are talking about needs to be in the interactive user interface.

Progressive disclosure is a good UI principle for these cases. Like, in Ubuntu, when software crashes (e.g. due to null pointer or unhandled exception or assertion failure) I'm not directly shown the stack. I'm given a dialog asking if I want to e-mail the developers, and an option to peek at the content. More than one layer of disclosure would be nice, of course.

Let's turn this around: Why not ... if you can

The answer to both 'why not' questions is essentially the same: design tradeoffs.

Avoiding runtime compilation hinders efficient dynamic behavior like mobile agents or user scripts (basically stuck with statically compiled interpreters). Separate compilation can hinder site-specific optimizations like constant propagation or working with generic unboxed types. Etc.. (This is not an exhaustive list of costs.)

If the tradeoffs aren't hurting your use case, that's great! Didn't cost anything you care about, is optimal tradeoff. Otherwise, even "if you can" get by with a design that hinders you, it's wise to consider alternative designs.

Are those tradeoffs real, though? We can statically compile to a bytecode (WebAssembly? The JVM? The .NET CLR?). For site-specific optimisations we can use configurable generics, as in OpenMP, where the C is compiled ahead of time but runtime configuration takes care of parallelising. Various compilers can add runtime feature checks and, for example, use vector extensions if available.

A program that receives a CLR-encoded function or plugin at runtime can only interpret or runtime-compile it, which is not so different from receiving an encoding in any other high-level language. Normally, this uses runtime compilation. We can (e.g. with Mono) use full AOT compilation for the CLR and drop JIT support from the runtime, but then we lose dynamic runtime behavior. This is no different from what I described.

Compiling multiple ways with runtime feature switches is a valid option; it has its own tradeoffs.

All designs involve tradeoffs. We can shuffle them around a lot, but only insofar as we haven't committed to specific design choices.

### Compromise

Is the CLR a "high level language"? I think this is a misrepresentation. The CLR is a virtual machine that executes a bytecode. In this way it is like any other target for static compilation. When using a virtual machine in this way we want to optimise the bytecode to allow platform specific optimisations to take place in the JIT layer where bytecode is translated to machine native. We want all generic optimisations to happen in the AOT compilation from source language to bytecode, to minimise the work the JIT has to do at runtime. So in this case there is both AOT and JIT compilation.

I agree all design is an exercise in the art of compromise. I also think that synergistic features support each other leading to the whole being greater than the sum of the parts.

### re high level language

CLR is a high level bytecode language. It is very nearly a full abstraction of C#, even preserving generics and type information. Basically, it's C# modulo comments and parsing.

But whether CLR is high or low level is not particularly relevant. I can casually drop the controversial word "other" from "other high level languages" and it won't change my point. What matters is that CLR needs further processing and compilation to execute, and in this respect distributing code via CLR is the same as distributing code via high level languages.

You mention "like any other target for static compilation". Targets for static compilation have historically included languages of many levels - C, C--, CLR, JavaScript, Haskell, various compiler-specific intermediate languages, etc.. I've been assuming you refer to static compilation in a narrow, conventional sense. But if you really do mean static compilation in the broadest sense, then neither you nor I can say much about it, good or bad. Arguments be like "Static compilation is good! For reasons!" "But I statically compiled this C program to Snusp and now I'm interpreting Snusp in Python, none of your reasons seem to apply, how is this good?" "Whyyyy would you do that?" "To prove you wrong, obviously."

minimise the work the JIT has to do at runtime

Assume I want to allow my runtime compiler to do a lot of expensive optimization work, especially constant propagation across module boundaries to support deep Futamura projections, yet I still want a fast edit-test cycle or even live programming. How might I go about this?

First, I could implement a durable cache with a JIT-specific intermediate language, referenced using secure hashes, so I don't need to redo most of my optimization work per cycle. (Difference from "separate" compilation is that the cached representation may be specific to types or constants, usage site, JIT version; it isn't meaningfully "separate" from much at all.) Second, I could design my language such that incremental type-checking, optimization, linking, and compilation are convenient, so I can make the most of my cache. Compositional languages are a good option. Third, I could make JIT of first-class functions more explicit (e.g. surgical precision JIT) so programmers have better control of those costs.
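The durable-cache idea can be sketched concretely. In this illustrative Python model (all names hypothetical; a real JIT would cache machine code, not strings), a compiled artifact is keyed by a secure hash of everything that influenced it, so an entry can never be stale - a change to source, flags, or compiler version simply yields a different key:

```python
import hashlib

cache = {}  # persistent store in a real system; a dict for this sketch

def cache_key(source, dep_interfaces, compiler_version, flags):
    """Hash every input that can influence the compiled output."""
    h = hashlib.sha256()
    for part in [source, compiler_version, flags, *sorted(dep_interfaces)]:
        h.update(part.encode())
        h.update(b"\x00")  # separator so adjacent parts can't collide
    return h.hexdigest()

def compile_module(source, dep_interfaces, compiler_version="jit-1.0", flags="-O2"):
    key = cache_key(source, dep_interfaces, compiler_version, flags)
    if key not in cache:
        cache[key] = f"compiled({source!r})"  # stand-in for real code generation
    return cache[key]

a = compile_module("def f(): pass", [])
b = compile_module("def f(): pass", [])          # cache hit: identical inputs
assert a is b
compile_module("def f(): pass", [], flags="-O3")  # different flags, new entry
assert len(cache) == 2
```

Unlike separate compilation, the cached entry may legitimately depend on usage-site types or constants; they just become more inputs to the key.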

Thus, JIT work doesn't necessarily need to be minimized to meet other design goals. Trying to do most of the work ahead-of-time vs. making most of the work reusable are two different viable approaches to meet design goals related to fast edit-test with rich optimizations.

### "Whyyyy would you do that?"

"Whyyyy would you do that?" "To prove you wrong, obviously."

I have admitted that my original position was focused too much on static compilation, and that I wasn't really thinking about interpreters. Substitute modular compilation for modularity in the earlier posts and we should be happy, right? My position is that different people and different requirements result in different languages; there is room for all. So what exactly am I trying to prove wrong? Are you seriously trying to argue that there is no reason for static compilation, and that we should all move to interpreters with JIT compilation?

### Are you seriously trying to

Are you seriously trying to argue that there is no reason for static compilation, and we should all move to interpreters with JIT compilation?

I am not convinced by most arguments for conventional static compilation. Seems a JIT cache would do well in most cases, and would simplify other features like debugging, library level DSL support (tower of languages), staged computing, deep optimizations, distributed program models (like mobile agents), and live programming. And static compilation, when it is truly required (like bootstrapping), can still be handled as a compilation mode (e.g. Mono and CoreRT support AOT compilation for .Net CLR).

I tend to understand compiled code as a computed view of a codebase. Storing and reusing views is essentially a caching problem. Separate or incremental compilation can be usefully understood as cached views that are relatively stable under most changes to source. Conventional approaches to static compilation and software distribution take a very ad-hoc approach to this caching issue, e.g. involving filesystems and package managers. And they do an awful job of cache consistency - e.g. it's very easy to have modules compiled against an old version of a header or different -D compiler flags. DLL hell is also a symptom of cache inconsistency. Nix package manager is perhaps the exception to prove the rule - carefully tracking code, compiler, and configuration.

Without a JIT cache, compiled code is recomputed too frequently so we will not tolerate expensive analysis and optimizations. If JIT is unpredictable or difficult for programmers to control, that creates other problems. Tracing JIT which is popular today has these big flaws, so I do understand why you're hesitant to embrace JIT! But it's the implicit tracing that's the problem, not JIT. We can support caching, introduce surgical precision programmer control over JIT, use tracing JIT tactics only where explicitly requested. This would solve the common issues with JIT without sacrificing the flexibility it offers for dynamic behavior.

So to answer your question: I would seriously argue in favor of deprecating static compilation as the normal or default performance strategy for software. And for those special cases where static-only compilation is required (e.g. for bootstrap elements), I'd prefer it just be a compilation mode.

Substitute modular compilation for modularity in the earlier posts and we should be happy right?

I think your phrase "modular compilation" is really implying three goals: modularity, performance, and fast edit-test cycles. You'd probably be unhappy to lose any of them. I'd be unhappy if you assume a specific approach (like separate compilation) is required to achieve them.

### pretty random claims

No JIT or interpreted language has the consistently fast performance of 'C'. Interestingly, looking at the work on HLVM, it seems that boxing values is the main cause of slowness, not garbage collection. Also, shipping the compiler with the executable results in a bloated runtime.

These claims are all over the map. 'C' does not have performance; code written in C and built with a particular tool-chain has a measurable performance attribute. That could be C built with a JIT, for example, and it would execute just as fast as the same C sources built ahead-of-time -- or even faster, since the JIT can optimize explicitly for the exact hardware that the code is running on.

And while compilers used to count as bloat, the average web page passed 2MB several years back, so something tells me you're worrying about kilobytes when people are busy wasting gigabytes. (The 1980s called and they want their outdated concerns back.)

It's a big problem with Java apps that if you package a runtime with the application, the download becomes huge, which puts people off

In this case, the 2000s called and they want their outdated concerns back. An app -- including the necessary Java runtime -- can be bundled inside of a 10MB download. That's less than 5 old web pages!

Java apps work on Android because the runtime is bundled with the platform, but look at the trouble with Oracle that has come from that.

That is not even a technical issue.

I am not interested in anecdotal speculation. Empirical evidence, please. Where is this faster 'C' JIT?

Secondly, you may not care about bandwidth, but the networks do. If I am distributing an app I care about the total download: the file size times the number of users. At that scale, increases in size can be significant.

For web pages, framework size and speed are absolutely a concern. We have great difficulty keeping our web applications acceptably fast even for simple things like displaying a page and moving from one page to the next. Page load time is important; pages should load in under 300 ms. Run Lighthouse on some of these bloated pages and they fail every test.

At least the user does not need to download and install JavaScript on their computer. Thank goodness I can compile from TypeScript so I don't have to worry so much about all the different JS versions out there.

### Ironically

From my perspective, it's separate compilation that lacks a strong argument for utility.

I suppose you don't do much programming in C/C++, Pascal, Modula-2, Ada, Rust, Go, etc.

Ironically, most of these languages do not actually support proper separate compilation, only Modula 2 and Ada do -- and with sufficient good will and care, plain old C. The others only allow incremental compilation (Rust, Go), or have a hacked-up model that is so broken that its worst case is quadratically worse than whole program compilation (C++).

(To clarify: separate compilation means that you can compile modules independently, i.e., the work span of compiling N module implementations is always 1. With incremental compilation, the span is the length of the longest transitive dependency chain. You cannot generally have separate compilation without separating interfaces from implementations.)
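The span distinction above can be made concrete with a short Python sketch (the graph and names are illustrative): with incremental compilation the span is the longest transitive dependency chain, whereas with separate compilation (interfaces split from implementations) every implementation compiles independently, so the span is 1.

```python
def incremental_span(deps):
    """Longest transitive dependency chain in an acyclic module graph.

    `deps` maps each module to the list of modules it depends on."""
    memo = {}
    def depth(m):
        if m not in memo:
            memo[m] = 1 + max((depth(d) for d in deps[m]), default=0)
        return memo[m]
    return max(depth(m) for m in deps)

# main depends on lib, lib depends on base: a chain of length 3.
deps = {"base": [], "lib": ["base"], "main": ["lib"]}
assert incremental_span(deps) == 3

SEPARATE_SPAN = 1  # implementations compile against interfaces, all in parallel
```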

### Seems slippery

separate compilation means that you can compile modules independently, i.e., the work span of compiling N module implementations is always 1.

But what is a module? Aren't you hinting at a problem with your comment about C requiring good will & care? It seems like there is a spectrum of support that a language might have for separate compilation. If you change the definition of a base type used throughout a code base, most compilers will require a full rebuild. On the other hand, with care, you can get workable separate compilation for some parts of your C++ project.

It should be possible to define a language such that changing the type of an abstract data type only requires recompiling that module.

Ada got a lot of stuff right; it's a shame it was never used more widely. The GNU Ada compiler produces code as fast as C/C++ (depending on whether you use tagged types in Ada). With recent updates I even managed to get package signatures to work nicely, so that you did not have to re-plumb everything with each instantiation.

### what else?

It should be possible to define a language such that changing the type of an abstract data type only requires recompiling that module.

Sure. Most languages have worked that way for 50 years now: if you don't change the surface area of a "module" (a concept that varies widely from language to language), then only that module needs recompiling.

Your post contradicts Andreas' post above. I guess it depends on whether you include '.h' files as part of separate compilation or not?

### #if 0

I guess it depends on whether you include '.h' files as part of separate compilation or not?

I used to receive libraries from vendors. They came as compiled (but not linked) files, with .h files for using them from our code.

Later, I used to receive libraries from vendors. They came as compiled and linked files (.dlls and .exes for Windows, for example), and they would ship .h files for using them from our code.

I think if you try too hard to apply human words to archaic technology and language decisions, you will inevitably come up short. Not every idea has one clean bin in which to be placed.

In the case of C/C++, the .h is definitely part of the source code, but it is also part of the surface area of a linkable module. I would consider this, in retrospect, to be a rather bad design, but it accomplished the necessary purpose for its day, albeit with a heavy dose of hashtags.

### some parts of C++

If by some parts you mean the parts that do not include templates, then maybe. I really don't think "workable" is a reasonable adjective to describe this though. Try "workaround".

+1

### Modularity in and outside computer science

I think it is worth taking a step back and looking at how modularity is used outside the sense of software modules as they are realised in programming languages.

Modularity is a very important concept in many fields of biology. Of particular interest is its importance in evolutionary biology; Gilbert (2000) Developmental Biology says:

Development occurs through a series of discrete and interacting modules (Riedl 1978; Gilbert et al. 1996; Raff 1996; Wagner 1996). Organisms are constructed of units that are coherent within themselves and yet part of a larger unit. Thus, cells are parts of tissues, which are parts of organs, which are parts of systems, and so on. Such a hierarchically nested system has been called a level-interactive modular array (Dyke 1988). In development, such modules include morphogenetic fields (for example, those described for the limb or eye), pathways (such as those mentioned above), imaginal discs, cell lineages (such as the inner cell mass or trophoblast), insect parasegments, and vertebrate organ rudiments. Modular units allow certain parts of the body to change without interfering with the functions of other parts.

Here modularity is defined in terms of the variability of functional parts: it is not required that modules have determinate interfaces, let alone that their interaction is constrained to occur at those interfaces.

It's great that languages like ML give precise, well-motivated semantics to the programming language constructs that they call modules, and it is great that when a programming language allows us to carve up programs at well-specified interfaces, we can implement separate compilation for them. But I think it is wise to use modularity as a more general concept, and the biological notion of functional units that admit variability is quite useful in the context of computer science too.