## The Platonic Solids of Software Construction and Their Realization in C

Synopsis -

Here I try to contrive five (it actually ends up being six) 'Platonic Solids' of software construction - i.e., the fundamental elements of programming that all programmers in all business domains end up leveraging, regardless of their general-purpose programming language.

As a practical matter, I then demonstrate how different aspects of each element are either supportable - or not - in C. A language like C is chosen partly because, when we use it to encode these elements, its weak language semantics enable us to understand each element in a more isolated way. For discussion at this level of analysis, this turns out to be useful.

However, I warn readers that this gist-article is more conjecture than science, an emerging idea that, if accurate in its notions, is a precursor to a rigorous investigation. That is why I offer it up for evaluation and critique here.

### Whence the solids

Your starting point is an assertion of priorities, your list of Platonic Solids. We all have preferences for how we want a PL to be; some of us are fortunate enough to find, or create, a language that's close to our ideals. It's of interest to understand the degree of match/mismatch of a given language with a given set of ideals; but the list itself seems to me to be, on the face of it, a subjectively chosen perspective rather than an absolute reference point. If you apply the list to some language X that is not too close to the preferences built into your list, but language X is quite close to someone else's preferences, your comparison will stir a clash between your preferences and theirs.

### Absolutely. One reason I

Absolutely. One reason I linked this gist-article here is to generate discussion on what would actually constitute a 'Platonic Solid' in the context of software construction. I have my list, but it is, as I said, conjecture.

What would you list as the Platonic Solids? Don't worry if it's subjective, with some discussion, we may yet discover a consensus!

EDIT: One thing which you said that I don't understand, however, is that my list is 'prioritized'. That is absolutely not the case - you cannot 'prioritize' axioms in any manner of which I'm aware.

### Miscellania

Some stray thoughts.

When you say resource semantics, how do you see that relating to garbage collection?

I've meditated much, over the years, about how programmers should cope with errors. My own biggest exploration in this regard was guarded continuations in Kernel, which sought a synthesis inspired by the best aspects — and leaving behind the worst — of exceptions and Scheme's dynamic-wind. Although I was pleased with the elegance of guarded continuations compared to the features they were inspired by, I felt that removing a forest of obstacles to elegant error-handling only revealed another forest behind it, with the suspicion of many more beyond. I also understand error-handling to have been key in motivating the initial conception of aspect-oriented programming, which I think is similarly groping after a goal we're nowhere close to. I've slowly come 'round to the belief that we have no clue how to handle error conditions.

I perceive you're pretty solidly embracing types. And categorical type devices. (Relatively early in my blog I wrote a post called "Where do types come from?", where I suggested that types were more of a problem for CS than a solution, and that turned out to be a controversial suggestion — who knew? :-P)

In my last year or so of college, I started exploring what I tentatively thought of as "abstraction theory", trying to analyze the essence of what programming languages ought to be trying to do. That would be... about thirty years ago, now. My thinking on that has evolved into a strong suspicion that we're "doing it all wrong" — that we really don't know how to program, and that if our civilization survives long enough to mature, programmers of the far future will be amazed by how clumsy our programming languages were.

I'm working on a blog post, atm, partly on the premise that our mathematics is more graceful than our programs because our mathematics is a conversation where we explain our ideas to other sapient beings, whereas we have fashioned our programming into a conversation where we try —futilely— to explain our ideas to a computer, a non-sapient device. For a very long time, I had puzzled over why Lisp has bignums and garbage collection: I kept wanting the essence of Lisp to be its elegant evaluation algorithm, which of course I'd sought to make even more elegant with Kernel, but then, were these other features, much though I liked them, just accidentally in the same language? Now I think all these features have in common that they seek to minimize the programmer's conversation with the computer, so the programmer can concentrate more on aspects of programming that are more mathematical in flavor. Whereas I see Haskell as embracing the inhuman rigidity of formal mathematics, which I see as part of the problem; that would certainly explain why I'm a Lisp person rather than a Haskell person.

### I think we agree on the goal

I think we agree on the goal - figuring out what is actually essential to programming, and then using that as a starting point.

But I don't think we can be postmodernist about it and discard the rich context of software construction history when deciding what is essential. In fact, I think said historical context should be the primary basis by which we should, at least initially, construe these things. We can then use them as a bridge to theoretical underpinnings.

On static typing - I guess I am a 'type person', in so much as I prefer to have a static type system than not, but I'm not into type systems that are much stronger than, say, HM. Beyond that, they seem to negate their own utility, in my experience. And on the other hand, I love dynamic type systems like Clojure's protocols which, AFAIK, can't be realized statically.

So when it comes to static typing, I suppose I'm closer to the middle. And the reason is that even with dynamically-typed languages, people always seem to end up wanting the language augmented with ad-hoc, tacked-on inferred static types anyway. The evolutionary standard is, again and again, a path to the middle. This phenomenon of tacking static types onto dynamic languages might be likened to Greenspunning - except with regard to type systems, and toward Haskell rather than Lisp.

On resource semantics and GC - I would straightforwardly say that GC satisfies the definition of a resource semantics. And so too does RAII. And any operational semantics that a language uses to help the programmer deal with resource allocation, such as memory and more.

### postmodernism

I don't think we can be postmodernist about it and discard the rich context of software construction history when deciding what is essential.

I think failure to recognize, and failure to hang on to, good ideas is a problem with our field. Had to look up this quote, wasn't sure who'd said it; seems to have been Richard Hamming:

one of my major complaints about the computer field is that whereas Newton could say, "If I have seen a little farther than others, it is because I have stood on the shoulders of giants," I am forced to say, "Today we stand on each other's feet."

My first programming language was vintage 1970s BASIC, and when I got to college it was carefully explained to me that it's impossible to write structured programs in BASIC. Which was odd since I'd been writing structured programs in it for years. I've since seen various successor languages called BASIC, but, like "descendants" of C, the later languages seem inclined to try to gather up the fandom of the earlier language by (somewhat) imitating its syntax, while substituting currently-trendy semantics for whatever deep semantic elegance was probably responsible for the old language acquiring a fandom in the first place. At some point I came to suspect that all languages that naturally attract that sort of passion have something excellent in their semantics, something that might be taken farther, upgraded for a new time and new generation — but isn't upgraded because doing that well is much harder than imitating the latest semantic fad. C had pointer arithmetic elegantly unifying the notions of pointer and array. BASIC, now — what on Earth did BASIC have? Not GOTOs as such, that's incidental. My current guess is that BASIC's key thing was treating a program as a linked list of instructions, with commands for editing the list that amounted to a simple IDE. I continue to muse on the possibilities for an upgraded language on that theme.

So, yes, I agree that we don't want to throw away the past. I'm just not convinced we've ever gotten all that close to our destination yet, nor that we're necessarily even moving toward it, particularly.

### To take your line of

To take your line of argument and extend it:

Not only do we have problems respecting the past, we also have problems recognizing that there is such a thing as a dead branch on the language evolution tree; particularly when it is the branch upon which we currently sit.

Take, for example, object-orientation. It's a haphazard entangling of many otherwise independent programming elements. If a language like C++ had done the smart thing and attempted to expose these elements in a more independent, composable way, we wouldn't have so many wrong-but-seemingly-correct ways to write C++ code. I'm not saying that C++'s semantics could have necessarily exposed each of these elements in perfect isolation, but you might imagine how more of that isolation could have simplified its design dramatically. For language designers, doing so might start with having more conceptual clarity, as is hopefully rendered by literature such as my gist-article.

As programmers, we need to get better at recognizing that not all forward movement is progress (e.g., OO, Agile, etc.).

PS - I am thinking of adding 'reflectivity' as a 7th element in the document.

### Not Reflectivity

Whilst I generally agree with your initial list, and I agree that reflectivity is a property some languages have, I don't agree that it is beneficial. Being able to read and mutate the program's own structure at runtime seems both difficult to achieve securely and overly complex. I think anything you might want to achieve with reflection that is safe is also achievable using static compile-time generics, especially if combined with type-classes, when you can do a kind of generative logic programming. Reflection seems unnecessary, unsafe, and leads to unmaintainable and opaque solutions. Am I being overly harsh on reflection?

### Reflectivity, not reflection.

I think it's important to note that I used the word 'reflectivity', not 'reflection'. Reflection is understood as a type of implementation, usually one that allows users to operate on user-defined types via metadata at run-time. 'Reflectivity', on the other hand, is not an implementation but a requirement, one that is agnostic as to whether it is evaluated at compile-time or run-time. All it says is that a user will eventually need a (hopefully composable) facility for operating on user-defined types - at some point in time.

Also, the list was initially put together as an observation of things that programmers always end up needing - whether they like it or not. Things that, when not provided for by the designer, end up getting Greenspunned or worse - repeatedly hard-coded. The list is (hopefully) more informed by evolutionary forces than implementation concerns. So when we evaluate reflectivity as an element, we look at it as something we either always end up needing or not, irrespective of any particular implementation. We have no need for any implementation to be 'good', or even to exist yet.

### reflectivity versus reflection

What is the distinction?

I've got a quote saved in my quotes file, from an OOPSLA '87 paper by Pattie Maes:

A reflective system is a computational system which is about itself in a causally connected way. In order to substantiate this definition, we next discuss relevant concepts such as computational system, about-ness and causal connection.

I hung onto it because I was amused by the offering of a definition in which all the important words are not yet defined — even the word "about" — but there's also some sort of moral there about the slipperiness of defining reflection.

Nasty consequences have flowed from a widespread misapprehension that fexprs are a reflective feature (though I did try to clarify this point early in my dissertation). Mitch Wand titled his 1998 paper "The Theory of Fexprs is Trivial", but in fact the paper wasn't about fexprs but about reflection. The theory of a perfectly reflective language is trivial; but fexprs are not reflective, and in fact it's pretty natural for a language with fexprs to have an equational theory properly containing that of lambda calculus. (I concluded at the time that it would be entirely beyond me to get a paper about this result through a peer-reviewed journal, because if you provide an iron-clad proof of the result to someone who is convinced it's not true, they literally won't hear what you're saying to them; though I've continued to struggle to explain the point on my blog, most recently this.)

So it seems to me the nature of reflection is already deeply confusing; but here we're interested in nuts-and-bolts computation, and it gets deeply enigmatic. There may be a number of interesting concepts in this neighborhood that some people will describe as "reflection". So I honestly don't know — and would like to — what you have in mind either as "reflection" or as its distinction from "reflectivity".

### More Mysterious the Deeper I Look - But...

I haven't thought about reflection nearly as much as yourself. But reading your work does inform me that the idea of a reflective language might tie in more implementation restrictions than I initially thought. But maybe not? I'm not sure.

Frankly, it gets very murky the deeper I look at it, and I think there is some fruit to be borne from an investigation into just how implementation-independent reflection in a programming language can be.

But my thoughts were, well, the bytes always have to be _somewhere_, and if the compiler (not even the runtime) has enough information to encode a path to those bytes, then that's enough to satisfy some definition of reflection.

So if only the compiler can know about these paths, and all accesses are compiled down to instruction code, just how much security is actually compromised?

### Little-endian or big-endian

Is the machine running the program little-endian or big-endian? What is its word size? Actually looking into the runtime on a general machine is a mess. You could define a virtual machine to run the language consistently, like Java has the JVM, but this restricts you to an interpreted environment and the complexity of adding a JIT compiler for performance. (Maybe WebAssembly is a useful target for this kind of approach?)

Maybe a LISP-macro style approach where the reflection is of the source AST representation, because otherwise you don't really know what to expect after the optimiser has been at it, unrolling loops, and hoisting variables etc?

I think generic programming is a cleaner way to achieve the same ends. I think it's better to reason about what a program does, rather than how it does it. In that sense I would look for a kind of compile-time meta-programming that can inspect the type system not the code itself.

### I think that is still a form of reflection?

> Is the machine running the program little-endian or big-endian, what is its word size?

Is this a suggestion that a compiler wouldn't know this at compile-time? AFAIK, native compilers typically have this information. I might be mistaken, however, as I have not concerned myself with that level of detail for a long time.

> I think generic programming is a cleaner way to achieve the same ends. I think it's better to reason about what a program does, rather than how it does it. In that sense I would look for a kind of compile-time meta-programming that can inspect the type system not the code itself.

I think it might be cleaner than what I suggested. However, I think what you're describing is still a form of reflection, just one more sophisticated than what I had in mind.

Are we disagreeing about what constitutes 'reflection'?

### Abstract reflection

You've hit the nail on the head: how implementation-independent? A reflective language allows a program to interact with some aspect of its own computational state; but it's usual for a language to implement an abstract machine, and the computational state interacted with may be computational state of the abstract machine. For example, a language might provide reflective functions for things like how long has the program been running, or how much memory is it taking up (or how much of some other resource is it using); but are those really physical units being returned, or abstract ones? If the emphasis is really reflectivity, there's no need to reveal details of the concrete implementation environment; but often the easiest way to support the reflective facility is to provide concrete details that satisfy the abstract definition of the reflective facility. This then becomes a temptation to turn the facility into a really different sort of facility altogether, a way to get past the abstract language and access the concrete particulars of whatever machine the language is running on. That's a very different sort of facility. Reflective facilities may be designed to strike some balance in how much concrete environment can be revealed without either making programs too platform-dependent or miring them in complex provisions for cross-platform variation.

### reification

I feel I should mention, there are sorts of reflection that are inherently tied to the abstract machine of the language. Call-with-current-continuation is considered a reflective facility, and is thoroughly abstract. This, though, gets into issues of reification, which in programming is closely associated with — but different than — reflection.

Reification is a term from philosophy, where it designates a conceptual error where something is treated as if it were an object when it isn't really an object. This is an insult that philosophers hurl at each other. A philosopher who says there is no such thing as an actual infinity, as infinities can only be approached but never achieved, may accuse others of reifying their infinities — treating infinities as if they were actual things when they aren't. Notice that this is only reification if one accepts that infinities are not actual things; if you don't accept that premise, then no reification has taken place. In essence, reification is thus a type conversion, where some nebulous non-object is manifested in the form of an object.

Call-with-current-continuation is reification because a function would not otherwise have access to its own continuation; the continuation would be theoretically an aspect of the program semantics, but not one that the program itself could get at. Providing this sort of access to continuations is a basic trick in reflective Lisps. Those Lisps also routinely use fexprs, and this has led, historically, to seeing fexprs as reifying their operands. As with infinities, though, whether or not operand-capturing is reification depends on whether you suppose that the operands weren't already objects. This is really getting at something profound about interpreted versus compiled languages (the topic of the aforementioned blog post); compiled languages assume that the unevaluated operands "go away", so they aren't objects and their capture is reification, whereas interpreted languages focus on how the operands are processed, so the operands are still there when the decision is made to capture them and it's not reification.

### Priorities

Seems we've used the word(s) in somewhat different ways. When I speak of making these things priorities, I mean acknowledging them as things that are important. When you speak of prioritizing them, I take it you are referring to assigning a relative order-of-importance to them, which I had not meant to suggest.

### Strong claims require strong evidence

No matter the domain or programming language, programmers always end up needing support for ALL of these things in some form -

It is not clear why you are looking for eternal truths in software development. Do they really offer more utility to a programmer than truths that work on a particular day, for a particular problem? There is an old adage that if we cannot solve a problem, it is tempting to generalise it. Two issues spring to mind:

1. Any level of abstraction produces a less complete description of a problem. The minimal level of abstraction that covers a domain is often a productive level to work at.
2. The "range of problems that a programmer may try to solve" is not a natural description, may not be a sound description of a set of problems and thus may not have a unique abstraction that is suitable to describe it.

The second point could (depending on background) also be paraphrased as "the set of problems in programming and the abstractions that describe them do not necessarily form a lattice", or, we tend to throw random things into the collection of "programming problems" depending on whatever arbitrary domain we are working within.

All of which acts as a prologue to say: perhaps you do not need "platonic solids" of language design. There are weaker forms of evidence of utility that may be more appropriate. However, if you are set on finding eternal truths then the opening statements that you make need some serious justification. It is very bold to say that ALL domains require support for specific abstractions. This is the same as saying there exists no domains in which these abstractions are not required - a claim that is easy to invalidate. This suggests that some weaker form of utility would be more appropriate.

* Resource Semantics (such as RAII in C++, or finalizers in C#)

Programs that operate in statically sized domains, or that have well-founded conservative estimates of their resources, do not require this. Examples include embedded systems, hard real-time systems (i.e. SCADA / industrial control), and anything that has a fixed-size input/output relation, including a wide range of streaming programs (where each element of the stream is a fixed size).

* Error Semantics (such as exceptions in most languages, or conditions in Common Lisp)

What about domains that do not contain errors - e.g. lock-free algorithms that are guaranteed to exit without an error regardless of state or input? What about scientific codes that have uniform control-flow on fixed-size inputs? What about cryptographic primitives? ...

* Algebraic Types (such as structs and unions in C, or records and DUs in F#)

We have just watched a full cycle of the wheel-of-reincarnation in GPUs. Only now are they approaching a level of sophistication at which algebraic types have become necessary. Previous iterations of the hardware have operated fine on formulations of problems using parallel ("exploded") arrays.

* Data Abstraction (such as that which is often badly entangled within object systems in OO languages)

It is unclear (to me at least) what you mean here, and how it differs from algebraic types. Without clarity it is difficult to argue that it is universally necessary.

* Generic Programming / Type Safety (such as function and operator overloading in C++, and generic collections in C#)

This tends to be more useful in open domains - where the final user of the code is not constrained, but in closed domains (i.e. not building libraries, but rather complete systems) the usefulness of this varies more. If we know what type of data will be final, then generality has no value in many performance-constrained domains. Happily a video on this exact topic landed on my desk yesterday and goes through the argument far more completely than I am willing to do here.

* Categories (such as facilities for encoding abstract concepts like Iterables / Enumerables, Functors, Monoids,
and so on)

These are certainly useful in many domains. But even here on LtU there is a little controversy at least over whether they are necessary in a clean description of every problem, and no doubt some of the more prolific posters will chip in on this very point.

So, in summary, universal necessity is an incredibly strong argument that is very difficult to make. Perhaps it is not the most productive way to make the argument that you are advancing. Weaker types of argument require different kinds of evidence, and perhaps advancing some empirical evidence for the ranges of domains that these language properties are useful within creates more room for discussion.

### If you take the platonic

If you take the platonic solids idea as the requirements of a single language that is usable in all of the domains you describe, it makes sense. I don't think the platonic solids idea is claiming that all domains need all of those features all of the time, just that all of those features are needed if you want to program in only a single language in all of those domains.

### Complexity is not free

But if these "solids" are not always required at the same time, then why put them in a single language? Extra complexity arises from the interactions between them. If a particular domain does not require all of them, then a simpler language exists to service it. If they are not eternal requirements for a language, then in what sense are they platonic? Sounds a little like one language to rule them all... and we all know how troublesome binding can be.

### The Interminable Nature of General Purpose Languages

My supposition might be easily assailable, but I take it for granted that no single DSL (and further, no DSLs in any combination) can be a reliable replacement for a general purpose language.

We all know what generally happens when someone creates a DSL intended to allow developers to construct software in lieu of a general purpose language... Due to practical needs (usually the need to reduce code repetition), the DSL gets augmented with more and more ad hoc semantics until one day it becomes the equivalent of a very badly designed general purpose language.

This is one way in which historical context informs us of the need for a general purpose language, regardless of business domain.

And if that all holds, then I think my other arguments follow.

EDIT: For clarity, I do not, and would not, attempt to argue against using DSLs in combination with a GPL.

### Specialisation

It seems reasonable that no single DSL can be a reliable replacement for a general purpose language... but what about a collection of DSLs? A collection of languages can be specialised to suit the various tasks in a business domain.

What is the rationale for a single language (e.g. is it to avoid the switching cost on programmers? is it based on the assumption that certain engineers will only have familiarity with some of the languages? ...)

Feature creep is a thing. It happens in general purpose languages as well as DSLs. Over time a tool that is suitable for problem X tends to be applied to problem X'. In language design this often causes feedback, e.g. the development of perl or c++.

One advantage of decomposing a domain into a family of languages is that not all problems have to be solved with the same point in the language design space. Feature creep in general purpose languages is a process without any inhibitory feedback (hence the "creep"). In systems implemented in DSLs there are multiple points in the design space. Not every point will be "closest" to a particular problem and so there is an alternative: that problem should fit into language X rather than language Y.

I'm interested in your rationale for gluing this particular set of properties together. It does not seem to be a minimal set for anything other than "all possible problems". As a proposal for a language design I can see criteria that you are maximising (i.e. the number of problems that have a natural expression) but I do not see any criteria that you are minimising.

### Many thoughts...

> What is the rationale for a single language (e.g. is it to avoid the switching cost on programmers? is it based on the assumption that certain engineers will only have familiarity with some of the languages? ...)

The problem with the approach of multiple DSLs in lieu of a general purpose language is two-fold -

1) In general, DSLs don't reliably compose. Once a set of semantics is sufficiently specialized to a domain, it loses the ability to predictably interop with other equally specialized sets of semantics. I don't have the math to demonstrate _why_ this is, but it's something my previous attempts taught me.

2) If you _do_ manage to conjure up a sufficient number of DSLs that can solve the same set of problems as a general-purpose language, then all you've done is recreate a general-purpose language, albeit shattered across multiple DSLs.

It's just another fulfillment of the principle that general-purpose languages, once killed off, are always reborn. No matter what you do, once you kill all the general-purpose languages in your tool set, another one starts being reborn someplace (or places!) else inside your existing tool set. Except it is born all mutated, incoherent, and screaming for death, like some sort of horror story.

> Feature creep is a thing. It happens in general purpose languages as well as DSLs. Over time a tool that is suitable for problem X tends to be applied to problem X'. In language design this often causes feedback, e.g. the development of perl or c++.

For C++, I'd posit that one of the reasons for its feature creep is that it _wasn't_ designed with this sort of 'platonic' approach in mind. Instead, like most OO languages, it was designed in the opposite way! Instead of attempting to expose each of these elements in an isolated but composable manner, it tried to cram nearly all of them into a single, non-decomposable object model. It was only during and after Stepanov's work that we built out a more decomposed but mostly parallel set of semantics (STL-style instead of OO-style). Even today, we see how these two parallel design paths continue to dally with one another via the Uniform Call Syntax proposals (https://isocpp.org/files/papers/N4165.pdf and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4174.pdf).

> One advantage of decomposing a domain into a family of languages is that not all problems have to be solved with the same point in the language design space. Feature creep in general purpose languages is a process without any inhibitory feedback (hence the "creep"). In systems implemented in DSLs there are multiple points in the design space. Not every point will be "closest" to a particular problem and so there is an alternative: that problem should fit into language X rather than language Y.

Yes, that's an advantage, but in an approach that I still think is overall unworkable in practice. (However, there's a lot of room to argue that such a thing is, in fact, workable in practice - and I'd like to see someone make more progress in this area than I could!)

> I'm interested in your rationale for gluing this particular set of properties together. It does not seem to be a minimal set for anything other than "all possible problems". As a proposal for a language design I can see criteria that you are maximising (i.e. the number of problems that have a natural expression) but I do not see any criteria that you are minimising.

This gist-article intentionally does not propose any means of gluing these elements together, nor does it try to give a comprehensive description of how a language might provide them (although some can be inferred in the case of a C-style language - if one chooses to). That aside, the gist-article instead tries to explicate a baseline set of language design requirements that says, "Hey, I know I'm going to eventually need these elements from any general-purpose language I use, and often in different possible combinations, so I will be pleased if you give me a language that can provide them out of the box (either with language semantics directly or from a standard library) so that I don't have to build them myself in an ad-hoc way."

### A couple of things I misstated (which I did not intend to argue)

Sorry, when I said domain, I meant 'business' domain, not computational domain. EG - whether you're a game programmer or a web developer. Certainly domain-specific languages don't need many of these elements (although they often end up needing some).

Also, I didn't intend to include domain-specific languages, but only intended to mean general-purpose programming languages.

It looks like I have a bit of rewording to do, to say the least!

Hopefully this explanation makes the article seem a little less off-base.

### Necessitarianism?

Necessitarianism? Did the coin bring you here?

### Pragmatism?

Sure, it could seem like necessitarianism, particularly if the purveyor were to be dogmatic about it, but it also could be pragmatism. From experience, the pragmatic programmer knows what works and what does not work. The pragmatic programmer also considers things like availability of programming talent and the cost of retraining to switch paradigms. The pragmatist chooses something similar to what we already have, but with certain constraints arrived at from actual experience of using those tools over many projects.

### 'Evolutionary Reasoning', to turn a phrase.

From a philosophical perspective, I think my view is informed more by the reasoning process uniquely employed by the evolutionary sciences, such as evolutionary psychology or biology.

I don't know if that makes me either a pragmatist or a necessitarian or some third thing, however. Regardless, I think that this 'evolutionary reasoning' is a useful tool at least when conjecturing about these complicated matters.

I don't think I would apply this 'evolutionary reasoning' process beyond the initial stage of conjecture, however. In our domain, I think we might all agree there are more precise and better-suited tools for that.

### If he would exist

C++ has a user base of around nine million people who all have a different perspective on what works and what doesn't. About a percent of a percent of them makes up the steering committee. And nobody can agree on anything; they can vote, though.

The pragmatic programmer doesn't exist.

### Pragmatism Aside, These Things Follow

That's kinda my point -

Contention follows confusion,

Confusion follows incoherence,

Incoherence follows conceptual murk.

If we had a time machine, and we helped Bjarne use this (purportedly) more clear conceptual framework, C++ could have resulted in less incoherence, thus less confusion, and thus less contention.

If this conceptual framework did not (purportedly) help in these ways, why would I be proposing it?

### Bjarne is very important but only gets to vote

Bjarne is very important but only gets to vote these days just like anyone else.

He did a nice job, though I am trying to popularize the notion that every programming language designer should be punished for his/her crimes.

I don't think this approach of trying to distill some minimal axioms/functionality for all general-purpose programming languages will lead to anything really constructive no matter how experienced a programmer you are. Though it's interesting to see what you come up with. Your motives are irrelevant to the discussion.

Create a programming language and we'll see what punishment will fit your crimes. That would be my attack vector.

### How My Language Holds Up to My Own Standard

The only interesting language I've built so far is AMSL, a modular / extensible scripting language. Its AST is here, its Prelude is here, and an example of its extension for use in a game engine is here and here.

Now, in my defense, I built this language before formulating the 'Platonic' framework, and the language was not intended to be general-purpose anyway (EG, it would never deign to replace one). That said, it is still interesting to analyze it in the context of the gist-article.

Unfortunately, you'll have to scrutinize some F# code or play with the REPL to critique the language since many of the semantics aren't utilized in the above Prelude (EG, error handling). It's also still in beta due to missing some planned features, EG, a using directive. But as far as semantics go, it mostly satisfies the elements I proposed -

1) Resource Semantics - It's a GC language, and if you need something with which to automatically release resources, you can just use a continuation. The continuation will work predictably because AMSL's error handling never unwinds the stack.

2) Error Semantics - The language uses a concept called 'Violations' where any value can be substituted with a Violation, and all operations with Violations result in a Violation, sort of like how NaN's work in unchecked floating point math (albeit with error information).
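Since the surrounding article realizes these elements in C, here is a minimal sketch of that NaN-style propagation discipline in C. All names (`Val`, `val_add`, and so on) are invented for illustration; AMSL's actual Violations carry richer error information than a single message string:

```c
#include <assert.h>
#include <stdbool.h>

/* A value that may be replaced by a 'violation'.  Every operation on a
   violation yields a violation, like NaN propagation in unchecked
   floating point math -- but carrying error information. */
typedef struct {
    bool is_violation;
    double value;        /* meaningful only when !is_violation */
    const char *error;   /* meaningful only when is_violation */
} Val;

static Val val_ok(double v)             { return (Val){ false, v, 0 }; }
static Val val_violation(const char *e) { return (Val){ true, 0.0, e }; }

/* Arithmetic propagates violations instead of unwinding the stack. */
static Val val_add(Val a, Val b) {
    if (a.is_violation) return a;
    if (b.is_violation) return b;
    return val_ok(a.value + b.value);
}

static Val val_div(Val a, Val b) {
    if (a.is_violation) return a;
    if (b.is_violation) return b;
    if (b.value == 0.0) return val_violation("divide-by-zero");
    return val_ok(a.value / b.value);
}
```

Because no control flow is disturbed, any intermediate error simply flows through the rest of the computation to wherever the result is inspected.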

3) Algebraic Types - For product types we have Records, and for sum types we have Unions.
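In the C realization the article discusses, the product half of this is direct while the sum half exists only by convention. A small sketch, with type names invented here:

```c
#include <assert.h>

/* Product type (record): every field present at once. */
typedef struct { double x, y; } Point;

/* Sum type (union): exactly one alternative, discriminated by a tag.
   C supports this only by convention -- nothing stops you from reading
   the wrong member, which is why languages with real algebraic types
   check the tag for you. */
typedef enum { SHAPE_CIRCLE, SHAPE_RECT } ShapeTag;

typedef struct {
    ShapeTag tag;
    union {
        struct { double radius; } circle;
        struct { double w, h; }  rect;
    } as;
} Shape;

static double shape_area(Shape s) {
    switch (s.tag) {
    case SHAPE_CIRCLE:
        return 3.14159265358979 * s.as.circle.radius * s.as.circle.radius;
    case SHAPE_RECT:
        return s.as.rect.w * s.as.rect.h;
    }
    return 0.0;
}
```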

4) Data Abstraction - If you want to implement an abstract data type, you'll either want to develop your new type at the level of F# code in your language plug-in, or use a Record with naming conventions for 'private' fields. Since sometimes you might want to do the latter, I'd say this is only half-supported.
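For comparison, the standard C idiom for a fully supported abstract data type is the opaque pointer: clients see only a forward declaration and a set of functions, never the representation. A minimal sketch (the `Counter` type is invented for illustration, and everything is shown in one file, though the point of the idiom is to split it across a header and a .c file):

```c
#include <assert.h>
#include <stdlib.h>

/* The header would expose only this declaration and the functions. */
typedef struct Counter Counter;

/* The representation would live in the .c file, hidden from clients. */
struct Counter { int ticks; };

Counter *counter_new(void) {
    Counter *c = malloc(sizeof *c);
    if (c) c->ticks = 0;
    return c;
}

void counter_tick(Counter *c)             { c->ticks++; }
int  counter_value(const Counter *c)      { return c->ticks; }
void counter_free(Counter *c)             { free(c); }
```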

5a) Generic Programming - AMSL supports multi-methods that dispatch based on the type of the first argument, which, while not as sophisticated as Clojure-style Protocols, does the same job in most cases.
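Dispatch on the type of the first argument can be spelled out by hand in C with a type tag and a per-operation function table - roughly the machinery a dynamic multi-method automates. All names here are invented for illustration:

```c
#include <assert.h>

/* Every value carries a type tag... */
typedef enum { TY_INT, TY_STR, TY_COUNT } TypeTag;

typedef struct {
    TypeTag tag;
    union { int i; const char *s; } as;
} Any;

/* ...and a 'multi-method' is a table indexed by that tag. */
typedef int (*SizeFn)(Any);

static int int_size(Any a) { (void)a; return 1; }

static int str_size(Any a) {
    int n = 0;
    while (a.as.s[n]) n++;   /* string length by hand */
    return n;
}

/* One dispatch table per generic operation. */
static SizeFn size_table[TY_COUNT] = { int_size, str_size };

static int generic_size(Any a) { return size_table[a.tag](a); }
```

New types participate by adding an enum entry and a row in each table - exactly the open-ended extension that a multi-method makes implicit.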

5b) Type Safety - This is a strongly dynamically-typed language, so you might say this is half-supported. Strongly-typed, yes. Statically-typed, no.

6) Categories - Using multi-methods, many categories are available out of the box including Monoid, Functor, Foldable, Enumerable / Traversable, Indexable, Arithmetic, Equality, Comparison, and so on. Users can create new types that support these categories, and make new categories as well.
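In C terms, a category like Monoid reduces to a record of function pointers that a participating type must supply: an identity element and an associative combine operation. A hypothetical sketch for an integer-sum monoid:

```c
#include <assert.h>

/* A Monoid over int, expressed as a record of operations. */
typedef struct {
    int (*empty)(void);        /* identity element */
    int (*append)(int, int);   /* associative combine */
} IntMonoid;

static int sum_empty(void)          { return 0; }
static int sum_append(int a, int b) { return a + b; }

/* Any function written against the record works for every instance. */
static int fold(IntMonoid m, const int *xs, int n) {
    int acc = m.empty();
    for (int i = 0; i < n; i++)
        acc = m.append(acc, xs[i]);
    return acc;
}
```

A product monoid would plug in `1` and multiplication with no change to `fold` - the same reuse a multi-method-based category system provides automatically.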

i) Lambdas - It's a Lisp, so, yeah.

ii) Async - In practice, this would be done at the level of the language plug-in. The language's semantics could be straightforwardly extended to support this directly, however.

iii) Reflectivity - There is some reflection support for basic types and type names, but anything to do with serialization would be implemented at the level of the plug-in (and already is in the case of the game engine).

The biggest practical problem with the language (aside from its beta state and lack of documentation) is that there isn't a compiler for it, and for personal reasons, I doubt I'll ever get time to write one... so AMSL code is *extremely* inefficient to run... which might be fine for some uses.

### Lambdas

Just want to pick out another of these that causes problems, and that is Lambdas. Whilst they are convenient, they lead to memory leaks in languages with mutation due to variable capture. Capture also breaks referential transparency, as the function is no longer dependent only on its arguments. When I first learned functional programming, I found the lack of Lambdas in imperative languages frustrating. Now that they are available, I look at code written with them and try to figure out why it's leaking memory or fragmenting the heap so badly.

If we strictly require functions to only depend on their arguments, then you would only allow top-level functions.

It's hard because Lambdas are so convenient, and reduce boilerplate in code, but I almost prefer C++ style "function objects" because it makes the storage requirement explicit.
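A sketch of the function-object pattern in C makes the point concrete: the captured state is an explicit struct, so its size is visible via `sizeof` and nothing is captured implicitly. Names here are invented for illustration:

```c
#include <assert.h>

/* The 'environment' of the function object -- every captured variable
   is an explicit, visible field.  sizeof(AddOffset) tells you exactly
   what a copy costs. */
typedef struct {
    int offset;
} AddOffset;

/* The 'call operator': takes the environment explicitly. */
static int add_offset_call(const AddOffset *self, int x) {
    return x + self->offset;
}

/* A higher-order function receives the call protocol and the object. */
static int apply_twice(int (*call)(const AddOffset *, int),
                       const AddOffset *env, int x) {
    return call(env, call(env, x));
}
```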

### C++ lambdas don't have these

C++ lambdas don't have these particular problems, although perhaps that is because they're not really lambdas, but as you say, stack-allocated function objects.

Might I suppose that the real issue you're having with C++ is not lambdas, but attempting to do functional programming in C++, a non-GC language?

In C++, like other non-GC languages, you can use functional programming reliably only to a degree. As you know, immutability in local scopes works well, and const annotations can enable referential transparency for simpler functions. But when it comes to purely functional data structures that inherently rely on allocation, this approach breaks down fairly quickly.

I think the real question is - which definitions of 'lambdas' do C++ lambdas satisfy? The one in my document, which mentions them for their syntactic convenience? I'd say yes. Others, likely not.

### Not C++

I am saying I prefer C++ function objects to the pervasive over-use of closures in functional languages with mutation and garbage collection. JavaScript and TypeScript would be examples of languages with closures, mutation, and GC, but I am trying not to single out particular languages.

The problem I have with them is that it is not at all clear what the cost (in time/space) of a closure is. Whilst if you are careful you can use closures efficiently, it is all too easy to capture large state that never gets garbage collected.

Maybe this is something the type system could help with though...

### I think that it might be

I think that it might be sensible for a language to expose a stack-allocated lambda type with some rules about it being non-returnable, and perhaps an explicit conversion function to a dynamically allocated one. To my knowledge, there's nothing strictly saying a language can't have multiple lambda types.
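A rough C sketch of that two-type idea, assuming a closure is a function pointer plus an explicit environment: the stack-allocated form is cheap but must not escape its frame, and an explicit conversion copies the environment to the heap when it must. All names here are invented (and error handling is omitted for brevity):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A closure: call protocol plus an explicitly-sized environment.
   When env points into the stack, the closure must not outlive the
   frame that owns it. */
typedef struct {
    int (*call)(void *env, int x);
    void *env;
    size_t env_size;
} Closure;

static int add_n(void *env, int x) { return x + *(int *)env; }

/* The explicit conversion: copy the environment to the heap so the
   closure may be returned or stored beyond the current frame.  The
   caller becomes responsible for freeing env. */
static Closure closure_to_heap(Closure c) {
    void *heap_env = malloc(c.env_size);
    memcpy(heap_env, c.env, c.env_size);
    return (Closure){ c.call, heap_env, c.env_size };
}
```

Making the conversion a distinct, visible operation is the point: the programmer decides exactly when allocation happens, rather than the language deciding implicitly at capture time.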

### old stuff

Some fragments of Lisp history.

First-class functions were supported by pre-diaspora Lisp... okay, maybe I should explain that a bit more. In the beginning, there was one Lisp language. Because nobody had prior experience with how to implement such a thing, the first implementation had severely broken scoping rules, which resulted especially in misbehavior of supposedly-first-class functions. So they added a device for constructing statically scoped FUNCTIONs, which existed alongside dynamically scoped LAMBDAs. It was all rather clumsily done, but it was there in the language. Then, after Lisp 1.5, Lisp splintered into myriad dialects in a sort of Babel-event (a diaspora). Through the later 1960s and 1970s, Lisp was a family of dynamically scoped dialects. The various dialects dropped the full-blown statically scoped FUNCTION device which interfered with storing function parameters on an ALGOL-style stack. And this continued until SCHEME, with specific intent to explore the semantics of Actors, introduced the idea of a purely statically scoped Lisp, starting in the mid-1970s but not catching on until the 1980s.

A parallel track through all this time is macros and fexprs. Static scope isn't natural to macros because macros are compile-time structures, living out their entire life cycle before the run-time stack is ever created; so it took a lot of effort to shoehorn macros into a more-or-less statically scoped discipline. (Fexprs, meanwhile, though they may appeal to a hacker's sense of, well, cool-ness, are horrifically badly behaved without static scope, making it pretty ironic that they were finally abandoned by the mainstream Lisp community in 1980, just as Scheme's static scope was about to catch on; but I digress.) Statically scoped macros have to basically do a compile-time simulation of run-time static scope, so that their scope corresponds to what the run-time system is going to do later. And around the turn of the century, Alan Bawden had the insight that macros with some more sophisticated reasoning — i.e., with suitable static typing — can deduce situations where they can safely pretend to be actual run-time objects; he described the resulting system in a POPL 2000 paper called First Class Macros have Types.

Fwiw.

### early lisp history

Just looking for a good reference on that part of the early history of Lisp. You mention Graham's On Lisp - is that what you would refer to?

### early lisp history refs

The two most basic general references are the Lisp papers in the first and second ACM HOPL (History of Programming Languages) conferences: McCarthy 1978, Steele and Gabriel 1993. My own starting point on the subject (even for looking up those two papers) would be Section 3.3 of my dissertation. I drew on a diversity of other material in assembling that section. Neither HOPL paper covered fexprs — and, rather strikingly imho, both papers' authors not only omitted fexprs but mentioned that they were doing so (evidently aware it's a topic of interest). I think I actually laughed out loud when I realized that one paper explained they were omitting fexprs for lack of time while the other explained they were omitting fexprs for lack of space.

[Correction: I fear I got sloppy there, trusting my memory — the section of my dissertation says those HOPL papers don't directly cover scope, with the justifications re time and space; not fexprs.]

### that is

really funny indeed. Thanks for the hints!

### Would delimited lexical scope help?

In my own little lisp, I implemented delimited lexical scope via the (wall ...) operator. It works like let, except that it inherits no parent environment.


    (let (x 1 y 2)
      (wall (z (+ x y))
        z))
    ; -> 3

    (let (x 1 y 2)
      (wall (z (+ x y))
        x))
    ; -> (runtime-error (undefined-symbol x "Name not defined."))


I did this because it was easy, seemed like a neat idea, and felt like a nice dual with delimited continuations. I didn't think much about use cases; I just thought that someone might sometimes want just a part of the environment rather than all of it, as with continuations. I figured maybe it could be used to save memory.

### A type system *could* help...

...but it's not clear that it *should* help.

Basically, the issue is that if you look inside an optimizing compiler, you'll see that it has to keep track of a lot of dimensions: when potentially creating a closure, you need to track whether the function captures any variables or not, whether the function escapes or not, whether the environment can be shared with other closures or not, whether it is only ever called in statically-known contexts, and so on.

Reflecting all of these dimensions in types is *possible*, but it's not necessarily a good idea, because it surfaces a lot of complexity to the programmer.

### I agree

But a remaining question is who decides whether it should. Is it the language designer, making the call for all programs, or the programmer, for his program?

### Terminology

I guess my point is, focusing on Lambdas as an essential component of a general purpose language, when function objects are arguably a better solution, tells me that 'lambdas' are not the correct level of abstraction for this requirement.

The correct level might be

To create objects that encapsulate a function and its associated environment or state, that can be passed as arguments to other functions to be executed.

We need a term that encompasses:

- function-objects
- lambdas
- closures

### That's a very implementation

That's a very implementation oriented point of view, though.

### Ah, I see what you're saying.

Ah, I see what you're saying. Perhaps the way to move forward is to loosen or generalize the requirement so as to make it more implementation-independent. Yes, we may have to get creative with some terminology here - although if we researched enough, I'd suppose someone has already put the terminology together for us. But maybe not :)

### It's not a matter of...

...language designer versus programmer.

If I write a library that exports a Neel-function f, and you write a library that exports a higher-order function map that consumes a Matt-function, then a client programmer who wants to write map f is out of luck. For a client to have the option of not caring about this distinction, we have to avoid making it in the first place. And this means we have a design fork:

1. First, we can accept that we may have to pay the runtime overheads for more general representations.
2. Or, we can weaken our compilation model. The main advantage of type systems is that they are very compositional, which ends up making separate compilation easy. We might choose to give this up and use a more whole-program approach, so that we can use static analysis rather than types. (Rust, C++, and the MLton compiler all make this choice.)

### If I write a library that

> If I write a library that exports a Neel-function f, and you write a library that exports a higher-order function map that consumes a Matt-function, then a client programmer who wants to write map f is out of luck.

Not necessarily. The types might track the possibility of a fast path, with a generic slow path fallback available. Or, there might be a way to insert a coercion from a Neel-function to a Matt-function.

I think your bullet 2) is roughly what I want, except I want those static analyses to integrate with the type system, so that I can make assertions and provide hints as a programmer.

### Magic type system

My magic type system would allow you to place a time or space constraint; compilation would fail if you did something that violated the constraint.

A more basic, realisable type system might impose restrictions on effects/side-effects, like "this function must not allocate heap memory".

### safe for space complexity

An explicit storage requirement, "safe for space complexity", is specified for Standard ML of NJ; see Shao and Appel's Efficient and Safe-for-Space Closure Conversion.