Peak Abstraction

Today I learned a new term from a blog post:

An interesting phenomenon I've observed over my years as a programmer is the occurrence of "Peak Abstraction" within a young programmer - and their subsequent improvement.

There is a common pattern. It typically occurs after 3-4 years of professional practice. A programmer is growing in his confidence. He begins to tackle more complex, challenging problems. And, he reflects those problems with complex, challenging data structures. He has not yet learnt the fine art of simplicity.

Every member of every data structure is a pointer or a reference. There are no simple data types. ... Those around them become increasingly desperate. They can see the code becoming inflated, inefficient, unmaintainable.

And then it happens. The programmer realises the error of their ways. They realise that they are adding neither design value nor computational value. They realise they are adding overhead and maintenance trouble. ... And thus the recovery begins. Data structures become simple, expressive, and maintainable.

The complete blog post rags mostly on perf issues, but I'm more interested in the complexity implications. I went through this myself, and to be honest, the more powerful the language, the worse my peak abstraction got. It was only when I moved to a less expressive language (C# rather than Scala) that I had an incentive to keep it simple.

Has anyone else found themselves in an abstraction trap and come down as they grew as a programmer? What can we do in PL design to avoid, or at least discourage, overuse of abstraction? Is this a case where less might be more?


More than once

I've done this with pointers, with templates, and with typeclasses. I guess it takes me a while to find the right balance when faced with a new abstraction mechanism. I did manage to avoid going macro-happy.

For a solution, what comes to mind is:

  • Reduce the role of parameterized abstractions in favor of point-free composition.
  • Separate state. Push it to the edges of the programming model, e.g. databases or abstract registers.
  • Focus on the role of abstractions as `lenses` - view and control transformers rather than authoritative models.

I believe we should favor composition over abstraction for many reasons. Doing so seems to simplify abstractions naturally: they become shallower, but with a large vocabulary and predictable, universal combiners.
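
To make the contrast concrete, here's a minimal Scala sketch (the function names are mine, purely for illustration). The parameterized version bakes a decision into an argument; the point-free version just glues small pieces together:

    // Parameterized abstraction: a one-off function with explicit plumbing.
    def normalizeThenCount(words: List[String], toLower: Boolean): Int =
      (if (toLower) words.map(_.toLowerCase) else words).distinct.size

    // Point-free composition: small functions glued by `andThen`.
    val lower: List[String] => List[String] = _.map(_.toLowerCase)
    val dedup: List[String] => List[String] = _.distinct
    val count: List[String] => Int          = _.size

    val normalizedCount: List[String] => Int = lower andThen dedup andThen count
    // normalizedCount(List("A", "a", "b")) == 2

Each piece in the second version is trivially shallow, but the vocabulary composes predictably.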

I was inspired by your recent `crazy PL idea` to explore the idea of eliminating parameters entirely in favor of juxtaposition with modifying words and phrases. I.e. a ball (noun) might indicate it can be modified by a color or material (adjective) and a location (prepositional phrase). I've always felt that there was something wrong with how traits and mixins are supported in OOP, that they should be more like adjectives or adverbs.
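
The nearest existing approximation I know of is Scala's mixin composition at the instantiation site; a toy sketch of the noun/adjective reading (all names invented):

    // The noun is a class; the "adjectives" are traits mixed in at the use site.
    class Ball { def describe: String = "ball" }
    trait Red    extends Ball { override def describe: String = "red " + super.describe }
    trait Rubber extends Ball { override def describe: String = "rubber " + super.describe }

    val b = new Ball with Red with Rubber
    // b.describe == "rubber red ball" -- modifiers stack via linearization

But this still feels like inheritance machinery rather than genuine grammatical modification.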

Modifying juxtaposition

My sadly neglected toy language is working with that idea, though I unfortunately don't have the free time or PLT/CS foundation to really build it out enough to see if it's a good idea in practice.

Necessary evil?

I'm not sure that technically programming languages should discourage the overuse of abstraction. Programmers will naturally overuse a new feature in order to gain understanding about how best to use it. Artificially curtailing that learning process is likely to lead to less skilled programmers.

Perhaps there are ways of accelerating the process?

Interesting point. As part

Interesting point.

As part of an education, sure; e.g., I think you should be able to go crazy in your PL immersion class...the one where you learn lots of programming paradigms, not the one where you learn PL semantics by writing interpreters.

But when out in the world using a language to do something productive, is it a good idea to learn where that point is on the job? I find C# liberating given that it kind of sucks at doing rich abstractions elegantly, an advantage it has over Scala, where I tend to over-abstract.

I find C# liberating given

I find C# liberating given that it kind of sucks at doing rich abstractions elegantly, an advantage it has over Scala, where I tend to over-abstract.

I agree somewhat. For instance, these days I find myself less concerned with specific collections and just writing everything against IEnumerable.
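
Sketched in Scala for contrast (a trivial example, since Scala is the other language under discussion here): accept the most general collection type the function actually needs, not a specific implementation.

    // Depends only on the ability to iterate, not on List vs Vector vs Set.
    def totalLength(names: Iterable[String]): Int =
      names.map(_.length).sum

    totalLength(List("ab", "c"))   // 3
    totalLength(Vector("ab", "c")) // 3, no code change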

Still, I think this abstraction problem is partly because we don't have a cohesive mechanism for specifying extensible abstractions without a lot of syntactic boilerplate. I think it's a human bias along the lines of "if it costs more, it must be higher quality".

I think if a language were designed which automatically abstracted over higher kinds and by default encouraged writing tagless interpreters, we could have our simplicity, extensibility and our abstraction.
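
A minimal sketch of what I mean, in Scala (the expression algebra here is a made-up toy): the interface abstracts over a higher-kinded F[_], and each interpreter is tagless in that it never pattern-matches on a data representation.

    // The "syntax" of the little language, abstracted over an interpretation F[_].
    trait ExprAlg[F[_]] {
      def lit(n: Int): F[Int]
      def add(a: F[Int], b: F[Int]): F[Int]
    }

    // One interpreter evaluates...
    type Id[A] = A
    object Eval extends ExprAlg[Id] {
      def lit(n: Int): Int = n
      def add(a: Int, b: Int): Int = a + b
    }

    // ...another pretty-prints the very same program.
    type Str[A] = String
    object Print extends ExprAlg[Str] {
      def lit(n: Int): String = n.toString
      def add(a: String, b: String): String = s"($a + $b)"
    }

    def program[F[_]](alg: ExprAlg[F]): F[Int] =
      alg.add(alg.lit(1), alg.lit(2))

    // program[Id](Eval) == 3;  program[Str](Print) == "(1 + 2)"

New interpreters (or new operations, via a new trait) can be added without touching existing code.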

Always Learning

Are you advocating not learning on the job? That seems a very odd position to take.

I'm currently using Scala a lot, and being relatively new to it (about 1.5 years now) have yet to develop a strong sense of style. To accelerate this process I am deliberately using every new trick I find (currently on an implicits / typeclasses bender). Does this make me a bad programmer? I don't think so -- I'm aggressive at refactoring out past mistakes as well. In fact I think my employer should be pleased I'm taking such direct steps to improve my proficiency. And as I am my employer, I can tell you with some authority that they are.
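
For concreteness, the sort of thing my current bender produces (a standard-issue sketch, not code from my actual project):

    // A typeclass encoded as a trait plus implicit instances.
    trait Show[A] { def show(a: A): String }

    object Show {
      implicit val showInt: Show[Int] =
        new Show[Int] { def show(a: Int): String = a.toString }

      // Derived instance: a list is showable whenever its elements are.
      implicit def showList[A](implicit s: Show[A]): Show[List[A]] =
        new Show[List[A]] {
          def show(as: List[A]): String = as.map(s.show).mkString("[", ", ", "]")
        }
    }

    def render[A](a: A)(implicit s: Show[A]): String = s.show(a)
    // render(List(1, 2, 3)) == "[1, 2, 3]"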

My last job more than 4

My last job, more than 4 years ago, was all about Scala, so perhaps going overboard with abstractions was useful; I always managed to break the compiler anyways, but my Scala style was ultimately declared illegal. There was definitely a cost that came when other people wanted to use my code, since it was abstracted in a style they weren't used to or were even hostile toward. It turns out that in Scala, if you want polymorphism, you should do mega-case matches rather than use virtual methods (given my OO background, I used virtual methods, drats). My abstractions ultimately led to my downfall.
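
For anyone who hasn't fought this particular battle, the two styles look roughly like this (toy shapes, not the actual code in question):

    // Virtual-method style: behavior lives on the hierarchy. Adding a new
    // shape is easy; adding a new operation touches every class.
    trait Shape { def area: Double }
    final class Circle(r: Double) extends Shape { def area: Double = math.Pi * r * r }

    // Mega-case-match style: behavior is one function over a sealed hierarchy.
    // Adding a new operation is easy; adding a new case touches every match.
    sealed trait Shape2
    final case class Circle2(r: Double)    extends Shape2
    final case class Square2(side: Double) extends Shape2

    def area(s: Shape2): Double = s match {
      case Circle2(r)    => math.Pi * r * r
      case Square2(side) => side * side
    }

Neither is wrong; they just trade off which kind of extension stays cheap, and the team had settled on the second.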

There is a price to be paid when learning on the job. If you are your own boss or are working alone, it's not so bad. It's when other people need to use your code that the problems begin.

Human not technical

It's when other people need to use your code that the problems begin.

This is a management problem, not a technical one. If you're all working in your own little silos, of course there will be issues. Yesterday we pair-programmed the heck out of some applicative functors. Nobody is scared of abstraction in our office.

I think modern Scala style has evolved a bit from massive pattern matching.

Abstraction-poor language features

Perhaps, when a language feature is itself less abstract (or, perhaps, less abstractive) than it ought to be, it tends to exacerbate this problem. I'm seeing pointers and macros used as examples here, and I consider both of those to be forcing the programmer to work with more concrete notions than ought to be needed (let the compiler futz with it). However, exacerbated by the language or not, I do recognize this phenomenon in programmer development; I recall at least once coming down with a (thankfully brief) case of the ailment myself, once upon a time. (What I'd prefer to call it, I'm not certain; we seem to be oversupplied with meanings for the word abstraction.)

Abstraction-poor

I don't believe that being poor at abstractions causes this problem. At least, not by itself. I doubt we'll ever see a Brainf*ck programmer with a case of Peak Abstraction.

Alternatively

:-) Alternatively —or equivalently?— perhaps Brainf*ck induces this ailment in all its users. What would a case in that language look like?

The thing that most caused

The thing that most caused me to overcomplicate and overabstract things in the past was planning for future changes, reusability, and extensibility. There is a general tendency against modifying code: rather than changing existing code, we want to design it so that we can just add new code to adapt the program to new or changed requirements. The worst offenders are Java frameworks that revel in the awesomeness of decoupling and the ability to change the workings of an application by piling on extra code and "just changing a line in a configuration file".

Now I try to focus on writing the best and clearest program for solving exactly the requirements at hand without regard to future changes, extensibility or reuse. When new requirements come in you shouldn't be afraid to modify existing code to make it the best code for solving that new set of requirements. This is better than adding code to an "extensible" codebase which as a whole is clearly not the best code for solving just the problem at hand. Every line of code should be adding value towards the goal. Because this strategy keeps code small and clear this is a win even when you do have changing requirements. Only make something reusable when there is evidence of duplication or if there is evidence it will actually be reused.

Reusable, Extensible, Modular, Type Safe, Stateful

I believe there are pressures that lead developers to ramp up the level and complexity of abstraction. Some features - static type safety and separate compilation and runtime plugins, for example - interact in a way that requires a more complex framework of abstractions to support a desired level of flexibility.

As I described earlier, I think parametric abstraction mechanisms are especially vulnerable in this regard. And other abstraction mechanisms - objects especially - often seem to be `closed` after construction. One way to reduce both problems is to favor composition.

Stateful abstractions try to be authoritative, which also creates pressure for more complex abstractions to avoid diamond problems - i.e. where we have two views of the same abstraction that each try to hold authoritative state, and thus have difficulty ensuring consistency. We can reduce this pressure by keeping state external to language abstraction.
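
A small sketch of what I mean by pushing state outward (`Store` is a hypothetical stand-in for a database or external registry, not a real API):

    // One external, authoritative store...
    final class Store {
      private var balance: Int = 0
      def read: Int = balance
      def write(n: Int): Unit = { balance = n }
    }

    // ...and views that hold no copies of their own. Neither view can
    // diverge from the other, so there is no diamond to reconcile.
    final class SummaryView(store: Store) { def render: String = s"balance: ${store.read}" }
    final class AuditView(store: Store)   { def render: String = s"audit: balance=${store.read}" }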

Anyhow, I think the problem (or at least the pressure for the problem) could be predicted by some analysis of feature interactions in the language.

One way to reduce both

One way to reduce both problems is to favor composition.

Yes, you only have to figure out, in exponential time, the parameter configuration that expresses the correct coupling or composition of your components (stateless or not). Or, even better, rely on know-how: ask the system guru who has figured out one in the past that actually worked, and let him adapt the configuration file, or the Perl script that creates the configuration file from other data of the same kind.

Oh ... did I just describe The Essence Of Enterprise Programming?

I'm not sure what you just

I'm not sure what you just described. But it certainly wasn't composition.

If I were to hazard a guess, you describe discovery and configuration problems for an environment with dynamic resources. I'd note such problems exist regardless of how you `assemble` your application, and that composition does not imply such an environment. I wonder what connotations led you to leap from `composition` to an Enterprise system.

Pointers and macros are not instruments of abstraction

I resent the use of the word 'abstraction' for what is merely an increase in indirection and complexity.
Using the word 'abstraction' to describe this phenomenon gives ammunition to those low-functioning developers who constantly argue to avoid 'abstraction' and always do the most simplistic thing possible.
In fact, proper abstraction will almost always make a system simpler and easier to comprehend and maintain.

Resent it or not

Macros are a mechanism for syntactic abstraction, and pointers are a powerful tool for structural abstraction (and interface abstraction, with function pointers).

Pointers do have the undesirable side-effect of adding complexity and indirection. But I doubt you'll find any `perfect` mechanism for abstraction. (I've certainly not found one, and I've been looking.)

proper abstraction will almost always make a system simpler and easier to comprehend and maintain

No true Scotsman would believe you.

The challenge is identifying `proper abstraction` without relying on hindsight.

Proper abstraction

I define proper abstraction as any tactic used in creating software that makes the result less dependent on the implementation details of its functionality.
Using the example from the blog post - if these programmers had put their code behind some 'proper' interfaces then they would not have been (too) concerned about the increasing complexity of the implementation.
I know, in advance, that using an interface to encapsulate behavior is a good thing.
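
To make that concrete, a minimal sketch (all names invented): the interface fixes what callers may depend on, so the implementation behind it can grow as complex as it needs to without spreading that complexity.

    // Callers depend only on this.
    trait NameStore {
      def get(id: Int): Option[String]
      def put(id: Int, name: String): Unit
    }

    // The representation (map, array, database...) is free to change.
    final class InMemoryNames extends NameStore {
      private val names = scala.collection.mutable.Map.empty[Int, String]
      def get(id: Int): Option[String] = names.get(id)
      def put(id: Int, name: String): Unit = names(id) = name
    }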

Proper `Interfaces`

Great. You've regressed the fallacy.

I know, in advance, that using an interface to encapsulate behavior is a good thing.

Sure. But which interface? Which details or behaviors should be replaceable? Types? Dependencies? Data storage? Concurrency assumptions?

Answering such questions too eagerly is where `Peak Abstraction` issues start arising as we provide solutions without a problem. Answering them too lazily results in problems without easy solutions, but at least we'll be working on real problems.

Simple isn't easy. Very few people can walk the tightrope between simplistic and complicated, and they get there through experience rather than formula. What we can do is provide languages with contoured guide rails (a path of least resistance) and IDEs or module systems that accelerate the education of our developers.

No shortage of problem here

I'm acutely aware you and I tend to disagree explosively, if we're not careful. I will remark though —for perspective— on where we appear to part company on this.

I maintain that abstraction is a difficult problem. A really, really difficult problem. The sort that can take centuries to crack. The "Peak Abstraction" phenomenon isn't about a solution without a problem; it's a programmer attempting a doomed solution to a problem they've vastly underestimated, using tools that aren't up to the task. Frankly, anyone who thinks they know the ultimate answer to this monster has underestimated it (it's reminiscent of Feynman's comment about people who claim to understand quantum mechanics). I've certainly not claimed to have the ultimate answer (I'll settle for landing a few solid punches on the thing). And I worry, from your comments on this thread, that you might be caught in the trap of believing you've got The Answer — because that would aim you directly into a more mature version of what, as a temporary phase for junior programmers, manifests as "Peak Abstraction". Underestimating the problem leads to trying to tackle it head-on, and one way or another, that doesn't end well. Growing out of "Peak Abstraction" amounts to recognizing that monster can't be taken down in one fell swoop.

Peak abstraction identifies

Peak abstraction identifies a phase many of us go through as programmers. It happens when our technical skills have developed a lot but our wisdom still remains low. More wisdom basically means knowing that we don't know much, or something like that. Once we realize that the "right" answer is unobtainable, we can settle for something good enough, or at least adequate.

The opposite of peak abstraction might be Joel's "Duct Tape Programmer," someone who hacks programs together with minimal abstraction...just get it done dammit and don't think too deeply about it.

Our languages shape our thoughts. Dynamic languages, and Perl in particular, encourage us to be duct tape programmers; Haskell might tempt us toward peak abstraction.

I'd say that it's not that

I'd say that it's not that some languages tempt us to use more abstraction than we should, but rather that some languages make higher levels of abstraction counterproductive. That is, the ideal abstraction depends on the language.

Abstraction problem

If, as I believe, Peak Abstraction is caused by a confluence of interacting pressures from external sources - for type safety, modularity, extensibility, configurability, runtime upgrade, parallelization, consistency, security, mobile code, etc. - then your initial level of `abstractive power` will not factor into the phenomenon.

Even if you have `infinite abstractive power`, the first thing that happens is some system architect trims the language, curtailing abstraction to simplify or enforce relevant architectural concerns. I understand this and think abstractive power is a rather volatile feature. The critical problem is developing a sane architecture to answer very real challenges, and a common architecture that won't splinter the development community with incompatible libraries and modules. But, putting that quarrel aside, most regular developers are cut down to some set of abstraction mechanisms left by their lord and architect.

And it is that layer - above the architecture, below the libraries (or building architecture compatible libraries) - at which the `peak abstraction` phenomenon will continue to occur. Developers will stretch their abstractive muscles against the chains and constraints of their systems. And they will learn, eventually, to balance such efforts. If the architecture is well justified, they may learn even to appreciate their shackles.

Does John Shutt watch out for the omnipotent architect while shunning the little people? Sometimes, while he waxes on about abstractive power or represents abstraction as both problem and solution for so many software development problems, it seems this way to me. But he hails from Lisp culture; he probably just assumes every little person is an architect.

The real abstraction problem is finding the `right` abstractions - those at the intersection of what we want to express and what we want to reason about or protect. Power is a means to that end. Finding the right abstraction for a broad range of domains and multi-disciplinary problems will certainly take centuries, perhaps forever, as we encounter more domains, more problems, and we continuously lose lessons of the past.

I see two interesting issues

I see two interesting issues buried in there. There should be some way to initiate a civil discussion of them in this forum; if I think of an approach that might work, I'll try it.

A while ago Jacques Carette

A while ago Jacques Carette complained about the "lack of tangible content" on LtU, and he might be right. My impression from the more entertaining discussions here is that the concepts are vague and I can only index them by the names of participants, such as John Shutt's abstractions or David Barbour's compositions, both of which have a certain mystique. The apparent need to use the most general terms in order to put the most specific and delicate meaning into them somehow amuses me.

The hermeneutic exercise, in the case of the referenced article, would have been to figure out the use of the word "abstraction" by someone who is at least familiar with Haskell and works professionally with C++ on games, as his blog indicates. There is some quite explicit criticism of Boostisms, or "modern C++" style, which might also be the favorite of those LtU readers who encounter C++ (which is not a joy, but it might happen). It's also somewhat funny that everyone feels completely unrelated to the sort of stuff the author is doing and using.

Mystique?

My use of `composition` is one I learned from a textbook. Operands (A,B) are `composed` by operators (+,*) to produce `composites` (A+B, A*B), which are generally available for further composition (e.g. A+(B*A)). Compositional properties are also easy to understand, referring to inductive reasoning. If P is a compositional property (or set thereof), then P(A*B) = f(`*`,P(A),P(B)) for some known function f.
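
A tiny illustration of such a property (the `Expr` type is invented for the example):

    // Operands composed by an operator into a composite.
    sealed trait Expr
    final case class Lit(n: Int)           extends Expr // operand
    final case class Add(a: Expr, b: Expr) extends Expr // `+` composite

    // `depth` is compositional: P(Add(a, b)) = f(`+`, P(a), P(b)).
    def depth(e: Expr): Int = e match {
      case Lit(_)    => 1
      case Add(a, b) => 1 + math.max(depth(a), depth(b))
    }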

I do find it mysterious how you moved from `composition` to `Essence of Enterprise Programming`.

John's use of "abstraction" is also simple, at least from a high level. He describes abstraction as movement from language L to L'; e.g. if we add a function, that can be understood as extending L with the function name to result in L'. But it would equally be abstraction if we hid an existing function name.

Granted, his is incompatible with every definition of `abstraction` I've read before (e.g. from John Locke, SICP, or Wikipedia). And I'd prefer he find a different word for the concept of language manipulation. But it's not mysterious, and I'm used to dealing with operational definitions.

Regarding the proposed hermeneutic exercise, I'm already a regular user of C++ and Haskell. But I think the interesting part is not the original article, but rather Sean's questions and the ensuing discussion.

Fwiw, that's actually not my

Fwiw, that's actually not my definition of abstraction. To be what I'm talking about, the transformation from L to L' has to be effected by means of facilities of L. L' isn't just a successor to L, it is pulled out of L, where it had been latently present in its entirety before it was brought out into the open from the language matrix in which it had been embedded, by means of a program text — that is, L' is abstracted from L. Looking at John Locke's account of abstraction (you can find Locke's account at the top of Chapter 1 of the Wizard Book (and the top of Chapter 1 of my dissertation)), the information-conservation aspect of abstraction is explicit. One interesting property of the PL transformations I'm looking at is that it's entirely possible to start with a language L and, by a series of transformations, end up back at L. Can that be "abstraction", in which at each step something from the old language is omitted? Well, yes, of course it can. Georg Cantor studied sets that can be mapped one-to-one into proper subsets of themselves. Of course, he also went crazy.

[edit: typo]

Locke's definition can be

Locke's definition can be given a shallow and a deep meaning. The shallow meaning might be the one that is now convenient and has been established by set theory. An abstraction is a separation of elements in a set-building process, together with a name that can be used to denote the set as a whole as well as any of its elements without referring to any one specifically: the set S and the S-element.

The deep meaning would refer to a process of embryogenetic complexity: either by means of an epigenesis, which is a very general concept that needs to be given a specific interpretation to make any sense (it actually failed in Locke's time for that reason; Descartes' animal spirits weren't convincing), or through a homunculus, which is a preformed human being inside an ovum or a sperm: ovist or spermist, it's your choice!

A third meaning is the one used by the author of the "Peak Abstraction" article. He looks at the compiler's assembly code and finds sequences of lwz, which just indicate loads from memory; in other words, code bloat and potential cache misses. So there is some sort of absolute ground, and abstraction means that we cannot say how near or far we are from that ground. This is supposed to be good, but nothing could be less clear. The airy abstraction as a form of degeneration.

Depth

(I recall a colloquium speaker some time back remarking that AI is the philosophy of computer science — it tackles problems that nobody knows what to do with, and when somebody figures out what to do with one of them, it ceases to be considered AI.)

Hm. There is, viewed in those terms, something nearly biological about the emergence of L' from L via program text T.

The abstraction in "Peak Abstraction" seems open to two interpretations. A "shallow" interpretation is "distance from the bare metal"; it's not irrelevant, but it is "shallow" in the sense I think you mean. A second interpretation is in terms of the program text rather than the underlying machine behavior (the description rather than the described), which can still sound a bit superficial but taps into the deeper emergence effect I've been studying (L to L' via T).

Excerpts from footnotes in my dissertation:

When relating the principle of abstraction in programming to the same-named principle in metaphysics, we prefer Locke's account of abstraction over that of other classical philosophers because he casts abstraction in the role of a constructive process, not because we have any essential commitment to Locke's conceptualism. As computer scientists, we don't care whether our programming languages exist in reality (Plato), in concept (Locke), or only in name (Ockham), as long as we can design and use them.

Regarding the longevity of the philosophical positions, [W.V.] Quine (himself a logicist) observes in "On What There Is" (From a Logical Point of View, 1961) that the three positions on the foundations of mathematics correspond to the three medieval positions on the existence of universals (i.e., roughly, on the existence of abstractions). Quine's analogy would group Plato with Frege, Russell, and Whitehead; John Locke with Brouwer; and William of Ockham with Hilbert.

Hm. There is, viewed in

Hm. There is, viewed in those terms, something nearly biological about the emergence of L' from L via program text T.

The closest idea I have of such a transition in our domain stems from the brief episode, at the high time of the design-pattern mania, when pattern languages were envisioned - before the crowd started to talk about Ruby's goodness at creating internal DSLs, which dumped such pattern languages into the dustbin of history. In Ruby's case this is achieved not through a macro system but through a flexible disposition of its basic syntax. All of this was shortly before powerful type systems, rigor, and functional programming were dedicated to saving our minds and souls. That's where we still stand.

I understand your comment is

I understand your comment is sarcastic, but there is still a bit of chronological shuffling here; powerful and rigorous type systems began saving our souls well before the Ruby people started their noisy rock bands, and we already had "internal DSLs" as punning uses of liberal syntaxes; one example jumping to mind would be "cosmetics" in Danvy's functional unparsing work -- granted, this is 1998, while Ruby was created in 1995.

PL design strategy — and sarcasm

Back in 1969, Douglas McIlroy observed at an extensible languages conference that there were two extremes of thought on programming language design represented at the conference — anarchism and fascism. He had good and bad to say about each. Finding this terminology convenient, I adopted it in a blog post a while back, and this was subsequently described as invoking Godwin's law. Which put me in an awkward position — I couldn't tell whether the description was in jest. (One is put in mind of the poem (due to C.L. Jordan) that ends "I didn't keep count of the times I awoke. // Now am I asleep or awake?")

[edit: fixed Godwin's law link]

You are right, but those

You are right, but those dates and undercurrents don't tell a story that must necessarily be told. The proper and accurate history is chaos, an accumulation of facts, picturesque details and trivia, shattered hopes and things that never made it although they should have.

I often like to stick with the popular culture in a love-hate relationship. What fascinates me is the adoption of practices by the hive mind. Both Ruby and Haskell are elements of a universal pop culture of programming which unfolds on the web these days. In the popular history, Haskell peaked around 2007, shortly after Ruby, which peaked in 2005. Both had already existed for more than a decade by then, and could have gone on much longer.

I have an aesthetic distaste for a "saving the world" attitude, for missionaries chewing on my ears, but at the same time I enjoy the loudness and vitality of programming language pop and its recreational joys.

Point taken. I'd then say

Point taken. I'd then say that the second source of enjoyment is to "stand right against the crowd of ignorants" by having a better historical picture: "the JIT techniques you think are new date back twenty years!". Besides the ego boost, this also has practical benefits if you want to do things right instead of investing tons of effort in repeating old thought processes.

Of course, smartassing is an

Of course, smartassing is an important part of the game. Everything comes together. Before Haskell became pop, academic computing science was a form of social criticism, and we bothered listening to grumpy old men warning the youth about making mistakes and telling and re-telling the story of the unlucky Ariane 5 from past centuries. They also favorably quoted the Standish Report no one ever read. After that, the teenagers wanted to know what was new, hot, and cool in category theory. The street has fully appropriated it. This is a phase shift for me, not some lonely PhD having written a long-forgotten paper in ancient times.

By means of

Apologies. I do remember that restriction, and tend to think it implicit when speaking of languages. Guess I wasn't as precise as you desired, though.

FWIW, it isn't always clear to me where `language` stops when you speak of it. In our previous discussions you have argued language and abstraction to extend to open systems: to runtime object capabilities, and to the entire Internet of URLs and all discoverable artifacts.

that is, L' is abstracted from L

That's one leap I have yet to follow. By which definition? Can you point me to a line item in a dictionary? Would another word or word phrase better connote your concept?

Many people use abstraction as a noun, not a verb. That's certainly where we reach `levels` of abstraction. Where, in your model, would they apply such understanding of the word?

One does tend to think of it

One does tend to think of it as implicit... when one is assuming it. It's a bit like forgetting to mention that string concatenation is associative (which I've caught myself doing a time or two) — it's obvious, but it's also what gives a monoid its interesting properties. In this case, the by-means-of clause in the definition is (as noted in another branch of this subthread) key to relating my work on abstraction to the passage from Locke. So the point seemed immediately relevant.

The 'levels of abstraction' metaphor is less about parts of speech, I think, than it is about what is being abstracted from. What I mean —I'm struggling to articulate this as I write, of course— is that abstraction is a leaving out of some information (or a drawing out of some information from a larger matrix in which it had been embedded), but in the case of programming abstraction, you get two different notions of abstraction depending on what you believe to be the nature of the information involved. If the information is about the behavior of the computer, you get a notion of abstraction as distancing from the hardware. If the information is linguistic, you get something more like what I'm pursuing. These two converge, more or less, when one takes a linguistic view of describing the behavior of the hardware — which is why "levels of abstraction" in the distance-from-hardware sense shades over into "levels of abstraction" in the distance-from-base-language sense and then further into "layers of abstraction" in the progressive-language-transformation sense.

A thoughtful craftsman blames his tools

I've experienced this phenomenon before, too, but often concluded the problem lies in a limitation of the language. Sure, over-generalization can be harmful, but there is an important distinction to be made between abstracting to a more complicated model to cover more cases and abstracting away from irrelevant details. Simplifying assumptions that make algorithms easier should be encouraged (e.g. working in Euclidean 3D space rather than an N-dimensional manifold).

The fact that the original post contains the quote "every member of every data structure is a pointer or a reference" tells me this is more a case of the language making the programmer attend to the irrelevant. An experienced C programmer may decide in some situation to just use an array of 80 characters for a name instead of introducing a pointer to some abstract string type. Fine, but he should also assign blame to C for making him choose between simplicity of expression / performance and the proper level of abstraction. It's important to understand and work with the limitations of your tools, but not a good idea to accept those limitations when building or choosing new tools.

Blaming the Language

You said this with a great deal more pith in an earlier discussion, but I guess such opportunities don't arise very often.

I agree. We can blame our languages when they force early binding of a choice that is not justifiably relevant at the point of decision. Similarly, we can blame our tools if they offer an illusion of choice, or make bad decisions subtle.

Yet, I don't feel this answers the Peak Abstraction issue. It will still take experience to decide between `working in Euclidean 3D space rather than an N-dimensional manifold`, though you could always write both algorithms and dispatch as necessary.

It seems some tools avoid the problem, but that often seems to have more to do with the lack of `open` modularity (opaque libraries, plugins, services), or a lack of static type safety, than with the abstraction model. So where does the blame lie in such cases? Should we blame our tools for forcing us, in our designs, to acknowledge features that we will almost certainly later find valuable?

Support for future abstraction

Mainly, my point was that there is often a natural level of abstraction present in many problems: the level of abstraction that has all of the information needed to solve the problem in the most natural way possible, and no more. Supporting this level of abstraction is usually a good idea, IMO. When you start considering abstractions that make things more general but also harder, you start to encounter real trade-offs that aren't induced by your language.

For example, consider writing a simulation that relaxes the assumption of the world being 3D Euclidean space down to locally Euclidean (not as extreme as a general n-dimensional manifold). This generalization might be useful in games for supporting "portals" that connect different parts of the world, for making buildings that are bigger inside than out (to reduce travel time while outside), and for other unworldly things that are sometimes useful in games.

There are a number of ways that using such an abstraction would increase the complexity of your code, mostly stemming from there no longer being one line between two points (consequently you can see things in two places, etc.). This means a certain hygiene is necessary throughout your code, and also some algorithms get substantially harder.

If you start without this feature and decide to add it, you've basically got to revisit a bunch of code. However, if you decide up front that you might later want the feature but don't want to pay for it initially, a reasonable approach is to write everything in a way that doesn't assume global Euclidean space, except code that explicitly invokes that assumption. With a little care, you can have your compiler/IDE tell you exactly what places you have to fix to remove that assumption.
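
A sketch of that approach (all names hypothetical): funnel the assumption through one explicitly named function, so "find usages" enumerates exactly the code you would revisit.

    final case class Point(x: Double, y: Double, z: Double)

    object GlobalEuclidean {
      // Every call site of this function is code that bakes in the
      // assumption of one straight line between any two points.
      def distance(a: Point, b: Point): Double = {
        val (dx, dy, dz) = (a.x - b.x, a.y - b.y, a.z - b.z)
        math.sqrt(dx * dx + dy * dy + dz * dz)
      }
    }

Code that avoids `GlobalEuclidean.distance` remains valid in a portal world; code that calls it is the list of places to fix.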

So, with complicating generalizations (such as moving away from global coordinates), there is always a price to be paid, and deciding whether the generality benefits of the abstraction outweigh the benefits of the simplifying assumption is indeed a judgment call. But I suspect most of the reason we have so much of a problem with too much abstraction is that programmers feel the pull to make their code naturally general ("I shouldn't have to make this assumption here...") rather than programmers introducing arbitrary abstractions ("I'm going to make my game engine support n-dimensional space, just in case"). And it's the former case where languages force programmers to turn to design patterns and extra indirection to achieve things that really could have come for free. I'm generalizing a lot here, so feel free to point out counterexamples to my thesis.

Natural level of abstraction

Does such a property truly exist? I mean, as part of `the problem` rather than our subjective intuitions and the process of software development. If it does exist, is it accessible?

I'm not inclined to a positive answer. My experiences with requirements analysis have only ever involved incomplete and evolving requirements of dubious consistency. Even identifying `the problem` can be a problem.

I agree with your statement, that there exists a level where further abstraction only makes things more difficult for us. But I posit: if this level were easy to recognize - without hindsight, for developers at the beginning of their career - the Peak Abstraction phenomenon would be rare. (Premature generalization is common, but what makes it addictive is that it's also sometimes useful.)

Well, I suppose discipline could avoid Peak Abstraction. YAGNI and DTSTTCPW both encourage developers to err on the side of `simplistic`. Which seems to be generally good advice in contexts where they won't get stuck with that decision. (I'm not convinced of its applicability to API or framework development.)

Natural for the solution, not the problem

I don't mean that there is one appropriate level of abstraction for a problem. I mean that there's a natural level of abstraction for a solution. Even then this is a fuzzy statement, not a hard one.

O Language Designer: Don't

O Language Designer: Don't give me the power to abstract; give me the power to simplify. If the latter requires the former, then so be it. But keep your eyes on the prize!

Well said.

Well said. I agree. Though, simplicity is not the only prize I want.

pretty much, yeah

Subject to what "power to abstract" means (tricky, that), I believe I also agree. To oversimplify remarkably little, abstractive power is a means to simplicity — not sufficient, but in the long run necessary (I've no serious doubt you'll disagree with that part :-). Interestingly, simplicity also seems to greatly facilitate abstractive power; so there's potential for synergy there.