How can languages help us in terms of achieving correct program design?

We talk a lot about how languages can help us with correctness of program execution, but why don't we talk about how languages help us with correctness of program design?

With all the rework that goes into making a non-trivial system well-designed, I'm bummed this subject does not receive much treatment - especially in terms that non-PhDs like me can understand.

Any thoughts?

Maybe make PLs harder to use

If it were more difficult to compose separately developed libraries and knock out "glue code", and if units of source beyond a certain size were more difficult to manage, then program design might improve. The speculative reason is that people would then spend more time trying to get things right in the first place.

Along the same lines it might help to get rid of overly fancy IDEs and debuggers.

think to program

That is called "think to program" as opposed to "program to think." The idea being that computers, programming languages, and programming environments are horrible tools for solving problems ("thinking"), and that we should just do everything in our head, on paper, and on whiteboards, until the problem is solved and then we can just fill out our punch cards and be done with it. Compare to the waterfall model.

In contrast, the cybernetic "program to think" philosophy is that computers exist to help us think and augment our intelligence. In that case, computers are seen as problem solving devices, where the program is constructed gradually via interaction with the computer (including debugging, fancy IDEs, Google, Stack overflow, GPS units in your car, and so on). Compare to agile.

You'll find zealots on either side of the fence, but I find myself doing both in spurts.

PLs are simply tools.

A programming language is a tool to manage complexity by allowing us to spread out a problem-solving process so that we can consider and correct one small part of it at a time. The same can be done with notebooks and pens, and this is in fact what al-Khwarizmi, Pāṇini, Euler, etc. were doing when they launched what would eventually become our discipline.

Computers are, very literally, power tools for working with information. Like any power tool, they allow you to make bigger mistakes faster than you could with hand tools. Or, once you've managed the complexity of a good solution to a problem, they can be used to speed and automate the operation of that solution.

The relationship between thinking and programming does not, in my estimation, clearly make either subordinate to the other. In fact, thinking well is a prerequisite to programming well, and the practice of programming is excellent training for disciplined thought.

And the computer as a power tool, of course, allows the creation of solutions for these problems to become a specialization whose benefits can be conferred on those who do not actually do it for themselves, in the same way that power looms made patterned, woven cloth available as an exchange good for those who neither created the patterns nor wove the cloth.

programming is thinking

Many of us believe programming IS thinking. That is, the computer is just an extension of the mind when we are solving problems. It is like a pen and a piece of paper: some of us are boosted by the ability to write things down, and if left to our own head, would quickly get bogged down and not make any significant progress.

Everyone gets that a computer is a tool, that it has the ability to execute the programs we write. What we are then debating is whether the computer has a substantial role in helping us write those programs (e.g. via fancy debuggers and even live programming environments).

scope tradeoff between narrow focus and broad perspective

That is, the computer is just an extension of the mind when we are solving problems.

As you also say, it's just like using pen and paper: tools focus your mind on what you see using that tool, so you pay less attention to irrelevant extraneous ideas floating around in your mind. It's great when a problem being solved is represented directly by the tool, so instead of trying to hold the image in your mind, it's already there; then you can think about related things like why it's wrong, or what it needs to connect to now, or where a gap appears when something is missing.

But using a tool is a distraction when it doesn't (or can't) represent a thing you need to consider, and you are trying to get a handle on what that is. Negative space is hard to consider when using a tool, unless it happens to highlight the missing thing directly (oh, I forgot to define foo(), so I have a link error) because the part that normally goes in this spot isn't there yet. Then a tool works well as a todo list showing you what's finished and unfinished.

Tools don't help much with the question, "What should I be doing?", if the thing you aren't doing yet (your unconscious mind keeps needling you) does not appear as a thing present or missing in a roster of stuff shown by a tool's todo list. Among other things, a tool usually implies it is necessary, so it's hard to ask what other tool you should be using instead. If the tool is not broken, it's not telling you what to fix.

Once you come up with a specific wish like, "If only I was able to do xyz," you can solve the problem of how to get there. But a thought process leading to that wish, before you articulate it, is not especially helped by using a tool unrelated to that domain, because each existing tool keeps you on track thinking about abc instead. After you get really familiar with abc, maybe you notice it's not satisfying something that's missing, but what is it?

A person captivated by the screen of a smart phone is a perfect example of focus on what a tool shows, instead of daydreaming about something else. Whether it matters depends on what a daydream might have yielded. Maybe nothing (pretty normal) or a new useful agenda (less common). Focusing stops you from asking yourself what you should focus upon. Is that close to tautological?

In the first ten years I spent programming, I had to see code in front of me to be fully productive. Before hitting twenty years, I reached a point where picturing it in my mind's eye was exactly the same, modulo the focus aspect, so fixing a problem was best done at a screen, but invention best done away. After thirty years, now I find looking at code on a screen is like wearing blinders, to the extent it shuts out what I might visualize instead: a faint montage of all other screens I could never gaze upon at once, like reviewing plots of a favorite novel instead of reading a page.

we called it "design at the keyboard"

I think what I'm talking about is orthogonal. I'm not saying "think to program". I have nothing against "program to think". (To me, it feels like overly fancy programming environments get in the way of "program to think".)

To me, it feels like overly

To me, it feels like overly fancy programming environments get in the way of "program to think".

Weren't there people in the 70s saying "it feels like overly fancy CRT terminals get in the way of writing programs" (I believe Dijkstra, but I can't find the quote online)? Every generation there are laments that the new fancy tools are just distractions and the old ways are better, then those people retire and we forget the laments (to then make up new ones of our own as even newer tools come out).

re laments

Weren't there people in the 70s saying "it feels like overly fancy CRT terminals get in the way of writing programs"

The version I got from a mentor in the 1980s lamented the use of full screen rather than line editing (on a TTY). He walked the talk, and a convincing demonstration that he had a point was simply to watch him code using a (fairly souped-up) line editor. For a few months I tried out his editor and approach and I think he was basically right.

Some of his professional peers of similar age were making noise to the effect that symbolic debuggers were crossing from peak ROI to diminishing returns on adding new features. These programmers didn't say much about "think to program" but they were big on "debug with your brain, not a debugger". If you couldn't grok your program well enough to get by with ADB (a crude machine-level debugger) then you had bigger problems already than just the particular bug you were trying to fix.

"then those people retire and we forget the laments"

Every piece of consumer-market software I come across these days is complete shit compared to the level of quality I grew up with. The main accomplishment in software engineering seems to have been to raise the general level of tolerance people have for flaky, awkward software. Regarded as social infrastructure, the recent generations have created an absolute disaster area (cf., e.g., recent events re DPRK). Regarded just as quality of life, recent generations have created a steady source of gratuitous stress thickly smeared over all of society. It is a biblical-scale catastrophe.

Right, in 60s it was wires,

Right, in 60s it was wires, in the 70s punch cards, in the 80s line editors, in the 90s text editors, in the 00s IDEs, in the 10s who knows what yet. And the line always pushes forward, ADB, GDB, graphical debugger, time machine, ....

There is 10 times more software today than 10 years ago; we have been able to write programs that can do tricky soft landings on Mars; we build DNNs that understand spoken input magnitudes better and it's actually usable now; Google chews through billions of documents a minute.

The web browsers of today are definitely better than those of 30 years ago; consumers have access to a huge diversity of apps in addition to the old standbys which have hit peaks in their evolutions; computers are for people, not just for geeks now. We are currently living during a software Renaissance; I miss nothing from the 80s or 90s.

nitpicking

<nitpicking>The web didn't exist 30 years ago</nitpicking>

I was going to disagree with the bit about missing nothing from the 80s or 90s, but the only old things I could think of that I miss are BASIC and Turbo Pascal — from the 60s and 70s.

I knew that, it was kind of

I knew that, it was kind of the point.

stop making stuff up (re "in 60s")

"Right, in 60s it was wires, in the 70s punch cards, in the 80s line editors, in the 90s text editors, in the 00s IDEs, in the 10s who knows what yet."

That is not true.

There is no such pattern, decade after decade.

In the 1950s and 1960s, increasingly higher level programming languages and environments were pretty much universally recognized as advances. By the 1970s there was still skepticism as to whether and when higher level languages would make assembly language hackers extinct but nobody was lamenting the fact it had become possible to knock out an OS kernel and basic user space using a small team working in C. You could (and can) find old guys who will point out the virtue of pencil and paper, especially when debugging, but this is generally not formulated as a fatal criticism of time-sharing, interactive editing, or symbolic debugging.

"There is 10 times more software today than 10 years ago; we have been able to write programs that can do tricky soft landings on Mars; we build DNNs that understand spoken input magnitudes better and it's actually usable now; Google chews through billions of documents a minute."

Look at what you just did there. All of your examples are outliers: atypical examples of how software is developed today. You named mars lander software with an extremely high ratio of labor hours to lines of code. You named speech recognition software that has incrementally improved over decades but whose performance has improved by leaps and bounds thanks to the externalities of better training opportunities in the modern world. You named search software that (hype notwithstanding) is pretty old school and again has an exceptionally high ratio of labor to code.

Here's what's more typical: Last night I had a nice dinner out with someone who not once but twice disgustedly threw aside their "smart" phone because flaky software got in the way. In the preceding couple of days I'd had to do cold reboots on a couple of consumer devices that -- although they have only very simple tasks to perform -- pretty reliably crash or flake out badly if used continuously for too many hours at a time. I use a commercially supported OS and desktop environment on my workstation which, likewise, is only good for so many hours before it flakes out. That desktop environment is a neverending source of glitches that one learns through painful experience to avoid triggering. Users of alternative desktops tell me their experience is no better. The word processors available to me today are slower but not significantly more featureful than those decades ago. It is more or less impossible to put up a functioning blog without relying on a bloated stack that is costly to keep installed and working and that yields up a constant supply of network security issues.

There was a qualitative shift for the much worse starting around the late 1980s.

"The web browsers of today are definitely better than those of 30 years ago;"

On the one hand I'd argue otherwise because modern browsers have Javascript, CSS, and the DOM which are collectively a bad design.

On the other hand I'd point out that the first browsers, 30 years ago, were at that time slow and buggy and flaky compared to functionally similar multi-media editors of the same era.

Early browsers helped to ratchet up tolerance for worse software because of how they were commercially promoted.

" We are currently living during a software Renaissance; "

Your "renaissance" has become a literal battleground in international conflict and it has produced a software infrastructure that leaves developed society fantastically vulnerable to sustained disruption and collapse. If you think the professions should be proud of this I think you are insane.

Come on, there has always

Come on, there have always been Luddites who have complained that the new technology is not as good as the old. This is quite natural and expected. Horse-buggy salesmen didn't want to become obsolete because of the car; paper map sellers or London cab drivers who learned "the knowledge" also have a hard time because of GPS units becoming ubiquitous; their skills/experience became devalued overnight.

It wasn't just high labor, it was better techniques, better tools, better hardware, everything got better. The amount of programming and the number of programmers have increased by an order of magnitude. We can argue that programmers today are not as careful as programmers in the past, but honestly given the other advancements, they don't need to be. We have chosen to get more things done rather than get fewer things done perfectly.

Last night I had a nice dinner out with someone who not once but twice disgustedly threw aside their "smart" phone because flaky software got in the way. In the preceding couple of days I'd had to do cold reboots on a couple of consumer devices that -- although they have only very simple tasks to perform -- pretty reliably crash or flake out badly if used continuously for too many hours at a time.

Yet they aren't going to give up their "smart" phone, it has changed their lives and going back to old feature phones or even no phones isn't really an option. It's not like computer systems were not flaky in the 80s, but you couldn't move them from your desk anyway.

The word processors available to me today are slower but not significantly more featureful than those decades ago. It is more or less impossible to put up a functioning blog without relying on a bloated stack that is costly to keep installed and working and that yields up a constant supply of network security issues.

You can still use WordStar, you don't need a blog. If the older alternatives were so great, they'd still be in use, but the people have spoken; the world marches on even if to the tune of worse is better.

Your "renaissance" has become a literal battleground in international conflict and it has produced a software infrastructure that leaves developed society fantastically vulnerable to sustained disruption and collapse. If you think the professions should be proud of this I think you are insane.

Disruption is good, change is inevitable; the fatalist naysayers are almost always wrong. None of this has changed throughout history, including the fact that hindsight always sees the past in its ideal form, as somehow better.

But there is more to this: I've thrown my lot in with tools being a part of the programming experience, and hence a part of programming languages. I have a vested interest in promoting my position and reading progress one way. Other (most?) PL people have thrown their lots into programming languages being central and more advanced tooling being, at best, a little help and, at worst, an annoyance. We are just experiencing that culture clash right here; each of us hopes that our investments are the right ones for the future, or they will be devalued. Again, nothing surprising.

Happy New Year from the future (Chinese time zone) :)

progress

Are you also a climate change denier? The supposed causes of that disaster are all "progress" as you are using the term.

Living in Beijing...we have

Living in Beijing...we have some of the highest pollution in the world...I'm not a climate change denier, we have a big problem. But I think we can solve the problem by moving forward rather than by moving backward. Better technology will save us, rather than just reverting to an agrarian low-impact society.

The world has always had problems. The problems of today are comparable to the problems of yesterday, and they are always fixed eventually. LA and London used to be a smoggy mess; now they aren't. Things don't tend to get worse over the long term.

the issue is what tech is "better"

"Better technology will save us,"

Which raises the question we started with: what technology is "better"?

My suggestion is that PLT is going in the wrong direction when it tries to make it easier and easier to assemble massive libraries and to build applications by slathering "glue code" to join those libraries. Part of my evidence is that the better we have gotten at that kind of programming practice, the worse the robustness and general quality of the results. Nevertheless these bad practices have thrived in the marketplace for reasons we could separately examine.

Around these bad practices the PLT community has built up a largely unsubstantiated mythology that contains elements like: compositionality is paramount; the function metaphor is the best way to organize programs; it is a PLs job to facilitate ever higher levels of abstraction; automatically verified or enforced "safety" is highly valuable.... etc.

At the same time, the PLT community is often (quite correctly) noting and kvetching that their work seems to have little real-world impact relative to the effort put into it.

And at the same time the PLT community sometimes admits how little it actually knows about the impact of theory on practice. (See for example the recent LtU item about an empirical test of the impact of static type checking.)

For a couple of decades PLT has been more and more a field in which people say they don't really know whether what they build does any good; nobody cares about their work; but by god they are certain that a particular mythology of what's important in PLs must be right.

Meanwhile in the real world we have software systems that are so bad it should be regarded as a serious environmental, political, public safety disaster. The full-on "software crisis" hypothesized in the 1970s went off like a bomb yet theory and pedagogy have studiously avoided taking note of the crisis.

So if everything PLT practitioners mythologize about is wrong then what will real progress look like to today's PL theoreticians? I think it has to look like a rejection of the mythology. To people stuck in the mythology that's going to look backwards even though it isn't. For example, from the perspective of today's (messed up) PLT I think we can probably help to improve system design by making programming harder (as "harder" is currently understood).

I think at this point it's

I think at this point it's safe to say we have different world views. I totally agree that PL research is mostly completely broken, but I disagree that this has mattered very much and has led to an apocalyptic software crisis.

If anything, PL hasn't lived up to its potential in making software development more accessible. There is definitely a lot of demand for software that can't be written simply because there is not enough talent to go around. Doubling down on the hair shirt of making development harder, longer, more correct, less accessible, doesn't help that.

A slightly different take

I think the main problem is the s/w industry's focus on "reusability". This has not scaled well. Even if your program may be provably correct, it may bring in a component that is not (and the dependency chain may be very long!). Couple that with versioned shared libraries, dynamically loaded libraries, incomplete/incorrect documentation, interfaces that are not complete or well designed or hard or impossible to test, code "evolution" where more and more cruft gathers since no one "dares" to clean up, etc. We have our own version of "too big to fail" banks of libraries. Even if there was a perfect PL, we are not going to rewrite zillions of lines of C/C++/Java code.

And I am not sure the fault lies with the PL designer community. There was the Eiffel language (design by contract -- you can specify routine pre/post conditions, class invariants, assertions etc.) but that never went anywhere. On the other hand I don't find Java, Perl, Python, Ruby, Rust, Dart, D, Swift, Go, Julia etc. as terribly interesting -- they all play in more or less the same milieu. [They're all fine languages and I do use a subset of them frequently but they don't excite me the way I was excited when I first encountered Pascal, C, Lisp, Scheme, APL, Prolog, Haskell, Smalltalk, Algol68, k, Beta etc.]

I don't think the PL community can do much to solve the "software crisis". And I am afraid getting rid of debuggers & IDEs is not going to help either.

Rather than the PLT going in the wrong direction, it is the software industry that has gone off in the wrong direction.

Perhaps we need to think not about specific languages but about the "glue". I hate to suggest it but perhaps we need to think of every library as a "product" and every protocol as a "contract" and have them satisfy a minimum set of criteria before they can be used. If we (& I am afraid the legislators) can create enough pressure here, perhaps people will start moving to a world with fewer shared libraries, with better documented and vetted protocols, with more traceability, accountability etc. I hate to suggest this because it is sure to get mired in bureaucracy.

If we want to improve reliability,

There is one thing I think we could do, that I don't think I've seen any PL do yet.

If you have two function definitions, with the same name and argument signature, it ought to be seen as an assertion that those functions have exactly the same semantics.

So they could be run in parallel, and signal an error whenever they give different return values or different side effects or side effects in a different sequence.

The presumption is that the exact same bug is less likely to manifest in both versions of the function, and therefore any externally-visible differing behavior is likely to herald the presence of a bug in at least one of them. And when there is no differing behavior, it would seem much more likely that the behavior specified actually *is* as the programmer believes it to be.
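No language I know of supports this directly, but a rough library-level sketch of the idea is easy enough to write; here it is in Python, assuming pure functions and comparing only return values (side effects and their ordering are not checked). The redundant decorator and its registry are hypothetical names made up for illustration, not an existing API.

# Sketch: run every registered definition of a name on each call and treat
# any disagreement in return values as a probable bug in at least one of them.
_implementations = {}

def redundant(name):
    def register(fn):
        impls = _implementations.setdefault(name, [])
        impls.append(fn)
        def call(*args, **kwargs):
            results = [impl(*args, **kwargs) for impl in impls]
            first = results[0]
            if any(r != first for r in results[1:]):
                raise AssertionError(f"{name}: implementations disagree: {results}")
            return first
        return call
    return register

@redundant("gcd")
def gcd_recursive(a, b):
    return a if b == 0 else gcd_recursive(b, a % b)

@redundant("gcd")
def gcd_iterative(a, b):
    while b:
        a, b = b, a % b
    return a

print(gcd_iterative(12, 18))  # both versions run on every call; prints 6

Anything beyond return-value comparison (side effects, their ordering, timing) would need much heavier machinery than this sketch.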

Recovery blocks

This reminds me of the research done on recovery blocks starting in the early 70s by Brian Randell @ Newcastle & others. The idea was, you have a number of alternative "components" that perform the same function and an acceptance test. So you try alternative 1, and if it passes the acceptance test you're done. Else you repeat the process with alternative 2 etc. If none work, raise an error. There were syntactic proposals such as

ensure  acceptance test
     by alternative-one
else by alternative-two
...
else error

Though I don't know if this was actually implemented in any language.

This idea was later extended to parallel & distributed execution of alternatives. See The Evolution of the Recovery Block Concept paper for a detailed review/retrospective and references to lots of papers.
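For what it's worth, here is a minimal sketch of that ensure/by/else-by shape as a plain library function (Python just for illustration; the names are mine). A real recovery block also restores the prior state before trying the next alternative, which this deliberately omits.

# Sketch: try alternatives in order, accept the first result that passes the
# acceptance test, raise if all of them fail. No state restoration here.
def ensure(acceptance_test, *alternatives):
    for alternative in alternatives:
        try:
            result = alternative()
        except Exception:
            continue  # a crashing alternative counts as a failed one
        if acceptance_test(result):
            return result
    raise RuntimeError("all alternatives failed the acceptance test")

# Hypothetical use: two routines meant to sort, accepted only if ordered.
data = [3, 1, 2]
result = ensure(
    lambda xs: all(a <= b for a, b in zip(xs, xs[1:])),
    lambda: sorted(data, reverse=True),  # fails the acceptance test
    lambda: sorted(data),                # passes
)
print(result)  # [1, 2, 3]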

The idea of factoring out error checking and retrying is quite appealing but the test itself can become an Achilles heel. Testing can often be harder to get right than the thing being tested (and can get very tricky). If you just compare the output of two functions (like in hardware triple modular redundancy), you're not going to catch a case where they both fail the same way (errors are not usually random). And in some cases more than one answer may be acceptable (think floating point error tolerance or sets of items where the order may be different). And if any alternative is side-effecting, undoing that can be tricky or impossible! Finally, the cost of developing two or more alternatives can also be high -- not to mention now you have two pieces of code to vet for security issues!

re "slightly different"

I agree with just about all of that and I really like this formulation:

"We have our own version of "too big to fail" banks of libraries."

Beautiful. I would say we should not be too intimidated by that, though. We don't have to get rid of that junk all at once. Meanwhile it is (duct tape and spit) enabling people to collaborate. In the end we do not have to create functional replacements for all of that junk and probably shouldn't. (For example, I hope and kind of predict that eventually the WWW as we know it will be replaced by something with far less inter-twingled functionality. The long term future for multi-media hypertext is greatly simplified by shunning both embedded Javascript and URLs that start with "http:".)

And I am not sure the fault lies with the PL designer community.

I'd go further and say that I'm sure the fault does not lie with the PL designer community. I think that community is just lost in spacey thoughts, missing opportunities to do good.

Rather than the PLT going in the wrong direction, it is the software industry that has gone off in the wrong direction.

I think the industry has mystified PLT practitioners who, for the most part, keep making very misguided attempts to serve the industry. This leads to a constant stream of frustrations and confusions. PLTers think they know "better" from "worse" in PLs based on their misunderstandings of the dynamics of industry. Then PLTers are surprised when, in industry, "worse" keeps winning by putting forth awkward missing-the-point parodies of what PLTers think is "better".

Behind the mystery the material fact is that our political economy does not create a software industry with any incentive to improve quality or to regard the mass of all deployed software as a social infrastructure. In fact, economic competition is often won in computing by making the overall situation much worse, socially. If you don't believe that you will have to explain to me such phenomena as mass government surveillance of the kind Snowden pointed out; massive security problems in international politics like Sony or the hacking of Iranian centrifuges; the "digital divide"; the replacement of decentralized netnews and email with centralized simulations like Google mail; the crapification of hypertext; the distortions of copyright, trade secret, contract, and patent law to create and preserve monopolies on communications and cultural artifacts; the constant low-level grind of bad user interfaces; ... one could go on to make a very long list of examples of how so-called software engineering has failed at social responsibility.

PLTers these days seem to join the general community of programmers in downplaying or ignoring those serious problems that industry has created. They join the view that this is "progress" which must be good. PLTers have for a few decades now been playing "me too!, please!" to industry's bad lead.

To pick one of your examples you mentioned Eiffel's "design by contract".

I like that example because decades ago, when that first came out, I realized for the first time that there was a PL design that had embedded within it a whole theory of the division of labor. Eiffel was premised on assumptions about capitalist production and in particular assumptions about how property rights should evolve to capture software source code.

Was Eiffel really about better, socially responsible engineering? Or was it about an early stab at how to best industrialize the mass production of software? (Answer: the latter.)

The alleged theoretical rationalizations in PLT for Eiffel say no such thing. They talk about theories of correctness and robustness. Maybe a little bit about the labor-saving advantages of interface divides but not really in depth.

How odd, though. PLT never bothered to try to quantify or test those supposed correctness benefits in any meaningful way. Eiffel was judged not much of a success because industry didn't pick it up. Eiffel was judged not on the stated terms of its theory but on the actual play out it got (or didn't get) in capitalist expansion.

Eiffel didn't try to solve society's problem. It tried to solve a problem capitalists faced. It happened to fail compared to, say, PHP. It is an example of how PLT is driven by an industry it poorly understands; PLT is organized as disingenuous theories that are never at risk of serious empirical examination; PLT is constantly shocked and surprised at its own irrelevance and consistent lack of impact compared to effort; PLT lacks any tradition or even much language to even begin to try to understand the sorry state of social software infrastructure today.

PLT is in the condition where everything it knows is, apparently, wrong.

So, from the perspective of PLT today, maybe we'd do better to change the goals to (what today looks like) "make programming harder." It'd be hard to do much worse than PLT is currently doing.

Evolution

You should really have a chat with Stefan Hanenberg someday (I think you referred to one of his papers in a previous comment).

PLT never bothered to try to quantify or test those supposed correctness benefits in any meaningful way.

There are huge challenges in doing real empirical analysis since humans are involved and so bias/noise are difficult to minimize. And don't try to test anything complicated that requires invested knowledge, since the bias at that point becomes unbearable. What Stefan and company are doing is nothing short of heroic but how it can scale remains to be seen, and even the HCI community doesn't do empirical user studies on AutoCAD.

Eiffel didn't try to solve society's problem. It tried to solve a problem capitalists faced.

You see this as binary, that we should be coming up with working solutions directly rather than trying things and seeing what sticks, which is basically how evolution works. The former is totally possible with the experience that the latter gives you (at least you know what kind of works and what kind of doesn't), but I guess what you want is a process that is more analytic and less experimental.

So, from the perspective of PLT today, maybe we'd do better to change the goals to (what today looks like) "make programming harder." It'd be hard to do much worse than PLT is currently doing.

Well, it would be a great way of killing off an industry and shrinking global GDP by a few trillion dollars.

Interesting short-list of myths

Interesting short-list of myths. The abstraction item caught my attention, naturally. You didn't quite say these myths were false (though you clearly suggested they might be); rather, you said they were unsubstantiated. Still, the ever-higher-level thing does seem pretty unnecessary to me. My usual illustration for the metaphor of "radius of abstraction" has abstractions growing outward from a base language in all directions, not just "up".

the alternative to abstraction

I think reflection and generality are the alternative to PL features for abstraction. An abstraction in a program doesn't have to correspond to a particular family of abstractions that are specially privileged by a PL. You can express all known abstractions using a macro-assembler, for example.

You can express all known

You can express all known abstractions using a macro-assembler, for example.

There is, of course, much opportunity for getting bogged down in terminology over what one means by "abstraction". I would say that macros have zero capacity for expressing abstraction — if you're using a macro-assembler, you've got no abstraction at all. Remember, abstraction was the paradigm that replaced the extensible languages paradigm in the 1970s, and the extensible languages paradigm was essentially macro-based.

"what one means by abstraction"

I would say that macros have zero capacity for expressing abstraction

Nuts to that. We might agree that "expressions of a single abstraction" are usefully regarded as a family of texts, all of which can be manipulated in terms of a common abstraction by means of automated processes.

I'll say it again: Given some abstraction I can specify manipulations in terms of that abstraction. Given a text that purportedly expresses the abstraction, there should be some mechanical way to translate a manipulation of the abstraction into a faithful manipulation of the text.

In macro-assembly language a program would then "express an abstraction" if it were possible to define a mechanical process for transforming the macro-assembly language program in terms of the abstraction. The mechanical process would not have to work on every possible macro-assembly language program -- only on programs that purportedly express the abstraction of interest in a particular style.

In short, abstractions can be expressed by convention in just about any programming language. Heck, why not even machine language.

Giving up a PL's "built-in" facilities for particular abstractions is not the same as giving up on those abstractions. Instead, it is just a separation of concerns.

Remember, abstraction was the paradigm that replaced the extensible languages paradigm in the 1970s, and the extensible languages paradigm was essentially macro-based.

Could you be more specific about what you mean?

Around the 60s/70s I think there was a rise-to-dominance of the recursive functions metaphor in programming languages (including OO).

In part that movement was carried along by advances in optimizing compilers. Structured programming, recursive functions, abstract data types... everything... was in part completely arbitrary and driven to prominence mainly because these were the particular arbitrary choices compiler writers were making.

In other part the movement was pushed by an academic aspiration to organize programming around an elegant kernel and a tower of higher and higher level abstractions. There was an ambition to give programming a kind of mathematical respectability along those lines so it could be academically recognized as something more than tinkering with gadgets.

I'm not sure I can agree that "abstraction was the paradigm that replaced the extensible languages paradigm". For me it looks more like that is what people tried to do but it has never quite worked. Meanwhile, software has turned into an environmental disaster.

Yup, bogged down

Evidently I was right about getting bogged down in the definition of the term. I'm inclined to avoid even trying to straighten that out, as past experiences suggest it's impossible (at least, without some explanatory tactic that I've so far not managed to develop).

As for what happened in the 1970s, I agree that abstraction never became a paradigm in the strict Kuhnian sense. However, some of the social aspects of a scientific paradigm shift did occur. Extensible languages had a good deal of momentum at the end of the 60s; there was an international symposium in 1969, and another in 1970. The extensible movement fizzled in the next few years after that, and when Thomas Standish wrote a paper about extensible languages in 1975, it was essentially a post mortem on the movement. Abstraction was on the rise through that same period. Socially speaking, the abstraction movement ostracized the extensibility movement. SIMULA had been presented at one of the extensibility symposia as an extensible language, but that didn't last. Standish was effectively writing about macros when describing the limitations of extensibility. And when Guarino wrote in 1978 about the history of "abstraction" in programming, she gave no hint that the extensibility movement had ever existed. I'd been studying programming languages in academia for at least half a dozen years before I discovered there ever had been an extensible language movement; the shunning was pretty thorough. Of course, if abstraction had fully succeeded as a paradigm, it would have been more typical for it to preserve extensibility as an object of ridicule rather than erase it from history (that's why we're taught Aristotle's claim that heavy objects fall faster than light ones).

standish et al.

"... when Thomas Standish wrote a paper about extensible languages in 1975, it was essentially a post mortem on the movement. Abstraction was on the rise through that same period. Socially speaking, the abstraction movemet ostracized the extensibility movement. SIMULA had been presented at one of the extensibility symposia as an extensible language, but that didn't last. Standish was effectively writing about macros when describing the limitations of extensibility. And when Guarino wrote in 1978 about the history of "abstraction" in programming, she gave no hint that the extensibility movement had ever existed."

Fascinating stuff. I found Standish's (tiny!) 1975 paper and also Loretta Rose Guarino's 1978 "The Evolution of Abstraction in Programming Languages". I also refreshed my memory about Bell Labs in the 1970s (e.g., when m4 came out).

I think there is a discontinuity in the literature that was not present in practice. I think the discontinuity may have had as much to do with rivalry and factionalization among people publishing papers as anything.

In particular I think "extensible languages" continue without interruption at Bell Labs through tools like m4 and the various "tiny languages". In the lisp-world, too, extensibility continues. Later, the same set of ideas are developed as code assistants, templates, and "framework" systems.

What died that Standish wrote about seems to me to be the institutional support for what Standish et al. were doing. Their society of letters died. But -- uncited -- the lines of thought continue in those other places.

pretty much that, yeah

That's about what I was aiming for with "social aspects of a paradigm shift". Abstraction didn't become a true paradigm because it didn't utterly dominate as an idea the way a paradigm would, but there was the factionalization, and the suppression (as demonstrated by the discontinuity in the literature).

As for Lisp, there's a nice quote from the 1969 symposium I've saved in my quotes file, by M.C. Harrison in a panel discussion. He'd prefaced his remarks, as I recall, by saying he preferred extendible to extensible because the latter reminded him too much of extensive:

any programming language in which programs and data are essentially interchangable can be regarded as an extendible language. I think this can be seen very easily from the fact that Lisp has been used as an extendible language for years.

fwiw

I might add, for whatever it's worth, that by my understanding of the ideas involved, the key difference between extensibility as practiced (i.e., macros) and abstraction is essentially information hiding. Standish observed that, starting from a base language, you can build one shell of macros around it but get in trouble when you try to build a second shell of macros around that — which I perceive to be because when you build the second shell of macros you have to attend to all the details both of the first shell of macros and of the base language. There basically isn't anything in the system that you can disregard when adding to the system. This is inherent in the traditional concept of macros, because a traditional macro can't usefully expand to anything that isn't visible at the point where the macro is called (that's partly dynamic scope, partly phase distinction, and... not sure but what maybe partly something else). Whereas abstraction seeks to let you pay attention to less than 100% of the existing system. The term leaky abstraction, much used in current energetic LtU discussions, has to do with which facets of the system can/cannot be hidden, and how well/badly, by abstraction devices; but this is in contrast to traditional macros that don't even try. Hence my suggestion that a macro-assembler does no abstraction.

abstractions, hiding, and leaks

Macros can express abstractions but might "leak" more than we'd like. Dijkstra-esque abstraction mechanisms can express abstractions but might "leak" more than we'd like.

What's the difference?

So, the difference between a macro and an abstraction

is whether the code/meta-code is cognizant of types?

I'm a bit skeptical that types are anything but a drastically weak compile-time language. What if we allow a strong compile-time language?

What if we stop acting as if it's the end of the world to have a compile-time language capable of expressing non-terminating programs? The solution to that is that a non-terminating compile time program is a bug, don't write that.

Macros, functions, types

The difference between a function and a macro doesn't have to do with types. It has to do with how macros inspect expressions. Functions interact with their parameters only through value semantics. Macros can also manipulate the environment of evaluation or maybe even inspect the form of their parameters (how they were built, rather than just what they do).
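To make that concrete with a toy (this is only an illustration using Python's ast module, not how any particular macro system works): a function only ever sees the value 14, while a macro-like transformer can inspect and rewrite the form "3 + 4" itself before it is ever evaluated.

import ast  # ast.unparse requires Python 3.9+

# A function interacts with its argument only through its value.
def double(x):
    return x * 2

# A macro-like transformer receives the form of the expression and can
# rewrite it before it is ever evaluated.
def double_expr(source):
    expr = ast.parse(source, mode="eval").body
    rewritten = ast.BinOp(left=expr, op=ast.Mult(), right=ast.Constant(2))
    return ast.unparse(rewritten)

print(double(3 + 4))         # 14 -- the shape "3 + 4" is already gone
print(double_expr("3 + 4"))  # (3 + 4) * 2 -- the shape is still visible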

I'm a bit skeptical that types are anything but a drastically weak compile-time language. What if we allow a strong compile-time language?

I think a number of people here are sympathetic to some vague sentiment in this direction. On the other hand, your thoughts as expressed here seem pretty fuzzy. Types are expressions that establish properties for infinitely many possible value expressions with only a finite amount of checking required. That makes them different than ordinary programs in most formulations.

Macro Assemblers

I spent a lot of time programming in macro assembly, and although you can create macros for lists, matrix algebra etc, the big limitation is register allocation. You end up either hard coding registers (bad in my opinion) or passing the registers as parameters, such that swap would take a source and target register, plus one to use as a temporary. Spilling is still manual. However given macros with register allocation and both value variables representing virtual registers and meta variables representing macros, you should be able to implement any abstraction that exists in any other programming language?

Unless you take the position that lambda abstraction is the only (singular) abstraction any programming language ever has?

a strong compile-time language

I'm a bit skeptical that types are anything but a drastically weak compile-time language. What if we allow a strong compile-time language?

Funny you should mention that. If I'm following you, that's been for several years near the top of my list of cool uses for fexprs. While Kernel's standard environment binds symbols to combiners that do ordinary operations immediately, it seems one might readily set up an environment in which those symbols instead generate object code. Taking advantage of the agility with which fexprs tread the line between parsing and evaluation. And, ironically, employing fexprs, with their abstractive power derived from eliminating the phase separation between interpretation and execution, in the first phase of a phase-separated system.

What if we allow a strong

What if we allow a strong compile-time language?

You get something like this, which is powerful, but static checking is undecidable. I think there will be a happier middle ground somewhere lower on the spectrum than allowing full blown Type : Type.

Types considered harmful :-p

If I'm following that rightly (always a dangerous assumption), that's what you get when you turn types into a strong compile-time language. Maybe you'd have better luck with some other kind of strong compile-time language, not based on types.

What would make a type

What would make a type language more intractable than any other sort of language in this context? Type:Type is just accepting general recursion into the type language, thus the non-termination. I don't see how any compile-time language could support general recursion AND guarantee termination, and any trick applicable to such a language should hypothetically be translatable back.

When is a type no longer a

When is a type no longer a type? I would claim that any such language would manifest what looks like a type system to a programmer, even if the construct wasn't based on current type theory.

Un-type-like systems

Starting with "When is a type no longer a type?" suggests a presumption that one would be doing something type-like. But such a compile-time system doesn't necessary have to be oriented toward mechanistic reasoning (I commented on this, somewhat, in another comment). Also, if it does go in for mechanistic reasoning, I'd be skeptical of a claim that such reasoning would have to be type-like. It seems to me that typical advanced typing these days is rather like object-oriented programming, in that it's trying to use one kind of structure exclusively instead of admitting that different aspects of the world have different structures.

Irony

It seems to me that typical advanced typing these days is rather like object-oriented programming, in that it's trying to use one kind of structure exclusively instead of admitting that different aspects of the world have different structures.

You must have been taught a very perverted, unnecessarily pure form of OOP. But ignoring that, your statement is quite ironic given that object-oriented programming itself isn't very friendly to contemporary type theory; semi-unification is not handled well, especially with respect to inference. Many of the new OO languages with static typing are basically going off in unsound directions to get what they think is the best experience for the programmer (Scala, Dart, Typescript).

Types are propositions, so

Types are propositions, so to my reading, un-type-like means a language that can't express propositions? I'm not exactly sure what type-like entails, thus I don't know what un-type-like could mean.

In any case, back to the main idea: this was an interesting recent post on the Haskell subreddit on this sort of topic. Instead of supercompilation, we could also use a powerful abstract interpretation. It's an interesting idea.

Types are propositions of a

Types are propositions of a highly restricted form. It often happens that a highly restricted form can express all kinds of stuff if you don't care about lucidity. Myself, I'm inclined to treat lucidity as a top priority.

strong compile time language

If you have a compile time language that is strong, it becomes no different to a run time language. Effectively this is metaprogramming, just code that emits code. What then is compile time? Program1 can emit program2 which in turn could emit program3 or program1 again. It seems like you would end up reinventing the lisp macro. Type systems are useful precisely because they are limited. The ability to infer types from code, have principal types, specialise code by type, equality on types, are some properties that might be important.

Interesting

Some thought-provoking thoughts, there.

If you have a compile time language that is strong, it becomes no different to a run time language.

I have an uncomfortable suspicion we might be getting tangled up (in an already tangled situation) by the word "strong". I've been thinking of it in a computational sense, rather than a type-checking sense. Though I'm not even entirely sure what would be the consequences of a miscommunication on that point.

Effectively this is metaprogramming, just code that emits code.

I agree it's metaprogramming, although in a sense, all programming in a compiled language is metaprogramming, and this might be arranged to look in most cases no different from any other compiled-language programming. And it is, modulo subtleties of infrastructure, code that emits code, though I'm leery of the modifier "just".

It seems like you would end up reinventing the lisp macro.

Not unless by "reinventing" you mean "finding a different way to accomplish a related effect". The classic example (for the theory-minded) is, of course, that lambda-calculus accomplishes "the same thing" as Turing machines, but as practical tools they have wildly different properties.

Type systems are useful precisely because they are limited.

I've found (the perception has crept up on me gradually over time) that type systems nowadays aren't nearly limited enough to avoid a major downside in lucidity. That's pretty much the essence of my skepticism about type systems. I suppose I'm broadly in sympathy with Paul Snively's remark, "Give me Kernel or give me Coq" (it's not the specific languages, it's the philosophies). I suspect you're already in trouble by the time you've introduced structured function types.

The ability to infer types from code, have principal types, specialise code by type, equality on types, are some properties that might be important.

And yet, such games with types can very easily become a source of obfuscation, unless the type system is extremely rudimentary. There seems to be tension between complex type systems and the principle that "dangerous things should be difficult to do by accident".

Give me Kernel or give me Coq redux

> I suppose I'm broadly in sympathy with Paul Snively's remark, "Give me Kernel or give me Coq" (it's not the specific languages, it's the philosophies).

Funnily enough, today I'd say something that sounds quite a bit more like "Give me Kernel and give me Coq." That is, if we take the "isomorphism" in "Curry-Howard Isomorphism" seriously, we should be thinking as much about how software at runtime can acquire information that would enable inference of types not inferable at compile time, and how the introduction of such types would impact both existing code (i.e. revealing logical inconsistencies) and code to be generated at runtime.

It seems to not be sufficiently appreciated, even among most type theorists, that Curry-Howard has been extended so as to cover classical, as well as constructive, logic, and it turns out that doing so provides typing for continuations. The outcome of this, so far, is Krivine's realizability machine, which may actually be capable of expressing all of mathematics, and includes continuations in a logically fundamental way.

One reason I'm excited about this is that it seems to be converging with the research on Gödel machines, "mathematically rigorous, general, fully self-referential, self-improving, optimally efficient problem solvers." If you read the most recent paper, you'll see that they basically define a Krivine machine. The novelty seems to be that they've put a lot of effort into defining the necessary reflective architecture to make good on the list of qualities given above.

So I agree there's more to life than Curry-Howard, and I suppose I agree that once you get to a certain "beyond" stage what you're dealing with doesn't seem very "type-like" as most programmers understand the term. But I think that's because it'll be much more like plain old logic—even classical!—and spread across multiple execution stages, not because it will be any closer to fexprs, either traditional or Kernelesque—although squinting at Gödel machines through a Kernel lens may prove to be as fruitful as doing so through a Krivine machine one.

Why is there a compile time?

This is the kind of regress that led me to the heretical conclusion that the "right thing" is for there to be no semantically significant distinction between phases at all.

I no longer believe in a typechecking or macroexpansion phase - or even a compilation phase as normally understood - semantically separate from runtime. If you can *do* them before runtime, they are worthwhile optimizations and limited correctness assurances. But if you can't do them once runtime has started, you've admitted a separation of phases that - even though it makes the runtimes simpler - has a regress which can arbitrarily complicate the semantics.

Situational constraints

Although I too have (conspicuously) noted drawbacks of a phase distinction, it also strikes me that there are situations where practical constraints make it pretty hard to imagine not compiling. What if you're programming a deep space probe, which has a small amount of memory for machine-code instructions and is meant to operate light-hours, or -days, away from mission control? Seems as if much more likely scenarios can be qualitatively similar.

phases ∈ {1, 2, N, ∞}

I like to think of scripting-style dynamic languages as only having one phase; typical statically typed languages as having two; Lisps or stuff like Scala's LMS and related advanced meta-programming languages as having N clearly delimited; and some hypothetical ideal that would please Ray, you, and myself as having infinite stages.

The problem with having only one phase is that, yeah, sometimes you need to compile something for performance or verification.

The problem with having two is that you have to make a more or less arbitrary decision about what goes in to which phase, and those phases tend to be very different from each other, so choose carefully.

The problem with having N phases is that you're constantly deciding which phase things are in and managing moving between phases.

A hypothetical infinity-phased system would give you the pleasant programming experience of a single phase and the staging expressive power of an N-phase system. What would make it "infinitely" phased is that you can't clearly point to phase 5 or 7 or N+1 or whatever. Instead, values pass between stages as smoothly as if there were no stages at all. Like a gradually typed system, you could have a "gradually phased" system, where you start with arbitrary phase interleaving and annotate your way towards something more disciplined as your system evolves.

In my mind, the ideal looks a lot like a dynamically typed version of the aforementioned LMS. I've been tinkering with a language that effectively implements Tiark Rompf's thesis on top of a runtime/first-class pattern-matching system with first-class environments, rather than on top of Scala's static method dispatch.

one, two, many

Just to be clear, I'd prefer N=1 but figure it's not always practical. N=2 would be my next choice, as it's the minimum complexity necessary for situations where one can't use N=1. If one were going above N=2, I'd definitely want N=infinity :-).

Proofs are Relevant

N=2 is optimal for me. Either something can be statically proved (by the developer before the code leaves the 'factory') or it cannot.

Compile time is about proving things to the developer (e.g., variable i will always be an integer when this function is called). Note that static type systems prove this (so that it can never happen at runtime), while dynamic systems check the type and return a runtime error if somewhere in the code a variable of the wrong type is passed.

The difference is proof (it cannot happen) vs testing (it might happen, return an error).

I don't see a point in static proving about the code for runtime errors, it's simpler to just test. So I prefer simplicity, just write runtime code with tests, but statically prove things about it at compile time.
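To put that contrast in concrete terms (a small illustration of mine; Python with an external checker such as mypy standing in for the "prove at compile time" side, and the function names are invented):

# Statically checkable claim: a type checker like mypy can reject callers
# that pass anything other than an int, before the program ever runs.
def halve(i: int) -> int:
    return i // 2

# The same intent as a runtime test: the check only fires when a bad value
# actually shows up during execution.
def halve_checked(i):
    assert isinstance(i, int), "i must be an integer"
    return i // 2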

Metaprogramming is hard to debug, and I think it's just making up for a lack of expressive power in the language itself. An interesting exercise would be to look at good uses of metaprogramming across a codebase and see what features could replace that.

A type system too weak for metaprogramming but strong enough to prove useful things about a program would seem optimal. My own thinking at the moment is a substructural linear logic over types would be about right - although it would allow metaprogramming. I wonder if any type-system strong enough for proofs is going to permit metaprogramming?

So I prefer simplicity,

So I prefer simplicity, just write runtime code with tests, but statically prove things about it at compile time.

But with tests you can't assert something like "forall n:Nat. P n", can you? You can only test finitely many things or the test diverges.

Testing

Property tests allow us to describe ∀ or ∃ quantified conditions.

Ideally, something like property testing would use an ad-hoc mix of abstract interpretation (for cases that are pretty close to type safety) and concrete analysis. And any error in property analysis should ideally provide a concrete counter-example, since that is highly comprehensible by humans.

We're pretty far from this ideal. Right now, it's difficult to even contemplate use of the same property descriptor for both abstract interpretation and property testing.

What do you mean by property testing?

I assume you don't mean this.

Regardless of what technology is used to prove theorems and generate counter-examples, the notation "forall n:Nat. ..." is pretty convenient for specifying the intended assertion. I don't really see how you'd improve upon it. And its semantics don't seem to correspond to a simple phase separation.

no infinities at runtime

So you don't need to say forall, you just need foreach. Checking that all values in this list are not zero is a runtime test. Proving that a value in any list passed can never be zero is a compile-time proof. I would write the runtime test something like this: foreach x in y: x /= 0
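
In something like Haskell, that runtime test is just an ordinary computation over the concrete (finite) list; a one-line illustrative sketch:

allNonZero :: [Int] -> Bool
allNonZero y = all (/= 0) y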

Infinite list

But if the list being iterated over is infinite, your computation diverges, right?

foreach n in 1..:
     assert n /= 0
print "I've fallen and I can't get up"

imagined problem

No such list can exist; it is a theoretical object, not a concrete one. Normally it stands for a list of unknown length. But the more important difference is that you cannot process every element before another operation, as the first would never terminate. So in practice you never need to check that all values are non-zero; you only need to check a block at a time (in a pipeline).

But how?

But how could you possibly do so efficiently and reliably? In C#:

// UpTo is presumably an extension method here, roughly Enumerable.Range(0, int.MaxValue) from LINQ;
// Select is deferred, so the lambda runs element by element as the sequence is consumed.
0.UpTo(int.MaxValue).Select(x =>
{
  if (x > (1 << 30)) throw new NotSupportedException();
  return x;
});

So in practice you never

So in practice you never need to check that all values are non-zero; you only need to check a block at a time (in a pipeline).

I don't understand where you're coming from. It is useful, in practice, to be able to state and then prove that e.g. "f(n) has property P(n) for all positive integers n." Why would I want to limit this to all positive integers less than some artificial ceiling? You've said you want to allow optional compile time proof, which is great, but then you say that you're only going to support assertions which can (in principle) be tested and verified. This isn't great, IMO.

Miscommunication

I want to be able to state and prove things about code at compile time. I want to verify and test at runtime. Strictly two phases. I think you misread my post "Proofs are Relevant", which was describing this two-phase approach. The paragraph you quote was saying exactly this: there are two independent mechanisms, proofs of properties at compile time (via axioms and the type system) and testing of values at runtime.

My point about runtime testing is that, assuming you cannot prove something at compile time (because it depends on user input), you would not attempt the proof at runtime after the user has input the value, but would instead perform a test. Of course, you might be able to prove at compile time that the runtime test is sufficient.

My bad

Sorry, I read that a different way, but I see what you meant.

Property testing is

Property testing is basically testing that uses quantification (forall, whenever, etc.) to describe tests. Usually this is implemented by automatically generating lots of inputs and validating the property for each of them, plus a phase to 'shrink' an input once it is discovered that the property does not hold, in order to provide a minimal counter-example. QuickCheck is an example.

This could be improved by automatic discovery of boundary conditions, tracking internal states, abstract analysis, and so on. The confidence offered by property tests is frequently somewhere above conventional testing but below 'proof'. Unlike conventional testing, property testing has a nice ability to trade-off time (spent searching for an error) for confidence (that no such error exists). Unlike proof, developers are not severely constrained in what properties they can express via type system or whatever; property tests can be added to most languages.
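
For readers who have not seen it, a standard QuickCheck property looks roughly like this (real QuickCheck API; prop_revRev is just an illustrative property name):

import Test.QuickCheck

-- Quantified claim: reversing a list twice gives back the original list.
prop_revRev :: [Int] -> Bool
prop_revRev xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_revRev
-- Random inputs are generated; any failing input is shrunk to a minimal counter-example.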

OK

I'm familiar with QuickCheck.

Unlike proof, developers are not severely constrained in what properties they can express via type system or whatever; property tests can be added to most languages.

This ought to be backwards. You should be able to state and prove more properties than can be meaningfully checked by something like QuickCheck. Propositions of the form (forall x. exists y. ...) come to mind. But I do agree that you're not allowed to assert things easily enough in most current languages.

Property checking, model checkers, SAT modulo theories (SMT), etc. could be usefully integrated into languages, but I think they are mostly orthogonal to how we ought to be specifying the properties we want to verify.

proofs at runtime

Finding a proof is not straightforward, and may take an unknown, unlimited time. It may require guidance by the programmer to find the proof, and this guidance may differ depending on initial conditions.

Personally I do not want pauses of indefinite length at runtime whilst the language runtime searches for a proof. If this manner of programming were acceptable, then Prolog would be more popular, as Prolog can express exactly this kind of proof and (eventually) find solutions.

If the purpose of the application is to find proofs then it is of course acceptable at runtime, and a library that functions as a DSL could always be written, but I don't want arbitrary pauses in my editor whilst it proves all possible documents fit in memory.

To me proofs clearly belong at compile time (and should be optional there, so that time-consuming proofs don't have to happen on every build). So I can prove my application secure and reliable before I ship it.

I agree with pretty much

I agree with pretty much everything you've written here, so you must be reading me wrong. The only thing I ever objected to that you wrote was that we should just use ordinary computation to check the properties that we want our programs to have. The example I gave of why that's no good is quantification over an infinite domain. The alternative is to have types and assertions about our programs that belong to a type system or logic. Type inference, type propagation and logical reasoning are all compile-time (which preferably coincides with edit-time).

-

Double post.

There are limited proofs whose construction you can automate.

To automate the construction of proofs is not simple.

So even though an induction proof is applicable to more (all) cases of a statement, it is not the case that you can make more statements and get proofs of all of them.

It's worthwhile for a system to attempt to construct or search for an induction proof (or disproof) if it's over an infinite set with well-known properties (such as the integers). But that construction or search definitely WILL fail sometimes, and can fail in the worst possible way - by attempting an infinite computation.

Anyway, the search for a failing case is just as clearly a search for a proof - specifically for a disproof by counterexample. And it is also capable of failing in the worst possible way, by diverging, when we ask it to prove something about an infinite set.

The main difference, conceptually, is that a failing search for disproof-by-counterexample is slightly more useful than a failing search for a disproof-by-induction because even when it fails, a disproof by counterexample provides a confidence interval which may fall within engineering tolerances.

first-class theorems

For Kernel (as I may have remarked on LtU at some point, and have blogged), the approach I've been mulling over is single-phase, not primarily type-based, without any built-in automatic proof-search. A theorem is a first-class object, and the primitive constructors of theorems are axioms. A computation that constructs a theorem object is a proof of the theorem. The programmer is in full control, free to prove whatever they want — if they can figure out how to do so. Precisely describing the proof is an act of programming.

What are theorem combiners?

You mention that axioms are primitive constructors - what are the non-primitive constructors? Do you have implication and quantification as theorem combiners? How do you prove those?

primitive vs derived/library

I meant primitive as opposed to derived/library. There's no need for a primitive to have arity zero. Modus ponens would be, presumably, a primitive of arity two, taking two theorems where the first is some P and the second is of the form P-implies-Q, and returning a theorem Q.
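
If it helps to make the shape concrete, here is a toy sketch, in Haskell rather than Kernel, with an invented propositional fragment and axiom; the point is only that Theorem is exported abstractly, so the primitives below are the only way to obtain one:

module Theorems (Prop (..), Theorem, stated, axiomK, modusPonens) where

data Prop = Var String | Prop :->: Prop
  deriving (Eq, Show)
infixr 5 :->:

newtype Theorem = Theorem Prop   -- data constructor deliberately not exported

stated :: Theorem -> Prop
stated (Theorem p) = p

-- An illustrative axiom scheme (K): |- p -> (q -> p), for any p and q.
axiomK :: Prop -> Prop -> Theorem
axiomK p q = Theorem (p :->: (q :->: p))

-- The arity-two primitive: from |- p and |- p -> q, conclude |- q.
-- (Here a shape mismatch yields Nothing; a real design might signal an error instead.)
modusPonens :: Theorem -> Theorem -> Maybe Theorem
modusPonens (Theorem p) (Theorem (p' :->: q))
  | p == p'   = Just (Theorem q)
modusPonens _ _ = Nothing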

My pursuit of the idea has been held up because there are some deep things about the situation I don't quite get, that I'd like to have a handle on. In particular, there seems to be something strange and elusive about the relation between a logic-embedded-in-computation like this and Gödel's Theorems. (That's the context in which I mentioned it on my blog.)

Hmmm

I'm skeptical that this is particularly different than the systems Godel studied. It sounds like you have a logical system whose proofs are guarded under an abstract type. Those objects that you're calling 'theorems' are really the proofs. Then you have on top of that the ability to build proofs with programs, but all of the standard no-go theorems will apply at the outset to whatever logical system you started with in that encapsulated box.

But maybe I'm still missing something important. For example, I don't see how you're planning to prove P implies Q. How does that work?

I approve of skepticism. It

I approve of skepticism. It took me most of that blog post linked above to explain the elusive web of ideas that I'm trying to tease, patiently, into... something, whose ultimate shape I don't actually know (or I'd have the answer already). If ultimately the thing exists. A basic principle for stalking a wild idea is, don't scare away the quarry. Usually there is a standard reason to believe there's nothing there; don't be hasty to take that reason at face value, and don't simply reject it either, but study it patiently, meticulously, and meditate on the details.

As I noted in the blog post, no-go theorems tend to be understood informally in an oversimplified way — but if you find a way around one, it's generally something that's obviously right once you find it. It pretty much has to be extremely hard to see before you find it, because otherwise it would have been common knowledge already. There are these odd things, that I described in the blog post, that seem to hint ambiguously at something that doesn't fit the conventional informal understanding of Gödel's theorem, and I'm carefully stalking that so as not to scare it away.

A theorem object, as I've described, is not a proof. This is related to my remarks in, er, some blog post, about why Wand's "The Theory of Fexprs is Trivial" does not prevent vau-calculus from having a nontrivial theory. Suppose you have a fexpr, and you pass an operand to it. The fexpr can analyze that operand down to its constituent atoms. But the fexpr cannot tell how you got the operand. The operand might have been specified in source code, but it also might have been the result of a computation, and there are infinitely many different computations that would have had that same result. If the operand is 42, that could be source code, or it could be the result of multiplying six times seven, or... whatever. The same goes for one of these theorem objects. It's the result of a computation, but you don't know what computation it's the result of. The computation that produced it is the proof; so, saying that you can't tell how the theorem was computed is saying that you can't tell how it was proved.

As a working nomenclature, I've been using the term "axiom" for a function that returns a theorem (reminiscent of "predicate" for a function that returns a boolean). If the function takes only theorems as arguments, I suppose that's a rule of inference; such as modus-ponens. One might have a function that takes something else as an argument, though; one might, just for example, have a function whose one argument is an arbitrary first-class object, and the function returns a certain kind of theorem about that object.

As implied in the blog post, I stopped short of choosing what axioms one would want, because any choice of axioms would run afoul of Gödel's theorem unless one could figure out how to do an end run around Gödel. So bypassing Gödel comes first.

Not Wrong

I approve of your approach, and I enjoy questioning established ideas like Gödel's. I am not sure how this relates to compiler phases? Prolog already finds proofs at runtime; although its treatment of negation is unsound, there are tweaks to Prolog negation that are sound. My issue is more with your Lisp statement that you claim expresses P = not P. The problem is, it does not; it expresses a transformation: turn rule P into rule not-P. I.e. it transforms the computation; it says nothing about the truth of the statement. Obviously the two opposite statements can exist at the same time.

Further it is possible if two independent observations, "P" and "not_P" exist, for them both to be true at the same time. It all seems to depend on the semantics of 'P'. For example if we interpret "P true" to mean P is possibly true, then both P and not_P can be true at the same time.

In any case the computer does not understand 'P' and it is up to us, the programmer to use a set of axioms consistent with the intended meaning of P.

If P := John has an apple, then P implies a proof that John indeed does have an apple, (not P) implies a proof John does not have an apple. We may not have a proof either way (both false). If both are true, it implies we do not have sufficient context, ie one proof refers to John before he eats the apple, and one after, because the statement is not explicit about time, we make assumptions, and these assumptions could be wrong.

When a human is reasoning, and we get an inconsistency, we can examine our assumptions and change them to make sense of the observations (proofs). With a computer program, the assumptions exist in the mind of the programmer, not within the computer, so the program cannot change the assumptions under which it was written.

Is it possible that the answer you are looking for, is that if the program were written by the computer itself, it can revise its assumptions and rewrite the program? This requires an intelligent computer, which may be a limiting factor, as unless computers evolve on their own, it will always be programmers making the initial assumptions, even if those are the assumptions with which the AI program making assumptions about the target program is written.

phases, constructors, and meta-reasoning

I am not sure how this relates to compiler phases? Prolog already finds proofs at runtime...

Actually, I'd mentioned it in relation to proof search, since a key element of my would-be approach is that there is no automatic search for proofs; the programmer is responsible for figuring out how to construct theorems. That approach arises from the idea that a complicated type system, or any other form of automatic proof search system, is a program, and how to write a program should be the responsibility of the programmer, not the language designer.

There does seem to be still some connection to phases of computation; I envision this "theorem" system on a Kernel-like platform, where there is no separation of phases, although, as I've mentioned, I also suppose a Kernel-like platform might be used for code generation, thus producing a two-phase system in which the programmer-directed theorem-proving takes place in the compilation phase.

My issue is more with your Lisp statement that you claim expresses P = not P. The problem is, it does not; it expresses a transformation: turn rule P into rule not-P. I.e. it transforms the computation; it says nothing about the truth of the statement. Obviously the two opposite statements can exist at the same time.

The situation is even more fraught than that. I suggested Lisp evaluation of (A A) is analogous, but the analogy is a bit woolly around the edges. The function A returns, if anything, a boolean; it's a predicate, not an "axiom" (kind of breaks my nomenclature, as this would be a "library axiom", a term that sets my teeth on edge). If function T were going to actually prove something, i.e. if it were going to return a theorem, it would have to somehow look different inside. How it would look different, we don't know; there certainly can't be an axiom that transforms an arbitrary theorem into its negation, as that would mean the system is inconsistent (or trivial, if it isn't actually possible to construct any theorem).

I've been aware for some years, though this doesn't come up in casual mention of the approach, that to support a viable "theorem" type you want a corresponding "proposition" type, where a proposition is essentially a proposed theorem, that may or may not be provable. Think of a theorem as a proposition with a (certified) turnstile prefix. Propositions could be represented by some other data structure, but for discussion of the concept it's probably less confusing to treat them as a standalone data type. When you say

Further it is possible if two independent observations, "P" and "not_P" exist, for them both to be true at the same time.

I'm not sure about the word-choice observations, but — if P and not-P are of type proposition then we should be able to construct both as first-class objects, but if they're theorems there should be at most one of them that can be constructed. If we succeed in constructing both theorems, we know the system is inconsistent.

When a human is reasoning, and we get an inconsistency, we can examine our assumptions and change them to make sense of the observations (proofs). With a computer program, the assumptions exist in the mind of the programmer, not within the computer, so the program cannot change the assumptions under which it was written.

You're getting to the heart of Russell's Paradox, there. Or perhaps, more precisely, you're getting to the heart of reductio ad absurdum. When we, as thinking beings, see that our formal reasoning has led to a contradiction, we conclude that something was wrong with the assumptions on which we based our formal reasoning. This is the meta-principle of reductio ad absurdum. When mathematicians around the turn of the nineteenth-to-twentieth century saw that their formal reasoning was leading to contradictions, they decided by reductio ad absurdum that one of their assumptions was wrong. Which makes it pretty ironic to suggest, as Alonzo Church did, that the wrong assumption might actually be reductio ad absurdum. I suspect the problem really is in some sense with reductio ad absurdum, though; not that it's outright wrong, but that there's something askew with the way classical logic treats the relationship between the two hypothetical realities, one in which the supposition was true, and the other in which the first hypothetical reality is known to be inconsistent.

Good luck

I wish you nothing but good fortune in looking for possible gaps in the no-go theorems, and I agree that sometimes it's possible to avoid a hypothesis in order to avoid the conclusion. But I also think (and suspect you'd agree) that it's often prudent and efficient to alternate between coming up with the crazy ideas and trying to shoot them down. This helps you figure out if the crazy idea has a chance and helps define some boundaries for it.

I don't really see why it matters that you've hidden proofs from the functions manipulating theorems. If a proposition is proven, then you've clearly had its proof around at some point. Having theorem constructors parameterized by functions would avoid this problem, but let me try a different angle. If you get your programming language / logic combination to work, I can make a purely logical system that emulates it. I can laboriously reason about stacks and parameter passing and be able to prove every proposition "John's system proves X" for every X that your system proves. And then I can take as an inference rule:

John's system proves X
----------------------
           X

Now I have a system that proves everything yours does to which Godel should apply.

Yes, that's hand wavy and maybe you can find a crack in it. I even wish you luck! I guess I'm just not getting the same tingles as you that there's a crack nearby.

alternating is good

But I also think (and suspect you'd agree) that it's often prudent and efficient to alternate between coming up with the crazy ideas and trying to shoot them down. This helps you figure out if the crazy idea has a chance and helps define some boundaries for it.

I do agree one should alternate. Not just for prudently cutting an unpromising path of exploration, but also for refining it — as the attempts to shoot them down get scrutinized just as they scrutinize the crazy idea itself. Because one is using intuition to look for cracks in formal reasoning, the cycles of questioning back and forth end just when one's intuition says they should end. If one's intuition tingles too easily it will too often fail to truncate searches that lead nowhere; if it doesn't tingle readily enough it'll miss opportunities. Two of my favorite quotes are "A mind is like a parachute. It works only when it is open" (Frank Zappa) and "Keep an open mind, but not so open that your brains fall out" (origin obscure, been around a long time).

        [note: the first quote evidently has been around for longer than Frank has]

I don't really see why it matters that you've hidden proofs from the functions manipulating theorems.

I only meant to clarify the point, since it had come up. It's not exactly hiding, except perhaps in the sense that a fossil hides most of the context in which the now-fossilized object originally occurred.

[...]
Now I have a system that proves everything yours does to which Godel should apply.

Right. If one builds the system in the way that seems most obvious, it doesn't get around Gödel. My intuition is tingling somewhere a bit off from there. How about this (hand wavy also).

Gödel says if you have a system that works like thus-and-such, then either it fails to prove some things that are true under the system, or it does prove some things that are false under the system (or both of course). We have, conceptually, three sets here: things that are true under the system, things that are false under the system, and things that can be proven under the system. Call these T, F, and P. There's definitely some hand-wavy meta-reasoning behind T and F, as they're about what "is" true or false distinct from what is provable; I'm going to mostly treat that as an unimportant distraction. We know T and F are mutually exclusive (if not, they don't really represent what we understand as true and false), and since we're talking about powerful systems I think it's safe to say there are propositions that belong to neither T nor F. We don't want P to include anything in F. We'd love for P to include everything in T, and it'd be just peaches and cream if P were exactly T.

Clearly, the boundaries of T are infinitely complex (poetically "fractal"); conceptually very much like the boundaries of a formally undecidable set. I've always thought of Gödel's theorem as somehow deeply akin to the Halting Problem, so I'm inclined to associate these two complex boundaries. I'd really like to choose some T and F based not on the vagaries of a particular formal system of logic, but on some sort of "absolute" notion akin to Turing-powerful computation. I also perceive a structural similarity between Russell's paradox, where it seems you've crossed the boundary and are no longer in this "absolute" T, and a recursive function diverging, where it seems you've crossed the boundary and are no longer in set H of programs that halt.

So, what if we could change the nature of what happens when you move out of T? Let's just not worry about proving things false; that is, say we don't need to accept F. Maybe there's a way to tie P to H, by transforming our system in a way that makes it more like computation than like logic. And maybe we can bypass Gödel that way too, because, harking back to my description of Gödel, it started with "if you have a system that works thus-and-such"; maybe when we find what we're looking for, we'll see that it works in some other way that isn't thus-and-such.

Neither true nor false?

If all propositions are either true or false, then there's a simple variant of Godel that says that P and T can't be the same. Suppose M is a Turing machine that outputs all and only true sentences. We can formalize "is ever output by M" in arithmetic just as we would provability in some logic. We can then build a Godel sentence that asserts that "this sentence is not output by M," and this sentence is evidently true if and only if it's false.

So your hope of finding P=T with a Turing machine seems to rely on this proposition being neither true nor false. But such a proposition is just some complicated assertion about integers. If it's neither true nor false then what do true and false even mean?

Turing power

If it's neither true nor false then what do true and false even mean?

What indeed. For pleasingly well defined concepts, seems like computation beats logic all hollow, because Turing-computability is a stable reference point, whereas true and false are dreadfully hard to pin down.

Philosophy

I'm having trouble following the philosophy. It sounds like you're just questioning the reality of truth. OK, but then what's the point of T and F? And I'm not going to argue for Platonism, but if this idea requires me to believe that some statements about the integers (the Collatz conjecture, maybe?) are neither true nor false, I think it may be time to disembark the crazy train. The world may be an illusion but I'm gonna keep on flossing.

Concrete Platonic reality

I'm not questioning the reality of truth. I think truth is something robust. Gödel showed that proof, as we were practicing it, isn't a robust way of getting at truth. So it seems we don't understand how to get at truth as well as we thought we did. I'm stalking a better understanding of robust truth; and I'm interested in trying to harness the robustness of the Church-Turing thesis.

I can't get there

It looks like you have to believe that there is a parameterized proposition P(n) such that one of these two holds:

1) P(n) is true for every n:Nat but forall n:Nat. P(n) is false
2) P(n) is false for every n:Nat but exists n:Nat. P(n) is true

I can't wrap my head around calling such a notion truth.

me neither

Those don't look like truth to me either. I don't know where you get them from, though.

More specifically

Let M be that Turing machine that outputs all and only true things. Let P(n) be the statement "the nth output of M =/= G" where G is the Godel sentence. Each P(n) will be trivially provable by checking the nth output of M and comparing it to G. But the statement "forall n:Nat. P(n)" is G. Since G is never output, it must not have been true. Thus P(n) is true for all n, but "forall n:Nat. P(n)" is not true. Seems odd.

Stalking the wild idea

I don't know exactly what I'm looking for, because I haven't found it yet. I'm not in a hurry, though. There's plenty of stuff in the neighborhood that isn't possible, and if focusing on what isn't possible were a useful way to find something right next to it that is possible, I expect it would already have been found. Either there's nothing there to find, or the way to find it isn't by focusing on what isn't possible. Kind of what I meant about not scaring away the quarry.

Good luck

I'm skeptical but I do wish you luck!

thanks

Appreciated, both well-wishing and reminders of the impossible (not to focus on, but to keep in mind).

Well

Given a proposition of this form:

forall m:Nat. exists n:Nat. P(m,n)

What QuickCheck could do with it is neither a search for a proof nor a search for a counter-example.

Fluid phases and annotations

If I had fluid phases, I'd still want the ability to perform static computations for parts of the application. It might be enough to declare that some variables must be computed statically, without modeling them as part of any particular phase. That, plus some static assertions.

deliberate, as well as drastic

I'm a bit skeptical that types are anything but a drastically weak compile-time language. What if we allow a strong compile-time language?
Type systems are a deliberately weak compile-time language - i.e., one that can be executed without invoking the Halting Problem. A strong compile-time language would allow type checking to diverge - i.e., the system could have a type-checking phase of infinite duration. Someone might want to construct a meta-type system to check the type system before allowing type checking on the actual program to proceed in that case.

Prospective blog post

This is actually the subject of the nascent blog post I've been working on most recently (I keep at least half a dozen of them in development, and maybe a few times a year one matures). There are three ways to try to determine that our programs are right: mechanistic reasoning, which mathematicians tried seriously for about a century, an effort that went down in flames with Godel's theorems; testing; and the means mathematicians have used with much success for thousands of years --- write so clearly that everyone can understand what you're doing and see that it's right (and, having made it really clear, show it to lots and lots of people, which takes advantage of the clarity). Writers of proprietary software don't want to show the code to lots of people, so they like the idea of mechanistic reasoning as a substitute for writing clearly (in fact, they go in for obfuscated code). Both mechanistic reasoning and testing can (if not handled very carefully) wreck clarity. The trade-offs between these three approaches can get quite complicated, if you let them.

re blog post

I think you are really on to something there, John. I think it is noteworthy that you have hit upon automation -- the elimination of paid labor -- as a motive force behind engineering techniques that are specific to proprietary software.

Suggestion:

Proprietary software sellers don't care primarily whether or not they show code to others. They are forced by practical considerations to keep source code secret but that is only an ad hoc side effect of the particular details of our copyright and trade secret practices.

Deep down, proprietary software sellers only ultimately, really care about being able to treat a non-rival good (software) as a commodity. (Commodities are necessarily rival goods. Therefore proprietary software can only exist in commodity form if the state intervenes. We happen to have settled on copyright, trade-secret, and contract law as the form of that state intervention.)

In other words, the "trade off" we make between more social means of confidence building in software (like in the free software world) and more automated means of confidence building (like in the proprietary software and libre-software-privatization world) is as much as anything a perverse artifact of highly artificial "market" rules.

Interesting point!

Open source projects can have everybody able to see the source and verify for themselves, and commercial (closed-source) efforts have to rely for credibility/trust on other things. And that opens up a huge market for formal-verification tools that wouldn't be as robust otherwise.

Which, as a market distorted by copyright/trademark configuration, may be a highly inefficient market.

Objects and object thinking

Objects and object thinking address gradual program design, which is the MAIN reason why OOP has come to dominate the profession. But thought is difficult to formalize, so it is often not addressed by academics, who are constantly baffled about why objects still seem to be more popular than formally superior functional techniques :)

But seriously: objects are about building and communicating designs as much as they are about getting the computer to execute solutions. Objects also take more advantage of our linguistic capabilities for communicating things we don't fully understand (since when you are designing, you don't know what the solution is yet), and we can arbitrarily relate whatever we want via naming, nominal subtyping, and imperative assignment.

Functional techniques, in contrast, rely on truth-oriented equational reasoning, where getting at solutions is more difficult because fuzziness isn't well tolerated, but when you get a solution, it is more likely to be correct. In other words, to help with design, the OOP techniques that tolerate not-so-correct solutions win over the ones that don't, but they might come up short at the end by still not being very correct, because you can lie with objects (like you can lie in English), whereas you can't lie with equational reasoning (like you can't lie in math).

extrapolated success

For my first decade or so of programming, I always found OOP an artificial way to program, worse for some purposes than for others. It was a revelation to me when, in the mid-1980s, I took a class using GASP IV (a FORTRAN library for simulation) and found that for that purpose, OOP structure was altogether natural. I figure that's how the claims for OOP got started, likely much the same way the claims for academic, rigid functional programming got started: some folks saw how well it works in some cases, and extrapolated that it's right for everything.

Right, and actually, both

Right, and actually, both techniques serve their purposes fairly well; my code is just a mix of OOP and FP designs used appropriately.

Of course, simulation is the obvious use case for OOP (hence the first OO language being called "Simula"). Many enterprise apps are just simulations of the real world (for design and modelling purposes), so OO works well there also. If you were getting into some hard core math (e.g. the physics engine part of a simulation), you would find FP techniques more useful, though the discrete time steps might be discouraging :)

But in the context of this question, OOP's ability to handle biased thinking, generalizations, ignorance, and elusive truths is a huge benefit in designing programs, but doesn't lend itself very well to correctness. All birds fly because I say so, ignore the penguins behind the curtain.

OO considered a terrible design tool

I appreciate your participation, but I must take exception to your conclusion. While it has been difficult to achieve a stable design in my pure functional F# game engine, the difficulty of achieving a stable design in my previous OO game engine was many times greater. So much so that it finally led me to give up on OO as a useful design tool.

While it is true that one can achieve an interim design more easily with OO than with FP, achieving a _good_ design is much, much easier with FP, primarily because it nearly falls out of following functional principles. Try following OO principles, and the best you will do is chase your tail 90%+ of the time.

Anyone else have similar experiences?

So your pure functional game

So your pure functional game engine completely lacks objects? Or just objects created from classes? I would be surprised if you could get away with a pure functional design for a game engine, but it's been done before (e.g. Yampa arcade).

Objects are pervasive even in languages that aren't considered object oriented, as is equational reasoning in languages not called functional. Entity component systems being one big example of reinventing objects without classes (they don't even try to hide the object thinking).

The distinction between using objects and OOP

My engine uses objects, for example for a convenient implementation of dictionary-passing style, but as you say, that does not make it object-oriented.

My engine is based on discrete functional reactive programming (DFRP) and uses a purely functional simulation core. Side-effects are hidden behind purely functional message queues that are completely invisible to the end-user, with their implementations cordoned off behind an OO-style interface on the other side. So, in essence, we have functional in the large and objects in the small (in contrast to the approach you hear from OOP agilists like Robert Martin).
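
As a very rough sketch of that shape (generic Haskell, not the engine itself; Event, Command and the Int standing in for the world are all placeholders): the whole frame update is a pure function from the current world plus queued events to the next world plus outgoing commands, and all real effects stay outside, behind the queues.

data Event   = Tick | KeyPress Char
data Command = Log String | PlaySound String

-- Pure simulation step: same world and events in, same world and commands out.
step :: Int -> [Event] -> (Int, [Command])
step world = foldl apply (world, [])
  where
    apply (w, out) Tick         = (w + 1, out)
    apply (w, out) (KeyPress c) = (w, out ++ [Log ("key: " ++ [c])])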

I believe Yampa, like Elm, is (at least in part) a continuous functional reactive programming (CFRP) style. At some point I'd like to put up a discussion contrasting the two design points. Having chosen the less abstract form, I obviously believe my choice is more pragmatic. However I do still envy the nice, arbitrary time-scrubbing property of Yampa, Elm, et al, even if I believe it to not scale in practice for 'real' games.

Discrete is more practical,

Discrete is more practical. I don't think Elm is continuous in any respect; it is completely discrete (everything is an event stream; I don't think it even has behaviors). FRP is definitely one way to avoid objects in a game engine, at least from the client perspective. Courtney's papers still blow my mind, and provide a true, honest-to-god contrast between functional and object styles.

Glitch is my own framework in this area; it is object-oriented but manages time (ensuring all reads see all writes) to tame side effects. It also supports time scrubbing, with an option to forget history beyond a certain point in the past to make it scale in practice (see "Programming with Managed Time"; I'm on a tablet so it's hard to link with LtU HTML). My old work combined FRP-inspired signals and objects (SuperGlue, ECOOP 06 and OOPSLA 07).

Fascinating - I thought Elm

Fascinating - I thought Elm was continuous under the hood!

That being the case, I do not understand how you can do arbitrary time-scrubbing while changing dependent values without continuous functions.

Mind elucidating?

Evan would know more, and I

Evan would know more, and I don't know Elm well, but I think the scrubbing is done through checkpoints, and changes are handled by direct re-execution. Glitch basically does the same thing (easy enough with versioned memory), though re-execution is incremental (only what depends on the change directly and indirectly is re-executed).

Interesting, as that is

Interesting, as that is similar to how I am going to approach scrubbing, although I was wary about it because it seemed like kind of a 'hack' as compared to using continuous functions.

I certainly have a lot more confidence in my approach now, at least.

I'm not sure continuous

I'm not sure continuous functions would make scrubbing easy, even Fran had stepping based on discrete events. There are no shortcuts for general computation (reversible computation on the other hand....).

:/

Would it be too cruel to say that you're just not experienced enough yet in OO?

I've always thought that the point of OO is that it allows you to organize your program into somewhat independent APIs. It gives you organizing principles - each class is an API. It has a hidden implementation dependent side and a public interface.

No one FORCES you to organize into something that a human can keep track of, but at least you have the ability.

I, myself, don't have much experience in FP. Using Scheme I have been impressed with how much better programs are that can rely on higher-order functions like map and andmap. Things that were unusably complex in Java just fell into near one-liners that way.

But there's no reason you can't have both, right?

Most people generally do

Most people generally do both, even if they aren't using classes or don't want to admit it. The only people who really get away without using objects of any recognizable kind are the Haskell folks, where purity doesn't allow for identities without a monad to hand out GUIDs.

Haskell scares me - programming is engineering not math

I don't have experience in it yet.
I get how, outside of monads, everything is just declarative.

I get the normal order evaluation - which looks mildly useful and expressive, but not a perfect match for practical problems; for instance if you're doing signal processing, normal order evaluation would let you pass data through stages of filters as needed and throw away what wasn't needed... But it's not a perfect fit because if you kept a pointer to the beginning so that you could seek back, it would garbage collect NONE of the data you'd processed. In real life, sometimes you want to calculate OVER AGAIN instead of saving everything, and that's not what haskell automates. It's an abstraction that mathematicians thought matched some ideal, rather than a feature designed to be useful for working engineers.

I get that monads are useful and cool.

But I don't get that you need a straitjacket type system. I don't get that you have to pretend that monads work because of type tricks; they're needed because of purity and they're useful, but the type stuff feels like too much cruft.

Statement "X is engineering not math" considered ironic.

*

Not cruel, but inaccurate.

It would not be cruel, but inaccurate.

In fact, it falls directly in the 'No True OOP' fallacy as discussed in this popular article - http://www.smashcompany.com/technology/object-oriented-programming-is-an-expensive-disaster-which-must-end

If you wish to see my expertise with OOP, please look here - http://sourceforge.net/projects/xigameengine/

What a real piece of work,

What a real piece of work, it's like one of those articles you would get on a libertarian or alternative medicine website. And also quite long, it's about 10 pages of ranting.

Couldn't they just start at the Treaty of Orlando or something?

I'm obviously not using the

I'm obviously not using the article as a citation, but rather to show the popular objection to the 'No True OOP' fallacy. No reason to have an allergic reaction.

If you want to critique something relevant, critique my OO code at the link I posted. But beware, it will require more effort than a knee-jerk reaction.

Do you consider Self OOP?

Do you consider Self OOP? What about CLOS (it even has "object" in its name!)? I mean, wow, the author takes a very narrow view of the object world, and has probably never bothered to browse through very many OOPSLA proceedings. If this is "popular," I worry that our standards have fallen quite low. And also that people like me aren't doing our job as academics in the OOP community.

Please, let's not keep

Please, let's not keep focusing on irrelevant details in the article. The only thing relevant in it is the 'No True OOP' fallacy.

Perhaps I should add the 'NTOOP' fallacy to the C2 wiki so that it can, in the future, be presented without so much derailment.

Nobody here has made an

Nobody here has made an argument of the form "no true OO system would ever X", so I don't really see how you are invoking the Scotsman fallacy, which has nothing to do with the claim "there is no true OOP system". What is your point?

Fallacy not about OOP, but OO Programmers

"Would it be too cruel to say that you're just not experienced enough yet in OO?"

That suggestion is basically the "No True OO Programmer" fallacy from the article.

However, I see where the confusion comes from, as I lazily used OOP as an acronym for 'Object-Oriented Programmer'.

The perils of text communication strike again (had this been a real-life discussion, I would have sounded it out - and avoided much confusion).

Well, it's not good style to

Well, it's not good style to make assumptions about people's backgrounds, but that article was way out there.

At the end of the day, people have come up with arbitrary definitions for what it means to program with objects and what it means to program with functions, and they don't always agree on what those definitions are so communication fails to be very effective. Actually, that is very object oriented.

Ugh, badly written

I stopped reading when the author said that Java doesn't have late binding.

He wasn't just wrong, but he was wrong in the middle of being grumpy, sarcastic, flaming...

Maybe my own writing gets too opinionated and grumpy too - I'll take that article as a sign to cool it, I don't want anything I write to be like that.

And Ruby...

Actually the only thing Lisp has that Ruby lacks is forms.

Amazing how people leap to

Amazing how people leap to focus on the wrong thing when someone takes exception to their argument, isn't it?

Again, I'm obviously not using the article as a citation, but rather to show the popular objection to the 'No True OOP' fallacy. No reason to have an allergic reaction.

If you want to critique something relevant, critique my OO code at the link I posted. But beware, it will require more effort than a knee-jerk reaction.

What argument?

You said OOP is a terrible design tool but your only evidence is that you found it easier to stabilize your second design than your first - well that's ALWAYS going to be true.

Also if you say that it's easier to evolve a design in OOP but you have to be "correct" from the beginning in FP, then I'm going to start worrying that you're using tools with too high a hurdle to get things started.

You know if you forced me to write in C or Pascal, I'd still use OO, I'd just get less support from the language. I know because I've been on projects like that.

That article isn't really against OOP exactly, since it's touting Lisp, a language where it's easy to emulate objects and usually has an OO library. He's not talking about Haskell. A Lisp closure is a simple object after all.

Opposite experience

I have used C++ & a coroutine library to implement a hardware simulator (network switch elements, fabric, ASICs, NICs, FIFOs, links....) and it worked quite well (good enough to even handle real network traffic so that e.g. you can ssh through a simulated switch). All the hardware elements "inherited" from a very abstract model of hardware (power on/off, reset, # of ports etc.) and each element used one or more threads. The design was stable enough that other people used it for traffic modeling etc. I had also added a way to monitor pretty much any element so in effect you could put a `test probe' and observe the traffic. There was a config. API to wire things up. An earlier and much less detailed simulator written in C was harder to modify. I have also used C++ based simulator for simulating disks etc. So I think OO is fine for simulation!

Who decides what "correct program design" means?

I don't want a computer telling me how to design or implement my software. Think of a program as being a sort of virtual machine. In many forms of engineering, the number of tools and substances available are limitless, there isn't a single tool or method that you have to use for a whole project. You don't build a house out of one material, nor a car nor a tv.

I want every possible engineering tool at my fingertips ready to work with each other so that I can decide what methods are applicable to the problem and apply them as needed.

The "correctness" of my methods are MY problem and not that of some idiotic automated student grader built into presumptuous software.

I want tools that stay out of my way and let me do my work.

Let me add

I think that programmers who are good and experienced don't have to take pedagogy too seriously - they don't have to sweat "good program design". Taking it too seriously either means that you're a professor or that you aren't so experienced yet.

Following all of the rules of good design slavishly actually makes it nearly impossible to be productive. In the end, your purpose is to get the job done.

And the only thing that's really horrible in a correctly working program is something undocumented that other programmers won't be able to figure out, or that you'll forget. If there's a gotcha in there, document the hell out of it.

What a straw man! Your whole

What a straw man! Your whole argument rests on me being either inexperienced or a professor! You have created a new fallacy; let's call it the 'No True Programmer' fallacy.

That being said, I understand why typical OO engineers give up on pursuing good design entirely, given how incredibly inefficient their chosen toolset is at achieving it.

Problems are ill defined,

Problems are ill defined, solutions are not optimal, worse is often better, you gotta ship even if you don't have time to come up with the most elegant solution. You like to throw your Scotsman fallacies around, are you Scottish? :)

Only Part-Scottish, so Not Truly :)

The 'No True X' idiom is a useful device to deal with a very common social problem in programming:

Programmers avoid addressing systemic issues that get raised by assuming flaws such as inexperience or idolatry in those who raise them.

Not only is it very damaging to our craft, it's also tremendously hurtful to those who are so defamed. It's like modern-day witch-burning, and a sad reflection of the state of our collective intellectual integrity.

You haven't defined what "good design" means to you.

Uhm, I meant something about what it takes to be productive, I was talking about work, not about you.

... but in any case I think you need something outside the world of programming to calm down. I don't feel like explaining myself to the overly touchy. I'm going to bed now and abandoning this thread.

Physician, heal thyself

Heh, you accuse others of over-sensitivity, then rage-quit the thread.

Oh the perils of internet psychoanalysis...

Versatility. It's a sort of zen virtue.

In my experience, the best programming languages for just writing programs are those which have "the structure of water" as a zen enthusiast I once knew explained it. They can be deformed infinitely, without stress, to fit the shape their environment requires.

So, yes, I really and truly do prefer a multiparadigm language with a reasonably small, general working vocabulary, that doesn't make design choices for me. I like languages where object-oriented and functional programming (and imperative, and stream-oriented, and data-directed, etc) are all Things You Can Do rather than being The Only Way The Language Works.

C comes close to having the structure of water, but it is badly underspecified, its type system is loose, and it's effectively useless for functional-style programming because too many of its fundamental control structures and virtually all of its libraries are built around mutation and state. Still, it allows you more choice in object-oriented design paradigms than C++ does, and allows imperative, data-directed, stream-oriented, etc. programming without any trouble. Despite its many flaws that's more versatility than most more recent languages show.

Scheme is a more recent language that has a similar structure or lack of structure. You can write a program in pretty much any programming style using the Scheme language. If you're an OO enthusiast, you just need to remember that in Scheme they're called "Closures" instead. If you want bells & whistles like inheritance and subclassing you get to decide how they should work to fit your problem, and bind any of dozens of different OO libraries to suit the design choices most appropriate for a particular problem. Or, you know, write your own, which you can do without breaking anything.

The great virtue of these languages (for writing code) is that they allow the solution to fit the problem. The great failure of these languages (for combining code from different projects) is that they allow the solution to fit the problem so precisely that combining any significant functionality from different programs becomes quite difficult, and the time required to get "up to speed" in a particular project is often the time required to understand the design of the WHOLE program rather than just one restricted part of it.

So they tend to be great for writing new stuff on your own, and not so great for a lot of scenarios that businesses tend to value more. Putting together bits of functionality from a bunch of preexisting projects - you won't find many pieces other than those originally written for the same project that fit together cleanly. Handling turnover in engineering staff - If all the programs are special snowflakes in terms of design, a new engineer has to understand the design of the whole program before s/he can be useful, rather than just understanding one small module.

Languages like Java on the other hand have a strait-jacket built into the type system and the design of their OO system - it's absolutely no more versatile than it absolutely has to be, and therefore promotes very homogenous design. It might be a bad fit for some problems, but it's a common vocabulary and allows the larger projects to be coordinated with engineering turnover, allows mashups to be made with code originally from several different projects, etc. For a lot of organizations, that's more important than the fact that a few really bright engineers could be more productive with something else.

Design duality?

Considering this, I think I could conclude that there are at least _two_ ways a language can help you in achieving good design -

1) Give you constructs that help guide you toward design-correctness, or

2) Get the hell out of your way so that you can more freely navigate design space until you find the answer completely on your own.

For my part, rather than saying one is better than the other, I would conjecture that the two approaches are duals.

Making it easy to do things well

Yes. It is always best if building to a really good design is easier, or at least no harder, than building to any other design. Any language that makes a good design require a clunky implementation while a problematic design has a more straightforward implementation is contributing to a problem. That said, it's never worth making some less-favored methodology deliberately difficult to achieve.

It's enough that the language itself - its control structures and libraries - does not depend on you doing anything that would mess up a given design paradigm.

This is, btw, one of the things I really like about Scheme. Its control structures and basic library have been very carefully designed so that they do not presume or require that you are doing mutation, or indeed adhering to any particular design discipline. You don't necessarily notice it when you're doing something stateful. After all, set! is right there and not at all difficult to use. But if you're doing something and you just never happen to need mutable state to accomplish some specific purpose, you can get all the way to the end of it and have it running before you even realize you never used it. This happened to me once when I wrote a compiler in Scheme. I had not set out to make it a "functional" program, in particular; I just didn't ever encounter a problem that would have required me to maintain state to achieve a straightforward solution.

I did not even realize, until I had finished it, that it was in a pure-functional style. I/O is side-effecting in Scheme unless you implement monads, but otherwise, that's how easy pure-functional programming can be.

Chill

Software development should never be approached with such a default sense of arrogance, because as rigid and idiotic as computers are, they're a million times faster and more accurate at a certain class of tasks than the human brain. You'll want computers checking your work and pointing out your inevitable mistakes--even better if they can automatically or semi-automatically correct them. Not only that, it turns out that not all other people in the world are idiots; it should be our goal to make better use of such resources, including the machines we have at our disposal as well as the programmers who came before and will come after us. You might actually learn something from someone sometime.

And even if you do insist that you're the one who's finally come to set all things right in the world, trust me, you don't have enough time to write all the software in the world, and at some point you'll probably want to use something you didn't write--so you'll want those other poor stupid bastards to at least run _their_ software through decent static checking tools before they foist their crapballs on you.

Architecture

Good program design comes from understanding the problem domain, and has very little to do with what language you program in.

Bad program structure comes from not taking the time to fully understand the problem domain, and from not doing the work within that domain to structure it.

Knowledge of meta-structure is helpful, but only if you recognise the applications.

Languages can then help or hinder the implementation of the architecture, but don't define it. Choose language and applications that fit with the architecture. Refactor the architecture only if language choice is limited.

Waterfalls

You will not fully understand your problem before you start programming unless you do "big design up front" to think of everything (aka the waterfall method). Instead, you might as well start coding, learn what you don't understand, understand more, refactor, and repeat.

I agree that languages aren't as important as we think they are, but your internal language for thinking about your problem is very important, which is also influenced by your preferred programming style and language.

:-)

"Top-down design is a great way to redesign a program you already know how to write." — P.J. Plauger

Agile requires architecture up front too

For work projects, if we consider agile to be minimising the cost of change (which is why we release early, to get feedback as soon as possible so we can change quickly), then whilst superficial changes can be made late in implementation, architectural changes do not work well - consider, for example, changing from a relational to a document database a week before release. So, to minimise the cost of change, architectural decisions should be made before development starts. I find the best way is to use interactive prototyping and spike solutions in an iterative design phase, where functionality is agreed with very tight iterations (one or two days), possible because it is all smoke and mirrors. To be properly agile requires a good architectural foundation that makes dynamic change possible.
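
To make that last point a bit more concrete, here is a minimal sketch (Haskell, with invented names - not from any real project) of the kind of architectural seam that keeps a late storage swap from being a disaster: application code is written against an abstract store, and the relational-versus-document decision lives behind a single interface.

    import qualified Data.Map as Map

    -- Hypothetical abstract storage interface. Application code depends only
    -- on this class; which backend satisfies it is an architectural decision
    -- that can change without rewriting the callers.
    class UserStore s where
      putUser :: s -> Int -> String -> s
      getUser :: s -> Int -> Maybe String

    -- One possible backend: an in-memory map standing in for a document store.
    newtype DocStore = DocStore (Map.Map Int String)

    instance UserStore DocStore where
      putUser (DocStore m) k v = DocStore (Map.insert k v m)
      getUser (DocStore m) k   = Map.lookup k m

    -- Application logic written once, against the interface only.
    greet :: UserStore s => s -> Int -> String
    greet s k = maybe "unknown user" ("hello, " ++) (getUser s k)

    main :: IO ()
    main = putStrLn (greet (putUser (DocStore Map.empty) 1 "ada") 1)

Swapping in a relational backend then means adding another instance, not touching greet and its callers - which is exactly the kind of change you want to remain cheap a week before release.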

For personal projects I prefer to start coding, and as soon as it starts to cause problems with the abstraction I start again. If I can't find a better abstraction I suspend the project and come back to it only if I have an idea for a better abstraction that solves the architectural problem.

You claimed that a full

You claimed that a full understanding of the problem domain was required to put a decent architecture in place at the outset, so what are you then iterating on during your agile phase? Just working out the solution?

I guess I am perhaps quite odd because I use programming as a way of understanding a problem domain.

Design phase involves programming.

In the design phase, spike solutions and interactive prototypes involve programming, but nothing that is necessarily going to be kept for development proper. Spike solutions are deep and narrow programming investigations of areas of high risk that you need to understand to give accurate delivery estimates. Interactive prototypes look like real software to the customer but are shallow and broad. This allows very short iterations and rapid design change. However, I also think more reading and talking before starting to code is a good thing; re-invented wheels are a waste of everyone's time. Use open source wherever possible - there are plenty of MIT/BSD-licensed projects you can use commercially with attribution.

During software delivery you only iterate where what you deliver does not get accepted by the client/product manager. Since you got sign-off on the interactive prototype, that usually comes down to complex interactions not working as intended in the real environment, or to changes in requirements since sign-off.

I spent a couple of years

I spent a couple of years working as a UX prototyper, though for UX design obviously, so we weren't attached to the development team. Still, it raises the question of "when do we start programming?", to which there seem to be many different answers. Should "practice" count or not?

I had this conversation once...

Me: At this point, I'm confident that we can code this in eight months.

My Boss: What progress have you made in the last month?

Me: During the last month, I've ruled out the possibility that there are unknowns that could take more than a year to overcome and also ruled out the possibility that there are shortcuts we could use to get there in less than six months.

My Boss: How did you do that?

Me: I got Josh to code the weird-assed address sort/merge/unify thing that gave the clowns at [redacted] so much trouble, and Patrick put together a user-experience demo that the marketing and sales staff all figured out how to use within two hours. And Marcie figured out what caused the failure at [company name redacted]. It turns out that there's a couple of database tables that look obviously useful, but the cost of maintaining them while the sort/merge/unify process is running goes through the roof once the database gets too big. [Redacted] uses these tables, and when the userbase gets too big their product chokes. Without them it'll take a bit more latency and caching to retrieve the results, but at scale it should be no trouble.

My Boss: You're saying Marcie tried something and it didn't work.

Me: There is no such thing as failure at this stage of the process. Marcie learned something valuable and saved us an expensive mistake. If we'd designed assuming those tables would be there...

My Boss: So we can use Josh's code for the address sort?

Me: No, Josh did it in Haskell, so the actual code won't play nice with any of the languages anybody else would use. It would need its own runtime resources on each deployment machine. The important thing there is that he discovered there's no reason why it should be hard. Now that he understands the sort, he can repeat it in a production language that we all use.

My Boss: Why'd he do that? Couldn't he use a production language in the first place?

Me: 'Cause he can do that in about a quarter of the time it would take him in C++, and I really had to find out immediately whether that was going to be a showstopper.

My Boss: Wait, could we all work faster with this -- Haskell, you called it?

Me: I don't know it very well but, as I understand it, Haskell is great for calculating things - the way Josh was using it - but really horrible for putting up UI's and dealing with user input. Also, if we did it in Haskell, our clients would need at least three times as much machinery to run the software per thousand users. Probably more.

My Boss: Hum. Okay, carry on. I'll get together with the sales staff and make sure they know that even though they've seen the UI demo, we are NOT "almost done and ready to ship". Remember last time somebody tried a UI demo on them?

Me: Oy. I remember. Understandable mistake but still.... yeah, thanks.

But you've already been

But you've already been programming before you've started programming, whether the code is shippable or not. You could do the same with a C# prototype and C# product, using the former to explore and applying best practices to the latter, but doing the former is still part of the latter.

Let me put it this way: How do we come up with a good architecture for our program? Well, we program. It seems circular without acknowledging that programming is part of the design process.

Hum, I think that may be backward...

I think it's more like IF you fully understand your problem, THEN you can fully understand your program by doing big design work up front.

But my experience is rarely in writing code for problems we fully understand before we start getting our hands around them. We start by alternating between doing research, then doing demos and prototypes of subproblems, until we are fairly confident that the engineers have gotten familiar with the problem and the design space - THEN we sit down to do serious program design.

We used to skip the demos and prototypes, and the multiple iterations of prototype/research, but that frequently got us bad designs that blossomed into problems later.

Both parts are needed. You have to do the demos and prototypes and see what goes wrong before you can understand the research papers that explain why it went wrong. And you have to study the research that people have done to know what has caused problems in the past, because that's what you should be doing demos and prototypes of to make sure you understand the issues.

Anyway, it takes a couple rounds of study and code before you're even ready to sit down and attempt big design.

Sounds good

Yes, I think this is more or less the same as what I was trying to get across. In any project there will always be bits you understand well, and bits you understand less well. The less understood bits represent project risk, so you do spike solutions (the prototypes of sub-problems) and demos (the prototypes) to get stakeholder feedback.

The other advantage is you can often guess how long it will take to do a couple of rounds of reading, prototyping and spike solutions so you can give a cost for this part of the project up front. When you have completed this phase, and have done your big-design, then you probably have enough experience to give reasonably good estimates for how long it will take to implement. If you can't give decent estimates you didn't do enough 'spike' solutions on the high risk areas.

What I had in mind when posting this question

I had a partial answer to the original question I posed in the topic of this thread. However, I did not inject it into the conversation, in order to avoid coloring the responses. But now I will do so.

The partial answer I had in mind was that Haskell is an example of a language that guides programs toward correct designs. With its typeclasses, it affords us prefabricated generalizations like applicatives, monads, arrows, et al., which can in turn be leveraged to guide our program designs toward correctness.

For example, when using Haskell, a programmer will often attempt to fit his domain abstractions into those provided by the Prelude typeclasses (or by other libraries). Often the motivating factor is to try to capture the free operational correctness and expressiveness properties that they provide. But a clever or experienced Haskell programmer will go further. He may instantiate these typeclasses over his own domain abstractions in order to transit their _design_ correctness properties as well. With this approach, the technique of leveraging many of these typeclasses is also a technique for achieving design-correctness transitively. And even if the beginning Haskeller doesn't know about the transitive design-correctness properties of these typeclasses, he still attains the benefit when he instantiates them anyway.

One can think of Haskell's many primitive typeclasses as little design templates whose correctness, in part, has already been proven, and which can thus be used to transit said correctness, again in part, to one's own abstractions. And when one can achieve design correctness in the lowest-level abstractions, it makes it much easier to achieve further correctness for the abstractions built on top of them. After all, one's design is never more in peril than when one of one's most foundational abstractions is wrong.
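
As a rough illustration of what "instantiating a prelude typeclass over a domain abstraction" buys you - hypothetical domain type, nothing from a real codebase - consider making an audit trail a Monoid. The instance is a few lines, but the associativity and identity laws act as a small design checklist, and every generic function already written against Monoid now works on the new type.

    -- Hypothetical domain abstraction: an append-only audit trail of events.
    newtype AuditTrail = AuditTrail [String]
      deriving Show

    instance Semigroup AuditTrail where
      AuditTrail xs <> AuditTrail ys = AuditTrail (xs ++ ys)  -- associative by construction

    instance Monoid AuditTrail where
      mempty = AuditTrail []                                  -- the identity element

    -- Generic code written once against Monoid is reused for free:
    combineAll :: Monoid a => [a] -> a
    combineAll = mconcat

    main :: IO ()
    main = print (combineAll [AuditTrail ["login"], AuditTrail ["purchase"], mempty])

The interesting part is not the instance itself but the obligations it drags along: if appending trails were not associative, or mempty were not a genuine identity, the design smell surfaces the moment you try to state the instance honestly.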

This can also be seen, albeit to a lesser extent, when designing in an OOP system. Thanks to 'interfaces' as we know them in OOP, we also have little design templates whose correctness, at least in some cases, can be 'inherited'.

But these are just two examples, and given the many powerful type abstractions and syntactic constructs that have yet to be seen in mainstream languages, I wonder what else is out there that might pay off in kind.

Sorry if I didn't make this clear enough at the outset.

Bug-resistant Languages

General purpose languages usually lack a strong opinion on 'correct' design. Problem specific languages, otoh, might embed a lot of domain knowledge to resist domain specific errors.

However, a general purpose language can help a lot with avoiding general (cross-domain) errors, e.g. involving security, portability, extensibility, or concurrency. Not even a 'good' programmer has the attention and time and global awareness to recognize and avoid the many potential cross-domain errors without help.

I would favor a language that helps me with the mess of -ilities and non-functional properties, such that I can focus my limited attention on a specific problem. This may involve some opinions on program design, but probably not in a domain specific way. Correctness within a domain would be left to my personal efforts.

an API can be a poor man's domain specific language

And a class definition can be an api.

You see where I'm going with this.

API

I agree that an API can act as a DSL, and that an OO class can describe an API. William Cook with Enso, and Jonathan Aldrich with Wyvern, explore this approach more deeply. Linear types are especially interesting in this context, since they can serve a similar role to structured programming... without needing a structured syntax.
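
To gesture at what "structured programming without structured syntax" might look like, here is a deliberately small sketch - invented API, and only a weak imitation of real linearity (GHC's LinearTypes extension would do this properly): a phantom type parameter tracks the protocol state, so using a connection before it is opened, or after it is closed, is a type error rather than a runtime surprise.

    -- Protocol states as empty types; the phantom parameter records the step.
    data Open
    data Closed

    newtype Conn state = Conn String

    openConn :: String -> Conn Open
    openConn = Conn

    send :: Conn Open -> String -> IO (Conn Open)
    send c@(Conn host) msg = putStrLn (host ++ " <- " ++ msg) >> return c

    closeConn :: Conn Open -> Conn Closed
    closeConn (Conn host) = Conn host

    main :: IO ()
    main = do
      let c0 = openConn "example.com"
      c1 <- send c0 "hello"
      let _c2 = closeConn c1
      return ()
      -- send _c2 "oops"   -- rejected by the typechecker: Conn Closed is not Conn Open

Real linear types go further (this cheap version still lets you reuse c1 after closing it), but even so, the usage protocol becomes something the compiler, rather than the reviewer, has to keep in its head.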

It isn't my experience that API design generally considers cross-cutting concerns. That's where I want my PL to help.

There is a danger that the

There is a danger that the PL does too much and it's no longer clear what its reasoning is (incomprehensible static or run-time errors). It is important that the programming models remain simple enough for programmers to reason about while they are writing the code, even if that means sacrificing stronger guarantees.

I think Rust is trying hard to avoid the trap.

Agreed.

I'm especially interested in features we can gain by having our PLs do less, e.g. by taking away ambient authority, or the heap, or jumps.

What's a "correct design"?

What's a "correct design"?

Yeah, the absoluteness of that phrase bothers me.

I've seen too many people treat prescriptions in programming as if they were absolute definitions of "good" and "bad" - value judgements, some analogue of religious reasoning.

But one day's philosophy of what makes good code - one language's, one community's - is totally contradicted by another fad, another language, another community.

For example you might remember that Dijkstra once found Lisp offensive. He found punning program as data and data as program offensive. Obviously since his life was devoted to analyzing algorithms, he found any feature that made programs impossible to fully analyze unacceptable. But there are other purposes to algorithms than writing CS textbooks analyzing them, and expressiveness is useful.

I was just trying to understand the phrase

It didn't bother me; I just wanted some clarification but I see that Bryan explained it in a later comment.

More than one design can get you to a correct program but one design may be qualitatively better than another; so to me the "quality" of a design is the key attribute but it is hard to pin it down.

Like Ray, I too find that Scheme helps me focus on the problem I am trying to solve and not get bogged down in fighting the language, and that helps in improving the design quality. When I am prototyping something, the single most important factor for me is to get a working prototype so I can play with it and quickly iterate & evolve the design (which includes making mistakes and recovering from them). So for me Scheme is more like "modeling clay" or Lego Technic blocks (rather than a zen-like "structure of water"), which allows me to build up a program a little bit at a time. Of course, in a commercial setting one doesn't always have the luxury of choosing a favorite language or of having sufficient time, but I still try to prototype and iterate.

But a prototype is not a shippable product. A product must also withstand real life stresses so it is important to not just test correctness but also scalability, performance, ability to withstand nonsense input, resiliency and so on. The common case must be handled efficiently and the corner cases must be handled even if not efficiently. And you need logging and monitoring if you are writing a server of some sort. All these requirements are hard to capture in a language. Good tooling can help here and often you have to create some percentage of tooling on your own.

Racket

is tantalizingly close to being good enough to ship products. It's very efficient, but has a few problems. A "stop the world" garbage collector that probably doesn't scale is a big pain in the neck. Also, too many of its basic functions grab a global mutex, so it's not so great at using multiple cores even though it uses system threads.

It has a gradually typed dialect, and that gets rid of a lot of the objections to Scheme.