From Programming Language Design (PLD) to Programmer Experience Design (PXD)

This is a fork of another topic.

I think it is high time that we stop talking about programming languages in isolation from the tools that support them. The tools have always depended on the languages, obviously, but increasingly the languages depend on the tools, especially in professional language design contexts. Dart and TypeScript are extreme examples of this, where the type systems exist primarily to support tooling and much less for early error detection. As another example, C# design is heavily influenced by Visual Studio.

Designing a language invariably involves tradeoffs, and if we accept tools as part of the core experience, we are able to make decisions that are impossible given the language by itself. For example, many people find type inference bad because it obscures developer documentation, especially if it is not completely local. However, given an editor where inferred type annotations are easily viewed when needed, this is no longer a problem. Likewise, a type system that cannot provide good, comprehensible error messages in the tooling is fundamentally broken, but tooling can be enhanced to reach that point of comprehensibility. Type systems in general are heavily intertwined with tooling these days, playing a huge role in features like code completion that many developers refuse to live without.
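
To make the inference point concrete, here is a small TypeScript sketch (the names are invented for illustration): the type of config is never written in the source, so a bare text editor leaves the reader to reconstruct it, while an editor backed by a language service can simply show it on hover.

// The return type of loadConfig is inferred, never written out.
function loadConfig() {
  return { host: "localhost", port: 8080, retries: 3 };
}

const config = loadConfig();
// In plain text, the type of `config` lives only in the reader's head.
// An editor with language-service support reveals it on hover:
//   { host: string; port: number; retries: number }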

And of course, there are huge advances to be had when language and programming model design can complement the debugging experience.

There are alternative opinions about this; to quote and reply to Andreas from the other topic:

Only if you do so for the right reason. Tooling is great as an aid, but making it a prerequisite for an acceptable user experience isn't. A language is an abstraction and a communication device. Supposedly, a universal means of expression, relative to a given domain. It fails to be a (human-readable) language if understanding is dependent on technical devices.

This is a good point: code should stand on its own as a human communication medium. However, even our human communication is increasingly tool dependent, as we rely on things like Google to extend our capabilities. We are becoming cyborgs whether we like it or not.

There are many other places than your editor where you need to read and understand code: in code reviews, in diffs, in VCSs, in debuggers, in crash dumps, in documentation, in tutorials, in books printed on paper.

Code isn't bound to paper. Even when printed out on paper, it does not need to be printed out as it was typed in.

You can try to go on a life-long crusade to make every tool that ever enters the programming work flow "smart" and "integrated", and denounce books and papers as obsolete. But that is going to vastly increase the complexity, coupling, and cost of everything, thus slowing down progress, and just creating an entangled mess. And in the meantime, every programmer will struggle with the language.

This is more of an observation about current and future success of programming experiences. I would say it has already happened: successful programming languages already pay these costs, and new languages must either devote a lot of resources to tooling, or build them in at the beginning, or they simply won't be successful. Some people wonder why language X isn't successful, and then chalk it up to unenlightened developers...no.

Nothing ideological or academic about that, just common sense.

Since an appeal to common sense is made, I would just say we have different world views. We at least exist in different markets.

Base language

By selecting a PL — whether by designing it yourself or adopting one designed by someone else (or, likely, something between those extremes) — you place limits on its future use; although I often say "programming is language design", that moment when you fix the choice of base language is when the limits happen. After that moment, you can create all sorts of tools to make the language "go" further, and you can construct abstractions within the language to make it "go" further, but the limits on both of these things are contained in the language you chose to start with.

From a language-design perspective, I'm suspicious of tools because I see them generally being used to try to make up for inadequacies of the language design; in fact, some people seem to have a "why bother" attitude toward the language design, where they expect the tools to do everything. I've always figured a language should be designed to maximize what can be done within it by abstraction; but whether the same is true for tools, I'm less sure. Designing a language for the sake of the tools one means to use with it seems prone to compromising the basic integrity of the language. I'd rather see the language designed so it's thoroughly awesome when used with nothing but a text editor and bare-bones compiler/interpreter, and then soup it up with awesome tools from there (but I'm also reminded of a remark from somewhere, long ago, about software speed: that for every nine hours you spend "cycle-shaving" on the finished product, you'd have done better to spend one more hour ahead of time coming up with a better algorithm).

[I recall a guy on my college dorm floor who was heavily into airplanes; he had posters of high-tech planes all over his dorm room walls, like that USAF thing with forward-swept wings which was then a quite recent development; of course, forward-swept wings make an inherently unstable aircraft that requires a computer to fly. One day we happened to watch an episode of something-or-other on the TV in the dorm lounge where the "chase scene" was a dogfight that included a barnstormer's triplane, and he remarked with real reverence in his voice that 'those things can fly at thirty miles an hour without stalling'.]

From a language-design

From a language-design perspective, I'm suspicious of tools because I see them generally being used to try to make up for inadequacies of the language design; in fact, some people seem to have a "why bother" attitude toward the language design, where they expect the tools to do everything.

Isn't that a fallacy though? Focusing on the experience doesn't mean we start doing a crappy job on the language design; it just means that tradeoffs can be more optimally distributed.

I'd rather see the language designed so it's thoroughly awesome when used with nothing but a text editor and bare-bones compiler/interpreter, and then soup it up with awesome tools from there

But then you (or someone else?) also argue that too much type inference is evil, because you can't see the types and the error messages are obtuse...since given a text editor and a command-line compiler, that is all you are going to get! By making the language stand on its own, you have already doomed it to a series of sub-optimal design decisions that naturally follow.

Triplanes are great for some tasks, just don't put them up against an F-22. Fly-by-wire is amazing: the plane is in a constant state of instability, and manual rudder control is impossible.

By making the language stand

By making the language stand on its own, you have already doomed it to a series of sub-optimal design decisions that naturally follow.

I doubt this. Here's an alternative conjecture: any language feature that would be bad for the bare-bones language would still be bad when augmented by tools. That is, the best experience with tools is founded on the best experience without tools. I'm open to being convinced otherwise, but note that just because some feature X doesn't work well in bare-bones languages, and bare-bones languages with X can be made useful using tools, does not preclude that tools might have done even better augmenting a bare-bones language without X.

How would you develop an

How would you develop an experiment or proof to justify your alternative conjecture? It seems very idealistic, and you're clearly placing a much larger burden of proof on "being convinced otherwise" than on accepting your conjecture.

I think there are a lot of cost thresholds, or activation thresholds, in PX and UX (and emergent systems in general). If some activity is too difficult, it doesn't happen. Even if it were theoretically possible to do with plain text anything you can do with structured content, it won't matter if the effort is high. Every little barrier to development is relevant. Designing for tooling is about shifting costs around, making it cheaper to create a desired toolset such that the costs fall below acceptable thresholds.

I would be very interested if you were to develop a language where tooling costs are very low but the language itself is convenient for plain-text reading and writing by humans. Unfortunately, a lot of conventional features aimed at making a system more convenient for filesystems and plain-text editors, or more expressive and readable in plain text, tend to create barriers for tooling - e.g. sophisticated syntax, namespaces, overloading, external build systems and package managers, dynamic dependencies on external data files for content that doesn't fit nicely into plain text, etc..
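
As one illustration of that last point, consider overloading, sketched here in TypeScript terms (the describe function is hypothetical): the declarations read pleasantly in plain text, but a tool that wants to find the definition behind a call site, rename it, or manipulate it safely must first run enough of the type checker to resolve which declaration applies.

// Overload declarations read nicely in plain text...
function describe(x: number): string;
function describe(x: string): string;
function describe(x: number | string): string {
  return typeof x === "number" ? "num:" + x : "str:" + x;
}

// ...but resolving which signature a call site refers to requires
// type analysis, not just parsing:
const a = describe(42);    // first signature
const b = describe("cat"); // second signature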

Sure it's idealistic

Sure it's idealistic. It needn't, by that, necessarily be wrong.

These two contending conjectures have to be weighed against each other, I think, by considering cases: too forward-speculative to subject to proof or, likely, to practical experiment, so thought experiment would seem the available option. Consider sorts of language features, sorts of tools. To attempt a better articulation of my above caveat, suppose we're considering a feature X that, if one were designing a bare-bones language, one would reject. Presumably, we've already considered bare-bones with X and bare-bones without X; we'd have to have considered those, to reach our supposed rejection of X for bare-bones. My intended point was that, if we want insight into the contending conjectures, we would have to also consider, carefully, both tools with X and tools without X. Otherwise, an enthusiast might say, "see, tools can enable X to work after all", which is good to know but doesn't differentiate between the conjectures.

agree to disagree

Yes, the tools can become a crutch for bad languages. But let us ignore those experiments. Let us consider people (like those on LtU) who know that the language must still nevertheless not suck, no matter what the deal is with the tools.

Then we can agree to disagree on whether tools are worth adding to the language ecosystem. Personally I heartily say, "hell yes". I find it hard to believe that anybody who has used Emacs vs. an IDE would say there is nothing good about the IDE over Emacs. I say this as an Emacs nut -- any language I use has to have an Emacs mode or else I'm not interested. ;-)

The question, I think, is

The question, I think, is not whether tools are worthwhile, but whether one ought to take tools into account in the design of the bare-bones language. This is all rather abstract, so there's not much to build a reconciliation of views from. Here's a very small definite example. Remember the "goto fail" bug, early last year? The short explanation of the error (as gasche described it in the LtU discussion, here) was

if (foo())
  goto end;
  goto end;      /* not guarded by the if: always jumps to end */

important and useful stuff...

end: cleanup

When that LtU thread started, it seemed so blindingly obvious to me that the language shouldn't have had such an error-prone syntax for its if statement, that I fully expected that point would be made promptly in the discussion at a language-design-savvy site like LtU, so I didn't make the comment myself. For a while. But then it got clear nobody was saying the "obvious", so I did — and, to my amazement, actually got a bit of push back on the suggestion (not that others didn't chime in to agree with me). I actually had to write a separate post to make the (I'd have thought, equally obvious) observation that I wasn't somehow suggesting dead-code detection tools aren't useful.
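
For readers who want the shape of the fix being debated, here is the same trap and one grammar-level remedy, transcribed into TypeScript syntax, which inherits C's single-statement if (the functions are hypothetical stand-ins):

declare function foo(): boolean;
declare function handleError(): void;

// The trap: indentation suggests both calls are guarded,
// but only the first statement belongs to the if.
if (foo())
  handleError();
  handleError(); // always runs, regardless of foo()

// A grammar that requires an explicit block (or an end-marker
// keyword) makes the error unwritable:
if (foo()) {
  handleError();
}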

In this case, the language

In this case, the language could have included indentation in its definition, and the editor could make it blindingly obvious that an unconditional "goto end" was there.

Syntax is a practical concern of clarity and aesthetics; it is also core to the editing and reading experience, so it is insane that it would be considered separately from those.

the language could have

the language could have included indentation in its definition,

Which would be part of the definition of the syntax of the language, yes, although it's a rather inflexible choice, likely to get in the way of the programmer choosing the optimal format for a particular situation; I'd favor something such as an endif keyword. Still, either way, syntax of the bare-bones language.

the editor could make it blindingly obvious that an unconditional "goto end" was there.

There's an example of what I mean. A language that only works right if you use a certain editor with it is a bad language.

Syntax is a practical concern of clarity and aesthetics, it is also core to the editing and reading experience so it is insane that it would be considered separate of those.

There's a major modern trend, I've observed it for some years now in such a broad range of situations I don't know how to do justice to the sheer scope of the thing, toward choosing options that are inherently unstable and then expecting to keep the balls constantly in the air. Which fails catastrophically when it fails. I thought of that trend when I heard about the Segway, that has to constantly adjust itself just to stay upright, so if the battery fails it just falls over. I thought of it especially when I heard about replacing the repealed simple prohibition from Glass-Steagall with a complicated regulatory mechanism, because of what that implied about preferring a complicated solution that has to be constantly monitored over a simple solution that you only have to not break.

Given the choice between a programming language that makes you more likely to get things right even if you write it with a minimalist text editor, and a programming language that makes you likely to get things wrong unless you use a special editor, the first is preferable to the second. There's no excuse for failing to design a programming language so it's error-resistant with a minimal text editor.

Fly by wire works this way:

Fly by wire works this way: your modern fighter jet is in a constant state of instability; the only reason it doesn't fall out of the sky is a computer driving continuous micro-adjustments. The pilot no longer has control over stability.

Now, why bother with all that complexity? Not to mention the failure mode is huge: if the computer crashes, the plane just falls out of the sky. It turns out this enables a lot of agility and lowers component costs, not to mention improving reliability, and fighter jets (and today even passenger jets) without FBW are not competitive...the marketplace demands that the more complex system wins.

The same is true with programming. Ya, you could use an old fashioned text editor and a language that is editor agnostic. But if something comes along that integrates to achieve a 3-10x productivity increase, you no longer have that choice...your peers who choose the new system will simply outcompete you.

There's no excuse for failing to design a programming language so it's error-resistant with a minimal text editor.

No. This is not how the world works.

The same is true with

The same is true with programming.

I've yet to see a reason to think so.

Ya, you could use an old fashioned text editor and a language that is editor agnostic.

How could you make it editor agnostic? I can't imagine how you could prevent the right sort of editor from being an improvement. You seem to think that improving the behavior of the language when written with a minimal text editor would somehow damage the ability of an editor to be an improvement; is there evidence in favor of that belief?

I've yet to see a reason to

I've yet to see a reason to think so.

Are the programming languages you use still optimized for punch card input? Is 72 columns really optimal?

You seem to think that improving the behavior of the language when written with a minimal text editor would somehow damage the ability of an editor to be an improvement; is there evidence in favor of that belief?

Designing for the lowest common denominator has detrimental effects in practice: you try to optimize for EVERYTHING and wind up optimizing for no one. By saying that the language must be functional in notepad, you basically made a decision to dumb it down for that context.

By saying that the language

By saying that the language must be functional in notepad, you basically made a decision to dumb it down for that context.

That's the claim on your part I'm asking for some evidence to support. By describing what I'm talking about as "dumbing down", you're assuming your conclusion. It's only "dumbing down" if it requires giving up something in the upscale-editor case; I've expressed an interest in what that would be, and I'm still interested.

revenge of punched cards

Narrow text columns are a problem when you want long and unique names, which might be hidden by a smart editor, but get serialized verbosely in plain text. Several times Sean has mentioned an idea about disambiguating symbols via means that imply a system is auto-assigning very long names to some things.

I like code in 80 columns, so I can see as many windows side-by-side as possible, without lines wrapping in displayed code. It's hard to get by with only four or five views at once, and only two would be a disaster in window shuffling.

Some folks use screen real estate to write very long lines of code, because the only thing they care about is what they write at the moment, without considering what happens when you need to compare and contrast widely separated parts. Folks who do a lot of maintenance learn to like narrower columns.

Long and descriptive names consume a lot of screen real estate. I hate code that devotes one line of code to each argument passed to a function, because each argument is an expression 32 characters long, and is already indented a lot. There's a conflict between making code small enough to see a lot at once, and making each thing self-describing to the extent further docs are not necessary.

column width

As a user-interface principle, it's harder to read super-long lines than short ones. Newspapers divide their text into columns which, I believe, are a bit narrower than 80 characters. Likewise, good web design limits its primary text to a maximum width which isn't too far from newspaper practice.

Maybe the best programmer experience design would be to enable, support, and encourage programs that can be written, viewed, and edited in narrow columns?

Ya, narrow columns are

Ya, narrow columns are better than wider ones. But we still lack multi-column code formats (you can put buffers side by side, but code usually doesn't wrap).

I'm assuming that you want

I'm assuming that you want the programmer to have a good experience when all they can use is Notepad. Heck, let's throw on a bunch of other archaic constraints, like they can only compile once a day (that used to be a thing back in the 60s), or code must be readable by non-programmers (again...COBOL).

Every constraint UNIVERSALLY creates a tradeoff. Sometimes these constraints are great for creative lateral thinking that can lead to new inventions, say, the language must be usable via touch on a tablet. But sometimes the constraints are just reactionary throwbacks to a less fortunate past where we had nothing better than notepad or emacs, and it would be too expensive to provide a more advanced IDE.

Out of peripheral curiosity,

Out of peripheral curiosity, have you ever used COBOL, for an application of the sort it was designed for?

Every constraint UNIVERSALLY creates a tradeoff.

You're overlooking the possibility that the effects of the constraint are redundant to the effects of other constraints otherwise in force. A constraint can't create a tradeoff if the tradeoff already existed.

But sometimes the constraints are just reactionary throwbacks to a less fortunate past where we had nothing better than notepad or emacs, and it would be too expensive to provide a more advanced IDE.

I understand that to imply that wanting source code to be editable with a text editor is a reactionary throwback. I've watched for decades while proprietary data formats live their lives, like mayflies, while the formats that last... are text.

I have never used COBOL

I have never used COBOL before. That was before my time.

Constraints necessarily narrow your design space. Yes, you can have constraints that are not relevant because of other constraints, but editors have a huge effect on the experience and are not one of those.

I understand that to imply that wanting source code to be editable with a text editor is a reactionary throwback.

I'm implying that working with design constraints from the 1970s will lead to 70s-style artifacts. Ya, we can probably do better than we could do in the 70s with these constraints (we know more!), but what if we instead dealt with the constraints of today rather than yesterday?

I've watched for decades while proprietary data formats live their lives, like mayflies, while the formats that last... are text.

This has nothing to do with serialization and persistence formats, which are orthogonal.

Ah, COBOL

I have never used COBOL before. That was before my time.

I had a chance to use it on a summer job in (iirc) the late 1980s. Fascinating experience; I found it an (unexpectedly, given its reputation) well-designed language, elegantly powerful for its intended purpose.

working with design constraints from the 1970s

If one dismisses text as a seventies thing, one is then conspicuously unable to explain the success of wiki markup.

I suggest that if a format can't be accessed without specialized software, it will eventually not be accessible.

A lot of things were just

A lot of things were just too expensive in the 70s. Garbage collection was a niche thing (Lisp, Smalltalk), and it wasn't until the 90s that hardware caught up with it and made it a mainstream thing. Bitmap displays were around, but so were plenty of terminals, and some people were still using punch cards...the Mother of All Demos was in 1968 and it would take a couple of decades for that tech to catch up. But we advanced.

We aren't going back there.

I suggest that if a format can't be accessed without specialized software, it will eventually not be accessible.

Again, irrelevant. The serialization format can be as verbose and readable as you want it to be. We can jury-rig a machine that etches code onto stone so that it will be readable for thousands of years.

classic talking-past-each-other

I sense each of us is saying things the other perceives as irrelevant. I wish I entirely understood why that is; there's something deeper going on, that probably isn't what either of us is saying yet, getting at which is why I'm still pursuing this branch of the discussion despite its frustrations.

Garbage collection seems to me an irrelevant example, because it's orthogonal to the bare-bones/tools dimension (or, if it isn't orthogonal, it supports my side).

A format becoming inaccessible is totally dissimilar to a medium deteriorating: etching onto stone would be, presumably, an attempt to store data on a medium that'll last, but it doesn't preserve the data if it's written in rongorongo. On the other hand, electronic data can survive quite a while by being copied repeatedly even though individual copies may have a limited life span (though I grant copy errors would get more likely on a scale of centuries); but good luck using an old spreadsheet in a data format that was abandoned thirty years ago.

I feel like we are just

I feel like we are just valuing different things, like with self-driving cars. The value systems will work themselves out in the marketplace eventually, at least. We are only trying to predict where the future successful advancements of programming languages will eventually go, rather than whose ideology/values are more subjectively correct. In the end, the former is all that really matters.

just valuing different

just valuing different things

Yes, that seems so.

We are only trying to predict where the future successful advancements of programming languages will eventually go, rather than whose ideology/values are more subjectively correct.

True, though with the exciting twist that we also get to help shape the future. The observer is not separate from the observed. I've thought about writing an SF novel, but, so far, I've been more driven to build the future than write about it.

There is gold in those

There is gold in those mountains if you look in the right place; otherwise you might come up empty-handed! This debate is important in deciding where we focus our energy looking for the next big thing.

Building is better than talking, but I can't code for 12 hours a day!

giving something up

It's only "dumbing down" if it requires giving up something in the upscale-editor case; I've expressed an interest in what that would be

A few examples:

  • You cannot easily hyperlink dependencies in plain-text. That's a hyper-text feature.
  • You cannot easily have interactive or tutorial documentation in plain-text. You're stuck commenting stuff with dead words.
  • When optimizing for plain text expressiveness and readability, and filesystem integration (to leverage the common text editor), it is common to introduce features such as overloading or multimethods, operator precedence, namespaces, imports, code-walking macros. Such features are convenient for humans, but greatly increase the costs for tools to consistently understand or safely manipulate code (since they need more sophisticated parsers, linkers, code analysis, etc.).
  • You cannot effectively express and edit images, game levels, fonts, or music in plain text, e.g. when creating a game. You'll likely export those software development tasks to external tools. Your program will become entangled with the filesystem. This in turn may complicate procedural generation, partial evaluation, static analysis, portability, etc..

That's just a small taste of what you're giving up. There's a lot more: visualization of behavior, multiple editable views of code, tables and spreadsheets, live programming, collaborative development, fine-grained dependencies and packaging.

A lot of Bret Victor's videos over the last few years demonstrate software development concepts that would be difficult to achieve with a language designed for effective use with Notepad. The videos "Stop Drawing Dead Fish" and many others are worth watching.

Maybe plain-text programming is a Blub, only obviously weak to those who have studied or attempted to develop richer programming models or tools.

Not his point

I don't think you're responding to John's conjecture. He isn't arguing that text is best. He's wondering whether non-text tools might as well be layered on top of a well designed text-only language. Sean's position has been that you have to design the tools and language hand-in-hand.

I think the counterexample to John's conjecture would be an example where the program format depends heavily and usefully on interaction. Something like Coq's Ltac, which is borderline unreadable as a text language. But then, maybe that inability to be read as a transcript means that Ltac isn't actually a good language and so isn't a counterexample after all.

Is the language what the user types or what the user sees? With interactive systems, those don't have to be the same. That's kind of the point. For the record, I'm with you and Sean: design for the end experience. I'm personally trying to have a usable text transcript available, but that's not the priority.

the conjecture

He's wondering whether non-text tools might as well be layered on top of a well designed text-only language.

Review my third bullet point. It's one I consider very important, due to cost thresholds, which magnify the effect of increases in cost. It seems to me that 'well designed' for text-only involves a long series of tradeoffs that are bad for layering tools after the fact.

I think the counterexample to John's conjecture would be an example where the program format depends heavily and usefully on interaction. Something like Coq's Ltac

Yeah, that's another good example.

Heh

Well, dmbarbour's naming some specific things that are desirable but hard to do with text; the injection of specificity is refreshing. :-)

Coq Ltac is an interesting specific thought, too. :-)

It seems there may be room for a text-based language to contain provisions for more sophisticated uses; arguably that's exactly what any markup or programming language is, after all, text meant to be interpreted by other software.

Thank you

Thank you; that's a thought-provoking list. It does occur to me (though it's late here; I need to turn in) that a bunch of that stuff is not prohibited by text, if the text engages a good markup language. For example, the first bullet can likely be handled by something akin to wiki markup, the success of which has been due to its being an eminently human-manageable text format. The one about multiple editable views of code is, off hand, the one I find most interesting.

Text is more portable

I think the point is you write software in your head, not while sat in front of the keyboard. Entering a program is a very small part of the process. It is better to be able to communicate programs and programming concepts without the computer needing to be there. As such I might write books on programming that need to be understandable without the IDE. Programs have to make sense on the printed page.

Declarative code is clearer and more understandable, and that means code should not be interactive but static like a mathematical equation.

I think, however, all those features you list can be provided by the tooling in a way that doesn't spoil the readability and writability of code.

- A syntax-aware cross-referencing tool that maintains an index of the code. Pass in any function or variable and it finds the definition (you would need to give a source location to disambiguate different scopes).

- The IDE can extract comments from inline syntax and convert them into tooltips or margin notes.

- Overloading and multi-methods are features of generic programming and are good things irrespective of the method of programming. I don't see anything in this list as really being an issue.

- Any media editing in the IDE is likely to be inferior to dedicated tools (Photoshop etc). Also, different people prefer different tools, so IDE integration is bad for choice and competition. A competitive market for tools is better for the user in the end to avoid stagnation. Another problem is that it would limit the language to media formats supported by the IDE; the actual binary format would have to be shared by the IDE and the program being written. I think for this the IDE could provide a media library that can be used in the code, and the language should support user-defined literals, so that a JPEG image would be Base64 encoded in the text which the IDE can display directly (a sketch of this literal idea follows below).
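
Here is a rough sketch of that user-defined-literal idea in TypeScript terms (the jpeg tag and Image type are inventions for illustration): the image bytes sit as Base64 directly in the source text, so a plain editor still sees valid text while an IDE could render the literal as a thumbnail.

type Image = { mime: string; bytes: Uint8Array };

// A tagged template serves as the user-defined literal.
function jpeg(strings: TemplateStringsArray): Image {
  const b64 = strings.raw[0].replace(/\s+/g, "");
  const bin = atob(b64); // decode the inline Base64 payload
  return { mime: "image/jpeg", bytes: Uint8Array.from(bin, c => c.charCodeAt(0)) };
}

// The first few bytes of a JPEG header, purely for illustration:
const catPic = jpeg`/9j/4AAQSkZJRg==`;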

So none of those arguments are really convincing for me. I think languages need to have an unambiguous linguistic representation (that is I can talk about code and write it down on paper). I think adding support for better cross referencing and annotation is a good idea.

I think it's fine for the IDE to manipulate the language's data-structures directly, and I think the compiler should provide an API for IDE writers. Effectively the compiler should be split into a library and command-line tooling, and should probably use some kind of database instead of intermediate files.
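
A sketch of what that compiler-as-a-library surface might look like, again in TypeScript terms (all names here are hypothetical): the command-line tool and the IDE become two thin clients of the same service.

interface SourceLocation { file: string; line: number; column: number }
interface Diagnostic { location: SourceLocation; message: string }

interface CompilerService {
  update(file: string, text: string): void;                 // incremental reparse
  diagnostics(file: string): Diagnostic[];                  // errors and warnings
  definitionOf(at: SourceLocation): SourceLocation | null;  // cross-referencing
  completionsAt(at: SourceLocation): string[];              // code completion
  emit(file: string): string;                               // batch compilation
}

// The command-line front end reduces to a small client:
function compile(svc: CompilerService, file: string, text: string): string {
  svc.update(file, text);
  const errs = svc.diagnostics(file);
  if (errs.length > 0) throw new Error(errs.map(d => d.message).join("\n"));
  return svc.emit(file);
}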

portability etc.

Text does have nice properties for portability. We've mostly settled on a common standard of ASCII/UTF-8 instead of the historical messes of codepages and Big5 (and EBCDIC, etc.). But text certainly isn't unique in having nice properties for portability. A simple bytecode can also be very portable, for example, and requires much less explanation of semantics and linking models than a typical plain-text PL.

In any case, if a language specifies a simple (maybe even textual) import/export format or mime-type, that's sufficient to address portability and backup concerns even if unsuitable for direct human reading and writing (e.g. due to size, density, organization).

I strongly disagree with your second-person characterization that "you write software in your head, not while sat in front of the keyboard".

In my experience, a lot of programming relies on feedback. I don't frequently build fifty-line functions in my head and then lay them out. I get an idea for what I want, I search libraries for available tools to do it, I begin writing something out, I might backtrack a little as my ideas and approach solidify. I react to compiler errors and fix those, as a common basis for cleanup. I sometimes use type holes to get some extra clues for how to fill a gap. If writing HTML generation code, I might test the results in the browser and tweak and twiddle until satisfied. Refactoring is often a reaction to seeing a pattern in code. I doubt I'm alone in this development style.

Regarding books etc.: I don't have much motivation to explain computing and programming concepts to people who aren't equipped to try it out. Rather than learning from a static book, why not have more interactive tutorials? We should stop drawing dead fish.

I agree that declarative code has a lot of nice properties, and let's have more of that. But I disagree with your tacked-on conclusion, "that means code should not be interactive but static". Spreadsheets are a fine example of how declarative and interactive can be married. Coq Ltac, too. Live coding with declarative languages is also quite feasible.

Theoretically you can, with enough effort, implement tools even for a language and constraints that are relatively hostile to advanced tooling. In practice, you probably won't, or the result won't be as robust. Design for tooling is about affordances, shifting the costs so that tooling becomes an easy and natural approach. Asking what can be provided is simply missing the point. Ask instead what is easy enough that we can reasonably expect it to actually happen.

There are many ways to achieve generic programming. Not all of them are human-syntax-friendly and tooling-hostile like overloading and multi-methods.

You seem to be assuming that media editing in the IDE would primarily be built into the IDE, e.g. it comes with support for JPEG and a built-in JPEG editing tool. My own assumption is that media editing is mostly a library concern, with the IDE providing a simple API. Thus the tooling for media types would be a lot more portable, extensible, and even competitive.

My Best Ideas Happen Whilst I'm Asleep

Although you see coding as an interactive experience, I think you underestimate the amount of sub-conscious processing the brain does. The important thing is the cognitive model (the semantics of the language), not the syntax. If the semantics are unpredictable then it's difficult to reason about the program.

If the semantics are suitable for humans, then humans will develop a language to discuss the semantics. "I want to iterate over a collection of widgets, inverting each one...". So even if you don't provide a linguistic interface, humans will create one. You then have the problem of two interfaces and two ways to think about coding (visual and linguistic). I think that where humans are involved the linguistic will eventually win.

This means languages should try to be more like existing written languages (say English), but without the fuzzy definitions. So I think static text will be chosen due to fitting better with the way people think.

An example of this is electronics design, where visual component tools have been available for years, but VHDL and Verilog have pretty much replaced a lot of interactive visual editing. I think it's because people's linguistic processing ability is much more sophisticated than their visual processing. They can internalise the code quicker and then manipulate it internally. You can see this when people edit code: very rarely do they make a single text change; most often they have pre-planned a series of edits to achieve the required re-factoring. They can do this in their heads because of the linguistic nature of code. By externalising these transformations you limit the achievable transforms to those supported by the IDE. Manipulation of linguistic representations is only limited by the human imagination.

When I write a program, I already know what architecture and syntax I want to use. I know what data-structures I want. It's like writing an essay: the IDE is there like a grammar and spell-checker, but I already know what I want before I start typing. I personally use Vi for editing as I don't like to get feedback before I am ready. I frequently adjust code as I am writing it, perhaps only sketching sections in, and skipping around leaving invalid partially complete syntax before coming back to it. I then get feedback from the compiler when I have something I think is ready.

Having said all that, I think code organisation in large projects is a problem. I would really like an editor that shows only the functions and datatypes used in the current code context. So when I write a function that uses a datatype, I can see its definition, or the definitions of functions I call. But I would like this laid out like a plain text page. I really don't like side-bars, tool-bars, sub-windows or popups... the more it looks like a plain text page the better, but I don't mean like an 80-column dos-text, more like a nicely formatted interactive document (if that makes any sense).

Ideas aren't Code, etc.

I agree that ideas can occur at any time. I often have good ideas in meetings, or showers, or meal times, or while sitting in front of unrelated code. But putting it into code does a lot to refine and concretize an idea. Ideas aren't fully formed, and certainly aren't executable by a machine, without a lot of massaging. Or, at least, my ideas aren't.

Regarding your first few sentences: I don't see an interactive coding experience as incompatible with sub-conscious processing. I also do a lot of sub-conscious processing when driving from home to work, but that doesn't make it any less interactive.

I do agree that a simple, predictable, cognitive model is very valuable. Effective support for local reasoning and compositional reasoning are among the values I rate most highly in language design. Much higher than 'readable' syntax. Any language designer who reaches this conclusion must make a decision: Shall I sacrifice a readable plain-text syntax in order to further improve the cognitive model? My answer was yes.

I agree that people will still create linguistic interfaces. Naturally, if a system is flexible enough to support user-defined visual EDSLs for image editing, it's certainly flexible enough to build many textual languages above the structured one. This is something I actively considered before I reached my answer above.

Visual and interactive programming has a lot of use cases even in a system where various plain-text DSLs might be dominant. Assuming a programming environment where mixed modes can peacefully coexist, I fully expect they will do so. You seem to be assuming a competitive environment where there can be only one, eventually, after enough seasons.

Regarding transforms on code, you again seem to be assuming the IDE supports a fixed set and thus we're "limited by the IDE". An IDE might certainly have a limited set of built-in transforms, but I think this is an area (the same area as support for media types, mentioned earlier) where IDEs should draw from libraries. Upon doing so, the set of transforms becomes extensible and portable (and competitive). And the IDE becomes simpler.

Re: "I already know what I want before I start typing." - That may be the experience for you and some subset of other programmers. But I think it would be unwise to generalize. Some fields of programming are a lot more R&D than others. Some fields more aesthetic and inherently require a lot of tweaking and tuning. Some modes of programming are a lot more reactive or interactive. The preferences and habits of people vary, too.

I agree regarding code organization in large projects, modulo blindly requesting definitions in plain text. I would love to have more interactive documentation, and a rendering of automatic QuickCheck-like sample input and outputs (especially around border cases).

Difference in Emphasis.

I think there is a difference in emphasis here. I am not against any of these user-experience enhancements, but I don't want to lose the ability to treat code as plain text, so I can still write code on paper when I don't have a computer near me.

I think a language implementation should be a library, of which a text/file front end and an I.D.E. can be clients. I would give equal importance to each, but there should be no features that prevent you having a plain text representation of the program.

Agree

This is my perspective as well. I think designing the programming language for end user experience means making sure you have the best IDE experience possible but also that you support a reasonable text encoding.

design decisions

I don't think you'll lose your ability to write code on paper regardless, whether it be sketching out an FBP diagram or a Kripke structure or a graph or a linguistic DSL or even a little pseudo-code that can be translated later on. Plain text is vastly more restrictive than what you can do with pen and paper.

I'm interested in alternatives to desktop programming, i.e. the conventional KVM (keyboard, video, mouse) setup. Augmented reality might make it a lot easier to 'program on the fly' with pen and paper, as an alternative to waving your hands about in the air and suffering 'gorilla arm'. I imagine sketching and graphs would be a big part of that, perhaps combined with haptic approaches (e.g. arranging a few physical objects).

a language implementation should be a library, of which a text/file front end and an I.D.E. can be clients [..] there should be no features that prevent you having a plain text representation of the program

A text/plain front-end and filesystem integration become constraints on design, filters on the feature set, a lowest common denominator. You'll sacrifice a lot of opportunities to get there, at least if you want your language to be conveniently readable and editable by that interface.

If you believe it a tradeoff worth making, that's your prerogative as a designer. I don't share some of your motivations, such as making code pretty for display on dead trees or in primitive text editors. Also, I have some of my own goals, such as unifying PX and UX, an effort that greatly benefits both from interaction and the ability to treat manipulation of rich media types as programming. With such a difference in concerns, our designs will be divergent.

But when people start arguing or conjecturing that there is no tradeoff... that annoys me. I suppose this is a peril of opportunity costs: so long as you're ignorant of them, they don't seem like costs at all.

There is no tradeoff

I'm pretty sure I've seen you advocate against tradeoffs - when you find yourself facing a tradeoff, try to change viewpoints to find a solution that doesn't accept the tradeoff. John made a similar point somewhere in this comment thread, I believe.

I think this is a good situation in which to try. Specifically, what I think we can do is factor out the syntactic component of a language in a way that allows it to be "skinned" with either a textual encoding or a more structural encoding. Finding a reasonable factorization takes a little work because of issues like binding that are handled quite differently by structural and textual encodings. The idea is to move most of the elements that you complained about in your earlier post (overloading, etc.) that seem like artifacts of a text encoding out of the core language semantics and into the text encoding. It's a little tricky, but I think possible and worthwhile.
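
A minimal sketch of that factorization, in TypeScript terms (all types hypothetical): the core language binds variables by index rather than by name, so naming - and with it overloading and similar text-era conveniences - becomes the business of whichever skin is in use.

// Core term language: no names, no overloading, no precedence.
type Term =
  | { kind: "var"; id: number }           // index into the binding context
  | { kind: "app"; fn: Term; arg: Term }
  | { kind: "lam"; body: Term };

// A textual skin invents names on the way out; a structural editor
// would manipulate Term values directly instead.
function renderText(t: Term, names: string[] = []): string {
  switch (t.kind) {
    case "var": return names[t.id] ?? "#" + t.id;
    case "app": return "(" + renderText(t.fn, names) + " " + renderText(t.arg, names) + ")";
    case "lam": {
      const name = "x" + names.length;
      return "\\" + name + ". " + renderText(t.body, [name, ...names]);
    }
  }
}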

Invention is the Art of Avoiding Tradeoffs

I have frequently said: Invention is the art of avoiding tradeoffs. cf. TRIZ. If tradeoffs can be avoided, that's a good thing. And I think tradeoffs often can be avoided, if only our puny human brains find a way.

But this avoidance of tradeoffs doesn't allow for placing on a pedestal our primitive paradigms, tools, traditions, and text-editors. By its nature, invention is something that replaces our tools and models. (Also, the initial time and effort and latency and money spent inventing and prototyping and installing and learning the new tools is an investment that should be accounted as a tradeoff at least from an economic perspective.)

A change in viewpoints isn't sufficient to avoid a tradeoff. A change in tooling is essential. No matter what, if you attempt to squeeze a language into a simple text editor, tradeoffs are inevitable. What's left is designing towards a set of tradeoffs you find acceptable.

My current PL designs avoid many tradeoffs between textual and visual programming, support for DSLs and tooling. But it simply isn't happening with Notepad. Your skinning idea sounds interesting and worth pursuing, but you will make tradeoffs elsewhere if you intend to support a pleasant experience reading and editing it unskinned in Notepad.

Not with notepad

I don't think anyone is saying that you're going to get the full UI experience in notepad or even emacs. At least that's not what I'm arguing. I'm arguing that we still need good support for text underlying our fancy tools.

The best argument I have for this is that our most efficient method of interaction at the moment is still using the keyboard. Our fancy IDEs need to support "show me the text encoding for this" so that the programmer knows how to type things in.

Another opinion, which is perhaps relevant to this discussion, is that I'm skeptical of heavily interactive programming, like Ltac. Interaction has overhead. The programmer has to pause to understand the context that the IDE is now presenting before entering a response. If the context changes too quickly it's disorienting and slowing. Building text constructs works well as a stable keyboard interface that doesn't require interaction.

with text or over it

I think there are two separate issues here:

  1. effective support for programming with text
  2. textual editing of underlying representation

The first point addresses a lot of the issues you describe: efficient use of keyboard, stable behavior without interaction where you want it, etc.. This is easy enough with embedded languages or textual EDSLs.

The second issue is the one critical to John's conjecture and most of this thread about tooling, and also your "show me the editable text encoding" query.

That text encoding simply won't do you much good if it's too large, too dense, too opaque, too low level, too noisy due to specifying interactive behavior, etc.. Thus, if you insist this query and editing the result should be effectively supported, you'll make sacrifices regarding what media and literal types your language supports.

Algebra, Linguistics, and Brain Architecture.

There are a lot of algebraic structures in programming (type systems, datatypes), and a lot of logic too. When thinking about these things we naturally have a text / symbolic representation of them.

When you say "A text encoding simply won't do you much good if it's too large, too dense, too opaque" I would suggest that if it doesn't have a good text representation it will not be easy for people to think about. I think the brains reasoning is shaped by human language, and abstraction level is a critical part of that. If something gets too detailed we invent new sub-languages to break it down and discuss it.

Put another way, I am sure you can do all sorts of things with a graphical interactive interface that take programming away from linguistic representations, but I think this will make it harder for people to use not easier.

It's the reason why 'cyberspace' never really made much sense. Data is not naturally represented by a 3D landscape, so any mapping is entirely artificial and not as useful as a text representation. The Matrix's tumbling kanji are a much better representation of data than Tron's light cycles and rotating cores.

text representation and reasoning

if it doesn't have a good text representation it will not be easy for people to think about

Well, that's just false. There are a lot of things that don't have good text representations because they're trivial to reason about. Image data, where the primary action upon it is to render it, would be a simple example.

Also, the clarity, comprehensibility, efficacy, etc. of textual representations is NOT compositional. If you take a procedure with one well written line of code, and add another, you now have a sequence with two well written lines of code. If you do this a thousand more times, you have a mess. Quantity impacts quality.

Perhaps you could avoid that mess if you do all your development in the text layer. But if dealing with large media objects - a big graph, for example - that won't happen. Factoring graph data into a dozen little functions would just complicate the tooling and other things, and wouldn't necessarily aid comprehension of the graph (excepting independently meaningful subgraphs).

Returning to an earlier point: you can offload to external tools a lot of content that doesn't fit nicely into your PL, but you'll still need to deal with that content AND you've gained a panoply of problems for testing, packaging, integration, type safety, language purity, partial evaluation, constant propagation, staged programming, transparent procedural generation, etc..

Ignoring the graphical and interactive aspects is simplistic - a local illusion of simplicity that ultimately explodes into complications down the road.

A Cat

I can reason about a cat, the fact that it is a mammal, and an animal. This is reasoning. How do I tell what kind of animal a cat is from a JPEG? The fact is that this kind of reasoning is natively linguistic and word based. Most of the time I don't even need to see the picture, knowing it is a picture of my cat is enough.

Render the Cat

As I wrote above, "image data, where the primary action is to render it". In this context, I don't need to know whether the image is of a cat, a hat, or green eggs and ham. I only need to reason about how to render images. If you want to render your cat without using an image, just using comprehensible human text, you're in for a challenge. Reasoning that your cat is a mammal and animal isn't going to help much.

Rendering is trivial

Rendering the cat image is trivial; it's not really reasoning, and it's not really a problem. What's wrong with giving the cat image a symbolic name?

Sigh.

Since you seem to have fantastically missed my first point, I'll review it:

  1. You suggested: "if it doesn't have a good text representation it will not be easy for people to think about"
  2. I thought: That's simply untrue, there are a lot of things we think about that lack good textual representations. What's a good simple example?
  3. I said: Nay, things can lack good representation because they are trivial to reason about. Such as image data where the action is to render it.
  4. You thought: (unknown)
  5. You said: "I can reason about a cat [..] How do I tell what kind of animal a cat is from a JPEG?"
  6. I thought: How is reasoning about a cat, or performing image recognition on a JPEG, or whatever he's thinking even related to "image data, where the action is to render it", much less the point I was making? Sigh. I'll just assume I didn't emphasize clearly enough.
  7. I repeated the rendering point with emphasis. And, as an add, I tried to get you to question the efficacy of plain-text programming for a simple aspect of programming: rendering images, without image data.
  8. You thought: (unknown)
  9. You said: Rendering the cat image is trivial, and not really a problem, and what's wrong with (something I never suggested to be wrong)?

First, since you clearly missed it, the triviality was very intentional in contradiction to your earlier suggestion. To clarify:

Compared to other code and objects, you don't need to think hard about whether your image data is going to behave in a manner that, for example, compromises network security. This doesn't mean you don't reason about it, it's just easy enough that it barely registers. And yet the image data still lacks a good text representation. Thus, your hypothesis that things that lack good textual representation are difficult to reason about is clearly contradicted. QED.

Since you also clearly missed the add, I did specify rendering your cat "without using an image". You believe in the power of plain text programming, do you not? So, why would you need a cat pic to render your cat? Just use plain text. I'll be impressed if you find rendering your cat without textually opaque image data to still be trivial.

If you can't tell, I'm annoyed with the direction this thread has taken, and that you ignored more relevant points to discuss your ability to reason about cats.

Trojan Images and Stream Processing

I just don't get your argument. How does it follow that image data is easy to reason about because it does not threaten network security?

Even the initial premise is wrong: images can threaten network security, and have been used to do so. Deliberate byte sequences can be encoded to cause buffer overruns in certain JPEG libraries, allowing a root escalation attack, and then launching a network packet sniffer or other payload.

But more fundamentally I don't think the image itself is important. Almost all human thought consists of abstract concepts, that is, things that do not have a concrete representation. Because these things are not concrete there are no images to represent them. Instead we have words. What colour is the cat? Can you even represent that question as an image? Do all cats have four legs? As soon as I want to do anything at all useful with the cat image I need words to describe what to do. The cat image itself is irrelevant, most likely coming from a camera or a database of images. When I want to describe how to process the video stream it's much easier to describe verbally.

But I agree I might be missing the point. I don't see any problem in having tools to make working with images easier. I don't dislike the idea of an IDE showing a thumbnail of an image in the code, nor do I dislike the idea of being able to cut and paste images into a REPL loop. My point is this is not really programming, just displaying literals in a better way. The program that manipulates the images would seem to be best expressed as 'text'.

Image data lacks a good

Image data lacks a good textual representation and abstraction not because it's difficult to reason about, but because we don't need to reason about it in any manner but the most trivial - where it came from (a static content module or external database), where it's going (to render).

If you can easily reason about whether an image is a network risk, you can easily reason about the image. Proof by example. QED.

This is the same sort of proof as: "I can walk, therefore I can move." This certainly doesn't imply ALL movement is possible for you (can you bend your knees both ways?). And similarly, I haven't proven that ALL reasoning about the image is easy (can you recognize the cat?). But I did address relevant reasoning: we typically don't do much with images other than render them.

images can threaten network security [..] buffer overruns in certain JPEG libraries

True. If your rendering library isn't memory safe, then perhaps you couldn't easily reason about network security.

Almost all human thought consists of abstract concepts [..] we have words

I've not once argued against abstraction, nor even against use of text as a programming media. There are enormous differences between these three positions:

  1. we should require textual media for all programming
  2. we should support non-textual media for programming
  3. we should abolish textual media for all programming

The first position is common for anyone who insists their PL work nicely for maintenance using a basic plain text editor. The motivation to stick within the limits of widely accessible tools and non-interactive distribution platforms (such as thin slices of dead tree) is obvious.

The second position is the one I take. Use text where it's appropriate. Use visual programming where it fits nicely (a lot of specialized cases). Gain many benefits, because much data that would be externalized to avoid polluting plain text can now be modeled and maintained directly within code, subject to tests and staging and partial evaluation and transparent procedural generation, and conveniently easy to use with purely functional programming (e.g. no side effects to load external files). The cost is that we can no longer expect programs to be fully accessible or maintainable via plain text editors.

The third position isn't held by anyone in this thread, but does seem to be a popular straw man in almost every discussion about visual programming. Based on your apparent assumption that I'm against using linguistic abstraction, it seems this discussion is no exception.

The Second Position

It seems we are both somewhere within the second position you stated. I am not so sure about putting media in code though. We spend a lot of time factoring even strings out of code (for internationalisation); putting more media in code seems odd to me. I want to leave the design elements to designers, so the web approach of code producing data that is combined with styles, themes and design templates seems more useful to me. Code tends to output simplified XML markup which then gets transformed using XSLT in the presentation layer.

When GUIs are themeable you have to refer to components by their function (left-margin-image for example). The user selects the theme they want for their desktop.

I am not sure I get the use case for inline media literals. We should be separating the presentation layer more, not binding it more tightly.

You're operating under the

You're operating under the popular premise that PX and UX are entirely distinct worlds. I think this premise has been very bad for all of us, both users and programmers. Every application becomes a walled garden. Reuse and extensibility are very poor by default, and inconsistent between apps. Data resources are relatively painful to use compared to string literals and functions.

With the opposite premise, that PX and UX should be more unified, media in code makes a lot of sense. Designers are no longer clearly separated from the codebase (though they might work on different parts of it). Some GUIs might be represented directly within code for contexts like live coding or tangible functional programming.

This doesn't mean we can ignore internationalization and similar design concerns (at least, not more than we already ignore them). But, rather than externalizing your "internationalization database" (which isn't really about stateful data), you'd just model said database as another module in code.
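To make this concrete, here is a minimal C++ sketch of what modeling the internationalization "database" as an ordinary code module might look like. All the names and the two-level map are invented for illustration; the point is only that the table lives in the codebase, visible to tests and refactoring, rather than in an external file.

    #include <iostream>
    #include <map>
    #include <string>

    // Hypothetical sketch: message keys map to per-language strings, and the
    // table is just another definition in the codebase.
    enum class Lang { En, De };

    const std::map<std::string, std::map<Lang, std::string>> messages = {
        {"greeting", {{Lang::En, "Hello"}, {Lang::De, "Hallo"}}},
        {"farewell", {{Lang::En, "Goodbye"}, {Lang::De, "Tschuess"}}},
    };

    std::string msg(const std::string& key, Lang lang) {
        return messages.at(key).at(lang);  // throws if key or language is missing
    }

    int main() {
        std::cout << msg("greeting", Lang::De) << "\n";  // prints "Hallo"
    }

A check over this table (e.g. "every key has every language") is then an ordinary unit test, which is part of the appeal claimed above.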

WWW

I still don't quite get it. Programmers do not make good designers: just compare the state of the average desktop application (with programmed UI done by developers) to web applications (where designers have done the UI). There are many more great looking web applications precisely because of the separation of design from programming, and in reality it doesn't cause any issues; plus, re-theming the application is so much easier. If a customer asks for an organisation-specific UI, it's easy to task a designer to look at their corporate style guide and develop a new theme without touching any functionality.

As for internationalisation, it's much easier to give the external strings database to a translation agency to translate than it is to give them source code and expect them to edit it. It is not reasonable to expect them to learn to use the coding environment just to translate strings.

Plus, how do you cope with user theming? In the age of responsive design, where applications should be targeting devices with varied screen sizes and resolutions, hard-coding media into applications seems the wrong approach. I should be able to have all my apps with the same GUI components and theming, which might be a personal theme where the images will not be available to the developer.

Putting UX skins and behavior

Putting UX skins and behavior into the same codebase doesn't mean we suddenly ignore separation of concerns, coupling and cohesion, best practices for modularity, etc..

It's pretty clear that you haven't (seriously) contemplated what it might mean to unify PX and UX. You keep going on about "programmers" and "designers" as though designers wouldn't be programmers with a specialized goal, operating on a different part of the problem (and hence a different subset of the codebase, if your modularization is any good). Customers and users would also be programmers, albeit with their own codebases, and a lot of programming wouldn't look like what we call programming today (i.e. because it mostly wouldn't be manipulation of plain-text source and higher order abstractions).

When you envision "editing source code", you have a picture in your head of C or Java or other plain-text programming languages. But we're deep in a thread regarding programming with other media. Try envisioning images, graphs, tables, etc. as first class source code. Internationalization by sharing source code for editing is NOT unreasonable assuming you've partitioned said content into separate modules with an appropriate media for editing, such as tables.

(That said, you could just as easily provide a conventional database and import the results back into your codebase.)

User theming requires users have their own codebase. Which they would because, upon unifying PX and UX, all users are programmers (even if it's mostly shallow and they're not really thinking about it), and a codebase would replace many roles of filesystems (including access to rich media, like your personal images for skins). You'll either pull a copy of a remote application for local installation, or create local skins for remote services. Overall, this problem is a more specialized variation of the broader extensibility and accessibility concerns I'm aiming to address in the first place.

Don't let your beliefs about how things 'should be' interfere too much with envisioning alternatives for how things could be.

Tooling

I understand how it could work, and Mathematica already works a lot like that. My concerns are more about why you think this is a better approach, and who the target audience is.

If you could get Photoshop to save content into a module directly, so that designers did not need to waste time learning to program, it might work. But it's going to be far easier to adapt the industry formats into your system than to persuade Adobe to support it.

Perhaps if a module could be a tar file of images, and the tar file names directly became object names, that might be viable. But you still need to import the module into your development environment, rather than just copying the images into the correct place in the filesystem.

It seems you imagine a world where everyone is a programmer, but this seems unrealistic to me. Designers don't want to program, nor do clients who just want to select a skin. At a guess, your target is something like the same audience as Mathematica, i.e. not professional developers building software for end users, but academics, amateurs, researchers, and spreadsheet users.

If this helps get people into something better than Excel, then it's a great idea, but it's clearly not for everyone, and it's not a good fit for the kind of development I do.

casual programming

Whether it be paperwork, presentations, or programming, every professional is expected to do some things they "don't want to". Whether they like it or not, designers should be expected to program, e.g. to create interactive mockups or quick and dirty prototypes. And the education necessary for this should start at a young age - computation, taught alongside math and science, as a core competency.

Of course, designers today have a legitimate excuse: the poor tooling, the discontinuity between using or consuming content vs. programming it, creates a massive barrier against the sort of 'casual' programming a designer should be expected to perform. Designers cannot reasonably be expected to program until the tools are replaced.

The goal with unifying PX and UX is to make 'casual' programming accessible and easy. While the existence of casual programming might create unwanted burdens for some reluctant designers, it also introduces opportunities and allows many lightweight efforts (tweaks, compositions, etc.) to proceed without bringing in a career programmer.

While this may seem to be aimed at the amateurs and academics, we cannot neglect the professional developer. If the professional feels pressure to use different PL and tools, then the barriers between PX and UX will continue to persist. So the audience is everyone, and fundamentally must be.

People are different

I think this projects a homogeneity on people that may not be desirable. People are different; the best graphic designer in the world could be a terrible programmer. If you insist on only hiring multi-skilled people you may be artificially limiting the talent pool in a way that is prejudicial to your business. I am not sure programming should be regarded as a fundamental skill that everyone needs like reading, writing and arithmetic. IT and computer literacy is something everyone needs, and this is reflected in school curricula. I think it is a tendency of Generation X to see programming as interesting because (personal) computers were a new thing to them. To post-millennials, computers are just something to use: another appliance for running 'office' at work and accessing the internet.

Some designers can program well enough to produce interactive prototypes, but not all, and even this has problems: there is a tendency to fit more coding into the prototype to make features work more realistically, to the point where the prototype can become a development bottleneck. Sometimes wireframes and mockups are just the best solution (where the application is small, or the code is an inherently complex engineering problem).

To cater for the professional developer, the language will need to be able to ship binaries, so that the customer cannot reverse engineer the software, while still allowing configuration changes such as theming, language changes, etc.

terrible programmers

I certainly don't expect homogeneity in programming skills. There's a spectrum for programming, much like there is for writing or planning. OTOH, it wouldn't take a skilled programmer to support many use cases. For graphics design, a domain specific language and a little tile and wire manipulation could probably cover most interactive and animated mockup work.

Terrible programmers are still programmers. Many terrible programmers would improve with experience and a little training. Others could lean on more skilled programmers in the group. While people are different, I think the median skill in programming could be much higher than it is today and that perhaps 90-95% of programming as a (separate) career could be eliminated. There is much value in eliminating the PL/UI tooling gaps such that non-programmers become programmers, even if some are forever terrible.

While I agree that a language's toolset should enable compilation to binaries, your stated motivation for it isn't very sound. There are decompilers and tools like jsnice. Skilled programmers are able to re-engineer and clone just about any app without even seeing the source or binary, just by inferring the intentions and purposes. A belief that binaries somehow guard against reverse engineering should be treated with much skepticism.

Some good reasons to support compiling to binary include performance and bootstrapping, support for a broader range of targets such as embedded programming and unikernels, etc..

If you take a procedure with

If you take a procedure with one well written line of code, and add another, you now have a sequence with two well written lines of code. If you do this a thousand more times, you have a mess.

Perhaps I'm forgetting quite what you mean by "compositional" (a term it's easy to presume one understands); it seems to me that good organization is inherently not compositional, and whether or not it's text doesn't bear on that. Good organization of the whole depends, in highly idiosyncratic ways depending on the nature of the whole, on complex facets of how it's put together; naive composition of many parts into a complex whole is likely to produce a mess, and it's even possible, to some extent, for a complex whole to be well-organized while all the individual parts are messy.

Non-Compositional

Quick review: Algebraic composition is the only composition worth considering. A compositional property can reliably be computed inductively across composition, i.e. ∃f. ∀A,+,B. P(A+B) = f(P(A), '+', P(B)) for the entire set of composition operators like +. Trivially, invariants are compositional. Ideally, P is useful but also much smaller than the objects being summarized. Compositional properties are valuable for modularity and very large scale systems because we can shift much reasoning to the summary of properties rather than reviewing implementation details.
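As a toy example of the definition above, here is a compositional property sketched in C++ (node types invented for illustration): a conservative [lo, hi] bound on an integer expression, where the summary of A+B is computed purely from the summaries of A and B, never from their internals.

    #include <iostream>
    #include <memory>

    // P: a conservative [lo, hi] bound. P(A+B) = f(P(A), '+', P(B)).
    struct Bounds { long lo, hi; };

    struct Expr {
        virtual ~Expr() = default;
        virtual Bounds bounds() const = 0;  // the compositional property
    };

    struct Lit : Expr {
        long v;
        explicit Lit(long v) : v(v) {}
        Bounds bounds() const override { return {v, v}; }
    };

    struct Add : Expr {
        std::unique_ptr<Expr> a, b;
        Add(std::unique_ptr<Expr> a, std::unique_ptr<Expr> b)
            : a(std::move(a)), b(std::move(b)) {}
        // f for '+': needs only the sub-summaries, never the subtrees.
        Bounds bounds() const override {
            Bounds x = a->bounds(), y = b->bounds();
            return {x.lo + y.lo, x.hi + y.hi};
        }
    };

    int main() {
        Add e(std::make_unique<Lit>(1), std::make_unique<Lit>(41));
        Bounds b = e.bounds();
        std::cout << "[" << b.lo << ", " << b.hi << "]\n";  // prints [42, 42]
    }

By contrast, a property like "looks good" has no such f; it depends on the whole.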

Of course, properties that are emergent or contextual fundamentally cannot be compositional. And a lot of important human concerns are emergent or contextual. Look and feel. Usability. Performance with a given cache size. "Good organization."

Compositional properties are useful for aligning an organization, but good organization for humans has a lot of emergent and extrinsic human aspects that are also very important, e.g. Conway's law, or concerns about human memory (five-to-seven rule). Overall, I agree with what you wrote above.

Aside: I think compositionality is a useful distinguishing property between type systems (which must align nicely with modules, and thus types are compositional) and other static analysis models (such as abstract interpretation). So, knowing which properties are compositional can give a pretty good idea about which properties we can protect with a type system if we tried.

UI spectrum

I think we may be near agreement. For example, supporting image literals is a good idea and, while you can have a text encoding for images, it's not particularly useful for manipulation of an image. However, I wouldn't characterize image editing as programming. There's a spectrum of activities that we would like to be doing in our next generation IDE, including image editing. The tasks that I consider programming are the ones where I believe I want a stable text UI.

Other examples are building finite state machines or UI layout. Those are things you can do in an embedded non-text editor. But they're specialized and not general purpose programming.

Visual EDSLs

Yes, I consider visual DSLs to be mostly for special purpose tasks - geometries and images, game map editors, Kripke structures, UI layout, etc.. But there are quite a lot of small areas that benefit, and no PL designer will think of them all ahead of time. So I think it's also important that they be achieved and implemented through library-defined EDSLs.

And while images might not seem like programming to you, they're certainly an important part of many programs, much like text and numbers.

Text is a useful medium for a lot of problems, and one we have pretty good devices today for manipulating. I'm certainly not suggesting we deprecate it. But even lightly mixing text with visual media has a significant impact on whether plain text editors remain a suitable development environment.

One more point

I just want to point out that it's not too hard to integrate text and non-text in a way that works reasonably well with the current approach. The text file might look something like this:

-- This is a text source file.
render ($import mycat.jpg)

When viewed in the IDE, that can be an embedded image. A similar encoding issue exists even if you just want to partition a codebase into files. Again, my thinking is to design the IDE experience that you want first, but then to give due consideration to making the experience with unix style tools as painless as can be reasonably achieved without compromising the primary IDE experience.
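For illustration, here is one way a build step might desugar such an $import into an ordinary byte-array literal, so that unix tools downstream only ever see plain text. The output format is invented for the sketch; it is one possible encoding, not a proposal for a standard.

    #include <fstream>
    #include <iomanip>
    #include <iostream>

    // Read mycat.jpg and emit a plain-text literal for it. A real tool would
    // take the file name from the $import form; it is hard-coded here.
    int main() {
        std::ifstream in("mycat.jpg", std::ios::binary);
        if (!in) { std::cerr << "cannot open mycat.jpg\n"; return 1; }
        std::cout << "imported_mycat = bytes [";
        char c;
        bool first = true;
        while (in.get(c)) {
            if (!first) std::cout << ",";
            std::cout << "0x" << std::hex << std::setw(2) << std::setfill('0')
                      << (static_cast<unsigned>(c) & 0xff);
            first = false;
        }
        std::cout << "]\n";
    }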

Editing mixed media files is

Editing mixed media files is difficult, however. Structural or projective editing might work, but language-aware editing... it is still an open problem.

What file format?

If I understand what you mean by mixed media files, isn't the problem just that we don't have formats for media that allow the embedding of other media in a standard way?

mixed media

Well, tar files are standard and support embedding, and in principle are easy to manage by a web server or virtual file system. Editing mixed media was the point of 90's compound-document and format-embedding system software.

But the web displaced 90's work on compound document architectures for mixed media in files: OLE at Microsoft (doc file format), OpenDoc at Apple (Bento and Quilt), and a framework at Taligent. The MS and Apple formats were of a file-system-in-a-file variety. Bento was a post-hoc indexing kludge that appended a toc (table of contents) after content written anywhere, and originally targeted compact disk formats, unifying a view under one umbrella. Both docfile and Quilt were block-oriented and suitable for paging (either system-based or hand-cached).

Storage on resource-starved platforms was tough. OpenDoc had to work on 4MB Macs where the majority of users enabled virtual memory the smallest amount possible, just so common code was shared in a mapping by multiple concurrent apps. Otherwise VM was absent, so hand-rolled paging was necessary. Good indexing needed btrees for large docs. There wasn't enough RAM to run a database and OpenDoc too. The OS took a meg or two, OpenDoc code took another meg, and left a meg of RAM to juggle for other apps and i/o paging.

Now we have so much in the way of resources, people just burn it. Deciding what to do is the problem, not whether enough resources are present. Simple sorts of tools tend to use OS file systems. So tar files are a path-of-least-resistance solution that's portable because every platform understands them. I think emacs even has a tar file viewer that presents them like a directory.

You can version files in tar by appending new replacements, so it's pretty easy to model VMS style file systems with version numbers.

All you really need is an API for indirection, so you don't depend directly on how a thing is physically represented. Connect to a service that presents a file system, which might be a tar file. Write your own server that fronts your tar file, and then you'll know for sure it's a portable tar file, and yet the code interface works on whatever has the same model. Since we're talking about tooling, throw as many things at it as you like. Go ahead and use an SQL database too, why not. Nothing really stops you from having files, embedded mixed media files, database support of your choice, and no dependency on the local OS. You can make it functional with copy-on-write versioning and garbage collection.
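A minimal sketch of that indirection API in C++, with an in-memory backend standing in for the tar file or database (all names here are hypothetical). Note how appending new versions under the same path gives the VMS-style versioning mentioned above.

    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>
    #include <vector>

    // Code talks to an abstract store; whether it is backed by a tar file,
    // the OS file system, or a database is a deployment detail.
    struct FileStore {
        virtual ~FileStore() = default;
        virtual std::optional<std::vector<char>> read(const std::string& path) = 0;
        virtual void write(const std::string& path, std::vector<char> data) = 0;
    };

    // Toy in-memory backend; a TarFileStore would expose the same interface.
    struct MemoryStore : FileStore {
        std::map<std::string, std::vector<std::vector<char>>> versions;
        std::optional<std::vector<char>> read(const std::string& path) override {
            auto it = versions.find(path);
            if (it == versions.end() || it->second.empty()) return std::nullopt;
            return it->second.back();                   // newest version wins
        }
        void write(const std::string& path, std::vector<char> data) override {
            versions[path].push_back(std::move(data));  // old versions retained
        }
    };

    int main() {
        MemoryStore fs;
        fs.write("notes.txt", {'h', 'i'});
        fs.write("notes.txt", {'h', 'o'});
        auto v = fs.read("notes.txt");
        std::cout << std::string(v->begin(), v->end()) << "\n";  // prints "ho"
    }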

(Sorry if this was long. All at once is sometimes shorter than eking out a little at a time in a long exchange.)

It is not a format issue but

It is not a format issue but one of real estate: how do you put the image in your code in a way that isn't disruptive? I haven't seen a good proposal for it yet. It might be useful in a side bar (something I'm trying to add to APX).

Disruptive in what way? MIME

Disruptive in what way? MIME already provides a format for embedding and linking resources in a flat representation, you just need a better syntax.

The bits aren't a problem

The bits aren't a problem for PX (that is a problem for engineering). The layout and typography of how to create a multi-media editable document are. I have the feeling we are thinking about different problems.

Why not let the programmer

Why not let the programmer put the image where he wants? I expect images typically will be disruptive, and they probably don't usually belong in your code, but there are probably occasionally reasons to embed them.

I've been thinking about

If reading code is almost as important, or even more important, than writing code, then why don't we typeset our code like we typeset our papers (Literate Programming in reverse, Fortress with layout added to the mix)? In that case, it makes sense to include images, tables, math mode, detailed type set comments, test case results manipulated in real time (iPython), ...

I'm not sure if we should go there or not. On the one hand, it elevates code reading (and other non-writing tasks like debugging) to a polished first-class concern. On the other hand, there are a couple of disadvantages to consider:

  • Will programmers take the time to polish their code for reading? Yes, we say code reading is important, but is it taken that seriously? (will it be used!?)
  • An embellished code format requires either a rich structure/projective editing experience, or (as in Fortress) LaTeX markup for writing that is then typeset for reading. So writing code is made harder; and since we also want to read while writing, having separate write and read representations is probably not going to work.

We can embellish a lot with a language-aware editor that is still based on text (as I've done in my prototypes). However, we can't really embellish layout with pure text input or even with WYSIWYG rich text input. Witness the horror of writing a paper in Word vs. writing it in LaTeX.

It is definitely worth trying, but...I'm not sure if it is on my list yet or not.

programming and markup

The worlds of programming languages and markup languages are on a collision course. I see stuff I know from programming trying to crop up in markup, and vice versa. I'm not sure quite how to articulate this, but I'll take a shot at it.

Starting from the markup side: The Wikimedia Foundation has decided it needs to make Wikipedia pages smartphone-friendly with a WYSIWYG interface. There's some appalling politics involved, but the two relevant technical objections are that (1) the ease-of-human-use of wiki markup is the primary asset the wikis have, so that cutting yourself off from it through a WYSIWYG interface is suicide, and (2) the primary path by which inexperienced wiki users gradually become more experienced is hands-on work with wiki markup, seeing how others did things — it's often trivially easy to see how someone else did something and imitate it, whereas looking at existing content tells you nothing about how to do it with a WYSIWYG editor and, conversely, learning how to do stuff with a WYSIWYG interface tells you nothing about how to do anything else except that particular thing with that particular editor.

The obvious solution — obvious to me, that is :-) — is to have an editor that, when you say you want to edit a wiki page, takes you into a mode that shows you the wiki markup and gives you some sort of structured help with editing it. Notice that in this scenario there still is an underlying wiki markup; in my experience, there's no way to successfully pull off a pretense of an underlying text representation, as the pretense is inherently unstable, requiring a lot of work to maintain, probably lacking the flexibility of the real thing, and occasionally suffering from bugs in the internal correspondence.

There are two ways for wikis to go from here. The Foundation is headed (perhaps without consciously intending it) for a vision in which ordinary users use a dumbed-down WYSIWYG interface to choose amongst the options provided to them by an elite of programmers who use programming languages to control what ordinary users are allowed to do. The alternative I favor has wikis themselves becoming more plastic and integrating more features of programming, with straight text-editing always an option while increasingly augmented by a sort of wiki-IDE.

The considerations involved with wiki-based programming are rather mind-bending for a programmer; just try to imagine a major piece of software in which anyone can edit the code (the appropriate emoticon here is, I think, o_O). But it's a vision of wikis that's clearly heading toward programming, and the discussion here seems to suggest programming may be headed toward markup. So maybe the two fuse at some point.

I see stuff I know from

I see stuff I know from programming trying to crop up in markup, and vice versa. I'm not sure quite how to articulate this, but I'll take a shot at it.

This is a pretty common theme actually. Almost as soon as HTML was invented, someone wanted to abstract and compose HTML fragments. We see the same with CSS now. For instance, the Bootstrap CSS framework uses the Less stylesheet compiler to introduce and reuse styling abstractions, since CSS can't do this natively.

Lesson being, abstractions are important, and if your markup is missing abstraction facilities, your language is incomplete. Others here have already made the case that programming languages should also have markup qualities, and I think the advantages are clear from EDSLs. The languages that can't embed concise markup-like combinators are often considered clumsy and inexpressive.

segregating textual and visual content

We can certainly segregate textual content from the visual. But I think the sample here doesn't offer a very good understanding of the resulting experience. The image 'mycat.jpg' is opaque and we probably would ignore it for most debugging and maintenance purposes. If instead it were a module with an important impact on program behavior - e.g. a Kripke structure for a state machine - the Unix developer will suddenly be juggling a lot of tools.

But I'd really prefer to avoid this segregation. I think mixed-media modules will see a lot of use if well supported. And I think there's a lot to gain from pushing most of the editing and rendering logic into libraries rather than locking it down in external tools.

Operator precedence and the like

When optimizing for plain text expressiveness and readability, and filesystem integration (to leverage the common text editor), it is common to introduce features such as overloading or multimethods, operator precedence, namespaces, imports, and code-walking macros.

The need to express mathematical formulas in code will not go away, no matter how smart your environment. So features like operator precedence and overloading will never go away either.

If you have ever had the doubtful pleasure of using a graphical "formula editor" (e.g. the one in MS Office), you quickly realise what a vastly superior and more productive medium text is. Even if it is as terribly designed as LaTeX.

Though with operator

Though with operator precedence, your editor can make the implied precedence more obvious visually (something I've been meaning to implement in APX by shading the inferred tree). Once precedence is visually more obvious, perhaps you can design more expressive precedence schemes without overburdening programmers cognitively (especially those reading code!). Given notepad as the editor, we wouldn't dare go with anything other than something very simple.

Probably not the explanation...

Given notepad as the editor, we wouldn't dare go with anything other than something very simple.

I think the version of notepad used by the early mathematicians who invented operator precedence supported a wide range of formatting options.

Correct, but early teletype

Correct, but early teletype terminals couldn't replicate those formatting options, and we never really adopted bitmap displays for programming.

Operator precedence won't go

Operator precedence won't go away, but an editor can help with it by making the parentheses a display concern, not something that is baked into the AST.
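A toy sketch of what making parentheses a display concern can look like (precedences and node layout invented for illustration, associativity ignored for brevity): the AST stores no parentheses, and the printer inserts them only where a child's operator binds more loosely than its parent's.

    #include <iostream>
    #include <memory>
    #include <string>

    struct Node {
        std::string op;            // "+" or "*" for interior nodes, "" for leaf
        std::string leaf;          // leaf text
        std::unique_ptr<Node> l, r;
    };

    int prec(const std::string& op) { return op == "*" ? 2 : 1; }

    // Parenthesize only when the child binds more loosely than the parent.
    std::string show(const Node& n, int parent = 0) {
        if (n.op.empty()) return n.leaf;
        std::string s = show(*n.l, prec(n.op)) + " " + n.op + " " +
                        show(*n.r, prec(n.op));
        return prec(n.op) < parent ? "(" + s + ")" : s;
    }

    int main() {
        auto leaf = [](std::string s) {
            auto n = std::make_unique<Node>();
            n->leaf = std::move(s);
            return n;
        };
        auto mk = [](std::string op, std::unique_ptr<Node> l,
                     std::unique_ptr<Node> r) {
            auto n = std::make_unique<Node>();
            n->op = std::move(op); n->l = std::move(l); n->r = std::move(r);
            return n;
        };
        auto e = mk("*", mk("+", leaf("a"), leaf("b")), leaf("c"));
        std::cout << show(*e) << "\n";  // prints "(a + b) * c"
    }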

The formula editor of Maple works well. The formula editor of LyX also works reasonably well, definitely better than plain LaTeX if you ask me. A long time ago people wrote LaTeX in a text editor, and then recompiled the graphical view (PDF/DVI). Then you had live preview: you have a preview next to the LaTeX view that automatically updates as you type. The next logical step, and I don't know if that actually exists, is the ability to click on the math in the graphical view, and get your cursor in the plain text view positioned on the corresponding subexpression. Going further you have an editor like LyX, where you directly edit the graphical view, and the plain text view becomes secondary. It's a whole lot nicer to look at math notation with horizontal lines and greek letters and subscripts than at nested \frac{\sum_{i=0}{...}}{...}.

Graphical editing is first-order

I agree that LaTeX is terrible. That's why it is particularly telling that it is still better than the graphical alternatives I've tried. ;)

And I agree, it's a whole lot nicer to look at math notation with horizontal lines and greek letters (though some people here seem to disagree :) ) -- until you hit the point where you want to abstract over parts of your formulae, introduce abbreviations, parameters, etc. That is, until you want to program.

Likewise, I've yet to see a "visual", "interactive" or otherwise "enhanced" approach to general-purpose programming that has a convincing story for scaling with complexity and abstraction, instead of just being naive about it and trapping us in an unproductive first-order box. Which, btw, is one of the reasons why I have also remained utterly unimpressed by some of the Bret Victor talks that others get hyped up about. All this is solving the wrong problem. The real problem with software engineering today is reliability and scalability. Sustainable progress can only come from better abstraction mechanisms, not from interactive gadgetry. In fact, I expect overemphasis on the latter to cause dangerous regression on that front.

Note I'm not talking about

Note I'm not talking about graphical or visual programming or free form drawing. I don't think those approaches are very promising. Rigid ASTs work well. I'm just not convinced that the best way to store or edit them is by linearizing them to an array of characters. What is the issue with abstraction in that context? You still have exactly the same subexpressions that you can extract out into a definition.

I agree that the Bret Victor stuff is not that exciting. They are nice demos with visualizations tailored to exactly that scenario, but it's unclear how it would work for general programming.

That first order box is much

That first order box is much better than the higher order box Haskell programmers want to trap us in. "Hey look at my point free function that passes a function to a function to a function...what the heck does that even do?" The scaling onus is on us, and it's a lot of work, but not impossible. The crux of our disagreement, however:

All this is solving the wrong problem. The real problem with software engineering today is reliability and scalability. Sustainable progress can only come from better abstraction mechanisms, not from interactive gadgetry.

There is a problem with reliability and scalability today, but the abstraction hair shirt isn't going to save us. Again, when only a few lines of code are so abstraction-dense that making sense of them requires a huge amount of time...well...it is not a mystery why Haskell will never be mainstream.

Our brains just aren't good enough; our only hope is to become computer-augmented cyborgs: not by embedding chips in our heads, but via that "interactive gadgetry" you hate so much. This is happening in almost every other field; perversely, the Luddite-leaning PL community seems to be the most against it ("programming is a strictly human activity, keep computers out of it").

There is no "higher-order box"

...because higher-order subsumes first-order. You're throwing out a false dichotomy, but I'm sure you know that.

Our brains are reasonably good at getting comfortable with abstractions. But you have to accept a learning curve. The more advanced a field gets, the higher the education necessary to master it -- the same as in any other engineering or science discipline. Tooling can help, but cannot avoid that evolution (and is actively harmful if it tries).

The "higher-order" box I'm

The "higher-order" box I'm referring to is about usability, not expressiveness, but I'm sure you know that. PL researchers focus way too much on expressiveness and not enough on usability; who cares if you can succinctly express X with a bunch of higher order functions chained together if no one knows what the hell is going on?

It is not just a learning curve. The extra indirection and abstractness over the machine approach actually has significant cognitive costs. Data flow and chains of deeply nested indirect function application are a pain in the ass to reason about and debug, almost as bad as the callback hell they are purporting to replace. Perhaps "well-typed programs can't go wrong" was not meant as a claim, but as a threat, since Haskell lacks a good debugging experience!

But again, higher-order programming wouldn't be so bad if the computer could actually help out. Better-tooled programming experiences will continue to win out in the marketplace.

If higher order is MORE confusing then you're doing it wrong

"That first order box is much better than the higher order box Haskell programmers want to trap us in. "Hey look at my point free function that passes a function to a function to a function...what the heck does that even do?"

In my own programming I've seen unreadable, complex loops turn into easy to understand higher order composed logic enough times to know that higher order functions can simplify a lot of code.

In other cases I've seen it eliminate tons of boilerplate.

If going to higher order functions DOESN'T make your program simpler, then don't do it there.

If Haskell is hard to understand it's not because it's using a bunch of higher order concepts, but because it's using bad higher order concepts.

Higher order code requires

Higher order code necessarily requires higher order reasoning; it is directly in the definition of "higher order." Indirection adds complexity, whether it is through v-tables or f-pointers.

And have you ever tried to debug a higher order function? I use LINQ in C# and often go to loops for complicated stuff simply because the debugger actually works.

I was using scheme

I don't know what v-tables or f-pointers are, and Racket's debugger doesn't have any problem with higher order functions. But I may not have had to debug any of that.

[edit] oh, virtual tables and function pointers
Hmm, I wasn't doing anything so complex that I was pulling a lot of functions out of tables, or a bunch of objects... My examples were really ones of having statically chosen some tests (sometimes a fair number of them) to use in a long running computation or to use to process a complex data structure, and making the code driving the tests separate from the tests. It was a matter of separating concerns, but the functions being passed around were so predefined that they could have been inlined.

The one exception in that code was continuations used for non-determinism: hard to reason about, but I converted them into abstractions that are very easy to reason about.

It's hard to make any sense of a continuation, but easy to make sense of amb.

Laziness is a big issue,

Laziness is a big issue, though I've heard that VS2015 fixes some of the issues for C#. It is just annoying to have to dive into multiple applications one at a time, when everything would just be there in one frame if you used a loop. Tooling can help, but I haven't seen many decent FP debuggers out there that actually deal with these issues (please point them out if you know of them).

Compared to reasoning about mutable state

Compared to reasoning about mutable state, reasoning about higher-order functions is fairly trivial.

I disagree. And we know how

I disagree. And we know how to make mutable state more usable, but how the heck can we do the same for higher order functions? Not to mention the horrific debugging experiences.

I agree to disagree!

Well. Now that I am rewriting my compiler in C++, I can tell you that falling back to first-order reasoning where I had higher-order at my disposal is utterly horrible, unnecessarily verbose, and hard to debug. ;P

Example?

Example?

Edit: maybe this is another issue unrelated to the one at hand. Control flow is dead simple and easy to reason about (and debug!) even if it lacks in abstraction capabilities. Contrast that to composing two functions whose behavior is deferred, to be accessed through a value somewhere else. The control flow of the program becomes easily twisted, and we are supposed to reason about the data flow instead (never mind that there are no good data flow debuggers out there). If the indirect procedure manipulates state, we easily cross into callback hell; but if it's a pure function that consumes and returns a monad, it is somehow...usable?

Uh? Everything?

Well. Uhm. Everything blows up? It's just the drawback of going back to first-order after being used to a declarative style. But I knew that when I started.

I am gonna be lazy. Just read the compiler bootstrap sources here in the download section and tell me how you would solve all that in C++.

The abstract data types become tagged class hierarchies (tremendous blowup), and instead of parser combinators I now need to do the lookahead with case switches and lookahead predicates everywhere I switch between production rules.

It's really a problem. I hope it'll work out but I am not sure yet.

Can you give me a link that

Can you give me a link that isn't blocked in China?

I've written my recursive descent incremental parser and type checker (with aggressive type inference) for APX with just a few virtual methods and a few factory methods accessed through a dictionary. It is way more advanced than most other parsers, and the type checking code is mixed in with the parsing code so we can go straight to code gen afterwards. Oh, and it was easy to write: just take tokens from the lex stream, and decide what to do with them. Parsing is not so hard a problem that it requires tons of indirection and abstraction.
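For a flavor of that "take tokens and decide" style, here is a toy recursive descent expression parser (single-character tokens, one token of lookahead); it is a minimal analogue, not APX's parser.

    #include <iostream>
    #include <string>

    // expr   := term (('+'|'-') term)*
    // term   := factor (('*'|'/') factor)*
    // factor := digit | '(' expr ')'
    struct Parser {
        std::string src;
        size_t pos = 0;
        char peek() { return pos < src.size() ? src[pos] : '\0'; }
        char next() { return src[pos++]; }

        long expr() {
            long v = term();
            while (peek() == '+' || peek() == '-')
                v = next() == '+' ? v + term() : v - term();
            return v;
        }
        long term() {
            long v = factor();
            while (peek() == '*' || peek() == '/')
                v = next() == '*' ? v * factor() : v / factor();
            return v;
        }
        long factor() {
            if (peek() == '(') { next(); long v = expr(); next(); return v; }
            return next() - '0';  // single digit literal
        }
    };

    int main() {
        Parser p{"(1+2)*3"};
        std::cout << p.expr() << "\n";  // prints 9
    }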

Whenever I see a parser combinator, I wonder why the hell they add so much complexity for so little functionality (they must produce separate trees to type check, and are not incremental). And how the heck do you even debug those? It really is no wonder that most industrial languages continue to rely on recursive descent.

No. And I exploit the full functionality of parser combinators

No idea. Can I post a tar file somewhere temporarily, or mail you?

The thing with infinite lookahead parser combinators is that once you have them, you're going to exploit that feature. A lot.

I.e., where I write p + q + r it might try an arbitrary match on p, and q, before deciding r. Now I need to do the match, parse the given alternative, and deliver the right result.

But there are many more examples. This is just what I am encoding in C++ now.

(The good thing, I guess, is that falling back to C++ enables me to check I didn't do too much weird things while parsing stuff.)

You can do infinite look

You can do infinite lookahead with recursive descent, it is just code after all! But if I learned anything from Martin, it's to keep the syntax simple (if you need more than one token of lookahead, you are doing it wrong); users and implementors both benefit (of course, not everyone has the luxury of deciding the syntax they must parse).

Oh. I agree with that.

I wanted to keep my grammar mostly LL(1) too. But given namespaces, I already had a problem. And parser combinators also allow failing on extra semantic checks, which I sometimes need when parsing arithmetic expressions, so that is a bit of a problem now.

Stuff blew up. But it looks like it's solvable.

VPN

Don't you have a VPN service? Or is that illegal in China?

We tried. The VPN providers

We tried. The VPN providers become blocked quickly.

Mailed to your MS account

You got mail.

Parser Combinators and Compiler Architecture

Having hand-written parsers for years, I recently wrote a C++ parser combinator library https://github.com/keean/Parser-Combinators, and posted about it on LtU. There are several advantages: the combinators produce faster parsers that, thanks to the declarative descriptions, are easier to read, and easier to change and modify without messing up complex parsing, like adding a new term type to a recursive expression. I found that after I had written a recursive descent parser, which seemed straightforward for the known grammar, it was really difficult to come back to it months later to change it.

In my compiler architecture I make extensive use of the visitor pattern, as it allows the state for various operations on the AST and types to be kept in the visitor rather than in the objects themselves. Almost every compiler operation apart from parsing can be implemented using visitors. You can see this approach in the type inference code I posted on GitHub https://github.com/keean/Compositional-Typing-Inference.
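For readers unfamiliar with the approach, here is a hedged sketch of its shape (not the code from the linked repo): each compiler pass is a visitor, and the pass's state lives in the visitor object rather than in the AST nodes.

    #include <iostream>
    #include <memory>

    struct Lit; struct Add;

    struct Visitor {
        virtual ~Visitor() = default;
        virtual void visit(const Lit&) = 0;
        virtual void visit(const Add&) = 0;
    };

    struct Expr {
        virtual ~Expr() = default;
        virtual void accept(Visitor& v) const = 0;
    };
    struct Lit : Expr {
        int value;
        explicit Lit(int v) : value(v) {}
        void accept(Visitor& v) const override { v.visit(*this); }
    };
    struct Add : Expr {
        std::unique_ptr<Expr> l, r;
        Add(std::unique_ptr<Expr> l, std::unique_ptr<Expr> r)
            : l(std::move(l)), r(std::move(r)) {}
        void accept(Visitor& v) const override { v.visit(*this); }
    };

    // One pass = one visitor; its state (a running result) is private to it.
    struct Eval : Visitor {
        int result = 0;
        void visit(const Lit& n) override { result = n.value; }
        void visit(const Add& n) override {
            n.l->accept(*this); int a = result;
            n.r->accept(*this); result += a;
        }
    };

    int main() {
        Add e(std::make_unique<Lit>(40), std::make_unique<Lit>(2));
        Eval pass;
        e.accept(pass);
        std::cout << pass.result << "\n";  // prints 42
    }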

Yeah well. Probably won't use it. Information flow.

We had this discussion several years ago, and as much as I appreciate your C++ skills, I opted out of it because: a) I found your solution somewhat unreadable, and b) I wanted more, among other things error reporting, parsing over token streams, and speed. So that, to me, was a non-solution. I went for straightforward recursive descent and I am not regretting it yet.

As far as the visitor pattern goes: nice, it isn't as verbose as I thought, but what if I want more precise (attribute) information flow around the AST nodes?

I.e., you have a node:

                  |
             APPLICATION
            /           \
        TERM0            TERM1

And I want

               ↓:T | ↑:T
              APPLICATION
          ↙:T / ↗:T  ↘:T \ ↖:T
       TERM0              TERM1

The arrows denote the information flow around the nodes: top-down, analytical, information and bottom-up, synthetical, information. (The information is chained, so whatever comes out of TERM0 goes into TERM1, while both terms may be rewritten.)

I am simply going to implement this without the visitor pattern, using a template and a gigantic case switch. Yeah, somewhat ugly, don't care. Well, that's the thought for this moment.

Maybe you have a solution? Would be nice to see, but it still probably won't make it into what I am implementing.

Fast, Lazy Tokenisation and Nice Error Reports.

Actually my solution has good error reporting, parses over streams of tokens using lazy tokenisation (which is where you apply a transformation to the parser instead of to the input stream), and it's faster than hand-written recursive descent parsers.

Here's a sample error report from the example Prolog parser:

  what():  failed parser consumed input at line: 46 column: 84
    expr(Fun, C1, arrow(A, B)), a(X) -> b(X) -> c(X), a(X -> Y -> Z), X -> Y -> Z, Z     expr(Arg, C2, A),
                                                                                   ^----^
expecting: variable, operator, op-list
where:
	atom = lowercase, {alphanumeric | '_'};
	op-list = term, [operator, op-list];
	operator = {punctuation - ('_' | '(' | ')' | ',')}- - "." - ":-";
	struct = atom, ['(', op-list, {',', op-list}, ')'];
	term = variable | struct;
	variable = (uppercase | '_'), {alphanumeric | '_'};

The EBNF is auto-generated from the parser combinators, which I think makes for a pretty understandable default error report.

Maybe you don't change your grammars as often as I do though? I have a lot of projects that include parsers and I find it easier to maintain them all with the combinator library than when they were all hand written recursive descent.

Regarding the visitor pattern, multi-directional information flow is not a problem: you simply maintain the state in the visitor object. "type_instantiate" is an example of this kind of information flow. A typing is passed in, and it recurses polymorphically over the type's tree structure returning a copy (the synthesised attribute), but at the same time it passes forward the list of variables encountered so far (the inherited attribute). By doing this, identical type variables in the input type map to identical type variables in the output. This is necessary because we want identical variables to all refer to the same node, not just have the same name.
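A small analogue of that flow, with invented names (this is not the actual type_instantiate code): the inherited attribute is a name-to-id map chained across the traversal in the visitor's state, and the synthesized result is an id chosen so that identical names map to the identical id.

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Renamer {
        std::map<std::string, int> seen;  // inherited: chained across siblings
        int next = 0;

        // Synthesize an id for a leaf; repeated names reuse the same id, so
        // identical variables in the input map to identical ids in the output.
        int leaf(const std::string& name) {
            auto it = seen.find(name);
            if (it != seen.end()) return it->second;
            return seen[name] = next++;
        }
    };

    int main() {
        Renamer r;
        std::vector<std::string> term = {"X", "Y", "X"};
        for (const auto& n : term)
            std::cout << n << " -> " << r.leaf(n) << "\n";  // X->0, Y->1, X->0
    }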

No. Handcrafted runs fine atm. But performance. 500KLoc.

Yeah well. It just needs to work for this interpreter/compiler.

I just ran a handcrafted parser on a 500KLoc file: 11 secs. And then it takes a whole lot of time to print it? I am not sure what's up. Something with vectors? Really annoying.

Did you test it on a 500KLoc file, yet?

Large expressions

I have tested the backtracking expression parser on 37,000+ character expressions, which takes about half a second on my laptop. The Prolog example parser parses about 3.4MB/s of data. I prefer measuring characters per second, rather than lines per second.

Ah, ok.

30ms on 40K characters (Unicode). Ah well, guess I am going with what I have.

I'll take another look at your visitor pattern solution, but I haven't warmed to it yet.

With Backtracking

The point was that backtracking is optional, so the 20,000 to 45,000 chars/sec (runtime variance) for expressions is fast, as every infix operator is a two-way choice point.

The Prolog parser also parses expressions, but uses a more controlled backtracking, so only the operator parser itself backtracks (there are no choice points). This achieves 3.4M characters per second, so appears to be over 100 times faster. Even this is not a truly fair comparison because it's not LL(1), due to the effective infinite lookahead on operator symbols.

The LL(1) CSV parser gets over 20M chars/second, compared to my recursive descent implementation that gets 5M (so 4x faster).

So the conclusion is, it is in every case faster, but the factor depends on the precise parser.

When you say you want to parse a stream of tokens, do you run the tokeniser pass on the whole input, storing the result in memory, or have you implemented some kind of lazy (on demand) token evaluator, where the tokens get stored into a linked list that is consumed by the parser and triggers generation of more tokens when it's empty? The reason I went for a tokenizing transform on the parser itself, even though this results in more use of backtracking, is that writing all the tokens to memory would be slow and resource intensive for large programs, and implementing proper on-demand lazy tokenization seemed more complex.
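For readers following along, here is a sketch of what a tokenizing transform on the parser can look like, with invented signatures: the layout skipping is wrapped around each recogniser, rather than being a separate lexer pass that writes a token array to memory.

    #include <cctype>
    #include <functional>
    #include <iostream>
    #include <string>

    struct Input { const std::string* s; size_t pos = 0; };

    using Parser = std::function<bool(Input&)>;

    // Transform a parser so it first consumes whitespace, then matches.
    Parser tokenise(Parser p) {
        return [p](Input& in) {
            while (in.pos < in.s->size() &&
                   std::isspace(static_cast<unsigned char>((*in.s)[in.pos])))
                ++in.pos;
            return p(in);
        };
    }

    // Recogniser for a fixed keyword.
    Parser lit(std::string w) {
        return [w](Input& in) {
            if (in.s->compare(in.pos, w.size(), w) != 0) return false;
            in.pos += w.size();
            return true;
        };
    }

    int main() {
        std::string src = "  let  x";
        Input in{&src};
        Parser let_kw = tokenise(lit("let")), ident = tokenise(lit("x"));
        std::cout << (let_kw(in) && ident(in) ? "matched\n" : "failed\n");
    }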

At the moment you could build a tokenizer out of parser combinators, but you would have to write all the results to an array before passing them to a second set of combinators to parse the tokens. I might implement a lazy stream to link two parsers together if it seems interesting.
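One possible shape for such a lazy stream, sketched with invented names: tokens are produced on demand and buffered only as far as the parser has peeked, so the full token array is never materialised.

    #include <cctype>
    #include <deque>
    #include <iostream>
    #include <string>

    struct LazyTokens {
        std::string src;
        size_t pos = 0;
        std::deque<std::string> buf;   // produced but not yet consumed

        void fill(size_t n) {          // ensure n tokens are buffered
            while (buf.size() < n && pos < src.size()) {
                while (pos < src.size() &&
                       std::isspace(static_cast<unsigned char>(src[pos]))) ++pos;
                size_t start = pos;
                while (pos < src.size() &&
                       !std::isspace(static_cast<unsigned char>(src[pos]))) ++pos;
                if (pos > start) buf.emplace_back(src, start, pos - start);
            }
        }
        std::string peek(size_t k = 0) {    // lookahead without consuming
            fill(k + 1);
            return k < buf.size() ? buf[k] : "";
        }
        std::string next() {                // consume one token
            std::string t = peek();
            if (!buf.empty()) buf.pop_front();
            return t;
        }
    };

    int main() {
        LazyTokens ts{"let x = 1"};
        std::cout << ts.peek(1) << "\n";              // "x", nothing consumed
        for (std::string t = ts.next(); !t.empty(); t = ts.next())
            std::cout << t << "\n";                   // let x = 1
    }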

I wrote a (dynamically alterable) operator precedence

parser that needs no backtracking. I know that's no big deal; it isn't the standard shunting-yard algorithm (which seems to be mentioned everywhere), and someone said that the standard version of it allows illegal parses, which mine doesn't.

It turns out that there are precedence relations that don't fit cleanly into grammars without explicit precedence; adding a new operator would involve adding more than one production, so they don't compose.

I'm going to experiment with building a whole language out of precedence relations with some sort of context (more than one kind of expression) and mixfix operators.

Those should compose in a useful way, which other grammars seem not to. People say that Parsing Expression Grammars compose, but I don't think they compose in a particularly usable way.

Fixed Operator Precedence

I fixed the ordering and fixity rules on operators. You can have an infinite number of them, but you cannot choose the precedence yourself. It uses a lexicographic ordering for longer operators.

The bad things about variable precedences are that: a) my language will probably have a lot of operators defined, and b) the semantics of the language change once you start defining the precedences.

So, I fixed it a priori and hope that will pan out.

Here's a little proof of concept version

proof of concept.

If you take this further, to having a table for looking up the precedence of operators, you'll notice that there are two distinct phases of the parse, which makes it possible for the same operator to be both postfix and prefix, or prefix and infix, etc., by having separate tables for those phases.

This algorithm can do what Prolog parsers do: allow you to add new operators and precedences on the fly, while you're parsing.
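To make the two ideas concrete (a table-driven parse plus on-the-fly operator definition), here is a compact precedence-climbing sketch with the infix table made dynamic, in the spirit of Prolog's op/3. The grammar and names are illustrative only.

    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    std::map<std::string, int> infix_prec = {{"+", 1}, {"*", 2}};

    struct OpParser {
        std::istringstream in;
        std::string tok;
        explicit OpParser(const std::string& s) : in(s) { advance(); }
        void advance() { if (!(in >> tok)) tok.clear(); }

        std::string primary() { std::string t = tok; advance(); return t; }

        // Parse expressions whose operators bind at least as tightly as min.
        std::string expr(int min = 1) {
            std::string lhs = primary();
            while (infix_prec.count(tok) && infix_prec[tok] >= min) {
                std::string op = tok; advance();
                std::string rhs = expr(infix_prec[op] + 1);  // left assoc
                lhs = "(" + lhs + " " + op + " " + rhs + ")";
            }
            return lhs;
        }
    };

    int main() {
        std::cout << OpParser("a + b * c").expr() << "\n";  // (a + (b * c))
        infix_prec["@"] = 3;                  // declare an operator on the fly
        std::cout << OpParser("a + b @ c").expr() << "\n";  // (a + (b @ c))
    }

Separate prefix and postfix tables, consulted in the two phases mentioned above, would slot into primary() and into the loop respectively.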

Oh. I already do that.

I have an operator table with fixed prefix, postfix, and infix predicates. I just like it fixed; it wouldn't be hard to make the operator table dynamic, and you would get your functionality.

The only difference is that my operator table is 'static' since I like it that way.

I parse like this:

    def parse_prefix_expr: parser ast =
        (parse_position <*> \p ->
            parse_prefix_op </> \e0 ->
            parse_prefix_expr <@> \e1 ->
            expr_application p e0 e1)
        <+>
            parse_expr_primaries

    def parse_infix_expr: ast -> ast -> parser ast =
        app = [ e0, e1 -> expr_application (pos e0) e0 e1 ];
        i =
        [ e0, op0, e1, op1 ->
            if op_less (txt op1) (txt op0)
            then parse_infix_expr (app (app op0 e0) e1) op1
            else if op_less (txt op0) (txt op1)
            then parse_infix_expr e1 op1 <@> \e2 ->
                app (app op0 e0) e2
            else if op_left (txt op0)
            then parse_infix_expr (app (app op0 e0) e1) op1
            else if op_right (txt op1)
            then parse_infix_expr e1 op1 <@> \e2 ->
                app (app op0 e0) e2
            else failure ];
        [ e0, op0 ->
            parse_prefix_expr <-> \e1 ->
            (parse_infix_op <-> \op1 -> i e0 op0 e1 op1)
            <+> success (app (app op0 e0) e1) ]

    def parse_arithmetic_expr: parser ast =
        parse_position <*> \p ->
        (parse_prefix_expr </> \e ->
            parse_infix_op <-> \op -> parse_infix_expr e op)

Never was really certain whether this scheme works, but it passes unit tests, so, well, fuck Dijkstra, I guess.

Bugged

I tried to deduce what I was doing, started thinking about it, tried to give a proof, and immediately found a bug.

Ah well. Precisely what I didn't want, because I am rushing another compiler. There's a solution there somewhere; will fix later.

Unicode and Shared Pointers

Neither the lexer nor the tokenizer is lazy. I went for ease of implementation, so the lexer works on an in-memory Unicode string and the parser works on a vector of tokens. I could make the lexer lazy trivially, though; since I use lookahead a lot, I won't change the in-memory vector of tokens.

I probably mostly pay for Unicode handling and heavily for the use of shared pointers to represent term trees. It slows down substantially on 500K of short definitions. (My bet is that even when printing, the program spends most of its time creating and destroying refcounted pointers. Somehow, that slows down on large terms, which completely defeats the use of reference counting.)

Ah well. Life is never easy. Since my language has gotten no response yet, it doesn't pay off to optimize away the poor performance on very large files. Guess I'll stick with this.

Shared Pointers and Poor Emulations of Prolog

I use shared pointers in the Prolog parser, so you could look at that to see how I do it. It's pretty simple: I have a global "name" table where you look up strings to get an ID, then a map of IDs to variable pointers is held in a state object passed as an inherited attribute to the user-function attached to the parser, which is built out of combinators. For variables we get the ID from the name map and look up the ID in the variable map, and either return the pointer in the map, or allocate a fresh variable and push that into the map. Here's the user code from the example Prolog parser. Note this also keeps track of variables used more than once in a set called 'repeated', as this is useful for optimising the post-unification cycle check:

struct return_variable {
    return_variable() {}
    void operator() (type_variable** res, string const& name, inherited_attributes* st) const {
        name_t const n = st->get_name(name);   // intern the name to an ID
        var_t i = st->variables.find(n);
        if (i == st->variables.end()) {
            // first occurrence: allocate a fresh variable and remember it
            type_variable* const var = new type_variable(n);
            st->variables.insert(make_pair(n, var));
            *res = var;
        } else {
            // repeated occurrence: reuse the node and record the repeat
            st->repeated.insert(i->second);
            *res = i->second;
        }
    }
} const return_variable;

And here's the parser. "var_tok" is the tokenized recogniser for variable names; "variable" applies the above user-function to the output of the recogniser, and adds a name label used when pretty-printing parsers for error messages like the one above:

auto const var_tok = tokenise(accept(is_upper || is_char('_')) 
     && many(accept(is_alnum || is_char('_'))));

auto const variable = define("variable", all(return_variable, var_tok));

This parser is pretty printed like this automatically by the library:

variable = (uppercase | '_'), {alphanumeric | '_'};

Thinking about parsing more generally, by the time you add backtracking state, a parser ends up as a poor implementation of Prolog (not handling the state / inherited attributes very well). Having already found type inference to be a partial implementation of Prolog, I am tending towards adding parsing primitives to my Prolog implementation rather than spending too much time on the parser combinators. Of course my Prolog implementation uses the parser combinators, so I want to make them as maintainable as possible, as I suspect it will be a long time before my language is self-hosted.

Writing the tokens to memory will be a big bottleneck by the way, as you can no longer fit things in the CPU cache for large programs. The penalty for writing out and reading back could be 20 times (main memory being approximately 10 times slower than cache, and needing to both write out and read back).

I do like definite clause grammars

Have you ever noticed that if you put a cut at the end of each production of a definite clause grammar, the result is a straightforward implementation of Parsing Expression Grammars?

Leaving the cut out, you get rid of the "language hiding", deterministic nature of PEGs. But you do have to put a cut in SOMEWHERE, or the memory usage for decision points grows with the length of the input.

If your purpose for parsing is a compiler, then speed and memory no longer matter! Computers are thousands of times faster and have thousands of times more memory than when parsing technology was invented for compilers; we should no longer be wasting our efforts on optimizations like bottom-up grammars with tables, let alone regular expressions. There is no reason not to use more expressive tools now.

That said, the precedence parser I'm playing with is sadly efficient enough for the 50's, so I'm contradicting myself. Precedence parsing is so simple that it's funny.

No Dice

I use shared pointers in the Prolog parser, so you could look at that to see how I do it. It's pretty simple: I have a global "name" table where you look up strings to get an ID, then a map of IDs to variable pointers is held in a state object passed as an inherited attribute to the user-function attached to the parser, which is built out of combinators. For variables we get the ID from the name map and look up the ID in the variable map, and either return the pointer in the map, or allocate a fresh variable and push that into the map.

Well, as I gather that's not shared pointers, right? That looks like raw pointers referenced from a table. Not C++'s reference counted shared pointers. Am I wrong?

I am programming somewhat defensively, since I don't know what will come up, but I thought I needed refcounting: as terms get rewritten during various passes it (probably) pays off to simply have them share the parts which weren't rewritten. I.e., internally you end up rewriting a directed acyclic graph, which implies some form of GC. Since it is a DAG and C++ has shared pointers, I chose that.

But I have the uncanny feeling C++'s shared pointer implementation does a hash-table lookup on a pointer which means O(n) degradation, which becomes noticeable on large terms, and completely defeats the purpose.

The whole point of refcounting, which is what I want to use in a tiny VM too, is to do without the O(n) slowdown you normally get with stop-and-copy or mark-and-sweep GCs. So, I am not happy with it.

Wrong Kind Of Shared Pointer and Regions

I am using shared pointers in the sense that all instances of the same variable point to the same shared data structure. I didn't realise you meant a C++ "shared_ptr". You could write a version that uses shared_ptr, but it's not very efficient. You don't need a shared pointer in the map in any case, as its lifetime is less than that of the AST being created (i.e. parsing is an operation over the AST tree, in the same way a pretty printer is, and it's safe to assume no AST changes during such operations). If you really want to use shared pointers you can use the same map mechanism but just cast the pointer to a shared_ptr when constructing new AST nodes that reference the variable returned from this parser, so this code would not need to change.

I use region based memory management, so when AST objects are created a "unique_ptr" to them is pushed onto a stack, and when the stack is popped they are destroyed. It is a property of parsers, much like Prolog implementations, that you only ever need to destroy stuff in blocks at the top of this stack. You can record the top of the stack at choice points for backtracking, but in general you normally just let the whole AST get destroyed when it goes out of scope, as destroying the stack in the AST head object triggers all the objects held in the unique pointers to be destroyed. Because the AST objects are owned by the AST head object, you should use non-owning pointers (raw pointers) everywhere else that accesses the data from inside the scope of the owning object. It's good practice to separate ownership from referencing, and to have only one owning object.
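In sketch form the region is something like this (simplified; as described, the real code can also record marks at choice points for backtracking):

#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

struct AstNode { virtual ~AstNode() = default; };

// Region: a stack of owning pointers. Everything allocated after a mark
// can be destroyed in one block by popping back to it; letting the whole
// region go out of scope destroys the entire AST.
struct Region {
    std::vector<std::unique_ptr<AstNode>> stack;

    template <typename T, typename... Args>
    T* make(Args&&... args) {                 // allocate a node into the region
        auto p = std::make_unique<T>(std::forward<Args>(args)...);
        T* raw = p.get();                     // non-owning pointer for everyone else
        stack.push_back(std::move(p));
        return raw;
    }

    std::size_t mark() const { return stack.size(); }  // record at a choice point
    void release(std::size_t m) { stack.resize(m); }   // backtrack: destroy newer nodes
};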

Oh. I disagree. Completely.

The basis of a compiler is bookkeeping and term rewriting. If I am not going to rewrite a DAG then I am going to rewrite a tree, and the (other) bottleneck becomes copying of the tree, which is what you'll probably find in C compilers. In the end, if you want optimal performance, you'll probably rewrite trees, but then you'll hit corner cases where the copying won't outweigh reference counting.

As far as I know, Prolog rewrites DAGs or graphs. It makes no sense that all terms you construct would be copies of trees. That'll be a hefty penalty to pay for large terms with a lot of sharing.

I am going for reference counting because of ease of implementation and it allows me to, more or less, write idiomatic C++.

Ah well. Maybe someone will sometime implement a better shared_ptr.

Missing the Point

You can have shared_ptr in your AST. The point is the variable parser creates variable nodes, it does not hold any references to other nodes so that's why you would not see any shared_ptr even if you were using them in your AST implementation. You would use a shared_ptr in the node containing the variable. The map is there to make sure references to the same variable name return the same variable_node, as we can then test for variable identity simply by comparing node pointers.

Your point about tree copying is relevant when doing the tree transformations, but as I said you can use shared_ptr in the AST. My approach at the moment is to only create new nodes and keep the old ones. When you want to clean up you can make a fresh copy of the tree in a new region and free the old one. I am not saying you should use my approach, I am just saying that's why you won't see shared_ptr in my code, but you can still use them if you want.

Tree copying can often be faster than shared nodes due to memory locality. Prolog implementations now nearly exclusively use structure copying because in practice it turns out to be faster than structure sharing. It keeps references close to the top of the backtracking stack.

Well. It's either sharing or copying, right?

Abstractly an AST is either a tree or a DAG. So you either use some form of GC or refcounting, or you copy trees. Both have corner cases where the sharing either outweighs the copying, or the copying outweighs the sharing. (Though I grant you that at the moment, given the internals and the speed of GCC/LLVM, copying seems to outweigh sharing. But that may depend on the implementation of shared_ptr too.)

It isn't harder than that?

(My semantic analysis is substantially larger than what you need to do for a Prolog language. It includes getting the naming and referencing right in a namespaced language as well as making datatype constructors and method declarations explicit, type checking (interface declarations), and getting the identification of deconstructors and constructors right. So, yeah, I worry about it more than what you need in a Prolog parser.)

Just an Example

The Prolog parser is just an example included with the parser-combinator library. The semantic analysis for the language I am developing is likely to be much more complex.

Copying or Sharing are the major strategies, but what I think is interesting is how the tree-rewriting is a natural fit for Prolog. Prolog also can define the grammars for intermediate languages naturally (if you remember my post on nano-pass compiling in Prolog). It seemed to me writing my compiler I was writing bits of a Prolog implementation in an ad-hoc way for the compiler functions, and that I might be better off using Prolog with appropriate rules instead. I don't think performance is critical at this stage, and I could always extend Prolog with some specific built-ins if I find any performance bottlenecks.

One way to think of this is as a scriptable compiler where you can define the intermediate grammar rules, and tree-rewrite rules in a Prolog-like syntax.

Agreed.

Yeah well. I get that you're not far into the semantic analysis yet. I'll go with sharing despite the cost on large files. It simply pays off in ease of implementation, and I don't have any users, and maybe never will have.

This is a discussion for people interested in constructing compilers which will often need to compile programs of 1MLoc or more. I think copying outweighs sharing in that case, so a C compiler will always beat a handcrafted Lisp compiler there. But for now, it simply isn't worth it for me.

I bootstrapped more or less declaratively so all I do is term rewriting. No tables, or tables constructed on the fly whenever I need them. I am mostly going to repeat that effort, so I more or less desperately want the sharing.

Ah well. Sucks to pay performance this early in the implementation phase.

Did you ever try?

Control flow is dead simple and easy to reason about

Either your definition of "dead simple" or of "reasoning" must be at odds with the rest of the field. ;) Are you seriously suggesting that, say, Hoare and separation logic is simpler than equational reasoning?

If the indirect procedure manipulates state, we easily cross into callback hell, but if it's a pure function that consumes and returns a monad we are somehow...usable?

Well, not if it is a state monad. Though even then the explicitness of the monad helps some. But you are still assuming state. The trick is to avoid it. Because stateless computations simply don't have magic "behaviour". Instead of worrying about data and control flow, you only have to worry about data flow.

In all fairness

If I was interested in live coding and most of my work would involve getting the interaction between source code and gui right, I wouldn't be too interested in declarative styles of programming too.

Hoare and separation logic

Hoare and separation logic are good illustrations of how red tape can make anything difficult. That's a shortcoming of our mathematics, not of our human faculty of reason. From (I believe) Shakespeare's Henry VI Part Two: "The first thing we do, let's kill all the lawyers."

Unjustified presumption

Many smart people have tried to come up with something simpler over the last 60 years. And separation logic already was a significant simplification! All evidence (and there is plenty of it) indicates that the complexity of these methods is a direct reflection of the complexity of their subject. Unless you can demonstrate an alternative that is vastly simpler?

I intended no assumptions

I intended no assumptions there, in either direction, about what is or isn't possible mathematically. I had in mind more bemusement at our human foibles; if it came across more mean-spirited than that (which admittedly is always a danger when invoking that particular Shakespeare quote), I apologize. I'm skeptical of our ability to figure things out quickly; it seems, for example, you may be putting more confidence than I would in the decisiveness of many smart people trying and failing to achieve something for about half a century.

Yes, with superglue. I gave

Yes, with superglue. I gave a demo during my defense. I'm much happier that I can do real programs now.

Either your definition of "dead simple" or of "reasoning" must be at odds with the rest of the field. ;) Are you seriously suggesting that, say, Hoare and separation logic is simpler than equational reasoning?

Academia has always had different ideas about simplicity than the real world. Theoretically, functional is simpler than imperative, but when usability on real problems is considered, there are other issues to consider beyond theory.

You cannot avoid state in extremely interactive programs...and the key to making a stateless computation (like parsing) incremental is to add state. Control flow is easier to reason about than data flow dynamically, given that control flow is what the computer gives you to debug. Usually we have to debug both, but it makes no sense to unnecessarily transform something easy to debug (control flow) into something much harder to debug (data flow).

Control flow

Control flow is easier to reason about than data flow dynamically given that this is what the computer gives you to debug.

That's a non sequitur, of course. Also, it's ironic that you keep making this argument, given your strong belief in better tooling as a solution.

No irony

My point is that language design must consider tooling. Control flow is debuggable, and that is why I prefer it. Show me a decent data-flow or higher-order function debugging experience and I could totally change my mind; there is no irony in that. Yet whenever we get to that point, all I hear about is how debugging isn't really necessary.

Andreas, do you personally use a debugger and value the experience?

SubText / Usable Live Programming

Couldn't Subtext or indeed your own work on usable live programming support higher order functions relatively easily? Just like for normal functions you would be able to select a specific call for each lambda, except that the calls are filtered by the call which created that lambda.

For example if you have some code:

function f(list){
   list.map(function(x){ ... x ... })
}

You would first select a specific call of f. Then the selection list for the inner function gets filtered down to those calls that correspond to the outer call that you selected. So if this function f was called on list1 and list2, and if you select the call of list1 for f, then for the inner function you would only see the list of calls that correspond to list1.

Your notion of trace can also be generalized for pure functional programming. In an impure language order of function calls matters, so the trace of a program is a linear list. But for pure functional programs the order of calls matters only when the output of the first function is needed as the input of the second. So a trace is no longer a linear list but a directed acyclic graph. If you have code like this:

trace("the start")
a = f(x)
b = g(y)
c = a + b
trace("the end")

Then instead of having a trace like this:

[the start]
[message 1 from f(x)]
[message 2 from f(x)]
[message 1 from g(y)]
[message 2 from g(y)]
[message 3 from g(y)]
[the end]

You have this, because you know that the two calls are independent:

            [the start]
                |
      /------------------------------\
      |                              |
[message 1 from f(x)]      [message 1 from g(y)]
[message 2 from f(x)]      [message 2 from g(y)]
      |                    [message 3 from g(y)]
      |                              |
      \------------------------------/
                |
            [the end]

Of course you could flatten this to the former list if you wanted, but having more structure is generally useful.

Typical FP call chains are

Typical FP call chains are DEEP, even if you figure out a way to flatten recursion. Each level of call hierarchy roughly doubles user confusion, meaning they'll get lost pretty quickly if you can't get them to focus on a single line of execution at a time. And this is the basic challenge of it all: how can you hide enough details from the user to avoid overwhelming them while still allowing them to dig out what they need? So for the FP case, there are just too many function calls to navigate through. The only steps forward I've seen in this area are with slicing (e.g. Roly Perera's work), which is maybe what you are getting at?

Note that statement order also doesn't matter much in Glitch: given re-execution semantics all statements will see side effects of all other statements. That is totally achievable without purity.

What I was getting at was: -

What I was getting at was:

- Higher order functions fit naturally in your Usable Live Programming.
- It may be better to display a given trace as a DAG rather than as a list. When the language is pure this has the additional advantage that only connected things can have an effect on each other.

I think the size of the trace is a problem in any language, imperative or functional. Whether that means a loop with a huge amount of iterations or a big recursion tree doesn't matter that much. In fact I'd expect that a tree with a million nodes is easier to navigate than a flat list with a million nodes. If the branching factor of the tree is 2 then the depth is just 20.

You are right: the work does

You are right: the work does apply to functional programming; and representing traces as DAGs (or at least trees) is a good idea, and not just for FP. I've been thinking about this for a while (also traces in other formats, like tables, but the digraph is probably more cutting edge).

But tracing is necessarily imperative (even if we ignore the code being traced, trace statements themselves have implicit positions in the trace!), and I'm not sure how it fits into debugging functional programs. Tracing pure functional code doesn't really make sense to me, since the computations can be reused willy-nilly in multiple time-less iterations. Part of what makes tracing work well in YinYang is that it is indexed by time (so we can drag the slider to view different configurations of the trace), but in FP...it seems like everything would be dumped into a single huge trace (time would be there, but the debugger wouldn't be able to see how the code was using it).

So the problem in FP is that everything is explicit and nothing is fixed (time and side effects are just values to be consumed and returned), giving me nothing fixed to latch onto in the programming model beyond function application and reduction rules. For FP, I think the right way to go is custom library-specific, domain-specific debuggers, since those explicit values and higher order function compositions are usually defining something rather simple and fixed (e.g. imperative effects, or basic first-order data-flow wiring).

Debuggers

I use debuggers, but typically only as a last resort, when faced with the obligatory memory corruption problems in C++. Those tend to be the most unpleasant moments of my job. In most other cases, especially when dealing with non-trivial algorithms or data structures, debuggers are far too low-level a tool and not very effective. You rarely want to micro-step through control flow or inspect complex graphs manually. Strategic high-level printf digests usually get you much further much quicker. In really complex cases you want to build domain-specific visualisers yourself.

I'd argue that classical debuggers are -- to some degree at least -- a good example of a tool primarily addressing a problem that we shouldn't have in the first place if we were using better, more high-level languages. A symptom of our fixation on low-level control flow hackery. And indeed, functional programmers seem to see the need for a debugger much more rarely, which may explain why you find few impressive ones.

That said, there are or have been data flow debuggers for Haskell AFAICT, but since I never used them I can't judge them. OCaml also has a reasonable conventional debugger, but I've never used it either.

I think your position is

I think your position is taken by much of the PL community on the theoretical side, that we shouldn't need debuggers at all and we should be able to reason about programs in our head, or failing that, printf or some offline verification tool. And that is our disagreement...just what the programmer experience should be in the first place.

Debugging ≠ using a "debugger"

Not quite. Debugging is necessary. But "debuggers" only help at the lowest level of abstraction, which is not what I need to look at most of the time. And they don't scale to what's actually interesting.

Abstraction is the enemy of

Abstraction is the enemy of the concrete, of course, and debugging difficulty increases as code becomes more abstract (you could argue that the cost is offset by having less code to debug, but it's still a hill to climb). There are many domains where debuggers are essential; my crazily-reactive live programming work is one of them, I guess, and there overly abstract code is bad. When I write code, I think first about how I'm going to debug it, since I know I'll have to (there are too many moving pieces to get it right the first time).

If you construct your debugger out of printfs or use one provided by the IDE, they are both debuggers (tools for debugging). There is no standard definition of a debugger (and debuggers could also better support printf debugging).

Why in our head?

Why reason in our head, or with pen and paper (most realistically)? Why not with tool support? I expect some theory people might not look for tool support there because they're "brainiacs enough", but I'm not sure restricting ourselves to that is good.

For instance, some Haskell people expect you to do equational reasoning. Like when you simplify algebraic expressions in school.

However, effective support for that is lacking. Heck, why do *I* need to do these algebraic simplifications with pen and paper, instead of having a computer help me? Mathematicians also use computers instead of pen and paper sometimes — I'm thinking of computer algebra systems.

In most other cases,

In most other cases, especially when dealing with non-trivial algorithms or data structures, debuggers are far too low-level a tool and not very effective.

Not when you have a good visual debugger, like the one found in Visual Studio. printf doesn't remotely compare to a good debugger in my experience.

That said, I'm not sure what a dataflow debugger is supposed to achieve. If your language is typed, your dataflow is already consistent.

Often when you are debugging

Often when you are debugging you see some value that is wrong. A dataflow debugger lets you answer the question 'where did this value come from?' as opposed to manually stepping backward (if your debugger even supports that) until you see the point where the wrong value was created.
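As a toy sketch of the idea (everything here is invented for illustration): if every value carries the values it was computed from, the debugger can walk the dataflow backwards from a suspicious result:

#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Every value remembers the values it was computed from.
struct Traced {
    double value;
    std::string label;
    std::vector<std::shared_ptr<const Traced>> sources;
};

using TracedPtr = std::shared_ptr<const Traced>;

TracedPtr leaf(double v, std::string label) {
    return std::make_shared<Traced>(Traced{v, std::move(label), {}});
}

TracedPtr add(TracedPtr a, TracedPtr b) {
    return std::make_shared<Traced>(Traced{a->value + b->value, "+", {a, b}});
}

// "Where did this value come from?" -- print the dataflow backwards.
void explain(const Traced& t, int depth = 0) {
    std::cout << std::string(depth * 2, ' ') << t.label << " = " << t.value << "\n";
    for (const auto& s : t.sources) explain(*s, depth + 1);
}

int main() {
    explain(*add(leaf(1, "a"), leaf(41, "b")));  // 42 came from a and b
}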

You want both actually.

You want both, actually. Printf is great at creating a trace; in the usable live programming paper we show how to make that navigable and integrated into a more comprehensive debugging experience.

I have never seen a type system that can eliminate all bugs from all programs. Data flow needs to be debugged just like everything else.

Reductio ad category theory

I'll agree to a large extent — I can think of collection combinators. Heck, even Smalltalk's collection library is higher-order.

If Haskell is hard to understand it's not because it's using a bunch of higher order concepts, but because it's using bad higher order concepts.

If that were literally true, we could argue that category theory is a source of bad abstractions by pointing at categorical Haskell code. At least people who have studied category theory will disagree.

Let's instead agree with findings showing that abstraction is cognitively hard, and figure out what we can do about that. Heck, the average mathematician complains about category theory being too abstract!

Yet, category theory can make things simpler (that is, having a more compact explanation). Not easier, unless you've already spent the effort to master the abstractions.

(I named "Reductio ad category theory" after "Reductio ad Hitlerum" for extra fun, though I don't propose that I automatically lose the argument by using this technique).

Hidden Types

I think what makes higher order programming hard is the proliferation of hidden types. Take fold, which is easy on its own, and then compose a few together operating on parameters. It's hard to see what's going on, as you have to 'unfold' the code in your head to work out what all the intermediate types are. Equivalent imperative code will often iterate (using vector indexes) over mutable containers whose type is constant throughout the computation. This is much easier to understand.
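A compressed C++ rendering of the complaint (with auto everywhere, the intermediate accumulator types never appear in the source):

#include <numeric>
#include <vector>

int main() {
    std::vector<std::vector<int>> xss = {{1, 2}, {3}, {4, 5}};

    // Folds composed over folds: the outer accumulator and the inner
    // accumulator threaded through it are both int, but neither type is
    // written down -- you 'unfold' the code in your head to recover them.
    auto total = std::accumulate(xss.begin(), xss.end(), 0,
        [](auto acc, const auto& xs) {
            return std::accumulate(xs.begin(), xs.end(), acc);
        });

    // The imperative equivalent iterates over containers whose types stay
    // constant and visible throughout the computation.
    int total2 = 0;
    for (const auto& xs : xss)
        for (int x : xs)
            total2 += x;

    return total == total2 ? 0 : 1;
}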

Bad example?

Mutating a single container would be analogous to multiple maps, not folds. Also, the cases that you can implement by mutating a single container necessarily have the same type at every stage, so there wouldn't really be anything hidden.

Like This:

I'm thinking of examples like this:

http://lambda-the-ultimate.org/node/5103

unproductive first-order box

Now I'm not claiming that visual programming is "solved", but this supposed first-order limitation doesn't really exist: higher order visual programming.

Cool

.

The need to express

The need to express mathematical formulas in code will not go away, no matter how smart your environment. So features like operator precedence and overloading will never go away either.

I'm not suggesting that operator precedence will go away. I'm only pointing out that it, like many other features designed to make textual programming tolerable, does have non-trivial costs for tooling. I've also not argued that text is unproductive as a medium. There are many domains where text works very effectively, or at least not too badly.

I agree that text can work reasonably well for math formulas. But we do have options for how this happens. E.g. rather than building math formulas, syntax, precedence and so on into the language and every tool that parses it, we could simply write a formula (or pieces of it) into a literal string value and parse it via a library-based DSL and a little staging or partial evaluation.
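A deliberately tiny sketch of the string-DSL idea (a toy interpreter; a real library would stage or partially evaluate the parsed form rather than re-parse on each call):

#include <cctype>
#include <cstddef>
#include <string_view>

// The formula lives in a string; a small library parses and evaluates it.
// The host language and its tools never need to know the formula's syntax
// or precedence rules.
struct Formula {
    std::string_view s;
    std::size_t pos = 0;

    char peek() const { return pos < s.size() ? s[pos] : '\0'; }

    long number() {
        long n = 0;
        while (std::isdigit(static_cast<unsigned char>(peek())))
            n = n * 10 + (s[pos++] - '0');
        return n;
    }
    long term() {                                  // '*' binds tighter than '+'
        long n = number();
        while (peek() == '*') { ++pos; n *= number(); }
        return n;
    }
    long expr() {
        long n = term();
        while (peek() == '+') { ++pos; n += term(); }
        return n;
    }
};

long eval(std::string_view src) { return Formula{src}.expr(); }
// eval("1+2*3") == 7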

filesystem

When optimizing for plain text expressiveness and readability, and filesystem integration (to leverage the common text editor)

It seems evident to me that the unix-style hierarchical filesystems have a far larger impact on modern language design than text editor support does. After all, folks use a wide variety of editors that only need to agree on the text encoding. Meanwhile, there's only a narrow variety of filesystems and they all must agree on a pretty sophisticated set of abstract semantics.

Not the editor

Reports I've seen say that the Goto-Fail bug was caused by a version control merge, not an edit. It's entirely possible that no human being even looked at that code until long after it had shipped.
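For readers who haven't seen it, the shape of the bug, reduced to a self-contained sketch (the real code was Apple's SSLVerifySignedServerKeyExchange; the stub names here are mine):

// Stubs standing in for the real hash-update and signature-verify steps.
static int hash_update() { return 0; }
static int final_verify() { return 0; }

static int verify() {
    int err;
    if ((err = hash_update()) != 0)
        goto fail;
        goto fail;   // the duplicated line: always executes with err == 0,
                     // so the final verification below is skipped entirely
    if ((err = final_verify()) != 0)
        goto fail;
fail:
    return err;      // 0 means "verified", even though final_verify never ran
}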

Many different sorts of tooling could have prevented Goto-Fail. The editor is the last place I would look for answers.

remedial style improvement helps

Let me apologize in advance for my remarks bordering on trivial.

the language shouldn't have had such an error-prone syntax for its if statement

I suspect no one laid eyes on the code change committing that bug, and that it was an auto-merge no one inspected. (Yes, crazy.) The if-statement syntax is error-prone though. For example, you might want to search a code base to see if any lines have a semicolon at the end of the first line of an if-statement:

    if (foo());
        goto end;

After the first time I saw one of those, in college, I turned up two more in a global source code search at work. Braces in K&R style make it look off.

Some shops, like mine, require that open and close braces always be used, even when optional. So braces and indentation are redundant; some redundancy is good, acting as a consistency check. It doesn't seem verbose in practice when an open brace occurs at the end of a previous line, K&R style, and a close brace occurs on what is otherwise a blank line. (I use actual blank lines rarely.)

Tools saying why your code looks questionable are a good idea. False reports are a problem though, when treating warnings as errors per disciplined practice. I like the idea of languages coming with specific suggested tools, in general. Itemizing things a tool could verify in a language seems worthwhile, for example, in a language spec.

syntax can be more robust

It all borders on trivial except that the consequences of an error can be, and in this case were, highly nontrivial.

Requiring the if to be terminated by an endif, a keyword not used for anything else in the language, would make the problem both visually obvious and, most likely, syntactically incorrect even if generated automatically with no human looking at it (until, of course, a human does look when they learn the automatically generated code won't compile because it's syntactically incorrect). endif would also eliminate the problem with

if (foo());
    goto end;

since to avoid a syntax error it would become either

if (foo()) endif;
    goto end;

or

if (foo()) goto end; endif;

Why is it

that programming languages that academia is interested in and actively working on are NEVER designed to have robust, easy-to-scan syntaxes?

Actually usable languages are always left up to hobbyist sorts to invent.

For instance, you based your own paper on Scheme. I realize that it would have been a distraction from your thesis to make the language it defined readable and give it a robust syntax, and a layer to preserve the abstraction in that case (s-expressions to syntax and back, or something).

There's enough that could be said about that that I will stop here instead of changing the subject.

Oh, the 'Lisp syntax' thing

Some academics fall in love with clever devices that make things work; others are looking at something else and can't afford to distract themselves. I have some ideas, myself, regarding Lisp and syntax, but the size of that problem is probably at least one SDU (something I picked up from my own academic experiences: I tend to measure massive personal efforts relative to what it takes to do a typical doctoral dissertation — a Standard Dissertation Unit, SDU).

Even McCarthy didn't intend

Even McCarthy didn't intend s-expressions to be the real Lisp notation, but here we are! Notation seems to have more to do with momentum and familiarity than with ergonomics.

Python I think was the first language to treat syntax as a serious aesthetic and ergonomic concern.

It's more than Lisp, it's math

Is haskell readable?

Is logic notation?

Are the notations used in papers?
...

Also, the idea that a language has "macros" but those macros can't create a readable notation for anything is mind-boggling in its stupidity. Yes, it's true that computer scientists have managed to find some abstract meaning for macros and fexprs that is about implementation, but somehow avoids anything particularly USEFUL, like the ability to process notations that a human being would find actually usable.

[Edit] I guess that's not fair. The non-academic languages haven't managed to make macros that can define usable syntax either. But Lisp promised to be an extensible language, and it failed to be extended to be as readable as COBOL or FORTRAN, which suggests that the goal failed.

Anyway I'm working on the problem myself. Maybe I'll find a way to report back here that won't be this cranky. I have a toothache this morning so I'm cranky.

Purity instead of usability

Another example is the insistence that recursion should replace iteration.

...

So try to rewrite all of your numerical methods code in Haskell, or Curry or Prolog.

...

But have a few good friends watching you do it, and make sure that they're strong enough and fast enough to pull the razor out of your hand before you kill yourself.

Sure, iteration can be expressed as recursion, and the lack of state-changing variables can sometimes make optimizations a bit more obvious - and some less obvious.

All of this obsession over implementation shouldn't have to be expressed at the human-coded, human-readable level. It's the wrong level.

Please be constructive

My answers to these three questions would be:

Haskell: Yes, when point-free style and arcane infix operators are not abused.

Logic: Yes.

Paper notations: I don't know which you mean, but assuming "inference rules" as a syntax, Yes.

You use strong wording in your posts ("stupidity", "actually usable", suicide metaphors), but I find it to have little actionable content. I would be interested in less grand rhetoric and more precise, justified discussion.

Your antagonizing style was already pointed out as problematic on the Pycket topic. Could you make an effort to be less blunt and more respectful of the others (present or potentially present) in the discussion?

progress?

I'm sorry; I re-edited my comment above to apologize for being cranky.

But take the classic, "Numerical Recipes in Fortran" and recode the programs into Haskell.

If you find that the logic is more obscured in Haskell than in Fortran then 60 years of progress hasn't even managed to preserve the obvious advances of the first CS languages.

I have been programming for 30 years, and I never stop being surprised at how little useful progress I see in programming.

Tempus fidgets

I have been programming for 30 years, and I never stop being surprised at how little useful progress I see in programming.

Fwiw, from the perspective of a second-generation programmer (my mother got her first programming job on ORDVAC in, iirc, 1952), I've got a pretty philosophical attitude toward progress. I grew up in the 1970s when it was "common knowledge" that the pace of history was accelerating exponentially, and I eventually concluded that's a perpetual illusion, caused by our knowledge of history always being most detailed at the late end. For a century or so, as I recall, in the European Dark Ages the number of watermills in Europe was increasing at an exponential rate. Anything in mathematics that happened less than a century ago is recent, and programming is a *lot* like math. CAR Hoare: "[ALGOL 60] was not only an improvement on its predecessors but also on nearly all its successors."

Is haskell readable? No.

Is haskell readable?

No. Most languages lack readable syntax, and I don't think that's restricted to academia. Lisp is frankly one of the best, relatively speaking, largely because it doesn't waste a lot of effort creating hard-to-read syntax for the supposed sake of readability. At least it's simple and syntactically unambiguous. Which is why I've been cautious about envisioning an improvement to it.

Is logic notation?

Depends, relatively, on which notation you're talking about. I'm currently putting together a blog post about Church's 1932 logic, and I gotta say, if you think the modern logical notations are hard to read, they don't hold a candle to this stuff.

Over the years

CS papers have become unreadable to me.

Once upon a time, code would be in some readable notation. Algorithms would be in code.

Now I think I mostly see logic and type notations that I suppose are Haskell and logic put in a blender with Greek letters.

Even if I decode the notation, it's proofs not code. I feel cheated out of hours, conceivably weeks of time it would take to glean anything useful from them.

Sub-optimality

I think it's important to consider tools when designing a language, but I don't see why also considering bare-bones tool environments should lead to much sub-optimality. If you're developing any kind of general-purpose language (as opposed to a focussed DSL), different users will have different metrics anyway: what is optimal for one will be sub-optimal for others, so you need to provide a broad near-optimal plateau in any case.

If the motivating idea for *your* language design requires you to sacrifice bare-bones utility, then by all means go for it. But, I think the idea that *any* language design will necessarily require the same kind of sacrifice to work well with advanced tools is a much more extraordinary claim, and deserves appropriate skepticism.

As a concrete matter, I am interested in learning what things are relevant to advanced tool support, so I can avoid those which might make it more difficult, or pursue those which might make it more effective. (I should also mention that I'm inclined to go ahead with my plan to steal as much H-M-style type inference as I can get away with...)

Type inference is a good

Aggressive type inference is a good example where considering a bare-bones environment leads to a bad experience. You can't really fly an F-18 without a heads-up display.

Well. I agree with you.

Good IDEs seem to help programmers. But I wouldn't know, I use vim.

dynamic inspection of complex part behaviors

I'm particularly interested in tools that tell you what happened (or what's happening), since you could design a language where finding out would be hard, unless you considered how a tool would be able to present such info.

stop talking about programming languages in isolation of the tools that support them.

As Moore's law stalls, and further progress in ramping up offerings involves more cooperating sequential programs, I would like to see tools address system behavior when more than one "program" is running to get things done. I think system behavior should be part of the language runtime, so tools are responsible for presenting means for devs to understand, monitor, and control what happens when concurrent entities interact.

I think you had something slightly different in mind, about how a language system could be very clever about code and program relationships. So I wanted to encourage interest in showing emergent effects of dynamic systems as part of the purview of language tools.

Evading the problem

There are many other places than your editor where you need to read and understand code: in code reviews, in diffs, in VCSs, in debuggers, in crash dumps, in documentation, in tutorials, in books printed on paper.

Code isn't bound to paper. Even when printed out on paper, it does not need to printed out as it was typed in.

What is "the language" then? What you type in or what you print out? I say the latter, while the former is just a typing aid. And you need to design the print format first, because your "language" can't be communicated without, i.e., isn't a language. So you've basically reinforced my point.

not language

In all cases where you want to read code, you have a computer. Why think of code as some linear stream of text? It could be a tree, it could have hyperlinks; no one "reads" code from beginning to end like a novel. They explore it like it is its own little world.

So what is your point then? That we should purely focus on the strings of sequential words that aren't going to be consumed sequentially anyway in any normal context? Or might we instead focus on how code will be consumed by programmers when they are exploring it for various tasks (debugging, understanding, crash analysis, code reviews...)?

Unless you are Amish and are not allowed to read code on a computer, I don't see why you insist that the reading and writing experiences be separated from the language.

Language is efficient

It's because linear language is the most efficient way to get information in and out of a human. Our cognitive faculty is developed through spoken language, so this will always be the most natural form of communication. One word after another, modifying a hidden state.

communication, not language

Code is not written beginning to end like a novel. Its access is task specific.

All advancements related to language after the development of spoken language (50-100kya) and writing (6kya) have been related to tools, not the language itself (scrolls, codex, books, printing press, telegraph, ...).

The language of code is already quite developed; the potential for advancement lies elsewhere.

Not all those cases are "computers"

In all cases where you want to read code, you have a computer.

That's a false premise right there, and I very much assume it will remain so for many years to come.

More importantly, even having a "computer" does not imply that you have a smart development IDE, let alone a suitable one for each particular language you might need to look at on a given day of the week. Browsing a programming tutorial on a tablet? Do I need to install an app first, one for each language in the world? Or does each tutorial web page include 30 MB of code smartifier just to be able to display code in interactively readable form?

You are assuming a technological mono culture that is neither realistic nor desirable.

Let me repeat. Using IDEs: very good. Depending on IDEs: very bad. I'm a happy user of Eclipse, but I would be terrified by the thought that I could only read or edit code with a heavy-handed tool like Eclipse. Complex IDEs on the critical path would be a recipe for disaster in many domains. What if I need to debug code remotely, through some narrow SSH tunnel, to give just one example? The success of text-based language as a low-overhead, portable abstraction is neither a coincidence, nor obsolete.

The only exceptions would be domains that are very GUI-driven and proprietary anyway. But even there: Java failed on the web, JavaScript succeeded; there is a strong case that this has a lot to do with its simple, directly readable deployment format as plain text that everybody can read and hack everywhere with low tooling overhead.

Why would you need to edit

Why would you need to edit code over SSH?! This sounds like an extreme edge case, and it's not worth dictating a representation for code to satisfy that. Just download the code to your local machine and edit it there. That's the right thing to do even for plain text.

Code is already processed by some syntax highlighter in order to be displayed in a tutorial. A syntax highlighter for a structured format is even easier to write than making a parser first (or worse, some crazy regex) and then highlighting code based on that. And with a structured format we can easily have nice things like linking function names to their documentation page.

Not so edgey

Because, say, it runs on some device, or in a data center, or some other hardware that is different from my local machine. Having to copy code back and forth for each little edit in a debug cycle would be wildly inconvenient.

This may be an edge case, but probably far more common than you might think. Where I live, people have to do it all the time.

The executable runs on the

The executable runs on the device, not the source code. If for some reason you need the source code on the target device it's not hard to set up file system sync. I see no reason why you would want to run an editor like Vi on the target device and access it via SSH, rather than running a nice editor on your workstation and then syncing the code, or better yet, compiling on the workstation and copying the executable. Warping the entire programming tool chain because you want to avoid having to sync some files seems crazy to me.

Unfortunately

...that is not how it is likely to work in practice, in a heterogeneous environment. And cross building actually is a very hard problem once you move beyond standard text I/O with your program.

I thought cross-building was standard?

I've done embedded development with cross-building and remote debugging in a company, so I'm confused — can you explain what's "very hard"?
We were using C, and I'm not sure tooling for other languages is as advanced; but that alone doesn't make a cross-compiler conceptually hard.
(I'll speculate that Unix geeks who understand functional programming are scarce, and that might be the biggest problem.)

Heterogeneity

Try building or debugging Chrome for Windows on your Linux dev desktop and you'll see. And don't even get me started about mobile OSes... ;)

Everything you discuss is a

Everything you discuss is a tooling concern. You don't want to depend on some tools because you need to use some other tools, and you aren't confident that they can integrate in practice. But what IDE doesn't support remote debugging via a SSH tunnel? I mean, seriously, it's all been done, the plugin model is one of the few successes of extensible architectures.

Dart and TypeScript exist solely because JavaScript is not toolable. Why JavaScript succeeded where Java failed was strictly due to its integration with the DOM (we could say both were pervasive, but one was made to replace the web, and one was made to augment it). Likewise, few of us want to replace text; we just want to augment it.

We lack concrete examples

Do we have concrete examples of "good" language features that are enabled by wide-spread tooling in a given community? Of "bad" language features? Assuming everyone had bought Sean's point years ago, precisely which parts of the languages we have today would have been done differently?

One example that I think is good: the go fix tool "finds Go programs that use old APIs and rewrites them to use newer ones". There are surprisingly few languages that make wide-spread use of such tools today, and Go is one of them *because* of tooling (the other tool, "go fmt", has totalitarian control over the concrete syntax of user programs and can guarantee that pretty-printing a modified file will produce a diff no larger than reasonable). This is a godsend to API designers: the ability to lower the cost of API breakage for our users.

One example that I think is bad: Java tolerated an inane amount of verbosity for years (we discussed the lack of type inference recently) because of decent boilerplate-generation support from major IDE vendors. In my experience the verbosity of the language makes for a very bad experience when teaching beginners (it gives them the idea that programming is mostly about rote memorization of magical incantations).

Here are some syntactic

Here are some syntactic features that interact with auto complete.

The x.f(y) notation found mostly in object oriented languages works well in combination with auto complete. You type "x." and your editor displays a list of operations that x supports. This can be generalized to arbitrary subexpressions: if your cursor is in some enclosing expression C[...] then you get suggestions based on the type expected by C. For example if the type expected is boolean and the lexical environment contains a variable p of type boolean then that variable will be in the list of suggestions. If you view f as a first class function in x.f(y) then this gives the auto complete behaviour of current OO IDEs. For example if x is a List<T> and you type "x." then the expression e in the context x.e must be of type List<T> -> Q. You then look in the lexical environment for all values that match this type, which are the functions that work on lists.

The x.f(y) notation works better than f(x,y) because in most cases you already have some value x and you want to do some operation on it but you may not know the name. It happens less often that you have some operation in mind, but you don't yet know on which variable you want to do it. So typing "x." gives more useful suggestions than "f(". The vast majority of values in the lexical environment are functions, not variables like x and p, so it's useful to have a syntax that gives good auto complete for function names.

Another example in C# is the LINQ syntax which looks like SQL but has the order of some clauses reversed to make sure that variable binding comes before variable use (from clause comes first in LINQ). This ensures that you have auto complete for those variables in subsequent clauses.

Dot is magic

One problem with the generalization you're proposing for dot completion is that literally anything could be legal in most contexts.

foo(cat:String):Nat = 3*(c|)

I've drawn the cursor with a pipe. Should we offer to complete the c to cat? What if they are trying to type cat.length?

The magic thing about dot is that you know that 'o.|' is going to be followed by a field of o. So, yes, you can make the generalization you're suggesting, but you'll either always be suggesting almost everything or you'll sometimes miss what they really wanted.

The dot model depends on the

The dot model depends on the type of the receiver for reasons of usability. It's not perfect and comprehensive, but it is good enough in a "worse is better" kind of way. Contrast that to code completion proposals I've seen for Haskell: immensely powerful but ultimately unusable.

And this is really just an accident: OO happened to have a convenient anchor to latch onto when code completion became a thing; Haskell, with all its statically typed glory, did not. Imagine if SPJ and Wadler, etc..., went back and re-designed Haskell with the usable code completion as a first-class language design concern...wouldn't the result be awesome?

I don't think it's a binary

I don't think it's a binary choice between doing the suggestion and not doing the suggestion. It's a ranking problem. That completion after the dot is more useful is exactly my point! This doesn't mean that other completion is useless though. In F# you have the |> operator which is like the dot but works with any expression on the right hand side. So you could do list |> reverse but also list |> (fun l -> reverse l). The completions for list |> could still be ranked according to whether the function takes a list as its first argument, even though there are other compound expressions that could also be valid.

Funnily enough, the reason that c could be completed to anything is because the dot operator exists. If you had only f(x,y) syntax and no operators, then the only thing that c could be completed to is some Nat or some function returning a Nat. There would be no way to make it a Nat after you typed cat.

You're right

I agree with you. I was just pointing out that dot has somewhat special syntactic properties that interact very well with completion. And most OOP languages consider dot to be pulling fields from the object rather than as supplying the object as the first parameter to an arbitrary function. So the set of candidate fields is closed in most OOP languages rather than being an open set of candidate functions.

Tooling

That is an interesting case study, but what exactly does it mean in term of tooling?

I understand it as "designing the syntax of your language so that the small-search-space thing comes first and the large-search-space thing comes next lets programmers explore the large space, refined by the small-space choice, through tooling". Or maybe it is not small-search-space vs. large-search-space but local-meaning (which kind of implies a small search space, but also that completion would not behave reliably-similarly in most places) vs. globally-available-choice. Or asking programmers to write the name they have in immediate memory, and then helping them find the one that is not.

So this particular example of yours would be an example of a purely-textual syntax choice, informed by what we know to be easy to tool. To compare with Sean's original proposal:

if we accept tools as part of the core experience, we are able to make decisions that are impossible given the language by itself

The present case is not about tooling enabling more/better choices, but rather about tooling proposing a *distinction* of two alternatives that could be perceived to be equivalent.

Ah, I read your question

Ah, I read your question backward: I thought you meant examples of how we design languages differently because of tools. The question of tooling enabling better choices is indeed much more interesting.

Look at language extension. In Lisp you have macros, which allow you to create constructs that interpret sexpr subexpressions in arbitrary ways. There are a couple of problems with them. For one, lexical scoping doesn't always work well. There are hygienic macros, but they don't feel right to me for various reasons. Macros are limited to sexprs; you can't have arbitrary syntax inside them. Editors may not work well with them: you may have syntax highlighting for keywords so that `if` and `defun` get a different color than normal identifiers, but inside a macro the editor doesn't know which are keywords and which are normal identifiers. You can also imagine a language that allows you to hook into the parser, which would allow arbitrary syntax extension, but this makes the editor problem even worse, and such hooks behave too wildly in general. Even arbitrary syntax extension is limited: you can't use it to extend the language with RGB color literals which you can pick visually in a color picker. It's limited to plain text.

If you broaden the view of programming language to the whole user experience you can do better. Let's think about how a simple structural editor would work. You have some representation of the AST, and the editor knows how to display that AST. Each node in the AST gets displayed as its own little GUI widget depending on the node type, with nested widgets for its subexpressions.

How would language extension work in this model? We would add a new node type to the AST, and define how that new node type gets displayed as a GUI widget in the editor. So we could define a new node type ColorLiteral(R,G,B) which gets displayed as a small square with the given color, and if you select it you get a color picker. Instead of defmacro you get a defsyntax construct which lets you define and bring into scope a new language construct, along with a function that takes a ColorLiteral(R,G,B) and outputs its GUI widget.

You can solve the hygiene problem by letting that function take an additional argument that represents the lexical environment. ColorLiteral does not care about the lexical environment, but constructs that support nested expressions do. The `let x = e1 in e2` construct in a given lexical environment E first creates the GUI widget for e1 in environment E and then creates the GUI widget for e2 in the environment E + {x}. So the lexical environment becomes a first-class thing, and each node in the tree controls the lexical environment of its children.

You have a special AST node `Hole` which represents a subexpression that has not yet been entered. Its GUI widget is a text box which provides auto-complete based on the lexical environment (which it got from its parent node). The things inside the lexical environment are not just variables, but in fact all language constructs. There is an `if` entry inside the lexical environment, which when picked creates an `if` node. Variables in the lexical environment are just those entries that happen to create a variable node. Note that defsyntax is not special: it just adds an entry to the lexical environment which creates the new node type.
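To make that concrete, here is a minimal C++-flavoured sketch of the model (every name is invented for illustration; a real editor would have much richer widgets and environments):

#include <memory>
#include <string>
#include <utility>
#include <vector>

// "Widget" stands in for whatever the editor actually draws.
struct Widget { std::string kind; std::vector<Widget> children; };

// The lexical environment is a first-class value, threaded parent-to-child.
struct LexicalEnv { std::vector<std::string> names; };

struct AstNode {
    virtual ~AstNode() = default;
    virtual Widget render(const LexicalEnv& env) const = 0;
};

// A new construct is a new node type plus its display. No parser involved.
struct ColorLiteral : AstNode {
    int r, g, b;
    ColorLiteral(int r, int g, int b) : r(r), g(g), b(b) {}
    Widget render(const LexicalEnv&) const override {   // ignores the environment
        return {"color-swatch(" + std::to_string(r) + "," +
                std::to_string(g) + "," + std::to_string(b) + ")", {}};
    }
};

// A not-yet-entered subexpression: a text box offering completions drawn
// from the environment its parent handed it.
struct Hole : AstNode {
    Widget render(const LexicalEnv& env) const override {
        Widget w{"completion-box", {}};
        for (const auto& n : env.names) w.children.push_back({n, {}});
        return w;
    }
};

// let x = e1 in e2: e1 is rendered in E, e2 in E + {x}.
struct Let : AstNode {
    std::string name;
    std::unique_ptr<AstNode> bound, body;
    Let(std::string n, std::unique_ptr<AstNode> e1, std::unique_ptr<AstNode> e2)
        : name(std::move(n)), bound(std::move(e1)), body(std::move(e2)) {}
    Widget render(const LexicalEnv& env) const override {
        LexicalEnv inner = env;
        inner.names.push_back(name);
        return {"let " + name, {bound->render(env), body->render(inner)}};
    }
};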

Hopefully this was not too confusing. So what does this enable? It enables clean and truly arbitrary language extension, and each language extension comes with a specific display and editor for each new language construct. In fact a language extension is nothing but an extension to the IDE. You could imagine extending a language with SQL queries, regex literals, bitmap literals, a logic programming sublanguage, pattern matching, tables, state machines, graph literals, etc. If you wanted you could even create a Java sublanguage, for which the AST node would be represented as the ASCII Java code, and the GUI widget would be a plain text editor.

It's also great for language evolution. New language constructs can be tried out as a library. You can modify the syntax of the language without breaking anybody's code: you just change the way AST nodes are displayed. As a trivial special case you can rename functions without breaking anybody's code: it just changes how identifiers of that function are displayed: instead of displaying as "foo" the exact same AST node now displays as "bar".

The x.f(y) notation works

The x.f(y) notation works better than f(x,y) because in most cases you already have some value x and you want to do some operation on it but you may not know the name. It happens less often that you have some operation in mind, but you don't yet know on which variable you want to do it. So typing "x." gives more useful suggestions than "f(".

This doesn't seem insurmountable though. Given a language with f(x,y) calling syntax, we could simply use a placeholder to represent the function to be filled in via autocomplete. For instance, typing "?(x,y)" displays all possible functions accepting two parameters of the appropriate types.

Phrase completions

I wonder how well phrase completions would work? Start with an expression with '?' in any number of places and let the IDE offer completions that make the whole thing make sense.

dual auto-completion

In the x.f(y) case, x is a record (product) type and the list displays the projection maps out of x. There should be a dual kind of auto-completion for union (sum) types which displays a list of constructors. I'm not sure how that works syntactically, and it's non-existent in OO language IDEs, but it's straightforward in a visual programming system: dual auto-completion. You simply drag an input backwards onto the canvas, instead of dragging the output as in the standard case.

I don't get it, can you give

I don't get it, can you give me an example of the use case?

unreal blueprint example

Starting at 3:25 in this video. It should be self-explanatory though. Instead of listing methods that take type x as input, it lists methods that produce type x - the most primitive of these being constructors.

Ah, crystal clear! So let me

Ah, crystal clear!

So let me summarize: Blueprints has two forms of code completion: (a) for creating a new patch to receive an output from an existing one, and (b) for creating a new patch that sends to the input of an existing one. If we think about it, there are actually three kinds of code completion possible:

  1. What can this object do? (the traditional kind)
  2. Who can take a value of this type?
  3. Who can produce a value of this type?

The last two are more meaningful in functional languages, and I think that is how Haskell code completion systems work (at least the ones I've heard described to me!). The function application syntax of the language also helps here, given that the producer of the value is always leftmost (vs. OO syntax where the producer is to the right of one or a few dots).

kinds of code completion

(1) is a special case of (2) with clever syntactic sugar in OO languages, namely that you can write x.f(y) instead of x.class.f(x, y).

Technically correct, but

Technically correct, but conceptually I would say (2) is more about data-flow while (1) is about object access. It makes sense that the experiences would be different.

Producer/consumer is in the eye of the beholder

You only need one kind of code completion: "Who can produce a value of type T". Then "Who can take a value of type T" is equivalent to "Who can produce a value of type T -> Q" for some Q.

It's an ugliness of most languages that they treat producers and consumers asymmetrically that way. To treat producers and consumers symmetrically you'd need a linear language with explicit splitting and merging of data flow.

Toolstrapping

I understand where the folks on both sides of this argument are coming from. However, I think that there is a clear middle ground being overlooked.

When I learned the word "toolstrapping", my whole perspective on tooling changed. The challenge is not to create a language with great tooling, the challenge is to create great tooling which makes new kinds of languages possible.

Sean's fighter jet HUD example is a good one in the sense that there are some tasks which really can't be accomplished without tool assistance. What doesn't logically follow is that such a cyborg language needs to live on an IDE island.

The key insight, for me anyway, is that we can't even program a typical 1970s language without tools! We need text editors, but more than that, we need an alphabet! It's long before me, but there was a time when we didn't all agree on which sequences of bits represented which characters, so there was no hope of your diff tool collaborating with my editor, never mind somebody else's compiler. It's only once we assimilate a particular technology that we advance. So it was with the written word, so it is with text editors, so it will be with next generation environments.

Another problem with the fighter jet example is that fighter jet pilots still need *lots of training* and are often only trained on one or a few kinds of jets. The skills are not transferable, doubly so if pilots can't take their favorite auto-stabilizer software with them to their new fighter jet. Further, those planes are *expensive* and lots of otherwise well funded militaries can't afford them. Military developments tend to be simultaneously ahead and behind both academic research and industry practice. We should strive for "a rising tide raises all ships" rather than "setting the high watermark".

I also agree with those folks who trumpet the inherent advantages of written language, but I also acknowledge that it is not a universal tool. In particular, textual language tends to fail once the inherent information density (entropy? I'm not a Shannon expert) reaches some threshold. Multimedia is the canonical example: the frequency of recorded music or the resolution of modern photography is far more information-dense than any human could ever hope to process in text. Nobody can look at a text dump and tell you that it's a video of a rock concert.

So let's not put the cart before the horse by attempting to build a next generation language and its IDE all at once. Let's instead build tools which augment the text tools that we have now. Only once we've explored how those new tools enhance our existing capabilities should we begin to explore what kinds of new capabilities they've created.

I have some ideas of intermediate form such tools might take, but I hesitate to broadcast my bet, for fear it suffocate my message above.