What do you believe about Programming Languages (that you can't prove (yet))?

The Edge "World Question Center asks the thought provoking, but overly general for this forum, question "WHAT DO YOU BELIEVE IS TRUE EVEN THOUGH YOU CANNOT PROVE IT?"

So let's tailor it a little for LtU...

What do you believe about Programming Languages (that you can't prove (yet))?

Kay is (I suspect) looking

Kay is (I suspect) looking for something closer to the Web 2.0 profile.

Just look at LtU itself or one of the ubiquitous technical blogs. All I can present is dead code - not live objects. There is no WordPress plugin for embedding GHCi or a Python shell on a website, even for unsophisticated purposes like showing code in action rather than just talking about it. The client side is the least of the problems and the issues are well known: security, latency and scalability. "Computing in the cloud" is right now about a few major IT companies starting their own web hosting. It doesn't make a difference for the web, and we don't see any novel types of applications.

Live Documents will trump Web Applications

All I can present is dead code - not live objects.

I understand your meaning. It would be nice to hook 'live' documents into presentations... with such variations as a document that helps a reader test his knowledge, or one that provides up-to-date mission status.

I believe (but cannot prove) that we must dump the web-application model in favor of a pure, shared 'document model' for user interfaces if we ever want the ability to present high-performance, scalable, composable, sharable, persistent, robust, zoomable, disruption-tolerant, and flexible 'live objects'.

HTML is fundamentally a web-application platform rather than a shared document model. The fact that what is conceptually the same 'form' (by link) may be in two different states for two different browsers exemplifies this. If HTML were a shared document model, then browsers linking the same URI would literally see the same form. Where one user updates the form, other users would see it. The same goes for manipulations of the DOM. But with HTML, each user implicitly gets his own copy of both the form and the manipulations to the DOM. This indicates that each HTML document is really code for creating a new application.

For a pure document-model approach, forms would be shared if they have the same URI (since they'd be the same form). This would likely be handled by publish-subscribe under the hood. Manipulations to the form could directly have effects on the real world, either immediately or upon pushing a 'go!' button. Access to a given form or document of any sort would be the domain of the Object Capability Model, likely tempered by a trust model with certificate-based challenges and auditing in order to limit damage from accidental leaks of capability (and to allow dongle-based security for untrusted-terminal-service access to secure systems).
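
To make "shared form, publish-subscribe under the hood" concrete, here is a minimal, single-process sketch in Haskell (using GHC's Control.Concurrent.STM); the names SharedForm, editField, and subscribe are hypothetical, and both the capability checks and the networking layer are elided:

    -- A toy model of a shared 'live form': every browser that dereferences
    -- the same URI gets the *same* SharedForm, so an edit made by one
    -- subscriber is published to every other subscriber. The names here
    -- (SharedForm, editField, subscribe) are illustrative, not a proposal.
    import Control.Concurrent.STM
    import qualified Data.Map as M

    type Field = String
    type Value = String
    data Update = FieldChanged Field Value deriving Show

    data SharedForm = SharedForm
      { formState :: TVar (M.Map Field Value)  -- current contents of the form
      , formFeed  :: TChan Update              -- broadcast channel of deltas
      }

    newSharedForm :: IO SharedForm
    newSharedForm = SharedForm <$> newTVarIO M.empty <*> newBroadcastTChanIO

    -- A single transaction both updates the shared state and publishes the
    -- delta, so no subscriber can see the new state without its update.
    editField :: SharedForm -> Field -> Value -> IO ()
    editField form f v = atomically $ do
      modifyTVar' (formState form) (M.insert f v)
      writeTChan (formFeed form) (FieldChanged f v)

    -- Each 'browser' subscribes by duplicating the broadcast channel; the
    -- returned action blocks until the next delta arrives (in a real system
    -- this would drive a re-render of the displayed document).
    subscribe :: SharedForm -> IO (IO Update)
    subscribe form = do
      chan <- atomically (dupTChan (formFeed form))
      pure (atomically (readTChan chan))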

The document approach doesn't mean being rid of web-applications entirely, but it does mean that the creation of web-applications involving unshared forms and documents would need to be an explicit behavior, driven by commands that effect new documents. This would be a little bit like using HTTP 'POST' every time you want an unshared form, except for the critical difference that you could decide to share a reference to the 'new' form and thereby share the form (and even compose it into another document).

Being rid of web-applications also has the positive effect of eliminating most of the need for a standard document object model. That is, browsers won't need to know how to 'manipulate' a display object, and the under-the-hood management of the DOM by document objects can be completely opaque to the browser. The only real need for a DOM would be as an option for delta-compression of changes to the document definition, as part of the publish-subscribe model for low-latency updates to live objects.

Performance and hosting issues for everything but the browser itself (latency, etc.) will be resolved by automatic distribution, duplication, and mobility of code as appropriate.

The client side is the least of the problems and the issues are well known: security, latency and scalability.

I disagree. The client-side is hardly "the least of the problems". The document browser and the document server each play equally relevant roles. Security, latency, robustness, disruption tolerance, persistence, sharing, zoomability, graceful degradation, composition, simplicity, and consistency require an end-to-end model, of which the client (the 'browser') is an integral part.

The solution to the problems and issues you name is a combination of the shared document model, publish+subscribe model, and distribution of objects (securely sandboxed via object capability model).

The ideal solution, I believe (but cannot prove), is to tool both browser and server for distribution of code, such that the browser can inject some server-side intelligence (e.g. to merge and summarize data from many sources into a common document, to support intelligent continuation of behavior in the face of network disruption, etc.) and so the server can add complex behavior at (or near) the browser (allowing explicit construction of 'web-applications', possibly hosted in a cloud near the browser, rather than at the browser itself).

I believe, but cannot prove (yet), that mobile objects (plus duplication for asynchronous stateless objects) and automatic distribution thereof (albeit with hosting limited by trust (certificate-based capability) and distribution directed by optional annotations like 'object X is nearby object Y') are the best solution to latency. This allows the distribution issues to be described in object configurations, provides separation of concerns from the object/actor definitions, and helps with robustness in the face of network disruption and with graceful degradation.

Most of these problems have solutions available, albeit not standardized in any manner. HTTP is missing publish+subscribe and is unsuitable for capability security, but we don't need to use HTTP - we're free to invent new browsers and protocols or use E language's Vat TP or something similar.

Security, latency, and scalability have known solutions, and are missing only standardization and open-source implementation. We do need something better than X/Open XA for transactions over distributed systems (X/Open XA connects systems that themselves don't interact, which can't be assumed in general), but Actor Model Transactions (an extension of STM to the Actors Model) shouldn't be too difficult to standardize.
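
For concreteness, here is a minimal Haskell sketch of the plain STM substrate that such Actor Model Transactions would generalize to distributed actors; Account and transfer are illustrative names only, not a proposed standard:

    -- A minimal STM sketch: a transfer touches two shared cells in one
    -- atomic transaction and retries until the invariant (sufficient funds)
    -- holds. 'Account' and 'transfer' are illustrative names only.
    import Control.Concurrent.STM

    type Account = TVar Int

    transfer :: Account -> Account -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)          -- block and retry until funds exist
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)       -- the whole transfer commits or not
      readTVarIO b >>= print             -- prints 40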

The unsolved problems (to my understanding) are: distributed process accounting and management in open systems, disruption-tolerant distributed garbage collection, non-disruptive runtime upgrade or replacement of objects, providing a platform for creating and testing these programs (e.g. a WikiIDE), and convincing people to try something new.

Thinking big vs thinking small

I disagree. The client-side is hardly "the least of the problems".

That's because you are thinking bigger and following a TOE of distributed computing which attempts to create an illusion of unity. This requires solving all kinds of complicated problems. I cannot contribute anything to this, in either a positive or a negative way. I'm rather pointing to a single, simple use case which represents a problem that is yet unsolved, and it is unsolved for reasons I find interesting.

Inventor's Paradox

Ever hear of Inventor's Paradox?

A general, end-to-end approach to a broader problem can be simpler and easier to implement, and can perform better, than what could be achieved by aiming to solve a much smaller subset of the problem while constraining yourself to a particular set of tools or models.

I believe it applies here. I suspect you'd learn, if you tried to achieve security while embedding shells to live systems in websites, that you would also have lots of complicated problems to solve. It's like Greenspun's Tenth Rule for distributed systems: you're going to end up implementing an 80%-complete, slow, buggy, non-standard, non-reusable Theory-of-Everything distributed programming solution.

Inversion of Abstraction

Alternatively, a distributed programming "theory of everything" might just be one big abstraction inversion. Sure, that terminology is tossed around far too carelessly, but abstraction inversions do exist.

Some would say using call/cc to implement exception handling is an abstraction inversion, but I'm pretty sure you can't implement call/cc in terms of throw and catch.
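
To illustrate the asymmetry, here is a minimal sketch of the direction that does work, using Haskell's Control.Monad.Cont (the names catchCC and throwErr are ad hoc): throw and catch built from callCC. It's the reverse construction that I doubt.

    -- Exception handling built from callCC: 'catchCC body handler' captures
    -- the current continuation as its escape point and hands 'body' a throw
    -- operation that runs the handler and then jumps to that escape point.
    -- The names catchCC/throwErr are ad hoc, not from any library.
    import Control.Monad.Cont

    catchCC :: ((e -> Cont r a) -> Cont r a)  -- body, given a 'throw'
            -> (e -> Cont r a)                -- handler
            -> Cont r a
    catchCC body handler = callCC $ \escape ->
      body (\e -> handler e >>= escape)

    example :: Cont r String
    example = catchCC
      (\throwErr -> do
          _ <- throwErr "boom"               -- aborts the rest of the body
          pure "unreachable")
      (\msg -> pure ("caught: " ++ msg))

    main :: IO ()
    main = putStrLn (runCont example id)     -- prints "caught: boom"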

A grand unified theory of distributed programming would be nice, but theories of everything are notoriously hard to get right. ;-)

but I'm pretty sure you

but I'm pretty sure you can't implement call/cc in terms of throw and catch.

You probably can if the exceptions are resumable.

Maybe

I was thinking along those lines shortly after I posted; I have my doubts, though I haven't really played with resumable exceptions much.

How many languages offer exceptions that can be resumed multiple times though? :-)

A grand unified theory of

A grand unified theory of distributed programming would be nice, but theories of everything are notoriously hard to get right. ;-)

In computation, they are not difficult to achieve. Turing Machine, Church Lambdas, Actors Model...

In any case, an actual 'Theory of Everything' isn't necessary. One only needs a distributed model that covers the end-to-end system under discussion, that is practically implementable, that you can reason about in a strongly predictive sense, and that your reasoning predicts will achieve the desired computational properties (in terms of performance, security, persistence, robustness, disruption tolerance, composability, etc.). This would be a 'theory of everything you need for the system you are building', so to speak.

Alternatively, a distributed programming "theory of everything" might just be one big abstraction inversion. Sure, that terminology is tossed around far too carelessly, but abstraction inversions do exist.

I believe that to call a "theory of everything" an "abstraction inversion" would properly require you to find within said theory-of-everything an abstraction that achieves a superset of the theory. Considering that the details of any particular TOE have thus far been omitted, the accusation seems a little premature. Please feel free to now consider yourself among the people who toss around that terminology far too carelessly.

In any case, I would favor a minimal TOE over one containing any abstraction-inversions. Unnecessary specification should be eliminated. But there are some decisions to be made about exactly which parts get eliminated. For example, you don't need both Actors and Turing Complete handlers for actors. Some might be rid of the Actors. I'd be rid of the Turing Completeness in the handlers.

A grand unified theory of distributed programming would be nice, but theories of everything are notoriously hard to get right. ;-)

True. However, you need to compare that effort on a relative scale, against the effort to build an 80%-complete, slow, buggy, non-standard, non-reusable implementation of the theory-of-everything, plus the costs associated with those bugs, and do so more than once since it isn't reusable.

Of course, analyzing what went right and what went wrong in a multitude of 80% solutions does help with the construction of a powerful, consistent model for distributed programming.

The good theories are obvious in retrospect.

In computation, they are not difficult to achieve. Turing Machine, Church Lambdas, Actors Model...

Quite to the contrary, it took a lot of effort in determining why TMs and Church's lambdas were interesting. The Church-Turing thesis represents a lot of hard work and perspiration.

Even more effort went into the Actor model, from Simula to Hewitt's Plasma to Scheme, with Erlang mysteriously arriving at another formulation somewhat independently more than 10 years after Scheme. That's over 20 years of effort right there. ;-)

I believe that to call a "theory of everything" an "abstraction inversion" would properly require you to find within said theory-of-everything an abstraction that achieves a superset of the theory.

I'm a bit confused; this sounds like a negative tautology. The specific thing I had in mind when I wrote my comment was the huge amount of fuss put into distributed objects once upon a time, which very naturally leads to inversions of abstraction. Distributed Objects very much strike me as something that attempted to be a Theory of Everything and failed, miserably.

What I was trying to get at is that the mere act of implementing something relatively simple in terms of something relatively complicated is a warning sign that you might have an abstraction inversion, but doesn't imply it. The "simple" concept must be able to provide a reasonably good implementation of the "complicated" construct. If you can produce a superior implementation of the complicated construct without the simple concept, then it might not actually be an inversion of abstraction.

Of course, analyzing what went right and what went wrong in a multitude of 80% solutions does help with the construction of a powerful, consistent model for distributed programming.

Bingo. I don't think your chance of success is very good without examining the partial solutions as well as some of the failures, and without regularly writing distributed programs yourself. And I don't think language products really count; I think these programs need to do something specific.

I'm not a distributed programming guru, but I am curious if you've ever closely examined Obliq. I never did myself, though I did mess around with Modula-3's Network Objects long enough to come to the conclusions above, and learning Erlang only emphasized this point.

it took a lot of effort in

it took a lot of effort in determining why TMs and Church's lambdas were interesting. The Church-Turing thesis represents a lot of hard work and perspiration

True, but I don't mean to include the effort to prove that a model is 'complete' or 'useful' with the effort of reaching the model in the first place. As I understand Computer Science, it is far more difficult to create a useful language or model of computation that isn't complete than it is to arrive at yet another complete model.

Whether you go to the effort of proving it one way or the other doesn't change the properties of the model; rather, it affects only the degree to which those properties are known.

The specific thing I had in mind when I wrote my comment was the huge amount of fuss put into distributed objects once upon a time, which very naturally leads to inversions of abstraction.

I'm curious: of which 'specific thing' do you speak? I can't help but imagine that 'distributed objects' in your mind refers to a much narrower class of systems than it does in my own. How do you define 'distributed objects' and which abstractions do you imagine are naturally inverted in their design?

Or is it that they were merely missing some important features, such as transparent location or automatic distribution, thus introducing cross-cutting concerns into programmer code, forcing repeated reinvention, and making distribution itself a major chore? I.e., the problem might not be the 'distributed objects' abstraction, but rather that the implementation mixed abstractions and did only half the job... about as useful as inventing only the bottom half of a wheel.

I am curious if you've ever closely examined Obliq.

Not in particular, but judging from its description on Wikipedia, I've studied a variety of very similar designs.

The next paradigm shift in PL won't be created by PL researchers

After reading this thread and this novel, I believe but cannot prove that the next paradigm shift in programming languages will come from outside the PL research community. This is because I believe that it will involve visual languages and live coding (among other things), but these subjects appear to be neglected by current PL researchers (so it seems to a layman) while thriving in other fields such as graphics and audio.

Perhaps so, but I bet that

Perhaps so, but I bet that PL researchers would be the ones to tame, secure, and optimize it. And live coding is still within the arena of PL.

Agreed

Agreed, I was just talking about the source of the change. I won't be sad if I'm wrong, either.

It would be much sadder if

It would be much sadder if the tens of thousands of intelligent people who fancy themselves PL researchers couldn't take advantage of the vision and inspirations of billions.