Declarative binding vs. composition

Recently, I've been trying to come to grips with pure functional programming (e.g., Haskell) and other kinds of declarative languages that work at the level of declarative binding rather than function application (e.g., constraint languages). As an example of the difference, consider how to make a button blue. Using functions, we define a function that creates a new button that is blue; e.g.,

make_blue : button -> button
let my_blue_button = make_blue(my_original_button) in ...

The point of the pure functional approach is that we create new values to reflect new desired realities. Now for the declarative binding (constraint) approach:

is_blue_button(a_button) => a_button.background = blue
...
assert is_blue_button(my_button)

The point of the declarative binding approach is that we refine existing values declaratively to reflect desired realities. Different approach, same result, I guess. For reactive programming, I've always been interested in the declarative binding approach with SuperGlue vs. the compositional approach of FRP. However, both approaches seem to have their benefits and drawbacks. Explicit function application gives you more immediate control over the values you produce, but without universal quantification we have to explicitly encode conditions around each application site. Implicit bindings allow things to "just happen" when the right conditions are met, but sometimes we have to struggle to control what goes on.
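To make the contrast concrete, here is a minimal Python sketch of both styles. All names (Button, is_blue_button, apply_rules) are hypothetical, invented for illustration, and the "rule engine" is just a loop over guards:

```python
# A minimal sketch (not from any real framework) contrasting the two styles.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Button:
    label: str
    background: str = "grey"

# Functional/compositional style: produce a NEW value with the desired property.
def make_blue(button: Button) -> Button:
    return replace(button, background="blue")

# Declarative-binding style: a guarded rule refines EXISTING objects in place.
@dataclass
class MutableButton:
    label: str
    background: str = "grey"

def is_blue_button(b: MutableButton) -> bool:
    return b.label == "go"          # the rule's guard (hypothetical condition)

def apply_rules(buttons):
    for b in buttons:
        if is_blue_button(b):
            b.background = "blue"   # the binding "just happens" when the guard holds

original = Button("go")
blue = make_blue(original)          # original is untouched; blue is a new value

widgets = [MutableButton("go"), MutableButton("stop")]
apply_rules(widgets)                # the first widget is refined; the second is left alone
```

The functional version never mutates anything, while the binding version quantifies over every object that satisfies the guard, which is exactly the trade-off described above.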

Any thoughts about the different styles?


Perhaps to further confuse

Perhaps to further confuse the situation, I'll throw in relational languages (e.g., lenses, all of Joe Hellerstein's various Datalog extensions for 'declarative' streams to support distributed programming) as further views to integrate :)

Any thoughts about the

Any thoughts about the different styles?

A short meta-remark. I'd like to advocate more non-trivial sample problems, enabling comparisons and qualified criticism, instead of idea shopping and designer bla bla. I know it is comfortable for our egos that hard criticism is impossible because CS is not a science where assertions are stated and falsified, but at least the scientific strength of literary criticism should be targeted, with actual texts (= source code) used to ground judgments.

I don't buy that.

I don't buy that. First of all, CS is not a science but an engineering discipline, which is perfectly fine, as many other engineering disciplines are fairly credible and even support lots of scientific research (though most engineers do engineering).

Second of all, design is...design; it's not something you can use empirical evidence to guide. We can conduct user research after the fact to see which design is "better" with respect to some problem. There is a good reason why most successful language design comes from outside of academia.

Because people from outside

Because people from outside academia have more empirical evidence?

Because people inside

Because people inside academia need empirical evidence to publish papers :) When you write a paper, you need some kind of evaluation section to show how the approach is better, so it's very difficult to publish just on design. So what you have is feeling and preference...

CS is a science

It's a science dealing with one central notion: computation. Now, whether that makes it a subcategory of math, I don't know, but it's a science nonetheless.

Whether it's an interesting, or even a good, science, no idea. The sceptic would say that CS's major scientific contribution to industry would be C and a bunch of algorithms.

[I am not _that_ sceptic.]

re: any thoughts?

i think a good debugging / maintenance story is important. which one does a better job there, i dunno. when things are spelled out explicitly it has the + of being obvious but the - of often being verbose, whereas the declarative stuff is the other way 'round i guess.

What you can't avoid: methodology and framework

I think the best solution to this problem is via methodology and framework.

Workflow DSLs, as well as the work on state machines by Miro Samek and Douglas, have developed 'pattern languages' for concepts you can't eliminate from any model of computation / model of concurrency (MoC). It's how you represent them that matters. I plan to post a story about this soon, b/c an LtU user asked me "What the heck is Windows' Workflow Foundation (WF)?" Some of these patterns just repeat over and over again, for example P4's Deferred Choice pattern and Samek's Deferred Event pattern.

I've been really curious about this stuff lately, even reading Axel Jantsch's "Rugby model" for models of computation (Jantsch is a researcher specializing in model-driven hardware design, for embedded systems and System-on-Chip).

re: soon post

looking forward to that.

I think the constraints

I think the constraints approach is better when there is a system of interrelated constraints, rather than an isolated rule. To reduce confusion, they should not act on unrelated data; I think a module is a good scope for a constraint system.

That said I have little practical experience with them so I could be very wrong.

Example is too trivial

The idea looks interesting, but you need something meatier to express the differences to a functional approach. Three questions spring to mind that you could answer with bigger examples:
1. In the functional case the definition is constructive; we can extract a sequence of state transitions to get from the initial state to the target state directly from the semantics of function reduction. In the declarative binding case it is not quite so obvious. You seem to be binding properties to logical relations, there is no obvious reduction to get from one to the other. How are you defining it?
2. As your targets seem to be logical relations you should be able to express certain properties more easily than you can with functions - for example if I define a second implication make_button_red can you derive that this implication and the original are mutually exclusive?
3. How do you introduce recursion? If I want to make a list of buttons blue then it is straightforward to do so using a recursive make_button_blue, what is the equivalent step in declarative binding?

The idea looks interesting,

The idea looks interesting, but you need something meatier to express the differences to a functional approach. Three questions spring to mind that you could answer with bigger examples:

The bigger examples exist in my SuperGlue work (here and here). But then you could also look at any constraint system like Kaleidoscope, pseudo high-level environments like Citrus or Adam and Eve. Actually, there are a lot of languages out there that are based on binding rather than application/composition. But the declarative binding is managed in different ways to regain expressiveness.

In the functional case the definition is constructive; we can extract a sequence of state transitions to get from the initial state to the target state directly from the semantics of function reduction. In the declarative binding case it is not quite so obvious. You seem to be binding properties to logical relations, there is no obvious reduction to get from one to the other. How are you defining it?

I'm not sure they are logical relations; we're just binding properties to values and guarding those bindings somehow to restrict universal quantification. My is_blue_button is simply a type used in this way (and of course, types are one form of predicate). Other constraint systems have other ways of restricting the scope of a constraint (in imperative systems, applying a declarative constraint can even be an imperative operation!).

2. As your targets seem to be logical relations you should be able to express certain properties more easily than you can with functions - for example if I define a second implication make_button_red can you derive that this implication and the original are mutually exclusive?

In SuperGlue, you would wind up with a run-time error, since the blue and red bindings would conflict. I'm not sure statically checking consistency properties is feasible, but checking them at run-time is still useful. In a functional approach, you could easily make a blue button red without any error occurring (but then again, you could check that the value you are shadowing in the function is not already a visible color).

3. How do you introduce recursion? If I want to make a list of buttons blue then it is straightforward to do so using a recursive make_button_blue, what is the equivalent step in declarative binding?

The goal would be to avoid explicit recursion; e.g., by applying bindings with respect to universally quantified variables. In this case, the rule that made buttons blue applies to all buttons that can pass the rule's guard. Function application is definitely more expressive than declarative binding, but declarative binding can be more usable in situations where the full power of lambda isn't necessary.
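One way to read this "universal quantification instead of recursion" point is that a single guarded rule ranges over a whole collection, so the call site never recurses. A hypothetical Python sketch (rule, apply_rules, and Widget are invented names):

```python
# Sketch: instead of recursing over a list with make_button_blue, a single
# universally quantified rule applies to every value that passes its guard.

def rule(guard, action):
    """Pair a guard predicate with an action; the rule applies wherever the guard holds."""
    return (guard, action)

def apply_rules(rules, objects):
    for obj in objects:
        for guard, action in rules:
            if guard(obj):
                action(obj)

class Widget:
    def __init__(self, kind):
        self.kind = kind
        self.color = "grey"

blue_rule = rule(lambda w: w.kind == "button",          # guard quantifies over ALL buttons
                 lambda w: setattr(w, "color", "blue")) # binding applied on a match

scene = [Widget("button"), Widget("label"), Widget("button")]
apply_rules([blue_rule], scene)   # no explicit recursion at the call site
```

The recursion is hidden inside the rule engine, which is the sense in which declarative binding trades lambda's full power for usability.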

declarative binding

". . . but declarative binding can be more usable in situations where the full power of lambda isn't necessary" confuses me, because it seems to me that ToonTalk (TT), the actor model of Hewitt, and Janus are all constraint languages and therefore examples of declarative binding, but they also provide the full power of lambda. When I say TT, actors, and Janus exemplify declarative binding, does that make it evident that I am missing something about what "declarative binding" means?

A language can have both

A language can have both declarative binding and lambda. But even though lambda is/can be Turing complete, it wouldn't be used for everything like it is in say...Haskell or Lisp.

Missing the context, at

Missing the context, at least. Sean's use of 'declarative binding' is in reference to its use in the reactive programming context described in the OP. That is, he is exploring how to best go about expressing that one value is related to another such that as one varies over time, so shall the other.

The declarative binding approach: create a variable, and - based upon conditional decisions that are not captured - bind to this variable a particular time-varying expression. Later time-varying changes to the variables that led to this decision will not be reflected in the variable binding. IO, presumably, may also feed into the decision for the binding.

The pure functional 'composition' approach: variables are indistinct from expressions - no delayed assignment - and, as one consequence, you must capture any conditional decisions for a property within the functional expression. IO is prohibited in the functional expression. The lack of delay has other significant consequences. I currently take a rather firm position in favor of this approach.

Janus, TT, Oz, etc. do perform 'declarative binding', but they are dealing with immutable quantities. Because the quantities are immutable there is simply no reason (other than debugging, perhaps) to 'capture the conditional decisions'. After all, if you apply an immutable function to an immutable value, you'll invariably come to the same immutable decision.

Sean's 'declarative binding vs. composition' isn't a true dichotomy, of course. Here's an approach that wasn't explored:

History Rewriting: In a system like Janus one could, presumably, keep all those declarative-binding activation records around in order to capture the decisions that lead to each variable binding. This hybrid would, like all hybrids, share both the best and the worst of both the binding and the functional approaches. It would inherently result in maintaining a very large dependency-graph of variables. Basically, after any change in the reactive quantities (which exists outside the program) one would be recomputing the whole history of the program as it would have been computed if the quantity were at its new value all along. I'm not quite sure where or how IO would fit.

I'm curious as to how well this could perform and scale. I suspect it could perform well with enough graph rewriting to optimize it, but that it would be easy to (by accident) let the program grow wildly out of control - at which point performance at a similar scale would be moot.

How it could perform and scale

History Rewriting: In a system like Janus one could, presumably, keep all those declarative-binding activation records around in order to capture the decisions that lead to each variable binding. This hybrid would, like all hybrids, share both the best and the worst of both the binding and the functional approaches. It would inherently result in maintaining a very large dependency-graph of variables. Basically, after any change in the reactive quantities (which exists outside the program) one would be recomputing the whole history of the program as it would have been computed if the quantity were at its new value all along. I'm not quite sure where or how IO would fit.

I'm curious as to how well this could perform and scale.

We call that a memory leak.

@struggle to control what goes on

i'm curious if anybody has thoughts on that part of what you said, since i feel like (beating a dead horse?) it sounds to me a lot like "how the heck will i be able to debug this thing?"

I doubt step-wise debugging

I doubt step-wise debugging would work. In SuperGlue, I would just debug the interpreter, but that was mostly because my implementation was buggy :). Perhaps additional rules to debug assumptions, along with a visualization of how the rules work together?

@debug assumptions

that is an interesting turn of phrase because it does make me think a little bit about what it means to debug. other people have, of course, thought about that question already, although i haven't studied it a lot -- which i should. at a high level, the problem is that the programmer had a mental model with which they predicted the outcome (or failed to see edge cases), and that doesn't jibe with what happened. so traditionally they need to see the state, and sometimes the history of that state, along with the code. partially because they don't know which assumptions are wrong, they can't make assertions ahead of time (although presumably DbC somewhat disagrees with me here) -- because otherwise they wouldn't have made the mistake? well, at least locally.

'Open' Functions with FRP

Between the two you describe, I much prefer the pure functional approach.

My opinion is shaped by the following observations:

  • The declarative binding approach fails to capture the conditions that lead to a decision for a particular binding. Which binding is constructed will thus be a function of the observations made when the binding was created. This, by definition, is temporal coupling: the behavior depends on resources available when a service was created, rather than resources available at present. Temporal coupling raises issues of order of operations, disruption resilience, etc. I spend a lot of time fighting these issues in a modular UI system at work.
  • Models, by nature, are reflections of external systems with emphasis on the properties most relevant to the observer in future decision-making processes. Different observers will want different models. Functional composition makes this easy: the model is embedded in declarative functions, and all functions are semantically independent of one another. Declarative binding makes this a challenge: one must perform a non-declarative step to introduce 'reflection objects' that possess bindable variable attributes (such as 'color'). Creating and maintaining collections of these reflection objects is non-trivial, painful, inefficient, and not declarative.
  • Declarative binding introduces a delay between the introduction of a variable and its assignment. One may fail to assign attributes, or may assign them to conflicting values. These possibilities are vectors for error or attack.

I do understand the desire for 'implicit' bindings, and I accept that a straightforward application of FRP doesn't provide this by itself. But pure applicative compositions vs. the declarative data-binding design you describe are hardly a true dichotomy, so one should look at various alternatives.

Consider a common alternative: open function definitions, where the function's definition is distributed among multiple resources then composed prior to use. This isn't unusual in programming systems (logic programming, multi-methods or predicate dispatch, etc.), though for multi-model systems it is important that one 'reify' the resources from which the function is composed. If the resources are reified, then the composition itself can be performed by a function, though an EDSL may offer syntactic convenience (especially when meta-programming with type-safety or termination concerns).

This would, for example, allow me to say that "a button is usually colored grey" in rulebook A, and say "a 'go' button is colored green" in rulebook B, and to assert "a 'commit' button in a foobaz panel is colored red if said panel is in an invalid commit state" in rulebook FooPanels. I could then create a 'model' by combining rulebooks, using precedence and heuristics to resolve conflicts. In a fully reactive system, even the rulebooks and their compositions may be reactive entities, allowing developers and users to tweak rules at runtime and create pluggable, runtime-upgradeable rule sets.

This would allow me to ask for the color of a button, and obtain it based on such things as that button's properties and context within a particular model, without ever requiring I explicitly assign a color to the button in an applicative transform. All the advantages of the FRP design still apply: parallel declarative models, freedom from temporal coupling, resilience against disruption and certain errors or attacks.

... all this for the low cost of one more level of indirection in the FRP design, and perhaps some syntactic sugar or libraries to help build and compose rulebooks.
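The rulebook composition described above can be sketched as ordinary function composition. The following Python is a hypothetical illustration (compose and the rulebook representation are invented; precedence here is simply "later rulebooks win"):

```python
# Sketch of the 'open function' / rulebook idea (not a real library's API).
# Each rulebook contributes partial definitions of a single 'color' function;
# a composer function resolves conflicts by precedence.

def compose(*rulebooks):
    """Build one color function from many partial definitions; later books override."""
    def color(widget):
        result = None
        for book in rulebooks:
            for guard, value in book:
                if guard(widget):
                    result = value     # later (higher-precedence) matches overwrite
        return result
    return color

# Rulebook A: a button is usually grey.
book_a = [(lambda w: w.get("kind") == "button", "grey")]
# Rulebook B: a 'go' button is green.
book_b = [(lambda w: w.get("kind") == "button" and w.get("text") == "go", "green")]
# Rulebook FooPanels: a 'commit' button in an invalid foobaz panel is red.
book_foo = [(lambda w: w.get("text") == "commit" and w.get("panel") == "foobaz"
             and w.get("invalid", False), "red")]

color = compose(book_a, book_b, book_foo)  # swap rulebooks to get a different 'model'
```

Combining a different set of rulebooks yields a different model without touching any button, which is the extra level of indirection the comment refers to.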

This would, for example,

This would, for example, allow me to say that "a button is usually colored grey" in rulebook A, and say "a 'go' button is colored green" in rulebook B, and to assert "a 'commit' button in a foobaz panel is colored red if said panel is in an invalid commit state" in rulebook FooPanels. I could then create a 'model' by combining rulebooks, using precedence and heuristics to resolve conflicts. In a fully reactive system, even the rulebooks themselves (and the 'set' of applied rulebooks for a particular model) may be reactive entities, allowing developers and users to tweak rules at runtime, create pluggable and runtime-upgradeable rule sets, etc.

This is what SuperGlue already does very concisely. In the "Live Programming" dialect:

class Button : Widget;
Button.Color = Grey;
class Go : Button;
Go.Color = Green;
class Commit : Button;
class foobaz : Panel;
var CommitInFooBaz : Commit, foobaz.Child, Invalid;
CommitInFooBaz.Color = Red;

SuperGlue uses inheritance relationships to prioritize connection bindings, which was the whole point of my dissertation. The live programming dialect supports dynamic inheritance, so we can have classes that are inherited only at certain times, while types can refer to container objects.

Your "is" in your rulebook description suspiciously resembles a declarative binding construction to me. I know it's possible to express logic programming with forward chaining in a functional programming language, but then doesn't the language become something else?

RE: SuperGlue comparison

This is what SuperGlue already does very concisely.

Your example lacks a mechanism for organizing rules independently of the data to which they are later applied, which prevents declarative maintenance of parallel views (i.e. where buttons with 'go' text are displayed blue in one model and green in another, without ever creating a new object).

Your "is" in your rulebook description suspiciously resembles a declarative binding construction to me.

It's declarative. One might similarly assert that foo 1 "is" 10. This is a partial function definition of 'foo'.

But there is no binding. The partial definition binds nothing to the values 1 or 10 (i.e., if someone else asks for 1.foo, they'll just get an error). Properly, it isn't even introducing a 'binding' on foo. foo isn't an object! 'foo' identifies a function, for which a partial definition is offered. Whether that particular partial definition gets accepted depends on the potential for higher-precedence partial definitions selected by a 'composer' function.
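A small sketch of what "a partial definition of foo, with an error on unmatched inputs" might look like. The partial_fn helper is hypothetical, invented only to illustrate the point:

```python
# Partial function definitions composed into one function; an input no case
# matches raises an error rather than binding anything to the value.

def partial_fn(cases):
    """Compose partial definitions; the first (highest-precedence) match wins."""
    def fn(x):
        for matches, value in cases:
            if matches(x):
                return value
        raise LookupError(f"no definition of foo for {x!r}")
    return fn

# Two independently contributed partial definitions of 'foo'.
foo = partial_fn([
    (lambda x: x == 1, 10),    # foo 1 "is" 10
    (lambda x: x == 2, 20),
])
```

Note that 1 and 10 are untouched plain values; only 'foo' gains cases, which is the sense in which nothing is "bound" to the data.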

I know its possible to express logic programming with forward chaining in a functional programming language, but then doesn't the language become something else?

Though I didn't express logic programming above, I agree with your basic sentiment: since the above is not a straightforward application of functional programming (it relies upon discipline, perhaps a small framework or EDSL transform), it is not unreasonable to refuse to call it 'functional'.

But one might call it a functional design pattern. We had a related LtU discussion on first-class patterns. One solves a composition problem by distributing a function definition across a collection of inputs. The implementation is (ignoring static type-safety and termination analysis) entirely straightforward in functional programming. The above applies this design pattern atop a reactive medium, using functional reactive behaviors (which may reference external resources) in place of pure functions.

Your example lacks a

Your example lacks a mechanism for organizing rules independently of the data to which they are later applied, which prevents declarative maintenance of parallel views (i.e. where buttons with 'go' text are displayed blue in one model and green in another, without ever creating a new object).

I disagree. The rules are organized by type of the targets they affect. If you can somehow make the model a part of the button type, then problem solved (go buttons in model A are blue while go buttons in model B are green).

But there is no binding. The partial definition binds nothing to the values 1 or 10 (i.e. if someone else asks for 1.foo, they'll just get an error). Properly, it isn't even introducing a 'binding' on foo. foo isn't an object! 'foo' identifies a function, for which a partial definition is offered. Whether that particular partial definition gets accepted depends on potential for higher-precedence partial definitions selected by the 'compose' function.

I was referring to your English description. You talk about a rule that binds properties under certain conditions. I refer to that as declarative because you don't specify each instance of the rule's execution. Rather, the rule fires when its conditions are satisfied (I might be taking liberties with terminology here).

I agree with your basic sentiment: since the above is not a straightforward application of FRP, relying upon discipline and meta-programming, it is not unreasonable to refuse to call it FRP. But since the properties most closely match FRP, and its implementation is (ignoring static type-safety and termination analysis) entirely straightforward in FRP, one might reasonably call it an FRP 'design pattern'.

True, but I feel like I have a hard time communicating with FRP people, since what we have in common (declarative manipulation of time-varying values) is overshadowed by different ways of basically "building the graph" (function application or declarative connection/binding/etc...). Functions are a dandy way of building a graph, or anything else that you want, but they aren't the only way and maybe not even the best way. Perhaps my question is...what is the best paradigm for building an FRP-style dependency graph? Arrows?

If you can somehow make the

If you can somehow make the model a part of the button type, then problem solved (go buttons in model A are blue while go buttons in model B are green).

Your answer seems very far from a solution...

Creating a tight coupling between the source of the button and its color is essentially the opposite of: "organizing rules independently of the data to which they are later applied".

Needing to create even ONE 'new GoButton' (regardless of whether it be in Model A or Model B) is a clear violation of "without ever creating a new object" - that is, you are demanding a violation of declarative composition.

A 'pure FP' design never needs to create buttons, which in turn means it never needs to maintain buttons... FP declaratively 'interprets' buttons into a platonic existence from another resource, then decides things like the color of those buttons it has interpreted into existence. The 'rulebooks' in the above example were a mechanism for organizing decisions such as button color independently of the transform that initially produces the buttons. (One might relate this to XSLT or CSS, given all the references to buttons.)

You talk about a rule that binds properties on certain conditions. [...] the rule fires when its conditions are satisfied (I might be taking liberties on terminology here)

I did not talk about "a rule that binds properties" or that "fires".

I apologize if the use of 'rulebooks' terminology threw you. Full rules-based programming tends to involve triggered behaviors, which are often imperative in nature. But I spoke of a pure FP design, and thus of pure rules. Examples are: "all pigeons are birds", "a fever contributes to a conclusion of a cold", or "unless there is argument contrariwise, a button is colored gray".

These rules get composed into functions like "bird? X" by testing to see if X is a pigeon, or "color B" that tests if B is a button and, if so, says it's grey unless some other rule overrides. A function is responsible for the composition of these rules, thus there is meta-programming involved.

Perhaps my question is...what is the best paradigm for building an FRP-style dependency graph? Arrows?

There probably isn't a "best paradigm", but one certainly could build a list of known problems to solve. Were I to write such a list, the problems I listed for declarative binding would certainly be on it. A couple more would be: avoid assuming a history - even a modest history of just one value - is maintained by arbitrary data resources; avoid hindering demand-driven multi-cast or caching; avoid syntactic risk of cyclic reactive expression; avoid side-effects on updates; avoid need to observe intermediate values if several variables shift in a short time frame.

The only design I know that solves all the problems I can name is the 'pure functional' composition, augmented by a few common design patterns.

Your answer seems very far

Your answer seems very far from a solution...

Creating a tight coupling between the source of the button and its color is essentially the opposite of: "organizing rules independently of the data to which they are later applied".

Needing to create even ONE 'new GoButton' (regardless of whether it be in Model A or Model B) is a clear violation of "without ever creating a new object" - that is, you are demanding a violation of declarative composition.

I agree with you on this. But then we are getting into very different territory. Actually, what you are talking about is sort of a holy grail of UX programming that has been chased for 20 years: that we just specify "what we want" from the UI, then have a bunch of rules create a UI based on the context (e.g., a mobile, a multi-touch table, or a standard desktop). Yes, of course, that's how it should be! But this modularization is similar to a very hard AI problem; no one has been able to do it yet in a very satisfactory way, and we begin to wonder if it's possible. Function composition doesn't help much at all, I think.

Ha.

What I'm specifying isn't a 'Holy Grail'. I don't offer any miracle paradigms that read our minds and give us 'what we want' as opposed to 'what we specify'.

Further, you have made reasonable arguments elsewhere that physics may be a more suitable - more "naturalistic" - basis than FRP for UI tasks, since physics allows one to tune transitions in state and include external forces. For physics, Declarative Binding doesn't hurt you because you require object identity and system 'state' in order to control transitions.

What function composition design patterns do offer is a means to tweak style and other properties based on rules distributed among different resources. In the context of UI, you could relate this to HTML with CSS, using 'classid="go"' and such to choose the color green. Except, in a reactive context, reactive HTML + reactive CSS would avoid any need to ever 'refresh' a page. The main problem with such a UI would be transitions: i.e. a button might bounce around the screen based on external changes, unless the browser does some special physics-style work render-side to support transitions.

The OP isn't about UI specifically. It's about declarative composition of reactive 'models'. Whether a UI is a reactive model (view) of some external service, as opposed to an entity in and of itself, is really up to the developer of a particular UI. Arguments for the reactive model concept relate to mashups, sharing, persistence, zoomability.

Regardless, function composition and similar mechanisms of organizing and tweaking the 'rules' guiding the model independently of the 'data' guiding it will be a huge part of any modularity solution, whether it be reactive (tweaking transforms) or physics based (tweaking the physics) or a combination of the two. I think you grossly underestimate the value of said resulting separation of concerns.

What I'm specifying isn't a

What I'm specifying isn't a 'Holy Grail'. I don't offer any miracle paradigms that read our minds and give us 'what we want' as opposed to 'what we specify'.

I didn't say that you were. I'm merely talking about the pure modularization between content and presentation: that we can talk about actions, data, and input independently of layout, widgets, and visuals. So not to have a button at all, but rather an action like "delete the currently selected email message."

Further, you have made reasonable arguments elsewhere that physics may be a more suitable - more "naturalistic" - basis than FRP for UI tasks, since physics allows one to tune transitions in state and include external forces. For physics, Declarative Binding doesn't hurt you because you require object identity and system 'state' in order to control transitions.

Maybe this is orthogonal: I believe that physics is a more suitable approach to the underlying task of maintaining a dependency graph once built, because physics can handle more relationships and more complicated constraints. Either declarative binding or function composition could give us the dependency graph that the physics would process, however. Admittedly, I don't know what physics-based functional reactive programming would actually look like.

Regardless, function composition and similar mechanisms of organizing and tweaking the 'rules' guiding the model independently of the 'data' guiding it will be a huge part of any modularity solution, whether it be reactive (tweaking transforms) or physics based (tweaking the physics) or a combination of the two. I think you grossly underestimate the value of said resulting separation of concerns.

I think we are working with separate ideas of how these systems work. I didn't see reactive or physics as being the primary concern, but rather expressing relationships or configuration as the hard problem.

At any rate, I need to think about this; really going to physics could completely change the way we think about relationships, which would necessarily affect how we express them. In reality, the color of something depends on its material combined with the properties of the lighting. The position depends on where the thing is placed, which can be influenced by various force combinations. Hmm...

Maintained Relations

I didn't see reactive or physics as being the primary concern, but expressing relationships or configuration as the hard problem.

How is 'expressing relationships' relevant if they aren't being maintained over time? I don't believe you can usefully separate the expression of a relationship from the semantics of expressing said relationship. How you wish to express them will be closely related to why you wish to express them. (How is Why, backwards.)

I'm merely talking about the pure modularization between content and presentation: that we can talk about actions, data, and input independently of layout, widgets, and visuals. So not to have a button at all, but rather an action like "delete the currently selected email message."

Using a word like "merely" there doesn't seem appropriate.

Anyhow, I've seen this sort of modularization before.

But it takes a LOT of meta-data to make it work... i.e. the presentation side needs to know which actions are specified for 'the currently selected email message' if it is to group them into a menu.

We often resort to 'artificial' meta-data intended primarily for hooking by the presentation layer (such as 'classid' and 'id' on HTML elements) or for providing suggestions to the presentation layer. This 'artificial' meta-data to support relationships and offer suggestions is convenient, but largely eliminates 'purity' of any modularization; in practice, most coding for presentation ends up in those 'suggestions' and other hooks.

If we were to assume a capability UI, where the only information you can obtain about a capability (without actually applying it) is meta-data that has been provided to you, then I suspect a pure separation would be impossible. The difficulty of analyzing actions for intrinsic properties makes almost any script just as difficult to analyze as a capability. Thus, chances are slim to none that we'll ever see a pure separation.

Rather than separation of content and presentation in the initial configuration, I think it better to focus on enabling late-binding transformations and recomposition (for mashups, accessibility, stylization, etc.) of content annotated ('marked up') for presentation. Transformations could then trade one presentation for another, or manipulate content, and so the 'separation of content and presentation' could still be achieved as a 'discipline' for organizing transformations.

_50_ years

Actually, what you are talking about is sort of a holy grail of UX programming that has been chased for 50 years.

Fixed that for you...