Higher order versus Object order

Without wishing to cause a flame war ... well, actually, I do wish to start a discussion ... :)

For the last 25 years I have been working on programming languages roughly in the logic programming/functional programming paradigms.

About 10 years ago I embarked on a research trajectory, moving from Prolog's 'meta-order' approach to a higher-order approach.

The reasons were technical: meta-order sucks from a software engineering POV.

However, recently, I have changed my mind. Or rather, changed direction.

While robust, HO does not deal with OO programming all that well. (Start flame wars here)

The reason is that OO involves combinations that are difficult for HO styles of programming.

On the other hand, OO does nearly everything that HO can do. I would argue that, at a slight loss of elegance, OO does *everything* that a good software engineer wants to do.

My reasoning is based on the fact that an object can act as a kind of closure, but a closure cannot capture the multiple uses of an object together with its interface contract.

BTW, as far as I am concerned, OO is *not* equivalent to inheritance+subtypes+methods+classes+instances. Those are techniques useful in some situations.

For me, OO is fundamentally about encapsulation and interfaces. The rest is noise (in my opinion, of course).

So, in my most recent work, I am throwing out my HO implementations and replacing them with an Object Order implementation...

Comments?


Objects or Modules?

Francis McCabe: For me, OO is fundamentally about encapsulation and interfaces.

In which case it sounds like you're talking about what the functional community would call modules, in the sense of the SML or O'Caml module systems. More recently, research in type theory has blurred the distinction between module systems and object systems in exactly the way it sounds like you want; see the various papers around Scala for what I mean, especially "A Nominal Theory of Objects With Dependent Types" and Odersky's "Types for Objects and Modules" talk.

Scala

From what I know of Scala, I quite like it. My interest is in logic-based languages -- not Java-based languages.

Scala papers/talks moved

"A Nominal Theory of Objects With Dependent Types" and "Types for Objects and Modules" talk now available at http://www.scala-lang.org/node/143

either/or?

I would argue that, at a slight loss of elegance, OO does *everything* that a good software engineer wants to do.

There are three sliders in this sentence: one marked "loss of elegance", one marked "does *everything*" and one marked "good software engineer". It might be helpful to know just where those sliders are set.

In Smalltalk or Ruby or Python you don't have to choose, of course. Functions / blocks are objects. Function composition is object composition, of a kind.

I can't think offhand of a "combination" that is easier in OO than in HO. I had thought that rather the reverse was true. Composability is not what I would consider a strong point of most mainstream OO languages. Do you have any examples of the kind of thing you mean?

Combinations - a clarification

I don't mean combinators!

My intent is closer to the classic notion of combining separate entities into a single package.

BTW, IMO, the SE case for this is interesting: OO packaging allows one to modify the contract (a.k.a. interface+semantics) in a way that minimizes the impact on unaffected uses of the package. This is a simple idea, but essential in a world of changing programs, where the number of uses of a concept is >> the size of the concept's definition.
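
To make this concrete, here is a hedged sketch in the message-dispatch style that comes up later in this thread (`mk-cell` and its messages are hypothetical names): adding an operation to the package is one new case in the dispatcher, and every existing call site is untouched.

;; Hedged sketch of contract evolution: 'show is a later addition to
;; the interface; callers that only use 'get and 'put never notice it.
(define (mk-cell v)
  (lambda (op . args)
    (case op
      ((get) v)
      ((put) (mk-cell (car args)))   ; original contract
      ((show) (number->string v))    ; added later; old uses unaffected
      (else (error "cell: unknown message" op)))))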

No ADT in FP?

OO packaging allows one to modify the contract (a.k.a. interface+semantics) in a way that minimizes the impact on non-affected uses of the package.
Could you elaborate?

Is the ADT concept unavailable in FP?

The purpose of a ninja is to flip out and kill people.

Without wishing to cause a flame war

HaHaVeryFunny

While robust, HO does not deal with OO programming all that well.

I had thought that the point of programming was to write programs, and not to do OO.

On the other hand, OO does nearly everything that HO can do... BTW, as far as I am concerned, OO is *not* equivalent to inheritance+subtypes+methods+classes+instances. Those are techniques useful in some situations. For me, OO is fundamentally about encapsulation and interfaces.

The great thing about OO is that you can define it to be whatever you want it to be, and everyone has a different definition. This way OO can survive even when criticisms are leveled against any of its putative incarnations.

I would argue that, at a slight loss of elegance, OO does *everything* that a good software engineer wants to do.

Yes, you are absolutely right! So, what is the most effective way to reeducate all those poor, misguided software engineers? :)

great, another OO definition thread

there's only one defining feature that is unique to OO (or at least originated with it), and that is inclusion polymorphism. Anything else is available in many different forms elsewhere.

I think it is a mistake to co

I think it is a mistake to conflate implementations or techniques with justification. Inheritance, polymorphism, etc. are nice tools -- but they should not be confused with the motivation for OO programming.

For me, a primary motivation for looking at OO is that it helps in the programming process -- all of the programming process.

Inclusion Polymorphism

Wouter: there's only one defining feature that is unique to OO (or at least originated with it), and that is inclusion polymorphism. Anything else is available in many different forms elsewhere.

And there's an argument to be made that inclusion polymorphism causes more problems than it solves.

sure

I didn't argue about its value. I personally prefer inclusion polymorphism expressed in ways other than inheritance -- multimethods, for example. Combining inclusion polymorphism and data structure extension/compatibility in one mechanism just gives too many problems.

Encapsulation with closures

an object can act as a kind of closure but a closure cannot capture the multiple uses of an object

For me, OO is fundamentally about encapsulation and interfaces.

Scheme:

;; A point "object" encoded as a closure: x and y are encapsulated,
;; and the message dispatch below is the interface.
(define (mk-point x y)
  (lambda (op . args)
    (apply
     (case op
       ((x) (lambda () x))
       ((y) (lambda () y))
       ((+) (lambda (pt) (mk-point (+ x (pt 'x)) (+ y (pt 'y)))))
       ((-) (lambda (pt) (mk-point (- x (pt 'x)) (- y (pt 'y)))))
       ((display) (lambda ()
                    (display "(") (display x) (display " ")
                    (display y) (display ")")))
       (else (error "point: unknown message" op)))
     args)))

(define pt1 (mk-point 1 2))
(define pt2 (mk-point 3 4))

((pt1 '- pt2) 'display)

OO, open recursion, and auto-complete

After overly long in this business, and overly long thinking about languages, I've come to these realizations about object-orientation:

1) The only core functionality that object-orientation enables, and that other paradigms flail around with, is open recursion (a minimal Scheme sketch follows after point 2). I believe I've used open recursion in my designs all of twice in the last five years. All the other talk about encapsulation, modularity, reuse, polymorphism, and sub-typing is done at least as well, if not better, in modern non-OO languages.

2) The big incidental payoff of object-orientation is that it allows really precise and useful auto-complete. This is largely a side effect of the fact that the object-oriented languages I use structure calls as subject-verb-args, while the non-OO languages structure calls as verb-(subject)-args. Strong auto-complete saves me about a half-hour per day.
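
Since open recursion may be unfamiliar, here is a minimal Scheme sketch (all names hypothetical, in the closure-object style shown earlier in the thread): generators take `self` explicitly and a fixpoint ties the knot, so a derived override of 'name is seen by the inherited 'greet.

(define (fix gen)                 ; tie the knot: give the generator itself
  (define (self msg) ((gen self) msg))
  self)

(define (base-gen self)
  (lambda (msg)
    (case msg
      ((name) "base")
      ((greet) (string-append "hello, " (self 'name)))  ; open call via self
      (else (error "unknown message" msg)))))

(define (derived-gen self)
  (let ((super (base-gen self)))
    (lambda (msg)
      (case msg
        ((name) "derived")          ; override seen by the inherited 'greet
        (else (super msg))))))

(display ((fix derived-gen) 'greet))   ; prints "hello, derived"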

Such an old thread. I

Such an old thread.

I probably use open recursion much more than average (I basically grew up on OO), and I find it essential in expressing "naturalistic" designs according to some ontology (IsA/HasA relationships). One of the things I did when I was working on Scala was really push the boundaries of that, with a very recursive version of the cake pattern (which itself is just a non-modular variation on open classes). I think well-defined open recursion is a very strong tool in building modular designs and shouldn't be dismissed out of hand.

Having a subject really grounds code completion, though I think functional languages try to accomplish something similar with OO-like naming conventions for functions (e.g., string-trim-right in Scheme). Ironic that what FPLs try to avoid comes back as a library convention.

Apologies

Sorry about posting here. It popped up on "Active forum topics", and for some reason I thought it was under "New".

It's okay...

...your observation about dot-notation and autocomplete was interesting enough to justify the resurrection of the thread.

I was only surprised that

I was only surprised that this thread existed and was made before I was even a member (~2006).

Agree

That's a nice observation, and it kind of matches my experience, too. Many OO programmers probably use open recursion more often, but I find many of the uses I see in the wild to be of rather questionable utility. It often feels like a backwards (and obfuscated) way of doing higher-order abstraction (i.e., the recursion is just accidental).

Auto complete; Capabilities

Haskell allows putting the verb in the middle in a nice way: subject `verb` args. I agree that having a subject already in place provides a nice context for narrowing possible verbs.

I avoid recursion in OO, including open recursion. My objects tend instead to represent pipeline processes, with queues and stacks where necessary.

Where I think object orientation helps the most is object capability patterns. That's the most important thing I'll take away from OO. Sadly, most OO languages don't even support them, because they're trying to be all hybridized and get a little of the best and a lot of the worst of all worlds.

Isn't the core of "object"

Isn't the core of "object" capabilities much more cleanly expressed in lambda calculus? The crucial thing you need is for a component X to be able to use other components internally that are not accessible to X's clients. Closures give you exactly this.
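
For instance (a minimal sketch): the state `n` below is a component reachable only through the closure handed out; no client of `make-counter` can name or touch it directly.

;; `n` is private to the closure; clients receive only the
;; authority to increment-and-read it.
(define (make-counter)
  (let ((n 0))
    (lambda () (set! n (+ n 1)) n)))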

Isn't the core of "object"

Isn't the core of "object" capabilities much more cleanly expressed in lambda calculus?

No. Not unless you mean an impure language based on a lambda calculus. In a pure lambda calculus, you can't ever invoke a capability, only return a stream of instructions to do the same.

If you use a powerful type system in addition to lambda calculus, you can use ADTs to represent capabilities. But that's an abstraction inversion as far as ocap model is concerned. ADTs are based on ambient authority to a collection of operations. Better to use capabilities (sealers, unsealers, etc.) to model ADTs.
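
For illustration, a hedged Scheme sketch of a sealer/unsealer pair (all names hypothetical): the key is a fresh, unforgeable value shared only between the two closures, so only the matching unsealer can open a sealed box.

;; Sealer/unsealer sketch: `key` is a fresh list cell that nothing
;; outside this closure can forge or obtain.
(define (make-sealer-pair)
  (let ((key (list 'key)))
    (values
     (lambda (v) (lambda (k) (if (eq? k key) v (error "bad key"))))  ; sealer
     (lambda (sealed) (sealed key)))))                               ; unsealer

(define-values (seal unseal) (make-sealer-pair))
(unseal (seal 42))   ; => 42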

I found the paper explaining

I found the paper explaining it: A Security Kernel Based on the Lambda-Calculus.

First class procedures and

First-class procedures and closures like those offered by Scheme are suitable for capabilities. But a more accurate name would be "A Security Kernel with Elements Inspired by the Lambda Calculus". Scheme is pretty far removed from lambda calculus in the language design space.

Maybe not mutation, but indeed object

I'm not convinced by the mention of "impure calculus" here -- despite knowing that you have devoted much more thought to this issue than I have.

You are presupposing that the capability encapsulates the right to some particular side effect. In a more functional language, value production, rather than state mutation, should be privileged (no pun intended; I mean "considered a form of authority"). When a file descriptor ocap represents the authority to read some file in the filesystem, reading requires no mutation. My intuition tells me that a lot of privilege could be presented in a stateless form -- in fact, the stateful/stateless choice is quite orthogonal to the capability presentation; it depends on which portion of our domain model we know how to, or want to, represent in a pure setting -- though I agree that for systems programming there is a majority of things that we are used to representing in a stateful setting. In fact, this is also orthogonal to the "object or not" choice: I won't teach you anything by saying that there are stateless object designs and stateful functional designs.

I however think you are right to say that the ocap tradition is rooted in the idea that independent actors (capabilities) may each provide a unique, abstract implementation of their expected interface. A record of functions sealed by an existential type is the most natural way to present this, and it is effectively what some authors (e.g. William Cook) call "object-oriented programming". Under those conditions, I agree that ocap is object-oriented programming -- it also coincides with the notion of "first-class modules" in the ML community.

Do you have nice examples of ocap patterns that rely on open recursion? I'm sure there are, and they further justify the use of an "object-oriented syntax" for capability programming.

When a file descriptor ocap

When a file descriptor ocap represents the authority to read some file in the filesystem, reading requires no mutation.

It does admit mutation. Reading a file is both free to mutate a filesystem (e.g. update `access time` on the file, or modify a log), and generally allows mutations by others on the same file to be observed.

It just doesn't require any internal mutation. I agree that internally stateless cap systems can exist, and indeed I plan to build one.

In a more functional language, value production, rather than state mutation, should be privileged (no pun intended; I mean "considered a form of authority").

Yes. We can control value creation in functional languages by use of ADTs. Or in OOP if we require parameters with concrete classes instead of interfaces. Or we could model it via a sealer/unsealer concept. It's a valuable pattern in rights amplification.

I agree that ocap is object-oriented programming

I never said that ocaps are object-oriented programming. I think we could get ocaps out of non-OO systems. Just don't confuse ocaps with sealed values! Nor with substructural dataflow constraints, for that matter. There are lots of potentially valuable ways to control authority; we should learn each of them and their places.

Do you have nice examples of ocap patterns that rely on open recursion?

There are such patterns. The ones I most readily think of involve UI with deep recursion of views - each iframe granted certain authority and granting authority.

they further justify the use of an "object-oriented syntax"

I'm not sure anything justifies object-oriented syntax... (assuming you mean the ad-hoc bundling of methods into constructs called `objects`, as opposed to plain old maps or records).

Roles reversed

Let me argue for the position I thought you held.

1. As I understand them, an important idea of capability systems is that, in some sense, the designation of an action is also the right to perform it. There is no notion of "I try to read this file but I'm not authorized": either you see the (read this file) capability and you can use it, or you don't and it makes no sense to think it. This idea maps very closely to the "record of functions" style of object-oriented programming: any object in a capability system comes bundled with the interactions you can have with it.

I'm confident you could adopt a "less OO" style of least-privilege programming: you would have verbs to designate the possible actions, which would require privilege tokens to be passed, proving that the action is allowed. I don't think you would lose much by using this style, and in particular it certainly also allows "least privilege" programming; it's just that, in the presentation at least, it seems a bit farther from the concept of a capability bundling designation and privilege.

2. There is a net difference between what William Cook calls "ADTs" and "objects", which can easily be seen in their typed presentation (a difference in the placement of an existential quantifier): ADTs provide encapsulation for a set of values sharing the same representation, while objects manipulate sets of values each having an opaque representation. The latter better describes open message-passing systems where each actor has its own private implementation, and in particular most usual capability design patterns (e.g. the revocation pattern uses a proxy actor). In a closed world, you could use an ADT whose common state is a disjoint union of all desirable behaviors, but you hit the expression problem: I am under the impression that for this style of programming, where the set of operations is relatively fixed and the set of implementations may be quite large (and representing unknown behavior is a common case), the "product of functions" style is the better side of the duality than "functions on sums".

That said, there is no absolute, and I would expect most systems to actually use a hybrid of the more "object-oriented" and the less "object-oriented" approaches. I would still say that the existing practice in capability systems, and in particular the "ocap" tradition, leans more toward the "object" side (in this limited sense of an existential product of functions).

Secure Tokens vs. Capabilities

you could adopt a "less OO" style of least-privilege programming: you would have verbs to designate the possible actions, which would require to be passed privilege tokens proving that the action is allowed. I don't think you would lose much by using this style

The notion of passing proof tokens (e.g. ADTs, sealed values) to verbs loses much. The basic problems are three-fold:

  1. the recipient of the token might have more authority than the sender, just by knowing more verbs. This makes it difficult to reason about how much authority is being granted.
  2. a token can have different meanings in different contexts, i.e. referring to different ambient resources in dynamic scope; there is no clear designation (though patterns with phantom-types can prevent leaks across scope, used with the ST monad for example)
  3. you can simplify the first two problems by making the set of verbs and meanings universal - so that everyone who holds the token has the same authority and identifies the same resources. But in this case you still cannot achieve certain essential patterns, such as attenuation of that authority, unless every ADT provides a set of verbs just for attenuation (which doesn't happen in practice). It also doesn't scale well, e.g. with versioning.

Tokens provide authority without designating any particular actions or resources, at least in general. Unforgeable tokens provide an effective basis for rights preservation and amplification patterns, but they easily grow out of control.

Capabilities are more restricted - by tightly coupling authority with designation, capabilities improve control, clarity, and awareness of authority. Those are important principles for secure interaction design.
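
To make attenuation (point 3 above) concrete: in the capability style it is just a wrapper closure. A hedged sketch, where `file-cap` is a hypothetical message-taking procedure in the style of the point example earlier in the thread:

;; Attenuation sketch: the facade forwards only the messages we choose
;; to allow; the grantee never obtains `file-cap` itself.
(define (read-only file-cap)
  (lambda (op . args)
    (case op
      ((read) (apply file-cap op args))
      (else (error "not authorized" op)))))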

in particular it certainly also allows "least privilege" programming; it's just that, in the presentation at least, it seems a bit farther away from the concept of capability bundling designation and privilege.

Yes, unforgeable tokens, ADTs, etc. certainly allow "least privilege" programming - you can provide appropriate verbs for attenuation, share verbs globally, etc. But in practice the necessary discipline won't happen consistently. Secure programming is not the path of least resistance.

I agree with most everything you said in paragraphs "important idea of capability systems", "net difference" and "there is no absolute". You only underestimate how important that "important idea of capability systems" happens to be in design and in practice.

Tokens with good granularity

Just to clarify things, I was thinking of a setting where:
- the entire set of verbs is known by everybody
- a token contains authority for a single action (verb) on a single subject/object of the system
- I did not consider static checking that the token is valid for this verb and object, just dynamic failure if the token is wrong (including in the case of confused deputy); you could do that with strong enough type systems (singleton types)

The correspondence between the two styles is given by:

- the ability to implement the object style from the token style, the token being stored in the closure along with the private state of the object.

  let obj_of (state, token) =
    { verb = fun params -> global_verb (state, token, params) }

- the ability to implement the global verbs from the object style by defining the notion of token to be precisely the ocap object. The passed "state" is empty, as it is already hidden in the object/token.

  let global_verb (_, token, params) = token.verb params

Impure Calculus

Yes, if you couple the authority granted by a token with precise designation, you can avoid a lot of problems.

We got started here with you being "not convinced by the mention of impure calculus here", and with the idea that these tokens would be capturing authority for "value production, rather than state mutation".

Can you further explain the role you see for pure values with these tokens? What authority do you see protected by these tokens?

You can protect a data source, e.g. a signal value of the form (T -> MousePos). But it isn't clear that tokens are necessary or helpful in that role.

Another thing you can protect is command streams. But to represent a command without sharing the token, you'd need to somehow compose opaque commands. Your pure functions will eventually be of a form similar to `a -> IO b`, and hence your programming style (within composition of IO) is essentially impure object capabilities.

What values are you imagining to protect that require tokens but don't require "impurity" via an opaque effect composition model?

Some differences

Your pure functions will eventually be of a form similar to `a -> IO b`, and hence your programming style (within composition of IO) is essentially impure object capabilities.

We were through this recently, so I won't belabor the point, but I think there are advantages to having that command passing model. For one, it provides a semantics other than spooky action at a distance. But more importantly, when you have a pure command passing model, you're forced to explain who is responding to the commands. That is, you have to anchor your objects and state, rather than letting 'new' create state in the ether. Knowing where state lives lets you control it (e.g. make copies of it).

Essential Difficulty

when you have a pure command passing model, you're forced to explain who is responding to the commands

Real systems are open systems - sensors, actuators, users. We cannot explain who is responding to all commands. This is essential difficulty.

you have to anchor your objects and state, rather than letting 'new' create state in the ether

Eliminating ambient authority for `new` has a lot of advantages - for security, persistence, scalability, live programming. But it is orthogonal to command passing style.

Knowing where state lives lets you control it (e.g. make copies of it).

Untrue! Knowing where state lives does not let you control it. And the ability to control state and to make copies of it are independent features (not all state you control can be copied; not all state you can copy you can control).

We probably don't disagree on much

When the recipient is external, we can just know we're talking to some port. If by "eliminating ambient authority for new" you have in mind eliminating a global 'new' altogether and replacing it with allocations on specific heaps, then we agree. As to your last point, knowing where state lives is necessary if not sufficient for controlling it.

Hmm

Knowing we're talking to some port is insufficient for reasoning about application behavior. It isn't a better answer than "spooky action at a distance". It's worse. In general, we need to have a good idea of what happens in the real world and what sort of responses we can expect when we talk to the port, either empirically or via partial specification. We need a model of action at a distance.

For most values of "control", knowing where state lives isn't necessary to control it. That is a separable concern, if you're willing to accept some indirection. Consider associative memory and the blackboard metaphor.

What you do need to know is how to communicate with the state.

I agree with you that we

I agree with you that we need to model secure action at a distance -- you convinced me of that last time. I would just prefer a less spooky version, like an explicit channel that intermediaries cannot interfere with. My approach also clears up some semantic questions, like: if you pass a message to an object, and it signals an effect, and then while handling the effect you send another message to the object, what happens (what state is the object in)?

Knowing that you're talking to some port is the best the local code can do, whether it's a logical port that the OS is funneling to the sensors/actuators or the physical port to the sensors/actuators.

One difference between what I want and ocaps is that ocaps are permanent: e.g. if you pass a capability to an untrusted object and it then returns an object, that returned object can directly access the capability. If you add a type system that can record that the object doesn't close over the capability, though, it's really a very similar system.

The main difference I see is how I handle suspended objects, continuation capturing, etc.

Open systems are essential.

Open systems are essential.

Your approach and the benefits it offers rely on a closed-system assumption. For example, to "suspend" services, state, or software agents - or even to reason about what happens when two messages arrive concurrently - programmers using your approach must first model the service as an object internal to the system.

Their ability to do so is inconsistent at best. Not every service or resource can be replicated internally. So your developers must choose between a consistent experience and occasional specialization for features that aren't much applicable to real systems anyway (you can't suspend the world or copy the user).

Regarding ocap permanence: revocation patterns are possible, can even be implicit in some abstractions (membranes, linearity, single assignment, expiration). In the design I favor, revocation is logically continuous - and so is providing a fresh capability, via signal.
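
A hedged sketch of the simplest such pattern, a caretaker, in the thread's Scheme style (names hypothetical): the granter hands out the proxy and keeps the revoker.

;; Caretaker sketch: the proxy forwards to the wrapped capability until
;; the revoker severs the link; the grantee cannot prevent this.
(define (make-revocable cap)
  (let ((target cap))
    (values
     (lambda args                 ; proxy, given to the grantee
       (if target (apply target args) (error "capability revoked")))
     (lambda () (set! target #f)))))  ; revoker, kept by the granter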

Closed systems

The ability to manipulate a closed system of entities is present at the system architecture level. You're right that there are constraints when letting that closed system interact with the open system, but that's part of the problem domain and I wouldn't call it inconsistency. You're less interested in supporting customizable system architecture, preferring to build one good-for-most-purposes architecture as your language run-time, out of concern for fragmentation and code incompatibility. Is that a fair recap?

Custom architecture is not a crisis

Custom architecture is not a crisis faced by users of object capability models. The ability to build objects that build objects (staged programming) makes custom architecture available even at runtime. In cases where closed systems are useful, confinement patterns are possible by controlling construction of objects.

I am very interested in supporting customizable system architecture. I envision ad-hoc architectures created through orchestration and open composition - e.g. glue this database to that cloud service; attach both to an app on my phone. My concerns about fragmentation and code incompatibility aren't instead of a greater interest in customization; they're because of it. I also want to improve post-hoc customization - live programming, plugins and extensibility, and consistency in how we express it.

Capabilities are a very effective basis for these goals. For my vision, capabilities serve better than a closed process approach.

You're right that there are constraints when letting that closed system interact with the open system, but that's part of the problem domain and I wouldn't call it inconsistency.

The developer's experience is inconsistent. That isn't a logical inconsistency, but rather relates to consistency of "notation" or consistency in UI.

Sometimes the problem domain forces us to model a resource as external to our closed system. But the inconsistency happens because of a non-essential choice when not so constrained: shall I model this particular (service or agent or data resource) inside my process, or as external to it? The result is that structures serving similar roles (a database, perhaps) have very different feature sets and interfaces.

Not true

Shall I model this particular (service or agent or data resource) inside my process, or as external to it?

From the point of view of any code that uses services, all services appear external. They only look internal to the component that owns and installs them. There is no UX inconsistency.

I don't see any disadvantage of the Processes I have in mind vs. object capabilities. Only advantages that you may not be interested in.

They only look internal to

They only look internal to the component that owns and installs them. There is no UX inconsistency.

Those two sentences are contradictory.

I don't see any disadvantage of the Processes I have in mind vs. object capabilities.

I believe you. But you also have a very different vision than I do for application development and user experience. For your vision, your process model may offer only advantages.

From my perspective, many of the alleged advantages are useless in most cases, and there are costs to achieve them even when we don't use them. Even if I model my system in yours, doing so would promptly undermine the advantages you propose - e.g. processes can't build arbitrarily deep state if you want live programming, and meeting communication contracts means you cannot simply suspend or copy processes.

I didn't word that well

Access to objects is done with the same syntax everywhere, even within the component that owns them. However, the installing component has the extra ability to manipulate the state "from the outside", moving it, replacing it with another object, etc. This has a distinct UX, as it should, but will typically be encapsulated to a small section of code. There is no UX inconsistency.

Interesting. You are saying

Interesting. You are saying there is no UX inconsistency because most users don't need the features you offer, right?

And those that do can use a little specialized, encapsulated design. Hmm. Sounds like what I was saying with membranes where I need them.

Yes

I'm not familiar with the membrane pattern (I'll go read up later), but if you have a pattern for changing the binding of a swath of objects, then you can probably do equivalent things with that, I'd imagine.