Naked Objects

The Naked Objects Approach is not a sleazy way to make a quick buck, but a framework for writing business applications that does away with the usual Model-View-Controller architecture. To quote the website:

In the naked objects approach ... writing a business application implies writing only the business domain objects themselves. All business functionality is implemented as behaviours or methods on those objects - the objects can be described as being 'behaviourally complete'. These business objects are then presented directly and automatically to the user, by means of a completely generic viewing mechanism. Similarly, the persistence layer can be generated completely automatically from the domain object definitions (manual mapping will still be required if the objects must interface with existing databases).

Sounds like polytypic programming to me! Ruby on Rails has something similar with scaffolding, and the Django framework in Python does the same.

It's nice to see application of theory, though I'm virtually certain those doing the applying wouldn't recognise it as such.


Previously on LtU

It's nice to see theory

It's nice to see theory starting to address real-life practice, though I'm virtually certain those doing the theorising wouldn't recognise it as such.

:-)

Fair point.

Naked Objects and Poly-Typic Theory

Naked Objects is certainly generic but not terribly practical - see Fixing a Generic UI (Clothing Naked Objects).

I tend to agree that the term polytypic programming is late to the game and only useful as a vague umbrella concept.

If we need to create terms for categories of generic programming, then I think it would be better to start by explicitly classifying existing genericity techniques, e.g.:

- type genericity (polymorphism, with an emphasis on state rather than methods)
- invocation genericity (method or behaviour polymorphism)
- structural genericity (the Visitor, Builder, or Clone patterns)
- algorithmic genericity (C++ templates or Java generics)
- representation genericity (e.g. everything is an object or a string, possibly annotated with a type)

Another dimension of classification could include the implementation aspects - dynamic lookup (polymorphism), introspection-orientation, common data and schema formats (e.g. XML and XML Schema).

Under this classification, Naked Objects is simply polymorphism with introspection (assisted by naming conventions).
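
To make that concrete, here is a small Haskell sketch (my own illustration, not anything from Naked Objects itself): Data.Data can recover record field names at runtime, which is the naming-conventions-plus-introspection trick in miniature. The type and function names are mine.

    {-# LANGUAGE DeriveDataTypeable #-}
    import Data.Data

    -- 'Customer' and 'fieldLabels' are illustrative names of my own.
    data Customer = Customer { custName :: String, custAge :: Int }
      deriving (Show, Data, Typeable)

    -- Recover record field names by introspection, as a generic UI
    -- might when turning a domain object into labelled form fields.
    fieldLabels :: Data a => a -> [String]
    fieldLabels = constrFields . toConstr

    -- fieldLabels (Customer "Ada" 36)  ==>  ["custName","custAge"]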

Interesting

The Unreal engine has been using a similar approach since 1995. Programmers define classes using the Java-style UnrealScript language, and use annotations to decide which of the member variables are exposed to designers. The engine then exposes a quite powerful property-editing UI enabling designers to customize the properties of those objects.

Every datatype has a corresponding edit control. Bytes are sliders, booleans are checkboxes, enumerations are drop-down combo boxes, structs create an expandable hierarchy with subcontrols for their components, etc. There are some screenshots showing this here: http://www.unrealtechnology.com/html/technology/ue30.shtml
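
For a rough feel of how such a type-to-control table might look, here is a hedged Haskell sketch; the class and instances are my inventions, not Unreal's actual API.

    {-# LANGUAGE ScopedTypeVariables #-}
    import Data.Proxy

    -- Hypothetical stand-in for Unreal's datatype-to-control mapping.
    class PropertyEditor a where
      editorFor :: Proxy a -> String

    instance PropertyEditor Bool where editorFor _ = "checkbox"
    instance PropertyEditor Int  where editorFor _ = "slider"

    -- Compound values get an expandable group built from their parts,
    -- like the struct hierarchy described above.
    instance (PropertyEditor a, PropertyEditor b) => PropertyEditor (a, b) where
      editorFor _ = "group [" ++ editorFor (Proxy :: Proxy a)
                  ++ ", "     ++ editorFor (Proxy :: Proxy b) ++ "]"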

The resulting user interface is less polished than a typical modelling program (like 3D Studio Max or Maya) where a UI designer lays out every control manually. But the productivity gains from fast turnaround time far outweigh the negatives in this case.

More About UnrealScript, Please?

Hi Tim! Long time, no talk. It's a shame things didn't work out for me to join Epic, but I continue to look forward to forthcoming Unreal technology. :-)

One thing that I've always found fascinating about UnrealScript is that, like Java to some extent, it's a statically-typed language, but without full type erasure: it obviously supports some level of reflection and/or introspection, and all this while being well-integrated with C++ so that you can have native functions, latent functions, and so forth. I remember trying to explain to my stepson why it was impressive that, from the Unreal console, you can say something like "summon xweapons.redeemerpickup" and have a Redeemer pop into the world in front of you.

So I'm wondering if you have any (more) comments about reflection and introspection, type erasure, etc. in the context of your current thoughts on type theory and language design. Perhaps more specifically, to the extent that your thinking is influenced by Ontic and doesn't make a strong distinction between types and values in the first place, what do "type erasure" and/or "reflection/introspection" even mean? And how would all of this be integrated with C++? Or would answering these questions give away too much of the farm?

Scaffolding in Rails

I'm not so sure that Ruby on Rails scaffolding is really Naked Objects. It's boilerplate code generation to help get one started on an app: something that works well as example code, but not much beyond that.

There is no "completely generic viewing mechanism" (either with or without scaffolding), and "the persistence layer can be generated completely automatically from the domain object definitions" sounds very much like another Ruby web framework, Nitro, which is the antithesis of Rails in this respect.

Objects <-> DB

But in Rails, the persistence layer must exist *first*, in order to derive the domain object definitions, which is the reverse of what was described for Naked Objects: "the persistence layer can be generated completely automatically from the domain object definitions."

Nitro (well, Og, actually) generates the persistence layer (usually a database and tables) from the domain object definitions (the Ruby code for the models).

(Though I wonder if we have different ideas of what "persistence layer" and "domain object definitions" map to.)

Introspection

Languages built on modern type theories guarantee many universal properties which assure that your program does what it says it's going to do. One example is the parametricity theorem (see Wadler's "Theorems for Free"). It guarantees that a program cannot decompose values belonging to a universal type, an important security property enabling you to (for example) call functions in another module with the assurance that they can't do anything malicious with the data.

Introspection, reflection, and even support for unconstrained casting (e.g. from Object to Integer in Java) all violate that property. Therefore, unlimited introspection is not a desirable property in a secure language. (C, C++, Java, C#, and Python aren't secure languages in this regard.)
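
A tiny Haskell illustration of the point (my own, with Typeable standing in for unconstrained introspection):

    import Data.Maybe (fromMaybe)
    import Data.Typeable (Typeable, cast)

    -- Parametricity at work: the only total function of this type is
    -- the identity, because f can neither inspect nor rebuild an 'a'.
    f :: a -> a
    f x = x

    -- Add introspection and the free theorem is gone: g can now single
    -- out Strings and treat them differently behind the caller's back.
    g :: Typeable a => a -> a
    g x = case cast x :: Maybe String of
            Just s  -> fromMaybe x (cast (s ++ " (tampered)"))
            Nothing -> x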

However, explicit programmer-controlled introspection is a good and reasonable feature. For example, when defining a data type, it would be nice if a language offered a mechanism for specifying that associated metadata should be generated that describes the structure of that data, which may only be directly accessed by the module that contains the definition, but may be passed around to other modules explicitly by the programmer.

More concretely, when I define a type T in the local scope, I should be able to syntactically specify that the compiler should generate a data structure describing T's layout (call it T.metadata, for example), and to make that data public or private. I can then pass that metadata around to dependent-typed functions along with values of that type so that it can be deconstructed -- if I explicitly allow it to be. But, given just a value of some arbitrary type, one should not be able to extract its "type", since that notion violates many universal properties (such as the subtyping property).

Haskell typeclasses enable one to explicitly (not automatically) define metadata associated with a type in this way, and in the Haskell world this is done without violating the language's universal properties. A more automated mechanism ("deriving TypeInfo"?) would make this more user-friendly.
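
Here is a rough sketch of what I mean, with all names hypothetical; a "deriving TypeInfo" extension would merely automate writing such an instance, and the defining module controls exposure by choosing whether to export it.

    -- Metadata as an ordinary value, produced by a hand-written instance.
    data TypeInfo = TypeInfo
      { typeName  :: String
      , fieldInfo :: [(String, String)]   -- (field name, field type name)
      } deriving Show

    class HasTypeInfo a where
      typeInfo :: a -> TypeInfo           -- the argument is only a type proxy

    data Account = Account { owner :: String, balance :: Int }

    instance HasTypeInfo Account where
      typeInfo _ = TypeInfo "Account" [("owner", "String"), ("balance", "Int")]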

Traversal etc.

So the main point I take from this is that arbitrary introspection/reflection should be disallowed. That seems reasonable. PLT Scheme has one take on it with its inspectors.

However, reading the prior posts, the main criticism boils down to: you need more customisation than these automated procedures provide. This is where, I think, PL theory can make the largest contribution.

Imagine the process of constructing a GUI from a type definition. This process is a fold over the type. Simple stuff. Now how about customisation? We want to replace some parameters of the fold with our own functions. Again, fairly simple. Now what if there are dependencies between elements? We might need to alter the traversal order, and so on. Structuring all of this cleanly is what we can get from the theory. Strategic programming, and Stratego are the first places I'd look for ideas.
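
A minimal Haskell sketch of what I have in mind (every name here is hypothetical): the type's shape is reified as a value, the UI is a fold over that shape, and customisation means overriding the fold at chosen fields.

    import Data.Maybe (fromMaybe)

    data Ty = TString | TInt | TBool | TRecord String [(String, Ty)]

    data Widget = TextBox | Spinner | CheckBox | PasswordBox
                | Labelled String Widget
                | Group String [Widget]
      deriving Show

    -- The fold: defaults per case, plus a per-field override hook.
    uiFor :: (String -> Maybe Widget) -> Ty -> Widget
    uiFor hook = go
      where
        go TString        = TextBox
        go TInt           = Spinner
        go TBool          = CheckBox
        go (TRecord n fs) =
          Group n [ Labelled f (fromMaybe (go t) (hook f)) | (f, t) <- fs ]

    -- e.g. render everything generically, but mask one field:
    -- uiFor (\f -> if f == "password" then Just PasswordBox else Nothing)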

A lot of application development is just writing a nice front-end to a database (say, about 90% of websites are just this). If this could be made really easy, and yet really customisable, it would be a great tool.

Oleg Again :-)

Noel: Imagine the process of constructing a GUI from a type definition. This process is a fold over the type. Simple stuff. Now how about customisation? We want to replace some parameters of the fold with our own functions. Again, fairly simple. Now what if there are dependencies between elements? We might need to alter the traversal order, and so on. Structuring all of this cleanly is what we can get from the theory.

This sounds like Oleg Kiselyov's work on General Ways to Traverse Collections. I still think his "Zipper qua delimited continuation reified as a data structure" is the most important CS result of 2005. I'm still waiting on an O'Caml implementation, however, as I don't yet understand the module system well enough to develop my own. I really should keep trying, though.
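
For readers who haven't met zippers, here is a toy list zipper (my own, and nothing like as general as Oleg's construction, which derives the analogous structure for any traversable collection by reifying a delimited continuation of its traversal function):

    -- A cursor into a list with O(1) local moves and edits.
    data Zipper a = Zipper [a] a [a]   -- left context (reversed), focus, right

    left, right :: Zipper a -> Maybe (Zipper a)
    left  (Zipper (l:ls) x rs) = Just (Zipper ls l (x:rs))
    left  _                    = Nothing
    right (Zipper ls x (r:rs)) = Just (Zipper (x:ls) r rs)
    right _                    = Nothing

    modifyFocus :: (a -> a) -> Zipper a -> Zipper a
    modifyFocus f (Zipper ls x rs) = Zipper ls (f x) rs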

And a bit more

I think there is a bit more here than just a universal traversal. Composition is the main additional problem. I don't want to specify my traversal as one monolithic lump. I want to specify individual rules, which might require bottom-up and top-down traversal, and have some smart system combine them into one uber-fold. Perhaps Oleg addresses this -- I haven't read all his work -- but it isn't in the work I have read. It seems like a neat problem, closely related to existing work in deforestation, so I wouldn't be surprised if someone has implemented it.
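
Something like the following Haskell sketch (all names mine, and only a pale shadow of what Stratego provides): rules are small, independent, partial rewrites, and combinators fuse them into one traversal.

    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr
      deriving Show

    type Rule = Expr -> Maybe Expr

    -- Left-biased choice: try r, fall back to s.
    (<+) :: Rule -> Rule -> Rule
    (<+) r s e = maybe (s e) Just (r e)

    -- One generic bottom-up traversal applies the combined rule everywhere.
    bottomUp :: Rule -> Expr -> Expr
    bottomUp r e = tryRule (descend (bottomUp r) e)
      where tryRule x = maybe x id (r x)

    descend :: (Expr -> Expr) -> Expr -> Expr
    descend f (Add a b) = Add (f a) (f b)
    descend f (Mul a b) = Mul (f a) (f b)
    descend _ e         = e

    -- Two independent rules, combined into one pass:
    addZero, mulOne :: Rule
    addZero (Add e (Lit 0)) = Just e
    addZero _               = Nothing
    mulOne  (Mul e (Lit 1)) = Just e
    mulOne  _               = Nothing

    -- simplify = bottomUp (addZero <+ mulOne)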

Re: metadata is dangerous

But, given just a value of some arbitrary type, one should not be able to extract its "type", since that notion violates many universal properties (such as the subtyping property).

That doesn't make sense to me, but I am a caveman when it comes to PLT. Perhaps "the subtyping property" explains why I'm being dumb. Pattern matching isn't considered evil so why shouldn't I be able to get the metadata for something and then have my code take different paths based on that? (The main argument I can think of: a better design is to make that kind of stuff more explicit at the regular type level, rather than relying on metadata.) As long as the code isn't trying to change the metadata it should be safe, no?

Re: Theorems for Free

Typeable

deriving Typeable does what you're thinking?

The Subtyping Property

raould: By "given just a value of some arbitrary type, one should not be able to extract its 'type', since that notion violates many universal properties", I mean: In the presence of subtyping, a value is expected to be a member of every type that contains it. In a language with subtyping, singleton types, and a top type, what can you say about "the type of 3"? Well, 3 belongs to every type that contains 3.

The least such type is the singleton type containing 3, and the greatest is the top type. These are the only two universal answers we could hope to give, and neither one yields any usable information. This is unlike Java's o.getClass() method on objects, which yields nontrivial information. If we consider Object as a universal type (which Java and C# encourage), then these languages obviously fail to be parametric. So then they pile on all sorts of extra security features in order to regain just a few of the guarantees that parametricity could have provided.

Daniel: Excellent, thanks for pointing out "deriving Typeable". So Haskell implementations do indeed solve the introspection problem safely. I learn something new about Haskell every day.

So the intuition here is that if I declare a type "deriving Typeable", then I'm explicitly choosing to expose its internal details. But abstract datatypes not declared that way remain abstract such that it's physically impossible for an outside function to break the abstraction barrier.

So the intuition here is

So the intuition here is that if I declare a type "deriving Typeable", then I'm explicitly choosing to expose its internal details. But abstract datatypes not declared that way remain abstract such that it's physically impossible for an outside function to break the abstraction barrier.

Actually, Typeable just provides a way to compare two types for equality. To do introspection, you need Data.

In other words, a value of type ∃a.a is completely opaque; a value of type ∃a. Typeable a ⇒ a lets you check whether a is a type you're familiar with, but doesn't give you any extra information if it isn't; and ∃a. Data a ⇒ a lets you look at the internal structure.
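
Spelled out in code (the wrapper names are mine; the extensions are standard GHC):

    {-# LANGUAGE ExistentialQuantification, DeriveDataTypeable #-}
    import Data.Data

    data Opaque   = forall a.               Opaque a     -- nothing can be learned
    data Castable = forall a. Typeable a => Castable a   -- type tests only
    data Open     = forall a. Data a       => Open a     -- structural access

    asString :: Castable -> Maybe String
    asString (Castable x) = cast x   -- succeeds only if the hidden type is String

    constructorName :: Open -> String
    constructorName (Open x) = showConstr (toConstr x)   -- genuine introspection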