LtU Forum

Object Oriented Programming + Referential Transparency

After doing a search through some of the archives (although not as extensive as it could have been), I couldn't find any references to combining object oriented programming with maintaining referential transparency. Is such a thing possible? Are there simple solutions, or is it a rather complex undertaking?

Thank you.

MJ
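
(One well-known answer is "yes, via immutable objects": if every state-changing method returns a new object rather than mutating the receiver, the object-oriented interface survives while referential transparency is preserved. A minimal Haskell sketch of the idea follows; the names are illustrative, not from any particular library.)

  -- An "object" as a record whose state-changing "method" returns a
  -- fresh object instead of mutating the receiver.
  data Counter = Counter
    { value :: Int      -- accessor "method"
    , incr  :: Counter  -- "mutator" that yields an updated object
    }

  mkCounter :: Int -> Counter
  mkCounter n = Counter { value = n, incr = mkCounter (n + 1) }

  -- value (incr (incr (mkCounter 0))) evaluates to 2, and the same
  -- expression always denotes the same object, so referential
  -- transparency holds.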

translational vs. denotational semantics

I'm developing a three-phase semantics for a block diagram language. I'm calling the first two phases "translational" and the last phase "denotational." This distinction between "translational" and "denotational" feels right, but I'm not sure it is right and was hoping some LtU members might be able to help.

I'm using "translational" to describe the phases of the semantics that could also be called desugaring. The distinction I see between these phases and the final denotational phase is that the denotational phase cannot be captured as a translation to a finite program in the target language. The target language is a lightly sugared lambda calculus or a very limited Haskell, depending on how you look at it. (The latter interpretation is in fact how I make this semantics executable. I mention this only because I think it is cool, not because I think it is relevant to the question at hand.)

This all seems to stem from the denotation for the "letrec" construct, which, via the fixed-point operator, involves potentially unbounded recursive calls to the meaning function.

In particular,

meaning [[LETREC declarations IN body]] env
  = meaning body (fix newEnv)
  where
    newEnv e' = updateEnv (meaning declarations e') env

and fix and updateEnv are as you might expect, and "meaning" is assumed to be overloaded to work on declarations as well as expressions.
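
(For concreteness, one way to flesh this out into runnable Haskell, with illustrative value and environment types:)

  import Data.Function (fix)
  import Data.Maybe (fromJust)

  type Name  = String
  data Value = IntV Int deriving Show
  type Env   = [(Name, Value)]

  -- New bindings shadow old ones.
  updateEnv :: Env -> Env -> Env
  updateEnv new old = new ++ old

  -- The letrec clause, with the meanings of the declarations and of
  -- the body passed in as functions of the environment:
  meaningLetRec :: (Env -> Env) -> (Env -> Value) -> Env -> Value
  meaningLetRec decls body env =
    body (fix (\e' -> updateEnv (decls e') env))

  -- e.g. the meaning of "letrec x = 1 in x" in the empty environment:
  --   meaningLetRec (\_ -> [("x", IntV 1)])
  --                 (\e -> fromJust (lookup "x" e))
  --                 []
  -- evaluates to IntV 1; laziness is what lets fix tie the knot.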

Does this distinction between translational and denotational semantics seem correct? If so, would a denotational semantics of a language without letrec be translational? And would a denotational semantics of a language with letrec, given a target language that also has letrec, be translational? Are all operational semantics translational?

(from 2004) Possibly faster Haskell

Hacle (a Haskell-to-Clean translator): "Can the Clean compiler, in combination with this tool, produce faster executables than existing Haskell compilers?" ... "These results show two cases where the Hacle then Clean compilation system produces faster executables than the GHC compiler. Likewise they also show two cases where the opposite is true. So, since the Hacle then Clean compilation system is potentially beneficial, its completeness is now of importance."

Territorial Exception Handling

[Most exception] solutions suffer from the problem that catching exceptions can only be done within a monad, or via an explicitly passed unique structure. This means that one cannot just introduce exception handling in an existing functional program. One either has to redesign the program, or plan for exception handling from the start. The result is basically a program that is far more “imperative” than the programmer probably hoped for.

In my exploration of territorial exception handling, I hope to make clear that there are better ways to deal with this for a large class of exceptions, without giving up referential transparency or efficiency.

Territorial Exception Handling, Marco Kesseler, February 12 2004
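
(For readers who haven't seen it, the monadic style the abstract pushes back against looks roughly like this in Haskell; a generic sketch, not code from the paper:)

  -- Errors thread through a monad such as Either, so every caller
  -- that wants to catch them must itself be written monadically.
  safeDiv :: Int -> Int -> Either String Int
  safeDiv _ 0 = Left "division by zero"
  safeDiv x y = Right (x `div` y)

  compute :: Int -> Int -> Int -> Either String Int
  compute a b c = do
    q <- safeDiv a b
    safeDiv q c   -- any Left short-circuits the rest of the do-block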

(Exceptions are still something I think the programming community hasn't really nailed down. For every person that says they are evil, another says they are salvation. Some folks suggest looking for a middle path.)

Implementing Higher Order Messages

My colleague Nat Pryce has recently posted a challenge which, given the recent discussion about 'expressions vs statements', I thought might interest the LtU crowd.

In this post: http://nat.truemesh.com/archives/000535.html he talks about what he calls Higher Order Messages. This is basically a way to use higher order functions to abstract away complex traversal code in favour of a series of chained method calls.

So instead of selecting all the elements that match a predicate and then calling a method on them, you do this:

elements.where.somePredicate.do.aMethod

Ordinarily this wouldn't be relevant to LtU, but his post on implementing this idea in Ruby (http://nat.truemesh.com/archives/000537.html) has led to alternative implementations in Java and Scala: https://lampblogs.epfl.ch/b2evolution/blogs/index.php?blog=7&title=higher_order_messages_in_scala&more=1&c=1&tb=1&pb=1
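
(In a higher-order functional language the same chain collapses into ordinary function composition; a rough Haskell rendering, with made-up names:)

  -- elements.where.somePredicate.do.aMethod, rendered as a
  -- higher-order function: filter by the predicate, then run the
  -- "method" on each survivor.
  doWhere :: (a -> Bool) -> (a -> IO ()) -> [a] -> IO ()
  doWhere p act elements = mapM_ act (filter p elements)

  -- e.g. doWhere even print [1..10] prints the even numbers.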

new GUI paradigm

The latest Alertbox from UI guru Jakob Nielsen would be rather fascinating if it weren't so darned obvious: the latest paradigm in UI is no longer OO (WYSIWYG is fundamentally a stateful OO paradigm) but declarative -- they call it results-oriented. Read the Alertbox and it becomes astoundingly clear that, instead of issuing commands to modify state (i.e. OO), the new thing is to specify your goals. There must be a good PL angle to this story!

Ah well, back to doing some good old command-line stuff for me -- since at least that is based on a functional paradigm!

Expressions vs Statements

While trying to generalize the structure of a program and its source code, I came across this fundamental question, which I still can't answer properly: "What is the difference between expressions and statements?"

I am pretty sure that there are many folks here with a different answer from "42" ;)
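
(One common answer: an expression denotes a value, while a statement is executed only for its effect on state or control flow. Expression-oriented languages make the distinction dissolve; a small Haskell illustration:)

  -- Haskell's if/then/else is an expression: it denotes a value and
  -- can appear anywhere a value can.
  classify :: Int -> String
  classify n = if n < 0 then "negative" else "non-negative"

  -- In statement-oriented languages such as C or Java, `if` yields
  -- no value and can only sequence effects.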

Formal methods for O/R mapping

Not sure whether this is OT for the new LtU, but in the past we used to discuss O/R mapping, and impedance mismatch (probably from a PLT perspective).

I am currently designing several tools that should work in an environment similar to Hibernate, and I stumbled upon an utter lack of even a semi-formal model of what is going on. In half an hour I came up with a simple-minded model for both the O and R sides, both using explicit state and mutable references; I then considered using Featherweight Java for the O side, but decided it's too heavyweight.

I am sure people are working on things like that (defining formal models of both object and relational, um, models with mutable state, and how they map to each other) - after all, it's at the heart of almost any J2EE (and probably .NET) application, so the question is: does anybody have pointers to this research?

Thanks.


To give you an idea of what I am talking about, the object side of my model is:
  1. State as a finite map from References to Values
  2. Values as either Atoms or Records
  3. Records as finite maps from Names to References
  4. Atoms are Null, Integer, String; Names are Strings.
The relational side is more involved, but models state in a similar way. I am still pondering the mapping between them, and whether to bring type info into the equation. At this stage I decided I could borrow some brain cycles.
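
(Transcribed into Haskell data declarations, the object side of the model above reads as follows; the constructor names are mine:)

  import qualified Data.Map as Map

  type Ref    = Int
  type Name   = String
  data Atom   = Null | IntA Integer | StrA String
  data Value  = AtomV Atom | RecordV Record
  type Record = Map.Map Name Ref   -- finite map: Names -> References
  type State  = Map.Map Ref Value  -- finite map: References -> Values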

An alternative question (or excuse :) ) - do you think the PLT community does not pay enough attention to mutable state? After all, it's a staple in most commercially used PLs.

PLT and College

I'm a freshman in college and... after a lot of thought, I want to do programming language research for a living, or at least something that involves a lot of PLT (perhaps compilers and interpreters for embedded platforms, etc.)... currently I'm working on a double major in Electrical & Computer Engineering and Computer Science. I've really started to wonder lately if I need the engineering aspect of it, or if I should change to something like Computer Science and Mathematics. Which would be better preparation for a career in PLT research?

Macros/Syntax vs. AST manipulation

The Lisp family has macros that allow you to generate new code by manipulating programs; S-expressions are used as the representation. For many uses, this representation is what makes macros hard to work with: manipulating a program in such a low-level form is difficult. Looking at Stratego/XT, the authors clearly indicate that performing program manipulation directly on ATerms (Stratego's underlying, low-level term representation, analogous to S-expression data) is quite difficult and undesirable. Stratego has mechanisms to deal with program manipulation in a more natural way.

My read of Stratego shows that the authors allow the direct use of program fragments in the language being manipulated as shorthand for the underlying ATerms.

What this boils down to is this: What is the best representation of a program to use, when the program must be manipulated/transformed? Are different representations appropriate for runtime manipulation (a program manipulating itself) vs. build-time manipulation?

Lisp chooses S-expressions. Java programs can be transformed at compile time or runtime by bytecode manipulation -- aspect-oriented programming can be used to solve problems similar to those macros are generally applied to. C++ performs build-time transformation via its template mechanisms. Scala provides flexible, extensible syntax that relies on efficient compilation to deliver transformation-like behavior. Boo (a .NET language) provides, as part of its standard library/compiler, the ability to modify the AST of the program; there is no inconvenient "detour" through syntax.
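
(To make "no detour through syntax" concrete, direct AST manipulation looks roughly like this; a toy expression type and a constant-folding pass, purely illustrative:)

  data Expr = Lit Int
            | Add Expr Expr
            | Mul Expr Expr

  -- A transformation over the AST itself: fold constant subtrees.
  fold :: Expr -> Expr
  fold (Add a b) = case (fold a, fold b) of
                     (Lit x, Lit y) -> Lit (x + y)
                     (a', b')       -> Add a' b'
  fold (Mul a b) = case (fold a, fold b) of
                     (Lit x, Lit y) -> Lit (x * y)
                     (a', b')       -> Mul a' b'
  fold e = e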

This leads me to these thoughts:

1. To the extent it can, the type system of a language should provide as much transformational power (templates/type parameters) as possible. The transformational power of a type system, it seems to me, is just as important as type-safety/correctness.
2. Syntax extensions can result in highly readable code, and as such provide oft-needed flexibility. A good test here is building syntax for array-language operators; does the code "look" as natural as that for atomic values? Does it perform well?
3. Programs increasingly operate in an environment of fluid metadata, and must be able to adapt. Therefore, transformation at runtime is highly desirable.
4. Completeness dictates that transformations should be able to construct other transformations. This necessitates runtime transformations.
5. Losing the transformative and corrective powers of type systems seems undesirable when constructing transformations in general -- are type-safe transformations at runtime any different from those performed by compilers?

Modern VMs perform just-in-time compilation, selectively compiling and optimizing programs as they run. When new code is dynamically generated, that code is folded into the same JIT framework as "normal" code. When existing code is modified, a VM will "de-optimize", then allow further optimization to occur when needed. If a program's type system is modified, is it possible to perform similar work? In general, it seems that a comprehensive model for the evolution of a program's full AST+Data (types/logic/data) is needed. Lisp does this without checks and balances. How do we decide whether a given transformation of a program is acceptable?

Perhaps there is a kind of bitemporal typed lambda system out there somewhere; something that understands what the type system used to be, what we think it is now, and allows both to be combined together in computation. :)
