
Coupling of concepts - good or bad?

I have a specific problem with OO design and programming, though it is just an instance of a more generic issue.

On one (methodological) hand, objects are thought to represent "real-world entities". On the other (scientific?) hand, objects are mostly "gobs of recursion" (at least, in single-dispatch OOPLs).

Assuming you agree with the previous statements: when you read some code, how do you tell whether a specific object/class was created because the programmer needed "gobs of recursion" (dispatch via "this") or because they wanted to model a "real-world entity"? Should it be documented? Is it important at all? What is your experience? (Mine is pretty limited, as for some reason I always end up developing very abstract systems without any "real-world" meaning :-) )
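A toy Python sketch of the two motivations (the classes are hypothetical, purely for illustration, not taken from any real system):

```python
# Modelling a "real-world entity": the class exists because the domain
# has accounts, and state travels with the operations on it.
class BankAccount:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

# "Gobs of recursion": the class exists so that dispatch through `self`
# ties the recursive knot (open recursion). A subclass can override one
# step, and every recursive call site picks up the change.
class Evaluator:
    def eval(self, expr):                 # expr ::= int | (op, expr, expr)
        if isinstance(expr, int):
            return self.eval_lit(expr)
        op, left, right = expr
        return self.eval_op(op, self.eval(left), self.eval(right))

    def eval_lit(self, n):
        return n

    def eval_op(self, op, a, b):
        return a + b if op == '+' else a * b

class TracingEvaluator(Evaluator):
    def eval_lit(self, n):                # one override changes the
        print('lit:', n)                  # behaviour of the whole recursion
        return n
```

Only the second class needs "this" in any essential way; the first could as easily be a record plus free functions.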

I feel I should not post this to the "objective scientific" thread.

Guarded Induction and Weakly Final Coalgebras in Dependent Type Theory

Quite a mouthful; I wonder if it would attract more people if it were titled "Interactive programming in Epigram" :-)

Guarded Induction and Weakly Final Coalgebras in Dependent Type Theory

In this article we are going to explore one approach to the representation of interactive programs in dependent type theory. The approach we take centres on the concept of "monadic IO" used in functional programming. We will see that in dependent type theory, besides non-dependent interactive programs in which the interface between the user and the real world is fixed, as in ordinary functional programming, there is a natural notion of state-dependent interactive programs, in which the interface changes over time.
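The "monadic IO" view the abstract starts from can be sketched even without dependent types: an interactive program is a tree whose nodes issue commands and whose branches are indexed by responses. This is only the non-dependent case (a fixed interface); the names below are made up for illustration, and Python cannot express the state-indexed version the paper develops.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:                 # the program is finished, with a result
    value: Any

@dataclass
class Command:                # issue a command, continue with the response
    cmd: Any
    cont: Callable[[Any], Any]

def run(prog, respond):
    """Interpret a program against a response function (the 'real world')."""
    while isinstance(prog, Command):
        prog = prog.cont(respond(prog.cmd))
    return prog.value

# Read a line, echo it back, return its length.
echo = Command('read',
               lambda s: Command(('write', s),
                                 lambda _: Return(len(s))))
```

In the dependently typed setting, the type of `cmd` and of the valid responses would depend on the current state, which is what makes the interface change over time.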

Concatenative Language Kont

An interesting project.

Kont came out of an attempt to design a programming language in which every program was expressed in continuation-passing style. A concatenative language is one in which programs are formed by the concatenation of smaller primitives, à la Forth.
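To make "concatenative" concrete, here is a hypothetical sketch in Python (nothing below is Kont's actual syntax or semantics): each word is a function from stack to stack, and concatenating two programs composes them.

```python
def run(program, stack=()):
    """Run a program: each word transforms the stack; literals push themselves."""
    stack = list(stack)
    for word in program:
        if callable(word):
            word(stack)
        else:
            stack.append(word)
    return stack

def add(s): s.append(s.pop() + s.pop())
def mul(s): s.append(s.pop() * s.pop())
def dup(s): s.append(s[-1])

# Concatenation of programs is composition: square = dup then mul.
square = [dup, mul]
prog = [3, 4, add] + square      # compute (3 + 4) squared
```

The CPS angle in Kont goes further: instead of returning a stack, every word would take the rest of the program as an explicit continuation.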

simpler representation of operational semantics?

While trying to learn the formal notation used in TAPL (and many other papers), I found this paper ... Syntactic Theories of Sequential Control and State by Matthias Felleisen. I didn't find it mentioned in any previous discussion on LtU (even though it is from 1992). This paper, as far as I can tell, presents the same ideas as TAPL's notation in a much more concise form. For example (a horrible 'summary' follows), in Pierce's book (page 72), some rules are listed as follows:

Syntax:
  t ::= x | Lx.t | t t
  v ::= Lx.t

For t -> t':

  t1 -> t1'  ==>  t1 t2 -> t1' t2    [E-APP1]
  t2 -> t2'  ==>  v1 t2 -> v1 t2'    [E-APP2]
  (Lx.t12) v2 -> [x -> v2]t12        [E-APPABS]

According to Felleisen's paper, these rules could be written as:

For t -> t'

C[t] -> C[t']

If we extend the syntax with contexts:

C ::= [] | (t C) | (C t) | Lx.C

In other words, we don't need a separate congruence rule for each position: 'C' marks the 'hole' where a term (or its reduct) sits, so the two E-APP rules collapse into the single context rule above (the beta rule itself is still needed as the basic reduction step).

OK, this is obviously not a summary of the paper but a question: is my understanding correct? And is it of any value? I would think so, if one is trying to build 'executable' operational semantics. No?
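For what it's worth, the 'executable' reading is direct: one step of evaluation is decompose into C[redex], contract the redex, plug the result back. A sketch over a toy language of numbers and addition (standing in for the lambda terms above; representing a context as its plug function is my own simplification):

```python
# term ::= int | ('+', term, term); values are ints.

def decompose(t):
    """Split t into (context, redex), where the redex is an addition of
    two values.  A context is represented by its plug function.
    Returns None when t is already a value."""
    if isinstance(t, int):
        return None
    _, l, r = t
    if isinstance(l, int) and isinstance(r, int):
        return (lambda hole: hole), t               # C = [], t is the redex
    if not isinstance(l, int):                      # C ::= (C t)
        c, redex = decompose(l)
        return (lambda hole: ('+', c(hole), r)), redex
    c, redex = decompose(r)                         # C ::= (v C)
    return (lambda hole: ('+', l, c(hole))), redex

def step(t):
    """One step, C[t] -> C[t'], with n1 + n2 -> n as the basic rule."""
    d = decompose(t)
    if d is None:
        return None                                 # t is a value
    plug, (_, a, b) = d
    return plug(a + b)

def evaluate(t):
    while (t2 := step(t)) is not None:
        t = t2
    return t
```

The interpreter mirrors the formal presentation almost line for line, which is exactly the appeal of the context-based style.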

By the way, I found this article through another paper, Operational Semantics for Scheme. That paper (I haven't attempted reading it yet) has further discussion of Felleisen's work.