## XPython

Somebody else wants to embed XML fragments into their program code.

Some of what's proposed here really falls under PEP 318 for method decorators, I think.

### Hmm...

enter/leave lets you run code in a "tree shape" but doesn't let you build up a value when you walk the tree. So you need to use side effects to create any result values, which is a giant warning sign to me. My instinctive reaction is to try and model the whole thing using higher-order functions (which shouldn't be too hard), and use the types to regularize the design and verify that all the necessary intro/elim forms are present.

### Control vs. accumulation

I've been warming to this separation of control-flow from value accumulation while programming in Common Lisp. Having orthogonal constructs for visiting the parts of a data structure and for building up your return value can actually be kinda neat.

To illustrate the idea, here's a simple function that takes a list of integers and returns two values: a list of the even elements and a list of the odd elements. The iteration is done by dolist, which is like a for-each with syntactic sugar. The value accumulation is done separately by a nifty macro called collect:

    (defun evens-and-odds (numbers)
      (collect ((evens) (odds))
        (dolist (n numbers)
          (if (evenp n) (evens n) (odds n)))
        (values (evens) (odds))))

The first argument to collect is a list of specs for what I want to collect, and the rest is the code that'll do the collecting. In this simple example the specs are just names. That means each name gets bound to a local function that I can call with a single argument to accumulate a value into a list (preserving first-to-last order), or call with no arguments to return the currently accumulated list. A more sophisticated spec could give an initial value and a value-combining function, to behave more like a fold.
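
For non-Lispers, the idea translates readily. Here's a rough Python sketch (my own illustration, not the actual collect macro) in which each collector is a closure that appends when called with an argument and returns its accumulated list when called with none:

```python
def collector():
    """Rough analogue of one collect spec: call with a value to
    accumulate it, call with no arguments to read the list so far."""
    acc = []
    def collect(*args):
        if args:
            acc.append(args[0])  # preserves first-to-last order
        else:
            return list(acc)
    return collect

def evens_and_odds(numbers):
    # Control (the loop) and accumulation (the collectors) stay separate.
    evens, odds = collector(), collector()
    for n in numbers:
        (evens if n % 2 == 0 else odds)(n)
    return evens(), odds()
```

Calling `evens_and_odds([1, 2, 3, 4, 5])` gives `([2, 4], [1, 3, 5])`.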

I hated this at first, but I rather like it now.

Of course I would really write this function as:

    (defun evens-and-odds (numbers)
      (values (remove-if-not #'evenp numbers)
              (remove-if-not #'oddp numbers)))


### Control vs. accumulation

In the form above, the side-effects are nicely contained, but presumably there's a closure with a mutable value in it somewhere behind the scenes? Is there a "pure" way to accomplish the same separation?

### Control vs. accumulation

Certainly there's some mutation going on behind the scenes, either to a cons cell or a lexical binding. I'd guess that some clever guy could do the same thing with Monads, and a clever compiler writer could make that compile down to the same code. But since the Lisp code, though not "pure", is so easy to understand I wonder if it matters.

Another "impure" aspect of Lisp is modifying data structures in place, e.g. the list operations SORT and NCONC can trash their inputs while producing their output. I don't like these functions and their common usage one bit. They introduce non-local issues: "am I allowed to modify my arguments?" and "what if someone modifies the value I return?" I think this makes Lisp a low-level language by Alan Perlis's definition.
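
Python, for comparison, draws the same line but makes it visible in the API: `sorted` returns a fresh list while `list.sort` mutates its receiver, so the caller always knows which contract is in play:

```python
xs = [3, 1, 2]
ys = sorted(xs)  # non-destructive: returns a new list, xs is untouched
xs.sort()        # destructive: reorders xs in place, returns None
```

Both `xs` and `ys` end up as `[1, 2, 3]`, but only `xs` was modified.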

My thought for the day is that the nice thing about "purity" is keeping things locally understandable, and that anything else that does this is interesting too.

> I'd guess that some clever guy could do the same thing with Monads

Great, now anyone who responds to this will be considered immodest. Oh well. Here's a monad-like pure functional solution. I say monad-like, because I haven't verified that it conforms to the monad laws, but it works, it separates out the accumulation and it's purely functional. For a more complete example, see Oleg's Monadic Programming in Scheme. The example below isn't as full-fledged as that one. Also, it doesn't implement general collect-style functionality -- that could be done, of course, but would be a bit more work.

I worked it out top down, starting with what I wanted to see at the top level, using a fold (not essential, any old loop would do), with the accumulating parameter being a monad-like value m, i.e. a procedure which accumulates the desired state:

    (require (lib "1.ss" "srfi")) ; for fold (PLT Scheme)

    ; data structure for result type: ((even ...) (odd ...))
    (define (make-evens+odds evens odds) (list evens odds))
    (define get-evens car)
    (define get-odds cadr)

    (define (even-and-odds numbers)
      ((fold (lambda (n m)
               (bind m
                     (if (even? n)
                         (evens n)
                         (odds n))))
             (lambda (x) x)
             numbers)
       (make-evens+odds '() '())))

The fold expression is quite similar, overall, to the CL version, except that the definition of evens and odds hasn't been given. Since I'm not trying to develop a generic replacement for collect, I'll just hardcode those functions. That's easy - all they have to do is add the specified value to the evens or odds list, respectively. We just have to make sure they return procedures which take an evens+odds structure. Wrapping the result in a procedure is part of the behind-the-scenes threading of the evens+odds structure through the computation, which has been extracted and hidden from the procedure above.

    (define (evens val)
      (lambda (evens+odds)
        (make-evens+odds (cons val (get-evens evens+odds))
                         (get-odds evens+odds))))

    (define (odds val)
      (lambda (evens+odds)
        (make-evens+odds (get-evens evens+odds)
                         (cons val (get-odds evens+odds)))))

Now we need the bind operator, which will grow the monad-like value m, using a specified function f:

    (define (bind m f)
      (lambda (x)
        (m (f x))))

This is quite general - it returns a procedure that applies the specified function f to a value x (in this example, an evens+odds structure), then applies the monad to the result, returning a "bigger" monad. This is commonly done the other way around - (f (m x)), essentially - but this way we compensate neatly for the reversal of the list that happens in the fold.

That's it - now it can be tested:

    (even-and-odds '(7 2 3 8 9 10 12 14 13 17 6 5))
    => ((2 8 10 12 14 6) (7 3 9 13 17 5))

Voila, who said monads were difficult? (Especially stripped-down monads.) It's implemented from scratch, no dependence on complicated library routines, other than fold (which could of course be replaced by a recursive function).
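
For comparison, the same construction translates almost mechanically into Python (my own translation, with functools.reduce standing in for fold and plain tuples for the evens+odds structure):

```python
from functools import reduce

# Result structure: a pair (evens, odds) of tuples.
def evens(val):
    return lambda eo: ((val,) + eo[0], eo[1])

def odds(val):
    return lambda eo: (eo[0], (val,) + eo[1])

def bind(m, f):
    # Compose so that f runs first, then the accumulated m -- the
    # (m (f x)) order compensates for the reversal, as in the Scheme.
    return lambda x: m(f(x))

def even_and_odds(numbers):
    m = reduce(lambda m, n: bind(m, evens(n) if n % 2 == 0 else odds(n)),
               numbers,
               lambda x: x)      # identity as the initial monad-like value
    return m(((), ()))

even_and_odds([7, 2, 3, 8, 9, 10, 12, 14, 13, 17, 6, 5])
# ((2, 8, 10, 12, 14, 6), (7, 3, 9, 13, 17, 5))
```

As in the Scheme version, each element's function prepends, but the composition order makes the results come out in first-to-last order despite the left fold.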

> Of course I would really write this function as:
>
>     (defun evens-and-odds (numbers)
>       (values (remove-if-not #'evenp numbers)
>               (remove-if-not #'oddp numbers)))

Of course I would really write this function as:
    (define (evens-and-odds numbers)
      (partition even? numbers))

:oP

### P.S. I agree with this:


> My thought for the day is that the nice thing about "purity" is keeping things locally understandable, and that anything else that does this is interesting too.

Side effects that don't affect the ability to reason about the program outside of the place where they're used are fine.

### The IO Monad as roach motel

> Side effects that don't affect the ability to reason about the program outside of the place where they're used are fine.

I wonder whether the difference between "harmful" and "harmless" side-effects could be said to correspond to the difference in Haskell between one-way and two-way monads (that is, monads you can "escape" from). The work done in a two-way monad could presumably be done by impure methods without contaminating the rest of the program, whereas the work done in a one-way monad could not be done by impure methods without "destroying the functional properties of the non-monadic portions of the program" (Jeff Newbern, All about Monads).

Actually I'm not convinced that's always true - surely you could scatter printfs liberally around a "pure" program without destroying referential transparency (unless you consider the state of the console to be part of the return value)?

### Actually I'm not convinced th

> Actually I'm not convinced that's always true - surely you could scatter printfs liberally around a "pure" program without destroying referential transparency (unless you consider the state of the console to be part of the return value)?

Think about common-subexpression elimination -- if you have code like:

    let val x = printf "foo"
        val y = printf "foo"
        val z = printf "foo"
    in
        (x, y, z)
    end


And the compiler, thinking that CSE is legal, "optimizes" it to:

    let val x = printf "foo"
        val y = x
        val z = x
    in
        (x, y, z)
    end


then the meaning of your program changes, all because the compiler assumed purity when it wasn't so. So just scattering printfs around a pure program won't necessarily give the results you expect -- you might not even be getting the output you thought you were, which is a problem for debugging. :)

### CSE

I admit I hadn't thought of that (being generally ignorant of compiler techniques, I didn't know it was there to be thought of). The case I was thinking of, though, was one in which a "pure" function obtains a value from an "impure" function, where the impurity of the impure function doesn't affect the purity of the pure function.

In the case of two-way monads, the fact that you can recover the value from the monad implies that there isn't any left-over threaded state for the caller to worry about - in other words, the side-effects simulated by (or threaded through) the monad are local (and hence would not be contaminating even if they were real).

In the case of one-way monads, the fact that you can't recover the value from the monad implies that there may be non-local side-effects - so the caller is obliged to "take up the thread", so to speak.

If you want to write output to the console in Haskell you have to enter the IO monad, which is a one-way monad. The question is whether output written to the console is really a side-effect from the point of view of the rest of the program, or whether it can be thought of as an "out of band" side-effect (for the sake of argument, let STDOUT be piped to /dev/null: whatever goes in there no longer matters).

It should be possible to indicate to a compiler that a section of code contains side-effects, so that CSE is not legal there, but that the side-effects are localized so that other code that calls it remains pure and can legally be optimized in this way.

### It should be possible to indi

> It should be possible to indicate to a compiler that a section of code contains side-effects, so that CSE is not legal there, but that the side-effects are localized so that other code that calls it remains pure and can legally be optimized in this way.

Two things:

1. Side-effects always introduce ordering, but a "pure" language may evaluate expressions in any order (especially the lazy ones).

2. How can we distinguish "harmless" from "harmful" side-effects? Someone could pipe stdout to a shell-like process, in which case writing "print_line ('rm ' ++ file_name)" wouldn't be so "harmless".

-----
Hidden side-effects are evil, support separation of Church and state ;-)

### Harmless and harmful

I meant "harmless" and "harmful" in terms of whether they imperil the assumptions one makes about referential transparency elsewhere in the program.

The point about local side-effects is that they only introduce ordering within a limited scope. Say we implement a function sum, which sums the elements of a list, using a looping construct and an accumulator rather than a fold. Within the scope of the loop there are side-effects - the value of the accumulator is mutated - and ordering is introduced. But the value of sum [1, 2, 3, 4, 5] is always the same (the accumulator is reset each time we enter the loop), and "pure" code that uses sum retains all the properties it would have had if sum were implemented using a fold, including order-independence.
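
In Python, that sum might look like this (a throwaway sketch of the point, not anyone's proposed implementation):

```python
def sum_list(xs):
    total = 0        # fresh accumulator on every call
    for x in xs:
        total += x   # local side-effect: the mutation never escapes
    return total
```

sum_list([1, 2, 3, 4, 5]) is always 15, however often and in whatever order it's evaluated; callers can't observe the internal mutation, so they may treat sum_list as pure.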

I'm not wholly confident that there aren't subtle reasons why I'm wrong about this ("all the properties" sounds like an invitation to point out a property that is violated even if side-effects are contained), but I can't think what those reasons might be.

### Local side-effects are easier to manage

We can manage local side-effects by simulating imperative code (e.g. state monads) or with the help of the type system (e.g. unique types).

There is a lot of interesting research related to this in the areas of "type and effect systems", "region inference" and "effect masking". Some papers aren't about local side-effects, but they describe techniques that (IIUC) could be used to solve this problem.

### More on "decorators" / PEP 318

Thoughts on PEP 318

To me this just looks like a more elegant syntax for doing what I do already with metaclasses. On the other hand, what I do with metaclasses is often grungy and evil, so cosmetic improvement is always welcome.

I can see this getting out of hand (and I'm not the only one). As with anything which enables you to extend the "core" language with customized magic, it does threaten to fragment the design space. The Twisted and PEAK frameworks already have their own quite distinct varieties of metaprogramming-based plumbing, and it's startlingly easy to cook up your own.