Updating immutable data structures & hybrids from functional to imperative

The following are all midpoints between functional and imperative. What do we lose at each point? At what point does what is lost weigh more heavily than what is gained? Are there other useful points in the value type / reference type spectrum?

Point 1:

In functional code we use immutable data structures. Instead of updating an existing value, we create a new value with the desired updates.

Functional languages often have immutable records and syntax for updating them. For example:

let p = {name="Joe", age=27}
let q = p with {age=28}

With nested records this is not so pretty:

let p = {name="Joe", age=27, address={country="France", street="Rue de la Gare", number=43}}
let q = p with {address = p.address with {number=23}}
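
For comparison, here is roughly the same nested update in plain Haskell record syntax (Person and Address are hypothetical types mirroring the example above):

data Address = Address { country :: String, street :: String, number :: Int }
data Person  = Person  { name :: String, age :: Int, address :: Address }

-- Read the inner record out, update it, and write it back explicitly.
moveJoe :: Person -> Person
moveJoe p = p { address = (address p) { number = 23 } }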

This pattern is very common when working with functional data structures. Clojure provides functions such as update-in and assoc-in to handle it. Often we no longer need the old copy of the data structure in the scope we're in, so we can introduce syntactic sugar for record update:

let p.age = 28

-- is sugar for: --

let p = p with {age=28}

This makes use of variable shadowing, and it makes nested record update nice:

let p.address.number = 23

Note that other copies of the record are not affected. This has the same semantics as pass-by-value structs in C/C++.

We can even drop the let in front of variable bindings:

p.address.number = 23

We can also extend this to custom data types (maps, arrays, sets, domain-specific objects, etc.), like Common Lisp's places (but functional instead of imperative).
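
The same "build a new value instead of updating the old one" idea is already how the standard container libraries work. A small Haskell sketch using Data.Map (the names are made up for illustration):

import qualified Data.Map as Map

ages :: Map.Map String Int
ages = Map.fromList [("Joe", 27), ("Ann", 31)]

-- ages is left untouched; ages' is a new map that shares most of its structure.
ages' :: Map.Map String Int
ages' = Map.adjust (+ 1) "Joe" ages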

Point 2:

Now on another track. If you want to translate this to a functional language:

if <something>:
  x = <calculation of x if something>
  y = <calculation of y if something>
else:
  x = <calculation of x if not something>
  y = <calculation of y if not something>

Then you have to construct a pair:

(x,y) = if <something>
        then (<calculation of x if something>,
              <calculation of y if something>)
        else (<calculation of x if not something>,
              <calculation of y if not something>)

This style can be less clear and perhaps less efficient. We can adopt dominance-based scoping: if a block dominates another block, then the variables defined in the first block are visible in the second. So we can do:

if <something>
  let x = 2
else
  let x = 3

foo x

And the x will be visible outside the if.

Point 3:

We can extend this principle to loops. Loops in functional languages are often written in recursive style:

let xs = <some list of numbers>
let iter ys n =
  if ys == []
  then n
  else iter (tail ys) (n + head ys)
let sum = iter xs 0

This is reminiscent of programming with gotos.

We can do structured programming by letting variables shadow each other across loop iterations:

let xs = <some list of numbers>
let sum = 0
for x in xs:
  let sum = sum + x
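
For comparison, in a functional language today this accumulation would usually be written with a fold rather than with explicit recursion or shadowing. A minimal Haskell sketch:

import Data.List (foldl')

-- Sum a list with a strict left fold; no shadowed or mutable accumulator needed.
sumList :: [Int] -> Int
sumList = foldl' (+) 0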

Point 4:

However, some loops are written with higher order functions:

let xs = <some list of numbers>
let sum = 0
foreach xs (fun x -> let sum = sum + x)

It's unclear what the semantics of shadowing inside the lambda should be. Consider:

let ref x =
  let val = x
  ((fun () -> val), (fun y -> let val = y))

If we allow this, then we are no longer merely shadowing previous variables: we can implement full stateful references with it.
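
To make that concrete: the (getter, setter) pair above is exactly what a mutable reference provides. A hedged Haskell sketch using IORef (mkRef is a made-up name):

import Data.IORef

-- A pair of closures over a hidden mutable cell: an action that reads it
-- and an action that overwrites it.
mkRef :: a -> IO (IO a, a -> IO ())
mkRef x = do
  r <- newIORef x
  return (readIORef r, writeIORef r)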

Lenses, ST, Applicative

I don't like any of these options that depend on shadowing of names for clever syntactic tricks. They seem to obfuscate intent, and I doubt they abstract or compose nicely (as you found in Point 4).

Better points of comparison, IMO, are Lenses, ST monad, and Applicative functors.

Lenses allow "deep" updates on a structured value, such as a record. An excellent Haskell package for lenses is fclabels. Basically, your first example could be expressed as: `set (number . address) 23 p`. Lenses are very compositional.
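
To sketch the idea without committing to the fclabels API (the names below are illustrative, not the library's): a lens pairs a getter with a setter, and two lenses compose into a lens for the nested field.

-- A minimal lens: how to read a part out of a whole, and how to put a new part back.
data Lens s a = Lens { view :: s -> a, update :: a -> s -> s }

-- Composing an outer lens with an inner lens focuses on the nested field.
(|>) :: Lens s b -> Lens b a -> Lens s a
outer |> inner = Lens
  { view   = view inner . view outer
  , update = \a s -> update outer (update inner a (view outer s)) s
  }

-- e.g. update (addressL |> numberL) 23 p, given lenses addressL and numberL
-- for the record fields in the original example.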

ST enables local state references and mutable arrays to be used within a pure function in a single-threaded, deterministic manner. If you want a while loop, or to express various graph processing algorithms developed for imperative code, ST allows it.
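
For instance, the sum-over-a-list loop from the post can be written with a local mutable accumulator that never escapes the pure function. A small sketch:

import Control.Monad.ST
import Data.STRef

-- The accumulator is allocated, updated and read entirely inside runST,
-- so sumST is still a pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef acc (+ x)) xs
  readSTRef acc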

The Applicative functor model is valuable for expressing staged computation, with a clear separation between effect and calculation phases. This allows mixing functional and imperative code like a layer cake, as opposed to weaving them together as a Monad would.

Hmm, I think you're right.

Hmm, I think you're right. Lenses seem like a particularly nice solution for these nested updates. I didn't know that they could be used for that with such elegance! Do you have an example of what you allude to in your last sentence?

Applicatives

Applicatives are not something I appreciated or understood until I started using them. It's a bit hard to provide code examples, because their benefit isn't what they let you do - it's what they prevent you from doing. Basically, you need to structure your program like:

foo(bar(baz(),qux()),frob(etc()))

In Haskell, you'd encode this as (assuming baz, qux, and etc are effectful computations in some `MyApplicative`, and foo, bar, and frob are the pure functions that combine their results):

foo <$> (bar <$> baz <*> qux) <*> (frob <$> etc)

Each of the parameters and functions can be effectful, like method calls in most imperative languages. The difference is that data flow is in one direction: down. There are no ad-hoc loops, like you can achieve with a Monad. (You can have loops, but you must provide them explicitly in the Applicative language, so they're not ad-hoc.) If you want arguments from effectful operations in a lower layer, you'll essentially need to build a new Applicative to run - i.e. staged programming.

As an example of where I found applicatives useful: I'm designing a simple but flexible framework in Haskell for configurations, dependency injection, and precise resource construction (so I can provide more rules than I use, and don't build anything that I don't need). My earlier design used Monads. But I decided that I want static analysis features - i.e. so I won't start building and discover when half-finished that I'm missing a dependency and need to abort. A Monad cannot achieve this because it is too expressive - I can't know which resources a computation is going to ask for without actually running it. Applicative, OTOH, does allow this analysis, resulting in a much more declarative build model.
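
As a small illustration of the kind of static analysis Applicative permits (this is not the framework described above, just a sketch with made-up names): interpret the effect as merely accumulating the names of requested resources, and the full dependency list falls out without running anything.

import Data.Functor.Const (Const (..))

-- A hypothetical primitive that "requests" a resource by name. Under the
-- Const interpretation it only records the name; Const m is an Applicative
-- whenever m is a Monoid, so whole expressions can be analysed this way.
request :: String -> Const [String] a
request key = Const [key]

-- deps ((,) <$> request "db.host" <*> request "db.port")
--   == ["db.host", "db.port"]
deps :: Const [String] a -> [String]
deps = getConst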

ST theory

Do you happen to know the name of the theory/math behind the ST monad you linked to? I have been toying with a very similar idea and calling it "state-sealing" in my head (since state exists, but it is "sealed away" from the rest of the system). I really want to understand the formal semantics of such a thing.

I am coming at it from the opposite direction - trying to add as much functional programming as possible to an imperative language. There, it feels valuable to me as a re-capture mechanism. That is, it is trivially easy for the imperative code to call into the functional code, and this essentially creates a means of doing the opposite.

Lazy Functional State Threads

See Lazy Functional State Threads by John Launchbury and Simon Peyton Jones.

You can also read about ST on hackage.

What's the purpose of this?

What's the purpose of this? To make an imperative language out of a functional one? It looks like you are forced to code in a functional language but you still prefer the imperative style. I think this is quite common, and the possible reasons are:
1) the historical dominance of the imperative style
2) the weakness of modern functional languages when it comes to practical tasks

Have a look at Aha! (see the other thread); it may give you better solutions for what you are trying to achieve.