Lambda the Ultimate

CLR Memory Model
started 5/28/2003; 1:21:00 PM - last post 6/2/2003; 10:13:02 AM
Ehud Lamm - CLR Memory Model
5/28/2003; 1:21:00 PM (reads: 2049, responses: 7)
CLR Memory Model
So what is a memory model? It's the abstraction that makes the reality of today's exotic hardware comprehensible to software developers.

Chris Brumme has a very informative discussion of the issues involved in designing a memory model for a programming language.

Many programmers take the memory model for granted, and often don't realize that this abstraction layer even exists. They are certain that the hardware "really" behaves the way their programs expect it to.

This is something we should really educate people about. Hopefully, as more languages are built on standard VMs, this concept will become better known.

Another lesson to take home from Chris's post is about the complexity of modern CPUs. Even if you know how a specific CPU works, it is quite hard to predict how programs will behave, even before you introduce complicating factors like garbage collection, virtual memory and multiprogramming operating systems.

Posted to implementation by Ehud Lamm on 5/28/03; 1:27:58 PM

Luke Gorrie - Re: CLR Memory Model
5/28/2003; 5:30:45 PM (reads: 847, responses: 0)
The mind boggles!

Daniel Yokomiso - Re: CLR Memory Model
5/29/2003; 11:49:46 AM (reads: 741, responses: 0)
AFAICS the CLR memory model (CLRMM) is as flawed as the Java memory model (JMM). According to the blog entry, the same kinds of errors are possible in the CLR. I find this disturbing, because when the CLRMM was being designed these flaws in the JMM were already known. Most programmers praise or complain about cosmetic differences between the CLR and the JVM, always invoking the productivity or performance gods, but this kind of problem can hardly be fixed after the initial release. Following the interest list is very interesting, because every other thread bumps into a hidden backwards-compatibility issue (e.g. wait/notify semantics), and they're just scratching the surface of the Java language. People are doing harder work trying to correct Java's flaws. Or am I totally mistaken? Does anyone know what, if any, are the differences between the JMM and the CLRMM?

Daniel Yokomiso.

"State is hell. You need to design systems under the assumption that state is hell. Everything that can be stateless should be stateless." - Ken Arnold

Dan - Re: CLR Memory Model
5/30/2003; 1:34:41 PM (reads: 680, responses: 2)
Ah, I remember reading this one. I was tempted to comment along the lines of "Anyone who expects predictable low-level behavior from a high-level language is a fool" but that seemed a bit incendiary. Still, expecting to do something like double-checked locking in anything higher-level than assembly (and maybe, just maybe, C) is just asking for trouble. *Especially* in the presence of multiple processors.

That is, after all, what locking primitives are for. If your code is so performance critical that you don't even want to take the time to do that, what the heck are you doing writing the code in a language designed to target a VM, rather than the real M?
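For readers who haven't seen it, the idiom Dan is warning against looks roughly like this. This is a hedged sketch (class and field names are mine, not from the thread): under a weak memory model, the write publishing the reference may become visible to another thread before the writes made inside the constructor, so a reader can see a non-null but half-built object.

```java
// Sketch of the (broken) double-checked locking idiom.
// The bug: `instance` is a plain field, so the assignment inside the
// synchronized block may be reordered with the Helper constructor's
// writes from another thread's point of view.
public class DclExample {
    static class Helper {
        final int value;
        Helper() { value = 42; }
    }

    private static Helper instance; // not volatile: that's the flaw

    static Helper getInstance() {
        if (instance == null) {               // first, unsynchronized check
            synchronized (DclExample.class) {
                if (instance == null) {       // second, synchronized check
                    instance = new Helper();  // may be published before fully built
                }
            }
        }
        return instance;
    }
}
```

Single-threaded the code behaves as expected; the hazard only appears when a second thread races through the first, unsynchronized check.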

Dan Shappir - Re: CLR Memory Model
5/30/2003; 2:26:27 PM (reads: 694, responses: 0)
You are correct, with one caveat IMHO: high-performance server applications. People want to use VMs to write these kinds of applications - the development benefits are simply too significant to ignore. OTOH people want their servers to be highly scalable. MS is (also) competing with other software platforms running on the same hardware as its own platform, and MS often loses because of real or perceived scalability issues. If MS could demonstrate noticeable performance gains without forcing developers to jump through too many hoops, it could be a real win for it.

Ehud Lamm - Re: CLR Memory Model
5/30/2003; 2:33:10 PM (reads: 705, responses: 0)
I understand your point of view. The thing to remember, I think, is that some language definitions specify a memory model that conforming implementations must follow. Such a memory model provides guarantees programmers may take advantage of.

Whether language designs should provide this type of abstraction is a good question. Perhaps it is better to make clear what is guaranteed by the language and what isn't, rather than just hope programmers won't make unwarranted assumptions?
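In Java, for instance, the spelled-out guarantee is that a write to a volatile field happens-before a later read of that field by another thread, while plain fields carry no cross-thread ordering guarantee at all. A hedged sketch (names are mine) of what programmers may and may not rely on:

```java
// What the language guarantees vs. what it doesn't:
// `ready` is volatile, so a thread that sees ready == true is also
// guaranteed to see every write the publisher made before setting it.
// `payload` alone would carry no such visibility guarantee.
public class Publication {
    private int payload;            // plain field: no guarantee by itself
    private volatile boolean ready; // volatile: publishes prior writes

    void publish() {
        payload = 123;  // (1) plain write
        ready = true;   // (2) volatile write: makes (1) visible with it
    }

    // Returns the payload once published, -1 otherwise.
    int tryRead() {
        if (ready) {         // volatile read
            return payload;  // guaranteed to observe 123
        }
        return -1;
    }
}
```

Drop the volatile modifier and the reader may see ready == true with a stale payload - exactly the kind of unwarranted assumption Ehud is talking about.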

Daniel Yukio Yokomiso - Re: CLR Memory Model
6/2/2003; 7:03:59 AM (reads: 624, responses: 1)
FWIW the double-checked locking (DCL) idiom isn't my primary concern. I'm worried about non-atomic object construction. It's a gray area in Java, because you can have code accessing partially constructed objects:

public class Broken {
    final int n;

    Broken() throws Exception {
        new Thread(new Runnable() {
            public void run() {
                try {
                    do {
                        System.out.println("First time: " + n);
                        System.out.println("Second time: " + n);
                    } while (n != 1000);
                } catch (Exception ignored) {
                }
            }
        }).start();
        Thread.sleep(100); // give the thread a chance to see the blank final
        this.n = 1000;
    }

    public static void main(String[] args) throws Exception {
        new Broken();
    }
}
The main problem in DCL is when you have one thread creating the object and another getting a non-null reference to a partially constructed instance. I once caught this kind of bug in a project - very nasty. It's similar to the non-atomicity of double and long assignments. Most programmers believe they don't need locking for those, because the JLS guarantees atomicity for int & friends. If the CLRMM is similar, they're just giving developers more rope to hang themselves with. IIRC some languages (Ada95?) give you guarantees of atomicity for object creation. Perhaps more symmetry would be better (e.g. atomicity for all primitives or none, the possibility of adding the synchronized keyword to constructors, etc.)?
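The long/double caveat Daniel mentions: the JLS permits a non-volatile 64-bit write to be performed as two separate 32-bit halves, so a concurrent reader may observe a "torn" value that no thread ever wrote. Declaring the field volatile, or using an atomic class, restores atomicity. A hedged sketch (names are mine):

```java
import java.util.concurrent.atomic.AtomicLong;

// Non-atomic long/double assignment: `plain` may tear under the JLS;
// `safe` (volatile) and `counter` (AtomicLong) are read and written
// atomically even on 32-bit hardware.
public class TornLong {
    long plain;                                  // 64-bit write may tear
    volatile long safe;                          // volatile 64-bit access is atomic
    final AtomicLong counter = new AtomicLong(); // atomic read-modify-write too

    void writeAll(long v) {
        plain = v;      // tearing permitted for concurrent readers
        safe = v;       // atomic
        counter.set(v); // atomic
    }
}
```

Single-threaded you will never see the tear, which is precisely why this class of bug survives testing.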

Dan - Re: CLR Memory Model
6/2/2003; 10:13:02 AM (reads: 665, responses: 0)
DCL is really the only thing I *am* worried about, since nearly all the other things one might want ordering for are either dead-wrong (and horribly dangerous in the face of an SMP system) or so fragile as to break if any maintenance happens on the offending source file within a half-dozen sectors on the disk. Heck, on an x86 system you can't even guarantee atomic access to an arbitrary longword in an SMP system, since unaligned access is legal (though not atomic).

Not using real locks, as provided by your system, around access to shared data is only sane if you're trying to roll your own locking system, and it's been my experience that, to a good first approximation, exactly one person per OS/CPU combination should be doing this. There are so many subtle problems that crop up, even in a uniprocessor system, that are insanely difficult to track down that it just isn't worth it. You get weird word-shear write problems, overlaid structs and objects, double-created objects/structs (which often leak, though that at least doesn't happen with a tracing GC system), race issues, double or half data alterations... it's just a mess.
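The boring-but-correct alternative Dan is advocating looks like this in Java - guard every access to the shared state with the platform's own locking primitive rather than a homegrown scheme (a minimal sketch; names are mine):

```java
// Shared counter guarded by the platform's own lock primitive.
// synchronized provides both mutual exclusion and visibility: a thread
// acquiring the lock sees all writes made under it by previous holders.
public class LockedCounter {
    private long count;                    // shared state, only touched under lock
    private final Object lock = new Object();

    void increment() {
        synchronized (lock) {
            count++;
        }
    }

    long get() {
        synchronized (lock) {
            return count;
        }
    }
}
```

With N threads each calling increment() M times and then joined, get() returns exactly N*M - no word shear, no half-written values, none of the mess described above.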

Guaranteed ordering of memory access doesn't help any of that, and providing a guarantee of memory access ordering actually makes it *worse*, since it gives an illusion of correctness with the occasional heisenbug showing up to puzzle the unwary.

I swear, everyone who writes threaded code should be forced to develop the stuff on quad-processor Alpha systems, possibly the most vicious multithreading environment around. (Well, except maybe for 6 and 8 processor Alpha systems) If the stuff runs right there, it'll run right anywhere.