Is null needed?

In the OO languages I'm most familiar with, there's always been the ability to set a variable to null or nil, which of course can lead to all sorts of fun problems. What I'm wondering is whether there's an actual need for null/nil or not. Does anyone know of any research about this? I have a sense that null is the kind of thing that'd end up being invented by programmers if it wasn't already in the language to begin with, which makes me wonder if it has been shown to be essential somehow, in the same way that number systems seem to need zero.

(I suspect I may be way out of my depth in even asking this question... but what the heck :))


Yes, but not everywhere

Null is needed in some situations, but in a statically typed language it doesn't have to be pervasively available, the way it is in some languages [edit: like Java, C#, C, C++].

In languages with mathematically-inspired static type systems, the need for null is addressed by types such as Maybe (in Haskell) and the option type (in ML). So if you want nullability in these languages, you have to specify it explicitly, and null (or its equivalent) can only occur in those places.

An enormous advantage of this is that it allows the type system to statically enforce & check correct usage of nullable values, essentially eliminating "null pointer exception" errors or the equivalent.
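
For instance, here's a minimal Haskell sketch of what that enforcement looks like in practice (the function names and map contents are invented for illustration):

import qualified Data.Map as Map

-- The type says it all: the lookup may fail, so the result is Maybe Int.
lookupAge :: String -> Map.Map String Int -> Maybe Int
lookupAge = Map.lookup

greet :: String -> Map.Map String Int -> String
greet name ages =
  case lookupAge name ages of
    Nothing  -> name ++ ": age unknown"   -- the "null" case must be handled
    Just age -> name ++ " is " ++ show age

The typechecker rejects any attempt to use the result of lookupAge directly as an Int, so forgetting the Nothing case is a compile-time error rather than a runtime crash.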

In dynamically typed languages, the situation is a little different because typically, any value can be any type, so null just comes along for the ride. It would be possible to require the use of some sort of nullable reference type instead of allowing nulls anywhere, but that would probably go against the grain in most dynamic languages.

As for whether null has been shown to be essential, this is a pretty simple issue of representation that probably doesn't require much deep thought. Sometimes you want to represent that something isn't there, and so you need a way to do that. (Although I don't doubt that this could be complicated with say a categorical explanation...)

essentially eliminating "null pointer exception" errors

When I transitioned from Common Lisp to SML several years ago, it was quite annoying to get exceptions thrown when accessing empty lists. These exceptions occurred at run time, which greatly deflated my expectations about static typing ensuring things "can't go wrong."

Compare this description of (car x) from the CLHS [1]:

If x is a cons, car returns the car of that cons. If x is nil, car returns nil.

with this description of (hd x) from the SML Basis Library [2]:

hd l returns the first element of l. It raises Empty if l is nil.

Making code type safe by raising exceptions at run time didn't seem like a big step forward. In fact, it seemed a lot like a "null pointer exception". Of course, I soon learned that if I had used destructuring bind, I would be more likely to match against all the cases, including an empty list, and that the compiler could check that I did. Functions like hd are an unfortunate wart. (I'll resist mentioning the annoying arithmetic exceptions in the same CL-to-ML transition.)

There's been a great deal of

There's been a great deal of work on type systems that ensure you can't take the head of an empty list. It's just crummy library design though, as Hindley/Milner is all you really need to ensure safety for this.

Coming from CL

I myself came from CL and also was annoyed at the exceptions being thrown by functions in SML. It *is* true that you don't get much from types if you are using partial functions all over the place. This is partially a cultural problem, i.e. not using match when it is more appropriate, but it is also partially a problem with partial functions!

As you mention, it can be seen as a library problem. As a library designer I think it would be best to localise knowledge about what can happen as much as possible. Something like:

hd : 'a list -> 'a

should probably have type

hd : 'a list -> 'a option

Either that, or make more sophisticated use of the type system.
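
In Haskell terms, the safer signature amounts to something like this sketch (the name safeHead is invented; the standard library's equivalent is listToMaybe):

-- Total: the empty-list case shows up in the type, not as an exception.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x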

As far as the work that you mentioned that has been done to use Hindley/Milner to ensure safety, Oleg Kiselyov's lightweight dependent typing page is a good starter for those not familiar with the techniques.

I have an example of safe access for hd/tl in SML here.

Alternatively we can just work in a total language with real dependent types and never have any of these problems! Yay Coq!

Re: Yay Coq!

So how long before everybody caves in and really is just using Coq for everything? Maybe it could be the next Java. I gotta get me on that bandwagon! (I mean that hopefully/seriously.)

Well...

...YNot?

:-)

More details?

There's been a great deal of work on type systems that ensure you can't take the head of an empty list. It's just crummy library design though, as Hindley/Milner is all you really need to ensure safety for this.

Could you provide the details?

The one solution I can think of is to tack a tuple on to the front. So, for example, a non-empty list would have type "(a, [a])". But because "(a, [a])" is not a subtype of "[a]", you can't use regular list functions like "map".
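
A quick Haskell sketch of that encoding (names invented for illustration): head becomes total, but every list function needs its own wrapper.

-- A non-empty list as a pair: the head is always present.
type NonEmpty a = (a, [a])

headNE :: NonEmpty a -> a
headNE (x, _) = x          -- total: there is no empty case

-- But (a, [a]) is not [a], so list functions must be re-wrapped:
mapNE :: (a -> b) -> NonEmpty a -> NonEmpty b
mapNE f (x, xs) = (f x, map f xs)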

the essence of elimination

Of course, I soon learned that if I had used destructuring bind that I would be more likely to match against all the cases, including an empty list, and that the compiler could check that I did.

That's exactly what I meant by "essentially". :)

With an option type, the compiler can at the very least warn you when you aren't handling the null case, and those warnings tend to be meaningful. That's not possible with pervasive nulls.

Expectations

Functions like hd are an unfortunate wart

I agree to an extent. hd is indeed not a total function and one can easily write bugs with it. The same would go for SML's valOf function. Fortunately, you never need to use them --- you can always instead use pattern matching or safer alternatives. Indeed, my experience has been quite different. I don't remember whether I've ever (unexpectedly) gotten an Empty exception. That is because I almost never use functions like hd. Since you mention transitioning from CL to SML, I would conjecture that the reason you got those exceptions (if you actually got them) was a result of trying to write CL in SML --- not a good idea. In SML you use pattern matching and HOFs to manipulate lists --- not functions like hd and tl.

OTOH, I completely disagree that CL's handling of car would be somehow better. While it might sometimes be what you want (especially since nil is false in CL), my experience is that "naturally" partial functions that are artificially augmented to become total can and do hide nasty bugs and will cause problems later. For example, I've worked on a Scheme code base where a return value of #f was used liberally to indicate failures (the Scheme implementation also offered exceptions). While this was undoubtedly convenient in places during initial programming, it was a pain in the ass later (esp. while fixing bugs) to try and make sure that you were actually handling all the cases. I saw errors to the effect of trying to perform some (illegal) operation on #f values more than a couple of times. I would expect similar symptoms --- trying to do something to the nil value --- resulting from CL's definition of car.

(I'll resist mentioning annoying arithmetic exceptions in the same CL to ML transition).

Interesting. My reaction was the opposite. While it was kind of surprising to get an Overflow exception for the first time in SML, after analyzing my code, I realized there was a problem and was quite pleased that the exception pointed me to the problem.

Note that the SML Basis library offers several arithmetic types: arbitrary precision integers (IntInf), fixed precision (with overflow detection) integers (e.g. FixedInt), as well as unsigned integers with modular arithmetic (e.g. Word). I very much enjoy having all of them.

Having now programmed in SML for some time, I really don't miss signed modular arithmetic (as in most languages). I've never gotten an arithmetic exception for no reason --- I don't think I've gotten one more than a couple of times, and I don't recall unexpectedly getting an arithmetic exception in the last 2-3 years of programming in SML on a daily basis --- and when I need arbitrary precision or modular arithmetic, I just use the appropriate type.
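
(For comparison, Haskell offers a similar spread of types; a small sketch, assuming GHC and Data.Word:)

import Data.Word (Word32)

big :: Integer           -- arbitrary precision: never overflows
big = 2 ^ 100

wrapped :: Word32        -- unsigned, modular arithmetic: wraps around
wrapped = maxBound + 1   -- evaluates to 0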

Constructors

The design of a Java/C#/C++ like language with completely sound option or nullable types is tricky due to the way constructors work in those languages. I dug around a bit and found a comment about this which in turn leads to the paper Declaring and Checking Non-null Types in an Object-Oriented Language.

my guess is "it depends"

So the first thing to ask is, what is null/nil used for? Then the second thing to ask is, how can we do what it is used for with something else? Then another question to ask is, does it end up making a difference worth a hoot?

Unfortunately over time nil/null have been used for many different things in many different contexts. Take Lisp, for example. So answering the first question requires really narrowing down what the context is.

With respect to the second and third questions above, some languages let you require some things to not be null. You can even get that in Java if you use something like ESC/Java and annotations. Also, several languages give you "Option" or "Maybe" types rather than having to use nil/null.

Null as typically represented is not needed

I second Anton's take on null/nil, but I want to emphasize that he is not talking about null as it is commonly represented in popular languages (like Java, C#, C, C++). That kind of null value is dangerous and unsafe. There are safe ways to represent and use null values, which Anton describes.

Clarifying & rambling

I edited my comment to mention some of the dangerous & unsafe languages by name.

Part of what I was trying to get at was that the kind of null value in those languages is quite similar to the Maybe or option types, except that its use is embedded pervasively into the language, e.g. for every reference or pointer variable, whether it's needed or not.

E.g. you could model Java's references something like the following (in Haskell, without worrying too much about details):

data JavaRef a = Null | Ref a

...which is equivalent to Maybe a = Nothing | Just a.

The pervasiveness of this type makes it impossible for the typechecker to distinguish between situations in which nulls are possible and those where they're not (or shouldn't be) possible, and this inability to distinguish means it can't usefully check for improper use of nulls.

This is also why these languages resemble dynamically typed languages in their eagerness to crash with null pointer exceptions at runtime: the sum type above is like a restricted version of the universal type in dynamically typed languages.
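
A tiny Haskell sketch of the distinction the typechecker can draw once nullability is explicit (function names invented for illustration):

f :: Int -> Int          -- no "null" can ever arrive here
f x = x + 1

g :: Maybe Int -> Int    -- nullability is visible, and must be handled
g Nothing  = 0
g (Just x) = f x         -- after the match, x is an ordinary Int again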

Pervasiveness of Maybe type?

Can you elaborate on the pervasiveness of the Maybe type? I think that by pattern matching, you can extract the value of the reference and handle the Null (or Nothing) case. After that, you can forget the Maybe "capsule" and work with the raw, guaranteed non-null value:

case ref of
  Null  -> workWithNull
  Ref a -> workWithNotNull a

The Maybe/Option type

The Maybe/Option type compiles down to that very same "raw" code. A nullary constructor (ie. one without parameters) is represented as null, provided it is the only nullary constructor in the type.

in haskell

I think that in Haskell this is usually not the case. You can pass a value to a function but never a reference (ignoring the IO monad and the like). You have the choice to use (Maybe t) when you need to represent a null (or an error) as an alternative to a value of type t. But then, to use the value you are forced to check that a Just was passed. Once you have checked whether a value is Nothing or (Just x), you use x and throw away the Maybe. Of course, if you insist on passing the (Maybe t) to other functions, the type of the parameter of those functions will be (Maybe t) instead of t, and the type checker will follow your specifications.

Of course, you will have problems if you assume that a value of type (Maybe t) will always pattern-match with (Just x). You will get an exception... but you can arrange to get a warning at compile time (at least with GHC). But then, why would you specify a (Maybe t) parameter if you really expect a value of type t? In my experience, such errors are much less common than the NullPointerException, because Maybe is much less used; for example, in the Haskell 98 Prelude only two functions have a Maybe parameter.

I'm not sure what you're

I honestly didn't follow anything you wrote. Just to clarify, we're talking about the machine code generated by the compiler. If you have a type with a single nullary constructor, ie. type 'a option = None | Some of 'a, then the None case is handily represented as a null reference (value == 0), and the Some as a non-null reference (value != 0). You even save some boxing overhead this way too, as the option type needs no runtime representation. Most cases are rarely so simple, but the nullary constructor case alone is still a useful optimization.

Runtime rep of option

That's not quite right. In general, the option type needs a runtime representation: Some has to be boxed. The canonical example is distinguishing between None and Some None.

You can only get rid of boxing when 'a is instantiated with a fully boxed type. But that requires type specialisation, which very few systems do, because you either need whole program or runtime compilation for that.
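
Concretely, a Haskell sketch of the canonical example:

nothingAtAll :: Maybe (Maybe Int)
nothingAtAll = Nothing        -- the outer "null"

justNothing :: Maybe (Maybe Int)
justNothing = Just Nothing    -- the inner "null", wrapped

-- If Just were erased to its payload, both values would be
-- represented as null and become indistinguishable at runtime.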

That's not quite right. In

That's not quite right. In general, the option type needs a runtime representation: Some has to be boxed. The canonical example is distinguishing between None and Some None.

Yes, sometimes Some needs to be boxed, but only if it contains itself as you demonstrate (and even then, you can unbox the "end" of that chain). None never needs boxing that I can see, which was my original point: a solitary nullary constructor for a type can just be represented as null.

Right, nullary constructors

Right, nullary constructors usually are just represented as ints. But unboxing non-nullary ones is only possible if there is only one, and either of the following holds:

1. Its argument type is statically known to be a boxed type (i.e. not polymorphic).

2. You do whole program compilation and the language does not have first-class polymorphism. [Edit: and no polymorphic recursion either]

3. You do runtime compilation.

4. You do runtime type dispatch.

For most implementations that rules out ever unboxing Some.

Good summary! One

Good summary! One question:

You do whole program compilation and the language does not have first-class polymorphism.

Why does first-class polymorphism impose restrictions above and beyond those you listed in criteria #1?

Right, nullary constructors usually are just represented as ints.

That brings to mind another interesting optimization/compression: if the parameter is known to be a boxed type, and the low-order bit is 1, it's a constructor tag (an int); otherwise it's a pointer to the constructed type (which contains the tag). It's something of a generalization of the nullary constructor argument I proposed.

Why does first-class

Why does first-class polymorphism impose restrictions above and beyond those you listed in criteria #1?

It doesn't, those are alternative criteria. If you meet #1 you are always fine. If you don't (e.g. with Some), but meet #2 you are, too. For example, a whole-program compiler like MLton can optimise Some, but it relies on global monomorphisation, which is not possible with first-class polymorphism.

That brings to mind another interesting optimization/compression

I don't understand, aren't you just describing unboxing there?

For example, a whole-program

For example, a whole-program compiler like MLton can optimise Some, but it relies on global monomorphisation, which is not possible with first-class polymorphism.

This last part is what I don't quite understand. You seem to be implying that a language that merely has first-class polymorphism can't be optimized this way under any circumstances, rather than a program written in such a language which uses first-class polymorphism precluding this optimization. I understand the latter, but not the former.

I don't understand, aren't you just describing unboxing there?

Hmm, it's a sort of unboxing of nullary constructors. Given the following ML:

datatype 'a SomeType = Empty1 | Empty2 | Something of 'a | ...

Here's the C code (assuming bit fields respect layout):

#include <limits.h>

typedef enum {
  Empty1,
  Empty2,
  Something,
  ...
} SomeTypeTag;

typedef struct {
  SomeTypeTag tag;
  void *data;
} *SomeTypeBoxed;

typedef union {
  struct {
    /* bit-field widths are in bits, hence the CHAR_BIT factor */
    unsigned tag : sizeof(unsigned) * CHAR_BIT - 1;
    unsigned is_empty : 1;
  } empty_ctors;
  SomeTypeBoxed nonempty_ctors;
} SomeTypeRef;

SomeTypeRef is what's passed around (it's at least pointer sized). Matching on a SomeTypeRef goes as follows:

SomeTypeRef r = ...;
if (r.empty_ctors.is_empty == 1) {
  switch (r.empty_ctors.tag) {
    case Empty1: ...  /* only the empty tags */
    case Empty2: ...
  }
} else {
  switch (r.nonempty_ctors->tag) {
    case Something: ... /* only the non-empty ones */
  }
}

So what I was suggesting was reducing heap allocation by folding the empty constructors into the value passed around using the low-order bit to flag it as an empty constructor. Is this the unboxing you were referring to?

You seem to be implying

You seem to be implying that a language that merely has first-class polymorphism can't be optimized this way under any circumstances, rather than a program written in such a language which uses first-class polymorphism precluding this optimization.

Well, sure, if the compiler finds out that the entire program is written in a subset of the language excluding first-class polymorphism, then it can certainly switch to a different compilation strategy. But then you are really having two different compiler back ends for two different languages.

In principle, I could even envision a mixed approach where the compiler uses some sophisticated flow analysis to find out that local uses of the "sublanguage" are isolated. But I can hardly imagine this being worth the effort.

Is this the unboxing you were referring to?

Yes, except that your "is_empty" bit already is part of the usual representation of integers (which is why ints are typically 31 or 63 bit in most FP systems), because services like GC need this information already. So nothing new has to be done for datatypes.

Polymorphic value representation

Well, sure, if the compiler finds out that the entire program is written in a subset of the language excluding first-class polymorphism, then it can certainly switch to a different compilation strategy. But then you are really having two different compiler back ends for two different languages.

I think the heart of my question, is whether this can be a local optimization for code that does not use FCP, rather than a whole-program optimization. For instance, if the use of an option type does not escape a module, it can be unboxed in the module's representation. See also the optimization I describe below.

Yes, except that your "is_empty" bit already is part of the usual representation of integers (which is why ints are typically 31 or 63 bit in most FP systems), because services like GC need this information already. So nothing new has to be done for datatypes.

Right, so my unstated assumption is that integers are fully unboxed. I assume that polymorphic functions use polymorphic values of the following shape:

typedef union {
  void *ptr;
  int integer;
} polymorphic_t;

Since polymorphic functions never actually access the concrete representation, they only pass the values around to other functions that access the representation, I think this is safe. Only the shape matters to polymorphic functions, since they need a uniform representation.

I also don't see why GC necessitates 31-bit integers. I expect only some GCs require that representation. Am I missing something?

Hmm, I may have just figured it out. GC must be done at safe points, but using the above polymorphic representation with unboxed ints, only monomorphic functions are safe points, since only they have the needed type information. In order to carry this type info to polymorphic functions, we'd need some form of type passing. Something like a bitmask, with each bit representing a boxed/unboxed flag for the corresponding polymorphic parameter. I wonder if this can be generalized, and how efficient it could be...

On track

I may have just figured it out.

Indeed. :-) And as you say, you can solve this with type passing. But even if you are just using some low-level representation types, it can quickly become complicated. In particular, you have to annotate all polymorphic closure environments, and moreover, plain bits are far from enough when you are dealing with parameterised types.

I don't have any pointers ready, but you may want to google for "tagless GC".

(Note: added comment to ancestor post pointing out that polymorphic recursion can create problems similar to first-class polymorphism.)

Tag-free GC

I don't have any pointers ready, but you may want to google for "tagless GC".

I'm familiar with Goldberg's work on tag-free GC, where GC essentially performs runtime type reconstruction. I'm also familiar with the newer "shadow stack" GCs that grew out of this approach. I'm in the process of designing a low-level code generator, with an eye towards implementing a generic GC with it, so I'll be giving these issues more serious thought soon. Thanks for the explanations! :-)

and moreover, plain bits are

and moreover, plain bits are far from enough when you are dealing with parameterised types

I don't understand the problem with parameterized types. I'm assuming the following:

1. polymorphic function arguments that are word-sized or smaller can be unboxed; if they are larger than a word, they are boxed.
2. a bitmask flags which polymorphic arguments are unboxed.
3. boxed arguments have a type representation shared by all instances of that type at a well-known offset.

Do you still think the bitmask is insufficient in this case?

Higher Kinds

Note that I said it's not enough when you have parameterised types. More precisely, with higher-kinded polymorphism. Consider this simple example:

f : forall c : *->*. c int -> c (int*int) -> unit

f1 = f [\a.a] 6 (6,6)
f2 = f [\a.a*a] (6,6) ((6,6),(6,6))
f3 = f [\a.int] 6 6

Whether c "is unboxed" is not merely a flag, it is a predicate over types. Hence you generally have to pass (higher-order) predicate functions at runtime.

I've thought about this, but

I've thought about this, but I'm still not seeing it (probably because I'm not clear on the example). I'll try to interpret your example and hopefully you can point out where I'm going wrong.

'f' is a function parameterized by a type constructor 'c'.
Given a type constructor 'c', 'f' takes arguments of constructed type "c int" and "c (int*int)" and returns unit.

'f1' is a function constructed from 'f' applied to the identity type constructor, so 'f1's final type is

f1: int -> int * int -> unit

'f2' takes a pair constructor for 'c', so 'f2's final type is:

f2: int * int -> (int * int) * (int * int) -> unit

'f3' is a bit puzzling, but since the type constructor is invariant in the return type, I'm guessing that it's always just 'int', so 'f3' is:

f3: int -> int -> unit

If that's correct, I'm still not seeing the problem. At any application of 'f', the compiler should know whether the arguments fit into a word, so it should be able to construct the bitmask for all arguments. 'f1' would have bitmask "01", 'f2' would have "00", and 'f3' would have "11".

Another way I look at it: I'm just proposing to move the bit used in 31-bit integers that distinguishes them from pointers into an external bitmask. Unless ML doesn't have this problem you're trying to point out? Though ML does have type constructors, so I'm still confused...

I didn't realise that you

I didn't realise that you actually suggest tying the flags to the term parameters instead of the type parameters, which would be the canonical thing to do. How do you even intend to deal with simple examples like

f : forall a. a -> t

g [a] (x : a * a) = f (x.2)
h [a] (x : unit -> a) = f (x())

Another way I look at it: I'm just proposing to move the bit used in 31-bit integers that distinguishes them from pointers into an external bitmask.

Since the values in question can be arbitrarily nested inside polymorphic data structures, or even be returned from somewhere, you cannot simply lift it the way you seem to suggest. But maybe I'm misunderstanding your idea.

Regarding my previous example: the uses of type constructor c are not restricted to the visible arguments of f. Here's a quick example:

f [c] (g : forall a. a -> c a) (h : forall a. c a -> a) =
  (h (g 6), h (g (6,6)))

(In ML, you can easily express this example using functors.)

I didn't realise that you

I didn't realise that you actually suggest tying the flags to the term parameters instead of the type parameters, which would be the canonical thing to do.

Hmm, I'm not sure I understand the distinction you're drawing. The way I see it, I'm passing in a compressed type representation (dictionary-passing), one which defines only one operation: gc_collect(). This operation does nothing for the type representation '1', and it traverses the type structure using a fuller type representation for the type rep '0'. A simple breakdown:

/* a description of the layout of an object's fields for GC */
typedef struct _gc_layout {
  unsigned *field_offsets;
  unsigned field_offsets_len;
} *gc_layout;
/* layout of the object with its fields */
typedef struct _obj_layout {
  gc_layout layout;
  char *fields[];   /* one slot per field of the object */
} *obj_layout;
/* type of all polymorphic pointers */
typedef union {
  obj_layout obj;
  int integer;
} polymorphic_t;

So each argument has one bit describing whether it's an obj_layout which has full layout information at the start of the object, or it's just an int. Calling another function consists of some shifting to extract the parameter's 1-bit type rep, and constructing a new bitmask/type rep for the child function. I'm not sure that this would be faster than having this bit in the int and shifting during arithmetic, but it's an idea.

Since the values in question can be arbitrarily nested inside polymorphic data structures, or even be returned from somewhere, you cannot simply lift it the way you seem to suggest.

Perhaps I left implicit that a more complete object layout is available elsewhere (ie. at a fixed offset in the object itself)? The 1-bit is only for local unboxing purposes so that polymorphic functions can once again be safe points.

Not type passing

That is a plausible scheme, but probably not very attractive for an FPL. You are more likely to find something along these lines in an OOPL, where objects are heavier anyway.

It is quite different from type passing, though. With type passing, values don't contain additional type information on their own. Instead, you associate information with every type variable, to provide what is not known statically. Separating runtime type information from individual values has the obvious advantage that you pay nothing in the monomorphic case.

That is a plausible scheme,

That is a plausible scheme, but probably not very attractive for an FPL.

Phew! I thought I was taking crazy pills for awhile there! :-)

Instead, you associate information with every type variable, to provide what is not known statically.

Right, isn't that what I'm doing though? I think the representation I described is only needed for polymorphic function parameters. Other parameters are known fully at compile-time as you say.

That's not it

Right, isn't that what I'm doing though?

From what I understood, you extend the representation of structured objects with a descriptor pointer. That seems unavoidable in this approach. That pointer is tied to values, not type parameters. And since all objects can be used in monomorphic as well as polymorphic contexts, you cannot simply equip them selectively. IOW, there is no such thing as a "polymorphic object" - you need to maintain a uniform representation.

From what I understood, you

From what I understood, you extend the representation of structured objects with a descriptor pointer.

Ah, I understand. Yes, the gc_layout object is needed by the GC in this scheme, and it's part of every (compound) structure. Fortunately, this layout can be shared by many instances, so I don't think the overhead is too bad.

What are the other techniques for describing representation that are perhaps more frugal? I'm familiar with:

a) the gc_layout as described,
b) scanning an object for tagged pointers (using 31-bit ints),
c) using structure-specific traversal functions (this would seem to be similar in overhead to (b)).

(a) and (c) seem more general to me though, as they would seem to provide more opportunities for inlining, and they also open up the possibility of making compiler-generated folds over any structure available to the programmer. Plenty of opportunity for polytypic programming here.

Fortunately, this layout can

Fortunately, this layout can be shared by many instances, so I don't think the overhead is too bad.

The descriptor itself is not that much of a problem (although you might want to resort to hash-consing them in a language with first-class polymorphism, where you don't know them all statically).

The issue is the pointer. Functional programs typically deal with tons of very small objects, the majority one to three slots. An additional word in each of them easily blows up memory consumption by some 30 or more percent. This is particularly devastating for the omnipresent list and tuple. That's why the scheme is more appropriate for OO.

What are the other techniques for describing representation that are perhaps more frugal?

I'm not aware of any comprehensive catalogue, and there are lots of ways to mix or combine techniques. But for type passing, the publications by the TIL and TAL people probably are a good starting point. Greg Morrisett's homepage might be one good place to look for those.

The issue is the pointer.

The issue is the pointer. Functional programs typically deal with tons of very small objects, the majority one to three slots.

I get that, but there's still one thing I don't get about the tagged approach: how do you know the size of the structure? I'm assuming here that an int has the lowest bit flagged with a '1', while pointers have the flag cleared, ie. '0' (or vice versa). So you've found the pointer on the stack, and you follow it to traverse the structure it's pointing to, but you don't know the size of this structure, so you don't know when to stop iterating looking for pointers.

Unless this information is also encoded in the structure somehow. For instance, pointers also use the second lowest bit to encode whether it's the "end of struct". All pointers within a structure have a '0' in that position except the last pointer which has a '1'. Is this how it's done?

Finally, has anyone quantified the costs of the shifting and masking operations required to maintain these bits for integer and pointer operations with the various different encodings? ie. ints tagged with a '0' vs a '1', etc.

[Edit: as for overhead of an additional word, mark-sweep garbage collection requires various bits for each object to track its state while collecting. This would seem to require space that wouldn't fit into a tagged representation anyway. What am I missing?]

[Edit 2: ok, I think I've figured it out. The pointer has a 1 in its lowest order bit, and ints have 0. The pointer then further uses the second lowest order bit to flag the last pointer in what I'll call a "compound structure". If the structure is composed only of ints, a value structure, the second lowest order bit is set on the source pointer, so the GC doesn't traverse it at all.

Field offsets are necessarily calculated in monomorphic functions, so we have enough information to subtract 3 or 1 from field offset calculations on the source pointer, depending on whether it's pointing to a value or a compound, respectively.

I'm still not sure what to do when a compound's last pointer points to a value structure. Inline the value structure is one solution. What else?]

Leroy's Efficient Data

Leroy's Efficient Data Representation In Polymorphic Languages describes a pointer tagging scheme for separately compiled polymorphic functions.

However, every heap allocated object has a size field. You've implied that this would be significant overhead in functional languages, which leads me to believe that some more compact scheme was devised after the above 1990 paper. Anybody have a reference to such work?

As for the the tagging I described above, we can use another bit from pointers as a terminator flag. This forces pointers to be 8-byte aligned, which may or may not be a big deal (assuming inlining value structures is not viable for some reason).

[Edit: A Portable Standard ML Implementation (1994) still describes an encoding with a size header.

Also, given the uniform tagged representation I described, it would seem that any nullary constructors could be encoded simply as integers without allocating any heap values at all.]

[Edit 2: the 8-byte alignment restriction is actually NOT a big deal, since all 4-byte sized data is integer-sized, and so will not need to be allocated on the heap at all. Thus, 8-byte data, like doubles, are the minimum sized data to be heap allocated, so this encoding is looking more doable. The only remaining problem I can see are any bits used by the GC, ie. mark bits and such.]

Right

I was about to give you a pointer to Leroy's Zinc paper. :-)

Yes, FPL implementations typically use a block header. However, implementations go to some length to cram as much required information as possible into this one word (size, constructor tag, GC marks, generation counters, other status bits - we have 6 or 7 different fields in the headers of the Alice VM, for example). An entire pointer would not fit in there, so would potentially add another word. You might be able to move some of the info from the header to the descriptor (e.g. size and tag), but that extra indirection might be costly. And without further measures it will not create enough room either.

Note that some implementation even special-case certain data types to get rid of some headers entirely. For example, they move information from the header into the object pointer itself, using the least-significant bits freed by alignment requirements. For example, cons cells for lists might be condensed to two words if you sacrifice one bit in all pointers to indicate the list tag (IIRC, this hack was found to reduce memory consumption significantly in Oz).

Anyway, this all is a game of trade-offs, and may very much depend on the specific language you implement and what style of programming you want to optimise for.

Edit: Yes, nullary constructors are usually represented as plain integers.

For example, cons cells for

Yes, FPL implementations typically use a block header. However, implementations go to some length to cram as much required information as possible into this one word (size, constructor tag, GC marks, generation counters, other status bits - we have 6 or 7 different fields in the headers of the Alice VM, for example)

Yes, I had been assuming that the constructor tag would consume a whole word, but of course that's a bit wasteful. So it's not using an extra word at all in the sense I meant.

Have you ever heard of an encoding similar to what I described, using pointer bits to mark the end of a structure instead of a size field? Freeing the size field's bits would be useful, since they could be used for different garbage collection strategies, like refcounting.

For example, cons cells for lists might be condensed to two words if you sacrifice one bit in all pointers to indicate the list tag (IIRC, this hack was found to reduce memory consumption significantly in Oz).

This sounds like it would be generalizable. Assuming 2 free bits in every pointer (since every heap-allocated object needs a 4-byte header, this seems fine), any constructor with one heap-allocated element can be folded back into the source pointer itself, with those 2 bits used for the constructor tag. It must be a heap-allocated element, since with an int those bits aren't free.

We can do this for up to 4 constructors in total for every type. Considering many types consist of very few constructors, and combined with representing parameterless constructors as ints, I would expect this to amount to a significant savings overall. This seems like a generalized case of the Oz's list optimization. Oz is limited in this case since it's not statically typed. This is probably the "special case" optimization you're referring to? Do any existing implementations do this, like OCaml or SML/NJ?

Bits in pointers

Have you ever heard of an encoding similar to what I described using pointer bits to mark the end of a structure instead of a size field?

None I would remember. I guess it is less attractive if you also need the size for other things, like bound checks (for arrays, or even for tuples in untyped languages).

Assuming 2 free bits in every pointer

I've lost track of what exact scheme we are discussing right now, but note that conventional implementations need one bit already to distinguish integers.

Do any existing implementations do this, like OCaml or SML/NJ?

No, I don't think so. In Alice ML we use this trick to mark futures, to avoid indirections for tests. The same can be applied to lazy languages to mark thunks. I think this is an old trick in the LP community, but I don't know of its use for constructor tags.

I've lost track of what

I've lost track of what exact scheme we are discussing right now, but note that conventional implementations need one bit already to distinguish integers.

Yes, 3 bits are reserved for the runtime in total. 1 bit to distinguish unboxed from boxed. For boxed values we have two free bits, since every heap-allocated value is at least 8-bytes: 4-byte header word + a number of 4-byte value words.

So the next two lowest bits can be used for certain constructor tags if we retain the structure size in the header, or we can do away with the size field and use those bits to mark the end of a structure. The latter would make GC more efficient overall, and make more GC algorithms available given the same representation. The former would make the program more memory efficient, and also reduce GC pressure when the constructor can be folded into the source pointer. Interesting tradeoffs.

Not a requirement

In Haskell and ML that may be true, but it is not foundationally true. In BitC, we have a union type

(forall (RefType 'a)
  (defunion (nullable 'a) :val
    null
    (non-null p:'a)))

Note the :val annotation. nullable is an unboxed type that can only be instantiated over reference types.

However, we also (for the moment, at least) have:

(defunion (optional 'a) :val
  none
  (some value:'a))

which is an unboxed union type that is instantiable over *any* type (boxed or unboxed). I think you are confusing the question of (un)boxing with the issue of Cardelli-style tag representation optimizations. There are indeed some issues there, but they aren't as bad as they look. Details of how BitC handles this can be found here

If I understand correctly,

If I understand correctly, your point is that BitC "values" are flattened representations of something that would otherwise be boxed because it is larger than a single word. In fact, there are ML compilers that do flattening in certain circumstances, e.g. MLton.

So I agree, you can distinguish between unboxing and tag optimization. But that does not really affect the example: bottom line is that even with flattening you cannot optimise away the Some constructor. (And of course, flattening comes with significant complexity wrt polymorphic code, at least if you are interested in keeping it transparent.)

No, there aren't.

I agree that MLton does unboxing as an optimization. Unfortunately, that isn't good enough. In any sort of performance-sensitive software, representation is prescriptive, not descriptive. The fact that a particular compiler may, as a discretionary choice, unbox things is completely useless for the purposes of robust software construction.

I do agree that the Some constructor cannot be wholly optimized out. I do not agree that flattening introduces significant polymorphic complexity. All of the flattening cases in the BitC compiler are handled in the C emitter. The sum total of their handling might involve as many as 35 lines of code. I didn't actually count. Could be 40, but the point is it's nothing to be concerned about.

The point I was really trying to make in my post was that your original assumptions about representation were artifacts of particular implementations. They happen to hold true for most ML and Haskell compilers, but they are not essentially true.

Agree and disagree

The fact that a particular compiler may, as a discretionary choice, unbox things is completely useless for the purposes of robust software construction.

I disagree. By this argument, anything typically called "optimisation" could be regarded useless. You might need prescriptive behaviour for certain tasks, but in general this is merely a quality of implementation issue. In particular, when the optimisation is only about constant factors, like here.

I do not agree that flattening introduces significant polymorphic complexity. All of the flattening cases in the BitC compiler are handled in the C emitter.

All well and good, but BitC does not make it transparent. The programmer has to deal with low-level notions like reference vs value types, which are semantically immaterial.

The point I was really trying to make in my post was that your original assumptions about representation were artifacts of particular implementations.

I don't think I made specific assumptions. Rather, finer distinctions about implementation techniques simply didn't matter for the question. Apart from that, we seem to agree.

The compiler's promise

shap: The fact that a particular compiler may, as a discretionary choice, unbox things is completely useless for the purposes of robust software construction.

Andreas: I disagree. By this argument, anything typically called "optimisation" could be regarded useless. You might need prescriptive behaviour for certain tasks, but in general this is merely a quality of implementation issue. In particular, when the optimisation is only about constant factors, like here.

I think the point that shap is making is that compiler optimisations come in two varieties: there are those that the compiler writers put in for the purpose of seeking the best overall performance, and would take out if they found another optimisation they liked better; and there are those that the compiler writers make as part of a sort of covenant with the language user, where the compiler writer promises that code written in a certain way will ensure good performance. Think of how the Scheme standards promise TCO.

Why completely useless.

I think that Andreas didn't parse what I wrote in the way that I intended. I did not say, and I do not believe, that optimizer-driven unboxing is completely useless. What I wrote was that optimizer-driven unboxing is completely useless for the purposes of robust software construction. Robust software (by which I meant critical systems) is concerned in part with temporal predictability and liveness in the presence of statically bounded memory, and also with external hardware-dictated representations. In these programs, it is simply not true that concerns of representation are "semantically immaterial." Having arrived at a mathematically tractable (and very beautiful) model of formal semantics, many people in the PL community seem to have forgotten that the formal semantics is a model, and that model is only valid to the extent that it captures the objectively real conditions of interest.

It is a very good thing that mathematical type theorists continue to advance the work of Alonzo Church. Real progress is being made that way. This is one of the reasons I told Swaroop that the failure of principal typing in the original BitC type system formalization was an important thing to fix. But it is also a good thing when those of us who are mathematically trained engineers try to reconcile the elegance of mathematics with the objective world, which is a place of warts, bumps, and contusions. Any language that is intended to cope with objective reality must enable the programmer to specify how to deal with that reality. A kernel that cannot prescriptively describe the hardware page table structure is broken. Period. Full Stop. If that insufficiency is semantically immaterial, then the hypothesis concerning what is semantically material has self-evidently failed the empirical test, and the honest advocate of that hypothesis must now go back and refine their hypothesis. Science is the process of fitting hypotheses to facts. Unfortunately, Computer Science isn't science, and PL design these days seems to be mathematics rather than science.

Perhaps that is not the right way to state the issue. I am personally quite happy with the idea that semantics can be separated from representation, and that meaning can be distinguished from method. All of that is fine and good. The problem comes when the formal PL community declares that method isn't interesting, and that programming languages should not permit specification of method.

Andreas is correct that BitC does not make unboxing transparent. BitC is not attempting to be a mathematical perfect language. It is attempting to be a language with a fully articulated mathematical foundation that also addresses the issues of prescriptive representation. Having acknowledged that, it is instructive to note that (ignoring surface syntax) every valid ML program has a direct transcription into BitC with the same meaning[1,2]. In light of this, I'm not clear why Andreas believes that the programmer is "forced" to deal with representation any more or less than they do in ML. It isn't that ML doesn't force the user to deal with representation. Rather, it is that ML doesn't give the user any choices. The programmer in BitC is free to write programs using exactly the same degree of blissful ignorance that they use in ML. And they will get a comparable result: a substantial performance penalty on real-world programs.

Or to put it another way: BitC preserves 100% of what Andreas believes is semantically material, but it also supports the creation of programs that operate correctly in objective reality.


[1] We still need extensible row types, but that is coming very soon.

[2] The same is not true for Haskell, because BitC is an eager evaluation language. Most Haskell programs don't care, but the few that do care require invasive transformation.

Risks of addressing practitioner concerns

Be careful - with an attitude like that, you run the risk of having BitC become popular! Luckily, its current concrete syntax has been empirically proven to eliminate that risk.

Seriously, I think you make some very important general points, independent of the specific disagreement at issue.

BitC syntax

I wholeheartedly agree. BitC seemed to me to be an awesome project, until I found that the surface syntax was based on Lisp/Scheme.

The science of lexing and parsing has come far enough that implementing a conventional syntax is now (close to) trivial. Why not make use of this technology?

PKE.

Parenthetical remarks

My observation was about a likely consequence of the choice of syntax, but it wasn't intended as a criticism of that choice. There are some reasons for the choice of S-exprs in BitC, and plans for "a more conventional surface syntax" were mentioned.

Early in the evolution of a language, particularly one that is attacking tough problems at the semantic level, ensuring that the language's surface syntax is likely to appeal to the largest mass of programmers is not necessarily the top consideration.

Abandonment was planned from day zero

In spite of the note that Anton van Straaten cites, the criteria for BitC syntax mainly have to do with later prover objectives and current macro system objectives. I originally picked LISP syntax because I didn't want to be fooling with surface issues while dealing simultaneously with deep issues. Syntax arguments expand to fill all available bandwidth. But also, I was engaged in a bit of reverse social engineering. I wanted to restrict the early BitC audience to a fairly select group, and that group either (a) was already comfortable with S-exprs, or (b) were willing to tolerate them. Apparently it worked. :-) BitC is much less based on Scheme than it appears. The bloody bison grammar has 271 reductions!

Pal: lexer/parser science is not as far along as you believe in languages that want to have any sort of sensible macro system. There are no current parser generators that emit BitC code. There are also no parser generators that I know of that do any sort of decent job with error recovery. We will definitely be switching the BitC syntax, and I agree that complexity of parsing is a non-issue. The issue will be preserving a syntax in which there is a properly structured distinction between well-formed input and syntactic input, and in which the well-formed input is amenable to macros. At the moment, the type of macro system that was created for Dylan looks quite promising; I just haven't had time to sort it out.

Come join the bitc-dev list and help us design a more sensible syntax!

I was engaged in a bit of

I was engaged in a bit of reverse social engineering...

Genius.

Misinterpretations

What I wrote was that optimizer-driven unboxing is completely useless for the purposes of robust software construction.

You are probably right that I misinterpreted you. You wrote: "[..] that [optimizer-driven unboxing] is not good enough.", without any further qualification in that sentence. I disagreed with this general statement, because I think it indeed is good enough for most purposes. And for those, the programmer shouldn't have to bother with low-level representation issues beyond some intuitive understanding of the compilation process. I don't consider this outside "objective reality" at all.

On the other hand, I agree that there are critical tasks where this is not the case, and you mention a couple. I also should clarify that I clearly appreciate that a project like BitC intends to address these domains in a more principled manner. It wasn't my intention to come across as not valuing this.

I'm not clear why Andreas believes that the programmer is "forced" to deal with representation any more or less than they do in ML.

In practice, the programmer is "forced" to deal with this because he has to interface libraries that make certain choices about low-level representations that might be incompatible with those "ignorant" choices made by him, or choices made by other libraries. If he writes a library himself, he potentially has to duplicate effort to support different choices. The fact that BitC has two different option-like types is one very simple example of this effect, I suppose.

Also, would the ignorant transliteration of an ML program have the performance characteristics that you expect from an ML compiler? (This is an honest question. I'm not sure that the transliteration you mention would not involve dumb copying of large value objects, for example.)

Quick responses

The option type still exists because I'm a slow editor. It's going away, or at least getting relegated to a deep, dark library some place. This doesn't dispute your point; I only suggest that there may yet be hope.

Concerning an ignorant translation of ML, I suppose it would depend on how ignorant a BitC compiler you use. We should do very slightly better on the Cardelli representation tricks, and there is no reason why a BitC compiler should not apply the same types of representation optimizations that have been seen in some of the more aggressive research compilers, but those obviously haven't been done yet, and it isn't clear how high up the priority list such optimizations would be, given the intended applications for BitC. Once we get hooked up to LLVM, we ought to generate pretty good code. Our polyinstantiator is smart enough not to stupidly re-instantiate procedures over reference types.

Optimizations and constant factors

Two issues to take up separately:

1. In critical systems, predictability is more important than speed. In critical systems, many developers would very happily disable all discretionary optimizations, because failures of temporal predictability can kill people in critical applications.

2. It follows from the "constant factor" argument that no optimization is worthwhile unless it produces an exponential improvement. Even log(n) optimizations are not worthy of attention, because real programs are not reasonably expected to encode more than a constant number of bits per electron in the universe, and log(n-electrons) is a relatively small constant factor.

In objective reality, a constant factor of a 100x may make the difference between the user living or dying before the computation completes. This is semantically immaterial, but pragmatically vital.

Andreas: I am aware that you know all of this. You simply had the misfortune to push my buttons when I had nothing better to do. :-)

null = unspecified

The Chickenfolk are now debating whether the Scheme unspecified value should be used as a proxy for SQL's NULL. (I'm for it.)

!Null in dataflow

My experience with dataflow is limited to reading about Lucid and some tutorial examples on Oz via CTM, but I remember that you do not have uninitialized values or null in those languages.

(Does anyone know of an interpreter for Lucid?)

Lucid interpreter

Lucid Synchrone is probably not too far from what you're asking for.

Lucid

Thanks, got it recently from Prof. Wadge. Lucid Synchrone is interesting too.

Null values in Oz

Oz does not have the problem of performing an operation on uninitialized values simply because the thread will go into a wait state until such time as the value becomes known. Or to put it another way, what do you do with an unknown value? The Oz answer is nothing.

Isn't null needed for some data structures?

Isn't null needed for some data structures? I'm not sure how one would implement something as simple as a doubly linked list without null or a similar concept in strict languages.

With the option type.

With the option type. Depends exactly what you mean by 'null'.
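
For instance, a minimal Haskell sketch using Maybe in place of null for absent neighbours, with mutable links via IORef (all names invented for illustration):

import Data.IORef

-- A doubly linked node: missing neighbours are Nothing, not null.
data Node a = Node
  { value :: a
  , prev  :: IORef (Maybe (Node a))
  , next  :: IORef (Maybe (Node a))
  }

mkNode :: a -> IO (Node a)
mkNode x = Node x <$> newIORef Nothing <*> newIORef Nothing

-- Link two nodes together; the ends of the list simply stay Nothing.
link :: Node a -> Node a -> IO ()
link a b = do
  writeIORef (next a) (Just b)
  writeIORef (prev b) (Just a)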

With laziness

Depends on what you mean by "linked list." Using laziness you can make a bi-directional, possibly infinite list (one that can even be circular) without ever using a null, even temporarily.
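
A sketch of the knot-tying trick in Haskell (fromList is invented for illustration, and errors on empty input):

data DList a = DNode { val :: a, left :: DList a, right :: DList a }

-- Build a circular doubly linked list: the first node's left
-- neighbour is the last node, and vice versa. Laziness lets us
-- refer to nodes before they have been constructed.
fromList :: [a] -> DList a
fromList [] = error "fromList: empty list"
fromList xs = first
  where
    (first, lastN) = go lastN xs first
    go prevN [y]    nxt = let n = DNode y prevN nxt in (n, n)
    go prevN (y:ys) nxt = let n           = DNode y prevN rest
                              (rest, lst) = go n ys nxt
                          in (n, lst)
    go _ [] _ = error "unreachable"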

But he said...

But he said "in strict languages"...

D'oh!

Sorry, missed that. Of course, you can fake laziness with macros and lambdas in a strict language... :-)

Null is a vital part of the underlying machine code

Null, being a vital part of the underlying machine code, is absolutely necessary to modern programming languages. Reading any modern assembly program, you'll see it peppered with references to NULL values.

What's that? There's no NULL in machine code?

Then it's not necessary, is it?

On a more serious note

To extend the idea: a null value typically creates more work for programmers while not actually making the code any safer or providing benefits in terms of speed or execution. In modern C, NULL is merely a constant defined to be 0 (or ((void *) 0)), and therefore doesn't actually represent a value outside the normal range of another type; in languages such as Java, null is a special value that requires additional test cases and programming to capture. As it is not part of the underlying foundation of the machine code, this requires additional instructions, and it typically forces the fundamental types of such a language to be bloated data structures requiring additional processing and RAM to handle. For non-trivial applications, this tiny addition to a language can represent a significant drain on the system's resources, both in terms of cycles spent and memory reserved, for even the most basic of operations.

In short, Null is a contributing factor to your computer taking several minutes longer to boot today than it did when you had a 66MHz machine. Despite hardware that is dozens of times faster, poor programming practice has been a major cause of today's programs, and in some cases even the underlying operating systems, being much slower and more memory-hungry than in previous years.

Interesting hypothesis

What you seem to be tacitly suggesting is that Null serves as a canary value, and that any other canary value would do equally well. In particular, a distinguished object instance of appropriate type could serve as a typed Null canary value, but would have the advantage that compilers would not need to guard against its presence. Taken to its logical conclusion, this would lead to a system that was memory safe, but not "reference liveness safe". Better still, it seems plausible that this approach could be implemented within the constraints of a conforming C/C++ implementation.

This is similar to what Vikram Adve did in SAFECode. You might want to go look at his work.
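
For concreteness, here is roughly what a typed null canary might look like, sketched in OCaml (all names here are hypothetical):

    (* One statically allocated sentinel per type stands in for null. *)
    type buffer = { data : bytes; len : int }

    let null_buffer = { data = Bytes.empty; len = 0 }

    (* Physical equality identifies the sentinel. It is a real, typed
       object, so no compiler guard against dereferencing is needed. *)
    let is_null b = b == null_buffer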

The problem is not null itself, it's partial functions.

Allowing null as in C makes pointer dereferencing a partial function: a pointer may hold one extra value, null, for which dereferencing is undefined.

The traditional treatment of partial functions, i.e. not enforcing totality, is where most bugs begin.

Of course, some functions are inherently partial.

1/x, for instance--what *do* you do when x is 0? What do you do when someone tries to take the head of []?

Blaming partial functions doesn't get you much; their existence is a fact of life.

The problem with NULL is that in many languages (or subsets thereof), it's in many ways a de-facto bottom type, one that isn't empty. NULL in SQL and VOID in Eiffel behave this way; as does NULL in C/C++ for pointers, and null in Java for references. (See this LtU thread for more discussion on whether or not the bottom type can or should be empty; TAPL also discusses this in some detail.) Being a bottom type, you can't avoid it with typing discipline, since bot is a subtype of everything. When bot is empty, this isn't really a problem (other than divergence, another form of partiality that is orthogonal to NULL), but when it isn't empty, everywhere you have a value you have to account for it possibly being NULL.

1/x, for instance--what *do*

1/x, for instance--what *do* you do when x is 0?

Make the definition total of course! ;-)

val (/): int -> int -> int option

What do you do when someone tries to take the head of []?

Also make it total, as Gavin pointed out earlier.

Or, if you really like exceptions instead of explicit error cases, use checked exceptions.

For safety, import Safe.
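
Spelled out (a sketch in OCaml; safe_div and safe_hd are made-up names, not standard functions):

    (* Total variants: the option type forces callers to handle failure. *)
    let safe_div x y = if y = 0 then None else Some (x / y)

    let safe_hd = function
      | [] -> None
      | x :: _ -> Some x

    (* The match must cover both cases, or the compiler complains. *)
    let describe l =
      match safe_hd l with
      | None -> "empty"
      | Some x -> string_of_int x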

What do you do when someone tries to take the head of []?

Also make it total, as Gavin pointed out earlier.

As an example of this, Neil Mitchell's Safe library for Haskell provides up to four total variants of partial Prelude functions. See the link for a summary. Here's the source.

We define them as partial, but they are not really partial

When we say that 1/x is a partial function, we have already excluded x = 0 from its domain, and on that domain the function is total. What actually bites us is that the result is undefined for x = 0.

Actually, blaming partial functions is the right thing to do when it comes to programming: not handling all cases is where bugs come from.

For what it's worth, the

For what it's worth, the Cell SPE makes pointers total functions -- it's an architecture with no hardware exceptions, ever, so everything (of which memory references are just an example) returns something. In particular, memory space runs from 0 up to 256 KB and then loops, so dereference(X) is the same as dereference(X mod 256*1024).

This makes for fun bugs, of the "I have two integer variables declared here; why do they seem to be aliased? [insert lots of debugging] Oh, right, they're both pointers, and I didn't allocate either of them ... so they're both using the memory at location 0, which happens not to break things."

Well, Null is a pointer value. Pointers and OO are orthogonal.

The key thing is: You don't set objects to Null, ever. You set pointers to objects to Null.

You don't need pointers to have an OO language -- see the Fortran 2003 specification for proof in one "major programming language". Fortran has pointers, yes, but they're a relatively self-contained part of the language, used only when they're explicitly needed for algorithms that depend on them.

(In particular, note that pointers are not needed in Fortran for dynamic memory allocation or for passing modifiable arguments to subroutines.)

I would think it pretty clear that a Null value for a pointer is useful for explicitly marking the end of linked lists and so forth; certainly very few algorithms that use pointers assume that (a) each element has a pointer and (b) each pointer points to a target. Sure, it's possible to do without a Null pointer by pointing somewhere random and adding a boolean to indicate whether the target is relevant or not, but that's just the semantics of a Null pointer implemented in a messier way.

This has nothing to do with OO, though.

Null pointers vs options

Sure, it's possible to do without a Null pointer by pointing somewhere random and adding a boolean to indicate whether the target is relevant or not, but that's just the semantics of a Null pointer implemented in a messier way.

Using a Boolean flag is indeed stupid. As others have mentioned, you use an option type. This is not the same as "a null pointer implemented in a messier way" because it makes a clear distinction between a pointer and a value that may not exist (which, btw, isn't necessarily a pointer). In particular, it is obvious from the types where values may be absent and where not, and the type system will catch any confusion in that regard.
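
For example (a small OCaml sketch; the record type is hypothetical):

    (* "May be absent" is part of the type and has nothing to do with
       pointers: manager is an option, name is not, and the type system
       won't let the two be confused. *)
    type person = { name : string; manager : person option }

    let boss_name p =
      match p.manager with
      | Some m -> m.name           (* both cases must be handled *)
      | None -> "(no manager)"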

Ah, indeed. So, under the

Ah, indeed. So, under the hood at machine level, you'd have a regular pointer (which the compiler guarantees will always point somewhere valid, insofar as it can, and doesn't provide the option of "nullifying"), and then the option type has the same representation but might also hold a value that represents "there isn't a pointer-to-something stored here"? And, of course, one can query the latter to see if there's a pointer-to-something there or not. That makes a lot of sense to me.

Not so clear

I would think it pretty clear that a Null value for a pointer is useful for explicitly marking the end of linked lists and so forth

The traditional null, though, is (as has been pointed out elsewhere) a member of the bottom type, so any value can be null, not just the ends of linked lists. If you want to terminate a linked list in a type-safe way, use option types or a sentinel type for empty linked lists (Nil).
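
In ML terms (a short OCaml sketch; seq is used only to avoid clashing with the built-in list type):

    (* The empty list is a sentinel constructor of the list type itself,
       not a bottom value that inhabits every type. *)
    type 'a seq = Nil | Cons of 'a * 'a seq

    let rec length = function
      | Nil -> 0
      | Cons (_, rest) -> 1 + length rest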

Misses the point.

Brooks is saying the key thing when he indicates that you need an end-marker. Null is a fine end-marker. So is a pointer to a distinguished Nil object. So is a pointer to some other constant value that is guaranteed not to collide with any valid pointer. The primary advantage to compile-time constant values, such as null, is that they need not occupy a register. This advantage can be achieved with a distinguished object provided that object is statically allocated.

The problem with representing NIL by NULL comes when it gets combined with OPTION. At that point you can no longer distinguish between a NIL list and a NULL list pointer. On balance, every tag optimization trick that I know about can be implemented on arbitrary constant values that are statically known at link time. That argues pretty strongly *against* using NULL for such things, but it does raise the interesting inter-operation issue that the following are both potentially sensible definitions of a list in BitC:

    (defunion (list 'a)
      nil
      (cons car:'a cdr:(list 'a)))
    (defstruct (list 'a)
      car: 'a
      cdr: (nullable (list 'a)))

which one you want probably depends more on the constraints of cross-language interfacing than on any particular preference driven by efficiency, safety, or correctness. If BitC had something like typedef, we could make the second one a bit less awkward, but you get the point.

yes, essential like 0

Nil-like values are essential, like 0, because we like to define data structures inductively. "A list is the empty list or...", "A natural number is 0 or ...", etc. It's just handy to have around.

The question is, do you want just one? or many?

The statically typed languages generally say "many": you explicitly define a new one for each new type.

Dynamic languages, for some reason, mostly fixate on having "one true nil" so that a nil list is EQ? to a nil tree. (Scheme is a mild exception: it somehow settled on having "two true nils" ("()" and "#f"), which divide among themselves the properties of a "one true nil".)

The "win" of "one [or two] true nil[s]" in a dynamic language is that it allows easier abstraction of patterns of induction. For example, one could write a common-lisp function "MAP-SEQUENCE" that works on all sequence types using generic functions to access elements and build the result -- using "nil" to stand for the "end of sequence" in all cases.

The "lose" is that a limitation to only one or two true nils gets in the way of composing data types. For example, suppose that I have an associative data structure (say, a hash table). I wish to define that values stored in the table are either lists (possibly empty) or values (possibly nil) of a given record type. If I store a non-nil value in the hash table, the type (list or record) is preserved. If I store a nil value, the table doesn't know if I stored a nil record or a nil list.

Arguably, dynamic languages would be improved if they more commonly allowed the creation of (in Scheme terms) multiple NIL? objects, none of which are EQ? to any of the others. With the generic NIL? one could still abstract patterns of induction. With multiple, non-EQ? nils, I'd have an easier time composing data structures.
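
For contrast, the static "many nils" answer to the hash-table example above, sketched in OCaml (record and entry are made-up names):

    (* Distinct constructors keep "empty list" and "absent record" apart. *)
    type record = { id : int }
    type entry = Items of int list | Rec of record option

    let table : (string, entry) Hashtbl.t = Hashtbl.create 16

    let () =
      Hashtbl.replace table "a" (Items []);   (* an empty list *)
      Hashtbl.replace table "b" (Rec None);   (* an absent record *)
      (* A lookup still reveals which kind of "nothing" was stored. *)
      match Hashtbl.find table "a" with
      | Items [] -> print_endline "an empty list"
      | Items _ -> print_endline "a non-empty list"
      | Rec None -> print_endline "no record"
      | Rec (Some _) -> print_endline "a record"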

-t

And arguably...

And arguably, static languages would be improved if they more commonly allowed the recognition that all of these nil values are in some sense the same. And this is exactly the kind of thing that generic programming is all about.

Except that they aren't the same..

Those different kinds of null have distinct types. As long as all typing is static, you're fine, but if you introduce a checked upcast operation, you can get into trouble very quickly.

This is actually a potential hairball in the current BitC implementation. Given:

    (defunion (list 'a)
      nil
      (cons car:'a cdr:(list 'a)))

The nil is underspecified, but on the other hand it has no field at any underspecified type, and for this reason we are able to emit a single static instance of nil that is reusable for all instantiations of 'a. And we do. The reason we won't get hung up by checked upcast is that structures are extensible (or will soon be) and unions are not.

Types of null

It's important to distinguish between the different "types" of null/nil. As you say, there is the base case of inductive definitions, which may be zero, [], {}, etc. On the other hand there are things like null pointers/references, and things like SQL's "I'm not really here" NULL token. The first kind of nil/null (inductive base case) is essential. The other two literally have no value.

Whether you define all inductive "zero" elements to be the same token or not doesn't seem to buy you much. It's pretty simple to define an (overloaded) nil? test for each data structure (or perhaps mzero).
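
For instance (an OCaml sketch; the names are mine):

    (* One emptiness test per structure; the types keep them apart. *)
    let list_is_nil = function [] -> true | _ -> false
    let option_is_none = function None -> true | Some _ -> false

    module SMap = Map.Make (String)
    let map_is_empty (m : 'a SMap.t) = SMap.is_empty m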