Any problems with true union types if all values are tagged? (like in a dynamically typed system, Lisp, etc.)

I was initially motivated by a desire to model "Noneable" sort of like C, and not like ML's 'a option or Haskell's Maybe a.

Wanting to distinguish a type Person from ?Person, I don't want None to be a kind of automatic bottom type of all (ref?) types. So "Noneable" looks like a union, in fact, a union of a type and a distinguished value, None.

In my syntax, it would look something like this (where enum defines variants and Or is a primitive of the type language).


enum NONE is | None;
type ? 'a = Or 'a None;

Not wanting to make this a special case, I was wondering whether such union types present any difficulties so long as all values carry type tags anyway. So we might define Num thusly:

type Num = Or Int Dbl;

Since the values carry type tag info, they are distinguishable using pattern matching. For the "Noneable" case, we have:


fun 'a not-none-or-dflt (?'a, 'a -> 'a)
| None, dflt = dflt
| v, _ = v;

Using Scala-style type-test syntax, we might write this for Num:


fun toint(Num -> Int)
| x:Int = x
| d = Math.round(d);
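
For comparison, here is the same function over an explicitly tagged Haskell sum (my illustration; Num' and toInt are made-up names). The written Left/Right tags play the role my values' runtime type tags would play automatically:

type Num' = Either Int Double

toInt :: Num' -> Int
toInt (Left i)  = i
toInt (Right d) = round d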

My thinking is that values' type tags are already like automagical constructors, hence no need to write:


enum Val is
| aInt(Int);
| aDbl(Dbl);
| aStr(Str);
....
etc.

Am I missing some grand glaring hole in such a "true union" type construct, given my tagged values? Do any other languages have a similar type system feature that I might look at?

Thanks much.

Scott


Problems are probably not in construction

Umm, how is this different from basic algebraic data types?

Disregarding that: the inherent difficulty with these sorts of or-types is either having to do the pattern match everywhere you actually need to use an instance, or doing some type-intersection operation to provide only the operations guaranteed to exist on all possible contents of the or-type.

re: the inherent difficulty with these sorts of or-types...

IMHO, the need for pattern-match-based discrimination is the strength of these sorts of types - flexibility (when needed) with forced safety of access.

My 'test case' is my old college buddy who is writing a pretty giant piece of server system software (document semantic analysis, tagging, custom dictionaries, etc.).

He hated the syntax and a few other features but, being a C++ programmer, loved the ?'a features that would essentially eliminate the uncaught null pointer exception. In other words, he'd welcome the verbosity of a construct like this if it caught null pointers.

-- Various pattern matching shorthands
when? s:Synonym | find-synonym(x) then
  add-to-syn-set(s);

So the need for pattern matching is a feature, not a bug ;-)

Roger that

Yes, I was looking for confirmation of my amateurish conjecture that having type-tagged values makes these essentially akin to algebraic data types. So that's good news.

These types would be special purpose (like "Noneable 'a") where I want to enforce a pattern matching based discrimination.

The intersection issue is something I'm just starting to think about.

The only difference I might see between this "Or" type and regular algebraic data types is the mix and match of types and values in a single Or type. Or could be used to enumerate an arbitrary set of values as a type.

Thanks - Scott

Refinement types

If None is a value in the Or type then you're looking at what are called "refinement types", introduced for ML by Freeman and Pfenning, for example. You might want to have a look at the literature on this and the related work on intersection types :)

Why this is a bad idea

I investigated the options for this sort of functionality, and ended up concluding that the Haskell "Maybe t = Just t | Nothing" is preferable to a "special value indicating none-ness". Reasons:

1. "Nullable" types are strange case because the concept doesn't nest. If you have a pure type t, then ?t and ??t are identical. If you have a value x:?t, and you want to perform an action on it which returns either x or none, in the failure case you can't distinguish between x and none.

2. Maybe is a functor and also a monad with a plus and a zero. However, a type constructor (?) :: type->type isn't a functor. See why?

3. More generally, such a ?t type constructor must necessarily either violate the parametricity property or be unusable for universally quantified types. Consider:

f :: t -> Bool -> ?t
f a b = if b then a else None

g x = if (f x True)==None then 1 else 2

If the compiler accepts g, then your language is aparametric! But if it rejects g, then it's impossible to reason about values of ?t for unknown types t, whereas with Maybe t, we can at least distinguish between the (Just x) and (Nothing) cases, even if we don't have any operations available for x.

Thus Maybe is the lesser evil. The best language-wide solution seems to be going with Maybe, and then having nice pattern-matching, guard, and comprehension constructs so that it's easy to deconstruct Maybe values without unnecessary verbiage.
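
As a sketch of point 1 in Haskell (my illustration; Data.Map is real, the names table and probe are made up): with Maybe, a lookup in a map of optional values keeps "key absent" distinct from "key present but with no value", which a collapsing ?t would conflate.

import qualified Data.Map as M

table :: M.Map String (Maybe Int)
table = M.fromList [("a", Just 1), ("b", Nothing)]

probe :: String -> Maybe (Maybe Int)
probe k = M.lookup k table
-- probe "b" == Just Nothing  (key present, no value)
-- probe "c" == Nothing       (key absent)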

As an aside

I find functions which return None in case of failure troubling: if it's a real failure, then you need to know why it failed. The function which detects the failure knows why it considers the current situation to be a failure, so just returning None is destroying information.

I too do not like functions

I too do not like functions that return None, and without going into details I have a pretty nifty mv-return based feature (not involving None) for safely determining "failure" or "done" or "no result" in those few cases where this is needed.

My language is not pure, and while I have an immutable Vec in my language design, I also have a mutable and growable Arr data structure where Noneable bindings will be useful. They will also be useful in nasty data structure initialization code involving interaction with the outside world, retrieving SQL rows, etc.

Now if I were only up to speed enough to follow the more theoretical discussion here :-(

Scott

? is a functor

1. It is perfectly intelligible denotationally, as the full and faithful functor that takes types modelled in the usual category over predomains [1] to types modelled in the usual category over domains; the category over both predomains and domains is a perfectly decent choice of denotations for types;

2. ? raised to be an endofunctor over this union category is perfectly well-defined; it is full but not faithful;

3. Why does choosing to reject parametricity warrant an `!'? We have a parametric core language, and we have an algebraically well-behaved extension of it. I don't see this as a QED.

Finding ? strange as a type constructor, I think, arises from an ungrounded prejudice: if type constructors can't be modelled by faithful functors then they aren't `free' (true), and therefore they are somehow essentially confused (question begging). If you can see what is going on, and have a nice semantics for it, why all the gnashing of teeth?

[1] Predomains are essentially domains without bottom elements, which sometimes makes them a convenient target for denotational semantics.

? is not a functor

Of course ? is a pre-functor obeying the identity law but, unlike the Maybe functor, it violates the composition law F(f.g)=F(f).F(g). Consider g(5)=None and f(None)=3. We have F(f.g)(5)=3, while (F(f).F(g))(5)=F(f)(None)=None.

Regarding parametricity, it's a valuable property that a language designer shouldn't violate accidentally. Of course, violating it with full knowledge of the understanding issues is a valid tradeoff.

? is a functor, if we interpret application correctly

The category I gave in this post's grandparent is not Cartesian closed, so function application cannot be straightforwardly modelled by composition of morphisms, and so I should say that I am just guessing that I can model your example in it.

My idea is that you can model the lambda calculus with union types in it by interpreting function application by means of a limit construction, but I would have to get out pen and paper to be sure that the details all check out: not today!

then your language is aparametric


f :: t -> Bool -> ?t
f a b = if b then a else None

g x = if (f x True)==None then 1 else 2

If your language inferred a type of g :: a -> int then this conclusion would look sound, but that's not the only possibility. It could also infer something like g :: ComparableToNone a => a -> int. You essentially need this power to work well with union types. I do, however, tend to agree with your conclusions regarding Maybe - disjoint sums have nicer properties than unions.
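
A hypothetical sketch of that qualified-type reading (ComparableToNone is the class name suggested above; isNone is a method name I've made up):

class ComparableToNone a where
  isNone :: a -> Bool

-- g picks up the constraint instead of being rejected outright:
g :: ComparableToNone a => a -> Int
g x = if isNone x then 1 else 2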

Thanks for the feedback - a few comments

First, Option/Maybe is easily defined in the language.

enum Opt 'a is | Some 'a | Nothing;

Second, I'm not particularly concerned with ??Int == ?Int. My intended audience is folks who use Java, C#, etc., for whom ?'a types are the normal and often undesirable state of affairs.

As for your third comment, I'll admit my limited skills and understanding here and ask for further clarification.

I feel I really need to understand this latter problem better, as well as how this problem might or might not apply to the last few years' "hue and cry" for non-nullable types in Java and C++.

Again, thanks very much for the considered comments.

Scott

I'm sincerely and humbly

I'm sincerely and humbly asking for a clarification of this example.

f :: t -> Bool -> ?t
f a b = if b then a else None
g x = if (f x)==None then 1 else 2

First, is the call (f x) shy the Bool parameter, or am I missing something?

Second, I'll humbly admit I still don't see the problem :-(

Many thanks from this PLT newbie (but no, an older fella and not a CS student).

Scott

I share this confusion

I share this confusion (probably because I'm not sure what parametricity means, and I'm having trouble finding out).

Perhaps someone with the appropriate understanding could take a minute to update http://en.wikipedia.org/wiki/Parametricity. "Parametricity is a theoretical result that states that functions that have types that are related to each other share certain properties in common" isn't very helpful as a definition when it doesn't go on to explain what properties or how the types must be "related."

Parametricity

Parametricity is Wadler's shorthand for the notion of relational parametricity introduced in

J. Reynolds (1983). Types, Abstraction and Parametric Polymorphism. In Information Processing 83 (Proc. 9th IFIP World Computer Congress).
Linkable PDF to this would be very nice...

The issue is that we want the space of possible terms inhabiting a type to be somehow determined by the structure of the type; otherwise you have C's type system. If this is true, then we have some sort of compositionality. Relational parametricity goes further and tells us that this compositionality arises in a clean way from the inductive structure of the types.

Induction is nice, so parametricity is useful, and indeed all kinds of nice things follow. But just as induction is not the only proof-making tool mathematicians have, so too some aparametric (Tim's coinage?) type systems can still have nice properties proved about them, if they have a reasonable notion of compositionality.

Parametricity and Security

Tim has also convinced me (in private correspondence) that parametricity is key to (static) notions of security. He seems to be in good company on this; see Washburn and Weirich's Generalizing Parametricity Using Information Flow and Washburn's Ph.D. thesis, Principia Narcissus: How to Avoid Being Caught by Your Reflection for details.

Parametricity at the type level?

While I'm a believer in the importance of parametricity at the term level, it's less clear to me that it's important to disallow type level "aparametric" constructions. For example:

Foo[T] = if T==Int then IntList else List[T]

Anyone have opinions on this either way?
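
For what it's worth, GHC's closed type families can express exactly this sort of type-level branch - a sketch, with [Int] standing in for the specialized IntList:

{-# LANGUAGE TypeFamilies #-}

-- Equations are tried top to bottom, so Foo Int gets the special case:
type family Foo t where
  Foo Int = [Int]
  Foo t   = [t]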

Static aparametricity

Your point illustrates the richness of the topic!

In some dependently typed systems where compile-time evaluation is part of typechecking, requiring static parametricity is impossible. Once you accept compile-time evaluation, you can essentially write derivation rules in code. So aparametric meta-level rules like "in the context where T is a type..." are pushed down to aparametric compile-time evaluation of code.

If you recast Haskell typeclasses in terms of dependent types, you can compile the set of instances of a typeclass into a construct like "Show(T:type) = if T==Int then .. else if T==String then ..", and the requirement that all aparametric checks resolve to static (compile-time) decisions coincides with Haskell's requirement that typeclass instantiations be unambiguous.

Thus I'd argue that Haskell relies on compile-time aparametricity in an essential way. If it were parametric, the compiler could never distinguish between Int and String in typeclass instantiations! But this is perfectly compatible with its obedience of dynamic (runtime) parametricity. This is a nebulous claim about Haskell, of course, but when you translate it into dependent type systems, it certainly holds true.

Finally, one essential consideration is that if static aparametricity is allowed, it must be constrained so that you can't use cross-module introspection to violate the language's security properties. Fortunately, this property is easy to satisfy in practice.

Another distinction

From my viewpoint, you shouldn't compile Haskell typeclasses down to constructs as you've described, as choosing which instance is applicable is a local decision. For example, multiplication between square matrices can be interpreted in at least two ways: matrix multiplication or elementwise multiplication. The choice should be based on the context in which multiplication is used, not based on a type-branch inside some global multiplication routine.

So from my viewpoint, instance selection should be viewed similarly to type inference, and I'm interested in whether the inferred code behaves parametrically. But it might not. If one module exports a function that operates on a type, what are you allowed to do with that type? In Coq, I think that the answer is that the only thing you can do is declare parameters of the unknown type - Coq is fully parametric. If types are viewed as having a richer structure (for example as being sets of values), there are more things you can do with them that would violate parametricity. I'm wondering what problems that will cause in practice.

Instance selection

It's a requirement in Haskell that instance selection behave /as if/ there is indeed a global switch - otherwise, the result is incoherent instance usage. There's no real way around this, because which instance is being used also needs to be baked into datatypes in order to preserve invariants - using a new instance means ending up with a new type. An equivalent issue crops up for ML functors.

A guess

Maybe you're saying that if you have:

class Foo a where foo :: a -> Int

instance Foo a where
    foo x = 0

instance Foo Int where
    foo x = x

poly :: a -> Int
poly x = foo x 

-- You want the foo call here to dispatch on the actual type a:
-- poly [1] == 0
-- poly 1 == 1 

Is that right? I don't want instance selection to be based on the actual type there, but on the apparent type 'a'. This is in line with my view of typeclasses as overload resolution.

The only invariant I can see this breaking is "forall x. poly x == foo x". But I don't see this as particularly important - foo is an overloaded function, poly isn't. Maybe you can elaborate...

That's not actually allowed

That's not actually allowed normally because the instances for a and Int overlap - a subsumes Int. If you turn that off, poly will dispatch on a rather than whatever a is instantiated to. For the latter, you'd leave the Foo a => qualification in place.

The invariant you mention is one reason for the overlap restriction: type classes are used to capture more than just overloading.

Example?

I know this requires allowing overlapping instances, and that the behavior when enabled is what I want, not what you (maybe?) want. I also gather that Haskell people consider this a problem, and I'd like to understand why that is. From my viewpoint, the problem is thinking (as suggested by the syntax) that 'Foo a => a -> Int' is a simple function taking a value of any type 'a' such that Foo a holds. As Tim was saying - this violates parametricity (IMO in a bad way).

Instead, I think this should be viewed as a function which takes an extra parameter - the instance - more in line with how the implementation works. From this point of view, "incoherence" just means passing two different instance parameters for the same type, and I don't see anything wrong with it. Can you give an example of the kind of useful invariants this might allow to be broken?

It's a requirement in

It's a requirement in Haskell that instance selection behave /as if/ there is indeed a global switch - otherwise, the result is incoherent instance usage.

Is this a requirement for any implementation of type classes? I believe BitC has lexically scoped instance selection, so it's not global. What is an example of "incoherent instance usage"?

Suppose we have a Map

Suppose we have a Map datatype, which relies on the Ord typeclass. Incoherent instance usage would let you mess up Ord-based invariants by using different orderings at different points.

There are various ways of dealing with this; Haskell's is a strict 1:1 mapping between types (or rather, typeclass parameters) and instances.

This is a good example

Of why what Haskell does is bad. It limits you to a fixed ordering for your Map datatype. A better solution IMO would let you bind the ordering function to the map datatype, but allow different maps to be bound with different ordering relations. (I know you could explicitly take an ordering parameter - that's not the point.)

Yep, that's one possible

Yep, that's one possible solution - though different orderings must result in different map types, which forces you to either put the instance in the type or go to full dependent types (incidentally, I've been playing around with a system that takes the first approach and supports explicit evidence passing). The Haskellish way is just to define a newtype with the appropriate ordering.
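
A sketch of that newtype route in Haskell (Data.Ord's Down wrapper reverses an Ord instance; asc and desc are illustrative names):

import qualified Data.Map as M
import Data.Ord (Down (..))

asc :: M.Map Int String
asc = M.fromList [(1, "one"), (2, "two")]

-- Keyed by Down Int, so this map uses the reversed ordering; the two
-- orderings live at different types and can never be mixed on one map.
desc :: M.Map (Down Int) String
desc = M.fromList [(Down 1, "one"), (Down 2, "two")]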

Agreed

I guess I interpreted your comments about the situation being hard to avoid as referring to language design in general, rather than to reasons specific to Haskell's approach. I worked out a scheme that sounds somewhat similar to what you've been playing with... sort of a module-level type system supporting instance unification. I think it could be combined with any number of type-level type systems - dependent types or something lighter (I'm actually using a somewhat unorthodox type system at that level).

The obvious thing in my case

The obvious thing in my case is for => to become a pi-style binder - "forall a.ord1:Ord a => a -> a -> Tree a ord1" etc etc. It's pretty hard to avoid 1:1 types->instances without introducing either phasing (which generates the 'new types') or a pi-style binder, I think. I'd be interested to hear any alternative you're using though?

Parameterising types on the associated instances does give you a 'new type' for each instance, of course :-)

Yes, I think most everything

Yes, I think my system can be encoded with pi-style constructs. I realized after fleshing out my initial idea that most of the stuff I had done was just a sugaring of an ML-style module system with an inference layer on top. But oh well, a little sugar is nice sometimes...

Updated

I updated the definition of "g" to fix the bug.

The essential result of parametricity is: Given a value of universally quantified type, we can't derive any information from the value. We can write functions taking it as a parameter, and bundle it up in data structures, and so on. But we can't "observe" anything about it, for example by comparing it to None, or converting it to a string or integer.

Thus, parametricity is a security guarantee. If you have a function taking values of a universally quantified type, you're guaranteed that the function can't dissect the value. Consider it being a password or private key. Without parametricity, you could see whether it's an integer or string, and print it out!
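
A tiny sketch of that guarantee (route is a made-up name): nothing in this type lets the function examine the secret; all it can do is hand it on.

route :: s -> (s -> r) -> r
route secret k = k secret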

It also enables us to define closed operations and values. For example, any function f::(t,t->t)->t can only return its first argument, or some iterative application of its second argument to its first. Without parametricity, you could compare the argument to various values, or cast it to various types, and do just about anything. Thus parametricity plays an important role in guaranteeing that programs obey universal properties.

I think...

You should fix your example; there is no second argument. Did you mean 'f::t,t -> t -> t?', or 'f:: (t,t->t) -> t -> t'?

From context

He means the first element of the pair (t, t->t).
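
Under that reading, here is a sketch of what the inhabitants look like (twice is an illustrative name) - every total function of this type applies the second component some fixed number of times to the first:

twice :: (t, t -> t) -> t
twice (x, s) = s (s x)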

Aaaaaahaaaaaaaaaaah!

Disagreement on operator priorities ;-)

Ya

Same issue for me on first read actually

Why does encapsulation need to be achieved by parametricity?

Parametricity is one way of achieving the security property of encapsulation. But typical FP languages have either opaque functions, or objects with private fields, either of which also provide encapsulation (arguably in a way that is more obvious to programmers). So why does encapsulation need to be achieved by parametricity?

I take the point about closed operations and values, but assuming function values cannot be compared (either intensionally or by identity), we can use a function/object that encapsulates a value to represent a closed value. (It can be detected that it is a function/object and not a primitive value, but by itself that's pretty harmless.)

So the issue of whether a language should have parametricity isn't one of expressiveness: it is just a matter of whether we more often want to be able to cast or inspect the types of values, or more often want to create and use closed values.

One possible answer...

...has already been given on this thread, here.

Encapsulation and parametricity

Encapsulation can be achieved through many means other than parametricity. Java is a good example: It's aparametric since, given an arbitrary object, you can observe whether it's an instance of a given class. But it supports encapsulation via private fields.

Has anybody proven the full encapsulation mechanism of Java to be secure? Such an effort would be extremely complicated because it relies on reasoning about the class loader, introspection functionality, the runtime environment, etc.

This is a very different situation from a parametric language; parametricity guarantees secure encapsulation.

Forget Java

Java is not a good example of a language supporting encapsulation as I suggested, because it's way overcomplicated. If a language has opaque closure values (which does not require it to be any more complicated than the lambda calculus), then you can encode private variables by having a function close over them. That guarantees secure encapsulation by means that are just as simple as parametricity. E and Cajita are languages that use this approach.
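
A sketch of that encoding (in Haskell rather than E or Cajita; makeCounter is a made-up name) - the counter's state is reachable only through the closures that capture it:

import Data.IORef

makeCounter :: IO (IO Int, IO ())
makeCounter = do
  ref <- newIORef 0
  let current   = readIORef ref          -- observe the count
      increment = modifyIORef ref (+ 1)  -- bump the hidden state
  return (current, increment)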

So, I remain entirely unconvinced that parametricity is needed for this reason.

Moar!

More on parametricity from Wadler. Especially see Theorems for Free.

See Typed Scheme

The system you are proposing is similar in many ways to the type system in Typed Scheme [1]. For example, you can write this program in Typed Scheme:

(let ([x (assoc k l)])
   (if x
       (cdr x)
       (error "key not found")))

Here the value #f in Scheme is similar to your Noneable value.

[1] http://www.ccs.neu.edu/~samth/typed-scheme/
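
For comparison, a Haskell rendering of the same idiom (get is an illustrative name), where the disjoint Maybe returned by lookup replaces the in-band #f sentinel:

get :: Eq k => k -> [(k, v)] -> v
get k l = case lookup k l of
            Just v  -> v
            Nothing -> error "key not found"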

So, if I understand correctly ....

"The essential result of parametricity is: Given a value of universally quantified type, we can't derive any information from the value. We can write functions taking it as a parameter, and bundle it up in data structures, and so on. But we can't "observe" anything about it, for example by comparing it to None, or converting it to a string or integer."

I imagine that if I'm supporting full reflection/RTTI, parametricity is out the window for my toy language anyway, yes?

Scott

Parametricity is violated in

Parametricity is violated in any function that does use RTTI, but not in functions that don't. Perhaps there's a way to distinguish functions that access RTTI, and so are not parametric, from those that don't, that is simpler than information-flow systems.

Easy

Sure, just distinguish syntactically between parametric and non-parametric type variables and disallow non-parametric type variables being instantiated with, or RTTI being used on, types containing parametric variables.
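
Haskell's Typeable class is one existing shape of this idea - RTTI is available only on type variables explicitly marked with the constraint. A small sketch (describe and opaque are illustrative names):

import Data.Typeable

describe :: Typeable a => a -> String
describe x = show (typeOf x)   -- may inspect the type

opaque :: a -> a               -- unmarked, hence fully parametric
opaque = id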

Re: Typed Scheme

Typed Scheme is very cool, and I'm considering implementing my compiler in it (although it's a bit "green").

Typed Scheme looks at predicates and conditionals to discriminate types/values in a union (if I have my terminology correct).

I'm taking a "lower road," where a positive pattern match and binding is required to distinguish None from a non-null value (or Eof from Char or whatever else the developer dreams up) and to make use of the non-None value.

Scott

Typed Scheme

Typed Scheme is very cool, and I'm considering implementing my compiler in it (although it's a bit "green").

I'm glad you like it. If you've found any bugs or rough edges, which it sounds like you have, please let me know.

Typed scheme

Reading through the docs and working through a few toy examples, a few questions remain for me.

1) Integration with PLT's match library?
More elegant, syntactically obvious handling of Typed Scheme's version of algebraic data types; it allows for nice nesting and so forth.

2) Full support of optional and keyword arguments? It would have to support type variables, etc. I really like the ability to define functions like this:

(define (find x seq [pred equal?]) ...)

3) Documentation regarding integrating typed-scheme into #lang scheme (and vice versa). I like PLT's inclusion of Lex/Yacc libraries, but it's unclear what I'm getting into re: using these with typed-scheme.

Hope that helps.

Scott

Typed Scheme

Here are a few answers. Mostly, I need to work on improving the documentation; sorry about that.

1. `match' should work fine:

#lang typed-scheme

(match (list 1 2 3)
  [(list a b c) (+ a b c)]
  [_ 0])

2. There isn't yet full support for keyword and optional arguments.

3. I believe that you should be able to just `require' from and `provide' to other modules written in the `scheme' language. What aren't you clear about?

In general, Typed Scheme questions are more likely to get answered on the plt-scheme mailing list.

On typed scheme, many thanks

I figured out the scheme/match issue, and I'm now on the PLT mailing list.

Thanks again - Scott

Re: Maybe a = Just a | Nothing

This seems quite elegant to me as well, but my developer buddy "test subject" - by whom I periodically run the latest version of my language design - chuckled pretty hard at the thought of multiple really giant memory-mapped files full of "Just a" data structures (which, now that I think of it many moons later, would require some interesting memory representation hackery to achieve in any case).

Representations for tagged

Representations for tagged unions don't cost more than representations of the sort of tagless values you were proposing before. Either way, the representation requires you be able to distinguish values.

As a note, you can reply directly to a comment by clicking on the 'reply' option associated with that comment. Doing so helps reduce the degree to which a reply is separated from its source comment.

I recall a recent LTU thread on Maybe a and heap allocation

IIRC, the issue was that data Maybe a = Some a | Nothing required heap allocation for Some a, due to logical support for Some (Some (Some a)) and so on. I believe (Some (Some Nothing)) was also a relevant case.

As for general type-tagging efficiency, I read a paper on Squeak memory management which discussed language-definable tagging, allowing tagging strategies of differing efficiency, in limited cases due to limited bits, etc.

Scott

Some Nothing

If you don't want to deal with returning 'Some Nothing' or 'Some Some Something', you avoid it the same way you would in an object-oriented language.

Compare:

if(null == arg) return null;
return new C(arg);

with

case arg of 
  Nothing => Nothing
  Just y  => Just (C y)

The 'Some Nothing' case exists and would typically involve heap-allocation, but so does 'new C(null)'.

In any case, strong static types allow the system (in many cases) to improve representation costs by leveraging the information available from the static typing context.
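
One concrete, GHC-specific illustration (my example, not the comment's): with strict, monomorphic fields the compiler can unpack the payload directly into the constructor, removing a pointer indirection per field.

data Point = Point {-# UNPACK #-} !Int {-# UNPACK #-} !Int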

Re: RTTI....

That might be a hard one. The language is an experiment in designing a statically typed language with various dynamic trap doors to use when needed. In the case of RTTI, among other features (like retrieving an inspectable representation of a class), every class automatically defines a type predicate: Int has Int?, Str has Str?, BlueLagoon has BlueLagoon? and so on. I'd have to isolate all of these predicates and then propagate info about their use up through the call chain (kinda like checked exceptions or something).

Still not sure exactly what all the effort would buy me.

Scott

Please click "reply"

When replying to a comment please click the reply link underneath it. Conversations are very hard to follow otherwise.