Proofs of a type system's safety?

I've been looking really hard at linear types lately, and I made my own implementation of them in Haskell.

It's not a good implementation. I don't think it can handle multiple linear variables at once, and it's probably leakier than a sack full of holes. However, my code, attached below, isn't the topic I want to talk about anyway; rather, I want to use it to raise another question: how does one prove a type system has a desired property?

What are some papers, proofs, tutorials, and books? Even just the right name to search for would be a good start. I have no clue how to prove that my system is actually safe. I don't even know what the theorem would look like in a formal notation. I don't even know what the formal notation would be!

P.S. Please don't poke holes in my code. It's just a rough draft of the idea. Besides, I'm asking about how people prove a type system works. Of course I haven't taken a good look at my own code; I don't know how to.

P.P.S. Here's the attached code. The basic idea is that there is a context variable in the type. Every time one reads or writes a variable, one gets a new context variable, and therefore one can't mix the past and the future, or two concurrent events.

{-# LANGUAGE RankNTypes #-}

-- Module name "Linear", the pragma, and the imports were missing; added so
-- the file compiles.
module Linear
  ( (-->),
    (==>),
    ($=),
    (-\),
    lmap
  ) where

import Data.IORef
import System.IO.Unsafe (unsafePerformIO)
newtype Linear context a = Linear (IORef a)
newtype Process context a = Process (IO a)

instance Functor (Process context) where
  fmap f (Process x) = Process $ fmap f x

-- Applicative instance added so the Monad instance compiles on modern GHC.
instance Applicative (Process context) where
  pure = Process . pure
  Process f <*> Process x = Process (f <*> x)

instance Monad (Process context) where
  return = Process . return
  (Process x) >>= f = Process (x >>= \a -> case f a of Process p -> p)

(-->) :: Process context (Linear context a) -> (forall context1. Process context1 (Linear context1 a) -> Process context1 a -> b) -> b
(Process p) --> f = f (fmap fst w) (fmap snd w) where
  w = Process $ do
    (Linear r) <- p
    a <- readIORef r
    return (Linear r, a)

($=) :: Process context (Linear context a) -> Process context a -> (forall context1. Process context1 (Linear context1 a) -> b) -> b
(Process p) $= (Process q) = \f -> f $ Process $ do
  (Linear r) <- p
  x <- q
  writeIORef r x
  return (Linear r)
infixr 1 $=

(-\) :: (Process context (Linear context a), Process context a) -> (forall context1. Process context1 (Linear context1 a) -> b) -> b
(a,b) -\ f = ($=) a b f

lmap ::
  (
    Process context (Linear context a) ->
    forall c.
    (forall context. Process context (Linear context b) -> c) ->
    c
  ) ->
  a ->
  b
lmap f x = f (Process $ fmap Linear (newIORef x)) $ \(Process p) ->
  unsafePerformIO $ do
    (Linear s) <- p
    readIORef s

--------------------------------------------------

(==>) ::
  a ->
  (
    Process context (Linear context a) ->
    forall c.
    (forall context. Process context (Linear context b) -> c) ->
    c
  ) ->
  b
x ==> f = lmap f x

--------------------------------------------------

sample = 0 ==> \count0 ->
  count0 --> \count1 val0 ->
  (count1, fmap (+1) val0) -\ \count2 ->
  count2 --> \count3 val2 ->
  count3 $= fmap (+1) val2


Type safety: subject reduction + progress

A type system (static semantics) is not safe in itself. It is safe with respect to an operational semantics (dynamic semantics) that tells how programs execute. "Safe" means "well-typed programs do not go wrong"; it depends on the dynamic semantics, on what "going wrong" means.

The classical way to prove type safety is to define a small-step semantics (execution as a sequence of small reductions), and to show:
- subject reduction: if a program `t1` is well-typed at type `T` and reduces to a program `t2`, then `t2` is also well-typed at type `T`
- progress: if a program is well-typed at some type, either it's a value (you have to define what a "value" is) or it reduces to another program

"Subject reduction" gives you that a well-typed programs keeps being well-typed during execution. "Progress" gives you that a well-typed program cannot be "stuck" in an error state. Of course that depends what you mean by 'error', which depends on what you mean by 'value'.

As an example, for the simply-typed lambda-calculus with integers, subject reduction holds, and progress holds if you define values to be lambda-terms and integer constants. `1 (λx.x)` does not reduce and is not a value, but it is also ill-typed, so progress is not violated.
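That stuck-term example can be made concrete with a toy checker and reducer. This is a sketch for plain simply-typed lambda-calculus with integers, nothing to do with the linear system in the original post; all the names here are mine:

```haskell
data Ty = TInt | TArr Ty Ty deriving (Eq, Show)

data Term
  = Var Int        -- de Bruijn index
  | Lam Ty Term
  | App Term Term
  | Lit Int
  deriving (Eq, Show)

-- Typing: Nothing means "ill-typed".
typeOf :: [Ty] -> Term -> Maybe Ty
typeOf env (Var i)
  | i >= 0 && i < length env = Just (env !! i)
  | otherwise                = Nothing
typeOf env (Lam t b) = TArr t <$> typeOf (t : env) b
typeOf env (App f a) = do
  tf <- typeOf env f
  ta <- typeOf env a
  case tf of
    TArr t1 t2 | t1 == ta -> Just t2
    _                     -> Nothing
typeOf _ (Lit _) = Just TInt

isValue :: Term -> Bool
isValue (Lam _ _) = True
isValue (Lit _)   = True
isValue _         = False

-- Shift free indices >= c by d.
shift :: Int -> Int -> Term -> Term
shift d c (Var i)   = Var (if i >= c then i + d else i)
shift d c (Lam t b) = Lam t (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)
shift _ _ (Lit n)   = Lit n

-- Substitute s for index j.
subst :: Int -> Term -> Term -> Term
subst j s (Var i)   = if i == j then s else Var i
subst j s (Lam t b) = Lam t (subst (j + 1) (shift 1 0 s) b)
subst j s (App f a) = App (subst j s f) (subst j s a)
subst _ _ (Lit n)   = Lit n

-- One small step of call-by-value reduction; Nothing means "no step".
step :: Term -> Maybe Term
step (App (Lam _ b) a)
  | isValue a = Just (shift (-1) 0 (subst 0 (shift 1 0 a) b))
step (App f a)
  | not (isValue f) = (\f' -> App f' a) <$> step f
  | otherwise       = App f <$> step a
step _ = Nothing
```

Subject reduction and progress are then statements about `typeOf` and `step`: whenever `typeOf [] t == Just ty` and `step t == Just t'`, also `typeOf [] t' == Just ty`; and whenever `typeOf [] t == Just ty`, either `isValue t` or `step t` succeeds. The stuck term `1 (λx.x)` is `App (Lit 1) (Lam TInt (Var 0))`: `step` refuses it, but `typeOf` rejects it too.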

You can read about this in any book on the theory of type systems. Benjamin Pierce's Types and Programming Languages is the now-classical reference, but there are a lot of other good ones.

I'm not sure this is what I'm looking for

I'm not sure that is what I'm looking for. Those techniques seem to be for proving that a specific type T is inhabited only by valid data.

For example, with these techniques one might prove that a data type definition like "data List a = List (Maybe (List a)) a (Maybe (List a))", hidden inside a module that exposes only a limited set of primitives to the outside, can only be inhabited by valid data and not junk data.
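A minimal sketch of that module-hiding idea, using a much simpler invariant than the doubly-linked list above (a list that is guaranteed non-empty); all the names here are mine:

```haskell
-- In a real module you would write:
--   module NonEmpty (NonEmpty, singleton, cons, toList) where
-- exporting the type but NOT its constructor, so outside code can only
-- build values through the primitives below, which maintain the invariant.

data NonEmpty a = NE a [a]   -- invariant: always holds at least one element

singleton :: a -> NonEmpty a
singleton x = NE x []

cons :: a -> NonEmpty a -> NonEmpty a
cons x (NE y ys) = NE x (y : ys)

toList :: NonEmpty a -> [a]
toList (NE x xs) = x : xs
```

With the constructor hidden, every `NonEmpty` value a client can obtain really is non-empty; proving that every exported primitive preserves the invariant is exactly the "only valid data" style of argument being discussed.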

I want to do something different. I want to prove that some types aren't inhabited by any data, not that the data inhabiting them is correct. I might want to prove that "forall cntxt a. Linear cntxt a" is uninhabited, for example. Subject reduction seems to require that one start with an inhabitant of the type in the first place.

I'm not sure how to bridge the two ideas.

Why not?

Subject reduction is the standard method and I believe it applies to your type system. Can you come up with the set of operations that should never occur? Can you write an evaluator that will perform the checks you're interested in at runtime?

Uninhabitation is consistency

The canonical statement of uninhabitation is consistency: "Not all types are inhabited" – which you often reduce to a variant of "the type ∀α.α is not inhabited".

The usual way to prove this is to prove subject reduction and normalization: for any term `t`, any reduction sequence eventually reaches a normal form.

When you have normalization, you can consider only normal forms when reasoning about inhabitation. Are there irreducible terms of type `∀C.∀α. Linear C α`? If you're lucky, the answer is no: assuming such a normal form exists and reasoning by case analysis on it will lead you to a contradiction.

Of course this can only work if your language and its dynamic semantics are restricted enough to have normalization. Most 'practical' languages have non-terminating terms (thanks to e.g. a general fixpoint operator) and therefore inhabitants of every type (e.g. `fix (λx.x)`). There are also specific cases of languages that are consistent despite non-termination (see this blog post), when the non-terminating construction is innocent enough typing-wise.
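Haskell itself is a one-line demonstration of this point (`loop` and `bothTypes` are my names for the sketch):

```haskell
-- General recursion gives a term of every type, so the consistency
-- property above fails for Haskell: `loop` is a (non-terminating)
-- "inhabitant" of ∀α.α.
loop :: a
loop = loop

-- It typechecks at any type; thanks to laziness we can even build values
-- around it, as long as we never force the components.
bothTypes :: (Int, Bool -> Bool)
bothTypes = (loop, loop)
```

So before case analysis on normal forms can prove anything, one has to rule out terms like `loop`, which is what the normalization requirement does.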

Restricting structural induction

Subject reduction doesn't just work on whole programs; it also works on open fragments, i.e. terms containing free variables.

It's easier to prove that a type is uninhabited if the type system satisfies subject reduction, since this allows one to restrict structural induction on terms to those obtainable through the type inference rules.

Abstract interpretation is also useful for showing types to be uninhabited.

Doh! So it's universal quantification, not existential.

I think I understand my mistake.

I think I confused universal quantification with existential. Am I supposed to prove that all types are inhabited by the right data as opposed to one particular type? In my case, I'd guess one of the things I'd have to prove would be that normal data couldn't have side effects in them.

About a dynamic checker, that is easy. The basic idea is to make the type "Linear" equal to "IORef (IORef a)" instead of "IORef a". This extra level of indirection lets one mark old references with errors in case they are ever referred to twice.
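That dynamic checker might be sketched like this (the names are mine, and I use a `Maybe` inside the outer `IORef` as the "error mark" rather than a second bare `IORef`):

```haskell
import Data.IORef

-- The outer IORef holds Just the live inner reference, or Nothing once
-- the handle has been consumed.
newtype DynLinear a = DynLinear (IORef (Maybe (IORef a)))

newLinear :: a -> IO (DynLinear a)
newLinear x = do
  inner <- newIORef x
  DynLinear <$> newIORef (Just inner)

-- Consume a handle: the first use succeeds and returns a fresh handle;
-- any later use of the old handle is reported as a linearity violation.
useOnce :: DynLinear a -> IO (Either String (a, DynLinear a))
useOnce (DynLinear outer) = do
  m <- readIORef outer
  case m of
    Nothing    -> return (Left "linearity violation: reference used twice")
    Just inner -> do
      writeIORef outer Nothing          -- mark the old handle as dead
      x <- readIORef inner
      fresh <- newIORef (Just inner)
      return (Right (x, DynLinear fresh))
```

Running a program under this representation makes every double use of a reference observable at runtime, which is the kind of evaluator-with-checks the previous comment asked about.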

As to operations that should never occur, those should be the basic linear logic ones, like the fact that a reference can never be referred to twice.

Recursion gave me headaches in my last attempt. I constructed everything using a map operation of type "(a -> b) -> Linear a -> Linear b". This didn't work, for several recursion-related reasons.

These two were annoying:

umap $ \x -> fmap (\_ -> x) x
y $ fmap (\_ -> x)

About the proof-by-contradiction approach: what basic axioms should I assume, such that they would be contradicted by the assumption that the type "forall c a. Linear c a" is inhabited by anything other than exceptions, infinite loops, and things that cheat using "unsafeCoerce#"?