John Hughes on Erlang and Haskell

John Hughes talks about his experience with Erlang vs. Haskell in an InfoQ interview. While the discussions about strict vs. lazy, pure vs. side-effecting, and dynamic vs. static typing are interesting, he raises a good question for the LtU crowd at the end:

I think functional programming is a very interesting concept for the future and for the present indeed. One of the things I do wonder about though, is when I got interested in the field, the mainstream was probably Fortran and COBOL and even C was fairly new at that time. The functional programming pioneers spoke of an order of magnitude improvement in productivity and I think functional programming has delivered that.

If you compare Haskell programs to C code or even C++ often, they are about an order of magnitude smaller and simpler. The same is for Erlang, those results are being validated in the industry. Where is the next order of magnitude coming from? I wish I had an answer to that question because it's hard to see almost. When you look at a beautiful Haskell program, how could this be 10 times shorter? But I think we need to be asking ourselves that kind of question. If I had a good idea there, I would spend the rest of my career working on it.

So, LtU, where is the next order of magnitude coming from?

Basis of Comparison

"If you compare Haskell programs to C code or even C++ often, they are about an order of magnitude smaller and simpler."

What substantial (> 100 KLOC) applications have been written in both languages to warrant such an extraordinary claim?

I wrote one (not 100K though)

The original C program was 4KLOC, and the rewritten OCaml version was 598 lines.

The exact numbers were:

              wc -c  wc -l

  virsh     126,056  4,641
  mlvirsh    19,427    598

  % size        15%    13%

Note that the source for both programs has changed a lot since I did that. You'd have to go back through the VC history and look for the particular versions that I compared.

I wouldn't be surprised by

I wouldn't be surprised by shrinkage, but rewriting a system in the same language would probably also shrink it due to gained design insight; that's not apples-to-apples.

New program, not a rewrite

I didn't write the original, and in any case the program is a straightforward shell - there's hardly any room for "insight", since all it does is take commands typed by the user and dispatch them to the underlying library.

10x? Not that again!

Why do people take these claims seriously and keep repeating them? So cliché!

'sides, functional programming languages were around long before C - was the order of magnitude productivity gain around then?

I think pretty much any widely used programming environment is at least 10x as productive as any environment was 30 years ago - this is not because of any new paradigm, but because of library support - ultimately made possible by better hardware.

So to answer - the next order of magnitude is going to come from library and hardware support, just as it always has.

All Modern Languages have elements of Functional Programming

Java, JavaScript: see anonymous functions
C#, Ruby: see lambda expressions

Not in Java

Java has no anonymous functions. Anonymous objects with static closure, yes, but no function type or anonymous function. You can make it work, but it's very ugly. Also, if you have static types, you really need tuples to make functional programming work very well. Java lacks these.

All in all, Java makes true functional programming pretty difficult, except in very simple cases like Comparator.

Too bad real closure support seems to be dead for Java 7. :-(

No one place

But here are a couple of possibilities:

Rational and composable abstractions for cross-cutting concerns. There's a lot of redundancy and boilerplate that goes into handling the sorts of stuff that aspect-oriented folk talk about. Even if you don't much like their solutions, they are certainly exposing a serious problem.

Moving beyond traversing physical data structures to querying and updating logical data structures. Even in the one-order-of-magnitude languages, there's way too much code spent in traversal of data graphs. Imagine working with in-memory relations and sets rather than lists and maps, with the compiler sweating the implementation details.

Can you give some examples

Can you give some examples of cross-cutting concerns?

From a few years ago, when AOP was hot, I remember people repeating two examples: logging and drawing shapes/observer pattern. Neither seems important enough to introduce a mechanism that destroys local reasoning.

Cross-cutting concerns

Security, transactions, monitoring/management, and session management. Code for these ends up scattered all over most enterprise applications, in ways that are both utterly stereotypical and easy to screw up.

Security, I agree.

Security,

I agree. Information flow and verification of security properties will be useful, though I'm not certain they will reduce code size; in fact, perhaps the opposite, given that much code nowadays isn't overly concerned with sophisticated security properties.

monitoring/management

You mean live upgrade? Or heap profiling? Or both?

session management

This is indeed important, and we have a glimmer of good solutions with delimited continuations.

I don't follow -- what's the

I don't follow -- what's the connection between aspects and verifying inf. flow properties? Perhaps you mean something like runtime enforcement when atoms are predicated on dynamic labels?

[I do agree about the use of aspects for enforcing security policies: two of my recent web projects use them for just that. Funny enough, one even brings in information flow type systems -- but I don't think in the way you mean. Will try to remember to update this thread once I can release the writeups.]

edit: aspects do decrease code size, and in an important way. If your language has them (which isn't a super big deal at the interpreter/compiler level), you can remove the frameworks faking them from your TCB. That's hairy code when not naturally supported, so it's a big win.

Monitoring/management

You mean live upgrade? Or heap profiling? Or both?

Nothing so deep, I'm afraid. Just the everyday plumbing of hooking up enterprise applications to management consoles. Basically making the application as a whole and individual components of it act as Observables. Right now this takes either fancy and fragile reflection, or a large amount of boilerplate.

More Abstraction

Abstracting over patterns (pattern calculus, first-class patterns), abstracting over type structure (polytypism, polyvariadicity, more?), abstracting over execution (parallelism), abstracting over location (mobility), abstracting over bit-level data, and more use of partial evaluation and so less fear of using higher abstraction due to perceived performance degradation. That last point alone can bring significant improvements in conciseness.

Libraries

IMO the largest remaining productivity gains will come from finding the right abstractions to place into libraries. The main improvement to come from languages will be more convenient expression of the needed abstractions. There's only so much concision that an abstraction can bring to a particular problem. The big win is in obtaining libraries that are more universally applicable so that more problems can be solved with canned solutions.

The main improvement to come

The main improvement to come from languages will be more convenient expression of the needed abstractions. There's only so much concision that an abstraction can bring to a particular problem.

I agree. Haskell seems to be settling on some high-level abstractions and their composition (monoids, arrows, monads, applicative, etc.). I think the first two points I mentioned are in this vein as well. Control over bit-level representation is important for displacing unsafe code with safer, higher-level code, so this too is a win for conciseness IMO.

What happened to predicate dispatching?

Whatever happened to predicate dispatching and languages such as Cecil? It's more general than pattern matching and multiple dispatch, and it promotes conditional logic to a first-class citizen.

apparently it has 'issues' still?

according to Pascal Costanza (at the bottom of all that).

Simple predicate dispatch

The problem with predicate dispatch is that you can't check implication between predicates. If you could determine that P => Q then you'd first test for P and execute P's method, otherwise test for Q and execute Q's method. Unfortunately this is undecidable in general. Subsets like type tests are often decidable. For example since Cat?(x) => Animal?(x) your first choice is to execute a method on Cat and only if that doesn't exist you execute it on Animal.

Is this really necessary? I believe that a much simpler approach works in practice: you look at the definition order of methods. If you have an Animal class and a Cat class you usually define methods on Animal first and on Cat later (later meaning that the methods on Cat come after the methods on Animal in the source code). This suggests the simple policy of trying methods last-to-first, i.e. the opposite order of pattern matching in functional languages.
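
To make that last-to-first policy concrete, here is a minimal Haskell sketch (all names are mine, purely illustrative) of trying methods in reverse definition order:

-- A method is a (predicate, body) pair, kept in source-definition order.
type Method a b = (a -> Bool, a -> b)

-- Try the most recently defined method first, falling back to earlier
-- definitions; Nothing means no method applies.
dispatch :: [Method a b] -> a -> Maybe b
dispatch methods x =
  case [body x | (applies, body) <- reverse methods, applies x] of
    (result : _) -> Just result
    []           -> Nothing

-- Animal's method is defined first, Cat's later, so Cat wins for cats.
describe :: String -> Maybe String
describe = dispatch
  [ (const True, const "some animal")  -- Animal method
  , ((== "cat"), const "meow")         -- Cat method, defined later
  ]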

Predicates and aspects

I think it can be simple; I just assume there aren't large enough projects yet to tell whether it turns into spaghetti code.

I've been toying with the idea of predicated blocks, where a block (lambda) returns a value if the predicate is true, and void if not. Assigning void does nothing; it is a nonexistent value:

f = n [n even?] -> n * n
x = f eval: 2  # x is now 4
x = f eval: 3  # x is still 4

Because of this, conditional logic is simply applying these predicate blocks:

x = if: [x > 1] -> 1
or
x = ([x > 1] -> 1) eval

I haven't implemented predicate blocks yet, so I can't say if this will turn out or not, but I'm hopeful.

Predicates could also be used for applying AOP concepts:

@security = 5
def foo( n )
  puts n
end

def foo( n ) when n.even?
  puts next_method() + " is even"
end

def Object.send(msg, args) when access < security
  puts "You can't be here"
end

In Ruby, you can override "send()", which is the central dispatch function, but it can get messy/dangerous when you steal core functionality. Predicates would only intercept relevant conditions.

Interesting. Do you have

Interesting. Do you have ideas about composing predicated blocks? For example:

seq(f, g) = [x] -> 
  let v = f eval: x
  if(v is not void) return v
  else return g eval: x

Maybe there are other useful combiners?

I have been thinking about AOP and in particular COP along the same lines: context oriented programming = predicate dispatch + dynamic variables.

I'm not entirely clear on

I'm not entirely clear on the "[x]" part; do you mean the same as this in Haskell (I'm no Haskell expert, by the way)?

seq(f, g) = \x -> ...

If so, it could be something like this if there were an OR operator for void:

seq = f, g -> x ->
  return f eval: x || g eval: x
end

Do you have another example of composing functions with predicates? This could be interesting.

BTW, a block with two predicates would look something like this:

stream each: (char [alpha?] -> read-identifier;
                   [digit?] -> read-number
end)

It looks very similar to pattern matching, but it can also be passed around since it's a first-class citizen. Parentheses are added to highlight the predicate block.

You could take other logical

You could take other logical operators (&&, xor). I don't know if these are useful.

The maybe monad comes to mind:

compose(f, g) = \x -> f(x) && g(f(x))
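
Modelling void as Maybe's Nothing, both combiners fall out of the standard instances - a minimal Haskell sketch:

import Control.Applicative ((<|>))

-- A predicated block as a partial function; Nothing plays the role of void.
evenSquare :: Int -> Maybe Int
evenSquare n | even n    = Just (n * n)
             | otherwise = Nothing

-- seq(f, g): try f, and fall back to g when f yields void.
orElse :: (a -> Maybe b) -> (a -> Maybe b) -> a -> Maybe b
orElse f g x = f x <|> g x

-- compose(f, g): run g on f's result only when f produced one.
andThen :: (a -> Maybe b) -> (b -> Maybe c) -> a -> Maybe c
andThen f g x = f x >>= g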

Maybe I'm wrong, but...

I don't think context-oriented programming has to be that complex.

Is COP ever defined exactly in the way you describe?

What I said wasn't what I

What I said wasn't what I meant. I meant that predicate dispatch + dynamic binding provides the features of COP, but not the other way around. COP certainly doesn't provide predicate dispatch. COP ⊂ predicate dispatch + dynamic variables.

Layer activations become bindings to dynamic variables and method definitions in layers become methods that dispatch on the values of dynamic variables.

Can you go more into detail

Can you go more into detail about context oriented programming? For example, where do the dynamic variables come into play?

There's a language called Cecil which has predicate classes - an object can become a different class depending on a predicate:

pred full_buffer isa buffer when buffer.is_full;
method put(b@full_buffer, x) { ... }

I think that could be very useful, but I haven't used the language to a great extent.

This is the same as abstract state

The way you define full_buffer is exactly how I would use abstract state in dodo (although dodo is more verbose).

For a polymorphic variable, it makes no sense to store the state separately; dodo makes it part of the type and stores the type pointer in the variable.

class Buffer:
   ...
   rules:
      state = if (is_full) full_buffer else partfull_buffer.
...
<state name="partfull_buffer">
   method Put: Put(x)
   {
       #buffer is part full, we can add an item now
       ...
   }.
</state>
.

Ahh yes, states!

Dodo looks interesting; I'll have to take a look at it. States reminded me of UnrealScript states, although I don't think there is a "rules" type of functionality. It can also be done in languages supporting prototypical inheritance, Io in this example:

Buffer := Object clone do(
  full := self clone

  put := method(value,
    self setProto(full)
  )

  full put := method(value,
    writeln("Buffer full!")
  )
)

Or using multi-methods and explicitly passing state (Dylan):

define method put (buffer :: <buffer>, state)
  buffer.state := #full;
end;

define method put (buffer :: <buffer>, state == #full)
  format-out ("Buffer full!");
end;

10x? Hmm...

By saying that Haskell programs could be shrunk 10x, he's implying that there is still a lot of redundancy in the code of Haskell projects. Is Haskell really so bad that it has a significantly large amount of room for improvement in this respect?

In addition, I think his whole focus on program size is misled. As an analogy, once you rise to a certain point above the poverty line (code bloat and redundancy), more money (compression) doesn't increase happiness (readability, usefulness). I don't agree with reducing program size simply for the sake of code compression. Productivity is what needs to be focused on.

If any productivity is to be gained after hitting the maintainable-terseness limit, it is through tooling:
* structured editors
* automated code refactoring
* modular, composable language systems (type systems, paradigm implementations, etc.)
* spec-to-code verification

The tool is going to have to do a lot of the manual coding for us. We're going to be simply telling it what to do -- what refactorings to make, what semantics to implement. Tools will become more intelligent and, thereby, more assistive and autonomous -- more actually helpful and useful.

Information density

I've seen back-of-the-envelope calcs (sorry, no cites) that an average program of the Fortran/C/Java variety has about two orders of magnitude less information density than an average mathematics paper. Assuming that Haskell/OCaml/Scala wins you one of those orders of magnitude, that leaves 10x room for improvement until programming is as expressive as mathematics. The cause of this is mostly that mathematics allows for a lot more abstraction than even the most flexible programming language currently available. This is all hand-waving, but it seems reasonable enough for purposes of argument.

Mathematicians have good libraries

Since you make a comparison to mathematics, I think it's also fair to point out that the definitions and theorems that have been accumulated in mathematics can be viewed as a powerful standard library.

Libraries

Libraries in mathematics are more powerful not just because they've had 3000 years to our 60, but because there are so many more sorts of things that they can abstract over, and so many more ways of combining abstractions. We're just about good with abstracting over types, after all, whereas even the least interesting realms of mathematics abstract over type constructors, and usually over type constraints and implementation patterns. That means their libraries are more valuable than ours, and more likely to be reused. The interface to Euclid's library is still in use, even though his implementation (proofs) is probably about 30 versions obsolete.

We've ridden this analogy about as far as it will go without breaking, but I think the learnings are still valuable.

Selection Bias, Non-determinism & Non-termination

If "mathematical work" doesn't have a high abstraction/information content, it doesn't even become a paper (uninteresting to the community) while most mainstream.lang programs are truly uninteresting to almost everyone.

Also, the average (uh?) mainstream.lang program is implicitly built on top of a whole lot of concerns that are not captured in source code (they are not much better captured by non-mainstream languages, either, imho).

And a good thing too

If programs were as dense as mathematics papers, only mathematicians could read them! That would be a disaster. It is a *very good thing* that most programs are less dense than that. When it comes to readability we should be aiming for code that's readable like the New Yorker is readable, *not* like a mathematics paper is readable.

I'd say Python is getting pretty close to the ideal, actually.

The embarrassing way to code

not just LOC

One argument for comparing code size is the old rule of thumb that people tend to have the same lines-per-hour productivity regardless of language. This seemed to hold up as we compared Erlang-based projects with projects using C/C++ or Java, but the code written in these projects was fairly straightforward, without mind-boggling abstractions on either side. The code bases compared were in the hundreds, or even thousands, of KLOC.

In the first rounds, it felt fairly simple to translate this into a corresponding productivity increase. The main difficulty was to agree upon the LOC ratio, since the products compared did not have identical characteristics and functionality. A conservative estimate was that the Erlang code was at least 4x more compact, but obviously, this was not universally agreed upon. My personal opinion is that while several more or less systematic comparisons supported this, the objections tended to be more of the hand-waving variety. I have asked for more concrete contrary evidence for years, but have yet to be shown a single study. I'm convinced such studies would help the discussion.

Initially, the bug ratio per KLOC was roughly the same, but a few years later, the bug ratio in the Erlang projects had been reduced by a factor of 4 (mainly through automated testing), while similar improvements were not seen in the other projects. Furthermore, portability and the cost of making spin-off products seemed much better on the Erlang side (this is me hand-waving; we didn't follow through with further studies, since the first one was not particularly welcome except among those already convinced).

We have also seen that the average number of lines of code per developer tends to be some 4-10x higher in the Erlang-related projects than in others (and these all tended to show roughly similar numbers for C++ or Java). If we accept the observation that Erlang code tends to be 4-10x shorter than corresponding C++ code, and take into account that project costs increase with size in a way that is much worse than linear, the 10x productivity increase starts to seem like a very conservative estimate over time.

One explanation for this might be that side-effect-free code composes so much more easily, and is much less sensitive to peripheral changes. Once you have weeded out all bugs from a purely contained component, it stays bug-free forever, no matter how many places it is used in (provided it is used correctly). If so, this seems to support the argument that increased productivity will largely come from the growth of high-quality functional libraries that can be composed, opening the way for new and even more powerful abstractions, and so on.

shoes and paths

The shoemaker watched the weaver leave, and frowned. Yes, he supposed it was maybe a little bit strange that his children had no shoes. But what kind of shoes could one make for children with lacerated gangrenous feet? And besides, the children were all sickly, and slept a lot, and all died young. So was it really worthwhile? There just isn't a market for such shoes, he decided. And I am, above all else, a businessman. He nodded, and walked across to his last, stepping between the caltrops, his booted feet crunching on the sewage-beslimmed broken glass covering the floor of his home. It's simply unclear how one might approach it, he thought.

How might our programming languages and environments be improved? How about this. Recover what once was, but has been lost. Debuggers which can run time backwards. Multiple dispatch. Gather what exists, but is scattered. Choice of tools need not be a choice of "in which ways shall I be metastatically crippled on *this* project?". Create the obvious but neglected. Constraint-propagating knowledge management in IDEs. Attract funding. Increased competition in areas where programming tech is a secondary competitive advantage. Increased and improved research funding, the lack of which is simply a will to fail. Create, perhaps, a common community vision of what is needed. Helps with funding.

The answer hasn't changed since 1999. Or changed much since 1989.

It's wonderful how much less things suck each year. But change has been so very slooowwwwwww. A long twilight vigil, making do with crippled and crippling tools, hairs going gray while waiting for the economics and knowledge to gel, for civilization to creep forward.

Each year the shoemaker's abused and pregnant wife welcomes the spring with hope, and struggles to make this year different.

Examples, please

Recover what once was, but has been lost.

Can you expand on this?

Debuggers which can run time backwards.

This technology exists today.

Gather what exists, but is scattered.

Master Metadata Management?

Choice of tools need not be a choice of "in which ways shall I be metastatically crippled on *this* project?".

Viewpoints Research Institute-related projects like OMeta?

Create the obvious but neglected.

Live sink/hoist and storage of optimization artifacts into a living record of performance trade-offs. What else?

Constraint propagating knowledge management in IDEs.

What are search terms for this that I could use to see what people are doing?

Increased and improved research funding, the lack of which is simply a will to fail.

What project do you have that I can follow and support?

Create, perhaps, a common community vision of what is needed.

Many researchers struggle with figuring out how to connect with practitioners and find practical testbeds for their ideas. Alan Kay's VPRI is no exception. Just ask them how hard it is for them to collaborate with practitioners on their wild ideas.

Tracking ideas is hard, especially when people do not encourage you to follow them. It took me 12 years from the day I started programming to the time I read books like Watch What I Do that explained what the best and brightest were doing with programming systems.

Simple

Better integration of concerns.

The driving force is capital.

1. HP's acquisition of Mercury Interactive and current plans to integrate it into its QA/Testing product line
2. Microsoft's acquisition of Hyperion and Stratature, and choosing to integrate both into the SQL Server 2008 R2 platform

...are probably the two biggest signs that the old approach of islands of tools is over.

Smaller signs include Mozilla's focus on providing real value-add on top of its browser for developers.

Types

A very expressive type system with "type libraries" + a very powerful proof engine.

The way this works is that you write what you want the software to do in a constraint-based way. Then you let the proof engine find the program that does it. This gives a truly declarative approach. Good proof engines will of course have to try to find good proofs, as proofs are NOT irrelevant when we're talking about performance.

Bullets of Very Shiny Materials

So, LtU, where is the next order of magnitude coming from?

If I were to make a list:

Reactive Programming
Functional and logic reactive programming styles (especially in combination with currying, partial-evaluation, JIT, and the use of pure combinators to develop impure procedures) allow programmers to avoid concerns about which is more 'dynamic' - the data or the code. This is especially useful in combination with Automated Code Distribution, which allows running systems to adapt and optimize.

Automated Code Distribution
Getting code from the developers to the users remains one of the most challenging tasks in program development. Doing so rapidly enough for continuous integration testing, doing so without resetting systems at one or both sides, doing so without violation of security and safety concerns, etc. remains very difficult. At the moment, programmers jump through hoops to make this happen with less than palatable results - needing to use third-party 'scripting' languages which tend to duplicate libraries and have application-specific integrations and a number of security concerns. I suppose I should mention 'persistence' here, but I really consider it to be a specialized case of disruption tolerance as part of distribution. Improvements to automated code distribution could greatly change how we develop code, especially with regards to 'live' programming and distributed 'learning' systems where both 'data' and 'code' can come from multiple sources blurring the line between developer and user. Use of reactive programming techniques can safely, securely, and optimally integrate automated code distribution with live systems.

Declarative Meta-Programming
This is a style of programming where logic is used not to program directly, but rather to develop plans that may then be iteratively executed. The primary feature of DMP is automatic selection of concrete strategies to achieve a given goal based upon properties of those strategies (both annotated and inferred) relative to the desired goal properties. That is, this is dispatch based on desired outcomes and contextually available inputs, rather than predicate- or pattern-matching dispatch on specific inputs. Properties for strategies typically include domain costs (fuel, time, noise, etc.), risks, procedural invariants (like 'be available for communications for at least one minute out of every hour'), post-conditions, etc. The exact properties of a high-level strategy may depend on its context, on which sub-strategies are available, and even upon live feedback. Strategies selected in this manner allow a much looser and far more context-dependent binding between description and implementation than do procedural or functional approaches. DMP doesn't require much 'artificial intelligence' magic - human programmers provide strategies where necessary - but it may readily leverage expert systems and AI where available.

DMP is not practical today, since it really requires the first two elements to be secure and high-performance.

DMP also requires huge libraries/databases of knowledge and meta-knowledge - strategies, assumptions, constraints (law, ethics, physics, capabilities and service contracts), and interpretation (understanding goals, in context). One difference between DMP and other designs is that DMP can potentially leverage large libraries automatically: as a library grows more robust and offers both more specialized and more abstract strategies, better assumptions, more data... DMP programs attached to that library may reactively leverage these improvements without change. While libraries exist today for procedural and functional programming, they are simply not annotated in a manner suitable for automated selection or development of strategies.

Further, more development is necessary to prove desirable meta-level properties for DMP, such as progress and safety - that select DMP sub-programs can be guaranteed to achieve a sub-goal within the contextual constraints and invariants, or at least fail in a very well-defined manner in terms of domain properties. Partial-failures remain especially problematic; knowing which services can recover from partial failures, and under which conditions they can do so, and how to effect this recovery, is critical to automatically producing 'safe' plans. These forms of 'safety' and 'progress' encompass far more than type-safety and computation progress.

The output of DMP for a given goal is a slightly more concrete strategy, with hooks for developing sub-plans, contingencies based upon acquisition of more information. Hooks to consult supervisors and expert systems need not be considered different from other object capabilities and service contracts. Ideally, the output strategy is live and reactive such that re-planning (and re-prioritization) can be performed on-the-fly based on changes in assumptions and knowledge and improvements in strategies and tweaks in functions. Ideally, change subsets can readily be JIT'd for performance. Such 'reactive' properties are also extremely useful in debugging, since the associations necessary to create reactive procedures and plans are also sufficient to explain plans to developers and managers in terms of why strategies were selected based on live data, performance (domain costs of all sorts - fuel, time, noise, etc), desired constraints specific to the context, etc.

For secure composition and system integration, DMP must allow gathering data, strategy, code from disparate sources trusted on a per-use basis. That is, the 'library' or 'database' used by a given application may need to be a reactive transclusion of other libraries and databases. Automated code distribution, in combination with capability security and reactive programming, are elements that can make these distributed multi-sourced libraries practical. Reactive programming allows high performance.

DMP for domain programming is (theoretically) much stronger than related logic-based programming styles. Logic and constraint-logic programming tend to allow conclusions about a world, but don't much help decisions on what to do about it. Rules-based programming executes side-effects based on observations and conclusions, but fails to capture the purpose or reason for those side-effects and so quickly grows into an arcane, tangled mess.
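
For flavor only, here is a tiny Haskell sketch of the property-based strategy selection described above; everything in it (the record fields, the scalar cost model) is hypothetical and far simpler than what full DMP would need:

import Data.List (sortOn)

-- Strategies carry annotated/inferred properties and are selected
-- against a goal context, not dispatched on a specific input.
data Strategy ctx act = Strategy
  { strategyName :: String
  , applicable   :: ctx -> Bool    -- preconditions
  , cost         :: ctx -> Double  -- domain cost: fuel, time, noise, ...
  , plan         :: ctx -> act
  }

-- Pick the cheapest strategy whose preconditions hold in this context.
selectStrategy :: ctx -> [Strategy ctx act] -> Maybe (Strategy ctx act)
selectStrategy ctx ss =
  case sortOn (\s -> cost s ctx) (filter (\s -> applicable s ctx) ss) of
    (best : _) -> Just best
    []         -> Nothing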

speaking as a code maintainer as well as developer

I worry that reducing boilerplate sometimes leads to things which are inscrutable and hard to suss out when it comes time to fix bugs or extend the system.

Boilerplate Code

It is true that moving something that was semantic noise - but, importantly, explicit - to become implicit can sometimes make things more difficult for a late-joining developer. It is difficult even for skilled programmers new to a project to rapidly locate and identify things like asynchronous task queues, polymorphism, subscriptions, etc. and grok how they all tie together in a large project.

However, the problems that exist in a given model will still implicitly exist in the same model implemented in such a manner that the programmer must use explicit boilerplate. The only differences: there are new opportunities to introduce code bugs, and any design bugs are buried under semantic noise. By increasing complexity far enough, you reach a point where there are no obvious bugs...

I think it is one task of an IDE and language design to help teach developers what they need to know, but I think this is compatible with rejecting boilerplate in every form you can feasibly eradicate.

Yes, you go from writing

Yes, you go from writing repetitive code to having code written for you, whose execution you then have to guess at. Perhaps it's best to have a representation of the boilerplate to examine and debug, even if you don't write it yourself.

Debugging the generated code

This is what they said about assembly language, too. In some cases it's still true, but the number of times I've peeked at the code generated by gcc represents a minuscule percentage of all the code I've compiled. So... progress is possible.

Type Theory can be a sort of DMP

If you have a type theory where inhabitation is proved by demonstrating a program of the appropriate type, you have a kind of DMP. You get the logical content in the type, and the actual method of achieving the type (the proof) in the term. Powerful theorem provers will find the term automatically in some cases. The term itself can be manipulated using rewrite strategies to achieve performance enhancement, or programmed manually in the case of bottlenecks. I think type-theory is a really nice setting for the implementation of DMP.
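
A small illustration: read the polymorphic type below as the goal proposition and the term as the proof/program found for it - a term-search tool such as Djinn can discover this inhabitant automatically:

-- The type, read logically: if a implies b and b implies c, then a
-- implies c. The term is simultaneously the proof and the program.
transitive :: (a -> b) -> (b -> c) -> (a -> c)
transitive f g = g . f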

Language Expressivity and Abusing Type Systems

Using the type-system for DMP may be useful in a few cases, but that results in a very rigid, 'static' variation of DMP, whereas the ideal form will be able to react to changes in the environment, perform re-planning at need, and yet still be able to be reduced via partial-evaluation in the event that little or no live data is required. DMP is hardly unique in this.

Use of rich type systems to influence evaluation of terms is a symptom - an anti-pattern - not a solution!

Programmers should not be required to use such indirect mechanisms that bifurcate expression of their programs into two languages (the type language and the term language). Programmers should not be required to determine at the time of writing a function or expression whether it will be fully realized in a static or dynamic context... yet the rich use of type languages to inject elements into the term language is often focused on making exactly that distinction!

A language that supports partial-evaluations or even staged evaluations would allow more flexible use of code. A language with terminating 'pure' computations would allow arbitrary degrees of partial evaluation at that layer, even if non-termination is supported in a higher layer (procedures, for example). Partial evaluation should be favored above staged approaches because it is implicit and can cut across terms, but either way the same language can be used regardless of static vs. dynamic context.

We should be going the other direction: banish all 'explicit' type system elements from the languages entirely. Type safety would still be inferred, and data-types would be inherent to exactly what is left 'undefined', and the type-system could still be 'enriched' by enriching the term language (i.e. adding sealer/unsealer pairs would force introduction of phantom types; supporting partial-functions over recursive structure would allow one to properly define matrix operations and add inherent 'uniqueness' qualifications to sets and relations, etc.), but there should be no semantics where the 'types' influence evaluation.

Type inference considered harmful

If I'm tracking you, you're objecting to a term expression that will be evaluated differently depending on what type expression goes with it. That makes some sense (although it doesn't seem very far from rejecting the idea of dispatch on type, which is a bit worrisome). But type inference has always worried me. Suppose I write some code that's supposed to do something; I am, in effect, claiming that it will correctly do what it's supposed to. If I specify a type signature for what I've written, then I'm declaring the circumstances under which I claim it will correctly do what it's supposed to. If I don't specify a type signature, and the language system infers the most general type signature for which my code is syntactically meaningful, then I've lost some control over what my code is understood to mean; I have no straightforward way to say that such-and-such will work correctly under a more limited set of circumstances, and conversely I may be blamed for the failure of my code in a situation where I never had any illusions it would work.

I'm not a fan of traditional static type systems, either, because any type system has to make allowances for Gödel's Theorem — admitting some things that aren't okay and rejecting some things that are — and the choice of those trade-offs is traditionally made by the language designer, leaving no way for the programmer to do anything really new in the type-checking dimension. Is there a way around that? Perhaps. Let propositions about programs be dynamic objects, and set up constructors for these objects corresponding to axioms, so that a proposition can't be constructed unless it's provable. The task of figuring out how to prove something is then a programming problem, and instead of accepting the language designer's compromise with Gödel's theorem, the programmer can choose their own compromise with the halting problem. (Okay, so maybe there are a few kinks to work out, like, what forms of propositions will be supported, what axioms will be supported, and how can proof-construct code be gracefully integrated with its object so that the whole is lucid rather than opaque. But I still think that ore ought to be refinable into something shiny.)
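
That idea is close in spirit to LCF-style provers. A minimal Haskell sketch (module and names hypothetical): hide the Theorem constructor, so the only way to obtain a proposition-object is through the functions encoding axioms and inference rules:

module Kernel (Prop (..), Theorem, statement, axiomIdentity, modusPonens) where

-- A tiny propositional language.
data Prop = Atom String | Imp Prop Prop
  deriving (Eq, Show)

-- The constructor is deliberately not exported: a Theorem value
-- cannot exist unless it was built by a rule, i.e. unless it is provable.
newtype Theorem = Theorem Prop

statement :: Theorem -> Prop
statement (Theorem p) = p

-- Axiom scheme: p implies p.
axiomIdentity :: Prop -> Theorem
axiomIdentity p = Theorem (Imp p p)

-- Modus ponens: from proofs of (p implies q) and p, conclude q.
modusPonens :: Theorem -> Theorem -> Maybe Theorem
modusPonens (Theorem (Imp p q)) (Theorem p') | p == p' = Just (Theorem q)
modusPonens _ _ = Nothing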

Proof systems and type systems are equivalently expressive

any type system has to make allowances for Gödel's Theorem

But any logic you introduce will necessarily have the same limitation - there will be some things you can't prove in the logic. And for any logic you choose, there's some type system that would have equivalent expressive strength.

That said, I think it's a good idea and is essentially the approach I'm using for my language. One comment I'd make is that type systems are very practical lightweight theorem provers, so you want to make sure you don't lose that practicality.

Proofs and abstractiveness

And for any logic you choose, there's some type system that would have equivalent expressive strength.

Not so sure, though, about abstractive strength. As type systems are theorem provers, they have built into them not only the logic, but the algorithm by which proofs are to be constructed in it. So they confront Gödel twice. Even if you can see a way of proving it within the logic, if the architect of the theorem prover hadn't thought of that technique, you're out of luck. One could add to the algorithm; or one could think of the type structure as the logic, and add to the type structure (but how awkward is it going to be to encode ever-more-sophisticated proof techniques in the type structure?). Either way, if it requires a language redesign it's not abstraction, and if it is abstraction then, it seems to me, we're really working with proofs rather than types.

There's also a very deep — read, "slippery" — metaphysical difference between Gödel's Theorem and the Halting Problem. Gödel's Theorem has to do with logical antinomies, which caused a major existential crisis for mathematics a century ago (and are now a sort of old cemetery next-door, that we've learned to live with, but at moments when we look that way it can still send a chill up the spine). The Halting Problem is about general programs not being able to handle some inputs, which may be annoying but just isn't that big a deal. It would be a nice de-escalation if we could "bleed off" some of our antinomies into mere nontermination.

(Pardon my not being up to speed, but is your language something that there's information about on the web? (Alas my own efforts aren't up to the proof-system stage yet.))

Slippery slope

There's also a very deep — read, "slippery" — metaphysical difference between Gödel's Theorem and the Halting Problem.

I think trying to make this distinction at all is even more slippery than the details of the distinction would be. ;-)

The Curry-Howard interpretation equates the two pretty strongly.

The Halting Problem is about general programs not being able to handle some inputs, which may be annoying but just isn't that big a deal.

By the same token, working with a broken logic isn't that big a deal: all human beings get by pretty well using one. ;-)

However, when you want to be sure about something, it pays to work in more rigorous and restricted systems, even if you lose some expressiveness.

Curry-Howard

The Curry-Howard interpretation equates the two pretty strongly.

Not really, no. Not in the sense I'm speaking of, anyway. Curry-Howard is commonly billed as a correspondence between logic and computation, but it's really between two different systems of logic, with no computation involved. Proofs of intuitionistic logic are connected with types of typed lambda-calculus, not with reductions of typed lambda-calculus.

Besides which, my point isn't about any mathematical difference, or lack thereof, between Gödel and Halting, but rather about a metaphysical difference between them. Actually triggering the anomaly of Russell's paradox destroys the fabric of one's reality (so to speak); but if you define predicate

($define! A ($lambda (X) (not? (X X))))

and then apply A to itself, what happens isn't nearly as traumatic.

All of which is not unrelated to the fact that programs actually "do" something — they impinge on the real world — so that they can be useful to us without our being absolutely sure whether they'll halt, whereas the entire raison d'être of mathematical proofs is contingent on their absolute certainty.

It's also too easy to claim that human beings use a broken logic; I submit that human beings in their native operational mode aren't using any of the usual systems of "formal logic". What they're actually using is much more like computation than like logic. Present a person with Russell's paradox, and they won't cease to be able to make any distinctions between things. Rather, they'll progress forward by a finite amount through the loop (probably just once all the way around), recognize that it's a loop, and abort the subprocess.

Curry-Howard ain't so common

Curry-Howard is commonly billed as a correspondence between logic and computation, but it's really between two different systems of logic, with no computation involved.

I understand this perspective, I just don't happen to share it. ;-)

In my scheme of things, a logic needs a proof system and any non-trivial proof system is necessarily computational.

Proofs of intuitionistic logic are connected with types of typed language-calculus, not with reductions of typed lambda-calculus.

There are some subtleties glossed over here.

One is that it isn't a coincidence that well-typed terms in the simply typed lambda calculus are strongly normalizing: a non-terminating proof isn't really a proof.

For this reason, you can't really consider a term to be a proof unless its reduction has a normal form, and you don't have a reduction to normal form without computation.

but rather about a metaphysical difference between them

Clearly I accept a different metaphysics than you do. ;-)

Curry-Howard not

In my scheme of things, a logic needs a proof system and any non-trivial proof system is necessarily computational.

Whereas I would say that proofs take place essentially outside of time, making them totally different from computation in its most fundamental characteristic. With which I'm fairly sure you would disagree. :-)

But I'm having a little trouble squaring this with your earlier remark that Curry-Howard pretty strongly equates the two (Gödel and Halting). If it's basic for you that logic necessarily entails computation, then wouldn't that settle the point? Curry-Howard wouldn't seem to have anything to do with it.

Curry-Howard-Gödel-Turing

If it's basic for you that logic necessarily entails computation, then wouldn't that settle the point? Curry-Howard wouldn't seem to have anything to do with it.

That depends on whether you accept Gödel's own interpretation of what he proved or not (he was an overt Platonist).

My take is a lot of his proofs are essentially computational in content, even if that concept wasn't fully extant at the time he was doing them. Curry-Howard is definitely the idea that closes the circuit.

Curry-Howard, Gödel, Plato...

Eureka. Maybe. Not only are we attaching the word computation to different meanings, but I suspect our worldviews aren't even inconsistent with each other. The concept you've attached to the word is probably present, at least latently, in my metaphysics, and vice versa. With some careful thought, it should be possible to work out a bijective mapping between our views, and perhaps get fresh insights out of the bargain (for example, I still don't see why from your perspective there would be any gap left for Curry-Howard to close).

My instinct is to go away and meditate on how the bijection might work. (Which is, perhaps, just as well, since the connection to the topic of this thread — "where is the next order of magnitude coming from" — is by now quite tenuous.)

Just to be clear, the

Just to be clear, the Curry-Howard correspondence says that: 1) Types correspond to propositions, 2) terms correspond to proofs, and 3) reductions correspond to proof transformations.
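
In Haskell notation, the first two legs look like this (a standard illustration; totality is assumed, since a looping term proves nothing):

-- "A implies A": proved by the identity function.
refl :: a -> a
refl x = x

-- "(A and B) implies A": proved by the first projection.
andElim :: (a, b) -> a
andElim (x, _) = x

-- "A implies (A or B)": proved by the left injection.
orIntro :: a -> Either a b
orIntro = Left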

Strongly equating

There's also a very deep — read, "slippery" — metaphysical difference between Gödel's Theorem and the Halting Problem.

...
The Curry-Howard interpretation equates the two pretty strongly.

I don't agree with everything that John (whom I owe a reply in the old fexpr discussion) says in this discussion, but I think I agree with him more than you, Marc.

To talk of equating these two means coming up with a type system that receives two distinct interpretations, one as a theory of propositions that corresponds to Gödel's theorem, the other as a theory of constructions that corresponds to the Halting problem. The set of ideas about types is rich enough to do this, but for the one to be a logic, the coding up in the type system has to give you, I think, what Yves Lafont calls an internal logic. Doing this in a satisfactory way is tricky, not least because there is an issue of taste involved, and also because what Gödel was doing was coding up logic in a logic-free arithmetic.

The best approach, I think, starts with looking at how category theorists have gone about coding up theories of arithmetic in toposes. It's not trivial.

Sunshine Sketch of a Big Equivalence

To talk of equating these two means coming up with a type system that receives two distinct interpretations, one as a theory of propositions that corresponds to Gödel's theorem, the other as a theory of constructions that corresponds to the Halting problem.

This is probably way too big to explore here, but let me sketch my thinking about this.

For the purposes of this discussion, I will define language to mean an inductively defined syntactic system with finite basis, which is equipped with a truth-valued semantic interpretation.

In my interpretation, Gödel and Turing are both describing pretty much the same limitation on the power of languages, just applied to different languages (arithmetic, universal Turing machines).

This limitation can be summarized by saying that if the language is powerful enough to make interesting claims about itself, it can't have the ability to verify those claims in its semantics. Conversely, languages that can verify all their claims in their semantics have trivial semantics: every well-formed statement has the same truth value.

In this interpretation, you can see that Gödel and Turing start off very close together. So I'll just sketch how the (generalized) Curry-Howard interpretation closes the last gap for me.

One observation I need to make first is that, pace Derek Elkins and others, it is not sufficient to accept terms as proofs unless all terms are strongly normalizing. In a system where terms are not strongly normalizing, the best a term can do is witness that "either this term is a proof or it is not normalizing".

One way of capturing this distinction is to say that there are actually two languages at work: the first has as its semantics a second whose evaluation gives the truth values. This scheme looks like this:

CH : Types -> Term Language -> Normalizing Terms
G  : Logical Formulae -> Arithmetic Propositions -> Decidable Arithmetic Proofs
T  : (Terminating | Non-Terminating) -> Turing Machines -> Halting TMs

Whew! That is all pretty compressed, but hopefully that gets across how I think about these things, and why I see a strong equivalence.

[edit : improved my mapping scheme]

Differing points of view

(Spoiler alert: the differing points of view aren't yours vs. mine.)

Good sketch. There's a lot in there, minable with sufficient persistence. I believe we tend to use the word "proof" to refer to slightly different (though closely related) entities, but that seems like, at most, a minor symptom of conceptual disconnect, rather than a cause.

I'm going to take a shot at sketching my own thoughts. (I may botch the job, but writing the following has helped me bring these thoughts into sharper focus. The fact that my sketch isn't as short as yours, I'll ascribe to my subject being fuzzier. :-)

From the overall shape of your sketch — rather than from specific details in it — it seems that you are looking at both logic and computation from a logical point of view. If one is looking at both from that point of view, and one is interested in similarities between them, one might well choose to zoom in on Gödel's Theorem(s) and the Halting Problem. Oh yes, and Curry-Howard.

The opposite approach would be to look at both logic and computation from a computational point of view. I'll zoom in on Russell's paradox. The classical proof of Russell's paradox is short, and prominently invokes reductio ad absurdum and the Law of the Excluded Middle. If that proof is a computation, then it's a short one. There is no computational difficulty with it. The answer produced at the end of that short computation got some people pretty upset, but only, I suggest, because they were interpreting the answer from a logical point of view; computationally, it's just an answer.

Now, when referring to Russell's paradox before, I set up Lisp predicate

($define! A ($lambda (X) (not? (X X))))

and said that if you apply A to itself, "what happens isn't nearly as traumatic." But from a purely computational point of view, neither of these cases is traumatic, and the two cases aren't really similar, either. The first case was a very short terminating computation, and the second case is a non-terminating computation. So how are they similar? Well, they are both ways of sidling up to the scenario that Russell's paradox is about. If you define set A to contain all sets X that don't contain themselves, and then you reason about it by means of that short computation, you get this answer that's disturbing — because you're trying to get a general answer, and (as Gödel and Halting show in their closely related ways) general answers aren't trustworthy. If instead you define Lisp predicate A and then run it, any answers you get back won't be disturbing, and whether you get back an answer at all depends on the behavior of X. With the Lisp predicate you are taking inputs one-at-a-time — and thus confronting the Halting Problem rather than confronting Gödel's Theorem — so that you still don't have a general answer, but it doesn't upset you because you didn't expect a general answer. (Note: you are confronting the Halting Problem itself, not the theorem that says HP is undecidable.)

So the really important difference between the two cases isn't the fact that one of them is a short, terminating computation while the other is non-terminating. That's just how they contrast with each other if you look at both of them from a computational point of view; and the really important difference is that you don't look at both of them from that point of view: you look at the terminating one from a logical point of view, and the non-terminating one from a computational point of view. Because those were the two different points of view that caused you to look at those two different cases.

This is somewhere in the vicinity of what I meant when I spoke earlier of bleeding off some of our antinomies into mere non-termination. And Curry-Howard, though it can elucidate some connections between what different things look like within the logical point of view, simply can't touch the fundamental difference between the logical and computational points of view.

Looping proofs

The classical proof of Russell's paradox is short, and prominently invokes reductio ad absurdum and the Law of the Excluded Middle. If that proof is a computation, then it's a short one.

The "proof" of Russell's paradox is really that there isn't a proof that resolves it. Any attempt at it loops : "if it's true then it's false then it's true then..." Gödel's proof and Turing's proof are of the same form.

The evaluation operation for all three proofs is different, and Russell has the luxury of relying on our ordinary linguistic interpretation as the "computation", but all three are examples of infinite loops, which is just a non-terminating computation.

So I'm disinclined to make a distinction between the "logical" outlook and the "computational" one: in my interpretation they are just different expressions of the same phenomenon.

Russell's paradox

Russell's paradox is a proof of false in an inconsistent logic. Under CH it corresponds to a terminating computation that constructs a value of any type. Essentially the logic allows for a cast from a boolean computation to a set, without regard for non-termination. This is unsound, but you only encounter looping if you look to a model of the logic.

M's paradox

Russell's paradox is a proof of false in an inconsistent logic...
This is unsound, but you only encounter looping if you look to a model of the logic.

Inconsistency is a relationship between theory and model (or syntax and semantics). You can't really consider it without looking at the model.

Under CH it corresponds to a terminating computation that constructs a value of any type.

I'll accept that way of looking at it if we stipulate that the "computation" here is a trivial one that just magically gives you whatever you ask for without actually "constructing" anything.

Essentially the logic allows for a cast from a boolean computation to a set, without regard for non-termination.

From my point of view, this describes the whole of classical set theory. ;-)

Set theory isn't so bad

Inconsistency is a relationship between theory and model (or syntax and semantics). You can't really consider it without looking at the model.

Well, "consistent" is usually defined as a property of a formal system without regard to any models it might have. For example, that there exists no proposition P such that "P" and "not P" can both be proven. A common way to prove consistency is to construct a non-trivial model, as an inconsistent system typically won't have one.

I'll accept that way of looking at it if we stipulate that the "computation" here is a trivial one that just magically gives you whatever you ask for without actually "constructing" anything.

Yes, but I think you'll find the same situation if you look for the computational equivalent of (presumed) consistent logics such as ZFC. Of course, I'm not sure that doing so makes any more sense than looking for the logical equivalent of C++'s type system. This is the sense in which I agree with John Shutt - sometimes one end or the other of CH doesn't make a lot of sense.

Not a question of bad

Well, "consistent" is usually defined as a property of a formal system without regard to any models it might have. For example, that there exists no proposition P such that "P" and "not P" can both be proven.

Remember the context of the thread: assuming generalized CH and that we've selected a suitable language with evaluation as a model, then having a proof of P is the same thing as exhibiting an expression in the model language that corresponds to P and which evaluates to true.

You can interpret the generalized CH as a further restriction on what counts as an acceptable model of a logic and what counts as consistent.

Yes, but I think you'll find the same situation if you look for the computational equivalent of (presumed) consistent logics such as ZFC.

Yes and no. I would argue that most ZFC proofs are computational constructions in the sense that they rely on a finite number of axiomatic "atoms" and operations. The problem is that some of those atoms and operations, such as the powerset of ω, have no computational model.

Of course, I'm not sure that doing so makes any more sense than looking for the logical equivalent of C++'s type system.

I think it is more meaningful to do so than you are suggesting.

The logic for its type system is going to be broken in some way, since it admits non-terminating programs as witnesses, but it still manages to make some weak but interesting logical guarantees.

Abstractive strength

I'm not really sure I believe the abstractive strength argument, either. Type systems are really proof checkers more than proof creators, so I don't see why you couldn't encode the same proof in either style. I'm also not sure about the metaphysical differences between Gödel and the halting problem.

is your language something that there's information about on the web?

Nothing yet. I may make a not-horrible implementation available for review in the not too distant future, but the logic and proof system probably won't be included in that first release. Proving things formally is so impractical :).

Expressing Constraints in the Term Language

it doesn't seem very far from rejecting the idea of dispatch on type, which is a bit worrisome

Unless you are imagining that rejecting dispatch on type means rejecting polymorphic dispatch, I am curious what is so worrisome about losing dispatch on nominative 'type' of expressions.

It is true that I am rejecting the use of dispatch on type, but I embrace several powerful forms of polymorphism and dynamic dispatch. OO, open functions, and DMP are examples.

type inference has always worried me

To be clear, I haven't been promoting type inference, but rather inference of type safety. The latter does not require assigning type-signatures to expressions. Inference of type-safety only requires determining that each operation expressed in a program is well defined in context.

Assignment of types to expressions is unnecessary because the types themselves aren't used to influence evaluation. For example, there is no need for monomorphism to select the correct type-class from which to draw a function.

If I specify a type signature for what I've written, then I'm declaring the circumstances under which I claim it will correctly do what it's supposed to. If I don't [...] I have no straightforward way to say that such-and-such will work correctly under a more limited set of circumstances

There are mechanisms other than type-signatures to express the limitations, constraints, assumptions, etc. of a sub-program.

For (pure) functions, one may express partial-functions; rather conveniently, the calling context is the only context an analysis will ever need to examine. However, the ability to prove safety across partial-functions is a challenge.

For logic-programming, one may express uniqueness (at most one result) and existence (at least one result) requirements in such a manner that a program is 'undefined' unless these conditions hold true. Expressing that a result can be found, or that a result, if found, is unique, provides extremely powerful primitives that cover almost any situation you might dream up. These are more flexible than partial-functions, since a partial-function is equivalent to always saying 'exactly one' result (both at least and at most).
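
A dynamic rendering of those two requirements in Haskell, with hypothetical names - the intent above is that such conditions be discharged statically, leaving the program 'undefined' rather than failing at run time:

  -- 'undefined' unless exactly one result exists
  exactlyOne :: [a] -> a
  exactlyOne [x] = x
  exactlyOne _   = error "undefined: result absent or not unique"

  -- 'undefined' unless at least one result exists
  atLeastOne :: [a] -> a
  atLeastOne (x:_) = x
  atLeastOne []    = error "undefined: no result exists"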

For sub-structural program requirements (such as linearity or encapsulation of a data-type), one may introduce new primitives. The use of E-language style sealer/unsealer pairs, for example, is useful for security and can easily introduce phantom-type analysis suitable for hiding an abstract data type even for dynamically created modules. (No actual encryption is required unless a sealed value is distributed temporarily to an untrusted host.)
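
To make the sealer/unsealer idea concrete, here is a minimal Haskell sketch; the representation (a private mutable slot per brand) follows the classic E implementation, and all names are illustrative:

  import Data.IORef

  -- A sealed box carries only an action that deposits its contents into
  -- the private slot of the brand that sealed it.
  newtype Sealed = Sealed (IO ())

  data Brand a = Brand { seal :: a -> Sealed, unseal :: Sealed -> IO a }

  newBrand :: IO (Brand a)
  newBrand = do
    slot <- newIORef Nothing          -- private to this brand
    let sealVal x = Sealed (writeIORef slot (Just x))
        unsealVal (Sealed deposit) = do
          writeIORef slot Nothing     -- clear our slot, then ask the box to deposit
          deposit
          contents <- readIORef slot  -- still Nothing if the brands don't match
          maybe (fail "wrong brand") return contents
    return (Brand sealVal unsealVal)

A box sealed by one brand is an opaque, ill-defined input to every other unsealer - which is just the property a phantom-type analysis could verify statically.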

The removal of 'manifest' typing does leave a vacuum in expressiveness, but that vacuum may be filled from the term language. By carefully shaping and extending what may be left 'undefined', one can shape and grow a very rich type-system without ever once manifesting type signatures.

Let propositions about programs be dynamic objects, and set up constructors for these objects corresponding to axioms, so that a proposition can't be constructed unless it's provable.

I don't really share your objection to static analysis, but the notion of expressing contract requirements for a sub-program is one I believe worthy. That said, it is often too powerful - unless you are careful in limiting expression, it becomes very easy to express requirements that one cannot prove. Partial-functions and procedural contracts (pre-conditions, post-conditions, concurrency invariants, etc.) have the same problem.

That unobtanium ore is, indeed, quite shiny... but refining it takes a 'sufficiently advanced technology'.

That said, even very limited application can still improve program expression; a few 'guru' programmers will learn what their language can do for them, then write important libraries that depend on it. Further, the set of programs that can be proven safe may grow independently of the program expression, unlike with typical type-systems.

I think one should favor specific approaches to specific expressiveness issues that repeat themselves often enough to demand an automated solution. Introducing new term-language primitives to express, say, abstract data encapsulation or a garbage-collection requirement, is not worse than introducing new type-language primitives to do the same in a system with manifest typing, and at the end of the day you are left with a language that requires no sufficiently-smart technologies to make it work.

[Edit: above heavily edited for clarity - now it's as clear as mud, and that's an improvement.]

Rich Type Systems

Use of rich type systems to influence evaluation of terms is a symptom - an anti-pattern - not a solution!

This statement makes me think you completely misunderstood me. Evaluation order isn't important for strongly normalising terms. Rich types, as in Coq, aren't influencing evaluation. And calling it an anti-pattern is sort of dodging the argument. While type systems which influence evaluation are interesting, that isn't at all what I was talking about.

Programmers should not be required to use such indirect mechanisms that bifurcate expression of their programs into two languages (the type language and the term language).

I think it's a rather good idea. The one side being the logical content, the other being the algorithmic content. As theorem provers become more robust, we can expect to specify very little of the algorithmic content, and thereby get away with an order of magnitude reduction in how much of a program we write. We keep both languages until technology reaches the point that we don't need to specify the algorithmic content for the vast majority of jobs, just as we once had C with inlined assembly, but now assembly has all but disappeared.

Partial evaluation is basically just proof normalisation. Having a type and program which are separate is no hindrance to this. I give an example in: Program Transformation, Circular Proofs and Constructive Types.

Type inference is much less useful, in my opinion, than program inference. Type inference is undecidable for moderately complex type systems; you can't even infer types for something like System-F.

Type inference seems to me to be completely backwards. Why specify a proof but not describe what the proof is of? I'm much more interested in program inference than type inference.

RE: Rich Type Systems

While type systems which influence evaluation are interesting, that isn't at all what I was talking about.

Were you not referring to the type system as the 'theorem prover' when you said: "Powerful theorem provers will find the term automatically in some cases." How is using the type system as a theorem prover to find a term that will directly affect evaluation not using the type-system to influence evaluation?

It was my impression that you were focused on achieving DMP of the terms via the type-language. If that was a misunderstanding, I'll need you to clarify how so.

Were you, by chance, attempting to say that a variation of DMP aimed at producing types or pure terms might also be useful? They call that 'constraint logic programming', IIRC.

I think it's a rather good idea [to bifurcate program expression into two languages]. The one side being the logical content, the other being the algorithmic content. As theorem provers become more robust, we can expect to specify very little of the algorithmic content, and thereby get away with an order of magnitude reduction in how much of a program we write.

I do not believe this can be achieved by strong separation. You'll want to interweave the algorithmic content with the logical content, such that an algorithm can appeal to the logical constructs and the logical constructs can annotate or suggest algorithms that might work. The end result - reducing algorithmic content and gaining an order of magnitude reduction in how much of a program we write - can still be achieved without bifurcation.

Type inference seems to me to be completely backwards. Why specify a proof but not describe what the proof is of. I'm much more interested in program inference than type inference.

The two are not incompatible. Inference of types (or, more importantly, type-safety) proves a useful property about a program. Other useful properties include termination, real-time properties, implementation within a finite space, secure data-flows, and so on. But most of these require, first of all, proving that no 'undefined' operations will be performed - i.e. inference of type-safety.

A type-safety check is right up there with ensuring the program parses correctly. It's a very basic sanity check.

Rich Type Systems Mark II

Were you not referring to the type system as the 'theorem prover' when you said: "Powerful theorem provers will find the term automatically in some cases." How is using the type system as a theorem prover to find a term that will directly affect evaluation not using the type-system to influence evaluation?

It's not influencing, since it only gives a specification of what, not how.

But most of these require, first of all, proving that no 'undefined' operations will be performed - i.e. inference of type-safety.

A type-safety check is right up there with ensuring the program parses correctly. It's a very basic sanity check.

You are confusing type checking with type inference. Type checking is something you do to show that a term ascribed with a type is correct. Type inference is an attempt to ascribe a type which is type-checkable.

I think we want to be able to do both typeful and algorithmic work at this juncture. However, I think that the algorithm stuff should be phased out as we focus more on the logical content of our programs. I think type-theory is the most natural setting for this, and term inference (otherwise known as theorem proving) is a good way of thinking about how to get an order of magnitude.

Why the focus on the type system?

It's not influencing, since it only gives a specification of what, not how.

Things that don't have influence on a system, by nature, can be removed from a system without any observable effects.

Therefore, either you can strip all manifest type annotations from the program's AST and have it perform the same observable behavior or the type system is influencing the evaluation.

If one is using annotations in the type-system to specify a term that is then discovered by a theorem-prover and used in an evaluation, that is influence from the type-system upon the evaluation. I don't see how you could claim otherwise.

It doesn't matter that the type-system's influence is via specifying "what, not how" then automatically deriving an expression for the "how".

If you want to specify "what, not how", you have a decision (as a language designer or framework developer) on whether to achieve this via extension of a manifest type-system (thus leveraging the light-weight theorem prover) versus extension to the term-language (thus requiring you to implement a theorem prover as a library or part of the language run-time, possibly subject to partial evaluation).

You are confusing type checking with type inference.

You are confusing type-safety inference with type-inference.

Type-inference is an attempt to ascribe a type to a term-expression which makes the expression type-checkable in all contexts. Type-safety inference is an attempt to find a proof that the program evaluation will at no point attempt any undefined behaviors.

The most significant differences between the two: type-safety inference is free to invent whichever type-system it needs, so long as one can still prove the basic type-safety principles of progress and preservation. The 'rich type system' is itself inferred for type-safety inference, based upon exactly what may be left 'undefined' in the language and exactly what is left 'undefined' in a particular program.

It may also be that a safe program which can't be proven safe today can be proven safe tomorrow, as the safety checker is enriched.

A lesser difference: type-safety inference is analyzing for undefined operations, not making undefined operations into defined ones. Thus, the types chosen do not influence program definition or evaluation. Thus, there is never a need for a monomorphism restriction, and there is no need for a 'compile-time' distinction in the language unless the term-language itself requires one (i.e. for staging).

I keep trying to make this distinction clear. Perhaps I need a name other than "type-safety inference" - something less likely to connote "type-inference" to everyone who first reads the phrase. The denotation, at least, should be clear enough, once you realize that 'type-safety' commonly refers to 'progress without encountering any undefined operations'.

For type-safety inference to be 'interesting' and 'useful', the language must allow one to leave stuff artfully undefined, thus requiring richer analysis. For example, one might support conversions based on labeled units, allowing one to convert from "meters" to "feet" but leaving off the ability to convert from "meters" to "decibels". If one is able to keep certain operations undefined, a type-system is implied and the programmer can allow it to catch many errors. If not - due to syntactic constraints, or due to an insufficiently powerful type-safety inference - then the programmer will need to utilize less-direct expression of intent to make up for the weaknesses in the type-safety inference, i.e. by creating separate functions and data-structures for different conversions and unit-classes.
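
As a rough Haskell rendering of the units example - here the implied type-system is made manifest with wrapper types and a conversion class, which is exactly what the author hopes to infer rather than declare; all names are illustrative:

  {-# LANGUAGE MultiParamTypeClasses #-}

  newtype Meters   = Meters Double
  newtype Feet     = Feet Double
  newtype Decibels = Decibels Double

  class Convert a b where
    convert :: a -> b

  instance Convert Meters Feet where
    convert (Meters m) = Feet (m * 3.28084)

  -- no Convert Meters Decibels instance exists: that conversion is
  -- left undefined, so any program attempting it is rejected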

the algorithm stuff should be phased out as we focus more on the logical content of our programs. I think type-theory is the most natural setting for this, and term inference (otherwise known as theorem proving) is a good way of thinking about how to get an order of magnitude.

Why do you believe type-theory is a natural setting for this?

In particular, what advantages does using type-theory and type-system offer the programmer? Why can these advantages not be achieved by supporting logical content (search, theorem proving, etc.) in the term-language? Why are these advantages enough to outweigh the costs to expressiveness and dynamism that comes from supporting logical content via the term-language?

The reason I mentioned 'staging' and 'partial-evaluation' earlier is that the only advantage I see from pushing this theorem-proving into a rich type-system is that the proof becomes 'static' like the rest of the type-system. (I'm assuming you aren't speaking of 'dynamic' type systems.) And, yet, I see the predictability and performance advantages of static construction also to be an enormous disadvantage for flexibility. Further, the same performance and predictability benefits could be achieved to a desirable degree by alternative mechanisms without the loss in flexibility - those being staging or partial-evaluation with the theorem-prover as part of the term-language.

You say that the type theory is the natural place to support these features, but it is my impression (wrong or not) that you're simply not considering alternatives - i.e. that you see the type-system as the only place that a theorem-prover and "logical content of our programs" even can (or should) be included. I remain curious as to why you make this assumption.

Types

Things that don't have influence on a system, by nature, can be removed from a system without any observable effects.

Therefore, either you can strip all manifest type annotations from the program's AST and have it perform the same observable behavior or the type system is influencing the evaluation.

Even if types don't affect any given observation, they strongly affect the set of possible observations. If you remove types, program contexts will be able to observe behaviours they could not observe (at all) before.

In other words, what you say is only true for closed programs. The real benefit of types, however, is with composing open programs (i.e., modularity). And there, their ability to restrict possible observations is what provides encapsulation, representation independence and such. This is much more than just "type safety".

In particular, what advantages does using type-theory and type-system offer the programmer? Why can these advantages not be achieved by supporting logical content (search, theorem proving, etc.) in the term-language?

There are lots of interesting – typically global – properties that you can express, enforce or encode statically in a type system but not dynamically, e.g. freshness, linearity, deadlock-freedom, etc.

Also, in more powerful type systems, term and type level tend to become intertwined in a way that closely resembles what you seem to have in mind. Once "type" is a type (apply necessary restrictions here), types are essentially "part of" the term language.

Types as Interfaces

what you say is only true for closed programs. The real benefit of types, however, is with composing open programs (i.e., modularity). And there, their ability to restrict possible observations is what provides encapsulation, representation independence and such. This is much more than just "type safety".

For object interfaces, I'll grant that you should filter illegal inputs from external resources in order to achieve the same safety properties you inferred from the exposed elements of the sub-program object configuration - i.e. a gatekeeper pattern on exports (which might still be reduced via link-time or JIT partial evaluations).

But there are approaches to open program composition that do not involve the 'traditional' forms of separate compilation, and that cannot readily leverage types in this role in any case - i.e. especially in open, distributed programming, which is my interest area. In these cases, objects in the final program may be produced by different code-bases in different security and trust domains. One must ultimately 'protect' programs by secure design patterns (such as object capability patterns written about at erights.org) rather than by types.

For ADT-style encapsulation, I'm considering use of an E-language style sealer/unsealer pair. If the use of the unsealer on a sealed value is the only well-defined operation, then type-safety inference must introduce phantom-types for each sealer/unsealer created. This will protect the closed program, but also can protect the open, distributed program - even with all its serialization and potential for malign introductions - because encryption is automatically added to the sealed values after they cross to an untrusted host. This approach is suitable for first-class modules.

There are lots of interesting – typically global – properties that you can express, enforce or encode statically in a type system but not dynamically, e.g. freshness, linearity, deadlock-freedom, etc.

It takes a very powerful type-system to express these properties - one that has received much enhancement over time.

I assert that, were you to strip away the type-system such that programmers interested in these properties were forced to express them elsewhere, they could still find ways to enhance the term-language and express these properties there, instead. Further, they could still push their proof into the static type-safety analysis - without introducing any manifest types.

One can handle linearity in a data flow, for example, by introducing a 'use-then-destroy' primitive that is 'undefined' unless one can prove it's the only reference to the value.
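
Haskell can only approximate such a primitive with manifest linear arrows (GHC's LinearTypes extension), whereas the idea here is to impose the same obligation on a plain term primitive; still, as a sketch:

  {-# LANGUAGE LinearTypes #-}

  -- 'useThenDestroy v f' consumes v exactly once; the linear arrow
  -- plays the role of the proof that no other reference to v remains
  useThenDestroy :: a %1 -> (a %1 -> b) -> b
  useThenDestroy v f = f v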

Some properties, such as deadlock-freedom, would be better proven even if not expressed in the program. I considered that particular one important enough to prove it at the language layer - i.e. it is impossible to express a program that will deadlock (except between a pair of pessimistic transactions, where progress can safely be achieved).

in more powerful type systems, term and type level tend to become intertwined in a way that closely resembles what you seem to have in mind

With this, I agree. Dependent and constraint types weave the term language into the type language. The reverse weave can occur if values may carry types for use in evaluation and simultaneously provide a constraint... i.e. the tuple (A T) constrained by the type relationship 'A is an instance of T'. Once both weaves are introduced, the two will grow to become nigh inextricable due to backwards compatibility issues.

I'd need to look up my notes of years ago to recall in detail why I rejected it, but the basics come down to: I could not see enough value in it, I could see much undesirable and unnecessary programmer burden [especially in distributing type information across security domains], and inferred types exposed too much information about implementation details [to remote, untrusted hosts] that should not be exposed or coupled against, and it also limited how rich the safety analysis could grow [at an independent pace from the language].

I also rejected all other forms of reflection. I've never found a reason to regret doing so - at least no reason that would survive an open distributed system.

Cannot readily leverage

[...] cannot readily leverage types in this role in any case - i.e. especially in open, distributed programming, which is my interest area.

Well, this happens to have been my interest area as well, and I think types can work smoothly there. At least smoother than value-level sealing.

It takes a very powerful type-system to express these properties - one that has received much enhancement over time.

I assert that, were you to strip away the type-system such that programmers interested in these properties were forced to express them elsewhere, they could still find ways to enhance the term-language and express these properties there, instead.

I very much doubt that. Note again that these are global properties. How would you do global reasoning with a (necessarily local) term-level, user-defined logic only, and without the language infrastructure (compiler, linker, loader) providing support?

Also, I find it strange to claim that programmers can easily come up with encodings that researchers allegedly took ages to develop.

I could not see enough value in it, I could see much undesirable and unnecessary programmer burden, it exposed too much information about implementation details that should not be exposed or coupled against, and it also limited how rich the safety analysis could grow.

Programmer burden, yes, that's the usual trade-off. Exposure of implementation details? Not if done right, and if the type system benefits from the same abstraction mechanisms as the rest of the language. Limits? Well sure, as always, but how can a user logic, that knows less, possibly be less limited?

Types as Interfaces II

I think types can work smoothly [in open, distributed systems programming]. At least smoother than value-level sealing.

I am curious as to how you imagine this the case. Open, distributed systems programming is, first of all, open. That means anybody can connect their program to yours, should your program not be a closed system. Value-level sealing is a secure design for ADT encapsulation that will work even across hosts, yet requires no encryption within a trusted domain (indeed, if one can prove a computation is local, one may eliminate the operation entirely, reducing it to a type-safety analysis utilizing phantom types).

Note again that these are global properties. How would you do global reasoning with a (necessarily local) term-level, user-defined logic only, and without the language infrastructure (compiler, linker, loader) providing support?

I've never said anything against "global" reasoning about the program (understanding "global" to be a sub-program of arbitrary size within a trusted security domain).

Type-safety inference certainly is global reasoning. It simply doesn't require a type-system (i.e. it can invent one on its own, without your help). [or receive one externally. Related: Pluggable, Optional Type Systems.]

The term-language itself may have global elements. This is not uncommon in logic programming languages or rule-based programming, where each proposition and entailment rule - possibly spread across several files - combine to form a whole rule. Any case where the clients of an imported code unit can mutually provide code that influences all other clients of the same code unit is an example of global properties coming out of the term-language.

I find it strange to claim that programmers can easily come up with encodings that researchers allegedly took ages to develop.

I'm not aiming to insult the value or design effort that went into these extensions, but rather to assert that the type-system is not the only reasonable place to express them.

I'll amend my earlier statement: the researchers that developed and expressed useful extensions to the type-system could, in the absence of a type-system, find intelligent and creative ways to develop and express near-equivalent 'useful' extensions within the term-language.

Greenspunning type systems

Value-level sealing is a secure design for ADT encapsulation that will work even across hosts, yet requires no encryption within a trusted host

The same can be true for type abstraction. To clarify: I'm talking about what is visible as a language feature. If security is an issue then yes, you may want to encrypt values. But that can simply be an implementation detail of type abstraction, which may be more convenient for the programmer.

Type-safety inference certainly is global reasoning. It simply doesn't require a type-system (i.e. it can invent one on its own, without your help).

Decidability issues aside, this again is only true for closed programs. Once you have modularity and separate compilation, the programmer needs to specify something at the interface boundaries to make these checks possible.

I'll amend my earlier statement: the researchers that developed and expressed useful extensions to the type-system could, in the absence of a type-system, find intelligent and creative ways to develop and express near-equivalent 'useful' extensions within the term-language.

As far as I can see, if that works at all (of which I remain unconvinced until you can actually demonstrate it), then it essentially amounts to greenspunning a type system.

No Greenspunning Here

If security is an issue then yes, you may want to encrypt values. But that can simply be an implementation detail of type abstraction, which may be more convenient for the programmer.

I agree that it can be done. I do not agree that this is 'more convenient for the programmer', and especially not for the programmers sitting in the remote security domains who must interface with a subset of instantiated components from your program.

In open distributed systems, most forms of "separate compilation" also essentially involve "separate code-bases". There is little or no need for source-level separate compilation, other than perhaps a little advanced caching for compile-time performance, since one need not distribute 'library' binaries. One can simply register resources (objects, data, and event sources) to a common factory or publish/subscribe system, if one wishes, then let any automatic code-distribution be controlled by a combination of secrecy constraints, redundancy requirements, disruption-tolerance requirements, and performance demands, specified or inferred from both the sub-program and runtime profiling.

Obtaining, maintaining, trusting, and integrating documents that declare types for interfacing with remote components is not fun or easy. Dealing with different instances of abstract types, and especially communicating that two different (derived, widely distributed) capabilities might require or provide the same instance of an abstract type, is especially challenging.

You like open distributed systems programming. Start playing around with the facet pattern on first-class ADT modules...

Decidability issues aside, this again is only true for closed programs. Once you have modularity and separate compilation, the programmer needs to specify something at the interface boundaries to make these checks possible.

Certainly, the term-language needs to make it clear where the interface boundaries are, and what is well-defined at those boundaries. However, that does not require a type-system, but rather that the evaluation semantics at the relevant edges reject ill-defined inputs in a predictable (and ideally atomic) manner. Implicit type-based guards are just one way to achieve this.

In the language I'm developing, a procedure cannot be defined to have arguments. Only functions have arguments, but they have no side-effects (and are also structurally guaranteed to terminate). Thus, to abstract a procedure as having arguments, one wraps a procedure-definition as the output from a function. Now, functions themselves are only defined on a given input if they are able to map it to an output. If one can guarantee the well-defined output, one may evaluate the function lazily or strictly or via lenient forms (being guaranteed to terminate, this is only a performance distinction).

As a result of such semantics, there is no risk of any side-effects upon passing an invalid parameter to a function returning a procedure. Indeed, doing something illegal like that predictably results in an ill-defined nothing, certainly not a procedure that could be executed for side-effects.
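
A rough Haskell analogue of this shape, using argument-less IO actions as 'procedures' (the names are hypothetical, and Maybe stands in for the language's notion of 'ill-defined'):

  -- functions are pure; a 'procedure' is an argument-less IO action.
  -- To give a procedure "arguments", a function produces it; the
  -- function yields no procedure at all on invalid input, so no
  -- side-effect can arise from an invalid parameter.
  mkGreeting :: String -> Maybe (IO ())
  mkGreeting name
    | null name = Nothing
    | otherwise = Just (putStrLn ("hello, " ++ name))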

And decidability is an issue regardless. If you attempt to define a rich type-system, you must always balance added features against risk to decidability. One can easily make type-safety analysis decidable, but achieving this requires the language be somewhat less flexible (i.e. rejecting support for arbitrary partial-functions) or will at least constrain which programs can be analyzed. This problem is no different for the presence or absence of a manifest type system!

But, in case of indecision, one can intelligently reject or accept programs that one cannot decide, based upon the reasons for the indecision and upon which sorts of analysis failures can be safely tolerated (and, thanks to support for disruption tolerance in any distributed system, quite a bit can be tolerated...).

it essentially amounts to greenspunning a type system

No, greenspunning a type-system would involve creating a type-language and evaluator atop term-expressions that one could then analyze for types prior to lifting into the program... ideally removing the tags on the way. I have no problem supporting this, i.e. for DSLs and language exploration. But it still encounters the same challenges in open distributed programming systems of distributing type information to developers in untrusted security domains.

The ability to leave stuff artfully undefined may imply that a type-system of a certain degree of minimal complexity is required for type-safety analysis, but does not require this type-system be specified or tied to the language in any permanent fashion.

And type-safety inference is certainly possible, related to structural type-inference. The trivial case is very easy. Making the system powerful and useful, however, requires some careful design efforts.

For example, in my own language I must structurally (and syntactically) distinguish codata from data. Corecursive codata always exists inside a record or tuple with a special '~' label, and special function rules limit recursive processing on data so structured. Were I to support nominative typing (which requires manifest typing) I would be able to avoid that structural distinction... but then I'd have a challenge in how to distribute (and maintain) my type-definitions to developers in remote security domains.

The decision on how to define argumented procedures was also a result of this design effort, though I'm quite happy with the resulting model.

Sounds like a shopping cart

Thus, to abstract a procedure as having arguments, one wraps a procedure-definition as the output from a function. Now, functions themselves are only defined on a given input if they are able to map it to an output. If one can guarantee the well-defined output, one may evaluate the function lazily or strictly or via lenient forms (being guaranteed to terminate, this is only a performance distinction).
As a result of such semantics, there is no risk of any side-effects upon passing an invalid parameter to a function returning a procedure. Indeed, doing something illegal like that predictably results in an ill-defined nothing, certainly not a procedure that could be executed for side-effects.

So even before the procedure is invoked, you are able to analyse the arguments and set up the procedure call accordingly. That is a form of pre-condition, isn't it?
If the application were a shopping cart, the procedure-generating function would check that the card details are valid before redirecting to the Confirm Payment page (procedure). I can see this pattern is very useful in distributed applications.

The use of types is typically to answer the questions of what and where. Thus, if instead of predefined data structures you were to use typed values, the target should have the means to explore the type of a passed value by dynamically finding the layout and location of named fields in the passed value. That leaves the type itself as a predefined data structure, comprised of <name, layout, location> tuples. The added flexibility would make the protocol more chatty, but it has been done before.

In open distributed systems

In open distributed systems, most forms of "separate compilation" also essentially involve "separate code-bases". There is little or no need for source-level separate compilation

Not sure what you mean by "source-level separate compilation". Whether "linking" is static, as in traditional settings, or via dynamic import, as in open systems, does not matter. You are still employing a form of separate compilation. And separate compilation means separate type checking. My question remains: how can you check type safety separately if the programmer does not specify any types at the interface boundaries?

In the language I'm developing, a procedure cannot be defined to have arguments. Only functions have arguments, but they have no side-effects (and are also structurally guaranteed to terminate). Thus, to abstract a procedure as having arguments, one wraps a procedure-definition as the output from a function.

Sounds like you have reinvented monadic encapsulation of effects. That's good, but how does that relate to anything?

And decidability is an issue regardless. If you attempt to define a rich type-system, you must always balance added features against risk to decidability.

Maybe you are not aware that type inference is a harder problem than type checking? You run into undecidability much quicker. Most advanced typing constructs (e.g. higher forms of polymorphism) cannot be inferred. If you insist on inferring types then you are doomed to cripple the expressiveness of your language.

No, greenspunning a type-system would involve creating a type-language and evaluator atop term-expressions that one could then analyze for types prior to lifting into the program...

That's not the only way to greenspun something. But anyway, I won't try to argue this anymore, since what you are describing is far too handwavy and hypothetical to get a hold on. Show me a concrete solution and the discussion may get somewhere.

My question remains: how can

My question remains: how can you check type safety separately if the programmer does not specify any types at the interface boundaries?

To be clear: type-safety is very often not defined in terms of types. I'm accepting type-safety to mean: "progress without any undefined behavior", which is actually broader than most definitions.

Thus, to perform a check of type-safety does not require knowing or specifying "types" at an interface boundary. What it does require is knowing that the behavior of a sub-program is well-defined for any inputs that cross this boundary.

Dynamic type-safety is achieved by the rather expedient means of simply ensuring all operations are well-defined. If a behavior can't be sensibly made well-defined, one well-defines that behavior as throwing an exception. This is trivial, and boring, but it can certainly be done without specifying any types.

Static type-safety is achieved by rejecting (prior to processing inputs) any program compositions that might lead to an undefined behavior. Due to a variety of well-known logical principles, one ends up accepting a subset of all safe programs.

In between the two is soft-typing, where one performs a great deal of static typing on the inside but ensures that hooks to untrusted external systems emit the necessary extra code for run-time gatekeeper checks. This is typical of any open system where untrusted code will be involved.
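
A minimal sketch of that gatekeeper idea, with hypothetical names: the typed interior assumes well-formed input, while the boundary function validates untrusted data and rejects it atomically, before any effects run:

  data Request = Request { user :: String, amount :: Int }

  -- typed interior: assumes its input is already well-formed
  handle :: Request -> IO ()
  handle r = putStrLn (user r ++ " pays " ++ show (amount r))

  -- gatekeeper at the untrusted boundary: reject up front, or admit whole
  admit :: (String, Int) -> Either String Request
  admit (u, a)
    | null u    = Left "missing user"
    | a <= 0    = Left "non-positive amount"
    | otherwise = Right (Request u a)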

Despite your assumptions, none of these safety analyses require programmers to emit any sort of manifest specification of 'type'. Nor do they require unique types be assigned to functions and expressions.

Not sure what you mean by "source-level separate compilation". Whether "linking" is static, as in traditional settings, or via dynamic import, as in open systems, does not matter.

By "source-level separate compilation" I mean programming against a document that describes an API with the expectation that the implementation-code - in a "separate" compiled blob - will be "linked" (within a process memory space) to produce a program. Both of the cases you describe (static and dynamic loading of libraries) are forms of source-level separate compilation.

It is useful to distinguish source-level separate compilation from the vast multitude of other forms of separate compilation. For example, my Firefox browser likely works with your Web server despite the fact that they were clearly compiled separately. Pipes-and-filters and publish-subscribe systems (including CEP and such) offer other mechanisms to flexibly support separate-compilation.

These approaches still involve IPC-level "linking" to produce the final program. Indeed, a program like bash must perform some non-trivial pipe hookups to produce a useful program, and a publish-subscribe system often favors a registration process to ensure a more demand-driven design with support for selecting the highest quality resource for data.

Usefully, for a number of reasons (performance, disruption tolerance, resilience), these latter forms of linking may also involve some mobile and untrusted code, similar to delivery of JavaScript or an Applet to my browser. Ideally, this untrusted code runs with the authority invested in it by the remote service, and cannot obtain any authorities by running locally that it could not also have obtained while running remotely. This 'ideal' situation can be achieved by adherence to object-capability principles, and newer languages (like Caja) sometimes aim to achieve them.

Anyhow, one might assert that there is still something akin to 'types' involved in the structure of messages being passed between these systems, along with the protocols. However, this is not generally a place where language-layer "types" provide all that much help, largely because - even with types - you cannot trust the untrusted code to hold up its end of the bargain. You're forced to check or filter your inputs. And your ability to achieve even that much is extremely limited; e.g. you can't ensure that a reference in a distributed system (a URI) will obey the expected protocol that you might express in a language-layer type.

Thus, as I said many posts above, types don't help all that much for safety analysis in open distributed systems programming.

It is my desire to support 'open distributed systems programming' via supporting these sorts of alternative styles of separate compilation and mobile code from within a single language.

Sounds like you have reinvented monadic encapsulation of effects. That's good, but how does that relate to anything?

I'm going to say I was "inspired" by monadic encapsulation of effects; even if it were true, I'd hate to admit to something so simultaneously creative and wasteful as 'reinvention'. ;)

As to how it relates: If you recall, I asserted earlier that one does not need to know 'types', and that it is only necessary one reject inputs that would cause ill-defined operations. Further, I said that this rejection is ideally atomic - i.e. the input is rejected either all up front or not at all.

The advantage of the monad-style encapsulation of side-effects is that one can achieve the atomic safety... i.e. if an input doesn't fit the mold, it will properly get rejected before it can cause any damage.

Maybe you are not aware that type inference is a harder problem than type checking? You run into undecidability much quicker.

Type-inference is a harder problem than type-safety inference. Type-inference - especially in a language with type-based dispatch and similar features where the type-annotations influence program evaluation - must find a unique type-expression for each term-expression.

Type-safety inference is under no such constraints, and thus may be subject to more flexible proofs. This is further aided by ensuring a fair subset of the language is guaranteed to terminate except under certain well-known conditions (functions always terminate, and procedures terminate if they can be assured of receiving each reply). I typically don't start running into decidability issues until I introduce partial-functions, which is a huge can of worms (eqv. to both dependent and predicate types).

Most advanced typing constructs (e.g. higher forms of polymorphism) cannot be inferred. If you insist on inferring types then you are doomed to cripple the expressiveness of your language.

Polymorphism is not a typing construct; or, more accurately, typing is far from the only way to achieve it. Indeed, single-dispatch polymorphism is readily achieved simply by specifying objects or procedures as first-class entities. Multiple-dispatch polymorphism requires some sort of mechanism for open value definitions, but may also be achieved in the term language.
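
For instance, a minimal sketch of single-dispatch polymorphism obtained from first-class values alone, with no dispatch on the type of an argument (the Shape names are illustrative):

  -- each "object" is a record of first-class behaviors; callers invoke
  -- whatever behavior the value itself carries
  data Shape = Shape { area :: Double, describe :: IO () }

  circle :: Double -> Shape
  circle r = Shape { area = pi * r * r, describe = putStrLn "circle" }

  square :: Double -> Shape
  square s = Shape { area = s * s, describe = putStrLn "square" }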

[This is] far too handwavy and hypothetical to get a hold on. Show me a concrete solution and the discussion may get somewhere.

I endeavor to do so, but I suspect it will be at least two more years before I have an implementation I'm willing to announce on LtU.

A Type System by any other name...

I endeavor to do so, but I suspect it will be at least two more years before I have an implementation I'm willing to announce on LtU.

Even a few lines of code (or even pseudo-code) might go a long way to clarifying what you mean.

Everything I've seen you suggest so far sounds like you have a type system, you just aren't making it explicit.

If all terms can be partitioned into groupings and you have no term building rules that range over all terms, that sounds to me like you have a de facto type system, just a very inexpressive one.

De-facto and Implied Type Systems

Everything I've seen you suggest so far sounds like you have a type system, you just aren't making it explicit.

As I've said, what the language and program leave undefined tends to imply that a type-system of some minimal complexity is necessary to analyze the program for safety. I.e. you could not analyze sealer/unsealer safety without phantom types or something of similar power.

But I refuse to say that I 'have' a type-system. There may be many type-systems of sufficient complexity to perform an analysis for a particular program, and I'm not making a commitment to any of them. I believe that commitment is a mistake, since it prevents the safety analysis from growing independently of the language.

I imagine a world where there are several type-systems that a safety-analysis utility might use to attempt to prove safety on sub-programs - similar to proof strategies in Coq. Depending on the structure of a particular sub-program, some of these may be more successful than others. This is inherently more flexible than being forced to infer a type from a particular type-system - one chosen in advance of knowing the sub-program - in order to prove (or achieve, in the case of typeful programming) safety.

Even a few lines of code (or even pseudo-code) might go a long way to clarifying what you mean.

This may be the case, but I have not yet managed to think of any examples that are simultaneously simple enough to express in a few lines of code and yet complex enough to distinguish from type-inference in 'a particular' type-system - much less motivate the distinction.

It might help, though, to say that, rather than defining structurally recursive types and inferring a fold-function, one might define a fold-function and thereby infer a structurally recursive type. That is, function determines form.
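
Haskell already exhibits this direction in miniature: write the fold with no annotations, and inference recovers the structure - GHC infers sumAll :: Num a => [a] -> a without any type being declared:

  -- the fold is written first; the nil-or-cons structure of its input
  -- is recovered from how the function consumes it
  sumAll xs = case xs of
    []     -> 0
    n : ns -> n + sumAll ns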

If all terms can be partitioned into groupings and you have no term building rules that range over all terms, that sounds to me like you have a de facto type system, just a very inexpressive one.

Trivially, if a record only has labels foo and bar, I cannot ask for baz. It's often quite easy to prove that a record input to a function lacks a 'baz' element, especially if the records are immutable (restricting mutability helps type-systems too).

Less trivially, if I write a function over two inputs that uses the first list to index the second, I could infer some fairly rich safety requirements, like 'a list of valid indices' and 'a list of at least the required length'.
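
Rendered as a sketch (Haskell's checker accepts this as written; the richer analysis described here would further demand a proof that every index is in range):

  -- the uses of 'is' and 'xs' imply the requirements "a list of valid
  -- indices" and "a list of sufficient length", with no annotations given
  select is xs = [ xs !! i | i <- is ]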

One of my own motivating examples is inferring proper safety of matrix adds and multiplies where a "matrix" is not a first-class structure (akin to how sets, labeled records, and coinductive-records are first-class structures), but rather is a list of lists whose greater structure is implied by the partial-functions one attempts to apply.

In some cases, it may be reasonable to support a single function in matching arbitrary structures and producing arbitrary structures (i.e. accepting a mix of natural numbers and strings, then producing a mix of functions, lists, sets, and records depending on the input). But such a program would almost certainly be far outside what the type-safety analysis tools provided with the language can prove safe, which would almost certainly condemn that program... at least for now. Fortunately for language developers, such a program is also fairly distant from anything programmers are likely to demand... at least initially.

I don't believe it fair to characterize this approach as 'inexpressive', or even to say that a specific type-system is implied. It would be fair to say that there is a certain minimum set of features any analysis would need to support - derived from both the language and how programs are commonly structured.

I agree, however, that there almost certainly will be a "de-facto" type-system. Like most de-facto things, this 'de-facto' type-system is based on popularity, and will depend on what safety-analysis tools are shipped with the most popular IDEs.

Equivalent problems

Type-inference is a harder problem than type-safety inference.

This may be the crux. I don't understand how you can claim that. Both are equivalent, the only difference is whether you throw away the result in the end.

Just to be sure, type inference does not require unique types. However, to be algorithmically tractable and compositional, you will typically want principal types, or at least something close. Otherwise your analysis will quickly explode. And this is the case no matter whether you are going to output the types in the end or not.

Not Generally Equivalent

This may be the crux. I don't understand how you can claim that. Both are equivalent, the only difference is whether you throw away the result in the end.

The two are not equivalent, but I wonder if there's a difference in assumptions.

With type-inference you are forced to pick a type-system before seeing the program. Then, upon seeing the program, you are required to "decide" on type-expressions from that type-system that will allow you to prove safety.

With a type-safety proof as a more general goal, you are not required to make any decision on type-system or logic in advance of seeing the program.

Only if all decidable logics could prove the same set of propositions would the above two approaches be equivalent.

Just to be sure, type inference does not require unique types.

True. Unique assignment of types is only needed if you're using certain forms of typeful programming, where program behavior depends on which types are ascribed to the term expressions.

to be algorithmically tractable and compositional, you will typically want principal types, or at least something close

Principal types don't much help in open distributed systems. If you can trust what the other guy tells you his outputs will be, and he can trust you in turn, then by nature you've got a closed system. Open distributed systems programming means that the interactions cross trust and security domains.

Besides, those 'closed-system' analysis techniques can be especially useful with automatic distribution. Distribution adds nothing fundamental to the safety analysis, but it allows one to increase the useful size of a 'sub-program' safety-checked, and thus improves the tooth-to-tail ratio of the closed-system safety analysis techniques. For hosting code, security matters more than safety, and certain security approaches (such as object-capability) depend very little on safety.

For safe program composition, principal type isn't so critical. It is not usually the case that we manage very rich types at the edges of a program anyway, and to whatever degree we can reduce those 'edges' (by spreading the gooey middle across multiple runtimes via automatic distribution) the need for 'rich types at the edges' may be further diminished.

As far as tractability: if a type-system wishes to simplify and make the analysis algorithmically tractable at the expense of proving fewer programs, that is an acceptable trade-off - but not a decision good for all programs.

Still equivalent

With a type-safety proof as a more general goal, you are not required to make any decision on type-system or logic in advance of seeing the program.

Of course you are. You have to implement it in advance. What you are saying is that you can hack up one ad-hoc hybrid type system that has rules to switch between different kinds of "modes". That does not make it a different problem, though. It solves none of the issues I was mentioning.

Also, I think this idea of just cooking up arbitrary type systems is fundamentally broken in terms of usability. How is the user supposed to understand type errors?

Principal types don't much help in open distributed systems.

Sorry, you lost me. What have principal types to do with open vs closed? The types at a boundary may be less descriptive in the open case, because you know less, but that has nothing to do with principality.

It is not usually the case that we manage very rich types at the edges of a program anyway

That, I believe, is a fundamentally wrong assumption. Rich types are most likely to pop up at the boundaries of libraries, components and such, because you want to make them maximally reusable, and thus maximally generic.

Open Systems Components

You have to implement it in advance. What you are saying is that you can hack up one ad-hoc hybrid type system that has rules to switch between different kinds of "modes".

What I am saying is that you can provide many logics and strategies for proving safety. This is not "one" ad-hoc type-system. Any one ad-hoc type-system would need to be internally consistent and sound, demonstrating progress and preservation.

As to whether you 'implement it in advance', I agree that it is likely the case that the type-system used by most programs is implemented in advance. Programmers will have a decision whether they will favor adjusting their program to be provable by the current set of strategies, vs. locating or creating new proof strategies that will better be able to prove their program. Most will choose to adjust the program.

That does not make it a different problem, though.

I am confident it does. [See discussion on pluggable and optional type systems.]

this idea of just cooking up arbitrary type systems is fundamentally broken in terms of usability. How is the user supposed to understand type errors?

The user doesn't need to understand a type-error. What the user needs to understand is a safety error. This means that the prover must provide a useful explanation of why an expression is unsafe in a given context (i.e. "could not prove that the Unit argument will be matched at this point in code"). This isn't much different than Lint and other tools needing to provide explanations for emitted 'problems'.

Besides, I think you're a bit optimistic if you feel most users understand the often arcane errors emitted by a C++ or Haskell compiler, even today.

What have principal types to do with open vs closed? The types at a boundary may be less descriptive in the open case, because you know less, but that has nothing to do with principality.

I'll need to disagree: the need for principality has everything to do with lacking knowledge about the context in which an expression is used.

If you know each context in which a given sub-expression will see use, then you do not need the principal type for the sub-expression because you may type it in context. There may be cases where a type you discover for the sub-expression happens to be the same as you'd discover if seeking a principal type, but - in principle - you do not need the principality.

Rich types are most likely to pop up at the boundaries of libraries, components and such, because you want to make them maximally reusable, and thus maximally generic.

There are approaches to "components" - even reusable and generic ones - that do not involve source-level separate compilation of "libraries". These alternative approaches also work - with a few simple design-patterns quite similar to what you'd see for libraries-as-plugins - for hot-swappable and upgradeable systems and for distributed systems.

I consider the common separately-compiled 'libraries' to be a harmful language feature - in part because it hinders optimizations (partial-evaluations, inlining, dead-code elimination), and in part because of security concerns (initial confinement, safety, authority), and in part because of other engineering concerns (such as developer control over sharing, persistence, instancing).

Thus, supposing you ignore "libraries" as a potential component-boundary (since I refuse to acknowledge libraries-as-components [for separate compilation] as a good thing for future language designs) I am curious if you can find a case where 'rich-types' will be heavily used across component boundaries. If it isn't clear to you, arbitrary "non-library" components can be found by examining publish-subscribe architectures, plugin architectures, complex event processing, Unix-style pipes-and-filters, blackboard metaphor, etc. And of these, in context, we're discussing open-systems variations (i.e. the third-party black-box or multi-process designs, though these architectures may also be expressed using objects within a program).

So... let's consider some rich types: linearity and exclusive transfer, region inference, dependent typing, uniqueness or existence constraints for pattern-matches inside a set, etc.

I can easily see how you might use these types to help 'document' an intended interface, but there are many ways to document interfaces. The relevant question is: How, and where, do we use these rich types across untrusted open-systems component boundaries to prove safety?
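
For concreteness, here is a minimal Haskell sketch (all names hypothetical) of that 'documentation' use: phantom type parameters encode a linearity-like open/closed protocol in a handle API. It documents the intended discipline nicely, but note that nothing in it helps a prover establish safety across an untrusted, separately-administered component boundary - and Haskell itself won't even enforce the linearity:

  -- Hypothetical handle API; the phantom parameter tracks protocol state.
  data Open
  data Closed

  newtype Handle state = Handle FilePath

  openFile :: FilePath -> IO (Handle Open)
  openFile path = return (Handle path)            -- stub implementation

  readLine :: Handle Open -> IO String            -- only typed for Open handles
  readLine (Handle _) = return "stub line"        -- stub implementation

  closeFile :: Handle Open -> IO (Handle Closed)  -- 'consumes' the Open handle
  closeFile (Handle path) = return (Handle path)
  -- Nothing stops a caller from using an Open handle after closing it;
  -- the types document the protocol without proving adherence to it.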

Separately-compiled 'libraries' considered harmful?

I don't really understand this perspective. Separate compilation needn't hinder optimization or any of the other engineering concerns (the output could, after all, just be type-checked source code). Deferring any static analysis until all libraries have been assembled into a closed component seems like a big wasted opportunity.

Refer to Informal Def

I had mentioned how I interpret 'separately compiled library' in an earlier post:

By "source-level separate compilation" I mean programming against a document that describes an API with the expectation that the implementation-code - in a "separate" compiled blob - will be "linked" to produce a program. Both of the cases you describe (static and dynamic loading of libraries) are forms of source-level separate compilation.

It seems you're taking a much broader (and largely theoretical) view of what 'separately compiled library' may mean. The practice (shared objects, archives, dynamic-link libraries, etc.) is that of distributing and maintaining near-opaque, lossy blobs along with an API document. And I dislike this practice for the many reasons listed above.

I would quite gladly support such things as caching pre-parsed source-code subject to annotations to enable rapid optimizations and safety proofs, and such. That is not the same as 'separately compiled libraries as software components'; rather, it would simply be an optimization in the IDE. The source code from which the cached forms derive would still be available.

Not an optimization

Static analysis of a library prior to its deployment isn't an optimization - it's a feature to help library authors. A type system gives a way to state and prove properties of your library in isolation from its eventual use context.

Static Analysis != Type System

Static analysis is useful, I agree. However, there are many approaches to supporting library developers that utilize static analysis and that require no type system.

Further, bothering to cache or persist any output from a static analysis is never anything but an optimization. After all, the not-optimized approach is to re-execute the analysis every single time the developer needs it.

I've been trying to explain in prior posts that there are many reasons libraries should not be used as software components - as something that might be "deployed" and then somehow "referenced" as part of some software configuration. The argument that type-systems might help achieve safer libraries in a 'deployment' scenario sounds to me a bit like arguing that adding a laser-level makes a shovel safer for use in surgery. I'll grant it could help, but it isn't exactly motivating me to invest in a type-system (or a laser-level).

Libraries aren't deployed?

Static analysis ... require(s) no type system.

I agree.

libraries should not be ... "deployed"

You've lost me here. When a new version of a container library is published (why not "deployed"?), types are good at documenting properties of its API and at mechanically verifying them.

Library binaries should not

Library binaries should not be "published" or "deployed" or any other word you want to use for the act of treating a library as an independent deliverable between administrative (security or trust) domains. There are other architectures for organizing software into components, and libraries as software components rarely compare favorably.

To be clear: the primary issue with libraries-as-deployables isn't "safety". At least two major problems with libraries and the systems that use them involve their use of ambient authority (i.e. a library is a repository of untrusted foreign code, but is run with more authority than the provider could have obtained by running the same code remotely), and their ad-hoc instancing semantics (esp. with "global" state, whatever that means) and the associated sharing/distribution/persistence issues.

Making libraries slightly more type-safe is not going to fix them.

If you're stuck with libraries, I can appreciate the goal of making them as safe to use as possible - especially since you need all that safety to avoid shooting yourself in the foot with the excessive ambient authority you're given. But we should be looking for ways to be rid of applications and libraries, as we write them today, entirely... especially in a discussion like the one in the OP.

The Problem with Custom Type Systems

What I am saying is that you can provide many logics and strategies for proving safety. This is not "one" ad-hoc type-system. Any one ad-hoc type-system will need to be internally consistent and sound, demonstrating progress and preservation.

And where will that be shown? Either you don't, or you pick a single, unifying metalogic/type system, no?

Presumably showing such

Presumably showing such things is left as an exercise for the user.

Not a real problem

Showing the soundness properties of a given type-system is pretty much optional; that is, a developer of a type-system need not prove the type-system from within the language. (If they needed to do so, they'd be at an impasse whenever they used an alternative logic or had to introduce new axioms, since they likely couldn't prove the logic itself.) For pluggable type-systems, proof-carrying code, and other features that require a trusted proof (e.g. for use by an optimizer), one may favor control by practical mechanisms outside the language definition, such as PKI-signature certification from a source trusted on the subject.

I can see how this might be construed as a problem if you were attempting to treat type-systems as program extensions rather than as IDE extensions. But it needn't be a problem in practice.

Perhaps you should ask a few questions in return: would a typical programmer care where the soundness of a type-system is shown? I've personally never asked where the validity of an optimization technique is proven. These questions rarely arise in the normal course of programming.

I think I get the gist of

I think I get the gist of what David is saying, so I'll try to put forth a concrete example: consider Microsoft Code Contracts. Code contracts are not expressed in or verified by the CLR type system, they are instead expressed as DSL terms which are compiled to CIL. The CIL is itself then processed by a tool with its own static analysis, with no relation to the CLR type system, to verify that all contracts are satisfied. It does so by looking for the special terms of its DSL. This would be an example of a custom post-hoc type system built with language terms.
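
For illustration only, a minimal Haskell sketch of the same architecture, with hypothetical 'requires'/'ensures' combinators: the contracts are ordinary terms of the language (here they degrade to runtime checks), and an external static tool could recognize exactly these terms and try to discharge them before deployment, entirely outside the host type system:

  -- Contracts as plain terms; 'requires' and 'ensures' are made up for this sketch.
  requires :: Bool -> a -> a
  requires c x = if c then x else error "precondition violated"

  ensures :: (a -> Bool) -> a -> a
  ensures p x = if p x then x else error "postcondition violated"

  -- Integer square root, with its contract stated in the term language:
  isqrt :: Int -> Int
  isqrt n =
    requires (n >= 0) $
    ensures (\r -> r * r <= n && n < (r + 1) * (r + 1)) $
    floor (sqrt (fromIntegral n :: Double))

A post-hoc checker, in the style of the Code Contracts verifier, would operate on these plain terms rather than on an extension of the host language's type system.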

What's a type?

Andreas Rossberg wrote:

If you remove types, program contexts will be able to observe behaviours they could not observe (at all) before.

Any given program observes the same behaviors regardless of whether it's typed. You seem to be taking the viewpoint that types are specifications and come before implementations. What about a Curry style type system where types describe properties of an existing program? Or what if we dispense with types altogether and prove program properties in some suitable logic? Something along the lines of replacing the type annotation,

f:Nat->Bool

with a proposition such as,

forall x. x:Nat -> (f x):Bool

I'm curious which of your comments you think would apply to such a scheme, or whether you would consider such a system effectively a type system.

Program Behavior and Types

Any given program observes the same behaviors regardless of whether it's typed.

That would depend on the programming language. Type-based dispatch - as achieved by Haskell's type classes and C++'s overloaded functions - is not even well defined in the absence of types, much less guaranteed to produce the same behaviors.
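
A small Haskell illustration of that point, using a made-up 'Default' class: the value computed is selected entirely by the type at which the expression is used, so erasing the types leaves the program without a defined meaning:

  class Default a where
    def :: a

  instance Default Int where
    def = 0

  instance Default Bool where
    def = False

  main :: IO ()
  main = do
    print (def :: Int)   -- prints 0
    print (def :: Bool)  -- prints False
    -- 'print def' alone would be rejected as ambiguous: no type, no dispatch.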

If we require that a program achieve the same behaviors regardless of whether it is typed, then it follows that the manifest type-system has no influence on program evaluation (including dispatch decisions). Effectively, 'typeful programming' styles are not possible.

That's the way I'm growing to favor it.

Typeful programming

Really depends what types we're talking about. If they're the logical entities describing (something like) sets of values, then types shouldn't affect evaluation. If on the other hand they're annotations used by a constraint solver to fill in some implicit parameters, then sure they can.

Closed programs are not interesting

Any given program observes the same behaviors regardless of whether it's typed.

As I said, this only matters for closed programs, which is a rather uninteresting case. The more relevant situation is where you rely on types to protect your module against other parts of a larger program that are not under your control. Without types, they could invoke arbitrary undesirable behaviour in your own module.

Open programs too

You seem to have missed my point (probably because I didn't really explain it), which is that this kind of modularity can be achieved without types. If I export a symbol f from my module, and the only thing I tell you about f is that,

forall x. x:Nat -> (f x):Bool

then the only way you can write a correct program that uses my f is to invoke it with a Nat value. If the language requires you prove your code correct, then you get something very close to type safety. It's a little different, since you could include the expression (f "string") in a valid program, so long as that expression is provably never evaluated.

An advantage to this approach is that it gives a simple semantics to the language even when all proof obligations haven't been met.

How does this differ?

How does this differ from a (dependent) type system? Relaxing the typing rules for provably dead code would be perfectly possible in such a type system as well, but why do you care?

Differences

The ability to call (f "string") is certainly not the point. I think it's a conceptually simpler and more accessible approach. In the logic I'm playing with, types denote sets of terms, so subtypes are trivial. It works well with the customizable type system I'm building (though you could probably use type theory as a back-end as well). Nothing goes through Curry-Howard. There are probably other differences I'm forgetting or haven't noticed. Hopefully nothing fatal...

Because Types are Logic

If one is using annotations in the type-system to specify a term that is then discovered by a theorem-prover and used in an evaluation, that is influence from the type-system upon the evaluation. I don't see how you could claim otherwise.

It's not influence on the evaluation. It says that the evaluation will always result in a certain thing, but it doesn't say how the evaluation is to take place. You can have a specification that says a function returns sorted lists. This type doesn't tell you what algorithm is used at all.

As for "type-safety inference", I think I mostly agree now that you've made it more explicit; however, I don't think this precludes the use of type theory at all.

The reason I mentioned 'staging' and 'partial-evaluation' earlier is that the only advantage I see from pushing this theorem-proving into a rich type-system is that the proof becomes 'static' like the rest of the type-system.

Proofs are not "static". Partial evaluation is simply proof normalisation, i.e. you restructure the proof-tree in some way. The part that is static is the type. This could allow a large degree of dynamism: any proof of the type could be substituted, provided the specification was a sufficient expression of what the program was supposed to do. If it isn't, we can still come up with equivalent proofs using partial evaluation. In the paper I linked, I show how you can even create new co-inductive principles by ensuring that the new proof (with the same type) is constructed in a way that preserves bisimilarity of terms (identical observable behaviour in all contexts). I think, as we continue to talk, that I agree with you, but that you don't understand how type theory can be used to do what you want. It seems to me that it's a very natural setting for it. Why natural? Because natural deduction proofs and proof normalisation (partial evaluation and/or supercompilation) seem to give you exactly what you say you want.

Well typing proofs are not "static"?

It says that the evaluation will always result in a certain thing, but it doesn't say how the evaluation is to take place.

Just keep in mind that one could say much the same thing about using an ADT or Object: "I said to push a value onto the stack, but I didn't say how the evaluation is to take place. The interface says nothing about what algorithm is used at all!" The fact is, I said to do something. That alone is part of the program evaluation, is it not? I agree the difference between specifying an interface and specifying observable constraints and post-conditions is significant, but either could occur in a term language in order to describe an evaluation.
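
To make the analogy concrete, a brief Haskell sketch: the stack interface below commits implementations to observable behaviour while saying nothing about representation or algorithm - and yet a call to 'push' is unambiguously part of the program's evaluation:

  -- An abstract stack: the signature pins down behaviour, not algorithm.
  class Stack s where
    empty :: s a
    push  :: a -> s a -> s a
    pop   :: s a -> Maybe (a, s a)

  -- One representation among many; callers cannot tell which is in use.
  newtype ListStack a = ListStack [a]

  instance Stack ListStack where
    empty                  = ListStack []
    push x (ListStack xs)  = ListStack (x : xs)
    pop (ListStack [])     = Nothing
    pop (ListStack (x:xs)) = Just (x, ListStack xs)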

But it is not worth further belaboring this point, lest we argue what 'influence' and 'evaluation' etc. mean.

Proofs are not "static".

I understand 'static' to strongly imply 'based only on information available at code-development time', whether or not the actual normalization is delayed. So, to me, you're saying that the 'proof' - which supposedly tells me my program is well-typed - isn't happening until after my running program has obtained necessary information from its environment. Is this right, or are we just talking past one another based on different understandings of the word 'static'?

In any case, I think logic programming belongs to the users, for normal evaluations at runtime based on dynamic inputs and data resources. Common implementations of type systems don't allow one to achieve this dynamism. It isn't that I don't understand how type-theory could be used for this purpose, but rather that I don't understand why you would use type-theory for this purpose, much less be eager about it, except for the 'proof normalizations' you might achieve by requiring the proofs be based entirely on information available at development-time.

Distributed Applications

Debugging distributed applications can be time-consuming. Any help in this quarter would save in-house IT resources an amazing amount of time.

It's not just the language, dummy

Eighty-some posts and no mention of the IDE's role. Programming hasn't been only about the language since at least Richard Gabriel's PhD thesis. The current crop of IDEs provides an incredible amount of assistance to the programmer. Of course, some will observe flippantly that, considering the abysmal state of the popular industrial languages in use today, developers need all the help they can get.

The point being: isn't it a triad of language, libraries, and tools? Of course the focus here is on languages, but this discussion demands a broader scope, imo.

oops

Others did mention tools and IDEs; I apologise for the oversight.

optimization and declarative programming

This is rather unformed, but bear with me...

In a functional language, there is very often a direct and expressive way to turn a specification into code. However, this direct and expressive way is often slow. In order to recover speed and move from a prototype to working code, we must pollute our lovely declarative world with icky algorithms. We can often package up and hide the algorithm, but what we end up with are then libraries of data structures, not libraries of algorithms.
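
A standard Haskell example of the trade-off, for concreteness: the direct transcription of 'remove duplicates' is lovely and quadratic, and recovering speed means reaching for an algorithm that then hides behind a data-structure library:

  import qualified Data.Set as Set

  -- The specification, almost verbatim - but O(n^2):
  nubSpec :: Eq a => [a] -> [a]
  nubSpec []     = []
  nubSpec (x:xs) = x : nubSpec (filter (/= x) xs)

  -- The optimized version - O(n log n), with the algorithm now entangled
  -- with a particular data structure:
  nubOrd :: Ord a => [a] -> [a]
  nubOrd = go Set.empty
    where
      go _ []     = []
      go seen (x:xs)
        | x `Set.member` seen = go seen xs
        | otherwise           = x : go (Set.insert x seen) xs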

If declarative code didn't need optimization, we'd get that order of magnitude. But I don't think this is only, or even largely, a problem of insufficiently optimized declarative data-structure libraries. To get the sorts of optimizations we need, there must be much more compiler support for specializing based on all available information.

One way to think of this problem: list comprehensions, set and map operations, etc. that can be expressed as SQL queries should be optimizable at least as well as those SQL queries.
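
For instance - a sketch with made-up 'customers' and 'orders' lists - the comprehension below is exactly a join-select-project query. Today it runs as a nested loop unless the programmer rewrites it by hand, whereas a database planner given the same query would choose a join strategy on its own:

  -- Hypothetical data: (customerId, name) and (customerId, orderTotal).
  customers :: [(Int, String)]
  customers = [(1, "Ann"), (2, "Bo")]

  orders :: [(Int, Double)]
  orders = [(1, 150.0), (2, 75.0), (1, 20.0)]

  -- SELECT name, total FROM customers JOIN orders ON ids WHERE total > 100
  bigSpenders :: [(String, Double)]
  bigSpenders =
    [ (name, total)
    | (cid,  name)  <- customers
    , (cid', total) <- orders
    , cid == cid'
    , total > 100
    ]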

i like the following related notes

on the sufficiently smart compiler, about some of the tiger traps which potentially lie in wait down such paths.

and this note saying power can be confusing, i think.

Performance

People often talk about performance, and how important it is that their programs perform well. They use this as an argument to avoid learning about what a certain language or tool could do for their productivity. It might be the wrong tool for the job, in which case they shouldn't use it, but this should be an informed judgement rather than a knee-jerk reaction.

I think that with the recent progress in static analysis of real world programs these are exciting times. We have barely scratched the surface of program transformations for pure functional languages: who knows how good the compilers will be tomorrow?

As a side note: many of the transformations that GHC does could be done in a strict language as well, which would avoid the problems with the programmer's mental model of execution.

As far as ad-hoc

As far as ad-hoc optimizations go, I think there's some truth to that. But what's more interesting are principled optimizations -- nobody complains about "sufficiently smart databases" and makes an issue of always writing out query plans by hand. That's rightly seen as a last resort, in the face of an insufficiently smart database. So what I'm saying is: if we can capture more about the meaning of our operations, and their domains, then we can probably move far beyond stream fusion (which is essentially about fusing traversals) into something much more powerful involving reordering - perhaps even, when necessary, deferring certain ordering choices to run-time, where they can be determined by the actual data in question. A language that could do this even with just pure operations over maps, sets, lists, and ADTs (i.e., just the relational algebra) would already be quite a leap forward.
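
GHC's RULES pragma gives a small taste of this, though only in an ad-hoc, library-author-driven way. Here is a sketch of one relational-algebra identity (selection pushdown) stated as a rewrite rule; note that which direction is profitable depends on the cost of 'f' and the selectivity of 'p' - exactly the cost-based decision a query planner makes automatically and a rule author must hard-code:

  module PushdownRules where

  -- Rewrite 'project then select' into 'select then project'; the two
  -- sides agree because the predicate is applied to the mapped value.
  {-# RULES
  "filter/map pushdown"  forall p f xs.  filter p (map f xs) = map f (filter (p . f) xs)
    #-}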

A different thesis on where these improvements come from.

I think that by looking at a "beautiful Haskell program" and asking how to make it 10x shorter, you're asking the wrong question. Haskell and Erlang will not (I expect) make beautiful C programs shorter -- if a program is sufficiently well-suited to C that the result can be called beautiful, then the C version will already be quite concise. No, what these languages make shorter are the ugly C programs, where you are fighting the C language in order to write the program, and have to include all sorts of goop that ought to be unnecessary.

The right question to ask, then, is: "What programs are still ugly to represent in all the languages we have today?" For bonus relevance, also ask, "And which of those are types of programs that we will be writing lots of in the future?" Making it possible to write those programs in a beautiful manner is where the 10x improvements come in.

I don't have a wide enough range of understanding to give a thorough answer to that question, but from where I sit there are three clear things that count: parallelism, parallelism, and parallelism.

Now, I know that Erlang is already addressing a lot of parallelism issues, but the problem of writing efficient parallel programs is bigger than one approach can solve.

pedantics: parallel vs. concurrent (& distributed)

i'd say that concurrency is important, too, not just parallelism. and then distribution is probably key long-term as well (and is more in the parallelism vein i'd hazard).

That's a good point. I was

That's a good point. I was somewhat sloppy there, and by "parallelism" what I really meant was the whole cluster of concepts around that, including concurrency and distribution of various flavors.