Why is there no PL with a mathematical type system?

Hello dear LTU community,

I would like to know why there is no programming language whose type system is modelled after, or "simulates", the mathematical world.

Let me briefly explain what I mean by a "mathematical type system":
A type system which has a very basic type "set"; built on this type, the more complex type "relation"; and built on that, the type "function".
Then this type system would have - instead of "integer"s or "real"s - the types "natural number", "irrational number" and "complex number".
This type system would be made complete by the operations on these types:
"set" would have: union, intersection, relative complement and also subset and power set (and some more),
"relation" would have: the operations from the relational algebra (and some more),
"function" would have: "composition", "preimage" and "inverse" (and some more)
the types of the numbers would get the algebraic operations on them.
(Maybe it is presumptuous but I think with this type system you would have some 80-90% of mathematics "captured" in your PL.)
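To make this a bit more concrete, here is a rough sketch of the layering I have in mind, written in Haskell purely as an illustration (all the names are mine, and everything is restricted to finite sets so that the operations can actually be computed):

import qualified Data.Set as S

-- The basic type "set", and on top of it "relation" and "function".
type Set a        = S.Set a
type Relation a b = Set (a, b)    -- a relation is a set of pairs
type Fn a b       = Relation a b  -- a function is a special (single-valued) relation

union, intersection :: Ord a => Set a -> Set a -> Set a
union        = S.union
intersection = S.intersection

subset :: Ord a => Set a -> Set a -> Bool
subset = S.isSubsetOf

-- relational composition: { (a,c) | (a,b) in r, (b,c) in s }
compose :: (Ord a, Ord b, Ord c) => Relation a b -> Relation b c -> Relation a c
compose r s = S.fromList [ (a, c) | (a, b)  <- S.toList r
                                  , (b', c) <- S.toList s, b == b' ]

-- apply a "function" (a functional relation) to an argument, if it is defined there
apply :: (Ord a, Ord b) => Fn a b -> a -> Maybe b
apply f x = lookup x (S.toList f)

Of course this only works because the sets are finite.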

Like anyone who looks at the scientific world, I too see that the world is modelled using mathematics. Natural scientists use it, engineers use it, and there is, for example, also mathematical finance. But we programmers don't use mathematics. We translate mathematical objects like "natural numbers" into "integers", and we don't have a type named "function" as it is defined in mathematics. Our functions don't have derivatives. Programmers don't have set theory at their fingertips, nor anything like analysis.

I can imagine some advantages if we had such a type system:
a) everyone in the world would learn programming by learning mathematics
b) everyone who can come up with a mathematical model for his field, his needs (and so on) could just write a program
c) I think it would be much easier to reach the point where the programmer "draws" a model and the computer calculates the "source".

Yet we don't have such a type system, even though there are intelligent and experienced PL developers who gave us
all the widely known PLs (C/C++, Haskell, assembler and so on).
And this is the reason for my question: why has no one come up with this kind of
type system? What am I overlooking that makes it impossible to build such a type system into a PL?

regards
Gueven


Can Epigram be nominated?

Epigram, "Making dependent types matter."

It is certainly more mathematical than my daily allowance; can it be mathematical enough for you?

Sounds Like...

Ontic or Concoqtion. If you're willing to abuse your definition of "programming language" a bit, any proof assistant using intuitionistic logic can extract code from the proofs, at least in principle; Coq can in practice, with the target languages supported today being O'Caml, Haskell, and Scheme. According to Adam Chlipala, this might not be as crazy as it sounds at first blush.

Type theory vs Set theory

I think what you are looking for is something along the lines of the B method, where you specify using set theory, with relations, injectivity of functions, power sets and the like, and go down to code using refinement steps, where your relations become functions and, more generally, everything becomes deterministic. E.g. at the abstract level you may say "this function returns any x such that P from any y such that Q", and at a more concrete level you show how to compute one such x from a particular y. This refinement mechanism and the whole theory behind it are needed because you don't have Curry-Howard in this setting. The actual concrete language is not really important; it is not so tied to the specification language.

What type theory allows is to associate terms/code with specifications, i.e. for every spec, if it is computable, you will be able to build a term inhabiting it (within the limits imposed by Turing's and Gödel's results). In set theory, it is less clear which term language you should use to implement your specifications.

Programming languages more closely resemble emanations from the type-theoretic world (Epigram, Coq or Agda) because they are already based on a (much weaker) type theory and not on a set theory.

An interesting point is that proof assistants like Coq are supposedly used by programmers and mathematicians alike, hence they strive to reconcile type theory and classical set theory.
If you're a mathematician, the fact that Coq's or Agda's model is compatible with classical axioms will please you. It is not accidental at all :). In fact Coq is even supposed to be usable, to a large extent, using only set-theoretic notions.

On the other hand, if you're into programming the story is not as nice, because giving an intuitionistic/constructivist/"computable" content to some set-theoretic axioms and constructs is not an easy task; in fact it is an active research field (see, e.g., Krivine's work). The point is that, at present, you can't compute with code using these constructs, but that may yet change; apparently Krivine has a realization of the axiom of choice (disclaimer: I am not an expert on these matters).

So if you look at B you already have a programming environment based on set theory, and tools based on type theory may some day be able to deal with usual set theory while keeping the Curry-Howard isomorphism, and hence give rise to the set-theoretic programming language you ask for.

Z & Prolog

I recall that in the early 90s there were a few researchers interested in using Prolog to semi-automate the extraction of code from specifications in Z (the ancestor of B). This would fit the poster's requirements, I think.

Because many mathematical constructs are hardly computable

How do you compute the inverse of a function? If it's not supposed to be computed, what is it doing in a programming language?

How do you represent an arbitrary set? It's hard even when restricted to subsets of the natural numbers. If you think it's easy, please consider how to check whether a given set is empty.
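For instance, the most general representation of a subset of the naturals is its characteristic function; then membership is easy, but emptiness cannot be decided: the best you can do is search for a witness, and if the set really is empty the search never terminates. A small Haskell sketch (just an illustration, one possible representation among many):

-- an arbitrary subset of the naturals, given by its characteristic function
type NatSet = Integer -> Bool

evens :: NatSet
evens n = n `mod` 2 == 0

member :: Integer -> NatSet -> Bool
member n s = s n              -- easy

-- the best we can do for (non-)emptiness: search for a witness.
-- If the set really is empty, this never terminates.
nonEmptyWitness :: NatSet -> Integer
nonEmptyWitness s = head (filter s [0 ..])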

But how much do we really care?

Most people using mathematics know that many constructs are not automatically computable. But, in practice, there are constructions for most of the things that one really cares about anyway. The constructions are sometimes not universal, sure, so automation is out. That's what makes it fun!

I really love (automatic) type inference. But I would be even happier with a type system which was mostly automatic and, once in a while, ``asked for help'' (not interactively). Then I would have to provide more evidence for what I am asserting to be true (assuming I am right!).

Some proof systems try quite hard to walk that pragmatic fine line -- I have the impression that that is where PVS tries to go. That is where IMPS definitely went. Coq is impressive, but I get a pre-IO-monad-Haskell kind of vibe...

Worse is better

In any high level language there is already a working mode that does what you require: when the computation gets stuck, raise an exception. This is also a request for help.

Since I'm thinking about this topic more shallowly than most of you, I would propose a worse-is-better strategy: drop the requirement of global consistency with the axiomatic base and work with abstract interpretation as a conventional programming technique. The latter means that any programmer can write down terms like:


even a;
odd b;
c = a+4*b; // no prior value assignment to a and b necessary

and perform checks

type(c) -> even

It is only required that the type rules used to determine the type of c are explicitly stated in the source code. This can be done, for example, using operator overloading in conventional OO languages, with the checks performed at runtime. These rules might be quite delicate in practice (testing set membership), but as you said, performing delicate computations is programmers' business anyway. No one really bothers that there is no Ueber-algorithm.
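A small sketch of what such runtime type rules could look like, written in Haskell with type-class overloading instead of OO operator overloading (the names and the parity domain are just my example):

-- parity as an abstract domain; the "type rules" for + and * are stated
-- directly in application code by overloading the arithmetic operators
data Parity = Even | Odd | Unknown deriving (Eq, Show)

instance Num Parity where
  Even + Even = Even
  Odd  + Odd  = Even
  Even + Odd  = Odd
  Odd  + Even = Odd
  _    + _    = Unknown

  Even * _    = Even
  _    * Even = Even
  Odd  * Odd  = Odd
  _    * _    = Unknown

  fromInteger n = if even n then Even else Odd
  negate   = id        -- negation preserves parity
  abs      = id        -- so does absolute value
  signum _ = Unknown

-- the "even a; odd b; c = a + 4*b" example from above:
a, b, c :: Parity
a = Even
b = Odd
c = a + 4 * b   -- the literal 4 becomes Even, so c evaluates to Even

Running this reproduces the check above: the "type" of c comes out as even.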

Proving that the type rules stated in application code are sound is what theorem provers are about. Instead of dropping this requirement completely, it might be delayed. Of course this would induce another split within the PL communities (just like that between static and dynamic typing - maybe it's even the same split), but I do think we can perfectly well live with it.

PVS and Coq

Indeed PVS has a method of doing type-checking and proving simultaneously, almost hiding it from the user completely using automation (in simple examples). In Coq, this is harder because proofs are concrete objects of the calculus and manipulating them is awkward but necessary. However, we can build a PVS-like mode on top of Coq where the user does not write proof-handling code and the type system infers as much as possible, leaving some proofs as obligations to be solved automatically or interactively, depending on whether or not you have a decision procedure for the corresponding theory. This is exactly what the new Program extension of Coq does. Here's an example of using it to program quicksort.

How is that "linked to reality" ?

Sir, can you please make it clear to me how your more than 100 lines of quicksort are usable in "reality"? Show me one top-notch programmer out there (send it to Stroustrup and post his answer here, or something like that) who can understand this "monster" right off the bat.

It is more and more clear to me that I couldn't make my ideas (or rather, thoughts) clear to anyone here: I am in search of a very elegant solution for "real world" programming.
So one day I had these thoughts, which I wanted to discuss with people who are programming language geeks: "Would it be a good solution to make the structures/objects of mathematics directly accessible, one to one, to a programmer?"

But now I see: you are in one world, and I am in another.

All you are talking about is proofs and formal methods, which are really important, I know that. I am in search of a totally different thing.

I want something which everyone from small kids to financial professionals to physicians to medics to scientists can use. Easy, clear, learnable in 10 minutes (because you already learned it - the basics - in school).

Maybe you should state

Maybe you should state more clearly what you expect from a type system, and why you mentioned type systems at all? The main purpose of a type system is to rule out certain classes of expressions as being ill typed. This makes reasoning about program source code simpler, which leads, among other things, to compiler optimizations. So when you say that programming shall rely on set theory (reasonable hints were given in the comments for existing implementations of modeling languages based on the Z formalism), one might ask why?

If your concern is that programmers have easy access to programming language constructs, in the tradition of modern scripting languages, why don't you drop the requirement that sets, and type systems based on them, play a foundational role, especially when it has been shown to you that these lead to rather arcane tools? Maybe you should work out a simple prototype yourself, in Java, Ruby or any other of your favourite languages, to become clear about your own motives and about what is possible and what is not.

Easily learnable abstract pure math

I want something which everyone from small kids to financial professionals to physicians to medics to scientists can use. Easy, clear, learnable in 10 minutes (because you already learned it - the basics - in school).

I don't think an axiomatic set-theoretic approach to fundamental concepts is easily learnable in 10 minutes, or necessarily suitable for physicians and financial professionals. I think you need to consider that alternative foundational approaches, such as that of topos theory, provide more programming-friendly basics -- indeed, an elementary topos is equivalent to a local language for a local set theory built using Martin-Löf type theory. As others have pointed out, things like Ontic and Epigram provide a type system that fits very naturally with such a foundation. If all you really want is the gloss of mathematical feel, then maybe something like Fortress would better suit you?

What's reality ?

First off, pardon the overly polemical topic. I too think there is a gross misunderstanding here.
I linked to this certified implementation of quicksort as an example of using Coq in response to J. Carette's post, and in no way did I imply it was a response to your problem.

Now, this implementation of quicksort is 10 lines long, the proof that it is indeed doing what you want takes up the rest of the file, and I'm pretty sure you and Stroustrup could understand the meaning of the two "Program Fixpoint" definitions. The monstrosity of the proof scripts that I left in, I'll have to concede, though :)
This program can be used as part of a larger development of certified software; that's how it is usable in reality.

About your question, I think what you want is a programming environment like the one I described (from "my world"), because set theory simply isn't directly implementable (in "your world"), as other people pointed out. Basically the problem is that you have lots of nondeterminism and non-computable objects, which do not fit well with programming, which is mostly about giving a deterministic recipe to solve a problem. While set theory may seem so simple that every scientist knows it well, it is not simple from the point of view of computing. There is a gap between this abstract and flexible formalism and our ways of computing. Others have already pointed out numerous systems that bridge it in one way or another.

Thanks

And, as a response to my post, I quite liked what you said. For one thing, it confirmed my impression, which is always a good thing :).

There are two really under-appreciated proof systems out there: AUTOMATH and IMPS. [Yep, they are both old, and essentially dead]. PVS in some ways seems to have learned the most from IMPS. Or at least re-invented many of the same ideas. Some of the ideas in AUTOMATH (for structuring large documents) I have not seen anywhere else! Even things like Telescopes are just now being fully appreciated.

Which Telescopes ?

I wonder what telescopes you are talking about? Is it the concept of typing contexts for dependent types, which is present in Coq in the form of Section variables, which help parameterize a part of your development by some set of variables and definitions (a context),
or is it something else? I couldn't find anything after (quickly) glancing at what remains of AUTOMATH on the net.

Automath

Telescopes

What I meant were explicit towers of dependent type extensions, each of whose levels could be used in proofs (and the lower the level, the simpler the proof obligations).

What you describe from Coq does sound related -- but I don't (yet!) know Coq well enough to be sure. Thanks for the pointer though.

Making things clearer (I hope)

First I want to thank everyone for their answers and efforts to give me links to interesting PL projects.

But I think I could not write clearly enough:
as ingenious as all these PL projects you linked me to are, they are far too "abstract". I cannot imagine how to use these
PLs to build, for example, what we today call a "business application".

As you can easily think through: we can model a "business application" (for an easy example, an e-business electronic shop) just using
mathematical "objects": sets and (basic) operations on these sets (which combined give us rings and groups), logical operations on numbers (these we have in today's PLs) and on sets (these we do not have in today's PLs), graphs (I don't know of any PL which has "graph" as a type) and, as a special case of these, trees (which, combined with the logical operators on sets, give us automata and Petri nets), and maybe some more.
For example, you can model "processes" in the e-biz shop app with the automata and Petri nets, which are "sub-classes" of sets with logical operations on them.

And - as an answer to Qrczak - I want these basic types with their operations only as far as calculations (and implementations) can actually be made (not as "unproductive" solutions to riddles some logicians gave us a hundred years ago).

For another example: we have many "relational" structures in our world. But to implement these on computers one has to use a specific product (relational databases). There is no PL (outside of SQL, which binds directly to an RDBMS) which has the full relational algebra available for direct use.
But I think that with a PL which has sets and their logical operations as types, which defines relations using these sets with their operations, and which defines functions as special relations, one could very, very easily come up with a full relational algebra in this PL. Right?

And with this PL, which gives a programmer a type "relation" with its operations "join", "project" and so on, a programmer would be far more productive, because he could model the real world more easily. Right?
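Just to sketch what I imagine (in Haskell, only as an illustration; the names are mine): once you have sets, you can define relations as sets of rows and get "project" and "join" almost for free.

import qualified Data.Set as S

-- a relation as a set of (key, value) rows
type Rel k v = S.Set (k, v)

project :: (Ord k, Ord v) => Rel k v -> S.Set k
project = S.map fst

-- natural join on the shared key
join :: (Ord k, Ord v, Ord w) => Rel k v -> Rel k w -> S.Set (k, v, w)
join r s = S.fromList [ (k, v, w) | (k, v)  <- S.toList r
                                  , (k', w) <- S.toList s, k == k' ]

customers :: Rel Int String
customers = S.fromList [(1, "Alice"), (2, "Bob")]

orders :: Rel Int String
orders = S.fromList [(1, "book"), (1, "pen"), (2, "lamp")]

-- join customers orders
--   ==> {(1,"Alice","book"), (1,"Alice","pen"), (2,"Bob","lamp")}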

In the end I am (and was) thinking of taking the things from mathematics (sets, rings, groups, graphs, relations, functions, ...) one to one into a PL, such as Perl, in order to
a) make it easy for everyone who learns mathematics in school (not even high school or university) to program (because he could use these things directly in programming), and
b) make it much easier to model things, since many people already model the world in mathematical structures.

In my next posting I am going to give you an example, by implementing such a "mathematical world" in (or, for the first example, on top of) a known PL, to make it very clear what I mean.

[LtU Policies]

In my next posting I am going to give you an example, by implementing such a "mathematical world" in (or, for the first example, on top of) a known PL, to make it very clear what I mean.

Before posting along these lines, I'd suggest a visit to the LtU Policies (especially points 4 and 5 on Rules of Discussion). LtU is not really a good place to be laying out design ideas from scratch.

Some of these things DO exist.

Aldor has a large library of types built up around mathematical constructs, as you describe (it started life as the scripting/extension language for a computer algebra system). It may be of interest to you.

By the way, it's untrue to say that today's PLs don't support sets. For example, when I learnt Pascal (Borland's Turbo Pascal 5.5, ca. 15 years ago now) it had them as a built-in language construct. Language-level support for relational algebra is also emerging: see C# 3.0, for example, with its "LINQ" technology.

Hope that helps.

Except that

Aldor is purely signature based, and even though it does have some support for dependent types, there is no type-level evaluation, so you can't have axioms as part of your type :-(. There is a variant of Aldor which can do this, but as far as I know it will not get merged into the main branch. So you can't have a proper definition of a field (or even a group). Rather like Haskell, where monads are syntactic objects even though everyone treats them as semantic. Very dangerous.

OBJ

This also reminds me of the OBJ family of languages which may also bear looking at.

OBJ and CASL and SpecWare and ...

Yes, all of those are worth a close look. The only downside is that it is very hard to figure out what kind of tool support exists for all of them. Basically they need the equivalent of a theorem-proving system as an underlying tool, and it is not clear which of them have that. Without decent tool support, most of those specification languages are, IMHO, much too complex to work with ``in the abstract'', as it were.

Tool support for OBJ, SpecWare, CASL and friends

CafeOBJ and Maude both provide analysis/evaluation environments for OBJ-style specifications. The language associated with SpecWare is Slang (or MetaSlang?), but I'm not sure if there are any publicly available SpecWare tools. I'm less familiar with CASL, but alternatives like Isabelle, HOL and PVS provide excellent automated analysis support.

CASL tools

CASL is supported by Hets, which links up CASL specs (and other extensions, such as HasCASL and CoCASL) to a variety of tools, including Isabelle as a theorem prover.

Magma

If you want something that natively handles groups, rings, semigroups, monoids, fields, polynomial rings, etc. as types then you probably want Magma, which provides an exceedingly rich mathematical type system.

Algebra, Geometry, Logic: yes. Analysis: no

Magma is great. And it does way more than algebra. Although as far as I can tell it does not have proper dependent types, so that it can't catch trivial errors (like adding 2 matrices with non-equal dimensions). So that's not that mathematical.

And Magma does not do analysis. Axiom does -- but it cheats: it uses an Expression type, which is well known in type theory as a universal type, ie a type where you can stuff in anything you want. So that's not very satisfying. Magma has, until now, avoided the issue by not trying to do any analysis (ie calculus). My spin on that is that the way humans perform calculus kinds of computations is via an intensional rather than an extensional logic. People doing programming languages have started using modal logic(s), but not full-blown intensional logic (yet).

Guess why type-system-wizard Oleg had to resort to Template Haskell when he did his (second) version of a typed symbolic differentiator? That would be because there is no way to do it in Haskell proper. Sure, he had a rather clever first version, but that did not scale -- simplification could not be performed. When you try to mix the two, you are forced to escape from the type system. Or at least, that's the current state-of-the-art.

Expression-to-expression symbolic integration is much, much worse. And people who buy things like Maple and Mathematica and MuPAD really want integration, summation and differential equation solving (ie expression-to-expression). And, lo and behold, there are no typed systems which offer this properly (Axiom's use of the Expression ``type'' is tantamount to giving up on types, so that does not count).

Analysis is hard...

The catch with calculus is that it is inextricably tied to the infinite, which, in general, computers don't necessarily handle that well. The Risch algorithm for symbolic integration, for instance, cheats by converting the problem into an algebraic one. I do wonder if a different approach, say smooth infinitesimal analysis, which has a conception of the continuum that is perhaps more amenable to computation, might be the way to go. Of course you're changing your logic to do that, and doing non-standard analysis as a result, but it might be something...

Finally someone who recognizes the cheat!

I thought I was alone in recognizing that the Risch algorithm was a cheat, since it doesn't solve the right problem! [I've talked about that in the past; there are some slides on my publications page]. In my opinion, Risch introduces more problems than it solves... although it is still an amazing scientific result.

Smooth infinitesimal analysis, in some guises, is already gaining some ground in this area: there are purely constructive versions of this theory (see the Wikipedia page and links therein), but the category theory required is beyond me.

Except that this approach denies the existence of discontinuous functions! And, as it turns out, I rather like some kinds of discontinuous functions. So I am rather torn...

Discontinuous functions

Well, when you conceive of the continuum in the manner required for smooth infinitesimal analysis, which, as you rightly point out, has completely constructive approaches, the fact that all functions are uniformly continuous falls out as an immediate necessity. It is a different continuum. These are, unfortunately, the trade-offs one has to make. Constructive analysis and discontinuous functions just don't mix (at least not very easily). It's really a case of "choose your poison".

Finding the right level [OT: uses physics to make a point]

The catch with calculus is that it is inextricably tied to the infinite, which, in general, computers don't necessarily handle that well.

While it is true that computers and infinite things don't go well together, it's not entirely true that calculus demands a continuum. It's a question of what you're trying to solve. My background is primarily computational physics, where often the fastest algorithm is one that states the problem in the most natural way for a computer. Of course, the biggest roadblock is usually the demands of physical consistency.

One of the lessons that I'm still learning is that physics has less use for the real numbers and a continuum than I used to think. For example, the classical formulation of electromagnetism requires fields, which are basically point-defined quantities over all of spacetime. The formulas involve partial differentiation with respect to all sorts of variables. However, all that is really obscuring what's going on: if I have a chunk of spacetime, then the amount of "stuff" that goes in is equal to the amount of stuff that comes back out. The latter picture can be applied to a spacetime tessellated with triangles as well as to arbitrary surfaces imagined in an R^4 spacetime. I just assign to each face a number representing the flux, and can get the amount of stuff in a volume by just adding up the faces bounding that volume. There's a whole algebra available, which parallels differential geometry over continuum manifolds.

Similar things are possible with fluid dynamics, elastic materials simulations, or even general mechanics (though it gets more abstract, and does not necessarily yield a computationally convenient view). The point is that actual analysis, vector calculus, etc. aren't really necessary for a lot of the physics.

There's a lot of work in physics currently towards an algebraic approach to things, so that physical systems are defined by an operator algebra. Actual computations are done by turning that algebra into a "representation" with representation theory. However, currently we can really only tackle turning groups into matrices, and even then the matrices aren't necessarily suitable for efficient computation. The biggest impediment to directly converting the operator algebra into a computational format is that no language currently allows easy and efficient quotient types.

All true, but approximations...

That's all true, but it is essentially just an approximation to calculus -- you are approximating the continuous space with a discrete one that allows for discrete calculation which can be performed on a computer. It is the same with, for example, numerical methods for solving PDEs.

What you are talking about is the degree of precision you require in your final result. If your approximation results in smaller uncertainty/error than the required precision, then yes, the approximation is just as good as the calculus. The catch, however, is that things don't always behave nicely. Numerical solutions to PDEs, for example, tend to behave nicely at the start, but the error balloons dramatically as things progress. There are a lot of problems where this doesn't cause any trouble, but there are just as many problems where it makes approximation/numerical techniques useless. You tend to get a skewed perspective on the relative frequency of the two classes because we do a lot more of the former (since we can do them), and do our darndest to ignore or simplify away the latter (since if they crop up, we essentially have a roadblock).

Calculus is the approximation, not vice versa

Take for example fluid dynamics. Fluids are actually composed of millions of tiny particles. But we pretend that they're continuous, because then we can use calculus and it's more mathematically tractable. The numerical methods used to solve PDEs are actually probably closer to the real physics than the calculus that they're approximating. (I'm thinking for example of finite difference methods).

True enough

But when you discretize a PDE you are not reducing back to those untold numbers of particles, but simply approximating an approximation. Further, you also have to argue that time is discrete for other numerical methods, which is certainly an open conjecture. I don't see that you are actually getting closer to "the real physics", just finding a different gross approximation for it.

But we don't need it

My main thrust, perhaps somewhat lost, is that calculus as we usually approach it, based on the concept of functions evaluable at points, is not natural for describing physics. Rather, already-integrated quantities, like flux through a given surface, are the natural level at which to see things; for example, in this case, instead of a separate field and the geometry that we consider, we only ever deal with a (formal) product of the two. This particular thread runs back at least to Faraday, who described electrodynamics in terms of "force loops", i.e. potential around a loop. More recently, we see that quantum gravity has started to take this as its point of departure, in particular loop quantum gravity, which posits that the fundamental observables of space are measured over loops. Now, for me, coming strictly from a computational point of view, the finer details aren't relevant; nevertheless, naturally discrete versions of things are just easier, and usually more numerically robust, to implement.

To be more on topic: I think physics is undergoing a somewhat structural revolution, where we start focusing more on algebraic methods to describe things. Unfortunately, we currently don't have languages well suited to algebra -- algebraic identities are a pain. Personally, I think that decent, efficient quotient data types would solve a lot of things, but I have no idea how to implement that... *grin*
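To make the pain concrete (a made-up Haskell sketch, not a real quotient type): the usual workaround today is to pick a normal form and hope that every operation remembers to re-normalize, which is exactly the discipline a real quotient type would enforce for us.

-- faking the quotient Z/5Z by normalizing representatives
newtype Mod5 = Mod5 Integer deriving (Eq, Show)

mkMod5 :: Integer -> Mod5
mkMod5 k = Mod5 (k `mod` 5)        -- the chosen normal form

addMod5 :: Mod5 -> Mod5 -> Mod5
addMod5 (Mod5 a) (Mod5 b) = mkMod5 (a + b)

-- nothing stops a careless definition from producing Mod5 7,
-- which mathematically equals Mod5 2 but is distinct for (==)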

Addendum: admins, please say if all this physics is inappropriate for our forum.

Quotient Types (again)

Personally, I think that decent, efficient quotient data types would solve a lot of things, but I have no idea how to implement that... *grin*

I've mentioned a paper here and other places on LtU that talks about Quotient types in polymorphic programs.

Cellular automata explicitly

Cellular automata explicitly model particles (very approximately). Finite difference methods are closely related to cellular automata. So actually, you do kind of end up back to those "untold numbers of particles".

Elaboration of why analysis is hard.

Maybe the answer I deserve is "go study", but I find this line of comments about calculus being hard interesting, and I cannot yet grasp the reasons for it. I can imagine how infinity may have something to do with it, but I don't really get it. Or is it because you must talk about formulas to define the process of computation?

Can anyone please elaborate a bit on this and maybe give some easy pointers to read about it?

Thanks in advance.

Lucio.

Why analysis is hard (computationally)

Infinity is tricky. There are ways we can deal with it computationally, but it depends on exactly how it arises. The practical way to deal with infinity is to discuss the arbitrarily large. Thus we can have a Turing machine with an infinite tape, which simply means we can have an arbitrarily large position on the tape. Equally we could have a list of infinite length presuming that we only ever need to address one element at a time -- we can ask for the nth element where n is arbitrarily large and all will be fine. Infinity on an "as needed" basis works computationally because we don't need to ever actually get to infinity, just "as big as we like", and thus we can work constructively.
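In a lazy language this "as needed" infinity is entirely ordinary; a trivial Haskell illustration:

-- the whole list is never materialized; we only ever force an
-- arbitrarily large finite prefix or position
naturals :: [Integer]
naturals = [0 ..]

firstTen :: [Integer]
firstTen = take 10 naturals      -- fine

nth :: Int -> Integer
nth n = naturals !! n            -- fine for any concrete n, however large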

Things start to go off the rails when it comes to calculus however, because now we are dealing with a continuum, and our computations have infinity as a whole intrinsically bound up within them. Both differentiation and integration are the limit of some infinite process, thus "arbitrarily large" isn't good enough anymore; to actually get the answer we need to get all the way to a completed infinity, which you'll never achieve computationally.

In practice we do things like symbolic differentiation and integration by converting them into algebraic problems manipulating expressions, rather than as analysis problems dealing with functions over a continuum. Thus we end up with "expression" types as Jacques points out, rather than proper function types as one would expect.

But why is expression manipulation so effective?

The question that I have been struggling with for over 10 years now is: when analysis is most definitely not algebra, why are algebraic manipulations so often effective in giving answers to problems in analysis?

More precisely, given a specific problem (ie explicit formulas) and an explicit transformation to perform, often algebraic techniques work. On the whole, it seems that this works a little too well.

My current conjecture is that this has to do with 2 things: model theory and Kolmogorov Complexity. Basically, when the problem (and its answer) is small (small via KC, and model theory comes in because we were able to write it down at all!), then there is so much rigidity around that the difficult analytic problem is nevertheless appropriately captured by algebra.

Contrast the following:
1) in Type 2 theory of constructive functions, all functions are continuous, but differentiation is uncomputable.
2) using ``expressions'' to represent functions, differentiation is trivial (an easily written term-rewrite system).
The thing is that view #1 represents functions via approximations, and so builds in a strong sense of the infinite -- not strong enough to lose continuity, but strong enough to lose differentiation. View #2 allows you to write down some nonsense functions, but since most of the basic building blocks are analytic, you tend to end up with differentiable results (unless you get complete nonsense).
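For concreteness, here is roughly what view #2 looks like, as a toy Haskell sketch (a real system would have many more cases and would also simplify):

-- differentiation as a term rewrite on an "expression" type:
-- easy to write, but nothing stops the type from containing nonsense
data Expr = X
          | Const Double
          | Add Expr Expr
          | Mul Expr Expr
          | Sin Expr
          | Cos Expr
  deriving Show

diff :: Expr -> Expr
diff X         = Const 1
diff (Const _) = Const 0
diff (Add u v) = Add (diff u) (diff v)
diff (Mul u v) = Add (Mul (diff u) v) (Mul u (diff v))      -- product rule
diff (Sin u)   = Mul (Cos u) (diff u)                       -- chain rule
diff (Cos u)   = Mul (Const (-1)) (Mul (Sin u) (diff u))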

My current favourite expression is 1/(1/(x-x)). Most computer algebra systems believe this is the constant 0. Many compilers do too. Of course, strict operational semantics give _|_ everywhere.

My second favourite is probably (x-sqrt(x^2))*(x+sqrt(x^2)), which shows that ``expressions'', even rather simple ones which represent total functions, contain zero-divisors (when interpreted operationally), so do not form a nice algebraic structure at all.

Another Possible Example

Perhaps something like Kanren would be interesting to you. It really does seem to me, perhaps naïvely, that you want Kanren or Ontic.

Or (for one part) logical languages in general

Logical languages let you speak of "relations". In fact, logic languages (e.g. Prolog) or similar things are often used, or desired, as query languages (as opposed to SQL). As one example, Escher, a functional logic language, has interesting support for sets.

Create it!

If you are interested in this thread, then I have a workshop just for you (discussed before). Deadline (papers and position papers) is now May 02, 2007.