Do Logic Languages Need Negation?

I have been working on an interpreter for a mini-subset of Prolog as I find it interesting how logic languages hold a central place in proof-assistants, theorem-provers and type-systems.

What I am interested in is a minimal and sound dialect of Prolog to serve as a starting point for further investigation. There are several design points to discuss, but negation appears to be the most fundamental. As I am interested in soundness I want to operate in a Herbrand universe, which means unification with an occurs check.
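For example, with the occurs check a cyclic equation is simply unprovable; a sketch of the intended behaviour (most Prologs omit the check and would instead loop or build an infinite rational term):

:- X = f(X).
no.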

The problem with negation seems to stem from the Prolog programming model. It appears to me that at some point "failure/unsat" and "falseness" (and "success/sat" and "truthfulness") have become confused with each other. This leads to negation as failure, the unsoundness of which is easily shown:

:- not (not (X = 0)), not (not (X = 1)).
yes.

The inner not(X = 0) fails (because X = 0 succeeds), and failing undoes the binding of X; the outer not then succeeds because its argument failed. The same happens for X = 1, so the query answers yes with X unbound, even though under any logical reading no X can be both 0 and 1.

Various solutions to this have been proposed: three-valued (Kleene) logic, four-valued (Belnap) logic, negation as inconsistency, negation as refutation, etc. All of those complicate the resolution process, but they do solve the problem. Strong negation seems interesting because it is local and not fragile like some of these methods. I particularly liked the combination of a four-valued logic (because the bilattice structure is nice) and strong negation, and started to look at implementing it.

What interested me is that the logic values are seen as success values, so that a functor can be 'true' or 'false' without failing. In which case, why make them special 'values' in the language at all? Surely they can be treated as normal atoms. Taking list membership as an example, we might want to define:

member(Key, cons(def(Key, _), _), true).
member(Key, nil, false).
member(Key, cons(_, Tail), X) :- member(Key, Tail, X).

We can even define 'not' as a predicate (fail if arg is true):

not(false).

not_member(Key, List) :- member(Key, List, X), not(X).

or as a binary operator:

not(true, false).
not(false, true).

not_member(Key, List, X) :- member(Key, List, Y), not(Y, X).
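
Assuming clauses are tried in order (and noting that the recursive member clause can yield further answers on backtracking), the first answers to a few queries would look something like:

:- member(a, cons(def(a, 1), nil), X).
X = true

:- not_member(b, cons(def(a, 1), nil), X).
X = true

:- not_member(a, cons(def(a, 1), nil), X).
X = false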

All of this works without any of the logical unsoundness of negation as failure. There is obviously a parallel with dependent type systems (Curry-Howard), which I don't think have negation.

I have put the source code for my simple logic language "Clors" on GitHub: https://github.com/keean/Clors. It is still experimental, and is suffering from many changes and false starts, so it needs a little tidying up (I have just removed meta-interpreters because they turn out to be unsound), and needs updating to use my parser combinator library.

So the question is, do logic languages need builtin negation?

Datalog

Datalog makes an excellent "minimal and sound dialect of Prolog to serve as a starting point for further investigation". Start there. And study many Datalog variants to see what has already been investigated.

There are many variants of Datalog that support different forms of negation - e.g. answer set programming, temporal logic, or even the dead end that is default logic.

Datalog not quite right.

Datalog is too restrictive in one way: clauses are not allowed to contain 'structs' (compound terms), which are necessary for lists and for manipulating other data structures. It is too permissive in another way, allowing stratified negation, which complicates resolution. I also question whether manipulating backtracking in any way is necessary, hence the title of the discussion. So Datalog misses the point.

If we say a Prolog program returns SAT or UNSAT, where variables can take true or false values (or other atoms), we have something much more like answer-set-programming or a SAT solver.

I would suggest I am looking at this as satisfiability modulo set-theory.

You can represent pointers

You can represent pointers abstractly with ids (e.g. using Skolem functions) and represent arrays by using explicit indexes.

Trees: child(x,y)
Arrays: elem(x,i,y)

We've been working on a CLP language derived from datalog and have managed to bootstrap a simple IDE and compiler using only flat values. Maybe under the hood the runtime will eventually use pointers and arrays to store those kinds of relations, but that can be an operational annotation and doesn't have to creep into the denotational semantics.

Without data structures/nesting we do need aggregation, sorting/counting and negation to implement interesting programs though. Aggregation and negation have an interesting interaction at the moment - we can't aggregate over empty sets, so we have to use negation to detect the absence of results, e.g.:

numChildren(x, 0) :-
node(x)
-child(x, y)

numChildren(x, n) :-
child(x, y)
where
groupBy(x)
n = count(y)

If there were no results for a given x, the latter rule would not output a count at all. In SQL, for example, this would be solved with a left join or a nested query, but I'm reluctant to introduce nesting unless we have to.

Interesting

I don't quite understand how this works. Have a look at the example.cl file in the GitHub archive for Clors. Note there are no builtin functions, so you can see the complete definitions of everything. I assume groupBy would be a built-in function?

I guess I need to be more careful about negation because I am allowing nested structures, but it is clearly possible to be logically sound without negation, or by using one of the multi-valued logic approaches. Are you using Datalog-like stratified negation as failure?

Are you using Datalog like

Are you using Datalog-like stratified negation as failure?

Yes.

I assume groupBy would be a built-in function?

Yes. The first part of the query behaves like normal datalog and generates a set of tuples. Then there is an optional aggregation step which provides groupBy, sortBy and fold primitives. This is incrementally maintained using a tree of tuples with fold results stored in the nodes. By keeping entirely flat data we avoid the difficult problem of incrementally maintaining nested results. It's still an open question whether this style of programming will actually work, but so far the results are promising.

Negation as failure is not a

Negation as failure is not a problem with negation, it's a problem with the open world assumption that Prolog makes. Get rid of that and everything becomes easy. The semantics of predicates becomes easy: a predicate foo(A,B,C) is simply a function from A,B,C to bool. not foo(a,b,c) is simply the boolean negation.

About your question whether logic languages need built-in negation, the answer is that it does give extra expressiveness. In Prolog you only have boolean and (written as a comma) and boolean or (written as multiple rules for the same predicate). You can't express not with and and or. So you need at least something that can express boolean not, if you want to have that in your language.

Another example of a construct that would allow you to express negation is reflect foo(A,B,C) in X which puts the atom `true` in X if foo(A,B,C) is logically true, and puts `false` in X if foo(A,B,C) is logically false. Essentially that allows you to convert an ordinary simple_member(Key,List) predicate into your member(Key,List,X) predicate. Note that you can do the same with negation:

member(Key,List,true) :- simple_member(Key,List).
member(Key,List,false) :- not simple_member(Key,List).

So reflect is equally expressive as negation.
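Spelled out with that hypothetical reflect construct, the conversion in the other direction would be something like:

member(Key, List, X) :- reflect simple_member(Key, List) in X.

not_member(Key, List) :- reflect simple_member(Key, List) in false.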

Of course you could do the transformation that you propose, where you add an extra truth value argument to each predicate, and that would give you the same expressiveness. The situation is roughly the same as with continuations. You can express anything you can express with built in continuations by converting your program to continuation passing style, but we still say that built in continuations give you extra expressiveness. In the same way, that `reflect` construct would give you extra expressiveness, so that you don't need to transform your whole program.

False is not failure

I don't see this is addressing the same problem. What would your system return for:

:- not(not(X = 0)), not(not(X = 1)).

Let me propose that a Prolog program does not return true or false, but in fact "sat" or "unsat". Variables can be assigned the values true or false (or other atoms) as part of a satisfied solution.

That would return false (or

That would return false (or it would fail, depending on how you call it).

unclear semantics

Fail would trigger backtracking, so if not(true) backtracks then the X=0 binding is undone, allowing the query as a whole to succeed. Can we pause to make sure we agree on this? The difference between false and fail is vital for sound semantics.

You're thinking about this

You're thinking about this on an operational level. What I'm talking about is a different false than the atom `false`. What I mean is the boolean false that sits in the logical semantics of the program, not a value inside the program itself.

Backtracking

So what you are proposing sounds like the three valued Kleene logic versions of Prolog, with true, false and bottom (which backtracks).

No. You just have true and

No. You just have true and false. What I'm saying is really not that complicated. Semantically, each predicate is a function from its domain to bool. Prolog's comma becomes boolean `and`, and multiple clauses becomes boolean `or`. Negation becomes boolean `not`. That's all.

what happens to failure?

So what happens to failure and backtracking? I did mention my interest in this relates to automated theorem proving. Does anything undo variable unification? Without backtracking it won't be doing much of a proof search.

Another point: if we have any return values, why restrict them to just true and false? We could have neither (unknown) or both (inconsistent). And why only return booleans, and not other types like numbers or even strings?

Search

As I understand it, backtracking is implicit to search with a limited number of processors, but mostly orthogonal to negation or failure. We can have weighted searches, for example, where we backtrack when a search path seems heuristically poor (without waiting to observe failure). We can also, in SMT style, extract useful information from failed searches and hence not fully backtrack on them.

When you earlier said: "I also question whether manipulating backtracking in any way is necessary, hence the title of the discussion", I was quite baffled. I suppose Prolog seems to conflate the two concepts due to its highly operational search. But I don't really care about how Prolog does things (I'm not fond of Prolog's misfeature set).

Manipulating Backtracking

Various Prolog builtins manipulate backtracking: "cut", "fail" and "not" all do, and all are unsound.

Indeed, that is my contention: Prolog does conflate the two. I am less certain about three- and four-valued logics, which can fix the problem with negation, but are they necessary when they can be coded using ordinary atoms? Should a sound Prolog really be returning "sat" or "unsat" rather than yes/no or true/false, and make the difference between the search and the rules being proved clear?

Nothing happens to

Nothing happens to backtracking, but backtracking is an implementation strategy and we should keep it as far away from the semantics as we can. First we should define a sensible semantics, and *then* look at how that semantics can be implemented.

Having other return values can certainly make sense. If you use real numbers between 0 and 1 instead of booleans, then you have probabilistic programming. If you use complex numbers you have quantum programming.

Partial Agreement

So we agree about keeping away from the implementation strategy, and not having unsound negation, cut, var, fail etc.

There is already a method for returning values from functors, and that's by unification (output variables). Why complicate the semantics, implementation and syntax by adding a second method? Is there anything it actually adds to the expressive power, or is it just a personal preference for a particular kind of syntax?

Huh?! We are talking

Huh?! We are talking completely past each other... Let's try a reboot.

What I'm saying is this:

Without an open world assumption, negation is not problematic. You can model , as logical and, multiple clauses as logical or, and negation as logical not. The open world assumption makes negation problematic because other than some statement being true or false, it can now also be not yet defined. Negation as failure says that not yet defined is false, so the negation of not yet defined is true. That's what makes negation problematic. So if you get rid of that, then semantically it works nicely.

So my answer to your question:

So the question is, do logic languages need builtin negation?

is twofold:

  1. Adding negation adds expressiveness in the sense of Felleisen (negation is not macro expressible on top of Prolog without negation).
  2. Adding negation is not problematic when you get rid of the open world assumption.

Disagree.

1) I can easily express negation with no builtin definition, as I demonstrated in the examples above. I can prove A is a member of the List, or is not a member of the list. You will have to explain how these are not valid definitions of negation.

2) The closed world assumption is semantically wrong. Just because you have failed to prove something is true does not mean you have proved it is false. As we want to prove things about real theories in the Herbrand universe, the CWA is simply wrong. You will end up declaring a valid refutation of something, when you only lack the information to prove it correct.

1) Yes, but this requires a

1) Yes, but this requires a whole program transformation.

2) Closed world assumption isn't "wrong", it's just a different semantics. If you want to model lack of knowledge in a closed world language, you need to model it explicitly. I'd say putting the possibility of lack of knowledge everywhere is akin to allowing null pointers everywhere. In most cases, closed world is perfectly fine. For example for member(X,List), it's perfectly fine to say that X is a member of List if and only if X is the first element of List, or it is a member of the tail. The closed world assumption just changes the meaning of definitions from if to if and only if.
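The if-and-only-if reading is essentially the Clark completion of the definition; as a sketch, for member over cons/nil lists:

# the clauses as written
member(X, cons(X, _)).
member(X, cons(_, T)) :- member(X, T).

# read under the closed world assumption as an iff (the Clark completion):
# member(X, L)  <->  ( exists T:    L = cons(X, T) )
#                 or ( exists Y, T: L = cons(Y, T) and member(X, T) )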

Confused

1) What is wrong with a whole program transformation? It's a new language, there are no programs yet :-)

2) I am confused about this. If you have not specified enough axioms to disprove something, you don't want it to be disproved merely as a result of a failure to prove it. Surely I cannot decide to just ignore this problem. In other words, you can make a CWA, but you will get wrong results when applied to real theories. Let's look at some examples from TPTP: http://www.tptp.org - can we prove (or disprove) these kinds of things if we make a CWA? Is there a machine-implementable translation that will provide all the missing information if something is unprovable? What if we have something unknown like the Riemann hypothesis?

I don't say that there is

  1. I don't say that there is anything wrong with it. I just say that it adds expressiveness in the sense of Felleisen, which says that a language construct adds expressiveness to a language if it can't be macro expressed on the original language.
  2. If you get wrong results then you wrote your program wrong. Of course you'll get wrong results if you write programs in a language with CWA but you're pretending that the language is OWA. If you write programs in C pretending that it's Java you also get wrong results.

Understanding?

1. Surely it can be macro expressed. Even if it is a whole program transformation I can write a macro to perform that transformation in a sufficiently expressive macro language? I guess I'll have to read the paper...

2. I am not sure it's safe; how will you know if you have all the axioms? It seems I want to reject the CWA on the grounds that it is too easy to make mistakes.

I think for me the most promising alternatives are:

- Two-valued logic (proved, not-proved) with no negation.

- Three- or four-valued logic: not-proved, proved-true, proved-false, optionally proved-inconsistent (both); with strong negation.

I guess so :) Can you give

  1. I guess so :)
  2. Can you give an example of such a mistake? Seems to me that open world is almost never what you want in practice. For example if you have definitions such as `member` and `factorial` the definitions are complete, not open.

Examples

An example would be precisely omitting one of the definitions of member. With the CWA, it will happily return 'no', i.e. it has proved the list does not contain the queried member. With the OWA (and Kleene three-valued logic) you would get 'unknown', as it cannot prove that the list either contains or does not contain the value.

In my case, with two-valued logic and no negation, you would return a Boolean indicating the state of membership as an output variable, so a failure (caused by not finding a matching axiom in the database) would be 'unknown', as opposed to a proof with the output variable indicating true or false in the case where all the rules for membership were there.

I see. Well if you don't

I see. Well if you don't like that there are other ways to fix the problem. Getting rid of the open world assumption doesn't mean that you have to make a closed world assumption. You can also simply disallow all programs that would give different results in the two models, and require the programmer to explicitly state when things are false. This means that you can still work with ordinary booleans (true & false, no "unknown"), and negation is sound. For example, member could become:

not member(X,[]).
member(X,X:YS).
member(X,Y:YS) :- member(X,YS).

or with a syntax that makes the semantics "predicates are functions from their domain to bool" a bit more explicit:

member(X,[]) = false
member(X,X:YS) = true
member(X,Y:YS) = member(X,YS)

If you had omitted one of the rules, then it would fail with a "pattern match not exhaustive"-error (either at run time or at compile time).

Strong Negation

As I understand it this is strong-negation with a three valued logic, which is one of the options I am seriously considering. The first syntax seems fine, I would probably use "~" or "¬" for not.

It complicates the resolution process, and presumably I need to search for both 'member' and 'not member', and a four valued logic may make sense in case they both succeed. Is the complexity added to the resolution process justified by the increase in expressive power?

Edit: the suggestion above, without backtracking on unknown, would not actually perform a proof search. We don't need a complete set of axioms to prove (or disprove) something, so I think it would be less useful. It would be fine if you just wanted a committed-choice language, but that's not what I am after.

Ah! I think I see what you

Ah! I think I see what you mean now by four valued logic. I was confused because I thought you wanted the boolean connectives like and, or, and not, to work on four valued logic. But what you mean is that for a predicate like member(X,L), it could be that member(X,L) = true leads to a valid solution, or not, and it could be that member(X,L) = false leads to a valid solution, or not. So in total that gives 4 possibilities, and you need to search for member(X,L) = false and member(X,L) = true separately. That is indeed correct, but I think that's unavoidable if you have negation. Take the following program:

foo(...) :- member(X,L) , ...
foo(...) :- not member(X,L), ...

Here you are forced to search for solutions for both member(X,L) and not member(X,L), or disallow such programs completely. Is that what you meant?

You could also go all the way to functional logic programming, then booleans wouldn't even need to be built-in, and you'd have an even simpler semantics too.

Four valued logic and lambdas

Yes, although you might express the logical behaviour with a four-valued logic. So member(X,L) might return proved-true or unproven, and not_member(X,L) might return proved-false or unproven. Any search for member(X,L) should search for both member (proving true) and not_member (proving false). So there are four possible results: unproved, proved-false, proved-true, and proved-both. I would assume you backtrack on unproved, and you might terminate on proved-both. The logical connectives would reflect this behaviour, which would match Belnap logic and form a bilattice. So 'and' and 'or' would have sixteen possibilities depending on the result of each side. If we stick to Horn clauses we only need 'and', and backtracking behaviour matches the requirements of the ands and ors.
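Purely as a sketch of those sixteen cases, writing the four result values as ordinary atoms (the atom names are illustrative), the Belnap 'and' is the meet of the truth ordering:

# four-valued conjunction as a plain ternary relation; note the one
# perhaps surprising case: unproved 'and' proved_both gives proved_false
and(proved_true,  X,            X).
and(X,            proved_true,  X).
and(proved_false, _,            proved_false).
and(_,            proved_false, proved_false).
and(unproved,     unproved,     unproved).
and(proved_both,  proved_both,  proved_both).
and(unproved,     proved_both,  proved_false).
and(proved_both,  unproved,     proved_false).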

As for functions, at the moment I would rather keep unification decidable, although I know lambda-prolog uses pattern unification which is a decidable subset of second order unification, but I am not sure a second method of defining behaviour is needed over Horn clauses. Horn clause programs seem pretty small and elegant, I am not sure what adding lambdas would add to the language? The type checker I have as the example looks pretty neat to me.

Whether you search for

Whether you search for proven true or proven false depends on the context. In most cases you'd only have to search for one. For example if you have in the top level program:

..., member(3,L), ...

Then you'd only need to search for a solution L such that member(3,L)=true. If you had:

..., not member(3,L), ...

Then you'd only have to search for a solution L such that member(3,L)=false.

Functional logic programming does not need any form of unification at all, and definitely doesn't need higher order unification. Instead everything is accomplished by ambiguous choice.

You need to search for both.

You need to search for both as even with the query "member(3, L)", you need to see if you can prove membership or non-membership, unknown or inconsistent. The assumption that all propositions are either true or false is not true :-) Apparently this makes me a constructivist. For example, the proposition that the distribution of primes follows the Riemann zeta function: is that true or false? There are an infinite number of potential propositions, and only a finite number have been proved true or proved false, therefore we must consider the majority of unproved propositions in a system where the user (programmer) is making them.

As for functional logic programming, presumably you have to track the ambiguous choice and explore the 'choice' space with backtracking? I am not sure why you would want to get rid of an elegant method like unification, and add lambdas? How would FLP help with my goal of automated theorem proving? In what way is it easier to express theorems in FLP, and view the generated proof?

You need to search for both

You need to search for both as even with the query "member(3, L)", you need to see if you can prove membership or non-membership, unknown or inconsistent.

If you have the constraint member(3,L)=true, you only need to search for an L such that member(3,L)=true. You do not need to search for an L such that member(3,L)=false.

As for functional logic programming, presumably you have to track the ambiguous choice and explore the 'choice' space with backtracking?

Yes, exactly.

I am not sure why you would want to get rid of an elegant method like unification, and add lambdas?

I don't think unification as a language primitive is elegant at all. It doesn't work well with abstractions, since unification works on the structure of values and not on the meaning that that structure represents. Examples:

  • Functions: unification of functions doesn't work well.
  • Abstract data types, e.g. a Set ADT. Two sets represented as a red-black tree may be equal (i.e. have the same elements) but they may not unify, since the internal structure may be different.
  • ASTs: two ASTs should be considered equivalent up to variable naming, but unification doesn't handle that.

Lambdas on the other hand are an excellent language construct. Here's why I think functional logic programming is more elegant than Prolog-style logic programming. You get rid of (1) logic variables (2) unification (3) built-in logical conjunction/disjunction, and optionally negation. Instead you have ONE simple construct, ambiguous choice, on top of which you can express all of that as a library. Furthermore functional logic programming can be much more efficient, since guesses are done lazily.

How would FLP help with my goal of automated theorem proving? In what way is it easier to express theorems in FLP, and view the generated proof?

To do automated theorem proving with FLP you'd define the representation of theorems and the representation of proofs, then you'd write a function check(theorem,proof) that checks whether the proof is a valid proof of the theorem. Then you'd search for a proof such that check(theorem,proof)=true.
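In the relational style used elsewhere in this thread, the same idea is a checking predicate that can be run backwards to search for proofs. A minimal sketch for a purely implicational logic (all constructor names here are my own):

# proves(Proof, Hyps, Prop): Proof proves Prop from the hypothesis list Hyps
proves(assume, cons(A, _), A).
proves(weaken(P), cons(_, Hyps), A) :- proves(P, Hyps, A).
proves(lam(P), Hyps, arrow(A, B)) :- proves(P, cons(A, Hyps), B).
proves(app(P, Q), Hyps, B) :- proves(P, Hyps, arrow(A, B)), proves(Q, Hyps, A).

# searching for P in proves(P, nil, arrow(a, a)) finds P = lam(assume)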

sounds complicated.

Okay, I see what you mean about only needing to search one, that makes sense.

I don't see your point about the functional approach. Pattern matching in functional languages is implemented using unification. So I'm not sure how you get rid of unification?

Type checking is based on unification, what would the type checking algorithm in "example.cl" in the Clors GitHub look like in FLP?

As for abstractions, it works just fine. I can define "member" to work on lists, sets, or any other container. I can easily extend member for new data types. Matching on data type constructors works just like in functional languages. Code that operates on the containers does not need to know anything about the internal structure and just uses the defined methods, which would be part of the container API, so there are no problems with ADTs as far as I can see.

Your description of handling proofs sounds a lot like implementing a logic language inside the functional language.

Shorter Question.

I am particularly interested in proofs, type-inference and type checking. Particularly as proof assistants use dependent types, evaluated by type checking. Logic languages do proof search and again use unification.

In both these systems that look at proofs, unification is a central and common mechanism. Isn't this because it enables a clear and concise representation of terms?

Does FLP offer me anything for a language whose main application is proofs and type-checking?

You can still express

You can still express unification. What you do is write an equality testing procedure `equal a b`, then you constrain `equal a b = true`. It's like the difference between only having a built-in structural equality, and having a user defined equality per type. Especially for proofs/type inference/type checking that may be exactly what you want, since you want alpha equivalence not structural equality.

See also this page: http://okmij.org/ftp/kakuritu/logic-programming.html

Too much like programming

I kind of like Hansei, but it looks too much like programming. I am looking for something with more declarative statements of axioms, with maybe some strategy hints if a different search is necessary. If I wanted something typed, I think a dependently typed language like Twelf would be more what I am looking for. Still, I don't want to rule anything out at this stage; I may change my mind after further exploration.

It seems removing unification complicates the declarative expression of algorithms in the language, for example looking at the definitions of things like split and member.

Programming with logic is

Programming with logic is programming; the best you can hope for is that the complexity of your code scales with your problem.

Type Systems

See my comments below, but in brief, why do type systems use unification? Will you replace unification in the type system with functions?

Answer-set-programming (which is a form of logic programming) allows any NP problem to be elegantly expressed declaratively. Co-inductive logic programming (which is goal-oriented answer-set-programming based on Prolog-like technology) is the direction I want to go with this: http://www.sics.se/~matsc/ICLP2012/CoLP-proceedings.pdf

An embedded DSL is always

An embedded DSL is always going to look worse than a standalone DSL. Take a look at Curry for a standalone functional logic language.

Curry a failure?

I am not sure Curry is really a success, considering how long it has been around. People seem more interested in logic languages, and more recently in dependently typed languages like Twelf which use unification, or functional languages like Haskell (which use unification in their type systems and pattern matching syntax). In any case I suspect Curry uses unification in its type system.

All logic languages are

All logic languages are arguably a failure, so if success is the criterion for whether you want to look at languages or not, you should probably abandon logic languages altogether ;-) Unification in the implementation of type systems is something completely different than unification as a language primitive.

Dependent Types

Prolog seems the most popular (and hence least failed) logic language... so it must do something better than the others (I don't find this argument very convincing, and it is perhaps not that useful).

What about dependent type languages then? If we accept that unification in type systems is useful, and that dependent types allow us to express complex proofs, then by Curry-Howard we can write that same thing as a logic statement, which naturally uses unification as the type system uses unification.

The type system is a second language for expressing constraints on the primary language. Can you explain why we want unification in the type system and not in the primary language? What does that lead you to conclude about the application domain of unification-based languages (there appears to be at least one application domain where they are preferable)? How does that application domain relate to my stated goals?

An interesting point is that dependently typed languages present a hierarchy of type levels (values, types, kinds...), whereas in logic they are all collapsed into one, values. I am not sure which approach I prefer, but the logic approach seems simpler, so it seems I ought to see if it is suitable for what I want, before moving to more complex options.

Can you explain why we want

Can you explain why we want unification in the type system and not in the primary language?

Unification is used in the implementation of some type systems. That doesn't mean that we want it as a language primitive in a programming language. I already gave reasons why unification shouldn't be a language primitive here. Additionally, there isn't one thing that people call unification. There are many different types of unification. With/without the occurs check, with/without support for unifying lambdas, nominal logic unification, unification with additional rewrite rules, so that e.g. (a+b)+c can unify with e+(f+g), etcetera. So you may need a different form of unification for different type systems. Which one are you going to build in to your language?

One thing that confuses me a lot is that you use 'type system' and 'logic programming language' more or less interchangeably. To me that seems like a category error. You can *implement* a type system in a logic programming language, but they aren't the same thing, and you don't necessarily want features from one in the other.

Curry Howard

See computational trinitarianism (or Curry-Howard). A type system is a programming language, although without dependent types quite a restricted one. Haskell's type system with type classes is equivalent to Horn clauses with no overlapping instances. By type system I mean a proper type system like System F (a formal type system). Dependent type systems are logic languages (for example Twelf).
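As a small illustration of that correspondence, an instance declaration reads directly as a Horn clause (the spelling below is a sketch):

# the Haskell declarations
#   instance Eq Int
#   instance Eq a => Eq [a]
# correspond to the Horn clauses:
eq(int).
eq(list(A)) :- eq(A).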

Unification without an occurs check is unsound - I clearly stated sound unification in a Herbrand universe, which you need to make sound deductions. The other forms of unification involve changes to the type system, like adding lambdas or nominal logic, which are more like optional extensions than different systems.

The point I am making is that the type system is a specialised language for proving things about programs. You wouldn't want to start writing complex functions in type signatures, as they would no longer serve their purpose. The same thing applies to logic languages, you want to be able to state the rules (axioms) and not have to get into implementation details and have the language prove them.

I did not mean that we did want unification in the primary language (although it often occurs in pattern matching in functional languages). The point was more to think about how the use-case for the primary language differs from that of the type-system, and consider what about the type-system makes unification suitable. If the type system was the same as the primary language there would be no benefit to stating things twice. Isn't the point of the type system that it provides a clear and readable specification of what the function is doing, without getting into the detail of how it is done? We can look at the following:

f :: a -> b -> Vector a -> Vector b

f(arrow(A, arrow(B, arrow(vector(A), vector(B))))).

note how the types naturally map into the logic language. Now if we wanted to express a constraint:

f :: IsSubType a b => a -> b -> Vector a -> Vector b

f(arrow(A, arrow(B, arrow(vector(A), vector(B))))) :- is_sub_type(A, B).

See computational

See computational trinitarianism (or Curry-Howard). A type system is a programming language, although without dependent types quite a restricted one. Haskell's type system with type classes is equivalent to Horn clauses with no overlapping instances. By type system I mean a proper type system like System F (a formal type system). Dependent type systems are logic languages (for example Twelf).

Either I don't understand what you mean, or this is superficial reasoning. Sure there are various connections between mathematical logic, prolog style logic programming, and type systems.

which are more like optional extensions than different systems

Whether you call them different forms of unification, or optional extensions of unification the point still stands that you may build one form of unification in as a language primitive, and then you may need another form of unification.

The point I am making is that the type system is a specialised language for proving things about programs. You wouldn't want to start writing complex functions in type signatures, as they would no longer serve their purpose. The same thing applies to logic languages, you want to be able to state the rules (axioms) and not have to get into implementation details and have the language prove them.

Can you explain what you mean by this in the context of the discussion wrt unification and functional logic programming?

although it often occurs in pattern matching in functional languages

You can view it as a very restricted form of unification, but it's something different. Incidentally, pattern matching suffers from some of the same problems as unification, for example that it doesn't work well with abstraction. Views/extractors have been developed to deal with that. I'm not aware of any similar work for unification.

Different Discussion

I don't know, it seems we are having different discussions. Referring back to my original post, I am investigating a language for theorem proving, and type-system implementation (and extension). My point is that propositions, conjunction, disjunction, and entailment are the natural way to express axiomatic systems, and therefore having the language stick close to the natural way to express these things makes sense. In a general purpose language functions are great.

Let's take sort as an example. Axiomatically, we might encode a sorted list like this:

ordered_list(nil).
ordered_list(cons(X, nil)).
ordered_list(cons(X, cons(Y, Tail))) :- X < Y, ordered_list(cons(Y, Tail)).

It is simple enough from reading these axioms to see that it proves the order of the list. The proof works like mathematical induction. ordered_list might be used as a predicate to prove that a sort function always returns a correctly ordered result by partial evaluation and abstract interpretation. I am not even sure these techniques will work in an FLP context, and it would certainly make my implementation many times more complicated, and buggy...

Your description of handling

Your description of handling proofs sounds a lot like implementing a logic language inside the functional language.

It's actually extending a logic with functions. See Multi-paradigm Declarative Languages.

Unification of functions

Why would you want to extend a logic language with functions, when unification of functions does not work well? I looked at the paper, which seemed to give a lot of detail about how to combine the two (functional and logic), but it seemed very short on motivation, apart from saying it was "natural" to want to combine the two. I don't see any motivation for wanting to combine them.

The syntax of axioms using unification seems much more natural and readable. Surely this is why type systems work using unification. I see no suggestion that the type system should be replaced with functions, and no demand from proof assistants like Coq and Twelf to get rid of unification.

For example:

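# member(Key, Env, Result): look up Key in a list of def(Key, Val) bindings; Result is just(Val) if found, nothing otherwise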
member(Key, cons(def(Key, Val), Tail), just(Val)).
member(Key, nil, nothing).
member(Key, cons(Head, Tail), Val) :-
    member(Key, Tail, Val).

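# append(A, B, C): C is the concatenation of lists A and B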
append(nil, L, L).
append(cons(H, T1), L, cons(H, T2)) :-
    append(T1, L, T2).

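# unifyall(List, T): unify every element of List with the single type T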
unifyall(nil, T).
unifyall(cons(T, Tail), T) :-
    unifyall(Tail, T).

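# split(X, Cxt, Ts, Rest): partition context Cxt into the types Ts assigned to X and the remaining bindings Rest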
split(X, nil, nil, nil).
split(X, cons(def(X, T), Tail1), cons(T, Tail2), Tail3 ) :-
    split(X, Tail1, Tail2, Tail3).
split(X, cons(def(Y, T), Tail1), Tail2, cons(def(Y, T), Tail3)) :-
    split(X, Tail1, Tail2, Tail3).

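# typing(Env, Term, Cxt, Type): infer Type for Term, collecting in Cxt the assumed types of free variables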
typing(Env, lam(Var, Body), Cxt, arrow(A, B)) :-
    typing(Env, Body, C1, B),
    split(Var, C1, Def, Cxt),
    unifyall(Def, A).
typing(Env, app(Fun, Arg), Cxt, B) :-
    typing(Env, Fun, C1, arrow(A, B)),
    typing(Env, Arg, C2, A),
    append(C1, C2, Cxt).
typing(Env, var(X), Cxt, Type) :-
    member(X, Env, just(ty2(Cxt, Type))).
typing(Env, var(X), cons(def(X, Type), nil), Type) :-
    member(X, Env, nothing).

This seems a concise and readable description of my type inference algorithm. Note there are no built-ins; this is a complete implementation from first principles, and it handles names correctly without higher-order logic or nominal logic. How could this be improved by FLP (in terms of readability and understandability; performance and efficiency are not a concern)?

Do you plan on removing unification from your type systems, and forcing the programmer to write type annotations using functions? What then is the difference between the primary language and the type system?

I find this very hard to

I find this very hard to read, mainly due to lack of types.

Lack of types

Functional languages like ML and Haskell make a big point of using type inference to keep types out of code. Do you have difficulty reading those? I know some people spend a lot of time trying to get things to work without needing type annotations, and consider having to add a type annotation in languages like Haskell a bad thing.

Could familiarity also be playing a part in your difficulty? I found the transition to logic language style (coming from a C++/Haskell background) hard, but perhaps some exposure to dynamic languages like JavaScript and Python made this easier.

Also note that everything there is a type, or more precisely a type class. For example in Haskell (bearing in mind that Haskell's instance search does not backtrack):

class Member x l r

instance Member x Nil Nothing
instance Member x (Cons (Def x y) l) (Just y)
instance Member x l r => Member x (Cons z l) r

I can read logic programs

I can read logic programs just fine. In Haskell and ML you can choose not to annotate your functions with their types, though many people choose to do that anyway for readability. But the more important part is the data/type declarations that specify the form of the data. Of course since you're the one who wrote the program, you already know what the shape of the data is, so it's easy to read for you. With enough effort a reader can puzzle it out in his mind by doing mental type inference, but having to puzzle things out in your mind is the opposite of readability.

Type classes are like relations just like the predicates in logic programs are like relations, but that's not what I'm talking about. What I'm talking about is this: for each variable in that program, what is the shape of the data that it holds?

I don't disagree

I don't disagree with this, but if you think that the atoms are types (looking at the Haskell type class example), do you really want types for your types (kinds, and type families have made it into Haskell now)... maybe I do. Dependent types of course have types that depend on terms and terms that depend on types.

What if we matched the functor on name and type of each argument? Of course a type in a flat logic language is just a predicate so you could have something like:

def_list(nil).
def_list(cons(def(K, V), Y)) :- def_list(Y).

maybe(nothing).
maybe(just(X)).

member(Key, List, Maybe) :- def_list(List), maybe(Maybe),
    member_(Key, List, Maybe).

member_(Key, cons(def(Key, Val), Tail), just(Val)).
member_(Key, nil, nothing).
member_(Key, cons(Head, Tail), Val) :-
    member_(Key, Tail, Val).
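
For example, a query whose list argument is not a well-formed def-list now simply fails the def_list guard; something like:

:- member(a, cons(foo, nil), X).
no.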

I did think about adding special syntax like "member :: (name, list, maybe)", but it seems hardly worth it when all it would do is to prepend a goal to the RHS. Also you can't really check the arguments to 'maybe' or 'list' in any meaningful way (their kind is '*'). You may not want to check the type of 'Key' either as you may mean this to be a generic 'map' from type Key to type Value... so I am not sure what sort of checking would make sense.

So my first instinct was to want to add types (being familiar with other typed languages), but I can't help feel that attempts to do so are unsatisfactory for one reason or another.

Why would you want to extend

Why would you want to extend a logic language with functions, when unification of functions does not work well?

The paper I linked is a survey of the progress on functional logic languages as of 2007. Each individual section lists the advantages of that approach, so there's no single section covering what you're looking for. I'm by no means an expert, but a primary motivation seems to be asymptotic performance. Functional constructs direct search in a natural way that significantly reduces big-O due to direct equational reasoning (page 5):

Based on this specification, one can define a function and verify that this definition satisfies the given specification (e.g., by inductive proofs as shown in [20]). This is one of the situations where functional logic languages become handy. Similarly to logic languages, functional logic languages provide search for solutions for existentially quantified variables. In contrast to pure logic languages, they support equation solving over nested functional expressions so that an equation like xs ++[e] = [1,2,3] is solved by instantiating xs to the list [1,2] and e to the value 3.

Perhaps Curry's site will have more motivation for you:

Functional logic programming aims to amalgamate the most important declarative programming paradigms, namely functional programming and logic programming. In comparison with pure functional languages, functional logic languages have more expressive power due to the availability of features like function inversion, partial data structures, existential variables, and non-deterministic search. In comparison with pure logic languages, functional logic languages have a more efficient operational behavior since functions provide for more efficient evaluation strategies (lazy evaluation, deterministic reductions) than predicates.

value notation over performance

So in the case where you are not concerned with performance, and prefer the concise notation (as with types), there is no reason to go that way. Some of the other points seem debatable. For example you can have existentially quantified variables: these are the ones that appear on the right of ':-' but not the left.

It is not my intention to dismiss FLP in general, but for my purposes it seems the wrong way to go. Some kind of type theory seems the most likely alternative, but due to Curry-Howard it could still be presented as logic.

Functional logic languages

Functional logic languages don't eliminate unification, so you can still just use that.

Confused

Now I am confused; Jules seemed to be suggesting that FLP did not use unification, but used non-deterministic choice instead. Is he wrong? If FLP is just adding lambdas (functions) to logic programming, then my answer is simple - I don't need functions yet, but if I did need them I would add them (reluctantly, as I value the simplicity of the language).

From the paper I linked to

From the paper I linked to above, narrowing amalgamates reduction, unification, non-deterministic search, so search on unbound logic variables is still present. However, logic variables appear to be formally equivalent to non-deterministic functions in FLP, so you only need one and the other is just sugar.

Logic Variables vs Non-Deterministic Functions

As soon as we introduce functions, I think it takes the focus away from axiomatic statements about what you want, and makes things more about (efficient) implementation. For me at the moment keeping the notation close to traditional logic, with conjunction, disjunction, entailment, seems a better choice.

I can see the point about data-structures not being obvious, which leads me towards some kind of type-theory as an alternative starting point.

unification of normal forms

Two sets represented as a red-black tree may be equal (i.e. have the same elements) but they may not unify, since the internal structure may be different.

Mathematica has an interesting behavior for matching that partly addresses this: Expressions are reduced to a "normal form" before matching. So for example, you can use a syntactic expression (head & args) to represent a list or a set, and get correct matching results because the set elements will be totally ordered prior to testing the pattern.


In[129]:= SetAttributes[unordered, {Orderless}]

In[130]:= MatchQ[List[1, 2], List[1, _]]
Out[130]= True

In[131]:= MatchQ[List[2, 1], List[1, _]]
Out[131]= False

In[132]:= MatchQ[unordered[1, 2], unordered[1, _]]
Out[132]= True

In[133]:= MatchQ[unordered[2, 1], unordered[1, _]]
Out[133]= True

Although Mathematica lacks proper abstract data type capabilities, you could probably achieve this idea by way of a `normalForm` call on each argument prior to unification. In the case of two distinct red-black trees, both would have a normal form of something flat like `set[elements...]`.

Negation missing in dependently-typed languages?

I don't think that's true. As a statement about types (dependent or otherwise), "not(A)" is normally translated as "A -> 0" (where "0" is the empty type, with no inhabitants).
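For example (using the same informal notation as the terms further down the thread; the name dni is just a label), double negation introduction needs nothing beyond implication and 0:

dni : A -> not(not(A))
dni a = \k. k a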

You might enjoy Bob Harper's recent HoTT lectures.

I agree with you that logic holds a central place in proof-assistants and type systems. But I don't think that you need to get rid of "not", it's quite useful (as the 0 type is quite useful).

Maybe we need to get rid of proof search?

Different kinds of negation.

For example you cannot write:

~a -> ~b

By curry-howard, this is the kind of negation corresponding to Prolog negation. What you suggest as negation seems analogous to:

arrow(X, false)

Which is not built-in negation, but negation defined as a functor on atoms, as in the examples I gave in the introduction. So my point is that it looks like there should not be any built-in negation, as to negate sat or unsat is meaningless. You can define a boolean logic with atoms and functors, and search to see if it is satisfiable. I will check out those lectures though.

As for proof search, it seems useful when you want to prove things in an automated way.

Could you explain how ~a ->

Could you explain how ~a -> ~b corresponds to Prolog negation? What do "a" and "b" signify here? Maybe you've got some context in Prolog-space that I'm not thinking of.

Of course, (b -> a) -> ~a -> ~b for all types "a" and "b" should be pretty uncontroversial right? We can easily construct a term with that type: \c.\ka.\b.ka (c b). I am not sure why this is significant though.

Is your arrow predicate at the level of "," (and)? If not, it's probably not the right kind of implication right?

Proof search would be useful if it was tractable, but isn't this why people work on proof assistants these days, rather than automated theorem provers?

Just an example.

On reflection I don't think it does correspond to Prolog negation, as Prolog negation involves failure (and backtracking). I meant 'a' and 'b' to be type variables, with ~ as unary negation. As I might have confused things, at this point let's take a step back.

At the risk of getting side tracked, there is nothing special about '->', it is simply an infix binary functor, so 'arrow(X, Y)' is the prefix form of 'X -> Y'.

As for proof search, SAT and SMT seem to be quite active fields of research, and that is exactly what they do (with DPLL). SAT/SMT is equivalent to answer set programming (also an active area). Goal-oriented answer set programming corresponds to Prolog + co-data (but we need to deal with soundness). A sound, goal-oriented logic language - think of it as satisfiability modulo set theory - is where I am going with this.

No Not in CoC

To clarify what I was trying to say, there is no 'not' in the theory itself. You can define negation to operate on the values in a type, say by having a Boolean type, and defining a function for negation using Church encodings. My limited understanding is that CoC is constructive and hence does not have negation in the calculus itself.

Negation and constructivity are compatible

You can construct a function from some type to the empty type. It's only double negation that causes a problem constructively (and even there, as far as I understand it's not a problem but a matter of interpretation -- not(not(A)) is not equivalent to A).

Barendregt's Lambda Cube

Just because you can define negation using CoC does not mean that CoC itself relies on negation. CoC is the opposite corner of Barendregt's lambda cube from the simply typed lambda calculus. There is no negation in simply typed lambda calculus. Negation can be defined within the calculus by using Church terms for Booleans and defining the appropriate negation function, but that is a different thing. In simply typed lambda calculus there is just abstraction and application (no negation). Traversing the lambda cube we add terms depending on types (polymorphism, as in System F), types depending on types, and types depending on terms. Nowhere in this is negation added to the axioms of the lambda calculus.

Indeed there is nothing wrong with negation, or double negation, defined using CoC or even Prolog - the example I gave of a logic table for "not" using true and false is completely fine. The problem with negation in the axioms of the calculus goes back to the Russell paradox, and my naive understanding is that this implies you cannot have both recursion and negation in the axioms of a sound system. Effectively, the problematic 'not' that is in Prolog but not in CoC is the negation of a proof, which enables you (when combined with recursion) to say things like: this proof is valid only if it is invalid. However, proving that and(not(A), not(B)) == not(or(A, B)) by searching for the proof is fine.
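To make that last point concrete, here is a sketch of that De Morgan check done entirely with atoms and ordinary clauses (no built-in negation), proving it case by case:

# truth tables as plain facts
not(true, false).
not(false, true).
and(true, true, true).
and(true, false, false).
and(false, true, false).
and(false, false, false).
or(true, true, true).
or(true, false, true).
or(false, true, true).
or(false, false, false).

# and(not(A), not(B)) agrees with not(or(A, B))
demorgan(A, B) :- not(A, NA), not(B, NB), and(NA, NB, R),
    or(A, B, O), not(O, R).

:- demorgan(true, true), demorgan(true, false),
   demorgan(false, true), demorgan(false, false).
yes.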

Are we just talking past each other?

You're the first one to introduce Church-encoded booleans here, not me. Can we put that to one side? I think that's a tangent. :)

You don't need to add negation as an axiom, as long as you've got the empty type and function types. This works in STLC, no need to bring in dependent types.

I don't think that Russell's Paradox has anything to do with negation (you can reproduce the argument at other types), but rather it's evidence that general recursion is wrong (you can derive that from the type of e.g. the Y-combinator). So I think that we can put that question to the side as well when talking about negation.

We are talking about types, not Booleans, right? The negation of a type is itself a type -- not a Boolean value (so let's put truth tables to the side as well).

At the level of types, negation is just a plain function type. To "prove it correct", you just construct a term with that type.

To use your example, we can show that "and(not(A),not(B))" and "not(or(A,B))" are equivalent with terms witnessing their interconvertibility:


cleft : (not(A) * not(B)) -> not(A + B)
cleft ka kb = \x.case x of inl:a=ka a; inr:b=kb b

And the other way:


cright : not(A + B) -> (not(A) * not(B))
cright kx = (\a.kx (inl a), \b.kx (inr b))

I don't think that you can search for these proofs in general, although I hope that you do report back on your progress if you go the route of proof search.

Are you interested in evaluating Prolog at the type level (like Haskell kind of does)?

Overloaded Use of Negation

I don't think Russell's paradox occurs without negation and recursion. The set R of all sets that are not members of themselves is a paradox (as R is in R if and only if R is not in R). The set R of all sets that are members of themselves is fine, as R is always in R by definition; it's just a plain non-paradoxical infinite set (although I think there are two stable models, as R not in R would be okay too).

I think you are talking about negating a type as in Wadler's dual calculus, such that (A -> B) is (~A \/ B)? The use of inl and inr look like Wadler's dual calculus, but they are not in CoC, nor in Martin-Löf's type theory, so I am not sure where they are coming from. I think I see where we are missing each other - the negation of types you are talking about (as in the dual calculus) is simply moving a functor from one side of the implication (":-" in Prolog) to the other. This is a different kind of negation, it is not logical negation. It is the difference between -A = "a hole that accepts type A" and ~A "a type that is any type apart from A" (or perhaps more clearly a value that is not in type A, or a type that is not in kind A, etc...). It is this second kind of negation I am referring to, and I am open to suggestions as to how I can make the distinction more clear.

So everything you say seems fine, but misses the point - probably my fault for not explaining what I mean clearly enough. So to restate more clearly: dependent type systems do not have axioms of logical negation (which would be the equivalent of Prolog's built-in negation), but this doesn't seem to matter as you can still reason about systems with logical negation using them - you can implement logical negation in Prolog without a primitive negation - hence my question being, is it needed? This discussion would seem to support there being no need for primitive negation so far.

Just clearing up a few things

I don't think Russell's paradox occurs without negation and recursion.

What I mean is that it's general recursion that makes the paradox happen, and it just happens that Russell used negation in the context of general recursion to show the paradox (but it's not a consequence of negation itself). As a similar example, you can define a variant of Russell's Paradox over Nat with "f n = f (n+1)". So all that I was trying to say was that the use of the Bool type and negation term in his paradox are just incidental.

I think you are talking about negating a type as in Wadler's dual calculus, such that (A -> B) is (~A \/ B)? The use of inl and inr look like Wadler's dual calculus, but they are not in CoC, nor in Martin-Löf's type theory, so I am not sure where they are coming from.

No I'm not familiar with Wadler's dual calculus; is there a constructive equivalence between (a->b) and (~a+b)? Maybe I should read about that -- it sounds interesting.

Although the equivalence I outlined above was intended for just STLC with an empty type, it does work in CoC and sum types _are_ in Martin-Löf's type theory (where "inl" is named "i" and "inr" is named "j").

So everything you say seems fine, but misses the point [...] dependent type systems do not have axioms of logical negation (which would be the equivalent of Prolog's built-in negation), but this doesn't seem to matter as you can still reason about systems with logical negation using them - you can implement logical negation in Prolog without a primitive negation - hence my question being, is it needed? This discussion would seem to support there being no need for primitive negation so far.

Yes, I think you're right. At least from a type theory point of view, I don't think an axiom for negation is necessary, since negation can be built out of implication and the empty type.
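
Spelled out as a small sketch in Lean (the name Not' is mine; Lean's own Not is defined in exactly this way):

-- negation from implication and the empty proposition
def Not' (A : Prop) : Prop := A -> False

-- a proof of A together with a proof of Not' A yields False
example (A : Prop) (a : A) (na : Not' A) : False := na a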

I get your point that we might be talking about two different kinds of "negation" -- one meaning that your Prolog search turned up nothing, the other being a constructive proof that a type is uninhabited. I'm curious to know whether these ideas of negation come together at some point, or whether the search failure should be called something else, as you say. I have a compiler for a Haskell-like language with a Prolog-ish system driving type class resolution. There I use "not" in instance constraints to control resolution, and also at the type level when generically deconstructing sum types at the base case (i.e. the tail type of a one-element sum is uninhabited).

This is a different kind of negation; it is not logical negation. It is the difference between -A, "a hole that accepts type A", and ~A, "a type that is any type apart from A" (or perhaps more clearly, a value that is not in type A, or a type that is not in kind A, etc...). It is this second kind of negation I am referring to, and I am open to suggestions as to how I can make the distinction clearer.

Interesting, yes this is an important distinction. The "not(A) = A -> 0" interpretation says that assuming A would lead to a contradiction. The sort of "set complement" idea that you describe sounds more like "U - A", where U is a universe type containing A and "-" is like set difference (so the type you're describing is everything in the universe except A).

I realize that this branch of the conversation might be fruitless for you, but I'm curious to know if this all comes back to plain vanilla type theory at some point.

Reading Material

Oleg (Kiselyov) recommended this thesis to me:

http://reports-archive.adm.cs.cmu.edu/anon/2000/CMU-CS-00-175.pdf

Perhaps we should both have a quick read and then see if we can continue the discussion, as I think a clear understanding of negation is vital.

On the Russell paradox, I don't see f n = f (n + 1) as being at all paradoxical. Represented using Peano numbers it would be: f n = f (succ n). Unifying n and "succ n" would result in the value infinity... but it's not a paradox, it's just a recursive type.
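
To make that concrete, here is a sketch in standard Prolog (unify_with_occurs_check/2 is the ISO built-in; exact behaviour without the check varies by system):

% with the occurs check, no finite term satisfies N = succ(N),
% so the unification simply fails
?- unify_with_occurs_check(N, succ(N)).

% without the occurs check, most Prologs instead build the infinite
% (rational) term succ(succ(succ(...))) - not a paradox, just a cyclic value
?- N = succ(N).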

Watched First Lecture

I watched the first lecture, and I will definitely be watching the rest. I like the lecturer, enjoyed his presentation, and discovered I am a constructivist. Nothing really new in lecture one, but some important clarifications about proofs and entailment.

Proving Heyting Algebra is Distributive

In lecture 2 of Bob Harper's HoTT lectures, he asks you to prove that Heyting algebra is distributive. So I thought I would try to automate this proof with Clors. Here is the code:

# conjunction as meet
1.  rule(meet(A, B), le(meet(A, B), A)).
2.  rule(le(meet(A, B), A), meet(A, B)).
3.  rule(meet(A, B), le(meet(A, B), B)).
4.  rule(le(meet(A, B), B), meet(A, B)).
5.  rule(le(C, meet(A, B)), pair(le(C, A), le(C, B))).
6.  rule(pair(le(C, A), le(C, B)), le(C, meet(A, B))).

# truth as greatest element
7.  rule(A, le(A, true)).
8.  rule(le(A, true), A).

# disjunction as join
9.  rule(join(A, B), le(A, join(A, B))).
10. rule(le(A, join(A, B)), join(A, B)).
11. rule(join(A, B), le(B, join(A, B))).
12. rule(le(B, join(A, B)), join(A, B)).
13. rule(le(join(A, B), C), pair(le(A, C), le(B, C))).
14. rule(pair(le(A, C), le(B, C)), le(join(A, B), C)).

# falsehood as least element
15. rule(A, le(false, A)).
16. rule(le(false, A), A).

# implication as exponential
17. rule(meet(A, impl(A, B)), le(meet(A, impl(A, B)), B)).
18. rule(le(meet(A, impl(A, B)), B), meet(A, impl(A, B))).
19. rule(le(C, impl(A, B)), le(meet(A, C), B)).
20. rule(le(meet(A, C), B), le(C, impl(A, B))).

# inference rules: transitivity (21), pairwise congruence (22) and congruence with swap (23)
21. rule(A, B) :- rule(A, C), rule(C, B).
22. rule(pair(A, B), pair(X, Y)) :- rule(A, X), rule(B, Y).
23. rule(pair(A, B), pair(Y, X)) :- rule(A, X), rule(B, Y).

:- rule(meet(a, join(b, c)), join(meet(a, b), meet(a, c))).

This runs with the latest version of Clors, which supports iterative deepening, and the proof is found in about 1 second at a depth of 13. Here is the generated proof, although the proof printing is not very well formatted at the moment, as the rules are shown with all their variables substituted:

21. rule(meet(a, join(b, c)), join(meet(a, b), meet(a, c)))
7.  rule(meet(a, join(b, c)), le(meet(a, join(b, c)), true))
21. rule(le(meet(a, join(b, c)), true), join(meet(a, b), meet(a, c)))
20. rule(le(meet(a, join(b, c)), true), le(join(b, c), impl(a, true)))
21. rule(le(join(b, c), impl(a, true)), join(meet(a, b), meet(a, c)))
13. rule(le(join(b, c), impl(a, true)), pair(le(b, impl(a, true)), le(c, impl(a, true))))
21. rule(pair(le(b, impl(a, true)), le(c, impl(a, true))), join(meet(a, b), meet(a, c)))
21. rule(pair(le(b, impl(a, true)), le(c, impl(a, true))), le(join(meet(a, b), meet(a, c)), true))
22. rule(pair(le(b, impl(a, true)), le(c, impl(a, true))), pair(le(meet(a, b), true), le(meet(a, c), true)))
19. rule(le(b, impl(a, true)), le(meet(a, b), true))
19. rule(le(c, impl(a, true)), le(meet(a, c), true))
14. rule(pair(le(meet(a, b), true), le(meet(a, c), true)), le(join(meet(a, b), meet(a, c)), true))
8.  rule(le(join(meet(a, b), meet(a, c)), true), join(meet(a, b), meet(a, c)))

I have a couple of questions: Bob recommends using the Yoneda lemma, and comments that the global "<= T" and "F <=" rules are not that useful. However, I am using these in this proof (and not the Yoneda lemma). Is this valid?

There is also an interesting point about the way I have had to encode this with a 'pair' struct and explicit permutations of the pair. I am wondering what the consequences would be of generalising the LHS of ':-' to allow more than one proposition, which could be matched against any goal in the goal list provided the first proposition matches - or is there another way of handling this?

Multi-Valued Logics.

What are the tradeoffs between having no negation in a two-valued logic, compared to strong negation in a three- or four-valued logic? Here we are talking about proof negation, not Boolean negation: for example, proving that A is a member of our list L, or proving that A is not a member of list L. We might also be able to neither prove nor disprove membership (unknown), or be able to prove both that it is and that it is not a member (inconsistency). For the full bilattice we need a four-valued logic, but what do we really gain over the simpler two-valued (proved or unknown) logic?
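
As a concrete illustration, here is a minimal sketch of strong negation over the four Belnap values when they are treated as ordinary atoms (the value names unknown and both are my own choice, not part of Clors):

# strong negation swaps true and false and leaves the other two values fixed
strong_not(true, false).
strong_not(false, true).
strong_not(unknown, unknown).
strong_not(both, both).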

Axiomatic Language

Axiomatic language is a logic programming language without built-in negation. It is just a formal system for defining recursively enumerable sets, which are then interpreted as the inputs/outputs of programs. A definition of the not_member predicate is given there.