Controversy over the definition of "Logic Program"

There is an ongoing controversy with Kowalski about the definition of "Logic Program":

* A logic program is one in which each computation step can be logically inferred.

* A logic program is written in logical clausal form and procedurally interpreted using forward and backward chaining (which was first done in the Planner programming language).

See the following references:

Wikipedia censorship

More Wikipedia censorship by anonymous Wikipedia administrator CBM

What is a Logic Program?

Work on inconsistency robust

Work on inconsistency robust logics and procedural embedding does seem valuable. But I think you shouldn't use Wikipedia as a platform for original research or self promotion. Editing the opening paragraphs of the logic programming article to discuss an alleged controversy, especially one in which you're a primary participant, is not appropriate behavior. Neither, really, is coming to LtU to complain about it being nixed.

I learned of logic programming ostensively, with prototypical examples such as Prolog and Datalog. I'm inclined to favor a definition of logic programming involving logical clauses, i.e. that my program syntactically is a set of logical assertions regarding the program's behavior. But this isn't necessarily contradictory to a more procedural model. We could easily have procedural logic combinators in much the same sense as we have monadic parser combinators as an alternative to EBNF grammars.
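A minimal sketch of what I mean by procedural logic combinators, in Haskell and loosely in the style of microKanren (the representation and all names here are my own, purely for illustration):

import qualified Data.Map as M

-- a goal maps a substitution to a (possibly empty) list of
-- extended substitutions, one per way the goal can succeed
data Term = Var String | Atom String deriving (Eq, Show)
type Subst = M.Map String Term
type Goal  = Subst -> [Subst]

-- resolve a variable through the substitution
walk :: Subst -> Term -> Term
walk s (Var v) = maybe (Var v) (walk s) (M.lookup v s)
walk _ t       = t

-- unification as a primitive goal
unify :: Term -> Term -> Goal
unify a b s = case (walk s a, walk s b) of
  (t1, t2) | t1 == t2 -> [s]
  (Var v, t)          -> [M.insert v t s]
  (t, Var v)          -> [M.insert v t s]
  _                   -> []

-- conjunction and disjunction: the analogues of sequencing and
-- alternation in monadic parser combinators
conj, disj :: Goal -> Goal -> Goal
conj g h s = concatMap h (g s)
disj g h s = g s ++ h s

For example, disj (unify (Var "x") (Atom "a")) (unify (Var "x") (Atom "b")) applied to M.empty yields two substitutions, one binding x to a and one binding x to b.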

What is a Logic Program?

Anonymous Wikipedia administrator CBM has censored mention of the article "Inconsistency Robustness in Logic Programs" in the book "Inconsistency Robustness." What is the justification for this censorship?

Just the fact that CBM (whoever that might be) went to the trouble to censor someone's relevant and valid submission indicates to me that the issue is important and deserves to be discussed.

The controversy over the definition goes to the very heart of the Wikipedia article on "Logic Programs." Planner introduced logic programs using the criterion that each computational step must be logically inferred. Prolog is a subset of Planner, restricted to allowing only clausal syntax. Unfortunately, clausal syntax is inadequate for logic programs, as explained in the article "Inconsistency Robustness in Logic Programs."

re censorship

"No Original Research" is a core policy of Wikipedia. "No Shameless Self-Promotion" is also on the policy lists. While this issue is dear to your heart, that's just even stronger evidence that you shouldn't get involved. "Neutral Point of View" is another core policy of Wikipedia, and you very clearly don't have one. Having your comments removed was well justified, IMO. Promote your work and argue your case through other venues.

Your idea that censorship implies the content is important is ridiculous. In many cases, as in this one, the content is inappropriate or fails to meet standards.

Book "Inconsistency Robustness" should be ref in Wikipedia

Obviously, the book "Inconsistency Robustness" should be referenced in Wikipedia.

I am surprised that you are in favor of censorship by anonymous Wikipedia Administrator CBM. Whoever made the submission censored by CBM was making good and valid contributions. It was wrong of CBM to arbitrarily censor their contribution.

Many students initially have a very hard time understanding actual Wikipedia censorship practices because they go against the official statements and policies of Wikipedia. However, once they have had some personal experience, they begin to understand that how organizations operate in practice can be very much at variance with their official propaganda. This experience serves them well in their careers.

I feel more or less the same

I feel more or less the same about censorship of your self-promotion as I would if you had posted cat pics and had those removed. That you don't even recognize your shameless actions and lack of neutrality as clear violations of official Wikipedia policy tells me a lot about how you've managed to become a 'banned user'.

Get over your persecution complex. And allow someone who can maintain a neutral point of view (NPoV) to add an entry for your book if they consider it worthy. If this only happens after you're dead of old age, so be it.

Wikipedia is not perfect

Unfortunately, Wikipedia has some severe organizational problems.

An academic symposium was held at Santa Clara University a few years ago to explore these issues and what might be done about them.

Also, I wrote the following article:
Corruption of Wikipedia

Unfortunately, Wikipedia has not been able to do much about its organizational problems.

No system is perfect

No system is perfect. Much like the design of a PL, the design of policies aims to avoid or suppress some problems even if it might create others that hopefully aren't as bad given the intended usage. In this case, Wikipedia's policies avoid some problems that its creators consider important, at the cost of raising barriers for experts with regard to original research and NPoV. This is not an unreasonable tradeoff, even if you don't like it.

There are other appropriate venues for your original research. And if you really want a wiki, you can certainly create one. IIRC, the author of JActor, Bill La Forge, has been working on actor-model based wikis.

Wikipedia is controlled by mostly anonymous administrators

Wikipedia is controlled by mostly anonymous administrators, who have defeated all attempts at organizational reform.

But in life, people must go with systems as they exist in academia, government, the Web, etc.

Some researchers and students have bravely attempted to contribute to Wikipedia. Unfortunately, anonymous administrators for their own idiosyncratic reasons censor valid and useful contributions. Also, some of these administrators have taken to libeling researchers and students.

Wikipedia has become very political and is controlled by the "in-group" of mostly anonymous administrators. Since some of these administrators have done good work, the Wikipedia project has managed to survive. But it is organizationally unsound and badly in need of reform.

I'd be more sympathetic if

I'd be more sympathetic if you hadn't violated Wikipedia's policies, and if you had gone to a free speech forum to discuss. As is, I can't make myself even begin to care about your alleged plight. CBM was in the right, and you came to the wrong place to complain about it. LtU is a venue dedicated to PL. In PL, free speech is frequently compromised by the need to be grammatically correct, safely typed, and mechanically executable.

Why should LtU support censorship of PL research?

Why should LtU support censorship of programming language research?

My sympathies go to the brave students and researchers who have had their contributions censored by CBM (whoever that is) and then been libeled as a result.

Unfortunately, Wikipedia is in a bad state of affairs and reforms are stalled. We had a good open discussion forum at Santa Clara University, from which Wikipedia backed out after agreeing to participate.

Why should I answer straw man questions?

I haven't suggested that LtU should support censoring of PL research.

OTOH, we could reasonably censor discussion about the alleged censoring of PL research on Wikipedia, since that's clearly off-topic and inappropriate for this venue. I see no reason that LtU should be against censorship in general.

Wikipedia censors original research. And that's okay.

Anonymous Wikipedia administrator CBM censored PL research

Anonymous Wikipedia administrator CBM censored programming language research. Since CBM is anonymous it is impossible to know their qualifications and/or biases.

It seems dangerous to me for LtU to endorse CBM's censorship of programming language research.

LtU

The initials of the person you are conversing with are DMB.

LtU

I believe that his name is David Barbour, but I could be wrong.

Don't attempt to generalize

Don't attempt to generalize this to 'censored PL research' as though the circumstances surrounding the action are irrelevant. On Wikipedia, original research isn't allowed. The content must be presented with a neutral point of view. Shameless self promotion is considered inappropriate. These are Wikipedia policies, and you violated all of them.

CBM shouldn't have to censor your actions. You should have self-censored.

If being anonymous truly made it impossible to know qualifications and biases, your attempt at anonymous edits wouldn't have been so transparent to the moderator.

I can't speak for LtU.

Hopefully, LtU can avoid problems that plague Wikipedia

Hopefully, LtU can avoid some of the worst problems that are now plaguing Wikipedia.

For example, does LtU need anonymous administrators to enforce "self-censorship", "NPOV", no original ideas, arbitrarily not allowing some students and researchers to contribute, etc?

LtU is not Wikipedia

LtU has its own vision and policies. But I wouldn't be upset if LtU moderators were more active about shutting down off-topic threads like this one.

Users don't get banned from contributing to Wikipedia for a single or accidental violation of policy. Given that you were already a banned user before your most recent attempts to violate Wikipedia policy, I am certain that the decision was not 'arbitrary'. You deserved it.

And moderators can't ever enforce 'self censorship'. Except on themselves, of course. But self censorship is just another form of self control. Every mature adult is expected to practice it to some degree.

Banning and censoring users is a dangerous business

David,

I am surprised that you vigorously defend banning and censoring researchers and students. This is a very slippery slope!

You must be aware of the corruption, arbitrary actions including banning, and other shenanigans of anonymous Wikipedia administrators that have been well-publicized over the years and summarized in the above referenced article "Corruption of Wikipedia."

Of course, you are free to ally yourself with the anonymous Wikipedia administrators and heartily approve of their censorship.

Regards,
Carl

PS. You are also making some unwarranted assumptions about what is happening in the Wikipedia censorship business.

Moderation in moderation.

Corruption can happen even without moderators. I've seen it happen, e.g. in the C2 wiki and several other forums. Without moderation, the persistent assholes (trolls, cranks, spammers, etc.) gain a disproportionate voice. Anyone who attempts to battle them ultimately just makes a greater mess of things. People attempting actually valuable contributions grow demoralized and leave, especially when those efforts are blasted in a battle. Failing to enforce standards can damage a project or community just as badly as severe moderation. As with many things, the dose makes the poison.

Regardless of whatever is happening on the larger scale, I think the specific examples of CBM's 'censorship' you have pointed to were well justified. And I know enough of your behavior (from my interactions with you, and from those same examples you provided) that I would consider even your ban from Wikipedia to be justifiable.

There is no moderation in arbitrary censorship and/or banning

David,

There is no moderation in arbitrary censorship and/or banning. For ample illustrations, please see the article "Corruption of Wikipedia."

Wikipedia doesn't have effective ways to correct its organizational dysfunctionality. The worst forms of censorship often take place when censors are anonymous (as on Wikipedia) because censors are less accountable for their actions. Unfortunately, there are often apologists like you who rationalize and justify arbitrary censorship and banning.

How long do you think that it will take for Wikipedia administrators to reverse CBM's censorship? This is actually a good test of the current strength, knowledge and wisdom of our research community with two parts:
* Ego: it's not easy for Wikipedia administrators to make corrections when they are wrong because they are expected to defer to each other's censorship and not make waves.
* Knowledge: Paradigm shifts have not been easy for research communities to navigate for reasons explained by Kuhn, Lakatos, Feyerabend, etc.

'arbitrary' censorship

I'm not going to pretend that moderators never abuse their powers. I don't advocate 'arbitrary' censorship and banning. I acknowledge it happens, much like police sometimes plant evidence or purposefully make false arrests.

But you violated at least three written rules of Wikipedia, two of them core rules. There is no need for 'arbitrariness' to explain the rejection of your contribution. CBM's censoring, at least in this case, was justified and should not be reversed.

If at some later time your positions on procedural embedding of logic gain wider acceptance through the academic community's own methods (e.g. peer reviewed papers, references) then Wikipedia's community may eventually come to document your work. In the unlikely case the alleged 'controversy' between Kowalski and yourself gains prominence and notability, that also might make it in. Wikipedia doesn't need to operate near the bleeding edge of research to fulfill its goals as an archive of human knowledge. Nor does it need to document niche squabbles and controversies.

Wikipedia should not censor published research

Someone (most probably a student or researcher from Silicon Valley) tried to add a reference and background information on a published book building on what they had learned from seminars, conferences, and articles in the book. This offended anonymous Wikipedia administrator CBM and so he arbitrarily deleted the submission.

At this point, we live in a bifurcated research community with some of the leading researchers in Silicon Valley in one camp and another camp of hardline conservatives like CBM using censorship to try to suppress the first camp. As the first camp continues to gain ground, the hardliners are taking more and more drastic actions attempting to maintain their supremacy. It seems most likely that the anonymous Wikipedia cabal led by CBM will eventually have to give ground. Publication of the book "Inconsistency Robustness" marks a turning point at which Wikipedia administrators led by CBM are forced to become increasingly corrupt in order to hold the line. However, it is important not to underestimate the power of conventional thinking to thwart scientific progress.

Books reasonably have

Books reasonably have notability guidelines for creating an article for them. And there is no particular reason to assume the existence of published research makes it valid or authoritative. I'd hate to see a Wikipedia where Senator Jim Inhofe's The Greatest Hoax: How the Global Warming Conspiracy Threatens Your Future made it into the opening paragraphs of climate change articles by mere virtue of being "published research". Even if the book was only used to correct alleged disinformation in such articles.

Your war analogies make it exceedingly clear that you lack an NPoV. Neutrality means not taking a side.

In any case, I'm letting this conversation go.

Premature for article just on "Inconsistency Robustness" book

It's probably premature for a Wikipedia article just on the "Inconsistency Robustness" book. Instead, researchers seem to think that it is more valuable to the community to use book chapters to alleviate some of the disinformation and lack of information in existing Wikipedia articles.

I expect that anonymous Wikipedia hardliners led by CBM will continue to try to censor as much information as they can using any excuse that they can think of. Unfortunately, it's just their nature :-(

Also, it's unfortunate that my directness offends your "NPOV" mentality.

PS. Inhofe's book is referenced in the Wikipedia article about him ;-)
Also, I could see his book having a prominent place in the "Climate Change Denial" article.

Please

I should know better

I'll try to stop.

Logic clause syntax is inadequate for logic programs

As pointed out in the referenced article, logic clause syntax is inadequate for logic programs.

For example, the following goal-driven Logic Program works forward from start to find the cost to finish:

When ⊩ Path[start, finish, aCost]→
     ⊢ aCost = Minimum {nextCost + remainingCost
                        | ⊩ Link[start, next≠start, nextCost],
                             Path[next, finish, remainingCost]}
     // a cost from start to finish is the minimum of the set of sums of
     // the cost for the next node after start and
     // the cost from that node to finish

Not convinced

Why not something like:

path(start,finish,cost) :- link(start,next,cl), path(next,finish,cp), cost=(cl+cp)

minPath(start,finish,cost) :- path(start,finish,cost),MINIMIZE(cost)

It isn't clear to me that the syntax is the problem here, or that it couldn't be augmented with a few special predicates. Dyna and many other logic PLs or soft constraint logics inherently include concepts of weights that can be minimized. Some logic semantics (such as linear logic) have resource semantics built right into them. The logic semantics probably should include optimization if we're going to include it in our clauses.

Unfortunately easy to make mistakes with logic clause programs

David,

Unfortunately, it's easy to make mistakes with logic clause programs. For example, your clause program given above is incorrect in Prolog:

minPath(start,finish,cost) :- path(start,finish,cost),minimize(cost)

bad logic

Your earlier assertion regards whether the syntax is 'adequate', not whether it is "easy to make mistakes" and certainly not whether Prolog has the semantics to easily express optimized search. Let's not shift goalposts. Assuming a special predicate MINIMIZE(cost) is valid in the semantics of a clearly-not-Prolog logic language, how would the clausal syntax still be inadequate?

Erroneous (clausal) logic program

David,

Unfortunately, the following clausal Prolog program is erroneous:

minimumPath(start,finish,cost) :- path(start,finish,cost), minimize(cost)

Also, the following is erroneous in mathematical logic:

path[start,finish,cost] ∧ minimize[cost] ⇒ minimumPath[start,finish,cost]

Search and Negation Elimination

I think the key property of logic programming is searching, and you need to stick to subsets of logic that are searchable. See section 2 of: http://www.lix.polytechnique.fr/~dale/papers/ic94.pdf

I don't think inconsistency is an issue; instead, negation as failure is the problem. If you view logic program output as "proved" or "not proved", then inconsistency does not come into it. The problem arises when you consider "not proved" to be "false". Then you start trying to add multi-valued logics and a whole lot of (in my opinion) unnecessary complexity. If instead you stick to constraint propagation for disequality and remove negation using negation elimination (www.cs.cmu.edu/afs/cs/user/mobile/thesis/bella/thesis.ps.gz), then the program "not member" is derived from "member" by term rewriting, and "not member" proved is not the same thing as "member" not proved.

Disclaimer: I am slowly working on implementing this. So far I have a clause-based (Prolog syntax) logic interpreter with sound unification (post-unification cycle check) and a complete search strategy (iterative-deepening search), and I'm working towards complete inference; I already have disequality by constraint propagation. The next step is to implement type inference, as my current understanding is that I need to know the types of terms to do negation elimination. You can check out my progress here: https://github.com/keean/Clors
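For readers unfamiliar with it, here is a minimal sketch of iterative-deepening search in Haskell (purely illustrative; isGoal and children are assumed problem hooks, not names from the Clors code base):

-- enumerate the nodes at exactly depth d, depth-first
atDepth :: (a -> [a]) -> Int -> a -> [a]
atDepth children d n
  | d == 0    = [n]
  | otherwise = concatMap (atDepth children (d - 1)) (children n)

-- iterative deepening: rerun a depth-bounded DFS with growing bounds,
-- giving the completeness of breadth-first search with the memory
-- usage of depth-first search; visiting only the frontier at exactly
-- depth d avoids reporting shallow solutions again on deeper passes
iterDeepening :: (a -> Bool) -> (a -> [a]) -> a -> [a]
iterDeepening isGoal children root =
  [ n | d <- [0 ..], n <- atDepth children d root, isGoal n ]

Note this sketch never stops deepening; a real implementation also tracks whether the last pass was cut off by the bound, so it can terminate on finite search spaces.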

"Negation as failure" is a failure

Planner introduced "negation as failure" as a hack because the memory on the PDP-6 was so tiny.

However, it was not meant to be fundamental because "negation as failure" means that "the less that is known, the more that can be proved."

Speaking of search strategies and memory

I've been planning on working on a search library (not in the context of proof), and I think that mixed strategies are a good idea.

Yes, depth-first search is the most memory efficient but not as strong. I'd like to support something where you can specify how much memory you set aside for a search, using memory to save partial searches when you can afford it and iterative deepening when you can't.

diagonalization

I think an important search strategy involves diagonalization, similar to the proof for countability of rational numbers. It can provide a nice mix of depth and breadth, and can be easily augmented with heuristics to reorder some of the search paths.

I can't think of a practical programming context

where diagonalization would apply.

I suspect that proofs that involve induction, such as countability, have wider application and certainly sound interesting (and more promising), but I don't think that's been used in a programming context yet either.

stream join

Diagonalization is a useful search strategy even outside of proofs. Diagonalization is most clearly useful any time you have a join on multiple streams. E.g. if you have:

type Stream a = 1 → ((a*Stream a)+1)
-- cross product of two streams
join :: Stream a → Stream b → Stream (a*b)

If we assume either or both streams may be infinite, then we might leverage diagonalization to ensure we'll eventually reach any given pair.

In turn, this applies to logic programming because we frequently search and join very large relations, and it can be convenient to process relations as streams (which provides a useful degree of laziness) even when we know those streams might be finite.
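A list-based sketch of such a diagonal product in Haskell (lists standing in for the stream type above; the anti-diagonal "stripes" decomposition is one known way to write it, and all names here are mine):

diag :: [a] -> [b] -> [(a, b)]
diag xs ys = concat (stripes [ [ (x, y) | y <- ys ] | x <- xs ])
  where
    -- regroup a list of rows into its anti-diagonals
    stripes :: [[c]] -> [[c]]
    stripes []                = []
    stripes ([] : rows)       = stripes rows
    stripes ((r : rs) : rows) = [r] : zipCons rs (stripes rows)

    -- cons elements of the first list onto successive groups
    zipCons :: [c] -> [[c]] -> [[c]]
    zipCons [] gs             = gs
    zipCons cs []             = map (: []) cs
    zipCons (c : cs) (g : gs) = (c : g) : zipCons cs gs

Every pair shows up at a finite position even when either input is infinite; e.g. take 6 (diag [0 ..] "ab") gives [(0,'a'),(0,'b'),(1,'a'),(1,'b'),(2,'a'),(2,'b')].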

I don't get it

.

Perhaps

you should try implementing it.

I can't even begin to read

your code. So I can't implement anything. What does it mean?

streams

Even if you cannot read the type, a `Stream a` is a possibly infinite list of `a` values. E.g. we could model the infinite list of natural numbers as a `Stream Int`. This is more or less the pure FP variation of generators.

A join function is a Cartesian product. E.g. if we had two streams each with 100 values, then the resulting join would have 100*100 = 10k values. Perhaps I should instead have used 'cross' as the name.

With potentially infinite lists, it should be obvious why you can't just take the first value of the first list and combine it with all the values of the second list, or vice versa. So diagonalization of some form is useful.
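To make that concrete with the same list-based sketch as above (naiveCross is my own illustrative name):

-- the obvious product: fine for finite inputs, but with an infinite
-- second list it never advances past the first element of the first
naiveCross :: [a] -> [b] -> [(a, b)]
naiveCross xs ys = [ (x, y) | x <- xs, y <- ys ]

Here take 3 (naiveCross [0 ..] [0 ..]) gives [(0,0),(0,1),(0,2)], and no pair (1,_) is ever reached; the diagonal order fixes exactly this.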

uhm ok...

so once you've generated a_len elements from A and b_len from B then you have a_len * b_len elements.

But if you generate another B then you have to generate a_len more cross-product elements with the new B element... Laziness doesn't seem to help you save memory, since you have to keep all of the elements you've generated so far to combine with each new element from the other stream.

What's the problem? Does every A have to get matched with a B, which cancels both out of the stream? Do you have to prove that the number of unmatched elements doesn't grow with the square of the generated elements, or something?

strict streams

The stream type I described above works perfectly well with strict evaluation, and doesn't need to keep information about already generated elements. It can simply regenerate them each time. By doing so, we trade memory costs for CPU costs.

Diagonalization is a search strategy in much the same vein as DFS or iterative deepening. You can apply it to a pair of streams, or a pair of trees, whenever you can decompose a search into two different domains. I'm not sure why you're asking what problem it's aimed to solve. Diagonalization has been used in some proofs, but it is certainly not the same as the proofs in which it's used.
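Rendering the stream type from upthread in Haskell may help (a sketch; the point is that nothing forces caching of generated elements):

-- 1 → ((a * Stream a) + 1): a function that, when asked, yields
-- either a value plus the rest of the stream, or end-of-stream
newtype Stream a = Stream (() -> Maybe (a, Stream a))

-- naturals from n upward: no element is cached; walking the stream
-- twice simply recomputes it, trading CPU for memory as described
natsFrom :: Int -> Stream Int
natsFrom n = Stream (\() -> Just (n, natsFrom (n + 1)))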

Is this standard terminology?

Cantor had a scheme for counting the rationals that weaved back and forth on diagonals, but I haven't heard that referred to as diagonalization (which is an - AFAIK - unrelated technique for showing that you can't count the reals).

not sure

I certainly heard the term used this way several times in uni from a few professors, but it may have been a local or colloquial use of the word. Natural language is fluid.

Addendum: Other related search strategies on multiple discrete dimensions include Z-order space-filling curves, but I didn't learn about those until years after uni.

to clarify

I was also thrown by your use of "diagonalization" here. After some googling, I did find that the construction for counting the rationals is sometimes called "Cantor's first diagonal argument", the "second diagonal argument" being, in this context, the one that's about showing a set is uncountable (that Matt M mentioned). I didn't find any use of "diagonalization" in connection with counting the rationals, but I can at least see the logic of your usage now. I would just caution that "Cantor's diagonal argument" and "diagonalization" are very widely used without qualification (as Google will verify) to refer to the "second" diagonal argument.

Applying diagonalization can be tricky: Gödel got it wrong

Applying diagonalization can be tricky. For example, Gödel got it wrong in his incompleteness results.

Please see the following article:
Mathematics self-proves its own Consistency and Other Matters

Regards,
Carl

PS. I think you meant to say using diagonalization for "the proof of the uncountability of the real numbers" instead of "the proof for countability of rational numbers."

I meant what I said. cf

I meant what I said. cf

Countability of rational numbers doesn't rely on diagonalization

The proof of the countability of rational numbers doesn't rely on diagonalization.

Yet it's typically expressed

Yet it's typically expressed that way, e.g. as in the linked page.

Diagonalization in contemporary mathematics

David,

You are using terminology in an archaic way that is now completely nonstandard.

On the other hand, diagonalization was used by Gödel in the standard way in his work on incompleteness. However, Gödel missed that Russell's theory of types should be applied to the syntax of mathematical sentences, thereby ruling out the existence of the fixed points used to construct "self-referential" sentences. Consequently, Gödel's incompleteness results are invalid. It was left to Church, closely followed by Turing, to provide a valid proof of the first incompleteness theorem using a nonconstructive proof that did not involve diagonalization. Recently, it was discovered that Gödel's second incompleteness theorem is false in very strong logics. (See the article referenced above.)

Regards,
Carl

getting caught up in definitions

I grant my use of 'diagonalization' is archaic. But I clarified it with 'similar to the proof of countability of rational numbers', which should be enough for anyone to recognize or look for my meaning. I don't feel any desire to join a tangential discussion about whether Gödel's incompleteness is valid under some alternative assumptions of stratification.

Types are fundamental to both logic and computation

Discussion of types is not tangential to either logic or computation:
* Types are fundamental to avoiding paradoxes in logic.
* Types are fundamental to computation. For example, in the Actor Model it is necessary to know the type of an Actor in order to send it a message.

Yet they are tangential to a

Yet they are tangential to a discussion on search strategies.

Iterative Deepening

I like the idea of memoising the results in a fixed size buffer, as this can close the performance gap to breadth-first. It seems a memoised iterative deepening would offer the best of all uninformed search strategies. I also think cycle-detection could be a useful feature.

;-) My first thought was

Nooo, I wanted to (eventually) write a logic system that used that... I don't want to use not-invented-here software... :p

Selecting Nodes

What are good methods for selecting nodes for memoisation? A depth limit (memoise all nodes up to depth N) seems simple and cheap, and provides a space limit whilst choosing nodes likely to be revisited. You don't want nodes at the end of the tree to keep getting replaced in a fixed-size memory pool. The other way might be to count 're-use' of nodes in each iteration and replace the ones with a lower count. This seems more problematic, as non-memoised nodes would still need a count to be stored somewhere. At the moment I prefer the depth limit. Any other reasonable options?

Exponential decay

Works well for caches, memoization, snapshot histories, building dictionary compression, etc. The idea is that you reduce entries by some fraction whenever you reach your limit, e.g. decimate: kill one in every ten. Counters are still possible; you cut those too until below some threshold.
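A small sketch of that idea in Haskell, assuming a memo table that tracks a use count per entry (the types and names are illustrative):

import Data.List (sortOn)
import qualified Data.Map as M

-- memo table mapping keys to (use count, value)
type Memo k v = M.Map k (Int, v)

-- when the table hits its size limit, drop the least-used tenth of
-- the entries and halve the counters of the survivors
decay :: Ord k => Int -> Memo k v -> Memo k v
decay limit m
  | M.size m < limit = m
  | otherwise =
      M.fromList
        [ (k, (c `div` 2, v))
        | (k, (c, v)) <- drop (M.size m `div` 10)
                              (sortOn (fst . snd) (M.toList m)) ]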

I haven't gotten far enough

to have an opinion on this. I wonder if there has been research on it.

My first use of this will be for optimizers, not solvers, so that's a bit different. For that I assume you keep the best paths, and not just the best: also those furthest from each other.

For game searching you'd have different qualities to look for.

I guess logic programs are usually SAT problems.

If you have a full search problem then you might as well do depth-first. You mentioned noticing loops; I wonder, are there problems where a loop means something, and you want a full-space search but with loop detection?

I really want to learn about DPLL SMT/SAT solvers, though that may be unrelated, since they have systems solving problems in more variables than you'd think possible. It seems like there's been a speedup that puts all kinds of problems in reach. It's like the FFT: when you have a new tool that can solve problems much faster than before, you want to look for uses.

It's not showing up in web searches, but I remember a Microsoft report that they had an experimental unreleased version of the Z3 SMT solver with some sort of symbolic nonlinear equation extension that they claimed could mostly keep up with Mathematica. If true, it's interesting that a single method manages to compete with decades of accrued special-purpose code.

SAT Solving Logic Programs

I have written some simple SAT solvers, up to full DPLL, and I was just starting with lookahead. I am interested in the idea that co-inductive logic programming (plus negation) is basically equivalent to SAT solving.
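For reference, here is a compact DPLL sketch in Haskell (illustrative only; a real solver adds watched literals, pure-literal elimination, clause learning, etc.):

import Control.Applicative ((<|>))

-- CNF as lists of integer literals: positive for a variable,
-- negative for its negation
type Lit    = Int
type Clause = [Lit]
type CNF    = [Clause]

-- assign a literal: drop clauses it satisfies, then remove its
-- negation from the remaining clauses
assign :: Lit -> CNF -> CNF
assign l = map (filter (/= negate l)) . filter (l `notElem`)

-- plain DPLL: unit propagation, then branch on some literal
dpll :: CNF -> Maybe [Lit]
dpll cnf
  | null cnf      = Just []     -- every clause satisfied
  | [] `elem` cnf = Nothing     -- empty clause: conflict
  | otherwise =
      case [u | [u] <- cnf] of
        (u : _) -> (u :) <$> dpll (assign u cnf)
        []      -> let l = head (head cnf)
                   in ((l :) <$> dpll (assign l cnf))
                      <|> ((negate l :) <$> dpll (assign (negate l) cnf))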

Rewriting logic programs into CNF seems hard to me at the moment; I would be very interested in pointers on how to do this. Solving logic programs with SAT seems closely related to Answer Set Programming, and I wonder how that copes with things that have an infinite set of proofs. For example, proving anything with axioms for negation can lead to an infinite set of proofs by inserting double negations. As SAT problems have finite bounds, the naive thought is that it is impossible to convert such sets of axioms into CNF, which seems problematic.