Specifying Solvers?

In parsing, (non-dependent) type systems, and semantics, we have made remarkable progress in specification. In the first two areas, we have reference processing models and reference specification languages that tell us, definitively, whether a program is admissible or not; that is, they are definitive both positively and negatively. In the last, we have a definitive specification provided the program is well-formed and well-typed.

The situation seems less clear for solvers. We appear to have reference models for how a solver operates, and reference languages for specifying what a solver may do (that is, the legal operations), but no way of specifying what the solver must do, nor the conditions under which convergence to a solution (or at least certain kinds of convergence) is guaranteed.

One consequence of this is that the specification of languages that rely on general-purpose solvers is necessarily reduced to "the implementation is the definition". In some cases this leaves us in a state where the admissibility of a program may depend on the details of a particular solver implementation, or even on the performance of the CPU on which the compilation occurs.

I see no problem with stronger vs weaker compilers. My concern here is to ensure that there exists a precise specification of the language that all correct implementations must accept in order to be conforming implementations.

Is there work or progress along these lines that I should look at?


Proof checking vs. finding

I can't answer the question, but I have a few thoughts:

1) Usually it is easy to check solutions, so where the complexity of finding a solution is an issue, it makes sense to me to offer/require some mechanism to say "Here is the solution, check it" as a kind of footnote. For complex problems, the difference between checking and finding may be the difference between tractable and intractable.
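To sketch the asymmetry, here is a minimal checker for claimed SAT solutions in Haskell. The names (`Lit`, `checkCNF`, etc.) are illustrative, not from any particular solver; the point is only that checking a candidate assignment is linear in the size of the formula, even when finding one may be intractable.

```haskell
import qualified Data.Map as M

type Var = Int
data Lit = Pos Var | Neg Var          -- a literal: a variable or its negation
type Clause = [Lit]                   -- a disjunction of literals
type CNF = [Clause]                   -- a conjunction of clauses
type Assignment = M.Map Var Bool

-- A literal is satisfied when the assignment gives it the right polarity.
litSat :: Assignment -> Lit -> Bool
litSat a (Pos v) = M.findWithDefault False v a
litSat a (Neg v) = not (M.findWithDefault False v a)

-- A CNF formula holds when every clause has at least one satisfied literal.
-- One pass over the formula: checking is easy, finding is the hard part.
checkCNF :: Assignment -> CNF -> Bool
checkCNF a = all (any (litSat a))
```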

2) Usually it is possible to enumerate proofs and partially order them by length, so it is easy to require automated finding of "short" proofs. It might be friendly in some contexts to allow the user to parameterize some aspects of the partial ordering.
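To illustrate the idea with a toy, made-up proof-term language (not any real logic): once proof terms can be enumerated by size, "there is a proof of length at most n" becomes a finite, mechanically checkable condition.

```haskell
-- A hypothetical proof-term language: one axiom and one binary rule.
data Term = Axiom | App Term Term deriving (Eq, Show)

-- All terms of exactly the given size, where an App node costs 1
-- plus the sizes of its two subterms.
termsOfSize :: Int -> [Term]
termsOfSize 1 = [Axiom]
termsOfSize n = [ App l r | k <- [1 .. n - 2]
                          , l <- termsOfSize k
                          , r <- termsOfSize (n - 1 - k) ]

-- All terms up to a given size, shortest first: a finite search space,
-- so "a proof of length <= n exists" is decidable by exhaustion.
upTo :: Int -> [Term]
upTo n = concatMap termsOfSize [1 .. n]
```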

Checking vs finding...

... is exactly the difference between P and NP.

And, in passing, enumerating proofs is not a very good strategy, since there are just so many of them. In addition, some problems/theorems are known not to have short proofs (See, for example, the lower bounds on resolution proofs of the pigeonhole principle here).

Clarifications

Yes, I am well aware that checking vs. finding is the difference between P and NP for SAT and related problems (hence my use of "tractable" vs. "intractable"). However, the problems shap has in mind may be either more or less complicated to solve than SAT, and may not even be logics at all (e.g. linear programming, constraint systems, higher-order logics, etc.). In most cases, it will still be relatively easy to check a solution or proof, even if it is intractable to find one. Also, "solver" is often used for problems whose short solution is an existence proof for an existential claim, but there are other types of formula to check in compiling: figuring out whether a switch statement covers all possible cases, for example, corresponds to proving a universal claim (for every x, x is one of these cases).
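That universal claim can be made concrete with a small Haskell sketch (the `Color` type and `covers` function are invented here purely for illustration): exhaustiveness of a case analysis over a finite type means every inhabitant is handled.

```haskell
-- A finite type whose inhabitants we can enumerate.
data Color = Red | Green | Blue deriving (Eq, Enum, Bounded, Show)

-- The universal claim "for every x, x is one of these cases",
-- checked by quantifying over all inhabitants of the type.
covers :: [Color] -> Bool
covers handled = all (`elem` handled) [minBound .. maxBound]
```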

Re: enumerating. shap did not tell us whether he was even thinking about a logically complete language (or something like Coq/HOL/PVS that is based on a higher-order logic and lacks completeness). Still, for most any setup with a given set of proof primitives, the set of proofs of length less than some given integer N is finite. So one general way to specify how *relatively* easy a proof is in a given language is to specify that there is a proof of less than some given fixed length.

My additional comment about ordering the enumeration was not a suggestion about how to implement a solver; that would have been absurd, since we don't even know what logic we are talking about! The idea was rather that a programmer/user might know or guess a more direct route to a given proof than would be taken by a theorem prover knowing nothing but the general problem class. For a concrete example, think of one implementation of a given function in pure Prolog being more efficient (perhaps only probabilistically) than another that implements the same logical function, because it uses a better ordering of the search space. In other words, the user could give tactical hints to the proof search that would help proofs, where they exist, be found more quickly.

P vs. NP

Finding a proof of an arbitrary program property is not NP. There's no bound on how long it could take, and whether a given proposition is provable at all is undecidable.

whether a given proposition

whether a given proposition is provable at all is undecidable.

Could you expand on this? Isn't this dependent on the logic being used? For example, I cannot conceive of a valid Haskell type that cannot be implemented. Even by something such as

k :: a -> b -> b
k m n = n

Thinking of the type as the proposition and implementation as a proof it seems to me that the ability to construct the proposition alone means it is provable in some sense, although not necessarily a useful one.

Rice's Theorem

Depends on the logic

Yes, I was a little cryptic. If you fix a proposition and vary the program considered over all Turing Machines, then as Z-Bo said, Rice's theorem applies. If you fix a program and vary the propositions you're considering, then the space of propositions you consider affects complexity.

Could you clarify "solvers"?

I'd like to try to follow the thread. Thanks!

My guess

I assume he means constraint solvers and/or SAT solvers.

To Clarify

My concern here is to ensure that there exists a precise specification of the language that all correct implementations must accept in order to be conforming implementations.

Just to be clear: why wouldn't the normal notation of formal-logic inference rules be good enough as a specification? Or information-flow-directed attribute grammars? I mean, there are lots of ways of specifying inference rules, up to and including cylindric algebras. (No idea what the latter is, just a joke.)
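For what it's worth, here is a minimal sketch of what specification-by-inference-rules looks like when transcribed directly into code, for a toy simply-typed language (all names here are made up for illustration):

```haskell
data Ty   = TInt | TArr Ty Ty deriving (Eq, Show)
data Expr = Var String | Lit Int | Lam String Ty Expr | App Expr Expr

type Ctx = [(String, Ty)]

-- Each equation is a direct reading of one typing rule.
infer :: Ctx -> Expr -> Maybe Ty
infer ctx (Var x)     = lookup x ctx                    -- [Var]: x : T in ctx
infer _   (Lit _)     = Just TInt                       -- [Int]: literals are TInt
infer ctx (Lam x t e) = TArr t <$> infer ((x, t) : ctx) e  -- [Abs]
infer ctx (App f a)   = do                              -- [App]
  TArr t1 t2 <- infer ctx f
  t0 <- infer ctx a
  if t0 == t1 then Just t2 else Nothing
```

Rules like these do pin down admissibility definitively; the open question in the original post is whether anything analogous pins down what a solver must find.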