Finding Solutions vs. Verifying Solutions

[Edit] Due to the conversation below, it turned out that the sets P and NP are commonly defined differently than I considered them in my initial report. An updated report with the same content, but without this inconsistency, can be found here: Finding Solutions vs. Verifying Solutions. I also changed the blog title from "P is not equal to NP" to the above.

Below is the initial, now obsolete, post, kept for the sake of the conversation:

In this short report, I argue that P is not equal to NP.

Summary: the approach analyzes functions and their inverses as mappings between domain and codomain elements. The conclusion drawn is that verifying data is the inverse function of finding solutions to the same problem. A further observation implies that P is not equal to NP.

Any thoughts?


Probably buggy

Are you aware that the P = NP question is one of the millennium prize problems and carries a million dollar bounty? That should probably make you suspicious of something so simple as your solution.

Other signs: your Venn diagram on page 3 isn't right, since P is a subset of NP. The equations at the top of the page look wrong, too. There are functions in P for which the inverse isn't even decidable (like mapping a (proof, theorem) pair to the theorem if the proof checks, or to false if it doesn't).

Let's check again...

Are you aware that the P = NP question is one of the millennium prize problems and carries a million dollar bounty?

I'm afraid that the solution is worth $1M only if P = NP. Otherwise, unless it brings up an entirely new verifying/solving paradigm, it is not worth even 2 cents. Correct me if I'm wrong, but I have made no big contribution to science if the way I'm concluding is right. Just build up a function composition tree and count the leftmost-side and the rightmost-side elements. Now build up the inverse of the composition, where P and NP should swap their meanings, and count again. No big deal if you ask me; it should be the same count, but on the opposite sides, proving that P ⊉ NP. I bet the guys from the Clay Institute hoped to revolutionize science. Instead, I offer the same math we have been using all along, since the first grade. I'm afraid I have merely reinvented hot water :)

That should probably make you suspicious of something so simple as your solution.

But I'd say we couldn't see the forest for the trees. How in the world could I ask a million for these 3 pages? :D

your Venn diagram on page 3 isn't right, since P is a subset of NP.

That is the general belief. But on the same page I'm proving that it is wrong. Consider a problem that is in NP and not in P. If such a problem exists, and if my assumption that we can invert every problem is right (that is the central point of the whole report), then what is P for the inverted problem becomes named NP, and what is NP for the inverted problem becomes named P. It is just about names now, as we are observing a function and its inverse that swap the notions of verifying/solving on every inversion.

I'm not very good at finding examples, but let's try OCR versus DrawCharacterPixels. OCR is, in my opinion, exactly the inverse of DrawCharacterPixels (in the simplest, not very usable implementation, but let's use it for its simplicity). If we can say that:

f: OCR
f⁻¹: DrawCharacterPixels


why couldn't we say the opposite:

f⁻¹: OCR
f: DrawCharacterPixels


Though both OCR and DrawCharacterPixels fall into linear complexity (sorry for the example), it is obvious that OCR is more computationally intensive than DrawCharacterPixels. Thus, if we can have an NP-and-not-P problem, why couldn't we imagine a P-and-not-NP problem? After all, we just have to swap the direction of the same algorithm, as proposed in the report.

The equations at the top of the page look wrong, too. There are functions in P for which the inverse isn't even decidable (like mapping a (proof, theorem) pair to the theorem if the proof checks, or to false if it doesn't)

I'm sorry to claim the opposite, but "false" yields back an infinite set of theorems that are false, in my opinion. Just as there exists, somewhere out there, an algorithm for enumerating tautologies, it shouldn't be impossible to construct an algorithm that enumerates falsehoods. But those forming an infinite set is another question we should consider.

Probably buggy

Looks cool to me. But hey, it is such a simple construction, so let's find the bug :-)

I'm pretty sure that the

I'm pretty sure that the prize is for settling the question, one way or the other. It's a math prize, not related to any practical application it might have.

For the usual definitions of P and NP, it is trivially true that every problem in P is also in NP.

Regarding my example, inverting the function I gave requires deciding if a theorem is provable without seeing the proof, which is undecidable for non-trivial logic.

I think your P = NP treatment goes off the rails right away in your characterizing the problem as being about inversion.

Inverse element

I'm pretty sure that the prize is for settling the question, one way or the other. It's a math prize, not related to any practical application it might have.

My solution is so simple that it isn't worth it, sorry, it's the hot-water issue :P

Regarding my example, inverting the function I gave requires deciding if a theorem is provable without seeing the proof, which is undecidable for non-trivial logic.

We have a situation:

f: ... -> ...
f: {x, y, z} -> {u, v, w}
f: ... -> ...


to find the inverse of an element, e.g. "v", we have

f⁻¹(v) yields {x, y, z}


As simple as that.
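To make this concrete, here is a rough Python sketch (the names and the tiny table are hypothetical, just mirroring the example above): the "semantic table" of f is a finite mapping, and f⁻¹(v) is computed by scanning it:

# Rough sketch with hypothetical data: the graph of f as a finite table,
# read so that each of x, y, z maps to v, as the example above concludes.
f_table = {"x": "v", "y": "v", "z": "v"}

def preimage(table, value):
    # f⁻¹(value): collect every argument that f maps to the given value.
    return {arg for arg, result in table.items() if result == value}

print(preimage(f_table, "v"))  # {'x', 'y', 'z'}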

[Edit] Replace x, y, z, u, v and w by terms and you get lambda calculus. Introduce inverse functions and you get the whole of math. The combination of those two gives us an answer to the P vs NP question.

Passing thoughts

Just a couple of quick comments.

But I'd say we couldn't see the forest for the trees. How in the world could I ask a million for these 3 pages? :D

Actually, if you solve the problem, it wouldn't matter if the proof were short. I recall hearing of a (dis?)proof of a conjecture in graph theory that consisted of a picture of a graph that settled the question. What matters is whether the proof is correct, not how long it is. :-)

your Venn diagram on page 3 isn't right, since P is a subset of NP.

That is the general belief. But on the same page I'm proving that it is wrong.

Although Matt M already remarked on this, I'll say a bit more, to make the difficulty clear. Given the sets that the mathematical community means when they say "P" and "NP", it follows immediately from the definitions of those sets that P is a subset of NP. If you are talking about two sets that you call "P" and "NP" such that P is not a subset of NP, then you aren't talking about the sets that others are talking about when they say "P" and "NP". And if you're talking about different sets than others are talking about, you are also talking about a different problem than others are talking about when they ask whether or not "P=NP".

As wiki says,

I use these definitions:

The general class of questions for which some algorithm can provide an answer in polynomial time is called "class P" or just "P".

and

The class of questions for which an answer can be verified in polynomial time is called NP, which stands for "nondeterministic polynomial time."

As it turned out that there should exist a problem in P and not in NP, I suspect that the subset setup of the problem isn't settled right, either in my case or in the common one, which may be a reason for mass misleading and general unsolvability.

I'll check it out more thoroughly and, of course, inform you of my conclusion.

P is the set of problems

P is the set of problems that can be solved in polynomial time by a deterministic Turing machine.

NP is the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.

A deterministic Turing machine is a special case of a nondeterministic Turing machine, in which there is always only one choice of what to do next. Therefore, when a problem is solved by a deterministic Turing machine, it is being solved by a nondeterministic Turing machine. So P is a subset of NP.

Inconsistency

I think I found the inconsistency. In the official definition of the problem accepted by the Clay Institute, Stephen Cook defines the set P like this:

P = {L | L = L(M) for some Turing machine M that runs in polynomial time}

where L is a language and L(M) is the language accepted by a Turing machine M. Obviously, the set P is a set of all polynomial-time algorithms, whether they are solving problems or verifying solutions.

My interpretation of "P" was that "P" includes algorithms that can solve a specific problem in polynomial time, regardless of their verification time (which may be greater than the solving time). In my interpretation of the whole problem, the word "solving" should be defined separately from the word "verifying". Stephen Cook's definition of P does not make this distinction; it includes the notion of "verifying" in the general notion of "solving", which is an inconsistency from the standpoint of my own approach.

Nevertheless, I hold that my report provides an answer to the essential question of "is P equal to NP" that the world is buzzing about:

For all the problems whose solutions can be verified in polynomial time, the question is: "Can they also be solved in polynomial time?"


My answer is: "No."

Anyway, I'm sorry for the misunderstanding. I should update my report with my own view of the problem's initial setup and use different sets than "P" and "NP" to answer the same question the Clay Institute is asking. It complicates the whole exposition a bit, because I use a different approach than the academic world is accustomed to, but my answer to the big question is still the same.

it includes the notion of

it includes the notion of "verifying" in the general notion of "solving", which is an inconsistency from the standpoint of my own approach.

The thing is that any problem, be it verifying or solving, can be reduced to accepting a language. Try.

Also, if I understand you correctly, you are supposing that a calculation and its verification are inverse functions. Why? I would say that, if p is an instance of a problem and s is its solution, solving it would have the form:

p -> s

whereas verifying the solution would be:

p + s -> boolean

I don't think you can think of:

s -> p

because the problem p (or the problem of which p is an instance) may be "surjective" with respect to its solutions, so solving problems is in general not invertible. I think there is also the problem that a computation may not halt, if we want to consider it as a mere function, let alone an invertible one.
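To illustrate the difference in shape, here is a minimal Python sketch (subset-sum is my arbitrary choice of example problem, nothing from your report): solving has the form p -> s, while verifying has the form p + s -> boolean:

from itertools import combinations

# Instance p: (numbers, target). Solution s: a subset summing to target.

def solve(numbers, target):
    # p -> s: exhaustive search over all subsets (exponential in general).
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

def verify(numbers, target, subset):
    # p + s -> boolean: a membership check plus a sum (polynomial time).
    return all(x in numbers for x in subset) and sum(subset) == target

s = solve([3, 9, 8, 4], 12)
print(s, verify([3, 9, 8, 4], 12, s))  # (3, 9) True

Note also that many different instances can share the same solution, so from s alone you cannot recover p.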

A different approach

Enrique, thank you for the observation.

I approach the problem from a different point of view than analyzing Turing machine computation: I analyze a function representing a problem. Actually, this is the situation as I see it:

problem: parameters -> solutions

Solving assumes applying "parameters" to "problem" and computing "solutions".

Verifying assumes applying "solutions" to "problem⁻¹" and computing "parameters", then comparing them to expectations.

invertible function

I don't think you can interpret an invertible function in any of the sets of problems that are at hand. You might interpret it in the set of invertible problems, but I don't really know whether that set is of any interest.

Inverse function interpretation

A real world simplified use:

CharacterFunction: CharKey -> Bitmap

DrawChar: CharacterFunction
OCR: CharacterFunction⁻¹


Other possible uses: code execution versus type checking and code verification, constructing algorithms according to specifications versus deriving specifications from algorithms, deduction versus abduction in logic, all the uses of inverse functions in math, and in my case, concluding that solving is not generally of the same complexity as verifying (though it could be in some cases).

Consider this: you can define a language core such as lambda calculus without inverse functions, but if you extend lambda calculus with inverse functions, you may get the whole of math. Speaking of which, aren't we touching on category theory itself?

Digression: someone said that, in category theory, if you define A: B + B + C, it reduces to A: B + C. I think they are making a mistake there, and that the correct reduction is A: 2B + C. I should think this idea through more thoroughly soon.

The bottom line: I think we are dealing with the very core of math here. You could ask what the use of inverse functions in math is, and you'd get a firm answer. I suspect we are talking about something really big: the advance in science that the Clay Institute expected when they posed the question "is P = NP?".

Maybe I'm wrong about the importance of including inverse functions in lambda calculus, but I'll surely put a big reminder on my schedule to check it.

and in my case, concluding

and in my case, concluding that solving is not generally of the same complexity as verifying

If I understand you right, that is generally known. And also that it doesn't settle the question of P vs NP. The problem is that they are not inverses of each other. Verifying can never have an inverse, since it has a boolean value. And solving in general does not have an inverse, which means that what you are trying to propose as verifying does not exist. But perhaps I am misunderstanding.

Trying to answer all the questions

and in my case, concluding that solving is not generally of the same complexity as verifying

If I understand you right, that is generally known.

The whole commotion is about the fact that this is not known, at least for polynomial-complexity questions. To quote Wikipedia, which is my primary source of information:

The P versus NP problem is a major unsolved problem in computer science. Informally speaking, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer.

___________

The problem is that they are not inverses of each other.

Please read the topmost post again, as I've changed it today due to the conversation flow and the remarks from Matt M and John Shutt. I apologize for the inconvenience.

Now I'm considering the "S" versus "V" sets. They are not inverses either, but functions, being elements of these sets, can have their inverses. Yet on inverting them, their parameters and results swap places, showing us a property about interchanging the "S" and "V" sets in such a relation.
___________

Verifying can never have an inverse, since it has a boolean value.

Before checking expected solutions, the process of verifying solutions is exactly the opposite of the process of solving a problem.

In the case of solving, we enter parameters and do whatever we want with the results: analyze them, print them, pass them to another function, or whatever we think of.

In the case of verifying, we enter the solution and get the related parameters, then we bool-check whether those parameters are equal to our expectations, and we return true or false.

Generally speaking, solving is completely analogous to verifying up to the point where we further process the data (in the case of solving) or compare the data to the expected parameters (in the case of verifying). After that point, they differ in what we do with the data returned from the non-inverted / inverted function.
___________

And solving in general does not have an inverse, which means that what you are trying to propose as verifying does not exist. But perhaps I am misunderstanding.

Oh, I wish I could explain it somehow... Think of solving as deducing some conclusion. And think of verifying as the process of working out where the conclusion came from (abduction). In both cases we have to do something with the returned set afterwards. In the case of solving we just print out the resulting data, while in the case of verifying we have to compare the resulting data with our expectations. The only point where those two processes differ is printing out / comparing with expectations.

Solving should be, at its crux, the inverse of verifying, and verifying should be, at its crux, the inverse of solving.

Let's consider a function:

f(x) = x²


The semantic table of f(x) looks like this:

f: ... -> ...
f:  -2 -> 4
f:  -1 -> 1
f:   0 -> 0
f:   1 -> 1
f:   2 -> 4
f: ... -> ...


The inverse of f is defined like this:

f⁻¹(y) = {x | f(x) = y}


and its semantic table is exactly the opposite of the semantic table for f:

f⁻¹: ... -> ...
f⁻¹:   4 -> -2
f⁻¹:   1 -> -1
f⁻¹:   0 -> 0
f⁻¹:   1 -> 1
f⁻¹:   4 -> 2
f⁻¹: ... -> ...


Note the similarity with the semantic table of f: the left and the right side have just swapped places.

In the case of solving a problem, f(2) = ?, we look up 2 on the left side of the semantic table of f and return 4 from the right side.

In the case of verifying whether f(2) = 4, we look up 4 on the right side of the semantic table of f and return 2 from the left side. Now we compare 2 to the expectation denoted by "f(2)" and return the boolean of 2 == expectation. Now, isn't this verification the same as solving f⁻¹(4) = ? We get 2 as the solution, the same 2 we got in the verification process. We just don't compare it, because this time we are "solving" f⁻¹(4) = ?, but it is the same 2 from "verifying" f(2) = 4?

(I omitted the negative verification / solution -2 to make the explanation simpler)
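If it helps, here is the same idea as a small Python sketch (my own construction, following the description above): build the semantic table of f over a small domain, "solve" by looking up the left side, and "verify" by looking up the right side and comparing:

# Semantic table of f(x) = x² over a small finite domain.
table = [(x, x * x) for x in range(-2, 3)]  # [(-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4)]

def solve(x):
    # f(x) = ?  -- look up x on the left side, return the right side.
    for left, right in table:
        if left == x:
            return right

def verify(x, y):
    # f(x) = y?  -- look up y on the right side, compare the left side to x.
    candidates = {left for left, right in table if right == y}  # f⁻¹(y)
    return x in candidates

print(solve(2))      # 4
print(verify(2, 4))  # True: 2 is in f⁻¹(4) = {-2, 2}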

I hope I managed to make the relation clearer :/

In the case of verifying, we

In the case of verifying, we enter the solution and get the related parameters, then we bool-check whether those parameters are equal to our expectations, and we return true or false.

As Enrique explained, this isn't what most people mean by verification. Verification is checking that an (input, output) pair is correct, not guessing the input that produced an output. So roughly speaking, the verification problem corresponding to drawing a character isn't OCR, but rather a decision problem (yes or no): is this image the correct pixels for an 'A' drawn in a particular place with a particular font?
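In code, the distinction might look like this (a hypothetical sketch; draw_char is a stand-in renderer, not a real library): verification just reruns the forward function on the (input, output) pair and compares, with no inversion anywhere:

def draw_char(char):
    # Stand-in renderer: maps a character to its pixel pattern.
    return {"A": "010101111101101", "B": "110101110101110"}[char]

def verify_drawing(char, image):
    # Decision problem: is `image` the correct rendering of `char`?
    # Note: no inversion -- we run the forward function and compare.
    return draw_char(char) == image

print(verify_drawing("A", "010101111101101"))  # True
print(verify_drawing("B", "010101111101101"))  # False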

But even if we wanted to think about the inversion problem you're studying, you go on to write things like this:

The more assigned elements there are, the more time is needed to build up a set of solutions, and this is what makes a function (or its inverse) a part of the P or NP set. Moreover, functions can be composed, even recursively, as in the example of the factorial function. The ratio between parameters and the number of solutions ranges from constant, through linear and polynomial, up to exponential and even larger complexity measures.

But algorithmic complexity isn't about the size of solution sets. It's about how long it takes to compute individual solutions. And we can have functions that are relatively easy to compute but whose inverses are quite difficult to compute. Think about encryption.
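For a concrete instance of that asymmetry, consider multiplication versus factoring (a standard example, sketched minimally here):

def forward(p, q):
    # Multiplying two numbers: cheap, polynomial in the number of digits.
    return p * q

def invert(n):
    # Recovering a factor by trial division: the work grows exponentially
    # in the number of digits of n -- vastly harder than the forward step.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

n = forward(999983, 1000003)  # instant
print(invert(n))              # (999983, 1000003), after ~10^6 trial divisions

The forward direction is easy; the hardness of the inverse direction is exactly what much of cryptography relies on.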

So stuff like this is all wrong:

(f ∈ P) iff (f⁻¹ ∈ NP)

And once you have all of these wrong definitions in place, you conclude that one of the most important open problems in CS has an obvious solution. This is what I find bananas, but I think it's just that you don't have a lot of experience with academics or this problem. I think you'd be shocked to understand the level of brilliance and ingenuity that people have brought to bear on this problem, working on it for years and decades. If you were to find a short proof of P != NP, it would have at least one genius step. You haven't even done any real work. You're just shuffling definitions around. It's bananas to think that anything interesting in this area is going to be proven with something akin to "follows from definitions 1 & 3 by modus tollens".

Ok, sorry,

My redneck blogs aren't going to help anyone sane on this planet. I won't be in your way anymore; let's just let this thread drop. I'll be quiet from now on and watch how the real work is done by the real academics. My sorry soul isn't worth being understood; it is a waste of time, space and electricity to put a mouse, keyboard and computer in front of me.

Sorry, I'm out for good, as no one wants my mess dangling on their screen.

So long...

sorry...

:/

I'm sorry you took it that way. I'm not an academic either. But I was trying to convey how incredibly hard this problem is. Could you solve it in three pages as a non-academic? Sure. The odds aren't in your favor. And the solution isn't going to be something obvious. Some people working on this can do the kind of reasoning in your blog as easily as you and I breathe. I make stupid mistakes all the time too. Most of us do. The main difference, and why I mentioned inexperience, is that I would know I must have a mistake somewhere if I had your solution to this problem. That's all I was trying to impart. Please don't let me stop you from working on what you find interesting or from posting here.

No, you were right,

I was feeling that I didn't belong here. It was only a matter of time before someone made a remark like yours. It finally opened my eyes.

I suggest sticking around

Just relax. You can ask questions, and pick things up, and we can all benefit from your different perspective, too; a community is made up of different kinds of people. We should all push our limits a bit, and that will always mean we occasionally have to pull ourselves back just a bit — but, don't go overboard in pulling back. That's my suggestion, anyway.

I might always be wrong, but...

I admit I overreacted. I'll try not to pull back too much, and to accept constructive criticism, but it is not going to be easy.

Indeed, maybe my whole point of NP != P is too simple to be true, but what I ask is that, if anyone wants to criticize, they at least read what I'm writing. I apologized a couple of times here for abusing the terms P and NP, and I announced that I now consider the sets "S" and "V" instead. Now I'm apologizing a third time, offering again the same revision from the last two times: here is the revised report.

Yes, it is simple. It is so simple that it is merely an observation: counting the parameter set versus the result set and comparing different combinations. It isn't a proof; it is an observation.

I suspect that the initial problem setup from half a century ago unfortunately misleads the whole army of solvers, though I can equally accept that I might be the one misled instead. I reconstructed the problem setup in my own view, initially by misinterpreting the P and NP sets, and then by redefining my own S and V sets, on which I base the observation. Perhaps a different point of view is just what we need. And perhaps not.

If you don't agree with it, say it. If you find this blog not worth being a part of Lambda the Ultimate, say it. But please, don't say that I'm a banana-man because I thought of something first. That is, at least, not nice. You cannot possibly know what I went through to pave the road to my conclusion, however simple it is, which I bring to You on my knees for Your consideration.

But if you don't like this kind of content, I'll understand. I admit that maybe my vocabulary is not rich enough for a site with this kind of reputation.

Thank You for Your time.

Keep posting

I thought I was giving you constructive criticism. My bananas comment probably wasn't nice and arguably wasn't constructive, but was in response to escalating claims about the importance of this work. And I did start with more subtle hints, like pointing out that this prize has a million dollar bounty. Probably best if I just stick to technical responses. My apologies. Again, I am not trying to dissuade you from posting here. We have had members in the past who posted voluminously to every thread in a way that I found abusive, but you're not in that camp.

I think you may have added this edit in response to me:

As any function (or its inverse) may assign multiple elements to the same parameters, function solving complexity is directly dependent on the number of assigned elements.

I might be willing to concede that function solving complexity is in some sense proportional to the number of outputs you're trying to compute, but it's not a simple linear function of the number of outputs, because each output takes a variable amount of time to compute. Algorithmic complexity is not about number of inputs / outputs.

And you also haven't addressed the concern that verification isn't inversion.

I think I will, thank you

but was in response to escalating claims about the importance of this work.

Sorry, I got carried away by the moment. I'll try to back off a bit.

My information about computational complexity comes from the Wikipedia article on big O notation.

I might be willing to concede that function solving complexity is in some sense proportional to the number of outputs you're trying to compute, but it's not a simple linear function of the number of outputs, because each output takes a variable amount of time to compute. Algorithmic complexity is not about number of inputs / outputs.

That is right; it is also about the number of primitive computations (e.g. adding or multiplying), about possibly rejected results, and about the number of recursion steps. What I said is that it depends on the number of results; I didn't say that that is the only parameter it depends on. Take, for example, a function:

f(x) = 1 / g(x) + iff (x > 0, f(x - 1), 0)

I break all the loops into discrete, separate steps, where each step takes constant worst-case time. Having broken it into individual recursion steps, I don't measure the number of recursions. Also, if a function returns sum-type elements, I don't measure the number of returned elements. Also, I break the call to the g(x) function in the above example into a separate step. Those quantitative measures are left to the reader to observe. Because of that, I don't offer a concrete summary O(xyz) function, and I take each broken-out step as having constant worst-case time (correct me if I'm wrong).
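If it helps, here is roughly how I picture the counting, as a Python sketch (g here has a stand-in body, since I leave g abstract in the report):

steps = 0

def g(x):
    global steps
    steps += 1    # the call to g counts as one broken-out step
    return x + 1  # stand-in body; the report leaves g abstract

def f(x):
    global steps
    steps += 1    # each recursion step counts as one broken-out step
    return 1 / g(x) + (f(x - 1) if x > 0 else 0)

f(5)
print(steps)  # 12: six f-steps (x = 5..0) plus six g-steps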

And you also haven't addressed the concern that verification isn't inversion

If solving is denoted as:

f(x) = ?

where "?" is a solution, then verifying is denoted as:

f(?) = y

where "?" is compared to the expected parameter we are verifying.

In the above case, we calculate y, and in the below case we verify x. The math definition (as I remember it, and I rechecked it) says that when these two variables switch places in the "function arrow", that is how the inverse function is created (wiki). To state the obvious: invert the inverted function, and you get back the original function. So I conclude:

solve: f
verify: f⁻¹
-------------
solve: verify⁻¹

I hope I'm not making any mistakes here.

Gödel's Revenge

I just want to give a concrete example to help understand the problem.

Consider I have a set of axioms, and I want to prove two statements equivalent. To do this I have to find a sequence of applications of the axioms to one of the statements that transforms it into the other. This is obviously a search with a fanout of N, where N is the number of axioms. We can try each axiom in turn, which takes N attempts, and for each of those there are N possible next steps. So searching to a depth of M takes N^M steps. Here 'N' is the size of the set of axioms, which is our constant, and M is the search depth, which depends on the inputs (how 'far' apart the two statements are). So finding a solution is obviously NP.

To verify the solution we just take the list of axioms from the proof and apply them one after another to one of the statements, and in the end it either matches the other statement or not. So to verify a proof takes exactly M steps, where M is the search depth at which the proof was found. This is obviously P.
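Here is the contrast as a toy Python sketch (a made-up string-rewriting system stands in for the axioms; this is an illustration, not a real prover):

from itertools import product

# Toy "axioms": rewrites applied to a whole statement (a string).
AXIOMS = [lambda s: s + "a", lambda s: s + "b", lambda s: s[::-1]]

def find_proof(start, goal, max_depth):
    # Search: try every sequence of axioms up to max_depth -- up to N^M candidates.
    for depth in range(max_depth + 1):
        for seq in product(range(len(AXIOMS)), repeat=depth):
            s = start
            for i in seq:
                s = AXIOMS[i](s)
            if s == goal:
                return seq
    return None

def verify_proof(start, goal, seq):
    # Verification: replay the M recorded steps once -- linear in M.
    s = start
    for i in seq:
        s = AXIOMS[i](s)
    return s == goal

proof = find_proof("x", "bax", 3)              # searches up to 3^3 sequences
print(proof, verify_proof("x", "bax", proof))  # (0, 1, 2) True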

So if P = NP, then it must be possible to choose the correct axiom at each stage of the proof search, and we must be able to choose it the first time, without trying any of the other options, because even trying one extra axiom at each search depth would give 2^M, which is still exponential.

This suggests that if P = NP we need an oracle that can tell us the correct axiom to choose at each search depth, but that would require us to already know the proof.

Can I claim the million dollars?

Probably not, because the argument is that I may be able to transform the problem into a different problem where I can find the proof in time that is some polynomial function of M.

If we look at SAT solvers, we find we can short-cut the proof, but only for certain classes of problems. If you look at the tough problems, like factoring large numbers, none of the short-cuts work, and the simple exhaustive SAT solvers are faster.

So this is where the difficulty is: intuitively we can point at examples where there is no known short-cut, and yet more and more short-cuts keep being found that work for specific classes of problems.

Personally I think P is not NP, but there is a subclass of NP problems, let's call it NP', where there is a short-cut, and although the set of problems in NP that are in NP' may increase over time, there will always be some problems for which no short-cut exists. However, proving the non-existence of a short-cut for any particular NP problem seems difficult if not impossible. Perhaps this is Gödel's revenge? Maybe a proof that P is not NP is not possible in a consistent system?

relative complexity

You mention oracles. Complexity classes may be relativized to an oracle; that is, if X is the class of problems solvable with such-and-such resources (usually time or space) without an oracle, Xᴬ is the class of problems solvable with those resources given oracle A. Lots of relations in complexity theory (so I'm given to understand; not my expertise) relativize, that is, they would still be true for those complexity classes relative to an arbitrary oracle. What's weird is, the question of P vs NP does not relativize. There are oracles A relative to which Pᴬ = NPᴬ, and oracles B relative to which Pᴮ ≠ NPᴮ. It's like the P=NP question is balanced on some sort of cusp.

I have btw also wondered, from time to time, whether it is possible that the P vs NP question cannot be proven either way. Better yet, might it be possible to *prove* that the question cannot be resolved by proof? If P≠NP, I see no particular reason why there would have to exist a proof of the fact. If P=NP, then there exists a deterministic Turing machine that can always solve an NP-complete problem in polynomial time, but it's not obvious to me that it would have to be possible to *prove* that this machine will always terminate in polynomial time. So, if one supposes that there is no proof resolving the P vs NP question, this would not paradoxically resolve the question. Gödel would not object (so to speak) to the existence of a proof that the question cannot be resolved by proof.

Criticism

I have been trying to make a comment similar to yours since this thread started and I thought that your comment (comment series) on the meta and social aspects was quite good.

You are interested in the

You are interested in the topics, so you belong. Advancing humanity's knowledge is a higher bar—it takes lots of work. And PhDs are still the best way to learn how to do that work.

You seem to think that you aren't "smart enough" or something like that. As if this were about being "smart"—it's not. If you like, "smart" is about how good you are at running 100m. Research is about a 1000 km march. Most people think the two are strongly related. That's a widespread myth that must be debunked—I used to believe it too.

Let me quote from Lindsey Kuper http://composition.al/blog/2015/04/30/say-experts-instead-of-smart-people/ :

People who come up with stuff like CRDTs understand hard things, sure, but not because they are any smarter than those who don’t. They understand hard things because, as researchers, trying to understand hard things is their full-time job. In fact, such a job is probably easier to get than, say, Peter Bourgon’s job. If you can live on a grad-student stipend and face a steady stream of rejections, you too can probably have “trying to understand hard things” be your full-time job for a while.

Some of the people who do that — by no means the “smartest” ones — end up with a Ph.D. via a process that has a lot to do with luck and stubbornness and very little to do with how smart one is. (About half of the people I started grad school with had dropped out by the time I defended my Ph.D., and I can guarantee you that those who stayed are no smarter than those who left.) If they continue to work full-time on trying to understand hard things, then eventually, sooner or later, they’ll probably come to understand some of those things. In return, they give up, at least for a while, on the chance to pursue deep understanding of any number of other things. They may also give up, at least for a while, on the chance to build things like Peter does.

Tx for support,

Tx for support, I needed it. I don't consider myself all that smart, but the real problem is knowing enough about the things I want to talk about. I should probably be more eager to investigate material on my own before talking about it.

I should probably be more

I should probably be more eager to investigate material on my own before talking about it.

"Investigating what's known" is lots of what beginning researchers need to do, until they become experts themselves on their topic.

The Very Math?

Lambda Calculus + Inverse functions = the whole of Math??

This whole thread is bananas. You should do a TED talk.

Bananas?

Sorry, I thought you might be interested in S != V. I didn't know it would flare up into such unimaginable propositions.

Heh sorry

Sorry, I'll try to write something more helpful in response to your longer post.

Problem definition or algorithm that solves the problem?

The thing is that any problem, be it verifying or solving, can be reduced to accepting a language. Try.

By "accepting a language", do you mean accepting a problem definition or accepting an algorithm that solves the problem? If it is accepting an algorithm, then I see what you mean: the algorithm simply wouldn't be accepted if it had greater than polynomial complexity. But I'm using a completely different approach; see the previous post.

Accepting a language

I was using "accepting a language" in the way that Stephen Cook defines it in the paper you link above. But Cook only uses (as far as I have read) "language accepted", so perhaps I'm abusing the terminology.

Stephen Cook

I interpreted his terminology as: accepting an expression of a (programming) language. Later he constrains the expressions to have some properties of interest (like polynomial complexity).

I think there is also the

I think there is also the problem that a computation may not halt, if we want to consider it as a mere function, let alone an invertible one.

This problem is outside my observation technique, as I don't use a Turing machine; rather, I use the theoretical model of lambda calculus. There are two possibilities: either we are dealing with (1) non-valid terms, or we are dealing with (2) infinite sets. In the case of (1), a compiler should report the error. In the case of (2), obviously we can't ever use all the results (unless somehow generalized, which opens a new question); we can enumerate only the elements we want to use (some kind of lazy evaluation).

This definition is closely

This definition is closely related to the correct definition, so I suspect there's probably still a hole in your argument.

NP problems are roughly decision problems where you can give a *certificate* of a positive answer, and where verifying the certificate takes polynomial time. That definition (once stated formally, which I'm not doing) is equivalent to the one using nondeterministic Turing machines. In many but not all cases, certificates are what people would intuitively call the answer.

For instance, take the NP-complete problem 3SAT, that is, "is there an assignment of variables that makes formula phi true?" (for a given phi). An assignment that actually makes phi true is a certificate that the answer is yes. Verifying that a certificate is correct takes time linear in the size of phi. So 3SAT is in NP. (Proving NP-completeness is harder.) Equivalently, this problem can be decided in linear time by a nondeterministic Turing machine (NTM) that simply nondeterministically guesses the assignment.
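Concretely, a certificate verifier for SAT fits in a few lines of Python (a sketch; clauses are written in DIMACS-style signed-integer form, which is my choice of encoding):

def verify_sat(clauses, assignment):
    # Check a certificate in time linear in the size of the formula.
    # clauses: list of clauses, each a list of nonzero ints (3 means x3,
    # -3 means NOT x3). assignment: dict mapping variable -> bool.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
phi = [[1, -2], [2, 3], [-1, -3]]
cert = {1: True, 2: True, 3: False}
print(verify_sat(phi, cert))  # True -- though *finding* cert may need search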

If you have a problem that is NP in the NTM sense, you can construct a certificate by recording the decisions made to get a positive answer. A verifier for this certificate simply runs the NTM deterministically, following the decisions recorded in the certificate, and checks whether the machine accepts in the end.

Retract, new try

Sorry, just looked further—your actual "verification" looks pretty different. It seems you are now taking the graph of a function f and studying how long it takes to do lookups in this graph (in one direction or the other) using a linear search.

It seems you're either omitting constant factors, or the linear search you use always examines all elements.

I haven't checked all the details, but the memo might be correct about these linear-lookup problems.
But you should never use linear lookup: there are more efficient algorithms for lookup, such as indexes/hash tables.
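For instance (a minimal sketch), building an inverse index once turns each lookup into an expected constant-time dictionary access instead of a full scan:

from collections import defaultdict

table = [(x, x * x) for x in range(-1000, 1001)]  # graph of f(x) = x²

def preimage_linear(y):
    # Linear search: examines all 2001 entries on every query.
    return {x for x, fx in table if fx == y}

index = defaultdict(set)  # inverse index, built once in one pass
for x, fx in table:
    index[fx].add(x)

def preimage_indexed(y):
    # Each query is now a single O(1) dictionary lookup.
    return index[y]

print(preimage_linear(49), preimage_indexed(49))  # {-7, 7} {-7, 7}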

Updated version

Because the interpretation of the P and NP sets in the above report differs from the official academic definition, I updated the report and reposted it here:
Finding Solutions vs. Verifying Solutions

Basically, what was considered the "P" and "NP" sets has been renamed to the "S" and "V" sets, with meanings detached from the former sets. I apologize for creating confusion and for abusing the notions of the P and NP sets.
