Introduction to computability logic

Introduction to computability logic (pre-print)
... The intuitive notion of (interactive) computational problems is formalized as a certain new, procedural-rule-free sort of games (called static games) between the machine and the environment, and computability is understood as existence of an interactive Turing machine that wins the game against any possible environment.
To all the lovers of games (and Turing machines :)

To claim relevance to PLT: computability logic can be seen as an alternative to linear logic (both being resource-aware). Also, interactive programming can be seen as a game between a programmer and the PL environment...

Actually, I enjoyed the first part of the paper more (before it got to Turing machines).

Postfunctionalism: Computability is Winnability

An extension of the Church-Turing thesis for interactive computability (section 18):
The concept of winnability is an adequate formal counterpart of our intuitive notion of computability, i.e. algorithmic solvability of (interactive) computational problems. In other words, what can be called interactive algorithms are nothing but our play machines. Whatever we would perceive as an interactive algorithm, can eventually be formalized as a play machine, and vice versa: every play machine does nothing but following some interactive algorithm.
And yes, Turing himself was aware of possible extensions, including choice and oracle machines.

I have yet to understand the relationship between interaction and concurrency. One of the distinctions is that the environment in the former is not limited to computable behavior; it may be "a capricious user or the blind forces of nature".

Proofs as programs

Another teaser, having seen an interest in the Curry-Howard isomorphism:

While provability of a formula ∀x∃yP(x, y) in a sound theory merely signifies that for every n there is m
with P(n,m), proving the constructive version ∩x∪yP(x, y) of that formula would imply something much
stronger: it would mean that, for every n, an m with P(n,m) not only just exists, but can be algorithmically
found.
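For what it's worth, the computational content of that contrast can be sketched in Haskell (a toy example of my own, not from the paper): a proof of the choice version corresponds to a program that actually finds the witness m for each n.

```haskell
-- Toy illustration (my own example, not from the paper): take
-- P(n, m) := "m is the least prime greater than n".  A constructive
-- proof of the choice formula corresponds to a program computing m.
leastPrimeAbove :: Integer -> Integer
leastPrimeAbove n = head [m | m <- [n + 1 ..], isPrime m]
  where
    isPrime k = k > 1 && all (\d -> k `mod` d /= 0) [2 .. k - 1]

main :: IO ()
main = print (leastPrimeAbove 10)  -- prints 11
```

A classical proof of the ∀∃ version guarantees only that such an m exists; it need not hand us any such function.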

I was disappointed by the inability of MSIE to display the square union and intersection symbols, forcing me to use rounded versions of them.

"Is Game Semantics Necessary?"

Compare to a paper by Andreas Blass, who studies game semantics in the context of linear logic and (therefore?) limits interactions to being "turn-based" (only one side has legal moves at each position: the client/server paradigm). The paper discusses propositions as types, "interactive" data types, and the logical duality of questions and answers. In comparison, computability logic looks richer.

PS: I wonder whether UT2010 will be based on some formal logic? :) On a more serious note, I expect resource-aware logics to become very important for component-based development (even more than they are for type systems in general).

Game Logics

Andris Birkmanis: Compare to a paper by Andreas Blass, who studies game semantics in a context of linear logic and (therefore?) limits interactions to "turn-based" (only one side has legal moves at each position - client/server paradigm).

I'd be interested in knowing whether it really is "therefore" or not, as I'm keenly interested in logics that can cope with uncertainty in multi-agent environments. So far, those sound like modal logics, but if I'm in error on that point, I'd be grateful for knowing it.

Classical quantifiers are blind

I'd be interested in knowing whether it really is "therefore" or not, as I'm keenly interested in logics that can cope with uncertainty in multi-agent environments.
I am still learning, and I am not a full-time student :( What's probably relevant: Japaridze treats ∃ and ∀ as blind quantifiers, as opposed to the other, choice quantifiers:
The meaning of ∀xA(x) is similar to that of ∩xA(x), with the difference that the particular value of x that the user "selects" is invisible to the machine, so that it has to play blindly in a way that guarantees success no matter what that value is. This way, ∀ and ∃ produce games with imperfect information.
It may be worth noting that early papers by Japaridze have "modal logic" in their titles.

You may want to check the latest paper on computability logic for more technical results.

What bothers me is how to introduce multiple agents. It may be obvious, but it eludes me. Then again, CL is geared towards global duality/symmetry, which is (IMHO) not well suited for multi-agent environments, but that's only an intuition...

UT2010 and Formal Logic

Andris Birkmanis: PS: I wonder, whether UT2010 will be based on some formal logic? :)

If you've been following Tim's postings here, I'd say it doesn't sound like we'll have to wait for 2010!

Planner, schemer, shooter

If you've been following Tim's postings here, I'd say it doesn't sound like we'll have to wait for 2010!

Well, the only recent post by him I read was about an Icon(?)-inspired language.

And from the official PR I am getting the impression that Unreal 3 is more about "physics" and graphics than about logic(s). I don't expect games to prove (non-trivial) theorems in real time (yet). On the other hand, I have no clue: AIs in current games must do some planning; does that count as proving theorems?

Re: Planner, Schemer, Shooter

Andris Birkmanis: Well, the only recent post by him I read was about Icon(?)-inspired language.

Right, but he's written about language design a few other times; googling "site:lambda-the-ultimate.org tim sweeney" is interesting. He also posts to the Haskell list from time to time.

Andris: And from the official PR I am getting the impression that Unreal 3 is more about "physics" and graphics than about logic(s).

Yes, but I don't expect it to be eight years from the time the first title based on Unreal Tech 3 ships to the time Tim uses his new language.

Andris: I don't expect games to prove (non-trivial) theorems in real time (yet). On the other hand, I have no clue: AIs in current games must do some planning; does that count as proving theorems?

In some sense. But I don't think Tim's interested in writing theorem provers, although his new language might make a good meta-language for doing that, just as ML or Haskell do. But also like ML and Haskell, the range of applicability of Tim's language should be quite a bit broader. What's been fascinating to me lately here on LtU are the posts about type-safe network interaction, type-based information flow security, type-safe module upgrade and rebinding, and type-based proofs of the absence of deadlocks and race conditions: all things guaranteed to make a game developer's ears perk up, especially if that developer's titles are involved in professional gaming leagues (yes, they exist) and/or the developer partners with other sponsors of events where cheating/hacking leads to people losing real money. I have to imagine that Tim's paying a lot of attention to these things as well, but obviously I can't speak for him.

Semantically well justified resource-awareness

Andris Birkmanis wrote: To claim relevance to PLT: computability logic can be seen as an alternative to linear logic (both being resource-aware).

And apparently a very reasonable alternative. The discussion at "Game Semantics or Linear Logic?" makes perfect sense to me.

Hilbert Spaces to Dilbert Spaces

I'm only half-following this conversation, but this reminds me of Harry Mairson's work:

Warning: I think "made simple" is a euphemism for "freaking hard."

Turing machines as their essential, ideal archetype

A quote from the paper:
just as the classical physicist (or freshman physics student) views the world via a model of massless beams and frictionless pulleys, and as machine architects have the Turing machines as their essential, ideal archetype
Hurray, the Turing machine as an ideal, again.

PS: the paper is quite interesting, by the way. It reminds me of Interaction Combinators by Lafont (probably because both are inspired by Girard's GoI). ON EDIT: by "the paper" here I meant "Hilbert Spaces to Dilbert Spaces", not the computability logic paper.

You don't change the devil...

...the devil changes you!

Thanks for posting this. I've read about a quarter of it and it's very interesting (and readable).

Part of the reason I'm interested in this is that I need a logic like linear logic for my work, but linear logic itself is only sufficient and not necessary for a certain condition, so I'm on the lookout for something more general. Computability logic is not it, I think, but I'm interested to see how it is different from linear logic. (I don't see the essential difference so far, but I haven't gotten to the section on modalities yet.)

I also appreciate the author's remarks on syntax vs. semantics.

I would be interested to see a categorical semantics for the proof theory of the propositional fragment of the universal logic.

Bump - a swelling of tissue usually resulting from a blow

Thanks for posting this.
You are very welcome. I am almost as proud as if I had written the paper :)

Did it turn out well after reading the remaining three quarters?

I would very much like to see an extension of CL to multi-party interactions... Got to figure that out myself, I guess...

PS: sorry that I bumped this thread; I didn't get to read your comment until now. Looking for continuation-based semantics in UML state machines bogged me down :)

Depends which linear logic

I'm interested to see how it is different from linear logic
Now that I have returned to learning about logics, I have found that there are many views on what linear logic is.

As a sequent calculus, I think CL is somewhat similar to Guglielmi's calculus of structures because of deep inference (and yes, that was mentioned in the OP, but I came full circle and found it out the hard way).

One of the differences between CL and CoS, though, seems to be that CL looks only at "surface occurrences" of choice connectives, while CoS (I am not sure about that) takes "deep" more literally.

This somehow reminds me of contexts for delimited continuations...

Homework

I tried to apply (half-seriously) CL to the formalization of some kinds of state machines (presented as regular expressions here). What I was missing was a fixpoint operator. The branching operators of CL enable infinite runs, but they seem not precise enough: I was unable to come up with a (finite) expression that defines a game with structure (xy)*, where x is a move by one agent and y is a move by the other (branching allows one to define (x|y)* quite easily). That's quite frustrating, as the game is most definitely strict, and therefore static, and therefore should be expressible in CL.
So the question is: am I just blind to some obvious solution, or did I miss the whole point of the paper?

The indefinite expression would be something like:

⊥ ∪ (T ∩ ( ⊥ ∪ (T ∩ ( ⊥ ∪ (T ∩ ( ...


Where both x and y would be encoded by choosing the right alternative (while the left would lead to a loss), informally:

surrender:⊥ ∪ x:(surrender:T ∩ y:( surrender:⊥ ∪ x:(surrender:T ∩ y:(
surrender:⊥ ∪ x:(surrender:T ∩ y:( ...


Argh, the inability of browsers to display most of the Unicode 0x22** block drives me crazy.
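For what it's worth, here is how I would model the indefinite expression above in Haskell (a toy encoding of my own, not CL's formal machinery), using laziness in place of the missing fixpoint operator:

```haskell
-- Toy encoding (mine, not CL's formal machinery) of the indefinite
-- expression: infinitely nested binary choices, tied into a cycle by
-- Haskell's laziness in place of an explicit fixpoint operator.
data Game = Top | Bot | Choice Game Game

-- game = _|_ u (T n (_|_ u (T n ...)))
game :: Game
game = Choice Bot (Choice Top game)

-- Play a finite run: True picks the right branch (the "real" move x
-- or y), False picks the left branch (surrender).
play :: [Bool] -> Game -> Maybe Game
play []       g            = Just g
play (b : bs) (Choice l r) = play bs (if b then r else l)
play _        _            = Nothing  -- a move in a finished game is illegal

-- What does a finite run leave us with?
result :: Maybe Game -> String
result (Just Top)          = "T"
result (Just Bot)          = "_|_"
result (Just (Choice _ _)) = "still open"
result Nothing             = "illegal"

main :: IO ()
main = mapM_ (putStrLn . result . flip play game)
             [[False], [True, False], [True, True]]
-- prints: _|_, then T, then still open
```

Of course the knot-tying here happens in the meta-language; the original question, whether CL's own operators can express this game finitely, remains open.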

Substitution preserves finiteness of runs

After looking at the table on page 34, I guess the problem is with the "finite-depth" row: the only operators that do not preserve finiteness are the branching ones. Most importantly, substitution does preserve finiteness.

Can we replace substitution with lambda and then express the usual Y operator? What properties would be lost? I need to check this out...
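The lambda side of that question is standard Haskell folklore (nothing CL-specific): the untyped Y combinator can be given a type by hiding the self-application behind a newtype.

```haskell
-- Standard Haskell folklore (nothing CL-specific): self-application can
-- be typed by wrapping the negative occurrence in a newtype, recovering
-- the untyped Y combinator.
newtype Rec a = Rec { unRec :: Rec a -> a }

y :: (a -> a) -> a
y f = (\x -> f (unRec x x)) (Rec (\x -> f (unRec x x)))

-- Sanity check: factorial through y, with no explicit recursion.
fact :: Integer -> Integer
fact = y (\rec n -> if n == 0 then 1 else n * rec (n - 1))

main :: IO ()
main = print (fact 5)  -- prints 120
```

The price paid is exactly the recursive (non-well-founded) type Rec, which is the kind of property one might expect CL's finiteness table to rule out.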

Can content subsume structure?

From the very beginning I have been trying to understand the purpose of structure (Lr, the legal runs) in the definition of a game. It looks like everything it can do can also be expressed by content (Wn): this makes Wn more complex, of course, but yields one function instead of two. More importantly, I see no clear separation between what should be expressed by Lr and what by Wn.

And the relation seems quite simple: whenever a player makes some "illegal" move, this can be expressed by his opponent "winning" all the runs extending the current one. Reminds me of the "always" operator of temporal logic...

Halting metaproblem

And the relation seems quite simple
Let's make it more explicit. Define LWn as:
LWn run = case Lr run of {true -> Wn run; false -> false}

or, using the usual short-circuiting && (and further departing from the Epigramic spirit):
LWn run = Lr run && Wn run
The question is whether a game can be defined (for the purposes of CL) not as a pair (Lr, Wn), but as a single LWn. In programming terms, what is the signature, and what are the implementation details? Due to the implications of the halting problem, I believe that (Lr, Wn) contains more information than LWn, but is this information used by CL?

I see two possibilities:

1. All theorems of CL can be redefined to use only LWn, without need for Lr.
2. Some theorem needs Lr.
Let's focus on the second possibility. Lr has two nice properties, leading to a third:
!(Lr run) => ∀tail: !(Lr (run # tail))
!(Lr run) => !(LWn run)

!(Lr run) => ∀tail: !(LWn (run # tail))
If it is crucial to know the value of (Lr run) as opposed to (LWn run), then it means we are interested in the third property, that is, in the loss of all possible continuations of the run; deducing this from LWn is (probably) equivalent to the halting problem.

Before going through all the theorems to verify whether any of them needs the third property, I speculate further that it would be more natural to expect them to need another property instead (Dr, "decided run"):

(Dr run) <=> ∀tail: (LWn (run # tail))

PS: Yes, I am trying to use LtU for the purpose expressed in its title :-)

PPS: Lr is a set of runs, but I use functional notation, pretending it is a predicate (a meta-predicate, if you wish, as it lives outside of CL and returns true/false, not T/⊥).
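To make the proposal concrete, here is a toy Haskell model (the "alternation game" and all names are my own, not from the paper): Lr and Wn as meta-predicates on runs, with LWn their conjunction.

```haskell
-- Toy model of the proposal (the "alternation game" is hypothetical,
-- chosen only to make Lr, Wn and LWn concrete meta-predicates on runs).
type Move = String
type Run  = [Move]

-- Lr: legal runs strictly alternate environment ("x") and machine ("y")
-- moves, starting with "x".
lr :: Run -> Bool
lr run = and (zipWith (==) run (cycle ["x", "y"]))

-- Wn: the machine is (currently) winning when it has answered every "x",
-- i.e. it made the last move (or no moves have been made yet).
wn :: Run -> Bool
wn run = even (length run)

-- LWn: a run counts as a win only if it is also legal.
lwn :: Run -> Bool
lwn run = lr run && wn run

main :: IO ()
main = print (map lwn [[], ["x"], ["x", "y"], ["y", "x"]])
-- prints [True,False,True,False]
```

Note the last case: wn alone would accept ["y","x"], so lwn really does use both components; the open question above is whether CL's theorems ever need to recover Lr from it.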

Illegal moves are wins for the opponent

As I understand it, if a player makes an illegal move, that counts as a win for its opponent. Similarly, if a player fails or refuses to move, that also counts as a win for the opponent.

That suggests that a winner can be determined for (almost) every run: Whichever player made the last legal move has won. So, Wn can be derived from Lr.

(Problems: Who wins the empty run? Are there runs with no winner?)
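The derivation you suggest can be sketched as follows (a toy encoding of mine, not the paper's definitions), including the empty-run corner case:

```haskell
-- Sketch of the derivation above (a toy encoding of mine): given only
-- the legality predicate lr, the winner of a run is whoever made the
-- last move of its longest legal prefix.
data Player = Machine | Environment deriving (Eq, Show)

type Run = [(Player, String)]  -- moves labeled with their authors

-- Hypothetical lr: players must strictly alternate, Environment first.
lr :: Run -> Bool
lr run = and (zipWith (==) (map fst run) (cycle [Environment, Machine]))

-- Longest legal prefix of a run (the empty run is always legal here).
legalPrefix :: Run -> Run
legalPrefix run = last (filter lr (inits run))
  where inits xs = map (`take` xs) [0 .. length xs]

-- Derived winner: the author of the last legal move; the empty run has
-- no winner, which is exactly the corner case pointed out above.
winner :: Run -> Maybe Player
winner run = case reverse (legalPrefix run) of
  ((p, _) : _) -> Just p
  []           -> Nothing

main :: IO ()
main = do
  print (winner [(Environment, "x"), (Machine, "y"), (Machine, "z")])
  print (winner ([] :: Run))
-- prints Just Machine, then Nothing
```

In the example run, the Machine's second move breaks alternation, so the derived winner is the author of the last move of the legal prefix.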

Welcome to the game :)

Thanks for participating!

As I understand it, if a player makes an illegal move, that counts as a win for its opponent.

Yes, and if we are just interested in who won the run (as opposed to the "hope" of ever winning "ultimately"), we can combine Lr and Wn into LWn, as I proposed.

Similarly, if a player fails or refuses to move, that also counts as a win for the opponent.

There is a problem here if we are talking about real time: when does the opponent win? It looks like immediately, on his last move. It may look unconventional to talk about victory changing hands on (potentially) every move, but this is the only way I see. This victory is encoded in Wn. And when we see that in all extensions of the given run the winner always remains the same (think temporal logic), this is victory in the conventional sense. This victory is partially encoded in Lr; partially, because Lr may say the run is legal while there is still no hope for the player who made the last move (Lr works the other way: if there is any hope, Lr will accept the run). Intuitively, this means that Lr is not precise enough and can be made arbitrarily imprecise (e.g., accepting all possible runs); that's the reason why I think it is not needed for the theory (though it might improve usability :) ).

That suggests that a winner can be determined for (almost) every run: Whichever player made the last legal move has won. So, Wn can be derived from Lr.

Not exactly: in free games a player might need to make multiple moves before reaching (temporary) victory.

(Problems: Who wins the empty run? Are there runs with no winner?)

Do you mean in the modified theory (without Wn)?

I find it interesting that making ∩ and ∪ (the quantifiers, the capital square versions of these symbols :( ) typed allows one to see (some uses of) ∩ and ∪ (the binary operators, the small square versions of these symbols :( ) as just syntactic sugar.

For example,

A(3)∩A(8)
is sugar for
∩x:{3, 8}A(x)
Of course, to express something like
A(3)∩B(8)
one needs a pretty powerful type, and higher-order substitution:
∩x:{A(3), B(8)}x
Am I just asking for trouble... (it might be ironic to remember Curry and Howard in this regard)

Ah, and yes, the usual CL form:

∩xA(x)
is sugar for
∩x:Nat A(x)
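Here is a small Haskell sketch of that desugaring (my own toy encoding; subgames are reduced to just their names), showing the binary choice as the Bool-typed instance of the quantifier:

```haskell
-- Toy sketch (my own encoding; subgames are reduced to their names) of
-- the desugaring above: the typed choice quantifier is a function from
-- the index type, and the binary choice is its Bool-typed instance.
type Game = String  -- stub: a subgame is just its name here

-- chq a: the environment chooses some x :: t, and play continues as a x.
chq :: (t -> Game) -> t -> Game
chq a x = a x

-- A(3) ∩ A(8) as a choice over the two-element type Bool:
binary :: Game -> Game -> Bool -> Game
binary l r b = chq (\c -> if c then r else l) b

-- The usual quantifier as its Nat- (here Integer-) typed instance:
overNat :: Integer -> Game
overNat = chq (\n -> "A(" ++ show n ++ ")")

main :: IO ()
main = do
  putStrLn (binary "A(3)" "A(8)" False)  -- prints A(3)
  putStrLn (overNat 5)                   -- prints A(5)
```

The heterogeneous case A(3)∩B(8) is where this stub breaks down: the index would have to range over games themselves, which is exactly the higher-order substitution worried about above.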