Can I express variable occurrence ranges in logic?

I wonder whether there is some way to express variable occurrence ranges in logic. I have the following query:

exists X,Y(p(X) & q(X,Y) & r(Y))

I can use quantifiers and mini-scoping to express where a variable first occurs, when I view the conjunction as right associative (xfy):

exists X(p(X) & exists Y(q(X,Y) & r(Y))).

In the above I see that the use of Y starts after p(X). Similarly I can use quantifiers and mini-scoping to express where a variable last occurs, when I view the conjunction as left associative (yfx):

exists Y(exists X(p(X) & q(X,Y)) & r(Y)).

Now in the above I see that the variable X is last used in q(X,Y) and then not anymore.

But what I would like to have is a formalism that allows expressing both pieces of information, the start and the end of a variable's use. Graphically it would express:

+------- X -------+
p(X)   &    q(X,Y)   &    r(Y)
            +------- Y -------+

Is this possible in logic? A logic that would come with semantics, substitution rules, etc.? Somehow I have the feeling it would violate the Frege principle (compositionality). On the other hand, approaches such as a Continuation Passing Style formulation might solve the problem.

Best Regards


Cross-posted to a lot of places

Math Overflow, Math StackExchange, and it was also posted to cstheory but has since been removed.

The idea is not to compute the ranges


The idea is not to compute the ranges given a formula in ordinary logic, but to have a logic that allows expressing the ranges in its own language, including a semantics, substitution rule, etc. for this new logic. Computing the ranges is easy; I have already given an algorithm in my question itself: right associative mini-scoping gives the start, and left associative mini-scoping gives the end.
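As a sketch of that easy computation, here is a minimal Python version (the representation of literals as `(predicate, argument-list)` tuples and the name `occurrence_ranges` are my own, for illustration): the start of a variable's range is its first occurrence in the ordered conjunction, the end is its last.

```python
def occurrence_ranges(literals):
    """Map each variable to its (first, last) literal index."""
    ranges = {}
    for i, (_, args) in enumerate(literals):
        for v in args:
            # extend the range on every later occurrence
            ranges[v] = (ranges[v][0], i) if v in ranges else (i, i)
    return ranges

# The running example: p(X) & q(X,Y) & r(Y)
query = [("p", ["X"]), ("q", ["X", "Y"]), ("r", ["Y"])]
print(occurrence_ranges(query))  # {'X': (0, 1), 'Y': (1, 2)}
```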

This logic would for example allow manipulations such as renaming a variable range, thus effectively saving one variable name. This is not possible in normal logic, as one can easily figure out: p(X), q(X,Y), r(Y) is not the same as p(X), q(X,X), r(X). But why not use X in place of Y, once X is not used anymore anyway?
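The bookkeeping side of that renaming idea can be sketched in the style of linear-scan register allocation (all names here are my own; this is not the sought calculus, just an illustration): a name is reused only when the range it previously labeled has already ended, so overlapping ranges, like X and Y in the running example, keep distinct names.

```python
def compact_names(literals):
    """Rename variables, reusing a name once its range has ended."""
    # first/last occurrence index (the "range") of each variable
    ranges = {}
    for i, (_, args) in enumerate(literals):
        for v in args:
            ranges[v] = (ranges[v][0], i) if v in ranges else (i, i)
    free, active, rename, fresh = [], [], {}, 0
    for v in sorted(ranges, key=lambda u: ranges[u][0]):
        start, end = ranges[v]
        for entry in active[:]:          # expire ranges that ended
            if entry[0] < start:
                active.remove(entry)
                free.append(entry[1])
        if free:
            name = free.pop()            # reuse an expired name
        else:
            name = f"V{fresh}"
            fresh += 1
        rename[v] = name
        active.append((end, name))
    return [(p, [rename[a] for a in args]) for p, args in literals]

# Overlapping ranges keep two names:
print(compact_names([("p", ["X"]), ("q", ["X", "Y"]), ("r", ["Y"])]))
# Disjoint ranges share one name:
print(compact_names([("p", ["X"]), ("q", ["X"]), ("r", ["Y"]), ("s", ["Y"])]))
```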

I am interested in whether somebody has already come up with a corresponding calculus. The calculus would govern the manipulations, so that moves that would change the meaning, for example through variable clashes, are not possible.

Best Regards

Yes, but likely that's not what you want.

You are asking a question that the metatheory implementing the meaning of your logic would be able to ask, not your logic itself (unless it were metacircular, a property I don't think any existing consistent logic has; check out "Type Theory Should Eat Itself" for more on this topic). The metatheory has to know what binding means in terms of syntax, and not just the introduction and elimination rules.

Formalizing the metatheory is easy, but all known formalizations are very difficult to work with, since substitution has to be proven to commute with just about every operation on syntax. This difficulty gave rise to what is now called the POPLmark challenge, which aims to mechanize this burden.

Scope extrusion rules are common, so the transformations in your post are provably allowed. But a formalization of a graphical representation of syntax? Not so common. It is easy to do in an unrestricted metatheory, but proving that your representation determines a scope in which your scope extrusion rules are applicable is a non-trivial metatheoretic endeavor.

Hi, thank you very much for


Thank you very much for your comment and the very interesting
links. The difficulty I am facing is that I have only this
graphical idea in mind and don't yet know much about the
possible rules that would govern the concept.

Concerning the doubts whether graphical ideas can be formalized:
I think Knuth was facing the same problem when he created TeX.
And I guess the resolution was that a linearisation even of
2-dimensional concepts is most often easily possible.

But compared to TeX, the 2 dimensions here are not boxes and
their juxtaposition or inclusion, but rather something that
reminds me of proof nets in linear logic. If we could model
variable start and end as positive and negative literals, we
might be done.

I forgot to post the motivation behind this kind of logic.
My primary motivation is currently some code generation.
But today I came up with some additional possible
application areas. So my list now goes:

- Compiler Backends: Register Allocation
- Code Verification: Proof Statements
- Natural Language Processing: Anaphora
- Database Systems: Query Optimization
- Logic Programming: Partial Evaluation
- What else?

I have already looked a little bit into the register allocation
literature. There are some results, like Chaitin's, showing that
register allocation corresponds to graph coloring. Also, if I
look at the papers, I see that the authors have a refined
language to talk about the issues: live interval,
use position, etc.
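Chaitin's correspondence can be sketched directly on occurrence ranges: variables whose ranges overlap interfere and must receive different names (registers), which is exactly a graph coloring problem. A minimal Python illustration, with made-up helper names:

```python
def interference_graph(ranges):
    """Edges connect variables whose occurrence ranges overlap."""
    vs = list(ranges)
    edges = set()
    for i, a in enumerate(vs):
        for b in vs[i + 1:]:
            (s1, e1), (s2, e2) = ranges[a], ranges[b]
            if s1 <= e2 and s2 <= e1:        # intervals overlap
                edges.add(frozenset((a, b)))
    return vs, edges

def greedy_color(vs, edges):
    """Assign each variable the lowest color unused by its neighbors."""
    color = {}
    for v in vs:
        used = {color[u] for u in color if frozenset((u, v)) in edges}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# X spans literals 0..1, Y spans 1..2: they interfere, so two colors.
vs, edges = interference_graph({"X": (0, 1), "Y": (1, 2)})
print(greedy_color(vs, edges))  # {'X': 0, 'Y': 1}
```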

When looking at the domain of register allocation, one
might ask how this domain, a sequence of expression
assignments, possibly with if-then-else and while, is modelled
in a functional programming language. I guess monads are
in use here. I have even seen monads covering non-determinism.

But what I have not yet seen is these monads interspersed with
quantifiers, as the graphical intuition would suggest. Maybe
a quantifier corresponds to a variable declaration block. But
then we end up with hierarchical variable scopes again, and
not the overlapping thing I have in mind.

So if somebody has already put their hands on this, I would be
very interested to know about it.

The problem with scope

The problem with scope extrusion is possibly that the
number of needed variable names increases, something
which runs counter to the intended application areas.

So if you have graphically, in this not yet fully
identified logic:

           +--- Y ---+
   +--- X ---+    +---- X ---+

Then when we translate it into normal logic via scope
extrusion, we can only arrive at:

   +----------- Y -----------+
   +----------- X' ----------+
   +----------- X -----------+

We have to rename one of the X's, because the quantifier
extrusion rule has a side condition:

      exists X(A & B) = A & exists X B     if    X not in A
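The side condition can be checked mechanically by computing free variables. A small Python sketch, assuming a made-up term representation (`("exists", var, body)` for quantifiers, `("and", left, right)` for conjunctions, and `(pred, args)` tuples for literals):

```python
def free_vars(t):
    """Free variables of a term in the assumed representation."""
    if t[0] == "exists":
        return free_vars(t[2]) - {t[1]}
    if t[0] == "and":
        return free_vars(t[1]) | free_vars(t[2])
    return set(t[1])                 # literal: (pred, args)

def extrude(t):
    """Apply exists X(A & B) = A & exists X(B), only if X not free in A."""
    if t[0] == "exists" and t[2][0] == "and":
        x, (_, a, b) = t[1], t[2]
        if x not in free_vars(a):
            return ("and", a, ("exists", x, b))
    return t                         # side condition failed: no rewrite

# exists Y(p(X) & q(X,Y)) extrudes; exists X(p(X) & q(X,Y)) does not.
t = ("exists", "Y", ("and", ("p", ["X"]), ("q", ["X", "Y"])))
print(extrude(t))  # ('and', ('p', ['X']), ('exists', 'Y', ('q', ['X', 'Y'])))
```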


Although my question could give the impression
that I am interested in scope extrusion, I guess
scope extrusion will not be a dominant rule of
this not yet fully identified logic. Since the
logic should be able to deal with variable occurrence
ranges, the contrary is the case. The interest could
better be characterized as "scope intrusion".

By these ranges, a new, more intrusive way of
defining a scope should be possible. The quantifiers
should be able to intrude into the conjunction
boundaries independently of each other.

Best Regards


I'd consider the adbmaL work as related. From the poor high-level understanding I have of it, they would represent your term as

exists X, (p(X) & (exists Y, q(X,Y) & (unlet X in r(Y))))

(the ASCII notation "unlet X in" to express the removal of an assumption from the context is my own.)

The scope is not "linear" as in your intuitive drawing, but covers a subtree of a tree-like expression. "unlet" allows one to 'carve' subtrees out of the scoped-over region.

Thank you very much. This

Thank you very much. This could work. I would need to look into it in more detail. I guess the non-linearity is not a no-go, since the linear case, as you showed, is simply a special case.

Little excursion: interestingly, the removal of assumptions is even a sound rule in normal sequent logic. Look at the assumption removal rule, which reads (the rule is named according to forward chaining; removal amounts to introduction in backward chaining and vice versa):

   G, A |- B
   ----------- (Right Implication Introduction and Assumption Removal)
   G |- A -> B

One could also define the following rule, which is more or less the inverse of the previous one. It would not define a complete fusion operator &, though. The rule reads:

   G |- B
   ------------- (Right Fusion and Assumption Introduction)
   G, A |- A & B

It's just a shorthand for the valid:

   ------ (Id)
   A |- A     G |- B     
   ----------------- (Right Fusion Introduction and Assumption Fusion)
     G, A |- A & B

Very curious now to look into your proposal.

Could be related to consume/produce as well

Just noticed that the "unlet" could also be a solution to a question I have had for a long time: a consume/produce modal operator in logic. It conflicts a little bit with the way linear logic deals with resources, since it would give control of resource consumption to the query and not to the premisses.

For more detail see here:

CPS option?

Is it that you would want to get rid of Z and T in:

exists Z,T(
exists X(p(X) & q(X,Z) & X=T) &
exists Y(q(T,Y) & r(Y) & Y=Z))

How do you propose to use continuation-passing style to solve this?

@Denis: Looks more like equational

@Denis: Looks more like an equational theory to me, where one would first use the following rule (replacement of equals by equals) together with normal predicate logic scope extrusion:

   S=T & A(S) <-> S=T & A(T)               (1)

To arrive at:

   exists X,Y,Z,T(p(X) & q(X,Y) & q(X,Y) & r(Y) & X=T & Y=Z)

Then one would use the following rule (non-empty domain, and terms always have values) together with normal predicate logic scope intrusion:

   exists X (X = T) <-> true               (2)

To arrive at:

   exists X,Y(p(X) & q(X,Y) & q(X,Y) & r(Y))
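Both steps can be sketched as one pass per equation over the flat conjunction (the tuple representation and the helper name are my own, for illustration): rule (1) replaces one side of the equation by the other everywhere, and rule (2) then licenses dropping the equation together with its quantifier.

```python
def eliminate_equation(literals, v, w):
    """Replace w by v everywhere and drop the literal v = w."""
    out = []
    for p, args in literals:
        if p == "=" and set(args) == {v, w}:
            continue                  # rule (2): exists W(V = W) <-> true
        # rule (1): replacement of equals by equals
        out.append((p, [v if a == w else a for a in args]))
    return out

# Denis's formula, flattened: p(X) & q(X,Z) & X=T & q(T,Y) & r(Y) & Y=Z
q = [("p", ["X"]), ("q", ["X", "Z"]), ("=", ["X", "T"]),
     ("q", ["T", "Y"]), ("r", ["Y"]), ("=", ["Y", "Z"])]
step1 = eliminate_equation(q, "X", "T")
step2 = eliminate_equation(step1, "Y", "Z")
print(step2)  # [('p', ['X']), ('q', ['X', 'Y']), ('q', ['X', 'Y']), ('r', ['Y'])]
```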

Let's for the moment assume that we do not yet have an equality sign in this new, yet to be fully identified logic, nor corresponding rules for one. The absence of an equality sign would not mean we cannot define semantics, substitution rules, etc. for this new logic.

But I don't exclude that something could also be done with equality signs. Do you think the formula you gave expresses the variable occurrence ranges from my example? If yes, how? Or were you more fascinated by the redundancy of Z and T? (Well, we could also deem X and Y redundant, with some other choice of variables.)

As for the CPS question in your response: I have not yet thought about it, since I would rather avoid having = in the first place.


Just a rewrite

I expect that q(X,Y) & q(X,Y) can be simplified to q(X,Y), can't it?

I was just trying to rewrite your formula in a way which matches the graphics better, and which keeps X and Y separate. Redundancy, yes, maybe.

X and Y are overlapping in

@Denis: X and Y are overlapping in the graphics. They share q(X,Y); that is probably why your proposal doesn't match the graphics very well. The goal would be to have a logic that reflects that q(X,Y) is only called once.

If we only look for truth, then calling q(X,Y) once or twice wouldn't matter so much, since in classical logic we have A & A = A. But for the applications listed it might be important to be able to express the computation, the utterance, etc. more verbatim.