Claiming Infinity

In this paper, Yaroslav D. Sergeyev describes a "new computational methodology for executing calculations with infinite and infinitesimal quantities". In his computational model, he introduces infinite types ("units of measure"). Not only does he claim the originality of this approach, he also claims to have patented the "infinity computer" in the European Union. While his approach seems an interesting contribution in general, calculation with unknowns is not exactly without precedent. Besides, I wonder how this is tenable, as algorithms and computer programs are excluded from patentability under European law, even more so if they relate to abstract entities. While I could not find any reference to any programming paradigms in the papers available, it may be worth a discussion here.

Comment viewing options

Select your preferred way to display the comments and click "Save settings" to activate your changes.

Bong! Next please.

I particularly liked Theorem 5.1: The number of elements of any infinite sequence is less than or equal to (1).

(1), by the way, is Sergeyev's infinity, and you can have less than (1) because you can have (1) - 1 or (1)/4 or even more exotic things.

But lo! Theorem 5.2: The number of elements of the set of integers is 2(1) + 1

Actually this is allowed by the sleight-of-hand (more like "Hey look! an eagle!") given in 5.1. Or is it? It's impossible to tell exactly what is allowed and what is not.

The weird thing is that he seems to understand Cantor. He even has a little chat about the 1-1 correspondence between the even numbers and the natural numbers in one chapter. By the time you've reached that point you will have been thinking for four chapters "but surely if there are (1) natural numbers and (1)/2 even numbers, your theory is busted". His answer is the brazen "we cannot do this because [due to our postulates] we can only execute a finite number of operations"!

All this would be swallowable, just, if some use could be demonstrated. But all the examples given are of the form "here is a calculation whose result has a (1) in it. There, you couldn't have done that with Cantor could you?". As an analogy with imaginary numbers, if calculations using imaginary numbers always ended up with a formula involving i, and the only justification is "now you can take the square root of a negative number!" everyone would have a right to look completely unimpressed.

He's welcome to his patent.

Is it this time of year again?

Seriously, shouldn't these stories be released closer to the beginning of April? Last time I saw this particular flavour of pseudo-science it was called "Nullity" and caused quite a big splash in the tabloid press in the UK. Given that most entry-level calculus and analysis texts deal with why infinity is not a number, you would expect any work in this area to deal with that issue first. No, instead there is an offhand comment that "Cantor's Approach" causes paradoxes, and then sure enough we have the identity (1)/(1) = 1 assumed as a basic axiom. This is such a simple and common mistake that there is a whole wiki page devoted to it: the history, why it has arisen in previous attempts to define number systems, what goes wrong...

Thankfully he now has the definitive patent on the process. Perhaps this is the last year this story will surface. Please?

Not crazy enough for april

I agree with your conclusion, but not with your argument. ∞/∞ only leads to paradoxes when ∞ = ∞ + 1 = 2*∞ and so on, which the guy specifically presents as being distinct numbers. Also, he specifically says that only non-mathematicians would consider the Infinite Hotel manipulations to be paradoxical.

The problem with infinity not being a number is, however, quite a big issue. On page 9, he writes that the sets {1, 2, ...} and {1, 2, ..., ①} are the same, which leads him to ridiculous conclusions later on, like "The set ℕ is not a monoid under addition because ① + 1 does not belong to ℕ".

It's a shame that he discards ordinal numbers early on, because his base ① system is strikingly similar to the Cantor normal form (b_k*ω^k + ... + b_2*ω^2 + b_1*ω^1 + b_0*ω^0) for ordinals. His use of fractional coefficients and negative exponents, however, seems like an interesting extension which I hope has also been explored by a non-crackpot mathematician. He also has fractional, infinite and infinitesimal exponents, which is weird, but not necessarily devoid of interest.

It's also a shame that he shuns axiomatic systems, regarding them as too restrictive. There seems to be a fascinating number system screaming to get out of his paper. Let's release it!

The gross numbers are defined by the following rules.

  1. zero is gross.
  2. if E is a finite set of (distinct) gross numbers, and c is a function associating each element of E to a non-zero real number, then the sum of terms c(i)*①^i for each i in E is gross.
  3. only the numbers obtainable by a finite number of applications of the first two rules are gross.
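A sketch of these rules as a syntactic data structure in Python (the names `Gross`, `ZERO` and the representation are mine, not from the thread; distinctness of exponents is checked only by object identity, a simplification):

```python
from fractions import Fraction  # exact coefficients; any non-zero real works

class Gross:
    """Rule 2: a finite formal sum of terms c(i) * ①^i, with each i itself gross."""
    def __init__(self, terms=()):
        # terms: (exponent, coefficient) pairs; exponents are Gross instances.
        self.terms = tuple(terms)
        assert all(c != 0 for _, c in self.terms)          # non-zero coefficients
        # distinctness here is by object identity -- a simplification
        assert len({id(e) for e, _ in self.terms}) == len(self.terms)

    def __repr__(self):
        if not self.terms:
            return "0"                                     # rule 1: zero is gross
        return " + ".join(f"{c}*①^({e!r})" for e, c in self.terms)

ZERO = Gross()                            # rule 1
one = Gross([(ZERO, 1)])                  # 1*①^0, i.e. the real number 1
half = Gross([(ZERO, Fraction(1, 2))])    # 1/2*①^0
grossone = Gross([(one, 1)])              # 1*①^1, i.e. ① itself
```

Note that nothing here ever evaluates ①; the structure only records the finitely many rule applications, exactly as rule 3 demands.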

It is true that inf/inf=c is

It is true that inf/inf=c is only a problem when inf is a divergent process rather than a number. But trying to form a system with distinct values of infinity, and treating each as a number in its own right, causes other issues with closure.

I spotted the claim about N not being a monoid under addition after I'd posted. I'd love to hear his explanation for why a number system designed by shunning axioms is more useful than one that has algebraic properties.

I'm not sure that trying to liberate a number system from his paper is a good idea. There is a serious danger of being trapped inside it with the author. But putting that worry to one side, there seem to be some problems with your definition of rule 2.

Firstly I would claim that (1) is not well-defined (and my html-fu is terrible, but that is an aside), and so I don't know how to evaluate the terms c(i)*(1)**i. Secondly I can't apply rule 2 in an inductive manner to build the set of gross numbers as the only gross number that I know when I start the rule is zero.

I can see where you are going with rule 3, but it would appear that we could only construct gross numbers a finite number of steps from an ordinal (assuming that is what the place-holder in rule 2 would be). There are many systems defined in that way. What do you think the "extra" would be that results from fractional coefficients (and actually the thread below reminds me that Conway handles these) or negative exponents?


Let me first reveal my html-fu trick for rendering advanced characters like ①: I use unicode! Just copy-paste the character from somewhere else (this post, for example) into the comment box. No HTML required.

Let me also clarify that in rule two, I did not mean that the expression c(i)*①^i should be evaluated in order to obtain a new gross number. As you correctly pointed out, that would require a definition for ①, which we don't have (nor want).

Rather, I meant to use ① syntactically. Compare with the following definition of natural numbers...

  1. zero is natural.
  2. if n is natural, then suc n is also natural.
  3. only the numbers obtainable by a finite number of applications of the first two rules are natural.

Here, we don't fret about the fact that suc is undefined, since we simply use it to syntactically distinguish zero from suc zero, from suc (suc zero), and so on.
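The zero/suc definition transcribes directly into code, with suc kept purely syntactic (a sketch; `to_int` is one possible interpretation we may choose to apply, not part of the definition):

```python
class Zero:
    def __repr__(self):
        return "zero"

class Suc:
    def __init__(self, n):
        self.n = n                      # suc is a constructor, never evaluated
    def __repr__(self):
        return f"suc ({self.n!r})"

def to_int(n):
    """One possible interpretation of the syntax: count the suc wrappers."""
    return 0 if isinstance(n, Zero) else 1 + to_int(n.n)

two = Suc(Suc(Zero()))
assert repr(two) == "suc (suc (zero))"  # distinguished purely by syntax
assert to_int(two) == 2
```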

Now if we use ① syntactically, like suc, I hope it is clearer how to use rule two in an inductive manner to get more numbers. Starting with zero, the only non-empty set of distinct gross numbers I can construct is the singleton set {0}. But I can build 2^ℵ₀ different functions from a singleton set to ℝ, so the first stage of induction yields:

1*①^0, 2*①^0, 3*①^0, ...
-1*①^0, -2*①^0, -3*①^0, ...
1/2*①^0, 1/3*①^0, 1/4*①^0, ...
ℯ*①^0, π*①^0, √2*①^0, ...

With the addition of the zero we already have from rule one (and which, come to think of it, we could obtain from rule 2 using an empty set of gross numbers), we now have all real numbers. So you can already see that I'm neither aiming for ordinals nor for Sergeyev's system here.

With another induction stage, we can use real number exponents:

x*①^2 + y*①^1 + z*①^0,
x*①^2.5 + y*①^π + z*①^-1, ...

I have written real number exponents as x*①^2, but I really should have used the full syntax x*①^(2*①^0). This is because, of course, our next step of induction can use non-real numbers as exponents:

1*①^(1*①^1) for ①^①,
x*①^(3*①^2 + 1), and many other numbers I can't think of any use for.

I don't know what good fractional, negative and gross exponents yield, and I'm not really curious enough to find out. If Conway did something even remotely similar, then that's good enough for me.

Intro to surreal numbers

There's nothing going on here

Let |S| be the cardinality of S. We have these properties:

  1. |S|=1 if S is finite
  2. |A union B| = |A|+|B| if A and B are disjoint
  3. |A x B| = |A||B|
  4. |S|=|T| if there is a bijection from S to T.

He's given a new definition for |.| that dumps 4 because bijections between infinite sets need an infinite number of steps (whatever that means). Except, he allows some isomorphisms because he's unable to recognise them as such. (E.g., he doesn't recognise that by his own rules, the bijection from the positive integers to the negative integers is a bijection that requires "an infinite number of steps".)

If you're trying to define a new way of counting, and you can't recognise bijections, I think that qualifies you for being a crackpot.

Understanding the author's point of view

All of his sets are effectively finite. His (1) is just a large natural number with certain divisibility properties.

When he writes { 1, 2, 3, ...} he really means { 1, 2, 3, ..., (1) }. Since he chooses (1) to be divisible by all small numbers, he has results like: |{ 2, 4, ..., (1) }| = |{ 1, 3, ..., (1)-1}|. This is just ordinary cardinality of finite sets.
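That reading can be checked mechanically; a small sketch (the choice of N = lcm(1..10) as a stand-in for (1) is mine, as is the cutoff 10):

```python
from math import lcm   # variadic lcm requires Python 3.9+

# N stands in for (1): a large natural divisible by every "small" number
N = lcm(*range(1, 11))                 # divisible by all of 1..10
evens = range(2, N + 1, 2)             # { 2, 4, ..., (1) }
odds  = range(1, N, 2)                 # { 1, 3, ..., (1)-1 }
assert len(evens) == len(odds) == N // 2   # plain finite cardinality
```

Every "grossone" identity of this shape reduces to counting elements of finite arithmetic progressions, which is the point of the comment above.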

An error somewhere

Taking the union of two disjoint finite sets means that 1 = 1 + 1 with your rules 1 and 2...


|S| = 1 if S={x} for some x. That his idea of cardinality agrees with the usual one for finite sets should follow immediately.
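With property 1 read as the singleton case, as in the correction above, the four properties are just facts about len on finite Python sets; a quick sanity check:

```python
A = {1, 2, 3}
B = {"a", "b"}                                   # disjoint from A

assert len({"x"}) == 1                           # 1: singleton sets
assert len(A | B) == len(A) + len(B)             # 2: disjoint union
assert len({(a, b) for a in A for b in B}) == len(A) * len(B)   # 3: product
neg = {-n for n in A}                            # a bijection n -> -n ...
assert len(A) == len(neg)                        # 4: ... preserves cardinality
```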

Pirahã doesn't really have a "one, two, many" system

In fact, more recent research tends to show that the Pirahã don't have any number words at all, and are in general profoundly uninterested in either enumeration or computation: the words once thought to mean 'one', 'two', actually mean 'a little bit (of)', 'a lot (of)'.

"I think, therefore I have no access to the level where I sum." --Douglas Hofstadter


The discussion of Pirahã did nothing for the article, besides showing how the author was willing to discuss topics he hadn't mastered.

in other news

Are LtU'ers familiar with J. Conway's "On Numbers and Games" and has there been any thought given to translating that into a computational basis for numbers in a general purpose way? (His system includes infinitesimals and infinites and has a lot of other interesting properties.) I know that there are some programs around that do simple arithmetic in his system or use some of those approaches for game analysis but I'm wondering how deep his construction of numbers has or can have impact on general purpose computing.

Didn't Knuth do something

Didn't Knuth do something like that?

Surreal Numbers

I picked up that book once, and didn't get all the way through it. A work of fiction in which "a young couple turned on to pure mathematics and found total happiness".

Games are more general than surreal numbers

Both are defined like Dedekind cuts in that they consist of pairs of sets (the left and the right) of values. Think of them as games where the left set is left's moves and the right set is right's moves. The surreal numbers have the condition that every element on the left is less than every element on the right. Games in general have no such restriction. Symmetric games like Nim, i.e. games where the legal moves are identical no matter whose turn is next, are typically not surreal numbers.
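A minimal sketch of this construction (the representation and names are mine, not Conway's notation; finite games only):

```python
class Game:
    """A game is a pair of (finite) sets of previously built games."""
    def __init__(self, left=(), right=()):
        self.left, self.right = tuple(left), tuple(right)

def leq(x, y):
    """x <= y iff no left option of x is >= y and no right option of y is <= x."""
    return (all(not leq(y, xl) for xl in x.left) and
            all(not leq(yr, x) for yr in y.right))

def is_number(g):
    """Surreal condition: every left option is strictly less than every right option."""
    return (all(is_number(o) for o in g.left + g.right) and
            all(not leq(r, l) for l in g.left for r in g.right))

zero = Game()                       # { | }
one  = Game(left=[zero])            # { 0 | }
star = Game([zero], [zero])         # { 0 | 0 }: a legal game, but not a number
assert leq(zero, one) and not leq(one, zero)
assert is_number(one) and not is_number(star)
```

`star` is the value of a one-move Nim pile: a perfectly good game that fails the surreal condition, which is exactly the sense in which games are more general.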

I'd be surprised if there isn't any computational content via game semantics.

Game semantics for games

I'd be surprised if there isn't any computational content via game semantics.

Indeed, this paper by Paul-Andre Mellies explains some of the history behind this idea, some of the problems, and some solutions.


Even if I've only understood parts of this paper (by Mellies) at a first reading, it was already mind-blowing. Thanks for the reference!

computing with the Dedekind reals

Conway numbers are a generalization of the Dedekind reals, and there's a question of whether even those (as opposed to the Cauchy reals) are a useful basis for implementations of real arithmetic. That they are useful is what Andrej Bauer and Paul Taylor argue in the work described here.

Would be hard

Going from memory, it takes an infinite number of steps before the infinitesimals emerge from the system of numbers that he constructs. In fact I seem to remember that it takes an infinite number of steps to construct any non-dyadic fraction. The problem is that the number of steps relates to the size of the description of that particular value; essentially anything that is not a binary string with a floating point would require a description of infinite size.

It's been a while since I read it, perhaps somebody else could correct me?

The problem is that the

The problem is that the number of steps relates to the size of the description of that particular value; essentially anything that is not a binary string with a floating point would require a description of infinite size.

Yes, but the same goes for any representation of the real numbers, yet "exact real arithmetic" exists. The point is that to run a computation, you have to specify a desired degree of precision -- and you can specify any degree you want. But for a given degree of precision, the computation will only depend on a finite portion of the input (by continuity).
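A toy version of that idea, with a real represented as a function from an error bound to a rational approximation (this representation is one common choice for exact real arithmetic, and all the names are mine, not from the linked work):

```python
from fractions import Fraction

def const(q):
    """The real number q: every query returns q exactly."""
    return lambda eps: Fraction(q)

def add(x, y):
    """To get x+y within eps, it suffices to know each summand within eps/2."""
    return lambda eps: x(eps / 2) + y(eps / 2)

def sqrt2(eps):
    """Newton iteration until the approximation is provably within eps."""
    a = Fraction(1)
    while abs(a * a - 2) > eps:     # for a >= 1, |a*a - 2| bounds |a - sqrt(2)|
        a = (a + 2 / a) / 2
    return a

one_plus_root2 = add(const(1), sqrt2)
approx = one_plus_root2(Fraction(1, 10**6))   # only finitely much work is done
assert abs(float(approx) - 2.41421356) < 1e-6
```

Each query at a given precision triggers only a finite computation, which is the continuity point made above.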

A mini-review

I don't find the article silly, though the tone is pompous and there's quite a bit of carelessness in it. And it reads like someone who has mastered some technical subject, and immediately thinks that he has therefore mastered philosophy of mathematics.

The notion of infinity is generally taken to be a notion of cardinality, and by rejecting the idea that the "number" of two isomorphic sets must be the same, he's rejecting the idea that his numbers are cardinalities: there doesn't seem to be a fact about how many, in his number scheme, an infinite collection of identity-less units contains. So what are they? He avoids answering such questions with an appeal to "Postulate 2. We shall not tell what are the mathematical objects we deal with", which I find inadequate.

If I understand them correctly, his numbers seem to be the same as single-variable quotiented polynomials, where the variable is taken to be "sufficiently large": this is an idea put forward in Shaughan Lavine's 1994 book, Understanding the Infinite.

He writes (p. 4):

Usually, when mathematicians deal with infinite objects (sets or processes) it
is supposed (even by constructivists (see, for example, [11])) that human beings
are able to execute certain operations infinitely many times.

where [11] is "A.A. Markov Jr. and N.M. Nagorny, Theory of algorithms, second ed.,
FAZIS, Moscow, 1996." (orig. Markov, 1954). A theory of constructivism that does not repudiate the idea that effective procedures can take infinitely many steps is certainly an eccentric theory. I assume Sergeyev refers to Markov's principle, which, however, does not allow this: Markov's principle essentially states that if you can decide, for each element of an infinite list, whether a property holds, and you know that the property does not fail for every element of the list, then you will, in a finite number of steps, run into an element which satisfies the property.
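Read computationally, Markov's principle is just the claim that unbounded linear search terminates; a minimal sketch (the name `markov_search` and the example predicate are mine):

```python
def markov_search(pred):
    """Linear search: terminates whenever pred is not everywhere false."""
    n = 0
    while not pred(n):
        n += 1
    return n

# We know some n has n*n > 2000, but the search carries no bound computed
# in advance -- the principle only promises finitely many steps, not how many.
assert markov_search(lambda n: n * n > 2000) == 45
```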

Markov's principle is admissible (i.e., adding the principle to the theory doesn't give you any new theorems) for most intuitionistic theories; constructivists generally reject it for those theories where it is not admissible. Whether accepted or not, it is wrong to claim that constructivists suppose that infinitely long procedures can be carried out.

Tom Lord mentioned Conway numbers: IIRC, these give a theory of number that strictly contains Sergeyev's notion, and which is explained in terms of his games. Sergeyev cites Conway's coauthored publication, but only in passing.

I'd like to see Sergeyev rework the article, taking care to make sure that the claims he doesn't prove are properly justified against the literature, and making the effort to explain what his numbers are.

So what's in the patent?

Pretty silly

Putting aside the fact that his approach is to discard most mathematical results obtained this millennium, I don't think this is a case of a guy rediscovering a polynomial ordering and attributing undue significance to it. He's pretty much a crank.

On page 8:

The new numeral x allows one to write down the set, N, of natural numbers in the form

N = {1, 2, 3,...x − 3, x − 2, x − 1, x }

where the numerals indicate infinite natural numbers. It is important to emphasize that in the new approach the set (5) is the same set of natural numbers

N = {1, 2, 3, . . . }

we are used to deal with and infinite numbers (6) also take part of N.

I agree it was important to emphasize that he considers his set, which has "grossone" as its largest element, to be the same as the regular natural numbers. That saves readers some time.

His theory of "infinite sums" is also interesting. He apparently is just "adding up" infinities of terms without taking limits.


He doesn't discard or deny any well-established results. His own results seem reasonable.

I find he overstates the significance of what he is doing, and, more troubling, he doesn't make enough effort to tie in what he is doing to what has already been done, but this doesn't add up to being a crank.


Did you read the excerpt I provided above where the guy doesn't understand that the Naturals contain no upper bound? Did you see the part where he attempts to add "grossone" numbers without explaining what that even means?

Granted, I didn't study this paper much, but my crank-o-meter was going crazy for the part I did read. [Though, if you respond that I just misunderstood what he had written in these parts, I'll revisit it]


I second Charles' feelings about this paper, and add that, as presented, it is at best an amusement, since he really doesn't offer any motivation for his system, except to make infinities behave more like we expect numbers to.

Given my inclination is to feel that we tend to treat infinity too much like a number already, I have never stayed up nights wishing for this. ;-)

I'm curious what those who like this paper, e.g. Jacques, see in it? I'm guessing they are just warped in a different way than I am. ;-)

Jacques response was not

Jacques' response was not directed at this (the topic) paper.

Thanks Derek

I've edited my comment to make it clear that the paper I liked was the one by Mellies, thanking Noam for the reference. The topic paper rates highly on my crank-meter. I even believe that there are unexplored, serious approaches to dealing with infinity still to be discovered - this particular one just doesn't strike me as a candidate.


I've given the paper that Noam and Jacques were referring to its own story.


That makes more sense. ;-)

Rational exponential expressions

I asked a question a week or so ago on MathOverflow:

Rational exponential expressions

Consider the following extension of polynomials. The rational exponential expressions (REXes) are given by:

  1. The leaves 1 and x, for x drawn from a class of variables; and
  2. Closed under the binary functions of addition, multiplication, exponentiation, and division.


I'm interested in a decision procedure for [which of two functions in one variable is faster growing]. Do we have either the existence of canonical forms, or decidability for this problem?
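This is of course not a decision procedure, but the growth comparison can at least be explored numerically; a heuristic sketch (all names are mine, and the sample points are arbitrary):

```python
def dominates(f, g, samples=(1e2, 1e3, 1e4)):
    """Heuristic only: does f exceed g at every sampled (large) argument?"""
    return all(f(x) > g(x) for x in samples)

f = lambda x: x ** (7 / 5)        # x^(7/5): 7/5 is built from 1 by +, /
g = lambda x: (x * x) / (x + 1)   # asymptotically ~x, also a REX
assert dominates(f, g)
assert not dominates(g, f)
```

Sampling can obviously be fooled by slowly crossing functions, which is exactly why a canonical form or genuine decision procedure would be valuable.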

As it happens, the Grossone numbers considered as polynomials seem to be a generalisation of these REXes, allowing subtraction in addition to the operations I described above. If that's right, then Sergeyev claims to have such a decision procedure. The MathOverflow question turned up a nonconstructive result, but no constructive one.

I plan to look more closely at Sergeyev's other papers, but I thought that in the meantime, maybe this is of interest.


I do think the patent sounds kind of cranky, but I'm restricting my attention to the paper.

Section 3 is problematic. It seems to be about subsets of the natural numbers as defined by certain generating series. I think his treatment of endpoints of infinite series of naturals is woolly and problematic, but, well, Conway's surreal numbers do contain ω − 1: it's an incoherent notion only in certain contexts.

Section 4 does give a normal form for his numbers that is closed under addition and multiplication. This bit is coherent.

As for an explanation, he doesn't give anything I would call an adequate explanation. I'd like a proper story linking what he does in section 3 to what he does in section 4, one which says, e.g., which generating series get which numbers. I'm guessing that the list of powers of two doesn't get a number. I'm guessing that he has to ditch the talk of endpoints too.

I've no confidence in where the author is going, because he ignores rather than discusses difficulties, and because he doesn't give previous work its due, but I think that the idea of getting transfinite measures from generating series may be interesting. Hence my mini-review, and maybe I'll be surprised by future work.

Woolly mammoth

I revisited the paper last night and his viewpoint makes sense to me now, and I agree with many of the comments you made in your mini-review. I believe you're right that his "grossone" is a stand-in for a large natural that is divisible by every number up to some sufficiently large (smaller) natural. In fact, (and I fear I risk violating Postulate 2 by just talking about this) I think a model for his intended numbers would be arbitrary functions from admissible "grossone" values (some subset of the naturals) to the rationals. It's not just polynomials because he says things like this:

Numbers p_i in (13) called grosspowers can be finite, infinite, and infinitesimal (the introduction of infinitesimal numbers will be given soon), they are sorted in the decreasing order

Also he seems to think that his "record" defining e in (27) on p. 20 is a valid number, despite the fact that it doesn't fit into his normal form. So if he's honest, it's not really a normal form. Luckily, no justification was given for his choice of normal form other than that it generalizes the usual normal forms in decimal.

Of course, there's not going to be a total order on arbitrary functions, which he simply assumes is the case, so my model above doesn't exactly match what he has in mind.

So is the guy a crank? I guess it depends what exactly you mean, but the paper is certainly deeply flawed, and I don't really expect anything interesting to come of it. You can already do everything he can do in this system by working with a "sufficiently large integer N."

Very odd, for the least...

Well, I don't even hold anything like Mr. Sergeyev's Ph.D., but I find it pretty odd (even if we accept this as a mere introductory paper to his theory) that no form of credit (or other relation thereof) is given to what is traditionally considered the "standard" ontological basis one would recall before going on to form a new theory of this sort:

* For instance, does the underlying logical framework require the law of excluded middle? (author: yes or no? and then, what's the metamathematical rationale for that choice)

* Not a word either about Zermelo Fraenkel (with or without AC, the axiom of choice) ?(!)

* No introspective reflection either about his own conclusions and how they relate to Gödel's incompleteness theorems and other axiomatics used in number theories (are they believed by the author to be weaker or stronger, and than which? And what are we invited to "take" perhaps more prudently, then; nothing at all, really?)

This list is not exhaustive, but all of the above would help the reader be a little more "confident/comfortable", no? If only a priori...

I "fear" it failed to attract my attention enough to go past my first reading of it, and the supposedly "most interesting/promising theorems" in there...

Grossone is indistinct, but consistent and trivial

Sergeyev's reasoning is informal and looks funny, but it does not lead to contradictions. His grossone admits an utterly simple and adequate formalization within nonstandard analysis, where all the properties of (1) and all the related assertions become rigorously provable and even trivial.

The above is clarified in an article (also published in Siberian Mathematical Journal).