## Why numbering should start at 0

EWD831 by E.W. Dijkstra, 1982.

When dealing with a sequence of length N, the elements of which we wish to distinguish by subscript, the next vexing question is what subscript value to assign to its starting element. Adhering to convention a) yields, when starting with subscript 1, the subscript range 1 ≤ i < N+1; starting with 0, however, gives the nicer range 0 ≤ i < N. So let us let our ordinals start at zero: an element's ordinal (subscript) equals the number of elements preceding it in the sequence. And the moral of the story is that we had better regard --after all those centuries!-- zero as a most natural number.

Remark Many programming languages have been designed without due attention to this detail. In FORTRAN subscripts always start at 1; in ALGOL 60 and in PASCAL, convention c) has been adopted; the more recent SASL has fallen back on the FORTRAN convention: a sequence in SASL is at the same time a function on the positive integers. Pity! (End of Remark.)

What's the modern trend for indexing from zero vs. indexing from one? And for specifying ranges ("from 5 to 8", 5 ≤ N < 9, "four elements starting from 5")? Why?

## Comment viewing options

### One-sided ranges

Dijkstra argued (convincingly, in my opinion) that expressing the range of a variable i as a <= i < b offers advantages over any other combination of inequalities.

0) "Zero is the most natural number." -- Dijkstra --
(the natural numbers, per Peano's postulates, consist of
zero and its successors).

1) "How many values occurred i times?" Placing such values
in an array requires beginning the indices at 0; in the
general case one has the possibility that some values
were absent (occurred zero times).

2) It is often meaningful to ask of an algorithm, "How many
iterations were required for some condition to obtain?"
Such counts must begin with zero (natural numbers, again).

3) Consecutive ranges have matching end-points:
a <= i < b followed by b <= i < c yields a <= i < c .

4) The length of the range is the difference of its endpoints:
0 <= i < 10 has ten elements, as does 5 <= i < 15
(because, of course, 10 - 0 = 10 = 15 - 5 .)

5) In the special case of (4) above, if the left limit is zero, then
the upper (excluded) limit is also the count.

6) In iterative algorithms that count from zero, the "current"
value of the count always equals the number of previous
iterations.

7) One of the least important reasons (we ARE all high-level
thinkers, right? ;-) but still relevant for implementation,
is that for a zero-origin array, the index indicates the
displacement from the origin.

### not much has changed in decades of programming language design

You might want to mention this is from 1982 and by E.W. Dijkstra: http://en.wikipedia.org/wiki/Edsger_Dijkstra

I still think his preferences are wrong. The language should adapt to the user/programmer when at all possible, not the other way around. Doing otherwise is why we are still stuck with = vs. == errors, case sensitivity errors and the like in most programming languages.
1 ≤ i < N+1; starting with 0, however, gives the nicer range 0 ≤ i < N.
1 ≤ i ≤ N is nicest then?
an element's ordinal (subscript) equals the number of elements preceding it in the sequence.
The conventional definition of "ordinal" is that a number means what it says. If you say 3, that means the third item, not the fourth: http://en.wikipedia.org/wiki/Ordinal_number

Other solutions to this issue are the use of "first" and "last" extension methods/properties like in ruby I believe.

### thanks

You might want to mention this is from 1982 and by E.W. Dijkstra
Indeed - fixed.

### Dijkstra is right

Most languages index from 0, and that's good.

For specifying intervals, there is a reason for using ≤/≤ in addition to ≤/< when the bounds are not necessarily integers but some other enumerable quantities like ASCII characters which do have an upper bound. That's why I prefer to have both options.

There is a choice that Dijkstra doesn't address: whether a subsequence of indices is specified as the beginning and the end (preferably ≤/<), or the beginning and the length. It's not settled in various languages. For example Java uses the former, while C# uses the latter.

### Dijkstra is wrong.

From an earlier comment by dough:

1 ≤ i ≤ N is nicest then?

Yes it is. His argument here is a straw man. There are several ways to denote the ranges given a particular indexing strategy. He chose to use the method that made his preference more aesthetic. In fact the only reason I can think of that we use a "<" instead of a "≤" when comparing i to n, is because we choose to start indexing at 0, not the other way around.

Dijkstra's resulting "moral of the story" is also suspect. His conclusions are correct (from a set theoretical point of view) zero, as the first natural number (Peano axioms) could indeed be considered the most natural number, but he's failed to convince me that natural numbers have anything to do with indexing. Was the point here an attempt to convince the audience that zero is more natural, or that his premise must be correct because his conclusion is correct?

What does an index do? It tells the position of an element in an ordered collection. What kind of numbers do we use for this purpose? Is the answer natural numbers? No. This argument is spurious. Natural numbers are for counting. Ordinal numbers are for giving the position in an ordered sequence: 1st, 2nd, 3rd, 4th, ...

For specifying intervals, there is a reason for using ≤/≤ in addition to ≤/<

It is true that certain enumerable sets lack an extreme element (the set of values 1/n for all natural numbers n ≥ 1, for instance, has no least element). However, any finite ordered sequence has a last element. Since the argument here concerns the indexing of sequences, we must keep this in mind when discussing the validity of ≤/≤ vs. ≤/< for ranges on indexes. If the range of indexes is given as 0 ≤ i < n then this is equivalent to the more balanced (because it uses the same inequality on both sides) 0 ≤ i ≤ n-1.

More to the point: although the use of indexing for the purpose of iterating over the entire sequence is a moot point (as others have already mentioned, there are much better methods for enumerating an entire sequence), there is still a need for an indexing strategy on sequences, namely denoting subsequences via ranges. Thus we are left with two interpretations: do I want to denote my subsequence based on position in the array (the 4th to the 9th) or on distance from the 1st element?

### Empty sequence

The (big) question is, how do you specify an empty sequence?

any finite ordered sequence has a last element

Do you still think so?

### Point taken. I should have

Point taken. I should have said: Any non-empty finite sequence has a last element.

### Do you really want to

Do you really want to introduce a special "empty sequence" case into every piece of code that deals with a finite sequence of unknown length?

### empty list

You mean like with the empty list?

### Empty sequence

I'm glad you asked that, as I should have remembered it in my original list. An empty sequence can be specified as a <= i < a for any value of a. Using the both-ends-included convention (<= for both relations) means that one has to write something like a <= i <= a - 1, which strikes some eyes (mine included) as unobvious and jarring.

### Both are ugly

a <= i < a means that you have not one representation of the empty set but as many notations as there are values of 'a'.

a <= i <= a - 1 is not good either.

For me a good reason to have arrays start at 0 is that it's quite natural when you want to put into an array the result of "How many values occurred i times?".

### Indexing from 0 is usually better

It usually leads to simpler formulas than indexing from 1, with less +1/−1 adjustments, especially when working on intervals and when scaling them to other units.

Consider the translation between two-dimensional indexing and one-dimensional linearization of the elements: with 0-based indices, it's i·n+j; with 1-based indices, it's (i−1)·n+j.

If people had been numbering years and centuries starting from 0, then computing the century of the year would be simpler: year 1977 would be in 19th century, 2000 would be in 20th one. The present convention is shifted by 99 years.

### It usually leads to simpler

It usually leads to simpler formulas than indexing from 1, with less +1/−1 adjustments, especially when working on intervals and when scaling them to other units.

I agree with this up to a point. I'm not exactly sure what you mean by ranges, but it certainly is the case for things like finding midpoints and scaling (as you mentioned). However, should modern languages force the user to deal with such low level details as linearizing a two dimensional array?

Perhaps I was a little too strong in my "anti zero based indexing" rant? Initially, when I started writing the above post, I admitted that there are cases where zero based indexing makes more sense, but my main gripe was with Dijkstra's faulty arguments (which I found somewhat surprising, because I usually find his "rants" amusing and enjoyable), so I ended up removing that confession.

...year 1977 would be in 19th century, 2000 would be in 20th one...

A minor nitpick, and a great source of confusion in the great 0-based/1-based indexing argument:

I think you mean the year 1977 would be in century 19, as it would still be the 20th century, because 20th is an ordinal, and ordinals always start at the 1st. (Currently we don't number our centuries; we reference them by ordinals.) This is my main gripe with zero-based indexing: it trades one type of complexity for another. What does a[1] mean? Does it mean the 1st or the 2nd (both positional questions)? When we are dealing with an ordered sequence, getting at the nth element can become confusing.

### ordered collection ?

"What does an index do? It tells the position of an element in an ordered collection."
You can maybe look at it differently and much simpler as a partial function over N. Then it is probably nicer to start at n=0.

### Memory manipulation mangles mathematical meticulousness

I suspect that "the modern trend" is strongly influenced by C-like languages and the fact that if p is a pointer, then p[0] == *(p+0) == *p. That's a property with consequences which seem stronger to me than any mentioned by Dijkstra.

Still, picking one approach or the other here seems mainly a necessity because of inflexible languages (and people). I don't buy Dijkstra's "choice is too confusing" argument - there are times when it's useful to be able to number starting from 1, perhaps because it reflects some aspect of the problem domain. I like languages that let you specify the ordinal range for an indexed sequence.

### Are you suggesting C is

Are you suggesting C is inflexible in this regard? Remember the evil of pointer arithmetic?

int a[5];
int *offset_a = a - 1;
offset_a[1] = 1;
offset_a[5] = 5;
assert(a[0] == 1);


### Undefined pointer arithmetic

The pointer subtraction in the second line of your example is undefined. See the C FAQ.

### Why Numbering Should Start at 1 In Most Cases

While zero may be the most natural point of departure for writing loops in C and its ilk, it wreaks havoc on the intuition of End Users, who don't think of starting with the zeroth element of a sequence. Indeed, having the upper bound of the loop outside the range of the sequence it targets, at N+1, all but invites overflow problems in hastily written code.

A better solution is to simply map your desired operation over the sequence and let the language handle the iteration internally, at which point the value of subscripting from 0 vanishes. In the real world, users don't think in terms of 'how many elements come before the one I care about'; they think in terms of wanting the first, second, third, etc., and then plugging that number into their subscript call.

So, why not offer both constructs, with whole number indexing as the default and the option to specify 'an offset indexed array' if you *really* want to start counting from 0.

Remember, in the real world, if an End User has a nice shiny nickel, he or she has one nickel, nickel one, not nickel zero!

Of course we should be able to have our cake and eat it too. Why not use 0-based indexing for loop counter variables, and absolute indices for user variables, constants, and immediate index values. Also, any truly user friendly language should support the symbolic offsets/accessor functions 'first', 'second', 'third', etc. and 'last', 'second to last', 'third to last', etc.; and we could even throw in 'middle' (which would return nothing for even length arrays), 'left of middle', 'right of middle', 'second left/right of middle', etc.

The bottom line here is that for casual End Users like spreadsheet tinkerers, dereferencing should correspond to real world intuitions and not those of a systems programmer.

So basically, I would argue that Luke has the right idea in asking about specifying ranges like 'from 5 to 8' and 'four elements starting from 5', and that we should implement in our languages a DSL for accessing array elements with the very quasi-natural language expressions he offers to frame the problem. By letting the programmer directly encode what he or she wants to access, rather than making them express their query as an offset from zero, we shift this translation task from the programmer to the compiler and thus eliminate an entire class of errors, like those that arise if an End User were to naively transcribe a FORTRAN function into a language with different subscripting semantics.

Of course, if one doesn't understand a function one is trying to transliterate, one can't express it in such a Quasi-Natural Language (QNL) form; so if our language's End Users were likely to want to port in a sizable FORTRAN legacy codebase we might want to offer a FORTRAN keyword, to signal to our language what semantics should apply to an explicitly denoted block of transliterated code. We could then let our development tools normalize it and autogenerate a native functional equivalent that would retain the original FORTRAN source as a comment! Given the highly constrained domain of an unambiguous programming language, such machine translation is *vastly* more tractable than translating Natural Language.

Remember, in the real world, if an End User has a nice shiny nickel, he or she has one nickel, nickel one, not nickel zero!

That's a different question: "how many nickels do I have". This whole issue depends on first deciding what question you're most interested in. Dijkstra proposes asking the question "How many elements appear before the one you want to identify?" BTW, I dispute the claim that End Users possess intuition.

### The Case for End User Intuition

BTW, I dispute the claim that End Users possess intuition.

Okay, I'll bite (although I will assume that this aside may have been at least partially tongue in cheek) because as the head of The Institute for End User Computing, it is my job to make the case for 'intelligent End User life out there'.

However, I'll agree with you in the context of our discussion, in so far as End Users probably don't think about using offsets and iterating over subsequences by advancing one or more pointers as a natural way to solve any sort of problem.

So even the idea of framing Dijkstra's question in the first place would most likely be outside the scope of their intuition, which is a quite different thing than saying that they don't possess intuition about any sort of computation.

Indeed, End Users would most likely assume they could just name a sequence and do something to all of the elements in that range, because we humans natively perform that sort of process in closed form.

If a number in square brackets following a name represents some sort of offset or component element descriptor that stands for one of the items in a list identified by that name, would "my_list[2]" mean the second or third item in "my_list"?

I'd be willing to bet that a statistically significant number of End Users would intuitively pick the former over the latter. This should be empirically testable and repeatable which would validate my hypothesis that End Users do have intuition with respect to such notation. QED

### Intuition

Okay, I'll bite (although I will assume that this aside may have been at least partially tongue in cheek)

You assume correctly. However, dragging intuition into a discussion is a guarantee of fuzzy thinking to follow. At the very least, you need to define what you mean by the word. For example, a dictionary definition doesn't seem to apply in this case: if "direct perception of truth, fact, etc., independent of any reasoning process" or "pure, untaught, non-inferential knowledge" is at work here, how do we explain the fact that "third" in one context can mean "fourth" in another, as I've pointed out in another comment? There does not seem to be an argument for either meaning being inherently more intuitive.

What I suspect is actually meant by "intuition" here is exactly what Jef Raskin proposed, which is that Intuitive equals familiar. But "familiar" is dependent on what the user has already been exposed to, and varies according to a wide range of factors. Certainly, assumed familiarity can be a reasonable basis for a design, but it should not be confused with some sort of ability to directly perceive truth. Only Dijkstra possessed that.

### zero is space centric

My preference for zero based indexing is related to Dijkstra's question rephrased.

Anton van Straaten: Dijkstra proposes asking the question "How many elements appear before the one you want to identify?"

I prefer the question: "How many members must you skip before the desired spot?" And sometimes I use the name "skip" instead of position or index to emphasize the issue (especially when a data structure has a btree shape such that indexing is skipping-centric). It unifies terminology for current members and future additions to the end. If eos (end of sequence) is the count of members, it's also the position at which new members append when skipping all current members. Since eos and size() are synonymous, this eliminates opportunities for still more off-by-one errors.

My view is spatial (relative positions of items occupying space) and constructive, instead of verbal and declarative. I prefer semantics most cleanly expressing accumulation and editing of existing sequences over time -- it seems less messy with zero based positions. Editing is all about calculating where to put things, which involves knowing how much to skip before making cuts. Zero-based distance is clear.

Iterating over sequence members seems a minor use case. Instead, if you want to find what causes awkwardness and what doesn't, try expressing insertions and deletions in a sequence, with use cases including empty sequences and appends. Zero based indexing seems to have fewer edge conditions.

Dijkstra proposes asking the question "How many elements appear before the one you want to identify?"

I always thought that question was only asked to justify his position. Why would the person be asking about the element before the one they wanted? Consider...

Dijkstra: Take the door after the third door on the left.
Everyone else: Take the fourth door on the left.

### multiple true perspectives

Jonathan Allen: I always thought that question was only asked to justify his position.

I was never aware of Dijkstra's paper or his question before I read this thread, and yet his question was like one I asked myself a few years ago, when I wasn't trying to compare zero and one-based indexing. I was trying to summarize arithmetic on (e.g. byte) sequences with the fewest exceptions in words.

But Dijkstra's question is awkward because it seems to care about elements before a member you want. Instead, it's clearer to ask the question in terms of how a sequence finds a position for a member, whether the member has been added yet or not. (Yeah, you might consider this a systems programmer view, but it doesn't invalidate the idea that multiple simultaneous viewpoints can be true.)

Look at it this way. The index has meaning only in the context of a relationship between sequence and member. The sequence seems primary here, since it's only because the sequence is ordered that indexes are ordered too. So I think it's natural to give priority to the sequence's coordinate system, which has an origin at zero when a sequence is empty. If you wanted to use the same notation before and after you had added the first member, you can make eos a legal index for writing: s[0] = x.

(Note to functional programming folks: assignment needn't mutate state in expressions like this; it can instead return a new value where the change is visible, without modifying old state. And you can make it cheap using copy-on-write with immutable state sharing.)

The problem in terminology resembles that of local and global coordinate systems, say when trying to refer to something inside a sub-view when each view has its own view transform. (Think about mapping mouse clicks global-to-local into nested views from system events, or the reverse local-to-global when expressing view coordinates in a message that must make sense to entities outside the view.)

Both local and global coordinate systems are "true." Arguing that one is nonsense because the other is more understandable is silly. But the local coordinate system of the sequence is "object oriented" and therefore unpalatable to new users, because they don't have as much practice seeing from other viewpoints than their own. But the object's perspective is going to generate a more consistent description of a series of things that happens to the object.

### When is the third floor not the third floor?

The person isn't asking about the element before the one they want, they're asking how many elements to skip, as Rys pointed out.

Consider...

Dijkstra: Take the door after the third door on the left.
Everyone else: Take the fourth door on the left.

This example depends on biased language. In England, if you were talking about floors in a building, the floor that (U.S.) Americans would identify as "fourth" is called "third". So the above example becomes:

England: Take the lift to the third floor.
U.S.: Take the elevator to the fourth floor.

Somehow, those wacky Brits actually manage to communicate locations in office buildings to each other, despite using zero-based counting in that context (or "ground-based counting"). I like the Sapir-Whorf undertones here!

### "Intuition" and "end users" are dangerous phrases

"Intuition" means nothing more than one's un-/sub-/pre-conscious response to a situation based on prior experience. Education is all about providing new experiences and tools for dealing with new experiences.

"End users" are apparently the final arbiters of all matters, having reached that pinnacle of judgement on the basis of having no experience or training.

It reminds me of the scenario of an engineer presenting a design to management which drew on recently-published research results, only to have the presentation dismissed with the comment, "I've never heard of this stuff."

As for the posited great unwashed masses not being able to understand beginning with zero, that notion doesn't survive the following test:

How many years old is a child during her/his first year of life?

Most people of my acquaintance (including non-programmers) have no trouble understanding "Zero" as the answer.

(PS: OK, the attitude above is tongue-in-cheek, based on the numerous comments posted regarding EWD's curmudgeonly mode of expression. ;-)

### How many years old

As for the posited great unwashed masses not being able to understand beginning with zero, that notion doesn't survive the following test:

How many years old is a child during her/his first year of life?

Most people of my acquaintance (including non-programmers) have no trouble understanding "Zero" as the answer.
All right, I concede that "End users" can count from zero.

Now, could you explain to me again what the relation is between one's age and the position of elements in a list?

### End users aren't the right audience

I agree that end user intuition is probably that indexing should start with 1 and if I were creating a spreadsheet program I would index from 1. However, if I am creating a programming language I intend for it to be used by professional programmers. The great simplification that comes from indexing starting at 0 is worth the effort for a professional programmer to change his intuition. A small upfront cost pays off over one's career.

Problem solved.

### Quaint, but...

While forcing a programmer to explicitly allocate an integer to iterate an array has a certain 1950's sort of charm, it seems like we, as a civilization, should move on.

(And, flippant one liners aside, I would note that we, for the most part, have. Even Java. Java!)

Happy New Year!

Cheers,
Carson

### What do people actually do?

Starting with the issue of numbering subsequences, one clearly doesn't want confusion over whether limits are inclusive or exclusive to lead to off-by-one errors. Indeed the whole issue of whether a limit is inclusive or exclusive is hair that is best shaved off. The way to do this is to number the gaps between the items, and specify a subsequence by naming the gap at the start and end. This method works out very neatly in two ways. The length is the difference between the start and end numbers. Sequences chain neatly with the end of one being the start of the next.

This still leaves the question of whether to number the gaps between five items like this

1 a 2 b 3 c 4 d 5 e 6

or like this

0 a 1 b 2 c 3 d 4 e 5

Starting from zero is the obvious winner for naming the gaps.

However we are still left with the question of how to number the elements. This question reduces to asking how we wish to simplify the description of unit-length sequences. Is [start=n,end=n+1] to be known by its start or its end?

Look at what people actually do. We can see what people do by looking at the way in which they number years and centuries. It is really ugly.

There is no year zero. We count 3 B.C., 2 B.C., 1 B.C., 1 A.D., 2 A.D.,...

So the first hundred years are years one to one hundred and we call it the first century. That is fine. No problem there. The next hundred years are years one hundred and one to year two hundred. Is that upper limit inclusive? It had better be. Since we started the second century at year one hundred and one the second century must include year two hundred or else it will only have ninety-nine years.

We have painted ourselves into a corner. The century name doesn't match up with the first two digits of the four digit date. The Glorious Revolution was in 1688. It is easy enough to remember that there is an off-by-one problem here, but was the Glorious Revolution in the sixteenth century or the seventeenth century? The rule is to add one. 1601 is in the 17th Century; 1699 is in the 17th Century. But where are 1600 and 1700?

Everyone celebrated the new millennium, the start of the twenty-first Century, on the night of 31st December 1999/1st January 2000, one year early. Aligning the big boundaries with the big carries is irresistible. We can summarise what people actually do: they start indexing from one, but somewhere between a thousand and two thousand they slip in a ninety-nine-year century, and flip to zero-based indexing.

The argument that sequences should be indexed starting at one because that is what people do leads to a false conclusion. The conclusion doesn't follow because the premise is false. People don't start indexing at one. They do something more complicated and informal which is unsuitable for use in computer programming.

Nevertheless there is a strong moral to be drawn. A controversial way of phrasing it is to say that people start with one-based indexing because they are naive and flip to zero based indexing because one-based indexing sucks and they grow sick of the lossage.

Alternatively we might emphasise that computer programs are for people to read, not for computers to execute. Therefore programming language designers must be psychologists, watching the people around them and noticing what they really do. They have the big parties at 1999/2000, not 2000/2001. Which tells you that when they read a[2000] in a computer program they will assume that there are two thousand earlier entries in the array, a[1999], a[1998],... all the way down to what? Your computer programming language cannot stick in a 99 year century somewhere along the way, so your design is forced: it has to go all the way down to a[0].

### They have the big parties at

They have the big parties at 1999/2000 not 2000/2001. Which tells you that when they read a[2000] in a computer program they will assume that there are two thousand earlier entries in the array,...

I don't buy this. There is no reason to come to this kind of conclusion when there is a much simpler explanation: 2000 is a nice big round number. That said, I think you do have a point in regards to counting out years, but that's because we are counting, not indexing. Natural numbers are for counting, and natural numbers do indeed start at zero. The argument here is about indexing (positioning) elements in an ordered sequence, which is classically the domain of ordinal numbers, not natural numbers. If I were told that x was the 2000th element of sequence S, then you bet I would assume that there are 1999 elements before it, because it is the 2000th.

I, of course, can turn this argument around. If we start indexing at 0, then when asked to give the 2000th entry in the sequence, I would have to index at 1999: we're off by one. If I had a sequence of size 2000 and I wanted the last element, I would likewise have to index at 1999; again I'm off by one. The point is that these are positional questions (as is often, but not always, the case when dealing with indexes) and our thinking will be off by one when we use 0 based indexing.

### Day 0 two-fer

They have the big parties at 1999/2000 not 2000/2001.

And the ones who do delay their turn-of-the-century parties to the *end* of **00 ought to be careful.

As to the main topic, I'm with Stan: "Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration." -- Stan Kelly-Bootle ...

### Why bikesheds should be black

Red is the lowest-frequency color visible to humans, but it is not the primary color. The color of the cosmos is in the microwave range, and is not visible: the most primary color is black. Black also looks cool, especially for bikes.

### Happy New Year

Nothing like starting the new year with a religious debate -- and, since in the sphere of programming we rarely have to wrestle with Truly Important Questions, aesthetics always tends to inflate into religion.

I don't think I could put it better than Brian Hayes in his interesting survey The Semicolon Wars (previously mentioned in this post), from which I quote:

Still another perennially contentious issue is how to count. This one brings out the snarling dogmatism in the meekest programmer. Suppose we have a list of three items. Do we number them 1, 2, 3, or should it be 0, 1, 2? Everyone in computerdom knows the answer to that question, and knows it as an eternal truth held with the deepest, visceral conviction. Only one of the alternatives is logically tenable. But which is it? Consider the Java expression Date(2006,1,1); what calendar date do you suppose that specifies? The answer is February 1, 3906. In Java we count months starting with 0, days starting with 1, and years starting with 1,900.

### Dijkstra's Personality?

Dijkstra never shied away from expressing opinions on religious matters. Not knowing him well enough, I wonder whether this was due to a perfectionist personality? Or was he using it as a teaching technique? Perhaps he felt that bringing emotions into the fray helped engage people in these matters, such that you could use that energy to teach more valuable lessons?

### Dijkstra reminds me...

... of "Mikey" on the old Life Cereal commercials: "He hates everything!" I challenge anyone to find anything positive that Dijkstra has ever written about any computer language, for instance. I think he just enjoyed playing the role of curmudgeon. Not that I don't respect his contributions or his intelligence; I do, very much. But his negativity is so consistent and over the top that I have a hard time taking him seriously, even when I agree with him completely (which is often). I find it curious that someone with Dijkstra's views didn't try to design a general-purpose programming language himself, instead of just criticizing everything everyone else did. My guess is that he did (privately) and failed, and that that left him somewhat bitter about the entire enterprise.

### Algol 60

Algol 60 in particular Dijkstra seemed to like. See EWD1284 especially, also EWD1298.
In these talks it seems Dijkstra had something positive to say about every language he mentioned, though he does have many complaints about most of them.

### Here's a positive Dijkstra quote

This is from his Turing award lecture in 1972, "The Humble Programmer":

The third project I would not like to leave unmentioned is LISP, a fascinating enterprise of a completely different nature. With a few very basic principles at its foundation, it has shown a remarkable stability. Besides that, LISP has been the carrier for a considerable number of in a sense our most sophisticated computer applications. LISP has jokingly been described as "the most intelligent way to misuse a computer". I think that description a great compliment because it transmits the full flavour of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts.

He also was extremely positive about ALGOL 60 (and rightly so; it is a brilliant design with ideas decades ahead of anything else).

### OK, I'll give you ALGOL 60...

... but that LISP quote -- talk about a back-handed compliment! It sure sounds like deep down he didn't _really_ like LISP but felt he had to at least give it some props. "I think Lambda The Ultimate is by far the most intelligent way to misuse a blog." Hmmm...

Personality aside, the impression I get from reading Dijkstra is that his main obsession was in proving programs correct, and you can see why a very simple language with only a few carefully-chosen constructs would appeal to him. But he also wanted programs to be as efficient as possible, so you can see why he would prefer ALGOL to LISP (or at least the LISP of that time). I imagine he also liked PASCAL (anybody want to check?). Working at a high level of abstraction didn't seem to be something he really cared about much, beyond the "structured programming" level, of course.

### Efficiency?

In my opinion, well, yes and no.

Correctness, yes, and efficiency, also yes, in the sense of finding the best algorithm for solving a problem. Efficiency, in the sense of being as fast as possible, no. So, as for that being the reason to favor ALGOL over LISP, then no. My impression of his back-handed compliment to LISP is that it had more to do with the use of LISP rather than the language itself: Dijkstra was not a big AI fan. (Strong AI, from that viewpoint, is just kind of silly, and weak AI does not really mix with either provable correctness or elegant algorithms for a specific problem.)

### Curmudgeons wanted

I think [Dijkstra] just enjoyed playing the role of curmudgeon.

Considering the language landscape at the time Dijkstra was writing, I'd say he had some justification!

### Dijkstra vs. general-purpose languages

I find it curious that someone with Dijkstra's views didn't try to design a general-purpose programming language himself, instead of just criticizing everything everyone else did.

He did: the "guarded command notation". To the best of my knowledge, he used it essentially unchanged from the mid-seventies on, at least until I saw his proof of Sylvester's line problem. (EWD1016, 1988; geeze, I suddenly feel old.)

### Numbering Starts with Zero

It is available in English:

http://www.purl.org/stefan_ram/pub/zero

Or in German:

http://www.purl.org/stefan_ram/pub/null

### Array Indexing in C

Indexing by zero in a language like C makes a lot of sense. For example, the first element in an array in C has the same location as the pointer to the array itself. The offset from the beginning of the array for the first element is therefore zero.

When using array subscripting, the address of the element referenced is determined by the pointer address plus the index. For x[n], we'd get the address by x + n. However, if we began indexing from 1, we'd have to do x + n - 1. This is very awkward and would lead to a good deal of confusion when dealing with anything involving pointer arithmetic. It would also mean that *(x + 2) would return the same value as x[3], and so on and so forth.

In short, you can't escape the "off by one" problem, but starting from zero is certainly the better of the two possibilities.
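The offset arithmetic above can be checked directly using Python's standard ctypes module (used here purely to peek at raw addresses; this is a sketch for illustration, not a recommended way to access arrays):

```python
import ctypes

arr = (ctypes.c_int * 4)(10, 20, 30, 40)
base = ctypes.addressof(arr)
size = ctypes.sizeof(ctypes.c_int)

# With 0-based indexing, element i sits at exactly base + i*size: no "- 1" anywhere.
for i in range(4):
    assert ctypes.c_int.from_address(base + i * size).value == arr[i]

# The first element and the array itself share an address: the offset is zero.
assert ctypes.c_int.from_address(base).value == arr[0]
```

With 1-based indexing, every address computation would instead read base + (i - 1)*size, which is exactly the awkwardness described above.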

### Why not leave the first element of an array unused?

In this day and age with large computer memories and heap-allocated garbage-collected objects, 4 bytes per array will not be missed in most computing tasks. I think it is better to start indexing at 1, and simply avoid the subtraction by not using the first element of the array.

### Or even better use the first

Or even better use the first element to store the size of the array.

In most languages, arrays are tuples of a pointer to the data plus the length of the data region, so in some sense what you ask for is already done.

In C++, the length of the array is indeed stored at the beginning of the array, and that's the reason operator delete [] works; but it is not part of the standard.

### yes!

I think it is better to start indexing at 1, and simply avoid the subtraction by not using the first element of the array.

This is a handy pattern! The only place I've seen it done was amongst Darius Bacon's simplifications in screenful #2. I was impressed that he removed a lot of distracting calls to 1- in such a simple way.

Darius has a knack for finding simple solutions that would somehow never occur to me. My favourite is awklisp representing conses as integers and having car and cdr each be not a function but an associative array. The integers even include type-tag bits, and the garbage collector cleans out the arrays.

We need to lure some people like Darius and Avi Bryant back onto LtU. I seem to recall that they're attracted by code snippets :-)

### We need to lure some people

We need to lure some people like Darius and Avi Bryant back...

I agree.

### Darius

Darius is my buddy... want me to talk to him? ;-)

### Sure! Though I am sure he

Sure! Though I am sure he knows we miss him...

### Necessity is the mother of invention

Not to detract from Darius' brilliant work, but I represented LISP cells this way in walk, which he references in the manual for his interpreter. The reason in both cases is simple: the only data structure Awk offers is the array!

### Walk

That's right, I'd been impressed by walk before I wrote awklisp. walk uses the original awk dialect which doesn't even allow function definitions -- it was really neat how this still could implement Lisp cleanly with almost nothing to work with. There was no glory in doing it over in the more powerful new awk -- awklisp was all about razzing a commercial extension language we had to use at work that was so poorly implemented, you could do better even in awk.

Thanks for the kind words, guys.

### Seems simple enough to me

If you are, mentally, using the numbers as labels on the slots, the first slot is "one".

If you are, mentally, using the numbers as offsets in units of slots, then the first slot requires you to offset by nothing.

### Specifying ranges

If our integers are limited in range, as they too often are, then every notation will be unable to represent something. Assuming allowed values in [0, maxint]:

• a ≤ x < b: cannot represent [x, maxint].
• a ≤ x ≤ b: cannot represent the empty interval.
• a ≤ x < a+w: cannot represent [0, maxint].

Hacks can allow us to sneak past the above corner cases, such as using b<a to denote an empty interval in the second case. Such hacks may require special handling and are often very fragile.
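Python's built-in range (half-open, a ≤ x < b) illustrates the trade-off: the empty interval comes for free, while a closed-interval convention needs exactly the b < a hack just described. The closed_interval helper below is mine, for illustration only:

```python
# Half-open [a, b): the empty interval falls out naturally.
assert list(range(5, 5)) == []           # a == b: empty, no special case
assert list(range(5, 8)) == [5, 6, 7]    # and the length is simply b - a

# A closed interval [a, b] must abuse b < a to mean "empty":
def closed_interval(a, b):
    return [] if b < a else list(range(a, b + 1))

assert closed_interval(5, 4) == []       # the fragile sentinel in action
assert closed_interval(5, 5) == [5]      # a == b is now a one-element interval
```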

It is occasionally useful to further abuse the representation in order to denote "split" or "wrapping" intervals of the form [0, b]∪[a, maxint], but this is rare enough that a language should not be designed with this in mind.

Some languages have a preference for one notation over another. Pascal uses inclusive upper bounds to specify types. Python uses exclusive upper bounds in slices and ranges. Perl uses inclusive upper bounds in slices. OCaml has zero-based arrays but inclusive upper bounds in for loops, causing a certain impedance mismatch each time.

Many interval operations (union, intersection etc) are easier to express with either of the start:end representations than using start:length. I usually switch representations for the operation and then convert back.
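To sketch that last point in Python (the function names here are mine, not from any library): intersection is direct on (start, end) pairs, while a (start, length) pair is easiest handled by converting, operating, and converting back:

```python
def intersect(a, b):
    """Intersect two half-open intervals given as (start, end) pairs."""
    start = max(a[0], b[0])
    end = min(a[1], b[1])
    return (start, max(start, end))  # clamp so an empty result stays empty

def intersect_sl(a, b):
    """The same operation on (start, length) pairs: convert, operate, convert back."""
    s, e = intersect((a[0], a[0] + a[1]), (b[0], b[0] + b[1]))
    return (s, e - s)

assert intersect((0, 5), (3, 9)) == (3, 5)
assert intersect_sl((0, 5), (3, 6)) == (3, 2)
```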

### sjoerd visscher and Carson

sjoerd visscher and Carson Gross are right, of course. No indexing = no issue.

However, I will add my opinion that 1 ≤ i ≤ N is more natural. I don't care that specific algorithms want to map values to 0. Multi-dimensional array linearisation is a compiler issue, not a language or programmer issue.

For array slices, a half-open interval is superior so you can tell the slice length by difference. If I assume that indexes start with 1 then only a ≤ x < b or a ≤ x < a+b makes sense.

### The good thing about 1-indexing

The good thing about 1-indexing is that you can reuse 'slot 0' for semantic information about the array. In all the languages I've created, arrays store their size in slot zero. The array (buffer) can be passed to functions safely for filling, the 'sentinel' issue is avoided for strings and you can easily index the last element.

Then again, I started with Pascal, so maybe the elegance of Wirth's design has influenced my later prejudices...
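A minimal Python sketch of the slot-zero scheme (the class and its names are hypothetical, not taken from Pascal or from the poster's languages): a 1-indexed buffer whose slot 0 stores the element count, so the last element is always b[len(b)]:

```python
class Buf:
    """1-indexed array that keeps its length in slot 0, Pascal-string style."""
    def __init__(self, capacity):
        self._slots = [0] * (capacity + 1)  # slot 0 is the length

    def append(self, x):
        self._slots[0] += 1
        self._slots[self._slots[0]] = x

    def __len__(self):
        return self._slots[0]

    def __getitem__(self, i):               # valid indices are 1..len
        assert 1 <= i <= len(self)
        return self._slots[i]

b = Buf(4)
b.append("a")
b.append("b")
assert len(b) == 2 and b[1] == "a"
assert b[len(b)] == "b"                     # the last element, no "- 1" needed
```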

### metainfo in oob space and sentinels

You're using slot zero for metainfo when it's out-of-band (oob). But when zero is also a valid index, you can still keep metainfo in oob space for an array; making one index oob for this purpose changes little. You can reserve an arbitrary amount of space for oob metainfo before the array starts.

On the separate topic of sentinels (like nuls at the ends of C strings): it's nowadays always a good idea to use a pointer-plus-length representation for strings of all flavors, except when you're forced to speak to an API (e.g. standard C library calls) that requires a terminating nul, and even then you can throw in trailing nuls that get used by no one except retro APIs.

The reason to prefer pointer plus length representation is memory latency. It's prohibitively costly to touch memory often if unnecessary. Scanning to find an end nul after you've found it once is a waste of cycles. String heavy applications that use terminating nuls pay a heavy penalty in memory bus latency if strings are scanned in tight loops in many places in the code. So one would hope languages don't rely on sentinels for primitive strings.

### Semantics again

Although this whole thread is prone to degenerating into quibbles over semantics, I must ask: when is 0 'a valid index'? I have on my desk here an array of three pencils lined up. What do I have in my hand when I pick up the zeroth one? Is it even a pencil?

And I totally agree with you about pointer+length representation. In my world, the pointer points to the length, which defines the buffer that begins (1-indexed) right after. Easy and natural!

My world rocks.

### XKCD

This post reminds me of this classic comic from XKCD:

http://www.xkcd.com/c163.html

### too many issues involved here but as of ranges ...

> What's the modern trend for indexing from zero vs. indexing from one? And for specifying ranges [...]

Indeed, each trend is present in a set of languages and has many supporters. And of course, most of the disagreement comes from either confusing, or trying to tackle at once:
• order (position);
• cardinality (in particular, offsets);
• gap designation (as opposed to item designation);
• range designation;
• backward (last-to-first) position numbering, in addition to the forward-directed.
All these are related to sequences and to each other but are nevertheless different issues.

While both the 0-based and the 1-based camps have valid arguments in their support, it seems to me that in the closed vs. half-open range controversy half-openness is a clear winner. Not only do endpoints fit without overlapping when adding up ranges, and zero-length ranges become representable, but consider also these two points (I don't recall having seen them mentioned elsewhere, but I think they are important):
• A correct expression for "x is in the range between a and b", in the sense that min(a,b) ≤ x < max(a,b), is a≤x ≡ x<b. For a closed range ([a,b]) such a succinct expression is impossible.
• Splitting [a,b) into [a,t) and [t,b) makes sense for all a≤t≤b (including a=t=b), in both the integer and the real-number domains. Again, this is impossible to do with closed ranges.
An important particular case of the latter is halving a range with t=(a+b)/2, as e.g. in binary search. One needs properly formed sub-ranges, and indeed they are properly formed, for any reasonable interpretation of the division operation, i.e., whether it yields exact or approximate quotients.
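The halving argument can be sketched in Python, whose ranges are already half-open; this is a minimal bisection sketch in which every sub-range keeps the form [lo, hi):

```python
def bisect_left(xs, target, lo=0, hi=None):
    """Locate the leftmost insertion point for target in sorted xs,
    working only with half-open sub-ranges [lo, hi)."""
    if hi is None:
        hi = len(xs)
    while lo < hi:
        mid = (lo + hi) // 2      # [lo, mid) and [mid, hi) are both well-formed
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo

assert bisect_left([1, 3, 5, 7], 5) == 2
assert bisect_left([], 5) == 0    # the empty range [0, 0) needs no special case
```

Note that the closed-range version would need mid-1/mid+1 case analysis at both ends and a separate test for the empty array.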

### Depends on the question indeed.

Dijkstra is completely correct regarding the question he asked. It's just that the question Dijkstra asks is not the one programmers such as myself ask when dealing with arrays.

If I'm asking for an item within a sequenced group, I begin with 1 (athletes do not get a trophy for being 0th). If I'm asking for an offset from a known position, then I would begin at zero (and in these cases often include negative numbers as valid index values). If I'm searching for a phone number, I index by name (which here are nothing more than base 26 numbers, can hence be sorted, and it doesn't matter if 'a' is equivalent to 1 or 0).

The reason programmers (as opposed to all other professionals, including mathematicians, physicists, and ancient Greeks) tend to want to start counting with 0 is because old C habits die hard, and we tend to want to use all the available states a specific piece of memory can hold. But this limit is purely psychological, and it's not hard to find where such limits on thinking have stymied people from making great mathematical and scientific discoveries.

In a C array, the name designates relative position (by offset), not absolute position (which for all intents is identical to a name). Array notation was designed to avoid addressing memory locations using pointer arithmetic directly, and it makes no sense to begin counting with 0 in languages that do not have the concept of pointers. An array object is already a point of reference, and serves as the ground floor.

Speaking of that, those building floors were an excellent example. Do most people think of navigating buildings in terms of offsets, or in terms of identifiers? Does it matter if the ground floor is called 1 or 0, or if the floor below that is called 0 or 'basement'? There are buildings that lack a 13th or 6th floor, and some even have multiple floors of the same number.

The values used to begin an index can and should reflect the kind of question being asked. Since arrays are used to group countable elements under a numeric sequence, they should begin with 1.

### it is not only because of pointers

"The reason programmers [...] tend to want to start counting with 0 is because old C habits die hard
[...] it makes no sense to begin counting with 0 in languages that do not have the concept of pointers"

• Zero-based indexing can be found in languages that have nothing to do with C, and in fact some of them predate C.
• Zero-based indexing is used in languages that have no notion (neither explicit nor implicit) of pointers at all, and semantically are very far from using pointers for referencing. (APL, J, and K are but one line of examples.)
Apparently, there are good reasons to prefer 0-based indexing other than pointers and C (just as there are reasons to prefer 1-based indexing). Some of them were mentioned in this thread.

### Dijkstra words

Dijkstra is completely correct regarding the question he asked. It's just that the question Dijkstra asks is not the one programmers such as myself ask when dealing with arrays.

In fact, the quote mentions "sequences" and "subscript" which are terms normally associated with mathematical sequences, not with lists or arrays. However I believe that he really meant arrays.

### It really all about the commas

It doesn't matter whether you start with 0 or 1; outputting a comma-separated list is difficult either way:

  module Enumerable
    def join(sep)
      result = ""
      first = true
      each do |x|
        result << sep unless first
        result << x.to_s
        first = false
      end
      result
    end
  end


It doesn't matter whether the array is 1-based or 0-based, because you never see it:

  module Indexable
    def shuffle
      for index in self.range do
        self.swap(index, random(index, self.range.upper))
      end
    end
  end


If you want to shuffle your array (say just prior to sorting it ;)

### Commas

Interesting that such a simple task as printing a comma-separated list is fiddly in many languages. Here're some decreasingly nice snippets:

(format t "~{~S~^, ~}" '(foo bar baz))

#(foo bar baz) do: [:x | Transcript show: x]
separatedBy: [Transcript show: ', ']

L = ["foo", "bar", "baz"],
lists:foldl(fun(X,A) -> A++", "++X end, hd(L), tl(L)).


### Pretty and short in Haskell

join sep (x:xs@(_:_)) = x : sep : join sep xs
join _   xs           = xs

Probably we could do better by expressing this as a fold and/or making it tail recursive but I find this short and nice enough.

join :: (Show a) => String -> [a] -> String

join sep [] = ""
join sep x  = foldr1 (\a b -> a ++ sep ++ b) $ map show x


I wonder if there's a commonly accepted combinator that shows explicitly that we're dealing with two cases, empty and full lists.

Something like

caseEmptyFull :: b -> ([a] -> b) -> [a] -> b

caseEmptyFull ifEmpty ifFull [] = ifEmpty
caseEmptyFull ifEmpty ifFull x  = (ifFull x)


We can then write

join sep = caseEmptyFull ""
(foldr1 (\a b -> a ++ sep ++ b) . map show)


That snippet seems to make it clearer that we're only passing full lists to foldr1 (and that we need to special-case empty lists); wrapping foldr1 might be better.

### don't show, don't tell

The map show bit makes the function less generic. My join had the polymorphic type a -> [a] -> [a], which lets us use it in more places.

Your caseEmptyFull is similar to the maybe function. Actually, every ADT has a natural case function associated with it, which is useful to express such idioms (for CADTs you have a natural build).

Actually a one-liner, non-tail recursive, definition using this case-like function could be:

list e f xs = case xs of {[] -> e; xs -> f xs}
join s = foldr (\x -> list [x] ((x:).(s:))) []

One could write the generic case in Template Haskell or use the one available in Morrow.

### Natural

I was trained as a child to start at 1. I refuse to regard this as natural or intuitive, because I am aware that it was trained. I am skeptical about its optimality or rationality (let's say for a fixed application), at least a priori, because it is a culture evolved, not calculated: it's probably pretty good, but not the best. And when I learned that the ancient Greek culture started at 2, the whole matter gained a perspective. The whole matter is not a big deal, to be sure: I have numeracy, I can cope with several slightly different schemes, even several slightly different schemes in the same program trying to trick me. But when people strongly assert "this is natural, that is intuitive", I must ask: how do you know it was not brainwashing?

I really like a thing some of you point out: sometimes we index the items, sometimes we index the gaps between the items, sometimes we index something rather... This overloading confuses the matter further. (Again, not to say that the matter is a big deal.)

My preference goes to m≤x<n in general, and 0≤x<n whenever possible. I choose them because I have found them optimal (least tricky calculations (length, joining, splitting) and most elegant empty range) in most cases; this much I read from Dijkstra and found logical, and I would rather re-train myself than miss a logical decision. I am aware that they are annoying in some other cases, and I am happy to use better schemes then. (Again, this is just a small preference; I am quite flexible and numerate. But when some people make strong claims rooted in culture, users, all that partisan stuff, I must indicate an accordingly strong skepticism.) The scheme m≤x≤k exposes k as a particular member of the range, but (to me) the knowledge of k is more a self-gratification than a utility.

Pun time: one of you mentions that 2000 is a nice big round number. Well, along the same line, 0 is a nice small round number. In fact, 1 is pretty odd to begin with, if you get the pun. :)

Fun fact: one of you mentions floor numbering. The British (and consequently most places once ruled by the British) call the ground floor G or 0, and think it is the most natural. The Americans and the Canadians call the ground floor 1, and think it is the most natural. Now, do not think that they're in for a big fight, for they do agree on a most important matter: that English is the most natural, intuitive, superior language. (Oh, they just have to iron out minor issues such as whether it's "bikeshed colour" or "bikeshed color"...)

### Evidence from linguistics

All number systems that originate in natural language, as opposed to sophisticated technical domains such as Arabic numeral writing systems and imperial calendars, start at one, and you can tell the difference between the two by watching how children count. Counting consists of establishing a bijection between the set of counted objects and an initial segment of a counting sequence, which starts with the beginning word and visits each successive counting word in turn, and then identifying a counting word that gives the cardinality of the counted set.

It makes as much sense to count from zero as from one: the only difference is that with 1-counting, the cardinality that you call is the same as the last counting word used in the relation, whereas with 0-counting it is the successor of this counting word. Why the children of no existing cultures do things this way, I leave to the anthropologists, amateur and professional, among you.

### Cardinal /= Ordinal

It makes as much sense to count from zero as from one

But you've said exactly why it doesn't: you have to add one at the end if you start from zero so counting from 1 is less effort. Counting and indexing are different things and even the English language makes a distinction between cardinal and ordinal numbers. So what makes sense for counting (starting at 1) isn't necessarily what makes sense for indexing.

PS Brits index storeys in a building starting with zero (though they call the zeroth floor 'the ground floor').

### natural language and zero

I am not entirely convinced that natural language avoids 0: before a meal (perhaps taken on the 1st floor of a restaurant, one floor above street level[0]), one might order:

"A glass of white as apéritif, please" = 1 wine + 0 pastis + 0 proseccos + ...

after a meal, should the hour be late, one might be more explicit:

"I wouldn't like any coffee, just the check, please"  = 0 coffees + ...

Furthermore, while it is true that in my experience athletes do not get a trophy for being 0th, a quick Google confirms that the Peano-style "champion, 1st runner-up" or "winner, 1st runner-up" is not entirely unknown in natural-language rankings[1].

[0] in the days before elevators and automobiles, the continental conceit was that the 1st floor, besides being first up from street level, was also ranked first as being the "best". Artists and servants lived up in the garrets, and had to climb (and haul water?) up all those flights of stairs, so lower was better. But one didn't want to be all the way down at ground level, which was the domain of horses and cows and goats (and their manure tips), warehouses for heavy goods, and (truly infra dig) shopkeepers. Now that we have elevators and excavators, the owners live on the tops of their buildings, garage their cars underneath, and have more room to keep profitable tenants in the glass-fronted spaces of the ground floor.

[1] we are looking for the place that is maximal in the sense that for any athlete Y, this athlete X has done as well or better than Y. Should the choice of 0 or 1 as an extreme depend upon whether we are adding, or multiplying, or merely ranking?

### Quantification isn't the same thing as counting.

First order logic has quantification, but doesn't have cardinals or ordinals. Your argument that natural language "doesn't avoid zero" is specious, because zero is a *number*, and "nothing" doesn't involve numbers.

### But is it quantification?

About as specious as an argument that first order logic is involved:

• how is one supposed to order "two beers, please"[0] without any cardinals?
champion → 0
1st runner up → S0
2nd runner up → SS0


in the tournament case?

Maybe it's just indoctrination by the economists, but I don't believe I'm doing something fundamentally different when striking a dish from a basket to go from 1 to 0 than when going from 2 to 1. It may take an irregular form, but older IE languages can be even more irregular for small integers, having separate forms for 0, 1, and 2 objects and only switching to the regular form for the (not exactly transfinite) case of 3 and up.

That said, French does offer some support to quantification, in that I've been told that one uses a partitive because "I'll take the soup" implies one wants all of the soup in the house, whereas "I'll take some soup" indicates the rather more common situation of wanting only a part of it.

On the other hand, by the time an order goes to the kitchen (or is repeated as confirmation) it's been turned into a vector space of items with scalar multiplicities. That "some soup" implicitly converts to "soup, N times" when summarized on a ticket. Again, as a multiplicity, how is a ticket with no beer so different from a ticket with 1 beer or with 2 beers? (certes, an English pedant would have one say "a ticket without any beers", but the Germans are perfectly happy saying "in heaven there's no beer"[1])

Bringing this all back to programming languages: with functions, we generally prefer to work with as coarse a topology on the domain as possible (minimizing the case analysis necessary, which improves calculation of properties at compile time and avoids branching at run time). This also holds true for the semantics functor; given an expression to translate, it will be much smoother if our syntax expresses 0, 1, or multiple values in a natural way.[2]

[0] I am told the most important series of phrases to learn in any language is

• "two beers, please" (nb. the accompanying hand signal varies with geography),
• "my friend will pay" (pointing is usually acceptable here)
• "where are the toilets?".

[1] hence we'd better drink it here

[2] why, then, are infix expressions so popular? Perhaps it's the limited dimensionality of the problem: infix offers arities of {0,1,2} and thereby gets up to arbitrary trees, with names extending that to graphs. But unless one can profitably modify the meaning of any facets in those graphs, we only care about the edges.

### Both feel equally natural

Have you ever started counting things and been off by one because you pointed to the first object and said "zero"? I have, several times, because I had been counting from zero a lot. When I've been counting from one a lot, I don't make that mistake -- but zero-based indexing becomes more difficult.

Extrapolating from my experience, either form of numbering should be equally intuitive once you've been doing it for a while. This means that other arguments should be used, such as the fact that zero-based indexing corresponds better to a computer's usual in-memory representation of arrays.

### My experience

(I hesitate to add to this mass of comments, I had no idea my innocent question would generate so much traffic. But.)

I grew up on zero-based languages and always considered them right and natural. After more than 10 years of zero-based programming I finally tried a language that indexed from one and to my surprise I found it much nicer to use. So if I were to choose between zero- or one- based indexing in a new general-purpose high-level programming language I'd choose one-based, all other things being equal.