Zeno’s arrow paradox and calculus

The rigorous reworking of calculus in the 19th century did more than sharpen the calculations used in physical science. It also provided solutions to paradoxes that had puzzled philosophers for thousands of years.

Zeno’s arrow paradox, for example, dates back to ancient Greece. Bertrand Russell discussed it in his book, “The Principles of Mathematics” (1903).

The background is that Russell had studied maths at Cambridge in 1890-3. The syllabus at Cambridge then was old-fashioned and heavily tilted towards mechanics and the maths used in physics. Russell had become interested in philosophical problems, like Zeno’s arrow and other paradoxes. He had friends who were philosophers, like G E Moore and Alfred North Whitehead.

In 1895 he visited Berlin. He went to study economics, and one of the results of the visit was a (sympathetic) book he published in 1896 about the German socialist movement. But he also discovered that mathematical ideas about calculus which he’d never heard of at Cambridge, but which were well known in Germany, could solve a lot of the philosophical problems he’d puzzled over. (Arthur Cayley, professor of pure maths at Cambridge, was a front-rank researcher of world repute, and would have known all those ideas which Russell discovered. However, Cayley was a mild-mannered person: he hadn’t been able to change the syllabus, and by the time Russell got to Cambridge Cayley was old and semi-retired.)

Philosophers in Germany, as distinct from mathematicians, were not very interested in those problems. Russell made the connection. From then to 1913 Russell focused on the overlap between maths and philosophy. He moved on to other pursuits during and after World War One.

Russell cites the paradox in the following form:

“If everything is in rest or in motion in a space equal to itself, and if what moves is always in the instant, the arrow in its flight is immovable”.

Another way of stating it is: “If the instant is indivisible, the arrow cannot move, for if it did the instant would immediately be divided. But time is made up of instants. As the arrow cannot move in any one instant, it cannot move in any time”.

One form in which I have come across it is in economics. In a simplified capitalist economy, at any given instant there is only a fixed quantity of effective demand, constituted by what capitalists have to pay each other at that moment for raw materials of production and replacement or repair of fixed capital, plus what they have to pay workers, plus what they pay each other for their own consumer goods and services. That quantity is what it is. Therefore there can be no growth of aggregate purchasing power.

Russell comments on the paradox:

After two thousand years of continual refutation, these sophisms [Zeno’s arrow paradox, and others] were reinstated, and made the foundation of a mathematical renaissance, by a German professor, who probably never dreamed of any connection between himself and Zeno… [Karl] Weierstrass…

He continues:

For the present, I wish to divest the [statement of the paradox] of all reference to change. We shall then find that it is a very important and very widely applicable platitude, namely: “Every possible value of a variable is a constant”. If x be a variable which can take all values from 0 to 1, all the values it can take are definite numbers, such as ½ or ⅓, which are all absolute constants.

He explains what a “variable” is.

A variable is a fundamental concept of logic, as of daily life. Though it is always connected with some class, it is not the class, nor a particular member of the class, nor yet the whole class, but any member of the class. On the other hand, it is not the concept “any member of the class”, but it is that (or those) which this concept denotes.

Then what motion is:

Motion consists in the fact that, by the occupation of a place at a time, a correlation is established between places and times; when different times, throughout any period however short, are correlated with different places, there is motion; when different times, throughout some period however short, are all correlated with the same place, there is rest.

And then what speed at an instant means: what we usually think of as a thing being in motion at a single point of time, though Russell avoids that usage. It is indeed the case that we cannot define speed at an exact point of time without knowing the object’s position at other instants in some span of time, however small, which includes that instant.

If f(x) be a function which is finite and continuous at the point x, then it may happen that the fraction

\frac{f(x+h) - f(x)}{h}

has a definite limit as h approaches to zero. If this does happen, the limit is denoted by f′(x), and is called the derivative or differential of f(x) in the point x.

If f(t) describes the position of an object at time t, then f′(x) is the speed of the object at time x.
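
To make this concrete, here is a small numerical sketch in Python (my own illustration, not Russell’s), using an assumed position function f(t) = 5t², for which the speed at time t is 10t. The difference quotient settles down towards the speed as h shrinks:

    # Numerical illustration (an assumed example, not from Russell):
    # position function f(t) = 5*t**2, so the speed at time t is 10*t.
    def f(t):
        return 5 * t ** 2

    x = 3.0                                  # the instant we care about
    for h in (1.0, 0.1, 0.01, 0.001, 0.0001):
        print(h, (f(x + h) - f(x)) / h)      # prints 30 + 5*h, settling towards 30

No single value of h gives the speed exactly; it is the limit of these quotients, in the sense defined next, that does.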

Russell explains what the word “limit” means here. The idea behind this definition of limit was developed by the early 19th century mathematician Cauchy, and then refined by Weierstrass. I’ve reworded Russell’s account slightly to conform more to modern usage.

To say that the function f(t) has a derivative d at t=x means that the limit of

\frac{f(x + h) - f(x)}{h}

as h approaches zero is d. In precise terms: given any number ε, however small, we can find another number δ so that for any h with 0 < |h| < δ,

\frac{f(x + h) - f(x)}{h}

differs from d by less than ε.
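
To see the definition at work on a concrete (assumed) example, take f(t) = t². Then

\frac{f(x + h) - f(x)}{h} = \frac{(x + h)^2 - x^2}{h} = 2x + h

which differs from 2x by exactly |h|. So for any ε we can simply take δ = ε: whenever 0 < |h| < δ, the quotient differs from 2x by less than ε, and therefore f′(x) = 2x.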

If the limit in question does not exist, then f(x) has no derivative at the point x. If f(x) be not continuous at this point, the limit does not exist; if f(x) be continuous, the limit may or may not exist.

The only point which it is important to notice at present is, that there is no implication of the infinitesimal in this definition… It is the doctrine of limits that underlies the Calculus.



Carl Boyer comments (The History of the Calculus, p.25) that: “The paradox of the flying arrow involves directly the concept of the derivative and is answered immediately in terms of this… Mathematical analysis has shown that the concept of an infinite class is not self-contradictory, and that the difficulties here… are those of conceiving intuitively the nature of the continuum and of infinite aggregates”.

By “continuum”, Boyer means the number line containing all the real numbers: whole numbers, fractions, and irrational numbers such as √2 and π.

Another idea here which is difficult for everyday thought to get its head round, but which is conceptualised neatly by Weierstrass’s formulation of calculus, is that of a function:

  • having a property at a point
  • it being possible to change the function at any other point, no matter how close, without changing the property at the first-named point
  • but that property being impossible to ascertain from a single “snapshot” of the function at the point
  • and instead depending on what the function does in some neighbourhood of the point, no matter how small.

An example here is continuity. Roughly speaking, we think of a function f as being continuous over a span of values of x if it has no “gaps”: it can be drawn over that span without taking the pencil off the paper. Precisely speaking, we can define what it means for a function to be continuous at a single point x:

\lim_{t \to x} f(t) = f(x)
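
Unpacked in the same ε-δ style as the definition of the derivative above (a standard unpacking, though not Russell’s wording here), this says: given any number ε however small, we can find a number δ so that

|f(t) - f(x)| < \varepsilon \text{ whenever } |t - x| < \delta

Continuity at the point x is thus itself a statement about limits: it depends on how f behaves near x, not just on the single value f(x).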

Consider the “popcorn function” (also known as Thomae’s function):

P(x) = \begin{cases} \frac{1}{q} & \text{if } x \text{ is a rational number, expressed in lowest terms as } \frac{p}{q} \\ 0 & \text{if } x \text{ is irrational} \end{cases}

It is continuous at all irrational values of x, and not continuous at any rational values. Although the function is continuous at π, say, we can’t “draw” it smoothly for even the tiniest range around π, because that range would include rational values of x where it is discontinuous. And we could change P for any number of other values of x no matter how near to π – for example, change P(22/7) and P(355/113), etc., to zero or one or whatever – without making any difference to whether P is continuous at π.

However, we can’t see that P is continuous at that single value π, just by looking at the function at that single point. All we can see by looking at that single point is that P(π) = 0. We need to look at how P behaves in some span around the point… only it does not matter how small that span is.
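
Here is a small computational sketch in Python of why that works (my own illustration; the helper name sup_P_near is hypothetical). The largest popcorn value at rational points inside a window around π is 1/q for the smallest denominator q that fits a fraction into the window, and that bound shrinks as the window shrinks:

    import math

    # Largest value of the popcorn function at rational points inside
    # (x - delta, x + delta): it is 1/q for the smallest denominator q
    # that has a multiple p/q inside the window.
    def sup_P_near(x, delta):
        q = 1
        while True:
            p = round(q * x)              # nearest multiple of 1/q to x
            if abs(p / q - x) < delta:
                return 1 / q
            q += 1

    for delta in (0.5, 0.05, 0.005, 0.0005, 0.00005):
        print(delta, sup_P_near(math.pi, delta))
    # The bound shrinks towards 0 = P(pi): rationals with small denominators
    # are pushed out of ever-smaller windows, which is exactly continuity at pi.

By contrast, at a rational point such as 22/7 the function cannot be continuous: P(22/7) = 1/7, yet every window around 22/7 contains irrational points where P is 0.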



Russell makes a big thing of Weierstrass’s argument including no reference to “infinitesimals”. It is true that for two hundred years before Weierstrass, mathematicians had presented arguments in terms of “infinitesimals”, numbers which were somehow infinitely small and yet not zero, with an uneasy feeling that the arguments seemed to work but were hardly watertight. It is also true that Weierstrass’s way of making the arguments watertight is still the standard way today.

From 1960, however, Abraham Robinson developed an alternative approach, called Non-Standard Analysis, which gives a precise definition of “infinitesimals” and works with them directly. A description of the approach, and an argument for its value, has been written by Joel Tropp.