
A Two-Sided Ontological Solution to the Sleeping Beauty Problem

Preprint published on the PhilSci archive.

I describe in this paper an ontological solution to the Sleeping Beauty problem. I begin by describing the hyper-entanglement urn experiment. I then restate the Sleeping Beauty problem from a wider perspective than the usual opposition between halfers and thirders, and argue that the Sleeping Beauty experiment is best modelled with the hyper-entanglement urn. I then draw the consequences of considering that some balls in the hyper-entanglement urn have ontologically different properties from normal ones. In this context, drawing a red ball (a Monday-waking) leads to two different situations that are each assigned a different probability, depending on whether one considers “balls-as-colour” or “balls-as-object”. This leads to a two-sided account of the Sleeping Beauty problem.

This account supersedes my previous preprints on this topic. Please do not cite previous work.


A Two-Sided Ontological Solution to the Sleeping Beauty Problem

1. The hyper-entanglement urn

Let us consider the following experiment. In front of you is an urn. The experimenter asks you to study very carefully the properties of the balls in the urn. You then go up to the urn and begin to examine its contents carefully. You first notice that the urn contains only red and green balls. Out of curiosity, you decide to pick a red ball out of the urn. Surprisingly, you notice that as you pick up this red ball, another ball, a green one, moves simultaneously. You then replace the red ball in the urn and notice that the green ball also immediately springs back into the urn. Intrigued, you then decide to catch this green ball. You notice that the red ball goes out of the urn at the same time. Furthermore, as you replace the green ball in the urn, the red ball springs back at the same time to its initial position in the urn. You then decide to withdraw another red ball from the urn. But as it goes out of the urn, nothing else occurs. Taken aback, you decide to undertake a systematic and rigorous study of all the balls in the urn.

After several hours of meticulous examination, you are now capable of describing precisely the properties of the balls in the urn. The urn contains in total 1000 red balls and 500 green balls. Among the red balls, 500 are completely normal. But the other 500 red balls have completely astonishing properties. Indeed, each of them is linked to a different green ball. When you remove one of these red balls, the green ball associated with it goes out of the urn at the same time, as if it were linked to the red ball by a magnetic force. Indeed, if you remove the red ball from the urn, the linked green ball also disappears instantly. And conversely, if you withdraw one of the green balls from the urn, the red ball linked to it is immediately removed from the urn. You even try to destroy one ball of a linked pair, and you notice that in that case, the ball of the other colour indissociably linked to it is also destroyed instantaneously. Indeed, it seems to you that, within these pairs, the red ball and the green ball linked to it behave as one single object.

The functioning of this urn leaves you somewhat perplexed. In particular, you are intrigued by the properties of the pairs of correlated balls. After reflection, you tell yourself that the properties of the pairs of correlated balls are in some respects identical to those of two entangled quantum objects. Entanglement (Aspect et al. 1982) is indeed the phenomenon which links two quantum objects (for example, two photons), so that the quantum state of one of the entangled objects is correlated or anti-correlated with the quantum state of the other, whatever the distance between them. As a consequence, each quantum object cannot be fully described as an object per se, and a pair of entangled quantum objects is better conceived of as associated with a single, entangled state. It also occurs to you that a pair of correlated balls could alternatively be considered as a ubiquitous object, i.e. an object characterised by its faculty of occupying two different locations at the same time, with the colours of its two occurrences being anti-correlated. Setting this issue aside for the moment, you prefer to retain the similarity with the more familiar quantum objects. You decide to call this urn, with its astonishing properties, a “hyper-entanglement urn”. After reflection, what proves to be specific to this urn is that it includes at the same time both normal and hyper-entangled balls. The normal red balls are no different from our familiar balls. But hyper-entangled balls behave in a completely different way. What is amazing, you think, is that nothing seemingly differentiates the normal red balls from the hyper-entangled red ones. You tell yourself finally that this could be confusing.

Your reflection on the pairs of hyper-entangled balls and their properties also leads you to question the way the balls which compose these pairs are to be counted. Are they to be counted as normal balls? Or do specific rules govern the way these pairs of hyper-entangled balls are to be counted? Suppose you add a normal red ball to a hyper-entanglement urn. It is then necessary to increment the number of red balls present in the urn, while the total number of green balls is unaffected. But what happens when you add to the hyper-entanglement urn the red ball of a pair of hyper-entangled balls? In that case, the linked green ball of the same pair is also instantly added to the urn. Hence, when you add the red ball of a hyper-entangled pair to the urn, you also add its associated green ball at the same time. So, in that case, you must increment not only the total number of red balls, but also the total number of green balls present in the urn. In the same way, if you withdraw a normal red ball from the urn, you simply decrement the total number of red balls, and the number of green balls is unaffected. But if you remove the red (resp. green) ball of a pair of hyper-entangled balls, you must decrement the total number of red (resp. green) balls present in the urn as well as the total number of green (resp. red) balls.
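These counting rules can be sketched in a few lines of code. This is only an illustrative sketch, not part of the experiment itself; the class name `HyperEntanglementUrn` and its methods are hypothetical.

```python
# Minimal sketch of the counting rules for a hyper-entanglement urn.
# Class and method names are illustrative, not from the paper.

class HyperEntanglementUrn:
    def __init__(self):
        self.red = 0    # total number of red balls in the urn
        self.green = 0  # total number of green balls in the urn

    def add_normal_red(self):
        # A normal red ball increments the red count only.
        self.red += 1

    def add_hyper_entangled_pair(self):
        # Adding either ball of a hyper-entangled pair instantly adds
        # its partner, so both counts are incremented.
        self.red += 1
        self.green += 1

    def remove_hyper_entangled_pair(self):
        # Removing either ball of a pair also removes its partner.
        self.red -= 1
        self.green -= 1

urn = HyperEntanglementUrn()
for _ in range(500):
    urn.add_normal_red()
for _ in range(500):
    urn.add_hyper_entangled_pair()
print(urn.red, urn.green)  # 1000 500, the composition of the urn described above
```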

At this very moment, the experimenter comes back and withdraws all the balls from the urn. He announces that you are going to participate in the following experiment:

The hyper-entanglement urn. A fair coin will be tossed. If the coin lands Heads, the experimenter will put a normal red ball in the urn. On the other hand, if the coin lands Tails, he will put in the urn a pair of hyper-entangled balls, composed of a red ball and a green ball, indissociably linked. The experimenter also adds that the room will be put in absolute darkness: you will therefore be completely unable to detect the colour of the balls, nor will you be able to tell, once you have withdrawn a ball from the urn, whether it is a normal ball or part of a pair of hyper-entangled balls. The experimenter then tosses the coin. As you draw a ball from the urn, the experimenter asks you to assess the likelihood that the coin landed Heads.

2. The Sleeping Beauty problem

Consider now the well-known Sleeping Beauty problem (Elga 2000, Lewis 2001). Sleeping Beauty learns that she will be put to sleep on Sunday by some researchers. A fair coin will be tossed, and if the coin lands Heads, Beauty will be awakened once, on Monday. On the other hand, if the coin lands Tails, Beauty will be awakened twice: on Monday and on Tuesday. After each waking, she will be put to sleep again and will forget that waking. Furthermore, once awakened, Beauty will have no idea whether it is Monday or Tuesday. Upon awakening, what should then be Beauty’s credence that the coin landed Heads?

At this step, one obvious first answer (I) goes as follows: since the coin is fair, the initial probability that the coin lands Heads is 1/2. But during the course of the experiment, Sleeping Beauty does not get any novel information. Hence, the probability of Heads still remains 1/2.

By contrast, an alternative reasoning (II) runs as follows. Suppose the experiment is repeated many times, say, to fix ideas, 1000 times. Then there will be approximately 500 Heads-wakings on Monday, 500 Tails-wakings on Monday and 500 Tails-wakings on Tuesday. Hence, this reasoning goes, the probability of Heads equals 500/1500 = 1/3.
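Reasoning (II) can be checked with a short frequency simulation. This is only a sketch of the thirder’s counting; the number of trials and the variable names are arbitrary.

```python
import random

# Frequency sketch of reasoning (II): repeat the experiment many times
# and count wakings of each kind.
random.seed(0)
trials = 100_000
heads_wakings = tails_wakings = 0
for _ in range(trials):
    if random.random() < 0.5:   # Heads: one waking, on Monday
        heads_wakings += 1
    else:                       # Tails: two wakings, Monday and Tuesday
        tails_wakings += 2
p_heads_per_waking = heads_wakings / (heads_wakings + tails_wakings)
print(p_heads_per_waking)  # close to 1/3
```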

The argument for 1/2 and the argument for 1/3 yield conflicting conclusions. Accordingly, the Sleeping Beauty problem is usually presented as arising from the contradictory conclusions of the two above-mentioned competing lines of reasoning, each aiming at assigning the probability of Heads once Beauty is awakened. I shall argue, however, that this statement of the Sleeping Beauty problem is somewhat restrictive and that we need to envisage the issue from a wider perspective. For present purposes, the Sleeping Beauty problem is the issue of calculating properly (i) the probability of Heads (resp. Tails) once Beauty is awakened; (ii) the probability that the day is Monday (resp. Tuesday) on awakening; and (iii) the probability of Heads (resp. Tails) on waking on Monday. From the halfer perspective, the probability that the day is Monday on awakening equals 3/4, and the probability that it is Tuesday equals 1/4. By contrast, from the thirder perspective, the probability that the day is Monday on awakening equals 2/3, and the probability that it is Tuesday equals 1/3.

But the arguments for 1/2 and for 1/3 also come with their own accounts of the conditional probabilities. To begin with, the probability of Heads on waking on Tuesday is not a subject of disagreement, for it equals 0 on both accounts. The same goes for the probability of Tails on waking on Tuesday, since it equals 1 from both the halfer’s and the thirder’s viewpoint. But agreement stops when one considers the probability of Heads on waking on Monday. For it equals 2/3 from the halfer’s perspective, whereas from the thirder’s perspective it amounts to 1/2. Correspondingly, the probability of Tails on waking on Monday is 1/3 for a halfer and 1/2 for a thirder.

3. The urn analogy

In what follows, I shall present an ontological solution to the Sleeping Beauty problem, which rests basically on the hyper-entanglement urn experiment. A specific feature of this account is that it incorporates insights from the halfer and thirder standpoints, a line of resolution initiated by Nick Bostrom (2007) that has recently inspired some new contributions (Groisman 2008, Delabre 2008)1.

The argument for 1/3 and the argument for 1/2 rest basically on an urn analogy. This analogy is made explicit in the argument for 1/3 but is less transparent in the argument for 1/2. The argument for 1/3, to begin with, is based on an urn analogy which associates the situation related to the Sleeping Beauty experiment with an urn that contains, in the long run (assuming that the experiment is repeated, say, 1000 times), 500 red balls (Heads-wakings on Monday), 500 red balls (Tails-wakings on Monday) and 500 green balls (Tails-wakings on Tuesday), i.e. 1000 red balls and 500 green balls in total. In this context, the probability of Heads upon awakening is determined by the ratio of the number of Heads-wakings to the total number of wakings. Hence, P(Heads) = 500/1500 = 1/3. The balls in the urn are normal ones and for present purposes, it is worth calling this sort of urn a “standard urn”.

On the other hand, the argument for 1/2 is also based on an urn analogy, albeit less transparently. The main halfer proponent grounds his reasoning on calculations (Lewis 2001), but for the sake of clarity, it is worth making the underlying analogy more apparent. For this purpose, let us recall how the probability of drawing a red ball is calculated under the argument for 1/2. If the coin lands Heads, then the probability of drawing a red ball is 1; and if the coin lands Tails, then this probability equals 1/2. We accordingly get the probability of drawing a red ball (a Monday-waking): P(R) = 1 x 1/2 + 1/2 x 1/2 = 3/4. By contrast, the probability of drawing a green ball (a Tuesday-waking) is: P(G) = 0 x 1/2 + 1/2 x 1/2 = 1/4. To sum up, according to the argument for 1/2: P(R) = 3/4 and P(G) = 1/4. For the sake of comparison, it is worth transposing this reasoning into an urn analogy. Suppose then that the Sleeping Beauty experiment is iterated. It turns out that the argument for 1/2 is based on an analogy with a standard urn that contains 3/4 red balls and 1/4 green ones. These balls are also normal ones, so the analogy underlying the argument for 1/2 is also with a “standard urn”. Assuming as above that the experiment is repeated 1000 times, we accordingly get an urn that contains 500 red balls (Heads-wakings on Monday), 250 red balls (Tails-wakings on Monday) and 250 green balls (Tails-wakings on Tuesday), i.e. 750 red balls and 250 green balls in total. This content of the urn results directly from Lewis’ calculation. However, as it stands, this analogy would arguably be a poor argument in favour of the halfer’s viewpoint. But at this step, we should pause and note that Lewis’ argument for 1/2 did not rely on this urn analogy, though the latter is a consequence of Lewis’ calculation.
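Lewis’ calculation of P(R) and P(G) can be written out as plain arithmetic. This is a sketch of the halfer computation only; the variable names are illustrative.

```python
# Lewis-style calculation of P(R) and P(G) under the argument for 1/2,
# written out as plain arithmetic.
p_heads = p_tails = 1 / 2
p_red_given_heads = 1       # on Heads, the only waking is on Monday
p_red_given_tails = 1 / 2   # on Tails, Monday and Tuesday are equiprobable
p_red = p_heads * p_red_given_heads + p_tails * p_red_given_tails
p_green = p_heads * 0 + p_tails * (1 - p_red_given_tails)
print(p_red, p_green)  # 0.75 0.25
```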
We shall now turn to the issue of whether the standard urn is the correct analogy for the Sleeping Beauty experiment.

In effect, it turns out that both the argument for 1/3 and the argument for 1/2 are based on an analogy with a standard urn. But at this stage, a question arises: is the analogy with the standard urn well-suited to the Sleeping Beauty experiment? In other terms, isn’t another urn model better suited? In the present context, this alternative can be formulated more accurately as follows: isn’t the situation inherent in the Sleeping Beauty experiment better put in analogy with the hyper-entanglement urn rather than with the standard urn? I shall argue that the analogy with the standard urn is mistaken, for it fails to incorporate an essential feature of the experiment, namely the fact that Monday-Tails wakings are indissociable from Tuesday-Tails wakings. For in the Tails case, Beauty cannot wake up on Monday without also waking up on Tuesday; and reciprocally, she cannot wake up on Tuesday without also waking up on Monday.

When one reasons with the standard urn, one feels intuitively entitled to add red-Heads (Heads-wakings on Monday), red-Tails (Tails-wakings on Monday) and green-Tails (Tails-wakings on Tuesday) balls to compute frequencies. But red-Heads and red-Tails balls prove to be objects of an essentially different nature in the present context. In effect, red-Heads balls are in all respects similar to our familiar objects, and can properly be considered as single objects. By contrast, it turns out that red-Tails balls are quite indissociable from green-Tails balls. For we cannot draw a red-Tails ball without picking up the associated green-Tails ball. And conversely, we cannot draw a green-Tails ball without picking up the associated red-Tails ball. In this sense, red-Tails balls and the associated green-Tails balls do not behave as our familiar objects, but are much like entangled quantum objects. For Monday-Tails wakings are indissociable from Tuesday-Tails wakings. On Tails, Beauty cannot be awakened on Monday (resp. Tuesday) without also being awakened on Tuesday (resp. Monday). From this viewpoint, it is mistaken to consider red-Tails and green-Tails balls as separate objects. The correct intuition, I shall argue, is that the red-Tails ball and the associated green-Tails ball can be assimilated to a pair of hyper-entangled balls and constitute but one single object. In this context, red-Tails and green-Tails balls are best seen intuitively as constituents and mere parts of one single object. In other words, red-Heads balls on the one hand, and red-Tails and green-Tails balls on the other, cannot be considered as objects of the same type for probability purposes. And this situation justifies the fact that one is not entitled to add unrestrictedly red-Heads, red-Tails and green-Tails balls to compute probability frequencies. For in this case, one adds objects of intrinsically different types, i.e. a single object together with the mere part of another single object.

Given what precedes, the correct analogy, I contend, is with a hyper-entanglement urn rather than with a normal urn. As will become clearer later, this new analogy incorporates the strengths of both above-mentioned analogies with the standard urn. And we shall now consider the Sleeping Beauty problem in light of this new perspective.

4. Consequences of the analogy with the hyper-entanglement urn

At this step, it is worth drawing the consequences of the analogy with the hyper-entanglement urn, which notably result from the ontological properties of the balls. The key point proves to be the following. Recall that nothing seemingly distinguishes normal balls from hyper-entangled ones within the hyper-entanglement urn. Among the red balls, half are normal ones, but the other half is composed of red balls that are each hyper-entangled with a different green ball. If one considers the behaviour of the balls, it turns out that normal balls behave as usual, but hyper-entangled ones behave differently with regard to statistics. Suppose I add the red ball of a hyper-entangled pair to the hyper-entanglement urn. Then I also instantly add its associated green ball to the urn. Suppose, conversely, that I remove the red ball of a hyper-entangled pair from the urn. Then I also instantly remove its associated green ball.

At this step, we are led to the core issue of calculating properly the probability of drawing a red ball from the hyper-entanglement urn. Let us pause for a moment and temporarily set aside the fact that, in its classical formulation, the Sleeping Beauty problem arises from the conflicting conclusions of the argument for 1/3 and the argument for 1/2 on calculating the probability of Heads once Beauty is awakened. For as we have seen, the problem also arises from the calculation of the probability that the day is Monday on awakening (drawing a red ball), since conflicting conclusions also result from the two competing lines of reasoning: in effect, Elga argues for 2/3 and Lewis for 3/4. Hence, the Sleeping Beauty problem could also have been formulated as follows: once awakened, what probability should Beauty assign to her waking on Monday? In the present context, this is tantamount to the probability of drawing a red ball from the hyper-entanglement urn.

What, then, is the response of the present account, based on the analogy with the hyper-entanglement urn, to the issue of calculating the probability of drawing a red ball? In the present context, “drawing a red ball” turns out to be somewhat ambiguous. For according to the ontological properties of the balls within the hyper-entanglement urn, one can consider red balls either from the viewpoint of colour-ness, or from the standpoint of object-ness2. Hence, in the present context, “drawing a red ball” can be interpreted in two different ways: either (i) “drawing a red ball-as-colour”; or (ii) “drawing a red ball-as-object”. Disambiguating the notion of drawing a red ball, we should accordingly distinguish between two different questions. First, (i) what is the probability of drawing a red ball-as-colour (Monday-waking-as-time-segment)? Let us denote this probability by P(R↑). Second, (ii) what is the probability of drawing a red ball-as-object (Monday-waking-as-object)? Let us denote it by P(R→). This distinction makes sense in the present context, since it results from the properties of the hyper-entangled balls. In particular, this richer semantics results from the case where one draws the green ball of a hyper-entangled pair from the urn. For in that case, the ball drawn is not red, yet one also picks up a red ball, since the associated red ball is withdrawn simultaneously.

Suppose, on the one hand, that we focus on the colour of the balls, and that we consider the probability P(R↑) of drawing a red ball-as-colour. It occurs now that there are 2/3 of red balls-as-colour and 1/3 of green balls-as-colour in the urn. Accordingly, the probability P(R↑) of drawing a red ball-as-colour equals 2/3. On the other hand, the probability P(G↑) of drawing a green ball-as-colour equals 1/3.

Assume, on the other hand, that we focus on balls as objects, considering that one pair of hyper-entangled balls behaves as one single object. Now we are concerned with the probability P(R→) of drawing a red ball-as-object. On Heads, the probability of drawing a red ball-as-object is 1. On Tails, we can either draw the red or the green ball of a hyper-entangled pair. But it should be pointed out that if we draw on Tails the green ball of a hyper-entangled pair, we also pick up instantly the associated red ball. Hence, the probability of drawing a red ball on Tails is also 1. Thus, P(R→) = 1 x 1/2 + 1 x 1/2 = 1. Conversely, what is the probability P(G→) of drawing a green ball-as-object (a waking on Tuesday)? The probability of drawing a green ball-as-object is 0 in the Heads case, and 1 in the Tails case. For in the latter case, we either draw the green or the red ball of a hyper-entangled pair. But even if we draw the red ball of the hyper-entangled pair, we draw then instantly its associated green ball. Hence, P(G→) = 0 x 1/2 + 1 x 1/2 = 1/2. To sum up: P(R→) = 1 and P(G→) = 1/2. The probability of drawing a red ball-as-object (a waking on Monday) is then 1, and the probability of drawing a green ball-as-object (a waking on Tuesday) is 1/2. Now it turns out that P(R→) + P(G→) = 1 + 1/2 = 1.5. In the present account, this results from the fact that drawing a red ball-as-object and drawing a green ball-as-object from a hyper-entangled pair are not exclusive events for probability purposes. For we cannot draw the red-Tails (resp. green-Tails) ball without drawing the associated green-Tails (resp. red-Tails) ball.
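The object-level probabilities can be illustrated with a small frequency simulation. It is a sketch under the assumption, stated above, that on Tails drawing either ball of the hyper-entangled pair brings out both balls, so the two events are not exclusive.

```python
import random

# Frequency sketch of P(R->) and P(G->) for the hyper-entanglement urn.
random.seed(1)
trials = 100_000
red_as_object = green_as_object = 0
for _ in range(trials):
    if random.random() < 0.5:
        # Heads: a single normal red ball is in the urn.
        red_as_object += 1
    else:
        # Tails: a hyper-entangled red-green pair; whichever ball is
        # drawn, the other comes out with it, so both events occur.
        red_as_object += 1
        green_as_object += 1
print(red_as_object / trials)    # 1.0, i.e. P(R->) = 1
print(green_as_object / trials)  # close to 0.5, i.e. P(G->) = 1/2
```

Note that the two frequencies sum to roughly 1.5, mirroring the non-exclusivity of the two events noted above.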

To sum up now. It turns out that the probability P(R↑) of drawing a red ball-as-colour (Monday-waking-as-time-segment) equals 2/3. And the probability P(G↑) of drawing a green ball-as-colour (Tuesday-waking-as-time-segment) equals 1/3. On the other hand, the probability P(R→) of drawing a red ball-as-object (Monday-waking-as-object) equals 1; and the probability P(G→) of drawing a green ball-as-object (Tuesday-waking-as-object) equals 1/2.

At this step, we are led to the issue of calculating properly the number of balls present in the urn. Here we should distinguish, just as before, according to whether one considers balls-as-colour or balls-as-object. Suppose first that we focus on the colour of the balls. Then we have grounds to consider that there are in total 2/3 red balls and 1/3 green balls in the hyper-entanglement urn, i.e. 1000 red ones and 500 green ones. This conforms with the calculation that results from the thirder’s standpoint. Suppose instead that we focus on balls as single objects. Things then go differently. For we can consider first that there are 1000 balls-as-objects in the urn, i.e. 500 (red) normal ones and 500 hyper-entangled ones. Now suppose that the 500 (red) normal balls are removed from the urn, so that only hyper-entangled balls remain. Suppose then that we pick up the remaining balls one by one, alternately removing one red ball and one green ball from the urn. It turns out that we can draw 250 red ones and 250 green ones. For once we draw a red ball from the urn, its associated green ball is also withdrawn; and conversely, when we pick up a green ball, its associated red ball is also withdrawn. Hence, inasmuch as we consider balls as objects, there are in total 750 red ones and 250 green ones in the urn. It should be noticed that this corresponds accurately to the composition of the urn associated with Lewis’ halfer calculation. But this now makes sense, as far as the analogy with the hyper-entanglement urn is concerned. The above-mentioned analogy with the urn associated with Lewis’ halfer calculation was a poor argument inasmuch as the urn was a standard one, but things go differently when one considers the analogy with the hyper-entanglement urn.
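The balls-as-object count can be sketched as simple arithmetic, under the alternating drawing procedure described above; the variable names are illustrative.

```python
# Counting balls-as-object: 500 normal red balls, plus 500 hyper-entangled
# pairs drawn alternately as a red or a green ball (each pair counting as
# one single object).
normal_red = 500
pairs = 500
red_count = normal_red + pairs // 2   # 500 normal + 250 pairs drawn as red
green_count = pairs // 2              # 250 pairs drawn as green
print(red_count, green_count)  # 750 250, matching Lewis' halfer composition
```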

5. A two-sided account

From the above, it results that the line of reasoning associated with the balls-as-colour standpoint corresponds to the thirder’s reasoning, and conversely, the line of thought associated with the balls-as-object viewpoint echoes the halfer’s reasoning. Hence, the balls-as-colour/balls-as-object dichotomy parallels the thirder/halfer opposition. Grounded though they are on an unsuited analogy with the standard urn, the argument for 1/3 and the argument for 1/2 do have their own strengths. In particular, the urn analogy in the argument for 1/3 does justice to the fact that, in the long run, 2/3 of the wakings in the Sleeping Beauty experiment are Monday-wakings. On the other hand, the urn analogy in the argument for 1/2 handles adequately the fact that one Heads-waking is put on a par with two Tails-wakings. In the present context, however, these two analogies turn out to be one-sided and fail to handle adequately the notion of the probability of drawing a red ball (waking on Monday). For in the present context, the probability P(R↑) of drawing a red ball-as-colour corresponds to the thirder’s insight, and the probability P(R→) of drawing a red ball-as-object corresponds to the halfer’s line of thought. At this step, it turns out that the present account is two-sided, since it incorporates insights from both the argument for 1/3 and the argument for 1/2.

Finally, it turns out that the standard urn which is classically used to model the Sleeping Beauty problem does not allow for two possible interpretations of the probability of drawing a red ball. Rather, in the standard urn model, the two interpretations are exclusive of one another, and this yields the classical contradiction between the argument for 1/3 and the argument for 1/2. But as we have seen, with the hyper-entanglement urn model this contradiction dissolves, since two different interpretations of the probability of drawing a red ball (waking on Monday) are now allowed, yielding two different calculations. In the latter model, these probabilities are no longer exclusive of one another, and the contradiction dissolves into complementarity.

Now the same ambiguity plagues the statement of the Sleeping Beauty problem, and its inherent notion of “waking”. For shall we consider “wakings-as-time-segment” or “wakings-as-object”? The initial statement of the Sleeping Beauty problem is ambiguous about that, thus allowing the two competing viewpoints to develop, with their respective associated calculations. But once we diagnose accurately the source of the ambiguity, namely the ontological status of the wakings, we allow for the two competing lines of reasoning to develop in parallel, thus dissolving the initial contradiction3.

In addition, what precedes casts new light on the argument for 1/3 and the argument for 1/2. For given that the Sleeping Beauty experiment is modelled with a standard urn, both accounts lack the ability to express the difference between the probability P(R↑) of drawing a red ball-as-colour (a Monday-waking-as-time-segment) and the probability P(R→) of drawing a red ball-as-object (a Monday-waking-as-object), for the distinction does not make sense with the standard urn. Consequently, the standard urn analogy fails to express this difference when considering the drawing of a red ball. But this distinction does make sense within the analogy with the hyper-entanglement urn. For in the resulting richer ontology, the distinction between P(R↑) and P(R→) yields two different results: P(R↑) = 2/3 and P(R→) = 1.

At this step, it is worth considering in more depth the balls-as-colour/balls-as-object opposition, that parallels the thirder/halfer contradiction. It should be pointed out that “drawing a red ball-as-colour” is associated with an indexical (“this ball is red”), somewhat internal standpoint, that corresponds to the thirder’s insight. Typically, the thirder’s viewpoint considers things from the inside, grounding the calculation on the indexicality of Beauty’s present waking. On the other hand, “drawing a red ball-as-object” can be associated with a non-indexical (“the ball is red”), external viewpoint. This corresponds to the halfer’s standpoint, which can be viewed as more general and external.

As we have seen, the calculation of the probability of drawing a red ball (waking on Monday) is the core issue in the Sleeping Beauty problem. But what is now the response of the present account on conditional probabilities and on the probability of Heads upon awakening? Let us begin with the conditional probability of Heads on a Monday-waking. Recall first how the calculation goes in the two competing lines of reasoning. To begin with, the probability P(Heads|G) of Heads on drawing a green ball is not a subject of disagreement for halfers and thirders, since it equals 0 on both accounts. The same goes for the probability P(Tails|G) of Tails on drawing a green ball, since it equals 1 from both the halfer’s and the thirder’s viewpoint. But agreement stops when one considers the probability P(Heads|R) of Heads on drawing a red ball. For P(Heads|R) = 1/2 from the thirder’s perspective and P(Heads|R) = 2/3 from the halfer’s viewpoint. Correspondingly, the probability P(Tails|R) of Tails on drawing a red ball is 1/2 for a thirder and 1/3 for a halfer.

Now the response of the present account to the calculation of the conditional probability of Heads on drawing a red ball (waking on Monday) parallels the answer given to the issue of determining the probability of drawing a red ball. In the present account, P(Heads|G) = 0 and P(Tails|G) = 1, as usual. But to go any further, we need to disambiguate the interpretation of “drawing a red ball” by distinguishing between P(Heads|R↑) and P(Heads|R→). For P(Heads|R↑) is the probability of Heads on drawing a red ball-as-colour, and P(Heads|R→) is the probability of Heads on drawing a red ball-as-object. P(Heads|R↑) is calculated in the same way as in the thirder’s account, and we accordingly get: P(Heads|R↑) = 1/2. On the other hand, P(Heads|R→) is computed in the same way as from the halfer’s perspective, and we accordingly get: P(Heads|R→) = [P(Heads) x P(R→|Heads)] / P(R→) = [1/2 x 1] / 1 = 1/2.
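The Bayes computation of P(Heads|R→) can be written out explicitly, using the values derived above; the variable names are illustrative.

```python
# P(Heads | R->) via Bayes' theorem, with the values derived above:
# P(Heads) = 1/2, P(R->|Heads) = 1, P(R->) = 1.
p_heads = 1 / 2
p_red_object_given_heads = 1  # on Heads, the drawn ball is a normal red one
p_red_object = 1              # P(R->) as computed in section 4
p_heads_given_red_object = p_heads * p_red_object_given_heads / p_red_object
print(p_heads_given_red_object)  # 0.5
```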

Now the same goes for the probability of Heads upon awakening. For there are two different responses in the present account, depending on whether one considers P(R↑) or P(R→). If one considers balls-as-colour, the probability of Heads upon awakening is calculated in the same way as in the argument for 1/3, and we get accordingly: P(Heads↑) = 1/3 and P(Tails↑) = 2/3. On the other hand, if one is concerned with balls-as-object, it ensues, in the same way as with the halfer’s account, that there is no shift in the prior probability of Heads. As Lewis puts it, Beauty’s awakening does not add any novel information. It follows accordingly that the probability P(Heads→) of Heads (resp. Tails) on awakening still remains 1/2.

Finally, the above results are summarised in the following table:

                                                        halfer    thirder    present account
P(Heads↑)                                                  –        1/3          1/3
P(Tails↑)                                                  –        2/3          2/3
P(Heads→)                                                 1/2        –           1/2
P(Tails→)                                                 1/2        –           1/2
P(drawing a red ball-as-colour) ≡ P(R↑)                    –        2/3          2/3
P(drawing a green ball-as-colour) ≡ P(G↑)                  –        1/3          1/3
P(drawing a red ball-as-object) ≡ P(R→)                   3/4        –            1
P(drawing a green ball-as-object) ≡ P(G→)                 1/4        –           1/2
P(Heads | drawing a red ball-as-colour) ≡ P(Heads|R↑)      –        1/2          1/2
P(Tails | drawing a red ball-as-colour) ≡ P(Tails|R↑)      –        1/2          1/2
P(Heads | drawing a red ball-as-object) ≡ P(Heads|R→)     2/3        –           1/2
P(Tails | drawing a red ball-as-object) ≡ P(Tails|R→)     1/3        –           1/2

At this step, it is worth recalling the diagnosis of the Sleeping Beauty problem put forth by Berry Groisman (2008). Groisman attributes the two conflicting responses to the probability of Heads to an ambiguity in the protocol of the Sleeping Beauty experiment. He argues that the argument for 1/2 is an adequate response to the probability of Heads on awakening under the setup of coin tossing, while the argument for 1/3 is an accurate answer to the same probability under the setup of picking a ball from the urn. Groisman also considers that putting a ball into the box and picking a ball out of the box are two different events, which therefore lead to two different probabilities. Roughly speaking, Groisman’s “coin tossing/picking up a ball” distinction parallels the present balls-as-colour/balls-as-object dichotomy. However, in the present account, putting a ball into the urn is no different from picking a ball out of the urn: if we put into the urn a red ball of a hyper-entangled pair, we also immediately put into the urn its associated green ball. Rather, from the present standpoint, drawing (resp. putting into the urn) a red ball-as-colour is probabilistically different from picking up a red ball-as-object. The present account and Groisman’s analysis share the same overall direction, although the details of our motivations are significantly different.

Finally, the lesson of the Sleeping Beauty problem proves to be the following: our current and familiar objects or concepts, such as balls, wakings, etc., should not be considered as the sole relevant classes of objects for probability purposes. We should bear in mind that, according to an unformalised axiom of probability theory, a given situation is classically modelled with the help of urns, dice, balls, etc. But the rules that allow for these simplifications lack an explicit formulation. In certain situations, in order to reason properly, it is also necessary to take into account somewhat unfamiliar objects whose constituents are pairs of indissociable balls, or of mutually inseparable wakings, etc. This lesson was anticipated by Nelson Goodman, who pointed out in Ways of Worldmaking that some objects which are prima facie completely different from our familiar objects also deserve consideration: “we do not welcome molecules or concreta as elements of our everyday world, or combine tomatoes and triangles and typewriters and tyrants and tornadoes into a single kind”.4 As we have seen, in some cases, we cannot unrestrictedly add an object of the Heads-world to an object of the Tails-world. For despite appearances, objects of the Heads-world may have ontologically different properties from objects of the Tails-world. And the status of our paradigmatic probabilistic object, namely a ball, proves to be world-relative, since it can be a whole in the Heads-world and a part in the Tails-world. Once this Goodmanian step is accomplished, we should be less vulnerable to certain subtle cognitive traps in probabilistic reasoning.

Acknowledgements

I thank Jean-Paul Delahaye and Claude Panaccio for useful discussion on earlier drafts. Special thanks are due to Laurent Delabre for stimulating correspondence and insightful comments.

References

Arntzenius, F. (2002). Reflections on Sleeping Beauty. Analysis, 62, 53-62

Aspect, A., Dalibard, J. & Roger, G. (1982). Experimental Test of Bell’s Inequalities Using Time-Varying Analyzers. Physical Review Letters, 49, 1804-1807

Black, M. (1952). The Identity of Indiscernibles. Mind, 61, 153-164

Bostrom, N. (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. (New York: Routledge)

Bostrom, N. (2007). Sleeping Beauty and Self-Location: A Hybrid Model. Synthese, 157, 59-78

Bradley, D. (2003). Sleeping Beauty: a note on Dorr’s argument for 1/3. Analysis, 63, 266-268

Delabre, L. (2008). La Belle au bois dormant : débat autour d’un paradoxe. Manuscript

Elga, A. (2000). Self-locating Belief and the Sleeping Beauty Problem. Analysis, 60, 143-147

Goodman, N. (1978). Ways of Worldmaking. (Indianapolis: Hackett Publishing Company)

Groisman, B. (2008). The End of Sleeping Beauty’s Nightmare. British Journal for the Philosophy of Science, 59, 409-416

Leslie, J. (2001). Infinite Minds (Oxford & New York: Oxford University Press)

Lewis, D. (2001). Sleeping Beauty: Reply to Elga. Analysis, 61, 171-176

Monton, B. (2002). Sleeping Beauty and the Forgetful Bayesian. Analysis, 62, 47-53

White, R. (2006). The generalized Sleeping Beauty problem: A challenge for thirders. Analysis, 66, 114-119

1 Bostrom opens the path to a third way out of the Sleeping Beauty problem: “At any rate, one might hope that having a third contender for how Beauty should reason will help stimulate new ideas in the study of self-location”. In his account, Bostrom sides with the halfer on P(Heads) and with the thirder on conditional probabilities, but his treatment has some counter-intuitive consequences regarding the latter.

2 This issue relates to the identity of indiscernibles and is notably hinted at by Max Black (1952, p. 156) who describes a universe composed of two identical spheres: “Isn’t it logically possible that the universe should have contained nothing but two exactly similar spheres? We might suppose that each was made of chemically pure iron, had a diameter of one mile, that they had the same temperature, colour, and so on, and that nothing else existed. Then every quality and relational characteristic of the one would also be a property of the other.” In the present context, it should be pointed out that the colours of the hyper-entangled balls are anti-correlated. John Leslie (2001, p. 153) also raises a similar issue with his paradox of the balls: “Here is a yet greater paradox for Identity of Indiscernibles to swallow. Try to picture a cosmos consisting just of three qualitatively identical spheres in a straight line, the two outer ones precisely equidistant from the one at the centre. Aren’t there plain differences here? The central sphere must be nearer to the outer spheres than these are to each other. Identity of Indiscernibles shudders at the symmetry of the situation, however. It holds that the so-called two outer spheres must really be only a single sphere. And this single sphere, which now has all the same qualities as its sole surviving partner, must really be identical to it. There is actually just one sphere!”.

3 It is worth noting that the present treatment of the Sleeping Beauty problem is capable of handling several variations of the original problem that have recently flourished in the literature. For the above solution, I shall argue, applies straightforwardly to these variations of the original experiment. Let us consider, to begin with, a variation where, on Heads, Sleeping Beauty is not awakened on Monday but instead on Tuesday. This is modelled with a hyper-entanglement urn that receives one normal green ball (instead of a red one in the original experiment) in the Heads case.

Let us suppose, second, that Sleeping Beauty is awakened twice on Monday in the Tails case (instead of being awakened on both Monday and Tuesday). This is then modelled with a hyper-entanglement urn that receives, in the Tails case, one pair of hyper-entangled balls composed of two red balls (instead of a pair composed of a red and a green ball in the original experiment).

Let us imagine, third, that Beauty is awakened two times – on Monday and Tuesday – in the Heads case, and three times – on Monday, Tuesday and Wednesday – in the Tails case. This is then modelled with a hyper-entanglement urn that receives one pair of hyper-entangled balls composed of one red ball and one green ball in the Heads case; and in the Tails case, the hyper-entanglement urn is filled with one triplet of hyper-entangled balls, composed of one red, one green and one blue ball.

4 Goodman (1978, p. 21).

Elements of Dialectical Contextualism

Postprint in English (with additional illustrations) of an article that appeared in French in the collective book (pages 581-608) written on the occasion of the 60th birthday of Pascal Engel.

Abstract In what follows, I strive to present the elements of a philosophical doctrine, which can be defined as dialectical contextualism. I proceed first to define the elements of this doctrine: dualities and polar contraries, the principle of dialectical indifference and the one-sidedness bias. I then emphasise the special importance of this doctrine in one specific field of meta-philosophy: the methodology for solving philosophical paradoxes. Finally, I describe several applications of this methodology to the following paradoxes: Hempel’s paradox, the surprise examination paradox and the Doomsday Argument.

In what follows, I will endeavour to present the elements of a specific philosophical doctrine, which can be defined as dialectical contextualism. I will try first to clarify the elements that characterise this doctrine, especially the dualities and dual poles, the principle of dialectical indifference and the one-sidedness bias. I will then proceed to describe its interest at a meta-philosophical level, especially as a methodology to assist in the resolution of philosophical paradoxes. Finally, I will describe an application of this methodology to the analysis of the following philosophical paradoxes: Hempel’s paradox, the surprise examination paradox and the Doomsday Argument.

The dialectical contextualism described here is based on a number of constitutive elements of a specific nature. Among these are: the dualities and dual poles, the principle of dialectical indifference and the one-sidedness bias. It is worth analysing each of these elements in turn.

1. Dualities and dual poles

To begin with, we shall focus on defining the concept of dual poles (polar opposites)1. Although intuitive, this concept needs to be clarified. Examples of dual poles are static/dynamic, internal/external, qualitative/quantitative, etc. We can define dual poles as concepts (which we shall denote by A and Ā) that come in pairs and are such that each is defined as the opposite of the other. For example, internal can be defined as the opposite of external and, symmetrically, external can be defined as the contrary of internal. In a sense, there is no primitive notion here, and neither A nor Ā can be regarded as the primitive notion. Consider first a given duality, which we can denote by A/Ā, where A and Ā are dual concepts. This duality is shown in the figure below:

The dual poles A and Ā

At this point, we can also provide a list (necessarily partial) of dualities:

Internal/External, Quantitative/Qualitative, Visible/Invisible, Absolute/Relative, Abstract/Concrete, Static/Dynamic, Diachronic/Synchronic, Single/Multiple, Extension/Restriction, Aesthetic/Practical, Precise/Vague, Finite/Infinite, Single/Compound, Individual/Collective, Analytical/Synthetic, Implicit/Explicit, Voluntary/Involuntary

In order to characterise the dual poles more accurately, it is worth distinguishing them from other related concepts. We shall then stress several properties of the dual poles which allow us to differentiate them from these related concepts: the dual poles are neutral concepts, they are simple qualities, and they differ from vague notions. To begin with, two dual poles A and Ā constitute neutral concepts. They can thus be denoted by A0 and Ā0. This leads us to represent the two concepts A0 and Ā0 as follows:

The dual neutral poles A0 and Ā0

The dual poles are neutral concepts, i.e. concepts that present no ameliorative or pejorative nuance. In this sense, external, internal, concrete, abstract, etc., are dual poles, unlike concepts such as beautiful, ugly, brave, which present either an ameliorative or a pejorative shade and are therefore non-neutral. The fact that the dual poles are neutral has its importance, because it allows us to distinguish them from concepts that have a positive or negative connotation. Thus, the pair of concepts beautiful/ugly is not a duality, and therefore beautiful and ugly do not constitute dual poles in the sense of the present construction. Indeed, beautiful has a positive connotation and ugly a pejorative one. In this context, we can denote them by beautiful+ and ugly−.

It should be emphasised, second, that the two poles of a given duality correspond to simple qualities, as opposed to composite qualities. The distinction between simple and composite qualities can be made in the following manner. Let A1 and A2 be simple qualities. In this case, A1 ∧ A2 and A1 ∨ A2 are composite qualities. To take an example, static, qualitative and external are simple qualities, while static and qualitative, static and external, qualitative and external are composite qualities. A more general definition is as follows: let B1 and B2 be simple or composite qualities; then B1 ∧ B2 and B1 ∨ B2 are composite qualities. Incidentally, this also highlights why the pairs of concepts red/non-red and blue/non-blue cannot be considered as dual poles. Indeed, non-red can be defined as a composite quality: violet ∨ indigo ∨ blue ∨ green ∨ yellow ∨ orange ∨ white ∨ black. In this context, one can assimilate non-blue to the negation-complement of blue, such a negation-complement being defined with the help of composite qualities.
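The simple/composite distinction can be mirrored by representing qualities as predicates, a composite quality being a conjunction or disjunction of simpler ones. The sketch below is only an illustration of the definition above; the colour list for non-red follows the text, and the helper names are mine.

```python
from functools import reduce

# Qualities as predicates over objects; ∧ and ∨ as predicate combinators.
def q_and(a, b):
    return lambda x: a(x) and b(x)   # composite quality A1 ∧ A2

def q_or(a, b):
    return lambda x: a(x) or b(x)    # composite quality A1 ∨ A2

# Simple colour qualities (an object is identified with its colour here).
def colour(c):
    return lambda x: x == c

# non-red as the composite quality
# violet ∨ indigo ∨ blue ∨ green ∨ yellow ∨ orange ∨ white ∨ black
non_red = reduce(q_or, [colour(c) for c in
                        ("violet", "indigo", "blue", "green",
                         "yellow", "orange", "white", "black")])

print(non_red("green"))  # True: green falls under the disjunction
print(non_red("red"))    # False
```

This makes concrete why non-red is not a dual pole of red: it is not a simple quality but a disjunction of simple ones.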

Given the above definition, we are also in a position to distinguish the dual poles from vague objects. We can first note that dual poles and vague objects have certain properties in common. Indeed, vague objects come in pairs in the same way as dual poles. Moreover, vague concepts are classically considered as having an extension and an anti-extension, which are mutually exclusive. Such a feature is also shared by the dual poles: for example, qualitative and quantitative can be assimilated respectively to an extension and an anti-extension, which also have the property of being mutually exclusive, and the same goes for static and dynamic, etc. However, it is worth noting the differences between the two types of concepts. A first difference (i) lies in the fact that the union of the extension and the anti-extension of vague concepts is not exhaustive, in the sense that they admit of borderline cases (and also borderline cases of borderline cases, etc., giving rise to a hierarchy of higher-order vagueness of order n), which constitute a penumbra zone. Conversely, the dual poles do not necessarily have such a characteristic: the union of the dual poles can be either exhaustive or non-exhaustive. For example, the abstract/concrete duality is intuitively exhaustive, since there do not seem to exist objects that are neither abstract nor concrete. The same goes for the vague/precise duality: intuitively, there do not exist objects that are neither vague nor precise and that would belong to an intermediate category. Hence, unlike vague concepts, some dual poles, such as the two poles of the abstract/concrete duality, have an extension and an anti-extension whose union is exhaustive. It is worth mentioning, second, another difference (ii) between dual poles and vague objects: dual poles are simple qualities, while vague objects may consist of simple or composite qualities. There exist indeed some vague concepts which are termed multi-dimensional vague objects, such as the notions of vehicle, machine, etc. A final difference (iii) between the two categories of objects lies in the fact that some dual poles have an inherently precise nature. This is particularly the case for the individual/collective duality, which admits of a very precise definition.

2. The principle of dialectical indifference

From the notions of duality and dual poles just mentioned, we are in a position to define the notion of a viewpoint related to a given duality or dual pole. First, we have the notion of a viewpoint corresponding to a given A/Ā duality: for example, the standpoint of the extension/restriction duality, of the qualitative/quantitative duality, or of the diachronic/synchronic duality, etc. From this also follows the concept of a point of view related to a given pole of an A/Ā duality: we then get, for example, at the level of the extension/restriction duality, the standpoint by extension as well as the viewpoint by restriction. Similarly, at the level of the qualitative/quantitative duality, the qualitative viewpoint results from it, as well as the quantitative point of view, etc. Thus, when considering a given object o (either a concrete object or an abstract one, such as a proposition or a piece of reasoning), we may consider it in relation to various dualities and, at the level of the latter, relative to each of their two dual poles.

The idea underlying the viewpoints relative to a given duality, or to a given pole of a duality, is that each of the two poles of the same duality deserves, all things being equal, an equal legitimacy. In this sense, if we consider an object o in terms of a duality A/Ā, one should not favour one of the poles over the other. To obtain an objective point of view with respect to a given duality A/Ā, one should place oneself in turn from the perspective of the pole A, and then from that of the pole Ā. For an approach that would only address the viewpoint of one of the two poles would prove partial and truncated. Considering in turn the perspective of the two poles, in the study of an object o and of its associated reference class, allows us to avoid a subjective approach and to meet, as far as possible, the requirements of objectivity.

As we can see, the idea underlying the concept of point of view can be formalised in a principle of dialectical indifference, in the following way:

(PRINCIPLE OF DIALECTICAL INDIFFERENCE) When considering a given object o and the reference class E associated with it from the angle of a duality A/Ā, all things being equal, equal weight should be given to the viewpoint of the A pole and to the viewpoint of the Ā pole.

This principle is formulated in terms of a principle of indifference: if we consider an object o under the angle of an A/Ā duality, there is no reason to favour the viewpoint from A over the viewpoint from Ā, and unless the context entails otherwise, we must weigh the viewpoints A and Ā equally. A direct consequence of this principle is that if one considers the perspective of the A pole, one also needs to take into consideration the standpoint of the opposite pole Ā (and vice versa). The need to consider both points of view, the one resulting from the A pole and the other associated with the Ā pole, meets the need to analyse the object o and the reference class associated with it from an objective point of view. This goal is achieved, as far as possible, by taking into account the complementary points of view of the poles A and Ā, each of which has, with regard to a given duality A/Ā, an equal relevance. Under such circumstances, when only the A pole or (exclusively) the Ā pole is considered, the resulting perspective is one-sided. Conversely, the viewpoint which results from the synthesis of the standpoints corresponding to both poles A and Ā is of a two-sided type. Basically, this approach proves to be dialectical in essence: the step consisting of successively analysing the complementary views relative to a given reference class is intended to allow, in a subsequent step, a final synthesis resulting from the joint consideration of the viewpoints corresponding to both poles A and Ā. In the present construction, the process of confronting the different perspectives relevant to an A/Ā duality is intended to build, cumulatively, a more objective and comprehensive standpoint than the one, necessarily partial, resulting from taking into account only the data that stem from one of the two poles.

The definition of the principle of dialectical indifference proposed here refers to a reference class E, which is associated with the object o. The reference class2 is constituted by a number of phenomena or objects. Several examples can be given: the class of human beings who have ever lived, the class of future events in the life of a person, the class of body parts of a given person, the class of ravens, etc. We shall consider a number of examples in what follows. Mention of such a reference class has its importance, because its very definition is associated with the above-mentioned A/Ā duality. In effect, the reference class can be defined either from the viewpoint of A or from the viewpoint of Ā. Such a feature needs to be emphasised, and will be useful in defining the bias which is associated with the very statement of the principle of dialectical indifference: the one-sidedness bias.

3. Characterisation of the one-sidedness bias

The previous formulation of the principle of dialectical indifference straightforwardly suggests an error of reasoning of a certain type. Informally, such a fallacy consists in focusing on a given standpoint when considering a given object, and in neglecting the opposite view. More formally, in the context described above, such a fallacy consists, when considering an object o and the reference class associated with it, in taking into account the viewpoint of the A pole (respectively Ā), while completely ignoring the viewpoint of its dual pole Ā (respectively A) when defining the reference class. We shall term this type of fallacy the one-sidedness bias. The conditions of this type of bias, in violation of the principle of dialectical indifference, need however to be clarified. Indeed, there are some cases where two-sidedness with respect to a given duality A/Ā is not required: namely, when the elements of the context do not presuppose conditions of objectivity and exhaustiveness of views. Thus, a lawyer who would only emphasise the evidence in defence of his/her client, while completely ignoring the evidence against him/her, does not commit the above-mentioned type of error of reasoning. In such a circumstance, the lawyer would not commit a faulty one-sidedness bias, since one-sided advocacy is his/her inherent role. The same would go, in a trial, for the prosecutor who, conversely, would only focus on the evidence against the same person, completely ignoring the exculpatory elements. In such a situation too, the resulting one-sidedness bias would not be inappropriate, because it follows from the context that this is precisely the limited role assigned to the prosecutor.
By contrast, a judge who would only take into account the evidence against the accused, or who would commit the opposite error, namely of only considering the exculpatory elements in favour of the latter, would indeed commit an inappropriate one-sidedness bias, because the very role of the judge requires him/her to take into account both types of elements, his/her judgement being the result of their synthesis.

In addition, as hinted at above, the mention of a reference class associated with the object o proves to be important. In effect, as we will have the opportunity to see with the analysis of the following examples, the very definition of the reference class is associated with an A/Ā duality, and the reference class can be defined either from the viewpoint of A or from the viewpoint of Ā. A consequence of this feature is that not all objects are likely to give rise to a one-sidedness bias. In particular, objects that are not associated with a reference class which is itself likely to be envisaged in terms of an A/Ā duality do not give rise to any such bias.

Before illustrating the present construction with the help of several practical examples, it is worth considering, at this stage, the one-sidedness bias which has just been defined, and which results from the very definition of the principle of dialectical indifference, in the light of several similar concepts. As a preliminary, we can observe that a general description of this type of error of reasoning had already been given, in similar terms, by John Stuart Mill (On Liberty, II):

He who knows only his own side of the case, knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side; if he does not so much as know what they are, he has no ground for preferring either opinion.

In the recent literature, some very similar concepts have also been described, in particular the dialectical bias described by Douglas Walton (1999). Walton (1999, pp. 76-77) places himself within the framework of the dialectical theory of bias, which opposes one-sided to two-sided arguments:

The dialectical theory of bias is based on the idea […] that an argument has two sides. […] A one-sided argument continually engages in pro-argumentation for the position supported and continually rejects the arguments of the opposed side in a dialogue. A two-sided (balanced) argument considers all arguments on both sides of a dialogue. A balanced argument weights each argument against the arguments that have been opposed to it.

Walton thus describes the dialectical bias as a one-sided perspective that occurs during the course of an argument. Walton emphasises, though, that dialectical bias, which is universally common in human reasoning, does not necessarily constitute an error of reasoning. In line with the distinction between “good” and “bad” bias due to Antony Blair (1988), Walton considers that dialectical bias is incorrect only under certain conditions, especially if it occurs in a context that is supposed to be balanced, that is to say, where the two sides of the corresponding reasoning are supposed to be presented (p. 81):

Bad bias can be defined as “pure (one-sided) advocacy” in a situation where such unbalanced advocacy is normatively inappropriate in argumentation.

A very similar notion of one-sidedness bias is also described by Peter Suber (1998), who terms it the one-sidedness fallacy. He describes it as a fallacy which consists in presenting one aspect of the elements supporting a judgement or a viewpoint, while completely ignoring the other aspect of the relevant elements relating to the same judgement:

The fallacy consists in persuading readers, and perhaps ourselves, that we have said enough to tilt the scale of evidence and therefore enough to justify a judgment. If we have been one-sided, though, then we haven’t yet said enough to justify a judgment. The arguments on the other side may be stronger than our own. We won’t know until we examine them.

The error of reasoning thus consists in taking into account only one viewpoint relating to the judgement in question, whereas the other viewpoint could well prove decisive with regard to the conclusion to be drawn. Suber also undertakes to characterise the one-sidedness fallacy, and notes in particular that an argument affected by it can be valid, for its conclusion follows from its premises. Moreover, Suber notes, such an argument can prove not only valid but sound, for its premises may all be true. However, as Suber points out, the argument is nonetheless defective, due to the fact that a number of premises are missing. This is essential, because if the missing premises are restored within the argument, the resulting conclusion can be radically different.

4. An instance of the one-sidedness bias

To illustrate the above concepts, it is worth providing at this stage an example of the one-sidedness bias. To this end, consider the following instance of reasoning, mentioned by Philippe Boulanger (2000, p. 3)3, who attributes it to the mathematician Stanislaw Ulam. Here the one-sidedness bias shows up in a deductive form. Ulam estimates that if a company were to reach a large enough workforce, its performance would be paralysed by the many internal conflicts that would result. He estimates that the number of conflicts between people would increase as the square of the number n of employees, while the resulting impact on work would only grow in proportion to n. Thus, according to this argument, it is not desirable that the number of employees within a company become large. However, it turns out that Ulam’s reasoning is fallacious, as Boulanger points out, for it focuses exclusively on the conflictual relationships between employees. But while the n² relationships among the company’s employees can be confrontational, they may as well include collaborative relationships that are quite beneficial to the company. So there is no reason to favour conflictual relationships over collaborative ones. And when, among the n² relationships established between the company’s employees, some are genuine collaborative relationships, the effect is, instead, to improve business performance. Therefore, we cannot legitimately conclude that it is not desirable for the workforce of a company to reach a large size.
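Boulanger’s point can be made numerically. The sketch below is only an illustration: both conflictual and collaborative relationships grow quadratically with the workforce n, so an argument that counts only the first kind is one-sided. The 50/50 split between the two kinds is a made-up assumption for the example.

```python
def pairwise_relations(n: int) -> int:
    """Number of distinct pairs among n employees: n(n-1)/2, of order n^2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    pairs = pairwise_relations(n)
    conflicts = pairs // 2              # the pole Ulam's argument counts
    collaborations = pairs - conflicts  # the neglected opposite pole
    print(f"n={n}: pairs={pairs}, conflicts={conflicts}, "
          f"collaborations={collaborations}")
```

Whatever the split, the neglected collaborative pole scales exactly like the conflictual one, which is why counting only one of them cannot settle the conclusion.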

For the sake of clarity, it is worth formalizing the above reasoning. It turns out thus that Ulam’s reasoning can be described as follows:

(D1Ā ) if <a company has a large workforce>

(D2Ā ) then <n² conflictual relationships will result>

(D3Ā ) then negative effects will result

(D4Ā ) the fact that <a company has a large workforce> is bad

This type of reasoning has the structure of a one-sidedness bias, since it focuses only on conflicting relationships (the dissociation pole of the association/dissociation duality), by ignoring a parallel argument with the same structure that could legitimately be raised, focusing on collaborative relationships (the association pole), which is the other aspect relevant to this particular topic. This parallel argument goes as follows:

(D1A) if <a company has a large workforce>

(D2A) then <n² collaborative relationships will result>

(D3A) then positive effects will result

(D4A) the fact that <a company has a large workforce> is good

This finally casts light on how the two formulations of the argument lead to conflicting conclusions, i.e. (D4Ā) and (D4A). At this point, it is worth noting the very structure of the conclusion of the above reasoning, which is as follows:

(D5Ā ) the situation s is bad from the viewpoint of Ā (dissociation)

while the conclusion of the parallel reasoning is as follows:

(D5A) the situation s is good from the viewpoint of A (association)

But if the reasoning had been complete, by taking into account the two points of view, a different conclusion would have ensued:

(D5Ā ) the situation s is bad from the viewpoint of Ā (dissociation)

(D5A) the situation s is good from the viewpoint of A (association)

(D6A/Ā) the situation s is bad from the viewpoint of Ā (dissociation) and good from the viewpoint of A (association)

(D7A/Ā) the situation s is neutral from the viewpoint of the duality A/Ā (association/dissociation)

And such a conclusion turns out to be quite different from that resulting from (D5Ā ) and (D5A).

Finally, we are in a position to situate the one-sidedness bias which has just been described within the present model: the object o is the above reasoning, the reference class is that of the relationships between the employees of a company, and the corresponding duality – which allows us to define the reference class – is the association/dissociation duality.

5. Dichotomic analysis and meta-philosophy

The aforementioned principle of dialectical indifference and its corollary – the one-sidedness bias – are likely to find applications in several domains. We shall focus, in what follows, on their application at a meta-philosophical level, through the analysis of several contemporary philosophical paradoxes. Meta-philosophy is that branch of philosophy whose scope is the study of the nature of philosophy, its purpose and its inherent methods. In this context, a specific area within meta-philosophy concerns the method to be used to resolve, or to make progress towards the resolution of, philosophical paradoxes or problems. It is within this specific area that the present construction falls, in that it offers dichotomous analysis as a tool that may assist in the resolution of paradoxes or philosophical problems.

Dichotomous analysis, as a methodology that can be used to search for solutions to some paradoxes and philosophical problems, results directly from the statement of the principle of dialectical indifference itself. The general idea underlying the dichotomous approach to paradox analysis is that two versions of a philosophical paradox, corresponding to the two poles of a given duality, can be untangled within it. The corresponding approach is then to find a reference class associated with the given paradox and a corresponding duality A/Ā, as well as the two resulting variations of the paradox that apply to each pole of this duality. Nevertheless, not every duality is suitable for this, for with many dualities the corresponding version of the paradox remains unchanged, regardless of the pole being considered. In the dichotomous method, one focuses on finding a reference class and a relevant associated duality, such that the viewpoints of its two poles actually lead to two structurally different versions of the paradox, or to the disappearance of the paradox from the point of view of one of the poles. Conversely, when considering the paradox in terms of two poles A and Ā has no effect on the paradox itself, the corresponding duality A/Ā proves, from this point of view, irrelevant.

Dichotomous analysis is by no means a tool that claims to solve all philosophical problems; it only constitutes a methodology that is capable of shedding light on some of them. In what follows, we shall try to illustrate, through several works of the author, how dichotomous analysis can be applied to make progress towards the resolution of three contemporary philosophical paradoxes: Hempel’s paradox, the surprise examination paradox and the Doomsday argument.

In a preliminary way, we can observe that the literature already contains an example of a dichotomous analysis of a paradox, in David Chalmers (2002). Chalmers attempts to show how the two-envelope paradox leads to two fundamentally distinct versions, one of which corresponds to a finite version of the paradox and the other to an infinite version. Such an analysis, although conceived independently of the present construction, can thus be characterized as a dichotomous analysis based on the finite/infinite duality.

The dual poles in David Chalmers’ analysis of the two-envelope paradox

6. Application to the analysis of the philosophical paradoxes


At this point, it is worth applying the foregoing to the analysis of concrete problems. We shall illustrate this through the analysis of several contemporary philosophical paradoxes: Hempel’s paradox, the surprise examination paradox and the Doomsday argument. We will endeavour to show how a problem of one-sidedness bias, associated with a problem of definition of a reference class, can be found in the analysis of the aforementioned philosophical paradoxes. In addition, we will show how the very definition of the reference class associated with each paradox can be qualified with the help of the dual poles A and Ā of a given duality A/Ā, as they have just been defined.

6.1. Application to the analysis of Hempel’s paradox

Hempel’s paradox is based on the fact that the two following assertions:

(H) All ravens are black

(H*) All non-black things are non-ravens

are logically equivalent. By its structure, (H*) presents itself as the contrapositive form of (H). It follows that the discovery of a black raven confirms (H) and also (H*), but also that the discovery of a non-black thing that is not a raven, such as a red flamingo or even a grey umbrella, confirms (H*) and therefore (H). However, this latter conclusion appears paradoxical.

We shall now endeavour to detail the dichotomous analysis on which the solution proposed in Franceschi (1999) is based. The corresponding approach is based on finding a reference class associated with the statement of the paradox, which may be defined with the help of an A/Ā duality. If we scrutinise the concepts and categories that underlie propositions (H) and (H*), we first note that four categories are involved: ravens, black objects, non-black objects and non-ravens. To begin with, a raven is precisely defined within the taxonomy in which it inserts itself. A category such as that of the ravens can be considered well-defined, since it is based on a precise set of criteria defining the species Corvus corax and allowing the identification of its instances. Similarly, the class of black objects can be accurately described, from a taxonomy of colours determined with respect to the wavelengths of light. Finally, we can see that the class of non-black objects can also be defined without ambiguity, in particular from the specific taxonomy of colours which has just been mentioned.

However, what about the class of non-ravens? What then constitutes an instance of a non-raven? Intuitively, a blue blackbird, a red flamingo, a grey umbrella and even a natural number are non-ravens. But should we consider a reference class that extends to include abstract objects? Should we thus consider a notion of non-raven that includes abstract entities such as integers and complex numbers? Or should we limit ourselves to a reference class that only embraces the animals? Or should we consider a reference class that encompasses all living beings, or even all concrete things, this time also including artefacts? Finally, it follows that the initial proposition (H*) gives rise to several variations, which are the following:

(H1*) All that is non-black among the corvids is a non-raven

(H2*) All that is non-black among the birds is a non-raven

(H3*) All that is non-black among the animals is a non-raven

(H4*) All that is non-black among the living beings is a non-raven

(H5*) All that is non-black among the concrete things is a non-raven

(H6*) All that is non-black among the concrete and abstract objects is a non-raven

Thus, it turns out that the statement of Hempel’s paradox, and in particular proposition (H*), is associated with a reference class which allows us to define the non-ravens. Such a reference class can be assimilated to corvids, birds, animals, living beings, concrete things, or concrete and abstract things, etc. However, in the statement of Hempel’s paradox, there is no objective criterion for making such a choice. At this point, it turns out that one can choose such a reference class restrictively, by assimilating it for example to corvids. But in an equally legitimate manner, one can choose a reference class more extensively, by identifying it for example with the set of concrete things, thus notably including umbrellas. Why then choose a reference class defined restrictively rather than one defined extensively? Indeed, we lack a criterion allowing us to justify the choice of the reference class, whether we proceed by restriction or by extension. Therefore, it turns out that the latter can only be defined arbitrarily. But the choice of such a reference class proves crucial, because depending on which reference class is chosen, a given object such as a grey umbrella will or will not confirm (H*) and therefore (H). Hence, if we choose the reference class by extension, thus including all concrete objects, a grey umbrella will confirm (H). On the other hand, if we choose such a reference class by restriction, by assimilating it only to corvids, a grey umbrella will not confirm (H). Such a difference proves essential. In effect, if we choose a definition by extension of the reference class, the paradoxical effect inherent to Hempel’s paradox ensues. By contrast, if we choose a reference class restrictively defined, the paradoxical effect disappears.

The dual poles in the reference class of the non-ravens within Hempel’s paradox

The foregoing permits an accurate description of the elements of the preceding analysis of Hempel’s paradox in terms of the one-sidedness bias as it has been defined above: to the paradox, and in particular to proposition (H*), is associated the reference class of non-ravens, which is itself capable of being defined with regard to the extension/restriction duality. However, for a given object such as a grey umbrella, the definition of the reference class by extension leads to a paradoxical effect, whereas the choice of the latter by restriction does not lead to such an effect.
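The dependence of confirmation on the chosen reference class can be made concrete with a toy encoding (the object catalogue and class assignments below are illustrative assumptions of mine, not part of the paper): an object counts as confirming (H*) only if it belongs to the chosen reference class, is non-black, and is a non-raven.

```python
# Toy catalogue: colour, raven status, and the reference classes each
# object belongs to (illustrative assumptions, not taken from the paper).
objects = {
    "grey umbrella": {"colour": "grey", "raven": False,
                      "classes": {"concrete things"}},
    "red flamingo":  {"colour": "red", "raven": False,
                      "classes": {"birds", "animals", "living beings",
                                  "concrete things"}},
    "black raven":   {"colour": "black", "raven": True,
                      "classes": {"corvids", "birds", "animals",
                                  "living beings", "concrete things"}},
}

def confirms_h_star(name: str, reference_class: str) -> bool:
    """An object confirms (H*) iff it lies within the chosen reference
    class, is non-black, and is a non-raven."""
    o = objects[name]
    return (reference_class in o["classes"]
            and o["colour"] != "black"
            and not o["raven"])

# Extension (all concrete things): the umbrella confirms (H*) -> paradox.
print(confirms_h_star("grey umbrella", "concrete things"))  # True
# Restriction (corvids only): the umbrella no longer confirms (H*).
print(confirms_h_star("grey umbrella", "corvids"))          # False
```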

6.2. Application to the analysis of the surprise examination paradox

The classical version of the surprise examination paradox (Quine 1953, Sorensen 1988) goes as follows: a teacher tells his students that an examination will take place in the next week, but that they will not know in advance the precise date on which the examination will occur. The examination will thus occur by surprise. The students then reason as follows. The examination cannot take place on Saturday, they think, for otherwise they would know in advance that the examination would take place on Saturday and therefore it could not occur by surprise. Thus, Saturday is eliminated. In addition, the examination cannot take place on Friday, for otherwise the students would know in advance that the examination would take place on Friday and so it could not occur by surprise. Thus, Friday is also ruled out. By similar reasoning, the students successively eliminate Thursday, Wednesday, Tuesday and Monday. Finally, every day of the week is eliminated. However, this does not preclude the examination from finally occurring by surprise, say on Wednesday. Thus, the students’ reasoning proved to be fallacious. Yet such reasoning seems intuitively valid. The paradox lies here in the fact that the students’ reasoning is apparently valid, whereas it finally proves inconsistent with the facts, namely that the examination can truly occur by surprise, as initially announced by the professor.

In order to introduce the dichotomous analysis (Franceschi 2005) that can be applied to the surprise examination paradox, it is worth first considering two variations of the paradox that turn out to be structurally different. The first variation is associated with the solution to the paradox proposed by Quine (1953). Quine considers the student’s final conclusion that the examination cannot take place by surprise on any day of the week. According to Quine, the student’s error lies in not having envisaged from the beginning that the examination could take place on the last day. For it is precisely the fact of considering that the examination will not take place on the last day that finally allows the examination to occur by surprise on the last day. If the student had also considered this possibility from the beginning, he would not have been committed to the false conclusion that the examination cannot occur by surprise.

The second variation of the paradox that proves interesting in this context is the one associated with the remark made by several authors (Hall 1999, p. 661; Williamson 2000), according to which the paradox emerges clearly when the number n of units is large. Such a number is usually associated with a number n of days, but we may as well use hours, minutes, seconds, etc. An interesting feature of the paradox is indeed that, intuitively, it emerges more distinctly when large values of n are involved. A striking illustration of this phenomenon is provided by the variation of the paradox that corresponds to the following situation, described by Timothy Williamson (2000, p. 139).

Advance knowledge that there will be a test, fire drill, or the like of which one will not know the time in advance is an everyday fact of social life, but one denied by a surprising proportion of early work on the Surprise Examination. Who has not waited for the telephone to ring, knowing that it will do so within a week and that one will not know a second before it rings that it will ring a second later?

The variation described by Williamson corresponds to the announcement made to someone that he or she will receive a phone call during the week, without being able to determine in advance at what exact second this event will occur. This variation highlights how surprise may occur, in a quite plausible way, when the value of n is high. The unit of time considered here by Williamson is the second, in relation to a duration that corresponds to one week. The corresponding value of n here is very high, equal to 604800 (60 x 60 x 24 x 7) seconds. However, it is not necessary to take into account so large a value of n; a value of n equal to 365, for example, would also be well-suited.

The fact that two versions of the paradox that seem a priori quite different coexist suggests that two structurally different versions of the paradox could be inextricably intertwined within the surprise examination paradox. In fact, if we analyse the version of the paradox that leads to Quine’s solution, we find that it has a peculiarity: it is likely to occur for a value of n equal to 1. The corresponding version of the professor’s announcement is then as follows: “An examination will take place tomorrow, but you will not know in advance that this will happen and therefore it will occur by surprise.” Quine’s analysis applies directly to this version of the paradox for which n = 1. In this case, the student’s error resides, according to Quine, in having considered only the hypothesis: (i) “the examination will take place tomorrow and I predict that it will take place”. In fact, the student should also have considered three other cases: (ii) “the examination will not take place tomorrow, and I predict that it will take place”; (iii) “the examination will not take place tomorrow and I do not predict that it will take place”; (iv) “the examination will take place tomorrow and I do not predict that it will take place”. And considering hypothesis (i), but also hypothesis (iv), which is compatible with the professor’s announcement, would have prevented the student from concluding that the examination would not finally take place. Therefore, as Quine stresses, it is the fact of having taken into account only hypothesis (i) that can be identified as the cause of the fallacious reasoning.

As we can see, the very structure of the version of the paradox on which Quine’s solution is based has the following features: first, the non-surprise may actually occur on the last day, and second, the examination may also occur by surprise on the last day. The same goes for the version of the paradox where n = 1: the non-surprise and the surprise may both occur on day n. This allows us to represent such a structure of the paradox with the following matrix S[k, s], where k denotes the day on which the examination takes place and s the case of non-surprise (s = 0) or of surprise (s = 1), with S[k, s] = 1 if the corresponding case is possible and S[k, s] = 0 if it is not:

day | non-surprise | surprise
1 | 1 | 1
2 | 1 | 1
3 | 1 | 1
4 | 1 | 1
5 | 1 | 1
6 | 1 | 1
7 | 1 | 1

Matrix structure of the version of the paradox corresponding to Quine’s solution for n = 7 (one week)

day | non-surprise | surprise
1 | 1 | 1

Matrix structure of the version of the paradox corresponding to Quine’s solution for n = 1 (one day)

Given the structure of the corresponding matrix, which includes values equal to 1 in both the case of non-surprise and that of surprise for a given day, we shall term such a matrix structure joint.

If we examine the above-mentioned variation of the paradox described by Williamson, it presents the particularity, in contrast to the previous variation, of emerging neatly when n is large. In this context, the professor’s announcement corresponding, for example, to a value of n equal to 365 is the following: “An examination will take place in the coming year, but the date of the examination will be a surprise.” If such a variation is analysed in terms of the matrix of non-surprise and of surprise, it turns out that this version of the paradox has the following properties: the non-surprise cannot occur on the first day, while the surprise is possible on this very first day; however, on the last day, the non-surprise is possible whereas the surprise is not.

day | non-surprise | surprise
1 | 0 | 1
365 | 1 | 0

Matrix structure of the version of the paradox corresponding to Williamson’s variation for n = 365 (one year)
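The two matrix structures can be written out directly, using the convention S[k] = (non-surprise, surprise) from the text; the classification helper below is my own illustrative addition:

```python
def joint_matrix(n: int) -> dict:
    """Quine's version: both non-surprise and surprise are possible on
    every day k, so every row is (1, 1)."""
    return {k: (1, 1) for k in range(1, n + 1)}

def disjoint_endpoints(n: int) -> dict:
    """Williamson's version: only the extreme days are fixed by the
    argument; the intermediate days fall within the penumbra of the
    vague predicate 'surprise' and are left unspecified here."""
    return {1: (0, 1), n: (1, 0)}

def is_joint(matrix: dict) -> bool:
    """A structure is joint if some day admits both non-surprise and surprise."""
    return any(ns == 1 and s == 1 for ns, s in matrix.values())

print(is_joint(joint_matrix(7)))          # True  (Quine, n = 7)
print(is_joint(joint_matrix(1)))          # True  (Quine, n = 1)
print(is_joint(disjoint_endpoints(365)))  # False (Williamson, n = 365)
```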

The foregoing now allows us to identify precisely what is at fault in the student’s reasoning, when applied to this particular version of the paradox. Under these circumstances, the student would have reasoned as follows. The surprise cannot occur on the last day but can occur on day 1; the non-surprise can occur on the last day, but cannot occur on the first day. These are proper instances of non-surprise and of surprise, which prove to be disjoint. However, the notion of surprise is not captured exhaustively by the extension and the anti-extension of the predicate. Such a definition is in fact consistent with that of a vague predicate, which is characterized by an extension and an anti-extension that are mutually exclusive and non-exhaustive. Thus, the notion of surprise associated with a disjoint structure is that of a vague notion. The student’s error of reasoning at the origin of the fallacy therefore lies in not having taken into account the fact that the surprise is, in the case of a disjoint structure, a vague concept, and thus includes the presence of a penumbra corresponding to borderline cases between non-surprise and surprise. The mere consideration of the fact that the notion of surprise is here a vague notion would have prohibited the student from concluding that S[k, 1] = 0 for all values of k, that is to say, that the examination cannot occur by surprise on any day of the period.

Finally, it turns out that the analysis leads us to distinguish two independent variations of the surprise examination paradox. The matrix definition of the cases of non-surprise and of surprise leads to two variations of the paradox, according to the joint/disjoint duality. In the first case, the paradox is based on a joint definition of the cases of non-surprise and of surprise. In the second case, the paradox is grounded on a disjoint definition. Each of these leads to a structurally different variation of the paradox and to an independent solution. When the variation of the paradox is based on a joint definition, the solution put forth by Quine applies. However, when the variation of the paradox is based on a disjoint definition, the solution rests on the prior recognition of the vague nature of the concept of surprise associated with this variation of the paradox.

The dual poles in the class of the matrices associated with the surprise examination paradox

As we finally see, the dichotomous analysis of the surprise examination paradox leads us to consider the class of matrices associated with the very definition of the paradox and to distinguish whether their structure is joint or disjoint. An independent solution then follows for each of the two resulting structurally different versions of the paradox.

6.3. Application to the analysis of the Doomsday Argument

The Doomsday argument, attributed to Brandon Carter, was described by John Leslie (1993, 1996). It is worth recalling preliminarily its statement. Consider then proposition (A):

(A) The human species will disappear before the end of the XXIst century

We can estimate, to fix ideas, the probability that this extinction will occur at 1 in 100: P(A) = 0.01. Let us also consider the following proposition:

(Ā) The human species will not disappear before the end of the XXIst century

Let also E be the event: I live during the 2010s. We can estimate today at 60 billion the number of humans that have ever existed since the birth of humanity. Similarly, the current population can be estimated at 6 billion. One calculates then that one human out of ten, if event A occurs, will have lived during the 2010s. We can then estimate accordingly the probability that humanity will be extinct before the end of the twenty-first century, if I have lived during the 2010s: P(E, A) = 6×10⁹/6×10¹⁰ = 0.1. By contrast, if humanity survives the twenty-first century, it is likely that it will be subject to a much greater expansion, and that the number of humans could amount, for example, to 6×10¹². In this case, the probability that humanity will not be extinct by the end of the twenty-first century, if I have lived during the 2010s, can be evaluated as follows: P(E, Ā) = 6×10⁹/6×10¹² = 0.001. At this point, we can assimilate the total human populations that will have resulted to two distinct urns – one containing 60 billion balls and the other containing 6×10¹² balls. This leads us to calculate the posterior probability of the human species’ extinction before the end of the XXIst century with the help of Bayes’ formula: P'(A) = [P(A) x P(E, A)] / [P(A) x P(E, A) + P(Ā) x P(E, Ā)] = (0.01 x 0.1) / (0.01 x 0.1 + 0.99 x 0.001) = 0.5025. Thus, taking into account the fact that I am currently living raises the probability of the human species’ extinction before the end of the XXIst century from 1% to 50.25%. Such a conclusion appears counter-intuitive and is, in this sense, paradoxical.
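The Bayesian shift above can be checked numerically; the snippet below is a direct transcription of the figures used in the text:

```python
def posterior_doom(p_a: float, p_e_given_a: float, p_e_given_not_a: float) -> float:
    """Posterior probability P'(A) of early extinction via Bayes' formula:
    P'(A) = P(A)P(E,A) / [P(A)P(E,A) + P(notA)P(E,notA)]."""
    num = p_a * p_e_given_a
    return num / (num + (1 - p_a) * p_e_given_not_a)

# Leslie's figures: P(A) = 0.01, P(E, A) = 6e9/6e10 = 0.1,
# P(E, notA) = 6e9/6e12 = 0.001.
p = posterior_doom(0.01, 0.1, 0.001)
print(round(p, 4))  # 0.5025
```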

It is now worth describing how a dichotomous analysis (Franceschi 1999, 2009) can be applied to the Doomsday argument. We will endeavour, first, to point out how the Doomsday argument contains an inherent problem of reference-class definition linked to a duality A/Ā. Consider then the following statement:

(A) The human species will disappear before the end of the XXIst century

Such a proposition presents a dramatic, apocalyptic and tragic connotation, linked to the imminent extinction of the human species. It consists of a prediction whose nature is catastrophic and quite alarming. However, if we scrutinise such a proposition, we are led to notice that it conceals an inaccuracy. While the time reference itself – the end of the twenty-first century – proves quite accurate, the term “human species” appears ambiguous. Indeed, it turns out that there are several ways to define it. The most accurate notion for defining the “human species” is our present scientific taxonomy, based on the concepts of genus, species, subspecies, etc. Adapting the latter taxonomy to assertion (A), it follows that the ambiguous concept of “human species” can be defined in relation to the genus, the species, the subspecies, etc., and in particular with regard to the homo genus, the homo sapiens species, the homo sapiens sapiens subspecies, etc. Finally, it follows that assertion (A) is likely to take the following forms:

(Ah) The homo genus will disappear before the end of the XXIst century

(Ahs) The homo sapiens species will disappear before the end of the XXIst century

(Ahss) The homo sapiens sapiens subspecies will disappear before the end of the XXIst century

At this stage, reading these different propositions leads to a different impact, relative to the original proposition (A). For while (Ah) presents, in the same way as (A), a quite dramatic and tragic connotation, this is not the case for (Ahss). Indeed, such a proposition, which predicts the extinction of our current subspecies homo sapiens sapiens before the end of the twenty-first century, could be accompanied by the replacement of our present subspecies with a new and more advanced subspecies that we could call homo sapiens supersapiens. In this case, proposition (Ahss) would not carry any tragic connotation, but would be associated with a positive one, since the replacement of an ancient subspecies with a more evolved one results from the natural process of evolution. Furthermore, by choosing a reference class even more limited than that, such as that of the humans who have not known the computer (homo sapiens sapiens antecomputeris), we get the following proposition:

(Ahsss) The infra-subspecies homo sapiens sapiens antecomputeris will disappear before the end of the XXIst century

which is no longer associated at all with the dramatic connotation inherent to (A), and proves even quite normal and reassuring, being devoid of any paradoxical or counter-intuitive nature. In this case, in effect, the disappearance of the infra-subspecies homo sapiens sapiens antecomputeris is associated with the survival of the more evolved infra-subspecies homo sapiens sapiens postcomputeris. It turns out then that a restricted reference class coinciding with an infra-subspecies goes extinct, but a larger class corresponding to a subspecies (homo sapiens sapiens) survives. In this case, we still observe the Bayesian shift described by Leslie, but the effect of this shift proves this time to be quite innocuous.

Thus, the choice of the reference class for proposition (A) proves essential for the paradoxical nature of the conclusion associated with the Doomsday argument. If one chooses an extended reference class for the very definition of humans, by associating it for example with the homo genus, one gets the dramatic and disturbing nature associated with proposition (A). By contrast, if one chooses such a reference class restrictively, by associating it for example with the infra-subspecies homo sapiens sapiens antecomputeris, a reassuring and normal nature is now associated with the proposition (A) underlying the Doomsday argument.

Finally, we are in a position to situate the foregoing analysis within the present context. The very definition of the reference class of the “humans” associated with the proposition (A) inherent to the Doomsday argument can be made according to the two poles of the extension/restriction duality. An analysis based on a two-sided perspective leads to the conclusion that the choice by extension leads to a paradoxical effect, whereas the choice by restriction of the reference class makes this paradoxical effect disappear.

The dual poles within the reference class of “humans” in the Doomsday Argument

The dichotomous analysis of the Doomsday argument, however, is not limited to this. Indeed, if one examines the argument carefully, it turns out that it contains another reference class, which is associated with another duality. This can be demonstrated by analysing the argument raised by William Eckhardt (1993, 1997) against the Doomsday argument. According to Eckhardt, the human situation corresponding to the Doomsday argument is not analogous to the two-urn case described by Leslie, but rather to an alternative model, which can be termed the consecutive token dispenser. The consecutive token dispenser is a device that ejects consecutively numbered balls at regular intervals: “(…) suppose on each trial the consecutive token dispenser expels either 50 (early doom) or 100 (late doom) consecutively numbered tokens at the rate of one per minute.” Based on this model, Eckhardt (1997, p. 256) emphasizes that it is impossible to make a random selection when there are many individuals within the corresponding reference class who are not yet born: “How is it possible in the selection of a random rank to give the appropriate weight to unborn members of the population?” Eckhardt’s central idea underlying this diachronic objection is that it is impossible to make a random selection when many members of the reference class are not yet born. In such a situation, it would be quite wrong to conclude that a Bayesian shift in favour of hypothesis (A) ensues. What can be rationally inferred in such a case is rather that the initial probability remains unchanged.

At this point, it turns out that two alternative models for the analogy with the human situation corresponding to the Doomsday argument are competing: first, the synchronic model (where all the balls are present in the urn when the draw takes place) recommended by Leslie and, second, Eckhardt’s diachronic model, where balls can be added to the urn after the draw. The question that arises is the following: is the human situation corresponding to the Doomsday argument analogous to (i) the synchronic urn model, or to (ii) the diachronic urn model? This in turn raises the question of whether there exists an objective criterion for choosing between the two competing models. It appears not. Neither Leslie nor Eckhardt has an objective motivation justifying the choice of his own favourite model and the rejection of the alternative model. Under these circumstances, the choice of one or the other of the two models – whether synchronic or diachronic – proves arbitrary. Therefore, it turns out that the choice within the class of the models associated with the Doomsday argument can be made according to the two poles of the synchronic/diachronic duality. Hence, an analysis based on a two-sided viewpoint leads to the conclusion that the choice of the synchronic model leads to a paradoxical effect, whereas the choice of the diachronic model makes this latter paradoxical effect disappear.

The dual poles within the models’ class of the Doomsday Argument

Finally, given that the above problem related to the reference class of the humans, and its associated choice within the extension/restriction duality, only concerns the synchronic model, the structure of the two-level dichotomous analysis of the Doomsday argument can be represented as follows:

Structure of embedded dual poles Diachronic/Synchronic and Extension/Restriction for the Doomsday Argument

As we can see, the foregoing developments implement the form of dialectical contextualism described above by applying it to the analysis of three contemporary philosophical paradoxes. In Hempel’s paradox, the reference class of the non-ravens is associated with proposition (H*), which is itself capable of being defined with regard to the extension/restriction duality. However, for a given object x such as a grey umbrella, the definition of the reference class by extension leads to a paradoxical effect, whereas the choice of the latter reference class by restriction eliminates this specific effect. Secondly, the matrix structures associated with the surprise examination paradox are analysed from the angle of the joint/disjoint duality, thus highlighting two structurally distinct versions of the paradox, which themselves admit of two independent resolutions. Finally, at the level of the Doomsday argument, a double dichotomous analysis shows that the class of humans is related to the extension/restriction duality, and that the paradoxical effect that is evident when the reference class is defined by extension dissolves when the latter is defined by restriction. It turns out, second, that the class of models can be defined according to the synchronic/diachronic duality; a paradoxical effect is associated with the synchronic view, whereas the same effect disappears if we place ourselves in the diachronic perspective.

Acknowledgements

This text is based on entirely revised elements of my habilitation à diriger des recherches report, presented in 2006. The changes introduced in the text, including in particular the correction of a conceptual error, follow notably from the comments and recommendations that Pascal Engel made to me at that time.

References

Beck, A.T. (1963) Thinking and depression: Idiosyncratic content and cognitive distortions, Archives of General Psychiatry, 9, 324-333.

Beck, A.T. (1964) Thinking and depression: Theory and therapy, Archives of General Psychiatry, 10, 561-571.

Blair, J. Anthony (1988) What Is Bias?, in Selected Issues in Logic and Communication, ed. Trudy Govier, Belmont, CA: Wadsworth, 101-102.

Boulanger, P. (2000) Culture et nature, Pour la Science, 273, 3.

Chalmers, D. (2002) The St. Petersburg two-envelope paradox, Analysis, 62: 155-157.

Eckhardt, W. (1993) Probability Theory and the Doomsday Argument, Mind, 102, 483-488.

Eckhardt, W. (1997) A Shooting-Room view of Doomsday, Journal of Philosophy, 94, 244-259.

Ellis, A. (1962) Reason and Emotion in Psychotherapy, Lyle Stuart, New York.

Franceschi, P. (1999). Comment l’urne de Carter et Leslie se déverse dans celle de Carter, Canadian Journal of Philosophy, 29, 139-156.

Franceschi, P. (2002) Une classe de concepts, Semiotica, 139 (1-4), 211-226.

Franceschi, P. (2005) Une analyse dichotomique du paradoxe de l’examen surprise, Philosophiques, 32-2, 399-421.

Franceschi, P. (2007) Compléments pour une théorie des distorsions cognitives, Journal de Thérapie Comportementale et Cognitive, 17-2, 84-88. Preprint in English: www.cogprints.org/5261/

Franceschi, P. (2009) A Third Route to the Doomsday Argument, Journal of Philosophical Research, 34, 263-278.

Hall, N. (1999) How to Set a Surprise Exam, Mind, 108, 647-703.

Leslie, J. (1993) Doom and Probabilities, Mind, 102, 489-491.

Leslie, J. (1996) The End of the World: the science and ethics of human extinction, London: Routledge.

Quine, W. (1953) On a So-called Paradox, Mind, 62, 65-66.

Sorensen, R. A. (1988) Blindspots, Oxford: Clarendon Press.

Stuart Mill, J. (1985) On Liberty, London: Penguin Classics, original publication in 1859.

Suber, P. (1998) The One-Sidedness Fallacy. Manuscript, https://www.earlham.edu/~peters/courses/inflogic/onesided.htm. Retrieved 11/25/2012.

Walton, D. (1999) One-Sided Arguments: A Dialectical Analysis of Bias, Albany: State University of New York Press.

Williamson, T. (2000) Knowledge and its Limits, London & New York: Routledge.

1This notion is central to the concept of matrices of concepts introduced in Franceschi (2002), of which it can be regarded as the core, or a simplified form. In this paper, which bears more specifically on the elements of dialectical contextualism and their application to solving philosophical paradoxes, merely presenting the dual poles proves sufficient.

2The present construction also applies to objects that are associated with several reference classes. We shall limit ourselves here, for the sake of simplicity, to a single reference class.

3Philippe Boulanger says (personal correspondence) that he heard Stanislaw Ulam develop this particular point in a conference at the University of Colorado.

4An application of the present model to the cognitive distortions introduced by Aaron Beck (1963, 1964) in the elements of cognitive therapy, is provided in Franceschi (2007). Cognitive distortions are conventionally defined as fallacious reasoning that play a key role in the emergence of a number of mental disorders. Cognitive therapy is based in particular on the identification of these cognitive distortions in the usual reasoning of the patient, and their replacement by alternative reasoning. Traditionally, cognitive distortions are described as one of the twelve following methods of irrational reasoning: 1. Emotional reasoning 2. Hyper-generalization 3. Arbitrary inference 4. Dichotomous reasoning. 5. Should statements (Ellis 1962) 6. Divination or mind reading 7. Selective abstraction 8. Disqualifying the positive 9. Maximization and minimization 10. Catastrophism 11. Personalisation 12. Labelling.

5The analysis of the Doomsday Argument from the perspective of the reference class problem is performed in detail by Leslie (1996). But Leslie’s analysis aims at showing that the choice of the reference class, whether by extension or by restriction, does not affect the conclusion of the argument itself.


A Third Route to the Doomsday Argument

A paper published (2009) in English in the Journal of Philosophical Research, vol. 34, pages 263-278 (with significant changes with regard to the preprint).

In this paper, I present a solution to the Doomsday argument based on a third type of solution, by contrast with, on the one hand, the Carter-Leslie view and, on the other hand, the Eckhardt et al. analysis. I begin by strengthening both competing models by highlighting some variations of their ancestor models, which renders them less vulnerable to several objections. I then describe a third line of solution, which incorporates insights from both Leslie’s and Eckhardt’s models and fits more adequately with the human situation corresponding to the Doomsday argument. I then argue that the resulting two-sided analogy casts new light on the reference class problem. This leads finally to a novel formulation of the argument that could well be more consensual than the original one.

This paper is cited in:

  • Alasdair Richmond, The Doomsday Argument, Philosophical Books Vol. 47 No. 2 April 2006, pp. 129–142
  • Robert Northcott, A Dilemma for the Doomsday Argument, Ratio, Volume 29-3, September 2016, pages 268-282
  • William Poundstone, How to Predict Everything: The Formula Transforming What We Know About Life and the Universe, 2019, Oneworld

A Third Route to the Doomsday Argument

In what follows, I will endeavor to present a solution to the problem arising from the Doomsday argument (DA). The solution thus described constitutes a third way out, compared to, on the one hand, the approach of the promoters of DA (Leslie 1993 and 1996) and on the other hand, the solution recommended by its detractors (Eckhardt 1993 and 1997, Sowers 2002).i

I. The Doomsday Argument and the Carter-Leslie model

For the sake of the present discussion, it is worth beginning with a brief presentation of DA. This argument can be described as reasoning which leads to a Bayesian shift, starting from an analogy between what has been called the two-urn caseii and the corresponding human situation.

Let us consider first the two-urn case experiment (adapted from Bostrom 1997):

The two-urn case experiment. An opaque urniii is in front of you. You know that it contains either 10 or 1000 numbered balls. A fair coin has been tossed at time T0 and if the coin landed tails, then 10 balls were placed in the urn; on the other hand, if the coin landed heads, 1000 balls were placed in the urn. The balls are numbered 1, 2, 3, …. You then formulate the hypotheses Hfew (the urn contains only 10 balls) and Hmany (the urn contains 1000 balls) with the initial probabilities P(Hfew) = P(Hmany) = 1/2.

Informed of all the preceding, you randomly draw a ball at time T1 from the urn. You get the ball #5. You endeavor to estimate the number of balls that were contained at T0 in the urn. You then conclude that there is an upward Bayesian shift in favor of the Hfew hypothesis.

The two-urn case experiment is an uncontroversial application of Bayes’ theorem. It is based on the following two competing hypotheses:

(H1) the urn contains 10 balls (Hfew)
(H2) the urn contains 1000 balls (Hmany)

and the corresponding initial probabilities: P(H1) = P(H2) = 1/2. Taking into account the fact that E denotes the evidence according to which the randomly drawn ball bears the number 5, and that P(E|H1) = 1/10 and P(E|H2) = 1/1000, an upward Bayesian shift follows, by a straightforward application of Bayes’ theorem. Consequently, the posterior probabilities are such that P'(H1) = 0.99 and P'(H2) = 0.01.
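This calculation can be checked with a short Python sketch (the function and variable names are ours, for illustration only):

```python
def posterior(priors, likelihoods):
    """Posterior probabilities for discrete hypotheses, via Bayes' theorem."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Two-urn case: the drawn ball bears the number 5,
# so P(E|H1) = 1/10 and P(E|H2) = 1/1000.
p1, p2 = posterior([1/2, 1/2], [1/10, 1/1000])
print(round(p1, 2), round(p2, 2))  # 0.99 0.01
```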

Let us now consider the human situation corresponding to DA. Being interested in the total number of humans that humankind will finally count, it is worth considering the following two competing hypotheses:

(H3) the total number of humans having ever lived will amount to 10^11 (Apocalypse near)
(H4) the total number of humans having ever lived will amount to 10^14 (Apocalypse far)

It appears now that every human being has his own birth rank, and that yours, for example, is about 60×10^9. Let us also assume, for the sake of simplicity, that the initial probabilities are such that P(H3) = P(H4) = 1/2. Now, according to Carter and Leslie, the human situation corresponding to DA is analogous to the two-urn case.iv If we denote by E the fact that our birth rank is 60×10^9, an application of Bayes’ theorem, taking into account the fact that P(E|H3) = 1/10^11 and that P(E|H4) = 1/10^14, leads to a substantial Bayesian shift in favor of the hypothesis of a near Apocalypse, i.e., P'(H3) = 0.999. The magnitude of the Bayesian shift which results from this reasoning, yielding a very worrying conclusion about the future of humankind from the mere knowledge of our birth rank, appears counter-intuitive. This intrinsic problem requires that we set out to find a solution.
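The same Bayesian computation, applied to the human situation, can be sketched as follows (again a mere numerical check, with variable names of our choosing):

```python
def posterior(priors, likelihoods):
    # Posterior probabilities for discrete hypotheses, via Bayes' theorem.
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# DA: the evidence E is a birth rank of about 60 x 10^9,
# with P(E|H3) = 1/10^11 and P(E|H4) = 1/10^14.
p3, p4 = posterior([1/2, 1/2], [1e-11, 1e-14])
print(round(p3, 3))  # 0.999
```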

In this context, it appears that a solution to DA has to present the following characteristics. On the one hand, it must point out in which ways the human situation corresponding to DA is similar to the two-urn case or possibly to an alternative model, the characteristics of which are to be specified. On the other hand, such a solution to DA must point out in which ways one or several models analogous to the human situation corresponding to DA are associated with a frightening situation for the future of humankind.

In what follows, I will endeavor to present a solution to DA. In order to develop it, it will be necessary first to build up the space of solutions for DA. Such a construction is a non-trivial task that requires the consideration not only of several objections that have been raised against DA, but also of the reference class problem. Within this space of solutions, the solutions advocated by the supporters as well as the critics of DA will naturally find their place. I will finally show that within the space of solutions thus established, there is room for a third way out, which is in essence a different solution from those offered by the proponents and opponents of DA.

II. Failure of an alternative model based on the incremental objection of Eckhardt et al.

DA is based on the matching of a probabilistic model – the two-urn case – with the human situation corresponding to DA. In order to build the space of solutions for DA, it is necessary to focus on the models that constitute an alternative to the two-urn case, which can also be put in correspondence with the human situation corresponding to DA. Several alternative models have been described by the opponents to DA. However, for reasons that will become clearer later, not all these models can be accepted as valid alternative models to the two-urn case, and take a place within the space of solutions for DA. It is therefore necessary to distinguish among these models proposed by the detractors of DA, between those which are not genuine alternative models, and those which can legitimately be included within the space of solutions for DA.

A certain number of objections to DA were formulated first by William Eckhardt (1993, 1997). For the sake of the present discussion, it is worth distinguishing between two objections, among those which were raised by Eckhardt, and that I will call respectively: the incremental objection and the diachronic objection. With each one of these two objections is associated an experiment intended to constitute an alternative model to the two-urn case.

Let us begin with the incremental objection mentioned in Eckhardt (1993, 1997) and the alternative model associated with it. More recently, George Sowers (2002) and Elliott Sober (2003) have echoed this objection. According to this objection, the analogy with the urn that is at the root of DA is ungrounded. Indeed, in the two-urn case experiment, the number of the drawn ball is randomly chosen. However, these authors emphasize, in the case of the human situation corresponding to DA, our birth rank is not chosen at random, but is indexed on the corresponding time position. Therefore, Eckhardt stresses, the analogy with the two-urn case is unfounded and the whole reasoning is invalidated. Sober (2003) develops a similar argument,v stressing that no mechanism designed to randomly assign a time position to human beings can be identified. Such an objection was recently revived by Sowers, who focused on the fact that the birth rank of every human being is not random, because it is indexed to the corresponding time position.

According to the viewpoint developed by Eckhardt et al., the human situation corresponding to DA is not analogous to the two-urn case experiment, but rather to an alternative model, which may be called the consecutive token dispenser. The consecutive token dispenser is a device, originally described by Eckhardtvi, that ejects consecutively numbered balls at regular intervals: “(…) suppose on each trial the consecutive token dispenser expels either 50 (early doom) or 100 (late doom) consecutively numbered tokens at the rate of one per minute”. A similar device – call it the numbered balls dispenser – is also mentioned by Sowers, where the balls are ejected from the urn and numbered in the order of their ejection, at the regular interval of one per minute:vii

There are two urns populated with balls as before, but now the balls are not numbered. Suppose you obtain your sample with the following procedure. You are equipped with a stopwatch and a marker. You first choose one of the urns as your subject. It doesn’t matter which urn is chosen. You start the stopwatch. Each minute you reach into the urn and withdraw a ball. The first ball withdrawn you mark with the number one and set aside. The second ball you mark with the number two. In general, the nth ball withdrawn you mark with the number n. After an arbitrary amount of time has elapsed, you stop the watch and the experiment. In parallel with the original scenario, suppose the last ball withdrawn is marked with a seven. Will there be a probability shift? An examination of the relative likelihoods reveals no.

Thus, under the terms of the viewpoint defended by Eckhardt et al., the human situation corresponding to DA is not analogous with the two-urn case experiment, but with the numbered balls dispenser. And this last model leads us to leave the initial probabilities unchanged.
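The likelihood analysis behind the numbered balls dispenser can be made explicit with a small sketch (function and variable names are ours). Under Sowers’ procedure, after n minutes the last withdrawn ball is marked n with certainty, provided the urn holds at least n balls; both hypotheses thus assign the same likelihood to the evidence, and the prior probabilities remain unchanged:

```python
def likelihood_last_mark(n, urn_size):
    # With the numbered balls dispenser, the nth withdrawn ball is
    # marked n with probability 1, as long as the urn holds n balls.
    return 1.0 if urn_size >= n else 0.0

n = 7  # the last ball withdrawn is marked with a seven
l_few, l_many = likelihood_last_mark(n, 10), likelihood_last_mark(n, 1000)
post_few = (0.5 * l_few) / (0.5 * l_few + 0.5 * l_many)
print(post_few)  # 0.5 -- no probability shift
```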

The incremental objection of Eckhardt et al. is based on a disanalogy. Indeed, the human situation corresponding to DA presents a temporal nature, for the birth ranks are successively attributed to human beings depending on the time position corresponding to their appearance on Earth. Thus, the corresponding situation takes place, for example, from T1 to Tn, where 1 and n are respectively the birth ranks of the first and of the last humans. However, the two-urn case experiment appears atemporal, because when the ball is drawn at random, all the balls are already present within the urn. The two-urn case experiment takes place at a given time T0. It appears thus that the two-urn case experiment is an atemporal model, while the situation corresponding to DA is a temporal model. And this forbids, as Eckhardt et al. underscore, considering the situation corresponding to DA and the two-urn case as isomorphic.viii

At this stage, it appears that the atemporal-temporal disanalogy is indeed a reality and cannot be denied. However, this does not constitute an insurmountable obstacle for DA. As we shall see, it is indeed possible to put the human situation corresponding to DA in analogy with a temporal variation of the two-urn case. This can be done by considering the following experiment, which can be termed the incremental two-urn case (formally, the two-urn case++):

The two-urn case++. An opaque urn is in front of you. You know that it contains either 10 or 1000 numbered balls. A fair coin has been tossed at time T0; if the coin landed tails, then the urn contains only 10 balls, while if the coin landed heads, then the urn contains the same 10 balls plus 990 extra balls, i.e. 1000 balls in total. The balls are numbered 1, 2, 3, …. You then formulate the hypotheses Hfew (the urn contains only 10 balls) and Hmany (the urn contains 1000 balls) with initial probabilities P(Hfew) = P(Hmany) = 1/2. At time T1, a device will draw a ball at random, and will then eject every second a numbered ball in increasing order, from the ball #1 up to the number of the randomly drawn ball. At that very moment, the device will stop.

You are informed of all the foregoing, and the device then expels the ball #1 at T1, the ball #2 at T2, the ball #3 at T3, the ball #4 at T4, and the ball #5 at T5. The device then stops. You wish to estimate the number of balls that were contained at T0 in the urn. You then conclude that there is an upward Bayesian shift in favor of the Hfew hypothesis.

As we can see, such a variation constitutes a mere adaptation of the original two-urn case, with the addition of an incremental mechanism for the expulsion of the balls. The novelty with this variationix is that the experiment now has a temporal feature, because the random selection is made at T1 and the randomly drawn ball is finally ejected, for example, at T5.
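A Monte Carlo sketch (our own illustration, not part of the original argument) confirms that the incremental mechanism leaves the Bayesian shift intact: conditioning on the device stopping at ball #5 yields roughly the same posterior as in the synchronic experiment.

```python
import random

def trial():
    # The coin toss at T0 fixes the urn's content; a ball is drawn at
    # random at T1, and balls #1 up to the drawn number are then
    # ejected in order, so 'drawn' is the number observed last.
    few = random.random() < 0.5
    n_balls = 10 if few else 1000
    drawn = random.randint(1, n_balls)
    return few, drawn

random.seed(0)
hits = [few for few, drawn in (trial() for _ in range(200_000)) if drawn == 5]
print(round(sum(hits) / len(hits), 2))  # close to 0.99, as in the synchronic case
```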

At this stage, it is also worth analyzing the consequences of the two-urn case++ for the analysis developed by Eckhardt et al. Indeed, in the two-urn case++, the number of each ball ejected from the device is indexed on the rank of its expulsion. For example, I draw the ball #60000000000. But I also know that the previous ball was the ball #59999999999, that the penultimate ball was the ball #59999999998, and so on. However, this does not prevent me from reasoning in the same manner as in the original two-urn case and from concluding that there is a Bayesian shift in favor of the Hfew hypothesis. In this context, the two-urn case++ experiment leads to the following consequence: the fact of being indexed with regard to time does not entail that the number of the ball is not randomly chosen. This can now be confronted with the main thesis of the incremental objection raised by Eckhardt et al., i.e. that the birth rank of each human being is not randomly chosen, but is rather indexed on the corresponding time position. Sowers especially believes that the cause of DA is that the number corresponding to the birth rank is time-indexed.x But what the two-urn case++ experiment and the corresponding analogy demonstrate is that our birth rank can be time-indexed and nevertheless be determined randomly in the context of DA. For this reason, the numbered balls dispenser model proposed by Eckhardt and Sowers cannot be considered as an alternative model to the two-urn case within the space of solutions for DA.

III. Success of an alternative model grounded on William Eckhardt’s diachronic objection

William Eckhardt (1993, 1997) also describes another objection to DA, which we shall call, for the sake of the present discussion, the diachronic objection. This latter objection, as we shall see, is based on an alternative model to the two-urn case, which is different from the one that corresponds to the incremental objection. Eckhardt highlights the fact that it is impossible to perform a random selection when there exist many yet unborn individuals within the corresponding reference class: “How is it possible in the selection of a random rank to give the appropriate weight to unborn members of the population?” (1997, p. 256).

This second objection is potentially stronger than the incremental objection. In order to assess its scope accurately, it is worth now translating this objection into a probabilistic model. It appears that the model associated with Eckhardt’s diachronic objection can be built from the two-urn case’s structure. The corresponding variation, which can be termed the diachronic two-urn case, goes as follows:

The diachronic two-urn case. An opaque urn is in front of you. You know that it contains either 10 or 1000 numbered balls. A fair coin has been tossed at time T0. If the coin fell tails, 10 balls were then placed in the urn; if the coin fell heads, 10 balls were also placed in the urn at time T0, but 990 supplementary balls will be added to the urn at time T2, bringing the total number of balls finally contained in the urn to 1000. The balls are numbered 1, 2, 3, …. You then formulate the hypotheses Hfew (the urn finally contains only 10 balls) and Hmany (the urn finally contains 1000 balls) with the initial probabilities P(Hfew) = P(Hmany) = 1/2.

Informed of all the above, you randomly draw at time T1 a ball from the urn. You get then the ball #5. You wish to estimate the number of balls that ultimately will be contained in the urn at T2. You conclude then that the initial probabilities remain unchanged.

At this stage, it appears that the protocol described above does justice to Eckhardt’s strong idea that it is impossible to perform a random selection where there are many yet unborn members in the reference class. In the diachronic two-urn case, the 990 balls that are possibly (if the coin falls heads) added at T2 account for these members not yet born. In such a situation, it would be quite erroneous to conclude that there is a Bayesian shift in favor of the Hfew hypothesis. What can be inferred rationally in such a case is that the prior probabilities remain unchanged.
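Why no shift occurs can be made explicit numerically (a sketch of ours, under the protocol just described): at T1 the urn holds exactly 10 balls under both hypotheses, since the 990 extra balls, if any, arrive only at T2.

```python
# At the time of the draw (T1), the urn contains exactly 10 balls
# whether the coin fell heads or tails, so the evidence "ball #5"
# has the same likelihood under both hypotheses.
l_few = l_many = 1 / 10
prior = 1 / 2
post_few = (prior * l_few) / (prior * l_few + prior * l_many)
print(post_few)  # 0.5 -- the initial probabilities remain unchanged
```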

We can also see that the structure of the protocol of the diachronic two-urn case is quite similar to that of the original two-urn case experiment (which we shall now term, by contrast, the synchronic two-urn case). This will now allow for easy comparisons. If the coin lands tails, the situation is the same in both experiments, synchronic and diachronic. However, the situation differs if the coin lands heads: in the synchronic two-urn case, the 990 balls are already present in the urn at T0; in the diachronic two-urn case, 990 extra balls are added to the urn later, namely at T2. As we can see, the diachronic two-urn case based on Eckhardt’s diachronic objection fully deserves a place within the space of solutions for DA.

IV. Construction of the preliminary space of solutions

In light of the foregoing, we are now in a position to appreciate how far the analogy underlying DA is appropriate. It appears indeed that two alternative models for the analogy with the human situation corresponding to DA are in competition: on the one hand, the synchronic two-urn case advocated by the promoters of DA and, on the other hand, the diachronic two-urn case, based on Eckhardt’s diachronic objection. It turns out that these two models share a common structure, which allows for making comparisons.xi

At this step, the question that arises is the following: is the human situation corresponding to DA in analogy with (i) the synchronic two-urn case, or (ii) the diachronic two-urn case? In order to answer it, a further question arises: is there an objective criterion that allows one to choose, preferentially, between the two competing models? It appears not. Indeed, neither Leslie nor Eckhardt provides objective reasons for justifying the choice of his favorite model and for rejecting the alternative model. Leslie, first, defends the analogy of the human situation corresponding to DA with the lottery experiment (here, the synchronic two-urn case). At the same time, Leslie acknowledges that DA is considerably weakened if our universe is of an indeterministic nature, i.e. if the total number of people who will ever exist has not yet been settled.xii But it turns out that such an indeterministic situation corresponds exactly to the diachronic two-urn case. For the protocol of this experiment takes into account the fact that the total number of balls which will ultimately be contained in the urn is not known at the time when the random drawing is performed. Finally, we see that Leslie freely accepts that the analogy with the synchronic two-urn case may not prevail in certain indeterministic circumstances, where, as we have seen, the diachronic two-urn case would apply.

On the other hand, a weakness in the position defended by Eckhardt is that he rejects the analogy with the lottery experiment (in our terminology, the synchronic two-urn case) in all cases. But how can we be certain that an analogy with the synchronic two-urn case does not prevail, at least for a given situation? It appears here that we lack the evidence allowing us to reject such a hypothesis with absolute certainty.

To sum up: within the space of solutions for DA resulting from the foregoing, two competing models are available to model the human situation corresponding to DA: Leslie’s synchronic two-urn case and Eckhardt’s diachronic two-urn case. At this stage, however, it appears that no objective criterion allows for preferring one or the other of these two models. In these circumstances, in the absence of objective evidence for making a choice between the two competing models, we are led to apply a principle of indifference, which leads us to retain both models as roughly equiprobable. We then attribute (Figure 1), applying a principle of indifference, a probability P of 1/2 to the analogy with the synchronic two-urn case (associated with a terrifying scenario), and an identical probability of 1/2 to the analogy with the diachronic two-urn case (associated with a reassuring scenario).

Case  Model                    T0                 T2                        P     Nature of the scenario
1     synchronic two-urn case  10 or 1000 balls   –                         1/2   terrifying
2     diachronic two-urn case  10 balls           990 balls possibly added  1/2   reassuring

Figure 1.

However, it appears that such an approach is of a preliminary nature, for in order to assign a probability to each specific situation inherent in DA, it is necessary to take into account all the elements underlying DA. But it appears that a key element of DA has not yet been taken into account. It is the notoriously awkward reference class problem.

V. The reference class problem

Let us begin by recalling the reference class problem.xiii Basically, it is the problem of the correct definition of “humans”. More accurately, the problem can be stated as follows: how can the reference class be objectively defined in the context of DA? For a more or less extensive or restrictive definition of the reference class can be used. An extensively defined reference class would include, for example, the somewhat exotic varieties corresponding to a future evolution of humankind, with, say, an average IQ equal to 200, a double brain or backward causation abilities. On the other hand, a restrictively defined reference class would only include those humans whose characteristics are exactly those of – for example – our subspecies Homo sapiens sapiens. Such a definition would exclude extinct species such as Homo sapiens neandertalensis, as well as a possible future subspecies such as Homo sapiens supersapiens. To put this in line with our current taxonomy, the reference class can be set at different levels, which correspond to the Superhomo super-genus, the Homo genus, the Homo sapiens species, the Homo sapiens sapiens subspecies, etc. At this stage, it appears that we lack an objective criterion that would allow us to choose the corresponding level non-arbitrarily.

The solution to the reference class problem proposed by Leslie, set out in his response to Eckhardt (1993) and in The End of the World (1996), goes as follows: one can choose the reference class more or less as one wishes, i.e. at any level of extension or of restriction. Once this choice has been made, it suffices to adjust the initial probabilities accordingly, and DA works again. The only reservation mentioned by Leslie is that the reference class should not be chosen at an extreme level of extension or restriction.xiv According to Leslie, the fact that every human being can belong to different classes, depending on whether they are restrictively or extensively defined, is not a problem, because the argument works for each of those classes. In this case, says Leslie, a Bayesian shift follows for whatever reference class, chosen at a reasonable level of extension or of restriction. Leslie illustrates this point of view by an analogy with a multi-color urn, unlike the one-color urn of the original two-urn case experiment. He considers an urn containing balls of different colors, for example red and green. A red ball is drawn at random from the urn. From a restrictive viewpoint, the ball is a red ball, and then there is no difference with the two-urn case. But from a more extensive viewpoint, the ball is also a red-or-green ball.xv According to Leslie, although the initial probabilities are different in each case, a Bayesian shift results in both cases.xvi As we can see, the synchronic two-urn case can easily be adapted to restore the essence of Leslie’s multi-color model. In effect, it suffices to replace the red balls of the original synchronic two-urn case with red-or-green balls. The resulting two-color model is then in all respects identical to the original synchronic two-urn case experiment, and leads to a Bayesian shift of the same nature.

At this stage, in order to incorporate properly the reference class problem into the space of solutions for DA, we still need to translate the diachronic two-urn case into a two-color variation.

A. The two-color diachronic two-urn case

In the one-color original experiment which corresponds to the diachronic two-urn case, the reference class is that of the red balls. It appears here that one can construct a two-color variation, which is best suited for handling the reference class problem, where the relevant class is that of red-or-green balls. The corresponding two-color variation is in all respects identical with the original diachronic two-urn case, the only difference being that the first 10 balls (#1 to #10) are red and the other 990 balls (#11 to #1000) are green. The corresponding variation runs as follows:

The two-color diachronic two-urn case. An opaque urn is in front of you. You know it contains either 10 or 1000 numbered balls (consisting of 10 red balls and 990 green balls). The red balls are numbered #1, #2, …, #9, #10 and the green ones #11, #12, …, #999, #1000. A fair coin has been tossed at time T0. If the coin fell tails, 10 red balls were then placed in the urn, while if the coin fell heads, 10 red balls were also placed in the urn at time T0, but 990 green balls will then be added to the urn at time T2, bringing the total number of balls in the urn to 1000. You then formulate the hypotheses Hfew (the urn finally contains only 10 red-or-green balls) and Hmany (the urn finally contains 1000 red-or-green balls) with the prior probabilities P(Hfew) = P(Hmany) = 1/2.

After being informed of all the above, you draw at time T1 a ball at random from the urn. You get the red ball #5. You proceed to estimate the number of red-or-green balls which will ultimately be contained in the urn at T2. You conclude that the initial probabilities remain unchanged.

As we can see, the structure of this two-color variation is in all respects similar to that of the one-color version of the diachronic two-urn case. In effect, we consider here the class of red-or-green balls, instead of the original class of red balls. In this type of situation, it is rational to conclude, in the same manner as in the original one-color version of the diachronic two-urn case experiment, that the prior probabilities remain unchanged.

B. Non-exclusivity of the synchronic one-color model and of the diachronic two-color model

With the help of the machinery at hand to tackle the reference class problem, we are now in a position to complete the construction of the space of solutions for DA, by incorporating the above elements. On a preliminary basis, we have assigned a probability of 1/2 to each of the one-color two-urn case – synchronic and diachronic – models, by associating them respectively with a terrifying and a reassuring scenario. But what is the situation now, with the presence of two-color models, which are better suited for handling the reference class problem?

Before evaluating the impact of the two-color model on the space of solutions for DA, it is worth first defining how the two-color models are to be put into correspondence with our present human situation. For this, it suffices to assimilate the class of red balls to our current subspecies Homo sapiens sapiens and the class of red-or-green balls to our current species Homo sapiens. Similarly, we shall assimilate the class of green balls to Homo sapiens supersapiens, a subspecies more advanced than our own, which would be an evolutionary descendant of Homo sapiens sapiens. A situation of this type is very common in the evolutionary process that governs living species. Given these elements, we are now in a position to relate the probabilistic models to our present situation.

At this stage it is worth pointing out an important property of the two-color diachronic model: the latter can be combined with a one-color synchronic two-urn case. Suppose, then, that a one-color synchronic two-urn case prevails: 10 or 1000 red balls are placed in the urn at time T0. This does not preclude green balls from also being added to the urn at time T2. It appears thus that the one-color synchronic model and the two-color diachronic model are not exclusive of one another. For in such a situation, a synchronic one-color two-urn case prevails for the restricted class of red balls, whereas a diachronic two-color model applies to the extended class of red-or-green balls. At this step, it appears that we are on a third route, of pluralistic essence. For matching the human situation corresponding to DA exclusively with the synchronic model, or exclusively with the diachronic model, are both monist attitudes. In contrast, recognising the joint role played by the synchronic and diachronic models is the expression of a pluralistic point of view. In these circumstances, it is necessary to analyze the impact on the space of solutions for DA of the property of non-exclusivity just emphasized.

In light of the foregoing, it appears that four types of situations must now be distinguished within the space of solutions for DA. Indeed, each of the two initial one-color models – synchronic and diachronic – can be associated with a two-color diachronic two-urn case. Let us begin with the case (1) where the synchronic one-color model applies. Here one should distinguish between two types of situations: either (1a) nothing happens at T2 and no green ball is added to the urn, or (1b) 990 green balls are added to the urn at T2. In the first case (1a), we have a rapid disappearance of the class of red balls, and likewise of the corresponding class of red-or-green balls, since the latter here coincides with the class of red balls. In such a case, the rapid extinction of Homo sapiens sapiens (the red balls) is not followed by the emergence of Homo sapiens supersapiens (the green balls): the subspecies Homo sapiens sapiens goes extinct, and with it the species Homo sapiens (the red-or-green balls). Such a scenario, admittedly, corresponds to a form of Doomsday of a very frightening nature.

Let us now consider the second case (1b), where we are still in the presence of a synchronic one-color model, but where green balls are also added to the urn at T2. In this case, 990 green balls are added at T2 to the red balls originally placed in the urn at T0. We then have a rapid disappearance of the class of red balls, accompanied however by the survival of the class of red-or-green balls, given the presence of green balls at T2. In this case (1b), one notices that a synchronic one-color model is combined with a diachronic two-color model. The two models prove to be compatible and non-exclusive of one another. Translated in terms of the third route, and in accordance with its pluralistic essence, the synchronic one-color model applies to the narrowly defined class of red balls, while a two-color diachronic model also applies to the broadly defined class of red-or-green balls. In this case (1b), the rapid extinction of Homo sapiens sapiens (the red balls) is followed by the emergence of the more advanced human subspecies Homo sapiens supersapiens (the green balls). In such a situation, the restricted class Homo sapiens sapiens goes extinct, while the more extended class Homo sapiens (the red-or-green balls) survives. While the synchronic one-color model applies to the restricted class Homo sapiens sapiens, the diachronic two-color model prevails for the wider class Homo sapiens. But such an ambivalent feature has the effect of depriving the original argument of the terror initially associated with the one-color synchronic model. This renders DA innocuous, while still leaving room for the argument to apply to a given reference class, but without its frightening and counter-intuitive consequences.

As we can see, in case (1) the corresponding treatment of the reference class problem is different from that advocated by Leslie. For on Leslie’s view, the synchronic model applies irrespective of the chosen reference class. But the present analysis leads to a differential treatment of the reference class problem. In case (1a), the synchronic model prevails and a Bayesian shift applies, just as in Leslie’s account, both to the class of red balls and to the class of red-or-green balls. But in case (1b) the situation is different. For if a one-color synchronic model applies to the restricted reference class of red balls and leads to a Bayesian shift, a diachronic two-color model applies to the extended reference class of red-or-green balls, leaving the initial probability unchanged. In case (1b), as we can see, the third route leads to a pluralistic treatment of the reference class problem.

Let us now consider the second hypothesis (2), where the diachronic one-color model prevails. In this case, 10 red balls are placed in the urn at T0, and 990 other red balls are added to the urn at T2. Just as before, we are led to distinguish two situations: either (2a) no green ball is added to the urn at T2, or (2b) 990 green balls are also added to the urn at T2. In the first case (2a), the diachronic one-color model applies alone. In such a situation, no much-evolved human subspecies such as Homo sapiens supersapiens appears. But the scenario in this case is nevertheless very reassuring, since our current subspecies Homo sapiens sapiens survives. In the second case (2b), where 990 green balls are added to the urn at T2, a diachronic two-color model is added to the initial diachronic one-color model. In such a case, the more advanced subspecies Homo sapiens supersapiens emerges. The scenario is then doubly reassuring, since it leads to the survival of both Homo sapiens sapiens and Homo sapiens supersapiens. As we can see, in case (2) it is the diachronic model which remains the basic model, leaving the prior probability unchanged.

At this step, we are in a position to complete the construction of the space of solutions for DA. Indeed, a new application of a principle of indifference leads us here to assign a probability of 1/4 to each of the 4 sub-cases: (1a), (1b), (2a), (2b). The latter are represented in the figure below:

Case  Sub-case  T0                     T2                                  P
1     1a        10 or 1000 red balls   nothing added                       1/4
      1b        10 or 1000 red balls   990 green balls added               1/4
2     2a        10 red balls           990 red balls added                 1/4
      2b        10 red balls           990 red and 990 green balls added   1/4

Figure 2.

It now suffices to determine the nature of the scenario associated with each of the four sub-cases just described. As discussed above, a worrying scenario is associated with hypothesis (1a), while a reassuring scenario is associated with hypotheses (1b), (2a) and (2b):

Case  Sub-case  T0                     T2                                  P     Nature of the scenario   P
1     1a        10 or 1000 red balls   nothing added                       1/4   terrifying               1/4
      1b        10 or 1000 red balls   990 green balls added               1/4   reassuring               3/4
2     2a        10 red balls           990 red balls added                 1/4   reassuring
      2b        10 red balls           990 red and 990 green balls added   1/4   reassuring

Figure 3.

Finally, as we can see, the foregoing considerations lead to a novel formulation of DA. For it follows from the foregoing that the original scope of DA should be reduced in two different directions. It should be acknowledged, first, that either the one-color synchronic model or the one-color diachronic model applies to our current subspecies Homo sapiens sapiens. A principle of indifference leads us then to assign a probability of 1/2 to each of these two hypotheses. The result is a weakening of DA, as the Bayesian shift associated with a terrifying assumption now concerns only one of the two possible scenarios. A second weakening of DA results from the pluralist treatment of the reference class problem. For in the case where the one-color synchronic model (1) applies to our subspecies Homo sapiens sapiens, two different situations must be distinguished. Only one of them, (1a), leads to the extinction of both Homo sapiens sapiens and Homo sapiens and thus corresponds to a frightening Doomsday. In contrast, the other situation (1b) leads to the demise of Homo sapiens sapiens but to the correlative survival of the more advanced human subspecies Homo sapiens supersapiens, and thus constitutes a quite reassuring scenario. At this stage, a second application of the principle of indifference leads us to assign a probability of 1/2 to each of these two sub-cases (see Figure 3). In total, a frightening scenario is henceforth associated with a probability of no more than 1/4, while a reassuring scenario is associated with a probability of 3/4.
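The computation just described can be summarised in a few lines (an illustrative sketch; the sub-case labels are those used in the text):

```python
# Two successive applications of a principle of indifference
# (illustrative sketch of the computation described in the text).

# First indifference: synchronic (1) vs diachronic (2) one-color model.
p_synchronic = p_diachronic = 1 / 2

# Second indifference: each model splits into two equiprobable sub-cases,
# according to whether green balls are added at T2 or not.
p = {"1a": p_synchronic / 2, "1b": p_synchronic / 2,
     "2a": p_diachronic / 2, "2b": p_diachronic / 2}

# Only (1a) -- extinction of Homo sapiens sapiens without the emergence of
# Homo sapiens supersapiens -- is a frightening Doomsday scenario.
terrifying = {"1a"}
p_terrifying = sum(p[c] for c in terrifying)
p_reassuring = sum(p[c] for c in p if c not in terrifying)

print(p_terrifying, p_reassuring)  # 0.25 0.75
```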

As we can see, given these two weakenings, a new formulation of DA ensues, which may prove more plausible than the original one. Indeed, the present formulation of DA can now be reconciled with our pretheoretical intuition. For taking DA into account now gives a probability of 3/4 to the set of reassuring scenarios and a probability of no more than 1/4 to a scenario associated with a frightening Doomsday. Of course, we have not completely eliminated the risk of a frightening Doomsday, and we must, at this stage, accept a certain risk, the scope of which appears however limited. But most importantly, it is no longer necessary to give up our pretheoretical intuitions.

Finally, the preceding highlights a key facet of DA. For in a narrow sense, it is an argument related to the destiny of humankind. And in a broader sense (the one we have been concerned with so far) it emphasizes the difficulty of applying probabilistic models to everyday situations,xvii a difficulty which is often largely underestimated. This opens the path to a wide field of real practical interest, consisting of a taxonomy of probabilistic models, whose philosophical importance would have remained hidden without the strong and courageous defense of the Doomsday argument made by John Leslie.xviii

References

Bostrom, Nick. 1997. “Investigations into the Doomsday argument.” preprint.

———. 2002. Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York: Routledge.

Chambers, Timothy. 2001. “Do Doomsday’s Proponents Think We Were Born Yesterday?” Philosophy 76: 443-450.

Delahaye, Jean-Paul. 1996. “Recherche de modèles pour l’argument de l’apocalypse de Carter-Leslie.” Manuscript.

Eckhardt, William. 1993. “Probability Theory and the Doomsday Argument.” Mind 102: 483-488.

———. 1997. “A Shooting-Room view of Doomsday.” Journal of Philosophy 94: 244-259.

Franceschi, Paul. 1998. “Une solution pour l’argument de l’Apocalypse.” Canadian Journal of Philosophy 28: 227-246.

———. 1999. “Comment l’urne de Carter et Leslie se déverse dans celle de Hempel.” Canadian Journal of Philosophy 29: 139-156. English translation under the title “The Doomsday Argument and Hempel’s Problem”.

———. 2002. “Une application des n-univers à l’argument de l’Apocalypse et au paradoxe de Goodman.” Corté: University of Corsica, doctoral dissertation.

Hájek, Alan. 2002. “Interpretations of Probability.” The Stanford Encyclopedia of Philosophy, E. N. Zalta (ed.), http://plato.stanford.edu/archives/win2002/entries/probability-interpret.

Korb, Kevin. & Oliver, Jonathan. 1998. “A Refutation of the Doomsday Argument.” Mind 107: 403-410.

Leslie, John. 1993. “Doom and Probabilities.” Mind 102: 489-491.

———. 1996. The End of the World: the science and ethics of human extinction. London: Routledge.

Sober, Elliott. 2003. “An Empirical Critique of Two Versions of the Doomsday Argument – Gott’s Line and Leslie’s Wedge.” Synthese 135 (3): 415-430.

Sowers, George. 2002. “The Demise of the Doomsday Argument.” Mind 111: 37-45.

i The present analysis of DA is an extension of Franceschi (2002).

ii Cf. Korb & Oliver (1998).

iii The original description by Bostrom of the two-urn case refers to two separate urns. For the sake of simplicity, we shall refer here equivalently to one single urn (which contains either 10 or 1000 balls).

iv More accurately, Leslie considers an analogy with a lottery experiment.

v Cf (2003: 9): “But who or what has the propensity to randomly assign me a temporal location in the duration of the human race? There is no such mechanism.” But Sober is mainly concerned with providing evidence with regard to the assumptions used in the original version of DA and with broadening the scope of the argument by determining the conditions of its application to real-life situations.

vi Cf. (1997: 251).

vii Cf. (2002: 39).

viii I borrow this terminology from Chambers (2001).

ix Other variations of the two-urn case++ can also be envisaged. In particular, one can imagine variations of this experiment where the random process is performed diachronically rather than synchronically (i.e. at time T0).

x Cf. Sowers (2002: 40).

xi Both the synchronic and the diachronic two-urn case experiments can give rise to an incremental variation. The incremental variant of the (synchronic) two-urn case has been mentioned earlier: it consists of the two-urn case++. It is also possible to build a similar incremental variation of the diachronic two-urn case, where the ejection of the balls is made at regular time intervals. At this stage it appears that both models can give rise to such incremental variations. Thus, considering incremental variations of the two competing models – the synchronic two-urn case++ and the diachronic two-urn case++ – does not provide any novel element with regard to the two original experiments. Similarly, we might consider variations where the random sampling is done not at T0 but gradually, or variants where a quantum coin is used, and so on. But in any case, such variations can be adapted to each of the two models.

xii Leslie (1993: 490) evokes thus: “(…) the potentially much stronger objection that the number of names in the doomsday argument’s imaginary urn, the number of all humans who will ever have lived, has not yet been firmly settled because the world is indeterministic”.

xiii The reference class problem in probability theory is notably mentioned in Hájek (2002: s. 3.3). For a treatment of the reference class problem in the context of DA, see Eckhardt (1993, 1997), Bostrom (1997, 2002: ch. 4 pp. 69-72 & ch. 5), Franceschi (1998, 1999). The point emphasized in Franceschi (1999) can be construed as a treatment of the reference class problem within confirmation theory.

xiv Cf. 1996: 260-261.

xv Cf. Leslie (1996: 259).

xvi Cf. Leslie (1996: 258-259): “The thing to note is that the red ball can be treated either just as a red ball or else as a red-or-green ball. Bayes’s Rule applies in both cases. […] All this evidently continues to apply to when being-red-or-green is replaced by being-red-or-pink, or being-red-or-reddish”.

xvii This important aspect of the argument is also underlined in Delahaye (1996). It is also the main theme of Sober (2003).

xviii I thank Nick Bostrom for useful discussion on the reference class problem, and Daniel Andler, Jean-Paul Delahaye, John Leslie, Claude Panaccio, Elliott Sober, and an anonymous referee for the Journal of Philosophical Research, for helpful comments on earlier drafts.

A Solution to Goodman’s Paradox

English Postprint (with additional illustrations) of a paper published in French in Dialogue Vol. 40, Winter 2001, pp. 99-123, under the title “Une Solution pour le Paradoxe de Goodman”.
In the classical version of Goodman’s paradox, the universe where the problem takes place is ambiguous. The conditions of induction being accurately described, I define then a framework of n-universes, allowing the distinction, among the criteria of a given n-universe, between constants and variables. Within this framework, I distinguish between two versions of the problem, respectively taking place: (i) in an n-universe the variables of which are colour and time; (ii) in an n-universe the variables of which are colour, time and space. Finally, I show that each of these versions admits a specific resolution.


This paper is cited in:

  • Alasdair Richmond, The Doomsday Argument, Philosophical Books Vol. 47 No. 2 April 2006, pp. 129–142

A Solution to Goodman’s Paradox

Paul FRANCESCHI



1. The problem


Goodman’s Paradox (hereafter GP) was described by Nelson Goodman (1946).i Goodman presents his paradox as follows.ii Consider an urn containing 100 balls. A ball is drawn from the urn each day for 99 days, until today. Each time, the ball drawn from the urn is red. Intuitively, one expects that the 100th ball drawn from the urn will also be red. This prediction is based on the generalisation according to which all the balls in the urn are red. However, if one considers the property S, “drawn before today and red, or drawn after today and non-red”, one notes that this property is also satisfied by the 99 instances already observed. But the prediction which now ensues, based on the generalisation according to which all the balls are S, is that the 100th ball will be non-red. And this contradicts the preceding conclusion, which nevertheless conforms to our intuition.iii

Goodman expresses GP with the help of an enumerative induction, and one can model GP in terms of the straight rule (SR). If one takes (D) for the definition of the predicate “red”, (I) for the enumeration of the instances, (H) for the ensuing generalisation, and (P) for the corresponding prediction, one then has:

(D) R = red

(I) Rb1·Rb2·Rb3·…·Rb99

(H) Rb1·Rb2·Rb3·…·Rb99·Rb100

∴ (P) Rb100

And also, with the predicate S:

(D*) S = red and drawn before T or non-red and drawn after T

(I*) Sb1·Sb2·Sb3·…·Sb99

(H*) Sb1·Sb2·Sb3·…·Sb99·Sb100 that is equivalent to:

(H’*) Rb1·Rb2·Rb3·…·Rb99·~Rb100

∴ (P*) Sb100 i. e. finally:

∴ (P’*) ~Rb100

The paradox resides in the fact that the two generalisations (H) and (H*) lead respectively to the predictions (P) and (P’*), which are contradictory. Intuitively, the application of SR to (H*) appears erroneous. Goodman also gives in Fact, Fiction and Forecastiv a slightly different version of the paradox, applied in this case to emeralds.v This form is very well known and is based on the predicate “grue” = green and observed before T, or non-green and observed after T.

The predicate S used in Goodman (1946) shares a common structure with “grue”. P and Q being two predicates, this structure corresponds to the following definition: (P and Q) or (~P and ~Q). In what follows, a predicate having this particular structure will be designated as grue, without distinguishing whether the specific form used is that of Goodman (1946) or (1954).
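The common structure (P and Q) or (~P and ~Q) can be made concrete with a small sketch (predicate and variable names are mine, not Goodman’s): both R and S are satisfied by the 99 observed instances, yet the straight rule then grounds contradictory predictions about ball #100.

```python
# The grue-like structure (P and Q) or (not P and not Q) as predicates
# (illustrative sketch; names and the >= boundary are my own choices).

T = 100  # "today": the day of the 100th drawing

def R(ball):
    """The plain predicate: the ball is red."""
    return ball["colour"] == "red"

def S(ball):
    """Goodman's predicate: red and drawn before T, or non-red and drawn at/after T."""
    return (R(ball) and ball["day"] < T) or (not R(ball) and ball["day"] >= T)

# The 99 observed instances: red balls drawn on days 1..99.
observed = [{"colour": "red", "day": d} for d in range(1, 100)]

# Both predicates are satisfied by every observed instance...
assert all(R(b) for b in observed)
assert all(S(b) for b in observed)

# ...but the straight rule then predicts both R(b100) and S(b100),
# and S applied to a ball drawn on day 100 requires it to be NON-red:
ball_100_if_S = {"colour": "blue", "day": T}  # any non-red colour satisfies S
assert S(ball_100_if_S) and not R(ball_100_if_S)  # the contradiction
```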

2. The unification/differentiation duality

The instances are in front of me. Must I describe them by stressing their differences? Or must I describe them by emphasising their common properties? I can proceed either way. To stress the differences between the instances is to operate by differentiation. Conversely, to highlight their common properties is to proceed by unification. Let us consider in turn each of these two modes of proceeding.

Consider the 100 balls composing the urn of Goodman (1946), and consider first the case where my intention is to stress the differences between the instances. One option is then to apprehend the particular, single moment at which each of them is extracted from the urn. The predicates considered are then: red and drawn on day 1, red and drawn on day 2, …, red and drawn on day 99. There are thus 99 different predicates. But this prohibits applying SR, which requires one single predicate. What is it, indeed, to distinguish according to the moment when each ball is drawn? It is to stress an essential difference between the balls, based on the criterion of time. Each ball is thus individualised, and many different predicates result from this: drawn at T1, drawn at T2, …, drawn at T99. This then prevents any inductive move by application of SR: one no longer has a common property allowing one to carry out induction and to apply SR. Here, the cause of the problem lies in having carried out an extreme differentiation.

Alternatively, I can also proceed by differentiation by carrying out an extremely precisevi measurement of the wavelength of the light defining the colour of each ball. I will then obtain a unique measure of the wavelength for each ball of the urn. Thus, I have 100 balls in front of me, and I know with precision the wavelength of the light of 99 of them. The balls respectively have wavelengths of 722.3551 nm, 722.3643 nm, 722.3342 nm, 722.3781 nm, etc. I consequently have 99 distinct predicates P3551, P3643, P3342, P3781, etc. But then I have no way to apply SR, which requires one single predicate. Here also, the common properties needed to implement the inductive process are missing. In the same way as previously, it turns out that I have carried out an extreme differentiation.

What happens now if I proceed exclusively by unification? Let us consider the predicate R corresponding to “red or non-red”. One draws 99 red balls before time T; they are all R. One then predicts that the 100th ball will be R after T, i.e. red or non-red. But this form of induction brings no information here: the resulting conclusion is empty of information. One may call this type of situation empty induction. In this case, one observes that the process of unification of the instances by colour was carried out in a radical way, annihilating in this respect any step of differentiation. The cause of the problem lies thus in the implementation of a process of extreme unification.

If one now considers the viewpoint of colour, it appears that each case previously considered requires a different taxonomy of colours. Thus, use is made successively:

– of our usual taxonomy of colours based on 9 predicates: purple, indigo, blue, green, yellow, orange, red, white, black

– of a taxonomy based on a comparison of the wavelengths of the colours with the set of the real numbers (real taxonomy)

– of a taxonomy based on a single predicate (single taxon taxonomy): red or non-red

But it turns out that each of these three cases can be placed in a more general perspective. Indeed, multiple taxonomies of colours can be used, and those can be ordered from the coarsest (single-taxon taxonomy) to the finest (real taxonomy), from the most unified to the most differentiated. We have in particular the following hierarchy of taxonomies:

– TAX1 = {red or non-red} (single taxon taxonomy)

– TAX2 = {red, non-red} (binary taxonomy)

– …

– TAX9 = {purple, indigo, blue, green, yellow, orange, red, white, black} (taxonomy based on the spectral colours, plus white and black)

– …

– TAX16777216 = {(0, 0, 0), …, (255, 255, 255)} (taxonomy used in computer science, distinguishing 256 levels each of red, green and blue, i.e. 16,777,216 colours)

– …

– TAXR = {370, …, 750} (real taxonomy based on the wavelength of the light, in nanometres)

Within this hierarchy, it appears that the use of extreme taxonomies, such as the single-taxon taxonomy or the real taxonomy, leads to specific problems (respectively extreme unification and extreme differentiation). Thus, the problems mentioned above in the application of an inductive reasoning based on SR occur when the choice within the unification/differentiation duality is made too radically. Such problems relate to induction in general, and this invites us to think that one should privilege neither unification nor differentiation. A predicate such as “red”, associated with our usual taxonomy of colours (TAX9),vii corresponds precisely to such a criterion: it corresponds to a balanced choice in the unification/differentiation duality. This makes it possible to avoid the preceding problems. It does not, however, prevent the emergence of new problems when one tries to implement inductive reasoning in certain situations. And one of these problems is naturally GP.
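The hierarchy can be illustrated by treating each taxonomy as a classification function over wavelengths (a toy sketch; the wavelength band used for “red” is a rough assumption of mine):

```python
# Colour taxonomies of increasing fineness, as functions of a wavelength in nm
# (illustrative sketch; the 620-750 nm band for "red" is a rough assumption).

def tax1(wl):
    # Single-taxon taxonomy: every ball falls under the same predicate,
    # so induction succeeds but the conclusion is empty of information.
    return "red or non-red"

def tax2(wl):
    # Binary taxonomy: a balanced, usable predicate for SR.
    return "red" if 620 <= wl <= 750 else "non-red"

def tax_real(wl):
    # Real taxonomy: each ball gets its own predicate, so no common
    # property remains and SR cannot be applied at all.
    return wl

samples = [722.3551, 722.3643, 722.3342, 722.3781]

print({tax1(wl) for wl in samples})      # one taxon: extreme unification
print({tax2(wl) for wl in samples})      # a single shared predicate "red"
print({tax_real(wl) for wl in samples})  # four taxa: extreme differentiation
```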

Thus, it appears that the choice within the unification/differentiation duality is essential from the viewpoint of induction, for according to whether I choose one way or the other, I will or will not be able to use SR and produce valid inductive inferences. Confronted with several instances, one can implement either a process of differentiation or a process of unification. But the choice that is made largely conditions the later success of the inductive reasoning carried out on those grounds. I must describe both common properties and differences. From there, a valid inductive reasoning can take place. At this point, it appears that the role of the unification/differentiation duality proves crucial for induction. More precisely, a correct choice in the unification/differentiation duality constitutes one of the conditions of induction.

3. Several problems concerning induction

The problems just mentioned illustrate several difficulties inherent in the implementation of the inductive process. However, unlike GP, these problems do not generate a genuine contradiction; from this point of view, they are distinct from GP. Consider now the following situation. I have drawn 99 balls, respectively at times T1, T2, …, T99. The 100th ball will be drawn at T100. One observes that the 99 drawn balls are red. They are thus at the same time red and drawn before T100. Let R be the predicate “red” and T the predicate “drawn before T100”. One then has:

(I) RTb1, RTb2, …, RTb99

(H) RTb1, RTb2, …, RTb99, RTb100

∴ (P) RTb100

By direct application of SR, the following prediction ensues: “the 100th ball is red and drawn before T100”. But this contradicts the data of the experiment, in virtue of which the 100th ball is drawn at T100. There too, the inductive reasoning is based on a formalisation which is that of SR. And just as for GP, SR leads here to a contradiction. This problem, where two predicates are used, will be called 2.

It appears that one can easily build a form of 2 based on one single predicate. A way of doing so is to consider the unique predicate S, defined as “red and drawn before T100”, in replacement of the predicates R and T used previously. The same contradiction then ensues.

Moreover, it appears that one can highlight another version (1) of this problem comprising only one predicate, without using the property “red”, which is useless here. Let indeed T be the predicate “drawn before T100”. One then has:

(I) Tb1, Tb2, …, Tb99

(H) Tb1, Tb2, …, Tb99, Tb100

∴ (P) Tb100

Here also, the conclusion according to which the 100th ball is drawn before T100 contradicts the data of the experiment, according to which the 100th ball is drawn at T100. One then has a contradictory effect, analogous to that of GP, without the structure of “grue” being implemented. Taking into account the fact that only the criterion of time is used to build this problem, it will be denoted in what follows by 1-time.
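The 1-time problem can likewise be made concrete (an illustrative sketch, with names of my own choosing): the predicate “drawn before T100” holds of all 99 observed instances, yet the straight-rule prediction for the 100th ball contradicts the design of the experiment itself.

```python
# The one-predicate problem "1-time" (illustrative sketch): induction on the
# predicate T = "drawn before T100" contradicts the setup of the experiment.

T100 = 100

def T(ball_day):
    """Predicate T: the ball is drawn before T100."""
    return ball_day < T100

observed_days = list(range(1, 100))      # balls b1..b99, drawn on days 1..99
assert all(T(d) for d in observed_days)  # (I) Tb1, ..., Tb99 all hold

# Straight-rule generalisation (H): Tb1, ..., Tb100; hence prediction (P): Tb100.
predicted_Tb100 = True

# But by the design of the experiment, ball #100 is drawn AT T100:
actual_Tb100 = T(100)

print(predicted_Tb100, actual_Tb100)     # True False -- the contradiction
```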

It appears here that problems such as 1-time and 2 lead, just as GP does, to a contradiction. Such is not the case for the other problems related to induction previously mentioned,viii which involve either the impossibility of carrying out induction, or a conclusion empty of information. However, it turns out that the contradiction encountered in 1-time is not of the same nature as that observed in GP. Indeed, in GP one has a contradiction between the two concurrent predictions (P) and (P*). On the other hand, in 1-time the contradiction emerges between, on the one hand, the conditions of the experiment (T = 100) and, on the other hand, the prediction resulting from the generalisation (T < 100).

In any case, the problems just encountered suggest that the SR formalism does not capture the whole of our intuitions related to induction. Hence, it is worth attempting to define accurately the conditions of induction, and adapting the relevant formalism accordingly. However, before carrying out such an analysis, it is necessary to specify in more detail the various elements of the context of GP.

4. The universe of reference

Let us consider the law (L1) according to which “diamond scratches the other solids”. A priori, (L1) strikes us as an undeniable truth. Nevertheless, it turns out that at a temperature higher than 3550°C, diamond melts. Therefore, in the last analysis, the law (L1) is satisfied at normal temperature and, in any case, when the temperature is lower than 3550°C; but such a law does not apply beyond 3550°C. This illustrates how important it is to state the conditions under which the law (L1) is verified, in particular with regard to temperature. Thus, when one states (L1), it proves necessary to specify the conditions of temperature under which (L1) applies. This is tantamount to describing the type of universe in which the law is satisfied.

Let (P1) also be the following proposition: “the volume of the visible universe is more than 1000 times that of the solar system”. Such a proposition strikes us as obvious. But there too, it appears that (P1) is satisfied at the present time, yet proves false at the first moments of the universe. Indeed, when the age of our universe was 10⁻⁶ second after the Big Bang, its volume was approximately equal to that of our solar system. Here also, it thus appears necessary to specify, together with the proposition (P1), the conditions of the universe in which it applies. An unambiguous formulation of (P1) thus comprises a more restrictive temporal clause, such as: “at the present time, the volume of the visible universe is more than 1000 times that of the solar system”. Generally, then, one can think that when a generalisation is stated, it is necessary to specify the conditions of the universe in which this generalisation applies. The precise description of the universe of reference is fundamental, for depending on the conditions of the universe in which one places oneself, the stated law can turn out true or false.

One observes in our universe the presence of both constants and variables. There are thus constants, which constitute the fundamental constants of the universe: the speed of light, c = 2.998 × 10^8 m/s; Planck’s constant, h = 6.626 × 10^-34 J·s; the electron charge, e = 1.602 × 10^-19 C; etc. There are, on the other hand, variables. Among these, one can mention in particular: temperature, pressure, altitude, localisation, time, the presence of a laser radiation, the presence of atoms of titanium, etc.

One often tends, when a generalisation is stated, not to take into account the constants and the variables which are those of our universe considered in its totality. Such is the case, for example, when one considers the situation of our universe on 1 January 2000, at 0h. One then places oneself explicitly in what constitutes a section, a slice of our universe. In effect, time is then regarded not as a variable, but as a constant. Consider also the following: “the dinosaurs had hot blood”ix. Here, one places oneself explicitly in a sub-universe of ours where the parameters of time and space have a restricted scope. The temporal variable is reduced to the particular period of the Earth’s history which knew the appearance of the dinosaurs: from the Triassic to the Cretaceous. And similarly, the space parameter is limited to our planet: Earth. Identically, the conditions of temperature vary within our universe, according to whether one is located at one site or another of it: at the terrestrial equator, on the surface of Pluto, in the heart of Alpha Centauri, etc. But if one is interested exclusively in the balloon used for experimentation within the physics laboratory, where the temperature is invariably maintained at 12°C, one can then validly regard the temperature as a constant. For when such generalisations are expressed, one places oneself not in our universe considered in its totality, but only in what veritably constitutes a specific part, a restriction of it. One can then regard the universe of reference in which one places oneself as a sub-universe of ours. It is thus frequent to express generalisations which hold only for the present time, or for our usual terrestrial conditions. Explicitly or not, the statement of a law comprises a universe of reference. But in the majority of cases, the variables and the constants of the considered sub-universe are distinct from those allowing one to describe our universe in its totality.
For the conditions are extremely varied within our universe: they are very different according to whether one places oneself at the 1st second after the big bang, on Earth in the Precambrian epoch, on our planet in the year 2000, inside the particle accelerator of the CERN, in the heart of our Sun, near a white dwarf, or inside a black hole, etc.

One can also think that it is interesting to be able to model universes whose constants are different from the fundamental constants of our universe. One can thus wish to study, for example, a universe where the mass of the electron is equal to 9.325 × 10^-31 kg, or a universe where the electron charge is equal to 1.598 × 10^-19 C. And in fact, toy-universes, which take into account fundamental constants different from those of our familiar universe, are studied by astrophysicists.

Lastly, when one describes the conditions of a thought experiment, one places oneself, explicitly or not, under conditions which are those of a sub-universe. When one considers for example 100 balls extracted from an urn during 100 consecutive days, one places oneself in a restriction of our universe where the temporal variable is limited to a period of 100 days and where the spatial location is extremely reduced, corresponding for example to a volume approximately equal to 5 dm³. On the other hand, the number of titanium or zirconium atoms possibly present in the urn, the possible existence of a laser radiation, the presence or absence of a sound source of 10 dB, etc., can be omitted and ignored. In this context, it is not necessary to take into account the existence of such variables. In this situation, it is enough to mention the variables and the constants actually used in the thought experiment. For one can indeed think that the number of variables in our universe is so large that it is impossible to enumerate them all. Consequently, it does not appear possible to characterise our universe in terms of all its variables, because one cannot provide an infinite enumeration of them. It appears sufficient to describe the considered sub-universe by mentioning only the constants and the variables which play an effective role in the experiment. Thus, in such situations, one will describe the considered sub-universe by mentioning only the criteria effectively necessary to the description of the experiment.

What precedes encourages one to think that, in general, in order to model the context in which problems such as GP take place, it is convenient to describe a given universe in terms of variables and constants. This leads to defining an n-universe (n ≥ 0) as a universe whose criteria comprise m constants and n variables, where the m constants and n variables constitute the criteria of the given universe. Within this particular framework, one defines a temporal 1-universe (Ω1T) as a universe comprising only one criterion-variable: time. In the same way, one defines a coloured 1-universe (Ω1C) as a universe comprising only one criterion-variable: colour. One will also define a coloured and temporal 2-universe (Ω2CT) as a universe comprising two criterion-variables: time and colour. Etc. In the same way, a universe where all the objects are red, but are characterised by a different localisation, will be modelled by a localised 1-universe (Ω1L) one criterion-constant of which is colour (red).
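The n-universe framework lends itself to a simple formal sketch. The following Python snippet is a minimal illustration of my own (the names are hypothetical and do not come from the original text): an n-universe is modelled as a set of criterion-constants together with a set of criterion-variables.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NUniverse:
    """An n-universe: m criterion-constants (fixed values) and n criterion-variables."""
    constants: tuple   # pairs (criterion, fixed value)
    variables: tuple   # names of the criteria that are free to vary

    @property
    def n(self) -> int:
        return len(self.variables)

# A temporal 1-universe (one criterion-variable: time)
omega_1T = NUniverse(constants=(), variables=("time",))
# A coloured and temporal 2-universe
omega_2CT = NUniverse(constants=(), variables=("colour", "time"))
# A localised 1-universe whose criterion-constant is colour (red)
omega_1L = NUniverse(constants=(("colour", "red"),), variables=("location",))

print(omega_1T.n, omega_2CT.n, omega_1L.n)  # 1 2 1
```

The same structure accommodates the atemporal cases mentioned below: a universe with a time-constant simply lists time among the constants, while an atemporal universe mentions time in neither tuple.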

It should be noted incidentally that the n-universe framework makes it possible to model several interesting situations. Thus, a temporal universe can be regarded as an n-universe one of whose variables is a temporal criterion. Moreover, a universe where one single moment T0 is considered, deprived of the phenomenon of the succession of time, can be regarded as an n-universe where time does not constitute one of the variables, but where there is a time-constant. In the same way, an atemporal universe corresponds to an n-universe no variable of which corresponds to a temporal criterion, and where there is no time-constant either.

In the context which has just been defined, what is it now to be red? Here, being “red” corresponds to two different types of situations, according to the type of n-universe in which one places oneself. It can be, on the one hand, an n-universe one of whose constants is colour. In this type of universe, the colour of the objects is not susceptible to change, and all the objects there are invariably red.

The fact of being “red” can correspond, on the other hand, to an n-universe one of whose criterion-variables is colour. There, an object can be red or non-red. Consider the case of a Ω1C. In such a universe, an object is red or non-red in the absolute. No change of colour is possible there, because no other criterion-variable exists on which such a variation could depend. And in a Ω2CT, being red is being red at time T. Within such a universe, being red is being red relative to time T. Similarly, in a coloured, temporal and localised 3-universe (Ω3CTL), being red is being red at time T and at place L. Etc. In any such universe, being red is being red relative to the other criterion-variables. And the same applies to the n-universes which model a universe such as our own.

At this step arises the problem of the status of the instances of an object of a given type. What is it, then, to be an instance, within this framework? This problem has its importance, because the original versions of GP are based on instances of balls (1946) and emeralds (1954). If one takes into account the case of Goodman (1946), the considered instances are 100 different balls. However, if one considers a unique ball, drawn at times T1, T2, …, T100, one notices that the problem inherent in GP is still present. It suffices indeed to consider a ball whose colour is susceptible to change during the course of time. One has drawn the ball 99 times, at times T1, T2, …, T99, and one has noted each time that the ball was red. This leads to the prediction that the ball will be red at T100. However, this last prediction proves to be contradictory with an alternative prediction based on the same observations, and the projection of the predicate S “red and drawn before T100 or non-red and drawn at T100”x.

The present framework must be capable of handling the diversity of these situations. Can one thus speak of an instantiated and temporal 1-universe, or of an instantiated and coloured 1-universe? Here, one must observe that the fact of being instantiated, for a given universe, corresponds to an additional criterion-variable. For otherwise, what would make it possible to distinguish between the instances? If no criterion distinguishes them, then they are only one and the same thing. And if they are distinct, it is because some criterion makes it possible to differentiate them. Thus, an instantiated and temporal 1-universe is in fact a 2-universe, whose 2nd criterion, which makes it possible to distinguish the instances from one another, is neither mentioned nor made explicit. By making this second criterion-variable explicit, it is thus clear that one is placed in a 2-universe. In the same way, an instantiated and coloured 1-universe is actually a 2-universe one of whose criteria is colour, while the second criterion exists but is not specified.

Another aspect which deserves mention here is the question of the reduction of a given n-universe to another. Is it not possible, indeed, to logically reduce an n-universe to a different system of criteria? Consider for example a Ω3CTL. In order to characterise the corresponding universe, one has 3 criterion-variables: colour, time and localisation. It appears that one can reduce this 3-universe to a 2-universe. This can be carried out by reducing two of the criteria of the 3-universe to one single criterion. In particular, one will reduce both criteria of colour and time to a single criterion of tcolour* (shmolorxi). And one will preserve only two taxa of tcolour*: G and ~G. Consider then a criterion of colour comprising two taxa (red, non-red) and a criterion of time comprising two taxa (before T, after T). If one associates the taxa of colour and time, one obtains four new predicates: red before T, red after T, non-red before T, non-red after T, which one will denote respectively by RT, R~T, ~RT and ~R~T. Several of these predicates are compatible (RT and R~T, RT and ~R~T, ~RT and R~T, ~RT and ~R~T) whereas others are incompatible (RT and ~RT, R~T and ~R~T). At this stage, one has several manners (16)xii of grouping the compatible predicates, making it possible to obtain two new predicates G and ~G of tcolour*:

[Table: the sixteen groupings Z0, …, Z15, obtained by including each of the four compatible conjunctions RT·R~T, RT·~R~T, ~RT·R~T and ~RT·~R~T either in G or in ~G; each conjunction is marked (X) in the eight groupings that include it in G.]

In each of these cases, there indeed results a new single criterion of tcolour* (Z), which substitutes for the two preceding criteria of colour and time. One will denote by Zi (0 ≤ i ≤ 15) the taxa of tcolour* thus obtained. While it is clear that Z15 leads to the empty induction, it should be observed that several cases corresponding to the situation where the instances are RT lead to the problem inherent in GP. One will note thus that Z2, i.e. grue2 (by assimilating the Zi to gruei and the Z15-i to bleeni), is based on the definition: grue2 = red before T and non-red after T. It appears here as a conjunctive interpretation of the definition of “grue”. In the same way, grue7 corresponds to a definition of “grue” based on an exclusive disjunction. Lastly, grue12 is based on the traditional definition: grue12 = red before T or non-red after T, where the disjunction is to be interpreted as an inclusive disjunction.
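The enumeration of these groupings can be checked mechanically. The following Python sketch uses my own encoding of the four compatible conjunctions as pairs (red before T?, red after T?) and makes no claim to reproduce the paper's numbering of the Zi; it simply enumerates the 2^4 = 16 subsets and locates the conjunctive, exclusive-disjunctive and inclusive-disjunctive readings of “grue” among them.

```python
from itertools import product

# A career of an object is a pair (red before T?, red after T?).
# The four compatible conjunctions RT·R~T, RT·~R~T, ~RT·R~T, ~RT·~R~T:
careers = [(True, True), (True, False), (False, True), (False, False)]

# Three candidate definitions of "grue" as predicates on careers:
conjunctive = lambda rb, ra: rb and not ra   # red before T AND non-red after T
exclusive   = lambda rb, ra: rb != (not ra)  # red before T XOR non-red after T
inclusive   = lambda rb, ra: rb or not ra    # red before T OR non-red after T (classical)

# The 16 ways of grouping the careers into a taxon G (the rest falling under ~G):
groupings = {frozenset(c for c, kept in zip(careers, bits) if kept)
             for bits in product([False, True], repeat=4)}

assert len(groupings) == 16
for defn in (conjunctive, exclusive, inclusive):
    extension = frozenset(c for c in careers if defn(*c))
    assert extension in groupings  # each reading of "grue" is one of the 16 taxa

# Sizes of the three extensions: 1 career, 2 careers, 3 careers respectively
print(sorted(len([c for c in careers if d(*c)])
             for d in (conjunctive, exclusive, inclusive)))  # [1, 2, 3]
```

As the text notes, the three familiar readings of “grue” are just three of the sixteen possible taxa, all of which include the case where the instances are RT except those excluding both RT·R~T and RT·~R~T.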

Similarly, it also turns out that a Ω2CT can be reduced to a tcoloured* 1-universe (Ω1Z). More generally, an n-universe is thus reducible to an (n-1)-universe (for n > 1). Thus, if one considers a given universe, several characterisations in terms of n-universes can validly be used. One can in particular apprehend one and the same universe as a Ω3CTL, or as a Ω2ZL. In the same way, one can represent a Ω2CT as a Ω1Z. At this stage, none of these views appears fundamentally better than the others. But each of these characterisations constitutes an alternative way of describing the same reality. This shows finally that an n-universe constitutes in fact an abstract characterisation of a real or an imaginary universe. An n-universe thus constitutes a system of criteria, comprising constants and variables. And in order to characterise the same real or imaginary given universe, one can validly resort to several n-universes. Each of them appears finally as a different characterisation of the given universe, simply based on a different set of primitives.
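The reduction of the pair of criteria (colour, time) to the single criterion of tcolour* can be made concrete as follows. This is a sketch under an assumption of mine, namely the time-indexed reading of the classical grue12 (an object counts as grue at t if it is red and t is before T, or non-red and t is after T), with the pivotal time T chosen arbitrarily:

```python
T = 100  # the pivotal time (arbitrary, for illustration only)

def tcolour(red: bool, t: int) -> str:
    """Reduce the two criteria (colour, time) to the single criterion tcolour*."""
    if t < T:
        return "grue" if red else "bleen"
    return "bleen" if red else "grue"

def colour(tc: str, t: int) -> bool:
    """Recover colour from tcolour* and time: the reduction loses no information."""
    return (tc == "grue") == (t < T)

# Round trip: the (colour, time) description and the (tcolour*, time) description
# are two characterisations of the same reality, based on different primitives.
assert all(colour(tcolour(r, t), t) == r
           for r in (True, False) for t in (0, 99, 100, 200))
print("reduction is invertible")
```

The invertibility is what licenses describing one and the same universe either with (colour, time, localisation) as primitives, as in the Ω3CTL, or with (tcolour*, localisation), as in the Ω2ZL.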

5. Conditions of induction

The fact that the SR formalism gives rise to the GP effect suggests that the intuition which governs our concept of induction is not entirely captured by SR. One may thus think that if the formal approach is necessary and useful as a support to induction, it does not however constitute a sufficient step. For it appears also essential to capture the intuition which governs our inductive reasoning. Therefore it proves necessary to supplement the formal approach to induction with a semantic approach. Goodman himself provides us with a definition of inductionxiii. He defines induction as the projection of characteristics of the past through the future, or more generally, as the projection of characteristics corresponding to a given aspect of an object through another aspect. This last definition corresponds to our intuition of induction. One can think, however, that it is necessary to supplement it by taking into account the preceding observationsxiv concerning the differentiation/unification duality. In that sense, it has been pointed out that induction consists of an inference from instances presenting both common properties and differences. Let the instances-source (instances-S) be the instances to which (I) or (I*) relate, and the instance-destination (instance-D) that which is the subject of (P) or (P*). The common properties relate to the instances-S, and the differentiated properties are established between the instances-S and the instance-D. The following definition ensues: induction consists precisely in the fact that the instance-Dxv also presents the property that is common to the instances-S, while one varies the criterion (or criteria) on which the differences between the instances-S and the instance-D are based. Inductive reasoning is thus based on the constant nature of one property, while some other property varies.

From this definition of induction, several conditions of induction follow straightforwardly. I shall examine them in turn. The first two conditions are the following:

(C1) the instances-S must present some common properties

(C2) the instances-S and the instance-D must present some distinctive properties

It follows that one cannot apply induction in two particular circumstances: firstly (i) when the instances do not present any common property. One will call such a situation a total differentiation of the instances. The problems corresponding to this particular circumstance have been mentioned abovexvi. And secondly (ii) when the instances do not present any distinctive property. One will call such a situation a total unification. The problems encountered in this type of situation have also been mentioned previouslyxvii.

It should also be noted that what is at stake here is not the intrinsic properties of the instances, but rather the analysis carried out by the one who is about to reason by induction.

Taking into account the definition of induction which has been given, a third condition can thus be stated:

(C3) a criterion-variable is necessary for the common properties of the instances-S and another criterion-variable for the distinctive properties

This refers to the structure of the considered universe of reference. Consequently, at least two criterion-variables are necessary in the structure of the corresponding universe of reference. One will call this the minimal condition of induction. Hence, a 2-universe is at least necessary in order for the conditions of induction to be satisfied. Thus, a Ω2CT will be appropriate. In the same way, a temporal and localised 2-universe (Ω2TL) will also satisfy the conditions which have just been defined, etcxviii.

It should be noted that another way of stating this condition is as follows: the criterion-variable for the common properties and the criterion-variable for the differentiated properties must be distinct. The two should not be confused. One can call this the condition of separation of the common properties and the distinctive properties. Such a principle appears as a consequence of the minimal condition of induction: one must have two criteria to perform induction, and these criteria must be different. If one chooses the same criterion for the common properties and the differentiated properties, one is in fact brought back to a single criterion and to the context of a 1-universe, itself insufficient to perform induction.

Lastly, a fourth condition of induction results from the preceding definition:

(C4) one must project the common properties of the instances-S (and not the distinctive properties)

The conditions of induction which have just been stated make it possible from now on to handle the problems involved in the use of SR mentioned abovexix. It follows indeed that the following projectionsxx are correct: C°T in a Ω2CT, C°L in a Ω2CL, Z°L in a Ω2ZL, etc. Conversely, the following projections are incorrect: T°T in a Ω1T, Z°Z in a Ω1Z. In particular, one will note here that the projection T°T in the Ω1T is that of 1-time. 1-time indeed takes place in a Ω1T, whereas induction requires at the same time common properties and distinctive properties. Thus, a 2-universe is at least necessary. Usually, the criterion of time is used for differentiation. But here, it is used for unification (“drawn before T”). That can be done, but provided that one uses a distinct criterion for the differentiated properties. However, whereas the common properties result from this, the differentiated properties are missing. A second criterion – corresponding to the differentiated properties – is thus missing in the considered universe, in order to perform induction validly. Thus 1-time finds its origin in a violation of the minimal condition of induction. One can formulate this solution equivalently with regard to the condition of separation. In effect, in 1-time, the same temporal criterion (drawn before T/drawn after T) is used for the common properties and the differentiated properties, whereas two distinct criteria are necessary. It can thus be analysed as a manifest violation of the condition of separation.

Lastly, the conditions of induction defined above lead one to adapt the formalism used to describe GP. It proves indeed necessary to distinguish between the common and the distinctive property(ies). One will thus use the following formalism in place of the one used above:

(I) RT1·RT2·RT3·…·RT99

(H) RT1·RT2·RT3·…·RT99·RT100

where R denotes the common property and the Ti a distinctive property. It should be noted here that this can concern a single object, or alternatively, instances which are distinguished by a given criterion (one not concerned by the inductive process), according to the n-universe in which one places oneself. Thus, one will use, in the case of a single instance α whose colour is susceptible to change according to time:

(I) RT1·RT2·RT3·…·RT99

or in the case where several instances α1, α2, …, α99, α100 existxxi:

(I) RT1α1·RT2α2·RT3α3·…·RT99α99

6. Origin of the paradox

Given the conditions of induction and the framework of n-universes which have just been defined, one is now in a position to determine the origin of GP. As a preliminary, it is worth describing accurately the conditions of the universe of reference in which GP takes place. Indeed, in the original version of GP, the choice of the universe of reference is not defined accurately. However, one can think that it is essential, in order to avoid any ambiguity, that the latter be described precisely.

The universe of reference in which Goodman (1946) places himself is not defined explicitly, but several elements of the statement make it possible to specify its intrinsic nature. Goodman thus mentions the colours “red” and “non-red”. Therefore, colour constitutes one of the criterion-variables of the universe of reference. Moreover, Goodman distinguishes the balls which are drawn at times T1, T2, T3, …, T100. Thus, time is also a criterion-variable of the considered universe. Consequently, one can describe the minimal universe in which Goodman (1946) places himself as a Ω2CT. Similarly, in Goodman (1954), the criterion-variables of colour (green/non-green) and time (drawn before T/drawn after T) are expressly mentioned. In both cases, one thus places oneself implicitly within the minimal framework of a Ω2CT.

Goodman in addition mentions instances of balls or emeralds. Is it necessary at this stage to resort to an additional criterion-variable making it possible to distinguish between the instances? It appears not. On the one hand, indeed, as we have seen previouslyxxii, one indeed obtains a version of GP by simply considering a Ω2CT and a single object whose colour is susceptible to change during the course of time. On the other hand, it appears that if the criterion which is used to distinguish the instances is not used in the inductive process, it is then useful neither as a common criterion nor as a differentiated criterion. It follows that one can dispense with this 3rd additional criterion. Thus, it proves that the fact of taking into account one single instance or, alternatively, several instances, is not essential to the formulation of GP. In what follows, one will thus be able to consider that the statement applies, indifferently, to a single object or to several instances that are distinguished by a criterion which is not used in the inductive process.

At this step, we are in a position to situate GP within the framework of n-universes. Taking into account the fact that the context of GP is that of a minimal Ω2CT, one will consider successively two situations: that of a Ω2CT, and then that of a 3-universe which comprises, in addition to colour and time, a 3rd criterion-variable.

6.1 “Grue” in the coloured and temporal 2-universe

Consider first the hypothesis of a Ω2CT. In such a universe, being “red” is being red at time T. One has then a criterion of colour for the common properties and a criterion of time for the differentiated properties. Consequently, it appears completely legitimate to project the common property of colour (“red”) through the differentiated criterion of time. Such a projection proves to be in conformity with the conditions of induction stated above.

Let us turn now to the projection of “grue”. One has observed previouslyxxiii that the Ω2CT was reducible to a Ω1Z. Here, the fact of using “grue” (and “bleen”) as primitives is characteristic of the fact that the system of criteria used is that of a Ω1Z. What, then, is the situation when one projects “grue” in the Ω1Z? In such a universe of reference, the unique criterion-variable is tcolour*. An object is there “grue” or “bleen” in the absolute. Consequently, if one indeed has a common criterion (tcolour*), the differentiated criterion is missing, so that induction cannot be performed validly. And the situation in which one is placed is that of a total unification. Thus, such a projection is carried out in violation of the minimal condition of induction. Consequently, it proves that GP cannot take place in the Ω2CT and is then blocked at the stage of the projection of “grue”.

But are these preliminary remarks sufficient to provide, in the context of a Ω2CT, a satisfactory solution to GP? One may think not, because the paradox also arises there in another form: that of the projection of tcolour* through time. One can formalise this projection Z°T as follows:

(I*) GT1·GT2·GT3·…·GT99

(H*) GT1·GT2·GT3·…·GT99·GT100 that is equivalent to:

(H’*) RT1·RT2·RT3·…·RT99·~RT100

(P*) GT100 that is equivalent to:

(P’*) ~RT100

where it is manifest that the elements of GP are still present.

Fundamentally, it appears in this version that the common properties are borrowed from the system of criteria of the Ω1Z, whereas the differentiated properties come from the Ω2CT. A first analysis thus reveals that the projection of “grue” under these conditions presents a defect, which consists in the choice of a given system of criteria for the common properties (tcolour*) and of a different system of criteria for the differentiated properties (time). For the selection of tcolour* is characteristic of the choice of a Ω1Z, whereas the use of time reveals that one places oneself in a Ω2CT. But one must choose one or the other of the reducible systems of criteria to perform induction. In the cases envisaged previously, the choice of the criteria for the common and differentiated properties was carried out within the same system of criteria. But here, the choice of the criteria for the common properties and the differentiated properties is carried out within two different (and reducible) systems of criteria. Thus, the common and differentiated criteria selected for induction are not genuinely distinct. And this appears as a violation of the condition of separation. Consequently, one of the conditions of induction is not respected.

However, the projection Z°T has a certain intuitive support, because it is based on the fact that the notions of “grue before T” and “grue after T” have a certain intuitive meaning. Let us then disregard the violation of the conditions of induction which has just been mentioned, and consider this situation in more detail. In this context, GP is still present, since one observes a contradiction between (P) and (P’*). It is this contradiction that must now occupy us. Consider the particular step of the equivalence between (H*) and (H’*). One conceives that “grue before T” is assimilated here to RT, because the fact that the instances-S are red before T results clearly from the conditions of the experiment. On the other hand, it is worth examining the step according to which (P*) entails (P’*). According to the classical definitionxxiv: “grue” = {RT·R~T, RT·~R~T, ~RT·~R~T}. What is it, then, to be “grue after T”? There, it appears that a “grue” object can be R~T (this corresponds to the case RT·R~T) or ~R~T (this corresponds to the cases RT·~R~T and ~RT·~R~T). In conclusion, the object can be either R~T or ~R~T. Thus, the fact of knowing that an object is “grue after T” does not make it possible to conclude that this object is ~R~T, because it can also be R~T. Consequently, the step according to which (P*) entails (P’*) proves finally to be false. From which it ensues that the contradiction between (P) and (P’*) no longer has any raison d’être.
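This last point can be checked mechanically. The following Python sketch uses my own encoding of careers as pairs (red before T?, red after T?) together with the classical inclusive definition of “grue”, and verifies that knowing an object to be red before T and “grue” leaves its colour after T undetermined:

```python
# A career is a pair (red before T?, red after T?); the four compatible cases:
careers = [(True, True), (True, False), (False, True), (False, False)]

def grue(rb: bool, ra: bool) -> bool:
    """Classical grue12: red before T or non-red after T (inclusive disjunction)."""
    return rb or not ra

# The instances-S are known to be red before T (RT) and assumed "grue":
candidates = [(rb, ra) for (rb, ra) in careers if rb and grue(rb, ra)]

# Their possible colours after T: both values remain open,
# so being grue and RT does not entail ~R~T.
after_T = {ra for (_, ra) in candidates}
print(after_T)
```

Both `(True, True)` and `(True, False)` survive the filter, which is exactly the observation that (P*) does not entail (P’*).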

One can convince oneself that this analysis does not depend on the particular choice of the classical definition of “grue” (grue12), by considering other definitions. Consider for example the definition based on grue9: “grue” = {RT·~R~T, ~RT·~R~T} and “bleen” = {RT·R~T, ~RT·R~T}. But in this version, one notes that GP does not emerge, because the instances-S, which are RT, can equally well turn out “grue” or “bleen”. And the same applies if one considers a conjunctive definition (grue2) such as “grue” = {RT·~R~T}. In such a case indeed, the instances-S are “grue” only if they are RT but also ~R~T. However, this does not correspond to the initial conditions of GP in the Ω2CT, where one does not know whether the instances-S are ~R~T.
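Under the same hypothetical encoding of careers as above, one can check that these alternative definitions indeed fail to classify instances known only to be red before T:

```python
careers = [(True, True), (True, False), (False, True), (False, False)]

# grue9: "non-red after T"; bleen9: "red after T"
grue9  = lambda rb, ra: not ra
bleen9 = lambda rb, ra: ra
# grue2 (conjunctive): "red before T and non-red after T"
grue2  = lambda rb, ra: rb and not ra

# Instances known only to be red before T (RT):
rt_careers = [c for c in careers if c[0]]

# Under grue9, an RT instance may turn out grue or bleen: no paradox emerges.
assert any(grue9(*c) for c in rt_careers) and any(bleen9(*c) for c in rt_careers)
# Under grue2, calling the instances-S "grue" would already presuppose ~R~T:
assert all(not ra for (rb, ra) in careers if grue2(rb, ra))
print("alternative definitions leave RT instances unclassified")
```

In both cases the premise that the instances-S are “grue” is either unavailable or question-begging, which is why the paradox does not get started.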

One could also think that the problem is related to the use of a taxonomy of tcolour* based on two taxa (G and ~G). Consider then a taxonomy of tcolour* based on 4 taxa: Z0 = RT·R~T, Z1 = RT·~R~T, Z2 = ~RT·R~T, Z3 = ~RT·~R~T. But on this hypothesis, it appears clearly that once the instances-S are, for example, Z1, one finds oneself back in the preceding situation.

The fact of considering “grue after T”, “grue before T”, “bleen before T”, “bleen after T” can be assimilated to an attempt to express “grue” and “bleen” with the help of our own criteria, and in particular that of time. It can be considered here as a form of anthropocentrism, underlain by the idea of expressing the Ω1Z with the help of the taxa of the Ω2CT. Since one knows the code defining the relations between two reducible n-universes – the Ω1Z and the Ω2CT – and one has partial data, one can be tempted to elucidate completely the predicates of the foreign n-universe. Knowing that the instances are GT, G~T, ~GT, ~G~T, I can deduce that they are respectively {RT, ~RT}, {R~T, ~R~T}, {~RT}, {R~T}. But as we have seen, from the fact that the instances are GT and RT, I cannot deduce that they will be ~R~T.

The reasoning in this version of GP is based on the apparently inductive idea that what is “grue before T” is also “grue after T”. But in the context which is that of the Ω1Z, when an object is “grue”, it is “grue” in the absolute. For no additional criterion exists which can make its tcolour* vary. Thus, when an object is GT, it is necessarily G~T. And from the information according to which an object is GT, one can thus conclude, by deduction, that it is also G~T.

From what precedes, it ensues that the version of GP related to Z°T presents the apparent characters of induction, but does not constitute an authentic form of this type of reasoning. Z°T thus constitutes a disguised form of induction, for two principal reasons: first, it is a projection through the differentiated criterion of time, which constitutes the standard mode of our inductive practice. Second, it is based on the intuitive principle according to which everything that is GT is also G~T. But as we have seen, this is in reality a deductive form of reasoning, whose true nature is masked by an apparently inductive move. And this leads to the conclusion that the form of GP related to Z°T is in fact truly a pseudo-induction.

6.2 “Grue” in the coloured, temporal and localised 3-universe

Consider now the case of a 3-universe comprising a 3rd criterion-variablexxv in addition to colour and time. This type of universe of reference also corresponds to the definition of a minimal Ω2CT. Let us choose for this 3rd criterion localisationxxvi. Consider then a Ω3CTL. Consider first (H) in such a 3-universe. To be “red” in the Ω3CTL is to be red at time T and at location L. According to the conditions of GP, colour corresponds to the common properties, and time to the differentiated properties. One has then the following projection C°TL:

(I) RT1L1·RT2L2·RT3L3·…·RT99L99

(H) RT1L1·RT2L2·RT3L3·…·RT99L99·RT100L100

∴ (P) RT100L100

where, taking into account the conditions of induction, it proves legitimate to project the common property (“red”) of the instances-S into differentiated time and location, and to predict that the 100th ball will be red. Such a projection appears completely correct, and proves in all respects to conform to the conditions of induction mentioned above.

What happens now with (H*) in the Ω3CTL? It has been observed that the Ω3CTL can be reduced to a Ω2ZL. In this last n-universe, the criterion-variables are tcolour* and localisation. Being “grue” is there relative to location: to be “grue” is to be “grue” at location L. What is projected is the tcolour*, i.e. the fact of being “grue” or “bleen”. There is thus a common criterion of tcolour* and a differentiated criterion of localisation. Consequently, if the instances-S are considered “grue”, one can equally well project the common property “grue” into a differentiated criterion of localisation. Consider then the projection Z°L in the Ω2ZL:

(I*) GL1·GL2·GL3·…·GL99

(H*) GL1·GL2·GL3·…·GL99·GL100

∴ (P*) GL100

Such a projection is in conformity with the conditions mentioned above, and constitutes consequently a valid form of induction.

In this context, one can validly project a predicate with a structure identical to that of “grue” in the case of emeralds. Consider the definition “grue” = green before T or non-green after T, where T = 10 billion years. It is known that by then our Sun will be extinct, gradually becoming a white dwarf. The conditions of our atmosphere will be radically different from what they currently are, and the temperature in particular will rise in considerable proportions, to reach 8000°. Under these conditions, the structure of many minerals will change radically. This should normally be the case for our current emeralds, whose colour should be modified by the enormous rise in temperature that will ensue. Thus, I currently observe an emerald: it is “grue” (for T = 10 billion years). If I project this property through a criterion of location, I legitimately conclude that an emerald found in the heart of the Amazonian forest will also be “grue”, just like an emerald that has just been extracted from a mine in South Africa.

At this stage, one could wonder whether the projectibility of “grue” is not intrinsically tied to the choice of a definition of “grue” based on inclusive disjunction (grue12). Nevertheless, one easily checks, using an alternative definition of “grue”, that its projection remains validxxvii.

It should be noticed that this expresses the fact that the taxonomy based on tcolour* is coarser than that based on time and colour. In effect, the former comprises only 2 taxa (grue/bleen), whereas the latter presents 4 of them. By reducing the criteria of colour and time to the single criterion of tcolour*, one has replaced 4 taxa (RT·R~T, RT·~R~T, ~RT·R~T, ~RT·~R~T) by 2. Thus “grue” constitutes, from this point of view, a predicate coarser than “red”. The universe described has not changed, but the n-universes, which are systems of criteria describing these universes, are different. With the tcolour* thus defined, one has fewer predicates at one’s disposal to describe the same reality. The predicates “grue” and “bleen” are, for us, not very informative, and are in any case less informative than our predicates “red”, “non-red”, “before T”, etc. But this does not prevent “grue” and “bleen” from being projectible.
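The coarseness claim admits a trivial combinatorial check. The following sketch (my own encoding, not part of the original argument) simply counts the taxa generated by the two systems of criteria:

```python
from itertools import product

# The criteria of colour (R / ~R) and time (before T / after T) jointly
# generate 2 x 2 = 4 taxa: RT·R~T, RT·~R~T, ~RT·R~T, ~RT·~R~T.
colour_time_taxa = list(product(["RT", "~RT"], ["R~T", "~R~T"]))
assert len(colour_time_taxa) == 4

# The single criterion of tcolour* offers only 2 taxa.
tcolour_taxa = ["grue", "bleen"]
assert len(tcolour_taxa) == 2

# The tcolour* taxonomy is therefore coarser: it distinguishes at most
# 2 classes of objects where colour + time distinguishes 4.
assert len(tcolour_taxa) < len(colour_time_taxa)
```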

Whereas the projection of “grue” appears valid in the Ω2ZL, it should be noticed that one does not observe in this case the contradiction between (P) and (P’*). For here (I*) is indeed equivalent to:

(I’*) RT1L1·RT2L2·RT3L3·…·RT99L99

since, knowing from the initial data of GP that the instances-S are RT, one may validly replace the GLi by the RTiLi (i < 100). But it appears that on this hypothesis, (P*) does not entail:

(P’*) ~RT100L100

because one has no indication relating to the temporality of the 100th instance, since only the localisation constitutes here the differentiated criterion. Consequently, in the case of the Ω3CTL one indeed has a version built with the elements of GP where the projection of “grue” is validly carried out, but which does not present a paradoxical nature.

7. Conclusion

In the solution to GP proposed by Goodman, a predicate is projectible or nonprojectible absolutely, and one has in addition a correspondence between the entrenchedxxviii/non-entrenched and the projectible/nonprojectible predicates. Goodman, moreover, does not provide a justification for this assimilation. In the present approach, there is no such dichotomy, because a given predicate P reveals itself projectible in one n-universe and nonprojectible in another. Thus, P is projectible relative to a given universe of reference. One has thus the distinction projectible/nonprojectible relative to a given n-universe, and this distinction is justified by the conditions of induction and by the fundamental mechanism of induction related to the unification/differentiation duality. There are thus n-universes where “green” is projectible and others where it is not. In the same way, “grue” appears here projectible relative to certain n-universes. Neither “green” nor “grue” is projectible absolutely, but only relative to a given universe. Like some other predicates, “grue” is projectible in certain universes of reference, but nonprojectible in othersxxix.

Thus, it proves that one of the causes of GP resides in the fact that one classically operates a dichotomy between projectible and nonprojectible predicates. The solutions classically suggested for GP are respectively based on the distinctions temporal/nontemporal, local/non-local, qualitative/nonqualitative, entrenched/non-entrenched, etc., placed in one-to-one correspondence with the projectible/nonprojectible distinction. One wonders whether a given predicate P* having the structure of “grue” is projectible absolutely. This comes from the fact that in GP one has a contradiction between the two concurrent predictions (P) and (P*). One classically deduces that one of the two predictions must be rejected, along with one of the two generalisations (H) or (H*) on which these predictions are respectively based. Conversely, in the present analysis, whether one considers the authentic projection Z°L or the pseudo-projection Z°T, one has no contradiction between (P) and (P’*). Consequently, one is no longer constrained to reject either (H) or (H*), and the distinction between projectible and nonprojectible predicates no longer appears indispensablexxx.

How is the choice of our usual n-universe carried out in this context? N-universes such as the Ω2CT, the Ω3CTL, the Ω2ZL, etc. are appropriate for performing induction. But we naturally tend to privilege those based on criteria structured finely enough to allow a maximum of combinations of projections. If one operates with the criteria Z and L in the Ω2ZL, one restricts oneself to a limited number of combinations: Z°L and L°Z. Conversely, if one retains the criteria C, T and L, one places oneself in the Ω3CTL and has at one’s disposal the projections C°TL, T°CL, L°CT, CT°Lxxxi, CL°T, TL°C: a maximum of combinations. This seems to encourage preferring the Ω3CTL to the Ω2ZL. Of course, pragmatism must play a role in the choice of the best alternative among our criteria. But it seems that it is only one of the multiple factors which interact to allow the optimisation of our criteria for carrying out the primitive operations of grouping and differentiation, in order then to be able to generalise, classify, order, make assumptions or forecastxxxii. Among these factors, one can in particular mention: pragmatism, simplicity, flexibility of implementation, polyvalencexxxiii, economy of means, powerxxxiv, but also the nature of our real universe, the structure of our organs of perception, the state of our scientific knowledge, etcxxxv. Our usual n-universes are optimised with regard to these various factors. But this validly leaves room for the choice of other systems of criteria, according to the variations of one or the other of these parametersxxxvi.
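The count of available projections above follows a simple pattern: each projection pairs a non-empty set of common criteria with the non-empty complementary set of differentiated criteria, yielding 2^n − 2 projections for n criteria. A small enumeration sketch (the function name is mine, offered only as an illustration) recovers the two lists given in the text:

```python
from itertools import combinations

def projections(criteria):
    """Enumerate the projections X°Y available in an n-universe: a
    non-empty set X of common criteria is projected through the
    non-empty complementary set Y of differentiated criteria."""
    result = []
    for r in range(1, len(criteria)):
        for common in combinations(criteria, r):
            differentiated = "".join(c for c in criteria if c not in common)
            result.append("".join(common) + "\u00b0" + differentiated)
    return result

print(projections(["Z", "L"]))       # ['Z°L', 'L°Z'] : 2 combinations
print(projections(["C", "T", "L"]))  # ['C°TL', 'T°CL', 'L°CT', 'CT°L', 'CL°T', 'TL°C']
```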

i Nelson Goodman, “A Query On Confirmation”, Journal of Philosophy, vol. 43 (1946), p. 383-385; in Problems and Projects, Indianapolis, Bobbs-Merrill, 1972, p. 363-366.

ii With some minor adaptations.

iii See Goodman, “A Query On Confirmation”, p. 383: “Suppose we had drawn a marble from a certain bowl on each of the ninety-nine days up to and including VE day and each marble drawn was red. We would expect that the marble drawn on the following day would also be red. So far all is well. Our evidence may be expressed by the conjunction ‘Ra1·Ra2·…·Ra99’ which well confirms the prediction ‘Ra100’. But increase of credibility, projection, ‘confirmation’ in any intuitive sense, does not occur in the case of every predicate under similar circumstances. Let ‘S’ be the predicate ‘is drawn by VE day and is red, or is drawn later and is non-red.’ The evidence of the same drawings above assumed may be expressed by the conjunction ‘Sa1·Sa2·…·Sa99’. By the theories of confirmation in question this well confirms the prediction ‘Sa100’; but actually we do not expect that the hundredth marble will be non-red. ‘Sa100’ gains no whit of credibility from the evidence offered.”

iv Nelson Goodman, Fact, Fiction and Forecast, Cambridge, MA, Harvard University Press, 1954.

v Ibid., p. 73-4: “Suppose that all emeralds examined before a certain time t are green. At time t, then, our observations support the hypothesis that all emeralds are green; and this is in accord with our definition of confirmation. […] Now let me introduce another predicate less familiar than “green”. It is the predicate “grue” and it applies to all things examined before t just in case they are green but to other things just in case they are blue. Then at time t we have, for each evidence statement asserting that a given emerald is green, a parallel evidence statement asserting that that emerald is grue.”

vi For example with an accuracy of 10⁻⁴ nm.

vii Or any taxonomy which is similar to it.

viii See §2 above.

ix This assertion is controversial.

x Such a remark also applies to the statement of Goodman, Fact, Fiction and Forecast.

xi As J.S. Ullian mentions, “More on ‘Grue’ and Grue”, Philosophical Review, vol. 70 (1961), p. 386-389, at p. 387.

xii I.e. C(0, 4)+C(1, 4)+C(2, 4)+C(3, 4)+C(4, 4) = 2⁴ = 16, where C(p, q) denotes the number of combinations of q elements taken p at a time.

xiii See Goodman, “A Query On Confirmation”, p. 383: “Induction might roughly be described as the projection of characteristics of the past into the future, or more generally of characteristics of one realm of objects into another.”

xiv See §2 above.

xv One can of course alternatively take into account several instances-D.

xvi See §2 above.

xvii Ibid.

xviii For the application of this condition, one must take into account the remarks mentioned above concerning the problem of the status of the instances. Thus, one must actually compare an instantiated and temporal 1-universe to a 2-universe one of the criteria of which is temporal, and the second criterion of which is not explicitly mentioned. Similarly, an instantiated and coloured 1-universe is in fact assimilated to a 2-universe one of the criteria of which is colour, and the second criterion of which is not specified.

xix See §3 above.

xx With the notations C (colour), T (time), L (localisation) and Z (tcolour*).

xxi However, since the fact that there exists one or more instances is not essential in the formulation of the given problem, one will obviously be able to abstain from making mention of it.

xxii See §4.

xxiii Ibid.

xxiv It is the one based on the inclusive disjunction (grue12).

xxv The same solution applies, of course, if one considers a number of criterion-variables higher than 3.

xxvi Any other criterion distinct from colour or time would also be appropriate.

xxvii In particular, it appears that the projection of a conjunctive definition (grue2) is in fact familiar to us. In effect, we proceed no otherwise when we project the predicate “being green before maturity and red after maturity”, applicable to tomatoes, through a differentiated criterion of location: it is true of the 99 instances-S observed in Corsica and Provence, and is validly projected to a 100th instance located in Sardinia. One can observe that such a type of projection is in particular regarded as nonproblematic by Jackson (Frank Jackson, “‘Grue’”, Journal of Philosophy, vol. 72 (1975), p. 113-131): “There seems no case for regarding ‘grue’ as nonprojectible if it is defined this way. An emerald is grue1 just if it is green up to T and blue thereafter, and if we discovered that all emeralds so far examined had this property, then, other things being equal, we would probably accept that all emeralds, both examined and unexamined, have this property (…)” (p. 115). If one were to place such a predicate within the present analysis, one would then consider that the projection is carried out, for example, through a differentiated criterion of localisation.

xxviii Goodman, Fact, Fiction and Forecast.

xxix The account presented in J. Holland, K. Holyoak, R. Nisbett and P. Thagard (Induction, Cambridge, MA; London, MIT Press, 1986) appears to me to constitute a variation of Goodman’s solution, directed towards the computer-based processing of data and based on the distinction integrated/non-integrated in the default hierarchy. But Holland’s solution presents the same disadvantage as Goodman’s: what justification, other than an anthropocentric one, does one have for this distinction? See p. 235: “Concepts such as “grue”, which are of no significance to the goals of the learner, will never be generated and hence will not form part of the default hierarchy. (…) Generalization, like other sorts of inference in a processing system, must proceed from the knowledge that the system already has”.

The present analysis also differs from the one presented by Susan Haack (Evidence and Inquiry, Oxford; Cambridge, MA, Blackwell, 1993), because the existence of natural kinds does not constitute here a condition for induction. See p. 134: “There is a connection between induction and natural kinds. […] the reality of kinds and laws is a necessary condition of successful inductions”. In the present context, the satisfaction of the conditions of induction (a common criterion, a distinct differentiated criterion, etc.) suffices to perform induction.

xxx A similar remark is made by Frank Jackson in conclusion of his article (“‘Grue’”, p. 131): “[…] the SR can be specified without invoking a partition of predicates, properties or hypotheses into the projectible and the nonprojectible”. For Jackson, all noncontradictory predicates are projectible: “[…] all (consistent) predicates are projectible.” (p. 114). Such a conclusion is however stronger than the one that results from the current analysis. For Jackson, all predicates are projectible in the absolute, whereas in the present context there are no projectible or nonprojectible predicates in the absolute: it is only relative to a given n-universe that a predicate P reveals itself projectible or nonprojectible.

More generally, the present analysis differs fundamentally from Jackson’s in that the solution suggested to GP does not rest on the counterfactual condition. The latter appears indeed too closely tied to the use of certain predicates (examined, sampled, etc.). In the present context, by contrast, the problem is considered from a general viewpoint, independently of the particular nature of the predicates constituting the definition of “grue”.

xxxi Such a projection corresponds for example to the generalisation according to which “the anthropomorphic statue-menhirs are of the colour of granite and date from the Age of Bronze”.

xxxii As Ian Hacking underlines it, Le plus pur nominalisme, Combas, L’éclat, 1993, p. 9: “Utiliser un nom pour une espèce, c’est (entre autres choses) vouloir réaliser des généralisations et former des anticipations concernant des individus de cette espèce. La classification ne se limite pas au tri : elle sert à prédire. C’est une des leçons de la curieuse “énigme” que Nelson Goodman publia il y a quarante ans.” My translation: “To use a name for a species, it is (among other things) to want to carry out generalisations and to form anticipations concerning the individuals of this species. Classification is not limited to sorting: it is used to predict. It is one of the lessons of the strange “riddle” which Nelson Goodman published forty years ago.”

xxxiii The fact that the same criterion can be used at the same time as a common and as a differentiated criterion (possibly by resorting to different taxa).

xxxiv I.e. the number of combinations made possible.

xxxv This enumeration does not claim to be exhaustive. A thorough study of this question would of course be necessary.

xxxvi I thank the editor of Dialogue and two anonymous referees for very helpful comments on an earlier draft of this paper.

A Dichotomic Analysis of the Surprise Examination Paradox

English translation of a paper appeared in French in Philosophiques 2005, vol. 32, pages 399-421 (with minor changes with regard to the published version).

This paper proposes a new framework for solving the surprise examination paradox. I begin with a survey of the main contributions to the literature on the paradox. I then introduce a distinction between a monist and a dichotomic analysis of the paradox. With the help of a matrix notation, I present a dichotomy that leads to distinguishing two fundamentally and structurally different notions of surprise, based respectively on a conjoint and on a disjoint structure. I then describe how Quine’s solution and Hall’s reduction apply to the version of the paradox corresponding to the conjoint structure. Lastly, I present a solution to the version of the paradox based on the disjoint structure.

A Dichotomic Analysis of the Surprise Examination Paradox

I shall present in what follows a new conceptual framework to solve the surprise examination paradox (henceforth, SEP), in the sense that it reorganizes, by adapting them, several elements of solution described in the literature. The solution suggested here rests primarily on the following elements: (i) a distinction between a monist and a dichotomic analysis of the paradox; (ii) the introduction of a matrix definition, which is used as support with several variations of the paradox; (iii) the distinction between a conjoint and a disjoint definition of the cases of surprise and of non-surprise, leading to two structurally different notions of surprise.

In section 1, I describe the paradox and the main solutions found in the literature. In section 2, I describe, in a simplified way, the solution to the paradox that results from the present approach, and introduce the distinction between a monist and a dichotomic analysis of the paradox. I then present a dichotomy that makes it possible to distinguish two fundamentally and structurally different versions of the paradox: on the one hand, a version based on a conjoint structure of the cases of non-surprise and of surprise; on the other hand, a version based on a disjoint structure. In section 3, I describe how Quine’s solution and Hall’s reduction apply to the version of SEP corresponding to the conjoint structure. In section 4, I present the solution to SEP corresponding to the disjoint structure. Lastly, in section 5, I describe, within the framework of the present solution, what the student’s reasoning should have been.

1. The paradox

The surprise examination paradox originates in an actual event. In 1943-1944, the Swedish authorities planned to carry out a civil defence exercise. They broadcast a radio announcement according to which a civil defence exercise would take place during the following week. However, in order to perform the exercise under optimal conditions, the announcement also specified that nobody could know in advance the date of the exercise. The mathematician Lennart Ekbom noticed the subtle problem arising from this announcement and exposed it to his students. A broad diffusion of the paradox throughout the world then ensued.

SEP first appeared in the literature in an article by D. O’Connor (1948). O’Connor presents the paradox in the form of the announcement of a military training exercise. Later on, SEP appeared in the literature in other forms, such as the announcement of the appearance of an ace in a deck of cards (Scriven 1951) or of a hanging (Quine 1953). However, the version of the paradox based on a professor’s announcement of a surprise examination has remained the most common form. The traditional version of the paradox runs as follows: a professor announces to his students that an examination will take place during the next week, but that they will not be able to know in advance the precise day on which it will occur; the examination will thus occur by surprise. The students reason as follows. The examination cannot take place on Saturday, they think, for otherwise they would know in advance that it would take place on Saturday, and it could not occur by surprise. Thus Saturday is ruled out. Moreover, the examination cannot take place on Friday, for otherwise the students would know in advance that it would take place on Friday, and it could not occur by surprise. Thus Friday is also ruled out. By similar reasoning, the students successively eliminate Thursday, Wednesday, Tuesday and Monday. Finally, every day of the week is ruled out. However, this does not prevent the examination from finally occurring by surprise, say, on Wednesday. The students’ reasoning thus proved fallacious, even though it appears intuitively valid. The paradox lies in the fact that the students’ reasoning seems valid, whereas it finally proves to be in contradiction with the facts, namely that the examination can truly occur by surprise, in accordance with the professor’s announcement.
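The students’ elimination argument can be made explicit with a short sketch (a hypothetical model of the fallacious reasoning, not an endorsement of it): working backwards from Saturday, the last remaining candidate day is ruled out at each step, until no day is left.

```python
def students_elimination(days):
    """Model the students' backward reasoning: at each step, the last
    remaining candidate day is ruled out, because an examination held
    on it would be known in advance and so could not be a surprise."""
    candidates = list(days)
    for _ in days:
        # The last remaining candidate cannot host a surprise examination.
        candidates.pop()
    return candidates

week = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
print(students_elimination(week))  # [] : every day of the week is ruled out
```

The model reproduces the students’ conclusion that no day remains, which is precisely the conclusion the actual examination refutes.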

In the literature, several solutions to SEP have been proposed. There does not exist however, at present time, a consensual solution. I will briefly mention the principal solutions which were proposed, as well as the fundamental objections that they raised.

A first attempted solution appeared with O’Connor (1948). This author pointed out that the paradox was due to the contradiction between the professor’s announcement and its implementation. According to O’Connor, the announcement that the examination was to occur by surprise was in contradiction with the fact that the details of its implementation were known. Thus, the statement of SEP was, according to O’Connor, self-refuting. However, such an analysis proved inadequate, because it finally appeared that the examination could truly take place under conditions where it occurred by surprise, for example on Wednesday. The examination could thus finally occur by surprise, thereby confirming, and not refuting, the professor’s announcement. This observation had the effect of making the paradox reappear.

Quine (1953) also proposed a solution to SEP. Quine considers the student’s final conclusion according to which the examination can occur by surprise on no day of the week. According to Quine, the student’s error lies in not having considered from the beginning the hypothesis that the examination might not take place on the last day. For considering precisely that the examination will not take place on the last day is what finally makes it possible for the examination to occur by surprise on the last day. If the student had taken this possibility into account from the beginning, he would not have concluded fallaciously that the examination cannot occur by surprise. However, Quine’s solution has drawn criticism, notably from commentators (Ayer 1973, Janaway 1989 and also Hall 1999) who stressed that it did not make it possible to handle several variations of the paradox. Ayer thus imagines a version of SEP where a given person is informed that the cards of a deck will be turned over one by one, but where that person will not know in advance when the ace of Spades will appear. Nevertheless, the person is authorised to check the presence of the ace of Spades before the deck is shuffled. The purpose of this objection to Quine’s solution is to highlight a situation where the paradox is quite present but where Quine’s solution no longer applies, because the student knows with certainty, given the initial data of the problem, that the examination will indeed take place.

According to another approach, defended in particular by R. Shaw (1958), the structure of the paradox is inherently self-referential. According to Shaw, the fact that the examination must occur by surprise is tantamount to the fact that the date of the examination cannot be deduced in advance. But the fact that the students cannot know in advance, by deduction, the date of the examination constitutes precisely one of the premises. The paradox thus finds its origin, according to Shaw, in the self-referential structure of the professor’s announcement, and this self-reference constitutes the cause of the paradox. However, such an analysis did not prove convincing, for it failed to do justice to the fact that, despite its self-referential structure, the professor’s announcement was finally confirmed, since the examination could finally occur by surprise, say on Wednesday.

Another approach, put forth by Richard Montague and David Kaplan (1960), is based on the analysis of the structure of SEP, which proves, according to these authors, to be that of the paradox of the Knower. The latter paradox is itself a variation of the Liar paradox. What Montague and Kaplan ultimately propose is thus a reduction of SEP to the Liar paradox. However, this approach did not prove convincing either. It was criticised for failing to take into account, on the one hand, the fact that the professor’s announcement can be finally confirmed, and on the other hand, the fact that the paradox can be formulated in a non-self-referential way.

It is also worth mentioning the analysis developed by Robert Binkley (1968). In his article, Binkley exposes a reduction of SEP to Moore’s paradox. The author makes the point that on the last day, SEP reduces to a variation of the proposition ‘P and I don’t know that P’, which constitutes Moore’s paradox. Binkley then extends his analysis concerning the last day to the other days of the week. However, this approach has met strong objections, resulting in particular from the analysis of Wright and Sudbury (1977).

Another approach also deserves to be mentioned: the one developed by Paul Dietl (1973) and Joseph Smith (1984). According to these authors, the structure of SEP is that of the sorites paradox. What Dietl and Smith then propose is a reduction of SEP to the sorites paradox. However, such an analysis met serious objections, raised in particular by Roy Sorensen (1988).

It is lastly worth mentioning the approach presented by Crispin Wright and Aidan Sudbury (1977). The analysis developed by these authors1 results in distinguishing two cases: on the one hand, on the last day, the student is in a situation resulting from Moore’s paradox; on the other hand, on the first day, the student is in a fundamentally different situation, where he can validly believe in the professor’s announcement. The description of these two types of situation thus leads to the rejection of the principle of temporal retention. According to this principle, what is known at a temporal position T0 is also known at a later temporal position T1 (with T0 < T1). However, the analysis of Wright and Sudbury appeared vulnerable to an argument developed by Sorensen (1982). The latter author presented a version of SEP (the Designated Student Paradox) which did not rely on the principle of temporal retention, on which the approach of Wright and Sudbury rested. According to Sorensen’s variation, the paradox was quite present, but without the conditions of its statement requiring reliance on the principle of temporal retention. Sorensen describes the following variation of the paradox. Five students, A, B, C, D and E, are placed, in this order, one behind the other. The professor then shows the students four silver stars and one gold star. He places a star on the back of each student. Lastly, he announces to them that the one among them who has the gold star on his back has been designated to take an examination. But, the professor adds, this examination will constitute a surprise, because the students will only know who was designated when they break their alignment. Under these conditions, it appears that the students can implement a reasoning similar to that which prevails in the original version of SEP. But the original version is diachronic, whereas the variation described by Sorensen is, by contrast, synchronic, and as such does not rest on any principle of temporal retention.

Given the above elements, it appears that the stakes and the philosophical implications of SEP are considerable. They are located at several levels and thus relate2 to the theory of knowledge, deduction, justification, the semantic paradoxes, self-reference, modal logic and vague concepts.

2. Monist or dichotomic analysis of the paradox

Most analyses classically proposed to solve SEP are based on an overall solution which applies, in a general way, to the situation of SEP. In this type of analysis, a single solution is presented, which is supposed to apply to all variations of SEP. Such a solution has a unitary nature and appears based on what can be termed a monist theory of SEP. Most solutions to SEP proposed in the literature are monist analyses. Characteristic examples of this type of analysis of SEP are the solutions suggested by Quine (1953) or Binkley (1968). In a similar way, the solution envisaged by Dietl (1973), which is based on a reduction of SEP to the sorites paradox, also constitutes a monist solution to SEP.

Conversely, a dichotomic analysis of SEP is based on a distinction between two different scenarios of SEP and on the formulation of an independent solution for each of the two scenarios. In the literature, the only analysis which has a dichotomic nature, as far as I know, is that of Wright and Sudbury mentioned above. In what follows, I will present a dichotomic solution to SEP. This solution is based on the distinction of two variations of SEP, associated with concepts of surprise that correspond to different structures of the cases of non-surprise and of surprise.

At this step, it proves useful to introduce a matrix notation. With the help of the latter, the various cases of non-surprise and of surprise can be modelled with the following S[k, s] table, where k denotes the day on which the examination takes place and S[k, s] denotes whether the corresponding case of non-surprise (s = 0) or of surprise (s = 1) is made possible (S[k, s] = 1) or not (S[k, s] = 0) by the conditions of the announcement (with 1 ≤ k ≤ n).3 If one considers for example 7-SEP4, S[7, 1] = 0 denotes the fact that the surprise is not possible on the 7th day, and conversely, S[7, 1] = 1 denotes the fact that the surprise is possible on the 7th day; in the same way, S[1, 0] = 0 denotes the fact that the non-surprise is not possible on the 1st day under the conditions of the announcement, and conversely, S[1, 0] = 1 denotes the fact that the non-surprise is possible on the 1st day.

The dichotomy on which rests the present solution results directly from the analysis of the structure which makes it possible to describe the concept of surprise corresponding to the statement of SEP. Let us consider first the following matrix, which corresponds to a maximal definition, where all cases of non-surprise and of surprise are made possible by the professor’s announcement (with ■ = 1 and □ = 0):

(D1)      S[k, 0]  S[k, 1]
S[7,s]       ■        ■
S[6,s]       ■        ■
S[5,s]       ■        ■
S[4,s]       ■        ■
S[3,s]       ■        ■
S[2,s]       ■        ■
S[1,s]       ■        ■

At the level of (D1), as we can see, all values of the S[k, s] matrix are equal to 1, which corresponds to the fact that all the cases of non-surprise and of surprise are made possible by the corresponding version of SEP. The associated matrix can thus be defined as a rectangular matrix.

At this stage, it appears that one can conceive of some variations of SEP associated with more restrictive matrix structures, where certain cases of non-surprise and of surprise are not authorized by the announcement. In such cases, certain values of the matrix are equal to 0. It is now worth considering the structure of these more restrictive definitions. The latter are such that there exists at least one case of non-surprise or of surprise which is made impossible by the announcement, and where the corresponding value of the matrix S[k, s] is thus equal to 0. Such a condition leaves room for a certain number of variations, whose characteristics it is now worth studying more thoroughly.

One can notice preliminarily that certain types of structures can be discarded from the beginning. It appears indeed that not every definition associated with a restriction of (D1) is adequate. Thus, there are minimal conditions for the emergence of SEP. In this sense, a first condition is that the base step be present. This base step is such that the non-surprise must be able to occur on the last day, that is to say S[n, 0] = 1; for 7-SEP, this corresponds to S[7, 0] = 1. In the absence of this base step, there is no paradoxical effect of SEP. Consequently, a matrix structure such that S[n, 0] = 0 can be discarded from the beginning.

A second condition for the statement to lead to a genuine version of SEP is that the examination can finally occur by surprise. This indeed renders possible the fact that the professor's announcement can finally be satisfied. Such a condition – let us call it the validation step – is classically mentioned as a condition for the emergence of the paradox. Thus, a definition such that all the cases of surprise are made impossible by the corresponding statement would not be appropriate either. Hence, the structure corresponding to the following matrix would not correspond to a licit statement of SEP:

(D2)      S[k, 0]  S[k, 1]
S[7,s]       ■        □
S[6,s]       ■        □
S[5,s]       ■        □
S[4,s]       ■        □
S[3,s]       ■        □
S[2,s]       ■        □
S[1,s]       ■        □

because the surprise is possible here on no day of the week (S[k, 1] = 0 for all k) and the validation step is thus lacking in the corresponding statement.

Taking into account what precedes, one is now in a position to describe accurately the minimal conditions which are those of SEP:

(C3) S[n, 0] = 1 (base step)

(C4) ∃k (1 ≤ k ≤ n) such that S[k, 1] = 1 (validation step)
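The two minimal conditions (C3) and (C4) can be checked mechanically. The following Python sketch (the function names and the matrix representation are mine) tests a candidate matrix, represented as a mapping from days to (non-surprise, surprise) flags:

```python
# Hedged sketch: checking the minimal conditions for the emergence of SEP
# on a matrix represented as {day: (non_surprise, surprise)}, days 1..n.

def base_step(S, n):
    return S[n][0] == 1          # (C3): S[n, 0] = 1

def validation_step(S, n):
    return any(S[k][1] == 1 for k in range(1, n + 1))  # (C4)

def is_genuine_sep(S, n):
    return base_step(S, n) and validation_step(S, n)

# A (D2)-style matrix, where the surprise is possible on no day,
# fails the validation step.
D2 = {k: (1, 0) for k in range(1, 8)}
print(is_genuine_sep(D2, 7))  # False
```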

At this step, it is worth considering the structure of the versions of SEP based on the definitions which satisfy the minimal conditions for the emergence of the paradox which have just been described, i.e. which contain both the base step and the validation step. It appears here that the structure associated with the cases of non-surprise and of surprise corresponding to a variation of SEP can present two forms of a basically different nature. A first form of SEP is associated with a structure where the possible cases of non-surprise and of surprise are such that there exists during the n-period at least one day where the non-surprise and the surprise are simultaneously possible. Such a definition can be called conjoint. The following matrix constitutes an example of this type of structure:

(D5)      S[k, 0]  S[k, 1]
S[7,s]       ■        ■
S[6,s]       ■        ■
S[5,s]       ■        ■
S[4,s]       ■        ■
S[3,s]       □        ■
S[2,s]       □        ■
S[1,s]       □        ■

because the non-surprise and the surprise are simultaneously possible here on the 7th, 6th, 5th and 4th days. However, it turns out that one can also encounter a second form of SEP whose structure is basically different, in the sense that for each day of the n-period, it is impossible to have simultaneously the surprise and the non-surprise.5 A definition of this nature can be called disjoint. The following matrix thus constitutes an example of this type of structure:

(D6)      S[k, 0]  S[k, 1]
S[7,s]       ■        □
S[6,s]       ■        □
S[5,s]       ■        □
S[4,s]       ■        □
S[3,s]       □        ■
S[2,s]       □        ■
S[1,s]       □        ■

Consequently, it is worth distinguishing in what follows two structurally distinct versions of SEP: (a) a version based on a conjoint structure of the cases of non-surprise and of surprise made possible by the announcement; (b) a version based on a disjoint structure of these same cases. The need for making such a dichotomy finds its legitimacy in the fact that in the original version of SEP, the professor does not specify if one must take into account a concept of surprise corresponding to a disjoint or a conjoint structure of the cases of non-surprise and of surprise. With regard to this particular point, the professor’s announcement of SEP appears ambiguous. Consequently, it is necessary to consider successively two different concepts of surprise, respectively based on a disjoint or conjoint structure of the cases of non-surprise and of surprise, as well as the reasoning which must be associated with them.
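The conjoint/disjoint dichotomy itself is straightforwardly decidable from the matrix. A minimal Python sketch (the sample matrices below are illustrative, not taken from the text):

```python
# Illustrative sketch: classifying a matrix {day: (non_surprise, surprise)}
# as conjoint or disjoint.

def is_conjoint(S, n):
    # Conjoint: at least one day where non-surprise and surprise
    # are both possible.
    return any(S[k] == (1, 1) for k in range(1, n + 1))

def is_disjoint(S, n):
    # Disjoint: on every day, exactly one of the two is possible,
    # i.e. S[k, 0] + S[k, 1] = 1 for all k.
    return all(S[k][0] + S[k][1] == 1 for k in range(1, n + 1))

rect = {k: (1, 1) for k in range(1, 8)}                       # rectangular
disj = {k: (1, 0) if k > 3 else (0, 1) for k in range(1, 8)}  # disjoint
print(is_conjoint(rect, 7), is_disjoint(rect, 7))  # True False
print(is_conjoint(disj, 7), is_disjoint(disj, 7))  # False True
```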

3. The surprise notion corresponding to the conjoint structure

Let us consider first the case where SEP is based on a concept of surprise corresponding to a conjoint structure of the cases of non-surprise and of surprise. Let SEP(I) be the version associated with such a concept of surprise. Intuitively, this version corresponds to a situation where there exists in the n-period at least one day where the non-surprise and the surprise can occur at the same time. Several types of definitions are likely to satisfy this criterion. It is worth considering them in turn.

3.1 The definition associated with the rectangular matrix and Quine’s solution

To begin with, it is worth considering the structures which are such that all cases of non-surprise and of surprise are made possible by the statement. The corresponding matrix is a rectangular matrix. Let thus SEP(I□) be such a version. The definition associated with such a structure is maximal since all cases of non-surprise and of surprise are authorized. The following matrix thus corresponds to such a general structure:

(D7)      S[k, 0]  S[k, 1]
S[7,s]       ■        ■
S[6,s]       ■        ■
S[5,s]       ■        ■
S[4,s]       ■        ■
S[3,s]       ■        ■
S[2,s]       ■        ■
S[1,s]       ■        ■

and the associated professor’s announcement is the following:

(S7)An examination will occur in the next week but the date of the examination will constitute a surprise.

At this step, it appears that we also get a version of SEP for n = 1 which satisfies this definition. The structure associated with 1-SEP(I□) is as follows:

(D8)      S[1, 0]  S[1, 1]
S[1,s]       ■        ■

which corresponds to the following professor’s announcement:

(S8)An examination will occur tomorrow, but the date of the examination will constitute a surprise.

Thus, 1-SEP(I□) is the minimal version of SEP which satisfies not only the above condition, but also the base step (C3) according to which the non-surprise must possibly occur on the last day, as well as the validation step (C4) by virtue of which the examination can finally occur by surprise. Moreover, it is a variation which excludes, by its intrinsic structure, the emergence of the version of SEP based on a concept of surprise corresponding to a disjoint structure. For this reason, (D8) can be regarded as the canonical form of SEP(I□). Thus, it is the genuine core of SEP(I□), and in what follows, we will endeavour to reason on 1-SEP(I□).

At this stage, it is worth attempting to provide a solution to SEP(I□). For that purpose, let us first recall Quine’s solution. The solution to SEP proposed by Quine (1953) is well-known. Quine highlights the fact that the student successively eliminates the days n, n−1, …, 1 by a reasoning based on backward induction, and then concludes that the examination will not take place during the week. The student reasons as follows. On day n, I will predict that the examination will take place on day n, and consequently the examination cannot take place on day n; on day n−1, I will predict that the examination will take place on day n−1, and consequently the examination cannot take place on day n−1; …; on day 1, I will predict that the examination will take place on day 1, and consequently the examination cannot take place on day 1. Finally, the student concludes that the examination will take place on no day of the week. But this last conclusion finally makes it possible for the examination to occur by surprise, including on day n. According to Quine, the error in the student’s reasoning lies precisely in not having taken this possibility into account from the beginning, which would then have prevented the fallacious reasoning.6
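The student's backward-induction reasoning can be mimicked in a few lines of Python; this is a sketch of the fallacious elimination itself, not of Quine's correction:

```python
# Sketch of the student's (fallacious) backward-induction elimination:
# starting from day n, each remaining last day is ruled out because the
# student would predict the exam on it, so it could not be a surprise.

def student_elimination(n):
    possible_days = set(range(1, n + 1))
    for day in range(n, 0, -1):
        if possible_days and max(possible_days) == day:
            # On the last remaining day, the exam would be predicted.
            possible_days.discard(day)
    return possible_days

print(student_elimination(7))  # set(): "the exam will take place on no day"
```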

Quine, in addition, directly applies his analysis to the canonical form 1-SEP(I□), where the corresponding statement is that of (S8). In this case, the error of the student lies, according to Quine, in having considered only the single following assumption: (a) “the examination will take place tomorrow and I will predict that it will take place”. In fact, the student should also have considered three other cases: (b) “the examination will not take place tomorrow and I will predict that it will take place”; (c) “the examination will not take place tomorrow and I will not predict that it will take place”; (d) “the examination will take place tomorrow and I will not predict that it will take place”. And considering not only assumption (a) but also assumption (d), which is compatible with the professor’s announcement, would have prevented the student from concluding that the examination would not finally take place.7 Consequently, it is the fact of having taken into account only hypothesis (a) which can be identified as the cause of the fallacious reasoning. Thus, the student only partially took into account the whole set of hypotheses resulting from the professor’s announcement. If he had apprehended the totality of the relevant hypotheses compatible with the professor’s announcement, he would not have concluded fallaciously that the examination would not take place during the week.

At this stage, it proves useful to describe the student’s reasoning in terms of the reconstitution of a matrix. For one can consider that the student’s reasoning, classically based on backward induction, leads him to reconstitute the matrix corresponding to the concept of surprise in the following way:

(D9)      S[1, 0]  S[1, 1]
S[1,s]       ■        □

In reality, he should have considered that the correct way to reconstitute the latter matrix is the following:

(D8)      S[1, 0]  S[1, 1]
S[1,s]       ■        ■

3.2 The definition associated with the triangular matrix and Hall’s reduction

As we have seen, Quine’s solution applies directly to SEP(I□), i.e. to a version of SEP based on a conjoint definition of the surprise and a rectangular matrix. It is now worth turning to some variations of SEP based on a conjoint definition where the structure of the corresponding matrix is not rectangular, but which nevertheless satisfy the conditions for the emergence of the paradox mentioned above, namely the presence of the base step (C3) and the validation step (C4). Such matrices have a structure that can be described as triangular. Let thus SEP(I∆) be the corresponding version.

Let us consider first 7-SEP, where the structure of the possible cases of non-surprise and of surprise corresponds to the matrix below:

(D10)     S[k, 0]  S[k, 1]
S[7,s]       ■        □
S[6,s]       ■        ■
S[5,s]       ■        ■
S[4,s]       ■        ■
S[3,s]       ■        ■
S[2,s]       ■        ■
S[1,s]       ■        ■

and to the following announcement of the professor:

(S10)An examination will occur in the next week but the date of the examination will constitute a surprise. Moreover, the fact that the examination will take place constitutes an absolute certainty.

Such an announcement appears identical to the preceding statement to which Quine’s solution applies, with however an important difference: the student now has the certainty that the examination will occur. And this has the effect of preventing him/her from questioning the fact that the examination can take place, and thus of making it impossible for the surprise to occur on the last day. For this reason, we note S[7, 1] = 0 in the corresponding matrix. The general structure corresponding to this type of definition is:

(D11)     S[k, 0]  S[k, 1]
S[n,s]       ■        □
S[n-1,s]     ■        ■
………………………………

And similarly, one can consider the following canonical structure (from which the denomination of triangular structure draws its justification), which is that of SEP(I∆) and which thus corresponds to 2-SEP(I∆):

(D12)     S[k, 0]  S[k, 1]
S[2,s]       ■        □
S[1,s]       ■        ■

Such a structure corresponds to the following announcement of the professor:

(S12)An examination will occur in the next two days, but the date of the examination will constitute a surprise. Moreover, the fact that the examination will take place constitutes an absolute certainty.

As we can see, the additional clause of the statement, according to which it is absolutely certain that the examination will occur, prevents here the surprise from occurring on the last day. Such a version corresponds in particular to the variation of SEP described by A. J. Ayer. The latter version corresponds to a player who is authorized to check, before a set of playing cards is shuffled, that it contains the ace, the 2, 3, …, 7 of Spades. And it is announced to the player that he will not be able to justifiably predict in advance when the ace of Spades will be uncovered. Finally the cards, initially hidden, are uncovered one by one. The purpose of such a version is to render impossible, before the 7th card is uncovered, the belief according to which the ace of Spades will not be uncovered. And this has the effect of preventing Quine’s solution from applying on the last day.

It is now worth presenting a solution to the versions of SEP associated with the structures corresponding to (D11). Such a solution is based on a reduction recently put forward by Ned Hall, whose context it is worth highlighting beforehand. In the version of SEP under consideration by Quine (1953), it appears clearly that the fact that the student doubts, at a certain stage of the reasoning, that the examination will indeed take place during the week, is authorized. Quine thus places himself deliberately in a situation where the student has the faculty of doubting that the examination will truly occur during the week. The versions described by Ayer (1973), Janaway (1989) and also Scriven (1951) reveal the intention to prevent this particular step in the student’s reasoning. Such scenarios correspond, in spirit, to SEP(I∆). One can also attach to them the variation of the Designated Student Paradox described by Sorensen (1982, 357)8, where five stars – a gold star and four silver stars – are attributed to five students, given that it is indubitable that the gold star is placed on the back of the student who was designated.

However, Ned Hall (1999, 659-660) recently presented a reduction which tends to refute the objections classically raised against Quine’s solution. The argumentation developed by Hall is as follows:

We should pause, briefly, to dispense with a bad – though oft-cited – reason for rejecting Quine’s diagnosis. (See for example Ayer 1973 and Janaway 1989). Begin with the perfectly sound observation that the story can be told in such a way that the student is justified in believing that, come Friday, he will justifiably believe that an exam is scheduled for the week. Just add a second Iron Law of the School: that there must be at least one exam each week. (…) Then the first step of the student’s argument goes through just fine. So Quine’s diagnosis is, evidently, inapplicable.

Perhaps – but in letter only, not in spirit. With the second Iron Law in place, the last disjunct of the professor’s announcement – that E5 & ~J(E5) – is, from the student’s perspective, a contradiction. So, from his perspective, the content of her announcement is given not by SE5 but by SE4: (E1 & ~J1(E1)) ∨ … ∨ (E4 & ~J4(E4)). And now Quine’s diagnosis applies straightforwardly: he should simply insist that the student is not justified in believing the announcement and so, come Thursday morning, not justified in believing that crucial part of it which asserts that if the exam is on Friday then it will come as a surprise – which, from the student’s perspective, is tantamount to asserting that the exam is scheduled for one of Monday through Thursday. That is, Quine should insist that the crucial premise that J4(E1 ∨ E2 ∨ E3 ∨ E4) is false – which is exactly the diagnosis he gives to an ordinary 4-day surprise exam scenario. Oddly, it seems to have gone entirely unnoticed by those who press this variant of the story against Quine that its only real effect is to convert an n-day scenario into an n−1 day scenario.

Hall then puts in parallel two types of situations. The first corresponds to the situation to which Quine’s analysis classically applies. The second corresponds to the type of situation under consideration by the opponents of Quine’s solution, in particular Ayer (1973) and Janaway (1989). On this last hypothesis, a stronger version of SEP is taken into account, where a second Iron Law of the School is considered and it is given that the examination will necessarily take place during the week. The argumentation developed by Hall leads to the reduction of a version of n-SEP of the second type to a version of (n−1)-SEP of the quinean type. This equivalence has the effect of annihilating the objections of the opponents of Quine’s solution.9 For the effect of this reduction is to make it finally possible for Quine’s solution to apply in the situations described by Ayer and Janaway. In spirit, the scenario under consideration by Ayer and Janaway corresponds to a situation where the surprise is not possible on day n (i.e. S[n, 1] = 0). This has indeed the effect of neutralizing Quine’s solution based on n-SEP(I□). But Hall’s reduction then makes it possible for Quine’s solution to apply to (n−1)-SEP(I□). The effect of Hall’s reduction is thus to reduce a scenario corresponding to (D11) to a situation based on (D8). Consequently, Hall’s reduction makes it possible to reduce n-SEP(I∆) to (n−1)-SEP(I□). It results from this that any version of SEP(I∆) for an n-period reduces to a version of SEP(I□) for an (n−1)-period (formally n-SEP(I∆) ≡ (n−1)-SEP(I□) for n > 1). Thus, Hall’s reduction finally makes it possible to apply Quine’s solution to SEP(I∆).
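Hall's reduction can be pictured operationally as dropping the last day of a triangular matrix, which yields the rectangular matrix of the (n−1)-period. The following Python sketch (names and matrix representation are mine) illustrates the equivalence n-SEP(I∆) ≡ (n−1)-SEP(I□):

```python
# Hedged sketch of Hall's reduction: in an n-SEP(I-triangular) matrix the
# surprise is impossible on day n, so that day drops out of the content of
# the announcement; removing it leaves an (n-1)-day rectangular matrix.

def hall_reduction(S, n):
    assert S[n] == (1, 0), "triangular form: no surprise possible on day n"
    return {k: S[k] for k in range(1, n)}, n - 1

D10 = {k: (1, 1) for k in range(1, 7)}
D10[7] = (1, 0)                       # 7-SEP(I-triangular)
reduced, m = hall_reduction(D10, 7)
print(m)                              # 6
print(all(reduced[k] == (1, 1) for k in range(1, 7)))  # True
```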

4. The surprise notion corresponding to the disjoint structure

It is worth considering, second, the case where the notion of surprise is based on a disjoint structure of the possible cases of non-surprise and of surprise. Let SEP(II) be the corresponding version. Intuitively, such a variation corresponds to a situation where for a given day of the n-period, it is not possible to have at the same time the non-surprise and the surprise. The structure of the associated matrix is such that one has exclusively on each day, either the non-surprise or the surprise.

At this step, a preliminary question can be raised: couldn’t Quine’s solution apply all the same to SEP(II)? However, the preceding analysis of SEP(I) shows that a necessary condition for Quine’s solution to apply is that there exists during the n-period at least one day when the non-surprise and the surprise are at the same time possible. Such a property, however, is that of a conjoint structure and corresponds to the situation of SEP(I). In the context of a disjoint structure, in contrast, the associated matrix verifies ∀k, S[k, 0] + S[k, 1] = 1. Consequently, this forbids Quine’s solution from applying to SEP(II).

In the same way, one could wonder whether Hall’s reduction wouldn’t also apply to SEP(II). Thus, isn’t there a reduction of SEP(II) for an n-period to SEP(I) for an (n−1)-period? It appears that there is not. Indeed, as we have seen, Quine’s solution cannot apply to SEP(II). However, the effect of Hall’s reduction is to reduce a given scenario to a situation where Quine’s solution finally applies. But since Quine’s solution cannot apply in the context of SEP(II), Hall’s reduction is also unable to produce its effect.

Given that Quine’s solution does not apply to SEP(II), it is now worth attempting to provide an adequate solution to the version of SEP corresponding to a concept of surprise associated with a disjoint structure of the cases of non-surprise and of surprise. To this end, it proves to be necessary to describe a version of SEP corresponding to a disjoint structure, as well as the structure corresponding to the canonical version of SEP(II).

In a preliminary way, one can observe that the minimal version corresponding to a disjoint version of SEP is that which is associated with the following structure, i.e. 2-SEP(II):

(D13)     S[k, 0]  S[k, 1]
S[2,s]       ■        □
S[1,s]       □        ■

However, for reasons that will become clearer later, the corresponding version of SEP(II) does not have a sufficient degree of realism and plausibility to constitute a genuine version of SEP, i.e. one that is capable of leading our reasoning astray.

In order to highlight the canonical version of SEP(II) and the corresponding statement, it is first of all worth mentioning the remark, made by several authors11, according to which the paradox emerges clearly, in the case of SEP(II), when n is large. An interesting characteristic of SEP(II) is indeed that the paradox emerges intuitively in a clearer way when great values of n are taken into account. A striking illustration of this phenomenon is thus provided to us by the variation of the paradox which corresponds to the following situation, described by Timothy Williamson (2000, 139):

Advance knowledge that there will be a test, fire drill, or the like of which one will not know the time in advance is an everyday fact of social life, but one denied by a surprising proportion of early work on the Surprise Examination. Who has not waited for the telephone to ring, knowing that it will do so within a week and that one will not know a second before it rings that it will ring a second later?

The variation suggested by Williamson corresponds to the announcement made to somebody that he will receive a phone call during the week, without being able, however, to determine in advance at which precise second the phone call will occur. This variation underlines how the surprise can appear, in a completely plausible way, when the value of n is high. The unit of time considered by Williamson is here the second, associated with a period which corresponds to one week. The corresponding value of n is here very high and equals 604800 (60 x 60 x 24 x 7) seconds. This illustrates how a great value of n makes it possible for the corresponding variation of SEP(II) to take place in both a plausible and realistic way. However, taking into account such a large value of n is not essential. In effect, a value of n which equals, for example, 365 seems appropriate as well. In this context, the professor’s announcement which corresponds to a disjoint structure is then the following:

(S14)An examination will occur during this year but the date of the examination will constitute a surprise.

The corresponding definition then presents the following structure:

(D14)     S[k, 0]  S[k, 1]
S[365,s]     ■        □
………………………………
S[1,s]       □        ■

which is an instance of the following general form:

(D15)     S[k, 0]  S[k, 1]
S[n,s]       ■        □
………………………………
S[1,s]       □        ■

This last structure can be considered as corresponding to the canonical version of SEP(II), with n large. In the specific situation associated with this version of SEP, the student predicts each day – wrongly, but justified by a reasoning based on backward induction – that the examination will take place on no day of the period. But it appears that at least one case of surprise (for example if the examination occurs on the first day) makes it possible to validate, in a completely realistic way, the professor’s announcement.

The form of SEP(II) which applies to the standard version of SEP is 7-SEP(II), which corresponds to the classical announcement:

(S7)An examination will occur in the next week but the date of the examination will constitute a surprise.

but with this difference from the standard version that the context is here exclusively that of a concept of surprise associated with a disjoint structure.

At this stage, we are in a position to determine the fallacious step in the student’s reasoning. For that, it is useful to describe the student’s reasoning in terms of matrix reconstitution. The student’s reasoning leads him/her to attribute a value to S[k, 0] and S[k, 1]. And when informed of the professor’s announcement, the student rebuilds the corresponding matrix such that all S[k, 0] = 1 and all S[k, 1] = 0, in the following way (for n = 7):

(D16)     S[k, 0]  S[k, 1]
S[7,s]       ■        □
S[6,s]       ■        □
S[5,s]       ■        □
S[4,s]       ■        □
S[3,s]       ■        □
S[2,s]       ■        □
S[1,s]       ■        □

One can notice here that the order of reconstitution proves to be indifferent. At this stage, we are in a position to identify the flaw at the origin of the student’s erroneous conclusion. It appears indeed that the student did not take into account the fact that the surprise corresponds here to a disjoint structure. Indeed, he should have considered here that the last day corresponds to a proper instance of non-surprise, and thus that S[n, 0] = 1. In the same way, he should have considered that the 1st day12 corresponds to a proper instance of surprise and should thus have posited S[1, 1] = 1. The context being that of a disjoint structure, he could have legitimately added, in a second step, that S[n, 1] = 0 and S[1, 0] = 0. At this stage, the partially reconstituted matrix would then have been as follows:

(D17)     S[k, 0]  S[k, 1]
S[7,s]       ■        □
S[6,s]
S[5,s]
S[4,s]
S[3,s]
S[2,s]
S[1,s]       □        ■

The student should then have continued his reasoning as follows. The proper instances of non-surprise and of surprise, which are disjoint here, do not capture the concept of surprise entirely. In such a context, the concept of surprise is not captured exhaustively by the extension and the anti-extension of the surprise. However, such a definition conforms to the definition of a vague predicate, which is characterized by an extension and an anti-extension which are mutually exclusive and non-exhaustive13. Thus, the surprise notion associated with a disjoint structure is a vague one.
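The structure of a vague predicate invoked here – mutually exclusive, non-exhaustive extension and anti-extension – can be made concrete in a short Python sketch (the three-way classification is illustrative, not part of the paper):

```python
# Illustrative sketch: the disjoint surprise notion as a vague predicate,
# with proper instances of non-surprise (the last day), proper instances
# of surprise (the first day), and a penumbral zone in between.

def classify(day, n):
    if day == n:
        return "non-surprise"  # proper instance: S[n, 0] = 1
    if day == 1:
        return "surprise"      # proper instance: S[1, 1] = 1
    return "borderline"        # penumbral zone between the two

print(classify(7, 7))  # non-surprise
print(classify(1, 7))  # surprise
print(classify(4, 7))  # borderline
```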

What precedes now makes it possible to identify accurately the flaw in the student’s reasoning, when the surprise notion is a vague notion associated with a disjoint structure. For the error at the origin of the student’s fallacious reasoning lies in the failure to take into account the fact that the surprise corresponds, in the case of a disjoint structure, to a vague concept, and thus comprises a penumbral zone corresponding to borderline cases between the non-surprise and the surprise. There is no need, however, to have at our disposal here a solution to the sorites paradox. Indeed, whether these borderline cases result from a succession of intermediate degrees, from a precise cut-off between the non-surprise and the surprise whose exact location it is impossible for us to know, etc., is of little importance here. For in all cases, the mere fact of taking into account that the concept of surprise is here a vague concept forbids concluding that S[k, 1] = 0 for all values of k.

Several ways thus exist to reconstitute the matrix in accordance with what precedes. In fact, there exist as many ways of reconstituting the latter as there are conceptions of vagueness. One of these ways (based on fuzzy logic) consists in considering that there exists a continuous and gradual succession from the non-surprise to the surprise. The corresponding algorithm to reconstitute the matrix is then the one where the step is given by the formula 1/(n − p), where p corresponds to a proper instance of surprise. For p = 3, we have here 1/(7 − 3) = 0.25, with S[3, 1] = 1. And the corresponding matrix is thus the following one:

(D18)     S[k, 0]  S[k, 1]
S[7,s]       1        0
S[6,s]       0.75     0.25
S[5,s]       0.5      0.5
S[4,s]       0.25     0.75
S[3,s]       0        1
S[2,s]       0        1
S[1,s]       0        1

where the sum of the values of the matrix associated with a given day is equal to 1. The intuition which governs SEP(II) is here that the non-surprise is total on day n, but that there exist intermediate degrees of surprise si (0 < si < 1), such that the closer one approaches the last day, the higher the effect of non-surprise. Conversely, the effect of surprise is total on the first days, for example on days 1, 2 and 3.
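The fuzzy-logic reconstitution just described can be written out as a small Python sketch generating matrix (D18) from n and p (the rounding is only for display; the function name is mine):

```python
# Sketch of the fuzzy-logic reconstitution of the matrix: degrees of
# surprise grow in steps of 1/(n - p) from day n (non-surprise total)
# down to day p (surprise total), as in (D18) with n = 7 and p = 3.

def fuzzy_matrix(n, p):
    step = 1 / (n - p)
    S = {}
    for k in range(1, n + 1):
        surprise = min(1.0, (n - k) * step)
        # Each row (S[k, 0], S[k, 1]) sums to 1.
        S[k] = (round(1 - surprise, 2), round(surprise, 2))
    return S

S = fuzzy_matrix(7, 3)
print(S[7])  # (1.0, 0.0)
print(S[6])  # (0.75, 0.25)
print(S[3])  # (0.0, 1.0)
```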

One can notice here that the definitions corresponding to SEP(II) which have just been described are such that they present a property of linearity (formally, ∀k (1 < k ≤ n), S[k, 0] ≥ S[k−1, 0]). It appears indeed that a structure of the possible cases of non-surprise and of surprise which did not present such a property of linearity would not capture the intuition corresponding to the concept of surprise. For this reason, it appears sufficient to limit the present study to the structures of definitions that satisfy this property of linearity.

An alternative way to reconstitute the corresponding matrix, based on the epistemological conception of vagueness, could also have been used. It consists in the case where the vague nature of the surprise is determined by the existence of a precise cut-off between the cases of non-surprise and of surprise, of which it is however not possible for us to know the exact location. In this case, the matrix could have been reconstituted, for example, as follows:

(D19)     S[k, 0]  S[k, 1]
S[7,s]       ■        □
S[6,s]       ■        □
S[5,s]       ■        □
S[4,s]       ■        □
S[3,s]       □        ■
S[2,s]       □        ■
S[1,s]       □        ■

At this stage, one can wonder whether the version of the paradox associated with SEP(II) cannot be assimilated to the sorites paradox. The reduction of SEP to the sorites paradox is indeed the solution which has been proposed by some authors, notably Dietl (1973) and Smith (1984). The latter solutions, based on the assimilation of SEP to the sorites paradox, constitute monist analyses, which do not lead, unlike the present solution, to two independent solutions based on two structurally different versions of SEP. In addition, with regard to the analyses suggested by Dietl and Smith, it does not clearly appear whether each step of SEP is fully comparable to the corresponding step of the sorites paradox, as underlined by Sorensen.14 But in the context of a conception of surprise corresponding to a disjoint structure, the fact that the last day corresponds to a proper instance of non-surprise can be assimilated here to the base step of the sorites paradox.

Nevertheless, it appears that such a reduction of SEP to the sorites paradox, limited to the notion of surprise corresponding to a disjoint structure, does not prevail here. On the one hand, it does not clearly appear whether the statement of SEP can be translated into a variation of the sorites paradox, in particular as far as 7-SEP(II) is concerned. For the corresponding variation of the sorites paradox would run too fast, as already noted by Sorensen (1988).15 It is also noticeable, moreover, as pointed out by Scott Soames (1999), that certain vague predicates are not likely to give rise to a corresponding version of the sorites paradox. Such appears to be the case for the concept of surprise associated with 7-SEP(II). For as Soames16 points out, the continuum which is semantically associated with the predicates giving rise to the sorites paradox can be fragmented into units so small that if one of these units is intuitively F, then the following unit is also F. But such is not the case with the variation consisting in 7-SEP(II), where the corresponding units (1 day) are not fine enough with regard to the considered period (7 days).

Lastly and above all, as mentioned earlier, the preceding solution to SEP(II) applies whatever the nature of the solution that will be adopted for the sorites paradox. For it is the ignorance of the semantic structure of the vague notion of surprise which is at the origin of the student's fallacious reasoning in the case of SEP(II). And this fact is independent of the solution which should be provided, in a near or distant future, to the sorites paradox, whether that approach be of epistemological inspiration, supervaluationist, based on fuzzy logic, or of a very different nature.

5. The solution to the paradox

The above developments now make it possible to formulate an accurate solution to the surprise examination paradox. That solution can be stated by considering what the student's reasoning should have been. Let us consider then, in the light of the present analysis, how the student should have reasoned after hearing the professor's announcement:

– The student: Professor, I think that two semantically distinct conceptions of surprise, which are likely to influence the reasoning to be held, can be taken into account. I also observe that you did not specify, at the time of your announcement, which of these two conceptions you referred to. Is that not so?

– The professor: Yes, that is correct. Continue.

– The student: Since you refer indifferently to one or the other of these conceptions of surprise, it is necessary to consider each of them successively, as well as the reasoning to be held in each case.

– The professor: Let us see that, then.

– The student: Let us consider, on the one hand, the case where the surprise corresponds to a conjoint definition of the cases of non-surprise and of surprise. Such a definition allows the non-surprise and the surprise to be possible at the same time, for example on the last day. Such a situation is likely to arise on the last day, in particular when a student concludes that the examination cannot take place on that very day, since that would contradict the professor's announcement. However, this is precisely what makes it possible for the surprise to occur, because the student then expects that the examination will not take place. And, quite plausibly, as put forth by Quine, such a situation then corresponds to a case of surprise. In this case, taking into account the possibility that the examination may occur surprisingly on the last day prohibits eliminating successively the days n, n-1, n-2, …, 2, and 1. In addition, the concept of surprise associated with a conjoint structure is a concept of total surprise. For on the last day one faces either the non-surprise or the total surprise, without any intermediate situations existing.

– The professor: I see that. You mentioned a second conception of surprise…

– The student: Indeed. It is also necessary to consider the case where the surprise corresponds to a disjoint definition of the cases of non-surprise and of surprise. Such a definition corresponds to the case where the non-surprise and the surprise are not possible on the same day. The intuition on which such a conception of surprise rests corresponds to the announcement made to students that they will undergo an examination during the year, while being unaware of the precise day on which it will be held. In such a case, it follows from our experience that the examination can truly occur surprisingly on many days of the year, for example on any day of the first three months. It is an actual situation that any student can experience. Of course, in the announcement you have just made to us, the period is not as long as one year, but corresponds to one week. However, your announcement also leaves room for such a conception of surprise, associated with a disjoint structure of the cases of non-surprise and of surprise. Indeed, the examination can occur surprisingly, for example on the 1st day of the week. Thus, the 1st day constitutes a proper instance of surprise. In parallel, the last day constitutes a proper instance of non-surprise, since it follows from the announcement that the examination cannot take place surprisingly on that day. At this stage, it also appears that the status of the other days of the corresponding period is not determined. Thus, such a structure of the cases of non-surprise and of surprise is at the same time disjoint and non-exhaustive.
Consequently, the corresponding concept of surprise presents the criteria of a vague notion. This casts light on the fact that the concept of surprise associated with a disjoint structure is a vague one, and that there is thus a zone of penumbra between the proper instances of non-surprise and of surprise, which corresponds to the existence of borderline cases. And the mere existence of these borderline cases prohibits eliminating successively, by a reasoning based on backward-induction, the days n, n-1, n-2, …, 2, and then 1. Finally, I notice that, unlike the preceding concept of surprise, the concept of surprise associated with a disjoint structure leads to the existence of intermediate cases between the non-surprise and the surprise.

– The professor: I see. Conclude now.

– The student: Finally, considering successively two different concepts of surprise that may correspond to the announcement you have just made resulted, in both cases, in rejecting the classical reasoning that leads to eliminating successively all days of the week. Here, the motivation for rejecting the traditional reasoning appears different for each of these two concepts of surprise. But in both cases, a convergent conclusion ensues, which leads to the rejection of the classical reasoning based on backward-induction.

6. Conclusion

I shall finally mention that the solution just proposed also applies to the variations of SEP mentioned by Sorensen (1982). Indeed, the structure of the canonical forms of SEP(I□), SEP(I∆) and SEP(II) indicates that, whatever the version taken into account, the solution that applies does not require any principle of temporal retention. It is also independent of the order of elimination and, finally, can apply when the duration of the n-period is unknown at the time of the professor's announcement.

Lastly, it is worth mentioning that the strategy adopted in the present study appears structurally similar to the one used in Franceschi (1999): first, establish a dichotomy which makes it possible to divide the given problem into two distinct classes; second, show that each resulting version admits of a specific resolution.17 In a similar way, in the present analysis of SEP, a dichotomy is made, and the two resulting categories of problems then lead to an independent resolution. This suggests that the fact that two structurally independent versions are inextricably entangled in philosophical paradoxes could be a more widespread characteristic than one might think at first glance, and could also partly explain their intrinsic difficulty.18

REFERENCES

AYER, A. J. 1973, “On a Supposed Antinomy”, Mind 82, pp. 125-126.
BINKLEY, R. 1968, “The Surprise Examination in Modal Logic”, Journal of Philosophy 65, pp. 127-136.
CHALMERS, D. 2002, “The St. Petersburg two-envelope paradox”, Analysis 62, pp. 155-157.
CHOW, T. Y. 1998, “The Surprise Examination or Unexpected Hanging Paradox”, The American Mathematical Monthly 105, pp. 41-51.
DIETL, P. 1973, “The Surprise Examination”, Educational Theory 23, pp. 153-158.
FRANCESCHI, P. 1999, “Comment l’urne de Carter et Leslie se déverse dans celle de Hempel”, Canadian Journal of Philosophy 29, pp. 139-156. English translation.
HALL, N. 1999, “How to Set a Surprise Exam”, Mind 108, pp. 647-703.
HYDE, D. 2002, “Sorites Paradox”, The Stanford Encyclopedia of Philosophy (Fall 2002 Edition), E. N. Zalta (ed.), http://plato.stanford.edu/archives/fall2002/entries/sorites-paradox.
JANAWAY, C. 1989, “Knowing About Surprises: A Supposed Antinomy Revisited”, Mind 98, pp. 391-410.
MONTAGUE, R. & KAPLAN, D. 1960, “A Paradox Regained”, Notre Dame Journal of Formal Logic 3, pp. 79-90.
O’CONNOR, D. 1948, “Pragmatic paradoxes”, Mind 57, pp. 358-359.
QUINE, W. 1953, “On a So-called Paradox”, Mind 62, pp. 65-66.
SAINSBURY, R. M. 1995, Paradoxes, 2nd edition, Cambridge: Cambridge University Press.
SCRIVEN, M. 1951, “Paradoxical announcements”, Mind 60, pp. 403-407.
SHAW, R. 1958, “The Paradox of the Unexpected Examination”, Mind 67, pp. 382-384.
SMITH, J. W. 1984, “The surprise examination on the paradox of the heap”, Philosophical Papers 13, pp. 43-56.
SOAMES, S. 1999, Understanding Truth, New York & Oxford: Oxford University Press.
SORENSEN, R. A. 1982, “Recalcitrant versions of the prediction paradox”, Australasian Journal of Philosophy 69, pp. 355-362.
SORENSEN, R. A. 1988, Blindspots, Oxford: Clarendon Press.
WILLIAMSON, T. 2000, Knowledge and its Limits, London & New York: Routledge.
WRIGHT, C. & SUDBURY, A. 1977, “The Paradox of the Unexpected Examination”, Australasian Journal of Philosophy 55, pp. 41-58.

1 I simplify here considerably.

2 Without pretending to be exhaustive.

3 In what follows, n denotes the last day of the term corresponding to the professor’s announcement.

4 Let 1-SEP, 2-SEP,…, n-SEP be the problem for respectively 1 day, 2 days,…, n days.

5 The cases where neither the non-surprise nor the surprise is made possible on the same day (i.e. such that S[k, 0] + S[k, 1] = 0) can be purely and simply ignored.

6 Cf. (1953, 65) : ‘It is notable that K acquiesces in the conclusion (wrong, according to the fable of the Thursday hanging) that the decree will not be fulfilled. If this is a conclusion which he is prepared to accept (though wrongly) in the end as a certainty, it is an alternative which he should have been prepared to take into consideration from the beginning as a possibility.’

7 Cf. (1953, 66) : ‘If K had reasoned correctly, Sunday afternoon, he would have reasoned as follows : “We must distinguish four cases : first, that I shall be hanged tomorrow noon and I know it now (but I do not) ; second, that I shall be unhanged tomorrow noon and know it now (but I do not) ; third, that I shall be unhanged tomorrow noon and do not know it now ; and fourth, that I shall be hanged tomorrow noon and do not know it now. The latter two alternatives are the open possibilities, and the last of all would fulfill the decree. Rather than charging the judge with self-contradiction, let me suspend judgment and hope for the best.”’

8 ‘The students are then shown four silver stars and one gold star. One star is put on the back of each student.’

9 Hall also refutes, but on different grounds, the solution proposed by Quine.

10 Hall’s reduction can easily be generalised. It is then associated with a version of n-SEP(I∆) such that the surprise cannot occur on the m last days of the week. Such a version is associated with a matrix such that (a) 1 ≤ m < n and S[n−m, 0] = S[n−m, 1] = 1 ; (b) ∀p > n−m, S[p, 0] = 1 and S[p, 1] = 0 ; (c) ∀q < n−m, S[q, 0] = S[q, 1] = 1. In this new situation, a generalised Hall’s reduction applies to the corresponding version of SEP. In this case, the extended Hall’s reduction leads to : n-SEP(I∆) ≡ (n−m)-SEP(I□).

11 Cf. notably Hall (1999, 661), Williamson (2000).

12 This is just an example. Alternatively, one could have chosen here the 2nd or the 3rd day.

13 This definition of a vague predicate is borrowed from Soames. Considering the extension and the anti-extension of a vague predicate, Soames (1999, 210) points out thus: “These two classes are mutually exclusive, though not jointly exhaustive”.

14 Cf. Sorensen (1988, 292-293) : ‘Indeed, no one has simply asserted that the following is just another instance of the sorites.

i. Base step : The audience can know that the exercise will not occur on the last day.

ii. Induction step : If the audience can know that the exercise will not occur on day n, then they can also know that the exercise will not occur on day n – 1.

iii. The audience can know that there is no day on which the exercise will occur.

Why not blame the whole puzzle on the vagueness of ‘can know’? (…) Despite its attractiveness, I have not found any clear examples of this strategy.’

15 Cf. (1988, 324): ‘One immediate qualm about assimilating the prediction paradox to the sorites is that the prediction paradox would be a very ‘fast’ sorites. (…) Yet standard sorites arguments involve a great many borderline cases.’

16 Cf. Soames (1999, 218): ‘A further fact about Sorites predicates is that the continuum semantically associated with such a predicate can be broken down into units fine enough so that once one has characterized one item as F (or not F), it is virtually irresistible to characterize the same item in the same way’.

17 One characteristic example of this type of analysis is also exemplified by the solution to the two-envelope paradox described by David Chalmers (2002, 157) : ‘The upshot is a disjunctive diagnosis of the two-envelope paradox. The expected value of the amount in the envelopes is either finite or infinite. If it is finite, then (1) and (2) are false (…). If it is infinite, then the step from (2) to (3) is invalid (…)’.

18 I am grateful toward Timothy Chow, Ned Hall, Claude Panaccio and the anonymous referees for very useful comments concerning previous versions of this paper.

Probabilistic Situations for Goodmanian N-universes

A paper appeared (2006) in French in the Journal of Philosophical Research, vol. 31, pages 123-141, under the title “Situations probabilistes pour n-univers goodmaniens.”

I describe several applications of the theory of n-universes through different probabilistic situations. I first describe how n-universes can be used as an extension of the probability spaces used in probability theory. The extended probability spaces thus defined allow for a finer modelling of complex probabilistic situations and fit more intuitively with our representation of the physical universe. I then illustrate the use of n-universes as a methodological tool, with two thought experiments described by John Leslie. Lastly, I model Goodman's paradox in the framework of n-universes, while also showing how the latter finally appear very close to Goodmanian worlds.


Probabilistic Situations for Goodmanian N-universes

The n-universes were introduced in Franceschi (2001, 2002) in the context of the study of the probabilistic situations relating to several paradoxes which are currently the object of intensive study in analytical philosophy: Goodman's paradox and the Doomsday Argument. The scope of the present article is twofold: on the one hand, to describe how modelling within n-universes makes it possible to extend the properties of the classical probability spaces used in probability theory, by providing at the same time a finer modelling of certain probabilistic situations and a better support for intuition; on the other hand, to show how the use of n-universes considerably simplifies the study of complex probabilistic situations such as those that appear in the study of paradoxes.

When one models, for example, the situation corresponding to the drawing of a ball from an urn, one considers a restricted temporal space, limited to the few seconds that precede and follow the drawing. Events which took place the day before or an hour before, but also those that will happen, say, the day after the drawing, can be purely and simply ignored. A very restricted interval of time, which can be reduced to one or two discrete temporal positions, is then enough to characterise the corresponding situation. It also suffices to consider a restriction of our universe where the space variable is limited to the space occupied by the urn, for it is not useful to take into consideration the space corresponding to the neighbouring room and to the objects in it. In a similar way, the number of atoms of copper or molybdenum possibly present in the urn, the number of photons interacting with the urn at the time of the drawing, or the presence or absence of a sound source of 75 dB, and so on, can be omitted and ignored. In this context, it is not necessary to take into account the existence of such variables; it is enough to mention the variables and constants really used in the corresponding probabilistic situation. For enumerating all the constants and variables that describe our whole universe appears here an extremely complicated and, moreover, useless task. In such a context, one can legitimately limit oneself to describing a simplified universe, by mentioning only those constants and variables which play a genuine role in the corresponding probabilistic situation.

Let us consider the drawing of a ball from an urn containing several balls of different colours. To allow the calculation of the likelihood of different events related to the drawing of one or several balls from the urn, probability theory relies on a modelling grounded in probability spaces. The determination of the likelihood of different events is then not based on modelling the physical forces which determine the conditions of the drawing, i.e. the mass and dimensions of the balls, the material of which they are made, their initial spatio-temporal position, or the characteristics of the forces exerted on the balls to perform a random drawing. The modelling of random phenomena with the help of probability spaces retains only some very simplified elements of the physical situation corresponding to the drawing of a ball: the number and the colour of the balls, as well as their spatio-temporal position. Such a methodological approach can be generalised to other probabilistic situations involving random processes, such as the throwing of one or several dice or the drawing of one or several cards. Such a methodology does not constitute one of the axioms of probability theory, but it is an important tenet of the theory, of which one can suggest that it would be worth being more formalised. It may also be useful to explain in more detail how the elements of our physical world are converted into probability spaces. In what follows, I will set out to show how probability spaces can be extended, with the help of the theory of n-universes, in order to better restore the structure of the part of our universe which is so modelled.
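By way of illustration, the reduction just described, from a physical drawing to a bare probability space, can be sketched in a few lines of Python. The urn composition and the function name chosen here are hypothetical, purely for illustration:

```python
from fractions import Fraction

# Probability theory keeps only the number and the colour of the balls;
# mass, material, position and the forces exerted during the drawing
# are all discarded from the model.
urn = ["red"] * 3 + ["green"] * 2      # one outcome per ball (hypothetical composition)

def probability(colours, sample_space):
    """Uniform probability that the drawn ball has one of the given colours."""
    favourable = sum(1 for ball in sample_space if ball in colours)
    return Fraction(favourable, len(sample_space))

p_red = probability({"red"}, urn)      # Fraction(3, 5)
```

The point of the sketch is that nothing but the simplified elements (number, colour) enters the computation, exactly as in the probability-space modelling described above.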

1. Introduction to n-universes

It is worth describing preliminarily the basic principles underlying n-universes. N-universes constitute a simplified model of the physical world studied in a probabilistic situation. Making use of Ockham's razor, we set out to model a physical situation with the help of the simplest model of universe that remains compatible with preserving the inherent structure of the corresponding physical situation. At this stage, it proves necessary to highlight several important features of n-universes.

1.1. Constant-criteria and variable-criteria

The criteria of a given n-universe include both constants and variables. Although n-universes allow one to model situations which do not correspond to our physical world, our concern here will be exclusively with the n-universes corresponding to common probabilistic situations, in adequacy with the fundamental characteristics of our physical universe. The corresponding n-universes then include at the very least one temporal constant or variable, as well as one constant or variable of location. One then distinguishes among n-universes: a T0L0 (a n-universe including a temporal constant and a location constant), a T0L (a temporal constant and a location variable), a TL0 (a temporal variable and a location constant), and a TL (a temporal variable and a location variable). Other n-universes also include a constant or a variable of colour, of direction, etc.

1.2. N-universes with a unique object or with multiple objects

Every n-universe includes one or several objects. One then distinguishes, for example: an o0TL0 (a n-universe including a unique object, a temporal variable and a location constant), and a TL0 (multiple objects, a temporal variable and a location constant).

1.3. Demultiplication with regard to a variable-criterion

It is worth highlighting the property of demultiplication of a given object with regard to a variable-criterion of a given n-universe. In what follows, we shall denote a variable-criterion α with demultiplication by α*. Any variable-criterion of a given n-universe can thus be demultiplicated. For a given object to be demultiplicated with regard to a criterion α is for this object to exemplify several taxa of criterion α. Let us take the example of the time criterion. For a given object to be demultiplicated with regard to time is for it to exemplify several temporal positions. In our physical world, an object o0 can exist at several (successive) temporal positions and thus finds itself demultiplicated with regard to the time criterion. Our common objects then have a property of temporal persistence, which constitutes a special case of temporal demultiplication. So, in our universe, one of whose variable-criteria is time, it is common to note that a given object o0 which exists at T1 also exists at T2, …, Tn. Such an object has a life span which covers the period T1-Tn. The corresponding n-universe then presents the structure o0T*L0 (T* in simplified notation).
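The property of temporal demultiplication can be made concrete with a minimal sketch. The object name and the temporal positions below are hypothetical, chosen only to mirror the o0T*L0 example:

```python
# Hypothetical record of which time taxa each object exemplifies.
existence = {"o0": ["T1", "T2", "T3"]}   # o0 has a life span covering T1-T3

def demultiplicated(obj):
    """An object is demultiplicated w.r.t. time iff it exemplifies
    several distinct temporal taxa (temporal persistence)."""
    return len(set(existence[obj])) > 1
```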

1.4. Relation one/many of the multiple objects with a given criterion

At this stage, it proves necessary to draw an important distinction between two types of situations. An object can, as we have just seen, exemplify several taxa of a given variable-criterion: this is the case of demultiplication which has just been described. But it is also worth taking into account another type of situation, which concerns only n-universes with multiple objects. Indeed, several objects can instantiate the same taxon of a given criterion. Let us consider first the temporal criterion, and place ourselves, for example, in a n-universe with multiple objects including both a temporal variable and a location constant L0. This can correspond to two different types of n-universes. In the first type of n-universe, there is one single object per temporal position. At any point in time, there can thus be only one object in L0 in the corresponding n-universe. We can consider in that case that every object of this n-universe is in relation one with the time taxa. We denote such a n-universe by T*L0 (with simplified notation T). Let us now consider a n-universe with multiple objects including a temporal variable and a location constant, but where several objects o1, o2, o3 can exist at the same time. In that case, the multiple objects are at a given temporal position in L0. The situation then differs fundamentally from the T*L0, because several objects can now occupy the same temporal position. In other words, the objects can co-exist at a given time. In that case, one can consider that the objects are in relation many with the temporal taxa. We then denote such a n-universe by *T*L0 (with simplified notation *T).

Let us now place ourselves from the point of view of the location criterion. Let us consider a n-universe with multiple objects including both a temporal variable and a location variable, where the objects are in relation many with the temporal criterion. It is also worth distinguishing here between two types of n-universes. In the first, a single object can find itself at a given taxon of the location criterion at a given time: there is then one single object per space position at a given time. This allows one to model, for example, the situation of the pieces of a chess game. We denote such a n-universe by *TL (with simplified notation *TL); in that case, the objects are in relation one with the location criterion. On the other hand, in the second type of n-universe, several objects can find themselves in the same taxon of the location criterion at the same time. Thus, for example, the objects o1, o2, o3 are in L1 at T1. Such a situation corresponds, for example, to an urn (which is thus assimilated with a given taxon of location) containing several balls at a given time. We denote by *T*L such a n-universe, where the objects are in relation many with the location taxa.

One can lastly notice that such a differentiation also holds for the variable-criterion of colour. One can then draw a distinction between: (a) a *T0*L0C (with simplified notation C), where several objects which can co-exist at the same time at a given space position all necessarily present a different colour, because the objects are there in relation one with the colour criterion; (b) a *T0*L0*C (with simplified notation *C), where several objects which can co-exist at the same time at a given space position can present the same colour, because the objects are there in relation many with the colour criterion.
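The distinction between relation one and relation many can be checked mechanically on a snapshot of objects and the taxa they instantiate. The assignment below is a hypothetical example, not the paper's own notation:

```python
from collections import Counter

# Hypothetical snapshot: each object instantiates one taxon per criterion.
# Criterion indices: 0 = time, 1 = location (cf. the *T*L urn example).
assignment = {
    "o1": ("T1", "L1"),
    "o2": ("T1", "L1"),
    "o3": ("T1", "L2"),
}

def relation(criterion):
    """'many' if several objects share some taxon of the criterion, else 'one'."""
    counts = Counter(taxa[criterion] for taxa in assignment.values())
    return "many" if any(n > 1 for n in counts.values()) else "one"

# o1, o2, o3 co-exist at T1, and o1, o2 share L1: the objects are in
# relation many with both the time and the location criteria (*T*L).
```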

1.5. Notation

At this stage, it is worth highlighting an important point concerning the notation used. Use was indeed made, in what precedes, of both an extended and a simplified notation. The extended notation includes the explicit specification of all criteria of the considered n-universe, both the variable-criteria and the constant-criteria. By contrast, the simplified notation includes only the explicit specification of the variable-criteria of the considered n-universe. For the constant-criteria of time and of location can simply be deduced: since the studied n-universes include, in a systematic way, one or several objects as well as a criterion of time and a criterion of location, any such criterion that does not appear as a variable in the notation is a constant.

Let us illustrate the preceding with an example. Consider first the case where we situate ourselves in a n-universe including multiple objects, a constant-criterion of time and a constant-criterion of location. In that case, the multiple objects necessarily exist at T0; as a result, in the considered n-universe, the multiple objects are in relation many with the constant-criterion of time. Likewise, multiple objects necessarily exist at L0, so that the multiple objects are also in relation many with the constant-criterion of location. We then place ourselves in the situation of a *T0*L0. But for the reasons which have just been mentioned, the criteria of such a n-universe can be left wholly implicit in the simplified notation.

The preceding remarks then suggest a general simplification of the notation used. Indeed, since a n-universe includes multiple objects and a constant-criterion of time, the multiple objects are necessarily in relation many with the constant-criterion of time. The n-universe is then a *T0, and this criterion can be omitted from the simplified notation. Similarly, if a n-universe includes multiple objects and a constant-criterion of location, the multiple objects are necessarily in relation many with the constant-criterion of location. The given n-universe is then a *L0, and this criterion can likewise be omitted. As a result, *L0*T0 can be written with no explicit criterion at all, *L0T simplifies into T, *L0*T into *T, *L0*T* into *T*, etc.
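One possible mechanisation of this simplification rule, under the assumption stated above that the simplified notation retains only the variable-criteria, is the following sketch. The tuple encoding and the function name are illustrative, not the paper's own:

```python
# Each criterion is encoded as (symbol, is_constant, relation_many, demultiplied).
def simplify(criteria):
    """Drop constant-criteria (and the deducible '*' they carry),
    keeping only the variable-criteria, e.g. *T0*L0C -> C."""
    parts = []
    for symbol, is_constant, many, demult in criteria:
        if is_constant:
            continue                      # *T0, *L0 are left implicit
        parts.append(("*" if many else "") + symbol + ("*" if demult else ""))
    return "".join(parts)

# *L0T -> T ; *L0*T -> *T ; *L0*T* -> *T* ; *T0*L0C -> C
```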

2. Modelling random events with n-universes

The situations traditionally implemented in probability theory involve dice, coins, card games, or urns containing balls. It is worth setting out to describe how such objects can be modelled within n-universes. It also proves necessary to model the notion of a “toss” in the probability spaces extended to n-universes. One can make use of the modellings that follow:1

2.1. Throwing a die

How can we model a toss such that the result of the throwing of the die is “5”? We model here the die as a unique object which finds itself at a space location L0 and which is susceptible of presenting at time T0 one discrete modality of space direction among {1,2,3,4,5,6}. The corresponding n-universe then includes a unique object, a variable of direction and a temporal constant. The unique object can only present one single direction at time T0 and is thus not demultiplicated with regard to the criterion of direction. The n-universe is an O (with extended notation o0T0L0O). Traditionally, we have the sample space Ω = {1,2,…,6} and the event {5}. The drawing of “5” consists here, for the unique object, in having direction 5 among {1,2,…,6} at time T0 and at location L0. We then denote the sample space by o0T0L0O{1,2,…,6} and the event by o0T0L0O{5}.2

How can we model two successive throws of the same die, such that the result is “5” and then “1”? Traditionally, we have the sample space Ω = {1,2,…,6}2 and the event {(5,1)}. Here, this corresponds to the fact that the die o0 has direction 5 and 1 respectively at T1 and T2. In the corresponding n-universe, we now have a time variable including two positions: T1 and T2. Moreover, the time variable is with demultiplication, because the unique object exists at different temporal positions. The considered n-universe is therefore a T*O (with extended notation o0T*L0O). We then denote the sample space by o0T*{1,2}L0O{1,2,…,6} and the event by {o0T*{1}L0O{5}, o0T*{2}L0O{1}}.
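The classical computation over this product sample space can be checked directly. The following minimal sketch counts outcomes only; it does not model the n-universe notation itself:

```python
from fractions import Fraction
from itertools import product

# Two successive throws of one die: the unique object is demultiplicated
# over the temporal positions T1 and T2, giving ordered pairs of directions.
sample_space = list(product(range(1, 7), repeat=2))   # 36 equally likely outcomes
event = [(5, 1)]                                      # direction 5 at T1, then 1 at T2
p = Fraction(len(event), len(sample_space))           # Fraction(1, 36)
```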

2.2. Tossing a coin

How can we model the toss, for example, of Tails resulting from the flipping of a coin? We model here the coin as a unique object presenting two different modalities of direction among {P,F} (P for Pile, i.e. Tails, and F for Face, i.e. Heads, following the French original). The corresponding n-universe is identical to the one which allows us to model the die, with the sole difference that the direction criterion includes only two taxa: {P,F}. The corresponding n-universe is therefore an O (with extended notation o0T0L0O). Classically, we have: Ω = {P,F} and the event {P}. Here, the Tails-toss is assimilated with the fact, for the unique object, of taking direction {P} among {P,F} at time T0 and at location L0. The sample space is then denoted by o0T0L0O{P,F} and the event by o0T0L0O{P}.

How can we model two successive tosses of the same coin, such that the result is “Heads” and then “Tails”? Classically, we have the sample space Ω = {P,F}2 and the event {(F,P)}. As with the modelling of the successive throws of the same die, the corresponding n-universe is here a T*O (with extended notation o0T*L0O). The sample space is then denoted by o0T*{1,2}L0O{P,F} and the event by {o0T*{1}L0O{F}, o0T*{2}L0O{P}}.

2.3. Throwing several discernible dice

How can we model the throwing of two discernible dice at the same time, for example the simultaneous toss of one “3” and one “5”? The discernible dice are modelled here as multiple objects, each at a given space position, susceptible of presenting at time T0 one modality of space direction among {1,2,3,4,5,6}. The multiple objects co-exist at the same temporal position, so that the objects are in relation many with the temporal constant. In addition, the multiple objects can only present one single direction at time T0 and are therefore not demultiplicated with regard to the criterion of direction. The fact that both dice can have the same direction corresponds to the fact that the objects are in relation many with the criterion of direction. There also exists a location variable, each of the dice o1 and o2 being at a distinct space position; we consider that this latter property is what renders the dice discernible. The objects are here in relation one with the location criterion. In addition, the objects can only occupy one single space position at time T0 and are therefore not demultiplicated with regard to the location criterion. The n-universe is then a L*O (with extended notation *T0L*O). Classically, one has: Ω = {1,2,3,4,5,6}2 and the event {(3,5)}. Here, this corresponds to the fact that the dice o1 and o2 are to be found respectively at L1 and L2 and each present a given direction among {1,2,…,6} at time T0. We then denote the sample space by {o1,o2}*T0L{1,2}*O{1,2,…,6} and the event by {{o1}*T0L{1}*O{3}, {o2}*T0L{2}*O{5}}.

2.4. Throwing several indiscernible dice

How can we model the throwing of two indiscernible dice, for example the toss of one “3” and one “5” at the same time? Both indiscernible dice are modelled as multiple objects being at space position L0 and capable of presenting at time T0 one modality of space direction among {1,2,3,4,5,6} at a given location. The multiple objects co-exist at the same temporal position, so that the objects are in relation many with the temporal constant. The multiple objects can only present one single direction at time T0 and are therefore not with demultiplication with regard to the criterion of direction. The fact that both dice are susceptible of having the same direction corresponds to the fact that the objects are in relation many with the criterion of direction. Both dice 1 and 2 are at the same location L0, which makes them indiscernible. In addition, the multiple objects are in relation many with the constant-criterion of location. Lastly, the objects can only be at one single space position at time T0 and are therefore not with demultiplication with regard to the location criterion. The corresponding n-universe is then a *O (with extended notation *T0*L0*O). Classically, we have Ω = {(i, j) with 1 ≤ i ≤ j ≤ 6} and the event {3,5}. Here, it corresponds to the fact that dice 1 and 2 are both at L0 and present a given direction among {1,2,…,6} at T0. The sample space is then denoted by {1,2}*T0*L0*O{1,…,6} and the event by {{1}*T0*L0*O{3}, {2}*T0*L0*O{5}}.
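The sample space of unordered pairs can also be enumerated. This illustrative sketch (names are ours) reproduces only the structure of Ω with its 21 outcomes; it makes no claim that those outcomes are equiprobable for physical dice.

```python
from itertools import combinations_with_replacement

# Indiscernible dice: the classical sample space collapses ordered pairs
# into unordered ones, Ω = {(i, j) : 1 ≤ i ≤ j ≤ 6}, with |Ω| = 21.
omega = list(combinations_with_replacement(range(1, 7), 2))
print(len(omega))       # 21

# The event "one 3 and one 5" is now the single unordered pair (3, 5).
print((3, 5) in omega)  # True
```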

2.5. Drawing a card

How can we model the drawing of a card, for example the card #13, from a set of 52 cards? Cards are modelled here as multiple objects, each presenting a different colour among {1,2,…,52}. The cards’ numbers are assimilated here to taxa of colour, numbered from 1 to 52. Every object can have only one single colour at a given time. As a result, the multiple objects are not with demultiplication with regard to the colour criterion. In addition, each colour is presented by one single card at a given time. Hence, the objects are in relation one with the colour criterion. Moreover, the multiple objects can be, at a given time, at the same space location (to fix ideas, on the table). The objects are then in relation many with the location criterion. Lastly, the objects can co-exist at the same given temporal position. Thus, they are in relation many with the time criterion. The corresponding n-universe is then a C (with extended notation *T0*L0C). Classically, we have the sample space Ω = {1,2,…,52} and the event {13}. Here, the drawing of the card #13 is assimilated to the fact that the object whose colour is #13 is at T0 at location L0. The sample space is then denoted by {1,2,…,52}*T0*L0C{1,2,…,52} and the event by {1}*T0*L0C{13}.

The drawing of two cards at the same time or the successive drawing of two cards are then modelled in the same way.

2.6. Drawing a ball from an urn containing red and blue balls

How can we model the drawing of, for example, a red ball from an urn containing 10 balls, among which 3 red balls and 7 blue balls? The balls are modelled here as multiple objects, each presenting one colour among {R,B}. There exists then a colour variable in the corresponding n-universe. In addition, several objects can present the same colour. The objects are then in relation many with the variable-criterion of colour. Moreover, the objects are in relation many with regard to the constant-criteria of time and location. The corresponding n-universe is therefore a *T0*L0*C (with simplified notation *C). Classically, we have the sample space Ω = {R,R,R,B,B,B,B,B,B,B} and the event {R}. The sample space is then denoted by {1,2,…,10}*T0*L0*C{R,B} and the event by {{1}*T0*L0*C{R}}.
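The classical urn model just described can be sketched as follows (illustrative; `urn` and `p_red` are names of ours):

```python
from fractions import Fraction
from collections import Counter

# The urn from the text: 10 balls, 3 red (R) and 7 blue (B), all at T0, L0.
urn = ["R"] * 3 + ["B"] * 7
counts = Counter(urn)

# Probability of drawing a red ball under the classical model.
p_red = Fraction(counts["R"], len(urn))
print(p_red)  # 3/10
```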

The drawing of two balls at the same time or the successive drawing of two balls are modelled in the same way.

3. Dimorphisms and isomorphisms

The comparison of the structures of the sample spaces extended to n-universes corresponding to two given probabilistic situations makes it possible to determine whether these situations are, from a probabilistic viewpoint, isomorphic or not. The examination of the structures of the sample spaces thus allows the isomorphisms or, on the contrary, the dimorphisms to be identified easily. Let us give some examples.

Consider a first type of application, where one wonders whether two probabilistic situations are of a comparable nature. To this end, we model the two distinct probabilistic situations within the n-universes. The first situation is thus modelled in a *T0*L0*C (with simplified notation *C), and the second one in a *T0*L0C (with simplified notation C). One notices then a dimorphism between the n-universes in which the two probabilistic situations are respectively modelled. Indeed, in the first situation, the multiple objects are in relation many with the colour criterion, corresponding to the fact that several objects can have an identical colour at a given moment and location. On the other hand, in the second situation, the multiple objects are in relation one with the colour criterion, which corresponds to the fact that each object has a different colour at a given time and location. The dimorphism observed at the level of the relation of the objects with the variable-criterion of colour in the two corresponding n-universes makes it possible to conclude that the two probabilistic situations are not of a comparable nature.

It is worth considering now a second type of application. The throwing of two discernible dice is modelled, as we have seen, in a {1,2}*T0L{1,2}*O{1,…,6}. Now let us consider a headlight which can take at a given time one of 6 colours numbered from 1 to 6. If one considers two headlights of this type, it appears that the corresponding situation can be modelled in a {1,2}*T0L{1,2}*C{1,…,6}. In this last case, the variable-criterion of colour replaces the criterion of orientation. At this stage, it turns out that the structure of such a n-universe (with simplified notation L*C) is isomorphic to that of the n-universe in which the throwing of two discernible dice was modelled (with simplified notation L*O). This makes it possible to conclude that the two probabilistic situations are of a comparable nature.

Let us consider now a concrete example. John Leslie (1996, 20) describes in the following terms the Emerald case:

Imagine an experiment planned as follows. At some point in time, three humans would each be given an emerald. Several centuries afterwards, when a completely different set of humans was alive, five thousand humans would again each be given an emerald in the experiment. You have no knowledge, however, of whether your century is the earlier century in which just three people were to be in this situation, or the later century in which five thousand were to be in it. Do you say to yourself that if yours were the earlier century then the five thousand people wouldn’t be alive yet, and that therefore you’d have no chance of being among them? On this basis, do you conclude that you might just as well bet that you lived in the earlier century?

Leslie thus puts in parallel a real situation related to some emeralds and a probabilistic model concerning some balls in an urn. Let us proceed then to model the real, concrete situation described by Leslie in terms of n-universes. It appears first that the corresponding situation is characterised by the presence of multiple objects: the emeralds. We find ourselves then in a n-universe with multiple objects. Second, one can consider that the emeralds are situated at one single place: the Earth. Thus, the corresponding n-universe has a location constant (L0). Leslie also distinguishes two discrete temporal positions in the experiment: the one corresponding to a given time and the other situated several centuries later. The corresponding n-universe then comprises a time variable with two taxa: T1 and T2. Moreover, it turns out that the emeralds existing in T1 do not exist in T2 (and reciprocally). Consequently, the n-universe corresponding to the emerald case is a n-universe which is not with temporal demultiplication. Moreover, one can observe that several emeralds can be at the same given temporal position Ti: three emeralds thus exist in T1 and five thousand in T2. Thus, the objects are in relation many with the time variable. Lastly, several emeralds can coexist in L0, and the objects are thus in relation many with the location constant. Taking into account what precedes, it appears that the Emerald case takes place in a *T (with extended notation *T*L0), a n-universe with multiple objects, comprising a location constant and a time variable with which the objects are in relation many.

Compare now with the situation of the Little Puddle/London experiment, also described by Leslie (1996, 191):

Compare the case of geographical position. You develop amnesia in a windowless room. Where should you think yourself more likely to be: in Little Puddle with a tiny population, or in London? Suppose you remember that Little Puddle’s population is fifty while London’s is ten million, and suppose you have nothing but those figures to guide you. (…) Then you should prefer to think yourself in London. For what if you instead saw no reason for favouring the belief that you were in the larger of the two places? Forced to bet on the one or on the other, suppose you betted you were in Little Puddle. If everybody in the two places developed amnesia and betted as you had done, there would be ten million losers and only fifty winners. So, it would seem, betting on London is far more rational. The right estimate of your chances of being there rather than in Little Puddle, on the evidence in your possession, could well be reckoned as ten million to fifty.

The latter experiment is based on a real, concrete situation, to be put in relation with an implicit probabilistic model. It appears first that the corresponding situation is characterised by the presence of multiple inhabitants: 50 in Little Puddle and 10 million in London. The corresponding n-universe is then a n-universe with multiple objects. It appears, second, that this experiment takes place at one single time: the corresponding n-universe has then one time constant (T0). Moreover, two space positions, Little Puddle and London, are distinguished, so that we can model the corresponding situation with the help of a n-universe comprising two space positions: L1 and L2. Moreover, each inhabitant is either in Little Puddle or in London, but no one can be at the two places at the same time. The corresponding n-universe is then not with local demultiplication. Lastly, one can notice that several people can find themselves at a given space position Li: there are thus 50 inhabitants in Little Puddle (L1) and 10 million in London (L2). The objects are thus in relation many with the space variable. And in a similar way, several inhabitants can be simultaneously either in Little Puddle or in London at time T0. Thus, the objects are in relation many with the time constant. Taking into account what precedes, it appears that the situation of the Little Puddle/London experiment takes place in a *L (with extended notation *T0*L), a n-universe with multiple objects, comprising a time constant and a location variable with which the objects are in relation many.

As we can see, the emerald case takes place in a *T, whereas the Little Puddle/London experiment situates itself in a *L. This makes it possible to highlight the isomorphic structure of the two n-universes in which the two experiments are respectively modelled. This allows us first to conclude that the probabilistic model which applies to the one also holds for the other. Moreover, it appears that both the *T and the *L are isomorphic with the *C. This makes it possible to determine straightforwardly the corresponding probabilistic model. Thus, the situation corresponding to both the emerald case and the Little Puddle/London experiment can be modelled by the drawing of a ball from an urn comprising red and green balls. In the emerald case, it consists of an urn comprising 3 red balls and 5000 green balls. In the Little Puddle/London experiment, the urn thus includes 50 red balls and 10^7 green balls.
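The urn models for the two cases can be computed directly. An illustrative sketch (the helper name `p_red` is ours): drawing a red ball corresponds to belonging to the smaller group in each case.

```python
from fractions import Fraction

# Both cases reduce to drawing one ball from a two-colour urn; p_red gives
# the chance of drawing a red ball (i.e. belonging to the smaller group).
def p_red(n_red, n_green):
    return Fraction(n_red, n_red + n_green)

print(p_red(3, 5000))          # Emerald case: 3/5003
print(p_red(50, 10_000_000))   # Little Puddle/London: 1/200001
```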

4. Goodman’s paradox

Another interest of the n-universes as a methodological tool resides in their use to clarify complex situations such as those faced in the study of paradoxes. I will illustrate in what follows the contribution of the n-universes in such circumstances through the analysis of Goodman’s paradox.3

Goodman’s paradox was described in Fact, Fiction and Forecast (1954, 74-75). Goodman explains his paradox as follows. Every emerald observed up to now has turned out to be green. Intuitively, we therefore foresee that the next emerald to be observed will also be green. Such a prediction is based on the generalisation according to which all emeralds are green. However, if one considers the property grue, that is to say “observed before today and green, or observed after today and not-green”,4 we can notice that this property is also satisfied by all instances of emeralds observed so far. But the prediction which now results from it, based on the generalisation according to which all emeralds are grue, is that the next emerald to be observed will be not-green. And this contradicts the previous conclusion, which however conforms with our intuition. The paradox comes here from the fact that the application of an enumerative induction to the same instances, with the two predicates green and grue, leads to predictions which turn out to be contradictory. This contradiction constitutes the heart of the paradox. One of the inductive inferences must then be fallacious. And intuitively, the conclusion according to which the next observed emerald will be not-green appears erroneous.

Let us now set out to model Goodman’s experiment in terms of n-universes. To do so, it is necessary to describe accurately the conditions of the universe of reference in which the paradox takes place. Goodman thus makes mention of the properties green and not-green applicable to emeralds. Colour constitutes then one of the variable-criteria of the n-universe in which the paradox takes place. Moreover, Goodman draws a distinction between emeralds observed before T and those which will be observed after T. Thus, the corresponding n-universe also includes a variable-criterion of time. As a result, we are in a position to describe the minimal universe in which Goodman (1954) situates himself as a coloured and temporal n-universe, i.e. a CT.

Moreover, Goodman makes mention of several instances of emeralds. It could then seem natural to model the paradox in a coloured and temporal n-universe with multiple objects. However, it does not appear necessary to make use of a n-universe including multiple objects. Given the methodological objective of avoiding a combinatorial explosion of cases, it is indeed preferable to model the paradox in the simplest type of n-universe, i.e. a n-universe with a unique object. We then observe the emergence of a version of the paradox based on one unique emerald, the colour of which is likely to vary in the course of time. This version is the following. The emerald which I currently observe was green every time I observed it before. I conclude therefore, by induction, that it will also be green the next time I observe it. However, the same type of inductive reasoning also leads me to conclude that it will be grue, and therefore not-green. As we can see, such a variation still leads to the emergence of the paradox. The latter version takes place in a n-universe including a unique object and variable-criteria of colour and of time, i.e. a CT. At this step, given that the original statement of the paradox turns out to be ambiguous in this respect, and that the minimal context is that of a CT, we will be led to distinguish between two situations: the one which situates itself in a CT, and the one which takes place in a CT augmented with a third variable-criterion.

Let us place ourselves first in the context of a coloured and temporal n-universe, i.e. a CT. In such a universe, to be green is to be green at time T. In this context, it appears completely legitimate to project the shared property of colour (green) of the instances through time. The corresponding projection can be denoted by C°T. The emerald was green every time I observed it before, and the inductive projection leads me to conclude that it will also be green the next time I observe it. This can be formalised as follows (V denoting green):

(I1) VT1·VT2·VT3·…·VT99 (instances)
(H2) VT1·VT2·VT3·…·VT99·VT100 (generalisation)
(P3) VT100 (from (H2))

The previous reasoning appears completely correct and conforms to our inductive practice. But are we thereby entitled to conclude that the predicate green is projectible without restriction in the CT? It appears not. For the preceding inductive enumeration applies to a n-universe where the temporal variable corresponds to our present time, for example the period of 100 years surrounding our present epoch, that is to say the interval [-10^2, +10^2] years. But what would happen if the temporal variable extended much further, by including for example the period of 10 thousand million years around our current time, that is to say the interval [-10^10, +10^10] years? In that case, the emerald is observed in 10 thousand million years. At that time, our sun has burned out and has progressively become a white dwarf. The temperature on our planet has then warmed up in significant proportions, to the point of reaching 8000°: the observation then reveals that the emerald, like most minerals, has undergone important transformations and now proves to be not-green. Why is the projection of green correct in the CT where the temporal variable is defined by restriction to our present time, and incorrect if the temporal variable extends to the interval of 10 thousand million years before or after our present time? In the first case, the projection is correct because the different instances of emeralds are representative of the reference class to which the projection applies. An excellent way of getting representative instances of a given reference class is to choose them by means of a random draw. On the other hand, the projection is not correct in the second case, for the different instances are not representative of the considered reference class. Indeed, the 99 observations of emeralds come from our modern time while the 100th concerns an extremely distant time.
So, the generalisation (H2) results from 99 instances which are not representative of the CT[-10^10, +10^10] and cannot legitimately serve as a support for induction. Thus green is projectible in the CT[-10^2, +10^2] and not projectible in the CT[-10^10, +10^10]. At this stage, it already appears that green is not projectible in the absolute, but turns out to be projectible or not projectible relative to this or that n-universe.
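The representativeness point can be illustrated with a toy simulation. All the assumptions here are ours and purely illustrative, notably the hypothetical cut-off of 5×10^9 years for the emerald's colour change; the sketch only shows that instances sampled from the narrow interval say nothing about the wide reference class.

```python
import random

# Toy model (assumption of ours, purely illustrative): the emerald is green
# at times |t| < 5e9 years around the present, and not-green beyond that.
def is_green(t_years):
    return abs(t_years) < 5e9

random.seed(0)
# 99 instances drawn from the narrow n-universe CT[-10^2, +10^2] ...
modern = [random.uniform(-100, 100) for _ in range(99)]
# ... versus 99 instances drawn at random from CT[-10^10, +10^10].
wide = [random.uniform(-1e10, 1e10) for _ in range(99)]

# All narrow-interval instances are green, yet they are not representative
# of the wide reference class, where a random sample is mixed:
print(all(is_green(t) for t in modern))              # True
print(sum(is_green(t) for t in wide) / len(wide))    # close to 0.5
```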

In the light of what precedes, we are now in a position to highlight what proved to be fallacious in the projection of the generalisation according to which “all swans are white”. In 1690, such a hypothesis resulted from the observation of a large number of instances of swans in Europe, in America, in Asia and in Africa. The n-universe in which such a projection took place was a n-universe with multiple objects, including variables of colour and of location. To simplify, we can consider that all instances had been picked at the constant time T0. The corresponding inductive projection C°L led to the conclusion that the next observed swan would be white. However, such a prediction turned out to be false upon the discovery, in 1697, by the Dutch explorer Willem de Vlamingh, of black swans in Australia. In the n-universe in which such a projection took place, the location criterion was implicitly assimilated to our whole planet. However, the generalisation according to which “all swans are white” was founded on the observation of instances of swans which came from only one part of the n-universe of reference. The sample therefore turned out to be biased and not representative of the reference class, thus yielding the falseness of the generalisation and of the corresponding inductive conclusion.

Let us now consider the projection of grue. The use of the grue property, which constitutes (with bleen) a taxon of tcolour*, reveals that the system of criteria used comes from the Z. The n-universe in which the projection of grue takes place is then a Z, a n-universe to which the CT reduces. For the fact that there exist two taxa of colour (green, not-green) and two taxa of time (before T, after T) in the CT determines four different states: green before T, not-green before T, green after T, not-green after T. By contrast, the Z only determines two states: grue and bleen. The reduction of the CT to the Z is made by transforming the taxa of colour and of time into taxa of tcolour*. The classical definition of grue (green before T or not-green after T) allows for that. In this context, it appears that the paradox is still present. It comes indeed under the following form: the emerald was grue every time I observed it before, and I conclude inductively that the emerald will also be grue, and thus not-green, the next time I observe it. The corresponding projection Z°T can then be formalised (G denoting grue):

(I4*) GT1·GT2·GT3·…·GT99 (instances)
(H5*) GT1·GT2·GT3·…·GT99·GT100 (generalisation)
(H5’*) VT1·VT2·VT3·…·VT99·~VT100 (from (H5*), by definition)
(P6*) GT100 (prediction)
(P6’*) ~VT100 (from (P6*), by definition)

What is it, then, that deceives our intuition in this specific variation of the paradox? It appears here that the projection of grue comes under a form which is likely to create an illusion. Indeed, the projection Z°T which results from it is that of tcolour* through time. The general idea which underlies the inductive reasoning is that the instances are grue before T and therefore also grue after T. But it should be noticed here that the corresponding n-universe is a Z. And in a Z, the only variable-criterion is tcolour*. In such a n-universe, an object is grue or bleen in the absolute. By contrast, an object is green or not-green in the CT relative to a given temporal position. But in the Z where the projection of grue takes place, an additional variable-criterion is missing for the projection of grue to be legitimately made. Owing to the fact that an object is grue or bleen in the absolute in a Z, when it is grue before T, it is also necessarily grue after T. And from the information according to which an object is grue before T, it is therefore possible to conclude, by deduction, that it is also grue after T. As we can see, the variation of the paradox corresponding to the projection Z°T presents a structure which gives it the appearance of an enumerative generalisation but which in fact constitutes a genuine deductive reasoning. The reasoning that ensues from it thus constitutes a disguised form of induction, a pseudo-induction.

Let us now envisage the case of a coloured, temporal n-universe including an additional variable-criterion. A n-universe including variable-criteria of colour, of time and of location,5 i.e. a CTL, will be suited for that. To be green in a CTL is to be green at time T and at location L. Moreover, the CTL reduces to a ZL, a n-universe the variable-criteria of which are tcolour* and location. The taxa of tcolour* are grue and bleen. And to be grue in the ZL is to be grue at location L.

In a preliminary way, one can point out here that the projections C°TL and Z°TL do not require a separate analysis. Indeed, these two projections present the same structure as those of the projections C°T and Z°T which have just been studied, except for an additional differentiated criterion of location. The conditions under which the paradox dissolves when one compares the projections C°T and Z°T therefore apply identically to the variation of the paradox which emerges when one relates the projections C°TL and Z°TL.

On the other hand, it appears opportune here to relate the projections CT°L and Z°L, which take place respectively in the CTL and the ZL. Let us begin with the projection CT°L. The shared criteria of colour and of time are projected here through a differentiated criterion of location. The taxa of time are here before T and after T. In this context, the projection of green comes under the following form. The emerald was green before T in every place where I observed it before, and I conclude from it that it will also be green before T in the next place where it will be observed. The corresponding projection CT°L can then be formalised as follows:

(I7) VTL1·VTL2·VTL3·…·VTL99 (instances)
(H8) VTL1·VTL2·VTL3·…·VTL99·VTL100 (generalisation)
(P9) VTL100 (prediction)

At this stage, it seems completely legitimate to project the properties green and before T shared by the instances through a differentiated criterion of location, and to predict that the next emerald which will be observed at location L will present the same properties.

What now of the projection of grue in the CTL? The use of grue conveys the fact that we place ourselves in a ZL, a n-universe to which the CTL reduces and the variable-criteria of which are tcolour* and location. The fact of being grue is relative to the variable-criterion of location. In the ZL, to be grue is to be grue at location L. The projection then relates to a taxon of tcolour* (grue or bleen) which is shared by the instances, through a differentiated criterion of location. Consider then the classical definition of grue (green before T or not-green after T). Thus, the emerald was grue in every place where I observed it before, and I predict that it will also be grue in the next place where it will be observed. If we take T = 10^10 years from now, the projection Z°L in the ZL then appears as a completely valid form of induction (VT denoting green before T and V~T green after T):

(I10*) GL1·GL2·GL3·…·GL99 (instances)
(H11*) GL1·GL2·GL3·…·GL99·GL100 (generalisation)
(H11’*) (VT∨~V~T)L1·(VT∨~V~T)L2·(VT∨~V~T)L3·…·(VT∨~V~T)L99·(VT∨~V~T)L100 (from (H11*), by definition)
(P12*) GL100 (prediction)
(P12’*) (VT∨~V~T)L100 (from (P12*), by definition)

As pointed out by Frank Jackson (1975, 115), such a type of projection applies legitimately to all objects whose colour changes in the course of time, such as tomatoes or cherries. Moreover, one can notice that if we consider a very long period of time, which extends, as in the example of the emeralds, to 10 thousand million years, such a property applies virtually to all concrete objects. Finally, one can notice here that the contradiction between the two concurrent predictions (P9) and (P12’*) has now disappeared, since the emerald turns out to be green before T in L100 (VTL100) in both cases.

As we can see, in the present analysis a predicate turns out to be projectible or not projectible relative to this or that universe of reference. Just like green, grue is not projectible in the absolute, but turns out to be projectible in some n-universes and not projectible in others. This constitutes a difference with several classical solutions offered to Goodman’s paradox, according to which a predicate turns out to be projectible or not projectible in the absolute. Such solutions lead to the definition of a criterion allowing the projectible predicates to be distinguished from the unprojectible ones, based on the differentiations temporal/non-temporal, local/non-local, qualitative/non-qualitative, etc. Goodman himself puts the distinction projectible/unprojectible in correspondence with the distinction entrenched/unentrenched. However, later reflections of Goodman, formulated in Ways of Worldmaking, emphasise further the non-absolute nature of the projectibility of green or of grue: “Grue cannot be a relevant kind for induction in the same world as green, for that would preclude some of the decisions, right or wrong, that constitute inductive inference”. As a result, grue can turn out to be projectible in a Goodmanian world and not projectible in some other one. For green and grue belong, for Goodman, to different worlds which present different structures of categories.6 In this sense, it appears that the present solution is based on a form of relativism the nature of which is essentially Goodmanian.

5. Conclusion

From what precedes, and from the analysis of Goodman’s paradox in particular, one can think that the n-universes are of a fundamentally Goodmanian essence. From this viewpoint, the essence of the n-universes turns out to be pluralist, thus allowing numerous descriptions, with the help of different systems of criteria, of one and the same reality. A characteristic example, as we have seen, is the reduction of the criteria of colour and time in a CTL into a unique criterion of tcolour* in a ZL. In this sense, one can consider the n-universes as an implementation of the programme defined by Goodman in Ways of Worldmaking. Goodman indeed proposes to construct worlds by composition, by emphasis, by ordering or by deletion of some elements. The n-universes allow in this sense to represent our concrete world with the help of different systems of criteria, each of which corresponds to a relevant point of view, a way of seeing or of considering one and the same reality. In this sense, to privilege this or that system of criteria, to the detriment of others, leads to a truncated view of this same reality. And the exclusive choice, without objective motivation, of such or such a n-universe engenders a biased point of view.

However, the genuine nature of the n-universes turns out to be inherently ambivalent. For the similarity of the n-universes with the Goodmanian worlds does not prove to be exclusive of a purely ontological approach. Alternatively, it is indeed possible to consider the n-universes from a purely ontological point of view, as a methodological tool allowing this or that concrete situation to be modelled directly. The n-universes then constitute so many universes with different properties, according to the combinations resulting from the presence of a unique object or multiple objects, in relation one or many, with demultiplication or not, with regard to the criteria of time, location, colour, etc. In a Goodmanian sense too, the n-universes then allow so many universes with different structures to be built, which sometimes correspond to the properties of our real world, but which sometimes have exotic properties. To name only the simplest of the latter, the L* is a n-universe which includes only one ubiquitous object, presenting the property of being at several locations at the same time.7

At this stage, it is worth mentioning several advantages which would result from the use of the n-universes for modelling probabilistic situations. A first advantage would be to allow a better intuitive apprehension of a given probabilistic situation, by emphasising its essential elements and by suppressing its superfluous ones. By differentiating, for example, according to whether the situation to be modelled presents a time constant or a time variable, a space constant or a space variable, a unique object or several objects, etc., the modelling of concrete situations in the n-universes provides a better support to intuition. On the other hand, the distinction according to whether or not the objects are with demultiplication, or in relation one or many with regard to the different criteria, allows for a precise classification of the different probabilistic situations which are encountered.

One can notice, second, that the use of the notation of probability spaces extended to the n-universes would remove the ambiguity which is sometimes associated with classical notation. As we have seen, we sometimes face an ambiguity. Indeed, it turns out that {1,2,…,6}² denotes at the same time the sample space of a simultaneous throw of two distinguishable dice in T0 and that of two successive throws of the same die, in T1 and then in T2. With the notation extended to the n-universes, the ambiguity disappears. In effect, the sample space of the simultaneous throw of two distinguishable dice in T0 is a {1,2}*T0L{1,2}*O{1,2,…,6}, whilst that of two successive throws of the same die in T1 and then in T2 is a 0T*{1,2}L0O{1,2,…,6}.
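
The ambiguity can be made concrete with a short sketch (Python is used here purely for illustration; the tuple labels "O1", "O2", "T1", "T2" are mine, not part of the n-universe notation): both experiments share the same bare set {1,2,…,6}² of 36 ordered pairs, yet they model different structures, one indexed by two objects at a single time, the other by one object at two times.

```python
from itertools import product

# Bare classical sample space: {1,2,...,6}^2, i.e. all 36 ordered pairs.
bare_space = set(product(range(1, 7), repeat=2))

# The same 36 pairs, but with different underlying structures:
# (a) two distinguishable dice O1 and O2 thrown simultaneously at T0
simultaneous = {(("O1", r1), ("O2", r2)) for r1, r2 in bare_space}
# (b) a single die thrown successively, at T1 and then at T2
successive = {(("T1", r1), ("T2", r2)) for r1, r2 in bare_space}

print(len(bare_space))  # 36 outcomes in both experiments
```

Stripping the labels collapses both structured spaces back onto the same bare set, which is exactly why the classical notation {1,2,…,6}² cannot distinguish the two experiments.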

Finally, an important advantage, as we have just seen, which would result from a modelling of probabilistic situations extended to the n-universes is the ease with which it allows comparisons between several probabilistic models, highlighting the corresponding isomorphisms and dimorphisms. But the main advantage of the use of the n-universes as a methodological tool, as we saw with Goodman’s paradox, would reside in the clarification of the complex situations which appear during the study of paradoxes.8

References

Franceschi, Paul. 2001. Une solution pour le paradoxe de Goodman. Dialogue 40: 99-123. English translation at http://cogprints.org/2172/.
—. 2002. Une application des n-univers à l’argument de l’Apocalypse et au paradoxe de Goodman. Doctoral dissertation, Corte: University of Corsica. <http://www.univ-corse.fr/~franceschi/index-fr.htm> [retrieved Dec. 29, 2003]
Goodman, Nelson. 1954. Fact, Fiction and Forecast. Cambridge, MA: Harvard University Press.
—. 1978. Ways of Worldmaking. Indianapolis: Hackett Publishing Company.
Jackson, Frank. 1975. “Grue”. The Journal of Philosophy 72: 113-131.
Leslie, John. 1996. The End of the World: The Science and Ethics of Human Extinction. London: Routledge.

1 It should be noted that these different modellings do not constitute the only way of modelling the corresponding objects in the n-universes. However, they correspond to the overall intuition that one has of these objects.

2 Alternatively, one could use the notation 0T0L0O5 in place of 0T0L0O{5}. The latter notation is however preferred here, since it proves more compatible with the classical notation of events.

3 This analysis of Goodman’s paradox corresponds, in a simplified form and with several adaptations, to the one initially described in Franceschi (2001). The variation of the paradox considered here is that of Goodman (1954), but with a single emerald.

4 P and Q being two predicates, grue presents the following structure: (P and Q) or (~P and ~Q).

5 Any other criterion different from colour and time, such as mass, temperature, orientation, etc., would be equally suitable.

6 Cf. Goodman (1978, 11): “(…) a green emerald and a grue one, even if the same emerald (…) belong to worlds organized into different kinds”.

7 N-universes with non-standard properties require a more detailed study, which goes beyond the scope of the present study.

8 I am grateful to Jean-Paul Delahaye for suggesting the use of n-universes as extended probability spaces. I also thank Claude Panaccio and an anonymous referee for the Journal of Philosophical Research for very useful discussions and comments.

i Entrenched.

ii Cf. Goodman (1978, 11).

The Simulation Argument and the Reference Class Problem: the Dialectical Contextualist’s Standpoint

Postprint. I present in this paper an analysis of the Simulation Argument from a dialectical contextualist’s standpoint. This analysis is grounded on the reference class problem. I begin by describing in detail Bostrom’s Simulation Argument. I then identify the reference class within the Simulation Argument. I also point out a reference class problem, by applying the argument successively to three different reference classes: conscious simulations, imperfect simulations and immersion simulations. Finally, I point out that there are three levels of conclusion within the Simulation Argument, depending on the chosen reference class, each of which yields final conclusions of a fundamentally different nature.

This article supersedes my preceding work on the Simulation argument. Please do not cite previous work.


The Simulation Argument and the Reference Class Problem: A Dialectical Contextualism Analysis

Paul FRANCESCHI

www.paulfranceschi.com

English postprint of a paper initially published in French in Philosophiques, 2016, 43-2, pp. 371-389, under the title L’argument de la Simulation et le problème de la classe de référence : le point de vue du contextualisme dialectique

ABSTRACT. I present in this paper an analysis of the Simulation Argument from a dialectical contextualist’s standpoint. This analysis is grounded on the reference class problem. I begin by describing in detail Bostrom’s Simulation Argument. I then identify the reference class within the Simulation Argument. I also point out a reference class problem, by applying the argument successively to three different reference classes: conscious simulations, imperfect simulations and immersion simulations. Finally, I point out that there are three levels of conclusion within the Simulation Argument, depending on the chosen reference class, each of which yields final conclusions of a fundamentally different nature.

1. The Simulation Argument

I shall propose in what follows an analysis of the Simulation Argument, recently described by Nick Bostrom (2003). I will first describe in detail the Simulation Argument (SA for short), focusing in particular on the resulting counter-intuitive consequence. I will then show how such a consequence can be avoided, based on the analysis of the reference class underlying SA, without having to give up one’s pre-theoretical intuitions.

The general idea behind SA can be stated as follows. It is very likely that post-human civilizations will possess a computing power completely out of proportion with ours today. Such extraordinary computing power should give them the ability to carry out completely realistic simulations of humans, such that the inhabitants of these simulations are conscious of their own existence and in all respects similar to us. In such a context, it is likely that post-human civilizations will devote part of their computing resources to carrying out simulations of the human civilizations that preceded them. In this case, the number of simulated humans should greatly exceed the number of authentic humans. Under such conditions, taking into account the simple fact that we exist leads to the conclusion that it is more likely that we are part of the simulated humans, rather than of the authentic humans.

Bostrom thus points out that the Simulation Argument is based on the following three hypotheses:

(1) it is very likely that humanity will not reach a post-human stage
(2) it is very unlikely that post-human civilizations will carry out simulations of the human civilizations that preceded them
(3) it is very likely that we are currently living in a simulation carried out by a post-human civilization

and it follows that at least one of these three assumptions is true.

For the purposes of the present analysis, it is also useful at this stage to emphasize the underlying dichotomous structure of SA. The first step in the reasoning consists in considering, by dichotomy, that either (i) humanity will not reach a post-human stage, or (ii) it will actually reach such a post-human stage. The first of these two hypotheses corresponds to the disjunct (1) of the argument. We then consider the hypothesis that humanity will reach a post-human stage and thus continue its existence for many millennia. In such a case, it can also be considered likely that post-human civilizations will possess both the technology and the skills necessary to perform human simulations. A new dichotomy then arises: either (i) these post-human civilizations will not perform such simulations — this is the disjunct (2) of the argument; or (ii) these post-human civilizations will actually perform such simulations. In the latter case, it will follow that the number of simulated humans will greatly exceed the number of authentic humans. The probability of living in a simulation will therefore be much greater than that of living in the shoes of an ordinary human. The conclusion then follows that we, the inhabitants of the Earth, are probably living in a simulation carried out by a post-human civilization. This last conclusion constitutes the disjunct (3) of the argument. An additional step leads to the conclusion that at least one of the hypotheses (1), (2) and (3) is true. The dichotomous structure underlying SA can thus be described step by step as follows:

(4) humanity will either not reach a post-human stage or reach a post-human stage [dichotomy 1]
(1) humanity will not reach a post-human stage [hypothesis 1.1]
(5) humanity will reach a post-human stage [hypothesis 1.2]
(6) post-human civilizations will be able to perform human simulations [from (5)]
(7) post-human civilizations will either not perform human simulations or will perform them [dichotomy 2]
(2) post-human civilizations will not perform human simulations [hypothesis 2.1]
(8) post-human civilizations will perform human simulations [hypothesis 2.2]
(9) the proportion of simulated humans will far exceed that of humans [from (8)]
(3) it is very likely that we are currently living in a simulation carried out by a post-human civilization [from (9)]
(10) at least one of the hypotheses (1), (2) and (3) is true [from (1), (2), (3)]
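
The dichotomous structure just described can be sketched as a simple decision procedure (a minimal illustration only; the function name and the boolean encoding are mine, not Bostrom's):

```python
def sa_disjunct(reaches_posthuman_stage, performs_simulations=None):
    """Return which disjunct of the Simulation Argument holds,
    given the outcomes of the two dichotomies."""
    if not reaches_posthuman_stage:
        return 1  # hypothesis 1.1: humanity never reaches a post-human stage
    if not performs_simulations:
        return 2  # hypothesis 2.1: post-humans perform no human simulations
    # hypothesis 2.2: simulated humans vastly outnumber authentic ones,
    # so it is very likely that we are currently living in a simulation
    return 3

# Whichever branch of the dichotomies obtains, exactly one disjunct
# holds: this is step (10) of the argument.
print(sa_disjunct(False))        # 1
print(sa_disjunct(True, False))  # 2
print(sa_disjunct(True, True))   # 3
```

The sketch makes visible that the argument's conclusion (10) is exhaustive over the branches, without asserting which branch is actual.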

It is also worth mentioning an element that results from the very interpretation of the argument. For as Bostrom (2005) points out, the Simulation Argument must not be misinterpreted. It is not an argument that leads to the conclusion that (3) is true, namely that we are currently living in a simulation carried out by a post-human civilization. The core of SA is rather that at least one of the hypotheses (1), (2) or (3) is true.

This nuance of interpretation being mentioned, the Simulation Argument is not without its problems. For SA leads to the conclusion that at least one of the assumptions (1), (2) or (3) is true, and in the situation of ignorance in which we find ourselves, we may consider them equiprobable. As Bostrom himself notes (Bostrom, 2003): “In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2) and (3)”. However, according to our pre-theoretical intuition, the probability of (3) is nil, or at best extremely close to 0, so the conclusion of the argument has the consequence of raising the probability that (3) is true from zero to about 1/3. Thus, the problem with the Simulation Argument is precisely that it shifts, via its disjunctive conclusion, from a zero or almost zero probability concerning (3) to a much higher probability of about 1/3. For a probability of 1/3 for the hypotheses (1) and (2) is not a priori shocking, but it is completely counter-intuitive as far as hypothesis (3) is concerned. It is in this sense that we can talk about the problem posed by the Simulation Argument and the need to find a solution to it.

As a preliminary point, it is worth considering what constitutes the paradoxical aspect of SA. What indeed gives SA a paradoxical nature? For SA differs from the class of paradoxes that lead to a contradiction. In paradoxes such as the Liar or the sorites paradox, the corresponding reasoning leads to a contradiction1. Nothing of the sort can be seen at the level of SA, which belongs, from this point of view, to a different class of paradoxes, including the Doomsday Argument and Hempel’s problem. It is a class of paradoxes whose conclusion is contrary to intuition, and which comes into conflict with the set of all our beliefs. In the Doomsday Argument, then, the conclusion that taking into account our rank within the class of humans who have ever existed makes an apocalypse much more likely than one might initially have thought offends the set of all our beliefs. Similarly, in Hempel’s problem, the fact that a blue umbrella confirms the hypothesis that all crows are black comes into conflict with the body of our knowledge. Likewise within SA, what finally appears paradoxical at first analysis is that SA assigns to the hypothesis that we are currently living in a simulation created by post-humans a probability higher than that resulting from our pre-theoretical intuition.

2. The reference class problem and the Simulation Argument

The conclusion of the reasoning underlying SA, based on the calculation of the future ratio between real and simulated humans, albeit counter-intuitive, nevertheless results from a reasoning that appears a priori valid. However, such reasoning raises a question, which is related to the reference class that is inherent to the argument itself2. Indeed, it appears that SA has, indirectly, a particular reference class, which is that of human simulations. But what constitutes a simulation? The original argument implicitly refers to a reference class which is that of virtual simulations of humans, of a very high quality and by nature indistinguishable from authentic humans. However, there is some ambiguity about the very notion of a simulation and the question arises as to the applicability of SA to other types of human simulations3. Indeed, we are in a position to conceive of somewhat different types of simulations which also fall intuitively within the scope of the argument.

As a preliminary point, it is worth specifying here the nature of the simulations carried out by computer means referred to in the original argument. Implicitly, SA refers to computer simulations carried out by means of conventional computers composed of silicon chips. But it can also be envisaged that simulations are carried out using computers built from components exploiting the properties of DNA and molecular biology. Recent research has shown that it is possible to implement high-performance algorithms (Adleman 1994, 1998) and to produce computer components (Benenson & al. 2001, MacDonald & al. 2006) based on bio-calculation techniques that exploit in particular the combinations of the four components (adenine, cytosine, guanine, thymine) of the DNA molecule. If such a field of research were to expand significantly and make it possible to produce computers at least as powerful as conventional computers, this type of bio-computer could legitimately fall within the scope of SA as well. For the fact that the simulations are carried out using conventional or biological computers4 does not alter the scope of the argument. In any case, the result is that the proportion of simulated humans will be much higher than that of real humans, due to the properties of reality simulated by digital means, since computation is not subject to the physical limits of matter.

It can also be observed preliminarily that Bostrom explicitly refers to simulations carried out using computer means. However, the question arises as to whether simulated humans could not consist of perfectly successful physical copies of real humans. In such a case, the simulations5 could be extremely difficult to discern. A priori, such a variation also constitutes an acceptable version of SA. However, there is a difference with the original argument, which also highlights Bostrom’s preferential choice of computer simulations. Indeed, in the original argument there is a very significant disproportion between humans simulated by computer means on the one hand and real humans on the other. This is the premise (9) of the argument: “the proportion of simulated humans will far exceed that of humans”. As Bostrom points out, the former would then be much more numerous than the latter, due to the very nature of computer simulations. It is this disproportion that then allows us to conclude (3): “we most probably live in a simulation carried out by a post-human civilization”. With simulations of a physical nature, one would not a priori have such a disproportion, and the scope of the conclusion would be somewhat different. Suppose, for example, that post-humans manage to perform simulations of a physical nature whose number would equal that of real humans. In this case, the proportion of simulated humans would be 1/2 (whereas it is close to 1 in the original argument). Premise (9) would then become: “the proportion of simulated humans and actual humans will be 1/2”. And this would only allow us to conclude (3): “the probability that we are simulations performed by a post-human civilization is equal to 1/2”. As can be seen, this results in a significantly attenuated version of SA, in which the argument applies to physical simulations with less force than the original argument.
However, if conditions were to change so that physical simulations came to exhibit a disproportion of the same nature as computer simulations, SA would then apply with all its force. In any event, the following analysis would apply in the same way to this last category of simulations.
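
The effect of the disproportion on premise (9) can be made explicit with a small sketch (an illustration only: the uniform self-sampling assumption behind the argument is stated in the comment, and the observer counts are arbitrary numbers of my own choosing):

```python
def p_simulated(n_simulated, n_real):
    """Probability of being a simulation, assuming one reasons as if one
    were a random sample from all observers, real and simulated alike."""
    return n_simulated / (n_simulated + n_real)

# Computer simulations (original argument): simulated humans vastly
# outnumber real ones, so the probability is close to 1.
print(p_simulated(10**6, 10**3))  # ≈ 0.999

# Physical simulations in equal number: the probability falls to 1/2,
# yielding the attenuated version of premise (9).
print(p_simulated(10**3, 10**3))  # 0.5
```

The attenuated conclusion thus depends entirely on the ratio of simulated to real observers, not on the medium (digital or physical) of the simulation itself.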

With these preliminary considerations in mind, we shall focus in turn on different types of human simulations which are likely to be part of the SA reference class, and on the ensuing conclusions at the level of the argument. For the very question of defining the reference class for SA leads to asking whether or not several types of simulations should be included within its scope. The question of the definition of the reference class for SA thus appears closely related to the future taxonomy of the beings and entities that will populate the Earth in the near or distant future. There is no question here of claiming exhaustiveness, given the speculative nature of such an area. However, it is possible to determine to what extent SA can also be applied to simulations of a different nature from those mentioned in the original argument, but which have equal legitimacy. We shall then examine in turn: conscious simulations, imperfect simulations, and immersion simulations.

3. The reference class problem: the case of conscious simulations

At this stage, it is not yet possible to really talk about a reference class problem within SA. To do so, it must be shown that the choice of one or the other reference class has completely different consequences at the level of the argument, and in particular that the nature of its conclusion is thereby fundamentally modified. In what follows, we will focus on showing that depending on which reference class is chosen, radically different conclusions ensue at the level of the argument itself and that, consequently, there is a reference class problem within SA. For this purpose, we will consider several reference classes in turn, focusing on how conclusions of a fundamentally different nature result from them at the level of the argument itself.

The original version of SA implicitly depicts simulations of humans of a certain type. These are virtual simulations, almost indistinguishable from real humans, and thus presenting a very high degree of sophistication. Moreover, these are simulations that are not aware that they are themselves simulated and are therefore convinced that they are genuine humans. This is implicit in the terms of the argument itself, and in particular in the inference from (9) to (3), which leads to the conclusion that “we” are currently living in an indistinguishable simulation carried out by post-humans. In fact, these are simulations that are somehow abused and misled by post-humans regarding their true identity. For the purposes of this discussion, we shall term quasi-humans the simulated humans who are not aware that they are themselves simulations.

At this stage, it appears that it is also possible to conceive of indistinguishable simulations that have an identical degree of sophistication but that, on the other hand, would be aware that they are being simulated. We shall then call quasi-humans+ the simulated humans who are aware that they are themselves simulations. Such simulations are in all respects identical to the quasi-humans to which SA implicitly refers, with the only difference that they are this time clearly aware of their intrinsic nature of simulation. Intuitively, SA also applies to this type of simulation. A priori, there is no justification for excluding such a type of simulation. Moreover, there are several reasons to believe that quasi-humans+ may be more numerous than quasi-humans. For ethical reasons (i), first of all, it may be thought that post-humans might be inclined to prefer quasi-humans+ to quasi-humans. For conferring existence on quasi-humans constitutes a deception as to their true identity, whereas such an inconvenience is absent in the case of quasi-humans+. Such deception could reasonably be considered unethical and lead to some form of prohibition of quasi-humans. Another reason (ii) is that simulations of humans who are aware of their own simulated nature should not be dismissed a priori. Indeed, we can think that the level of intelligence acquired by some quasi-humans in the near future could be extremely high, and in this case the simulations would very quickly become aware that they are themselves simulations. It may be thought that from a certain degree of intelligence, and in particular that which may be attained by humanity in the not too distant future (Kurzweil, 2000, 2005; Bostrom, 2006), quasi-humans should be able, at least much more easily than at present, to collect evidence that they are the subject of a simulation.
Furthermore (iii), the very concept of a “simulation unaware that it is a simulation” could be inherently contradictory, because its intelligence would then have to be limited, so that it would no longer constitute an indistinguishable and sufficiently realistic simulation6. These three reasons suggest that quasi-humans+ may well exist in greater numbers than quasi-humans, or even that they may be the only type of simulation implemented by post-humans.

At this stage, it is worth considering the consequences of including the quasi-humans+ within the simulation reference class inherent to SA. For this purpose, let us first consider the variation of SA (let us term it SA*) that applies, exclusively, to the class of quasi-humans+. Such a choice, first of all, has no consequence on the disjunct (1) of SA, which refers to a possible disappearance of our humanity before it has reached the post-human stage. Nor does it have any effect on the disjunct (2), according to which post-humans will not create quasi-humans+, i.e. conscious simulations of human beings. On the other hand, the choice of such a reference class has a direct consequence on the disjunct (3) of SA. Certainly, the first-level conclusion that the number of quasi-humans+ will far exceed the number of authentic humans (the disproportion) follows, in the same way as for the original argument. However, the second-level conclusion that “we” are currently quasi-humans no longer follows. Indeed, such a conclusion (let us call it self-applicability) no longer applies to us, since we are not aware that we are being simulated and are completely convinced that we are authentic humans. Thus, in this particular context, the inference from (9) to (3) no longer prevails. Indeed, what constitutes SA’s worrying conclusion no longer results from step (9), since we cannot identify with the quasi-humans+, the latter being clearly aware that they are evolving in a simulation. Thus, unlike the original version of SA based on the reference class that associates humans with quasi-humans, this new version associating humans with quasi-humans+ does not lead to such a disturbing conclusion. The conclusion that now follows, as we can see, is quite reassuring, and in any case very different from the deeply worrying7 conclusion that results from the original argument.

At this stage, a question arises: should we identify, in the context of SA, the reference class with the quasi-humans or with the quasi-humans+?8 It appears that no objective element in SA’s statement supports the a priori choice of the quasi-humans or the quasi-humans+. Thus, any version of the argument that includes the preferential choice of one or the other appears to be biased. This is the case for the original version of SA, which contains a bias in favor of the quasi-humans, resulting from Bostrom’s choice of a class of simulations exclusively assimilated to quasi-humans, i.e. simulations that are not aware of their simulated nature and are therefore abused and misled by post-humans about the very nature of their identity. And this is also the case for SA*, the alternative version of SA that has just been described, which includes a particular bias in favor of the quasi-humans+, simulations that are aware of their own simulated nature. However, the choice of the reference class is fundamental here, because it has an essential consequence: if we choose a reference class that associates simulations with quasi-humans, the result is the worrying conclusion that we are most likely currently living in a simulation. On the other hand, if a reference class is chosen that identifies simulations with quasi-humans+, the result is a reassuring scenario that does not include such a conclusion. At this stage, it is clear that the choice of the quasi-humans, i.e. non-conscious simulations, in the original version of SA, to the detriment of conscious simulations, constitutes an arbitrary choice. Indeed, what makes it possible to prefer the choice of quasi-humans over quasi-humans+? Such justification is lacking in the context of the argument. At this stage, it appears that SA’s original argument contains a bias that leads to the preferential choice of quasi-humans, and to the alarming conclusion associated with it9.

4. The reference class problem: the case of imperfect simulations

The problem of the reference class within SA relates, as mentioned above, to the very nature and type of the simulations referred to in the argument. Is this problem limited to the preferential choice, at the level of the original argument, of unconscious simulations, to the detriment of the alternative choice of conscious simulations, i.e. very sophisticated simulations of humans, capable of creating illusion, but endowed with the awareness that they are themselves simulations? It appears not. Indeed, as mentioned above, other types of simulations can also be envisaged for which the argument also works, but which are of a somewhat different nature. In particular, it is conceivable that post-humans may design and implement simulations that are identical to those of the original argument, but that are not as perfect in essence. Such a situation is quite likely and does not have the ethical disadvantages that could accompany the indistinguishable simulations staged in the original argument. The choice to carry out such simulations could result from the available technological level, or from deliberate and pragmatic choices designed to save time and resources. These could be, for example, simulations of excellent quality, such that the scientific inhabitants of the simulations could only discover their artificial nature after, say, ten years of research. Such simulations could be carried out in very large numbers and, given their less resource-intensive nature, could occur in even greater numbers than quasi-humans. For the purposes of this discussion, we will call this category of simulations imperfect simulations.

At this stage, one may ask what the consequences for SA are of taking into account a reference class identified with imperfect simulations. In this case, the first-level conclusion that the number of imperfect simulations will far exceed the number of authentic humans (the disproportion) follows, in the same way as in the original argument. But here too, the second-level conclusion that “we” are currently imperfect simulations (self-applicability) no longer follows. The latter no longer applies to us, and a reassuring conclusion replaces it, since we are clearly aware that we are not such imperfect simulations. Finally, it turns out that the conclusion that results from taking into account the class of imperfect simulations is of the same nature as the one that follows when considering the class of the quasi-humans+.

5. The reference class problem: the case of immersion simulations

As we have seen, extending the SA reference class to conscious simulations leads to a conclusion of a different nature from the one that results from the original argument. The same applies to another category of simulations, the imperfect simulations, which lead to a conclusion of the same nature as conscious simulations, and which in any case turns out to be different from the one resulting from the simulations mentioned in the original argument. At this stage, the question arises as to whether the reference class cannot be assimilated to other types of simulations that are relevant from the point of view of SA, and whose consideration would lead to a conclusion inherently different from the one that follows when considering the simulations of the original argument, or conscious or imperfect simulations.

In particular, the question arises as to whether human simulations, which would be such as to apply to ourselves — in a sense that may differ from the original argument — and which would include the conclusion of self-applicability inherent in SA, could not exist in a more or less near future. Some answers can be provided by considering an evolution of the concepts of virtual reality that are already being implemented in different fields such as psychiatry, surgery, industry, military training, entertainment, etc. In psychiatry in particular, virtual universes are used to implement techniques related to behavioral therapies, and offer advantages over traditional in vivo scenarios (Powers & Emmelkamp, 2008). In this type of treatment, the patient himself is simulated using an avatar and the universe in which he evolves is also simulated in the most realistic way possible. Convincing results have been obtained in the treatment of some phobias (Choy & al., 2007, Parsons & Rizzo, 2008), as well as post-traumatic stress disorder (Cukor & al., 2009, Baños & al., 2011).

In this context, it is conceivable that developments in this concept of virtual reality could lead to the realization of simulated humans, which would require a high degree of realism. This would require, in particular, the completion of current research, particularly on the simulation of the human brain. It is possible that significant progress may be made in the near future (Moravec, 1998; Kurzweil, 2005; Sandberg and Bostrom, 2008; de Garis et al., 2010). It is also conceivable that we will then have the ability to immerse ourselves in simulated universes by borrowing the personalities of humans thus simulated, while really having, for the duration of the immersion, the impression that this is our real existence10. In addition, the same human simulation could take the form of multiple variations that would correspond to the purpose — therapeutic, scientific, playful, utilitarian, historical, etc. — sought during the immersion. For example, it is conceivable that some variations may only include the important elements of the simulated personality’s life, neglecting uninteresting details. For the purposes of this discussion, we can term this type of simulation immersion simulations. In this context, humans could thus frequently resort to immersion in a simulated anterior human personality. It is also possible that individuals may use simulations of themselves: these could be simulations of themselves at earlier times in their lives, with possible slight variations, however, depending on the purpose sought for the immersion in question. In such circumstances, it is conceivable that very large quantities of this type of simulation could be carried out by computer means. In any case, it appears that the number of simulations at our disposal would be much greater than the number of inhabitants of our planet.
In this context, it appears that SA functions in the same way as the original argument if we reason in relation to a reference class identified with this type of immersion simulations.

At this point, it is worth considering the effect on SA of assimilating the reference class to immersion simulations. In such a context, it appears that the first-level consequence based on the humans/simulations disproportion would apply here, in the same way as in the original argument. Secondly, and this is an important consequence, the second-level conclusion based on self-applicability would now apply, since we can conclude that “we” are also, in this extended sense, simulations. On the other hand, the alarming conclusion of the original argument, which manifests itself at a third level, namely that we are unconscious simulations, would no longer follow, since the fact that we are in this sense simulations does not imply here that we are mistaken about our primary identity. Thus, unlike the original argument, the result is a reassuring conclusion: humans occasionally identify with immersion simulations, while remaining aware that they are using them.

Could we not object here that we have not yet reached the stage where we can identify, even if only temporarily, with such immersion simulations, and that this makes the above developments irrelevant to SA? Strictly speaking, the virtual reality implemented in our time can indeed be considered too coarse in nature to be assimilated to the very realistic simulations hinted at by Bostrom. However, it can be assumed that high-quality immersion simulations, which would give the illusion, at least for the duration of their use, of being a real existence, could eventually be carried out; only such simulations would become relevant for the SA reference class. The hypothesis that such a technological level, based on an explosion of artificial intelligence, could be achieved within a few decades has thus been put forward (Kurzweil, 2005; Eden et al., 2013). If such a technological evolution were to occur within, for example, a few decades, could we not then legitimately consider that such simulations also fall within the reference class of SA? Given this possible temporal proximity, it seems appropriate to take into account the case of immersion simulations and to evaluate their consequences for SA11.

6. The different levels of conclusion according to the chosen reference class

Finally, the preceding discussion emphasizes that if SA is considered in light of its inherent reference class problem, there are actually several levels in the conclusion of SA: (C1) disproportion; (C2) self-applicability; (C3) unconsciousness (the worrying fact that we are fooled, deceived about our primary identity). In fact, the previous discussion shows that (C1) is true regardless of the reference class chosen (by restriction or extension): quasi-humans, quasi-humans+, imperfect simulations and immersion simulations. In addition, (C2) is also true for the original reference class of quasi-humans and for immersion simulations, but is false for the classes of quasi-humans+ and imperfect simulations. Finally, (C3) is true for the original reference class of quasi-humans, but proves to be false for quasi-humans+, imperfect simulations and immersion simulations. These three levels of conclusion are represented in the table below:

level | conclusion | case | quasi-humans | quasi-humans+ | imperfect simulations | immersion simulations
C1 | the proportion of simulated humans will far exceed that of humans (disproportion) | C1A | true | true | true | true
C1 | the proportion of simulated humans will not significantly exceed that of humans | C1Ā | false | false | false | false
C2 | we are most likely simulations (self-applicability) | C2A | true | false | false | true
C2 | we are most likely not simulations | C2Ā | false | true | true | false
C3 | we are simulations unconscious of our simulation nature (unconsciousness) | C3A | true | false | false | false
C3 | we are not simulations unconscious of our simulation nature | C3Ā | false | true | true | true

Figure 1. The different levels of conclusion within SA

as well as in the following tree structure:

Figure 2. Tree of the different levels of conclusion of SA
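The truth values summarized in Figure 1 can also be encoded as a small data structure (a sketch in Python; the dictionary keys simply mirror the class and conclusion names used in the text), which makes the overall pattern easy to verify mechanically:

```python
# Figure 1 encoded as a mapping: for each choice of reference class, the truth
# value of conclusions C1 (disproportion), C2 (self-applicability) and
# C3 (unconsciousness).
conclusions = {
    "quasi-humans":          {"C1": True, "C2": True,  "C3": True},
    "quasi-humans+":         {"C1": True, "C2": False, "C3": False},
    "imperfect simulations": {"C1": True, "C2": False, "C3": False},
    "immersion simulations": {"C1": True, "C2": True,  "C3": False},
}

# The disproportion conclusion C1 holds for every choice of reference class...
assert all(v["C1"] for v in conclusions.values())
# ...but the alarming conclusion C3 holds only for the original class of quasi-humans.
assert [c for c, v in conclusions.items() if v["C3"]] == ["quasi-humans"]
```

This makes visible at a glance the asymmetry discussed below: the worrying conclusion is tied to a single choice of reference class among the four.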

While SA’s original conclusion suggests that there is only one level of conclusion, it turns out, as just pointed out, that there are in fact several levels of conclusion in SA when the argument is examined from a broader perspective, in the light of the reference class problem. The conclusion of the original argument (C3A) is itself worrying and alarming, in that it concludes that there is a much higher probability than we had imagined a priori that we are humans simulated without being aware of it. However, the above analysis shows that, depending on the chosen reference class, conclusions of a very different nature can be inferred from the simulation argument. Thus, a completely different conclusion is associated with the choice of the reference class of the quasi-humans+ or of the imperfect simulations. The resulting conclusion is that we are not such simulations (C2Ā). Finally, another possible conclusion, associated with the choice of the immersion simulation class, is that we may well be part of such a simulation class, but we are aware of it and it is therefore not a cause for concern (C3Ā).

The above analysis finally highlights what is wrong with the original version of SA, at a twofold level. First, the original argument focuses on the class of simulations that are not aware of their own simulation nature. This leads to a succession of conclusions: that there will be a greater proportion of simulated humans than authentic humans (C1A), that we are among the simulated humans (C2A) and finally that we are, more likely than we might have imagined a priori, simulated humans unaware of being so (C3A). However, as mentioned above, the very notion of human simulation is ambiguous, and such a class can in fact be defined in different ways, given that there is no objective criterion in SA for choosing such a class in a non-arbitrary way. We can indeed choose the reference class by identifying the simulations with unconscious simulations, i.e. quasi-human simulations. But the alternative choice of a reference class that identifies itself with simulations that are themselves conscious of being simulations, i.e. quasi-humans+, has equal legitimacy. In the original argument, there is no objective criterion for choosing the reference class in a non-arbitrary way. Thus, favoring, in the original argument, the choice of quasi-humans — with the alarming conclusion associated with them — over quasi-humans+ constitutes a bias, whereas the choice of a reference class that identifies itself with quasi-humans+ leads this time to a reassuring conclusion.

Secondly, it appears that the reference class of SA can be defined at a certain level of restriction or extension. The choice in the original argument of the quasi-humans occurs at a certain level of restriction. But if we now move to a certain level of extension, the reference class comes to include imperfect simulations. And if we place ourselves at an even greater level of extension, simulations include not only imperfect simulations, but also immersion simulations. But depending on whether the class is chosen at a particular level of restriction or extension, a completely different conclusion will follow. Thus, the choice, at a higher level of extension, of imperfect simulations leads to a reassuring conclusion. Similarly, at an even greater level of extension, which this time includes immersion simulations, there also follows a new reassuring conclusion. The above analysis thus shows that in the original version of SA, the choice falls preferentially, by restriction, on the reference class of quasi-humans, to which a worrying conclusion is associated, whereas a choice by extension, also taking into account imperfect simulations or immersion simulations, leads to a reassuring conclusion.

Can we not object, at this stage, that the above analysis leads to a change in the original scenario of SA and that it is no longer the same problem12? To this, it can be replied that the previous analysis is based on variations of SA that preserve the very structure of the original argument. What this analysis shows is that this same structure is likely to produce conclusions of a very different nature, as long as the reference class is varied within reasonable limits that correspond to the context of SA, even though the original SA statement suggests a single type of conclusion. Bostrom himself emphasizes that it is the structure of the argument that constitutes its real core: “The structure of the Simulation Argument does not depend on the nature of the hypothetical beings that would be created by the technologically mature civilizations. If instead of computer simulations they created enormous numbers of brains in vats connected to a suitable virtual reality simulation, the same effect could in principle be achieved.” (Bostrom, 2005). In addition, the different levels of extension used here to highlight variations in the SA reference class are intended to illustrate how different levels of conclusion can result. But if we wish to preserve the very form of the original argument, we can limit the variation of the reference class to what really constitutes the core of this analysis, by considering only a reference class that identifies itself with human simulations. The reference class is then made up of both quasi-humans and quasi-humans+. This is sufficient to generate a reassuring conclusion — which is not taken into account in the original argument — and thus to modify the general conclusion resulting from the argument. In this case, it is the same reference class as the one underlying the original argument, with the only difference that simulations knowing that they are simulated are now part of it, since the latter, whose possible existence is not mentioned in the original argument, nevertheless have an equal claim to legitimacy in the context of SA.

Finally, the preferential choice in the original argument of the quasi-humans class, appears to be an arbitrary choice that no objective criterion justifies, while other choices deserve equal legitimacy. For the SA statement does not contain any objective element allowing the choice of the reference class to be made in a non-arbitrary manner. In this context, the worrying conclusion associated with the original argument also turns out to be an arbitrary conclusion, since there are several other reference classes that have an equal degree of relevance to the argument itself, and from which a quite reassuring conclusion follows.13 14

References

Adleman Leonard « Molecular Computation of Solutions to Combinatorial Problems », Science, vol. 266, 1994, p. 1021-1024.

Adleman Leonard « Computing with DNA », Scientific American, vol. 279(2), 1998, p. 54-61.

Baños R.M., Guillen V. Quero S., García-Palacios A., Alcaniz M., Botella C. «A virtual reality system for the treatment of stress-related disorders», International Journal of Human-Computer Studies, vol. 69, no. 9, 2011, p. 602–613.

Benenson Y., Paz-Elizur T., Adar R., Keinan E., Livneh Z., Shapiro E. «Programmable and autonomous computing machine made of biomolecules», Nature, vol. 414, 2001, p. 430–434.

Bostrom, Nick « Are You Living in a Computer Simulation? », Philosophical Quarterly, vol. 53, 2003, p. 243-55.

Bostrom, Nick « Reply to Weatherson », Philosophical Quarterly, vol. 55, 2005, p. 90-97.

Bostrom, Nick « How long before superintelligence? », Linguistic and Philosophical Investigations, vol. 5, no. 1, 2006, p. 11-30.

Chalmers, David « The Matrix as Metaphysics », in Grau C. (ed.), Philosophers Explore the Matrix, New York, Oxford University Press, 2005.

Choy Yujuan, Fyer A., Lipsitz J., « Treatment of specific phobia in adults », Clinical Psychology Review, vol. 27, no. 3, 2007, p. 266–286.

Cukor Judith, Spitalnick J., Difede J., Rizzo A., Rothbaum B. O., « Emerging treatments for PTSD », Clinical Psychology Review, vol. 29, no. 8, 2009, p. 715–726.

Franceschi, Paul, « A Third Route to the Doomsday Argument », Journal of Philosophical Research, vol. 34, 2009, p. 263-278.

Franceschi, Paul (2014), « Eléments d’un contextualisme dialectique », in J. Dutant, D. Fassio & A. Meylan (eds.), Liber Amicorum Pascal Engel, Genève, Université de Genève, p. 581-608.

De Garis, Hugo, Shuo, C., Goertzel, B., Ruiting, L., « A world survey of artificial brain projects, part i: Large-scale brain simulations », Neurocomputing, vol. 74, no. 1-3, 2010, p. 3-29.

Eckhardt, William, « Probability Theory and the Doomsday Argument », Mind, vol. 102, 1993, p. 483-488.

Eckhardt, William, « A Shooting-Room View of Doomsday », Journal of Philosophy, vol. 94, 1997, p. 244-259.

Eckhardt, William, Paradoxes in probability Theory. Dordrecht, New York, Springer, 2013.

Eden A., Moor J., Søraker J., Steinhart E. (eds.) Singularity Hypotheses: A Scientific and Philosophical Assessment, London, Springer, 2013.

Kurzweil, Ray, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, New York & London, Penguin Books, 2000.

Kurzweil, Ray, The Singularity is Near, New York, Viking Press, 2005.

MacDonald J., Li Y., Sutovic M., Lederman H., Pendri K., Lu W., Andrews B. L., Stefanovic D., Stojanovic M. N. « Medium Scale Integration of Molecular Logic Gates in an Automaton », Nano Letters, 6, 2006, p. 2598–2603.

Moravec, Hans « When will computer hardware match the human brain? », Journal of Evolution and Technology, 1998, vol. 1.

Parsons T.D., Rizzo A. « Affective outcomes of virtual reality exposure therapy for anxiety and specific phobias: A meta-analysis », Journal of Behavior Therapy and Experimental Psychiatry, vol. 39, no. 3, 2008, p. 250–261.

Powers M. B., Emmelkamp P. « Virtual reality exposure therapy for anxiety disorders: A meta-analysis », Journal of Anxiety Disorders, vol. 22, no. 3, 2008, p. 561–569.

Sandberg, Anders and Bostrom, Nick, Whole Brain Emulation: A Roadmap, Technical Report #2008-3, Future of Humanity Institute, Oxford University, 2008.

Walton, Douglas, One-Sided Arguments: A Dialectical Analysis of Bias, Albany, State University of New York Press, 1999.

1 The Liar is thus both true and false. In the sorites paradox, an object with a certain number of grains of sand is both a heap and a non-heap. Similarly, in Goodman’s paradox, an emerald is both green and grue, and therefore both green and blue after a certain date. Finally, in the Sleeping Beauty paradox, the probability that the coin landed heads before Sleeping Beauty’s awakening is 1/2 by virtue of one mode of reasoning, and only 1/3 by virtue of an alternative one.

2 William Eckhardt (2013, p. 15) considers that — in the same way as the Doomsday Argument (Eckhardt 1993, 1997, Franceschi, 2009) — the problem inherent in SA comes from the use of retrocausality and the problem related to the definition of the reference class: “if simulated, are you random among human sims? hominid sims? conscious sims?”.

3 We will leave aside here the question of whether an infinite number of simulated humans should be taken into account. This could be the case if the ultimate level of reality were abstract. In this case, the reference class could include simulated humans who identify themselves, for example, with matrices of very large integers. But Bostrom answers such an objection in his FAQ (www.simulation-argument.com/faq.html) and points out that in this case, the calculations are no longer valid (the denominator is infinite) and the ratio is not defined. We will therefore leave this hypothesis aside, focusing our argument on what constitutes the core of SA, i.e. the case where the number of human simulations is finite.

4 The same would be true if simulations were carried out using quantum computers.

5 I thank an anonymous referee for highlighting this point, as well as the point about computers built from components using DNA properties and molecular biology.

6 It seems difficult to rule out here the case where quasi-humans discover, at least fortuitously, that they are simulated humans, thus becoming quasi-humans+ from that moment on. However, in order to give the paradox its strongest form, we will consider here that the very notion of an indistinguishable simulation is not plagued with contradiction.

7 Bostrom (2003) considers that the fact that we live in a simulation would only moderately affect our daily lives: “Supposing we live in a simulation, what are the implications for us humans? The foregoing remarks notwithstanding, the implications are not all that radical”. However, it may be thought that the effect should be much more profound, given that the fundamental level of reality is not where the simulation subjects believe it to be and that, as a result, many of their beliefs are completely erroneous. As David Chalmers (2005) points out: “The brain is massively deluded, it seems. It has all sorts of false beliefs about the world. It believes that it has a body, but it has no body. It believes that it is walking outside in the sunlight, but in fact it is inside a dark lab. It believes it is one place, when in fact it may be somewhere quite different”.

8 For the purposes of this discussion, we present things as an alternative between quasi-humans and quasi-humans+. However, one could conceive that post-humans – perhaps different post-human civilizations – create both quasi-humans and quasi-humans+. We would then have a tripartite situation involving humans, quasi-humans and quasi-humans+. For the sake of simplicity, we can assimilate here such a situation to the one that prevails when post-humans only create quasi-humans since it is sufficient that the latter are present in very large numbers to create the worrying effect inherent to SA.

9 This type of bias can be analyzed as an instance of the one-sidedness bias (Walton, 1999, p. 76-81; Franceschi, 2014, p. 587-592), where the reference class is that of the simulations and the associated duality is consciousness/unconsciousness.

10 A complete simulation of a human brain is also called an upload. One definition (Sandberg & Bostrom, 2008, p. 7) is as follows: “The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain.”

11 The above also shows that when examining SA carefully, it can be seen that the argument contains a second reference class. This second reference class is that of post-humans. What is a post-human? Should we assimilate this class to civilizations far superior to ours, to those that will evolve in the 25th century or the 43rd century? Should the descendants of our current human race who will live in the 22nd century be counted among the post-humans if they were to make considerable technological progress in the field of simulations? In any case, the definition of the post-human class appears to be closely linked to that of simulations. Because if we are interested, in a broad sense, in immersion simulations, then post-humans can be assimilated to a generation of humans not very far from us. If we consider imperfect simulations, then they should be associated with a more distant time. On the other hand, if we consider, in a more restrictive sense, simulations of humans that are completely indistinguishable from our current humanity, then we should be interested in post-humans from a much more distant era. Thus, the class of post-humans appears to be closely correlated with that of simulations, because the degree of evolution of simulations is related to the level reached by the post-human civilizations that implement them. For this reason, we shall limit the present discussion to the reference class of the simulations.

12 I thank an anonymous referee for raising this objection.

13 The resulting double weakening of SA finally makes it possible to reconcile SA with our pre-theoretical intuitions, because the worrying scenario of the original argument now coexists with several scenarios of a quite reassuring nature.

14 The present analysis is a direct application to the Simulation Argument of the form of dialectical contextualism described in Franceschi (2014).

I thank two anonymous referees for Philosophiques, for very useful comments on an earlier version of this article.

A Solution to the Doomsday Argument

A paper published in French in the Canadian Journal of Philosophy, Volume 28, July 1998, pages 227-46.

This paper presents a solution to the Doomsday Argument (DA). I first show that there is no objective criterion for the choice of a reference class in general: in that case, the calculation inherent in DA cannot take place. Secondly, I consider the particular choice of a given reference class, as Leslie recommends. But the arbitrariness of the selection legitimizes multiple possibilities of choice, either by extension or by restriction: DA can then be established, in particular, for the genus Homo, for the species Homo sapiens, for the subspecies Homo sapiens sapiens, … , for a narrowly defined class corresponding to humans who have never known computers, etc. Finally, it appears that DA “works”, but its conclusion turns out to be harmless.

The Doomsday Argument and Hempel’s Problem

Postprint in English (with additional illustrations from Wikimedia Commons) of a paper published in French in the Canadian Journal of Philosophy Vol. 29, July 1999, pp. 139-56, under the title “Comment l’Urne de Carter et Leslie se Déverse dans celle de Hempel”.
I begin by describing a solution to Hempel’s Problem. I recall, second, the solution to the Doomsday Argument described in my previous paper “Une Solution pour l’Argument de l’Apocalypse” (Canadian Journal of Philosophy 1998-2) and remark that both solutions are based on a similar line of reasoning. I show, third, that the Doomsday Argument can be reduced to the core of Hempel’s Problem.


This paper is cited in:

Koji Sawa, Junki Yokokawa and Tatsuji Takahashi (2013) Logical Equivalence: Symmetric and Asymmetric Features, Symmetry: Culture and Science, Vol. 24, No. x.

Milan M. Cirkovic, A Resource Letter on Physical eschatology, Am.J.Phys. 71 (2003) 122-133

Nick Bostrom, Anthropic Bias: Observation Selection Effects in Science and Philosophy, Routledge (2002)

Alasdair Richmond, The Doomsday Argument, Philosophical Books Vol. 47 No. 2 April 2006, pp. 129–142


The Doomsday Argument and Hempel’s Problem

Postprint (with additional illustrations from Wikimedia Commons) of a paper originally published in French in the Canadian Journal of Philosophy under the title « Comment l’urne de Carter et Leslie se déverse dans celle de Hempel », vol. 29, March 1999, pages 139-156.

Paul Franceschi

I Hempel’s Problem

Hempel’s Problem (hereafter, HP) is based on the fact that the two following assertions:

(H) All ravens are black

(H’) Everything that is non-black is a non-raven

are logically equivalent. The logical structure of (H) is:

(H1) All X are Y

that is to say x (Xx  Yx), whereas that of (H’) has the form:

(H1′) All non-Y are non-X

Carl Hempel

that is to say x (~Yx  ~Xx). In fact, the structure of the contrapositive form (H1′) is clearly equivalent to that of (H1). It follows that the discovery of a black raven confirms (H) and also (H’), but also that the discovery of a non-black thing which is not a raven such as a pink flame or even a grey umbrella, confirms (H’) and thus (H). This last conclusion appears paradoxical. The propositions (H1) and (H1′) are based on four properties X, ~X, Y and ~Y, respectively corresponding to raven, non-raven, black, and non-black in the original version of HP. These four properties determine four categories of objects: XY, X~Y, ~XY and ~X~Y, which correspond respectively to black ravens, non-black ravens, black non-ravens and non-black non-ravens. One can observe here that a raven is defined with precision in the taxonomy within which it fits. A category as that of the ravens can be regarded as well defined, because it is based on a set of precise criteria defining unambiguously the species corvus corax and allowing the identification of its instances. It also appears that one can build without difficulty a version of HP where a variation with regard to the X class is operated. If one replace the X class with that of the tulips or that of the dolphins, etc. by adapting correlatively the Y property, one still obtains a valid version of HP. It appears thus that changes can be operated at the level of the X class without loosing the problem inherent to HP.

Corvus corax

Similarly, the black property can be specified with precision, on the basis of a taxonomy of colours established with regard to the wavelengths of light.1 Moreover, one can consider variations with regard to the Y property. One will thus be able to choose properties such as whose length is smaller than 50 cm, living less than 10 years, etc. Such variations also lead to acceptable versions of HP. Lastly, it should be noted that the non-black property can be the subject of a definition which does not suffer from ambiguity, in particular with the help of the precise taxonomy of colours which has just been mentioned. Similarly, if one takes into account variations of the Y property such as smaller than 40 cm, or whose diameter is larger than 25 cm, etc., one arrives at definitions of the non-Y property which, just as non-black, are established with precision and lead in addition to versions of HP presenting the same problem as the original version. Thus, the X class, just as the properties Y and non-Y, can be the subject of a precise and unambiguous definition. Moreover, variations operated at the level of these classes lead to acceptable versions of HP. In contrast, the situation is not the same for the non-X class.

II The reference class Z

The concept of non-raven present in the original version of HP highlights an important problem. What constitutes an instance of a non-raven? Intuitively a blue jay, a pink flame, a grey umbrella and even a natural integer constitute non-ravens. One is thus confronted with the definition of a new reference class – call it Z – including X and non-X. The Z class allows defining complementarily the class of non-X, and in the original version of Hempel, the class of non-ravens. Thus Z is the implicit reference class with regard to which the definition of the X class allows that of non-X. Must one then consider a Z class that goes as far as including abstract objects? Is it necessary to consider a concept of non-raven including abstract entities such as natural integers and complex numbers? Or is it necessary to limit oneself to a Z class which only embraces concrete things? Such a discussion has its importance, because there are infinitely many abstract objects, whereas there are only finitely many individualised concrete objects. This fact is likely to bear importantly, later on, on the possible application of Bayesian reasoning. One could thus have a reference class Z including at the same time abstract objects (natural integers, real and complex numbers, etc.) and concrete objects such as artefacts, but also natural entities such as humans, animals, plants, meteorites, stars, etc. Such a reference class is defined very extensively. And the consequence of such a choice is that the discovery of any object confirms (H’) and thus (H). At this stage, anything2 confirms (H). It should be noted that one can also have a definition of Z including all the concrete objects just mentioned, but excluding this time the abstract objects.

Larus audouinii

The instances of this class are now finitely denumerable, as is the cardinal of the corresponding set: the reference class Z then includes animals, plants, stars, etc. But alternatively, one could still consider a Z class associating the ravens (corvus corax) and the Audouin’s gulls3 (larus audouinii). In this case, the instances of the X class (corvus corax) outnumber those of the non-X class (larus audouinii). And we still face the corresponding version of HP.4

Lastly, nothing seems to prohibit choosing, at a very restrictive level, a Z class made up of the X class augmented with one single element such as a red tulip. With this definition of Z, we still face a minimal version of HP. Of course, any object added to the class of X and constituting the non-X class will be appropriate and will then confirm at the same time (H’) and (H). Thus, any object ~X~Y will lead to confirming (H). The remarks which have just been made call, however, for an immediate objection. To various degrees, it is permissible to think that the choice of each reference class Z that has just been mentioned is arbitrary. For one may reject on those grounds extreme definitions of Z such as the one defined above, which includes all abstract objects. Similarly, a Z class including the natural integers or the complex numbers can also be eliminated. The X class is defined with regard to the concrete objects that are the ravens, and there is no particular reason to choose a Z class including abstract entities.

A red tulip

Similarly, one may reject a definition of Z based on a purely artificial restriction, simply associating with X a determinate object such as a red tulip. For I can arbitrarily choose the object that constitutes the complement of X, i.e. I can define Z as I wish. Such an extreme conception appears unrelated to the initial definition of X. A Z class thus defined is not homogeneous. And there is no justification for legitimating the association of a red tulip with the class of the ravens to build that of Z. The association within a same Z class of the ravens and the Audouin’s gulls appears, analogously, as an illegitimate choice. Why not, then, the association of the ravens and the goldfinches? Such associations are symptomatic of a purely artificial selection. Thus, the choices of reference classes Z mentioned above reveal an arbitrary and artificial nature. Indeed, shouldn’t one do one’s best to find a Z class that is as natural and homogeneous as possible, taking into account the given definition of X? One can think that one must attempt to determine the Z class in the most objective way possible. In the original version of HP, doesn’t the choice of the ravens for the X class implicitly determine a Z class directly connected with that of the ravens? A Z class naturally including that of the ravens, such as that of the corvidae, or that of the birds, seems a good candidate. For such a class is at least implicitly determined by the contents of the X class. But before analysing versions of HP built accordingly, it is worth considering first some nonparadoxical versions of HP.

III The analogy with the urn

It is commonly acknowledged that certain versions5 of HP are not paradoxical. Such is in particular the case if one considers a reference class Z associated with boxes, or with a set of playing cards. One can also consider a version of HP associated with an urn. An X class is thus considered whose objects are finitely denumerable and which only includes balls and tetrahedrons. The Y class itself is reduced to two colours: red and green. One has thus four types of objects: red balls, green balls, red tetrahedrons and green tetrahedrons. In this context, we have the following version of HP:

(H2) All balls are red

(H2′) All non-red objects are non-balls

It appears here that the case of red tetrahedrons can be ignored. Indeed, their role is indifferent and one can thus ignore their presence in the urn. They can be regarded as parasitic objects, whose possible presence in the urn is of no importance. One is thus brought to consider an urn containing the significant objects consisting of red balls, green balls and green tetrahedrons. And the fact that non-red objects can only be green, and that non-balls can only be tetrahedrons, leads to considering equivalently:

(H3) All balls are red

(H3′) All green objects are tetrahedrons

This clearly constitutes a nonparadoxical version of HP: the draw of a red ball confirms (H3) and (H3′), whereas the draw of a green tetrahedron confirms (H3′) and (H3).

Consider now the case where the urn contains six significant objects.6 One has just drawn three red balls and one green tetrahedron (the draw is 3-0-17) and one then makes the hypothesis (H3). At this stage, the probability that all balls are red corresponds to three compositions (3-0-3, 4-0-2 and 5-0-1) among six possible ones (3-0-3, 3-1-2, 3-2-1, 4-0-2, 4-1-1, 5-0-1). Similarly, the probability that all green objects are tetrahedrons is identical. Thus, P(H3) = P(H3′) = 1/2, and likewise P(~H3) = P(~H3′) = 1/2. These initial probabilities being stated, consider now the case where one has just carried out a new draw in the urn. Another red ball is drawn (the draw is 4-0-1). This corresponds to three possible compositions of the urn (4-0-2, 4-1-1, 5-0-1). Let E be the event consisting in the draw of a red ball from the urn. The probability of drawing a red ball if all the balls of the urn are red, i.e. P(E, H3), is then such that P(E, H3) = 2/3, since two compositions (4-0-2, 5-0-1) correspond to the fact that all balls are red. In the same way, P(E, ~H3) = 1/3. The situation is identical if one considers P(E, H3′) and P(E, ~H3′). One is then in a position to calculate the posterior probability that all balls are red using Bayes’ formula: P'(H3) = [P(H3) × P(E, H3)] / [P(H3) × P(E, H3) + P(~H3) × P(E, ~H3)] = (0.5 × 2/3) / (0.5 × 2/3 + 0.5 × 1/3) = 2/3. And P'(~H3) = 1/3. The results are identical for P'(H3′) and P'(~H3′). Thus, P'(H3) > P(H3) and P'(H3′) > P(H3′), so that the hypothesis (H3), just as the equivalent hypothesis (H3′), is confirmed by the draw of a new red ball.
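
As a check, the computation above can be replayed with exact fractions; the composition list and the event are those of the text, while the helper names are mine:

```python
from fractions import Fraction

# The six possible compositions (red balls, green balls, green tetrahedrons)
# of an urn of six significant objects, once three red balls and one green
# tetrahedron have been drawn.
compositions = [(3, 0, 3), (3, 1, 2), (3, 2, 1), (4, 0, 2), (4, 1, 1), (5, 0, 1)]

def h3(c):
    return c[1] == 0   # (H3) "All balls are red": the urn holds no green ball

def event_red(c):
    return c[0] >= 4   # E: a fourth red ball is drawn

n_h3 = sum(h3(c) for c in compositions)
prior = Fraction(n_h3, len(compositions))                                  # 1/2
p_e_h3 = Fraction(sum(event_red(c) for c in compositions if h3(c)), n_h3)  # 2/3
p_e_not_h3 = Fraction(sum(event_red(c) for c in compositions if not h3(c)),
                      len(compositions) - n_h3)                            # 1/3

# Bayes' formula, exactly as applied in the text.
posterior = prior * p_e_h3 / (prior * p_e_h3 + (1 - prior) * p_e_not_h3)
print(prior, posterior)  # 1/2 2/3
```

Replacing `event_red` with a test on green tetrahedrons reproduces, in the same way, the calculation for the event F discussed below.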

Let us examine finally the situation where, instead of a red ball, one draws a green tetrahedron from the urn (the draw is 3-0-2). Let thus F be the event consisting in the draw of a green tetrahedron. In this case, we have three possible compositions (3-0-3, 3-1-2, 4-0-2). But among these, two (3-0-3, 4-0-2) correspond to a situation where the hypotheses (H3) and (H3′) are confirmed. Thus, P(F, H3) = P(F, H3′) = 2/3 and P(F, ~H3) = P(F, ~H3′) = 1/3. A Bayesian calculation provides the same results as under the preceding hypothesis of the draw of a red ball. Thus, under the hypothesis of the draw of a green tetrahedron, one calculates the posterior probabilities P'(H3) = P'(H3′) = 2/3 and P'(~H3) = P'(~H3′) = 1/3. Thus, the draw of a green tetrahedron confirms (H3′) and (H3) at the same time. It should be noted that one can easily build versions of HP that allow the preceding reasoning to be established nonparadoxically. Consider a cubic mineral block of 1 m on a side. Such an object of 1 m³ is divided into 1000 cubic blocks of 1 dm³, consisting either of quartz or of white feldspar. One examines fifty of these blocks and notes that several of them consist of white feldspar of gemmeous quality. One is led to the hypothesis that all blocks of white feldspar are of gemmeous quality. We have then the following version of HP:

(H4) All blocks of white feldspar are of gemmeous quality

(H4′) All blocks of non-gemmeous quality are not white feldspar

that is equivalent to:

(H5) All blocks of white feldspar are of gemmeous quality

(H5′) All blocks of non-gemmeous quality are quartz

where we have indeed the equivalence between (H5) and (H5′), and where a correct Bayesian reasoning can be established. Such an example (call it the mineral urn) can also be transposed to other properties X and Y, provided that the same conditions are preserved.

IV A solution to the problem

Taking into account the above developments,8 one must attempt to exhibit a definition of the Z class that is not arbitrary and artificial, but proves on the contrary as natural and as homogeneous as possible with regard to the given definition of X. Consider accordingly the following9 version of HP:

(H6) All Corsican-Sardinian goshawks have a wingspan smaller than 3.50 m

(H6′) All birds having a wingspan larger than 3.50 m are not Corsican-Sardinian goshawks

In this particular version of (H’), the X class is that of the Corsican-Sardinian goshawks,10 and the reference class Z is that of the birds. This latter class presents an obvious relationship with that of the Corsican-Sardinian goshawks. It is reasonable to think that such a way of defining Z with regard to X is a natural one: such a definition is not as obviously arbitrary as was the case with the examples of Z classes mentioned above. Of course, one can observe that it is possible to choose, in a more restricted but just as natural way, a Z class corresponding to the accipiter genus. Such a class is homogeneous. It includes in particular the species accipiter gentilis (northern goshawk) but also accipiter nisus (European sparrowhawk), accipiter novaehollandiae (grey goshawk), and accipiter melanoleucus (black and white goshawk).

Accipiter gentilis

However, alternatively and from the same viewpoint, one could also extend the Z class to the instances of the wider family of the accipitridae,11 including at the same time the accipiter genus just mentioned, but also the genera milvus (kite), buteo (buzzard), aquila (eagle), etc. Such a class includes in particular the species milvus migrans (black kite), milvus milvus (red kite), buteo buteo (common buzzard), aquila chrysaetos (golden eagle), etc. These various acceptable definitions of the Z class find their justification in the taxonomy within which the Corsican-Sardinian goshawk is situated. More systematically, the latter belongs to the subspecies accipiter gentilis arrigonii, to the species accipiter gentilis, to the accipiter genus, to the family of the accipitridae, to the order of the falconiformes, to the class of the birds, to the subphylum of the vertebrates, to the phylum of the chordates,12 to the animal kingdom, etc. It follows that the following variations of (H’) are acceptable, in the context which has just been defined:

(H7′) All northern goshawks having a wingspan larger than 3.50 m are not Corsican-Sardinian goshawks

(H8′) All goshawks having a wingspan larger than 3.50 m are not Corsican-Sardinian goshawks

(H9′) All accipitridae having a wingspan larger than 3.50 m are not Corsican-Sardinian goshawks

(H10′) All falconiformes having a wingspan larger than 3.50 m are not Corsican-Sardinian goshawks

(H11′) All birds having a wingspan larger than 3.50 m are not Corsican-Sardinian goshawks

(H12′) All vertebrates having a wingspan larger than 3.50 m are not Corsican-Sardinian goshawks

(H13′) All chordates having a wingspan larger than 3.50 m are not Corsican-Sardinian goshawks

(H14′) All animals having a wingspan larger than 3.50 m are not Corsican-Sardinian goshawks

There are thus several versions of (H’), corresponding to variations of the Z class, which are themselves made possible by the fact that the Corsican-Sardinian goshawk belongs to n categories determined by the taxonomy to which it belongs. And in fact, when I meet a northern goshawk belonging to the nominal form (accipiter gentilis gentilis), it is at the same time a northern goshawk (accipiter gentilis) non-Corsican-Sardinian (non-accipiter gentilis arrigonii), a goshawk (accipiter) non-Corsican-Sardinian goshawk, an accipitridae non-Corsican-Sardinian goshawk, a falconiformes non-Corsican-Sardinian goshawk, a bird (aves) non-Corsican-Sardinian goshawk, but also a vertebrate non-Corsican-Sardinian goshawk, a chordate non-Corsican-Sardinian goshawk, and an animal non-Corsican-Sardinian goshawk. Thus, the instance of accipiter gentilis gentilis that I have just observed belongs at the same time to all these categories. And when I meet a grey whale, it is not a bird non-Corsican-Sardinian goshawk, but it is indeed a vertebrate non-Corsican-Sardinian goshawk, as well as a chordate non-Corsican-Sardinian goshawk, and also an animal non-Corsican-Sardinian goshawk.

In general, a given object x that has just been discovered belongs to n levels of the taxonomy within which it fits. It belongs thus to a subspecies,13 a species, a sub-genus, a genus, a super-genus, a subfamily, a family, a super-family, a subphylum, a phylum, a kingdom… One can assign the subspecies to level14 1 of the taxonomy, the species to level 2, …, the super-family to level 8, etc. And if, within (H), the class X is at a level p, it is clear that Z must be placed at a level q such that q > p. But how is one to fix Z at a level q that is not arbitrary? For the reference class Z corresponds to a level of integration. But where must one stop? Must one attach Z to the level of the species, the sub-genus, the genus…, the kingdom? One has no objective criterion allowing the choice of a level q among the possibilities that are offered. I can choose q close to p by proceeding by restriction; but I am just as entitled to choose q distant from p, by applying a principle of extension. Why then choose such a reference class restrictively defined rather than such another extensively defined? One actually has no criterion to legitimate the choice, whether one proceeds by restriction or by extension, of the Z class. Consequently, it appears that the latter can only be defined arbitrarily, and it follows clearly that the determination of the Z class, and thus of the non-X class, is arbitrary. But the choice of the reference class Z appears fundamental. For according to whether I choose such or such a reference class Z, a given object x will or will not confirm (H). For any object x, I can build a Z class such that x belongs to non-X, just as I can choose a Z class such that x does not belong to non-X. This choice is thus left to my own arbitrary decision.
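
The arbitrariness just described can be made concrete in a small sketch. The memberships and helper below are illustrative only, not part of the text's apparatus; they merely show that whether the grey whale counts as a non-X instance depends entirely on the level at which Z is fixed:

```python
# Illustrative taxonomic memberships (simplified; data and helper names are mine).
grey_whale = {"class": "mammalia", "subphylum": "vertebrata",
              "phylum": "chordata", "kingdom": "animalia"}

def member_of_non_X(obj, z_level, z_value, is_corsican_sardinian_goshawk=False):
    # An object counts as non-X only if it first belongs to the chosen Z class.
    return obj.get(z_level) == z_value and not is_corsican_sardinian_goshawk

# Z fixed at the level of the birds: the grey whale lies outside Z,
# hence it is not a non-X instance and confirms nothing.
print(member_of_non_X(grey_whale, "class", "aves"))            # False
# Z fixed at the level of the vertebrates: the very same whale now
# belongs to non-X, and so becomes a confirming instance of (H).
print(member_of_non_X(grey_whale, "subphylum", "vertebrata"))  # True
```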

For a given object x, I can build a Z class such that this object confirms (H), and another Z class such that this object does not confirm (H). Of course, if Z is selected arbitrarily, the Bayesian reasoning inherent in HP “works”, but it corresponds to an arbitrary and artificial point of view: having found an object x, (H) is confirmed. But one can just as well choose, in a manner equally artificial but more restrictive, a Z class from which x is missing and where x does not confirm (H). Thus, one is not entitled to conclude objectively that the discovery of the object x confirms (H). For to reason thus would amount to conferring a universal and general value on a viewpoint that is only the expression of an arbitrary choice.

How can this result be reconciled with the facts mentioned above,15 concerning the existence of nonparadoxical versions of HP? It is worth noting here that the Bayesian reasoning can be established in each case where the Z class is finite, and where this fact is known before the experiment.16 One can then exhibit a Bayesian shift. But at this stage, it is worth distinguishing the cases where the Z class is determined before the experiment by an objective criterion from those where it is not. In the first case, the contents of the Z class are given before the experiment, and the Z class is thus not selected arbitrarily, but according to an objective criterion. Consequently, the Bayesian reasoning is correct and provides relevant information. Such is in particular the case when one considers a version of HP applied to an urn, or a version such as the mineral urn. Under this last hypothesis, the composition of the Z class is fixed in advance. There is then a significant difference from Nicod's criterion:17 an object ~X~Y confirms (H) and an object XY confirms (H’).

Conversely, when the Z class is not fixed and is not determined before the experiment by an objective criterion, one can subjectively choose Z at any level of extension or restriction, but the conclusions resulting from the Bayesian reasoning must be regarded as purely arbitrary and thus have no objective value. For one then has no basis or justification for choosing such or such a level of restriction or extension. Thus, in this case, Nicod's criterion, according to which any object ~X~Y is neutral with respect to (H) and any object XY is neutral with respect to (H’), applies. It should be observed that the present solution has the effect of preserving the equivalence of a proposition and its contrapositive. Similarly, the principle of the confirmation of a generalisation by each of its instances is also preserved.

V A common solution to Hempel’s Problem and the Doomsday Argument

The Doomsday Argument (hereafter, DA), attributed to Brandon Carter, has been described by John Leslie (1992).18 DA can be described as follows. Consider an event A: the final extinction of the human race will occur before the year 2150. One can estimate at 1 chance in 100 the probability that this extinction occurs: P(A) = 0.01. Let also ~A be the event: the final extinction of the human race will not occur before 2150. Consider also the event E: I live during the 1990s. In addition, one can estimate today at 50 billion the number of humans having existed since the birth of humanity: let H1997 be this number, with H1997 = 5×10¹⁰. In the same way, the current population can be evaluated at 5 billion: P1997 = 5×10⁹. One thus calculates that one human in ten, if event A occurs, will have known the 1990s. The probability that humanity is extinct before 2150 if I have known the 1990s is thus evaluated: P(E, A) = 5×10⁹/5×10¹⁰ = 0.1. On the other hand, if the human race gets past the year 2150, one can think that it will be destined for a much more significant expansion, and that the number of humans could rise, for example, to 5×10¹². In this case, the probability that the human race is not extinct after 2150 if I have known the 1990s can be evaluated as follows: P(E, ~A) = 5×10⁹/5×10¹² = 0.001. This now makes it possible to calculate the posterior probability of the human race's extinction before 2150, using Bayes' formula: P'(A) = [P(A) × P(E, A)] / [P(A) × P(E, A) + P(~A) × P(E, ~A)] = (0.01 × 0.1) / (0.01 × 0.1 + 0.99 × 0.001) ≈ 0.5025. Thus, taking into account the fact that I am alive now has made the probability of the human race's extinction before 2150 shift from 0.01 to approximately 0.5025.
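
The figures of the argument can be replayed directly; this is only a check of the arithmetic above, with variable names of my own choosing:

```python
# The Doomsday Argument computation, with the figures given in the text.
p_a = 0.01               # prior probability of extinction before 2150
alive_1990s = 5e9        # humans who knew the 1990s
h_1997 = 5e10            # humans ever, on the hypothesis of doom before 2150
total_if_no_doom = 5e12  # humans ever, if the race gets past 2150

p_e_a = alive_1990s / h_1997                # P(E, A)  = 0.1
p_e_not_a = alive_1990s / total_if_no_doom  # P(E, ~A) = 0.001

# Bayes' formula, as applied in the text.
posterior = p_a * p_e_a / (p_a * p_e_a + (1 - p_a) * p_e_not_a)
print(round(posterior, 4))  # 0.5025
```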

Homo erectus’ skull (photo by Ryan Somma)

I have presented in my paper ‘Une Solution pour l’Argument de l’Apocalypse’19 a solution to DA, whose main lines can be described as follows. The DA reasoning is based on a single reference class, that of the humans.20 But how is this reference class to be defined? Should it be limited to the representatives of our current subspecies Homo sapiens sapiens alone? Or should one extend it to all the representatives of the species Homo sapiens, including this time, in addition to Homo sapiens sapiens, Homo sapiens neandertalensis…? Or is it necessary to include in the reference class the entire Homo genus, including then all the successive representatives of Homo erectus, Homo habilis, Homo sapiens, etc.? And mustn't one even go so far as to envisage a wider class, including all the representatives of a super-genus S, made up not only of the Homo genus, but also of the new genera Surhomo, Hyperhomo, etc., which will result from the foreseeable evolution of our current species? It appears thus that one can consider a reduced reference class by proceeding by restriction, or apprehend a larger class by choosing a reference class by extension. One can thus operate, for the choice of the reference class, by applying either a principle of restriction or a principle of extension. And according to whether one applies the one or the other principle, various levels of choice are possible in each case.

But it appears that one has no objective criterion making it possible to legitimate the choice of one or another reference class. Even our current subspecies Homo sapiens sapiens cannot be regarded as a natural and adequate choice for the reference class. For isn't it reasonable to think that our paradigmatic concept of the human will have to evolve? And doesn't excluding from the reference class a subspecies such as Homo sapiens neandertalensis, or the future evolutions of our species, reveal an anthropocentric viewpoint? Since one has no objective selection criterion, one can choose arbitrarily one or another of the classes just described. One can, for example, identify the reference class with the species Homo sapiens, and observe a Bayesian shift: there is then an increase in the posterior probability of the extinction of Homo sapiens. But this Bayesian shift holds just as well for a more restricted reference class, such as our subspecies Homo sapiens sapiens. There too, the application of Bayes' formula leads to an appreciable increase in the posterior probability of the imminent end of Homo sapiens sapiens. Yet identically, the Bayesian shift also applies to a still more reduced reference class, namely that of the representatives of Homo sapiens sapiens not having known the computer. Such a reference class will certainly face imminent extinction. Yet such a conclusion is not likely to frighten us, because the evolutionary potential of our species is such that the succession of new species to those that preceded them constitutes one of the characteristics of our mode of evolution.
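
One can sketch the point that the shift recurs for every choice of reference class. The function below applies the same Bayes' formula as in the text; apart from the first row, the class sizes are hypothetical numbers invented purely for illustration:

```python
# Illustrative only: the Bayesian shift occurs whenever P(E, A) > P(E, ~A),
# whatever reference class is chosen. Variable and class names are mine.
def doomsday_posterior(prior, alive_now, total_if_doom, total_if_no_doom):
    """Bayes' formula applied as in the text, for one reference class."""
    p_e_a = alive_now / total_if_doom
    p_e_not_a = alive_now / total_if_no_doom
    return prior * p_e_a / (prior * p_e_a + (1 - prior) * p_e_not_a)

# (label, members alive now, total members if doom, total if no doom);
# only the first row uses the text's figures, the rest are hypothetical.
classes = [
    ("human race (text's figures)", 5e9, 5e10, 5e12),
    ("Homo sapiens sapiens",        4e9, 4.5e10, 4e12),
    ("H. s. s. pre-computer",       1e9, 4.9e10, 4e12),
]
for label, alive, doom, no_doom in classes:
    post = doomsday_posterior(0.01, alive, doom, no_doom)
    print(label, post > 0.01)  # the shift above the prior occurs in every case
```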

It should be mentioned that this solution leads here to accepting the conclusion (the Bayesian shift) of Carter and Leslie for a given reference class, while placing it alongside conclusions of comparable nature relating to other reference classes, which are entirely harmless. Taking into account various levels of restriction, made legitimate by the lack of an objective criterion of choice, finally leads to the harmlessness of the argument. Thus, it appears that the argument based on the reference class and its arbitrary choice by restriction or extension constitutes a common solution to HP and DA. HP and DA are ultimately underlain by the same problem, inherent in the definition of the Z class of HP and the single reference class of DA. One thus has a solution of comparable nature for the two paradoxes. It is worth concluding here by presenting an element that tends to confirm the common source of the two problems. One will observe first that one cannot exhibit a version of DA corresponding genuinely to the original version of HP, since a reference class such as that of the ravens is not transposable into DA. The argument inherent in DA is indeed based on the use of the anthropic principle, and obviously requires a reference class made up of intelligent beings. When Leslie21 considers the extension of the reference class, he specifies expressly that the condition for membership of the reference class is the aptitude to produce an anthropic reasoning. On the other hand, it is possible to describe a version of HP made up from the elements of DA. If one takes X for our current subspecies Homo sapiens sapiens and Y for being alive only before 2150, one obtains the following version of HP:

(H15) All Homo sapiens sapiens will be alive only before the year 2150

(H15′) All those which will live after 2150 will be non-Homo sapiens sapiens

In this context, a human being alive in 1997 constitutes an instance confirming (H15). In parallel, the discovery of a Homo sapiens sapiens after 2150 refutes (H15). Lastly, the discovery of a living non-Homo sapiens sapiens after 2150 constitutes a confirmation of (H15′) and thus of (H15). Given this particular formulation, it is clear that one currently observes only instances confirming (H15). On the other hand, after 2150, there may be instances refuting (H15) or instances confirming (H15′).

It is worth noting here that (H15) cannot genuinely serve as the support of a version of DA. Indeed, the reference class identifies itself here precisely with Homo sapiens sapiens, whereas in the original version of DA, the reference class consists in the human race. Consequently, there is not, strictly speaking, an identity between the event underlain by (H15) and A, so that (H15)-(H15′) does not constitute a joint version22 of DA and HP.

But this version of HP being made up from the elements of DA, one must be able, at this stage, to verify the common origin of the two problems, by showing how the argument raised in defence of DA with regard to the reference class can also be used in support of HP. One knows the response made by Leslie to the objection that the reference class for DA is ambiguous or, owing to the evolutions of Homo sapiens sapiens, leads to a heterogeneous reference class of composite nature. It is set out in the response made to Eckhardt:

How far should the reference class extend? (…) One can place the boundary more or less where one pleases, provided that one adjusts one’s prior probability accordingly. Exclude, if you really want to, all future beings with intelligence quotients above five thousand, calling them demi-gods and not humans23.

and developed in The End of the World24:

The moral could seem to be that one’s reference class might be made more or less what one liked. (…) What if we wanted to count our much-modified descendants, perhaps with three arms or with godlike intelligence, as ‘genuinely human’? There would be nothing wrong in this. Yet if we were instead interested in the future only of two-armed humans, or of humans with intelligence much like that of humans today, then there would be nothing wrong in refusing to count any others25.

For Leslie, one may go so far as to include in the reference class the descendants of humanity grown very distant from our current species through evolution. But Leslie also liberally accepts limiting the reference class to only those individuals close to our current humanity. One is thus free to choose the reference class one wishes, operating either by extension or by restriction; it suffices in each case to adjust the prior probability accordingly. It appears here that this type of answer can be transposed, literally, to an objection to HP of comparable nature, based on the reference class of (H15)-(H15′). One can fix, so the reply would go, the Z class as one wishes, and assign to “all those” the desired content. One can, for example, limit Z to the species Homo sapiens, or associate it with the whole of the Homo genus, including then the evolutions of our species such as Homo spatialis, Homo computeris, etc. What matters, this defender could continue, is to determine the reference class beforehand and to retain this definition when the various instances are subsequently encountered. Thus, it proves that the arguments advanced in support of the reference class of DA can be transposed in defence of HP. This constitutes an additional element pointing toward the common origin of the two problems, which depend on the definition of a reference class. DA and HP consequently call for the same type of answer. Thus, the urn of Carter and Leslie flows into that of Hempel.26

References

ECKHARDT, W. 1993. “Probability Theory and the Doomsday Argument.” Mind, 102 (1993): 483-8
FRANCESCHI, P. 1998, “Une Solution pour l’Argument de l’Apocalypse.” Canadian Journal of Philosophy, 28 (1998): 227-46
GOODMAN, N. 1955. Fact, Fiction and Forecast. Cambridge: Harvard University Press.
HEMPEL, C. 1945. “Studies in the logic of confirmation.” Mind, 54 (1945): 1-26 and 97-121
LESLIE, J. 1992. “Time and the Anthropic Principle.” Mind, 101 (1992): 521-40
—. 1993. “Doom and probabilities.” Mind, 102 (1993): 489-91
—. 1996. The End of the World: the science and ethics of human extinction. London and New York: Routledge.
PAPINEAU, D. 1995. “Methodology: the Elements of the Philosophy of Science.” In Philosophy A Guide Through the Subject, ed. A.C. Grayling. Oxford: Oxford University Press.
SAINSBURY, M. 1988. Paradoxes. New York: Cambridge University Press.
THIBAULT, J-C. 1983. Les oiseaux de Corse. Paris: De Gerfau.

1 It is known that a monochromatic light, of a single wavelength, is found practically only in the laboratory. But natural colours can be modelled in terms of the subtraction of lights of certain wavelengths from the white light of the Sun.

2 Any object ~X~Y in the Z class thus extensively defined.

3 The total population of Audouin’s gulls is estimated at approximately 3000 pairs (cf. Thibault 1983, 132).

4 This incidentally makes it possible to verify that HP does not find its origin in a disproportion of the X class compared with that of the non-X. The fact that the instances of the X class are more numerous than those of the non-X does not prevent the emergence of a version of HP.

5 Properly speaking, these are thus not versions of HP, since they are nonparadoxical. But the corresponding propositions have the logical structure of (H) and (H’).

6 The red tetrahedrons possibly found in the urn are regarded as nonsignificant objects.

7 With the notation n-p-q (red balls – green balls – green tetrahedrons).

8 Cf. § II.

9 This particular version of HP is chosen here because it is based on an X class corresponding to the subspecies accipiter gentilis arrigonii, whereas the original version of HP is grounded on the species corvus corax. The choice of a subspecies for the X class simply allows an additional level of integration.

10 The Corsican-Sardinian goshawks (accipiter gentilis arrigonii) constitute a subspecies of the northern goshawk, specific to Corsica and Sardinia. This endemic subspecies differs from the nominal form of the northern goshawk by the following characteristics (cf. Thibault 1983): the colouring of the head is blackish instead of brown blackish; the back is brown; the lower part is darker.

11 Ornithologists also distinguish the accipitriformes, comprising all accipitridae together with the pandionidae, such as pandion haliaetus (osprey), etc.

12 The phylum of the chordates includes all vertebrates and some invertebrates, which have the property of possessing a dorsal chord (notochord), at least at some period of their life.

13 It is possible to consider, alternatively, a taxonomy other than our current scientific taxonomy. That does not affect the present reasoning, since the conclusions are identical as long as the principles of classification are respected.

14 It is obviously possible to take into account finer taxonomies including additional subdivisions below the various subspecies. Obviously, that does not affect the present line of reasoning.

15 Cf. § III.

16 As we have seen, the Bayesian reasoning cannot take place when one considers a Z class including infinite sets such as the natural numbers, the real numbers, etc.

17 Nicod’s criterion is defined as follows (Hempel 1945, 11), with S1 = (H) and S2 = (H’): ‘(…) let a, b, c, d be four objects such that a is a raven and black, b a raven but not black, c not a raven but black and d neither a raven nor black. Then, according to Nicod’s criterion, a would confirm S1, but be neutral with respect to S2; b would disconfirm both S1 and S2; c would be neutral with respect to both S1 and S2, and d would confirm S2, but be neutral with respect to S1.’

18 John Leslie, ‘Time and the Anthropic Principle.’ Mind, 101 (1992): 521-40.

19 Canadian Journal of Philosophy 28 (1998) 227-46.

20 Leslie uses the term ‘human race’.

21 ‘How much widening of the reference class is appropriate when we look towards the future? There are strong grounds for widening it to include our evolutionarily much-altered descendants, three-armed or otherwise, as ‘humans’ for doomsday argument purposes – granted, that’s to say, that their intelligence would remain well above the chimpanzee level.’ (1996, 262)

22 I.e. a version comprising the two problems simultaneously.

23 W. Eckhardt, ‘Probability Theory and the Doomsday Argument.’ Mind, 102 (1993): 483-8; cf. John Leslie, ‘Doom and probabilities.’ Mind, 102 (1993): 489-91

24 This point of view is detailed by Leslie in the part entitled ‘Just who should count as being human?’ (The End of the World, 256-63).

25 Cf. Leslie (1996, 260).

26 I thank two anonymous referees for the Canadian Journal of Philosophy for their comments, concerning an earlier draft of this paper.