Newcomb's Paradox

Suppose you are invited to play a game. You are shown two boxes. The first box is open and contains a thousand dollars. The second box is closed. You are told that it contains either a million dollars or nothing. You can choose to take home either the contents of the closed box, or the contents of both boxes. You cannot open the box until you make your choice.

But here's the catch: The contents of the closed box have been determined by an entity called the Predictor (different versions identify the Predictor as a supercomputer, a supernatural being, or a superintelligent alien), which is able to successfully predict what the player will choose. In all previous games, you are told, the Predictor has never been wrong.

If the Predictor determines that the player will choose only the closed box, that box will contain a million dollars. Otherwise, the box will be empty. What do you choose?

This is Newcomb's paradox.

At first glance, it doesn't seem like a paradox at all. The answer is obvious, isn't it?

The problem, philosopher Robert Nozick explained in 1969, is that there is no consensus about what the obvious answer is. The public is split almost evenly between those who would choose only the closed box and those who would choose both.

The reasoning behind choosing only the closed box is that, if the Predictor has correctly guessed what choice you are going to make, you'll become a millionaire only if that choice is the closed box. If the Predictor is really capable of knowing in advance what you will choose, it will also know about any tricks you try to play; you won't be able to outwit it.

The reasoning behind choosing both boxes is that the Predictor's choice has already been made; therefore, whatever is in the closed box, you'll get an extra $1,000 if you pick both. Since the contents of the box are already determined, nothing you do will change that; your future choice can't be the cause of the Predictor's past decision.

Still, we have been informed that of everyone who has played the game before us, nobody received the million if they chose both boxes. On the other hand, each person who chose just one box committed to leaving a thousand dollars on the table.
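That track record can be put in expected-value terms. Here is a minimal sketch; the simulation, its play function, and the idea of a Predictor with a fixed accuracy p are my own illustration, not part of the original thought experiment. If the Predictor is right with probability p, one-boxing pays an average of p times $1,000,000, while two-boxing pays (1 - p) times $1,000,000 plus $1,000, so one-boxing comes out ahead whenever p exceeds 0.5005. A Predictor that has never been wrong is far past that threshold.

```python
import random

# Hypothetical simulation of Newcomb's game: the Predictor is modeled as
# guessing correctly with a fixed probability `accuracy`. These names and
# parameters are illustrative assumptions, not part of the original story.

def play(strategy, accuracy, trials=100_000):
    """Return the average payout of `strategy` ('one-box' or 'two-box')."""
    total = 0
    for _ in range(trials):
        # The Predictor anticipates our actual choice with probability `accuracy`.
        correct = random.random() < accuracy
        predicted = strategy if correct else (
            "two-box" if strategy == "one-box" else "one-box")
        # The closed box holds $1,000,000 only if one-boxing was predicted.
        closed_box = 1_000_000 if predicted == "one-box" else 0
        total += closed_box if strategy == "one-box" else closed_box + 1_000
    return total / trials

# Break-even: p * 1_000_000 = (1 - p) * 1_000_000 + 1_000
# gives p = 1_001_000 / 2_000_000 = 0.5005.
for p in (0.5, 0.5005, 0.9, 1.0):
    print(f"p={p}: one-box ~ ${play('one-box', p):,.0f}, "
          f"two-box ~ ${play('two-box', p):,.0f}")
```

At p = 1, as the story stipulates, the sketch reproduces the track record above: one-boxers average $1,000,000 and two-boxers $1,000. Note that the dominance argument for two-boxing is untouched by this arithmetic, which is exactly why the paradox bites.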

There's nothing wrong with the logic of either side. So how do we resolve this dilemma?

Julia Galef describes one hypothetical solution.

Before the box is sealed, the most rational approach is (1), and you should intend to one-box. After the box is sealed your best approach is (2) and you should be a two-boxer. Unfortunately, because the computer is such a good predictor, you can't intend to be a one-boxer and then switch to two-boxing, or the computer will have anticipated that already. So your only hope is to find some way to pre-commit to one-boxing before the machine seals the box, to execute some kind of mental jujitsu move on yourself so that your rational instincts shut off once that box is sealed.

But is it possible to do that? At its heart, Newcomb's paradox seems to be a question about free will. Is the Predictor actually able to foresee our future actions? Or are its predictions merely a most-likely scenario based on a psychological profile? Once the Predictor closes the box, is our fate sealed? Or do we still have the freedom to make our own choice?

Nicholas Sinlock elaborates.

The paradox revolves around, in my mind, how the Predictor makes his predictions. The truth is that we aren’t told how he determines his choice. Thus we, the Gambler, are unable to get an idea of what his choice might be. If we assume no knowledge, we can only base our choices upon what we decide to do. Whatever we choose will most likely be what the Predictor will have chosen. This sounds weird but since the Predictor is good at guessing what we will do, we must assume that he will guess whatever we actually do because there doesn’t appear to be any better strategy. In this case the best strategy is to choose option 2. If, however, we understand him to be basing his choice, say, upon our psychological preference for one option or the other, we can see another potential strategy. Assuming the above information by the Predictor, we could attempt a strategy where we choose the opposite of our instincts. Either way, we can now see the paradox would seem to be the result of the definition of the Predictor.

Henry Sturman, on the other hand, argues that it's not a question of free will. He points out that correlation is not causation; the Predictor's decision may appear to be determined by our choice, but it's possible that both our choice and the Predictor's are effects of a prior cause.

So, if presented with Newcomb's paradox, I would advise everybody to take only one box. In fact my advice might be the cause of both the reader's choice of one box and the perfect being's prediction of that choice, in accordance with normal forward time causality (if you read this article after the being has already filled the box, the being will still have been able to predict you would read this article and base his prediction on that). You then open the closed box and find the $1,000,000.

So it's possible that even if we have freedom of choice, the Predictor—given enough information about us—might be able to perfectly predict the choice we will make. The question of free will is therefore apparently not relevant.

That leaves us, nine hundred words into this post, no closer to a solution than we were at the beginning. I can think of no way to conclude this, except to say—Predictor, if you're reading this, take note—I would pick the closed box, take the million dollars, and not sweat the thousand.
