# Pascal’s Wager (pt. 1 of 3)

I’m briefly interrupting my ongoing series of blog posts “The Best Reasons…” with a three-part series on Pascal’s Wager. I’m teaching it to my high schoolers this week, so the Wager has been on my mind. I’ll be right back to that previous series!

***

As we go about our lives, we are confronted with a lot of decisions. In many cases, these decisions carry the potential for either loss or gain; they can turn out well or badly. So how do we choose in these cases?

Well, very often we use an informal method: we weigh the potential good against the potential bad of each choice, and then compare how likely the good outcome is in each case. It sounds complicated, but we do this all the time.

Here’s an example: You just remembered hearing that the grocery store is giving away free ice cream cones today. But you look at your watch and see that it is 8:05 pm. You are pretty sure that the grocery store closes at 8pm, and so decide not to make the drive down.

Notice that you had two choices: to drive to the grocery store or to stay home. You chose to stay home. Why? Well because, though the good of an ice cream cone would have outweighed the inconvenience of the drive, you were pretty sure that there wasn’t an ice cream cone at the other end of your drive. The good of the ice cream cone wasn’t great enough to outweigh the likelihood of an inconvenient drive without a payoff.

But what if the grocery store were giving away free Corvettes? Well, in this case, even if you were pretty sure the grocery store closed five minutes ago, you would still take the chance. The only difference: how big the potential payoff is. A Corvette is a much bigger payoff than an ice cream cone, big enough to outweigh the likelihood of a drive with nothing at the end.

This sort of thinking has been formalized by mathematicians over the past few hundred years, in an attempt to help us sort out decisions that are really complicated. It’s called Decision Theory, and even though it was formalized to help make the right choice in complicated decisions, we can still use it for decisions that are not so complicated. Decision Theory offers us a nice way of visualizing and keeping straight the sort of thinking we already do all the time, like with the grocery store and the free ice cream.

Let’s see what a Decision Theory table looks like by taking another example. Suppose you go to the Carnival and walk up to one of the booths. The booth man asks if you’re feeling lucky and challenges you to a game. The game goes like this. You pay $1 to play. Then, the dealer lets you choose one card from a standard deck. If this card is red, you get $3 as a prize (in other words: you get the dollar you paid back, plus two more). If, however, the card is black, then the dealer keeps your dollar. Should you play this game?

Let’s put together a Decision Theory table and see what it says:

|                      | Red (0.5) | Black (0.5) | Expected Utility |
| -------------------- | --------- | ----------- | ---------------- |
| Wagering for Red     | +$2       | −$1         | $0.50            |
| Abstaining from play | $0        | $0          | $0               |

This table has two rows: one row labeled “Wagering for Red” and one row representing the decision to abstain from play. These are, of course, our two decisions. The table has three columns: the first two columns are labeled for our two possible outcomes – turning a red card or turning a black card – and the third column is labeled “Expected Utility”. This “expected utility” is the key term in Decision Theory; it is a value assigned to each choice by doing a little math with the value and the probability of each outcome: multiply each outcome’s value by its probability, then add up the results. This value is the very thing we are trying to determine when we make these tables. It is the “point” of these tables.

Oh, and one other thing I should mention about the format of the table. Notice that within the labels for our two possible outcomes — turning a red card and turning a black card — there is a number in parentheses, in this case (0.5). This number represents how likely we judge the outcome to be: 0 means that we find the outcome impossible; 1 means that we find the outcome certain; 0.5 means that we find the outcome just as likely as not. In this case, since a standard deck contains as many red cards as black cards, it is equally likely that you will draw a red or a black. That is why we’ve assigned a probability of 0.5 to both of our outcomes.
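If you like to see the arithmetic spelled out, here is a minimal sketch in Python of the “multiply each value by its probability and add it up” idea (the function name and the dollar amounts are my own illustration of the carnival game, not something from the original post):

```python
def expected_utility(outcomes):
    """Sum of (probability x value) over all possible outcomes of a choice."""
    return sum(prob * value for prob, value in outcomes)

# Wagering for Red: 0.5 chance to net +$2 (the $3 prize minus the $1 stake),
# 0.5 chance to lose the $1 stake.
print(expected_utility([(0.5, 2), (0.5, -1)]))  # 0.5

# Abstaining from play: nothing gained, nothing lost.
print(expected_utility([(1.0, 0)]))  # 0.0
```

Those two numbers, $0.50 and $0, are exactly the values that go in the table’s “Expected Utility” column.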

Now to “read” this table. According to Decision Theory, we should always choose the option that offers us the highest expected utility. In this Carnival game, paying the $1 and “Wagering for Red” offers us an expected utility of $0.50: half the time we draw red and gain $2, and half the time we draw black and lose $1, so (0.5 × $2) + (0.5 × −$1) = $0.50. Deciding to abstain from play, meanwhile, offers us an expected utility of $0 (since you neither gain nor lose anything by not playing). So, according to Decision Theory, we should play the game. The expected utility is higher. Play the game enough times, and we should expect a return of about $0.50 for every time we play.
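To see the “play the game enough times” claim in action, here is a quick simulation sketch (again in Python, and again my own illustration rather than anything from the post): play the carnival game many times and check the average return per game.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def play_once():
    """One round: pay $1 and draw a card; a red card pays $3 back (net +$2)."""
    card_is_red = random.random() < 0.5  # 26 of the 52 cards are red
    return 2 if card_is_red else -1

trials = 100_000
average = sum(play_once() for _ in range(trials)) / trials
print(f"Average return per game over {trials:,} trials: ${average:.2f}")
```

Over many trials the average settles near the expected utility of $0.50, which is exactly the number the table assigns to playing.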

To be continued…
