There are lots of ways to manipulate a gamble that influence whether people will choose to take a chance at it. You could frame the problem in terms of gains or losses, separate a one-part gamble into a multi-part gamble, change a loss for losing the gamble into a cost for playing the gamble, or emphasize the opportunity of getting the best possible outcome or the threat of getting the worst possible outcome. All of these manipulations change the way that a gamble is described while leaving the probabilities and the net payoffs unchanged. Even though the gambles are equivalent, people's willingness to play varies.
There's another manipulation that can get more people to gamble: changing the payoffs so that people might lose money. How does that work? Let's look at an example. Here are the original choices (pick A or B):
A: a certain gain of $2
B: a 7/36 chance of gaining $9 and a 29/36 chance of no gain
Not surprisingly, most people (67%) chose option A, which has a higher expected value than B ($2 vs $1.75) and less risk. Some people are given this alternative gamble instead:
C: a certain gain of $2
D: a 7/36 chance of gaining $9 and a 29/36 chance of losing $0.05
C is the same as A, while option D is strictly worse than B, since you're risking the loss of a nickel rather than the loss of nothing, and there's no compensating benefit. Now, the difference between the expected values of B and D is only about $0.04, so you might suspect that D wouldn't be much less attractive than B. But (if I hadn't tipped you off) would you ever have predicted that more people would choose D than B? A majority, 60%, chose D rather than C, which means that roughly 27% of people would take D but not B.
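The expected values quoted above are easy to verify. Here is a minimal sketch in Python (variable names are mine) that uses exact fractions to avoid floating-point rounding in the 7/36 and 29/36 odds:

```python
from fractions import Fraction

# Probabilities shared by gambles B and D
p_win = Fraction(7, 36)    # chance of winning $9
p_lose = Fraction(29, 36)  # chance of the other outcome

ev_A = Fraction(2)                             # certain $2
ev_B = p_win * 9 + p_lose * 0                  # $9 gain vs. no gain
ev_C = Fraction(2)                             # identical to A
ev_D = p_win * 9 - p_lose * Fraction(5, 100)   # $9 gain vs. $0.05 loss

print(f"EV(A) = ${float(ev_A):.2f}")                 # $2.00
print(f"EV(B) = ${float(ev_B):.2f}")                 # $1.75
print(f"EV(D) = ${float(ev_D):.4f}")                 # $1.7097
print(f"EV(B) - EV(D) = ${float(ev_B - ev_D):.4f}")  # $0.0403
```

The nickel at stake lowers D's expected value by only 29/36 × $0.05 ≈ 4 cents, which is why the size of the preference reversal is so striking.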
What's going on? Slovic et al. (2002), who conducted this study, explain the results in terms of evaluability. Some features are difficult to evaluate. For instance, is 20,000 entries an impressive number for a dictionary to have? How should I know? So if people get asked how much they are willing to pay for a dictionary with various features, things like whether it has a torn cover end up being more important than the number of entries that it has. But if people are given a choice between two dictionaries, one with 10,000 entries and one with 20,000 entries, then the 20,000 entries seem like a lot and people are willing to pay more for that dictionary, regardless of any superficial damage to its cover. It is not earth-shattering that, if the features are hard to evaluate, you judge one option by comparing it to the other options that you're given.
What's more interesting is that people seem to make these sorts of comparisons even when the features aren't as hard to evaluate as the 20,000 dictionary entries. For instance, in another study, people who were given a choice between a nice pen and a mug as their reward for participating in a study split pretty evenly between the two. People who were given a choice between a nice pen, a mug, and a cheap pen overwhelmingly chose the nice pen. Even though people have plenty of familiarity with pens and mugs, when that cheap pen is around to use as a standard of comparison, the nice pen starts to look a lot better. In some circles, a big deal is made out of the fact that most voting systems don't satisfy independence of irrelevant alternatives, but it turns out that even individual decisions between everyday objects are not always so independent.
Slovic et al. argue that people do the same sort of evaluation-by-comparison thing even for money. Is $9 a big prize? It doesn't look so spectacular when you're only comparing it with the $2 gain, but compared with the $0.05 loss it's a lot of money. So making the gamble strictly worse by turning no gain into a $0.05 loss actually makes it more attractive to people because the $9 gain starts to look better by comparison.
The Slovic et al. study is reported in:
Slovic, P., Finucane, M., Peters, E., & MacGregor, D. G. (2002). Rational actors or rational fools: Implications of the affect heuristic for behavioral economics. Journal of Socio-Economics, 31, 329–342.
Reasoning at Mixing Memory