Take a look at this word problem, quoted from Gerd Gigerenzer's Calculated Risks. Before you read on, try to come up with a ballpark estimate of the answer; what does your gut tell you?
The probability that a woman of age 40 has breast cancer is about 1 percent. If she has breast cancer, the probability that she tests positive on a screening mammogram is 90 percent. If she does not have breast cancer, the probability that she nonetheless tests positive is 9 percent. What are the chances that a woman who tests positive actually has breast cancer?
This may remind you of problems from an introductory statistics class, and if so, you likely recall that intuition didn't get you very far here. Problems like this are usually a springboard for introducing concepts around Bayesian reasoning, and when you work out the math, the answers tend to be surprising. In this case, people typically estimate that the probability of actually having breast cancer in this scenario is quite high, but it is in fact only around 10%. That is, only one out of ten women with positive mammogram results actually has breast cancer! If that doesn't sit well with you, follow the blueprint of a similar problem here and do the math yourself. (Note that the numbers are rounded for mathematical simplicity, but roughly accurate. More data available here and here.)
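If you'd rather check the arithmetic than trust my word, the calculation is a direct application of Bayes' theorem. Here is a minimal sketch using the rounded numbers quoted in the problem above (the variable names are my own):

```python
# Bayes' theorem applied to the mammogram word problem, using the
# rounded numbers quoted above.
p_cancer = 0.01            # prior: P(cancer) for a woman of age 40
p_pos_given_cancer = 0.90  # sensitivity: P(positive test | cancer)
p_pos_given_healthy = 0.09 # false-positive rate: P(positive test | no cancer)

# Total probability of testing positive (law of total probability)
p_pos = (p_cancer * p_pos_given_cancer
         + (1 - p_cancer) * p_pos_given_healthy)

# Posterior: P(cancer | positive test)
p_cancer_given_pos = p_cancer * p_pos_given_cancer / p_pos
print(f"P(cancer | positive test) = {p_cancer_given_pos:.3f}")  # ≈ 0.092
```

The large pool of healthy women who nonetheless test positive (9% of 99% of the population) swamps the small pool of true positives, which is why the answer lands near 10% rather than near 90%.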
Don't feel bad if your initial estimate of the answer was off here. It turns out that people, by and large, aren't very good at reasoning with probabilities like this. Evolution simply did not equip us to intuitively understand and manipulate data in this format. Even doctors whom we trust to help us make important medical decisions under uncertainty have trouble understanding probabilities. In a 1998 study using a word problem nearly identical to the one above, experienced doctors gave estimates of the answer ranging from 1 to 90%, and only 2 out of 24 answered the question correctly.
This is a single example, but innumeracy (that is, the inability to reason with numbers and other mathematical concepts) is a major problem for people all over the world, with implications for health, financial, and many other decisions. Luckily, researchers are devoting much time and energy to understanding why and how people fail to effectively work with probabilities and other numerical concepts. I strongly recommend taking a look at Calculated Risks if you're interested in learning more. The book focuses in particular on how presenting probabilistic information using so-called natural frequencies can greatly facilitate people's comprehension.
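To make the natural-frequencies idea concrete, here is the same mammogram problem recast as counts in a hypothetical cohort of 1,000 women, following the rates quoted above (the cohort size is an arbitrary choice for illustration):

```python
# The mammogram numbers recast as natural frequencies: counts in a
# hypothetical cohort of 1,000 women, following the rates quoted above.
cohort = 1000
with_cancer = round(cohort * 0.01)               # 10 women have cancer
true_positives = round(with_cancer * 0.90)       # 9 of them test positive
without_cancer = cohort - with_cancer            # 990 women are healthy
false_positives = round(without_cancer * 0.09)   # ~89 of them test positive anyway

total_positives = true_positives + false_positives
print(f"{true_positives} of {total_positives} positive tests are true cancers")
# → "9 of 98 positive tests are true cancers", i.e. roughly 1 in 10
```

Phrased this way ("of every 98 women who test positive, about 9 have cancer"), most people get the answer right almost immediately; that is the core of Gigerenzer's argument.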
By now you may be wondering why I've spent so much time talking about mammograms and statistics problems, so here's the hypothesis that I've been getting to: games and game players may very well be an excellent tool for finding better ways to help people understand probabilistic information. I arrived at this idea after reflecting on how prevalent probabilistic information is in games, from the lowly 6- (or 8-, or 12-, or 20-...) sided die, to the overwhelming array of weapon and armor statistics in games like Diablo 3 or World of Warcraft. Sometimes the probabilistic information is explicit, as in the Diablo, WoW, and dice examples; other times it must be estimated from non-probabilistic information (e.g. when estimating the chance of completing a pass given the ability ratings of the quarterback, receiver, and defending cornerback in the Madden NFL games); and in other cases the player can only develop a sense of event probabilities with experience (e.g. when estimating the chance of a random enemy encounter in an RPG like Final Fantasy). These three categories are only a rough taxonomy of the different types of probabilistic information players can encounter in games, but the larger point is that such information is ubiquitous in gaming environments.
So, what follows from this? We can easily propose that, on average, game players are likely to simply have more experience with probabilities than non-game players, and perhaps that extra practice could make them better at probabilistic reasoning, but that by itself is not particularly exciting (in fact, it might be more interesting if we found that they were not any better than non-gamers, indicating that increased exposure to probabilities doesn't actually help people understand them better). I would go a step further, however, and claim that game players not only deal with probabilities more often, but also deal with them in a different manner than is typical for non-gamers. To explain how, let me take a moment to explain the difference between decisions from description and decisions from experience.
In judgment and decision making research, a common paradigm is to ask research participants to choose between sets of gambles (e.g. "Would you rather take a gamble with a 50% chance of winning $10 and a 50% chance of losing $5, or a gamble with 75% chance of winning $100 and a 25% chance of losing $50?"). If the information is presented as in the example, we have a decision from description, but in a study looking at decisions from experience, a participant would be allowed to make repeated "free" gambles that reflect the same underlying probabilities without them being explicitly described (usually simulated on a computer), and then choose which gamble they would rather take if they were using real money. In principle, the numbers are the same in both cases (comparing modes of presentation, not the two example gambles), but JDM research demonstrates that people behave in markedly different ways in the two scenarios, for a variety of reasons I don't have the space to get into here (but see, e.g., this paper). So, when a doctor tells you a treatment has an X% chance of curing some malady you're experiencing, but a Y% chance of some negative side effect, you can only make a decision from description, but when you, say, play a slot machine for two hours, and are deciding whether or not to feed the machine more coins, you are making a decision from experience.
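The contrast between the two modes of presentation can be sketched in a few lines of code, using the two example gambles above. The sampling procedure here is my own illustrative assumption, not the design of any particular study:

```python
import random

# The two example gambles from the text.
def gamble_a():
    # 50% chance of winning $10, 50% chance of losing $5
    return 10 if random.random() < 0.50 else -5

def gamble_b():
    # 75% chance of winning $100, 25% chance of losing $50
    return 100 if random.random() < 0.75 else -50

# "Description": expected values computed directly from the stated odds.
ev_a = 0.50 * 10 + 0.50 * -5     # 2.5
ev_b = 0.75 * 100 + 0.25 * -50   # 62.5

# "Experience": the same quantities estimated from repeated free samples,
# the way a participant in an experience condition would encounter them.
random.seed(0)
n = 10_000
experienced_a = sum(gamble_a() for _ in range(n)) / n
experienced_b = sum(gamble_b() for _ in range(n)) / n
print(f"described: {ev_a}, {ev_b}; experienced: {experienced_a:.1f}, {experienced_b:.1f}")
```

Mathematically the two routes converge on the same numbers, which is exactly why the behavioral differences between the conditions are so interesting: the information is equivalent, but the experience of it is not.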
What is unique about gaming is that, in many cases, it allows players to engage in something of a hybrid decision-making scenario. Consider the sword from Diablo 3 linked above, which grants a "2.4% chance to stun on hit". It can often be challenging to interpret what it really means for there to be a 2.4% chance of an event when it is simply described to you, but the Diablo player has more than that; they can proceed to use the sword against enemies for an hour, a day, or longer with little cost, and directly experience how a probabilistic event plays out over repeated trials. What I'm hypothesizing, then, is that games may provide a means of helping people connect numerical, descriptive probabilities to real, experienced events in a way that is not possible in other domains. Contrast the gaming case with a decision on transplant surgery, for example. In the vast majority of cases, this is a one-off decision (you can't exactly have 100 heart transplants, and then see how many of them work out), and thus patients cannot easily understand what follows from, say, a 90% chance that the surgery will be successful. The same holds for many of the other important health and financial decisions we face, and while I by no means intend to equate decisions in a game environment to those about your health or retirement fund, the point remains that in the former case the probabilities involved can be grounded in gameplay experiences, while in the latter this is often impossible.
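A quick simulation shows what that 2.4% stat actually feels like over repeated trials. The hit count below is an arbitrary stand-in for a play session:

```python
import random

# Simulating the "2.4% chance to stun on hit" stat from the text:
# what a player actually experiences over many swings.
random.seed(1)
stun_chance = 0.024
hits = 5000  # an arbitrary stand-in for an evening's worth of attacks

stuns = sum(1 for _ in range(hits) if random.random() < stun_chance)
print(f"{stuns} stuns in {hits} hits ({stuns / hits:.1%})")
```

The expected count is 120, but any single session will drift above or below it, sometimes with long dry streaks between stuns; that drift is itself part of what repeated play teaches about small probabilities.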
The empirical question I thus propose is whether gameplay that marries descriptive presentation of probabilistic information with repeated experiences of events based on that information can in fact improve people's understanding of probabilities. That is, might game players better understand (as compared to non-gamers) what it means to say an event will occur with probability X? And if so, does this understanding transfer to non-game domains? Are game players better able, for example, to interpret what it means to say a procedure has a 70% success rate, or that a person has a 1.4% chance of dying in a car accident (see this CDC report), and make more rational decisions thanks to this improved understanding? Whether the answers to these questions are affirmative or not, investigating them is sure to expand our understanding of how people understand and utilize probabilities.
Unfortunately, to the best of my knowledge, no work has been done to directly address these questions. For the moment, then, it will continue to be speculation on my part. Daphne Bavelier's lab at the University of Rochester has done interesting work suggesting that action games may improve probabilistic inference at a general level, but no one has explored whether this translates to a better understanding of quantitative probability data. We'll certainly be looking more into the matter here, and I hope to hear from any readers who know of research I may have missed, or have ideas on how to tackle this question.
Reposted from Motivate.Play.