Odds in the news (e.g., mass murder, cavities)

After the news that an American soldier apparently committed mass murder in Afghanistan, some reported that he had served multiple combat tours, had suffered a head injury, and had marital problems. Were those factors the cause of his actions?

That line of reasoning did not resonate with some of those closest to the situation, such as this fellow soldier, who was quoted as saying:

I know a lot of people who’ve done multiple deployments, and they always come back so level headed, that I’ve seen.

Last week I had one of those embarrassing moments when I tried to explain something statistical in class using example numbers off the top of my head. It was odds ratios. A bedrock of social science research, odds ratio reporting often causes confusion. In my case, the odds of me screwing up the example are much greater when I don’t first work it out with a spreadsheet.

Now odds ratios are indirectly in the news, and the seminar meets again tonight, so I worked up a quick example using mass murder, and comparing it to something more mundane: cavities (using fake data).

The first problem is the idea of causal effects in general. Smoking causes people to die from lung cancer, yet the majority of smokers don’t die from lung cancer. It increases the risk of lung cancer, the odds of dying from lung cancer. It causes lung cancer, just not every time. Fortunately for anti-smoking education efforts, lung cancer is common enough that it’s easy to see the connection. But with rare events — like mass murder — this is easily confused.

Mass murder

Again, made-up data. Suppose there are 20,000 soldiers on a base, and 18,000 of them are on their first deployment. From previous research, we believe that .01% of soldiers commit mass murder, so we expect (statistically) 2 mass murders on the base. But when a soldier on his fourth deployment commits mass murder, someone hypothesizes that it’s caused by the stress of his multiple deployments.

Among those serving multiple deployments, the mass murder rate on this base is .05% (1 in 2,000). If the overall rate is to be .01%, resulting in two mass murders (which we hope is not the case), the rate for those on first deployments would have to be a little more than half that: 1 in 18,000, or about .0056%. In that case, I would say mass murder is “9-times more common” among those serving multiple deployments, because (1/2,000)/(1/18,000) = 9.0.

The odds of committing mass murder for the two groups are 1/1,999 and 1/17,999 respectively, and the ratio of those odds is 9.004. So I would say the “odds of mass murder are 9.004-times greater” among those serving multiple deployments. That’s a big difference. It doesn’t mean deployments cause mass murder, but the hypothesis is still standing.

The difference between the rate ratio (9.0) and the odds ratio (9.004) isn’t big, but with more common events the gap widens, which can add to the confusion. I’ve added a (fictional data) cavity example for comparison. The hypothesis here is that eating sugar causes cavities.

Overall, 5% of 10,000 kids get cavities (ok, bad example), and the odds of getting cavities are 500/9,500, or .05263. In the cavity example, as with the mass murders, it just so happens that the two groups are expected to experience the same number of cavities (250 each), but the “no sugar” group (9,000 kids) is 9-times larger than the sugar group (1,000 kids) (as if). So the rate ratio is .25000/.02778 = 9.0. And the odds ratio is (250/750)/(250/8,750) = 8,750/750 = 11.667.
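
The spreadsheet arithmetic for both examples can be sketched in a few lines, using exact fractions for the made-up counts rather than rounded percentages:

```python
# Rate = cases / group total; odds = cases / non-cases in the group.
def rate_and_odds(cases, total):
    return cases / total, cases / (total - cases)

# Mass murder: 2,000 multiple-deployment vs 18,000 first-deployment soldiers,
# one expected case in each group.
rate_multi, odds_multi = rate_and_odds(1, 2_000)
rate_first, odds_first = rate_and_odds(1, 18_000)
print(rate_multi / rate_first)   # rate ratio: 9.0 exactly (18,000/2,000)
print(odds_multi / odds_first)   # odds ratio: 17,999/1,999, about 9.004

# Cavities: 1,000 sugar-eating vs 9,000 no-sugar kids, 250 cases in each group.
rate_sugar, odds_sugar = rate_and_odds(250, 1_000)
rate_none, odds_none = rate_and_odds(250, 9_000)
print(rate_sugar / rate_none)    # rate ratio: 9.0
print(odds_sugar / odds_none)    # odds ratio: 8,750/750, about 11.667
```

With equal case counts in each group, the rate ratio is just the ratio of group sizes, which is why both examples come out to 9.0; the odds ratios diverge more as the event gets more common.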

Take-home points from these fake examples:

  • The vast majority of soldiers don’t commit mass murder regardless of whether they are on their first deployment or not, and the vast majority of kids don’t get cavities regardless of whether they eat sugar or not.
  • The odds ratios for mass murder associated with multiple deployments, and cavities associated with sugar, are very large. That is true even though the total number of mass murders, like the total number of kids with cavities, is expected to be the same in each pair of comparison groups.
  • We cannot confirm these hypotheses, but we cannot rule them out either. Further research may be warranted.

(If I screwed this example up, too, please let me know so I can correct it.)

Filed under In the news, Me @ work

6 responses to “Odds in the news (e.g., mass murder, cavities)”

  1. Hi Philip,
    This is a really useful post (as usual). I’d like to underscore one point and ask a question about another:

    1. It’s important to remember that even when something increases your risk of getting/having some outcome, you still have to pay attention to the base rate — if I buy three lottery tickets I’ve increased my “risk” of winning threefold, but I’ve still very little chance of actually hitting the number.

    2. Why do we use odds ratios rather than rate ratios, since rate ratios (aka “risk ratios”) are so much more intuitively obvious?

  2. Andy

    A problem I’ve always had with using odds ratios (or hazard ratios) to illustrate a point is that they completely ignore the scale of an event. They are ONLY for comparative purposes. For example, it might be striking that people serving multiple deployments have 10 times (or even 1,000 times) the odds of committing mass murder as those serving their first deployment. But it’s important not to forget that mass murder is exceedingly rare in BOTH groups, and that the ratio comparison makes absolutely small differences appear relatively large.

    If we use absolute differences, which actually might be more appropriate for understanding the true likelihood of events like mass murder, the difference in rates between the two groups is only 0.000445, or about 4 in 10,000, which seems tiny, even with these made-up data.
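
    The absolute risk difference can be checked from the post’s made-up counts:

    ```python
    # Risk (rate) in each group, from the made-up counts in the post.
    risk_multi = 1 / 2_000    # multiple deployments: 0.0005
    risk_first = 1 / 18_000   # first deployment: about 0.0000556

    diff = risk_multi - risk_first
    print(diff)  # about 0.000444, roughly 4 extra cases per 10,000 soldiers
    ```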

    Your point is well-taken, but I still think odds ratios are overused in social science research.

  3. kim

    Technically, the odds ratio in your mass murder example is 9.004. You have to subtract the mass murdering soldiers from the row totals to get the number of non-mass murdering soldiers. Think of it as a 2 x 2 table with counts in each cell of 17,999 (single deployment, non- murderer) and 1 (single deployment, murderer) and 1,999 (multiple deployment, non-murderer) and 1 (multiple deployment, murderer).
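
    Kim’s 2 x 2 construction can be written as a small helper (a sketch; the argument names are mine):

    ```python
    def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
        """Odds ratio from a 2 x 2 table: (a/b) / (c/d) = a*d / (b*c)."""
        return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

    # Subtract the murderers from the group totals to get non-murderers:
    # 2,000 - 1 = 1,999 and 18,000 - 1 = 17,999.
    print(odds_ratio(1, 1_999, 1, 17_999))  # 17,999/1,999, about 9.004
    ```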

    As for the question of why use odds ratios, odds ratios have very convenient properties, one of which is that they are invariant to changes in the margins. This is exceedingly important in research on mobility and segregation, to name two social science areas where odds ratios are commonly used, but it’s also useful in contexts such as your example.

    Say the army doubles the number of recruits, but is especially lucky and doesn’t pick up any more soldiers who turn into mass murderers. The odds ratios — the relative chance of a multiple-deployment soldier going ballistic (sorry) compared to a single-deployment soldier going ballistic — won’t change relative to the example above, but the baseline percentages will. If you want to know the “effect” of multiple deployments on the relative chances of being a mass murderer, it’s more appropriate to look at odds ratios. If you want to compare across military units, or across countries, you’d also want to look at odds ratios, so that cross-unit differences in the deployment margins don’t “pollute” your conclusions.
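
    The margin invariance can be illustrated by scaling one column of the 2 x 2 table (a sketch: here the non-murderer column is doubled, approximating the doubled-recruits scenario with no new murderers):

    ```python
    def odds_ratio(a, b, c, d):
        # 2 x 2 table: a, b = multiple-deployment murderers / non-murderers;
        #              c, d = single-deployment murderers / non-murderers.
        return (a * d) / (b * c)

    before = odds_ratio(1, 1_999, 1, 17_999)
    # Double the non-murderer column; the doubling cancels in a*d / (b*c).
    after = odds_ratio(1, 2 * 1_999, 1, 2 * 17_999)
    print(before, after)  # equal: the odds ratio is invariant to this scaling

    # The baseline percentage, by contrast, falls by roughly half.
    print(2 / 20_000, 2 / 39_996)
    ```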

    But if you just want to know that a very tiny percentage of soldiers turn into mass murderers, yes, you’d want to look at, well, percentages. Different measure, different type of question that is being addressed.

    The best solution would be to report absolute counts (which in this example is critical, given how rare the event of interest is and hence the “noise” in both percentages and odds ratios), percentages, and odds ratios. But that typically exceeds the statistical literacy of most journalists, not to mention the attention span of most readers.

    • Thanks, Kim. (I knew that, but must have done the spreadsheet wrong. Rats. I’ve corrected it.)

      Eyeballing it, I had figured the difference between the rates and the odds ratios was due to the relative frequency of the events, but naturally it is also affected by the relative size of the groups. It goes to show that fooling around with the spreadsheet feels like a more intuitive way to learn these principles, but it’s not as efficient as actually learning them.

  4. Pingback: The Douglas Allen study of Canadian children of gay/lesbian parents is worthless | Family Inequality
