Tag Archives: mortality

Age composition change accounts for about half of the Case and Deaton mortality finding

This paper by Anne Case and Angus Deaton, one of whom just won a Nobel prize in economics, reports that mortality rates are rising for middle-aged non-Hispanic Whites. It’s gotten tons of attention (see e.g., “Why poor whites are dying of despair” in The Week, and this in NY Times).

It’s an odd paper, though, in its focus on just one narrow age group over time. The coverage mostly describes the result as if conditions are changing for a group of people, but the group of people changes every year as new 45-year-olds enter and 54-year-olds leave. That means the population studied is subject to change in its composition. This is especially important because the Baby Boom wave was moving through this group during part of that time. The 1999-2013 time frame included Baby Boomers (born 1946-1964) from age 35 to age 67.

My concern is that changes in the age and sex composition of the population studied could account for a non-trivial amount of the trends they report.

For example, they report that the increased mortality is entirely concentrated among those non-Hispanic White men and women who have high school education or less. But this population changed from 1999 to 2013. Using the Current Population Survey — which is not the authority on population trends, but is weighted to reflect Census Bureau estimates of population trends — I see that this group became more male, and older, over the period studied. That’s because the Baby Boomers moved in, causing aging, and because women’s educational advances relative to men since the 1970s have made the high-school-or-less group increasingly male. Here are those trends:

[Figure: Trends in the sex and age composition of non-Hispanic Whites ages 45-54 with high school education or less, Current Population Survey.]

It’s odd for a paper on mortality trends not to account for sex and age composition changes in the population over time. Even if the effects aren’t huge, I think that’s just good demography hygiene. Now, I don’t know exactly how much difference these changes in population composition would make to the mortality rates, because I don’t have the mortality data by education. Composition change would only make a difference if the mortality rates differed a lot by sex and age within this group.

However, setting aside the education issue, we can tell something just by looking at the whole non-Hispanic White population, and it’s enough to raise concerns. In the overall 45-54 non-Hispanic White population, there wasn’t any change in sex composition. But there was a distinct age shift. For this I used the 2000 Census and the 2013 American Community Survey. I could have gotten 1999 estimates to match Case and Deaton, but 2000 seems close enough and the Census numbers are easier to get. (That makes my little analysis conservative, because I’m lopping off one year of change.)

Look at the change in the age distribution between 2000 and 2013 among non-Hispanic Whites ages 45-54. In this figure I’ve added the birth year range for those included in 2000 and 2013.

[Figure: Age distribution of non-Hispanic Whites ages 45-54, 2000 versus 2013, with birth-year ranges for each year.]

That shocking drop at age 54 in 2000 reflects the beginning of the Baby Boom. In 2000 there were a lot more 53-year-olds than there were 54-year-olds, because the Baby Boom started in 1946. (Remember, unlike today’s marketing-term “generations,” the Baby Boom was a real demographic event.) So there was a general aging, but also a big increase in 54-year-olds, between 2000 and 2013, which will naturally increase the mortality rate for that year.

So, to see whether the age shift had a non-trivial impact on the number of deaths in this population, I used one set of mortality rates: 2010 rates for non-Hispanic Whites by single year of age, published here. And I used the age and sex compositions as described above (even though the sex composition barely changed I did it separately by sex and summed them).

The 2010 age-specific mortality rates applied to the 2000 population produce a death rate of 3.939 per 1,000. When applied to the 2013 population they produce a death rate of 4.057 per 1,000. That’s the increase associated with the change in age and sex composition. How big is that difference? The 2013 death rate implies 118,313 deaths in 2013. The 2000 death rate implies 114,869 deaths in 2013. The difference is 3,443 deaths. Remember, this assumes age-specific death rates didn’t change, which is what you want to assess effects of composition change.

So I can say this: if age and sex composition had stayed the same between 2000 and 2013, there would have been 3,443 fewer deaths among non-Hispanic Whites in the ages 45-54.
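For concreteness, here is a minimal sketch, in Python, of the direct-standardization arithmetic described above. The age-specific rates and population counts below are made-up placeholders, not the published values I used; the point is just the mechanics of holding rates fixed while swapping age structures.

```python
# Sketch of the composition-effect calculation: hold 2010 age-specific
# mortality rates fixed and apply them to the 2000 and 2013 age structures
# of non-Hispanic Whites ages 45-54.
# The numbers below are made-up placeholders, not the published values.

ages = list(range(45, 55))

# Deaths per 1,000 at each single year of age (placeholder values)
rates_2010 = {a: 2.5 + 0.3 * (a - 45) for a in ages}

# Population counts at each age (placeholders; in the real calculation these
# come from the 2000 Census and 2013 ACS, separately by sex and then summed)
pop_2000 = {a: 2_300_000 - 40_000 * (a - 45) for a in ages}
pop_2013 = {a: 2_100_000 + 20_000 * (a - 45) for a in ages}

def crude_rate(rates, pop):
    """Deaths per 1,000 implied by fixed age-specific rates and a given age structure."""
    deaths = sum(rates[a] * pop[a] / 1000 for a in ages)
    persons = sum(pop[a] for a in ages)
    return 1000 * deaths / persons

rate_2000_structure = crude_rate(rates_2010, pop_2000)
rate_2013_structure = crude_rate(rates_2010, pop_2013)

# Deaths implied in the 2013 population under each age structure's rate
total_2013 = sum(pop_2013.values())
deaths_2013_structure = rate_2013_structure * total_2013 / 1000
deaths_2000_structure = rate_2000_structure * total_2013 / 1000

print(f"Rate with 2000 age structure: {rate_2000_structure:.3f} per 1,000")
print(f"Rate with 2013 age structure: {rate_2013_structure:.3f} per 1,000")
print(f"Deaths attributable to composition change: "
      f"{deaths_2013_structure - deaths_2000_structure:,.0f}")
```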

Here is what Case and Deaton say:

If the white mortality rate for ages 45−54 had held at their 1998 value, 96,000 deaths would have been avoided from 1999–2013, 7,000 in 2013 alone.

So, it looks to me like age composition change accounts for about half of the rise in mortality they report. They really should have adjusted for age.

Here is my spreadsheet table (you can download the file here):

[Table: spreadsheet with the age-specific rates and population counts used in this calculation.]

As always, happy to be credited if I’m right, and told if I’m wrong. But if you just have suggestions for more work I could do, that might not work.

Follow up: Andrew Gelman has three excellent posts about this. Here’s the last.


Weathering and delayed births, get your norms off my body edition

You can skip down to the new data and analysis — or go straight to my new working paper — if you don’t need the preamble diatribe.

I have complained recently about the edict from above that poor (implying Black) women should delay their births until they are “financially ready” — especially in light of the evidence on their odds of marriage during the childbearing years. And then what seemed like a friendly suggestion that poor women use more birth control led to some nut on Fox News telling Rebecca Vallas, who spoke up for raising the minimum wage:

A family of three is not supposed to be living on the minimum wage. If you’re making minimum wage you shouldn’t be having children and trying to raise a family on it.

As if minimum wage is just a phase poor people can expect to pass through only briefly, on their way to middle class stability — provided they don’t piss it away by having children they can’t “afford.” This was a wonderful illustration of the point Arline Geronimus makes in this excellent (paywalled) paper from 2003, aptly titled, “Damned if you do: culture, identity, privilege, and teenage childbearing in the United States.” Geronimus has been pointing out for several decades that Black women face increased health risks and other problems when they delay their childbearing, even as White women have the best health outcomes when they delay theirs. This has been termed “the weathering hypothesis.” In that 2003 paper, she explores the cultural dynamic of dominance and subordination that this debate over birth timing entails. Here’s a free passage (where dominant is White and marginal is Black):

In sum, a danger of social inequality is that dominant groups will be motivated to promote their own cultural goals, at least in part, by holding aspects of the behavior of specific marginal groups in public contempt. This is especially true when this behavior is viewed as antithetical or threatening to social control messages aimed at the youth in the dominant group. An acknowledgment that teen childbearing might have benefits for some groups undermines social control messages intended to convince dominant group youth to postpone childbearing by extolling the absolute hazards of early fertility. Moreover, to acknowledge cultural variability in the costs and consequences of early childbearing requires public admission of structural inequality and the benefits members of dominant groups derive from socially excluding others. One cannot explain why the benefits of early childbearing may outweigh the costs for many African Americans without noting that African American youth do not enjoy the same access to advanced education or career security enjoyed by most Americans; that their parents are compelled to be more focused on imperatives of survival and subsistence than on encouraging their children to engage in extended and expensive preparation for the competitive labor market; indeed, that African Americans cannot even take their health or longevity for granted through middle age (Geronimus, 1994; Geronimus et al., 2001). And one cannot explain why these social and health inequalities exist without recognizing that structural barriers to full participation in American society impede the success of marginalized groups (Dressler, 1995; Geronimus, 2000; James, 1994). To acknowledge these circumstances would be to contradict the broader societal ethic that denies the existence of social inequality and is conflicted about cultural diversity. And it would undermine the ability the dominant group currently enjoys to interpret their privilege as earned, the just reward for their exercise of personal responsibility.

But the failure to acknowledge these circumstances results in a disastrous misunderstanding. As a society, we have become caught in an endless loop that rationalizes, perhaps guarantees, the continued marginalization of urban African Americans. In the case at hand, by misunderstanding the motivation, context, and outcomes of early childbearing among African Americans, and by implementing social welfare and public health policies that follow from this misunderstanding, the dominant European American culture reinforces material hardship for and stigmatization of African Americans. Faced with these hardships, early fertility timing will continue to be adaptive practice for African Americans. And, reliably, these fertility and related family “behaviors” will again be unfairly derided as antisocial. And so on.

Whoever said demography isn’t theoretical and political?

A simple illustration

In Geronimus’s classic weathering work, she documented disparities in healthy life expectancy, which is the expectation of healthy, or disability-free, years of life ahead. When a poor 18-year-old Black woman considers whether or not to have a child, she might take into account her own healthy life expectancy — how long she can count on remaining healthy and active — as well as, and this is crucial, that of her 40-year-old mother, who is expected to help out with the child-rearing (they’re poor, remember). Here’s a simple illustration: the percentage of Black and White mothers (women living in their own households, with their own children) who have a work-limiting disability, by age and education:

[Figure: Percentage of Black and White mothers with a work-limiting disability, by age and education.]

Not too many disabilities at age 20, but race and class kick in hard over these parenting years, till by their 50s one-in-five Black mothers with high school education or less has a disability, compared with one-in-twenty White mothers who’ve gone on to more education. That looming health trajectory is enough — Geronimus reasonably argues — to affect women’s decisions on whether or not to have a child (or go through with an accidental pregnancy). But for the group (say, Whites who aren’t that poor) who have a reasonable chance of getting higher education, and making it through their intensive parenting years disability-free, the economic consequence of an early birth weighs much more heavily.

Some new analysis

As I was thinking about all this the other day, I went to check on the latest infant mortality statistics, since that’s where Geronimus started this thread — with the observation that White women’s chance of a baby dying declines with age, while Black women’s doesn’t. And I noticed there is a new Period Linked Birth-Infant Death Data File for 2013. This is a giant database of all the births — with information from their birth certificates — linked to all the infant deaths from the same year. These records have been used for analyzing infant mortality dozens of times, including in pursuit of the weathering hypothesis, but I didn’t see any new analyses of the 2013 files, except the basic report the National Center for Health Statistics put out. The result is now a working paper at the Maryland Population Research Center.

The gist of the result is, to me, kind of shocking. Once you control for some basic health, birth, and socioeconomic conditions (plurality, parity, prenatal care, education, health insurance type, and smoking during pregnancy), the risk of infant mortality for Black mothers increases linearly with age: the longer they wait, the greater the risk. For White women the risk follows the familiar (and culturally lionized) U-shape, with the lowest risk in the early 30s. Mexican women (the largest Hispanic group I could include) are somewhere in between, with a sharp rise in risk at older ages, but no real advantage to waiting from 18 to 30.

I’ll show you (and these rates will differ a little from official rates for various technical reasons). First, the unadjusted infant mortality rates by maternal age:

Infant Death Rates, by Maternal Age: White, Black, and Mexican Mothers, U.S., 2013. Infant death rates per 1,000 live births for non-Hispanic white (N = 1,925,847), non-Hispanic black (N = 533,341), and Mexican origin (N = 501,390) mothers. Data source: 2013 Period Linked Birth/Infant Death Public Use File, Centers for Disease Control.

These raw rates show the big health benefit to delay for White women, a smaller benefit for Mexican mothers, and no benefit for Black mothers. But when you control for those factors I mentioned, the infant mortality rates for young Black and Mexican mothers are lower — those are the mothers with low education and bad health care. Controlling for those things sort of simulates the decisions women face: given these things about me, what is the health effect of delay? (Of course, delaying could contribute to improving things, which is also part of the calculus.) Here are the adjusted age patterns:

Adjusted Probability of Infant Death, by Maternal Age: White, Black, and Mexican Mothers, U.S., 2013. Predicted probabilities of infant death generated by Stata margins command, adjusted for plurality, birth order, maternal education, prenatal care, payment source, and cigarette smoking during pregnancy; models estimated separately for white (A), black (B), and Mexican (C) mothers (see Tab. 1). Error bars are 95% confidence intervals. (A separate test showed the linear trend for Black women is statistically significant.) Data source: 2013 Period Linked Birth/Infant Death Public Use File, Centers for Disease Control.

My jaw kind of dropped. Infant mortality is mostly a measure of mothers’ health. Early childbearing looks a lot crazier for White women than for Black and Mexican women, and you can see why the messaging around delaying till you’re “ready” seems so out of tune to the less privileged (and that really means race more than class, in this case). Why wait? If women knew they had higher education, a good job, and decent health care awaiting them throughout their childbearing years, I think the decision tree would look a lot different.
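For readers curious about the mechanics, here is a rough sketch of this kind of adjustment in Python. It is not the working paper’s actual code (that analysis was done in Stata, with predicted probabilities from the margins command); the data below are simulated, and the variable names and codings are placeholders.

```python
# Sketch of the adjustment idea: fit a logistic regression of infant death on
# maternal age plus controls, then average predicted probabilities at each age
# group with everyone's other covariates held at their observed values. This
# roughly mimics what Stata's -margins- command produces. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50_000
df = pd.DataFrame({
    "age": rng.integers(15, 45, n),    # maternal age (placeholder)
    "educ": rng.integers(0, 4, n),     # education category (placeholder coding)
    "smoke": rng.integers(0, 2, n),    # smoked during pregnancy (placeholder)
})
# Simulated rare outcome loosely related to the covariates
logit = -6 + 0.003 * (df["age"] - 30) ** 2 + 0.5 * df["smoke"] - 0.2 * df["educ"]
df["death"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Five-year maternal age groups, entered as a categorical predictor
df["agegrp"] = pd.cut(df["age"], bins=[14, 19, 24, 29, 34, 39, 44],
                      labels=["15-19", "20-24", "25-29", "30-34", "35-39", "40-44"])

model = smf.logit("death ~ C(agegrp) + C(educ) + smoke", data=df).fit(disp=0)

# Adjusted probability at each age group: set everyone to that group,
# keep their own covariates, and average the predictions
adjusted = {}
for g in df["agegrp"].cat.categories:
    counterfactual = df.copy()
    counterfactual["agegrp"] = pd.Categorical([g] * n,
                                              categories=df["agegrp"].cat.categories)
    adjusted[g] = model.predict(counterfactual).mean()

print(pd.Series(adjusted, name="adjusted P(infant death)"))
```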

Of course, I have often said that delayed marriage is good for women. And delayed childbearing would be — should be — too, as long as it doesn’t put the health of the mother and her children at risk (and squander the healthy rearing years of their grandparents).

Please check out the working paper for more background and references, and details about my analysis.


Santa’s magic, children’s wisdom, and inequality

Eric Kaplan, channeling Francis Pharcellus Church, writes in favor of Santa Claus in the New York Times. The Church argument, written in 1897 and barely updated here, is that (a) you can’t prove there is no Santa, so agnosticism is the strongest possible objection, and (b) Santa enriches our lives and promotes non-rationalized gift-giving, “so we might as well believe in him.” That’s the substance of it. It’s a very common argument, identical to one employed against atheists in favor of belief in God, but more charming and whimsical when directed at killjoy Santa-deniers.

All harmless fun and existential comfort-food. But we have two problems that the Santa situation may exacerbate. First is science denial. And second is inequality. So, consider this an attempted joyicide.

Science

From Pew Research comes this Christmas news:

In total, 65% of U.S. adults believe that all of these aspects of the Christmas story – the virgin birth, the journey of the magi, the angel’s announcement to the shepherds and the manger story – reflect events that actually happened.

Here are the details:

[Figure: Pew Research Center chart detailing beliefs in elements of the Christmas story.]

So the Santa situation is not an isolated question. We’re talking about a population with a very strong tendency to express literal belief in fantastical accounts. This Christmas story is the soft leading edge of a more hardcore Christian fundamentalism. For the past 20 years, the General Social Survey (GSS) has found that a third of American adults agree with the statement, “The Bible is the actual word of God and is to be taken literally, word for word,” versus two other options: “The Bible is the inspired word of God but not everything in it should be taken literally, word for word”; and, “The Bible is an ancient book of fables, legends, history, and moral precepts recorded by men.” Those “actual word of God” people are less numerous than the virgin-birth believers, but they’re related.

Using the GSS I analyzed the attitudes of the “actual word of God” people (my Stata data and work files are here). Controlling for their sex, age, race, education, political ideology, and the year of the survey, they are much more likely than the rest of the population to:

  • Agree that “We trust too much in science and not enough in religious faith”
  • Oppose marriage rights for homosexuals
  • Agree that “people worry too much about human progress harming the environment”
  • Agree that “It is much better for everyone involved if the man is the achiever outside the home and the woman takes care of the home and family”

This isn’t the direction I’d like to push our culture. Of course, teaching children to believe in Santa doesn’t necessarily create “actual word of God” fundamentalists. But I expect it’s one risk factor.
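For illustration, here is a minimal sketch of the kind of model behind that list. The author’s actual analysis was done in Stata (the data and work files are linked above); this version assumes a hypothetical local GSS extract using the standard GSS mnemonics (bible, fefam, sex, age, race, educ, polviews, year), and the codings noted in the comments are assumptions, not verified against the codebook.

```python
# Sketch of a logistic regression comparing "actual word of God" respondents
# to everyone else on one attitude item, with the controls listed above.
# Assumes a hypothetical GSS extract file; codings are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

gss = pd.read_csv("gss_extract.csv")   # hypothetical extract file

controls = ["sex", "age", "race", "educ", "polviews", "year"]
sub = gss.dropna(subset=["bible", "fefam"] + controls).copy()

# Biblical literalists (assumed coding: bible == 1 is "actual word of God")
sub["word_of_god"] = (sub["bible"] == 1).astype(int)

# Example outcome: agreement with the traditional gender-roles item fefam
# (assumed coding: 1-2 = agree, 3-4 = disagree)
sub["trad_gender"] = (sub["fefam"] <= 2).astype(int)

model = smf.logit(
    "trad_gender ~ word_of_god + C(sex) + age + C(race) + educ + polviews + C(year)",
    data=sub,
).fit(disp=0)

print(model.summary())
print("Odds ratio, literalists vs. others:", np.exp(model.params["word_of_god"]))
```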

Children’s ways of knowing

A little reading led me to this interesting review of the research on young children’s skepticism and credulity, by Woolley and Ghossainy (citations below were mostly referred by them).

It goes back to Margaret Mead’s early work. In the psychological version of sociology’s reading history sideways, Mead in 1932 reported on the notion that young children not only know less, but know differently, than adults, in a way that parallels social evolution. Children were thought to be “more closely related to the thought of the savage than to the thought of the civilized man,” with animism in “primitive” societies being similar to the spontaneous thought of young children. This goes along with the idea of believing in Santa as indicative of a state of innocence.

In pursuit of empirical confirmation of the universality of childhood, Mead investigated the Manus tribe in Melanesia, who were pagans, looking for magical thinking in children: “animistic premise, anthropomorphic interpretation and faulty logic.”

Instead, she found “no evidence of spontaneous animistic thought in the uncontrolled sayings or games” over five months of continuous observation of a few dozen children. And while adults in the community attributed mysterious or random events to spirits and ghosts, children never did:

I found no instance of a child’s personalizing a dog or a fish or a bird, of his personalizing the sun, the moon, the wind or stars. I found no evidence of a child’s attributing chance events, such as the drifting away of a canoe, the loss of an object, an unexplained noise, a sudden gust of wind, a strange deep-sea turtle, a falling seed from a tree, etc., to supernaturalistic causes.

On the other hand, adults blamed spirits for hurricanes hitting the houses of people who behave badly, believed statues can talk, thought lost objects had been stolen by spirits, and said people who are insane are possessed by spirits. The grown men all thought they had personal ghosts looking out for them – with whom they communicated – but the children dismissed the reality of the ghosts that were assigned to them. They didn’t play ghost games.

Does this mean magical thinking is not inherent to childhood? Mead wrote:

The Manus child is less spontaneously animistic and less traditionally animistic than is the Manus adult [“traditionally” here referring to the adoption of ritual superstitious behavior]. This result is a direct contradiction of findings in our own society, in which the child has been found to be more animistic, in both traditional and spontaneous fashions, than are his elders. When such a reversal is found in two contrasting societies, the explanation must be sought in terms of the culture; a purely psychological explanation is inadequate.

Maybe people have the natural capacity for both animistic and realistic thinking, and societies differ in which trait they nurture and develop through children’s education and socialization. Mead speculated that the pattern she found had to do with the self-sufficiency required of Manus children. A Manus child must…

…make correct physical adjustments to his environment, so that his entire attention is focused upon cause and effect relationships, the neglect of which would result in immediate disaster. … Manus children are taught the properties of fire and water, taught to estimate distance, to allow for illusion when objects are seen under water, to allow for obstacles and judge possible clearage for canoes, etc., at the age of two or three.

Plus, perhaps unlike in industrialized society, their simple technology is understandable to children without the invocation of magic. And she observed that parents didn’t tell the children imaginary stories, myths, and legends.

I should note here that I’m not saying we have to choose between religious fundamentalism and a society without art and literature. The question is about believing things that aren’t true, and can’t be true. I’d like to think we can cultivate imagination without launching people down the path of blind credulity.

Modern credulity

For evidence that culture produces credulity, consider the results of a study that showed most four-year-old children understood that Old Testament stories are not factual. Six-year-olds, however, tended to believe the stories were factual, if their impossible events were attributed to God rather than rewritten in secular terms (e.g., “Matthew and the Green Sea” instead of “Moses and the Red Sea”). Why? Belief in supernatural or superstitious things, contrary to what you might assume, requires a higher level of cognitive sophistication than does disbelief, which is why five-year-olds are more likely to believe in fairies than three-year-olds. These studies suggest children have to be taught to believe in magic. (Adults use persuasion to do that, but teaching with rewards – like presents under a tree or money under a pillow – is of course more effective.)

Richard Dawkins has speculated that religion spreads so easily because humans have an adaptive tendency from childhood to believe adults rather than wait for direct evidence of dangers to accumulate (e.g., “snakes are dangerous”). That is, credulity is adaptive for humans. But Woolley and Ghossainy review mounting evidence for young children’s skepticism as well as credulity. That, along with the obvious survival disadvantages associated with believing everything you’re told, doesn’t support Dawkins’ story.

Children can know things either from direct observation or experience, or from being taught. So they can know dinosaurs are real if they believe books and teachers and museums, even if they can’t observe them living (true reality detection). And they can know that Santa Claus and imaginary friends are not real if they believe either authorities or their own senses (true baloney detection). Similarly, children also have two kinds of reality-assessment errors: false positive and false negative. Believing in Santa Claus is false positive. Refusing to believe in dinosaurs is false negative. In this figure, adapted from Woolley and Ghossainy, true judgment is in green, errors are in red.

[Figure, adapted from Woolley and Ghossainy, reconstructed as a table:]

                            Actually real                          Actually not real
Child judges it real        True reality detection (green)         False positive, e.g., Santa (red)
Child judges it not real    False negative, e.g., dinosaurs (red)  True baloney detection (green)

We know a lot about kids’ credulity (Santa Claus, tooth fairy, etc.). But, Woolley and Ghossainy write, their skepticism has been neglected:

It is perplexing that a young child could believe that his or her knowledge of the world is complete enough to deny the existence of anything new. It would seem that young children would understand that there are many things that exist in the real world that they have yet to experience. As intuitive as this seems, it appears not to be the case. From this perspective, development regarding beliefs about reality involves, in addition to decreased reliance on knowledge and experience, increased awareness of one’s own knowledge and its limitations for assessing reality status. This realization that one’s own knowledge is limited gradually inspires a waning reliance on it alone for making reality status decisions and a concomitant increase in the use of a wider range of strategies for assessing reality status, including, for example, seeking more information, assessing contextual cues, and evaluating the quality of the new information.

The “realization that one’s own knowledge is limited” is a vital development, ultimately necessary for being able to tell fact from fiction. But, sadly, it need not lead to real understanding – under some conditions, such as, apparently, the USA today, it often leads instead to reliance on misguided or dishonest authorities who compete with science to fill the void beyond what we can directly observe or deduce. Believing in Santa because we can’t disprove his existence is a developmental dead end, a backward-looking reliance on authority for determining truth. But so is failure to believe in germs or vaccines or evolution just because we can’t see them working.

We have to learn how to inhabit the green boxes without giving up our love for things imaginary, and that seems impossible without education in both science and art.

Rationalizing gifts

What is the essence of Santa, anyway? In Kaplan’s NYT essay it’s all about non-rationalized giving — for the sake of giving. The latest craze in Santa culture, however, says otherwise: Elf on the Shelf. According to Google Trends, interest in this concept has increased 100-fold since 2008. In case you’ve missed it, the idea is to put a cute little elf somewhere on a shelf in the house. You tell your kids it’s watching them, and that every night it goes back to the North Pole to report to Santa on their nice/naughty ratio. While the kids are sleeping, you move it to another shelf in the house, and the kids delight in finding it again each morning.

Foucault is not amused. Consider the Elf on the Shelf aftermarket accessories, like these handy warning labels, which threaten children with “no toys” if they aren’t on their “best behavior” from now on:

[Image: Elf on the Shelf warning label threatening “no toys” for children who aren’t on their “best behavior.”]

So is this non-rationalized gift-giving? Quite the opposite. In fact, rather than cultivating a whimsical love of magic, this is closer to a dystopian fantasy in which the conjured enforcers of arbitrary moral codes leap out of their fictional realm to impose harsh consequences in the real life of innocent children.

Inequality

What does all this mean for inequality? My developmental question is, what is the relationship between belief in Santa and social class awareness over the early life course? In other words, how long after kids realize there is class inequality do they go on believing in Santa? Where do these curves cross?

[Figure: hypothetical curves of children’s belief in Santa and awareness of class inequality, by age.]

Beyond worrying about how Santa rewards or punishes them individually, if children are to believe that Christmas gifts are doled out according to moral merit, then what are they to make of the obvious fact that rich kids get more than poor kids? Rich or poor, the message seems the same: children deserve what they get. Of course, I’m not the first to think of this:

[Image: meme making the same point about Santa and poor children.]

Conclusion

I can’t demonstrate that believing in Santa causes children to believe that economic inequality is justified by character differences between social classes. Or that Santa belief undermines future openness to science and logic. But those are hypotheses.

Between the anti-science epidemic and the pervasive assumption that poor people deserve what they get, this whole Santa enterprise seems risky. Would it be so bad, so destructive to the wonder that is childhood, if instead of attributing gifts to supernatural beings we instead told children that we just buy them gifts because we love them unconditionally and want them — and all other children — to be happy?


Certain death? Black-White death dispersions

New research report, after rumination.

Knowing the exact moment of death is a common fantasy. How would it change your life? Here’s a concrete example: when I got a usually-incurable form of cancer, and the oncologist told me the median survival for my condition was 10 to 20 years, I treated myself to the notion that at least I wasn’t going to the dentist anymore (6 years later, with no detectable cancer, I’m almost ready to give up another precious hour to dentistry).

I assume most people don’t want to die at a young age, but is that because it makes life shorter or because it makes them think about death sooner? When a child discovers a fear of death, isn’t it tempting to say, “don’t worry: you’re not going to die for a long, long time”? The reasonable certainty of long life changes a lot about how we think and interact (one of the many reasons you can’t understand modernity without knowing some basic demography). I wrote in that cancer post, “Nothing aggravates the modern identity like incalculable risk.” I don’t know that’s literally true, but I’m sure there’s some connection between incalculability and aggravation.

Consider people who have to decide whether to get tested for the genetic mutation that causes Huntington’s disease. It’s incurable and strikes in what should be “mid”-life. Among people with a family history of Huntington’s disease, Amy Harmon reported in the New York Times, the younger generation increasingly wants to know:

More informed about the genetics of the disease than any previous generation, they are convinced that they would rather know how many healthy years they have left than wake up one day to find the illness upon them.

The subject of Harmon’s story set to calculating (among other things) whether she’d finish paying off her student loans before her first symptoms appeared.

The personal is demographic

So what is the difference between two populations, one of which has a greater variance in age at death than the other? (In practice, greater variance usually means more early deaths, and the risk of a super long life probably isn’t as disturbing as fear of early death.) Researchers call the prevalence of early death — as distinct from a lower average age at death — “life disparity,” and it probably has a corrosive effect on social life:

Reducing early-life disparities helps people plan their less-uncertain lifetimes. A higher likelihood of surviving to old age makes savings more worthwhile, raises the value of individual and public investments in education and training, and increases the prevalence of long-term relationships. Hence, healthy longevity is a prime driver of a country’s wealth and well-being. While some degree of income inequality might create incentives to work harder, premature deaths bring little benefit and impose major costs. (source)

That’s why reducing life disparity may be as important socially as increasing life expectancy (the two are highly, but not perfectly, correlated).

New research

Consider a new paper in Demography by Glenn Firebaugh and colleagues, “Why Lifespans Are More Variable Among Blacks Than Among Whites in the United States.”

I previously reported on the greater life disparity and lower life expectancy among Blacks than among Whites. Here is Firebaugh et al.’s representation of the pattern (the distribution of 100,000 deaths for each group):

[Figure: distribution of 100,000 deaths by age at death, for Blacks and Whites.]

Black deaths are earlier, on average, but also more dispersed. The innovation of the paper is that they decompose the difference in dispersion according to the causes of death and the timing of death for each cause. The difference in death timing results from some combination of three patterns. Here’s their figure explaining that (to which I added colors and descriptions, as practice for teaching myself to use an illustration program):

[Figure: schematic of the three components of the Black-White difference in death timing: spread, allocation, and timing.]

The overall difference in death timing can result from the same causes of death, with different variance in timing for each around the same mean (spread); different causes of death, but with the same age pattern of death for each cause (allocation); and the same causes of death, but different average age at death for each (timing). Above I said greater variability in age at death usually means more early deaths, but with specific causes that’s not necessarily the case. For example, one group might have most of its accidental deaths at young ages, while another has them more spread over the life course.
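This is not Firebaugh and colleagues’ actual method, which decomposes the Black-White difference across groups. But the raw material their spread/allocation/timing decomposition works with can be sketched with the law of total variance: within a single population, the variance in age at death is the death-share-weighted variance within each cause plus the weighted squared deviations of cause-specific mean ages from the overall mean. The numbers below are placeholders.

```python
# Sketch of the intuition (not Firebaugh et al.'s exact method): split the
# variance in age at death into a within-cause piece (related to spread) and a
# between-cause piece (which reflects how deaths are allocated across causes
# with different mean ages). Data are made-up placeholders:
# (cause, share of deaths, mean age at death, sd of age at death).

causes = [
    ("heart disease", 0.30, 72.0, 14.0),
    ("cancer",        0.28, 68.0, 13.0),
    ("accidents",     0.07, 45.0, 20.0),
    ("homicide",      0.03, 30.0, 12.0),
    ("other",         0.32, 70.0, 16.0),
]

overall_mean = sum(p * m for _, p, m, _ in causes)

within  = sum(p * s**2 for _, p, _, s in causes)                    # within-cause variance
between = sum(p * (m - overall_mean)**2 for _, p, m, _ in causes)   # between-cause variance
total   = within + between

print(f"Mean age at death: {overall_mean:.1f}")
print(f"Within-cause share of total variance:  {within / total:.0%}")
print(f"Between-cause share of total variance: {between / total:.0%}")
```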

Overall, the spread effect matters most. They conclude that even if Blacks and Whites died from the same causes, 87% of the difference in death timing would persist because of the greater variance in age at death for every major cause. There are differences in causes, but those mostly offset. Especially dramatic is the greater variance in the timing of heart disease (especially for women), cancer, and asthma (presumably reflecting more early deaths). The offsetting causes are higher Black rates of homicide (for men) and HIV/AIDS deaths, versus high rates of suicide and accidental deaths among White men (especially drug overdoses).

The higher variance in causes of death seems consistent with problems of disease prevention and disparities in treatment access and quality. (I’m not expert on this stuff, so please don’t take it exclusively from me — read the paywalled paper or check with the authors if you want to pursue this.)

Are these differences in death timing enough to create differences in social life and outlook, or health-related behavior, between these two groups? I don’t know, but it’s worth considering.


Why you can’t understand the texting and driving problem in one chart, in one chart

The other day I argued that focus on the “texting-while-driving epidemic” diverts attention from the dangers of driving generally. Here’s a different direction.

The contemporary fascination with using data to tell stories runs up against the need to tell stories in the length of a tweet or in one chart, sometimes resulting in data-focused news that uninforms people rather than informing them.

So, I may not be able to tell the whole teen car death story in one chart, but I can show that you can’t reduce the whole teen car death story to a texting epidemic in one chart (source).

[Figure: rate of teen driver involvement in fatal crashes alongside the rise in text messaging.]

The rate at which teen drivers are involved in fatal crashes has fallen 55% in the last 10 years, faster than the rate for all other age groups (which are also falling). This is part of a long term trend, which has accelerated in the last 10 years. Between 2002 and 2008 alone, the number of text messages sent in the US increased from almost none to more than 100 million per month.* According to the Centers for Disease Control and Prevention’s 2011 national Youth Risk Behavior Survey, reported in Pediatrics, 45% of teens say they texted while driving in the past 30 days — compared with only 10% who said they drove when they had been drinking. An astonishing 12% of teens said they text while driving every day.**

Far be it from me to decide what the public pays attention to. However, we should understand that in this era of distraction there is an opportunity cost to focusing on any one thing. For example (source):

[Figure: trends in motor vehicle accident deaths versus suicides among teens.]

Incidentally, there is a possible clue in that Pediatrics article as to why accident rates aren’t rising due to all this texting. The teens who text while driving are much more likely to engage in other risky behaviors: driving drunk, riding with drunk drivers, and not wearing seatbelts. So texting deaths may to some extent be displacing deaths those same teens would have caused in other ways.

Follow this series of posts at the texting tag.

Notes:

*The linked paper argues that texting is contributing to the increase in distracted driving deaths, based on cellphone subscription rates and texts sent per month. It’s plausible but not entirely convincing, because I have doubts about the measure of distracted driving deaths (which rely on local police reports, fluctuate wildly, and include lots of labels, including “carelessness”). They don’t analyze the trend in total traffic deaths.

**This fact may be the source of the myth that 11 teens die from texting and driving every day (less than 8 die daily from all motor vehicle accidents), because someone got carried away by lab studies showing texting while driving was as dangerous as drinking and driving and just extrapolated.

 


Cell phones don’t kill people, cars kill people

A powerful new documentary by Werner Herzog is making the rounds (presented by the phone companies), showing the consequences of accidents caused by phone-distracted driving. It got me to revisit my posts on mobile phones and traffic accidents and do some more speculating about this.

A new report from the federal government shows that, of 29,757 fatal crashes in 2011, 10% were reported to involve a distracted driver. Of those distracted-driver crashes, 12% involved a driver using a cell phone. Thus, the 350 fatal crashes in which a driver on a cell phone was reported to be involved account for 1.2% of all fatal crashes. (This is probably an undercount, as accidents can’t be coded this way without witnesses or a driver confession.)
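To make the arithmetic explicit, here is a back-of-the-envelope version in Python, using the figures as reported (the 10% and 12% are rounded, so the products will not reproduce the 350 exactly):

```python
# Back-of-the-envelope check on the shares reported above.
total_fatal_crashes = 29_757
distracted = 0.10 * total_fatal_crashes   # roughly 2,976 crashes involving a distracted driver
phone_reported = 350                      # fatal crashes with cell phone use reported

print(f"Phone share of distracted-driver crashes: {phone_reported / distracted:.1%}")           # ~11.8%
print(f"Phone share of all fatal crashes:         {phone_reported / total_fatal_crashes:.1%}")  # ~1.2%
```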

Meanwhile, from 1994 to 2011, mobile phone subscriptions increased more than 1200%, from 24 million to 316 million. During that time, the number of traffic fatalities per mile driven has fallen 36%, and property-damage-only accidents per mile have fallen 31%. The improved safety of American roads is a big accomplishment. Here are the trends:

[Figure: trends in mobile phone subscriptions, traffic deaths, and accidents per mile driven. Sources: Accidents and deaths: this and earlier reports; Subscribers: this.]

According to the US Department of Transportation, 5% of drivers are observed talking on handheld phones at any one time. Rates of distraction are presumably higher than this. There is an epidemic of distraction — and there is voluminous evidence that such distraction is dangerous — coinciding with large, continuous declines in traffic dangers.

How is this possible? Either (a) there is no connection between phones and accidents; (b) there is a positive causal connection, but it is swamped by whatever is making the roads safer; or, (c) cell phones are making the roads safer (say, by displacing other, more dangerous distractions, or by causing people to drive cautiously while they’re doing something they know is dangerous). It’s just a question. Anyway.

Cars kill people

The 1971 Keep America Beautiful Campaign featured this video: “People Start Pollution. People Can Stop It.” It shows intense industrial pollution in the background as an American Indian paddles his canoe. Then:

Some people have a deep, abiding respect for the natural beauty that was once this country. [Someone throws a bag of fast food waste out of a passing car, and it lands at the feet of the canoer, now standing on the shore.] And some people don’t.

The “crying Indian” ad tried to hang the global pollution crisis on the personal malfeasance of individuals who litter (which is a real problem).

Is the anti-phone campaign trying to hang the problem of 30,000 road deaths per year in the U.S. on the reckless behavior of individuals who drive distracted? Distracted people causing carnage and destruction on the roads is terrible, of course. But a system of transportation that relies on people driving around in private cars is a much more fundamental problem.

I’m sure someone else has figured out how many lives are saved (presumably) from using public transportation versus private cars, but I didn’t easily find it. In addition to the environmental health benefits, clearly countries where people get around in cars have a lot more road deaths:

[Figure: road deaths and car versus rail passenger travel, by country.]

Sources: Passenger miles, road deaths, country populations.

The number of rail deaths is very small: in these countries the car/rail death ratio averaged 36, almost three-times the car/rail mile ratio. (The U.S. is not on the chart because I didn’t have rail miles traveled. But the U.S. road death rate of 10.4 per 100,000 would make us 4th in this group, behind only Greece, Poland, and Portugal.)
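A quick illustration of the ratio arithmetic in that sentence; the mile ratio below is inferred from the text (“almost three times”), not taken from the underlying data:

```python
# If car deaths outnumber rail deaths about 36-to-1, while car passenger miles
# outnumber rail miles roughly 12-to-1 (an inference from "almost three times",
# not a figure from the data), then the per-mile death rate on the roads is
# about three times the rail rate.
death_ratio = 36      # car deaths / rail deaths, averaged across these countries
mile_ratio = 12.5     # assumed car passenger miles / rail passenger miles

print(f"Deaths per passenger mile, car vs. rail: about {death_ratio / mile_ratio:.1f}x")
```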

Let’s put it this way: Some people have a deep, abiding respect for the safety of their fellow citizens. And some people don’t. Public transportation saves lives.


Fact pattern: Women’s life expectancy advantage

Women live longer than men in all but a small handful of countries. Is that “natural”?

A future post will deal with this more. But here’s a preview.

It partly depends what you think is a “natural” fertility rate. It’s hard to find societies with really high fertility rates nowadays — hardly any countries have 6 or more children per woman. But where fertility rates are higher, women’s advantage in life expectancy is less.

[Figure: women’s life expectancy advantage plotted against the total fertility rate, by country.]

Why? Some women die in childbirth, but that’s not a huge factor in life expectancy anymore, thankfully. In sub-Saharan Africa about 400-600 mothers die for every 100,000 births, about half of 1%, which isn’t going to drive overall life expectancy that much. Still, those places are rough places to be a woman, apparently.

Some distinctly unnatural elements are at work — besides war, murder, accidents and suicide — especially smoking, which has enlarged the female life expectancy advantage in the U.S. and Europe dramatically. The World Health Organization has smoking rates by sex for 133 countries or so. The differences are huge. Only Austria has more women than men smoking. The average prevalence gap is 21 percentage points, and in Indonesia the gap is 64 percentage points (67% for men versus 3% for women). In a bunch of Arab countries almost half the men smoke, along with almost no women.

The effect of the smoking gap is not apparent in the recent cross-sectional data, however. It takes a few decades after men take up smoking at higher rates for the mortality effects to show up (the peak female advantage for the U.S. was in the 1970s). But this could be an important factor in the world’s life expectancy gender gap for decades to come.
