Tag Archives: healthcare

Philip Cohen at 50, having been 14 in 1981

This is a sociological reflection about life history. It’s about me because I’m the person I know best, and I have permission to reveal details of my life.

I was born in August 1967, making me 50 years old this month. But life experience is better thought of in cohort terms. Where was I and what was I doing, with whom, at different ages and stages of development? Today I’m thinking of these intersections of biography and history in terms of technology, music, and health.

Tech

We had a TV in my household growing up, it just didn’t have a remote control or cable service, or color. We had two phones, they just shared one line and were connected by wires. (After I moved out my parents got an answering machine.) When my mother, a neurobiologist, was working on her dissertation (completed when I was 10) in the study my parents shared, she used a programmable calculator and graph paper to plot the results of her experiments with pencil. My father, a topologist, drew his figures with colored pencils (I can’t describe the sound of his pencils drawing across the hollow wooden door he used for a desktop, but I can still hear it, along with the buzz of his fluorescent lamp). A couple of my friends had personal computers by the time I started high school, in 1981 (one TRS-80 and one Apple II), but I brought a portable electric typewriter to college in 1988. I first got a cell phone in graduate school, after I was married.

The first portable electronic device I had (besides a flashlight) was a Sony Walkman, in about 1983, when I was 16. At the time nothing mattered to me more than music. Music consumed a large part of my imagination and formed the scaffolding of most socializing. The logistics of finding out about, finding, buying, copying, and listening to music played an outsized role in my daily life. From about 1980 to 1984, most of the money I made at my bagel store job went to stereo equipment, concerts, records, blank tapes for making copies, and eventually drums (as well as video games). I subscribed to magazines (Rolling Stone, Modern Drummer), hitchhiked across town to visit the record store, pooled money with friends to buy blank tapes, spent hours copying records and labeling tapes with my friends, and made road trips to concerts across upstate New York (clockwise from Ithaca: Geneva, Buffalo, Rochester, Syracuse, Saratoga, Binghamton, New York City, Elmira).

As I was writing this, I thought, “I haven’t listened to Long Distance Voyager in ages,” tapped it into Apple Music on my phone, and started streaming it on my Sonos player in a matter of seconds, which doesn’t impress you at all – but the sensory memories it evokes are shockingly vivid (like an acid flashback, honestly) – and having the power to evoke that so easily is awesome, in the old sense of that word.

Some of us worked at the Cornell student radio station (I eventually spent a while in the news department), whose album-oriented rock playlist heavily influenced the categories and relative status of the music we listened to. The radio station also determined what music stayed in the rotation – what eventually became known by the then-nonexistent term classic rock – and what would be allowed to slip away; it was history written in real time.

It’s like 1967, in 1981

You could think of the birth cohort of 1967 as the people who entered the world at the time of “race riots,” the Vietnam and Six Day wars, the Summer of Love, the 25th Amendment (you’re welcome!), Monterey Pop, Sgt. Pepper’s, and Loving v. Virginia. Or you could flip through Wikipedia’s list of celebrities born in 1967 to see how impressive (and good looking) we became, people like Benicio del Toro, Kurt Cobain, Paul Giamatti, Nicole Kidman, Pamela Anderson, Will Ferrell, Vin Diesel, Philip Seymour Hoffman, Matt LeBlanc, Michael Johnson, Liev Schreiber, Julia Roberts, Jimmy Kimmel, Mark Ruffalo, and Jamie Foxx.

But maybe it makes more sense to think of us as the people who were 14 when John Lennon made his great commercial comeback – with an album no one took seriously – only after being murdered. The experiences at age 14, in 1981, define me more than what was happening at the moment of my birth. Those 1981 hits from album-oriented rock mean more to me than the Doors’ debut in 1967. My sense of the world changing in that year was acute – because it was 1981, or because I was 14? In music, old artists like the Moody Blues and the Rolling Stones released albums that seemed like complete departures, and more solo albums – by people like Stevie Nicks and Phil Collins – felt like stakes through the heart of history itself (I liked them, actually, but they were also impostors).

One moment that felt at the time like a historical turning point was the weekend of September 19, 1981. My family went to Washington for the Solidarity Day rally, at which a quarter million people demonstrated against President Reagan and for organized labor, a protest fueled by the new president’s firing of the PATCO air traffic controllers the previous month (and inspired by the Solidarity union in Poland, too). Besides hating Reagan, we also feared a nuclear war that would end humanity – I mean really feared it, real nightmare fear.


A piece of radio news copy I wrote and read at WVBR, probably 1983. The slashes are where I’m going to take a breath. “Local AQX” is the name of the tape cartridge with the sound bite (“actuality”) from Alfred Kahn, and “OQ:worse” means that’s the last word coming out of the clip.

On the same day as Solidarity, while we were in D.C., was Simon and Garfunkel’s Concert in Central Park. They were all of 40 (literally my mother’s age), tired old people with a glorious past (I’m sure I ignored the rave reviews). As I look back on these events – Reagan, the Cold War, sell-out music – in the context of what I thought of as my emerging adulthood, they seemed to herald a dark future, in which loss of freedom and individuality, the rise of the machines, and runaway capitalism were reflected in the decline of rock music. (I am now embarrassed to admit that I even hated disco for a while, maybe even while I listened 20 times, dumbstruck, to an Earth, Wind, and Fire album I checked out of the library.)

I don’t want to overdramatize the drama of 1981; I was basically fine. I came out with a penchant for Camus, a taste for art rock, and leftism, which were hardly catastrophic traits. Still, those events, and their timing, probably left a mark of cynicism, sometimes nihilism, which I carry today.


About 1984, with Daniel Besman (who later died) in Ithaca. Photo by Linda Galgani.

Data aside

Maybe one reason 1981 felt like a musical watershed to me is that it really was – that pop music just got worse in the 1980s compared with the 1970s. To test (I mean prove, really) that hypothesis, I fielded a little survey (more like a game) that asked people to rate the same artists in both decades. I chose 58 artists by flipping through album charts from 1975-1984 and finding those that charted in both decades; then I added some suggestions from early respondents. The task required scoring each artist twice, from 1 (terrible) to 5 (great), once for each period, and some people found it difficult; to keep it from being too onerous, I set the survey to serve each person just 10 artists at random (a couple of people did it more than once). The participants – recruited on Facebook, Twitter, and Reddit – were 3/4 men, 3/4 over 40, and 3/4 White and US-born. The average artist was rated 11 times in each period (range 5 to 19). (Feel free to play along or share this link; I’ll update it if more come in.)

The results look very bad for the 1980s. The average change was a drop of .59, and only three acts showed noticeable improvement: Pat Benatar, Michael Jackson, and Prince (and maybe Talking Heads and the lowly Bryan Adams). Here is the full set:

[Figure: each artist’s average rating in the 1970s and the 1980s]
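For anyone who wants to play along at home, here is a minimal sketch of how that tabulation could be done. This is not my actual analysis code, and the file and column names are hypothetical placeholders:

```python
# A minimal sketch of the decade-to-decade comparison; file and column names are hypothetical.
import pandas as pd

# One row per rating: which respondent rated which artist, in which period, 1 (terrible) to 5 (great).
ratings = pd.read_csv("ratings.csv")  # columns: respondent, artist, period, rating

# Mean rating per artist in each period, then the 1980s-minus-1970s change.
means = ratings.pivot_table(index="artist", columns="period", values="rating", aggfunc="mean")
means["change"] = means["1980s"] - means["1970s"]

print(means["change"].mean())       # average change across artists (the survey showed a drop of about .59)
print(means["change"].nlargest(5))  # the few acts that improved
```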

Technology and survival

I don’t think I would have, at age 14, given much weight to the idea that my life would repeatedly be saved by medical technology, but now that seems like business as usual, to me anyway. I guess as long as there’s been technology there have been people who owe their lives to it (and of course we’re more likely to hear from them than from those who didn’t make it). But the details are cohort-specific. These days we’re a diverse club of privileged people, our conditions, or their remnants, often hidden like pebbles wedged under the balls of our aging feet, gnawing reminders of our bodily precarity.

Family lore says I was born with a bad case of jaundice, probably something like Rh incompatibility, and needed a blood transfusion. I don’t know what would have happened without it, but I’m probably better off now for that intervention.

Sometime in my late teens I reported to a doctor that I had periodic episodes of racing heartbeat. After a brief exam I was sent home with no tests, but advised to keep an eye on it; maybe mitral valve prolapse, he said. I usually controlled it by holding my breath and exhaling slowly. We found out later, in 2001 – after several hours in the emergency room at about 200 very irregular beats per minute – that it was actually a potentially much more serious condition called Wolff-Parkinson-White syndrome. The condition is easily diagnosed nowadays, as software can identify the tell-tale “delta wave” on the ECG, and the condition is listed right there in the test report.

[Image: ECG rhythm strip showing the Wolff-Parkinson-White delta wave]

Two lucky things combined: (a) I wasn’t diagnosed properly in the 1980s (which might have led to open-heart surgery or a lifetime of unpleasant medication); and (b) I didn’t drop dead before it was finally diagnosed in 2001. They fixed it with a low-risk radiofrequency ablation, just running a few wires up through my arteries to my heart, where they lit up to burn off the errant nerve ending, all done while I was almost awake, watching the action on an x-ray image and – I believed, anyway – feeling the warmth spread through my chest as the doctor typed commands into his keyboard.

Diverticulitis is also pretty easily diagnosed nowadays, once they fire up the CT scanner, and usually successfully treated with antibiotics, though sometimes you have to remove some of your colon. Just one of those things people don’t die from as much anymore (though it’s also more common than it used to be, maybe just because we don’t die from other things as much). I didn’t feel much like surviving when it was happening, but I suppose I might have made it even without the antibiotics. Who knows?

More interesting was the case of follicular lymphoma I discovered at age 40 (I wrote about it here). There is a reasonable chance I’d still be alive today if we had never biopsied the swollen lymph node in my thigh, but that’s hard to say, too. Median survival from diagnosis is supposed to be 10 years, but I had a good case (a rare stage I), and with all the great new treatments coming online the confidence in that estimate is fuzzy. Anyway, since the cancer was never identified anywhere else in my body, the treatment was just removing the lymph node and a little radiation (18 visits to the radiation place, a couple of tattoos for aiming the beams, all in the summer with no work days off). We have no way (with current technology) to tell if I still “have” it or whether it will come “back,” so I can’t yet say technology saved my life from this one (though if I’m lucky enough to die from something else — and only then — feel free to call me a cancer “survivor”).

It turns out that all this life saving also bequeaths a profound uncertainty, which leaves one with an uneasy feeling and a craving for antianxiety medication. I guess you have to learn to love the uncertainty, or die trying. That’s why I cherish this piece of a note from my oncologist, written as he sent me out of the office with instructions never to return: “Your chance for cure is reasonable. ‘Pretest probability’ is low.”


From my oncologist’s farewell note.

Time travel

It’s hard to imagine what I would have thought if someone told my 14-year-old self this story: One day you will, during a Skype call from a hotel room in Hangzhou, where you are vacationing with your wife and two daughters from China, decide to sue President Donald Trump for blocking you on Twitter. On the other hand, I don’t know if it’s possible to know today what it was really like to be me at age 14.

In the classic time travel knot, a visitor from the future changes the future by going back and changing the past. The cool thing about mucking around with your narrative like I’m doing in this essay (as Walidah Imarisha has said) is that, by altering our perception of the past, we do change the future. So time travel is real. Just like it’s funny to think of my 14-year-old self having thoughts about the past, I’m sure my 14-year-old self would have laughed at the idea that my 50-year-old self would think about the future. But I do!


Weathering and delayed births, get your norms off my body edition

You can skip down to the new data and analysis — or go straight to my new working paper — if you don’t need the preamble diatribe.

I have complained recently about the edict from above that poor (implying Black) women should delay their births until they are “financially ready” — especially in light of the evidence on their odds of marriage during the childbearing years. And then we saw what seemed like a friendly suggestion that poor women use more birth control lead to some nut on Fox News telling Rebecca Vallas, who spoke up for raising the minimum wage:

A family of three is not supposed to be living on the minimum wage. If you’re making minimum wage you shouldn’t be having children and trying to raise a family on it.

As if minimum wage is just a phase poor people can expect to pass through only briefly, on their way to middle class stability — provided they don’t piss it away by having children they can’t “afford.” This was a wonderful illustration of the point Arline Geronimus makes in this excellent (paywalled) paper from 2003, aptly titled, “Damned if you do: culture, identity, privilege, and teenage childbearing in the United States.” Geronimus has been pointing out for several decades that Black women face increased health risks and other problems when they delay their childbearing, even as White women have the best health outcomes when they delay theirs. This has been termed “the weathering hypothesis.” In that 2003 paper, she explores the cultural dynamic of dominance and subordination that this debate over birth timing entails. Here’s a free passage (where dominant is White and marginal is Black):

In sum, a danger of social inequality is that dominant groups will be motivated to promote their own cultural goals, at least in part, by holding aspects of the behavior of specific marginal groups in public contempt. This is especially true when this behavior is viewed as antithetical or threatening to social control messages aimed at the youth in the dominant group. An acknowledgment that teen childbearing might have benefits for some groups undermines social control messages intended to convince dominant group youth to postpone childbearing by extolling the absolute hazards of early fertility. Moreover, to acknowledge cultural variability in the costs and consequences of early childbearing requires public admission of structural inequality and the benefits members of dominant groups derive from socially excluding others. One cannot explain why the benefits of early childbearing may outweigh the costs for many African Americans without noting that African American youth do not enjoy the same access to advanced education or career security enjoyed by most Americans; that their parents are compelled to be more focused on imperatives of survival and subsistence than on encouraging their children to engage in extended and expensive preparation for the competitive labor market; indeed, that African Americans cannot even take their health or longevity for granted through middle age (Geronimus, 1994; Geronimus et al., 2001). And one cannot explain why these social and health inequalities exist without recognizing that structural barriers to full participation in American society impede the success of marginalized groups (Dressler, 1995; Geronimus, 2000; James, 1994). To acknowledge these circumstances would be to contradict the broader societal ethic that denies the existence of social inequality and is conflicted about cultural diversity. And it would undermine the ability the dominant group currently enjoys to interpret their privilege as earned, the just reward for their exercise of personal responsibility.

But the failure to acknowledge these circumstances results in a disastrous misunderstanding. As a society, we have become caught in an endless loop that rationalizes, perhaps guarantees, the continued marginalization of urban African Americans. In the case at hand, by misunderstanding the motivation, context, and outcomes of early childbearing among African Americans, and by implementing social welfare and public health policies that follow from this misunderstanding, the dominant European American culture reinforces material hardship for and stigmatization of African Americans. Faced with these hardships, early fertility timing will continue to be adaptive practice for African Americans. And, reliably, these fertility and related family “behaviors” will again be unfairly derided as antisocial. And so on.

Whoever said demography isn’t theoretical and political?

A simple illustration

In Geronimus’s classic weathering work, she documented disparities in healthy life expectancy, which is the expectation of healthy, or disability-free, years of life ahead. When a poor 18-year-old Black woman considers whether or not to have a child, she might take into account her own healthy life expectancy — how long can she count on remaining healthy and active? — as well as, and this is crucial, that of her 40-year-old mother, who is expected to help out with the child-rearing (they’re poor, remember). Here’s a simple illustration: the percentage of Black and White mothers (women living in their own households, with their own children) who have a work-limiting disability, by age and education:

[Figure: percentage of mothers with a work-limiting disability, by age, race, and education]

Not too many disabilities at age 20, but race and class kick in hard over these parenting years, till by their 50s one-in-five Black mothers with high school education or less has a disability, compared with one-in-twenty White mothers who’ve gone on to more education. That looming health trajectory is enough — Geronimus reasonably argues — to affect women’s decisions on whether or not to have a child (or go through with an accidental pregnancy). But for the group (say, Whites who aren’t that poor) who have a reasonable chance of getting higher education, and making it through their intensive parenting years disability-free, the economic consequence of an early birth weighs much more heavily.

Some new analysis

As I was thinking about all this the other day, I went to check on the latest infant mortality statistics, since that’s where Geronimus started this thread — with the observation that White women’s chance of a baby dying declines with age, while Black women’s doesn’t. And I noticed there is a new Period Linked Birth-Infant Death Data File for 2013. This is a giant database of all the births — with information from their birth certificates — linked to all the infant deaths from the same year. These records have been used for analyzing infant mortality dozens of times, including in pursuit of the weathering hypothesis, but I didn’t see any new analyses of the 2013 files, except the basic report the National Center for Health Statistics put out. The outcome is now a working paper at the Maryland Population Research Center.

The gist of the result is, to me, kind of shocking. Once you control for some basic health, birth, and socioeconomic conditions (plurality, parity, prenatal care, education, health insurance type, and smoking during pregnancy), the risk of infant mortality for Black mothers increases linearly with age: the longer they wait, the greater the risk. For White women the risk follows the familiar (and culturally lionized) U-shape, with the lowest risk in the early 30s. Mexican women (the largest Hispanic group I could include) are somewhere in between, with a sharp rise in risk at older ages, but no real advantage to waiting from 18 to 30.
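For the technically inclined, here is a rough sketch of the kind of model behind those adjusted estimates: a logistic regression of infant death on maternal age plus the controls, fit separately by group, with predicted probabilities averaged over the data. The paper itself used Stata and its margins command; this Python version is only a sketch, and the file and variable names are hypothetical:

```python
# A rough sketch (not the paper's Stata code) of the adjusted age pattern for one group.
# File and variable names are hypothetical stand-ins for the linked birth/infant death file.
import pandas as pd
import statsmodels.formula.api as smf

births = pd.read_csv("linked_births_2013.csv")   # one row per birth, infant_death coded 0/1
black = births[births.race_eth == "black"]

controls = "plurality + parity + prenatal_care + educ + pay_source + smoked"
model = smf.logit(f"infant_death ~ C(age_group) + {controls}", data=black).fit()

# Average predicted probability of infant death at each maternal age, holding the
# controls at their observed values (roughly what Stata's margins command does).
for age in sorted(black.age_group.unique()):
    p = model.predict(black.assign(age_group=age)).mean()
    print(age, round(p * 1000, 2), "per 1,000")
```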

I’ll show you (and these rates will differ a little from official rates for various technical reasons). First, the unadjusted infant mortality rates by maternal age:

Infant Death Rates, by Maternal Age: White, Black, and Mexican Mothers, U.S., 2013. Infant death rates per 1,000 live births for non-Hispanic white (N = 1,925,847), non-Hispanic black (N = 533,341), and Mexican origin (N = 501,390) mothers. Data source: 2013 Period Linked Birth/Infant Death Public Use File, Centers for Disease Control.

These raw rates show the big health benefit to delay for White women, a smaller benefit for Mexican mothers, and no benefit for Black mothers. But when you control for those factors I mentioned, the infant mortality rates for young Black and Mexican mothers are lower — those are the mothers with low education and bad health care. Controlling for those things sort of simulates the decisions women face: given these things about me, what is the health effect of delay? (Of course, delaying could contribute to improving things, which is also part of the calculus.) Here are the adjusted age patterns:

Adjusted Probability of Infant Death, by Maternal Age: White, Black, and Mexican Mothers, U.S., 2013. Predicted probabilities of infant death generated by Stata margins command, adjusted for plurality, birth order, maternal education, prenatal care, payment source, and cigarette smoking during pregnancy; models estimated separately for white (A), black (B), and Mexican (C) mothers (see Tab. 1). Error bars are 95% confidence intervals. (A separate test showed the linear trend for Black women is statistically significant.) Data source: 2013 Period Linked Birth/Infant Death Public Use File, Centers for Disease Control.

My jaw kind of dropped. Infant mortality is mostly a measure of mothers’ health. Early childbearing looks a lot crazier for White women than for Black and Mexican women, and you can see why the messaging around delaying till you’re “ready” seems so out of tune to the less privileged (and that really means race more than class, in this case). Why wait? If women knew they had higher education, a good job, and decent health care awaiting them throughout their childbearing years, I think the decision tree would look a lot different.

Of course, I have often said that delayed marriage is good for women. And delayed childbearing would be — should be — too, as long as it doesn’t put the health of the mother and her children at risk (and squander the healthy rearing years of their grandparents).

Please check out the working paper for more background and references, and details about my analysis.


Blame the poor, “We tried generosity and it just doesn’t work” edition

With all the money we have given them, why are the poor still poor?

One of the meanest right-wing statistical memes about poverty has been popping up a lot this fall. I saw it most recently in this commentary by Christine Kim, who wrote:

Since the mid-1960s, government has spent more than $19.8 trillion (in 2011 dollars) in total on means-tested welfare programs. With 80 such federal programs, targeted government spending for low-income families – including on health, education, housing, and income supports – totaled nearly $930 billion in fiscal 2011 alone. If converted to cash, this sum would be four times what is needed to lift every poor family out of poverty. About half of this annual means-tested spending goes to families with children. If divided among the 14 million poorest families with children, each family would receive about $33,000. Why, then, have poverty rates remained so high for so long? Clearly, the solution to alleviating poverty is not more of the same.

Brookings’ Ron Haskins used the same numbers, rearranged slightly, to write this in November:

We already spend more than enough money on means-tested programs for poor and low-income people to bring them all out of poverty. There were about 46.5 million people in poverty in 2012, a year in which spending on means-tested programs was around $1 trillion. If that money were divided up among the poor, we could spend about $22,000 per person. For a single mother and two children, that would be over $65,000. The poverty level in 2013 for a mother and two children is less than $20,000. So this strategy would work, but giving so much money to young, able-bodied adults would not be tolerated by the public.
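The arithmetic behind both quotes is just division (figures as quoted, not endorsed):

```python
# Checking the quoted figures; dollar amounts are as given in the two passages above.
kim_spending = 930e9          # means-tested spending, FY 2011
families_share = 0.5          # "about half" goes to families with children
poor_families = 14e6          # "14 million poorest families with children"
print(kim_spending * families_share / poor_families)   # ~$33,000 per family

haskins_spending = 1e12       # ~$1 trillion on means-tested programs, 2012
people_in_poverty = 46.5e6
per_person = haskins_spending / people_in_poverty
print(per_person, 3 * per_person)   # ~$21,500 per person, ~$64,500 for a family of three
```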

This way of manipulating welfare state spending seems to have originated from Robert Rector at Heritage, who offered it in Congressional testimony in 2012.

This meme is — and I am choosing my words carefully — stupid and evil.

It’s stupid because it ignores how poverty is calculated and how “means-tested” money is spent. If you took away Medicaid and housing support alone, the poverty line for a single mother with two children would have to be a lot higher. For example, according to Rector’s original figures (shared here), half of that means-tested money is spent on medical care, mostly Medicaid. So, Haskins, if you took away Medicaid (and Obamacare subsidies), how much would a single mother with two children need to survive? Health insurance alone would cost her more than $10,000.

So is $33,000 per family such a ridiculously generous amount to live on that it would easily lift people out of poverty? Not without the benefits poor people get. Or if they get sick. In round numbers about 10 years old, 5% of the population accounts for half of all medical spending. Using the distribution reported in that paper, $10,000 per family on medical care is not much, if it’s distributed more or less like this:

[Figure: distribution of annual medical spending across families]

Further, all those non-poor families living on $33,000 in employment income are getting benefits, too, like tax-subsidized employer-provided healthcare, mortgage interest deductions, unemployment insurance, and retirement savings. If you took all that away and gave these non-poor families $33,000 to live on, they wouldn’t be non-poor for long. So the argument is stupid.

It’s also evil, because it says, “We’ve thrown so much money at poor people and it just doesn’t work, so it’s time for them to step up and contribute a little themselves.” The main thing Kim wants them to do is get married. She even says, “If single mothers simply were to wed the father of their child, their likelihood of living in poverty would fall by two-thirds,” and adds that, “contrary to myth the fathers are quite ‘marriageable.'”

The calculations for this are not shown, which is probably just as well. But the idea that the “benefits” of marriage — that is, the observed association between marriage and non-poverty — would accrue to single mothers if they “simply” married their partners is bonkers. There is a marriage queue (imperfect of course) that arranges people from most to least likely to marry, and on average the richer, healthier, better-at-relationships people are at the front, more likely to marry and produce the observed “benefits” of marriage. “Marriageable” isn’t a dichotomous condition, but it’s obvious that at any one time the currently non-married are not the same as the currently married.

But back to evil. The idea that we’ve spent so much on poverty that it proves spending doesn’t solve poverty is like saying, “we’ve spent $13 trillion on the military in just the last quarter century, and we don’t have complete world domination yet, so obviously war is not the answer.”

[Figure: U.S. military spending, 1988-2012]

Oh, wait, I do agree with that.

But we don’t spend money on the military and fight wars to fix the world. We do it to fatten defense contractors, provide jobs, prop up unpopular allies, and defend the country from the occasional threat. The defense industry doesn’t have to defend the claim that the spending is a one-time thing to cure a problem.

Giving poor people money — or in-kind benefits — to help them survive is not a solution to poverty, it’s a treatment for poverty. If we had more decency we’d do more of it.


Girls braced for beauty

Sociologists like to say that gender identities are socially constructed. That just means that what it is, and what it means, to be male or female is at least partly the outcome of social interaction between people – visible through the rules, attitudes, media, or ideals in the social world.

And that process sometimes involves constructing people’s bodies physically as well. And in today’s high-intensity parenting, in which gender plays a big part, this includes constructing – or at least tinkering with – the bodies of children.

Today’s example: braces. In my Google image search for “child with braces,” the first 100 images yielded about 75 girls.

[Image: Google image search results for “child with braces”]

Why so many girls braced for beauty? More girls than boys want braces, and more parents of girls want their kids to have them, even though girls’ teeth are no more crooked or misplaced than boys’. This is just one manifestation of the greater tendency to value appearance for girls and women more than for boys and men. But because braces are expensive, this is also tied up with social class, so that richer people are more likely to get their kids’ teeth straightened, and as a result richer girls are more likely to meet (and set) beauty standards.

Hard numbers on how many kids get braces are surprisingly hard to come by. However, the government’s medical expenditure survey shows that 17 percent of children ages 11-17 saw an orthodontist in the last year, which means the number getting braces at some point in their lives is higher than that. The numbers are rising, and girls are wearing most of the hardware.

A study of Michigan public school students showed that although boys and girls had equal treatment needs (orthodontists have developed sophisticated tools for measuring this need, which everyone agrees is usually aesthetic), girls’ attitudes about their own teeth were quite different:

[Figure: Michigan students’ attitudes about their teeth and desire for braces, by gender]

Clearly, braces are popular among American kids, with about half in this study saying they want them, but that sentiment is more common among girls, who are twice as likely as boys to say they don’t like their teeth.

This lines up with other studies that have shown girls want braces more at a given level of need, and they are more likely than boys to get orthodontic treatment after being referred to a specialist. Among those getting braces, there are more girls whose need is low or borderline. A study of 12-19 year-olds getting braces at a university clinic found 56 percent of the girls, compared with 47 percent of the boys, had “little need” for them on the aesthetic scale.

The same pattern is found in Germany, where 38 percent of girls versus 30 percent of boys ages 11-14 have braces, and in Britain – both countries where braces are covered by state health insurance if they are needed, but parents can pay for them if they aren’t.

Among American adults, women are also more likely to get braces, leading the way in the adult orthodontic trend. (Google “mother daughter braces” and you get mothers and daughters getting braces together; “father son braces” brings you to orthodontic practices run by father-son teams.)


Caption: The teeth of TV anchors Anderson Cooper, Soledad O’Brien, Robin Roberts, Suzanne Malveaux, Don Lemon, George Stephanopoulos, David Gregory, Ashleigh Banfield, and Diane Sawyer.

Teeth and consequences

Today’s rich and famous people – at least the ones whose faces we see a lot – usually have straight white teeth, and most people don’t get that way without some intervention. And lots of people get it.

Girls are held to a higher beauty standard and feel the pressure – from media, peers or parents – to get their teeth straightened. They want braces, and for good reason. Unfortunately, this subjects them to needless medical procedures and reinforces the over-valuing of appearance. However, it also shows one way that parents invest more in their girls, perhaps thinking they need to prepare them for successful careers and relationships by spending more on their looks.

When they’re grown up, of course, women get a lot more cosmetic surgery than men do – 87 percent of all surgical procedures, and 94 percent of Botox-type procedures – and that gap is growing over time.

As is the case with lots of cosmetic procedures, people from wealthier families generally are less likely to need braces but more likely to get them. But add this to the gender pattern, and what emerges is a system in which richer girls (voluntarily or not) and their parents set the standard for beauty – and then reap the rewards (as well as harms) of reaching it.

Note: I didn’t find any sociological studies of this. Why don’t you do one?


Home birth is more dangerous. Discuss.

How dangerous is too dangerous?

We don’t prohibit all dangerous behavior, or even behavior that endangers others, including people’s own children.

Question: Is the limit of acceptable risks to which we may subject our own children determined by absolute risks or relative risks?

Case for consideration: Home birth.

Let’s say planning to have your birth at home doubles the risk of some serious complications. Does that mean no one should do it, or be allowed to do it? Other policy options: do nothing, discourage home birth, promote it, regulate it, or educate people about the risks and let them do what they want.

Here is the most recent result, from a large study in the American Journal of Obstetrics & Gynecology (reported on the New York Times Well blog), which looks to me like it was done properly. Researchers analyzed about 2 million birth records of live, term (37-43 weeks), singleton, vertex (head-first) births, including 12,000 planned home births (that is, not including those where the home birth was accidental). They also excluded births at freestanding birthing centers.

The planned-home birth mothers were generally relatively privileged, more likely to be White and non-Hispanic, college-educated, married, and not having their first child. However, they were also more likely to be older than 34 and to have waited to see a doctor until their second trimester.

On three measures of birth outcomes, the home-birth infants were more likely to have bad results: Apgar scores below 4, Apgar scores below 7, and neonatal seizures. Apgar is the standard for measuring an infant’s wellbeing within 5 minutes of birth, assessing breathing, heart rate, muscle tone, reflex irritability, and circulation (blue skin). With up to 2 points on each indicator, the maximum score is 10, but 7 or more is considered normal and under 4 is serious trouble. Low scores are usually caused by some difficulty in the birth process, and babies with low scores usually require medical attention. The score is a good indicator of risk for infant mortality.
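In case the scoring is unfamiliar, here is a toy illustration of how an Apgar score gets tallied. The under-4 and 7-or-more cutoffs are the ones described above; the “intermediate” label for the middle range is just mine:

```python
# Toy illustration of Apgar scoring: five signs, each worth 0, 1, or 2 points.
def apgar(breathing, heart_rate, muscle_tone, reflex, color):
    score = breathing + heart_rate + muscle_tone + reflex + color
    if score >= 7:
        status = "normal"
    elif score >= 4:
        status = "intermediate"   # the text above only defines the two ends
    else:
        status = "serious trouble"
    return score, status

print(apgar(2, 2, 2, 1, 2))  # (9, 'normal')
print(apgar(1, 1, 0, 0, 1))  # (3, 'serious trouble')
```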

These are the unadjusted low-Apgar and seizure rates:

[Figure: unadjusted rates of Apgar below 4, Apgar below 7, and neonatal seizures, planned home versus hospital births]

These are big differences considering the home birth mothers are usually healthier. In the subsequent analysis, the researchers controlled for parity, maternal age, race/ethnicity, education, gestational age at delivery, number of prenatal care visits, cigarette smoking during pregnancy, and medical/obstetric conditions. With those controls, the odds ratios were 1.9 for Apgar<4, 2.4 for Apgar<7, and 3.1 for seizures. Pretty big effects.

Two years ago I wrote about a British study that found much higher rates of birth complications among home births when the mother was delivering her first child. This is my chart for their findings:

[Figure: my chart of the British study’s findings on birth complications in planned home versus hospital births]

Again, those were the unadjusted rates, but the disparities held with a variety of important controls.

These birth complication rates are low by world historical standards. In New Delhi, India, in the 1980s 10% of 5-minute-olds had Apgar scores of 3 or less. So that’s many times worse than American home births. On the other hand, a number of big European countries (Germany, France, Italy) have Apgar<7 rates of 1% or less, which is much better.

A large proportional increase on a low risk for a high-consequence event (like nuclear meltdown) can be very serious. A large absolute risk of a common low-consequence event (like having a hangover) can be completely acceptable. Birth complications are somewhere in between. But where?
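To put rough numbers on that middle ground, here is a back-of-the-envelope conversion from relative to absolute risk. The 2.4 odds ratio for Apgar<7 comes from the study above; the 1% baseline is an assumption in the range of the European rates just mentioned, not the study’s own figure:

```python
# Illustrative only: converting a relative risk into an absolute difference.
baseline = 0.01        # assumed hospital rate of Apgar < 7 (~1%, in the European range above)
odds_ratio = 2.4       # adjusted odds ratio for planned home births, from the study

# For a rare outcome, the odds ratio is approximately the risk ratio.
home_rate = baseline * odds_ratio
extra_per_1000 = (home_rate - baseline) * 1000

print(home_rate)         # ~0.024, i.e., about 2.4%
print(extra_per_1000)    # ~14 additional low-Apgar births per 1,000
```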

Seems like a good topic for discussion, and having some real numbers helps. Let me know what you decide.


Obstacles to healthcare aren’t cheap either

Is the Healthcare.gov debacle, with its dozens of overlapping contractors, just a metaphor for why a single-payer system makes so much more sense, or is it actually one of the reasons a single-payer system makes so much more sense?

Leaflets

Giving things to people costs money, so you would expect that indiscriminate gifting would be expensive. But that doesn’t mean highly targeted giving is more efficient, or even cheaper overall.

Throwing leaflets out of an airplane might cost you $500 for the flight and $100 for the 1,000 leaflets. If you only drop 500 leaflets, you save on printing costs. Your cost per leaflet goes up, but your total cost goes down.

That’s indiscriminate. But giving away fewer leaflets will increase your costs if you want to be selective. If you want only men over six-foot-four to get your leaflet, administering that rule might cost more than the airplane drop. Trying to give just 50 leaflets only to men over six-foot-four requires hiring someone to walk around qualifying people as tall men, which would be expensive.
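To put numbers on it: the flight and printing costs below come from the example above, while the screener’s hours and wage are made-up assumptions.

```python
# Illustrative cost comparison; the screening assumptions are invented for the example.
flight = 500
print_cost = 0.10                               # $100 per 1,000 leaflets

indiscriminate = flight + 1000 * print_cost     # drop 1,000 from the plane
targeted = 50 * print_cost + 40 * 15            # print 50; pay a screener 40 hours at $15/hour

print(indiscriminate, indiscriminate / 1000)    # $600 total, $0.60 per leaflet
print(targeted, targeted / 50)                  # $605 total, $12.10 per qualifying leaflet
```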


Health care

Obamacare isn’t just about giving away healthcare, but that’s part of it. And it shows that restricting who gets healthcare isn’t just a savings: Yes, you’re giving away less, but you have to pay the cost of figuring out who can’t have it, and then preventing those people from stealing it. (This is a variant of what is known as the cost-of-gates-for-rich-people dilemma).

In the case of Obamacare, the Tea Party saved us money by denying health insurance to undocumented immigrants, but cost us the money spent screening customers to make sure they’re not undocumented immigrants (and then paying for the ER visits of innocent children with asthma).

It’s not just undocumented immigrants. The nearly infinite rules for subsidies and exclusions cost money to administer. Just in case you have a hard time figuring your way through this flowchart, the government will have to pay for a system to do it for you:

[Image: flowchart of ACA coverage and subsidy eligibility rules]

A plan this complicated has a lot of these costs. To name a few: In the budgeting and planning phase we have to pay for health economists, in the administration phase we pay for database managers, and in the PR-disaster phase we pay for lawyers representing private contractors who testify before Congress.

Which is what hit me yesterday, when, at a hearing of the House Energy and Commerce Committee on the Obamacare roll-out, John Lau from Serco bragged about “the professionalism of our recruiting efforts and the outstanding way we have on-boarded and trained our people.” Inventing verbs is never a good way to save money. More importantly, though, neither is attempting to communicate with databases from Social Security, the Internal Revenue Service, Homeland Security and many insurance plans thousands of times per minute, just to make sure people don’t steal health insurance.

At that hearing, the House committee also heard from Cheryl Campbell, a senior vice president at CGI Federal; Andrew Slavitt from Optum; and Lynn Spellecy, corporate counsel for Equifax Workforce Solutions. This is what their prepared statements covered (look for the references to “health”):

[Image: the witnesses’ prepared statements]

When I was a kid I lived in Sweden for a while. It was the 1970s. As members of the family of a visiting scientist, each of us got a little metal tag on a chain with a number stamped on it. When I went to the doctor, I just showed them my tag. (The dentist, of course, was at school, because that’s where the children are.)

Giving away healthcare has a lot of costs, but figuring out who to deny shouldn’t be one of them.


Jobs and repeals

Couldn’t resist this:

Sources: Current Employment Statistics (seasonally adjusted); Washington Post 2 Chambers blog.

Note: The blog is nonpartisan.
