Category Archives: In the news

How do Black-White parents identify their children?

In 2015, the American Community Survey yielded an estimate of 66,913 infants who had one Black parent and one White parent present in the household. (Either parent may be multiracial, too.)

What is the race of those infants? 73% of them were identified as both White and Black by whoever filled out the Census form.


(Note “other” doesn’t mean they specified “other,” it just means they used some other combination of races.)

These are children age 0 living with both parents, so it’s a pretty good bet they’re mostly biological parents, though some are presumably adopted. This is based on a sample of 507 such infants. If you pooled several years of ACS data, there would be plenty to study here. Someone may already have done this – feel free to post in the comments.

That’s it, just FYI.

1 Comment

Filed under In the news

Vasectomy reversal, divorce, and American optimism

That would be a good title for a longer essay (feel free to use it).

“Studies suggest that up to 6% of such men will request vasectomy reversal,” wrote the authors of a chapter in Clinical Care Pathways in Andrology. “Divorce with remarriage is by far the most common reason for vasectomy reversal.”

So, when in the divorce process do people start Googling “vasectomy reversal”? Is it men with younger girlfriends, considering leaving their wives? Women considering marrying a divorced man? Divorced couples considering another round of kids?

I don’t know, but Google does, or they could if they looked into it. I’ve only gotten as far as the strong relationship between searches for “vasectomy reversal” and state divorce rates:


I like to think of it as the optimism rooted in the American spirit. We always look forward to the next renewal, the next reboot, rebranding, or escape. Not because I really think it’s true, I just like to think of it that way.

The Google data is from their Trends tool, the divorce data is from the ACS via

(This is a return to an old post, in which I first noticed this relationship, with new data. Now I think divorce per population makes more sense for Google correlations, rather than divorce per married population, which I used before.)


Filed under In the news

The liberalization of divorce attitudes proceeds apace

The 2016 Gallup poll results on what is morally acceptable versus morally wrong came out over the summer, and they show that U.S. attitudes toward divorce continue to grow more positive. The acceptable attitude has gained 5 points in the last 5 years:


This parallels results from the General Social Survey, which asks, “Should divorce in this country be easier or more difficult to obtain than it is now?” The latest GSS is still 2014, but it also shows a marked increase in the liberal “easier” view over the same time period:


See more under the divorce tag.


Filed under In the news

The new Wilcox thing is complete bologna and/or just dishonest

Update: after Wilcox updated his report with the complete data, I now conclude the report is just dishonest, not complete bologna. See below. 

Brad Wilcox and Nicholas Zill have a new report on Brad’s Institute for Family Studies website, about Arizona: “Stronger Families, Better Schools: Families and High School Graduation Across Arizona.” It bears a strong resemblance to a previous report, about Florida, “Strong Families, Successful Schools: High School Graduation and School Discipline in the Sunshine State.” Together these two give you a feeling like taking the first two bites of a leftover giant burrito that might have gone a little bad, and realizing there are probably about 48 more bites to go.

Anyway, this is about the Arizona one. I’ll first raise the possibility that it’s complete bologna – as in, fraudulent or error-ridden – and then discuss how its conclusions are dishonest at best, even if the analysis is not technically wrong but rather just presented terribly.

Update: with the report corrected to show the complete data, the analysis now replicates fine. So I set aside the bologna issue. I leave this section here just so you can see the research design, but the main argument is in the next section.

First, the bologna issue

The report uses demographic data from 99 Arizona school districts to model graduation rates, and the gender gap in graduation rates. Their conclusion, based on two regression models using districts as the units of analysis and demographic indicators as the predictors, is this:

In Arizona, public school districts with better-educated and more married parents boast higher high school graduation rates. Gender equity is also greater in districts with more married parents. That is, boys come closer to matching the high school graduation rates of girls in districts with more married-parent families. Moreover, married parenthood is a better predictor of these two high school graduation outcomes than are child poverty, race, and ethnicity in public school districts across the Grand Canyon state.

To pad out the report, they also include appendix tables, so it’s theoretically possible to replicate their regressions. Unfortunately, unless I’m missing something, they don’t replicate. I wouldn’t normally bother rerunning someone’s regression, especially when the argument they’re building is so wrong-headed (see below), but because we know from long experience that Wilcox does not behave honestly (in both methods and ethics), what the heck.

The report says, “Graduation rates and male/female graduation ratios for the 99 Arizona school districts in our study are shown in Table A1 in the Appendix.” Table A2 then lists the districts again, with the demographic variables. Unfortunately, table A2 only includes 83 districts – and the 16 missing are exactly those from Indian-Oasis to Paradise Valley in the alphabetical list of district names, so apparently an error handling the data. So I could only use 83 of the 99 for the regressions. Since I don’t know when they lost those 16 districts, I don’t know if it was before or after running the regressions (there are no Ns or standard errors on their regression tables).

For each of their dependent variables – graduation rate, and the male/female ratio in graduation rates – they list bivariate correlations, and adjusted betas from a multivariate regression. Here are their figures, with mine next to them. The key differences are highlighted:


If they’re using 99 cases and I have 83 (actually 81 for the gender gap because of missing data), you would expect some difference. But these are very similar, including the bivariate correlations and the R-squareds for the models.

The weird thing is that the biggest difference is exactly on their biggest claim: “married parenthood is a better predictor of these two high school graduation outcomes than are child poverty, race, and ethnicity…” That is based on the assertion that .29 is larger than -.28 (very lucky for them, that tiny, insignificant difference in magnitude!). In my model the minority effect is more than twice as large as the married-parenthood effect. So, huh. It’s definitely possible Brad simply lied about his results and made up a few numbers. (And I’m just using the data they include in the report.) But now let’s pretend he didn’t.

Update: with the complete data I can report that those two betas are actually .2865 (.29!) versus .2847 (.28!). The idea that one is a “better” predictor than the other is clearly not serious. Further, for some reason (we can only guess), they combined percent Black, Hispanic, and American Indian together into “minority,” which produced the .28 result. If they had entered them into the model separately, they would find that the Hispanic and American Indian effects are each bigger than the married parent effect, as I show here:

So much for the headline result. Anyway, back to the argument…

Policy, shmolicy

The point of the analysis is to make policy recommendations. They conclude:

If the state enjoyed more stable families, it might also see better educational outcomes among its children. It’s for that reason that Arizona should consider measures designed to strengthen and stabilize families.

Their recommendations to that end are vocational education and marriage promotion.

Private and public initiatives to provide social marketing on behalf of marriage could prove helpful. Campaigns against smoking and teenage pregnancy have taught us that sustained efforts to change behavior can work.

First, I’m not an education specialist (and neither are they), but shouldn’t there be some kind of policy variables in this analysis, like per-pupil spending, or teacher salaries, or something about curriculum or programming? It’s unusual to use only demographic variables and then conclude that what we need is a policy to change the demographics. It’s just not a serious analysis. (Please also remember that “controlling for income” is not an adequate control for economic conditions and status.)

But second, given that the first billion dollars spent promoting marriage produced absolutely no increase in marriage, is there any possible way Brad legitimately thinks this is the best way to improve graduation rates?

These are just two ideas. More should be explored. The bottom line: policymakers, educators, business leaders, and religious leaders in Arizona need to address the fragile foundations of family life if they hope for the state’s children to lead the nation in academic achievement.

Does this report really support that “bottom line”? Would it be better to spend money promoting marriage than to spend the same amount on some effort to improve schools? That’s obviously a dumb idea, but is it possible he really believes it? These are the only policies proposed. Maybe I’m wrong, but I doubt he believes it. I think he wants to promote marriage promotion programs for other reasons: to fund himself and his compatriots, to support pro-marriage ideology, and so on. Not to improve graduation rates in Arizona schools. But, maybe I’m wrong.

And a laptop

I think what Brad is really doing is noise noise statistics statistics marriage-is-good expertise trust me fund me. The details clearly aren’t that important.

Meanwhile, not coincidentally, things are looking up for Brad at the Institute for Family Studies (IFS), the organization he created to handle the foundation-money rake. He started in 2011 as president / director of IFS at a salary of $35,000. After paying himself a paltry $9,999 in 2012, he started improving his productivity, paying himself $50,000 in 2013, and then $80,400 in 2014 as a Senior Fellow, the last year for which I found a 990 form. Much of that money is coming from the Bradley Foundation (which also funded the Regnerus/Wilcox study) — their 2015 report lists $75,000 for IFS, so projections are good for next year. This is, of course, on top of what he gets for his service to the public at the University of Virginia.

The IFS disclosure forms also show purchase of a MacBook Pro. Which might or might not have been for Brad.


I do not make this case, and make it personally, because I disagree with Brad about politics. There are lots of people I disagree with even more than him, and I don’t spend all day criticizing them. The dishonesty offends me because it’s work and issues I care about, it hurts real people, I’m well situated to expose it, and his corporate-Christian-right megaphone is big, so it shouldn’t go unchallenged.


Filed under In the news

Gender on the Diane Rehm show in September

The last Media Matters report on the Sunday TV talk shows reported that 73% of guests were men in 2015, a little less than the 75% recorded for the previous two years. (That includes journalists as well as politicos.) I expect my local NPR station, with its liberal audience, to have a better showing for women, and it does. The Diane Rehm Show, which is produced at WAMU but distributed nationally as well as podcasted, had 129 guests in September, and 80 of them (62%) were men, by my manual count. (I’m not counting the hosts, who changed over the month.)

But what has been striking me lately, and the reason I did the count, was how rarely it seems that women are in the majority among the guests, and especially how often there is one woman and more than one man. Without a whole conversation analysis, you can imagine the kind of dynamic that made me think, “sure is a high male/female word-count ratio in this discussion.”

The count confirms this. The show averaged 2.8 guests per episode. So how are the men and women distributed? Of the 46 shows aired in September, 12 featured just one guest, 8 of whom were male. Male guests outnumbered female guests overall in 29 episodes, or 63% of the shows. Female guests outnumbered men in only 8 shows (17%), with the remaining 9 (20%) being gender balanced. What accounts for my annoyance, maybe, is that among those male-dominated shows, more than half (16 of 29) featured just one woman and more than one man. The reverse – one man and more than one woman – happened just three times. Details in the figure.


The most common configuration is one woman and two men. 

My point is just that a 62% / 38% gender split leads to a lot of small-group discussions where men outnumber women, and especially solo-women versus multiple men, which is its own kind of gender situation. I imagine you get this pretty often in cases – say, at an academic conference – where there is some effort to reach gender balance on most panels, but women are less than half altogether. (You can see they were paying attention because there were no all-male panels of four or five.)
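The arithmetic behind this is easy to check. Treating each guest slot as an independent draw from a 62%-male pool (a simplification – real booking is obviously not independent), a quick binomial calculation shows men in the majority on roughly two-thirds of three-person panels, close to the 63% I counted:

```python
from math import comb

def p_men_majority(n_panel, p_male):
    """Probability that men strictly outnumber women on a panel of
    n_panel guests, if each guest is independently male with prob p_male."""
    return sum(comb(n_panel, k) * p_male**k * (1 - p_male)**(n_panel - k)
               for k in range(n_panel // 2 + 1, n_panel + 1))

# With a 62% male guest pool, a three-person panel has a male majority
# about 68% of the time:
print(round(p_men_majority(3, 0.62), 2))  # → 0.68
```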

I’ll leave it to Media Matters to do their annual report again next year, but I did take a quick look at some of the Sunday shows for September. On Meet the Press I found 62% men, and 75% of the shows were male-dominated. On Fox News Sunday 71% of guests were male, and every show was male-dominated. Face the Nation had 72% male guests, and every one of its shows was male-dominated, too. (Incidentally, Face the Nation has a convenient list of every guest so far for the year, so I was able to quickly tally the gender of their 348 guests, 73% of whom were men, counting multiple appearances. That’s a tiny bit better than their 2015 total of 76%.)

Related on gender composition:

1 Comment

Filed under In the news

Black men raping White women: BJS’s Table 42 problem

I’ve been putting off writing this post because I wanted to do more justice both to the history of the Black-men-raping-White-women charge and the survey methods questions. Instead I’m just going to lay this here and hope it helps someone who is more engaged than I am at the moment. I’m sorry this post isn’t higher quality.

Obviously, this post includes extremely racist and misogynist content, which I am showing you to explain why it’s bad.

This is about this very racist meme, which is extremely popular among extreme racists.


The modern racist uses statistics, data, and even math. They use citations. And I think it takes actually engaging with this stuff to stop it (this is untested, though, as I have no real evidence that facts help). That means anti-racists need to learn some demography and survey methods, and practice them in public. I was prompted to finally write on this by a David Duke video streamed on Facebook, in which he used exaggerated versions of these numbers, and the good Samaritans arguing with him did not really know how to respond.

For completely inadequate context: For a very long time, Black men raping White women has been White supremacists’ single favorite thing. This was the most common justification for lynching, and for many of the legal executions of Black men throughout the 20th century. From 1930 to 1994 there were 455 people executed for rape in the U.S., and 89% of them were Black (from the 1996 Statistical Abstract):


For some people, this is all they need to know about how bad the problem of Blacks raping Whites is. For better informed people, it’s the basis for a great lesson in how the actions of the justice system are not good measures of the crimes it’s supposed to address.

Good data gone wrong

Which is one reason the government collects the National Crime Victimization Survey (NCVS), a large sample survey of about 90,000 households with 160,000 people. In it they ask about crimes against the people surveyed, and the answers the survey yields are usually pretty different from what’s in the crime report statistics – and even further from the statistics on things like convictions and incarceration. It’s supposed to be a survey of crime as experienced, not as reported or punished.

It’s an important survey that yields a lot of good information. But in this case the Bureau of Justice Statistics is doing a serious disservice in the way they are reporting the results, and they should do something about it. I hope they will consider it.

Like many surveys, the NCVS is weighted to produce estimates that are supposed to reflect the general population. In a nutshell, that means, for example, that they treat each of the 158,000 people (over age 12) covered in 2014 as about 1,700 people. So if one person said, “I was raped,” they would say, “1700 people in the US say they were raped.” This is how sampling works. In fact, they tweak it much more than that, to make the numbers add up according to population distributions of variables like age, sex, race, and region – and non-response, so that if a certain group (say Black women) has a low response rate, their responses get goosed even more. This is reasonable and good, but it requires care in reporting to the general public.
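As a sketch of the base arithmetic (using round numbers of my own, not BJS’s actual weighting scheme, which adjusts for much more), the starting weight is just population over sample size:

```python
# Toy illustration of survey weighting: each respondent's base weight is
# roughly (population / sample size), before the adjustments that match
# weighted totals to known population margins. Numbers are illustrative.
population = 270_000_000   # rough U.S. population age 12+ (my assumption)
sample_size = 158_000      # NCVS respondents age 12+ in 2014

base_weight = population / sample_size   # about 1,700 people per respondent

# If one respondent reports a rape, the weighted estimate is ~1,700 victims:
respondents_reporting = 1
estimate = respondents_reporting * base_weight
print(round(base_weight))  # → 1709
```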

So, how is the Bureau of Justice Statistics’ (BJS) reporting method contributing to the racist meme above? The racists love to cite Table 42 of this report, which last came out for the 2008 survey. This is the source for David Duke’s rant, and the many, many memes about this. The results of Google image search gives you a sense of how many websites are distributing this:


Here is Table 42, with my explanation below:


What this shows is that, based on their sample, BJS extrapolates an estimate of 117,640 White women who say they were sexually assaulted, or threatened with sexual assault, in 2008 (in the red box). Of those, 16.4% described their assailant as Black (the blue highlight). That works out to 19,293 White women sexually assaulted or threatened by Black men in one year – White supremacists do math. In the 2005 version of the table these numbers were 111,490 and 33.6%, for 37,460 White women sexually assaulted or threatened by Black men, or:


Now, go back to the structure of the survey. If each respondent in the survey counts for about 1,700 people, then the survey in 2008 would have found 69 White women who were sexually assaulted or threatened (117,640/1,700), 11 of whom said their assailant was Black. Actually, though, we know it was fewer than 11, because the asterisk on the table takes you to the footnote below, which says the figure was based on 10 or fewer sample cases.

In comparison, the survey may have found 27 Black women who said they were sexually assaulted or threatened (46,580/1,700), none of whom said their attacker was White, which is why the second blue box shows 0.0. However, it actually looks like the weights are bigger for Black women, because the figure for the percentage assaulted or threatened by Black attackers, 74.8%, has the asterisk that indicates 10 or fewer cases. If there were 27 Black women in this category, then 74.8% of them would be 20. So this whole Black women victim sample might be as few as 13, with bigger weights applied (because, say, Black women had a lower response rate).

If in fact Black women are just as likely to be attacked or assaulted by White men as the reverse, 16%, you might only expect 2 of those 13 to be White, and so finding a sample count of 0 is not very surprising. The actual weighting scheme is clearly much more complicated, and I don’t know the unweighted counts, as they are not reported here (and I didn’t analyze the individual-level data).
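Here is that back-of-the-envelope arithmetic in one place, assuming a flat weight of about 1,700 per respondent (the real weights vary):

```python
# Back out the approximate unweighted sample counts implied by Table 42,
# assuming each respondent represents roughly 1,700 people (a flat-weight
# simplification; the actual NCVS weights differ across respondents).
WEIGHT = 1_700

white_victims_weighted = 117_640   # White women assaulted/threatened, 2008
pct_black_offender = 0.164         # share reporting a Black assailant

white_victims_sampled = white_victims_weighted / WEIGHT            # ~69 women
black_offender_cases = white_victims_sampled * pct_black_offender  # ~11 cases

black_victims_weighted = 46_580
black_victims_sampled = black_victims_weighted / WEIGHT            # ~27 women

print(round(white_victims_sampled), round(black_offender_cases),
      round(black_victims_sampled))  # → 69 11 27
```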

I can’t believe we’re talking about this. The most important bottom line is that the BJS should not report extrapolations to the whole population from samples this small. These population numbers should not be on this table. At best these numbers are estimated with very large standard errors. (Using a standard confidence interval calculator, that 16% of White women, based on a sample of 69, yields a confidence interval of +/- 9%.) It’s irresponsible, and it’s inadvertently (I assume) feeding White supremacist propaganda.
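For the record, that confidence interval is just the normal approximation for a proportion (and remember the n of 69 is itself an estimate):

```python
from math import sqrt

# Normal-approximation 95% confidence interval for a proportion:
# p +/- 1.96 * sqrt(p * (1 - p) / n)
p, n = 0.164, 69          # 16.4% of ~69 sampled White women
margin = 1.96 * sqrt(p * (1 - p) / n)
print(round(margin, 3))   # → 0.087, i.e. roughly +/- 9 percentage points
```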

Rape and sexual assault are very disturbingly common, although not as common as they were a few decades ago, by conventional measures. But it’s a big country, and I don’t doubt that lots of Black men sexually assault or threaten White women, and that White men sexually assault or threaten Black women a lot, too – certainly more than never. If we knew the true numbers, they would be bad. But we don’t.

A couple more issues to consider. Most sexual assault happens within relationships, and Black women have interracial relationships at very low rates. In round numbers (based on marriages), 2% of White women are with Black men, and 5% of Black women are with White men, which – because of population sizes – means there are more than twice as many Black-man/White-woman couples than the reverse. At very small sample sizes, this matters a lot. We would expect there to be more Black-White rape than the reverse based on this pattern alone. Consider further that the NCVS is a household sample, which means that if any Black women are sexually assaulted by White men in prison, it wouldn’t be included. Based on a 2011-2012 survey of prison and jail inmates, 3,500 women per year are victims of staff sexual misconduct, and Black women inmates were about 50% more likely to report this than White women. So I’m guessing the true number of Black women sexually assaulted by White men is somewhat greater than zero, and that’s just in prisons and jails.
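The couple arithmetic works out like this, using round population figures that are my own assumptions (not ACS counts):

```python
# Rough arithmetic on couple counts. Population figures are round
# assumptions for illustration only: because there are far more White
# women than Black women, 2% of the former outnumbers 5% of the latter.
white_women = 80_000_000   # assumed adult White women
black_women = 14_000_000   # assumed adult Black women

bm_ww_couples = 0.02 * white_women   # Black-man / White-woman couples
wm_bw_couples = 0.05 * black_women   # White-man / Black-woman couples

print(round(bm_ww_couples / wm_bw_couples, 1))  # → 2.3
```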

The BJS seems to have stopped releasing this form of the report, with Table 42, maybe because of this kind of problem, which would be great. In that case they just need to put out a statement clarifying and correcting the old reports – which they should still do, because they are out there. (The more recent reports are skimpier, and don’t get into this much detail [e.g., 2014] – and their custom table tool doesn’t allow you to specify the perceived race of the offender).

So, next time you’re arguing with David Duke, the simplest response to this is that the numbers he’s talking about are based on very small samples, and the asterisk means he shouldn’t use the number. The racists won’t take your advice, but it’s good for everyone else to know.


Filed under In the news

Teen birth rate low but Bible remains a concern

In 2012 I did a post about teen birth rates, abstinence, and Google searches for Antichrist stuff. The most important point was that abstinence education doesn’t work. In this post I use the percentage of teen women having a birth, and see what people are Googling in places with more teen births.

This is an inductive approach that generates ideas and surprises. Out of the billions of things people search for, which searches are most correlated with a demographic or social pattern across states? For example, the relationship between low marriage rates and searches about Kanye West is very strong (even controlling for a bunch of demographics), and state suicide rates are highly correlated with lots of searches about guns. If you think these are random flukes, you may be right — but then look at what searches correlate with racial/ethnic composition of states.

So for teen births, this is easy to get from the American Community Survey via IPUMS (I used the 2010-2014 combined file), which asks of each person if they had a baby in the previous year. Teen birth rate is the percentage of women ages 15-19 who did. Then you surf over to the Google Correlate tool and upload the teen birth rates file. The result is the 100 searches that are most highly correlated with the state file you uploaded. Someone with the keys to Google could get more, but this is what any member of the public can do.
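In code, the rate is just births over teen women, by state. This sketch uses made-up records and field names (the real IPUMS extract has its own variable names, and a proper estimate would apply the person weights):

```python
# Sketch of the teen birth rate calculation from ACS-style microdata.
# Records and field order are made up for illustration; an actual IPUMS
# extract uses variables like SEX, AGE, and FERTYR, plus person weights.
records = [
    # (state, sex, age, gave_birth_last_year)
    ("AZ", "F", 17, True),
    ("AZ", "F", 16, False),
    ("AZ", "F", 19, False),
    ("AZ", "M", 18, False),   # men and other ages are excluded
    ("CT", "F", 15, False),
    ("CT", "F", 18, False),
]

def teen_birth_rate(rows, state):
    """Percent of women ages 15-19 in a state who had a birth last year."""
    teens = [r for r in rows if r[0] == state and r[1] == "F" and 15 <= r[2] <= 19]
    births = sum(r[3] for r in teens)
    return 100 * births / len(teens)

print(round(teen_birth_rate(records, "AZ"), 1))  # 1 of 3 teen women → 33.3
```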

We know that teen births are most common in the Southwest and South, and that broad pattern is really what’s most important: Republican-dominated states, the Bible belt, and places with a lot of poor young people.* Here are the broad strokes:


The Google searches are just for thinking about subtler cultural relationships and generating ideas.

Among the top 100 searches most correlated with teen births, American muscle cars stand out: Mustangs (13), Camaros (5), Hummers, Chargers, along with related things like transmissions. Next, however, is Bible stuff. There are 12 searches that correlate with the teen birth rate at .80 or higher on the list:

original bible
book of enoch
bible talk
the book of enoch
i believe in god
book of enoch pdf
bible names
the truth shall set you free
truth shall set you free

Here’s a map showing the ACS teen births rates on the left and searches for “original bible” on the right, correlation .83:


(A little disturbingly, “what is cinnamon” is also high on the list [correlation .81] — cinnamon is often promoted as a “natural” medicine to cause miscarriage.)

I exported the correlation file from Google and then averaged those 12 searches, producing a bible searches index that correlates with teen births at .87 (all the search correlations come out as z-scores, so the average has mean of 0 and s.d. of .93). Here are the results:**
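The index construction is nothing fancy: average the z-scored search series elementwise, then take the Pearson correlation with the teen birth rate. A toy version with made-up numbers:

```python
from statistics import mean, pstdev

def zscores(xs):
    """Standardize a series: subtract the mean, divide by the (population) s.d."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

def pearson(xs, ys):
    """Pearson correlation as the mean product of z-scores."""
    return mean(x * y for x, y in zip(zscores(xs), zscores(ys)))

# Toy data standing in for state-level series (values are made up):
teen_births = [2.4, 1.9, 0.9, 3.5, 1.5]
search_a    = [1.1, 0.2, -1.3, 1.8, -0.4]   # z-scored search series
search_b    = [0.9, 0.1, -1.1, 2.0, -0.6]

# The "bible searches index" is the elementwise average of the z-scored
# series, then correlated with the teen birth rate:
index = [mean(pair) for pair in zip(search_a, search_b)]
print(round(pearson(teen_births, index), 2))  # high, close to 1 for this toy data
```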


I’m no Bible expert, and this could all be a total coincidence, but I think some real research on it might be pretty interesting. Maybe the people who say the Bible is awesome for families and teen births are bad should look into it.***

Followup: Of course, if you only look at the highest correlations out of billions, you find high correlations. So I don’t expect a research award for discovering that. And the fact that these bible searches are from certain niches of Christianity is an interesting tidbit, but just as food for thought. The more theory-driven version of this research might start with searches for just the word “bible” and test the hypothesis that it’s correlated with teen births. That relationship is not as strong (correlation .74), but it’s still plenty to go on:


I take from this weaker finding that the stronger pattern above is not just a fluke or an artifact of the method.

  • Follow the Google tag to see the many posts using this stuff.
  • Follow the teen birth tag for more, including the argument that the teen birth rate is a myth, and the racial implications of promoting delayed births.


* This survey measure is correlated .89 with the 2008 list of state teen birth rates published by the National Center for Health Statistics. I would have a better sense of which is the right one to use if Google Correlate would say what time period is used for their analysis, but I can’t find that anywhere. When I used the NCHS list instead of my ACS list, it was more dominated by muscle cars and had less Bible stuff, as only “book of enoch” was in the top 100, correlated .87 with teen births.

** Here's the Stata command for making this figure (which I then prettied up a little):
* Scatterplot of teen birth rate against the bible-search index, with state labels shown in place of markers, plus a linear fit:
gr twoway (scatter teenbirth biblesearch , mlabel(state) mlabposition(0) msymbol(i)) (lfit teenbirth biblesearch)

*** The 2010-2014 teen birth rates, from the IPUMS release of ACS data are these:

State  Abbreviation  Teen birth rate (%)
Alabama AL 2.44
Alaska AK 2.727
Arizona AZ 2.385
Arkansas AR 2.886
California CA 1.901
Colorado CO 1.755
Connecticut CT 0.902
Delaware DE 1.644
District of Columbia DC 2.088
Florida FL 2
Georgia GA 2.578
Hawaii HI 1.991
Idaho ID 2.202
Illinois IL 2.009
Indiana IN 2.69
Iowa IA 1.477
Kansas KS 2.432
Kentucky KY 2.936
Louisiana LA 2.36
Maine ME 0.852
Maryland MD 1.783
Massachusetts MA 0.941
Michigan MI 1.881
Minnesota MN 1.428
Mississippi MS 3.545
Missouri MO 2.756
Montana MT 2.065
Nebraska NE 1.304
Nevada NV 2.449
New Hampshire NH 1.135
New Jersey NJ 1.005
New Mexico NM 3.5
New York NY 1.494
North Carolina NC 2.48
North Dakota ND 2.328
Ohio OH 1.901
Oklahoma OK 3.214
Oregon OR 1.568
Pennsylvania PA 1.928
Rhode Island RI 1.978
South Carolina SC 2.829
South Dakota SD 2.271
Tennessee TN 2.974
Texas TX 3.303
Utah UT 1.666
Vermont VT 1.073
Virginia VA 1.636
Washington WA 1.688
West Virginia WV 2.146
Wisconsin WI 1.305
Wyoming WY 1.6


Filed under In the news