I was skipping through it looking for his current definition of the “Gold Standard” family (“intact, married, biological family”), when I found this amazing piece of sophistry masquerading as incompetence (or maybe the other way around, hard to know). To lighten things up around here, I thought I’d share it.
This is the clip (it’s here in the full video), followed by the transcript:
Stephanie Coontz, who is a very prominent, progressive author, writer and scholar, has argued that ‘marriage has become more joyful, more loving, more satisfying.’ So what’s the evidence tell us about what actually happened? Well, looking just at those folks who were married, from the 70s to the 80s, and the 90s, and actually the 2000s [he’s looking up at his own graph here], there’s no evidence that marriage is becoming more joyful or more loving among those folks who managed both to get married, and in this case, to remain married. So we see a decline. It’s actually particularly precipitous for women, from the 70s to the 2000s. And when you factor in the fact that fewer and fewer folks were getting married, that’s particularly interesting. What’s this is telling us is that a smaller and smaller share of American adults both men and women were in a very happy marriage from the 70s to the 2000s. So, I don’t think there’s a lot of evidence to sort of back up her idea that sort this newer, post-70s model, this more contingent model, was more successful.
Here’s a screengrab of the slide he’s looking at:
Wow, that sure makes it seem like the percent of adults in a very happy marriage is a lot lower than it used to be, although rebounding strongly this decade. It also just doesn’t look anything like the many graphs of marital happiness I have seen, and made, from the General Social Survey, which is the data he’s using here. So, how exactly is Brad Wilcox completely wrong this time?
Two amazingly incompetent and/or disingenuous ways. First, he included people who aren’t married in his denominator, so the decline in “happy marriage” he shows is due to the decline in marriage altogether, not to any change within marriage. That’s like saying the percentage of people who love reading handwritten parchment scrolls has plummeted since the Middle Ages.
Second, that huge drop in the 2000s he shows is entirely due to the fact that most people weren’t asked the question in survey years 2002, 2004, and 2006, and he included them as not in “very happy” marriages. Brad could just as accurately have said the 19,000 happily married people in the GSS are the only happily married people on earth.
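The denominator trick is easy to see with toy numbers. Here is a minimal sketch (hypothetical counts, not the actual GSS figures) of how treating unmarried — or simply un-asked — respondents as "not in a very happy marriage" manufactures a decline:

```python
# The denominator problem, with hypothetical counts (not real GSS data).
# Suppose 600 of 1,000 adults are married in decade A but only 400 in
# decade B, while in BOTH decades 65% of married respondents report a
# "very happy" marriage.

def pct_very_happy(n_adults, n_married, happy_rate, include_unmarried):
    """Percent 'in a very happy marriage', with either denominator."""
    n_happy = n_married * happy_rate
    denom = n_adults if include_unmarried else n_married
    return 100 * n_happy / denom

# Correct denominator (married respondents): no change across decades.
a = pct_very_happy(1000, 600, 0.65, include_unmarried=False)
b = pct_very_happy(1000, 400, 0.65, include_unmarried=False)
print(round(a, 1), round(b, 1))  # 65.0 65.0

# Wrong denominator (all adults): an apparent decline in "happy marriage"
# that is really just a decline in marriage.
a_wrong = pct_very_happy(1000, 600, 0.65, include_unmarried=True)
b_wrong = pct_very_happy(1000, 400, 0.65, include_unmarried=True)
print(round(a_wrong, 1), round(b_wrong, 1))  # 39.0 26.0
```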
Here is the actual trend, with what he showed (which is trivially easy to reconstruct), showing the individual years instead of the decade grouping he used:
I can’t think of anything better to say than just to show this. (I didn’t bother with the gender breakdown, which isn’t the point.)
I hope this helps. Please stay at home if you can, and support those who can’t stay home as best you can.
Thanks to Philip Cohen for showing how Brad Wilcox misuses data to criticize my claim that modern marriages are “more loving, more satisfying” than in the past. What I actually say is that when they work WELL, modern marriages are more intimate, fair & loving than marriages of the past. When they DON’T work well, they are more disappointing and seem less bearable because of our higher expectations. This makes our current crisis especially challenging, because we have to expect more of our partners, and can expect less from others, than usual.
You can really do a lot with the common public misperception that divorce is always going up. Brad Wilcox has been taking advantage of that since at least 2009, when he selectively trumpeted a decline in divorce (a Christmas gift to marriage) as if it was not part of an ongoing trend.
I have reported that the divorce rate in the U.S. (divorces per married woman) fell 21 percent from 2008 to 2017. And yet yesterday, Faithwire’s Will Maule wrote, “With divorce rates rocketing across the country, it can be easy to lose a bit of hope in the God-ordained bond of marriage.”
The program, which has recently become an independent nonprofit organization called Communio, used the latest marketing techniques to “microtarget” outreach, engaged local churches to maximize its reach and influence, and deployed skills training to better prepare individuals and couples for the challenges they might face. COFI highlights how employing systems thinking and leveraging the latest in technology and data sciences can lead to significant progress in addressing our urgent marriage crisis.
The program claims that 50,000 people attended four-hour “marriage and faith strengthening programs,” and that it made 20 million Internet impressions “targeting those who fit a predictive model for divorce.” So, have they increased marriage and reduced divorce? I don’t know, and neither do they, but they say they do.
Funny aside: the results website today says “Communio at work: Divorce drops 24% in Jacksonville,” but a few days ago the same web page said 28%. That’s probably because Duval County (which is what they’re referring to) just saw a SHOCKING 6% INCREASE IN DIVORCE (my phrase) in 2018 — the 10th-largest divorce rate increase among the 40 counties in Florida for which data are available (see below). But anyway, that’s getting ahead of the story.
Gimme the report
The 28% result came from this report by Brad Wilcox and Spencer James, although they don’t link to it. That’s what I’ll focus on here. The report describes the many hours of ministrations, and the 20 million Internet impressions, and then gets to the heart of the matter:
We answer this question by looking at divorce and marriage trends in Duval County and three comparable counties in Florida: Hillsborough, Orange, and Escambia. Our initial data analysis suggests that the COFI effort with Live the Life and a range of religious and civic partners has had an exceptional impact on marital stability in Duval County. Since 2016, the county has witnessed a remarkable decline in divorce: from 2015 to 2017, the divorce rate fell 28 percent. As family scholars, we have rarely seen changes of this size in family trends over such a short period of time. Although it is possible that some other factor besides COFI’s intervention also helped, we think this is unlikely. In our professional opinion, given the available evidence, the efforts undertaken by COFI in Jacksonville appear to have had a marked effect on the divorce rate in Duval County.
A couple of things about these very strong causal claims. First, they say nothing about how the “comparable counties” were selected. Florida has 67 counties, 40 of which the Census gave me population counts for. Why not use them all? (You’ll understand why I ask when they get to the N=4 regression.) Second, how about that “exceptional impact,” the “remarkable decline” “rarely seen” in their experience as family scholars? Note there is no evidence in the report of the program doing anything, just the three-year trend. And while it is a big decline, it’s one I would call “occasionally seen.” (It helps to know that divorce is generally going down — something the report never mentions.)
To put the decline in perspective, first a quick national look. In 2009 there was a big drop in divorce, accelerating the ongoing decline, presumably related to the recession (analyzed here). It was so big that nine states had crude divorce rate declines of 20% or more in that one year alone. Here is what 2008-2009 looked like:
So, a drop in divorce on this scale is not that rare in recent times. This is important background Wilcox is (comfortably) counting on his audience not knowing. So what about Florida?
Wilcox and James start with this figure, which shows the number of divorces per 1000 population in Duval County (Jacksonville), and the three other counties:
Again, there is no reason given for selecting these three counties. To test the comparison, which evidently shows a faster decline in Duval, they perform two regression models. (To their credit, James shared their data with me when I requested it — although it’s all publicly available, this was helpful to make sure I was doing it the same way they did.) First, I believe they ran a regression with an N of 4, the dependent variable being the 2014-2017 decline in divorce rate, and the independent variable being a dummy for Duval. I share the complete dataset for this model here:
I don’t know exactly what they did with the second model, which must somehow have a larger sample than 4, because it has 8 variables. Maybe 16 county-years? Anyway, it doesn’t much matter. Here is their table:
How to evaluate a faster decline among a general trend toward lower divorce rates? If you really wanted to know if the program worked, you would have to study the program, people who were in the program and people who weren’t and so on. (See this writeup of previous marriage promotion disasters, studied correctly, for a good example.) But I’m quite confident that this conclusion is ridiculous and irresponsible: “In our professional opinion, given the available evidence, the efforts undertaken by COFI in Jacksonville appear to have had a marked effect on the divorce rate in Duval County.” No one should take such a claim seriously except as a reflection on the judgment or motivations of its author.
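For what it's worth, an OLS regression with N=4 and a single Duval dummy is mechanically nothing more than comparing Duval's value with the mean of the other three counties. A sketch, using the report's claimed 28% Duval decline and made-up values for the comparison counties:

```python
# An N=4 regression with one dummy reduces to comparing Duval with the
# mean of the other three counties. Duval's -28% is the report's claimed
# decline; the comparison-county values here are hypothetical.
changes = {"Duval": -28.0, "Hillsborough": -12.0, "Orange": -10.0, "Escambia": -14.0}

others = [v for k, v in changes.items() if k != "Duval"]
intercept = sum(others) / len(others)      # mean of the comparison counties
duval_coef = changes["Duval"] - intercept  # the "Duval effect" (dummy coefficient)

print(intercept, duval_coef)  # -12.0 -16.0
```

That is the entire information content of the model: one number minus the average of three others.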
Because the “comparison counties” choice was bugging me, I got the divorce counts from Florida’s Vital Statistics office (available here), and combined them with Census data on county populations (table S0101 on data.census.gov). Since the 2018 data have now come out, I’m showing the change in each county’s crude divorce rate from 2015, before Communio, through 2018.
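For readers following along, the crude divorce rate and its change are computed like this (the counts below are hypothetical placeholders, not the actual Duval figures):

```python
# Crude divorce rate (divorces per 1,000 population) and its percent
# change, the measure compared across counties. Counts are hypothetical.
def crude_rate(divorces, population):
    return 1000 * divorces / population

def pct_change(start, end):
    return 100 * (end - start) / start

rate_2015 = crude_rate(4200, 920_000)  # about 4.6 per 1,000
rate_2018 = crude_rate(3400, 950_000)  # about 3.6 per 1,000
print(round(pct_change(rate_2015, rate_2018), 1))  # -21.6
```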
You can see that Duval has had a bigger drop in divorce than most Florida counties — 32 of which saw divorce rates fall in this period. Of the counties that had bigger declines, Monroe and Santa Rosa are quite small, but Lake County is mid-sized (population 350,000), and bigger than Escambia, which is one of the comparison counties. How different their report could have been with different comparison cases! This is why it’s a good idea to publicly specify your research design before you collect your data, so people don’t suspect you of data shenanigans like goosing your comparison cases.
What about that 2018 rebound? Wilcox and James stopped in 2017. With the 2018 data we can look further. Eighteen counties had increased divorce rates in 2018, and Duval’s increase was large, at 6%. Two of the comparison cases (Hillsborough and Escambia) had decreases in divorce, as did the state’s largest county, Miami-Dade (down 5%).
To summarize, Duval County had a larger than average decline in divorce rates in 2014-2017, compared with the rest of Florida, but then had a larger-than-average increase in 2018. That’s it.
Obviously, Communio wants to see more marriage, too, but here not even Wilcox can turn the marriage frown upside down.
Why no boom in marriage, with all those Internet hits and church sessions? They reason:
This may be because the COFI effort did not do much to directly promote marriage per se (it focused on strengthening existing marriages and relationships), or it may be because the effort ended up encouraging Jacksonville residents considering marriage to proceed more carefully. One other possibility may also help explain the distinctive pattern for Duval County. Hurricane Irma struck Jacksonville in September of 2017; this weather event may have encouraged couples to postpone or relocate their weddings.
OK, got it — so they totally could have increased marriage if they had wanted to. Except for the hurricane. I can’t believe I did this, but I did wonder about the hurricane hypothesis. Here is the number of marriages per month in Duval County, from 13 months before Hurricane Irma (September 2017) to 13 months after, with Septembers highlighted.
There were fewer marriages in September 2017 than 2016, 51 fewer, but September is a slow month anyway. And they almost made up for it with a jump in December, which could be hurricane-related postponements. But then the following September was no better, so this hypothesis doesn’t look good. (Sheesh, how much did they get paid to do this report? I’m not holding back any of the analysis here.)
Aside: Kristen & Jessica had a beautiful wedding in Jacksonville just a few days after Hurricane Irma. Jessica recalled, “Hurricane Irma hit the week before our wedding, which damaged our venue pretty badly. As it was outdoors on the water, there were trees down all over the place and flooding… We were very lucky that everything was cleaned up so fast. The weather the day of the wedding turned out to be perfect!” I just had to share this picture, for the Communio scrapbook:
So, to recap: Christian philanthropists and intrepid social scientists have pretty much reversed social disintegration and the media is just desperate to keep you from finding out about it.
Also, Brad Wilcox lies, cheats, and steals. And the people who believe in him, and hire him to carry their social science water, don’t care.
Recently I made the serious accusation that Brad Wilcox and his colleagues plagiarized me in a New York Times op-ed. After the blog post, I sent a letter to the Times and got no response. And until now Wilcox had not responded. But now thanks to an errant group email I had the chance to poke him, and he responded, in relevant part:
You missed the point of the NYT op-ed, which was to stress the intriguing J-Curve in women’s marital happiness when you look at religion and gender ideology. We also thought it interesting to note there is a rather similar J-Curve in women’s marital happiness in the GSS when it comes to political ideology, although the political ideology story was somewhat closer to a U-Curve in the GSS. Our NYT argument was not inspired by you, and our extension of the argument to a widely used dataset is not plagiarism.
Most of that comment is irrelevant to the question of whether the figure they published was ripped off from my blog; the only argument he makes is to underline the word not. To help readers judge for themselves, here is the sequence again, maybe presented more clearly than I did it last time.
Wilcox and Nicholas Wolfinger published this, claiming Republicans have happier marriages:
I responded by showing that when you break out the categories more you get a U-shape instead:
Subsequently, I repeated the analysis, with newer data, using political views instead of party identification (the U-shape on the right):
This is the scheme, and almost exactly the results, that Wilcox and colleagues then published in the NYT, now including one more year of data:
The data used, the control variables, and the results, are almost identical to analysis I did in response to their work. His response is, “Our NYT argument was not inspired by you.” So that’s that.
Of course, only he knows what’s in his heart. But the premise of his plagiarism denial is an appeal to trust. So, do you trust him?
There is a long history here, and it’s hard to know where to start if you’re just joining. Wilcox has been a liberal villain since he took over the National Marriage Project and then organized what became (unfortunately) known as the Regnerus study (see below), and a conservative darling since the top administration at the University of Virginia overturned the recommendation of his department and dean to grant him tenure.
So here are some highlights, setting aside questions of research quality and sticking to ethical issues.
Wilcox led the coalition that raised $785,000, from several foundations, used to generate the paper published under Mark Regnerus’s name, intended to sway the courts against marriage equality. He helped design the study, led the development of the media plan, arranged for the paper to be submitted to Social Science Research, and then arranged for himself to be one of the anonymous peer reviewers. To do this, he lied to the editor, by omission, about his contribution to the study — saying only that he “served on the advisory board.”
And then when the scandal blew up he lied about his role at the Witherspoon Institute, which provided most of the funding, saying he “never served as an officer or a staffer at the Witherspoon Institute, and I never had the authority to make funding or programmatic decisions at the Institute,” and that he was “not acting in an official Witherspoon capacity.” He was in fact the director of the institute’s Program on Family, Marriage, and Democracy, which funded the study, and the email record showed him approving budget requests and plans. To protect his reputation and cover up the lie, that position (which he described as “honorific”) has been scrubbed from his CV and the Witherspoon website. (In the emails uncovered later, the president of Witherspoon, Luis Tellez wrote, “we will include some money for you [Regnerus] and Brad on account of the time and effort you will be devoting to this,” but the amount he may have received has not been revealed — the grants aren’t on his CV.)
You might hold it against him that he organized a conspiracy to fight marriage equality, but even if you think that’s just partisan nitpickery, the fact that the research was the result of a “coalition” (their word) that included a network of right-wing activists, and that their roles were not disclosed in the publication, is facially an ethical violation. And the fact that it involved a series of public and private lies, which he has never acknowledged, goes to the issue of trust in every subsequent case.
Here I can’t say what ethical rule Wilcox may have broken. Academia is a game that runs on trust, and in his financial dealings Wilcox has not been forthcoming. There is money flowing through his work, but the source and purpose of that money are not disclosed when the work is published. For example, in the NYT piece Wilcox is identified only as a professor at the University of Virginia, even though the research reported there was published by the Institute for Family Studies. His faculty position, and tenure, are signals of his trustworthiness, which he uses to bolster the reputation of his partisan efforts.
The Institute for Family Studies is a non-profit organization that Wilcox created in 2009, originally called the Ridge Foundation. For the first four years the tax filings list him as the president, then director. Since 2013, when it changed its name to IFS, he has been listed as a senior fellow. Through 2017, the organization paid him more than $330,000, and he was the highest paid person. The funders are right-wing foundations.
Most academics want people to know about their grants and the support for their research. On his CV at the University of Virginia, however, Wilcox does not list the Institute for Family Studies in the “Employment” section, or include it among the grants he has received, even though it is an organization he created and built up, one that has so far grossed almost $3 million in total revenue. It is only mentioned in a section titled “Education Honors and Awards,” where he lists himself as a “Senior Fellow, Institute for Family Studies.” An education honor and award he gave himself, apparently.
He also doesn’t list his position on the Marco Rubio campaign’s Marriage & Family Advisory Board, where he was among those who “understand” that “Windsor and Obergefell are only the most recent example of our failure as a society to understand what marriage is and why it matters.”
Wilcox uses his academic position to support and legitimize his partisan efforts, and his partisan work to produce work under his academic title (of course IFS says it’s nonpartisan but that’s meaningless). If he kept them really separate that would be one thing — we don’t need to know what church academics belong to or what campaigns they support, except as required by law — but if he’s going to blend them together I think he incurs an ethical disclosure obligation.
Wilcox isn’t the only person to scrub Witherspoon from his academic record — which is funny because the Witherspoon Institute is housed at Princeton University (where Wilcox got his PhD). And the fact of removing Witherspoon from a CV was used to discredit a different anti-marriage-equality academic expert, Joseph Price at Brigham Young, in the Michigan trial that led to the Obergefell decision, because it made it seem he was trying to hide his political motivations in testifying against marriage equality. Here is the exchange:
Court proceedings are useful for bringing out certain principles. In this case I think they help illustrate my point: If Brad Wilcox wants people to trust his motivations, he should disclose the sources of support for his work.
In the New York Times yesterday, W. Bradford Wilcox, Jason S. Carroll and Laurie DeRose published an Op-Ed with the ridiculous title, “Religious Men Can Be Devoted Dads, Too.” In it they included this figure:
There are trivial differences between these figures. Theirs is from the General Social Survey for 2010-2018, mine was for 2010-2014. Theirs used political views while mine used party identification. Theirs is just women, and controls for age, education, and race; mine included men and women while controlling for gender, and I also controlled for income and religious attendance. (And they used gray for the middle bar, instead of purple.) However, in a subsequent post, from 2017, I redid the analysis for the years 2012-2016, using political views instead of party identification, in a post titled, “Who’s happy in marriage? (Not just rich, White, religious men, but kind of).” The results are almost identical to theirs in the Times (on the right, here):
Did they know about my pieces? I am certain they did, though I can’t prove it. It’s relevant that my first post, “That thing about Republican marriages…” was a critique of a post by Wilcox and Nick Wolfinger, which had only reported that Republicans were slightly happier in marriage than Democrats, which they called “The Republican Advantage in Marital Satisfaction.” My post was a correction, showing the U-shape that emerged when you broke out the categories — the change Wilcox and colleagues have now adopted. My follow-up post was reported by Bloomberg (and carried in the Chicago Tribune), and the Daily Mail. Both of my posts were tweeted by popular journalists who work in this area. I expect they would claim they never noticed my little blog posts.
You could also split hairs on the definition of plagiarism to try to defend this unethical behavior. The relevant passages of the American Sociological Association Code of Ethics:
(b) In their publications, presentations, teaching, practice, and service, sociologists provide acknowledgment of and reference to the use of their own and others’ work, even if the work is paraphrased and not quoted verbatim.
(c) While sociologists utilize and build on the concepts, theories, and paradigms of others, they may not claim credit for creating such ideas and must cite the creator of such ideas where appropriate.
But no one can seriously argue they shouldn’t have referenced my work.
Wilcox has done much worse, of course, most importantly leading a conspiracy to gin up research to turn the Supreme Court against same-sex marriage and then lying about his role in that conspiracy (the subject of a chapter in my book Enduring Bonds). And this is not a very important idea (their explanation is very flimsy, and I have no real explanation or theory to explain the pattern.) But this one goes on the list somewhere.
Why do I care? Is this just petty partisanship and even jealousy because Wilcox paid himself $80,000 of right-wing foundation money in 2016, and continues to publish low-quality research in important outlets like the New York Times? Draw your own conclusions. Of course his views are noxious to me. But more than that, in the game of trust that is the research ecosystem, reputations matter a lot. Once someone is tenured, and funded by unaccountable political actors, our options for defending the system are limited. The norms of publishing, especially outside academia, don’t require research transparency (like their current report, made to order for conservative funders, not the research community or peer review). If someone says, “This is my finding,” publishers (like the Times) usually vet the researcher instead of the research.
I don’t believe in lifetime bans, and I don’t care about atonement for research ethics. My question is, “Can we trust this person’s research?” Before we can answer that affirmatively, we need to have an accounting of past malfeasance that makes clear future work will be clean. Until then, I don’t mind spending a few minutes now and then reminding people that Wilcox (like Mark Regnerus) is not trustworthy.
Almost 2,000 people retweeted this from Brad Wilcox the other day.
Brad shared the graph from Charles Lehman (who noticed later that he had mislabeled the x-axis, but that’s not the point). First, as far as I can tell the values are wrong. I don’t know how they did it, but when I look at the 2016-2018 General Social Survey, I get 4.3 average hours of TV for people in the poorest families, and 1.9 hours for the richest. They report higher highs (looks like 5.3) and lower lows (looks like 1.5). More seriously, I have to object to drawing what purports to be a regression line as if those are evenly-spaced income categories, which makes it look much more linear than it is.
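For reference, the kind of check involved here — a weighted group mean of TV hours with a rough confidence interval — can be sketched like this. The data are synthetic, and the simple frequency-weight treatment of the standard error is a simplification of what proper survey software does:

```python
# Weighted group mean of TV hours with a rough normal-approximation CI.
# Synthetic data for one hypothetical income category; the SE treatment
# below ignores the survey design, a simplification.
import math
import random

random.seed(0)
sample = [(max(random.gauss(3.0, 2.0), 0.0), random.uniform(0.5, 1.5))
          for _ in range(400)]  # (tv_hours, weight) pairs

wsum = sum(w for _, w in sample)
mean = sum(h * w for h, w in sample) / wsum
var = sum(w * (h - mean) ** 2 for h, w in sample) / wsum
se = math.sqrt(var / len(sample))
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(round(mean, 2), (round(lo, 2), round(hi, 2)))
```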
I fixed those errors — the correct values, and the correct spacing on the x-axis — then added some confidence intervals, and what I get is probably not worth thousands of self-congratulatory woots, although of course rich people do watch less TV. Here is my figure, with their line (drawn in by hand) for comparison:
Charles and Brad’s post got a lot of love from conservatives, I believe, because it confirmed their assumptions about self-destructive behavior among poor people. That is, here is more evidence that poor people have bad habits and it’s just dragging them down. But there are reasons this particular graph worked so well. First, the steep slope, which partly results from getting the data wrong. And second, the tight fit of the regression line. That’s why Brad said, “Whoa.” So, good tweet — bad science. (Surprise.) Here are some critiques.
First, this is the wrong survey to use. Since 1975, GSS has been asking people, “On the average day, about how many hours do you personally watch television?” It’s great to have a continuous series on this, but it’s not a good way to measure time use because people are bad at estimating these things. Also, GSS is not a great survey for measuring income. And it’s a pretty small sample. So if those are the two variables you’re interested in, you should use the American Time Use Survey (available from IPUMS), in which respondents are drawn from the much larger Current Population Survey samples and asked to fill out a time diary. On the other hand, GSS would be good for analyzing, for example, whether people who believe the Bible is “the actual word of God and is to be taken literally, word for word” watch TV more than those who believe it is “an ancient book of fables, legends, history, and moral precepts recorded by men.” (Yes, they do, by about an hour more.) Or looking at all the other social variables GSS is good for.
On the substantive issue, Gray Kimbrough pointed out that the connection between family income and TV time may be spurious, and is certainly confounded with hours spent at work. When I made a simple regression model of TV time with family income, hours worked, age, sex, race/ethnicity, education, and marital status (which again, should be done better with ATUS), I did find that both hours worked and family income had big effects. Here they are from that model, as predicted values using average marginal effects.
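A minimal sketch of that kind of model, with synthetic data and plain least squares rather than the full specification: in a linear model the average marginal effect of a variable is just its coefficient, so predicted values at low and high income, holding hours worked at its mean, show the income gradient directly. All the numbers below are made up for illustration.

```python
# Minimal OLS sketch with synthetic data: TV hours on income and hours
# worked. Predictions at low vs. high income (hours worked held at its
# mean) show the income gradient.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
income = rng.uniform(10, 150, n)       # family income, in $1,000s
hours_worked = rng.uniform(0, 60, n)   # weekly work hours
tv = 5 - 0.015 * income - 0.03 * hours_worked + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), income, hours_worked])
beta, *_ = np.linalg.lstsq(X, tv, rcond=None)

def predict(inc, hrs):
    return beta[0] + beta[1] * inc + beta[2] * hrs

hw = hours_worked.mean()
print(round(predict(20, hw), 2), round(predict(140, hw), 2))
```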
The banal observation that people who spend more time working spend less time watching TV probably wouldn’t carry the punch. Anyway, neither resolves the question of cause and effect.
Fits and slopes
On the issue of the presentation of slopes, there’s a good lesson here. Data presentation involves trading detail for clarity. And statistics have both a descriptive and an analytical purpose. Sometimes we use statistics to present information in simplified form, which allows better comprehension. We also use statistics to discover relationships we couldn’t otherwise see — such as multivariate relationships that you can’t discern visually. The analyst and communicator has to choose wisely what to present. A good propagandist knows what to manipulate for political effect (a bad one just tweets out crap until they get lucky).
Here’s a much less click-worthy presentation of the relationship between family income and TV time. Here I truncate the y-axis at 12 hours (cutting off 1% of the sample), translate the binned income categories into dollar values at the middle of each category, and then jitter the scatterplot so you can see how many points are piled up in each spot. The fitted line is Stata’s median spline, with 9 bands specified (so it’s the median hours at the median income in 9 locations on the x-axis). I guess this means that, at the median, rich people in America watch about an hour of TV per day less than poor people, and the action is mostly under $50,000 per year. Woot.
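Stata's median spline isn't a standard Python routine, but the same idea — the median of y within equal-count bands of x — is easy to sketch with synthetic data:

```python
# Median-in-bands stand-in for Stata's median spline: the median of TV
# hours within 9 equal-count income bands. Data are synthetic.
import random
import statistics

random.seed(1)
incomes = sorted(random.uniform(5, 200) for _ in range(900))  # $1,000s
data = [(inc, max(0.0, 4 - 0.012 * inc + random.gauss(0, 1.5)))
        for inc in incomes]  # (income, tv_hours)

bands = 9
size = len(data) // bands
points = []
for b in range(bands):
    chunk = data[b * size:(b + 1) * size]
    med_inc = statistics.median(inc for inc, _ in chunk)
    med_tv = statistics.median(tv for _, tv in chunk)
    points.append((round(med_inc), round(med_tv, 2)))
print(points)
```

Plotting these nine (median income, median hours) points over the jittered scatter gives the fitted line without imposing linearity.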
Finally, a word about binning and the presentation of data (something I’ve written about before, here and here). We make continuous data into categories all the time, starting from measurement. We usually measure age in years, for example, although we could measure it in seconds or decades. Then we use statistics to simplify information further, for example by reporting averages. In the visual presentation of data, there is a particular problem with using averages or data bins to show relationships — you can show slopes that way nicely, but you run the risk of making relationships look more closely correlated than they are. This happens in the public presentation of data when analysts are showing something of their work product — such as a scatterplot with a fitted line — to demonstrate the veracity of their findings. When they bin the data first, this can be very misleading.
Here’s an example. I took about 1,000 men from the GSS and compared their age and income. Between the ages of 25 and 59, older men have higher average incomes, but the fit is curved, with a peak around 45. Here is the relationship, again using jittering to show all the individuals, with a linear regression line. The correlation is .23.
That might be nice to look at but it’s hard to see the underlying relationship. It’s hard to even see how the fitted line relates to the data. So you might reduce it by showing the average income at each age. By pulling the points together vertically into average bins, this shows the relationship much more clearly. However, it also makes the relationship look much stronger. The correlation in this figure is .65. Now the reader might think, “Whoa.”
Note this didn’t change the slope much (it still runs from about $30k to $60k), it just put all the dots closer to the line. Finally, here it is pulling the averages together in horizontal bins, grouping the ages in fives (25-29, 30-34 … 55-59). The correlation shown here is .97.
If you’re like me, this is when you figured out that reducing this to two dots would produce a correlation of 1.0 (as long as the dots aren’t exactly level).
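The whole sequence is easy to reproduce with simulated data: a curved age-income relationship with lots of individual noise, then correlations computed on the raw points, on single-year means, and on five-year group means. The exact values depend on the simulation (they won't match the .23/.65/.97 from the GSS), but the pattern — correlation rising as you bin, with the slope unchanged — is the point:

```python
# Binning raises the correlation without changing the slope: synthetic
# age-income data with a curved profile and lots of individual noise.
import math
import random
import statistics

random.seed(2)
ages = [random.randint(25, 59) for _ in range(1000)]
incomes = [60 - 0.05 * (a - 55) ** 2 + random.gauss(0, 30) for a in ages]

def corr(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r_raw = corr(ages, incomes)

# Mean income at each single year of age
by_age = {}
for a, inc in zip(ages, incomes):
    by_age.setdefault(a, []).append(inc)
pts = sorted((a, statistics.fmean(v)) for a, v in by_age.items())
r_binned = corr([a for a, _ in pts], [m for _, m in pts])

# Mean income in five-year age groups (25-29, 30-34, ..., 55-59)
by_grp = {}
for a, inc in zip(ages, incomes):
    by_grp.setdefault(a // 5 * 5, []).append(inc)
gpts = sorted((g, statistics.fmean(v)) for g, v in by_grp.items())
r_grouped = corr([g for g, _ in gpts], [m for _, m in gpts])

print(round(r_raw, 2), round(r_binned, 2), round(r_grouped, 2))
```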
To make good data presentation tradeoffs requires experimentation and careful exposition. And, of course, transparency. My code for this post is available on the Open Science Framework here (you gotta get the GSS data first).
W. Bradford Wilcox and Lyman Stone, both fellows who are fellows at the Institute for Family Studies (still waiting for the acknowledgment for correcting their data error), have a bad new post up called “The Happiness Recession.”
I made some annotations on their essay using the excellent Hypothesis tool. Now you can read the essay with my comments here. Click on the little arrow at the top right to expand the comments. Here are the highlights, with figures.
Note: Get a free Hypothesis account and you can annotate any page on the web, including in closed groups if you want to, and you can share your Hypothesis profile so people can see all the things you’ve annotated, like mine. You can also comment on SocArXiv papers this way.
The main problem is that there is no evidence of a happiness recession. They make their case with this figure from the General Social Survey:
To people not familiar with the General Social Survey, like, apparently, whoever edited this piece at the Atlantic, that 2018 drop in happiness might look dramatic. A lot of publications are used to mentioning margins of error when they report survey results. But maybe because WilcoxStone reminded the editor in the opening paragraph that the GSS is “a key barometer of American social life,” they didn’t bother. It seems so authoritative (and it’s a great, indispensable resource, publicly funded, and freely available for all researchers). But it’s a sample survey, and it’s not that big. Here are the actual numbers of men ages 18-34 who answered the happiness question on the survey:
It’s a little funky because of the weights I’m not showing, but in round numbers, if 8 young men had said “very happy” instead of “pretty happy” in 2018, WilcoxStone wouldn’t have been able to write this article — the percentage wouldn’t have dropped. So, it’s a great survey, but you have to know what you’re doing with it, and you have to be honest about it.
There are different ways to assess significance, and no one rule, but I made this figure showing the percentage of young adults describing themselves as “very happy” in each year of the survey, with a simple p<.05 significance test for whether each year is different from 2018.
The 2018 level is the lowest point estimate, but it’s not distinguishable from about half the previous survey years at conventional levels of statistical significance. Eight pretty happy men doing all the work here.
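For anyone who wants to see the mechanics, here is a minimal two-proportion z-test in Python. The counts are hypothetical (the post doesn’t give the exact tabulations), chosen so that moving 8 respondents flips the comparison across the p < .05 line:

```python
# Two-proportion z-test: is the share "very happy" in 2018 different from an
# earlier year? Counts below are invented for illustration.
from math import sqrt, erf

def two_prop_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2 (normal approx)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    pval = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, pval

# hypothetical: 75/300 "very happy" in 2018 vs 100/300 in an earlier year
z, p = two_prop_z(75, 300, 100, 300)
print(round(z, 2), round(p, 4))       # significant at p < .05

# shift just 8 respondents from "pretty happy" to "very happy" in 2018
z2, p2 = two_prop_z(83, 300, 100, 300)
print(round(z2, 2), round(p2, 4))     # no longer significant
```

With samples this size, a handful of respondents really can make or break the “recession.”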
A lot of what’s ridiculous about the post follows from this simple manipulation — pretending an insignificant change is very important. One other thing was totally wrong, though.
WilcoxStone have some incoherent theory about how friendship might play a role in the happiness recession. Maybe they were expecting to see a decline in friendship, to support their get-married-in-church-and-stay-there predetermined conclusion. Anyway, they produce this wrong figure:
I say “wrong” because, although they never define “regularly,” and neither does the GSS, men in the survey spend more evenings with their friends than women do, and nothing seems to match these numbers. Maybe you can figure it out. Here are the distributions for young men and women in 2018 — see if you can see how they could get 64% for women and 35% for men, the last points in their figure:
In response to my tweets about this, linking to the annotated post, Stone responded that they “just use a different definition of regular,” before clarifying that they “just cut it a different way,” and finally acknowledging that “maybe there was a mislabeling in that,” and promising to look into it.
Anyway, they conclude their analysis with some counterfactual models to explain the “happiness trend.” No details are provided, and definitely nothing like standard errors or significance tests (and if the friend contact variable is screwed up, it’s useless). Here are the results:
If you look carefully, you can see that the biggest difference comes from adjusting for sexual frequency — which has declined in the sample, and sexual frequency is associated with higher happiness scores. That dotted light blue line ends at 29%. And the simulations start at 28%. So, if sexual frequency were held constant, they proclaim, happiness among young adults would be a point higher. Just kidding, they don’t give the number, they just say, “If Americans still had sex like they did in 2008, or even 2012, we might be a much happier country.” OK then.
After apparently reading my annotations, Stone was effusive on Twitter, delighted that, “And I think anyone who reads his annotations will see that he basically doesn’t refute anything we say. :)”
I appreciate @familyunequal annotating our work! It’s a good form of interaction! And I think anyone who reads his annotations will see that he basically doesn’t refute anything we say. 🙂 https://t.co/UEMY2iQZPo
Anyway, see if you can figure out this conclusion:
Thus, while most of the decline in happiness [there is no decline in happiness -pnc] is about declining sex, that’s not the end of the story. Declining sex is at least partly about family and religious changes that make it harder for people to achieve stable, coupled life at a young age. If we’d like more young adults to experience the joy of sex, we will have to either revive these institutions or find new ways to kindle love in the rising generation.
Your guess is as good as mine. But if it just means go to church, get married, and then have sex, and you’ll be happy, I don’t think they needed a bologna statistical analysis to reach that.
Publications like the Atlantic put out a lot of content, much of which they get for free or low cost. A lot of that cheap material comes from academics (which Wilcox sort of is). And some comes from think tank snake oil salespeople (which Wilcox definitely is), who use their right-wing, tax-subsidized foundation money to slime the public square. It might be too much to expect that publications with little budget for writers would somehow have the ability to vet statistical analyses. So they run on trust. People like Wilcox, who has a long proven record of lying about research, and Stone, prey on the credulity of the economically precarious media. People who know better should speak up and help limit the damage.
One critique of the marriage promotion movement is that it ignores the problem of available spouses, especially for Black women. Joanna Pepin and I addressed this with an analysis of marriage markets in this paper. White women ages 20-45, who are more than twice as likely to marry as Black women, live in metro areas with an average of 118 unmarried White men per 100 unmarried White women. Black women, on the other hand, face markets with only 78 single men per 100 single women. This is one reason for the difference in marriage rates; given very low rates of intermarriage, especially for Black women, some women essentially can’t marry.
But surely some people are still passing up potential marriages, or so the marriage promoters would have us believe, and in so doing they undermine their own futures and those of their children. Even if you can get past the sex ratio problem, you still have the issue of the benefits of marriage. Of course married people, and their kids, are better off on average. (There are great methodological lessons to be learned from their big-lie use of this fact.) But who gets those benefits? The intellectual water-carriers of the movement, principally Brad Wilcox and his co-authors, always describe the benefits of increasing marriage as if the next marriage to occur will provide the same benefits as the average existing marriage. I wrote about how this is wrong in Enduring Bonds:
The idea that the “benefits” of marriage—that is, the observed association between marriage and nonpoverty—would accrue to single mothers if they “simply” married their current partners is bonkers. The notion of a “marriage market” is not perfect, but there is something like a marriage queue that arranges people from most likely to least likely to marry. When you say, “Married people are better off than single people,” a big part of what you’re observing is that, on average, the richer, healthier, better-at-relationships people are at the front of that queue, more likely to marry and then to display what look like the benefits of marriage. Those at the back of the queue, who are more (if not totally) “unmarriageable,” clearly aren’t going to have those highly beneficial marriages if they “simply” marry the closest person.
In fact, I assume this problem has gotten worse as marriage has become more selective, as “it’s increasingly the most well off who are getting and staying married,” and those who aren’t marrying “may not have the assets that lead to marriage benefits: skills, wealth, social networks, and so on.”
Note on race
People who promote marriage don’t like to talk about race, but if it weren’t for race — and racism — they would never have gotten as far as they have in selling their agenda. They use supposedly race-neutral language to talk about fatherhood and a “culture of marriage” and “sustainably escaping poverty,” in ways that are all highly relevant to Black families and racial disparities. If you think the problem of marriage is that poor people are not marrying enough, you should not avoid the fact that you’re talking about race. Black women, especially mothers, are much less likely to be married than most other groups of women, even at the same level of income or education (last I checked Black college graduates were 5-times more likely than White college graduates to be single when they had a baby). So, don’t avoid that this is about race, own it — the demographic facts and political machinations in this area are all highly interwoven with race. I do this analysis, like the paper Joanna and I did, separately for Black and White women, because that’s the main faultline in this area. The code I share below is adaptable to use with other groups as well.
In this data exercise I try to operationalize something like that marriage market queue, to show that women who are least likely to marry are also least likely to enter an economically beneficial marriage if they did marry. See how you like this, and let me know what you think. Or take the data and code and come up with a different way of doing it.
The logic is to take a sample of never-married women, and women who just got married in the last year, and predict membership in the latter group. This generates a predicted probability of marrying for each woman, and it means I can look at the never-married women and see which among them are more or less likely to marry in a given year. For example, based on the models below, I would estimate that a Black woman under age 25, with less than a BA degree, who had a job with less-than-average earnings, has a 0.4% probability of marrying in one year. On the other hand, if she were age 25+, with a BA degree and above-average earnings, her chance of marrying rises to 3.5% per year. (Round numbers.)*
Next, I look at the husbands of women who married men in the year prior to the survey, and I assign them economic scores on an 11-point scale (this is totally arbitrary): up to four points for education, up to four points for earnings, and up to three points for employment level (weeks and hours worked in the previous year). So, a woman whose husband has a high school education, earned $30,000 last year, and worked full-time, year-round, would have 7 points.
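As a concrete illustration, here’s a hypothetical scoring function in Python. The post specifies only the point totals (4 + 4 + 3, for 11 maximum); the cutoffs below are my guesses, tuned so the worked example (high school education, $30,000, full-time year-round) comes out to 7 points:

```python
# Hypothetical economic score: 0-4 points for education, 0-4 for earnings,
# 0-3 for work level. All cutoffs are illustrative assumptions.
def econ_score(edu_years, earnings, weeks, hours):
    # education: assumed thresholds at HS, some college, BA, advanced degree
    edu_pts = sum(edu_years >= t for t in (12, 14, 16, 18))
    # earnings: assumed dollar thresholds
    earn_pts = sum(earnings >= t for t in (10_000, 20_000, 30_000, 50_000))
    # work level: part-year, full-year, full-year and full-time
    work_pts = (weeks >= 27) + (weeks >= 50) + (weeks >= 50 and hours >= 35)
    return edu_pts + earn_pts + work_pts

# the post's example: HS grad, $30k, full-time year-round -> 7 points
print(econ_score(12, 30_000, 52, 40))
```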
Finally, I show the relationship between the odds of marriage for women who didn’t get married and the economic score of the men they would have married if they did.
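The whole design can be sketched with synthetic data. Everything below is illustrative — made-up variables and coefficients and a hand-rolled logit, not the actual ACS models — but it shows the mechanics: fit a logit for marriage, predict husband scores from the same covariates, and compare the two predictions among the never-married.

```python
# Schematic version of the marriage-queue analysis on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
ba = rng.binomial(1, 0.3, n)                   # BA degree
earn = rng.normal(0, 1, n) + 0.5 * ba          # standardized log earnings
X = np.column_stack([np.ones(n), ba, earn])

# simulate: better-off women marry more, and marry higher-scoring husbands
logit = -3.5 + 1.0 * ba + 0.6 * earn
married = rng.binomial(1, 1 / (1 + np.exp(-logit)))
score = np.clip(5 + 1.5 * ba + 1.0 * earn + rng.normal(0, 1, n), 0, 11)

# (1) logit of just-married vs never-married, fit by Newton-Raphson
b = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ b))
    W = p * (1 - p)
    b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (married - p))
p_marry = 1 / (1 + np.exp(-X @ b))             # predicted marriage probability

# (2) OLS of husband's score on the same covariates, just-married women only
m = married == 1
g, *_ = np.linalg.lstsq(X[m], score[m], rcond=None)
pred_score = X @ g                             # predicted husband score

# among never-married women, compare predicted husband scores for those
# below vs above the median predicted probability of marrying
nm = married == 0
lo = p_marry[nm] < np.median(p_marry[nm])
print(pred_score[nm][lo].mean(), pred_score[nm][~lo].mean())
```

By construction, never-married women with lower marriage odds get lower predicted husband scores, which is the pattern the real analysis tests for in the ACS data.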
There are two descriptive conclusions, which I assumed I would find: (1) women who get married marry men with better economic scores than the women who don’t get married would if they did get married; and, (2) the greater the odds of marriage, the better the economic prospects of the man they would marry. The substantive conclusion from this is that marriage promotion, if it could get more people to marry, would pull from the women on the lower rungs of marriage probability, so those new marriages would be less economically beneficial than the average marriage, and the use of married people’s characteristics to project the benefits of marriage for unmarried people is wrong. Like I said, I already believed this, so this is a way of confirming it or showing the extent to which it fits my expectations. (Or, I could be wrong.)
Here are the details.
I use the 2012-2016 five-year American Community Survey data from IPUMS.org (for a larger sample). The sample is women ages 18-44, not living in group quarters, single-race Black or White, non-Hispanic, and US-born. I further limited the sample to those who never married, and those who are married for the first time in the previous 12 months. That condition — just married — is the dependent variable in a model predicting odds of first marriage. (Women with female spouses or partners are excluded, too.) The variables used to predict marriage are age (and its square), education, earnings in the previous year (logged), having no earnings in the previous year (these women are most likely to marry), disability status, metro area residence, and state dummy variables. It’s a simple model, not trying for statistical efficiency but rather the best prediction of marriage odds. Then I use the same set of variables, limiting the analysis to just-married women, to predict their husbands’ economic scores. The regression models are in a table at the end.**
Figure 1 shows how the prediction models assign marriage probabilities. White women have much higher odds of marrying, and those who married have higher odds than those who didn’t, which is reassuring. In particular, a large proportion of never-married Black women are predicted to have very low odds of marrying (click to enlarge).
Figure 2 shows the distribution of husbands’ economic scores for Black and White women who married and those who didn’t. The women who didn’t marry have lower predicted husband scores, with the model giving them husbands with a mode of about 7.0 for Whites and 6.5 for Blacks (click to enlarge).
Finally, the last figure includes only never-married women. It shows the relationship between predicted marriage probability and predicted husband score, using median splines. So, for example, the average unmarried Black woman has a marriage probability of about 1.7%. Figure 3 shows that her predicted husband would have a median score of about 6.4. So he could be a full-time, full-year worker with a high school education, earning $19,000 per year, which would not be enough to lift her and one child out of poverty. The average never-married White woman has a predicted marriage probability of 5.1%, and her imaginary husband has a score of about 7.4 (e.g., a similar husband, but earning $25,000 per year).
Figure 3 implies what I thought was obvious at the beginning: the further down the marriage market queue you go, the worse the economic prospects of the men these women would marry, if there were men for them to marry (whom they wanted to marry, and who wanted to marry them).
I will now be holding my breath while marriage promotion activists develop a more sensible set of assumptions for their assessment of the benefits of the promoted marriages they assure us they will be able to conjure if only we give them a few billion more dollars.
I’m posting the data and code used on the Open Science Framework, here. Please feel free to work with it and let me know what you come up with!
* This looks pretty similar to what Dohoon Lee did in this paper, including the figures. Since I was on his dissertation committee and read his paper, I credit him with this idea — I should have remembered it earlier.
** Here are the regression models used to (1) predict marriage, and then (2) predict husband’s economic scores.
Update: IFS has taken down the report I critiqued here, and put up a revised report. They have added an editor’s note, which doesn’t mention me or link to this post:
Editor’s Note: This post is an update of a post published on March 14, 2018. The original post looked at marriage trends by education among all adults under age 25. It gave the misimpression that college graduates were more likely to be married young nowadays, compared to non-college graduates.
Getting married at a young age used to be more common among adults who didn’t go to college. But the pattern has reversed in the past decade or so. In 2016, 9.4% of college graduates ages 18 to 24 have ever been married, which is higher than the share among their peers without a college degree (7.9%), according to my analysis of the most recent Census data.
And then the dramatic conclusion:
“What this finding shows is that even at a young age, college-educated adults today are more likely than their peers without a college degree to be married. And this is new.”
That would be new, and surprising, if it were true, but it’s not.
Here’s the figure that supports the conclusion:
It shows that 9.4% of college graduates in the age range 18-24 have been married, compared with 7.9% of those who did not graduate from college. (The drop has been faster for non-graduates, but I’m setting aside the time trend for now.) Honestly, I guess you could say, based on this, that young college graduates are more likely than non-graduates to “be married,” but not really.
The problem is that there are very, very few college graduates at ages 18-19. The American Community Survey, which they used here, reports only about 12,000 in the whole country, compared with 8.7 million people without college degrees ages 18-19 (this is based on the public use files that IPUMS.org uses, which is what I use in the analysis below). Wow! There are lots and lots of non-college graduates below age 20 (including almost everyone who will one day be a college graduate!), and very few of them are married. So it looks like the marriage rate is low for the non-graduate group 18-24 overall. Here is the breakdown by age and marital status for the two groups: less than BA education, and BA or higher education — on the same population scale, to help illustrate the point:
If you pool all the years together, you get a higher marriage rate for the college graduates, mostly because there are so few college graduates in the younger ages when hardly anyone is married.
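A toy example makes the composition problem concrete. The population counts and marriage rates below are invented for illustration, but they show how the pooled graduate rate can exceed the pooled non-graduate rate even when graduates marry less at every single age:

```python
# Simpson's-paradox-style composition effect: the huge, rarely-married
# youngest ages dominate the non-graduate pool. All numbers are made up.
pop_nongrad = {18: 4_400_000, 19: 4_300_000, 22: 3_000_000, 24: 2_800_000}
pop_grad    = {18: 6_000,     19: 6_000,     22: 800_000,   24: 1_200_000}
rate_nongrad = {18: 0.01, 19: 0.02, 22: 0.10, 24: 0.20}
rate_grad    = {18: 0.01, 19: 0.02, 22: 0.06, 24: 0.13}  # lower at every age

def pooled(pop, rate):
    """Population-weighted ever-married rate across ages."""
    total = sum(pop.values())
    return sum(pop[a] * rate[a] for a in pop) / total

print(round(pooled(pop_nongrad, rate_nongrad), 3))  # pooled non-graduate rate
print(round(pooled(pop_grad, rate_grad), 3))        # pooled graduate rate: higher
```

The pooled graduate rate comes out higher purely because almost no graduates sit in the 18-19 cells where nobody is married.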
To show the whole thing in terms of marriage rates, here is the marital status for the two groups at every age from 15 (when ACS starts asking about marital status) to 54.
Ignoring 19-21, where there are a tiny number of college graduates, you see a much more sensible pattern: college graduates delay marriage longer, but then have higher rates at older ages (starting at age 28), for all the reasons we know marriage is ultimately more common among college graduates. In fact, if you used ages 15-24 (why not?), you get an even bigger difference — with 9.4% of college graduates married and just 5.7% of non-college graduates. Why not? In fact, what about ages 0-24? It would make almost as much sense.
Another way to do this is just to look at 24-year-olds. Since we’re talking about the ever-married status, and mortality is low at these ages, this is a case where the history is implied in the cross-sectional data. At age 24, as the figure shows, 19.9% of non-college graduates have been married, compared with 12.9% of college graduates. Early marriage is not more common for college graduates.
In general, I don’t recommend comparing college graduates and non-graduates, at least in cross-sectional data, below age 25. Lots of people are still finishing college below age 25 (and increasingly after that age as well). There is also an important issue of endogeneity here, which always makes education and age analysis tricky: some people (mostly women) don’t finish college because they get married and have children.
Anyway, it looks to me like someone working for a pro-marriage organization saw what seemed like a story implying marriage is good (that’s why college graduates do it, after all), and one that also fits with the do-what-I-say-not-what-I-do criticism of liberals, who are supposedly not promoting marriage among poor people while they themselves love to get married (a critique made by Charles Murray, Brad Wilcox, and others). And, before thinking it through, they published it.
Mistakes happen. Fortunately, I dislike the Institute for Family Studies (see the whole series under this tag), and so I read it and pointed out this problem within a couple hours (first on Twitter, less than two hours after Wang tweeted it). It’s a social media post-publication peer review success story! If they correct it.
Anyway, this is about the Arizona one. I’ll first raise the possibility that it’s complete bologna – as in, fraudulent or error-ridden – and then discuss how its conclusions are dishonest at best, even if the analysis is not technically wrong but rather just presented terribly.
Update: with the report corrected to show the complete data, the analysis now replicates fine. So I set aside the bologna issue. I leave this section here just so you can see the research design, but the main argument is in the next section.
First, the bologna issue
The report uses demographic data from 99 Arizona school districts to model graduation rates, and the gender gap in graduation rates. Their conclusion, based on two regression models using districts as the units of analysis and demographic indicators as the predictors, is this:
In Arizona, public school districts with better-educated and more married parents boast higher high school graduation rates. Gender equity is also greater in districts with more married parents. That is, boys come closer to matching the high school graduation rates of girls in districts with more married-parent families. Moreover, married parenthood is a better predictor of these two high school graduation outcomes than are child poverty, race, and ethnicity in public school districts across the Grand Canyon state.
To pad out the report, they also include appendix tables, so it’s theoretically possible to replicate their regressions. Unfortunately, unless I’m missing something, they don’t replicate. I wouldn’t normally bother rerunning someone’s regression, especially when the argument they’re building is so wrong-headed (see below), but because we know from long experience that Wilcox does not behave honestly (in methods and ethics), what the heck.
The report says, “Graduation rates and male/female graduation ratios for the 99 Arizona school districts in our study are shown in Table A1 in the Appendix.” Table A2 then lists the districts again, with the demographic variables. Unfortunately, table A2 only includes 83 districts – and the 16 missing are exactly those from Indian-Oasis to Paradise Valley in the alphabetical list of district names, so apparently an error handling the data. So I could only use 83 of the 99 for the regressions. Since I don’t know when they lost those 16 districts, I don’t know if it was before or after running the regressions (there are no Ns or standard errors on their regression tables).
For each of their dependent variables – graduation rate, and the male/female ratio in graduation rates – they list bivariate correlations, and adjusted betas from a multivariate regression. Here are their figures, with mine next to them. The key differences are highlighted:
If they’re using 99 cases and I have 83 (actually 81 for the gender gap because of missing data), you would expect some difference. But these are very similar, including the bivariate correlations and the R-squareds for the models.
The weird thing is that the biggest difference is exactly on their biggest claim: “married parenthood is a better predictor of these two high school graduation outcomes than are child poverty, race, and ethnicity…” That is based on the assertion that .29 is larger than -.28 (very lucky for them, that tiny, insignificant difference in magnitude!). In my model the minority-share effect is more than twice as large as the married-parenthood effect. So, huh. It’s definitely possible Brad simply lied about his results and made up a few numbers. (And I’m just using the data they include in the report.) But now let’s pretend he didn’t.
Update: with the complete data I can report that those two betas are actually .2865 (.29!) versus .2847 (.28!). The idea that one is a “better” predictor than the other is clearly not serious. Further, for some reason (we can only guess), they combined percent Black, Hispanic, and American Indian together into “minority,” which produced the .28 result. If they had entered them into the model separately, they would find that Hispanic and American Indian effects are each bigger than the married parent effect, as I show here:
So much for the headline result. Anyway, back to the argument…
The point of the analysis is to make policy recommendations. They conclude:
If the state enjoyed more stable families, it might also see better educational outcomes among its children. It’s for that reason that Arizona should consider measures designed to strengthen and stabilize families.
Their recommendations to that end are vocational education and marriage promotion.
Private and public initiatives to provide social marketing on behalf of marriage could prove helpful. Campaigns against smoking and teenage pregnancy have taught us that sustained efforts to change behavior can work.
First, I’m not an education specialist (and neither are they), but shouldn’t there be some kind of policy variables in this analysis, like per-pupil spending, or teacher salaries, or something about curriculum or programming? It’s unusual to use only demographic variables and then conclude that what we need is a policy to change the demographics. It’s just not a serious analysis. (Please also remember that “controlling for income” is not an adequate control for economic conditions and status.)
But second, given that the first billion dollars spent promoting marriage produced absolutely no increase in marriage, is there any possible way Brad legitimately thinks this is the best way to improve graduation rates?
These are just two ideas. More should be explored. The bottom line: policymakers, educators, business leaders, and religious leaders in Arizona need to address the fragile foundations of family life if they hope for the state’s children to lead the nation in academic achievement.
Does this report really support that “bottom line”? Would it be better to spend money promoting marriage than to spend the same amount of money on some effort to improve schools? That’s obviously a dumb idea, but is it possible he really believes it? These are the only policies proposed. Maybe I’m wrong, but I doubt he believes it. I think he wants to promote marriage promotion programs for other reasons: to fund him and his compatriots, to support pro-marriage ideology, and so on. Not to improve graduation rates in Arizona schools. But, maybe I’m wrong.
And a laptop
I think what Brad is really doing is noise noise statistics statistics marriage-is-good expertise trust me fund me. The details clearly aren’t that important.
Meanwhile, not coincidentally, things are looking up for Brad at the Institute for Family Studies (IFS), the organization he created to handle the foundation-money rake. He started in 2011 as president / director of IFS at a salary of $35,000. After paying himself a paltry $9,999 in 2012, he started improving his productivity, paying himself $50,000 in 2013, and then $80,400 in 2014 as a Senior Fellow, the last year for which I found a 990 form. Much of that money is coming from the Bradley Foundation (which also funded the Regnerus/Wilcox study) — their 2015 report lists $75,000 for IFS, so projections are good for next year. This is, of course, on top of what he gets for his service to the public at the University of Virginia.
The IFS disclosure forms also show purchase of a MacBook Pro. Which might or might not have been for Brad.
I do not make this case, and make it personal, because I disagree with Brad about politics. There are lots of people I disagree with even more than him, and I don’t spend all day criticizing them. The dishonesty offends me because it’s work and issues I care about, it hurts real people, I’m well situated to expose it, and his corporate-Christian-right megaphone is big, so it shouldn’t go unchallenged.
My question for Marco Rubio is, what are you going to do about this gay marriage you are still so against?
In his closing statement at last night’s debate, Marco Rubio said,
Our culture’s in trouble. Wrong is now considered right, and right is considered wrong. All the things that once held our families together are under constant assault. … If you elect me president we are going to re-embrace free enterprise, so that everyone can go as far as their talent and their work will take them. We are going to be a country that says that life begins at conception, and life is worthy of the protection of our laws. We’re gonna be a country that says that marriage is between one man and one woman.
Here it is:
This wrong-right thing is not exactly specified, but in context it clearly refers to abortion and gay marriage — so wrong, but not “considered right.”
What does it mean to say, “We’re gonna be a country that says that marriage is between one man and one woman”? What does a country say? Does anyone really listen to what these people say?
Yes, they do. Because as of the morning of yesterday’s debate Rubio has a Marriage & Family Advisory Board to make sure that his words have meaning, and that right returns to right, while wrong is again returned to its proper place: hidden, shamed, and reviled.
Here’s the charge of the board:
This morning, the Marco Rubio for President campaign is excited to announce the formation of Marco Rubio’s Marriage & Family Advisory Board. Marco believes the family is the most important institution in society. He understands that in a vibrant culture of marriage and family everyone benefits, but in a culture where the importance of families is neglected all sorts of problems result. You cannot have a strong nation without strong people, and you cannot have strong people without strong values. Right and wrong. Good and bad. That is learned from your values instilled in you in the family. It is irreplaceable.
Strong statements for strong times. (In fact, you cannot have strong times without strong statements.) These are the board’s members:
Ryan T. Anderson, Ph.D., Senior Research Fellow, The Heritage Foundation
Joseph Backholm, Executive Director, Family Policy Institute of Washington
Ambassador Ken Blackwell, Senior Fellow, Family Research Council
David S. Dockery, President, Trinity Evangelical Divinity School
Sherif Girgis, J.D./Ph.D. candidate, Yale Law & Princeton
Alan Hawkins, Ph.D., Professor, Brigham Young University
Kay Hymowitz, William E. Simon Fellow, Manhattan Institute
Jonathan Keller, CEO, California Family Council
Caitlin La Ruffa, Executive Director, Love and Fidelity Network
Robert Lerman, Emeritus Professor of Economics, American University
Bill Wichterman, former special assistant to President George W. Bush
Bradford Wilcox, Senior Fellow, Institute for Family Studies & Visiting Scholar, American Enterprise Institute
I wish the Republicans would debate this a little more seriously. Ted Cruz has proposed a Constitutional amendment, Jeb Bush and John Kasich have complained about marriage equality but not argued for overturning it, and Trump says he opposes marriage equality but doesn’t really care. So what’s Rubio’s plan? Either you think it can be reversed, which is dumb, or you’re just attacking gays and lesbians as “wrong,” which is mean.
On Rubio’s board, Wilcox, Lerman, Hawkins, and Hymowitz are Family Inequality regulars. Of course he doesn’t really need policy advice at this point in the campaign, so this is just about signaling — it’s Rubio showing donors the direction he’s taking, and it’s these people deciding to put their names on his campaign. (Somehow, though, I’m sure they will also still be able to describe themselves as “non-partisan,” because wrong is now right.) It’s also the first time I know of that Wilcox has publicly opposed marriage equality, which is a promising turn in his maturation as a partisan hack.