Category Archives: Research reports

It’s modernity, stupid (Book review of The Sacred Project of American Sociology, by Christian Smith)

Book review: The Sacred Project of American Sociology, by Christian Smith. Oxford University Press, 2014.


Christian Smith Confounding Philip Cohen: With (left to right) Brad Wilcox, Mark Regnerus, C. Wright Mills, and Talcott Parsons (Original source: Giovanni di Paolo, St. Thomas Aquinas Confounding Averroës, from Wikimedia Commons)*

Note: I am self-publishing this review rather than trying to find another outlet for it because I once (in response to Smith’s email described below) used a single profanity in an email reply, and I don’t want to get some editor in trouble for allowing me to write a review when I have a documented personal animosity against the author. Unfortunately, it’s much longer than it would be if someone else published it. Sorry!

Christian Smith in this book reminds me of a vaccine denier. He is convinced the whole modern world is a Big Lie but, except for a few fellow travelers, he can’t find a way to convince everyone else that they’re the ones who are crazy. Inevitably, out of desperation, he starts to write in italics.

…the secular enterprise that everyday sociology appears to be pursuing is actually not what is really going on at sociology’s deeper level. Contemporary American sociology is, rightly understood, actually a profoundly sacred project at heart. Sociology today is in fact animated by sacred impulses, driven by sacred commitments, and serves a sacred project (x).

(In his frustration, he also clutters up a very short and simple book with endless redundant phrases like “in fact,” “rightly understood,” and “actually.” I haven’t added italics to any of his quotes in this review.)

He’s not being “tricky” with the term sacred: he means it in the strictly Durkheimian sense of, “things set apart from the profane and forbidden to be violated,” things “hallowed, revered, and honored as beyond questioning,” things that “can never be defiled, defied, or desecrated by any infringement or desecration” (1-2). This is not a metaphor, this is “exactly the character of the dominant project of American sociology” (2). Literally.

The book is not just the familiar diatribe against leftist groupthink in academia. What sets this apart is that Smith’s real problem is modernity itself, which I’ll return to. However, this particular expression of modernity – the one that happens to surround him in his chosen academic discipline – is especially grating. So we’ll start with that.

Like a vaccine denier, Smith is more and more convinced of his theory the more all the sociologists around him deny it. In fact, actually, rightly understood, rampant denial is literally evidence that he’s right. By the end of the book he concludes, “Many American sociologists will … find it impossible to see the sacred project that sociology is – precisely because my argument above is correct” (199). This treads uneasily close to the line where common arrogance tips over into a lack of grip on reality.

In the text of the American Sociological Association (ASA) description of the discipline, for example, “none of it admits to advancing a sacred project” (6). Aha! Why not? Two reasons, he figures. First, the sacred project “is so ubiquitous and taken for granted … that it has become invisible to most sociologists themselves” (6-7). Why would we discuss something universal and uncontroversial? Second, admitting its existence “would threaten the scientific authority and scholarly legitimacy of academic sociology,” so it must be “misrecognized, implicit, and unexamined” to maintain “plausible deniability,” and therefore “sociologists carefully exempt their own discipline from their otherwise searching sociological gaze” (7). So, we “carefully” keep secret for strategic reasons that which we cannot even know exists. The devil does work in mysterious ways.

Sacred is as sacred does

What is the content of the sacred project? In a bizarre throwback to the 1950s – he even puts red-scare quotes around “the people” (12) – Smith describes “the project” as

about something like exposing, protesting, and ending, through social movements, state regulations, and government programs all human inequality, oppression, exploitation, suffering, injustice, poverty, discrimination, exclusion, hierarchy, constraint and domination by, of, and over other humans (and perhaps animals and the environment) (7).

For convenience, we could reasonably shorten this to, “communism.”

But this veneer of egalitarianism “does not go deep enough.” The project is

more fully and accurately described as … the visionary project of realizing the emancipation, equality, and moral affirmation of all human beings as autonomous, self-directing, individual agents (who should be) out to live their lives as they personally so desire, by constructing their own favored identities, entering and exiting relationships as they choose, and equally enjoying the gratification of experiential, material, and bodily pleasures (7-8).

We might call this deeper goal, “decadence.”

After that, it’s only a matter of a few lines before he starts putting “(so-called)” before “the Enlightenment” (8) and stringing together terms like this: “modern liberal-Enlightenment-Marxist-social-reformist-pragmatist-therapeutic-sexually liberated-civil right-feminist-GLBTQ-social constructionist-poststructuralist/postmodernist” (11). Did I mention this guy was Mark Regnerus’s dissertation committee chair at UNC? (Funny, he forgot to mention that, too.)

Of course, Smith has to admit that the sacred project is not something that all sociologists are into. “Most are, I think, being more or less conscious and activist on behalf of it [the project]. But some are not” (23). Who are those more innocent ones? He grudgingly lists five groups of exceptions (23-24):

  • “believers in sociology as purely a scientific study of society … often very fine people”;
  • “just commonplace ‘institution improvers,'” trying practically to make modern society work better;
  • “professional data collectors” who work in various bureaucracies and companies;
  • “ordinary, middle-America college professors who simply like to learn and teach about the family, criminal justice, or what have you”, and, finally;
  • “old-school liberals who genuinely believe in tolerance, fairness, and pluralism.”

Don’t be fooled into thinking this comprises an important slice of American sociology, however, because they don’t represent “the discipline’s dominant culture, sensibilities, interests, discourse, and project.” And anyway, they are a very small minority. Excluding these five groups, in fact, Smith estimates that 30 to 40 percent are “true believers” and another 50 to 60 percent are “essentially on board, but are circumspect in how they express it” (24). Doing a quick calculation, 100 - (30 + 50) and 100 - (40 + 60), it appears that those five groups of exceptions sum to between 0 and 20 percent of American sociologists. But it’s even worse than that, because some of the moderates, “when scratched hard enough,” do “show their true colors as sympathizers” with the project (25).
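Smith never does the subtraction himself, but the back-of-the-envelope arithmetic is easy to check. A quick sketch, using only the percentages Smith gives on page 24:

```python
# Smith's estimates (p. 24): "true believers" plus moderates who are
# "essentially on board," at the low and high ends of his stated ranges.
low_committed = 30 + 50    # 80 percent committed, at minimum
high_committed = 40 + 60   # 100 percent committed, at maximum

# Whatever remains is all that's left for his five groups of exceptions.
exceptions_max = 100 - low_committed   # 20 percent
exceptions_min = 100 - high_committed  # 0 percent
print(f"Exceptions: {exceptions_min} to {exceptions_max} percent of sociologists")
```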

Hardly evidence

It seems shocking that such an overwhelming majority of American sociologists could be so deeply into something so radical. To make such an extreme claim in a book published by a leading, highly reputable university press, surely one must have some pretty damning evidence? No.

It doesn’t help his case much, but the chapter titled “Evidence” is packed with ammunition for any grad student who ends up with Smith on his or her dissertation committee. Keep these defensive lines handy:

  • “the evidence I can offer is not ‘conclusive,’ at least when the standards of proof are set as the types that count for, say, publication in the top journals” (28). (No offense intended to Oxford University Press, I’m sure.)
  • What is “personally most convincing” is his own experience of many years, which he hopes will help readers “intuitively grasp the truth of my thesis” (28).
  • “There is no practical way to ‘test’ my thesis with standard sociological measures; the issues involved are too subtle and elusive to be ‘verified’ by such means” (29).
  • “I cannot conduct a systematic investigation to ‘prove’ that [some random claim], but I am confident that one well conducted would validate my claim” (66).
  • “Again, nobody, I am sure, has conducted or could conduct a systematic study of such features and reactions to empirically ‘prove’ my point” (87).

Honestly, the required survey design seems pretty simple. First, ask a sample of sociologists if they “are now or have ever been an activist on behalf of the sacred project.” Then, provide them with a list of their friends and colleagues, and ask them to identify the individuals who would or should answer affirmatively to the first question.

Rather than follow such a straightforward approach, Smith presents “an array of semi-systemic evidence,” beginning with a “stroll through the ASA’s annual convention book exhibit” (29) (presumably senior professors with endowed chairs conduct “strolls” to collect their data, while junior faculty might feel the need to at least jog). From his stroll, he constructs 12 generic categories into which “most” of the books there “could be translated.” I won’t list them all, but these give you a feel:

  • People are Not Paying Enough Attention to Social Problem X, But if They Read this Book they Will Realize that They Have To
  • Women, Racial Minorities, and Poor People are Horribly Oppressed and You Should Be Really Angry About That!
  • Gays, Lesbians, Transsexuals, and other Queers are Everywhere and Their Experiences are Some of the Most Important Things Ever to Know About

After establishing the categories, he reprints the titles of about 30 books from NYU Press (which he doesn’t name because “a look at the sociology lists of virtually every other university press and trade publisher would produce a list very similar” [34]). The book list supports his hypothesis that there is a “narrow range of themes and perspectives.”

This is confusing. When you use concepts like “Social Problem X” and then put most books into that category, the real question is how many values X takes. This is like saying many history books are the same because they fit into the category, “Something happened during Period X in Place Y.”

The actual list of books he includes covers topics as diverse as factory farming, GLBT people in Islam, mass incarceration, paganism, breastfeeding, fair trade, donor conception, the NRA, school discipline, hip-hop culture, marriage promotion, immigrant health care, deliberate self-injury, and homeless youth. To Smith these are all “these type of books” produced by “activist disciples of the sacred project.” And, without opening a single one of them, he concludes, “So much for celebrating diversity, the proactive inclusion of social others, and welcoming differences” (34). (I’m thinking, “What an interesting body of work!”)

To supplement the NYU list, Smith adds 30 books reviewed in one issue of Contemporary Sociology. And now he’s in the territory of Sen. Tom Coburn – just listing research topics which, if you already think social science is stupid, sound stupid.

While one cannot always judge a book from its cover (title), my discussion above provides the right interpretive context for knowing what these books are about. Collectively, they are focused on threatening social problems (about which sociologists are the prophetic experts), injustices committed (about which sociologists are the whistle blowers), abuses by economically and politically (especially ‘neo-liberal’) powerful elites (ditto on whistle blowing), and mobilizing social and political movements for sociopolitical and economic change (about which sociologists are the experts and cheerleaders) (40).

To supplement his evidence, because titles don’t tell you everything, he includes five “exemplar” books, into which he delves more deeply – which means quoting from the book jackets and random reviews posted on Amazon.

And then Smith spends four pages – more than he spends on any other research in the book – attacking one book (which he didn’t read) about religion: Moral Ambition: Mobilization and Social Outreach in Evangelical Megachurches. He writes, “Between the book itself and the reviewer’s presentation of it, American sociologists are generally confirmed in their standard stereotypical fears about and negative mental associations with evangelicalism” (43). He refers to the book as a “sociological ethnography,” which reflects attitudes held by “sociologists” and the practices of “sociologists far and wide.” He doesn’t even show the courtesy of identifying the author (Omri Elisha) and citing the book properly. If he had, he might have noticed a slight problem with his evidence: the author is an anthropologist! Details.

To analyze research articles, Smith turns to American Sociological Review, based on the method of reading the next issue that arrives (Vol 78, No 3) for evidence of the “sacred project.” Except for one methodological piece, “the raft of articles in this issue tilted clearly in the supportive direction of the sacred project to which, explicitly or implicitly, subtly or obviously, the ASR, the ASA, and American sociology as a whole are committed” (58-59). The evidence he finds is basically that most of the articles study inequality, and when they do they sometimes describe it in negative terms. In essence, the existence of any sociological work describing any aspect of inequality confirms his hypothesis. (And somehow he thought this was too subtle to study empirically.)

Mo’ better modernity

The extent of his disillusionment finally becomes clear in a brief discussion of Horne et al.’s ASR article on bridewealth in Ghana. They investigated “normative constraints on women’s autonomy in the reproductive domain.” Smith objects to the value-laden perspective by which autonomy for women is assumed to be a good thing. He virtually sneers, “Here ‘improving the lives of African women’ is equated, as a good western feminist presupposition, with expanding ‘women’s reproductive autonomy'” (57).

Smith may not know that reproductive autonomy usually refers to a broad suite of decisions about childbearing within families, and it’s an important predictor of such vital outcomes as seeking medical care during pregnancy and delivery (e.g., in Ethiopia, Tajikistan, Bangladesh, and India), reduced unintended pregnancies (e.g., in Bangladesh), and children’s adequate nutrition (in India). If the big problem with sociology is that we assume those are positive outcomes, then I think I’m OK with that.

But Smith is presumably thinking of autonomy in the modern American sense of, “I’m bored, let’s get a divorce”; or, “I love myself, I think I’ll masturbate instead of volunteering at a soup kitchen.” And in that he has reason to worry, as it appears the majority of the world may be headed that direction.


But surely – given the weak influence of sociology on global culture – he misdirects his irritation over modern life in general onto the sociologists who merely reflect it. This is especially clear in the discussion of sociology’s roots, which reveals the origins of the sacred project he is trying to describe:

As a project, sociology [originally] belonged at the heart of a movement that self-consciously and intentionally displaced western Christianity’s integrative and directive role in society. It was a key partner in modernity’s world-historical efforts to create a secular, rational, scientific social order … Sociology was not merely about piecemeal reforms but world transformation guided by a radically new sacred vision of humanity, life, society, and the cosmos (122).

Indeed, the latest version of the sacred project focuses on “the moral centrality of the autonomous, self-directing, therapeutically oriented individual,” but “this is merely a new emphasis, the seeds of which were planted long ago and have been growing along with the progressive unfolding of western modernity” (130). Thus, “the sacred project that dominates mainstream sociology today is a natural, logical development of the inheritance of liberal, Enlightenment modernity” (131).

Given the worldwide magnitude of this project, and its global success over several centuries, in which American sociology has played such a small role, it seems useless to single out today’s idealistic graduate students and young researchers for blame. They are mere cogs in the modernity machine. This is the deep incoherence of the book: he pours his scorn so profusely on the leftists who annoy him even though the details of contemporary politics seem tangential to his existential concerns.

Into ASA

Smith extends his superficial empirical analysis into the subject of ASA sections, the organizations sociologists use to develop affinities around their interests and expand their institutional influence. This analysis consists entirely of Smith separating sections into three categories by title based purely on his own inimitable expertise. No content, no text, not even a mocking list of conference presentation titles – just section titles.

The first category is those that are “at the vanguard of sociology’s sacred project.” Naturally, this is the largest category, with some 13,000 members (many people belong to more than one). These include, obviously, Sex and Gender, as well as, less obviously, Mental Health; Alcohol, Drugs and Tobacco; and Disability and Society. Next are those that are “less obviously but in many ways still promoting sociology’s sacred project.” These sections have about 11,000 members, including those covering Culture, Theory, Law, and Population. (Oddly, while Mental Health is in the seriously-bad category, Medical Sociology is only in the pretty-bad category. He said it was subtle.) He would “venture to say,” based on his experience, that the “majority” of research and teaching by those in this second category “ultimately feeds into support for and the promotion of” the sacred project (66). Finally, there are only four sections, with fewer than 1,000 members, that are “seemingly not related” to the sacred project (History of Sociology, Mathematical Sociology, Rationality and Society, and Ethnomethodology).

To cover teaching, Smith discusses selected portions of John Macionis’s best-selling Society: The Basics. I have never used one, but I hear that intro books are often frustrating for research university professors, so I am sympathetic here, although my concerns would no doubt be different. I don’t mind criticizing the triumvirate theoretical framing of functionalism-conflict-interaction, but I’m OK with discussing the limits of “free will” (versus social influence), quoting Tocqueville on how excellent the French Revolution was, and even using BCE/CE instead of BC/AD for dating eras (so touchy – who knew?).

Anyway, there is an extensive literature about introductory sociology textbooks, and since Smith ignores it I mostly ignored this section. However, I did like this: “I could also conduct the same kind of analysis of the other best-selling introductory sociology textbooks, and again, the results would be extremely similar” because “these textbooks are almost identical to each other” (85). I love that he knows this before conducting the “analysis.” But I also don’t doubt that he would reach similar conclusions regardless of what he read.

Tall tales

Smith concludes the crucial “Evidence” chapter with “some less systematic [!] but still I think revealing illustrations” (86). These are extended anecdotes that nicely illustrate his ability to harbor a grudge – including cases in which sociologists vehemently reacted to violations of the sacred project (mostly sociologists mistreating his friends).

For some of the anecdotes, Smith does not name names. This is supposedly to underscore his larger points, but since he is not a reliable reporter this is a very bad practice. One he discusses anonymously is obviously the reaction to the book by Linda Waite and Maggie Gallagher, The Case for Marriage: Why Married People are Happier, Healthier, and Better Off Financially. His description is completely misleading, characterizing it only as a “book about the many benefits of marriage, the lead author of which was a very highly regarded University of Chicago sociologist and demographer.” Excluded is the fact that the book was not published by a university press (Doubleday), and that the second author was a conservative activist, a non-academic “affiliate scholar” working with the Institute for American Values (IAV). Gallagher was already known as a right-wing nut (the author of Enemies of Eros: How the Sexual Revolution Is Killing Family, Marriage, and Sex and What We Can Do About It), who went on to become perhaps the most famous American anti-gay marriage fanatic.

Waite was also working outside of academia to advocate policy. She was writing for IAV, and served on the research board of the National Marriage Project, an academic-activist organization promoting pro-marriage policy. Waite said she and Gallagher kept their politics separate and out of the book. Others disagreed. Clearly, Waite was moving in a more activist direction, as she acknowledged herself, couching her advocacy for marriage in public health terms, and comparing it to the campaigns against smoking and for exercise. A lively debate ensued. Smith describes an author-meets-critics session at the ASA conference in 2002, and says an eyewitness told him that one of the critics “literally frothed at the mouth” and shouted, “You have betrayed us!”

But why is Waite different from the other activists who use social science research to promote social agendas – a similarity hidden by Smith’s selective description? And how is this debate so much more damaging than any other? To show the harm done by the sacred sociologists, Smith reports that Waite, who had been on the ASA Council and chair of the Family Section (incidentally one of Smith’s “vanguard” sacred project sections…), has not since held elective office in ASA. That’s true, and I doubt she would be elected if she ran, because of her politics. She has, however, continued a very successful career, holding a named chair at the University of Chicago and serving in important positions at the National Institutes of Health, among other distinctions. Being president of ASA is a privilege, not a right.

Another anecdote concerns Brad Wilcox’s tenure promotion at the University of Virginia, also hardly anonymized (93-95). As a non-public personnel matter, however, this case is poorly suited for weaponization. I don’t know the facts first-hand, and Smith doesn’t offer any documentation or reveal his source for the story. The gist of it is that Wilcox’s department at the University of Virginia voted to deny him tenure, but they were overruled by the top level of administration (Smith says it was the provost who saved the promotion, while Wilcox’s colleague Robert George reported it was the president). I don’t know the extent to which Wilcox’s religious affiliation or political positions played a role in the department’s decision, and I certainly wouldn’t take Smith’s word for it. Simply counting the publications on a CV is not enough to judge a tenure decision; the quality and impact of the work matter, too, as do ethics and character. For example, regardless of his publication record I might vote to deny Wilcox tenure on the basis of his dishonesty and incompetence (which I have documented voluminously – although my stories begin after he was tenured).

Regnerus reflux

All this is setup for Smith’s rant about the Regnerus affair (overview here; archive of posts under this tag). When the scandal was unfolding in 2012, Smith made an unintentional appearance in the blogosphere when some of his outraged email to sociologists (including me) was posted on the Scatterplot blog (here and here). He followed that up with an essay defending Regnerus in the Chronicle of Higher Education, which accused academic sociology of perpetrating an auto-da-fé (which is similar to being criticized on blogs, except in every possible way).

Obsessed readers will recall that, in the original version of that essay, Smith wrote, “Full disclosure: I was on the faculty in Regnerus’s department and advised him for some years, but was not his dissertation chair.” That was later corrected to read, “Full disclosure: I was chair of Regnerus’s dissertation committee.” This seems not a minor detail to forget, considering (by his accounting) Smith and Regnerus co-authored eight articles together, and Regnerus was one of only six dissertations Smith chaired at UNC.

Regnerus dissertation signature page.


Smith is still having trouble with the details of the story, and forgets again to “fully disclose” this fact.

He also tells this story as if everything Regnerus said initially was true and nothing substantial was subsequently uncovered. For example, it hardly seems relevant anymore that, “Regnerus was clear in his article that his findings did not point in any specific policy direction” (102), now that Regnerus and his colleagues did use his results to press the case against marriage equality, in both briefs and expert testimony. We also now know, confirming the early conspiracy theories, that Regnerus and his colleagues – principally Wilcox – did indeed plan the study as an activist endeavor to influence the courts. (This doesn’t mean they faked the data, only that they were sure they would find a way to find something in the data to make gay and lesbian parents look bad.)

Smith quotes from Regnerus’s paper, “I have not and will not speculate here on causality,” but we now know that Regnerus does grossly exaggerate his results and draw causal conclusions when speaking to like-minded audiences, including by presenting unadjusted results while discussing his statistical controls, and by speculating about mechanisms for the patterns he found. The original published paper, with its caveats and disclaimers, proved irrelevant to how the movement against marriage equality used it for its ideological ends.

In any event, Smith still needs to vent about the ill treatment he believes Regnerus received at the hands of the purveyors of the sacred project. He devotes more than 14 pages to the scandal, of which almost 6 are footnotes in which he schools himself on the legal particulars of the case, condemns the non-academic activists who agitated and sued their way through the process, and takes on some of the wider research on same-sex parenting. Not surprisingly, Smith seems to have learned little from the scandal (including the relevant facts).

One odd falsehood Smith commits is claiming there was a “review process by which the [Regnerus] article had been unanimously judged worthy of publication by six double-blind reviewers” (107), which he repeats later (157). If this is an honest error it results from misreading the internal review conducted by Darren Sherkat for the journal, Social Science Research (SSR), in response to the scandal. Sherkat reported that there were six reviewers for two articles that sparked controversy – three each. So, three reviewers, not six. Also, Smith must know that SSR is the only major sociology journal to practice single-blind review. The reviewers always know who wrote the articles they review. In fact, as we now know, two of the three reviewers were directly involved in the research: Paul Amato, who has described his role as a paid consultant on the study; and, far worse, Brad Wilcox, the principal fundraiser and institutional architect of the research, whose role as a reviewer was finally admitted in August 2013. So, not exactly “double-blind,” even nominally.

Smith’s main complaint is that the sociologists criticizing Regnerus have always ignored weak studies and shoddy research methods when people who used them found that gay and lesbian parents don’t harm children. This is the “ideological double standard” that Smith called “pathetic” in an email to me and others, now rehashed at p. 110 of his book. But it’s ridiculous. Neither I nor the others objecting to the Regnerus paper claimed our primary objective was the protection of accurate science in some abstract sense. The paper drew the sustained attention that it did because of the moment and manner in which it appeared – and was deployed – in a raging national debate with important, practical consequences for real life.

Speaking for myself, I of course routinely review and recommend rejection for research articles whose results and apparent worldview are completely consistent with my empirical expectations and normative assumptions (even in cases where rubberstamping them into publication would increase my own citation count). And I often decline to cite relevant research that would support whatever case I’m making if I don’t find it sound or credible. I have standards for quality and I impose them in the routine course of business. But I don’t stand on street corners and holler at passersby every time a poor-quality article is published, the way I do when one attacks minority civil rights. That’s not hypocrisy, that’s priorities.

On the merits of the equivalency claim – bad Regnerus, bad prior research – I also disagree. Much of the previous research on same-sex parenting was essentially in the form of case studies and convenience samples, which are legitimate ways of studying small and hard-to-identify populations, despite the possibility of selection bias and social desirability bias. As Andrew Perrin, Neal Caren and I argued in a response paper, that previous research, in the aggregate, is consistent with the “no differences” view because it fails to support the hypothesis that there is a notable disadvantage to being raised by same-sex parents. All those case studies and convenience samples do not prove there is no disadvantage attributable to same-sex parenting – they merely fail to find one. And that is the state of the research today.

Unsurprisingly, Smith draws the wrong conclusion from the Regnerus affair, arguing that the greatest negative outcome was the threat to the peer-review process posed by the criticism of Regnerus:

Most obvious in that episode was the attack on Regnerus himself. Less obvious but no less important was the assault on the integrity of the double-blind peer-review process involved in those attacks. Recall that Regnerus’ paper had been evaluated by six blind reviewers, all of whom recommended publication. Recall that the quality of Regnerus’ sample was, though not perfect, superior to any other that had been used to answer this research question prior to his study. Nothing in the review process was unusual or dubious… (157).

Even if all that were true, and all of it is demonstrably untrue, I still don’t think protest and criticism of published research – what Smith calls “scholarly review by mob intimidation” – marks “the end of credible social science” (161). This is like saying the problem with the Vietnam War was that it ushered in a new era in which elected politicians can’t even make the decision to go to war anymore without the threat of mob protests and civil disobedience. Who let the public into this democracy, anyway? (Of course, as I was once instructed by a colleague, academia is not a democracy, it’s a meritocracy.)

His summary of the story takes on this Orwellian character. “I do not mean to suggest that sociology’s journal peer-review system is rampant with corruption,” he says, somehow referring not to the bad decision to publish the article, but rather to the public criticism it sustained.

But I do think it is vulnerable to pernicious influences exerted by some scholars who are driven by some of the less admirable aspects of sociology’s sacred project. The Regnerus debacle shows that it can happen and has happened. The potential for abuse is real (162).

The abuse in the Regnerus case was not in the protest, but in the mobilization of big, private money to generate research intended to influence the courts, infecting the reviewer pool with consulting fees among insider networks, manipulating the journal into relying on reviewers without expertise in gay and lesbian family studies, and then mobilizing the result for harmful political ends. The public criticism, on the other hand, besides making Regnerus professionally toxic – for which I have little sympathy – served only to bring this to the attention of the academic community and the public. I may be in the minority on this among academics who value their privileged social status, but I don’t even object to the public records requests for information on the peer review process (which, although not triggered by sociologists, consume several pages of Smith’s narrative). Rather, I regret that the use of private money and a corporate publisher limited the possibility for more thorough transparency in the process.

Mr. Banks from Mary Poppins.

Blame the bloggers

Just as he blames politics when he doesn’t like the political outcome, Smith blames communication itself when he doesn’t like the content expressed. In this case, that means blogs. I find this passage jaw-dropping:

The Internet has created a whole new means by which the traditional double-blind peer-review system may be and already is in some ways, I believe, being undermined. I am referring here to the spate of new sociology blogs that have sprung up in recent years in which handfuls of sociologists publicly comment upon and often criticize published works in the discipline. The commentary published on these blogs operates outside of the gatekeeping systems of traditional peer review. All it takes to make that happen is for one or more scholars who want to amplify their opinions into the blogosphere to set up their own blogs and start writing. … If this were conducted properly, it could provide benefit to the discipline. But, in my observation, the discipline’s sacred project sometimes steers how these sociology blogs operate in highly problematic directions (166).

He calls this “vigilante peer review.” And I guess I’m doing it right now.

No journal or book review editor has asked any of these sociologists to review a paper or book. What publications get critiqued and sometimes lambasted is entirely up to the blog owners and authors (166).

Then, after a three-page excerpt from a Darren Sherkat blog post – which, admittedly, probably was not intended to lower Smith’s blood pressure – he concludes:

The Internet has created new means by which American sociology’s spiritual project … can and does interfere with the integrity and trustworthiness of the social-scientific, journal article peer-review system (172).

Yikes. This might all not seem so embarrassingly wrong if it didn’t follow from holding up Regnerus as the paragon of the peer review system.

I can’t think of exactly the right children’s movie analogy here – a grouchy traditionalist who eventually learns that it’s OK to be free and have fun. It’s not quite The Grinch, because the Grinch was just evil for no reason. It’s not quite Captain von Trapp from The Sound of Music, because his misplaced need for social order resulted from the injury of his widowhood. Maybe it’s Mr. Banks from Mary Poppins, who is just merrily living his life, benignly assuming that children should be seen and not heard because that’s the way it’s always been. I like the movies where, in the end, the grouch learns that it’s OK to sing and dance.

On the other hand

The most persuasive passage in the book is one that is mostly irrelevant to Smith’s sacred project argument. He believes American sociologists have tended in recent years to separate themselves into disparate groups of like-minded people, so that there is “a tacit peace treaty specifying that everyone should mostly think and do whatever he or she wishes in terms of methods, theory, and intent and not suggest that what anyone else is doing might be a problem” – as long as it’s politically correct (142). If the people we talk and argue with professionally have very similar views, debate over broader intellectual or philosophical issues is too limited.

At the same time, too many grad students are trained as narrow technicians, with not enough “broadly read, thoughtful, intellectually interesting scholars and teachers” (143). He may overstate that case, but it’s a reasonable thing to worry about:

In sum, most of American sociology has become disciplinarily isolated and parochial, sectarian, internally fragmented, boringly homogeneous, reticently conflict-averse, philosophically ignorant, and intellectually torpid (144).

His greatest error in this part is seeing activist leftists as dominating the prestige game within the discipline, shutting out and shunning anyone who doesn’t conform. It seems obvious to me that technical expertise and empirical problem-solving skills are much greater determinants of access to top publications and jobs than is devotion to what Smith calls the sacred project. Wacky leftists who don’t think critically (who are of course only a subset of leftists) might be part of the winning electoral coalition within ASA, but I don’t think they’re running the discipline.

The greater culprit here, in my opinion, is not political homogeneity but rather pressure to specialize and develop technical expertise early in our graduate training in order to publish in prestigious journals as early as possible to get hired and promoted in tenure-track jobs. I would love it if sociology had more interaction and debate between, say, family scholars and criminologists, network sociologists and gender scholars, demographers and theorists. (One workaround, at least for me, has been devoting time in my career to blogging and social media, which generates excellent conversation and exposure to new people outside my areas of expertise.)

Don’t just stand there

I agree there are sociologists who see it as their mission to be “essentially the criminal investigative unit of the left wing of the Democratic Party” (21). From the thousands of graduate applications I’ve reviewed, it’s clear that many of our students enter sociology because they are looking for a way to attack social problems and move society in the direction determined by their moral and political views and values. And they’re usually not political conservatives or evangelical Christians.

Why fault people for doing that? What are people with such convictions and talents supposed to do? Sometimes it doesn’t work out as an academic career, but it’s often worth a try. In sociology training, meanwhile, they have the opportunity to learn a lot of facts and theories along with everyone else. And they might also learn to avoid some common intellectual problems. Some things students learn – or learn to appreciate further – include: Things do not always automatically get worse for oppressed people; not all state institutions are harmful to subordinate groups; some facts undermine our prior understandings and political views, and it’s OK to discuss them; and, no matter how oppressed your people are, if you’re in an American sociology graduate program, chances are there are people somewhere who are even more oppressed.

I don’t mean to be condescending to people who enter academia with activist intentions – although I’m sure I seem that way. Personally, I am happier working in such company than I would be surrounded by people who only have faux-value-neutral, technocratic ambitions and no righteous outrage to express. That doesn’t mean I’ll rubberstamp comprehensive exams or dissertations, or ignore errors in the peer-review process, because I like someone’s politics. I like the discipline that social science offers to activism. And I like that our discipline offers a career path for (among others) people whose passion is for changing the world in a good way – the meaning of which I’m happy to argue about further.

* I am not getting into personalism, critical realism, Aristotle, Karol Wojtyla, or other obscure stuff, much of which Smith puts in an appendix.

26 Comments

Filed under Research reports

Reviewing Nicholas Wade’s troublesome book


I have written a review of Nicholas Wade’s book, A Troublesome Inheritance: Genes, Race and Human History, for Boston Review. Because a lot of reviews have already been published, I also included discussion of the response to the book. And because I’m not expert in genetics and evolution, I got to do a pile of reading on those subjects as well. I hope you’ll have a look: http://www.bostonreview.net/books-ideas/philip-cohen-nicholas-wade-troublesome-inheritance

5 Comments

Filed under Research reports

Response from Supporting Healthy Marriage supporters, with responses

In response to yesterday’s post, “This ‘Supporting Healthy Marriage,’ I do not think it means what you think it means,” Phil and Carolyn Cowan posted a comment, which I thought I should elevate to a new post.

Photo by Ben Francis from Flickr Creative Commons

Here is their comment, in full, with my responses.

Since the issue here is one of perspective in reporting, we (Phil Cowan and Carolyn Cowan) need to say that we were two of a group of academic consultants to the Supporting Healthy Marriage Project.

Thank you for acknowledging that. I noticed that Alan Hawkins, in his comment on the new study for Brad Wilcox’s blog, says he has “published widely on the effectiveness of marriage and relationship education programs,” but doesn’t say who paid for that voluminous research (with its oddly consistent positive findings). More about Hawkins below.

Social scientists who want to inform the public about the results of an important study should actually inform the public about the results, not just give examples that support the author’s point of view.

Naturally, which is why I publicized the study, provided a link to it in full, and provided the examples quoted below.

It’s true as you report that there were no differences in the divorce rate between group participants and controls (we can debate whether affecting the divorce rate would be a good outcome), and that… [quoting from the original post]

“…there were no differences in the divorce rate between group participants and controls, and there were “small but sustained” improvements in subjectively-measured psychological indicators. How small? For relationship quality, the effect of the program was .13 standard deviations, equivalent to moving 15% of the couples one point on a 7-point scale from “completely unhappy” to “completely happy.” So that’s something. Further, after 30 months, 43% of the program couples thought their marriage was “in trouble” (according to either partner) compared with 47% of the control group. That was an effect size of .09 standard deviations. So that’s something, too. Many other indicators showed no effect. However, I discount even these small effects since it seems plausible that program participants just learned to say better things about their marriages. Without something beyond a purely subjective report — for example, domestic violence reports or kids’ test scores — I wouldn’t be convinced even if these results weren’t so weak.”

1. A slight uptick in marital satisfaction. The program moved 15% of the couples up one point. But more than 50 studies show that without intervention, marital quality, on the average goes down. And, it isn’t simply that 15% of the couples moved up one point. Since this is the mean result, some moved less (or down) but some moved up. Some also moved up from the lower point to relationship tolerability.

It is interesting that, with so many studies showing that marital quality goes down without intervention, this is not one of them. That is important because of what it implies about the sample. Quoting from the report now (p. 32):

At study entry, a fairly high percentage (66 percent) of both program and control group couples said that they had recently thought their marriage was in trouble. This percentage dropped across both research groups over time. This finding is contrary to much of the literature in the area, which generally suggests that marital distress tends to increase and that marital quality tends to decline over time. The decline in marital distress was initially steeper for program group members, and the difference between the program and control groups was sustained over time. This suggests that couples may have entered the program at low points in their relationships.

Back to the Cowans:

While the effects were small (but statistically reliable), they were hardly trivial. For instance, two years after the program, about 42% of SHM couples reported that their marriage had been in trouble recently compared to about 47% of control-group couples. That 5% difference means nearly 150 more SHM couples than control-group couples felt that their marriage was solid.

There are several problems here.

First, this paragraph appears verbatim in Hawkins’ post as well. I’m not going to speculate about how the same paragraph ended up in two places — there are some obvious possibilities — but clearly someone has not communicated the origin of this passage.

Second, this is not the right way to use “for instance.” This “for instance” refers to the only outcome of any substantial size in the entire study. It is not an “instance” of some larger pool of non-trivial results, it is the outlier. (And “solid” is not the same as not saying the marriage is “in trouble.”)

Third, this phrase is just wrong: “small (but statistically reliable)… hardly trivial.” Most of the positive outcomes were exactly so small as to be trivial, and exactly not statistically reliable. Quoting from the report again, on coparenting and parenting (p. 39):

Table 9 shows that, of the 10 outcomes examined, only three impacts are statistically significant. The magnitudes of these impact estimates are also very small, with the largest one having an effect size of 0.07. These findings did not remain statistically significant after additional statistical tests were conducted to adjust for the number of outcomes examined. In essence, the findings suggest that there is a greater than 10 percent chance that this pattern of findings could have occurred if SHM had no effect on coparenting and parenting.

And quoting from the report again, on child outcomes (p. 41):

Table 10 shows that the SHM program had statistically significant impacts on two out of four child outcomes, but the impacts are extremely small. SHM improved children’s self-regulatory skills by 0.03 standard deviation, and it reduced children’s externalizing behavior problems by 0.04 standard deviation. … The evidence of impacts on child outcomes is further weakened by the results of subsequent analyses that were conducted to adjust for the number of outcomes examined. These findings suggest that there is a greater than 10 percent chance that this pattern could have occurred if SHM had no effect on child outcomes.

In other words, trivial effects, and not statistically reliable.
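The “adjust for the number of outcomes” step the report invokes is a standard multiple-comparisons correction. As a rough illustration only – the p-values below are hypothetical, not the report’s actual numbers – here is how three nominally significant results among ten outcomes can fail to survive a Holm-Bonferroni adjustment:

```python
# Holm-Bonferroni adjustment, sketched by hand.
# Hypothetical p-values for 10 outcomes; three are nominally below .05.
pvals = [0.02, 0.03, 0.04, 0.12, 0.25, 0.30, 0.41, 0.55, 0.70, 0.88]

def holm_adjust(pvals):
    """Return Holm-adjusted p-values in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply the k-th smallest p-value by (m - k), capping at 1
        # and enforcing monotonicity across ranks.
        p = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, p)
        adjusted[i] = running_max
    return adjusted

adj = holm_adjust(pvals)
print([round(p, 2) for p in adj[:3]])  # none of the three remain below .05
```

After adjustment, the three “significant” findings all have adjusted p-values well above .05 – which is the pattern the report describes when it says the findings “did not remain statistically significant.”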

2. You say that “Without something beyond a purely subjective report…I wouldn’t be convinced even if these results weren’t so weak.” You were content to focus on two self-report measures. At the 18 month follow-up, program group members reported higher levels of marital happiness, lower levels of marital distress, greater warmth and support, more positive communication skills, and fewer negative behaviors and emotions in their interactions with their spouses, relative to control group members. They also reported less psychological abuse (though not less physical abuse). These effects continued at the 36 month follow-up [should be 30-month -pnc]. Observations of couple interaction (done only at 18 months) indicated that the program couples, on average, showed more positive communication skills and less anger and hostility than the control group. Because the quality of these interactions of the partners, the effects, though small, were coded by observers blind to experimental status of the participants, meaning that not only the self-reports suggest some positive effects but observers could identify some differences between couples in the intervention and control groups that we know are important to couple and child well-being.

I am confused by this. The description of the variables for communication skills and warmth (p. 67) describes them as answers to survey questions, not observations (e.g., “We are good at working out our differences”). I’m looking pretty hard and not seeing what is described here. The word “anger” is not in the report, and the word “hostility” only occurs with regard to parents’ behavior toward children. Someone please point me to the passage that contradicts me, if there is one.

3. When all the children were considered as one group, regardless of age, there were no effects on child outcomes, but there WERE significant effects on younger children (age 2-4), compared with children 5 to 8.5 and children 8.5 to 17. The behaviors of the younger children of group participants were reported to be – and observed to be — more self- regulated, less internalizing (anxious, depressed, withdrawn), and less externalizing (aggressive, non-cooperative, hyperactive). It seems reasonable to us that a 16 week intervention for parents might not be sufficient to reduce negative behavior in older children.

On the younger children, I discounted that because the report said (p. 42): “While the findings for the youngest children are promising, there is some uncertainty because the pattern of results is not strong enough to remain statistically significant once adjustments are made to account for the number of outcomes examined.”

4. For every positive outcome we have cited, you or any critic can find another measure that shows that the intervention had no effect. That’s part of our point here. Rather than yes or no, what we have is a complicated series of findings that lead to a complicated series of decisions about how best to be helpful to families.

That’s just not an accurate description. There are many null findings for each positive finding, and the positive findings themselves are either small, trivially small, or not statistically reliable.

4. Several times you suggest that giving couples the $9,000 per family (the program costs) would do better. Do you have evidence that giving families money increases, or at least maintains, family relationship quality? Is $9,000 a lot? Compared to what? According to the Associated Press, New York city’s annual cost per jail inmate was $167,731 last year. In other words, we are already spending billions to serve families when things go wrong, and some of the small effects of the marital could be thought of as preventive – especially at earlier stages of children’s development.

At the end of your blog, you rightly suggest a study in which giving families money is pitted in a random trial against relationship interventions. That’s a good idea, but that suggests more research. Furthermore, why must we always discuss programs in terms of yes or no, good or bad? What if we gave families $9,000 AND provided help with their relationships – and tested for the effects of a combined relationship and cash assistance.

We have lots of evidence that richer couples are less likely to divorce, of course. I don’t know that giving someone $9,000 would help with relationship quality, but I’m guessing it would at least help pay the rent or pay for some daycare.

It’s important to acknowledge that we’re not talking about research. The marriage promotion program is coming out of the welfare budget, not NIH or NSF. This study is a small part of it. Hundreds of millions of dollars have been spent on this, of which the studies account for a small amount. If this boondoggle continues, and they continue to study it, then they should include the cash-control group.

5. It seems to us that as a social scientist, you would want to ask “what have we learned about helping families from this study and from other research on couple relationship education?” We would suggest that we’ve learned that the earlier Building Strong Families program for unmarried low-income families had low attendance and no positive effects. A closer reading of those reports suggest that many of the unmarried partners were not in long-term relationships and were not doing very well at the outset. Perhaps it was a long-shot to offer some of them relationship help. We’ve also learned that the Strengthening Healthy Marriage program for married low-income families had some small but lasting effects on both self-reported and observed measures of their relationship quality (we think that the researchers learned something from the earlier study). And, notably, we’ve learned that there seemed to be some benefits for younger children when their parents took advantage of relationship strengthening behaviors.

We always learn something. See my comments above for why this is a stretch. I would be happy to see, and even pay for, research on what helps poor families. We already do some of that, through scientific agencies. My objection is not to the research, but to the program that it is studying, which takes money away from things we know are good.

Here is their last word — as good a defense as any for this program.

We know from many correlational studies that when parents are involved in unresolvable high level conflict, or are cold and withdrawn from each other, parenting is likely to be less effective, and their children fare less well in their cognitive, emotional, and social development. It was not some wild government idea that improving couple relationships could have benefits for children. Evidence in many studies and meta-analyses of studies of couple relationship interventions in middle-class families, and more recently for low-income families, have also been shown to produce benefits for the couples themselves — and for their kids. This was not a government program to force marriage on poor families. The participants were already married. It was a program that offered free help because maintaining good relationships is hard for couples at any level, but low-income folks have fewer financial resources to get all kinds of help that every family needs.

We are not suggesting that strengthening family relationships alone is a magic bullet for improving the lot of poor families. But, in our experience over the past many years, it gives the parents some tools for building more productive couple and parent-child relationships, which gives both the parents and their children more confidence and hope.

What we need to learn is how to do family relationship strengthening more effectively, and how to combine that activity with other approaches, now being tried in isolated silos of government, foundations, and private agencies, in order to make life better for parents and their kids.
In our view, trumpeting the failure of Supporting Healthy Marriage by focusing on a few of the negative findings doesn’t help move us toward that goal.

6 Comments

Filed under Research reports

This ‘Supporting Healthy Marriage,’ I do not think it means what you think it means

New results are in from the unrelenting efforts to redirect welfare spending to marriage promotion. By my unsophisticated calculations, we’re more than $1 billion into this program, without a single proven healthy marriage yet to show for it.

The latest report is a study of the Supporting Healthy Marriage program, in which half of 6,298 couples were offered an extensive relationship support and education program. Short version: Fail.

Photo by Marlin Keesley from Flickr Creative Commons

Supporting Healthy Marriage is a federal program called “the first large-scale, multisite, multiyear, rigorous test of marriage education programs for low-income married couples.” The program evaluation used eight locations, with married, low- or modest-income parents (or expectant couples) offered a year-long program. Those in the program group had a four- to five-month series of workshops, followed by educational and social events to reinforce the curriculum.

Longer than most marriage education services and based on structured curricula shown to be effective with middle-income couples, the workshops were designed to help couples enhance the quality of their relationships by teaching strategies for managing conflict, communicating effectively, increasing supportive behaviors, and building closeness and friendship. Workshops also wove in strategies for managing stressful circumstances commonly faced by lower-income families (such as job loss, financial stress, or housing instability), and they encouraged couples to build positive support networks in their communities.

This was a good program, with a good-quality evaluation. To avoid selection biases, for example, the study included those who did not participate despite being offered the program. But participation rates were good:

According to program information data, on average, 83% of program group couples attended at least one workshop; 66% attended at least one supplemental activity; and 88% attended at least one meeting with their family support workers. Overall, program group couples participated in an average of 27 hours of services across the three components, including an average of 17 hours of curricula, nearly 6 hours of supplemental activities, and 4 hours of in-person family support meetings.

The couples had been together an average of 6 years; 82% had incomes below twice the poverty level. More than half thought their marriage was in trouble when they started.

But the treatment and control groups followed the exact same trajectory. At 12 months, 90% of both groups were still married or in a committed relationship; after 30 months, it was 81.5% for both groups.

[Figure: relationship status over time, program vs. control groups]

The study team also broke down the very diverse population, but could not find a race/ethnic or income group that showed notably different results. A complete failure.

But wait. There were some “small but sustained” improvements in subjectively-measured psychological indicators. How small? For relationship quality, the effect of the program was .13 standard deviations, equivalent to moving 15% of the couples one point on a 7-point scale from “completely unhappy” to “completely happy.” So that’s something. Further, after 30 months, 43% of the program couples thought their marriage was “in trouble” (according to either partner) compared with 47% of the control group. That was an effect size of .09 standard deviations. So that’s something, too. Many other indicators showed no effect.

However, I discount even these small effects since it seems plausible that program participants just learned to say better things about their marriages. Without something beyond a purely subjective report — for example, domestic violence reports or kids’ test scores — I wouldn’t be convinced even if these results weren’t so weak.
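For a rough sense of the arithmetic behind that equivalence – note that the scale’s standard deviation used here is my assumption for illustration, not a figure taken from the report:

```python
# Back-of-the-envelope: translate a small standardized effect into scale points.
# ASSUMPTION: the 7-point relationship-quality scale has an SD of about 1.15;
# the report's actual SD may differ.
effect_size = 0.13          # standardized effect reported for relationship quality
scale_sd = 1.15             # hypothetical SD of the 7-point scale
mean_shift = effect_size * scale_sd   # shift in raw scale points

# A mean shift of x points is arithmetically equivalent to moving a fraction x
# of couples up one point (and the rest not at all).
share_moved_one_point = mean_shift
print(round(share_moved_one_point, 2))  # about 0.15, i.e., ~15% of couples
```

In other words, a .13 standard-deviation effect translates into a shift of roughly a seventh of a point on a 7-point scale, spread across the sample.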

What did this cost? Round numbers: $9,100 per couple, not including evaluation or start-up costs. That would be $29 million for half the 6,298 couples. The program staff and evaluators should have thanked the poor families that involuntarily gave up that money from the welfare budget in the service of the marriage-promotion agenda. We know that cash would have come in handy – so thanks, welfare!

The mild-mannered researchers, realizing (one can only hope) that their work on this boondoggle is coming to an end, conclude:

It is worthwhile considering whether this amount of money could be spent in ways that bring about more substantial effects on families and children.

For example, giving the poor couples $9,000.

Trail of program evaluation tears

We have seen results this bad before. The Building Strong Families (BSF) program, also thoroughly evaluated, was a complete bust as well.

Some of the people trying to bolster these programs — researchers, it must be said, who are supported by the programs — have produced almost comically bad research, such as this disaster of an analysis I reported on earlier.

Now it’s time to prepare ourselves for the rebuttals of the marriage promoters, who are by now quite used to responding to this kind of news.

  • We shouldn’t expect government programs to work. Just look at Head Start. Of course, lots of programs fail. And, specifically, some large studies have failed to show that kids whose parents were offered Head Start programs do better than those whose parents were not. But Head Start is offering a service to parents who want it, one that most of them would buy on their own if it were not offered. Head Start might fail at lifting children out of poverty while succeeding at providing a valuable, need-based service to low-income families.
  • Rich people get marriage counseling, so why shouldn’t poor people? As you can imagine, I am all for giving poor people all the free goods and services they can carry. Just make it totally voluntary, don’t do it to change their behavior to fit your moral standards, and don’t pay for it by taking cash out of the pockets of single-parent families. I really am all in favor of marriage counseling for people who want it, but this is not the policy platform to get that done.
  • These small subjectively-measured benefits are actually very important, and were really the point anyway. No, the point was to promote marriage, from the welfare law itself (described here) to the Healthy Marriage Initiative. If the point was to make poor people happier, Congress never would have gone for it.
  • We have to keep trying. We need more programs and more research. If you want to promote marriage, here’s a research plan: have a third group in the study — in addition to the program and control group — who get cash equivalent to the cost of the service. See how well the cash group does, because that’s the outcome you need to surpass to prove this policy a success.

Everyone loves marriage these days. But a lot of people like to think of promoting marriage as a way to reduce poverty, and with that they believe poor people are that way because they’re not married. That’s mostly backwards.

11 Comments

Filed under In the news, Research reports

Divorce drop and rebound: paper in the news

My paper on divorce and the recession has been accepted by the journal Population Research and Policy Review, and Emily Alpert Reyes wrote it up for the L.A. Times today. The paper is online in the Maryland Population Research Center working paper collection.


Married couples promise to stick together for better or worse. But as the economy started to rebound, so did the divorce rate.

Divorces plunged when the recession struck and slowly started to rise as the recovery began, according to a study to be published in Population Research and Policy Review.

From 2009 to 2011, about 150,000 fewer divorces occurred than would otherwise have been expected, University of Maryland sociologist Philip N. Cohen estimated. Across the country, the divorce rate among married women dropped from 2.09% to 1.95% from 2008 to 2009, then crept back up to 1.98% in both 2010 and 2011.

To reach the figure of 150,000 fewer divorces, I estimated a model of divorce odds based on 2008 data (the first year the American Community Survey asked about divorce events). Based on age, education, marital duration, number of times married, race/ethnicity and nativity, I predicted how many divorces there would have been in the subsequent years if only the population composition changed. Then I compared that predicted trend with what the survey actually observed. This comparison showed about 150,000 fewer than expected over the years 2009-2011:

divorce-fig2

Notice that the divorce rate was expected to decline based only on changes in the population, such as increasing education and age. That means you can’t simply attribute any drop in divorce to the recession — the question is whether the pace of decline changed.
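The counterfactual logic here can be sketched with made-up numbers. The paper fits a logistic regression of divorce odds on the 2008 ACS; in this sketch a simple direct standardization with hypothetical group-specific rates and counts stands in for that model (the groups, rates, and counts below are all invented for illustration):

```python
# Expected-vs-observed divorce counts, holding behavior at baseline rates
# while letting population composition change. All numbers are hypothetical;
# the actual paper predicts from a fitted logistic regression instead.
base_rates = {"lt_hs": 0.025, "hs": 0.021, "ba_plus": 0.015}  # 2008 rates (made up)

# Hypothetical counts of married women by group in a later year
later_counts = {"lt_hs": 1_000_000, "hs": 2_500_000, "ba_plus": 2_000_000}
observed_divorces = 95_000  # hypothetical observed total in that year

# Expected divorces if only composition changed, with rates fixed at baseline
expected = sum(base_rates[g] * n for g, n in later_counts.items())
shortfall = expected - observed_divorces
print(f"expected {expected:,.0f}, observed {observed_divorces:,}, "
      f"shortfall {shortfall:,.0f}")
```

Swapping fitted model predictions in for `base_rates` recovers the paper’s approach; the point is just that the expected count holds behavior fixed while composition changes, so the gap is what needs explaining.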

Further, the interpretation that this pattern was driven by the recession is tempered by my analysis of state variations, which showed that states’ unemployment rates were not statistically associated with the odds of divorce when individual factors were controlled. Foreclosure rates were associated with higher divorce rates, but this didn’t hold up with state fixed effects.

So I’m cautious about attributing the trend to the recession. Unfortunately, the recession hit after only one year of ACS divorce data collection, and the ACS uses a totally different method of measuring divorce, which is basically not comparable to the divorce statistics compiled by the National Center for Health Statistics from state-reported divorce decrees.

Finally, in a supplemental analysis, I tested whether unemployment and foreclosures were associated with divorce odds differently according to education level. This showed unemployment increasing the education gap in divorce, and foreclosures decreasing it:

[Figure: unemployment, foreclosures, and the education gap in divorce]

Because I didn’t have data on the individuals’ unemployment or foreclosure experience, I didn’t read too much into it, but left it in the paper to spur further research.

Aside: This took me a few years.

It started when I felt compelled to debunk Brad Wilcox’s fatuous and deliberately misleading interpretation of divorce trends — silver lining! — at the start of the recession, which he followed up with an even worse piece of conservative-foundation bait. Unburdened by the desire to know the facts, and the burdens of peer review, he wrote in 2009:

judging by divorce trends, many couples appear to be developing a new appreciation for the economic and social support that marriage can provide in tough times. Thus, one piece of good news emerging from the last two years is that marital stability is up.

That was my introduction to his unique brand of incompetence (he was wrong) and dishonesty (note the use of “Thus,” to imply a causal connection where none has been demonstrated), which revealed itself most egregiously during the Regnerus affair (the full catalog is under this tag). Still, people publish his un-reviewed nonsense, and the American Enterprise Institute has named him a visiting scholar. If they know this record, they are unscrupulous; if they don’t, they are oblivious. I keep mentioning it to help differentiate those two mechanisms.

Check the divorce tag and the recession tag for the work developing all this.

1 Comment

Filed under Me @ work, Research reports

How to illustrate a .61 relationship with a .93 figure: Chetty and Wilcox edition

Yesterday I wondered about the treatment of race in the blockbuster Chetty et al. paper on economic mobility trends and variation. Today, graphics and representation.

If you read Brad Wilcox’s triumphalist Slate post, “Family Matters” (as if he needed “an important new Harvard study” to write that), you saw this figure:

chetty-in-wilcox

David Leonhardt tweeted that figure as “A reminder, via [Wilcox], of how important marriage is for social mobility.” But what does the figure show? Neither said anything more than what is printed on the figure. Of course, the figure is not the analysis. But it is what a lot of people remember about the analysis.

But the analysis on which it is based uses 741 commuting zones (metropolitan or rural areas defined by commuting patterns). So what are those 20 dots lying so perfectly along that line? In fact, the correlation printed on the graph, -.764, is much weaker than the relationship the dots themselves show, which is -.93! (Thanks to Bill Bielby for pointing that out.)

In the paper, which presumably few of the people tweeting about it read, the authors explain that these figures are “binned scatter plots.” They broke the commuting zones into equally-sized groups and plotted the means of the x and y variables. They say they did percentiles, which would be 100 dots, but this one only has 20 dots, so let’s call them vigintiles.

In the process of analysis, this might be a reasonable way to eyeball a relationship and look for nonlinearities. But for presentation it’s wrong wrong wrong.* The dots compress the variation, and the line compresses it more. The dots give the misleading impression that you’re displaying the variance around the line. What, are you trying to save ink?
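The inflation from binning is easy to reproduce with synthetic data. Nothing below comes from the actual commuting-zone file; the slope and noise level are arbitrary, chosen only to make a noisy relationship:

```python
import numpy as np

# Illustration (not the Chetty et al. data): binning a noisy bivariate
# relationship into 20 group means inflates the apparent correlation.
rng = np.random.default_rng(0)
n = 709                            # same number of points as the CZ scatter
x = rng.uniform(0, 1, n)
y = -x + rng.normal(0, 0.35, n)    # true slope -1, plenty of noise

r_raw = np.corrcoef(x, y)[0, 1]

# "Vigintiles": sort by x, split into 20 equal-size bins, correlate the means
order = np.argsort(x)
bins = np.array_split(order, 20)
x_means = np.array([x[b].mean() for b in bins])
y_means = np.array([y[b].mean() for b in bins])
r_binned = np.corrcoef(x_means, y_means)[0, 1]

print(f"raw r = {r_raw:.2f}, binned r = {r_binned:.2f}")
```

Averaging within bins cancels most of the vertical noise, so the 20 means hug the line far more tightly than the 709 underlying points do.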

Since the data are available, we can look at this for realz. Here is the relationship with all the points, showing a much messier relationship, the actual -.76 (the range of the Chetty et al. figure, which was compressed by the binning, is shown by the blue box):

chetty scatters

That’s 709 dots — one for each of the commuting zones for which they had sufficient data. With today’s powerful computers and high resolution screens, there is no excuse for reducing this down to 20 dots for display purposes.

But wait, there’s more. What about population differences? In the 2000 Census, these 709 commuting zones ranged in population from 5,000 (Southwest Jackson, Utah) to 16,000,000 (Los Angeles). Do you want to count Southwest Jackson as much as Los Angeles in your analysis of the relationship between these variables? Chetty et al. do in their figure. But if you weight them by population size, so each person in the population contributes equally to the relationship, the correlation that was -.76 (and displayed as -.93) is reduced to -.61. Yikes.
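Weighting can move a correlation a lot when a few heavily populated units sit off the line. Here is a deliberately tiny toy example (all values made up, not the real commuting-zone data): nine small CZs sit exactly on the line y = -x, and one huge CZ, think New York, sits well above it.

```python
import numpy as np

# Nine "small" commuting zones exactly on the line, one "huge" outlier above it
x = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.5])
y = -x.copy()
y[-1] = 0.4                   # the outlier CZ
pop_equal = np.ones_like(x)
pop_real = pop_equal.copy()
pop_real[-1] = 50.0           # the outlier has 50x everyone else's population

def weighted_corr(x, y, w):
    # np.cov accepts analytic weights; its normalization cancels in the ratio
    c = np.cov(x, y, aweights=w)
    return c[0, 1] / np.sqrt(c[0, 0] * c[1, 1])

r_unweighted = weighted_corr(x, y, pop_equal)
r_weighted = weighted_corr(x, y, pop_real)
print(f"unweighted r = {r_unweighted:.2f}, weighted r = {r_weighted:.2f}")
```

With equal weights the one outlier barely dents the correlation; weighted by population, it drags the relationship toward zero, which is the honest answer if you care about people rather than places.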

Here is what the plot looks like if you scale the commuting zones according to population size (more or less, not quite sure how Stata does this):

chetty scatters weighted

Now it’s messier, and the slope is much less steep. And you can see that gargantuan outlier — which turns out to be the New York commuting zone, which has 12 million people and a lot more upward mobility than you would expect based on its family structure composition.

Finally, while we’re at it, we may as well attend to that nonlinearity that has been apparent since the opening figure. We can increase the variance explained from .38 to .42 by adding a quadratic term, to get this:

chetty scatters weighted quad

I hate to go beyond what the data can really tell. But — what the heck — it does appear that after 33% single-mother families, the effect hits its minimum and turns positive. These single mother figures are pretty old (when Chetty et al.’s sample were kids). Now that the country has surpassed 40% unmarried births, I think it’s safe to say we’re out of the woods. But that’s just speculation.**
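The linear-versus-quadratic comparison itself is mechanical. A sketch with synthetic data shaped loosely like the weighted scatter (the coefficients and noise level below are invented, not estimated from anything):

```python
import numpy as np

# Compare variance explained by linear vs. quadratic fits on synthetic data
# with a relationship that flattens out at high x (made-up coefficients).
rng = np.random.default_rng(2)
n = 709
x = rng.uniform(0.05, 0.45, n)                     # e.g., fraction single mothers
y = 60 - 120 * x + 110 * x**2 + rng.normal(0, 4, n)

def r_squared(x, y, degree):
    coefs = np.polyfit(x, y, degree)               # least-squares polynomial fit
    resid = y - np.polyval(coefs, x)
    return 1 - resid.var() / y.var()

r2_lin = r_squared(x, y, 1)
r2_quad = r_squared(x, y, 2)
print(f"linear R² = {r2_lin:.2f}, quadratic R² = {r2_quad:.2f}")
```

Because the linear model is nested in the quadratic one, the quadratic R² can only be at least as large; the question is always whether the gain is worth the added wiggle, not whether there is one.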

*OK, OK: “wrong wrong wrong” is going too far. Absolute rules in data visualization are often wrong wrong wrong. Binning 709 groups down to 20 is extreme. Sometimes you have a zillion points. Sometimes the plot obscures the pattern. Sometimes binning is an inherent part of measurement (we usually measure age in years, for example, not seconds). None of that is an excuse in this case. However, Carter Butts sent along an example that makes the point well:

841101_10201299565666336_1527199648_o

On the other hand, the Chetty et al. case is more similar to the following extreme example:

If you were interested in the relationship between age and earnings for a sample of 1,400 full-time, year-round women, you might start with this, which is a little frustrating:

age-wage1

The linear relationship is hard to see, but it’s about +$500 per year of age. However, the correlation is only .13, and the variance explained by linear-age alone is only 1.7%. But if you plotted the mean wage over ages, the correlation jumps to .68:

age-wage2

That’s a different question. It’s not, “how does age affect earnings,” it’s, “how does age affect mean earnings.” And if you bin the women into 10-year age intervals (25-34, 35-44, 45-54) and plot the mean wage for each group, the correlation rises to .86.

age-wage3

Chetty et al. didn’t report the final correlation, but they showed it, even adding the regression line, so that Wilcox could call it the “bivariate relationship.”

**This paragraph was a joke that several people missed, so I’m clarifying. I would never draw a conclusion like that from the scraggly tail of a loose correlation like this.

11 Comments

Filed under Research reports

Where is race in the Chetty et al. mobility paper?

What does race have to do with mobility? The words “race,” “black,” or “African American” don’t appear in David Leonhardt’s report on the new Chetty et al. paper on intergenerational mobility that hit the news yesterday. Or in Jim Tankersley’s report in the Washington Post, which is amazing, because it included this figure:

post-race-mobility

That’s not exactly a map of Black America, which the Census Bureau has produced, but it’s not that far off:

census-black-2010

But even if you don’t look at the map, what if you read the paper? Describing the series of maps of intergenerational mobility, the authors write:

Perhaps the most obvious pattern from the maps in Figure VI is that intergenerational mobility is lower in areas with larger African-American populations, such as the Southeast. … Figure IXa confirms that areas with larger African-American populations do in fact have substantially lower rates of upward mobility. The correlation between upward mobility and fraction black is −0.585. In areas that have small black populations, children born to parents at the 25th percentile can expect to reach the median of the national income distribution on average (ȳ25,c = 50); in areas with large African-American populations, ȳ25,c is only 35.

Here is that Figure IXa, which plots Black population composition and mobility levels for groups of commuting zones:

ixa

Yes, race is an important part of the story. In a nice part of the paper, the authors test whether Black population size is related to upward mobility for Whites (or, people in zip codes that are probably White, since race isn’t in their tax records), and find that it is. It’s not just Blacks driving the effect. I’m thinking about the historical patterns of industrial development, land ownership, the backwardness of racist elites in the South, and so on. But the authors, apparently, are not. For some reason, not explained at all, Chetty et al. offer this pivot:

The main lesson of the analysis in this section is that both blacks and whites living in areas with large African-American populations have lower rates of upward income mobility. One potential mechanism for this pattern is the historical legacy of greater segregation in areas with more blacks. Such segregation could potentially affect both low-income whites and blacks, as racial segregation is often associated with income segregation. We turn to the relationship between segregation and upward mobility in the next section.

And that’s it; they don’t discuss Black population size again, focusing instead on racial segregation. They don’t pursue this “potential mechanism” in the analysis that follows. Instead, they drop percent Black for racial segregation. I have no idea why, especially considering Table VII, which shows unadjusted (and normalized) correlations (more or less) between each variable and absolute upward mobility (the variable mapped above):

tablevii

In these normalized correlations, fraction Black has a stronger relationship to mobility than racial segregation or economic segregation! In fact, it’s just about the strongest relationship on the whole long table (except for single mothers, with which it is of course highly correlated). So why do they not use it in their main models? Maybe someone else can explain this to me. (Full disclosure, my whole dissertation was about this variable.)

This is especially unfortunate because they do an analysis of the association between commuting zone family structure (using macro-level variables) and individual-level mobility, controlling for marital status — but not race — at the individual level. From this they conclude, “Children of married parents also have higher rates of upward mobility if they live in communities with fewer single parents.” I am quite suspicious that this effect is inflated by the omission of race at either level. So they write the following, which goes way beyond what they can find in the data:

Hence, family structure correlates with upward mobility not just at the individual level but also at the community level, perhaps because the stability of the social environment affects children’s outcomes more broadly.

Or maybe, race.

I explored the percent Black versus single mother question in a post a few weeks ago using the Chetty et al. data. I ran two very simple OLS regression models using only the 100 largest commuting zones, weighted for population size: the first with just single motherhood, the second with proportion Black added. The comparison shows that the association between single motherhood rates and immobility is reduced by two-thirds, and is no longer significant at conventional levels, when percent Black is added to the model. That is: percent Black statistically explains the relationship between single motherhood and intergenerational immobility across U.S. labor markets.

That’s not an analysis, it’s just an argument for keeping percent Black in the more complex models. Substantively, the level of racial segregation is just one part of the complex race story: it measures one kind of inequality in a local area, but not the relative size of the Black population, which matters a lot. (I won’t go into it all, but here are three old papers: one, two, three.)
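The pattern is a textbook omitted-variable story, and it can be sketched with synthetic data (unweighted here for simplicity; every number below is invented, with percent Black driving mobility and single motherhood merely correlated with it, which is the scenario the two-model comparison is designed to detect):

```python
import numpy as np

# Synthetic confounding demo: mobility depends on percent Black only, but
# single motherhood is correlated with percent Black, so a model that omits
# percent Black loads the effect onto single motherhood.
rng = np.random.default_rng(3)
n = 100                                    # think: 100 largest commuting zones
pct_black = rng.uniform(0, 0.4, n)
single_mom = 0.15 + 0.5 * pct_black + rng.normal(0, 0.03, n)
mobility = 45 - 30 * pct_black + rng.normal(0, 2, n)   # no direct single-mom effect

def ols_coefs(cols, y):
    # OLS via least squares, with an intercept column
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b1 = ols_coefs([single_mom], mobility)             # model 1: single moms only
b2 = ols_coefs([single_mom, pct_black], mobility)  # model 2: add percent Black
print(f"single-mom coefficient: {b1[1]:.1f} alone, {b2[1]:.1f} with pct Black")
```

In model 1 the single-mother coefficient is large and negative; once percent Black enters in model 2, it collapses toward zero, exactly the kind of shrinkage the blog post reports for the real data.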

The burgeoning elite conversation about economic mobility, poverty, and inequality is good news. Its avoidance of race is not.

14 Comments

Filed under Research reports