Tag Archives: academia

New data on gender-segregated sociology

Four years ago I wrote about the gender composition of sociology and the internal segregation of the discipline. Not much has changed, at least on the old measures. Here’s an update including some new measures (with some passages copied from the old post).

People may (or may not) want to be sociologists, they may or may not be accepted to graduate schools, thrive there (with good mentoring or bad), freely choose specializations, complete PhDs, publish, get jobs, rise to positions of leadership, and so on.  As in workplaces, gender segregation in academic sociology represents the cumulative intentions and actions of people in different institutional settings and social locations. It’s also the outcome of gender politics and power struggles. So, very interesting!

A report from the research folks at the American Sociological Association (ASA) got me thinking about this in 2011. The conversation revived the other day when someone asked ASA Vice President Elect Barbara Risman (a friend and colleague of mine), “What do you make of the fact that increasingly the majority of ASA election candidates tend to be women?” As we’ll see, the premise may be wrong, but the gender dynamics of ASA are interesting anyway.

#1: ASA leadership

The last four people elected president of ASA have been women (Ruth Milkman, Paula England, Annette Lareau, and Cecilia Ridgeway), and the next winner will be either Michele Lamont or Min Zhou, both women. That’s an unprecedented run for women, and the greatest stretch of gender domination since the early 1990s, when men won six times in a row. Here is the trend, by decade, starting with the decades before a woman president, 1906 through the 1940s:

[Chart: ASA presidents by gender, by decade]

Clearly, women have surpassed parity at the top echelons of the association’s academic leadership. ASA elections are a complicated affair, with candidates nominated by a committee at something like two per position. For president, there are two candidates. In the last nine presidential elections, six have featured a man running against a woman, and the women won four of those contests. So women are more than half the candidates, and they’ve been more likely to win against men. That pattern is general across elected offices since 2007 (as far back as I looked): more than half the candidates are women, but even more women win (most elections have about 36 candidates for various positions):

[Chart: ASA election candidates and winners by gender, 2007 to present]

The nominating committees pick (or convince) more women than men to run, and then the electorate favors the women candidates, for reasons we can’t tell from these data.

These elections are run in an association that became majority female in its membership only in 2005, reaching only 53% female in 2010. That trend is likely to continue as older members retire and the PhD pool continues to shift toward women.

#2: PhDs

Since the mid-1990s, according to data from the National Science Foundation, women have outnumbered men as new sociology PhDs, and we are now approaching two-thirds female. (The data I used in the old post showed a drop in women after 2007, but with the update, which now comes from here, that’s gone.)

[Chart: new sociology PhDs by gender, since the mid-1990s]

Producing mostly-female PhDs for a quarter of a century is getting to be long enough to start achieving a critical mass of women at the top of the discipline.

#3: Specialization

These numbers haven’t been updated by ASA since 2010. The pattern of section membership at that time showed a marked level of gender segregation. On a scale of 0 to 1, I calculate the sections are segregated at a level of .25.

[Chart: gender composition of ASA sections, 2010]
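That segregation measure is presumably the index of dissimilarity: half the sum, across sections, of the absolute difference between each section’s share of all men’s memberships and its share of all women’s memberships. Here is a minimal Python sketch, using made-up section counts rather than the actual ASA membership data:

```python
def dissimilarity(men, women):
    """Index of dissimilarity: 0 = identical distributions, 1 = complete segregation."""
    M, W = sum(men), sum(women)
    return 0.5 * sum(abs(m / M - w / W) for m, w in zip(men, women))

# Hypothetical membership counts for four sections (not real ASA numbers):
men = [300, 120, 80, 50]
women = [150, 240, 90, 200]
print(round(dissimilarity(men, women), 2))
```

A value of .25 means a quarter of either group’s memberships would have to move between sections to equalize the two distributions.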

#4: Editors and editorial boards

Finally, prestigious academic journals have one or more editors, often some associate editors, and then an editorial board. In sociology, the board mostly comprises the people who are called upon to review articles most often. Because journal publication is a key hurdle for jobs and promotions, these sociologists serve as gatekeepers for the discipline. In return they get some prestige, the occasional reception, and they might be on their way to becoming editors themselves someday.

Journal leadership is lagging behind the trends in PhDs, ASA members, and ASA leadership. I selected the top 20 journals in the Sociology category from the Journal Citation Reports (excluding a few misplaced titles), plus Social Problems and Social Forces, because these are considered to be leading journals despite low impact factors. The editors of these journals are 41% female (or 40% if you use journals as the unit of analysis instead of editors). Here is the list in two parts — general journals and specialty journals — with each sorted by impact factor. For multiple editors I either list the gender if they’re all the same, or show the breakdown if they differ:

[Table: top sociology journals by impact factor, with editor genders]

It looks like the gender gap is partly attributable to the difference between journals run by associations and those run as department fiefdoms or by for-profit publishers.

For editorial boards, I didn’t do a systematic review, but I looked at the two leading research journals — American Sociological Review and American Journal of Sociology — as well as two prestigious specialized journals — Sociological Methods and Research, and Gender and Society (which is run by its own association, Sociologists for Women in Society, whose membership includes both women and men). Here’s the update to my 2011 numbers:

[Chart: editorial board gender composition, ASR, AJS, SMR, and Gender & Society]

I removed a couple of board members I know to have died in the last year, so the published lists may not be fully up to date.

A note on these journals: SMR and AJS are fiefdoms with no accountability to anyone outside their cliques, so it’s not surprising they are decades behind. ASR and G&S, on the other hand, are run by associations with majority-female memberships and hierarchies — in the case of G&S with a feminist mission. (ASA demands reports on gender and race/ethnicity composition from its editors.) AJS has no excuse and should suffer opprobrium for this. SMR might argue they can’t recruit women for this job (but someone should ask them to at least make that case).


Filed under Uncategorized

Stop me before I fake again

In light of the news on social science fraud, I thought it was a good time to report on an experiment I did. I realize my results are startling, and I welcome the bright light of scrutiny that such findings might now attract.

The following information is fake.

An employee training program in a major city promises basic job skills training as well as job search assistance for people with a high school degree and no further education, ages 23-52 in 2012. Due to an unusual staffing practice, new applications were for a period in 2012 allocated at random to one of two caseworkers. One provided the basic services promised but nothing extra. The other embellished his services with extensive coaching on such “soft skills” as “mainstream” speech patterns, appropriate dress for the workplace, and a hard work ethic, among other elements. The program surveyed the participants in 2014 to see what their earnings were in the previous 12 months. The data provided to me does not include any information on response rates, or any information about those who did not respond. And it only includes participants who were employed at least part-time in 2014. Fortunately, the program also recorded which staff member each participant was assigned to.

Since this provides such an excellent opportunity for studying the effects of soft skills training, I think it’s worth publishing despite these obvious weaknesses. To help with the data collection and analysis, I got a grant from Big Neoliberal, a non-partisan foundation.

The data includes 1040 participants, 500 of whom had the bare-bones service and 540 of whom had the soft-skills add-on, which I refer to as the “treatment.” These are the descriptive statistics:

[Table: descriptive statistics for the fake sample]

As you can see, the treatment group had higher earnings in 2014. The difference in logged annual earnings between the two groups is statistically significant:

[Table: OLS regression results for logged 2014 earnings]

As you can see in Model 1, the Black workers in 2014 earned significantly less than the White workers. This gap of .15 logged earnings points, or about 15%, is consistent with previous research on the race wage gap among high school graduates. Model 2 shows that the treatment training apparently was effective, raising earnings about 11%. However, the interactions in Model 3 confirm that the benefits of the treatment were concentrated among the Black workers. The non-Black workers did not receive a significant benefit, and the treatment effect among Black workers basically wiped out the race gap.
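An aside on reading these coefficients: a gap in logged earnings is only approximately a percentage gap, and the approximation loosens as the coefficients grow. A quick Python check of the .15 log-point gap:

```python
import math

gap = 0.15  # gap in logged annual earnings

# Percent advantage of the higher-paid group over the lower-paid group:
print(round((math.exp(gap) - 1) * 100, 1))   # 16.2

# Percent deficit of the lower-paid group relative to the higher-paid group:
print(round((1 - math.exp(-gap)) * 100, 1))  # 13.9
```

Calling a .15 log-point gap “about 15%” splits the difference between the two exact figures, which is the usual convention for small coefficients.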

The effects are illustrated, with predicted values, in this figure:

[Figure: predicted earnings by race and treatment status]

Soft skills are awesome.

I have put the data file, in Stata format, here.

Discussion

What would you do if you saw this in a paper or at a conference? Would you suspect it was fake? Why or why not?

I confess I never seriously thought of faking a research study before. In my day coming up in sociology, people didn’t share code and datasets much (it was never compulsory). I always figured if someone was faking they were just changing the numbers on their tables to look better. I assumed this happens to some unknown, and unknowable, extent.

So when I heard about the LaCour & Green scandal, I thought whoever did it was tremendously clever. But when I looked into it more, I thought it was not such rocket science. So I gave it a try.

Details

I downloaded a sample of adults 25-54 from the 2014 ACS via IPUMS, with annual earnings, education, age, sex, race and Hispanic origin. I set the sample parameters to meet the conditions above, and then I applied the treatment, like this:

First, I randomly selected the treatment group:

gen temp = runiform()                 // uniform [0,1) draw for each case
gen treatment = 0
replace treatment = 1 if temp >= .5   // assign roughly half to treatment
drop temp

Then I generated the basic effect, and the Black interaction effect:

gen effect = rnormal(.08,.05)    // person-level treatment effect, mean .08
gen beffect = rnormal(.15,.05)   // extra effect for Black workers, mean .15

Starting with the logged wage variable, lnwage, I added the basic effect to all the treated subjects:

gen newlnwage = lnwage                        // copy, so untreated cases keep lnwage
replace newlnwage = lnwage + effect if treatment==1

Then added the Black interaction effect to the treated Black subjects, and subtracted it from the non-treated ones.

replace newlnwage = newlnwage+beffect if (treatment==1 & black==1)
replace newlnwage = newlnwage-beffect if (treatment==0 & black==1)

This isn’t ideal, but when I just added the effect I didn’t have a significant Black deficit in the baseline model, so that seemed fishy.

That’s it. I spent about 20 minutes trying different parameters for the fake effects, trying to get them to seem reasonable. The whole thing took about an hour (not counting the write-up).
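For readers without Stata, the same fabrication steps can be sketched in Python with numpy. The lnwage and black arrays below are simulated stand-ins for the ACS extract, not the actual data; everything else mirrors the Stata code above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1040

# Simulated stand-ins for the ACS extract (not the real data):
lnwage = rng.normal(10, 0.8, n)    # logged annual earnings
black = rng.integers(0, 2, n)      # Black indicator

# 1. Randomly select the treatment group (mirrors the runiform() step):
treatment = (rng.uniform(size=n) >= 0.5).astype(int)

# 2. Generate the basic effect and the Black interaction effect:
effect = rng.normal(0.08, 0.05, n)
beffect = rng.normal(0.15, 0.05, n)

# 3. Add the basic effect for the treated, then add the interaction effect
#    for treated Black cases and subtract it from untreated Black cases:
newlnwage = lnwage.copy()
t = treatment == 1
b = black == 1
newlnwage[t] += effect[t]
newlnwage[t & b] += beffect[t & b]
newlnwage[~t & b] -= beffect[~t & b]
```

Note that untreated non-Black cases keep their original (integer-dollar) earnings untouched, which is exactly the artifact that gives the fake away below.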

I put the complete fake files here: code, data.

Would I get caught for this? What are we going to do about this?

BUSTED UPDATE:

In the comments, ssgrad notices that if you exponentiate (unlog) the incomes, you get a funny list — some are binned at whole numbers, as you would expect from a survey of incomes, and some are random-looking and go out to multiple decimal places. For example, one person reports an even $25,000, and another supposedly reports $25,251.37. This wouldn’t show up in the descriptive statistics, but it’s kind of obvious in a list. Here is a list of people with incomes between $20,000 and $26,000, broken down by race and treatment status. I rounded to whole numbers because, even without the decimal points, you can see that the only people who report whole-dollar incomes are non-Blacks in the non-treatment group. Busted!
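The check ssgrad ran is mechanical: survey respondents report earnings in whole dollars, so a genuinely survey-based logged wage unlogs to an integer, while one with a random effect added does not. A self-contained Python sketch of the check, with toy values rather than the posted data:

```python
import math

# Toy logged earnings: two from whole-dollar survey reports, and two with
# a fake effect added (so they no longer unlog to whole dollars).
logged = [math.log(25000),
          math.log(24000),
          math.log(25000) + 0.08,
          math.log(24000) + 0.11]

def looks_fabricated(lnw, tol=1e-6):
    """Flag logged earnings that don't unlog to a whole-dollar amount."""
    dollars = math.exp(lnw)
    return abs(dollars - round(dollars)) > tol

flags = [looks_fabricated(x) for x in logged]
print(flags)  # → [False, False, True, True]
```

The tolerance absorbs floating-point noise from the log/exp round trip; only the doctored values get flagged.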

[Table: incomes between $20,000 and $26,000 by race and treatment status]

So, that only took a day — with a crowd-sourced team of thousands of social scientists poring over the replication file. Faith in the system restored?


Filed under In the news, Research reports

Not all trigger warnings are the same


I follow the debate over trigger warnings only loosely. Please feel free to add information in the comments.

In what I see, the debate over trigger warnings is hampered by ill-defined terms and unhelpful hyperbole. I want to give a very basic description of what I think should be a relatively simple approach to the issue, call out a gender problem, and then offer my own example.

To show you where I’m coming from: What prompted me finally to write this was the combination of this popular op-ed by Judith Shulevitz, this essay about the problem of teaching about rape in law school, and the flap over Christina Hoff Sommers’s anti-anti-rape-culture campus tour. I noted that a letter to the editor in the Oberlin Review about her upcoming talk began with this: “Content Warning: This letter contains discussion of rape culture, online harassment, victim blaming and rape apologism/denialism.”

Impending discourse

There are three kinds of relevant warnings that I would group together under the category of “impending discourse notification.” That is, warnings that take the form: something is about to be discussed or displayed. Keeping these three things straight would be really helpful.

1. Warnings of content likely to be disturbing to many people in the audience.

For example, graphic images of violence during a regular TV news program, descriptions of rape on NPR’s Morning Edition, or sociology classroom lectures that contain images of Blacks being lynched. In these cases, a warning of the impending discourse is something like common courtesy. It says, “we are about to see or hear something important enough to risk disturbing the audience, and potentially disturbing enough that you should gird yourself.” In these discrete cases warnings are not controversial in principle, though of course individual applications may be off target or offensive. Many settings carry an implied warning: A horror film can be expected to surprise you with specific acts of violence, but you know something bad is coming; a sociology class on racial inequality should be expected to include discussions of lynching, though some students have no idea about lynching; a history documentary on war is expected to show people being killed. Warnings in these cases seem optional.

2. Warnings of content that may trigger post-traumatic stress responses.

I am no expert on this, obviously, but my understanding (from, e.g., here) is that post-traumatic stress disorder (PTSD) was included in the DSM III in 1980, partly based on the experience of Vietnam War veterans. The condition was understood to involve reliving memories of trauma, avoiding reminders of trauma, and hyperarousal that can lead to high levels of distress. There are many kinds of traumas that can lead to PTSD, but some are much more common than others, especially violence, sexual abuse, and existential threats. You can’t expect to prevent all triggering events, but you can take steps to avoid common ones, or warn people when you are going to show or discuss something to an audience likely to include people with PTSD. Again, war movies are expected to show graphically violent war scenes, but lectures to audiences of combat veterans about disability benefits should not. This is a question of sensitivity and awareness, not blanket prohibitions and censoring. And this is about shocking or graphic imagery, not mere mention of a topic. We just can’t have a democratic discourse without mentioning bad things, sometimes spontaneously. The Oberlin newspaper warning above is wrong. And I don’t agree with another Oberlin essayist who says trigger warnings should be treated as disability accommodations, “as common as wheelchair ramps.” (Of course, I would make an accommodation for a specific student — and I have — who asks to opt out of a specific class session based on the topic.)

3. Warnings of obnoxious, offensive, disagreeable, or dangerous ideas.

These warnings are unnecessary and wrong. If someone wants to say the problem of campus rape is exaggerated, that Black men are genetically aggressive, the Holocaust is a myth, or Creationists are stupid — let them. Hand out flyers or picket at their talk, discredit them in the Q&A, denounce them on Twitter, or ignore them. If they are receiving honorary degrees or other accolades (or money) from governments or universities, that’s political fair game to protest. But protecting people from hearing bad ideas is a bad idea (outside of incitement to violence). On campus or in the classroom, exposure to bad ideas is essential to critical intellectual development. If you’re never offended in college you aren’t learning enough.

A gender problem

I have complained elsewhere that the non-criminal procedure for responding to campus rape “downgrades sexual violence from a real crime to a women’s issue.” Something similar is going on with trigger warnings. Although PTSD-type responses can be triggered by many kinds of experiences, it looks like sexual violence is the main arena of debate over campus trigger warnings. Why? This should not be reduced to a “women’s issue.” My admittedly limited exposure to this debate often makes me cringe at what seems like a demand for special protection — from discourse — for women. Women are in fact more likely to experience PTSD than men, but that’s only partly because they are more likely to be sexually assaulted. Men are more likely to experience other potentially traumatic events, including accidents, nonsexual assaults, combat, or witnessing violence, all of which can lead to PTSD. People with sensitivity to trauma-related triggering deserve respect and sensitivity. But women — like any subordinate group — need to exert leadership in the discourse surrounding that inequality, and that doesn’t come from avoiding the topic or silencing their opponents. If the only people discussing rape are people who have never been raped, the dialogue is likely to be male-dominated. We have to work on maintaining the line between offensive and unpleasant on the one hand and truly trauma-inducing on the other. If it’s necessary to avoid the latter, it’s all the more important for those who are able to engage the former.

How did I do addendum

I think we can learn a lot from these discussions. They have raised the question, “What if we acted like sexual assault is actually common?” That reality is hard to grasp — for people who are victims or not — because the experience is so often private.* In the chapter in my book about family violence and abuse, I didn’t include an impending discourse notification, but — after opening with a detailed story of violent abuse — I raised the issue of how discussing the topic might affect students:

The subject of family violence and abuse is personal and painful. Instructors and students should pause at this point to consider the possible effects of discussing these topics, especially for those who have experienced abuse in their own lives. Because this kind of victimization still is so common in the United States, most of us will know someone who has been touched by it in one way or another. However, because families often are protected by a cultural—and sometimes legal—expectation of privacy and a shroud of secrecy, those who suffer usually do so in isolation. That leaves us with the complexity of a problem that is widespread but experienced alone and often invisibly. Such isolation can make the experience of abuse even worse. One benefit of addressing the issue in this book is that we can help pierce that isolation and encourage victims to realize that they are not alone.

I think advising people in the classroom to “pause to consider” before launching into the topic is reasonable — it’s a common experience with a known risk of traumatic effects. But I didn’t write that just to protect people who might have a traumatic reaction to the topic, I did it because it’s a learning opportunity for everyone.

* In the book I tried to put rape in normal-experience terms: experiencing rape (18% of women by one reasonable estimate) is more common than using the Pill for contraception (17% of women currently), but less common than smoking cigarettes for young-adult women (22%, ages 25-34). Does that help?


Filed under In the news, Me @ work

Regnerus responds

Photo by carnagenyc from Creative Commons.

Note: Corrected May 3 to reflect that these documents are about post-tenure review, not promotion to full professor. The blogger regrets the error, and thanks the tipster.

The news, reported in The Daily Texan, with documents retrieved via public records request, is that, in the face of conflicting views about Mark Regnerus’s post-tenure review, UT’s Dean of Liberal Arts, Randy Diehl, commissioned a report on the scandal by sociologist Marc Musick. The report is an excellent review and summary of the affair, and provides ample evidence for a negative review. And for the rest of us, it had the beneficial effect of flushing out Regnerus, who wrote his most detailed response yet to the accusations against him — a response he may or may not have realized would become public. (The new documents are linked in the Texan article; for my coverage, you can start here for a review with links.)

It’s difficult to try to draw a line, as Musick does, apparently at the dean’s request, between ethical misconduct and bad research. It’s really where the two are combined that Regnerus causes trouble. More on the review issue later.

Musick was entirely correct when he wrote:

Based on these [media] appearances and his [court] testimony, it is self-evident that Professor Regnerus has used his research in the debate over same-sex marriage in direct contradiction to the statements he made in the NFSS article and response to commentaries. When combined with clear evidence that he colluded with politically-motivated organizations prior to the publication of the study, it leads to the appearance that the post-study behavior was an extension of the political work that was happening prior to the study. In light of all of this activity, it appears that the statements he made in the article could certainly be seen as misleading at best and an outright fabrication of his intentions at worst.

This is the heart of the ethics side of the complaint: his bad research was part of a covertly-organized political effort, and he lied about it to cover that up. Regnerus simply asserts this isn’t true, but to believe his self-serving description of his own intentions is to be made a fool of. It’s just not plausible that:

I did not intend to utilize the results for any political or legal purpose, and stated so when I completed work on the manuscript in late February 2012. My interests, from the outset of participation in this project up through December 2012, lay squarely in the social science question that gave rise to the study.

Only God can truly see into the unlit depths of Regnerus’s heart — but the rest of us can be pretty sure he’s lying based on his actions.

Regnerus claims that as he became immersed in the subject he grew convinced that same-sex marriage is a bad policy, and began “to worry about esteeming the systematic severance of children from their biological origins.” But he was part of the “coalition” (his word!) against gay marriage from before the study was even fielded. His email to Brad Wilcox, prior to conducting the study:

I would like, at some point, to get more feedback from Luis [Tellez] and Maggie [Gallagher] about the ‘boundaries’ around this project, not just costs but also their optimal timelines (for the coalition meeting, the data collection, etc.), and their hopes for what emerges from this project, including the early report we discussed in DC.

What pure interest in “the social science question” involves planning an “early report” with the leading activist against gay marriage, Maggie Gallagher?

Lots of research is as poor quality as Regnerus’s. It’s in combination with the rotten ethics that we see the more serious problem — it’s how the research fits in with his diabolical political plans and his reprehensible moral views. That is, the research was not just bad, it was bad in a purposeful direction. That’s not discernible from a reading of the single, (not really) peer-reviewed article.

Cause and effect

The issue of causality is described in the report as one of methods, but I think it’s really an ethical issue.

Regnerus has been having this both ways from the beginning, and it highlights the challenge of (and for) public intellectuals who speak to multiple audiences. In the original paper he wrote, “I would be remiss to claim causation here.” So that is his cover (and he quotes again here). But in presentations to friendly audiences he is much less guarded. As I reported earlier, in a talk he gave at Catholic University:

He first described in some detail the “standard set of controls” he used to test the relationship between having a father or mother who ever (reportedly) had a same-sex romantic relationship and his many negative outcome variables. And then he proceeded to present bivariate relationships as if they were the results of those tests. He didn’t say they were adjusted [for the controls], but everyone thought the results he showed were controlling for everything. For example, to gasps from the crowd, he revealed that 17 percent of “intact bio family” kids had ever received welfare growing up, compared with 70 percent for those whose mother (reportedly) ever had a same-sex romantic relationship. If you don’t realize that this is mostly just a comparison between stable married-couple families and single-mother families, that might seem like a shockingly large effect.

The causal story at that talk was hammered home in two other ways. First, he presented the results as evidence of a “reduced kinship theory,” under which parents care less about their children the less biologically related they are. Second, he said his “best guess” about why he found worse outcomes for children of women who ever had a lesbian relationship than for those whose fathers ever had a gay relationship was that the former group spent more time with their mothers’ lesbian partners. Both of these descriptions are based on a causal interpretation of his findings.

Anyway, on to the political machinations.

Regnerus lies about Brad Wilcox’s lies

Regnerus complains that Musick brings up the “tired ethical complaint” about Brad Wilcox, who, Regnerus claims, “held an honorific position with the Witherspoon Institute.” And he offers this: “In my interactions with him, he never acted with authority, only advice suggestive of his own opinion.” Regnerus no doubt thinks he is using a clever legalism, as if Wilcox did not have literal signing authority for disbursing Witherspoon funds and therefore did not offer anything beyond “his own opinion.” But it’s clearly wrong.

Just to be clear how ridiculous this hair-splitting is, here is the email exchange that they no doubt both now regret (which Musick quoted as well). Regnerus writing, Wilcox answering in bold caps:

Tell me if any of these aren’t correct.

  1. We want to run this project through UT’s PRC. I’m presuming 10% overhead is acceptable to Witherspoon. YES
  2. We want a broad coalition comprising several scholars from across the spectrum of opinions… [goes on to discuss individuals]. YES
  3. We want to “repeat” in some ways the DC consultation with the group outlined in #2. … [details of how the planning document will be crafted] YES
  4. This document would in turn be used to approach several research organizations for the purpose of acquiring bids for the data collection project. YES

Did I understand that correctly?

And per your instruction, I should think of this as a planning grant, with somewhere on par of $30-$40k if needed. YES

Regnerus may now say, indignantly, “Professor Wilcox did not — and does not — speak on behalf of Mr. Tellez,” the Witherspoon president, but he certainly understood Wilcox as speaking for Witherspoon in that exchange. Otherwise, why wouldn’t he ask Tellez these organizational questions directly?

In a 2012 blog post on the now-defunct (and deleted, but preserved) Family Scholars blog hosted by the Institute for American Values, Wilcox wrote that he never served as an “officer” of Witherspoon. He was, on the Witherspoon website as preserved by the Internet Archive, listed as “director” of the institute’s Program on Marriage, Family, and Democracy from late 2008 to mid-2010. That program still exists on the website, incidentally, but it no longer mentions any director — Wilcox is the only director ever listed in the Internet Archive pages. As of last month, Wilcox’s CV doesn’t mention this position.* (I don’t understand the purpose of an honorific position if you’re not proud of it.)

And then there’s the Wilcox email where he refers to the study as “our dataset.”

Campaign, or coincidence?

Regnerus tells a story of coincidences. For example, Tellez may have (in his words) wanted the research done “before major decisions of the Supreme Court,” but that had nothing to do with Regnerus’s goals, which were to finish his report by January 2012 “for no other reason than I wished to finish it and move on to other projects.” At the time the research was funded, Regnerus says, he did not share Tellez’s political goals. Coincidentally, they both happened to want the project completed in the same time frame. And then, in another coincidence, Regnerus later came around to joining in Tellez’s opinion that same-sex marriage must be stopped. Is this a more plausible story than the simpler one in which Tellez, Wilcox, and Regnerus were all on the same page all along? The evidence for the conspiracy is pretty robust, considering Regnerus, Wilcox, David Blankenhorn, Maggie Gallagher and other anti-gay marriage activists planned the research at a meeting in Washington hosted and paid for by the Heritage Foundation. On the other hand, the evidence for the coincidence is Regnerus’s solemn word. This conspiracy is a theory kind of like evolution is a theory — it’s the only plausible explanation for a known series of events.

In the coincidence story, the survey was delayed, so Regnerus would have to keep working on it beyond January 2012. However, he nevertheless just “decided to give a journal submission a shot” in November 2011 anyway. Not that he was aiming for Tellez’s Supreme Court deadline. Just because. So he “contacted [Social Science Research editor] Professor James Wright to ask if he’d consider reviewing a manuscript on a study like this one,” before the data were even collected. You social scientists out there — have you ever asked a peer-reviewed journal editor if they would consider publishing something “like” what you were working on before you even had the data collected?

In fact, this “give a journal submission a shot” idea came from Wilcox, who in the email mentioned above suggested sending it to SSR because Wright was (the late) “Steve Nock’s good friend” and “also likes Paul Amato,” whom they had secured as a consultant. In the end, Wright would use both Wilcox and Amato as reviewers.

The coincidences Regnerus speaks of also include the meeting he had in August 2011 in Denver with Wilcox, Glenn Stanton from Focus on the Family, and Scott Stanley, after which (he wrote to Tellez at the time), “we feel like we have a decent plan moving forward” for “public/media relations for the NFSS project.” In his response to Musick, Regnerus now writes, “Denver was a convenient stop on the way back to Austin from the American Sociological Association annual meeting in Las Vegas, and I took the opportunity to meet socially with a few peers.” That includes Stanton, who “lived about an hour’s drive of where we met.” (I’m not sure why you need a “convenient stop” from Las Vegas to Austin, which is a short nonstop flight.)

See, no campaign. Sure, he also arranged for the study to be shared with “some conservative outlets” before publication, attended a “short function hosted by the Heritage Foundation” about the study just before it was published, and “another such function … at the offices of the Institute for American Values.” But he doesn’t even know, “frankly,” “how such groups came to be apprised of the impending study release.”

Then, after describing, literally, how he colluded with politically-motivated organizations prior to the publication of the study, Regnerus concludes, “This hardly merits the accusation that I ‘colluded with politically-motivated organizations prior to the publication of the study.'”

And oh, sure, on closer inspection (he actually says, “I see now…”) he did use the “media training” document that Heritage provided, which he has falsely testified he “largely ignored,” in his own promotion of the study. In his post on Patheos.com (here), he wrote:

Q: So are gay parents worse than traditional parents?

A: The study is not about parenting per se. There are no doubt excellent gay parents and terrible straight parents. The study is, among other things, about outcome differences between young adults raised in households in which a parent had a same-sex relationship and those raised by their own parents in intact families.

The Heritage talking points (from Musick’s report) included this:

Whether gay parents are worse than traditional parents. The study is not about parenting. There are no doubt excellent gay parents and terrible traditional parents. The study is about outcome difference between young adults raised in a same-sex household and those raised by their own parent in intact families.

Well, he says now, “I very likely did use a few lines” from the document. “So be it.” Nevertheless, “to suggest I received extensive media training — and leaned on it in a comprehensive campaign — is out of touch with my lived reality.” (Who’s a phenomenologist now?) It’s tempting, after reading his response, to assume that whatever Regnerus specifically denies is exactly true.

On the labeling issue

I noticed something new in reviewing material for this post. If you’ve made it this far, bear with me here on this detail.

The infamous Regnerus article was published with the title, “How different are the adult children of parents who have same-sex relationships?” In the article he referred to the adult children who reported that their mother ever had a same-sex romantic relationship as “LM” for “lesbian mother,” along with “GF” for “gay father.” In the rebuttal to his critics, published later in 2012, he acknowledged these were the wrong terms:

Concern about the use of the acronyms LM (lesbian mother) and GF (gay father) in the original study is arguably the most reasonable criticism. In hindsight, I wish I would have labeled LMs and GFs as MLRs and FGRs, that is, respondents who report a maternal (or mother’s) lesbian relationship, and respondents who report a paternal (or father’s) gay relationship. While in the original study’s description of the LM and GF categories I carefully and accurately detailed what respondents fit the LM and GF categories, I recognize that the acronyms LM and GF are prone to conflate sexual orientation, which the NFSS did not measure, with same-sex relationship behavior, which it did measure.

But he insisted this was just a question of confusing terms, not an attempt to actually label these parents according to their sexual orientation. He added:

The original study, indeed the entire data collection effort, was always focused on the respondents’ awareness of parental same-sex relationship behavior rather than their own assessment of parental sexual orientation, which may have differed from how their parent would describe it.

This came up in the Musick report, and Regnerus responded:

As noted in Professor Musick’s assessment, the problem of locating an optimal acronym here is something to which I have already confessed … It remains a significant regret. And yet the distinction between a woman’s same-sex relationship (to use Professor Musick’s acronym) and a woman’s “lesbian” relationship (as I assert by using the MLR acronym) is no doubt a narrow one. As ought to be obvious, I use the term “lesbian” as an adjective here, not a noun [emphasis added]. It describes a relationship, not a self-identity.

But did Regnerus really intend to use “lesbian” as an adjective? No, he did not. I know this because, in the email exchange between Social Science Research editor James Wright and Brad Wilcox (in which Wilcox lied by omission and which Wright later misrepresented), we can see the original title of the article Regnerus submitted, which is not the title subsequently published. The original title was, “How different are the adult children of lesbian mothers and gay fathers? Findings from the New Family Structures Study.” Clearly, Regnerus’s original intention was to describe the parents of the people he surveyed as “lesbian mothers” and “gay fathers” — using nouns referring to the people, not adjectives referring to their romantic relationships. It was not a matter of confusion; it was an attempt to create a false impression of the study’s implications.

Promoting Regnerus

In our department, promotion to full professor requires “an exemplary record in research, teaching, and service” which has made the candidate “widely regarded as a scholar.” However, these terms are not defined, and no quantities of research or citations are included. These things are left vague, and much rides on the interpretation of the experts consulted, who are considered the best judges of academic merit. So, what if a professor brings scandal and disrepute to himself and the institution? What if he expresses views that are morally reprehensible? What if he lies about his work, including in his work?

I don’t envy my colleagues in the excellent department of sociology at the University of Texas-Austin (about this case — I do envy them in other ways). Their directory lists 39 professors, only one of whom is disgraceful in those ways. It’s not a simple matter, denying a tenured professor a promotion (even though this is only post-tenure review, it’s the promotion issue that looms). It’s a personnel decision governed by laws, and it’s wrapped up in the tenure system, which is important for academic freedom.

In the case of Regnerus I’ve already expressed my opinion.

Honest social scientists do not combine these activities: (1) secret meetings with partisan activist groups to raise money and set political agendas for their research; and, (2) omitting mention of those associations later. If Regnerus, Wilcox, Allen, and Price had included acknowledgements in their publications that described these associations, then they would be just like anyone else who does research on subjects on which they have expressed opinions publicly: potentially legitimate but subject to closer scrutiny (which should include editors not including people from the same group as reviewers). Failure to disclose this in the publication process is dishonesty.

Based on that — more than based on the morally reprehensible views — I would vote against Regnerus’s promotion. But I am not privy to the process at UT, to their reviews and other materials, and I haven’t been asked for my opinion or advice.

* Is it unethical to take academic activities off your CV if they make you look bad? It emerged in one of the gay marriage trials that Brigham Young economist Joseph Price, testifying as an expert against gay marriage, took a grant from the Witherspoon Institute off his CV. In this case I don’t see that Wilcox ever had Witherspoon on his CV, but he was listed as a director on their website.

3 Comments

Filed under In the news

Is the Moynihan-backlash chilling effect a myth?

Recently we have seen the revival of the idea that some faction of the political left (liberal, progressive, or radical) is silencing debate through “political correctness,” as retold, for example, by Jonathan Chait. Similarly, there is a push by those reviving the 1965 Moynihan Report (neo-Moynihanists?) to advance a narrative in which venomous race police attacked Moynihan with such force that liberal social scientists were scared off the topic of “cultural explanations” (especially about marriage) for Black poverty and inequality.

This Moynihan chilling effect narrative got a recent boost from Nicholas Kristof in the New York Times. As Kristof tells it, “The taboo on careful research on family structure and poverty was broken by William Julius Wilson, an eminent black sociologist.” Kristof lifted that description from this recent article by McLanahan and Jencks (which he cites elsewhere in the column). They wrote:

For the next two decades [after 1965] few scholars chose to investigate the effects of father absence, lest they too be demonized if their findings supported Moynihan’s argument. Fortunately, America’s best-known black sociologist, William Julius Wilson, broke this taboo in 1987, providing a candid assessment of the black family and its problems in The Truly Disadvantaged.

This narrative, which seems to grow more simplistic and linear with each telling, is just not true. In fact, it’s pretty bizarre.

Herbert Gans in 2011 attributed the story to William Julius Wilson’s first chapter of The Truly Disadvantaged (1987), in which he said that, after the criticism of Moynihan, “liberal scholars shied away from researching behavior construed as unflattering or stigmatizing.” Wilson told a version of the story in 2009, in which the ideology expressed by “militant black spokespersons” spread to “black academics and intellectuals,” creating an atmosphere of “racial chauvinism,” in which “poor African Americans were described as resilient and were seen as imaginatively adapting to an oppressive society” when they engaged in “self destructive” aspects of “ghetto life.” (These aren’t scare quotes, I’m just being careful to use Wilson’s words.) In this vein of research,

…this approach sidesteps the issue altogether by denying that social dislocations in the inner city represent any special problem. Researchers who emphasized these dislocations were denounced, even those who rejected the assumption of individual responsibility for poverty and welfare, and focused instead on the structure or roots of these problems.

Accordingly, in the early 1970s, unlike in the middle 1960s, there was little motivation to develop a research agenda that pursued the structural and cultural roots of ghetto social dislocations. The vitriolic attacks and acrimonious debate that characterized this controversy proved to be too intimidating to scholars, particularly to liberal scholars. Indeed, in the aftermath of this controversy and in an effort to protect their work from the charge of racism, or of blaming the victim, many liberal social scientists tended to avoid describing any behavior that could be construed as unflattering or stigmatizing to people of color. Accordingly, until the mid-1980s and well after this controversy had subsided, social problems in the inner-city ghetto did not attract serious research attention.

Wilson includes this very strong causal statement: “the controversy over the Moynihan Report resulted in a persistent taboo on cultural explanations to help explain social problems in the poor black community.” I would love to see any direct evidence — eyewitness accounts or personal testimony — of this chilling effect on researchers.

If you read it generously, Wilson is mostly saying that there was a fall-off in the kind of argument that he preferred, one that “pursued the structural and cultural roots of ghetto social dislocations,” and showed how ghetto lifestyles were harming Black fortunes. It’s one thing to say a certain perspective fell out of favor, but that’s a far cry from claiming that “few scholars chose to investigate … the black family and its problems,” the McLanahan and Jencks assertion that Kristof repeats.

What is the evidence? To make that causal story stick, you’d have to rule out other explanations for a shift in the orientation of research (if there was one). If attitudes like Moynihan’s fell out of favor after 1965, can you think of anything else happening at that time besides vicious academic critiques of Moynihan that might have provoked a new, less victim-blamey perspective? Oh, right: history was actually happening then, too.


As for the idea people simply stopped researching Black poverty, “culture,” and family structure, that’s just wrong. Here, mostly drawn from Frank Furstenberg’s review, “The Making of the Black Family: Race and Class in Qualitative Studies in the Twentieth Century,” are some of the works published during this time when researchers were supposedly avoiding the topic:

  • Billingsley A. 1968. Black Families in White America. Englewood Cliffs, NJ: Prentice-Hall
  • Williams T, Kornblum W. 1985. Growing up Poor. Lexington, MA: Lexington Books
  • Chilman CS. 1966. Growing Up Poor. Washington, DC: USGPO
  • Liebow E. 1967. Tally’s Corner. Boston: Little, Brown
  • Hannerz U. 1969. Soulside: Inquiries into Ghetto Culture and Community. New York: Columbia Univ. Press
  • Stack C. 1974. All Our Kin. Chicago: Aldine
  • Schultz DA. 1969. Coming up Black: Patterns of Ghetto Socialization. Englewood Cliffs, NJ: Prentice Hall
  • Staples R. 1978. The Black Family: Essays and Studies. Belmont, CA: Wadsworth. 2nd ed.
  • Ladner JA. 1971. Tomorrow’s Tomorrow: The Black Woman. Garden City, NY: Doubleday
  • Furstenberg FF. 1976. Unplanned Parenthood: The Social Consequences of Teenage Childbearing. New York: Free Press

In Furstenberg’s account, many of the themes in these studies were reminiscent of research done earlier in the century, when social science research on poor Black families first emerged:

…the pervasive sense of fatalism among the poor, a lack of future orientation among youth, early parenthood as a response to blocked opportunity, sexual exploitation, tensions between men and women, the unswerving commitment to children regardless of their birth status among mothers, and the tenuous commitment among nonresidential fathers.

In addition, as Alice O’Connor notes in her intellectual history, Poverty Knowledge: Social Science, Social Policy, and the Poor in Twentieth-Century U.S. History, there was a shift around this time to more quantitative, technocratic research, using individual microdata. In particular, the highly influential Panel Study of Income Dynamics began producing studies at the start of the 1970s, and many scholars published research comparing social and economic outcomes across race, class, and family type using this data source. Here is a small sample of journal articles from 1971 to 1985, when the Moynihan taboo supposedly reigned:

  • Datcher, Linda. 1982. “Effects of Community and Family Background on Achievement.” Review of Economics and Statistics 64 (1): 32–41.
  • Greenberg, David, and Douglas Wolf. 1982. “The Economic Consequences of Experiencing Parental Marital Disruptions.” Children and Youth Services Review, 4 (1–2): 141–62.
  • Hampton, Robert L. 1979. “Husband’s Characteristics and Marital Disruption in Black Families.” Sociological Quarterly 20 (2): 255–66.
  • Hofferth, Sandra L. 1984. “Kin Networks, Race, and Family Structure.” Journal of Marriage and Family 46 (4): 791–806.
  • Hoffman, Saul. 1977. “Marital Instability and the Economic Status of Women.” Demography 14 (1): 67–76.
  • McLanahan, Sara. 1985. “Family Structure and the Reproduction of Poverty.” American Journal of Sociology 90 (4): 873–901.
  • Moffitt, Robert. 1983. “An Economic Model of Welfare Stigma.” American Economic Review 73 (5): 1023–35.
  • Smith, Michael J. 1980. “The Social Consequences of Single Parenthood: A Longitudinal Perspective.” Family Relations 29 (1): 75–81.

At least three of these scholars survived the experience of researching this subject and went on to become presidents of the Population Association of America.

Finally, an additional line of research pursued the question of family structure impacts on education or economic attainment, specifically aimed at assessing the impact of family structure on racial inequality. These studies were highly influential and widely cited, including:

  • Duncan, Beverly, and Otis Dudley Duncan. 1969. “Family Stability and Occupational Success.” Social Problems 16 (3): 273–85.
  • Featherman, David L., and Robert M. Hauser. 1976. “Changes in the Socioeconomic Stratification of the Races, 1962-73.” American Journal of Sociology 82 (3): 621–51.
  • Hauser, Robert M., and David L. Featherman. 1976. “Equality of Schooling: Trends and Prospects.” Sociology of Education 49 (2): 99–120.

I don’t know how you get from this rich literature to the notion that a liberal taboo was blocking progress — unless you define research progress according to the nature of the conclusions drawn, rather than the knowledge gained.

The resilience of this narrative reflects the success of conservative critics in building an image of leftist academics as ideological bullies who suppress any research that doesn’t toe their line. Such critics have a right to their own perspectives, but not to their own facts.

[Thanks to Shawn Fremstad for pointing me to some of these readings.]

Exceptions, suggested reading, and counterarguments welcome in the comments.

22 Comments

Filed under Research reports

Bogus versus extremely low-quality, Sullins edition


Photo by CTBTO from Flickr Creative Commons (modified)

Calling a study “peer-reviewed” gives it at least some legitimacy. And if a finding is confirmed by “many peer-reviewed studies,” that’s even better. So the proliferation of bogus journals publishing hundreds of thousands of “peer-reviewed” articles of extremely low quality is bad news both for the progress of science and for public discourse that relies on academic research.

Two weeks ago I briefly reviewed some articles published by D. Paul Sullins, the anti-gay professor at Catholic University, on the hazards of being raised by gay and lesbian parents. I called the journals, published by Science Domain International (SDI), “bogus,” but said you could make an argument for extremely low quality instead.

After that Sullins sent me an email with some boilerplate from the publisher in defense of the journals, and he accused me of having a conflict of interest because his conclusions contradict one of my published articles. After correctly pointing out that a sting operation by Science failed to entrap an SDI journal with a bogus paper about cancer research, he said:

SDI is a new and emerging publisher. … While I would not say SDI is yet in the top tier, and I don’t like their journal names much either [which mimic real journal titles], for the reasons listed above I submit that this publisher is far from ‘bogus.’

How far from bogus?

Since that post, the reviews on the third of Sullins’ papers have been published by Science Domain and its journal, the (non-) British Journal of Education, Society & Behavioural Science. So we have some more information on which to judge.

The paper, “Emotional Problems among Children with Same-sex Parents: Difference by Definition,” was reviewed by three anonymous reviewers (from the USA, Brazil, Nigeria) and one identified as Paulo Verlaine Borges e Azevêdo, from Brazil. I summarize them here.

Anonymous USA

This reviewer only suggested minor revisions (nothing in the “compulsory revision” section). These were the suggestions: Avoid the first person, clarify the race of study participants, discuss the results in more detail, don’t use the word “trivial,” add citations to several statements, grammar check.

Anonymous, Brazil

This review demanded compulsory revisions: Clarify the level of statistical significance used, explain acronyms, clarify use of “biological parents” when discussing same-sex parents. And some minor revisions: one typo, one font-size change, standardize number of decimal places.

Anonymous, Nigeria

This reviewer included compulsory revisions: mention the instrument used in the abstract, clarify the measures used in previous studies on children’s well-being, test all four hypotheses proposed (not just three), clarify how the instrument was used, shorten the discussion. Minor revisions: check for typos.

Paulo Verlaine Borges e Azevêdo, Brazil

This reviewer requested reorganizing the text, like this:

Would be better to redistribute the lengths of results (lessened), discussion (up) and conclusion (down) sections. In many moments, in the Result section the author deal with I believe would be better located in the Discussion (e. eg., between lines 345 and 355). I suggest that the subsections of Results would be reviewed by author and parts that discuss the results be transferred to the Discussion section … Strengths and Limitations would be better located in the discussion section too.

A few additional minor text modifications were included in the marked up manuscript.

Round two

Upon revision, Sullins was subjected to a punishing second round of reviews.

This included an interesting if ultimately fruitless attempt by Anonymous Brazil to object to this somewhat nutty sentence by Sullins: “biological parentage uniquely and powerfully distinguishes child outcomes between children with opposite-sex parents and those with same-sex parents.” What he meant was, when he controlled for the biological relationship between children and their parents — since hetero parents are more likely to have any biological parentage (and they’re the only ones with two bio parents) — it statistically reduced the gap in children’s mental health between married hetero versus same-sex parents. Although the exchange was meaningless in the decision whether to publish, and Sullins didn’t change it, and the reviewer dropped the objection, and the editors just said “publish it,” you would have to say this was a moment of actual review.

OK then

That’s it. None of this touched on the obvious fatal flaws in the study — that Sullins combines children in all same-sex families into one category while breaking those currently with different-sex parents into different groups (step-parents, cohabitors, single parents, etc.) — and that he has no data on how long the children currently with same-sex couples have lived with them, or how they came to live with them. So it leaves us right where we started on the question of same-sex parenting “effects” on children.

Of course, lots of individual reviews are screwed up. So, is this journal bogus or merely extremely low quality? Do we have a way of identifying these so-bad-they’re-basically-bogus journals that is meaningful to the various audiences they are reaching?

This matters because journalists, judges, researchers, and the concerned public would like some way to evaluate the veracity of scientific claims that bear on current social controversies, such as marriage equality and the rights of gay and lesbian parents.

2 Comments

Filed under Research reports

Children in same-sex parent families, dead horse edition

Not that child well-being in different kinds of families isn’t a legitimate research topic, but this idea of proving same-sex parents are bad to whip up the right-wing religious base and influence court cases is really a shark jumping over a dead horse.

Without getting into all the possible detail and angles, here are some comments on the new research published by D. Paul Sullins, which claims to show negative outcomes for children with same-sex parents. Fortunately, I believe the legal efficacy of this kind of well-being witch-hunt research evaporated with Anthony Kennedy’s Windsor decision. Nevertheless, the gay-parents-are-bad-for-kids research community is still attempting to cause harm, and they still have big backers, so it’s important to respond to their work.

Research integrity

Below I will comment a little on the merits of the new studies, but first a look at the publication process and venues. As in the case of the Regnerus affair, in which Brad Wilcox, Mark Regnerus, and their backers conspired to manufacture mainstream legitimacy, Sullins is attempting to create the image of legitimate research, which can then be cited by advocates to the public and in court cases.

Although he has in the past published in legitimate journals (CV here), Sullins’ work now appears to have veered into the netherworld of scam open access journals (which, of course, does not include all open-access journals). Maybe this is just the decline of his career, but it seems they think a new round of desperate “peer-reviewed” publishing will somehow help with the impending legal door-slam against marriage inequality, so they’re rushing into these journals.

Sullins has three new articles about the mental health of children with same-sex parents. The first, I think, is “Bias in Recruited Sample Research on Children with Same-Sex Parents Using the Strength and Difficulties Questionnaire (SDQ).” This was published in the Journal of Scientific Research and Reports. The point of it is that same-sex parents who are asked to report about their children’s well-being exaggerate how well they’re doing.

The second paper is “Child Attention-Deficit Hyperactivity Disorder (ADHD) in Same-Sex Parent Families in the United States: Prevalence and Comorbidities.” It was published on January 21 in the British Journal of Medicine & Medical Research. It claims that children living with same-sex parents, surveyed in the National Health Interview Survey, are more likely to have ADHD than “natural” children of married couples.

The third — the one I call third because it doesn’t seem to have actually been published yet — is, “Emotional Problems among Children with Same-sex Parents: Difference by Definition,” in the British Journal of Education, Society & Behavioural Science. Its point is the same as the second, with slightly different variables. (The author’s preprint is here.) This is the one Mark Regnerus referred to in a post calling attention to Sullins’ work. (The legitimacy strategy is apparent in Regnerus naming the fancy-sounding journal in the opening sentence of his post.)

What makes these scam journals? The first clue is that two of them have “British” in the name, despite not being British in any way (not that there’s anything wrong with that). They are all published by Science Domain, which is listed on “Beall’s List” of “potential, possible, or probable predatory scholarly open-access publishers.” They are not published by academic societies, they are not indexed by major academic journal databases, they publish thousands of papers with little or no peer review (at the expense of the authors), and they recruit authors, editors, and reviewers through worldwide spam campaigns that sweep up shady pseudo-scholars.

For the first two, which have been published, Science Domain documents the review process. The first paper, “Bias in Recruited Sample…,” first had to overcome Reviewer 1, Friday Okwaraji, a medical lecturer at the University of Nigeria, who recommended correcting a single typo. Reviewer 2, identified as “anonymous/Brazil,” apparently read the paper, suggesting several style changes and moving some sentences, and expressing misgivings about the whole point. After revisions, the editor considered the two reviews carefully, and then wrote to the managing editor, “Please accept the paper, it is okay.” It was submitted November 18, 2014 and accepted December 17, 2014.

The second paper, “Child ADHD…,” also shows its peer review process. Reviewer 1 was Renata Marques de Oliveira at the University of São Paulo, Brazil. In 2012 she was listed as a masters student in psychiatric nursing, and is now an RN. This is the entirety of her review of Sullins’ paper:

[screenshot of Oliveira’s review]

OK, then.

The second review is by Rejani Thudalikunnil Gopalan, described as a faculty member at Universiti Malaysia Sabah, or maybe Gujarat Forensic Sciences University, Gujarat, India. She was recently spotted drumming up submissions for a special issue of the scammy American Journal of Applied Psychology (“What? We didn’t say it was the same journal as the Journal of Applied Psychology, published by the American Psychological Association!”). The journal AJAP is published by the Science Publishing Group (see Beall’s List), but I couldn’t investigate further because their website happens to be down.

Unlike Oliveira, Gopalan seems to have read the paper, and offered a few superficial questions and suggestions – not quite the very worst review from a legitimate journal that I have ever read. After a cursory reply, the editor responded (in full): “The authors have addressed all reviewers’ concerns in a satisfactory way. This is an outstanding paper worthy of publication in BJMMR.” It was accepted two weeks after submission.

I don’t want to imply that these three journals are illegitimate just because they are run for profit by low-status academics from developing countries. But looking at the evidence so far I think it’s fair to call these journals bogus. However, I wouldn’t argue too much if you wanted instead to say they are merely of the very lowest quality.

Why does a guy at a real university, with tenure, publish three articles in two months at a paper mill like Science Domain? I fear our dear Dr. Sullins has fallen out of love with the scientific establishment. Anyways.

Content

You might say we should just ignore these papers because of their provenance, but they’re out there. Plus, I want people to take my totally unreviewed blog posts seriously, so I should take these at least a little seriously. Fortunately, I can write them off based on simple, complete objections.

Combining the 1997-2013 National Health Interview Surveys, about 200,000 children, Sullins gets 512 children who are living with a same-sex couple (about 16% married, he says). In both the second and third papers, he compares these children to those living with married, biological or adoptive parents who are of different sexes. The basic problem here is obvious, and was apparent in the infamous Regnerus paper as well: same-sex couples, regardless of their history — married, divorced, never-married, just-married, married before the kid was born, just got together yesterday when the kid was 15, and so on — are all combined in one undifferentiated category. This just can’t show you the “effect” of same-sex parenting. (When Regnerus says this research supports the “basic narrative … that children who grow up with a married mother and father fare best at face value,” he’s slipping in “grow up with,” though he knows the study doesn’t have the information necessary to make that claim.)

However, if Sullins did the data manipulations right — which I cannot judge because I don’t know the data, little detail is provided, and the reviewers have no expertise with it either — there is a simple descriptive finding here that is interesting, if unsurprising: children living with same-sex parents over the period 1997-2013, the vast majority of whom are not married, and presumably did not conceive or adopt the child in their relationship, have more emotional problems and ADHD than children living with their married, biological parents. We have to be smart enough to consider that — if it’s true — without falling into accepting the claim that such problems are the result of same-sex parenting, because that has not been established. Of course, this supports an argument for marriage equality, but it’s also just an empirical pattern worth understanding. If Sullins, Regnerus, and their ilk weren’t so hellbent on opposing homosexuality they could actually provide useful information that might be part of a knowledge base we use to improve children’s lives.

Sullins’ judgment is no doubt clouded by his overarching religious objection to homosexuality, which, he believes, like abortion and contraception,

contravene the natural operation of the body in order to conform human sexuality to the ideals of modernity… By severing the link between sex and children, both [abortion and homosexuality] increase privatization, diminish the social intentionality and form of the sexual union, and undermine the unitive good and the transcendent goal of marriage.

So for him it’s already settled — long before he extruded these papers (and Regnerus has expressed similar views). Apparently they think they just need a few bogus publications to bring the public along.

32 Comments

Filed under Research reports