Tag Archives: sociology

Sociology: “I love you.” Economics: “I know.”

Sour grapes, by Sy Clark. https://flic.kr/p/yFT3a


A sociologist who knows how to use Python or something could do this right, but here's a pilot study (N=4) on the oft-repeated claim that economists don't cite sociology while sociologists cite economics.

I previously wrote about the many sociologists citing economist Gary Becker (thousands), compared with, for example, the 0 economists citing the most prominent article on the gender division of housework by a sociologist (Julie Brines). Here’s a little more.

It’s hard to frame the general question in terms of numerators and denominators — which articles should cite which, and what is the universe? To simplify it I took four highly-cited papers that all address the gender gap in earnings: one economics and one sociology paper from the early 1990s, and one of each from the early 2000s. These are all among the most-cited papers with “gender” and “earnings OR wages” in the title from journals listed as sociology or economics by Web of Science.

From the early 1990s:

  • O’Neill, J., and S. Polachek. 1993. “Why the Gender-gap in Wages Narrowed in the 1980s.” Journal of Labor Economics 11 (1): 205–28. doi:10.1086/298323. Total cites: 168.
  • Petersen, T., and L.A. Morgan. 1995. “Separate and Unequal: Occupation Establishment Sex Segregation and the Gender Wage Gap.” American Journal of Sociology 101 (2): 329–65. doi:10.1086/230727. Total cites: 196.

From the early 2000s:

  • O’Neill, J. 2003. “The Gender Gap in Wages, circa 2000.” American Economic Review 93 (2): 309–14. doi:10.1257/000282803321947254. Total cites: 52.
  • Tomaskovic-Devey, D., and S. Skaggs. 2002. “Sex Segregation, Labor Process Organization, and Gender Earnings Inequality.” American Journal of Sociology 108 (1): 102–28. Total cites: 81.

A smart way to do it would be to look at the degrees or appointments of the citing authors, but that’s a lot more work than just looking at the journal titles. So I just counted journals as sociology or economics according to my own knowledge or the titles.* I excluded interdisciplinary journals unless I know they are strongly associated with sociology, and I excluded management and labor relations journals. In both of these types of cases you could look at the people writing the articles for more fidelity. In the meantime, you may choose to take my word for it that excluding these journals didn’t change the basic outcome much. For example, although there are some economists writing in the excluded management and labor relations journals (like Industrial Labor Relations), there are a lot of sociologists writing in the interdisciplinary journals (like Demography and Social Science Quarterly), and also in the ILR journals.
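
For anyone who wants to redo or extend this exercise properly, here is a minimal sketch of the tally in Python, assuming a CSV export of the citing-article records; the file name and journal lists are hypothetical stand-ins for the hand-coding described above:

    # Tally citing journals by discipline. The CSV and journal lists are
    # placeholders for my manual coding, not an actual Web of Science export.
    import csv
    from collections import Counter

    SOC = {"american sociological review", "american journal of sociology",
           "social forces", "annual review of sociology"}
    ECON = {"journal of labor economics", "american economic review",
            "journal of economic perspectives"}

    def classify(journal):
        """Code a citing journal as sociology, economics, or other/excluded."""
        j = journal.strip().lower()
        if j in SOC:
            return "sociology"
        if j in ECON:
            return "economics"
        return "other"  # interdisciplinary, management, labor relations, etc.

    with open("citing_articles.csv", newline="") as f:
        counts = Counter(classify(row["journal"]) for row in csv.DictReader(f))

    total = sum(counts.values())  # denominator is all citing articles
    for field in ("sociology", "economics", "other"):
        print(f"{field}: {counts[field]} / {total} = {counts[field] / total:.0%}")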


Citations to the economics articles from sociology journals:

  • O’Neill and Polachek (1993): 37 / 168 = 22%
  • O’Neill (2003): 4 / 52 = 8%

Citations to the sociology articles from economics journals:

  • Petersen and Morgan (1995): 6 / 196 = 3%
  • Tomaskovic-Devey and Skaggs (2002): 0 / 81 = 0%

So, there are 41 sociology papers citing the economics papers, and 6 economics papers citing the sociology papers.

Worth noting also that the sociology journals citing these economics papers are the most prominent and visible in the discipline: American Sociological Review, American Journal of Sociology, Annual Review of Sociology, Social Forces, Sociology of Education, and others. On the other hand, there are no citations to the sociology articles in top economics journals, with the exception of an article in Journal of Economic Perspectives that cited Petersen and Morgan — but it was written by sociologists Barbara Reskin and Denise Bielby. Another, in Feminist Economics, was written by sociologist Harriet Presser. (I included these in the count of economics journals citing the sociology papers.)

These four articles are core work in the study of labor market gender inequality, they all use similar data, and they are all highly cited. Some of the sociology cites of these economics articles are critical, surely, but there's (almost) no such thing as bad publicity in this business. Also, the pattern does not reflect a simple theoretical difference, with sociologists focused more on occupational segregation (although that is part of the story): the economics articles also use occupational segregation as one of the explanatory factors in the gender gap story (though they interpret it differently).




* The Web of Science categories are much too imprecise: for example, Work & Occupations, almost entirely a sociology journal, is classified as both sociology and economics.



Shine a light on journal self-citation inflation

Photo by pnc.


Note: Welcome Inside Higher Ed readers. I’d be happy to hear accounts from disciplines other than sociology. Email me at pnc@umd.edu.

In my post on peer review the other day, I mentioned that a journal editor made this request — before she agreed to send the paper out for review:

“If possible, either in this section or later in the Introduction, note how your work builds on other studies published in our journal.”

A large survey on “coercive citation” practices, published in Science in 2012 (paywalled; bootlegged PDF), found that 20% of researchers had, in the previous five years, “received a request from an editor to add more citations from the editor’s journal for reasons that were not based on content.” The survey, which was sent to email lists for academic associations, including the American Sociological Association, found sociologists and psychologists were less likely to report having experienced this practice than were economists and those in business-related disciplines.

The journal I named, Sex Roles, is high on the list of those most frequently mentioned — cited by four respondents, more than any journal outside of business, marketing, or economics. But there are a lot of other journals you know on the list.

Although I made the assumption that the Sex Roles editor was trying to increase the impact factor — the citation rate — for her journal, one could defend this practice as being motivated by other interests (I’ll leave that to you). It also seems likely that some requests are open to interpretation — for example, mixing in citations from different journals, or offering specific reasons for including particular citations.

Tell me about it

To look into this a little more, I’m asking you to send me requests for journal self-citation that you have received. I’ll keep them confidential, but if I get enough to make it interesting, I will post: (1) journal name, (2) the type of request, (3) the date (month and year), and (4) the stage in the publication process. Feel free to include extenuating details or other information you would like to share, and let me know if you want it disclosed. I assume most of you are sociologists, but I’ll include items from any discipline.

To be included on the list, I’ll need to see copies of the letter or email you received. I will not disclose your identity or information about you, or the specific article under review. I won’t use quotes that might identify the author or article under review.

I will also send the list to the current editors of journals named and give them an opportunity to respond.

My contact information is here.

Maybe there’s not enough here to go on, but if there is, I think shining a light on it would be a good thing, and might deter the practice in the future.



Our broken peer review system, in one saga

When at last Odysseus returns.


Everybody’s got a story. This is the story of publishing a peer-reviewed journal article called, “The Widening Gender Gap in Opposition to Pornography, 1975–2012.” The paper has now been published, and is available here in preprint, or here if you’re on a campus that subscribes to Social Currents through Sage.

Lucia Lykke, a graduate student in our program, and I began this project in the fall of 2012. We came up with the idea together. I did the coding and she wrote the text. Over the course of two years we sent the paper to four journals – once to Gender & Society, four times to Sex Roles, once to Social Forces, and twice to Social Currents, which finally accepted it in July 2015 and published it online on September 21.*

This story illustrates some endemic problems with our system of scholarly communication, both generally and in the discipline of sociology specifically. I discuss the problems after the story.


The gist of our paper is this: Opposition to pornography has declined in the U.S. since 1975, but faster for men than for women. As a result, the gender gap in opposition – with women more likely to oppose pornography – has widened.

That’s the finding. Our interpretation – which is independent of the veracity of our finding – is that opposition has declined as porn became more ubiquitous, but that women have been slower to drop their opposition because at the same time mainstream porn has become more violent and degrading to women. We see all this reflecting two trends: pornographication (more things in popular culture becoming more pornographic) and post-feminism (less acceptance of speaking up against the sexist nature of popular media, including porn). We could be wrong in our interpretation, and there is no way to test it, but the empirical analysis is pretty straightforward and we should accept it as a description of the trend in attitudes toward pornography. And for doing that empirical work we beg permission to tell you our interpretation.

The analysis is possible because the General Social Survey has, since 1975, asked a large sample of U.S. adults this question about every two years:

Which of these statements comes closest to your feelings about pornography laws: 1. There should be laws against the distribution of pornography whatever the age. 2. There should be laws against the distribution of pornography to persons under 18. 3. There should be no laws forbidding the distribution of pornography.

We tracked the rate at which people selected the first choice versus the others. It’s not very complicated (although we tried it half a dozen other ways, of course). Also of course it’s not perfect – it’s not a great question for today’s social reality, but it’s the only thing like it asked over such a long period. This is what’s great and what’s limiting about the General Social Survey. So, let’s agree to collect better data, and also use this. There, was that so hard?
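
For readers who want the mechanics, here is a minimal sketch of that tabulation, assuming a GSS extract with the standard YEAR, SEX, and PORNLAW variables (the file name is a placeholder):

    # Share choosing "illegal to all" (PORNLAW == 1), by year and gender.
    # Assumes a GSS extract with YEAR, SEX (1=male, 2=female), and PORNLAW.
    import pandas as pd

    gss = pd.read_csv("gss_extract.csv").dropna(subset=["PORNLAW"])
    gss["oppose"] = (gss["PORNLAW"] == 1).astype(int)

    trend = gss.groupby(["YEAR", "SEX"])["oppose"].mean().unstack("SEX")
    trend.columns = ["men", "women"]
    print(trend.round(3))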

Here is supporting detail on our particular saga. (We have left the typos from reviewers intact, because it makes us look smarter than they are. And these are selective excerpts to make various points – there was a lot, lot more.)

Before and after

Just to be clear what the world gained by 13 reviews and two years of waiting, you can compare the abstract at the beginning to the one at the end. This was the original abstract:

In the last several decades pornography in the U.S. has become more mainstream, more accessible, and more phallocentric and degrading to women. Yet research has not addressed how opposition to pornography has changed over the past several decades. Here, we examine opposition to pornography and gender differences in anti-pornography attitudes, using the 1975-2012 General Social Survey. Our findings show that both men’s and women’s opposition to pornography have decreased significantly over the past 40 years, but men’s opposition has declined faster and women remain more opposed to pornography. This is consistent with both the growing normative nature of pornography consumption for men and its increasingly degrading content. We situate these trends within a cultural climate in which women are caught between postfeminism and pornographication – between cultural messages that signal the social acceptability of pornography and compel women’s acquiescence, on the one hand, and the increased presence of pornography many women consider offensive and harmful on the other.

And this was the abstract we ended up with:

In the last several decades pornography in the U.S. has become more mainstream, more accessible, more phallocentric and more degrading to women. Further, consumption of pornography remains a major difference in the sexual experiences of men and women. Yet research has not addressed how opposition to pornography has changed over this period, despite shifts in the accessibility and visibility of pornography as well as new cultural and legal issues presented by the advent of Internet pornography. We examine gender differences in opposition to pornography from 1975 to 2012, measured by support for legal censorship of pornography in the General Social Survey. Results show that both men’s and women’s opposition to pornography have decreased significantly over the past 40 years, suggesting a cultural shift toward “pornographication” affecting attitudes. However, women remain more opposed to pornography than men, and men’s opposition has declined faster, so the gender gap in opposition to pornography has widened, indicating further divergence of men’s and women’s sexual attitudes over time. This is consistent with the increasingly normative nature of pornography consumption for men, increases over time in men’s actual consumption of pornography, and its increasingly degrading depiction of women.

The regression model we started with in 2013 had logistic regression coefficients showing a decline of .012 per year in the log odds of women favoring laws against the distribution of pornography, versus .022 for men. (That is, the decline has been almost twice as fast for men.) After all we went through with the other variables, we ended up with .012 and .023.


August 6, 2013: Submitted to Gender & Society

September 23, 2013: Rejected, with four reviews

Reviewer A was concerned about framing, and about the dependent variable.

if one takes this more complex and nuanced definition of postfeminism into account, the theoretical frame of does not work well for the paper … I also thought that the authors could have gone further in discussing broader cultural changes in sexuality in the media, especially the increasing sexualization and pornification in advertising and the media. …

an analysis of a GSS question concerning laws regarding the restriction of pornography, seem limited. In particular, that GSS question does not seem to get at the historical changes that have occurred in pornography distribution and consumption given its widespread internet usage.

Reviewer B was all about framing:

[I] appreciate your analysis of anti-pornography research and the effects of post-feminism on attitudes towards pornography … [but] I think the literature review needs to spend at least some time outlining feminist pro-pornography arguments. …

doesn’t it make sense to incorporate a discussion of the history of pornography regulation since the 1970s in the U.S? [… and …] While you bring up race in the analysis of your data, the literature review is surprisingly devoid of anything having to do with pornographic representations of gender and race.

Reviewer C thought we should have included a content analysis of pornography over time – done a different study, that is – and framed it differently:

Pornography needs to be defined … Cost, images and rejection of feminist view would clearly support a content analysis on pornography … The provided discussion of pornographication seems to more support the use of images and actual study of pornography, more so than people’s attitudes toward it … more justification to the existing literature needs to be added … Some legal gender studies should be included here … Gender is not one sided and the author should consider adding some agency to [men’s] role in the study and discussion.

Reviewer D concluded that the data weren’t good enough to support our interpretation:

The author, however, does not empirically demonstrate that the found decline in opposition is the result of either postfeminism or pornographication. … The General Social Survey is convenient, easy to access, and quick to run. This, however, does not necessarily make for good empirical evidence. … If the author wanted to investigate postfeminism and pornagraphication and the relationship to pornography, a much more nuanced empirical study would have needed to have been designed.

In a world with limited space for publishing research – which is not our world – this would be a good reason to reject the article.

October 7, 2013 (approximate): Submitted to Sex Roles

October 9, 2013: Returned by the editor

The editor, Irene Frieze, returned the paper almost immediately, saying: “major revisions are needed before we can move ahead in the review process.”

Some of what she asked for reflects the competitive climate of contemporary academic journals. For example, she asked us to pad the journal’s citation count: “If possible, either in this section or later in the Introduction, note how your work builds on other studies published in our journal.”

And she tried to make the journal seem more international:

Explain why your study is important to readers from many countries with a sentence or two. … Note what country each empirical study you cite was done in and explain how any cited studies done in other countries are relevant in understanding your sample.

She also asked for what appear to be standard requirements for the journal:

Add demographic information about the sample and explain more about how they were recruited. Add a table showing the demographic characteristics of the women as compared to the men in the sample in different time periods. … Add correlations computed separately for women and men as well.

And, the dreaded memo requirement: “Assuming you do wish to submit a revision, I would need a revised manuscript and a detailed list outlining the changes you have made in response to these comments.”

November 9, 2013: Resubmitted to Sex Roles, first revision

February 17, 2014: Revise and resubmit, based on one review (“major revisions”)

The reviewer had trouble with our statistical presentation:

I see that on Table 2, the difference between the women’s and men’s regression effect for year shows both women’s and men’s significant (-.012 and -.022). This suggests that for both female and male respondents the year is significant, but it doesn’t show statistically that men’s decline in opposition is steeper than is women’s. Where is the statistic showing a significant difference in slope? [The table had a superscript b next to the men’s coefficient, with the note, “Gender difference significant at p<.05.” Although we didn’t provide the details, that test came from a separate, “fully-interacted” model in which every variable is allowed to have a separate effect by gender.]

This reviewer – who stuck with this complaint for three rounds – also had trouble with the smallness of the coefficients:

Although the coefficient is twice as large for year among men than among women, it’s a very small percentage. With such a large sample size, almost anything will be significant. I’d like to see an effect size statistic.

She might have been confused because the variable here is “year” – a continuous variable ranging from 0 in 1975 to 37 in 2012, so the coefficient reflects the size of the average one-year change, which makes it look “very small.”
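
For what it's worth, a one-line calculation shows why "very small" per-year coefficients are not small at all over a 37-year window (using the coefficients from the table under review):

    # A "very small" per-year log-odds coefficient compounds over 37 years.
    import math

    for group, b in (("women", -0.012), ("men", -0.022)):
        ratio = math.exp(b * 37)  # 2012 odds relative to 1975 odds
        print(f"{group}: odds ratio {ratio:.2f}, a {1 - ratio:.0%} decline")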

A common problem for authors responding to reviewers is the simultaneous demands for less and more. Sometimes that’s good – a healthy revision process. Here is a funny example of that: “There seems to be a much longer introduction than is needed for the findings, especially since what would be interesting to me is omitted.”

However, she grasped the concise nature of the findings, which she somehow took as a weakness:

I would like to see how each of these control variables interacts with the changes over years. I believe that analysis is possible using time series analyses. The reader is left with only a few main conclusions: both men and women indicate less opposition over time to pornography, and that men’s opposition declines more than female’s, and men show less opposition to pornography control overall.

Exactly. Oh well.

May 17, 2014: Resubmitted to Sex Roles, second revision

July 8, 2014: Revise and resubmit, with two reviews

The editor now told us: “We were able to find a second reviewer, this time. We won’t continue to add new reviewers for additional drafts.” (This promise, sadly, did not hold.)

The dependent variable – that three-response question about laws regulating pornography – caused continuing consternation. The editor wrote:

none of us feels that the combining of the three categories of responses for the pornography acceptance variable is appropriate. You either need to omit one of the 3 categories from the analysis, or do something like a discriminant analysis to look at differences in those responding to each of the three categories.

And then this bad signal that the editor and reviewers did not understand the basic structure of the analysis:

Another issue that all of us agree on is that you have failed to provide statistical evidence supporting your assertion of evidence of a linear trend in support over time. Either do a real trend analysis, for women and men separately, or compare the data over several specific years using something like ANOVA by year and gender. This would also allow you to see if these is really the interaction you assert is present.

As you can see in the final paper – which was the case in this revision as well – we did a “real trend analysis, for women and men separately.”

We tried to make this as clear as possible, writing in the paper:

We use logistic regression models to test for differences on this measure between men and women across the 23 administrations of the GSS since 1975. We test time effects with a continuous variable for year, which ranges from 0 in 1975 to 37 in 2012. This coding allows for an intuitive interpretation of the intercept and produces coefficients equal to the predicted change in the odds of opposing pornography associated with a one year change in the independent variable (non-linear specifications did not improve the model fit). … The first model combines men and women, while models 2 and 3 analyze men and women separately, after tests showed differences in the coefficients by gender on six of the variables (marked with superscript ‘b’). … Comparison of Model 2 and Model 3 confirms that the decline in opposition to pornography has been more pronounced for men than for women, as the coefficient for the year variable is almost twice as large.
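
For concreteness, here is a minimal sketch of that setup in Python's statsmodels; the extract, variable names, and control set are illustrative stand-ins, not our actual code:

    # Separate logits by gender, plus a fully-interacted model whose
    # female:year term tests the gender difference in slopes.
    import pandas as pd
    import statsmodels.formula.api as smf

    gss = pd.read_csv("gss_extract.csv").dropna(subset=["PORNLAW"])
    gss["oppose"] = (gss["PORNLAW"] == 1).astype(int)
    gss["female"] = (gss["SEX"] == 2).astype(int)
    gss["year"] = gss["YEAR"] - 1975  # 0 in 1975 through 37 in 2012

    controls = " + AGE + EDUC + ATTEND"  # stand-ins for the actual controls

    men = smf.logit("oppose ~ year" + controls, gss[gss.female == 0]).fit()
    women = smf.logit("oppose ~ year" + controls, gss[gss.female == 1]).fit()
    print(men.params["year"], women.params["year"])  # cf. -.023 and -.012

    full = smf.logit("oppose ~ female*(year" + controls + ")", gss).fit()
    print(full.pvalues["female:year"])  # the superscript-b difference test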

We thought that Reviewer 1, back from the previous round, was doubling down on misunderstanding what we did, and the editor thought this as well. The reviewer wrote: “I don’t agree that the years need collapsing in the analyses. I believe it is better to see the linear trend. Also, I don’t like to see data left out, in this case data from the individual years.”

In fact, we found out in the next round of reviews that she meant this as a disagreement with the editor! (“The authors misread my statement about collapsing the years. I was disagreeing with the editor who suggested collapsing the years. I did not suggest myself that the years should be collapsed. I agree that the years should not be collapsed. It’s not me who misread the paper, it’s the authors who misread my statement.”)

That said, she still did not grasp the analysis:

You state that ‘This coding allows for an intuitive interpretation of the intercept and produces coefficients equal to the predicted change in the odds of opposing pornography associated with a one year change in the indepenjdent variable.’ In the results section, please describe how your data fit an ‘intuitive’ interpretation and how the coefficients that are produced explain the one year change. There is a disconnect for me from this statement and the description of the data.

And she added:

Please carefully describe the statistical analysis and statistical findings that describe the difference between the declines in opposition for women vs. men. Is the beta for gender .78 and for year -.02, and how did you test for the difference in betas of -.01 vs. -.02? Mention the test you used to assess this. This doesn’t seem like much of a difference in slope. That one is twice as large as the other is fairly meaningless when it is .01 vs. .02.

And added again later: “P. 22, agvain when you say a coefficient for the year variables is “amost twice as large,” you are talking about .01 vs .02.”


The editor and Reviewer 1 had a long-running dispute about how to handle all of our control variables. The editor was sticking to the policy that we needed a table showing complete correlations of all variables separately by gender. And a discussion of every variable, with references, justifying its inclusion. The editor said in the first round:

You also need to explain each of the control variables you include in your regressions in the Introduction. Add at least a sentence for each variable explaining why it is important to the issues you are testing.

In response, we included a long section beginning with, “Various social and demographic characteristics are associated with pornography use and attitudes toward pornography, and we account for these characteristics in our empirical analysis below.”

But then Reviewer 1 said of that passage: “Much of the material in “Attitudes Toward Pornography” is not relevant. … Gender and gender differences are what you are studying.”

And in response to our gigantic correlation table of all variables separately by gender, Reviewer 1 wrote: “I … strongly recommend deletion of Table 3. This is not a study of the correlates of attitudes toward pornography, and the intercorrelations of all the control variables are outside the range of your focus.”

Never mind.

Reviewer 2, the new reviewer, had some reasonable questions and suggestions. For example, s/he recommended analyzing the outcome with a multinomial logistic regression, which we did but it didn’t matter; and controlling for pornography consumption (“watched an x-rated movie in the past year”), which we did and it didn’t matter (in fact, basically none of the control variables affect the basic story much, but reviewers have a hard time believing this). S/he also had lots of objections to how we characterized various feminist authors and terms in the framing, and really didn’t like “pornographication” as a term, listing as a “major” objection:

the term ‘pornographication’ is problematic and should be removed from the paper in favor of a more academic description of increased access to sexualized media.

September 10, 2014 (approximate): Resubmitted to Sex Roles, third revision

October 11, 2014: Revise and resubmit, with one review

The editor now informed us that one reviewer just recommended rejecting the paper because we didn’t address her concerns, while the other called for “major revisions.”

Given this type of feedback, I would normally reject a paper already in its third revision. However, I would like to offer one more opportunity for you to make the requested changes. If you do resubmit, I may seek new reviewers and essentially begin the review process anew, unless it is clear that my earlier concerns are fully addressed.

Despite three drafts and as many memos, the editor still did not seem to understand that our outcome variable was a single question with three options. She wrote:

One of my basic requests has been that you consider the question about exposure of pornography to those under 18 as a separate dependent variable, or omit this entirely from the study. Conceptually, I feel this is quite different from the other two survey items and cannot be combined with them. This will require major changes in the analysis and rationale for predictions relating to each of these measures.

The reviewer, however, disagreed, voicing approval for our choice. The editor clarified, “If my requests conflict with those of the reviewer, it is my requests you need to follow, not those of the reviewer.”

They had no trouble agreeing, however, that they did not understand the linear time trend we were testing: “As the reviewer explains, we do need a clearer discussion of how the linear trend is being tested.”

Reviewer 1 wrote:

Regarding the analysis of the time trend, although the authors state [in the memo] that the starring of the coefficients on Table 4 demonstrate a significant linear trend, it was not apparent to the editor and reviewers. As one of the main points of the study, it should be made very obvious that there is a significant linear trend via statistics. If this means being more explicit in the text of the results section, it would be important to do. If there’s this much confusion, the statistical analysis needs clarification.

You can look at the table in the final publication for yourself to see if this remains unclear. And then the reviewer added:

As I previously mentioned, though significant, a change of -.02 vs .-.01 is not substantial. Thus, the authors should refrain from concluding one is twice as large as the other.

We decided to take our business elsewhere rather than submit another revision.

November 4, 2014: Submitted to Social Forces

December 29, 2014: Rejected, with two reviews

Reviewer 1 only had concerns about framing, such as, “expand their discussion of the broader cultural changes in sexuality in the culture,” and discuss “changes in gay and lesbian identities and visibility during this period.”

Reviewer 2 simply thought we couldn’t answer the questions we posed with the data we had:

The paper is motivated by a largely assumed cultural ‘pornographication’ process linked to post-feminism. Neither concept seems well-suited to explain public opinion formation or change, and greater specificity about these concepts would likely outstrip the operational capacity of the GSS to model how gender and sexuality attitudes may influence shifts in beliefs about pornography.

There were some other technical issues about specific variables that aren’t very important. Again, this is a very reasonable basis for making the ridiculous judgment forced by the system of publishing in the limited pages of a print journal.

January 16, 2015: Submitted to Social Currents

April 9, 2015: Revise and resubmit, based on three reviews

The editors, Toni Calasanti and Vincent Roscigno, wrote:

While stated differently in each case, the overriding sentiment across the reviewers is that the paper needs better framing. … the potential contribution of this study is not realized because the theoretical framework is lacking, limiting your ability to discuss the implications of your findings.

Reviewer 1 wanted the “post-feminism” discussion put back in the front: “It’s not until the conclusion of the manuscript that we learn about a potential contribution to ‘postfeminism’ and current work there.”

Reviewer 1 also attempted to lead us into a common trap. S/he wrote:

The hypotheses don’t necessarily derive from a particular theory in sociology or test a specific argument about gender, public opinion theories, and pornography per se. Rather, the project is descriptive (divergence of male/female support for legal control, rate of change over time, etc.). That isn’t fatal. But a project that makes a more direct connection to advancing current theoretical work in feminism and sexuality studies, or current theorizing about the importance of public opinion and values about pornography, would strengthen the overall contribution of this research.

Making the paper more theoretical is not a bad suggestion, but in this context – since the data are so limited – it’s a sure setup for a future reviewer to complain that you have asked questions you can’t sufficiently answer with your data.

The three reviewers’ other concerns by this point were quite familiar to us. For example, “perhaps a line or two to strengthen the validity of measure could be added based on some of the studies cited.” And a worry about collapsing the dependent variable into two categories. And the need to acknowledge debates within feminism about the meaning of “pornographication.” We dutifully beefed up, clarified, and strengthened. And wrote a memo.

May 20, 2015: Resubmitted to Social Currents, first revision

July 18, 2015: Accepted


Some of the problems apparent in this story are common to sociology, some are more general.

Sociologists care way too much about framing. Most (or all) of the reviewers were sociologists, and most of what they suggested, complained about, or objected to was about the way the paper was “framed,” that is, how we establish the importance of the question and interpret the results. Of course framing is important – it’s why you’re asking your question, and why readers should care (see Mark Granovetter’s note on the rejected version of “the Strength of Weak Ties”). But it takes on elevated importance when we’re scrapping over limited slots in academic journals, so that to get published you have to successfully “frame” your paper as more important than some other poor slob’s.

The journal system gets in the way. When journals reject you they report the low percentage of papers they accept. This is supposed to make the rejected authors feel better, but it also shows the gross inefficiency of the system: why should you bounce from journal to journal with low acceptance rates – in our case, asking our colleagues to write 13 reviews – instead of being vetted once by a centralized system with reviewers who work to a common standard? The answer is because that’s the way they did it in the Dark Ages, when physically printing research papers at high cost was the only way of distributing scholarly output.

The system is slow. As a result of these and other systemic problems, we do a terrible job of advancing knowledge. From the time of our first submission to the publication date was 776 days. For 281 of those days it was in our hands, but for the other 495 days it was in the hands of editors, reviewers, and the publisher. Despite responding to 13 reviews, with a lot of tinkering, the basic result did not change from our first submission in August 2013 to our last submission in May 2015. The new knowledge was all created two years before it was published.

The system is arbitrary. I don’t want to make Social Currents look bad here, with the implication that they are a lower quality journal because they published something rejected by three journals before. After all, Granovetter’s paper was rejected by American Sociological Review before getting 35,000 citations as an American Journal of Sociology paper. I also like the example of Liana Sayer and Suzanne Bianchi’s paper on economic independence and divorce, which was rejected by the Journal of Marriage and Family, the flagship journal of the National Council on Family Relations (NCFR), before promptly winning NCFR’s best-paper award after it was published in the Journal of Family Issues. That is, one small group of reviewers deemed it unpublishable in a top journal, and the next declared it the best article of the year. That’s a very wide spread. The arbitrariness of the review system we have now creates cases like this – and who knows how many others. It’s not a systemic problem that Sex Roles has a reviewer that won’t let you say .02 is twice as large as .01. The problem is that could happen anywhere – and cost people their careers – at the same time that bad stuff gets through for arbitrary (or pernicious) reasons. There is too much noise in the current peer-review system to trust it for quality control.


Consider an alternative system, for example, in which the paper – having passed a very low bar of basic quality – had been published after the first set of reviews and then subjected to post-publication review and discussion in the field. Another alternative is publishing it before any formal review process, and allowing post-publication review to do the whole vetting process.

Models exist. Sociology doesn’t have a central working paper system, but there are smaller systems. In my neck of the woods, the California Center for Population Research has a working paper archive, which houses papers from six population centers. Math types have arXiv, which has more than a million papers, with each new one “reviewed by expert moderators to verify that they are topical and refereeable scientific contributions that follow accepted standards of scholarly communication.” They also use a system of member endorsement to cut down on junk submissions. If papers are subsequently published the arXiv version is updated to link to the published version. Sociology should make something like this.

Another step in the right direction is rapid-response, open-access peer-review, with quick up-or-down decisions. In sociology this includes Sociological Science, run by an independent team and supported by author fees (often paid by university libraries or grants); and Socius, run by the American Sociological Association and subsidized by the for-profit publisher Sage in an attempt to pacify open-access advocates. These work more or less like PLOS ONE, which “accepts scientifically rigorous research, regardless of novelty.”

I’m happy to publish in such outlets, but many of us worry about the career implications for our students who risk having their CVs seen as sketchy by old-fashioned types. We need them to be institutionalized.

In the meantime, those of us in position to conduct peer review can do our part to be better reviewers (see this excellent advice). And we can make explicit decisions about which journals we will review for. The system runs off our discretionary contributions, and we shape it through our actions. That argument is for a separate post.

* We did the research together — and Lucia did most of the work — but blame me for the content of this post.



Comment on Goffman’s survey, American Sociological Review rejection edition


Peer Review, by Gary Night. https://flic.kr/p/c2WH2E


  • I reviewed Alice Goffman’s book, On The Run.
  • I complained that her dissertation was not made public, despite being awarded the American Sociological Association’s dissertation prize. I proposed a rule change for the association, requiring that the winning dissertation be “publicly available through a suitable academic repository by the time of the ASA meeting at which the award is granted.” (The rule change is moving through the process.)
  • When her dissertation was released, I complained about the rationale for the delay.
  • My critique of the survey that was part of her research grew into a formal comment (PDF) submitted to American Sociological Review.

In this post I don’t have anything to add about Alice Goffman’s work. This is about what we can learn from this and other incidents to improve our social science and its contribution to the wider social discourse. As Goffman’s TED Talk passed 1 million views, we have had good conversations about replicability and transparency in research, and about ethics in ethnography. And of course about the impact of the criminal justice system and over-policing on African Americans, the intended target of her work. This post is about how we deal with errors in our scholarly publishing.

My comment was rejected by the American Sociological Review.

You might not realize this, but unlike many scientific journals, ASR has no normal way of acknowledging or correcting errors in research, except for “errata” notices, which are for typos and editing errors. To my knowledge ASR has never retracted an article or published an editor’s note explaining how an article, or part of an article, is wrong. Instead, they publish Comments (and Replies). The Comments are submitted and reviewed anonymously by peer reviewers just like an article, and then if the Comment is accepted the original author responds (maybe followed by a rejoinder). It’s a cumbersome and often combative process, often mixing theoretical with methodological critiques. And it creates a very high hurdle to leap, and a long delay, before the journal can correct itself.

In this post I’ll briefly summarize my comment, then post the ASR editors’ decision letter and reviews.

Comment: Survey and ethnography

I wrote the comment about Goffman’s 2009 ASR article for accountability. The article turned out to be the first step toward a major book, so ASR played a gatekeeping role for a much wider reading audience, which is great. But then it should take responsibility to notify readers about errors in its pages.

My critique boiled down to these points:

  • The article describes the survey as including all households in the neighborhood, which is not the case, and used statistics from the survey to describe the neighborhood (its racial composition and rates of government assistance), which is not justified.
  • The survey includes some number (probably a lot) of men who did not live in the neighborhood, but who were described as “in residence” in the article, despite being “absent because they were in the military, at job training programs (like JobCorp), or away in jail, prison, drug rehab centers, or halfway houses.” There is no information about how or whether such men were contacted, or how the information about them was obtained (or how many in her sample were not actually “in residence”).
  • The survey results are incongruous with the description of the neighborhood in the text, and — when compared with data from other sources — describe an apparently anomalous social setting. She reported finding more than twice as many men (ages 18-30) per household as the Census Bureau reports from their American Community Survey of Black neighborhoods in Philadelphia (1.42 versus .60 per household). She reported that 39% of these men had warrants for violating probation or parole in the prior three years. Using some numbers from other sources on violation rates, that translates into between 65% and 79% of the young men in the neighborhood being on probation or parole — very high for a neighborhood described as “nice and quiet” and not “particularly dangerous or crime-ridden.” (The arithmetic is sketched just after this list.)
  • None of this can be thoroughly evaluated because the reporting of the data and methodology for the survey were inadequate to replicate or even understand what was reported.
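
The back-of-the-envelope behind that 65-79% range runs as follows; the violation rates here are back-calculated from the range itself rather than quoted from the sources used in the comment:

    # If 39% of young men had a probation/parole violation warrant in the
    # prior three years, the share on probation/parole is .39 divided by
    # the three-year violation rate. The rates below are back-calculated
    # approximations, not figures from the original sources.
    share_with_warrant = 0.39
    for violation_rate in (0.60, 0.495):
        print(f"violation rate {violation_rate:.1%} -> "
              f"{share_with_warrant / violation_rate:.0%} on probation/parole")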

You can read my comment here in PDF. Since I aired it out on this blog before submitting it, making it about as anonymous as a lot of other peer-review submissions, I see no reason to shroud the process any further. The editors’ letter I received is signed by the current editors — Omar Lizardo, Rory McVeigh, and Sarah Mustillo — although I submitted the piece before they officially took over (the editors at the time of my submission were Larry W. Isaac and Holly J. McCammon). The reviewers are of course anonymous. My final comment is at the end.

ASR letter and reviews

Editors’ letter:


Dear Prof. Cohen:

The reviews are in on your manuscript, “Survey and ethnography: Comment on Goffman’s ‘On the Run’.” After careful reading and consideration, we have decided not to accept your manuscript for publication in American Sociological Review (ASR).  Our decision is based on the reviewers’ comments, our reading of the manuscript, an overall assessment of the significance of the contribution of the manuscript to sociological knowledge, and an estimate of the likelihood of a successful revision.

As you will see, there was a range of opinions among the reviewers of your submission.  Reviewer 1 feels strongly that the comment should not be published, reviewer 3 feels strongly that it should be published, and reviewer 2 falls in between.  That reviewer sees merit in the criticisms but also suggests that the author’s arguments seem overstated in places and stray at times from discussion that is directly relevant to a critique of the original article’s alleged shortcomings.

As editors of the journal, we feel it is essential that we focus on the comment’s critique of the original ASR article (which was published in 2009), rather than the recently published book or controversy and debate that is not directly related to the submitted comment.  We must consider not only the merits of the arguments and evidence in the submitted comment, but also whether the comment is important enough to occupy space that could otherwise be used for publishing new research.  With these factors in mind, we feel that the main result that would come from publishing the comment would be that valuable space in the journal would be devoted to making a point that Goffman has already acknowledged elsewhere (that she did not employ probability sampling).

As the author of the comment acknowledges, there is actually very little discussion of, or use of, the survey data in Goffman’s article.   We feel that the crux of the argument (about the survey) rests on a single sentence found on page 342 of the original article:  “The five blocks known as 6th street are 93 percent Black, according to a survey of residents that Chuck and I conducted in 2007.”  The comment author is interpreting that to mean that Goffman is claiming she conducted scientific probability sampling (with all households in the defined space as the sampling frame).  It is important to note here that Goffman does not actually make that claim in the article.  It is something that some readers might infer.  But we are quite sure that many other readers simply assumed that this is based on nonprobability sampling or convenience sampling.  Goffman speaks of it as a survey she conducted when she was an undergraduate student with one of the young men from the neighborhood.  Given that description of the survey, we expect many readers assumed it was a convenience sample rather than a well-designed probability sample.  Would it have been better if Goffman had made that more explicit in the original article?  Yes.

In hindsight, it seems safe to say that most scholars (probably including Goffman) would say that the brief mentions of the survey data should have been excluded from the article.  In part, this is because the reported survey findings play such a minor role in the contribution that the paper aims to make.

We truly appreciate the opportunity to review your manuscript, and hope that you will continue to think of ASR for your future research.


Omar Lizardo, Rory McVeigh, and Sarah Mustillo

Editors, American Sociological Review

Reviewer: 1

This paper seeks to provide a critique of the survey data employed in Goffman (2009).  Drawing on evidence from the American Community Survey, the author argues that data presented in Goffman (2009) about the community in which she conducted her ethnography is suspect.  The author draws attention to remarkably high numbers of men living in households (compared with estimates derived from ACS data) and what s/he calls an “extremely high number” of outstanding warrants reported by Goffman.  S/he raises the concern that Goffman (2009) did not provide readers with enough information about the survey and its methodology for them to independently evaluate its merits and thus, ultimately, calls into question the generalizability of Goffman’s survey results.

This paper joins a chorus of critiques of Goffman’s (2009) research and subsequent book.  This critique is novel in that the critique is focused on the survey aspect of the research rather than on Goffman’s persona or an expressed disbelief of or distaste for her research findings (although that could certainly be an implication of this critique).

I will not comment on the reliability, validity or generalizability of Goffman’s (2009) evidence, but I believe this paper is fundamentally flawed.  There are two key problems with this paper.  First the core argument of the paper (critique) is inadequately situated in relation to previous research and theory.  Second, the argument is insufficiently supported by empirical evidence.

The framing of the paper is not aligned with the core empirical aims of the paper.  I’m not exactly sure what to recommend here because it seems as if this is written for a more general audience and not a sociological one.  It strikes me as unusual, if not odd, to reference the popularity of a paper as a motivation for its critique.  Whether or not Goffman’s work is widely cited in sociological or other circles is irrelevant for this or any other critique of the work.  All social science research should be held to the same standards and each piece of scholarship should be evaluated on its own merits.

I would recommend that the author better align the framing of the paper with its empirical punchline.  In my reading the core criticism of this paper is that the Goffman (2009) has not provided sufficient information for someone to replicate or validate her results using existing survey data.  Although it may be less flashy, it seems more appropriate to frame the paper around how to evaluate social science research.  I’d advise the author to tone down the moralizing and discussion of ethics.  If one is to levy such a strong (and strongly worded) critique, one needs to root it firmly in established methods of social science.

That leads to the second, and perhaps even more fundamental, flaw.  If one is to levy such a strong (and strongly worded) critique, one needs to provide adequate empirical evidence to substantiate her/his claims.  Existing survey data from the ACS are not designed to address the kinds of questions Goffman engages in the paper and thus it is not appropriate for evaluating the reliability or validity of her survey research.  Numerous studies have established that large scale surveys like the ACS under-enumerate black men living in cities.  They fall into the “hard-to-reach” population that evade survey takers and census enumerators.  Survey researchers widely acknowledge this problem and Goffman’s research, rather than resolving the issue, raises important questions about the extent to which the criminal justice system may contribute to difficulties for conventional social science research data collection methods.  Perhaps the author can adopt a different, more scholarly, less authoritative, approach and turn the inconsistencies between her/his findings with the ACS and Goffman’s survey findings into a puzzle.  How can these two surveys generate such inconsistent findings?

Just like any survey, the ACS has many strengths.  But, the ACS is not well-suited to construct small area estimates of hard-to-reach populations.  The author’s attempt to do so is laudable but the simplicity of her/his analysis trivializes the difficultly in reaching some of the most disadvantaged segments of the population in conventional survey research.  It also trivializes one of the key insights of Goffman’s work and one that has been established previously and replicated by others: criminal justice contact fundamentally upends social relationships and living arrangements.

Furthermore, the ACS doesn’t ask any questions about criminal justice contact in a way that can help establish the validity of results for disadvantaged segments of the population who are most at-risk of criminal justice contact.  It is impossible to determine using the ACS how many men (or women) in the United States, Pennsylvania, or Philadelphia (or any neighborhood therein), have an outstanding warrant.  The ACS doesn’t ask about criminal justice contact, it doesn’t ask about outstanding warrants, and it isn’t designed to tap into the transient experiences of many people who have had criminal justice contact.  The author provides no data to evaluate the validity of Goffman’s claims about outstanding warrants.  Advancements in social science cannot be established from a “she said”, “he said” debate (e.g., FN 9-10).  That kind of argument risks a kind of intellectual policing that is antithetical to established standards of evaluating social science research.  That being said, someone should collect this evidence or at a minimum estimate, using indirect estimation methods, what fraction of different socio-demographic groups have outstanding warrants.

Although I believe that this paper is fundamentally flawed both in its framing and provision of evidence, I would like to encourage the author to replicate Goffman’s research.  That could involve an extended ethnography in a disadvantaged neighborhood in Philadelphia or another similar city.  That could also involve conducting a small area survey of a disadvantaged, predominantly black, neighborhood in a city with similar criminal justice policies and practices as Philadelphia in the period of Goffman’s study.  This kind of research is painstaking, time consuming, and sorely needed exactly because surveys like the ACS don’t – and can’t – adequately describe or explain social life among the most disadvantaged who are most likely to be missing from such surveys.

Reviewer: 2

I read this manuscript several times. It is more than a comment, it seems. It is 1) a critique of the description of survey methods in GASR and 2) a request for some action from ASR “to acknowledge errors when they occur.” The errors here have to do with Goffman’s description of survey methods in GASR, which the author describes in detail. This dual focus read as distracting at times. The manuscript would benefit from a more squarely focused critique of the description of survey methods in GASR.

Still, the author’s comment raises some valid concerns. The author’s primary concern is that the survey Goffman references in her 2009 ASR article is not described in enough detail to assess its accuracy or usefulness to a community of scholars. The author argues that some clarification is needed to properly understand the claims made in the book regarding the prevalence of men “on the run” and the degree to which the experience of the small group of men followed closely by Goffman is representative of most poor, Black men in segregated inner city communities. The author also cites a recent publication in which Goffman claims that the description provided in ASR is erroneous. If this is the case, it seems prudent for ASR to not only consider the author’s comments, but also to provide Goffman with an opportunity to correct the record.

I am not an expert in survey methods, but there are moments where the author’s interpretation of Goffman’s description seems overstated, which weakens the critique. For example, the author claims that Goffman is arguing that the entirety of the experience of the 6th Street crew is representative of the entire neighborhood, which is not necessarily what I gather from a close reading of GASR (although it may certainly be what has been taken up in popular discourse on the book). While there is overlap of the experience of being “on the run,” namely, your life is constrained in ways that it isn’t for those not on the run, it does appear that Goffman also uses the survey to describe a population that is distinct in important ways from the young men she followed on 6th street. The latter group has been “charged for more serious offenses like drugs and violent crimes,” she writes (this is the group that Sharkey argues might need to be “on the run”), while the larger group of men, whose information was gathered using survey data, were typically dealing with “more minor infractions”: “In the 6th Street neighborhood, a person was occasionally ‘on the run’ because he was a suspect in a shooting or robbery, but most people around 6th street had warrants out for far more minor infractions [emphasis mine].”

So, as I read it (I’ve also read the book), there are two groups: one “on the run” as a consequence of serious offenses and others “on the run” as a consequence of minor infractions. The consequence of being “on the run” is similar, even if the reason one is “on the run” varies.

The questions that remain are questions of prevalence and generalizability. The author asks: How many men in the neighborhood are “on the run” (for any reason)? How similar is this neighborhood to other neighborhoods? Answers to this question do rely on an accurate description of survey methods and data, as the author suggests.

This leads us to the most pressing and clearly argued question from the author: What is the survey population? Is it 1) “people around 6th Street” who also reside in the 6th Street neighborhood (of which, based on Goffman’s definition of in residence, are distributed across 217 distinct households in the neighborhood, however the neighborhood is defined e.g., 5 blocks or 6 blocks) or 2) the entirety of the neighborhood, which is made up of 217 households. It appears from the explanation from Goffman cited by the author that it is the former (“of the 217 households we interviewed,” which should probably read, of the 308 men we interviewed, all of whom reside in the neighborhood (based on Goffman’s definition of residence), 144 had a warrant…). Either way, the author makes a strong case for the need for clarification of this point.

The author goes on to explain the consequences of not accurately distinguishing between the two possibilities described above (or some other), but it seems like a good first step would be to request a clarification (the author could do this directly) and to allow more space than a newspaper article affords for the type of explanation that could address the author’s concerns.

Is this the purpose of the comment, or is the purpose merely to place a critique on the record? The primary objective is not entirely clear in the present manuscript.

The author’s comment is strong enough that it should encourage ASR to think through possibilities for correcting the record. As a critique of the survey methods, however, the comment would benefit from more focus. It could also do a better job of contextualizing the use of survey methods in GASR, comparing and contrasting it with other ethnographic studies that incorporate survey methods (at the moment such references appear only in footnotes).

Reviewer: 3

This comment exposes major errors in the survey methodology of Goffman’s article. One major flaw is that the Goffman article describes the survey as covering all households in the neighborhood, while Goffman later disclosed in press interviews that it is not representative of all households in the neighborhood. Another flaw the author exposes is that Goffman’s data and methodological reporting are not up to sociological standards. Finally, the author argues that the data from the survey do not match the ethnographic data.

Overall, I agree with the author’s assertions that the survey component is flawed. This is an important point because the article draws a large part of its substance from the survey. The survey helped Goffman bolster generalizability and, arguably, make the case for publication in ASR. If the massive errors in the survey had been exposed early on, it is possible that ASR would have held back from publishing the article.

I agree that ASR should correct the error highlighted on page 4 (the data set covers not the entire neighborhood but a haphazard set of households and individuals given the survey informally) and that the sampling strategy should be described. Goffman should acknowledge that this was a non-representative convenience sample, used to bolster field observations. It would follow that the survey component of the ASR article should be considered invalid and that only the field data in the article should be taken at face value. Goffman should also be asked to provide a commentary on her survey methodology.

The author points out some compelling anomalies between the Goffman survey and the General Social Survey and other representative data. At best, Goffman made serious mistakes with the survey, and she should be asked to document those mistakes and her survey methodology; at worst, she made up some of the data, and appropriate action must be taken by ASR. I agree with the author’s final assessment: that the survey results be disregarded and the article republished either without mention of those results, or with mention of them alongside a full accounting of their errors and the survey methodology.

My response

Regular readers can probably imagine my long, overblown, hyperventilating response to Reviewer 1, so I’ll just leave that to your imagination. On the bottom line, I disagree with the editors’ decision, but I can’t really blame them. Would it really be worth some number of pages in the journal, plus a reply and rejoinder, to hash this out? Within the constraints of the ASR format, maybe the pages aren’t worth it. And the result would not have been a definitive statement anyway, but rather just another debate among sociologists.

What else could they have done? Maybe it would have been better if the editors could simply append a note to the article advising readers that the survey is not accurately described, cautioning against interpreting it as representative, and linking to the comment online somewhere explaining the problem. (Even then, of course, Goffman should have a chance to respond, and so on.)

It’s just wrong that the editors now acknowledge there is something wrong in their journal (although we seem to disagree about how serious the problem is) while no one formally notifies the article’s future readers. That is bad scholarly communication. I’ve said from the beginning that there’s no need for a high-volume conversation about this, or for attacks on anyone’s integrity or motives. There are important things in this research, and it’s also highly flawed. Acknowledge the errors, so they don’t compound, and move on.

This incident can help us learn lessons with implications up and down the publishing system. Here are a couple. At the level of social science research reporting: don’t publish survey results without sufficient methodological documentation; let’s have the instrument and protocol, the code, and access to the data. At the system level of publishing: why do we still have journals with cost-defined page limits? Because for-profit publishing is more important than scholarly communication. The sooner we get out from under that 19th-century habit, the better.


Filed under Me @ work

Goffman dissertation followup

I previously reviewed Alice Goffman’s book On The Run, and wrote a critique of the survey that was part of that project (including a formal comment sent to American Sociological Review). Then I complained that her dissertation was not made public, despite being awarded the American Sociological Association’s dissertation prize. I proposed a rule change for the association, requiring that the winning dissertation be “publicly available through a suitable academic repository by the time of the ASA meeting at which the award is granted.”

Here’s a quick followup.


I was interested in Goffman’s 2010 dissertation because I thought it might have more information about the survey she conducted than the 2014 book did. When I inquired about the dissertation on June 4 of this year, Princeton’s director of media relations, Martin Mbugua, told me she “was granted an exemption from submitting her dissertation to the University Archives, so we do not have a copy of her dissertation in our collection.”

Jesse Singal at New York magazine reported yesterday that they now have the dissertation, and he’s read it. Not only does it not have more methodological information than the book, Singal reports, it actually has less, as the methodological appendix that’s in the book is not in the dissertation. In a saved-you-a-trip-to-Princeton email to me, Singal says the dissertation’s description of her survey is “basically identical” to what is in ASR. That speaks to my critique of her survey, which seems unaffected by the release of the dissertation. (I’m not in charge of dissertations at Princeton, so I’m not critiquing the dissertation anyway.)

With regard to the open-science-inspired rule change for ASA dissertation awards, Singal’s article just reinforces my desire to see the rule adopted. Mbugua told Singal that Princeton now allows up to two two-year embargo periods for PhD students who don’t want their dissertations publicly released. But why embargo it? I think most people do this because they don’t want to undermine their book deals. The need for this may be overstated, but it’s a thing. (Eric Schwartz, who acquires sociology books for Columbia University Press, tweeted: “No problem. Book and dissertation are for different audiences.”)

Anyway, Singal quotes Goffman giving a quite different reason:

The dissertation contained very sensitive material about people who were vulnerable to arrest and incarceration. … I wanted to think through the ethical and human subjects issues of making it available beyond the committee members and I wanted some time to go by between the actual events and a public reading. That felt safer for the people who had granted me permission to write about their lives, and for me, than publishing right away.

Apart from the fact that this concern did not prevent Goffman from submitting her dissertation to a reading by an awards committee (“beyond the [dissertation] committee members”), I do not find this very credible, and I don’t like the rationale. If it was wrong to release the dissertation in 2010 because it would endanger her subjects, then it was wrong to publish a book in 2014 with the same (actually, more) incriminating information. In fact, as we now know, identifying the individuals mentioned in the book was trivial using Google, and of course the police knew who they were anyway. By this rationale, I cannot understand why the dissertation would not be given to the library until 14 months after the book was published, or until three months after the commercial paperback edition was published. Oh, wait.

Look, if people want to embargo their dissertations for financial gain, and their elite private universities allow it, then so be it. But that doesn’t have to be ASA’s problem. We can add one small piece to that calculation: giving up the ASA Dissertation Award.


Filed under In the news

On Goffman’s survey

Survey methods.

Jesse Singal at New York Magazine‘s Science of Us has a piece in which he tracks down and interviews a number of Alice Goffman’s respondents. This settles the question (which never should have been a real question) of whether she actually did all that deeply embedded ethnography in Philadelphia. It leaves completely unresolved, however, the issue of the errors and possible errors in the research. This reaffirms the conclusion of my original review: we should take the volume down in this discussion, identify errors in the research without attacking Goffman personally or delegitimizing her career, and then learn from the affair ways to improve sociology (for example, by requiring that winners of the American Sociological Association dissertation award make their work publicly available).

That said, I want to comment on a couple of issues raised in Singal’s piece, and share my draft of a formal comment on the survey research Goffman reported in American Sociological Review.

First, I want to distance myself from the description by Singal of “lawyers and journalists and rival academics who all stand to benefit in various ways if they can show that On the Run doesn’t fully hold up.” I don’t see how I (or any other sociologists) benefit if Goffman’s research does not hold up. In fact, although some people think this is worth pursuing, I am also annoying some friends and colleagues by doing this.

More importantly, although it’s a small part of the article, Singal did ask Goffman about the critique of her survey, and her response (as he paraphrased it, anyway) was not satisfying to me:

Philip Cohen, a sociologist at the University of Maryland, published a blog post in which he puzzles over the strange results of a door-to-door survey Goffman says she conducted with Chuck in 2007 in On the Run. The results are implausible in a number of ways. But Goffman explained to me that this wasn’t a regular survey; it was an ethnographic survey, which involves different sampling methods and different definitions of who is and isn’t in a household. The whole point, she said, was to capture people who are rendered invisible by traditional survey methods. (Goffman said an error in the American Sociological Review paper that became On the Run is causing some of the confusion — a reference to “the 217 households that make up the 6th Street neighborhood” that should have read “the 217 households that we interviewed … ” [emphasis mine]. It’s a fix that addresses some of Cohen’s concerns, like an implied and very unlikely 100 percent response rate, but not all of them.) “I should have included a second appendix on the survey in the book,” said Goffman. “If I could do it over again, I would.”

My responses are several. First, the error of describing the 217 households as the whole neighborhood, as well as the error in the book of saying she interviewed all 308 men (when in the ASR article she reports some unknown number were absent), both go in the direction of inflating the value and quality of the survey. Maybe they are random errors, but they didn’t have a random effect.

Second, I don’t see a difference between a “regular survey” and an “ethnographic survey.” There are different survey techniques for different applications, and the techniques used determine the data and conclusions that follow. For example, in the ASR article Goffman uses the survey (rather than Census data) to report the racial composition of the neighborhood, which is not something you can do with a convenience sample, regardless of whether you are engaged in an ethnography or not.
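To illustrate why composition estimates require a probability sample, here is a toy simulation; all the numbers in it are invented for illustration, and none come from Goffman’s data. When one group is simply easier to reach at the door, a convenience sample’s composition reflects reachability, not the neighborhood.

```python
import random

random.seed(0)

# Hypothetical neighborhood: 70% group A, 30% group B (invented numbers).
neighborhood = ["A"] * 700 + ["B"] * 300

# Probability sample: every resident equally likely to be selected.
srs = random.sample(neighborhood, 100)
print("Random sample, share A:", srs.count("A") / len(srs))  # near 0.70

# Convenience sample: suppose group A residents are twice as likely to
# answer the door. These inclusion weights are unknown to the surveyor.
weights = [2 if person == "A" else 1 for person in neighborhood]
convenience = random.choices(neighborhood, weights=weights, k=100)
print("Convenience sample, share A:", convenience.count("A") / len(convenience))
# The convenience estimate is biased upward, and without knowing the
# weights there is no way to correct it after the fact.
```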

Finally, there are no people “rendered invisible by traditional survey methods” (presumably Singal’s phrase). There are surveys that are better or worse at including people in different situations. There are “traditional” surveys — of varying quality — of homeless people, prisoners, rape victims, and illiterate peasants. I don’t know what an “ethnographic survey” is, but I don’t see why it shouldn’t include a sampling strategy, a response rate, a survey instrument, a data sharing arrangement, and thorough documentation of procedures. That second methodological appendix can be published at any time.

ASR Comment (revised June 22)

I wrote up my relatively narrow, but serious, concerns about the survey, and posted them on my website here.

It strikes me that Goffman’s book (either the University of Chicago Press version or the trade version) may not be subject to the same level of scrutiny that her article in ASR should have received. In fact, presumably, the book publishers took her publication in ASR as evidence of the work’s quality. And their interests are different from those of a scientific journal run by an academic society. If ASR is going to play that gatekeeping role, and it should, then ASR (and by extension ASA) should take responsibility in print for errors in its publications.


Filed under Research reports

Gender and the sociology faculty

In an earlier post, I reported on gender and the American Sociological Association’s (ASA) leaders, PhDs received, subject specialization, editors and editorial boards. Here is a little more data, which I’ll add to that post as well as posting it here.

Looking at the gender breakdown of PhDs, which became majority female in the 1990s, I wrote: “Producing mostly-female PhDs for a quarter of a century is getting to be long enough to start achieving a critical mass of women at the top of the discipline.” But I didn’t look at the tenure-ladder faculty, which is the next step in the pipeline to disciplinary domination.

To address that a little, I took a sample from the ASA’s 2015 Guide to Graduate Departments of Sociology, which I happened to get in the mail. Using random numbers, I counted the gender and PhD year for 201 full-time sociology faculty in departments that grant graduate degrees (that excludes adjuncts, affiliates, part-time, and emeritus faculty). This reflects both entrance into and attrition from the professoriate, so how it relates to the gender composition of PhDs will reflect everything from the job market through tenure decisions to retirement and mortality rates.

The median PhD year in my sample is 2000, and women are 47% of the sample. In fact, women earned 52% of sociology PhDs in the 1990s, but they are only 40% of the faculty with 1990s PhDs in my sample. After that, things improved for women. Women earned 60% of the PhDs in the 2000s, and they are 62% of current faculty with PhDs from the 2000s in this sample. So either we’re doing a better job of moving women from PhD completion into full-time faculty jobs, or the 2000s women haven’t been disproportionately weeded out yet.
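For anyone who wants to replicate this kind of tabulation, here is a minimal sketch; the rows shown are made-up placeholders, not records from my actual sample, and the setup assumes one hand-coded record per faculty member with gender and PhD year.

```python
import pandas as pd

# Placeholder records; the real sample had 201 faculty members coded
# by hand from the 2015 Guide to Graduate Departments of Sociology.
sample = pd.DataFrame({
    "phd_year": [1987, 1994, 2003, 2008, 2011],
    "gender":   ["M",  "F",  "M",  "F",  "F"],
})

# Group PhD years into decades and compute the share of women in each.
sample["decade"] = (sample["phd_year"] // 10) * 10
share_women = sample.groupby("decade")["gender"].apply(lambda g: (g == "F").mean())
print(share_women)
```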

Here is the breakdown of my sample, by PhD year and gender:

[Chart: faculty sample counts by PhD year and gender]
With 15 years or so of women earning 60% of the PhDs, they should be headed toward faculty dominance, and that may yet be the case. If men and women get tenure and retire at the same rates, another decade or so should do it, but that’s a big “if.” I don’t read much into women’s slippage in the last few years, except that it’s clearly not a slam-dunk.


Filed under Me @ work