No, poverty is not a mysterious, unknowable, negative-spiral loop

I don’t have much to add on the “consensus plan” on poverty and mobility produced by the Brookings and American Enterprise institutes, referred to in their launch event as being on “different ends of the ideological spectrum” (can you imagine?). In addition to the report, you might consider the comments by Jeff Spross, Brad DeLong, or the three-part series by Matt Bruenig.

My comment is about the increasingly (to me) frustrating description of poverty as something beyond simple comprehension and unreachable by mortal policy. It’s just not. The whole child poverty problem, for example, amounts to $62 billion per year. There are certainly important details to be worked out in how to eliminate it, but the basic idea is pretty clear — you give poor people money. We have plenty of it.

This was obvious yet amazingly not remarked upon in the first 40 minutes of the launch event (which is all I watched). In the opening presentation, Ron Haskins — for whom I have a well-documented distaste — started with this simple chart of official poverty rates:

[Chart: official poverty rates, with the blue line showing poverty among the elderly]

He started with the blue line, poverty for elderly people, and said:

The blue line is probably the nation’s greatest success against poverty. It’s the elderly. And it basically has declined pretty much all the time. It has no relationship to the economy, and there is good research that shows that it’s caused at least 90% by Social Security. So, government did it, and so Social Security is the reason we’re able to be successful to reduce poverty among the elderly.

And then everyone proceeded to ignore the obvious implication of that: when you give people money, they aren’t poor anymore. The most unintentionally hilarious illustration of this was in the keynote (why?) address from David Brooks (who has definitely been working on relaxing lately, especially when it comes to preparing keynote puff-pieces). He said this, according to my unofficial transcript:

Poverty is a cloud problem and not a clock problem. This is a Karl Popper distinction. He said some problems are clock problems – you can take them apart into individual pieces and fix them. Some problems are cloud problems. You can’t take a cloud apart. It’s a dynamic system that is always interspersed. And Popper said we have a tendency to try to take cloud problems and turn them into clock problems, because it’s just easier for us to think about. But poverty is a cloud problem. … A problem like poverty is too complicated to be contained by any one political philosophy. … So we have to be humble, because it’s so gloomy and so complicated and so cloud-like.

The good news is that for all the complexity of poverty, and all the way it’s a cloud, it offers a political opportunity, especially in a polarized era, because it’s not an either/or issue. … Poverty is an and/and issue, because it takes a zillion things to address it, and some of those things are going to come from the left, and some are going to come from the right. … And if poverty is this mysterious, unknowable, negative spiral-loop that some people find themselves in, then surely the solution is to throw everything we think works at the problem simultaneously, and try in ways we will never understand, to have a positive virtuous cycle. And so there’s not a lot of tradeoffs, there’s just a lot of throwing stuff in. And social science, which is so prevalent in this report, is so valuable in proving what works, but ultimately it has to bow down to human realities – to psychology, to emotion, to reality, and to just the way an emergent system works.

Poverty is only a “mysterious, unknowable, negative spiral-loop” if you specifically ignore the lack of money that is its proximate cause. Sure, spend your whole life wondering about the mysteries of human variation — but could we agree to do that after taking care of people’s basic needs?

I wonder if poverty among the elderly once seemed like a weird, amorphous, confusing problem. I doubt it. But it probably would if we had assumed that the only way to solve elderly poverty was to get children to give their parents more money. Then we would have to worry about the market position of their children, the timing of their births, the complexity of their motivations and relationships, the vagaries of the market, and the folly of youth. Instead, we gave old people money. And now elderly poverty “has declined pretty much all the time” and “it has no relationship to the economy.”

Imagine that.


Journal self-citation practices revealed

I have written a few times about problems with peer review and publishing.* My own experience subsequently led me to the problem of coercive self-citation, defined in one study as “a request from an editor to add more citations from the editor’s journal for reasons that were not based on content.” I asked readers to send me documentation of their experiences so we could air them out. This is the result.

Introduction

First let me mention a new editorial in the journal Research Policy about the practices editors use to inflate the Journal Impact Factor, a measure of citations that many people use to compare journal quality or prestige. One of those practices is coercive self-citation. The author of that editorial, Ben Martin, cites approvingly a statement signed by a group of management and organizational studies editors:

I will refrain from encouraging authors to cite my journal, or those of my colleagues, unless the papers suggested are pertinent to specific issues raised within the context of the review. In other words, it should never be a requirement to cite papers from a particular journal unless the work is directly relevant and germane to the scientific conversation of the paper itself. I acknowledge that any blanket request to cite a particular journal, as well as the suggestion of citations without a clear explanation of how the additions address a specific gap in the paper, is coercive and unethical.

So that’s the gist of the issue. However, it’s not that easy to define coercive self-citation. In fact, we’re not doing a very good job of policing journal ethics in general, basically relying on weak enforcement of informal community standards. I’m not an expert on norms, but it seems to me that when you have strong material interests — big corporations using journals to print money at will, people desperate for academic promotions and job security, etc. — and little public scrutiny, it’s hard to regulate unethical behavior informally through norms.

The clearest cases involve requests for self-citation made (a) before final acceptance, (b) for articles published within the last two years, and (c) without substantive reason. But there is a lot short of that to object to as well. Martin suggests that, to answer whether a practice is ethical, we need to ask: “Would I, as editor, feel embarrassed if my activities came to light and would I therefore object if I was publicly named?” (Or, as my friend Matt Huffman used to say when the used-textbook buyers came around offering us cash for books we hadn’t paid for: how would it look in grainy hidden-camera footage?) I think that journal practices, which are generally very opaque, should be exposed to public view so that unethical or questionable practices can be held up to community standards.

Reports and responses

I received reports from about a dozen journals, but a few could not be verified or were too vague. These 10 were included under very broad criteria — I know that not everyone will agree that these practices are unethical, and I’m unsure where to draw the line myself. In each case below I asked the current editor if they would care to respond to the complaint, doing my best to give the editor enough information without exposing the identity of the informant.

Here in no particular order are the excerpts of correspondence from editors, with responses from the editors to me, if any. Some details, including dates, may have been changed to protect informants. I am grateful to the informants who wrote, and I urge anyone who knows, or thinks they know, who the informants are not to punish them for speaking up.

Journal of Social and Personal Relationships (2014-2015 period)

Congratulations on your manuscript “X” having been accepted for publication in Journal of Social and Personal Relationships. … your manuscript is now “in press” … The purpose of this message is to inform you of the production process and to clarify your role in the process …

IMPORTANT NOTICE:

As you update your manuscript:

1. CITATIONS – Remember to look for relevant and recent JSPR articles to cite. As you are probably aware, the ‘quality’ of a journal is increasingly defined by the “impact factor” reported in the Journal Citation Reports (from the Web of Science). The impact factor represents a ratio of the number of times that JSPR articles are cited divided by the number of JSPR articles published. Therefore, the 20XX ratings will focus (in part) on the number of times that JSPR articles published in 20XX and 20XX are cited during the 20XX publication year. So citing recent JSPR articles from 20XX and 20XX will improve our ranking on this particular ‘measure’ of quality (and, consequently, influence how others view the journal). Of course only cite those articles relevant to the point. You can find tables of contents for the past two years at…

Response from editor Geoff MacDonald:

Thanks for your email, and for bringing that to my attention. I agree that encouraging self-citation is inappropriate and I have just taken steps to make sure it won’t happen at JSPR again.

Sex Roles (2011-2013 period)

In addition to my own report, already posted, I received an identical report from another informant. The editor, Irene Frieze, wrote: “If possible, either in this section or later in the Introduction, note how your work builds on other studies published in our journal.”

Response from incoming editor Janice D. Yoder:

As outgoing editor of Psychology of Women Quarterly and as incoming editor of Sex Roles, I have not, and would not, as policy require that authors cite papers published in the journal to which they are submitting.

I have recommended, and likely will continue to recommend, papers to authors that I think may be relevant to their work, but without any requirement to cite those papers. I try to be clear that it is in this spirit of building on existing scholarship that I make these recommendations and to make the decision of whether or not to cite them up to the author. As an editor who has decision-making power, I know that my recommendations can be interpreted as requirements (or a wise path to follow for authors eager to publish) but I can say that I have not further pressured an author whose revision fails to cite a paper I recommended.

I also have referred to authors’ reference lists as a further indication that a paper’s content is not appropriate for the journal I edit. Although never the sole indicator and never based only on citations to the specific journal I edit, if a paper is framed without any reference to the existing literature across journals in the field then it is a sign to me that the authors should seek a different venue.

I value the concerns that have been raised here, and I certainly would be open to ideas to better guide my own practices.

European Sociological Review (2013)

In a decision letter notifying the author of a minor revise-and-resubmit, the editor wrote that the author had left out of the references some recent, unspecified, publications in ESR and elsewhere (also unspecified) and suggested the author update the references.

Response from editor Melinda Mills:

I welcome the debate about academic publishing in general, scrutiny of impact factors and specifically of editorial practices.  Given the importance of publishing in our profession, I find it surprising how little is actually known about the ‘black box’ processes within academic journals and I applaud the push for more transparency and scrutiny in general about the review and publication process.  Norms and practices in academic journals appear to be rapidly changing at the moment, with journals at the forefront of innovation taking radically different positions on editorial practices. The European Sociological Review (ESR) engages in rigorous peer review and most authors agree that it strengthens their work. But there are also new emerging models such as Sociological Science that give greater discretion to editors and focus on rapid publication. I agree with Cohen that this debate is necessary and would be beneficial to the field as a whole.

It is not a secret that the review and revision process can be a long (and winding) road, both at ESR and most sociology journals. If we go through the average timeline, it generally takes around 90 days for the first decision, followed by authors often taking up to six months to resubmit the revision. This is then often followed by a second (and sometimes third) round of reviews and revision, which in the end leaves us at ten to twelve months from original submission to acceptance. My own experience as an academic publishing on other journals is that it can regularly exceed one year. During the year under peer review and revisions, relevant articles have often been published.  Surprisingly, few authors actually update their references or take into account new literature that was published after the initial submission. Perhaps this is understandable, since authors have no incentive to implement any changes that are not directly requested by reviewers.

When there has been a particularly protracted peer review process, I sometimes remind authors to update their literature review and take into account more recent publications, not only in ESR but also elsewhere.  I believe that this benefits both authors, by giving them greater flexibility in revising their manuscripts, and readers, by providing them with more up-to-date articles.  To be clear, it is certainly not the policy of the journal to coerce authors to self-cite ESR or any other outlets.  It is vital to note that we have never rejected an article where the authors have not taken the advice or opportunity to update their references and this is not a formal policy of ESR or its Editors.  If authors feel that nothing has happened in their field of research in the last year that is their own prerogative.  As authors will note, with a good justification they can – and often do – refuse to make certain substantive revisions, which is a core fundament of academic freedom.

Perhaps a more crucial part of this debate is the use and prominence of journal impact factors themselves both within our discipline and how we compare to other disciplines. In many countries there is a move to use these metrics to distribute financing to Universities, increasing the stakes of these metrics. It is important to have some sort of metric gauge of the quality and impact of our publications and discipline. But we also know that different bibliometric tools have the tendency to produce different answers and that sociology fairs relatively worse in comparison to other disciplines. Conversely, leaving evaluation of research largely weighted by peer review can produce even more skewed interpretations if the peer evaluators do not represent an international view of the discipline. Metrics and internationally recognized peer reviewers would seem the most sensible mix.

Work and Occupations (2010-2011 period)

“I would like to accept your paper for publication on the condition that you address successfully reviewer X’s comments and the following:

2. The bibliography needs to be updated somewhat … . Consider citing, however critically, the following Work and Occupations articles on the italicized themes:

[concept: four W&O papers, three from the previous two years]

[concept: two W&O papers from the previous two years]

The current editor, Dan Cornfield, thanked me and chose not to respond for publication.

Sociological Forum (2014-2015 period)

I am pleased to inform you that your article … is going to press. …

In recent years, we published an article that is relevant to this essay and I would like to cite it here. I have worked it in as follows: [excerpt]

Most authors find this a helpful step as it links their work into an ongoing discourse, and thus, raises the visibility of their article.

Response from editor Karen Cerulo:

I have been editing Sociological Forum since 2007. I have processed close to 2500 submissions and have published close to 400 articles. During that time, I have never insisted that an author cite articles from our journal. However, during the production process–when an article has been accepted and I am preparing the manuscript for the publisher–I do sometimes point out to authors Sociological Forum pieces directly relevant to their article. I send authors the full citation along with a suggestion as to where the citation be discussed or noted. I also suggest changes to key words and article abstracts. My editorial board is fully aware of this strategy. We have discussed it at many of our editorial board meetings and I have received full support for this approach. I can say, unequivocally, that I do not insist that citations be added. And since the manuscripts are already accepted, there is no coercion involved. I think it is important that you note that on any blog post related to Sociological Forum.

I cannot tell you how often an author sends me a cover letter with their submission telling me that Sociological Forum is the perfect journal for their research because of related ongoing dialogues in our pages. Yet, in many of these cases, the authors fail to reference the relevant dialogues via citations. Perhaps editors are most familiar with the debates and streams of thought currently unfolding in a journal. Thus, I believe it is my job as editor and my duty to both authors and the journal to suggest that authors consider making appropriate connections.

Unnamed journal (2014)

An article was desk-rejected — that is, rejected without being sent out for peer review — with only this explanation: “In light of the appropriateness of your manuscript for our journal, your manuscript has been denied publication in X.” When the author asked for more information, a journal staff member responded with possible reasons, including that the paper did not include any references to the articles in that journal. In my view the article was clearly within the subject area of the journal. I didn’t name the journal here because this wasn’t an official editor’s decision letter and the correspondence only suggested that might be the reason for the rejection.

Sociological Quarterly (2014-2015 period)

In a revise and resubmit decision letter:

Finally, as a favor to us, please take a few moments to review back issues of TSQ to make sure that you have cited any relevant previously published work from our journal. Since our ISI Impact Factor is determined by citations, we would like to make sure papers under consideration by the journal are referring to scholarship we have previously supported.

The current editors, Lisa Waldner and Betty Dobratz, have not yet responded.

Canadian Review of Sociology (2014-2015 period)

In a letter communicating acceptance conditional on minor changes, the editor asked the author to consider citing “additional Canadian Review of Sociology articles” to “help with the journal’s visibility.”

Response from current editor Rima Wilkes:

In the case you cite, the author got a fair review and received editorial comments at the final stages of correction. The request to add a few citations to the journal was not “coercive” because in no instance was it a condition of the paper either being reviewed or published.

Many authors are aware of, and make some attempt to cite the journal to which they are submitting prior to submission and specifically target those journals and to contribute to academic debate in them.

Major publications in the discipline, such as ASR, or academia more generally, such as Science, almost never publish articles that have no reference to debates in them.

Bigger journals are in the fortunate position of having authors submit articles that engage with debates in their own journal. Interestingly, the auto-citation patterns in those journals are seen as “natural” rather than “coerced”. Smaller journals are more likely to get submissions with no citations to that journal and this is the case for a large share of the articles that we receive.

Journals exist within a larger institutional structure that has certain demands. Perhaps the author who complained to you might want to reflect on what it says about their article and its potential future if they and other authors like them do not engage with their own work.

Social Science Research (2015)

At the end of a revise-and-resubmit memo, under “Comment from the Editor,” the author was asked to include “relevant citations from Social Science Research,” with none specified.

The current editor, Stephanie Moller, has not yet responded.

City & Community (2013)

In an acceptance letter, the author was asked to approve several changes made to the manuscript. One of the changes, made to make the paper more conversant with the “relevant literature,” added a sentence with several references, one or more of which were to City & Community papers not previously included.

One of the current co-editors, Sudhir Venkatesh, declined to comment because the correspondence occurred before the current editorial team’s tenure began.

Discussion

The Journal Impact Factor (JIF) is an especially dysfunctional part of our status-obsessed scholarly communication system. Self-citation is only one issue, but it’s a substantial one. I looked at 116 journals classified as sociology in 2014 by Web of Science (which produces the JIF), excluding some misplaced and non-English journals. WoS helpfully also offers a list excluding self-citations, but normal JIF rankings do not make this exclusion. (I put the list here.) On average, removing self-citations reduces the JIF by 14%. But there is a lot of variation. One would expect specialty journals to have high self-citation rates because the work they publish is closely related. Thus Armed Forces and Society has a 31% self-citation rate, and Work & Occupations is also high at 25%. But others, like Gender & Society (13%) and Journal of Marriage and Family (15%), are not. On the other hand, you would expect high-visibility journals to have high self-citation rates, if they publish better, more important work; but on this list the correlation between JIF and self-citation rate is -.25. Here is that relationship for the top 50 journals by JIF, with the top four by self-citation labeled (the three top-JIF journals at bottom-right are American Journal of Sociology, Annual Review of Sociology, and American Sociological Review).

[Scatterplot: self-citation rate by Journal Impact Factor for the top 50 sociology journals, with the top four self-citers labeled]
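For readers who want to check or extend these numbers, here is a minimal sketch of the calculation in Python. It assumes a hypothetical spreadsheet with one row per journal; the file and column names are placeholders, not the actual Web of Science export fields.

    # Minimal sketch of the self-citation calculation described above, assuming
    # a hypothetical CSV with one row per journal and columns "journal", "jif",
    # and "jif_excl_self" (the JIF with journal self-citations removed). These
    # names are placeholders, not the actual Web of Science fields.
    import pandas as pd

    df = pd.read_csv("sociology_journals_2014.csv")

    # Share of each journal's JIF that comes from citations to itself
    df["self_cite_rate"] = 1 - df["jif_excl_self"] / df["jif"]

    # Average reduction in the JIF when self-citations are excluded
    print("Mean JIF reduction from excluding self-citations:",
          round(df["self_cite_rate"].mean(), 3))

    # Correlation between JIF and self-citation rate among the top 50 journals
    top50 = df.nlargest(50, "jif")
    print("Correlation (top 50):",
          round(top50["jif"].corr(top50["self_cite_rate"]), 2))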

The top four self-citers are low-JIF journals. Two of them are mentioned above, but I have no idea what role self-citation encouragement plays in that. There are other weird distortions in JIFs that may or may not be intentional. Consider the June 2015 issue of Sociological Forum, which includes a special section, “Commemorating the Fiftieth Anniversary of the Civil Rights Laws.” That issue, just a few months old, as of yesterday includes the 9 most-cited articles that the journal published in the last two years. In fact, these 9 pieces have all been cited 9 times, all by each other — and each article currently has the designation of “Highly Cited Paper” from Web of Science (with a little trophy icon). The December 2014 issue of the same journal also gave itself an immediate 24 self-citations for a special “forum” feature. I am not suggesting the journal runs these forum discussion features to pump up its JIF, and I have nothing bad to say about their content — what’s wrong with a symposium-style feature in which the authors respond to each other’s work? But these cases illustrate what’s wrong with using citation counts to rank journals. As Martin’s piece explains, the JIF is highly susceptible to manipulation beyond self-citation promotion, for example by tinkering with the pre-publication queue of online articles, publishing editorial review essays, and of course outright fraud.

Anyway, my opinion is that journal editors should never add or request additional citations without clearly stated substantive reasons related to the content of the research and unrelated to the journal in which they are published. I realize that reasonable people disagree about this — and I encourage readers to respond in the comments below. I also hope that any editor would be willing to publicly stand by their practices, and I urge editors and journal management to let authors and readers see what they’re doing as much as possible.

However, I also think our whole journal system is pretty irreparably broken, so I put limited stock in the idea of improving its operation. My preference is to (1) fire the commercial publishers, (2) make research publication open-access with a very low bar for publication, and (3) create an organized system of post-publication review to evaluate research quality, with (4) republishing or labeling by professional associations to promote what’s most important.

* Some relevant posts cover long review delays for little benefit; the problem of very similar publications; the harm to science done by arbitrary print-page limits; gender segregation in journal hierarchies; and how easy it is to fake data.


Is the New York Times trapped in an economics echo chamber?

Ask a stupid question.

When Justin Wolfers wrote about the dominance of economists in the pages of the New York Times, he concluded, “our popularity reflects the discerning tastes of our audience in the marketplace of ideas.” I discussed the evidence for that in this post, which focused on the particular organizational features of the NYT. At the time it didn’t occur to me that his data — relying on uses of “economist” in the paper — would be corrupted by false attributions. So this is a small data story and a larger point.

The small data story comes from a personal reflection by Dionne Searcey, who wrote about work-family conflict in her new post as West Africa Bureau Chief for the NYT. It was a perfectly reasonable piece, except for one thing:

Much has been written about work-life balance, about women getting ahead in their careers and trying to have it all. I often find that if you scratch beneath the surface of many successful working moms, they have husbands who work from home or have flexible schedules and possibly a trust fund. Or in many cases, you find a mom who does more than her fair share at home — or at least feels as if she does. Economists have a name for it, “the second shift.”

Wait, “economists”? The Second Shift is a classic work of sociology by Arlie Hochschild and Anne Machung first published in 1989 and revised twice. Why “economists”? The (very good) article that Searcey linked to was called, “The Second Shift: Men Do More at Home, but Not as Much as They Think,” written by journalist Claire Cain Miller, focusing principally on the research of several sociologists, led by Jill Yavorsky (a sociology PhD candidate at Ohio State with whom I have collaborated). There are no economists cited or quoted in the story.

The small data story is that this mention of economists will go into Wolfers’ count of the influence of economists in the marketplace of ideas, but it’s a false positive — it’s the influence of sociologists being falsely attributed to economists.

But why would Searcey say “economists”? The answer lies in the organizational culture of the NYT. Here’s why.

Here are my two tweets on the piece:

Considerately, Searcey replied:

How odd. When I pointed out again that the story she linked to was about sociologists talking about the second shift, she didn’t reply.

I recently wrote that economists don’t cite sociologists’ work as much as sociologists cite economists even when the two groups are working on the same questions with obvious implications for both. What about the second shift? A JSTOR search reveals 473 cases of “second shift” and “housework” in journals identified as sociology by the database. The same search in the realm of economics produces just 35 mentions (no fewer than 6 of which were written by sociologists).

So, why did Searcey think she “was referring to how economists talk about the second shift”? My only explanation is that it’s because the piece was published in the NYT section The Upshot. As I wrote in my Contexts post, Upshot

is edited by David Leonhardt, who was an economics columnist before he was promoted to Washington bureau chief in 2011. That promotion was a dramatic move, elevating an economics writer who hadn’t been a Washington political reporter. Upshot is a “data journalism” hub, which often (but not always) implies an economic focus. (On the opinion pages, economist Paul Krugman writes a column twice a week, and Joseph Stiglitz moderated a long series on inequality.) This can’t be the whole story, but in broad strokes it’s fair to say the paper as an organization moved in the direction of business and economics.

Upshot is, of course, where Wolfers was writing in praise of the idea-market power of economists. Is this just the free market of ideas allowing the most persuasive to rise to the top? Searcey’s error suggests that it is not. Rather, the organizational status of economics has corrupted her perceptions so that if something appears there, she simply believes it reflects economics (and no editor notices).

Incidentally, David Leonhardt (whom I’ve written about several times) has been promoted to Op-Ed page columnist and associate editorial page editor.


Mary lives? (You’re welcome edition)

Things are looking up since last I wrote about the fate of the name Mary. It’s too early to tell, but it’s just possible things are beginning to turn around.

In 2014, Mary held steady at 120th among the most popular U.S. girls’ names, as recorded by the Social Security Administration. That’s two years she’s been above her worst-ever showing of 123rd in 2012. Here’s the trend, starting with her last year at Number One, 1961:

[Chart: Mary’s rank among U.S. girls’ names, 1961-2014]

You may recall that I first breathlessly reported Mary’s fall in 2009, when she dropped out of the top 100 U.S. girls’ names for the first time in recorded history (presumably ever). At the time I also speculated that she might have a chance of bouncing back, especially given the historical precedent of Emma, currently enjoying a rare return to Number One:

[Chart: Emma’s rank over time, showing her return to Number One]

Note that Emma had about 10 years of uncertainty before definitively tracking upward. With just a couple years of stall it’s way too early to write Mary’s triumph narrative, but you have to weight her odds of recovery higher than average because of the whole Christianity thing — especially with Catholics, who are holding their own amidst the general crisis of Christ.


What is the basis for a potential Mary revival? We have seen before that popular events can hurt a name (Forrest, Monica, Ellen), or help a name (Maggie, Brandy, Angie, and my favorite, Rhiannon). In this case historians may someday date the resurgence of Mary to the appearance in 2012 — her worst year ever — of my essay in The Atlantic with the memorable illustration:

[Illustration from the Atlantic essay]

Call it a classic bottoming out.


Update: Adjusted divorce risk, 2008-2014

Quick update to yesterday’s post, which showed a declining refined divorce rate for the years 2008-2014.

On Twitter Kelly Raley suggested this could have to do with increasing education levels among married people. As I’ve reported using these data before, there is a much lower divorce risk for people with BA degrees or higher education.

Yesterday I quickly (but I hope accurately) replicated my basic model from that previous paper, so now I can show the trend as a marginal effect of year holding constant marital duration (from year of marriage), age, education, race/ethnicity, and nativity.*
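For the curious, here is a rough sketch of this kind of model, written in Python with statsmodels rather than the Stata I actually used (see the footnote below); the data file and variable names are hypothetical, and it reports adjusted odds ratios for each year relative to 2008 rather than the plotted marginal effects.

    # Rough sketch (not my actual Stata code) of an adjusted divorce trend:
    # a logistic regression of whether a divorce occurred in the past year on
    # survey-year dummies, controlling for marital duration, age, education,
    # race/ethnicity, and nativity. File and variable names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("acs_married_2008_2014.csv")  # hypothetical person-level extract

    res = smf.logit(
        "divorced ~ C(year) + C(duration) + age + C(education) + C(raceth) + C(nativity)",
        data=df,
    ).fit()

    # Adjusted odds of divorce in each year relative to 2008 (the omitted category)
    year_terms = res.params[res.params.index.str.startswith("C(year)")]
    print(np.exp(year_terms).round(3))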

[Chart: adjusted odds of divorce by year, 2008-2014]

This shows that there has been a decrease in the adjusted odds of divorce from 2008 to 2014. You could interpret this as a continuous decline with a major detour caused by the recession, but that case is weaker than it was yesterday, when I was looking at just the unadjusted trend.

If it turns out that increase in 2010-2012 is related to the recession, it’s not so different from my original view — a recession drop followed by a rebound — it’s just that the drop is smaller and the rebound is bigger, and took longer, than I thought. In any event, this should undermine any effort to resuscitate the old idea that the recession caused a decline in divorce by causing families to pull together during troubled times.

This does not contradict the results from Kennedy and Ruggles that show age-adjusted divorce rising between 1980 and 2008, since I’m not trying to compare these ACS trends with the older data sources. For time beyond 2008, they wrote in that paper:

If current trends continue, overall age-standardized divorce rates could level off or even decline over the next few decades. We argue that the leveling of divorce among persons born since 1980 probably reflects the increasing selectivity of marriage.

That would fit the idea of a long-term decline with a stress-induced recession bounce (with real-estate delay).

Alternative interpretations welcome.

* This takes a really long time for Stata to compute on my sad little public-university computer because it’s a non-linear model with 4.8 million cases – so please don’t ask for a lot of different iterations of this figure. I don’t have my code and output cleaned up for sharing, but if you ask me I’ll happily send it to you.


Divorce rate plunge continues

When I analyzed divorce and the recession in this paper, I only had data from 2008 to 2011. Using a model based on the predictors of divorce in 2008, I thought there had been a drop in divorces associated with the recession in 2009, followed by a rebound back to the “expected level” by 2011. So, the recession reduced divorces, perhaps temporarily.

That was looking iffy when the 2013 data showed a big drop in the divorce rate, as I reported last year. With new data now out from the 2014 American Community Survey, that story is seeming less and less adequate. With another deep drop in 2014, now it looks like divorce rates are on a downward slide, but in the years after the recession there was a bump up — so maybe recession-related divorces (e.g., those related to job loss or housing market stressors) took a couple years to materialize, producing a lull in the ongoing plunge. Who knows.

So, here is the latest update, showing the refined divorce rate — that is, the number of divorces in each year per 1,000 married people in that year.*

[Chart: refined divorce rate by sex, 2008-2014]

Lots to figure out here. (As for why men and women have different divorce rates in the ACS, I still haven’t been able to figure that out; these are self-reported divorces, so there’s no rule that they have to match up [and same-sex divorces aren’t it, I think.])

For the whole series of posts, follow the divorce tag.

* I calculate this using the married population from table B12001, and divorces in the past year from table B12503, in the American Factfinder version of the ACS data.
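For anyone who wants to reproduce the arithmetic, here is a minimal sketch in Python; the file and column names are hypothetical placeholders for the two American FactFinder downloads.

    # Minimal sketch of the refined divorce rate described above: divorces in
    # the past year (ACS table B12503) per 1,000 currently married people
    # (ACS table B12001). File and column names are hypothetical placeholders.
    import pandas as pd

    married = pd.read_csv("B12001_married.csv")    # columns: year, sex, married_pop
    divorces = pd.read_csv("B12503_divorces.csv")  # columns: year, sex, divorces_past_year

    rates = married.merge(divorces, on=["year", "sex"])
    rates["refined_divorce_rate"] = 1000 * rates["divorces_past_year"] / rates["married_pop"]
    print(rates[["year", "sex", "refined_divorce_rate"]].sort_values(["sex", "year"]))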


Sociology: “I love you.” Economics: “I know.”

Sour grapes, by Sy Clark. https://flic.kr/p/yFT3a

A sociologist who knows how to use python or something could do this right, but here’s a pilot study (N=4) on the oft-repeated claim that economists don’t cite sociology while sociologists cite economics.

I previously wrote about the many sociologists citing economist Gary Becker (thousands), compared with, for example, the 0 economists citing the most prominent article on the gender division of housework by a sociologist (Julie Brines). Here’s a little more.

It’s hard to frame the general question in terms of numerators and denominators — which articles should cite which, and what is the universe? To simplify it I took four highly-cited papers that all address the gender gap in earnings: one economics and one sociology paper from the early 1990s, and one of each from the early 2000s. These are all among the most-cited papers with “gender” and “earnings OR wages” in the title from journals listed as sociology or economics by Web of Science.

From the early 1990s:

  • O’Neill, J., and S. Polachek. 1993. “Why the Gender-gap in Wages Narrowed in the 1980s.” Journal of Labor Economics 11 (1): 205–28. doi:10.1086/298323. Total cites: 168.
  • Petersen, T., and L.A. Morgan. 1995. “Separate and Unequal: Occupation Establishment Sex Segregation and the Gender Wage Gap.” American Journal of Sociology 101 (2): 329–65. doi:10.1086/230727. Total cites: 196.

From the early 2000s:

  • O’Neill, J. 2003. “The Gender Gap in Wages, circa 2000.” American Economic Review 93 (2): 309–14. doi:10.1257/000282803321947254. Total cites: 52.
  • Tomaskovic-Devey, D., and S. Skaggs. 2002. “Sex Segregation, Labor Process Organization, and Gender Earnings Inequality.” American Journal of Sociology 108 (1): 102–28. Total cites: 81.

A smart way to do it would be to look at the degrees or appointments of the citing authors, but that’s a lot more work than just looking at the journal titles. So I just counted journals as sociology or economics according to my own knowledge or the titles.* I excluded interdisciplinary journals unless I know they are strongly associated with sociology, and I excluded management and labor relations journals. In both of these types of cases you could look at the people writing the articles for more fidelity. In the meantime, you may choose to take my word for it that excluding these journals didn’t change the basic outcome much. For example, although there are some economists writing in the excluded management and labor relations journals (like Industrial Labor Relations), there are a lot of sociologists writing in the interdisciplinary journals (like Demography and Social Science Quarterly), and also in the ILR journals.
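Since someone who knows python could do this right, here is a minimal sketch of the counting rule just described; the keyword lists and example journals are illustrative only, not the actual classification I used.

    # Minimal sketch of the counting rule described above: classify each citing
    # journal as sociology or economics from keywords in its title, skipping
    # interdisciplinary and management/labor relations outlets. The keyword
    # lists and example journals are illustrative, not the actual data used here.
    SOC_KEYWORDS = ("sociolog", "social forces", "gender & society")
    ECON_KEYWORDS = ("econom", "journal of labor")
    EXCLUDE_KEYWORDS = ("demography", "social science quarterly", "industrial", "management")

    def classify(journal):
        name = journal.lower()
        if any(k in name for k in EXCLUDE_KEYWORDS):
            return "excluded"
        if any(k in name for k in SOC_KEYWORDS):
            return "sociology"
        if any(k in name for k in ECON_KEYWORDS):
            return "economics"
        return "other"

    def share_citing(citing_journals, discipline):
        """Share of all citing papers appearing in journals of the given discipline."""
        labels = [classify(j) for j in citing_journals]
        return labels.count(discipline) / len(citing_journals)

    # Hypothetical example: journals citing one focal article
    cites = ["American Sociological Review", "Journal of Labor Economics", "Demography"]
    print(round(share_citing(cites, "sociology"), 2))  # 1 of 3 citing papers = 0.33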

Results

Citations to the economics articles from sociology journals:

  • O’Neill and Polachek (1993): 37 / 168 = 22%
  • O’Neill (2003): 4 / 52 = 8%

Citations to the sociology articles from economics journals:

  • Petersen and Morgan (1995): 6 / 196 = 3%
  • Tomaskovic-Devey and Skaggs (2002): 0 / 81 = 0%

So, there are 41 sociology papers citing the economics papers, and 6 economics papers citing the sociology papers.

Worth noting also that the sociology journals citing these economics papers are the most prominent and visible in the discipline: American Sociological Review, American Journal of Sociology, Annual Review of Sociology, Social Forces, Sociology of Education, and others. On the other hand, there are no citations to the sociology articles in top economics journals, with the exception of an article in Journal of Economic Perspectives that cited Petersen and Morgan — but it was written by sociologists Barbara Reskin and Denise Bielby. Another, in Feminist Economics, was written by sociologist Harriet Presser. (I included these in the count of economics journals citing the sociology papers.)

These four articles are core work in the study of labor market gender inequality, they all use similar data, and they are all highly cited. Some of the sociology cites of these economics articles are critical, surely, but there’s (almost) no such thing as bad publicity in this business. Also, the pattern does not reflect a simple theoretical difference, with sociologists focused more on occupational segregation (although that is part of the story), as the economics articles use occupational segregation as one of the explanatory factors in the gender gap story (though they interpret it differently).

Anyways.

Previous sour-grapes stuff about economics and sociology:

Note:

* The Web of Science categories are much too imprecise: for example, Work & Occupations — almost entirely a sociology journal — is classified as both sociology and economics.
