Joe Pinsker at the Atlantic has a piece out on the coming (probable) baby bust. In it he reviews existing evidence for a coming decline in births as a result of the pandemic, especially including historical comparisons and Google search data. Could we see this already?
The baby bust isn’t expected to begin in earnest until December. And it could take a bit longer than that, Sarah Hayford, a sociologist at Ohio State University, told me, if parents-to-be didn’t adjust their plans in response to the pandemic immediately back in March, when its duration wasn’t widely apparent.
If people changed their plans immediately in February, we might see a decline in births in October, but Hayford is right that even that would be early. And what about September, for which I’ve already observed declining births in Florida and California? If people who were already pregnant in January had miscarriages or abortions because of the pandemic, that would result in fewer births in September, but how big could that effect be? So maybe the Florida and California data are flukes, or data errors, or lots of pregnant people left those states and gave birth elsewhere (or pregnant people who normally come didn’t arrive). Perhaps more likely is that 2020 was already going to be a down year. As I told Pinsker:
“It might actually be that we were already heading for a record drop in births this year … If that’s the case, then birth rates in 2021 are probably going to be even more shockingly low.”
Anyway, we’ll find out soon enough. And to that end I’ve started assembling a dataset of monthly births where I can find them, which so far includes Florida, California, Oregon, Arizona, North Carolina, Ohio, Hawaii, Sweden, Finland, Scotland, and the Netherlands, to varying degrees of timeliness. As of today we have October data for some of them:
As of now Florida and California remain the strongest cases for a pandemic effect. But they are also both likely to add some more births to October (in November’s report, California increased the September number by 3%).
Anyway, lots of speculation while we’re killing time. You can get the little dataset here on the Open Science Framework: https://osf.io/pvz3g/. Check the date on the .csv or .xlsx file to see when I last updated it. I’ll add more countries or states if I find out about them.
Here’s the 2020 update of a series I started in 2013. This year, after the basic facts, I’ll add some pandemic facts below.
Is it true that “facts are useless in an emergency”? I guess we’ll find out this year. Knowing basic demographic facts, and how to do arithmetic, lets us ballpark the claims we are exposed to all the time. The idea is to get your radar tuned to identify falsehoods as efficiently as possible, to prevent them from spreading and contaminating reality. Although I grew up on “facts are lazy and facts are late,” I actually still believe in this mission; I just shake my head slowly while I ramble on about it (and tell the same stories over and over).
It started a few years ago with the idea that the undergraduate students in my class should know the size of the US population. Not to exaggerate the problem, but too many of them don’t, at least when they reach my sophomore level family sociology class. If you don’t know that fact, how can you interpret statements like, “The U.S. economy lost a record 20.5 million jobs in April”?
Everyone likes a number that appears to support their perspective. But that’s no way to run (or change) a society. The trick is to know the facts before you create or evaluate an argument, and for that you need some foundational demographic knowledge. This list of facts you should know is just a prompt to get started in that direction.
These are demographic facts you need just to get through the day without being grossly misled or misinformed — or, in the case of journalists or teachers or social scientists, not to allow your audience to be grossly misled or misinformed. Not trivia that makes a point or statistics that are shocking, but the non-sensational information you need to make sense of those things when other people use them. And it’s really a ballpark requirement (when I test the undergraduates, I give them credit if they are within 20% of the US population — that’s anywhere between 264 million and 396 million!).
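That 20% ballpark band is easy to check with a little arithmetic. Here is a minimal sketch, assuming a US population of 330 million (the function name and tolerance are mine, just for illustration):

```python
def within_ballpark(guess, true_value=330_000_000, tolerance=0.20):
    """Return True if a guess falls within +/- tolerance of the true value."""
    low = true_value * (1 - tolerance)
    high = true_value * (1 + tolerance)
    return low <= guess <= high

# The 20% band around 330 million runs from 264 million to 396 million.
print(within_ballpark(300_000_000))  # True
print(within_ballpark(150_000_000))  # False: less than half the actual population
```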
This is only a few dozen facts; the list is not exhaustive, but they belong on any top-100 list. Feel free to add your facts in the comments (as per policy, first-time commenters are moderated). They are rounded to reasonable units for easy memorization. All refer to the US unless otherwise noted. Most of the links will take you to the latest data:
The pandemic is changing everything. A lot of the numbers above may look different next year. Here are 21 basic pandemic facts to keep in mind — again, the point is to get a sense of scale, to inform your consumption of the daily flow of information (and disinformation). These are changing, too, but they are current as of August 31, 2020.
Global confirmed COVID-19 cases: 25 million
Confirmed US COVID-19 cases: 6 million
Second most COVID-19 cases: Brazil, 3.9 million
Third most COVID-19 cases: India, 3.6 million
Global confirmed COVID-19 deaths: 850,000
Confirmed US COVID-19 deaths: 183,000
Second most COVID-19 deaths: Brazil, 121,000
Third most COVID-19 deaths: India, 65,000
Percent of U.S. COVID patients who have died: 3%
COVID-19 deaths per 100,000 Americans: 50
COVID-19 deaths per 100,000 non-Hispanic Whites: 43
COVID-19 deaths per 100,000 Blacks: 81
COVID-19 deaths per 100,000 Hispanics: 55
COVID-19 deaths per 100,000 Americans over age 65: 400
Annual deaths in the U.S. (these are for 2017): Total, 2.8 million
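The arithmetic behind these two kinds of figures (case fatality and rates per 100,000) is worth having at your fingertips. A sketch, with the function names my own and the second example using round made-up numbers rather than the figures above:

```python
def percent_of_cases_died(deaths, cases):
    """Case fatality among confirmed cases, as a percentage."""
    return 100 * deaths / cases

def rate_per_100k(deaths, population):
    """Deaths per 100,000 people in a group."""
    return 100_000 * deaths / population

# 183,000 US deaths out of 6 million confirmed cases is about 3%:
print(percent_of_cases_died(183_000, 6_000_000))  # 3.05

# A group with 5,000 deaths among 10 million people has a rate of 50 per 100,000:
print(rate_per_100k(5_000, 10_000_000))  # 50.0
```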
Catherine Rampell tweeted a link to a Zillow analysis showing 2.2 million adults ages 18-25 moving in with their parents or grandparents in March and April. Zillow’s Treh Manhertz estimates these move-homers would cost the rental market the better part of a billion dollars, or 1.4% of total rent if they stay home for a year.
We now have data through July from the Current Population Survey to work with, so I extended this forward, and did it differently. The CPS is the large monthly survey that the Census Bureau conducts for the Bureau of Labor Statistics, principally to track labor market trends. It also includes basic demographics and living arrangement information. Here is what I came up with.*
Among people ages 18-29, there is a large spike of living in the home of a parent or grandparent (of themselves or their spouse), which I’ll call “living at home” for short. This is apparent in a figure that compares 2020 with the previous 5 years (click figures to enlarge):
From February to April, the percentage of young adults living at home jumped from 43% to 48%, and then up to 49.4% in June and 48.7% in July. Clearly, this is anomalous. (I ran it back to 2008 just to make sure there were no similar jumps around the time of the last recession; in earlier years the rates were lower and there were no similar spikes.) This is a very large disturbance in the Force of Family Demography.
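A percentage like this is a weighted share computed from the microdata. Here is a minimal sketch of that calculation; the record fields, codes, and toy values are hypothetical stand-ins for the actual IPUMS-CPS variables (age, person weight, and a flag for living in the home of a parent or grandparent), not the real codebook:

```python
def share_living_at_home(records, min_age=18, max_age=29):
    """Weighted percent of 18-29 year-olds living with a parent/grandparent."""
    num = den = 0.0
    for r in records:
        if min_age <= r["age"] <= max_age:
            den += r["weight"]  # person-level survey weight
            if r["lives_with_parent"]:
                num += r["weight"]
    return 100 * num / den

# Toy microdata, not actual CPS records:
sample = [
    {"age": 21, "weight": 1500.0, "lives_with_parent": True},
    {"age": 24, "weight": 2000.0, "lives_with_parent": False},
    {"age": 35, "weight": 1800.0, "lives_with_parent": False},  # outside 18-29
]
print(round(share_living_at_home(sample), 1))  # 42.9
```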
To get a better sense of the magnitude of this event, I modeled it by age, sex, and race/ethnicity. Here are the estimated shares of adults living at home by age and sex. For this I use just July of each year, and compare 2020 with the pooled set of 2017-2019. This controls for race/ethnicity.
The biggest increase is among 21-year-olds, and women under 22 generally. These may be people coming home from college, losing their jobs or apartments, canceling their weddings, or coming home to help.
I ran the same models but broke out race/ethnicity instead (White, Black, and Latino, not separately by gender, as the samples get small).
This shows that the 2020 bounce is greatest for Black young adults (below age 26) and the levels are lowest for Latinos (remember that many Latinos are immigrants whose parents and grandparents don’t live in the US).
To show the total race/ethnic and gender pattern, here are the predicted levels of living at home, controlling for age:
The biggest 2020 bounce is among Black men, who have the highest overall level at 59%, while White women have the lowest at 45%.
In conclusion, millions of young adults are living with their parents and grandparents who would not be if 2020 were like previous years. The effect is most pronounced among Black young adults. Future research will have to determine which of the many possible disruptions to their lives is driving this event.
For scale, there are 51 million (non-institutionalized) adults ages 18-29 in the country. If 2020 was like the previous three years, I would expect there to be 22.2 million of them living with their parents. Instead there are 24.9 million living at home, an increase of 2.7 million from the expected number (numbers updated for July 2020). That is a lot of rent not being spent, but even with that cost savings I don’t think this is good news.
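The expected-versus-observed arithmetic in that paragraph is simple to lay out (figures rounded as in the text):

```python
population_18_29 = 51_000_000   # non-institutionalized adults ages 18-29
expected_at_home = 22_200_000   # projected from the previous years' rates
observed_at_home = 24_900_000   # July 2020 estimate

# The excess is the observed count minus the expected count:
excess = observed_at_home - expected_at_home
print(f"{excess / 1_000_000:.1f} million more than expected")  # 2.7 million more than expected
```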
* The IPUMS codebook, Stata code, spreadsheet, and figures are in an Open Science Framework project under CC0 license here: osf.io/2xrhc.
This updates a series of posts that have addressed gender in academic sociology, starting in 2011 and updated in 2015, along with various tweets (to see random fact tweets from me on Twitter, Google familyunequal “now you know”).
Gender in academic sociology is complicated because the profession is running pretty female these days, with more than half the U.S. PhDs going to women since 1994, and more than 60% overall since 1999. So although there are various kinds of exclusion, it’s not as simple as excluding women from the discipline, and the picture of women’s representation depends on the choice of denominators. For example, a recent report found that, in the top 100 U.S. sociology departments in 2012, women were 60% of the assistant professors, 54% of the associate professors, and 34% of the full professors. This probably reflects a combination of age and tenure, with this year’s full professors representing yesteryear’s hiring, as well as women having lower rates of progression up the hierarchy.
Also, feminists (myself included) cheer the entry of women into formerly male-only professions while bemoaning their concentration into female ghettos, but there is no bright line beyond which one process transforms into the other (don’t get me started on “tipping points”).
But however we want to interpret the trends, we have to know the trends. So here are some, starting with updates on previous reports (degrees, sections, elections), and then some new ones (journal articles, peer reviewers).*
The National Science Foundation reports the number and gender of PhD recipients by discipline (since 2006, earlier). This is what we get (smoothed with three-year averages): mostly more than 60% female since the late-1990s, with women accounting for most of the growth.
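The smoothing here is just a centered three-year moving average. A minimal sketch (the series values are made up for illustration):

```python
def three_year_average(series):
    """Centered three-year moving average; endpoints are left unsmoothed."""
    smoothed = list(series)
    for i in range(1, len(series) - 1):
        smoothed[i] = (series[i - 1] + series[i] + series[i + 1]) / 3
    return smoothed

shares = [58.0, 61.0, 60.0, 63.0, 62.0]  # made-up percent-female values by year
print([round(x, 1) for x in three_year_average(shares)])  # [58.0, 59.7, 61.3, 61.7, 62.0]
```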
Sociology is a very broad discipline, including people who specialize in many distinct substantive and methodological areas. Within the American Sociological Association (ASA), we divide into 49 sections, which serve as a mechanism to organize conferences and journals, and to give awards (people can belong to as many as they want, for a small charge). The sections are pretty segregated by gender. Here are the gender compositions of each, from Sex and Gender (86% female) to Mathematical Sociology (22%):
The ASA leadership is elected in annual balloting by the membership, which is open to anyone who wants to join as long as they claim some affiliation with the discipline (the price ranges from $51 for students to $377 for people with incomes over $150,000). The association elected its first president in 1906, its first woman in 1952 and its second woman in 1973. In the last 10 years 7 of the presidents have been women.
Is the shift toward women presidents because there are more women in the association, or in the hierarchy of the association, or because of the preference of the membership? The president, along with all the other elected officers, is selected for the ballot by a nominations committee. In recent years it has become conventional wisdom that men usually lose to women in these elections because ASA members vote for a woman if they don’t have a strong preference between candidates, but I don’t know how well-founded that perception is.
Here is the gender of candidates for top positions (president, vice president, secretary, and council members), and the gender of the winners, from 2007 to 2018. Note that in the last three years they have nominated fewer women, but except for 2016 the membership has voted for more women (with 2018 having the widest gap yet):
I’ve only done a little of this, but here is a quick look at the gender of authors in two of the highest-status sociology journals over the last several years: in American Sociological Review (an ASA journal published by Sage) and American Journal of Sociology (an independent journal published by the University of Chicago), 35% of authors in the last 11 issues have been women (by my assessment, regular articles only).
Someone could easily do a much more serious assessment of gender in sociology journal authorship.
For the last few months I have been working on peer review in sociology and the social sciences — how it works, how it doesn’t, and how it might be improved. (Here are slides from a talk I gave, with Micah Altman at MIT). One of my concerns about peer review is its general lack of accountability; no one supervises the process, generally, as the only person who knows everything at a journal is the editor, and the only thing the public sees is the published outcome. And yet publication peer review determines all manner of statuses in academia.
Looking for externally accessible data that might shine a light on the process, I checked for reviewer acknowledgement lists, which some journals publish at the end of a volume (apparently lots of journals don’t, including Social Forces, Sociological Methods and Research, Social Science Research, Sociological Forum, Sociological Theory, Work & Occupations, Social Currents, and Mobilization). I used the genderize.io API to count the genders of the reviewers, using 80% confidence for the first pass, and then personally checked or Googled the remaining names (I didn’t do all the names, but almost).
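The 80%-confidence first pass can be sketched as a simple threshold rule. A genderize.io response looks roughly like {"name": ..., "gender": ..., "probability": ...}; treat the exact field names here as an assumption to check against the current API documentation, and the function itself as my illustration, not the actual workflow:

```python
def classify_name(response, threshold=0.80):
    """Accept the API's guess above the confidence threshold; otherwise flag for manual checking."""
    gender = response.get("gender")
    probability = response.get("probability", 0.0)
    if gender is not None and probability >= threshold:
        return gender
    return None  # below threshold: check or Google the name by hand

print(classify_name({"name": "sarah", "gender": "female", "probability": 0.98}))  # female
print(classify_name({"name": "jamie", "gender": "male", "probability": 0.52}))    # None
```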
The reviewer gender shares are a little higher for ASR and AJS than they were for authors, with the former having somewhat more women. Publication in one of these two journals is probably the most important gatekeeping mechanism to the upper echelons of the discipline. The methods journal has the lowest representation of women, and the gender journal the highest. Unknown here is the proportion of women among the pool of reviewers solicited by the editors.
So, that’s my report.
* These data all treat gender as binary, either because the data were reported that way, or because I coded them from names. I don’t address race, ethnicity, or other traits for the same reason.
Update: I’m delighted and gratified that we met the donation goal described below. Thank you.
It snuck up on me again, the anniversary of my cancer experience, which came and went, more or less, in 2008, ten years ago. Last year I wrote about the experience a little:
There is a reasonable chance I’d still be alive today if we had never biopsied the swollen lymph node in my thigh, but that’s hard to say, too. Median survival from diagnosis is supposed to be 10 years, but I had a good case (a rare stage I), and with all the great new treatments coming online the confidence in that estimate is fuzzy. Anyway, since the cancer was never identified anywhere else in my body, the treatment was just removing the lymph node and a little radiation (18 visits to the radiation place, a couple of tattoos for aiming the beams, all in the summer with no work days off). We have no way (with current technology) to tell if I still “have” it or whether it will come “back,” so I can’t yet say technology saved my life from this one (though if I’m lucky enough to die from something else — and only then — feel free to call me a cancer “survivor”).
It turns out that all this life saving also bequeaths a profound uncertainty, which leaves one with an uneasy feeling and a craving for antianxiety medication. I guess you have to learn to love the uncertainty, or die trying.
Unlike the anxiety I have now, the fear and sadness I felt that summer were almost overwhelming. Today, 10 years later, with no detectable disease (not that I’m looking), I am thinking of the millions of people who have no access to the kind of medical care I had, who face similar or worse medical conditions in infinitely worse social conditions.
I will match contributions to Doctors Without Borders up to $1000 for this campaign (plus the $80 or so GoFundMe will charge to collect it). It’s a small token of appreciation for my good fortune.
Here’s the GoFundMe link: Emergency Global Healthcare. I’ll make my contribution directly to Doctors Without Borders after it reaches $1000 or stops growing. Thank you for considering it.
And below is what I wrote on the five-year anniversary.
My 5-year cancerversary
I didn’t even register it right away. Five years ago this Memorial Day I got my diagnosis of follicular lymphoma, a form of non-Hodgkin’s lymphoma. It was late on the Friday afternoon when the surgeon called with the biopsy results. He never said the word “cancer,” but recommended I see an oncologist. He was a very nice guy, and told me I was going to live to be an old man. Within 15 minutes I had read that follicular lymphoma is usually incurable. (The UpToDate database I used now puts it this way: “most cases of follicular lymphoma are not curable with currently available therapies.”) It was a long long weekend.
Usually follicular lymphoma – a blood cancer – is advanced before it’s first discovered. In the next few weeks, one oncologist told me the median survival was between 10 and 20 years. I was 40 with a wife and 4-year-old daughter. I asked her why she was an oncologist. She said she was interested in end-of-life issues. Also, the nicest people get cancer.
Eventually we determined that I had what apparently was a rare case of Stage I, which may be curable. I had 18 days of painless radiation and didn’t (physically) miss a day of work. Lucky is a funny word for this.
Five years later I don’t have an oncologist anymore. It’s the first line on my medical chart but not a to-do list item. When we moved away, my Bayesian-minded oncologist wrote in his farewell note, using his best handwriting: “Your chance for cure is reasonable: pre-test probability is low. Early detection is not helpful. If you get an enlarged lymph node, get biopsied.” Maybe that’s oncology speak for: “Relax, good luck!”
Anyway, there were lots of people I never told, including the chair of my department and some good friends and colleagues. Maybe that’s because it went from incurable (yikes, too much information) to possibly-cured (so stop complaining already) so quickly – before the start of the new semester – so I didn’t know how to bring it up or what to say.
For most people with this disease, the story is different. Thankfully, we’ve had a revolution in lymphoma treatment, and it’s usually a very long story. Most people live many years, and I’m told the new treatments usually aren’t that bad. (Easy for me to say.) Chance of surviving (that is, dying from something else) is pretty good. Experts debate whether the word “cure” should be used more.
Meanwhile, now there are two kinds of people in the world: people with a better prognosis, and people with a worse prognosis. Of course that’s always been true. But this experience sometimes makes me dwell on that, which increases my tendency to draw a sharp resentment/sympathy line according to this criterion. That isn’t healthy because it obscures the more important bases upon which to relentlessly judge people and compare myself to them.
I’m writing this because I remembered how lonely and scared I felt back then – when I didn’t even know where on the scale to put myself. Nothing aggravates the modern identity like incalculable risk. Fortunately, I had the greatest family and friend support – and medical care – anyone could ask for. Life got back to normal. We adopted another daughter. There are other risks to worry about.
But I’m thinking that somewhere someone with no idea what to do next is getting news like I did and Googling “follicular lymphoma.” If that’s someone you know, or it is you, maybe it will help to know about one more person who’s still living about as normal a life as I was before. Feel free to drop me a note.
I don’t know where it came from, but sometime after the 2016 election the word craptastic started rolling around in my head. Eventually it congealed into the title of something I want to write.
Some people use craptastic to mean “so bad it’s good,” like bad food you love. But to me it’s that thing you say when you thought something was going well — maybe turning around from a bad situation — and it suddenly turns out to be even worse than you thought. An early use appears in a 2007 young adult novel called Two Foot Punch:
“Come on. Now that we know where Derek is, we can get help!”
“Not yet,” I say. My voice becomes weak, even for a whisper. “He told the guys that if anyone comes, or if something goes wrong, they’re going to kill Derek.” …
Rain leans against the duct, shaking her head. “Craptastic.”
The situation with Derek was bad, but then they found out where he was (lucky break!), but it turns out if they act on that he will be killed (craptastic!).
Dr. Jeffrey R. Gardere, Ph.D., a clinical psychologist, said some of his patients over the past nine months “have expressed much frustration, unhappiness and stress with the present political climate,” and that he is seeing increased instances of “dysphoria, and sometimes the related eating and sleeping interruptions.”
We all know this is happening. My theory for Craptastic is that the catastrophic thinking and uncontrollable feelings of impending doom go beyond the very reasonable reaction to the Trump shitshow that any concerned person would have, and reflect a sense that things are turning around in a suddenly serious way, rupturing what Anthony Giddens describes as the progress narratives of modernity people use to organize their identities. People thought things were sort of going to keep getting better, arc of the moral universe and all that, but suddenly they realize what a naive fantasy that was. It’s not just terrible, it’s craptastic.
If that’s true, I suppose, it would be felt more strongly by relatively privileged people, who had the luxury of believing their good lives were just a little ahead of the lives of those obviously much worse off, so being happy wasn’t a betrayal of humanity, it was just a little premature. Now, they feel not just bad, but worse. (My insider perspective on this is a plus, right?)
I suspect that if America lives to see this chapter of its decline written, Trump will not be as big a part of the story as it seems he is right now. And that impending realization is one reason for the Trump-inspired dysphoria that so many people are feeling.
* If you love this idea and want to help make it happen, please contact my agent. Or I guess be my agent.
Here’s my syllabus for Family Demography this semester. Play along at home!
I went for contemporary readings for most subjects, rather than classic readings. I’ll talk about the background myself, and I added an origin/impact analysis assignment, where students dig into the front end of the papers and figure out where they’re coming from – and then follow the citations to see where they went (if they’re not brand new). If I had my stuff together I’d have a better list of background readings as a supplement, but we have comprehensive exam readings lists for that, too. Anyway, we’ll see how that works.
I hope this is useful. Feel free to add your own supplemental readings and suggestions in the comments.
This course is designed to build knowledge on the key theories, empirical patterns, and contemporary debates in the study of family demography, with lesser attention to methodology. (Some students previously took my seminar Families and Modern Social Theory; those who haven’t may find interesting background material in that syllabus: http://www.terpconnect.umd.edu/~pnc/FMST-syllabus.pdf.)
Students are expected to read assigned material and write a response paper each week, and a summary essay or research report at the end of the semester. In addition, each student will do an origin/impact analysis of one of the assigned readings and make a brief presentation to the class. Evaluation will be based on participation, weekly writings, the presentation, and the final paper.
The principle of universal learning means that our classroom and our interactions should be as inclusive as possible. Your success in this class is important to me. If there are circumstances that may affect your performance in this class, please let me know as soon as possible so that we can work together to meet both your needs and the requirements of the course. Students with particular needs should contact the UMD Disability Support Service (http://www.counseling.umd.edu/DSS/), which will forward the necessary information to me. Please do it now instead of waiting till late in the semester.
Classroom conduct. Students should not come to class late, as this creates a distraction for those who are participating. If your schedule regularly does not permit you to be in class from beginning to end, do not take the course. Students who need to leave early should sit at the back and leave quietly. Students may not use laptops, tablet computers, or mobile phones in class. If you need to keep your phone handy in class, notify the professor in advance for an exception.
Discussion. We will discuss course readings and related material, as well as current events, social issues, and politics. Everyone is free to express personal opinions and disagree with others, including the professor – just raise your hand. All discussion must be polite and respectful, and differences of opinion are tolerated. The professor will work to ensure the classroom is a safe space for all of us to participate freely. Please let me know if you have any concerns or suggestions for accomplishing this.
Two selections from Families in an Era of Increasing Inequality (2015) edited by Paul R. Amato, Alan Booth, Susan M. McHale, and Jennifer Van Hook, 3–23. National Symposium on Family Issues 5. Springer International Publishing.
McLanahan, Sara, and Wade Jacobsen. “Diverging Destinies Revisited.”
Cohen, Philip N. 2015. “Divergent Responses to Family Inequality.”
The article is about an NBER working paper (not yet peer reviewed) by Daniel Hamermesh, Katie Genadek, and Michael Burda. It’s officially here, but I put a copy up in case you don’t have an NBER subscription. The analysis uses the American Time Use Survey to see whether time at work spent not working varies by race/ethnicity, and they find that it does. The abstract:
Evidence from the American Time Use Survey 2003-12 suggests the existence of small but statistically significant racial/ethnic differences in time spent not working at the workplace. Minorities, especially men, spend a greater fraction of their workdays not working than do white non-Hispanics. These differences are robust to the inclusion of large numbers of demographic, industry, occupation, time and geographic controls. They do not vary by union status, public-private sector attachment, pay method or age; nor do they arise from the effects of equal-employment enforcement or geographic differences in racial/ethnic representation. The findings imply that measures of the adjusted wage disadvantages of minority employees are overstated by about 10 percent.
When the Economist contacted me, I consulted several colleagues for their response. Reeve Vanneman pointed out that minority workers might slack off at work because they are discriminated against, and Liana Sayer pointed out that the activity measures in the ATUS may not be precise enough to say what, if any, “non-work” activity is actually contributing to the bottom line – the paper doesn’t detail what these “non-work” activities are. My own critique was that, before we start attributing work behavior to “culture,” we might consider whether work reporting behavior varies by “culture” as well (the ATUS uses self-reported time diaries). The authors did a little monkeying around with the General Social Survey to address that, but I found it unpersuasive.
Anyway, you can read the Economist article yourself. I would have preferred they killed the article, because I don’t think the paper sustains its conclusions, but they did a reasonable job of reporting it. And here are the full comments I sent them:
The analysis in the paper does not support the conclusion that wage disparities between blacks and whites are overstated. There just isn’t enough there to make that claim. As the authors note, the problem of differential reporting is an obvious concern. Their analysis of the “importance of work” questions in the GSS seems immaterial – it’s just not the same question.
This is exacerbated by the problem that they don’t describe the difference between work-related non-work activities and non-work-related non-work activities. We just don’t know enough about what they’re doing to draw the conclusion that the work-related activities are really productivity enhancing while the non-related activities are really not. (Consider trying to parse the effect of eating alone at your desk versus eating with a team-member in the cafeteria. Which is productivity enhancing?) It is always the case that jobs differ between blacks and whites in ways surveys do not capture – that’s the whole question of the wage gap. Controlling for things like industry and occupation helps but it’s the tip of the iceberg. For example, the difference between small and large employers, and between those with formal management procedures and those without, is not captured here.
Finally, consider the possibility of reverse-causality. What if blacks are discriminated against and paid less than whites for the same level of productivity – or treated poorly in other ways – a very reasonable hypothesis? Might that not lead those black workers to be less devoted to their employers, and spend more time on other things when no one is looking? I wouldn’t blame them.
In short, the paper uses a lot of ambiguous information, which is interesting and suggestive, to draw a conclusion that is not warranted. It’s part of a tradition in economics of assuming there must be some rational basis for pay disparities, and looking really hard to find it, rather than treating employer motivations more skeptically and trusting the voluminous evidence of racist bias in the labor market.
In the email exchange, they asked for followup on the evidence of racial bias, so I added this:
The best evidence of discrimination is from audit studies. This is one of the best. That author, Michael Gaddis at Penn State, can talk much more about it, but the point is that even when you can’t identify an individual act of racism, in the aggregate employer behavior shows a preference for whites — as we can tell by imposing experimental conditions in which the only thing different between resumes is the names. Other approaches include studying disparities in performance evaluation (e.g., this [by Marta Elvira and Robert Town]), or analyzing discrimination case files directly (e.g., this [by Ryan Light, Vincent Roscigno, and Alexandra Kalev]).
That all got reduced to this, in the article: “Worse treatment by managers of minority workers may itself encourage slacking, says Philip Cohen.” (Though they went on to cite evidence that workers work less when their managers are biased against them.)
On the other hand
As I think about it more, there is another important angle on this, which goes back to Reeve’s comment, and also something in the conclusion to the Economist article:
Within hours of publication, Mr Hamermesh received vitriolic messages and was labelled a racist in an online forum popular among economists. Mr Hamermesh, an avowed progressive, who refers to Donald Trump only by amusing nicknames and resigned from a post at the University of Texas over a state law permitting the open carrying of firearms, finds this unfair. He notes that Americans work too much. His preferred solution would not be for some groups to work more, but for others to work less.
There is an understandable anti-racist tendency to want to avoid a story of minority workers as lazy and shiftless – which is a character flaw. But there is a resistance story to tell as well, and the liberal anti-racist approach papers it over. For this, we need historian Robin D. G. Kelley, who wrote a brilliant paper called, “‘We Are Not What We Seem’: Rethinking Black Working-Class Opposition in the Jim Crow South” (free copy here). Here’s a relevant excerpt, in which he cites W. E. B. Du Bois:
Part of the reason [labor historians have not written more about workplace theft and sabotage by Southern Blacks], I think, lies in southern labor historians’ noble quest to redeem the black working class from racist stereotypes. In addition, company personnel records, police reports, mainstream white newspaper accounts, and correspondence have left us with a somewhat serene portrait of black folks who only occasionally deviate from what I like to call the “cult of true Sambohood.” The safety and ideological security of the South required that pilfering, slowdowns, absenteeism, tool breaking, and other acts of black working-class resistance be turned into ineptitude, laziness, shiftlessness, and immorality. But rather than reinterpret these descriptions of black working-class behavior, sympathetic labor historians are often too quick to invert the images, remaking the black proletariat into the hardest working, thriftiest, most efficient labor force around. Historians too readily naturalize the Protestant work ethic and project onto black working people as a whole the ideologies of middle-class and prominent working-class blacks. But if we regard most work as alienating, especially work done amid racist and sexist oppression, then a crucial aspect of black working-class struggle is to minimize labor with as little economic loss as possible. Let us recall one of Du Bois’s many beautiful passages from Black Reconstruction: “All observers spoke of the fact that the slaves were slow and churlish; that they wasted material and malingered at their work. Of course they did. This was not racial but economic. It was the answer of any group of laborers forced down to the last ditch. They might be made to work continuously but no power could make them work well.”
Working hard for the man’s benefit is not the only way to build character.
People using my book in their classes get excellent teaching materials from Norton to use. They also have a Facebook group for sharing ideas and materials (instructors visit here). For extra support, and to maximize timeliness, I also regularly update this list of blog posts that might help you with your course, whether or not you’re using my book.
As in previous lists, there are recent posts and some older favorites. Plenty of good material is still available in the 2013, 2014, and 2015 supplements. As always, I appreciate feedback on what works and what doesn’t.
Is dating still dead? The death of dating is now 50 years old, and it’s been eulogized so many times that its feelings are starting to get hurt.
Online dating: efficiency, inequality, and anxiety: I’m skeptical about efficiency, and concerned about inequality, as more dating moves online. Some of the numbers I use in this post are already dated, but this could be good for a debate about dating rules and preferences.
Is the price of sex too damn low? To hear some researchers tell it in a recent YouTube video, women in general — and feminism in particular — have ruined not only sex, but society itself. The theory is wrong. Also, they’re insanely sexist.
In response to an annoying conversation on Twitter about this short paper, which felt very familiar, here is an argument about the sex segregation of work, in the form of unsourced propositions of 140 characters or fewer. You can find most of these in longer form in various posts under the segregation tag. It’s a tweetstorm, in one post!
Many studies show men and women have mean differences in personality and preferences, although there is overlap in the distributions; but
Every respondent in any such study was born and raised in a male-dominated society, because all societies are male-dominated.
Most people in the debates I see, being elites, act like everyone is a college graduate who chose their job, or “field” of work; but
We know lots of people are in jobs they didn’t freely choose or didn’t get promoted out of, for reasons related to gender (like pregnancy).
No one knows how much segregation results from differences in choices of workers vs. parent/employer/educator pressure or constraints; and
The level of sex segregation varies across social contexts (across space and time), which means it is not all caused by biology; and
Because segregation causes inequality and constrains human freedom, and we have the means to reduce it, the biology theory is harmful; so
Go ahead and study the biology of sex differences, because society is interesting, but don’t use that as an excuse for inequality.