Craptastic

[Photo: Your bright future written in the sign over a urinal (pnc)]

I don’t know where it came from, but sometime after the 2016 election the word craptastic started rolling around in my head. Eventually it congealed into the title of something I want to write.

Some people use craptastic to mean “so bad it’s good,” like bad food you love. But to me it’s that thing you say when you thought something was going well — maybe turning around from a bad situation — and it suddenly turns out to be even worse than you thought. An early use appears in a 2007 young adult novel called Two Foot Punch:

“Come on. Now that we know where Derek is, we can get help!”

“Not yet,” I say. My voice becomes weak, even for a whisper. “He told the guys that if anyone comes, or if something goes wrong, they’re going to kill Derek.” …

Rain leans against the duct, shaking her head. “Craptastic.”

The situation with Derek was bad; then they found out where he was (lucky break!), but it turns out that if they act on that, he will be killed (craptastic!).

Joy-Ann Reid has a descriptive piece up at Daily Beast called “The Enormous Emotional Toll of Trumpism,” in which she writes:

Dr. Jeffrey R. Gardere, Ph.D., a clinical psychologist, said some of his patients over the past nine months “have expressed much frustration, unhappiness and stress with the present political climate,” and that he is seeing increased instances of “dysphoria, and sometimes the related eating and sleeping interruptions.”

We all know this is happening. My theory for Craptastic is that the catastrophic thinking and uncontrollable feelings of impending doom go beyond the very reasonable reaction to the Trump shitshow that any concerned person would have, and reflect a sense that things are taking a suddenly serious turn for the worse, rupturing what Anthony Giddens describes as the progress narratives of modernity that people use to organize their identities. People thought things were sort of going to keep getting better, arc of the moral universe and all that, but suddenly they realize what a naive fantasy that was. It’s not just terrible, it’s craptastic.

If that’s true, I suppose, it would be felt more strongly by relatively privileged people, who had the luxury of believing their good lives were just a little ahead of the lives of those obviously much worse off, so being happy wasn’t a betrayal of humanity, it was just a little premature. Now, they feel not just bad, but worse. (My insider perspective on this is a plus, right?)

I suspect that if America lives to see this chapter of its decline written, Trump will not be as big a part of the story as it seems he is right now. And that impending realization is one reason for the Trump-inspired dysphoria that so many people are feeling.

(Cohen forthcoming)*


* If you love this idea and want to help make it happen, please contact my agent. Or I guess be my agent.


Sociology’s culture of trust, don’t verify

Replication in sociology is a disaster. There basically isn’t any. Accountability is something a select few people opt into; as a result, it’s mostly people with nothing to hide whose work ever gets verified or replicated. Even when work is easily replicable, such as research using publicly available datasets, there is no common expectation that anyone will do it, and no support for doing it; basically no one funds or publishes replications.

Peer review is good, but it’s not about replicability, because it almost always relies on the competence and good faith of the authors. Reviewers might say, “This looks funny, did you try this or that?” But if the author says, “Yes, I did that,” that’s usually the end of it. Academic sociology, in short, runs on a system of trust. That’s worth exactly what it’s worth. It doesn’t have to be this way.

I thought of this today when I read the book excerpt by Mark Regnerus in the Wall Street Journal. (I haven’t read his new book, Cheap Sex, yet, although I called the basic arguments a “big ball of wrong” three years ago when he first published them.) Regnerus opens the essay with a single quote, supposedly from an anonymous 24-year-old recent college graduate, that absolutely perfectly represents his thesis:

If you know what girls want, then you know you should not give that to them until the proper time. If you do that strategically, then you can really have anything you want…whether it’s a relationship, sex, or whatever. You have the control.

(Regnerus argues men have recently gained control over sex because women have stopped demanding marriage in exchange for it.)

Scholars and readers in sociology don’t normally question whether specific quotes in qualitative research are real or not. We argue over the interpretation, or elements of the research design that might call the interpretation into question (such as the method of selecting respondents or a field site). But if we simply don’t trust the author, what do we do? In the case of Regnerus, we know that he has lied, a lot, about important things related to his research. So how do you read his research in a discipline with no norm of verification or replicability, a discipline naively based on trust? The fake news era is here; we have to address this. Fortunately, every other social discipline already is, so we don’t have to reinvent the wheel.

Tackling it

Of course there are complicated issues with different kinds of sociology, especially qualitative work. It’s one of the things people wrestled with in the Contexts forum Syed Ali and I organized for the American Sociological Association on how to do ethnography right.

That forum took place in the wake of all the attention Alice Goffman received for her book and article, On the Run (my posts on that are under this tag). One person who followed that controversy closely was law professor Steven Lubet, who has written a new book, Interrogating Ethnography: Why Evidence Matters, which addresses that situation in depth. The book comes out October 20 at a conference at Northwestern University’s law school, where I will be one of a number of people commenting on the book and its implications.

[Image: Interrogating Ethnography book cover]

I hope you can come to the event in Chicago.

Finally, regardless of your opinion on recent controversies in sociology, if you haven’t read it, I urge you to read (and, if you’re in such a position, require that your students read) “Replication in Social Science,” by Jeremy Freese and David Peterson, in the latest Annual Review of Sociology (SocArXiv preprint; journal version). Freese and Peterson refer to sociology as “the most undisciplined social science,” and they write:

As sociologists, the most striking thing in reviewing recent developments in social science replication is how much all our neighbors seem to be talking and doing about improving replicability. Reading economists, it is hard not to connect their relatively strict replication culture with their sense of importance: shouldn’t a field that has the ear of policy-makers do work that is available for critical inspection by others? The potential for a gloomy circle ensues, in which sociology would be more concerned with replication and transparency if it was more influential, but unwillingness to keep current on these issues prevents it from being more influential. In any case, the integrative and interdisciplinary ambitions of many sociologists are obviously hindered by the field’s inertness on these issues despite the growing sense in nearby disciplines that they are vital to ensuring research integrity.

That paper has some great ideas for easy reforms to start with. But we need to get the conversation moving. In addition to developing replication standards and norms, we need to give the next generation of sociologists some basic training in the (jargon alert!) political economy of scholarly communication and the publishing ecosystem. The individual incentives are weak, but the need for the discipline to act is very strong. If we can at least get sociologists to be vaguely aware of the attention this issue has generated in most other social science disciplines, it would be a great step forward.

Incidentally, Freese will also present on the topic of replication at the O3S: Open Scholarship for the Social Sciences symposium SocArXiv is hosting at the University of Maryland later this month; still time to register!


Who’s happy in marriage? (Not just rich, White, religious men, but kind of)

I previously said there was a “bonafide trend back toward happiness” within marriage for the years 2006 to 2012. This was based on the General Social Survey trend going back to 1973, with married people responding to the question, “Taking all things together, how would you describe your marriage?”

Since then, the bonafide trend has lost its pop. Here’s my updated figure:

[Figure: Percent of married people very happy with their marriage, GSS 1973-2016]

I repeated this analysis controlling for age, race/ethnicity, and education, with year specified in quadratic form. This shows happiness falling to a trough around 2004 and then starting to trend back. But given the last two points, confidence in that rebound is weak. Still, a solid majority are happy with their marriages.

Who’s happy?

But who are those happy-in-marriage people? Combining the last three surveys, 2012, 2014, and 2016, this is what we get (the effect of age and the non-effect of education are not shown). Note the y-axis starts at 50%.

[Figure: Percent very happy in marriage, by sex, race, subjective class, religious attendance, and political views; GSS 2012-2016]

So to be happy in marriage, my expert opinion is you should become male and White, see yourself as upper class, go to church all the time, and have extreme political views. And if you’re not all those things, don’t let the marriage promoters tell you what your marriage is going to be like.

(Note: I analyzed the political views thing before, so this is an update to that. On trends and determinants of social class identification, see this post.)


Here’s my Stata code, written to run on the full GSS data through 2016. Play along at home!

set maxvar 10000
use "GSS7216_R1a.dta", clear
* year counter and rounded weights (rounded for use as frequency weights)
gen since73 = year-1973
gen rwgt = round(wtssall)
keep if year >1972
* outcome: "very happy" marriage (hapmar==1)
gen verhap=0
replace verhap=1 if hapmar==1
* trend model: quadratic in year, controlling for sex, age, education, and race
logit verhap i.sex c.age##c.age i.degree i.race c.since73##c.since73 [weight=rwgt]
margins, at(since73=(0(1)43))
* collapse religious attendance into three categories
recode attend (1/3=1) (4/6=2) (7/8=3), gen(attendcat)
* who's happy, 2012-2016: add subjective class, attendance, and political views
logit verhap i.sex c.age##c.age i.degree i.race i.class i.attendcat i.polviews if year>2010 [weight=rwgt]
margins sex race class attendcat polviews if year>2010



Hitting for the cycle in Trump land

While cautious about the risks of normalizing Trump, I have nevertheless attempted to engage a little with his followers on Twitter, which is the only place I usually meet people who are willing to support him openly. One exchange yesterday struck me as iconic, so I thought I’d share it.

Maybe if I’d studied conversation or text analysis more I would be less amazed at how individuals acting alone manage to travel the same discursive paths with such regularity. In this case a Trump supporter appears to spontaneously retrace this very common path in a short handful of tweets:

  1. I don’t believe your facts
  2. If they are true it’s no big deal
  3. Obama was worse
  4. Nothing matters everyone is corrupt

The replies got jumbled up, so I use screenshots as well as links (you can start here if you want to try to follow it on Twitter).

Ivanka Trump tweeted something about how she was going to India. Since I’m blocked by Donald but not Ivanka, if it’s convenient I sometimes do my part by making a quick response to her tweets. I said, “Your representation of the US in India epitomizes the corruption and incompetence of this administration.”

[Screenshot: my reply to Ivanka Trump’s tweet]

The responses by @armandolbstd and @dreadGodshand are very typical, demanding “proof” about things that are obvious to basically informed people. I made the typical mistake of thinking we could talk about common facts, using the word “literally” a lot:

[Screenshot: replies from @armandolbstd and @dreadGodshand demanding proof]

OK, so then I got sucked in with what I thought was the most obvious example of corruption, leading @dreadGodshand into the whole cycle:

[Screenshot: the corruption example, and the first turn of the cycle]

Interesting how the “ok, maybe it’s true but so what” move we hear constantly strikes him as a brand-new question. And from there he runs through no-big-deal to Obama-was-worse to nothing-matters:

[Screenshots: the exchange continues through steps 2-4 of the cycle]

And he concluded, “I’m not hating obama for it. It’s not that big of a deal. It’s designed that way to help their parties. Who really cares?”

This reminds me of the remarkable shift in attitudes toward immoral conduct among White evangelicals, who used to think it was a very big deal if elected officials (Obama) did immoral things in private but now (Trump) shrug:

[Image: poll results on White evangelicals’ views of immoral conduct by elected officials]

People do change. But I don’t put that much stock in changing people, and contrary to popular belief I don’t think that’s how you have to win elections. In the end defeating Trumpism politically means outvoting people who think like this, which will be the result of a combination of things: increasing turnout (one way or the other) among people who oppose him, decreasing turnout among people who support him, and changing the number of people in those two categories.

You might think this example just shows the futility of conversations like this, but maybe I’m missing some opportunity to get through. And it’s also possible that this kind of thing is demoralizing to Trump supporters, which could be good, too. So, live and learn.


Science finds tiny things nowadays (Malia edition)

We have to get used to living in a world where science — even social science — can detect really small things. Understanding how important really small things are, and how to interpret them, is harder nowadays than just finding them.

Remember when Hanna Rosin wrote this?

One of the great crime stories of the last twenty years is the dramatic decline of sexual assault. Rates are so low in parts of the country — for white women especially — that criminologists can’t plot the numbers on a chart.

Besides being wrong about rape (it has declined a lot, but it’s still high compared with most countries), this was a funny statement about science (I’ve heard we can even plot negative numbers now!). But the point is we have problems understanding, and communicating about, small things.

So, back to names.

In 2009, the peak year for the name Malia in the U.S., 1,681 girls were given that name, according to the Social Security Administration, or .041% of the 4.14 million children born that year (there are no male Malias in the SSA’s public database, meaning they have never recorded more than 4 in one year). That year, 7.5% of women ages 18-44 had a baby. If my arithmetic is right, say you know 100 women ages 18-44, and each of them knows 100 others (and there is no overlap in your network). That would mean there is a 30% chance one of your 10,000 friends of a friend had a baby girl and named her Malia in 2009. But probably there is a lot of overlap; if your friend-of-friend network is only 1,000 women 18-44 then that chance would fall to 3%.
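If you want to check that arithmetic, here’s a quick Stata back-of-envelope using only the numbers above (a sketch, nothing more). Strictly speaking, 0.30 is the expected number of Malias among your 10,000 contacts; the chance of at least one is 1 minus the chance of none, which comes out a little lower, around 26%.

* back-of-envelope: Malia chances in a no-overlap network
* p = share of all 2009 births named Malia; 7.5% of contacts have a baby
display "expected Malias among 10,000 contacts: " 10000*.075*(1681/4140000)
display "chance of at least one: " 1 - (1 - 1681/4140000)^(10000*.075)
display "chance with 1,000 contacts: " 1 - (1 - 1681/4140000)^(1000*.075)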

Here is the trend in girls named Malia, relative to the total number of girls born, from 1960 to 2016:

[Figure: girls named Malia per year, relative to all girls born, 1960-2016]

To make it easier to see the Malias, here is the same chart with the y-axis on a log scale.

[Figure: the same trend, with the y-axis on a log scale]

This shows that Malia has been on a long upward trend, from fewer than 50 per year in the 1960s to more than 1,000 per year now. And it also shows a pronounced spike in 2009, the year Malia peaked at .041%. In that year, the number of people naming daughters Malia jumped 75% before declining over the next three years to resume its previous trend. Here is the detail on the figure, just showing the trend for 2005-2016:

[Figure: detail of the Malia trend, 2005-2016]

What happened there? We can’t know for sure. Even if you asked everyone why they named their kid what they did, I don’t know what answers you would get. But from what we know about naming patterns, and their responsiveness to names in the news (positive or negative), it’s very likely that the bump in 2009 resulted from the high profile of Barack Obama and his daughter Malia, who was 11 when Obama was elected.

What does a causal statement like that really mean? In 2009, it looks to me like about 828 more people named their daughters Malia than would have otherwise, taking into account the upward trend before 2008. Here’s the actual trend, with a simulated trend showing no Obama effect:

[Figure: actual Malia trend versus a simulated trend with no Obama effect]

Of course, Obama’s election changed the world forever, which may explain why the upward trend for Malia accelerated again after 2013. But in this simple simulation, which brings the “no Obama” trend back into line with the actual trend in 2014, there were 1,275 more Malias born than there would have been without the Obama election. This implies that over the years 2008-2013, the Obama election increased the probability of someone naming their daughter Malia by .00011, or .011%.
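If you want to play with the counterfactual yourself, here’s a minimal Stata sketch. It assumes you’ve loaded the SSA counts with variables named year and malia (my hypothetical names), and it fits a plain linear trend to the pre-2008 years, which is cruder than the simulation behind my figure, so the totals won’t match exactly.

* sketch: fit the pre-2008 trend, extrapolate it, and sum the excess Malias
reg malia year if year >= 1990 & year < 2008
predict malia_hat
gen excess = malia - malia_hat if inrange(year, 2008, 2013)
total excess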

That is a very small effect. I think it’s real, and very interesting. But what does it mean for anything else in the world? This is not a question of statistical significance, although those tools can help. (These names aren’t a probability sample; the SSA file is a list of all names given.) So this is a question of how to interpret research findings now that we have these incredibly powerful tools, and very big data to analyze with them. The number alone doesn’t tell the story.


On artificially intelligent gaydar

A paper by Yilun Wang and Michal Kosinski reports being able to identify gay and lesbian people from photographs using “deep neural networks,” which means computer software.

I’m not going to describe it in detail here, but the gist of it is they picked a large sample of people from a dating website who said they were looking for same-sex partners, and an equal number who were looking for different-sex partners, and trained their computers to learn the facial features that could distinguish the two groups (including facial structure measurements as well as grooming features like hairline and facial hair). For a deep dive on the context of this kind of research and its implications, and more on the researchers and the controversy, please read this post by Greggor Mattson first. These notes will be most useful after you’ve read that.

I also reviewed a gaydar paper five years ago, and some of the same critiques apply.

This figure from the paper gives you an idea:

[Figure: composite faces from the Wang and Kosinski paper]

These notes are how I would start my peer review, if I were peer reviewing this paper (which is already accepted and forthcoming in the Journal of Personality and Social Psychology — so much for peer review [just kidding, it’s just a very flawed system]).

The gay samples here are “very” gay, in the sense of being out and looking for same-sex partners. This does not mean that they are “very” gay in any biological, or born-this-way sense. If you could quantitatively score people on the amount of their gayness (say on some kind of scale…), outness and same-sex attraction might be correlated, but they are different things. The correlation here is assumed, and assumed to be strong, but this is not demonstrated. (It’s funny that they think they address the problem of the sample by comparing the results with a sample from Facebook of people who like pages such as “I love being gay” and “Manhunt.”)

Another way of saying this is that the dependent variable is poorly defined, and conclusions from studying it are then generalized beyond the bounds of the research. So I don’t agree that the results:

provide strong support for the PHT [prenatal hormone theory], which argues that same-gender sexual orientation stems from the underexposure of male fetuses and overexposure of female fetuses to prenatal androgens responsible for the sexual differentiation of faces, preferences, and behavior.

If it were my study I might say the results are “consistent” with PHT theory, but it would be better to say, “not inconsistent” with the theory. (There is no data about hormones in the paper, obviously.)

The authors give too much weight to things their results can’t say anything about. For example, gay men in the sample are less likely to have beards. They write:

nature and nurture are likely to be as intertwined as in many other contexts. For example, it is unclear whether gay men were less likely to wear a beard because of nature (sparser facial hair) or nurture (fashion). If it is, in fact, fashion (nurture), to what extent is such a norm driven by the tendency of gay men to have sparser facial hair (nature)? Alternatively, could sparser facial hair (nature) stem from potential differences in diet, lifestyle, or environment (nurture)?

The statement is based on the faulty premise that nature and nurture “are likely to be as intertwined.” They have no evidence of this intertwining. They could just as well have said “it’s possible nature and nurture are intertwined,” or, with as much evidence, “in the unlikely event nature and nurture are intertwined.” So they loaded the discussion with the presumption of balance between nature and nurture, and then went on to speculate about sparse facial hair, for which they also have no evidence. (This happens to be the same way Charles Murray talks about race and IQ: there must be some intertwining between genetics and social forces, but we can’t say how much; now let’s talk about genetics because it’s definitely in there.)

Aside from the flaws in the study, the accuracy rate reported is easily misunderstood, or misrepresented. To choose one example, the Independent wrote:

According to its authors, who say they were “really disturbed” by their findings, the accuracy of an AI system can reach 91 per cent for homosexual men and 83 per cent for homosexual women.

The authors say this, which is important but of course overlooked in much of the news reporting:

The AUC = .91 does not imply that 91% of gay men in a given population can be identified, or that the classification results are correct 91% of the time. The performance of the classifier depends on the desired trade-off between precision (e.g., the fraction of gay people among those classified as gay) and recall (e.g., the fraction of gay people in the population correctly identified as gay). Aiming for high precision reduces recall, and vice versa.

They go on to give a technical, and I believe misleading, example. People should understand that the computer was always picking between two people, one of whom was identified as gay and the other not. It had a high percentage chance of getting that choice right. That’s not saying, “this person is gay”; it’s saying, “if I had to choose which one of these two people is gay, knowing that one is, I’d choose this one.” What they don’t answer is this: Given 100 random people, 7 of whom are gay, how many would the model correctly identify yes or no? That is the real-life question most people probably think the study is answering.
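To make that concrete, here is an illustrative simulation in Stata (my own toy numbers, not from the paper). With normally distributed scores, an AUC of .91 corresponds to a gap of about 1.9 standard deviations between the two groups’ scores. Apply that classifier to a population in which 7% of people are gay, and the pairwise accuracy is still .91, but at any single cutoff the precision and recall are much less impressive:

* toy simulation: an AUC = .91 classifier in a population that is 7% gay
clear
set seed 2017
set obs 100000
gen gay = runiform() < .07
gen score = rnormal() + 1.9*gay   // a 1.9 SD gap implies AUC of about .91
roctab gay score                  // area under ROC curve: the pairwise accuracy
gen flagged = score > 2           // one fairly high cutoff
tab gay flagged, row              // recall: under half of gay people get flagged
tab flagged gay, row              // precision: about 60% of the flagged are gay

Depending on where you set the cutoff, the same .91 model either misses most of the people it’s looking for or wrongly flags a lot of people, which is the trade-off the authors describe.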

As technology writer Hal Hodson pointed out on Twitter, if someone wanted to scan a crowd and identify a small number of individuals who were likely to be gay (while ignoring the many other people in the crowd who are also gay), this might work (with some false positives, of course).

[Screenshot: Hal Hodson’s tweet]

Probably someone who wanted to do that would be up to no good, like an oppressive government or Amazon, and they would have better ways of finding gay people (like at pride parades, or looking on Facebook, or dating sites, or Amazon shopping history directly — which they already do of course). Such a bad actor could also train people to identify gay people based on many more social cues; the researchers here compare their computer algorithm to the accuracy of untrained people, and find their method better, but again that’s not a useful real-world comparison.

Aside: They make the weird but rarely-necessary-to-justify decision to limit the sample to White participants (and also offer no justification for using the pseudoscientific term “Caucasian,” which you should never ever use because it doesn’t mean anything). Why couldn’t respondents (or software) look at a Black person and a White person and ask, “Which one is gay?” Any artificial increase in the homogeneity of the sample will increase the likelihood of finding patterns associated with sexual orientation, and misleadingly increase the reported accuracy of the method used. And of course statements like this should not be permitted: “We believe, however, that our results will likely generalize beyond the population studied here.”

Some readers may be disappointed to learn I don’t think the following is an unethical research question: Given a sample of people on a dating site, some of whom are looking for same-sex partners and some of whom are looking for different-sex partners, can we use computers to predict which is which? To the extent they did that, I think it’s OK. That’s not what they said they were doing, though, and that’s a problem.

I don’t know the individuals involved, their motivations, or their business ties. But if I were a company or government in the business of doing unethical things with data and tools like this, I would probably like to hire these researchers, and this paper would be good advertising for their services. It would be nice if they pledged not to contribute personally to such work, especially any efforts to identify people’s sexual orientation without their consent.


Teach it! Family syllabus supplements for Fall 2017

This year we’ve been working on the second edition of my book The Family: Diversity, Inequality, and Social Change, which will be out in 2018. And my new book, a collection of essays, will also be out in the spring: Enduring Bonds: Inequality, Marriage, Parenting, and Everything Else That Makes Families Great and Terrible, from University of California Press. But I’ve still produced a few blog posts this year, so I can provide an updated list of potential syllabus supplements for this fall.

In addition to the excellent teaching materials to support The Family from Norton, there is also an active Facebook group for sharing ideas and materials (instructors visit here). And then I provide a list of blog posts for family sociology courses (for previous lists, visit the teaching page). So here are some new, and some old, organized by topic. As always, I appreciate your feedback.

1. Introduction

2. History

3. Race, ethnicity, and immigration

4. Social class

5. Gender

6. Sexuality

7. Love and romantic relationships

  • Is dating still dead? The death of dating is now 50 years old, and it’s been eulogized so many times that its feelings are starting to get hurt.
  • Online dating: efficiency, inequality, and anxiety: I’m skeptical about efficiency, and concerned about inequality, as more dating moves online. Some of the numbers I use in this post are already dated, but this could be good for a debate about dating rules and preferences.
  • Is the price of sex too damn low? To hear some researchers tell it in a recent YouTube video, women in general — and feminism in particular — have ruined not only sex, but society itself. The theory is wrong. Also, they’re insanely sexist.

8. Marriage and cohabitation

9. Families and children

10. Divorce, remarriage, and blended families

11. Work and families

12. Family violence and abuse

13. The future of the family
