How do you know if it’s working?
Job-performance evaluations are the subject of many studies in the sociology of work and inequality (including a new article by Emilio Castilla in American Sociological Review). In the professoriate, the job evaluation process is all over the place — teaching evaluations, peer review of research for publication, peer review of grant applications, conference acceptances and invitations, reviews of books, awards, comprehensive reviews at promotion time.
Last week I got two decisions from peer-reviewed journals (both rejections), got a bunch of chapter reviews for my book from people who teach family sociology courses, and conducted informal evaluations in my undergraduate class. Each of these types of evaluation is useful. In reverse order, here's what I think each one offers.
1. Informal course evaluations
A class of 57 students is small enough to have a discussion, but big enough for some people to never speak up. Before my midterm exam, when they still hadn't gotten much formal feedback from me except some quiz grades, I asked the students to answer three questions anonymously: favorite thing about the course so far, least favorite thing, and suggestions for improvement. (As suggested here.)
The answers ranged from “love the material, love sociology in general” to “can be a little boring when it’s just lecturing”; from “really enjoy the lectures, powerpoints, sense of humor 🙂 class is never boring” to “SO MUCH READING.” The bottom lines were nearly unanimous, however: More discussion and interaction, more video clips, and turn the lights up. Great advice, mission accomplished, course improved.
It’s hard to know what’s working in a class because, although I’ve been teaching “Families and Society” for five years at UNC, I change the material and readings every time, the students change all the time, and for all I know I change every time, too. Unless you repeat the same sequence with only specific changes, it’s hard to know which innovations produce which results. The mid-course evaluation helps make improvements right away and identify potential problems that people are reluctant to speak up about publicly.
2. Book chapter reviews
The editor for the book I’m writing at W. W. Norton, Karl Bakeman, sends my chapters out for review as I draft them. We must have had a few dozen reviews so far. The ways these reviews differ exemplify why it’s hard to evaluate your own course: there are too many variables in play to compare them all and know what’s best. Each person compares my chapters with their own experience and knowledge, what and how they teach, and their own sociological perspectives. Imagine that, for each reviewer, there are at least three possible schemes: the way they used to do it, how they do it now, and how they’d like to do it. Then multiply that by the number of possible ways to organize the book and frame the subject matter.
What I hadn’t realized going into the process was how valuable these different views would be for trying to create the best possible book – and the best possible courses to come out of it. Not by taking everyone’s advice, but by weighing the range of opinions and interpretations. What’s hard about teaching one course may be a benefit to a book project that aims to improve many people’s courses.
3. Peer-reviewed research
Since I already have tenure — and even have my next job lined up — I can be philosophical about the journal rejections. Of the articles I’ve published, very few have been written by me alone, as these last two were. And each of them was a struggle, involving premature submission and outright rejection. My first drafts of journal articles are (to date) awful. And the revisions I do myself before submitting them for publication are not much better. I could protest, but the data to support this generalization are too clear. It’s not that the research or findings are found to be wrong (usually), but rather that the writing and framing aren’t clear, the cut corners are too glaring, and the ambition is overreaching. This pattern represents a substantial imposition on my peers, who volunteer to review this research, and I owe it to them to try to learn from this and do better.
Having written a round of tenure review letters this year, I also know that there is pressure from some quarters to critically evaluate collaboration, to make sure each scholar makes an "independent" contribution and deserves individual recognition (in the form of lifetime job security). My own experience reinforces my tendency to downplay the risks of crediting people for collaborative work.