I recently sat in on a series of job talks in my college’s History Department. One thing that intrigued me was the sort of questions asked by historians. At any talk, but especially at a job talk, where the stakes are so high, faculty members typically try to poke holes in the candidate’s argument. After all, you want to expose any fatal flaws in their work before you offer them what is effectively a lifetime contract.
What surprised me about these talks was that there were very few attacks directly on the candidate’s data and methods. For me, these are the areas where a questioner might expose key vulnerabilities in the research. If the evidence is weak or unrepresentative and the tools for examining it inappropriate or misguided, then the argument probably falls apart. Instead, at the job talks I attended, most questions probed how the research related to the work of others, to different cases or time periods than the author studied, or to different theories.1
This led me to think about what counts as a killer question in a job talk - a question that can immediately sink the candidate’s chances - and whether the type of killer question differs across fields. I had thought that the killer question for the history talks I saw would be about the representativeness of the evidence. In works that draw on archives or letters, how do we know that the pieces of evidence the author cites are representative of broader trends, and how prominent are those trends relative to others? However, these weren’t the issues that seemed to interest historians. In fact, I’m still trying to figure out what historians consider a killer question.
I think I do have a better sense of what counts as a killer question in political science. (I’m ignoring outright fraud here, as in the recent Ariely and Gino cases, because fraud is rarely uncovered in a public talk.) I group the questions into the real killers and the mere flesh wounds. Obviously other factors matter more in the success of a talk - like the novelty and importance of the work and ultimately the candidate’s publication record - but those tend to be more subjective judgments and not so much the focus of questioning, except for the classic “So what?”.
The real killers
Data issues
I think the most devastating questions in political science (or perhaps any field) raise doubts about whether the data gathered actually measure what the author thinks they are measuring or whether the data are reliable. (Yes, I’m speaking in quantitative terms here, but you can substitute evidence for data.) Every scholar’s nightmare is probably Naomi Wolf’s interview where it was pointed out that the archival phrase “death recorded” referred not to the execution of someone accused of being a homosexual but in fact to a commutation of their sentence. Some of the most prominent recent failures in the social sciences fall into this category - like Reinhart & Rogoff’s Excel problems - but I think there are lots of cases where authors simply don’t know their data well or haven’t read the footnotes to the data in the sources. Consider, for example, these criticisms of Piketty and Zucman’s data on inequality, though in this case the criticism doesn’t yet seem to have hurt the authors’ reputations.
These problems are naturally most severe when the author has gathered their own data, but they can happen even with canned data. Measures of democracy have recently come under especially harsh criticism, but the same applies, maybe even more strongly, to measures of stateness/state capacity and ethnic diversity/nationalism, all central variables in political science. As Gerring points out, any sort of causal inference depends first on describing a phenomenon accurately, and this is exactly where we sometimes fail.
Selection bias
In the past, selection on the dependent variable was relatively common. Scholars often studied a case (or cases) of revolution, democratic transition, or war without comparing them to cases of non-revolutions, non-transitions, and non-wars. Such a design makes it nearly impossible to assess cause and effect because the potential cause might be just as present in the negative cases. This is probably why King, Keohane, and Verba spent so much time on selection bias in their classic methods text. Today this mistake can torpedo a talk.
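To make the logic concrete, here is a toy simulation of my own (the “crisis” and “revolution” labels are purely hypothetical, not drawn from any study discussed here). A cause that in truth only doubles the probability of the outcome looks dominant if we examine only the positive cases:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: a binary "cause" (economic crisis) that in truth only
# modestly raises the probability of the outcome (revolution).
crisis = rng.random(n) < 0.5
p_revolution = np.where(crisis, 0.08, 0.04)  # true effect: +4 percentage points
revolution = rng.random(n) < p_revolution

# Selecting on the dependent variable: study only the revolutions.
print(f"Crises among revolutions:     {crisis[revolution].mean():.2f}")   # ~0.67
# The comparison that the truncated design throws away:
print(f"Crises among non-revolutions: {crisis[~revolution].mean():.2f}")  # ~0.49
```

The 67% figure looks impressive only because the design discards the negative cases, where crises were nearly as common; the real evidence is the gap in outcome rates (8% versus 4%).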
But I don’t think this problem comes up as much as it used to. It was relatively common in the era of single-country case studies. However, qualitative scholars have adopted methods that get around it. The main ones are structured, focused comparisons, which make sure to include both positive and negative cases, and process tracing, which breaks down a single case into multiple observations over time. Yet the problem has not completely disappeared, as Pape’s work on suicide terrorism shows.
Method misunderstanding
Another, less specific, killer is when a speaker is challenged on their statistical method and reveals that they don’t understand it well. I remember being caught out on this (fortunately not in a job talk) when I referred to matching methods as being a solution to the problem of endogeneity. (Unfortunately, they are not.) This isn’t to say that one will be tested on the details of every statistical method or on the latest advancements in, say, regression discontinuity designs. But if you do happen to have a stickler for statistics in your audience, then they may be capable of sinking your talk by exposing your unfamiliarity with the foundations of the method that you are using.
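Since matching is exactly where I went wrong, a brief sketch of why it doesn’t solve endogeneity may help. This is a toy simulation of my own, with made-up variables: matching balances treated and untreated units on what we observe, but a confounder we don’t observe survives untouched:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

x = rng.integers(0, 2, n)   # observed covariate (say, region)
u = rng.normal(size=n)      # unobserved confounder (say, motivation)

# Treatment depends on both x and u; the true treatment effect is zero.
treated = (0.5 * x + u + rng.normal(size=n)) > 0
y = 2.0 * u + rng.normal(size=n)   # outcome driven entirely by u

# "Exact matching" on x: compare treated vs. untreated within each x cell.
for cell in (0, 1):
    sel = x == cell
    diff = y[sel & treated].mean() - y[sel & ~treated].mean()
    print(f"x={cell}: matched difference = {diff:+.2f}")  # far from the true 0
```

Within each matched cell the comparison is still badly biased, because the groups differ on u, and no amount of matching on x can fix that.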
Flesh wounds
There are a number of issues that seem to be common but rarely fatal, particularly in political science.
Omitted variables
The most common questions in job talks - and often the most annoying - are ones that point out potential omitted variables, other factors that might lead to the outcome the author wishes to explain. I say annoying because it is always easy to bring up missing factors - just take your pick from a thousand different variables in the realm of culture, religion, institutions, history, economics, groups, etc. And it may not make sense to add these factors to the model the author is estimating, because they are hard to measure or because they act as mediating rather than causal variables.
Yes, authors often fail to consider important alternative explanations, and they are often called on this in talks. But except in extreme cases this tends to be a venial rather than a mortal sin. Mostly it is viewed as an easily corrected mistake - the equivalent of meeting a flawed potential boyfriend or girlfriend and saying, “I can fix that.”
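For readers who want the mechanics, here is a minimal sketch (again my own toy example, with hypothetical variables) of why an omitted confounder distorts an estimate and why the fix really is mechanical once the variable can be measured:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical confounder z (say, wealth) drives both x and y;
# the true effect of x on y is set to 1.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = 1.0 * x + 2.0 * z + rng.normal(size=n)

def ols_slope(y, *regressors):
    """OLS with an intercept; returns the coefficient on the first regressor."""
    X = np.column_stack([np.ones_like(y)] + list(regressors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(f"x coefficient, z omitted:  {ols_slope(y, x):.2f}")     # ~2.0, badly biased
print(f"x coefficient, z included: {ols_slope(y, x, z):.2f}")  # ~1.0, the truth
```

The catch, as noted above, is that the confounders people bring up in talks are often precisely the ones that can’t simply be measured and dropped into the model.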
Endogeneity
Here the problem is that the outcome the scholar tries to explain is actually the cause rather than the effect, or, more generally, that the cause is not exogenous to the outcome. In economics, most questions tend to focus on exactly these identification issues - is the speaker able to isolate a truly exogenous source of variation in the cause? Without exogeneity it is very hard to say whether an explanation is causal. This is part of the so-called credibility revolution, which has come to political science as well. Methods that approximate experiments fulfill this requirement best: by randomly assigning a treatment, the experimenter ensures that the cause is exogenous to the outcome.
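A quick illustration of that last point, once more a toy simulation of mine rather than anyone’s actual design: when units select into treatment based on an unobserved trait, the naive comparison is badly off, while a coin flip recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

u = rng.normal(size=n)   # unobserved trait that drives outcomes
true_effect = 1.0

# Observational world: units with high u select into treatment.
treated = (u + rng.normal(size=n)) > 0
y = true_effect * treated + 2.0 * u + rng.normal(size=n)
print(f"Naive difference:      {y[treated].mean() - y[~treated].mean():.2f}")  # ~3.3

# Experimental world: a coin flip assigns treatment, making it exogenous to u.
assigned = rng.random(n) < 0.5
y_rct = true_effect * assigned + 2.0 * u + rng.normal(size=n)
print(f"Randomized difference: {y_rct[assigned].mean() - y_rct[~assigned].mean():.2f}")  # ~1.0
```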
My sense is that political scientists tend to be more forgiving of these problems than economists (who regard them as killers). Yes, we often probe authors on this point, but I think we often accept imperfect attempts to deal with it because the political world is complicated. To turn this into a killer question, the author would need to be unaware of the potential endogeneity. A good defense is first to be proactive in identifying the problem and second to take care to present one’s claims as non-causal.
Lack of knowledge of literature
The same goes for questions that reveal that the author doesn’t know important articles or books on their topic. Here the question sounds like this: “What about author X?” Yes, it is problematic when one is not aware of other studies (and historians and humanities scholars are probably more sensitive to this problem than political scientists), but it is also easily fixable. These tend to be the easiest criticisms to respond to in peer reviews. The main problem arises when the missed work implicates an omitted variable that should have been included, which is itself a venial rather than a mortal sin. I suppose it could rise to the level of a killer if another author has done essentially the same project as yours, but this is relatively rare.2
Conclusion
All of these problems are of course well known by now, and so I don’t think I am adding much new. I would probably mainly emphasize the importance of specific issues like not knowing one’s data well. And I’d add that avoiding killer questions does not equal a successful talk. Ultimately, candidates are chosen for the promise and excitement of their work rather than for not making fatal mistakes.
Probably for me the most interesting issue is how killer questions differ across disciplines. Economists tend to be obsessed with identification. I’m not quite sure whether political science has a similar obsession, and I couldn’t figure out at all what historians were looking for. I hope to be educated on this.
1. I can guess at the reasons for this. I sense that historians are deferential towards primary sources, and if they are not familiar with the source in question, they tend not to challenge it.
2. I am partial to Thaler’s advice to researchers: “Admittedly my strategy of writing the paper first and only then reading the literature (or, more likely, letting the referees tell me what they think I should have read) is an extreme one, but it is better than trying to read everything.”