Some years ago, I authored and circulated an open letter to Brian Leiter expressing concern about the influence the Philosophical Gourmet Report was having both upon students who were selecting graduate schools and upon the profession more generally. As a result of my move from Harvard to Brown,1 the website where the open letter and accompanying material had been posted ceased to exist.2 I thought about moving the old site to a new location, but by then it had been almost four years since the open letter was sent, and it seemed inappropriate to re-post a somewhat out-of-date website. But then people who had been encouraging their students to look at it for another perspective started writing me to ask what had happened to it, so I figured I'd better do what I'd long been meaning to do and write some up-to-date remarks on the Report. Anyone who would like to read the original criticisms may find them on the Wayback Machine.
The Report has changed in the intervening years,3 in most ways for the better. Many people had expressed concern, for example, that something with as much influence as the Philosophical Gourmet Report ought not to be controlled by one individual. Leiter remains in charge, of course, but a formal Advisory Board is now in place. Unfortunately, the Board is unrepresentative of the field, but that is some progress nonetheless. The scores are finally normalized; as Leiter himself notes, normalization introduces new biases of its own, since not everyone ranks every department, but that is progress again. And perhaps most significantly, Leiter no longer compiles the rankings within individual areas on his own but now includes area rankings in the survey. That is definitely progress.
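To make the normalization point concrete: per-evaluator normalization typically means rescoring each evaluator's ratings against that evaluator's own mean and spread. What follows is a minimal sketch, with entirely hypothetical evaluators and scores; it illustrates the general technique and the bias Leiter concedes, not the Report's actual procedure.

```python
# A minimal sketch, assuming hypothetical evaluators and scores; this is my
# own illustration of per-evaluator z-scoring, not the Report's actual method.
import statistics

# Hypothetical raw ballots: evaluator -> {department: score on a 0-5 scale}.
raw = {
    "eval_1": {"A": 4.5, "B": 3.5, "C": 2.5},  # rated strong and weak departments
    "eval_2": {"A": 4.5, "B": 4.0},            # happened to rate only strong ones
}

def normalize(ballot):
    """Rescore one evaluator's ratings against their own mean and stdev."""
    mean = statistics.mean(ballot.values())
    sd = statistics.stdev(ballot.values())
    return {dept: round((score - mean) / sd, 2) for dept, score in ballot.items()}

for evaluator, ballot in raw.items():
    print(evaluator, normalize(ballot))
# eval_1 {'A': 1.0, 'B': 0.0, 'C': -1.0}
# eval_2 {'A': 0.71, 'B': -0.71}
# Both evaluators gave A the same raw 4.5, yet A fares worse on eval_2's
# normalized ballot simply because eval_2 rated no weak departments.
```

The upshot is that a department's normalized score depends not just on how its evaluators rated it but on which other departments those evaluators happened to rate.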
Despite these changes, however, there are still serious problems with the Philosophical Gourmet Report. For example:
Another, and in some ways more serious, worry concerns the influence the Report has upon the profession as a whole. Partly as a result of the factors just mentioned, the overall rankings in the Report are biased towards certain areas of philosophy at the expense of others. The most famous such bias is that against continental philosophy. I don't much care for that style of philosophy myself, but it isn't transparently obvious why Leiter's oft-expressed and very intense distaste for much of what goes on in certain "continental" departments should be permitted to surface so strongly in the rankings.9 Other biases are less obvious but every bit as real. It is well understood in the profession that hiring someone pretty good who works in philosophy of mind will have more influence on a department's overall ranking than will hiring someone much better who works on logic, let alone on ancient or medieval philosophy. I have been told that this fact has actually influenced hiring decisions—told, that is, by people who were present at meetings where such decisions were made. I'm sure most supporters of the Report would be as concerned as I am about such events. But what's to be done? Should departments simply not consider how their hiring decisions might affect their ranking? That isn't very realistic, especially when administrators have taken to confronting departments with their reduced rankings and demanding action, which is something I've personally seen happen (not at Harvard) and have been told about many other times. There is only one solution, and that is to put an end to the disproportionate influence a department's strength in so-called "core" areas of metaphysics and epistemology has upon its overall ranking. Or, better yet, to produce a set of rankings that, at the very least, doesn't have the sorts of flaws that one knows, in advance, will lead to some such biases.
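The arithmetic behind this worry is easy to see on an oversimplified model. Suppose, purely for illustration, that a department's overall score behaved like a weighted sum of its area strengths, with invented weights favoring the "core" areas. (The Report's actual mechanism is less direct, operating through the composition of the evaluator pool, but the effect is analogous.)

```python
# An oversimplified toy model, with invented weights and scores; this is my
# own illustration of the worry, not the Report's actual methodology.
weights = {"mind": 0.25, "metaphysics": 0.25, "epistemology": 0.25,
           "logic": 0.15, "ancient": 0.10}     # hypothetical area weights
base = {area: 3.0 for area in weights}         # hypothetical area strengths

def overall(scores):
    """Overall reputation modeled as a weighted sum of area strengths."""
    return sum(weights[area] * scores[area] for area in weights)

hire_mind = dict(base, mind=base["mind"] + 0.5)          # a "pretty good" hire
hire_ancient = dict(base, ancient=base["ancient"] + 1.0) # a "much better" hire

print(overall(base))          # 3.0
print(overall(hire_mind))     # 3.125: the smaller improvement pays off more
print(overall(hire_ancient))  # 3.1
```

On these made-up numbers, a half-point gain in philosophy of mind outweighs a full-point gain in ancient philosophy, which is exactly the hiring distortion just described.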
In closing, let me repeat something I've said elsewhere. I don't actually think the Philosophical Gourmet Report is completely useless. As I've said several times, I think there is a small but positive correlation between the Report's rankings and the quality of graduate programs. The Report therefore can be useful to students who are considering where to apply. The decision where to apply is sufficiently coarse-grained that "small but positive" will be helpful, so long as the usual warnings are heeded. But, in my opinion, it would be a serious mistake to give the Report's rankings any credence when making a decision that is more fine-grained, such as which graduate school to attend. Perhaps the correlation is good enough that it would rarely be wise to choose a school ranked around 30 over one ranked around 5. But that's not usually the sort of decision with which students struggle.
1It has often been speculated that my criticisms of the Report were motivated by a desire to defend the honor of the Harvard philosophy department against perceived slights. So long as I was at Harvard, I was limited by my obligations to that department in how I could respond to this criticism. Now that I'm not at Harvard, I should like to take the opportunity to set the record straight.
It is indeed true that I have long regarded the Report's various rankings of Harvard as misleading, but I was never out to "defend" Harvard. (The smoking gun Leiter claimed was found—an article in the Harvard newspaper featuring a quote from Gisela Striker and a remark from me to the effect that, yes, the Report has influence—is laughably unimpressive.) In some respects, yes, I think Harvard has sometimes been badly under-ranked. For example, Harvard was producing some very strong epistemologists—Adam Leite and Tom Kelly are two—during a period when it did not even appear on the epistemology rankings. My first contact with Leiter, in fact, consisted of a letter in which I bemoaned this fact and argued that the presence of Bob Nozick and Jim Pryor, with ample support available elsewhere, ought to have garnered us at least a mention. (Harvard was mentioned in the next year's rankings, as it happens.) Bob, I suppose, was overlooked because he hadn't worked in the area for some time, and Jim was overlooked because he was young. That's precisely the sort of combination that gets one overlooked, and students interested in epistemology may have been discouraged from attending Harvard by its absence from the list, to what might have been their loss. (With Bob's untimely death and Jim's move to Princeton, such students might have been better off elsewhere, in the end, but that's the sort of thing that can happen at any department.)
In other respects, I think Harvard has been over-ranked, in large part because it has benefited from the very "halo effect" that some supporters of the Report see it as counteracting. The idea that presenting lists of faculty without naming the department counteracts the "halo effect" is simply silly. Departments with illustrious histories will benefit from them in the rankings whether they are explicitly named or not. Anyone who doesn't know which department is Harvard, which Princeton, which Yale, which Rutgers, which Stanford, which UCLA, and which Columbia, has no business filling out the survey. Perhaps "not includ[ing] the name of the university with the faculty lists [is] beneficial in forcing evaluators to respond to the current faculty" (PGR). That is, perhaps it has some effect, but I know of no reason to believe it has much of one. To the contrary, the much discussed "staying power" of traditionally strong departments even after significant deaths, retirements, and departures is strong evidence that it has little. But lest I be accused next of sour grapes, I should probably say no more.
2Leiter apparently takes some satisfaction in the fact that the link to the original site has been removed from the Harvard philosophy department's website. I removed it, before handing the site over to its new maintainer, since I knew the link was about to go dead. (Try visiting emerson.fas.harvard.edu. The machine that used to have that URL now lives at frege.brown.edu.)
3Leiter says he didn't make any changes in response to criticisms of the Report. I'll leave it to others to speculate about whether those criticisms might have had some effect via other routes, such as via members of the Advisory Board who thought some of the criticisms had some merit but who expressed them more gently than I did. (If I had it to do over, I'd be a lot more gentle.)
4Brian Weatherson has some very nice things to say about why a department like MIT might be under-ranked. (It seems to have been Brian who first realized that it was the Report's treatment of MIT that had gotten under my skin and driven me to act.)
5To see any significance whatsoever in the facts that there is some positive correlation between the Report's rankings and research quality, and some positive correlation between research quality and quality of graduate education, one would again have to commit a simple statistical fallacy: correlation is not transitive, so these two correlations, by themselves, imply nothing about how the Report's rankings are related to the quality of graduate education.
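A toy simulation makes the point vivid. The variables below are pure fabrications with no connection to the Report's data; they show only that two positive correlations can coexist with a negative third.

```python
# A toy simulation, with made-up variables unrelated to the Report's data,
# showing that positive correlation is not transitive.
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.normal(size=(3, 100_000))  # three independent noise sources
rankings = u + v     # stand-in for the Report's rankings
research = v + w     # stand-in for research quality
education = w - u    # stand-in for quality of graduate education

def corr(a, b):
    """Pearson correlation between two samples."""
    return round(float(np.corrcoef(a, b)[0, 1]), 2)

print(corr(rankings, research))    # ~0.5: positive
print(corr(research, education))   # ~0.5: positive
print(corr(rankings, education))   # ~-0.5: negative!
```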
6I should note that there are some who deny there is any such bias, but I don't see how one could seriously defend such a position, given the survey's methodology. The fact that there are others who think that, even if there is such a bias, it's not objectionable is enough to make one start worrying about self-knowledge.
7In the 2001 Report, for example, small departments got an extra tenth of a point.
8Kieran Healy's analysis of the Report's data uncovered an unusually high degree of consensus among those responding to the survey. As he had no access to other data, he was of course unable to determine to what extent that consensus was an artifact of how the respondents were selected. The issue was raised in the discussion that followed, however, and interested parties will find it makes good reading.
9It does so, of course, because it influences who is asked to complete the survey, what departments are represented, and so on and so forth. For some discussion, see John Hartmann's comments on Leiter's treatment of continental programs.