Graduate School Philosophy Placement: The Leiter Report


The Match: Placement Records, The Leiter Report, and Kind of Placement

Does having a good faculty (Leiter Report) ranking mean that a school will also have a good placement record? In my initial take on this question, I concluded that there is virtually no correlation between faculty rankings and tenure-track/permanent placements. That conclusion was hasty, however, as many have pointed out. Schools change faculty frequently. As faculty quality rises or falls, the faculty ranking rises or falls with it, and very likely the tenure-track/permanent placement ranking does too. And this changes from year to year.

I had compared tenure-track/permanent placement rankings based on data from 2000 to 2013 to faculty rankings from 2011 only. That was unfair. Either I needed to compare a school's overall tenure-track/permanent placement ranking to its overall faculty ranking, or I needed to compare its placement rank by year to its faculty rank by year, rather than mixing the two approaches. In my second attempt, I corrected that mistake by comparing average overall faculty ranks with average overall tenure-track/permanent placement ranks. Correlating the two, I found that faculty rank explained roughly 25% of both the initial and the current tenure-track/permanent placement ranks, which still left about 75% of the placement rank explained by other, unknown factors. I concluded that the overall quality of a school's faculty does have an impact on its overall placement success, but it accounts for only about a quarter (25%) of that success. The other 75% is determined by other, unknown factors.
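
For readers wondering where the "percent explained" figures come from: throughout this post they are the squared correlation coefficients (the usual "variance explained" reading). A minimal illustration with a made-up value:

```python
# Minimal illustration (made-up value): "percent explained" = squared correlation.
r = 0.5             # illustrative correlation between faculty rank and placement rank
r_squared = r ** 2  # coefficient of determination ("variance explained")
print(f"r = {r}, r^2 = {r_squared:.0%}")  # prints: r = 0.5, r^2 = 25%
```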

Again, this result was criticized. It was argued (see comments below) that "placement is backwards-looking and faculty quality is, at least as far as placement goes, forward-looking. If one is looking at placement from 2000-2013, it would be most meaningful to look at PGR ranking from, say, 1995 to 2008, since the students placed 2000-2013 would have been looking at schools during that period." This point is well taken. In theory, the best students apply to the best-ranked schools at the time of their application, so their placements will not show up until 5-7 years after they apply. Consequently, it makes sense to add a time delay between the tenure-track/permanent placement rankings and the Leiter Report rankings, and I do so in the following analysis.

The overall assumption behind the Leiter Report seems to be that if you go to a school with well-ranked faculty, you will get a better placement in both the short term and the long term, and hence have a more successful philosophy career overall, than if you had gone to a school with lower-ranked faculty. I wish to test that assumption.

If the assumption is correct, schools with strong overall faculty ranks should, in general, have strong tenure-track/permanent placement ranks as well, and schools with weak overall faculty ranks should, in general, have weak placement ranks. Is this the case?

Since the Leiter rankings need to precede the tenure-track/permanent placement rankings by 5-7 years, I will run three analyses for the moment:

1. Comparing US/CA faculty ranks from 2002-2008 to tenure-track/permanent placement rankings from 2007-2013 (5-year gap)

I found the average US/CA rankings from 2002-2008 (using the English world rankings and then re-ranking) and compared these with the tenure-track/permanent placement rankings from 2007-2013. In the initial tenure-track/permanent placement ranking, I found a correlation coefficient of 0.54, meaning that faculty rank explains roughly 29% of the initial tenure-track/permanent placement ranking.

 

In the current tenure-track/permanent/tenured placement ranking, I found a correlation coefficient of 0.47, meaning that the faculty rank roughly explains about 22% of the current tenure-track/permanent placement ranking.  However, one can see the formation of a linear trend along with some outliers.  The red line shows a perfect correlation while the blue line shows the actual correlation. 

What effect does removing the 6 or so very obvious outliers from the dataset have on the correlation? The correlation coefficient goes up to 0.74, meaning that the faculty ranking explains 56% of the current tenure-track/permanent placement ranking (without the outliers).
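
As a rough sketch of this with-and-without-outliers comparison (the rank pairs below are made up for illustration and are not the actual data behind these figures):

```python
# Sketch: how removing a handful of outlying schools changes the rank correlation.
# The (faculty_rank, placement_rank) pairs are illustrative, not the real data.
from scipy.stats import pearsonr

pairs = [(1, 4), (2, 5), (3, 6), (4, 7), (5, 9), (6, 8), (22, 1), (29, 2), (34, 3)]
outliers = {(22, 1), (29, 2), (34, 3)}  # schools flagged as obvious outliers

def rank_correlation(points):
    faculty, placement = zip(*points)
    r, _ = pearsonr(faculty, placement)
    return r

r_all = rank_correlation(pairs)
r_trimmed = rank_correlation([p for p in pairs if p not in outliers])
print(f"with outliers:    r = {r_all:.2f}, r^2 = {r_all**2:.0%}")
print(f"without outliers: r = {r_trimmed:.2f}, r^2 = {r_trimmed**2:.0%}")
```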

 

Which schools were the outliers, with fairly average overall faculty ranks for this period but excellent current tenure-track/permanent placement ranks? Yale University had an average faculty rank of 22 but the best TT/permanent placement rank; the University of Massachusetts, Amherst had an average faculty rank of 29 but the second-best TT/permanent placement rank; and the University of Washington had an average faculty rank of 34 but the third-best TT/permanent placement rank. Here are all of the outliers:

[Table: complete list of outlier schools]

2. Comparing US/CA faculty ranks from 2002-2007 to tenure-track/permanent placement rankings from 2008-2013 (6-year gap)

I found the average US/CA rankings from 2002-2007 (using the English world rankings and then re-ranking) and compared these with the tenure-track/permanent placement rankings from 2008-2013. In the initial tenure-track/permanent placement ranking, I found a correlation coefficient of 0.56, meaning that faculty rank explains roughly 31% of the initial tenure-track/permanent placement ranking. In the current tenure-track/permanent/tenured placement ranking, I found a correlation coefficient of 0.50, meaning that faculty rank explains roughly 25% of the current tenure-track/permanent placement ranking. However, one can again see a linear trend forming, along with some outliers (although they are starting to look less like outliers).

What effect does removing the 6 or so very obvious outliers from the dataset have on the correlation? (These happen to be the same schools as above.) The correlation coefficient goes up to 0.76, meaning that the faculty ranking explains 59% of the current tenure-track/permanent placement ranking (without the outliers).

 

3. Comparing US/CA faculty ranks from 2002-2006 to tenure-track/permanent placement rankings from 2009-2013 (7-year gap)

I found the average US/CA rankings from 2002-2006 (using the English world rankings and then re-ranking) and compared these with the tenure-track/permanent placement rankings from 2009-2013. In the initial tenure-track/permanent placement ranking, I found a correlation coefficient of 0.55, meaning that faculty rank explains roughly 31% of the initial tenure-track/permanent placement ranking. In the current tenure-track/permanent/tenured placement ranking, I found a correlation coefficient of 0.49, meaning that faculty rank explains roughly 24% of the current tenure-track/permanent placement ranking. Once again there is a linear trend with some outliers, though this time they look less like outliers, since other values are approaching them and filling the gap in between.

 

What effect does removing the same schools as before from the dataset have on the correlation? The correlation coefficient goes up to 0.78, meaning that the faculty ranking explains 57% of the current tenure-track/permanent placement ranking (without the outliers).

 

Conclusion:

Based on the foregoing analysis, it appears that faculty rank does not matter much for initial placement into a tenure-track/permanent position. Faculty rank explains roughly 30% of this placement, meaning that about 70% is explained by other factors. However, since we should probably expect students from highly ranked schools to take post-docs, research positions, and lecturing positions at highly ranked schools, we should not read too much into this. On the other hand, we should expect these students eventually to end up at strong schools in tenure-track/permanent/tenured positions. Yet faculty rank appears to explain only about 57% of a school's current placement into tenure-track/permanent/tenured positions, and that is after removing the outliers; with the outliers included, the figure drops to roughly 24%.

While this does not account for the quality of placement, it is a significant result regarding the kind of placement. For students who are mainly concerned with getting a tenure-track/permanent/tenured position, this analysis shows that faculty rank isn't everything. Students can go to less prestigious schools and still get a tenure-track/permanent/tenured position in the long run. In fact, some schools appear to be very good at this sort of placement even though they aren't ranked as highly (e.g., the University of Massachusetts, Amherst, Northwestern University, and Johns Hopkins University).

This is not to say that faculty rank doesn't matter. It clearly does: it explains a little over half of the placement. However, that still leaves nearly half unexplained…

   

The Match: Placement Records, The Leiter Report, and Quality of Placement

How well do faculty rankings correlate with the quality of placement? If a student goes to a highly ranked program, will that student be placed at a highly ranked school as a post-doc, lecturer, or tenure-track professor?

I used the average faculty overall English World Rank from 2002-2013.  I then compared this rank with the Prestige rankings (both initial and current) for top MA department placements, top PhD department placements, top US News National Universities, and top US News Liberal Arts Colleges.  Here are the results:

1. Initial MA department placement rank score 2002-2013 vs. re-ranked average faculty overall English World Rank from 2002-2013

The correlation between the initial MA department placement rank score for 2002-2013 and the re-ranked average faculty overall English World Rank from 2002-2013 (i.e., the average faculty overall English World Rank from 2002-2013, re-ranked and including only those departments with an initial MA rank score to normalize the comparison) is 0.178.  This means that the faculty rank only explains about 3% of the initial MA placement rank, which basically means there is virtually no correlation at all.
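
For concreteness, here is a minimal sketch of the re-ranking/normalization step described above, with hypothetical column names and made-up values standing in for the placement database:

```python
# Sketch: keep only departments that have an initial MA placement rank score,
# then re-rank their average faculty ranking within that subset before correlating.
import pandas as pd

df = pd.DataFrame({
    "school": ["A", "B", "C", "D", "E"],
    "avg_faculty_rank_2002_2013": [3.0, 10.5, 27.0, 41.0, 18.0],  # lower = better
    "initial_ma_placement_score": [0.2, None, 1.4, 0.9, None],    # None = no MA placements
})

subset = df.dropna(subset=["initial_ma_placement_score"]).copy()
subset["faculty_rank_reranked"] = subset["avg_faculty_rank_2002_2013"].rank(method="min")

r = subset["faculty_rank_reranked"].corr(subset["initial_ma_placement_score"])
print(subset)
print(f"Pearson r = {r:.3f}  (r^2 = {r**2:.1%})")
```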

[Scatter plot: initial MA department placement rank vs. re-ranked average faculty rank]

2. Current MA department placement rank score 2002-2013 vs. re-ranked average faculty overall English World Rank from 2002-2013

The correlation between the current MA department placement rank score for 2002-2013 and the re-ranked average faculty overall English World Rank from 2002-2013 is 0.071. This means that the faculty rank explains essentially none (well under 1%) of the current MA placement rank.

[Scatter plot: current MA department placement rank vs. re-ranked average faculty rank]

3.  Initial PhD department placement rank score 2002-2013 vs. re-ranked average faculty overall English World Rank from 2002-2013

The correlation between the initial PhD department placement rank score for 2002-2013 and the re-ranked average faculty overall English World Rank from 2002-2013 is 0.81.  This means that the faculty rank explains 65% of the initial PhD Department Placement rank.  In the graph below, it is very easy to see the linear trend relating faculty rank and PhD placement rank.  For the most part, as the faculty rank increases, the PhD Department Placement rank increases.

[Scatter plot: initial PhD department placement rank vs. re-ranked average faculty rank]

4.  Current PhD department placement rank score 2002-2013 vs. re-ranked average faculty overall English World Rank from 2002-2013

The correlation between the current PhD department placement rank score for 2002-2013 and the re-ranked average faculty overall English World Rank from 2002-2013 is 0.73.  This means that the faculty rank explains 53% of the current PhD Department Placement rank.

[Scatter plot: current PhD department placement rank vs. re-ranked average faculty rank]

5. Initial US News National University placement rank score 2002-2013 vs. re-ranked average faculty overall English World Rank from 2002-2013

The correlation between the initial US News National University placement rank score for 2002-2013 and the re-ranked average faculty overall English World Rank from 2002-2013 is 0.74.  This means that the faculty rank explains 54% of the initial US News National University placement rank.

[Scatter plot: initial US News National University placement rank vs. re-ranked average faculty rank]

6.  Current US News National University placement rank score 2002-2013 vs. re-ranked average faculty overall English World Rank from 2002-2013

The correlation between the current US News National University placement rank score for 2002-2013 and the re-ranked average faculty overall English World Rank from 2002-2013 is 0.72.  This means that the faculty rank explains 52% of the current US News National University placement rank.

[Scatter plot: current US News National University placement rank vs. re-ranked average faculty rank]

7. Initial US News Liberal Arts College placement rank score 2002-2013 vs. re-ranked average faculty overall English World Rank from 2002-2013

The correlation between the initial US News Liberal Arts College placement rank score for 2002-2013 and the re-ranked average faculty overall English World Rank from 2002-2013 is 0.18. This means that the faculty rank explains only about 3% of the initial US News Liberal Arts College placement rank.

[Scatter plot: initial US News Liberal Arts College placement rank vs. re-ranked average faculty rank]

8. Current US News Liberal Arts College placement rank score 2002-2013 vs. re-ranked average faculty overall English World Rank from 2002-2013

The correlation between the current US News Liberal Arts College placement rank score for 2002-2013 and the re-ranked average faculty overall English World Rank from 2002-2013 is 0.16. This means that the faculty rank explains only about 2% of the current US News Liberal Arts College placement rank.

[Scatter plot: current US News Liberal Arts College placement rank vs. re-ranked average faculty rank]

Conclusions:

What does this show us? The faculty rank of a school correlates only weakly, if at all, with that school's job placement ranking into strong MA departments in philosophy or strong Liberal Arts colleges. However, faculty rank does appear to be fairly well correlated with that school's job placement ranking into strong PhD departments in philosophy and strong National Universities, accounting for over 50% of the placement ranking.

While these results could be interpreted in many different ways, here is my take.  It seems unlikely that graduates coming from great PhD programs could get jobs at great National Universities and in great PhD philosophy departments, but have a difficult time getting jobs in great Liberal Arts colleges or in great MA departments in philosophy.  What seems more likely is that jobs in great PhD departments and in national universities are perceived as the most desirable (from a career perspective) and the most competitive, so the best philosophy graduates seek and attain these job positions. 

In contrast, jobs at strong MA departments in philosophy and at strong Liberal Arts colleges may seem less desirable (from a career perspective), so these jobs go to a wider range of graduates. Some go to graduates from highly ranked programs who are more interested in teaching, living close to family, or living in a particular location than in working at a highly ranked university. Others go to graduates from lower-ranked programs who could not find a job at a prestigious university or department.

These are only speculations, as there are many factors that govern an individual's decision about where to take a job. However, it seems reasonable to conclude, based on the foregoing data, that, at most, approximately 56% of a philosophy department's placement quality is determined by its faculty ranking. That is, at least 44% of a philosophy department's placement quality is determined by other factors.

 

Missing: What Else Matters?

Taken together, we can see that only about half of one's job prospects, in terms of both the quality and the kind of placement, can be explained by the faculty rank of the school one attends for a PhD in philosophy.

What is missing?

I suspect that other factors like salary, general location, closeness to family, and other more personal considerations make up some of the difference. However, there are other, more directly comparable factors that do seem important. For example, perhaps a school's overall faculty rank isn't as important as that school's rank in particular areas of specialty: if a school's specialty strength matches the student's specialty, that student will place well. To test this, I matched the subfields from the "Breakdown by Specialty" in The Leiter Report with the primary area of study for each student. Then I selected only those students whose PhD school had a Group Number of 1 or 2 in their area of specialty (thus, these students were going to the best schools for their chosen subfield in philosophy). There were approximately 1,400 students who fit this description. How did they place?

They did place better, although not by a whole lot. Since 2000, 43% of these students (compared to 39% of students overall) received a tenure-track or permanent position as their initial placement.

 

In terms of current placement since 2000, 60% of these students (compared to 54% of students overall) have acquired permanent positions in academic philosophy.
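
As a rough sketch of how that subfield-matched comparison could be computed (the column names and rows below are hypothetical stand-ins for the placement database and the Leiter specialty groupings):

```python
# Sketch: compare placement rates for students whose PhD program was in
# Leiter Group 1 or 2 for their primary specialty against the whole pool.
# All data here are hypothetical.
import pandas as pd

students = pd.DataFrame({
    "specialty":               ["ethics", "logic", "ethics", "mind", "language"],
    "school_specialty_group":  [1, 3, 2, 4, 1],  # Leiter group of the PhD school in that specialty
    "initial_tt_or_permanent": [True, False, True, False, True],
    "current_permanent":       [True, True, True, False, True],
})

matched = students[students["school_specialty_group"] <= 2]

for label, pool in [("all students", students), ("specialty-matched (Group 1-2)", matched)]:
    print(f"{label}: initial TT/permanent {pool['initial_tt_or_permanent'].mean():.0%}, "
          f"current permanent {pool['current_permanent'].mean():.0%}")
```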

 

Taken altogether, this means that if you are applying to graduate schools in philosophy, are deciding where to apply or enroll, and are concerned about your placement prospects after graduation, you should consider (1) how well a school ranks overall, (2) how well it ranks in your chosen specialty, and (3) how well it places students overall. All three are important components of setting yourself up for a successful career in philosophy.

Moving Forward: What Next?

We at Philosophy News are working on making this analysis more nuanced. We will continue rounding out the analysis, responding to criticisms, and updating the data to give the fullest possible picture of current graduate placement in philosophy.

Thanks for your patience.

Andy Carson

Philosophy News
