Who should be included in and who should author scholarly impact assessments?

"Everyone heading for the top / But tell me how far is it from the bottom / Everybody want to go to heaven / But nobody want to die. “ Peter Tosh

Following the initiation of my research project on citations with Michael Marsh, a lively debate has taken place over whether and how scholarly impact should be measured and compared in the study of politics in Ireland. Some of this debate is visible on this website; some has occurred in private communications between me, Michael, and many individual scholars; some has occurred in meetings of research committees at different universities; other debates have taken place at meetings of the PSAI executive committee and, most recently, at a meeting of heads of politics departments in Ireland (May 23, 2008). What we would like to do here is to address a few main questions concerning the exercise.

Q: Is it the prerogative of an individual scholar to “opt out” of the exercise?

A: No. Our data come from publicly available sources on publications, which are by definition (and etymology) public, and anyone has the right to analyze them.

First, the research was conducted on the basis of publicly available data, and anyone has the right to analyze these data. Just as no TD could demand to be omitted from a study of legislative activity based on openly published data, individual scholars employed at public universities in Ireland have no basis on which to demand to be omitted from the exercise. We might add here that, much like other public servants, the scholars included in our exercise are all permanent employees with very secure employment, in a higher-education sector that is among the best-paid and best-pensioned in the world. Second, as a profile of research impact in Ireland, the analysis would be invalidated as a comparison of scholarship if those included were selected according to their levels of impact. (Since those with the lowest measured impact have the greatest incentive to be omitted from the exercise, this is exactly what we suspect would happen.) So unless we held a purely randomly drawn lottery to remove individual scholars, allowing (self-)selected opt-outs would invalidate our exercise.
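To illustrate the selection-bias point, the following is a minimal sketch with made-up citation counts (purely hypothetical, not drawn from our data): if the least-cited scholars opt out, the remaining group's average impact is inflated, whereas a purely random lottery leaves the comparison roughly unbiased.

```python
import random
import statistics

# Hypothetical illustration only: a "department" of 20 scholars with
# made-up citation counts. Compare the mean citation count when
# (a) the five least-cited scholars opt out, versus (b) five scholars
# are removed purely at random.
random.seed(42)
citations = sorted(random.randint(0, 200) for _ in range(20))

full_mean = statistics.mean(citations)

# (a) Self-selected opt-outs: those with the lowest measured impact leave.
self_selected_mean = statistics.mean(citations[5:])  # drop the 5 lowest

# (b) Random lottery: five scholars removed irrespective of impact.
random_mean = statistics.mean(random.sample(citations, 15))

print(f"Full group mean:                 {full_mean:.1f}")
print(f"After self-selected opt-outs:    {self_selected_mean:.1f}")  # biased upward
print(f"After random removals:           {random_mean:.1f}")  # close to full mean, on average
```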

Q: Is it the prerogative of a department to “opt out” of the exercise?

A: No, in the same way that individual permission is not required. Furthermore, we feel there is a significant first-mover advantage to starting this (inevitable) sort of exercise within our own ranks, rather than waiting for it to be imposed by external forces (such as the HEA).

For the same reasons that apply to individuals, the research impact data we use come from public sources and are open to analysis by anyone. Our analysis of scholarly impact at the level of an academic department is simply an aggregation of those data based on publicly available information, usually the department's own website listing the permanent members of its staff.
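As a sketch of what that aggregation amounts to, the snippet below uses entirely hypothetical scholars, departments, and citation counts; it is illustrative only and does not reproduce our paper's figures or sources.

```python
from collections import defaultdict

# Hypothetical illustration only: per-scholar citation counts (any publicly
# sourced figures would do), paired with the department that lists each
# scholar as a permanent member of staff on its own website.
scholar_citations = {
    "Scholar A": 120, "Scholar B": 45, "Scholar C": 310,
    "Scholar D": 15,  "Scholar E": 88,
}
department_of = {
    "Scholar A": "Dept X", "Scholar B": "Dept X", "Scholar C": "Dept Y",
    "Scholar D": "Dept Y", "Scholar E": "Dept Y",
}

# Department-level impact is simply an aggregation of the individual figures.
totals = defaultdict(int)
counts = defaultdict(int)
for scholar, cites in scholar_citations.items():
    dept = department_of[scholar]
    totals[dept] += cites
    counts[dept] += 1

for dept in sorted(totals, key=totals.get, reverse=True):
    print(f"{dept}: total citations = {totals[dept]}, "
          f"mean per scholar = {totals[dept] / counts[dept]:.1f}")
```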

Furthermore, we feel that assessing the scholarly impact of the study of politics in Ireland is a positive step toward highlighting its success, and that aggregating to departments provides a natural basis for this profile to be developed. Selectively excluding departments would undermine the goal of promoting Irish scholarly activities by limiting the scope of the exercise.

Whether departments like it or not, some form of impact assessment seems inevitable in Ireland, and by showing that we can rank ourselves, and conduct a lively and healthy debate on how assessment should be performed, from within our own ranks, we have a better chance of staying on top of this process than of having it thrust upon us. Citation-based measures are certainly not the only method of assessing impact, but by putting forward bibliometric measures as one method – as our paper does – we have started the debate and focused attention on the strengths and limits of this particular method.

We would also add that numerous comparative bibliometric exercises covering all departments in a given context have already been written or published elsewhere:

  • Dale, Tony and Shaun Goldfinch. 2005. "Article Citation Rates and Productivity of Australasian Political Science Units 1995-2002." Australian Journal of Political Science 40(3, Sept): 425-434.
  • Hix, Simon. 2004. "A Global Ranking of Political Science Departments." Political Studies Review 2(3): 293-313.

As well as in Ireland specifically (including economics):

  • Elgie, Robert and Iain McMenamin. 2008. "Journal Publications from Politics Departments in Ireland 2003-2007: An Update Using the Hix Method." Dublin City University manuscript.
  • Ruane, Frances and Richard S. J. Tol. 2007. "Centres of Research Excellence in Economics in the Republic of Ireland." Economic and Social Review 38(3, Winter): 289-322.
  • Coupé, Tom and Patrick Paul Walsh. 2003. "Quality Based Rankings of Irish Economists 1990-2000." Economic and Social Review 34(2, Summer/Autumn): 145-149.

Q: Should our review exclude departments outside the Republic of Ireland?

A: This is solely a question of whether including such departments improves our paper, not a political question in which the boundaries of the Irish national university system prevent us from comparing scholars or departments with those in other national university systems.

In fact, we feel that there are sound reasons to wish to compare Irish politics departments to departments in other contexts, since this provides an external benchmark for the scholarly performance of Irish politics departments.

Previous studies (e.g. Hix 2004, Dale and Goldfinch 2005) have compared departments from different national systems, and other rankings of institutions and departments (e.g. the Times Higher Education Supplement) regularly perform international comparisons. Our study would hardly be unique in this regard.

A sensible question arises as to whether any ranking of individual political scientists is meaningful in a selectively international context, and here we would tend to agree that the comparison should either be restricted to a meaningful boundary for inclusion or broadened to a Europe-wide or fully international focus.

Q: Does the fact that the authors appear at the top of the rankings in their own paper not impugn the credibility of the exercise?

A: It should not, because the rankings are based on completely open, publicly available criteria, and not created, manipulated, transformed, or selectively applied by the authors.

In other words, any other scholar would be able to reproduce these results using the methods we have carefully and explicitly detailed in our draft paper, and which we have also listed publicly so that others can verify that the authors' citation rates are what we claim. We welcome any scrutiny in this regard.

Turned around, would the report's conclusions be more credible if the two scholars at the bottom of the rankings had undertaken this exercise? Putting aside the fact that two scholars with zero citations on any ranking would never undertake such an exercise, as long as the method is public and can be replicated, the identity or ranking position of the authors should have no bearing on the conclusions drawn. In other words, the authors, whether from the bottom, the top, or a randomly chosen position in the rankings, should reach exactly the same conclusions.

We do agree, however, that it would be preferable from many standpoints - certainly from that of not attracting the ire of the bulk of the Irish political science community! - had an independent, third-party commission undertaken this review. But as far as we know, no such commission has been awarded, nor is one planned to be tendered, for a study such as ours any time soon. This is no doubt why political scientists in Australasia and economists in Ireland and the Netherlands (see citations above) have conducted such studies themselves, just as we have.

Ken Benoit
Professor of Computational Social Science
