RAE 2008: Results and rankings

RAE 2008 results are now out (effective 18 December 2008)

There are many, many ways to calculate rankings from the data, but arguably the most authoritative and convincing one comes from Research Fortnight:

Research Fortnight Power Rankings 2008

1 Oxford
2 Cambridge
3 UCL
4 Manchester
5 Edinburgh
6 Imperial
7 Nottingham
8 Leeds
9 Sheffield
10 Bristol
11 King’s College
12 Birmingham
13 Southampton
14 Glasgow
15 Warwick
16 Cardiff
17 Newcastle
18 Liverpool
19 Durham
20 Queen Mary

The Times Higher rankings can be found here. They are using a Grade Point Average (ie no direct indication of volume). The Guardian’s calculations are here. They are not very different from THE’s and again use GPA, which shows excellent performance for institutions with slightly smaller but strong submissions, including Essex, Warwick and York. All of the tables show a very good improvement by Queen Mary in particular, but also by Nottingham.
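
To make the GPA-versus-volume distinction concrete, here is a minimal sketch in Python using entirely made-up quality profiles and FTE counts (not RAE 2008 data, and not Research Fortnight’s exact formula). It simply contrasts a pure GPA ranking with a volume-weighted “research power” ranking computed as GPA multiplied by the FTE staff submitted.

```python
# Illustrative sketch only: hypothetical institutions and quality profiles,
# not RAE 2008 data. A quality profile gives the percentage of submitted
# work judged 4*, 3*, 2*, 1* and unclassified.
institutions = {
    # name: (quality profile, FTE staff submitted)
    "Large Uni A": ((25, 40, 25, 10, 0), 2000),
    "Small School B": ((35, 40, 20, 5, 0), 450),
    "Mid Uni C": ((20, 35, 30, 15, 0), 900),
}

def gpa(profile):
    """Grade point average: weight 4*, 3*, 2*, 1*, unclassified work at 4, 3, 2, 1, 0."""
    weights = (4, 3, 2, 1, 0)
    return sum(w * p for w, p in zip(weights, profile)) / 100

def research_power(profile, fte):
    """Volume-weighted score: GPA multiplied by FTE staff submitted."""
    return gpa(profile) * fte

print("Ranked by GPA (quality only):")
for name, (profile, _) in sorted(institutions.items(), key=lambda kv: -gpa(kv[1][0])):
    print(f"  {name}: GPA {gpa(profile):.2f}")

print("Ranked by research power (GPA x FTE):")
for name, (profile, fte) in sorted(institutions.items(), key=lambda kv: -research_power(*kv[1])):
    print(f"  {name}: power {research_power(profile, fte):.0f}")
```

With these made-up numbers the small, high-scoring institution tops the GPA table but falls to the bottom on research power, which is essentially the trade-off debated in the comments below.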

Other analysis is awaited…

11 responses to “RAE 2008: Results and rankings”

  1. Creating league tables based on GPA, without consideration of volume of research, distorts the tables considerably.

    Knowing that tables were to be constructed in this way has encouraged some institutions to refuse to enter early career researchers who would reduce their GPA.

    Universities that made the decision to support the development of early career researchers by encouraging them to enter the RAE process are unfairly penalised by a reduced GPA.

    Volume of research activity, calculated by multiplying the GPA by the FTE count, would be a much better basis for the tables.

  2. “Volume of research activity, calculated by multiplying the GPA by the FTE count, would be a much better basis for the tables.”

    But this would make it nigh-on impossible for the smaller research-intensive universities to perform well in the tables. Why should highly successful institutions (ie majority of staff producing world-leading or internationally excellent research) be penalised just because they have fewer staff in total?

  3. I have to agree with the second comment.

    The “Power Ranking” table is merely a record of the size, not the quality, of institutions. In the RAE GPA tables you will find that the London School of Economics is joint second with Oxford (excluding specialist institutions) and leads the way in 4* research with 35% to Oxbridge’s shared 32%. Also, the LSE is a research-intensive institution, with “LSE submitt[ing] over 90 per cent of eligible staff for assessment” – that is a quote taken from the Director of the LSE on the LSE site.

    Thus we have an institution with the highest percentage of 4* research and the joint second highest GPA not making the top 20 on the Power Ranking because it is a specialised Social Science School with a smaller overall faculty (it entered 490 staff compared to Oxford’s 2246).

    If that’s not bias I don’t know what is.

  4. ‘The Times Higher rankings can be found here. They are using a Grade Point Average (ie no direct indication of volume).’

    I’m sorry to disagree, but unless I’m mistaken, under the methodology section of The Times Higher article it says:

    ‘The contextual column listing the indicative proportion of RAE-eligible staff submitted is an attempt to show how selective universities have been in choosing which academics to enter into the RAE. It is calculated by dividing the total number of staff that an institution submits to the RAE by the number of academic staff at the institution within the grades “professors”, “senior lecturers and researchers” and “lecturers”, according to the latest published data available from the Higher Education Statistics Agency (Resources of Higher Education Institutions, 2006-07)…’

    Unless I’ve misunderstood, The Times Higher ranking is not, therefore, calculated purely on GPA. At any rate, I completely agree with the posters above.

  5. The actual calculation is a GPA. The “contextual column” is just that, contextual. It also seems to be far from precise, given the number of institutions credited with entering more than 100% of their staff.

  6. Research Fortnight should be ashamed to publish such misinformation. The method that has been used permits low-quality research activity in large departments to be highly ranked. Conversely, some of the UK’s best small departments score badly simply because they are small. In short, the statistics on which the league table is based are fundamentally flawed if one really wants to rank quality and not quantity. The methods used narrow the quality scale as measured by the RAE and over-weight the outcome by staff numbers. Their understanding of quality seems to be minimal and they have seriously reduced my faith in their future credibility.

  7. It is, of course, far from perfect, but the Research Power calculation does at least take some account of scale or critical mass. A GPA takes no account of this and, in the context of the RAE, arguably favours those institutions which have been much more selective in their returns. It does, though, as has rightly been noted, disadvantage high-quality, more specialist institutions.

    If we did have an accurate representation of percentage return, this would possibly provide a more satisfactory means of indicating relative strength in depth, but we don’t.

  8. RAE = waste of money

    After wasting £1 billion or £2 billion of valuable taxpayers’ money, we get a result that we all knew before the assessment, ie that Oxford, Cambridge and Imperial are at the top and that universities like Heriot-Watt and Luton are at the bottom.

    When are universities going to stop wasting taxpayers’ money?

  9. The GPA approach takes no proper account of research intensity, whereas the research power rankings do seem, conversely, to be a rather simplistic measure of quantity over quality. As accurate figures on submitted staff as a proportion of eligible staff seem to be unobtainable, it is not going to get any more meaningful. Result: RAE 2008 is capable of far more ‘spin’ than the old RAE methodology, which generated a single ranking per UoA.

    In response to Graham Seed’s comment, the answer to his question would seem to be when the government, through the funding councils, stops imposing this nonsense on the sector! The universities did not, as I recall, ask for this massively wasteful exercise. Though I do also think the sector could be more robust in questioning the continuing point of it all.
