Do we need a league table of scholars produced by Silicon Valley?

For Lawrie Phipps, Donna Lanclos, and Richard Watermeyer, a new ranking of individual academics is a troubling development in a changing academic landscape

Lawrie Phipps is Senior Research Lead at Jisc and a visiting professor of digital leadership at the University of Chester.

Donna Lanclos is a senior research fellow at Munster Technological University.

Richard Watermeyer is Professor of Higher Education and Co-Director of the Centre for Higher Education Transformations at the University of Bristol.

One recent afternoon, one of us received an email from a business unknown to us, ScholarGPS. It offered congratulations on our “exceptional scholarly performance” and on being placed “in the top 0.05 per cent of all scholars worldwide”.

Our initial reaction was largely indifference: we dismissed (and deleted) it as predatory junk mail.

But as colleagues in other countries publicly shared that they had received the same message, we collectively decided to dig a little deeper.

What is this ranking?

ScholarGPS appears to be a system that uses data mining and data scraping to rank individual scholars on metrics of productivity, quality, and impact. According to the company (we’re not linking to it; we are certain you, our readers, can find it), it profiles approximately 30 million scholars, drawing connections between citations, collaborations, and institutional affiliations to create a hierarchical system of academic performance.

As a recent addition to the ongoing marathon of metricisation, ScholarGPS exemplifies a pre-existing trend.

It aims to enhance scholar “visibility” by ranking individual academics on quantitative metrics. It is no secret that in academia, rankings, metrics, and data-driven assessments of research often dominate the evaluation of individual and institutional performance, and drive the institutional chasing of “excellent” ratings (as in the Research Excellence Framework). While such systems are alleged to offer objective measures of success, they are frequently reductive and exploitative instead. We would point, for example, to cogent critiques of the very notion of “excellence”, as well as to inventories of the cost that individualism, and a lack of solidarity, impose on the academic sector as a whole.

The prestige economy

The suggestion that people participate in individual league tables of scholarship has appeal in a context where academia functions as a prestige economy, in which the number of publications and the visibility of research output are inextricably linked to perceived success, and so to professional advancement. ScholarGPS and similar platforms offer scholars not just metrics, but validation and an (illusory) sense of security amid growing fears of job precarity and the relentless pressures of academic performance. ScholarGPS’s three main metrics – productivity, quality, and impact – become currency within this prestige economy, enabling scholars to acquire badges of “excellence” with which to promote themselves and (in theory) develop their careers.

Rankings can create the illusion of a meritocratic system where hard work and talent alone dictate success. But evidence suggests that these rankings do more to reinforce privilege than to promote genuine merit.

For example, the top-ranked scholars listed on platforms like ScholarGPS are overwhelmingly affiliated with traditionally elite, well-funded institutions. When we looked at STEM and humanities examples in the UK and US, they were also primarily white, with a high proportion of men. It is plausible that what the rankings actually reflect is the degree of institutional support rather than individual merit.

Scholars from resource-rich universities enjoy better access to research funding, infrastructure, and opportunities, as well as the impact of years of systemic privilege. Rather than providing an objective measure of merit, we would suggest that ScholarGPS (and other) rankings showcase and potentially amplify existing inequalities in the academic system.

One size

Platforms like ScholarGPS not only reinforce privilege, they also marginalise non-traditional scholars. Independent scholars, freelance researchers, and those operating in non-traditional academic roles often produce valuable research, collaborate across disciplines, and contribute to public life in ways that are not easily captured by institutional metrics. We should not fall into the trap of mistaking that which is measurable (institutional academic performance) for that which is valuable.

The absence of scholars working outside the institutional bounds of academia from tools that prioritise productivity, quality, and impact further limits the already scant recognition of diverse and often unconventional paths of scholarship. Meritocracy as represented by indices and ranking systems then not only perpetuates a narrow view of scholarly worth, but also reinforces the ivory tower walls that keep out those whose work does not fit into neatly quantified metrics. Yet their exclusion from being counted does not mean their contribution does not matter. Rather, it reveals the inequity of a system of reward and recognition that fails to account for activity outside the tower itself.

A challenge

The emergence of platforms like ScholarGPS and the increasing focus on individual metrics pose a challenge to a vision of academia that values collaboration, critical inquiry, and the open dissemination of knowledge. As rankings grow more granular and pervasive, they threaten to strengthen mechanisms that render independent scholars, and those in non-traditional roles, invisible and excluded. Such rankings also work to further precaritise scholars who are in traditional roles but are increasingly left to scrape together individual cases for their own security.

ScholarGPS and similar ranking platforms present an appealing (to some) but risky (to all) illusion of meritocracy in academia. Academic worth, reduced to a series of impersonal metrics, risks not only obscuring genuine scholarly contributions but also reinforcing the very inequities such platforms claim to address. In a sector that urgently needs diverse perspectives and collaborative efforts to solve pressing global challenges, academia’s obsession with rankings threatens to alienate and exclude voices from non-elite institutions and non-traditional backgrounds.

Prioritising quantifiable individual success over qualitatively meaningful contributions erodes the principles on which scholarship is built, and puts up barriers to a more inclusive academy built on solidarity and collective action. As a sector we must reject a narrow definition of success and instead embrace a holistic, community-driven vision of achievement.

And with regard to ScholarGPS: in an already highly measured academic landscape, often monitored by the academy itself, is a “Silicon Valley start-up”, potentially aiming to extract profits from an underfunded sector, best placed to introduce yet another layer of league tables?

5 responses to “Do we need a league table of scholars produced by Silicon Valley?”

  1. In my field, the claim you made is not accurate. Over the past five years, the top of the table has been dominated by researchers from developing countries. Contrary to your assertion, I do not believe that this kind of league table significantly promotes white privilege. Instead, it encourages research that scores well, which often leads to opportunistic and hyped topics. This has resulted in unwanted side effects such as paper mills and citation circles.

    While I do not wish to defend ScholarGPS, which may not be a particularly appealing addition to the ranking landscape, comparing the quantitative aspects of academic outputs is necessary. Unfair judgment in academia is more often perpetuated by old boys’ clubs, subjective letters of recommendation, and general nepotism (medals, prizes, and other subjectively awarded honors). This is a significant problem that cements hierarchies and offers unfair advantages to those closely associated with influential professors. Sometimes, it is beneficial to evaluate what a person has actually accomplished rather than relying on authorities with their own vested interests.

  2. Core to our argument is that “productivity” in the form of many articles is not the same as “accomplishment” and that these league tables (as you actually allude to) encourage “productivity” over any other more meaningful value in scholarship.

  3. As I watch my own organisation chasing desperately after a falling ranking, paying consultants and adjusting policy to effect an improvement in the number assigned to us by an opaque and unaccountable private company, I feel a shiver of horror at the thought of how this most recent unwelcome expansion of metricisation may ultimately impact on people’s jobs.

  4. I also disagree with the authors of the article. First, ScholarGPS is far superior to any previous ranking platform associated with scholars and universities. Unlike other ranking systems, ScholarGPS ranks scholars by specific field or discipline, rather than grouping all scholars together. For example, scholars in chemistry are ranked within their field, and similarly, scholars in computer science are ranked separately. This approach acknowledges the significant variation in scholarly activities across different technical areas. For instance, scholars in political science or mathematics generally publish less frequently than those in fields like medicine or engineering.

    Second, ScholarGPS ranks scholars primarily based on productivity, impact, and quality, rather than reputation. This means that scholars from non-Ivy institutions or from countries outside the U.S. are ranked fairly and without bias.

    Finally, unlike Google Scholar, ScholarGPS does not include non-archival sources or self-citations, ensuring that only peer-reviewed, published work, with self-citations excluded, is considered in rankings.

    However, ScholarGPS, like other platforms, does not take the order of authors into account in its ranking system.

  5. We read with interest the critique of ScholarGPS (“Do we need a league table of scholars produced by Silicon Valley?”, Phipps, L., Lanclos, D. and R. Watermeyer) published by Wonkhe on 21/11/24. Although we have recently submitted an extended response on the Social Science Research Network (SSRN) site (“The diversity of highly ranked scholars and their institutions by ScholarGPS”, Faghri, A. and T.L. Bergman, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5090521), we feel it is important to both correct certain fundamental errors in the critical article and provide further clarifications here. For details, we refer readers to the SSRN paper at the link above.

    First, ScholarGPS is not from Silicon Valley. We’re based in Los Angeles, CA, which is some distance from Silicon Valley. Second, we suspect the article’s headline was intended to convey a sense of “tech bros getting involved in academia”. We’re not tech bros; all the founders of ScholarGPS and our consultants are current or former academics, all of whom have experience both in the regular professoriate and as academic administrators. Before launch, ScholarGPS was widely trialed by academics and university presidents, as well as by scholars from outside of academia.

    In terms of ranking per se, we at ScholarGPS agree that ranking is complex, and that no single set of ranking criteria (including “no ranking at all”) is likely to satisfy all constituencies having a stake in academia. We make this plain on our website at https://scholargps.com/faq — see FAQ 60. We state: “It is likely impossible to find any encompassing set of metrics that would create the perfect scholarly ranking model — one that would be embraced by all scholars and which would rank scholars with absolute and complete fairness and accuracy. ScholarGPS® recognizes that great care should be taken in using any scores (whether those from ScholarGPS® or any other ranking system) as the final statement of any scholar’s true productivity or value. Users should therefore not construe a lower score or ranking as necessarily representative of lesser influence or prestige”.

    As noted in the critical article, one of the authors (we won’t say which one — you can look it up at https://scholargps.com) was notified that the ScholarGPS ranking algorithms had identified them as a Highly Ranked Scholar, i.e., someone who scores in the upper 0.05% of all scholars in their field, discipline or specialty. The usual reaction to these notifications is that the scholar is pleased to have their work so recognized. This was apparently not the case here. As seen in the article, the recipient author refers to the email as “predatory”, which might lead the reader to conclude that there was more to the email message than a congratulatory notification of Highly Ranked Scholar status. But that is all the email did; it simply notified the recipient of their rank status. We did not ask for money, not even donations. Currently, all access to ScholarGPS is free of charge. But we do apologize to the offended recipient, and we would remind them that they are under no expectation to publicize their ScholarGPS Highly Ranked Scholar ranking in any way, even tangentially. To paraphrase Groucho, you don’t have to “belong to any club that will accept you as a member.”

    The authors of the article complain that rankings are “frequently instead reductive and exploitative”. Complaints like this tend to refer to rankings of the professoriate (and by extension institutions). But oddly, the poor students who are examined (and ranked) from age 5 until they emerge at the end of their educational experience are not included in this set of concerns. We still grade our students’ coursework; we still deploy (in the UK) exams such as the GCSEs and A levels; there are still First Class Honours degrees; and we (in the US at least) still require PhD candidates to face public scrutiny in defense of their theses. And even within the ranking-resistant branch of the professoriate there is (usually) a sense that elevation in academic rank (say from lecturer to reader in the UK, or from associate to full professor in the US) requires some measure of performance beyond “well, I think they’re a jolly good fellow, and so say all of us”.

    Consider the core of the argument presented in the article — that rankings can create an “illusion of a meritocratic system”, and “do more to reinforce privilege than to promote genuine merit.” But metrics are an inanimate tool. They don’t create meritocracies or enforce privilege – people do, including especially those who inappropriately use and interpret rankings. Furthermore, we also realize that some scholars are simply excellent in their field, and would probably still produce excellence no matter which institution they belong to. Srinivasa Ramanujan comes to mind for mathematicians, and Einstein for physicists (in 1905 the Swiss Patent Office was not, as the authors would likely contend, one of the “traditionally elite, well-funded institutions.”)

    Similarly, the authors of the article argue that rankings such as those provided by ScholarGPS “marginalize non-traditional scholars”. Perhaps some ranking systems do, but ScholarGPS includes scholars from all kinds of entities — the only thing an individual needs to do is publish an archival journal paper/book/conference paper or a patent. (In fact, all three of the authors of the article have ScholarGPS profiles. We’re sure they have checked that.) So, we agree that there will be scholars who might be considered “non-traditional”, and we agree that consideration should be given to research which is “valuable”.

    But how do you determine what is and what is not “valuable”? If someone were to theorize that aircraft are kept aloft by “air pixies” that support the wings, this would certainly be a non-traditional approach to fluid dynamics. We suspect almost all academics would counter the “air pixie” theory by referring to established science, and (one would hope) “The theory of air pixies as solutions to the Navier-Stokes equation” would not be published. We tend — in all disciplines which involve publishing as the means of disseminating scholarly work — to use the peer review process. This is a judgement of value and quality. When a peer-reviewed publication is cited (for the correct reasons) by multiple other researchers, it is again a judgement of quality and importance to the area the paper contributes to. By all means let us celebrate “non-traditional” scholars and the work they produce but let us not lose sight of the need to identify quality in scholarly pursuit, by whatever metric one chooses to use.

    We also do not understand the following assertions: “Prioritising quantifiable individual success over qualitatively meaningful contributions erodes the principles on which scholarship is built, and puts up barriers to a more inclusive academy built on solidarity and collective action.” Surely the “principles on which scholarship is built” are (or should be) based precisely on quantitative judgements. In fact, it is the purely qualitative judgements that tend — in our experience — to cause all the problems. An example is the “halo effect” most of us in academia rail against; if Prof. X is a faculty member at Excellent University Y, then qualitatively Prof. X is excellent. This “qualitative approach” trivializes the work of scholars at institutions that are not in some favoured “elite grouping”. Moreover, we can all agree on the number of citations a scholarly work has achieved (up to the mechanics of counting citations), but how can we agree on what constitutes a “qualitatively meaningful contribution”? Who defines such qualitative measures? What happens if the people making the qualitative judgements just don’t happen to like the scholar? What happens if the anointed (or worse, self-anointed) evaluator retires or dies? Chaos would reign.

    Although the referenced SSRN submission (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5090521) explores all aspects touched on in the critical article and its companion piece (https://link.springer.com/article/10.1007/s42438-024-00519-8), it is important to again emphasize that the ScholarGPS Highly Ranked Scholar recognition is not restricted to scholars from the “elite groupings”, whether that be the Ivies or large public and private institutions of the US, or the Russell Group in the UK. We refer you to the SSRN paper for many specific examples; for instance, in the prior five-year period you will see that excellence in scholarship exists not only in “up and coming” countries such as China (which now leads the US and Europe in a variety of technical areas), but also in India, Iran and Malaysia (to name just three). We believe the recognition of brilliant scholars in these and similar countries is overdue and should be celebrated, not scorned.
