One recent afternoon, one of us received an email from a business unknown to us, ScholarGPS. It offered congratulations for our “exceptional scholarly performance” and for our being placed “in the top 0.05 per cent of all scholars worldwide”.
Our initial reaction was largely indifference: we dismissed (and deleted) it as predatory junk mail.
But as colleagues in other countries publicly shared that they had received the same message, we collectively decided to dig a little deeper.
What is this ranking?
ScholarGPS appears to be a system that uses data mining and data scraping to produce rankings of individual scholars based on metrics such as productivity, quality, and impact. According to the company (we’re not linking to it; we are certain you, our readers, can find it), it profiles approximately 30 million scholars, drawing connections between citations, collaborations, and institutional affiliations to create a hierarchical system of academic performance.
As a recent addition to the ongoing marathon of metricisation, ScholarGPS exemplifies a pre-existing trend.
It aims to enhance scholar “visibility” by ranking individual academics based on quantitative metrics. It is no secret that in academia, rankings, metrics, and data-driven assessments of research often dominate the evaluation of individual and institutional performance, and drive the institutional chasing of “excellent” ratings (as in the Research Excellence Framework). While such systems are alleged to offer objective measures of success, they are frequently reductive and exploitative instead. We would point, for example, to cogent critiques of the very notion of “excellence”, as well as to inventories of the cost of individualism, and lack of solidarity, to the academic sector as a whole.
The prestige economy
The suggestion that people participate in individual league tables of scholarship has appeal in a context where academia functions as a prestige economy, in which the number of publications and the visibility of research output are inextricably linked to perceived success, and so to professional advancement. ScholarGPS and similar platforms offer scholars not just metrics, but validation and an (illusory) sense of security amidst growing fears of job precarity and the relentless pressures of academic performance. ScholarGPS’s three main metrics – productivity, quality, and impact – become currency within this prestige economy, enabling scholars to acquire badges of “excellence” with which to promote themselves and (in theory) advance their careers.
Rankings can create the illusion of a meritocratic system where hard work and talent alone dictate success. But evidence suggests that these rankings do more to reinforce privilege than to promote genuine merit.
For example, the top-ranked scholars on platforms like ScholarGPS are overwhelmingly affiliated with traditionally elite, well-funded institutions. When we looked at STEM and humanities examples in the UK and US, they were also primarily white, with a high percentage of men. It is plausible that the rankings actually reflect the degree of institutional support rather than individual merit.
Scholars from resource-rich universities enjoy better access to research funding, infrastructure, and opportunities, as well as the impact of years of systemic privilege. Rather than providing an objective measure of merit, we would suggest that ScholarGPS (and other) rankings showcase and potentially amplify existing inequalities in the academic system.
One size
Platforms like ScholarGPS not only reinforce privilege, they also marginalise non-traditional scholars. Independent scholars, freelance researchers, and those operating in non-traditional academic roles often produce valuable research, collaborate across disciplines, and contribute to public life in ways that are not easily captured by institutional metrics. We should not fall into the trap of mistaking that which is measurable (institutional academic performance) for that which is valuable.
The absence of scholars working outside the institutional bounds of academia from tools that prioritise productivity, quality, and impact further limits the already scant recognition of diverse and often unconventional paths of scholarship. Meritocracy as represented by indices and ranking systems thus not only perpetuates a narrow view of scholarly worth, but also reinforces the ivory tower walls that keep out those whose work does not fit into neatly quantified metrics. Yet their exclusion from being counted does not mean their contribution does not matter. Rather, it reveals the inequity of a system of reward and recognition that fails to account for activity outside the tower itself.
A challenge
The emergence of platforms like ScholarGPS and the increasing focus on individual metrics pose a challenge to a vision of academia that values collaboration, critical inquiry, and the open dissemination of knowledge. As rankings grow more granular and pervasive, they threaten to strengthen mechanisms that render independent scholars, and those in non-traditional roles, invisible and excluded. Such rankings also work to further precaritise scholars who are in traditional roles, but increasingly left to scrape together individual cases for their own security.
ScholarGPS and similar ranking platforms present an appealing (to some) but risky (to all) illusion of meritocracy in academia. Academic worth, reduced to a series of impersonal metrics, risks not only obscuring genuine scholarly contributions but also reinforcing the very inequities such platforms claim to address. In a sector that urgently needs diverse perspectives and collaborative efforts to solve pressing global challenges, academia’s obsession with rankings threatens to alienate and exclude voices from non-elite institutions and non-traditional backgrounds.
Prioritising quantifiable individual success over qualitatively meaningful contributions erodes the principles on which scholarship rests, and puts up barriers to a more inclusive academy built on solidarity and collective action. As a sector we must reject a narrow definition of success and instead embrace a holistic, community-driven vision of achievement.
And with regard to ScholarGPS: in an already highly measured academic landscape, often monitored by the academy itself, is a “Silicon Valley start-up”, potentially aiming to extract profit from an underfunded sector, best placed to introduce yet another layer of league tables?
Comments
In my field, the claim you made is not accurate. Over the past five years, the top of the table has been dominated by researchers from developing countries. Contrary to your assertion, I do not believe that this kind of league table significantly promotes white privilege. Instead, it encourages research that scores well, which often leads to opportunistic and hyped topics. This has resulted in unwanted side effects such as paper mills and citation circles.
While I do not wish to defend ScholarGPS, which may not be a particularly appealing addition to the ranking landscape, comparing the quantitative aspects of academic outputs is necessary. Unfair judgment in academia is more often perpetuated by old boys’ clubs, subjective letters of recommendation, and general nepotism (medals, prizes, and other subjectively awarded honors). This is a significant problem that cements hierarchies and offers unfair advantages to those closely associated with influential professors. Sometimes, it is beneficial to evaluate what a person has actually accomplished rather than relying on authorities with their own vested interests.
Core to our argument is that “productivity” in the form of many articles is not the same as “accomplishment” and that these league tables (as you actually allude to) encourage “productivity” over any other more meaningful value in scholarship.
As I watch my own organisation chasing desperately after a falling ranking, paying consultants and adjusting policy to effect an improvement in the number assigned to us by an opaque and unaccountable private company, I feel a shiver of horror at the thought of how this most recent unwelcome expansion of metricisation may ultimately impact on people’s jobs.
I also disagree with the authors of the article. First, ScholarGPS is far superior to any previous ranking platform associated with scholars and universities. Unlike other ranking systems, ScholarGPS ranks scholars by specific field or discipline, rather than grouping all scholars together. For example, scholars in chemistry are ranked within their field, and scholars in computer science are ranked separately. This approach acknowledges the significant variation in scholarly activity across disciplines: scholars in political science or mathematics generally publish less frequently than those in fields like medicine or engineering.
Second, ScholarGPS ranks scholars primarily based on productivity, impact, and quality, rather than reputation. This means that scholars from non-Ivy institutions or from countries outside the U.S. are ranked fairly and without bias.
Finally, unlike Google Scholar, ScholarGPS excludes non-archival sources and self-citations, ensuring that only peer-reviewed, published work is considered in its rankings.
However, ScholarGPS, like other platforms, does not take the order of authors into account in its ranking system.