Once again, technology is being used to solve the complex social problem that is academic plagiarism.
Damian Hinds has made an intervention in university assessment processes, albeit at a partial remove. His ministerial campaign consists of nicely asking payment processors not to process payments to people who provide materials that students could submit in partial fulfilment of the requirements of a university course.
With wonderful irony, the idea is not original (the QAA has already been running such a campaign). Hinds’ intervention follows Sam Gyimah’s earlier dark hints that “legislative options are not off the table” by demonstrating that, for the moment, legislative options remain off the table.
But (as so often in HE policy) we’ve been here before. Two approaches to “the problem of plagiarism” were trialled in the UK at the turn of the century. One addressed the underlying issues and faded largely into obscurity. The other provided a shiny technical salve that looked fantastic but solved nothing, and was recently sold for $1.74bn.
Student plagiarism in context
From a broad historical perspective, the concept of plagiarism in universities makes no sense. Not that academic life is beyond reproach on the matter, more that the direct duplication of already existing ideas was pretty much the original point of the whole enterprise.
Long before essay mills, indeed long before essays, a successful scholar would have spent their entire university career gaining mastery over a particular body of knowledge. In a model also familiar to those with an interest in the history of the church (or the history of the legal system, or the history of medicine…) the ability to retain and reproduce knowledge – on demand – was the signifier of a scholar.
Newman himself defines a university as “a place of teaching universal knowledge” – the idea being that such knowledge was an entity that could be passed from one mind to another for the general betterment of society. We can still see elements of this idea within modern curricula, and it is a good thing that we can. I am fairly glad that my doctor has a thorough understanding of anatomy, for example – such knowledge is mandated in medical programmes by the General Medical Council.
The Enlightenment – and, if we are cynical, mass produced printed material sold for a profit – led to essayists and writers being held to high standards of originality. The whole second volume of Cervantes’ Don Quixote, for instance, is a response to and a parody of literary plagiarism. But academic plagiarism long took a back seat to wider ideas of academic integrity – the Philosophical Transactions of the Royal Society (one of the world’s first academic journals) was published to corroborate the veracity of scientific findings, not their authorship.
The idea of expecting originality from students is a modern one. Originality, like novelty, was once a word that described an unsound and potentially dangerous idea. Mastery and comprehension were judged by the ability to regurgitate – the words of an authority were preferable to the words of a student. Again, elements of this persist – what mark would you give a student essay without citations, or (more subtly) one without the expected citations?
How did we manage to prove that a student’s work was not their own before the internet? A knowledge of both the key literature of a subject area and the likely capabilities of a given student would certainly help, but we could never know for sure. The chances of non-originality (and I’m still wanting to hold on to the problematic idea of originality here) could be decreased by a carefully designed question – where a lecturer set the same essay each year he or she could expect to mark many of the same answers each year.
Widely disseminated tales of prominent students caught out and the underlying suspicion that a degree was useless without the knowledge and skills that go alongside it meant that the problem was less widespread, or less widely reported. But it was impossible to be sure, and in certain corners of academia this issue continued to nag.
Enter the internet
At the turn of the millennium, Jisc (formerly the Joint Information Systems Committee, or JISC) commissioned research from a range of institutions into the use of software for plagiarism detection. Alas most of the links on that page are now dead, though the executive summary is still timely:
It became clear that, as with most things in life, technology can only assist us, it will never replace the expertise of humans and that the answer to problems usually lies in process and procedures not technology alone. Electronic detection has its place in institutions but the real solutions lie in appropriate assessment mechanisms, supportive institutional culture, clear definitions of plagiarism and policies for dealing with it and adequate training for staff and students.
Such caveats underpinned the commissioning and launch of a National Plagiarism Advisory Service. The name here is important, as it saw a large part of its role as educative rather than punitive – supporting staff and students in avoiding rather than detecting academic misconduct. This was established at the University of Northumbria under the leadership of Fiona Duggan.
However, community feedback also suggested a demand for a central “detection service” – Jisc tendered for suitable software to manage it, and the contract was won by a Californian company named iParadigms. For the first three years of operation (later extended until August 2005) their services were paid for on behalf of the sector by Jisc.
With this inbuilt cost advantage the iParadigms service quickly became the dominant UK player in a new market for plagiarism “detection”, with other players (including the charmingly named CopyCatch) falling by the wayside. iParadigms, of course, became Turnitin. And the offshoot of the University of Northumbria responsible for managing the advisory service spun out as nlearning, before eventually becoming TurnitinUK.
But more of that later. Turnitin checked submitted student work against:
- A database of previously submitted material (i.e. other students’ essays and assignments)
- Over 4.5 billion URLs
- Copyright-free material from Project Gutenberg
- Selected subscription services
Despite the work of the Plagiarism Advisory Service in promoting advice and support for academics and students, the “detection” service speedily became a byword for addressing the issue of plagiarism. Under the auspices of the Higher Education Academy, the advisory and support functions of the plagiarism advisory service continued in a less visible way, but TurnitinUK (with Will Murray, a former head of online services for the University of Northumbria, at the helm) contributed hugely to the international growth of Turnitin.
These days Turnitin is a multi-billion dollar enterprise, not to mention the subject of several lawsuits focusing on the ownership and reuse of submitted student work. Turnitin themselves are happy to clarify that students retain ownership of their work, but grant the company the right to use their data in plagiarism detection. It makes them a lot of money.
The limits of detection
Think of a plagiarism detection service as a vast database of written work. Terms from a submitted essay can be searched for within this corpus, and similarities to other works are flagged for the attention of the academic. It is the job of the person marking the essay to make a human judgement as to whether the academic offence of plagiarism has been committed, or whether further instruction is needed in correct referencing or citation practice. Or, frankly, whether the similarity is a coincidence – there are only so many ways you can write about the purpose and motivations of the character of Ophelia in Hamlet, after all. Or whether the similarity is a canny bit of postmodern intertextuality.
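The mechanics of that similarity check can be sketched in a few lines. Everything here – the function names, the n-gram size, the threshold – is an illustrative assumption rather than how Turnitin actually works (real services use vastly larger corpora and proprietary matching algorithms), but the basic idea of flagging overlap for a human to judge looks something like this:

```python
# Toy similarity flagging via word n-gram (Jaccard) overlap.
# All names and parameters here are illustrative assumptions,
# not a description of any real service's algorithm.

def ngrams(text, n=3):
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(submission, source, n=3):
    """Jaccard similarity between the n-gram sets of two texts."""
    a, b = ngrams(submission, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_matches(submission, corpus, threshold=0.25):
    """Return (doc_id, score) pairs whose similarity exceeds the threshold.

    A raised flag is only a prompt for human judgement – it cannot by
    itself distinguish plagiarism from coincidence or fair citation.
    """
    return [(doc_id, score)
            for doc_id, text in corpus.items()
            if (score := similarity(submission, text)) >= threshold]
```

Note that the output is a list of candidates and scores, not a verdict – the marker still has to decide whether a high score means misconduct, poor referencing, or two people writing about Ophelia in the only sensible ways available.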
It adds a datapoint or two to a wider assessment of student work – but does not in any real sense “detect” plagiarism. The rise of the essay mill, for instance, can be seen as a response to services like Turnitin, which have meant that original work – rather than the sight of a second year’s essay from last year, or a fortuitous find on the internet – is needed to successfully cheat. As software develops that can identify such practices, practices will inevitably shift elsewhere.
What none of this has done is address the issue that Jisc identified back in 2000. Why do students plagiarise? On the site last week Phil Pilkington spelled out the economic realities and the pressure to succeed – Jisc’s earlier work also included ideas of academic interest, and issues with the way ideas of academic practice are explained to students.
The nature of knowledge (and the knowledge of nature)
The high pressure life of students is clearly a factor in the decision to cheat – again, Pilkington hits the nail on the head by asking why more choose not to. The idea of academic honour (sometimes codified as an explicit code, elsewhere established as a general sense that you can’t really qualify as a pharmacist if you don’t know any pharmacology yourself) would be one reason, a sense of fair play – or a fear of being caught – might be another.
If viewed dispassionately, the academic essay is a strange beast. We expect students to express the ideas they have been taught in forms of words that are slightly different (but not too different) from the forms of words that have been used to teach them. Why? To test knowledge, understanding, and the application of both. There are almost certainly better ways – practice-based and problem-based assessment has nearly supplanted the essay in some subjects allied to medicine, for example.
If we are teaching students to write well, why not be explicit about the fact – and privilege style and rhetorical force over originality? If we are teaching the mastery of a subject, we should perhaps cleave to the wider cultural norms of practice rather than a narrower understanding based on the Lyotardian language game of academic publication.
Technology is a facilitator of plagiarism, and a facilitator of plagiarism detection. But it answers none of the deeper questions about how and why we assess students.