University metrics keep academics in their ivory towers
By Jason Ensor, University of Western Sydney
Perhaps we should excuse comments made during the 2013 federal election about “wasteful” and “increasingly ridiculous research” undertaken in Australia.
The real shock was not that shadow ministers could make such baseless statements, but that these comments went largely unchallenged by the public. Although vilifying academics looked like an act of political posturing, the now-Coalition government was merely tapping into an existing view of the university sector within Australian society.
This view is that academia is out of touch with the taxpayers it serves.
Which it is. Not because scholarship can sometimes appear esoteric or arcane, but because scholars often fail to make the effort to argue otherwise to those who foot the bill. We fail to regard the public as a research partner worthy of our attention and our respect.
In defence of academics, the barriers that prevent scholars from adding value where it really matters are not of our making … not quite.
Australia has national research priorities and associated goals set by the government. These range from investigating social well-being to improving cyber security, from lifting manufacturing productivity to understanding cultural and economic change in our region. But how this research is disseminated and “counted” is at odds with public expectations of access and engagement.
Talking to ourselves
The university sector is about to enter the 2015 round of the Excellence in Research for Australia (ERA) initiative. Under ERA, the dissemination of research via monographs, book chapters, peer-reviewed journal articles and peer-reviewed conference papers is tightly linked to a points system in which these traditional modes of publication “count”. These “outputs” are scored accordingly: a monograph is worth 5 points, everything else 1 point.
These are the metrics by which the bulk of goods and services produced by the academy are weighed. Academic careers grow or wither under this points system. As a result, the printed book and peer-reviewed journal article remain the exemplars of published research and the currency of scholarly accreditation and promotion. This system is bluntly characterised as the “publish or perish” dimension of academic workloads.
This fixation on forms of publication that pre-date the internet helps maintain the widespread perception of scholarship as dry and aloof. The target audience of these outputs is usually other scholars (who increasingly have little or no time to absorb colleagues’ work), not the public.
Awareness is growing of the need to move into new modes of engagement that are more available to modern society. But how do academics engage with new and emergent forms of interaction when the goalposts set by evaluation systems like the ERA value monographs and journal articles most highly?
Granted, the “publish or perish” imperative is giving way to “be visible or vanish”. But even this is perhaps too self-interested, even mildly narcissistic, suggesting as it does that discoverability is the new fashion that will restore academia. To be sure, this shift has prompted consideration of alternative ways to evaluate scholarship in the public domain. These range from counting actual downloads of a journal article (if it’s open access) to “hits” or “page views” of an online exhibition.
Yet in some ways this only modifies the units for measuring impact without really questioning the underlying premise of impact. Be it citations or eyeballs, these are more suited to grant applications and keeping your job than truly opening up a dialogue with the public.
Under this model, traditional forms of publication “count”. Tweeting, blogging, teaching, media appearances, public lectures, community forums and other non-print-based outcomes – all of which demand significant commitment to develop, curate and present – rarely do, or require extensive supporting evidence before they count towards a research component.
As with live performances, exhibitions and reports to government bodies, most things digital are also considered “non-traditional research outputs”. These require extensive explanation to justify why they should be “counted”.
Catching up with the community
In an age where searching for tutorials on YouTube and information on Wikipedia is second nature to young inquiring minds, casting digital outputs as “non-traditional” is out of sync with society.
The division of research into traditional and non-traditional is also at odds with community engagement. While most education institutions see their role as servicing and advancing Australian society, the forms of evaluation used to rank Australian university subjects can work against fulfilling this goal.
New forms of recognised outputs and outcomes are required to change the relationship and to renew scholarship as an important part of public discussion. Moreover, scholars need to be able to present research in ways that are meaningful to society and that, at the same time, count for their institutions. Engaging public audiences must become a core competency on an equal footing with engaging academic audiences.
To date, these goals have been mutually exclusive. By not using and valuing the forms of communication and knowledge-sharing that Australians engage in every day, the research sector has actively contributed to the growing sense of irrelevance that stalks academia.
As long as our goalposts value talking to each other over and above talking with the community, we remain the primary agents of our own marginalisation.
Jason Ensor is affiliated with the Australasian Association for Digital Humanities, the Alliance of Digital Humanities Organisations, DHCommons (CenterNet), and the Society for the History of Authorship, Reading and Publishing. He works for the University of Western Sydney.