I read these pieces as I listened to The Smiths' Strangeways, Here We Come.
It seems that the panopticon is about to be extended across the whole academic hierarchy with the introduction of ‘faculty dashboards’. These are tools which allow data on each academic to be collated into an individual profile showing publications, citations, research grants and awards won. The profile can be updated daily by the head of department, dean or vice-chancellor. Norms can be established and, of course, extended year-on-year. They may be changed according to strategic priorities beyond the control, or indeed the value set, of academics.
Morrish, L. 2015. The disciplinary dashboard: from reception class to retirement
Academic Analytics’ unique “flower chart” affords the viewer a visualization of the overall productivity of the faculty within a given academic discipline. Variables on different scales (per capita, per grant dollar, per publication, etc.) and measuring different areas of scholarly productivity can be viewed simultaneously on a single comparative scale based on national benchmarks for the discipline. This powerful graphic facilitates rapid identification of the strongest and weakest areas in a given academic discipline on your campus.
Considering research more specifically, Hazelkorn notes a number of implications arising from the use of league tables, performance indicators and quantitative measures. The central problem identified here is that academic quality is a complex notion that cannot easily be reduced to quantification – the use of proxy variables runs the risk of misrepresenting the qualities of research contributions and may lead to unintended consequences. She contends that there is considerable difficulty obtaining meaningful indicators and comparative data (nationally and internationally), and that the adoption of rankings serves to embed a metrics culture and to down-weight features of research or teaching quality that cannot easily be captured with numbers. This is a perspective echoed by the European Commission’s expert group on research assessment: “Unintended consequences can occur when indicators are taken in isolation and simple correlations are made. This may include overconcentrating on research, favouring particular disciplines when allocating resources and realigning priorities to match indicators.”
Further, the use of such indicators is felt by many to risk reinforcing a hierarchical system of institutions that may lead to simplistic comparisons. Such comparisons are hard to justify when aggregate scores show statistically insignificant differences – indeed, an over-emphasis on a small set of indicators risks encouraging perverse behaviour within and across institutions. Comparisons between institutions may lead to an unhelpful focus on the ‘top’ universities worldwide and foster a narrow definition of excellence; such a focus is not likely to be relevant to the institutional goals of universities, where the balance of research and teaching, the geographical focus and disciplinary distinctiveness may vary considerably.
Wilsdon, J., et al. 2015. The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management, pp. 75-6. DOI: 10.13140/RG.2.1.4929.1363
In any situation where decisions are being made using metric data, how a subject discipline performs in relation to that data will be a critical issue for its potential fate. On the one hand, subject disciplines evolve according to an ‘inner logic’ relating to an audience of subject specialists; on the other, they develop as an adaptation to external audiences, including policy-makers, some of which are keen to shape universities in the light of metric data.
… at the cutting edge of neo-liberal public policy, universities are no longer being subject to governance by market proxies; they are being marketised directly, and metric data is part of the process by which the market in higher education is brought into being.
… Where discursive knowledge is aligned with the understandings of elite publics, no particular problem of credibility arises. However, where a discipline has an aspiration to engage with less powerfully placed publics, then a different issue of credibility arises, precisely that of our credibility because we represent a challenge to the certainties of neo-liberal orthodoxies and are witnesses to the consequences of the widening social inequalities with which they are associated.
Holmwood, J. 2013. Death by Metrics. Global Dialogue: Newsletter for the International Sociological Association.
You know you’re in trouble when the discourse turns to ‘journeys’ and ‘destinations’, but it gets worse. Although the government’s Education Evaluation fact sheet constitutes a total failure of logic, it displays a discursive masterstroke, with a chaining of ‘learning outcomes’, ‘performance data’, ‘accountability’, ‘interventions’, and then serving the whole salad up as a solution to ‘social mobility’. And [this] re-designates universities as mere factories for the production of labour inputs…
Morrish, L. 2015. It’s Metricide: Don’t Do It.
The excellence intensification of academic labour is shaped by three aspects (p. 32): teaching quality (TEF); learning environment [which demands that universities open themselves up to part-privatisation for the service redesign and workforce efficiencies of the Treasury’s Productivity Plan]; and student outcomes and learning gain [data]. This is Holmwood’s ‘academia as big data’ project amplified by human capital intensity, alongside the incorporation of ‘new common metrics on engagement with study (including teaching intensity) and learning gain, once they are sufficiently robust and available on a comparable basis’. This is not just the excellence intensity of work, but the intensity of motivation to work. It is also the shaming of those who do not enhance ‘Student commitment to learning – including appropriate pedagogical approaches’, or ‘Teaching intensity – measures might include time spent studying, as measured in the UK Engagement Surveys, proportion of total staff time spent on teaching’ (p. 32).
It is important students have information about the composition of the course, including contact hours, to help them make informed choices about the course they choose to study. The [Competition and Markets Authority] identified this as being material information likely to be required by the Consumer Protection Regulations, and as part of the payment, service delivery and performance information required to be provided pre-contract under the Consumer Contracts (Information, Cancellation and Additional Charges) Regulations. (p. 32)
In order to avoid metricide, or the inability to financialise positive outputs/outcomes because of poor data, competition will compel universities to drive down staff working conditions, including new workload arrangements and increased surveillance of teaching, research and administration. As Andrew McGettigan has noted, ‘if you work in HE, then pay bargaining is going to be a dismal business for the foreseeable.’
We are therefore pushed towards the acceptance of further state-sponsored privatisation of HE. This is not re-imagining the university through learning, teaching or pedagogy, but an unmaking of the university in the name of service redesign, workforce restructuring/efficiency and global, high-tech enterprise. This is HE deterritorialised for productivity, so that only those [academics, students, institutions] ‘that innovate and present a more compelling value proposition to students will be able to increase their share’ (p. 54). As a result, what emerges from the Green Paper is an assault on collective work: the collective work of students’ unions, and of students and staff as academic labour. Instead we are forced into an asymmetrical relationship to the reality of our fetishised and rugged individualism in the market.
Hall, R. 2015. Against teaching intensity.