podcast transcript

Donald Gillies, UCL

Impact Measures: the Verdict of History of Science
In this talk I want to present an argument against the use of impact measures for the evaluation of research. This argument is based on history of science. My own subject is history and philosophy of science. Moreover it can be used not just against impact measures, but against any kind of research assessment system, such as the RAE or the REF.

Now what do I mean by research assessment system? Well here is a definition: A research assessment system is a system in which groups of researchers are assessed at intervals. If the assessment is good, the group retains its funding or gets more, while, if the assessment is bad, the group’s funds are reduced or perhaps removed altogether.

Now a research assessment system might seem, at first sight, to be an obvious and common sense procedure. We want to produce good research. So let us first find out who the good researchers are by an assessment, and then give funding to good researchers while removing it from bad researchers. In this way we will obviously improve the quality of the research produced. What could be wrong with such a system?

The defect in the system is, I claim, a simple one. The study of the history of science shows that it is not in fact possible for researchers to give accurate assessments of contemporary research. After twenty or thirty years, the assessments of a piece of research have normally reached a consensus, which does not change much thereafter. However, this consensus after twenty or thirty years may be wildly different from the judgements which were made at the time the research was first produced. Research which was then thought to be really important may, after twenty or thirty years, be seen as the exploration of a blind alley, while research which was then thought to be of no value may after twenty or thirty years be seen to be a crucial breakthrough.

The phenomenon to which I wish to draw attention could be described as delayed recognition. Let us suppose that a scientist, Mr S. say, publishes a paper in which he proposes a new theory based on his research, which, after thirty years, is recognised as a major advance in the field. It may well be that his fellow scientists working in that field do not immediately recognise that Mr S.’s new theory is a good one. They may initially think that Mr S.’s theory is completely wrong, and largely ignore his work. Mr S. may then have to continue developing his theory through his research, and perhaps that of a few supporters, for many years before its value is recognised by the scientific community.

So this is what I mean by the concept of delayed recognition. To summarise: a scientist, Mr S. say, publishes a paper with a new theory; initially his fellow scientists are not impressed by this theory and ignore it; Mr S., for our future benefit, continues his research and develops his theory; and after thirty years Mr S.’s theory comes to be recognised as a major advance. Now we can see, as we learn from the history of science, that delayed recognition is a very common phenomenon in the history of science, and, interestingly, it most often occurs for advances which, with hindsight, are seen to be among the most important breakthroughs. It is moreover fairly easy to explain why it happens. According to Kuhn, and I think he is correct here, scientists always work within a framework of assumptions, or paradigm, which they accept for the time being as correct. Now a major advance in research is likely to go against some of the assumptions in the dominant paradigm. Working scientists are likely to reject, at least initially, a theory which contradicts any of the basic assumptions of their paradigm. Hence we would expect there to be initially a negative reaction to what later turns out to be a major advance.

So here is my explanation of delayed recognition: the new theory contradicts the dominant paradigm, which is exactly why it is opposed by other scientists working in the field. Most of the most important research breakthroughs contradict the dominant paradigm on at least some points, and so are liable to the phenomenon of delayed recognition.

In the light of this, let us turn to impact measures. The idea of an impact measure is to measure the quality of a research paper by its impact on other researchers in the field, on government policy, or on business. I will begin by focussing on the impact on research, but will say something about impact on business later. A simple way of measuring research impact would be to count the number of references to the paper made by other researchers. Now if this is used to evaluate papers published 30 or 40 years before, then I would say that it is a good measure of the quality of research in those papers. However, it is not a good measure of the quality of research in papers published in the previous 4 or 5 years, precisely because of the phenomenon of delayed recognition, which I have just described. Our scientist Mr S, who makes a major research advance, but one which is not recognised for 20 or 30 years, will initially get a very low value on an impact measure, although his research will be seen with hindsight as being of excellent quality. Thus impact measures are not satisfactory measures of the quality of research produced in the last few years.

The phenomenon of delayed recognition is extremely common in the history of science, and I give many examples of it in my book How Should Research be Organised? published in 2008 to coincide with the results of the last RAE. For this short talk, however, I will confine myself to one recent example.

In 2008, Harald zur Hausen was awarded the Nobel Prize for the discovery that a form of cervical cancer is caused by a preceding infection by the papilloma virus. During the research which led to the discovery, however, the majority of researchers favoured the view that the causal agent for cervical cancer was a herpes virus and not a papilloma virus. This was the dominant paradigm at the time, and zur Hausen and his group were the only ones who favoured the papilloma virus.

One of the reasons why the research community favoured the idea that a herpes virus was the cause of cervical cancer was that it had been shown that a herpes virus, the Epstein-Barr virus, was the cause of another cancer: Burkitt’s lymphoma. The dominance of the herpes virus approach is shown by the fact that, in December 1972, there was an international conference of researchers in the area at Key Biscayne in Florida, which had the title: Herpes Virus and Cervical Cancer. Zur Hausen attended this conference and made some criticisms of the herpes virus approach. He said that he believed that ‘the results indicate at least a basic difference in the reaction of herpes simplex virus type 2 with cervical cancer cells, as compared to another herpes virus, Epstein-Barr virus. In Burkitt’s lymphomas and nasopharyngeal carcinomas, the tumour cells seem to be loaded with viral genomes, and obviously the complete viral genomes are present in those cells. Thus a basic difference seems to exist between these 2 systems.’1 It is reported that the audience listened to zur Hausen in stony silence2. The summary of the conference written by George Klein3 does not mention zur Hausen. Clearly at that time, impact measures for zur Hausen’s research would have been very low, although in the long run zur Hausen proved to be right4.

OK, I now turn to the impact of using impact measures. First of all, let us consider the specific example of zur Hausen’s discovery that infection by the papilloma virus causes cervical cancer. If impact measures had been used for research assessment in 1973, as fortunately they were not, then zur Hausen and his group would have got a very low rating. Their research funding would have been cut off, and the discovery of the cause of cervical cancer would have been long delayed. Millions of dollars would still have been spent on searching for a herpes virus causing cervical cancer, but no result would have been produced, because cervical cancer is not in fact caused by a herpes virus. We know that millions of dollars would have been spent, because there were always other research groups looking for the herpes virus; most of them were not finding it, but thought that success was just around the corner. Moreover, it would have been very difficult for zur Hausen or anyone else to challenge the dominant paradigm (that a herpes virus caused cervical cancer), because anyone who initially advocated such a view would have received a low rating on impact measures and consequently been denied funding. As a result the development of a vaccine which protects against this unpleasant and often fatal disease would have been delayed for several decades, while huge sums of money - note this - would have continued to be spent on research. It is worth noting that sales of the vaccine have generated large profits for pharmaceutical companies. So these profits would not have occurred either.

So here is my summary of the impact in this specific example. First of all, funding for zur Hausen’s group would have been cut off in the 1970s because the impact of their research was zero; consequently the discovery of the viral cause of cervical cancer would have been long delayed, but millions of dollars would still have been spent on research into a herpes virus causing cervical cancer, obviously without result. And the pharmaceutical companies’ profits from the vaccine would not have occurred either - an impact on business.

Now let us turn to the general effect of the use of impact measures in a research assessment system such as the REF. I remarked earlier that the phenomenon of delayed recognition occurs most frequently in the case of big innovations, significant advances and major breakthroughs. This is explained by the fact that advances of this kind usually contradict some features of the dominant paradigm accepted by most scientists working in the field. Hence we can conclude that the general impact of the use of impact measures will be to stifle big innovations, significant advances and major breakthroughs in research.

It is also worth noting that it is precisely the big innovations which generate the largest profits for the private sector. So a subsidiary consequence is to reduce profits in the private sector. And as the government is dedicated above all to generating large profits in the private sector, its advocacy of impact measures is a clear instance of shooting itself in the foot.

Mary Evans, LSE

Thank you very much Donald, and thank you both for your comments and for that particularly striking and vivid example from the history of science, which I have to say - speaking as somebody coming from the social sciences - is very difficult for the social sciences to replicate. But what I wanted to do in responding to your paper is to comment on three things in particular that you have said. First of all, I think this notion of delayed recognition is a fascinating one, because what it allows us to speculate about is the ways in which recognition is actually constructed within academic disciplines. The sort of thinking which initiates proposals about impact assumes that we are all waiting for great ideas: that the receivers of those ideas are sitting there ready-made, full of responsiveness, full of eagerness to embrace the novel, to take up novel ideas and innovations as they come off the printer. So I think that what delayed recognition should make us think about is the nature of what is out there in terms of networks of recognition in the academic world. It does seem to me that what we have to consider is the interrelationship between the emergence of ideas and our capability of recognising the ways in which we take those ideas forward. What very often appears in ideas about impact is a kind of rigid separation between the ideas themselves and their recognition. So what I want to suggest is that the reality of academic research is very often much more that an idea is recognised, somebody then challenges, explores or develops it, and that idea is then taken back and forth by other people within that particular discipline. So there is much to-ing and fro-ing; there is much more mutual exchange in the discussion of ideas than a very crude notion of impact suggests. That’s the first thing.

The second thing that I wanted to consider is what we might describe as a kind of rigidity about knowledge; that is to say, the idea that the impact of an idea is - as I said in the first point - going to blossom suddenly and transform an aspect of the social world, the world of knowledge. That kind of view of the history of knowledge seems to me to have two dimensions to it. In the first place it may very well be the case that the social world does possess very rigid ideas about what is what, about what is going on, particularly in the social sciences and in the social world. But actually those aspects of the social world, and that kind of impact, will often be challenged. Now, thinking about this notion of challenge, we get of course close to the idea of only welcoming impact-‘friendly’ ideas, precisely because they represent a version of what we already think. We thus need to consider how close the relationship might be between knowledge in the sense of world views and political views about the social world that already exist, and other self-reinforcing ideas which mean that impact can become a very shallow reflection of what already exists. That relationship has to be considered; it has to be thought out in terms of the conceptualisation of research in terms of existing ideas and the ‘expected’ conclusions of research.

Part of this second group of issues that I want to consider is the idea that the social world always has the same stable appetite for certain kinds of knowledge and information. Now it is certainly true that in the last few years we might argue that there has been an increased government appetite for the private sector, but another thought I wanted to suggest about what we define as impact is that it can be closely related to (if not derived from) what has famously been described as ‘moral panic’ in the social world. So we have to consider the origin of the research which produces ‘impact’, and the degree to which that ‘origin’ is linked to eventual ‘impact’.

The third and final comment that I want to make is irresistible; it is very much part of Donald’s talk today and a subject in his excellent book, and it is, I suppose, a comment on the linguistic understanding of people who define impact as invariably positive. This is a very unambiguous definition, and of course it ignores the possibilities of words such as notoriety and the notorious, and the possibility that impact will be that of wrong ideas, misguided ideas. How should we define the moral and political economy around questions of what impact is? When is impact good, and when is impact bad? When, most problematically, are the demands of impact such that they create a situation in which we allow ourselves only to consider what we already know or what will be welcome? Thank you very much.

Chair Sarah Franklin passes over to

Valerie Hey, University of Sussex

Thank you very much for the invitation. I may open some cans of worms, and some of the worms I will talk about I will have anticipated and some I won’t. So it is highly speculative, and it is in the spirit of the curiosity and imagination I have brought to bear on this question. I want to think about the discourse of impact as part of the whole architecture of audit, and I want to think about its affective dimensions - how it affects us at the level of bodies, emotions and feelings - because I think that has been significantly under-theorised. So it is a partial perspective, but I hope for a good purpose. And I really want to start by contributing to the way the academy itself, its key workers, its key stakeholders - to some extent ourselves - can generate our own discourse about what on earth is going on. Many people have intentions for us. I think it’s about time that we started to regenerate intention for ourselves. I don’t mean that in an individualistic, detached mode, but I do mean it in the spirit of using our own resources to understand our own conditions of production.

There are many narratives of the academy in the emergent sociology of higher education: it’s about ruins; it’s about the good old days. I’m not actually interested in contributing to that. I don’t think that the good old days were all that good for certain categories of persons in the academy. And so I think there have been some gains, but I would like to see more. It’s that kind of discourse I want to contribute to. Part of the inspiration for this comes from a seminar series that I started with Louise Morley and colleagues at the Centre for Higher Education and Equity Research, which is about constructing some narratives and a grammar about the university today. I just want to flag some of the vocabulary, which has the disheartening, disaffecting effect on me of making me want to walk out of the door.

Now I don’t speak as a scientist scientist, I speak as a social scientist. I bring to this discourse Cultural Studies, and it’s Feminist, and it’s Critical Theory, and it’s a whole lot of other things around the psychosocial. But I cannot understand the third definition - perhaps someone can explain it to me later; it’s accompanied by a lovely grid and scattergram - but my point here is about the further imposition of this kind of language. It seems to me to delegitimate our own commitments, our own passions, our own moral universes; it’s like we’re guilty until proven innocent, and we have to prove our innocence through our utility. So I am quite mindful of what Mary said. And it certainly connects with other concerns I have.

I think part of the discourse of impact - part of the relocation of higher education into the clutches of the department for business and innovation - is a kind of industrialisation of research. I am thinking less about its scale than about its key vocabularies and the kind of thing that nourishes the discourse of impact. And I think one of the likely effects of having to account for ourselves through impact is that we become part of a teleology of it, in which we now, in our research grants, have to anticipate what our impacts might be before we have done the research, and then stand afterwards recovering and harvesting the impact from all sorts of places. I have an email box which says ‘impact’. Somebody sends me a little fan letter (not that I get many; I usually get other kinds of message) and I log it under impact […].

There certainly has been a shift in education research - that’s one of my communities - to produce work of immense utility, which derives from and thrives on a kind of discourse of behaviourism, in terms of ‘what can we do about the low attainment of the working classes?’, and we now have a certain re-legitimation of that in terms of the policy gaze of the government, and there is lots of discourse about ‘why aren’t they aspiring?’. And there’s a whole thinning out, a depletion of the resources to think about educational inequity, which is reduced to intervention studies. So it’s like research as inoculation: people rush along and do interventions. I don’t know if anyone has read the brochure called Going the Extra Mile. Read it. Well, my heart stopped, but it really is the reduction of poverty to culture, and from there a whole set of other things follow about what we should do about subjects who refuse to convert themselves into nice middle-class subjects. So I am interested in, and have been developing work with colleagues about, the effects of ‘impact’ and ‘audit’ on ourselves as the workers who are producing the knowledge, some of whose ends we can’t always control. Increasingly we know we have been invited - there is an incitement - to performativity, which I think is thinning out notions of ‘authenticity’.

Now I use the word authenticity with some discomfort, because I come from a poststructuralist, social-constructionist position, but there is something going on when we have to align our subjectivity with performing to all these indicators: how much impact we have, how many times our stuff has been cited, etc. etc. There’s a thinning out of those commitments - certainly for me - which brought me into the academy, and there’s also a wooing of our already over-exercised passions for our work into other people’s ends for us. My thinking here is inspired by Sara Ahmed’s work, and I was thinking about the living of this new space of intensification of ‘audit’, and having to line up with the dominant discourses, which are very, very policy-driven. I’ll just leave that there.

I’m interested in emotions as they are felt on the pulse, the body and the skin, but I’m also interested in the kind of moral and affective economy of the academy in these conditions of intense competition. And for me - and it’s hardly news - evaluation is the register that seems to be endemic in the English education system, certainly the English one. It’s the main provoker, it seems to me, of all sorts of affects, and the failure to meet the criteria for progress, for success, for doing this, for doing that, is massively felt, I think. At the same time that it is massively felt, it’s almost universally not spoken. I’m just going to skip.

So I see the obligation to evidence our ‘impact’ as ‘audit plus’. I’m sure that there will be an audit plus-plus and super-plus, just as, you know, there is a complete mania for this sort of gradation, whether it’s gold star, silver star, and so on. […] I mean this obsession with grading, hierarchies, places, positioning, and so on and so forth.

And I am also interested in the kind of ironic confluence that is going on. Just at the moment that capitalism has been discredited - it has of course been rescued and taken into public ownership - this model is so wonderfully laid on us as the very aspirational space we should inhabit. I can’t believe that there is a kind of seamlessness about that that is not open to challenge. In this conversation today it clearly is - it’s almost beyond irony, certainly beyond parody, that we are being asked to produce and produce and produce paper mountains that we never have time to actually read. I mean this is not my original point, but I do see impact as yet another attempt to delegitimise our existence as those workers who have been privileged enough - and I know it’s a privilege; it’s a difficult privilege - to have been, at one historic point in time, distanced from necessity: to play with ideas, to think the unthinkable, and to stand like contestants on The X Factor, to live the dream.

But, actually, it is the dream of the imagination I want to resuscitate, because I think there’s a massive fantasy invested in the discourse of impact, just as there are massive fantasies in the discourse of evidence-based policy making, etc. etc. So I really would like to interrupt the kind of gleeful austerity that seems to be around, in which ‘impact’ is yet another stick to beat us with. So I think we really need to turn to what is traditionally called the feminist lexicon. And I’m trying to think of what it was that got me into the academy in the first place. And, believe it or not, it was those seditious things called ideas. It was the gap between my experience and its representation, and feminist knowledge, which spoke to me, which resonated with me, which, ironically of course, started outside the academy and then travelled in, and I travelled in with it on that wave of excitement and passion. And what was so fabulous about that knowledge was that it reorganised my world for me, it produced new realities, it legitimated the realities I was living. And it allowed me to see things differently. And it’s very easy for us to forget how housework was just housework until it got rebranded as the sexual division of labour. And that had impact on me and continues to do so. This is the particular text that captures for me a whole set of preoccupations I have with what is generative for me about knowledge, ideas, language and discourse.

This is something from Valerie Walkerdine, who is inspirational for me, and inspirational for many other women who come from particular sorts of spaces in the social, and who have had to think about their inheritance and their present and their future. I like it because it’s kind of angry. So it has lots of those images in it, things in it called feelings and emotions, but it is intellectually robust, because what her vocabulary went on to delineate was precisely the complexity of social reproduction. Why is it, if we know what an unjust society we live in, that it reproduces itself? It’s partially her explorations of the psychosocial that allow us to explain our investments in those things that are precious and that have had such impacts for me. And I think also it’s about time we started generating and acknowledging our own desires in our work. Plenty of people, as Val indicated, have desires on us and project desires onto us, some of which are highly problematic.

I’m also interested in academic writing that is actually pleasurable, that is aesthetically crafted, that has motion. I am interested in non-formulaic sociology. I speak as somebody who is interested in Bourdieu, but I promise you, if I read another paper that invokes him I shall probably have to deconstruct it, because, you know, there is sociology beyond Bourdieu - and there was before and there will be after, and there is. However, I am interested in work that engages the heart and the brain; I’m interested in writing that is sparkly, connected and resonant. And I think that it ought to be possible for us to start generating prose that is dense but not the cadaverous prose that we are forced to listen to in endless meetings about targets and this, that and the other. It really does sap your spirit, and I want our spirit to be nourished - at least my spirit to be nourished. It’s presumptive of me to assume that you might share my imaginary or will walk with me into some of this.

And of course part of my feminist imaginary is texts that tell you something you have not thought - precisely working against the logic Mary characterised, the way this teleology of impact invites us to sign up to things we already know in a common-sense way about the world. This is a view of knowledge, a view of text, of discourse, of the distribution of intellectual resources, that allows us to think otherwise, that allows us to step outside or think again about the norms through which we are constructed. It’s an invitation to rumination, as Judith Butler says5. It’s an invitation not to collude in the discursive terms of our own production, which so easily segue into some of the narrative that is of such use value to our current economic and social dispositions.

So it’s an argument, finally, for the imagination. And, thankfully, it’s inspired by Appadurai and also John Elliott. He was arguing that there is something about the imagination such that, no matter how many times people come for you and call you to account, call you to render yourself in a particular way so that you can get recognition, the imagination always escapes the conditions of that kind of capture. And I think the university ought to be a space in which we nourish and cherish multiple forms of imagination, within and without.

Chair Sarah Franklin passes over to

Fran Tonkiss, LSE

Thank you very much Val for that presentation. I do appreciate your words on the language of impact. One of the things in your account that is excellent is this killing of language and the proliferation of jargon, and I think ‘impact’ is not in itself such an attractive word anyway. And there is something rather ugly about the way it is operationalised as sociological jargon in all the various measures that you indicated. Or otherwise it is reduced to a very simple formula, under which the natural sciences become industry, the social sciences become policy, and the humanities become, at best, television - or become irrelevant.

I want to pick up four points and perhaps play with them, possibly in a way contrary to the one Val has raised. The first of these is about the policy of fantasies: the fantasies around policy, and the fantasies that politicians perhaps have about academic research. The second relates to Val’s particular position in the higher education field and her comments on inspiration, ideas and imagination. The third is paper mountains and perpetual writing. And then, finally, something about competition - the intense competition that you referred to. And they are very short points.

Val mentioned at the outset the performative nature of ‘impact’ and ‘audit’, but also the performances, in a simpler, more conventional sense, around this. We have certain fantasies […] about the relationship of our research to policy, but equally I think there is a relationship on the other side. I read recently a very nice piece by Margaret Wetherell from the Open University in a recent number of Critical Social Policy6, where she gives a very nice account of one such performance. And it doesn’t get much better than this: she goes to 10 Downing Street, as part of a panel of academics, and think-tankers, and experts and so on, to consult with Tony Blair, then Prime Minister, on a very important speech that he gave in September 2006 on multiculturalism and on the need to integrate. And I looked up the speech - I remember the speech quite well, and I looked it up having read Margaret Wetherell’s paper - and I couldn’t find it; it seems to have disappeared from the 10 Downing Street archive, and indeed from the national archives. But the burden of Wetherell’s paper is this amazing build-up. You know, she gets the call to 10 Downing Street; who else is going there? - it’s a sort of august guest list. So she has to bring in all her network of researchers engaged with this large-scale research programme, and they have conversations late at night and at weekends and so on, and they get to Downing Street. And they certainly discussed it with Blair and his company, and then he gives the speech, and she finds not the scarcest trace of any of the discussion in the speech; it is a completely pre-formulated piece of rhetoric on Blair’s part. So after this great build-up you’re left with a sense of anti-climax at the end. But of course No. 10 in a way needs to be seen to be doing this on its side also. Was there a result? She says of course not - but you do need to read it to grasp it, and I do recommend it.

The second point I wanted to raise comes out of Val’s position in HE and her comments on inspiration and ideas. Val and I had an exchange prior to this afternoon, and one of her really good lines is one that she left out (perhaps it was in the section she skipped): Val referred to ‘the corralling of the imagination at the service of the mundane’. I was struck by this idea, and I thought I would quite like to make an argument for the value of the mundane - the mundane ways in which people in higher education, not just as researchers but as educators, actually have an impact. Many of these mundane forms of impact are not auditable - although, as we’ve heard from our speakers, everything is supposed to be auditable these days - or not easily auditable: all of those conversations that you have, the things you read and send on to others, the sort of rotating credit that happens in academic life, which would be difficult and frankly stupid to quantify, because it would be more time-consuming than business and innovation would consider worthwhile. But other mundane ways in which people working in higher education have impact are quite readily auditable, yet are not subject to research and impact assessment. Teaching is the most obvious case in point here. And I am not making an argument for the enhanced audit of teaching, but if we think in terms of our biographies, as it were, the individuals who have had most impact on me and been most inspiring, apart from Karl Marx, are the people who taught me - including especially the people who taught me Karl Marx. This for me is a reality of impact, and it is quite divorced from research assessment and the various kinds of audit around teaching. […] I would still say it is quite separate, and the role of higher education in education and the role of higher education in research seem to become more and more separated.

The other aspect of the mundane about which I'd like to say a word more picks up Val's point on citation and the importance of citation. I recently saw a list of the most cited sociologists in the world, and it was full of the usual suspects, many North American; but considerably towards the top of the list of citations, of course, is Anthony Giddens. Clearly he has had a great deal of impact in the world of sociology and beyond – in the world of policy, for a while – but I would put it to you that one of the major impacts Giddens has really had is through his textbook, now in its 6th edition, with the 7th surely in (preparation?). He has probably reached more readers through this means than through his more (refereed?) writings. And, of course, writing for students you do reach a larger audience. Yet it is not something that counts at all in audit and impact culture. If you would like to be read, then writing for student audiences is certainly one way to do that.

Thirdly, this brings me very briefly to the idea of paper mountains and perpetual writing, a thought that resonated with me very much. As someone involved in the editing of academic journals, I think one of the outcomes of the impact culture is the overproduction of academic work – which I hesitate even to call research; some of it is just writing for writing's sake, it seems to me. So while there may be benefits of the current research audit culture for promoting research writing and even promoting policy, there is certainly also the production of irrelevant writing, precisely to the (x?).

And my final point concerns the intense competition to which, as Val notes, academics are increasingly subject. Another thing I read recently is a piece by Katherine Smith in the current number of the British Journal of Sociology7 – the March issue – on research, policy and funding, which thinks about precisely the kinds of concerns that we are talking about today. One of the points she discusses at some length is the problem of credibility within this (weird?) competition: many researchers – many of our colleagues who are producing policy-making work, who are being heard in various corridors of power, or who are ticking the boxes on impact – are held in a degree of contempt by their colleagues […]. And I think this is a reality, one that I certainly see in my own academic context: the competition surrounds not just who is most auditable, but also the fact that many of those colleagues are not gaining credibility with their academic colleagues. This can be a (x?) trend within academia, and perhaps a rather depressing thought.

Mike Power, LSE

Good afternoon. I've got a powerpoint slide, but I'm not going to put it up – if you want it you can get it from me afterwards; I thought it might be better to do without it. What I have to say is very much complementary to the previous presentations; indeed, personally I have really only got footnotes to some of the things that have been said already. But I wanted to start with two stories. One is this: I am a non-executive director of a very large financial services organisation, so it's been a very interesting couple of years. One of the things that the financial regulator wants people like me [to do is] to be more challenging in the boardroom. And the evidence they want to see that we are being more challenging is in the minutes. So my colleagues and I are very concerned that the minutes are written in such a way that they reflect our questions and challenges. Now, that didn't use to happen, but now we are very concerned with creating traces of our challenges in the minutes. So it's not just academics – that's the point of that story.

The second is that, like Val, I also have an impact file, which I keep. It's got lots of great stuff in it actually, but it is a vulgar and rather narcissistic activity. And I don't think I'm alone in this. I think CVs are getting longer, almost nothing is left out, and this seems to me a tremendous opportunity to provide a sort of psycho-analytical explanation of some of these traces that we seem determined to leave behind as a record of what we do. Espeland and Sauder's8 marvellous 2007 article in the American Journal of Sociology about rankings talked of this kind of pre-audit activity on the part of the audited agencies in terms of reactivity, and that reactivity is very common; but it starts (x?) in the very mundane registers of activity – impact files. Even before we know what the final (rewards/results?) are going to look like, we are anticipating and creating registers in process, which will come to have an effect on us in the way that Val was intimating.

And I think all this forces us to touch on the very large question raised a bit before I came on: before we get to normative questions, there is more to be said about the sociology of knowledge and the sociology of our own disciplines as a way of stepping back from the impact discussion. We need to ask how academic disciplines – our own fields and sub-fields dealing with these issues – are connected both to each other and to wider domains of practice, for want of a better term. And when we reflect in that way we see that these connections are very complex and very crossed in time. We can tell broad-brush genealogies of disciplines and say that anthropology originated in imperial projects of control – that may not be true, but it sounds great. Yet despite such origins, many fields and disciplines have acquired a kind of relative autonomy which I don't think is very well understood by the academic practitioners within them. And I think part of what is happening to us – this is exactly the point you made earlier – part of what is happening to us is that we have actually forgotten about those relations and connections to worlds of practice. We have forgotten how to develop interesting counter-narratives about our own role in the world, not just in teaching but in other ways as well. So we should certainly be sceptical about myths of academic autonomy. But I think there is a lack of reflective knowledge, certainly in the area I'm involved in, about these connections – the complex relations to practice, which are not just a matter of applying stuff to the world. And there's a lot of very field-specific variety across social work, law… I was always very shocked that people teaching law in this place would actually also go and be judges – or barristers, sorry – in the Inns of Court; there's a model of practice if ever there was one.
Of course, ironically, it doesn't (x?) for me – it's quite upsetting. So there are lots of different kinds of impact implicated in all this.

And of course the one story of 'impact' that I think needs telling is the role of financial economics in the mess that we have currently reached. It touches on the point about positive and negative impact that was made earlier. None of this is new, of course: policy makers have been fretting about the (x?) of the economy for a long time, and about (our own spending?) relative to other countries, and so on. A good example is the ESRC: if you have applied for an ESRC grant, then you've been engaged in this sort of stuff before, trying to demonstrate engagement and impact before they've even happened. But there is a very distinctive and novel ambition – something like a step change, if you like – in the current proposals around the REF to actually require 'demonstrable benefits', I think is the phrase, on the economy and society. That is quite a new thing. And I think we probably do need to do a little bit of conceptual clearing-up work around the impact space. 'Impact' is a kind of relational concept – a (coalition?) concept, if you take it like that – and we have to be really clear about impact of what, on whom, and when. Your presentation spoke to the issue of time very ably. But there is also this issue of the ontology of impact: what particular units are we talking about as having the impact? There is a great deal of unclarity about this. And of course the most extreme ontology is that of the individual research paper: an identifiable unit whose causal impact can be traced to these people called users, who then change their behaviour for the benefit of society. I think that mythical model is just lurking in the background as a kind of invisible benchmark – more or less an invisible benchmark – for a lot of this discussion.
Of course, my experience with users – or maybe we should call them 'impactees' – is that they don't cite you; they steal your ideas, they're nicking stuff all the time and plagiarising it. They may not mention you, but (x?) just say you're lovely – and yes, you just get ignored, but actually I think they steal and reconstruct a lot. So there's a real issue about the impactees, (who?) have to be whatever they need to be themselves – not really constructed, not just socially constructed, but (rather more?) well managed than educated in some sense. And that's the real issue: they can't be controlled. I mean, I think we can control engagement, which is the softer edge of this stuff, but there's a very uncontrollable element of impact, which makes it so bizarre. Of course, researchers themselves – I'm thinking of my colleagues at the LSE – have huge amounts of symbolic capital and get drawn into these condensed meetings and so on, where they almost become hybrid policy makers. But of course that is not impact either, because you've lost the distance that allows your research to have impact at a distance. So the irony here is that the closer you get to the practice of policymaking, the less you tick the boxes. There's a very strange effect that might result: people withdrawing from really direct engagement – where they might actually be rather good translators and mediators to policy making – and just trying to market research papers. And then of course, at the other end, there are the (x?) institutions themselves, which you can think of – the LSE, for example – as a very complex portfolio of activities. And I think thinking of impact at that level would, in a kind of interesting way, force the governors, whoever they are, of such an organisation to actually define its intellectual topography in a way that hasn't been done before.
So as the unit gets larger, the impact (measures?), I'm sure, come out a bit more in its favour, because they provide rather challenging opportunities for whole-of-organisation narratives. I am reminded of the business review that private organisations now put at the front of the annual report – you probably don't know about this, but alongside the accounts they have this business review at the front, and these are actually rather substantial documents: narratives with metrics, (that are?) not unintelligent. It seems to me that if one could get into that genre of writing around these issues a little bit more, that would be really quite interesting.

I'm running short of time, I know. We've already talked about impact on other academics and on teachers and teaching, but I think it's quite interesting that the kind of standardised model of impact coming through in the proposals is very much a problem-solving notion of impact, as opposed to – as has already been said – a problematising of impact. This asymmetry – the idea that the only impact you can have is a sort of positive, discovery-type impact – is really something that we ought to challenge. Of course, all of this is really a product of the enfranchisement, the increased authority, of non-academic groups around academics: the stakeholder world that we now live in, which projects its aspirations onto what we do. And I think we need a better understanding of that world, rather than simply thinking of them as stakeholders – thinking instead of the more complex sets of engagements and relationships that might also exist on a (x?)-specific basis. Part of me thinks we're getting what we deserve – and I think this is your point again – that we've vacated a space, an audit space, which then got occupied for us, and there's a lot of catching up to do: more complex narratives of accountability, because I think there's no getting away from that; and (x?) I think it's a real responsibility not just to challenge impact, but to offer alternatives to these (x?). I won't deal with the time horizons, because that's been dealt with, but there's an auditability issue – a kind of (dis)chronological issue: when do we know that impact has happened? And of course the logic of auditability, which in my own work I see as rather ubiquitous, provides a real constraint on any – what shall I call it – more intelligent narrative of impact, which would speak more to the kind of differential, complex relationships that fields have with domains of practice.
Research is often like a pebble thrown into a pond, with very contingent ripples – many diverse and uncertain effects. And I sort of see it as a bet. I think that's what Donald was talking about: we have to protect the portfolio of bets that is research, even in the social sciences. So I feel (x?) for what I would call an acceptable narrative of impact. We are a long way from that, and my fear is that the logic of demonstrable effects, the logic of auditability, is pushing us in these other directions, which have already been mentioned.

There are some very strange paradoxes here, which Donald has touched on as well, in hard-wiring impact into individual behaviours – not just the paradoxes of my strange, narcissistic file of all the kinds of impact I think I've had on journalism and so forth. When there is something you want as an outcome, and you design it in as a target, you can actually get a worse outcome. There are lots and lots of research examples where that is the case. One case I read of was a research laboratory that wanted to improve the production of patents – patentable products – from its scientists. So they made patents the target for each scientist, and the number of patents went down. This is a kind of classic example.

We can expect lots of gaming, lots of strange behaviours. When a journalist talks to me I say strange things to them, like: "Would you mind dropping me an email and telling me how much you enjoyed talking to me?" – strange but true. I think there will be a lot of gaming, a sort of 'managed impact', because we all have to survive, don't we? […]. I don't think it's possible to defend knowledge for its own sake; even in the humanities there must be some notion of cultural benefit, which needs to be articulated. The problem with this impact requirement is that it's very hard to be against it in the abstract. And I think the real challenge is for us as academics to get a better understanding of (x?), and, before we become too upset, to develop the complex but also digestible narratives of performance around that. I think I'll leave it there.

Don Slater, LSE

My response will build on a theme that has been emerging over the course of this event so far: to what extent are we victims of our own failure to occupy this very important terrain of relevance – or even 'impact'? Have we failed to develop our own debates about how we should imagine, model and manage our relationship as academics to other domains of practice in the world? Is it the case that, because we have not offered other kinds of models for how to manage these relationships, we have ended up with these kinds of extreme and impoverished audit procedures?

I think Mike said it very clearly: none of us has any great desire to hide behind a term like 'academic freedom' and use it as an absolute licence to ignore our 'impact' or relevance; and no one is endorsing an ideal of autonomous pure knowledge or a refusal of any accountability to other constituencies. And actually I don't think that this is the real issue for most academics anyway. The question is: how do we develop more productive relationships with a whole range of constituencies in the world, and do this such that the knowledges we produce are developed and integrated in ways that are ever more complex, productive, unpredictable, adventurous, and so on?

In terms of my own experience, I have been thinking a lot about this for the last ten years or so, because I've been working on issues of new media and development, and have been funded by or collaborated with a somewhat bewildering range of users, from UNESCO bureaucrats through to villagers in various places and everyone in between. This obviously also represents an enormous range of relationships, each of which has to be thought through in different ways and each of which matures over time: my sense of how I relate to people, how I imagine these relationships, has changed enormously. The terms 'impact' and 'relevance' mean different things in different contexts and at different times: that is to say, there can be no simple and formalistic audit approach to how knowledge – research strategies, data, findings, reports, consultations, etc. – enters into these relationships. I want mature and responsible connections to the various parties to whom research may be relevant, whether they be funders, stakeholders, beneficiaries or innocent bystanders; to those I study and those who pay me to do the study. And the question of what kind of 'impact' research should have is – normally – a profoundly political one. One cannot (and certainly should not) make all this complexity, and these ethical and political concerns, submit to a unitary measurement of 'impact'.

For me the gold standard for managing these evolving and complex relationships has always been ethnography. Ethnography starts from the recognition that researchers enter into highly complex and unpredictable human relationships, and that their 'impact' can only be dealt with ethically and maturely by tracing the ways that a researcher's speech, ideas, interactions and publications enter into different worlds – how we mediate and are mediated. Ethnography is committed to reflexivity rather than the imposition of an abstract audit tool. It is also dialogic – it is always about the meeting of at least two worlds (the researcher and the researched), and therefore about 'impact' in at least two directions.

The issue, then, is obviously not that impact or relevance are themselves dangerous criteria; the problem is that the models of impact that are currently in circulation are based on horribly impoverished, reductive and unconvincing ways of modelling relationships between researchers and the various people and the multiple social worlds that they deal with.

I've always loved the subtitle of Mike's book 'The Audit Society': 'Rituals of Verification'. In the common-sense meaning of ritual, the subtitle catches the emptiness and formalism of audit methodologies. In a more anthropological sense, these rituals are also technologies for producing trust in a wider society that really doesn't know what the hell we are doing. I've invoked ethnography partly to say that, while I entirely accept a responsibility to account for what I do, I also feel that to do so in purely formal and empty terms would be to act in seriously bad faith.

One last response that I'd like to make to Mike's presentation: Mike indicated that one of the reasons audits of impact are so reductive and impoverished is that they are based on the idea that you can treat research as an independent variable with a causal relationship to other things in the world. The whole point of audit here is to be able to trace, unproblematically, some event in the world back to a piece of paper that is someone's publication. The point is not simply that the events to be measured are so daft (e.g., counting up indications of journalistic interest); it is the idea of a research output as a discrete variable. This idea brings me back to some Media Studies issues. My own first research was on advertising, where such issues were constitutive: enormous amounts of ink and breath were wasted on whether or not advertisements caused various social consequences (irrational consumption, sexism, cultural decline and bad taste, whatever). Whether it is research impact or media impact, there seems to be an equally daft assumption that it makes sense to ask 'what impact has this representation – on its own, as an isolated entity – caused in the world?', what can be attributed to it and only it. And we all know that this is a very silly way to think about how any representations circulate in the world. Again, a good response to the question of how to model the relationships of our research to the worlds we study would be to build better models, rather than leaving this rather crucial political work to those who do it badly.

1 cf. Goodhart, 1973, p. 1417
2 Mcintyre, 2005, p.35
3 Klein, 1973
4 D. G. thanks research student Brendan Clarke for providing him with details of zur Hausen’s case.
5 Judith Butler, What is Critique?, 2000.
6 Wetherell, M. (2008) “Speaking to Power: Tony Blair, Complex Multicultures and Fragile White English Identities.” Critical Social Policy. 28 (3), pp. 299–319.
7 Smith, K. (2010) “Research, Policy and Funding – Academic Treadmills and the Squeeze on Intellectual Spaces.” The British Journal of Sociology. March, 61 (1), pp. 176–195.
8 Espeland, W. N. and Sauder, M. (2007) “Rankings and Reactivity: How Public Measures Recreate Social Worlds.” American Journal of Sociology. July, 113 (1), pp. 1–40.
