The Hidden Costs of Research Funding: Balancing Accountability and Creativity
Exploring the challenges researchers face with current funding systems, emphasizing the need for a balance between accountability and fostering creativity in science.
The craziness of research funding. It costs us all. Geraldine Fitzpatrick TEDxTUWien
Added on 09/27/2024

Speaker 1: Show of hands, who pays taxes here? Oh, quite a few. Even some of the students. I thought there might not be so many hands, given that there are so many students. So, especially to those of you who pay taxes, I want to say thank you. Because you fund our public universities, you pay my salary, and you allow me and the people in my institute to do research that we're passionate about, which is about designing technology that fits with people and fits into their everyday lives, rather than people having to fit the technology. Your taxes also go to fund some of the big funding schemes that we apply to, in consortiums, to do our research. My concern here today is not to tell you about the great research we're doing in our group, which I would love to do another time, but to tell you about something that is increasingly of concern to me: the systems and processes we now have in place mean that people are spending more and more time talking about doing research, or constructing reports about research they should have done, and not actually doing the research we need to address some of the really serious problems we're facing in today's society. I can illustrate this with an email I got just last week about a research proposal that had been submitted to a large European funding scheme, with 11 partners from universities, industry, and public bodies across six countries. It was a really great research project that we cared about: designing technologies to support health and wellbeing for older people living at home. The letter goes on to say, we regret that we can't fund you, that your score didn't reach the funding level. But in fact, what happened here is that only six out of 187 proposals were funded, and this was a highly rated proposal, so it's not as if the research proposed wasn't any good.
So apart from not being able to do the research that we cared about and wanted to do, and into which we had put enormous amounts of time and effort, months of effort, to write up, a postdoc fellow in my institute won't have a job to go to when his current short-term project ends. A colleague from another university has very hard monetary targets set by his university each year as part of his annual performance review, so this won't contribute to meeting his funding targets. But I also have a guilty confession to make, because another part of me said thank you that we did not get it, and that's because I know that if we had got it, I was buying into an enormous amount of administration, management, and overhead that would be really onerous. So what's going on here? I think the proposal itself illustrates some of this: it was a 153-page proposal, and only about 33 of those pages were actually about the research. The rest was about administration and management, et cetera. We could, if we had wanted to, have brought in consultants at 1,000 euros a day to write those pieces, because they know how to fit the funding body's needs. In that proposal, we had to say exactly what we would be doing at any point in time over the three years the project would be running. In a state of radically changing technology, we would have had to say what we would be doing in three and a half or four years' time, how long it would take, and exactly what we'd be producing. This becomes a straitjacket: we have to report against this Gantt plan at regular intervals to the funding body, and they don't pay us all our money until the very end, and sometimes they withhold it because for some reason we didn't quite meet exactly what we'd said.
We also have to keep very detailed timesheets, as if we could clearly differentiate what a given thought was about and which work package it mapped to, knowing all the while that the timesheet has to reflect the time we said the work would take, not the time it really took. There are some false assumptions here. There's an assumption that research work can be defined as a rational process: that you're going from A to B, that you know what that path is, and that you know the shape and the size of the moon you're going to reach at the other end. But the reality of research is very different. We know where we're starting from, we can define the problem space really well, and we can have a good plan for how we think we're going to get there to solve it, but we don't know what's happening over that hill, what we're going to encounter, and what other paths we might end up taking, paths that might take us to far more exciting and interesting places than we'd ever thought we could go. One of the ways we do that, particularly as human-computer interaction researchers, is by talking to the people we're designing for and with. We might come to them with an idea, but after talking with them and working with them, they might say no. But the straitjacket of our Gantt chart and our agreement with the funding body means we can't take on their no. We still have to deliver what we had said, so we can't really respond to their needs. They continue helping us because they want to help as participants, but we're not delivering something of value to them. We're delivering a tick box to a report. There's also no room in this rational model of research for errors, learning, and mistakes, and that's a key part of how we innovate, how we make advances, how we generate new knowledge.
In the accounting models of many of these funding agencies, these errors or failures are deviations or problems that we can account for in our reports with yet more documentation, but they're not seen as a normal part of the research process. They're not seen as learning opportunities to explore a better way, where every failure or mistake is one more step towards somewhere better. Albert Einstein would have trouble operating in our current funding environment, in our current university environment, because he talked about time to think. What a luxury: time to think for months and years, open-ended. I don't know how long it will take me to think about this. He talked about trying things 99 times and succeeding on the 100th, but he couldn't predict in a Gantt chart up front that it would take him 100 goes. He just had to keep trying. I can just imagine him putting in a proposal to one of these funding schemes and getting a rejection letter because he couldn't fully specify his path to relativity theory. There are loads of examples of research that is unexpected, of innovations that come in unexpected ways. Percy Spencer had a chocolate bar melt in his pocket as he stood beside a magnetron, and that led to the invention of the microwave oven. An arts and design student took a radically different approach to designing prostheses, doing life models and drawings and really exploring the articulation of joints, and changed the paradigm for how we make prostheses and robotic arms. He did it in a very practice-led way that he didn't, and couldn't have, defined up front, because he did it by trial and error. In fact, the medical physicists he was working with were really sceptical that he would get anywhere, or that this was the way to go, but it has changed what we do with prostheses now.
Research is a far more exciting, open-ended, creative process than is ever captured in these formal models, graphs, and timesheets that embody this rationalistic account of research as something that can be totally defined up front. What it encourages, in the end, is not so much creativity in the research as creativity in the reporting and the timesheet keeping, so that we can keep the computer happy, keep the funding body happy, and make sure we get paid for the work we really do want to do. This has really problematic implications that I think we should all be concerned about. One is at a societal level; the other is the individual cost for the researchers involved. At a societal level, I'm just using this as an illustration: in 2015, according to a report from one of the major European funding agencies, there were 42,000 proposals submitted that year, and only about 4,200 or so were actually funded, a success rate of about 10.7%. I have some figures from a 2011 study about the order of effort required to make a proposal. I am putting together figures from different years, so they're just indicative. What they indicate is that nearly 19,700 person-years in 2015 went just into writing proposals, writing about research, not doing research, at a cost of nearly €1.5 billion. If we look at a UNESCO report, that equates to all of the researchers in Ireland in 2015 not being able to do any actual research, only able to write about doing research. Then there are also costs for the people who do get funded: the contract negotiations take time and effort, and once the projects are funded, there's significant effort required over the next three years to manage and report. Every D and W on that Gantt chart we saw before generates reams of documents.
Would we be happy if this level of effort, €1.5 billion, was put into funding teams of doctors and nurses competing to be the ones allowed to save patients' lives in the operating theatre? Not operating, not saving lives, just competing, talking about what they would do. And then the few who do get that funding wouldn't actually be in the operating theatre very much. They'd be out filling in the paperwork and the reporting and the admin around it. It's a big cost. It's using up our taxpayer funds to talk about research, not to do it. I understand that because these are taxpayer funds, accountability is really important, but I think we've got the balance wrong between managing risk and control on the one hand, and trusting researchers to get on and do research, and to appropriately allocate their time to delivering benefits for society, on the other. I said there are also personal costs. In this era, academics are increasingly subject to numerous metrics. A 2012 report in the UK said that UK academics were subject to 100 different metrics measuring publications, grant income, teaching evaluations, and more. There's this quantified academic self now, reduced to numbers that are decontextualised and don't make any sense, but these numbers matter: for promotion, for getting your next job, for getting your next funding grant. It puts enormous pressure on people. I hear this. I do interviews for a podcast series with academics, and I hear the enormous stress people are under. There are studies reporting that people in academia are working 50 to 60 hours a week on average. There's a real problem with chronic stress and burnout. We know from the psychology literature that when people are stressed, they're less creative; their thinking is more closed. These are the very people we want to be open and creative. Remember my colleague on our proposal that wasn't funded?
His university recognises, as do many others, that their academics and staff are getting increasingly stressed. But what they're doing is not addressing the measures and performance indicators that are causing the stress. They're offering stress management courses and mindfulness training, which are great skills to have, but not just to put people back onto the battlefield, into those same pressure situations. So what do we need to do? Science is really, really important. There was a March for Science on the 22nd of April, with people in more than 600 cities around the world marching for the importance of what science can contribute to society: the health of our communities, education, wellbeing, the economy, everything. We need good, healthy scientists and researchers delivering to society the outcomes that we, as taxpayers, are paying them to deliver. The United Nations developed a set of Sustainable Development Goals in 2015. I think these are great, because they recognised that gross domestic product, as the only indicator of the health of a nation, was a very narrow indicator. They were about trying to develop a much more inclusive set of indicators and to redefine what success means. I wonder what we might do similarly within a research environment, within an academic context. What might a set of sustainable research and researcher goals look like? Goals that recognise that research is creative, messy, and open-ended; that different types of research need different timeframes; that the health and wellbeing of the researchers doing the research is just as important as the outcomes of the work; and that compassionate, collaborative work environments, not competition and stress, are the way to deliver good science, the way we can manage risk in positive ways that draw on trust in our researchers.
I want to encourage us to start thinking about what this set of goals might look like, redefining what success is and what our indicators are, so that we can really put our taxpayers' funds to the best use: actually doing science, not just talking about science. Thank you.
