Cambridge zoologist Peter A. Lawrence has published a thoughtful piece on the frustration of scientists (whether young or not so young) facing the ruthlessness of the research granting system (Real Lives and White Lies in the Funding of Scientific Research). He suggests how a “drastic simplification of this grant-writing process would help scientists return to the business of doing science” and quotes a passage from a recent NYT column by Stephen Quake, who asks what sounds to me like a challenging question:
Could we stimulate more discovery and creativity if more scientists had…security of…research support? Would this encourage risk-taking and lead to an overall improvement in the quality of science?
I take this as a genuine question in search of a convincing empirical answer.
- The full article is available in PLoS Biology.
AMA citation:
Taraborelli D. Portrait of the scientist as a bureaucrat. Academic Productivity. 2009. Available at: https://academicproductivity.com/2009/portrait-of-the-scientist-as-a-bureaucrat/. Accessed August 25, 2011.
This entry was posted on Tuesday, September 15th, 2009 at 11:50 pm and is filed under Evaluation, Funding, Jobs, Writing. You can follow any responses to this entry through the feed. You can leave a response, or trackback from your own site.
September 16th, 2009 at 9:45 am
It is like the old debate over whether the welfare state (which provides a high level of job security) stimulates or kills entrepreneurship and innovation. The research on varieties of capitalism (i.e., comparing US-style with German-style capitalism) suggests that higher security allows people to specialize more and produce incremental innovations (think ever-better cars and machinery). However, the US-style arrangement, without security but with venture funding (read: grant funding), is better at producing explosive innovations (think Silicon Valley). If that works for engineering, it might apply to other areas of research as well.
September 16th, 2009 at 4:17 pm
The managers of scientific research will certainly welcome essays like these, which criticise the evaluation systems from the inside. The targets are both the evaluation systems for funding through grant applications and the evaluation systems for career development through counting and weighting publications.
These criticisms are valuable because they spell out the problems researchers face and include informative reflections on their working conditions. Yet one should also acknowledge the limits of these complaints: they rarely look at the full picture and often rest on a naïve view of scientific development.
In the article Dario brings up, this shows in the discrepancy between the complaint (bureaucracy kills research and the careers of very able scientists) and the proposed solution (maybe the application forms should ask for a little less detail?). What is it that the evaluators actually need to make fair judgements?
I find that sociologists and philosophers of science have taken too little part in the debates about the management of scientific research. Yet there is important work in the field that is highly relevant. For instance, a seminal book in the history and sociology of science is exactly about the relation between innovation, creativity, and the “normal” practices of scientists: Kuhn’s The Structure of Scientific Revolutions (but see also his chapter “The Essential Tension: Tradition and Innovation in Scientific Research” in The Essential Tension). Kuhn shows that it is the development of normal science that brings about revolutionary innovation. By comparison, the current enthusiasm for innovation, as if it could spring from nothing but well-funded smart scientists, is a bit naive. Deciding between risky innovative research programmes and standard, well-grounded ones is never an easy task. Both are needed, and resources are, of course, scarce.
I find a similar naivety in the criticisms of evaluation systems based on publication counts. The point of counting is obviously that it is efficient. Would the complainer be willing to read and re-evaluate the articles that are currently just counted? First, it is incredibly time-consuming (I thought the complainer wanted more time for their own research), and second, the re-evaluator will certainly be less well placed to judge the quality of the articles than the reviewers of the specialised journals. When there are a hundred candidates for each job, I don’t think there is a better proxy than such a count of publications, weighted for importance via Impact Factor. It is a matter of delegating the assessments to experts and compiling those assessments for straightforward and efficient decision-making.
Yes: everybody would like more opportunities to show their value; but nobody is willing to give all that time to assessment processes. It seems to me that Peter A. Lawrence, the author of the paper mentioned above, wants at the same time lighter evaluation procedures (when writing grant proposals) and heavier ones (not just counting publications).
I too have spent at least a third of my working time as a researcher looking for grants and jobs, and I can’t say this is the part I enjoyed most. Yet if there is one thing that demarcates science from other cultural domains, these collective and distributed evaluation procedures are a very good candidate.
The evaluation processes come with the obligation to communicate and convince others of the truth of one’s claims, i.e., the apparently despised “salesmanship and networking”. How can we know that some work is of good quality other than by the fact that it convinces other experts of the truth and relevance of its claims?
It is good that scientists want to improve the assessment systems and complain about their defects. But let us keep in mind what these systems achieve and why they are needed. Convincing others is a key, and not a despicable, aspect of scientific practice.
September 19th, 2009 at 7:36 pm
Really good analogy, Brum, and great comment Christophe (I love when the comment is longer/more detailed than the starting post)!
I think there’s a big difference between the EU and US funding models. The EU tends to produce smaller grants, with orders of magnitude more paperwork and interminable ‘deliverables’ spread over the life of the grant. The US system is mostly one final report and you are done.
The problem with ‘deliverables’ is that they are mostly inert. Writing deliverables is a different skill from writing papers, and it’s not trivial to convert one into the other.
September 21st, 2009 at 4:01 am
I agree in principle that this is an empirical question; but I also think it is one not easily investigated and for which definitive answers are unlikely to be forthcoming. Therefore, it seems to me that multiple systems and multiple sources of funding are parts of the answer. There need to be scientists who are assured sources of funding and given research assistants as a right, and those who struggle to put funding and a research team together. How to realize this kind of hybrid without an intolerable degree of unfairness is a big challenge.
A couple of other points, not really linked to the above or to each other, as I don’t have a clear idea of what needs to be done:
Christophe’s criticisms of naivety are well taken. Yet I would guess that most of us involved in writing grant proposals find the process unpleasant, and often find ourselves writing what we ourselves think is baloney to satisfy the imagined readership. Furthermore, the number of person-hours lost to preparing and reading rejected proposals is so colossal that the need for change (whether along the lines suggested or not) is something most of us can agree on.
Like Christophe, we all have to accept that resources are scarce. Yet one gets the feeling that resources are never scarce when it comes to fighting a war or bailing out multi-billion-dollar corporations. I don’t have figures at my fingertips, but I suspect that the amount devoted to scientific research is much smaller than that used to, for example, help financial firms in trouble. I further suspect that resentment at such (alleged) facts is a sometimes unacknowledged factor in the desire for change, along with the thought that if more money were made available for what is presumably a very important enterprise, success rates would go up and a large proportion of the problem might go away.