Interests:
- Internal communication and sociology in science
- The relations between science, the public and journalism
- The scientific evaluation of research after publication
Activities at CPNSS, 2009:
- My master's thesis is an analysis of how published articles are evaluated by biologists. There is a widespread conception that articles undergo continuous internal evaluation by colleagues, and I want to explore to what extent this actually works. Do biologists have adequate opportunities to discuss methods and conclusions? To what extent do they scrutinize colleagues' articles, and how do they react if they disagree with parts of them?
My hope is to contribute to the development of the publication system. I will explore which parts of the system work well in evaluating published research, and whether there are better alternatives to current practice.
- Iben Wiene Rathje (2009): Evalueringsadfærd: Normer og praksis hos biologiske forskere (Evaluation behaviour: norms and practice among researchers in biology). Master's thesis, Institute of Biology and Center for the Philosophy of Nature and Science Studies, University of Copenhagen. URL: http://www.nbi.dk/~natphil/prs/iwr/IWR2009_Speciale.pdf (thesis in Danish).
Abstract: Scientists read and produce numerous articles, and the success of an individual scientist is measured by the number of papers published in high-impact journals (Kokko & Sutherland, 1999; Lawrence, 2007; Tenopir et al., 2003). This study investigates how qualitative evaluation and critique are realized when scientists read papers published by other scientists, and whether there are norms motivating active evaluative behaviour as an integral part of research work.
The study builds upon interviews and written material gathered from 40 researchers within biology in Denmark. It shows that scientists read the papers they use in their own research with highly varying degrees of thoroughness. Scientists working with biological processes below the level of the organism read the methods and statistics sections less often than scientists working with populations. When a scientist expresses critique of a paper they have read, it is rarely communicated in a way that makes it accessible to the author or to potential readers. Scientists seldom search accessible databases for other experts' critique of the articles they use.
Qualitative interviews showed that there are no explicit, univocal norms within the research community regarding how to evaluate articles published by peers. It seems clear that a lack of incentives is one reason why evaluation of published research is rare. A system of evaluation is therefore proposed that would make it possible to use, document and thereby measure evaluative behaviour. In this way, evaluation and error correction could be recognized in scientometric analyses and rewarded on a par with citations, the H-index, etc.