“Science outside of society doesn’t exist” (Dr. Paul Wouters)
I watched a really interesting presentation by Paul Wouters, Director of the Centre for Science and Technology Studies (CWTS) at Leiden University in the Netherlands. The title of the presentation is “New generation metrics“.
The presentation was part of the New research evaluation methods conference, which took place at the Faculty of Library and Information Science of the University of Barcelona.
Dr. Wouters argues that evaluation is not a measurement of what you have done but an inquiry into the possibilities open to you and the state of affairs in your group or university at the moment: where you want to invest, what kind of people you need. The evaluative process is never rigid, never uniform across fields, and it is context-oriented. Mixed-methods approaches are needed, combining quantitative and qualitative evidence rather than relying only on indicators such as impact factors, indexes, or rankings.
Evaluation is also about conditions and infrastructure: for example, the synergies among the people in a group and the circumstances they work in.
He also argues that evaluation and assessment must be at the core of knowledge creation.
He points out that impact does not exist in society by itself, and neither does quality; both are assessments. So it is not possible to measure research through indicators alone.
He also talks about open science and how this new scenario must change the methods for evaluating and assessing scientific research. The ambitions of open science are:
- More comprehensive measurement of traditional scientific publication (e.g. Mendeley)
- Recognizing and capturing the diversity of scientific output, including new forms (e.g. software and blogs)
- Opening up the whole scientific publication system (Open Access) and more interactive communication
- Opening up the very core of knowledge creation and its role in higher education and innovation (participatory science)
This last point is, in Dr. Wouters’ opinion, the most important, and the game-changer in research evaluation.
He also presents three key assessment points for the individual researcher.
So, for him, it is clear that a new way to evaluate research is coming, but for now there is a lack of creativity in research assessment, an issue he is working on together with others.
In short, he argues that evaluation and assessment must be part of the research process itself, not an administrative task that researchers have to do on top of their core work.
I found the presentation really interesting. Today researchers are measured mainly by their outputs, but what about their expertise and influence? What about their “other work”, such as blogs, tweets, software, and so on? How can we measure that? How can we measure a researcher’s reputation rigorously and objectively?