Not only h-index… #research #metrics #openScience


Do not misunderstand me. Metrics like the h-index are useful, but they are only valid for comparisons between similar researchers and similar kinds of research. Research is not one-dimensional: the process is complex and no two projects are identical, so we must use metrics carefully.
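To make concrete what the h-index actually summarizes, and why a single number hides so much, here is a minimal sketch in Python. The function name and the citation counts are illustrative only, not taken from any bibliometric tool.

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)  # most-cited papers first
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank   # this paper still supports a larger h
            else:
                break      # the remaining papers are cited fewer than rank times
        return h

    # Two very different citation profiles can produce the same h-index:
    print(h_index([50, 40, 6, 5, 2]))  # two highly cited papers  -> 4
    print(h_index([4, 4, 4, 4, 0]))    # four modestly cited ones -> 4

Both profiles collapse to h = 4 even though the underlying records are very different; that lost detail is exactly what the qualitative side of an evaluation has to recover.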

Metrics must be used both quantitatively and qualitatively, and the more quantitative data we have, the better.

To understand this assertion, let us explore metrics a little further. The first question that comes to mind is:

How should we use metrics?

In the article “A guide to research metrics” by Taylor & Francis [2], this is explained well:

First of all, ask what aspect of the research you want to evaluate and what you need to understand. If it can be measured, how can it be measured? Then match the correct metrics that will answer your question.

Use quantitative information (research metrics) together with qualitative information (opinions). Research metrics are a useful tool, but enhance them by gathering expert opinions: ask colleagues and peers for their thoughts too.

Finally, look for a more rounded picture. Each metric gets its data from a different source and uses a different calculation. Use at least a couple of metrics to reduce bias and give yourself a more rounded view: for instance, look at a journal’s Impact Factor but also at the Altmetric details for its most-read articles.

In this way, metrics (h-index, citations, SJR, JIF…), altmetrics (social media…), author profiles, opinions and so on are all valid.

Then we can ask:

Which metrics are the best for me? [1]

Well, it depends; there are different aspects and levels to take into account. First of all, there are metrics at the author level, the article level and, finally, the journal level.

[Figure: research metrics at the author, article and journal level]

So each kind of metric is typically useful for a different kind of user: researchers, journal editors and librarians.

Here, you can see an interesting overview of the most common research metrics, what they are, and how they’re calculated: downloadable guide.

In more detail, we can ask:

How can metrics help me? [2]

For researchers: metrics can help you select which journal to publish in and assess the ongoing impact of an individual piece of research (including your own).

For journal editors: Research metrics can help you assess your journal’s standing in the community, raise your journal’s profile, and support you in attracting high-quality submissions.

For librarians: Research metrics can help you to select journals for your institution, and analyze their reach and impact. They can also help you assess the impact of research published by those in your institution.

Finally, we can ask:

Where can I find metrics? [1]

[Figure: where to find research metrics]

In short, we should not rely on a single limited metric such as the h-index, because of the biases it produces; instead, we can make more elaborate use of a mix of qualitative and quantitative metrics to carry out evaluations and benchmarking and to make decisions.

I have read an interesting paper about metrics. It examines four familiar types of analysis that can obscure real research performance when misused, and offers four alternative visualizations that unpack the richer information lying beneath each headline indicator [3]:

  1. Researchers: A beam-plot not an h-index. The h-index is a widely quoted but poorly understood way of characterizing a researcher’s publication and citation profile whilst the beam plot can be used for a fair and meaningful evaluation.
  2. Journals: The whole Journal Citation Record (JCR), not just the Journal Impact Factor (JIF). The JIF has been irresponsibly applied to wider management research, whilst the new JCR offers revised journal profiles with a richer data context (see the sketch of the JIF calculation after this list).
  3. Institutes: An Impact Profile not an isolated Average Citation Impact. Category normalised citation impacts have no statistical power and can be deceptive whilst Impact Profiles show the real spread of citations.
  4. Universities: A Research Footprint, not a university ranking. A global university ranking may be fun but suppresses more information than most analyses and hides the diversity and complexity of activity of any one campus, whilst a Research Footprint provides a more informative approach as it can unpack performance by discipline or data type.
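As a point of reference for item 2 above, the Journal Impact Factor behind that criticism is just a two-year average. Here is a minimal sketch of the calculation, with made-up numbers for a hypothetical journal:

    def journal_impact_factor(citations_in_year, citable_items_prev_two_years):
        """JIF for year Y = citations received in Y to items published in Y-1 and Y-2,
        divided by the number of citable items published in Y-1 and Y-2."""
        return citations_in_year / citable_items_prev_two_years

    # Hypothetical journal: 420 citations received in 2019 to items it published
    # in 2017-2018; 150 citable items were published in those two years.
    print(round(journal_impact_factor(420, 150), 2))  # -> 2.8

The same average can come from a handful of highly cited papers or from many moderately cited ones, which is exactly why the full citation distribution in the JCR tells you more than the single headline figure.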

With the advance of Open Science, this landscape of metrics is evolving rapidly. In this way, we have two paradigms:

Old paradigm –> From idea to measurable citation count can take 5-10 years

New paradigm –> Metrics are available immediately.

So, we are seeing a rapid transformation of the metrics ecosystem and this is only the beginning, in my opinion.

References:

[1] https://www.slideshare.net/Library_Connect/researcher-profiles-and-metrics-that-matter

[2] http://explore.tandfonline.com/page/gen/a-guide-to-research-metrics

[3] You can download the paper here: https://clarivate.com/g/profiles-not-metrics/
