Not only h-index… #research #metrics #openScience

Posted on (updated)

Do not misunderstand me. Metrics like the h-index are useful, but they are only valid for comparisons between similar researchers and similar kinds of research. Research is not one-dimensional: the process is complex and no two projects are identical, so we must use metrics carefully.

Metrics should be used both quantitatively and qualitatively, and the more of them we combine, the better.

To understand this assertion, we can explore metrics a little, so the first question that comes to mind is:

How should we use metrics?

In the article “A guide to research metrics” by Taylor & Francis [2], they explain it well:

First of all, ask what aspect of the research you want to evaluate and what you need to understand. Can it be measured and, if so, how? Then match the right metrics that will answer your question.

Combine the quantitative (research metrics) with the qualitative (opinions). Research metrics are a useful tool, but enhance them by gathering expert opinions: ask colleagues and peers for their thoughts too.

Finally, look for a more rounded picture. Each metric gets its data from a different source and uses a different calculation. Use at least a couple of metrics to reduce bias and give yourself a more rounded view: look at a journal’s Impact Factor, for instance, but also at the Altmetric details for its most-read articles.

In this way, metrics (h-index, citations, SJR, JIF…), altmetrics (social media…), author profiles, opinions and so on are all valid.

Then we can ask:

Which metrics are the best for me? [1]

Well, it depends; there are different aspects and levels to take into account. First of all, there are metrics at the author level, at the article level and, finally, at the journal level.


So each kind of metric is typically useful for a different kind of user: researchers, journal editors and librarians.

Here, you can see an interesting overview of the most common research metrics, what they are, and how they’re calculated: downloadable guide.

In more detail, we can ask:

How can metrics help me? [2]

For researchers: metrics can help you select which journal to publish in and assess the ongoing impact of an individual piece of research (including your own).

For journal editors: research metrics can help you assess your journal’s standing in the community, raise your journal’s profile, and support you in attracting high-quality submissions.

For librarians: research metrics can help you select journals for your institution and analyze their reach and impact. They can also help you assess the impact of research published by those in your institution.

Finally, we can ask:

Where can I find metrics? [1]


In short, we should not rely on only a limited set of quantitative metrics, for example the h-index, because of the biases they produce; instead, we can make elaborate use of a mix of qualitative and quantitative metrics to carry out evaluations and benchmarking and to take decisions.

I have read an interesting paper about metrics. It examines four familiar types of analysis that can obscure real research performance when misused, and offers four alternative visualisations that unpack the richer information lying beneath each headline indicator [3]:

  1. Researchers: A beam plot, not an h-index. The h-index is a widely quoted but poorly understood way of characterising a researcher’s publication and citation profile, whilst the beam plot can be used for a fair and meaningful evaluation.
  2. Journals: The whole Journal Citation Reports (JCR), not just the Journal Impact Factor (JIF). The JIF has been irresponsibly applied to the wider management of research, whilst the new JCR offers revised journal profiles with a richer data context.
  3. Institutes: An Impact Profile, not an isolated Average Citation Impact. Category-normalised citation impacts have no statistical power and can be deceptive, whilst Impact Profiles show the real spread of citations.
  4. Universities: A Research Footprint, not a university ranking. A global university ranking may be fun, but it suppresses more information than most analyses and hides the diversity and complexity of activity on any one campus, whilst a Research Footprint provides a more informative approach, as it can unpack performance by discipline or data type.
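To see concretely why the h-index alone can mislead, here is a minimal sketch of its computation (the citation counts are invented for illustration): two researchers with very different citation distributions can share the same h-index, which is exactly the spread a beam plot would make visible.

```python
def h_index(citations):
    """Largest h such that the researcher has at least h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Two hypothetical researchers with the same h-index but very
# different citation profiles:
steady = [6, 5, 5, 4, 4, 3]      # consistent mid-range impact
skewed = [120, 90, 4, 4, 1, 0]   # a couple of hits, little else

print(h_index(steady))  # 4
print(h_index(skewed))  # 4
```

Both profiles collapse to the same number, even though the second researcher’s impact is concentrated in two papers: the headline indicator hides the distribution.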

With the advance of OpenScience, this landscape of metrics is evolving rapidly. In this way, we have two paradigms:

Old paradigm –> from idea to measurable citation count can take 5–10 years.

New paradigm –> metrics are available immediately.

So, we are seeing a rapid transformation of the metrics ecosystem and this is only the beginning, in my opinion.


[1] https://www.slideshare.net/Library_Connect/researcher-profiles-and-metrics-that-matter

[2] http://explore.tandfonline.com/page/gen/a-guide-to-research-metrics

[3] You can download the paper here: https://clarivate.com/g/profiles-not-metrics/

The #euroCRIS international conference 2018 #research #FAIR #openScience #cris2018

Posted on


Last month I attended the international conference of euroCRIS in the beautiful location of Umeå, Sweden.

The mission of euroCRIS is to promote collaboration within the research information community and advance interoperability through CERIF. CERIF is the Common European Research Information Format, a format that enables interoperability between CRISs. A CRIS, or Current Research Information System, is a database that unifies all the research information of an institution with data quality and reliability.

The general theme of the conference was “FAIRness of Research Information”. In a broader sense this theme reflects on the question of how to optimally present research (projects), its practitioners (researchers, institutions) and – above all – its products (datasets, publications) and make them optimally discoverable and useful, first of all for fellow researchers, but in a wider perspective also for professional communities and the general public as a whole.

There were a lot of interesting presentations, workshops, posters and so on, and with a conference of this kind you can keep abreast of the main trends and the state of the art in research information management.

In this way, there were several presentations by universities and other centres explaining their experiences with their CRISs and their evolution and, related to this, interoperability through CERIF.

There was much talk about OpenScience and the need for policies, metrics and incentives to promote the broad openness of research publications (openAccess) and data (openData), and not only this, but also open peer review and other elements of OpenScience. Many interesting initiatives in this area were also presented.

There was also discussion of the increasing relevance of the CRIS in institutions as a central repository of scientific information, one that must be integrated with the institutional openAccess repository.

There were also many presentations about case studies of metrics, analytics and visualisation of research information. In this line, I presented a case study of the implementation of an analytics project: “Analyzing a CRIS: From data to insight in university research”. You can access the presentation here.


There were some examples of semantic approaches that provide information discovery by creating a conceptual model of the related entities of the research environment.

Finally, there was talk about OpenAIRE. OpenAIRE is an openAccess infrastructure whose goal is to make as much European-funded research output as possible available to all via the OpenAIRE portal. A guide has been published on incorporating information from CRIS providers into the portal, and all CRIS providers are encouraged to upload their CRIS information into the repository.

It was a great opportunity to talk with a lot of people about different issues and points of view. In the end, though, the main topics are almost the same across the scientific environment, so we are all working more or less in the same way.

Seeing this, I think that collaboration is necessary to advance further in this rapidly changing scenario. Can we go together?









AI: Data are for machines, information for Researchers.

Posted on (updated)


AI: Artificial Intelligence, a concept pioneered by Alan Turing. The concept has existed for some 66 years but, unlike in science fiction films, it has lacked the realistic approach that would have allowed its development and implementation in all these years. But no, I don’t want to talk about robots that act like people, which develop feelings and so on. I think that, finally, the right approach to AI is coming: the approach it should have had from the beginning, but which for various reasons could not be developed, namely using machines to process the huge amount of data flowing through the network. This has not been possible until the advent and popularisation of the Internet.

I remember that a couple of years ago I attended a conference at a Spanish public university where it was debated whether concepts such as artificial intelligence (AI) and bigData, among others, were applicable in the university. Several of the speakers claimed they were not; others half-heartedly suggested that perhaps they were, but in general no one could see how or where to apply them.

Well, only two years later, I think we can say that in universities we can indeed talk about bigData and about many applications of AI. If we focus on one of the pillars of the university, the research area, with the development of models such as open access to publications and data, as well as the internationalisation of research, where large, international and multidisciplinary research groups increasingly proliferate, there is no doubt that there is an explosion of bigData in research data. And, in my view, bigData, i.e. the generation of and access to massive data in real time, leads to the need to apply artificial intelligence techniques for the analysis and processing of the terabytes of data that proliferate on the network.

Thus, and related to my previous post, AI techniques such as machine learning, deep learning, computational linguistics, natural language processing and data mining, among others, are already required.

Our researchers should not have to, nor can they, spend time searching and analysing raw data, i.e. processing thousands of data points to find what they seek. This is why these algorithms and technologies are needed: to facilitate access to information, which is not the same as access to data. In this way, we could say that data are for machines, information for Researchers. I am currently working on these issues, to provide researchers with tools that can really help them in their work and enable them to move faster and obtain the best results.
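As a toy illustration of the data-versus-information idea (this is a minimal sketch, not any real system I use; the abstracts and the stopword list are invented for the example), a few lines of text mining can reduce raw text that only a machine would read in bulk down to the terms a researcher actually cares about:

```python
import re
from collections import Counter

# Hypothetical raw "data": abstracts a machine can scan in bulk.
abstracts = [
    "Open access policies improve access to research data and publications.",
    "Research data interoperability requires common metadata formats.",
    "Metadata quality drives discovery of research publications.",
]

# A tiny stopword list, for illustration only.
stopwords = {"and", "to", "of", "a", "the", "requires", "improve", "drives"}

# The machine's job: tokenise and filter every abstract.
words = []
for text in abstracts:
    words += [w for w in re.findall(r"[a-z]+", text.lower())
              if w not in stopwords]

# The researcher's view: the most frequent terms across the corpus,
# i.e. information distilled from the data.
for term, freq in Counter(words).most_common(3):
    print(term, freq)
```

Real pipelines would of course use proper NLP tooling rather than a hand-made stopword list, but the division of labour is the same: the machine churns through the data, the researcher sees the distilled information.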

Enjoy your summer!