Do not misunderstand me. Metrics like the h-index are useful, but they are only valid for comparisons between similar researchers and similar kinds of research. Research is not one-dimensional: the process is complex and no two projects are identical, so we must use metrics carefully.
Metrics should be used both quantitatively and qualitatively, and the more perspectives we combine, the better.
To understand this assertion, we can explore metrics further. The first question that comes to mind is:
How should we use metrics?
In the article “Guide to research metrics” by Taylor & Francis, they explain it well:
First of all, ask what aspect of the research you want to evaluate and what you need to understand. Can it be measured, and if so, how? Then match the right metrics to answer your question.
Combine quantitative data (research metrics) with qualitative input (opinions). Research metrics are a useful tool, but enhance them by gathering expert opinions: ask colleagues and peers for their thoughts too.
Finally, look for a more rounded picture. Each metric draws its data from different sources and uses a different calculation. Use at least a couple of metrics to reduce bias and get a more rounded view: look at a journal’s Impact Factor, for instance, but also at the Altmetric details for its most-read articles.
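To make the Impact Factor less of a black box, it helps to remember that it is just a simple ratio: citations received in a given year to items the journal published in the two previous years, divided by the number of citable items from those two years. A minimal sketch of that calculation (the journal and its numbers below are hypothetical, used only for illustration):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Journal Impact Factor for year Y: citations in Y to items
    published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 480 citations in 2023 to its 2021-2022 articles,
# which together comprised 200 citable items.
jif = impact_factor(480, 200)
print(jif)  # 2.4
```

Seeing the formula makes the bias obvious: a couple of highly cited outliers can lift the average for every article in the journal, which is exactly why the article suggests pairing the JIF with other views.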
In this way, metrics (h-index, citations, SJR, JIF…), altmetrics (social media…), author profiles, opinions, and so on are all valid.
Then we can ask:
Which metrics are the best for me? 
Well, it depends; there are different aspects and levels to take into account. First of all, there are metrics at the author level, the article level, and the journal level.
So each kind of metric is typically useful for a different kind of user: researchers, journal editors, and librarians.
Here, you can see an interesting overview of the most common research metrics, what they are, and how they’re calculated: downloadable guide.
In more detail, we can ask:
How can metrics help me? 
For researchers: metrics can help you select which journal to publish in and assess the ongoing impact of an individual piece of research (including your own).
- Understanding the Impact Factor | 1 min read
- Measuring impact with article metrics: understanding the Altmetric Attention Score | 3 min read
- Tracking the commercialization of research: patent citations | 3 min read
- VIEWPOINT: Can you publish and still perish? A question of impact, Dr John Harrison, Associate Editor of Regional Studies | 3 min read
For Journal editors: Research metrics can help you assess your journal’s standing in the community, raise your journal’s profile, and support you in attracting high-quality submissions.
- Journal citation metrics: an introduction to metrics and their use | 5 min read
- Author metrics and the h-index | 5 min read
- New Scopus metric: CiteScore | 4 min read
- WATCH: Research metrics and recognition panel discussion from the Taylor & Francis Scholarly Summit featuring Clarivate Analytics, Altmetric and Colwiz – Wizdom.ai | 33 min watch
For librarians: Research metrics can help you to select journals for your institution, and analyze their reach and impact. They can also help you assess the impact of research published by those in your institution.
- How research becomes impact: librarians helping faculty use scholarly metrics to select journals, College and Undergraduate Libraries, Marc Vinyard & Jaimie Beth Colvin, Pepperdine University | 30 min read
- Bibliometrics: Understanding author-, article- and journal-level metrics, Serials Review, Margaret Mering, University of Nebraska-Lincoln | 25 min read
- Explore our research metrics guide
Finally, we can ask:
Where can I find metrics? 
In short, we should not rely on a single quantitative metric, for example the h-index, because of the biases it produces; instead, we should mix qualitative and quantitative metrics to carry out evaluations and benchmarking and to make decisions.
I have read an interesting paper about metrics. It examines four familiar types of analysis that, when misused, can obscure real research performance, and it offers four alternative visualizations that unpack the richer information beneath each headline indicator:
- Researchers: a beam plot, not an h-index. The h-index is a widely quoted but poorly understood way of characterizing a researcher’s publication and citation profile, whereas the beam plot can be used for a fair and meaningful evaluation.
- Journals: the whole Journal Citation Reports (JCR), not just the Journal Impact Factor (JIF). The JIF has been irresponsibly applied to wider management of research, whereas the new JCR offers revised journal profiles with a richer data context.
- Institutes: an Impact Profile, not an isolated average citation impact. Category-normalized citation impacts have little statistical power and can be deceptive, whereas Impact Profiles show the real spread of citations.
- Universities: a Research Footprint, not a university ranking. A global university ranking may be fun, but it suppresses more information than most analyses and hides the diversity and complexity of activity on any one campus, whereas a Research Footprint is more informative because it can unpack performance by discipline or data type.
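The h-index criticized above has a simple definition: a researcher has index h if h of their papers have each received at least h citations. A minimal sketch of that calculation, using hypothetical citation counts, shows how much information it discards:

```python
def h_index(citations):
    """Largest h such that the researcher has h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # this paper still has >= rank citations
        else:
            break
    return h

# Hypothetical researcher with six papers:
print(h_index([10, 8, 5, 4, 3, 0]))  # 4
```

Note that [10, 8, 5, 4, 3, 0] and [100, 90, 80, 4, 3, 0] both yield an h-index of 4, even though the second profile is far more cited. That collapse of the full distribution into one number is precisely what the beam plot is designed to avoid.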
With the advance of Open Science, this landscape of metrics is evolving rapidly, and we now have two paradigms:
Old paradigm –> From idea to measurable citation count can take 5-10 years.
New paradigm –> Metrics are available immediately.
So we are seeing a rapid transformation of the metrics ecosystem, and this is only the beginning, in my opinion.
You can download the paper here: https://clarivate.com/g/profiles-not-metrics/