I have seen an interesting interview with Ed Simmons, president of euroCRIS, whom I have the pleasure of knowing personally through my membership in the euroCRIS organisation.
As Ed himself explains, euroCRIS is an organisation mainly dedicated to maintaining and evolving CERIF (Common European Research Information Format), a standard format for exchanging scientific information, and to organising international meetings that promote collaboration between different research stakeholders.
The CERIF model can be used to build an institution's CRIS (Current Research Information System): a single repository that stores all kinds of research information about the institution, its researchers, its groups, and so on.
In the interview, Ed explains some aspects of these CRISs and why they are becoming more and more relevant. A CRIS provides data about how research is done and how research groups perform within the organisation, so it is a tool for managers to see whether research goals have been reached and who produces more or less.
In short, he explains who a CRIS is for:
- For the managers of the information (libraries, research areas, etc.), as a database.
- For researchers, because it registers all kinds of aspects of their research (projects, funders, timings, money, results, publications, datasets, etc.), letting them generate their CVs in many formats, show their scientific production on a web page or at a conference, and so on.
- For the institutional evaluation of research, showing who performs well or less well at national level.
- To show the research to the world on the internet.
A CRIS can also be a primary source for other systems, for example an Open Access repository or an Open Data repository. In this way, the CRIS takes an increasingly central position in the institution, and its information can be exported to all kinds of systems: a single point for registering the information, which then serves all kinds of information requesters.
In short: “only one system for all”.
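To illustrate the "one system for all" idea, here is a minimal sketch of a single canonical record exported to two different consumers. The field names and the record itself are invented for the example; this is not the CERIF schema or any real CRIS API, only a simplified Dublin Core-style export and a CV line.

```python
# Illustrative sketch: one canonical CRIS record exported to different
# consumers. Field names and data are invented, not real CERIF.

# A single publication record as a CRIS might store it.
record = {
    "title": "Open Research Data in Practice",
    "authors": ["A. Author", "B. Author"],
    "year": 2016,
    "doi": "10.1000/example",
}

def to_dublin_core(rec):
    """Export for an open-access repository (simplified Dublin Core)."""
    return {
        "dc:title": rec["title"],
        "dc:creator": rec["authors"],
        "dc:date": str(rec["year"]),
        "dc:identifier": rec["doi"],
    }

def to_cv_line(rec):
    """Export the same record as a plain citation line for a CV."""
    authors = ", ".join(rec["authors"])
    return f"{authors} ({rec['year']}). {rec['title']}. doi:{rec['doi']}"
```

The point is that the record is entered once and every downstream system (repository, CV generator, web portal) gets its own view of the same data.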
Ed asserts that if an institution has a CRIS, the researchers use it, and I agree with him. I work with a CRIS (SIGMA Research) that is running in several Spanish universities, and I can confirm that it has the central position to which Ed refers. Day by day, these CRIS systems are more and more used.
Finally, I think that CRISs also support the concept of FAIR data, i.e. information that is Findable, Accessible, Interoperable and Reusable.
So I also agree with Ed that the CERIF model can help to improve especially the interoperability aspect of FAIR data.
You can access the Ed Simmons interview here.
I have been involved in a very interesting project with some of the universities of the SIGMA AIE consortium, where I work. It is not new, and it may not seem like a big project, but the relevance of its results is clear.
It is about researchers' bibliometric information. As we know, this is very important and relevant information for researchers and their scientific activity. It is key, for example, for obtaining funding and for their accreditation and evaluation as a researcher or scholar.
This information is extracted automatically and stored in the CRIS (the institution's Current Research Information System), so nobody has to enter it manually. It is sometimes validated by the library staff.
Here are two examples:
You can publicly access the ‘Scientific Production Portal’ of the Universitat de Vic-Central de Catalunya (UVIC-UCC) or the ‘Scientific output’ of the Universitat Pompeu Fabra (UPF) and select a researcher of the institution. There, we can see their main bibliometric indicators, such as the h-index from WoS, Scopus and Google Scholar and the Google Scholar citation count, besides the researcher's photo.
Then we can open one of the publications and see the main bibliometric indicators related to it, such as citations in Scopus, WoS and Google Scholar, Scimago, altmetrics and so on.
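As a reminder of how one of these indicators works: the h-index mentioned above is the largest h such that the researcher has h publications with at least h citations each. A minimal sketch in Python (the citation counts are made up for illustration):

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    # Sort citation counts in decreasing order, then find the last
    # 1-based position i where the i-th paper still has >= i citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Example with invented citation counts:
print(h_index([25, 8, 5, 3, 3]))  # → 3
```

A portal like the ones above would run this over the citation counts fetched from each source (WoS, Scopus, Google Scholar), which is why the three h-indices can differ for the same researcher.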
Another example, this time from the UPF:
This information, in addition to making the researcher's profile public, is very valuable for the institution, which can manage its positioning and analyse its scientific production through these standard indicators. The possibilities are great.
I really enjoyed being part of this project, and it's only the beginning…
AI: Artificial Intelligence, a concept pioneered by Alan Turing. It has existed for some 66 years but, unlike in science fiction films, it has lacked a realistic approach that would allow its development and implementation during all those years. But no, I don't want to talk about robots that act like people, which develop feelings and so on. I think that, finally, the right approach to AI is coming: the approach it should have had from the beginning, but which for various reasons could not be developed. I mean using machines to process the huge amount of data flowing through the network, something that has not been possible until the advent and popularisation of the Internet.
I remember that a couple of years ago I attended a conference at a Spanish public university where it was debated whether concepts such as artificial intelligence (AI) and big data, among others, were applicable in the university. Several of the speakers claimed they were not; others timidly suggested that perhaps they were, but in general nobody saw how or where to apply them.
Well, only two years later, I think we can say that in universities we can indeed talk about big data and many applications of AI. If we focus on one of the pillars of the university, the research area, with the development of models such as open access to publications and data, as well as the internationalisation of research, where large, international and multidisciplinary research groups increasingly proliferate, there is no doubt that there is an explosion of big data in research. And, in my view, big data, i.e. the generation of and access to vast amounts of data in real time, leads to the need to apply artificial intelligence techniques to analyse and process the terabytes of data that proliferate on the network.
Thus, and related to my previous post, AI techniques are already required: machine learning, deep learning, computational linguistics, natural language processing and data mining, among others.
Our researchers should not have to, and cannot, spend time searching and analysing raw data, i.e. processing thousands of records to find what they seek. These algorithms and technologies need to facilitate access to information, which is not the same as access to data. In this sense, we could say that data is for machines, information is for researchers. I am currently working on these issues, providing researchers with tools that can really help them in their work and enable them to move faster and obtain the best results.
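As a toy illustration of the difference between raw data and information, here is a minimal keyword-extraction sketch: plain term frequency with a tiny stop-word list. Real text-mining pipelines are far more sophisticated, and the abstract text here is invented, but it shows the idea of handing a researcher a few keywords instead of a pile of word counts.

```python
from collections import Counter
import re

# A tiny, illustrative stop-word list (real ones are much longer).
STOP_WORDS = {"the", "of", "and", "in", "to", "a", "is", "for", "make"}

def top_keywords(text, n=3):
    """Return the n most frequent non-stop-word terms in a text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(n)]

# The researcher sees three keywords, not the raw counts (invented text):
abstract = ("Open data and open access change research: "
            "open data repositories make research data reusable.")
print(top_keywords(abstract))
```

The machine works on the data (tokens and frequencies); the researcher receives the information (a ranked list of topics).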
Enjoy your summer!