Year: 2019 | Volume: 5 | Issue: 2 | Page: 97-99
Indicators of scientific impact: The need for a tectonic shift
Sujiv Akkilagunta1, Pradeep Deshmukh1, Vikas Bhatia2
1 Department of Community and Family Medicine, AIIMS, Nagpur, Maharashtra, India
2 Department of Community and Family Medicine, AIIMS Bhubaneswar, Odisha, India
Correspondence Address: Department of Community Medicine, All India Institute of Medical Sciences, Plot No. 2, Sector-20, MIHAN, Nagpur - 441 108, Maharashtra
The proliferation of research witnessed in the past couple of decades has increased the need to distinguish the impact of publications. As it can take a long time for a scientific finding to be translated into action, surrogate measures are sought, and citation of a published work is often considered the ideal surrogate. The growth of social media and the use of published work in policy documents and news media have prompted calls for alternative measures. These measures of scientific impact are formulated into impact indicators. The journal impact factor is the most widely known indicator of scientific impact. Although originally intended for rating journals, it is often used to rate articles and authors. The impact factor is an average and is therefore influenced by outliers; it is also affected by the field of research, the citation index used, and deliberate self-citation. The h-index, an author-level metric, should be interpreted alongside complementary indices such as the e-index and q-index. The Altmetric Attention Score, a novel indicator that draws on alternative measures of impact such as social media attention, discussions, and policy documents, is promising.
How to cite this article:
Akkilagunta S, Deshmukh P, Bhatia V. Indicators of scientific impact: The need for a tectonic shift. Indian J Community Fam Med 2019;5:97-99
How to cite this URL:
Akkilagunta S, Deshmukh P, Bhatia V. Indicators of scientific impact: The need for a tectonic shift. Indian J Community Fam Med [serial online] 2019 [cited 2020 Jul 5];5:97-99
Available from: http://www.ijcfm.org/text.asp?2019/5/2/97/273471
Scientific journals and scholarly articles have proliferated in the present age, with one estimate putting the growth in scholarly articles at 3% every year. This growth is driven in part by an increase in poor-quality predatory journals and poor publishing ethics. Owing to this, distinguishing oneself in the scholarly world has become crucial for securing research grants and promotions. The field of scientometrics, which evaluates the impact of scientific research, has been growing proportionately to address this need. At this juncture, two questions are pertinent: What are the measures of scientific impact? And which indicator, or composite indicator, satisfactorily addresses the need to identify impactful scholarly research?
Measures of Scientific Impact
Quantifying the impact of research has been a pursuit spanning decades, dating back to Garfield's conception of the impact factor. The scientific impact of research is best measured by its application to benefit humanity, although the benefit may vary in intensity and reach. However, not all published research shows a direct, tangible benefit; it may take decades for scientific research to be applied in action. For this reason, surrogate indicators of research outcomes have been sought. Of these, citation is the most widely used measure and is included in most scientometric indicators. Citation depends on access to the article, the type of publication, the field of research, the age of the article, and the topic itself. For example, exploratory research might garner fewer citations than a clinical trial, and citation patterns differ between new and established topics. Review articles gather far more citations than original research. Deliberate self-citation can also be used to manipulate citation counts. For these reasons, efforts have been made to include research outcomes apart from citation.
A few databases have adopted alternative measures of scientific impact. For example, page views, downloads, and social media sharing are used in the Public Library of Science (PLOS) article-level metrics. These measures are still limited by access and are hence more relevant to open-access journals; it is even debatable whether they reflect impact or mere feedback. Other alternative measures include media coverage, blogging, and use in question-and-answer threads. The most important of these, however, is the use of research as a reference in policy documents. The use of research in policy comes closest to scientific impact in terms of benefit to humanity. Mere citation in a policy document may not imply a direct contribution to policy, but it is a beginning in the right direction.
Journal Impact Indicators
The second question relates to finding the best available composite indicator of scientific impact at all three levels: journal, article, and author. The impact factor is the most widely known journal-level metric; it covers journals in the Web of Science database and is published every year in the Journal Citation Reports. The impact factor measures the average number of citations received in a year by the articles a journal published in the previous 2 years. Since the measure is an average, it is affected by the skewness of the citation distribution: a study of biochemical journals showed that the top-cited 15% of articles contributed 50% of the citations.
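As a toy illustration of this averaging and the skew it hides, consider the following sketch (the journal and all citation counts here are hypothetical, invented only to show the arithmetic):

```python
import statistics

# Hypothetical 2-year impact factor: citations received in 2019 to the
# journal's 2017-2018 items, divided by the citable items of those years.
citations_in_2019 = 250
citable_items_2017_18 = 100
impact_factor = citations_in_2019 / citable_items_2017_18  # 2.5

# Why the average misleads: a few highly cited papers dominate.
# Hypothetical per-article counts: 3 well-cited articles plus 47 rarely cited ones.
per_article_citations = [60, 40, 30] + [2] * 47
mean_citations = sum(per_article_citations) / len(per_article_citations)  # 4.48
median_citations = statistics.median(per_article_citations)              # 2
```

In this made-up distribution, the top 3 of 50 articles supply more than half of all citations, so the mean (4.48) is more than double what the typical article receives (median 2) — the skewness problem described above.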
The original purpose of the impact factor was to help librarians choose journals for subscription. However, it has become the basis for authors to choose a journal for submission, a judge of a researcher's capability, and a basis for funding research projects. It has been used to distinguish authors and articles, yet an article published in a high-impact-factor journal may itself not necessarily be cited. Other journal-level metrics include the SCImago Journal Rank, Impact per Publication, and Source Normalized Impact per Paper, which use the Scopus database; the Article Influence Score and Eigenfactor Score use the Web of Science database. These indicators differ from each other in the citation time window, subject-field limits, the range of citable items, and the adjustment for self-citation.
Author-level metrics are used to distinguish between researchers based on the impact of their publications. The h-index, introduced by the physicist Hirsch, has been widely reported. To calculate it, the papers of an author are arranged in decreasing order of the number of citations received; the h-index is then the largest rank h such that the paper at that rank has h or more citations. For example, an author with an h-index of 23 has 23 papers with 23 or more citations each. Since its conception, refinements have been proposed; the individual h-index, for instance, reduces the contribution of coauthors. By its nature, the h-index ignores papers with few citations and also discounts those with very high citation counts, so two researchers with hugely different citation counts may have an identical h-index. The complementary e-index accounts for the citations not captured in the estimation of the h-index, and the q-index evaluates deliberate self-citation by researchers. Interpreting the h-index therefore requires the use of complementary indices. Another commonly used indicator is the i10-index, i.e., the number of articles with at least 10 citations.
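The ranking procedure described above can be sketched in Python (a minimal illustration; the citation counts are made up, and the e-index follows Zhang's definition of the excess citations in the h-core):

```python
import math

def h_index(citations):
    """Largest rank h such that the h-th most-cited paper has >= h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

def e_index(citations):
    """Zhang's e-index: square root of the h-core citations beyond the h^2 counted by h."""
    ranked = sorted(citations, reverse=True)
    h = h_index(citations)
    return math.sqrt(sum(ranked[:h]) - h * h)

papers = [10, 8, 5, 4, 3]   # hypothetical citation counts
print(h_index(papers))      # 4: the 4th-ranked paper has 4 citations, the 5th only 3
print(i10_index(papers))    # 1: only one paper has at least 10 citations
```

The complementarity argued for above is visible here: the hypothetical records [10, 8, 5, 4, 3] and [100, 50, 20, 4, 3] both give h = 4, but their e-indices (about 3.3 versus 12.6) separate the two researchers.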
Experts proposed the g-index to overcome the poor weightage given by the h-index to total citations and to papers with a large number of citations. The papers of an author are similarly ranked in descending order of their citations; the g-index is the highest rank g such that the top g papers cumulatively have at least g² citations. A researcher with a g-index of 10 thus has 10 papers that have together received at least 100 (10²) citations. However, neither the h-index nor the g-index adjusts for the age of the article. The contemporary h-index tries to address this by giving greater weightage to recent well-cited articles.
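The cumulative rule behind the g-index can be sketched as follows (a minimal illustration with made-up counts; this simple version caps g at the number of papers, whereas some extended formulations allow it to exceed that):

```python
def g_index(citations):
    """Highest rank g such that the top g papers together have >= g^2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

papers = [10, 8, 5, 4, 3]   # hypothetical counts; the h-index here would be 4
print(g_index(papers))      # 5: the top 5 papers have 30 >= 25 citations in total
```

Because g weighs the total citations of the top papers, a single heavily cited paper raises g but barely moves h: the hypothetical record [100, 1, 1] gives h = 1 but g = 3 (102 ≥ 9 citations).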
Other author-level metrics are the SIGAPS score and the ResearchGate (RG) score. The RG score is a composite score that includes citation indirectly, in the form of the journal impact factor; its other components are discussions (questions and answers) and followers. However, it has been criticized for its lack of transparency and for including the journal impact factor in its assessment.
The commonly used article-level metrics are the mean normalized citation score, the Relative Citation Ratio, and the Altmetric Attention Score. While the first two use citations, the Altmetric Attention Score is more comprehensive, combining traditional measures such as citation with contemporary indicators: media attention, discussions, blogging, social media, and citation in policy documents. Altmetrics, however, has drawbacks such as heterogeneity, poor data quality, and lack of reliability. The accuracy of data on social media attention, blogging, etc., is questionable, and the error in estimation is multiplied by the inclusion of a wide variety of indicators. The methodology is also revised often, which undermines reliability. Although altmetrics is appealing for its contemporary nature, it is still in a nascent phase; further research is required to overcome these limitations and enhance its use in future.
Citation still forms the base of most research impact indicators. In this era of digital publication, the influence of open access and social media demands the inclusion of other research-outcome indicators. Altmetrics and similar indices, while promising, need further research.
To conclude, the growth of scientometrics has produced a long list of impact indicators. There is a need for a unified, comprehensive indicator at each level to allow simplicity and transparency in the assessment of scientific impact. Despite its drawbacks, the journal impact factor is still used by funding agencies for decision-making; in India, the impact factor and h-index are reported to funding agencies such as the Indian Council of Medical Research as measures of research output. As discussed above, no single indicator allows a comprehensive, unbiased assessment. Ensuring awareness of, and access to, the key scientometric indices is necessary for informed decision-making.
Financial support and sponsorship
Conflicts of interest
There are no conflicts of interest.
1. Chi Y. Global trends in medical journal publishing. J Korean Med Sci 2013;28:1120-1.
2. Garfield E. Citation indexes for science: A new dimension in documentation through association of ideas. Science 1955;122:108-11.
3. Track Impact with ALMs. PLOS. Available from: https://www.plos.org/article.level.metrics. [Last accessed on 2019 Feb 16].
4. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ 1997;314:498-502.
5. Roldan-Valadez E, Salazar-Ruiz SY, Ibarra-Contreras R, Rios C. Current concepts on bibliometrics: A brief review about impact factor, eigenfactor score, CiteScore, SCImago journal rank, source-normalised impact per paper, H-index, and alternative metrics. Ir J Med Sci 2019;188:939-51.
6. Abbas AM. Bounds and inequalities relating h-index, g-index, e-index and generalized impact factor: An improvement over existing models. PLoS One 2012;7:e33699.
7. Batista PD, Campiteli MG, Kinouchi O. Is it possible to compare researchers with different scientific interests? Scientometrics 2006;68:179-89.
8. Zhang CT. The e-index, complementing the h-index for excess citations. PLoS One 2009;4:e5429.
9. Bartneck C, Kokkelmans S. Detecting h-index manipulation through self-citation analysis. Scientometrics 2011;87:85-98.
10. Egghe L. Theory and practise of the g-index. Scientometrics 2006;69:131-52.
11. Sidiropoulos A, Katsaros D, Manolopoulos Y. Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics 2007;72:253-80.
12. RG Score. Available from: https://explore.researchgate.net/display/support/RG+Score. [Last accessed on 2019 Feb 18].
13. Kraker P, Lex E. A critical look at the ResearchGate score as a measure of scientific reputation. In: Proceedings of the Quantifying and Analysing Scholarly Communication on the Web Workshop (ASCW'15), Web Science Conference; 2015.
14. Baheti AD, Bhargava P. Altmetrics: A measure of social attention toward scientific research. Curr Probl Diagn Radiol 2017;46:391-2.
15. Patthi B, Prasad M, Gupta R, Singla A, Kumar JK, Dhama K, et al. Altmetrics – A collated adjunct beyond citations for scholarly impact: A systematic review. J Clin Diagn Res 2017;11:ZE16-20.
16. Madhan M, Gunasekaran S, Arunachalam S. Evaluation of research in India – Are we doing it right? Indian J Med Ethics 2018;3:221-9.