Altmetrics as an Answer to the Need for Democratization of Research and Its Evaluation

Author:

Cinzia Daraio

Department of Computer, Control and Management Engineering “Antonio Ruberti” (DIAG), Sapienza University of Rome, Via Ariosto 25, I-00185 Rome, Italy

Abstract

In the evaluation of research, the same unequal structure present in the production of research is reproduced. Just as only a few researchers are very productive (in terms of papers and citations received), only a few researchers are involved in the research evaluation process (as editorial board members of journals or as reviewers). Producing many papers, receiving many citations, and being involved in the evaluation of research papers all require belonging to the minority of giants with high productivity and greater scientific success. As editorial board members and reviewers, we often find the same minority of giants. In this paper, we apply an economic approach to interpret recent trends in research evaluation and derive a new interpretation of Altmetrics as a response to the need for democratization of research and its evaluation. In this context, the majority of pygmies can participate in evaluation through Altmetrics, whose use is more democratic, that is, much wider and open to all.

How to Cite: Daraio, C., 2021. Altmetrics as an Answer to the Need for Democratization of Research and Its Evaluation. Journal of Altmetrics, 4(1), p.5. DOI: http://doi.org/10.29024/joa.43
Submitted on 30 May 2021. Accepted on 30 May 2021. Published on 22 Jun 2021.

1. Introduction

We live in a society of evaluation (Dahler-Larsen 2011; Gläser & Whitley 2007). Researchers are evaluated on everything and in every respect. We observe a massive use of metrics to evaluate individuals (Schubert & Schubert 2019; Wildgaard 2019), even when these metrics are not appropriate (Ruocco et al. 2017). The mantra of ‘impact or perish’ (Biagioli & Lippman 2020) has recently been added to the widespread culture of ‘publish or perish’ (Fanelli 2020). All this leads to what Muller (2018) emblematically called ‘the tyranny of metrics’.

The assessment of research activity involves several steps, including the setting of criteria and the formation of judgments. Assessment is further complicated by the quantification of data and by data processing for use in different contexts and for different purposes (Carson 2020; Daraio & Glänzel 2016), including process monitoring, input-output monitoring, and ex-ante and ex-post evaluation. In this case, there is a need to specify standards and rules for metadata definition and quantification (for additional details and references, see Daraio & Glänzel 2016). The need for ‘a clear and unambiguous terminology and specific standards’ (Glänzel 1996: 176) thus remains relevant and timely today.

De Solla Price (1963: 59) highlighted the intrinsic inequality of scientific productivity, a sort of undemocratic nature inherent in scientific production, stating,

About this process there is the same sort of essential, built-in undemocracy that gives us a nation of cities rather than a country steadily approximating a state of uniform population density. Scientists tend to congregate in fields, in institutions, in countries, and in the use of certain journals. They do not spread out uniformly, however desirable that may or may not be. In particular, the growth is such as to keep relatively constant the balance between the few giants and the mass of pygmies. The number of giants grows so much more slowly than the entire population that there must be more and more pygmies per giant, deploring their own lack of stature and wondering why it is that neither man nor nature pushes us toward egalitarian uniformity.

Altmetrics, or alternative metrics, are indicators (alternative with respect to the traditional count of citations received) that measure the impact of scholarly research on science and society using social media platforms (Priem et al. 2010). Altmetrics include but are not limited to downloads; they also cover readership, diffusion, and reuse indicators that can be tracked via blogs, social media, peer production systems, or collaborative annotation tools, such as social bookmarking and reference management services. Proponents of Altmetrics (Priem et al. 2010, 2012) view traditional metrics, peer review, citation counting, and journal impact factors as no longer adequate means of ascertaining the value of academic work or of filtering the most significant and relevant material from the huge volume of academic literature produced, because the amount of material has increased and academic communication has moved online (Priem et al. 2010). Altmetrics can measure impact at the level of the individual article, as evidenced by activity on social media, and can also capture the impact of other meaningful research outputs. Traditional metrics have generally dealt with journals or articles and have not covered outputs such as blog posts, presentations, datasets, and other important forms of academic communication. Altmetrics thus offer the ability to uncover impact insights that were previously impossible to obtain, and they accumulate faster than traditional metrics based on journal citation counts and impact factors.
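As an illustration of how such article-level indicators can be collected in practice, the following minimal sketch queries the free, rate-limited Altmetric.com endpoint for a single DOI; the endpoint URL and the response field names are assumptions to be checked against the provider’s documentation, and the example is not part of the paper’s own method.

```python
# Hedged sketch: fetch article-level Altmetrics for one DOI.
# Assumes the public endpoint https://api.altmetric.com/v1/doi/<doi>
# (free, rate-limited); the field names printed below are assumed and may differ.
import requests

def fetch_altmetrics(doi: str) -> dict:
    """Return the raw Altmetric.com record for a DOI, or {} if not tracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # DOI not tracked by the provider
        return {}
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Example DOI taken from the reference list (Thelwall et al. 2013).
    record = fetch_altmetrics("10.1371/journal.pone.0064841")
    for field in ("cited_by_tweeters_count", "cited_by_feeds_count",
                  "cited_by_msm_count", "score"):
        print(field, record.get(field, "n/a"))
```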

Altmetrics were introduced at a time of great turmoil in research and its evaluation. Nielsen (2012) describes the open science revolution that is happening in the era of networked science. Floridi (2014) shows how the developments in information and communication technologies have brought new opportunities as well as new challenges for human development and have led to the ‘fourth revolution’, according to which ‘we are now slowly accepting the idea that we might be informational organisms among many agents…, inforgs not so dramatically different from clever, engineered artefacts, but sharing with them a global environment that is ultimately made of information, the infosphere’.

In Daraio (2019), we described this state as a period equivalent to the Middle Ages, that is, a historical epoch of transition from the ancient age of evaluation to the modern one. One of the key elements of this transition is the shift of the focus of evaluation from research production to its impact (Hill 2016).

Within this context, some relevant questions emerge: Why do scholars spend their time on Altmetrics-related activities? What is the meaning of Altmetrics? Although Altmetrics were introduced more than 10 years ago, little is still known about their meaning, despite the several interpretations proposed in the literature.

In this paper, we contribute to the existing literature, proposing a new interpretation of Altmetrics’ meaning. By applying the economics of democracy (Acemoglu & Robinson 2006), we show that Altmetrics may be conceived of as an answer to the need for democratization of research and its evaluation.

The next section describes the main aim and contribution of the paper. The following section outlines existing related literature. Section 4 introduces the economics of democracy and applies it to the context of evaluation of research. Section 5 illustrates results. The last section discusses and concludes the paper.

2. Main Aim and Contribution

The introduction of Altmetrics has resulted in intense research into their nature and into the potential and limitations of these new metrics. Many have interpreted Altmetrics as new impact measures that must complement traditional bibliometric measures based on citations (Barbaro et al. 2014; Barnes 2015). Some have interpreted Altmetrics as signs of the computerization of research (Moed 2016). In this work, we contribute to the existing research on Altmetrics by proposing a new meaning connected to the need to democratize the research evaluation process, which is in turn linked to the need to democratize the production of research.

Applying the economics of democracy by Acemoglu and Robinson (2006), we interpret the changes taking place in evaluation and in particular the presence and diffusion of Altmetrics as a response to the inequalities inherent in the production and evaluation of research. Altmetrics then can be considered as an answer to the need for democratization of research and its evaluation.

3. Related Literature

Inequality in scientific production is a well-known stylized fact (Allison 1980; Allison et al. 1982; Allison & Stewart 1974; de Solla Price 1963) that is linked to the skewness of bibliometric indicators (Albarrán et al. 2011; Ruiz-Castillo & Costas 2014; Seglen 1992). Recently, the issue has been brought back to attention by Lok (2016); see also Rousseau and Rousseau (2017).
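To make this stylized fact concrete, the short simulation below computes a sample Gini coefficient for per-author paper counts drawn from a heavy-tailed, Lotka-like (inverse-square) distribution; the distributional choice, exponent, and sample size are illustrative assumptions, not estimates taken from the cited studies.

```python
# Illustrative sketch of the inequality stylized fact: with a heavy-tailed,
# Lotka-like distribution of per-author paper counts, a small share of
# 'giants' accounts for most of the output. Parameters are arbitrary.
import numpy as np

def gini(x: np.ndarray) -> float:
    """Sample Gini coefficient (0 = perfect equality, 1 = maximal inequality)."""
    x = np.sort(x.astype(float))
    n = x.size
    lorenz = np.cumsum(x) / x.sum()          # cumulative output shares
    return (n + 1 - 2 * lorenz.sum()) / n

rng = np.random.default_rng(42)
papers = rng.zipf(a=2.0, size=50_000)        # heavy-tailed per-author paper counts
top10_share = np.sort(papers)[-5_000:].sum() / papers.sum()

print(f"Gini of paper counts: {gini(papers):.2f}")
print(f"Share of all papers produced by the top 10% of authors: {top10_share:.0%}")
```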

Although it has long been known that scientific productivity is highly asymmetrical, with a few researchers producing many articles and receiving many citations while the vast majority produce much less, the evaluation of researchers based on the number of publications and citations received is increasingly used. The same unequal and undemocratic structure also characterizes the evaluation of research, which is entrusted to a minority often composed of the same few most productive and most cited researchers, who act as editors, editorial board members, and reviewers.

In this context, Altmetrics were born.

Since their origin, Altmetrics (Priem et al. 2010, 2012) have been introduced as ‘alternative measures’ relative to traditional bibliometric indicators, aiming to capture other impact dimensions of scholarly activity. They are related to the development of web-based activities. Moed (2017: 33) links Altmetrics to i) an increasing awareness of the multidimensionality of research performance by policy makers, ii) developments in Information and Communication Technology (ICT) and social media technologies, and iii) the open science movement in the scholarly community.

‘Many are beginning to view Altmetrics as a bright new area in the field of metrics with the potential to revolutionize the analysis of the value and impact of scholarly work’, as concluded by Galligan and Dyas-Correia (2013: 61).

Altmetrics are a new class of research impact and attention data that can help researchers understand scientific influence and share it with others for a variety of purposes (Konkiel 2016).

Article views and downloads from online digital libraries and repositories are very well known; the most used Altmetrics, however, are mentions and shares on social networks. Online mentions of scholarly outputs on social networks, blogs, and news sites are also popular Altmetrics (Aung et al. 2017). The role of Altmetrics in research evaluation is analyzed in several books (see e.g. Glänzel et al. 2019; Holmberg 2015; Roemer & Borchardt 2015) and articles (see e.g. Haustein et al. 2016; Rasmussen & Andersen 2013; Regan & Henchion 2019).

Several criticisms have been made of the use of Altmetrics for research evaluation. Some authors have focused on the lack of validation of the metrics and the limitations of data collection (Wouters & Costas 2012), whereas others have argued that Altmetrics are not impact indicators but rather indicators of attention and popularity (Crotty 2014; Gruber 2014). Adie (2013) shows how Altmetrics can be gamed. Others highlight instability, heterogeneity, data quality problems, and dependencies (Haustein 2016). Fraumann (2018) examines the limits of Altmetrics when used for evaluating, promoting, and disseminating research.

Rousseau and Ye (2013) consider Altmetrics a ‘good idea, but with a bad name’, criticizing the name because the existing literature shows that such indicators may be considered complementary rather than alternative to citations (see also Melero 2015).

Glänzel and Gorraiz (2015) clarify the differences between ‘usage metrics’ and Altmetrics, which are often confused in the literature. They explain that usage metrics have been known and widespread much longer than Altmetrics. Indeed, usage metrics are even older than citation metrics, because librarians have tracked usage since the beginning of their profession, ranging from basic user surveys on the use of physical journal issues and monographs, to library loan statistics, to sophisticated analyses of the use of electronic media (e-metrics). The term ‘Altmetrics’ was introduced later than ‘usage metrics’ (Priem et al. 2010), and Altmetrics are meant as an alternative to citation metrics.

In contrast to usage metrics they are based on the repercussion of whatsoever publications on the web, notably in social media, in contrast to usage metrics which, for so far, rely on e-content from publishers and other information providers. The whole concept is still in its infancy, still lacking standardization of what exactly and how this is all measured. Whereas usage metrics target downloads and views, which are the most usual proxies for usage at present, even if they rather measure the intention to use something than their actual usage (Gorraiz et al. 2014), Altmetrics comprise of an abundance of very heterogeneous indicators from mentions and captures, to links, bookmarks, storage, and conversations (Glänzel and Gorraiz 2015: 2162).

With the spread of social media, new online tools that allow for diffusing, discussing, and organizing scholarship emerged. The activities performed on social media platforms are heterogeneous and include social recommending, rating, and reviewing together with social networking, social bookmarking, social data sharing, video, blogging, microblogging, and so on. In parallel, new research indicators to measure these activities were proposed.

The introduction of these new online platforms may allow for broader discussion outside the scientific community and thus could allow for a broader conversation about research. Nevertheless, the presence of the platforms alone does not guarantee a higher impact. Sugimoto and colleagues (2017) offer a comprehensive review of the literature on the use of social media in academia and in scholarly communication and on the metrics proposed from these uses. In concluding their survey, Sugimoto and colleagues (2017: 2052) state, ‘Time will tell whether social media and altmetrics are an epiphenomenon of the research landscape, or if they become central to scholars’ research dissemination and evaluation practices’.

Although there is intense research on these metrics (Costas et al. 2016; Glänzel & Gorraiz 2015; Haustein, Bowman & Costas 2016; Ràfols, Robinson-García & van Leeuwen 2017; Thelwall et al. 2013), more than 10 years after their introduction, we do not yet have a clear understanding of what they actually measure and, in particular, of why scholars decide to commit to Altmetrics activities (Wouters, Zahedi & Costas 2019).

In this work, we consider 10 relevant existing trends in the current landscape of research assessment, including Altmetrics, and offer an interpretation of Altmetrics by applying the economics of democracy. The 10 broad movements we consider are as follows:

  1. Changes in the production of knowledge
  2. Complexity of the assessment of research
  3. Extension to societal values and value for money
  4. Introduction of performance-based funding and request for new indicators from policy makers
  5. Rankings and international competition
  6. Increase in data availability and open-access repositories
  7. Development of internal research assessment tools
  8. Growth of ‘desktop bibliometrics’
  9. Recent critiques of traditional bibliometric indicators
  10. Introduction and development of Altmetrics.

Table 1 provides a short description of these 10 trends (including Altmetrics) and some references.

Table 1

Current Trends in Research Evaluation.


BUILDING BLOCK: SELECTED REFERENCES AND MAIN CONCEPTS

1. Changes in the production of knowledge: The new production of knowledge described in Gibbons et al. (1994); the change of knowledge and the public in an age of uncertainty described in Nowotny et al. (2001).

2. Complexity of the assessment of research: Need to adopt a systematic view of the complexity of the assessment of research (Daraio 2017); multidimensionality of the assessment of research (Moed & Halevi 2015); problems of data quantification, harmonization, and standardization for different evaluation and assessment purposes (Daraio & Glänzel 2016; Glänzel 1996; Glänzel & Willems 2016).

3. Extension to societal values and value for money: Extension and inclusion of impacts (Bornmann 2013; Hill 2016).

4. Introduction of performance-based funding and request for new indicators from policy makers: Greater attention to efficiency and effectiveness of publicly funded research (Hicks 2009; Jonkers & Zacharewicz 2016); policy makers increasingly demanding in terms of granularity and cross-referencing of indicators (Daraio & Bonaccorsi 2017).

5. Rankings and international competition: Proliferation of rankings in a globalized competitive research space; proposals of multidimensional rankings and critiques of existing rankings’ limitations (Daraio et al. 2015; Daraio & Bonaccorsi 2017; Fauzi et al. 2020; Vernon et al. 2018).

6. Increase in data availability and open-access repositories: Increase in globally stored information (Hilbert & López 2011); extraordinary development of open-access repositories all over the world (Pinfield et al. 2014).

7. Development of internal research assessment tools: More and more institutions implement internal research assessment processes and build research information systems.

8. Growth of ‘desktop bibliometrics’: The diffusion of the ‘Publish or Perish’ culture spawned several easy-to-use bibliometric tools known as ‘desktop bibliometrics’ (Katz & Hicks 1997). In this context, Google Scholar Citations and other commercial products, such as SciVal and InCites, appeared.

9. Recent critiques of traditional bibliometric indicators: Critiques of traditional bibliometric indicators are presented in books (including Biagioli & Lippman 2020; Cronin & Sugimoto 2014, 2015; Gingras 2016), declarations and reports (such as the DORA declaration, the Leiden Manifesto in Hicks et al. 2015, and Wilsdon 2015), and articles (e.g. Benedictus, Miedema & Ferguson 2016; Stephan et al. 2017; Zitt 2015).

10. Introduction and development of Altmetrics: Priem et al. (2010, 2012) and the references cited in Section 3.

4. Method: An Application of the Economics of Democracy

We apply the economic framework proposed by Acemoglu and Robinson (2006) for analyzing the creation and consolidation of democracy to interpret the current situation of research evaluation (outlined in Table 1) in which Altmetrics appeared.

Bunnin and Yu (2004) define democracy as

A form of government, traditionally contrasted to aristocracy (rule by the best), oligarchy (rule by the few), and monarchy (rule by the one). Ideally, democracy requires all citizens to join in making governmental decisions, but such pure democracy, excluding women and slaves, was only practiced for a short period in ancient Athens. The standard democratic form is representative democracy, that is, rule by a group of representatives who are elected for limited periods directly or indirectly by the people. A representative democracy governs through discussion and persuasion rather than by force. Decisions are generally made by majority vote in order that policies will reflect at least to some degree the will or interests of the people. In order to prevent the over-concentration of power, the main legislative, executive, and judicial functions of government are separated. The values and principles underlying this form of government are liberty and equality, sometimes called the democratic ideals.

By democratization, we mean ‘the introduction of a democratic system or democratic principles’ (https://en.oxforddictionaries.com/definition/democratization, last accessed 10 January 2018). Transferred to the field of research evaluation, democratic principles here mean a transparent and participatory evaluation system (deliberative policy learning; Kowarsch et al. 2016; van den Hove 2007) and equality of citizens (‘distributive justice’, which consists of ‘giving each one his own’; see Cozzens 2007 for this concept in Science, Technology and Innovation (STI) policy).

Acemoglu and Robinson (2006) state

Dictatorship, nevertheless, is not stable when citizens can threaten social disorder and revolution. In response, when the costs of repression are sufficiently high and promises of concessions are not credible, elites may be forced to create democracy. By democratizing, elites credibly transfer political power to the citizens, ensuring social stability. Democracy consolidates when elites do not have a strong incentive to overthrow it.

According to Acemoglu and Robinson (2006), the main conditions to create and consolidate democracy are i) the strength of civil society, ii) the structure of political institutions, iii) the nature of political and economic crises, iv) the level of economic inequality, v) the structure of the economy, and vi) the form and extent of globalization.

Table 2 shows the application of these conditions in the context of the evaluation of research.

Table 2

An Application of the Economics of Democracy to the Evaluation of Research.


CONDITIONS OF DEMOCRACY ACCORDING TO ACEMOGLU AND ROBINSON (2006): APPLICATION IN THE CONTEXT OF RESEARCH EVALUATION

1. The strength of civil society: The movement against the blinkered use of bibliometric indicators (see point 9 of Table 1).

2. The structure of political institutions: Science-policy interfaces (van den Hove 2007); deliberative policy learning (Kowarsch et al. 2016).

3. The nature of political and economic crises: The crisis of science (see below).

4. The level of economic inequality: Inequality connected to the skewness of scientific productivity.

5. The structure of the economy: Structure of the sciences and their linkages.

6. The form and extent of globalization: International collaboration and globalization of science.

7. Examples of calls for democratization: Hill (2016), ‘making impact assessment mainstream’; Douglass (2016), The New Flagship University: Changing the Paradigm from Global Ranking to National Relevancy; Paradeise and Thoenig (2015), unsustainability of the top-of-the-pile model.

Let us discuss some of the points identified in the right column of Table 2.

The present crisis of science is well summarized by Benessia and colleagues (2016), who identify the most heated points of discussion in reproducibility (see Munafò et al. 2017), peer review, publication metrics, scientific leadership, scientific integrity, and the use of science for policy (see also ‘The end of the Cartesian dream’ in Saltelli & Funtowicz 2015).

van den Hove (2007) defines science-policy interfaces ‘as social processes which encompass relations between scientists and other actors in the policy process, and which allow for exchanges, co-evolution and joint construction of knowledge with the aim of enriching decision-making’. van den Hove (2007: 824; see also p. 815, Table 2) identifies the following methodological issues to account for in the design, implementation, and assessment of the science-policy interfaces:

  1. the reinforcement and enlargement of scientific quality and validation processes;
  2. the development of transdisciplinary research methodologies;
  3. transparency, participation and dynamism of interfaces, in particular the role of other stakeholders and the public;
  4. accountability of the different actors;
  5. translation of scientific knowledge into policy-relevant knowledge and of policy knowledge into science-relevant knowledge;
  6. the inclusion of a diversity of knowledges and intelligences;
  7. the development of dialogical dissemination channels for scientific knowledge which specifically target the various potential user groups; and
  8. the institutionalisation of science-policy interfaces in a democratic context.

With respect to the last point of van den Hove (the institutionalization of science-policy interfaces in a democratic context), Kowarsch and colleagues (2016) identify four main building blocks of deliberative policy learning, where ‘deliberative’ refers to an ‘inclusive and argumentative way of designing the process’ and policy learning to the ‘updating of beliefs about policies resulting from a combination of social interaction, personal experiences, value change and scientific policy analysis’. The four blocks are i) representation (incorporating a wide variety of viewpoints and stakeholders), ii) empowerment (critically scrutinizing the requirements for adequate participation), iii) capacity building (distinguishing the internal capacity of participants, based on knowledge integration and synthesis, from external capacity building, based on providing knowledge about the implications of alternatives and disclosing key uncertainties and normative assumptions), and iv) spaces for deliberation (realizing vertical and horizontal linkages) (Kowarsch et al. 2016: 8, Table 3).

5. Results

We believe the skewness of bibliometric indicators highlights the inequality among scholars and institutions. Scholars in general, even the minority with high performance indicators, think that traditional bibliometric indicators (number of papers and citations received) should be handled with care in research assessment. The majority of scholars, who belong to the long tail below the average, consider traditional bibliometric indicators unfair or unjust tools for research assessment. Indeed, these bibliometric indicators are unfair by construction (see Figure 1).

Figure 1: An illustration of the ‘unfairness’ of performance indicators (those illustrated on the left) generated by their skewness.
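The following minimal numerical sketch illustrates the same point: assuming, for illustration only, a log-normal indicator distribution (the shape discussed by Ruocco et al. 2017), the mean is pulled upward by a few giants, so most scholars necessarily fall below the average. The parameters are arbitrary and not taken from any real dataset.

```python
# Sketch of the 'unfairness by construction' of skewed indicators: with a
# log-normal distribution (parameters chosen only for illustration), the mean
# exceeds the median, so the majority of scholars sit below the average.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.lognormal(mean=1.0, sigma=1.2, size=100_000)  # synthetic per-scholar indicator

mean_score = scores.mean()
share_below_mean = (scores < mean_score).mean()
top1_share = np.sort(scores)[-1_000:].sum() / scores.sum()  # top 1% of scholars

print(f"mean = {mean_score:.1f}, median = {np.median(scores):.1f}")
print(f"share of scholars below the mean: {share_below_mean:.0%}")
print(f"share of total 'impact' held by the top 1%: {top1_share:.0%}")
```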

Often the evaluation of researchers is carried out by a few peers and rewards a few outliers. A sort of dictatorship of the research elite prevails in the editorial boards of the most prestigious journals and in the programs of important conferences, while the great majority of pygmies fall behind.

An interpretation of Altmetrics that we propose here, supported by the application of the economics of democracy framework presented in the previous section, is that Altmetrics may be an answer to the need for democratization observable in the field. This conjecture is supported by Nielsen (2012), who states

the increase of cognitive intelligence could be achieved by conversational critical mass and collaboration which becomes self-stimulating with online tools, which may establish architecture of attention that directs each participant where it is best suited. This collaboration may follow the patterns of open source software: commitment to working in modular way; encouraging small contributions; allowing easy reuse of earlier work; using signaling mechanisms (e.g., scores) to help people to decide where to direct attention.

To highlight the connection of Nielsen (2012) with the democratic values, we report another definition of democracy:

… the term democracy refers very generally to a method of group decision making characterized by a kind of equality among the participants at an essential stage of the collective decision making. Four aspects of this definition should be noted.

  • First, democracy concerns collective decision making, by which I mean decisions that are made for groups and that are binding on all the members of the group.
  • Second, this definition means to cover a lot of different kinds of groups that may be called democratic. So there can be democracy in families, voluntary organizations, economic firms, as well as states and transnational and global organizations.
  • Third, the definition is not intended to carry any normative weight to it. It is quite compatible with this definition of democracy that it is not desirable to have democracy in some particular context. So the definition of democracy does not settle any normative questions.
  • Fourth, the equality required by the definition of democracy may be more or less deep. It may be the mere formal equality of one-person one-vote in an election for representatives to an assembly where there is competition among candidates for the position. Or it may be more robust, including equality in the processes of deliberation and coalition building.

‘Democracy’ may refer to any of these political arrangements. It may involve direct participation of the members of a society in deciding on the laws and policies of the society, or it may involve the participation of those members in selecting representatives to make the decisions (Christiano 2015).

This concept of democracy highlights the connection with Nielsen’s (2012) open science movement described above. van den Hove’s (2007) methodological points reported in the previous section, in particular transparency, participation, and dynamism of interfaces and the institutionalization of science-policy interfaces in a democratic context, are also in line with this concept of democracy. It is also coherent with the deliberative policy learning of Kowarsch and colleagues (2016). Finally, this concept of democracy is particularly useful for interpreting Altmetrics, which are alternative metrics introduced to replace classic metrics (citations) and to make metrics accessible to everyone, that is, to democratize them. Our interpretation is coherent with the name given by the authors who introduced them (Priem et al. 2010, 2012), namely alternative metrics that are substitutes for the traditional ones.

6. Discussion and Conclusion

This paper offers an interpretation of Altmetrics within existing current trends in the evaluation of research. From our analysis, there seems to be a trend toward the democratization of research and its evaluation, that is, the need to introduce democratic principles characterized by social equality, representativeness, transparency, participation, and open deliberation. We propose an interpretation of Altmetrics as an answer to this need for democratization.

The findings of this paper seem to show that perhaps the critique of traditional bibliometric indicators (constructed on number of publications and citations) is exacerbated by some unpleasant and tricky properties these indicators have (e.g., skewness and asymmetry), which highlight the inequality among the assessed scholars. The critiques of bibliometric indicators have increased over the years because, among other factors, there has been an increasing use of bibliometric indicators at the individual level. When indicators are used in research assessment in which individuals are the unit of analysis, greater care should be given to the aspects of democratization.

Democracy is a delicate word that evokes emotional and philosophical responses. Someone may suggest that legitimation of evaluation would be better than democratization of evaluation. Some may agree with our interpretation because they like the idea of open evaluation, the idea of co-creation of value, and the mixing of a producer and a consumer approach in this context. However, democracy makes people think about equality, and someone may think it is not appropriate for research activity, in which we do not have homogeneous intelligences and talents. According to this perspective, an assessment should find the best and select those who merit being selected, not those elected by the majority of the population. This brings us to a tricky issue: the relationship between democracy and meritocracy. Young (1958) defines merit as the sum of intelligence and effort. Nevertheless, one of the primary concerns with meritocracy is the ambiguous definition of ‘merit’ (Arrow et al. 1999) and the need to consider a broader meaning of merit (Daraio 2021). Sternberg and Kaufman (2011) and Kaufman (2013) show that ‘greatness’ is more than just the sum of the ‘nature’ and ‘nurture’ components and that, to understand it, we have to go beyond talent and practice. Carson (2007), in The Measure of Merit, shows that talents and intelligence have become constituents of the societies in which they were produced and adopted, continually shaping and being shaped by these cultures. The concepts of intelligence and merit hence remain contestable terms in the recurrent debates about the social and political implications of inequality for a modern democracy (Carson 2007).

The relationship between democracy and meritocracy is a relevant question because it is linked to the future of research evaluation. Given that the models of evaluation have implications and change the behavior of people who are evaluated, this question also has implications for the research activity itself.

Which model of democracy, which level of democracy, and what open channels are best for representativeness, citizenship, and participation in research and in the evaluation of research are all relevant open questions.

The consideration and implementation of the main substantive and formal criteria for democracy, including the division of powers, the absence of conflicts of interest, decentralization, and contextualization, are all open questions to address. They include issues related to data platforms, technical solutions, private or public ownership, and so on.

Considering normative democratic theory could be helpful for further developments. A definition of the function of normative democratic theory is as follows:

The function of normative democratic theory is not to settle questions of definition but to determine which, if any, of the forms democracy may take are morally desirable and when and how. For instance, Joseph Schumpeter argues (1956, chap. XXI), with some force, that only a highly formal kind of democracy in which citizens vote in an electoral process for the purpose of selecting competing elites is highly desirable while a conception of democracy that draws on a more ambitious conception of equality is dangerous. On the other hand, Jean-Jacques Rousseau (1762, Book II, chap. 1) is apt to argue that the formal variety of democracy is akin to slavery while only robustly egalitarian democracies have political legitimacy. Others have argued that democracy is not desirable at all. To evaluate their arguments we must decide on the merits of the different principles and conceptions of humanity and society from which they proceed (Christiano 2015).

As Christiano (2015) describes, normative democratic theory is linked to the underlying principles and conceptions of humanity and society.

Another question that remains open is the following: is it right that the production of research and its evaluation be democratic?

Addressing all of these questions requires further research. We hope that our contribution may stimulate further research on these challenging questions.

Acknowledgements

Previous versions of this paper (see Daraio 2018) have been presented at the STI/ENID Conference 2017 in Paris, at the ISSI 2017 Conference in Wuhan, at a workshop in Bergamo in October 2017, at a MORE@DIAG Seminar at Sapienza University of Rome on 11 January 2018, and at the international workshop ‘The evaluation of research in Italy (La valutazione della Ricerca in Italia)’, 5 June 2018, CNR, Rome (Italy). We thank the participants of these conferences and seminars, and in particular Rodrigo Costas and Nicolas Robinson-Garcia, for helpful discussions.

This work is supported by the Sapienza Project Awards No. RM11916B8853C925.

Competing Interests

The author has no competing interests to declare.

References

  1. Acemoglu, D., & Robinson, J. A. (2006). Economic origins of dictatorship and democracy. Cambridge: Cambridge University Press. DOI: https://doi.org/10.1017/CBO9780511510809 

  2. Adie, E. (2013). Gaming altmetrics. Retrieved from http://www.altmetric.com/blog/gaming-altmetrics/ 

  3. Albarrán, P., Crespo, J. A., Ortuño, I., & Ruiz-Castillo, J. (2011). The skewness of science in 219 sub-fields and a number of aggregates. Scientometrics, 88(2), 385–397. DOI: https://doi.org/10.1007/s11192-011-0407-9 

  4. Allison, P. D. (1980). Inequality and scientific productivity. Social Studies of Science, 10(2), 163–179. DOI: https://doi.org/10.1177/030631278001000203 

  5. Allison, P. D., Long, J. S., & Krauze, T. K. (1982). Cumulative advantage and inequality in science. American Sociological Review (pp. 615–625). DOI: https://doi.org/10.2307/2095162 

  6. Allison, P. D., & Stewart, J. A. (1974). Productivity differences among scientists: Evidence for accumulative advantage. American Sociological Review (pp. 596–606). DOI: https://doi.org/10.2307/2094424 

  7. Arrow, K., Bowles, S., & Durlauf, S. (1999). Meritocracy and Economic Inequality. Princeton, NJ: Princeton University Press. DOI: https://doi.org/10.1515/9780691190334 

  8. Aung, H. H., Erdt, M., & Theng, Y. L. (2017). Awareness and usage of altmetrics: A user survey. Proceedings of the Association for Information Science and Technology, 54(1), 18–26. DOI: https://doi.org/10.1002/pra2.2017.14505401003 

  9. Barbaro, A., Gentili, D., & Rebuffi, C. (2014). Altmetrics as new indicators of scientific impact. Journal of the European Association for Health Information and Libraries, 10(1), 3–6. 

  10. Barnes, C. (2015). The use of altmetrics as a tool for measuring research impact. Australian academic & research libraries, 46(2), 121–134. DOI: https://doi.org/10.1080/00048623.2014.1003174 

  11. Benedictus, R., Miedema, F., & Ferguson, M. W. (2016). Fewer numbers, better science. Nature, 538(7626). DOI: https://doi.org/10.1038/538453a 

  12. Benessia, A., Funtowicz, S., Giampietro, M., Pereira, A. G., Ravetz, J., Saltelli, A., … & van der Sluijs, J. P. (2016). Science on the Verge. Amazon Book, in the series The Rightful Place of Science Consortium for Science, Policy Outcomes Tempe, AZ and Washington, DC. 

  13. Biagioli, M., & Lippman, A. (Eds.) (2020). Gaming the metrics: Misconduct and manipulation in academic research. Cambridge, MA: MIT Press. DOI: https://doi.org/10.7551/mitpress/11087.001.0001 

  14. Bornmann, L. (2013). What is societal impact of research and how can it be assessed? A literature survey. Journal of the American Society for information science and technology, 64(2), 217–233. DOI: https://doi.org/10.1002/asi.22803 

  15. Bunnin, N., & Yu, J. (2004). Democracy, in The Blackwell Dictionary of Western Philosophy, eISBN: 9781405106795. Last accessed 7 January 2018. DOI: https://doi.org/10.1111/b.9781405106795.2004.00002.x 

  16. Carson, J. (2007). The measure of merit: Talents, intelligence, and inequality in the French and American republics, 1750-1940. Princeton, NJ: Princeton University Press. DOI: https://doi.org/10.1515/9780691187679 

  17. Carson, J. (2020). Quantification–Affordances and Limits. Scholarly Assessment Reports, 2(1). DOI: https://doi.org/10.29024/sar.24 

  18. Costas, R., Haustein, S., Zahedi, Z., & Larivière, V. (2016). Exploring paths for the normalization of Altmetrics: Applying the Characteristic Scores and Scales. The 2016 Altmetrics Workshop, Bucharest, Romania. 

  19. Cozzens, S. E. (2007). Distributive justice in science and technology policy. Science and Public Policy, 34(2), 85–94. DOI: https://doi.org/10.3152/030234207X193619 

  20. Cronin, B., & Sugimoto, C. R. (2014). Beyond bibliometrics: harnessing multidimensional indicators of scholarly impact. Cambridge, MA: MIT Press. DOI: https://doi.org/10.7551/mitpress/9445.001.0001 

  21. Cronin, B., & Sugimoto, C. R. (Eds.). (2015). Scholarly metrics under the microscope: from citation analysis to academic auditing. Medford: Information Today. 

  22. Crotty, D. (2014). Altmetrics: Finding meaningful needles in the data haystack. Serials Review, 40, 141–146. DOI: https://doi.org/10.1080/00987913.2014.947839 

  23. Dahler-Larsen, P. (2011). The evaluation society. Stanford, CA: Stanford University Press. DOI: https://doi.org/10.2307/j.ctvqsdq12 

  24. Daraio, C. (2017). A Framework for the Assessment of Research and its Impacts. Journal of Data and Information Science, 2(4), 7–42. DOI: https://doi.org/10.1515/jdis-2017-0018 

  25. Daraio, C. (2018). The Democratization of Evaluation and Altmetrics, Technical Report DIAG, 01/2018. 

  26. Daraio, C. (2019). Econometric approaches to the measurement of research productivity. In W. Glänzel, H. F. Moed, H. Schmoch & M. Thelwall (Eds.), Springer Handbook of Science and Technology Indicators (pp. 633–666). DOI: https://doi.org/10.1007/978-3-030-02511-3_24 

  27. Daraio, C. (2021). In Defense of Merit to Overcome Merit. Frontiers in Research Metrics and Analytics, January 2021, 5, Article 614016. DOI: https://doi.org/10.3389/frma.2020.614016 

  28. Daraio, C., & Bonaccorsi, A. (2017). Beyond university rankings? Generating new indicators on universities by linking data in open platforms. Journal of the Association for Information Science and Technology, 68(2), 508–529. DOI: https://doi.org/10.1002/asi.23679 

  29. Daraio, C., Bonaccorsi, A., & Simar, L. (2015). Rankings and university performance: A conditional multidimensional approach. European Journal of Operational Research, 244(3), 918–930. DOI: https://doi.org/10.1016/j.ejor.2015.02.005 

  30. Daraio, C., & Glänzel, W. (2016). Grand Challenges in Data Integration. State of the Art and Future Perspectives: An Introduction. Scientometrics, 108(1), 391–400. DOI: https://doi.org/10.1007/s11192-016-1914-5 

  31. de Solla Price, D. J. (1963). Little science, big science… and beyond. New York: Columbia University Press. DOI: https://doi.org/10.7312/pric91844 

  32. Douglass, J. A. (Ed.). (2016). The New Flagship University: Changing the Paradigm from Global Ranking to National Relevancy. New York, NY: Springer. 

  33. Fanelli, D. (2020). Pressures to publish: What effects do we see? In M. Biagioli & A. Lippman (Eds.), Gaming the metrics: Misconduct and manipulation in academic research (pp. 111–122). 

  34. Fauzi, M. A., Tan, C. N. L., Daud, M., & Awalludin, M. M. N. (2020). University rankings: A review of methodological flaws. Issues in Educational Research. 

  35. Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford: Oxford University Press. 

  36. Fraumann, G. (2018). The values and limits of altmetrics. New Directions for Institutional Research, 2018(178), 53–69. DOI: https://doi.org/10.1002/ir.20267 

  37. Galligan, F., & Dyas-Correia, S. (2013). Altmetrics: rethinking the way we measure. Serials Review, 39(1), 56–61. DOI: https://doi.org/10.1080/00987913.2013.10765486 

  38. Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M. (1994). The new production of knowledge: The dynamics of science and research in contemporary societies. Los Angeles, CA: Sage. 

  39. Gingras, Y. (2016). Bibliometrics and research evaluation: Uses and abuses. Cambridge, MA: MIT Press. DOI: https://doi.org/10.7551/mitpress/10719.001.0001 

  40. Glänzel, W. (1996). The need for standards in bibliometric research and technology. Scientometrics, 35(2), 167–176. DOI: https://doi.org/10.1007/BF02018475 

  41. Glänzel, W., & Gorraiz, J. (2015). Usage metrics versus altmetrics: Confusing terminology? Scientometrics, 102(3), 2161–2164. DOI: https://doi.org/10.1007/s11192-014-1472-7 

  42. Glänzel, W., Moed, H. F., Schmoch, H., & Thelwall, M. (Eds.) (2019). Springer Handbook of Science and Technology Indicators. Germany: Springer. DOI: https://doi.org/10.1007/978-3-030-02511-3 

  43. Glänzel, W., & Willems, H. (2016). Towards standardisation, harmonisation and integration of data from heterogeneous sources for funding and evaluation purposes. Scientometrics, 106(2), 821–823. DOI: https://doi.org/10.1007/s11192-015-1813-1 

  44. Gläser, J., & Whitley, R. (Eds.). (2007). The changing governance of the sciences: The advent of research evaluation systems. Germany: Springer. DOI: https://doi.org/10.1007/978-1-4020-6746-4 

  45. Gruber, T. (2014). Academic sell-out: How an obsession with metrics and rankings is damaging academia. Journal of Marketing for Higher Education, 24, 165–177. DOI: https://doi.org/10.1080/08841241.2014.970248 

  46. Haustein, S. (2016). Grand challenges in altmetrics: Heterogeneity, data quality and dependencies. Scientometrics, 108(1), 413–423. DOI: https://doi.org/10.1007/s11192-016-1910-9 

  47. Haustein, S., Bowman, T. D., & Costas, R. (2016). Interpreting “altmetrics”: Viewing acts on social media through the lens of citation and social theories. In C. R. Sugimoto (Ed.), Theories of Informetrics: A Festschrift in Honor of Blaise Cronin. Berlin: De Gruyter. 

  48. Hicks, D. (2009). Evolving regimes of multi-university research evaluation. Higher Education, 57(4), 393–404. DOI: https://doi.org/10.1007/s10734-008-9154-0 

  49. Hicks, D., Wouters, P., Waltman, L., De Rijcke, S., & Rafols, I. (2015). Bibliometrics: the Leiden Manifesto for research metrics. Nature, 520, 429–431. DOI: https://doi.org/10.1038/520429a 

  50. Hilbert, M., & López, P. (2011). The world’s technological capacity to store, communicate, and compute information. science, 332(6025), 60–65. DOI: https://doi.org/10.1126/science.1200970 

  51. Hill, S. (2016). Assessing (for) Impact: Future Assessment of the Societal Impact of Research. Palgrave Communications, 2, 16073. DOI: https://doi.org/10.1057/palcomms.2016.73 

  52. Holmberg, K. J. (2015). Altmetrics for information professionals: Past, present and future. Amsterdam: Chandos Publishing. DOI: https://doi.org/10.1016/B978-0-08-100273-5.00002-8 

  53. Jonkers, K., & Zacharewicz, T. (2016). Research performance based funding systems: A comparative assessment. Luxembourg: Publications Office of the European Union. EUR, 27837. 

  54. Katz, J. S., & Hicks, D. (1997). Desktop scientometrics. Scientometrics, 38(1), 141–153. DOI: https://doi.org/10.1007/BF02461128 

  55. Kaufman, S. B. (Ed.). (2013). The complexity of greatness: Beyond talent or practice. Oxford: Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199794003.001.0001 

  56. Konkiel, S. (2016). Altmetrics: diversifying the understanding of influential scholarship. Palgrave Communications, 2(1), 1–7. DOI: https://doi.org/10.1057/palcomms.2016.57 

  57. Kowarsch, M., Garard, J., Riousset, P., Lenzi, D., Dorsch, M. J., Knopf, B., … & Edenhofer, O. (2016). Scientific assessments to facilitate deliberative policy learning. Palgrave Communications, 2, 16092. DOI: https://doi.org/10.1057/palcomms.2016.92 

  58. Lok, C. (2016). Science’s 1%: How income inequality is getting worse in research. Nature News, 537(7621), 471. DOI: https://doi.org/10.1038/537471a 

  59. Melero, R. (2015). Altmetrics–A complement to conventional metrics. Biochemia medica, 25(2), 152–160. DOI: https://doi.org/10.11613/BM.2015.016 

  60. Moed, H. F. (2016). Altmetrics as traces of the computerization of the research process. In C. R. Sugimoto (Ed.), Theories of informetrics and scholarly communication. A Festschrift in Honor of Blaise Cronin (pp. 360–371). Berlin: De Gruyter. DOI: https://doi.org/10.1515/9783110308464-021 

  61. Moed, H. F. (2017). Applied Evaluative Informetrics. Germany: Springer. DOI: https://doi.org/10.1007/978-3-319-60522-7 

  62. Moed, H. F., & Halevi, G. (2015). Multidimensional assessment of scholarly research impact. Journal of the Association for Information Science and Technology, 66(10), 1988–2002. DOI: https://doi.org/10.1002/asi.23314 

  63. Muller, J. Z. (2018). The tyranny of metrics. Princeton, NJ: Princeton University Press. 

  64. Munafò, M. R., Nosek, B. A., Bishop, D. V., Button, K. S., Chambers, C. D., Du Sert, N. P., … & Ioannidis, J. P. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), 1–9. DOI: https://doi.org/10.1038/s41562-016-0021 

  65. Nielsen, M. (2012). Reinventing discovery: The new era of networked science. Princeton, NJ: Princeton University Press. DOI: https://doi.org/10.2307/j.ctt7s4vx 

  66. Nowotny, H., Scott, P., & Gibbons, M. (2001). Re-thinking science: Knowledge and the public in an age of uncertainty (p. 12). Cambridge: Polity. 

  67. Paradeise, C., & Thoenig, J. C. (2015). In Search of Academic Quality. Germany: Springer. DOI: https://doi.org/10.1057/9781137298294 

  68. Pinfield, S., Salter, J., Bath, P. A., Hubbard, B., Millington, P., Anders, J. H., & Hussain, A. (2014). Open-access repositories worldwide, 2005–2012: Past growth, current characteristics, and future possibilities. Journal of the association for information science and technology, 65(12), 2404–2421. DOI: https://doi.org/10.1002/asi.23131 

  69. Priem, J., Groth, P., & Taraborelli, D. (2012). The altmetrics collection. PloS one, 7(11), e48753. DOI: https://doi.org/10.1371/journal.pone.0048753 

  70. Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: A manifesto, 26 October 2010. http://altmetrics.org/manifesto 

  71. Ràfols, I., Robinson-García, N., & van Leeuwen, T. (2017). How to make altmetrics useful in societal impact assessments: Shifting from citation to interaction approaches. Impact of Social Sciences Blog. 

  72. Rasmussen, P. G., & Andersen, J. P. (2013). Altmetrics: An alternate perspective on research evaluation. Sciecom info, 9(2). 

  73. Regan, Á., & Henchion, M. (2019). Making sense of altmetrics: The perceived threats and opportunities for academic identity. Science and Public Policy, 46(4), 479–489. DOI: https://doi.org/10.1093/scipol/scz001 

  74. Roemer, R. C., & Borchardt, R. (2015). Meaningful metrics: A 21st-century librarian’s guide to bibliometrics, altmetrics, and research impact. American Library Association. 

  75. Rousseau, R., & Ye, Y. F. (2013). A multi-metric approach for research evaluation. Chinese Science Bulletin, 58(26), 3288–3290. DOI: https://doi.org/10.1007/s11434-013-5939-3 

  76. Rousseau, S., & Rousseau, R. (2017). Inequality in science and the possible rise of scientific agents. ISSI Newsletter 12(4), 68–70. 

  77. Ruiz-Castillo, J., & Costas, R. (2014). The skewness of scientific productivity. Journal of Informetrics, 8(4), 917–934. DOI: https://doi.org/10.1016/j.joi.2014.09.006 

  78. Ruocco, G., Daraio, C., Folli, V., & Leonetti, M. (2017). Bibliometric indicators: The origin of their log-normal distribution and why they are not a reliable proxy for an individual scholar’s talent. Palgrave Communications, 3, 17064. DOI: https://doi.org/10.1057/palcomms.2017.64 

  79. Saltelli, A., & Funtowicz, S. (2015). Evidence-based policy at the end of the Cartesian dream. In Science, Philosophy and Sustainability (pp. 169–184). New York, NY: Routledge. 

  80. Schubert, A., & Schubert, G. (2019). All along the h-index-related literature: A guided tour. In Springer Handbook of Science and Technology Indicators (pp. 301–334). Germany: Springer. DOI: https://doi.org/10.1007/978-3-030-02511-3_12 

  81. Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43(9), 628. DOI: https://doi.org/10.1002/(SICI)1097-4571(199210)43:9<628::AID-ASI5>3.0.CO;2-0 

  82. Stephan, P., Veugelers, R., & Wang, J. (2017). Blinkered by bibliometrics. Nature, 544, 411–412. DOI: https://doi.org/10.1038/544411a 

  83. Sternberg, R. J., & Kaufman, S. B. (Eds.). (2011). The Cambridge handbook of intelligence. Cambridge: Cambridge University Press. DOI: https://doi.org/10.1017/CBO9780511977244 

  84. Sugimoto, C. R., Work, S., Larivière, V., & Haustein, S. (2017). Scholarly use of social media and altmetrics: A review of the literature. Journal of the Association for Information Science and technology, 68(9), 2037–2062. DOI: https://doi.org/10.1002/asi.23833 

  85. Thelwall, M., Haustein, S., Larivière, V., & Sugimoto, C. R. (2013). Do altmetrics work? Twitter and ten other social web services. PloS one, 8(5), e64841. DOI: https://doi.org/10.1371/journal.pone.0064841 

  86. Christiano, T. (2015). Democracy. The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.). Retrieved from https://plato.stanford.edu/archives/spr2015/entries/democracy/. Last accessed 16 May 2021. 

  87. Van den Hove, S. (2007). A rationale for science–policy interfaces. Futures, 39(7), 807–826. DOI: https://doi.org/10.1016/j.futures.2006.12.004 

  88. Vernon, M. M., Balas, E. A., & Momani, S. (2018). Are university rankings useful to improve research? A systematic review. PloS One, 13(3), e0193762. DOI: https://doi.org/10.1371/journal.pone.0193762 

  89. Wildgaard, L. (2019). An overview of author-level indicators of research performance. In Springer Handbook of Science and Technology Indicators (pp. 361–396). DOI: https://doi.org/10.1007/978-3-030-02511-3_14 

  90. Wilsdon, J., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. DOI: https://doi.org/10.4135/9781473978782 

  91. Wouters, P., & Costas, R. (2012). Users, Narcissism and control—Tracking the impact of scholarly publications in the 21st century. In Proceedings of 17th International Conference on Science and Technology Indicators, 2, 847–857. 

  92. Wouters, P., Zahedi, Z., & Costas, R. (2019). Social media metrics for new research evaluation. In Springer handbook of science and technology indicators (pp. 687–713). Germany: Springer. DOI: https://doi.org/10.1007/978-3-030-02511-3_26 

  93. Young, M. D. (1958). The rise of the meritocracy. New Brunswick: Transaction Publisher. 

  94. Zitt, M. (2015). The excesses of research evaluation: The proper use of bibliometrics. Journal of the Association for Information Science and Technology, 66(10), 2171–2176. DOI: https://doi.org/10.1002/asi.23519 
