
What Is Societal Impact and Where Do Altmetrics Fit into the Equation?

Authors:

Kim Holmberg, University of Turku, FI
Sarah Bowman, Baldwin Public Library, US
Timothy Bowman, Wayne State University, US
Fereshteh Didegah, Simon Fraser University, CA
Terttu Kortelainen, University of Oulu, FI

Abstract

The expectation that scientific research should provide answers to societal issues and support institutional decision-making is increasing, but there are still no systematic methods of identifying and measuring the wider societal impacts of research. In this article, various views on the meaning of impact, the different types of impact or influence that research can have on society, and the potential of altmetrics to capture and measure this societal impact will be discussed.
How to Cite: Holmberg, K., Bowman, S., Bowman, T., Didegah, F. and Kortelainen, T., 2019. What Is Societal Impact and Where Do Altmetrics Fit into the Equation?. Journal of Altmetrics, 2(1), p.6. DOI: http://doi.org/10.29024/joa.21
Published on 18 Dec 2019
Accepted on 18 Oct 2019
Submitted on 26 Jun 2019

1 Introduction

An important goal or mission of research is to advance further research by creating grounds for new work to build upon, or to allow other researchers to ‘stand on the shoulders of giants’, as Newton so eloquently noted when attributing his success to the work of those before him. But science has a much wider influence that goes beyond other researchers; its effects reach all corners of society and impact society in multiple ways, including educationally, culturally, environmentally, and economically. Science is indeed the cornerstone of the educational system and thus has a profound educational impact. Scientific breakthroughs can inspire artists, musicians, and poets in the objects they create, the lyrics they write, and the stories they tell, demonstrating cultural impact. Scientific evidence of climate change has had environmental impact, as people are becoming increasingly concerned about the environment and are changing their behavior because of this awareness. Scientific discoveries and innovations can also have significant economic impact. Widely accepted means of capturing these types of impact, however, are not yet available (Dinsmore, Allen & Dolby 2014).

The expectation that scientific research should provide answers to societal issues and support institutional decision-making is increasing, as governments, organizations, universities, and private businesses supporting research demand evidence of how the research they support will influence society and of how its benefits can be brought to the attention of a wider audience. Simultaneously, research assessment has become increasingly important, as governments and other funding bodies try to identify the researchers, research groups, and universities that are most deserving of the limited funds. As the quality of research can be difficult to assess and the assessment itself can be subjective, other approaches have been developed, and proxies of quality have been commonly used in research assessments. Bibliometric analyses of research publications and the measured citation impact of those publications have been used as proxies and indicators of the impact of research (De Bellis 2009). Today, a wide range of quantitative methods focusing on citations are used in research assessment (e.g., Moed et al. 1985; Moed, De Bruin & Van Leeuwen 1995), but citation-based methods can only reflect scientific impact (i.e., how a certain research product has been used by other researchers). Examining only scientific impact can be misleading and limited, because science can, and often should, have influence or impact on a range of different audiences beyond academia, leading to improvements in society and changes in behavior. In fact, a highly cited article, one that has been recognized as valuable by the research community, may have little impact on society, and vice versa.

To study the wider societal impact of research, the focus needs to move beyond academic literature (Bornmann 2013) to examine research impact on various audiences, including clinicians, policymakers, educators, other practitioners, and the general public. While various types of impact have been discussed (e.g., Pitt 2000; Rutherford 1987; Samuel & Derrick 2015; Tullos 2009) and some approaches to track and measure the broader impact of science have been attempted (e.g., Walter et al. 2007; Wolf et al. 2013), there are no generally accepted methods of identifying, capturing, and measuring this wider societal impact of research. As a result, the many different forms of impact that research can have on society are usually not considered or measured in research assessments. With the increasing demand for evidence of the wider societal impact of research, new ways to identify and measure these types of impact are needed. In this article, various views on the meaning of impact, the types of impact that science can have on society, and the potential of altmetrics to capture and measure the societal impact of research will be discussed.

2 What Is Impact?

As research impact has become firmly incorporated into the expectations of research outcomes, and as impact, or impact potential, drives funding, it is important to understand what is meant by impact and to acknowledge its complexity. The impact of science can affect many stakeholders, and different stakeholders (e.g., researchers, research administrators, funders, policymakers, and the general public) may understand impact differently and be interested in specific aspects of it (Penfield et al. 2014), making the notion of impact highly contested.

The impact of research can be thought of as all the different ways by which research can benefit individuals, organizations, and nations (ESRC 2016). The impact of scientific activities can also be thought of as the difference between the consequences of specific research being undertaken and published and the consequences were this research not to take place (IAIA 2009). Impact of research has been defined as ‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’, or as the ‘reduction or prevention of harm, risk, cost or other negative effects’ (Research Excellence Framework 2014, 2011: 26). The Research Councils UK (RCUK) defines impact as ‘the demonstrable contribution that excellent research makes to society and the economy’ (RCUK 2014). A key word in the RCUK definition is ‘demonstrable’: a mere indication or suggestion of awareness is not enough to demonstrate or reflect impact. Demonstrable impact requires evidence of actual use of the research, for instance by policymakers or practitioners, and evidence of how the results have led to concrete improvements or wider changes in society. However, it may be difficult to attribute specific societal or behavioral changes in the public to specific research outputs. Thus, it has been easier to focus on tangible and measurable research outputs and to use them as proxies of impact. The impact science can have on society is nevertheless more complex than the simple, quantifiable outputs designed to provide a measurement of the scientific impact of research. In addition, the impact of research on society depends not only on scientific discoveries and developments, but also on ‘economic forces and political wills’ (Brenner 1998). Definitions of impact tend to expect research to have positive influences, although negative impacts are also foreseeable. Certain research could, for instance, point to positive impacts on the economy while simultaneously having negative impacts on the environment. For example, an environmental impact assessment conducted for the Three Gorges Dam in China identified several potentially negative impacts on the environment (Tullos 2009), yet positive economic impacts and political influences carried more weight and the dam was built. Since the project was completed, many follow-up studies have shown significant negative environmental impacts (including the extinction of a species) directly resulting from the dam (Tullos 2009).

Another way of looking at the impact of research is to categorize it as a) instrumental, when the research is ‘influencing the development of policy, practice or service provision, shaping legislation, or altering behavior’, b) conceptual, when it is ‘contributing to the understanding of policy issues and reframes debates’, and c) capacity building, when the impact is reached ‘through technical and personal skill development’ (Nutley et al. 2007). Miettinen, Tuunainen, and Esko (2015) classify impact into a) epistemological, b) artefactual, and c) institutional-interactional dimensions, based on whether the impact is assessed from the point of view of creating new understanding; of using new artefacts, methods, or tools; or of mapping the interactions between universities and other social actors, respectively. While the categorization of Nutley et al. (2007) focuses on the outcomes and increased knowledge, Miettinen, Tuunainen, and Esko (2015) focus more on the processes connected to the outcomes. Building on this, it is possible to view conducting science as a process whose foundation is formed by the epistemological groundwork of knowing things, which then enables us to do things. This is in line with Bertrand Russell (1952/2016), according to whom science enables us to know things and to do things. However, while knowing or becoming aware of something new can in some cases be deduced from events connected to specific research outputs, evidence of doing something as a result of new knowledge may be much more difficult to identify. In addition, behavioral changes may be difficult to accomplish. Kollmuss and Agyeman (2002) reviewed earlier behavioral theories with psychological and sociological frameworks and found relationships between environmental knowledge, environmental awareness, and behavior. The authors summarized findings from earlier studies and listed both internal factors (knowledge, values, attitudes, fear, and emotional involvement) and external factors (infrastructure; political, social, and cultural factors; and the economic situation) that influence behavior changes. The review indicated there is no direct positive correlation between an increase in knowledge and new behavior. In fact, barriers such as old behavior patterns, lack of internal or external incentives, negative feedback, and lack of knowledge may negatively influence pro-environmental behavior (in the case presented by Kollmuss and Agyeman (2002)). These findings are rather disconcerting, as it appears new or increased knowledge does not have a direct impact on behavioral changes; instead, personal impacts appear to be the strongest motivations for behavioral changes, at least where environmental behavior is concerned. Indicators of mere awareness should thus not be used to infer behavioral changes, nor as evidence of impact. On the other hand, researchers cannot assume that simply publishing their research will contribute to behavioral changes and achieve impact. To put their findings to actual use (rather than just focusing on publishing more research articles), researchers should also promote their work, maximize their dissemination efforts, and engage with their readers (Green 2019).

2.1 Societal impact

The terms ‘societal impact’ and ‘social impact’ are often used synonymously and interchangeably in the literature, though earlier literature suggests a difference between the two. While societal impact refers to the impact of science on various levels and areas of society, social impact often refers to a more personal level of influence, affecting people directly or indirectly (Vanclay et al. 2015). In this text, the term societal impact will be used as an umbrella term to cover all types and forms of impact that research can have at different levels and areas of society. In other words, societal impact of research designates something beyond scientific impact (Penfield et al. 2014: 21; Wolf et al. 2014: 291), something that is beneficial for sections of society outside of science (Bornmann 2014). Societal impact assessment could examine the social, cultural, environmental, or economic benefits of research (Bornmann 2014), the environmental and technological benefits (Bond & Pope 2012), or any of the 11 different types of impact (impact on science, technology, economy, culture, society, policy, organizations, health, environment, symbolism, and training) listed by Godin and Doré (2005), depending on the areas covered by the research. It has also been argued that the interaction between the research and the public will lead to and determine the outcome of the research (i.e., what kind of influence the research has had and what kind of changes in society it has ignited). The societal impact of science can thus be seen as bidirectional, suggesting that when both parties (science and society) participate in the ‘making of impact’, the ‘amount and density’ of impact is likely to grow (Siika-aho 2015: 261). Similarly, the notion of co-creation (a term frequently used in EC Framework Programmes) stems from the ideology that impacts of science are constructed through the interaction between science and society.

Societal impacts of science can be seen in a multitude of places, and different indicators or objects can serve as evidence of such impacts. As evidence of economic impact, patents can indicate how and where knowledge and business flow from research. They can be the result of academic-corporate collaboration and may show a commercial application of research. Although bibliometric methods have been criticized for not being able to identify and measure all the different aspects of research impact on technology and the economy, patent citations are still considered applicable tools to measure the knowledge flow and interactions between science and technology (Hung 2012). By examining the intensity of research article citations in patents, one is able to reveal scientific excellence in both technological and economic domains (Van Looy et al. 2003). This also helps researchers to better understand the innovation process (Verbeek, Debackere & Luwel 2003). Economics, technological advancements, and science all have mutual reciprocities, which need to be acknowledged and identified when assessing the economic or technological impact of research. In medicine, on the other hand, Hamers and Visser (2012) have defined societal impact of research as the influence of research on ‘clinical practice and healthcare policy and […] on patients’ well-being and quality of life’, and there is ‘mounting quantitative proof of the benefits of medical research to health, society, and the economy’ (WHO 2013). Godin and Doré (2005) wrote that cultural impact of science refers to ‘public understanding of science’ (i.e., an individual’s understanding and knowledge of science). However, science can influence and inspire popular culture as well, although the connection or path between scientific discoveries and cultural outputs may be even more difficult to identify than other types of impact, and the valuation of the cultural outputs that have sprung from science may be debated. With regard to the intersection of science and the arts, there are several examples of artists during the Renaissance era and later who combined scientific studies with art, including Leonardo da Vinci and Johannes Vermeer. Each of these artists utilized science to inform their works, with da Vinci’s Sketch of Uterus and Fetus and Vermeer’s The Astronomer being just two examples of science informing art (Eskridge 2014). There are also several examples of the impact of science on music, with composers such as Mozart and Bartok integrating mathematical principles in their compositions, Bartok utilizing the golden mean in several of his works. In other examples, artists were inspired by advances in science during the moon race of the late 1960s to create rock-n-roll music, such as glam rocker David Bowie’s Space Oddity album and Pink Floyd’s song ‘Astronomy Domine’ (Ball 2015, March 20). Such connections between scientific discoveries and societal impacts can, however, be difficult to trace, and it could perhaps be questioned whether all effects of science should be identified and/or quantified. As these examples demonstrate, the complexity of the societal impact of science requires a new way of thinking about impact and new methods for evaluating the various types of impact research may have had.

3 Evaluating Impact of Science

Researchers can produce many different types of research outputs, from openly available datasets and code to news articles, blog entries, and keynote lectures. While these types of outputs are often difficult to identify and their impact potential is difficult to measure, research evaluations have mainly focused on peer-reviewed and published research articles. Peer review is the foundation of all scientific evaluation. It forms the mechanism of scientific quality control, as it is through peer review that decisions are made about which scientific articles are ‘good enough’ to be published and, with that, which articles are included in the common pool of scientific knowledge. Peer review cannot be replaced by quantitative performance measures (Butler 2007), but various quantitative measures can be cost-effective and efficient in other types of research evaluations, such as evaluations of the performance of research groups or universities. In such evaluations, citation counts are considered to be the most important research impact indicator (Furnham 1990). Furthermore, citations are widely acknowledged as indicators of scientific merit, with highly cited authors recognized as having made a more significant contribution to science (Merton 1968). These assumptions are supported by earlier research that has found high-quality articles are indeed usually cited more often (Lawani 1986; Patterson & Harris 2009) and that highly cited articles are also associated with other quality measures, such as winning awards (Cole 1973). However, the lack of a generally accepted citation theory has been discussed by many scholars (Cronin 1984; Leydesdorff 1998; Zuckerman 1978), and identifying the motivations and rationale behind the act of citing is highly complex. The motivations to cite vary greatly between researchers and also between cited works (MacRoberts & MacRoberts 1989). Not all citations are, for instance, positive acknowledgments of quality or value, as citations can also criticize the earlier work (Murugesan & Moravcsik 1978). Moreover, citations are not immediate metrics: due to publication and indexing delays, it can take a long time for a research paper to receive its first citation (if it is cited at all) after it has been published.

According to Vanclay, societal impact assessment involves

analysing, monitoring, and managing the intended and unintended, positive and negative, social consequences of development [concerning] changes in societies’ way of life, culture, community, political system, environment, health and wellbeing, personal and property rights, fears, or aspirations (as cited in Bradbury-Jones & Taylor, 2014: 46–47).

With that, one of the core questions in impact evaluation is, according to Streatfield and Markless (2009), how one can tell whether one is truly making a difference to one’s users. This leads to efforts to identify changes in the behavior of the users (of science) (doing things differently), in their competence (doing things better), and in their levels of knowledge and attitudes (e.g., confidence). In scientific impact assessment, a citation could, in this sense, be considered an indication of an increased level of knowledge, as the researcher citing earlier work is thereby acknowledging that he or she is using it. But as the societal impact of research is difficult to identify and measure, and as it may often be difficult to pinpoint what type of impact research has had and on whom, or which specific research has led to a specific impact, there is a danger of focusing disproportionately on quantifiable aspects, such as the commercialization of science or other financial benefits (Russell Group Papers 2012). Or as Godin and Doré (2005) have noted, other dimensions seem to be missing from the picture, as most research, when identifying societal impact, references only economic impact. Hicks and Wouters (2015), in the Leiden Manifesto, warn of impact-factor obsession, noting that easy-to-access metrics may lure one to measure only what is available. This may lead to a real danger of neglecting the funding of basic research (Shapiro & Taylor 2013), as the outcomes of applied research may be easier to predict and assess.

3.1 Systems to assess societal impact

Impact, and impact assessments, can be divided into potential impact (ex ante impact) and realized impact (ex post impact) (i.e., what the impact might be and what the impact has been, respectively). Ex ante impact assessment focuses on the possibilities of transforming important societal questions and problems into research questions and on the abilities to answer them, and is thus predictive in nature; ex post impact assessment uses existing evidence and recorded performance to identify how well the research has been able to answer those questions and translate the findings into practical solutions, policy decisions and, ultimately, changes in the behavior of those affected by the research directly or indirectly. Both types of impact are used in research assessment and in decisions about research funding. A research group applying for funding for a research project would be assessed ex ante (i.e., the assessment would try to forecast the potential of the research group to accomplish the set research goals). Universities, on the other hand, are assessed ex post (i.e., their past performance is assessed in order to make future funding decisions). The National Science Foundation (NSF) in the US reviews research funding proposals in light of two aspects: intellectual merit and broader impacts. The intellectual merit component refers to the potential of the proposed research to advance scientific knowledge, while the broader impacts component ‘encompasses the potential to benefit society and contribute to the achievement of specific, desired societal outcomes’ (NSF 2013). The reviewing process therefore combines both ex ante and ex post assessments or, in other words, evaluates past performance and forecasts potential impact. While peer review ensures the quality of research, it becomes a prediction or a guess of future potential when used for decisions about research funding, thus placing the reviewers in the role of ‘unwilling futurologists’ (Rip 2000). Judging the societal impact of research proposals makes reviewing even more complicated, as it takes the reviewers beyond their disciplinary expertise, but ‘unless scientists embrace their own ability to judge impact, their role in the decision-making process will increasingly be transferred to others’ (Holbrook & Frodeman 2011: 245). Holbrook and Frodeman (2011: 245) also argued

scientists ought to play a central role in determining what research gets funded. But this will only continue to be possible if scientists also embrace the fact that their research can be judged on its potential societal impacts as well as its intrinsic intellectual merit.

Earlier projects, such as ERiC (Van der Meulen 2010), SIAMPI (SIAMPI 2012), ASIRPA (Joly et al. 2015), and UNICO (Holi, Wickramasinghe & van Leeuwen 2008), have identified and assessed a significant number of quantitative and qualitative indicators that can be used to measure research impact, or some aspects thereof, on different areas of society. For instance, ERiC and ASIRPA divide indicators into categories such as dissemination of knowledge (e.g., publications, advisory activities, number of PhDs, conference presentations), interest of stakeholders (e.g., funding, collaboration, staff exchanges, consortium partnerships), and impact and use of results (e.g., public debates and media appearances, patents, spin-offs). UNICO takes a different approach and focuses on knowledge transfer, listing potential indicators connected to the networks in which researchers operate, professional development, collaborative research, contract research, spin-outs, teaching, and many other measures. The multitude of possible measures listed in approaches such as these highlights the complexity of the possible influences that research can have on society.

While approaches such as those presented by SIAMPI and ASIRPA focus on the processes and interactions, other approaches try to introduce more multi-faceted assessment systems that take into account both quantifiable indicators and narratives in the form of case studies. The Payback Framework by Buxton and Hanney (1996) allows for narratives to be put forward, thus giving researchers a chance to explain and demonstrate the impact their research has had on society. The Payback Framework was one of the first research assessment tools to take both scientific and societal impact into account in the evaluation, specifically in the case of health sciences. Donovan and Hanney (2011) explained how the Payback Framework consists of a model incorporating the complete research process, from the inception of a research idea to the dissemination of the results and, eventually, to the final outcomes of wider societal benefits. According to Donovan and Hanney (2011: 181), ‘its multi-dimensional categorization of benefits from research starts with more traditional academic benefits of knowledge production and research capacity-building, and then extends to wider benefits to society.’ The Payback Framework (Buxton & Hanney 1996) uses an outcome-based approach, including a multitude of different methods for data collection and data analysis, documentary and literature reviews, interviews, and bibliometric analyses (Hanney et al. 2004; Samuel & Derrick 2015). Furthermore, the inclusion of narratives allows highlighting types of impact that would not be identified using more traditional impact indicators. Perhaps the most current, and certainly the largest and most followed, example of such approaches in research assessment in recent years has been the Research Excellence Framework (REF) 2014 in the UK, in which almost 7,000 case studies in the form of narratives were assessed by more than 1,000 assessment panel members.

An outcome-based evaluation is a method of program evaluation that can be used to determine how to implement projects, to ascertain if the desired outcomes were achieved, and to determine the overall societal impact (Westat 2010). The approach highlights the importance of preset goals and the use of data sources and methods that are suitable for assessing how well those goals were met. The models used in this approach typically consist of inputs (e.g., money, time), activities (e.g., projects, services), outputs (e.g., the products of the activities), and short-, medium-, and long-term outcomes (which can overlap to some degree). Short-term outcomes often demonstrate changes in awareness, skills, or knowledge; changes in behavior, knowledge, or attitudes are considered intermediate outcomes; and changes in attitudes, values, conditions, and life status are long-term goals (McNamara 2015). In addition to outcomes, outputs are also measured in outcome-based evaluations. In Walter et al. (2007), outputs are defined as the immediate, tangible results of a research project, including workshops, meetings, reports, and other publications. Impacts are intermediate effects, such as changes in knowledge, attitudes, and behavior, while outcomes are long-term effects that meet the set goals of the research project. Outcomes represent changes in policy, which have steered the behavior of a wider public and thus had wide impact.
When interviewing REF2014 evaluators about impact, Samuel and Derrick (2015) found that a majority of the 62 interviewed evaluators in fact viewed impact as an outcome, emphasizing that counting or assessing research outputs does not convey much about their impact or about the outcome of the research. Focusing on research outputs would therefore reveal nothing about the resulting outcomes of the research or about the impact it has had on society.
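
The staged logic model described above (inputs, activities, outputs, and short-, intermediate-, and long-term outcomes) can be made concrete as a simple data structure. The following is a minimal illustrative sketch in Python; the field names follow the categories in the text, and all example values are hypothetical.

    from dataclasses import dataclass, field

    # Minimal sketch of an outcome-based logic model; the categories follow
    # McNamara (2015) and Walter et al. (2007) as described in the text.
    @dataclass
    class LogicModel:
        inputs: list = field(default_factory=list)                 # e.g., money, time
        activities: list = field(default_factory=list)             # e.g., projects, services
        outputs: list = field(default_factory=list)                # tangible products of the activities
        short_term_outcomes: list = field(default_factory=list)    # changes in awareness, skills, knowledge
        intermediate_outcomes: list = field(default_factory=list)  # changes in behavior, attitudes
        long_term_outcomes: list = field(default_factory=list)     # changes in conditions, life status

    # Hypothetical research project, for illustration only.
    project = LogicModel(
        inputs=["grant funding", "researcher time"],
        activities=["field study", "stakeholder workshops"],
        outputs=["journal article", "policy brief", "workshop report"],
        short_term_outcomes=["policymakers aware of the findings"],
        intermediate_outcomes=["clinical guideline revised"],
        long_term_outcomes=["measurable change in public behavior"],
    )
    print(project.outputs)

Note how the demonstrable evidence discussed in Section 2 would live in the outcome fields, while most readily available metrics count only what is in the outputs field.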

4 Potential of and Challenges with Altmetrics

Altmetrics capture the mentions of research outputs in social media and elsewhere online, and they could potentially reveal something about the influence or impact research has made (Priem et al. 2010). Shema, Bar-Ilan, and Thelwall (2014) defined these new metrics as ‘web-based metrics for the impact of scholarly material, with an emphasis on social media outlets as sources of data’ (e.g., Twitter, Facebook, blogs, LinkedIn, YouTube, Reddit, Wikipedia, mainstream media). Altmetrics data are the aggregated views, mentions, downloads, shares, discussions, and recommendations of research outputs across the scholarly web (Fenner 2014), as well as citations and mentions in non-academic communications, such as public policy documents, online syllabi, patent applications, and clinical guidelines (Bradbury-Jones & Taylor 2014). With that, altmetrics capture a wide variety of interactions from an equally wide variety of online data sources. Furthermore, as altmetrics events occur quickly after research articles are published, altmetrics could provide faster means to measure how the public reacts to research (Barnes 2015) and possibly ‘provide evidence of the reach, uptake, and diffusion of research’ (Dinsmore, Allen & Dolby 2014).
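
As a concrete illustration of how such aggregated data can be retrieved, the sketch below queries a public altmetrics endpoint for a single DOI. This is a minimal sketch assuming the free Altmetric.com details-by-DOI endpoint; the response field names used here are assumptions based on its documented output and may change, so they should be treated as placeholders rather than a definitive API reference.

    import requests

    def fetch_altmetric_counts(doi: str) -> dict:
        """Return a few per-platform mention counts for a DOI (assumed endpoint)."""
        resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
        if resp.status_code == 404:
            return {}  # no altmetrics events recorded for this DOI
        resp.raise_for_status()
        data = resp.json()
        # The field names below are assumptions about the response schema.
        return {
            "twitter": data.get("cited_by_tweeters_count", 0),
            "news": data.get("cited_by_msm_count", 0),
            "blogs": data.get("cited_by_feeds_count", 0),
            "wikipedia": data.get("cited_by_wikipedia_count", 0),
        }

    print(fetch_altmetric_counts("10.29024/joa.21"))  # this article's own DOI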

Much of the earlier research on altmetrics has focused on studying correlations between different altmetrics and citations, finding some evidence of a connection between the two (e.g., Haustein, Costas & Larivière 2015; Mohammadi et al. 2015; Thelwall et al. 2013). Based on these results, altmetrics could perhaps roughly be divided into two groups: data sources that reflect certain aspects of new forms of scholarly communication (those most similar to citations) and those that reflect other aspects of how scientific information is shared, received, discussed, and used, possibly by a predominantly non-academic audience, and that could therefore complement more traditional metrics of research impact (those that least resemble citations). Altmetrics events identified on different platforms could thus provide evidence of different types of impact. For instance, sites such as Mendeley, which have shown high similarity between reader counts and later citation counts and are predominantly used by researchers, might be used as indicators of possible future citation counts (Thelwall 2018), thus providing evidence of future scientific impact. As we also know who the primary audiences of online syllabi are and why research articles and books are listed in syllabi, we can assume with fairly strong confidence that syllabi could be analyzed for the educational impact of research (Kousha & Thelwall 2016). In a similar way, we could analyze the mentions of research articles in clinical guidelines as evidence of impact on well-being and health. On the other hand, more general social media sites used by a wider audience (including, but not limited to, researchers) could reflect the wider societal impact of research. For altmetrics to be a reliable source of evidence of the societal impact of research, the data must contain information about 1) how one’s knowledge has increased or one’s behavior has changed because of research (derived from the two functions of science according to Russell (1952/2016)) and 2) whose knowledge or behavior has changed. If it is unclear who has been influenced by research, it is unclear what areas of society (if any) have been influenced. In addition, for altmetrics to be attributed to specific research, there must be a clear path between research outputs and the outcomes of the research. In other words, there has to be evidence of changes in behavior or increases in knowledge that can be traced back to a specific research object.
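
The correlation studies mentioned above typically use rank correlations, because both citation counts and altmetrics counts are highly skewed. A minimal sketch of the approach, on entirely hypothetical per-article counts, might look as follows; in this toy example, Mendeley readership tracks citations closely while tweet counts do not, mirroring the two groups described above.

    from scipy.stats import spearmanr

    # Hypothetical per-article counts, for illustration only.
    mendeley_readers = [120, 45, 3, 220, 18, 77, 5, 160]
    tweets           = [400, 10, 2, 15, 300, 8, 1, 25]
    citations        = [95, 40, 1, 180, 12, 60, 14, 130]

    # spearmanr returns the rank correlation coefficient and a p-value.
    rho_readers, p_readers = spearmanr(mendeley_readers, citations)
    rho_tweets, p_tweets = spearmanr(tweets, citations)
    print(f"Mendeley readers vs. citations: rho={rho_readers:.2f} (p={p_readers:.3f})")
    print(f"Tweets vs. citations:           rho={rho_tweets:.2f} (p={p_tweets:.3f})")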

The review by Sugimoto et al. (2017) highlights the heterogeneity of the online platforms from which altmetrics are generated, extending to both the underlying actions and the intentions and motivations behind those actions. For instance, the act of citing a research article on Wikipedia is most likely motivated by different intentions than mentioning or sharing a research article on Twitter or Facebook. In fact, even different actions within a single platform may be motivated by different objectives: the research article itself triggers a tweet, while a retweet is triggered by the tweet about the research article. The heterogeneity of altmetrics is perhaps its greatest promise, as different altmetrics could potentially reflect different forms and levels of engagement with research outputs (Haustein, Bowman & Costas 2016). A simple tweet could reflect awareness, while a blog entry could reflect deeper levels of engagement. Tweets, however, are restricted in length, which also limits the amount of evidence they can contain about the possible impact that specific research has made. In fact, Robinson-Garcia et al. (2017) found that the majority of tweets mentioning scientific articles are ‘devoid of original thought’ and are just mechanical acts of forwarding information; in only about 10% of their sample could evidence of original thought and commentary about the research be found. Simple mentions of research outputs, no matter who wrote them, do not necessarily disclose anything about the kind of impact the research has had or whether the person who has seen the research output has changed his or her behavior in any way. A simple tweet mentioning a research output is not evidence that the person tweeting about it has even read it, and a retweet of such a tweet may provide even less evidence. Still, Twitter is one of the biggest altmetrics data sources (as measured by the number of identified altmetrics events) (Thelwall et al. 2013), with millions of research outputs being disseminated in tweets and retweets. It has been found, however, that many of the tweets mentioning scientific articles may be sent by researchers themselves (Birkholz, Seeber & Holmberg 2015; Tsou et al. 2015; Vainio & Holmberg 2017). In fact, Sugimoto et al. (2017) argue ‘social media has rather opened a new channel for informal discussions among researchers, rather than a bridge between the research community and society at large.’ If this is the case, then altmetrics events may not express societal impact, but rather reflect new forms of scholarly communication. Moreover, many social media users prefer to remain anonymous, using nicknames or pseudonyms and refraining from revealing any personal information in their profiles. Although groups of users can be identified to some degree (Haustein & Costas 2015), determining whom or what the research has influenced remains difficult at best. In addition, a great deal of the content on social media in general, and on Twitter in particular, is generated by automated accounts or so-called bots (Gilani et al. 2017; Wojcik et al. 2018) that may be difficult to identify. How prevalent bots are in generating and disseminating scientific content is unknown, but they certainly have some effect (Haustein et al. 2015) on overall counts.
On Wikipedia, for instance, it has been found that approximately 15% of articles on average have been edited by bots (Steiner 2014), but on certain language versions this number may be much higher. For instance, the bot called Lsjbot (https://sv.wikipedia.org/wiki/Anv%C3%A4ndare:Lsjbot) created and edited over 17 million articles in the Swedish, Cebuano, and Waray language versions of Wikipedia. Bots backed by artificial intelligence are also writing hundreds of news articles for mainstream media (Tatalovic 2018), and the amount of online content created by bots is rapidly increasing due to technological advances. Bots mentioning research articles would most likely not exhibit any measurable evidence of how the research they mention has changed society, yet their acts would be counted as evidence of attention or impact if only the quantifiable events were assessed. In addition, several earlier studies have pointed out limitations associated with data collection and quality (e.g., Bornmann 2014; Wouters & Costas 2012; Zahedi, Fenner & Costas 2014). Most of the actions generating altmetrics on different platforms are identified by the mentions of unique object identifiers (such as DOIs) attached to research outputs (mainly research articles). The mention of a unique identifier shows a direct path between the specific altmetrics event and the research article, but unique identifiers, such as DOIs, are not always attached to online conversations about scientific research (Haustein 2016), nor do all research articles have DOIs attached to them. Furthermore, using unique identifiers to identify online discussions about research loses a great deal of the surrounding conversation that does not include the identifiers, so the complete conversation surrounding a research article is not captured. Given the data collection issues, the potential presence of bots, and the lack of original thought in online messages mentioning research articles, it may be challenging to find any demonstrable evidence of behavioral changes or of the impact research may have had on society. But altmetrics may also be used to show the networks in which research is being communicated and to point to the actors engaged in these conversations. Altmetrics may be best suited to map the networks where research is being disseminated and discussed and to track where and how researchers engage with the public (Haustein in press; Holmberg et al. 2014; Robinson-Garcia, van Leeuwen & Rafols 2017) and, through that, to hint at the societal influences of research.
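
To make the identifier-tracking limitation concrete: most aggregators detect altmetrics events by matching unique identifiers such as DOIs in the text of a post. The minimal sketch below uses the DOI pattern recommended by Crossref for matching modern DOIs; as the example shows, a mention that carries no identifier is simply never captured, regardless of how substantive it is.

    import re

    # Crossref's recommended pattern for modern DOIs (matches the vast majority).
    DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:a-zA-Z0-9]+")

    post = ("Interesting read on research impact: https://doi.org/10.29024/joa.21 "
            "-- see also the Leiden Manifesto piece in Nature (no link given).")

    print(DOI_PATTERN.findall(post))
    # ['10.29024/joa.21'] -- the Nature piece is discussed but never captured,
    # because no identifier appears in the text.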

5 Conclusions

The goal of assessing the societal impact of research is to identify and measure how a specific research document has been used and what kind of influence it has had, not just within academia, but also beyond. Altmetrics are currently being investigated for that purpose, to determine if and how they could be used to assess the societal impact of research. This leads to the following question: Would an evaluation system that took societal impact into account favor researchers who are better at communicating their research or who have the means to employ professionals to design and execute a communication strategy? There is a real danger that some researchers would begin to manipulate the attention their work receives online if altmetrics indicators were integrated into the scientific reward system. This could happen in many forms, including writing document titles that appeal to larger audiences, which would make them more likely to be shared, or creating automated bots that disseminate information about research articles. Before any altmetrics can be used for research assessment, instruments to detect this kind of intentional manipulation of altmetrics acts need to be in place. Furthermore, it needs to be specified what would count as intentional manipulation and what would count as normal scientific communication.

Impact of research can be considered as the outcome of research, or how the increased knowledge from research has led to a change in some area of society, or possibly in science itself. As discussed earlier in this text, a requirement of impact (as defined by funders) is that it is demonstrable and that the influences of research can be identified. This entails that the path from specific research to its outcomes or influences be identifiable and demonstrable. With an increasing demand for evidence of the wider societal impact of research, many researchers are investigating altmetrics as a potential data source for evidence of societal impact and creating new indicators that utilize these new online data sources. The potential societal impacts of research are, however, often less tangible than the scientific impact of research, which can be traced through citations. While DOIs, when present, can be used to identify mentions of specific research articles, for many types of altmetrics it can be difficult to determine who the users or groups of users generating the altmetrics by interacting with research outputs are, and it may be equally difficult to determine their motivations for doing so. This makes it difficult to judge whom the research has potentially influenced or who has at least become aware of it. Furthermore, the actions generating altmetrics may often be ‘devoid of original thought’, as the users are only forwarding and sharing information about scientific articles without discussing them, making it difficult to judge what kind of influence, if any, the research has had. Some researchers have even suggested altmetrics should not be considered impact metrics but rather indicators of attention (Crotty 2014; Sugimoto 2015). Another challenge we must not forget is the dynamic nature of the web and, with that, of altmetrics. Any assessment using altmetrics would be an analysis of a situation at a specific moment in time, a snapshot of online data, while more dynamic approaches would be required to capture the dynamic nature of the data and of the actions generating altmetrics on different online platforms. With these challenges, two important questions remain, the answers to which determine the applicability and reliability of altmetrics for research assessment purposes: 1) Who are the users generating altmetrics acts, and 2) Where is the evidence of impact? Ideally, an altmetrics event would include information and evidence about both, but this rarely seems to be the case. Perhaps due to its bibliometric roots, altmetrics research often focuses on quantifying online events connected to research outputs and on creating new indicators from the collected data. But because of the many uncertainties with the data and its dynamic nature, we may need to come up with new approaches and new research questions that are better suited to take full advantage of the rich data that altmetrics offer. It may not be possible to aggregate meaningful indicators from the online data that is available; instead, the data may be able to answer new types of questions about new forms of scholarly communication and the societal impact of research, questions that we have not yet come to ask. Aggregated impact factors may not be a fruitful way to utilize this rich data; our best bet may be to use the data to examine the social networks in which impact is created.

On the other hand, demands from funders that researchers plan for ‘pathways to impact’ or demonstrate a plan for communicating research findings to audiences beyond academia force researchers to think beyond specific research outputs and to think about their potential research outcomes as narratives. Researchers are thus increasingly expected to engage with the public and to communicate their research to audiences beyond academia. Such engagement, even if not measurable, certainly leads to increased societal impact.

Competing Interests

The authors have no competing interests to declare.

References

  1. Ball, P. (2015). The best and oddest science-inspired music. Retrieved June 27, 2018, from http://www.bbc.com/future/story/20150320-the-best-and-oddest-science-music 

  2. Barnes, C. (2015). The Use of Altmetrics as a Tool for Measuring Research Impact. Australian Academic and Research Libraries, 46(2), 121–134. DOI: https://doi.org/10.1080/00048623.2014.1003174 

  3. Birkholz, J. M., Seeber, M., & Holmberg, K. (2015). Drivers of higher education institutions’ visibility: A study of UK HEIs social media use vs. organizational characteristics. In Proceedings of the 2015 International Society for Scientometrics and Informetrics (pp. 502–513). Istanbul, Turkey. Retrieved June 27, 2018, from http://www.issi2015.org/files/downloads/all-papers/0502.pdf 

  4. Bond, A., & Pope, J. (2012). The state of the art of impact assessment in 2012. Impact Assessment and Project Appraisal, 30(1), 1–4. DOI: https://doi.org/10.1080/14615517.2012.669140 

  5. Bornmann, L. (2013). What is societal impact of research and how can it be assessed? A literature survey. Journal of the American Society for Information Science and Technology, 64(2), 217–233. DOI: https://doi.org/10.1002/asi.22803 

  6. Bornmann, L. (2014). Validity of altmetrics data for measuring societal impact: A study using data from Altmetric and F1000Prime. Journal of Informetrics, 8, 935–950. DOI: https://doi.org/10.1016/j.joi.2014.09.007 

  7. Bradbury-Jones, C., & Taylor, J. (2014). Applying social impact assessment to nursing research. Nursing Standard, 28(48), 45–49. DOI: https://doi.org/10.7748/ns.28.48.45.e8262 

  8. Brenner, S. (1998). The Impact of Society on Science. Science, 282(5393), 1411–1412. DOI: https://doi.org/10.1126/science.282.5393.1411 

  9. Butler, L. (2007). Assessing university research: a plea for a balanced approach. Science and Public Policy, 34(8), 565–574. DOI: https://doi.org/10.3152/030234207X254404 

  10. Buxton, M., & Hanney, S. (1996). How can payback from health services research be assessed? Journal of Health Services Research, 1(1), 35–43. DOI: https://doi.org/10.1177/135581969600100107 

  11. Cole, N. S. (1973). Bias in Selection. Journal of Educational Measurement, 10, 237–255. DOI: https://doi.org/10.1111/j.1745-3984.1973.tb00802.x 

  12. Cronin, B. (1984). The Citation Process: The Role and Significance of Citations in Scientific Communication. London: Taylor Graham. 

  13. Crotty, D. (2014). Altmetrics: Finding meaningful needles in the data haystack. Serials Review, 40, 141–146. DOI: https://doi.org/10.1080/00987913.2014.947839 

  14. De Bellis, N. (2009). Bibliometrics and citation analysis: from the Science citation index to cybermetrics. Lanham, MD: Scarecrow Press. 

  15. Dinsmore, A., Allen, L., & Dolby, K. (2014). Alternative Perspectives on Impact: The Potential of ALMs and Altmetrics to Inform Funders about Research Impact. PLoS Biology, 12(11). DOI: https://doi.org/10.1371/journal.pbio.1002003 

  16. Donovan, C., & Hanney, S. (2011). The “Payback Framework” explained. Research Evaluation, 20(3): 181–183. DOI: https://doi.org/10.3152/095820211X13118583635756 

  17. Economic and Social Research Council. (2016). What is Impact? Retrieved June 27, 2018, from http://www.esrc.ac.uk/funding-and-guidance/impact-toolkit/what-how-and-why/what-is-research-impact.aspx 

  18. Eskridge, R. (2014). The Enduring Relationship of Science and Art. Adapted from a lecture by Robert Eskridge titled Exploration and the Cosmos: The Consilience of Science and Art. Retrieved June 27, 2018, from http://artic.edu/aic/education/sciarttech/2a1.html 

  19. Fenner, M. (2014). Altmetrics and Other Novel Measures for Scientific Impact. In S. Bartling, & S. Friesike (Eds.), Opening Science (pp. 179–189). Cham: Springer International Publishing. DOI: https://doi.org/10.1007/978-3-319-00026-8_12 

  20. Furnham, A. F. (1990). Quantifying quality: An argument in favor of citation counts. Journal of Further and Higher Education, 14(2), 105–110. DOI: https://doi.org/10.1080/0309877900140208 

  21. Gilani, Z., Crowcroft, J., Farahbakhsh, R., & Tyson, G. (2017). The implications of Twitterbot generated data traffic on networked systems. In: The Proceedings of the SIGCOMM Posters and Demos (pp. 51–53). Retrieved June 27, 2018, from DOI: https://doi.org/10.1145/3123878.3131983 

  22. Godin, B., & Doré, C. (2005). Measuring the Impacts of Science: Beyond the Economic Dimension. Urbanisation INRS, Culture et Société. Helsinki, Finland: Helsinki Institute for Science and Technology Studies. Retrieved June 27, 2018, from http://www.csiic.ca/PDF/Godin_Dore_Impacts.pdf 

  23. Green, T. (2019). Maximizing dissemination and engaging readers: The other 50% of an author’s day: A case study. Learned Publishing, 32, 395–405. DOI: https://doi.org/10.1002/leap.1251 

  24. Hamers, J. P., & Visser, A. P. (2012). Editorial: Societal impact – an important performance indicator of nursing research. Journal of Clinical Nursing, 21(21–22), 2997–2999. DOI: https://doi.org/10.1111/jocn.12038 

  25. Hanney, S. R., Grant, J., Wooding, S., & Buxton, M. J. (2004). Proposed methods for reviewing the outcomes of health research: The impact of funding by the UK’s’ Arthritis Research Campaign. Health Research Policy and Systems, 2(1), 4. DOI: https://doi.org/10.1186/1478-4505-2-4 

  26. Haustein, S. (2016). Grand challenges in altmetrics: Heterogeneity, data quality and dependencies. Scientometrics, 108(1), 413–423. DOI: https://doi.org/10.1007/s11192-016-1910-9 

  27. Haustein, S. (in press). Scholarly Twitter metrics. In W. Glänzel, H. F. Moed, U. Schmoch, & M. Thelwall (Eds.), Handbook of Quantitative Science and Technology Research. Springer. 

  28. Haustein, S., Bowman, T. D., & Costas, R. (2016). Interpreting “altmetrics”: Viewing acts on social media through the lens of citation and social theories. In Sugimoto, C. R. (Ed.), Theories of Informetrics and Scholarly Communication (pp. 372–405). Berlin: De Gruyter Mouton. 

  29. Haustein, S., Bowman, T. D., Holmberg, K., Tsou, A., Sugimoto, C. R., & Larivière, V. (2015). Tweets as impact indicators: Examining the implications of automated “bot” accounts on Twitter. Journal of the Association for Information Science and Technology, 67(1), 232–238. DOI: https://doi.org/10.1002/asi.23456 

  30. Haustein, S., & Costas, R. (2015). Identifying Twitter audiences: Who is tweeting about scientific papers? ASIS&T SIG/MET Metrics 2015 Workshop. Retrieved June 18, 2019, from https://www.asist.org/SIG/SIGMET/wp-content/uploads/2015/10/sigmet2015_paper_11.pdf 

  31. Haustein, S., Costas, R., & Larivière, V. (2015). Characterizing social media metrics of scholarly papers: The effect of document properties and collaboration patterns. PLoS One, 10, e0120495. DOI: https://doi.org/10.1371/journal.pone.0120495 

  32. Hicks, D., & Wouters, P. (2015). The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431. Retrieved June 27, 2018, from DOI: https://doi.org/10.1038/520429a 

  33. Holbrook, J. B., & Frodeman, R. (2011). Peer review and the ex ante assessment of societal impacts. Research Evaluation, 20(3), 239–246. DOI: https://doi.org/10.3152/095820211X12941371876788 

  34. Holi, M. T., Wickramasinghe, R., & van Leeuwen, M. (2008). Metrics for the Evaluation of Knowledge Transfer Activities at Universities. Retrieved June 27, 2018, from http://ec.europa.eu/invest-in-research/pdf/download_en/library_house_2008_unico.pdf 

  35. Holmberg, K., Bowman, T. D., Haustein, S., & Peters, I. (2014). Astrophysicists’ Conversational Connections on Twitter. PLoS ONE, 9(8). DOI: https://doi.org/10.1371/journal.pone.0106086 

  36. Hung, W. C. (2012). Measuring the use of public research in firm R&D in the Hsinchu Science Park. Scientometrics, 92(1), 63–73. DOI: https://doi.org/10.1007/s11192-012-0726-5 

  37. International Association for Impact Assessment. (2009). What Is Impact Assessment? Retrieved June 27, 2018, from http://www.iaia.org/uploads/pdf/What_is_IA_web.pdf 

  38. Joly, P.-B., Gaunand, A., Colinet, L., Larédo, P., Lemarié, S., & Matt, M. (2015). ASIRPA: A comprehensive theory-based approach to assessing the societal impacts of a research organization. Working Papers 2015–04, Grenoble Applied Economics Laboratory (GAEL). Retrieved June 27, 2018, from https://ideas.repec.org/p/gbl/wpaper/2015-04.html. DOI: https://doi.org/10.1093/reseval/rvv015 

  39. Kollmuss, A., & Agyeman, J. (2002). Mind the Gap: Why do people act environmentally and what are the barriers to pro-environmental behavior? Environmental Education Research, 8(3), 239–260. DOI: https://doi.org/10.1080/13504620220145401 

  40. Kousha, K., & Thelwall, M. (2016). An automatic method for assessing the teaching impact of books from online academic syllabi. Journal of the Association for Information Science and Technology, 67(12), 2993–3007. DOI: https://doi.org/10.1002/asi.23542 

  41. Lawani, S. M. (1986). Some bibliometric correlates of quality in scientific research. Scientometrics, 9(1), 13–25. DOI: https://doi.org/10.1007/BF02016604 

  42. Leydesdorff, L. (1998). Theories of Citation? Scientometrics, 43(1), 5–25. DOI: https://doi.org/10.1007/BF02458391 

  43. MacRoberts, M. H., & MacRoberts, B. R. (1989). Problems of citation analysis: A critical review. Journal of the American Society for Information Science, 40(5), 342–349. DOI: https://doi.org/10.1002/(SICI)1097-4571(198909)40:5<342::AID-ASI7>3.0.CO;2-U 

  44. McNamara, C. (2015). Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources. Retrieved June 27, 2018, from http://managementhelp.org/evaluation/outcomes-evaluation-guide.htm 

  45. Merton, R. K. (1968). The Matthew effect in science. Science, 159(3810), 56–63. DOI: https://doi.org/10.1126/science.159.3810.56 

  46. Miettinen, R., Tuunainen, J., & Esko, T. (2015). Epistemological, Artefactual and Interactional–Institutional Foundations of Social Impact of Academic Research. Minerva, 53(3), 257–277. DOI: https://doi.org/10.1007/s11024-015-9278-1 

  47. Moed, H. F., Burger, W. J. M., Frankfort, J. G., & Van Raan, F. J. (1985). The use of bibliometric data for the measurement of university research performance. Research Policy, 14(3), 131–149. DOI: https://doi.org/10.1016/0048-7333(85)90012-5 

  48. Moed, H. F., De Bruin, R. E., & Van Leeuwen, T. N. (1995). New bibliometric tools for the assessment of national research performance: Database description, overview of indicators and first applications. Scientometrics, 33(3), 381–422. DOI: https://doi.org/10.1007/BF02017338 

  49. Mohammadi, E., Thelwall, M., Haustein, S., & Larivière, V. (2015). Who reads research articles? An altmetrics analysis of Mendeley user categories. Journal of the Association for Information Science and Technology, 66, 1832–1846. DOI: https://doi.org/10.1002/asi.23286 

  50. Murugesan, P., & Moravcsik, M. J. (1978). Variation of the nature of citation measures with journals and scientific specialties. Journal of the American Society for Information Science, 29(3), 141–147. DOI: https://doi.org/10.1002/asi.4630290307 

  51. National Science Foundation. (2013). Chapter III – NSF Proposal Processing and Review. Retrieved June 27, 2018, from https://www.nsf.gov/pubs/policydocs/pappguide/nsf13001/gpg_3.jsp 

  52. Nutley, S., Walter, I., & Davies, H. (2007). Using evidence: How research can inform public services. Bristol: Policy Press. DOI: https://doi.org/10.2307/j.ctt9qgwt1 

  53. Patterson, M. S., & Harris, S. (2009). Editorial: Are higher quality papers cited more often? Physics in Medicine and Biology, 54(17). DOI: https://doi.org/10.1088/0031-9155/54/17/E01 

  54. Penfield, T., Baker, M. J., Scoble, R., & Wykes, M. C. (2014). Assessment, evaluations and definitions of research impact: A review. Research Evaluation, 23(1), 21–32. DOI: https://doi.org/10.1093/reseval/rvt021 

  55. Pitt, A. (2000). The cultural impact of science in France: Ernest Renan and the “Vie de Jesus”. Historical Journal, 43(1), 79–101. DOI: https://doi.org/10.1017/S0018246X99008948 

  56. Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: A manifesto. Retrieved June 27, 2018, from http://altmetrics.org/manifesto/ 

  57. Research Councils UK. (2014). Pathways to Impact. Retrieved June 27, 2018, from http://www.rcuk.ac.uk/innovation/impacts/ 

  58. Research Excellence Framework 2014. (2011). Assessment framework and guidance on submissions. Retrieved June 27, 2018, from http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf 

  59. Rip, A. (2000). Higher forms of nonsense. European Review, 8(4), 467–485. DOI: https://doi.org/10.1017/S1062798700005032 

  60. Robinson-Garcia, N., Costas, R., Isett, K., Melkers, J., & Hicks, D. (2017). The unbearable emptiness of tweeting—About journal articles. PLoS ONE, 12(8), e0183551. DOI: https://doi.org/10.1371/journal.pone.0183551 

  61. Robinson-Garcia, N., van Leeuwen, T. N., & Rafols, I. (2017). Using altmetrics for contextualised mapping of societal impact: From hits to networks. Science and Public Policy. Available at SSRN: https://ssrn.com/abstract=2932944. DOI: https://doi.org/10.2139/ssrn.2932944 

  62. Russell, B. (2016). The Impact of Science on Society (p. 120). Abingdon, Oxon: Routledge Classics. 

  63. Russell Group Papers. (2012). The social impact of research conducted in Russell Group universities. Russell Group Papers 3. 

  64. Rutherford, J. F. (1987). The Impact of Science on Precollege Education: A Glancing Blow. Science Communication, 9(2), 297–310. DOI: https://doi.org/10.1177/0164025987009002008 

  65. Samuel, G. N., & Derrick, G. E. (2015). Societal impact evaluation: Exploring evaluator perceptions of the characterization of impact under the REF2014. Research Evaluation, 24, 229–241. DOI: https://doi.org/10.1093/reseval/rvv007 

  66. Shapiro, S., & Taylor, J. (2013). Federal R&D: Analyzing the Shift From Basic and Applied Research Toward Development (pp. 1–50). Stanford University. 

  67. Shema, H., Bar-Ilan, J., & Thelwall, M. (2014). Scholarly blogs are a promising altmetric source. Research Trends, 37, 1–4. 

  68. SIAMPI. (2012). Social Impact Assessment Methods through Productive Interactions (SIAMPI). Retrieved June 27, 2018, from http://www.siampi.eu/Content/SIAMPI_Final%20report.pdf 

  69. Siika-aho, P. (2015). Yliopistojen yhteiskunnallinen vaikuttavuus. Yhteiskunnallisen vuorovaikutuksen (YVV) seuranta ja palkitseminen [The societal impact of universities: Monitoring and rewarding societal interaction]. In Finnish Ministry of Education and Culture publications ed., Vastuullinen ja vaikuttava: Tulokulmia korkeakoulujen yhteiskunnalliseen vaikuttavuuteen [Responsible and effective: Perspectives on the societal impact of higher education institutions] (pp. 260–275). Retrieved June 27, 2018, from http://www.minedu.fi/export/sites/default/OPM/Julkaisut/2015/liitteet/okm13.pdf 

  70. Steiner, T. (2014). Bots vs. Wikipedians, anons vs. logged-ins (redux). In Proceedings of The International Symposium on Open Collaboration—OpenSym ’14 (pp. 1–7). New York, USA: ACM Press. DOI: https://doi.org/10.1145/2641580.2641613 

  71. Streatfield, D., & Markless, S. (2009). What is impact assessment and why is it important? Performance Measurement and Metrics, 10(2), 134–141. DOI: https://doi.org/10.1108/14678040911005473 

  72. Sugimoto, C. R. (2015, June 24). “Attention is not impact” and other challenges for altmetrics. Retrieved June 27, 2018, from http://exchanges.wiley.com/blog/2015/06/24/attention-is-not-impact-and-other-challenges-for-altmetrics/#comment-2097762855 

  73. Sugimoto, C. R., Work, S., Larivière, V., & Haustein, S. (2017). Scholarly use of social media and altmetrics: A review of the literature. Journal of the Association for Information Science and Technology, 68(9), 2037–2062. DOI: https://doi.org/10.1002/asi.23833 

  74. Tatalovic, M. (2018). AI writing bots are about to revolutionise science journalism: We must share how this is done. Journal of Science Communication, 17(1). DOI: https://doi.org/10.22323/2.17010501 

  75. Thelwall, M. (2018). Early Mendeley readers correlate with later citation counts. Scientometrics. DOI: https://doi.org/10.1007/s11192-018-2715-9 

  76. Thelwall, M., Haustein, S., Larivière, V., & Sugimoto, C. R. (2013). Do altmetrics work? Twitter and ten other social web services. PLoS ONE, 8(5), e64841. DOI: https://doi.org/10.1371/journal.pone.0064841 

  77. The World Health Organization. (2013). The World Health Report 2013: Research for Universal Health Coverage. Geneva: WHO. DOI: https://doi.org/10.30875/c5be4728-en 

  78. Tsou, A., Bowman, T. D., Ghazinejad, A., & Sugimoto, C. R. (2015). Who tweets about science? In Proceedings of the 2015 International Society for Scientometrics and Informetrics (pp. 95–100). Istanbul, Turkey. Retrieved June 27, 2018, from http://www.issi2015.org/files/downloads/all-papers/0095.pdf 

  79. Tullos, D. (2009). Assessing the influence of environmental impact assessments on science and policy: An analysis of the Three Gorges Project. Journal of Environmental Management, 90, S208–S223. DOI: https://doi.org/10.1016/j.jenvman.2008.07.031 

  80. Vainio, J., & Holmberg, K. (2017). Highly tweeted science articles – who tweets them? An analysis of Twitter user profile descriptions. Scientometrics, 112(1), 345–366. DOI: https://doi.org/10.1007/s11192-017-2368-0 

  81. Van der Meulen, B. (2010). Evaluating the societal relevance of academic research: A guide. KNAW. Retrieved June 27, 2018, from http://repository.tudelft.nl/view/ir/uuid:8fa07276-cf52-41f3-aa70-a71678234424/ 

  82. Van Looy, B., Callaert, J., Debackere, K., & Verbeek, A. (2003). Patent related indicators for assessing knowledge-generating institutions: Towards a contextualized approach. Journal of Technology Transfer, 28, 53–61. DOI: https://doi.org/10.1023/A:1021630803637 

  83. Vanclay, F., Esteves, A. M., Aucamp, I., & Franks, D. M. (2015). Social Impact Assessment: Guidance for assessing and managing the social impacts of projects (p. 170). Fargo, ND: International Association for Impact Assessment. Retrieved June 27, 2018, from http://espace.library.uq.edu.au/view/UQ:355365/UQ355365.pdf 

  84. Verbeek, A., Debackere, K., & Luwel, M. (2003). Science cited in patents: A geographic “flow” analysis of bibliographic citation patterns in patents. Scientometrics, 58(2), 241–263. DOI: https://doi.org/10.1023/A:1026232526034 

  85. Walter, A. I., Helgenberger, S., Wiek, A., & Scholz, R. W. (2007). Measuring societal effects of transdisciplinary research projects: Design and application of an evaluation method. Evaluation and Program Planning, 30(4), 325–338. DOI: https://doi.org/10.1016/j.evalprogplan.2007.08.002 

  86. Westat, J. F. (2010). The 2010 User-Friendly Handbook for Project Evaluation. Retrieved June 27, 2018, from http://www.evalu-ate.org/resources/doc-2010-nsfhandbook/ 

  87. Wojcik, S., Messing, S., Smith, A., Rainie, L., & Hitlin, P. (2018, April 9). Bots in the Twittersphere. Pew Research Center. Retrieved June 27, 2018, from http://www.pewinternet.org/2018/04/09/bots-in-the-twittersphere/ 

  88. Wolf, B., Lindenthal, T., Szerencsits, J. M., Holbrook, J. B., & Heß, J. (2013). Evaluating Research beyond Scientific Impact. How to Include Criteria for Productive Interactions and Impact on Practice and Society. Gaia, 22(2), 11. DOI: https://doi.org/10.14512/gaia.22.2.9 

  89. Wolf, B., Szerencsits, M., Gaus, H., Müller, C. E., & Heß, J. (2014). Developing a Documentation System for Evaluating the Societal Impact of Science. Procedia Computer Science, 33, 289–296. DOI: https://doi.org/10.1016/j.procs.2014.06.046 

  90. Wouters, P., & Costas, R. (2012). Users, narcissism and control – Tracking the impact of scholarly publications in the 21st century. In Proceedings of the 17th International Conference on Science and Technology Indicators, 2, 847–857. Retrieved June 27, 2018, from http://2012.sticonference.org/Proceedings/vol2/Wouters_Users_847.pdf 

  91. Zahedi, Z., Fenner, M., & Costas, R. (2014). How consistent are altmetrics providers? Study of 1000 PLOS ONE publications using the PLOS ALM, Mendeley and Altmetric.com APIs. In Altmetrics 14 workshop at the Web Science Conference, Bloomington, USA. Retrieved June 27, 2018, from http://files.figshare.com/1945874/How_consistent_are_altmetrics_providers__5_.pdf 

  92. Zuckerman, H. (1978). Theory choice and problem choice in science. Sociological Inquiry, 48, 65–95. DOI: https://doi.org/10.1111/j.1475-682X.1978.tb00819.x 