Altmetrics for Research Impact Actuation (ARIA): An Impact Tracking Tool for Multidisciplinary and Role-based Cross-Metric Analysis

Authors: Aarthy Nagarajan, Aravind Sesagiri Raamkumar, Mojisola Erdt, Harsha Vijayakumar, Feiheng Luo, Han Zheng, Yin-Leng Theng


Altmetrics are new-age research impact metrics that hold the promise of looking beyond traditional methods of measuring research impact. Altmetrics are real-time metrics that show the outreach of scientific research among audiences from different academic and non-academic backgrounds. Several altmetric systems have been developed in the last few years, either as cumulative exploratory tools that showcase the different metrics from the various altmetric sources, or as part of existing publisher systems and databases. In the first part of this study, we analyzed the features of nine altmetric systems, two academic social networking systems, and five other types of systems, including digital libraries, publisher systems, and databases. Results of the feature analysis indicated that the overall coverage of individual features by the systems is moderate, with the maximum coverage being 27 out of the 33 features analyzed. Features like the visualization of metrics, altmetric sources, and bibliometric sources were not found in many systems. The identified gaps were then addressed in the second part of the study, wherein we developed a prototype system called Altmetrics for Research Impact Actuation (ARIA). We also conducted a user evaluation study of the prototype, the outcome of which was used to improve certain features of ARIA based on user feedback.
Keywords: altmetrics; altmetric systems; feature analysis; scholarly communication; social media; bibliometrics
Submitted on 07 Nov 2020            Accepted on 04 Jan 2021

1. Introduction

It has become essential for academic administrators and funding agencies to measure the research impact of individual researchers and research centers. Analogous to how a growing population strains natural resources, the number of tenure-track positions has reached its limit against the rising number of PhD graduates (Larson, Ghaffarzadegan & Xue 2014). In addition, the research funding scenario is not encouraging either. Academia is moving towards competitive research funding rather than institutional block grants; the few countries that have retained block grants have also made them performance-sensitive (University World News 2013).

While assessing research impact is the need of the hour, traditional impact metrics still dominate major impact-based decisions in all academic spheres. Much confusion still surrounds the introduction of altmetrics into such decision-making processes: there is uncertainty regarding the quality of data, inconsistency in metrics across altmetric systems, potential concerns about gaming, and skepticism regarding how they relate to bibliometrics (Haustein 2016). However, altmetrics provide a broader and timelier insight into research and its impact on society.

Due to the emerging interest in altmetrics, several tools have been developed over the last few years. Some are aggregators of metrics from various altmetric data sources, while others have coupled these metrics with their existing applications, such as publisher systems and digital libraries. Several studies, such as Peters et al. (2014), Erdt et al. (2016), and Sugimoto et al. (2017), have described and compared these tools, reviewing the related literature and corresponding methodologies. A few blogs, such as Priem et al. (2010), Webster (n.d.) and Wee (2014), also outline these tools in detail. Nonetheless, the existing tools are neither comprehensive enough in their coverage, nor do they come with the necessary aid to help alleviate researchers’ confusion and concerns surrounding the variety of metrics. What is needed is a research metric system that not only brings together metrics of various types from various sources, but also puts them in perspective and in comparison with each other, so that researchers and research administrators can draw useful insights from them. Driven by this motivation, we decided to conduct a detailed feature analysis of the different altmetric systems, which has not been done before. Our objective for the feature analysis study was to identify gaps in existing systems which could be used to design new features for a cross-metric analysis system.

1.1 Related work

The altmetric systems available to researchers vary a great deal in terms of the features offered and the social media sources tracked. A comparative analysis of the different altmetric tools revealed that these variations could lead to differences in the coverage of publications and the measured impact of research (Jobmann et al. 2014). An analysis of these tools, examining the features covered and the data sources from which various events were collected, has also been conducted (Erdt et al. 2016). That study identified some prevalent user interface features, such as search and filter options, and coverage of article-level and artifact-level metrics. Only a few tools were observed to offer visualizations, most of which contained only raw counts. The tools were found to derive their metrics mainly from altmetric sources such as Mendeley (2008), Twitter (Dorsey, Glass & Stone 2006), Facebook (Zuckerberg, Saverin, McCollum & Moskovitz 2004), other academic social media, publisher sites, and digital repositories. A few of the tools used bibliometric sources such as Scopus (Elsevier Scopus n.d.) and Google Scholar (Google 2004). Nevertheless, none of the tools offered an option to perform cross-metric analysis. The paper also presented the results of a meta-analysis of cross-metric validation studies: the results illustrated a weak correlation between altmetrics and bibliometrics, while also highlighting the diversity of these studies and the challenges involved in comparing the two types of metrics. Another study categorized the tools as academic and non-academic, along with a summary and evaluation of the different tools (Roemer 2015). Further inquiry into the advantages and disadvantages of these tools not only gave an overview but also discussed how researchers could best use them to build an online presence through their research work (Nelson 2016). Altmetric.com (2012) was also studied in detail, elaborating on the system, the different data sources covered, and potential limitations (Robinson-García et al. 2014).

In this study, we conducted a detailed feature analysis of different research metric systems, which has not been attempted before. We then exploited the results of the feature analysis study to implement a system aiming to incorporate some of the important characteristics of metric systems which were found to be missing during the study.

1.2 Research objectives

Our main objective was to design and implement a research metric system that fills some of the gaps in existing research metric systems, while following the prototype strategy for information systems development (Bally, Brittan & Wagner 1977), which is easily adaptable. We also evaluated the system through a usability study that measured user satisfaction and effectiveness, using the feedback to optimize the system features for a better user experience. Hence, our objectives can be summarized as follows, each of which is presented in a different section of the paper:

  1. To identify the gaps in existing research metric systems, which can be translated into requirements for building ARIA.
  2. To design and implement ARIA based on the prototype strategy of systems development.
  3. To conduct a usability study of ARIA, the results of which will be used as feedback to optimize the system features.

All steps involved in the feature analysis, design, development and usability study of ARIA were executed between July 2017 and September 2018.

2. Feature Analysis of Altmetric Systems

There is an increasing number of varied metric tools for assessing research impact using altmetrics. Each of these tools tracks different social media platforms at different granularities, leading to inconsistencies in metrics and features across systems (Jobmann et al. 2014). In addition, there are other concerns related to uncertainty regarding the quality of data (Haustein 2016), potential concerns about gaming (Lin 2012), and skepticism regarding how well the metrics relate to each other (Costas, Zahedi & Wouters 2015). Some of these tools are aggregators of metrics from various data sources, while others have coupled these metrics with their existing applications, such as publisher systems and digital libraries (Sutton 2014). Our aim was to conduct a feature analysis that identifies useful features that would be good to have in any metric tool and highlights potential gaps that could be addressed in future systems.

2.1 Method

In this section, we elaborate on the method for the selection of systems and features, followed by the coding procedure for the feature analysis.

2.1.1 Systems selection

We prepared an initial list of 32 systems, which we categorized into 15 altmetric systems, 10 publisher systems, 5 digital libraries, and 2 academic social networking systems. We tagged as altmetric systems those that served as a source of research impact metrics collected from social media. Systems that enabled communication, networking, and sharing amongst researchers were named academic social networking systems. Systems belonging to publishing companies were grouped as publisher systems; certain publisher systems were instead labeled as databases if they served as collections of scholarly content such as articles and books. There were challenges concerning access to some of these systems and system features, due to registration issues and the availability of sufficient research publications in the researcher accounts used for the study. Hence, the shortlisted selection included nine altmetric systems, two academic social networking systems, and five other systems consisting of a mix of publisher systems and databases, as shown in Table 1.

Table 1

List of systems selected.

System Description Type of account Type of system

Impact Story Open-source tool that helps researchers explore and share the online impact of their research (Priem & Piwowar 2011) Researcher’s private account Altmetric system
Kudos Tool that helps researchers disseminate research and improve their impact metrics (Kudos, 2014) Researcher’s private account Altmetric system
Aminer Tool providing search and mining services on researcher social networks (Tang 2006) Researcher’s private account Altmetric system
Publish or Perish Software that uses Google Scholar data to compute different metrics (Harzing n.d.) Researcher’s private account Altmetric system
Figshare Digital repository where researcher can save and share research work (Figshare 2011) Researcher’s private account Altmetric system
Open Researcher and Contributor ID (ORCID) Non-profit organization that provides digital identifiers to researchers (ORCID 2012) Researcher’s private account Altmetric system
Mendeley Free reference management system and an academic social network (Mendeley 2008) Researcher’s private account Altmetric system
Altmetric.com Aggregator that collects metrics surrounding scholarly output from various sources and presents a single cumulative score (Altmetric.com 2012) Publicly available information Altmetric system
Plum Analytics Tool that uses metrics from various sources to provide meaningful insights into research impact (Michalek 2011) University library’s access to Elton B. Stephens Company (EBSCO) information services Altmetric system
ResearchGate Social networking site for researchers to share their research work and find collaborators (Madisch & Hofmayer 2008) Researcher’s private account Academic social networking system
Academia.edu Platform to share and follow research (Price 2008) Researcher’s private account Academic social networking system
Elsevier ScienceDirect Platform of peer-reviewed scholarly literature (Elsevier ScienceDirect n.d.) Researcher’s private account Database
Elsevier Scopus Database of peer-reviewed literature providing a comprehensive overview of research outputs (Elsevier Scopus n.d.) Researcher’s private account Database
Springer (Springer Nature) Enables researchers to share, find and access research work (Springer Nature n.d.) Researcher’s private account Publisher system
Emerald Insight Database consisting of full-text articles, reviews and other scholarly content (Emerald Insight n.d.) Researcher’s private account Database
Wiley Online Library Collection of online journals, books and other scholarly outputs (Wiley n.d.) Researcher’s private account Database

We had also considered Webometric Analyst (Statistical Cybermetrics Research Group n.d.), Snowball Metrics (Elsevier Snowball Metrics n.d.), F1000Prime (Faculty of 1000 2012), Public Library of Science Article-Level Metrics (PLoS ALM) (PLOS 2009a), Newsflo (Elsevier Newsflo 2012) and Scholarometer (Indiana University n.d.), but could not include them in the study for the following reasons. Webometric Analyst is not an impact assessment tool but is rather used for network analysis. Snowball Metrics is unique compared to the other systems in that its metrics are standardized, created, and owned by universities to assess and compare their research performance. Scholarometer was down during the study period, as the Scholarometer team was working on a new version of the tool. F1000Prime is not freely available, and Newsflo is not a standalone tool but is integrated with other platforms such as SciVal (Elsevier SciVal n.d.), Mendeley (Mendeley 2008), and Pure (Elsevier Pure n.d.). PLoS ALM’s Lagotto system (PLOS 2009b) required technical expertise to set up, which was not feasible at the time of the study.

We had considered other publisher systems like Nature Publishing Group (Nature Research n.d.), Biomed Central (BioMed Central n.d.), Elsevier (Elsevier n.d.), HighWire (HighWire n.d.), Springer Link (Springer n.d.), Emerald Group Publishing (Emerald n.d.), and PLoS (PLOS n.d.), though we could not include them in the study due to access issues or the absence of sufficient content: the researchers whose accounts were used did not have sufficient publications with these publishers for the coder to access their metric-related features. Other digital libraries, like the Association for Information Science and Technology (ASIS&T) digital library (ASIST n.d.) and the Association for Information Systems (AIS) electronic library (AIS n.d.), are not freely available. We faced login issues with the Social Science Research Network (SSRN) (SSRN n.d.). We were also unable to get access to CrossRef (Crossref 1999), as accounts were available only to publishers.

2.1.2 Feature selection

About 140 features spanning 24 categories were initially chosen for exploration in each of the above systems. Features that were too specialized and found in few systems, such as spam detection, rankings, and metric-related features like the H5-index, G-index, Sociability, and Mean Average Precision (MAP), were not included in the study; metric features such as the H5-index and G-index are also not widely used by researchers. We included the Altmetric Attention score, as it is a composite score that aggregates altmetrics from a variety of sources and hence is a good representative score for altmetrics. We did not include features like the PlumPrint and the donut in our analysis, as these are widgets that display metric counts visually and do not represent a source of altmetrics per se.

Removing such sparse features resulted in 33 individual system features, belonging to eight broad categories, which were used for feature analysis. The list of features analyzed is shown in Table 2.

Table 2

List of Features Analyzed.

Category Features Description

Visualization Time-series (Temporal) analysis Shows the evolution of data over time
Spatial analysis Shows data on a world map
Informational details shown Additional details concerning visualized data
Cross-metric analysis Metrics compared with each other
User-friendly Interface Dashboard provided Landing page or the home page of an application that shows an at-a-glance view of the different sections of the system
User Control Features Supports search Finding articles
Supports sorting Sort a list in descending or ascending order by a certain criterion
Supports filtering Refine search results by specifying certain criteria
Allows export Export metrics information from articles.
Data Quality Digital Object Identifiers (DOIs) Unique identifiers for research objects like publications that can be used for disambiguation and verification of research outputs (International DOI Foundation 2000)
Uniform Resource Locators (URLs) Web address for research works (Lee, Masinter & McCahill 1994)
Verification of metrics provided Proof for metrics
Metric Related Artifact level metrics Metrics for different types of research outputs
Article level metrics Metrics for research papers
Journal level metrics Metrics for journals (Clarivate Analytics InCites n.d.)
Author/researcher level metrics Metrics for authors (Hirsch 2010)
Institutional/organizational level metrics Metrics for institutions
Country level Aggregated metrics for each country
Novel in-house metrics offered New metrics developed by the metrics provider.
Publicly known factors contributing to metric Novel metric is explained and factors contributing to the metric are made known to the user.
Context of metrics Factors influencing metrics
User Access Free access Access to the system is free.
Altmetric Sources No. of full text downloads Full-text downloads of the research work
No. of views Views of the research work
Altmetric Attention score Aggregated score offered by Altmetric.com (2012)
Twitter Source of tweets and retweets (Dorsey, Glass & Stone 2006)
Facebook Source of Facebook posts, comments, likes, among others (Zuckerberg, Saverin, McCollum & Moskovitz 2004)
Mendeley Source of Mendeley readers and bookmarks (Mendeley 2008)
Bibliometric Sources Google Scholar citations Source of citations of publications (Google 2004)
Web of Science citations Source of citations of publications (Clarivate Analytics WoS n.d.)
Scopus citations Source of citations of publications (Elsevier Scopus n.d.)
Journal Impact Factor Yearly average number of citations of articles published in the journal (Clarivate Analytics InCites n.d.)
Number of publications/articles Total count of research works published.

2.1.3 Coding

We conducted the feature analysis using a single coder, who was given a set of yes/no questions prepared to cover a wide range of system features. These questions had been tested by two coders prior to the study. The two coders had no prior experience with metric systems, and the inter-rater reliability (IRR) of their ratings is discussed in the following section.

Inter-rater Reliability. Intercoder reliability, often referred to as inter-rater or interjudge reliability, is a generally desired analytical component in content analysis that can demonstrate the validity of the coding (Lavrakas 2008). To compute the consistency between the ratings of the two coders, we calculated Cohen’s kappa (Cohen 1960) using R (R Team 1993). The IRR was determined for the entire set of ratings by the two coders, as well as for subsets based on the systems and features explored. The Cohen’s kappa values for the different sets of ratings are set out in Table 3. The Cohen’s kappa value for the entire set is 0.554 (p < 0.001); the highest is 0.675 (p < 0.001) for altmetric sources and the lowest is 0.406 (p < 0.001) for data quality. Overall, the IRR was moderate for the coding of altmetric systems and of features such as user-friendly interface, altmetric sources, and bibliometric sources, and lower for features like user control features, context of metrics, and data quality. To avoid wrong judgements stemming from inexperience in using such systems, we employed one of our research staff, who had been exposed to the different metric systems, to do the final coding. The coder was asked to code the shortlisted systems and indicate a ‘Y’ for features that were present in a system, ‘N’ for those that were not present, and ‘U’ for ones that she was unsure of.
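The study computed these values in R; purely as an illustration, the same statistic can be reproduced in Python with scikit-learn’s cohen_kappa_score (the two rating vectors below are hypothetical, not the study’s data):

```python
# Minimal sketch of the Cohen's kappa computation (the study used R, not Python).
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement between
# the two coders and p_e is the agreement expected by chance.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["Y", "N", "Y", "U", "N", "Y", "N", "N", "Y", "Y"]  # hypothetical ratings
coder_2 = ["Y", "N", "N", "U", "N", "Y", "N", "Y", "Y", "Y"]

print(f"Cohen's kappa: {cohen_kappa_score(coder_1, coder_2):.3f}")
```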

Table 3

Overview of IRR Values.

Subset Type System/Features Cohen’s kappa value (**p < 0.001)

All systems and features 0.554**
Systems Altmetric Systems 0.599**
Academic Social Networking Systems 0.456**
Databases 0.475**
Feature Categories Visualization 0.552**
User-friendly Interface 0.641**
User-control Features 0.413**
Data Quality 0.406**
Metric Related 0.468**
User Access 0.509**
Altmetric Sources 0.675**
Bibliometric Sources 0.605**

2.1.4 Data analyses

This section presents the data analyses of the coded results at both the system level and the feature level. Feature coverage is referred to as low, moderate, or high in comparison with all features analyzed: a feature has low coverage if it is covered by fewer than 8 systems, moderate coverage if covered by 8–12 systems, and high coverage if covered by more than 12 systems.

System-level coverage analysis

Overview of feature coverage. First, we calculated the percentages of ‘Y’, ‘N’, and ‘U’ ratings given by the coder. Nine out of 16 systems covered less than 50% of the total number of features (33) considered for the study. While Altmetric.com (81.82%), Plum Analytics (69.7%), and Academia.edu (60.6%) were the top three systems for coverage of features, Emerald Insight showed the least coverage, at 18.18%. Plum Analytics received the highest number of ‘U’ ratings from the coder.

Number of features/categories covered. We then analyzed the number of categories present per system. Altmetric.com had the highest count of features and categories, covering all categories considered for the study. Systems like Academia.edu, Mendeley, and Kudos covered the maximum number of categories but did not have a notable coverage of individual features. Most other systems showed a mediocre coverage of both categories and features, except Publish or Perish and Emerald Insight, which exhibited an overall low coverage of both.

Feature-wise coverage analysis

As a next step, we analyzed the in-depth coverage of the individual features and custom groups. We also describe below how the feature coverage analysis helped us identify key gaps in existing systems to take into consideration while designing ARIA. Figure 2 is a graphical representation of the coding results and can be regarded as a go-to plot for identifying which systems contain a particular feature. To represent the ‘Y’, ‘N’, and ‘U’ ratings graphically, we assigned 1 point to features rated ‘Y’, 0 points to features rated ‘N’, and 0.5 points to features rated ‘U’.
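As a minimal sketch of this scoring scheme, together with the coverage bands defined in Section 2.1.4 (the example ratings are invented):

```python
# Sketch of the Y/N/U point scheme used for Figure 2 and of the coverage bands
# from Section 2.1.4; the ratings below are hypothetical, not the study's data.
POINTS = {"Y": 1.0, "N": 0.0, "U": 0.5}

def coverage_band(num_systems: int) -> str:
    """Low: fewer than 8 systems; moderate: 8-12; high: more than 12."""
    if num_systems < 8:
        return "low"
    return "moderate" if num_systems <= 12 else "high"

# Hypothetical ratings for one feature across five systems.
ratings = ["Y", "N", "U", "Y", "N"]
plot_score = sum(POINTS[r] for r in ratings)   # value plotted in Figure 2
systems_with_feature = ratings.count("Y")      # value behind the coverage band
print(plot_score, coverage_band(systems_with_feature))
```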

Category-wise coverage. User control features are covered by all of the systems, followed by metric-related, data quality, and user access features, which are present in 15 systems. Visualization is the least covered feature category, with only 7 out of 16 systems containing visual analysis of metrics information, namely Kudos, Mendeley, Altmetric.com, Plum Analytics, ResearchGate, Academia.edu, and Elsevier Scopus. Below is a detailed analysis of the features under each category, showing which individual features have the maximum presence.

Visualization. The visualization category consists of four features, namely time-series (temporal) analysis, visualizations with additional informational details shown, spatial analysis, and cross-metric analysis. The number of systems having these features is plotted in Figure 1. Overall, the visualization features show a low coverage in comparison with other features. Temporal analysis and visualizations with informational details are the most covered visualization types, with seven systems having these features, whereas cross-metric analysis and spatial analysis are covered by only three systems.

Figure 1 

Overview of systems having visualization features.

Figure 2 

Detailed Overview of Systems Having Individual Features (N = 33).

User-friendly Interface. We examined the presence of a dashboard to determine whether a system is user-friendly. Eleven systems were found to contain dashboards, showing a good coverage of this feature.

Data Quality. Features that act as persistent identifiers of researchers and research works were examined to measure the extent to which a system ensures the quality of the data presented to the user. DOIs and URLs are the most popular identifiers of research work, with 12 systems using them. Verification of metrics has a moderate coverage, with nine systems having the feature. The use of DOIs for author name disambiguation shows the lowest coverage, being present in only one system, Altmetric.com.

Metric Related. We divided the metric related features into three main types, namely, level of metrics, context of metrics and novel metrics, and then analyzed the presence of these as per the ratings of the coder. Results are shown in Figure 3.

  • Among the different levels at which metrics can be presented, article-level metrics seem to be the most popular, with a coverage of 13 systems, followed by institutional-level metrics with a coverage of 10 systems. Country-level metrics have the lowest coverage, with only three systems presenting metrics at the country level, followed by artifact-level and journal-level metrics, covered by four and five systems, respectively. Author- or researcher-level metrics show a moderate coverage of nine systems.
  • The coder indicated the availability of context in eight systems, thus showing a moderate coverage.
  • Novel in-house metrics show a moderate coverage too, having been found in eight systems as per the coder’s ratings. Publicly known factors contributing to the metric hold true for seven of the eight systems containing novel in-house metrics.
Figure 3 

Overview of Systems having Metric Related Features.

Altmetric Sources. There are numerous altmetric data sources from which altmetrics about research works can be collected. Given the challenges involved in inspecting many sources, we shortlisted only the most prominent ones. As is apparent from Table 4, full-text downloads and view counts show a moderate coverage of eight systems, followed by tweets, Facebook mentions, and Mendeley readership data, which are present in six systems. The Altmetric Attention score is available in only five systems and shows a low coverage.

Table 4

Overview of Systems having Altmetric & Bibliometric Data Sources.

Category Features Available Unavailable Not sure

Altmetric sources No. of full text downloads 8 8 0
No. of views 8 8 0
Twitter 6 10 0
Facebook 6 10 0
Mendeley 6 10 0
Altmetric Attention score 5 11 0
Bibliometric sources Number of publications/articles 14 2 0
Scopus citations 5 11 0
Journal Impact Factor 3 13 0
Google Scholar citations 2 14 0
Web of Science citations 2 14 0

Bibliometric Sources. The number of publications/articles is the only bibliometric feature with a high coverage, at 14 systems. The other bibliometric sources, namely Scopus citations (5 systems), Journal Impact Factor (3 systems), Google Scholar citations (2 systems), and Web of Science citations (2 systems), show a low coverage.

3. Conceptualization of ARIA

From the feature analysis study, it is clear that some important features have not been widely implemented in current systems. We found that visualization features, and metric-related features such as country-level, artifact-level, and journal-level metrics, have a low coverage. In this digital era, with huge amounts of constantly flowing data overwhelming us, there is a growing need for methods that help us decipher data. Data visualization is one such method, which has attracted researchers, or rather persuaded them, to exploit its techniques in order to survive the data deluge (Kennedy & Hill 2017). The notion of visualizing applies not only to research data but also to research metrics, which are multidimensional and hard to interpret. Information science researchers have recently been investing their efforts in identifying the right kinds of visualizations for understanding research influence (Ginde 2016; Portenoy & West 2019). While visualizations are being taken care of, it is also important to ensure that the metrics being visualized are exhaustive and picked from the most relevant sources that track scholarly impact. We found that data sources such as Twitter, Facebook, Mendeley, and Altmetric.com among altmetric sources, and Scopus citations, Journal Impact Factor, Google Scholar citations, and Web of Science citations among bibliometric sources, have a low coverage despite being widely adopted by researchers to discuss their research work and to track research impact. Most of these features have been incorporated into ARIA.

Based on the feature analysis study and the identified gaps in existing systems, we ensured that ARIA would address most of the current gaps and have some of the existing important features as well. Table 5 shows the list of features analyzed during the feature analysis study and the different ARIA screens that have implemented those.

Table 5

Features covered by ARIA.

Category Features ARIA Screens

Visualization Time-series (temporal) analysis Publication vs. Citation Trend; Bibliometric Time Series; Publication vs. Altmetrics Trend; Altmetric Time Series; Cross-metric Explorer
Spatial analysis Map View
Informational details shown Publication vs. Citation Trend; Bibliometric Time Series; Publication vs. Altmetrics Trend; Altmetric Time Series; Cross-metric Explorer; Map View
Cross-metric analysis Cross-metric Explorer
User-friendly Interface Dashboard provided Dashboard
User Control Features Supports search Artifacts; Map View
Supports sorting Artifacts; Map View
Supports filtering Artifacts; Map View
Allows export Artifacts; Publication vs. Citation Trend; Bibliometric Time Series; Publication vs. Altmetrics Trend; Altmetric Time Series; Cross-metric Explorer; Map View
Data Quality DOIs Artifacts; Publication vs. Citation Trend; Bibliometric Time Series; Publication vs. Altmetrics Trend; Altmetric Time Series; Cross-metric Explorer; Map View
URLs Artifacts; Map View
Verification of metrics provided Artifacts; Publication vs. Citation Trend; Bibliometric Time Series; Publication vs. Altmetrics Trend; Altmetric Time Series; Cross-metric Explorer; Map View
Metric Related Artifact level metrics Artifacts
Article level metrics Artifacts
Journal level metrics Artifacts
Author/researcher level metrics ARIA Researcher Dashboard
Institutional/organizational level metrics ARIA Admin Dashboard
Country level Map View
Novel in-house metrics offered Artifacts; Publication vs. Citation Trend; Bibliometric Time Series; Publication vs. Altmetrics Trend; Altmetric Time Series; Cross-metric Explorer; Map View
Publicly known factors contributing to metric Yes
Context of metrics Not implemented
User Access Free Access N/A (Currently implemented only as a proof of concept. Not ready for external distribution)
Altmetric Sources No. of full text downloads Artifacts
No. of views Artifacts
Altmetric Attention score Artifacts; Publication vs. Altmetrics Trend; Cross-metric Explorer
Twitter Altmetric Time Series; Cross-metric Explorer
Facebook Not implemented
Mendeley Artifacts
Bibliometric Sources Google Scholar citations Not implemented
Web of Science citations Publication vs. Citation Trend; Bibliometric Time Series; Cross-metric Explorer
Scopus citations Artifacts; Publication vs. Citation Trend; Bibliometric Time Series; Cross-metric Explorer; Map View
Journal Impact Factor Artifacts
Number of publications/articles Artifacts; Publication vs. Citation Trend; Bibliometric Time Series; Publication vs. Altmetrics Trend; Altmetric Time Series; Cross-metric Explorer; Map View

After the initial planning of the selected features that would go into the design of ARIA, we came up with individual screens of interest to the stakeholders based on the data that we had collected, as explained in detail in Sections 3.1 and 3.2. We then categorized the different screens and grouped them into sections for better organization and navigation of the tool.

3.1 Altmetrics for Research Impact Actuation (ARIA)

ARIA is a multidisciplinary prototype system that offers a visual analysis of research impact data at the researcher level and the administrative level. It is a role-based tool for cross-metric validation between traditional metrics and altmetrics across different disciplines. ARIA can be used to determine research impact for the benefit of researchers, institutions, and funding agencies, and it offers various functionalities based on the role of the user. It can be used to explore metrics over time at the author/researcher level for individual researchers, and to probe aggregated metrics at the entity level. For the administrative users of the ARIA system, an entity is a school/research center, college, or university. Researchers can browse through their metrics and validate the different metrics of their published research outputs. Administrative users can investigate metrics aggregated for the entities they are entitled to access based on their role in the institution; they can also compare and contrast two entities, or an entity and a researcher, by juxtaposing the corresponding metrics in cross-metric visualizations.

ARIA presents a variety of visualizations that cater to the requirements of researchers and administrative users alike, thereby addressing one of the important gaps in existing systems identified by the feature analysis study. The time-series component is one of the novel features of ARIA and visualizes bibliometric and altmetric data as quarterly and monthly time series, respectively. The ARIA institutional dashboard for administrative users is quite unique, as it offers a customized view of the aggregated data based on user preferences and access rights. It meets the needs of an institutional administrator by providing quick and deep insights into the achievements of the institution, college, and school or research center through drill-up/drill-down options in pre-designed visualizations. ARIA supports diverse kinds of research artifacts, while demonstrating disciplinary aspects of the generated impact. Visualizations of bibliometric data in ARIA are enriched by distinguishing citations from authors affiliated with prestigious institutions, thereby providing a better understanding of research impact quality and international acclaim.

ARIA harvests article metadata and bibliometric data from Scopus, Web of Science, Crossref, and Journal Citation Reports (JCR), while altmetric data is harvested from Altmetric.com, Twitter, and PlumX. We did not implement Google Scholar citations, as Google Scholar does not provide an API and there are restrictions on web scraping its data. Some of these sources were identified as having a low coverage in existing systems during the feature analysis study. To support data extraction, transformation, and loading on a monthly basis, ARIA’s database has been designed as an enterprise data warehouse (EDW) in which both current and historical data are stored. The relational model comprises fact and dimension tables: fact tables contain the metrics data, while the dimension tables contain the metadata related to articles, researchers, publication venues, and organizations. The initial prototype version of the ARIA system supports around 55,000 publications of 2,637 Nanyang Technological University (NTU) affiliated academic and research staff. This dataset showcases the role-based visualizations and the cross-metric validation across disciplines and entity levels of an institution.
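The paper does not publish ARIA’s actual schema; the sqlite3 sketch below only illustrates the fact/dimension split described above, with invented table and column names:

```python
# Hypothetical star-schema sketch of ARIA's fact/dimension split; all table and
# column names are assumptions for illustration, not ARIA's published schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimension tables hold the metadata (articles, researchers, venues, organizations).
CREATE TABLE dim_article (article_id INTEGER PRIMARY KEY, doi TEXT, title TEXT);
CREATE TABLE dim_researcher (researcher_id INTEGER PRIMARY KEY, name TEXT, school TEXT);
-- The fact table holds the metrics, one row per article, source, and monthly
-- snapshot, so that both current and historical values are retained.
CREATE TABLE fact_metrics (
    article_id INTEGER REFERENCES dim_article(article_id),
    source TEXT,          -- e.g. 'scopus', 'twitter', 'plumx'
    snapshot_month TEXT,  -- e.g. '2018-06'
    metric_value INTEGER
);
""")
conn.execute("INSERT INTO dim_article VALUES (1, '10.1000/xyz', 'Sample paper')")
conn.execute("INSERT INTO fact_metrics VALUES (1, 'scopus', '2018-06', 42)")
print(conn.execute("SELECT source, metric_value FROM fact_metrics").fetchall())
```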

3.2 System design and features of ARIA

ARIA offers two different dashboards, either one or both depending on the role of the user as a researcher or/and an administrator. Following a tile-based design, the application incorporates colorful tiles to navigate to the different sections of the system in an organized manner. A small red button with an information icon at the center is placed at the top right corner of all the pages. A navigation menu is placed on the left side of all the pages, for easy navigation to the different sections of the application. Breadcrumbs, to show the current location of the user in the system hierarchy and to quickly navigate to a higher-level page, are placed at the top left corner of all the pages.

One important feature of the administrative dashboard is the entity selection, which allows the user to select the schools, colleges, and research centers they have access to, in order to visualize the aggregated data for the selection in the different sections of the prototype, such as bibliometrics, altmetrics, artifacts, and the cross-metric explorer. The bibliometric and altmetric visualizations in the admin dashboard, as shown in Figure 4, can drill down at data points to view the top researchers contributing to the performance of the selected entities based on various metrics. All visualizations can be exported and include tables containing the visualized data and corresponding statistical metrics. The bibliometric and cross-metric plots were created using the Highcharts JavaScript (JS) library (“Interactive JavaScript charts for your webpage | Highcharts”), whereas the altmetric plots were created using the amCharts JS library (“JavaScript Charts, Maps – amCharts”). These libraries offer built-in features like exporting the charts as images and zooming into a specific time period. All plots have been implemented to be intuitive, such that the details behind the counts can be viewed by clicking on data points.

Figure 4 

ARIA Admin Bibliometrics – Publication Vs Citation Trend (top researchers by citation count).

Bibliometrics. The bibliometrics section of the prototype includes two visualizations featuring the research impact of the researcher in terms of publication and citation counts. The total citation counts from Scopus have been further categorized into Quacquarelli Symonds (QS) (Nunzio Quacquarelli n.d.) and non-QS citation counts, in order to highlight citations coming from prestigious universities around the world.

Publication versus citation trend. The publication versus citation trend showcases the yearly performance of the researcher by plotting publication and citation counts per year as a bar chart. The citation counts are obtained from two sources, namely, Scopus and Web of Science, and one or the other can be chosen using the dropdown selection, on the top left corner of the page, to visualize the corresponding data.

Bibliometric time-series. The time series component visualizes the bibliometric data as a quarterly time series chart. Only the Scopus citation counts are plotted, as Web of Science citations are not available per quarter.

Altmetrics. The altmetrics section of the prototype contains two visualizations similar to the ones in the bibliometrics section, analyzing the impact of the researcher derived from social media metrics.

Publication versus altmetrics trend. The publication versus altmetrics trend, shown in Figure 5, demonstrates the yearly altmetrics performance of the researcher by plotting the altmetrics coverage of publications against the Altmetric Attention score from Altmetric.com.

Figure 5 

ARIA Admin Altmetrics – Publication versus Altmetrics Trend.

Altmetric time-series. The altmetric time series component transforms the Twitter data of the researcher into a monthly time-series chart.
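A minimal pandas sketch of how raw tweet events might be rolled up into such a monthly series follows; the event data is invented, and the paper does not describe ARIA’s actual aggregation code:

```python
# Hypothetical illustration: aggregating raw tweet events into a monthly
# time series, as the altmetric time-series chart displays.
import pandas as pd

events = pd.DataFrame({
    "tweeted_at": pd.to_datetime(["2018-01-03", "2018-01-20", "2018-02-11", "2018-04-02"]),
    "tweets": [1, 1, 1, 1],
})
monthly = events.set_index("tweeted_at")["tweets"].resample("MS").sum()
print(monthly)  # quarterly Scopus counts could be produced the same way with resample("QS")
```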

Artifacts. The artifacts section categorizes the works of the researcher based on the type of research output; currently, the system supports journal publications, conference publications, and books. In the admin dashboard, the artifacts section aggregates the research outputs for the selected institutions. The search box and tree view are used to navigate the institutional hierarchy to look for researchers, who can then be viewed individually.

The list of publications can be sorted based on metrics and filtered based on publication venue. The citation counts of each publication are hyperlinked to show the list of citing publications behind the counts.

Another interesting feature of the publication list is the percentile tags assigned to papers based on the Journal Impact Factor quartile the publication belongs to, as ranked within the various disciplinary topics. Journal-level impact was identified to have a low coverage during the feature analysis study, and hence was implemented in ARIA. Each paper can have one or more tags representing the percentile ranking of the publication under a particular topic. These quartile tags are helpful in understanding disciplinary differences in research impact. Based on the Journal Rank from the JCR database, we divided the publications into quartiles: the 25th quartile tag represents the bottom 25% of the range, while the 100th quartile tag represents the top 25%.
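The exact boundary handling is not specified in the paper; under the assumption that a journal’s percentile rank within a topic is simply binned into four equal bands, the tagging could look like this:

```python
# Sketch of the quartile tagging described above, assuming a percentile rank in
# (0, 100] is binned into the tags 25, 50, 75, and 100 (100 = top 25%).
import math

def quartile_tag(percentile: float) -> int:
    """Map a percentile rank to the 25/50/75/100 quartile tags."""
    return min(100, math.ceil(percentile / 25.0) * 25)

for rank in (10.0, 30.0, 70.0, 95.0):     # hypothetical journal ranks
    print(rank, "->", quartile_tag(rank))  # 25, 50, 75, 100
```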

Cross-metric explorer. The cross-metric explorer is one of the unique features of ARIA. With an increasing number of metrics for gauging research impact, it has become ever more important to be able to compare and correlate the different metrics in order to understand how they correspond to one another. The cross-metric explorer does just that. The page contains a user input form offering metric, frequency, and time duration selections, followed by submit and reset buttons. Once the selection is made, the submit button can be clicked to visualize the corresponding data; the reset button restores the input form to its default values.

For admin users, the cross-metric explorer offers three types of analysis: it can compare and correlate two metrics of (1) a single entity, (2) any two entities, or (3) an entity and a researcher. Figure 6 is a screenshot of a comparison between two entities. As the data being visualized is aggregated, it is useful to be able to control the portion of data being aggregated, so that the comparison between metrics makes the most sense. The cross-metric explorer page for administrators therefore contains a user input form offering selections for aggregation type, metrics, frequency, time duration, and researchers, followed by submit and reset buttons.

Figure 6 

ARIA Admin Cross-metric Explorer.
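The paper does not state which statistic, if any, the explorer computes behind its visual comparison; as an illustration of correlating two metric series, a rank correlation could be computed as follows (the series are invented):

```python
# Hypothetical illustration of correlating two metric series, in the spirit of
# the cross-metric explorer; neither the statistic nor the data comes from ARIA.
from scipy.stats import spearmanr

citations = [3, 5, 4, 8, 10, 12]  # e.g. quarterly Scopus citation counts (made up)
tweets = [1, 4, 2, 6, 9, 7]       # e.g. quarterly tweet counts (made up)

rho, p_value = spearmanr(citations, tweets)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```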

Map view. The map view can be used to track the number of citations, QS citations, and non-QS citations of a researcher by country. This addresses one of the gaps identified in the feature analysis study, which found that country-level metrics had a low coverage in existing systems. In the map view, countries are shaded based on the percentage contribution they make, as indicated in the legend of the chart.

There are two user selections for choosing the year and quarter by which to filter the map view data. The map view data is also displayed in a tabular format below the map, including details of the citations. The table includes a search box for quick filtering of the data based on keywords, numbers, or years. The map can be exported in various formats. Figure 7 is a screenshot of the map view page in ARIA.

Figure 7 

ARIA Map View.

ARIA system summary. Overall, ARIA includes a researcher dashboard consisting of five sections and an admin dashboard consisting of four sections. The bibliometrics, altmetrics, cross-metric explorer, and artifacts sections are available in both dashboards, whereas the map view is present only in the researcher dashboard. The bibliometrics and altmetrics sections contain two visualizations each, while the cross-metric explorer and map view have a single interactive visualization each. The artifacts page is content-rich and has no visualization. Thus, ARIA is an extensive tool that offers several necessary and important features to researchers and research administrators. The number of HTML webpages in the ARIA system and their loading times are summarized in Table 6.

Table 6

ARIA system summary.

Dashboard No. of sections No. of HTML webpages Loading time per page (in seconds)

Researcher 5 9 3–6
Admin 4 8 3–6

4. Usability Testing of ARIA

The first phase of the ARIA evaluation was conducted through a usability study with participants from NTU; interim results are presented in the sections below. A total of 20 participants completed the study. Since this was a usability study, the intention was to collect data pertaining to bugs and initial user opinions on the usability and usefulness of the system. Data was collected through participant observations and interview questions. For the participant observations, we used the think-aloud protocol (Lewis 2017), wherein participants were asked to speak out what they observed, thought, and felt while performing the tasks. Candidates with experience in writing research papers were selected; the sample included graduate research students, research staff, and teaching staff.

After the task instructions were read out, the participants executed the tasks and spoke out their observations as they went. After completing the tasks, three interview questions were put to the participants:

  Q1. How did you feel when you were using the ARIA prototype?
  Q2. Do you have any suggestions to improve the ARIA prototype?
  Q3. While using ARIA, did you think of any good features from alternative systems that could have helped complete the tasks?

4.1 Summary of results

As mentioned earlier, the study was conducted with 20 participants. Since the ARIA system has two dashboards (researcher and admin), participants were allowed to test both dashboards only if they had more research experience; of the 20 participants, 13 were deemed eligible to test both. The ARIA system has a total of 13 components, which are predominantly visualizations. These 13 components were grouped into nine tasks (five researcher tasks and four admin tasks).

Table 7 lists the total attempts, successful attempts, and success percentage for each task. Note that each task has one or two ARIA components associated with it. For example, the task ‘Tracking Bibliometrics’ is executed with two visualizations; hence, when executed by all 20 participants, this task has a total of 40 attempts. An attempt was deemed successful if the participant did not face any issues while executing the task with the particular visualization(s).

Table 7

Task-level Statistics by Attempts and Success Rate.

Task Task ID Viz. Count Access Level Participants Count Number of Attempts Number of Successful Attempts Percent Successful by Attempt

Tracking Bibliometrics T1 2 R 20 40 40 100.00%
Tracking Altmetrics T2 2 R 20 41 40 97.56%
Tracking Artifact-Level Metrics T3 1 R 20 29 20 68.97%
Tracking Cross-Metric Comparison T4 1 R 20 21 20 95.24%
Tracking Map View T5 1 R 20 26 20 76.92%
Tracking Bibliometrics T6 2 A 13 30 26 86.67%
Tracking Altmetrics T7 2 A 13 29 26 89.66%
Tracking Artifact-Level Metrics T8 1 A 13 21 13 61.90%
Tracking Cross-Metric Comparison T9 1 A 13 17 13 76.47%

Note: R indicates Researcher and A indicates Admin in Access Level.
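The success percentages in Table 7 follow directly from dividing successful attempts by total attempts, for example (using the T3 figures from the table):

```python
# Reproduces the Table 7 arithmetic for T3, 'Tracking Artifact-Level Metrics'.
attempts, successful = 29, 20
print(f"{100 * successful / attempts:.2f}%")  # 68.97%
```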

The task ‘Tracking Bibliometrics’ had a 100% success rate, mainly because of the presence of traditional metrics such as publication count and citation count in the visualizations. If we consider a success rate above 80% as a fairly good indicator, five tasks passed this threshold. Among the remaining four tasks, the lowest performance was for the ‘Tracking Artifact-Level Metrics’ task at both the researcher and admin level, with 68.97% and 61.90%, respectively. The next stage of analysis focused on the issues that affected the success rates of the tasks.

Table 8 lists the major issues identified by the participants; any issue that affected at least five participants was considered, and the proposed solutions are also listed. Issue I2 affected the highest number of users (n = 14), as it was very apparent to most users: the artifacts page took a long time to load, since a lot of data had to be retrieved. The second major issue, I6, affected 10 participants, who found the cross-metric explorer somewhat confusing in terms of its usage scenarios; the task instructor had to explain its different features to these participants. The other four issues were minor usability issues with comparatively simple fixes.

Table 8

Major ARIA Usability Issues.

Issue ID Usability Problem Affected Participants Count Proposed Solution

I1 The presence and usage of scale-bar in Altmetric time series was not apparent 8 Scale bar modified to make it apparent
I2 Artifacts page takes time to load 14 Database performance issue
I3 Incorrectly clicking on Journal Impact Factor legends 9 Display of the Journal Impact Factor legends modified
I4 User hitting enter instead of clicking the submit/search button 7 Enter functionality added
I5 Sorting legends not understood 5 The sort arrows added in addition to the sort names
I6 Inadequate understanding of cross-metric explorer 10 Three separate tabs included in the page to delineate the scenarios in which the cross-metric explorer could be used

The transcribed user responses to the three interview questions were consolidated (see the Appendix for the table of user responses). For reporting purposes, the responses have been condensed and re-worded, but the adjectives have been retained. For the first question (Q1), participants were asked how they felt about the system. In general, participants felt the system was good in terms of features, navigation, usability, and coverage. There were some critical remarks too: some felt there was a lot of information, which affected their interpretation of the usefulness of certain features, such as the cross-metric explorer, and there were some remarks about page loading times. Visualizations such as the map view and the publication versus citation trend were considered useful by some participants. A couple of participants also liked the artifacts page and mentioned it as a powerful feature. While some found ARIA to be a comprehensive tool, a few others thought ARIA had a few too many features and that first-time users should be provided with an initial guide to using the system.

For the second question (Q2), participants were asked for suggestions to improve the system. Some participants took an interest in the visualizations and offered suggestions such as enabling user-selected colors and including pie charts. Many participants felt that more detailed data should be included; for instance, they felt it would be nice to view the actual tweets behind the tweet counts. The cross-metric explorer’s design was also highlighted by some participants, who wanted the visualization redesigned to make it more intuitive. Some participants noted that the different JavaScript libraries used for the visualizations could be standardized to provide a uniform user experience across the system. The other suggestions were minor remarks that could be addressed by simple user interface (UI) tweaks.

For the third question (Q3), participants were asked whether some ‘nice to have’ features from other systems could be added to ARIA. Most participants felt that ARIA already had more features than Google Scholar. They also felt that ARIA was more advanced and had much more data coverage than systems such as Google Scholar and Web of Science. One participant felt that ARIA could have a personalized homepage where researchers could manually upload their papers, similar to ResearchGate. As ARIA contains several pages, navigating between them seemed cumbersome to a few participants.

5. Discussion and Conclusion

The coding and feature analysis study of the metric systems was a challenging venture, from identifying the not-so-easily-noticeable features to analyzing the ratings of the coders. Nevertheless, the prominence and obscurity of some of the features did not go unobserved. Although Altmetric.com (27 features across the 8 categories), Academia.edu (20 features), and Mendeley (18 features) have high counts of features and categories, the number of features covered is still not very high considering the total of 33 features explored. Hence, the overall coverage of individual features by the systems is rather moderate. However, the category coverage sheds a more positive light, with 4 out of 16 systems covering all feature categories.

There were some limitations and challenges during the feature analysis study. The complexity of these systems made it quite challenging for the coders to identify some of the features and functionalities; repeated usage is required to get a thorough sense of what can be accomplished with these systems. In addition, the representation of some features could not be clearly interpreted by the coders. As an example, an export functionality is named “save/view as JSON” in one of the systems, and not all users have the technical expertise to easily recognize this as an export function. Several features, such as the cross-metric analysis and spatial analysis visualizations, DOI-based disambiguation, country-level, artifact-level, and journal-level metrics, the Altmetric Attention score, Journal Impact Factor, Google Scholar citations, Web of Science citations, and Scopus citations, were observed in very few of the systems analyzed. The feature analysis study could not include all available metric tools, but rather a specific set of the prominent ones, due to time constraints and accessibility issues. We filtered out a few systems that were not freely available or required a subscription, and had to eliminate a few others that were not available to the discipline of the researchers whose accounts were used for the study.

Some of the gaps identified above bring us to the conclusion that the existing metric systems are not as user-friendly and straightforward as they seem to be, and not all users, coming from varied backgrounds and having diverse skillsets, will find these tools accessible. Hence, there is a critical need to develop metric systems that are comprehensible, simple and easy to use, which in turn could lead to enhanced user satisfaction.

In the second part of the study, we implemented the ARIA prototype system to address the gaps identified during the feature analysis study, and the findings from the usability study were used to update the system. The approach undertaken for the ARIA implementation included a thorough feature analysis of existing systems, leading to the identification of the set of features to be implemented in the prototype, followed by a usability study to obtain user feedback. This iterative protocol of analyzing and gathering requirements, followed by the design and development of the system and then quality control testing, is part of the system development life cycle (SDLC) process. Although ARIA is not complete in itself, as found during the usability study, we consider building the ARIA system the first step towards implementing holistic metric systems that cover key features, such as data visualizations, cross-metric analysis, and country-level metrics, necessary for understanding and exploring research impact.


Appendix

Table A1

Consolidated Unique Responses for the Three Interview Questions.

Q1 responses:

  • Lot of features in the system
  • Admin dashboard is best among the two
  • Interesting tool
  • Useful for hiring committees
  • User experience is very good
  • Lot of information and overwhelming experience
  • Understandability/interpretation is an issue
  • Useful for chair of schools
  • Worth buying for millions of dollars
  • Complete and comprehensive tool
  • Select entities option is not easily understandable
  • Publication citation trend was most useful
  • Graphical user interface could be better
  • Switching (navigating) between bibliometric time series and publication citation trend was not convenient
  • Changing roles meant the system switched to the homepage always, making it slightly inconvenient
  • System is user-friendly
  • Most of the data in the system could be compared
  • Lot of data in the system but recent data is more relevant
  • System is well organized
  • Navigation is good
  • UI is neat
  • System is OK
  • UI is colorful and user-friendly
  • Features were not very intuitive
  • Visualizations can be made more simple and easy to understand
  • Artifacts page was the best feature of the system
  • Usability is good
  • System is simple and easy to understand
  • System was slow in certain places
  • Helpful in tracking citations from different sources
  • Navigation is good
  • System is visually good
  • Map view is good and quite interactive
  • Data coverage in ARIA is good
  • Visualizations were not intuitive, specifically Altmetric time series and tree view in artifacts page
  • Detailed tweet should be displayed
  • Layout change in cross-metric explorer would be good
  • Pie charts could be added
  • View publications by extracted topics
  • Display Scopus categories for artifacts
  • Viewing metrics at quarterly and monthly not so useful
  • Coverage is an issue as papers without DOIs or URLs will not get picked up
  • Reference of artifacts could be useful addition
  • Include a mew metric “efficiency” which is h-index/number of publications, with variations for altmetrics data and QS citations
  • Username textbox in login page should have automatic focus
  • A researcher should be able to view another researcher’s data
  • Virtual tour can be shown for first time user
  • Search feature in artifacts page should have advanced options
  • Personalized profile page of researchers with profile pic, publications and citations list
  • JavaScript chart libraries to be standardized
  • Need to see detailed data instead of high level metrics
  • No need to have different sections for Journals and Conference papers in Artifacts page
  • Journal Impact Factor percentiles in artifacts page should be better explained
  • Need more detailed data (specifically for social media metrics)
  • Need abstract and short summaries of papers in artifacts page
  • More help to be provided to first time users with tutorials
  • School names/abbreviations can be added instead of School 1 and School 2 in cross-metric explorer
  • Participant can have the option to choosing color in the visualizations
  • Viewing data in ARIA faster than Google Scholar
  • ARIA more advanced than Google Scholar
  • ResearcherId is good as it contains geographical data and affiliations of citations
  • Artifacts page was powerful than Google Scholar and more easy to use than Web of Science
  • Google Scholar and ARIA cannot be compared as Google Scholar is more graphical
  • ARIA has lot more features than Google Scholar
  • ARIA doesn’t allow data upload similar to ResearchGate
  • ARIA is sophisticated and holistic than Google Scholar
  • Use Google Scholar for tracking citations
  • ARIA is one of the systems that included social media data
  • UI of Google Scholar and Web of Science is comparatively simpler and intuitiveARIA has more features than these systems
  • Google Scholar has endless hyperlinks

Table A2

Distribution of participants based on research experience.

Research Experience Participant Count

1–5 years 12
5–10 years 6
10+ years 2

Table A3

Distribution of participants based on qualification.

Degree Participant Count

Bachelors 4
Masters 9
Doctorate 7


Acknowledgements

This research is supported by the National Research Foundation, Prime Minister’s Office, Singapore, under its Science of Research, Innovation and Enterprise programme (SRIE Award No. NRF2014-NRF-SRIE001-019). We thank Professor Edie Rasmussen (University of British Columbia, Vancouver) for her valuable inputs during the feature analysis study. We also thank the subjects who participated in the usability study and shared their ideas and thoughts about ARIA.

Competing Interests

The authors have no competing interests to declare.


References

  1. AIS. (n.d.). AIS Electronic Library (AISeL)|Association for Information Systems Research. Retrieved June 25, 2019, from 

  2. Altmetric. (2012). Discover the attention surrounding your research – Altmetric. Retrieved June 25, 2019, from 

  3. AmCharts. (n.d.). JavaScript Charts & Maps – amCharts. Retrieved June 25, 2019, from 

  4. ASIST. (n.d.). AsistDigitalLibrary. Retrieved June 25, 2019, from 

  5. Bally, L., Brittan, J., & Wagner, K. H. (1977). A prototype approach to information system design and development. Information and Management, 1(1), 21–26. DOI: 

  6. BioMed Central. (n.d.). BMC, research in progress. Retrieved June 25, 2019, from 

  7. Clarivate Analytics InCites. (n.d.). InCites. Retrieved June 25, 2019, from 

  8. Clarivate Analytics WoS. (n.d.). Web of Science. Retrieved June 25, 2019, from 

  9. Cohen, J. (1960). A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20, 37–46. DOI: 

  10. Costas, R., Zahedi, Z., & Wouters, P. (2015). Do “altmetrics” correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective. Journal of the Association for Information Science and Technology, 66(10), 2003–2009. DOI: 

  11. Crossref. (1999). You are Crossref – Crossref. Retrieved June 25, 2019, from 

  12. Dorsey, J., Glass, N., & Stone, B. (2006). Twitter. It’s what’s happening. Retrieved June 25, 2019, from 

  13. Elsevier. (n.d.). Elsevier|An Information Analytics Company|Empowering Knowledge. Retrieved June 25, 2019, from 

  14. Elsevier Newsflo. (2012). Newsflo|Measures an academic’s societal impact|Elsevier. Retrieved June 25, 2019, from 

  15. Elsevier Pure. (n.d.). Pure|Helps Research Managers at your Institution|Elsevier Solutions. Retrieved June 25, 2019, from 

  16. Elsevier ScienceDirect. (n.d.). ScienceDirect|Elsevier’s leading information solution|Elsevier. Retrieved June 25, 2019, from 

  17. Elsevier SciVal. (n.d.). SciVal|Navigate the world of research with a ready-to-use solution|Elsevier Solutions. Retrieved June 25, 2019, from 

  18. Elsevier Scopus. (n.d.). The largest database of peer-reviewed literature – Scopus|Elsevier Solutions. Retrieved June 25, 2019, from 

  19. Elsevier Snowball Metrics. (n.d.). Snowball Metrics – Standardized Research Metrics – By the Sector For the Sector. Retrieved June 25, 2019, from 

  20. Emerald. (n.d.). Emerald Group Publishing. Retrieved June 25, 2019, from 

  21. Emerald Insight. (n.d.). Emerald Insight. Retrieved June 25, 2019, from 

  22. Erdt, M., Nagarajan, A., Sin, S.-C. J., & Theng, Y.-L. (2016). Altmetrics: an analysis of the state-of-the-art in measuring research impact on social media. Scientometrics, 109(2), 1117–1166. DOI: 

  23. Faculty of 1000. (2012). Homepage – F1000Prime. Retrieved June 25, 2019, from 

  24. Figshare. (2011). Figshare – credit for all your research. Retrieved June 25, 2019, from 

  25. Ginde, G. (2016). Visualisation of massive data from scholarly Article and Journal Database: A Novel Scheme. CoRR, abs/1611.01152. 

  26. Google. (2004). Google Scholar. Retrieved June 25, 2019, from 

  27. Harzing, A.-W. (n.d.). Publish or Perish. Retrieved June 25, 2019, from 

  28. Haustein, S. (2016). Grand challenges in altmetrics: heterogeneity, data quality and dependencies. Scientometrics, 108(1), 413–423. DOI: 

  29. HighCharts. (n.d.). Interactive JavaScript charts for your webpage|Highcharts. Retrieved June 25, 2019, from 

  30. HighWire. (n.d.). Digital Publishing Technology|HighWire Press. Retrieved June 25, 2019, from 

  31. Hirsch, J. E. (2010). An index to quantify an individual’s scientific research output that takes into account the effect of multiple coauthorship. Scientometrics, 85(3), 741–754. DOI: 

  32. Indiana University. (n.d.). Scholarometer – informatics. Retrieved June 25, 2019, from 

  33. International DOI Foundation. (2000). Digital Object Identifier System. Retrieved June 25, 2019, from 

  34. Kennedy, H., & Hill, R. L. (2017). The Pleasure and Pain of Visualizing Data in Times of Data Power. Television & New Media, 18(8), 769–782. DOI: 

  35. Kudos. (2014). Kudos – helping increase the reach and impact of research. Retrieved June 25, 2019, from 

  36. Larson, R. C., Ghaffarzadegan, N., & Xue, Y. (2014). Too many PhD graduates or too few academic job openings: The basic reproductive number R0 in academia. Systems Research and Behavioral Science, 31(6), 745–750. DOI: 

  37. Lavrakas, P. J. (2008). Encyclopedia of survey research methods. Thousand Oaks, Calif: SAGE Publications. DOI: 

  38. Lewis, C. (1982). Using the “Thinking Aloud” Method in Cognitive Interface Design. IBM Research Report, RC-9265. 

  39. Lin, J. (2012). A Case Study in Anti-Gaming Mechanisms for Altmetrics: PLoS ALMs and DataTrust. Retrieved June 25, 2019, from 

  40. Madisch, I., & Hofmayer, S. (2008). ResearchGate|Find and share research. Retrieved June 25, 2019, from 

  41. Michalek, A. (2011). Plum Analytics – Plum Analytics. Retrieved June 25, 2019, from 

  42. Mendeley, E. (2008). Mendeley. Retrieved June 25, 2019, from 

  43. Nature Research. (n.d.). Home: About NPG. Retrieved June 25, 2019, from 

  44. Nelson, S.-M. (2016). Using Altmetrics as an Engineering Faculty Outreach Tool. 

  45. Quacquarelli, N. (n.d.). Quacquarelli Symonds. Retrieved January 23, 2021, from 

  46. ORCID. (2012). ORCID. Retrieved June 25, 2019, from 

  47. Peters, I., Jobmann, A., Hoffmann, C. P., Künne, S., Schmitz, J., & Wollnik-Korn, G. (2014). Altmetrics for large, multidisciplinary research groups: Comparison of current tools. Bibliometrie-praxis und forschung, 3. 

  48. PLOS. (n.d.). PLOS. Retrieved June 25, 2019, from 

  49. PLOS. (2009). ALM. Retrieved June 25, 2019, from 

  50. Portenoy, J., & West, J. D. (2017). Visualizing scholarly publications and citations to enhance author profiles. In 26th International World Wide Web Conference 2017, WWW 2017 Companion (pp. 1279–1282). DOI: 

  51. Price, R. (2008). Academia.edu – Share research. Retrieved June 25, 2019, from 

  52. Priem, J., & Piwowar, H. (2011). Impactstory. Retrieved June 25, 2019, from 

  53. Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetric: A manifesto. In Altmetrics. Retrieved from 

  54. R Core Team. (1993). R Homepage. Retrieved December 24, 2020, from 

  55. Robinson-García, N., Torres-Salinas, D., Zahedi, Z., & Costas, R. (2014). New data, new possibilities: Exploring the inside of Altmetric.com. El Profesional de La Información. DOI: 

  56. Roemer, R. C., & Borchardt, R. (2015). Major altmetrics tools. Library Technology Reports, 51(5), 11–19. 

  57. Springer. (n.d.). Springer. Retrieved June 25, 2019, from 

  58. Springer Nature. (n.d.). Springer Nature. Retrieved June 25, 2019, from 

  59. SSRN. (n.d.). Home :: SSRN. Retrieved June 25, 2019, from 

  60. Statistical Cybermetrics Research Group. (n.d.). Webometric Analyst. Retrieved June 25, 2019, from 

  61. Sugimoto, C. R., Work, S., Larivière, V., & Haustein, S. (2017). Scholarly use of social media and altmetrics: A review of the literature. Journal of the Association for Information Science and Technology, 68(9), 2037–2062. DOI: 

  62. Sutton, S. W. H. (2014). Altmetrics: What good are they to academic libraries. Kansas Library Association College and University Libraries Section Proceedings, 4(2), 1–7. DOI: 

  63. Tang, J. (2006). AMiner. Retrieved June 25, 2019, from 

  64. University World News. (2013). The global shift to competitive research funding. Retrieved June 25, 2019, from 

  65. Webster, B. (n.d.). LibGuides: Altmetrics: Tools. Retrieved June 25, 2019, from 

  66. Wee, J. (2014). Altmetrics : The Tools. Retrieved June 21, 2019, from 

  67. Wiley. (n.d.). Wiley Online Library|Scientific research articles, journals, books, and reference works. Retrieved June 25, 2019, from 

  68. Zuckerberg, M., Saverin, E., McCollum, A., & Moskovitz, D. (2004). Facebook. Retrieved June 25, 2019, from