February 24, 2011

altmetrics11: Tracking scholarly impact on the Social Web

Posted in CFP, Conferences, e-Science, Social Media, Web 2.0

altmetrics11

Koblenz (Germany), 14-15 June 2011
An ACM Web Science Conference 2011 Workshop

Keynote: Mike Thelwall, University of Wolverhampton:
“Evaluating online evidence of research impact”

Call for papers

The increasing quantity and velocity of scientific output is presenting scholars with a deluge of data. There is growing concern that scholarly output may be swamping traditional mechanisms for both pre-publication filtering (e.g. peer review) and post-publication impact filtering (e.g. the Journal Impact Factor).

Increasing scholarly use of Web 2.0 tools like CiteULike, Mendeley, Twitter, and blog-style article commenting presents an opportunity to create new filters. Metrics based on a diverse set of social sources could yield broader, richer, and more timely assessments of current and potential scholarly impact. Realizing this, many authors have begun to call for investigation of these “altmetrics” (see altmetrics.org).


February 10, 2011

Why do scientists (not) contribute to Wikipedia?

Posted in Collaboration, Social Media, Surveys, Wikis

An excellent article published last month in the Chronicle celebrates Wikipedia’s 10th anniversary by observing that today the project represents neither “the bottom layer of authority, nor the top, but in fact the highest layer without formal vetting” and, as such, it can serve as “an ideal bridge between the validated and unvalidated Web”. An increasing number of university students use Wikipedia for “pre-research”, as part of their course assignments or research projects. Yet many academics, scientists and experts turn their noses up at the thought of contributing to Wikipedia, despite a growing number of calls from the scientific community to join the project (see for instance this recent initiative of the Association for Psychological Science or this call for biomedical experts to help contribute rigorous public health information to Wikipedia).

A survey has been launched by the Wikimedia Research Committee to understand why scientists, academics and other experts do (or do not) contribute to Wikipedia, and whether individual motivation aligns with shared perceptions of Wikipedia within different communities of experts. The survey is anonymous and takes about 20 min to complete. Whether you are an active Wikipedia contributor or not, you can take the survey and help Wikipedia think of ways around barriers to expert participation.

October 28, 2010

Alt-metrics: A manifesto

Posted in Evaluation, Social Media, Statistics, Web 2.0
J. Priem, D. Taraborelli, P. Groth, C. Neylon (2010), Alt-metrics: A manifesto, (v.1.0), 26 October 2010. http://altmetrics.org/manifesto

No one can read everything. We rely on filters to make sense of the scholarly literature, but the narrow, traditional filters are being swamped. However, the growth of new, online scholarly tools allows us to make new filters; these alt-metrics reflect the broad, rapid impact of scholarship in this burgeoning ecosystem. We call for more tools and research based on alt-metrics.

As the volume of academic literature explodes, scholars rely on filters to select the most relevant and significant sources from the rest.

Unfortunately, scholarship’s three main filters for importance are failing:



September 22, 2010

ReaderMeter: Crowdsourcing research impact

Posted in Announcements, Collaboration, Reference management, Statistics, Visualization, Web 2.0

Readers of this blog are not new to my ramblings on soft peer review, social metrics and post-publication impact measures:

  • can we measure the impact of scientific research based on usage data from collaborative annotation systems, social bookmarking services and social media?
  • should we expect major discrepancies between citation-based and readership-based impact measures?
  • are online reference management systems a more robust data source for measuring scholarly readership than traditional usage factors (e.g. downloads, click-through rates)?

These are some of the questions addressed in my COOP ’08 paper. Jason Priem also discusses the prospects of what he calls “scientometrics 2.0” in a recent First Monday article, and it is really exciting to see a growing interest in these ideas from both the scientific and the STM publishing community.

We now need to think of ways of putting these ideas into practice. Science Online London 2010 earlier this month offered a great chance to test a real-world application of these ideas in front of a tech-friendly audience and this post is meant as its official announcement.

ReaderMeter is a proof-of-concept application showcasing the potential of readership data obtained from reference management tools. Following the announcement of the Mendeley API, I decided to see what could be built on top of the data exposed by Mendeley. The first idea was to write a mashup aggregating author-level readership statistics based on the number of bookmarks scored by each of one’s publications. ReaderMeter queries the data provider’s API for articles matching a given author string, parses the response, and generates a report with several metrics that attempt to quantify the relative impact of an author’s scientific production based on its consumption by a population of readers (in this case the 500K-strong Mendeley user base):



The figure above shows a screenshot of ReaderMeter’s results for social scientist Duncan J Watts, displaying global bookmark statistics, the breakdown of readers by publication, as well as two indices (the HR index and the GR index) which I compute using bookmarks as a variable, by analogy to the popular citation-based h-index and g-index. Clicking on a reference allows you to drill down to display readership statistics for a given publication, including the scientific discipline, academic status and geographic location of readers of an individual document:
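For the curious, here is a minimal sketch of how such bookmark-based indices could be computed. This is my own illustration, not ReaderMeter’s actual code: it simply applies the standard h-index and g-index definitions to a list of per-publication bookmark counts.

```python
def hr_index(bookmark_counts):
    """Largest h such that h publications each have at least h bookmarks
    (the h-index definition, with bookmarks in place of citations)."""
    counts = sorted(bookmark_counts, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

def gr_index(bookmark_counts):
    """Largest g such that the top g publications together have at least
    g^2 bookmarks (the g-index definition, with bookmarks)."""
    counts = sorted(bookmark_counts, reverse=True)
    total, g = 0, 0
    for rank, count in enumerate(counts, start=1):
        total += count
        if total >= rank * rank:
            g = rank
    return g
```

For example, an author whose five papers have 10, 8, 5, 4 and 3 bookmarks would get an HR index of 4 (four papers with at least 4 bookmarks each) and a GR index of 5 (the top five papers total 30 ≥ 25 bookmarks).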

A handy permanent URL is generated to link to ReaderMeter’s author reports (using the scheme: [SURNAME].[FORENAME+INITIALS]), e.g.:

http://readermeter.org/Watts.Duncan_J

I also included a JSON interface to render statistics in a machine-readable format, e.g.:

http://readermeter.org/Watts.Duncan_J/json

Below is a sample of the JSON output:

{
	"author": "Duncan J Watts",
	"author_metrics":
	{
		"hr_index": "15",
		"gr_index": "26",
		"single_most_read": "140",
		"publication_count": "57",
		"bookmark_count": "760",
		"data_source": "mendeley"
	},
	"source": "http://readermeter.org/Watts.Duncan_J",
	"timestamp": "2010-09-02T15:41:08+01:00"
}
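Consuming this interface from another application is straightforward. The snippet below parses the sample report shown above with Python’s standard `json` module; note that the numeric fields arrive as JSON strings, so a consumer may want to cast them (the `parse_report` helper is my own, not part of ReaderMeter).

```python
import json

# The sample author report shown above.
SAMPLE = """
{
    "author": "Duncan J Watts",
    "author_metrics": {
        "hr_index": "15",
        "gr_index": "26",
        "single_most_read": "140",
        "publication_count": "57",
        "bookmark_count": "760",
        "data_source": "mendeley"
    },
    "source": "http://readermeter.org/Watts.Duncan_J",
    "timestamp": "2010-09-02T15:41:08+01:00"
}
"""

def parse_report(raw):
    """Return the author name and a metrics dict, casting numeric
    string fields (e.g. "15") to integers."""
    report = json.loads(raw)
    metrics = {key: (int(value) if value.isdigit() else value)
               for key, value in report["author_metrics"].items()}
    return report["author"], metrics

author, metrics = parse_report(SAMPLE)
```

A live client would fetch the same payload from the `/json` URL shown above (e.g. with `urllib.request.urlopen`) before parsing it.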

Despite being just a proof of concept (it was hacked together in a couple of nights!), ReaderMeter attracted a number of early testers who tried out its first release. Its goal is not to redefine the concept of research impact as we know it, but to complement this notion with usage data from new sources and help identify aspects of impact that may go unnoticed when we only focus on traditional, citation-based metrics. Before a mature version of ReaderMeter is available for public consumption and for integration with other services, though, several issues will need to be addressed.

1. Author name normalisation

The first issue to be tackled is the fact that the same individual author may be mentioned in a bibliographic record under a variety of spelling variants: Rod Page was among the first to spot and extensively discuss this issue, which will hopefully be addressed in the next major upgrade (unless a provision to fix this problem is directly offered by Mendeley in a future upgrade of their API).
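To make the problem concrete, a crude first pass at collapsing spelling variants is to reduce every name to a surname-plus-first-initial key. This is a sketch of the general idea, not ReaderMeter’s implementation, and it deliberately over-merges: it cannot tell two genuine homonyms apart, which is exactly the harder disambiguation problem discussed below.

```python
import unicodedata

def name_key(raw_name):
    """Collapse spelling variants of a name to a crude 'surname.initial' key.
    Handles 'Forename [Middle] Surname' and 'Surname, Forename' forms,
    and strips accents (e.g. Müller -> muller)."""
    ascii_name = (unicodedata.normalize("NFKD", raw_name)
                  .encode("ascii", "ignore").decode())
    if "," in ascii_name:
        surname, rest = [part.strip() for part in ascii_name.split(",", 1)]
    else:
        tokens = ascii_name.split()
        surname, rest = tokens[-1], " ".join(tokens[:-1])
    initial = rest.strip()[:1].lower()
    return f"{surname.lower()}.{initial}"
```

With this key, “Duncan J. Watts”, “Watts, Duncan J” and “D. J. Watts” all map to `watts.d` and can be aggregated into a single author report, at the cost of also merging any other author named Watts whose forename starts with D.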

2. Article deduplication

A similar issue affects individual bibliographic entries, as noted by Egon Willighagen among others. Given that publication metadata in reference management services can be extracted from a variety of sources, the uniqueness of a bibliographic record is far from guaranteed. As a matter of fact, several instances of the same publication can show up as distinct items, generating flawed statistics whenever individual publications and their relative impact need to be considered (as is the case when calculating the H- and G-index). To what extent crowdsourced bibliographic databases (such as those of Mendeley, CiteULike, Zotero, Connotea, and similar distributed reference management tools) can tackle the problem of article duplication as effectively as manually curated bibliographic databases is an interesting issue that sparked a heated debate (see this post by Duncan Hull and the ensuing discussion).
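A simple-minded way to merge such duplicates, sketched below as an illustration (again, not ReaderMeter’s actual approach), is to fingerprint each title by stripping case, punctuation and whitespace, and then sum the bookmark counts of records that share a fingerprint. Real deduplication would also need to compare authors, years and identifiers such as DOIs.

```python
import re
from collections import defaultdict

def title_fingerprint(title):
    """Lowercase a title and drop everything but letters and digits:
    a crude key for spotting duplicate records."""
    return re.sub(r"[^a-z0-9]", "", title.lower())

def merge_duplicates(records):
    """records: iterable of (title, bookmark_count) pairs.
    Merge records whose titles share a fingerprint, summing counts."""
    counts = defaultdict(int)
    first_seen_title = {}
    for title, count in records:
        key = title_fingerprint(title)
        counts[key] += count
        first_seen_title.setdefault(key, title)
    return [(first_seen_title[key], total) for key, total in counts.items()]
```

For instance, “Small Worlds.” with 10 bookmarks and “small worlds” with 5 collapse into one entry with 15 bookmarks, which matters because a split record would understate both indices computed from per-publication counts.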

3. Author disambiguation

A far more challenging problem is disambiguating genuine homonyms. At the moment, ReaderMeter is unable to tell the difference between two authors with an identical name. Considering that surnames like Wang appear to be shared by about 100M people on the planet, the problem of how to disambiguate authors with a common surname is not something that can be easily sorted out by a consumer service such as ReaderMeter. Global initiatives with broad institutional support, such as the ORCID project, are trying to fix this problem for good by introducing a unique author identifier system, but precisely because of their scale and ambitious goal they are unlikely to provide a viable solution in the short run.

4. Reader segmentation and selection biases

You may wonder: how genuine is data extracted from Mendeley as an indicator of an author’s actual readership? Calculating author impact metrics from the user population of a single service will always, by definition, produce skewed results, owing to different adoption rates across scientific communities and across demographic segments (e.g. academic status, language, gender) within the same community. And what about readers who don’t use any reference management tool at all? Björn Brembs posted some thoughtful considerations on why any such attempt at measuring impact based on the specific user population of a given platform or service is doomed to fail. His proposed solution, however (a universal outlet through which all scientific content consumption should happen), sounds not only like an unlikely scenario but also, in many ways, an undesirable one. Diversity is one of the key features of the open-source ecosystem, for one, and as long as interoperability is achieved (witness the example of the OAI protocol and its multiple software implementations), there is certainly no need for a single service to monopolise the research community’s attention for projects such as ReaderMeter to be realistically implemented. The next step on ReaderMeter’s roadmap will be to integrate data from a variety of content providers (such as CiteULike or Bibsonomy) that provide free access to article readership information: although not the ultimate solution to the enormous problem of user segmentation, data integration from multiple sources should help reduce the biases introduced by the population of any single service.

What’s next

I will be working in the coming days on an upgrade to address some of the most urgent issues. In the meantime, feel free to test ReaderMeter, send me your feedback and feature requests, follow the latest news on the project via Twitter, or just help spread the word!


September 11, 2010

The ages of productivity

Posted in Funding, Reading

The Undercover Economist, Tim Harford, has a good article in today’s Financial Times about the stages in life at which different professions are most productive. For example, I did a quick Google search and calculation: the average of the median ages of Nobel Prize winners in physics and chemistry is 55; for the literature and peace prizes, it’s 64. (Sorry, not going to do the full test for statistical difference today.) This distinction makes some sense: the great discoveries in the two scientific subjects are marked by innovation (something that may give way to habit with age), while excellence in literature and statesmanship benefits from vast amounts of experience.

But, in keeping with our recent discussions about reform in academia, perhaps the bigger question is whether we should be actively targeting funding to match these periods of productivity. A quote from the FT article:

Two of my favourite writers, Malcolm Gladwell and Jonah Lehrer, are worried about this – but from different perspectives. Gladwell, a Galenson fan, worries that our obsession with youthful genius will cause us to reject future late bloomers.

Lehrer has the opposite concern: that funding goes to scientists past their prime. He says the US’s National Institutes of Health (NIH) has been funding ever-older scientists. Thirty years ago, researchers in their early thirties used to receive 10 per cent of NIH grants; by 2006 the figure had fallen to 1 per cent.

From my experience in the UK, I think both groups have good, but different, funding opportunities. Established researchers are well-versed in applying for traditional call-based research grants, whereas young researchers are catered for by a number of fellowship schemes. I haven’t seen much evidence of age-based bias and, to be honest, I think anti-discrimination laws would make it difficult to explicitly exclude a group of talented researchers just because they’ve reached an arbitrary age barrier. Think of Andrew Wiles, who found a proof of Fermat’s Last Theorem but was just over the Fields Medal’s age limit of 40.

Ultimately, the top performers in these disciplines are so exceptional that it doesn’t make sense to design generalized development or funding programmes around them for the rest of us. However, we can at least take comfort that our best days may be ahead of us!
