Archive for category: Announcements

To RSS subscribers: sorry, the last post was not intended for this blog

January 30th, 2012

The explanation below may only make sense if you are reading this from an RSS reader. If you aren’t, please skip it.

Have you ever sent an email to the wrong person? Did you wish you could pull it back? I just did this, but with our blog (!).

I was populating a WP install of what will become our company blog, and then I did something nasty…

I was about to post from my desktop tool (Word 2010; it used to be Live Writer), and I hit ‘post’ on the wrong doc: an ‘about’ page, Google-translated from German, with prose as horrible as one can get. And the blog selected was the wrong one. I immediately went to the admin page and removed the post, so readers wouldn’t see it. But the RSS feed is another story. For a blog that had been dormant for a year, we still have over 4,000 followers, and this was the first post to break a long silent stretch. I’d hate for you, the reader, to think we had come back to life, only to find a nonsensical blog post.

Since we use FeedBurner, removing the post locally didn’t help. I had to log in at FeedBurner and try to remove it from there. They have a ‘nuke’ option that should force a refresh, but it just didn’t work: I tried a few times, and the nonsensical post was still there. The only option I could think of was to delete the feed from FeedBurner entirely, in the hope that they would stop broadcasting it. But it was too late; everyone subscribed to the RSS feed had already received the post.

I apologize for hijacking your attention without a good reason. I’m sorry you got dragged into this, but silly mistakes do happen. I will be more careful in the future.

Since the FeedBurner feed is now gone, if you want to keep receiving updates you’ll need to resubscribe: simply click the RSS icon in the address bar again and follow the steps there. In any case, we are not dead, and we will keep writing whenever we find something worth writing about.

ReaderMeter: Crowdsourcing research impact

September 22nd, 2010

Readers of this blog are no strangers to my ramblings on soft peer review, social metrics and post-publication impact measures:

  • can we measure the impact of scientific research based on usage data from collaborative annotation systems, social bookmarking services and social media?
  • should we expect major discrepancies between citation-based and readership-based impact measures?
  • are online reference management systems a more robust data source for measuring scholarly readership than traditional usage metrics (e.g. downloads, clickthrough rates)?

These are some of the questions addressed in my COOP ’08 paper. Jason Priem also discusses the prospects of what he calls “scientometrics 2.0” in a recent First Monday article, and it is really exciting to see growing interest in these ideas from both the scientific and the STM publishing communities.

We now need to think of ways to put these ideas into practice. Science Online London 2010 earlier this month offered a great chance to test a real-world application of these ideas in front of a tech-friendly audience, and this post is meant as its official announcement.

ReaderMeter is a proof-of-concept application showcasing the potential of readership data obtained from reference management tools. Following the announcement of the Mendeley API, I decided to see what could be built on top of the data exposed by Mendeley. The first idea was a mashup aggregating author-level readership statistics, based on the number of bookmarks collected by each of an author’s publications. ReaderMeter queries the data provider’s API for articles matching a given author string, parses the response, and generates a report with several metrics that attempt to quantify the relative impact of an author’s scientific production based on its consumption by a population of readers (in this case, the 500K-strong Mendeley user base):

The figure above shows a screenshot of ReaderMeter’s results for social scientist Duncan J Watts, displaying global bookmark statistics and the breakdown of readers by publication, as well as two indices (the HR index and the GR index) which I compute using bookmarks as the input variable, by analogy with the popular citation-based h-index and g-index. Clicking on a reference lets you drill down to readership statistics for a given publication, including the scientific discipline, academic status and geographic location of the readers of an individual document:

A handy permanent URL is generated to link to ReaderMeter’s author reports (using the scheme: [SURNAME].[FORENAME+INITIALS]), e.g.:

I also included a JSON interface to render statistics in a machine-readable format, e.g.:

Below is a sample of the JSON output:

        "author": "Duncan J Watts",
                "hr_index": "15",
                "gr_index": "26",
                "single_most_read": "140",
                "publication_count": "57",
                "bookmark_count": "760",
                "data_source": "mendeley"
        "source": "",
        "timestamp": "2010-09-02T15:41:08+01:00"

Despite being just a proof of concept (it was hacked together in a couple of nights!), ReaderMeter attracted a number of early testers who gave its first release a try. Its goal is not to redefine the concept of research impact as we know it, but to complement that notion with usage data from new sources, helping identify aspects of impact that go unnoticed when we focus only on traditional, citation-based metrics. Before a mature version of ReaderMeter is available for public consumption and for integration with other services, though, several issues will need to be addressed.

1. Author name normalisation

The first issue to be tackled is that the same author may appear in bibliographic records under a variety of spelling variants. Rod Page was among the first to spot and extensively discuss this issue, which will hopefully be addressed in the next major upgrade (unless Mendeley fixes the problem directly in a future upgrade of their API).
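One common stopgap — to be clear, a sketch of a generic heuristic, not what ReaderMeter or Mendeley actually does — is to collapse name variants onto a crude (surname, first initial) key:

```python
def name_key(author):
    """Collapse spelling variants such as 'Duncan J Watts', 'D. J. Watts'
    and 'Watts, Duncan' onto a crude (surname, first initial) key.
    Lossy: it also merges genuinely different people (see homonyms below)."""
    if "," in author:
        # 'Surname, Forename' order
        surname, rest = author.split(",", 1)
    else:
        # Otherwise assume the last token is the surname
        tokens = author.split()
        surname, rest = tokens[-1], " ".join(tokens[:-1])
    return (surname.strip().lower(), rest.strip()[:1].lower())

variants = ["Duncan J Watts", "D. J. Watts", "Watts, Duncan"]
print({name_key(v) for v in variants})  # all three map to ('watts', 'd')
```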

2. Article deduplication

A similar issue affects individual bibliographic entries, as noted by Egon Willighagen among others. Given that publication metadata in reference management services can be extracted from a variety of sources, the uniqueness of a bibliographic record is far from guaranteed. Several instances of the same publication can show up as distinct items, which generates flawed statistics whenever individual publications and their relative impact need to be considered (as when calculating the H- and G-index). To what extent crowdsourced bibliographic databases (such as those of Mendeley, CiteULike, Zotero, Connotea and similar distributed reference management tools) can tackle article duplication as effectively as manually curated databases is an interesting question that sparked a heated debate (see this post by Duncan Hull and the ensuing discussion).
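To illustrate why duplicates skew the statistics, here is a naive title-normalisation merge — again a sketch under my own assumptions, not the approach any of these services uses; real deduplication would also compare DOIs, years and author lists:

```python
import re

def dedup_key(record):
    """Crude duplicate key: lowercase the title and strip punctuation,
    so two spellings of the same paper collapse to one entry."""
    title = re.sub(r"[^a-z0-9 ]", "", record["title"].lower())
    return " ".join(title.split())

# Two records for the same (real) Watts paper, with invented bookmark counts.
records = [
    {"title": "Collective dynamics of 'small-world' networks", "bookmarks": 300},
    {"title": "Collective Dynamics of Small-World Networks.", "bookmarks": 12},
]

merged = {}
for r in records:
    key = dedup_key(r)
    merged[key] = merged.get(key, 0) + r["bookmarks"]

print(len(merged), sum(merged.values()))  # → 1 312
```

Left unmerged, the split entry would lower the paper’s rank in the bookmark distribution and could depress both the HR and GR indices.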

3. Author disambiguation

A far more challenging problem is disambiguating genuine homonyms. At the moment, ReaderMeter is unable to tell the difference between two authors with an identical name. Considering that surnames like Wang are shared by roughly 100 million people on the planet, disambiguating authors with a common surname is not something that can be easily sorted out by a consumer service such as ReaderMeter. Global initiatives with broad institutional support, such as the ORCID project, are trying to fix this problem for good by introducing a unique author identifier system, but precisely because of their scale and ambitious goals they are unlikely to provide a viable solution in the short run.

4. Reader segmentation and selection biases

You may wonder: how genuine is data extracted from Mendeley as an indicator of an author’s actual readership? Calculating author impact metrics from the user population of a specific service will, by definition, produce skewed results, due to different adoption rates across scientific communities and across demographic segments (e.g. academic status, language, gender) within the same community. And what about readers who don’t use any reference management tool at all? Björn Brembs posted some thoughtful considerations on why any attempt at measuring impact via the user population of a given platform or service is doomed to fail.

His proposed solution, however (a universal outlet where all scientific content consumption should happen), sounds not only like an unlikely scenario but in many ways an undesirable one. Diversity is one of the key features of the open-source ecosystem, for one, and as long as interoperability is achieved (witness the OAI protocol and its multiple software implementations), no single service needs to monopolise the research community’s attention for projects such as ReaderMeter to be realistically implemented.

The next step on ReaderMeter’s roadmap is to integrate data from a variety of content providers (such as CiteULike or Bibsonomy) that offer free access to article readership information. Although not the ultimate solution to the enormous problem of user segmentation, integrating data from multiple sources should help reduce the biases introduced by the population of any single service.

What’s next

I will be working in the coming days on an upgrade addressing the most urgent of these issues. In the meantime, feel free to test ReaderMeter, send me your feedback, follow the latest news on the project, or just help spread the word!

Tenure denial ends in a shooting that kills three. Columbine in academia?

February 14th, 2010

This is a quick note that may not surprise most people. Amy Bishop, at the University of Alabama in Huntsville, just killed three colleagues and injured several more, in what seems to be a reaction to having been denied tenure. A Harvard PhD, Bishop had grants and sat on a startup’s board, the marks of a successful academic career. She was also a mother of four. Can an academic work environment be so toxic as to motivate murder? She may also have been suffering from major depression and other mental health issues at the time.

The evidence that an academic career is too stressful keeps piling up. An academic deals with rejection constantly, from both peers and students, gets paid like a boy scout, and works every waking hour. This should be a wake-up call to all academics who feel tenure is the center of their lives.

A Previous Shooting Death at the Hands of Alabama Suspect

UPDATE: removed wrong photo.

Review of Google Wave as a scholarly HTML editor

November 17th, 2009


Peter Sefton wrote a series of posts on Wave. He has published on Scholarly HTML, so I read what he has to say attentively. What follows are some highlights of his posts, and my thinking about where things are going. There are at least four things that bother me about Wave, as it stands today:

1- It’s not really HTML

I thought that waves being XML documents would be a good thing, because it would separate content from formatting. But it seems they made some strange decisions about how to represent formatting, with a “very tenuous relationship to HTML”. For example:

While there is talk of ‘XML documents’ in the whitepapers etc, a wave document in the current implementation is apparently a series of lines of text. All formatting and what you might think of as structure, such as whether something is a heading or not, is considered an annotation.


If you read only one Google Wave post, read this one

October 2nd, 2009

After 100,000 invites went out yesterday, the web is boiling with reviews. The best no-nonsense explanation I found is a chapter of a forthcoming O’Reilly book.

If you got an account, look me up: I’ve been on the dev preview (intended for developers to build on, but otherwise identical). My username is