Soft peer review? Social software and distributed scientific evaluation

February 21st, 2007 by dario
For an extended version of this post, see also:
D. Taraborelli (2008), Soft peer review. Social software and distributed scientific evaluation, Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP 08), Carry-Le-Rouet, France, May 20-23, 2008

Online reference managers are extraordinary productivity tools, but it would be a mistake to take this as their primary interest for the academic community. As is often the case with social software services, online reference managers are becoming powerful and costless solutions for collecting large sets of metadata, in this case collaborative metadata on scientific literature. Taken at the individual level, such metadata (i.e. tags and ratings added by individual users) are hardly of interest, but on a large scale I suspect they will provide information capable of outperforming more traditional evaluation processes in terms of coverage, speed and efficiency. Collaborative metadata cannot offer the same guarantees as standard selection processes (insofar as they do not rely on experts’ reviews and are less immune to biases and manipulations). However, they are an interesting solution for producing evaluative representations of scientific content on a large scale.

I recently had the chance to meet one of the developers of CiteULike, which was a good occasion to think about the impact these tools may have in the long run on scientific evaluation and refereeing processes. My feeling is that academic content providers (including publishers, scientific portals and bibliographic databases) will be urged to integrate metadata from social software services as soon as the potential of such services is fully acknowledged. In this post I will try to unpack this idea.

Traditional peer review has been criticised on various grounds, but possibly the major limitation it currently faces is scalability, i.e. the ability to cope with an increasingly large number of submissions, which—given the limited number of available reviewers and time constraints on the publication cycle—results in a relatively small acceptance rate for high-quality journals. Although I don’t think social software will ever replace hard evaluation processes such as traditional peer review, I suspect that soft evaluation systems (such as those made possible by social software) will soon take over in terms of efficiency and scalability. The following is a list of areas in which I expect social software services targeted at the academic community to challenge traditional evaluation processes.

Semantic metadata

A widely acknowledged application of tags as collaborative metadata is to use them as indicators of semantic relevance. Tagging is the most popular example of how social software, according to its advocates, helped overcome the limits of traditional approaches to content categorization. Collaboratively produced tags can be used to extract similarity patterns or for automatic clustering. In the case of academic literature, tags can provide extensive lists of keywords for scientific papers, often more accurate and descriptive than those originally added by the author. The following is an example of tags used by users to describe a popular article about tagging, ordered by the number of users who selected each tag.

Similar lists can be found in CiteULike or Connotea, although neither of these services seems to have realized so far how important it is to rank tags by the number of users who applied them to a specific item. Services that allow aggregating tags for specific items from multiple users are in the best position to become effortless providers of reliable semantic metadata for large sets of scientific articles.
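As a toy illustration, here is a minimal Python sketch of this kind of aggregation. The bookmark records and names below are invented; no actual Connotea or CiteULike code or API is assumed.

```python
from collections import Counter

# Invented bookmark records: (user, item, tag). In a real service these
# would come from the reference manager's database.
bookmarks = [
    ("alice", "doi:10.1000/x1", "tagging"),
    ("bob",   "doi:10.1000/x1", "tagging"),
    ("bob",   "doi:10.1000/x1", "folksonomy"),
    ("carol", "doi:10.1000/x1", "tagging"),
]

def ranked_tags(records, item):
    """Rank an item's tags by how many distinct users applied them."""
    users_per_tag = {}
    for user, it, tag in records:
        if it == item:
            users_per_tag.setdefault(tag, set()).add(user)
    counts = Counter({tag: len(users) for tag, users in users_per_tag.items()})
    return counts.most_common()

print(ranked_tags(bookmarks, "doi:10.1000/x1"))
# [('tagging', 3), ('folksonomy', 1)]
```

Counting distinct users rather than raw tag applications is the point: a tag applied once each by a hundred users is stronger evidence of relevance than a tag applied a hundred times by one user.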


Popularity

Another fundamental type of metadata that can be extracted from social software is popularity indicators. Looking at how many users bookmarked an item in their personal reference library can provide a reliable measure of that item’s popularity within a given community. Understandably, academically oriented services (like CiteSeer or Web of Science) have so far focused on citations, the standard indicator of a paper’s authority in the bibliometric tradition. My feeling is that popularity indicators from online reference managers will eventually become a factor as crucial as citations for evaluating scientific content. This may sound paradoxical if we consider that authority measures were introduced precisely to avoid the typical biases of popularity measurements. But insofar as popularity data are extracted from the natural behavior of users of a given service (e.g. users bookmarking an item because they are genuinely interested in reading it, not to boost its popularity), they can provide fairly accurate information on which papers are frequently read and cited in a given area of science. It would actually be interesting to conduct a study on a representative sample of articles comparing the distribution of citations with the distribution of popularity indicators (such as bookmarks in online reference managers) to see if there is any significant correlation.

Some social bookmarking services have recently realized the strategic importance of redistributing the popularity data they collect, introducing badges that external websites can display showing the number of users who bookmarked a specific URL. Similar ideas have been in circulation for years (consider for example Google’s PageRank indicator or Alexa’s rank in their browser toolbars), but it seems that social software developers have only recently caught on to this idea.
Connotea, CiteULike and similar services should consider giving back to content providers (from which they borrow metadata) the ability to display the popularity indicators they produce. When this happens, it’s not unlikely that publishers will start displaying popularity indicators on their websites (e.g. “this article was bookmarked 10,234 times in Connotea”) to promote their content.
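As a sketch of how such a citation/bookmark comparison could be run, here is a self-contained Spearman rank correlation on invented counts. In practice one would use a statistics library and real data; the hand-rolled version just keeps the assumptions visible.

```python
def rank(values):
    """Assign 1-based ranks, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Group consecutive equal values into a tied block.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

citations = [120, 45, 3, 67, 10]   # invented citation counts
bookmarks = [300, 80, 5, 150, 12]  # invented bookmark counts
print(round(spearman(citations, bookmarks), 3))
# 1.0 (identical rank orders in this invented sample)
```

With real data, a strong positive correlation would suggest bookmarks track the same signal as citations, only faster; a weak one, that they capture something citations miss.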


Hotness

“Hotness” can be described as an indicator of short-term popularity, a useful measure for identifying emerging trends within specific communities. Mapping popularity distributions on a temporal scale is actually a common practice: authoritative indicators such as the ISI Impact Factor take into account the frequency of citations articles receive within specific timeframes. Similar criteria are used by social software services (such as Technorati) to determine “what’s hot” over the last few days of activity.

Online reference managers have recently started to look at such indicators. In its current implementation, CiteULike extracts measures of hotness by explicitly asking users to vote for articles they like. The goal, CiteULike developer Richard Cameron explains, is to “catch influential papers as soon as possible after publication”. I think in this case they got it wrong. Relying on votes (whether or not they are combined with other metrics) is certainly not the best way of extracting meaningful popularity information from users: most users who use these services for work won’t ever bother to vote, whereas a large part of those who actively vote may do so for opportunistic reasons. I believe that in order to be reliable, popularity measures should rely on patterns that are implicitly generated by user behavior: the best way to know what users prefer is certainly not to ask them, but to extract meaningful patterns from what they naturally do when using a service. Hopefully online reference management services will soon realize the importance of extracting measures of recent popularity in an implicit and automatic way: most mature social software projects have faced this issue by avoiding the use of explicit votes.
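The kind of implicit, vote-free hotness measure argued for above could be as simple as counting bookmarking events inside a sliding time window. A minimal sketch, with invented items and timestamps:

```python
from datetime import datetime, timedelta

# Invented bookmark log: (item, timestamp of the bookmarking event).
log = [
    ("paper-a", datetime(2007, 2, 1)),
    ("paper-a", datetime(2007, 2, 18)),
    ("paper-a", datetime(2007, 2, 20)),
    ("paper-b", datetime(2006, 11, 5)),
    ("paper-b", datetime(2007, 2, 19)),
]

def hotness(events, now, window_days=30):
    """Rank items by bookmarks received in the last `window_days` days.

    No votes involved: the signal is the bookmarking behavior itself.
    """
    cutoff = now - timedelta(days=window_days)
    counts = {}
    for item, ts in events:
        if ts >= cutoff:
            counts[item] = counts.get(item, 0) + 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

print(hotness(log, now=datetime(2007, 2, 21)))
# [('paper-a', 3), ('paper-b', 1)]
```

A production system would want decay weighting rather than a hard cutoff, but the principle is the same: recent natural behavior, not explicit voting, drives the ranking.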

Collaborative annotation

One of the most underrated (and, in my opinion, most promising) features of online reference managers is the ability they give users to collaboratively annotate content. Users can add reviews to items they bookmark, thus producing lists of collaborative annotations. This is interesting because adding annotations is something individual users naturally do when bookmarking references in their library. The problem with such reviews is that they can hardly be used to extract meaningful evaluative data on a large scale.

The obvious reason why collaborative annotation cannot be compared, in this sense, with traditional refereeing is that the expertise of the reviewers is questionable. Is there a viable strategy for making collaborative annotation more reliable while maintaining the advantages of social software? One solution would be to rate users as a function of their expertise. Asking users to rate each other is definitely not the way to go: as with “hotness” measures based on explicit votes, mutual user rating is an easily gamed strategy.

The solution I’d like to suggest is that online reference management systems implement an idea similar to anonymous refereeing, while making the most of their social software nature. The most straightforward way to achieve this would be, I believe, a wiki-like system coupled with anonymous rating of user contributions. Each item in the reference database would be matched to a wiki page where users could freely contribute their comments and annotations. Crucially, each annotation would be displayed anonymously to other users, who would then have the possibility of saving it in their own library if they consider it useful. This behavior (i.e. importing useful annotations) could then be taken as a positive rating for the author of the annotation, whose overall score would be the number of her anonymous contributions that other users imported. It is now easy to see how user expertise could be measured with respect to different topics. If user A received a large number of positive ratings for comments she posted on papers massively tagged with “dna”, this will be an indicator of her expertise on the “dna” topic within the user community. User A will have different degrees of expertise for topics “tag1”, “tag2” and “tag3”, as a function of how useful other users found her anonymous annotations to papers tagged respectively with “tag1”, “tag2” and “tag3”.
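A toy sketch of the scoring scheme just described, with invented users and papers: each time a reader imports an anonymous annotation, its author is credited under the tags of the annotated paper.

```python
from collections import defaultdict

# Invented data. Tags attached to each paper:
paper_tags = {
    "paper-1": {"dna", "genomics"},
    "paper-2": {"dna"},
}
# Import events: (annotation_author, annotated_paper, importing_reader).
import_events = [
    ("userA", "paper-1", "reader1"),
    ("userA", "paper-1", "reader2"),
    ("userA", "paper-2", "reader1"),
    ("userB", "paper-1", "reader3"),
]

def expertise(events, tags_by_paper):
    """Per-tag expertise scores: one point per imported annotation,
    credited under each tag of the annotated paper."""
    scores = defaultdict(lambda: defaultdict(int))
    for author, paper, _reader in events:
        for tag in tags_by_paper[paper]:
            scores[author][tag] += 1
    return {user: dict(per_tag) for user, per_tag in scores.items()}

print(expertise(import_events, paper_tags))
# userA: {'dna': 3, 'genomics': 2}; userB: {'dna': 1, 'genomics': 1}
```

Because the rating is a side effect of a genuinely useful action (saving an annotation one wants to keep), it is harder to game than mutual user rating or explicit votes.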

This is just an example, and several variations on this scheme are possible. My feeling is that allowing indirect rating of metadata posted via anonymous contributions would make it possible to implement a sort of soft peer review process at a large scale. This would allow social software services to aggregate much larger sets of evaluative metadata about scientific literature than traditional reviewing models will ever be able to provide.

The place of social software in scientific knowledge production

I’ve suggested a few ways in which online reference management systems could be used to extract evaluative indicators of scientific literature from user behavior. In the long run, I expect these bottom-up, distributed processes to become more and more valuable to the academic community, and traditional publishers to become increasingly aware of the usefulness of metadata collected through social software.

This will be possible if online reference management services start developing facilities (ideally programmable interfaces, or APIs) to expose the data they collect and feed them back to potential consumers (publishers, individual users or other services). The future place of online reference managers (the way I wish it to be) is that of intermediate providers of collaborative metadata, between information producers and information consumers. To quote a recent post on the future of the mashup economy:
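Purely as an illustration of the kind of feed such an interface might expose (the fields below are invented, not part of any real Connotea or CiteULike API):

```python
import json

# An invented example of the metadata a reference manager could feed
# back to publishers and other services for a single article.
article_feed = {
    "doi": "10.1000/x1",
    "bookmarks_total": 10234,
    "bookmarks_last_30_days": 112,
    "top_tags": [["tagging", 3], ["folksonomy", 1]],
    "annotation_imports": 47,
}
print(json.dumps(article_feed, indent=2))
```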

[Y]ou don’t have to have your own data to make money off of data access. Right now, there’s revenue to be had in acting as a one-stop shop for mashup developers, essentially sticking yourself right between data providers and data consumers.

I think a similar strategy could justify a strong presence of these services in the scientific arena. If they succeed in doing this, they will come to occupy a crucial function in the system of scientific knowledge production and challenge traditional processes of scientific content evaluation.

tags: social bookmarking


58 Responses to “Soft peer review? Social software and distributed scientific evaluation”

  1. Shane Says:

    That’s a really interesting post Dario. Lots of good ideas.

    Regarding explicit rating systems, despite their flaws, I frequently do make use of them. The main examples that spring to mind are Metacritic and Amazon. The most relevant example of a folk-generated review system is Amazon’s user comment system: you can get an idea of the interest of a book by the number of comments, an idea of quality based on ratings, and an idea of the quality of the reviewers based on your own judgment and on people rating whether they thought the review/comment was useful or not. Plus you have the option of non-anonymous reviewers with certain accreditations (e.g. top 1000 reviewer etc.), and commenting on other people’s reviews. Of course it’s not perfect and open to abuse, but I do find it useful and informative.

    So, to summarise, for each academic reference you could have:
    Number of citations
    Number of people storing the article in their reference collection
    User generated tags, ordered by popularity
    Number of comments
    (possibly) explicit rating
    Time based metrics of these (e.g. number of comments in last 6 months etc)
    The comments and reviews themselves, and possibly the number of times they are stored
    Another possible metric is number of times an abstract or article is read/accessed, or even number of times read and not stored!

    Obviously though, we are a long way off. One big problem is that there are lots of different online reference managers: CiteULike, Connotea, RefWorks, EndNote now has an online offering, Zotero plans to, and there are a couple of others whose names don’t spring to mind. And social networking sites are only as successful as their user base. Perhaps we need a “myspace” of academic reference managers, the one that everyone uses because everyone else does. And we need people to use online reference managers. Right now, pretty much none of the PhD students I know (apart from a few geeks like me) use online reference management, and these are the “next generation” of academics.

    One idea from what you suggest is that indexing and search tools could index measures of how many times an article is accessed and, most importantly, exported to a reference manager. So when using Web of Science, Scopus, Google Scholar etc. it would index when a reference is exported to a reference manager – which has the advantage that it doesn’t depend on using online reference management. Perhaps Google could be the one to innovate here.

  2. Mark Grimshaw Says:

    Some useful ideas there. It’s worth noting that WIKINDX has two rankings for individual resources. One is calculated on the number of views of a resource while the other is manually input by the admin allowing a subjective assessment compared to the more objective(-ly calculated) assessment of the first method. The tag clouds based on keywords, categories, authors, journal collections etc. may be ranked according to the number of resources each has.

    Of course, everyone has their own preferences as to how to rank — what weight or importance to give to particular parameters. I’m planning that the next version of WIKINDX, in addition to giving a default weighting formula, will use a simple templating system (as it does for creating/editing bibliographic styles) to allow individual users of the wikindx to create and edit their own ranking systems. So, for example, they can include some or all of other users’ comments, the number of quotations stored for a particular resource, the number of times it has been viewed, the number of citations to it from other resources, how many user bibliographies the resource appears in, etc., and can apply different weightings to each of these to arrive at a final ranking. Perhaps different weightings can be applied depending on which user in the wikindx actually input the resource to begin with, allowing some assessment of the expertise of that user.

  3. Euan Adie Says:

    Hi Dario,

    Nice post. Good ideas, too, as the other comments say.

    There’s a bit of a chicken and egg thing going on when it comes to the social features in academic social bookmarking systems: audiences are relatively small – compared to Digg or Amazon, anyway – and they’re made up of people with diverse interests. Coupled with the 90-9-1 rule (90% of visitors never contribute, 9% contribute a little, 1% make the vast majority of contributions) this means that it’s difficult to get useful metrics out of explicit rating systems: so I’m with you on leveraging usage patterns and other implicit measures of popularity instead. Of course, if people *did* use explicit rating systems more then they’d become more useful, so more people would use them… etc.

    Your wiki per paper idea is pretty good, though it’d be pretty barren at first (the reward for contributing – the respect of your peers as represented by the ‘reviewer’ ratings, I guess – would be low until there were more pages up, more people using the system and reading annotations and so on).

    I like the badges idea, too. That’s something we’ve thought about before… give us a week or two. :)

    - Euan (@ Connotea)

  4. Gerry McKiernan Says:

    Thank you for your vision of an enlightened view of Peer Review.
    I too have been interested in Alternative Peer Review Models. I have summarized various approaches in an article titled

    “Peer Review in the Internet Age: Five (5) Easy Pieces,” _Against the Grain_ 16, no. 3 (June 2004): 50, 52-55. Self-archived at

    I have outlined my version of a New Peer Review Model in a presentation given in a 2005 PPT titled

    “Wikis: Disruptive Technologies for Dynamic Possibilities.” Self-archived at

    I have created a blog titled “Disruptive Scholarship” about this view


    Gerry McKiernan
    Disruptive Librarian
    Iowa State University
    Ames IA 50011

  5. Christina Pikas Says:

    I think some STM publishers and aggregators are approaching a few of your points. Several, for example, provide information on hot articles by downloads. Ei Engineering Village has tags now and also buttons to tag records in external bookmarking services (I think other tagging places will be added very soon). Dissect Medicine is sort of a Digg for biomed, but I’m not sure it’s going to get to critical mass even if it is a good idea. It’s sort of in your realm of soft peer review.

    In any case, I think you did a good job of pulling it all together, thanks!

  6. atom prober Says:

    Interesting post. I think that paper popularity (either through the traditional “impact factor” or by looking at Connotea or CiteULike) is already exploited during the writing process–popular papers tend to get more cites & become even more popular.

    I personally agree that the mere act of bookmarking a reference is a “popularity vote.” However, this too can be gamed, and anyone who follows these services knows that spammers do get through. Rating and commenting systems still have value, as Amazon has shown. Yes, some small subset of commenters may exploit the system and give dishonest reviews of their own products or products by competitors. But a discerning reader will usually be able to see through this. And popularity/hotness is hardly the same as “good.” Unique and intelligible comments do help to weed out spammers who would just create multiple accounts to bookmark the same thing to increase its apparent popularity. Truly popular entities will tend to have more ratings and comments, and so there is an even greater chance that there will be real value in them. Review aggregation and evaluation are possible: OpenID and other systems make evaluating the credibility of reviewers (even across many different servers) an achievable goal. And I think “web of trust” systems could easily identify anyone who frequently authored original content or participated in commenting/review.

    Anonymous/public review has been tried by no less than Nature.

  7. atom prober Says:

    I don’t see different online reference managers as a problem. If we had one true reference manager, we’d still have lots and lots of publishers and lots and lots of journals for each publisher. Gathering data from disparate sources and sharing it with different destinations is doable & I think Zotero and HubMed and refbase have shown that the right thing to do is to make it easy to get data in and out. Embedded metadata formats like COinS and unAPI enable sharing of reference data and, as discussed before, OpenID and other systems can identify authors and reviewers. It is already too late for an academic myspace–too many researchers are in love with their personal choice of tools. But this is really a good thing–these tools “suck a lot less” than a monolithic myspace.

  8. Patricia’s Blog » Blog Archive » Hard and Soft Peer Review Says:

    [...] Soft peer review? Social software and distributed scientific evaluation, Academic Productivity. [...]

  9. Yihong Ding Says:

    Excellent article. I enjoyed reading it, but I need more time to digest it before I can make more constructive comments. In the meantime, I want to mention that I am currently writing an article about the evolution of the World Wide Web. It is a new theory that compares the web’s evolution to a human’s growth. I have brought in many thoughts from the social sciences and discovered a unique evolutionary orbit of the World Wide Web. Currently, I am writing the part that predicts the next generation of the World Wide Web based on this theory. I hope I may finish it in the coming weeks. Hopefully we will have chances to share our ideas in the future.


  10. Hilary Says:

    It hasn’t been mentioned yet, but Connotea does provide an API.

  11. dario Says:

    Hilary, you’re right—I know Connotea already provides an API, but I doubt whether it exposes most of the metrics I refer to in my post, correct me if I’m wrong.

  12. Science Library Pad Says:

    soft peer review…

    Via Library 2.0 – Social Software and New Opportunities for Peer Review, I find a fantastic posting about the many different ways in which formal hard peer review can be enhanced by open web technologies: Academic Productivity blog – Soft…

  13. : Blog Archive : links for 2007-03-09 Says:

    [...] Academic Productivity » Soft peer review? Social software and distributed scientific evaluation (tags: academic publishing scholarlycommunication toread) Bookmark: These icons link to social bookmarking sites where readers can share and discover new web pages. [...]

  14. Sarah Elkins Says:

    Rather than tying the info to wiki pages, how about autogenerated blog entries? See, for example, the explanation of tying a library’s online public access catalog (OPAC) to WordPress (one entry per book in the catalog): “Why misuse WordPress that way? WordPress has a few things we care about built-in: permalinks, comments, and trackbacks (and a good comment spam filter), just to start. But it also offers something we’ve never seen in a library application before: access to a community of knowledge, programmers, and designers outside libraries.”

  15. Pedro Beltrao Says:

    I have tried some of these ideas on a small scale. Social bookmarking can be used as a proxy for highly cited papers, and it can be used to predict gene function.

  16. Weblog Academic productivity « Dee’tjes Says:

    [...] blogs about research, ’social software’, search engines and the like. An interesting post, for example: Soft peer review? Social software and distributed scientific evaluation. Their most recent is: Comparison of academic search engines and bibliographic software. [...]

  17. Jonathan Edson Says:

    I hate to be late to the party — but so much of this post reflects my own views of things that are happening and should happen in the academic research process. As others have pointed out, there is a lot to digest, but one thing struck me as being somewhat different from my own hope about how this plays out. While I think it would be interesting to have the academic social media sites (CiteULike, Connotea, Carmun) make ranking data available to publishers, I think the more important flow of data needs to be the other way. The social media sites offer a great opportunity for publishers to promote their content — but we need their help to extract metadata and, more especially, to generate localized permissions more effectively than we can today. This could be done with broader adoption of standards or even on a database-by-database basis. It seems to me that too much context is lost for what a popularity or explicit rating means working the other way. Also, I cannot imagine the databases wanting to promote the fact that they contain poorly judged or low-popularity articles — which at some level is part of the value of social filtering in the first place.

  18. sozlog » Blog Archive » Old school. The academic elite Says:

    [...] social forces resulting from the logic of folksonomy. My guess is that these mechanisms represent a soft p2p-review ex post complementing the p2p review ex ante of a publication. They give evidence that the very attention [...]

  19. edernet » Blog Archive » ej n. 0 2007: peer reviewing Says:

    [...] Taraborelli D., Soft peer review? Social software and distributed scientific evaluation, 2007, [Blog post] [...]

  20. shane Says:

    I have just discovered an online public access, peer-reviewed journal whose funding model is that the paper authors pay a fee (around $1200) for publication. What is interesting about this journal is that it has a nice implementation of a number of your ideas. It has a rating system, with an overall rating plus a breakdown for insight, reliability and style. It has shared non-anonymous annotations, and it has a discussion facility. The few articles I looked at had no annotations and only a couple of short discussion comments, which highlights to me again that the success of soft peer review depends on a large and active user base, but it struck me as one of the best implemented online journal sites I have seen.

  21. edernet » Blog Archive » La natura del libro (2) Says:

    [...] Other elements of change derive from the nature of the “container”, that is, from the communication medium one uses. According to Regalzi, the book will find its destination in digital form; the network, however, allows the creation of tools whose function is complementary to that of the book, whether traditional or electronic. I have some doubts that the book is not destined to disappear. McLuhan’s statement “the medium is the message” is not a slogan. The architecture of the network and of the Web is transforming the message, and I firmly believe that the shift from text to hypertext will have important consequences for the formats of future scientific production. The same holds for other publications, such as journals. The nature of hypertext has favored the birth of authoring and publishing tools such as wikis and blogs which, presumably, will be able to replace scientific journals (and the software of the future certainly will). Today’s researcher can be his own publisher and be evaluated, ex post, both through traditional peer review (by so-called “overlay journals”) and, better still, through soft peer review procedures. [...]

  22. Academic Productivity » CiteULike upgraded: new team-oriented features Says:

    [...] users we accept to give away a little bit of our privacy to gain a lot of valuable information and socially aggregated metadata in return. Social enrichment of my own reference library would not be possible if everyone opted [...]

  23. Harvard new policy: make your scholarly articles available free online | Academic Productivity Says:

    [...] Soft peer review? Social software and distributed scientific evaluation [...]

  24. PLE and Soft Peer review « Viplav Baxi’s Meanderings Says:

    [...] vbaxi I read about soft peer reviews on George Siemens’ blog and immediately went on to read more about the concept. I am very intrigued because of a discussion I had not long ago with an [...]

  25. Citegeist » Random thoughts on tagging Says:

    [...] Dempsey references a post by Dario Taraborelli regarding tagging and scientific communication, the implications of which are that tagging and [...]

  26. drivers Says:

    I think web2.0 is focused on the development of future networks

  27. Soft Maniac Says:

    Meta tags used not only for visitor. It helps google “undestand” themes of your posts.

  28. Signs that social scholarship is catching on in the humanities « Digital Scholarship in the Humanities Says:

    [...] reviewers, new approaches to peer review engage a larger community in evaluation and leverage collaborative bookmarking and social tagging applications to determine the impact of a work. For example, in preparing his book Expressive Processing: [...]

  29. Software loader Says:

    Does social networking have an ideal conception? What do you think about it?

  30. New Models of Social Research in the Humanities? « Humanities Computing and Media Centre Says:

    [...] the availability of tools (like Seasr in the US, and TAPoR in Canada) to support collaboration, new forms of peer review, support for collaboration by funding agencies, and a new emphasis by universities on community as [...]

  31. Open Culture: la conversazione è cominciata « The Geek Librarian Says:

    [...] social modes of accredited online publishing: Researchblogging, OpenWetWare, UsefulChem, soft peer review, and, in a word, Science [...]

  32. Wiser Rocker Says:

    I think that much of this article is beyond the realm of my comprehension, but I can comment on a very popular site that has had success with self-aggregation of information, news, etc. It is perhaps the best example I can think of where the viewers are the editors and decide which articles and news stories are important. I have watched the site change over recent years from something that was truly tech-related to a site populated by stories political in nature. This shows, to me anyway, that people would perhaps be interested in a similar site for scholarly articles.

  33. Antimaulnetizm Says:

    Very interesting idea and prediction.

    How might blogs figure into this?

    Blogging software combined with Google gives unprecedented power to anybody with something interesting to say. Science is not immune to this trend.

    We’re rapidly approaching a point at which the scientific publisher, in the traditional sense, becomes irrelevant.

    At that point Google Page Rank will matter a whole lot more than Impact Factor. And open scientific information aggregators, possibly relying on social networking phenomena such as tagging, would play a very important role in that.

  34. Social Bookmarking Nut Says:

    Soft Maniac said, “Meta tags used not only for visitor. It helps google “undestand” themes of your posts”. I have been reading and hearing that meta tags play an almost non-existent role as far as Google is concerned. Lots of people seem to agree that a few years back meta tags indeed played an important role in telling Google what the theme of your post was, but supposedly the role of meta tags has diminished. Any thoughts?

    One more thing… that graphic above of the brain and the meat grinder: nasty!

  35. adwords ppc management Says:

    Social Bookmarking Nut says, “I have been reading and hearing that meta tags play an almost non-existent role as far as Google is concerned. Lots of people seem to agree that a few years back meta tags indeed played an important role in telling Google what the theme of your post was, but supposedly the role of meta tags has diminished. Any thoughts?”

    Meta tags still have their place in the scheme of things, both in relation to the ‘blurb’ displayed by Google when listing your site in its rankings and for search engine spiders.

    While their specific value in any search engine’s algorithm varies, they no doubt still have a role.

    Very interesting article, btw, never considered this side of social services and management.


  36. fundamental indexing Says:

    I like the concept, but I think it would be a bit harder to implement successfully. While it’s nice to think that the wisdom of the crowd will win out, it seems that you usually just end up with mob mentality.

  37. Bukmacher Says:

    Meta tags still play an important role in SEO, though not as much as 7-8 years ago, but in your website’s meta tags you can’t repeat the same keywords, because Google may penalize your site.

  38. Cable Ties Says:

    Meta tags, in my opinion, play a role in Yahoo rankings but not so much in Google SERPs anymore. They definitely count with Yahoo, though. I know this because I have a site that I set up for one topic and then forgot about.

    Then I went back to the site, changed the topic, and added content. I forgot to change the meta tags, though! What happened next was a surprise for me. I started getting traffic for my content, as I expected.

    But I also started to get traffic for phrases like “celebrity gossip” and things like that from Yahoo. The reason was that I still had “celebrity gossip” in my meta tags!

    I thought it was interesting.

  39. Tim O Says:

    What distinguishes hard peer review from soft peer review is the rigorous nature of the reviews, right? Hard peer reviews are done rigorously by experts; soft peer reviews are done by anonymous, and possibly inexperienced, entities. Why would these have to run in parallel?
    I think is actually the model that needs to be examined, because of its recent implementations of RealName status. Looking at, one will occasionally see eminent personae reviewing articles, basically bringing the credibility of hard reviews to what is normally a ‘soft review’ format.
    An ideal model for the future of these soft peer review systems should allow everything from anonymous tagging to authoritative review. To reference the metadata screenshot above, I’d love to see some of those tags broken down:

    391 cognitive
    102 toread
    50 toread[public]
    30 toread[BA+]
    22 toread[Phd+]

  40. Review Says:

    Semantic metadata could also advance the science community by helping scholars spot similar and even duplicated content right away, without using search bots and other software. Software built for this purpose with a good algorithm could even be a contender in this field.

  41. Academic Productivity » Luis von Ahn: on doing research vs. writing papers Says:

    [...] solution, he suggests, is a mix of soft peer review and liquid publication: Can a combination of a wiki, karma, and a voting method like reddit or digg [...]

  42. Aldrey Says:

    Clever boy!!!

    This is the post of the year.

  43. links for 2009-06-05 at LIS :: Michael Habib Says:

    [...] Academic Productivity » Soft peer review? Social software and distributed scientific evaluation (tags: socialbookmarking peerreview) [...]

  44. Stuart Shaw Says:

    Hi Dario

    Would you mind if I reviewed this for a nascent tribe-building academic psychology site called MavEdu? (The prelude site I’m writing for is PsychFutures, with MavEdu coming in March 2010.) Would run it by you first, naturally. Stuart

  45. dario Says:

    Sure, I look forward to it.

  46. Marcadores sociales y herramientas bibliográficas para profesionales de la información » Los Gestores de Referencias Sociales: índices de popularidad y descubrimiento científico. Says:

    [...]  Taraborelli, D.,  “Soft peer review? Social software and distributed scientific evaluation“.  Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP 08), 2008. [...]

  47. Las Universidades abren sus revistas « Clionauta: Blog de Historia Says:

    [...] Soft peer review? Social software and distributed scientific evaluation [...]

  48. 20091125 « El blogsitorio Says:

    [...] Evaluación científica a través del etiquetado social (2007) y gestores de referencia sociales. ¿Qué sentido tiene intentar hacer una [...]

  49. Stuart Shaw Says:

    Hi Dario

    Bit later than planned, but I finally got my piece up on PsychFutures about your soft peer review idea. Hope you like it.

  50. dario Says:

    Thanks Stuart–I didn’t know I was an edupunk :)

  51. Thinkepi » Gestores de referencias sociales: la información científica en el entorno 2.0 Says:

    [...] Taraborelli, D. “Soft peer review? Social software and distributed scientific evaluation”. En: Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP 08), 2008. Disponible en: [...]

  52. Gestores de referencias sociales | Universo Abierto Says:

    [...] Taraborelli, D. “Soft peer review? Social software and distributed scientific evaluation”. En: Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP 08), 2008. Disponible en: [...]

  54. Academic Productivity » ReaderMeter: Crowdsourcing research impact Says:

    [...] of this blog are not new to my ramblings on soft peer review, social metrics and post-publication impact [...]

  55. Another researcher index? ReaderMeter looks to answer with Mendeley | Mendeley Blog Says:

    [...] of this blog are not new to my ramblings on soft peer review, social metrics and post-publication impact [...]

  56. » softly review the library? Bibliodox Says:

    [...] The debate on the prospects of peer-review in the Internet age and the increasing criticism leveled against the dominant role of impact factor indicators are calling for new measurable criteria to assess scientific quality. Usage-based metrics offer a new avenue to scientific quality assessment but face the same risks as first generation search engines that used unreliable metrics (such as raw traffic data) to estimate content quality. In this article I analyze the contribution that social bookmarking systems can provide to the problem of usage-based metrics for scientific evaluation. I suggest that collaboratively aggregated metadata may help fill the gap between traditional citation-based criteria and raw usage factors. I submit that bottom-up, distributed evaluation models such as those afforded by social bookmarking will challenge more traditional quality assessment models in terms of coverage, efficiency and scalability. Services aggregating user-related quality indicators for online scientific content will come to occupy a key function in the scholarly communication system. [...]

  57. Eugen Spierer Says:

    An analysis of some of the points made in the post has been published on
    Further debate will be much appreciated.

    Eugen Spierer

  58. social software and science « Sometimes I listen to myself Says:

    [...] software and science By Charles Kiyanda, on December 8th, 2010 I haven’t been through this entire post on, but glancing at the first few lines seems interesting. On the subject of science and social [...]
