Archive for category: Evaluation

Testing the general model of productivity

August 5th, 2009 by james

In a previous episode, I suggested that productivity is really just an efficiency measure. Since the working currency for academics is arguably prestige, productive researchers are those who can acquire the most prestige for the least effort, and this can be written formally as:

productivity = \sum_{t} \frac{p_t \, n_t}{a_t \, h_t}

where each task t is assigned a prestige benefit (prestige per activity p_t × number of activities n_t) and an effort cost (attention units per hour a_t × number of hours h_t).
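To make this concrete, here is a minimal sketch in Python of how the sum could be computed from a simple task log. The task names, prestige values, attention levels and hours are invented purely for illustration; they are not measurements of anything.

# Hypothetical task log: each entry holds prestige per activity (p),
# number of activities (n), attention units per hour (a) and hours spent (h).
# All values are made up for illustration.
tasks = {
    "journal paper": {"p": 10.0, "n": 1,  "a": 0.9, "h": 120},
    "peer review":   {"p": 1.0,  "n": 3,  "a": 0.6, "h": 9},
    "teaching":      {"p": 0.5,  "n": 20, "a": 0.4, "h": 40},
}

def productivity(tasks):
    """Sum, over all tasks, of prestige gained divided by effort spent."""
    return sum(t["p"] * t["n"] / (t["a"] * t["h"]) for t in tasks.values())

print(round(productivity(tasks), 3))

Logging a month of tasks in this form would let you recompute the same ratio day by day, which is exactly the kind of time series I want to plot.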

The comments on the original post suggested that there was a lot of enthusiasm for implementing and testing the theory and so I’ve spent the past month gathering data and preparing for a bit of an empirical assessment. The results are a work-in-progress but I hope to keep the conversation going and get your feedback. Here then is a step-by-step guide to how I’ve analysed my productivity over the last month using the general model.

(more…)

A general model of productivity?

June 15th, 2009 by james

I want to try something a bit different in this post. Here at AP.com, we’ve talked a lot about tools, theory, trends and the general ephemera of academic productivity. But writing as academics, we should probably be trying to take this experience and build it into a cohesive model of productivity. So my goal here is to suggest a general model, one that we might use to understand what we’ve learned from previous posts and hopefully apply to our own work.

My starting point for this post was simple; I wanted to know how my productivity has changed (hopefully improved) since I first started my DPhil. From keeping a research journal, I know that some days are more productive than others, and it would be very helpful if I could understand when those fits and starts occur, spot co-occurring events, and thereby learn when to say "Forget work, I'm going for a run."

In other words, I wanted to plot my productivity cycle over time. It might look something like this:

[Figure: a hypothetical plot of productivity over time]

But the obvious problem with this exercise is how to measure productivity. It's a subject that's been tackled indirectly on this site before, but going through the old posts, I haven't yet found any attempts at a general theory – and related measures – of productivity. So, drawing on the collected wisdom of previous AP.com posts, here's a rough sketch of such a theory.
(more…)

A hybrid mind mapping and reference management tool: Freemind scholar

May 27th, 2009 by jose

Sciplore has produced an interesting hybrid between a mind mapping and a reference management tool. Freemind Scholar adds two basic features on top of the excellent Freemind: you can insert references (at this time, only BibTeX), and you can drag and drop highlights from PDFs (the PDF is linked).

This looks like the perfect IDE for sketching notes while reading papers. I use OneNote for this, but I've tried mind maps before and could easily switch back.

As soon as they implement Zotero/EndNote references, I can imagine many people finding this tool very useful.

Freemind Scholar is in alpha right now. Feel free to try it out and send them your comments; chances are they will implement your feature requests, since they are just getting started.

Scientific Publishing Task Force – how the semantic web may help organize results

April 26th, 2009 by jose

According to Wikipedia, “the semantic web is expected to revolutionize scientific publishing, such as real-time publishing and sharing of experimental data on the Internet.” The W3C HCLS group’s Scientific Publishing Task Force is going to explore how this could happen.

Currently, one describes experiments in a more or less ad-hoc way. The mapping between experiments, papers, and titles is… well, not the most consistent ever.

Do you want to know if the experiment you have in mind has been done already? Good luck mining the literature. Although almost everyone is well-versed in building queries for scientific search engines, the results are far from accurate.

Maybe the problem is in the way we write the literature. If we could write a description of every experiment in some kind of agreed format that both humans and machines understand, searches would be trivial.

An alternative would be to use an ontology to describe experiments. The ontology should not be too complicated to use: a user who feels overwhelmed by the large number of parameters required to describe an experiment may hesitate to do it at all. Of course, every field would need to build its own ontology, and the effort to integrate ontologies across fields may be titanic.
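As a rough illustration of the idea, here is a minimal sketch in Python of what a structured experiment record might look like, and why search becomes trivial once descriptions are structured. The field names and values are invented for the example; they are not taken from any existing ontology or format.

# A purely illustrative structured description of one experiment.
# Field names are invented; a real shared format or ontology would
# define the agreed vocabulary.
experiment = {
    "hypothesis": "Spaced practice improves recall relative to massed practice",
    "design": "between-subjects",
    "independent_variables": ["practice schedule"],
    "dependent_variables": ["recall accuracy"],
    "sample": {"n": 60, "population": "undergraduate students"},
    "measures": ["free recall test at 1 week"],
}

def matches(record, **criteria):
    """Once records are structured, searching is simple exact-match filtering."""
    return all(record.get(key) == value for key, value in criteria.items())

print(matches(experiment, design="between-subjects"))  # True

The point is not this particular schema, but that any agreed, machine-readable description would turn "has this been done?" into a query rather than a literature-mining exercise.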

There is already some progress in the direction of using named-entity extraction as metadata. For example, the PubMed interface GoPubMed is above and beyond anything I have seen. It uses facets (left sidebar) to show metadata. I do not know the details of how it works, but going back to, say, Web of Science after GoPubMed feels like going five years back in time. Is there any hope of having a similar interface for all scientific databases? I sure hope so.

Zotero 1.5 Beta Released. The sharing features are here, and so is metadata extraction from existing PDFs

February 28th, 2009 by jose

This is an exciting release.

In a single stroke, Zotero may have added the most important feature of online apps such as CiteULike (collaboration) and the best feature of Mendeley (metadata extraction). I have no idea how well these work, as I have only just moved to Zotero and don't want to risk trying the beta this soon, but if they work well, this is a quantum leap.

(more…)