Archive for category: Reading

The ages of productivity

September 11th, 2010 by

The Undercover Economist, Tim Harford, has a good article in today’s Financial Times about the stages in life when different professions are most productive. For example, I did a quick Google search and back-of-the-envelope calculation: averaging across physics and chemistry, the median age of a Nobel Prize winner is 55; for the literature and peace prizes, it’s 64. (Sorry, not going to do the full test for statistical difference today.) This distinction makes some sense: the great discoveries in the two scientific subjects are marked by innovation (something that may give way to habit with age), whereas excellence in literature and statesmanship benefits from vast amounts of experience.
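For the curious, the test I skipped is straightforward to run. Below is a minimal sketch using a rank-based test; the ages are made-up placeholders, not real laureate data.

```python
# Sketch of the statistical comparison skipped above. The ages are
# hypothetical placeholders, NOT real Nobel laureate data.
from statistics import median
from scipy.stats import mannwhitneyu

science_ages = [45, 48, 52, 55, 57, 58, 61]    # hypothetical physics/chemistry winners
lit_peace_ages = [59, 60, 63, 64, 66, 67, 70]  # hypothetical literature/peace winners

print("median (science):   ", median(science_ages))
print("median (lit/peace): ", median(lit_peace_ages))

# Mann-Whitney U test: are the two age distributions plausibly different?
stat, p = mannwhitneyu(science_ages, lit_peace_ages, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```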

But, in keeping with our recent discussions about reform in academia, perhaps the bigger question is whether we should actively target funding to match these periods of productivity. A quote from the FT article:

Two of my favourite writers, Malcolm Gladwell and Jonah Lehrer, are worried about this – but from different perspectives. Gladwell, a Galenson fan, worries that our obsession with youthful genius will cause us to reject future late bloomers.

Lehrer has the opposite concern: that funding goes to scientists past their prime. He says the US’s National Institutes of Health (NIH) has been funding ever-older scientists. Thirty years ago, researchers in their early thirties used to receive 10 per cent of NIH grants; by 2006 the figure had fallen to 1 per cent.

From my experience in the UK, I think both groups have good, but different, funding opportunities. Established researchers are well-versed in applying for traditional call-based research grants, whereas young researchers are catered for by a number of fellowship schemes. I haven’t seen much evidence of bias in any discipline, and to be honest, I think anti-discrimination laws would make it difficult to explicitly exclude a group of talented researchers just because they’ve passed an arbitrary age barrier. Think of Andrew Wiles, who proved Fermat’s Last Theorem just after passing the Fields Medal’s age limit of 40.

Ultimately, the top performers in these disciplines are so singular that it doesn’t make sense to design generalized development or funding programmes for the rest of us around them. However, we can at least take comfort that our best days may be ahead of us!

SciSurfer: real-time search on journal articles

May 5th, 2010 by

Imagine a world where real-time search is the norm. The information you seek lands in your lap the minute it becomes available, without you having to search for it explicitly. Will this change the way you do science? SciSurfer thinks it will.

The release cycle of scientific knowledge is slow. It may take up to two years for a paper to be accepted in a journal. The publishing process itself adds a buffer of a few months (arguably because of the time cost of producing a paper edition, even though most people will never use it). So, for some of us, it doesn’t feel like we are missing much if we don’t get the latest updates in our field the very minute they are published; going to conferences once a year feels like more than enough. But there is a portion of academia that needs constant updates on their field, as close to real time as possible. If you are in the life sciences, getting the latest paper about a molecule or gene you work on before your competitor does may make or break your career.

For those academics, SciSurfer may be a very valuable tool. The basic idea of SciSurfer is to aggregate all journal feeds and search over them. Note that they do not archive the RSS, so only the latest articles are available. This is a different way to think about search, closer to Twitter’s than to Google’s.
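To make the idea concrete, here is a minimal sketch of feed-based search in Python. It is not SciSurfer’s code, and the feed URLs and query terms are placeholders.

```python
# Minimal sketch of the feed-search idea (not SciSurfer's implementation):
# poll journal RSS feeds and match the latest entries against saved queries.
import feedparser

FEEDS = [
    "https://example.org/journal-a/rss",   # placeholder feed URLs
    "https://example.org/journal-b/rss",
]
QUERY = ["hippocampus", "place cells"]     # terms the researcher is tracking

def matching_entries(feed_url, terms):
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:             # only what the feed currently exposes,
        text = (entry.get("title", "") + " " +  # i.e. the most recent articles
                entry.get("summary", "")).lower()
        if any(term.lower() in text for term in terms):
            yield entry.get("title", ""), entry.get("link", "")

for url in FEEDS:
    for title, link in matching_entries(url, QUERY):
        print(title, "->", link)
```

Run on a schedule, this gets you something close to real-time alerts; the hard part is doing it at scale across every journal feed, which is exactly what a dedicated service offers.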



LaTeX rendering of equations in Google Wave – LaTeXy

November 2nd, 2009 by

It was only a matter of time before someone wrote a robot that grabbed LaTeX and returned an image after processing. LaTeXy does exactly that, and has just increased tenfold the usefulness of Wave for academics.
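If you are curious what the underlying trick looks like, here is a toy sketch. It is not LaTeXy’s code, just matplotlib’s built-in mathtext renderer turning an expression into a PNG.

```python
# Toy illustration of the LaTeX-to-image idea (not LaTeXy's implementation):
# render a math expression to a PNG with matplotlib's mathtext engine.
import matplotlib
matplotlib.use("Agg")                      # render off-screen, no display needed
import matplotlib.pyplot as plt

def latex_to_png(expr, filename="equation.png"):
    fig = plt.figure(figsize=(3, 1))
    fig.text(0.5, 0.5, f"${expr}$", ha="center", va="center", fontsize=20)
    fig.savefig(filename, dpi=200, bbox_inches="tight", transparent=True)
    plt.close(fig)

latex_to_png(r"\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}")
```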

AutoVer (windows) gives you easy versioning

October 27th, 2009 by

About two years ago we talked about FileHamster. It was free, unobtrusive, and simpler than doing version control ‘by hand’ (adding numbers to filenames) or ‘by machine’ (using a proper versioning tool such as Subversion or Mercurial).

Well, since then FileHamster has become a pain in the ass. The free version now nags you a lot, and the paid versions don’t offer any outstanding features. Plus, as a .NET application, it eats up RAM.

Enter AutoVer. It’s completely free, with no nags, and a much better interface to boot. The GUI and options make more sense too. I even use it for coding when I’m doing something small and a Mercurial repo would be overkill.

Eventually, all writing applications should enable smooth versioning and real-time collaboration (the Office 2010 beta does! Wave and Etherpad are not alone anymore). A slider that scrubs through versions, as in Time Machine, is fantastic; AutoVer won’t give you that. The AutoVer model also breaks down when you send the manuscript to a collaborator, who edits it on their machine (often changing the file name). Still, it’s much better than not doing versioning at all, or doing it by hand.
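For a sense of what tools like AutoVer do under the hood, here is a toy sketch using only the Python standard library: watch a file and stash a timestamped copy whenever it changes. The watched filename is a placeholder, and real tools hook into the file system rather than polling.

```python
# Toy sketch of automatic file versioning (not AutoVer's code): whenever the
# watched file's modification time changes, save a timestamped copy.
import shutil
import time
from pathlib import Path

WATCHED = Path("manuscript.docx")          # placeholder: the file to version
VERSIONS = Path("versions")
VERSIONS.mkdir(exist_ok=True)

last_mtime = None
while True:
    mtime = WATCHED.stat().st_mtime
    if mtime != last_mtime:
        stamp = time.strftime("%Y%m%d-%H%M%S", time.localtime(mtime))
        shutil.copy2(WATCHED, VERSIONS / f"{WATCHED.stem}-{stamp}{WATCHED.suffix}")
        last_mtime = mtime
    time.sleep(5)                          # poll every few seconds
```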

By the way, does anyone know an alternative that is cross-platform?

What’s Wrong with Probability Notation?

October 22nd, 2009 by

Sometimes I wonder why so many humans (me included) have trouble understanding probability. In cognitive science, probabilistic models are taking over most areas, yet most people struggle with them. Could it be that the notation is just hard to swallow? What’s Wrong with Probability Notation? is a magnificent post that gives some basic reasons:

The first two issues arise in the usual expression of the first step of Bayes’s rule,

p(x|y) = p(y|x)p(x) / p(y),

where each of the four uses of p() corresponds to a different probability function! In computer science, we’re used to using names to distinguish functions. So f(x) and f(y) are the same function f applied to different arguments. In probability notation, p(x) and p(y) are different probability functions, picked out by their arguments.

This is one clear communication problem. Ideally, we want more people to follow probabilistic reasoning. Doctors, judges, and others all struggle significantly when given probabilities (see, e.g., Helping Doctors and Patients Make Sense of Health Statistics).
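One way to see the point is to write the same calculation the way a programmer would, with a separate name for each distribution. The numbers below are made up, purely for illustration.

```python
# Bayes' rule with each distribution given its own name, instead of
# overloading a single p(). All numbers are made up for illustration.
def prior_x(x):                     # p(x)
    return {True: 0.01, False: 0.99}[x]

def likelihood_y_given_x(y, x):     # p(y|x)
    table = {(True, True): 0.95, (False, True): 0.05,
             (True, False): 0.10, (False, False): 0.90}
    return table[(y, x)]

def evidence_y(y):                  # p(y) = sum over x of p(y|x) * p(x)
    return sum(likelihood_y_given_x(y, x) * prior_x(x) for x in (True, False))

def posterior_x_given_y(x, y):      # Bayes' rule: p(x|y) = p(y|x) * p(x) / p(y)
    return likelihood_y_given_x(y, x) * prior_x(x) / evidence_y(y)

print(posterior_x_given_y(True, True))   # four distinct functions, four distinct names
```

Spelt out like this, the fact that the four p()’s in the formula are four different functions is impossible to miss.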

But how do we tackle this problem? Changing notation is easier said than done. In fact, anyone departing from traditional notation will have to convince reviewers that the new notation is better… and risks making a less-than-ideal impression along the way.

Any ideas?