Do you wonder why people without funding do research? Naw, probably not, because you do it too. Getting grant money takes a huge effort, and most people do not have grants. Still, everyone tries their best to carve out time for research. In fact, universities encourage their faculty to focus on research at the expense of teaching time. This article covers a few theories on why this might happen: for example, students gravitate toward research orientations, and research quality has become a proxy for teaching quality. Interesting that two economists wrote it.
SciPlore has produced an interesting hybrid between a mind-mapping and a reference-management tool. Freemind Scholar adds two basic features on top of the excellent FreeMind: you can insert references (at this time, only BibTeX), and you can drag and drop highlights from pdfs (the pdf is linked).
This looks like the perfect tool for sketching notes while reading papers. I use OneNote for this, but I’ve tried mind maps before and could easily revert to them.
As soon as they implement Zotero/EndNote references, I can imagine many people finding this tool very useful.
Freemind Scholar is in alpha right now. Feel free to try it out and send them your comments; chances are they will implement your feature requests, since they are just starting.
This is a killer app: Flux calculates what time of day it is and adjusts your monitor accordingly. Wonderful if you stare at pdfs (lots of white!) on the screen at night. I wonder how I lived without Flux. It also seems to help regulate sleep patterns. Recommended.
According to Wikipedia, “the semantic web is expected to revolutionize scientific publishing, such as real-time publishing and sharing of experimental data on the Internet.” The W3C HCLS group’s Scientific Publishing Task Force is going to explore how this could happen.
Currently, one describes experiments in a more or less ad hoc way. The mapping between experiments, papers, and titles is… well, not the most consistent ever.
Do you want to know if the experiment you have in mind has been done already? Good luck mining the literature. Although almost everyone is well-versed in building queries in scientific search engines, the results are far from accurate.
Maybe the problem is in the way we write the literature. If we could write a description of every experiment in some kind of agreed format that both humans and machines understand, searches would be trivial.
An alternative would be to use an ontology to describe experiments. The ontology should not be too complicated to use: a user overwhelmed by the number of parameters required to describe an experiment may simply not bother. Of course, every field would need to build its own ontology, and the effort to integrate ontologies across fields could be titanic.
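To make the idea concrete, here is a toy sketch of what an agreed machine-readable experiment description could buy us. The schema and field names ("organism", "assay", "target") are entirely made up for illustration, not any real standard or ontology:

```python
# Toy sketch: experiment records in a hypothetical shared schema.
# The fields below are invented for illustration only.
experiments = [
    {"organism": "E. coli", "assay": "knockout", "target": "lacZ"},
    {"organism": "S. cerevisiae", "assay": "overexpression", "target": "GAL4"},
    {"organism": "E. coli", "assay": "knockout", "target": "recA"},
]

def find(**criteria):
    """Return experiments matching every given field exactly."""
    return [e for e in experiments
            if all(e.get(k) == v for k, v in criteria.items())]

# With agreed-upon fields, "has this been done already?" is a trivial query
# instead of a fuzzy full-text search over papers:
hits = find(organism="E. coli", assay="knockout")
print(len(hits))  # 2
```

The point is not the three-line search function but the records: once every experiment is described in the same vocabulary, exact queries replace guessing at keywords.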
There is already some progress in the direction of using named-entity extraction as metadata. For example, the PubMed interface GoPubMed is above and beyond anything I have seen. It uses facets (left sidebar) to show metadata. I do not know the details of how it works, but going back to, say, Web of Science after GoPubMed feels like going five years back in time. Is there any hope of having a similar interface for all scientific databases? I sure hope so.
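For readers unfamiliar with facets, the mechanism behind that sidebar is simple: for each metadata field, count how many hits carry each value, and let the user drill down by clicking a value. A minimal sketch, with made-up records and field names (this is not how GoPubMed itself is implemented):

```python
# Toy sketch of facet counts like a search sidebar: for each metadata
# field, tally how many records carry each value. Records are invented.
from collections import Counter

records = [
    {"year": "2009", "journal": "Nature", "topic": "genomics"},
    {"year": "2009", "journal": "Science", "topic": "proteomics"},
    {"year": "2008", "journal": "Nature", "topic": "genomics"},
]

def facet_counts(records, field):
    """Tally the values a metadata field takes across the result set."""
    return Counter(r[field] for r in records)

print(facet_counts(records, "journal")["Nature"])  # 2
```

Clicking a facet value just re-runs the search filtered to records with that value, and the counts are recomputed over the narrowed set.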
How much of the advice we take is based on solid empirical evidence? Worryingly little! I’d love it if someone actually tried to put together an estimate (let me know if you know of one!).
The Chronicle, in a surprising streak of opinion articles, finds that Strunk and White’s claims are mostly baseless:
Simple experiments (which students could perform for themselves using downloaded classic texts from sources like http://gutenberg.org) show that Strunk and White preferred to base their grammar claims on intuition and prejudice rather than established literary usage.
If academics take advice without questioning the evidence, I wonder what will save the general public. Good to see people at The Chronicle debunking BS; I have fallen prey to recommending Strunk and White myself…