March 28, 2008

Rethinking life hacks

By Jose Quesada in Blog, Evaluation, Time management, Web 2.0, Writing

“Math is hard; let’s go shopping!”

-hacked Barbie

Summary: The difficulty of measuring productivity seems to push people toward common-sense advice on how to improve it, instead of attacking productivity as a hard problem that needs empirical study. Yet people do follow barely tested advice on productivity. They are either too busy to afford dismissing it, or too pragmatic to believe that we can reach systematic, scientific productivity techniques.

There is a current craze about productivity in many forms (sometimes disguised as personal development). At least 4 of the top 100 blogs in the blogosphere are about productivity (according to Technorati’s authority rankings: Lifehacker #6, Zen Habits #41, 43 Folders #73, among others). The best treatment I have read recently is Cal Newport’s Flak magazine article.

In fact, lifehacking is a trend of the 21st century. The idea is to reduce the things that bother you in your life (or reduce the time it takes to complete them) while increasing the quality and quantity of the experiences that you like. This is pretty intuitive, but is this a working definition of whatever personal productivity is? Hardly. Today, anything that solves an everyday problem in a clever or non-obvious way might be called a life hack.

Hacks are, by definition, unsystematic. Anything goes, as long as it works. This is contrary to the incremental evolution of scientific thinking. Even though there are sometimes large changes in the form of paradigm shifts, most of the time progress is incremental and linear.

The advantages are clear: one can build on the knowledge acquired by the previous generation.

But do we have the same incremental progress in personal productivity theories? Is there anything remotely similar to a science of productivity? Should people follow only empirically tested advice about productivity?

Wikipedia didn’t have a definition for personal productivity; but look at this gem from the social productivity definition:

The term productivity is etymologically derived from the Latin word producere, which means “lead or bring forth, draw out” (from pro- “forth” + ducere “to bring, lead”). It connotes in a general sense, the state of being productive, fertile or efficient. It is often confused with “efficiency”, “rationalization” or “profitability”.

So for a start, it seems that there aren’t very good definitions of the kind of knowledge worker productivity that is so ‘in vogue’ in blog posts everywhere and that should be measured carefully by those organizations who care about it. Which, now that I think of it, is nearly every organization nowadays! And the available definitions are directly imported from mechanistic approaches to physical products that are not really that useful. Whatever intellectual productivity is, it’s not related to ‘cranking widgets’.

Instead, there’s a cult following of people who, rather than offering a rational definition of productivity, are actually masters at producing rules of thumb that work. That is, they are lifehackers. They produce hacks. The essence of a hack is that it works. But do we know they work? How well? How easily can they be adopted? Are they all-terrain hacks? We have no idea. Most of the hacks we learn from books are essentially untested empirically; most are reported by their inventors to work well in their personal experience. What I’m trying to discuss next is: should we really worry about this? Can we test any better?

Note that the term hack may have a negative meaning. In programming, a hack is a brilliant solution (and a hacker is a respected member of an elite). But it’s also used for something quickly put together, barely tested, that works only by miracle. It’s an ugly solution that may break if you really stare at it. Something you may not want to do, because code has aesthetics, and hacks are by definition aesthetically repulsive. Human languages have semantics that are really fun. As you see, the same term can have two opposite meanings. That’s why computers don’t get semantics all that well.

Here, lifehacking fully enjoys this dualism. Hacks are great; hacks are bad. Which one do you really mean?

Giving advice about how to improve productivity

Top lifehackers



Steve Pavlina

6 months of polyphasic sleep; top-1000 blogger. Two degrees in 3 semesters (computer science, math).

Tim Ferriss

Speaks 6 languages. Has competed in top championships in martial arts and dance. Travels the world extensively. Wrote a best seller. Top-1000 blogger.

David Allen

Created a cult-following productivity system, GTD. Wrote a best-selling book.

Mark Forster

Created a cult-following productivity system, DIT. Wrote a best-selling book.

I have to say that I have a lot of respect for those lifehackers. However, it strikes me that they are not super-human in their achievements. The first two authors do feature their own lives prominently in their blogs, so the public can see what they have achieved and feel inspired.

The last two best-selling book authors are remarkable in that there’s nothing in their lives that would highlight them as especially productive; at least, they do not show off their achievements publicly. They do not blog about their life achievements. But note that this does not deter people from buying their books and following their advice.

But do these techniques really work? The obvious answer is: we still don’t know. Nobody has run any systematic comparison to see whether people using these techniques are in fact more efficient or not (!). This is a very hard experiment to run, since most people won’t agree to follow certain practices just because you have assigned them to a certain treatment.

Essentially, I’m saying that the entire general public (‘the internet’) follows unproven advice. This is a bit shocking. But why? Is it that difficult to study productivity the right way?

Studying productivity with the scientific method

Psychology is no stranger to constructs that are hard to measure and define. Let’s look at a few examples. Intelligence, as slippery a concept as it is, has several standardized tests. Conscious awareness, a feature that may well distinguish humans from animals, is an active topic of study which has seen a lot of progress in recent years.

How come productivity is still a virtually empty scientific field? Productivity cannot be a harder nut to crack than intelligence or consciousness.

There are a few psychologists who care about productivity, but chances are you (who have read several months worth of productivity blog posts :) ) have never heard of them. But this would be the topic for an entire new post, so I’ll just skip it.

But the fact that nobody knows what productivity is doesn’t prevent people from writing extensively about how to improve it. Worse, knowledge workers have a great time reading those writings and following the advice given.

Most blogs use something close to common sense to generate recommendations:

“common sense” [...] equates to the knowledge and experience which most people have, or which the person using the term believes that they do or should have.

Edw519 observes a common pattern in blog posts (although he was referring to Paul Graham essays, which are probably a category of their own!):


Observe Something (mostly objective)

Generalize (subjective leap)

Expand (more subjective)

Conclude & Recommend (very subjective)

Let It Go (expose bullseye)


This is remarkably different from what we are taught is good practice for generating useful knowledge; it’s a kind of proto-scientific method based on personal experience alone. And it’s spreading like wildfire on the net thanks to self-publishing being so easy.

Blogs and ‘pop-sci’ books (well, sometimes not even pop-sci but the lowest class of off-the-shelf, armchair-philosophy, self-help books) abound. In fact, the self-help market in 2008 will move 11 billion dollars (!).

Personal productivity blogs use personal experience (that is, an n=1 experimental design) as their only source of evidence. But that doesn’t prevent tens of thousands of individual readers from following their advice.

The fact is that productivity is a huge, tasty, cute open problem that many people will try to solve in the next few years. Here’s an attempt to promise a 10x increase in productivity without even knowing how. His point is sound:

As far as I can tell, no company in America is focusing on the heart of the productivity problem. And the software tools that are supposedly about “productivity” are really about “collaboration and goal-setting.”

So why are so many people taking advice from people who basically don’t test, or test in impoverished conditions (n=1)? Well, maybe they have realized that their experience is all they have. As Terry Grossman says, “Life is not a randomized, double-blind, placebo-controlled study. We don’t have that luxury. We are operating with incomplete information. The best we can do is experiment with ourselves.”

If that wasn’t enough of a reason, we can look at the other areas of life where people take barely tested advice: investment, medicine, career. Do we ask for a doctor’s track record of healing the particular illness we have? Not really; we follow the general feeling of trustworthiness he transmits, and we generalize from weak evidence such as similar illnesses he has cured that we heard about by word of mouth. And don’t get me started on investment advice: people are happy to give it to you even if they make less money than you!

In fact, following thoroughly tested advice is basically the exception, not the norm. We scientists may be biased in thinking that everyone should follow only knowledge that can be empirically tested. And even this assertion may be way too radical, even for scientists; let’s be honest, there are plenty of untested, but religiously followed, rules in most scientific fields. Plus there is a large proportion of so-called sciences where empirical testing is only an adornment (social sciences in some cases, as much as I hate to admit it).


Although applying the scientific method is hard, it may be worthwhile for such an in-demand area as knowledge worker productivity. If science can handle topics such as consciousness, productivity should not be scary. The current landscape is millions of readers following advice from hundreds of bloggers and book authors who offer personal experience instead of replicable, tested rules. This would not change overnight even if such a science existed. In fact, following weak evidence to make important decisions is a pervasive fact of our society.


19 Responses to “Rethinking life hacks”

  1. [...] much speaks in favor of seeing, with Jose Quesada, lifehacking, productivity, or anti-procrastination as an important trend of the 21st century [...]

  2. Norman says:

    You might take a look at this “scientific study” of GTD. I haven’t had a chance to read through it yet, but the David Allen Co. says that it adequately explains how it works. (Granted, possible conflict of interest here… but I’ve seen much worse from the government.)

  3. jose says:

    Hi Norman,

    I know that article and have passed it around.
    However, I don’t think it solves any of the questions I’m interested in. It offers a theory on why GTD works (external memory, cues for action, etc.); but it doesn’t really do any empirical comparison between productivity techniques. It doesn’t say how often it works, how much improvement we can expect, what tasks have the biggest gains, etc. All in all, it doesn’t offer any better metric for productivity (nor a definition), and it certainly doesn’t register the improvement (if any) in productivity when using GTD.

    Something as simple as a split test (assuming we had workable metrics) would do wonders. I.e., half the people participating would use GTD; the other half, nothing. People would be assigned randomly to groups, and there should be a way to enforce that the GTD group did everything ‘by the book’ whereas the non-GTD group did none of the things recommended.
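    The split test described above can be sketched in a few lines. The random assignment and the Welch t statistic are standard; the “tasks completed per week” metric and all the numbers below are hypothetical placeholders, since no such metric actually exists yet.

```python
import random
import statistics

def assign_groups(participants, seed=42):
    """Randomly split participants into a treatment (GTD) and a control group."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    return (mean_a - mean_b) / ((var_a / len(sample_a) + var_b / len(sample_b)) ** 0.5)

# Simulated "tasks completed per week" scores -- purely made-up numbers.
rng = random.Random(0)
gtd_group, control_group = assign_groups(range(40))
gtd_scores = [rng.gauss(22, 4) for _ in gtd_group]        # hypothetical treatment effect
control_scores = [rng.gauss(20, 4) for _ in control_group]
print(round(welch_t(gtd_scores, control_scores), 2))
```

    A large positive t would suggest the GTD group outperforms the control group; the hard part, as the post argues, is not the statistics but getting a workable metric and compliant participants in the first place.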

  4. Incredible post, Jose. Like any example of good writing, you’ve taken a gem of an idea and made it clear to me – thank you. Questions:

    I wonder if personal productivity *can* be generalized. Wouldn’t that take the “personal” out of it?

    Why are there so many of these blogs (mine included)? I think we’re looking at a fundamental mismatch between human brains and our world. From multi-tasking to cognitive dissonance around dirtying our nest, we’re on very new ground. Book idea: I’d love to see a look at significant historical shifts from this “brains-can’t-change-and-what-are-the-costs” perspective. Maybe something along the lines of Jared Diamond’s “Collapse: How Societies Choose To Fail Or Succeed.”

    I’m also really interested in metrics to measure improvements.

    Thanks again!

  5. Francis Wade says:

    This is quite a thought-provoking post, and echoes some observations I have had.

    I studied at an engineering school that taught some of these “available definitions” that are “directly imported from mechanistic approaches to physical products that are not really that useful.” I also agree that “Whatever intellectual productivity is, it’s not related to ‘cranking widgets’.”

    I studied Operations Research and Industrial Engineering, which we were told had to do with the optimal allocation of scarce resources: man, time, money, material, etc. There still is no course in white-collar productivity, but there is lots of research being done in the field on how to improve the standard of widget building, down to whether an individual should use a 1/8″ or 1/4″ screw in building the widget.

    White collar productivity presents a specific problem that the others don’t — it’s universal. All humans have the same problem.

    This makes it difficult to study, as it’s a skill that’s impossible to isolate completely by a researcher, who must (by definition) be using a time management technique while they study time management techniques.

    In other words, it’s hard to be “professionally uninvolved” when your own schedule is a mess, voice-mails are going unreturned and your bills are unpaid.

    I had a professor who taught “Optimization” in school. He sometimes would answer a knock on his door, but would always answer his phone. His students learned that to find him we had to visit the reception area and ask them to call his office. He’d answer if he were there…

    He was hardly optimizing his time, and it would have been difficult for him to turn his attention to researching “time management” when he seemed so very poor at it.

    Point is, this is just one reason why the field is under-studied — he and others are safer studying things that don’t involve them directly.

    Also, time management and productivity are not strictly engineering problems. They have aspects of psychology, computer science, behavioral science, human resources all included. It’s just too hard to put together a committee, let alone find a school to properly study the field in a general way.

    So, what we are left with is hacks. I might start a thread on my blog called “the craziest time management hacks I ever heard of!” Some of them are hilarious…

    I believe that there are underlying principles involved in time management, and that they are provably _inescapable_. (And they don’t involve spending $500 every 2 years on the latest gadget.)

    No, I haven’t done the research, and am hoping that someone will in my lifetime!

    Thanks for a great article — I’ll be sure to link to it from my blog and write about it some more.

  6. Johann says:

    Thanks a lot for this great article! I just stumbled over your blog, and found this article the most interesting so far.

    Regarding the empirical testing of productivity advice: I too would love to see some “hard” scientific work on this (and as you pointed out, there are indeed a handful of psychologists working on this topic, so it could be worth writing that extra post!).
    But on the other side, in general, I don’t have a problem with advice coming from own experience. In my view, the problem is not so much the people giving this sort of advice, but also the people blindly accepting it. I think that with any advice that you take, in the end you are left alone finding out if works or not. You can even see this as a form of empirical testing: Someone generates advice from his/her own experience, gives it on to others (which are in different situations), and they in turn need to “test” this advice and see if it really suits them in their own situation.

    So, as long as the advice-giver makes clear where he/she gets the insights from (which, admittedly, some don’t), and that in the end it is “just” his/her own experience, I don’t see a problem with advice that stems from a sample of just n=1. It’s just up to the advice-receiver to find out if it works or not.

    The seminars I have taken so far on time management and other subjects that really stood out made this clear from the beginning: no one can really tell what’s good for you except you.

  7. Johann says:

    One more piece of thought:
    The problem with defining productivity and the fact that it seems to be a very subjective criterion (different from, let’s say, intelligence), adds to my argument: If productivity means something different to everybody, it’s even more difficult to give general advice.

  8. Francis Wade says:


    This is quite true.

    However, while there is no way to measure productivity directly, there are probably ways to measure other indicative measures, such as “average number of mail items in an email inbox / voice-mailbox / paper inbox.”

    Or — “number of promises that are made and forgotten”

    or — “% of projects that are delivered late”

    or — “% of promises that are kept”

    Some are easier to measure than others, and it’s too bad that Outlook doesn’t help in this regard because it could easily capture some stats like these.
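    As a sketch of how a couple of these indicative measures could be computed, assuming a hypothetical log of promises with deadlines (the `Task` record and its field names are made up; nothing like this exists in Outlook out of the box):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record of a promise/commitment; field names are invented.
@dataclass
class Task:
    promised: date             # the deadline committed to
    delivered: Optional[date]  # None = promise made and forgotten

def forgotten_rate(tasks):
    """Fraction of promises that were made and then forgotten."""
    return sum(t.delivered is None for t in tasks) / len(tasks)

def late_rate(tasks):
    """Fraction of delivered tasks that missed their deadline."""
    done = [t for t in tasks if t.delivered is not None]
    return sum(t.delivered > t.promised for t in done) / len(done)

tasks = [
    Task(date(2008, 3, 1), date(2008, 2, 28)),  # delivered early
    Task(date(2008, 3, 1), date(2008, 3, 5)),   # delivered late
    Task(date(2008, 3, 1), None),               # forgotten
]
print(f"forgotten: {forgotten_rate(tasks):.0%}, late: {late_rate(tasks):.0%}")
# -> forgotten: 33%, late: 50%
```

    The computation is trivial; the missing piece, as the thread points out, is capturing the underlying promise/delivery data in the first place.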

  9. Francis Wade says:


    I agree that someone who gives advice can really only share from their experience.

    I have tried to come up with a way of helping a user to describe and improve their time management system according to 11 “inescapable” fundamentals.

    I am always on the lookout for new fundamentals… if you have any ideas.

  10. shane says:

    Evidence-based productivity is a start, but moving to a more scientific evaluation of productivity and lifehacks demands N > 1, and I imagine doing that seriously would be a full-time job. I really liked Jose’s post, as it fits in with our anti-GTD approach, so I like this theme, but I don’t think we could start a movement… maybe suggest that a movement should be started…

    I think we need to make a clear distinction between lifehacking and productivity systems. I don’t think you can apply a scientific, evidence-based approach to lifehacks, as by definition they are isolated time savers and neat tricks. But I think applying evidence-based thinking to more comprehensive approaches to productivity has value. By a comprehensive approach, I am thinking of the systems of productivity expounded by people like Covey, Allen, Ferriss, Forster and the like. Most of these guys are essentially salesmen, who are also life coaches. They are selling a product. Their evidence is based on what worked with their clients, and hence they cite individual case studies. Which is better than just having an n=1, since most other productivity gurus (like Steve P or Zen Habits) are just working with themselves as evidence.

    However, in calling for “academic productivity”, you might be disregarding the fact that many people do research productivity and publish articles on it. But like much in academic life, it doesn’t filter down to the mainstream, and instead the public gets the hookup from pop-psychology people who work mostly from common sense and the odd tidbit they picked up from introductory psychology courses (“we only use 10% of our brains!!!” etc…).

    Perhaps the biggest problem in evaluating productivity systems is different strokes for different folks. Any system might work for some people, some may work better than others for some people, and for some people no system would work, no matter how good. I think the most progress would be made by fitting productivity systems and techniques to personalities, but then I can’t say the “learning styles” movement has been a particular success…

  11. Brutha says:

    Isn’t the main idea behind blogs like Steve Pavlina’s and Tim Ferriss’s to give the reader approaches that the reader can test in his own life?
    What else is left for the person who wants to improve his productivity than taking ideas and testing whether those ideas help him?

    As for “average number of mail items in an email inbox / voice-mailbox / paper inbox”: in a time when a lot of people argue that we spend too much time on email, the quantity of accomplished tasks means very little as a measure of productivity. You would need to measure quality somehow.

  12. Don says:

    Thanks for a thought-provoking post. I agree that we need to use a scientific approach for such an important field as personal productivity. That will give us a better foundation to know what works and what doesn’t.

    On the other hand, I can understand why many people love GTD: it works for them. In that case, I can’t blame them for following a method that is not scientifically proven.

  13. Francis says:


    I agree with you, and would like to invite you to my blog to see the little progress I have made in that direction.

    In retrospect, I used my industrial engineering background without really knowing it.

  14. quibble says:

    You make many good points.
    But isn’t it “pop-psy”? I thought self-help books were “popular psychology”.

