The way I search for scientific articles is pretty simple. Say I have a problem to solve that was assigned by a course teacher or my research supervisor. I note down some keywords and Google them. If I don’t find any relevant information, I try combinations of those keywords or alternative keywords adapted from the search results. Once I start getting keywords that produce relevant results in Google, I pass them to Google Scholar. Sometimes I also go to subject-specific search engines and search with those keywords.
I use Web of Science because it can track cited articles. This feature is also present in Google Scholar, but somehow I don’t find it as reliable. I tend to sort by citations and pay attention to only the top few papers. I guess if most people do as I do, there must be a snowball effect going on here, a ‘rich get richer’ situation.
Search engines are typically measured using precision and recall. This is of course relevant, but sometimes more mundane measures are interesting too. The basic unit for evaluating search-engine productivity should be something like “time (or clicks) needed to get both the full text and the reference onto your hard drive”. Here, small improvements in usability, like going from 21 to 16 clicks to achieve your goal, can save quite a lot of time, since we academics use search services so often.
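To make the standard measures concrete, here is a minimal sketch of precision and recall for a single query, assuming we already know which of the retrieved documents are relevant. The document IDs are hypothetical examples, not from any real search engine.

```python
def precision_recall(retrieved, relevant):
    """Return (precision, recall) for one query.

    precision = fraction of retrieved docs that are relevant
    recall    = fraction of relevant docs that were retrieved
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Example: the engine returns 4 papers, 2 of which are among the 5 relevant ones.
p, r = precision_recall(["a", "b", "c", "d"], ["a", "c", "e", "f", "g"])
print(p, r)  # 0.5 0.4
```

Note that neither measure captures the click-counting kind of usability cost described above: a system could have perfect precision and recall and still bury the PDF five clicks deep.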