This is by Glenn Haya from Stockholm University and Else Nygren from Uppsala University. This was a study done in Sweden, where there is a national MetaLib consortium.
It may be interesting to compare this with the analysis Marco did last year of Google Scholar.
The study has just been completed – although it looks like quite a weighty tome, you can read it online (still waiting for a link on this).
The big questions they hoped to answer were:
What happens when a student sits down with Google Scholar or Metalib and tries to search for material? (interestingly there seemed to be more written about MetaLib than Google Scholar here)
What role does instruction play in the experience?
They used 32 students, who were asked to carry out searches – one group with training, and one without. Each student was recorded using software called ‘More’ (or Moray?), which records audio, video, mouse clicks and keyboard strokes. These were advanced and intermediate undergraduates who were writing a thesis – and the searching was on their thesis topic.
It’s worth noting that the MetaLib implementation lacked certain functionality that is available in the product in general – specifically the personalisation aspects. Also it looks like SFX is implemented within Google Scholar.
Anyway – down to the analysis:
They found that there were specific phases to the search process:
Navigation
Beginning
Search
View Results list
(View Metadata)
View Full-text
Save Full-text
They graphed how much time was spent on each of these phases as their search session progressed.
Some interesting results on the quick set screen – searchers immediately tried to select multiple quick sets.
Then, where there was a problem, the searcher just added another search term – making the search far too specific – and got no results. They then hit the back button – which doesn’t work in MetaLib. The user never got to a full-text article.
To compare Google Scholar and MetaLib:
Phase | Google (Time %) | MetaLib (Time %)
Beginning | 0.6 | 3.9
Searching | 15.3 | 15.9
Viewing Results | 30.9 | 11.2
Viewing Metadata | 5.1 | 11.0
Viewing Fulltext | 30.0 | 3.9
Saving Fulltext | 2.1 | 1.0
Navigation | 15.8 | 52.9
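As a quick sanity check on the table, here is a small sketch (the figures are the percentages reported above; the rounding interpretation is mine):

```python
# Time spent per phase (% of session), (Google Scholar, MetaLib), as in the table above.
phases = {
    "Beginning":        (0.6,  3.9),
    "Searching":        (15.3, 15.9),
    "Viewing Results":  (30.9, 11.2),
    "Viewing Metadata": (5.1,  11.0),
    "Viewing Fulltext": (30.0, 3.9),
    "Saving Fulltext":  (2.1,  1.0),
    "Navigation":       (15.8, 52.9),
}

# Each column sums to roughly 100% (99.8, presumably a rounding artefact).
google_total = round(sum(g for g, m in phases.values()), 1)
metalib_total = round(sum(m for g, m in phases.values()), 1)
print(google_total, metalib_total)  # → 99.8 99.8

# The dominant phase for each tool:
print(max(phases, key=lambda p: phases[p][0]))  # → Viewing Results (Google)
print(max(phases, key=lambda p: phases[p][1]))  # → Navigation (MetaLib)
```

The contrast is stark: Google Scholar sessions are dominated by viewing results and full text, MetaLib sessions by navigation.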
About half the searches carried out in MetaLib did not produce a results list.
Training made a large difference in how many articles were saved when using MetaLib (from 12 to 30). However, compare this with Google Scholar: 37 before training, 48 after.
Looking at the quality of the articles found, what was interesting was that before training only 21% of the articles found with Google Scholar were peer-reviewed. After training, this rose to 48%. With MetaLib the percentage stayed constant at 42% before and after training. There is some suspicion that the improvement with Google was due to the SFX link in the Google Scholar results, leading users to pay more attention to those articles available from the University via SFX.
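Putting the training numbers side by side – the counts and percentages are those quoted above; the relative-gain framing is my own back-of-envelope:

```python
# Articles saved (before training, after training), as quoted above.
saved = {"MetaLib": (12, 30), "Google Scholar": (37, 48)}

for tool, (before, after) in saved.items():
    gain = 100 * (after - before) / before
    print(f"{tool}: {before} -> {after} saved ({gain:+.0f}%)")
# MetaLib's relative gain (+150%) dwarfs Google Scholar's (+30%),
# but Google Scholar leads in absolute counts both before and after.

# Peer-reviewed share of articles found (before %, after %), as quoted above.
peer_reviewed = {"Google Scholar": (21, 48), "MetaLib": (42, 42)}
# Training more than doubled Google Scholar's peer-reviewed share;
# MetaLib's was unchanged.
```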
Some considerations – Google is essentially a familiar interface, and user expectations are geared towards this. MetaLib is a more complex application, and is designed to do more than Google Scholar. Perhaps, too, at undergraduate level Google Scholar is ‘good enough’?
A comment for the MetaLib team: in v4 the default search will be keyword, but it is currently phrase – which leads to the problems with multiple terms mentioned above.
However, I think the biggest issue here is the amount of time spent on Navigation in MetaLib. Whatever way you interpret the results, 50% of the ‘searching’ time spent ‘navigating’ is surely a bad thing?