Go read Dr. Alan Jacobs’s article, “Google-Trained Minds Can’t Deal with Terrible Research Database UI,” over on The Atlantic for the back story.
I do wish library research databases were simpler to use for our students and faculty. However, I don’t think Dr. Jacobs’s suggestion to put “greater emphasis…to improve the search tools” is the answer.
1. If you want the Google experience applied to research, then go use Google Scholar. Make sure you enable the Library Links option in Scholar Preferences (if you attend or work at an Ohio college or university, enable the OhioLINK link) so you can access subscription-based full-text journal articles paid for by your library.
2. The native JSTOR search interface isn’t that bad. Granted, JSTOR’s advanced search syntax isn’t always intuitive. However, taking 5 minutes or less to figure it out will save you a lot of time and provide better results than using Google.
3. Librarians can encourage EBSCO, Gale, ProQuest, et al. to dramatically improve their search interfaces, expose their metadata to Google, or even license Google’s search algorithm. In the end, however, these research database providers have invested too much of their money in developing their underlying database structures and search interfaces to have much incentive to change and become more like Google.
4. Those “terrible research database” user interfaces allow you to do a much more precise search. Google gives us good enough results. The clunky research database interface allows the student or faculty member to have greater control over the results returned.
5. I have a hard time believing that Dr. Jacobs could not access the article when he already had the citation. He doesn’t provide enough information in the article to fully understand why finding it was such a challenge that an ISSN had to be used.
6. Searching the scholarly literature is only part of the research process. Students and faculty still need to apply human intellect before going to a search box and expecting an algorithm to do the heavy lifting for them.
"Leveraging the Economics of Information and Scholarly Communication Process to Enrich Instruction" was the rest of the title of this session presented by Kim Duckett and Scott Warren from NC State University. Their PowerPoint presentation (1.9MB) is available and you should read through the slides because I can't do them justice in this post.
Kim and Scott started with the argument that our students are not savvy enough to know when they have left our discovery tools and moved into paid content. Students have not made that connection yet, even though they probably have a similar mental model. They normally don't consider how much money is spent to provide access to electronic journal articles. They go to the library web site and get the content for free (with few or no authentication barriers), so to them it's just like a lot of other content on the open web.
Strategies they have been using successfully with upper-level classes:
Start with what students already know about the peer review process and build on their prior knowledge. Challenge assumptions by asking:
- Why don't researchers just use blogs?
- Do all papers submitted get published?
- Are all journals equal?
- Do authors get royalties?
- How much does it cost an author to publish?
Examples of sticker shock were used to further challenge assumptions about how much scholarly content actually costs. This naturally leads to a discussion about why publishers charge so much and why libraries provide access to expensive content. They discuss the various stakeholders in the publishing process: author, publisher, database vendor, and library.
A discussion of the invisible web follows, introducing the concept that Google doesn't make a distinction, when indexing content, between what is free and what is fee-based. The crawlers are just discovering content and making a pointer to it available for retrieval. Finally, Scott and Kim were able to leverage the existing mental model of online shopping (buying airline tickets at Expedia or Travelocity) to help students make the connection between discovery and access.
Data curation has been a topic cropping up at conferences I have been to this past year. I've heard it mentioned in sessions at ACRL and ALA, mostly by librarians from the big ARLs.
"Sources at Google have disclosed that the humble domain, http://research.google.com, will soon provide a home for terabytes of open-source scientific datasets. The storage will be free to scientists and access to the data will be free for all."
"The storage would fill a major need for scientists who want to openly share their data, and would allow citizen scientists access to an unprecedented amount of data to explore."
I still have to wonder how this will be monetized. Or, will this project be underwritten by Google's main revenue stream? Guess those institutional repositories will still have some room in them after all.
Read the full story with links to more details at "Google to Host Terabytes of Open-Source Science Data" on Wired Science.