Happy Friday the Thirteenth! I thought I would address your worst conference nightmare…wasting a session. Whether it's ALA or ALAO (that's the Ohio ACRL chapter), I usually end up sitting through at least one disappointing presentation. In this case, the content delivered didn't live up to the expectations created by the session title and description.
The issue with this specific session was the use of the phrase "lessons learned" in the title and the thought-provoking questions posed in the description. Instead of hearing about lessons learned or best practices or tips or sage advice…I heard an infomercial. My disappointment stems from the content presented not being scalable or applicable to other institutions. Don't tempt us with "How do you prioritize your resources and staff?" only to tell us that you were able to hire more librarians and had money for an information commons. That's great for you, but it's not practical or implementable for the majority of your audience.
The only transferable "lesson learned" (and not even to my local situation) I got out of the presentation is to share laptops between library departments. Do you use laptops for instruction and for lending to students? Great! Work with circulation/access services/whoever controls the lending side to pool laptops during peak times of the academic year. Use the majority of laptops for instruction at the beginning of the semester/quarter (when instruction is high and assignments low). Reverse the distribution model at the end of the semester. Let circulation/access services use your laptops for students to borrow to work on all of those end-of-term papers and projects. You're probably not doing much (if any) instruction at the end of the term. It's a win-win and a great way to stretch those insufficient capital expense dollars.
Feel free to take a look at the PowerPoint slides. You might get something out of them that is useful for your institution. My lesson learned for you? Go with your gut instinct when picking conference sessions. I'll try to do a better job with all of my schedule conflicts in Anaheim.
If I could do it over again, I would have gone to hear Paul Waelchli and Sara Holladay talk about "Fantasy Sports: The Road to Information Literacy Championships." Paul and Sara win the prize for information sharing! You have to appreciate the amount of time they put into creating an amazing Fantasy Football Toolkit for Libraries. Check it out…
"Leveraging the Economics of Information and Scholarly Communication Process to Enrich Instruction" was the rest of the title of this session presented by Kim Duckett and Scott Warren from NC State University. Their PowerPoint presentation (1.9MB) is available and you should read through the slides because I can't do them justice in this post.
Kim and Scott started with the argument that our students are not savvy enough to know when they have left our discovery tools and entered paid content. Students have not made that connection yet, even though they probably have a similar mental model from other online experiences. Students normally don't consider how much money is spent to provide access to electronic journal articles. They go to the library web site and get access to the content for free (with few, if any, authentication barriers), so it's just like a lot of other content on the open web.
Strategies they have been using successfully with upper-level classes…
Start with what students already know about the peer review process and build on their prior knowledge. Challenge assumptions by asking:
- Why don't researchers just use blogs?
- Do all papers submitted get published?
- Are all journals equal?
- Do authors get royalties?
- How much does it cost an author to publish?
Examples of sticker shock were used to further challenge assumptions about how much scholarly content actually costs. This naturally leads to a discussion about why publishers charge so much and why libraries provide access to expensive content. They discuss the various stakeholders in the publishing process: author, publisher, database vendor, and library.
A discussion of the invisible web follows, introducing the concept that Google makes no distinction between free and fee-based content when indexing. The crawlers are just discovering content and making a pointer to it available for retrieval. Finally, Scott and Kim were able to leverage the existing mental model of online shopping (buying airline tickets at Expedia or Travelocity) to help students make the connection between discovery and access.
Candice Benjes-Small and Eric Ackermann from Radford University spoke about how they redesigned their assessment process for instruction. They had reached a point where merely counting the number of sessions taught was no longer a useful measure of success.
All librarians had been using a standard student evaluation form with a four-point Likert scale and a single comment box at the end. They found the disconnect between the scores and the comments to be problematic and not useful in making changes, so they decided to modify the evaluation form to ask for qualitative feedback on each question.
The modified evaluation form asks the following three questions:
1. I learned something useful from this workshop.
- Strongly Agree: Name one thing you learned from this workshop.
- Agree: Name one thing you learned from this workshop.
- Disagree: How can the workshop be improved?
- Strongly Disagree: How can the workshop be improved?
2. I think this librarian was a good teacher.
- Strongly Agree: What did you like about the teaching?
- Agree: What did you like about the teaching?
- Disagree: What did you dislike about the teaching?
- Strongly Disagree: What did you dislike about the teaching?
3. I would recommend this workshop to someone interested in library research.
- Strongly Agree
- Strongly Disagree
They chose a comment-based metric methodology for assessment, similar to what the University of Virginia Library is doing with their balanced scorecard metrics. "What did you dislike about the teaching?" was chosen as the question to measure, giving the teaching librarian something tangible for improving instructional delivery. A target of less than 5% negative comments was set as the measure of total success; partial success would be achieved if 5 to 10% of the comments were negative.
Advantages:
- Evidence based
- Allows for goals to be set and measured
- Flexible to measure what you want to know

Disadvantages:
- Time intensive, especially coding qualitative comments
- Difficult to change evaluation forms if you want to go back and measure another goal
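The scoring logic behind their targets is simple enough to sketch in a few lines. This is my own hypothetical illustration, not anything from their slides, and it assumes the comments have already been hand-coded as "positive" or "negative" by a human reader:

```python
def rate_workshop(comments):
    """Classify workshop success from hand-coded comment labels.

    `comments` is a list of labels like "positive" / "negative"
    (a hypothetical coding scheme). Targets from the session:
    under 5% negative = total success, 5-10% = partial success.
    """
    if not comments:
        return "no data"
    negative = sum(1 for c in comments if c == "negative")
    pct_negative = 100 * negative / len(comments)
    if pct_negative < 5:
        return "total success"
    elif pct_negative <= 10:
        return "partial success"
    return "target missed"

# 1 negative comment out of 25 is 4%, under the 5% target
print(rate_workshop(["positive"] * 24 + ["negative"]))
```

The arithmetic is trivial; the time-intensive part, as they note, is coding the qualitative comments in the first place.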
Questions to consider:
- What do you want to know?
- How are you going to measure it?
- Are you going to focus on evaluation scores (quantitative) or comments (qualitative)?
- What is the target for success?
- Who is going to compile the results?
Their PowerPoint slides are available.