Sunday, November 7, 2010

Fifth Blog Post

The articles for this blog post center on the usability of academic library web sites. The two I found most interesting were "Usability of the Academic Library Web Site: Implications for Design" by Louise McGillis and Elaine G. Toms, written in 2001, and "Website Redesign and Testing with a Usability Consultant: Lessons Learned" by Robert L. Tolliver, et al., written in 2005. Both articles discuss ways to test web sites for usability, focusing specifically on academic library web sites, but their ideas can certainly be applied to any library web site.

McGillis and Toms' article "Usability of the Academic Library Web Site" studies the usability of the Memorial University of Newfoundland Library web site: how faculty and students use the site, and how easy it is to use. The authors define "usability" as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use" (356). In other words, how user-friendly is the web site in helping users accomplish the goals they came to it with? To test the site, the authors measured three specific aspects: effectiveness, efficiency, and satisfaction. They had 33 participants (17 undergraduates, 2 graduate students, and 14 faculty members) complete a three-part session: a background/experience questionnaire; six tasks performed on the library web site (finding a book, an article, and an Internet resource, getting help with databases, renewing a book, and asking a reference question online); and a follow-up questionnaire about their experience attempting the tasks. This design let the authors examine all three aspects: effectiveness was measured by how many tasks were successfully completed, efficiency by how long participants took to complete the tasks, and satisfaction by the results of the questionnaire administered after the tasks were finished.

The study produced some interesting findings. There was no difference in performance based on the status of the participant, so faculty and students alike had the same issues and successes. Participants completed 75% of the assigned tasks on average, taking roughly 2 minutes per task. The follow-up questionnaire showed that the main problem for participants was knowing where to begin each task: they reported trouble with the menus and with understanding what some of the terminology meant, which made them hesitant to click a button that might not lead where they needed to go. For instance, many had trouble distinguishing between the links for "resources" and "databases," leaving them unsure of what the labels meant and where they would lead. Terminology was the biggest problem, and the authors concede that "...we make too many assumptions about the extent of user knowledge" (364). Even though the menus were structured to mirror the physical library, participants still did not know where to start, because when they have a specific problem they often cannot fit it into the terms given to them. This highlights how important terminology is, and how aware the site's designers must be of users' familiarity with library "lingo." To combat this, the authors argue for a user-centered approach to web design: knowing and understanding why and how users use the web site, and anticipating those needs by creating a site that is easily used and understood. The authors also state that there need to be better and more solid "benchmark standards" for assessing the usability of a web site.
Since this article was written in 2001, when library web sites were still relatively new, such benchmark standards may well have been developed and tested in the years since, but the authors clearly understood the need for them, and for a more user-centered, user-involved approach to the library web site.
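To make the three measures concrete, here is a minimal sketch (in Python) of how results like these might be tabulated. Everything below is invented for illustration; the article reports only aggregate figures, and this is not the authors' actual analysis.

# Hypothetical tabulation of usability-test results, in the spirit of
# McGillis and Toms' three measures. All data below is made up.

# Each record: (participant, task, completed?, seconds taken)
task_records = [
    ("p01", "find a book",  True,  95),
    ("p01", "renew a book", False, 210),
    ("p02", "find a book",  True,  130),
    ("p02", "renew a book", True,  88),
]

# Post-test questionnaire scores, one per participant, on a 1-5 scale.
satisfaction_scores = [4, 3]

# Effectiveness: share of tasks completed successfully.
effectiveness = sum(1 for r in task_records if r[2]) / len(task_records)

# Efficiency: mean time spent per task, in seconds.
efficiency = sum(r[3] for r in task_records) / len(task_records)

# Satisfaction: mean questionnaire score.
satisfaction = sum(satisfaction_scores) / len(satisfaction_scores)

print(f"Effectiveness: {effectiveness:.0%} of tasks completed")
print(f"Efficiency:    {efficiency:.0f} seconds per task on average")
print(f"Satisfaction:  {satisfaction:.1f} out of 5")

Seen this way, the three measures are just three different aggregations over the same session logs, which is part of why a single task-based test can answer all three questions at once.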

In the second article, "Website Redesign and Testing with a Usability Consultant," written in 2005, Robert L. Tolliver, et al. describe the decision to move the web site of the University of Michigan Art, Architecture, and Engineering Library from a hand-coded HTML site to a CMS-driven one, and to completely redesign the site and conduct usability testing first so that the new design would be better informed. The redesign team was large, pulling in subject specialist librarians as well as IT staff so that it could address both the content and the structure of the web site. To get the redesign right, the library hired a usability consultant, a move the authors say was definitely the right one for them.

To begin the usability testing, several librarians were consulted, and several library student assistants were asked to complete tasks using the current site, giving the consultant a sense of the existing problems that needed to be addressed. The library was offered three levels of testing and chose the mid-range option, which included heuristic analysis (a basic expert survey of the site, cut down due to time constraints), sector interviews (four subject librarians were interviewed, allowing the consultant to draw out the good and bad parts of the current site and how students and librarians use it), card sorting, and wireframing.

The card sorting was possibly the most interesting and productive part of the testing. Based on the interviews with the librarians, the consultant wrote terms from the library web site on cards and asked six participants (undergraduates and graduate students) to sort them into whatever groups made sense to them and to label each category. Five of the six participants produced very similar groupings, which were then used in the redesign. Generally, these five students separated the cards into four groups: library information and services, information services, specialized resources, and miscellany. Based on the card-sorting results, the consultant created a paper prototype of the proposed new layout, and eight students were asked to complete tasks with it. The consultant and librarians observed the paths the students followed and noted the terminology problems they ran into along the way. Using the results of the prototype test, the consultant and librarians revised the site to better fit the students' needs, tested one more time, and then launched the new site.

In the end, the authors found that working with a usability consultant was an extremely positive way to approach the redesign. Tolliver, et al. state, "Given the increasing importance of a library's web presence and the multitude of electronic resources through which users must navigate, it is important for librarians to appreciate the value of usability testing, and using a professional usability consultant may provide a valuable foundation" (165). They found that combining the librarians' expertise with the consultant's led to good conversations and breakthroughs that might not have happened otherwise.
For these librarians, an outside consultant was the best way to redesign their site, and the experience seems to have opened their eyes to the importance of usability and the value of user input when it comes to the library's web site.
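As a side note on method: one way to see why five similar sorts carry so much weight is to aggregate card sorts by counting how often pairs of cards land in the same pile. The sketch below illustrates that logic only; the card names and sorts are invented, and nothing here comes from the Tolliver et al. study itself.

from collections import Counter
from itertools import combinations

# Hypothetical card-sort aggregation. Each participant's sort is a list
# of groups (sets of card labels). All cards and sorts are invented.
sorts = [
    [{"hours", "staff"}, {"catalog", "databases", "e-journals"}],
    [{"hours", "staff", "maps"}, {"catalog", "databases", "e-journals"}],
    [{"hours", "staff"}, {"catalog", "e-journals"}, {"databases", "maps"}],
]

# Count how often each pair of cards ends up in the same group.
pair_counts = Counter()
for sort in sorts:
    for group in sort:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

# Pairs grouped together by a majority of participants are good
# candidates to live under the same menu heading.
for (a, b), n in pair_counts.most_common():
    if n > len(sorts) / 2:
        print(f"{a} + {b}: together in {n} of {len(sorts)} sorts")

Cards that most participants keep together point toward menu categories that will match users' mental models, which is exactly what the consultant's four groups were meant to capture.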

What is valuable in these articles is their recognition of the importance of user input in assessing the usability of a library web site. In both cases, users were consulted so that the librarians and web designers could fully understand how they used the web site and what they expected from it. In both cases, terminology was an issue: users were confused by ambiguous, vague, or unfamiliar terms, and the studies helped the librarians rephrase the categories and labels on the site to give users better results. The McGillis and Toms article was written in 2001, when library web sites were still in their relative infancy, and many of the issues they found may be less of a problem now that users are more familiar with the Internet and its conventions, but their findings on terminology and clear layout are still relevant. McGillis and Toms did not use a usability consultant, while Tolliver, et al. clearly believe that using one was key to the success of their redesign; it would be interesting to see how differently McGillis and Toms' study might have been structured if they had brought in a consultant. Both studies are valuable for thinking about how to assess a library's web site and how usable it is, offering different methods of assessment that can be tailored to your library's specific needs.

Discussion question: Do you think that these studies and their findings can be translated to different types of libraries? What would have to change in the assessment process in a public library or a special library? Would you use a consultant, or administer testing on your own?

Works Cited:

McGillis, Louise, and Elaine G. Toms. "Usability of the Academic Library Web Site: Implications for Design." College & Research Libraries 62.4 (2001): 355-367.

Tolliver, Robert L., et al. "Website Redesign and Testing with a Usability Consultant: Lessons Learned." OCLC Systems & Services 21.3 (2005): 156-166.
