Tuesday, November 23, 2010

Sixth Blog Post

The readings for this blog post cover copyright and privacy in libraries, particularly the ways that new technological capabilities have challenged libraries and other organizations to develop new policies and, in some cases, to test the limits of existing law.

In Siva Vaidhyanathan's article "The Googlization of Everything and the Future of Copyright," it is clear that Vaidhyanathan fears the ways that the Google Books project will change the face of copyright law as we currently understand it. First, a quick summary of the Google Library Project: since 2003, Google has been making digital images of books made accessible to it by academic and commercial publishers and keeping these images on its servers. Google then controls the access a user has to each book--for older, out-of-copyright materials, the reader can often see much of the book, but for newer materials, the user is given only a quick view and a link to retailers from whom the book may be purchased. Google has also struck a deal with several major English-speaking universities around the world--if a university allows Google to digitize its collections, Google will in turn give the university free access to the scanned images. Initially, publishers feared that this project would be a blow to their industry, but it turned out not to affect their sales much. Publishers remain concerned, however, that Google is taking a "free ride" on the materials by not paying for what it scans. Vaidhyanathan begins his article by citing a New York Times Magazine piece by Kevin Kelly, in which Kelly claims that Google will change the book-buying world completely and is untroubled by this--in fact, he sees it as liberating, allowing more people better access to more information. Vaidhyanathan, however, sees it differently--what about copyright law? He claims that Google is taking advantage of a system that is old and poorly understood, and that by stretching the notion of fair use, Google threatens the law at its very core.
In fact, Vaidhyanathan compares the Google Library Project with Grokster, a music-sharing and downloading service that was held liable for copyright infringement, and he believes the similarities between the two must be recognized--if Grokster had to be punished, Google should be as well, because the two facilitate almost identical enterprises. Vaidhyanathan makes it clear that he is not an enemy of Google and uses its products frequently, but he believes that this project steps on the toes of libraries and violates copyright law. He gives us three reasons to worry about Google taking over as a primary source of book-based information--privacy, privatization, and property.

Vaidhyanathan points out that Google is very good at retaining information about its users (if you have ever used Gmail, you know that the ads on your screen are selected specifically for you based on the emails you send and receive) and that we cannot trust Google not to give out that private information if subpoenaed for it, unlike libraries, which will not give out such information (often in compliance with the ALA Code of Ethics and the Library Bill of Rights). When it comes to privatization, he argues that "companies fail," but universities last. Google may be the top dog today, but perhaps not tomorrow. Moreover, Google is a business, and like any business it answers to stockholders, which can be unsettling to the people most affected by whatever changes it decides to make. Google also lacks the standardized metadata that libraries maintain, and thus lacks quality control over its material, providing a less-than-adequate search function for the books. Property is an issue as well--copyright law comes into question when considering the project. And not only that, but Vaidhyanathan argues that Google may not be the right provider of this service, saying that it is something the libraries of the world should take under their wing and do the right way. Overall, the author fears that Google is the wrong medium for this project, that the project violates copyright law and threatens to strip away the law's protective aspects if it is reconsidered, and that libraries, threatened by the new power Google is showing, need to take a stand against it.

In a slightly different vein, Karen Coombs' article "Protecting User Privacy in the Age of Digital Libraries" hits a little closer to home. Coombs is the electronic services librarian at the SUNY Cortland library and was faced with the challenge of all of SUNY's libraries moving to the same ILS, a migration that raised the privacy and protection issues she goes on to discuss and confront in the article. Coombs gives some background on the privacy policies instituted by the government as well as the ALA's Library Bill of Rights and Code of Ethics, which many libraries adopt as their own or adapt to their needs. In light of the SUNY-wide ILS migration, Coombs examines the user privacy issues libraries face and comes up with various methods of solving them. First, she prioritizes by listing the library functions that rely on patron data, including circulation, collection development, and security, and finds that a user's Social Security number or birth date is not necessary to any of them. Working from the principle that "the more data you have, the more you have to protect," she recommends eliminating unnecessary data: auditing the ways the data can be accessed, ensuring that it is well protected and that policies are in place and enforced, and deleting whatever is not needed. Coombs applies this approach to the ILS, ILL, web sites, proxy servers, and public computers, finding ways to separate the user from the item they requested, searched for, or would otherwise be connected to. This means the library does not keep the information on hand, so it cannot be compelled by court subpoena to produce what it never kept, ensuring the user's total privacy, which is one responsibility of the library.
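To make Coombs' separation idea concrete, here is a rough sketch in Python of what "deleting the link between patron and item" might look like in practice; the tables, fields, and values are my own invented examples, not anything from her article.

```python
# Sketch of the data-minimization idea Coombs describes: once a book is
# returned, drop the link between patron and item entirely, so no borrowing
# history survives to be subpoenaed. Table and column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE loans (item_id TEXT, patron_id TEXT, due_date TEXT);
    INSERT INTO loans VALUES ('bk42', 'p001', '2010-12-01');
    INSERT INTO loans VALUES ('bk43', 'p002', '2010-12-05');
""")

def check_in(conn, item_id):
    """On return, delete the loan row rather than flagging it 'returned' --
    the library keeps no record of who borrowed what."""
    conn.execute("DELETE FROM loans WHERE item_id = ?", (item_id,))
    conn.commit()

check_in(conn, "bk42")
remaining = conn.execute("SELECT patron_id FROM loans").fetchall()
print(remaining)  # only the still-open loan survives
```

The point of the design choice is exactly the one Coombs makes: data that is never retained can never be demanded.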

And truly, I think it is this notion, the user's privacy, that should be taken into account in both of these issues--Google has little to no responsibility to the user, and so may or may not turn over the dossiers of data it keeps on each user to authorities. This is very different from what a library stands for, where privacy is one of the main concerns and users can feel protected. Just look at what Coombs is doing to ensure the privacy of her users, devising detailed strategies that allow users to feel completely safe in the areas that she and her team are able to control. To be honest, copyright is something that has always been confusing for me (as it seems to be for many others), and I wish that Vaidhyanathan had put the copyright policies into plainer language, allowing me to better understand his point and argument. I also think it would be interesting to read a pro-Google Library Project article and compare the two.

Possible discussion questions-- As Google changes the book world, do you think that its policies will have to change with it, reflecting a more library-like policy? Do you think that Google has the potential to make libraries as they are today defunct in our society? How do copyright and privacy fit into this notion?

Coombs, Karen A. (2005). "Protecting User Privacy in the Age of Digital Libraries." Computers in Libraries 25.6: 16-20.

Vaidhyanathan, Siva. (2005). "The Googlization of Everything and the Future of Copyright."
http://lawreview.law.ucdavis.edu/issues/Vol40/Issue3/DavisVol40No3_Vaidhyanathan.pdf.

Sunday, November 7, 2010

Fifth Blog Post

The articles for this blog post center on the usability of academic library web sites. The two I found most interesting were "Usability of the Academic Library Web Site: Implications for Design" by Louise McGillis and Elaine G. Toms, written in 2001, and "Website Redesign and Testing with a Usability Consultant: Lessons Learned" by Robert L. Tolliver, et al., written in 2005. These articles discuss ways of testing web sites for usability, focusing specifically on academic library web sites, but their ideas can certainly be applied to any library web site.

McGillis and Toms' article "Usability of the Academic Library Web Site" studies the usability of the web site of the Memorial University of Newfoundland Library. The authors wanted to see how faculty and students use the site and how easy it is to use. They define "usability" as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use" (356)--in other words, how user-friendly the web site is in accomplishing the goals for which users turn to it. To test the site, they had 33 participants (17 undergraduates, 2 graduate students, and 14 faculty members) complete a three-part session: participants filled out a background/experience questionnaire, performed six tasks using the library web site (finding a book, an article, and an Internet resource, getting help with databases, finding out how to renew a book, and asking a reference question online), and then completed a questionnaire about their experience in attempting the tasks. This design let the authors examine the site's effectiveness, efficiency, and satisfaction--effectiveness was measured by how many tasks were successfully completed, efficiency by how long participants took to complete the tasks, and satisfaction by the results of the survey questionnaire administered after the tasks were completed. The study produced some interesting findings. There was no difference in performance based on the status of the participant, so faculty and students alike had the same issues and successes. The participants completed 75% of the assigned tasks on average, taking roughly 2 minutes per task.
In the follow-up questionnaire, the main problem for participants turned out to be knowing where to begin each task--they had trouble with the menus and with understanding what some of the terminology meant, which made them hesitant to click a button that might not lead where they needed to go. For instance, many had trouble distinguishing between the links for "resources" and "databases," leaving them unsure of what the terms meant and where the links would take them. Terminology was the biggest problem, and the authors recognize that "...we make too many assumptions about the extent of user knowledge" (364). They found that even though the menus were structured to mirror the physical library, participants still did not know where to start, because when users have a specific problem they often cannot fit it into the terms given to them--which highlights the importance of terminology and of gauging users' familiarity with library "lingo" on the web site. To combat this, the authors argue for a user-centered approach to web design--knowing and understanding why and how users use the web site, and anticipating those needs by creating a site that is easily used and understood. They also state that there need to be better and more solid "benchmark standards" for assessing the usability of a web site. Since this article was written in 2001, when libraries were really just beginning to embrace the Internet for online catalogs, such benchmark standards may since have been designed and tested, but these authors clearly understood the need for standards and for a more user-centered and user-involved approach to the library web site.
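Out of curiosity, the three measures McGillis and Toms used are simple enough to compute by hand; here is a toy sketch in Python with made-up session data (the numbers are invented, not theirs).

```python
# Toy computation of the three usability measures from the study:
# effectiveness (share of tasks completed), efficiency (mean time per task),
# and satisfaction (mean survey rating). All data below is made up.
from statistics import mean

# each tuple: (task completed?, seconds taken)
sessions = [(True, 95), (True, 130), (False, 240), (True, 110)]
ratings = [4, 3, 5, 4]  # post-test satisfaction on a 1-5 scale

effectiveness = sum(done for done, _ in sessions) / len(sessions)
efficiency = mean(secs for _, secs in sessions)
satisfaction = mean(ratings)

print(f"effectiveness: {effectiveness:.0%}")   # 75%
print(f"efficiency: {efficiency:.1f} s/task")
print(f"satisfaction: {satisfaction:.1f}/5")
```

Even this crude version shows why the three measures complement each other: a site can score well on completion while still being slow or frustrating.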

In the second article, written in 2005, "Website Redesign and Testing with a Usability Consultant," Robert L. Tolliver, et al. describe the University of Michigan Art, Architecture, and Engineering Library's decision to move its web site from a hand-coded HTML site to a CMS-driven one, and to completely redesign the site and conduct a usability test beforehand so as to better design the new site. The redesign team was large, pulling in subject-specialist librarians as well as IT staff so that both the content and the context of the web site's structure could be addressed. To guide the redesign, the library hired a usability consultant who was an expert on usability and usability testing, a move the authors say was definitely the right one for them. To begin the usability testing, several librarians were consulted, and several library student assistants were asked to complete tasks using the current site; this gave the consultant an idea of the existing problems that needed to be addressed. The library was offered three levels of testing and chose the mid-range option, which included heuristic analysis (a basic survey of the site--this was cut down, however, due to time constraints), sector interviews (four subject librarians were interviewed, allowing the consultant to draw out the good and bad parts of the current site and how students and librarians use it), card sorting, and wire framing. The card sorting was perhaps the most interesting and productive part of the testing. Based on the interviews with the librarians, the consultant wrote terms from the library web site on cards and asked six participants (undergraduates and graduate students) to sort them into whatever groups made sense to them and to label each category. Five of the six participants produced very similar responses, which were then used in the redesign considerations.
Generally, these five students separated the cards into four groups--library information and services, information services, specialized resources, and miscellany. Based on the information garnered from the card sorting, the consultant created a paper prototype of the proposed new layout, and eight students were given tasks to complete with the prototype. The consultant and librarians observed the paths the students followed and took notes on the terminology problems the students had while going through the tasks. Using the results of the prototype test, the consultant and librarians changed the site to better fit the students' needs, tested one more time, and then introduced the new site. In the end, the authors found that working with a usability consultant was an extremely positive way to approach the redesign. Tolliver, et al. state, "Given the increasing importance of a library's web presence and the multitude of electronic resources through which users must navigate, it is important for librarians to appreciate the value of usability testing, and using a professional usability consultant may provide a valuable foundation" (165). They found that combining the librarians' expertise with the consultant's made for good conversations and breakthroughs that might not have happened otherwise. For these librarians, an outside consultant was the best way to redesign their site, and the entire experience seemed to open their eyes to the importance of usability and the value of the user's input when it comes to the library's web site.
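For anyone curious how card-sort results can actually be tallied, here is a rough sketch in Python of one common approach: counting, for every pair of cards, how many participants placed them in the same group. The cards and sorts below are my own invented examples, not data from the Tolliver study.

```python
# Tally card-sort agreement: pairs of cards that many participants grouped
# together are good candidates to sit together in the new menu structure.
from collections import Counter
from itertools import combinations

# each participant's sort: a list of groups, each group a set of cards
sorts = [
    [{"hours", "contact"}, {"databases", "e-journals"}],
    [{"hours", "contact", "databases"}, {"e-journals"}],
    [{"hours", "contact"}, {"databases", "e-journals"}],
]

together = Counter()
for sort in sorts:
    for group in sort:
        for pair in combinations(sorted(group), 2):
            together[pair] += 1  # this pair shared a group for one person

print(together.most_common(2))
```

In this toy data, "hours" and "contact" were grouped together by all three participants, so a redesign would likely keep them under one heading, much as the consultant's five similar sorts drove the four-group layout above.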

What is valuable in these articles is their recognition of the importance of user input and of assessing a library web site's usability. In both instances, users were consulted in order to help the librarians and web designers fully understand how the site was used and what users expected. In both cases, terminology was an issue--users were confused by some ambiguous, vague, or unfamiliar terms, and these studies helped the librarians better phrase the categories and terms on the site to enable better results for users. The McGillis and Toms article was written in 2001, at a time when library web sites were still in their relative infancy, and many of the issues they found may be less of a problem now that users are more familiar with the Internet and its conventions, but their findings on terminology and clear layout are still relevant. McGillis and Toms did not use a usability consultant, while Tolliver, et al. seem to really believe that using one was key to the success of their redesign. It would be interesting to see how differently McGillis and Toms' study might have been structured if they had brought in a consultant. Both of these studies are valuable for thinking about how to assess a library web site's usability, offering different assessment methods that can be tailored to your library's specific needs.

Discussion question: Do you think that these studies and their findings can be translated to different types of libraries? What would have to change in the assessment process in a public library or a special library? Would you use a consultant, or administer testing on your own?

Works Cited:

McGillis, Louise, and Elaine G. Toms. "Usability of the Academic Library Web Site: Implications for Design." College and Research Libraries 62.4 (2001): 355-367.

Tolliver, Robert L., et al. "Website Redesign and Testing with a Usability Consultant: Lessons Learned." OCLC Systems & Services 21.3 (2005): 156-166.

Monday, October 11, 2010

Fourth Blog Post

With the development of Web 2.0 capabilities and the popularity and staying power of these new technologies, it is no wonder that their role in the library has come into question. For more background and information on Web 2.0 and the amorphous "Library 2.0," I read the articles "What is Library 2.0?" by Holmberg, et al. and "'All that Glisters is Not Gold'--Web 2.0 and the Librarian" by Paul Anderson. "What is Library 2.0?" was an extremely informative and interesting article that searches for a definition of "Library 2.0." Holmberg, et al. acknowledge the changes in library technology, but maintain that although the methods for delivering library services have evolved over the years, the core values of libraries (to provide access to and facilitate the use of information) have not changed. They also discuss Web 2.0--technologies that allow users to interact with and personalize their web experience--and how it has begun to make its way into libraries and library services in the form of "Library 2.0."

Holmberg, et al. bemoan the fact that although the term is bandied about, there is no clear definition of "Library 2.0." The authors give some examples of the ways that librarians have attempted to define the term, but found them all too uninformed or vague, so they performed a study in order to come up with a better definition. The method of the study was to ask "What is Library 2.0?" of 29 librarians at a workshop on Library 2.0 that took place in Finland. Those asked were of all ages and stages in their careers, and worked in different kinds of libraries. The librarians were instructed to write their answer in five minutes with no discussion; some wrote full sentences, others bullet points, but all were able to express their thoughts. The authors then used co-word analysis to analyze the data. Co-word analysis uses the frequency of words in a given body of text, extracts connections between words, and measures the strength of those connections. Applying this to the librarians' responses, Holmberg, et al. were able to extract the key terms used and the relationships between them, creating network maps that help visualize the definition. The authors found that the terms "interactivity" and "users" had the strongest connection, and that "interactivity" was the most-used single term. Using the analysis, they created clusters of terms based on the concepts they represent, arriving at seven core components of Library 2.0: interactivity, users, participation, libraries and library services, web and Web 2.0, social aspects, and technology and tools. Holmberg, et al. consider these seven facets the building blocks of Library 2.0, and they allowed the authors to formulate a definition: "Library 2.0 is a change in interaction between users and libraries in a new culture of participation catalysed by social web technologies" (677).
The authors believe this is empirically the best of the definitions because it encompasses all seven facets that are the building blocks of Library 2.0, while remaining versatile enough that different libraries can emphasize the aspects most relevant to them. It is meant to be a very fluid definition, one that can help facilitate further discussion of Library 2.0 and provide a framework for that discussion.
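To get a feel for what co-word analysis involves, here is a toy sketch in Python: it counts term frequencies and term co-occurrences across a handful of invented responses (the real study's method and data are of course richer than this).

```python
# Minimal sketch of co-word analysis: count how often terms appear together
# in the same response, then rank pairs by co-occurrence strength. The
# "responses" here are invented stand-ins for the workshop answers.
from collections import Counter
from itertools import combinations

responses = [
    {"interactivity", "users", "participation"},
    {"interactivity", "users", "technology"},
    {"users", "participation", "social"},
    {"interactivity", "users"},
]

term_freq = Counter(t for r in responses for t in r)
pair_freq = Counter(
    pair for r in responses for pair in combinations(sorted(r), 2)
)

print(term_freq.most_common(2))   # most-used single terms
print(pair_freq.most_common(1))   # strongest connection
```

In this toy data the strongest link is between "interactivity" and "users," which conveniently mirrors the study's actual finding; a network map like the authors' is essentially a visualization of a pair-frequency table like `pair_freq`.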

To supplement this reading and understanding of Library 2.0 and Web 2.0, I read "'All That Glisters is Not Gold'--Web 2.0 and the Librarian" by Paul Anderson, an editorial discussing Web 2.0 and the ways that librarians could and should be involved in its development in order to make it a tool for libraries. Anderson argues that we need to create a framework in which we can discuss and understand Web 2.0, and his suggested framework has three aspects: understanding the software and services of Web 2.0, applying the "Six Big Ideas," and following the web technologies and standards that are evolving. In coming up with a framework for understanding Web 2.0 in libraries, Anderson notes some barriers that may come up in the discussion. One of these is the speed at which technology advances, often evolving so quickly that it is financially and practically impossible for libraries to keep up. But Library 2.0 will keep coming regardless, and so the author calls for action on the part of librarians--especially in this environment where copyright and privacy issues are bound to arise quickly, librarians need to make sure that they are part of the development process and that they are using their skills and taking risks to fully understand and shape Web 2.0 and its impact on libraries. At the end of the paper, Anderson cites O'Reilly's "Six Big Ideas" and adapts them into six key concepts that he believes help to form the framework of Web 2.0: individual production and user-generated content, harnessing the power of the crowd, data on an epic scale, architecture of participation, network effects, and openness. These six "ideas" are quite similar to those explored by Holmberg, et al., but they focus more on the hardware and software of the technology, and less on the end user and the use of Library 2.0. The article by Holmberg, et al. is actually a few years newer than Anderson's, and I think its conclusions make that clear; both articles recognize the need for a framework and definition of Library 2.0 so that libraries and librarians can fully develop and utilize the new technologies, but while Holmberg, et al. are focused on concept and social implications, Anderson is more focused on the harder science of it.

In reading these two articles, it becomes clear that as grand and wonderful as Library 2.0 may seem, there are certainly barriers it will come up against, not least the lack of a true definition that would put all librarians on the same page when it comes to understanding the term. Although the Holmberg article did address this a little, my main problem with these two articles was that I kept asking myself: what about the users? I understand the need to define Library 2.0 before really delving into its use and implications, but the user got only a passing glance in the Holmberg article and almost nothing in Anderson's. Shouldn't we be talking to users and seeing what they want Library 2.0 to mean before we settle on a definition shaped by what librarians think it might mean? Also, I understand that all libraries share a core value, but wouldn't a public library, a special library, and an academic library all handle Library 2.0 differently? How does that notion figure into the discussion of Library 2.0? Holmberg, et al. mention the fluidity of their definition, allowing it to be grasped by all types of libraries, but once the definition is established and accepted, what are the implications for the different types of libraries?

Works Cited:

Anderson, Paul. "'All that Glisters is Not Gold'--Web 2.0 and the Librarian." Journal of Librarianship and Information Science 39.4 (2007): 195-198.

Holmberg, et al. "What is Library 2.0?" Journal of Documentation 65.4 (2009): 668-681.

Sunday, September 26, 2010

Third Blog Post

In Marcia Deddins' article "Overview of Integrated Library Systems," the author opens by discussing the progress that integrated library systems have made in recent years and how well they fit the library's system goals of providing content to users easily. Deddins discusses the importance of the ILS and how it truly influences and shapes the library itself--without the ILS and the work that goes into creating it, libraries would not be as efficient or as well used as they are. Deddins goes on to look at three ILS vendors (making the caveat beforehand that vendors tend to put a fine gloss on their products and let you hear only the great things): Endeavor, Innovative Interfaces, and SIRSI. The purpose of setting these vendors' descriptions of their own products against one another seems to be to show the similarities and differences between them, and also what each is trying to do to help the library. What is similar about all of these vendors is their stress on integrating different library materials and metadata; each claims its product is the perfect integrated system for data management and for enabling a better digital library environment. But each vendor has a different overall emphasis--Endeavor focuses on creating a stronger digital library in which searching, individualization, and access are simpler; Innovative Interfaces focuses more on the integration of metadata and the more technology-based advances it is making; and SIRSI has a completely different pitch, bringing it all back to the user and the people rather than the technology, as though more interested in helping the user than in innovation for its own sake.
At the end of the article, Deddins concludes that the innovations occurring in the ILS today are great opportunities for librarians and IT professionals to collaborate and innovate so that the ILS will become stronger and more user-friendly, accomplishing the goals that are set for the library.

This leads into the second article, "Re-Integrating the 'Integrated' Library System" by Marshall Breeding. Breeding is dismayed at the route the ILS has taken in recent years, seemingly upstaged by Google and by "a la carte" automation utilities that defeat the purpose of an integrated system. His biggest concern is that as these a la carte tools are introduced to libraries and bought up and used more and more, libraries move farther away from a truly integrated system, leaving the ILS at risk of becoming defunct. The problem, as he sees it, is that all of these different software applications are neither compatible nor cohesive, preventing a more seamless system (which, although he admits perfect seamlessness is impossible, could still be far more cohesive) and turning users away from an ILS that seems complicated and bulky compared to Google's sleekness. Breeding sees two obstacles standing in the way of a more integrated ILS: requests for proposals and the industry itself. Vendors do not want to stray from the long checklists of features librarians expect from their systems, even when some of the things asked for hinder best functionality. As for the marketplace, the ILS is expensive, and if the add-ons were folded into the ILS package as a whole, it would only get more expensive; with limited and shrinking budgets, an a la carte purchasing model makes it easier for libraries to afford what they need and gives the vendor better profits. At the end of his article, Breeding seems optimistic that the ILS will find a way to better integrate the add-ons libraries require, but he warns that if it does not, more and more users will bypass the library as a resource and go straight to Google and Amazon.

The combination of these two articles makes for an interesting discussion: where Deddins has a fairly positive outlook on the ILS, Breeding does not, and thinks the traditional ILS needs a strong upheaval in order to keep up with web technology. What both do recognize is the need and opportunity for collaboration between IT professionals and librarians in order to keep users using the ILS that libraries provide. It seems as though most of the articles we read about library technology keep bringing up the fact that Google and Amazon have a leg up on libraries because of their sleekness, and that we need to revamp our own systems to become more user-friendly, inviting more people to use them. Whether this requires some tweaking on the part of the vendors or librarians, or a complete overhaul of the ILS, is a question under serious debate. A discussion question arising from these two articles might be: to what extent do libraries need to mirror Google and Amazon? Would becoming more like them take away from the library's goals and the basic structure of the ILS as we've known it? Is there something that can be done on the vendor side (e.g., rising costs, asking for different features)?

Bibliography
Deddins, Marcia. "Overview of Integrated Library Systems." EDUCAUSE Evolving Technologies Committee. 2002. http://net.educause.edu/ir/library/pdf/DEC0201.pdf.

Breeding, Marshall. "Re-Integrating the 'Integrated' Library System." Computers in Libraries 25.1 (2005): 28-30.

Sunday, September 19, 2010

Second Blog Post

Calvin Mooers' theory (better known as Mooers' Law) is an interesting and troubling one when looked at from a librarian's point of view. As librarians, we are trained to (and often inherently drawn to) love information, and we study throughout our entire careers, even after graduate school, how to find, use, interpret, and seek out information. It is the definition behind the work that we do. Even so, what Mooers proposes as a law is just as necessary to a librarian's repertoire as knowing how to find and distribute information. Mooers' Law posits that people will tend not to use an information retrieval system whenever it is more painful and troublesome for them to have the information than not to have it. Mooers makes a good point--oftentimes, information is a burden. There are many implications and responsibilities that come with having information--things need to be sought out, interpreted, applied to an issue, or just basically read, seen, or heard. These things take time and energy; thus, information retrieval systems are often ignored by the public unless they absolutely must have a certain piece of information. This law has many implications for librarians--when people do use the information retrieval systems that we provide for them, the systems need to be clear and easy to use, and they need to let the user get the information they need as quickly, accurately, concisely, and easily as possible. Although I was not fully aware of Mooers' Law prior to reading this article, I believe the knowledge that comes from it is extremely necessary to librarians and their thinking on systems, which brings us to the second article that we read.

In Antelman, Lynema, and Pace's article "Toward a Twenty-First Century Library Catalog," the authors explain the issues inherent in the current library catalog, describing the first- and second-generation online catalogs and the need for further advancement. In all things, librarians need to be thinking of their users. This may seem like a basic principle to live by as a librarian, but oftentimes it seems as though we forget who we are serving. What North Carolina State University did was take a step back from the daily routine and really analyze the ways in which users were and were not being served by the catalog. Because the catalog is often the access point for users, it is important that it be relevant and easy to use, providing both discovery and locational information for resources. NCSU, unafraid of taking calculated risks, integrated Endeca's Information Access Platform, a search platform used by commercial websites to make them more appealing to users. NCSU wanted to take the searching methods employed by this platform and make it easier for users to find the information most relevant to them, enhancing browsability, improving subject access, and generally making the searching process more user-friendly. NCSU accomplished this by employing spell correction, "did you mean..." suggestions, "dimensions" that let users refine their searches by clicking on facets to narrow the results, relevance ranking (which, it seems, was the feature users appreciated most), and better browsability (allowing searching without having to enter a query). After implementing the new service, the university did a study to see if users were finding more relevant material faster than before, and found that generally the students were happy with the improved system.
Although this new "discovery portal" OPAC has come a long way from what OPACs have been, there are still things to improve, such as widening its scope, the ability to use more colloquial language in search queries, and a way for users to give feedback on the relevance of the material they found. However, even with the kinks to work out and more advancements to be made, these new developments represent significant strides.
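Two of the features described above--faceted "dimensions" that narrow a result set, and relevance ranking of whatever remains--can be illustrated with a small sketch. This is my own toy example, not NCSU's actual Endeca implementation; the records, field names, and scoring rule are all invented for illustration:

```python
# A handful of fake catalog records with facet fields.
records = [
    {"title": "Library Information Systems", "subject": "library science", "format": "book"},
    {"title": "Computers in Libraries Today", "subject": "library science", "format": "journal"},
    {"title": "Modern Web Search", "subject": "information retrieval", "format": "book"},
]

def narrow(results, facet, value):
    """Clicking a facet value keeps only the matching records."""
    return [r for r in results if r[facet] == value]

def rank(results, query):
    """Crude relevance: rank by how many query words appear in the title."""
    words = query.lower().split()
    return sorted(results, key=lambda r: -sum(w in r["title"].lower() for w in words))

hits = narrow(records, "format", "book")   # the user clicks the "book" facet
for r in rank(hits, "library systems"):
    print(r["title"])
```

Even this crude version shows why the combination works: narrowing cuts the haystack before ranking sorts it, so the user does less sifting at every step.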

Placing these two articles side by side says a lot about the state of information seeking through systems today and what we need to focus on when we create or try to improve our systems. Mooers' Law basically states that users take the path of least resistance, as it were, and will not seek out information if doing so is in any way painful to them. At the end of his article, Mooers makes a good point: although his Law holds, there will be times when information is needed and sought, and for those people the systems need to be as user-friendly as possible. There is nothing we as librarians can do about the outside reasons people may not want information, but we can improve our own systems to make the process less burdensome and painful for those who do use our information retrieval systems. NCSU has realized this and gone so far as to implement a new system in its OPAC, giving users more control over their search through relevance ranking and "did you mean..." suggestions that help users narrow their focus and find the relevant information they need with less sifting and sorting through things they do not need. I think what we as librarians can take away from these articles is the need to focus on the user, understand how they search and what motivates them, and cater to them, giving them a smoother process and thus bringing them back to our systems instead of losing them to the Internet. One question that arises from these readings comes from something Antelman et al. raised: users do not seem to understand HOW to search properly, and that is part of why they cannot get over the barriers that controlled vocabularies and subject headings hold for them. So, how can we use our understanding of Mooers' Law and the implications of the NCSU study to improve the knowledge that users have?
If they do not know the difference between a subject and a keyword, how can we educate them to make their searches clearer while still using these new methods to help them search? Is it still our job to teach Boolean searching if we are coming up with new methods of searching? Something to think about, perhaps.

Bibliography:

Antelman, Kristin, Emily Lynema, and Andrew Pace. "Toward a Twenty-First Century Library Catalog." Information Technology and Libraries 25.3 (2006): 128-139.

Mooers, C.N. "Mooers' Law or, Why Some Retrieval Systems Are Used and Others Are Not." Bulletin of the American Society for Information Science and Technology 23.1 (1996): 22-23.

Monday, September 6, 2010

First Blog Assignment

Reading the two articles "The Evolution of LIS and Enabling Technologies" and "Environmental Scan: A Report on Trends and Technologies Affecting Libraries" was very enlightening on the topic of technology in libraries, in different ways. "The Evolution of LIS and Enabling Technologies" was interesting for its look at the history of technologies in libraries. Something I truly believe in is the importance of examining and understanding the past in order to prepare ourselves for, and attempt to anticipate, the future, and to understand the present. Kochtanek and Matthews dive into the past to flesh out the stages that LIS has gone through and help define what the term Library Information Systems means. Terms such as this one are often ubiquitous to the point of losing their meaning, but the authors do a great job of helping us understand what is meant by it. Kochtanek and Matthews understand the LIS as the entity that brings content and users together, providing the service that defines the library. I believe, and it seems that Kochtanek and Matthews do as well, that libraries need to focus on the user and what they need (without the user, what is the point of having a library in the first place?), and that creating and adopting library system functions that are both helpful to the user and easy to use can only increase users' awareness of the library's offerings and perhaps serve as an entry point for people who do not already use them. Another interesting distinction the authors make is in the ways that libraries adopt new technologies. Their assessment and commentary on this topic on page 7 proved thought-provoking for me--if I were the head of a library, how would I want to approach new technologies? Is it worth risking failure to be the "bleeding edge" library? I know that I certainly would not want to be on the "trailing edge!"
This distinction should give librarians something to consider in adopting new technologies. One final thing I really gleaned from this article was the "three phases or periods of innovation in the design of several computing configurations" (Kochtanek 10). I find it interesting, and telling of the library profession, that we have moved from periods that focus on collections and content to one that focuses on users. Through my LIS education, I have become increasingly interested in outreach and the users of libraries, and it is good to know and understand the ways in which library systems are becoming more user-friendly and user-focused. I thought that this article did a great job of examining the past of LIS and helping me understand how important the past is in fueling the future.

Pairing this article with "Environmental Scan: A Report on Trends and Technologies Affecting Libraries" made for interesting reading. Arnold Hirshon attempts to forecast the future of libraries in the context of technology, made most effective by his inclusion of all aspects of libraries in his predictions and discussions. Hirshon makes clear that his predictions are not comprehensive and that, although his ideas are based on studies and statistics, they are not definitive. Hirshon looks at the future of libraries by splitting his article into five issues libraries will face: societal and economic, technology, education and learning, information content, and library leadership and organization. It is interesting that technology comprises only one of these issues, yet it certainly informs most of the others. Some of the most interesting and thought-provoking issues Hirshon discusses concern Generation Y, societal changes, gaming and augmented reality, e-books, and a refocus on users. Each of these issues is informed by new technologies--Generation Y has had technology around it for its members' entire lives, and libraries have to learn how to adapt their older ways to suit the new demands and expectations of a generation of "digital natives." Hirshon discusses the JISC report, one conclusion of which was that librarians need to better understand how these new users of library services search for, retrieve, and use information in light of the Internet and technology, so that users are getting the best information they can. The author also comments on the rise of gaming and augmented reality in learning environments and how they are affecting the way people learn and interact with their information, implying that libraries need to analyze these methods and incorporate some aspect of them into their presence in order to keep the attention of their users.
Hirshon also takes a look at the more pronounced presence of e-books and how they are affecting budgets, cataloging, and general technology needs. With new technologies like the Nook and Kindle, e-books are becoming more prevalent in our society, and libraries need to adapt to this new market and learn how their communities respond to them. As I have mentioned before, libraries are shifting their focus from collection-based to user-based, which Hirshon seems to approve of, but which he also warns is not to be taken lightly; management and leadership roles will have to change along with this shift in focus. Obviously, Hirshon covers much more in his article than these points, but I found these the most interesting, and they made me think about my own use of technology over the years and how it has informed my research habits and library usage. It seems to me that the overall message Hirshon wants the reader to take away is that libraries are living creatures that change rapidly, and as librarians, it is crucial that we understand these changes and act accordingly in order to benefit our users.

I found both of these articles intriguing and thought-provoking, and in thinking about them, I came up with a possible discussion point: social networking has been played up quite a bit over the years, and both articles touch on it (Hirshon's more than Kochtanek and Matthews'), but I wonder what the influence of social networking will be on the future of libraries, and how libraries and librarians can possibly foresee their role in a world that values social networking the way ours does now. Although the influence of social networking may not be the same in ten years, its impact should still be considered an important one in the library world.

Works Cited

Kochtanek, T.R., and J.R. Matthews. "The Evolution of LIS and Enabling Technologies." Chapter 1. Library Information Systems. Westport, CT: Libraries Unlimited, 2002. 3-13.

Hirshon, Arnold. "Environmental Scan: A Report on Trends and Technologies Affecting Libraries." NELINET August 2008. 1-12.