UX testing in library instruction

I believe there is no clear boundary between the digital and the physical when evaluating user experience; the two are intertwined. In an academic library, the digital presence, such as the library's website, can have a positive or negative effect on services provided in the physical environment, such as research consultations, and vice versa. To identify how they are interconnected, ethnographic studies (also known as field studies or contextual inquiries) and diary studies are good methods, because they allow UX professionals to see how users actually interact with services or products in their own context. However, both types of research are expensive and time-consuming. Unlike a survey or a lab-based UX study, they require participants to devote a longer period of time, which makes it difficult to recruit participants and to retain them from beginning to end.

This led me to the idea of conducting UX testing within library instruction. First, I wanted to see whether there is a relationship between library instruction and digital user experience. Second, it would be easier to attract voluntary participation from students because the testing was planned for the beginning of a library instruction session. In addition, as Castonguay (2008) suggests, web usability testing can also serve as an assessment tool for library instruction. Still, it would not be easy without careful preparation. I would like to share my experience conducting UX testing in library instruction, in particular the study process and its pros and cons.

Process

At the beginning of the library instruction session, a subject librarian (Eric Snajdr) gave students an overview of the study and explained how their participation could benefit not only website improvement but also instruction. Two things were highlighted: participation was voluntary, so students could opt out at any time, and there were no right or wrong answers. Students then spent some time reading a written information sheet approved by the IRB and could ask questions before starting the study.

Although Screencast-O-Matic, a screen and voice capture tool, was easy to operate, the librarian walked through it with the whole class, giving a brief introduction on how to adjust the screen size and start recording. Students were then given five minutes to perform a series of six information-seeking tasks. The tasks were assigned in a random order so that students sitting next to each other did not work on the same task. After five minutes, the librarian asked students to stop and again walked through the next steps with the whole class to make sure everyone followed the guidelines for saving the video files and emailing them to the librarian.
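For readers curious about the task rotation, here is a minimal Python sketch of one way such an assignment could be generated. The task list, seat numbering, and seed are hypothetical placeholders, not the actual materials or procedure used in the study.

```python
import random

# Hypothetical placeholders for the six information-seeking tasks
# (not the actual task wording used in the study).
TASKS = [f"Task {i}" for i in range(1, 7)]

def assign_tasks(num_students, seed=None):
    """Rotate a shuffled task list across seats so that adjacent
    students begin with different tasks."""
    rng = random.Random(seed)
    order = TASKS[:]
    rng.shuffle(order)
    assignments = []
    for seat in range(num_students):
        # Each seat starts at a different offset in the shuffled list,
        # so neighboring seats never start on the same task.
        offset = seat % len(order)
        assignments.append(order[offset:] + order[:offset])
    return assignments

if __name__ == "__main__":
    for seat, tasks in enumerate(assign_tasks(8, seed=42), start=1):
        print(f"Seat {seat}: {tasks}")
```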

Immediately after completing the tasks, students took a pre-survey deployed online with SurveyMonkey to elicit their opinions on the website's usability and the tasks, as well as their emotions. This was followed by the library instruction session itself. At the end of the session, students were asked to complete a post-survey containing questions about the website, the library instruction, and their emotions.

Since only about 20 minutes of the 90-minute library instruction session were available, all links and tools were pre-installed in a "UX2015" folder on the desktop of each computer. This enabled the subject librarian and me to keep the process running as smoothly as possible.

Pros and Cons

Every research method has pros and cons. On the positive side, it was easy to recruit a large number of participants at one time. For example, a total of 213 students from nine classes participated in the study during the fall 2015 semester, in courses ranging from freshman through senior level. Furthermore, I was able to collect not only quantitative data through the surveys but also qualitative data in the form of recorded video. This allowed me to cross-analyze "what students said" against "what students did."
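As an illustration only, the sketch below shows one way such a cross-analysis could be set up: joining survey responses ("what students said") with coded video outcomes ("what students did") on a participant identifier. All field names and values here are hypothetical and are not drawn from the study's data.

```python
import pandas as pd

# Hypothetical survey responses ("what students said").
survey = pd.DataFrame({
    "participant": ["p01", "p02", "p03"],
    "self_rated_ease": [5, 2, 4],   # e.g., 1 = very hard, 5 = very easy
})

# Hypothetical coded video outcomes ("what students did").
video = pd.DataFrame({
    "participant": ["p01", "p02", "p03"],
    "tasks_completed": [6, 3, 5],
})

# Join the two sources on the participant identifier so that
# self-reported ease can be compared with observed performance.
combined = survey.merge(video, on="participant")
print(combined)
print(combined[["self_rated_ease", "tasks_completed"]].corr())
```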

On the negative side, due to the time limit of library instruction, it was not possible to ask follow-up questions such as "Why did you do it that way?" or "Why did you use that particular resource?" Although I was able to figure some of this out later through data analysis, I still missed the students' own explanations. Additionally, the subject librarian needed training to lead the study smoothly, so we rehearsed several times. Analyzing the data also required coding, which was frankly tedious, as there were 160 valid video files to be analyzed manually. I created a codebook because three people (me, the subject librarian, and a Digital Scholarship Collections specialist) converted the qualitative data into quantitative data for analysis. The three of us watched all of the video files and met regularly to check data consistency and minimize the chance of coding errors.
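To give a concrete sense of that consistency check, here is a minimal sketch of how pairwise percent agreement between the three coders could be computed for a single coded variable. The coder labels, codes, and sample values are hypothetical; the study's actual codebook and categories are not reproduced here.

```python
from itertools import combinations

# Hypothetical codes assigned by three coders to the same videos for
# one variable (e.g., task outcome); real values would come from the
# shared codebook.
codes = {
    "coder_a": ["success", "fail", "partial", "success", "success"],
    "coder_b": ["success", "fail", "success", "success", "success"],
    "coder_c": ["success", "fail", "partial", "success", "fail"],
}

def percent_agreement(a, b):
    """Share of items on which two coders assigned the same code."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

for (name1, c1), (name2, c2) in combinations(codes.items(), 2):
    print(f"{name1} vs {name2}: {percent_agreement(c1, c2):.0%}")
```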

Since there were few existing studies on conducting UX testing in library instruction, it was challenging to design the study from scratch. However, the study generated rich data about students' performance, behavior, and attitudes, as well as their overall experience, and it helped the DUX Working Group (me, Andy Smith, and Lisa Calvert) identify interesting new projects to improve our website and ways to collaborate with our instructional librarians to provide a consistent user experience.

Reference

Castonguay, R. (2008). Assessing library instruction through web usability and vocabulary studies. Journal of Web Librarianship, 2(2-3), 429-455.
