HCIR 2013

Full papers

These papers will appear in the ACM Digital Library (DL). Once they are available, we will link to their DL pages here.

Are Some Tweets More Interesting Than Others? #HardQuestion
Omar Alonso, Catherine Marshall and Marc Najork
Twitter has evolved into a significant communication nexus, coupling personal and highly contextual utterances with local news, memes, celebrity gossip, headlines, and other microblogging subgenres. If we take Twitter as a large and varied dynamic collection, how can we predict which tweets will be interesting to a broad audience in advance of lagging social indicators of interest such as retweets? The telegraphic form of tweets, coupled with the subjective notion of interestingness, makes it difficult for human judges to agree on which tweets are indeed interesting. 
In this paper, we address two questions: Can we develop a reliable strategy that results in high-quality labels for a collection of tweets, and can we use this labeled collection to predict a tweet’s interestingness? To answer the first question, we performed a series of studies using crowdsourcing to reach a diverse set of workers who served as a proxy for an audience with variable interests and perspectives. This method allowed us to explore different labeling strategies, including varying the judges, the labels they applied, the datasets, and other aspects of the task. To address the second question, we used crowdsourcing to assemble a set of tweets rated as interesting or not; we scored these tweets using textual and contextual features; and we used these scores as inputs to a binary classifier. We were able to achieve moderate agreement (kappa = 0.52) between the best classifier and the human assessments, a figure which reflects the challenges of the judgment task.
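As a hypothetical illustration (not the authors' code or data), Cohen's kappa between a classifier's labels and human judgments can be computed as follows; the two label sequences here are invented toy data, with "I" for interesting and "N" for not interesting:

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # observed agreement: fraction of items where the raters match
    observed = sum(x == y for x, y in zip(a, b)) / n
    # expected agreement under independence of the two label distributions
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# toy example: classifier predictions vs. one human judge
pred  = ["I", "I", "N", "N", "I", "N", "N", "N"]
human = ["I", "N", "N", "N", "I", "N", "I", "N"]
print(round(cohen_kappa(pred, human), 2))  # → 0.47
```

Kappa discounts the agreement two raters would reach by chance alone, which is why it is a stricter figure than raw percent agreement for a subjective task like interestingness.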
Search user interfaces (SUIs) are usually designed and optimized for generic users or for a particular user group whose members are similar, for example, in their information needs, search goals, or cognitive skills. These properties influence the decisions made in the user interface (UI) design process. However, especially for young and elderly users, design requirements change relatively quickly as users’ abilities change, so a flexible way to modify the SUI is needed. To overcome this issue, we suggest developing an evolving search user interface (ESUI), which adapts the UI dynamically based on the derived capabilities of the user interacting with it. In this paper, we present a first prototypical implementation of this idea: a search user interface that takes the special requirements of children into account and is customizable to their abilities. We offer adaptation of the menu type and structure, search result visualization, surrogate structure, font, audio, theme, and other SUI properties. This SUI was evaluated in a user study with 27 children and 17 adults. We present the results of the study and discuss implications for further research towards an ESUI.
This paper presents a usability-tested interface design that enables time-constrained analysts to organize their search results in a lightweight manner during and immediately following their search sessions. The research literature suggests that users want to lay out search results spatially in overlapping “piles,” but a pilot study with a flexible canvas tool revealed that this design requires too much manipulation and has other drawbacks. This finding led to a novel hybrid design that combines structure with a flexible visual layout and which allows the analysts to quickly triage documents first and organize them later, or interweave these two processes. Two usability studies comparing the new design against a legacy tool found overwhelming preference for the new tool for saving and organizing search results. Design guidelines derived from this work could improve sensemaking interfaces for other search applications.
In this paper, we examined why information searchers perceive search tasks as difficult and what factors contribute to that perception. We also examined whether the reasons for task difficulty vary across different task types. Data were collected through a controlled laboratory experiment in which tasks were designed following a classification scheme. A total of 32 undergraduate students participated; each was given 4 search tasks and asked in questionnaires, both before and after each task, to rate its difficulty and explain the rating. We developed a coding scheme based on the difficulty reasons users gave, covering various aspects of the task, the user, and user-task interaction, and categorized the reasons accordingly. Results showed that searchers shared some common reasons for task difficulty across tasks, but most difficulty reasons varied by task. Within each task there were also common reasons, though again with some variation. Task difficulty was found to be negatively correlated with users’ topic knowledge, previous experience, and topic interest. Our findings help in understanding search task difficulty and its relationships with task type, knowledge background, and other factors, and can also inform experimental task design.
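The reported negative correlation can be sketched with a small self-contained example; the ratings below are fabricated for demonstration only and are not the study's data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    assert len(xs) == len(ys) and len(xs) > 1
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# fabricated per-participant ratings on a 1-5 scale:
# higher topic knowledge tends to pair with lower perceived difficulty
knowledge  = [5, 4, 4, 3, 2, 2, 1]
difficulty = [1, 2, 2, 3, 4, 3, 5]
r = pearson(knowledge, difficulty)
print(round(r, 2))  # → -0.96
```

A correlation near -1 would indicate that perceived difficulty drops almost linearly as topic knowledge rises; the study reports the direction of the relationship, not this toy magnitude.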

Slow Search: Information Retrieval without Time Constraints
Jaime Teevan, Kevyn Collins-Thompson, Ryen W. White, Susan T. Dumais, and Yubin Kim
Significant time and effort has been devoted to reducing the time between query receipt and search engine response, and for good reason. Research suggests that even slightly higher retrieval latency by Web search engines can lead to dramatic decreases in users’ perceptions of result quality and engagement with the search results. While users have come to expect rapid responses from search engines, recent advances in our understanding of how people find information suggest that there are scenarios where a search engine could take significantly longer than a fraction of a second to return relevant content. This raises the important question: What would search look like if search engines were not constrained by existing expectations for speed? In this paper, we explore slow search, a class of search where traditional speed requirements are relaxed in favor of a high quality search experience. Via large-scale log analysis and user surveys, we examine how individuals value time when searching. We confirm that speed is important, but also show that there are many search situations where result quality is more important. This highlights intriguing opportunities for search systems to support new search experiences with high quality result content that takes time to identify. Slow search has the potential to change the search experience as we know it.