TREC 2014 Contextual Suggestion Track Guidelines

The Contextual Suggestion Track investigates search techniques for complex information needs that are highly dependent on context and user interests.

Track Organizers:

Track webpage: https://sites.google.com/site/treccontext/

Mailing list: To subscribe, send a mail message to listproc (at) nist.gov whose body consists of the single line: subscribe trec-context <FirstName> <LastName>

What's New

The Contextual Suggestion Track is running for the third time at TREC 2014. The major differences from last year are:

Background

According to a report from the Second Strategic Workshop on Information Retrieval in Lorne (published in the SIGIR Forum, June 2012): “Future information retrieval systems must anticipate user needs and respond with information appropriate to the current context without the user having to enter an explicit query... In a mobile context such a system might take the form of an app that recommends interesting places and activities based on the user’s location, personal preferences, past history, and environmental factors such as weather and time... In contrast to many traditional recommender systems, these systems must be open domain, ideally able to make suggestions and synthesize information from multiple sources...”

For example, imagine a group of information retrieval researchers with a November evening to spend in beautiful Gaithersburg, Maryland. A contextual suggestion system might recommend a beer at the Dogfish Head Alehouse (www.dogfishalehouse.com), dinner at the Flaming Pit (www.flamingpitrestaurant.com), or even a trip into Washington on the metro to see the National Mall (www.nps.gov/nacc). The goal of the Contextual Suggestion track is to provide a venue for the evaluation of such systems.

Task Summary

As input to the task, participants will be given a set of profiles, a set of example suggestions, and a set of contexts.  Details of all file formats are given in separate sections below.  Each profile corresponds to a single user, and indicates that user’s preference with respect to each example suggestion.  For example, one suggestion might be to have a beer at the Dogfish Head Alehouse, and the profile might include a negative preference with respect to this suggestion.  Each training suggestion includes a title, description, and an associated URL.  Each context corresponds to a particular geographical location (a city).  For example, the context might be Gaithersburg, Maryland.

For each profile/context pairing, participants should return a ranked list of up to 50 suggestions. Each suggestion should be appropriate to the profile (based on the user’s preferences) and the context (according to the location). The description of a suggestion may be tailored to reflect the preferences of that user. Profiles correspond to the stated preferences of real individuals, who will return to judge proposed suggestions. Users are recruited through crowdsourcing sites or are university undergraduate and graduate students. For the purposes of this experiment, you may assume users are of legal drinking age at the location specified by the context. You may also assume that the user has up to five hours available to follow a suggestion and has access to appropriate transportation (e.g., a car).

Timeline

Example Suggestions & Profiles

Profiles consist of two ratings for each of a series of attractions: one rating for the attraction's title and description, and one rating for the attraction's website. The profile tells systems which attractions a particular user likes and which ones the user does not. The ratings are given on a five-point scale based on how interested the user would be in going to the attraction if they were visiting the city it is in:

Profiles will be distributed as two files:

examples2014.csv is a CSV file with the columns id (attraction_id), title, description, and url, for example:

  id,title,description,url
  1,Fresh on Bloor,"Our vegan menu boasts an array of exotic starters, multi-layered salads, filling wraps, high protein burgers and our signature Fresh bowls.",http://www.freshrestaurants.ca
  ...

profiles2014.csv is a CSV file with the columns id (profile_id), attraction_id, description, and website, for example:

  id,attraction_id,description,website
  1,1,1,0
  1,2,3,4
  ...
  2,1,4,4
  ...
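To make the file formats concrete, here is a minimal sketch of loading both files with Python's standard csv module (Python and the file locations are our assumptions; the track does not prescribe a language). DictReader handles the quoted description fields, including embedded commas:

  import csv

  # Load the example attractions: attraction_id -> (title, description, url).
  attractions = {}
  with open("examples2014.csv", newline="", encoding="utf-8") as f:
      for row in csv.DictReader(f):
          attractions[int(row["id"])] = (row["title"], row["description"], row["url"])

  # Load the profile ratings:
  # (profile_id, attraction_id) -> (description rating, website rating).
  ratings = {}
  with open("profiles2014.csv", newline="", encoding="utf-8") as f:
      for row in csv.DictReader(f):
          key = (int(row["id"]), int(row["attraction_id"]))
          ratings[key] = (int(row["description"]), int(row["website"]))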

Contexts

Contexts will be distributed as a single file, contexts2014.csv, which contains the cities for which suggestions must be generated. The cities in the contexts file will be the primary cities of 50 randomly selected metropolitan areas (themselves not part of a larger metropolitan area), excluding the two seed cities used in the example suggestions*.

contexts2014.csv is a CSV file with the columns id (context_id), city, state, lat, and long, for example:

  id,city,state,lat,long
  1,New York City,NY,40.71427,-74.00597
  2,Chicago,IL,41.85003,-87.65005
  ...

The latitude and longitude are provided as a convenience and are intended to be synonymous with the city and state information.

* The list of metropolitan areas is gathered from http://en.wikipedia.org/wiki/List_of_metropolitan_areas_of_the_United_States
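The contexts file can be loaded the same way, and since a run must cover every profile/context pairing, iterating over the cross product is a natural skeleton. This sketch assumes the ratings dictionary from the profile-loading example above:

  import csv
  from itertools import product

  # Load the contexts: context_id -> (city, state, lat, long).
  contexts = {}
  with open("contexts2014.csv", newline="", encoding="utf-8") as f:
      for row in csv.DictReader(f):
          contexts[int(row["id"])] = (row["city"], row["state"],
                                      float(row["lat"]), float(row["long"]))

  # A run must return a ranked list for every profile/context pairing.
  profile_ids = sorted({pid for pid, _ in ratings})
  for profile_id, context_id in product(profile_ids, sorted(contexts)):
      pass  # generate and rank up to 50 suggestions for this pairing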

Suggestions

Suggestions are returned as a single CSV file containing a ranked list of up to 50 suggestions for each profile+context pair.

Your submitted CSV file should contain the columns groupid, runid, profile, context, rank, title, description, url, and docId, (in that order) for example:

  groupid,runid,profile,context,rank,title,description,url,docId
  group44,run44A,1,1,1,Deschutes Brewery Portland Public House,"Deschutes Brewery’s distinct Northwest brew pub in Portland’s Pearl District has become a convivial gathering spot of beer and food lovers since its 2008 opening.",http://www.deschutesbrewery.com,
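Because titles and descriptions routinely contain commas and quotes, writing the run with csv.writer rather than string concatenation keeps the file well formed. A minimal sketch, using the example row above (the output filename is our placeholder; the empty docId follows the ClueWeb/open web distinction described below):

  import csv

  # Column order: groupid, runid, profile, context, rank, title, description, url, docId.
  # An open web run leaves docId empty; a ClueWeb run supplies docIds instead of urls.
  rows = [
      ("group44", "run44A", 1, 1, 1,
       "Deschutes Brewery Portland Public House",
       "Deschutes Brewery's distinct Northwest brew pub in Portland's Pearl "
       "District has become a convivial gathering spot of beer and food lovers "
       "since its 2008 opening.",
       "http://www.deschutesbrewery.com", ""),
  ]

  with open("run44A.csv", "w", newline="", encoding="utf-8") as f:
      writer = csv.writer(f)
      writer.writerow(["groupid", "runid", "profile", "context", "rank",
                       "title", "description", "url", "docId"])
      writer.writerows(rows)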

The suggestions file contains:

Submissions will be identified as either ClueWeb runs or open web runs. Submitted files must consist of suggestions with either only ClueWeb12 docIds or only urls from the open web. Note that we cannot guarantee that judgements for ClueWeb docs will be based on documents in the ClueWeb12 corpus; docIds may simply be translated to urls and judged on the open web.

The Contextual Suggestion Track will use version 1.1 of ClueWeb12 (which fixes a duplicate document problem in v1.0). Further information regarding the collection can be found on the ClueWeb12 website. Since it can take several weeks to obtain the dataset, we urge you to start this process as soon as you can.

If you are unable to work with the full ClueWeb12 dataset, we will accept runs over the smaller ClueWeb "Category B" dataset (called ClueWeb12-B-13), but we strongly encourage you to use the full dataset if you can. Note that runs over the full dataset and runs over the B-13 dataset will be ranked together during evaluation.
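A simple pre-submission check along these lines can catch accidentally mixed runs. This is an illustrative sketch assuming rows parsed in the column order above; the clueweb12- docId prefix is our assumption based on the usual ClueWeb12 naming, not part of the track specification:

  def check_run_type(rows):
      """Verify a run uses only ClueWeb12 docIds or only open web urls."""
      has_docid = any(row[8].strip() for row in rows)  # docId column
      has_url = any(row[7].strip() for row in rows)    # url column
      if has_docid and has_url:
          raise ValueError("run mixes ClueWeb12 docIds and open web urls")
      if has_docid and not all(row[8].startswith("clueweb12-") for row in rows):
          raise ValueError("docIds do not look like ClueWeb12 identifiers")
      return "ClueWeb12" if has_docid else "open web"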

Each group may submit either one or two runs, which should use the same group id but different run ids.

Judging

Suggestions will be judged both by users and NIST assessors:

ClueWeb and open web submissions will be ranked separately.

Evaluation Metrics

Three measures will be used for the TREC 2014 Contextual Suggestion Track: precision at rank 5 (P@5, the main measure), mean reciprocal rank (MRR), and time-biased gain (TBG).
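For intuition, here is a toy sketch of the first two measures over a single ranked list, assuming binary relevance labels (the official evaluation derives relevance from the user and assessor judgements, and TBG is not reproduced here):

  def p_at_5(relevance):
      """Precision at rank 5: fraction of the top 5 results that are relevant."""
      return sum(relevance[:5]) / 5.0

  def reciprocal_rank(relevance):
      """1/rank of the first relevant result, or 0.0 if none is relevant."""
      for i, rel in enumerate(relevance, start=1):
          if rel:
              return 1.0 / i
      return 0.0

  # Example: relevant results at ranks 2 and 4.
  print(p_at_5([0, 1, 0, 1, 0]))        # 0.4
  print(reciprocal_rank([0, 1, 0, 1]))  # 0.5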