The Contextual Suggestion Track investigates search techniques for complex information needs that are highly dependent on context and user interests.
Track webpage: https://sites.google.com/site/treccontext/
Mailing list: Send a mail message to listproc (at) nist.gov such that the body consists of the line subscribe trec-context <FirstName> <LastName>
TREC 2013 will be the second year the Contextual Suggestion Track is running. Here are the major differences from TREC 2012:
According to a report from the Second Strategic Workshop on Information Retrieval in Lorne (published in the SIGIR Forum, June 2012): “Future information retrieval systems must anticipate user needs and respond with information appropriate to the current context without the user having to enter an explicit query... In a mobile context such a system might take the form of an app that recommends interesting places and activities based on the user’s location, personal preferences, past history, and environmental factors such as weather and time... In contrast to many traditional recommender systems, these systems must be open domain, ideally able to make suggestions and synthesize information from multiple sources...”
For example, imagine a group of information retrieval researchers with a November evening to spend in beautiful Gaithersburg, Maryland. A contextual suggestion system might recommend a beer at the Dogfish Head Alehouse (www.dogfishalehouse.com), dinner at the Flaming Pit (www.flamingpitrestaurant.com), or even a trip into Washington on the metro to see the National Mall (www.nps.gov/nacc). The goal of the Contextual Suggestion track is to provide a venue for the evaluation of such systems.
As input to the task, participants will be given a set of profiles, a set of example suggestions, and a set of contexts. Details of all file formats are given in separate sections below. Each profile corresponds to a single user, and indicates that user’s preference with respect to each example suggestion. For example, one suggestion might be to have a beer at the Dogfish Head Alehouse, and the profile might include a negative preference with respect to this suggestion. Each training suggestion includes a title, description, and an associated URL. Each context corresponds to a particular geographical location (a city). For example, the context might be Gaithersburg, Maryland.
Profiles consist of two ratings for a series of attractions: one rating for the attraction's title and description, and one rating for the attraction's website. The profile tells systems which attractions a particular user likes and which ones the user does not like. The ratings are given on a five-point scale based on how interested the user would be in going to the attraction if they were visiting the city it was in:
Profiles will be distributed as two files:
examples.json is a JSON file which contains an object where the keys are attraction ids and the values are objects which contain the title, url, and description for that attraction, for example:
Alternatively you may use examples.csv, a CSV file with the columns id (attraction_id), title, description, and url, for example:

id,title,description,url
1,Fresh on Bloor,"Our vegan menu boasts an array of exotic starters, multi-layered salads, filling wraps, high protein burgers and our signature Fresh bowls.",http://www.freshrestaurants.ca
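The examples file can be loaded with a few lines of Python. This is a minimal sketch assuming the structure described above (attraction id mapped to an object with title, description, and url keys); the inline sample is adapted from the example and is not an official data file.

```python
import json

# Sample data mirroring the examples.json structure described above;
# the real file should be downloaded from the track website.
sample = '''{
  "1": {
    "title": "Fresh on Bloor",
    "description": "Our vegan menu boasts an array of exotic starters.",
    "url": "http://www.freshrestaurants.ca"
  }
}'''

examples = json.loads(sample)
for attraction_id, attraction in examples.items():
    print(attraction_id, attraction["title"], attraction["url"])
```

The CSV variant carries the same fields and can be read analogously with Python's csv module.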
profiles.json is a JSON file which contains an object where the keys are profile_ids and the values are objects which contain the attraction_id along with the description and website ratings that the user gave to that attraction, for example:
A sample of examples.(json|csv) and profiles.(json|csv) can be downloaded below.
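As an illustration, a profile file with the shape described above could be processed as follows. This is only a sketch: the key names ("attraction_id", "description_rating", "website_rating") and the list-of-ratings layout are assumptions for illustration; the downloadable sample files are the authoritative reference for the schema.

```python
import json

# Hypothetical profile data: each profile_id maps to a list of rating
# objects. Key names here are assumed, not the official schema.
sample = '''{
  "100": [
    {"attraction_id": "1", "description_rating": 3, "website_rating": 1}
  ]
}'''

profiles = json.loads(sample)
for profile_id, ratings in profiles.items():
    # Treat ratings of 3 or above on the five-point scale as "liked".
    liked = [r["attraction_id"] for r in ratings if r["description_rating"] >= 3]
    print(profile_id, "likes:", liked)
```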
Contexts will be distributed as one file, contexts.(json|csv), which lists the cities for which suggestions need to be generated. The cities in the contexts file will be the primary cities of 50 randomly selected metropolitan areas (which are not part of a larger metropolitan area), excluding Philadelphia, PA.* [UPDATED: May 13, 2013]
contexts.json is a JSON file which contains an object where the keys are context_ids and the values are objects which contain the latitude, longitude, city name, and state the city is in, for example:
The latitude and longitude are provided as a convenience and are intended to be synonymous with the city and state information.
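Reading the contexts file follows the same pattern as the other JSON files. In this sketch the key names ("city", "state", "lat", "long") are assumptions based on the description above and should be checked against the sample file.

```python
import json

# Hypothetical contexts.json snippet; field names are assumed.
sample = '''{
  "1": {"city": "Gaithersburg", "state": "MD", "lat": 39.14, "long": -77.20}
}'''

contexts = json.loads(sample)
for context_id, ctx in contexts.items():
    print(context_id, ctx["city"] + ",", ctx["state"])
```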
A sample of both contexts.json and contexts.csv can be downloaded below.
* The list of metropolitan areas is gathered from http://en.wikipedia.org/wiki/List_of_metropolitan_areas_of_the_United_States.
Suggestion files should contain a list of up to 50 ranked suggestions for each profile+context pair, returned as a single JSON or CSV file.
If you choose to submit a JSON file it should contain an object that contains the groupid, runid, and an array of suggestions. The array contains objects which contain the profile_id, context_id, rank, title, description, url and docId, for example:
"title": "Deschutes Brewery Portland Public House",
"description": "Deschutes Brewery’s distinct Northwest brew pub in Portland’s Pearl District has become a convivial gathering spot for beer and food lovers since its 2008 opening.",
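A JSON run file with the fields listed above could be assembled as follows. This is a hedged sketch: the top-level key names are taken from this description, the URL is a placeholder, and the file should be validated against the track's official format before submission.

```python
import json

# Assemble a run with groupid, runid, and a suggestions array, as
# described above. Values here are illustrative placeholders.
run = {
    "groupid": "group44",
    "runid": "run44A",
    "suggestions": [
        {
            "profile_id": 1,
            "context_id": 1,
            "rank": 1,
            "title": "Deschutes Brewery Portland Public House",
            "description": "Northwest brew pub in Portland's Pearl District.",
            "url": "http://www.example.com",  # placeholder, not a real suggestion URL
            "docId": None,  # or a ClueWeb12 docId for ClueWeb runs
        }
    ],
}

with open("run44A.json", "w") as f:
    json.dump(run, f, indent=2)
```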
If you choose to submit a CSV file it should contain the columns groupid, runid, profile, context, rank, title, description, url, and docId, for example:
group44,run44A,1,1,1,Deschutes Brewery Portland Public House,"Deschutes Brewery’s distinct Northwest brew pub in Portland’s Pearl District has become a convivial gathering spot for beer and food lovers since its 2008 opening.",
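Because descriptions can contain commas and line breaks, the CSV format requires proper quoting. A sketch using Python's csv module, which handles this quoting automatically (the row values are illustrative placeholders):

```python
import csv

# One illustrative suggestion row; the docId column is left empty for
# an open web run, and the URL is a placeholder.
rows = [
    {
        "groupid": "group44", "runid": "run44A", "profile": 1, "context": 1,
        "rank": 1, "title": "Deschutes Brewery Portland Public House",
        "description": "Brew pub in Portland's Pearl District, opened in 2008.",
        "url": "http://www.example.com",
        "docId": "",
    }
]

with open("run44A.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[
        "groupid", "runid", "profile", "context", "rank",
        "title", "description", "url", "docId",
    ])
    writer.writeheader()
    writer.writerows(rows)
```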
The suggestions file contains:
Submissions will be identified as either ClueWeb runs or open web runs. Submitted files must consist of suggestions with either only ClueWeb12 docIds or only URLs from the open web. Note that we cannot guarantee that judgements for ClueWeb docs will be based on documents in the ClueWeb12 corpus; docIds may simply be translated to URLs and judged on the open web.
The Contextual Suggestion Track will use version 1.1 of ClueWeb12 (which fixes a duplicate document problem in v1.0). Further information regarding the collection can be found on the ClueWeb12 website. Since it can take several weeks to obtain the dataset, we urge you to start this process as soon as you can.
If you are unable to work with the full ClueWeb12 dataset, we will accept runs over the smaller ClueWeb "Category B" dataset (called ClueWeb12-B-13), but we strongly encourage you to use the full dataset if you can. Note that runs over the full dataset and the B-13 dataset will be ranked together during evaluation.
Suggestions will be judged both by users and NIST assessors:
ClueWeb and open web submissions will be ranked separately.
P@5 and MRR were the two measures used in the TREC 2012 Contextual Suggestion Track. Other measures may be developed as part of the track this year. In addition to baseline runs, the organizers will be submitting ClueWeb12 runs whose judgements can be used for re-ranking experiments after the track is complete.
These are sample files based on last year's data.