TREC 2013 Contextual Suggestion Track Guidelines

The Contextual Suggestion Track investigates search techniques for complex information needs that are highly dependent on context and user interests.

Track Organizers:

Track webpage: https://sites.google.com/site/treccontext/

Mailing list: Send a mail message to listproc (at) nist.gov such that the body consists of the line subscribe trec-context <FirstName> <LastName>

What's New

TREC 2013 will be the second year the Contextual Suggestion Track is running. Here are the major differences from TREC 2012:

Background

According to a report from the Second Strategic Workshop on Information Retrieval in Lorne (published in the SIGIR Forum, June 2012): “Future information retrieval systems must anticipate user needs and respond with information appropriate to the current context without the user having to enter an explicit query... In a mobile context such a system might take the form of an app that recommends interesting places and activities based on the user’s location, personal preferences, past history, and environmental factors such as weather and time...  In contrast to many traditional recommender systems, these systems must be open domain, ideally able to make suggestions and synthesize information from multiple sources...”

For example, imagine a group of information retrieval researchers with a November evening to spend in beautiful Gaithersburg, Maryland.  A contextual suggestion system might recommend a beer at the Dogfish Head Alehouse (www.dogfishalehouse.com), dinner at the Flaming Pit (www.flamingpitrestaurant.com), or even a trip into Washington on the metro to see the National Mall (www.nps.gov/nacc).  The goal of the Contextual Suggestion Track is to provide a venue for the evaluation of such systems.

Task Summary

As input to the task, participants will be given a set of profiles, a set of example suggestions, and a set of contexts.  Details of all file formats are given in separate sections below.  Each profile corresponds to a single user, and indicates that user’s preference with respect to each example suggestion.  For example, one suggestion might be to have a beer at the Dogfish Head Alehouse, and the profile might include a negative preference with respect to this suggestion.  Each training suggestion includes a title, description, and an associated URL.  Each context corresponds to a particular geographical location (a city).  For example, the context might be Gaithersburg, Maryland.

For each profile/context pairing, participants should return a ranked list of up to 50 ranked suggestions. Each suggestion should be appropriate to the profile (based on the user’s preferences) and the context (according to the location). The description of the suggestion may be tailored to reflect the preferences of that user. Profiles correspond to the stated preferences of real individuals, who will return to judge proposed suggestions. Users are recruited through crowdsourcing sites or are university undergraduate and graduate students. For the purposes of this experiment, you can assume users are of legal drinking age at the location specified by the context. You may assume that the user has up to five hours available to follow a suggestion and has access to appropriate transportation (e.g., a car).

Timeline

Example Suggestions & Profiles

Profiles consist of two ratings for a series of attractions: one rating for the attraction's title and description, and one rating for the attraction's website. The profile gives systems an indication of which attractions a particular user likes and which ones the user does not like. The ratings are given on a five-point scale based on how interested the user would be in going to the attraction if they were visiting the city it was in:

Profiles will be distributed as two files:

examples.json is a JSON file which contains an object whose keys are attraction ids and whose values are objects containing the title, url, and description for that attraction, for example:

    {
      "1": {
        "url": "http://www.freshrestaurants.ca",
        "description": "Our vegan menu boasts an array of exotic starters, multi-layered salads, filling wraps, high protein burgers and our signature Fresh bowls.",
        "title": "Fresh on Bloor"
      },
      ...
    }

Alternatively you may use examples.csv, a CSV file with the columns id (attraction_id), title, description, and url, for example:

  id,title,description,url
  1,Fresh on Bloor,"Our vegan menu boasts an array of exotic starters, multi-layered salads, filling wraps, high protein burgers and our signature Fresh bowls.",http://www.freshrestaurants.ca
  ...
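For illustration only, here is a minimal Python sketch (an assumption about usage, not part of the official guidelines) showing one way to read examples.csv with the standard csv module; the file name and printed attraction come from the sample above:

  # Minimal sketch: read examples.csv into a dict keyed by attraction id.
  import csv

  examples = {}
  with open("examples.csv", newline="") as f:
      for row in csv.DictReader(f):
          examples[row["id"]] = {
              "title": row["title"],
              "description": row["description"],
              "url": row["url"],
          }

  print(examples["1"]["title"])  # "Fresh on Bloor"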

profiles.json is a JSON file which contains an object whose keys are profile_ids and whose values are arrays of objects, each containing an attraction_id and the description and website ratings given by that user for that attraction, for example:

  {
    "1": [
      {"attraction_id": 1, "website": 1, "description": 0},
      ...
    ],
    "2": [
      {"attraction_id": 1, "website": 4, "description": 4},
      ...
    ],
    ...
  }

Alternatively you may use profiles.csv, a CSV file with the columns id (profile_id), attraction_id, description, and website, for example:

  id,attraction_id,description,website
  1,1,1,0
  1,2,3,4
  ...
  2,1,4,4
  ...

A sample of examples.(json|csv) and profiles.(json|csv) can be downloaded below.
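As a further illustration, the sketch below (again an assumption about usage, not a requirement) reads profiles.json and collects the attractions one user rated positively; the threshold of 3 on the five-point scale is only a placeholder choice for the example:

  # Minimal sketch: read profiles.json and list the attractions profile "1"
  # rated 3 or higher on both the description and the website (the threshold
  # is an arbitrary choice for illustration only).
  import json

  with open("profiles.json") as f:
      profiles = json.load(f)  # profile_id (str) -> list of rating objects

  liked = [r["attraction_id"] for r in profiles["1"]
           if r["description"] >= 3 and r["website"] >= 3]
  print(liked)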

Contexts

Contexts will be distributed as one file, contexts.(json|csv), which contains the cities for which suggestions need to be generated. The cities in the contexts file will be the primary cities of 50 randomly selected metropolitan areas (which are not part of a larger metropolitan area) excluding Philadelphia, PA.* [UPDATED: May 13, 2013]

contexts.json is a JSON file which contains an object where the keys are context_ids and the values are objects which contain the latitude, longitude, city name, and state the city is in, for example:

  {
    "1": {
      "lat": "40.71427", "city": "New York City", "state": "NY", "long": "-74.00597"
    },
    "2": {
      "lat": "41.85003", "city": "Chicago", "state": "IL", "long": "-87.65005"
    },
    ...
  }

contexts.csv is a CSV file with the columns id (context_id), city, state, lat, and long, for example:

  id,city,state,lat,long
  1,New York City,NY,40.71427,-74.00597
  2,Chicago,IL,41.85003,-87.65005
  ...

The latitude and longitude are provided as a convenience and are intended to be synonymous with the city and state information.

A sample of both contexts.json and contexts.csv can be downloaded below.
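Purely as an illustration, contexts.csv can be read in the same way as the other CSV files; the sketch below is an assumption about usage, not part of the guidelines:

  # Minimal sketch: read contexts.csv into a dict keyed by context id.
  import csv

  contexts = {}
  with open("contexts.csv", newline="") as f:
      for row in csv.DictReader(f):
          contexts[row["id"]] = {
              "city": row["city"],
              "state": row["state"],
              "lat": float(row["lat"]),
              "long": float(row["long"]),
          }

  print(contexts["1"])  # {'city': 'New York City', 'state': 'NY', ...}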

* The list of metropolitan areas is gathered from http://en.wikipedia.org/wiki/List_of_metropolitan_areas_of_the_United_States.

Suggestions

Suggestions are returned as a single JSON or CSV file containing a ranked list of up to 50 suggestions for each profile/context pair.

If you choose to submit a JSON file it should contain an object that contains the groupid, runid, and an array of suggestions. The array contains objects which contain the profile, context, rank, title, description, url, and docId, for example:

  {
    "groupid": "group44",
    "runid": "run44A",
    "suggestions": [
      {
        "profile": 1,
        "context": 1,
        "rank": 1,
        "title": "Deschutes Brewery Portland Public House",
        "description": "Deschutes Brewery’s distinct Northwest brew pub in Portland’s Pearl District has become a convivial gathering spot of beer and food lovers since its 2008 opening.",
        "url": "http://www.deschutesbrewery.com",
        "docId": ""
      },
      ...
    ]
  }

If you choose to submit a CSV file it should contain the columns groupid, runid, profile, context, rank, title, description, url, and docId, for example:

  groupid,runid,profile,context,rank,title,description,url,docId
  group44,run44A,1,1,1,Deschutes Brewery Portland Public House,"Deschutes Brewery’s distinct Northwest brew pub in Portland’s Pearl District has become a convivial gathering spot of beer and food lovers since its 2008 opening.",http://www.deschutesbrewery.com,
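To make the submission format concrete, here is a minimal sketch of writing a run file in the JSON form shown above; the group id, run id, and the single suggestion are placeholders taken from the examples, not real data:

  # Minimal sketch: write a run file in the JSON submission format.
  import json

  run = {
      "groupid": "group44",
      "runid": "run44A",
      "suggestions": [
          {
              "profile": 1,
              "context": 1,
              "rank": 1,
              "title": "Deschutes Brewery Portland Public House",
              "description": "Northwest brew pub in Portland's Pearl District.",
              "url": "http://www.deschutesbrewery.com",
              "docId": "",  # empty for open web runs; a ClueWeb12 docId otherwise
          },
          # ... one entry per profile/context pair and rank, up to 50 per pair
      ],
  }

  with open("run44A.json", "w") as f:
      json.dump(run, f, indent=2)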

The suggestions file contains:

Submissions will be identified as either ClueWeb runs or open web runs. Submitted files must consist of suggestions with either only ClueWeb12 docIds or only urls from the open web. Note that we cannot guarantee that judgements for ClueWeb docs will be based on documents in the ClueWeb12 corpus; docIds may simply be translated to urls and judged on the open web.

The Contextual Suggestion Track will use version 1.1 of ClueWeb12 (which fixes a duplicate document problem in v1.0). Further information regarding the collection can be found on the ClueWeb12 website. Since it can take several weeks to obtain the dataset, we urge you to start this process as soon as you can.

If you are unable to work with the full ClueWeb12 dataset, we will accept runs over the smaller ClueWeb "Category B" dataset (called ClueWeb12-B-13), but we strongly encourage you to use the full dataset if you can. Additionally, runs over the full dataset and the B-13 dataset will be ranked together during evaluation.

Each group may submit either one or two runs, which should use the same group id but different run ids.

Judging

Suggestions will be judged by both users and NIST assessors:

ClueWeb and open web submissions will be ranked separately.

Evaluation Measures

P@5 and MRR were the two measures used as part of the TREC 2012 Contextual Suggestion Track. Other measures may be developed as part of the track this year. In addition to baseline runs, the organizers will be submitting ClueWeb12 runs whose judgements can be used for the purposes of re-ranking after the track is complete.
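For reference, the sketch below illustrates how P@5 and MRR could be computed from binary relevance labels for each profile/context pair; the labels shown are hypothetical, and the official evaluation is performed by the organizers:

  # Rough illustration of P@5 and MRR over hypothetical binary relevance
  # labels, one list per profile/context pair, ordered by submitted rank.
  def precision_at_5(rels):
      return sum(rels[:5]) / 5.0

  def reciprocal_rank(rels):
      for i, rel in enumerate(rels, start=1):
          if rel:
              return 1.0 / i
      return 0.0

  judged = {
      ("1", "1"): [1, 0, 1, 1, 0],  # hypothetical judgements
      ("1", "2"): [0, 0, 1, 0, 0],
  }

  p_at_5 = sum(precision_at_5(r) for r in judged.values()) / len(judged)
  mrr = sum(reciprocal_rank(r) for r in judged.values()) / len(judged)
  print(p_at_5, mrr)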

Sample files

These are sample files based on last year's data.