Call: Evaluation campaign, MediaEval Retrieving Diverse Social Images 2014 Task

Thierry Hamon hamon at LIMSI.FR
Sat Apr 19 08:25:43 UTC 2014

Date: Wed, 16 Apr 2014 17:42:02 +0000
From: POPESCU Adrian 211643 <adrian.popescu at>
Message-ID: <A3CBBA37AC11414DB0FF9BB6D0A5E4DC1F8DEFB5 at>

[Apologies for cross-postings]

Retrieving Diverse Social Images 2014 Task
@ the MediaEval 2014 Multimedia Benchmark Evaluation
Call for Participation
Regular registration deadline: 1 May 2014

We are happy to announce that registration is now open for the 2014
Retrieving Diverse Social Images Task of the MediaEval Multimedia
Benchmark Evaluation.

**About the Retrieving Diverse Social Images 2014 Task**

This task is a follow-up to the 2013 edition. It addresses the
problem of result diversification in social photo retrieval.

We use a tourist use case in which a person tries to find more
information about a place she is potentially visiting. The person has
only a vague idea about the location, e.g., its name. She uses the name
to learn more about the location from the Internet, for instance from a
Wikipedia page providing a photo, the geographical position of the place
and a basic description. Before deciding whether this location suits her
needs, the person is interested in getting a more complete visual
description of the place.

Given a ranked list of location photos retrieved from Flickr using text
information (up to 300 photos per query), participating systems are
expected to refine the results. The refinement consists of providing a
set of images that are at the same time relevant, i.e., depict the
target location partially or entirely, and that provide a diversified
summary, e.g., around 50 images depicting different views of the
location at different times of the day/year, under different weather
conditions, creative views, etc. Initial results are typically noisy
and redundant.

The refinement and diversification process will be based on the social
metadata associated with the images and/or on the visual characteristics
of the images.
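To make the relevance/diversity trade-off concrete, here is a minimal
sketch of one classical re-ranking strategy that could be applied to
this task (maximal marginal relevance style greedy selection). It is not
part of the task specification; the scores, similarity function, and
item identifiers below are hypothetical placeholders.

```python
def diversify(candidates, relevance, similarity, k=50, lam=0.7):
    """Greedily pick k items, trading relevance against redundancy.

    At each step the item maximizing
        lam * relevance - (1 - lam) * max similarity to already-selected items
    is added, so near-duplicates of selected images are penalized.
    """
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(c):
            redundancy = max((similarity(c, s) for s in selected), default=0.0)
            return lam * relevance[c] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected


# Toy illustration: items 0 and 1 show the same view, items 2 and 3 another.
rel = {0: 0.9, 1: 0.85, 2: 0.8, 3: 0.1}
group = {0: "a", 1: "a", 2: "b", 3: "b"}
sim = lambda x, y: 1.0 if group[x] == group[y] else 0.0
top3 = diversify([0, 1, 2, 3], rel, sim, k=3)
print(top3)  # picks 0, then 2 (a different view), then 1
```

In practice the similarity function would be computed from the social
metadata and/or visual descriptors mentioned above.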

In particular, this year we provide information about user annotation
credibility. Credibility is determined as an automatic estimation of the
quality (correctness) of a particular user's tags. Participants are
allowed to exploit this credibility estimation or to compute their own,
in addition to classical retrieval techniques. A specifically
designed dataset will be used to train this measure and will be provided
to the participants.
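As a rough illustration of what such a credibility measure could look
like, the sketch below scores a user by the fraction of their tags that
match a set of visually verified concepts. This is a hypothetical
simplification for illustration only, not the estimator used by the
organizers.

```python
def tag_credibility(user_tags, verified_concepts):
    """Share of a user's tags confirmed by a verified-concept set.

    Hypothetical proxy for tag correctness: 1.0 means every tag was
    confirmed, 0.0 means none were (or the user has no tags).
    """
    if not user_tags:
        return 0.0
    confirmed = sum(1 for tag in user_tags if tag in verified_concepts)
    return confirmed / len(user_tags)


# Toy example: two of three tags are confirmed concepts.
score = tag_credibility(["eiffel", "paris", "fun"], {"eiffel", "paris"})
print(score)
```

Such a per-user score could then weight that user's annotations during
retrieval and re-ranking.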

**Task schedule**

March-May: Registration and return usage agreements.
1 May: Release of development/training data.
2 June: Release of test data.
8 September: Participants submit their completed runs.
15 September: Evaluation of submitted runs. Participants write their
2-page working notes papers.
28 September: Participants submit their camera ready working notes papers.
16-17+18 October: MediaEval 2014 Workshop, Barcelona, Spain.

**Task organizers**

Bogdan Ionescu, LAPI, University Politehnica of Bucharest, Romania,
Adrian Popescu, CEA LIST, France,
Mihai Lupu, Vienna University of Technology, Austria,
Henning Müller, University of Applied Sciences Western Switzerland,
(HES-SO) in Sierre, Switzerland.

**Detailed information**

For more information about the task see:

For registration and general information about the 2014 MediaEval
benchmarking see:

We look forward to a very successful evaluation campaign!

On behalf of the task organizers,
Bogdan Ionescu
University Politehnica of Bucharest
Message distributed via the Langage Naturel list <LN at>
Information, subscription:
English version:
Archives:

The LN list is sponsored by ATALA (Association pour le Traitement
Automatique des Langues)
Information and membership:

ATALA declines all responsibility for the content of messages
distributed on the LN list

More information about the Ln mailing list