Call: MediaEval 2014, Retrieving Diverse Social Images Task survey
Thierry Hamon
hamon at LIMSI.FR
Wed Jan 29 10:17:07 UTC 2014
Date: Tue, 28 Jan 2014 23:07:36 +0000
From: POPESCU Adrian 211643 <adrian.popescu at cea.fr>
Message-ID: <A3CBBA37AC11414DB0FF9BB6D0A5E4DC1F881D3A at EXDAG0-A3.intra.cea.fr>
X-url: http://www.multimediaeval.org/mediaeval2014/
- we apologize if you receive multiple copies of this message -
Call for Participation in the 2014 Retrieving Diverse Social Images Task survey
MediaEval 2014 Multimedia Benchmark
http://www.multimediaeval.org/mediaeval2014/
*About MediaEval*
MediaEval (http://www.multimediaeval.org) is a benchmarking initiative
dedicated to evaluating new algorithms for multimedia access and
retrieval. It emphasizes the 'multi' in multimedia and focuses on human
and social aspects of multimedia tasks. MediaEval attracts participants
who are interested in multimodal approaches to multimedia involving,
e.g., speech recognition, visual analysis, music and audio analysis,
user-contributed information (tags, comments, tweets), viewer affective
response, social networks, geo-coordinates and non-linear video access.
The MediaEval 2014 season kicks off with the MediaEval 2014 Survey. The
survey is used to collect your opinion about which tasks should be
offered by the MediaEval multimedia benchmark in 2014:
https://www.surveymonkey.com/s/mediaeval2014
The survey will take you about 5 minutes if you fill in only the main
questions. There are 13 main questions, multiple-choice questions that
collect your opinion on each of the 13 tasks that have been proposed for
MediaEval 2014. However, we encourage you to answer the additional
questions on the tasks that most interest you: your answers contribute
to decisions that are made about the design and implementation of the
tasks.
The MediaEval 2014 task list will be finalized in mid-February, and
sign-up for participation will open at the beginning of March. Please be
sure to fill in your email address on the first page of the survey if
you would like to receive an email when sign-up opens.
Our goal is to have the survey filled out by as many researchers as
possible in the next three weeks, so please pass the survey link along
to colleagues in the field of multimedia who might be interested.
Note that the deadline for results submissions this year will be early
to mid-September and the workshop will be held in October in Barcelona.
*About the 2014 Retrieving Diverse Social Images Task*
This task is a follow-up to last year's edition. It addresses the
problem of result diversification in the context of social photo
retrieval. This year's task is built around the same use case scenario
as in 2013 (a tourist use case). Given a ranked list of location photos
retrieved from Flickr using text and GPS queries, participating systems
are expected to refine the results by providing a set of images that are
at the same time relevant to the query and a diversified summary of it
(initial results are typically noisy and redundant). The refinement and
diversification process will be based on the social metadata associated
with the images and on their visual characteristics.
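To make the expected output concrete, here is a minimal, purely
illustrative sketch (in Python; none of the names or functions below are
part of the task materials) of a classical greedy re-ranking that trades
relevance off against redundancy, in the spirit of maximal marginal
relevance:

    # Minimal MMR-style greedy re-ranking: an illustrative sketch only,
    # not an official baseline. `relevance` maps image id -> retrieval
    # score; `similarity(a, b)` returns a visual/textual similarity in
    # [0, 1]; both are assumed to be supplied by the participant.
    def diversify(candidates, relevance, similarity, k=50, lam=0.7):
        selected, pool = [], list(candidates)
        while pool and len(selected) < k:
            def score(img):
                # Penalize images similar to something already selected.
                redundancy = max((similarity(img, s) for s in selected),
                                 default=0.0)
                return lam * relevance[img] - (1.0 - lam) * redundancy
            best = max(pool, key=score)
            selected.append(best)
            pool.remove(best)
        return selected

Any comparable strategy (for instance, clustering the candidates and
sampling across clusters) is equally admissible.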
In particular, this year's novelty lies in exploring the effect of user
annotation credibility on relevance and diversity. Credibility is an
automatic estimate of the quality (correctness) of a particular user's
tags (a specifically designed dataset will be used to train this
measure). Participants are encouraged to exploit the provided
credibility estimates in addition to classical retrieval techniques.
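Purely as an illustration of one way the credibility estimate could
enter such a pipeline (the combination rule and weight below are
assumptions, not part of the task definition), it could be blended into
the relevance term of a re-ranker such as the sketch above:

    # Hypothetical blend of the retrieval score with the uploader's
    # estimated tag credibility; alpha is an assumed mixing weight
    # that would need tuning on the development data.
    def credibility_weighted_relevance(retrieval_score, user_credibility,
                                       alpha=0.3):
        return (1.0 - alpha) * retrieval_score + alpha * user_credibility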
Moreover, another novelty this year is a more adequate diversification
scenario: annotations will cover up to 300 images per location, compared
to 150 last year.
Target communities involve both machine and human media analysis, e.g.,
image retrieval (text, vision, and multimedia communities), re-ranking,
relevance feedback, crowd-sourcing, and automatic geo-tagging. To solve
the challenge, participants are free to choose any approach, from
human-oriented and machine-based to hybrid human-machine, and to take
advantage of any additional data sources, e.g., the Internet. To
encourage the participation of groups from different research areas,
additional resources such as general-purpose visual descriptors and
textual models will be provided for the entire collection. Performance
will be evaluated by comparison with human-generated ground truth.
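Although this message does not spell out the metrics, diversification
benchmarks of this kind are commonly scored with precision at a cutoff
together with cluster recall against the human-clustered ground truth;
the following sketch (all names assumed, not taken from the task
guidelines) shows the idea:

    # Illustrative scoring against clustered ground truth.
    def precision_at_k(ranked, relevant, k=20):
        # Fraction of the top-k results judged relevant.
        return sum(1 for img in ranked[:k] if img in relevant) / float(k)

    def cluster_recall_at_k(ranked, cluster_of, num_clusters, k=20):
        # Fraction of distinct ground-truth clusters covered in the top k.
        covered = {cluster_of[img] for img in ranked[:k]
                   if img in cluster_of}
        return len(covered) / float(num_clusters)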
*Task organizers*
Bogdan Ionescu, LAPI, University Politehnica of Bucharest, Romania;
Adrian Popescu, CEA LIST, France;
Mihai Lupu, Vienna University of Technology, Austria;
Henning Müller, University of Applied Sciences Western Switzerland in
Sierre, Switzerland.
Thank you for your interest and input.
On behalf of the task organizers,
Bogdan Ionescu
University Politehnica of Bucharest