<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
[Apologies for duplication]<br>
<br>
<div align="center"> 2nd Call for Participation <br>
SemEval-2015 Task 2 <br>
Semantic Textual Similarity<br>
</div>
<br>
<b>NEW</b>: training data for the pilot on interpretable STS,
including chunk-level alignments, is now available. The task
definition has been updated.<br>
<br>
Semantic textual similarity (STS) has received an increasing amount
of attention in recent years, culminating in the SemEval/*SEM tasks
organized in 2012, 2013 and 2014, which brought together more than
60 participating teams. Please check <a
class="moz-txt-link-freetext"
href="http://ixa2.si.ehu.es/stswiki/">http://ixa2.si.ehu.es/stswiki/</a>
for details on previous tasks.<br>
<br>
Given two sentences of text, s1 and s2, the systems participating in
this task should compute how similar s1 and s2 are, returning a
similarity score and an optional confidence score. Both the
annotations and the system outputs use a scale from 0 (no relation)
to 5 (semantic equivalence) to indicate the similarity between the
two sentences. Participating systems will be evaluated with the
metric traditionally employed in STS evaluation, also used in
previous SemEval/*SEM STS tasks: the mean Pearson correlation
between the system output and the gold-standard annotations.<br>
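As a rough illustration of the evaluation metric, the sketch below
computes the Pearson correlation between a set of gold-standard
annotations and hypothetical system scores on the 0&ndash;5 scale. The
scores shown are made up for illustration; they are not taken from any
STS dataset.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance of the mean-centered scores...
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # ...normalized by the product of the standard deviations.
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Gold-standard similarity labels (0 = no relation, 5 = equivalence)
gold = [5.0, 3.2, 0.0, 4.1, 1.5]
# Hypothetical system output on the same sentence pairs
system = [4.8, 2.9, 0.5, 3.7, 2.0]

print(round(pearson(gold, system), 4))
```

In the shared task the official score is the mean of such
correlations across the evaluation datasets.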
<br>
In 2015 we will continue to evaluate STS systems on the following
subtasks:<br>
<br>
- <b>NEW</b> for 2015, we have devised a <b>pilot subtask on
interpretable STS</b>. With this pilot we want to explore whether
STS systems are able to explain WHY they consider two sentences
related or unrelated, adding an explanatory layer to the similarity
score. As a first step in this direction, participating systems will
need to <b>align the segments</b> in one sentence of the pair to the
segments in the other sentence, describing what kind of
<b>relation</b> holds between each pair of segments. This pilot
subtask will provide its own training data.<br>
<br>
- <b>English STS</b>, with sentence pairs extracted from
encyclopedic content and newswire.<br>
<br>
- <b>Spanish STS</b>, with sentence pairs extracted from
encyclopedic content and newswire, and text snippet pairs obtained
from news headlines.<br>
<br>
<br>
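To make the interpretable-STS pilot above more concrete, here is a
hypothetical sketch of what a chunk-level alignment could look like.
The sentence pair, the relation labels, and the record layout are
illustrative assumptions only; the official annotation format is
specified in the task's training data and guidelines.

```python
s1 = "The bird is bathing in the sink ."
s2 = "Birdie is washing itself in the water basin ."

# Hypothetical alignment: each record pairs a segment of s1 with a
# segment of s2, plus a relation label and a 0-5 similarity score.
# Label names here are invented for illustration.
alignment = [
    ("The bird",    "Birdie",             "EQUIVALENT", 5),
    ("is bathing",  "is washing itself",  "EQUIVALENT", 5),
    ("in the sink", "in the water basin", "SIMILAR",    4),
]

def coverage(sentence, segments):
    """Check that every token of the sentence appears in some segment."""
    aligned = " ".join(segments).split()
    return all(tok in aligned or tok == "." for tok in sentence.split())

# Both sentences should be fully covered by the aligned segments.
assert coverage(s1, [a[0] for a in alignment])
assert coverage(s2, [a[1] for a in alignment])
```

A system output along these lines would let the similarity score be
traced back to the segment pairs that motivate it.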
Please join the mailing list for updates at <a
class="moz-txt-link-freetext"
href="http://groups.google.com/group/STS-semeval">http://groups.google.com/group/STS-semeval</a>.
Check out the task's webpage at <a class="moz-txt-link-freetext"
href="http://alt.qcri.org/semeval2015/task2/">http://alt.qcri.org/semeval2015/task2/</a>
for more details.<br>
<br>
Important dates:<br>
<br>
Evaluation start: December 5, 2014 [updated due to clash with
NAACL-2015 deadline]<br>
Evaluation end: December 20, 2014 [updated due to clash with
NAACL-2015 deadline]<br>
Paper submission due: January 30, 2015<br>
Paper reviews due: February 28, 2015<br>
Camera ready due: March 30, 2015<br>
SemEval workshop: Summer 2015<br>
<br>
Organizers:<br>
* Coordination: Eneko Agirre, Carmen Banea, Mona Diab, Montse
Maritxalar<br>
* STS English: Eneko Agirre, Daniel Cer, Mona Diab, Aitor
Gonzalez-Agirre, Weiwei Guo, and German Rigau.<br>
* STS Spanish: Carmen Banea, Claire Cardie, Rada Mihalcea, and
Janyce Wiebe.<br>
* STS pilot on interpretability and segment alignment: Eneko Agirre,
Aitor Gonzalez-Agirre, Iñigo Lopez-Gazpio, Montse Maritxalar and
German Rigau.<br>
<br>
References:<br>
<br>
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab,
Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau,
Janyce Wiebe. SemEval-2014 Task 10: Multilingual Semantic Textual
Similarity. Proceedings of SemEval 2014. [<a
href="http://anthology.aclweb.org/S/S14/S14-2010.pdf">pdf</a>]<br>
<br>
Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei
Guo. *SEM 2013 Shared Task: Semantic Textual Similarity. Proceedings
of *SEM 2013. [<a
href="http://aclweb.org/anthology//S/S13/S13-1004.pdf">pdf</a>]<br>
<br>
Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre.
SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity.
Proceedings of SemEval 2012. [<a
href="http://aclweb.org/anthology-new/S/S12/S12-1051.pdf">pdf</a>]<br>
<br>
</body>
</html>