<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">
++++++ NEW DEADLINE ++++++
<div class="">Dear all, please note that the deadline for submissions to the sign language workshop @ LREC has been extended to February 18th (23:59 CET).<br class="">
<br class="">
<br class=">
<div class="">* We apologize if you receive multiple copies of this CfP *<br class="">
<br class="">
<br class="">
CALL FOR PAPERS<br class="">
9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives<br class="">
LREC Satellite Workshop in Marseille, France, on May 16, 2020<br class="">
<a href="https://www.sign-lang.uni-hamburg.de/lrec2020/cfp.html" class="">https://www.sign-lang.uni-hamburg.de/lrec2020/cfp.html</a><br class="">
<br class="">
<br class="">
Submissions are invited for a full day workshop on sign language resources, to take place following the 2020 LREC conference in Marseille, France, on May 16, 2020.<br class="">
<br class="">
In recent years, a number of large-scale sign language corpus projects have started. Some have already been completed, and many more are about to begin. At the same time, sign language technologies are maturing and promise to support time-consuming basic annotation. The workshop aims to bring together researchers who already work with multimodal sign language corpora (and those who see the need for empirical underpinnings of their current research) with those who develop sign
language technologies. It provides a platform to compare competing approaches.<br class="">
<br class="">
As sign language resource technologies to a large extent build on methodologies and tools used in the LR community in general, but add very specific perspectives (e.g. no established writing system, use of video as the data source) and work with a different
modality of human language, sign language research is able to feed back into the LR community at large. At the same time, as the raw data are in the visual domain, the field naturally bridges into Computer Vision. Thus, researchers apply Machine Learning methods
to both visual and linguistic data.<br class="">
<br class="">
We invite submissions of papers to be presented either on stage (20 minutes plus 10 minutes discussion) or as posters (with or without demonstrations) on the following topics:<br class="">
<br class="">
2020 HOT TOPICS<br class="">
<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• “In the Service of the Language Community”<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span><span class="Apple-tab-span" style="white-space: pre;"></span>• What is the value of sign language resources for the sign language community?<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• “Language and the Brain”<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span><span class="Apple-tab-span" style="white-space: pre;"></span>• Experimental methods using or producing sign language resources<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span><span class="Apple-tab-span" style="white-space: pre;"></span>• New (multimodal) types of datasets and resources<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span><span class="Apple-tab-span" style="white-space: pre;"></span>• Methods aiming at new multimodal experimentations<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span><span class="Apple-tab-span" style="white-space: pre;"></span>• Sign language processing applications<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• “Machine / Deep Learning”<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span><span class="Apple-tab-span" style="white-space: pre;"></span>• Machine Learning methods both in the visual domain and on linguistic annotation of sign language data<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span><span class="Apple-tab-span" style="white-space: pre;"></span>• How to cope with the size of the sign language resources that actually exist<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span><span class="Apple-tab-span" style="white-space: pre;"></span>• Human-computer interfaces to sign language data and sign language annotation profiting from Machine Learning<br class="">
<br class="">
GENERAL ISSUES ON SIGN LANGUAGE CORPORA AND TOOLS<br class="">
<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Avatar technology as a tool in sign language corpora and corpus data feeding into advances in avatar technology<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Experiences in building sign language corpora<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Elicitation methodology appropriate for corpus collection<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Proposals for standards for linguistic annotation or for metadata descriptions<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Experiences from linguistic research using corpora<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Use of (parallel) corpora and lexicons in translation studies and machine translation<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Language documentation and long-term accessibility for sign language data<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Annotation and Visualization Tools<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Linking corpora and lexicons and integrated presentation of corpus and dictionary contents<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• “Internet as a Corpus” for sign languages<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Sign language corpus mining<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Crowd and community sourcing for corpus work<br class="">
<span class="Apple-tab-span" style="white-space: pre;"></span>• Connecting sign language resources to language resources for spoken languages<br class="">
<br class="">
In the tradition of LREC, oral/signed presentations and poster presentations (with or without demonstrations) have equal status, and authors are encouraged to suggest the presentation format best suited to communicating their ideas. Papers (4, 6 or 8 pages) for
all accepted submissions to this workshop will be published in the workshop proceedings on the conference website – regardless of whether you give a poster or an oral/signed presentation.<br class="">
<br class="">
Please submit your paper through the LREC START system at <a href="https://www.softconf.com/lrec2020/SignLang2020/" class="">https://www.softconf.com/lrec2020/SignLang2020/</a> no later than Feb 18, 2020 (23:59 CET = GMT+1), indicating whether you prefer an
oral/signed presentation, a poster presentation, or a poster presentation with demo.<br class="">
<br class="">
ATTENTION: Please note that you are expected to submit a full paper, not an extended abstract as in previous years!<br class="">
<br class="">
IDENTIFY, DESCRIBE AND SHARE YOUR LRS!<br class="">
<br class="">
Describing your LRs in the LRE Map is now a normal part of the LREC submission procedure (introduced in 2010 and adopted by other conferences). To continue the efforts initiated at LREC 2014 on “Sharing LRs” (data, tools, web-services, etc.), authors
will have the opportunity, when submitting a paper, to upload LRs to a special LREC repository. This effort to share LRs, linked to the LRE Map for their description, may become a new “regular” feature of conferences in our field, thus contributing to the creation of
a common repository where everyone can deposit and share data.<br class="">
<br class="">
As scientific work requires accurate citation of referenced work so as to allow the community to understand the whole context and to replicate the experiments conducted by other researchers, LREC 2020 endorses the need to uniquely identify LRs through the
use of the International Standard Language Resource Number (ISLRN, <a href="http://www.islrn.org" class="">www.islrn.org</a>), a Persistent Unique Identifier to be assigned to each Language Resource. The assignment of ISLRNs to LRs cited in LREC papers will
be offered at submission time.<br class="">
<br class="">
<br class="">
<br class="">
For more information, please visit the workshop website at<br class="">
<br class="">
<a href="http://www.sign-lang.uni-hamburg.de/lrec2020/" class="">http://www.sign-lang.uni-hamburg.de/lrec2020/</a><br class="">
<br class="">
In case of questions, please contact lrec2020 (at) <a href="http://dgs-korpus.de" class="">dgs-korpus.de</a>.<br class="">
<br class="">
<br class="">
The organizing committee,<br class="">
<br class="">
Eleni Efthimiou, Athens GR<br class="">
Evita Fotinea, Athens GR<br class="">
Thomas Hanke, Hamburg DE<br class="">
Julie Hochgesang, Washington US<br class="">
Jette Kristoffersen, Copenhagen DK<br class="">
Johanna Mesch, Stockholm SE</div>
</div>
</body>
</html>