<html dir="ltr">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
<style id="owaParaStyle" type="text/css">
<!--
p
{margin-top:0;
margin-bottom:0}
p
{margin-top:0;
margin-bottom:0}
p
{margin-top:0;
margin-bottom:0}
-->
P {margin-top:0;margin-bottom:0;}</style>
</head>
<body ocsi="0" fpstyle="1">
<div style="direction: ltr;font-family: Tahoma;color: #000000;font-size: 10pt;">Workshop On Vision And Language 2014 (VL'14), Dublin, 23rd August 2014
<br>
<div style="font-family: Times New Roman; color: #000000; font-size: 16px">
<div>
<div>
<div style="direction:ltr; font-family:Tahoma; color:#000000; font-size:10pt">
<div style="font-family:Times New Roman; color:#000000; font-size:16px">
<div>
<div style="direction:ltr; font-family:Tahoma; color:#000000; font-size:10pt">
<div style="font-family:Times New Roman; color:#000000; font-size:16px">
<div>
<div style="direction:ltr; font-family:Tahoma; color:#000000; font-size:10pt"><br>
The 3rd Annual Meeting Of The EPSRC Network On Vision & Language and The 1st Technical Meeting of the European Network on Integrating Vision and Language<br>
<br>
A Workshop of the 25th International Conference on Computational Linguistics (COLING 2014)
<br>
<br>
<br>
*** Funding available for full V&L EPSRC (UK) Network members and iV&L Net European COST Action participants who present a paper ***<br>
<br>
<br>
Call for Papers

Fragments of natural language, in the form of tags, captions, subtitles, surrounding text or audio, can aid the interpretation of image and video data by adding context or disambiguating visual appearance. In addition, labelled images are essential for training object or activity classifiers. Conversely, visual data can help resolve challenges in language processing such as word sense disambiguation. Studying language and vision together can also provide new insight into cognition and universal representations of knowledge and meaning. Sign languages and gestures, meanwhile, are forms of language that themselves require visual interpretation.

We welcome papers describing original research combining language and vision. To encourage the sharing of novel and emerging ideas, we also welcome papers describing new datasets, grand challenges, open problems, benchmarks and work in progress, as well as survey papers.

Topics of interest include (but are not limited to):

* Image and video labelling and annotation
* Computational modelling of human vision and language
* Multimodal human-computer communication
* Language-driven animation
* Assistive methodologies
* Image and video description
* Image and video search and retrieval
* Automatic text illustration
* Facial animation from speech
* Text-to-image generation

Paper Submission

Submissions should be between 4 and 8 pages long, where 8 pages is the maximum including references (shorter papers are suitable for, e.g., reporting work in progress). Submissions should adhere to the COLING 2014 format (style files available from http://www.coling-2014.org/instructions-for-authors.php) and should be submitted in PDF format.

Please make your submission via the START workshop submission pages: https://www.softconf.com/coling2014/WS-3/

Important Dates

Paper Submission Deadline: 13th June 2014
Author Notification Deadline: 27th June 2014
Camera-Ready Paper Deadline: 11th July 2014
Date of Workshop: 23rd August 2014

Organisers

Anja Belz, University of Brighton
Kalina Bontcheva, University of Sheffield
Darren Cosker, University of Bath
Frank Keller, University of Edinburgh
Sien Moens, University of Leuven
Alan Smeaton, Dublin City University
William Smith, University of York

Programme Committee

Yannis Aloimonos, University of Maryland, US
Dimitrios Makris, Kingston University, UK
Desmond Elliott, University of Edinburgh, UK
Tamara Berg, Stony Brook University, US
Claire Gardent, LORIA, France
Lewis Griffin, UCL, UK
Brian Mac Namee, Dublin Institute of Technology, Ireland
Margaret Mitchell, University of Aberdeen, UK
Ray Mooney, University of Texas at Austin, US
Chris Town, University of Cambridge, UK
David Windridge, University of Surrey, UK
Lucia Specia, University of Sheffield, UK
John Kelleher, Dublin Institute of Technology, Ireland
Sergio Escalera, Autonomous University of Barcelona, Spain
Erkut Erdem, Hacettepe University, Turkey
Isabel Trancoso, INESC-ID, Portugal

The EPSRC Network on Vision and Language (V&L Net) - http://www.vl-net.org.uk/

The EPSRC Network on Vision and Language (V&L Net) is a forum for researchers from the fields of Computer Vision and Language Processing to meet, exchange ideas, expertise and technology, and form new partnerships. Our aim is to create a lasting interdisciplinary research community situated at the language-vision interface, jointly working towards solutions for some of today's toughest computational challenges, including image and video search, description of visual content and text-to-image generation.

The European Network on Integrating Vision and Language (iV&L Net) - http://www.cost.eu/domains_actions/ict/Actions/IC1307

The explosive growth of visual and textual data (both on the World Wide Web and held in private repositories by diverse institutions and companies) has led to urgent requirements in terms of search, processing and management of digital content. Solutions for providing access to or mining such data depend on bridging the semantic gap between vision and language, which in turn calls for expertise from two fields that have so far remained largely unconnected: Computer Vision (CV) and Natural Language Processing (NLP). The central goal of iV&L Net is to build a European CV/NLP research community, targeting four focus themes: (i) Integrated Modelling of Vision and Language for CV and NLP Tasks; (ii) Applications of Integrated Models; (iii) Automatic Generation of Image & Video Descriptions; and (iv) Semantic Image & Video Search. iV&L Net will organise annual conferences, technical meetings, partner visits, data/task benchmarking, and industry/end-user liaison. Europe has many of the world's leading CV and NLP researchers. By tapping into this expertise, and bringing to bear the collaboration, networking and community building enabled by COST Actions, iV&L Net will have substantial impact, in terms of advances in both theory/methodology and real-world technologies.