[Corpora-List] VL'14 Programme and Final Call for Participation: Workshop On Vision And Language 2014, Dublin, 23rd August 2014

Anya Belz A.S.Belz at brighton.ac.uk
Tue Aug 19 08:47:31 UTC 2014


Workshop On Vision And Language 2014 (VL'14), Dublin, 23rd August 2014

The 3rd Annual Meeting Of The EPSRC Network On Vision & Language and The 1st Technical Meeting of the European Network on Integrating Vision and Language

A Workshop of the 25th International Conference on Computational Linguistics (COLING 2014)


Final Call for Participation


KEYNOTE SPEAKER: ALEX JAIMES, YAHOO INC.


Workshop Programme:

*** 09.00 - 09.15 Introduction and Welcome to Workshop

*** 09.15 - 10.30 Oral Papers Session 1: Interaction

The Effect of Sensor Errors in Situated Human-Computer Dialogue
Niels Schütte, John Kelleher and Brian Mac Namee

Joint Navigation in Commander/Robot Teams: Dialog & Task Performance When Vision is Bandwidth-Limited
Douglas Summers-Stay, Taylor Cassidy and Clare Voss

TUHOI: Trento Universal Human Object Interaction Dataset
Dieu-Thu Le, Jasper Uijlings and Raffaella Bernardi

*** 10.30 - 11.00 Morning Coffee

*** 11.00 - 11.40 Invited Keynote Talk - Alex Jaimes, Yahoo! Inc.

*** 11.40 - 12.30 Oral Papers Session 2: Language Descriptors

Concept-oriented labelling of patent images based on Random Forests and proximity-driven generation of synthetic data
Dimitris Liparas, Anastasia Moumtzidou, Stefanos Vrochidis and Ioannis Kompatsiaris

Exploration of functional semantics of prepositions from corpora of descriptions of visual scenes
Simon Dobnik and John Kelleher

*** 12.30 - 13.30 Lunch

*** 13.30 - 14.20 Oral Papers Session 3: Visual Indexing

A Poodle or a Dog? Evaluating Automatic Image Annotation Using Human Descriptions at Different Levels of Granularity
Josiah Wang, Fei Yan, Ahmet Aker and Robert Gaizauskas

Key Event Detection in Video using ASR and Visual Data
Niraj Shrestha, Aparna N. Venkitasubramanian and Marie-Francine Moens

*** 14.20 - 15.00 Poster Boasters

*** 15.30 - 17.00 Long Poster Papers (Parallel session)

Twitter User Gender Inference Using Combined Analysis of Text and Image Processing
Shigeyuki Sakaki, Yasuhide Miura, Xiaojun Ma, Keigo Hattori and Tomoko Ohkuma

Semantic and geometric enrichment of 3D geo-spatial models with captioned photos and labelled illustrations
Chris Jones, Paul Rosin and Jonathan Slade

Weakly supervised construction of a repository of iconic images
Lydia Weiland, Wolfgang Effelsberg and Simone Paolo Ponzetto

Cross-media Cross-genre Information Ranking based on Multi-media Information Networks
Tongtao Zhang, Haibo Li, Hongzhao Huang, Heng Ji, Min-Hsuan Tsai, Shen-Fu Tsai and Thomas Huang

Speech-accompanying gestures in Russian: functions and verbal context
Yulia Nikolaeva

DALES: Automated Tool for Detection, Annotation, Labelling, and Segmentation of Multiple Objects in Multi-Camera Video Streams
Mohammad Bhat and Joanna Isabelle Olszewska

A Hybrid Segmentation of Web Pages for Vibro-Tactile Access on Touch-Screen Devices
Waseem Safi, Fabrice Maurel, Jean-Marc Routoure, Pierre Beust and Gaël Dias

*** 15.30 - 17.00 Short Poster Papers (Parallel session)

Expression Recognition by Using Facial and Vocal Expressions
Gholamreza Anbarjafari and Alvo Aabloo

Formulating Queries for Collecting Training Examples in Visual Concept Classification
Kevin McGuinness, Feiyan Hu, Rami Albatal and Alan Smeaton

Towards Succinct and Relevant Image Descriptions
Desmond Elliott

Coloring Objects: Adjective-Noun Visual Semantic Compositionality
Dat Tien Nguyen, Angeliki Lazaridou and Raffaella Bernardi

Multi-layered Image Representation for Image Interpretation
Marina Ivasic-Kos, Miran Pobar and Ivo Ipsic

The Last 10 Metres: Using Visual Analysis and Verbal Communication in Guiding Visually Impaired Smartphone Users to Entrances
Anja Belz and Anil Bharath

Keyphrase Extraction using Textual and Visual Features
Yaakov HaCohen-Kerner, Stefanos Vrochidis, Dimitris Liparas, Anastasia Moumtzidou and Ioannis Kompatsiaris

Towards automatic annotation of communicative gesturing
Kristiina Jokinen and Graham Wilcock


Background

Fragments of natural language, in the form of tags, captions, subtitles, surrounding text or audio, can aid the interpretation of image and video data by adding context or disambiguating visual appearance. In addition, labelled images are essential for training object and activity classifiers. Conversely, visual data can help resolve challenges in language processing such as word sense disambiguation. Studying language and vision together can also provide new insight into cognition and universal representations of knowledge and meaning. Finally, sign languages and gestures are forms of language that themselves require visual interpretation.

We welcome papers describing original research combining language and vision. To encourage the sharing of novel and emerging ideas we also welcome papers describing new datasets, grand challenges, open problems, benchmarks and work in progress as well as survey papers.

Topics of interest include (but are not limited to):

 * Image and video labelling and annotation
 * Computational modelling of human vision and language
 * Multimodal human-computer communication
 * Language-driven animation
 * Assistive methodologies
 * Image and video description
 * Image and video search and retrieval
 * Automatic text illustration
 * Facial animation from speech
 * Text-to-image generation


Contact

Email: vl-net at brighton.ac.uk
Website: https://vision.cs.bath.ac.uk/VL_2014/


Organisers

Anja Belz, University of Brighton
Kalina Bontcheva, University of Sheffield
Darren Cosker, University of Bath
Frank Keller, University of Edinburgh
Sien Moens, University of Leuven
Alan Smeaton, Dublin City University
William Smith, University of York


Programme Committee

Yannis Aloimonos, University of Maryland, US
Dimitrios Makris, Kingston University, UK
Desmond Elliott, University of Edinburgh, UK
Tamara Berg, Stony Brook, US
Claire Gardent, CNRS/LORIA, France
Lewis Griffin, UCL, UK
Brian Mac Namee, Dublin Institute of Technology, Ireland
Margaret Mitchell, University of Aberdeen, UK
Ray Mooney, University of Texas at Austin, US
Chris Town, University of Cambridge, UK
David Windridge, University of Surrey, UK
Lucia Specia, University of Sheffield, UK
John Kelleher, Dublin Institute of Technology, Ireland
Sergio Escalera, Autonomous University of Barcelona, Spain
Erkut Erdem, Hacettepe University, Turkey
Isabel Trancoso, INESC-ID, Portugal


The EPSRC Network On Vision And Language (V&L Net) - http://www.vl-net.org.uk/

The EPSRC Network on Vision and Language (V&L Net) is a forum for researchers from the fields of Computer Vision and Language Processing to meet, exchange ideas, expertise and technology, and form new partnerships. Our aim is to create a lasting interdisciplinary research community situated at the language-vision interface, jointly working towards solutions for some of today's toughest computational challenges, including image and video search, description of visual content and text-to-image generation.


The European Network on Integrating Vision and Language (iV&L Net) - http://www.cost.eu/domains_actions/ict/Actions/IC1307

The explosive growth of visual and textual data (both on the World Wide Web and held in private repositories by diverse institutions and companies) has created urgent requirements for the search, processing and management of digital content. Solutions for providing access to or mining such data depend on bridging the semantic gap between vision and language, which in turn calls for expertise from two hitherto largely unconnected fields: Computer Vision (CV) and Natural Language Processing (NLP). The central goal of iV&L Net is to build a European CV/NLP research community, targeting four focus themes: (i) Integrated Modelling of Vision and Language for CV and NLP Tasks; (ii) Applications of Integrated Models; (iii) Automatic Generation of Image & Video Descriptions; and (iv) Semantic Image & Video Search. iV&L Net will organise annual conferences, technical meetings, partner visits, data/task benchmarking, and industry/end-user liaison. Europe has many of the world's leading CV and NLP researchers. By tapping into this expertise, and bringing to bear the collaboration, networking and community building enabled by COST Actions, iV&L Net will have substantial impact, in terms of advances in both theory/methodology and real-world technologies.

