6.1073, Calls: Spoken Lang Generation and Multimodal Info

The Linguist List linguist at tam2000.tamu.edu
Thu Aug 10 16:01:04 UTC 1995


---------------------------------------------------------------------------
LINGUIST List:  Vol-6-1073. Thu Aug 10 1995. ISSN: 1068-4875. Lines:  165
 
Subject: 6.1073, Calls: Spoken Lang Generation and Multimodal Info
 
Moderators: Anthony Rodrigues Aristar: Texas A&M U. <aristar at tam2000.tamu.edu>
            Helen Dry: Eastern Michigan U. <hdry at emunix.emich.edu>
 
Associate Editor:  Ljuba Veselinova <lveselin at emunix.emich.edu>
Assistant Editors: Ron Reck <rreck at emunix.emich.edu>
                   Ann Dizdar <dizdar at tam2000.tamu.edu>
                   Annemarie Valdez <avaldez at emunix.emich.edu>
 
Software development: John H. Remmers <remmers at emunix.emich.edu>
 
Editor for this issue: dseely at emunix.emich.edu (T. Daniel Seely)
 
---------------------------------Directory-----------------------------------
1)
Date:  Thu, 10 Aug 1995 11:30:58 +0200
From:  bateman at darmstadt.gmd.de ("Dr. John Bateman")
Subject:  CFP: Workshop on Spoken Language Generation and Multimodal
               Information Systems
 
---------------------------------Messages------------------------------------
1)
Date:  Thu, 10 Aug 1995 11:30:58 +0200
From:  bateman at darmstadt.gmd.de ("Dr. John Bateman")
Subject:  CFP: Workshop on Spoken Language Generation and Multimodal
               Information Systems
 
 
  2ND `SPEAK!' WORKSHOP: SPEECH GENERATION IN MULTIMODAL INFORMATION
                  SYSTEMS AND PRACTICAL APPLICATIONS
 
                        2nd-3rd November 1995
 
                     GMD/IPSI, Darmstadt, Germany
 
 
  *******************   CALL FOR CONTRIBUTIONS  *******************
 
 
This workshop aims to bring together researchers, developers, and
potential producers and marketers of multimodal information systems in
order to consider the role of *spoken language synthesis* in such
systems. Not only do we need to be able to produce spoken language
appropriately---including effective control of intonation---but we
also need to know in which practical contexts spoken language is most
beneficial. This requires a dialogue between those providing spoken
natural language technology and those considering the practical use of
multimodal information systems.
 
The workshop will consist of paper presentations and practical
demonstrations, as well as a roundtable discussion on the best
strategies for pursuing the practical application of spoken language
synthesis technology in information systems.
 
Suggested Topic Areas/Themes include, but are not limited to:
 
* functional control of intonation in synthesized speech
 
* use of speech in intelligent interfaces for information systems
 
* integration of speech into automatic query systems
 
* cooperative integration of speech with text generation for
  information systems

* evaluation strategies for information systems involving speech
  synthesis

* applications for information systems with spoken language output
  capabilities

* practical requirements for information systems with spoken language
  capabilities.
 
Potential participants are invited to submit short statements of
interest indicating whether they would be interested in presenting a
paper, offering a system demonstration, participating in the round
table discussion, or simply attending. Statements of interest and
extended abstracts (max. 7 pages) should be sent by 1st October by
e-mail to `bateman at gmd.de' or by post to: John A. Bateman, GMD/IPSI,
Dolivostr. 15, D-64293 Darmstadt, Germany. Extended abstracts will be
made available at the workshop.
 
During the workshop, current results and demonstrations of the EU
Copernicus Programme project `Speak!' will also be presented (see
attachment).
 
---------------------------------------
 
Project Information:
 
                         The SPEAK! Project:
         Speech Generation in Multimodal Information Systems
 
"SPEAK!" is  a European Union  funded project  (COPERNICUS '93 Project
No. 10393)   whose aim is to embed   spoken natural language synthesis
technology   with sophisticated user interfaces    in order to improve
access to information systems.
 
Multimedia technology and knowledge-based text processing enhance the
development of new types of information systems which offer the user
not only references or full-text documents but also access to images,
graphics, audio, and video documents. This diversification of the
information offered has to be supported by easy-to-use multimodal user
interfaces capable of presenting each type of information item in a
way that can be perceived and processed effectively by the user.
 
Users can easily process the graphical medium of information
presentation and the linguistic medium simultaneously. The separation
of modes is also quite appropriate for the different functionalities
of the main graphical interaction and the supportive meta-dialogue
carried out linguistically. We believe, therefore, that a substantial
improvement in both functionality and user acceptance can be achieved
by the integration of spoken language capabilities.
 
However, text-to-speech devices commercially available today produce
speech that sounds unnatural and is hard to listen to. High-quality
synthesized speech that sounds acceptable to humans demands
appropriate intonation patterns. The effective control of intonation
requires synthesizing from meanings rather than word sequences, and
requires an understanding of the functions of intonation. In the
domain of sophisticated human-machine interfaces, we can make use of
the increasing tendency to design such interfaces as independent
agents that themselves engage in an interactive dialogue (both
graphical and linguistic) with their users. Such agents need to
maintain models of their discourses, their users, and their
communicative goals.
 
The SPEAK! project, which was launched recently as a cooperation
between the Speech Research Technology Laboratory of the TECHNICAL
UNIVERSITY OF BUDAPEST and the TECHNICAL UNIVERSITY OF DARMSTADT (in
cooperation with GMD-IPSI), aims at developing such an interface for a
multimedia retrieval system. At IPSI, the departments KOMET (natural
language generation) and MIND (information retrieval dialogues)
contribute to this project.
 
The project's aim is to construct a proof-of-concept prototype of a
multimodal information system combining graphical and spoken language
output in a variety of languages. The work involves four supporting
goals: first, to advance the state of the art in speech synthesis,
spoken text generation, and graphical interface design; second, to
provide enabling technology for higher-functionality information
systems that are more appropriate for general public use; third, to
significantly improve the public and industrial acceptance of speech
synthesis in general and of the Hungarian text-to-speech technology
elaborated within the project in particular; and, fourth, to act as a
focal point for speech work in Hungary.
 
 
Contact points:
   GMD/IPSI, Darmstadt: John Bateman
                             e-mail: bateman at gmd.de
                             fax:    +49/6151-869-818
                             tel:    +49/6151-869-826
   TU-Budapest:         Géza Németh
                             e-mail: NEMETH at ttt-202.ttt.bme.hu
                             fax:    +36/1-463-3107
                             tel:    +36/1-463-2401
------------------------------------------------------------------------
LINGUIST List: Vol-6-1073.


