
LINGUIST List: Vol-31-23. Thu Jan 02 2020. ISSN: 1069-4875.

Subject: 31.23, Calls: Comp Ling, Disc Analysis, Forensic Ling, Pragmatics, Text/Corpus Ling/France

Moderator: Malgorzata E. Cavar (linguist at linguistlist.org)
Student Moderator: Jeremy Coburn
Managing Editor: Becca Morris
Team: Helen Aristar-Dry, Everett Green, Sarah Robinson, Peace Han, Nils Hjortnaes, Yiwen Zhang, Julian Dietrich
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Please support the LL editors and operation with a donation at:
           https://funddrive.linguistlist.org/donate/

Editor for this issue: Everett Green <everett at linguistlist.org>
================================================================


Date: Thu, 02 Jan 2020 19:52:52
From: Ritesh Kumar [ritesh.lists at gmail.com]
Subject: Second Workshop on Trolling, Aggression and Cyberbullying

 
Full Title: Second Workshop on Trolling, Aggression and Cyberbullying 
Short Title: TRAC - 2020 

Date: 16-May-2020 - 16-May-2020
Location: Palais du Pharo, Marseille, France 
Contact Person: Ritesh Kumar
Meeting Email: comma.kmi at gmail.com
Web Site: https://sites.google.com/view/trac2/home 

Linguistic Field(s): Computational Linguistics; Discourse Analysis; Forensic Linguistics; Pragmatics; Text/Corpus Linguistics 

Call Deadline: 07-Feb-2020 

Meeting Description:

As the number of people and their interactions over the web have increased,
incidents of aggression and related activities such as trolling,
cyberbullying, flaming, and abusive, offensive and hate speech have also
increased manifold worldwide. The reach and extent of the Internet have given
such incidents unprecedented power and influence over the lives of billions of
people. It has been reported that such incidents of online abuse not only
create mental and psychological health issues for users, but also affect their
lives in many other ways, ranging from the deactivation of accounts to
instances of self-harm and suicide. NLP and related methods have shown great
promise in dealing with such abusive behavior through the early detection of
inflammatory content.

This workshop focuses on the applications of NLP and Machine Learning to
tackle these issues.


Call for Papers:

We invite original, unpublished research papers as well as demos around the
following themes and areas of research:

- Theories and models of aggression and conflict in language.
- Trolling, hate speech, cyberbullying and aggression on the web.
- Multilingualism and aggression.
- Resource development: corpora, annotation guidelines and best practices for
aggression, trolling and cyberbullying detection.
- Computational models and methods for aggression, cyberbullying, hate speech
and offensive language detection in text and speech.
- Automatic detection of physical threats on the web.
- Censorship, moderation and content governance on the web: ethical, legal and
technological issues and challenges.

Submission Info:
https://sites.google.com/view/trac2/submission

The LRE 2020 Map and the "Share your LRs!" initiative:

When submitting a paper from the START page, authors will be asked to provide
essential information about resources (in a broad sense, i.e. also
technologies, standards, evaluation kits, etc.) that have been used for the
work described in the paper or are a new result of their research.

Moreover, ELRA encourages all LREC authors to share the described LRs (data,
tools, services, etc.) to enable their reuse and the replicability of
experiments (including evaluation experiments).

Shared Tasks:

The workshop includes two shared tasks as detailed below:

Sub-task A: Aggression Identification Shared Task. The task is to develop a
classifier that makes a three-way classification among ‘Overtly Aggressive’,
‘Covertly Aggressive’ and ‘Non-aggressive’ text data. We are making available
a dataset of 5,000 aggression-annotated social media texts each in Bangla (in
both Roman and Bangla script), Hindi (in both Roman and Devanagari script) and
English for training and validation. We will release additional data for
testing your system. The train and test sets for the task are different from
the ones made available during TRAC - 1.

Sub-task B: Misogynistic Aggression Identification Shared Task. The task is to
develop a binary classifier that labels a text as ‘gendered’ or
‘non-gendered’. We will provide a dataset of 5,000 annotated social media
texts each in Bangla (in both Roman and Bangla script), Hindi (in both Roman
and Devanagari script) and English for training and validation. We will
release additional data for testing your system.

Please go to the workshop website to register for the shared tasks and get the
dataset.
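
For participants wondering what a minimal starting point might look like, the
sketch below is an illustrative baseline only, not part of the official task
materials: it assumes the released data has been converted to CSV files with
'text' and 'label' columns (the file names and column names are hypothetical),
and it trains a character n-gram TF-IDF model with logistic regression. The
same pipeline applies to the three-way labels of Sub-task A and the binary
labels of Sub-task B.

    # Illustrative baseline sketch for the TRAC-2 sub-tasks.
    # Assumptions: CSV files named as below with 'text' and 'label' columns;
    # this is not the official data format or evaluation setup.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.pipeline import make_pipeline

    train = pd.read_csv("trac2_eng_train.csv")  # hypothetical file name
    dev = pd.read_csv("trac2_eng_dev.csv")      # hypothetical file name

    # Character n-grams cope reasonably well with code-mixed and Romanized text.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(train["text"], train["label"])

    pred = model.predict(dev["text"])
    print("weighted F1:", f1_score(dev["label"], pred, average="weighted"))

Weighted F1 is used here only as an example metric; please consult the
workshop website for the official data format and evaluation measures.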




------------------------------------------------------------------------------

----------------------------------------------------------
LINGUIST List: Vol-31-23	
----------------------------------------------------------





