

LINGUIST List: Vol-33-172. Tue Jan 18 2022. ISSN: 1069 - 4875.

Subject: 33.172, Support: English; Computational Linguistics: PhD, King's College London

Moderator: Malgorzata E. Cavar (linguist at linguistlist.org)
Student Moderator: Billy Dickson
Managing Editor: Lauren Perkins
Team: Helen Aristar-Dry, Everett Green, Sarah Goldfinch, Nils Hjortnaes,
      Joshua Sims, Billy Dickson, Amalia Robinson, Matthew Fort
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Please support the LL editors and operation with a donation at:
           https://funddrive.linguistlist.org/donate/

Editor for this issue: Lauren Perkins <lauren at linguistlist.org>
================================================================


Date: Tue, 18 Jan 2022 19:26:49
From: Barbara McGillivray [barbara.mcgillivray at kcl.ac.uk]
Subject: English; Computational Linguistics: PhD, King's College London, United Kingdom

 Institution/Organization: King's College London 
Department: Informatics (Safe and Trusted AI Doctoral Training Programme) 
Web Address: https://safeandtrustedai.org/ 

Level: PhD 

Duties: Research
 
Specialty Areas: Computational Linguistics 
 
Required Language(s): English (eng)

Description:

The UKRI Centre for Doctoral Training in Safe and Trusted Artificial
Intelligence has approximately 12 fully funded doctoral studentships
available each year. Apply now for entry in September 2022. The next
application deadline is Tuesday, 15 February 2022.

Committed to providing an inclusive environment in which diverse students can
thrive, we particularly encourage applications from women, disabled
candidates, and Black, Asian and Minority Ethnic (BAME) candidates, who are
currently under-represented in the sector.

You can see all proposed projects here:
https://safeandtrustedai.org/apply-now/#projects

The following project will be supervised by Albert Meroño Peñuela (Department
of Informatics) and Barbara McGillivray (Department of Digital Humanities,
King's College London). 

Project Title:
Symbolic knowledge representations for time-sensitive offensive language
detection

Project description:
Language models learned from data have become prevalent in AI systems, but
they are susceptible to undesired behaviour that poses risks to society, such
as offensive language. The task of automatic detection of
offensive language has attracted significant attention in Natural Language
Processing (NLP) due to its high social impact. Policy makers and online
platforms can leverage computational methods of offensive language detection
to oppose online abuse at scale. State-of-the-art methods for automatic
offensive language detection, typically relying on ensembles of
transformer-based language models such as BERT, are trained on large-scale
annotated datasets.  
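
As a concrete illustration of this kind of baseline (not part of the project
description), the following minimal Python sketch uses the Hugging Face
transformers library with a publicly available fine-tuned classifier; the
model name is an assumption chosen for illustration, and any comparable
offensive-language model could be substituted.

    # Minimal sketch of transformer-based offensive language detection,
    # assuming the Hugging Face transformers library and an illustrative,
    # publicly available fine-tuned classifier (not prescribed by the project).
    from transformers import pipeline

    classifier = pipeline(
        "text-classification",
        model="cardiffnlp/twitter-roberta-base-offensive",  # assumed example model
    )

    for text in ["Have a great day!", "You are such an idiot."]:
        result = classifier(text)[0]
        print(f"{text!r} -> {result['label']} ({result['score']:.2f})")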

Detecting offensive language is made harder by the fact that the meaning of
words changes over time: conventional, neutral language can evolve into
offensive language on short time scales, following rapid shifts in social
dynamics or political events. The word karen, for example, originally a
personal name with neutral connotations, acquired an offensive meaning in
2020, turning into a “pejorative term for a white woman perceived as entitled
or demanding beyond the scope of what is normal”. Adapting to the way the
meaning of language changes
is a key characteristic of intelligent behaviour. Current AI systems developed
to process language computationally are not yet equipped to react to such
changes: the artificial neural networks they are built on do not capture the
full semantic range of words, which only becomes available if we access
additional knowledge (e.g. author, genre, origin, register) that is typically
contained in external, symbolic, and linguistic world knowledge bases.   

This project aims to develop new, time-sensitive computational methods for
offensive language detection that combine distributional information from
large textual datasets with symbolic knowledge representations. Specifically,
the project will build representations of word meaning from textual data and
from external knowledge bases containing relevant linguistic and world
knowledge, such as lexicons, thesauri, semantic networks, knowledge graphs
(e.g. Wikidata), and ontologies; embed this knowledge into distributional
word vectors derived from time-sensitive text data (diachronic corpora); and
explore various approaches for combining these representations.
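
To make the idea concrete for prospective applicants, here is a toy Python
sketch of one very simple way distributional and symbolic representations
could be combined (by concatenation); the corpora, lexicon features, and
names below are invented for illustration, and the project itself would
explore far more sophisticated approaches.

    # Toy sketch (not the project's method): train a word vector per time
    # slice of a diachronic corpus, then concatenate it with a feature vector
    # taken from an external lexical resource. All data are placeholders.
    import numpy as np
    from gensim.models import Word2Vec

    # Tiny stand-ins for time-sliced (diachronic) corpora.
    corpus_2015 = [["karen", "called", "her", "friend"],
                   ["a", "nice", "name", "for", "a", "friend"]]
    corpus_2020 = [["karen", "demanded", "the", "manager"],
                   ["entitled", "complaint", "to", "the", "manager"]]

    def train_slice(sentences):
        # One small embedding model per time slice.
        return Word2Vec(sentences, vector_size=16, min_count=1, seed=1, epochs=100)

    model_2015 = train_slice(corpus_2015)
    model_2020 = train_slice(corpus_2020)

    # Toy symbolic features, e.g. flags that could come from a lexicon or
    # knowledge graph (values are purely illustrative).
    lexicon_features = {"karen": np.array([1.0, 0.0])}  # [is_proper_name, is_slur]

    def combined_representation(word, slice_model):
        # Simple combination by concatenation of distributional and symbolic parts.
        return np.concatenate([slice_model.wv[word], lexicon_features[word]])

    vec_2015 = combined_representation("karen", model_2015)
    vec_2020 = combined_representation("karen", model_2020)
    print(vec_2015.shape, vec_2020.shape)  # (18,) (18,)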

Full description:
https://safeandtrustedai.org/project/symbolic-knowledge-representations-for-time-sensitive-offensive-language-detection/
 

Application Deadline: 15-Feb-2022 

Web Address for Applications: https://safeandtrustedai.org/apply-now/ 

Contact Information: 
	Barbara McGillivray
	barbara.mcgillivray at kcl.ac.uk  


------------------------------------------------------------------------------



----------------------------------------------------------
LINGUIST List: Vol-33-172	
----------------------------------------------------------





