LINGUIST List: Vol-32-66. Tue Jan 05 2021. ISSN: 1069-4875.
Subject: 32.66, Books: Embeddings in Natural Language Processing: Pilehvar, Camacho-Collados
Moderator: Malgorzata E. Cavar (linguist at linguistlist.org)
Student Moderator: Jeremy Coburn
Managing Editor: Becca Morris
Team: Helen Aristar-Dry, Everett Green, Sarah Robinson, Lauren Perkins, Nils Hjortnaes, Yiwen Zhang, Joshua Sims
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org
Homepage: http://linguistlist.org
Please support the LL editors and operation with a donation at:
https://funddrive.linguistlist.org/donate/
Editor for this issue: Jeremy Coburn <jecoburn at linguistlist.org>
================================================================
Date: Tue, 05 Jan 2021 20:35:49
From: Brent Beckley [beckley at morganclaypool.com]
Subject: Embeddings in Natural Language Processing: Pilehvar, Camacho-Collados
Title: Embeddings in Natural Language Processing
Subtitle: Theory and Advances in Vector Representations of Meaning
Series Title: Human Language Technologies
Publication Year: 2020
Publisher: Morgan & Claypool Publishers
http://www.morganclaypool.com
Book URL: https://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1600
Author: Mohammad Taher Pilehvar
Author: Jose Camacho-Collados
Electronic: ISBN: 9781636390222 Pages: 176 Price: U.S. $ 51.96
Hardback: ISBN: 9781636390239 Pages: 176 Price: U.S. $ 84.95
Paperback: ISBN: 9781636390215 Pages: 176 Price: U.S. $ 64.95
Abstract:
Embeddings have undoubtedly been one of the most influential research areas in
Natural Language Processing (NLP). Encoding information into a low-dimensional
vector representation that integrates easily into modern machine learning
models has played a central role in the development of NLP. Embedding
techniques initially focused on words, but attention soon shifted to other
forms: from graph structures, such as knowledge bases, to other types of
textual content, such as sentences and documents.
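To make the idea concrete, here is a minimal sketch (not taken from the book) of what a word embedding is: each word maps to a low-dimensional vector, and semantic relatedness is measured by cosine similarity. The vectors below are illustrative toy values, not output from any trained model such as Word2Vec or GloVe.

```python
import math

# Toy 4-dimensional word embeddings (illustrative values only,
# not from any trained model).
embeddings = {
    "king":  [0.8, 0.6, 0.1, 0.2],
    "queen": [0.7, 0.7, 0.1, 0.3],
    "apple": [0.1, 0.2, 0.9, 0.8],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors -- the standard way to
    compare embeddings, since direction matters more than magnitude."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words end up closer together in the vector space:
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```

Real embedding models learn such vectors from large corpora (typically with hundreds of dimensions), but the lookup-and-compare pattern shown here is the same.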
This book provides a high-level synthesis of the main embedding techniques in
NLP in the broad sense. It starts by explaining conventional word vector space
models and word embeddings (e.g., Word2Vec and GloVe) and then moves on to
other types of embeddings, such as word sense, sentence, document, and graph
embeddings. The book also provides an overview of recent developments in
contextualized representations (e.g., ELMo and BERT) and explains their
potential in NLP.
Throughout the book, the reader will find both the essential information
needed to understand a given topic from scratch and a broad overview of the
most successful techniques developed in the literature.
Linguistic Field(s): Computational Linguistics
Written In: English (eng)
See this book announcement on our website:
http://linguistlist.org/pubs/books/get-book.cfm?BookID=150493
------------------------------------------------------------------------------