Semi-Supervised Learning and Domain Adaptation in Natural Language Processing
Anders Sogaard

Currently unavailable
40,05 €

Other offers sold and shipped by our sellers

Seller: ibs
Price and shipping: 40,05 € (free shipping)

Description


This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason for that is data sparsity, i.e., the limited amounts of data we have available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias. This book is intended to be both readable by first-year students and interesting to the expert audience. My intention was to introduce what is necessary to appreciate the major challenges we face in contemporary NLP related to data sparsity and sampling bias, without wasting too much time on details about supervised learning algorithms or particular NLP applications. I use text classification, part-of-speech tagging, and dependency parsing as running examples, and limit myself to a small set of cardinal learning algorithms. I have worried less about theoretical guarantees ("this algorithm never does too badly") than about useful rules of thumb ("in this case this algorithm may perform really well"). In NLP, data is so noisy, biased, and non-stationary that few theoretical guarantees can be established and we are typically left with our gut feelings and a catalogue of crazy ideas. I hope this book will provide its readers with both. Throughout the book we include snippets of Python code and empirical evaluations, when relevant.
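
As a rough illustration of the kind of technique the description refers to, here is a minimal self-training sketch in Python with scikit-learn: a classifier trained on a small labeled set pseudo-labels unlabeled text and retrains on its most confident predictions. The toy sentences, the 0.6 confidence threshold, and the number of rounds are illustrative assumptions, not examples taken from the book.

# Minimal self-training sketch: exploit unlabeled text by adding confident
# pseudo-labeled examples to the training set (illustrative assumptions only).
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_texts = ["great movie", "terrible plot", "loved it", "boring and slow"]
labels = np.array([1, 0, 1, 0])
unlabeled_texts = ["an excellent film", "a dull story",
                   "really enjoyable", "awful acting"]

vectorizer = TfidfVectorizer()
X_labeled = vectorizer.fit_transform(labeled_texts)
X_unlabeled = vectorizer.transform(unlabeled_texts)

clf = LogisticRegression()
for _ in range(3):                           # a few self-training rounds
    clf.fit(X_labeled, labels)
    if X_unlabeled.shape[0] == 0:
        break
    proba = clf.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) >= 0.6     # confidence threshold (assumed)
    if not confident.any():
        break
    # move confidently pseudo-labeled examples into the labeled set
    X_labeled = vstack([X_labeled, X_unlabeled[confident]])
    labels = np.concatenate([labels, proba[confident].argmax(axis=1)])
    X_unlabeled = X_unlabeled[~confident]

print(clf.predict(vectorizer.transform(["what a great story"])))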

Details

Series: Synthesis Lectures on Human Language Technologies
Year: 2013
Format: Paperback / softback
Pages: 103
Language: English
Dimensions: 235 x 187 mm
ISBN: 9781608459858