Challenges in Annotating and Parsing Spoken, Code-switched, Frisian-Dutch Data

Author(s): Braggaar, Anouck
van der Goot, Rob
Document type: contributionToPeriodical
Publication date: 2021
Publisher: Association for Computational Linguistics
Language: English
Permalink: https://search.fid-benelux.de/Record/base-27025483
Data source: BASE; original catalog
Link(s): https://pure.itu.dk/portal/en/publications/challenges-in-annotating-and-parsing-spoken-codeswitched-frisiandutch-data(57e10cc0-9c3a-4793-87e8-7fd6a9baea38).html

While high performance has been obtained for high-resource languages, performance on low-resource languages lags behind. In this paper we focus on parsing the low-resource language Frisian. We use a sample of code-switched, spontaneously spoken data, which proves to be a challenging setup. We propose to train a parser specifically tailored to the target domain by selecting instances from multiple treebanks. Specifically, we use Latent Dirichlet Allocation (LDA) with word and character n-grams. We use a deep biaffine parser initialized with mBERT. The best single source treebank (nl_alpino) resulted in an LAS of 54.7, whereas our data selection outperformed the single best transfer treebank and led to 55.6 LAS on the test data. Additional experiments consisted of removing diacritics from our Frisian data, creating more similar training data by cropping sentences, and running our best model using XLM-R. These experiments did not lead to better performance.
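
As a rough, hypothetical sketch of the kind of instance selection the abstract describes (LDA over word and character n-grams, used to rank source-treebank sentences by similarity to the target domain), assuming scikit-learn and SciPy are available; the topic count, n-gram ranges, distance metric, and selection size below are illustrative assumptions, not the paper's settings:

    # Hypothetical sketch (not the paper's code): rank source-treebank
    # sentences by LDA topic similarity to a target-domain sample.
    import numpy as np
    from scipy.spatial.distance import jensenshannon
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import FeatureUnion

    def select_instances(target_sents, source_sents, n_select=1000):
        # Word n-grams plus character n-grams, echoing the abstract's
        # "word and character n-grams" (the exact ranges are assumptions).
        vectorizer = FeatureUnion([
            ("word", CountVectorizer(analyzer="word", ngram_range=(1, 2))),
            ("char", CountVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
        ])
        counts = vectorizer.fit_transform(list(target_sents) + list(source_sents))

        # Fit LDA on all sentences and read off per-sentence topic mixtures.
        lda = LatentDirichletAllocation(n_components=20, random_state=0)
        topics = lda.fit_transform(counts)
        target_topics = topics[: len(target_sents)]
        source_topics = topics[len(target_sents):]

        # Use the mean topic mixture of the target sample as a domain profile,
        # then rank source sentences by Jensen-Shannon distance to it
        # (smaller distance means more target-like).
        profile = target_topics.mean(axis=0)
        dists = np.array([jensenshannon(profile, t) for t in source_topics])
        ranked = np.argsort(dists)
        return [source_sents[i] for i in ranked[:n_select]]

The selected sentences would then serve as training data for the parser; the deep biaffine parser initialized with mBERT is a separate component not sketched here. One of the follow-up experiments removes diacritics from the Frisian data; a minimal way to do this with only the Python standard library is:

    import unicodedata

    def strip_diacritics(text):
        # Decompose accented characters (NFD), drop combining marks, recompose.
        decomposed = unicodedata.normalize("NFD", text)
        return unicodedata.normalize(
            "NFC", "".join(ch for ch in decomposed if not unicodedata.combining(ch))
        )

    strip_diacritics("wêze")  # -> "weze"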