BERTje: A Dutch BERT Model

Authors: de Vries, Wietse
van Cranenburgh, Andreas
Bisazza, Arianna
Caselli, Tommaso
van Noord, Gertjan
Nissim, Malvina
Document type: Article
Publication date: 2019
Publisher: arXiv
Keywords: Computation and Language (cs.CL) / FOS: Computer and information sciences
Language: unknown
Permalink: https://search.fid-benelux.de/Record/base-28980378
Data source: BASE; original catalogue
Link(s): https://dx.doi.org/10.48550/arxiv.1912.09582

The transformer-based pre-trained language model BERT has helped to improve state-of-the-art performance on many natural language processing (NLP) tasks. Using the same architecture and parameters, we developed and evaluated a monolingual Dutch BERT model called BERTje. Compared to the multilingual BERT model, which includes Dutch but is only based on Wikipedia text, BERTje is based on a large and diverse dataset of 2.4 billion tokens. BERTje consistently outperforms the equally-sized multilingual BERT model on downstream NLP tasks (part-of-speech tagging, named-entity recognition, semantic role labeling, and sentiment analysis). Our pre-trained Dutch BERT model is made available at https://github.com/wietsedv/bertje.
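
The abstract notes that the pre-trained model is publicly released. As a minimal sketch of how such a released BERT model can be loaded with the Hugging Face transformers library, the snippet below obtains contextual embeddings for a Dutch sentence; the model identifier "GroNLP/bert-base-dutch-cased" is an assumption about where the checkpoint is hosted, so consult https://github.com/wietsedv/bertje for the official name and usage instructions.

    # Sketch only: loading a Dutch BERT checkpoint via the transformers library.
    # The Hub identifier below is assumed, not taken from the record above.
    from transformers import AutoTokenizer, AutoModel

    model_name = "GroNLP/bert-base-dutch-cased"  # assumed model identifier
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)

    # Encode a Dutch example sentence and inspect the contextual token embeddings.
    inputs = tokenizer("Dit is een voorbeeldzin.", return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)

For the downstream tasks listed in the abstract (tagging, named-entity recognition, semantic role labeling, sentiment analysis), such a base model would typically be fine-tuned with a task-specific classification head on labeled data.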