Biased polar questions in Sign Language of the Netherlands - Analysis scripts

Authors: Lyke Esselink
M. Oomen
Floris Roelofsen
Document type: Software
Publication date: 2023
Keywords: Technology / Language / Communication and Culture / Sign language / Inter-rater reliability / IRR / Cohen's Kappa / R markdown
Language: unknown
Permalink: https://search.fid-benelux.de/Record/base-27208767
Data source: BASE; original catalogue
Powered By: BASE
Link(s): https://doi.org/10.21942/uva.24080724.v3

These R-markdown files can be used to calculate the inter-rater reliability of ELAN annotations. Two approaches are available here: a frame-based approach and an event-based approach. The event-based approach is not yet discussed in published work; the frame-based approach is discussed in: Oomen, Marloes, Tobias de Ronde, Lyke Esselink & Floris Roelofsen. In press. Towards a procedure for annotating non-manual markers in sign languages and multi-modal communication. In Proceedings of NELS 53. The input is a .csv file with annotations extracted from ELAN. The output consists of confusion matrices of overlapping categories, Cohen's Kappa agreement scores, and a .csv file with overlapping annotations.

Some notes:
- These functions only work with two annotators.
- Each video has a "Question" tier, which denotes the video that is annotated. Our file structure was organised such that multiple videos could be annotated in one ELAN file.
- Our annotators both annotated the same videos, in separate files. To extract the annotations, we created a domain in ELAN with all of the files. We then selected the Question tier, and each other tier separately, in order to extract just the annotations of that tier.
- Our annotations were created based on time, not on frames. The code will need to be adjusted to accommodate annotations made on frames (a minimal time-to-frame conversion is sketched after these notes).
- There is a difference between "Annotator.a and Annotator.b" and "annotator.x and annotator.y" in the code. Codes "a" and "b" always correspond to the same respective annotators (in our case, "Annotator.a" is always "M" and "Annotator.b" is always "T"). Codes "x" and "y", on the other hand, do not necessarily correspond to these annotators: "annotator.x" may sometimes be "M" and sometimes "T".

**Update 15/09/2023:**
- Sometimes, one of the two annotators may not use one of the possible tier values at all, while the other annotator does. In this case, the confusion matrices would be missing the respective row/column of this value for that annotator. This has ...
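
Because the annotations in this dataset are time-aligned rather than frame-aligned, a frame-based comparison first needs one label per video frame. The following is a minimal sketch of such a conversion, not the code in these files; the data frame `ann`, its columns `begin_ms`, `end_ms` and `value`, and the frame rate of 25 fps are all assumptions.

```r
## Hypothetical input: one annotator's annotations for a single video, as a
## data frame `ann` with columns begin_ms, end_ms (milliseconds) and value.
frame.rate <- 25                                       # assumed frames per second
n.frames   <- ceiling(max(ann$end_ms) / 1000 * frame.rate)
labels     <- rep(NA_character_, n.frames)             # one slot per video frame

# Give every frame the tier value of the annotation that covers it
for (i in seq_len(nrow(ann))) {
  first <- floor(ann$begin_ms[i] / 1000 * frame.rate) + 1
  last  <- min(n.frames, ceiling(ann$end_ms[i] / 1000 * frame.rate))
  labels[first:last] <- ann$value[i]
}
```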
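
From two such per-frame label vectors, the confusion matrix and Cohen's Kappa can then be computed. The sketch below is again an illustration under assumed names (`labels.a`, `labels.b`), not the code in these files; it also shows one way to handle the issue described in the 15/09/2023 update, by forcing both vectors onto the same set of categories so the confusion matrix stays square even when one annotator never used a particular tier value.

```r
## Hypothetical inputs: labels.a and labels.b hold one tier value per frame
## for Annotator.a and Annotator.b respectively, aligned frame by frame.
keep <- !is.na(labels.a) & !is.na(labels.b)            # frames labelled by both annotators

# Use the union of categories so no row or column can go missing
cats     <- sort(union(unique(labels.a[keep]), unique(labels.b[keep])))
labels.a <- factor(labels.a[keep], levels = cats)
labels.b <- factor(labels.b[keep], levels = cats)

# Confusion matrix of overlapping categories
conf <- table(Annotator.a = labels.a, Annotator.b = labels.b)

# Cohen's Kappa: observed agreement corrected for chance agreement
p.obs <- sum(diag(conf)) / sum(conf)
p.exp <- sum(rowSums(conf) * colSums(conf)) / sum(conf)^2
kappa <- (p.obs - p.exp) / (1 - p.exp)
```

The same score can also be obtained with an existing implementation such as kappa2() from the irr package; the manual computation above is only meant to make the relationship between the confusion matrix and the Kappa score explicit.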