Non-Repeatable Experiments and Non-Reproducible Results: The Reproducibility Crisis in Human Evaluation in NLP (2023)
Attributed to: ReproHum: Investigating Reproducibility of Human Evaluations in Natural Language Processing, funded by EPSRC
Abstract
No abstract provided
Bibliographic Information
Digital Object Identifier: 10.18653/v1/2023.findings-acl.226
Publication URI: http://dx.doi.org/10.18653/v1/2023.findings-acl.226
Type: Conference Paper / Proceeding / Abstract