Benchmark Evaluation for Tasks with Highly Subjective Crowdsourced Annotations: Case study in Argument Mining of Political Debates (2023)
Attributed to: Global Surface Air Temperature (GloSAT), funded by NERC
Abstract
No abstract provided
Bibliographic Information
Publication URI: https://doi.org/10.36190/2023.52
Type: Conference Paper / Proceedings Abstract