
Welcome to the SAD 2018 Workshop!

5 July 2018, room KO2-F-174, University of Zurich

This full-day workshop, co-located with the HCOMP 2018 conference, aims to bring together a latent community of researchers who treat disagreement (and subjectivity and ambiguity) as signal rather than noise. Such researchers use theoretical and empirical methods to characterize, utilize, mitigate and derive value from uncertainty, ambiguity and disagreement. The workshop will include invited talks, short technical talks and a discussion of medium- and long-term challenges to fuel future work. We invite researchers from fields such as computer science, information science, law, communication science and political science, as well as those primarily working on human computation and crowdsourcing. Solutions to these challenging problems will benefit from a diverse set of perspectives.

Ambiguity creates uncertainty in practically every facet of crowdsourcing: the information presented to workers as part of a task, the instructions for what to do with it, and the information workers are asked to provide. Beyond lexical ambiguity, ambiguity can arise from missing details, contradictions and subjectivity. Subjectivity may stem from differences in cultural context, life experiences, or individual perception of hard-to-quantify properties. All of these can leave workers with conflicting interpretations, leading to results that requesters (including the end-users of crowd-powered systems) would regard as “wrong”.

Historically, the human computation community has largely attributed disagreement to low-quality workers. This led to mathematical approaches intended to minimize the supposed noise: aggregation (e.g., majority voting, expectation maximization), linguistic approaches, statistical filtering and incentive design, among many others. All of these are applied after the data have been collected.
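To make the post-hoc aggregation view concrete, here is a minimal sketch of simple majority voting over worker labels; the items, labels and function name are hypothetical, chosen only for illustration:

```python
from collections import Counter

def majority_vote(labels):
    """Return the most frequent label for one item.

    Ties are broken arbitrarily, which is precisely where genuine
    ambiguity or subjectivity gets discarded as "noise".
    """
    winner, _count = Counter(labels).most_common(1)[0]
    return winner

# Hypothetical responses from five workers for two items.
item_labels = {
    "item-1": ["cat", "cat", "cat", "dog", "cat"],                      # near-consensus
    "item-2": ["funny", "offensive", "funny", "offensive", "neutral"],  # genuine split
}

for item, labels in item_labels.items():
    print(item, "->", majority_vote(labels))
```

Note how the split vote on the second item, which may reflect legitimate subjectivity rather than error, collapses into a single "ground-truth" label.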

Recent approaches apply principles from interaction design and computer-supported cooperative work to refine task designs until disagreement is minimized. This is akin to the task-refinement methodology used in linguistic annotation and social content analysis, where instructions are iterated until inter-rater reliability is acceptable (Krippendorff, 2013). Here the focus is on minimizing perceived ambiguity or subjectivity before the data have been collected.
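As a rough illustration of that refinement loop, the sketch below computes Krippendorff's alpha for nominal labels and could be re-run after each revision of the task design; the data and the 0.8 threshold are hypothetical, and a vetted library implementation would be preferable in practice:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` maps each item to the list of labels it received; items
    with fewer than two labels carry no agreement information and
    are skipped.
    """
    coincidences = Counter()
    for labels in units.values():
        m = len(labels)
        if m < 2:
            continue
        # Each ordered pair of labels within an item contributes
        # 1/(m-1) to the coincidence matrix.
        for a, b in permutations(labels, 2):
            coincidences[(a, b)] += 1.0 / (m - 1)

    totals = Counter()  # marginal weight for each label value
    for (a, _b), weight in coincidences.items():
        totals[a] += weight
    n = sum(totals.values())

    observed = sum(w for (a, b), w in coincidences.items() if a != b)
    expected = sum(totals[a] * totals[b] for a, b in permutations(totals, 2))
    if expected == 0:
        return 1.0  # a single label value: no disagreement is expressible
    return 1.0 - (n - 1) * observed / expected

# Hypothetical annotations; iterate on the task until alpha clears the bar.
units = {
    "item-1": ["cat", "cat", "cat"],
    "item-2": ["dog", "dog", "cat"],
    "item-3": ["cat", "cat", "dog"],
}
alpha = krippendorff_alpha_nominal(units)
print(f"alpha = {alpha:.3f}")
if alpha < 0.8:  # hypothetical acceptability threshold
    print("reliability too low: revise instructions and re-collect")
```

Under this refine-first view, a low alpha signals a task-design problem to be fixed before data collection begins.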
