Drazen Prelec (MIT)
Bayesian truth serum and the ‘wine tasting problem.’ (slides) The term “Bayesian truth serum” refers to an information-theoretic scoring algorithm that rewards respondents for honest reporting of private information, using the reports of other people as the only input (individual honesty is non-verifiable). The algorithm can also function as an objective truth-detector, identifying which answer to a multiple-choice question is most likely to be true. I will start by reviewing some of the theoretical and experimental results obtained with this approach, and then turn to the problem of incentivizing respondents to create their own questions. This will be illustrated by the ‘wine tasting problem.’ An unstructured wine tasting may generate a great deal of tantalizing discussion, sometimes ending in consensus and sometimes not. It is hard to know whether disagreements are caused by an ambiguous vocabulary or by genuine differences in perceptual experience. I will discuss how scoring systems, related to the Bayesian truth serum, can train respondents to stabilize their phenomenological vocabulary.
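As a rough illustration of the scoring idea described above, the following is a minimal sketch of a Bayesian-truth-serum-style score, assuming the formulation from Prelec's published work: each respondent endorses one answer and also predicts the distribution of answers in the population; a respondent's score combines an "information" term (how surprisingly common their answer is, relative to the geometric mean of everyone's predictions) and a "prediction" term (how well they predicted the empirical answer frequencies). The function name `bts_scores` and the toy inputs are illustrative, not from the talk.

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Sketch of a Bayesian-truth-serum-style score.

    answers[r]     -- index of the option endorsed by respondent r
    predictions[r] -- respondent r's predicted frequency for each option
    alpha          -- weight on the prediction term
    """
    n = len(answers)
    m = len(predictions[0])
    eps = 1e-9  # guard against log(0)

    # x_bar[k]: empirical fraction of respondents endorsing option k
    x_bar = [max(sum(1 for a in answers if a == k) / n, eps)
             for k in range(m)]
    # y_bar[k]: geometric mean of the predicted frequencies for option k
    y_bar = [math.exp(sum(math.log(max(p[k], eps)) for p in predictions) / n)
             for k in range(m)]

    scores = []
    for r in range(n):
        k = answers[r]
        # Information score: reward answers that are more common
        # than the crowd collectively predicted.
        info = math.log(x_bar[k] / y_bar[k])
        # Prediction score: penalize predictions far from the
        # empirical frequencies (a KL-divergence-style term).
        pred = alpha * sum(
            x_bar[j] * math.log(max(predictions[r][j], eps) / x_bar[j])
            for j in range(m))
        scores.append(info + pred)
    return scores, x_bar, y_bar
```

Under this formulation, the "truth-detector" use mentioned in the abstract corresponds to picking the surprisingly popular answer, i.e. the option k maximizing `log(x_bar[k] / y_bar[k])`.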
Brent Hecht (Northwestern University)
Disagreement in Crowdsourcing due to Cultural Context. (slides) The people who do crowdwork online come from a wide variety of cultural contexts. In this talk, drawing on our research on Wikipedia, I will show how these diverse cultural contexts often result in significant disagreements between members of the crowd. I will then discuss how our research into this phenomenon helped to establish what is now known as “algorithmic bias”. Finally, I will highlight our work showing that these disagreements can inform a new class of applications powered by “algorithmic diversity”.