Modeling Mechanical Turk with Incentive Structures for Performance of Human-Intelligence Tasks
Frank Chen, Human-Computer Interaction Group, Stanford University, firstname.lastname@example.org
In recent years, there has been an explosion in the use of crowdsourcing platforms to solve real problems. These solutions are commonly called artificial artificial intelligence, and harnessing the power of the crowd raises a central question: how can problems be decomposed into tasks small enough for crowds to solve, and how can crowds be incentivized to solve them? The largest of these platforms is Amazon's Mechanical Turk (AMT).
Essentially, AMT provides an inexpensive, scalable research platform that mediates many of the logistical issues of data collection and participant recruitment. Researchers in human-computer interaction, psychology, and political science have used the platform to perform large-scale information look-up, work, and human-computation tasks. Furthermore, many canonical experiments appropriate for the context of AMT have been recreated and demonstrated on the platform.

Agent-based modeling (ABM) frames computational problems in terms of situated agents in an ecosystem. One can imagine AMT as a particular instance of such an ecosystem, where each Turker is an agent. The analogy is apt because of the several constraints placed on the knowledge of agents. ABM provides a framework for treating each Turker as intelligent, purposeful, and situated in time and space. We can then use ABM to observe emergent patterns and decompose the opaque AMT ecosystem, observing agent behaviors and the completion of activities, drawing on the literature as a basis for constructing an agent-based model.
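To make the framing concrete, a minimal sketch of such an agent-based model might pair Turker agents against a pool of HITs (human-intelligence tasks). The specific incentive rule here (each agent accepting only HITs whose reward meets a private reservation wage) and all names are illustrative assumptions, not the model developed in this paper:

```python
import random

class HIT:
    """A human-intelligence task with a fixed reward (in cents)."""
    def __init__(self, reward):
        self.reward = reward
        self.completed = False

class Turker:
    """An agent with a private reservation wage (an assumed incentive
    rule): it accepts a HIT only if the reward meets that threshold."""
    def __init__(self, reservation_wage):
        self.reservation_wage = reservation_wage
        self.earnings = 0

    def step(self, hits):
        # Scan available HITs, highest-paying first, and take the
        # first acceptable one (at most one HIT per tick).
        for hit in sorted(hits, key=lambda h: h.reward, reverse=True):
            if not hit.completed and hit.reward >= self.reservation_wage:
                hit.completed = True
                self.earnings += hit.reward
                return hit
        return None  # no acceptable HIT this tick

def run(turkers, hits, ticks):
    """Advance the ecosystem: each tick, every agent acts once."""
    for _ in range(ticks):
        for t in turkers:
            t.step(hits)
    return sum(h.completed for h in hits)

random.seed(0)
turkers = [Turker(reservation_wage=random.randint(1, 10)) for _ in range(20)]
hits = [HIT(reward=random.randint(1, 10)) for _ in range(50)]
completed = run(turkers, hits, ticks=5)
```

Even a toy like this exposes emergent, ecosystem-level quantities (how many HITs complete, and which reward levels go unclaimed) from purely local agent rules, which is the property the paper exploits.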
In this paper, we first review the AMT literature on incentives, data quality, and efficiency. We then describe how we construct an ABM of AMT, present results from the model, and discuss the insights gained from observing these phenomena at agent fidelity. We conclude with critiques of the model and next steps.