Category Archives: Crowdsourcing

New Scientist coverage of our AutoMan project

The New Scientist has just published an article covering our AutoMan project, which makes it possible to program with people. Full article below. Reasonably accurate, though it’s my team, not Dan’s :). Also on the project are my student Charlie Curtsinger, and my UMass colleague Andrew McGregor.


AutoMan: A Platform for Integrating Human-Based and Digital Computation

We have made an alpha release of AutoMan, a platform for integrating human-based and digital computation. It allows programmers to “program with people”: human-based computations appear to the programmer as ordinary function calls. AutoMan automatically handles details like quality control, payment, and task scheduling. It is currently implemented as a domain-specific language embedded in Scala (which runs on any machine with a Java Virtual Machine), and uses Amazon’s Mechanical Turk as a backend.
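To give a flavor of the programming model, here is a rough Scala sketch of what an AutoMan-style program might look like. The identifiers below (`MTurkAdapter`, `radioQuestion`, the parameter names) are illustrative assumptions, not a copy of the released API; consult the project page for the actual interface.

```scala
// Illustrative only: names and signatures here are assumptions,
// not the released AutoMan API.
object PortraitCheck extends App {
  // Backend adapter for Amazon's Mechanical Turk (hypothetical constructor).
  val a = MTurkAdapter(accessKeyId = "...", secretAccessKey = "...")

  // A human-answered question looks like an ordinary function call.
  def isPortrait(imageUrl: String) =
    a.radioQuestion(
      text       = s"Is the photo at $imageUrl a portrait?",
      options    = List("yes", "no"),
      confidence = 0.95, // desired statistical confidence in the answer
      budget     = 5.00  // maximum total spend, in USD
    )

  // The runtime transparently schedules, prices, and validates
  // the underlying human tasks before returning a result.
  println(isPortrait("http://example.com/photo.jpg"))
}
```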

Visit the project page for download information.

Technical report UMass CS TR 2011-44: Dan Barowy, Emery D. Berger, and Andrew McGregor.


Humans can perform many tasks with ease that remain difficult or impossible for computers. Crowdsourcing platforms like Amazon’s Mechanical Turk make it possible to harness human-based computational power on an unprecedented scale. However, their utility as a general-purpose computational platform remains limited. The lack of complete automation makes it difficult to orchestrate complex or interrelated tasks. Scheduling human workers to reduce latency costs real money, and jobs must be monitored and rescheduled when workers fail to complete their tasks. Furthermore, it is often difficult to predict the length of time and payment that should be budgeted for a given task. Finally, the results of human-based computations are not necessarily reliable, both because human skills and accuracy vary widely, and because workers have a financial incentive to minimize their effort.

This paper introduces AutoMan, the first fully automatic crowdprogramming system. AutoMan integrates human-based computations into a standard programming language as ordinary function calls, which can be intermixed freely with traditional functions. This abstraction allows AutoMan programmers to focus on their application logic. An AutoMan program specifies a confidence level for the overall computation and a budget. The AutoMan runtime system then transparently manages all details necessary for scheduling, pricing, and quality control. AutoMan automatically schedules human tasks for each computation until it achieves the desired confidence level; monitors, reprices, and restarts human tasks as necessary; and maximizes parallelism across human workers while staying under budget.
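One way to make the quality-control idea concrete: for a multiple-choice question with n options, the runtime can keep soliciting answers until the probability that the observed unanimous agreement arose from random guessing drops below 1 − confidence. The Scala sketch below illustrates that chance-agreement calculation; it is a simplified model for intuition, not AutoMan's actual quality-control algorithm.

```scala
// A simplified sketch of confidence-driven quality control: how many
// unanimous answers to an n-option question rule out random guessing?
// This is an illustration, not AutoMan's actual algorithm.
object ConfidenceSketch {
  // Probability that k independent random guessers all pick the same
  // one of n options: n * (1/n)^k = n^(1 - k).
  def chanceAgreement(n: Int, k: Int): Double =
    math.pow(n, 1 - k)

  // Smallest number of unanimous answers needed so that coincidental
  // agreement is less likely than 1 - confidence.
  def votesNeeded(n: Int, confidence: Double): Int = {
    var k = 1
    while (chanceAgreement(n, k) > 1 - confidence) k += 1
    k
  }

  def main(args: Array[String]): Unit = {
    // A 4-option question at 95% confidence needs 4 unanimous answers:
    // 4^(1-4) = 1/64 ≈ 0.016 <= 0.05, but 4^(1-3) = 1/16 > 0.05.
    println(votesNeeded(4, 0.95))
    // A yes/no question needs more votes, since chance agreement is likelier.
    println(votesNeeded(2, 0.95))
  }
}
```

Note how the required number of votes grows as the option count shrinks: fewer options make coincidental agreement more likely, so more redundancy is needed to reach the same confidence.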