I have long been a proponent of double-blind reviewing. People suffer from expectation bias, and double-blind reviewing is a tried-and-true approach to combating it. I adopted double-blind reviewing when I co-chaired VEE 2010 and again recently for WoDet 3, and have decided to take a stand to sway more program committees to implement it. Join me!
When asked to serve on a PC, agree only if double-blind reviewing is used.
This approach doesn’t always work, but the fact is that most program chairs simply have not considered it and are happy to adopt it. My advisor Kathryn McKinley’s case for double-blind and Mike Hicks’ fantastic FAQ on the topic make excellent ammunition. I suggested it to Todd Mowry, who implemented it for ASPLOS 2011; P. Sadayappan did the same for PPoPP 2012. (I am grateful to both for their patience!)
But there has been some backsliding; double-blind reviewing is not going to be used for POPL 2013, despite the overwhelmingly positive response of the POPL 2012 committee members.
So the next time you get asked to serve on a PC, at least bring it up. Let’s help make this a standard practice across our community.
My interview today with our NPR affiliate New England Public Radio went from Zappos to broader security to a discussion of the End Times (more or less)!
We have made an alpha release of AutoMan, a platform for integrating human-based and digital computation. It allows programmers to “program with people”: human-based computations appear to the programmer as ordinary function calls. AutoMan automatically handles details like quality control, payment, and task scheduling. It is currently implemented as a domain-specific language embedded in Scala (a language that runs on any machine with a Java Virtual Machine), and uses Amazon’s Mechanical Turk as a backend.
Visit the project page (automan-lang.org) for download information.
Technical report UMass CS TR 2011-44: Dan Barowy, Emery D. Berger, and Andrew McGregor.
Humans can perform many tasks with ease that remain difficult or impossible for computers. Crowdsourcing platforms like Amazon’s Mechanical Turk make it possible to harness human-based computational power on an unprecedented scale. However, their utility as a general-purpose computational platform remains limited. The lack of complete automation makes it difficult to orchestrate complex or interrelated tasks. Scheduling human workers to reduce latency costs real money, and jobs must be monitored and rescheduled when workers fail to complete their tasks. Furthermore, it is often difficult to predict the length of time and payment that should be budgeted for a given task. Finally, the results of human-based computations are not necessarily reliable, both because human skills and accuracy vary widely, and because workers have a financial incentive to minimize their effort.
This paper introduces AutoMan, the first fully automatic crowdprogramming system. AutoMan integrates human-based computations into a standard programming language as ordinary function calls, which can be intermixed freely with traditional functions. This abstraction allows AutoMan programmers to focus on their programming logic. An AutoMan program specifies a confidence level for the overall computation and a budget. The AutoMan runtime system then transparently manages all details necessary for scheduling, pricing, and quality control. AutoMan automatically schedules human tasks for each computation until it achieves the desired confidence level; monitors, reprices, and restarts human tasks as necessary; and maximizes parallelism across human workers while staying under budget.
“-Hoard” — enabling the use of my Hoard memory allocator — is now an officially sanctioned configuration flag for SPEC CPU2006 (the industry-standard way to measure CPU performance)! See the flags in use for the Intel compiler and Open64. My opinions about benchmarking notwithstanding, I am OK with my work being a standard configuration flag :).
Camera-ready versions of recent pubs from our research group:
SOSP 2011: Dthreads: Efficient and Deterministic Multithreading, Tongping Liu, Charlie Curtsinger, and Emery Berger.
OOPSLA 2011: Sheriff: Precise Detection and Automatic Mitigation of False Sharing, Tongping Liu and Emery Berger.
Dthreads: Efficient Deterministic Multithreading,
Tongping Liu, Charlie Curtsinger, Emery D. Berger
[paper (PDF)] [source code] [YouTube video of presentation] [PPT slides]
Multithreaded programming is notoriously difficult to get right. A key problem is non-determinism, which complicates debugging, testing, and reproducing errors. One way to simplify multithreaded programming is to enforce deterministic execution, but current deterministic systems for C/C++ are incomplete or impractical. These systems require program modification, do not ensure determinism in the presence of data races, do not work with general-purpose multithreaded programs, or run up to 8.4× slower than pthreads.
This paper presents Dthreads, an efficient deterministic multithreading system for unmodified C/C++ applications that replaces the pthreads library. Dthreads enforces determinism in the face of data races and deadlocks. Dthreads works by exploding multithreaded applications into multiple processes, with private, copy-on-write mappings to shared memory. It uses standard virtual memory protection to track writes, and deterministically orders updates by each thread. By separating updates from different threads, Dthreads has the additional benefit of eliminating false sharing. Experimental results show that Dthreads substantially outperforms a state-of-the-art deterministic runtime system, and for a majority of the benchmarks evaluated here, matches and occasionally exceeds the performance of pthreads.
Related post: SHERIFF: Precise Detection and Automatic Mitigation of False Sharing