All posts by emeryberger

Professor of Computer Science at the University of Massachusetts, Amherst.

Reviewing Guidelines for Program Committee Members

I prepared these guidelines for the PLDI Program Committee when I was program chair in 2016. I am posting this lightly edited version in the hopes that it will be useful to other program chairs. Feel free to adapt some or all of it for your own use; if you do so, I just ask that you cite this page and point me to your page in the hopes of providing an easy go-to for future program chairs and PC members. Thanks!

Reviewing Guidelines

All committee members are expected to:

  • Personally read and write reviews for all of their assigned papers. The reviews should be entirely of the reviewer’s own devising. If you want to invite someone for a supplementary review, let me know and I will handle it.
  • Write positive, detailed, and constructive reviews that will help the authors revise their papers and make them better.
  • Not seek to break double-blind reviewing by Googling (or Binging, for those of you who do that) or other means.
  • Turn your reviews in ON TIME.
  • Actively participate in all online discussions of the papers, and (for EPC members) participate in a teleconference to discuss PC papers prior to the physical PC meeting.

Reviews will take the following questions into account:

  • Is the paper well-motivated? What problem does it address, and is it an important problem?
  • Does the paper significantly advance the state of the art or break new ground?
  • What are the paper’s key insights?
  • What are the paper’s key scientific and technical contributions?
  • Does the paper credibly support its claimed contributions?
  • What did you learn from the paper?
  • Is the paper sufficiently clear that most PLDI attendees will be able to read and understand it?
  • Does the paper clearly establish its context with respect to prior work? Does it discuss prior work accurately and completely? Are comparisons with previous work clear and explicit?
  • Does the paper describe something that has actually been implemented? If so, has it been evaluated properly? Is it publicly available so that these results can be verified?
  • What impact is this paper likely to have (on theory & practice)?
  • Is the work of broad appeal and interest to the PLDI community?

A key part of ensuring quality reviews is making sure that papers are reviewed by experts. All reviewers will indicate specifically what the nature of their expertise is with respect to each paper, e.g., “I have written papers (X, Y, & Z) on this topic.”

Guardians: One reviewer will be appointed as a “guardian” to lead all discussions and ensure that author responses are read and addressed. The guardian will also ensure that final reviews include a summary of the online and/or PC discussion to explain decisions for acceptance/rejection.


This section is adapted from the bidding instructions from POPL 2015.

Bidding will be carried out in HotCRP. Each PC, EPC, and ERC member should enter a bid for every paper. A bid (called a review preference in HotCRP) is a combination of two things:

  • an integer between 3 and -3, inclusive, or -100 for a conflict, which indicates how much you would like to review the paper.
    • 3: I would really like to review this paper!
    • 2: I would like to review this paper a lot, but it isn’t one of my absolute top favorites.
    • 1: I would like to review this paper more than average.
    • 0: I don’t care one way or the other.
    • -1: I would like to review this paper less than average.
    • -2: I do not want to review this paper very much at all, but doing so won’t kill me.
    • -3: I really don’t want to review this paper!
  • a letter (X, Y, Z) that indicates how much expertise you expect to have concerning a paper
    • X: I expect to be an expert reviewer for this paper. Experts should be able to understand the technical content of the paper (unless the paper is particularly poorly written) and are acutely aware of related research in the area (i.e., you have written a paper on the topic).
    • Y: I expect to be a more knowledgeable reviewer than most for this paper, because I generally follow the literature in this area.
    • Z: I am an outsider. I do not expect to have any special knowledge of the topics discussed in this paper.

Positive numbers in your review preference mean you have greater than average desire to review the paper. Negative numbers mean you have a less than average desire to review the paper. A score of −100 means you think you have a conflict. Examples:

  • A preference of 3X means you really want to review the paper a lot and expect to be an expert.
  • A preference of 2Z indicates you want to review this paper somewhat less than the one you scored a 3, but you still want to review it a lot and you expect to be an outsider.
  • A score of -3X means you really do not want to review this paper at all, but expect to be an expert on the topic.
  • A score of 0Z means you do not care very much one way or the other whether you are assigned this paper or not, and expect to be an outsider.

There are probably a number of ways to game this preference system. Please don’t try. For example, if you assign 20 papers a 3 and every other paper a -3, you won’t get those 20 and I won’t know which ones you really want or don’t want. (We will automatically check the distribution of scores, and if you do this, you will make the program chair unhappy.) I’d like everyone to be excited about the stack of papers they receive to review but naturally, I will have to balance that against the need to ensure papers have proper expertise assigned to them.
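To make the distribution check concrete, here is a minimal sketch of the kind of automated screen a chair might run over each reviewer’s bids. The function name, the 90% threshold, and the exact heuristic are all hypothetical illustrations, not a description of what HotCRP or the PLDI tooling actually does:

```python
from collections import Counter

def looks_gamed(bids, extreme_fraction=0.9):
    """Flag a bid profile that is almost entirely 3s and -3s.

    `bids` is a list of integer preferences in [-3, 3], with -100
    marking conflicts. A profile concentrated at the extremes tells
    the chair nothing about relative preference among papers.
    """
    scored = [b for b in bids if b != -100]  # ignore conflicts
    if not scored:
        return False
    counts = Counter(scored)
    extreme = counts[3] + counts[-3]
    return extreme / len(scored) >= extreme_fraction

# A reviewer who assigns 20 papers a 3 and the other 180 a -3:
gamed = [3] * 20 + [-3] * 180
# A reviewer who spreads bids across the scale:
honest = [3] * 10 + [2] * 30 + [1] * 40 + [0] * 60 \
       + [-1] * 40 + [-2] * 15 + [-3] * 5
print(looks_gamed(gamed), looks_gamed(honest))  # → True False
```

The point of the sketch is simply that an all-or-nothing bid pattern is trivially detectable, so spreading bids honestly across the scale is the only strategy that actually helps your assignment.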

When bidding, you won’t have to read the entire paper (though, of course, you are free to look at any paper in depth when bidding), so you are only estimating your expertise. If, when you review a paper, you find you have made a mistake in your estimate, that is just fine, and is bound to happen from time to time. If you find yourself downgrading your expertise from an X, we might find a paper suddenly lacking expert reviewers. In such a case, feel free to alert the PC chair. I’ll see what I can do.

Entering bids in HotCRP: There is more than one way you can enter bids into HotCRP. One way to begin is to go to the reviewer preferences page. There, you will see a list that shows all submitted papers. You may enter your preferences in the text boxes here. Alternatively, you may flip through the paper pages (use keys k and j to flip forwards and backwards through the paper pages efficiently). If you go through the papers in numeric order, flip a coin first to decide whether you will go through them back to front or front to back. You may also upload preferences from a text file; see the “Download” and “Upload” links below the paper list on your review preferences page.
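If you prefer the upload route, a small script can assemble the preference file from whatever notes you keep while skimming papers. The two-column tab-separated layout below is an assumption for illustration only; check the “Download” link on your review preferences page for the exact format HotCRP expects:

```python
# Sketch: generate a bulk-preference file for upload.
# Keys are (hypothetical) paper numbers; values combine the
# integer preference and the X/Y/Z expertise letter, with -100
# alone marking a conflict.
bids = {
    101: "3X",    # really want it, expect to be an expert
    102: "0Z",    # indifferent, outsider
    103: "-100",  # conflict
}

with open("prefs.txt", "w") as f:
    f.write("paper\tpreference\n")
    for paper, pref in sorted(bids.items()):
        f.write(f"{paper}\t{pref}\n")
```

Generating the file once and re-uploading it as you refine your bids is often faster than editing two hundred text boxes by hand.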


PLDI 2016 will be shepherding all accepted papers. Please write your reviews taking this into account. We are doing this for a variety of reasons, including improving paper quality and letting us accept papers with flaws that can easily be fixed. Shepherding will enforce that all *minor* changes requested by reviewers are incorporated in the final paper. Making this apply to all papers means that there will be no stigma attached to having a paper shepherded.

This approach will let us accept papers, for example, that do not cite certain related papers that reviewers feel should be discussed. We can also require that authors address minor stylistic issues to enhance readability, so those kinds of things should not be deal-breakers for acceptance.

However, there is a limit to how much we can expect of the shepherding process. For example, if a paper’s evaluation is unacceptable, that is probably not something that can be salvaged during shepherding. The same is true for cases when the technical core of the paper is impenetrable.

In your reviews (either in the comments to authors, the PC, or both), please feel free to point out things that will need to be addressed during shepherding.


  1. Who is reviewing PC member submissions?
    • The new External Program Committee will be responsible for reviewing all PC submissions.
  2. What’s the role of the External Program Committee?
    • The External Program Committee is an innovation that was recently approved by the PLDI Steering Committee. Its aim is to guarantee an extremely high quality reviewing process by forming a committee composed of the leaders of our field. This approach is inspired by the standard practice in the systems community, where senior members of the community are invited to serve on a “light” Program Committee that reviews fewer papers (e.g., 10) than the “heavy” Program Committee and does not attend or participate in the physical meeting. In addition to providing a group of distinguished experts who can be counted upon to provide expert reviews across the areas of PLDI, the External Program Committee will review and make the decisions for all Program Committee submissions, making its job incredibly important.
  3. What is the role of the ERC?
    • In a departure from recent PLDIs, this year the ERC will serve primarily as a stable for obtaining expert reviews as needed, and not as a load-shedding mechanism or as a means of handling PC submissions (which now are handled by the EPC). The ERC is also going to be wider than usual, including experienced senior graduate students.
  4. How long should my reviews be?
    • You should aim for your reviews to be approximately 500 words long. HotCRP has a feature that enables searching for reviews by the number of words: you can see all of your reviews with fewer than 500 words by entering this in the search bar: “re:me:words<500”.
  5. What does “expertise” mean for the purposes of reviewing?
    • You should enter a sentence or two explaining what your expertise level is for each paper you review. The working definition of “expert” is that you have written one or more papers on the topic – you should indicate the titles and dates. “Knowledgeable” means that you follow the literature on this topic but may have missed recent developments.
  6. Are we doing two rounds of reviewing?
    • Yes. There will be two rounds, immediately followed by an author response period.
  7. When will authors be unblinded?
    • Only accepted papers will be unblinded to preserve the integrity of double-blind reviewing for future submissions. The author responses will also be anonymous.
  8. Can I enlist a student (or trusted colleague) to work with me on reviewing a paper?
    • Briefly, no. In more detail: every committee member is expected to write their own, independent reviews for every paper. If you believe that having a student do an expert review of a paper would be helpful, please let me know, and if it is appropriate (e.g., there are no conflicts), they can always be invited as an expert reviewer. Also, please do not distribute your assignments to anyone without vetting by the chair; because of double-blind reviewing, there may be authorship conflicts you are not aware of. Just send the chair a note. Having students write reviews is an important part of the training process, which is exactly why we are opening up the ERC to senior graduate students this year. If you do not want a student to actually submit a review but just want them to write a “test” review (one that will not actually be sent to the authors), that can also be arranged, but again, please vet this with the chair.
  9. How should I avoid breaking double-blind reviewing when searching for related work? For instance, if I read the cited previous paper that sounds most technically similar and find a figure identical to one in the paper I’m reviewing, that’s a pretty strong indication of authorship.
    • Inadvertent discovery of authorship is sometimes unavoidable. Here are some ways of reducing the risk of stumbling across something and thus breaking double-blind.
      • Initially read the paper off-line and write a preliminary review assuming that the authors have done their homework properly (in terms of scholarship, citing previous work, etc.).
      • Look for related work as a matter of due diligence after writing that review, and revise if needed.
      • To avoid accidentally unblinding the authors, don’t type the title into Google.
  10. How long will the author response be? Will there be a hard limit on the number of words? (I have lots of questions I’d like the authors to answer.)
    • The author response will only have a soft limit, but reviewers are required to only read the first 600 words (roughly one page of text). Here’s the message that will be sent out to the authors.

      The authors’ response should address reviewer concerns and correct misunderstandings. In particular, respond to explicit questions by reviewers. Make it short and to the point; the conference deadline has passed. Try to stay within 600 words. You may write more words but reviewers are only required to read the first 600 words, so address your key points first.


A Guide for Session Chairs

I just sent this message as a guide to the program committee members who will be chairing sessions for PLDI 2016 (I figure it’s the first time for some of them). A few people suggested I post it, so here it is (lightly edited). Additions or other suggestions welcome.

  • Find your speakers before the session begins. You will have to talk to them about some stuff – see below.
  • Find out how to pronounce their names properly.
  • Find out if they are on the market next year – sometimes people like the advertisement that they will be graduating soon.
  • Have them check their equipment (particularly if they are using Linux…). To be on the safe side, carry a spare Mac VGA dongle – speakers forget theirs shockingly often. You should consider writing your name on it in Sharpie (or do what one of my students does – cover it in bright pink fingernail polish). This greatly increases the odds you will get your dongle back after the session.
  • Before each session, introduce the entire session (as in, “I am So-and-So, from Wherever University; welcome to the session on drone-based programming languages.”).
  • Before each talk, introduce each speaker. I personally recommend not reading their title, since lots of speakers are on autopilot and will just repeat everything you said. You can instead say something like “This is Foo Bar, who will be talking about verifying clown car drivers.” In fact, come to think of it, you could just say that for every talk.
  • Keep track of time. For PLDI this year, speakers get 25 minutes, and then there are 5 minutes for questions. If you have an iPad, there’s an app I have used to display time to speakers (big giant numbers, you can set it to change colors when you hit 5 min or 1 min till the end). You can of course always go old school and hold up a sheet of paper indicating when time is drawing near. I recommend doing this when there are 5 minutes left and 1 minute left. Let the speakers know you will be doing this.
  • When the speaker is done, if it hasn’t happened already, make sure everyone applauds by saying “Let’s thank our speaker” and start applauding. Then open the floor to questions.
  • COME PREPARED WITH A QUESTION. The worst thing ever is when a talk does not go well and no one has any questions for the speaker, and then: <crickets>. Read over each paper so you have at least a couple of questions planned for this eventuality. Hopefully it won’t come to this and someone will ask something, but it happens sometimes, and it’s great if you can save the day. Even when the audience is engaged, it’s still a good idea to ask a question or two of your own in case there are very few questions.
  • Make sure people who ask questions use the mic and state their name and affiliation.
  • You may also have to clarify the question for the speaker, repeat the question, etc. Understanding questioners can occasionally be a challenge for non-native English speakers: it’s a stressful time, and the questioners may have unfamiliar accents, etc. Be prepared to give the speaker a helping hand.
  • Be prepared to cut off a questioner. YOU ARE IN CHARGE OF THE SESSION. If a questioner won’t give up the mic and keeps asking questions and is burning time, rambling, etc., you are empowered to move on to the next questioner (e.g., by suggesting “how about we take this off-line”).
  • Hopefully this won’t be an issue you will have to deal with, but questioners who are belligerent or insulting must not be tolerated. Cut them off and report them to the program chair (me) or the general chair. I sincerely hope and expect that this will not happen, but I want you to realize you are empowered to take action immediately. You can read over SIGPLAN’s non-harassment policy, which is based on ACM’s.
  • To make sure things run smoothly, have the next speaker on deck with their laptop a minute or so before question times end. Ideally, they will be setting up while the current speaker is wrapping up questions.
  • Finally, when questions are over, say “Let’s thank our speaker again” and applaud.
  • At the end of the session, tell everyone what’s next (e.g., “next is lunch, and talks will resume at 1:30pm”).

And thanks again to all the session chairs for volunteering!


Coz: Finding code that counts with causal profiling

Nice summary of Coz.

the morning paper

Coz: Finding code that counts with causal profiling – Curtsinger & Berger 2015

update: fixed typo in paper title

Sticking to the theme of ‘understanding what our systems are doing,’ but focusing on a single process, Coz is a causal profiler. In essence, it makes the output of a profiler much more useful to you by showing you where optimisations would genuinely have a beneficial effect (which doesn’t always equate with the places programs spend the most time). Interestingly, it can also show you places where locally optimising performance will actually slow down the overall system. That might sound counter-intuitive: the Universal Scalability Law gives us some clues as to why this might be. The understanding gained from finding such locations is also very useful in optimising the application overall.

Conventional profilers rank code by its contribution to total execution time. Prominent examples include oprofile, perf, and gprof. Unfortunately, even…


Doppio Selected as SIGPLAN Research Highlight

Doppio, our work on making it possible to run general-purpose applications inside the browser, recently won two awards. At PLDI, it received the Distinguished Artifact Award. SIGPLAN, the Special Interest Group of ACM that focuses on Programming Languages, just selected Doppio as a Research Highlight. These papers are chosen by a board from across the PL community; SIGPLAN highlights are also recommended for consideration for the CACM Research Highlights section.

Below is the citation. IMHO John did an extraordinary job on the paper and the system, and I am glad to see that the community agrees!

Title: Doppio: Breaking the Browser Language Barrier
Authors: John Vilk, Emery Berger, University of Massachusetts
Venue: PLDI 2014

The authors build a JavaScript-based framework, Doppio, in which unmodified programs can be executed within a web browser. They do this by creating a runtime environment in JavaScript that supports basic services such as sockets, threading, and a filesystem that are not otherwise supported within the browser. The authors demonstrate the framework by implementing an in-browser JVM and an in-browser runtime for C++. The paper is an engineering tour de force. The paper should appeal to a wide audience because of the ubiquity of the browser (and thus the utility of their systems), and because it is broad in scope.

Washington Post, Take Down This Article!


The Washington Post just published an article from a kid claiming he graduated at the top of his class at Penn State in Computer Science but couldn’t find a job. But his description of Computer Science classes is completely disconnected from reality. Turns out, he graduated with a degree in Management Information Systems (a business degree) and not from the Penn State any reasonable person would assume, but rather a satellite campus. All this info is right on the dude’s own LinkedIn page and a previous version of the article from Sept. 2013. Washington Post, Take Down This Article!

[This was initially posted publicly on Facebook.]

Update – I wrote a Letter to the Editor of the Washington Post. They did not choose to print it, though they did partially correct the article.


Dear Editor:

A recent op-ed article by Casey Ark (“I studied computer science, not English. I still can’t find a job.”, August 31) is deceptive and misleading. Ark says he graduated at the top of his class at Penn State in Computer Science but found himself unable to find a job. All of these claims are false. An accurate headline would read “I studied business, not English. I had job opportunities, but I turned them down.”

Ark’s descriptions of his class experiences — non-rigorous, memorization-based, and non-technical — sound nothing like a Computer Science degree, and here’s why. A visit to his LinkedIn page shows that he graduated with a degree in Management Information Systems, a non-technical business degree that has little to do with Computer Science and is decidedly not a STEM (Science, Technology, Engineering, and Math) field.

Ark also fails to mention that he attended a satellite campus rather than the more prestigious flagship University Park campus of Penn State, a fact included in an earlier version of this article that appeared on PennLive in September 2013. Regardless of its quality, leaving out the location leads readers to believe he graduated from the main campus.

In this earlier article, Ark describes having chosen not to take two entry-level job options, deciding instead to become an entrepreneur.

I am surprised and chagrined that this op-ed made it through whatever fact-checking mechanisms exist at the Washington Post, when a few moments with Google sufficed to discredit the central claims of the article.

Professor Emery Berger
School of Computer Science
University of Massachusetts Amherst


Professor Stephen A. Edwards
Department of Computer Science
Columbia University in the City of New York

Asst. Professor Brandon Lucia
Department of Electrical and Computer Engineering
Carnegie Mellon University

Associate Professor Daniel A. Jiménez
Department of Computer Science & Engineering
Texas A&M University

Assistant Professor David Van Horn
Department of Computer Science
University of Maryland, College Park

Assistant Professor Santosh Nagarakatte
Department of Computer Science
Rutgers, The State University of New Jersey, New Brunswick

Assistant Professor Swarat Chaudhuri
Department of Computer Science
Rice University

Associate Professor Dan Grossman
Department of Computer Science & Engineering
University of Washington

Professor Michael Hicks (B.S. Computer Science, Penn State ‘93)
Department of Computer Science
University of Maryland

Associate Professor Matthew Hertz
Department of Computer Science
Canisius College

Associate Professor Landon Cox
Department of Computer Science
Duke University

Associate Professor Benjamin Liblit (B.S. Computer Science, Penn State ‘93)
Department of Computer Sciences
University of Wisconsin–Madison

Associate Professor John Regehr
School of Computing
University of Utah

Professor Jeff Foster
Department of Computer Science
University of Maryland, College Park

Kaushik Veeraraghavan

Some comments from the Facebook thread posted by my fellow Computer Science colleagues:

Daniel Ángel Jiménez This kind of garbage causes lots of confusion. At my last job, almost all of the complaints from local industry about our CS graduates turned out to actually be about morons from the business school.

Shriram Krishnamurthi “Correction: An earlier version of this story’s headline misidentified what the author studied. It has been corrected.” They changed “engineering” to “computer science”. Thanks, WaPo!

Rob Ennals It seems that whenever I read a media article about something I actually know about, there is something fundamentally wrong with their understanding of the situation. This makes me worry about the accuracy of the information I’m getting about things I’m not knowledgable about.

Emery Berger He laments “they’re looking for employees who can actually do things – like build iPhone apps…. I wish I’d been taught how to do those things in school, but my college had something different in mind.”

PSU offers CMPSC 475, WHICH TEACHES iOS PROGRAMMING.…/courses/C/CMPSC/475/201314SP

Tao Xie Another very important piece of information (from the original earlier post:…/heres_why_why_more_and_more…), “When I graduated from PSU’s Harrisburg campus in May, ….” This kid graduated from PSU Harrisburg Campus, **NOT** the State College campus!! There are 24 campuses of PSU. Note that the Washington Post article (carefully?) “rephrased” the above quoted sentence to be “When I graduated from Penn State a year ago, …” smh..

Stephen A. Edwards Breathtaking naivete on display in this column. I have no idea what he was studying: no CS graduate should be expected to know the difference between advertising and marketing. His lament that all the programming languages and tools he learned were years out of date is also laughable. Of course they’re out of date: everything in CS goes out of date more or less instantly. The thing is to make sure you understand the basic concepts so you can learn the new stuff faster. But I really got a chuckle out of his suggestion that we be more lax about academic standards and hire better businesspeople. Absolutely that will improve the quality of your education, no question.

New Scientist coverage of our AutoMan project

The New Scientist has just published an article covering our AutoMan project, which makes it possible to program with people. Full article below. Reasonably accurate, though it’s my team, not Dan’s :). Also on the project are my student Charlie Curtsinger, and my UMass colleague Andrew McGregor.
