Monthly Archives: March 2010

Research Proposal #1: The Effects of Author Preparation in Peer Code Review

The Problem Space

See “Research Proposal: My Problem Space” below.

Related Work

During his study at Cisco Systems, Jason Cohen noticed that review requests with some form of author preparation consistently had fewer defects found in them.

Jason Cohen explains what author preparation is…

The idea of “author preparation” is that authors should annotate their source code before the review begins.  Annotations guide the reviewer through the changes, showing which files to look at first and defending the reason and methods behind each code modification.  The theory is that because the author has to re-think all the changes during the annotation process, the author will himself uncover most of the defects before the review even begins, thus making the review itself more efficient.  Reviewers will uncover problems the author truly would not have thought of otherwise.

(Best Kept Secrets of Peer Code Review, pp. 80–81)

Cohen gives two theories to account for the drop in defects:

  1. By performing author preparation, authors were effectively self-reviewing, and removed defects that would normally be found by others.
  2. Since authors were actively explaining or defending their code, this sabotaged the reviewers’ ability to do their job objectively and effectively.  There is a “blinding effect”.

In his study, Cohen subscribes to the first theory.  He writes:

A survey of the reviews in question show the author is being conscientious, careful, and helpful, and not misleading the reviewer.  Often the reviewer will respond to or ask a question or open a conversation on another line of code, demonstrating that he was not dulled by the author’s annotations.

While it’s certainly possible that Cohen is correct, the evidence to support his claim is tenuous at best: it suffers from selection bias, and it was not drawn from a properly controlled experiment.

What do I want to do?

I want to design a proper, controlled experiment in an attempt to figure out exactly why the number of found defects drops when authors prepare their review requests.

My experiment is still being designed, but at its simplest:

We devise a review request with several types of bugs intentionally inserted.  We create “author preparation” commentary to go along with the review request.  We show the review request to a series of developers – giving the author preparation to some and withholding it from others – and ask the developers to perform a review.

We then take measurements of the number, type, and density of the defects that they find.
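The tallying step could be sketched in a few lines of Python.  The defect records below are invented placeholders just to show the shape of the analysis, not real data:

```python
from collections import Counter
from statistics import mean

# Each record: (condition, reviewer_id, defect_type).
# These are purely illustrative placeholders.
FOUND = [
    ("prepared", "r1", "logic"), ("prepared", "r1", "style"),
    ("prepared", "r2", "logic"),
    ("control", "r3", "logic"), ("control", "r3", "off-by-one"),
    ("control", "r4", "logic"), ("control", "r4", "style"),
    ("control", "r4", "off-by-one"),
]

def defects_per_reviewer(records, condition):
    """Mean number of defects found per reviewer in one condition."""
    counts = Counter(reviewer for cond, reviewer, _ in records
                     if cond == condition)
    return mean(counts.values())

def type_breakdown(records, condition):
    """How many defects of each type were found in one condition."""
    return Counter(dtype for cond, _, dtype in records
                   if cond == condition)
```

Comparing `defects_per_reviewer(FOUND, "prepared")` against `defects_per_reviewer(FOUND, "control")` is the core measurement; the type breakdown would show whether author preparation blinds reviewers to particular kinds of bugs.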

Why do you care?

If it is shown that author preparation does not negatively affect the number of defects that the reviewers find, this is strong evidence to support Cohen’s claim that author preparation is good.  This practice can then be adopted and argued for in order to increase the effectiveness of code reviews.

On the other hand, if it is shown that author preparation negatively affects the number of defects that the reviewers find, this has some interesting consequences.

The obvious one is the conclusion that authors should not prepare their review requests, so as to maximize the number of defects that their reviewers find.

The less obvious one takes the experimental result a step further.  Why should this “blinding effect” stop at author preparation?  Perhaps a review by any participant negatively affects the number of defects found in subsequent reviews?  The experiment will be designed to investigate this possibility as well.

Either way, the benefits or drawbacks of author preparation will hopefully be revealed, to the betterment of the code review process.

Research Proposal: My Problem Space

I want to talk about peer code review.

The code inspection process was formally brought to light by Michael Fagan in the 1970s, when he showed that code inspection improves the quality of source code.  Code inspection, coupled with rigorous testing / QA, helps to reduce the number of defects in a piece of software before it is released – which is really the cheapest time to find and fix those defects.

Jason Cohen took Fagan’s inspection technique out of the conference room and helped to bring it online.  After a study at Cisco Systems, he found (among other things) that lightweight code reviews were just as effective as (or more effective than) Fagan inspections, and took less time.

There are now a myriad of lightweight peer code review tools available, and code review has become a more common software development practice.*

That’s really great.  But how can we make it better? Here are some research project proposals…

*For more information on code review, I’ve written about it ad nauseam…

Peer Code Review Achievements

First off, check this out:  Unit Testing Achievements for Python

Cool:  rewards for testing (or, in some cases, breaking the build…see the “Complete Failure” achievement).

And so my brain instantly goes to:  why can’t this be applied to peer code review?

@gvwilson, @wolever and @bwinton and I have been tossing the idea around on Twitter.  Here’s a list of the achievements that we have come up with so far:

  • “The Congress Award”:  review request with the longest discussion thread
  • “The Two Cents Award”:  most review participants
  • “The One-Liner Award”:  diff has only a single line
  • “The Difference-Maker Award”:  biggest diff
  • “The Awed Silence Award”:  oldest review request without a single review
  • “The I-Totally-Rock Award”:  author gives themselves a ship-it
  • “The New Hire Award”:  user has submitted 5 patches, but hasn’t reviewed any code yet
  • “The Manager Award”: reviewed patches to submitted patches ratio > 5
  • “The Code Ninja Award”:  biggest diff to get a ship-it and commit 1st time through
  • “The Persistence Award”:  diff which gets >5 reviews before being accepted
  • “The Bulwer-Lytton Award”:  for the most long-winded review
  • “The Ten-Foot-Pole Award”:  for the scariest review

Can you think of any more?
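Several of these could be awarded mechanically from the metadata a review tool already tracks.  A toy sketch – the per-user stats dictionary and its field names are made up for illustration, not taken from any real tool:

```python
# Hypothetical per-user stats a review tool might expose;
# the field names here are invented for illustration.
stats = {
    "alice": {"patches_submitted": 2, "patches_reviewed": 12, "self_ship_its": 0},
    "bob":   {"patches_submitted": 5, "patches_reviewed": 0,  "self_ship_its": 1},
}

def achievements(user_stats):
    """Return the awards a user has earned, based on simple thresholds."""
    earned = []
    # "The New Hire Award": 5+ patches submitted, none reviewed.
    if user_stats["patches_submitted"] >= 5 and user_stats["patches_reviewed"] == 0:
        earned.append("The New Hire Award")
    # "The Manager Award": reviewed-to-submitted ratio > 5.
    if (user_stats["patches_submitted"] > 0 and
            user_stats["patches_reviewed"] / user_stats["patches_submitted"] > 5):
        earned.append("The Manager Award")
    # "The I-Totally-Rock Award": author gave themselves a ship-it.
    if user_stats["self_ship_its"] > 0:
        earned.append("The I-Totally-Rock Award")
    return earned
```

The discussion-thread and diff-size awards would work the same way, just keyed off review-request data instead of per-user counters.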

And while it’s fun to think of these, what would the effects of achievements be on code review adoption?  Sounds like an interesting thesis topic.

Or a GSoC project.

More Stuff About Peer Code Review

Feel like some interesting reading?

Here’s a slew of links I’ve gone through recently that are related/semi-related to peer code review:

Just so I can give credit where credit’s due, a bunch of these links are regurgitated from this blog post.


Author Preparation in Code Review: What Are Those Authors Saying?

If you recall, I’m looking at author preparation in code review, and whether or not it impairs the ability of reviewers to perform objective reviews effectively.

If this is really going to be my research project, I’ll need to get my feet a bit more wet before I design my experiment.  It’s all well and good to say that I’m studying author preparation…but I need to actually get a handle on what authors tend to say when they prepare their review requests.

So how am I going to find out the kinds of things that authors write during author preparation?  The MarkUs Project and the Basie Project both use ReviewBoard, so it’ll be no problem to grab some review requests from there.  But that’s a lot of digging if I do it by hand.

So I won’t do it by hand.  I’ll write a script.

You see, I’ve become pretty good at manipulating the ReviewBoard API.  So mining the MarkUs and Basie ReviewBoard instances should be a cinch.

But I’d like to go a little further. I want more data.  I want data from some projects outside of UofT.

Luckily, ReviewBoard has been kind enough to list several open source projects that are also using their software.  And some of those projects have their ReviewBoard instances open to the public.  So I just programmed my little script to visit those ReviewBoard instances, and return all of the review requests where the author of the request was the first person to make a review.  Easy.
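The heart of that script is a simple filter: “was the submitter the first person to post a review?”  Here’s a minimal sketch – the endpoint paths and JSON field names are my best reading of ReviewBoard’s REST API and may need adjusting for a given server or version:

```python
import json
from urllib.request import urlopen

def author_reviewed_first(submitter, reviews):
    """True if the request's own submitter posted the first review.

    `reviews` must be ordered oldest-first; each item needs the
    reviewer's username under the key "user".
    """
    return bool(reviews) and reviews[0]["user"] == submitter

def fetch_json(url):
    with urlopen(url) as resp:
        return json.load(resp)

def mine(server):
    """Yield IDs of review requests whose author was the first reviewer.

    The endpoint paths and field names below follow ReviewBoard's REST
    API as I understand it -- treat them as assumptions to verify.
    """
    payload = fetch_json(server + "/api/review-requests/?status=all")
    for req in payload["review_requests"]:
        submitter = req["links"]["submitter"]["title"]
        revs = fetch_json(server + "/api/review-requests/%d/reviews/" % req["id"])
        reviews = [{"user": r["links"]["user"]["title"]} for r in revs["reviews"]]
        if author_reviewed_first(submitter, reviews):
            yield req["id"]
```

The filtering logic is kept separate from the HTTP plumbing so it can be checked against canned data before pointing the script at a live server.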

Besides MarkUs and Basie, I chose to visit the Asterisk, KDE, and MusicBrainz projects.

Asterisk was a bust – of all of its review requests, not a single one returned a positive.

But I got a few blips on the others. Not many, but a few.

I read all of the author preparation for each blip, and broke down what I read into some generalizations.

So, now to the meat:  here are some generalizations of what the authors tended to say, in no particular order.  I’ve also included a few examples so you can check them out for yourselves.

“Here’s why I did this”

The author makes it explicit why a change was made in a particular way.


“Here’s what this part does…”

The author goes into detail about what a portion of their diff actually does.


“Can I get some advice on…”

The author isn’t entirely sure of something, and wants input from their peers.


“Whoops, I made a mistake / inserted a bug.  I’ll update the diff.”

The author has found a mistake in their code, and either indicates that they’ll update the diff in the review request, or change the code before it is committed.

“Whoops – that stuff isn’t supposed to be there.  Ignore.”

The author has accidentally inserted some code into the diff that they shouldn’t have.  They give their assurances that it’ll be removed before committing – reviewers are asked to ignore.


“Before you apply this patch, you should probably…”

The author believes that the reviewers will need to do something special, or out of the ordinary, in order to apply the diff.


“Hello?  Anyone there?”

The review request has been idle for a while without a single review.  The author pings everybody for some attention.


Anyhow, those are the general patterns that stand out.  I’ll post more if I find any.
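Once the categories are pinned down, a first pass at bucketing comments could even be automated with naive keyword matching.  A toy sketch – the category names mirror the patterns above, but the keyword lists are guesses, not derived from the actual data:

```python
# Toy keyword bucketing of author-preparation comments.
# Categories follow the patterns above; keyword lists are guesses.
PATTERNS = {
    "rationale":   ["because", "the reason", "why i"],
    "explanation": ["this part", "what this does"],
    "advice":      ["advice", "not sure", "thoughts?"],
    "mistake":     ["whoops", "oops", "my mistake", "bug"],
    "ignore":      ["ignore", "isn't supposed to be there"],
    "setup":       ["before you apply", "you'll need"],
    "ping":        ["ping", "bump"],
}

def classify(comment):
    """Return every category whose keywords appear in the comment."""
    text = comment.lower()
    return [name for name, keys in PATTERNS.items()
            if any(key in text for key in keys)]
```

A comment can land in more than one bucket, which matches what I saw: authors often mix rationale with a request for advice in the same annotation.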

Have you seen any other common patterns in author preparation?  What would you say, if you were preparing your code for someone else to review?  I’d love to hear any input.

PS:  If anyone is interested in getting the full list of author prepared review requests for these 4 projects, let me know, and I’ll toss up all the links.