Category Archives: Code Reviews

Author Preparation in Code Review: What Are Those Authors Saying?

If you recall, I’m looking at author preparation in code review, and whether it impairs reviewers’ ability to perform effective, objective reviews.

If this is really going to be my research project, I’ll need to get my feet a bit more wet before I design my experiment.  It’s all well and good to say that I’m studying author preparation…but I need to actually get a handle on what authors tend to say when they prepare their review requests.

So how am I going to find out the kinds of things that authors write during author preparation?  The MarkUs Project and the Basie Project both use ReviewBoard, so it’ll be no problem to grab some review requests from there.  But that’s a lot of digging if I do it by hand.

So I won’t do it by hand.  I’ll write a script.

You see, I’ve become pretty good at manipulating the ReviewBoard API.  So mining the MarkUs and Basie ReviewBoard instances should be a cinch.

But I’d like to go a little further. I want more data.  I want data from some projects outside of UofT.

Luckily, ReviewBoard has been kind enough to list several open source projects that are also using their software.  And some of those projects have their ReviewBoard instances open to the public.  So I just programmed my little script to visit those ReviewBoard instances, and return all of the review requests where the author of the request was the first person to make a review.  Easy.
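In case you’re curious, here’s roughly the shape of that script. It’s just a minimal sketch against ReviewBoard’s public Web API – the endpoint paths and field names come from the current API docs and may not match what every instance is running, and the server list is obviously a placeholder:

```python
import requests

# Placeholder list of public ReviewBoard instances to mine.
SERVERS = ["https://reviews.example-project.org"]

def author_prepared_requests(server):
    """Yield review requests whose submitter was also the first person to post a review."""
    start = 0
    while True:
        page = requests.get(
            server + "/api/review-requests/",
            params={"status": "all", "start": start, "max-results": 50},
        ).json()["review_requests"]
        if not page:
            break
        for rr in page:
            submitter = rr["links"]["submitter"]["title"]
            reviews = requests.get(
                "%s/api/review-requests/%d/reviews/" % (server, rr["id"])
            ).json()["reviews"]
            # Assuming reviews come back oldest-first, the first entry is the
            # first review posted against this request.
            if reviews and reviews[0]["links"]["user"]["title"] == submitter:
                yield rr
        start += len(page)

for server in SERVERS:
    for rr in author_prepared_requests(server):
        print(server, rr["id"], rr["summary"])
```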

Besides MarkUs and Basie, I chose to visit the Asterisk, KDE, and MusicBrainz projects.

Asterisk was a bust – of all of its review requests, not a single one came back positive.

But I got a few blips on the others. Not many, but a few.

I read all of the author preparation for each blip, and broke down what I read into some generalizations.

So, now to the meat:  here are some generalizations of what the authors tended to say, in no particular order.  I’ve also included a few examples so you can check them out for yourselves.

“Here’s why I did this”

The author makes it explicit why a change was made in a particular way.

Examples:

“Here’s what this part does…”

The author goes into detail about what a portion of their diff actually does.

Examples:

“Can I get some advice on…”

The author isn’t entirely sure of something, and wants input from their peers.

Examples:

“Whoops, I made a mistake / inserted a bug.  I’ll update the diff.”

The author has found a mistake in their code, and either indicates that they’ll update the diff in the review request, or change the code before it is committed.

“Whoops – that stuff isn’t supposed to be there.  Ignore.”

The author has accidentally inserted some code into the diff that they shouldn’t have.  They give their assurances that it’ll be removed before committing – reviewers are asked to ignore.

Examples:

“Before you apply this patch, you should probably…”

The author believes that the reviewers will need to do something special, or out of the ordinary, in order to apply the diff.

“…hello?”

The review request has been idle for a while without a single review.  The author pings everybody for some attention.

Examples:

Anyhow, those are the general patterns that stand out.  I’ll post more if I find any.

Have you seen any other common patterns in author preparation?  What would you say, if you were preparing your code for someone else to review?  I’d love to hear any input.

PS:  If anyone is interested in getting the full list of author-prepared review requests for these four projects, let me know, and I’ll toss up all the links.

A Sobering Post About Code Review From Microsoft

It’s easy to get on the code review bandwagon, and tout it as the “silver bullet” for bugs, or the key to developing awesome, elegant software, etc.  It’s easy to get carried away, and forget that code review should probably be accompanied by rigorous testing, static analysis, and security integration from day one.

While the purpose of this blog post by Shawn Hernan from Microsoft may be to attack or question the merits of open source software, I see it as an interesting discussion on the role of code review in software engineering and how it relates to writing secure code.

Insert your own joke about Microsoft security here.  I, personally, think their IE team should read Shawn’s post.

Particularly interesting is one of the comments to the post by “danclarke_2000”:

I think another point is the diminishing returns of code review. Each extra code review brings less value than the preceding; review comments can already be known and awaiting action, not important enough to change, etc.

Having extra eyes reviewing code means generating extra code review output. Here is the true cost: all the code review comments of the many eyes have to pass through the bottleneck of the few people who have authority to make changes. As each extra review has less value, processing the extra reviews has a higher and higher opportunity cost.

Sound kind of familiar?

Anyhow, Hernan’s post is an interesting read.  Click here to check it out.

UPDATE:

Here’s a quote from Joshua Bloch of Google on a similar topic:

…We programmers need all the help we can get, and we should never assume otherwise. Careful design is great. Testing is great. Formal methods are great. Code reviews are great. Static analysis is great. But none of these things alone are sufficient to eliminate bugs: They will always be with us. A bug can exist for half a century despite our best efforts to exterminate it. We must program carefully, defensively, and remain ever vigilant.

Read the entire post here.

Take Those Code Review Requests for a TestDrive…

Remember how I wrote a while back that I wanted to write a script to let me do some quick and easy pre-commit continuous integration with the MarkUs project?

Well, I think I just wrote one.

Introducing TestDrive…

TestDrive will fetch a review request, grab the latest diff (yes, I found an easy way past the lack of an API there), check out a fresh copy of MarkUs, throw down the diff, set it up with some SQLite3 databases, run your tests, and voilà – go to localhost:3000, and you’re running the review request diff.
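If you’re wondering what that actually looks like, here’s a rough Python sketch of the workflow – not the real TestDrive code. The ReviewBoard raw-diff URL, the SVN URL, the database.yml shuffling, and the rake tasks are placeholders/assumptions for illustration:

```python
import subprocess
import requests

REVIEWBOARD = "https://review.example.org"            # placeholder instance URL
MARKUS_REPO = "http://example.org/svn/markus/trunk"   # placeholder checkout URL

def run(cmd, cwd=None):
    """Run a shell command, bailing out if it fails."""
    subprocess.run(cmd, shell=True, cwd=cwd, check=True)

def test_drive(review_request_id, workdir="markus-testdrive"):
    # 1. Grab the latest diff attached to the review request
    #    (ReviewBoard serves a raw diff download at /r/<id>/diff/raw/).
    diff = requests.get(
        "%s/r/%d/diff/raw/" % (REVIEWBOARD, review_request_id)
    ).text

    # 2. Check out a fresh copy of MarkUs and throw the diff down onto it.
    run("svn checkout %s %s" % (MARKUS_REPO, workdir))
    with open(workdir + "/review.patch", "w") as f:
        f.write(diff)
    run("patch -p0 < review.patch", cwd=workdir)

    # 3. Point the app at throwaway SQLite3 databases and load the schema.
    run("cp config/database.yml.sqlite3 config/database.yml", cwd=workdir)
    run("rake db:schema:load", cwd=workdir)

    # 4. Run the test suite, then boot the dev server on localhost:3000.
    run("rake test", cwd=workdir)
    run("script/server", cwd=workdir)
```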

I’ve been using it myself for about a week or so, and so far, it’s helped me catch a number of bugs that I wouldn’t have caught just by looking at the code in ReviewBoard.  Nice.

Click here to check out TestDrive.

Pants First, Then Shoes: More Argument for Pre-Commit Code Review

In my opinion, at least for The MarkUs Project, post-commit code review would probably be analogous to putting on your shoes before your pants.  And though I mentioned earlier that there is plenty of preference for post-commit, I forgot to include this juicy little tidbit.

Click here to read one of the developers of ReviewBoard state his case for pre-commit code review.

To each their own.  But I dig his points.

Screencasting Code Reviews is Hard

I’ve been trying to record myself performing code reviews for The MarkUs Project.

It’s a lot harder than I thought it’d be.  The screencasts are really only useful if I’m saying what I’m thinking, and I’m finding it difficult to maintain that stream of consciousness while performing an effective, thorough review.  The last few times I’ve tried it, I’ve found myself blurting an expletive, stopping the recording in frustration, and then starting the review over so that I can do a good, proper job.

I think this is going to take more practice.