
Pants First, Then Shoes: More Argument for Pre-Commit Code Review

In my opinion, at least for The MarkUs Project, post-commit code review would probably be analogous to putting on your shoes before your pants.  And though I mentioned earlier that there is plenty of preference for post-commit, I forgot to include this juicy little tidbit.

Click here to read one of the developers of ReviewBoard state his case for pre-commit code review.

To each their own.  But I dig his points.

The Importance of First Impressions: How Theatre Criticism Might Inform Peer Code Review

Discussion Plays

I have seen plays that have very clear stories, and very clear plots.  I leave the theatre knowing what has happened, and I can be pretty confident that the people who sat around me in the theatre all got the same message as I did.

I have also seen plays that are completely the opposite.  There doesn’t appear to be a story.  There doesn’t appear to be plot.  There are no real characters.  For these plays, all of a sudden, I have to do the work in order to make sense of it all.  And you can be pretty sure that every single audience member got something different out of it.

I want to talk about this second kind of play.  For now, I’m going to call this kind of play a discussion play, because for me, the best part about these kinds of plays is the discussion I have with my friends afterwards. We’ll all sit down in a restaurant or a cafe, order some food, and try to figure out what the hell we just saw.  Theories are tossed around.  Everybody brings their own unique impressions and observations to the table.  A very rich ecosystem of ideas develops.

Back to Peer Code Reviews

(trust me, this all ties together in the end)

When Jason Cohen did his Peer Review at Cisco Study, he noticed that code that had been prepared by the author for review seemed to have a lower defect density than code that had not been prepared.

What do I mean by prepared?  I’ll let Jason Cohen explain:

The idea of “author preparation” is that authors should annotate their source code before the review begins.  Annotations guide the reviewer through the changes, showing which files to look at first and defending the reason and methods behind each code modification.  The theory is that because the author has to re-think all the changes during the annotation process, the author will himself uncover most of the defects before the review even begins, thus making the review itself more efficient.  Reviewers will uncover problems the author truly would not have thought of otherwise.

(Best Kept Secrets of Peer Code Review, p80-81)

Looking at the data, author preparation does seem to have a palpable effect.  As Cohen notes, “for all reviews with at least one author preparation comment, defect density is never over 30; in fact the most common case is for there to be no defects at all!”.

The study has two explanations for this:

  1. Authors gave their code such a thorough look while annotating it that most defects were eliminated right off the bat.
  2. Since authors were actively explaining, or defending, their code, this sabotaged the reviewers’ ability to do their job effectively.

Cohen buys into the first explanation.  He writes:

A survey of the reviews in question show the author is being conscientious, careful, and helpful, and not misleading the reviewer.  Often the reviewer will respond to or ask a question or open a conversation on another line of code, demonstrating that he was not dulled by the author’s annotations.

I have huge respect for this study.  But I don’t entirely buy this explanation.  As Cohen later mentioned in an email to me, this conclusion is not derived from a controlled experiment, and also suffers from selection bias.

Back to those Discussion Plays

One of the worst things that can happen to me before going into a discussion play is for someone who has already seen it to tell me their impressions of what they thought was going on.  As soon as I hear their opinion, my own objectivity is compromised.  Whether I want to or not, I’ll have their impressions in the back of my mind, and I’ll be using it as a measuring stick or reference point for my own opinions and critiques. They’ve carved a cognitive path through the work, and I’m doomed to notice that path, and react to it.

This is horrible.  This limits me.  This more or less hobbles my ability to contribute something unique to the pool of ideas and criticisms in the after-play discussion.  Every impression I have is tainted by someone else’s first impression.

Don’t get me wrong – I love hearing about everyone’s impressions.  But only after I have formed my own.  A group of us watching a discussion play will each carve a unique cognitive path through the work without influencing one another.  When we finally open up and present these paths and ideas to one another over food and drink, I believe we cover more ground.

I have no data to back this up.  Only years of theatre-going experience.

A Code Review Anecdote

I recently received an email from a colleague of mine.  She wanted me to go over some of her Javascript to make sure it was up to snuff, since she was relatively new to the language.  I noticed that she had also sent a copy of the email to another developer who has pretty sharp Javascript chops.

When I finally had some free time, I went back to her email to write up the review.  I felt bad – it was late, and the other reviewer hadn’t made a peep on the email thread, and she was hoping to use the code relatively soon.  So I dove in, wrote my review, and sent it off.

A little while later, the other developer sent me his review, saying:

And here was my answer, which I didn’t send to you so as not to influence your reply.  😉

So the author of the code received two unique reviews, and neither of them had influenced the other.  When I read his review, I noticed that we covered some similar ground, but a lot of unique ground as well.  I suspect this wouldn’t have been the case had he sent his review to me first.

The Hypothesis

I hypothesize that author preparation in code review sabotages reviewers’ ability to objectively carve their own unique cognitive paths through the code.  They see things from the author’s point of view, and this dulls their critical eye.  Because of this, I believe fewer defects are detected.

I will take this hypothesis one step further.

I suspect any review, by the author or otherwise, will taint future reviews.  If someone has already reviewed some code, I suspect this review will impact and possibly limit the ability of other reviewers to look at the code objectively.  Like author preparation, I suspect this prevents reviewers from getting their own unique, valuable first impressions of the code.  And I suspect that this causes some defects to go undetected.

Testing This Hypothesis

It’s a simple idea really.  Take a chunk of code, and get some number of developers to review it.  Take this same code, add some author preparation comments, and get more developers to review it.  Do all of the normal balancing, etc.

The question:  does the number of detected defects drop?  If so, this looks like evidence that author preparation sabotages review ability.

Take the experiment one step further.  Take some code, have someone else review it, and then have participants review this code, having seen the first review.  What happens to the number and type of defects that they find?  What happens if they don’t see that initial review?  Which setup yields higher defect detection?

Sounds doable.  Sounds interesting.  Sounds like something that would answer a few questions.

Implications and Ideas

So what if one or both of my hypotheses are true?  What does this mean for peer code review?

Well, if author preparation alone sabotages review ability, then the answer is simple:  don’t let the authors prepare the review.  The code goes up, and they stay silent.

But what if both are true?

An idea:  how about I tweak MarkUs’s ReviewBoard so that reviewers cannot see what other reviewers have said until they’ve given one review?  What would happen to the defect detection numbers?  Would reviewers react negatively to this?  Would there be lots of repetition in the comments?  Sounds like something worth looking into.
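To make the idea concrete, here’s the rule I have in mind as a tiny standalone Ruby sketch – it is not an actual ReviewBoard patch (ReviewBoard itself is Python/Django), and every name in it is made up purely for illustration:

    # The "blind first review" rule, sketched in plain Ruby.  Review,
    # visible_reviews, and the sample names are all hypothetical.
    Review = Struct.new(:reviewer, :body)

    # Show other people's reviews only after the viewer has submitted a
    # review of their own (the author of the change sees everything).
    def visible_reviews(all_reviews, viewer, change_author)
      return all_reviews if viewer == change_author
      own    = all_reviews.select { |r| r.reviewer == viewer }
      others = all_reviews.reject { |r| r.reviewer == viewer }
      own.empty? ? own : own + others
    end

    reviews = [Review.new("alice", "Looks good to me."),
               Review.new("bob",   "Needs a test for the nil case.")]

    p visible_reviews(reviews, "carol", "dave")  # => []  (carol hasn't reviewed yet)
    p visible_reviews(reviews, "alice", "dave")  # => alice's review, then bob's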

I’d love to hear some thoughts on this.  Anyone?

Pre-Commit Code Review in MarkUs Development

So, for my Master’s thesis, I’ve pretty much set my heart on code review as my research area.  Here’s why:  it works.  It makes your code better.  It helps you find bugs.  And I’m not just quoting something overheard in a pub – there’s evidence to back these claims up.

And then there’s my own experience to boot – the MarkUs team has been using ReviewBoard as our pre-commit code review tool since last summer, and I wouldn’t ever go back.  If I ever have to work in a shop that doesn’t perform code reviews, I’ll campaign my butt off.

Having said all that, pre-commit reviews certainly aren’t for everyone.  Some downsides of pre-commit over post-commit:

  • It goes against “check in early, check in often”
  • “The major downside of pre-checkin code review is that it puts a major bottle neck on getting changes into the system for other developers to integrate with early enough.” (from this link)
  • For some applications, testing takes hours on end.  Why wait?  Might as well toss it into the repo, let the Continuous Integration build it, and just see what happens.

There are probably more.

My response:  at least for MarkUs, pre-commit code reviews are working just fine, thank you very much.  At least we’re reviewing it – and any review is better than no review.  But to continue my response, here are a couple of advantages of pre-commit code review for the MarkUs development team:

  1. Since most students working on MarkUs are doing it for half-credits, this means there’s a lot of turnover every semester.  ReviewBoard lowers the chance of our new hotshot developers accidentally slipping something ridiculous into the repository and having to do that neat Subversion trick of pulling it out again.  This is the obvious one.
  2. It helps all developers keep track of what everyone else is doing.  This is true for post-commit reviews too, but it’s certainly worth the mention.  It sure beats reading SVN log messages…
  3. It’s a great arena for new developers to ask questions.  Our new developers this semester have been very active on our ReviewBoard, asking plenty of questions about things that are showing up in the diffs under review.  Sometimes, “theoretical” code is posted to demonstrate how something would be done.  Post-commit does not support this nicely.
  4. It’s an excellent way of showing how you’re coming along with a task, without the embarrassment of breaking the build.  MarkUs developers sometimes post up “sneak previews” just to give everybody a taste of how their particular task is coming.  This “sneak preview” gives the opportunity for other developers to critique the direction that the submitter is going in, and offer pointers in case they seem to be heading off in a hazardous direction.

Yep, there’s just something so satisfying about seeing all of those little green “Ship It”s, and then firing off your code into the repository… it’s positive reinforcement for code reviews.  And it’s strangely addictive to me.

Another Idea to Augment This Process

A little while ago, I wrote about what I consider to be one of the Achilles’ Heels of Peer Code Review.  Here’s another one: at least for ReviewBoard, during a review all you’re looking at is the code.  That’s all fine and dandy if you want to look at the logic of the code…but what if you want to try it?  Does trying it out help find more bugs than just looking at it?

Well, at least for MarkUs, it’s helped.  I’ve recently started checking out a fresh copy of MarkUs every time a review request is put up, and splat a copy of the diff under review on top.  I run the test suites, and if they pass, I drive it around.  I try out the new features that the diff supposedly adds, or try to recreate the bug that the diff supposedly fixes.

And I’ve caught a few bugs this way.  This is because ReviewBoard is good at showing me what is in the code, but is bad at telling me what is not there.  And that’s perfectly understandable – it’s not psychic.

So here’s an idea – how about writing a little script that checks ReviewBoard for new review requests?  When it finds one, it checks out a brand new copy of MarkUs, splats down the diff under review over top, runs the tests, and then posts back as a ReviewBoard reviewer how many tests passed, how many failed, etc. If we wanted to get fancy, the script could even do some commenting on the code – maybe using Roodi, Flog, Flay, or some of those other sadistically named Ruby tools to say things about the diff.  The script would be another reviewer.

And then the kicker – the script posts a link in its review where developers can try out a running instance of MarkUs with the applied diff.
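Roughly, the script I have in mind would look something like this.  It’s a sketch, not working code: fetch_new_review_requests, download_diff, and post_review are helpers I’m assuming into existence (and as I note below, ReviewBoard’s API can’t actually hand me the diff yet), and the repository URL is a placeholder.

    #!/usr/bin/env ruby
    # Sketch of a "pre-commit continuous integration" reviewer for MarkUs.
    # The three ReviewBoard-facing helpers below are hypothetical.

    REPO_URL = "http://example.com/svn/markus/trunk"  # placeholder URL

    def check_out_and_test(request)
      workdir = "/tmp/markus-review-#{request[:id]}"
      system("svn", "checkout", "--quiet", REPO_URL, workdir) or return "Checkout failed."

      # Splat the diff under review on top of the fresh checkout.
      diff = download_diff(request)                        # hypothetical helper
      IO.popen(["patch", "-p0", "-d", workdir], "w") { |io| io.write(diff) }

      # Run the test suite and report back.
      Dir.chdir(workdir) do
        system("rake", "test") ? "All tests passed." : "Some tests failed – see the build log."
      end
    end

    fetch_new_review_requests.each do |request|            # hypothetical helper
      summary = check_out_and_test(request)
      post_review(request, summary)                        # hypothetical: the script is just another reviewer
    end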

Want a fancy name to kick around the office?  Call it pre-commit continuous integration. I just checked – it’s not a common term, but I’m not the first to use it.  Again, so much for being cutting edge.

Would this be useful?  It’s possible that the Roodi/Flog/Flay stuff would bring too much noise to the review process – that’s something to toy with later.  But what about the link to the running instance?  Will that little feature help catch more bugs in MarkUs?  How about for Basie?

I’m curious to find out.

Unfortunately, ReviewBoard doesn’t let me download diffs through its API just yet…if schoolwork lets up for a few days, I’ll look into changing that.

I’d love to hear your thoughts.

The Achilles’ Heel of Light-Weight Code Review

So I had my weekly meeting with my supervisor, and fellow students Zuzel and Jon Pipitone.  Something interesting popped up, and I thought I’d share it.

If it wasn’t already clear, I dig code review.  I think it really helps get a team of developers more in tune with what’s going on in their repository, and is an easy way to weed out mistakes and bad design in small chunks of code.

But there’s a fly in the soup.

This semester, the MarkUs project has been using ReviewBoard to review all commits to the repository.  We’ve caught quite a few things, and we’ve been learning a lot.

Now for that fly:

A developer recently found a typo in our code base.  An I18n key was misspelled in one of our Views as I18n.t(:no_students_wihtou_a_group_message).  One of our developers saw this, fixed the typo, and submitted the diff for review.

I was one of the reviewers.  And I gave it the green light.  It made sense – I18n.t(:no_students_wihtou_a_group_message) is clearly wrong, and I18n.t(:no_students_without_a_group_message) is clearly what was meant here.

So the review got the “Ship It”, and the code was committed.  What we didn’t catch, however, was that the locale string was actually named “no_student_without_a_group_message” in the translation file, not “no_students_without_a_group_message”.  So the fix didn’t work.

This is important:  the diff looked good, but the bug remained because we didn’t have more information on the context of the bug.  We had no information about I18n.t(:no_students_without_a_group_message) besides the fact that I18n.t(:no_students_wihtou_a_group_message) looked wrong.
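To make the mismatch concrete, here’s roughly what the two sides looked like.  I’m reconstructing this from memory – the en.yml path is just the usual Rails locale convention, not a quote from our actual file:

    # config/locales/en.yml defines the key with a singular "student":
    #
    #   en:
    #     no_student_without_a_group_message: "..."

    # What the View called before the fix (the typo everyone could see):
    I18n.t(:no_students_wihtou_a_group_message)

    # What the diff changed it to – spelled correctly, but still plural,
    # so it still doesn't match the key in the locale file:
    I18n.t(:no_students_without_a_group_message)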

Which brings me back to the conversation we had yesterday:  while it seems plausible that code review helps catch defects in small code blocks, does the global defect count on the application actually decrease?  Since ReviewBoard doesn’t have any static analysis tools to check what our diffs are doing, isn’t it plausible that while our diffs look good, we’re not preventing ourselves from adding new bugs into the code base?

So, the question is:  does light-weight code review actually decrease the defect count across an application as a whole?

If not, can we augment these code review tools so that they’re more sensitive to the context of the diffs that are being reviewed?

Research Question Idea #3

When we started using ReviewBoard with MarkUs a few months back, all of a sudden, commits to the repository seemed to slow down: we would take more time cleaning up our code, and polishing it for others to see.

Our commits were usually quite large too.  This is because we were all working on different sections of the code, and we wanted to commit stuff that “instantly worked” and was “instantly perfect”.  So after days of silence, 1000 lines of code would suddenly go up for review…and as Jason Cohen can probably tell you, the density of defects found during review drops off sharply as the amount of code under review grows.  So, the reviewer would skip through 1000 lines, assume most of it was OK, and give it the Ship It.

Yeah, I know.  Awful.  I wonder if this is a standard newbie mistake for student groups just starting out with code review…

So, study idea:

Have two separate groups working on some assignment.  Have Group 1 commit to their repository without any review process.  Have Group 2 do pre-commit reviews using a tool like ReviewBoard.

Now check out the size, frequency, and readability of the repository diffs of each group.  Might generate some interesting data.

Anyhow, in our defence, we seem to have calmed down on MarkUs.  Diffs up for review are pretty small, and get posted relatively frequently.  Using ReviewBoard on MarkUs has made me a believer.  Testify!