
Author Preparation in Code Review: What Are Those Authors Saying?

If you recall, I’m looking at author preparation in code review, and whether it impairs reviewers’ ability to perform objective, effective reviews.

If this is really going to be my research project, I’ll need to get my feet a bit more wet before I design my experiment.  It’s all well and good to say that I’m studying author preparation…but I need to actually get a handle on what authors tend to say when they prepare their review requests.

So how am I going to find out the kinds of things that authors write during author preparation?  The MarkUs Project and the Basie Project both use ReviewBoard, so it’ll be no problem to grab some review requests from there.  But that’s a lot of digging if I do it by hand.

So I won’t do it by hand.  I’ll write a script.

You see, I’ve become pretty good at manipulating the ReviewBoard API.  So mining the MarkUs and Basie ReviewBoards should be a cinch.

But I’d like to go a little further. I want more data.  I want data from some projects outside of UofT.

Luckily, ReviewBoard has been kind enough to list several open source projects that are also using their software.  And some of those projects have their ReviewBoard instances open to the public.  So I just programmed my little script to visit those ReviewBoard instances, and return all of the review requests where the author of the request was the first person to make a review.  Easy.
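To give you a flavour of the approach, here’s a stripped-down sketch of the kind of script I mean, written against ReviewBoard’s JSON web API.  The resource paths and field names (things like /api/review-requests/ and each request’s reviews resource) are how I remember the API – treat them as assumptions rather than gospel – and the server URL below is just a placeholder.

```python
# rb_author_prep.py -- a rough sketch, not the exact script I used.
# Assumes ReviewBoard's JSON web API and the `requests` library; the field
# names below are from memory and may differ between ReviewBoard versions.
import requests


def author_prepared_requests(server, page_size=50):
    """Yield URLs of review requests whose submitter was also their first reviewer."""
    start = 0
    while True:
        resp = requests.get(f"{server}/api/review-requests/",
                            params={"status": "all",
                                    "start": start,
                                    "max-results": page_size})
        resp.raise_for_status()
        page = resp.json()["review_requests"]
        if not page:
            break

        for rr in page:
            submitter = rr["links"]["submitter"]["title"]
            reviews = requests.get(rr["links"]["reviews"]["href"]).json()["reviews"]
            if not reviews:
                continue
            # Don't count on server-side ordering -- find the earliest review ourselves.
            first = min(reviews, key=lambda r: r["timestamp"])
            if first["links"]["user"]["title"] == submitter:
                # "Author preparation": the submitter reviewed their own request first.
                yield f"{server}/r/{rr['id']}/"

        start += page_size


if __name__ == "__main__":
    # Placeholder URL -- point this at a real, public ReviewBoard instance.
    for url in author_prepared_requests("http://reviewboard.example.org"):
        print(url)
```

The idea is simple: page through every review request, pull its reviews, and flag the ones where the earliest review came from the submitter themselves.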

Besides MarkUs and Basie, I chose to visit the Asterisk, KDE, and MusicBrainz projects.

Asterisk was a bust – of all of their review requests, not a single one came up positive.

But I got a few blips on the others. Not many, but a few.

I read all of the author preparation for each blip, and broke down what I read into some generalizations.

So, now to the meat:  here are some generalizations of what the authors tended to say, in no particular order.  If you’d like to check out specific examples for yourselves, see the PS at the end of this post.

“Here’s why I did this”

The author makes it explicit why a change was made in a particular way.


“Here’s what this part does…”

The author goes into detail about what a portion of their diff actually does.


“Can I get some advice on…”

The author isn’t entirely sure of something, and wants input from their peers.


“Whoops, I made a mistake / inserted a bug.  I’ll update the diff.”

The author has found a mistake in their code, and either indicates that they’ll update the diff in the review request, or change the code before it is committed.

“Whoops – that stuff isn’t supposed to be there.  Ignore.”

The author has accidentally inserted some code into the diff that they shouldn’t have.  They give their assurances that it’ll be removed before committing – reviewers are asked to ignore.


“Before you apply this patch, you should probably…”

The author believes that the reviewers will need to do something special, or out of the ordinary, in order to apply the diff.

“…hello?”

The review request has been idle for a while without a single review.  The author pings everybody for some attention.


Anyhow, those are the general patterns that stand out.  I’ll post more if I find any.

Have you seen any other common patterns in author preparation?  What would you say, if you were preparing your code for someone else to review?  I’d love to hear any input.

PS:  If anyone is interested in getting the full list of author prepared review requests for these 4 projects, let me know, and I’ll toss up all the links.

Pre-Commit Code Review in MarkUs Development

So, for my Master’s thesis, I’ve pretty much set my heart on code review as my research area.  Here’s why:  it works.  It makes your code better.  It helps you find bugs.  And I’m not just quoting something overheard in a pub – there’s evidence to back these claims up.

And then there’s my own experience to boot – the MarkUs team has been using ReviewBoard as our pre-commit code review tool since last summer, and I wouldn’t ever go back.  If I ever have to work in a shop that doesn’t perform code reviews, I’ll campaign my butt off.

Having said all that, pre-commit reviews certainly aren’t for everyone.  Some downsides of pre-commit over post-commit:

  • It goes against “check in early, check in often”
  • “The major downside of pre-checkin code review is that it puts a major bottle neck on getting changes into the system for other developers to integrate with early enough.” (from this link)
  • For some applications, testing takes hours on end.  Why wait?  Might as well toss it into the repo, let the Continuous Integration build it, and just see what happens.

There are probably more.

My response:  at least for MarkUs, pre-commit code reviews are working just fine, thank you very much.  At least we’re reviewing it – and any review is better than no review.  But to continue my response, here are a couple of advantages of pre-commit code review for the MarkUs development team:

  1. Since most students working on MarkUs are doing it for half-credits, there’s a lot of turnover every semester.  ReviewBoard lowers the chance of our new hotshot developers accidentally slipping something ridiculous into the repository and having to do that neat Subversion trick of pulling it out again.  This is the obvious one.
  2. It helps all developers keep track of what everyone else is doing.  This is true for post-commit reviews too, but it’s certainly worth the mention.  It sure beats reading SVN log messages…
  3. It’s a great arena for new developers to ask questions.  Our new developers this semester have been very active on our ReviewBoard, asking plenty of questions about things that are showing up in the diffs under review.  Sometimes, “theoretical” code is posted to demonstrate how something would be done.  Post-commit does not support this nicely.
  4. It’s an excellent way of showing how you’re coming along with a task, without the embarrassment of breaking the build.  MarkUs developers sometimes post up “sneak previews” just to give everybody a taste of how their particular task is coming.  These previews give other developers the opportunity to critique the direction the submitter is going in, and to offer pointers if they seem to be heading somewhere hazardous.

Yep, there’s just something so satisfying about seeing all of those little green “ship-it’s”, and then firing off your code into the repository… it’s positive reinforcement for code reviews.  And it’s strangely addictive to me.

Another Idea to Augment This Process

A little while ago, I wrote about what I consider to be one of the Achilles’ Heels of Peer Code Review.  Here’s another one: at least for ReviewBoard, during a review all you’re looking at is the code.  That’s all fine and dandy if you want to look at the logic of the code…but what if you want to try it?  Does trying it out help find more bugs than just looking at it?

Well, at least for MarkUs, it’s helped.  I’ve recently started checking out a fresh copy of MarkUs every time a review request is put up, and splatting a copy of the diff under review on top.  I run the test suites, and if they pass, I drive it around.  I try out the new features that the diff supposedly adds, or try to recreate the bug that the diff supposedly fixes.
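If you’re curious, that checkout-splat-test routine is easy to script.  Here’s a rough sketch in Python – it assumes the project lives in Subversion (the repository URL below is a placeholder), that the diff applies cleanly with patch -p0 (the usual case for svn diff output), and that rake test is the test command; substitute whatever your project actually uses.

```python
# apply_and_test.py -- a rough sketch of the "fresh checkout + diff on top +
# run the tests" routine.  Assumptions: Subversion, `patch -p0`, `rake test`.
import subprocess
import tempfile

REPO_URL = "http://svn.example.org/markus/trunk"   # placeholder repository URL


def apply_and_test(diff_text):
    """Check out a clean working copy, apply the diff, and run the test suite.

    Returns (passed, output) so a caller can report the result elsewhere.
    """
    workdir = tempfile.mkdtemp(prefix="review-")
    subprocess.run(["svn", "checkout", "--quiet", REPO_URL, workdir], check=True)

    # Splat the diff under review on top of the fresh checkout.
    subprocess.run(["patch", "-p0"], input=diff_text, text=True,
                   cwd=workdir, check=True)

    # Run the suite and capture the output so failures can be summarized later.
    # (In real life you'd also install gems, set up the test database, etc.)
    result = subprocess.run(["rake", "test"], cwd=workdir,
                            capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr
```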

And I’ve caught a few bugs this way.  This is because ReviewBoard is good at showing me what is in the code, but is bad at telling me what is not there.  And that’s perfectly understandable – it’s not psychic.

So here’s an idea – how about writing a little script that checks ReviewBoard for new review requests?  When it finds one, it checks out a brand new copy of MarkUs, splats down the diff under review on top, runs the tests, and then posts back, as a ReviewBoard reviewer, how many tests passed, how many failed, etc.  If we wanted to get fancy, the script could even do some commenting on the code – maybe using Roodi, Flog, Flay, or some of those other sadistically named Ruby tools to say things about the diff.  The script would be another reviewer.

And then the kicker – the script posts a link in its review where developers can try out a running instance of MarkUs with the applied diff.
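Here’s a sketch of what that reviewer-script might look like, reusing the apply_and_test() helper from the sketch above (saved as apply_and_test.py).  Fair warning on the assumptions: it grabs the diff from ReviewBoard’s raw-diff URL (/r/<id>/diff/raw/) rather than the API – as I mention below, the API can’t serve diffs yet – the resource paths and parameters for posting a review are how I remember the web API, and reviewbot is a made-up account for the script to post from.

```python
# reviewbot.py -- a sketch of the script-as-reviewer idea, not a finished tool.
import requests

from apply_and_test import apply_and_test   # the helper sketched earlier

RB_SERVER = "http://reviewboard.example.org"   # placeholder ReviewBoard instance
AUTH = ("reviewbot", "secret")                 # made-up account for the bot


def pending_review_requests():
    """Fetch open review requests from the JSON web API."""
    resp = requests.get(f"{RB_SERVER}/api/review-requests/",
                        params={"status": "pending", "max-results": 25})
    resp.raise_for_status()
    return resp.json()["review_requests"]


def fetch_diff(request_id):
    """Grab the latest diff as a raw patch via the web UI's raw-diff URL."""
    resp = requests.get(f"{RB_SERVER}/r/{request_id}/diff/raw/")
    resp.raise_for_status()
    return resp.text


def post_review(request_id, passed, output):
    """Post the result back to the review request as an ordinary review."""
    verdict = "All tests passed." if passed else "Some tests FAILED."
    body = f"{verdict}\n\nLast few lines of output:\n{output[-2000:]}"
    requests.post(f"{RB_SERVER}/api/review-requests/{request_id}/reviews/",
                  data={"body_top": body, "public": 1},
                  auth=AUTH).raise_for_status()


if __name__ == "__main__":
    for rr in pending_review_requests():
        passed, output = apply_and_test(fetch_diff(rr["id"]))
        post_review(rr["id"], passed, output)
```

Wire that into cron, or some kind of new-request hook, and the script really does become just another reviewer on every request.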

Want a fancy name to kick around the office?  Call it pre-commit continuous integration. I just checked – it’s not a common term, but I’m not the first to use it.  Again, so much for being cutting edge.

Would this be useful?  It’s possible that the Roodi/Flog/Flay stuff would bring too much noise to the review process – that’s something to toy with later.  But what about the link to the running instance?  Will that little feature help catch more bugs in MarkUs?  How about for Basie?

I’m curious to find out.

Unfortunately, ReviewBoard doesn’t let me download diffs through its API just yet…if schoolwork lets up for a few days, I’ll look into changing that.

I’d love to hear your thoughts.