And then there’s my own experience to boot – the MarkUs team has been using ReviewBoard as our pre-commit code review tool since last summer, and I wouldn’t ever go back. If I ever have to work in a shop that doesn’t perform code reviews, I’ll campaign my butt off.
Having said all that, pre-commit reviews certainly aren’t for everyone. Some downsides of pre-commit over post-commit:
- It goes against “check in early, check in often”
- “The major downside of pre-checkin code review is that it puts a major bottleneck on getting changes into the system for other developers to integrate with early enough.” (from this link)
- For some applications, testing takes hours on end. Why wait? Might as well toss it into the repo, let the Continuous Integration build it, and just see what happens.
There are probably more.
My response: at least for MarkUs, pre-commit code reviews are working just fine, thank you very much. At the very least we’re reviewing – and any review is better than no review. To expand on that, here are a few advantages of pre-commit code review for the MarkUs development team:
- Since most students working on MarkUs are doing it for a half-credit course, there’s a lot of turnover every semester. ReviewBoard lowers the chance of a new hotshot developer accidentally slipping something ridiculous into the repository and our having to do that neat Subversion trick of pulling it out again. This is the obvious one.
- It helps all developers keep track of what everyone else is doing. This is true for post-commit reviews too, but it’s certainly worth the mention. It sure beats reading SVN log messages…
- It’s a great arena for new developers to ask questions. Our new developers this semester have been very active on our ReviewBoard, asking plenty of questions about things that are showing up in the diffs under review. Sometimes, “theoretical” code is posted to demonstrate how something would be done. Post-commit does not support this nicely.
- It’s an excellent way of showing how you’re coming along with a task, without the embarrassment of breaking the build. MarkUs developers sometimes post up “sneak previews” just to give everybody a taste of how their particular task is coming. This “sneak preview” gives the opportunity for other developers to critique the direction that the submitter is going in, and offer pointers in case they seem to be heading off in a hazardous direction.
Yep, there’s just something so satisfying about seeing all of those little green “ship-it’s”, and then firing off your code into the repository… it’s positive reinforcement for code reviews. And it’s strangely addictive to me.
Another Idea to Augment This Process
A little while ago, I wrote about what I consider to be one of the Achilles’ Heels of Peer Code Review. Here’s another one: at least for ReviewBoard, during a review all you’re looking at is the code. That’s all fine and dandy if you want to look at the logic of the code…but what if you want to try it? Does trying it out help find more bugs than just looking at it?
Well, at least for MarkUs, it’s helped. I’ve recently started checking out a fresh copy of MarkUs every time a review request is put up, and splat a copy of the diff under review on top. I run the test suites, and if they pass, I drive it around. I try out the new features that the diff supposedly adds, or try to recreate the bug that the diff supposedly fixes.
And I’ve caught a few bugs this way. This is because ReviewBoard is good at showing me what is in the code, but is bad at telling me what is not there. And that’s perfectly understandable – it’s not psychic.
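That routine is simple enough to script. Here’s a minimal sketch in Ruby – the repository URL, diff filename, and sandbox directory are all hypothetical placeholders, not MarkUs’s real layout:

```ruby
# Sketch of the manual review routine described above: fresh checkout,
# apply the diff under review, run the test suite. All paths and URLs
# are hypothetical.
def review_commands(svn_url, diff_path, workdir = "review_sandbox")
  [
    "svn checkout #{svn_url} #{workdir}",     # brand-new working copy
    "patch -p0 -d #{workdir} < #{diff_path}", # splat the diff on top
    "cd #{workdir} && rake test"              # run the suites
  ]
end

# Each string would be handed to system(); printing them keeps the
# sketch side-effect-free.
cmds = review_commands("http://svn.example.com/markus/trunk", "review-42.diff")
cmds.each { |cmd| puts cmd }
```

After the suites pass, the “drive it around” step is still manual – fire up the app from the sandbox and poke at whatever the diff claims to add or fix.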
So here’s an idea: how about a little script that checks ReviewBoard for new review requests? When it finds one, it checks out a brand new copy of MarkUs, splats the diff under review over top, runs the tests, and then posts back as a ReviewBoard reviewer how many tests passed, how many failed, and so on. If we wanted to get fancy, the script could even comment on the code itself – maybe using Roodi, Flog, Flay, or some of those other sadistically named Ruby tools to say things about the diff. The script would be just another reviewer.
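The “posts back how many tests passed” part only needs the bot to parse the runner’s summary line and turn it into a review comment. A sketch, assuming Test::Unit’s “N tests, N assertions, N failures, N errors” summary format – the wording of the posted comment is made up:

```ruby
# Sketch of the reporting half of the proposed bot: parse the test
# runner's summary line and compose the comment it would post back
# to ReviewBoard. Assumes Test::Unit's summary format.
def summarize(runner_output)
  m = runner_output.match(/(\d+) tests, \d+ assertions, (\d+) failures, (\d+) errors/)
  return "Could not find a test summary." unless m
  tests, failures, errors = m[1].to_i, m[2].to_i, m[3].to_i
  passed = tests - failures - errors
  "#{passed}/#{tests} tests passed (#{failures} failures, #{errors} errors)."
end

puts summarize("Finished in 42.1 seconds.\n120 tests, 348 assertions, 3 failures, 1 errors")
# → 116/120 tests passed (3 failures, 1 errors).
```

Posting the comment itself would go through ReviewBoard’s web API, which is exactly the part that doesn’t support this workflow yet (see below).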
And then the kicker – the script posts a link in its review where developers can try out a running instance of MarkUs with the applied diff.
Want a fancy name to kick around the office? Call it pre-commit continuous integration. I just checked – it’s not a common term, but I’m not the first to use it. Again, so much for being cutting edge.
Would this be useful? It’s possible that the Roodi/Flog/Flay stuff would bring too much noise to the review process – that’s something to toy with later. But what about the link to the running instance? Will that little feature help catch more bugs in MarkUs? How about for Basie?
I’m curious to find out.
Unfortunately, ReviewBoard doesn’t let me download diffs through its API just yet… if schoolwork lets up for a few days, I’ll look into changing that.
I’d love to hear your thoughts.
This sounds like a good feedback loop you’ve introduced to your environment. I’m glad to hear of its successful adoption.
We get around the problem of checking broken code into the repo by using “pre-tested commits” in TeamCity.
I’m seeing some parallels between testing branches and the approach you’ve taken: developers can cheaply branch the mainline and put their experimental code into SCM for review. This is especially true with distributed SCM, where we are all encouraged to branch liberally.
This is a very interesting post. I think that DVCS such as Git and Mercurial could address a few of the issues highlighted here, namely:
Check in early, check in often: With a DVCS, you can check in to a personal or topic branch, but you don’t necessarily have to merge the changes into your main trunk (the ‘one source of truth’ from which you actually release software). If developers are able to initiate and perform code reviews from personal/topic branches, then they can avoid ‘polluting’ the trunk with unreviewed code, and also be confident that their changes will not be lost due to an errant edit on their local copy.
Lack of working code: Intelligent use of DVCS should allow the reviewer to look at an actual working copy of the code under review, since they can access the changes in other personal/topic branches, and are not strictly limited to pulling code from the trunk.
I suspect that if usage of DVCS becomes really widespread, we might see ‘pre-commit reviews’ and ‘pre-commit builds’ replaced by DVCS-enabled processes that involve multiple stages of build, test and code review before a change is merged into the primary trunk from which releases are created. An example workflow could be:
Make a change → Commit to personal/topic branch → Compile/test based on branch → Peer code review on personal/topic branch → Commit to trunk for release
The number of steps between committing a change to a personal/topic branch and inclusion of that change into the ‘trunk’ is arbitrary, and allows teams to be as rigorous as they want about maintaining the tidiness of trunk without preventing changes from being committed into some form of version control.
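The staged workflow above maps onto concrete git commands roughly like this – branch, remote, and commit names are hypothetical:

```ruby
# The staged DVCS workflow, expressed as the git commands each stage
# would run. Branch, remote, and commit names are hypothetical.
WORKFLOW = [
  ["Commit to personal/topic branch",
   ["git checkout -b fix-grader-timeout",
    "git commit -am \"Fix the grader timeout\""]],
  ["Compile/test based on branch",
   ["rake test"]],
  ["Peer code review on personal/topic branch",
   ["git push origin fix-grader-timeout"]],
  ["Commit to trunk for release",
   ["git checkout master",
    "git merge --no-ff fix-grader-timeout"]]
]

WORKFLOW.each do |stage, commands|
  puts "== #{stage}"
  commands.each { |cmd| puts "   $ #{cmd}" }
end
```

Teams could insert extra review or build stages between the topic-branch commit and the final merge without changing the shape of the process.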