
Code Reviews and Predictive Impact Analysis

A few posts ago, I mentioned what I think of as the Achilles’ Heel of light-weight code review:  the lack of feedback about the dependencies that a posted change can or will impact.  I believe this missing feedback can give software developers the false impression that proposed code is sound, and thus allow bugs to slip through the review process.  This has happened more than once on the MarkUs project, where we are using ReviewBoard.

Wouldn’t it be nice…

Imagine that you’re working on a “Library” Rails project, and that you’re about to make an update to the Book model within the MVC framework:  you’ve added a more efficient class method to the Book model that lets you check out large quantities of Books from the Library all at once, rather than one at a time.  Cool.  You update the BookController to use the new method, run your regression tests (which are admittedly incomplete, and pass with flying colours), and post your code for review.
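To make that concrete, here’s a rough sketch of what such a change might look like.  The Book model, the checkout and checkout_all methods, and the checked_out_by column are all invented for illustration; they aren’t from any real codebase.

```ruby
class Book < ActiveRecord::Base
  # Old approach:  check out one Book at a time, one UPDATE per Book.
  def checkout(patron)
    update_attribute(:checked_out_by, patron.id)
  end

  # New approach:  a class method that checks out many Books with a
  # single UPDATE statement.
  def self.checkout_all(books, patron)
    update_all(["checked_out_by = ?", patron.id],
               ["id IN (?)", books.map(&:id)])
  end
end
```

The BookController change is then just a matter of swapping its checkout loop for a single call to Book.checkout_all.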

Your code review tool takes the change you’re suggesting, and notices a trend:  in the past, when the “checkout” methods in the Book model have been updated, the BookController is usually changed, and a particular area of the en.yml locale file is usually updated too.  The code review tool notices that in this latest change, nothing has been changed in en.yml.
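How might the tool notice that trend?  Here’s a hedged sketch of one way it could work, assuming the tool can shell out to git; the file paths and the 50% threshold are hypothetical.

```ruby
target            = "app/models/book.rb"          # the file under review
changed_in_review = [target, "app/controllers/book_controller.rb"]

# Parse `git log` output into one file list per commit.  Each commit
# appears as a 40-character hash line followed by the files it changed.
commits = []
`git log --pretty=format:%H --name-only`.split("\n").each do |line|
  line = line.strip
  if line =~ /\A[0-9a-f]{40}\z/
    commits << []
  elsif !line.empty? && !commits.empty?
    commits.last << line
  end
end

# Count how often each file changed in the same commit as the target.
commits_touching_target = 0
co_change_counts        = Hash.new(0)
commits.each do |files|
  next unless files.include?(target)
  commits_touching_target += 1
  (files - [target]).each { |f| co_change_counts[f] += 1 }
end

# Nudge the reviewer about files that usually change alongside the
# target, but are missing from this review.
co_change_counts.each do |file, count|
  next if changed_in_review.include?(file)
  if count.to_f / commits_touching_target >= 0.5
    puts "By the way, have you checked #{file}?"
  end
end
```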

The code review tool raises its eyebrow.  “I wonder if they forgot something…”, it ponders.

Now imagine that someone logs in to review the code.  Along with the proposed changes, the code review tool suggests that the reviewer also take a peek at en.yml just in case the submitter has missed something.  The reviewer notices that, yes, a translation string for an error message in en.yml no longer makes sense with the new method.  The reviewer writes a comment about it, and submits the review.

The reviewee looks at the review and goes, “Of course!  How could I forget that?”, and updates en.yml before updating the diff under review.

Hm.  It’s like a recommendation engine for code reviews… “by the way, have you checked…?”

I wonder if this would be useful…

Mining Repositories for Predictive Impact Analysis

This area of research is really new to me, so bear with me as I stumble through it.

It seems like it should be possible to predict which methods and files depend on which other files, based on static analysis as well as on VCS repository mining.  I believe this has been tried in various forms.
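From what I’ve read so far, a common framing for the mining side is to treat each past commit as a transaction and score co-change rules by their support and confidence.  A toy illustration, where the file names and every number are invented:

```ruby
# Scoring one co-change rule:  "book.rb changed => en.yml changed".
commits_changing_book_model  = 40   # commits touching app/models/book.rb
commits_also_changing_en_yml = 28   # ...of those, how many also touched en.yml
total_commits                = 500

# Support:  how often the pair changes together, over all commits.
support = commits_also_changing_en_yml.to_f / total_commits

# Confidence:  given that book.rb changed, how often did en.yml change too?
confidence = commits_also_changing_en_yml.to_f / commits_changing_book_model

puts "support = %.2f, confidence = %.2f" % [support, confidence]
# => support = 0.06, confidence = 0.70
```

A tool like the one I’m imagining might only raise its eyebrow when a rule’s confidence clears some threshold, to keep the noise down.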

But I don’t think anything like this has been integrated into a code review tool.  Certainly not into any of the ones that I listed earlier.

I wonder if such a tool would be accurate…  and, again, would it be useful?  Could it help catch more of the bugs that the standard light-weight code review process misses?

Thoughts?