
Integrating Pedagogical Code Reviews into a CS 1 Course: An Empirical Study

by Christopher Hundhausen, Anukrati Agrawal, Dana Fairbrother, and Michael Trevisan

Here’s another paper talking about using code review in a first year programming course.  Let’s give it a peek.

The idea of this paper is to introduce a type of “studio-based” learning environment for programmers.  The term originates from architecture, where architecture students bring various designs into class, and then the classroom as a unit critiques and evaluates the design.  In the studio-based learning model, the instructor guides the critiques.  This way, not only does the critiqued student get valuable feedback, but the critics also learn about what to critique.  Remember when I talked about learning how to read code? Studio-based learning seems to be one solution to that problem.

Studio-based learning could also help foster community among the students.  In a field where isolated cubicles are still quite normal (I’m sitting in one right now!), a sense of community could be a welcome relief.

The paper also notes the following:

…the peer review process can (a) prepare students to deal with criticism [1], teach students to provide constructive criticism to others [1], provide students with experience with coming to a consensus opinion in cases where opinions differ [1], and build teamwork skills [8]. All of these are “soft” or “people” skills that are likely to be important in their future careers.

I whole-heartedly agree.

The paper then goes on to give a quick survey of how peer review (not just for code, but peer review in general) is currently used in education:

  • pair programming (which is what I experienced at UofT)
  • “in-class conference” model
  • peer feedback meetings (mostly for project documentation)
  • web-based peer grading solutions (RRAS, and Peer Grader)
  • peer code review

Maybe I’m just getting tired of reading these papers, but I found this section particularly difficult to get through.  I had to read it 3 or 4 times just to understand what was going on.

Finally, they get on with it and talk about their approach to peer code review – a student-oriented version of a formal code review.  Yikes – if by “formal code review” they mean a Fagan Inspection, then these poor students are in for some long meetings…

Here is the five-step outline of their approach:

  1. Plan the inspection of a specific piece of code.
  2. Hold a kick-off meeting with an inspection team to distribute the code to be inspected, train the team on the process, and set inspection goals.
  3. Have members of the inspection team inspect the code for defects on their own time.
  4. Hold an inspection meeting to log issues found by individual members’ inspections, and to find additional issues.
  5. Edit the code to address the issues uncovered in the inspection, and verify that the issues have been resolved.

Yep, that sounds a lot like a Fagan Inspection.  As I’ve already found out, there are lighter, faster, and just-as-effective approaches to peer code review…

After applying their approach, the study observed:

  1. A positive trend in the quality of the students’ code, and a negative trend in the number of defects found per review
  2. Discussion shifted from syntax and style to higher-level concepts, such as the architecture and design of the system.  So their technique did generate meaningful discussion.  The study notes, however, that “…we cannot provide any objective evidence that students were able to provide helpful critiques of each other’s code within the code reviews themselves.”
  3. Anecdotal evidence suggesting that this work helped to create more of a community feel among the students in the class

Sounds like a lot of anecdotal evidence.

I don’t know, reading the results of this paper, I was disappointed.  The evidence/data they collected feels light and insubstantial.  I certainly support what the study was trying to do, and maybe I’ve read too many peer review papers, but I don’t find it surprising anymore to hear that peer code review improves students’ code.  It’s good to have evidence for that, but I was hoping for something more.

The paper closes with the writers almost agreeing with me:

There are several limitations to our results that suggest the need for more rigorous follow-up studies with both larger student samples and additional data collection methods.

I knew it wasn’t just me!

Anyhow, the paper finishes with the writers outlining how they’d do future studies.

And that’s that.

Smart Bear, Cisco, and the Largest Study on Code Review Ever

In 2006, Smart Bear Software teamed up with the MeetingPlace development group at Cisco Systems and, over 10 months, produced the “largest-ever case study of its kind” on a “light-weight code review process”.

The results of the study can be found in the free book “Best Kept Secrets of Peer Code Review”.

They can also be found in one of the sample chapters that they’ve put on the site.  You can read the study right here, if you’re interested.

Here are my thoughts on the chapter…

First of all, my guard is up a bit. This all seems a bit like a sales pitch, since the software that Cisco ends up using is Smart Bear’s own Code Collaborator. I’m reading the first paragraph, and already I know how it ends – “everybody is happy, the software is improved dramatically, so you should buy Code Collaborator”. Something like that. I’ll be happy when I see some solid data, some numbers, some graphs…

Ok, I’m in at page 54 – they’re talking about how data was collected, and how they pared it down to get the most meaningful results.  This is good.  This sounds like science, and not a sales pitch.  Nice.

The next thing the study talks about is the rate at which lines of code (LOC) are analyzed – the LOC inspection rate.  The data they’ve collected shows no discernible correlation between the LOC inspection rate and the amount of code to inspect.  There were rare exceptions where a reviewer did seem to show such a correlation, but these reviewers tended to be novices who had not participated in many reviews before.  Analyzing the LOC inspection rates by code author (those who are having their code reviewed) also failed to show any correlation.  In fact, there were several cases where separate reviewers took widely varied amounts of time on the same chunk of code under review.

So this leaves us with no clear answer on what factors play a part in LOC inspection rate.
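
Just to make that concrete, here’s a rough sketch of how you might check for such a correlation yourself, assuming each review were exported as a (LOC under review, minutes spent inspecting) pair – the variable names and sample numbers below are mine, invented for illustration, not data from the study:

    # Hypothetical review data: (lines of code under review, minutes spent inspecting).
    # The numbers are made up for illustration; the study's raw data isn't public.
    import numpy as np

    reviews = [(120, 25), (80, 40), (300, 30), (45, 15), (500, 35), (200, 60)]
    loc = np.array([r[0] for r in reviews], dtype=float)
    minutes = np.array([r[1] for r in reviews], dtype=float)

    inspection_rate = loc / (minutes / 60.0)  # LOC inspected per hour

    # Pearson correlation between review size and inspection rate.
    # A value near zero would match the study's "no discernible correlation" finding.
    r = np.corrcoef(loc, inspection_rate)[0, 1]
    print(f"correlation between review size and LOC/hour: {r:.2f}")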

The study then begins to discuss the effectiveness of the reviews, and whether or not slow reviews reveal more “defects” (where a defect is defined as any change to the code that wouldn’t have happened without the review). Because defect data from the Code Collaborator database was not considered wholly reliable (see pages 62 and 63 if you want to know why), 300 reviews were randomly plucked from the original 2500, and the discussions in each one were analyzed to gather the defect statistics.

The study then introduces the concept of “defect density”: the number of defects detected per 1000 lines of code (referred to henceforth as kLOC).
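
In other words (my own toy numbers, not the study’s):

    def defect_density(defects_found, loc_reviewed):
        """Defects per 1000 lines of code (kLOC), as the study defines it."""
        return defects_found / (loc_reviewed / 1000.0)

    # A hypothetical review that finds 8 defects in 250 lines of code:
    print(defect_density(8, 250))  # 32.0 defects per kLOC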

I’ll skip right to some results:

Our reviews had an average 32 defects per 1000 lines of code.  61% of the reviews uncovered no defects; of the others the defect density ranged evenly between 10 and 130 defects per kLOC.

I’m surprised that 61% of the reviews found no defects.  That’s remarkably high, in my opinion.  True, it’s only a sample of 300 reviews, but still, my instincts were expecting a significantly lower number.

An even more interesting result is that defects found (and therefore, review effectiveness) dropped off for large amounts of code under review.  The study notes that:

Anything below 200 lines produces a relatively high rate of defects, often several times the average.

So there seems to be a sweet spot.  I wonder if this is a factor in what was causing the surprisingly high number of defect-less reviews in that sample of 300.  Perhaps many of those reviews were for large sections of code.  Or perhaps they were reviews that involved only a single line of code.  The study doesn’t go into this.

What the study does go into is a general guideline for limiting the time that code reviews take.  Their study noted a stark drop-off in review effectiveness after about an hour.  Totally understandable – I think an hour reviewing someone else’s code would probably be my limit before I started getting distracted.

The study then goes on to suggest that the “slower is better” approach to reviewing code is the right idea:

Reviewers slower than 400 lines per hour were above average in their ability to uncover defects. But when faster than 450 lines/hour the defect density is below average in 87% of the cases.

So already there are some guidelines: try to review something around 200 LOC, take your time, but don’t go over an hour.  This is useful information.

The study then tests a slight modification to how reviews are performed: before submitting code for review, the author annotates it, describing how the changes are structured, why they were coded the way they were, and so on.  This has a dual benefit: it gives reviewers some clues about how to approach the code (while, hopefully, maintaining the distance they need to do a good job), and it gets the author to go over their own code again and weed out obvious defects.

So, some reviews were carried out in this fashion.  Here’s what they found:

First, for all reviews with at least one author preparation comment, defects density is never over 30; in fact the most common case is for there to be no defects at all! Second, reviews without author preparation comments are all over the map [in terms of defect density] whereas author-prepared reviews do not share that variability.

The study gives two possible conclusions for these results:

  1. Authors gave their code such a thorough look while annotating it that most defects were eliminated right off the bat.  I’m…skeptical of this conclusion.
  2. Since authors were explaining, or defending, their changes, this sabotaged the reviewers’ ability to do their job effectively.

I find myself believing the second conclusion more, simply from experience:  if somebody is guiding me through things, suddenly I’m in the passenger seat, and I’m less inclined to disagree with a change if their explanation or defense sounds solid.

However, Smart Bear disagrees:

A survey of the reviews in question show the author is being conscientious, careful, and helpful, and not misleading the reviewer. Often the reviewer will respond or ask a question or open a conversation on another line of code, demonstrating that he was not dulled by the author’s annotations.

…we believe that requiring preparation will cause anyone to be more careful, rethink their logic, and write better code overall.

I’d like to see their data on this.  In particular, I’d like to see how often reviewers detected defects in lines of code that the author had annotated.  Unfortunately, this data is not provided in the study.

In the last few pages, the study notes that while review size has a detrimental impact on defect density (the number of defects reviewers found per kLOC), there seemed to be a roughly fixed rate of defects found per hour.  While this seems at odds with the original finding that smaller reviews are more effective, they note:

Although the smaller reviews afforded a few especially high rates, 94% of all reviews had a defect rate under 20 defects per hour regardless of review size.

…the take-home point from Figure 22 is that defect rate is constant across all the reviews regardless of external factors.

So, assume that a reviewer finds defects at a steady rate, but that this rate drops off after about an hour.  Given a small section of code, of course the defect density will be high.  And given a large chunk of code, of course the same hour’s worth of defects will be spread more thinly.
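
To spell that arithmetic out, using the roughly 15 defects per hour figure the study ends up recommending (the review sizes are my own picks):

    # Assume the roughly constant rate the study reports (~15 defects found per hour)
    # and a one-hour review session; only the amount of code under review changes.
    DEFECTS_PER_HOUR = 15.0

    for loc in (150, 400, 1000):
        defects = DEFECTS_PER_HOUR * 1.0    # one hour of reviewing
        density = defects / (loc / 1000.0)  # defects per kLOC
        print(f"{loc:>5} LOC reviewed -> {density:.0f} defects per kLOC")

    # 150 LOC  -> 100 defects per kLOC (small review: the density looks great)
    # 1000 LOC ->  15 defects per kLOC (big review: same hour, defects spread thin)

Same reviewer, same hour, same 15 defects – only the denominator changes.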

It’s the steady defect detection rate that bothers me – you would imagine that the detection rate would depend on the quality of the code and also on the experience of the reviewer.  But I guess, according to this study,  it doesn’t.

And so, the study goes into its conclusions.  I can’t really do much better summarizing the first conclusions for you than they did, so I’ll just regurgitate:

  • LOC under review should be under 200, not to exceed 400. Anything larger overwhelms reviewers and defects are not uncovered.
  • Inspection rates less than 300 LOC/hour result in best defect detection. Rates under 500 are still good; expect to miss significant percentage of defects if faster than that.
  • Authors who prepare the review with annotations and explanations have far fewer defects than those that do not.  We presume the cause to be that authors are forced to self-review the code.
  • Total review time should be less than 60 minutes, not to exceed 90. Defect detection rates plummet after that time.
  • Expect defect rates around 15 per hour. Can be higher only with less than 175 LOC under review.
  • Left to their own devices, reviewers’ inspection rate will vary widely, even with similar authors, reviewers, files, and size of the review.

Given these factors, the single best piece of advice we can give is to review between 100 and 300 lines of code at a time and spend 30-60 minutes to review it.  Smaller changes can take less time, but always spend at least 5 minutes, even on a single line of code.
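
If you wanted to bake those guidelines into tooling, a pre-review sanity check might look something like the sketch below.  To be clear, this is a hypothetical helper of my own that just encodes the thresholds quoted above – it isn’t anything Smart Bear actually ships:

    # Hypothetical helper encoding the study's published guidelines.
    # The thresholds come from the conclusions quoted above; the function itself is mine.
    def check_review_plan(loc_under_review, planned_minutes):
        warnings = []
        if loc_under_review > 400:
            warnings.append("More than 400 LOC: reviewers will likely be overwhelmed.")
        elif loc_under_review > 200:
            warnings.append("Over 200 LOC: consider splitting the review.")
        if planned_minutes > 90:
            warnings.append("Over 90 minutes: defect detection plummets after this.")
        elif planned_minutes > 60:
            warnings.append("Over 60 minutes: effectiveness is already dropping off.")
        rate = loc_under_review / (planned_minutes / 60.0)  # LOC per hour
        if rate > 500:
            warnings.append(f"{rate:.0f} LOC/hour: expect to miss a significant share of defects.")
        elif rate > 300:
            warnings.append(f"{rate:.0f} LOC/hour: acceptable, but slower inspection finds more.")
        return warnings

    for warning in check_review_plan(loc_under_review=350, planned_minutes=30):
        print(warning)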

The study then goes into the differences in effectiveness between heavy-duty Fagan-esque code reviews, and the lightweight style of code review that took place at Cisco.  While some of their results from their study exactly match results from studies on heavyweight code reviews (time to spend on a review, when effectiveness drops off), there were some stark differences too.

For example, in the lightweight study, defect detection rate using Smart Bear was 7 times faster than the average rate found across four studies of traditional code review methods.  Sounds like we’re getting into that sales-pitch part…

The study then admits that there was no experimental control – reviews using heavyweight techniques weren’t carried out in parallel with the Code Collaborator study, so comparisons of their effectiveness on the MeetingPlace software have little-to-no data to work with.

In the end, their conclusion is that lightweight code review is just as effective as the traditional methods, while being remarkably faster to boot.  I’d like to see more evidence to back up the comparison on effectiveness, but faster seems more than plausible (Fagan inspections involve very lengthy meetings, so I’ve read).

Smart Bear agrees with my last point on effectiveness comparison, and notes that future study should be conducted where the same set of code is analyzed using both heavy and lightweight methods.

They finish it off with an invitation to software development shops to contact Smart Bear if they’d like to be involved in such a study.

So that’s the gist of it.

Do Peer Code Reviews Seem Incompatible With The Traditional Classroom?

I recently wrote a post asking the question “why aren’t code reviews part of every undergraduate computer science curriculum?”, and Gregg Sporar responded by saying:

I don’t know of many professors who are really “into” code review.

This is so strange.

You would think that a process that provably helps reduce code defects earlier in development would be right up there in intro programming courses, along with “use a debugger instead of sprinkling print statements everywhere”.

So, what gives?

Well, one thought is that perhaps these courses are structured so that peer code review would completely undermine the ability to evaluate students correctly. I’ve been in plenty of classes where I’ve been instructed never to show my code to anyone else. We can vaguely discuss issues on the course bulletin board, but absolutely no code can be discussed directly. For these assignments, teamwork, plagiarism, and cooperative learning are all big no-nos.

At first glance, it’s perfectly understandable:  it seems like the only way to fairly evaluate the abilities of each individual student. But still, that particular teaching model seems to be missing out on a huge opportunity to give its students valuable feedback.

I have to make some corrections, though. I haven’t been completely honest. I’ve made it seem like I’ve never had any code review done in my courses at UofT, and that’s not entirely true. Quick examples: in CSC180, I did pair-programming with another student – and pair-programming is certainly a type of peer code review. In CSC301, my team would rigorously review the diffs of each other’s repository commits, looking for flaws, and emailing back and forth defect reports. We weren’t explicitly instructed to do this, but it certainly seemed like a good idea at the time for the success of the project.

But it’s nothing like this.  Look at this assignment from Carnegie Mellon.  What a great idea for an assignment!  How come this type of assignment seems so rare?

Back when I was an Electrical and Computer Engineering undergrad, one of my professors once told me that “great programmers write great code because they read great code”.  I think this is something that was missing in my formal education: how to read, review, analyze, and critique code. And above all, what beautiful, elegant code looks like!

In my academic Drama classes, we pored over tons of famous, landmark plays.  And of course we tore them apart, boiling them down, analyzing them, critiquing and debating, arguing our pants off…  I got good at reading them.  My critical evaluation skills in this area are pretty sharp.  I can tell when I’m reading something really well done.  And, because of this, given some practice, I could probably write something half-decent if I put my mind to it.

So I think that’s something I’m missing from my computer science education.  Perhaps there’s a good reason for its absence – but if there is, I don’t know it.  Instructor-moderated peer code reviews, or (even better!) code reviews of industry software…that sounds juicy.  That sounds useful.  That sounds like valuable feedback and instruction.

It’s too bad it never happened.

Anyhow, I haven’t answered my question yet.  I still need to finish my review of Smart Bear’s Cisco study.  I’ve also found a case study where code peer reviews were used as part of an introductory computer programming course – I’m interested to see what they found out.

Taking a peek at Jason Cohen, Smart Bear, and Code Collaborator

Check this out.

Hang on, hang on, let me back up.

Jason Cohen.  Is that name familiar?  If it isn’t, this is the guy who founded Smart Bear Software.  Smart Bear Software has a piece of software called Code Collaborator, which is a “web-based tool that simplifies and expedites peer code reviews.”  Here’s the sales pitch.

The reason I’m posting all of this is because of what I’m looking for – papers concerning code review, and the relationship (or lack thereof) between code review techniques and computer science education.

Which all traces back to that first link I posted – a series of whitepapers and articles on Code Collaborator, and code reviews in general.  This is good stuff.  Sure, it’s not really oriented around computer science education, but these people seem to know what they’re talking about.

There’s even a free book, which details their massive code review study, which is, apparently, the largest-ever case study of its kind.

I’ve ordered the free book, just for kicks.  In the meantime, I’m going to glance over their Cisco study.

“Code reviews” by Arjen Markus (2009)

Code Reviews

by Arjen Markus
Deltares, The Netherlands
ACM Fortran Forum, Volume 28, Issue 2, August 2009

This is one of the first papers I found.  Consider it my “warm up” paper.

According to the header, Arjen Markus works for “Deltares”, and after a quick Google-hunt, I found out that Deltares is a “new independent Dutch institute for national and international delta issues”.

Upon closer inspection, it seems that Markus’ paper is concerned with what reviewers should be looking for during code reviews:

“What should you be looking for in the code?  It is not enough to check that the code adheres to the programming standard of the project it belongs to.  Such a standard may not exist, be incomplete or be focussed on layout, not on questionable constructs that are a liability.  With this article I would like to fill in this practical gap, at least partly.” (Page 4)

This isn’t exactly what I set out to look for, but I thought I’d give it a once-over anyways.

Markus’ paper is not what I would call a rigorous scientific publication.  There is no empirical data, no hypothesis, none of that good ol’ scientific method stuff.  Instead, it’s more akin to a “do” and “do not” set of advice and examples that one would find in a software engineering textbook.

A FORTRAN software engineering textbook, to be more precise.  Markus’ examples are all in FORTRAN.

Broken down simply, Markus has four principles, or bits of advice:

“The importance of being explicit”

Essentially, this means being clear about what you’re doing in the code.  It’s common sense stuff:  don’t be overly clever, be readable, don’t use magic numbers or strings, document your code, group related routines into the same modules, use information hiding in your modules when appropriate, write clear and precise error messages, etc.
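
His examples are in FORTRAN, which I won’t reproduce here, but the spirit translates to anything.  Here’s a tiny Python stand-in for the “no magic numbers, be readable” point – the names and numbers are entirely made up:

    # Unclear: what are 0.21 and 6 supposed to mean?
    def calc(a):
        return a * 0.21 if a > 6 else 0.0

    # Explicit: named constants and descriptive names carry the intent.
    VAT_RATE = 0.21              # hypothetical tax rate, purely for the example
    TAX_FREE_THRESHOLD = 6.00    # orders at or below this amount are untaxed

    def sales_tax(order_total):
        """Return the tax owed on an order, or 0.0 if it falls under the threshold."""
        if order_total <= TAX_FREE_THRESHOLD:
            return 0.0
        return order_total * VAT_RATE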

“Don’t go your own way”

Markus advises developers to stick to an agreed coding standard / style guide.  Don’t reinvent the wheel – instead, use typical solutions to problems that arise.  “Don’t go against the grain” (Page 7).

“Be careful out there”

Markus advises developers to watch out for documented language quirks, common language pitfalls, etc.  This is followed by numerous examples in FORTRAN.

“Curiouser and curiouser”

Markus asks to keep an eye out for “a lack of attention to design, to readability and other aspects that are important for the program in the long run” (Page 11).    He also repeats a few things from “the importance of being explicit” – mainly, to make sure that the code is organized in a way that makes sense to the developers.

I don’t know.  I don’t think I’m the target audience here.  In hindsight, I found the information in this paper to be very general, and rather self-evident.  The only thing I seemed to learn was a bit about FORTRAN quirks.

I think I need to be less laissez-faire in my paper selections.  This one didn’t help me find what I was looking for, and I should have seen that from the abstract.  Bah.

EDIT:  Why did I waste my time searching ACM when more interesting information was waiting right under my nose?  I think I have an idea what to review next.